DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Hua-Sheng
2013-09-15
A unified, fast, and effective approach is developed for numerical calculation of the well-known plasma dispersion function with extensions from the Maxwellian distribution to almost arbitrary distribution functions, such as the δ, flat-top, triangular, κ or Lorentzian, slowing-down, and incomplete Maxwellian distributions. The singularity and analytic continuation problems are also solved generally. Given that the usual conclusion γ ∝ ∂f₀/∂v is only a rough approximation when discussing the distribution function effects on Landau damping, this approach provides a useful tool for rigorous calculations of the linear wave and instability properties of plasma for general distribution functions. The results are also verified via a linear initial value simulation approach. Intuitive visualizations of the generalized plasma dispersion function are also provided.
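For the Maxwellian baseline, the generalized function reduces to the classical plasma dispersion function Z, which is readily evaluated through the Faddeeva function; a minimal sketch of that special case follows (the trial root below is illustrative, not a value from the paper):

```python
# Maxwellian baseline of the generalized plasma dispersion function:
# Z(zeta) = i*sqrt(pi)*w(zeta), where w is the Faddeeva function, which
# builds in the analytic continuation across the real axis automatically.
import numpy as np
from scipy.special import wofz

def Z(zeta):
    """Plasma dispersion function for a Maxwellian distribution."""
    return 1j * np.sqrt(np.pi) * wofz(zeta)

def Zprime(zeta):
    """dZ/dzeta, as it enters the kinetic dispersion relation for Langmuir waves."""
    return -2.0 * (1.0 + zeta * Z(zeta))

zeta = 2.63 - 0.02j   # illustrative trial phase velocity for a weakly damped mode
print(Z(zeta), Zprime(zeta))
```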
A distributed planning concept for Space Station payload operations
NASA Technical Reports Server (NTRS)
Hagopian, Jeff; Maxwell, Theresa; Reed, Tracey
1994-01-01
The complex and diverse nature of the payload operations to be performed on the Space Station requires a robust and flexible planning approach. The planning approach for Space Station payload operations must support the phased development of the Space Station, as well as the geographically distributed users of the Space Station. To date, the planning approach for manned operations in space has been one of centralized planning to the n-th degree of detail. This approach, while valid for short duration flights, incurs high operations costs and is not conducive to long duration Space Station operations. The Space Station payload operations planning concept must reduce operations costs, accommodate phased station development, support distributed users, and provide flexibility. One way to meet these objectives is to distribute the planning functions across a hierarchy of payload planning organizations based on their particular needs and expertise. This paper presents a planning concept which satisfies all phases of the development of the Space Station (manned Shuttle flights, unmanned Station operations, and permanent manned operations), and the migration from centralized to distributed planning functions. Identified in this paper are the payload planning functions which can be distributed and the process by which these functions are performed.
NASA Astrophysics Data System (ADS)
Khajehei, S.; Madadgar, S.; Moradkhani, H.
2014-12-01
The reliability and accuracy of hydrological predictions are subject to various sources of uncertainty, including meteorological forcing, initial conditions, model parameters, and model structure. To reduce the total uncertainty in hydrological applications, one approach is to reduce the uncertainty in meteorological forcing by using statistical methods based on conditional probability density functions (pdfs). However, one of the requirements for current methods is to assume a Gaussian distribution for the marginal distributions of the observed and modeled meteorology. Here we propose a Bayesian approach based on copula functions to develop the conditional distribution of the precipitation forecast needed in driving a hydrologic model for a sub-basin in the Columbia River Basin. Copula functions are introduced as an alternative approach for capturing the uncertainties related to meteorological forcing. Copulas are multivariate joint distributions of univariate marginals, capable of modeling the joint behavior of variables with any level of correlation and dependency. The method is applied to the monthly CPC forecast at 0.25 x 0.25 degree resolution to reproduce the PRISM dataset over 1970-2000. Results are compared with the Ensemble Pre-Processor approach, a common procedure used by National Weather Service River Forecast Centers to reproduce observed climatology, during a ten-year verification period (2000-2010).
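A minimal Gaussian-copula stand-in for the conditioning step (the paper's copula family, marginals, and estimation details are not specified here, and the data below are synthetic):

```python
# Sketch: condition observed precipitation on a forecast value through a
# Gaussian copula fitted to normal scores of the two margins.
import numpy as np
from scipy.stats import norm, rankdata

def normal_scores(x):
    """Map samples to standard-normal scores via their empirical CDF."""
    u = rankdata(x) / (len(x) + 1.0)   # uniform pseudo-observations
    return norm.ppf(u)

def conditional_quantiles(fcst, obs, x_new, probs=(0.1, 0.5, 0.9)):
    zf, zo = normal_scores(fcst), normal_scores(obs)
    rho = np.corrcoef(zf, zo)[0, 1]                 # copula dependence parameter
    u_new = (np.sum(fcst <= x_new) + 1.0) / (len(fcst) + 2.0)
    z_new = norm.ppf(u_new)
    # conditional law in score space: z_o | z_f ~ N(rho*z_f, 1 - rho^2)
    zq = rho * z_new + np.sqrt(1 - rho**2) * norm.ppf(probs)
    # map back through the observed empirical quantile function
    return np.quantile(obs, norm.cdf(zq))

rng = np.random.default_rng(1)
f = rng.gamma(2.0, 30.0, 360)                       # synthetic forecasts (mm)
o = 0.8 * f + rng.normal(0, 15, 360)                # synthetic observations
print(conditional_quantiles(f, o, x_new=70.0))      # conditional 10/50/90% quantiles
```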
New approach in the quantum statistical parton distribution
NASA Astrophysics Data System (ADS)
Sohaily, Sozha; Vaziri (Khamedi), Mohammad
2017-12-01
An attempt to find simple parton distribution functions (PDFs) based on a quantum statistical approach is presented. The PDFs described by the statistical model have very interesting physical properties which help in understanding the structure of partons. The longitudinal part of the distribution functions is obtained by applying the maximum entropy principle. An interesting and simple approach to determine the statistical variables exactly, without fitting or fixing parameters, is surveyed. Analytic expressions for the x-dependent PDFs are obtained in the whole x region [0, 1], and the computed distributions are consistent with experimental observations. The agreement with experimental data provides a robust confirmation of the presented statistical model.
A common distributed language approach to software integration
NASA Technical Reports Server (NTRS)
Antonelli, Charles J.; Volz, Richard A.; Mudge, Trevor N.
1989-01-01
An important objective in software integration is the development of techniques to allow programs written in different languages to function together. Several approaches are discussed toward achieving this objective and the Common Distributed Language Approach is presented as the approach of choice.
Covariant extension of the GPD overlap representation at low Fock states
Chouika, N.; Mezrag, C.; Moutarde, H.; ...
2017-12-26
Here, we present a novel approach to compute generalized parton distributions within the lightfront wave function overlap framework. We show how to systematically extend generalized parton distributions computed within the DGLAP region to the ERBL one, fulfilling at the same time both the polynomiality and positivity conditions. We exemplify our method using pion lightfront wave functions inspired by recent results of non-perturbative continuum techniques and algebraic nucleon lightfront wave functions. We also test the robustness of our algorithm on reggeized phenomenological parameterizations. This approach paves the way to a better understanding of the nucleon structure from non-perturbative techniques and to a unification of generalized parton distributions and transverse momentum dependent parton distribution functions phenomenology through lightfront wave functions.
Yang, Yanzheng; Zhu, Qiuan; Peng, Changhui; Wang, Han; Xue, Wei; Lin, Guanghui; Wen, Zhongming; Chang, Jie; Wang, Meng; Liu, Guobin; Li, Shiqing
2016-01-01
Increasing evidence indicates that current dynamic global vegetation models (DGVMs) have suffered from insufficient realism and are difficult to improve, particularly because they are built on plant functional type (PFT) schemes. Therefore, new approaches, such as plant trait-based methods, are urgently needed to replace PFT schemes when predicting the distribution of vegetation and investigating vegetation sensitivity. As an important direction towards constructing next-generation DGVMs based on plant functional traits, we propose a novel approach for modelling vegetation distributions and analysing vegetation sensitivity through trait-climate relationships in China. The results demonstrated that a Gaussian mixture model (GMM) trained with a LMA-Nmass-LAI data combination yielded an accuracy of 72.82% in simulating vegetation distribution, providing more detailed parameter information regarding community structures and ecosystem functions. The new approach also performed well in analyses of vegetation sensitivity to different climatic scenarios. Although the trait-climate relationship is not the only candidate useful for predicting vegetation distributions and analysing climatic sensitivity, it sheds new light on the development of next-generation trait-based DGVMs.
dftools: Distribution function fitting
NASA Astrophysics Data System (ADS)
Obreschkow, Danail
2018-05-01
dftools, written in R, finds the most likely P parameters of a D-dimensional distribution function (DF) generating N objects, where each object is specified by D observables with measurement uncertainties. For instance, if the objects are galaxies, it can fit a mass function (D=1), a mass-size distribution (D=2) or the mass-spin-morphology distribution (D=3). Unlike most common fitting approaches, this method accurately accounts for measurement uncertainties and complex selection functions.
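The underlying principle for D = 1 can be sketched in a few lines (a Python stand-in for the idea, not the dftools R API; a Gaussian DF and synthetic per-object uncertainties are assumed):

```python
# Maximum-likelihood fit of a 1-D distribution function when each object
# carries its own measurement uncertainty: the per-object variance is the
# intrinsic variance plus the measurement variance.
import numpy as np
from scipy.optimize import minimize

def neg_log_like(params, x, sigma):
    mu, log_s = params
    var = np.exp(2 * log_s) + sigma**2        # intrinsic + measurement variance
    return 0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu)**2 / var)

rng = np.random.default_rng(0)
true = rng.normal(10.0, 0.5, 200)             # e.g. log-masses of galaxies
sigma = rng.uniform(0.1, 0.4, 200)            # per-object uncertainties
x = true + rng.normal(0, sigma)               # noisy observables

fit = minimize(neg_log_like, x0=[9.0, 0.0], args=(x, sigma))
mu_hat, s_hat = fit.x[0], np.exp(fit.x[1])
print(mu_hat, s_hat)   # recovers ~(10.0, 0.5), not the noise-inflated scatter
```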
A non-stationary cost-benefit based bivariate extreme flood estimation approach
NASA Astrophysics Data System (ADS)
Qi, Wei; Liu, Junguo
2018-02-01
Cost-benefit analysis and flood frequency analysis have been integrated into a comprehensive framework to estimate cost-effective design values. However, previous cost-benefit based extreme flood estimation rests on stationary assumptions and analyzes dependent flood variables separately. A Non-Stationary Cost-Benefit based bivariate design flood estimation (NSCOBE) approach is developed in this study to investigate the influence of non-stationarities in both the dependence of flood variables and the marginal distributions on extreme flood estimation. The dependence is modeled utilizing copula functions. Previous design flood selection criteria are not suitable for NSCOBE since they ignore the time-varying dependence of flood variables. Therefore, a risk calculation approach is proposed based on non-stationarities in both marginal probability distributions and copula functions. A case study with 54 years of observed data is utilized to illustrate the application of NSCOBE. Results show that NSCOBE can effectively integrate non-stationarities in both copula functions and marginal distributions into cost-benefit based design flood estimation. It is also found that there is a trade-off between the maximum probabilities of exceedance calculated from copula functions and from marginal distributions. This study for the first time provides a new approach towards a better understanding of the influence of non-stationarities in both copula functions and marginal distributions on extreme flood estimation, and could be beneficial to cost-benefit based non-stationary bivariate design flood estimation across the world.
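A hedged sketch of the bivariate building block (the Gumbel family below is a common choice for flood variables, not necessarily the paper's; non-stationarity would enter by letting θ and the marginal parameters vary with time):

```python
# Joint exceedance probability of two flood variables under a Gumbel copula,
# obtained by inclusion-exclusion on the copula CDF.
import numpy as np

def gumbel_copula(u, v, theta):
    """Gumbel copula CDF C(u, v), theta >= 1 (theta = 1 is independence)."""
    return np.exp(-((-np.log(u))**theta + (-np.log(v))**theta)**(1.0 / theta))

def joint_exceedance(u, v, theta):
    """P(U > u, V > v) = 1 - u - v + C(u, v)."""
    return 1.0 - u - v + gumbel_copula(u, v, theta)

u, v = 0.99, 0.99                 # e.g. marginal 100-year levels of peak and volume
for theta in (1.0, 2.0, 5.0):     # independence -> strong upper-tail dependence
    print(theta, joint_exceedance(u, v, theta))
```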
NASA Technical Reports Server (NTRS)
Gurgiolo, Chris; Vinas, Adolfo F.
2009-01-01
This paper presents a spherical harmonic analysis of the plasma velocity distribution function using high-angular, energy, and time resolution Cluster data obtained from the PEACE spectrometer instrument to demonstrate how this analysis models the particle distribution function and its moments and anisotropies. The results show that spherical harmonic analysis produced a robust physical representation model of the velocity distribution function, resolving the main features of the measured distributions. From the spherical harmonic analysis, a minimum set of nine spectral coefficients was obtained from which the moment (up to the heat flux), anisotropy, and asymmetry calculations of the velocity distribution function were obtained. The spherical harmonic method provides a potentially effective "compression" technique that can be easily carried out onboard a spacecraft to determine the moments and anisotropies of the particle velocity distribution function for any species. These calculations were implemented using three different approaches, namely, the standard traditional integration, the spherical harmonic (SPH) spectral coefficients integration, and the singular value decomposition (SVD) on the spherical harmonic methods. A comparison among the various methods shows that both SPH and SVD approaches provide remarkable agreement with the standard moment integration method.
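As a rough illustration of the expansion step (the measured PEACE distributions and the paper's specific nine-coefficient set are not reproduced here), the low-order harmonic coefficients of an angular slice of a distribution can be computed by direct quadrature:

```python
# Expand an angular slice of f(theta, phi) in spherical harmonics; the
# l <= 2 coefficients carry density, drift, and anisotropy information.
import numpy as np
from scipy.special import sph_harm

nth, nph = 64, 128
phi = (np.arange(nph) + 0.5) * 2 * np.pi / nph        # azimuth
theta = (np.arange(nth) + 0.5) * np.pi / nth          # polar angle
PH, TH = np.meshgrid(phi, theta)
dOmega = np.sin(TH) * (np.pi / nth) * (2 * np.pi / nph)

# toy distribution slice: a drifting, mildly asymmetric "beam" (illustrative)
f = np.exp(2.0 * np.cos(TH)) * (1 + 0.3 * np.sin(TH) * np.cos(PH))

coef = {}
for l in range(3):
    for m in range(-l, l + 1):
        Y = sph_harm(m, l, PH, TH)    # scipy convention: (m, l, azimuth, polar)
        coef[(l, m)] = np.sum(f * np.conj(Y) * dOmega)
print(coef[(0, 0)], coef[(1, 0)])     # monopole (density) and drift terms
```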
Single-diffractive production of dijets within the kt-factorization approach
NASA Astrophysics Data System (ADS)
Łuszczak, Marta; Maciuła, Rafał; Szczurek, Antoni; Babiarz, Izabela
2017-09-01
We discuss single-diffractive production of dijets. The cross section is calculated within the resolved-Pomeron picture, for the first time in the kt-factorization approach, neglecting the transverse momentum of the Pomeron. We use Kimber-Martin-Ryskin unintegrated parton (gluon, quark, antiquark) distributions in both the proton and in the Pomeron or subleading Reggeon. The unintegrated parton distributions are calculated based on conventional mmht2014nlo parton distribution functions in the proton and on H1 Collaboration diffractive parton distribution functions used previously in the analysis of the diffractive structure function and dijets at HERA. For comparison, we present results of calculations performed within the collinear-factorization approach. Our results resemble those obtained in the next-to-leading-order approach. The calculation is (must be) supplemented by the so-called gap survival factor, which may, in general, depend on kinematical variables. We try to describe the existing data from the Tevatron and make detailed predictions for possible LHC measurements. Several differential distributions are calculated. The Ē_T, η̄, and x_p̄ distributions are compared with the Tevatron data. A reasonable agreement is obtained for the first two distributions. The last one requires introducing a gap survival factor that depends on kinematical variables. We discuss how this phenomenological dependence on one kinematical variable may influence the dependence on other variables such as Ē_T and η̄. Several distributions for the LHC are shown.
Phase pupil functions for focal-depth enhancement derived from a Wigner distribution function.
Zalvidea, D; Sicre, E E
1998-06-10
A method for obtaining phase-retardation functions, which give rise to an increase of the image focal depth, is proposed. To this end, the Wigner distribution function corresponding to a specific aperture that has an associated small depth of focus in image space is conveniently sheared in the phase-space domain to generate a new Wigner distribution function. From this new function a more uniform on-axis image irradiance can be accomplished. This approach is illustrated by comparison of the imaging performance of both the derived phase function and a previously reported logarithmic phase distribution.
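A numerical sketch of the underlying object, a discrete 1-D Wigner distribution of a pupil function (grid sizes and the clear-pupil shape below are illustrative, not the paper's aperture):

```python
# Discrete Wigner distribution W(x, u) of a 1-D complex pupil: correlate
# f(x + s/2) with f*(x - s/2) and Fourier transform over s. Shearing this
# phase-space picture is what redistributes the on-axis irradiance with focus.
import numpy as np

def wigner(f, dx):
    N = len(f)
    fpad = np.concatenate([np.zeros(N), f, np.zeros(N)])   # avoid wrap-around
    W = np.zeros((N, 2 * N))
    k = np.arange(-N, N)
    for n in range(N):
        corr = fpad[N + n + k] * np.conj(fpad[N + n - k])  # f(x+s/2) f*(x-s/2)
        W[n] = np.real(np.fft.fftshift(np.fft.fft(np.fft.ifftshift(corr)))) * 2 * dx
    return W

N, dx = 256, 1.0 / 256
x = (np.arange(N) - N / 2) * dx
pupil = np.where(np.abs(x) <= 0.4, 1.0, 0.0).astype(complex)  # clear aperture
W = wigner(pupil, dx)
print(W.shape)   # rows: position x, columns: spatial frequency u
```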
Derivation of Hunt equation for suspension distribution using Shannon entropy theory
NASA Astrophysics Data System (ADS)
Kundu, Snehasis
2017-12-01
In this study, the Hunt equation for computing suspension concentration in sediment-laden flows is derived using Shannon entropy theory. Considering the inverse of the void ratio as a random variable and using the principle of maximum entropy, the probability density function and cumulative distribution function of suspension concentration are derived. A new and more general cumulative distribution function for the flow domain is proposed, which includes several other specific CDF models reported in the literature. This general form of the cumulative distribution function also allows the Rouse equation to be derived. The entropy-based approach helps to estimate model parameters using measured sediment concentration data, which shows the advantage of using entropy theory. Finally, the model parameters in the entropy-based model are also expressed as functions of the Rouse number to establish a link between the parameters of the deterministic and probabilistic approaches.
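The generic single-constraint maximum-entropy step that derivations of this type rely on (the paper's actual constraints may differ in detail) is:

```latex
% Maximize Shannon entropy subject to normalization and a mean constraint:
\max_{f}\; H[f] = -\int f(c)\,\ln f(c)\,dc
\quad\text{s.t.}\quad \int f(c)\,dc = 1,\qquad \int c\,f(c)\,dc = \bar{c},
\qquad\Longrightarrow\qquad f(c) = e^{-\lambda_{0}-\lambda_{1} c},
```

after which the concentration profile follows by integrating this density into a CDF and inverting it over the flow depth.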
NASA Astrophysics Data System (ADS)
Jahn, J. M.; Denton, R. E.; Nose, M.; Bonnell, J. W.; Kurth, W. S.; Livadiotis, G.; Larsen, B.; Goldstein, J.
2016-12-01
Determining the total plasma density from ion data is essentially an impossible task for particle instruments. The lowest instrument energy threshold never includes the coldest particles (i.e., E_min > 0 eV), and any positive spacecraft charging, which is normal for a sunlit spacecraft, exacerbates the problem by shifting the detectable minimum energy to higher values. Traditionally, field line resonance measurements of ULF waves in the magnetosphere have been used to determine the mass loading of magnetic field lines. This approach delivers a reduced ion mass M that represents the mass ratio of all ions on a magnetic field line. For multi-species plasmas like the plasmasphere this bounds the problem, but it does not provide a unique solution. To at least estimate partial densities using particle instruments, one traditionally performs fits to the measured particle distribution functions under the assumption that the underlying particle distributions are Maxwellian. Uncertainties in performing a fit aside, there is usually no possibility to detect a possible bi-Maxwellian distribution where one of the Maxwellians is very cold: the tail of such a distribution may fall completely below the low-energy threshold of the measurement. In this paper we present a different approach to determining the fractional temperatures Ti and densities ni in a multi-species plasma. First, we describe and demonstrate an approach to determine Ti and ni that does not require fitting but relies more on the mathematical properties of the distribution functions. We apply our approach to Van Allen Probes measurements of the plasmaspheric H+, He+, and O+ distribution functions under the assumption that the particle distributions are Maxwellian. We compare our results to mass loading results from the Van Allen Probes field line resonance analyses (for composition) and to the total (electron) plasma density derived from the EFW and EMFISIS experiments. We then expand our approach to allow for kappa distributions instead. While this introduces an additional degree of freedom and therefore requires fitting, our approach is still better constrained than traditional Maxwellian fitting and may hold the key to a better understanding of the true nature of plasmaspheric particle distributions.
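One standard trick in this spirit, shown purely as an illustration and not as the authors' algorithm: for a Maxwellian, ln f is linear in energy, so T and n follow from a straight-line relation rather than a nonlinear optimization (toy values throughout):

```python
# For f(E) = n * (pi*T)^(-3/2) * exp(-E/T) (energies and T in eV, arbitrary
# velocity-space normalization), ln f is linear in E: the slope gives -1/T
# and the intercept gives the density.
import numpy as np

E = np.linspace(1.0, 20.0, 40)     # energy channels above the S/C potential, eV
n_true, T_true = 10.0, 2.0         # toy density (cm^-3) and temperature (eV)
f = n_true * (np.pi * T_true)**-1.5 * np.exp(-E / T_true)

slope, intercept = np.polyfit(E, np.log(f), 1)
T_est = -1.0 / slope
n_est = np.exp(intercept) * (np.pi * T_est)**1.5
print(T_est, n_est)                # recovers (2.0, 10.0)
```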
Bayesian hierarchical functional data analysis via contaminated informative priors.
Scarpa, Bruno; Dunson, David B
2009-09-01
A variety of flexible approaches have been proposed for functional data analysis, allowing both the mean curve and the distribution about the mean to be unknown. Such methods are most useful when there is limited prior information. Motivated by applications to modeling of temperature curves in the menstrual cycle, this article proposes a flexible approach for incorporating prior information in semiparametric Bayesian analyses of hierarchical functional data. The proposed approach is based on specifying the distribution of functions as a mixture of a parametric hierarchical model and a nonparametric contamination. The parametric component is chosen based on prior knowledge, while the contamination is characterized as a functional Dirichlet process. In the motivating application, the contamination component allows unanticipated curve shapes in unhealthy menstrual cycles. Methods are developed for posterior computation, and the approach is applied to data from a European fecundability study.
NASA Astrophysics Data System (ADS)
Danesh Yazdi, M.; Klaus, J.; Condon, L. E.; Maxwell, R. M.
2017-12-01
Recent advancements in analytical solutions to quantify water and solute time-variant travel time distributions (TTDs) and the related StorAge Selection (SAS) functions synthesize catchment complexity into a simplified, lumped representation. While these analytical approaches are easy and efficient in application, they require high-frequency hydrochemical data for parameter estimation. Alternatively, integrated hydrologic models coupled to Lagrangian particle-tracking approaches can directly simulate age under different catchment geometries and complexity at a greater computational expense. Here, we compare and contrast the two approaches by exploring the influence of the spatial distribution of subsurface heterogeneity, interactions between distinct flow domains, diversity of flow pathways, and recharge rate on the shape of TTDs and the related SAS functions. To this end, we use a parallel three-dimensional variably saturated groundwater model, ParFlow, to solve for the velocity fields in the subsurface. A particle-tracking model, SLIM, is then implemented to determine the age distributions at every real time and domain location, facilitating a direct characterization of the SAS functions as opposed to analytical approaches requiring calibration of such functions. Steady-state results reveal that the assumption of a random age sampling scheme might only hold in the saturated region of homogeneous catchments, resulting in an exponential TTD. This assumption is however violated when the vadose zone is included, as the underlying SAS function gives a higher preference to older ages. The dynamical variability of the true SAS functions is also shown to be largely masked by the smooth analytical SAS functions. As the variability of subsurface spatial heterogeneity increases, the shape of the TTD approaches a power-law distribution function, including a broader distribution of shorter and longer travel times. We further found that a larger (smaller) magnitude of effective precipitation shifts the scale of the TTD towards younger (older) travel times, while the shape of the TTD remains unchanged. This work constitutes a first step in linking a numerical transport model and analytical solutions of TTDs to study their assumptions and limitations, providing physical inferences for empirical parameters.
NASA Astrophysics Data System (ADS)
Iskandar, I.
2018-03-01
The exponential distribution is the most widely used distribution in reliability analysis. It is well suited to representing the lifetimes of many systems and has a simple statistical form; its characteristic property is a constant hazard rate. The exponential distribution is the special case of the Weibull distribution with shape parameter equal to one. In this paper our aim is to introduce the basic notions that constitute an exponential competing risks model in reliability analysis using a Bayesian approach and to present the corresponding analytic methods. The cases are limited to models with independent causes of failure, and a non-informative prior distribution is used in our analysis. We describe the likelihood function, followed by the posterior function and the point, interval, hazard function, and reliability estimates. The net probability of failure if only one specific risk is present, the crude probability of failure due to a specific risk in the presence of other causes, and partial crude probabilities are also included.
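A hedged sketch of the standard conjugate result for this setup (failure counts, test time, and horizon below are invented): with a noninformative prior, each independent cause-specific rate has a Gamma posterior, from which the net and crude probabilities follow:

```python
# Exponential competing risks: cause j has rate lambda_j, posterior
# lambda_j | data ~ Gamma(d_j, T) with d_j failures from cause j and
# total time on test T (noninformative prior assumed).
import numpy as np
from scipy.stats import gamma

d = np.array([12, 5])      # failures attributed to causes 1 and 2
T = 480.0                  # total accumulated test time
t = 50.0                   # horizon for the probabilities below

lam = gamma(a=d, scale=1.0 / T)            # posterior of (lambda_1, lambda_2)
lam_mean = lam.mean()
lam_tot = lam_mean.sum()

net = 1 - np.exp(-lam_mean * t)                            # each risk acting alone
crude = (lam_mean / lam_tot) * (1 - np.exp(-lam_tot * t))  # in the presence of others
print(net, crude, np.exp(-lam_tot * t))    # last value: reliability at t
```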
Kang; Ih; Kim; Kim
2000-03-01
In this study, a new prediction method is suggested for the sound transmission loss (STL) of multilayered panels of infinite extent. Conventional methods, such as the random- or field-incidence approaches, often give significant discrepancies in predicting the STL of multilayered panels when compared with experiments. In this paper, appropriate directional distributions of incident energy for predicting the STL of multilayered panels are proposed. In order to find a weighting function to represent the directional distribution of incident energy on the wall of a reverberation chamber, numerical simulations using a ray-tracing technique are carried out. The simulation results reveal that the directional distribution can be approximately expressed by a Gaussian distribution function in terms of the angle of incidence. The Gaussian function is applied to predict the STL of various multilayered panel configurations as well as single panels. Comparisons between measurement and prediction show good agreement, validating the proposed Gaussian function approach.
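A sketch of the averaging step for a single mass-law panel (the Gaussian width used below is an assumed placeholder, not the paper's fitted value):

```python
# Average the oblique-incidence transmission coefficient tau(theta) over
# incidence angle with different directional weightings w(theta).
import numpy as np
from scipy.integrate import quad

rho_c = 415.0          # characteristic impedance of air, rayl
m = 10.0               # panel surface mass, kg/m^2
f = 1000.0             # frequency, Hz

def tau(theta):
    z = (np.pi * f * m / rho_c) * np.cos(theta)
    return 1.0 / (1.0 + z**2)              # mass-law transmission coefficient

def stl(weight, th_max=np.pi / 2):
    num = quad(lambda t: tau(t) * weight(t) * np.sin(t) * np.cos(t), 0, th_max)[0]
    den = quad(lambda t: weight(t) * np.sin(t) * np.cos(t), 0, th_max)[0]
    return -10 * np.log10(num / den)

print(stl(lambda t: 1.0))                       # random incidence
print(stl(lambda t: 1.0, np.deg2rad(78)))       # field incidence (78 deg cutoff)
print(stl(lambda t: np.exp(-(t / 0.5)**2)))     # Gaussian weighting (assumed width)
```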
Peculiarities of the momentum distribution functions of strongly correlated charged fermions
NASA Astrophysics Data System (ADS)
Larkin, A. S.; Filinov, V. S.; Fortov, V. E.
2018-01-01
A new numerical version of the Wigner approach to the quantum thermodynamics of strongly coupled systems of particles has been developed for extreme conditions, where analytical approximations based on different kinds of perturbation theory cannot be applied. An explicit analytical expression for the Wigner function has been obtained in linear and harmonic approximations. Fermi statistical effects are accounted for by an effective pair pseudopotential depending on the coordinates, momenta, and degeneracy parameter of the particles, taking into account Pauli blocking of fermions. A new quantum Monte Carlo method for calculating average values of arbitrary quantum operators has been developed. Calculations of the momentum distribution functions and the pair correlation functions of the degenerate ideal Fermi gas have been carried out to test the developed approach. Comparison of the obtained momentum distribution functions of strongly correlated Coulomb systems with the Maxwell-Boltzmann and Fermi distributions shows the significant influence of interparticle interaction both at small momenta and in the high-energy quantum 'tails'.
NASA Astrophysics Data System (ADS)
Cheng, Qin-Bo; Chen, Xi; Xu, Chong-Yu; Reinhardt-Imjela, Christian; Schulte, Achim
2014-11-01
In this study, the likelihood functions for uncertainty analysis of hydrological models are compared and improved through the following steps: (1) the equivalence between the Nash-Sutcliffe efficiency coefficient (NSE) and the likelihood function with Gaussian independent and identically distributed residuals is proved; (2) a new estimation method for the Box-Cox transformation (BC) parameter is developed to improve the elimination of the heteroscedasticity of model residuals; and (3) three likelihood functions (NSE, Generalized Error Distribution with BC (BC-GED), and Skew Generalized Error Distribution with BC (BC-SGED)) are applied for SWAT-WB-VSA (Soil and Water Assessment Tool - Water Balance - Variable Source Area) model calibration in the Baocun watershed, Eastern China. The performances of the calibrated models are compared using observed river discharges and groundwater levels. The results show that the minimum-variance constraint can effectively estimate the BC parameter. The form of the likelihood function significantly impacts the calibrated parameters and the simulated results of the high- and low-flow components. SWAT-WB-VSA with the NSE approach simulates floods well but baseflow poorly, owing to the assumption of a Gaussian error distribution, which assigns low probability to large errors while treating small errors around zero as nearly equiprobable. By contrast, SWAT-WB-VSA with the BC-GED or BC-SGED approach mimics baseflow well, as confirmed by the groundwater level simulation. The assumption of skewness in the error distribution may be unnecessary, because all the results of the BC-SGED approach are nearly the same as those of the BC-GED approach.
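Step (1) can be checked numerically: with Gaussian i.i.d. residuals and the variance profiled out at its MLE, the log-likelihood is a monotone function of the sum of squared errors, so it ranks parameter sets exactly as NSE does (a sketch with synthetic data; the Box-Cox parameter value is arbitrary here):

```python
import numpy as np

def nse(obs, sim):
    return 1 - np.sum((obs - sim)**2) / np.sum((obs - obs.mean())**2)

def gauss_iid_loglik(obs, sim):
    n, sse = len(obs), np.sum((obs - sim)**2)
    return -0.5 * n * (np.log(2 * np.pi * sse / n) + 1)   # variance profiled out

def box_cox(q, lam=0.3):
    """Transform applied to flows in the BC-GED/BC-SGED variants."""
    return (q**lam - 1) / lam if lam != 0 else np.log(q)

rng = np.random.default_rng(3)
obs = rng.gamma(2, 5, 100)
for noise in (1.0, 2.0):
    sim = obs + rng.normal(0, noise, 100)
    print(nse(obs, sim), gauss_iid_loglik(obs, sim))  # both rank the models identically
print(nse(box_cox(obs), box_cox(sim)))                # NSE on BC-transformed flows
```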
Use of the Weibull function to predict future diameter distributions from current plot data
Quang V. Cao
2012-01-01
The Weibull function has been widely used to characterize diameter distributions in forest stands. The future diameter distribution of a forest stand can be predicted by use of a Weibull probability density function from current inventory data for that stand. The parameter recovery approach has been used to "recover" the Weibull parameters from diameter moments or...
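A minimal two-parameter moment-recovery sketch (the projected stand moments below are invented; operational systems often use a three-parameter Weibull with a location term):

```python
# Recover Weibull (shape k, scale b) from a projected mean and variance:
# the coefficient of variation pins down k, then the mean pins down b.
import numpy as np
from scipy.special import gamma as G
from scipy.optimize import brentq

def recover_weibull(mean_d, var_d):
    cv2 = var_d / mean_d**2
    # shape k solves Gamma(1 + 2/k) / Gamma(1 + 1/k)^2 = 1 + CV^2
    k = brentq(lambda k: G(1 + 2 / k) / G(1 + 1 / k)**2 - (1 + cv2), 0.2, 50.0)
    b = mean_d / G(1 + 1 / k)
    return k, b

k, b = recover_weibull(mean_d=22.0, var_d=28.0)   # hypothetical moments, cm
print(k, b)
# recovered density: f(d) = (k/b) * (d/b)**(k-1) * exp(-(d/b)**k)
```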
ERIC Educational Resources Information Center
Milk, Robert D.
This study analyzes how two bilingual classroom language distribution approaches affect classroom language use patterns. The two strategies, separate instruction in the two languages vs. the new concurrent language usage approach (NCA) allowing use of both languages with strict guidelines for language alternation, are observed on videotapes of a…
Structural frequency functions for an impulsive, distributed forcing function
NASA Technical Reports Server (NTRS)
Bateman, Vesta I.
1987-01-01
The response of a penetrator structure to a spatially distributed mechanical impulse with a magnitude approaching field test force levels (1-2 Mlb) was measured. The frequency response function calculated from the response to this unique forcing function is compared to frequency response functions calculated from responses to point forces of about 2000 pounds. The results show that the strain gages installed on the penetrator case respond similarly to a point axial force and to a spatially distributed axial force. This result suggests that the distributed axial force generated in a penetration event may be reconstructed as a point axial force when the penetrator behaves in a linear manner.
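For background, frequency response functions of this kind are commonly estimated with the H1 estimator, the input-output cross-spectrum divided by the input auto-spectrum; a sketch with a toy single-mode structure standing in for the penetrator:

```python
# H1 FRF estimate from broadband force input and strain output.
import numpy as np
from scipy import signal

fs = 50_000.0
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(7)
force = rng.normal(size=t.size)                      # broadband input

# toy 1-DOF mode at 3 kHz, 2% damping (illustrative stand-in structure)
wn, zeta = 2 * np.pi * 3000, 0.02
b, a = signal.bilinear([wn**2], [1, 2 * zeta * wn, wn**2], fs)
strain = signal.lfilter(b, a, force) + 0.01 * rng.normal(size=t.size)

f, Sxy = signal.csd(force, strain, fs=fs, nperseg=4096)
_, Sxx = signal.welch(force, fs=fs, nperseg=4096)
H1 = Sxy / Sxx                                       # frequency response function
print(f[np.argmax(np.abs(H1))])                      # peaks near the 3000 Hz mode
```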
Dynamics of modulated beams in spectral domain
Yampolsky, Nikolai A.
2017-07-16
A general formalism for describing the dynamics of modulated beams along linear beamlines is developed. We describe modulated beams with a spectral distribution function, which is the Fourier transform of the conventional beam distribution function in the 6-dimensional phase space. The introduced spectral distribution function is localized in some region of the spectral domain for nearly monochromatic modulations. It can be characterized by a small number of typical parameters, such as the lowest-order moments of the spectral distribution. We study the evolution of modulated beams in linear beamlines and find that the characteristic spectral parameters transform linearly. The developed approach significantly simplifies the analysis of various schemes proposed for seeding X-ray free electron lasers. We use this approach to study several recently proposed schemes and find the bandwidth of the output bunching in each case.
Maximum entropy approach to H-theory: Statistical mechanics of hierarchical systems
NASA Astrophysics Data System (ADS)
Vasconcelos, Giovani L.; Salazar, Domingos S. P.; Macêdo, A. M. S.
2018-02-01
A formalism, called H-theory, is applied to the problem of statistical equilibrium of a hierarchical complex system with multiple time and length scales. In this approach, the system is formally treated as being composed of a small subsystem—representing the region where the measurements are made—in contact with a set of "nested heat reservoirs" corresponding to the hierarchical structure of the system, where the temperatures of the reservoirs are allowed to fluctuate owing to the complex interactions between degrees of freedom at different scales. The probability distribution function (pdf) of the temperature of the reservoir at a given scale, conditioned on the temperature of the reservoir at the next largest scale in the hierarchy, is determined from a maximum entropy principle subject to appropriate constraints that describe the thermal equilibrium properties of the system. The marginal temperature distribution of the innermost reservoir is obtained by integrating over the conditional distributions of all larger scales, and the resulting pdf is written in analytical form in terms of certain special transcendental functions, known as the Fox H functions. The distribution of states of the small subsystem is then computed by averaging the quasiequilibrium Boltzmann distribution over the temperature of the innermost reservoir. This distribution can also be written in terms of H functions. The general family of distributions reported here recovers, as particular cases, the stationary distributions recently obtained by Macêdo et al. [Phys. Rev. E 95, 032315 (2017), 10.1103/PhysRevE.95.032315] from a stochastic dynamical approach to the problem.
Robust inference in the negative binomial regression model with an application to falls data.
Aeberhard, William H; Cantoni, Eva; Heritier, Stephane
2014-12-01
A popular way to model overdispersed count data, such as the number of falls reported during intervention studies, is by means of the negative binomial (NB) distribution. Classical estimation methods are well known to be sensitive to model misspecifications, which take the form of patients falling much more than expected in intervention studies where the NB regression model is used. We extend in this article two approaches for building robust M-estimators of the regression parameters in the class of generalized linear models to the NB distribution. The first approach achieves robustness in the response by applying a bounded function to the Pearson residuals arising in the maximum likelihood estimating equations, while the second approach achieves robustness by bounding the unscaled deviance components. For both approaches, we explore different choices for the bounding functions. Through a unified notation, we show how close these approaches may actually be as long as the bounding functions are chosen and tuned appropriately, and we provide the asymptotic distributions of the resulting estimators. Moreover, we introduce a robust weighted maximum likelihood estimator for the overdispersion parameter, specific to the NB distribution. Simulations under various settings show that redescending bounding functions yield estimates with smaller biases under contamination while keeping high efficiency at the assumed model, and this for both approaches. We present an application to a recent randomized controlled trial measuring the effectiveness of an exercise program at reducing the number of falls among people suffering from Parkinson's disease, to illustrate the diagnostic use of such robust procedures and their need for reliable inference.
NASA Technical Reports Server (NTRS)
Mavris, Dimitri N.; Bandte, Oliver; Schrage, Daniel P.
1996-01-01
This paper outlines an approach for the determination of economically viable robust design solutions, using the High Speed Civil Transport (HSCT) as a case study. Furthermore, the paper states the advantages of probability-based aircraft design over the traditional point-design approach. It also proposes a new methodology called Robust Design Simulation (RDS), which treats customer satisfaction as the ultimate design objective. RDS is based on a probabilistic approach to aerospace systems design, which views the chosen objective as a distribution function introduced by so-called noise or uncertainty variables. Since the designer has no control over these variables, a variability distribution is defined for each one of them. The cumulative effect of all these distributions causes the overall variability of the objective function. For cases where the selected objective function depends heavily on these noise variables, it may be desirable to obtain a design solution that minimizes this dependence. The paper outlines a step-by-step approach for achieving such a solution for the HSCT case study and introduces an evaluation criterion which guarantees the highest customer satisfaction, expressed as the probability of achieving objective function values less than a desired target value.
Maximum entropy approach to statistical inference for an ocean acoustic waveguide.
Knobles, D P; Sagers, J D; Koch, R A
2012-02-01
A conditional probability distribution suitable for estimating the statistical properties of ocean seabed parameter values inferred from acoustic measurements is derived from a maximum entropy principle. The specification of the expectation value for an error function constrains the maximization of an entropy functional. This constraint determines the sensitivity factor (β) to the error function of the resulting probability distribution, which is a canonical form that provides a conservative estimate of the uncertainty of the parameter values. From the conditional distribution, marginal distributions for individual parameters can be determined from integration over the other parameters. The approach is an alternative to obtaining the posterior probability distribution without an intermediary determination of the likelihood function followed by an application of Bayes' rule. In this paper the expectation value that specifies the constraint is determined from the values of the error function for the model solutions obtained from a sparse number of data samples. The method is applied to ocean acoustic measurements taken on the New Jersey continental shelf. The marginal probability distribution for the values of the sound speed ratio at the surface of the seabed and the source levels of a towed source are examined for different geoacoustic model representations.
Thomas E. Dilts; Peter J. Weisberg; Camie M. Dencker; Jeanne C. Chambers
2015-01-01
We have three goals. (1) To develop a suite of functionally relevant climate variables for modelling vegetation distribution on arid and semi-arid landscapes of the Great Basin, USA. (2) To compare the predictive power of vegetation distribution models based on mechanistically proximate factors (water deficit variables) and factors that are more mechanistically removed...
Liu, Jian; Miller, William H
2011-03-14
We show the exact expression of the quantum mechanical time correlation function in the phase space formulation of quantum mechanics. The trajectory-based dynamics that conserves the quantum canonical distribution, the equilibrium Liouville dynamics (ELD) proposed in Paper I, is then used to approximately evaluate the exact expression. It gives exact thermal correlation functions (even of nonlinear operators, i.e., nonlinear functions of position or momentum operators) in the classical, high-temperature, and harmonic limits. Various methods are presented for the implementation of ELD. Numerical tests of the ELD approach in the Wigner or Husimi phase space have been made for a harmonic oscillator and two strongly anharmonic model problems; for each potential, autocorrelation functions of both linear and nonlinear operators have been calculated. The results suggest that ELD can be a potentially useful approach for describing quantum effects in complex systems in the condensed phase.
Properties of two-mode squeezed number states
NASA Technical Reports Server (NTRS)
Chizhov, Alexei V.; Murzakhmetov, B. K.
1994-01-01
Photon statistics and phase properties of two-mode squeezed number states are studied. It is shown that photon number distribution and Pegg-Barnett phase distribution for such states have similar (N + 1)-peak structure for nonzero value of the difference in the number of photons between modes. Exact analytical formulas for phase distributions based on different phase approaches are derived. The Pegg-Barnett phase distribution and the phase quasiprobability distribution associated with the Wigner function are close to each other, while the phase quasiprobability distribution associated with the Q function carries less phase information.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abeykoon, A. M. Milinda; Hu, Hefei; Wu, Lijun
2015-01-30
Different protocols for calibrating electron pair distribution function (ePDF) measurements are explored and described for quantitative studies on nanomaterials. It is found that the most accurate approach to determine the camera length is to use a standard calibration sample of Au nanoparticles from the National Institute of Standards and Technology. Different protocols for data collection are also explored, as are possible operational errors, to find the best approaches for accurate data collection for quantitative ePDF studies.
A Comparative Study of Probability Collectives Based Multi-agent Systems and Genetic Algorithms
NASA Technical Reports Server (NTRS)
Huang, Chien-Feng; Wolpert, David H.; Bieniawski, Stefan; Strauss, Charles E. M.
2005-01-01
We compare Genetic Algorithms (GA's) with Probability Collectives (PC), a new framework for distributed optimization and control. In contrast to GA's, PC-based methods do not update populations of solutions; instead they update an explicitly parameterized probability distribution p over the space of solutions. That updating of p arises as the optimization of a functional of p. The functional is chosen so that any p that optimizes it should be peaked about good solutions. The PC approach works in both continuous and discrete problems. It does not suffer from the resolution limitation of the finite bit-length encoding of parameters into GA alleles. It also has deep connections with both game theory and statistical physics. We review the PC approach using its motivation as the information-theoretic formulation of bounded rationality for multi-agent systems. It is then compared with GA's on a diverse set of problems. To handle high-dimensional surfaces, in the PC method investigated here p is restricted to a product distribution, with each distribution in that product controlled by a separate agent. The test functions were selected for their difficulty using either traditional gradient descent or genetic algorithms. On those functions the PC-based approach significantly outperforms traditional GA's in rate of descent, avoidance of trapping in false minima, and long-term optimization.
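A schematic discrete-variable sketch of the product-distribution idea (Monte Carlo estimation of each agent's conditional expected cost followed by a Boltzmann-style damped update with annealing; this follows the maxent/bounded-rationality framing loosely and is not the paper's pseudocode):

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_vals, n_samp = 8, 5, 2000
vals = np.linspace(-2, 2, n_vals)

def cost(x):                       # separable test function with false minima
    return np.sum(x**2 + 0.8 * np.cos(5 * x), axis=-1)

p = np.full((n_agents, n_vals), 1.0 / n_vals)   # product distribution, one row per agent
T = 1.0
for it in range(60):
    # sample joint solutions from the current product distribution
    idx = np.stack([rng.choice(n_vals, n_samp, p=p[i]) for i in range(n_agents)], axis=1)
    G = cost(vals[idx])
    for i in range(n_agents):
        # conditional expected cost E[G | x_i = v] from the shared samples
        Ev = np.array([G[idx[:, i] == v].mean() if np.any(idx[:, i] == v)
                       else G.max() for v in range(n_vals)])
        w = np.exp(-(Ev - Ev.min()) / T)
        p[i] = 0.5 * p[i] + 0.5 * w / w.sum()    # damped Boltzmann update
    T *= 0.95                                    # annealing schedule
print(vals[np.argmax(p, axis=1)])   # each agent concentrates near a good value
```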
Assignment of functional activations to probabilistic cytoarchitectonic areas revisited.
Eickhoff, Simon B; Paus, Tomas; Caspers, Svenja; Grosbras, Marie-Helene; Evans, Alan C; Zilles, Karl; Amunts, Katrin
2007-07-01
Probabilistic cytoarchitectonic maps in standard reference space provide a powerful tool for the analysis of structure-function relationships in the human brain. While these microstructurally defined maps have already been successfully used in the analysis of somatosensory, motor or language functions, several conceptual issues in the analysis of structure-function relationships still demand further clarification. In this paper, we demonstrate the principle approaches for anatomical localisation of functional activations based on probabilistic cytoarchitectonic maps by exemplary analysis of an anterior parietal activation evoked by visual presentation of hand gestures. After consideration of the conceptual basis and implementation of volume or local maxima labelling, we comment on some potential interpretational difficulties, limitations and caveats that could be encountered. Extending and supplementing these methods, we then propose a supplementary approach for quantification of structure-function correspondences based on distribution analysis. This approach relates the cytoarchitectonic probabilities observed at a particular functionally defined location to the areal specific null distribution of probabilities across the whole brain (i.e., the full probability map). Importantly, this method avoids the need for a unique classification of voxels to a single cortical area and may increase the comparability between results obtained for different areas. Moreover, as distribution-based labelling quantifies the "central tendency" of an activation with respect to anatomical areas, it will, in combination with the established methods, allow an advanced characterisation of the anatomical substrates of functional activations. Finally, the advantages and disadvantages of the various methods are discussed, focussing on the question of which approach is most appropriate for a particular situation.
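The proposed distribution-based quantification reduces, in its simplest form, to ranking the probability observed at the activation site against the area's own whole-map probability distribution; a minimal sketch with a synthetic map standing in for a real probabilistic area:

```python
import numpy as np

def central_tendency(prob_map, value_at_peak):
    """Percentile of the observed probability within the area's full
    (nonzero) whole-brain probability distribution."""
    nonzero = prob_map[prob_map > 0]
    return 100.0 * np.mean(nonzero <= value_at_peak)

rng = np.random.default_rng(5)
prob_map = rng.beta(0.5, 4.0, size=100_000)   # stand-in for a probabilistic map
print(central_tendency(prob_map, value_at_peak=0.6))  # high percentile -> strong assignment
```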
Statistical approach to partial equilibrium analysis
NASA Astrophysics Data System (ADS)
Wang, Yougui; Stanley, H. E.
2009-04-01
A statistical approach to market equilibrium and efficiency analysis is proposed in this paper. One factor that governs the exchange decisions of traders in a market, termed the willingness price, is highlighted and underpins the whole theory. The supply and demand functions are formulated as the distributions of the corresponding willing exchange over the willingness price. The laws of supply and demand can be derived directly from these distributions. The characteristics of the excess demand function are analyzed, and the necessary conditions for the existence and uniqueness of the equilibrium point of the market are specified. The rationing rates of buyers and sellers are introduced to describe the ratio of realized exchange to willing exchange, and their dependence on the market price is studied in the cases of shortage and surplus. The realized market surplus, which is the criterion of market efficiency, can be written as a function of the distributions of willing exchange and the rationing rates. With this approach we can strictly prove that a market is efficient in the state of equilibrium.
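In this formulation (notation assumed here for illustration), supply and demand are integrals of the willingness-price distributions f_d and f_s:

```latex
D(p) = \int_{p}^{\infty} f_{d}(w)\,dw, \qquad
S(p) = \int_{0}^{p} f_{s}(w)\,dw, \qquad
E(p) = D(p) - S(p),
```

since a buyer trades only when the willingness price exceeds the market price and a seller only when it falls below; hence D decreases and S increases in p, which is the statement of the two laws, and equilibrium is the root E(p*) = 0.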
Augmenting aquatic species sensitivity distributions with interspecies toxicity estimation models
Species sensitivity distributions (SSD) are cumulative distribution functions of species toxicity values. The SSD approach is increasingly being used in ecological risk assessment, but is often limited by available toxicity data necessary for diverse species representation. In ...
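A sketch of the common SSD workflow that such data augmentation feeds into (the toxicity values below are invented): fit a log-normal CDF to species toxicity values and read off the hazardous concentration for 5% of species (HC5):

```python
import numpy as np
from scipy import stats

tox = np.array([1.2, 3.4, 5.6, 8.0, 12.0, 20.0, 45.0, 110.0])  # e.g. LC50s, ug/L
mu, sigma = np.mean(np.log(tox)), np.std(np.log(tox), ddof=1)
hc5 = np.exp(stats.norm.ppf(0.05, mu, sigma))
print(hc5)   # concentration expected to protect ~95% of species
# interspecies (ICE) models would augment `tox` with surrogate-predicted values
```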
NASA Astrophysics Data System (ADS)
Simonin, Olivier; Zaichik, Leonid I.; Alipchenkov, Vladimir M.; Février, Pierre
2006-12-01
The objective of the paper is to elucidate a connection between two approaches that have been separately proposed for modelling the statistical spatial properties of inertial particles in turbulent fluid flows. One of the approaches proposed recently by Février, Simonin, and Squires [J. Fluid Mech. 533, 1 (2005)] is based on the partitioning of particle turbulent velocity field into spatially correlated (mesoscopic Eulerian) and random-uncorrelated (quasi-Brownian) components. The other approach stems from a kinetic equation for the two-point probability density function of the velocity distributions of two particles [Zaichik and Alipchenkov, Phys. Fluids 15, 1776 (2003)]. Comparisons between these approaches are performed for isotropic homogeneous turbulence and demonstrate encouraging agreement.
Condensate statistics and thermodynamics of weakly interacting Bose gas: Recursion relation approach
NASA Astrophysics Data System (ADS)
Dorfman, K. E.; Kim, M.; Svidzinsky, A. A.
2011-03-01
We study the condensate statistics and thermodynamics of a weakly interacting Bose gas with a fixed total number N of particles in a cubic box. We find the exact recursion relation for the canonical ensemble partition function. Using this relation, we calculate the distribution function of condensate particles for N=200. We also calculate the distribution function based on a multinomial expansion of the characteristic function. Similar to the ideal gas, both approaches give exact statistical moments for all temperatures in the framework of the Bogoliubov model. We compare them with the results of the unconstrained canonical-ensemble quasiparticle formalism and the hybrid master equation approach. The present recursion relation can be used for any external potential and boundary conditions. We investigate the temperature dependence of the first few statistical moments of condensate fluctuations, as well as thermodynamic potentials and heat capacity, analytically and numerically over the whole temperature range.
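For the ideal-gas limit that the interacting calculation generalizes, the canonical recursion takes the classic form Z_N = (1/N) Σ_{k=1}^{N} z(kβ) Z_{N-k}, with z the single-particle partition function; a direct sketch for particles in a box (parameters illustrative):

```python
import numpy as np

def canonical_Z(N, beta, eps):
    """Canonical partition functions Z_0..Z_N for N ideal bosons with
    single-particle levels eps, via the standard recursion."""
    zs = np.array([np.sum(np.exp(-k * beta * eps)) for k in range(1, N + 1)])
    Z = np.zeros(N + 1)
    Z[0] = 1.0
    for n in range(1, N + 1):
        Z[n] = np.dot(zs[:n], Z[n - 1::-1]) / n
    return Z

# levels of a particle in a cubic box, ground state shifted to zero
n = np.arange(1, 11)
nx, ny, nz = np.meshgrid(n, n, n)
eps = (nx**2 + ny**2 + nz**2 - 3).ravel().astype(float)
Z = canonical_Z(200, beta=0.05, eps=eps)
print(Z[200] / Z[199])   # = exp(-beta*mu), fixing the chemical potential
```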
Grid-based Continual Analysis of Molecular Interior for Drug Discovery, QSAR and QSPR.
Potemkin, Andrey V; Grishina, Maria A; Potemkin, Vladimir A
2017-01-01
In 1979, R. D. Cramer and M. Milne made a first attempt at 3D comparison of molecules by aligning them in space and mapping their molecular fields to a 3D grid. This approach was further developed as DYLOMMS (Dynamic Lattice-Oriented Molecular Modelling System). In 1984, H. Wold and S. Wold proposed the use of partial least squares (PLS) analysis, instead of principal component analysis, to correlate the field values with biological activities. Then, in 1988, the method called CoMFA (Comparative Molecular Field Analysis) was introduced and the appropriate software became commercially available. Since 1988, many 3D QSAR methods, algorithms, and modifications have been introduced for solving virtual drug discovery problems (e.g., CoMSIA, CoMMA, HINT, HASL, GOLPE, GRID, PARM, Raptor, BiS, CiS, ConGO). All the methods can be divided into two groups (classes): (1) methods studying the exterior of molecules; (2) methods studying the interior of molecules. A series of grid-based computational technologies for Continual Molecular Interior analysis (CoMIn) is invented in the current paper. The grid-based analysis is fulfilled by means of a lattice construction, analogously to many other grid-based methods. The further continual elucidation of molecular structure is performed in two ways. (i) In terms of intermolecular interaction potentials, represented as a superposition of Coulomb and Van der Waals interactions and hydrogen bonds; all of these potentials are well-known continual functions, and their values can be determined at all lattice points for a molecule. (ii) In terms of quantum functions, such as the electron density distribution, the Laplacian and Hamiltonian of the electron density distribution, the potential energy distribution, the distributions of the highest occupied and lowest unoccupied molecular orbitals, and their superpositions. To reduce the calculation time of quantum methods based on first principles, an original quantum free-orbital approach, AlteQ, is proposed. All these functions can be calculated at a sufficient level of theory, and their values can be determined at all lattice points for a molecule. The molecules of a dataset can then be superimposed in the lattice for maximal coincidence (or minimal deviation) of the potentials (i) or the quantum functions (ii); the methods and criteria of the superimposition are discussed. After that, a functional relationship between biological activity or property and the characteristics of the potentials (i) or functions (ii) is created; the methods for constructing this quantitative relationship are discussed. New approaches for rational virtual drug design based on the intermolecular potentials and quantum functions are invented, giving many opportunities for virtual drug discovery, virtual screening, and ligand-based drug design. All the invented methods are realized at the www.chemosophia.com web page.
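A sketch of the lattice-mapping step (i) for the electrostatic term only (coordinates and partial charges below are placeholders, and units are left unscaled):

```python
# Evaluate a molecular electrostatic potential sum(q_i / r_i) on a 3-D lattice.
import numpy as np

coords = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.2], [1.0, 0.0, -0.4]])  # angstrom
charges = np.array([-0.4, 0.1, 0.3])                                     # partial charges, e

axis = np.linspace(-4, 4, 33)
X, Y, Z = np.meshgrid(axis, axis, axis, indexing="ij")
grid = np.stack([X, Y, Z], axis=-1)                        # (33, 33, 33, 3) lattice

r = np.linalg.norm(grid[..., None, :] - coords, axis=-1)   # distance to each atom
r = np.maximum(r, 0.5)                                     # soften near-nucleus points
esp = np.sum(charges / r, axis=-1)                         # potential at every lattice point
print(esp.shape, esp.min(), esp.max())
```

Van der Waals and hydrogen-bond terms would be added analogously, and superimposing dataset molecules then amounts to maximizing the coincidence of these grids.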
Ploetz, Elizabeth A; Karunaweera, Sadish; Smith, Paul E
2015-01-28
Fluctuation solution theory has provided an alternative view of many liquid mixture properties in terms of particle number fluctuations. The particle number fluctuations can also be related to integrals of the corresponding two body distribution functions between molecular pairs in order to provide a more physical picture of solution behavior and molecule affinities. Here, we extend this type of approach to provide expressions for higher order triplet and quadruplet fluctuations, and thereby integrals over the corresponding distribution functions, all of which can be obtained from available experimental thermodynamic data. The fluctuations and integrals are then determined using the International Association for the Properties of Water and Steam Formulation 1995 (IAPWS-95) equation of state for the liquid phase of pure water. The results indicate small, but significant, deviations from a Gaussian distribution for the molecules in this system. The pressure and temperature dependence of the fluctuations and integrals, as well as the limiting behavior as one approaches both the triple point and the critical point, are also examined.
Random walk to a nonergodic equilibrium concept
NASA Astrophysics Data System (ADS)
Bel, G.; Barkai, E.
2006-01-01
Random walk models, such as the trap model, continuous time random walks, and comb models, exhibit weak ergodicity breaking, when the average waiting time is infinite. The open question is, what statistical mechanical theory replaces the canonical Boltzmann-Gibbs theory for such systems? In this paper a nonergodic equilibrium concept is investigated, for a continuous time random walk model in a potential field. In particular we show that in the nonergodic phase the distribution of the occupation time of the particle in a finite region of space approaches U- or W-shaped distributions related to the arcsine law. We show that when conditions of detailed balance are applied, these distributions depend on the partition function of the problem, thus establishing a relation between the nonergodic dynamics and canonical statistical mechanics. In the ergodic phase the distribution function of the occupation times approaches a δ function centered on the value predicted based on standard Boltzmann-Gibbs statistics. The relation of our work to single-molecule experiments is briefly discussed.
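The U-shaped occupation-time statistics can be illustrated even in a simple ergodic benchmark: for an ordinary symmetric random walk, the fraction of time a single trajectory spends on the positive side follows the arcsine law across independent trajectories:

```python
import numpy as np

rng = np.random.default_rng(11)
n_traj, n_steps = 5000, 2000
steps = rng.choice([-1, 1], size=(n_traj, n_steps))
pos = np.cumsum(steps, axis=1)
frac_positive = np.mean(pos > 0, axis=1)      # occupation fraction per trajectory

hist, edges = np.histogram(frac_positive, bins=10, range=(0, 1), density=True)
print(np.round(hist, 2))   # U-shape: density piles up near 0 and 1 (arcsine law)
```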
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loebl, N.; Maruhn, J. A.; Reinhard, P.-G.
2011-09-15
By calculating the Wigner distribution function in the reaction plane, we are able to probe the phase-space behavior in the time-dependent Hartree-Fock scheme during a heavy-ion collision in a consistent framework. Various expectation values of operators are calculated by evaluating the corresponding integrals over the Wigner function. In this approach, it is straightforward to define and analyze quantities even locally. We compare the Wigner distribution function with the smoothed Husimi distribution function. Different reaction scenarios are presented by analyzing central and noncentral {sup 16}O +{sup 16}O and {sup 96}Zr +{sup 132}Sn collisions. Although we observe strong dissipation in the time evolution of global observables, there is no evidence for complete equilibration in the local analysis of the Wigner function. Because the initial phase-space volumes of the fragments barely merge and mean values of the observables are conserved in fusion reactions over thousands of fm/c, we conclude that the time-dependent Hartree-Fock method provides a good description of the early stage of a heavy-ion collision but does not provide a mechanism to change the phase-space structure in a dramatic way necessary to obtain complete equilibration.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horowitz, Kelsey A; Ding, Fei; Mather, Barry A
The increasing deployment of distributed photovoltaic systems (DPV) can impact operations at the distribution level and the transmission level of the electric grid. It is important to develop and implement forward-looking approaches for calculating distribution upgrade costs that can be used to inform system planning, market and tariff design, cost allocation, and other policymaking as penetration levels of DPV increase. Using a bottom-up approach that involves iterative hosting capacity analysis, this report calculates distribution upgrade costs as a function of DPV penetration on three real feeders - two in California and one in the Northeastern United States.
NASA Astrophysics Data System (ADS)
Lan, Bo; Lowe, Michael J. S.; Dunne, Fionn P. E.
2015-10-01
A new spherical convolution approach has been presented which couples HCP single crystal wave speed (the kernel function) with the polycrystal c-axis pole distribution function to give the resultant polycrystal wave speed response. The three functions are expressed as spherical harmonic expansions, so that the de-convolution technique allows any one of the three to be determined from knowledge of the other two. Hence, the forward problem of determining polycrystal wave speed from knowledge of the single crystal wave speed response and the polycrystal pole distribution has been solved for a broad range of experimentally representative HCP polycrystal textures. The technique provides near-perfect representation of the sensitivity of wave speed to polycrystal texture as well as quantitative prediction of polycrystal wave speed. More importantly, a solution to the inverse problem is presented in which texture, as a c-axis distribution function, is determined from knowledge of the kernel function and the polycrystal wave speed response. It is also explained why it has been widely reported in the literature that only texture coefficients up to 4th degree may be obtained from ultrasonic measurements. Finally, the de-convolution approach presented provides the potential for the measurement of polycrystal texture from ultrasonic wave speed measurements.
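For intuition about the degree-wise structure of such spherical (de)convolution: by the Funk-Hecke theorem, an axisymmetric kernel acts multiplicatively on spherical-harmonic coefficients, so each of the three functions follows from the other two by degree-wise multiplication or division. The sketch below assumes this standard property; the coefficient layout, the normalization factor c(l), and all values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Illustrative coefficients: t_lm for the c-axis pole distribution (texture)
# and k_l for an axisymmetric single-crystal wave-speed kernel, both as
# spherical-harmonic expansions truncated at degree L_MAX (assumed layout).
L_MAX = 4
rng = np.random.default_rng(1)
k_l = np.array([1.0, 0.0, 0.4, 0.0, 0.1])        # kernel, degrees 0..4
t_lm = {l: rng.normal(size=2 * l + 1) for l in range(L_MAX + 1)}

def convolve(k_l, t_lm):
    """Forward problem: polycrystal response coefficients, degree by degree.
    An axisymmetric kernel acts multiplicatively (Funk-Hecke):
    v_lm = c(l) * k_l * t_lm, with c(l) absorbing the normalization
    convention (assumed here)."""
    c = lambda l: np.sqrt(4.0 * np.pi / (2 * l + 1))
    return {l: c(l) * k_l[l] * t_lm[l] for l in t_lm}

def deconvolve(k_l, v_lm, tol=1e-12):
    """Inverse problem: recover texture coefficients where k_l != 0.
    Degrees with k_l ~ 0 are unobservable, which is why only a limited set
    of texture coefficients can be recovered from wave-speed data."""
    c = lambda l: np.sqrt(4.0 * np.pi / (2 * l + 1))
    return {l: v_lm[l] / (c(l) * k_l[l]) for l in v_lm if abs(k_l[l]) > tol}

v_lm = convolve(k_l, t_lm)
t_rec = deconvolve(k_l, v_lm)
print("degree 2 recovered:", np.allclose(t_rec[2], t_lm[2]))
```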
Gaussian statistics for palaeomagnetic vectors
NASA Astrophysics Data System (ADS)
Love, J. J.; Constable, C. G.
2003-03-01
With the aim of treating the statistics of palaeomagnetic directions and intensities jointly and consistently, we represent the mean and the variance of palaeomagnetic vectors, at a particular site and of a particular polarity, by a probability density function in a Cartesian three-space of orthogonal magnetic-field components consisting of a single (unimodal) non-zero mean, spherically-symmetrical (isotropic) Gaussian function. For palaeomagnetic data of mixed polarities, we consider a bimodal distribution consisting of a pair of such symmetrical Gaussian functions, with equal, but opposite, means and equal variances. For both the Gaussian and bi-Gaussian distributions, and in the spherical three-space of intensity, inclination, and declination, we obtain analytical expressions for the marginal density functions, the cumulative distributions, and the expected values and variances for each spherical coordinate (including the angle with respect to the axis of symmetry of the distributions). The mathematical expressions for the intensity and off-axis angle are closed-form and especially manageable, with the intensity distribution being Rayleigh-Rician. In the limit of small relative vectorial dispersion, the Gaussian (bi-Gaussian) directional distribution approaches a Fisher (Bingham) distribution and the intensity distribution approaches a normal distribution. In the opposite limit of large relative vectorial dispersion, the directional distributions approach a spherically-uniform distribution and the intensity distribution approaches a Maxwell distribution. We quantify biases in estimating the properties of the vector field resulting from the use of simple arithmetic averages, such as estimates of the intensity or the inclination of the mean vector, or the variances of these quantities. With the statistical framework developed here and using the maximum-likelihood method, which gives unbiased estimates in the limit of large data numbers, we demonstrate how to formulate the inverse problem, and how to estimate the mean and variance of the magnetic vector field, even when the data consist of mixed combinations of directions and intensities. We examine palaeomagnetic secular-variation data from Hawaii and Réunion, and although these two sites are on almost opposite latitudes, we find significant differences in the mean vector and differences in the local vectorial variances, with the Hawaiian data being particularly anisotropic. These observations are inconsistent with a description of the mean field as being a simple geocentric axial dipole and with secular variation being statistically symmetrical with respect to reflection through the equatorial plane. Finally, our analysis of palaeomagnetic acquisition data from the 1960 Kilauea flow in Hawaii and the Holocene Xitle flow in Mexico, is consistent with the widely held suspicion that directional data are more accurate than intensity data.
NASA Astrophysics Data System (ADS)
Kakehashi, Yoshiro; Chandra, Sumal
2016-04-01
We have developed a first-principles local ansatz wavefunction approach with momentum-dependent variational parameters on the basis of the tight-binding LDA+U Hamiltonian. The theory goes beyond the first-principles Gutzwiller approach and quantitatively describes correlated electron systems. Using the theory, we find that the momentum distribution function (MDF) bands of paramagnetic bcc Fe along high-symmetry lines show a large deviation from the Fermi-Dirac function for the d electrons with eg symmetry and yield the momentum-dependent mass enhancement factors. The calculated average mass enhancement m*/m = 1.65 is consistent with low-temperature specific heat data as well as recent angle-resolved photoemission spectroscopy (ARPES) data.
Yura, Harold T; Hanson, Steen G
2012-04-01
Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative examples with relevance for optics are given.
NASA Astrophysics Data System (ADS)
Shizgal, Bernie D.
2018-05-01
This paper considers two nonequilibrium model systems described by linear Fokker-Planck equations for the time-dependent velocity distribution functions that yield steady state Kappa distributions for specific system parameters. The first system describes the time evolution of a charged test particle in a constant temperature heat bath of a second charged particle. The time dependence of the distribution function of the test particle is given by a Fokker-Planck equation with drift and diffusion coefficients for Coulomb collisions as well as a diffusion coefficient for wave-particle interactions. A second system involves the Fokker-Planck equation for electrons dilutely dispersed in a constant temperature heat bath of atoms or ions and subject to an external time-independent uniform electric field. The momentum transfer cross section for collisions between the two components is assumed to be a power law in reduced speed. The time-dependent Fokker-Planck equations for both model systems are solved with a numerical finite difference method and the approach to equilibrium is rationalized with the Kullback-Leibler relative entropy. For particular choices of the system parameters for both models, the steady distribution is found to be a Kappa distribution. Kappa distributions were introduced as an empirical fitting function that well describe the nonequilibrium features of the distribution functions of electrons and ions in space science as measured by satellite instruments. The calculation of the Kappa distribution from the Fokker-Planck equations provides a direct physically based dynamical approach in contrast to the nonextensive entropy formalism by Tsallis [J. Stat. Phys. 53, 479 (1988), 10.1007/BF01016429].
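As a concrete companion to the abstract above, the following hedged sketch evaluates one commonly used one-dimensional Kappa form (normalized numerically, since prefactor conventions vary) and the Kullback-Leibler relative entropy against a Maxwellian, showing the expected approach to the Maxwellian as kappa grows. All parameter values are illustrative.

```python
import numpy as np

v = np.linspace(-10, 10, 2001)          # velocity grid (thermal-speed units)

def kappa_pdf(v, kappa, theta=1.0):
    """One common 1D Kappa form, f ~ [1 + v^2/(kappa theta^2)]^(-kappa-1);
    normalized numerically to sidestep convention-dependent prefactors."""
    f = (1.0 + v**2 / (kappa * theta**2)) ** (-(kappa + 1.0))
    return f / np.trapz(f, v)

def kl_divergence(p, q, v):
    """Kullback-Leibler relative entropy D(p||q) on a common grid."""
    mask = (p > 0) & (q > 0)
    return np.trapz(p[mask] * np.log(p[mask] / q[mask]), v[mask])

maxwellian = np.exp(-v**2)
maxwellian /= np.trapz(maxwellian, v)
for kappa in (2.0, 5.0, 20.0, 100.0):
    d = kl_divergence(kappa_pdf(v, kappa), maxwellian, v)
    print(f"kappa = {kappa:6.1f}   D(kappa || Maxwellian) = {d:.4e}")
# As kappa -> infinity the Kappa distribution approaches the Maxwellian,
# and the relative entropy decays toward zero.
```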
Analysis of the proton longitudinal structure function from the gluon distribution function
NASA Astrophysics Data System (ADS)
Boroun, G. R.; Rezaei, B.
2012-11-01
We make a critical, next-to-leading order, study of the relationship between the longitudinal structure function F_L and the gluon distribution proposed in Cooper-Sarkar et al. (Z. Phys. C 39:281, 1988; Acta Phys. Pol. B 34:2911, 2003), which is frequently used to extract the gluon distribution from the proton longitudinal structure function at small x. The gluon density is obtained by expanding at particular choices of the point of expansion and compared with the hard Pomeron behavior for the gluon density. Comparisons with H1 data are made and predictions for the proposed best approach are also provided.
Derivation of an eigenvalue probability density function relating to the Poincaré disk
NASA Astrophysics Data System (ADS)
Forrester, Peter J.; Krishnapur, Manjunath
2009-09-01
A result of Zyczkowski and Sommers (2000 J. Phys. A: Math. Gen. 33 2045-57) gives the eigenvalue probability density function for the top N × N sub-block of a Haar distributed matrix from U(N + n). In the case n >= N, we rederive this result, starting from knowledge of the distribution of the sub-blocks, introducing the Schur decomposition and integrating over all variables except the eigenvalues. The integration is done by identifying a recursive structure which reduces the dimension. This approach is inspired by an analogous approach which has been recently applied to determine the eigenvalue probability density function for random matrices A-1B, where A and B are random matrices with entries standard complex normals. We relate the eigenvalue distribution of the sub-blocks to a many-body quantum state, and to the one-component plasma, on the pseudosphere.
Comment on "Troublesome aspects of the Renyi-MaxEnt treatment"
NASA Astrophysics Data System (ADS)
Oikonomou, Thomas; Bagci, G. Baris
2017-11-01
Plastino et al. [Plastino et al., Phys. Rev. E 94, 012145 (2016), 10.1103/PhysRevE.94.012145] recently stated that the Rényi entropy is not suitable for thermodynamics by using functional calculus, since it leads to anomalous results unlike the Tsallis entropy. We first show that the Tsallis entropy also leads to such anomalous behaviors if one adopts the same functional calculus approach. Second, we note that one of the Lagrange multipliers is set in an ad hoc manner in the functional calculus approach of Plastino et al. Finally, the explanation for these anomalous behaviors is provided by observing that the generalized distributions obtained by Plastino et al. do not yield the ordinary canonical partition function in the appropriate limit and therefore cannot be considered as genuine generalized distributions.
NASA Astrophysics Data System (ADS)
Dunn, S. M.; Colohan, R. J. E.
1999-09-01
A snow component has been developed for the distributed hydrological model, DIY, using an approach that sequentially evaluates the behaviour of different functions as they are implemented in the model. The evaluation is performed using multi-objective functions to ensure that the internal structure of the model is correct. The development of the model, using a sub-catchment in the Cairngorm Mountains in Scotland, demonstrated that the degree-day model can be enhanced for hydroclimatic conditions typical of those found in Scotland, without increasing meteorological data requirements. An important element of the snow model is a function to account for wind re-distribution. This causes large accumulations of snow in small pockets, which are shown to be important in sustaining baseflows in the rivers during the late spring and early summer, long after the snowpack has melted from the bulk of the catchment. The importance of the wind function would not have been identified using a single objective function of total streamflow to evaluate the model behaviour.
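The enhanced scheme itself is not spelled out in the abstract, but the degree-day model it builds on reduces to a one-line melt law. A minimal sketch follows, with the degree-day factor, base temperature, and all values purely illustrative.

```python
import numpy as np

def degree_day_melt(temps_c, swe0, ddf=3.0, t_base=0.0):
    """Classical degree-day snowmelt: daily melt (mm) is proportional to
    the excess of air temperature over a base temperature, capped by the
    available snow water equivalent (SWE). ddf is in mm / degC / day."""
    swe, melt_series = swe0, []
    for t in temps_c:
        melt = min(max(ddf * (t - t_base), 0.0), swe)
        swe -= melt
        melt_series.append(melt)
    return np.array(melt_series), swe

temps = np.array([-3.0, -1.0, 1.5, 4.0, 6.5, 2.0, 8.0])  # daily mean temps
melt, swe_left = degree_day_melt(temps, swe0=40.0)
print("daily melt (mm):", melt, " remaining SWE:", swe_left)
```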
New advances in the statistical parton distributions approach
NASA Astrophysics Data System (ADS)
Soffer, Jacques; Bourrely, Claude
2016-03-01
The quantum statistical parton distributions approach proposed more than one decade ago is revisited by considering a larger set of recent and accurate Deep Inelastic Scattering experimental results. It enables us to improve the description of the data by means of a new determination of the parton distributions. This global next-to-leading order QCD analysis leads to a good description of several structure functions, involving unpolarized parton distributions and helicity distributions, in terms of a rather small number of free parameters. Several serious challenging issues remain. The predictions of this theoretical approach will be tested for single-jet production and charge asymmetry in W± production in p̄p and pp collisions up to LHC energies, using recent data and also forthcoming experimental results. Presented by J. Soffer at POETIC 2015.
Quantile Functions, Convergence in Quantile, and Extreme Value Distribution Theory.
1980-11-01
Gnanadesikan (1968). Quantile functions are advocated by Parzen (1979) as providing an approach to probability-based data analysis. Quantile functions are... Gnanadesikan, R. (1968). Probability Plotting Methods for the Analysis of Data, Biometrika, 55, 1-17.
Quantitative Mapping of the Spatial Distribution of Nanoparticles in Endo-Lysosomes by Local pH.
Wang, Jing; MacEwan, Sarah R; Chilkoti, Ashutosh
2017-02-08
Understanding the intracellular distribution and trafficking of nanoparticle drug carriers is necessary to elucidate their mechanisms of drug delivery and is helpful in the rational design of novel nanoparticle drug delivery systems. The traditional immunofluorescence method to study intracellular distribution of nanoparticles using organelle-specific antibodies is laborious and subject to artifacts. As an alternative, we developed a new method that exploits ratiometric fluorescence imaging of a pH-sensitive Lysosensor dye to visualize and quantify the spatial distribution of nanoparticles in the endosomes and lysosomes of live cells. Using this method, we compared the endolysosomal distribution of cell-penetrating peptide (CPP)-functionalized micelles to unfunctionalized micelles and found that CPP-functionalized micelles exhibited faster endosome-to-lysosome trafficking than unfunctionalized micelles. Ratiometric fluorescence imaging of pH-sensitive Lysosensor dye allows rapid quantitative mapping of nanoparticle distribution in endolysosomes in live cells while minimizing artifacts caused by extensive sample manipulation typical of alternative approaches. This new method can thus serve as an alternative to traditional immunofluorescence approaches to study the intracellular distribution and trafficking of nanoparticles within endosomes and lysosomes.
A phase space approach to wave propagation with dispersion.
Ben-Benjamin, Jonathan S; Cohen, Leon; Loughlin, Patrick J
2015-08-01
A phase space approximation method for linear dispersive wave propagation with arbitrary initial conditions is developed. The results expand on a previous approximation in terms of the Wigner distribution of a single mode. In contrast to this previously considered single-mode case, the approximation presented here is for the full wave and is obtained by a different approach. This solution requires one to obtain (i) the initial modal functions from the given initial wave, and (ii) the initial cross-Wigner distribution between different modal functions. The full wave is the sum of modal functions. The approximation is obtained for general linear wave equations by transforming the equations to phase space, and then solving in the new domain. It is shown that each modal function of the wave satisfies a Schrödinger-type equation where the equivalent "Hamiltonian" operator is the dispersion relation corresponding to the mode and where the wavenumber is replaced by the wavenumber operator. Application to the beam equation is considered to illustrate the approach.
NASA Astrophysics Data System (ADS)
Yan, Wang-Ji; Ren, Wei-Xin
2018-01-01
This study applies the theoretical findings on the circularly-symmetric complex normal ratio distribution Yan and Ren (2016) [1,2] to transmissibility-based modal analysis from a statistical viewpoint. A probabilistic model of the transmissibility function in the vicinity of the resonant frequency is formulated in the modal domain, and some insightful comments are offered. It theoretically reveals that the statistics of the transmissibility function around the resonant frequency depend solely on the 'noise-to-signal' ratio and the mode shapes. As a sequel to the development of the probabilistic model of the transmissibility function in the modal domain, this study poses the process of modal identification in a Bayesian framework. Implementation issues unique to the proposed approach are resolved by the Lagrange multiplier approach. This study also explores the possibility of applying Bayesian analysis to distinguishing harmonic components from structural ones. The approaches are verified through simulated data and experimental test data. The uncertainty behavior due to variation of different factors is also discussed in detail.
Polymer-Attached Functional Inorganic-Organic Hybrid Nano-Composite Aerogels
2003-01-01
drugs. The chemistry to synthesize polyaminosiloxane-based aerogel composites was discussed. In addition, two approaches to synthesize PHEMA aerogel... (Materials Research Society Symposium Proceedings Vol. 740, 2003.)
Novel trends in pair distribution function approaches on bulk systems with nanoscale heterogeneities
Emil S. Bozin; Billinge, Simon J. L.
2016-07-29
Novel materials for high performance applications increasingly exhibit structural order on the nanometer length scale; a domain where crystallography, the basis of Rietveld refinement, fails [1]. In such instances the total scattering approach, which treats Bragg and diffuse scattering on an equal basis, is a powerful approach. In recent years, the analysis of total scattering data has become an invaluable tool and the gold standard for studying nanocrystalline, nanoporous, and disordered crystalline materials. The data may be analyzed in reciprocal space directly, or Fourier transformed to the real-space atomic pair distribution function (PDF) and this intuitive function examined for local structural information. Here we give a number of illustrative examples, for convenience picked from our own work, of recent developments and applications of total scattering and PDF analysis to novel complex materials. There are many other wonderful examples from the work of others.
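For reference, the real-space PDF mentioned above is obtained from total-scattering data by a sine Fourier transform of the reduced structure function. The sketch below assumes one common normalization convention, with a synthetic S(Q) standing in for measured data.

```python
import numpy as np

def pdf_from_structure_factor(q, s_q, r):
    """Reduced pair distribution function from total-scattering data,
    using the common convention
        G(r) = (2/pi) * Integral_0^Qmax  Q [S(Q) - 1] sin(Q r) dQ,
    evaluated by trapezoidal quadrature on the measured Q grid."""
    f_q = q * (s_q - 1.0)                      # reduced structure function
    return np.array([(2.0 / np.pi) * np.trapz(f_q * np.sin(q * ri), q)
                     for ri in r])

# Synthetic example: a damped-oscillation S(Q) standing in for real data.
q = np.linspace(1e-3, 25.0, 2500)              # inverse angstroms
s_q = 1.0 + np.sin(2.8 * q) * np.exp(-0.1 * q) # toy structure factor
r = np.linspace(0.1, 10.0, 200)                # angstroms
g_r = pdf_from_structure_factor(q, s_q, r)
print("G(r) peak near r =", r[np.argmax(g_r)], "angstrom")
```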
Jacquet, Claire; Mouillot, David; Kulbicki, Michel; Gravel, Dominique
2017-02-01
The Theory of Island Biogeography (TIB) predicts how area and isolation influence species richness equilibrium on insular habitats. However, the TIB remains silent about functional trait composition and provides no information on the scaling of functional diversity with area, an observation that is now documented in many systems. To fill this gap, we develop a probabilistic approach to predict the distribution of a trait as a function of habitat area and isolation, extending the TIB beyond the traditional species-area relationship. We compare model predictions to the body-size distribution of piscivorous and herbivorous fishes found on tropical reefs worldwide. We find that small and isolated reefs have a higher proportion of large-sized species than large and connected reefs. We also find that knowledge of species body-size and trophic position improves the predictions of fish occupancy on tropical reefs, supporting both the allometric and trophic theory of island biogeography. The integration of functional ecology to island biogeography is broadly applicable to any functional traits and provides a general probabilistic approach to study the scaling of trait distribution with habitat area and isolation. © 2016 John Wiley & Sons Ltd/CNRS.
A general framework for updating belief distributions.
Bissiri, P G; Holmes, C C; Walker, S G
2016-11-01
We propose a framework for general Bayesian inference. We argue that a valid update of a prior belief distribution to a posterior can be made for parameters which are connected to observations through a loss function rather than the traditional likelihood function, which is recovered as a special case. Modern application areas make it increasingly challenging for Bayesians to attempt to model the true data-generating mechanism. For instance, when the object of interest is low dimensional, such as a mean or median, it is cumbersome to have to achieve this via a complete model for the whole data distribution. More importantly, there are settings where the parameter of interest does not directly index a family of density functions and thus the Bayesian approach to learning about such parameters is currently regarded as problematic. Our framework uses loss functions to connect information in the data to functionals of interest. The updating of beliefs then follows from a decision theoretic approach involving cumulative loss functions. Importantly, the procedure coincides with Bayesian updating when a true likelihood is known yet provides coherent subjective inference in much more general settings. Connections to other inference frameworks are highlighted.
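The update advocated above takes the general form posterior proportional to exp(-w x cumulative loss) x prior. A grid-based sketch for learning a median through absolute-error loss follows; the learning rate w and all other choices are illustrative, not prescribed by the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.standard_t(df=3, size=50) + 1.0   # heavy-tailed, true median ~ 1

theta = np.linspace(-3, 5, 4001)             # grid over the median parameter
prior = np.exp(-0.5 * theta**2)
prior /= np.trapz(prior, theta)

# General belief update: posterior(theta) ~ exp(-w * cumulative loss) * prior.
# Absolute-error loss targets the median without modeling the full density;
# the weight w (here 1.0) is a user choice in this framework.
w = 1.0
loss = np.array([np.sum(np.abs(data - t)) for t in theta])
post = prior * np.exp(-w * (loss - loss.min()))   # shift for stability
post /= np.trapz(post, theta)

print("posterior mean of the median parameter:", np.trapz(theta * post, theta))
```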
A distributed computing approach to mission operations support. [for spacecraft
NASA Technical Reports Server (NTRS)
Larsen, R. L.
1975-01-01
Computing mission operation support includes orbit determination, attitude processing, maneuver computation, resource scheduling, etc. The large-scale third-generation distributed computer network discussed is capable of fulfilling these dynamic requirements. It is shown that distribution of resources and control leads to increased reliability, and exhibits potential for incremental growth. Through functional specialization, a distributed system may be tuned to very specific operational requirements. Fundamental to the approach is the notion of process-to-process communication, which is effected through a high-bandwidth communications network. Both resource-sharing and load-sharing may be realized in the system.
Kaon quark distribution functions in the chiral constituent quark model
NASA Astrophysics Data System (ADS)
Watanabe, Akira; Sawada, Takahiro; Kao, Chung Wen
2018-04-01
We investigate the valence u and s̄ quark distribution functions of the K+ meson, v_K^(u)(x, Q^2) and v_K^(s̄)(x, Q^2), in the framework of the chiral constituent quark model. We judiciously choose the bare distributions at the initial scale to generate the dressed distributions at the higher scale, considering the meson cloud effects and the QCD evolution, which agree with the phenomenologically satisfactory valence quark distribution of the pion and the experimental data of the ratio v_K^(u)(x, Q^2)/v_π^(u)(x, Q^2). We show how the meson cloud effects affect the bare distribution functions in detail. We find a smaller SU(3) flavor symmetry breaking effect compared with the results of preceding studies based on other approaches.
Interval Estimation of Seismic Hazard Parameters
NASA Astrophysics Data System (ADS)
Orlecka-Sikora, Beata; Lasocki, Stanislaw
2017-03-01
The paper considers Poisson temporal occurrence of earthquakes and presents a way to integrate uncertainties of the estimates of mean activity rate and magnitude cumulative distribution function in the interval estimation of the most widely used seismic hazard functions, such as the exceedance probability and the mean return period. The proposed algorithm can be used either when the Gutenberg-Richter model of magnitude distribution is accepted or when the nonparametric estimation is in use. When the Gutenberg-Richter model of magnitude distribution is used, the interval estimation of its parameters is based on the asymptotic normality of the maximum likelihood estimator. When the nonparametric kernel estimation of magnitude distribution is used, we propose the iterated bias corrected and accelerated method for interval estimation based on the smoothed bootstrap and second-order bootstrap samples. The changes resulting from the integrated approach to the interval estimation of the seismic hazard functions, relative to the approach that neglects the uncertainty of the mean activity rate estimates, have been studied using Monte Carlo simulations and two real dataset examples. The results indicate that the uncertainty of the mean activity rate significantly affects the interval estimates of hazard functions only when the product of the activity rate and the time period for which the hazard is estimated is no more than 5.0. When this product becomes greater than 5.0, the impact of the uncertainty of the cumulative distribution function of magnitude dominates the impact of the uncertainty of the mean activity rate in the aggregated uncertainty of the hazard functions, and the interval estimates with and without inclusion of the uncertainty of the mean activity rate converge. The presented algorithm is generic and can also be applied to capture the propagation of uncertainty of estimates that are parameters of a multiparameter function onto this function.
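A hedged sketch of the core computation: for Poisson occurrence with a Gutenberg-Richter magnitude distribution, the exceedance probability has a closed form, and interval estimates follow by Monte Carlo sampling of the (asymptotically normal) parameter estimates. The rates, standard errors, and magnitudes below are invented for illustration and do not reproduce the paper's algorithm in detail.

```python
import numpy as np

rng = np.random.default_rng(3)

def exceedance_prob(lam, b, m, m0, t):
    """P(at least one event with magnitude >= m within t years), assuming
    Poisson occurrence with mean rate lam (events/yr above m0) and an
    unbounded Gutenberg-Richter magnitude CDF F(m) = 1 - 10^(-b (m - m0))."""
    rate_above_m = lam * 10.0 ** (-b * (m - m0))
    return 1.0 - np.exp(-rate_above_m * t)

# Point estimates with (assumed) asymptotic-normal standard errors:
lam_hat, lam_se = 2.0, 0.3        # mean activity rate above m0 = 3.0
b_hat, b_se = 1.0, 0.05
draws = exceedance_prob(rng.normal(lam_hat, lam_se, 100_000).clip(1e-6),
                        rng.normal(b_hat, b_se, 100_000),
                        m=5.5, m0=3.0, t=50.0)
lo, hi = np.percentile(draws, [2.5, 97.5])
print(f"95% interval for 50-yr exceedance probability: [{lo:.3f}, {hi:.3f}]")
```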
Comment on "Troublesome aspects of the Renyi-MaxEnt treatment".
Oikonomou, Thomas; Bagci, G Baris
2017-11-01
Plastino et al. [Plastino et al., Phys. Rev. E 94, 012145 (2016)1539-375510.1103/PhysRevE.94.012145] recently stated that the Rényi entropy is not suitable for thermodynamics by using functional calculus, since it leads to anomalous results unlike the Tsallis entropy. We first show that the Tsallis entropy also leads to such anomalous behaviors if one adopts the same functional calculus approach. Second, we note that one of the Lagrange multipliers is set in an ad hoc manner in the functional calculus approach of Plastino et al. Finally, the explanation for these anomalous behaviors is provided by observing that the generalized distributions obtained by Plastino et al. do not yield the ordinary canonical partition function in the appropriate limit and therefore cannot be considered as genuine generalized distributions.
Bivariate sub-Gaussian model for stock index returns
NASA Astrophysics Data System (ADS)
Jabłońska-Sabuka, Matylda; Teuerle, Marek; Wyłomańska, Agnieszka
2017-11-01
Financial time series are commonly modeled with methods assuming data normality. However, the real distribution can be nontrivial, also not having an explicitly formulated probability density function. In this work we introduce novel parameter estimation and high-powered distribution testing methods which do not rely on closed form densities, but use the characteristic functions for comparison. The approach applied to a pair of stock index returns demonstrates that such a bivariate vector can be a sample coming from a bivariate sub-Gaussian distribution. The methods presented here can be applied to any nontrivially distributed financial data, among others.
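The characteristic-function idea can be illustrated compactly: fit parameters by minimizing the squared distance between the empirical characteristic function and a model characteristic function over a grid of t values. The sketch below uses a Gaussian model CF for brevity (the paper's sub-Gaussian CF would replace it); data and grid choices are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
x = rng.normal(0.2, 1.5, size=5_000)          # stand-in for return data

t_grid = np.linspace(-3, 3, 121)              # CF evaluation points

def ecf(x, t):
    """Empirical characteristic function phi_hat(t) = mean(exp(i t x))."""
    return np.exp(1j * np.outer(t, x)).mean(axis=1)

def gaussian_cf(t, mu, sigma):
    return np.exp(1j * mu * t - 0.5 * (sigma * t) ** 2)

phi_hat = ecf(x, t_grid)

def objective(params):
    mu, log_sigma = params
    diff = phi_hat - gaussian_cf(t_grid, mu, np.exp(log_sigma))
    return np.sum(np.abs(diff) ** 2)          # summed squared CF distance

res = minimize(objective, x0=[0.0, 0.0])
print("estimated mu, sigma:", res.x[0], np.exp(res.x[1]))
```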
Transfer function concept for ultrasonic characterization of material microstructures
NASA Technical Reports Server (NTRS)
Vary, A.; Kautz, H. E.
1986-01-01
The approach given depends on treating material microstructures as elastomechanical filters that have analytically definable transfer functions. These transfer functions can be defined in terms of the frequency dependence of the ultrasonic attenuation coefficient. The transfer function concept provides a basis for synthesizing expressions that characterize polycrystalline materials relative to microstructural factors such as mean grain size, grain-size distribution functions, and grain boundary energy transmission. Although the approach is nonrigorous, it leads to a rational basis for combining the previously mentioned diverse and fragmented equations for ultrasonic attenuation coefficients.
Analyzing coastal environments by means of functional data analysis
NASA Astrophysics Data System (ADS)
Sierra, Carlos; Flor-Blanco, Germán; Ordoñez, Celestino; Flor, Germán; Gallego, José R.
2017-07-01
Here we used Functional Data Analysis (FDA) to examine particle-size distributions (PSDs) in a beach/shallow marine sedimentary environment in Gijón Bay (NW Spain). The work involved both Functional Principal Components Analysis (FPCA) and Functional Cluster Analysis (FCA). The grain size of the sand samples was characterized by means of laser dispersion spectroscopy. Within this framework, FPCA was used as a dimension reduction technique to explore and uncover patterns in grain-size frequency curves. This procedure proved useful to describe variability in the structure of the data set. Moreover, an alternative approach, FCA, was applied to identify clusters and to interpret their spatial distribution. Results obtained with this latter technique were compared with those obtained by means of two vector approaches that combine PCA with CA (Cluster Analysis). The first method, the point density function (PDF), was employed after adapting a log-normal distribution to each PSD and summarizing each of the density functions by its mean, sorting, skewness and kurtosis. The second applied a centered log-ratio (clr) transform to the original data. PCA was then applied to the transformed data, and finally CA to the retained principal component scores. The study revealed functional data analysis, specifically FPCA and FCA, to be a suitable alternative with considerable advantages over traditional vector analysis techniques in sedimentary geology studies.
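The centered log-ratio step used in the second vector approach is short enough to show directly. The sketch below applies clr to synthetic compositional PSD data and runs PCA via an SVD; all names and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
# Rows: samples; columns: grain-size bins (compositions summing to 1).
psd = rng.dirichlet(alpha=np.full(8, 2.0), size=30)

def clr(x, eps=1e-9):
    """Centered log-ratio transform: log(x) minus the row-wise mean of
    log(x); maps compositions off the simplex so PCA distances make sense."""
    logx = np.log(x + eps)
    return logx - logx.mean(axis=1, keepdims=True)

z = clr(psd)
z_centered = z - z.mean(axis=0)
# PCA via SVD of the centered clr-transformed data:
u, s, vt = np.linalg.svd(z_centered, full_matrices=False)
scores = u * s                                  # principal component scores
explained = s**2 / np.sum(s**2)
print("variance explained by PC1, PC2:", explained[:2])
```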
NASA Astrophysics Data System (ADS)
Gadjiev, Bahruz; Progulova, Tatiana
2015-01-01
We consider a multifractal structure as a mixture of fractal substructures and introduce a distribution function f (α), where α is a fractal dimension. Then we can introduce g(p)˜
Classical statistical mechanics approach to multipartite entanglement
NASA Astrophysics Data System (ADS)
Facchi, P.; Florio, G.; Marzolino, U.; Parisi, G.; Pascazio, S.
2010-06-01
We characterize the multipartite entanglement of a system of n qubits in terms of the distribution function of the bipartite purity over balanced bipartitions. We search for maximally multipartite entangled states, whose average purity is minimal, and recast this optimization problem into a problem of statistical mechanics, by introducing a cost function, a fictitious temperature and a partition function. By investigating the high-temperature expansion, we obtain the first three moments of the distribution. We find that the problem exhibits frustration.
NASA Astrophysics Data System (ADS)
Mukhopadhyay, Saumyadip; Abraham, John
2012-07-01
The unsteady flamelet progress variable (UFPV) model has been proposed by Pitsch and Ihme ["An unsteady/flamelet progress variable method for LES of nonpremixed turbulent combustion," AIAA Paper No. 2005-557, 2005] for modeling the averaged/filtered chemistry source terms in Reynolds averaged simulations and large eddy simulations of reacting non-premixed combustion. In the UFPV model, a look-up table of source terms is generated as a function of mixture fraction Z, scalar dissipation rate χ, and progress variable C by solving the unsteady flamelet equations. The assumption is that the unsteady flamelet represents the evolution of the reacting mixing layer in the non-premixed flame. We assess the accuracy of the model in predicting autoignition and flame development in compositionally stratified n-heptane/air mixtures using direct numerical simulations (DNS). The focus of this work is primarily on assessing the accuracy of the probability density functions (PDFs) employed for obtaining averaged source terms. The performance of commonly employed presumed functions, such as the Dirac delta distribution function, the β distribution function, and the statistically most likely distribution (SMLD) approach, in approximating the shapes of the PDFs of the reactive and the conserved scalars is evaluated. For unimodal distributions, it is observed that functions that use two-moment information, e.g., the β distribution function and the SMLD approach with two-moment closure, are able to reasonably approximate the actual PDF. As the distribution becomes multimodal, higher moment information is required. Differences are observed between the ignition trends obtained from DNS and those predicted by the look-up table, especially for smaller gradients where the flamelet assumption becomes less applicable. The formulation assumes that the shape of the χ(Z) profile can be modeled by an error function which remains unchanged in the presence of heat release. We show that this assumption is not accurate.
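The moment-matched presumed β-PDF referenced above has a standard closed form in the first two moments of the mixture fraction. A hedged sketch of building it and averaging a toy source term follows; the source-term shape and all values are invented for illustration.

```python
import numpy as np
from scipy.stats import beta as beta_dist

def beta_params(z_mean, z_var):
    """Moment-matched presumed beta-PDF for mixture fraction Z in (0,1):
    gamma = Z(1-Z)/var - 1,  a = Z*gamma,  b = (1-Z)*gamma.
    Requires 0 < var < Z(1-Z)."""
    gamma = z_mean * (1.0 - z_mean) / z_var - 1.0
    return z_mean * gamma, (1.0 - z_mean) * gamma

def averaged_source(source_fn, z_mean, z_var, n=2001):
    """Mean source term <S> = Integral S(Z) P(Z) dZ with the presumed beta P."""
    a, b = beta_params(z_mean, z_var)
    z = np.linspace(1e-6, 1.0 - 1e-6, n)
    return np.trapz(source_fn(z) * beta_dist.pdf(z, a, b), z)

# Toy source term peaked near Z = 0.3 (illustrative only):
source = lambda z: np.exp(-((z - 0.3) / 0.05) ** 2)
print("averaged source term:", averaged_source(source, z_mean=0.3, z_var=0.01))
```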
Epping, Ruben; Panne, Ulrich; Falkenhagen, Jana
2017-02-07
Statistical ethylene oxide (EO) and propylene oxide (PO) copolymers of different monomer compositions and different average molar masses, additionally containing two kinds of end groups (FTD), were investigated by ultra-high-pressure liquid chromatography under critical conditions (UP-LCCC) combined with electrospray ionization time-of-flight mass spectrometry (ESI-TOF-MS). Theoretical predictions of the existence of a critical adsorption point (CPA) for statistical copolymers with a given chemical and sequence distribution [1] could be studied and confirmed. A fundamentally new approach to determine these critical conditions in a copolymer, alongside the inevitable chemical composition distribution (CCD), with mass spectrometric detection, is described. The shift of the critical eluent composition with the monomer composition of the polymers was determined. Due to the broad molar mass distribution (MMD) and the presumed existence of different end group functionalities as well as monomer sequence distribution (MSD), gradient separation only by CCD was not possible. Therefore, isocratic separation conditions at the CPA of definite CCD fractions were developed. Although the various distributions present partly superimposed the separation process, the goal of separation by end group functionality was still achieved on the basis of the additional dimension of ESI-TOF-MS. The existence of HO-H besides the desired allylO-H end group functionalities was confirmed and their amount estimated. Furthermore, indications for a MSD were found by UPLC/MS/MS measurements. This approach offers for the first time the possibility to obtain a fingerprint of a broadly distributed statistical copolymer including MMD, FTD, CCD, and MSD.
Multi-Instance Metric Transfer Learning for Genome-Wide Protein Function Prediction.
Xu, Yonghui; Min, Huaqing; Wu, Qingyao; Song, Hengjie; Ye, Bicui
2017-02-06
Multi-Instance (MI) learning has been proven to be effective for the genome-wide protein function prediction problems where each training example is associated with multiple instances. Many studies in this literature attempted to find an appropriate Multi-Instance Learning (MIL) method for genome-wide protein function prediction under a usual assumption, the underlying distribution from testing data (target domain, i.e., TD) is the same as that from training data (source domain, i.e., SD). However, this assumption may be violated in real practice. To tackle this problem, in this paper, we propose a Multi-Instance Metric Transfer Learning (MIMTL) approach for genome-wide protein function prediction. In MIMTL, we first transfer the source domain distribution to the target domain distribution by utilizing the bag weights. Then, we construct a distance metric learning method with the reweighted bags. At last, we develop an alternative optimization scheme for MIMTL. Comprehensive experimental evidence on seven real-world organisms verifies the effectiveness and efficiency of the proposed MIMTL approach over several state-of-the-art methods.
Deformation dependence of proton decay rates and angular distributions in a time-dependent approach
NASA Astrophysics Data System (ADS)
Carjan, N.; Talou, P.; Strottman, D.
1998-12-01
A new, time-dependent approach to proton decay from axially symmetric deformed nuclei is presented. The two-dimensional time-dependent Schrödinger equation for the interaction between the emitted proton and the rest of the nucleus is solved numerically for well defined initial quasi-stationary proton states. Applied to the hypothetical proton emission from excited states in deformed nuclei of 208Pb, this approach shows that the problem cannot be reduced to one dimension. There is in general more than one direction of emission, with wide distributions around each, determined mainly by the quantum numbers of the initial wave function rather than by the potential landscape. The distribution of the "residual" angular momentum and its variation in time play a major role in the determination of the decay rate. In a couple of cases, no exponential decay was found during the calculated time evolution (2×10^-21 s), although more than half of the wave function escaped during that time.
Zaluzhnyy, I A; Kurta, R P; Menushenkov, A P; Ostrovskii, B I; Vartanyants, I A
2016-09-01
An x-ray scattering approach to determine the two-dimensional (2D) pair distribution function (PDF) in partially ordered 2D systems is proposed. We derive relations between the structure factor and PDF that enable quantitative studies of positional and bond-orientational (BO) order in real space. We apply this approach in the x-ray study of a liquid crystal (LC) film undergoing the smectic-A-hexatic-B phase transition, to analyze the interplay between the positional and BO order during the temperature evolution of the LC film. We analyze the positional correlation length in different directions in real space.
Model error estimation for distributed systems described by elliptic equations
NASA Technical Reports Server (NTRS)
Rodriguez, G.
1983-01-01
A function space approach is used to develop a theory for estimation of the errors inherent in an elliptic partial differential equation model for a distributed parameter system. By establishing knowledge of the inevitable deficiencies in the model, the error estimates provide a foundation for updating the model. The function space solution leads to a specification of a method for computation of the model error estimates and development of model error analysis techniques for comparison between actual and estimated errors. The paper summarizes the model error estimation approach as well as an application arising in the area of modeling for static shape determination of large flexible systems.
A Variational Approach to Simultaneous Image Segmentation and Bias Correction.
Zhang, Kaihua; Liu, Qingshan; Song, Huihui; Li, Xuelong
2015-08-01
This paper presents a novel variational approach for simultaneous estimation of bias field and segmentation of images with intensity inhomogeneity. We model intensity of inhomogeneous objects to be Gaussian distributed with different means and variances, and then introduce a sliding window to map the original image intensity onto another domain, where the intensity distribution of each object is still Gaussian but can be better separated. The means of the Gaussian distributions in the transformed domain can be adaptively estimated by multiplying the bias field with a piecewise constant signal within the sliding window. A maximum likelihood energy functional is then defined on each local region, which combines the bias field, the membership function of the object region, and the constant approximating the true signal from its corresponding object. The energy functional is then extended to the whole image domain by the Bayesian learning approach. An efficient iterative algorithm is proposed for energy minimization, via which the image segmentation and bias field correction are simultaneously achieved. Furthermore, the smoothness of the obtained optimal bias field is ensured by the normalized convolutions without extra cost. Experiments on real images demonstrated the superiority of the proposed algorithm to other state-of-the-art representative methods.
Refractive laser beam shaping by means of a functional differential equation based design approach.
Duerr, Fabian; Thienpont, Hugo
2014-04-07
Many laser applications require specific irradiance distributions to ensure optimal performance. Geometric optical design methods based on numerical calculation of two plano-aspheric lenses have been thoroughly studied in the past. In this work, we present an alternative new design approach based on functional differential equations that allows direct calculation of the rotational symmetric lens profiles described by two-point Taylor polynomials. The formalism is used to design a Gaussian to flat-top irradiance beam shaping system but also to generate a more complex dark-hollow Gaussian (donut-like) irradiance distribution with zero intensity in the on-axis region. The presented ray tracing results confirm the high accuracy of both calculated solutions and emphasize the potential of this design approach for refractive beam shaping applications.
NASA Astrophysics Data System (ADS)
Rodrigues, João Fabrício Mota; Coelho, Marco Túlio Pacheco; Ribeiro, Bruno R.
2018-04-01
Species distribution models (SDM) have been broadly used in ecology to address theoretical and practical problems. Currently, there are two main approaches to generating SDMs: (i) correlative models, which are based on species occurrences and environmental predictor layers, and (ii) process-based models, which are constructed from species' functional traits and physiological tolerances. The distributions estimated by each approach are based on different components of the species niche. Predictions of correlative models approach species' realized niches, while predictions of process-based models are more akin to the species' fundamental niche. Here, we integrated predictions of the fundamental and realized distributions of the freshwater turtle Trachemys dorbigni. The fundamental distribution was estimated using data on T. dorbigni's egg incubation temperature, and the realized distribution was estimated using species occurrence records. Both types of distributions were estimated using the same regression approaches (logistic regression and support vector machines), considering both macroclimatic and microclimatic temperatures. The realized distribution of T. dorbigni was generally nested in its fundamental distribution, reinforcing the theoretical assumption that a species' realized niche is a subset of its fundamental niche. Both modelling algorithms produced similar results, but microtemperature generated better results than macrotemperature for the incubation model. Finally, our results reinforce the conclusion that species' realized distributions are constrained by factors other than thermal tolerances.
Choi, Yun Ho; Yoo, Sung Jin
2017-03-28
A minimal-approximation-based distributed adaptive consensus tracking approach is presented for strict-feedback multiagent systems with unknown heterogeneous nonlinearities and control directions under a directed network. Existing approximation-based consensus results for uncertain nonlinear multiagent systems in lower-triangular form have used multiple function approximators in each local controller to approximate unmatched nonlinearities of each follower. Thus, as the follower's order increases, the number of the approximators used in its local controller increases. However, the proposed approach employs only one function approximator to construct the local controller of each follower regardless of the order of the follower. The recursive design methodology using a new error transformation is derived for the proposed minimal-approximation-based design. Furthermore, a bounding lemma on parameters of Nussbaum functions is presented to handle the unknown control direction problem in the minimal-approximation-based distributed consensus tracking framework and the stability of the overall closed-loop system is rigorously analyzed in the Lyapunov sense.
MaxEnt alternatives to pearson family distributions
NASA Astrophysics Data System (ADS)
Stokes, Barrie J.
2012-05-01
In a previous MaxEnt conference [11] a method of obtaining MaxEnt univariate distributions under a variety of constraints was presented. The Mathematica function Interpolation[], normally used with numerical data, can also process "semi-symbolic" data, and Lagrange Multiplier equations were solved for a set of symbolic ordinates describing the required MaxEnt probability density function. We apply a more developed version of this approach to finding MaxEnt distributions having prescribed β1 and β2 values, and compare the entropy of the MaxEnt distribution to that of the Pearson family distribution having the same β1 and β2. These MaxEnt distributions do have, in general, greater entropy than the related Pearson distribution. In accordance with Jaynes' Maximum Entropy Principle, these MaxEnt distributions are thus to be preferred to the corresponding Pearson distributions as priors in Bayes' Theorem.
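For comparison, MaxEnt densities under moment constraints can also be computed directly by minimizing the convex dual over the Lagrange multipliers on a grid, rather than via the semi-symbolic Interpolation[] device described above. A hedged numerical sketch with illustrative target moments follows; convergence details depend on the optimizer and starting point.

```python
import numpy as np
from scipy.optimize import minimize

x = np.linspace(-6, 6, 1201)
moments = np.array([0.0, 1.0, 0.5, 4.0])   # target E[x], E[x^2], E[x^3], E[x^4]
feats = np.vstack([x**k for k in range(1, 5)])

def dual(lam):
    """Convex dual of the MaxEnt problem: log Z(lam) + lam . mu, whose
    minimizer yields p(x) ~ exp(-sum_k lam_k x^k) matching the moments."""
    expo = -lam @ feats
    m = expo.max()                         # numerical stabilization
    z = np.trapz(np.exp(expo - m), x)
    return np.log(z) + m + lam @ moments

res = minimize(dual, x0=np.array([0.0, 0.5, 0.0, 0.01]))
expo = -res.x @ feats
p = np.exp(expo - expo.max())
p /= np.trapz(p, x)
print("achieved moments:", [np.trapz(x**k * p, x) for k in range(1, 5)])
```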
A novel generalized normal distribution for human longevity and other negatively skewed data.
Robertson, Henry T; Allison, David B
2012-01-01
Negatively skewed data arise occasionally in statistical practice; perhaps the most familiar example is the distribution of human longevity. Although other generalizations of the normal distribution exist, we demonstrate a new alternative that apparently fits human longevity data better. We propose an alternative approach of a normal distribution whose scale parameter is conditioned on attained age. This approach is consistent with previous findings that longevity conditioned on survival to the modal age behaves like a normal distribution. We derive such a distribution and demonstrate its accuracy in modeling human longevity data from life tables. The new distribution is characterized by 1. An intuitively straightforward genesis; 2. Closed forms for the pdf, cdf, mode, quantile, and hazard functions; and 3. Accessibility to non-statisticians, based on its close relationship to the normal distribution.
Spatio-temporal analysis of aftershock sequences in terms of Non Extensive Statistical Physics.
NASA Astrophysics Data System (ADS)
Chochlaki, Kalliopi; Vallianatos, Filippos
2017-04-01
Earth's seismicity is considered an extremely complicated process in which long-range interactions and fracturing exist (Vallianatos et al., 2016). For this reason, we analyze it with an innovative methodological approach, introduced by Tsallis (Tsallis, 1988; 2009), named Non Extensive Statistical Physics. This approach introduces a generalization of Boltzmann-Gibbs statistical mechanics and is based on the definition of the Tsallis entropy Sq which, when maximized, leads to the so-called q-exponential function, the probability distribution function that maximizes Sq. In the present work, we utilize the concept of Non Extensive Statistical Physics to analyze the spatiotemporal properties of several aftershock series. Marekova (2014) suggested that the probability densities of the inter-event distances between successive aftershocks follow a beta distribution. Using the same data set, we analyze the inter-event distance distribution of several aftershock sequences in different geographic regions by calculating the non-extensive parameters that determine the behavior of the system and by fitting the q-exponential function, which expresses the degree of non-extensivity of the investigated system. Furthermore, the inter-event time distribution of the aftershocks as well as the frequency-magnitude distribution have been analyzed. The results support the applicability of Non Extensive Statistical Physics ideas to aftershock sequences, where strong correlations exist along with memory effects. References: C. Tsallis, Possible generalization of Boltzmann-Gibbs statistics, J. Stat. Phys. 52 (1988) 479-487, doi:10.1007/BF01016429. C. Tsallis, Introduction to Nonextensive Statistical Mechanics: Approaching a Complex World, 2009, doi:10.1007/978-0-387-85359-8. E. Marekova, Analysis of the spatial distribution between successive earthquakes in aftershock series, Annals of Geophysics, 57, 5, doi:10.4401/ag-6556, 2014. F. Vallianatos, G. Papadakis, G. Michas, Generalized statistical mechanics approaches to earthquakes and tectonics, Proc. R. Soc. A, 472, 20160497, 2016.
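The q-exponential that is fitted above has a simple closed form. A minimal sketch defining it (with the usual cutoff convention for q < 1) and showing the q -> 1 exponential limit follows; all parameter values are illustrative.

```python
import numpy as np

def q_exponential(x, q):
    """Tsallis q-exponential: exp_q(x) = [1 + (1-q) x]_+^(1/(1-q)),
    with the ordinary exponential recovered as q -> 1. The cutoff
    [.]_+ matters for q < 1; for q > 1 and x <= 0 the base stays positive."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(x)
    base = 1.0 + (1.0 - q) * x
    safe = np.where(base > 0.0, base, 1.0)       # placeholder where cut off
    out = safe ** (1.0 / (1.0 - q))
    return np.where(base > 0.0, out, 0.0)

# Survival-type model for inter-event distances r: P(>r) = exp_q(-r / r0).
r = np.linspace(0.0, 10.0, 6)
for q in (1.0, 1.3, 1.7):
    print(f"q = {q}:", np.round(q_exponential(-r / 2.0, q), 4))
# q > 1 gives the power-law tail associated with long-range correlations;
# q = 1 falls back to the exponential (uncorrelated, Boltzmann-Gibbs) case.
```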
A two-stage Monte Carlo approach to the expression of uncertainty with finite sample sizes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crowder, Stephen Vernon; Moyer, Robert D.
2005-05-01
Proposed supplement I to the GUM outlines a 'propagation of distributions' approach to deriving the distribution of a measurand for any non-linear function and for any set of random inputs. The supplement's proposed Monte Carlo approach assumes that the distributions of the random inputs are known exactly. This implies that the sample sizes are effectively infinite. In this case, the mean of the measurand can be determined precisely using a large number of Monte Carlo simulations. In practice, however, the distributions of the inputs will rarely be known exactly, but must be estimated using possibly small samples. If these approximated distributions are treated as exact, the uncertainty in estimating the mean is not properly taken into account. In this paper, we propose a two-stage Monte Carlo procedure that explicitly takes into account the finite sample sizes used to estimate parameters of the input distributions. We will illustrate the approach with a case study involving the efficiency of a thermistor mount power sensor. The performance of the proposed approach will be compared to the standard GUM approach for finite samples using simple non-linear measurement equations. We will investigate performance in terms of coverage probabilities of derived confidence intervals.
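A hedged sketch of the two-stage idea under simple assumptions: an outer loop re-estimates the input-distribution parameters from finite samples (a parametric bootstrap stands in for the estimator's sampling law; the paper's exact scheme may differ), and an inner loop propagates the re-estimated distributions through an invented non-linear measurement equation.

```python
import numpy as np

rng = np.random.default_rng(6)

def measurand(a, b):
    """Illustrative non-linear measurement equation y = a / (1 + b)."""
    return a / (1.0 + b)

# Finite calibration samples for the two inputs; the whole point is that
# their distribution parameters are themselves uncertain.
sample_a = rng.normal(10.0, 0.5, size=12)
sample_b = rng.normal(0.10, 0.02, size=8)

outer, inner = 2_000, 500
y_draws = []
for _ in range(outer):
    # Stage 1: re-estimate the input parameters from a resampled finite
    # sample (parametric bootstrap as a stand-in for the sampling law).
    a_hat = rng.normal(sample_a.mean(), sample_a.std(ddof=1), len(sample_a))
    b_hat = rng.normal(sample_b.mean(), sample_b.std(ddof=1), len(sample_b))
    mu_a, sd_a = a_hat.mean(), a_hat.std(ddof=1)
    mu_b, sd_b = b_hat.mean(), b_hat.std(ddof=1)
    # Stage 2: propagate the re-estimated input distributions.
    y = measurand(rng.normal(mu_a, sd_a, inner), rng.normal(mu_b, sd_b, inner))
    y_draws.append(y)

y_all = np.concatenate(y_draws)
print("95% coverage interval:", np.percentile(y_all, [2.5, 97.5]))
```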
A Sparse Bayesian Approach for Forward-Looking Superresolution Radar Imaging
Zhang, Yin; Zhang, Yongchao; Huang, Yulin; Yang, Jianyu
2017-01-01
This paper presents a sparse superresolution approach for high cross-range resolution imaging of forward-looking scanning radar based on the Bayesian criterion. First, a novel forward-looking signal model is established as the product of the measurement matrix and the cross-range target distribution, which is more accurate than the conventional convolution model. Then, based on the Bayesian criterion, the widely used sparse regularization is taken as the penalty term to recover the target distribution. The derivation of the cost function is described, and an iterative expression for minimizing this function is presented. In addition, this paper discusses how to estimate the single parameter of the Gaussian noise. With the advantage of a more accurate model, the proposed sparse Bayesian approach enjoys a lower model error. Meanwhile, compared with conventional superresolution methods, the proposed approach shows higher cross-range resolution and smaller location error. Superresolution results for a simulated point target, scene data, and real measured data are presented to demonstrate the superior performance of the proposed approach. PMID:28604583
An Empirical Bayes Approach to Mantel-Haenszel DIF Analysis.
ERIC Educational Resources Information Center
Zwick, Rebecca; Thayer, Dorothy T.; Lewis, Charles
1999-01-01
Developed an empirical Bayes enhancement to Mantel-Haenszel (MH) analysis of differential item functioning (DIF) in which it is assumed that the MH statistics are normally distributed and that the prior distribution of underlying DIF parameters is also normal. (Author/SLD)
NASA Astrophysics Data System (ADS)
Khajehei, Sepideh; Moradkhani, Hamid
2015-04-01
Reliable and accurate hydrologic ensemble forecasts are subject to various sources of uncertainty, including meteorological forcing, initial conditions, model structure, and model parameters. Producing reliable and skillful precipitation ensemble forecasts is one approach to reducing the total uncertainty in hydrological applications. Currently, Numerical Weather Prediction (NWP) models produce ensemble forecasts for various temporal ranges, but raw products from NWP models are known to be biased in both mean and spread. Consequently, there is a need for methods that can generate reliable ensemble forecasts for hydrological applications. One common technique is to apply statistical procedures that generate an ensemble forecast from NWP-generated single-value forecasts, based on the bivariate probability distribution between the observation and the single-value precipitation forecast. However, one of the assumptions of the current method is fitting a Gaussian distribution to the marginal distributions of the observed and modeled climate variable. Here, we describe and evaluate a Bayesian approach based on Copula functions to develop an ensemble precipitation forecast from the conditional distribution of single-value precipitation forecasts. Copula functions are the multivariate joint distributions of univariate marginal distributions, presented here as an alternative procedure for capturing the uncertainties related to meteorological forcing; they are capable of modeling the joint distribution of two variables with any level of correlation and dependency. This study is conducted over a sub-basin in the Columbia River Basin in the USA, using monthly precipitation forecasts from the Climate Forecast System (CFS) with 0.5x0.5 deg. spatial resolution to reproduce the observations. The verification is conducted on a different period, and the procedure is compared with the Ensemble Pre-Processor approach currently used by National Weather Service River Forecast Centers in the USA.
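A rough sketch of the Gaussian-copula version of this idea, with synthetic forecast/observation pairs standing in for the CFS and observed records: margins are handled empirically, dependence is captured by a single correlation on normal scores, and an ensemble is drawn from the conditional distribution given a new single-value forecast. All data and values below are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical paired records: single-value forecast f and observation o
f_hist = rng.gamma(2.0, 2.0, size=300)
o_hist = 0.7 * f_hist + rng.gamma(1.5, 1.0, size=300)   # correlated toy data

def to_normal_scores(x):
    """Map a sample to standard-normal scores via its empirical CDF."""
    ranks = stats.rankdata(x) / (len(x) + 1.0)
    return stats.norm.ppf(ranks)

zf, zo = to_normal_scores(f_hist), to_normal_scores(o_hist)
rho = np.corrcoef(zf, zo)[0, 1]     # Gaussian-copula dependence parameter

# Conditional ensemble: given a new forecast value, sample the conditional
# normal scores and map back through the observed empirical quantiles
f_new = 6.0                          # hypothetical new single-value forecast
p = np.clip(stats.percentileofscore(f_hist, f_new) / 100.0, 0.01, 0.99)
z_new = stats.norm.ppf(p)
z_cond = rng.normal(rho * z_new, np.sqrt(1.0 - rho**2), size=50)
ensemble = np.quantile(o_hist, stats.norm.cdf(z_cond))
print(ensemble.mean(), ensemble.std())
```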
Variance computations for functional of absolute risk estimates.
Pfeiffer, R M; Petracci, E
2011-07-01
We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates. PMID:21643476
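A toy illustration of the influence-function recipe, using a smooth functional of a sample mean in place of the paper's absolute-risk model; the delta-method influence values give a variance estimate that can be checked against the bootstrap.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(2.0, size=500)   # toy risk-factor data

# Functional of interest: g(mu) = 1 - exp(-mu), a smooth map of the mean
# (a stand-in for an absolute-risk-type functional, not the paper's model)
mu_hat = x.mean()
g_hat = 1.0 - np.exp(-mu_hat)

# Influence function of the mean is (x_i - mu); by the delta method the
# influence function of g is g'(mu) * (x_i - mu)
infl = np.exp(-mu_hat) * (x - mu_hat)
var_if = np.mean(infl**2) / len(x)

# Bootstrap comparison
boot = np.array([1.0 - np.exp(-rng.choice(x, len(x)).mean())
                 for _ in range(2000)])
print(var_if, boot.var(ddof=1))      # the two estimates should agree closely
```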
Systems of frequency distributions for water and environmental engineering
NASA Astrophysics Data System (ADS)
Singh, Vijay P.
2018-09-01
A wide spectrum of frequency distributions is used in hydrologic, hydraulic, environmental, and water resources engineering. These distributions may have different origins, are based on different hypotheses, and belong to different generating systems. A review of the literature suggests that the different systems of frequency distributions employed in science and engineering in general, and in environmental and water engineering in particular, have been derived using different approaches, which include (1) differential equations, (2) distribution elasticity, (3) genetic theory, (4) generating functions, (5) transformations, (6) Bessel functions, (7) expansions, and (8) entropy maximization. This paper revisits these systems of distributions and discusses the hypotheses used to derive them. It also proposes, based on empirical evidence, another general system of distributions and derives from it a number of distributions used in environmental and water engineering.
Tang, Jian; Jiang, Xiaoliang
2017-01-01
Image segmentation has always been a considerable challenge in image analysis and understanding due to intensity inhomogeneity, commonly known as bias field. In this paper, we present a novel region-based approach, based on local entropy, for segmenting images and estimating the bias field simultaneously. First, a local Gaussian distribution fitting (LGDF) energy function is defined as a weighted energy integral, where the weight is the local entropy derived from the grey-level distribution of the local image. The means in this objective function carry a multiplicative factor that estimates the bias field in the transformed domain. The bias field prior is then fully exploited, so the model can estimate the bias field more accurately. Finally, by minimizing this energy function with a level set regularization term, image segmentation and bias field estimation are achieved. Experiments on images of various modalities demonstrated the superior performance of the proposed method compared with other state-of-the-art approaches.
An Efficient Numerical Approach for Nonlinear Fokker-Planck equations
NASA Astrophysics Data System (ADS)
Otten, Dustin; Vedula, Prakash
2009-03-01
Fokker-Planck equations that are nonlinear with respect to their probability densities, which occur in many nonequilibrium systems relevant to mean-field interaction models, plasmas, and classical fermions and bosons, can be challenging to solve numerically. To address some of the underlying challenges in obtaining numerical solutions, we propose a quadrature-based moment method for efficient and accurate determination of transient (and stationary) solutions of nonlinear Fokker-Planck equations. In this approach the distribution function is represented as a collection of Dirac delta functions with corresponding quadrature weights and locations that are in turn determined from constraints based on the evolution of generalized moments. Properties of the distribution function can be obtained by solution of transport equations for the quadrature weights and locations. We will apply this computational approach to a wide range of problems, including the Desai-Zwanzig model (for nonlinear muscular contraction) and multivariate nonlinear Fokker-Planck equations describing classical fermions and bosons, and will also demonstrate good agreement with results obtained from Monte Carlo and other standard numerical methods.
Kim, Sunghee; Kim, Ki Chul; Lee, Seung Woo; Jang, Seung Soon
2016-07-27
Understanding the thermodynamic stability and redox properties of oxygen functional groups on graphene is critical for systematically designing stable graphene-based positive electrode materials with high potential for lithium-ion battery applications. In this work, we study the thermodynamic and redox properties of graphene functionalized with carbonyl and hydroxyl groups, and the evolution of these properties with the number, type, and distribution of functional groups, by employing the density functional theory method. It is found that the redox potential of the functionalized graphene is sensitive to the type, number, and distribution of oxygen functional groups. First, the carbonyl group induces a higher redox potential than the hydroxyl group. Second, more carbonyl groups result in a higher redox potential. Lastly, a locally concentrated distribution of carbonyl groups yields a higher redox potential than a uniformly dispersed distribution, whereas the distribution of the hydroxyl group does not affect the redox potential significantly. This thermodynamic investigation demonstrates that incorporating carbonyl groups at the edge of graphene is a promising strategy for designing thermodynamically stable positive electrode materials with high redox potentials.
A Hermite-based lattice Boltzmann model with artificial viscosity for compressible viscous flows
NASA Astrophysics Data System (ADS)
Qiu, Ruofan; Chen, Rongqian; Zhu, Chenxiang; You, Yancheng
2018-05-01
A lattice Boltzmann model on Hermite basis for compressible viscous flows is presented in this paper. The model is developed in the framework of double-distribution-function approach, which has adjustable specific-heat ratio and Prandtl number. It contains a density distribution function for the flow field and a total energy distribution function for the temperature field. The equilibrium distribution function is determined by Hermite expansion, and the D3Q27 and D3Q39 three-dimensional (3D) discrete velocity models are used, in which the discrete velocity model can be replaced easily. Moreover, an artificial viscosity is introduced to enhance the model for capturing shock waves. The model is tested through several cases of compressible flows, including 3D supersonic viscous flows with boundary layer. The effect of artificial viscosity is estimated. Besides, D3Q27 and D3Q39 models are further compared in the present platform.
Optimal Sensor Placement in Active Multistatic Sonar Networks
2014-06-01
As b → 0, the Fermi function approaches the cookie cutter model. (The Fermi function was discovered in 1926 by Enrico Fermi and Paul Dirac when researching electron statistics.)
Distributions of Bacterial Generalists among the Guts of Birds, Fish, and Mammals - abstract
Complex distributions of bacterial taxa within diverse animal microbiomes have inspired ecological and biogeographical approaches to revealing the functions of taxa that may be most important for host health. Of particular interest are bacteria that find many diverse habitats sui...
NASA Astrophysics Data System (ADS)
Raymond, Neil; Iouchtchenko, Dmitri; Roy, Pierre-Nicholas; Nooijen, Marcel
2018-05-01
We introduce a new path integral Monte Carlo method for investigating nonadiabatic systems in thermal equilibrium and demonstrate an approach to reducing stochastic error. We derive a general path integral expression for the partition function in a product basis of continuous nuclear and discrete electronic degrees of freedom without the use of any mapping schemes. We separate our Hamiltonian into a harmonic portion and a coupling portion; the partition function can then be calculated as the product of a Monte Carlo estimator (of the coupling contribution to the partition function) and a normalization factor (that is evaluated analytically). A Gaussian mixture model is used to evaluate the Monte Carlo estimator in a computationally efficient manner. Using two model systems, we demonstrate our approach to reduce the stochastic error associated with the Monte Carlo estimator. We show that the selection of the harmonic oscillators comprising the sampling distribution directly affects the efficiency of the method. Our results demonstrate that our path integral Monte Carlo method's deviation from exact Trotter calculations is dominated by the choice of the sampling distribution. By improving the sampling distribution, we can drastically reduce the stochastic error leading to lower computational cost.
Hazard function analysis for flood planning under nonstationarity
NASA Astrophysics Data System (ADS)
Read, Laura K.; Vogel, Richard M.
2016-05-01
The field of hazard function analysis (HFA) involves a probabilistic assessment of the "time to failure" or "return period," T, of an event of interest. HFA is used in epidemiology, manufacturing, medicine, actuarial statistics, reliability engineering, economics, and elsewhere. While for a stationary process the probability distribution function (pdf) of the return period always follows an exponential distribution, the same is not true for nonstationary processes. When the process of interest, X, exhibits nonstationary behavior, HFA can provide a complementary approach to risk analysis with analytical tools particularly useful for hydrological applications. After a general introduction to HFA, we describe a new mathematical linkage between the magnitude of the flood event, X, and its return period, T, for nonstationary processes. We derive the probabilistic properties of T for a nonstationary one-parameter exponential model of X, and then use both Monte Carlo simulation and HFA to generalize the behavior of T when X arises from a nonstationary two-parameter lognormal distribution. For this case, our findings suggest that a two-parameter Weibull distribution provides a reasonable approximation for the pdf of T. We document how HFA can provide an alternative approach to characterize the probabilistic properties of both nonstationary flood series and the resulting pdf of T.
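The stationary baseline is easy to verify by simulation: with a constant annual exceedance probability, the waiting time between exceedances is geometric (the discrete analogue of the exponential pdf), with mean equal to the return period. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
p = 0.01                       # stationary annual exceedance probability (T = 100 yr)

# Waiting time to the first exceedance under iid (stationary) annual trials
n_rep = 200_000
waits = rng.geometric(p, size=n_rep)

print(waits.mean())            # ~ 1/p = 100, the return period
# The geometric pmf is the discrete analogue of the exponential pdf;
# under nonstationarity p varies year to year and this no longer holds.
```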
Bernstein-Greene-Kruskal theory of electron holes in superthermal space plasma
NASA Astrophysics Data System (ADS)
Aravindakshan, Harikrishnan; Kakad, Amar; Kakad, Bharati
2018-05-01
Several spacecraft missions have observed electron holes (EHs) in Earth's and other planetary magnetospheres. These EHs are modeled with the stationary solutions of the Vlasov-Poisson equations, obtained by adopting the Bernstein-Greene-Kruskal (BGK) approach. A survey of the literature shows that BGK EHs are modelled using either a thermal distribution function or a statistical distribution derived from particular spacecraft observations. However, Maxwellian distributions are quite rare in space plasmas; instead, most of these plasmas are superthermal in nature and are generally described by the kappa distribution. We have developed a one-dimensional BGK model of EHs for space plasma that follows a superthermal kappa distribution. The analytical solution of the trapped electron distribution function for such plasmas is derived. The trapped particle distribution function in a plasma following the kappa distribution is found to be steeper and denser than that for a Maxwellian distribution. The width-amplitude relation of the perturbation for superthermal plasma is derived, and the allowed regions of stable BGK solutions are obtained. We find that stable BGK solutions are better supported by superthermal plasmas than by thermal plasmas for small-amplitude perturbations.
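A small numerical sketch of the contrast between the two velocity distributions (one-dimensional shapes, with normalization done numerically because prefactor conventions for the kappa distribution vary):

```python
import numpy as np

v = np.linspace(-10.0, 10.0, 2001)
dv = v[1] - v[0]
theta = 1.0   # thermal speed scale

def normalized(f):
    """Normalize a tabulated pdf numerically (rectangle rule)."""
    return f / (f.sum() * dv)

# Maxwellian versus a 1D kappa (superthermal) shape; kappa -> infinity
# recovers the Maxwellian. The exponent convention here is illustrative.
maxwellian = normalized(np.exp(-(v / theta) ** 2))
kappa = 3.0
superthermal = normalized((1.0 + v**2 / (kappa * theta**2)) ** (-(kappa + 1.0)))

# Enhanced high-velocity tail of the kappa distribution
print(superthermal[-1] / maxwellian[-1])   # >> 1 at v = 10 theta
```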
Proving refinement transformations using extended denotational semantics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Winter, V.L.; Boyle, J.M.
1996-04-01
TAMPR is a fully automatic transformation system based on syntactic rewrites. Our approach in a correctness proof is to map the transformation into an axiomatized mathematical domain where formal (and automated) reasoning can be performed. This mapping is accomplished via an extended denotational semantic paradigm. In this approach, the abstract notion of a program state is distributed between an environment function and a store function. Such a distribution introduces properties that go beyond the abstract state being modeled, and the reasoning framework needs to be aware of these properties in order to successfully complete a correctness proof. This paper discusses some of our experiences in proving the correctness of TAMPR transformations.
NASA Astrophysics Data System (ADS)
Milani, Armin Ebrahimi; Haghifam, Mahmood Reza
2008-10-01
Reconfiguration is an operational process used for optimization with specific objectives by changing the status of switches in a distribution network. In this paper, each objective is normalized, with inspiration from fuzzy sets, to make the optimization more flexible, and formulated as a single multi-objective function. A genetic algorithm is used for solving the suggested model, since it poses no difficulty with non-linear objective functions and constraints. The effectiveness of the proposed method is demonstrated through examples.
Multivariate η-μ fading distribution with arbitrary correlation model
NASA Astrophysics Data System (ADS)
Ghareeb, Ibrahim; Atiani, Amani
2018-03-01
An extensive analysis of the multivariate η-μ distribution with arbitrary correlation is presented, where novel analytical expressions for the multivariate probability density function, cumulative distribution function, and moment generating function (MGF) of arbitrarily correlated and not necessarily identically distributed η-μ power random variables are derived. This paper also provides an exact-form expression for the MGF of the instantaneous signal-to-noise ratio at the combiner output in a diversity reception system with maximal-ratio combining and post-detection equal-gain combining operating in slow, frequency-nonselective, arbitrarily correlated, not necessarily identically distributed η-μ fading channels. The average bit error probability of differentially detected quadrature phase shift keying signals with post-detection diversity reception over arbitrarily correlated η-μ fading channels with not necessarily identical fading parameters is determined using the MGF-based approach. The effects of fading correlation between diversity branches, fading severity parameters, and diversity level are studied.
System approach to distributed sensor management
NASA Astrophysics Data System (ADS)
Mayott, Gregory; Miller, Gordon; Harrell, John; Hepp, Jared; Self, Mid
2010-04-01
Since 2003, the US Army's RDECOM CERDEC Night Vision Electronic Sensor Directorate (NVESD) has been developing a distributed Sensor Management System (SMS) that utilizes a framework which demonstrates application layer, net-centric sensor management. The core principles of the design support distributed and dynamic discovery of sensing devices and processes through a multi-layered implementation. This results in a sensor management layer that acts as a System with defined interfaces for which the characteristics, parameters, and behaviors can be described. Within the framework, the definition of a protocol is required to establish the rules for how distributed sensors should operate. The protocol defines the behaviors, capabilities, and message structures needed to operate within the functional design boundaries. The protocol definition addresses the requirements for a device (sensors or processes) to dynamically join or leave a sensor network, dynamically describe device control and data capabilities, and allow dynamic addressing of publish and subscribe functionality. The message structure is a multi-tiered definition that identifies standard, extended, and payload representations that are specifically designed to accommodate the need for standard representations of common functions, while supporting the need for feature-based functions that are typically vendor specific. The dynamic qualities of the protocol enable a User GUI application the flexibility of mapping widget-level controls to each device based on reported capabilities in real-time. The SMS approach is designed to accommodate scalability and flexibility within a defined architecture. The distributed sensor management framework and its application to a tactical sensor network will be described in this paper.
Vanreusel, Wouter; Maes, Dirk; Van Dyck, Hans
2007-02-01
Numerous models for predicting species distribution have been developed for conservation purposes. Most of them make use of environmental data (e.g., climate, topography, land use) at a coarse grid resolution (often kilometres). Such approaches are useful for conservation policy issues including reserve-network selection. The efficiency of predictive models for species distribution is usually tested on the area for which they were developed. Although highly interesting from the point of view of conservation efficiency, transferability of such models to independent areas is still under debate. We tested the transferability of habitat-based predictive distribution models for two regionally threatened butterflies, the green hairstreak (Callophrys rubi) and the grayling (Hipparchia semele), within and among three nature reserves in northeastern Belgium. We built predictive models based on spatially detailed maps of area-wide distribution and density of ecological resources. We used resources directly related to ecological functions (host plants, nectar sources, shelter, microclimate) rather than environmental surrogate variables. We obtained models that performed well with few resource variables. All models were transferable--although to different degrees--among the independent areas within the same broad geographical region. We argue that habitat models based on essential functional resources could transfer better in space than models that use indirect environmental variables. Because functional variables can easily be interpreted and even be directly affected by terrain managers, these models can be useful tools to guide species-adapted reserve management.
New approach in bivariate drought duration and severity analysis
NASA Astrophysics Data System (ADS)
Montaseri, Majid; Amirataee, Babak; Rezaie, Hossein
2018-04-01
The copula functions have been widely applied as an advanced technique to create the joint probability distribution of drought duration and severity. The approach to data collection, as well as the amount of data and the dispersion of the data series, can have a significant impact on creating such a joint probability distribution using copulas. Traditionally, such analyses have taken an Unconnected Drought Runs (UDR) approach to droughts; in other words, droughts with different durations are treated as independent of each other. Emphasis on this data collection method causes the omission of the actual potential of short-term extreme droughts located within a long-term UDR. Meanwhile, the traditional method often faces significant gaps in drought data series. However, a long-term UDR can be viewed as a combination of short-term Connected Drought Runs (CDR). This study therefore aims to systematically evaluate the UDR and CDR procedures in investigating the joint probability of drought duration and severity. For this purpose, rainfall data (1971-2013) from 24 rain gauges in the Lake Urmia basin, Iran, were applied, and seven common univariate marginal distributions and seven types of bivariate copulas were examined. Compared to the traditional approach, the results demonstrated a significant comparative advantage of the new approach, leading to identification of the correct copula function, more accurate estimation of the copula parameter, more realistic estimation of the joint/conditional probabilities of drought duration and severity, and a significant reduction in modeling uncertainty.
NASA Astrophysics Data System (ADS)
Vadukumpully, Sajini; Gupta, Jhinuk; Zhang, Yongping; Xu, Guo Qin; Valiyaveettil, Suresh
2011-01-01
A facile and simple approach for the covalent functionalization of surfactant-wrapped graphene sheets is described. The approach involves functionalization of dispersible graphene sheets with various alkylazides; 11-azidoundecanoic acid proved the best azide for enhanced dispersibility. The functionalization was confirmed by infrared spectroscopy and scanning tunneling microscopy. The free carboxylic acid groups can bind to gold nanoparticles, which were introduced as markers for the reactive sites. The interaction between gold nanoparticles and the graphene sheets was followed by UV-vis spectroscopy. The gold nanoparticle-graphene composite was characterized by transmission electron microscopy and atomic force microscopy, demonstrating the uniform distribution of gold nanoparticles over the surface. Our results open the possibility of controlling the functionalization of graphene in the construction of composite nanomaterials. Electronic Supplementary Information (ESI) available: synthesis and characterization details of dodecylazide, hexylazide, 11-azidoundecanol (AUO), micrographs (SEM and TEM images) of the various azide-functionalized samples, and statistical analysis of the graphene thickness. See 10.1039/c0nr00547a.
Continuum-kinetic approach to sheath simulations
NASA Astrophysics Data System (ADS)
Cagas, Petr; Hakim, Ammar; Srinivasan, Bhuvana
2016-10-01
Simulations of sheaths are performed using a novel continuum-kinetic model with collisions including ionization/recombination. A discontinuous Galerkin method is used to directly solve the Boltzmann-Poisson system to obtain a particle distribution function. Direct discretization of the distribution function has advantages of being noise-free compared to particle-in-cell methods. The distribution function, which is available at each node of the configuration space, can be readily used to calculate the collision integrals in order to get ionization and recombination operators. Analytical models are used to obtain the cross-sections as a function of energy. Results will be presented incorporating surface physics with a classical sheath in Hall thruster-relevant geometry. This work was sponsored by the Air Force Office of Scientific Research under Grant Number FA9550-15-1-0193.
A Collaborative Neurodynamic Approach to Multiple-Objective Distributed Optimization.
Yang, Shaofu; Liu, Qingshan; Wang, Jun
2018-04-01
This paper is concerned with multiple-objective distributed optimization. Based on objective weighting and decision space decomposition, a collaborative neurodynamic approach to multiobjective distributed optimization is presented. In the approach, a system of collaborative neural networks is developed to search for Pareto optimal solutions, where each neural network is associated with one objective function and given constraints. Sufficient conditions are derived for ascertaining the convergence to a Pareto optimal solution of the collaborative neurodynamic system. In addition, it is proved that each connected subsystem can generate a Pareto optimal solution when the communication topology is disconnected. Then, a switching-topology-based method is proposed to compute multiple Pareto optimal solutions for a discretized approximation of the Pareto front. Finally, simulation results are discussed to substantiate the performance of the collaborative neurodynamic approach, and a portfolio selection application is also given.
NASA Astrophysics Data System (ADS)
Kim, M.; Pangle, L. A.; Cardoso, C.; Lora, M.; Meira, A.; Volkmann, T. H. M.; Wang, Y.; Harman, C. J.; Troch, P. A. A.
2015-12-01
Transit time distributions (TTDs) are an efficient way of characterizing the complex transport dynamics of a hydrologic system. Time-invariant TTDs have been studied extensively, but TTDs are time-varying in unsteady hydrologic systems due to both external variability (e.g., time-variability in fluxes) and internal variability (e.g., time-varying flow pathways). The use of "flow-weighted time" has been suggested to account for the effect of external variability on TTDs, but it neglects the role of internal variability. Recently, to account for both types of variability, StorAge Selection (SAS) function approaches were developed. One of these approaches enables the transport characteristics of a system - how water of different ages in storage is sampled by the outflow - to be parameterized by a time-variable probability distribution called the rank SAS (rSAS) function, and uses it directly to determine the time-variable TTDs resulting from a given time series of fluxes into and out of a system. Unlike TTDs, the form of the rSAS function varies only due to changes in flow pathways and is not affected by the timing of fluxes alone. However, the relation between physical mechanisms and the time-varying rSAS functions is not well understood. In this study, the relative effects of internal and external variability on the TTDs are examined using observations from a homogeneously packed 1 m3 sloping soil lysimeter. The observations suggest the importance of internal variability for TTDs and reinforce the need to account for this variability using time-variable rSAS functions. Furthermore, the relative usefulness of two other formulations of SAS functions and the mortality rate (which plays a similar role to SAS functions in the McKendrick-von Foerster model of age-structured population dynamics) is also discussed. Finally, numerical modeling is used to explore the role of internal and external variability for hydrologic systems with diverse geomorphic and climate characteristics. This work gives insight into which approach (or SAS function) is preferable under different conditions.
Neti, Prasad V.S.V.; Howell, Roger W.
2008-01-01
Recently, the distribution of radioactivity among a population of cells labeled with 210Po was shown to be well described by a log normal distribution function (J Nucl Med 47, 6 (2006) 1049-1058) with the aid of an autoradiographic approach. To ascertain the influence of Poisson statistics on the interpretation of the autoradiographic data, the present work reports a detailed statistical analysis of these data. Methods: The measured distributions of alpha particle tracks per cell were subjected to statistical tests with Poisson (P), log normal (LN), and Poisson-log normal (P-LN) models. Results: The LN distribution function best describes the distribution of radioactivity among cell populations exposed to 0.52 and 3.8 kBq/mL 210Po-citrate. When cells were exposed to 67 kBq/mL, the P-LN distribution function gave a better fit; however, the underlying activity distribution remained log normal. Conclusions: The present analysis generally provides further support for the use of LN distributions to describe the cellular uptake of radioactivity. Care should be exercised when analyzing autoradiographic data on activity distributions to ensure that Poisson processes do not distort the underlying LN distribution. PMID:16741316
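The distinction the authors draw can be reproduced with synthetic data: counts that are Poisson conditional on a lognormal activity are strongly overdispersed relative to a pure Poisson. A toy sketch (all parameters illustrative only):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Toy model: cellular activity is lognormal, and the observed alpha-track
# count per cell is Poisson given that activity (a Poisson-lognormal mix)
activity = rng.lognormal(mean=1.0, sigma=0.8, size=2000)
tracks = rng.poisson(activity)

# A pure Poisson with the same mean badly underestimates the spread
lam = tracks.mean()
print("observed var/mean:", tracks.var() / lam)   # >> 1 (overdispersed)
print("Poisson var/mean :", 1.0)

# Lognormal fit to the nonzero counts as a rough check on the
# underlying activity distribution
shape, loc, scale = stats.lognorm.fit(tracks[tracks > 0], floc=0.0)
print("fitted sigma:", shape)
```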
NASA Astrophysics Data System (ADS)
Al-Hawat, Sh; Naddaf, M.
2005-04-01
The electron energy distribution function (EEDF) was determined from the second derivative of the I-V Langmuir probe characteristics and, thereafter, theoretically calculated by solving the plasma kinetic equation, using the black wall (BW) approximation, in the positive column of a neon glow discharge. The pressure was varied from 0.5 to 4 Torr and the current from 10 to 30 mA. The measured electron temperature, density, and electric field strength were used as input data for solving the kinetic equation. Comparisons were made between the EEDFs obtained from experiment, the BW approach, the Maxwellian distribution, and the Rutcher solution of the kinetic equation in the elastic energy range. The BW approach is found to perform best under the discharge conditions of current density jd = 4.45 mA cm-2 and normalized electric field strength E/p = 1.88 V cm-1 Torr-1.
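The probe-based EEDF estimate rests on the Druyvesteyn-type relation, EEDF ∝ √V · d²Iₑ/dV². A minimal sketch with an idealized Maxwellian trace (physical prefactors involving probe area and electron mass omitted; the temperature value is hypothetical):

```python
import numpy as np

# Synthetic electron-retardation branch of an I-V probe trace (toy shape);
# in practice I_e(V) comes from the measured characteristic
V = np.linspace(0.0, 20.0, 401)        # retarding potential [V]
Te = 4.0                               # assumed electron temperature [eV]
I_e = np.exp(-V / Te)                  # idealized Maxwellian response

# Druyvesteyn-type estimate: EEDF proportional to sqrt(V) * d2I/dV2
d2I = np.gradient(np.gradient(I_e, V), V)
eedf = np.sqrt(V) * d2I                # unnormalized EEDF estimate

# For a Maxwellian trace, ln(d2I) is linear in V with slope -1/Te
mask = V > 0.5
slope = np.polyfit(V[mask], np.log(d2I[mask]), 1)[0]
print("recovered Te ~", -1.0 / slope, "eV")
```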
Probability distribution functions for intermittent scrape-off layer plasma fluctuations
NASA Astrophysics Data System (ADS)
Theodorsen, A.; Garcia, O. E.
2018-03-01
A stochastic model for intermittent fluctuations in the scrape-off layer of magnetically confined plasmas has been constructed based on a superposition of uncorrelated pulses arriving according to a Poisson process. In the most common applications of the model, the pulse amplitudes are assumed to be exponentially distributed, supported by conditional averaging of large-amplitude fluctuations in experimental measurement data. This basic assumption has two potential limitations. First, statistical analysis of measurement data using conditional averaging only reveals the tail of the amplitude distribution to be exponentially distributed. Second, exponentially distributed amplitudes lead to a positive definite signal, which cannot capture fluctuations in, for example, electric potential and radial velocity. Assuming pulse amplitudes which are not positive definite often makes finding a closed form for the probability density function (PDF) difficult, even if the characteristic function remains relatively simple. Thus, estimating model parameters requires an approach based on the characteristic function, not the PDF. In this contribution, the effect of changing the amplitude distribution on the moments, PDF, and characteristic function of the process is investigated, and a parameter estimation method using the empirical characteristic function is presented and tested on synthetically generated data. This proves valuable for describing intermittent fluctuations of all plasma parameters in the boundary region of magnetized plasmas.
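A sketch of the model's ingredients with synthetic data: a filtered Poisson signal built from exponential pulses with normally distributed (hence not positive definite) amplitudes, and the empirical characteristic function on which parameter estimation can be based. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

# Filtered Poisson (shot noise) signal: exponential pulses with Poisson
# arrivals; the normal amplitudes make the signal non-positive-definite,
# so its PDF has no simple closed form
T, dt = 5000.0, 0.1
t = np.arange(0.0, T, dt)
tau_d, rate = 1.0, 0.5                       # pulse duration and arrival rate
n_pulses = rng.poisson(rate * T)
arrivals = rng.uniform(0.0, T, size=n_pulses)
amps = rng.normal(0.0, 1.0, size=n_pulses)   # not exponentially distributed

signal = np.zeros_like(t)
for tk, ak in zip(arrivals, amps):
    m = t >= tk
    signal[m] += ak * np.exp(-(t[m] - tk) / tau_d)

# Empirical characteristic function, usable for parameter estimation even
# when the PDF itself is unavailable in closed form
theta = np.linspace(-5.0, 5.0, 101)
ecf = np.array([np.mean(np.exp(1j * th * signal)) for th in theta])
print(ecf[50])   # the ecf at theta = 0 is exactly 1
```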
NASA Astrophysics Data System (ADS)
Zhu, Kaiqun; Song, Yan; Zhang, Sunjie; Zhong, Zhaozhun
2017-07-01
In this paper, a non-fragile observer-based output feedback control problem for the polytopic uncertain system under a distributed model predictive control (MPC) approach is discussed. By decomposing the global system into subsystems, the computational complexity is reduced, so the online design time is shortened. Moreover, an observer-based output feedback control algorithm is proposed in the framework of distributed MPC to deal with the difficulty of obtaining state measurements. In this way, the presented observer-based output-feedback MPC strategy is more flexible and applicable in practice than the traditional state-feedback one. Furthermore, the non-fragility of the controller has been taken into consideration to increase the robustness of the polytopic uncertain system. After that, a sufficient stability criterion is presented by using a Lyapunov-like functional approach; meanwhile, the corresponding control law and the upper bound of the quadratic cost function are derived by solving an optimisation problem subject to convex constraints. Finally, simulation examples are employed to show the effectiveness of the method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Radyushkin, Anatoly V.
2017-08-28
Here, we show that quasi-PDFs may be treated as hybrids of PDFs and primordial rest-frame momentum distributions of partons. This results in a complicated convolution nature of quasi-PDFs that necessitates using large $p_3 \gtrsim 3$ GeV momenta to get reasonably close to the PDF limit. Furthermore, as an alternative approach, we propose to use pseudo-PDFs $\mathcal{P}(x, z_3^2)$ that generalize the light-front PDFs onto spacelike intervals and are related to Ioffe-time distributions $M(\nu, z_3^2)$, functions of the Ioffe time $\nu = p_3 z_3$ and the distance parameter $z_3^2$, with respect to which they display perturbative evolution for small $z_3$. In this form, one may divide out the $z_3^2$ dependence coming from the primordial rest-frame distribution and from the problematic factor due to lattice renormalization of the gauge link. The $\nu$-dependence remains intact and determines the shape of PDFs.
On the consequences of bi-Maxwellian plasma distributions for parallel electric fields
NASA Technical Reports Server (NTRS)
Olsen, Richard C.
1992-01-01
The objective is to use the measurements of the equatorial particle distributions to obtain the parallel electric field structure and the evolution of the plasma distribution function along the field line. Appropriate use of kinetic theory allows us to use the measured (and inferred) particle distributions to obtain the electric field, and hence the variation of plasma density along the magnetic field line. The approach here is to utilize the adiabatic invariants and assume the plasma distributions are in equilibrium.
Sentürk, Damla; Dalrymple, Lorien S; Nguyen, Danh V
2014-11-30
We propose functional linear models for zero-inflated count data with a focus on the functional hurdle and functional zero-inflated Poisson (ZIP) models. Although the hurdle model assumes the counts come from a mixture of a degenerate distribution at zero and a zero-truncated Poisson distribution, the ZIP model considers a mixture of a degenerate distribution at zero and a standard Poisson distribution. We extend the generalized functional linear model framework with a functional predictor and multiple cross-sectional predictors to model counts generated by a mixture distribution. We propose an estimation procedure for functional hurdle and ZIP models, called penalized reconstruction, geared towards error-prone and sparsely observed longitudinal functional predictors. The approach relies on dimension reduction and pooling of information across subjects involving basis expansions and penalized maximum likelihood techniques. The developed functional hurdle model is applied to modeling hospitalizations within the first 2 years from initiation of dialysis, with a high percentage of zeros, in the Comprehensive Dialysis Study participants. Hospitalization counts are modeled as a function of sparse longitudinal measurements of serum albumin concentrations, patient demographics, and comorbidities. Simulation studies are used to study finite sample properties of the proposed method and include comparisons with an adaptation of standard principal components regression. Copyright © 2014 John Wiley & Sons, Ltd.
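The zero mechanisms of the two mixtures differ only at the origin, which a few lines of code make concrete (the mixture weight and Poisson mean are illustrative, not estimated from the dialysis data):

```python
import numpy as np
from scipy import stats

# Zero-inflated Poisson: a mixture of a point mass at zero (weight pi_)
# and a standard Poisson(lam); the hurdle model instead pairs the point
# mass with a zero-truncated Poisson
pi_, lam = 0.3, 2.5   # illustrative mixture weight and Poisson mean

k = np.arange(0, 15)
zip_pmf = np.where(k == 0,
                   pi_ + (1 - pi_) * stats.poisson.pmf(0, lam),
                   (1 - pi_) * stats.poisson.pmf(k, lam))

# Hurdle pmf: zeros come only from the point mass; positive counts follow
# the Poisson renormalized to exclude zero
hurdle_pmf = np.where(k == 0,
                      pi_,
                      (1 - pi_) * stats.poisson.pmf(k, lam)
                      / (1.0 - stats.poisson.pmf(0, lam)))

print(zip_pmf.sum(), hurdle_pmf.sum())   # both ~ 1 over a wide enough range
```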
Inference of Functionally-Relevant N-acetyltransferase Residues Based on Statistical Correlations.
Neuwald, Andrew F; Altschul, Stephen F
2016-12-01
Over evolutionary time, members of a superfamily of homologous proteins sharing a common structural core diverge into subgroups filling various functional niches. At the sequence level, such divergence appears as correlations that arise from residue patterns distinct to each subgroup. Such a superfamily may be viewed as a population of sequences corresponding to a complex, high-dimensional probability distribution. Here we model this distribution as hierarchical interrelated hidden Markov models (hiHMMs), which describe these sequence correlations implicitly. By characterizing such correlations one may hope to obtain information regarding functionally-relevant properties that have thus far evaded detection. To do so, we infer a hiHMM distribution from sequence data using Bayes' theorem and Markov chain Monte Carlo (MCMC) sampling, which is widely recognized as the most effective approach for characterizing a complex, high dimensional distribution. Other routines then map correlated residue patterns to available structures with a view to hypothesis generation. When applied to N-acetyltransferases, this reveals sequence and structural features indicative of functionally important, yet generally unknown biochemical properties. Even for sets of proteins for which nothing is known beyond unannotated sequences and structures, this can lead to helpful insights. We describe, for example, a putative coenzyme-A-induced-fit substrate binding mechanism mediated by arginine residue switching between salt bridge and π-π stacking interactions. A suite of programs implementing this approach is available (psed.igs.umaryland.edu).
Friston, Karl J.; Li, Baojuan; Daunizeau, Jean; Stephan, Klaas E.
2011-01-01
This paper is about inferring or discovering the functional architecture of distributed systems using Dynamic Causal Modelling (DCM). We describe a scheme that recovers the (dynamic) Bayesian dependency graph (connections in a network) using observed network activity. This network discovery uses Bayesian model selection to identify the sparsity structure (absence of edges or connections) in a graph that best explains observed time-series. The implicit adjacency matrix specifies the form of the network (e.g., cyclic or acyclic) and its graph-theoretical attributes (e.g., degree distribution). The scheme is illustrated using functional magnetic resonance imaging (fMRI) time series to discover functional brain networks. Crucially, it can be applied to experimentally evoked responses (activation studies) or endogenous activity in task-free (resting state) fMRI studies. Unlike conventional approaches to network discovery, DCM permits the analysis of directed and cyclic graphs. Furthermore, it eschews (implausible) Markovian assumptions about the serial independence of random fluctuations. The scheme furnishes a network description of distributed activity in the brain that is optimal in the sense of having the greatest conditional probability, relative to other networks. The networks are characterised in terms of their connectivity or adjacency matrices and conditional distributions over the directed (and reciprocal) effective connectivity between connected nodes or regions. We envisage that this approach will provide a useful complement to current analyses of functional connectivity for both activation and resting-state studies. PMID:21182971
A Langevin approach to multi-scale modeling
NASA Astrophysics Data System (ADS)
Hirvijoki, Eero
2018-04-01
In plasmas, distribution functions often demonstrate long anisotropic tails or otherwise significant deviations from local Maxwellians. The tails, especially if they are pulled out from the bulk, pose a serious challenge for numerical simulations as resolving both the bulk and the tail on the same mesh is often challenging. A multi-scale approach, providing evolution equations for the bulk and the tail individually, could offer a resolution in the sense that both populations could be treated on separate meshes or different reduction techniques applied to the bulk and the tail population. In this letter, we propose a multi-scale method which allows us to split a distribution function into a bulk and a tail so that both populations remain genuine, non-negative distribution functions and may carry density, momentum, and energy. The proposed method is based on the observation that the motion of an individual test particle in a plasma obeys a stochastic differential equation, also referred to as a Langevin equation. This allows us to define transition probabilities between the bulk and the tail and to provide evolution equations for both populations separately.
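The observation underpinning the method, that a test particle obeys a Langevin equation, can be sketched with a standard Euler-Maruyama integration of an Ornstein-Uhlenbeck-type velocity process (coefficients illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(9)

# Euler-Maruyama integration of a Langevin equation for a test-particle
# velocity, dv = -nu * v dt + sqrt(2 D) dW, whose stationary distribution
# is Maxwellian with variance D / nu
nu, D = 1.0, 0.5
dt, n_steps, n_particles = 0.01, 5000, 10_000

v = np.zeros(n_particles)
for _ in range(n_steps):
    v += -nu * v * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(n_particles)

# The empirical stationary variance should approach D / nu
print(v.var(), D / nu)
```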
Effects of the gap slope on the distribution of removal rate in Belt-MRF.
Wang, Dekang; Hu, Haixiang; Li, Longxiang; Bai, Yang; Luo, Xiao; Xue, Donglin; Zhang, Xuejun
2017-10-30
Belt magnetorheological finishing (Belt-MRF) is a promising tool for large-optics processing. However, before a spot can be used, its shape should be designed and controlled via the polishing gap. Previous research revealed a remarkably nonlinear relationship between the removal function and the normal pressure distribution; the pressure is in turn nonlinearly related to the gap geometry, precluding prediction of the removal function from the polishing gap alone. Here, we used the concepts of gap slope and virtual ribbon to develop a model of removal profiles in Belt-MRF. Between the belt and the workpiece in the main polishing area, a gap that changes linearly along the flow direction was created using a flat-bottom magnet box. The pressure distribution and removal function were calculated, and simulations were consistent with experiments. Different removal functions, consistent with theoretical calculations, were obtained by adjusting the gap slope. This approach allows removal functions in Belt-MRF to be predicted.
Similarity of the Outer Region of the Turbulent Boundary
2009-02-09
[...] terms by a stream function approach using the transformed x-momentum balance equation and the transformed Reynolds stress transport equation.
Coordinated scheduling for dynamic real-time systems
NASA Technical Reports Server (NTRS)
Natarajan, Swaminathan; Zhao, Wei
1994-01-01
In this project, we addressed issues in coordinated scheduling for dynamic real-time systems. In particular, we concentrated on the design and implementation of a new distributed real-time system called R-Shell. The design objective of R-Shell is to provide computing support for space programs that have large, complex, fault-tolerant distributed real-time applications. In R-Shell, the approach is based on the concept of scheduling agents, which reside in the application run-time environment and are customized to provide just those resource management functions needed by the specific application. With this approach, we avoid the need for a sophisticated OS that provides a variety of generalized functionality, while still not burdening application programmers with heavy responsibility for resource management. In this report, we discuss the R-Shell approach, summarize the achievements of the project, and describe a preliminary prototype of the R-Shell system.
Development of uncertainty-based work injury model using Bayesian structural equation modelling.
Chatterjee, Snehamoy
2014-01-01
This paper proposed a Bayesian structural equation model (SEM) of miners' work injury for an underground coal mine in India. The environmental and behavioural variables for work injury were identified and causal relationships were developed. For Bayesian modelling, prior distributions of the SEM parameters are necessary to develop the model. In this paper, two approaches were adopted to obtain prior distributions for the factor loading parameters and structural parameters of the SEM. In the first approach, the prior distributions were taken as fixed distribution functions with specific parameter values, whereas in the second approach, prior distributions of the parameters were generated from experts' opinions. The posterior distributions of these parameters were obtained by applying Bayes' rule, and Markov chain Monte Carlo sampling, in the form of Gibbs sampling, was applied to sample from the posterior distribution. The results revealed that all coefficients of the structural and measurement model parameters are statistically significant with the experts' opinion-based priors, whereas two coefficients are not statistically significant when fixed prior-based distributions are applied. The error statistics reveal that the Bayesian structural model provides a reasonably good fit for work injury, with a high coefficient of determination (0.91) and lower mean squared error compared to the traditional SEM.
Combining electromagnetic gyro-kinetic particle-in-cell simulations with collisions
NASA Astrophysics Data System (ADS)
Slaby, Christoph; Kleiber, Ralf; Könies, Axel
2017-09-01
It has been an open question whether, for electromagnetic gyro-kinetic particle-in-cell (PIC) simulations, pitch-angle collisions and the recently introduced pullback transformation scheme (Mishchenko et al., 2014; Kleiber et al., 2016) are consistent. This question is answered positively by comparing the PIC code EUTERPE with an approach based on an expansion of the perturbed distribution function in eigenfunctions of the pitch-angle collision operator (Legendre polynomials) to solve the electromagnetic drift-kinetic equation with collisions in slab geometry. It is shown how both approaches yield the same results for the frequency and damping rate of a kinetic Alfvén wave and how the perturbed distribution function is substantially changed by the presence of pitch-angle collisions.
Hydration of Caffeine at High Temperature by Neutron Scattering and Simulation Studies.
Tavagnacco, L; Brady, J W; Bruni, F; Callear, S; Ricci, M A; Saboungi, M L; Cesàro, A
2015-10-22
The solvation of caffeine in water is examined with neutron diffraction experiments at 353 K. The experimental data, obtained by taking advantage of isotopic H/D substitution in water, were analyzed by empirical potential structure refinement (EPSR) in order to extract partial structure factors and site-site radial distribution functions. In parallel, molecular dynamics (MD) simulations were carried out to interpret the data and gain insight into the intermolecular interactions in the solutions and the solvation process. The results obtained with the two approaches evidence differences in the individual radial distribution functions, although both confirm the presence of caffeine stacks at this temperature. The two approaches point to different accessibility of water to the caffeine sites due to different stacking configurations.
Application of Lagrangian blending functions for grid generation around airplane geometries
NASA Technical Reports Server (NTRS)
Abolhassani, Jamshid S.; Sadrehaghighi, Ideen; Tiwari, Surendra N.
1990-01-01
A simple procedure was developed and applied for the grid generation around an airplane geometry. This approach is based on a transfinite interpolation with Lagrangian interpolation for the blending functions. A monotonic rational quadratic spline interpolation was employed for the grid distributions.
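Transfinite interpolation with linear Lagrange blending functions is compact enough to sketch directly; the boundary curves below are toy stand-ins for an actual airplane surface grid, and the monotonic spline step is omitted.

```python
import numpy as np

def tfi_2d(bottom, top, left, right):
    """Transfinite interpolation of a structured 2-D grid from four
    boundary curves using linear Lagrange blending functions.
    bottom/top have shape (ni, 2); left/right have shape (nj, 2),
    with matching corner points assumed."""
    ni, nj = len(bottom), len(left)
    xi = np.linspace(0.0, 1.0, ni)[:, None, None]    # blending coordinates
    eta = np.linspace(0.0, 1.0, nj)[None, :, None]

    return ((1 - eta) * bottom[:, None, :] + eta * top[:, None, :]
            + (1 - xi) * left[None, :, :] + xi * right[None, :, :]
            - ((1 - xi) * (1 - eta) * bottom[0]      # subtract the bilinear
               + xi * eta * top[-1]                  # corner contributions
               + xi * (1 - eta) * bottom[-1]
               + (1 - xi) * eta * top[0]))           # shape (ni, nj, 2)

# Toy boundaries: a channel with a curved lower wall
ni, nj = 41, 21
s = np.linspace(0.0, 1.0, ni)
bottom = np.stack([s, 0.1 * np.sin(np.pi * s)], axis=1)
top = np.stack([s, np.ones_like(s)], axis=1)
left = np.stack([np.zeros(nj), np.linspace(0.0, 1.0, nj)], axis=1)
right = np.stack([np.ones(nj), np.linspace(bottom[-1, 1], 1.0, nj)], axis=1)
grid = tfi_2d(bottom, top, left, right)
print(grid.shape)   # (41, 21, 2)
```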
Mapping Resource Selection Functions in Wildlife Studies: Concerns and Recommendations
Morris, Lillian R.; Proffitt, Kelly M.; Blackburn, Jason K.
2018-01-01
Predicting the spatial distribution of animals is an important and widely used tool with applications in wildlife management, conservation, and population health. Wildlife telemetry technology coupled with the availability of spatial data and GIS software have facilitated advancements in species distribution modeling. There are also challenges related to these advancements including the accurate and appropriate implementation of species distribution modeling methodology. Resource Selection Function (RSF) modeling is a commonly used approach for understanding species distributions and habitat usage, and mapping the RSF results can enhance study findings and make them more accessible to researchers and wildlife managers. Currently, there is no consensus in the literature on the most appropriate method for mapping RSF results, methods are frequently not described, and mapping approaches are not always related to accuracy metrics. We conducted a systematic review of the RSF literature to summarize the methods used to map RSF outputs, discuss the relationship between mapping approaches and accuracy metrics, performed a case study on the implications of employing different mapping methods, and provide recommendations as to appropriate mapping techniques for RSF studies. We found extensive variability in methodology for mapping RSF results. Our case study revealed that the most commonly used approaches for mapping RSF results led to notable differences in the visual interpretation of RSF results, and there is a concerning disconnect between accuracy metrics and mapping methods. We make 5 recommendations for researchers mapping the results of RSF studies, which are focused on carefully selecting and describing the method used to map RSF studies, and relating mapping approaches to accuracy metrics. PMID:29887652
Improving Advanced Inverter Control Convergence in Distribution Power Flow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nagarajan, Adarsh; Palmintier, Bryan; Ding, Fei
Simulation of modern distribution system powerflow increasingly requires capturing the impact of advanced PV inverter voltage regulation on powerflow. With Volt/var control, the inverter adjusts its reactive power flow as a function of the point of common coupling (PCC) voltage. Similarly, Volt/watt control curtails active power production as a function of PCC voltage. However, with larger systems and higher penetrations of PV, this active/reactive power flow itself can cause significant changes to the PCC voltage, potentially introducing oscillations that slow the convergence of system simulations. Improper treatment of these advanced inverter functions could potentially lead to incorrect results. This paper explores a simple approach to speed such convergence by blending in the previous iteration's reactive power estimate to dampen these oscillations. Results with a single large (5 MW) PV system and with multiple 500 kW advanced inverters show dramatic improvements using this approach.
Probability density cloud as a geometrical tool to describe statistics of scattered light.
Yaitskova, Natalia
2017-04-01
First-order statistics of scattered light are described using the representation of the probability density cloud, which visualizes the two-dimensional distribution of the complex amplitude. The geometric parameters of the cloud are studied in detail and are connected to the statistical properties of the phase. The moment-generating function for intensity is obtained in closed form through these parameters. An example of an exponentially modified normal distribution is provided to illustrate this geometrical approach.
A probabilistic approach to photovoltaic generator performance prediction
NASA Astrophysics Data System (ADS)
Khallat, M. A.; Rahman, S.
1986-09-01
A method for predicting the performance of a photovoltaic (PV) generator based on long term climatological data and expected cell performance is described. The equations for cell model formulation are provided. Use of the statistical model for characterizing the insolation level is discussed. The insolation data is fitted to appropriate probability distribution functions (Weibull, beta, normal). The probability distribution functions are utilized to evaluate the capacity factors of PV panels or arrays. An example is presented revealing the applicability of the procedure.
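As a sketch of the statistical step described above, the snippet below fits the three candidate distributions to stand-in insolation data, selects the best by log-likelihood, and estimates a capacity factor under the simplifying assumption that panel output is proportional to insolation up to a 1 kW/m^2 rating; the data and the rating rule are illustrative, not the paper's cell model.

```python
# Sketch: fit candidate distributions to insolation data, pick the best by
# log-likelihood, and estimate a PV capacity factor. The data, the 1 kW/m^2
# rating point, and the linear power model are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
insolation = rng.weibull(2.0, 5000) * 0.6  # stand-in hourly insolation, kW/m^2

candidates = {"weibull": stats.weibull_min, "beta": stats.beta, "normal": stats.norm}
fits = {}
for name, dist in candidates.items():
    params = dist.fit(insolation)
    ll = np.sum(dist.logpdf(insolation, *params))
    fits[name] = (params, ll if np.isfinite(ll) else -np.inf)
best = max(fits, key=lambda k: fits[k][1])
params, _ = fits[best]

# Capacity factor: expected output over rated output, assuming output is
# proportional to insolation and saturates at the 1 kW/m^2 rating.
samples = candidates[best].rvs(*params, size=100_000, random_state=rng)
capacity_factor = np.mean(np.minimum(np.clip(samples, 0.0, None), 1.0))
print(f"best-fitting distribution: {best}, capacity factor: {capacity_factor:.3f}")
```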
Agricultural Fragility Estimates Subjected to Volcanic Ash Fall Hazards
NASA Astrophysics Data System (ADS)
Ham, H. J.; Lee, S.; Choi, S. H.; Yun, W. S.
2015-12-01
In this study, fragility functions are developed to estimate expected volcanic ash damages of the agricultural sector in Korea. The fragility functions are derived from two approaches: 1) an empirical approach based on field observations of impacts to agriculture from the 2006 eruption of Merapi volcano in Indonesia, and 2) the FOSM (first-order second-moment) analytical approach based on the distribution and thickness of volcanic ash observed from the 1980 eruption of Mt. Saint Helens and agricultural facility specifications in Korea. The fragility function for each agricultural commodity class is presented by a cumulative distribution function of the generalized extreme value distribution. Different functions are developed to estimate production losses from outdoor and greenhouse farming. Seasonal climate influences the vulnerability of each agricultural crop and is found to be a crucial component in determining the fragility of agricultural commodities to an ash fall. In the study, the seasonality coefficient is established as a multiplier of the fragility function to consider the seasonal vulnerability. Yields of the different agricultural commodities are obtained from the Korean Statistical Information Service to create a baseline for future agricultural volcanic loss estimation. Numerically simulated examples of scenario ash fall events at Mt. Baekdu volcano are utilized to illustrate the application of the developed fragility functions. Acknowledgements: This research was supported by a grant, 'Development of Advanced Volcanic Disaster Response System considering Potential Volcanic Risk around Korea' [MPSS-NH-2015-81], from the Natural Hazard Mitigation Research Group, Ministry of Public Safety and Security of Korea. References: Nimlos, T. J. and Hans, Z., The Distribution and Thickness of Volcanic Ash in Montana, Northwest Science, Vol. 56, No. 3, 1982. Wilson, T., Kaye, G., Stewart, C., and Cole, J., Impacts of the 2006 Eruption of Merapi Volcano, Indonesia, on Agriculture and Infrastructure, GNS Science Report, 2007.
NASA Astrophysics Data System (ADS)
Rath, Kristin; Fierer, Noah; Rousk, Johannes
2017-04-01
Our knowledge of the dynamics structuring microbial communities and the consequences this has for soil functions is rudimentary. In particular, predictions of the response of microbial communities to environmental change and the implications for associated ecosystem processes remain elusive. Understanding how environmental factors structure microbial communities and regulate the functions they perform is key to a mechanistic understanding of how biogeochemical cycles respond to environmental change. Soil salinization is an agricultural problem in many parts of the world. The activity of soil microorganisms is reduced in saline soils compared to non-saline soils. However, soil salinity often co-varies with other factors, making it difficult to assign responses of microbial communities to direct effects of salinity. A trait-based approach allows us to connect the environmental factor salinity with the responses of microbial community composition and functioning. Salinity along a salinity gradient serves as a filter for the community trait distribution of salt tolerance, selecting for higher salt tolerance at more saline sites. This trait-environment relationship can be used to predict responses of microbial communities to environmental change. Our aims were to (i) use salinity along natural salinity gradients as an environmental filter, and (ii) link the resulting filtered trait distributions of the communities (the trait being salt tolerance) to the community composition. Soil samples were obtained from two replicated salinity gradients along an Australian salt lake, spanning a wide range of soil salinities (0.1 dS m-1 to >50 dS m-1). In one of the two gradients salinity was correlated with pH. Community trait distributions for salt tolerance were assessed by establishing dose dependences for extracted bacterial communities using growth rate assays. In addition, functional parameters were measured along the salt gradients. The community composition of the sites was compared through 16S rRNA gene amplicon sequencing. Microbial community composition changed greatly along the salinity gradients. Using the salt-tolerance assessments to estimate bacterial trait distributions, we could determine substantial differences in tolerance to salt, revealing a strong causal connection between environment and trait distributions. By constraining the community composition with salinity tolerance in ordinations, we could assign which community differences were directly due to a shift in community trait distributions. These analyses revealed that a substantial part (up to 30%) of the community composition differences was directly driven by environmental salt concentrations. Even though communities in saline soils had trait distributions aligned to their environment, their performance (respiration, growth rates) was lower than that of communities in non-saline soils and remained low even after input of organic material. Using a trait-based approach we could connect filtered trait distributions along environmental gradients to the composition of the microbial community. We show that soil salinity played an important role in shaping microbial community composition by selecting for communities with higher salt tolerance. The shift toward bacterial communities with trait distributions matched to salt environments probably compensated for much of the potential loss of function induced by salinity, resulting in a degree of apparent functional redundancy for decomposition.
However, more tolerant communities still showed reduced functioning, suggesting a trade-off between salt tolerance and performance.
NASA Astrophysics Data System (ADS)
Prasad, Guru; Jayaram, Sanjay; Ward, Jami; Gupta, Pankaj
2004-08-01
In this paper, Aximetric proposes a decentralized Command and Control (C2) architecture for a distributed control of a cluster of on-board health monitoring and software enabled control systems called SimBOX that will use some of the real-time infrastructure (RTI) functionality from the current military real-time simulation architecture. The uniqueness of the approach is to provide a "plug and play environment" for various system components that run at various data rates (Hz) and the ability to replicate or transfer C2 operations to various subsystems in a scalable manner. This is possible by providing a communication bus called "Distributed Shared Data Bus" and a distributed computing environment used to scale the control needs by providing a self-contained computing, data logging and control function module that can be rapidly reconfigured to perform different functions. This kind of software-enabled control is very much needed to meet the needs of future aerospace command and control functions.
Banik, Suman Kumar; Bag, Bidhan Chandra; Ray, Deb Shankar
2002-05-01
Traditionally, quantum Brownian motion is described by Fokker-Planck or diffusion equations in terms of quasiprobability distribution functions, e.g., Wigner functions. These often become singular or negative in the full quantum regime. In this paper a simple approach to non-Markovian theory of quantum Brownian motion using true probability distribution functions is presented. Based on an initial coherent state representation of the bath oscillators and an equilibrium canonical distribution of the quantum mechanical mean values of their coordinates and momenta, we derive a generalized quantum Langevin equation in c-numbers and show that the latter is amenable to a theoretical analysis in terms of the classical theory of non-Markovian dynamics. The corresponding Fokker-Planck, diffusion, and Smoluchowski equations are the exact quantum analogs of their classical counterparts. The present work is independent of path integral techniques. The theory as developed here is a natural extension of its classical version and is valid for arbitrary temperature and friction (the Smoluchowski equation being considered in the overdamped limit).
A Heuristic Probabilistic Approach to Estimating Size-Dependent Mobility of Nonuniform Sediment
NASA Astrophysics Data System (ADS)
Woldegiorgis, B. T.; Wu, F. C.; van Griensven, A.; Bauwens, W.
2017-12-01
Simulating the mechanism of bed sediment mobility is essential for modelling sediment dynamics. Although many studies have addressed this subject, they use complex mathematical formulations that are computationally expensive and often difficult to implement. In order to present a simple and computationally efficient complement to detailed sediment mobility models, we developed a heuristic probabilistic approach to estimating the size-dependent mobilities of nonuniform sediment based on the pre- and post-entrainment particle size distributions (PSDs), assuming that the PSDs are lognormally distributed. The approach fits a lognormal probability density function (PDF) to the pre-entrainment PSD of bed sediment and uses the threshold particle size of incipient motion and the concept of a sediment mixture to estimate the PSDs of the entrained sediment and the post-entrainment bed sediment. The new approach is simple in a physical sense and significantly reduces the complexity, computation time, and resources required by detailed sediment mobility models. It is calibrated and validated with laboratory and field data by comparison with the size-dependent mobilities predicted by the existing empirical lognormal cumulative distribution function (CDF) approach. The novel features of the current approach are: (1) separating the entrained and non-entrained sediments by a threshold particle size, which is a critical particle size of incipient motion modified to account for mixed-size effects, and (2) using the mixture-based pre- and post-entrainment PSDs to provide a continuous estimate of the size-dependent sediment mobility.
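A minimal sketch of the heuristic, assuming a lognormal pre-entrainment PSD and an illustrative threshold size; the mixed-size correction of the critical size is not reproduced here.

```python
# Sketch: fit a lognormal PDF to the pre-entrainment particle size distribution
# (PSD) and split it at a threshold size of incipient motion into entrained and
# post-entrainment parts. The threshold value and sample PSD are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
grain_sizes_mm = rng.lognormal(mean=np.log(0.5), sigma=0.8, size=2000)

# Fit a lognormal PDF to the pre-entrainment PSD (location fixed at zero).
shape, loc, scale = stats.lognorm.fit(grain_sizes_mm, floc=0)
psd = stats.lognorm(shape, loc=loc, scale=scale)

d_threshold = 0.8  # mm, stand-in for the (mixed-size corrected) critical size

# Mobile fraction of the mixture: grains finer than the threshold are entrained.
p_entrained = psd.cdf(d_threshold)
print(f"mobile fraction of the bed mixture: {p_entrained:.2f}")

# Entrained and post-entrainment PSDs as truncated (renormalized) lognormals.
d = np.linspace(0.01, 5.0, 500)
pdf_entrained = np.where(d <= d_threshold, psd.pdf(d) / p_entrained, 0.0)
pdf_bed_after = np.where(d > d_threshold, psd.pdf(d) / (1 - p_entrained), 0.0)
```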
Application of the mobility power flow approach to structural response from distributed loading
NASA Technical Reports Server (NTRS)
Cuschieri, J. M.
1988-01-01
The problem of the vibration power flow through coupled substructures when one of the substructures is subjected to a distributed load is addressed. In all the work performed thus far, point force excitation was considered. However, in the case of the excitation of an aircraft fuselage, distributed loading on the whole surface of a panel can be as important as the excitation from directly applied forces at defined locations on the structures. Thus using a mobility power flow approach, expressions are developed for the transmission of vibrational power between two coupled plate substructures in an L configuration, with one of the surfaces of one of the plate substructures being subjected to a distributed load. The types of distributed loads that are considered are a force load with an arbitrary function in space and a distributed load similar to that from acoustic excitation.
A Density Functional for Liquid 3He Based on the Aziz Potential
NASA Astrophysics Data System (ADS)
Barranco, M.; Hernández, E. S.; Mayol, R.; Navarro, J.; Pi, M.; Szybisz, L.
2006-09-01
We propose a new class of density functionals for liquid 3He based on the Aziz helium-helium interaction screened at short distances by the microscopically calculated two-body distribution function g(r). Our aim is to reduce to a minimum the unavoidable phenomenological ingredients inherent to any density functional approach. Results for the homogeneous liquid and droplets are presented and discussed.
Applications of Lagrangian blending functions for grid generation around airplane geometries
NASA Technical Reports Server (NTRS)
Abolhassani, Jamshid S.; Sadrehaghighi, Ideen; Tiwari, Surendra N.; Smith, Robert E.
1990-01-01
A simple procedure has been developed and applied for the grid generation around an airplane geometry. This approach is based on a transfinite interpolation with Lagrangian interpolation for the blending functions. A monotonic rational quadratic spline interpolation has been employed for the grid distributions.
RELATIONSHIP BETWEEN PHYLOGENETIC DISTRIBUTION AND GENOMIC FEATURES IN NEUROSPORA CRASSA
USDA-ARS?s Scientific Manuscript database
In the post-genome era, insufficient functional annotation of predicted genes greatly restricts the potential of mining genome data. We demonstrate that an evolutionary approach, which is independent of functional annotation, has great potential as a tool for genome analysis. We chose the genome o...
Force Density Function Relationships in 2-D Granular Media
NASA Technical Reports Server (NTRS)
Youngquist, Robert C.; Metzger, Philip T.; Kilts, Kelly N.
2004-01-01
An integral transform relationship is developed to convert between two important probability density functions (distributions) used in the study of contact forces in granular physics. Developing this transform has now made it possible to compare and relate various theoretical approaches with one another and with the experimental data despite the fact that one may predict the Cartesian probability density and another the force magnitude probability density. Also, the transforms identify which functional forms are relevant to describe the probability density observed in nature, and so the modified Bessel function of the second kind has been identified as the relevant form for the Cartesian probability density corresponding to exponential forms in the force magnitude distribution. Furthermore, it is shown that this transform pair supplies a sufficient mathematical framework to describe the evolution of the force magnitude distribution under shearing. Apart from the choice of several coefficients, whose evolution of values must be explained in the physics, this framework successfully reproduces the features of the distribution that are taken to be an indicator of jamming and unjamming in a granular packing. Key words. Granular Physics, Probability Density Functions, Fourier Transforms
Spatial distribution of nuclei in progressive nucleation: Modeling and application
NASA Astrophysics Data System (ADS)
Tomellini, Massimo
2018-04-01
Phase transformations ruled by non-simultaneous nucleation and growth do not lead to random distribution of nuclei. Since nucleation is only allowed in the untransformed portion of space, positions of nuclei are correlated. In this article an analytical approach is presented for computing pair-correlation function of nuclei in progressive nucleation. This quantity is further employed for characterizing the spatial distribution of nuclei through the nearest neighbor distribution function. The modeling is developed for nucleation in 2D space with power growth law and it is applied to describe electrochemical nucleation where correlation effects are significant. Comparison with both computer simulations and experimental data lends support to the model which gives insights into the transition from Poissonian to correlated nearest neighbor probability density.
DMS Advanced Applications for Accommodating High Penetrations of DERs and Microgrids: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pratt, Annabelle; Veda, Santosh; Maitra, Arindam
Efficient and effective management of the electrical distribution system requires an integrated system approach for Distribution Management Systems (DMS), Distributed Energy Resources (DERs), Distributed Energy Resources Management System (DERMS), and microgrids to work in harmony. This paper highlights some of the outcomes from a U.S. Department of Energy (DOE), Office of Electricity (OE) project, including 1) the architecture of these integrated systems, and 2) expanded functions of two example DMS applications, Volt-VAR optimization (VVO) and Fault Location, Isolation and Service Restoration (FLISR), to accommodate DER. For these two example applications, the relevant DER Group Functions necessary to support communication between DMS and Microgrid Controller (MC) in grid-tied mode are identified.
DMS Advanced Applications for Accommodating High Penetrations of DERs and Microgrids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pratt, Annabelle; Veda, Santosh; Maitra, Arindam
Efficient and effective management of the electric distribution system requires an integrated approach to allow various systems to work in harmony, including distribution management systems (DMS), distributed energy resources (DERs), distributed energy resources management systems, and microgrids. This study highlights some outcomes from a recent project sponsored by the US Department of Energy, Office of Electricity Delivery and Energy Reliability, including information about (i) the architecture of these integrated systems and (ii) expanded functions of two example DMS applications to accommodate DERs: volt-var optimisation and fault location, isolation, and service restoration. In addition, the relevant DER group functions necessary to support communications between the DMS and a microgrid controller in grid-tied mode are identified.
Two approximations of the present value distribution of a disability annuity
NASA Astrophysics Data System (ADS)
Spreeuw, Jaap
2006-02-01
The distribution function of the present value of a cash flow can be approximated by means of a distribution function of a random variable, which is also the present value of a sequence of payments, but with a simpler structure. The corresponding random variable has the same expectation as the random variable corresponding to the original distribution function and is a stochastic upper bound of convex order. A sharper upper bound can be obtained if more information about the risk is available. In this paper, it will be shown that such an approach can be adopted for disability annuities (also known as income protection policies) in a three state model under Markov assumptions. Benefits are payable during any spell of disability whilst premiums are only due whenever the insured is healthy. The quality of the two approximations is investigated by comparing the distributions obtained with the one derived from the algorithm presented in the paper by Hesselager and Norberg [Insurance Math. Econom. 18 (1996) 35-42].
NASA Astrophysics Data System (ADS)
Dobronets, Boris S.; Popova, Olga A.
2018-05-01
The paper considers a new approach to regression modeling that uses aggregated data presented in the form of density functions. Approaches to improving the reliability of the aggregation of empirical data are considered, namely improving accuracy and estimating errors. We discuss data aggregation procedures as a preprocessing stage for subsequent regression modeling. An important feature of the study is the demonstration of how to represent the aggregated data. It is proposed to use piecewise polynomial models, including spline aggregate functions. We show that the proposed approach to data aggregation can be interpreted as a frequency distribution; the density function concept is used to study its properties. Various types of mathematical models of data aggregation are discussed. For the construction of regression models, it is proposed to use data representation procedures based on piecewise polynomial models. New approaches to modeling functional dependencies based on spline aggregations are proposed.
A functional architecture of the human brain: Emerging insights from the science of emotion
Lindquist, Kristen A.; Barrett, Lisa Feldman
2012-01-01
The ‘faculty psychology’ approach to the mind, which attempts to explain mental function in terms of categories that reflect modular ‘faculties’, such as emotions, cognitions, and perceptions, has dominated research into the mind and its physical correlates. In this paper, we argue that brain organization does not respect the commonsense categories belonging to the faculty psychology approach. We review recent research from the science of emotion demonstrating that the human brain contains broadly distributed functional networks that can each be re-described as basic psychological operations that interact to produce a range of mental states, including, but not limited to, anger, sadness, fear, disgust, and so on. When compared to the faculty psychology approach, this ‘constructionist’ approach provides an alternative functional architecture to guide the design and interpretation of experiments in cognitive neuroscience. PMID:23036719
Stinchcombe, Adam R; Peskin, Charles S; Tranchina, Daniel
2012-06-01
We present a generalization of a population density approach for modeling and analysis of stochastic gene expression. In the model, the gene of interest fluctuates stochastically between an inactive state, in which transcription cannot occur, and an active state, in which discrete transcription events occur; and the individual mRNA molecules are degraded stochastically in an independent manner. This sort of model, in its simplest form with exponential dwell times, has been used to explain experimental estimates of the discrete distribution of random mRNA copy number. In our generalization, the random dwell times in the inactive and active states, T_{0} and T_{1}, respectively, are independent random variables drawn from any specified distributions. Consequently, the probability per unit time of switching out of a state depends on the time since entering that state. Our method exploits a connection between the fully discrete random process and a related continuous process. We present numerical methods for computing steady-state mRNA distributions and an analytical derivation of the mRNA autocovariance function. We find that empirical estimates of the steady-state mRNA probability mass function from Monte Carlo simulations of laboratory data do not allow one to distinguish between underlying models with exponential and nonexponential dwell times in some relevant parameter regimes. However, in these parameter regimes and where the autocovariance function has negative lobes, the autocovariance function disambiguates the two types of models. Our results strongly suggest that temporal data beyond the autocovariance function are required in general to characterize gene switching.
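A minimal Monte Carlo sketch of the model class described above, with an arbitrary (here gamma-distributed) inactive dwell time; all rate values are illustrative, and event-based sampling is used rather than the paper's population density method.

```python
# Monte Carlo sketch of the two-state gene model: dwell times T0 (inactive) and
# T1 (active) are drawn from arbitrary distributions; transcription fires at
# rate k_r while active, and each mRNA degrades independently at rate gamma.
# All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(2)
k_r, gamma = 10.0, 1.0
draw_T0 = lambda: rng.gamma(2.0, 1.0)   # nonexponential inactive dwell time
draw_T1 = lambda: rng.exponential(0.5)  # exponential active dwell time

def simulate(t_end=5000.0):
    t, m, active = 0.0, 0, False
    t_switch = draw_T0()
    samples = []
    while t < t_end:
        rate = m * gamma + (k_r if active else 0.0)
        dt = rng.exponential(1.0 / rate) if rate > 0 else np.inf
        if t + dt < t_switch:                 # a birth/death event fires first
            t += dt
            if active and rng.random() < k_r / rate:
                m += 1                        # transcription event
            else:
                m -= 1                        # degradation of one mRNA
        else:                                 # the gene switches state first
            t = t_switch
            active = not active
            t_switch = t + (draw_T1() if active else draw_T0())
        samples.append(m)                     # event-sampled copy numbers
    return np.array(samples)

# Note: event sampling suffices for a sketch; exact steady-state moments would
# need time-weighted averaging or the paper's population density method.
print("mean mRNA copy number (event-sampled):", simulate().mean())
```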
A new approach to simulating collisionless dark matter fluids
NASA Astrophysics Data System (ADS)
Hahn, Oliver; Abel, Tom; Kaehler, Ralf
2013-09-01
Recently, we have shown how current cosmological N-body codes already follow the fine grained phase-space information of the dark matter fluid. Using a tetrahedral tessellation of the three-dimensional manifold that describes perfectly cold fluids in six-dimensional phase space, the phase-space distribution function can be followed throughout the simulation. This allows one to project the distribution function into configuration space to obtain highly accurate densities, velocities and velocity dispersions. Here, we exploit this technique to show first steps on how to devise an improved particle-mesh technique. At its heart, the new method thus relies on a piecewise linear approximation of the phase-space distribution function rather than the usual particle discretization. We use pseudo-particles that approximate the masses of the tetrahedral cells up to quadrupolar order as the locations for cloud-in-cell (CIC) deposit instead of the particle locations themselves as in standard CIC deposit. We demonstrate that this modification already gives much improved stability and more accurate dynamics of the collisionless dark matter fluid at high force and low mass resolution. We demonstrate the validity and advantages of this method with various test problems as well as hot/warm dark matter simulations which have been known to exhibit artificial fragmentation. This completely unphysical behaviour is much reduced in the new approach. The current limitations of our approach are discussed in detail and future improvements are outlined.
A Gaussian Model-Based Probabilistic Approach for Pulse Transit Time Estimation.
Jang, Dae-Geun; Park, Seung-Hun; Hahn, Minsoo
2016-01-01
In this paper, we propose a new probabilistic approach to pulse transit time (PTT) estimation using a Gaussian distribution model. It is motivated by the hypothesis that PTTs normalized by RR intervals follow the Gaussian distribution. To verify the hypothesis, we demonstrate the effects of arterial compliance on the normalized PTTs using the Moens-Korteweg equation. Furthermore, we observe a Gaussian distribution of the normalized PTTs in real data. In order to estimate the PTT using this hypothesis, we first assume that R-waves in the electrocardiogram (ECG) can be correctly identified. The R-waves limit the search ranges used to detect pulse peaks in the photoplethysmogram (PPG) and synchronize the results with cardiac beats--i.e., the peaks of the PPG are extracted within the corresponding RR interval of the ECG as pulse peak candidates. Their probabilities of being the actual pulse peak are then calculated using a Gaussian probability function. The parameters of the Gaussian function are automatically updated when a new pulse peak is identified. This update makes the probability function adaptive to variations of cardiac cycles. Finally, the pulse peak is identified as the candidate with the highest probability. The proposed approach is tested on a database in which ECG and PPG waveforms were collected simultaneously during submaximal bicycle ergometer exercise tests. The results are promising, suggesting that the method provides a simple but more accurate PTT estimation in real applications.
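A sketch of the candidate-scoring rule, assuming a running-update scheme for the Gaussian parameters (the exact update used by the authors is not specified in the abstract); all numeric values are illustrative.

```python
# Sketch: pulse-peak candidates within one RR interval are scored by a Gaussian
# density over the RR-normalized PTT, and the Gaussian parameters are updated
# after each accepted peak. The forgetting-factor update is an assumption.
import numpy as np

class GaussianPTTSelector:
    def __init__(self, mu=0.3, var=0.01, alpha=0.05):
        self.mu, self.var, self.alpha = mu, var, alpha  # prior normalized PTT

    def score(self, ptt, rr):
        x = ptt / rr  # normalized PTT, hypothesized to be Gaussian
        return np.exp(-0.5 * (x - self.mu) ** 2 / self.var) / np.sqrt(2 * np.pi * self.var)

    def select(self, candidate_ptts, rr):
        scores = [self.score(p, rr) for p in candidate_ptts]
        best = candidate_ptts[int(np.argmax(scores))]
        # Adapt the Gaussian to slow variations of the cardiac cycle.
        x = best / rr
        self.mu = (1 - self.alpha) * self.mu + self.alpha * x
        self.var = (1 - self.alpha) * self.var + self.alpha * (x - self.mu) ** 2
        return best

selector = GaussianPTTSelector()
print(selector.select([0.21, 0.28, 0.45], rr=0.9))  # seconds, illustrative values
```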
Discriminating topology in galaxy distributions using network analysis
NASA Astrophysics Data System (ADS)
Hong, Sungryong; Coutinho, Bruno C.; Dey, Arjun; Barabási, Albert-L.; Vogelsberger, Mark; Hernquist, Lars; Gebhardt, Karl
2016-07-01
The large-scale distribution of galaxies is generally analysed using the two-point correlation function. However, this statistic does not capture the topology of the distribution, and it is necessary to resort to higher order correlations to break degeneracies. We demonstrate that an alternate approach using network analysis can discriminate between topologically different distributions that have similar two-point correlations. We investigate two galaxy point distributions, one produced by a cosmological simulation and the other by a Lévy walk. For the cosmological simulation, we adopt the redshift z = 0.58 slice from Illustris and select galaxies with stellar masses greater than 10^8 M⊙. The two-point correlation function of these simulated galaxies follows a single power law, ξ(r) ∼ r^(-1.5). Then, we generate Lévy walks matching the correlation function and abundance with the simulated galaxies. We find that, while the two simulated galaxy point distributions have the same abundance and two-point correlation function, their spatial distributions are very different; most prominently, the simulation shows filamentary structures, which are absent in Lévy fractals. To quantify these missing topologies, we adopt network analysis tools and measure the diameter, giant component, and transitivity of networks built by a conventional friends-of-friends recipe with various linking lengths. Unlike the abundance and two-point correlation function, these network quantities reveal a clear separation between the two simulated distributions; therefore, the galaxy distribution simulated by Illustris is not quantitatively a Lévy fractal. We find that the described network quantities offer an efficient tool for discriminating topologies and for comparing observed and theoretical distributions.
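A sketch of the network construction and the three quantities named above, using synthetic points and an illustrative linking length; networkx and scipy are assumed available.

```python
# Sketch: link points closer than a linking length (friends-of-friends style),
# then measure transitivity, giant-component size, and the diameter of the
# giant component. The random points and linking length are stand-ins.
import numpy as np
import networkx as nx
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)
points = rng.uniform(0.0, 100.0, size=(500, 3))   # Mpc-like box, illustrative
linking_length = 10.0

tree = cKDTree(points)
graph = nx.Graph()
graph.add_nodes_from(range(len(points)))
graph.add_edges_from(tree.query_pairs(r=linking_length))

giant = graph.subgraph(max(nx.connected_components(graph), key=len))
print("transitivity:", nx.transitivity(graph))
print("giant component fraction:", giant.number_of_nodes() / graph.number_of_nodes())
print("diameter of giant component:", nx.diameter(giant))
```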
Harvesting implementation for the GI-cat distributed catalog
NASA Astrophysics Data System (ADS)
Boldrini, Enrico; Papeschi, Fabrizio; Bigagli, Lorenzo; Mazzetti, Paolo
2010-05-01
The GI-cat framework implements a distributed catalog service supporting different international standards and interoperability arrangements in use by the geoscientific community. The distribution functionality, in conjunction with the mediation functionality, allows one to seamlessly query remote heterogeneous data sources, including OGC Web Services (e.g., OGC CSW, WCS, WFS and WMS), community standards such as UNIDATA THREDDS/OPeNDAP, SeaDataNet CDI (Common Data Index), GBIF (Global Biodiversity Information Facility) services, and OpenSearch engines. In the GI-cat modular architecture a distributor component carries out the distribution functionality by delegating queries to the mediator components (one for each different data source). Each of these mediator components is able to query a specific data source and convert back the results by mapping the foreign data model to the GI-cat internal one, based on ISO 19139. In order to cope with deployment scenarios in which local data is expected, a harvesting approach has been experimented with. The new strategy comes in addition to the consolidated distributed approach, allowing the user to switch between a remote and a local search at will for each federated resource; this extends GI-cat configuration possibilities. The harvesting strategy is implemented in GI-cat with, at its core, a local cache component implemented as a native XML database based on eXist. The different heterogeneous sources are queried for the bulk of available data; this data is then injected into the cache component after being converted to the GI-cat data model. The query and conversion steps are performed by the mediator components that are part of the GI-cat framework. Afterward, each new query can be exercised against the local data stored in the cache component. Considering the advantages and shortcomings of both the harvesting and query distribution approaches, it emerges that user-driven tuning is required to get the best of them; this is often related to the specific user scenarios to be implemented. GI-cat proved to be a flexible framework to address user needs. The GI-cat configurator tool was updated to make such tuning possible: each data source can be configured to enable either the harvesting or the query distribution approach; in the former case an appropriate harvesting interval can be set.
LETTER TO THE EDITOR: Exact energy distribution function in a time-dependent harmonic oscillator
NASA Astrophysics Data System (ADS)
Robnik, Marko; Romanovski, Valery G.; Stöckmann, Hans-Jürgen
2006-09-01
Following a recent work by Robnik and Romanovski (2006 J. Phys. A: Math. Gen. 39 L35; 2006 Open Syst. Inf. Dyn. 13 197-222), we derive an explicit formula for the universal distribution function of the final energies in a time-dependent 1D harmonic oscillator, whose functional form does not depend on the details of the frequency ω(t) and is closely related to the conservation of the adiabatic invariant. The normalized distribution function is P(x) = \pi^{-1} (2\mu^2 - x^2)^{-1/2}, where x = E_1 - \bar{E}_1; E_1 is the final energy, \bar{E}_1 is its average value, and \mu^2 is the variance of E_1. Both \bar{E}_1 and \mu^2 can be calculated exactly using the WKB approach to all orders.
M-dwarf exoplanet surface density distribution. A log-normal fit from 0.07 to 400 AU
NASA Astrophysics Data System (ADS)
Meyer, Michael R.; Amara, Adam; Reggiani, Maddalena; Quanz, Sascha P.
2018-04-01
Aims: We fit a log-normal function to the M-dwarf orbital surface density distribution of gas giant planets, over the mass range 1-10 times that of Jupiter, from 0.07 to 400 AU. Methods: We used a Markov chain Monte Carlo approach to explore the likelihoods of various parameter values consistent with point estimates of the data given our assumed functional form. Results: This fit is consistent with radial velocity, microlensing, and direct-imaging observations, is well-motivated from theoretical and phenomenological points of view, and predicts results of future surveys. We present probability distributions for each parameter and a maximum likelihood estimate solution. Conclusions: We suggest that this function makes more physical sense than other widely used functions, and we explore the implications of our results on the design of future exoplanet surveys.
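A sketch of the MCMC fitting step, using a plain Metropolis sampler and placeholder "data" points in place of the survey constraints actually combined in the paper.

```python
# Sketch: explore the likelihood of log-normal parameters (amplitude A, peak
# location mu, width sigma) for a planet surface density in log-semimajor-axis
# with a plain Metropolis sampler. Data points and errors are placeholders.
import numpy as np

rng = np.random.default_rng(4)
log_a = np.log10(np.array([0.1, 1.0, 3.0, 10.0, 50.0, 200.0]))   # AU
obs = np.array([0.02, 0.06, 0.08, 0.05, 0.02, 0.005])            # placeholder densities
err = 0.3 * obs + 1e-3

def model(theta, x):
    A, mu, sigma = theta
    return A * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def log_like(theta):
    if theta[0] <= 0 or theta[2] <= 0:
        return -np.inf
    return -0.5 * np.sum(((obs - model(theta, log_a)) / err) ** 2)

theta = np.array([0.05, 0.5, 1.0])
ll = log_like(theta)
chain = []
for _ in range(20000):
    prop = theta + rng.normal(0.0, [0.005, 0.05, 0.05])
    ll_prop = log_like(prop)
    if np.log(rng.random()) < ll_prop - ll:    # Metropolis acceptance rule
        theta, ll = prop, ll_prop
    chain.append(theta)
chain = np.array(chain[5000:])                 # discard burn-in
print("posterior medians (A, mu, sigma):", np.median(chain, axis=0))
```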
Del Dotto, Alessio; Pace, Emanuele; Salme, Giovanni; ...
2017-01-10
Poincare covariant definitions for the spin-dependent spectral function and for the momentum distributions within the light-front Hamiltonian dynamics are proposed for a three-fermion bound system, starting from the light-front wave function of the system. The adopted approach is based on the Bakamjian–Thomas construction of the Poincaré generators, which allows one to easily import the familiar and wide knowledge on the nuclear interaction into a light-front framework. The proposed formalism can find useful applications in refined nuclear calculations, such as those needed for evaluating the European Muon Collaboration effect or the semi-inclusive deep inelastic cross sections with polarized nuclear targets, since remarkably the light-front unpolarized momentum distribution by definition fulfills both normalization and momentum sum rules. As a result, a straightforward generalization of the definition of the light-front spectral function to an A-nucleon system is also shown.
Electrostatic field and charge distribution in small charged dielectric droplets
NASA Astrophysics Data System (ADS)
Storozhev, V. B.
2004-08-01
The charge distribution in small dielectric droplets is calculated on the basis of a continuum medium approximation. Charged spherical liquid droplets of methanol in the nanometer size range are considered. The problem is solved in the following way: we find the free energy of an ion in the dielectric droplet, which is a function of the distribution of the other ions in the droplet. The probability that the ion is located in some element of volume in the droplet is a function of its free energy in this element of volume. The same approach can be applied to the other ions in the droplet. The obtained charge distribution differs considerably from a surface distribution. The curve of the charge distribution in the droplet as a function of radius has a maximum near the surface. The relative concentration of charges in the vicinity of the center of the droplet is not zero, and it is higher the smaller the total charge of the droplet. According to the estimates, the model is applicable if the droplet radius is larger than 10 nm.
Towards an improved ensemble precipitation forecast: A probabilistic post-processing approach
NASA Astrophysics Data System (ADS)
Khajehei, Sepideh; Moradkhani, Hamid
2017-03-01
Recently, ensemble post-processing (EPP) has become a commonly used approach for reducing the uncertainty in forcing data and hence hydrologic simulation. The procedure was introduced to build ensemble precipitation forecasts based on the statistical relationship between observations and forecasts. More specifically, the approach relies on a transfer function that is developed from a bivariate joint distribution between the observations and the simulations in the historical period. The transfer function is used to post-process the forecast. In this study, we propose a Bayesian EPP approach based on copula functions (COP-EPP) to improve the reliability of the precipitation ensemble forecast. Evaluation of the copula-based method is carried out by comparing the performance of the generated ensemble precipitation with the outputs from an existing procedure, i.e., the mixed-type meta-Gaussian distribution. Monthly precipitation from the Climate Forecast System Reanalysis (CFS) and gridded observations from the Parameter-Elevation Relationships on Independent Slopes Model (PRISM) have been employed to generate the post-processed ensemble precipitation. Deterministic and probabilistic verification frameworks are utilized in order to evaluate the outputs from the proposed technique. The distribution of seasonal precipitation for the ensemble generated by the copula-based technique is compared to the observations and raw forecasts for three sub-basins located in the Western United States. Results show that both techniques are successful in producing reliable and unbiased ensemble forecasts; however, COP-EPP demonstrates considerable improvement in the ensemble forecast in both deterministic and probabilistic verification, in particular in characterizing extreme events in wet seasons.
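A sketch of the copula step under a Gaussian-copula assumption with illustrative gamma marginals; the paper's fitted marginals and copula family selection are not reproduced.

```python
# Sketch: with a Gaussian copula linking forecast and observation, the
# conditional distribution of the observation given a new forecast value is
# Gaussian in copula space and can be mapped back through the observed marginal.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
rho = 0.6                                    # copula correlation (illustrative)
marg_obs = stats.gamma(a=2.0, scale=30.0)    # stand-in observed precip marginal
marg_fcst = stats.gamma(a=2.0, scale=25.0)   # stand-in forecast marginal

def conditional_obs_ensemble(fcst_value, n=100):
    """Post-processed ensemble: observations drawn conditional on a forecast."""
    z_f = stats.norm.ppf(marg_fcst.cdf(fcst_value))        # forecast in normal space
    z_o = rng.normal(rho * z_f, np.sqrt(1 - rho**2), n)    # conditional normal law
    return marg_obs.ppf(stats.norm.cdf(z_o))               # back through obs marginal

print(conditional_obs_ensemble(60.0, n=5))   # mm/month, illustrative
```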
Quantal diffusion description of multinucleon transfers in heavy-ion collisions
NASA Astrophysics Data System (ADS)
Ayik, S.; Yilmaz, B.; Yilmaz, O.; Umar, A. S.
2018-05-01
Employing the stochastic mean-field (SMF) approach, we develop a quantal diffusion description of multinucleon transfer in heavy-ion collisions at finite impact parameters. The quantal transport coefficients are determined by the occupied single-particle wave functions of the time-dependent Hartree-Fock equations. As a result, the primary fragment mass and charge distribution functions are determined entirely in terms of the mean-field properties. This powerful description does not involve any adjustable parameter, includes the effects of shell structure, and is consistent with the fluctuation-dissipation theorem of nonequilibrium statistical mechanics. As a first application of the approach, we analyze the fragment mass distribution in 48Ca + 238U collisions at the center-of-mass energy Ec.m. = 193 MeV and compare the calculations with the experimental data.
Distribution Free Approach for Coordination of a Supply Chain with Consumer Return
NASA Astrophysics Data System (ADS)
Hu, Jinsong; Xu, Yuanji
Consumer return is considered in the coordination of a supply chain consisting of one manufacturer and one retailer. A distribution-free approach is employed to deal with a centralized decision model and a decentralized decision model, both constructed under the assumption that only the mean and variance of the demand function are known. A markdown money contract is designed to coordinate the supply chain, and it is proved that the contract can make the supply chain perfectly coordinated. Several numerical examples are given at the end of this paper.
Distributed Coordinated Control of Large-Scale Nonlinear Networks
Kundu, Soumya; Anghel, Marian
2015-11-08
We provide a distributed coordinated approach to the stability analysis and control design of large-scale nonlinear dynamical systems by using a vector Lyapunov functions approach. In this formulation the large-scale system is decomposed into a network of interacting subsystems and the stability of the system is analyzed through a comparison system. However, finding such a comparison system is not trivial. In this work, we propose a sum-of-squares based, completely decentralized approach for computing the comparison systems for networks of nonlinear systems. Moreover, based on the comparison systems, we introduce a distributed optimal control strategy in which the individual subsystems (agents) coordinate with their immediate neighbors to design local control policies that can exponentially stabilize the full system under initial disturbances. We illustrate the control algorithm on a network of interacting Van der Pol systems.
Compounding approach for univariate time series with nonstationary variances
NASA Astrophysics Data System (ADS)
Schäfer, Rudi; Barkhofen, Sonja; Guhr, Thomas; Stöckmann, Hans-Jürgen; Kuhl, Ulrich
2015-12-01
A defining feature of nonstationary systems is the time dependence of their statistical parameters. Measured time series may exhibit Gaussian statistics on short time horizons, due to the central limit theorem. The sample statistics for long time horizons, however, averages over the time-dependent variances. To model the long-term statistical behavior, we compound the local distribution with the distribution of its parameters. Here, we consider two concrete, but diverse, examples of such nonstationary systems: the turbulent air flow of a fan and a time series of foreign exchange rates. Our main focus is to empirically determine the appropriate parameter distribution for the compounding approach. To this end, we extract the relevant time scales by decomposing the time signals into windows and determine the distribution function of the thus obtained local variances.
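A sketch of the empirical procedure: decompose a synthetic nonstationary signal into windows, estimate the local variances, and check the heavy-tail signature of the compounded statistics; the window length and signal are illustrative choices.

```python
# Sketch: slice a series into windows short enough to be locally stationary,
# estimate the variance in each window, and inspect the distribution of those
# local variances. The synthetic signal and window length are illustrative.
import numpy as np

rng = np.random.default_rng(6)
# Synthetic nonstationary series: Gaussian with a slowly wandering variance.
local_sigma = np.exp(0.3 * np.cumsum(rng.normal(0, 0.05, 200)))
series = np.concatenate([rng.normal(0, s, 50) for s in local_sigma])

window = 50
n_win = len(series) // window
local_vars = series[: n_win * window].reshape(n_win, window).var(axis=1)

# The compounded (long-horizon) distribution mixes N(0, v) over these local
# variances; excess kurtosis relative to a single Gaussian is the signature.
kurtosis = ((series - series.mean()) ** 4).mean() / series.var() ** 2
print(f"sample kurtosis: {kurtosis:.2f} (3.0 for a stationary Gaussian)")
print(f"CV of local variances: {local_vars.std() / local_vars.mean():.2f}")
```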
The heterogeneity of segmental dynamics of filled EPDM by (1)H transverse relaxation NMR.
Moldovan, D; Fechete, R; Demco, D E; Culea, E; Blümich, B; Herrmann, V; Heinz, M
2011-01-01
Residual second moment of dipolar interactions M(2) and correlation time distributions of segmental dynamics were measured by Hahn-echo decays in combination with an inverse Laplace transform for a series of unfilled and filled EPDM samples as functions of carbon-black N683 filler content. The filler-polymer chain interactions, which dramatically restrict the mobility of bound rubber, modify the dynamics of the mobile chains. These changes depend on the filler content and can be evaluated from the distributions of M(2). A dipolar filter was applied to eliminate the contribution of bound rubber. In the first approach the Hahn-echo decays were fitted with a theoretical relationship to obtain the average values of the (1)H residual second moment
Gaussian functional regression for output prediction: Model assimilation and experimental design
NASA Astrophysics Data System (ADS)
Nguyen, N. C.; Peraire, J.
2016-03-01
In this paper, we introduce a Gaussian functional regression (GFR) technique that integrates multi-fidelity models with model reduction to efficiently predict the input-output relationship of a high-fidelity model. The GFR method combines the high-fidelity model with a low-fidelity model to provide an estimate of the output of the high-fidelity model in the form of a posterior distribution that can characterize uncertainty in the prediction. A reduced basis approximation is constructed upon the low-fidelity model and incorporated into the GFR method to yield an inexpensive posterior distribution of the output estimate. As this posterior distribution depends crucially on a set of training inputs at which the high-fidelity models are simulated, we develop a greedy sampling algorithm to select the training inputs. Our approach results in an output prediction model that inherits the fidelity of the high-fidelity model and has the computational complexity of the reduced basis approximation. Numerical results are presented to demonstrate the proposed approach.
NASA Technical Reports Server (NTRS)
Alexandrov, Mikhail D.; Cairns, Brian; Mishchenko, Michael I.
2012-01-01
We present a novel technique for remote sensing of cloud droplet size distributions. Polarized reflectances in the scattering angle range between 135° and 165° exhibit a sharply defined rainbow structure, the shape of which is determined mostly by single scattering properties of cloud particles and, therefore, can be modeled using the Mie theory. Fitting the observed rainbow with such a model (computed for a parameterized family of particle size distributions) has been used for cloud droplet size retrievals. We discovered that the relationship between the rainbow structures and the corresponding particle size distributions is deeper than has been commonly understood. In fact, the Mie theory-derived polarized reflectance as a function of reduced scattering angle (in the rainbow angular range) and the (monodisperse) particle radius appears to be a proxy to a kernel of an integral transform (similar to the sine Fourier transform on the positive semi-axis). This approach, called the rainbow Fourier transform (RFT), allows us to accurately retrieve the shape of the droplet size distribution by applying the corresponding inverse transform to the observed polarized rainbow. Because the basis functions of the proxy-transform are not exactly orthogonal in the finite angular range, the procedure is complemented by a simple regression technique, which removes the retrieval artifacts. This non-parametric approach does not require any a priori knowledge of the functional shape of the droplet size distribution and is computationally fast (no look-up tables, no fitting; the computations are the same as for forward modeling).
Automation of Space Station module power management and distribution system
NASA Technical Reports Server (NTRS)
Bechtel, Robert; Weeks, Dave; Walls, Bryan
1990-01-01
Viewgraphs on automation of space station module (SSM) power management and distribution (PMAD) system are presented. Topics covered include: reasons for power system automation; SSM/PMAD approach to automation; SSM/PMAD test bed; SSM/PMAD topology; functional partitioning; SSM/PMAD control; rack level autonomy; FRAMES AI system; and future technology needs for power system automation.
Time-sliced perturbation theory for large scale structure I: general formalism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blas, Diego; Garny, Mathias; Sibiryakov, Sergey
2016-07-01
We present a new analytic approach to describe large scale structure formation in the mildly non-linear regime. The central object of the method is the time-dependent probability distribution function generating correlators of the cosmological observables at a given moment of time. Expanding the distribution function around the Gaussian weight, we formulate a perturbative technique to calculate non-linear corrections to cosmological correlators, similar to the diagrammatic expansion in a three-dimensional Euclidean quantum field theory, with time playing the role of an external parameter. For the physically relevant case of cold dark matter in an Einstein-de Sitter universe, the time evolution of the distribution function can be found exactly and is encapsulated by a time-dependent coupling constant controlling the perturbative expansion. We show that all building blocks of the expansion are free from spurious infrared enhanced contributions that plague the standard cosmological perturbation theory. This paves the way towards the systematic resummation of infrared effects in large scale structure formation. We also argue that the approach proposed here provides a natural framework to account for the influence of short-scale dynamics on larger scales along the lines of effective field theory.
da Silva, Pedro Giovâni; Hernández, Malva Isabel Medina
2015-01-01
Community structure is driven by mechanisms linked to environmental, spatial and temporal processes, which have been successfully addressed using metacommunity framework. The relative importance of processes shaping community structure can be identified using several different approaches. Two approaches that are increasingly being used are functional diversity and community deconstruction. Functional diversity is measured using various indices that incorporate distinct community attributes. Community deconstruction is a way to disentangle species responses to ecological processes by grouping species with similar traits. We used these two approaches to determine whether they are improvements over traditional measures (e.g., species composition, abundance, biomass) for identification of the main processes driving dung beetle (Scarabaeinae) community structure in a fragmented mainland-island landscape in southern Brazilian Atlantic Forest. We sampled five sites in each of four large forest areas, two on the mainland and two on the island. Sampling was performed in 2012 and 2013. We collected abundance and biomass data from 100 sampling points distributed over 20 sampling sites. We studied environmental, spatial and temporal effects on dung beetle community across three spatial scales, i.e., between sites, between areas and mainland-island. The γ-diversity based on species abundance was mainly attributed to β-diversity as a consequence of the increase in mean α- and β-diversity between areas. Variation partitioning on abundance, biomass and functional diversity showed scale-dependence of processes structuring dung beetle metacommunities. We identified two major groups of responses among 17 functional groups. In general, environmental filters were important at both local and regional scales. Spatial factors were important at the intermediate scale. Our study supports the notion of scale-dependence of environmental, spatial and temporal processes in the distribution and functional organization of Scarabaeinae beetles. We conclude that functional diversity may be used as a complementary approach to traditional measures, and that community deconstruction allows sufficient disentangling of responses of different trait-based groups. PMID:25822150
A Simplified, General Approach to Simulating from Multivariate Copula Functions
Barry Goodwin
2012-01-01
Copulas have become an important analytic tool for characterizing multivariate distributions and dependence. One is often interested in simulating data from copula estimates. The process can be analytically and computationally complex and usually involves steps that are unique to a given parametric copula. We describe an alternative approach that uses probability...
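The abstract is truncated, but the standard simulation route it alludes to is the probability integral transform; the sketch below uses a Gaussian copula with illustrative marginals and makes no claim to match the authors' specific simplification.

```python
# Sketch: simulate from a Gaussian copula via the probability integral
# transform: draw correlated uniforms from the copula, then push each margin
# through its inverse CDF. Correlation and marginals are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
corr = np.array([[1.0, 0.7], [0.7, 1.0]])     # copula correlation (illustrative)

# Step 1: correlated standard normals via the Cholesky factor of corr.
z = rng.standard_normal((10000, 2)) @ np.linalg.cholesky(corr).T
# Step 2: probability integral transform to copula-distributed uniforms.
u = stats.norm.cdf(z)
# Step 3: push each uniform margin through the desired inverse CDF.
yield_draw = stats.gamma.ppf(u[:, 0], a=3.0, scale=20.0)   # e.g., crop yield
price_draw = stats.lognorm.ppf(u[:, 1], s=0.4, scale=5.0)  # e.g., price

print("sample rank correlation:", stats.spearmanr(yield_draw, price_draw)[0])
```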
Various modeling approaches have been developed for metal binding on humic substances. However, most of these models are still curve-fitting exercises-- the resulting set of parameters such as affinity constants (or the distribution of them) is found to depend on pH, ionic stren...
Accounting for range uncertainties in the optimization of intensity modulated proton therapy.
Unkelbach, Jan; Chan, Timothy C Y; Bortfeld, Thomas
2007-05-21
Treatment plans optimized for intensity modulated proton therapy (IMPT) may be sensitive to range variations. The dose distribution may deteriorate substantially when the actual range of a pencil beam does not match the assumed range. We present two treatment planning concepts for IMPT which incorporate range uncertainties into the optimization. The first method is a probabilistic approach. The range of a pencil beam is assumed to be a random variable, which makes the delivered dose and the value of the objective function a random variable too. We then propose to optimize the expectation value of the objective function. The second approach is a robust formulation that applies methods developed in the field of robust linear programming. This approach optimizes the worst case dose distribution that may occur, assuming that the ranges of the pencil beams may vary within some interval. Both methods yield treatment plans that are considerably less sensitive to range variations compared to conventional treatment plans optimized without accounting for range uncertainties. In addition, both approaches--although conceptually different--yield very similar results on a qualitative level.
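A toy sketch of the probabilistic formulation (expected value of a quadratic objective over discrete range scenarios), with random placeholder dose-influence matrices; the clinical objective functions and the robust counterpart are not reproduced.

```python
# Sketch: treat the pencil-beam range shift as a discrete random scenario,
# precompute a dose-influence matrix per scenario, and minimize the expected
# quadratic deviation from the prescription. All arrays are placeholders.
import numpy as np

rng = np.random.default_rng(8)
n_vox, n_beamlets, n_scen = 50, 20, 3
D = rng.random((n_scen, n_vox, n_beamlets))   # dose per unit weight, per scenario
p = np.array([0.25, 0.5, 0.25])               # scenario probabilities (+/- shift, nominal)
d_presc = np.ones(n_vox)

def expected_objective(w):
    doses = D @ w                             # shape (n_scen, n_vox)
    return np.sum(p * np.sum((doses - d_presc) ** 2, axis=1))

# Simple projected gradient descent on nonnegative beamlet weights.
w = np.full(n_beamlets, 0.05)
for _ in range(500):
    grad = 2 * np.einsum("s,svb,sv->b", p, D, D @ w - d_presc)
    w = np.maximum(w - 1e-4 * grad, 0.0)
print("expected objective after optimization:", expected_objective(w))
```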
TOPICS IN THEORY OF GENERALIZED PARTON DISTRIBUTIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Radyushkin, Anatoly V.
Several topics in the theory of generalized parton distributions (GPDs) are reviewed. First, we give a brief overview of the basics of the theory of generalized parton distributions and their relationship with simpler phenomenological functions, viz. form factors, parton densities and distribution amplitudes. Then, we discuss recent developments in building models for GPDs that are based on the formalism of double distributions (DDs). Special attention is given to a careful analysis of the singularity structure of DDs. The DD formalism is applied to the construction of model GPDs with a singular Regge behavior. Within the developed DD-based approach, we discuss the structure of GPD sum rules. It is shown that separation of DDs into the so-called "plus" part and the D-term part may be treated as a renormalization procedure for the GPD sum rules. This approach is compared with an alternative prescription based on analytic regularization.
Saleem, Muhammad; Sharif, Kashif; Fahmi, Aliya
2018-04-27
Applications of the Pareto distribution are common in reliability, survival and financial studies. In this paper, a Pareto mixture distribution is considered to model a heterogeneous population comprising two subgroups. Each of the two subgroups is characterized by the same functional form with distinct unknown shape and scale parameters. Bayes estimators have been derived under flat and conjugate priors using the squared-error loss function. Standard errors have also been derived for the Bayes estimators. An interesting feature of this study is the preparation of the components of the Fisher information matrix.
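In the simpler single-component case with known scale, the gamma prior is conjugate for the Pareto shape, and the Bayes estimator under squared-error loss is the posterior mean in closed form. The sketch below covers only that simplified case; the paper's two-component mixture with unknown scales is more involved.

```python
# Conjugate Bayes for a Pareto shape parameter with known scale (sketch).
import numpy as np

def pareto_shape_posterior(x, scale, a=1.0, b=1.0):
    """Gamma(a, b) prior on the shape alpha; the posterior is
    Gamma(a + n, b + sum log(x_i/scale)). Returns the posterior mean
    (Bayes estimator under squared-error loss) and the posterior sd."""
    x = np.asarray(x)
    a_post = a + len(x)
    b_post = b + np.sum(np.log(x / scale))
    return a_post / b_post, np.sqrt(a_post) / b_post

rng = np.random.default_rng(1)
data = 1.5 * (1 + rng.pareto(3.0, size=500))   # Pareto: shape 3, scale 1.5
print(pareto_shape_posterior(data, scale=1.5))
```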
On Improving Efficiency of Differential Evolution for Aerodynamic Shape Optimization Applications
NASA Technical Reports Server (NTRS)
Madavan, Nateri K.
2004-01-01
Differential Evolution (DE) is a simple and robust evolutionary strategy that has been proven effective in determining the global optimum for several difficult optimization problems. Although DE offers several advantages over traditional optimization approaches, its use in applications such as aerodynamic shape optimization, where the objective function evaluations are computationally expensive, is limited by the large number of function evaluations often required. In this paper various approaches for improving the efficiency of DE are reviewed and discussed. These approaches are implemented in a DE-based aerodynamic shape optimization method that uses a Navier-Stokes solver for the objective function evaluations. Parallelization techniques on distributed computers are used to reduce turnaround times. Results are presented for the inverse design of a turbine airfoil. The efficiency improvements achieved by the different approaches are evaluated and compared.
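A minimal illustration of the workflow with SciPy's built-in DE, using a cheap analytic test function in place of the expensive Navier-Stokes objective and parallel workers as a stand-in for the paper's distributed-computing setup:

```python
# Differential Evolution with parallel objective evaluations (sketch).
from scipy.optimize import differential_evolution, rosen

result = differential_evolution(
    rosen,                      # stand-in for the CFD-based objective
    bounds=[(-2.0, 2.0)] * 4,
    popsize=20,
    maxiter=200,
    updating="deferred",        # required for parallel evaluation
    workers=-1,                 # evaluate population members in parallel
    seed=1,
)
print(result.x, result.fun)
```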
NASA Astrophysics Data System (ADS)
Campo, Lorenzo; Castelli, Fabio; Caparrini, Francesca
2010-05-01
The modern distributed hydrological models allow the representation of the different surface and subsurface phenomena with great accuracy and high spatial and temporal resolution. Such complexity requires, in general, an equally accurate parametrization. A number of approaches have been followed in this respect, from simple local search methods (like the Nelder-Mead algorithm) that minimize a cost function representing some distance between the model's output and the available measurements, to more complex approaches like dynamic filters (such as the Ensemble Kalman Filter) that carry out an assimilation of the observations. In this work the first approach was followed in order to compare the performances of three different direct search algorithms on the calibration of a distributed hydrological balance model. The direct search family can be defined as the category of algorithms that make no use of derivatives of the cost function (which is, in general, a black box) and comprises a large number of possible approaches. The main benefit of this class of methods is that they don't require changes in the implementation of the numerical codes to be calibrated. The first algorithm is the classical Nelder-Mead, often used in many applications and utilized here as a reference. The second algorithm is a GSS (Generating Set Search) algorithm, built to guarantee the conditions of global convergence and suitable for the parallel, multi-start implementation presented here. The third is the EGO algorithm (Efficient Global Optimization), which is particularly suitable for calibrating black-box cost functions that are computationally expensive to evaluate (like a hydrological simulation). EGO minimizes the number of evaluations of the cost function by balancing the need to minimize a response surface that approximates the problem against the need to improve the approximation by sampling where the prediction error may be high. The hydrological model to be calibrated was MOBIDIC, a complete-balance distributed model developed at the Department of Civil and Environmental Engineering of the University of Florence. A discussion comparing the effectiveness of the different algorithms on several case studies of central Italy basins is provided.
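A toy version of the derivative-free calibration loop: the function run_model below is a placeholder for a MOBIDIC run, and the RMSE cost and parameter values are illustrative assumptions.

```python
# Black-box calibration with the Nelder-Mead direct search (sketch).
import numpy as np
from scipy.optimize import minimize

def run_model(params):                  # placeholder for the hydrological model
    a, b = params
    t = np.linspace(0.0, 1.0, 100)
    return a * np.exp(-b * t)           # toy "discharge" series

rng = np.random.default_rng(2)
observed = run_model([2.0, 3.0]) + 0.01 * rng.standard_normal(100)

def cost(params):                       # distance between model and measures
    return np.sqrt(np.mean((run_model(params) - observed) ** 2))

res = minimize(cost, x0=[1.0, 1.0], method="Nelder-Mead")
print(res.x)                            # should approach [2, 3]
```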
Multiagent distributed watershed management
NASA Astrophysics Data System (ADS)
Giuliani, M.; Castelletti, A.; Amigoni, F.; Cai, X.
2012-04-01
Deregulation and democratization of water, along with increasing environmental awareness, are challenging integrated water resources planning and management worldwide. The traditional centralized approach to water management, as described in much of the water resources literature, is often unfeasible in most modern social and institutional contexts. Thus it should be reconsidered from a more realistic and distributed perspective, in order to account for the presence of multiple, often independent Decision Makers (DMs) and many conflicting stakeholders. Game theory based approaches are often used to study these situations of conflict (Madani, 2010), but they are limited to a descriptive perspective. Multiagent systems (see Wooldridge, 2009), instead, seem to be a more suitable paradigm because they naturally allow the representation of a set of self-interested agents (DMs and/or stakeholders) acting in a distributed decision process at the agent level, resulting in a promising compromise between the ideal centralized solution and the actual uncoordinated practices. Casting a water management problem in a multiagent framework makes it possible to exploit the techniques and methods already available in this field for solving distributed optimization problems. In particular, in Distributed Constraint Satisfaction Problems (DCSP, see Yokoo et al., 2000), each agent controls some variables according to its own utility function but has to satisfy inter-agent constraints; while in Distributed Constraint Optimization Problems (DCOP, see Modi et al., 2005), the problem is generalized by introducing a global objective function to be optimized, which requires a coordination mechanism between the agents. In this work, we apply a DCSP-DCOP based approach to model a steady-state hypothetical watershed management problem (Yang et al., 2009), involving several active human agents (i.e. agents who make decisions) and reactive ecological agents (i.e. agents representing environmental interests). Different scenarios of distributed management are simulated, i.e. a situation where all the agents act independently, a situation in which global coordination takes place, and in-between solutions. The solutions are compared with the ones presented in Yang et al. (2009), aiming to present more general multiagent approaches to solving distributed management problems.
NASA Astrophysics Data System (ADS)
Saputro, D. R. S.; Amalia, F.; Widyaningsih, P.; Affan, R. C.
2018-05-01
The Bayesian method can be used to estimate the parameters of the multivariate multiple regression model. The Bayesian method involves two distributions, the prior and the posterior. The posterior distribution is influenced by the selection of the prior. Jeffreys' prior is a kind of non-informative prior distribution, used when information about the parameters is not available. The non-informative Jeffreys' prior is combined with the sample information, resulting in the posterior distribution, which is then used to estimate the parameters. The purpose of this research is to estimate the parameters of the multivariate regression model using the Bayesian method with the non-informative Jeffreys' prior. Based on the results and discussion, the estimates of β and Σ are obtained as the expected values of random variables under the marginal posterior distribution functions; the marginal posteriors for β and Σ are multivariate normal and inverse Wishart, respectively. However, the calculation of these expected values involves integrals whose values are difficult to determine. Therefore, an approach based on generating random samples according to the posterior distribution of each parameter, using the Markov chain Monte Carlo (MCMC) Gibbs sampling algorithm, is needed.
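A compact sketch of such a Gibbs sampler for Y = XB + E with i.i.d. N(0, Σ) rows of E under the Jeffreys prior: B given Σ is matrix normal around the least-squares estimate, and Σ given B is inverse Wishart. Initialization, iteration count, and the toy data are arbitrary choices.

```python
# Gibbs sampling for multivariate regression under Jeffreys' prior (sketch).
import numpy as np
from scipy.stats import invwishart

def gibbs_mv_regression(Y, X, n_iter=1000, seed=0):
    rng = np.random.default_rng(seed)
    n, m = Y.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    B_hat = XtX_inv @ X.T @ Y              # least-squares estimate
    Lu = np.linalg.cholesky(XtX_inv)
    Sigma = np.cov(Y.T)                    # crude starting value
    draws = []
    for _ in range(n_iter):
        # B | Sigma, Y ~ matrix normal(B_hat, (X'X)^-1, Sigma)
        Lv = np.linalg.cholesky(Sigma)
        B = B_hat + Lu @ rng.standard_normal(B_hat.shape) @ Lv.T
        # Sigma | B, Y ~ inverse Wishart(n, residual cross-product)
        R = Y - X @ B
        Sigma = invwishart.rvs(df=n, scale=R.T @ R, random_state=rng)
        draws.append((B, Sigma))
    return draws

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 3))
Y = X @ np.array([[1.0, -1.0], [0.5, 2.0], [0.0, 1.0]]) \
    + rng.standard_normal((100, 2))
draws = gibbs_mv_regression(Y, X)
```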
NASA Astrophysics Data System (ADS)
Liu, Y. Y.; Xie, S. H.; Jin, G.; Li, J. Y.
2009-04-01
Magnetoelectric annealing is necessary to remove antiferromagnetic domains and induce a macroscopic magnetoelectric effect in polycrystalline magnetoelectric materials, and in this paper we study the effective magnetoelectric properties of perpendicularly annealed polycrystalline Cr2O3 using an effective medium approximation. The effects of temperature, grain aspect ratio, and two different types of orientation distribution function have been analyzed, and an unusual material symmetry is observed when the orientation distribution function depends only on the Euler angle ψ. The optimal grain aspect ratio and texture coefficient are also identified. The approach can be applied to analyze the microstructural field distribution and macroscopic properties of a wide range of magnetoelectric polycrystals.
Information theory lateral density distribution for Earth inferred from global gravity field
NASA Technical Reports Server (NTRS)
Rubincam, D. P.
1981-01-01
Information Theory Inference, better known as the Maximum Entropy Method, was used to infer the lateral density distribution inside the Earth. The approach assumed that the Earth consists of indistinguishable Maxwell-Boltzmann particles populating infinitesimal volume elements, and followed the standard methods of statistical mechanics (maximizing the entropy function). The GEM 10B spherical harmonic gravity field coefficients, complete to degree and order 36, were used as constraints on the lateral density distribution. The spherically symmetric part of the density distribution was assumed to be known. The lateral density variation was assumed to be small compared to the spherically symmetric part. The resulting information theory density distribution for the cases of no crust removed, 30 km of compensated crust removed, and 30 km of uncompensated crust removed all gave broad density anomalies extending deep into the mantle, but with the density contrasts being the greatest towards the surface (typically ±0.004 g cm⁻³ in the first two cases and ±0.04 g cm⁻³ in the third). None of the density distributions resemble classical organized convection cells. The information theory approach may have use in choosing Standard Earth Models, but the inclusion of seismic data into the approach appears difficult.
Entangled-coherent-state quantum key distribution with entanglement witnessing
NASA Astrophysics Data System (ADS)
Simon, David S.; Jaeger, Gregg; Sergienko, Alexander V.
2014-01-01
An entanglement-witness approach to quantum coherent-state key distribution and a system for its practical implementation are described. In this approach, eavesdropping can be detected by a change in sign of either of two witness functions: an entanglement witness S or an eavesdropping witness W. The effects of loss and eavesdropping on system operation are evaluated as a function of distance. Although the eavesdropping witness W does not directly witness entanglement for the system, its behavior remains related to that of the true entanglement witness S. Furthermore, W is easier to implement experimentally than S. W crosses the axis at a finite distance, in a manner reminiscent of entanglement sudden death. The distance at which this occurs changes measurably when an eavesdropper is present. The distance dependence of the two witnesses due to amplitude reduction and due to increased variance resulting from both ordinary propagation losses and possible eavesdropping activity is provided. Finally, the information content and secure key rate of a continuous variable protocol using this witness approach are given.
Vacuum quantum stress tensor fluctuations: A diagonalization approach
NASA Astrophysics Data System (ADS)
Schiappacasse, Enrico D.; Fewster, Christopher J.; Ford, L. H.
2018-01-01
Large vacuum fluctuations of a quantum stress tensor can be described by the asymptotic behavior of its probability distribution. Here we focus on stress tensor operators which have been averaged with a sampling function in time. The Minkowski vacuum state is not an eigenstate of the time-averaged operator, but can be expanded in terms of its eigenstates. We calculate the probability distribution and the cumulative probability distribution for obtaining a given value in a measurement of the time-averaged operator taken in the vacuum state. In these calculations, we study a specific operator that contributes to the stress-energy tensor of a massless scalar field in Minkowski spacetime, namely, the normal ordered square of the time derivative of the field. We analyze the rate of decrease of the tail of the probability distribution for different temporal sampling functions, such as compactly supported functions and the Lorentzian function. We find that the tails decrease relatively slowly, as exponentials of fractional powers, in agreement with previous work using the moments of the distribution. Our results lend additional support to the conclusion that large vacuum stress tensor fluctuations are more probable than large thermal fluctuations, and may have observable effects.
ERIC Educational Resources Information Center
Fidalgo, Angel M.; Ferreres, Doris; Muniz, Jose
2004-01-01
Sample-size restrictions limit the contingency table approaches based on asymptotic distributions, such as the Mantel-Haenszel (MH) procedure, for detecting differential item functioning (DIF) in many practical applications. Within this framework, the present study investigated the power and Type I error performance of empirical and inferential…
Hamiltonian Monte Carlo acceleration using surrogate functions with random bases.
Zhang, Cheng; Shahbaba, Babak; Zhao, Hongkai
2017-11-01
For big data analysis, the high computational cost of Bayesian methods often limits their applications in practice. In recent years, there have been many attempts to improve the computational efficiency of Bayesian inference. Here we propose an efficient and scalable computational technique for a state-of-the-art Markov chain Monte Carlo method, namely, Hamiltonian Monte Carlo. The key idea is to explore and exploit the structure and regularity in parameter space for the underlying probabilistic model to construct an effective approximation of its geometric properties. To this end, we build a surrogate function to approximate the target distribution using properly chosen random bases and an efficient optimization process. The resulting method provides a flexible, scalable, and efficient sampling algorithm, which converges to the correct target distribution. We show that by choosing the basis functions and optimization process differently, our method can be related to other approaches for the construction of surrogate functions such as generalized additive models or Gaussian process models. Experiments based on simulated and real data show that our approach leads to substantially more efficient sampling algorithms compared to existing state-of-the-art methods.
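For orientation, a bare-bones HMC transition (leapfrog plus Metropolis correction) is sketched below on a toy Gaussian target; in the surrogate variant described above, grad_logp would be replaced by the gradient of the cheap random-bases approximation while the accept/reject step keeps the exact target.

```python
# Minimal Hamiltonian Monte Carlo step (sketch).
import numpy as np

def hmc_step(q, logp, grad_logp, eps=0.1, n_leapfrog=20, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    p = rng.standard_normal(q.shape)            # resample momentum
    q_new, p_new = q.copy(), p.copy()
    p_new += 0.5 * eps * grad_logp(q_new)       # leapfrog: half step in p
    for _ in range(n_leapfrog - 1):
        q_new += eps * p_new
        p_new += eps * grad_logp(q_new)
    q_new += eps * p_new
    p_new += 0.5 * eps * grad_logp(q_new)       # final half step in p
    # Metropolis correction: accept with probability min(1, exp(dH))
    dH = (logp(q_new) - 0.5 * p_new @ p_new) - (logp(q) - 0.5 * p @ p)
    return q_new if np.log(rng.uniform()) < dH else q

logp = lambda q: -0.5 * q @ q                   # standard normal target
grad = lambda q: -q
rng = np.random.default_rng(3)
q, chain = np.zeros(2), []
for _ in range(1000):
    q = hmc_step(q, logp, grad, rng=rng)
    chain.append(q)
```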
NASA Astrophysics Data System (ADS)
Dubreuil, S.; Salaün, M.; Rodriguez, E.; Petitjean, F.
2018-01-01
This study investigates the construction and identification of the probability distribution of random modal parameters (natural frequencies and effective parameters) in structural dynamics. As these parameters present various types of dependence structures, the retained approach is based on pair copula construction (PCC). A literature review leads us to choose a D-Vine model for the construction of the probability distributions of modal parameters. Identification of this model is based on likelihood maximization, which makes it sensitive to the dimension of the distribution, namely the number of considered modes in our context. In this respect, a mode selection preprocessing step is proposed. It allows the selection of the relevant random modes for a given transfer function. The second point addressed in this study concerns the choice of the D-Vine model. Indeed, the D-Vine model is not uniquely defined. Two strategies are proposed and compared. The first one is based on the context of the study whereas the second one is purely based on statistical considerations. Finally, the proposed approaches are numerically studied and compared with respect to their capabilities, first in the identification of the probability distribution of random modal parameters and second in the estimation of the 99% quantiles of some transfer functions.
Confined active Brownian particles: theoretical description of propulsion-induced accumulation
NASA Astrophysics Data System (ADS)
Das, Shibananda; Gompper, Gerhard; Winkler, Roland G.
2018-01-01
The stationary-state distribution function of confined active Brownian particles (ABPs) is analyzed by computer simulations and analytical calculations. We consider a radial harmonic as well as an anharmonic confinement potential. In the simulations, the ABP is propelled with a prescribed velocity along a body-fixed direction, which is changing in a diffusive manner. For the analytical approach, the Cartesian components of the propulsion velocity are assumed to change independently; active Ornstein-Uhlenbeck particle (AOUP). This results in very different velocity distribution functions. The analytical solution of the Fokker-Planck equation for an AOUP in a harmonic potential is presented and a conditional distribution function is provided for the radial particle distribution at a given magnitude of the propulsion velocity. This conditional probability distribution facilitates the description of the coupling of the spatial coordinate and propulsion, which yields activity-induced accumulation of particles. For the anharmonic potential, a probability distribution function is derived within the unified colored noise approximation. The comparison of the simulation results with theoretical predictions yields good agreement for large rotational diffusion coefficients, e.g. due to tumbling, even for large propulsion velocities (Péclet numbers). However, we find significant deviations already for moderate Péclet numbers, when the rotational diffusion coefficient is on the order of the thermal one.
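An Euler-Maruyama sketch of the simulated dynamics: a 2D ABP with propulsion speed v0 and rotational diffusion Dr in a radial harmonic trap of stiffness k. All parameter values are arbitrary illustrations, not those of the study.

```python
# Active Brownian particle in a harmonic trap, Euler-Maruyama (sketch).
import numpy as np

def simulate_abp(v0=1.0, k=1.0, Dt=0.1, Dr=1.0, dt=1e-3, n_steps=50_000, seed=4):
    rng = np.random.default_rng(seed)
    x, theta = np.zeros(2), 0.0
    traj = np.empty((n_steps, 2))
    for i in range(n_steps):
        e = np.array([np.cos(theta), np.sin(theta)])    # propulsion direction
        x += (v0 * e - k * x) * dt + np.sqrt(2 * Dt * dt) * rng.standard_normal(2)
        theta += np.sqrt(2 * Dr * dt) * rng.standard_normal()
        traj[i] = x
    return traj

r = np.linalg.norm(simulate_abp(), axis=1)   # sample of the radial distribution
```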
Quasi-Newton methods for parameter estimation in functional differential equations
NASA Technical Reports Server (NTRS)
Brewer, Dennis W.
1988-01-01
A state-space approach to parameter estimation in linear functional differential equations is developed using the theory of linear evolution equations. A locally convergent quasi-Newton type algorithm is applied to distributed systems with particular emphasis on parameters that induce unbounded perturbations of the state. The algorithm is computationally implemented on several functional differential equations, including coefficient and delay estimation in linear delay-differential equations.
NASA Astrophysics Data System (ADS)
Pedretti, Daniele
2017-04-01
Power-law (PL) distributions are widely adopted to define the late-time scaling of solute breakthrough curves (BTCs) during transport experiments in highly heterogeneous media. However, from a statistical perspective, distinguishing between a PL distribution and another tailed distribution is difficult, particularly when a qualitative assessment based on visual analysis of double-logarithmic plotting is used. This presentation discusses the results from a recent analysis in which a suite of statistical tools was applied to rigorously evaluate the scaling of BTCs from experiments that generate tailed distributions typically described as PL at late time. To this end, a set of BTCs from numerical simulations in highly heterogeneous media were generated using a transition probability approach (T-PROGS) coupled to a finite-difference numerical solver of the flow equation (MODFLOW) and a random-walk particle tracking approach for Lagrangian transport (RW3D). The T-PROGS fields assumed randomly distributed hydraulic heterogeneities with long correlation scales, creating solute channeling and anomalous transport. For simplicity, transport was simulated as purely advective. This combination of tools generates strongly non-symmetric BTCs visually resembling PL distributions at late time when plotted in double-log scales. Unlike other combinations of modeling parameters and boundary conditions (e.g. matrix diffusion in fractures), at late time no direct link exists between the mathematical functions describing the scaling of these curves and the physical parameters controlling transport. The results suggest that the statistical tests fail to describe the majority of curves as PL distributed. Moreover, they suggest that PL and lognormal distributions have the same likelihood of representing parametrically the shape of the tails. It is noticeable that forcing a model to reproduce the tail as a PL function results in a distribution of PL slopes between 1.2 and 4, which are the typical values observed during field experiments. We conclude that care must be taken when defining a BTC late-time distribution as a power-law function. Even though the estimated scaling factors are found to fall in traditional ranges, the actual distribution controlling the scaling of concentration may differ from a power-law function, with direct consequences, for instance, for the selection of effective parameters in upscaled modeling solutions.
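The kind of check being advocated can be illustrated in a few lines: fit a continuous power law and a lognormal to the same tail and compare log-likelihoods. The synthetic data, the tail cutoff, and the untruncated lognormal fit below are deliberate simplifications of a rigorous analysis.

```python
# Power law vs. lognormal on a heavy tail (illustrative comparison).
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
data = stats.lognorm(s=1.5).rvs(5000, random_state=rng)
xmin = np.quantile(data, 0.8)                  # tail cutoff: a modeling choice
tail = data[data >= xmin]

# Continuous power-law MLE on [xmin, inf): alpha = 1 + n / sum(log(x/xmin))
alpha = 1 + len(tail) / np.sum(np.log(tail / xmin))
ll_pl = np.sum(np.log((alpha - 1) / xmin) - alpha * np.log(tail / xmin))

shape, loc, scale = stats.lognorm.fit(tail, floc=0)   # rough, untruncated fit
ll_ln = np.sum(stats.lognorm.logpdf(tail, shape, loc, scale))

print(f"power law: alpha={alpha:.2f}, logL={ll_pl:.1f}; lognormal logL={ll_ln:.1f}")
```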
Nonparametric Bayesian models for a spatial covariance.
Reich, Brian J; Fuentes, Montserrat
2012-01-01
A crucial step in the analysis of spatial data is to estimate the spatial correlation function that determines the relationship between a spatial process at two locations. The standard approach to selecting the appropriate correlation function is to use prior knowledge or exploratory analysis, such as a variogram analysis, to select the correct parametric correlation function. Rather than selecting a particular parametric correlation function, we treat the covariance function as an unknown function to be estimated from the data. We propose a flexible prior for the correlation function to provide robustness to the choice of correlation function. We specify the prior for the correlation function using spectral methods and the Dirichlet process prior, which is a common prior for an unknown distribution function. Our model does not require Gaussian data or spatial locations on a regular grid. The approach is demonstrated using a simulation study as well as an analysis of California air pollution data.
Boundary-Layer Receptivity and Integrated Transition Prediction
NASA Technical Reports Server (NTRS)
Chang, Chau-Lyan; Choudhari, Meelan
2005-01-01
The adjoint parabolized stability equations (PSE) formulation is used to calculate the boundary layer receptivity to localized surface roughness and suction for compressible boundary layers. Receptivity efficiency functions predicted by the adjoint PSE approach agree well with results based on other nonparallel methods, including linearized Navier-Stokes equations, for both Tollmien-Schlichting waves and crossflow instability in swept wing boundary layers. The receptivity efficiency function can be regarded as the Green's function for the disturbance amplitude evolution in a nonparallel (growing) boundary layer. Given the Fourier-transformed geometry factor distribution along the chordwise direction, the linear disturbance amplitude evolution for a finite-size, distributed nonuniformity can be computed by evaluating the integral effects of both disturbance generation and linear amplification. The synergistic approach via the linear adjoint PSE for receptivity and the nonlinear PSE for disturbance evolution downstream of the leading edge forms the basis for an integrated transition prediction tool. Eventually, such physics-based, high-fidelity prediction methods could simulate the transition process from disturbance generation through nonlinear breakdown in a holistic manner.
NASA Astrophysics Data System (ADS)
Buddendorf, B.; Fabris, L.; Malcolm, I.; Lazzaro, G.; Tetzlaff, D.; Botter, G.; Soulsby, C.
2016-12-01
Wild Atlantic salmon populations in Scottish rivers constitute an important economic and recreational resource, as well as being a key component of biodiversity. Salmon have specific habitat requirements at different life stages and their distribution is therefore strongly influenced by a complex suite of biological and physical controls. Stream hydrodynamics have a strong influence on habitat quality and affect the distribution and density of juvenile salmon. As stream hydrodynamics directly relate to stream flow variability and channel morphology, the effects of hydroclimatic drivers on the spatial and temporal variability of habitat suitability can be assessed. Critical Displacement Velocity (CDV), which describes the velocity at which fish can no longer hold station, is one potential approach for characterising habitat suitability. CDV is obtained using an empirical formula that depends on fish size and stream temperature. By characterising the proportion of a reach below CDV it is possible to assess the suitable area. We demonstrate that a generic analytical approach based on field survey and hydraulic modelling can provide insights on the interactions between flow regime and average suitable area (SA) for juvenile salmon that could be extended to other aquatic species. Analytical functions are used to model the pdf of stream flow p(q) and the relationship between flow and suitable area SA(q). Theoretically these functions can assume any form. Here we used a gamma distribution to model p(q) and a gamma function to model SA(q). Integrating the product of these functions we obtain an analytical expression of SA. Since parameters of p(q) can be estimated from meteorological and flow measurements, they can be used directly to predict the effect of flow regime on SA. We show the utility of the approach with reference to 6 electrofishing sites in a single river system where long term (50 years) data on spatially distributed juvenile salmon densities are available.
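Numerically, the construction reduces to integrating SA(q) against the streamflow pdf p(q). The gamma-type forms and parameter values below are purely illustrative assumptions.

```python
# Mean suitable area as the integral of SA(q) against p(q) (sketch).
import numpy as np
from scipy import stats
from scipy.integrate import quad

p_q = stats.gamma(a=2.0, scale=1.0).pdf          # streamflow pdf p(q)
SA_q = lambda q: 100.0 * q * np.exp(-q / 2.0)    # gamma-type SA(q), in m^2

mean_SA, _ = quad(lambda q: SA_q(q) * p_q(q), 0.0, np.inf)
print(f"mean suitable area ~ {mean_SA:.1f} m^2")
```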
Decisions with Uncertain Consequences—A Total Ordering on Loss-Distributions
König, Sandra; Schauer, Stefan
2016-01-01
Decisions are often based on imprecise, uncertain or vague information. Likewise, the consequences of an action are often equally unpredictable, thus putting the decision maker into a twofold jeopardy. Assuming that the effects of an action can be modeled by a random variable, then the decision problem boils down to comparing different effects (random variables) by comparing their distribution functions. Although the full space of probability distributions cannot be ordered, a properly restricted subset of distributions can be totally ordered in a practically meaningful way. We call these loss-distributions, since they provide a substitute for the concept of loss-functions in decision theory. This article introduces the theory behind the necessary restrictions and the hereby constructible total ordering on random loss variables, which enables decisions under uncertainty of consequences. Using data obtained from simulations, we demonstrate the practical applicability of our approach. PMID:28030572
Towards a model of pion generalized parton distributions from Dyson-Schwinger equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moutarde, H.
2015-04-10
We compute the pion quark Generalized Parton Distribution H^q and Double Distributions F^q and G^q in a coupled Bethe-Salpeter and Dyson-Schwinger approach. We use simple algebraic expressions inspired by the numerical resolution of the Dyson-Schwinger and Bethe-Salpeter equations. We explicitly check the support and polynomiality properties, and the behavior under charge conjugation or time invariance of our model. We derive analytic expressions for the pion Double Distributions and Generalized Parton Distribution at vanishing pion momentum transfer at a low scale. Our model compares very well to experimental pion form factor or parton distribution function data.
Four Theorems on the Psychometric Function
May, Keith A.; Solomon, Joshua A.
2013-01-01
In a 2-alternative forced-choice (2AFC) discrimination task, observers choose which of two stimuli has the higher value. The psychometric function for this task gives the probability of a correct response for a given stimulus difference, Δx. This paper proves four theorems about the psychometric function. Assuming the observer applies a transducer and adds noise, Theorem 1 derives a convenient general expression for the psychometric function. Discrimination data are often fitted with a Weibull function. Theorem 2 proves that the Weibull "slope" parameter, β, can be approximated by the product of β_noise and β_transducer, where β_noise is the β of the Weibull function that fits best to the cumulative noise distribution, and β_transducer depends on the transducer. We derive general expressions for β_noise and β_transducer, from which we derive expressions for specific cases. One case that follows naturally from our general analysis is Pelli's finding relating β to the exponent b of a power-function transducer. We also consider two limiting cases. Theorem 3 proves that, as sensitivity improves, 2AFC performance will usually approach that for a linear transducer, whatever the actual transducer; we show that this does not apply at signal levels where the transducer gradient is zero, which explains why it does not apply to contrast detection. Theorem 4 proves that, when the exponent of a power-function transducer approaches zero, 2AFC performance approaches that of a logarithmic transducer. We show that the power-function exponents of 0.4-0.5 fitted to suprathreshold contrast discrimination data are close enough to zero for the fitted psychometric function to be practically indistinguishable from that of a log transducer. Finally, Weibull β reflects the shape of the noise distribution, and we used our results to assess the recent claim that internal noise has higher kurtosis than a Gaussian. Our analysis of β for contrast discrimination suggests that, if internal noise is stimulus-independent, it has lower kurtosis than a Gaussian. PMID:24124456
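A small sketch of fitting the 2AFC Weibull form P(correct) = 1 - 0.5*exp(-(Δx/α)^β) to proportion-correct data; the synthetic trials stand in for a discrimination experiment.

```python
# Fitting a Weibull psychometric function to 2AFC data (sketch).
import numpy as np
from scipy.optimize import curve_fit

def weibull_2afc(x, alpha, beta):
    return 1.0 - 0.5 * np.exp(-(x / alpha) ** beta)

x = np.array([0.5, 1.0, 2.0, 4.0, 8.0])          # stimulus differences
rng = np.random.default_rng(6)
n_trials = 100
p_obs = rng.binomial(n_trials, weibull_2afc(x, 2.0, 1.5)) / n_trials

(alpha_hat, beta_hat), _ = curve_fit(weibull_2afc, x, p_obs, p0=[1.0, 1.0])
print(alpha_hat, beta_hat)                       # threshold and "slope"
```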
Distributed robust adaptive control of high order nonlinear multi agent systems.
Hashemi, Mahnaz; Shahgholian, Ghazanfar
2018-03-01
In this paper, a robust adaptive neural-network-based controller is presented for multi-agent high-order nonlinear systems with unknown nonlinear functions, unknown control gains and unknown actuator failures. First, a neural network (NN) is used to approximate the nonlinear uncertainty terms derived from the controller design procedure for the followers. Then, a novel distributed robust adaptive controller is developed by combining the backstepping method and the Dynamic Surface Control (DSC) approach. The proposed controllers are distributed in the sense that the designed controller for each follower agent only requires relative state information between itself and its neighbors. By using Young's inequality, only a few parameters need to be tuned regardless of the number of NN nodes. Accordingly, the problems of the curse of dimensionality and explosion of complexity are counteracted simultaneously. New adaptive laws are designed by choosing appropriate Lyapunov-Krasovskii functionals. The proposed approach proves the boundedness of all the closed-loop signals in addition to the convergence of the distributed tracking errors to a small neighborhood of the origin. Simulation results indicate that the proposed controller is effective and robust.
Off-the-shelf Control of Data Analysis Software
NASA Astrophysics Data System (ADS)
Wampler, S.
The Gemini Project must provide convenient access to data analysis facilities to a wide user community. The international nature of this community makes the selection of data analysis software particularly interesting, with staunch advocates of systems such as ADAM and IRAF among the users. Additionally, the continuing trends towards increased use of networked systems and distributed processing impose additional complexity. To meet these needs, the Gemini Project is proposing the novel approach of using low-cost, off-the-shelf software to abstract out both the control and distribution of data analysis from the functionality of the data analysis software. For example, the orthogonal nature of control versus function means that users might select analysis routines from both ADAM and IRAF as appropriate, distributing these routines across a network of machines. It is the belief of the Gemini Project that this approach results in a system that is highly flexible, maintainable, and inexpensive to develop. The Khoros visualization system is presented as an example of control software that is currently available for providing the control and distribution within a data analysis system. The visual programming environment provided with Khoros is also discussed as a means to providing convenient access to this control.
Matching Pursuit with Asymmetric Functions for Signal Decomposition and Parameterization
Spustek, Tomasz; Jedrzejczak, Wiesław Wiktor; Blinowska, Katarzyna Joanna
2015-01-01
The method of adaptive approximations by Matching Pursuit makes it possible to decompose signals into basic components (called atoms). The approach relies on fitting, in an iterative way, functions from a large predefined set (called dictionary) to an analyzed signal. Usually, symmetric functions coming from the Gabor family (sine modulated Gaussian) are used. However Gabor functions may not be optimal in describing waveforms present in physiological and medical signals. Many biomedical signals contain asymmetric components, usually with a steep rise and slower decay. For the decomposition of this kind of signal we introduce a dictionary of functions of various degrees of asymmetry – from symmetric Gabor atoms to highly asymmetric waveforms. The application of this enriched dictionary to Otoacoustic Emissions and Steady-State Visually Evoked Potentials demonstrated the advantages of the proposed method. The approach provides more sparse representation, allows for correct determination of the latencies of the components and removes the "energy leakage" effect generated by symmetric waveforms that do not sufficiently match the structures of the analyzed signal. Additionally, we introduced a time-frequency-amplitude distribution that is more adequate for representation of asymmetric atoms than the conventional time-frequency-energy distribution. PMID:26115480
Computing exact bundle compliance control charts via probability generating functions.
Chen, Binchao; Matis, Timothy; Benneyan, James
2016-06-01
Compliance with evidence-based practices, individually and in 'bundles', remains an important focus of healthcare quality improvement for many clinical conditions. The exact probability distribution of composite bundle compliance measures used to develop corresponding control charts and other statistical tests is based on a fairly large convolution whose direct calculation can be computationally prohibitive. Various series expansions and other approximation approaches have been proposed, each with computational and accuracy tradeoffs, especially in the tails. This same probability distribution also arises in other important healthcare applications, such as for risk-adjusted outcomes and bed demand prediction, with the same computational difficulties. As an alternative, we use probability generating functions to rapidly obtain exact results and illustrate the improved accuracy and detection over other methods. Numerical testing across a wide range of applications demonstrates the computational efficiency and accuracy of this approach.
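The generating-function idea can be seen in miniature for a bundle of independent items: the exact distribution of the number of compliant items (a Poisson-binomial) is the coefficient vector of the product of per-item PGFs, obtained by repeated polynomial convolution. The compliance rates below are made-up values.

```python
# Exact Poisson-binomial pmf via probability generating functions (sketch).
import numpy as np

def poisson_binomial_pmf(p):
    """pmf of the sum of independent Bernoulli(p_i) variables."""
    pmf = np.array([1.0])                       # PGF of the empty sum
    for pi in p:
        pmf = np.convolve(pmf, [1 - pi, pi])    # multiply by (1 - p_i) + p_i*z
    return pmf                                  # pmf[k] = P(S = k)

p = [0.95, 0.90, 0.85, 0.99]                    # per-item compliance rates
pmf = poisson_binomial_pmf(p)
print(pmf, pmf.sum())                           # probabilities sum to 1
```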
Gomez-Ramirez, Jaime; Sanz, Ricardo
2013-09-01
One of the most important scientific challenges today is the quantitative and predictive understanding of biological function. Classical mathematical and computational approaches have been enormously successful in modeling inert matter, but they may be inadequate to address inherent features of biological systems. We address the conceptual and methodological obstacles that lie in the inverse problem in biological systems modeling. We introduce a full Bayesian approach (FBA), a theoretical framework to study biological function, in which probability distributions are conditional on biophysical information that physically resides in the biological system that is studied by the scientist.
NASA Astrophysics Data System (ADS)
Malkin, B. Z.; Abishev, N. M.; Baibekov, E. I.; Pytalev, D. S.; Boldyrev, K. N.; Popova, M. N.; Bettinelli, M.
2017-07-01
We construct a distribution function of the strain-tensor components induced by point defects in an elastically anisotropic continuum, which can be used to account quantitatively for many effects observed in different branches of condensed matter physics. Parameters of the derived six-dimensional generalized Lorentz distribution are expressed through the integrals computed over the array of strains. The distribution functions for the cubic diamond and elpasolite crystals and for tetragonal crystals with the zircon and scheelite structures are presented. Our theoretical approach is supported by a successful modeling of specific line shapes of singlet-doublet transitions of the Tm³⁺ ions doped into ABO₄ (A = Y, Lu; B = P, V) crystals with zircon structure, observed in high-resolution optical spectra. The values of the defect strengths of impurity Tm³⁺ ions in the oxygen surroundings, obtained as a result of this modeling, can be used in future studies of random strains in different rare-earth oxides.
A risk-based multi-objective model for optimal placement of sensors in water distribution system
NASA Astrophysics Data System (ADS)
Naserizade, Sareh S.; Nikoo, Mohammad Reza; Montaseri, Hossein
2018-02-01
In this study, a new stochastic model based on Conditional Value at Risk (CVaR) and multi-objective optimization methods is developed for optimal placement of sensors in a water distribution system (WDS). This model minimizes the risk caused by simultaneous multi-point contamination injection in the WDS using the CVaR approach. The CVaR considers the uncertainties of contamination injection in the form of a probability distribution function and captures low-probability extreme events, in which extreme losses occur in the tail of the loss distribution. A four-objective optimization model based on the NSGA-II algorithm is developed to minimize the losses of contamination injection (through the CVaR of the affected population and detection time) and also to minimize the two other main criteria of optimal sensor placement, the probability of undetected events and cost. Finally, to determine the best solution, the Preference Ranking Organization METHod for Enrichment Evaluation (PROMETHEE), as a subgroup of the Multi Criteria Decision Making (MCDM) approach, is utilized to rank the alternatives on the trade-off curve among the objective functions. Also, a sensitivity analysis is performed to investigate the importance of each criterion on the PROMETHEE results considering three relative weighting scenarios. The effectiveness of the proposed methodology is examined by applying it to the Lamerd WDS in the southwestern part of Iran. The PROMETHEE suggests 6 sensors with a suitable distribution that approximately covers all regions of the WDS. The optimal values of the CVaR of the affected population and detection time, as well as the probability of undetected events, for the best solution are 17,055 persons, 31 min, and 0.045%, respectively. The results obtained for the Lamerd WDS show the applicability of the CVaR-based multi-objective simulation-optimization model for incorporating the main uncertainties of contamination injection in order to evaluate extreme losses in a WDS.
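For reference, the empirical CVaR used in such objectives is the mean loss at or beyond the VaR quantile; a minimal sketch with synthetic losses (the loss distribution is a made-up stand-in for the affected-population losses):

```python
# Empirical Conditional Value at Risk (sketch).
import numpy as np

def cvar(losses, alpha=0.95):
    losses = np.asarray(losses)
    var = np.quantile(losses, alpha)            # Value at Risk at level alpha
    return losses[losses >= var].mean()         # expected loss in the tail

rng = np.random.default_rng(7)
affected = rng.lognormal(mean=8.0, sigma=1.0, size=10_000)
print(f"VaR95 = {np.quantile(affected, 0.95):.0f}, CVaR95 = {cvar(affected):.0f}")
```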
NASA Astrophysics Data System (ADS)
Zorila, Alexandru; Stratan, Aurel; Nemes, George
2018-01-01
We compare the ISO-recommended (the standard) data-reduction algorithm used to determine the surface laser-induced damage threshold of optical materials by the S-on-1 test with two newly suggested algorithms, both named "cumulative" algorithms/methods, a regular one and a limit-case one, intended to perform in some respects better than the standard one. To avoid additional errors due to real experiments, a simulated test is performed, named the reverse approach. This approach simulates the real damage experiments, by generating artificial test-data of damaged and non-damaged sites, based on an assumed, known damage threshold fluence of the target and on a given probability distribution function to induce the damage. In this work, a database of 12 sets of test-data containing both damaged and non-damaged sites was generated by using four different reverse techniques and by assuming three specific damage probability distribution functions. The same value for the threshold fluence was assumed, and a Gaussian fluence distribution on each irradiated site was considered, as usual for the S-on-1 test. Each of the test-data was independently processed by the standard and by the two cumulative data-reduction algorithms, the resulting fitted probability distributions were compared with the initially assumed probability distribution functions, and the quantities used to compare these algorithms were determined. These quantities characterize the accuracy and the precision in determining the damage threshold and the goodness of fit of the damage probability curves. The results indicate that the accuracy in determining the absolute damage threshold is best for the ISO-recommended method, the precision is best for the limit-case of the cumulative method, and the goodness of fit estimator (adjusted R-squared) is almost the same for all three algorithms.
Remote-sensing based approach to forecast habitat quality under climate change scenarios
Requena-Mullor, Juan M.; López, Enrique; Castro, Antonio J.; Alcaraz-Segura, Domingo; Castro, Hermelindo; Reyes, Andrés; Cabello, Javier
2017-01-01
As climate change is expected to have a significant impact on species distributions, there is an urgent challenge to provide reliable information to guide conservation biodiversity policies. In addressing this challenge, we propose a remote sensing-based approach to forecast the future habitat quality for the European badger, a species not abundant and at risk of local extinction in the arid environments of southeastern Spain, by incorporating environmental variables related to ecosystem functioning and correlated with climate and land use. Using ensemble prediction methods, we designed global spatial distribution models for the distribution range of the badger using presence-only data and climate variables. Then, we constructed regional models for an arid region in southeastern Spain using EVI (Enhanced Vegetation Index) derived variables and weighting the pseudo-absences with the global model projections applied to this region. Finally, we forecast the badger's potential spatial distribution in the period 2071-2099 based on IPCC scenarios, incorporating the uncertainty derived from the predicted values of the EVI-derived variables. By including remotely sensed descriptors of the temporal dynamics and spatial patterns of ecosystem functioning in spatial distribution models, the results suggest a less favorable future for European badgers than forecasts that exclude such descriptors. In addition, the change in the spatial pattern of habitat suitability may become greater than when forecasts are based only on climate variables. Since the validity of forecasts based only on climate variables is currently questioned, conservation policies supported by such information could have a biased view and overestimate or underestimate the potential changes in species distribution derived from climate change. The incorporation of ecosystem functional attributes derived from remote sensing into such forecasts may improve the detection of ecological responses under climate change scenarios. PMID:28257501
The Density Functional Theory of Flies: Predicting distributions of interacting active organisms
NASA Astrophysics Data System (ADS)
Kinkhabwala, Yunus; Valderrama, Juan; Cohen, Itai; Arias, Tomas
On October 2nd, 2016, 52 people were crushed in a stampede when a crowd panicked at a religious gathering in Ethiopia. The ability to predict the state of a crowd and whether it is susceptible to such transitions could help prevent such catastrophes. While current techniques such as agent-based models can predict transitions in the emergent behaviors of crowds, the assumptions used to describe the agents are often ad hoc and the simulations are computationally expensive, making their application to real-time crowd prediction challenging. Here, we pursue an orthogonal approach and ask whether a reduced set of variables, such as the local densities, is sufficient to describe the state of a crowd. Inspired by the theoretical framework of Density Functional Theory, we have developed a system that uses only measurements of local densities to extract two independent crowd behavior functions: (1) preferences for locations and (2) interactions between individuals. With these two functions, we have accurately predicted how a model system of walking Drosophila melanogaster distributes itself in an arbitrary 2D environment. In addition, this density-based approach measures properties of the crowd from observations of the crowd itself, without any knowledge of the detailed interactions, and thus it can make predictions about the resulting distributions of these flies in arbitrary environments, in real time. This research was supported in part by ARO W911NF-16-1-0433.
Modeling gene expression measurement error: a quasi-likelihood approach
Strimmer, Korbinian
2003-01-01
Background: Using suitable error models for gene expression measurements is essential in the statistical analysis of microarray data. However, the true probabilistic model underlying gene expression intensity readings is generally not known. Instead, in currently used approaches some simple parametric model is assumed (usually a transformed normal distribution) or the empirical distribution is estimated. However, both these strategies may not be optimal for gene expression data, as the non-parametric approach ignores known structural information whereas the fully parametric models run the risk of misspecification. A further related problem is the choice of a suitable scale for the model (e.g. observed vs. log-scale). Results: Here a simple semi-parametric model for gene expression measurement error is presented. In this approach inference is based on an approximate likelihood function (the extended quasi-likelihood). Only partial knowledge about the unknown true distribution is required to construct this function. In the case of gene expression this information is available in the form of the postulated (e.g. quadratic) variance structure of the data. As the quasi-likelihood behaves (almost) like a proper likelihood, it allows for the estimation of calibration and variance parameters, and it is also straightforward to obtain corresponding approximate confidence intervals. Unlike most other frameworks, it also allows analysis on any preferred scale, i.e. both on the original linear scale as well as on a transformed scale. It can also be employed in regression approaches to model systematic (e.g. array or dye) effects. Conclusions: The quasi-likelihood framework provides a simple and versatile approach to analyze gene expression data that does not make any strong distributional assumptions about the underlying error model. For several simulated as well as real data sets it provides a better fit to the data than competing models. In an example it also improved the power of tests to identify differential expression. PMID:12659637
Brind'Amour, Anik; Boisclair, Daniel; Dray, Stéphane; Legendre, Pierre
2011-03-01
Understanding the relationships between species biological traits and the environment is crucial to predicting the effect of habitat perturbations on fish communities. It is also an essential step in the assessment of functional diversity. Using two complementary three-matrix approaches (fourth-corner and RLQ analyses), we tested the hypothesis that feeding-oriented traits determine the spatial distributions of littoral fish species by assessing the relationship between fish spatial distributions, fish species traits, and habitat characteristics in two Laurentian Shield lakes. Significant associations between the feeding-oriented traits and the environmental characteristics suggested that fish communities in small lakes (displaying low species richness) can be spatially structured. Three groups of traits, mainly categorized by the species' spatial and temporal feeding activity, were identified. The water column may be divided into two sections, each of them corresponding to a group of traits related to the vertical distribution of the prey coupled with the position of the mouth. Lake areas of low structural complexity were inhabited by functional assemblages dominated by surface feeders, while structurally more complex areas were occupied by mid-water and benthic feeders. A third group referring to the time of feeding activity was observed. Our work could serve as a guideline study to evaluate species traits × environment associations at multiple spatial scales. Our results indicate that three-matrix statistical approaches are powerful tools that can be used to study such relationships. These recent statistical approaches open up new research directions such as the study of spatially based biological functions in lakes. They also provide new analytical tools for determining, for example, the potential size of freshwater protected areas.
Closed-form confidence intervals for functions of the normal mean and standard deviation.
Donner, Allan; Zou, G Y
2012-08-01
Confidence interval methods for a normal mean and standard deviation are well known and simple to apply. However, the same cannot be said for important functions of these parameters. These functions include the normal distribution percentiles, the Bland-Altman limits of agreement, the coefficient of variation and Cohen's effect size. We present a simple approach to this problem by using variance estimates recovered from confidence limits computed for the mean and standard deviation separately. All resulting confidence intervals have closed forms. Simulation results demonstrate that this approach performs very well for limits of agreement, coefficients of variation and their differences.
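A sketch of the recovered-variance construction for one such function, the normal percentile μ + z_p σ: separate CIs are formed for the mean (t-based) and the standard deviation (chi-square based) and then combined in closed form following the generic MOVER recipe for a sum. This is an illustration under those assumptions, not the authors' code.

```python
# Closed-form CI for a normal percentile via recovered variance estimates.
import numpy as np
from scipy import stats

def percentile_ci(x, p=0.95, conf=0.95):
    x = np.asarray(x)
    n, m, s = len(x), x.mean(), x.std(ddof=1)
    zp = stats.norm.ppf(p)                      # assumes p > 0.5 (zp > 0)
    a = 1.0 - conf
    half = stats.t.ppf(1 - a / 2, n - 1) * s / np.sqrt(n)
    lm, um = m - half, m + half                 # CI for the mean
    ls = s * np.sqrt((n - 1) / stats.chi2.ppf(1 - a / 2, n - 1))
    us = s * np.sqrt((n - 1) / stats.chi2.ppf(a / 2, n - 1))  # CI for the sd
    theta = m + zp * s                          # point estimate of percentile
    L = theta - np.sqrt((m - lm) ** 2 + (zp * (s - ls)) ** 2)
    U = theta + np.sqrt((um - m) ** 2 + (zp * (us - s)) ** 2)
    return theta, (L, U)

print(percentile_ci(np.random.default_rng(8).normal(10.0, 2.0, 50)))
```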
Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...
2017-11-08
Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and the multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome the adverse effects of violations of distributional assumptions on the error terms in random utility functions.
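Operationally, the proposed test reduces to a standard likelihood-ratio comparison between the restricted (standard Gumbel) model and the semi-nonparametric model that nests it; a generic sketch with placeholder log-likelihood values:

```python
# Generic likelihood-ratio test for nested models (sketch).
from scipy import stats

def lr_test(ll_general, ll_restricted, extra_params):
    stat = 2.0 * (ll_general - ll_restricted)     # LR statistic
    p_value = stats.chi2.sf(stat, df=extra_params)
    return stat, p_value

# Hypothetical log-likelihoods; 3 extra distributional parameters assumed.
print(lr_test(ll_general=-1180.4, ll_restricted=-1192.7, extra_params=3))
```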
Wildey, R.L.
1988-01-01
A method is derived for determining the dependence of radar backscatter on incidence angle that is applicable to the region corresponding to a particular radar image. The method is based on enforcing mathematical consistency between the frequency distribution of the image's pixel signals (histogram of DN values with suitable normalizations) and a one-dimensional frequency distribution of slope component, as might be obtained from a radar or laser altimetry profile in or near the area imaged. In order to achieve a unique solution, the auxiliary assumption is made that the two-dimensional frequency distribution of slope is isotropic. The backscatter is not derived in absolute units. The method is developed in such a way as to separate the reflectance function from the pixel-signal transfer characteristic. However, these two sources of variation are distinguishable only on the basis of a weak dependence on the azimuthal component of slope; therefore such an approach can be expected to be ill-conditioned unless the revision of the transfer characteristic is limited to the determination of an additive instrumental background level. The altimetry profile does not have to be registered in the image, and the statistical nature of the approach minimizes pixel noise effects and the effects of a disparity between the resolutions of the image and the altimetry profile, except in the wings of the distribution where low-number statistics preclude accuracy anyway. The problem of dealing with unknown slope components perpendicular to the profiling traverse, which besets the one-to-one comparison between individual slope components and pixel-signal values, disappears in the present approach. In order to test the resulting algorithm, an artificial radar image was generated from the digitized topographic map of the Lake Champlain West quadrangle in the Adirondack Mountains, U.S.A., using an arbitrarily selected reflectance function. From the same map, a one-dimensional frequency distribution of slope component was extracted. The algorithm recaptured the original reflectance function to the degree that, for the central 90% of the data, the discrepancy translates to an RMS slope error of 0.1°. For the central 99% of the data, the maximum error translates to 1°; at the absolute extremes of the data the error grows to 6°. © 1988 Kluwer Academic Publishers.
Arbour, Jessica Hilary; López-Fernández, Hernán
2013-01-01
Diversity and disparity are unequally distributed both phylogenetically and geographically. This uneven distribution may be owing to differences in diversification rates between clades resulting from processes such as adaptive radiation. We examined the rate and distribution of evolution in feeding biomechanics in the extremely diverse and continentally distributed South American geophagine cichlids. Evolutionary patterns in multivariate functional morphospace were examined using a phylomorphospace approach, disparity-through-time analyses and by comparing Brownian motion (BM) and adaptive peak evolutionary models using maximum likelihood. The most species-rich and functionally disparate clade (CAS) expanded more efficiently in morphospace and evolved more rapidly compared with both BM expectations and its sister clade (GGD). Members of the CAS clade also exhibited an early burst in functional evolution that corresponds to the development of modern ecological roles and may have been related to the colonization of a novel adaptive peak characterized by fast oral jaw mechanics. Furthermore, reduced ecological opportunity following this early burst may have restricted functional evolution in the GGD clade, which is less species-rich and more ecologically specialized. Patterns of evolution in ecologically important functional traits are consistent with a pattern of adaptive radiation within the most diverse clade of Geophagini. PMID:23740780
On Improving Efficiency of Differential Evolution for Aerodynamic Shape Optimization Applications
NASA Technical Reports Server (NTRS)
Madavan, Nateri K.
2004-01-01
Differential Evolution (DE) is a simple and robust evolutionary strategy that has been proven effective in determining the global optimum for several difficult optimization problems. Although DE offers several advantages over traditional optimization approaches, its use in applications such as aerodynamic shape optimization, where the objective function evaluations are computationally expensive, is limited by the large number of function evaluations often required. In this paper various approaches for improving the efficiency of DE are reviewed and discussed. Several approaches that have proven effective for other evolutionary algorithms are modified and implemented in a DE-based aerodynamic shape optimization method that uses a Navier-Stokes solver for the objective function evaluations. Parallelization techniques on distributed computers are used to reduce turnaround times. Results are presented for standard test optimization problems and for the inverse design of a turbine airfoil. The efficiency improvements achieved by the different approaches are evaluated and compared.
Optimal dynamic control of resources in a distributed system
NASA Technical Reports Server (NTRS)
Shin, Kang G.; Krishna, C. M.; Lee, Yann-Hang
1989-01-01
The authors quantitatively formulate the problem of controlling resources in a distributed system so as to optimize a reward function and derive optimal control strategies using Markov decision theory. The control variables treated are quite general; they could be control decisions related to system configuration, repair, diagnostics, files, or data. Two algorithms for resource control in distributed systems are derived for time-invariant and periodic environments, respectively. A detailed example to demonstrate the power and usefulness of the approach is provided.
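The Markov decision machinery behind such formulations can be illustrated with a minimal value-iteration sketch; the two-state system, transition matrices, rewards, and discount factor below are placeholders for illustration, not the authors' model.

```python
import numpy as np

# Minimal value iteration for a toy resource-control MDP (time-invariant
# environment). States and actions are illustrative placeholders.
P = {0: np.array([[0.9, 0.1], [0.4, 0.6]]),   # transitions under action 0
     1: np.array([[0.7, 0.3], [0.2, 0.8]])}   # transitions under action 1
R = {0: np.array([1.0, 0.0]),                 # immediate reward per state
     1: np.array([0.8, 0.5])}
gamma = 0.95                                  # discount factor

V = np.zeros(2)
for _ in range(10_000):
    Q = np.array([R[a] + gamma * P[a] @ V for a in (0, 1)])  # action values
    V_new = Q.max(axis=0)
    if np.abs(V_new - V).max() < 1e-10:
        break
    V = V_new
policy = Q.argmax(axis=0)                     # optimal control per state
print(V, policy)
```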
Distribution of shape elongations of main belt asteroids derived from Pan-STARRS1 photometry
NASA Astrophysics Data System (ADS)
Cibulková, H.; Nortunen, H.; Ďurech, J.; Kaasalainen, M.; Vereš, P.; Jedicke, R.; Wainscoat, R. J.; Mommert, M.; Trilling, D. E.; Schunová-Lilly, E.; Magnier, E. A.; Waters, C.; Flewelling, H.
2018-04-01
Context: A considerable amount of photometric data is produced by surveys such as Pan-STARRS, LONEOS, WISE, or Catalina. These data are a rich source of information about the physical properties of asteroids. There are several possible approaches for using these data. Light curve inversion is a typical method that works with individual asteroids. Our approach, focusing on large groups of asteroids such as dynamical families and taxonomic classes, is statistical; the data are not sufficient for individual models. Aims: Our aim is to study the distributions of shape elongation b/a and the spin axis latitude β for various subpopulations of asteroids and to compare our results, based on the Pan-STARRS1 survey, with statistics previously carried out using various photometric databases, such as Lowell and WISE. Methods: We used the LEADER algorithm to compare the b/a and β distributions for various subpopulations of asteroids. The algorithm creates a cumulative distribution function (CDF) of observed brightness variations, and computes the b/a and β distributions with analytical basis functions that yield the observed CDF. A variant of LEADER is used to solve the joint distributions for synthetic populations to test the validity of the method. Results: When comparing distributions of shape elongation for groups of asteroids with different diameters D, we found that there are no differences for D < 25 km. We also constructed distributions for asteroids with different rotation periods and revealed that the fastest rotators with P = 0-4 h are more spheroidal than the population with P = 4-8 h.
NASA Astrophysics Data System (ADS)
Bovy, Jo; Hogg, David W.; Roweis, Sam T.
2011-06-01
We generalize the well-known mixtures of Gaussians approach to density estimation and the accompanying Expectation-Maximization technique for finding the maximum likelihood parameters of the mixture to the case where each data point carries an individual d-dimensional uncertainty covariance and has unique missing data properties. This algorithm reconstructs the error-deconvolved or "underlying" distribution function common to all samples, even when the individual data points are samples from different distributions, obtained by convolving the underlying distribution with the heteroskedastic uncertainty distribution of the data point and projecting out the missing data directions. We show how this basic algorithm can be extended with conjugate priors on all of the model parameters and a "split-and-merge" procedure designed to avoid local maxima of the likelihood. We demonstrate the full method by applying it to the problem of inferring the three-dimensional velocity distribution of stars near the Sun from noisy two-dimensional, transverse velocity measurements from the Hipparcos satellite.
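In the simplest one-dimensional, single-Gaussian special case, the error-deconvolution idea reduces to an EM iteration with known per-point noise variances. The sketch below illustrates that special case on synthetic data; the paper's algorithm generalizes it to d dimensions, multiple mixture components, and missing data.

```python
import numpy as np

# EM for deconvolving a single 1D Gaussian from heteroskedastic noise:
# x_i = v_i + e_i, with v ~ N(m, V) and e_i ~ N(0, S_i), S_i known.
rng = np.random.default_rng(0)
m_true, V_true, n = 2.0, 1.5, 5000
S = rng.uniform(0.2, 2.0, n)                  # known per-point noise variances
x = rng.normal(m_true, np.sqrt(V_true), n) + rng.normal(0.0, np.sqrt(S))

m, V = 0.0, 1.0                               # initial guesses
for _ in range(200):
    T = V + S                                 # total variance per data point
    b = m + V / T * (x - m)                   # posterior mean of each true value
    B = V * S / T                             # posterior variance of each value
    m = b.mean()                              # M-step updates
    V = (B + (b - m) ** 2).mean()
print(m, V)                                   # approaches m_true, V_true
```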
An Overview Of Wideband Signal Analysis Techniques
NASA Astrophysics Data System (ADS)
Speiser, Jeffrey M.; Whitehouse, Harper J.
1989-11-01
This paper provides a unifying perspective for several narrowband and wideband signal processing techniques. It considers narrowband ambiguity functions and Wigner-Ville distributions, together with the wideband ambiguity function and several proposed approaches to a wideband version of the Wigner-Ville distribution (WVD). A unifying perspective is provided by the methodology of unitary representations and ray representations of transformation groups.
New approach to the retrieval of AOD and its uncertainty from MISR observations over dark water
NASA Astrophysics Data System (ADS)
Witek, Marcin L.; Garay, Michael J.; Diner, David J.; Bull, Michael A.; Seidel, Felix C.
2018-01-01
A new method for retrieving aerosol optical depth (AOD) and its uncertainty from Multi-angle Imaging SpectroRadiometer (MISR) observations over dark water is outlined. MISR's aerosol retrieval algorithm calculates cost functions between observed and pre-simulated radiances for a range of AODs (from 0.0 to 3.0) and a prescribed set of aerosol mixtures. The previous version 22 (V22) operational algorithm considered only the AOD that minimized the cost function for each aerosol mixture and then used a combination of these values to compute the final, "best estimate" AOD and associated uncertainty. The new approach considers the entire range of cost functions associated with each aerosol mixture. The uncertainty of the reported AOD depends on a combination of (a) the absolute values of the cost functions for each aerosol mixture, (b) the widths of the cost function distributions as a function of AOD, and (c) the spread of the cost function distributions among the ensemble of mixtures. A key benefit of the new approach is that, unlike the V22 algorithm, it does not rely on empirical thresholds imposed on the cost function to determine the success or failure of a particular mixture. Furthermore, a new aerosol retrieval confidence index (ARCI) is established that can be used to screen high-AOD retrieval blunders caused by cloud contamination or other factors. Requiring ARCI ≥ 0.15 as a condition for retrieval success is supported through statistical analysis and outperforms the thresholds used in the V22 algorithm. The described changes to the MISR dark water algorithm will become operational in the new MISR aerosol product (V23), planned for release in 2017.
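Schematically, an ensemble of cost functions over an AOD grid can be converted into a best-estimate AOD and a spread-based uncertainty as sketched below. The synthetic cost curves, the exp(-chi2/2) weighting, and the crude confidence score are assumptions for illustration only; they are not the operational V23 definitions of the AOD uncertainty or of ARCI.

```python
import numpy as np

# Toy ensemble of per-mixture cost functions chi2[m, k] on an AOD grid.
rng = np.random.default_rng(1)
tau = np.linspace(0.0, 3.0, 61)                 # AOD grid, as in MISR
t0 = rng.uniform(0.1, 0.4, 8)                   # per-mixture best-fit AODs
off = rng.uniform(0.0, 2.0, 8)                  # per-mixture cost offsets
chi2 = np.array([(tau - t) ** 2 / 0.05 + o for t, o in zip(t0, off)])

w = np.exp(-0.5 * chi2)                         # costs -> relative likelihoods
w /= w.sum()                                    # normalize over mixtures x AODs
aod = (w * tau).sum()                           # ensemble best-estimate AOD
aod_sigma = np.sqrt((w * (tau - aod) ** 2).sum())  # spread -> uncertainty
confidence = np.exp(-0.5 * chi2).mean()         # crude stand-in, NOT real ARCI
print(aod, aod_sigma, confidence)
```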
New Approach to the Retrieval of AOD and its Uncertainty from MISR Observations Over Dark Water
NASA Astrophysics Data System (ADS)
Witek, M. L.; Garay, M. J.; Diner, D. J.; Bull, M. A.; Seidel, F.
2017-12-01
A new method for retrieving aerosol optical depth (AOD) and its uncertainty from Multi-angle Imaging SpectroRadiometer (MISR) observations over dark water is outlined. MISR's aerosol retrieval algorithm calculates cost functions between observed and pre-simulated radiances for a range of AODs (from 0.0 to 3.0) and a prescribed set of aerosol mixtures. The previous Version 22 (V22) operational algorithm considered only the AOD that minimized the cost function for each aerosol mixture, then used a combination of these values to compute the final, "best estimate" AOD and associated uncertainty. The new approach considers the entire range of cost functions associated with each aerosol mixture. The uncertainty of the reported AOD depends on a combination of a) the absolute values of the cost functions for each aerosol mixture, b) the widths of the cost function distributions as a function of AOD, and c) the spread of the cost function distributions among the ensemble of mixtures. A key benefit of the new approach is that, unlike the V22 algorithm, it does not rely on arbitrary thresholds imposed on the cost function to determine the success or failure of a particular mixture. Furthermore, a new Aerosol Retrieval Confidence Index (ARCI) is established that can be used to screen high-AOD retrieval blunders caused by cloud contamination or other factors. Requiring ARCI≥0.15 as a condition for retrieval success is supported through statistical analysis and outperforms the thresholds used in the V22 algorithm. The described changes to the MISR dark water algorithm will become operational in the new MISR aerosol product (V23), planned for release in 2017.
Objectively combining AR5 instrumental period and paleoclimate climate sensitivity evidence
NASA Astrophysics Data System (ADS)
Lewis, Nicholas; Grünwald, Peter
2018-03-01
Combining instrumental period evidence regarding equilibrium climate sensitivity with largely independent paleoclimate proxy evidence should enable a more constrained sensitivity estimate to be obtained. Previous, subjective Bayesian approaches involved selection of a prior probability distribution reflecting the investigators' beliefs about climate sensitivity. Here a recently developed approach employing two different statistical methods—objective Bayesian and frequentist likelihood-ratio—is used to combine instrumental period and paleoclimate evidence based on data presented and assessments made in the IPCC Fifth Assessment Report. Probabilistic estimates from each source of evidence are represented by posterior probability density functions (PDFs) of physically-appropriate form that can be uniquely factored into a likelihood function and a noninformative prior distribution. The three-parameter form is shown accurately to fit a wide range of estimated climate sensitivity PDFs. The likelihood functions relating to the probabilistic estimates from the two sources are multiplicatively combined and a prior is derived that is noninformative for inference from the combined evidence. A posterior PDF that incorporates the evidence from both sources is produced using a single-step approach, which avoids the order-dependency that would arise if Bayesian updating were used. Results are compared with an alternative approach using the frequentist signed root likelihood ratio method. Results from these two methods are effectively identical, and provide a 5-95% range for climate sensitivity of 1.1-4.05 K (median 1.87 K).
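A minimal sketch of the multiplicative combination step follows, using illustrative lognormal-shaped likelihoods on a sensitivity grid and a uniform prior; the paper instead derives a noninformative prior for the combined inference, so the numbers below are not its results.

```python
import numpy as np

# Combine two independent likelihoods on a climate-sensitivity grid (K).
S = np.linspace(0.5, 10.0, 2000)
dS = S[1] - S[0]
like_inst = np.exp(-0.5 * ((np.log(S) - np.log(1.8)) / 0.35) ** 2)
like_paleo = np.exp(-0.5 * ((np.log(S) - np.log(2.5)) / 0.50) ** 2)

post = like_inst * like_paleo        # single-step multiplicative combination
post /= post.sum() * dS              # normalize to a PDF (uniform prior here)
cdf = np.cumsum(post) * dS
lo, med, hi = np.interp([0.05, 0.50, 0.95], cdf, S)
print(lo, med, hi)                   # 5-95% range and median
```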
Spatial Point Pattern Analysis of Neurons Using Ripley's K-Function in 3D
Jafari-Mamaghani, Mehrdad; Andersson, Mikael; Krieger, Patrik
2010-01-01
The aim of this paper is to apply a non-parametric statistical tool, Ripley's K-function, to analyze the 3-dimensional distribution of pyramidal neurons. Ripley's K-function is a widely used tool in spatial point pattern analysis. There are several approaches in 2D domains in which this function is executed and analyzed. Drawing consistent inferences on the underlying 3D point pattern distributions in various applications is of great importance as the acquisition of 3D biological data now poses less of a challenge due to technological progress. As of now, most of the applications of Ripley's K-function in 3D domains do not focus on the phenomenon of edge correction, which is discussed thoroughly in this paper. The main goal is to extend the theoretical and practical utilization of Ripley's K-function and corresponding tests based on bootstrap resampling from 2D to 3D domains. PMID:20577588
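For orientation, a naive 3D Ripley's K estimate, without the edge correction that is the paper's main concern, can be written in a few lines; under complete spatial randomness, K(r) should approach (4/3)πr³.

```python
import numpy as np
from scipy.spatial.distance import pdist

# Naive 3D Ripley's K estimate (no edge correction) for points in a unit cube.
rng = np.random.default_rng(2)
n, V = 500, 1.0                       # number of points, window volume
pts = rng.random((n, 3))
d = pdist(pts)                        # all unordered pairwise distances

r = np.linspace(0.02, 0.25, 24)
K = np.array([V * 2 * (d <= ri).sum() / (n * (n - 1)) for ri in r])
K_csr = 4.0 / 3.0 * np.pi * r ** 3    # expectation under complete randomness
print(np.c_[r, K, K_csr])
```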
Unveiling saturation effects from nuclear structure function measurements at the EIC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marquet, Cyrille; Moldes, Manoel R.; Zurita, Pia
Here, we analyze the possibility of extracting a clear signal of non-linear parton saturation effects from future measurements of nuclear structure functions at the Electron–Ion Collider (EIC), in the small-x region. Our approach consists in generating pseudodata for electron-gold collisions, using the running-coupling Balitsky–Kovchegov evolution equation, and in assessing the compatibility of these saturated pseudodata with existing sets of nuclear parton distribution functions (nPDFs), extrapolated if necessary. The level of disagreement between the two is quantified by applying a Bayesian reweighting technique. This allows us to infer the parton distributions needed in order to describe the pseudodata, which we find quite different from the actual distributions, especially for sea quarks and gluons. This tension suggests that, should saturation effects impact the future nuclear structure function data as predicted, a successful refitting of the nPDFs may not be achievable, which would unambiguously signal the presence of non-linear effects.
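The Bayesian reweighting step can be sketched with Giele-Keller-style weights as below; the replica predictions and pseudodata are synthetic stand-ins, not the BK-evolved EIC pseudodata or actual nPDF replica sets of the paper.

```python
import numpy as np

# Reweight PDF replicas by their chi^2 against (pseudo)data.
rng = np.random.default_rng(3)
n_rep, n_dat = 1000, 20
theory = rng.normal(1.0, 0.1, (n_rep, n_dat))   # replica predictions
data = rng.normal(0.8, 0.02, n_dat)             # "saturated" pseudodata
sigma = 0.05 * np.ones(n_dat)                   # pseudodata uncertainties

chi2 = (((theory - data) / sigma) ** 2).sum(axis=1)
w = np.exp(-0.5 * (chi2 - chi2.min()))          # stabilized likelihood weights
w *= n_rep / w.sum()                            # normalize so sum(w) = n_rep
reweighted = (w[:, None] * theory).mean(axis=0) # reweighted prediction
n_eff = np.exp(np.mean(w * np.log(n_rep / np.maximum(w, 1e-300))))
print(n_eff)   # small n_eff / n_rep flags tension between data and prior set
```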
Nonequilibrium approach regarding metals from a linearised kappa distribution
NASA Astrophysics Data System (ADS)
Domenech-Garret, J. L.
2017-10-01
The widely used kappa distribution functions develop high-energy tails through an adjustable kappa parameter. The aim of this work is to show that such a parameter can itself be regarded as a function, which entangles information about the sources of disequilibrium. We first derive and analyse an expanded Fermi-Dirac kappa distribution. Later, we use this expanded form to obtain an explicit analytical expression for the kappa parameter of a heated metal on which an external electric field is applied. We show that such a kappa index causes departures from equilibrium depending on the physical magnitudes. Finally, we study the role of temperature and electric field on such a parameter, which characterises the electron population of a metal out of equilibrium.
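For reference, the standard isotropic kappa velocity distribution that such analyses generalize can be evaluated directly; the paper's expanded Fermi-Dirac kappa form differs in detail.

```python
import numpy as np
from scipy.special import gamma as G

def f_kappa(v, kappa, theta, n=1.0):
    """Standard isotropic 3D kappa distribution; kappa -> infinity recovers
    a Maxwellian. theta is a thermal speed, n the number density."""
    norm = n * G(kappa + 1.0) / ((np.pi * kappa) ** 1.5 * theta ** 3
                                 * G(kappa - 0.5))
    return norm * (1.0 + v ** 2 / (kappa * theta ** 2)) ** (-(kappa + 1.0))

v = np.linspace(0.0, 5.0, 6)
print(f_kappa(v, kappa=3.0, theta=1.0))   # heavy high-energy tail
```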
Study of the zinc-silver oxide battery system
NASA Technical Reports Server (NTRS)
Nanis, L.
1973-01-01
Theoretical and experimental models for the evaluation of current distribution in flooded, porous electrodes are discussed. An approximation for the local current distribution function was derived for conditions of a linear overpotential, a uniform concentration, and a very conductive matrix. By considering the porous electrode to be an analog of chemical catalyst structures, a dimensionless performance parameter was derived from the approximated current distribution function. In this manner the electrode behavior was characterized in terms of an electrochemical Thiele parameter and an effectiveness factor. It was shown that the electrochemical engineering approach makes possible the organization of theoretical descriptions and practical experience in the form of dimensionless parameters, such as the electrochemical Thiele parameter, and hence provides useful information for the design of new electrochemical systems.
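In the standard first-order catalyst-slab analogy, the effectiveness factor is tanh(φ)/φ with φ the Thiele parameter; the abstract does not state the paper's exact parameter grouping, so the sketch below uses that generic form.

```python
import numpy as np

# Effectiveness factor in the linear (low-overpotential) catalyst-slab
# analogy for a flooded porous electrode: eta = tanh(phi) / phi.
def effectiveness(phi):
    phi = np.maximum(np.asarray(phi, dtype=float), 1e-12)  # avoid 0/0
    return np.tanh(phi) / phi

print(effectiveness([0.1, 1.0, 5.0]))   # ~[0.997, 0.762, 0.200]
```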
Comparative genomics approaches to understanding and manipulating plant metabolism.
Bradbury, Louis M T; Niehaus, Tom D; Hanson, Andrew D
2013-04-01
Over 3000 genomes, including numerous plant genomes, are now sequenced. However, their annotation remains problematic as illustrated by the many conserved genes with no assigned function, vague annotations such as 'kinase', or even wrong ones. Around 40% of genes of unknown function that are conserved between plants and microbes are probably metabolic enzymes or transporters; finding functions for these genes is a major challenge. Comparative genomics has correctly predicted functions for many such genes by analyzing genomic context, and gene fusions, distributions and co-expression. Comparative genomics complements genetic and biochemical approaches to dissect metabolism, continues to increase in power and decrease in cost, and has a pivotal role in modeling and engineering by helping identify functions for all metabolic genes. Copyright © 2012 Elsevier Ltd. All rights reserved.
Zauber, Henrik; Szymanski, Witold; Schulze, Waltraud X
2013-12-01
During the last decade, research on plasma membrane focused increasingly on the analysis of so-called microdomains. It has been shown that function of many membrane-associated proteins involved in signaling and transport depends on their conditional segregation within sterol-enriched membrane domains. High throughput proteomic analysis of sterol-protein interactions are often based on analyzing detergent resistant membrane fraction enriched in sterols and associated proteins, which also contain proteins from these microdomain structures. Most studies so far focused exclusively on the characterization of detergent resistant membrane protein composition and abundances. This approach has received some criticism because of its unspecificity and many co-purifying proteins. In this study, by a label-free quantitation approach, we extended the characterization of membrane microdomains by particularly studying distributions of each protein between detergent resistant membrane and detergent-soluble fractions (DSF). This approach allows a more stringent definition of dynamic processes between different membrane phases and provides a means of identification of co-purifying proteins. We developed a random sampling algorithm, called Unicorn, allowing for robust statistical testing of alterations in the protein distribution ratios of the two different fractions. Unicorn was validated on proteomic data from methyl-β-cyclodextrin treated plasma membranes and the sterol biosynthesis mutant smt1. Both, chemical treatment and sterol-biosynthesis mutation affected similar protein classes in their membrane phase distribution and particularly proteins with signaling and transport functions.
Method and device for landing aircraft dependent on runway occupancy time
NASA Technical Reports Server (NTRS)
Ghalebsaz Jeddi, Babak (Inventor)
2012-01-01
A technique for landing aircraft using an aircraft landing accident avoidance device is disclosed. The technique includes determining at least two probability distribution functions; determining a safe lower limit on the separation between a lead aircraft and a trail aircraft on a glide slope to the runway; determining a maximum sustainable safe attempt-to-land rate on the runway based on the safe lower limit and the probability distribution functions; directing the trail aircraft to enter the glide slope with a target separation from the lead aircraft corresponding to the maximum sustainable safe attempt-to-land rate; while the trail aircraft is in the glide slope, determining the actual separation between the lead aircraft and the trail aircraft; and directing the trail aircraft to execute a go-around maneuver if the actual separation approaches the safe lower limit. The probability distribution functions include runway occupancy time, and landing time interval and/or inter-arrival distance.
Weak values of a quantum observable and the cross-Wigner distribution.
de Gosson, Maurice A; de Gosson, Serge M
2012-01-09
We study the weak values of a quantum observable from the point of view of the Wigner formalism. The main actor here is the cross-Wigner transform of two functions, which is in disguise the cross-ambiguity function familiar from radar theory and time-frequency analysis. It allows us to express weak values using a complex probability distribution. We suggest that our approach seems to confirm that the weak value of an observable is, as conjectured by several authors, due to the interference of two wavefunctions, one coming from the past, and the other from the future.
Smeared quasidistributions in perturbation theory
NASA Astrophysics Data System (ADS)
Monahan, Christopher
2018-03-01
Quasi- and pseudodistributions provide a new approach to determining parton distribution functions from first-principles calculations of QCD. Here, I calculate the flavor nonsinglet unpolarized quasidistribution at one loop in perturbation theory, using the gradient flow to remove ultraviolet divergences. I demonstrate that, as expected, the gradient flow does not change the infrared structure of the quasidistribution at one loop and use the results to match the smeared matrix elements to those in the MS-bar scheme. This matching calculation is required to relate numerical results obtained from nonperturbative lattice QCD computations to light-front parton distribution functions extracted from global analyses of experimental data.
Moon, Ji-Eun; Kim, Sung-Hun; Han, Jung-Suk; Yang, Jae-Ho; Lee, Jai-Bong
2010-06-01
If orthodontists and restorative dentists establish an interdisciplinary approach to esthetic dentistry, the esthetic and functional outcome of their combined efforts will be greatly enhanced. This article describes satisfying esthetic results obtained by distributing the space available for restoration through orthodontic treatment and porcelain laminate veneers in cases of uneven spacing between the maxillary anterior teeth. It is proposed that the use of orthodontic treatment for redistribution of the space and the use of porcelain laminate veneers to alter crown anatomy provide maximum esthetic and functional correction for patients with irregular interdental spacing.
NASA Astrophysics Data System (ADS)
Zhang, Hai; Ye, Renyu; Liu, Song; Cao, Jinde; Alsaedi, Ahmad; Li, Xiaodi
2018-02-01
This paper is concerned with the asymptotic stability of the Riemann-Liouville fractional-order neural networks with discrete and distributed delays. By constructing a suitable Lyapunov functional, two sufficient conditions are derived to ensure that the addressed neural network is asymptotically stable. The presented stability criteria are described in terms of the linear matrix inequalities. The advantage of the proposed method is that one may avoid calculating the fractional-order derivative of the Lyapunov functional. Finally, a numerical example is given to show the validity and feasibility of the theoretical results.
Liu, Bo; Liu, Pei; Xu, Zhenli; Zhou, Shenggao
2013-10-01
Near a charged surface, counterions of different valences and sizes cluster; and their concentration profiles stratify. At a distance from such a surface larger than the Debye length, the electric field is screened by counterions. Recent studies by a variational mean-field approach that includes ionic size effects and by Monte Carlo simulations both suggest that the counterion stratification is determined by the ionic valence-to-volume ratios. Central in the mean-field approach is a free-energy functional of ionic concentrations in which the ionic size effects are included through the entropic effect of solvent molecules. The corresponding equilibrium conditions define the generalized Boltzmann distributions relating the ionic concentrations to the electrostatic potential. This paper presents a detailed analysis and numerical calculations of such a free-energy functional to understand the dependence of the ionic charge density on the electrostatic potential through the generalized Boltzmann distributions, the role of ionic valence-to-volume ratios in the counterion stratification, and the modification of Debye length due to the effect of ionic sizes.
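As an illustration of how steric effects saturate counterion concentrations near a charged surface, a commonly used size-modified (Bikerman-type) Boltzmann form can be evaluated directly; the paper's generalized Boltzmann distributions are defined implicitly through a free-energy functional and differ in detail from this closed form.

```python
import numpy as np

# Bikerman-type size-modified Boltzmann concentrations (illustrative only).
kT = 1.0                        # thermal energy unit
z = np.array([+1, -1])          # ionic valences
c_bulk = np.array([0.1, 0.1])   # bulk concentrations, in units of 1/v
v = 1.0                         # common ionic volume

def conc(psi):
    boltz = c_bulk[:, None] * np.exp(-z[:, None] * psi / kT)
    denom = 1.0 + v * (boltz - c_bulk[:, None]).sum(axis=0)
    return boltz / denom        # saturates near close packing as |psi| grows

psi = np.linspace(-10.0, 10.0, 5)
print(conc(psi))
```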
NASA Astrophysics Data System (ADS)
Erazo, Kalil; Nagarajaiah, Satish
2017-06-01
In this paper an offline approach for output-only Bayesian identification of stochastic nonlinear systems is presented. The approach is based on a re-parameterization of the joint posterior distribution of the parameters that define a postulated state-space stochastic model class. In the re-parameterization the state predictive distribution is included, marginalized, and estimated recursively in a state estimation step using an unscented Kalman filter, bypassing state augmentation as required by existing online methods. In applications expectations of functions of the parameters are of interest, which requires the evaluation of potentially high-dimensional integrals; Markov chain Monte Carlo is adopted to sample the posterior distribution and estimate the expectations. The proposed approach is suitable for nonlinear systems subjected to non-stationary inputs whose realization is unknown, and that are modeled as stochastic processes. Numerical verification and experimental validation examples illustrate the effectiveness and advantages of the approach, including: (i) an increased numerical stability with respect to augmented-state unscented Kalman filtering, avoiding divergence of the estimates when the forcing input is unmeasured; (ii) the ability to handle arbitrary prior and posterior distributions. The experimental validation of the approach is conducted using data from a large-scale structure tested on a shake table. It is shown that the approach is robust to inherent modeling errors in the description of the system and forcing input, providing accurate prediction of the dynamic response when the excitation history is unknown.
Network Security Validation Using Game Theory
NASA Astrophysics Data System (ADS)
Papadopoulou, Vicky; Gregoriades, Andreas
Non-functional requirements (NFR) such as network security have recently gained widespread attention in distributed information systems. Despite their importance, however, there is no systematic approach to validating these requirements given the complexity and uncertainty characterizing modern networks. Traditionally, network security requirements specification has been the result of a reactive process. This, however, limited the immunity property of the distributed systems that depended on these networks. Security requirements specification needs a proactive approach. Networks' infrastructure is constantly under attack by hackers and malicious software that aim to break into computers. To combat these threats, network designers need sophisticated security validation techniques that will guarantee the minimum level of security for their future networks. This paper presents a game-theoretic approach to security requirements validation. An introduction to game theory is presented along with an example that demonstrates the application of the approach.
A note on a simplified and general approach to simulating from multivariate copula functions
Barry K. Goodwin
2013-01-01
Copulas have become an important analytic tool for characterizing multivariate distributions and dependence. One is often interested in simulating data from copula estimates. The process can be analytically and computationally complex and usually involves steps that are unique to a given parametric copula. We describe an alternative approach that uses "Probability-...
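A common simplified route of this kind is the probability-integral transform applied to a Gaussian copula, sketched below with illustrative margins and correlation; the note's own procedure may differ in detail.

```python
import numpy as np
from scipy import stats

# Simulate from a Gaussian copula with arbitrary margins via the
# probability-integral transform (illustrative parameters throughout).
rng = np.random.default_rng(4)
rho = 0.7
cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal([0.0, 0.0], cov, size=10_000)
u = stats.norm.cdf(z)                    # uniform margins, Gaussian dependence

x1 = stats.gamma.ppf(u[:, 0], a=2.0)     # transform to a gamma margin
x2 = stats.lognorm.ppf(u[:, 1], s=0.5)   # and a lognormal margin
print(np.corrcoef(x1, x2)[0, 1])         # dependence is preserved
```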
Leff, Daniel Richard; Orihuela-Espina, Felipe; Leong, Julian; Darzi, Ara; Yang, Guang-Zhong
2008-01-01
Learning to perform Minimally Invasive Surgery (MIS) requires considerable attention, concentration and spatial ability. Theoretically, this leads to activation in executive control (prefrontal) and visuospatial (parietal) centres of the brain. A novel approach is presented in this paper for analysing the flow of fronto-parietal haemodynamic behaviour and the associated variability between subjects. Serially acquired functional Near Infrared Spectroscopy (fNIRS) data from fourteen laparoscopic novices at different stages of learning is projected into a low-dimensional 'geospace', where sequentially acquired data is mapped to different locations. A trip distribution matrix based on consecutive directed trips between locations in the geospace reveals confluent fronto-parietal haemodynamic changes and a gravity model is applied to populate this matrix. To model global convergence in haemodynamic behaviour, a Markov chain is constructed and by comparing sequential haemodynamic distributions to the Markov's stationary distribution, inter-subject variability in learning an MIS task can be identified.
Yu, Chanki; Lee, Sang Wook
2016-05-20
We present a reliable and accurate global optimization framework for estimating parameters of isotropic analytical bidirectional reflectance distribution function (BRDF) models. This approach is based on a branch and bound strategy with linear programming and interval analysis. Conventional local optimization is often very inefficient for BRDF estimation since its fitting quality is highly dependent on initial guesses due to the nonlinearity of analytical BRDF models. The algorithm presented in this paper employs L1-norm error minimization to estimate BRDF parameters in a globally optimal way and interval arithmetic to derive our feasibility problem and lower bounding function. Our method is developed for the Cook-Torrance model but with several normal distribution functions such as the Beckmann, Berry, and GGX functions. Experiments have been carried out to validate the presented method using 100 isotropic materials from the MERL BRDF database, and our experimental results demonstrate that the L1-norm minimization provides a more accurate and reliable solution than the L2-norm minimization.
Keller, Katharina; Mertens, Valerie; Qi, Mian; Nalepa, Anna I; Godt, Adelheid; Savitsky, Anton; Jeschke, Gunnar; Yulikov, Maxim
2017-07-21
Extraction of distance distributions between high-spin paramagnetic centers from relaxation induced dipolar modulation enhancement (RIDME) data is affected by the presence of overtones of dipolar frequencies. As previously proposed, we account for these overtones by using a modified kernel function in Tikhonov regularization analysis. This paper analyzes the performance of such an approach on a series of model compounds with the Gd(iii)-PyMTA complex serving as paramagnetic high-spin label. We describe the calibration of the overtone coefficients for the RIDME kernel, demonstrate the accuracy of distance distributions obtained with this approach, and show that for our series of Gd-rulers RIDME technique provides more accurate distance distributions than Gd(iii)-Gd(iii) double electron-electron resonance (DEER). The analysis of RIDME data including harmonic overtones can be performed using the MATLAB-based program OvertoneAnalysis, which is available as open-source software from the web page of ETH Zurich. This approach opens a perspective for the routine use of the RIDME technique with high-spin labels in structural biology and structural studies of other soft matter.
Prosodic disambiguation of noun/verb homophones in child-directed speech.
Conwell, Erin
2017-05-01
One strategy that children might use to sort words into grammatical categories such as noun and verb is distributional bootstrapping, in which local co-occurrence information is used to distinguish between categories. Words that can be used in more than one grammatical category could be problematic for this approach. Using naturalistic corpus data, this study asks whether noun and verb uses of ambiguous words might differ prosodically as a function of their grammatical category in child-directed speech. The results show that noun and verb uses of ambiguous words in sentence-medial positions do differ from one another in terms of duration, vowel duration, pitch change, and vowel quality measures. However, sentence-final tokens are not different as a function of the category in which they were used. The availability of prosodic cues to category in natural child-directed speech could allow learners using a distributional bootstrapping approach to avoid conflating grammatical categories.
NASA Astrophysics Data System (ADS)
Korovin, Iakov S.; Tkachenko, Maxim G.
2018-03-01
In this paper we present a heuristic approach that improves the efficiency of methods used to design efficient water distribution network architectures. The essence of the approach is a search-space reduction procedure that limits the range of available pipe diameters that can be used for each edge of the network graph. To perform the reduction, two opposite boundary scenarios for the distribution of flows are analysed, after which the resulting range is further narrowed by applying a flow rate limitation to each edge of the network. The first boundary scenario provides the most uniform distribution of flow in the network; the opposite scenario creates the network with the highest possible flow level. The parameters of both distributions are calculated by optimizing systems of quadratic functions in a confined space, which can be performed effectively at small time cost. This approach was used to modify a genetic algorithm (GA). The proposed GA provides a variable number of variants of each gene, according to the number of diameters in the list, taking flow restrictions into account. The proposed approach was applied to the evaluation of a well-known test network, the Hanoi water distribution network [1], and the results were compared with a classical GA with an unlimited search space. On the test data, the proposed approach significantly reduced the search space and provided faster and clearer convergence in comparison with the classical version of the GA.
Calculating the n-point correlation function with general and efficient python code
NASA Astrophysics Data System (ADS)
Genier, Fred; Bellis, Matthew
2018-01-01
There are multiple approaches to understanding the evolution of large-scale structure in our universe and with it the role of baryonic matter, dark matter, and dark energy at different points in history. One approach is to calculate the n-point correlation function estimator for galaxy distributions, sometimes choosing a particular type of galaxy, such as luminous red galaxies. The standard way to calculate these estimators is with pair counts (for the 2-point correlation function) and with triplet counts (for the 3-point correlation function). These are O(n²) and O(n³) problems, respectively, and with the number of galaxies that will be characterized in future surveys, having efficient and general code will be of increasing importance. Here we show a proof-of-principle approach to the 2-point correlation function that relies on pre-calculating galaxy locations in coarse “voxels”, thereby reducing the total number of necessary calculations. The code is written in python, making it easily accessible and extensible and is open-sourced to the community. Basic results and performance tests using SDSS/BOSS data will be shown and we discuss the application of this approach to the 3-point correlation function.
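A toy version of the voxel idea, with assumed box size, galaxy count, and voxel edge: bucketing points into coarse cells means pair counting only visits neighboring cells within the maximum separation, instead of all O(n²) pairs.

```python
import numpy as np
from collections import defaultdict
from itertools import product

# Voxel-accelerated pair counting for separations below r_max.
rng = np.random.default_rng(5)
gal = rng.random((20_000, 3)) * 100.0    # galaxy positions in a 100-unit box
r_max, cell = 5.0, 5.0                   # voxel edge >= largest separation

voxels = defaultdict(list)
for i, p in enumerate(gal):
    voxels[tuple((p // cell).astype(int))].append(i)

pairs = 0
for key, idx in voxels.items():
    for off in product((-1, 0, 1), repeat=3):    # 27 neighboring voxels
        nb = voxels.get(tuple(k + o for k, o in zip(key, off)))
        if nb is None:
            continue
        d = np.linalg.norm(gal[idx][:, None, :] - gal[nb][None, :, :], axis=-1)
        pairs += np.count_nonzero((d > 0) & (d < r_max))
print(pairs // 2)   # each pair was counted once from each side
```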
Barnes, M P; Ebert, M A
2008-03-01
The concept of electron pencil-beam dose distributions is central to pencil-beam algorithms used in electron beam radiotherapy treatment planning. The Hogstrom algorithm, which is a common algorithm for electron treatment planning, models large electron field dose distributions by the superposition of a series of pencil beam dose distributions. This means that the accurate characterisation of an electron pencil beam is essential for the accuracy of the dose algorithm. The aim of this study was to evaluate a measurement based approach for obtaining electron pencil-beam dose distributions. The primary incentive for the study was the accurate calculation of dose distributions for narrow fields as traditional electron algorithms are generally inaccurate for such geometries. Kodak X-Omat radiographic film was used in a solid water phantom to measure the dose distribution of circular 12 MeV beams from a Varian 21EX linear accelerator. Measurements were made for beams of diameter, 1.5, 2, 4, 8, 16 and 32 mm. A blocked-field technique was used to subtract photon contamination in the beam. The "error function" derived from Fermi-Eyges Multiple Coulomb Scattering (MCS) theory for corresponding square fields was used to fit resulting dose distributions so that extrapolation down to a pencil beam distribution could be made. The Monte Carlo codes, BEAM and EGSnrc were used to simulate the experimental arrangement. The 8 mm beam dose distribution was also measured with TLD-100 microcubes. Agreement between film, TLD and Monte Carlo simulation results were found to be consistent with the spatial resolution used. The study has shown that it is possible to extrapolate narrow electron beam dose distributions down to a pencil beam dose distribution using the error function. However, due to experimental uncertainties and measurement difficulties, Monte Carlo is recommended as the method of choice for characterising electron pencil-beam dose distributions.
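The "error function" fit referred to here has, for a square field of width a and a pencil-beam Gaussian of spatial spread sigma, the standard Fermi-Eyges form sketched below; the numbers are illustrative, not the measured 12 MeV data.

```python
import numpy as np
from scipy.special import erf

# Off-axis dose for a square field of width a: the pencil-beam Gaussian
# integrated across the field width (Fermi-Eyges small-angle form).
def square_field_profile(x, a, sigma):
    s = np.sqrt(2.0) * sigma
    return 0.5 * (erf((a / 2.0 - x) / s) + erf((a / 2.0 + x) / s))

x = np.linspace(-20.0, 20.0, 9)                  # off-axis distance, mm
print(square_field_profile(x, a=8.0, sigma=3.0))
```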
NASA Astrophysics Data System (ADS)
Hopkins, Paul; Fortini, Andrea; Archer, Andrew J.; Schmidt, Matthias
2010-12-01
We describe a test particle approach based on dynamical density functional theory (DDFT) for studying the correlated time evolution of the particles that constitute a fluid. Our theory provides a means of calculating the van Hove distribution function by treating its self and distinct parts as the two components of a binary fluid mixture, with the "self" component having only one particle, the "distinct" component consisting of all the other particles, and using DDFT to calculate the time evolution of the density profiles for the two components. We apply this approach to a bulk fluid of Brownian hard spheres and compare to results for the van Hove function and the intermediate scattering function from Brownian dynamics computer simulations. We find good agreement at low and intermediate densities using the very simple Ramakrishnan-Yussouff [Phys. Rev. B 19, 2775 (1979)] approximation for the excess free energy functional. Since the DDFT is based on the equilibrium Helmholtz free energy functional, we can probe a free energy landscape that underlies the dynamics. Within the mean-field approximation we find that as the particle density increases, this landscape develops a minimum, while an exact treatment of a model confined situation shows that for an ergodic fluid this landscape should be monotonic. We discuss possible implications for slow, glassy, and arrested dynamics at high densities.
Study of transionospheric signal scintillation: Quasi- particle approach
NASA Astrophysics Data System (ADS)
Lyle, Ruthie D.
1998-07-01
A quasi-particle approach is applied to study amplitude scintillation of transionospheric signals caused by Bottomside Sinusoidal (BSS) irregularities. The quasi-particle method exploits wave-particle duality, viewing the wave as a distribution of quasi-particles. This is accomplished by transforming the autocorrelation of the wave function into a Wigner distribution function, which serves as a distribution of quasi-particles in the (r, k) position-wavevector phase space. The quasi-particle distribution at any instant of time represents the instantaneous state of the wave. Scattering of the signal by the ionospheric irregularities is equivalent to the evolution of the quasi-particle distribution, due to the collision of the quasi-particles with objects arising from the presence of the BSS irregularities. Subsequently, the perturbed quasi-particle distribution facilitates the computation of average space-time propagation properties of the wave. Thus, the scintillation index S4 is determined. Incorporation of essential BSS features in the analysis is accomplished by analytically modeling the power spectrum of the BSS irregularities measured in situ by the low-orbiting Atmosphere Explorer-E (AE-E) satellite. The effect of BSS irregularities on transionospheric signals has been studied. The numerical results agree well with multi-satellite scintillation observations made at Huancayo, Peru, in close time correspondence with BSS irregularities observed by the AE-E satellite over a few nights (December 8-11, 1979). During this period, the severity of the scintillation varied from moderate to intense, S4 = 0.1-0.8.
NASA Astrophysics Data System (ADS)
Farzaneh, Saeed; Forootan, Ehsan
2018-03-01
The computerized ionospheric tomography is a method for imaging the Earth's ionosphere using a sounding technique and computing the slant total electron content (STEC) values from data of the global positioning system (GPS). The most common approach for ionospheric tomography is the voxel-based model, in which (1) the ionosphere is divided into voxels, (2) the STEC is then measured along (many) satellite signal paths, and finally (3) an inversion procedure is applied to reconstruct the electron density distribution of the ionosphere. In this study, a computationally efficient approach is introduced, which improves the inversion procedure of step 3. Our proposed method combines the empirical orthogonal function and the spherical Slepian base functions to describe the vertical and horizontal distribution of electron density, respectively. Thus, it can be applied on regional and global case studies. Numerical application is demonstrated using the ground-based GPS data over South America. Our results are validated against ionospheric tomography obtained from the constellation observing system for meteorology, ionosphere, and climate (COSMIC) observations and the global ionosphere map estimated by international centers, as well as by comparison with STEC derived from independent GPS stations. Using the proposed approach, we find that while using 30 GPS measurements in South America, one can achieve accuracy comparable to that of COSMIC data within the reported accuracy (1 × 10^11 el/cm^3) of the product. Comparisons with real observations of two GPS stations indicate that the absolute difference is less than 2 TECU (where 1 total electron content unit, TECU, is 10^16 electrons/m^2).
Multivariate quadrature for representing cloud condensation nuclei activity of aerosol populations
Fierce, Laura; McGraw, Robert L.
2017-07-26
Sparse representations of atmospheric aerosols are needed for efficient regional- and global-scale chemical transport models. Here we introduce a new framework for representing aerosol distributions, based on the quadrature method of moments. Given a set of moment constraints, we show how linear programming, combined with an entropy-inspired cost function, can be used to construct optimized quadrature representations of aerosol distributions. The sparse representations derived from this approach accurately reproduce cloud condensation nuclei (CCN) activity for realistically complex distributions simulated by a particle-resolved model. Additionally, the linear programming techniques described in this study can be used to bound key aerosol properties, such as the number concentration of CCN. Unlike the commonly used sparse representations, such as modal and sectional schemes, the maximum-entropy approach described here is not constrained to pre-determined size bins or assumed distribution shapes. This study is a first step toward a particle-based aerosol scheme that will track multivariate aerosol distributions with sufficient computational efficiency for large-scale simulations.
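The moment-constrained linear program can be sketched with an off-the-shelf solver; the diameter grid, target distribution, and the simple quadratic-in-log cost below are stand-ins for the paper's entropy-inspired cost function.

```python
import numpy as np
from scipy.optimize import linprog

# Find nonnegative quadrature weights on a fixed diameter grid that
# reproduce the first few moments of a lognormal-like size distribution.
D = np.logspace(-2, 0, 40)                 # diameter grid, micrometres
true_pdf = np.exp(-0.5 * ((np.log(D) - np.log(0.1)) / 0.6) ** 2)
true_pdf /= true_pdf.sum()

k = np.arange(4)                           # match moments of order 0..3
A_eq = D[None, :] ** k[:, None]            # moment matrix
b_eq = A_eq @ true_pdf                     # target moments
c = (np.log(D) - np.log(0.1)) ** 2         # stand-in linear cost

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
print(res.x[res.x > 1e-12])                # sparse set of quadrature weights
```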
An analysis of annual maximum streamflows in Terengganu, Malaysia using TL-moments approach
NASA Astrophysics Data System (ADS)
Ahmad, Ummi Nadiah; Shabri, Ani; Zakaria, Zahrahtul Amani
2013-02-01
The TL-moments approach has been used in an analysis to determine the best-fitting distributions to represent the annual series of maximum streamflow data over 12 stations in Terengganu, Malaysia. TL-moments with different trimming values are used to estimate the parameters of the selected distributions, namely the generalized Pareto (GPA), generalized logistic, and generalized extreme value distributions. The influence of TL-moments on the estimated probability distribution functions is examined by evaluating the relative root mean square error and relative bias of quantile estimates through Monte Carlo simulations. The boxplot is used to show the location of the median and the dispersion of the data, which helps in reaching decisive conclusions. For most of the cases, the results show that the TL-moments variant with the single smallest value trimmed from the conceptual sample (TL-moments (1,0)), applied to the GPA distribution, was the most appropriate in the majority of the stations for describing the annual maximum streamflow series in Terengganu, Malaysia.
Cihan, Abdullah; Birkholzer, Jens; Trevisan, Luca; ...
2014-12-31
During CO2 injection and storage in deep reservoirs, the injected CO2 enters an initially brine-saturated porous medium, and after the injection stops, natural groundwater flow eventually displaces the injected mobile-phase CO2, leaving behind residual non-wetting fluid. Accurate modeling of two-phase flow processes is needed for predicting the fate and transport of injected CO2, evaluating environmental risks, and designing more effective storage schemes. The entrapped non-wetting fluid saturation is typically a function of the spatially varying maximum saturation at the end of injection. At the pore scale, the distribution of void sizes and the connectivity of void space play a major role in the macroscopic hysteresis behavior and capillary entrapment of wetting and non-wetting fluids. This paper presents the development of an approach based on the connectivity of void space for modeling hysteretic capillary pressure-saturation-relative permeability relationships. The new approach uses the void-size distribution and a measure of void-space connectivity to compute the hysteretic constitutive functions and to predict entrapped fluid phase saturations. Two functions, the drainage connectivity function and the wetting connectivity function, are introduced to characterize the connectivity of fluids in void space during drainage and wetting processes. These functions can be estimated through pore-scale simulations in computer-generated porous media or from traditional experimental measurements of primary drainage and main wetting curves. The hysteresis model for saturation-capillary pressure is tested successfully by comparing the model-predicted residual saturation and scanning curves with actual data sets obtained from column experiments found in the literature. A numerical two-phase model simulator with the new hysteresis functions is tested against laboratory experiments conducted in a quasi-two-dimensional flow cell (91.4 cm × 5.6 cm × 61 cm), packed with homogeneous and heterogeneous sands. Initial results show that the model can predict the spatial and temporal distribution of injected fluid during the experiments reasonably well. However, further analyses are needed to comprehensively test the ability of the model to predict transient two-phase flow processes and capillary entrapment in geological reservoirs during geological carbon sequestration.
Barbet-Massin, Morgane; Jetz, Walter
2015-08-01
Animal assemblages fulfill a critical set of ecological functions for ecosystems that may be altered substantially as climate change-induced distribution changes lead to community disaggregation and reassembly. We combine species and community perspectives to assess the consequences of projected geographic range changes for the diverse functional attributes of avian assemblages worldwide. Assemblage functional structure is projected to change highly unevenly across space. These differences arise from both changes in the number of species and changes in species' relative local functional redundancy or distinctness. They sometimes result in substantial losses of functional diversity that could have severe consequences for ecosystem health. Range expansions may counter functional losses in high-latitude regions, but offer little compensation in many tropical and subtropical biomes. Future management of local community function and ecosystem services thus relies on understanding the global dynamics of species distributions and multiscale approaches that include the biogeographic context of species traits. © 2015 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Salajegheh, Maral; Nejad, S. Mohammad Moosavi; Khanpour, Hamzeh; Tehrani, S. Atashbar
2018-05-01
In this paper, we present the SMKA18 analysis, a first attempt to extract the set of next-to-next-to-leading-order (NNLO) spin-dependent parton distribution functions (spin-dependent PDFs) and their uncertainties, determined through the Laplace transform technique and the Jacobi polynomial approach. Using Laplace transformations, we present an analytical solution of the spin-dependent Dokshitzer-Gribov-Lipatov-Altarelli-Parisi evolution equations at NNLO approximation. The results are extracted using a wide range of proton g1p(x, Q2), neutron g1n(x, Q2), and deuteron g1d(x, Q2) spin-dependent structure function data, including the most recent high-precision measurements from COMPASS16 experiments at CERN, which are playing an increasingly important role in global spin-dependent fits. Careful estimation of the uncertainties has been carried out using standard Hessian error propagation. We compare our results with the available spin-dependent inclusive deep inelastic scattering data set and with other results for the spin-dependent PDFs in the literature. The results obtained for the spin-dependent PDFs, as well as the spin-dependent structure functions, are clearly explained at both small and large values of x.
Advanced Unstructured Grid Generation for Complex Aerodynamic Applications
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar Z.
2008-01-01
A new approach for distribution of grid points on the surface and in the volume has been developed and implemented in the NASA unstructured grid generation code VGRID. In addition to the point and line sources of prior work, the new approach utilizes surface and volume sources for automatic curvature-based grid sizing and convenient point distribution in the volume. A new exponential growth function produces smoother and more efficient grids and provides superior control over the distribution of grid points in the field. All types of sources support anisotropic grid stretching, which not only improves the grid economy but also provides more accurate solutions for certain aerodynamic applications. The new approach does not require a three-dimensional background grid as in the previous methods. Instead, it makes use of an efficient bounding-box auxiliary medium for storing grid parameters defined by surface sources. The new approach is less memory-intensive and more efficient computationally. The grids generated with the new method either eliminate the need for adaptive grid refinement for a certain class of problems or provide high-quality initial grids that enhance the performance of many adaptation methods.
Advanced Unstructured Grid Generation for Complex Aerodynamic Applications
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar
2010-01-01
A new approach for distribution of grid points on the surface and in the volume has been developed. In addition to the point and line sources of prior work, the new approach utilizes surface and volume sources for automatic curvature-based grid sizing and convenient point distribution in the volume. A new exponential growth function produces smoother and more efficient grids and provides superior control over the distribution of grid points in the field. All types of sources support anisotropic grid stretching, which not only improves the grid economy but also provides more accurate solutions for certain aerodynamic applications. The new approach does not require a three-dimensional background grid as in the previous methods. Instead, it makes use of an efficient bounding-box auxiliary medium for storing grid parameters defined by surface sources. The new approach is less memory-intensive and more efficient computationally. The grids generated with the new method either eliminate the need for adaptive grid refinement for a certain class of problems or provide high-quality initial grids that enhance the performance of many adaptation methods.
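A minimal sketch of source-driven grid sizing follows: each point source prescribes a base spacing that grows exponentially with distance, and the target spacing at a query point is the smallest value implied by any source. The growth law, function names, and parameter values are illustrative assumptions, not VGRID's actual implementation.

```python
import math

def target_spacing(point, sources, growth_rate=0.15):
    """Grid spacing at `point` implied by exponential-growth point sources.

    Each source is ((x, y, z), base_spacing); the spacing it prescribes
    grows as base * exp(growth_rate * distance)."""
    return min(base * math.exp(growth_rate * math.dist(point, loc))
               for loc, base in sources)

# Two illustrative sources: a fine one at the origin, a coarser one downstream.
sources = [((0.0, 0.0, 0.0), 0.01), ((5.0, 0.0, 0.0), 0.05)]
print(f"spacing at (1, 1, 0): {target_spacing((1.0, 1.0, 0.0), sources):.4f}")
```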
A thermal/nonthermal approach to solar flares
NASA Technical Reports Server (NTRS)
Benka, Stephen G.
1991-01-01
An approach for modeling solar flare high-energy emissions is developed in which both thermal and nonthermal particles coexist and contribute to the radiation. The thermal/nonthermal distribution function is interpreted physically by postulating the existence of DC current sheets in the flare region. The currents then provide both primary plasma heating through Joule dissipation and runaway electron acceleration. The physics of runaway acceleration is discussed. Several methods are presented for obtaining approximations to the thermal/nonthermal distribution function, both within the current sheets and outside of them. Theoretical hard x-ray spectra are calculated, allowing for thermal bremsstrahlung from the heated plasma electrons impinging on the chromosphere. A simple model for hard x-ray images of two-ribbon flares is presented. Theoretical microwave gyrosynchrotron spectra are calculated and analyzed, uncovering important new effects caused by the interplay of thermal and nonthermal particles. The theoretical spectra are compared with observed high-resolution spectra of solar flares, and excellent agreement is found in both hard x-rays and microwaves. The future detailed application of this approach to solar flares is discussed, as are possible refinements to this theory.
Thermal noise model of antiferromagnetic dynamics: A macroscopic approach
NASA Astrophysics Data System (ADS)
Li, Xilai; Semenov, Yuriy; Kim, Ki Wook
In the search for post-silicon technologies, antiferromagnetic (AFM) spintronics is receiving widespread attention. Owing to faster dynamics than its ferromagnetic counterpart, AFM enables ultra-fast magnetization switching and THz oscillations. A crucial factor affecting the stability of antiferromagnetic dynamics is thermal fluctuation, rarely considered in AFM research. Here, we derive from theory both stochastic dynamic equations for the macroscopic AFM Néel vector (L-vector) and the corresponding Fokker-Planck equation for the L-vector distribution function. For the dynamic equation approach, thermal noise is modeled by a stochastic fluctuating magnetic field that affects the AFM dynamics. The field is correlated within the correlation time, and its amplitude is derived from energy dissipation theory. For the distribution function approach, the inertial behavior of AFM dynamics forces consideration of the generalized space, including both coordinates and velocities. Finally, applying the proposed thermal noise model, we analyze a particular case of L-vector reversal of AFM nanoparticles by voltage-controlled perpendicular magnetic anisotropy (PMA) with a tailored pulse width. This work was supported, in part, by SRC/NRI SWAN.
Anomalous current from the covariant Wigner function
NASA Astrophysics Data System (ADS)
Prokhorov, George; Teryaev, Oleg
2018-04-01
We consider accelerated and rotating media of weakly interacting fermions in local thermodynamic equilibrium on the basis of a kinetic approach. The kinetic properties of such media can be described by the covariant Wigner function incorporating the relativistic distribution functions of particles with spin. We obtain formulae for the axial current by summing terms of all orders in the thermal vorticity tensor and chemical potential, both for massive and massless particles. In the massless limit, all terms of fourth and higher order in vorticity and of third order in chemical potential and temperature vanish. It is shown that the axial current acquires a topological component along the 4-acceleration vector. The similarity between different approaches to baryon polarization is established.
Olds, Daniel; Wang, Hsiu -Wen; Page, Katharine L.
2015-09-04
In this work we discuss the potential problems and currently available solutions in modeling powder-diffraction-based pair-distribution function (PDF) data from systems whose morphological features include distances on the nanometer length scale, such as finite nanoparticles, nanoporous networks, and nanoscale precipitates in bulk materials. The implications of an experimental finite minimum Q-value are addressed by simulation, which also demonstrates the advantages of combining PDF data with small-angle scattering (SAS) data. In addition, we introduce a simple Fortran90 code, DShaper, which may be incorporated into PDF data fitting routines in order to approximate the so-called shape function for any atomistic model.
A deterministic width function model
NASA Astrophysics Data System (ADS)
Puente, C. E.; Sivakumar, B.
Use of a deterministic fractal-multifractal (FM) geometric method to model width functions of natural river networks, as derived distributions of simple multifractal measures via fractal interpolating functions, is reported. It is first demonstrated that the FM procedure may be used to simulate natural width functions, preserving their most relevant features like their overall shape and texture and their observed power-law scaling on their power spectra. It is then shown, via two natural river networks (Racoon and Brushy creeks in the United States), that the FM approach may also be used to closely approximate existing width functions.
Time-Series INSAR: An Integer Least-Squares Approach For Distributed Scatterers
NASA Astrophysics Data System (ADS)
Samiei-Esfahany, Sami; Hanssen, Ramon F.
2012-01-01
The objective of this research is to extend the geodetic mathematical model which was developed for persistent scatterers to a model which can exploit distributed scatterers (DS). The main focus is on the integer least-squares framework, and the main challenge is to include the decorrelation effect in the mathematical model. In order to adapt the integer least-squares mathematical model for DS we altered the model from a single-master to a multi-master configuration and introduced the decorrelation effect stochastically. This effect is described in our model by a full covariance matrix. We propose to derive this covariance matrix by numerical integration of the (joint) probability distribution function (PDF) of interferometric phases. This PDF is a function of coherence values and can be directly computed from radar data. We show that the use of this model can improve the performance of temporal phase unwrapping of distributed scatterers.
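The single-look interferometric phase PDF as a function of coherence has a known closed form (e.g., Lee et al., 1994); the sketch below illustrates the numerical-integration step described above by computing the phase variance from that PDF. Multilooking and the full joint PDF across interferograms are omitted, and all values are illustrative.

```python
import numpy as np

def phase_pdf(phi, gamma, phi0=0.0):
    """Single-look interferometric phase PDF for coherence gamma."""
    b = gamma * np.cos(phi - phi0)
    return ((1 - gamma**2) / (2 * np.pi) / (1 - b**2)
            * (1 + b * np.arccos(-b) / np.sqrt(1 - b**2)))

phi = np.linspace(-np.pi, np.pi, 20001)
for gamma in (0.3, 0.6, 0.9):
    pdf = phase_pdf(phi, gamma)
    var = np.trapz(phi**2 * pdf, phi)   # phase variance about phi0 = 0
    print(f"coherence {gamma:.1f}: phase std = {np.sqrt(var):.3f} rad")
```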
Distributed Adaptive Fuzzy Control for Nonlinear Multiagent Systems Via Sliding Mode Observers.
Shen, Qikun; Shi, Peng; Shi, Yan
2016-12-01
In this paper, the problem of distributed adaptive fuzzy control is investigated for high-order uncertain nonlinear multiagent systems on a directed graph with a fixed topology. It is assumed that only the outputs of each follower and its neighbors are available in the design of its distributed controllers. Equivalent output injection sliding mode observers are proposed for each follower to estimate the states of itself and its neighbors, and an observer-based distributed adaptive controller is designed for each follower to guarantee that it asymptotically synchronizes to a leader with tracking errors being semi-globally uniformly ultimately bounded, in which fuzzy logic systems are utilized to approximate unknown functions. Based on algebraic graph theory and the Lyapunov function approach, using the Filippov framework, the closed-loop system stability analysis is conducted. Finally, numerical simulations are provided to illustrate the effectiveness and potential of the developed design techniques.
Universality classes of fluctuation dynamics in hierarchical complex systems
NASA Astrophysics Data System (ADS)
Macêdo, A. M. S.; González, Iván R. Roa; Salazar, D. S. P.; Vasconcelos, G. L.
2017-03-01
A unified approach is proposed to describe the statistics of the short-time dynamics of multiscale complex systems. The probability density function of the relevant time series (signal) is represented as a statistical superposition of a large time-scale distribution weighted by the distribution of certain internal variables that characterize the slowly changing background. The dynamics of the background is formulated as a hierarchical stochastic model whose form is derived from simple physical constraints, which in turn restrict the dynamics to only two possible classes. The probability distributions of both the signal and the background have simple representations in terms of Meijer G functions. The two universality classes for the background dynamics manifest themselves in the signal distribution as two types of tails: power law and stretched exponential, respectively. A detailed analysis of empirical data from classical turbulence and financial markets shows excellent agreement with the theory.
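A minimal numerical sketch of the superposition idea follows: the signal is conditionally Gaussian given a slowly varying background variable (here its inverse variance), and a gamma-distributed background generates power-law tails, one of the two classes discussed above. The paper's hierarchical background dynamics are not reproduced; distributions and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Background variable: inverse variance beta, gamma-distributed (illustrative).
beta = rng.gamma(shape=3.0, scale=1.0, size=200_000)

# Signal given background: Gaussian with variance 1/beta.
x = rng.normal(0.0, 1.0 / np.sqrt(beta))

# The marginal of x is Student-t-like, i.e., power-law tailed.
kurtosis = np.mean(x**4) / np.mean(x**2) ** 2
print(f"sample kurtosis ~ {kurtosis:.2f} (a Gaussian would give 3)")
```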
Brown, Jeffrey S; Holmes, John H; Shah, Kiran; Hall, Ken; Lazarus, Ross; Platt, Richard
2010-06-01
Comparative effectiveness research, medical product safety evaluation, and quality measurement will require the ability to use electronic health data held by multiple organizations. There is no consensus about whether to create regional or national combined (eg, "all payer") databases for these purposes, or distributed data networks that leave most Protected Health Information and proprietary data in the possession of the original data holders. Our objective was to demonstrate functions of a distributed research network that support research needs and also address data holders' concerns about participation. Key design functions included strong local control of data uses and a centralized web-based querying interface. We implemented a pilot distributed research network and evaluated the design considerations, utility for research, and the acceptability to data holders of methods for menu-driven querying. We developed and tested a central, web-based interface with supporting network software. Specific functions assessed include query formation and distribution, query execution and review, and aggregation of results. This pilot successfully evaluated temporal trends in medication use and diagnoses at 5 separate sites, demonstrating some of the possibilities of using a distributed research network. The pilot demonstrated the potential utility of the design, which addressed the major concerns of both users and data holders. No serious obstacles were identified that would prevent development of a fully functional, scalable network. Distributed networks are capable of addressing nearly all anticipated uses of routinely collected electronic healthcare data. Distributed networks would obviate the need for centralized databases, thus avoiding numerous obstacles.
streamgap-pepper: Effects of peppering streams with many small impacts
NASA Astrophysics Data System (ADS)
Bovy, Jo; Erkal, Denis; Sanders, Jason
2017-02-01
streamgap-pepper computes the effect of subhalo fly-bys on cold tidal streams based on the action-angle representation of streams. A line-of-parallel-angle approach is used to calculate the perturbed distribution function of a given stream segment by undoing the effect of all impacts. This approach allows one to compute the perturbed stream density and track in any coordinate system in minutes for realizations of the subhalo distribution down to 10^5 Msun, accounting for the stream's internal dispersion and overlapping impacts. This code uses galpy (ascl:1411.008) and the streampepperdf.py galpy extension, which implements the fast calculation of the perturbed stream structure.
Effects of spatial grouping on the functional response of predators
Cosner, C.; DeAngelis, D.L.; Ault, J.S.; Olson, D.B.
1999-01-01
A unified mechanistic approach is given for the derivation of various forms of functional response in predator-prey models. The derivation is based on the principle of mass action, but with the crucial refinement that the nature of the spatial distribution of predators and/or opportunities for predation is taken into account in an implicit way. If the predators are assumed to have a homogeneous spatial distribution, then the derived functional response is prey-dependent. If the predators are assumed to form a dense colony or school in a single (possibly moving) location, or if the region where predators can encounter prey is assumed to be of limited size, then the functional response depends on both predator and prey densities in a manner that reflects feeding interference between predators. Depending on the specific assumptions, the resulting functional response may be of Beddington-DeAngelis type, of Hassell-Varley type, or ratio-dependent.
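For reference, a small sketch of the functional-response forms named above, with generic attack rate a, handling time h, and interference strength c; the parameterization is illustrative, not taken from the paper's derivation.

```python
def holling_type2(N, a=1.0, h=0.1):
    """Prey-dependent response (homogeneous predator distribution)."""
    return a * N / (1.0 + a * h * N)

def beddington_deangelis(N, P, a=1.0, h=0.1, c=0.5):
    """Response with feeding interference between predators."""
    return a * N / (1.0 + a * h * N + c * P)

def ratio_dependent(N, P, a=1.0, h=0.1):
    """Limiting case where only the prey-per-predator ratio matters."""
    return a * N / (P + a * h * N)

N, P = 10.0, 5.0  # prey and predator densities (illustrative)
print(holling_type2(N), beddington_deangelis(N, P), ratio_dependent(N, P))
```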
NASA Astrophysics Data System (ADS)
Bhattacharjee, Biplab
2003-04-01
The paper presents a general formalism for the nth-nearest-neighbor distribution (NND) of identical interacting particles in a fluid confined in a ν-dimensional space. The nth-NND functions, W(n,r¯) (for n=1,2,3,…) in a fluid are obtained hierarchically in terms of the pair correlation function and W(n-1,r¯) alone. The radial distribution function (RDF) profiles obtained from the molecular dynamics (MD) simulation of Lennard-Jones (LJ) fluid is used to illustrate the results. It is demonstrated that the collective structural information contained in the maxima and minima of the RDF profiles being resolved in terms of individual NND functions may provide more insights about the microscopic neighborhood structure around a reference particle in a fluid. Representative comparison between the results obtained from the formalism and the MD simulation data shows good agreement. Apart from the quantities such as nth-NND functions and nth-nearest-neighbor distances, the average neighbor population number is defined. These quantities are evaluated for the LJ model system and interesting density dependence of the microscopic neighborhood shell structures are discussed in terms of them. The relevance of the NND functions in various phenomena is also pointed out.
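The lowest member of the hierarchy can be sketched directly: the nearest-neighbor (n = 1) distance distribution obtained from the pair correlation function alone. The sketch below uses an ideal gas, g(r) = 1, so the result reduces to the known Hertz distribution; an MD-derived RDF would be substituted in practice. Density and grid are illustrative.

```python
import numpy as np

rho = 0.5                          # number density (illustrative)
r = np.linspace(1e-6, 4.0, 4000)
g = np.ones_like(r)                # ideal gas; replace with an MD-derived RDF

shell = 4 * np.pi * r**2 * rho * g
# Cumulative trapezoidal integral of the shell population up to r.
cum = np.concatenate(([0.0],
                      np.cumsum(0.5 * (shell[1:] + shell[:-1]) * np.diff(r))))
W1 = shell * np.exp(-cum)          # W(1, r): nearest-neighbor distance PDF

print(f"normalization (should be ~1): {np.trapz(W1, r):.4f}")
```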
NASA Technical Reports Server (NTRS)
Ko, William L.; Fleischer, Van Tran
2012-01-01
In the formulations of earlier Displacement Transfer Functions for structure shape predictions, the surface strain distributions along a strain-sensing line were represented with piecewise linear functions. To improve the shape-prediction accuracies, Improved Displacement Transfer Functions were formulated using piecewise nonlinear strain representations. Through discretization of an embedded beam (the depth-wise cross section of a structure along a strain-sensing line) into multiple small domains, piecewise nonlinear functions were used to describe the surface strain distributions along the discretized embedded beam. Such a piecewise approach enabled the piecewise integrations of the embedded beam curvature equations to yield slope and deflection equations in recursive forms. The resulting Improved Displacement Transfer Functions, written in summation forms, were expressed in terms of beam geometrical parameters and surface strains along the strain-sensing line. By feeding the surface strains into the Improved Displacement Transfer Functions, structural deflections could be calculated at multiple points for mapping out the overall structural deformed shapes for visual display. The shape-prediction accuracies of the Improved Displacement Transfer Functions were then examined against finite-element-calculated deflections using different tapered cantilever tubular beams. It was found that by using the piecewise nonlinear strain representations, the shape-prediction accuracies could be greatly improved, especially for highly tapered cantilever tubular beams.
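The piecewise-integration idea can be sketched in a few lines: surface strains along the sensing line are converted to curvature (strain divided by the local half-depth) and integrated twice for slope and deflection of a cantilever. For brevity the sketch uses linear (trapezoidal) strain interpolation rather than the paper's piecewise nonlinear representation, and all geometry and strain values are illustrative.

```python
import numpy as np

x = np.linspace(0.0, 2.0, 41)            # strain-sensing stations [m]
c = 0.05 * (1.0 - 0.3 * x / x[-1])       # tapered half-depth [m] (illustrative)
eps = 1e-3 * (1.0 - x / x[-1])           # measured surface strains (illustrative)

kappa = eps / c                          # beam curvature along the sensing line

def cumtrap(y, x):
    """Cumulative trapezoidal integral with zero initial value."""
    return np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(x))))

slope = cumtrap(kappa, x)                # clamped root: slope(0) = 0
deflection = cumtrap(slope, x)           # clamped root: deflection(0) = 0
print(f"tip deflection ~ {deflection[-1] * 1000:.2f} mm")
```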
Pleiotropy Analysis of Quantitative Traits at Gene Level by Multivariate Functional Linear Models
Wang, Yifan; Liu, Aiyi; Mills, James L.; Boehnke, Michael; Wilson, Alexander F.; Bailey-Wilson, Joan E.; Xiong, Momiao; Wu, Colin O.; Fan, Ruzong
2015-01-01
In genetics, pleiotropy describes the genetic effect of a single gene on multiple phenotypic traits. A common approach is to analyze the phenotypic traits separately using univariate analyses and combine the test results through multiple comparisons. This approach may lead to low power. Multivariate functional linear models are developed to connect genetic variant data to multiple quantitative traits adjusting for covariates for a unified analysis. Three types of approximate F-distribution tests based on Pillai–Bartlett trace, Hotelling–Lawley trace, and Wilks’s Lambda are introduced to test for association between multiple quantitative traits and multiple genetic variants in one genetic region. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and optimal sequence kernel association test (SKAT-O). Extensive simulations were performed to evaluate the false positive rates and power performance of the proposed models and tests. We show that the approximate F-distribution tests control the type I error rates very well. Overall, simultaneous analysis of multiple traits can increase power performance compared to an individual test of each trait. The proposed methods were applied to analyze (1) four lipid traits in eight European cohorts, and (2) three biochemical traits in the Trinity Students Study. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and SKAT-O for the three biochemical traits. The approximate F-distribution tests of the proposed functional linear models are more sensitive than those of the traditional multivariate linear models that in turn are more sensitive than SKAT-O in the univariate case. The analysis of the four lipid traits and the three biochemical traits detects more association than SKAT-O in the univariate case. PMID:25809955
Pleiotropy analysis of quantitative traits at gene level by multivariate functional linear models.
Wang, Yifan; Liu, Aiyi; Mills, James L; Boehnke, Michael; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao; Wu, Colin O; Fan, Ruzong
2015-05-01
In genetics, pleiotropy describes the genetic effect of a single gene on multiple phenotypic traits. A common approach is to analyze the phenotypic traits separately using univariate analyses and combine the test results through multiple comparisons. This approach may lead to low power. Multivariate functional linear models are developed to connect genetic variant data to multiple quantitative traits adjusting for covariates for a unified analysis. Three types of approximate F-distribution tests based on Pillai-Bartlett trace, Hotelling-Lawley trace, and Wilks's Lambda are introduced to test for association between multiple quantitative traits and multiple genetic variants in one genetic region. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and optimal sequence kernel association test (SKAT-O). Extensive simulations were performed to evaluate the false positive rates and power performance of the proposed models and tests. We show that the approximate F-distribution tests control the type I error rates very well. Overall, simultaneous analysis of multiple traits can increase power performance compared to an individual test of each trait. The proposed methods were applied to analyze (1) four lipid traits in eight European cohorts, and (2) three biochemical traits in the Trinity Students Study. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and SKAT-O for the three biochemical traits. The approximate F-distribution tests of the proposed functional linear models are more sensitive than those of the traditional multivariate linear models that in turn are more sensitive than SKAT-O in the univariate case. The analysis of the four lipid traits and the three biochemical traits detects more association than SKAT-O in the univariate case. © 2015 WILEY PERIODICALS, INC.
The nitrate response of a lowland catchment and groundwater travel times
NASA Astrophysics Data System (ADS)
van der Velde, Ype; Rozemeijer, Joachim; de Rooij, Gerrit; van Geer, Frans
2010-05-01
Intensive agriculture in lowland catchments causes eutrophication of downstream waters. To determine effective measures to reduce the nutrient loads from upstream lowland catchments, we need to understand the origin of long-term and daily variations in surface water nutrient concentrations. Surface water concentrations are often linked to travel time distributions of water passing through the saturated and unsaturated soil of the contributing catchment. This distribution represents the contact time over which sorption, desorption, and degradation take place. However, travel time distributions are strongly influenced by processes like tube drain flow, overland flow, and the dynamics of draining ditches and streams, and therefore exhibit strong daily and seasonal variations. The study we will present is situated in the 6.6 km2 Hupsel brook catchment in The Netherlands. In this catchment, nitrate and chloride concentrations have been intensively monitored for the past 26 years under steadily decreasing agricultural inputs. We described the complicated dynamics of subsurface water fluxes, with streams, ditches, and tube drains locally switching between active and passive depending on the ambient groundwater level, by a groundwater model with high spatial and temporal resolution. A transient particle tracking approach is used to derive a unique catchment-scale travel time distribution for each day during the 26-year model period. These transient travel time distributions are not smooth, but strongly spiked, reflecting the contribution of past rainfall events to the current discharge. We will show that a catchment-scale mass response function approach that only describes catchment-scale mixing and degradation suffices to accurately reproduce observed chloride and nitrate surface water concentrations, as long as the mass response functions include the dynamics of travel time distributions caused by the highly variable connectivity of the surface water network.
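A hedged sketch of the mass response function idea follows: outlet concentration as the input signal convolved with a travel time distribution and first-order degradation. A smooth exponential travel time distribution and arbitrary rates stand in for the spiked, transient distributions derived in the study.

```python
import numpy as np

dt = 1.0                                    # time step [d]
t = np.arange(0.0, 2000.0, dt)
h = np.exp(-t / 400.0)
h /= h.sum() * dt                           # travel time distribution [1/d]
k = 1e-3                                    # first-order degradation rate [1/d]

# Input concentration signal: a multi-year pulse of agricultural loading.
c_in = np.where((t > 100) & (t < 800), 1.0, 0.2)

# Outlet concentration: convolution with degradation en route.
c_out = np.convolve(c_in, h * np.exp(-k * t))[: t.size] * dt
print(f"peak outlet concentration: {c_out.max():.3f}")
```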
Vaknin, David; Bu, Wei; Travesset, Alex
2008-07-28
We show that the structure factor S(q) of water can be obtained from x-ray synchrotron experiments at grazing angle of incidence (in reflection mode) by using a liquid surface diffractometer. The corrections used to obtain S(q) self-consistently are described. Applying these corrections to scans at different incident beam angles (above the critical angle) collapses the measured intensities into a single master curve, without fitting parameters, which within a scale factor yields S(q). Performing the measurements below the critical angle for total reflectivity yields the structure factor of the topmost layers of the water/vapor interface. Our results indicate water restructuring at the vapor/water interface. We also introduce a new approach to extract g(r), the pair distribution function (PDF), by expressing the PDF as a linear sum of error functions whose parameters are refined by applying a nonlinear least-squares fit method. This approach enables a straightforward determination of the inherent uncertainties in the PDF. Implications of our results for previous measurements and theoretical predictions of the PDF are also discussed.
Two-particle correlation function and dihadron correlation approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vechernin, V. V., E-mail: v.vechernin@spbu.ru; Ivanov, K. O.; Neverov, D. I.
It is shown that, in the case of asymmetric nuclear interactions, the application of the traditional dihadron correlation approach to determining a two-particle correlation function C may lead to a form distorted in relation to the canonical pair correlation function C{sub 2}. This result was obtained both by means of exact analytic calculations of correlation functions within a simple string model for proton–nucleus and deuteron–nucleus collisions and by means of Monte Carlo simulations based on employing the HIJING event generator. It is also shown that the method based on studying multiplicity correlations in two narrow observation windows separated in rapidity makes it possible to determine correctly the canonical pair correlation function C{sub 2} for all cases, including the case where the rapidity distribution of product particles is not uniform.
ERIC Educational Resources Information Center
Fink, Susan Jo Breakenridge
2011-01-01
The purpose of this study was to explore how instructors at a mid-sized Midwest four-year undergraduate private university view the purpose, structure, format and use of their course syllabi. The theory of structural functionalism and a quantitative research approach were employed. A group administration approach was used to distribute the paper…
Apodization of two-dimensional pupils with aberrations
NASA Astrophysics Data System (ADS)
Reddy, Andra Naresh Kumar; Hashemi, Mahdieh; Khonina, Svetlana Nikolaevna
2018-06-01
A technique is proposed to enhance the resolution of the point spread function (PSF) of an optical system under defocussing and spherical aberrations. The approach is based on amplitude and phase masking in a ring aperture to modify the light intensity distribution in the Gaussian focal plane (YD = 0) and in the defocussed planes (YD = π and YD = 2π). The width of the annulus modifies the distribution of the light intensity in the side lobes of the resultant PSF. In the presence of an asymmetry in the phase of the annulus, the Hanning amplitude apodizer [cos(πβρ)] employed in the pupil function can modify the spatial distribution of light in the maximally defocussed plane (YD = 2π), resulting in a PSF with improved resolution.
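The pupil model above can be sketched as the standard circularly symmetric diffraction integral: a ring aperture carrying the Hanning apodizer cos(πβρ) and a defocus phase YDρ²/2. The annulus width, β, and sampling below are illustrative choices, not the paper's optimized values.

```python
import numpy as np
from scipy.special import j0

def psf(v, yd, beta=0.5, rho_inner=0.6, n=2000):
    """Radial PSF of an apodized annular pupil with defocus parameter yd."""
    rho = np.linspace(rho_inner, 1.0, n)
    pupil = np.cos(np.pi * beta * rho) * np.exp(1j * yd * rho**2 / 2.0)
    amp = 2.0 * np.trapz(pupil * j0(np.outer(v, rho)) * rho, rho, axis=1)
    return np.abs(amp) ** 2

v = np.linspace(0.0, 10.0, 6)               # reduced radial coordinate
for yd in (0.0, np.pi, 2 * np.pi):
    print(f"Y_D = {yd:.2f}: on-axis intensity = {psf(v, yd)[0]:.4f}")
```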
An advanced kinetic theory for morphing continuum with inner structures
NASA Astrophysics Data System (ADS)
Chen, James
2017-12-01
Advanced kinetic theory with the Boltzmann-Curtiss equation provides a promising tool for polyatomic gas flows, especially for fluid flows containing inner structures, such as turbulence, polyatomic gas flows and others. Although a Hamiltonian-based distribution function was proposed for diatomic gas flow, a general distribution function for the generalized Boltzmann-Curtiss equations and polyatomic gas flow is still out of reach. With assistance from Boltzmann's entropy principle, a generalized Boltzmann-Curtiss distribution for polyatomic gas flow is introduced. The corresponding governing equations at equilibrium state are derived and compared with Eringen's morphing (micropolar) continuum theory derived under the framework of rational continuum thermomechanics. Although rational continuum thermomechanics has the advantages of mathematical rigor and simplicity, the presented statistical kinetic theory approach provides a clear physical picture for what the governing equations represent.
NASA Astrophysics Data System (ADS)
Sharma, Prabhat Kumar
2016-11-01
A framework is presented for the analysis of the average symbol error rate (SER) for M-ary quadrature amplitude modulation in a free-space optical communication system. The standard probability density function (PDF)-based approach is extended to evaluate the average SER by representing the Q-function through its Meijer's G-function equivalent. Specifically, a converging power series expression for the average SER is derived considering zero-boresight misalignment errors at the receiver side. The analysis presented here assumes a unified expression for the PDF of the channel coefficient which incorporates the M-distributed atmospheric turbulence and Rayleigh-distributed radial displacement for the misalignment errors. The analytical results are compared with the results obtained using the Q-function approximation. Further, the presented results are supported by Monte Carlo simulations.
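The PDF-based averaging step can be sketched with a simplified channel: below, a lognormal irradiance model stands in for the paper's M-distribution-plus-pointing-error PDF, and only the first (dominant) Q-function term of the square M-QAM SER is kept. All parameters are illustrative assumptions.

```python
import numpy as np
from scipy.stats import lognorm, norm

def average_ser(M=16, mean_snr_db=20.0, sigma=0.3, n=200_000, seed=1):
    """Monte Carlo average of the conditional M-QAM SER over channel fading."""
    rng = np.random.default_rng(seed)
    # Lognormal irradiance normalized to unit mean (illustrative channel).
    h = lognorm(s=sigma, scale=np.exp(-sigma**2 / 2)).rvs(n, random_state=rng)
    snr = 10 ** (mean_snr_db / 10) * h**2
    # First term of the square M-QAM SER; norm.sf is the Gaussian Q-function.
    ser = 4 * (1 - 1 / np.sqrt(M)) * norm.sf(np.sqrt(3 * snr / (M - 1)))
    return ser.mean()

print(f"average SER ~ {average_ser():.2e}")
```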
Exact solutions for the selection-mutation equilibrium in the Crow-Kimura evolutionary model.
Semenov, Yuri S; Novozhilov, Artem S
2015-08-01
We reformulate the eigenvalue problem for the selection-mutation equilibrium distribution in the case of a haploid asexually reproduced population in the form of an equation for an unknown probability generating function of this distribution. The special form of this equation in the infinite sequence limit allows us to obtain analytically the steady state distributions for a number of particular cases of the fitness landscape. The general approach is illustrated by examples; theoretical findings are compared with numerical calculations. Copyright © 2015. Published by Elsevier Inc.
A Bayesian approach to parameter and reliability estimation in the Poisson distribution.
NASA Technical Reports Server (NTRS)
Canavos, G. C.
1972-01-01
For life testing procedures, a Bayesian analysis is developed with respect to a random intensity parameter in the Poisson distribution. Bayes estimators are derived for the Poisson parameter and the reliability function based on uniform and gamma prior distributions of that parameter. A Monte Carlo procedure is implemented to make possible an empirical mean-squared error comparison between Bayes and existing minimum variance unbiased, as well as maximum likelihood, estimators. As expected, the Bayes estimators have mean-squared errors that are appreciably smaller than those of the other two.
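For the gamma-prior case, the posterior is available in closed form, which the sketch below exploits: with a Gamma(a, b) prior (shape a, rate b) on the Poisson intensity λ and s events observed in total exposure T, the posterior is Gamma(a + s, b + T), and the posterior mean of the reliability exp(-λt) follows from the gamma Laplace transform. Hyperparameter and data values are illustrative.

```python
a, b = 2.0, 1.0        # Gamma(shape=a, rate=b) prior on the intensity
s, T = 7, 10.0         # observed event count and total exposure time

lam_bayes = (a + s) / (b + T)        # posterior mean of the intensity

def reliability_bayes(t):
    """Posterior mean of R(t) = exp(-lam * t), in closed form."""
    return ((b + T) / (b + T + t)) ** (a + s)

print(f"Bayes estimate of lambda: {lam_bayes:.3f}")
print(f"Bayes estimate of R(1.0): {reliability_bayes(1.0):.3f}")
```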
Qi, Li; Zhu, Jiang; Hancock, Aneeka M.; Dai, Cuixia; Zhang, Xuping; Frostig, Ron D.; Chen, Zhongping
2016-01-01
Doppler optical coherence tomography (DOCT) is considered one of the most promising functional imaging modalities for neurobiology research and has demonstrated the ability to quantify cerebral blood flow velocity with high accuracy. However, the measurement of the total absolute blood flow velocity (BFV) of major cerebral arteries is still a difficult problem since it is related to vessel geometry. In this paper, we present a volumetric vessel reconstruction approach that is capable of measuring the absolute BFV distributed along the entire middle cerebral artery (MCA) within a large field-of-view. The Doppler angle at each point of the MCA, representing the vessel geometry, is derived analytically by localizing the artery from pure DOCT images through vessel segmentation and skeletonization. Our approach could achieve automatic quantification of the fully distributed absolute BFV across different vessel branches. Experiments on rodents using swept-source optical coherence tomography showed that our approach was able to reveal the consequences of permanent MCA occlusion with absolute BFV measurement. PMID:26977365
Qi, Li; Zhu, Jiang; Hancock, Aneeka M; Dai, Cuixia; Zhang, Xuping; Frostig, Ron D; Chen, Zhongping
2016-02-01
Doppler optical coherence tomography (DOCT) is considered one of the most promising functional imaging modalities for neurobiology research and has demonstrated the ability to quantify cerebral blood flow velocity with high accuracy. However, the measurement of the total absolute blood flow velocity (BFV) of major cerebral arteries is still a difficult problem since it is related to vessel geometry. In this paper, we present a volumetric vessel reconstruction approach that is capable of measuring the absolute BFV distributed along the entire middle cerebral artery (MCA) within a large field-of-view. The Doppler angle at each point of the MCA, representing the vessel geometry, is derived analytically by localizing the artery from pure DOCT images through vessel segmentation and skeletonization. Our approach could achieve automatic quantification of the fully distributed absolute BFV across different vessel branches. Experiments on rodents using swept-source optical coherence tomography showed that our approach was able to reveal the consequences of permanent MCA occlusion with absolute BFV measurement.
NASA Astrophysics Data System (ADS)
Grosso, Juan M.; Ocampo-Martinez, Carlos; Puig, Vicenç
2017-10-01
This paper proposes a distributed model predictive control approach designed to work in a cooperative manner for controlling flow-based networks showing periodic behaviours. Under this distributed approach, local controllers cooperate in order to enhance the performance of the whole flow network, avoiding the use of a coordination layer. Controllers use both the monolithic model of the network and the given global cost function to optimise the control inputs of the local controllers, taking into account the effect of their decisions on the remaining subsystems comprising the entire network. In this sense, a global (all-to-all) communication strategy is considered. Although Pareto optimality cannot be reached due to the existence of non-sparse coupling constraints, asymptotic convergence to a Nash equilibrium is guaranteed. The resultant strategy is tested and its effectiveness is shown when applied to a large-scale complex flow-based network: the Barcelona drinking water supply system.
Praznik, Werner; Huber, Anton
2005-09-25
A major characteristic of polysaccharides in aqueous media is their tendency for aggregation and the dynamic formation of supermolecular structures. Even extended dissolution processes will not eliminate these structures, which dominate many analytical approaches, in particular absolute molecular weight determinations based on light scattering data. An alternative approach for determining the de facto molecular weight of glucans with a free terminal hemiacetal functionality (reducing end group) has been adapted from carbohydrates to midrange and high-dp glucans: quantitative and stabilized labeling as aminopyridyl derivatives (AP-glucans) and subsequent analysis of SEC-separated elution profiles based on simultaneously monitored mass and molar fractions by refractive index and fluorescence detection. SEC-DRI/FL of AP-glucans proved to be an appropriate approach for determining the de facto molecular weight of the constituting glucan molecules even in the presence of supermolecular structures, for non-branched (pullulan), branched (dextran), narrowly distributed and broadly distributed glucans, and for mixes of compact and loosely packed polymer coils (starch glucan hydrolysate).
NASA Astrophysics Data System (ADS)
Donkov, Sava; Stefanov, Ivan Z.
2018-03-01
We have set ourselves the task of obtaining the probability distribution function of the mass density of a self-gravitating, isothermal, compressible turbulent fluid from its physics. We have done this in the context of a new notion: the molecular clouds ensemble. We have applied a new approach that takes into account the fractal nature of the fluid. Using the medium equations, under the assumption of steady state, we show that the total energy per unit mass is an invariant with respect to the fractal scales. As a next step we obtain a non-linear integral equation for the dimensionless scale Q, which is the cube root of the integral of the probability distribution function. It is solved approximately up to the leading-order term in the series expansion. We obtain two solutions. They are power-law distributions with different slopes: the first one is -1.5 at low densities, corresponding to an equilibrium between all energies at a given scale, and the second one is -2 at high densities, corresponding to free fall at small scales.
New Approaches to Robust Confidence Intervals for Location: A Simulation Study.
1984-06-01
obtain a denominator for the test statistic. Those statistics based on location estimates derived from Hampel’s redescending influence function or v...defined an influence function for a test in terms of the behavior of its P-values when the data are sampled from a model distribution modified by point...proposal could be used for interval estimation as well as hypothesis testing, the extension is immediate. Once an influence function has been defined
Path probability of stochastic motion: A functional approach
NASA Astrophysics Data System (ADS)
Hattori, Masayuki; Abe, Sumiyoshi
2016-06-01
The path probability of a particle undergoing stochastic motion is studied by the use of a functional technique, and a general formula is derived for the path probability distribution functional. The probability of finding paths inside a tube/band, the center of which is stipulated by a given path, is analytically evaluated in a way analogous to continuous measurements in quantum mechanics. Then, the formalism developed here is applied to the stochastic dynamics of stock price in finance.
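The tube probability admits a direct Monte Carlo check, sketched below for a free Brownian particle and a straight reference path at the origin; the diffusion constant, tube half-width, and discretization are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_paths, n_steps, dt = 20_000, 200, 0.01
D, delta = 1.0, 0.8                 # diffusion constant, tube half-width

# Brownian increments with variance 2*D*dt, accumulated into paths.
steps = rng.normal(0.0, np.sqrt(2 * D * dt), size=(n_paths, n_steps))
paths = np.cumsum(steps, axis=1)

# Fraction of paths that never leave the tube around the reference path x = 0.
inside = np.all(np.abs(paths) < delta, axis=1)
print(f"P(path stays inside the tube): {inside.mean():.3f}")
```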
Conjugate gradient minimisation approach to generating holographic traps for ultracold atoms.
Harte, Tiffany; Bruce, Graham D; Keeling, Jonathan; Cassettari, Donatella
2014-11-03
Direct minimisation of a cost function can in principle provide a versatile and highly controllable route to computational hologram generation. Here we show that the careful design of cost functions, combined with numerically efficient conjugate gradient minimisation, establishes a practical method for the generation of holograms for a wide range of target light distributions. This results in a guided optimisation process, with a crucial advantage illustrated by the ability to circumvent optical vortex formation during hologram calculation. We demonstrate the implementation of the conjugate gradient method for both discrete and continuous intensity distributions and discuss its applicability to optical trapping of ultracold atoms.
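A toy version of the cost-minimisation route is sketched below: the phase at the hologram plane is optimised by conjugate gradients so that the far-field intensity (modelled by a 1-D FFT) matches a single-trap target. Real traps are two-dimensional and the paper's tailored cost functions are not reproduced; problem size and target are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

N = 16
target = np.zeros(N)
target[N // 3] = 1.0                      # one off-axis trap (illustrative)

def cost(phase):
    """Squared error between normalized far-field intensity and target."""
    far = np.abs(np.fft.fft(np.exp(1j * phase))) ** 2 / N**2  # sums to 1
    return np.sum((far - target) ** 2)

rng = np.random.default_rng(3)
res = minimize(cost, rng.uniform(0, 2 * np.pi, N), method="CG")
print(f"final cost: {res.fun:.6f}")
```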
NASA Technical Reports Server (NTRS)
Markley, F. Landis
2005-01-01
A new method is presented for the simultaneous estimation of the attitude of a spacecraft and an N-vector of bias parameters. This method uses a probability distribution function defined on the Cartesian product of SO(3), the group of rotation matrices, and the Euclidean space R^N. The Fokker-Planck equation propagates the probability distribution function between measurements, and Bayes's formula incorporates measurement update information. This approach avoids all the issues of singular attitude representations or singular covariance matrices encountered in extended Kalman filters. In addition, the filter has a consistent initialization for a completely unknown initial attitude, owing to the fact that SO(3) is a compact space.
NASA Astrophysics Data System (ADS)
Butler, Samuel D.; Marciniak, Michael A.
2014-09-01
Since the development of the Torrance-Sparrow bidirectional reflectance distribution function (BRDF) model in 1967, several BRDF models have been created. Previous attempts to categorize BRDF models have relied upon somewhat vague descriptors, such as empirical, semi-empirical, and experimental. Our approach is to instead categorize BRDF models based on functional form: microfacet normal distribution, geometric attenuation, directional-volumetric and Fresnel terms, and cross section conversion factor. Several popular microfacet models are compared to a standardized notation for a microfacet BRDF model. A library of microfacet model components is developed, allowing for creation of unique microfacet models driven by experimentally measured BRDFs.
Directional pair distribution function for diffraction line profile analysis of atomistic models
Leonardi, Alberto; Leoni, Matteo; Scardi, Paolo
2013-01-01
The concept of the directional pair distribution function is proposed to describe line broadening effects in powder patterns calculated from atomistic models of nano-polycrystalline microstructures. The approach provides at the same time a description of the size effect for domains of any shape and a detailed explanation of the strain effect caused by the local atomic displacement. The latter is discussed in terms of different strain types, also accounting for strain field anisotropy and grain boundary effects. The results can in addition be directly read in terms of traditional line profile analysis, such as that based on the Warren–Averbach method. PMID:23396818
Chaotic jumps in the generalized first adiabatic invariant in current sheets
NASA Technical Reports Server (NTRS)
Brittnacher, M. J.; Whipple, E. C.
1991-01-01
The present study examines how the changes in the generalized first adiabatic invariant J derived from the separatrix crossing theory can be incorporated into the drift variable approach to generating distribution functions. A method is proposed for determining distribution functions for an ensemble of particles following interaction with the tail current sheet by treating the interaction as a scattering problem characterized by changes in the invariant. Generalized drift velocities are obtained for a 1D tail configuration by using the generalized first invariant. The invariant remained constant except for the discrete changes caused by chaotic scattering as the particles cross the separatrix.
An empirical Bayes approach for the Poisson life distribution.
NASA Technical Reports Server (NTRS)
Canavos, G. C.
1973-01-01
A smooth empirical Bayes estimator is derived for the intensity parameter (hazard rate) in the Poisson distribution as used in life testing. The reliability function is also estimated either by using the empirical Bayes estimate of the parameter, or by obtaining the expectation of the reliability function. The behavior of the empirical Bayes procedure is studied through Monte Carlo simulation in which estimates of mean-squared errors of the empirical Bayes estimators are compared with those of conventional estimators such as minimum variance unbiased or maximum likelihood. Results indicate a significant reduction in mean-squared error of the empirical Bayes estimators over the conventional variety.
Dolan, Paul; Tsuchiya, Aki
2009-01-01
The literature on income distribution has attempted to evaluate different degrees of inequality using a social welfare function (SWF) approach. However, it has largely ignored the source of such inequalities, and has thus failed to consider different degrees of inequity. The literature on egalitarianism has addressed issues of equity, largely in relation to individual responsibility. This paper builds upon these two literatures, and introduces individual responsibility into the SWF. Results from a small-scale study of people's preferences in relation to the distribution of health benefits are presented to illustrate how the parameter values of a SWF might be determined.
Definition and Evolution of Transverse Momentum Distributions
NASA Astrophysics Data System (ADS)
Echevarría, Miguel G.; Idilbi, Ahmad; Scimemi, Ignazio
We consider the definition of unpolarized transverse-momentum-dependent parton distribution functions while staying on the light cone. By imposing a requirement of identical treatment of the two collinear sectors, our approach, compatible with a generic factorization theorem with the soft function included, is valid for all non-ultraviolet regulators (as it should be), an issue which causes much confusion in the whole field. We explain how large logarithms can be resummed in a way which can be considered an alternative to the use of the Collins-Soper evolution equation. The evolution properties are also discussed, and gauge invariance, in both classes of gauges, regular and singular, is emphasized.
Alternative Approaches to Evaluation in Empirical Microeconomics
ERIC Educational Resources Information Center
Blundell, Richard; Dias, Monica Costa
2009-01-01
This paper reviews some of the most popular policy evaluation methods in empirical microeconomics: social experiments, natural experiments, matching, instrumental variables, discontinuity design, and control functions. It discusses identification of traditionally used average parameters and more complex distributional parameters. The adequacy,…
Isaac, Marney E; Anglaaere, Luke C N
2013-01-01
Tree root distribution and activity are determinants of belowground competition. However, studying root response to environmental and management conditions remains logistically challenging. Methodologically, nondestructive in situ tree root ecology analysis has lagged. In this study, we tested a nondestructive approach to determine tree coarse root architecture and function of a perennial tree crop, Theobroma cacao L., at two edaphically contrasting sites (sandstone and phyllite–granite derived soils) in Ghana, West Africa. We detected coarse root vertical distribution using ground-penetrating radar and root activity via soil water acquisition using isotopic matching of δ18O plant and soil signatures. Coarse roots were detected to a depth of 50 cm; however, intraspecific coarse root vertical distribution was modified by edaphic conditions. Soil δ18O isotopic signature declined with depth, providing conditions for plant–soil δ18O isotopic matching. This pattern held only under sandstone conditions, where water acquisition zones were identifiably narrow in the 10–20 cm depth, but broader under phyllite–granite conditions, presumably due to resource patchiness. Detected coarse root count by depth and measured fine root density were strongly correlated, as were detected coarse root count and identified water acquisition zones, thus validating the root detection capability of ground-penetrating radar, but exclusively on sandstone soils. This approach was able to characterize trends between intraspecific root architecture and edaphic-dependent resource availability, although it was limited by site conditions. This study successfully demonstrates a new approach for in situ root studies that moves beyond invasive point sampling to nondestructive detection of root architecture and function. We discuss the transfer of such an approach to answer root ecology questions in various tree-based landscapes. PMID:23762519
A Distributed Trajectory-Oriented Approach to Managing Traffic Complexity
NASA Technical Reports Server (NTRS)
Idris, Husni; Wing, David J.; Vivona, Robert; Garcia-Chico, Jose-Luis
2007-01-01
In order to handle the expected increase in air traffic volume, the next generation air transportation system is moving towards a distributed control architecture, in which ground-based service providers such as controllers and traffic managers and air-based users such as pilots share responsibility for aircraft trajectory generation and management. While its architecture becomes more distributed, the goal of the Air Traffic Management (ATM) system remains to achieve objectives such as maintaining safety and efficiency. It is, therefore, critical to design appropriate control elements to ensure that aircraft and ground-based actions result in achieving these objectives without unduly restricting user-preferred trajectories. This paper presents a trajectory-oriented approach containing two such elements. One is a trajectory flexibility preservation function, by which aircraft plan their trajectories to preserve flexibility to accommodate unforeseen events. The other is a trajectory constraint minimization function, by which ground-based agents, in collaboration with air-based agents, impose just-enough restrictions on trajectories to achieve ATM objectives, such as separation assurance and flow management. The underlying hypothesis is that preserving the trajectory flexibility of each individual aircraft naturally achieves the aggregate objective of avoiding excessive traffic complexity, and that trajectory flexibility is increased by minimizing constraints without jeopardizing the intended ATM objectives. The paper presents conceptually how the two functions operate in a distributed control architecture that includes self-separation. The paper illustrates the concept through hypothetical scenarios involving conflict resolution and flow management. It presents a functional analysis of the interaction and information flow between the functions. It also presents an analytical framework for defining metrics and developing methods to preserve trajectory flexibility and minimize its constraints. In this framework, flexibility is defined in terms of robustness and adaptability to disturbances, and the impact of constraints is illustrated through analysis of a trajectory solution space with limited degrees of freedom, in simple constraint situations involving meeting multiple times of arrival and resolving a conflict.
NASA Astrophysics Data System (ADS)
Melis, Stefano
2015-01-01
We present a review of current Transverse Momentum Dependent (TMD) phenomenology, focusing our attention on the unpolarized TMD parton distribution function and the Sivers function. The paper introduces and comments on the new Collins-Soper-Sterman (CSS) TMD evolution formalism [1]. We make use of a selection of results obtained by several groups to illustrate the achievements and the failures of the simple Gaussian approach and the TMD CSS evolution formalism.
Interpolating Non-Parametric Distributions of Hourly Rainfall Intensities Using Random Mixing
NASA Astrophysics Data System (ADS)
Mosthaf, Tobias; Bárdossy, András; Hörning, Sebastian
2015-04-01
The correct spatial interpolation of hourly rainfall intensity distributions is of great importance for stochastic rainfall models. Poorly interpolated distributions may lead to over- or underestimation of rainfall and consequently to erroneous estimates in downstream applications such as hydrological or hydraulic models. By analyzing the spatial relation of empirical rainfall distribution functions, a persistent order of the quantile values over a wide range of non-exceedance probabilities is observed. As the order remains similar, the interpolation weights of quantile values for one particular non-exceedance probability can be applied to the other probabilities. This assumption enables the use of kernel-smoothed distribution functions for interpolation purposes. Comparing the order of hourly quantile values over different gauges with the order of their daily quantile values for equal probabilities results in high correlations. The hourly quantile values also show high correlations with elevation. The incorporation of these two covariates into the interpolation is therefore tested. As only positive interpolation weights for the quantile values assure a monotonically increasing distribution function, the use of geostatistical methods like kriging is problematic, and kriging with external drift, which would incorporate secondary information, is not applicable. Nonetheless, it would be fruitful to make use of covariates. To overcome this shortcoming, a new random mixing approach for spatial random fields is applied. Within the mixing process, hourly quantile values are treated as equality constraints, and correlations with elevation values are included as relationship constraints. To profit from the dependence on daily quantile values, distribution functions of daily gauges are used to set up lower-equal and greater-equal constraints at their locations. In this way, the denser daily gauge network can be included in the interpolation of the hourly distribution functions. The applicability of this new interpolation procedure will be shown for around 250 hourly rainfall gauges in the German federal state of Baden-Württemberg. The performance of the random mixing technique within the interpolation is compared to applicable kriging methods. Additionally, the interpolation of kernel-smoothed distribution functions is compared with the interpolation of fitted parametric distributions.
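The core assumption, that one set of positive interpolation weights can serve all non-exceedance probabilities because quantile values keep a stable ordering across gauges, can be sketched as below. The gauge samples and weights are illustrative; the study derives the weights by random mixing with elevation and daily-gauge covariates.

```python
import numpy as np

p = np.linspace(0.01, 0.99, 99)          # non-exceedance probabilities

# Empirical quantile functions at three gauges (illustrative samples).
q_gauges = np.array([
    np.quantile(np.random.default_rng(i).gamma(0.6, 2.0, 5000), p)
    for i in range(3)])

w = np.array([0.5, 0.3, 0.2])            # positive weights summing to one

# Applying the same weights at every probability keeps the interpolated
# quantile function monotonically increasing.
q_target = w @ q_gauges
i90 = int(np.argmin(np.abs(p - 0.9)))
print(f"interpolated 0.90 quantile: {q_target[i90]:.3f}")
```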
Four theorems on the psychometric function.
May, Keith A; Solomon, Joshua A
2013-01-01
In a 2-alternative forced-choice (2AFC) discrimination task, observers choose which of two stimuli has the higher value. The psychometric function for this task gives the probability of a correct response for a given stimulus difference, Δx. This paper proves four theorems about the psychometric function. Assuming the observer applies a transducer and adds noise, Theorem 1 derives a convenient general expression for the psychometric function. Discrimination data are often fitted with a Weibull function. Theorem 2 proves that the Weibull "slope" parameter, β, can be approximated by β(Noise) × β(Transducer), where β(Noise) is the β of the Weibull function that fits best to the cumulative noise distribution, and β(Transducer) depends on the transducer. We derive general expressions for β(Noise) and β(Transducer), from which we derive expressions for specific cases. One case that follows naturally from our general analysis is Pelli's finding that, when d' ∝ (Δx)^b, β ≈ β(Noise) × b. We also consider two limiting cases. Theorem 3 proves that, as sensitivity improves, 2AFC performance will usually approach that for a linear transducer, whatever the actual transducer; we show that this does not apply at signal levels where the transducer gradient is zero, which explains why it does not apply to contrast detection. Theorem 4 proves that, when the exponent of a power-function transducer approaches zero, 2AFC performance approaches that of a logarithmic transducer. We show that the power-function exponents of 0.4-0.5 fitted to suprathreshold contrast discrimination data are close enough to zero for the fitted psychometric function to be practically indistinguishable from that of a log transducer. Finally, Weibull β reflects the shape of the noise distribution, and we used our results to assess the recent claim that internal noise has higher kurtosis than a Gaussian. Our analysis of β for contrast discrimination suggests that, if internal noise is stimulus-independent, it has lower kurtosis than a Gaussian.
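Theorem 2's approximation can be checked numerically, as sketched below: a 2AFC observer with power-law transducer d' = (Δx)^b and additive Gaussian noise is simulated, a Weibull function is fitted, and the fitted β divided by b comes out roughly constant (the β(Noise) of the Gaussian). Stimulus range and exponents are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def p_correct(dx, b):
    """2AFC proportion correct: d' = dx**b with unit-variance Gaussian noise."""
    return norm.cdf(dx**b / np.sqrt(2))

def weibull_2afc(dx, alpha, beta):
    return 1.0 - 0.5 * np.exp(-((dx / alpha) ** beta))

dx = np.linspace(0.05, 3.0, 200)
for b in (0.5, 1.0, 2.0):
    (alpha, beta), _ = curve_fit(weibull_2afc, dx, p_correct(dx, b),
                                 p0=[1.0, b], bounds=([0.01, 0.1], [10, 10]))
    print(f"b = {b}: fitted beta = {beta:.2f}, beta/b = {beta / b:.2f}")
```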
NASA Astrophysics Data System (ADS)
Mat Jan, Nur Amalina; Shabri, Ani
2017-01-01
TL-moments approach has been used to identify the best-fitting distributions for the annual maximum streamflow series at seven stations in Johor, Malaysia. TL-moments with different trimming values are used to estimate the parameters of the selected distributions, namely the three-parameter lognormal (LN3) and Pearson Type III (P3) distributions. The main objective of this study is to derive the TL-moments (t1, 0), t1 = 1, 2, 3, 4 methods for the LN3 and P3 distributions. The performance of TL-moments (t1, 0), t1 = 1, 2, 3, 4 was compared with L-moments through Monte Carlo simulation and streamflow data from a station in Johor, Malaysia. The absolute error is used to test the influence of the TL-moments methods on the estimated probability distribution functions. The results show that the TL-moments (4, 0) variant, which trims the four smallest values from the conceptual sample, combined with the LN3 distribution was the most appropriate at most stations of the annual maximum streamflow series in Johor, Malaysia.
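For readers unfamiliar with TL-moments, the following sketch computes sample TL-moments with trimming (t1, t2) from the standard order-statistic formula of Elamir & Seheult (2003); the data are synthetic, not the Johor series:

```python
import numpy as np
from scipy.special import comb

def sample_tl_moment(x, r, t1=0, t2=0):
    """r-th sample TL-moment with trimming (t1, t2): t1 conceptually trims
    the smallest, t2 the largest observations (Elamir & Seheult, 2003)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    denom = comb(n, r + t1 + t2)
    total = 0.0
    for i in range(t1 + 1, n - t2 + 1):          # 1-based order-statistic index
        w = sum((-1) ** k * comb(r - 1, k)
                * comb(i - 1, r + t1 - 1 - k)
                * comb(n - i, t2 + k)
                for k in range(r))
        total += w * x[i - 1]
    return total / (r * denom)

# TL-moments (4, 0): the four smallest observations receive zero weight
rng = np.random.default_rng(7)
flows = rng.lognormal(mean=5.0, sigma=0.8, size=60)   # synthetic annual maxima
l1 = sample_tl_moment(flows, 1, t1=4, t2=0)
l2 = sample_tl_moment(flows, 2, t1=4, t2=0)
print(l1, l2, l2 / l1)
```

The ratios of such sample TL-moments are what get matched against the LN3 or P3 theoretical TL-moments when estimating their parameters.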
Computation of parton distributions from the quasi-PDF approach at the physical point
NASA Astrophysics Data System (ADS)
Alexandrou, Constantia; Bacchio, Simone; Cichy, Krzysztof; Constantinou, Martha; Hadjiyiannakou, Kyriakos; Jansen, Karl; Koutsou, Giannis; Scapellato, Aurora; Steffens, Fernanda
2018-03-01
We show the first results for parton distribution functions within the proton at the physical pion mass, employing the method of quasi-distributions. In particular, we present the matrix elements for the iso-vector combination of the unpolarized, helicity and transversity quasi-distributions, obtained with Nf = 2 twisted mass clover-improved fermions and a proton boosted with momentum |p→| = 0.83 GeV. The momentum smearing technique has been applied to improve the overlap with the boosted proton state. Moreover, we present the renormalized helicity matrix elements in the RI' scheme, following the non-perturbative renormalization prescription recently developed by our group.
Statistical distribution of time to crack initiation and initial crack size using service data
NASA Technical Reports Server (NTRS)
Heller, R. A.; Yang, J. N.
1977-01-01
Crack growth inspection data gathered during the service life of the C-130 Hercules airplane were used in conjunction with a crack propagation rule to estimate the distribution of crack initiation times and of initial crack sizes. A Bayesian statistical approach was used to calculate the fraction of undetected initiation times as a function of the inspection time and the reliability of the inspection procedure used.
Ciecior, Willy; Röhlig, Klaus-Jürgen; Kirchner, Gerald
2018-10-01
In the present paper, deterministic as well as first- and second-order probabilistic biosphere modeling approaches are compared. Furthermore, we study how sensitive the results are to the shape of the probability distribution function (empirical distribution functions versus fitted lognormal distributions) representing the aleatory uncertainty (also called variability) of a radioecological model parameter, as well as the role of interacting parameters. Differences in the shape of the output distributions for the biosphere dose conversion factor from first-order Monte Carlo uncertainty analysis using empirical and fitted lognormal distribution functions for input parameters suggest that a lognormal approximation is not always an adequate representation of the aleatory uncertainty of a radioecological parameter. To compare the impact of aleatory and epistemic parameter uncertainty on the biosphere dose conversion factor, the epistemic uncertainty is described by uncertain moments (mean, variance), while the distribution itself represents the aleatory uncertainty of the parameter. The results show that the solution space of second-order Monte Carlo simulation is much larger than that of first-order Monte Carlo simulation; the influence of the epistemic uncertainty of a radioecological parameter on the output is therefore much larger than that of its aleatory uncertainty. Parameter interactions are significant only in the upper percentiles of the distribution of results, and only in the region of the upper percentiles of the model parameters.
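The first- versus second-order distinction can be illustrated with a nested Monte Carlo sketch; the lognormal parameter, its uncertain moments, and the placeholder dose model are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

def second_order_mc(n_outer=1000, n_inner=1000):
    """Nested Monte Carlo sketch: the outer loop samples epistemically
    uncertain moments (mean, variance) of a lognormal transfer parameter;
    the inner loop samples its aleatory variability given those moments."""
    results = np.empty((n_outer, 3))
    for j in range(n_outer):
        # Epistemic layer: uncertain log-mean and log-SD (values hypothetical)
        mu = rng.normal(loc=0.0, scale=0.3)
        sigma = rng.uniform(0.4, 0.8)
        # Aleatory layer: parameter variability given (mu, sigma)
        param = rng.lognormal(mean=mu, sigma=sigma, size=n_inner)
        dose = 1.0e-9 * param          # placeholder radioecological model
        results[j] = np.percentile(dose, [50, 95, 99])
    return results

perc = second_order_mc()
# Spread across outer draws reflects the epistemic uncertainty of each percentile
print(perc.std(axis=0))
```

Collapsing the outer loop to a single fixed (mu, sigma) recovers a first-order analysis; the spread across outer draws is exactly what the epistemic layer adds.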
Distributed traffic signal control using fuzzy logic
NASA Technical Reports Server (NTRS)
Chiu, Stephen
1992-01-01
We present a distributed approach to traffic signal control, where the signal timing parameters at a given intersection are adjusted as functions of the local traffic condition and of the signal timing parameters at adjacent intersections. Thus, the signal timing parameters evolve dynamically using only local information to improve traffic flow. This distributed approach provides for a fault-tolerant, highly responsive traffic management system. The signal timing at an intersection is defined by three parameters: cycle time, phase split, and offset. We use fuzzy decision rules to adjust these three parameters based only on local information. The amount of change in the timing parameters during each cycle is limited to a small fraction of the current parameters to ensure smooth transition. We show the effectiveness of this method through simulation of the traffic flow in a network of controlled intersections.
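As a toy illustration of this kind of rule base (not the paper's actual rules or membership functions; all shapes and thresholds are invented), the sketch below adjusts one timing parameter from a normalized occupancy measure and caps the per-cycle change for smoothness:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership peaking at b; flat shoulder when a == b or b == c."""
    left = (x - a) / (b - a) if b > a else 1.0
    right = (c - x) / (c - b) if c > b else 1.0
    return float(np.clip(min(left, right), 0.0, 1.0))

def adjust_cycle_time(cycle, occupancy):
    """Toy fuzzy rule base for a single timing parameter (cycle time), with
    occupancy normalized to [0, 1].  Rules: LOW occupancy -> shorten,
    MEDIUM -> hold, HIGH -> lengthen; the defuzzified change is capped at
    +/-5% of the current cycle to keep transitions smooth."""
    low = tri(occupancy, 0.0, 0.0, 0.4)
    med = tri(occupancy, 0.2, 0.5, 0.8)
    high = tri(occupancy, 0.6, 1.0, 1.0)
    # Weighted average of the rule consequents (-1: shorten, 0: hold, +1: lengthen)
    change = (-1.0 * low + 0.0 * med + 1.0 * high) / (low + med + high)
    return cycle + np.clip(change, -1.0, 1.0) * 0.05 * cycle

print(adjust_cycle_time(90.0, occupancy=0.85))  # -> 94.5 s (full lengthening)
```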
Nuclear spin noise in the central spin model
NASA Astrophysics Data System (ADS)
Fröhling, Nina; Anders, Frithjof B.; Glazov, Mikhail
2018-05-01
We study theoretically the fluctuations of the nuclear spins in quantum dots employing the central spin model which accounts for the hyperfine interaction of the nuclei with the electron spin. These fluctuations are calculated both with an analytical approach using homogeneous hyperfine couplings (box model) and with a numerical simulation using a distribution of hyperfine coupling constants. The approaches are in good agreement. The box model serves as a benchmark with low computational cost that explains the basic features of the nuclear spin noise well. We also demonstrate that the nuclear spin noise spectra comprise a two-peak structure centered at the nuclear Zeeman frequency in high magnetic fields with the shape of the spectrum controlled by the distribution of the hyperfine constants. This allows for direct access to this distribution function through nuclear spin noise spectroscopy.
Filling the gap in functional trait databases: use of ecological hypotheses to replace missing data.
Taugourdeau, Simon; Villerd, Jean; Plantureux, Sylvain; Huguenin-Elie, Olivier; Amiaud, Bernard
2014-04-01
Functional trait databases are powerful tools in ecology, though most of them contain large amounts of missing values. The goal of this study was to test the effect of imputation methods on the evaluation of trait values at species level and on the subsequent calculation of functional diversity indices at community level using functional trait databases. Two simple imputation methods (average and median), two methods based on ecological hypotheses, and one multiple imputation method were tested using a large plant trait database, together with the influence of the percentage of missing data and differences between functional traits. At community level, the complete-case approach and three functional diversity indices calculated from grassland plant communities were included. At species level, one of the methods based on ecological hypotheses was more accurate than imputation with average or median values for all traits, but the multiple imputation method was superior for most traits. The method based on functional proximity between species was best for traits with an unbalanced distribution, while the method based on relationships between traits was best for traits with a balanced distribution. The ranking of the grassland communities by their functional diversity indices was not robust with the complete-case approach, even for low percentages of missing data. With the imputation methods based on ecological hypotheses, functional diversity indices could be computed with up to 30% missing data without affecting the ranking of grassland communities. The multiple imputation method performed well, but not better than single imputation based on ecological hypotheses and adapted to the distribution of the trait values for the functional identity and range of the communities. Ecological studies using functional trait databases have to deal with missing data using imputation methods that correspond to their specific needs and make the most of the information available in the databases. Within this framework, this study indicates the possibilities and limits of single imputation methods based on ecological hypotheses and concludes that they can be useful when studying the ranking of communities by their functional diversity indices.
NASA Astrophysics Data System (ADS)
Huang, Dong; Liu, Yangang
2014-12-01
Subgrid-scale variability is one of the main reasons why parameterizations are needed in large-scale models. Although some parameterizations have started to address the issue of subgrid variability by introducing a subgrid probability distribution function for relevant quantities, the spatial structure has typically been ignored, so subgrid-scale interactions cannot be accounted for physically. Here we present a new statistical-physics-like approach whereby the spatial autocorrelation function is used to physically capture the net effects of subgrid cloud interaction with radiation. The new approach faithfully reproduces the Monte Carlo 3D simulation results at several orders of magnitude lower computational cost, allowing for more realistic representation of cloud radiation interactions in large-scale models.
Probabilistic Modeling of Aircraft Trajectories for Dynamic Separation Volumes
NASA Technical Reports Server (NTRS)
Lewis, Timothy A.
2016-01-01
With a proliferation of new and unconventional vehicles and operations expected in the future, the ab initio airspace design will require new approaches to trajectory prediction for separation assurance and other air traffic management functions. This paper presents an approach to probabilistic modeling of the trajectory of an aircraft when its intent is unknown. The approach uses a set of feature functions to constrain a maximum entropy probability distribution based on a set of observed aircraft trajectories. This model can be used to sample new aircraft trajectories to form an ensemble reflecting the variability in an aircraft's intent. The model learning process ensures that the variability in this ensemble reflects the behavior observed in the original data set. Computational examples are presented.
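A minimal sketch of the maximum-entropy construction described here: the distribution over a discrete set of candidate trajectories takes the form p(x) ∝ exp(λ·f(x)), and λ is fitted so that model feature expectations match the observed means. The features and data below are hypothetical, and plain gradient ascent stands in for whatever optimizer the paper uses:

```python
import numpy as np

def fit_maxent(features, observed_mean, lr=0.1, steps=2000):
    """Fit lambda so that E_p[f] matches the observed feature means, where
    p(x_i) is proportional to exp(lambda . f(x_i)) over a discrete set of
    candidate trajectories (rows of `features`)."""
    lam = np.zeros(features.shape[1])
    for _ in range(steps):
        logits = features @ lam
        p = np.exp(logits - logits.max())
        p /= p.sum()
        model_mean = p @ features
        lam += lr * (observed_mean - model_mean)   # moment-matching gradient
    return lam, p

# Hypothetical features per candidate trajectory: [path length, mean turn rate]
rng = np.random.default_rng(3)
feats = rng.normal(size=(500, 2))
obs_mean = np.array([0.2, -0.1])                   # from observed trajectories
lam, p = fit_maxent(feats, obs_mean)
ensemble = rng.choice(len(feats), size=10, p=p)    # sampled trajectory ensemble
```

Sampling from the fitted p yields the ensemble reflecting the variability in intent that the abstract describes.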
Simulation and optimization of faceted structure for illumination
NASA Astrophysics Data System (ADS)
Liu, Lihong; Engel, Thierry; Flury, Manuel
2016-04-01
The re-direction of incoherent light using a surface containing only facets with specific angular values is proposed. A photometric approach is adopted since the size of each facet is large compared with the wavelength. A reflective configuration is employed to avoid material dispersion problems. The irradiance distribution of the reflected beam is determined by the angular position of each facet. To obtain a specific irradiance distribution, the angular position of each facet is optimized using the Zemax OpticStudio 15 software. A detector is placed in the direction perpendicular to the reflected beam. Based on the incoherent irradiance distribution on the detector, a merit function is defined to drive the optimization process. The two-dimensional angular position of each facet is defined as a variable which is optimized within a specified range. Because the merit function needs to be updated, a macro program updates this function within Zemax. To reduce the complexity of manual operation, an automatic optimization approach is established: Zemax performs the optimization task and sends the irradiance data back to Matlab for further analysis. Several simulation results are given to verify the optimization method, and they are compared to those obtained with the LightTools software.
Zounemat-Kermani, Mohammad; Ramezani-Charmahineh, Abdollah; Adamowski, Jan; Kisi, Ozgur
2018-06-13
Chlorination, the basic treatment for drinking water sources, is widely used for water disinfection and pathogen elimination in water distribution networks. Proper prediction of chlorine consumption is therefore of great importance for water distribution network performance. In this respect, data mining techniques, which can discover the relationships between dependent and independent variables, can be considered as alternatives to conventional methods (e.g., numerical methods). This study examines the applicability of three data-mining methods for predicting chlorine levels in four water distribution networks: ANNs (artificial neural networks, including the multi-layer perceptron neural network, MLPNN, and the radial basis function neural network, RBFNN), SVM (support vector machine), and CART (classification and regression tree). These were used to estimate the concentration of residual chlorine in distribution networks for three villages in Kerman Province, Iran. Produced water (flow), chlorine consumption, and residual chlorine were collected daily for 3 years. An assessment of the studied models using several statistical criteria (NSC, RMSE, R², and SEP) indicated that, in general, MLPNN has the greatest capability for predicting chlorine levels, followed by CART, SVM, and RBFNN. The weaker performance of the data-driven methods in some of the water distribution networks could be attributed to improper chlorination management rather than to the methods' capability.
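As a sketch of the MLPNN approach (using scikit-learn rather than whatever toolchain the authors used; the feature set and the synthetic target are hypothetical stand-ins for the flow and consumption records):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import r2_score, mean_squared_error

# Hypothetical daily records: [produced flow (m3/day), chlorine use (kg/day)]
rng = np.random.default_rng(0)
X = rng.uniform([200.0, 1.0], [900.0, 6.0], size=(1000, 2))
# Synthetic target: residual chlorine (mg/L) as a fraction of applied dose
y = 0.2 * (X[:, 1] / X[:, 0]) * 1e3 + rng.normal(0.0, 0.2, 1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                                   random_state=1))
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print("R2 =", r2_score(y_te, pred),
      "RMSE =", mean_squared_error(y_te, pred) ** 0.5)
```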
Dynamical Mass Measurements of Contaminated Galaxy Clusters Using Support Distribution Machines
NASA Astrophysics Data System (ADS)
Ntampaka, Michelle; Trac, Hy; Sutherland, Dougal; Fromenteau, Sebastien; Poczos, Barnabas; Schneider, Jeff
2018-01-01
We study dynamical mass measurements of galaxy clusters contaminated by interlopers and show that a modern machine learning (ML) algorithm can predict masses by better than a factor of two compared to a standard scaling relation approach. We create two mock catalogs from Multidark's publicly available N-body MDPL1 simulation, one with perfect galaxy cluster membership information and the other where a simple cylindrical cut around the cluster center allows interlopers to contaminate the clusters. In the standard approach, we use a power-law scaling relation to infer cluster mass from galaxy line-of-sight (LOS) velocity dispersion. Assuming perfect membership knowledge, this unrealistic case produces a wide fractional mass error distribution, with a width E=0.87. Interlopers introduce additional scatter, significantly widening the error distribution further (E=2.13). We employ the support distribution machine (SDM) class of algorithms to learn from distributions of data to predict single values. Applied to distributions of galaxy observables such as LOS velocity and projected distance from the cluster center, SDM yields better than a factor-of-two improvement (E=0.67) for the contaminated case. Remarkably, SDM applied to contaminated clusters is better able to recover masses than even the scaling relation approach applied to uncontaminated clusters. We show that the SDM method more accurately reproduces the cluster mass function, making it a valuable tool for employing cluster observations to evaluate cosmological models.
Sampling probability distributions of lesions in mammograms
NASA Astrophysics Data System (ADS)
Looney, P.; Warren, L. M.; Dance, D. R.; Young, K. C.
2015-03-01
One approach to image perception studies in mammography using virtual clinical trials involves the insertion of simulated lesions into normal mammograms. To facilitate this, a method has been developed that allows for sampling of lesion positions across the cranio-caudal and medio-lateral radiographic projections in accordance with measured distributions of real lesion locations. 6825 mammograms from our mammography image database were segmented to find the breast outline. The outlines were averaged and smoothed to produce an average outline for each laterality and radiographic projection. Lesions in 3304 mammograms with malignant findings were mapped onto a standardised breast image corresponding to the average breast outline using piecewise affine transforms. A four-dimensional probability distribution function was found from the lesion locations in the cranio-caudal and medio-lateral radiographic projections for calcification and non-calcification lesions. Lesion locations sampled from this probability distribution function were mapped onto individual mammograms using a piecewise affine transform which transforms the average outline to the outline of the breast in the mammogram. The four-dimensional probability distribution function was validated by comparing it to the two-dimensional distributions found by considering each radiographic projection and laterality independently. The correlation of the locations of the lesions sampled from the four-dimensional probability distribution function across radiographic projections was shown to match that of the original mapped lesion locations. The current system has been implemented as a web service on a server using the Python Django framework. The server performs the sampling and the mapping and returns the results in JavaScript Object Notation (JSON) format.
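The sampling step can be sketched as drawing from a binned empirical pdf; this 2D toy stands in for the paper's four-dimensional distribution, and the data are synthetic:

```python
import numpy as np

def sample_lesion_positions(locations, n_samples, bins=50, rng=None):
    """Sample new positions from a binned empirical pdf of mapped lesion
    locations on a standardised image (2D sketch; the paper couples the
    CC and ML projections in a 4D distribution)."""
    if rng is None:
        rng = np.random.default_rng()
    hist, xedges, yedges = np.histogram2d(locations[:, 0], locations[:, 1],
                                          bins=bins)
    p = (hist / hist.sum()).ravel()
    idx = rng.choice(p.size, size=n_samples, p=p)
    ix, iy = np.unravel_index(idx, hist.shape)
    # Jitter uniformly within each selected bin
    x = xedges[ix] + rng.random(n_samples) * np.diff(xedges)[ix]
    y = yedges[iy] + rng.random(n_samples) * np.diff(yedges)[iy]
    return np.column_stack([x, y])

# Hypothetical mapped lesion locations on the average outline
rng = np.random.default_rng(5)
mapped = rng.normal(loc=[0.4, 0.6], scale=0.1, size=(3304, 2))
new_positions = sample_lesion_positions(mapped, 100, rng=rng)
```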
VizieR Online Data Catalog: Tracers of the Milky Way mass (Bratek+, 2014)
NASA Astrophysics Data System (ADS)
Bratek, L.; Sikora, S.; Jalocha, J.; Kutschera, M.
2013-11-01
We model the phase-space distribution of the kinematic tracers using general, smooth distribution functions to derive a conservative lower bound on the total mass within ~150-200 kpc. By approximating the potential as Keplerian, the phase-space distribution can be simplified to that of a smooth distribution of energies and eccentricities. Our approach naturally allows for calculating moments of the distribution function, such as the radial profile of the orbital anisotropy. We systematically construct a family of phase-space functions with the resulting radial velocity dispersion overlapping with the one obtained using data on radial motions of distant kinematic tracers, while making no assumptions about the density of the tracers and the velocity anisotropy parameter β regarded as a function of the radial variable. While there is no apparent upper bound for the Milky Way mass, at least as long as only the radial motions are concerned, we find a sharp lower bound for the mass that is small. In particular, a mass value of 2.4×10^11 M⊙, obtained in the past for lower and intermediate radii, is still consistent with the dispersion profile at larger radii. Compared with much greater mass values in the literature, this result shows that determining the Milky Way mass is strongly model-dependent. We expect a similar reduction of mass estimates in models assuming more realistic mass profiles. (1 data file).
Probabilistic track coverage in cooperative sensor networks.
Ferrari, Silvia; Zhang, Guoxian; Wettergren, Thomas A
2010-12-01
The quality of service of a network performing cooperative track detection is represented by the probability of obtaining multiple elementary detections over time along a target track. Recently, two different lines of research, namely, distributed-search theory and geometric transversals, have been used in the literature for deriving the probability of track detection as a function of random and deterministic sensors' positions, respectively. In this paper, we prove that these two approaches are equivalent under the same problem formulation. Also, we present a new performance function that is derived by extending the geometric-transversal approach to the case of random sensors' positions using Poisson flats. As a result, a unified approach for addressing track detection in both deterministic and probabilistic sensor networks is obtained. The new performance function is validated through numerical simulations and is shown to bring about considerable computational savings for both deterministic and probabilistic sensor networks.
Interpretation of heavy rainfall spatial distribution in mountain watersheds by copula functions
NASA Astrophysics Data System (ADS)
Grossi, Giovanna; Balistrocchi, Matteo
2016-04-01
The spatial distribution of heavy rainfalls can strongly influence flood dynamics in mountain watersheds, depending on their geomorphologic features, namely orography, slope, land covers and soil types. Unfortunately, the direct observation of rainfall fields by meteorological radar is very difficult in this situation, so that interpolation of rain gauge observations or downscaling of meteorological predictions must be adopted to derive spatial rainfall distributions. To do so, various stochastic and physically based approaches are already available, even though the first one is the most familiar in hydrology. Indeed, Kriging interpolation procedures represent very popular techniques to face this problem by means of a stochastic approach. A certain number of restrictive assumptions and parameter uncertainties however affects Kriging. Many alternative formulations and additional procedures were therefore developed during the last decades. More recently, copula functions (Joe, 1997; Nelsen, 2006; Salvadori et al. 2007) were suggested to provide a more straightforward solution to carry out spatial interpolations of hydrologic variables (Bardossy & Pegram, 2009). Main advantages lie in the possibility of i) assessing the dependence structure relating to rainfall variables independently of marginal distributions, ii) expressing the association degree through rank correlation coefficients, iii) implementing marginal distributions and copula functions belonging to different models to develop complex joint distribution functions, iv) verifying the model reliability by effective statistical tests (Genest et al., 2009). A suitable case study to verify these potentialities is provided by the Taro River, a right-bank tributary of the Po River (northern Italy), whose contributing area amounts to about 2000 km². The mountain catchment area is divided into two similar watersheds, so that spatial distribution is crucial in extreme flood event generation. A fairly dense hydro-meteorological network, consisting of about 30 rain gauges and 10 hydrometers, monitors this medium-size watershed. A decade of rainfall-runoff event observations is available. Severe rainfall events were identified with reference to a main rain gauge station, by using an interevent time definition and a depth threshold. Rainfall depths were thus derived and the spatial variability of their association degree was represented by using the Kendall coefficient. A unique copula model based on the Gumbel copula function was finally found to be suitable to represent the dependence structure relating rainfall depths observed at distinct rain gauges. Bardossy A., Pegram G. (2009), Copula based multisite model for daily precipitation simulation, Hydrol. Earth Syst. Sci., 13, 2299-2314. Genest C., Rémillard B., Beaudoin D. (2009), Goodness-of-fit tests for copulas: a review and a power study, Insur. Math. Econ., 44(2), 199-213. Joe H. (1997), Multivariate models and dependence concepts, Chapman and Hall, London. Nelsen R. B. (2006), An introduction to copulas, second ed., Springer, New York. Salvadori G., De Michele C., Kottegoda N. T., Rosso R. (2007), Extremes in nature: an approach using copulas, Springer, Dordrecht, The Netherlands.
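A minimal sketch of the Gumbel-copula workflow described here: estimate Kendall's tau between paired gauge depths, invert the Gumbel relation tau = 1 - 1/theta, and evaluate the copula. The paired depths are synthetic, not the Taro River data:

```python
import numpy as np
from scipy.stats import kendalltau

def gumbel_theta_from_tau(tau):
    """Moment-style estimate of the Gumbel copula parameter from
    Kendall's tau via tau = 1 - 1/theta (valid for tau in [0, 1))."""
    return 1.0 / (1.0 - tau)

def gumbel_copula_cdf(u, v, theta):
    """Gumbel copula C(u, v) = exp(-[(-ln u)^t + (-ln v)^t]^(1/t))."""
    return np.exp(-(((-np.log(u)) ** theta
                     + (-np.log(v)) ** theta) ** (1.0 / theta)))

# Hypothetical paired event depths (mm) at two rain gauges; a shared
# component induces positive dependence
rng = np.random.default_rng(11)
z = rng.exponential(20.0, 300)
depths_a = z + rng.exponential(5.0, 300)
depths_b = z + rng.exponential(5.0, 300)
tau, _ = kendalltau(depths_a, depths_b)
theta = gumbel_theta_from_tau(tau)
print(tau, theta, gumbel_copula_cdf(0.9, 0.9, theta))
```

Working through Kendall's tau is what lets the dependence structure be assessed independently of the marginal distributions, as point i) above notes.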
NASA Astrophysics Data System (ADS)
Ramirez-Lopez, L.; van Wesemael, B.; Stevens, A.; Doetterl, S.; Van Oost, K.; Behrens, T.; Schmidt, K.
2012-04-01
Soil Organic Carbon (SOC) represents a key component of the global C cycle and has an important influence on the global CO2 fluxes between the terrestrial biosphere and the atmosphere. In agricultural landscapes, SOC inventories are important since soil management practices have a strong influence on CO2 fluxes and SOC stocks. However, there is a lack of accurate and cost-effective methods for producing high-spatial-resolution SOC information. Our work therefore focuses on the development of a three-dimensional modeling approach for SOC monitoring in agricultural fields. The study area comprises ~420 km² and includes 4 of the 5 agro-geological regions of the Grand-Duchy of Luxembourg. The soil dataset consists of 172 profiles (1033 samples) which were not sampled specifically for this study; it combines profile samples collected in previous soil surveys with soil profiles sampled for other research purposes. The proposed strategy comprises two main steps. In the first step, the SOC distribution within each profile (vertical distribution) is modeled: depth functions are fitted to summarize the information content of the profile, and with these functions the SOC can be interpolated at any depth within the profiles. The second step involves the use of contextual terrain (ConMap) features (Behrens et al., 2010). These features are based on the differences in elevation between a given point location in the landscape and its circular neighbourhoods at a set of different radii. One of the main advantages of this approach is that it allows the integration of several spatial scales (e.g., local and regional) for soil spatial analysis. In this work the ConMap features are derived from a digital elevation model of the area and are used as predictors for spatial modeling of the parameters of the depth functions fitted in the previous step. In this poster we present preliminary results in which we analyze: i. the use of different depth functions, ii. the use of different machine learning approaches for modeling the parameters of the fitted depth functions from the ConMap features, and iii. the influence of different spatial scales on the SOC profile distribution variability. Keywords: 3D modeling, Digital soil mapping, Depth functions, Terrain analysis. Reference: Behrens, T., Schmidt, K., Zhu, A.X., Scholten, T. 2010. The ConMap approach for terrain-based digital soil mapping. European Journal of Soil Science, v. 61, p. 133-143.
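A sketch of the first step, assuming one common choice of depth function (an exponential decline toward a subsoil baseline; the poster compares several alternatives, and these profile numbers are invented):

```python
import numpy as np
from scipy.optimize import curve_fit

def soc_depth(z, c0, c1, k):
    """Exponential depth function: SOC declines from the surface toward a
    subsoil baseline c1 with decay rate k."""
    return c1 + c0 * np.exp(-k * z)

# Hypothetical profile: depth of horizon midpoints (cm) and SOC (g/kg)
z = np.array([5.0, 15.0, 30.0, 50.0, 80.0, 120.0])
soc = np.array([28.0, 19.0, 11.0, 6.0, 4.0, 3.5])
params, _ = curve_fit(soc_depth, z, soc, p0=[25.0, 3.0, 0.05])

# SOC can now be interpolated at any depth within the profile, and the
# fitted (c0, c1, k) become the targets for the terrain-based predictors
print(soc_depth(40.0, *params))
```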
NASA Astrophysics Data System (ADS)
Tvaskis, V.; Tvaskis, A.; Niculescu, I.; Abbott, D.; Adams, G. S.; Afanasev, A.; Ahmidouch, A.; Angelescu, T.; Arrington, J.; Asaturyan, R.; Avery, S.; Baker, O. K.; Benmouna, N.; Berman, B. L.; Biselli, A.; Blok, H. P.; Boeglin, W. U.; Bosted, P. E.; Brash, E.; Breuer, H.; Chang, G.; Chant, N.; Christy, M. E.; Connell, S. H.; Dalton, M. M.; Danagoulian, S.; Day, D.; Dodario, T.; Dunne, J. A.; Dutta, D.; El Khayari, N.; Ent, R.; Fenker, H. C.; Frolov, V. V.; Gaskell, D.; Garrow, K.; Gilman, R.; Gueye, P.; Hafidi, K.; Hinton, W.; Holt, R. J.; Horn, T.; Huber, G. M.; Jackson, H.; Jiang, X.; Jones, M. K.; Joo, K.; Kelly, J. J.; Keppel, C. E.; Kuhn, J.; Kinney, E.; Klein, A.; Kubarovsky, V.; Liang, Y.; Lolos, G.; Lung, A.; Mack, D.; Malace, S.; Markowitz, P.; Mbianda, G.; McGrath, E.; Mckee, D.; Meekins, D. G.; Mkrtchyan, H.; Napolitano, J.; Navasardyan, T.; Niculescu, G.; Nozar, M.; Ostapenko, T.; Papandreou, Z.; Potterveld, D.; Reimer, P. E.; Reinhold, J.; Roche, J.; Rock, S. E.; Schulte, E.; Segbefia, E.; Smith, C.; Smith, G. R.; Stoler, P.; Tadevosyan, V.; Tang, L.; Telfeyan, J.; Todor, L.; Ungaro, M.; Uzzle, A.; Vidakovic, S.; Villano, A.; Vulcan, W. F.; Warren, G.; Wesselmann, F.; Wojtsekhowski, B.; Wood, S. A.; Yan, C.; Zihlmann, B.
2018-04-01
Structure functions, as measured in lepton-nucleon scattering, have proven to be very useful in studying the partonic dynamics within the nucleon. However, it is experimentally difficult to separately determine the longitudinal and transverse structure functions, and consequently there are substantially less data available in particular for the longitudinal structure function. Here, we present separated structure functions for hydrogen and deuterium at low four-momentum transfer squared, Q² < 1 GeV², and compare them with parton distribution parametrization and kT factorization approaches. While differences are found, the parametrizations generally agree with the data, even at the very low-Q² scale of the data. The deuterium data show a smaller longitudinal structure function and a smaller ratio of longitudinal to transverse cross section, R, than the proton. This suggests either an unexpected difference in R for the proton and the neutron or a suppression of the gluonic distribution in nuclei.
Properties of field functionals and characterization of local functionals
NASA Astrophysics Data System (ADS)
Brouder, Christian; Dang, Nguyen Viet; Laurent-Gengoux, Camille; Rejzner, Kasia
2018-02-01
Functionals (i.e., functions of functions) are widely used in quantum field theory and solid-state physics. In this paper, functionals are given a rigorous mathematical framework and their main properties are described. The choice of the proper space of test functions (smooth functions) and of the relevant concept of differential (Bastiani differential) are discussed. The relation between the multiple derivatives of a functional and the corresponding distributions is described in detail. It is proved that, in a neighborhood of every test function, a smooth functional has uniformly compact support and the order of the corresponding distribution is uniformly bounded. Relying on a recent work by Dabrowski, several spaces of functionals are furnished with a complete and nuclear topology. In view of physical applications, it is shown that most formal manipulations can be given a rigorous meaning. A new concept of local functionals is proposed and two characterizations of them are given: the first uses the additivity (or Hammerstein) property, the second is a variant of Peetre's theorem. Finally, the first step of a cohomological approach to quantum field theory is carried out by proving a global Poincaré lemma and defining multi-vector fields and graded functionals within our framework.
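For orientation, the additivity (Hammerstein) condition mentioned above can be stated schematically as follows (a paraphrase of the standard form, with the precise hypotheses in the paper):

```latex
% Additivity (Hammerstein) property characterizing local functionals:
% whenever the test functions f and h have disjoint supports,
\[
  F(f + g + h) \;=\; F(f + g) \;-\; F(g) \;+\; F(g + h),
  \qquad \operatorname{supp} f \cap \operatorname{supp} h = \emptyset,
\]
% and a local functional is then an integral of a function of finitely
% many derivatives of the field:
\[
  F(\varphi) \;=\; \int \omega\!\left(x, \varphi(x), \partial\varphi(x),
  \dots, \partial^{k}\varphi(x)\right) \mathrm{d}x .
\]
```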
Balderson, M J; Brown, D W; Quirk, S; Ghasroddashti, E; Kirkby, C
2012-07-01
Clinical outcome studies with clear and objective endpoints are necessary to make informed radiotherapy treatment decisions. Commonly, clinical outcomes are established after lengthy and costly clinical trials are performed and the data are analyzed and published. One of the challenges with obtaining meaningful data from clinical trials is that, by the time the information reaches the medical profession, the results may be less clinically relevant than when the trial began. An alternative approach is to estimate clinical outcomes through patient population modeling. We are developing a mathematical tool that uses Monte Carlo techniques to simulate variations in the planned and delivered dose distributions of prostate patients receiving radiotherapy. Ultimately, our simulation will calculate a distribution of tumor control probabilities (TCPs) for a population of patients treated under a given protocol. Such distributions can serve as a metric for comparing different treatment modalities, planning and setup approaches, and machine parameter settings or tolerances with respect to outcomes for broad patient populations. It may also help researchers anticipate the differences one might expect to find before actually conducting a clinical trial. As a first step, and as the focus of this abstract, we asked: "Can a population of dose distributions of prostate patients be accurately modeled by a set of randomly generated Gaussian functions?" Our results demonstrate that a set of randomly generated Gaussian functions can simulate a distribution of prostate patient dose distributions.
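A stripped-down sketch of the simulation idea: perturb a planned dose with random Gaussian variations and collect the resulting TCP distribution. The Poisson TCP model, radiosensitivity, clonogen number, and perturbation scales are assumptions for illustration, not the authors' settings:

```python
import numpy as np

rng = np.random.default_rng(2024)

def tcp_poisson(dose_per_voxel, alpha=0.3, clonogens_per_voxel=1.0e6):
    """Poisson TCP for a heterogeneous dose distribution: the product of
    per-voxel control probabilities (simple linear cell-kill sketch)."""
    surviving = clonogens_per_voxel * np.exp(-alpha * dose_per_voxel)
    return np.exp(-surviving.sum())

n_patients, n_voxels = 500, 1000
planned = 76.0                                  # Gy, hypothetical prescription
tcps = np.empty(n_patients)
for i in range(n_patients):
    # Random Gaussian perturbations stand in for planning/setup variation
    shift = rng.normal(0.0, 1.5)                # systematic offset (Gy)
    noise = rng.normal(0.0, 1.0, n_voxels)      # voxel-level blur (Gy)
    tcps[i] = tcp_poisson(planned + shift + noise)
print(np.percentile(tcps, [5, 50, 95]))         # population TCP distribution
```

The resulting percentile spread is the kind of population-level metric the abstract proposes for comparing protocols.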
Hierarchical organization of functional connectivity in the mouse brain: a complex network approach.
Bardella, Giampiero; Bifone, Angelo; Gabrielli, Andrea; Gozzi, Alessandro; Squartini, Tiziano
2016-08-18
This paper represents a contribution to the study of brain functional connectivity from the perspective of complex networks theory. More specifically, we apply graph theoretical analyses to provide evidence of the modular structure of the mouse brain and to shed light on its hierarchical organization. We propose a novel percolation analysis and we apply our approach to the analysis of a resting-state functional MRI data set from 41 mice. This approach reveals a robust hierarchical structure of modules persistent across different subjects. Importantly, we test this approach against a statistical benchmark (or null model) which constrains only the distributions of empirical correlations. Our results unambiguously show that the hierarchical character of the mouse brain modular structure is not trivially encoded into this lower-order constraint. Finally, we investigate the modular structure of the mouse brain by computing the Minimal Spanning Forest, a technique that identifies subnetworks characterized by the strongest internal correlations. This approach represents a faster alternative to other community detection methods and provides a means to rank modules on the basis of the strength of their internal edges.
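A rough sketch of the spanning-tree idea (a single minimum spanning tree over all regions, rather than the paper's Minimal Spanning Forest): converting correlations to distances makes the tree keep the strongest internal correlations. The data and module structure are synthetic:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def strongest_correlation_backbone(ts):
    """Compute the correlation matrix of regional time series, convert it
    to a distance (1 - r), and take the minimum spanning tree of that
    distance graph, which retains the strongest correlations."""
    r = np.corrcoef(ts)
    dist = 1.0 - r
    np.fill_diagonal(dist, 0.0)               # no self-loops
    mst = minimum_spanning_tree(dist)          # sparse matrix of kept edges
    edges = np.transpose(mst.nonzero())
    return edges, r

# Hypothetical resting-state signals: 20 regions x 300 time points,
# with one injected correlated module
rng = np.random.default_rng(8)
ts = rng.normal(size=(20, 300))
ts[5:10] += 0.8 * rng.normal(size=300)
edges, r = strongest_correlation_backbone(ts)
print(len(edges), "edges")                     # n_regions - 1 = 19
```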
On soft clipping of Zernike moments for deblurring and enhancement of optical point spread functions
NASA Astrophysics Data System (ADS)
Becherer, Nico; Jödicke, Hanna; Schlosser, Gregor; Hesser, Jürgen; Zeilfelder, Frank; Männer, Reinhard
2006-02-01
Blur and noise originating from the physical imaging process degrade microscope data. Accurate deblurring techniques require, however, an accurate estimate of the underlying point-spread function (PSF). A good representation of PSFs can be achieved with Zernike polynomials, since they offer a compact representation in which low-order coefficients represent typical aberrations of optical wavefronts while noise is concentrated in higher-order coefficients. A quantitative description of the (Gaussian) noise distribution over the Zernike moments of various orders is given, which is the basis for the new soft clipping approach for denoising PSFs. Instead of discarding moments beyond a certain order, those Zernike moments that are more sensitive to noise are dampened according to the measured distribution and the present noise model. Further, a new scheme to combine experimental and theoretical PSFs in Zernike space is presented. In our experimental reconstructions using the improved PSF, the correlation between reconstructed and original volume is raised by 15% in average cases and by up to 85% for thin fibre structures, compared to reconstructions with a non-improved PSF. Finally, we demonstrate the advantages of our approach on 3D images from confocal microscopes by generating visually improved volumes. Additionally, we present a method to render the reconstructed results using a new, almost artifact-free volume rendering method based on a shear-warp technique, wavelet data encoding techniques, and a recent approach to approximate the gray value distribution by a super-spline model.
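Schematically, soft clipping replaces a hard cut-off on Zernike order with a smooth dampening weight. The logistic profile below is an assumption for illustration only; the paper derives the dampening from the measured Gaussian noise distribution:

```python
import numpy as np

def soft_clip(moments, orders, n0=8.0, softness=2.0):
    """Soft clipping sketch: rather than truncating Zernike moments above a
    cut-off order, attenuate them with a smooth logistic weight so that
    noise-dominated high orders are dampened, not discarded.
    n0 and softness are illustrative, not derived from a noise model."""
    w = 1.0 / (1.0 + np.exp((np.asarray(orders, dtype=float) - n0) / softness))
    return np.asarray(moments) * w

# Hypothetical moment magnitudes indexed by radial order, plus noise
rng = np.random.default_rng(4)
orders = np.arange(15)
moments = np.exp(-0.3 * orders) + 0.02 * rng.normal(size=15)
cleaned = soft_clip(moments, orders)
```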
2015-03-26
albeit powerful, method available for exploring CAS. As discussed above, there are many useful mathematical tools appropriate for CAS modeling. Agent-based...cells, telephone calls, and sexual contacts approach power-law distributions. [48] Networks in general are robust against random failures, but targeted failures can have powerful effects – provided the targeter has a good understanding of the network structure. Some argue (convincingly) that all
Hyde, M W; Schmidt, J D; Havrilla, M J
2009-11-23
A polarimetric bidirectional reflectance distribution function (pBRDF), based on geometrical optics, is presented. The pBRDF incorporates a visibility (shadowing/masking) function and a Lambertian (diffuse) component, which distinguishes it from other geometrical optics pBRDFs in the literature. It is shown that these additions keep the pBRDF bounded (and thus a more realistic physical model) as the angle of incidence or observation approaches grazing, and better able to model the behavior of light scattered from rough, reflective surfaces. In this paper, the theoretical development of the pBRDF is shown and discussed. Simulation results for a rough, perfectly reflecting surface obtained using an exact electromagnetic solution and experimental Mueller matrix results for two rough metallic samples are presented to validate the pBRDF.
Statistical mechanics of high-density bond percolation
NASA Astrophysics Data System (ADS)
Timonin, P. N.
2018-05-01
High-density (HD) percolation describes the percolation of specific κ-clusters, which are the compact sets of sites each connected to at least κ nearest filled sites. It takes place in the classical patterns of independently distributed sites or bonds in which the ordinary percolation transition also exists. Hence, the study of a series of κ-type HD percolations amounts to the description of the classical clusters' structure, for which κ-clusters constitute κ-cores nested one into another. Such data are needed for the description of a number of physical, biological, and information properties of complex systems on random lattices, graphs, and networks. They range from magnetic properties of semiconductor alloys to anomalies in supercooled water and clustering in biological and social networks. Here we present a statistical mechanics approach to study HD bond percolation on an arbitrary graph. It is shown that the generating function for the κ-clusters' size distribution can be obtained from the partition function of a specific q-state Potts-Ising model in the q → 1 limit. Using this approach we find exact κ-clusters' size distributions for the Bethe lattice and the Erdős-Rényi graph. The application of the method to Euclidean lattices is also discussed.
Yu, Peng; Shaw, Chad A
2014-06-01
The Dirichlet-multinomial (DMN) distribution is a fundamental model for multicategory count data with overdispersion. This distribution has many uses in bioinformatics, including applications to metagenomics data, transcriptomics, and alternative splicing. The DMN distribution reduces to the multinomial distribution when the overdispersion parameter ψ is 0. Unfortunately, numerical computation of the DMN log-likelihood function by conventional methods results in instability in the neighborhood of ψ = 0. An alternative formulation circumvents this instability, but it leads to long runtimes that make it impractical for the large count data common in bioinformatics. We have developed a new method for computation of the DMN log-likelihood that solves the instability problem without incurring long runtimes. The new approach is composed of a novel formula and an algorithm to extend its applicability. Our numerical experiments show that this new method improves both the accuracy of log-likelihood evaluation and the runtime, the latter by several orders of magnitude, especially in the high-count situations that are common in deep sequencing data. Using real metagenomic data, our method achieves a manyfold runtime improvement. Our method increases the feasibility of using the DMN distribution to model many high-throughput problems in bioinformatics. We have included in our work an R package giving access to this method and a vignette applying this approach to metagenomic data.
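For reference, the DMN log-likelihood is numerically tame when written with log-gamma functions; the sketch below shows this generic form in Python (the paper's contribution is a faster, stable algorithm and an R package, neither of which is reproduced here):

```python
import numpy as np
from scipy.special import gammaln

def dmn_loglik(counts, alpha):
    """Dirichlet-multinomial log-likelihood for one vector of category
    counts, up to the multinomial coefficient (which is constant in alpha
    and so irrelevant for estimating the parameters)."""
    counts = np.asarray(counts, dtype=float)
    alpha = np.asarray(alpha, dtype=float)
    n, a = counts.sum(), alpha.sum()
    return (gammaln(a) - gammaln(n + a)
            + np.sum(gammaln(counts + alpha) - gammaln(alpha)))

# Small metagenomics-style example: taxon counts in one sample
counts = np.array([120, 30, 5, 0, 845])
alpha = np.full(5, 2.0)
print(dmn_loglik(counts, alpha))
```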
Exact collisional moments for plasma fluid theories
NASA Astrophysics Data System (ADS)
Pfefferlé, D.; Hirvijoki, E.; Lingam, M.
2017-04-01
The velocity-space moments of the often troublesome nonlinear Landau collision operator are expressed exactly in terms of multi-index Hermite-polynomial moments of distribution functions. The collisional moments are shown to be generated by derivatives of two well-known functions, namely, the Rosenbluth-MacDonald-Judd-Trubnikov potentials for a Gaussian distribution. The resulting formula has a nonlinear dependency on the relative mean flow of the colliding species normalised to the root-mean-square of the corresponding thermal velocities and a bilinear dependency on densities and higher-order velocity moments of the distribution functions, with no restriction on temperature, flow, or mass ratio of the species. The result can be applied to both the classic transport theory of plasmas that relies on the Chapman-Enskog method, as well as to derive collisional fluid equations that follow Grad's moment approach. As an illustrative example, we provide the collisional ten-moment equations with exact conservation laws for momentum- and energy-transfer rates.
NASA Technical Reports Server (NTRS)
Morris, Robert A.
1990-01-01
The emphasis is on defining a set of communicating processes for intelligent spacecraft secondary power distribution and control. The computer hardware and software implementation platform for this work is that of the ADEPTS project at the Johnson Space Center (JSC). The electrical power system design which was used as the basis for this research is that of Space Station Freedom, although the functionality of the processes defined here generalizes to any permanent manned space power control application. First, the Space Station Electrical Power Subsystem (EPS) hardware to be monitored is described, followed by a set of scenarios describing typical monitoring and control activity. Then, the parallel distributed problem solving approach to knowledge engineering is introduced. There follows a two-step presentation of the intelligent software design for secondary power control. The first step decomposes the problem of monitoring and control into three primary functions. Each of the primary functions is described in detail. Suggestions for refinements and embellishments in design specifications are given.
Genome-Wide Association Study of the Genetic Determinants of Emphysema Distribution.
Boueiz, Adel; Lutz, Sharon M; Cho, Michael H; Hersh, Craig P; Bowler, Russell P; Washko, George R; Halper-Stromberg, Eitan; Bakke, Per; Gulsvik, Amund; Laird, Nan M; Beaty, Terri H; Coxson, Harvey O; Crapo, James D; Silverman, Edwin K; Castaldi, Peter J; DeMeo, Dawn L
2017-03-15
Emphysema has considerable variability in the severity and distribution of parenchymal destruction throughout the lungs. Upper lobe-predominant emphysema has emerged as an important predictor of response to lung volume reduction surgery. Yet, aside from alpha-1 antitrypsin deficiency, the genetic determinants of emphysema distribution remain largely unknown. To identify the genetic influences of emphysema distribution in non-alpha-1 antitrypsin-deficient smokers. A total of 11,532 subjects with complete genotype and computed tomography densitometry data in the COPDGene (Genetic Epidemiology of Chronic Obstructive Pulmonary Disease [COPD]; non-Hispanic white and African American), ECLIPSE (Evaluation of COPD Longitudinally to Identify Predictive Surrogate Endpoints), and GenKOLS (Genetics of Chronic Obstructive Lung Disease) studies were analyzed. Two computed tomography scan emphysema distribution measures (difference between upper-third and lower-third emphysema; ratio of upper-third to lower-third emphysema) were tested for genetic associations in all study subjects. Separate analyses in each study population were followed by a fixed effect metaanalysis. Single-nucleotide polymorphism-, gene-, and pathway-based approaches were used. In silico functional evaluation was also performed. We identified five loci associated with emphysema distribution at genome-wide significance. These loci included two previously reported associations with COPD susceptibility (4q31 near HHIP and 15q25 near CHRNA5) and three new associations near SOWAHB, TRAPPC9, and KIAA1462. Gene set analysis and in silico functional evaluation revealed pathways and cell types that may potentially contribute to the pathogenesis of emphysema distribution. This multicohort genome-wide association study identified new genomic loci associated with differential emphysematous destruction throughout the lungs. These findings may point to new biologic pathways on which to expand diagnostic and therapeutic approaches in chronic obstructive pulmonary disease. Clinical trial registered with www.clinicaltrials.gov (NCT 00608764).
Disentangling rotational velocity distribution of stars
NASA Astrophysics Data System (ADS)
Curé, Michel; Rial, Diego F.; Cassetti, Julia; Christen, Alejandra
2017-11-01
Rotational speed is an important physical parameter of stars, and knowing the distribution of stellar rotational velocities is essential for understanding stellar evolution. However, rotational speed cannot be measured directly: the observable is the projected velocity v sin(i), the product of the rotational speed and the sine of the inclination angle, so the observed distribution is a convolution of the two. The deconvolution problem can be described via a Fredholm integral of the first kind. A new method (Curé et al. 2014) to solve this inverse problem and obtain the cumulative distribution function of stellar rotational velocities is based on the work of Chandrasekhar & Münch (1950). Another method to obtain the probability distribution function is the Tikhonov regularization method (Christen et al. 2016). The proposed methods can also be applied to the mass ratio distribution of extrasolar planets and brown dwarfs (in binary systems, Curé et al. 2015). For stars in a cluster, where all members are gravitationally bound, the standard assumption that rotational axes are uniformly distributed over the sphere is questionable. On the basis of the proposed techniques, a simple approach to model this anisotropy of rotational axes has been developed, with the possibility of "disentangling" simultaneously both the rotational speed distribution and the orientation of rotational axes.
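A compact sketch of the Tikhonov route: discretize the Fredholm kernel p(y|v) = y / (v sqrt(v² - y²)), which maps the speed pdf to the observed v sin(i) pdf under isotropic axes, then solve the regularized normal equations. The grids, test pdf, and regularization weight are illustrative assumptions:

```python
import numpy as np

# Discretized Fredholm kernel for y = v sin(i) with isotropic rotation axes
v = np.linspace(1.0, 100.0, 200)           # true equatorial speeds (km/s)
y = v.copy()                               # observed v sin(i) grid
dv = v[1] - v[0]
diff = v[None, :] ** 2 - y[:, None] ** 2
K = np.where(y[:, None] < v[None, :],
             y[:, None] / (v[None, :] * np.sqrt(np.clip(diff, 1e-12, None))),
             0.0) * dv

def tikhonov_deconvolve(K, b, lam=1e-2):
    """Regularized solution of K f = b, minimizing ||K f - b||^2 + lam ||f||^2."""
    A = K.T @ K + lam * np.eye(K.shape[1])
    return np.linalg.solve(A, K.T @ b)

# Hypothetical true speed pdf and its projected (observed) counterpart
true_f = np.exp(-((v - 30.0) / 15.0) ** 2)
true_f /= true_f.sum() * dv
observed = K @ true_f
f_rec = tikhonov_deconvolve(K, observed)   # smoothed recovery of true_f
```

The regularization weight trades fidelity against smoothness, which is the usual way such first-kind Fredholm inversions are stabilized.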
Bolech, C J; Heidrich-Meisner, F; Langer, S; McCulloch, I P; Orso, G; Rigol, M
2012-09-14
We study the sudden expansion of spin-imbalanced ultracold lattice fermions with attractive interactions in one dimension after turning off the longitudinal confining potential. We show that the momentum distribution functions of majority and minority fermions quickly approach stationary values due to a quantum distillation mechanism that results in a spatial separation of pairs and majority fermions. As a consequence, Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) correlations are lost during the expansion. Furthermore, we argue that the shape of the stationary momentum distribution functions can be understood by relating them to the integrals of motion in this integrable quantum system. We discuss our results in the context of proposals to observe FFLO correlations, related to recent experiments by Liao et al., Nature (London) 467, 567 (2010).
An engineering approach to the size design of dextran microgels fabricated by water/oil emulsification.
Salimi-Kenari, Hamed; Imani, Mohammad; Nodehi, Azizollah; Abedini, Hossein
2016-09-01
A correlation, based on fluid mechanics, has been investigated for the mean particle diameter of crosslinked dextran microgels (CDMs) prepared via a water/oil emulsification methodology conducted in a single-stirred vessel. To this end, non-dimensional correlations were developed to predict the mean particle size of CDMs as a function of Weber number, Reynolds number, and viscosity number, similar to those introduced for liquid-liquid dispersions. Moreover, a Rosin-Rammler distribution function has been successfully applied to the microgel particle size distributions. The correlations were validated using experimentally obtained mean particle sizes for CDMs prepared at different stirring conditions. The validated correlation is especially applicable to medical and pharmaceutical applications where strict control over the mean particle size and size distribution of CDMs is essential.
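For readers unfamiliar with the Rosin-Rammler form mentioned above, the short sketch below fits its cumulative version, F(d) = 1 - exp(-(d/d0)^n), to sieve-type data; the sizes and fractions are hypothetical numbers for illustration, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def rosin_rammler_cdf(d, d0, n):
    """Cumulative mass fraction finer than size d."""
    return 1.0 - np.exp(-(d / d0) ** n)

# Hypothetical sieve data: sizes in micrometres, cumulative fraction undersize
d = np.array([20.0, 40.0, 60.0, 80.0, 100.0, 150.0, 200.0])
frac = np.array([0.08, 0.25, 0.45, 0.60, 0.72, 0.90, 0.97])

(d0, n), _ = curve_fit(rosin_rammler_cdf, d, frac, p0=(80.0, 1.5))
print(f"characteristic size d0 = {d0:.1f} um, spread parameter n = {n:.2f}")
```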
Exponential Boundary Observers for Pressurized Water Pipe
NASA Astrophysics Data System (ADS)
Hermine Som, Idellette Judith; Cocquempot, Vincent; Aitouche, Abdel
2015-11-01
This paper deals with state estimation on a pressurized water pipe modeled by nonlinear coupled distributed hyperbolic equations for non-conservative laws with three known boundary measures. Our objective is to estimate the fourth boundary variable, which will be useful for leakage detection. Two approaches are studied. Firstly, the distributed hyperbolic equations are discretized through a finite-difference scheme. By using the Lipschitz property of the nonlinear term and a Lyapunov function, the exponential stability of the estimation error is proven by solving Linear Matrix Inequalities (LMIs). Secondly, the distributed hyperbolic system is preserved for state estimation. After state transformations, a Luenberger-like PDE boundary observer based on backstepping mathematical tools is proposed. An exponential Lyapunov function is used to prove the stability of the resulting estimation error. The performance of the two observers is demonstrated on a simulated water pipe prototype.
Demidenko, Eugene
2017-09-01
The exact density distribution of the nonlinear least squares estimator in the one-parameter regression model is derived in closed form and expressed through the cumulative distribution function of the standard normal variable. Several proposals to generalize this result are discussed. The exact density is extended to the estimating equation (EE) approach and to nonlinear regression with an arbitrary number of linear parameters and one intrinsically nonlinear parameter. For a very special nonlinear regression model, the derived density coincides with the distribution of the ratio of two normally distributed random variables obtained earlier by Fieller (1932), unlike approximations previously suggested by other authors. Approximations to the density of the EE estimators are discussed in the multivariate case. Numerical complications associated with nonlinear least squares, such as nonexistence and/or multiple solutions, are illustrated as major factors contributing to poor density approximation. The nonlinear Markov-Gauss theorem is formulated based on the near-exact EE density approximation.
Sengers, B G; Van Donkelaar, C C; Oomens, C W J; Baaijens, F P T
2004-12-01
Assessment of the functionality of tissue engineered cartilage constructs is hampered by the lack of correlation between global measurements of extracellular matrix constituents and the global mechanical properties. Based on patterns of matrix deposition around individual cells, it has been hypothesized previously that mechanical functionality arises when contact occurs between zones of matrix associated with individual cells. The objective of this study is to determine whether the local distribution of newly synthesized extracellular matrix components contributes to the evolution of the mechanical properties of tissue engineered cartilage constructs. A computational homogenization approach was adopted, based on the concept of a periodic representative volume element. Local transport and immobilization of newly synthesized matrix components were described. Mechanical properties were taken to depend on the local matrix concentration, and subsequently the global aggregate modulus and hydraulic permeability were derived. The transport parameters were varied to assess the effect of the evolving matrix distribution during culture. The results indicate that the overall stiffness and permeability are to a large extent insensitive to differences in local matrix distribution. This emphasizes the need for caution in the visual interpretation of tissue functionality from histology and underlines the importance of complementary measurements of the matrix's intrinsic molecular organization.
Boyle, Kevin J; Paterson, Robert; Carson, Richard; Leggett, Christopher; Kanninen, Barbara; Molenar, John; Neumann, James
2016-05-15
Environmental regulations often have the objective of eliminating the lower tail of an index of environmental quality. That part of the distribution of environmental quality moves above a threshold, and where in the new distribution it lands is a function of the control strategy chosen. This paper provides an approach for estimating the economic benefits of different distributional changes as the worst environmental conditions are removed. The proposed approach is illustrated by examining shifts in visibility at Class I visibility areas (National Parks and wilderness areas) that would occur with implementation of the U.S. Environmental Protection Agency's Regional Haze Program. In this application we show that people value shifts in the distribution of visibility and place a higher value on the removal of a low-visibility day than on the addition of a high-visibility day. We found that respondents would pay about $120 per year in the Southeast U.S. and about $80 per year in the Southwest U.S. for improvement programs that remove the 20% worst visibility days. Copyright © 2016 Elsevier Ltd. All rights reserved.
A distributed control approach for power and energy management in a notional shipboard power system
NASA Astrophysics Data System (ADS)
Shen, Qunying
The main goal of this thesis is to present a power control module (PCON) based approach for power and energy management and to examine its control capability in a shipboard power system (SPS). The proposed control scheme is implemented in a notional medium voltage direct current (MVDC) integrated power system (IPS) for electric ships. To realize control functions such as ship mode selection, generator launch scheduling, blackout monitoring, and fault ride-through, a PCON based distributed power and energy management system (PEMS) is developed. The control scheme is proposed as a two-layer hierarchical architecture, with the system level on top as the supervisory control and the zonal level on the bottom as the decentralized control; this architecture follows the zonal distribution characteristic of the notional MVDC IPS, which was proposed as one of the approaches for the Next Generation Integrated Power System (NGIPS) by Norbert Doerry. Several types of modules with different functionalities are used to derive the control scheme in detail for the notional MVDC IPS. Those modules include the power generation module (PGM), which controls the function of the generators, and the power conversion module (PCM), which controls the functions of the DC/DC or DC/AC converters. Among them, the power control module (PCON) plays a critical role in the PEMS: it is the core of the control process. PCONs in the PEMS interact with all the other modules, such as the power propulsion module (PPM), energy storage module (ESM), load shedding module (LSHED), and human machine interface (HMI), to realize the control algorithm in the PEMS. The proposed control scheme is implemented in real time using the real time digital simulator (RTDS) to verify its validity. To achieve this, a system level energy storage module (SESM) and a zonal level energy storage module (ZESM) are developed in RTDS to cooperate with the PCONs to realize the control functionalities. In addition, a load shedding module which takes into account the reliability of power supply (in terms of quality of service) is developed; this module can supply uninterruptible power to the mission-critical loads. Finally, a multi-agent system (MAS) based framework is proposed to implement the PCON based PEMS through a hardware setup composed of MAMBA boards and an FPGA interface, with agents implemented using the Java Agent DEvelopment Framework (JADE). Various test scenarios were run to validate the approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hilton, Harry H.
Protocols are developed for formulating optimal viscoelastic designer functionally graded materials tailored to best respond to prescribed loading and boundary conditions. In essence, an inverse approach is adopted where material properties instead of structures per se are designed and then distributed throughout structural elements. The final measure of viscoelastic material efficacy is expressed in terms of failure probabilities vs. survival time.
Sample Reuse in Statistical Remodeling.
1987-08-01
as the jackknife and bootstrap, is an expansion of the functional, T(Fn), or of its distribution function or both. Frangos and Schucany (1987a) used...accelerated bootstrap. In the same report Frangos and Schucany demonstrated the small sample superiority of that approach over the proposals that take...higher order terms of an Edgeworth expansion into account. In a second report Frangos and Schucany (1987b) examined the small sample performance of
Model-free quantification of dynamic PET data using nonparametric deconvolution
Zanderigo, Francesca; Parsey, Ramin V; Todd Ogden, R
2015-01-01
Dynamic positron emission tomography (PET) data are usually quantified using compartment models (CMs) or derived graphical approaches. Often, however, CMs either do not properly describe the tracer kinetics or are not identifiable, leading to nonphysiologic estimates of the tracer binding. The PET data are modeled as the convolution of the metabolite-corrected input function and the tracer impulse response function (IRF) in the tissue. Using nonparametric deconvolution methods, it is possible to obtain model-free estimates of the IRF, from which functionals related to tracer volume of distribution and binding may be computed, but this approach has rarely been applied in PET. Here, we apply nonparametric deconvolution using singular value decomposition to simulated and test–retest clinical PET data with four reversible tracers well characterized by CMs ([11C]CUMI-101, [11C]DASB, [11C]PE2I, and [11C]WAY-100635), and systematically compare the reproducibility, reliability, and identifiability of various IRF-derived functionals with those of traditional CM outcomes. Results show that nonparametric deconvolution, completely free of any model assumptions, allows for estimates of tracer volume of distribution and binding that are very close to the estimates obtained with CMs and, in some cases, show better test–retest performance than the CM outcomes. PMID:25873427
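A rough numerical sketch of the kind of SVD-based deconvolution described above: the time-activity curve is written as the discrete convolution of the input function with the IRF, and a truncated SVD supplies the regularized inverse. The toy input function, the single-exponential "true" IRF, and the truncation threshold are assumptions for illustration, not the paper's implementation details.

```python
import numpy as np
from scipy.linalg import toeplitz

def deconvolve_svd(t, input_fn, tac, rel_cut=0.05):
    """Model-free IRF estimate: solve tac = conv(input_fn, irf) * dt by truncated SVD."""
    dt = t[1] - t[0]
    A = toeplitz(input_fn, np.zeros_like(input_fn)) * dt   # convolution matrix
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.where(s > rel_cut * s[0], 1.0 / s, 0.0)     # truncation = regularization
    irf = Vt.T @ (s_inv * (U.T @ tac))
    return irf, np.trapz(irf, t)          # integral of the IRF, a VT-like functional

# Synthetic check with a one-tissue-compartment-like IRF
t = np.linspace(0.0, 60.0, 240)
cp = t * np.exp(-t / 4.0)                 # toy metabolite-corrected input function
irf_true = 0.3 * np.exp(-0.1 * t)
tac = np.convolve(cp, irf_true)[: t.size] * (t[1] - t[0])
irf_est, vt_est = deconvolve_svd(t, cp, tac)
```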
Flight Crew Workload Evaluation Based on the Workload Function Distribution Method.
Zheng, Yiyuan; Lu, Yanyu; Jie, Yuwen; Fu, Shan
2017-05-01
The minimum flight crew on the flight deck should be established according to the workload for individual crewmembers. Typical workload measures consist of three types: subjective rating scales, task performance, and psychophysiological measures. However, all these measures have their own limitations. To reflect flight crew workload more specifically and comprehensively within the flight environment, and to comply more directly with airworthiness regulations, the Workload Function Distribution Method, which combines six basic workload functions, was proposed. The analysis was based on conditions with different numbers of workload functions, each examined from two aspects: overall proportion and effective proportion. Three types of approach tasks were used in this study, and the NASA-TLX scale was implemented for comparison. Neither the one-function nor the two-function condition produced the same results as NASA-TLX, whereas both the three-function and the four- to six-function conditions agreed with NASA-TLX. Moreover, under the four- to six-function conditions the two proportion measures diverged: the overall proportion showed no significant differences, while the effective proportions did. The results show that conditions with one or two functions seemed to have no influence on workload, while executing three functions or four to six functions had an impact on workload. In addition, effective proportions of workload functions indicated workload more precisely than overall proportions, especially in conditions with multiple functions. Zheng Y, Lu Y, Jie Y, Fu S. Flight crew workload evaluation based on the workload function distribution method. Aerosp Med Hum Perform. 2017; 88(5):481-486.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Medasani, Bharat; Ovanesyan, Zaven; Thomas, Dennis G.
In this article we present a classical density functional theory for electrical double layers of spherical macroions that extends the capabilities of conventional approaches by accounting for electrostatic ion correlations, size asymmetry, and excluded volume effects. The approach is based on a recent approximation introduced by Hansen-Goos and Roth for the hard sphere excess free energy of inhomogeneous fluids (J. Chem. Phys. 124, 154506). It accounts for the proper and efficient description of the effects of ionic asymmetry and solvent excluded volume, especially at high ion concentrations and size asymmetry ratios, including those observed in experimental studies. Additionally, we utilize a leading functional Taylor expansion approximation of the ion density profiles. In addition, we use the Mean Spherical Approximation for multi-component charged hard sphere fluids to account for the electrostatic ion correlation effects. These approximations are implemented in our theoretical formulation into a suitable decomposition of the excess free energy which plays a key role in capturing the complex interplay between charge correlations and excluded volume effects. We perform Monte Carlo simulations in various scenarios to validate the proposed approach, obtaining a good compromise between accuracy and computational cost. We use the proposed computational approach to study the effects of ion size, ion size asymmetry and solvent excluded volume on the ion profiles, integrated charge, mean electrostatic potential, and ionic coordination number around spherical macroions in various electrolyte mixtures. Our results show that both solvent hard sphere diameter and density play a dominant role in the distribution of ions around spherical macroions, mainly for experimental water molarity and size values where the counterion distribution is characterized by a tight binding to the macroion, similar to that predicted by the Stern model.
Feasibility of Decentralized Linear-Quadratic-Gaussian Control of Autonomous Distributed Spacecraft
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell
1999-01-01
A distributed satellite formation, modeled as an arbitrary number of fully connected nodes in a network, could be controlled using a decentralized controller framework that distributes operations in parallel over the network. For such problems, a solution that minimizes data transmission requirements, in the context of linear-quadratic-Gaussian (LQG) control theory, was given by Speyer. This approach is advantageous because it is non-hierarchical, detected failures gracefully degrade system performance, fewer local computations are required than for a centralized controller, and it is optimal with respect to the standard LQG cost function. Disadvantages of the approach are the need for a fully connected communications network, the total operations performed over all the nodes are greater than for a centralized controller, and the approach is formulated for linear time-invariant systems. To investigate the feasibility of the decentralized approach to satellite formation flying, a simple centralized LQG design for a spacecraft orbit control problem is adapted to the decentralized framework. The simple design uses a fixed reference trajectory (an equatorial, Keplerian, circular orbit), and by appropriate choice of coordinates and measurements is formulated as a linear time-invariant system.
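The decentralized bookkeeping of Speyer's framework is too long for a short sketch, but the centralized LQ core that each node would replicate is easy to show. Below, the in-plane Hill-Clohessy-Wiltshire dynamics (a linear time-invariant model consistent with the circular, Keplerian reference orbit described above) are paired with a continuous-time Riccati solve; the mean motion and the weighting matrices are assumed values, not those of the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

n = 1.1e-3   # orbital mean motion [rad/s], an assumed LEO-like value
# Hill-Clohessy-Wiltshire in-plane dynamics; state = [x, y, xdot, ydot]
A = np.array([[0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [3.0 * n ** 2, 0.0, 0.0, 2.0 * n],
              [0.0, 0.0, -2.0 * n, 0.0]])
B = np.vstack([np.zeros((2, 2)), np.eye(2)])   # control accelerations as inputs
Q = np.eye(4)                                  # state weight (assumed)
R = 1.0e6 * np.eye(2)                          # control weight (assumed)

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)                # LQ-optimal feedback, u = -K x
```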
The 120V 20A PWM switch for applications in high power distribution
NASA Astrophysics Data System (ADS)
Borelli, V.; Nimal, W.
1989-08-01
A 20A/120VDC (volts direct current) PWM (Pulse Width Modulation) Solid State Power Controller (SSPC) developed under ESA contract to be used in the power distribution system of Columbus is described. The general characteristics are discussed and the project specification is defined. The benefits of a PWM solution over a more conventional approach for the specific application considered are presented. An introduction to the SSPC characteristics and a functional description are also given.
Alternative Statistical Frameworks for Student Growth Percentile Estimation
ERIC Educational Resources Information Center
Lockwood, J. R.; Castellano, Katherine E.
2015-01-01
This article suggests two alternative statistical approaches for estimating student growth percentiles (SGP). The first is to estimate percentile ranks of current test scores conditional on past test scores directly, by modeling the conditional cumulative distribution functions, rather than indirectly through quantile regressions. This would…
Theoretical limit of spatial resolution in diffuse optical tomography using a perturbation model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Konovalov, A B; Vlasov, V V
2014-03-28
We have assessed the limit of spatial resolution of time-domain diffuse optical tomography (DOT) based on a perturbation reconstruction model. From the viewpoint of the structure reconstruction accuracy, three different approaches to solving the inverse DOT problem are compared. The first approach involves reconstruction of diffuse tomograms from straight lines, the second from average curvilinear trajectories of photons, and the third from total banana-shaped distributions of photon trajectories. In order to obtain estimates of resolution, we have derived analytical expressions for the point spread function and modulation transfer function, and have performed a numerical experiment on reconstruction of rectangular scattering objects with circular absorbing inhomogeneities. It is shown that in passing from reconstruction from straight lines to reconstruction using distributions of photon trajectories we can improve resolution by almost an order of magnitude and exceed the accuracy of reconstruction of multi-step algorithms used in DOT.
Maximally informative pairwise interactions in networks
Fitzgerald, Jeffrey D.; Sharpee, Tatyana O.
2010-01-01
Several types of biological networks have recently been shown to be accurately described by a maximum entropy model with pairwise interactions, also known as the Ising model. Here we present an approach for finding the optimal mappings between input signals and network states that allow the network to convey the maximal information about input signals drawn from a given distribution. This mapping also produces a set of linear equations for calculating the optimal Ising-model coupling constants, as well as geometric properties that indicate the applicability of the pairwise Ising model. We show that the optimal pairwise interactions are on average zero for Gaussian and uniformly distributed inputs, whereas they are nonzero for inputs approximating those in natural environments. These nonzero network interactions are predicted to increase in strength as the noise in the response functions of each network node increases. This approach also suggests ways for how interactions with unmeasured parts of the network can be inferred from the parameters of response functions for the measured network nodes. PMID:19905153
SCOS 2: A distributed architecture for ground system control
NASA Astrophysics Data System (ADS)
Keyte, Karl P.
The current generation of spacecraft ground control systems in use at the European Space Agency/European Space Operations Centre (ESA/ESOC) is based on SCOS 1. Such systems have become difficult to manage in both functional and financial terms. The next generation of spacecraft demands more flexibility in the use, configuration, and distribution of control facilities, as well as functional requirements capable of matching those being planned for future missions. SCOS 2 is more than a successor to SCOS 1. Many of the shortcomings of the existing system were carefully analyzed by the user and technical communities, and a complete redesign was made. Different technologies were used in many areas, including the hardware platform, network architecture, user interfaces, and implementation techniques, methodologies, and language. As far as possible, a flexible design approach was taken, using popular industry standards to provide vendor independence in both hardware and software. This paper describes many of the new approaches made in the architectural design of SCOS 2.
Isabelle, Boulangeat; Damien, Georges; Wilfried, Thuiller
2014-01-01
During the last decade, despite strenuous efforts to develop new models and compare different approaches, few conclusions have been drawn on their ability to provide robust biodiversity projections in an environmental change context. The recurring suggestions are that models should explicitly (i) include spatio-temporal dynamics; (ii) consider multiple species in interaction; and (iii) account for the processes shaping biodiversity distribution. This paper presents a biodiversity model (FATE-HD) that meets this challenge at regional scale by combining phenomenological and process-based approaches and using well-defined plant functional groups. FATE-HD has been tested and validated in a French National Park, demonstrating its ability to simulate vegetation dynamics, structure, and diversity in response to disturbances and climate change. The analysis demonstrated the importance of considering biotic interactions, spatio-temporal dynamics, and disturbances in addition to abiotic drivers when simulating vegetation dynamics. The distribution of pioneer trees was particularly improved, as were all undergrowth functional groups. PMID:24214499
Function Lateralization via Measuring Coherence Laterality
Wang, Ze; Mechanic-Hamilton, Dawn; Pluta, John; Glynn, Simon; Detre, John A.
2009-01-01
A data-driven approach for lateralization of brain function, based on the spatial coherence difference of functional MRI (fMRI) data in homologous regions of interest (ROIs) in each hemisphere, is proposed. The utility of using coherence laterality (CL) to determine function laterality was assessed first by examining motor laterality using normal subjects' data acquired both at rest and with a simple unilateral motor task, and subsequently by examining mesial temporal lobe memory laterality in normal subjects and patients with temporal lobe epilepsy. The motor task was used to demonstrate that CL within motor ROIs correctly lateralized functional stimulation. In patients with unilateral epilepsy studied during a scene-encoding task, CL in a hippocampus-parahippocampus-fusiform (HPF) ROI was concordant with lateralization based on task activation, and the CL index (CLI) significantly differentiated the right-sided group from the left-sided group. By contrast, normal controls showed a symmetric HPF CLI distribution. Additionally, similar memory laterality prediction results were still observed using CL in epilepsy patients with unilateral seizures after the memory encoding effect was removed from the data, suggesting the potential for lateralization of pathological brain function based on resting fMRI data. A better lateralization was further achieved via a combination of the proposed approach and the standard activation-based approach, demonstrating that assessment of spatial coherence changes provides a complementary approach to quantifying task-correlated activity for lateralizing brain function. PMID:19345736
Conversion of woodlands changes soil related ecosystem services in Subsaharan Africa
NASA Astrophysics Data System (ADS)
Groengroeft, Alexander; Landschreiber, Lars; Luther-Mosebach, Jona; Masamba, Wellington; Zimmermann, Ibo; Eschenbach, Annette
2015-04-01
In remote areas of Subsaharan Africa, growing population, changes in consumption patterns, and increasing global influences are placing strong pressure on land resources. Smallholders convert woodlands by fire, grazing, and clearing at different intensities, thus changing soil properties and their ecosystem functioning. As the extraction of ecosystem services forms the basis of local wellbeing for many communities, the role of soils in providing ecosystem services is of high importance. Since 2010, "The Future Okavango" project has investigated the quantification of ecosystem functions and services at four core research sites along the Okavango river basin (Angola, Namibia, Botswana; see http://www.future-okavango.org/). These research sites have an extent of 100 km² each. Within our subproject the soil functions underlying ecosystem services are studied: the amount and spatial variation of soil nutrient reserves in woodland and their changes through land use activities, the water storage function as a basis for plant growth and its effect on groundwater recharge, and the carbon storage function. The scientific framework consists of four major parts: soil survey and mapping, laboratory analysis, field measurements, and modeling approaches at different scales. A detailed soil survey provides a measure of the spatial distribution, extent, and heterogeneity of soil types for each research site. For generalization purposes, geomorphological and pedological characteristics are merged to derive landscape units. These landscape units have been overlaid with recent land use types to stratify the research sites for subsequent soil sampling. On the basis of field and laboratory analysis, the spatial distribution of soil properties as well as the boundaries between neighboring landscape units are derived. The parameters analysed include grain size distribution, organic carbon content, saturated and unsaturated hydraulic conductivity, and pore space distribution. At nine selected sites, soil water contents and pressure heads are logged throughout the year at 12-hour resolution at depths of 10 to 160 cm. This monitoring provides information about soil water dynamics at the point scale, and the database is used to evaluate model outputs of soil water balances later on. To derive point-scale soil water balances for each landscape unit, the one-dimensional, physically based model SWAP 3.2 is applied. The presentation will demonstrate the conceptual framework and exemplary results, and will discuss whether the ecosystem service approach can help to avoid future land degradation. Keywords: Okavango catchment, soil functions, conceptual approach
Back to the future: Rational maps for exploring acetylcholine receptor space and time.
Tessier, Christian J G; Emlaw, Johnathon R; Cao, Zhuo Qian; Pérez-Areales, F Javier; Salameh, Jean-Paul J; Prinston, Jethro E; McNulty, Melissa S; daCosta, Corrie J B
2017-11-01
Global functions of nicotinic acetylcholine receptors, such as subunit cooperativity and compatibility, likely emerge from a network of amino acid residues distributed across the entire pentameric complex. Identification of such networks has stymied traditional approaches to acetylcholine receptor structure and function, likely due to the cryptic interdependency of their underlying amino acid residues. An emerging evolutionary biochemistry approach, which traces the evolutionary history of acetylcholine receptor subunits, allows for rational mapping of acetylcholine receptor sequence space, and offers new hope for uncovering the amino acid origins of these enigmatic properties. Copyright © 2017 Elsevier B.V. All rights reserved.
A lower bound on the Milky Way mass from general phase-space distribution function models
NASA Astrophysics Data System (ADS)
Bratek, Łukasz; Sikora, Szymon; Jałocha, Joanna; Kutschera, Marek
2014-02-01
We model the phase-space distribution of the kinematic tracers using general, smooth distribution functions to derive a conservative lower bound on the total mass within ≈150-200 kpc. By approximating the potential as Keplerian, the phase-space distribution can be simplified to that of a smooth distribution of energies and eccentricities. Our approach naturally allows for calculating moments of the distribution function, such as the radial profile of the orbital anisotropy. We systematically construct a family of phase-space functions with the resulting radial velocity dispersion overlapping with the one obtained using data on radial motions of distant kinematic tracers, while making no assumptions about the density of the tracers and the velocity anisotropy parameter β regarded as a function of the radial variable. While there is no apparent upper bound for the Milky Way mass, at least as long as only the radial motions are concerned, we find a sharp lower bound for the mass that is small. In particular, a mass value of 2.4 × 10¹¹ M⊙, obtained in the past for lower and intermediate radii, is still consistent with the dispersion profile at larger radii. Compared with much greater mass values in the literature, this result shows that determining the Milky Way mass is strongly model-dependent. We expect a similar reduction of mass estimates in models assuming more realistic mass profiles. Full Table 1 is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/562/A134
Maximum likelihood estimation for life distributions with competing failure modes
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1979-01-01
Systems that are placed on test at time zero, function for a period, and die at some random time were studied. Failure may be due to one of several causes or modes. The parameters of the life distribution may depend upon the levels of various stress variables the item is subjected to. Maximum likelihood estimation methods are discussed. Specific methods are reported for the smallest extreme-value distributions of life. Monte Carlo results indicate the methods to be promising. Under appropriate conditions, the location parameters are nearly unbiased, the scale parameter is slightly biased, and the asymptotic covariances are rapidly approached.
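A minimal sketch of the kind of estimation discussed above: maximum likelihood for the smallest extreme-value life distribution with two independent competing failure modes, on simulated data. A failure from mode k at time t contributes f_k(t) times the survival of the other mode to the likelihood. All parameter values and the sample size are invented for the illustration.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def sev_logpdf(t, mu, sig):
    """Smallest extreme-value log-density: z - e^z - log(sig), z = (t - mu)/sig."""
    z = (t - mu) / sig
    return z - np.exp(z) - np.log(sig)

def sev_logsf(t, mu, sig):
    """Log survival function: -e^z."""
    return -np.exp((t - mu) / sig)

# Simulate two competing modes; observed life = min, cause = which mode failed
n = 500
mu_true, sig_true = np.array([5.0, 5.5]), np.array([0.4, 0.6])
T = mu_true + sig_true * np.log(-np.log(rng.uniform(size=(n, 2))))
t_obs, cause = T.min(axis=1), T.argmin(axis=1)

def nll(theta):
    m1, s1, m2, s2 = theta[0], np.exp(theta[1]), theta[2], np.exp(theta[3])
    ll = np.where(cause == 0,
                  sev_logpdf(t_obs, m1, s1) + sev_logsf(t_obs, m2, s2),
                  sev_logpdf(t_obs, m2, s2) + sev_logsf(t_obs, m1, s1))
    return -ll.sum()

res = minimize(nll, x0=[4.0, 0.0, 6.0, 0.0], method="Nelder-Mead")
mu1_hat, sig1_hat = res.x[0], np.exp(res.x[1])
```

The scale parameters are log-transformed so the optimizer works on an unconstrained space, a common way to keep them positive.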
NASA Technical Reports Server (NTRS)
Lizcano, Maricela
2017-01-01
High voltage hybrid electric propulsion systems are now pushing new technology development efforts for air transportation. A key challenge in hybrid electric aircraft is safe high voltage distribution and transmission of megawatts of power (>20 MW). For the past two years, a multidisciplinary materials research team at NASA Glenn Research Center has investigated the feasibility of distributing high voltage power on future hybrid electric aircraft. This presentation describes the team's approach to addressing this challenge, significant technical findings, and next steps in GRC's materials research effort for MW power distribution on aircraft.
NASA Astrophysics Data System (ADS)
Hashimoto, Chihiro; Panizza, Pascal; Rouch, Jacques; Ushiki, Hideharu
2005-10-01
A new analytical concept is applied to the kinetics of the shrinking process of poly(N-isopropylacrylamide) (PNIPA) gels. When PNIPA gels are put into hot water above the critical temperature, two-step shrinking is observed, and the secondary shrinking of the gels is fitted well by a stretched exponential function. The exponent β characterizing the stretched exponential is always higher than one, although few analytical concepts exist for the stretched exponential function with β>1. As a new interpretation of this function, we propose a superposition of step (Heaviside) functions, from which a new distribution function of characteristic times is deduced.
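A short sketch of fitting the compressed (β > 1) exponential discussed above; the synthetic "gel volume" trace, noise level, and starting values are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(t, a, tau, beta, c):
    """Kohlrausch form; beta > 1 gives a compressed, sigmoidal decay."""
    return a * np.exp(-(t / tau) ** beta) + c

# Hypothetical normalized gel volume during the secondary shrinking step
t = np.linspace(1.0, 120.0, 40)           # minutes
rng = np.random.default_rng(2)
y = stretched_exp(t, 0.6, 45.0, 1.6, 0.4) + 0.01 * rng.standard_normal(t.size)

p, _ = curve_fit(stretched_exp, t, y, p0=(0.5, 40.0, 1.2, 0.4),
                 bounds=([0.0, 1.0, 0.5, 0.0], [1.0, 500.0, 3.0, 1.0]))
print(f"fitted beta = {p[2]:.2f}")        # expected > 1 for compressed decay
```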
Onset of natural convection in a continuously perturbed system
NASA Astrophysics Data System (ADS)
Ghorbani, Zohreh; Riaz, Amir
2017-11-01
The convective mixing triggered by gravitational instability plays an important role in CO2 sequestration in saline aquifers. Linear stability analysis and numerical simulation of convective mixing in porous media require perturbations of small amplitude to be imposed on the concentration field in the form of an initial shape function. In aquifers, however, the instability is triggered by local variations in porosity and permeability. In this work, we consider a canonical 2D homogeneous system where perturbations arise from spatial variation of porosity. This approach not only eliminates the need for an imposed initial shape function but is also more realistic. Using a reduced nonlinear method, we first explore the effect of harmonic variations of porosity in the transverse and streamwise directions on the onset time of convection and on the late-time behavior. We then obtain the optimal porosity structure that minimizes the convection onset. We further examine the effect of a random porosity distribution, independent of the spatial mode of the porosity structure, on the convection onset. Using high-order pseudospectral DNS, we explore how the random distribution differs from the modal approach in predicting the onset time.
Vilquin, A; Boudet, J F; Kellay, H
2016-08-01
Velocity distributions in normal shock waves obtained in dilute granular flows are studied. These distributions cannot be described by a simple functional shape and are commonly believed to be bimodal. Our results show that these distributions are not strictly bimodal; a trimodal distribution, however, is sufficient. The usual Mott-Smith bimodal description of these distributions, developed for molecular gases and based on the coexistence of two subpopulations (a supersonic and a subsonic population) in the shock front, can be modified by adding a third subpopulation. Our experiments show that this additional population results from collisions between the supersonic and subsonic subpopulations. We propose a simple approach incorporating the role of this third, intermediate population to model the measured probability distributions and apply it to granular shocks as well as shocks in molecular gases.
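As a toy illustration of the trimodal description proposed above, the sketch below fits a three-component Gaussian mixture to a synthetic velocity sample built from supersonic, subsonic, and intermediate populations; all weights, means, and widths are invented for the example.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# Synthetic shock-front velocities (arbitrary units)
v = np.concatenate([rng.normal(3.0, 0.3, 5000),    # supersonic population
                    rng.normal(0.5, 0.5, 3000),    # subsonic population
                    rng.normal(1.8, 0.7, 2000)])   # intermediate (collision-born)

gm = GaussianMixture(n_components=3, random_state=0).fit(v.reshape(-1, 1))
for w, m, c in zip(gm.weights_, gm.means_.ravel(), gm.covariances_.ravel()):
    print(f"weight {w:.2f}  mean {m:.2f}  sigma {np.sqrt(c):.2f}")
```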
Trace element distribution in the rat cerebellum
NASA Astrophysics Data System (ADS)
Kwiatek, W. M.; Long, G. J.; Pounds, J. G.; Reuhl, K. R.; Hanson, A. L.; Jones, K. W.
1990-04-01
Spatial distributions and concentrations of trace elements (TE) in the brain are important because TE perform catalytic and structural functions in enzymes which regulate brain function and development. We have investigated the distributions of TE in rat cerebellum. Structures were sectioned and analyzed by the Synchrotron Radiation Induced X-ray Emission (SRIXE) method using the NSLS X-26 white-light microprobe facility. Advantages important for TE analysis of biological specimens with X-ray microscopy include short time of measurement, high brightness and flux, good spatial resolution, multielemental detection, good sensitivity, and nondestructive irradiation. Trace elements were measured in thin rat brain sections of 20 μm thickness. The analyses were performed on sample volumes as small as 0.2 nl with Minimum Detectable Limits (MDL) of 50 ppb wet weight for Fe, 100 ppb wet weight for Cu, and Zn, and 1 ppm wet weight for Pb. The distribution of TE in the molecular cell layer, granule cell layer and fiber tract of rat cerebella was investigated. Both point analyses and two-dimensional semiquantitative mapping of the TE distribution in a section were used. All analyzed elements were observed in each structure of the cerebellum except mercury which was not observed in granule cell layer or fiber tract. This approach permits an exacting correlation of the TE distribution in complex structure with the diet, toxic elements, and functional status of the animal.
Mean-field approximation for spacing distribution functions in classical systems
NASA Astrophysics Data System (ADS)
González, Diego Luis; Pimpinelli, Alberto; Einstein, T. L.
2012-01-01
We propose a mean-field method to calculate approximately the spacing distribution functions p(n)(s) in one-dimensional classical many-particle systems. We compare our method with two other commonly used methods, the independent interval approximation and the extended Wigner surmise. In our mean-field approach, p(n)(s) is calculated from a set of Langevin equations, which are decoupled by using a mean-field approximation. We find that in spite of its simplicity, the mean-field approximation provides good results in several systems. We offer many examples illustrating that the three previously mentioned methods give a reasonable description of the statistical behavior of the system. The physical interpretation of each method is also discussed.
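To make the compared objects concrete, the sketch below computes the nearest-neighbor spacing distribution p(s) for a set of particle positions and evaluates the two classical reference curves, the Poisson law for independent particles and the Wigner surmise; the paper's mean-field Langevin construction itself is not reproduced here.

```python
import numpy as np

def spacing_histogram(x, bins=40, s_max=4.0):
    """Nearest-neighbor spacings of sorted positions, normalized to unit mean."""
    s = np.diff(np.sort(x))
    s = s / s.mean()
    return np.histogram(s, bins=bins, range=(0.0, s_max), density=True)

rng = np.random.default_rng(4)
x = rng.uniform(0.0, 1.0, 2000)            # non-interacting (Poisson-like) particles

hist, edges = spacing_histogram(x)
s = 0.5 * (edges[1:] + edges[:-1])
p_poisson = np.exp(-s)                                        # no repulsion
p_wigner = 0.5 * np.pi * s * np.exp(-0.25 * np.pi * s ** 2)   # Wigner surmise
```

Replacing the uniform sample with positions from an interacting-particle simulation shifts the histogram away from the Poisson curve toward the level-repulsion form.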
NASA Astrophysics Data System (ADS)
Jiang, Fan; Rossi, Mathieu; Parent, Guillaume
2018-05-01
Accurately modeling the anisotropic behavior of electrical steel is essential for performing good end simulations. Several approaches can be found in the literature for that purpose, but those methods are most often unable to deal with grain-oriented electrical steel. In this paper, a method based on the orientation distribution function is applied to modern grain-oriented laminations. In particular, two solutions are proposed to increase the accuracy of the results. The first consists in increasing the number of terms retained in the cosine series on which the method is based. The second consists in modifying how the terms of this cosine series are determined.
Elbasha, Elamin H
2005-05-01
The availability of patient-level data from clinical trials has spurred considerable interest in developing methods for quantifying and presenting uncertainty in cost-effectiveness analysis (CEA). Although most work has focused on developing methods for using sample data to estimate a confidence interval for an incremental cost-effectiveness ratio (ICER), a small strand of the literature has emphasized the importance of incorporating risk preferences and the trade-off between the mean and the variance of returns to investment in health and medicine (mean-variance analysis). This paper shows how the exponential utility-moment-generating function approach is a natural extension to this branch of the literature for modelling choices among healthcare interventions with uncertain costs and effects. The paper assumes an exponential utility function, which implies constant absolute risk aversion, and is based on the fact that the expected value of this function results in a convenient expression that depends only on the moment-generating function of the random variables. The mean-variance approach is shown to be a special case of this more general framework. The paper characterizes the solution to the resource allocation problem using standard optimization techniques and derives the summary measure researchers need to estimate for each programme when the assumption of risk neutrality does not hold, comparing it to the standard incremental cost-effectiveness ratio. The importance of choosing the correct distribution of costs and effects and issues related to estimation of the parameters of the distribution are also discussed. An empirical example to illustrate the methods and concepts is provided. Copyright 2004 John Wiley & Sons, Ltd.
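For normally distributed net monetary benefit, the computational step described above collapses to a closed form: with u(w) = -exp(-r w), the normal moment-generating function gives E[u] = -exp(-r mu + r^2 var / 2), so programmes can be ranked by the certainty equivalent mu - (r/2) var. A sketch with hypothetical programme data (the risk-aversion coefficient and the moments are invented):

```python
def certainty_equivalent(mu: float, var: float, r: float) -> float:
    """CE of a Normal(mu, var) net monetary benefit under u(w) = -exp(-r*w).
    From the normal MGF, E[u] = -exp(-r*mu + r**2 * var / 2), hence
    CE = mu - (r / 2) * var."""
    return mu - 0.5 * r * var

# Hypothetical programmes: (mean, variance) of incremental net monetary benefit, $
programmes = {"A": (10_000.0, 4.0e7), "B": (12_000.0, 2.5e8)}
r = 1.0e-4   # constant absolute risk aversion (assumed)

for name, (mu, var) in programmes.items():
    print(name, certainty_equivalent(mu, var, r))
# Here A (CE = 8000) beats B (CE = -500) despite B's higher mean benefit,
# which is exactly the mean-variance trade-off the paper formalizes.
```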
Graça, Márlon B; Morais, José W; Franklin, Elizabeth; Pequeno, Pedro A C L; Souza, Jorge L P; Bueno, Anderson Saldanha
2016-04-01
This study investigated the spatial distribution of an Amazonian fruit-feeding butterfly assemblage by linking species taxonomic and functional approaches. We hypothesized that: 1) vegetation richness (i.e., resources) and abundance of insectivorous birds (i.e., predators) should drive changes in butterfly taxonomic composition, 2) larval diet breadth should decrease with increasing plant species richness, 3) small-sized adults should be favored by higher abundance of birds, and 4) communities with eyespot markings should be able to exploit areas with higher predation pressure. Fruit-feeding butterflies were sampled with bait traps and insect nets across 25 km² of an Amazonian ombrophilous forest in Brazil. We measured larval diet breadth, adult body size, and wing markings of all butterflies. Our results showed that plant species richness explained most of the variation in butterfly taxonomic turnover. Also, community average diet breadth decreased with increasing plant species richness, which supports our expectations. In contrast, community average body size increased with the abundance of birds, refuting our hypothesis. We detected no influence of environmental gradients on the occurrence of species with eyespot markings. The association between butterfly taxonomic and functional composition points to a mediating role of functional traits in the environmental filtering of butterflies. The incorporation of the functional approach into the analyses allowed for the detection of relationships that were not observed using a strictly taxonomic perspective and provided extra insight into comprehending the potential adaptive strategies of butterflies. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Distribution of injected power fluctuations in electroconvection.
Tóth-Katona, Tibor; Gleeson, J T
2003-12-31
We report on the distribution spectra of the fluctuations in the amount of power injected into a liquid crystal undergoing electroconvective flow. The probability distribution functions (PDFs) of the fluctuations, as well as the magnitude of the fluctuations, have been determined over a wide range of imposed stress for both unconfined and confined flow geometries. These spectra are compared to those found in other systems held far from equilibrium, and we find that under certain conditions we obtain the universal PDF form reported in Phys. Rev. Lett. 84, 3744 (2000). Moreover, the PDF approaches this universal form via an interesting mechanism whereby the distribution's negative tail evolves towards this form in a different manner than the positive tail.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Dong; Liu, Yangang
2014-12-18
Subgrid-scale variability is one of the main reasons why parameterizations are needed in large-scale models. Although some parameterizations have started to address the issue of subgrid variability by introducing a subgrid probability distribution function for relevant quantities, the spatial structure has typically been ignored and thus subgrid-scale interactions cannot be accounted for physically. Here we present a new statistical-physics-like approach whereby the spatial autocorrelation function can be used to physically capture the net effects of subgrid cloud interaction with radiation. The new approach is able to faithfully reproduce the Monte Carlo 3D simulation results at several orders of magnitude lower computational cost, allowing for more realistic representation of cloud-radiation interactions in large-scale models.
Functional approach to high-throughput plant growth analysis
2013-01-01
Method: Taking advantage of the current rapid development in imaging systems and computer vision algorithms, we present HPGA, a high-throughput phenotyping platform for plant growth modeling and functional analysis, which produces a better understanding of energy distribution with regard to the balance between growth and defense. HPGA has two components, PAE (Plant Area Estimation) and GMA (Growth Modeling and Analysis). In PAE, by taking the complex leaf overlap problem into consideration, the area of every plant is measured from top-view images in four steps. Given the abundant measurements obtained with PAE, in the second module, GMA, a nonlinear growth model is applied to generate growth curves, followed by functional data analysis. Results: Experimental results on the model plant Arabidopsis thaliana show that, compared to an existing approach, HPGA reduces the error rate of measuring plant area by half. The application of HPGA on the cfq mutant plants under fluctuating light reveals the correlation between low photosynthetic rates and small plant area (compared to wild type), which raises the hypothesis that knocking out cfq changes the sensitivity of the energy distribution under fluctuating light conditions to repress leaf growth. Availability: HPGA is available at http://www.msu.edu/~jinchen/HPGA. PMID:24565437
NASA Astrophysics Data System (ADS)
Crosby, N.; Georgoulis, M.; Vilmer, N.
1999-10-01
Solar burst observations in the deka-keV energy range originating from the WATCH experiment aboard the GRANAT spacecraft were used to build frequency distributions of measured X-ray flare parameters (Crosby et al., 1998). The results of the study show that: (1) the overall distribution functions are robust power laws extending over a number of decades, and the typical parameters of events (total counts, peak count rates, duration) are all correlated with each other; (2) the overall distribution functions are the convolution of significantly different distribution functions built on parts of the whole data set filtered by event duration; these "partial" frequency distributions are still power laws over several decades, with a slope systematically decreasing with increasing duration; (3) no correlation is found between the elapsed time interval between successive bursts arising from the same active region and the peak intensity of the flare. In this paper, we attempt a tentative comparison between the statistical properties of the self-organized critical (SOC) cellular automaton flare models (see e.g. Lu and Hamilton (1991), Georgoulis and Vlahos (1996, 1998)) and the respective properties of the WATCH flare data. Despite the inherent weaknesses of the SOC models in simulating a number of physical processes in the active region, it is found that most of the observed statistical properties can be reproduced using the SOC models, including the various frequency distributions and scatter plots. We finally conclude that, even if SOC models must be refined to improve the physical links to MHD approaches, they nevertheless represent a good approach to describing the properties of rapid energy dissipation and magnetic field annihilation in complex and magnetized plasmas. Crosby N., Vilmer N., Lund N., and Sunyaev R., A&A, 334, 299-313, 1998; Crosby N., Lund N., Vilmer N., and Sunyaev R., A&A Supplement Series, 130, 233, 1998; Georgoulis M. and Vlahos L., 1996, Astrophys. J. Lett., 469, L135; Georgoulis M. and Vlahos L., 1998, in preparation; Lu E.T. and Hamilton R.J., 1991, Astrophys. J., 380, L89
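A minimal sketch of the basic measurement underlying such frequency-distribution studies: the continuous maximum-likelihood estimate of a power-law slope, alpha = 1 + N / sum(ln(x_i / x_min)), verified on a synthetic sample. The threshold and the true slope are assumptions, not WATCH values.

```python
import numpy as np

def power_law_alpha(x, x_min):
    """MLE of alpha for p(x) ~ x**(-alpha), x >= x_min (continuous case)."""
    x = np.asarray(x, dtype=float)
    x = x[x >= x_min]
    return 1.0 + x.size / np.log(x / x_min).sum(), x.size

# Synthetic 'peak count rate' sample from a pure power law with alpha = 1.6
rng = np.random.default_rng(5)
u = rng.uniform(size=5000)
x = 10.0 * (1.0 - u) ** (-1.0 / 0.6)       # inverse-CDF sampling, x_min = 10
alpha_hat, n_used = power_law_alpha(x, 10.0)
print(f"alpha = {alpha_hat:.2f} from {n_used} events")
```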
Fault detection and diagnosis using neural network approaches
NASA Technical Reports Server (NTRS)
Kramer, Mark A.
1992-01-01
Neural networks can be used to detect and identify abnormalities in real-time process data. Two basic approaches can be used, the first based on training networks using data representing both normal and abnormal modes of process behavior, and the second based on statistical characterization of the normal mode only. Given data representative of process faults, radial basis function networks can effectively identify failures. This approach is often limited by the lack of fault data, but can be facilitated by process simulation. The second approach employs elliptical and radial basis function neural networks and other models to learn the statistical distributions of process observables under normal conditions. Analytical models of failure modes can then be applied in combination with the neural network models to identify faults. Special methods can be applied to compensate for sensor failures, to produce real-time estimation of missing or failed sensors based on the correlations codified in the neural network.
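A compact sketch of the radial basis function network idea mentioned above: Gaussian bases centered by k-means and a linear least-squares readout, applied to a toy two-sensor "fault" data set. The data, hyperparameters, and decision threshold are all assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

class RBFNet:
    """Gaussian radial-basis-function network with a linear least-squares readout."""
    def __init__(self, n_centers=10, width=1.0):
        self.n_centers, self.width = n_centers, width

    def _phi(self, X):
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / (2.0 * self.width ** 2))

    def fit(self, X, y):
        km = KMeans(n_clusters=self.n_centers, n_init=10, random_state=0).fit(X)
        self.centers = km.cluster_centers_
        self.w, *_ = np.linalg.lstsq(self._phi(X), y, rcond=None)
        return self

    def predict(self, X):
        return self._phi(X) @ self.w

# Toy process data: a 'fault' when two sensor readings drift upward together
rng = np.random.default_rng(6)
X = rng.normal(size=(400, 2))
y = (X.sum(axis=1) > 1.5).astype(float)
model = RBFNet(n_centers=15, width=0.8).fit(X, y)
fault_flag = model.predict(X) > 0.5
```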
Robust stochastic optimization for reservoir operation
NASA Astrophysics Data System (ADS)
Pan, Limeng; Housh, Mashor; Liu, Pan; Cai, Ximing; Chen, Xin
2015-01-01
Optimal reservoir operation under uncertainty is a challenging engineering problem. Application of classic stochastic optimization methods to large-scale problems is limited due to computational difficulty. Moreover, classic stochastic methods assume that the estimated distribution function or the sample inflow data accurately represents the true probability distribution, which may be invalid and the performance of the algorithms may be undermined. In this study, we introduce a robust optimization (RO) approach, Iterative Linear Decision Rule (ILDR), so as to provide a tractable approximation for a multiperiod hydropower generation problem. The proposed approach extends the existing LDR method by accommodating nonlinear objective functions. It also provides users with the flexibility of choosing the accuracy of ILDR approximations by assigning a desired number of piecewise linear segments to each uncertainty. The performance of the ILDR is compared with benchmark policies including the sampling stochastic dynamic programming (SSDP) policy derived from historical data. The ILDR solves both the single and multireservoir systems efficiently. The single reservoir case study results show that the RO method is as good as SSDP when implemented on the original historical inflows and it outperforms SSDP policy when tested on generated inflows with the same mean and covariance matrix as those in history. For the multireservoir case study, which considers water supply in addition to power generation, numerical results show that the proposed approach performs as well as in the single reservoir case study in terms of optimal value and distributional robustness.
Formation and distribution of fragments in the spontaneous fission of 240Pu
NASA Astrophysics Data System (ADS)
Sadhukhan, Jhilam; Zhang, Chunli; Nazarewicz, Witold; Schunck, Nicolas
2017-12-01
Background: Fission is a fundamental decay mode of heavy atomic nuclei. The prevalent theoretical approach is based on mean-field theory and its extensions where fission is modeled as a large amplitude motion of a nucleus in a multidimensional collective space. One of the important observables characterizing fission is the charge and mass distribution of fission fragments. Purpose: The goal of this Rapid Communication is to better understand the structure of fission fragment distributions by investigating the competition between the static structure of the collective manifold and the stochastic dynamics. In particular, we study the characteristics of the tails of yield distributions, which correspond to very asymmetric fission into a very heavy and a very light fragment. Methods: We use the stochastic Langevin framework to simulate the nuclear evolution after the system tunnels through the multidimensional potential barrier. For a representative sample of different initial configurations along the outer turning-point line, we define effective fission paths by computing a large number of Langevin trajectories. We extract the relative contribution of each such path to the fragment distribution. We then use nucleon localization functions along effective fission pathways to analyze the characteristics of prefragments at prescission configurations. Results: We find that non-Newtonian Langevin trajectories, strongly impacted by the random force, produce the tails of the fission fragment distribution of 240Pu. The prefragments deduced from nucleon localizations are formed early and change little as the nucleus evolves towards scission. On the other hand, the system contains many nucleons that are not localized in the prefragments even near the scission point. Such nucleons are distributed rapidly at scission to form the final fragments. Fission prefragments extracted from direct integration of the density and from the localization functions typically differ by more than 30 nucleons even near scission. Conclusions: Our Rapid Communication shows that only theoretical models of fission that account for some form of stochastic dynamics can give an accurate description of the structure of fragment distributions. In particular, it should be nearly impossible to predict the tails of these distributions within the standard formulation of time-dependent density-functional theory. At the same time, the large number of nonlocalized nucleons during fission suggests that adiabatic approaches where the interplay between intrinsic excitations and collective dynamics is neglected are ill suited to describe fission fragment properties, in particular, their excitation energy.
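A schematic illustration, not the paper's multidimensional collective-space calculation: the sketch below propagates overdamped Langevin trajectories in a single asymmetry-like coordinate and histograms the endpoints, showing how the random force populates the tails of a yield-like distribution. The potential, friction, temperature, and step sizes are all invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

def potential_grad(q):
    """dV/dq for a schematic tilted double well V = q^4 - 0.6 q^2 - 0.3 q."""
    return 4.0 * q ** 3 - 1.2 * q - 0.3

def langevin_endpoints(n_traj=20000, n_steps=2000, dt=1e-3, temp=0.15, gamma=1.0):
    """Overdamped Euler-Maruyama: gamma dq = -V'(q) dt + sqrt(2 gamma T) dW."""
    q = np.zeros(n_traj)                   # all trajectories start at q = 0
    kick = np.sqrt(2.0 * gamma * temp * dt)
    for _ in range(n_steps):
        q += (-potential_grad(q) / gamma) * dt + kick * rng.standard_normal(n_traj)
    return q

q_final = langevin_endpoints()
yields, edges = np.histogram(q_final, bins=80, density=True)  # 'yield' vs asymmetry
```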
Geostatistical Interpolation of Particle-Size Curves in Heterogeneous Aquifers
NASA Astrophysics Data System (ADS)
Guadagnini, A.; Menafoglio, A.; Secchi, P.
2013-12-01
We address the problem of predicting the spatial field of particle-size curves (PSCs) from measurements associated with soil samples collected at a discrete set of locations within an aquifer system. Proper estimates of the full PSC are relevant to applications related to groundwater hydrology, soil science and geochemistry and aimed at modeling physical and chemical processes occurring in heterogeneous earth systems. Hence, we focus on providing kriging estimates of the entire PSC at unsampled locations. To this end, we treat particle-size curves as cumulative distribution functions, model their densities as functional compositional data and analyze them by embedding these into the Hilbert space of compositional functions endowed with the Aitchison geometry. On this basis, we develop a new geostatistical methodology for the analysis of spatially dependent functional compositional data. Our functional compositional kriging (FCK) approach allows providing predictions at unsampled location of the entire particle-size curve, together with a quantification of the associated uncertainty, by fully exploiting both the functional form of the data and their compositional nature. This is a key advantage of our approach with respect to traditional methodologies, which treat only a set of selected features (e.g., quantiles) of PSCs. Embedding the full PSC into a geostatistical analysis enables one to provide a complete characterization of the spatial distribution of lithotypes in a reservoir, eventually leading to improved predictions of soil hydraulic attributes through pedotransfer functions as well as of soil geochemical parameters which are relevant in sorption/desorption and cation exchange processes. We test our new method on PSCs sampled along a borehole located within an alluvial aquifer near the city of Tuebingen, Germany. The quality of FCK predictions is assessed through leave-one-out cross-validation. A comparison between hydraulic conductivity estimates obtained via FCK approach and those predicted by classical kriging of effective particle diameters (i.e., quantiles of the PSCs) is finally performed.
NASA Astrophysics Data System (ADS)
Lee, Haksu; Seo, Dong-Jun; Noh, Seong Jin
2016-11-01
This paper presents a simple yet effective weakly-constrained (WC) data assimilation (DA) approach for hydrologic models which accounts for model structural inadequacies associated with rainfall-runoff transformation processes. Compared to strongly-constrained (SC) DA, WC DA adjusts the control variables less while producing similarly or more accurate analysis; hence the adjusted model states are dynamically more consistent with those of the base model. The inadequacy of a rainfall-runoff model was modeled as an additive error to runoff components prior to routing and penalized in the objective function. Two example modeling applications, distributed and lumped, were carried out to investigate the effects of the WC DA approach on DA results. For distributed modeling, the distributed Sacramento Soil Moisture Accounting (SAC-SMA) model was applied to the TIFM7 Basin in Missouri, USA. For lumped modeling, the lumped SAC-SMA model was applied to nineteen basins in Texas. In both cases, the variational DA (VAR) technique was used to assimilate discharge data at the basin outlet. For distributed SAC-SMA, spatially homogeneous error modeling yielded updated states that are spatially much more similar to the a priori states, as quantified by the Earth Mover's Distance (EMD), than spatially heterogeneous error modeling, by up to ∼10 times. DA experiments using both lumped and distributed SAC-SMA modeling indicated that assimilating outlet flow with the WC approach generally produces a smaller mean absolute difference as well as a higher correlation between the a priori and the updated states than the SC approach, while producing similar or smaller root mean square error of streamflow analysis and prediction. Large differences were found, in both the lumped and distributed cases, between the updated and the a priori lower zone tension and primary free water contents for both WC and SC approaches, indicating possible model structural deficiency in describing low flows or evapotranspiration processes for the catchments studied. Also presented are the findings from this study and key issues relevant to WC DA approaches using hydrologic models.
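A minimal sketch of the weakly-constrained idea on a toy linear-reservoir model (not SAC-SMA): an additive error on the runoff component is treated as a control variable and penalized in the objective together with the observation misfit. The forcing, observations, rate constant, and variances are all invented.

```python
import numpy as np
from scipy.optimize import minimize

P = np.array([5.0, 0.0, 12.0, 3.0, 0.0, 0.0, 8.0, 1.0])     # rainfall (assumed)
q_obs = np.array([1.2, 1.0, 2.4, 2.1, 1.6, 1.3, 2.0, 1.7])  # observed discharge
k, sig_obs, sig_err = 0.2, 0.2, 0.5                          # assumed constants

def simulate(s0, err):
    """Linear bucket with an additive structural-error term on the runoff."""
    s, out = s0, []
    for t in range(P.size):
        runoff = k * s + err[t]
        s = s + P[t] - runoff
        out.append(runoff)
    return np.array(out)

def cost(z):
    s0, err = z[0], z[1:]
    r = simulate(s0, err)
    # weak constraint: observation misfit plus a penalty on the error term
    return ((r - q_obs) ** 2).sum() / sig_obs ** 2 + (err ** 2).sum() / sig_err ** 2

z0 = np.concatenate([[5.0], np.zeros(P.size)])
res = minimize(cost, z0, method="L-BFGS-B")
s0_hat, err_hat = res.x[0], res.x[1:]     # initial state and additive errors
```

Making sig_err very small recovers the strongly-constrained limit, in which the error term is forced toward zero and only the state is adjusted.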
Exploiting the functional and taxonomic structure of genomic data by probabilistic topic modeling.
Chen, Xin; Hu, Xiaohua; Lim, Tze Y; Shen, Xiajiong; Park, E K; Rosen, Gail L
2012-01-01
In this paper, we present a method that enables both homology-based and composition-based approaches to further study the functional core (i.e., the microbial core and the gene core, respectively). In the proposed method, the identification of major functionality groups is achieved by generative topic modeling, which is able to extract useful information from unlabeled data. We first show that a generative topic model can be used to model the taxon abundance information obtained by the homology-based approach and to study the microbial core. The model considers each sample as a “document,” which has a mixture of functional groups, while each functional group (also known as a “latent topic”) is a weighted mixture of species. Therefore, estimating the generative topic model for taxon abundance data will uncover the distribution over latent functions (latent topics) in each sample. Second, we show that a generative topic model can also be used to study the genome-level composition of “N-mer” features (DNA subreads obtained by composition-based approaches). Here the model considers each genome as a mixture of latent genetic patterns (latent topics), while each pattern is a weighted mixture of the “N-mer” features; thus the existence of core genomes can be indicated by a set of common N-mer features. After studying the mutual information between latent topics and gene regions, we provide an explanation of the functional roles of the uncovered latent genetic patterns. The experimental results demonstrate the effectiveness of the proposed method.
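Since the abstract does not include an implementation, the following is a minimal sketch of the core idea, fitting a latent Dirichlet allocation model to a sample-by-taxon abundance matrix with scikit-learn; the library choice, the synthetic counts, and all parameter values are illustrative assumptions rather than the authors' code.

```python
# Sketch: latent functional groups from taxon abundance via LDA.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
# Rows = samples ("documents"), columns = taxa (or N-mer features);
# entries = abundance counts. Synthetic data stands in for real profiles.
abundance = rng.poisson(lam=3.0, size=(20, 50))

lda = LatentDirichletAllocation(n_components=4, random_state=0)
theta = lda.fit_transform(abundance)   # theta[d, k]: functional-group mixture of sample d
# phi[k, w]: normalized weight of taxon/feature w in latent group k
phi = lda.components_ / lda.components_.sum(axis=1, keepdims=True)

top_features = np.argsort(phi, axis=1)[:, -5:]  # 5 dominant features per group
print(theta.shape, top_features)
```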
Propagation of eigenmodes and transfer functions in waveguide WDM structures
NASA Astrophysics Data System (ADS)
Mashkov, Vladimir A.; Francoeur, S.; Geuss, U.; Neiser, K.; Temkin, Henryk
1998-02-01
A method of propagation functions and transfer amplitudes suitable for the design of integrated optical circuits is presented. The method is based on a vectorial formulation of electrodynamics: the distributions and propagation of electromagnetic fields in optical circuits are described by equivalent surface sources. This approach permits dividing complex optical waveguide structures into sets of primitive blocks and calculating the transfer function and the transfer amplitude for each block separately. The transfer amplitude of the entire optical system is represented by a convolution of the transfer amplitudes of its primitive blocks. The eigenvalues and eigenfunctions of an arbitrary waveguide structure are obtained in the WKB approximation and compared with other methods. The general approach is illustrated with transfer amplitude calculations for Dragone's star coupler and router.
Hasan, Md. Zobaer; Kamil, Anton Abdulbasah; Mustafa, Adli; Baten, Md. Azizul
2012-01-01
The stock market is considered essential for economic growth and is expected to contribute to improved productivity. An efficient pricing mechanism of the stock market can be a driving force for channeling savings into profitable investments and thus facilitating optimal allocation of capital. This study investigated the technical efficiency of selected groups of companies of the Bangladesh stock market, namely the Dhaka Stock Exchange (DSE), using the stochastic frontier production function approach. For this, the authors considered a Cobb-Douglas stochastic frontier in which the technical inefficiency effects are defined by a model with two distributional assumptions. Truncated-normal and half-normal distributions were used in the model, and both time-variant and time-invariant inefficiency effects were estimated. The results reveal that technical efficiency decreased gradually over the reference period and that the truncated-normal distribution is preferable to the half-normal distribution for the technical inefficiency effects. In the time-varying setting, the value of technical efficiency was high for the investment group and low for the bank group compared with the other groups in the DSE market for both distributions, whereas in the time-invariant setting it was high for the investment group but low for the ceramic group for both distributions.
A Metastatistical Approach to Satellite Estimates of Extreme Rainfall Events
NASA Astrophysics Data System (ADS)
Zorzetto, E.; Marani, M.
2017-12-01
The estimation of the average recurrence interval of intense rainfall events is a central issue for both hydrologic modeling and engineering design. These estimates require inference of the properties of the right tail of the statistical distribution of precipitation, a task often performed using the Generalized Extreme Value (GEV) distribution, estimated either from a sample of annual maxima (AM) or with a peaks-over-threshold (POT) approach. However, these approaches require long and homogeneous rainfall records, which often are not available, especially in the case of remotely sensed rainfall datasets. Here we use an alternative approach, tailored to remotely sensed rainfall estimates, based on the metastatistical extreme value distribution (MEVD), which produces estimates of rainfall extremes from the probability distribution function (pdf) of all measured 'ordinary' rainfall events. This methodology also accounts for the interannual variations observed in the pdf of daily rainfall by integrating over the sample space of its random parameters. We illustrate the application of this framework to the TRMM Multi-satellite Precipitation Analysis rainfall dataset, where the MEVD optimally exploits the relatively short records of satellite-sensed rainfall while taking full advantage of their high spatial resolution and quasi-global coverage. Accuracy of TRMM precipitation estimates and scale issues are investigated for a case study located in the Little Washita watershed, Oklahoma, using a dense network of rain gauges for independent ground validation. The methodology contributes to our understanding of the risk of extreme rainfall events, as it allows (i) an optimal use of the TRMM datasets in estimating the tail of the probability distribution of daily rainfall, and (ii) a global mapping of daily rainfall extremes and distributional tail properties, bridging the existing gaps in rain gauge networks.
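The MEVD construction described above is compact enough to sketch. Assuming, as is common in the MEVD literature, Weibull-distributed daily 'ordinary' events, the annual-maximum CDF is the average over years of F_j(x)^{n_j}, where F_j is the fit to year j and n_j its number of wet days; the synthetic data, the wet-day threshold, and the use of scipy below are assumptions for illustration only.

```python
# Sketch of the metastatistical extreme value distribution (MEVD).
import numpy as np
from scipy.stats import weibull_min

def mevd_cdf(x, yearly_records, wet_threshold=1.0):
    """CDF of the annual maximum under the MEVD.

    yearly_records: list of 1-D arrays, one array of daily rainfall per year.
    """
    terms = []
    for daily in yearly_records:
        wet = daily[daily > wet_threshold]
        n_j = len(wet)                               # number of ordinary events
        c, _, scale = weibull_min.fit(wet, floc=0)   # per-year Weibull fit
        terms.append(weibull_min.cdf(x, c, loc=0, scale=scale) ** n_j)
    return np.mean(terms, axis=0)                    # average over yearly parameters

# Example: return-period curve from synthetic records
rng = np.random.default_rng(1)
years = [rng.weibull(0.8, 365) * 10 for _ in range(15)]
x = np.linspace(20, 150, 200)
T = 1.0 / (1.0 - mevd_cdf(x, years))                 # return period in years
print(T[::50])
```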
Hathaway, R.M.; McNellis, J.M.
1989-01-01
Investigating the occurrence, quantity, quality, distribution, and movement of the Nation's water resources is the principal mission of the U.S. Geological Survey's Water Resources Division. Reports of these investigations are published and available to the public. To accomplish this mission, the Division requires substantial computer technology to process, store, and analyze data from more than 57,000 hydrologic sites. The Division's computer resources are organized through the Distributed Information System Program Office, which manages the nationwide network of computers. The contract that provides the major computer components for the Water Resources Division's Distributed Information System expires in 1991. Five work groups were organized to collect the information needed to procure a new generation of computer systems for the U.S. Geological Survey, Water Resources Division. Each group was assigned a major Division activity and asked to describe its functional requirements of computer systems for the next decade. The work groups and major activities are: (1) hydrologic information; (2) hydrologic applications; (3) geographic information systems; (4) reports and electronic publishing; and (5) administrative. The work groups identified 42 functions and described their functional requirements for 1988, 1992, and 1997. A few new functions, such as Decision Support Systems and Executive Information Systems, were identified, but most are the same as those performed today. Although the number of functions will remain about the same, steady growth in the size, complexity, and frequency of many functions is predicted for the next decade. No compensating increase in the Division's staff is anticipated during this period. To handle the increased workload and perform these functions, new approaches will be developed that use advanced computer technology. The advanced technology is required in a unified, tightly coupled system that will support all functions simultaneously. The new approaches and expanded use of computers will require substantial increases in the quantity and sophistication of the Division's computer resources. The requirements presented in this report will be used to develop technical specifications that describe the computer resources needed during the 1990s.
Laloš, Jernej; Babnik, Aleš; Možina, Janez; Požar, Tomaž
2016-03-01
The near-field, surface-displacement waveforms in plates are modeled using the interwoven concepts of Green's function formalism and a streamlined Huygens' principle. The Green's functions resemble the building blocks of the sought displacement waveform, superimposed and weighted according to the simplified source distribution. The approach incorporates an arbitrary circular spatial source distribution and an arbitrary circular spatial sensitivity in the area probed by the sensor. The displacement histories for uniform, Gaussian, and annular normal-force source distributions and a uniform spatial sensor sensitivity are calculated, and the corresponding weight distributions are compared. To demonstrate the applicability of the developed scheme, measurements of laser ultrasound induced solely by radiation pressure are compared with the calculated waveforms. The ultrasound is induced by laser-pulse reflection from the mirror surface of a glass plate. The measurements show excellent agreement not only with respect to the various wave arrivals but also in the shape of each arrival, which depends on the beam profile of the excitation laser pulse and its corresponding spatial normal-force distribution.
A Bayesian kriging approach for blending satellite and ground precipitation observations
Verdin, Andrew P.; Rajagopalan, Balaji; Kleiber, William; Funk, Christopher C.
2015-01-01
Drought and flood management practices require accurate estimates of precipitation. Gauge observations, however, are often sparse in regions with complicated terrain, clustered in valleys, and of poor quality; consequently, the spatial extent of wet events is poorly represented. Satellite-derived precipitation data are an attractive alternative, though they tend to underestimate the magnitude of wet events due to their dependence on retrieval algorithms and the indirect relationship between satellite infrared observations and precipitation intensities. Here we offer a Bayesian kriging approach for blending precipitation gauge data and the Climate Hazards Group Infrared Precipitation satellite-derived precipitation estimates for Central America, Colombia, and Venezuela. First, the gauge observations are modeled as a linear function of the satellite-derived estimates and any number of other variables; for this research we include elevation. Prior distributions are defined for all model parameters, and the posterior distributions are obtained simultaneously via Markov chain Monte Carlo sampling. The posterior distributions of these parameters are required for spatial estimation and are therefore obtained before the spatial kriging model is implemented. The linear framework is then applied with model parameters sampled from their posterior distributions, and the residuals of the linear model are subjected to a spatial kriging model. Consequently, the posterior distributions and uncertainties of the blended precipitation estimates are obtained. We demonstrate this method by applying it to pentadal and monthly total precipitation fields during 2009. The model's performance and its inherent ability to capture wet events are investigated. We show that this blending method significantly improves upon the satellite-derived estimates and is also competitive in its ability to represent wet events. This procedure also provides a means to estimate a full conditional distribution of the “true” observed precipitation value at each grid cell.
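A stripped-down, point-estimate sketch of the two-stage blending logic follows: regress gauge values on satellite estimates and elevation, then krige the residuals onto the grid. The paper obtains all parameters via MCMC; the fixed exponential covariance, its range, and the simple-kriging step below are simplifying assumptions, not the authors' implementation.

```python
# Sketch: linear trend on satellite + elevation, plus kriged residuals.
import numpy as np

def blend(gauge, sat_at_gauge, elev_at_gauge, coords, sat_grid, elev_grid,
          grid_coords, range_km=100.0, nugget=1e-6):
    # Stage 1: linear model at the gauge locations (point estimates of beta)
    X = np.column_stack([np.ones_like(sat_at_gauge), sat_at_gauge, elev_at_gauge])
    beta, *_ = np.linalg.lstsq(X, gauge, rcond=None)
    resid = gauge - X @ beta

    def cov(a, b):                                  # exponential covariance
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return np.exp(-d / range_km)

    # Stage 2: simple kriging of the residuals onto the grid
    C = cov(coords, coords) + nugget * np.eye(len(coords))
    w = np.linalg.solve(C, cov(coords, grid_coords))
    trend = beta[0] + beta[1] * sat_grid + beta[2] * elev_grid
    return trend + w.T @ resid                      # blended field on the grid
```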
Statistical bias correction modelling for seasonal rainfall forecast for the case of Bali island
NASA Astrophysics Data System (ADS)
Lealdi, D.; Nurdiati, S.; Sopaheluwakan, A.
2018-04-01
Rainfall is an element of climate that is highly influential for the agricultural sector. Rain pattern and distribution largely determine the sustainability of agricultural activities, so information on rainfall is very useful for the agricultural sector and for farmers in anticipating possible extreme events, which often cause failures of agricultural production. This research aims to identify the biases in the seasonal rainfall forecast products of the ECMWF (European Centre for Medium-Range Weather Forecasts) and to build a transfer function that corrects the distribution biases, yielding a new prediction model based on a quantile mapping approach. We apply this approach to the case of Bali Island and find that correcting the systematic biases of the model gives clearly better results than using the raw forecasts. In general, the bias correction performs better during the rainy season than during the dry season.
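Quantile mapping itself is compact enough to sketch. The version below is a plain empirical mapping, matching each forecast value's quantile in the model climatology to the same quantile of the observed climatology; whether the study used an empirical or parametric variant, or monthly stratification, is not stated, so this is an illustrative assumption.

```python
# Sketch: empirical quantile mapping for bias correction.
import numpy as np

def quantile_map(forecast, model_clim, obs_clim):
    # Quantile of each forecast value within the model climatology...
    q = np.searchsorted(np.sort(model_clim), forecast) / len(model_clim)
    # ...mapped onto the observed climatology at the same quantile.
    return np.quantile(obs_clim, np.clip(q, 0.0, 1.0))

rng = np.random.default_rng(2)
obs = rng.gamma(2.0, 8.0, 3000)           # "observed" daily rainfall
model = rng.gamma(2.0, 6.0, 3000) + 2.0   # biased model climatology
print(quantile_map(np.array([5.0, 20.0, 60.0]), model, obs))
```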
Φ⁴ kinks: Statistical mechanics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Habib, S.
1995-12-31
Some recent investigations of the thermal equilibrium properties of kinks in a (1+1)-dimensional, classical φ⁴ field theory are reviewed. The distribution function, kink density, correlation function, and certain thermodynamic quantities were studied both theoretically and via large-scale simulations. A simple double-Gaussian variational approach within the transfer operator formalism was shown to give good results in the intermediate temperature range where the dilute gas theory is known to fail.
A new solution-adaptive grid generation method for transonic airfoil flow calculations
NASA Technical Reports Server (NTRS)
Nakamura, S.; Holst, T. L.
1981-01-01
The clustering algorithm is controlled by a second-order, ordinary differential equation which uses the airfoil surface density gradient as a forcing function. The solution to this differential equation produces a surface grid distribution which is automatically clustered in regions with large gradients. The interior grid points are established from this surface distribution by using an interpolation scheme which is fast and retains the desirable properties of the original grid generated from the standard elliptic equation approach.
Bayesian approach to non-Gaussian field statistics for diffusive broadband terahertz pulses.
Pearce, Jeremy; Jian, Zhongping; Mittleman, Daniel M
2005-11-01
We develop a closed-form expression for the probability distribution function for the field components of a diffusive broadband wave propagating through a random medium. We consider each spectral component to provide an individual observation of a random variable, the configurationally averaged spectral intensity. Since the intensity determines the variance of the field distribution at each frequency, this random variable serves as the Bayesian prior that determines the form of the non-Gaussian field statistics. This model agrees well with experimental results.
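In generic notation (the paper's exact symbols are not reproduced here), the compounding described above can be written as a conditionally Gaussian field whose variance is set by the configurationally averaged spectral intensity, which plays the role of the Bayesian prior:

```latex
% Illustrative notation, assumed rather than quoted from the paper:
% E is a field component, I the configurationally averaged spectral intensity.
p(E) \;=\; \int_{0}^{\infty} \mathcal{N}\!\left(E;\, 0,\, \sigma^{2}(I)\right)\, p(I)\, \mathrm{d}I,
\qquad \sigma^{2}(I) \propto I .
```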
NASA Technical Reports Server (NTRS)
Kunze, M. E.
1985-01-01
A systematic investigation was undertaken to characterize population shifts that occur in cultured human embryonic kidney cells as a function of passage number in vitro after original explantation. This approach to cell population shift analysis follows the suggestion of Mehreshi, Klein and Revesz that perturbed cell populations can be characterized by electrophoretic mobility distributions if they contain subpopulations with different electrophoretic mobilities. It was shown that this is the case with early passage cultured human embryo cells.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang Haiyan; Department of Mathematics and Statistics, University of North Carolina at Charlotte, Charlotte, NC 28223-0001; Cai Wei
2010-06-20
In this paper, we conduct a study of quantum transport models for a two-dimensional nano-size double gate (DG) MOSFET using two approaches: the non-equilibrium Green's function (NEGF) and the Wigner distribution. Both methods are implemented in the framework of the mode space methodology, where the electron confinements below the gates are pre-calculated to produce subbands along the vertical direction of the device, while the transport along the horizontal channel direction is described by either approach. Each approach handles the open quantum system along the transport direction in a different manner. The NEGF treats the open boundaries with a boundary self-energy defined by a Dirichlet-to-Neumann mapping, which ensures non-reflection at the device boundaries for electron waves leaving the quantum device active region. The Wigner equation method, in contrast, imposes an inflow boundary treatment for the Wigner distribution, which ensures non-reflection at the boundaries for free electron waves entering the device active region. In both cases the space-charge effect is accounted for by self-consistent coupling with a Poisson equation. Our goals are to study how the treatment of the device boundaries in the two transport models affects the current calculations, and to investigate the performance of both approaches in modeling the DG-MOSFET. Numerical results show mostly consistent quantum transport characteristics of the DG-MOSFET for the two methods, though with a higher transport current for the Wigner equation method, and also provide the dependence of the current-voltage (I-V) curve on physical parameters such as the gate voltage and the oxide thickness.
Modeling continuous covariates with a "spike" at zero: Bivariate approaches.
Jenkner, Carolin; Lorenz, Eva; Becher, Heiko; Sauerbrei, Willi
2016-07-01
In epidemiology and clinical research, predictors often take the value zero for a large proportion of observations while the distribution of the remaining observations is continuous. These predictors are called variables with a spike at zero; examples include smoking or alcohol consumption. Recently, an extension of the fractional polynomial (FP) procedure, a technique for modeling nonlinear relationships, was proposed to deal with such situations. To indicate whether or not a value is zero, a binary variable is added to the model. In a two-stage procedure, called FP-spike, the necessity of the binary variable and/or the continuous FP function for the positive part is assessed to obtain a suitable fit. In univariate analyses, the FP-spike procedure usually leads to functional relationships that are easy to interpret. This paper introduces four approaches for dealing with two variables with a spike at zero (SAZ). The methods depend on the bivariate distribution of zero and nonzero values. Bi-Sep is the simplest of the four bivariate approaches: it uses the univariate FP-spike procedure separately for the two SAZ variables. In Bi-D3, Bi-D1, and Bi-Sub, the proportions of zeros in both variables are considered simultaneously in the binary indicators; these strategies can therefore account for correlated variables. The methods can be used for arbitrary distributions of the covariates. For illustration and comparison of the results, data from a case-control study on laryngeal cancer, with smoking and alcohol intake as the two SAZ variables, are considered. In addition, a possible extension to three or more SAZ variables is outlined, and a combination of log-linear models for the analysis of the correlation with the bivariate approaches is proposed.
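A rough sketch of the Bi-Sep idea, the simplest of the four approaches, is given below: each spike-at-zero variable contributes a zero indicator plus fractional-polynomial terms on its positive part, here in a logistic regression as in a case-control setting. The fixed FP powers and the synthetic data are illustrative assumptions; the actual FP-spike procedure selects powers from a candidate set and tests which terms are needed.

```python
# Sketch: design matrix for two spike-at-zero (SAZ) variables, Bi-Sep style.
import numpy as np
import statsmodels.api as sm

def saz_design(x, powers=(0.5, 1.0)):
    ind = (x > 0).astype(float)          # binary "nonzero" indicator
    xp = np.where(x > 0, x, 0.0)         # positive part
    cols = [ind] + [xp ** p for p in powers]  # FP terms on the positive part
    return np.column_stack(cols)

rng = np.random.default_rng(3)
smoking = np.where(rng.random(500) < 0.4, 0.0, rng.gamma(2, 5, 500))
alcohol = np.where(rng.random(500) < 0.3, 0.0, rng.gamma(2, 10, 500))
X = sm.add_constant(np.hstack([saz_design(smoking), saz_design(alcohol)]))

eta = -2 + 0.03 * smoking + 0.02 * alcohol
y = (rng.random(500) < 1 / (1 + np.exp(-eta))).astype(float)  # synthetic status
fit = sm.Logit(y, X).fit(disp=0)
print(fit.params)
```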
Landau damping of Langmuir twisted waves with kappa distributed electrons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arshad, Kashif, E-mail: kashif.arshad.butt@gmail.com; Aman-ur-Rehman; Mahmood, Shahzad
2015-11-15
The kinetic theory of Landau damping of Langmuir twisted modes is investigated in the presence of the orbital angular momentum of the helical (twisted) electric field in plasmas with kappa-distributed electrons. The perturbed distribution function and helical electric field are decomposed in terms of Laguerre-Gaussian mode functions defined in cylindrical geometry. The Vlasov-Poisson equation is obtained and solved analytically to obtain the weak damping rates of the Langmuir twisted waves in a nonthermal plasma. The strong damping of the Langmuir twisted waves at wavelengths approaching the Debye length is also obtained using an exact numerical method and is illustrated graphically. The damping rates of the planar Langmuir waves are found to be larger than those of the twisted Langmuir waves, which is opposite to the behavior depicted in Fig. 3 of J. T. Mendonça [Phys. Plasmas 19, 112113 (2012)].
Pion distribution amplitude from Euclidean correlation functions
NASA Astrophysics Data System (ADS)
Bali, Gunnar S.; Braun, Vladimir M.; Gläßle, Benjamin; Göckeler, Meinulf; Gruber, Michael; Hutzler, Fabian; Korcyl, Piotr; Lang, Bernhard; Schäfer, Andreas; Wein, Philipp; Zhang, Jian-Hui
2018-03-01
Following the proposal in Braun and Müller (Eur Phys J C55:349, 2008), we study the feasibility of calculating the pion distribution amplitude (DA) from suitably chosen Euclidean correlation functions at large momentum. In our lattice study we employ the novel momentum smearing technique (Bali et al. Phys Rev D93:094515, 2016; Bali et al. Phys Lett B774:91, 2017). This approach is complementary to calculations of the lowest moments of the DA using the Wilson operator product expansion, and it avoids mixing with lower-dimensional local operators on the lattice. The theoretical status of this method is similar to that of quasi-distributions (Ji. Phys Rev Lett 110:262002, 2013), which have recently been used (Zhang et al. Phys Rev D95:094514, 2017) to estimate the twist-two pion DA. The similarities and differences between the two techniques are highlighted.
NASA Astrophysics Data System (ADS)
Wattanasakulpong, Nuttawit; Chaikittiratana, Arisara; Pornpeerakeat, Sacharuck
2018-06-01
In this paper, vibration analysis of functionally graded porous beams is carried out using the third-order shear deformation theory. The beams have uniform and non-uniform porosity distributions across their thickness, and both ends are supported by rotational and translational springs. The material properties of the beams, such as the elastic moduli and mass density, can be related to the porosity and mass coefficient utilizing the typical mechanical features of open-cell metal foams. The Chebyshev collocation method is applied to solve the governing equations derived from Hamilton's principle and to obtain accurate natural frequencies for the vibration of beams with various general and elastic boundary conditions. The numerical experiments reveal that the natural frequencies of beams with asymmetric and non-uniform porosity distributions are higher than those of beams with uniform and symmetric porosity distributions.
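As background on the discretization, a generic Chebyshev collocation building block (a Trefethen-style differentiation matrix) is sketched below; the paper's beam equations, spring boundary conditions, and eigenvalue solver are not reproduced, so this shows only the generic ingredient such solvers share.

```python
# Sketch: Chebyshev collocation points and first-derivative matrix on [-1, 1].
import numpy as np

def cheb(n):
    """Chebyshev points x and differentiation matrix D (Trefethen's recipe)."""
    if n == 0:
        return np.array([1.0]), np.zeros((1, 1))
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))          # negative-sum trick for the diagonal
    return x, D

# Higher derivatives follow by matrix powers (e.g. D @ D @ D @ D for a beam
# operator); the discretized eigenvalue problem then yields natural frequencies.
x, D = cheb(16)
print(np.max(np.abs(D @ np.sin(np.pi * x) - np.pi * np.cos(np.pi * x))))
```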
Revealing nonclassicality beyond Gaussian states via a single marginal distribution
Park, Jiyong; Lu, Yao; Lee, Jaehak; Shen, Yangchao; Zhang, Kuan; Zhang, Shuaining; Zubairy, Muhammad Suhail; Kim, Kihwan; Nha, Hyunchul
2017-01-01
A standard method to obtain information on a quantum state is to measure marginal distributions along many different axes in phase space, which forms a basis of quantum-state tomography. We theoretically propose and experimentally demonstrate a general framework to manifest nonclassicality by observing a single marginal distribution only, which provides a unique insight into nonclassicality and a practical applicability to various quantum systems. Our approach maps the 1D marginal distribution into a factorized 2D distribution by multiplying the measured distribution or the vacuum-state distribution along an orthogonal axis. The resulting fictitious Wigner function becomes unphysical only for a nonclassical state; thus the negativity of the corresponding density operator provides evidence of nonclassicality. Furthermore, the negativity measured this way yields a lower bound for entanglement potential, a measure of entanglement generated using a nonclassical state with a beam-splitter setting that is a prototypical model to produce continuous-variable (CV) entangled states. Our approach detects both Gaussian and non-Gaussian nonclassical states in a reliable and efficient manner. Remarkably, it works regardless of measurement axis for all non-Gaussian states in finite-dimensional Fock space of any size, also extending to infinite-dimensional states of experimental relevance for CV quantum informatics. We experimentally illustrate the power of our criterion for motional states of a trapped ion, confirming their nonclassicality in a measurement-axis-independent manner. We also address an extension of our approach combined with phase-shift operations, which leads to a stronger test of nonclassicality, that is, detection of genuine non-Gaussianity under a CV measurement.
Optimal design and use of retry in fault tolerant real-time computer systems
NASA Technical Reports Server (NTRS)
Lee, Y. H.; Shin, K. G.
1983-01-01
A new method to determine an optimal retry policy and to use retry for fault characterization is presented. An optimal retry policy for a given fault characteristic, which determines the maximum allowable retry durations that minimize the total task completion time, was derived. A combined fault characterization and retry decision, in which the characteristics of the fault are estimated simultaneously with the determination of the optimal retry policy, was then carried out. Two solution approaches were developed, one based on point estimation and the other on Bayes sequential decision. Maximum likelihood estimators are used for the first approach, and backward induction for testing hypotheses in the second. Numerical examples are presented in which all the durations associated with faults have monotone hazard functions, e.g., exponential, Weibull, and gamma distributions; these are standard distributions commonly used for the modeling and analysis of faults.
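To make the trade-off concrete, here is a toy model, not the paper's formulation: a fault is transient with probability p and clears after a random time R; retrying for a duration tau that elapses without the fault clearing costs a restart penalty. The exponential recovery time (a monotone-hazard example), the cost model, and all numbers are assumptions for demonstration only.

```python
# Sketch: grid search for a retry duration minimizing expected completion time.
import numpy as np
from scipy.stats import expon
from scipy.integrate import trapezoid

def expected_time(tau, p=0.7, mean_recovery=1.0, restart_cost=10.0):
    G = expon(scale=mean_recovery)        # recovery-time distribution
    t = np.linspace(0.0, tau, 1000)
    e_min = trapezoid(G.sf(t), t)         # E[min(R, tau)] = integral of survival
    wasted = p * e_min + (1 - p) * tau    # expected time spent retrying
    fail_prob = p * G.sf(tau) + (1 - p)   # retry did not clear the fault
    return wasted + fail_prob * restart_cost

taus = np.linspace(0.01, 5.0, 200)
best = taus[np.argmin([expected_time(t) for t in taus])]
print(f"optimal retry duration ~ {best:.2f}")
```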
Expert system technologies for Space Shuttle decision support: Two case studies
NASA Technical Reports Server (NTRS)
Ortiz, Christopher J.; Hasan, David A.
1994-01-01
This paper addresses the issue of integrating the C Language Integrated Production System (CLIPS) into distributed data acquisition environments. In particular, it presents preliminary results of some ongoing software development projects aimed at exploiting CLIPS technology in the new mission control center (MCC) being built at NASA Johnson Space Center. One interesting aspect of the control center is its distributed architecture; it consists of networked workstations which acquire and share data through the NASA/JSC-developed information sharing protocol (ISP). This paper outlines some approaches taken to integrate CLIPS and ISP in order to permit the development of intelligent data analysis applications which can be used in the MCC. Three approaches to CLIPS/ISP integration are discussed. The initial approach involves clearly separating CLIPS from ISP, using user-defined functions for gathering and sending data to and from a local storage buffer. Memory and performance drawbacks of this design are summarized. The second approach involves taking full advantage of CLIPS and the CLIPS Object-Oriented Language (COOL) by using objects to transmit data and state changes directly from ISP to COOL. Changes within the object slots eliminate the need for both a data structure and an external function call, thus taking advantage of the object-matching capabilities of CLIPS 6.0. The final approach is to treat CLIPS and ISP as peer toolkits: neither is embedded in the other; rather, the application interweaves calls to each directly in the application source code.
Liu, Zhongming; de Zwart, Jacco A.; Chang, Catie; Duan, Qi; van Gelderen, Peter; Duyn, Jeff H.
2014-01-01
Spontaneous activity in the human brain occurs in complex spatiotemporal patterns that may reflect functionally specialized neural networks. Here, we propose a subspace analysis method to elucidate large-scale networks by the joint analysis of electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) data. The new approach is based on the notion that the neuroelectrical activity underlying the fMRI signal may have EEG spectral features that report on regional neuronal dynamics and interregional interactions. Applying this approach to resting healthy adults, we indeed found characteristic spectral signatures in the EEG correlates of spontaneous fMRI signals at individual brain regions as well as the temporal synchronization among widely distributed regions. These spectral signatures not only allowed us to parcel the brain into clusters that resembled the brain's established functional subdivision, but also offered important clues for disentangling the involvement of individual regions in fMRI network activity.
SU-E-I-16: Scan Length Dependency of the Radial Dose Distribution in a Long Polyethylene Cylinder
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bakalyar, D; McKenney, S; Feng, W
Purpose: The area-averaged dose in the central plane of a long cylinder following a CT scan depends upon the radial dose distribution and the length of the scan. The ICRU/TG200 phantom, a polyethylene cylinder 30 cm in diameter and 60 cm long, was the subject of this study. The purpose was to develop an analytic function that could determine the dose for a scan length L at any point in the central plane of this phantom. Methods: Monte Carlo calculations were performed on a simulated ICRU/TG200 phantom under cylindrically symmetric irradiation conditions. Thus, the radial dose distribution function must be an even function that accounts for two competing effects: the direct beam makes its weakest contribution at the center, while the scatter begins abruptly at the outer radius and grows as the center is approached. The scatter contribution also increases with scan length, with the increase approaching its limiting value faster at the periphery than along the central axis. An analytic function was developed that fits the data and possesses these features. Results: Symmetry and continuity dictate a local extremum at the center, which is a minimum for the ICRU/TG200 phantom. The relative depth of the minimum decreases as the scan length grows, and an absolute maximum can occur between the center and the outer edge of the cylinder. As the scan length grows, the relative dip in the center decreases, so that for very long scan lengths the dose profile is relatively flat. Conclusion: An analytic function characterizes the radial and scan-length dependency of dose for long cylindrical phantoms. The function can be integrated, with the results expressed in closed form. One use for this is to help determine the average dose distribution over the central cylinder plane for any scan length.
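Although the abstract does not give the closed-form function, the symmetry argument alone constrains its shape; in generic notation (an assumed illustration, not the authors' specific fit):

```latex
% An even, smooth radial profile with scan-length dependence in the coefficients:
D(r, L) \;=\; \sum_{k \ge 0} c_{2k}(L)\, r^{2k},
\qquad \left.\frac{\partial D}{\partial r}\right|_{r=0} = 0,
% so the center is always a local extremum, a minimum when c_2(L) > 0.
```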
Horvath, Isabelle R; Chatterjee, Siddharth G
2018-05-01
The recently derived steady-state generalized Danckwerts age distribution is extended to unsteady-state conditions. For three different wind speeds used by researchers studying air-water heat exchange in the Heidelberg Aeolotron, calculations reveal that the distribution has a sharp peak during the initial moments but flattens out and acquires a bell-shaped character with process time, with the time taken to attain a steady-state profile being a strong and inverse function of wind speed. With increasing wind speed, the age distribution narrows significantly, its skewness decreases, and its peak becomes larger. The mean eddy renewal time increases linearly with process time initially but approaches a final steady-state value asymptotically, which decreases dramatically with increased wind speed. Using the distribution to analyse the transient absorption of a gas into a large body of liquid, assuming negligible gas-side mass-transfer resistance, estimates are made of the gas-absorption and dissolved-gas transfer coefficients for oxygen absorption in water at 25°C for the three different wind speeds. Under unsteady-state conditions, these two coefficients show an inverse behaviour, indicating a heightened accumulation of dissolved gas in the surface elements, especially during the initial moments of absorption. However, the two mass-transfer coefficients merge as the steady state is approached. Theoretical predictions of the steady-state mass-transfer coefficient, or transfer velocity, are in fair agreement (average absolute error of prediction = 18.1%) with experimental measurements for the nitrous oxide-water system at 20°C made in the Heidelberg Aeolotron.
Grassmann phase space methods for fermions. I. Mode theory
NASA Astrophysics Data System (ADS)
Dalton, B. J.; Jeffers, J.; Barnett, S. M.
2016-07-01
In both quantum optics and cold atom physics, the behaviour of bosonic photons and atoms is often treated using phase space methods, where mode annihilation and creation operators are represented by c-number phase space variables, with the density operator equivalent to a distribution function of these variables. The anti-commutation rules for fermion annihilation and creation operators suggest the possibility of using anti-commuting Grassmann variables to represent these operators. However, in spite of the seminal work by Cahill and Glauber and a few applications, the use of Grassmann phase space methods in quantum-atom optics to treat fermionic systems is rather rare, though fermion coherent states using Grassmann variables are widely used in particle physics. The theory of Grassmann phase space methods for fermions based on separate modes is developed, showing how the distribution function is defined and used to determine quantum correlation functions, Fock state populations and coherences via Grassmann phase space integrals, and how the Fokker-Planck equations are obtained and then converted into equivalent Ito equations for stochastic Grassmann variables. The fermion distribution function is an even Grassmann function, and is unique. The number of c-number Wiener increments involved is 2n², if there are n modes. The situation is somewhat different from the bosonic c-number case, where only 2n Wiener increments are involved; the sign of the drift term in the Ito equation is reversed, and the diffusion matrix in the Fokker-Planck equation is anti-symmetric rather than symmetric. The un-normalised B distribution is of particular importance for determining Fock state populations and coherences, and as pointed out by Plimak, Collett and Olsen, the drift vector in its Fokker-Planck equation depends only linearly on the Grassmann variables. Using this key feature we show how the Ito stochastic equations can be solved numerically for finite times in terms of c-number stochastic quantities. Averages of products of Grassmann stochastic variables at the initial time are also involved, but these are determined from the initial conditions for the quantum state. The detailed approach to the numerics is outlined, showing that (apart from standard issues in such numerics) numerical calculations for Grassmann phase space theories of fermion systems could be carried out without needing to represent Grassmann phase space variables on the computer, involving only processes using c-numbers. We compare our approach to that of Plimak, Collett and Olsen and show that the two approaches differ. As a simple test case we apply the B distribution theory and solve the Ito stochastic equations to demonstrate coupling between degenerate Cooper pairs in a four-mode fermionic system involving spin-conserving interactions between the spin-1/2 fermions, where the modes with momenta −k, +k, each associated with spin-up and spin-down states, are involved.
Optimization of removal function in computer controlled optical surfacing
NASA Astrophysics Data System (ADS)
Chen, Xi; Guo, Peiji; Ren, Jianfeng
2010-10-01
The technical principle of computer controlled optical surfacing (CCOS) and the common method of optimizing the removal function used in CCOS are introduced in this paper. A new optimization method, time-sharing synthesis of removal functions, is proposed to solve two problems encountered in the planet-motion and translation-rotation modes: the removal function being far from Gaussian in shape, and slow convergence of the removal function error. Time-sharing synthesis using six removal functions is discussed in detail. For a given region on the workpiece, six positions are selected as the centers of the removal function; the polishing tool, controlled by the executive system of CCOS, revolves around each center to complete a cycle in proper order. The overall removal function obtained by the time-sharing process is the ratio of the total material removal in the six cycles to the time duration of the six cycles, and depends on the arrangement and distribution of the six removal functions. Simulations of the synthesized overall removal functions under the two modes of motion, i.e., planet motion and translation-rotation, are performed, from which the optimized combination of tool parameters and distribution of the time-sharing synthesis removal functions is obtained. The evaluation function for the optimization is determined by an approaching factor, defined as the ratio of the material removal within the area of half of the polishing tool coverage from the polishing center to the total material removal within the full polishing tool coverage area. After optimization, the removal function obtained by time-sharing synthesis is closer to the ideal Gaussian removal function than those produced by the traditional methods. The time-sharing synthesis method thus provides an efficient way to increase the convergence speed of the surface error in CCOS for the fabrication of aspheric optical surfaces, and to reduce the intermediate- and high-frequency error.
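A minimal numerical sketch of the time-sharing synthesis rule follows: the overall removal function is the time-weighted sum of the single-cycle removal functions divided by the total time. The Gaussian-like single-cycle removal functions and the six centers on a unit circle are illustrative assumptions, not the paper's tool model.

```python
# Sketch: overall removal function from six time-shared polishing cycles.
import numpy as np

x, y = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))

def removal(cx, cy, width=1.0):
    """Removal rate of one polishing cycle centered at (cx, cy)."""
    return np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * width ** 2))

centers = [(np.cos(a), np.sin(a))
           for a in np.linspace(0, 2 * np.pi, 6, endpoint=False)]
times = np.ones(6)                        # dwell time of each cycle

# Overall removal function: total removal over six cycles / total time
overall = sum(t * removal(cx, cy)
              for t, (cx, cy) in zip(times, centers)) / times.sum()
print(overall.max())
```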
Cargo Logistics Airlift Systems Study (CLASS). Volume 2: Case study approach and results
NASA Technical Reports Server (NTRS)
Burby, R. J.; Kuhlman, W. H.
1978-01-01
Models of transportation mode decision making were developed. The user's view of the present and future air cargo systems is discussed. Issues summarized include: (1) organization of the distribution function; (2) mode choice decision making; (3) air freight system; and (4) the future of air freight.
ERIC Educational Resources Information Center
Van Hecke, Tanja
2011-01-01
This article presents the mathematical approach of the optimal strategy to win the "Release the prisoners" game and the integration of this analysis in a math class. Outline lesson plans at three different levels are given, where simulations are suggested as well as theoretical findings about the probability distribution function and its mean…
Discrete element method as an approach to model the wheat milling process
USDA-ARS's Scientific Manuscript database
It is a well-known phenomenon that break-release, particle size, and size distribution of wheat milling are functions of machine operational parameters and grain properties. Due to the non-uniformity of characteristics and properties of wheat kernels, the kernel physical and mechanical properties af...
The molecular analysis of drinking water microbial communities has focused primarily on 16S rRNA gene sequence analysis. Since this approach provides limited information on function potential of microbial communities, analysis of whole-metagenome pyrosequencing data was used to...
A distributed data base management system. [for Deep Space Network
NASA Technical Reports Server (NTRS)
Bryan, A. I.
1975-01-01
Major system design features of a distributed data management system for the NASA Deep Space Network (DSN), which is designed for continuous two-way deep space communications, are described. The reasons for which a distributed data base utilizing third-generation minicomputers is selected as the optimum approach for the DSN are threefold: (1) with a distributed master data base, valid data is available in real time to support DSN management activities at each location; (2) data base integrity is the responsibility of local management; and (3) the data acquisition/distribution and processing power of a third-generation computer enables it to function successfully as a data handler or as an on-line process controller. The concept of the distributed data base is discussed along with the software, data base integrity, and hardware used. The data analysis/update constraint is examined.
NASA Astrophysics Data System (ADS)
Oz, Alon; Hershkovitz, Shany; Tsur, Yoed
2014-11-01
In this contribution we present a novel approach to analyze impedance spectroscopy measurements of supercapacitors. Transforming the impedance data into frequency-dependent capacitance allows us to use Impedance Spectroscopy Genetic Programming (ISGP) in order to find the distribution function of relaxation times (DFRT) of the processes taking place in the tested device. Synthetic data was generated in order to demonstrate this technique and a model for supercapacitor ageing process has been obtained.
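The impedance-to-capacitance transform mentioned above is simple to state: C*(ω) = 1/(jω Z(ω)). A short sketch follows, with a synthetic series R-C impedance standing in for measured data; the ISGP step itself, which searches for the distribution function of relaxation times, is not reproduced here.

```python
# Sketch: converting impedance spectra to frequency-dependent capacitance.
import numpy as np

omega = np.logspace(-2, 4, 200)          # angular frequency, rad/s
R, C = 10.0, 1e-3
Z = R + 1.0 / (1j * omega * C)           # series R-C as a stand-in for data
C_complex = 1.0 / (1j * omega * Z)       # complex capacitance C*(omega)
print(C_complex.real[:3])                # real part relates to stored charge
```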
NASA Astrophysics Data System (ADS)
Menafoglio, A.; Guadagnini, A.; Secchi, P.
2016-08-01
We address the problem of stochastic simulation of soil particle-size curves (PSCs) in heterogeneous aquifer systems. Unlike traditional approaches that focus solely on a few selected features of PSCs (e.g., selected quantiles), our approach considers the entire particle-size curve and can optionally include conditioning on available data. We rely on our prior work to model PSCs as cumulative distribution functions and interpret their density functions as functional compositions. We thus approximate the latter through an expansion over an appropriate basis of functions. This enables us to (a) effectively deal with the data dimensionality and constraints and (b) develop a simulation method for PSCs based upon a suitable and well-defined projection procedure. The new theoretical framework allows representing and reproducing the complete information content embedded in PSC data. As a first field application, we demonstrate the quality of unconditional and conditional simulations obtained with our methodology by considering a set of particle-size curves collected within a shallow alluvial aquifer in the Neckar river valley, Germany.
NASA Astrophysics Data System (ADS)
Jasiulewicz-Kaczmarek, Małgorzata; Wyczółkowski, Ryszard; Gładysiak, Violetta
2017-12-01
Water distribution systems are one of the basic elements of the contemporary technical infrastructure of urban and rural areas. They are complex engineering systems composed of transmission networks and auxiliary equipment (e.g., controllers and checkouts), scattered territorially over a large area. From the operational point of view, the basic features of a water distribution system are functional variability, resulting from the need to adjust the system to temporary fluctuations in the demand for water, and territorial dispersion. The main research questions are: What external factors should be taken into account when developing an effective water distribution policy? Do the size and nature of the water distribution system significantly affect the exploitation policy implemented? These questions have shaped the objectives of the research and the method of its implementation.
NASA Technical Reports Server (NTRS)
Stefanick, M.; Jurdy, D. M.
1984-01-01
Statistical analyses are compared for two published hot spot data sets, one minimal set of 42 and another, larger set of 117, using three different approaches. First, the earth's surface is divided into 16 equal-area fractions and the observed distribution of hot spots among them is analyzed using chi-square tests. Second, cumulative distributions about the principal axes of the hot spot inertia tensor are used to describe the hot spot distribution. Finally, a hot spot density function is constructed for each of the two data sets. All three methods indicate that hot spots have a nonuniform distribution, even when statistical fluctuations are considered. To first order, hot spots are concentrated on one half of the earth's surface area; within that portion, the distribution is consistent with a uniform distribution. For neither data set are the observed hot spot densities explained solely by plate speed.
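The first of the three tests is easy to reproduce in outline: divide the sphere into 16 equal-area cells, count hot spots per cell, and apply a chi-square test against a uniform expectation. The synthetic coordinates below are placeholders for the published catalogs.

```python
# Sketch: chi-square test of hot spot counts in 16 equal-area cells.
import numpy as np
from scipy.stats import chisquare

rng = np.random.default_rng(4)
n = 117
lon = rng.uniform(-np.pi, np.pi, n)
sinlat = rng.uniform(-1.0, 1.0, n)       # uniform in sin(latitude) = uniform on sphere

# 4 longitude bands x 4 equal-area latitude bands = 16 equal-area cells
i = np.digitize(lon, np.linspace(-np.pi, np.pi, 5)) - 1
j = np.digitize(sinlat, np.linspace(-1.0, 1.0, 5)) - 1
counts = np.bincount(4 * i + j, minlength=16)

stat, pval = chisquare(counts)           # expected count = n / 16 per cell
print(stat, pval)
```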
Holt, Johnson; Leach, Adrian W; Schrader, Gritta; Petter, Françoise; MacLeod, Alan; van der Gaag, Dirk Jan; Baker, Richard H A; Mumford, John D
2014-01-01
Utility functions in the form of tables or matrices have often been used to combine discretely rated decision-making criteria. Matrix elements are usually specified individually, so no single rule or principle can easily be stated for the utility function as a whole. A series of five matrices is presented that aggregate criteria two at a time using simple rules that express a varying degree of constraint of the lower rating over the higher. A further nine possible matrices were obtained by using a different rule on either side of the main axis of the matrix, to describe situations where the criteria have a differential influence on the outcome. Uncertainties in the criteria are represented by three alternative frequency distributions, from which the assessors select the most appropriate. The output of the utility function is a distribution of rating frequencies that depends on the distributions of the input criteria. In pest risk analysis (PRA), seven of these utility functions were required to mimic the logic by which assessors for the European and Mediterranean Plant Protection Organization arrive at an overall rating of pest risk. The framework enables the development of PRAs that are consistent and easy to understand, criticize, compare, and change. When the approach was tested in workshops, PRA practitioners found that it accorded with both the logic and the level of resolution that they used in their risk assessments.
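A sketch of the rating-aggregation idea: a rule matrix combines two 1-5 ratings, and input rating-frequency distributions propagate through it to an output distribution. The "minimum" rule below is one simple instance of the lower rating constraining the higher; the paper's seven PRA matrices differ in detail, and independence of the two inputs is an added assumption.

```python
# Sketch: aggregating two rated criteria with a rule matrix and
# propagating rating-frequency distributions through it.
import numpy as np

levels = 5
M = np.minimum.outer(np.arange(1, levels + 1), np.arange(1, levels + 1))

def propagate(p, q, M):
    """Output rating distribution given input rating pmfs p and q."""
    out = np.zeros(levels)
    joint = np.outer(p, q)               # assumes independent inputs
    for i in range(levels):
        for j in range(levels):
            out[M[i, j] - 1] += joint[i, j]
    return out

p = np.array([0.0, 0.1, 0.6, 0.3, 0.0])  # uncertain rating near 3
q = np.array([0.0, 0.0, 0.2, 0.5, 0.3])  # uncertain rating near 4
print(propagate(p, q, M))
```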
O'Neil, Colleen E; Jackson, Joshua M; Shim, Sang-Hee; Soper, Steven A
2016-04-05
We present a novel approach for characterizing surfaces using super-resolution fluorescence microscopy with sub-diffraction-limit spatial resolution. Thermoplastic surfaces were activated by UV/O3 or O2 plasma treatment under various conditions to generate pendant, surface-confined carboxylic acids (-COOH). These surface functional groups were then labeled with a photoswitchable dye and interrogated using single-molecule, localization-based, super-resolution fluorescence microscopy to elucidate the heterogeneity of these functional groups across the activated surface. The data indicated nonuniform distributions of the functional groups for both COC and PMMA thermoplastics, with the degree of heterogeneity being dose dependent. In addition, COC showed a higher surface density of functional groups than PMMA for both UV/O3 and O2 plasma treatment. The spatial distributions of -COOH groups obtained from super-resolution imaging were used to simulate nonuniform patterns of electroosmotic flow in thermoplastic nanochannels. The simulations were compared to single-particle tracking of fluorescent nanoparticles within thermoplastic nanoslits to demonstrate the effects of surface functional group heterogeneity on the electrokinetic transport process.
Evaluation of Two Approaches to Defining Extinction Risk under the U.S. Endangered Species Act.
Thompson, Grant G; Maguire, Lynn A; Regan, Tracey J
2018-05-01
The predominant definition of extinction risk in conservation biology involves evaluating the cumulative distribution function (CDF) of extinction time at a particular point (the "time horizon"). Using the principles of decision theory, this article develops an alternative definition of extinction risk as the expected loss (EL) to society resulting from eventual extinction of a species. Distinct roles are identified for time preference and risk aversion. Ranges of tentative values for the parameters of the two approaches are proposed, and the performances of the two approaches are compared and contrasted for a small set of real-world species with published extinction time distributions and for a large set of hypothetical extinction time distributions. Potential issues with each approach are evaluated, and the EL approach is recommended as the better of the two. The CDF approach suffers from the fact that extinctions occurring at any time before the specified time horizon are weighted equally, while extinctions occurring beyond the time horizon receive no weight at all. It also suffers from the fact that the time horizon does not correspond to any natural phenomenon and so is impossible to specify nonarbitrarily, yet the results can depend critically on the specified value. In contrast, the EL approach has the advantage of weighting extinction time continuously, with no artificial time horizon, and its parameters (the rates of time preference and risk aversion) do correspond to natural phenomena and so can be specified nonarbitrarily.
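A numeric illustration of the contrast: for an assumed lognormal extinction-time distribution, compare the CDF evaluated at a 100-year horizon with an expected discounted loss of the form E[e^(-rho*T)]. The lognormal parameters, discount rate, unit loss, and the omission of risk aversion are illustrative assumptions, not the article's calibrated values.

```python
# Sketch: CDF-at-horizon risk vs. expected discounted loss.
import numpy as np
from scipy.stats import lognorm

T = lognorm(s=1.0, scale=200.0)          # extinction time, median 200 years

# CDF approach: probability of extinction before a 100-year horizon
risk_cdf = T.cdf(100.0)

# EL approach: expected present value of a unit loss at extinction,
# E[exp(-rho * T)], with time-preference rate rho (risk aversion omitted)
rho = 0.01
samples = T.rvs(size=200_000, random_state=5)
risk_el = np.exp(-rho * samples).mean()

print(risk_cdf, risk_el)
```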
Brennan, Gerard P; Hunter, Stephen J; Snow, Greg; Minick, Kate I
2017-12-01
The Centers for Medicare and Medicaid Services (CMS) require that physical therapists document patients' functional limitations, but the process is not standardized; a systematic approach to determine a patient's functional limitations and responsiveness to change is needed. The purpose of this study was to compare the responsiveness to change of patient-reported outcomes (PROs) under the 7-level severity/complexity modifier scale proposed by Medicare with that under a derived scale implemented by Intermountain Healthcare's Rehabilitation Outcomes Management System (ROMS). This was a retrospective, observational cohort design. A total of 165,183 PRO records prior to July 1, 2013, were compared to 46,334 records from July 1, 2013, to December 31, 2015. Histograms and ribbon plots illustrate the distribution and change of patients' scores. ROMS raw-score ranges were calculated and compared to the CMS severity/complexity levels based on score percentage. The distribution of the population was compared for the 2 methods, and sensitivity and specificity were compared for responsiveness to change based on the minimal clinically important difference (MCID). The histograms demonstrated that few patient scores placed in the CMS scale levels at the extremes, whereas the majority of scores placed in the 2 middle levels (CJ, CK); ROMS distributed scores more evenly across levels. The ribbon plots illustrated the advantage of ROMS' narrower score ranges: a greater chance for patients to change levels was observed with ROMS when an MCID was achieved, and the narrower scale levels resulted in greater sensitivity and good specificity. Geographic representation of the United States was limited, and without patients' global rating of change, a reference standard to gauge validation of improvement could not be provided. ROMS provides a standard approach to identify functional limitation modifier levels accurately and to detect improvement more accurately than a straight-across transposition using the CMS scale.
Ackerly, D D; Cornwell, W K
2007-02-01
Plant functional traits vary both along environmental gradients and among species occupying similar conditions, creating a challenge for the synthesis of functional and community ecology. We present a trait-based approach that provides an additive decomposition of species' trait values into alpha and beta components: beta values refer to a species' position along a gradient defined by community-level mean trait values; alpha values are the difference between a species' trait values and the mean of co-occurring taxa. In woody plant communities of coastal California, beta trait values for specific leaf area, leaf size, wood density and maximum height all covary strongly, reflecting species distributions across a gradient of soil moisture availability. Alpha values, on the other hand, are generally not significantly correlated, suggesting several independent axes of differentiation within communities. This trait-based framework provides a novel approach to integrate functional ecology and gradient analysis with community ecology and coexistence theory.
Statistical Interior Tomography
Xu, Qiong; Wang, Ge; Sieren, Jered; Hoffman, Eric A.
2011-01-01
This paper presents a statistical interior tomography (SIT) approach making use of compressed sensing (CS) theory. With the projection data modeled by the Poisson distribution, an objective function with a total variation (TV) regularization term is formulated in the maximum a posteriori (MAP) framework to solve the interior problem. An alternating minimization method is used to optimize the objective function, with an initial image obtained from the direct inversion of the truncated Hilbert transform. The proposed SIT approach is extensively evaluated with both numerical and real datasets. The results demonstrate that SIT is robust with respect to data noise and down-sampling, and has better resolution and less bias than its deterministic counterpart in the case of low-count data.
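In generic notation (assumed, not quoted from the paper), a Poisson-likelihood MAP objective with TV regularization for transmission CT has the form:

```latex
% y_i: measured counts, b_i: blank-scan counts, A: system matrix,
% mu: attenuation image, lambda: regularization weight (symbols assumed).
\hat{\mu} \;=\; \arg\min_{\mu \ge 0}\;
\sum_i \Big( \bar{y}_i(\mu) - y_i \ln \bar{y}_i(\mu) \Big)
\;+\; \lambda\, \mathrm{TV}(\mu),
\qquad \bar{y}_i(\mu) = b_i\, e^{-[A\mu]_i} .
```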
Angular correlations in pair production at the LHC in the parton Reggeization approach
NASA Astrophysics Data System (ADS)
Karpishkov, Anton; Nefedov, Maxim; Saleev, Vladimir
2017-10-01
We calculate angular correlation spectra between beauty (B) and anti-beauty mesons in proton-proton collisions in the leading order approximation of the parton Reggeization approach, consistently merged with the next-to-leading order corrections from the emission of an additional hard gluon (NLO* approximation). To describe b-quark hadronization we use universal scale-dependent parton-to-meson fragmentation functions extracted from combined e+e- annihilation data. The Kimber-Martin-Ryskin model is employed for the unintegrated parton distribution functions in a proton. We obtain good agreement, within uncertainties and without free parameters, between our predictions for the angular correlations and data from the CMS Collaboration at LHC energies.
Sumi, Tomonari; Maruyama, Yutaka; Mitsutake, Ayori; Koga, Kenichiro
2016-06-14
In the conventional classical density functional theory (DFT) for simple fluids, an ideal gas is usually chosen as the reference system because there is a one-to-one correspondence between the external field and the density distribution function, and the exact intrinsic free-energy functional is available for the ideal gas. In this case, the second-order density functional Taylor series expansion of the excess intrinsic free-energy functional provides the hypernetted-chain (HNC) approximation. Recently, it has been shown that the HNC approximation significantly overestimates the solvation free energy (SFE) for an infinitely dilute Lennard-Jones (LJ) solution, especially when the solute particles are several times larger than the solvent particles [T. Miyata and J. Thapa, Chem. Phys. Lett. 604, 122 (2014)]. In the present study, we propose a reference-modified density functional theory as a systematic approach to improve the SFE functional as well as the pair distribution functions. The second-order density functional Taylor series expansion for the excess part of the intrinsic free-energy functional, in which a hard-sphere fluid is introduced as the reference system instead of an ideal gas, is applied to pure LJ and infinitely dilute LJ solution systems and is shown to remedy the drawbacks of the HNC approximation remarkably well. Furthermore, the third-order density functional expansion approximation, in which a factorization approximation is applied to the triplet direct correlation function, is examined for the LJ systems. We also show that the third-order contribution can yield further refinements of both the pair distribution function and the excess chemical potential for pure LJ liquids.
Tracking of multimodal therapeutic nanocomplexes targeting breast cancer in vivo.
Bardhan, Rizia; Chen, Wenxue; Bartels, Marc; Perez-Torres, Carlos; Botero, Maria F; McAninch, Robin Ward; Contreras, Alejandro; Schiff, Rachel; Pautler, Robia G; Halas, Naomi J; Joshi, Amit
2010-12-08
Nanoparticle-based therapeutics with local delivery and external electromagnetic field modulation hold extraordinary promise for soft-tissue cancers such as breast cancer; however, knowledge of the distribution and fate of nanoparticles in vivo is crucial for clinical translation. Here we demonstrate that multiple diagnostic capabilities can be introduced in photothermal therapeutic nanocomplexes by simultaneously enhancing both near-infrared fluorescence and magnetic resonance imaging (MRI). We track nanocomplexes in vivo, examining the influence of HER2 antibody targeting on nanocomplex distribution over 72 h. This approach provides valuable, detailed information regarding the distribution and fate of complex nanoparticles designed for specific diagnostic and therapeutic functions.
High level continuity for coordinate generation with precise controls
NASA Technical Reports Server (NTRS)
Eiseman, P. R.
1982-01-01
Coordinate generation techniques with precise local controls have been derived and analyzed for continuity requirements up to both the first and second derivatives, and have been projected to higher level continuity requirements from the established pattern. The desired local control precision was obtained when a family of coordinate surfaces could be uniformly distributed without a consequent creation of flat spots on the coordinate curves transverse to the family. Relative to the uniform distribution, the family could be redistributed from an a priori distribution function or from a solution adaptive approach, both without distortion from the underlying transformation which may be independently chosen to fit a nontrivial geometry and topology.
He, Yujun; Zhang, Jin; Li, Dongqi; Wang, Jiangtao; Wu, Qiong; Wei, Yang; Zhang, Lina; Wang, Jiaping; Liu, Peng; Li, Qunqing; Fan, Shoushan; Jiang, Kaili
2013-01-01
We show that the Schottky barrier at the metal-single walled carbon nanotube (SWCNT) contact can be clearly observed in scanning electron microscopy (SEM) images as a bright contrast segment with length up to micrometers, due to the space charge distribution in the depletion region. The length of the charge depletion region increases with the diameter of semiconducting SWCNTs (s-SWCNTs) connected to one metal electrode, which enables direct and efficient evaluation of the bandgap distributions of s-SWCNTs. Moreover, this approach can also be applied to a wide variety of semiconducting nanomaterials, adding a new function to conventional SEM.
Distributed Optimization for a Class of Nonlinear Multiagent Systems With Disturbance Rejection.
Wang, Xinghu; Hong, Yiguang; Ji, Haibo
2016-07-01
The paper studies the distributed optimization problem for a class of nonlinear multiagent systems in the presence of external disturbances. To solve the problem, we need to achieve the optimal multiagent consensus based on local cost function information and neighboring information and meanwhile to reject local disturbance signals modeled by an exogenous system. With convex analysis and the internal model approach, we propose a distributed optimization controller for heterogeneous and nonlinear agents in the form of continuous-time minimum-phase systems with unity relative degree. We prove that the proposed design can solve the exact optimization problem with rejecting disturbances.
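As a toy illustration of distributed optimization by consensus, the sketch below uses single-integrator agents and a proportional-integral consensus law over a ring graph; the paper's setting (heterogeneous nonlinear minimum-phase agents with an internal model rejecting disturbances) is considerably more general, and none of the numbers below come from it.

```python
import numpy as np

# Each agent i only knows its own cost f_i(x) = 0.5*(x - c_i)^2 and its
# neighbors' states; the network jointly minimizes sum_i f_i, whose optimum
# is mean(c). The integral state v corrects the gradient disagreement so
# the agents agree on the exact optimizer, not just a neighborhood of it.
c = np.array([1.0, 3.0, 5.0, 7.0])
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)   # ring graph adjacency
L = np.diag(A.sum(axis=1)) - A              # graph Laplacian

x = np.zeros(4)
v = np.zeros(4)
h = 0.05                                    # Euler step
for _ in range(2000):
    grad = x - c                            # local gradients only
    x = x + h * (-grad - L @ x - v)
    v = v + h * (L @ x)

print(x, "target:", c.mean())               # all agents converge to 4.0
```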
Geodesics in nonexpanding impulsive gravitational waves with Λ. II
NASA Astrophysics Data System (ADS)
Sämann, Clemens; Steinbauer, Roland
2017-11-01
We investigate all geodesics in the entire class of nonexpanding impulsive gravitational waves propagating in an (anti-)de Sitter universe using the distributional metric. We extend the regularization approach of part I [Sämann, C. et al., Classical Quantum Gravity 33(11), 115002 (2016)] to a full nonlinear distributional analysis within the geometric theory of generalized functions. We prove global existence and uniqueness of geodesics that cross the impulsive wave, and hence geodesic completeness, in full generality for this class of low regularity spacetimes. This, in particular, prepares the ground for a mathematically rigorous account of the "physical equivalence" of the continuous form with the distributional "form" of the metric.
DATMAN: A reliability data analysis program using Bayesian updating
DOE Office of Scientific and Technical Information (OSTI.GOV)
Becker, M.; Feltus, M.A.
1996-12-31
Preventive maintenance (PM) techniques focus on the prevention of failures, in particular of system components that are important to plant functions. Reliability-centered maintenance (RCM) improves on the PM techniques by introducing a set of guidelines by which to evaluate the system functions. It also minimizes intrusive maintenance, labor, and equipment downtime without sacrificing system performance when its function is essential for plant safety. Both the PM and RCM approaches require that system reliability data be updated as more component failures and operation time are acquired. System reliability and the likelihood of component failures can be calculated by Bayesian statistical methods, which can update these data. The DATMAN computer code has been developed at Penn State to simplify the Bayesian analysis by performing the tedious calculations needed for RCM reliability analysis. DATMAN reads data for updating, fits a distribution that best fits the data, and calculates component reliability. DATMAN provides a user-friendly interface menu that allows the user to choose from several common prior and posterior distributions, insert new failure data, and visually select the distribution that matches the data most accurately.
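The Bayesian updating loop that DATMAN is described as automating can be illustrated with a conjugate prior; the sketch below uses a Beta prior on a failure-on-demand probability. The prior parameters and failure counts are hypothetical, and the actual code offers several prior/posterior families rather than this single choice.

```python
from scipy import stats

# Conjugate Beta-Binomial update: a Beta(a0, b0) prior on the component's
# failure-on-demand probability is updated with newly observed plant data.
a0, b0 = 1.0, 19.0           # prior: mean failure probability = 0.05
failures, demands = 3, 40    # illustrative new failure data

a1, b1 = a0 + failures, b0 + (demands - failures)
posterior = stats.beta(a1, b1)

print("posterior mean failure probability:", posterior.mean())
print("95% credible interval:", posterior.interval(0.95))
print("mean component reliability (1 - p):", 1 - posterior.mean())
```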
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hiotelis, Nicos; Popolo, Antonino Del, E-mail: adelpopolo@oact.inaf.it, E-mail: hiotelis@ipta.demokritos.gr
We construct an integral equation for the first crossing distributions for fractional Brownian motion in the case of a constant barrier, and we present an exact analytical solution. Additionally, we present first crossing distributions derived by simulating paths from fractional Brownian motion. We compare the results of the analytical solutions with both those of simulations and those of some approximated solutions which have been used in the literature. Finally, we present multiplicity functions for dark matter structures resulting from our analytical approach and compare them with those resulting from N-body simulations. We show that the results of the analytical solutions are in good agreement with those of path simulations but differ significantly from those derived from approximated solutions. Additionally, multiplicity functions derived from fractional Brownian motion are poor fits to those which result from N-body simulations. We also present comparisons with other models which exist in the literature and discuss different ways of improving the agreement between analytical results and N-body simulations.
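Path simulation of the kind mentioned above can be sketched with an exact (Cholesky-based) fBm sampler; the Hurst exponent, barrier, horizon, and discretization below are illustrative choices, not those of the paper.

```python
import numpy as np

# Estimate the first-crossing distribution of fractional Brownian motion
# over a constant barrier by simulating exact sample paths via Cholesky
# factorization of the fBm covariance matrix.
H, barrier, T, n, n_paths = 0.7, 1.0, 10.0, 500, 2000
t = np.linspace(T / n, T, n)
cov = 0.5 * (t[:, None]**(2 * H) + t[None, :]**(2 * H)
             - np.abs(t[:, None] - t[None, :])**(2 * H))
Lchol = np.linalg.cholesky(cov + 1e-12 * np.eye(n))   # jitter for stability

rng = np.random.default_rng(0)
paths = rng.standard_normal((n_paths, n)) @ Lchol.T   # exact fBm samples
crossed = (paths >= barrier).any(axis=1)
hit = np.argmax(paths >= barrier, axis=1)             # first index at/above barrier
first_times = t[hit[crossed]]                         # only paths that crossed

print("fraction crossed by T:", crossed.mean())
print("median first-crossing time:", np.median(first_times))
```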
NASA Astrophysics Data System (ADS)
Kakehashi, Yoshiro; Chandra, Sumal
2017-03-01
The momentum distribution function (MDF) bands of iron-group transition metals from Sc to Cu have been investigated on the basis of the first-principles momentum dependent local ansatz wavefunction method. It is found that the MDF for d electrons show a strong momentum dependence and a large deviation from the Fermi-Dirac distribution function along high-symmetry lines of the first Brillouin zone, while the sp electrons behave as independent electrons. In particular, the deviation in bcc Fe (fcc Ni) is shown to be enhanced by the narrow eg (t2g) bands with flat dispersion in the vicinity of the Fermi level. Mass enhancement factors (MEF) calculated from the jump on the Fermi surface are also shown to be momentum dependent. Large mass enhancements of Mn and Fe are found to be caused by spin fluctuations due to d electrons, while that for Ni is mainly caused by charge fluctuations. Calculated MEF are consistent with electronic specific heat data as well as recent angle resolved photoemission spectroscopy data.
Mobility power flow analysis of an L-shaped plate structure subjected to distributed loading
NASA Technical Reports Server (NTRS)
Cuschieri, J. M.; Cimmerman, B.
1990-01-01
An analytical investigation based on the Mobility Power Flow (MPF) method is presented for the determination of the vibrational response and power flow for two coupled flat plate structures in an L-shaped configuration, subjected to distributed excitation. The principle of the MPF method consists of dividing the global structure into a series of subsystems coupled together using mobility functions. Each separate subsystem is analyzed independently to determine the structural mobility functions for the junction and excitation locations. The mobility functions, together with the characteristics of the junction between the subsystems, are then used to determine the response of the global structure and the MPF. In the considered coupled plate structure, MPF expressions are derived for distributed mechanical excitation which is independent of the structure response. However, using a similar approach with some modifications, excitation by an acoustic plane wave can also be considered. Modifications are required to deal with the latter case because the forces (acoustic pressure) acting on the structure depend on the response of the structure due to the presence of the scattered pressure.
NASA Astrophysics Data System (ADS)
Makoveeva, Eugenya V.; Alexandrov, Dmitri V.
2018-01-01
This article is concerned with a new analytical description of nucleation and growth of crystals in a metastable mushy layer (supercooled liquid or supersaturated solution) at the intermediate stage of phase transition. The model under consideration, consisting of a non-stationary integro-differential system of governing equations for the distribution function and metastability level, is analytically solved by means of the saddle-point technique for the Laplace-type integral in the case of arbitrary nucleation kinetics and time-dependent heat or mass sources in the balance equation. We demonstrate that the time-dependent distribution function approaches the stationary profile in the course of time. This article is part of the theme issue 'From atomistic interfaces to dendritic patterns'.
Mean-field approximation for spacing distribution functions in classical systems.
González, Diego Luis; Pimpinelli, Alberto; Einstein, T L
2012-01-01
We propose a mean-field method to calculate approximately the spacing distribution functions p^(n)(s) in one-dimensional classical many-particle systems. We compare our method with two other commonly used methods, the independent interval approximation and the extended Wigner surmise. In our mean-field approach, p^(n)(s) is calculated from a set of Langevin equations, which are decoupled by using a mean-field approximation. We find that in spite of its simplicity, the mean-field approximation provides good results in several systems. We offer many examples illustrating that the three previously mentioned methods give a reasonable description of the statistical behavior of the system. The physical interpretation of each method is also discussed. © 2012 American Physical Society
Toward the detection of gravitational waves under non-Gaussian noises I. Locally optimal statistic.
Yokoyama, Jun'ichi
2014-01-01
After reviewing the standard hypothesis test and the matched filter technique to identify gravitational waves under Gaussian noises, we introduce two methods to deal with non-Gaussian stationary noises. We formulate the likelihood ratio function under weakly non-Gaussian noises through the Edgeworth expansion, and under strongly non-Gaussian noises in terms of a new method we call Gaussian mapping, where the observed marginal distribution and the two-body correlation function are fully taken into account. We then apply these two approaches to Student's t-distribution, which has larger tails than the Gaussian. It is shown that while both methods work well when the non-Gaussianity is small, only the latter method works well for the highly non-Gaussian case.
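To make the heavy-tail point concrete, the sketch below contrasts the Gaussian (matched-filter) log-likelihood with a Student's t log-likelihood for a toy amplitude estimate; it is not the paper's Edgeworth or Gaussian-mapping construction, and the template, nu, and sigma are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
n, nu, sigma = 1024, 4.0, 1.0
k = np.arange(n)
template = np.sin(2 * np.pi * 8 * k / n) * np.exp(-((k - 512)**2) / 5e3)
noise = sigma * rng.standard_t(nu, n)          # heavy-tailed stationary noise
data = 0.5 * template + noise                  # true amplitude is 0.5

def loglike_gauss(A):
    # Gaussian log-likelihood (constants dropped): the matched-filter case.
    r = data - A * template
    return -0.5 * np.sum(r**2 / sigma**2)

def loglike_t(A):
    # Student-t log-likelihood kernel: outliers are down-weighted by log1p.
    r = data - A * template
    return -0.5 * (nu + 1) * np.sum(np.log1p(r**2 / (nu * sigma**2)))

amps = np.linspace(-1, 2, 301)
print("Gaussian MLE amplitude:", amps[np.argmax([loglike_gauss(a) for a in amps])])
print("Student-t MLE amplitude:", amps[np.argmax([loglike_t(a) for a in amps])])
```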
Parallel proton fire hose instability in the expanding solar wind: Hybrid simulations
NASA Astrophysics Data System (ADS)
Matteini, Lorenzo; Landi, Simone; Hellinger, Petr; Velli, Marco
2006-10-01
We report a study of the properties of the parallel proton fire hose instability, comparing the results obtained by linear analysis, from one-dimensional (1-D) standard hybrid simulations, and from 1-D hybrid expanding box simulations. The three different approaches converge toward the same instability threshold condition, which is in good agreement with in situ observations, suggesting that the instability is relevant in the solar wind context. We also investigate the effect of the wave-particle interactions on shaping the proton distribution function and on the evolution of the spectrum of the magnetic fluctuations during the expansion. We find that the resonant interaction can cause the proton distribution function to depart from the bi-Maxwellian form.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yokogawa, D., E-mail: d.yokogawa@chem.nagoya-u.ac.jp; Institute of Transformative Bio-Molecules
2016-09-07
The theoretical design of bright bio-imaging molecules is a rapidly progressing area. However, because of system size and the required computational accuracy, the number of theoretical studies is, to our knowledge, limited. To overcome these difficulties, we developed a new method based on the reference interaction site model self-consistent field approach, explicitly including the spatial electron density distribution, combined with time-dependent density functional theory. We applied it to calculations of indole and 5-cyanoindole in the ground and excited states in gas and solution phases. The changes in the optimized geometries were clearly explained with resonance structures, and the Stokes shift was correctly reproduced.
Temperature evolution of the local order parameter in relaxor ferroelectrics (1 - x)PMN-xPZT
NASA Astrophysics Data System (ADS)
Gridnev, S. A.; Glazunov, A. A.; Tsotsorin, A. N.
2005-09-01
The temperature dependences of the local order parameter and the relaxation time distribution function have been determined in (1 - x)PMN-xPZT ceramic samples via dielectric permittivity. Above the Burns temperature, the permittivity was found to follow the Curie-Weiss law, and the deviation from it was observed to increase with decreasing temperature. A local order parameter was calculated from the dielectric data using a modified Landau-Devonshire approach. These results are compared to the distribution function of relaxation times. It was found that a glasslike freezing of reorientable polar clusters occurs in the temperature range of the diffuse relaxor transition. The evolution of the studied system to a more ordered state arises from the increased PZT content.
NASA Astrophysics Data System (ADS)
Altomare, Albino; Cesario, Eugenio; Mastroianni, Carlo
2016-10-01
The opportunity of using Cloud resources on a pay-as-you-go basis and the availability of powerful data centers and high bandwidth connections are speeding up the success and popularity of Cloud systems, which is making on-demand computing a common practice for enterprises and scientific communities. The reasons for this success include natural business distribution, the need for high availability and disaster tolerance, the sheer size of their computational infrastructure, and/or the desire to provide uniform access times to the infrastructure from widely distributed client sites. Nevertheless, the expansion of large data centers is resulting in a huge rise in the electrical power consumed by hardware facilities and cooling systems. The geographical distribution of data centers is becoming an opportunity: the variability of electricity prices, environmental conditions and client requests, both from site to site and over time, makes it possible to intelligently and dynamically (re)distribute the computational workload and achieve business goals as diverse as: the reduction of costs, energy consumption and carbon emissions; the satisfaction of performance constraints; the adherence to Service Level Agreements established with users; etc. This paper proposes an approach that helps to achieve the business goals established by the data center administrators. The workload distribution is driven by a fitness function, evaluated for each data center, which weighs some key parameters related to business objectives, among them the price of electricity, the carbon emission rate, and the balance of load among the data centers. For example, energy costs can be reduced by using a "follow the moon" approach, e.g. by migrating the workload to data centers where the price of electricity is lower at that time. Our approach uses data about historical usage of the data centers and data about environmental conditions to predict, with the help of regressive models, the values of the parameters of the fitness function, and then to appropriately tune the weights assigned to the parameters in accordance with the business goals. Preliminary experimental results, presented in this paper, show encouraging benefits.
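A minimal sketch of such a fitness-driven placement decision follows; the parameters, weights, and linear form of the fitness function are illustrative assumptions rather than the paper's actual model, which also predicts the parameter values with regressive models.

```python
# Illustrative per-data-center parameters: electricity price ($/kWh),
# carbon emission rate (kg CO2/kWh), and current load (utilization).
datacenters = {
    "dc_eu": {"price": 0.30, "carbon": 0.25, "load": 0.80},
    "dc_us": {"price": 0.12, "carbon": 0.45, "load": 0.55},
    "dc_as": {"price": 0.18, "carbon": 0.35, "load": 0.30},
}
weights = {"price": 0.5, "carbon": 0.3, "load": 0.2}  # tuned to business goals

def fitness(dc):
    # Lower price, carbon, and load are all preferable here, so fitness is
    # the negative weighted sum; a "follow the moon" policy emerges when the
    # price term dominates and prices vary with local time of day.
    return -sum(weights[k] * dc[k] for k in weights)

best = max(datacenters, key=lambda name: fitness(datacenters[name]))
print("route next workload to:", best)
```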
Feierstein, C E; Portugues, R; Orger, M B
2015-06-18
In recent years, the zebrafish has emerged as an appealing model system to tackle questions relating to the neural circuit basis of behavior. This can be attributed not just to the growing use of genetically tractable model organisms, but also in large part to the rapid advances in optical techniques for neuroscience, which are ideally suited for application to the small, transparent brain of the larval fish. Many characteristic features of vertebrate brains, from gross anatomy down to particular circuit motifs and cell-types, as well as conserved behaviors, can be found in zebrafish even just a few days post fertilization, and, at this early stage, the physical size of the brain makes it possible to analyze neural activity in a comprehensive fashion. In a recent study, we used a systematic and unbiased imaging method to record the pattern of activity dynamics throughout the whole brain of larval zebrafish during a simple visual behavior, the optokinetic response (OKR). This approach revealed the broadly distributed network of neurons that were active during the behavior and provided insights into the fine-scale functional architecture in the brain, inter-individual variability, and the spatial distribution of behaviorally relevant signals. Combined with mapping anatomical and functional connectivity, targeted electrophysiological recordings, and genetic labeling of specific populations, this comprehensive approach in zebrafish provides an unparalleled opportunity to study complete circuits in a behaving vertebrate animal. Copyright © 2014. Published by Elsevier Ltd.
Kutch, Jason J; Ichesco, Eric; Hampson, Johnson P; Labus, Jennifer S; Farmer, Melissa A; Martucci, Katherine T; Ness, Timothy J; Deutsch, Georg; Apkarian, A Vania; Mackey, Sean C; Klumpp, David J; Schaeffer, Anthony J; Rodriguez, Larissa V; Kreder, Karl J; Buchwald, Dedra; Andriole, Gerald L; Lai, H Henry; Mullins, Chris; Kusek, John W; Landis, J Richard; Mayer, Emeran A; Clemens, J Quentin; Clauw, Daniel J; Harris, Richard E
2017-10-01
Chronic pain is often measured with a severity score that overlooks its spatial distribution across the body. This widespread pain is believed to be a marker of centralization, a central nervous system process that decouples pain perception from nociceptive input. Here, we investigated whether centralization is manifested at the level of the brain using data from 1079 participants in the Multidisciplinary Approach to the Study of Chronic Pelvic Pain Research Network (MAPP) study. Participants with a clinical diagnosis of urological chronic pelvic pain syndrome (UCPPS) were compared to pain-free controls and patients with fibromyalgia, the prototypical centralized pain disorder. Participants completed questionnaires capturing pain severity, function, and a body map of pain. A subset (UCPPS N = 110; fibromyalgia N = 23; healthy control N = 49) underwent functional and structural magnetic resonance imaging. Patients with UCPPS reported pain ranging from localized (pelvic) to widespread (throughout the body). Patients with widespread UCPPS displayed increased brain gray matter volume and functional connectivity involving sensorimotor and insular cortices (P < 0.05 corrected). These changes translated across disease diagnoses as identical outcomes were present in patients with fibromyalgia but not pain-free controls. Widespread pain was also associated with reduced physical and mental function independent of pain severity. Brain pathology in patients with centralized pain is related to pain distribution throughout the body. These patients may benefit from interventions targeting the central nervous system.
An Intrinsic Algorithm for Parallel Poisson Disk Sampling on Arbitrary Surfaces.
Ying, Xiang; Xin, Shi-Qing; Sun, Qian; He, Ying
2013-03-08
Poisson disk sampling plays an important role in a variety of visual computing applications, due to its useful statistical distribution properties and the absence of aliasing artifacts. While many effective techniques have been proposed to generate Poisson disk distributions in Euclidean space, relatively little work has addressed the surface counterpart. This paper presents an intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces. We propose a new technique for parallelizing the dart throwing. Rather than conventional approaches that explicitly partition the spatial domain to generate the samples in parallel, our approach assigns each sample candidate a random and unique priority that is unbiased with regard to the distribution. Hence, multiple threads can process the candidates simultaneously and resolve conflicts by checking the given priority values. It is worth noting that our algorithm is accurate, as the generated Poisson disks are uniformly and randomly distributed without bias. Our method is intrinsic in that all the computations are based on the intrinsic metric and are independent of the embedding space. This intrinsic feature allows us to generate Poisson disk distributions on arbitrary surfaces. Furthermore, by manipulating the spatially varying density function, we can obtain adaptive sampling easily.
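The priority idea can be mimicked in a few lines in the plane: processing candidates in priority order and keeping those with no already-accepted point within the disk radius yields the sample set the parallel conflict resolution converges to. The sketch below is a serial, Euclidean stand-in; the paper's method uses the intrinsic surface metric and true parallel threads.

```python
import numpy as np

rng = np.random.default_rng(1)
r, n_candidates = 0.05, 4000
cand = rng.random((n_candidates, 2))          # candidates in the unit square
priority = rng.permutation(n_candidates)      # random, unique priorities
order = np.argsort(priority)                  # process in priority order

accepted = []
for i in order:
    p = cand[i]
    # A candidate survives only if no earlier-priority survivor is closer
    # than the disk radius; this mirrors the parallel conflict check.
    if all(np.hypot(*(p - q)) >= r for q in accepted):
        accepted.append(p)

print("accepted samples:", len(accepted))
```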
GI-conf: A configuration tool for the GI-cat distributed catalog
NASA Astrophysics Data System (ADS)
Papeschi, F.; Boldrini, E.; Bigagli, L.; Mazzetti, P.
2009-04-01
In this work we present a configuration tool for GI-cat. In a Service-Oriented Architecture (SOA) framework, GI-cat implements a distributed catalog service providing advanced capabilities such as caching, brokering, and mediation functionalities. GI-cat applies a distributed approach, distributing queries to the remote service providers of interest in an asynchronous style and notifying the caller of query status through an incremental feedback mechanism. Today, GI-cat functionalities are made available through two standard catalog interfaces: the OGC CSW ISO and CSW Core Application Profiles. However, two other interfaces are under testing: the CIM and EO Extension Packages of the CSW ebRIM Application Profile. GI-cat is able to interface a multiplicity of discovery and access services serving heterogeneous Earth and Space Sciences resources. They include international standards like the OGC Web Services (i.e., OGC CSW, WCS, WFS and WMS), as well as interoperability arrangements (i.e., community standards) such as UNIDATA THREDDS/OPeNDAP, SeaDataNet CDI (Common Data Index), GBIF (Global Biodiversity Information Facility) services, and SibESS-C infrastructure services. GI-conf implements a user-friendly configuration tool for GI-cat. This is a GUI application that employs a visual and very simple approach to dynamically configure both the GI-cat publishing and distribution capabilities. The tool allows the user to set one or more GI-cat configurations. Each configuration consists of: a) the catalog standard interfaces published by GI-cat; b) the resources (i.e., services/servers) to be accessed and mediated, i.e., federated. Simple icons are used for interfaces and resources, implementing a user-friendly visual approach. The main GI-conf functionalities are: • Interface and federated resource management: the user can select which interfaces are published, and can add a new resource or update or remove an already federated one. • Multiple configuration management: multiple GI-cat configurations can be defined; every configuration identifies a set of published interfaces and a set of federated resources. Configurations can be edited, added, removed, exported, and even imported. • HTML report creation: an HTML report can be created, showing the current active GI-cat configuration, including the resources being federated and the published interface endpoints. The configuration tool is shipped with GI-cat and can be used to configure the service after its installation is completed.
Mancini, Matteo; Giulietti, Giovanni; Dowell, Nicholas; Spanò, Barbara; Harrison, Neil; Bozzali, Marco; Cercignani, Mara
2017-09-14
Microstructural imaging and connectomics are two research areas that hold great potential for investigating brain structure and function. Combining these two approaches can lead to a better and more complete characterization of the brain as a network. The aim of this work is characterizing the connectome from a novel perspective using the myelination measure given by the g-ratio. The g-ratio is the ratio of the inner to the outer diameters of a myelinated axon, whose aggregated value can now be estimated in vivo using MRI. In two different datasets of healthy subjects, we reconstructed the structural connectome and then used the g-ratio estimated from diffusion and magnetization transfer data to characterize the network structure. Significant characteristics of g-ratio weighted graphs emerged. First, the g-ratio distribution across the edges of the graph did not show the power-law distribution observed using the number of streamlines as a weight. Second, connections involving regions related to motor and sensory functions were the highest in myelin content. We also observed significant differences in terms of the hub structure and the rich-club organization suggesting that connections involving hub regions present higher myelination than peripheral connections. Taken together, these findings offer a characterization of g-ratio distribution across the connectome in healthy subjects and lay the foundations for further investigating plasticity and pathology using a similar approach. Copyright © 2017. Published by Elsevier Inc.
A fully traits-based approach to modeling global vegetation distribution.
van Bodegom, Peter M; Douma, Jacob C; Verheijen, Lieneke M
2014-09-23
Dynamic Global Vegetation Models (DGVMs) are indispensable for our understanding of climate change impacts. The application of traits in DGVMs is increasingly refined. However, a comprehensive analysis of the direct impacts of trait variation on global vegetation distribution does not yet exist. Here, we present such an analysis as a proof of principle. We ran regressions of trait observations for leaf mass per area, stem-specific density, and seed mass from a global database against multiple environmental drivers, making use of findings of global trait convergence. This analysis explained up to 52% of the global variation of traits. Global trait maps, generated by coupling the regression equations to gridded soil and climate maps, showed up to orders of magnitude variation in trait values. Subsequently, nine vegetation types were characterized by the trait combinations that they possess using Gaussian mixture density functions. The trait maps were input to these functions to determine global occurrence probabilities for each vegetation type. We prepared vegetation maps, assuming that the most probable (and thus most suited) vegetation type at each location will be realized. This fully traits-based vegetation map predicted 42% of the observed vegetation distribution correctly. Our results indicate that a major proportion of the predictive ability of DGVMs with respect to vegetation distribution can be attained by three traits alone if traits like stem-specific density and seed mass are included. We envision that our traits-based approach, our observation-driven trait maps, and our vegetation maps may inspire a new generation of powerful traits-based DGVMs.
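The classification step lends itself to a compact sketch: each vegetation type gets a Gaussian density over the three traits and a grid cell is assigned the most probable type. The means and covariances below are invented stand-ins for the fitted densities, and only two of the nine types are shown.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Traits per grid cell: (leaf mass per area, stem-specific density, seed mass).
# Each vegetation type is a Gaussian density over this trait space
# (parameters here are illustrative, not the fitted values).
types = {
    "tropical_forest": multivariate_normal([90.0, 0.55, 2.0],
                                           np.diag([200.0, 0.01, 1.0])),
    "tundra":          multivariate_normal([60.0, 0.45, 0.1],
                                           np.diag([150.0, 0.02, 0.05])),
}

def classify(trait_vector):
    """Assign the vegetation type with the highest probability density
    for this cell's trait combination."""
    return max(types, key=lambda t: types[t].pdf(trait_vector))

print(classify([85.0, 0.52, 1.5]))   # -> 'tropical_forest' for these traits
```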
Cai, Jing; Read, Paul W; Altes, Talissa A; Molloy, Janelle A; Brookeman, James R; Sheng, Ke
2007-01-21
Treatment planning based on the probability distribution function (PDF) of patient geometries has been shown to be a potential off-line strategy to incorporate organ motion, but the application of such an approach depends highly upon the reproducibility of the PDF. In this paper, we investigated the dependence of PDF reproducibility on the imaging acquisition parameters, specifically the scan time and the frame rate. Three healthy subjects underwent a continuous 5 min magnetic resonance (MR) scan in the sagittal plane with a frame rate of approximately 10 frames s-1, and the experiments were repeated with an interval of 2 to 3 weeks. A total of nine pulmonary vessels from different lung regions (upper, middle and lower) were tracked, and the reproducibility of their displacement PDFs was evaluated as a function of scan time and frame rate. As a result, the PDF reproducibility error decreased with prolonged scans and appeared to approach an equilibrium state in subjects 2 and 3 within the 5 min scan. The PDF accuracy increased as a power function of frame rate; however, the PDF reproducibility showed less sensitivity to frame rate, presumably because the randomness of breathing dominates the effects. As the key component of PDF-based treatment planning, the reproducibility of the PDF affects the dosimetric accuracy substantially. This study provides a reference for acquiring MR-based PDFs of structures in the lung.
Inventory slack routing application in emergency logistics and relief distributions.
Yang, Xianfeng; Hao, Wei; Lu, Yang
2018-01-01
Various natural and manmade disasters during the last decades have highlighted the need to further improve governmental preparedness for emergency events, and a relief supplies distribution problem named the Inventory Slack Routing Problem (ISRP) has received increasing attention. In an ISRP, inventory slack is defined as the duration between the relief arrival time and the estimated inventory stock-out time. Hence, a larger inventory slack grants more response time in the face of various factors (e.g., traffic congestion) that may lead to delivery lateness. In this study, the relief distribution problem is formulated as an optimization model that maximizes the minimum slack among all dispensing sites. To efficiently solve this problem, we propose a two-stage approach to tackle the vehicle routing and relief allocation sub-problems. By analyzing the inter-relations between these two sub-problems, a new objective function considering both delivery durations and dispensing rates of demand sites is applied in the first stage to design the vehicle routes. A hierarchical routing approach and a sweep approach are also proposed in this stage. Given the vehicle routing plan, the relief allocation can be easily solved in the second stage. A numerical experiment comparing against a multi-vehicle Traveling Salesman Problem (TSP) formulation demonstrates the need for the ISRP and the capability of the proposed solution approaches.
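A toy version of the second-stage allocation can be written as a max-min water-filling problem; the model below (fixed delivery times, linear dispensing, a single relief budget) is a simplification assumed for illustration, solved by bisection on the common slack rather than by the paper's procedure.

```python
import numpy as np

# With routes fixed, site i has delivery time t_i, inventory I_i, and
# dispensing rate r_i; allocating x_i units gives
#   slack_i = (I_i + x_i) / r_i - t_i.
# Maximize the minimum slack subject to sum(x) <= X, x >= 0 (illustrative data).
t = np.array([1.0, 2.5, 4.0])      # delivery times (h)
I = np.array([30.0, 20.0, 10.0])   # current inventories (units)
r = np.array([10.0, 8.0, 6.0])     # dispensing rates (units/h)
X = 50.0                           # total relief available

def needed(s):
    # Total quantity required so that every site reaches slack >= s.
    return np.maximum(0.0, r * (s + t) - I).sum()

lo, hi = (I / r - t).min(), 50.0   # lo is feasible (x = 0), hi is not
for _ in range(60):                # bisection for the largest feasible slack
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if needed(mid) <= X else (lo, mid)

x = np.maximum(0.0, r * (lo + t) - I)
print("allocation:", x.round(2), "min slack:", ((I + x) / r - t).min().round(2))
```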
Photoelectron angular distributions from rotationally resolved autoionizing states of N2
Chartrand, A. M.; McCormack, E. F.; Jacovella, U.; ...
2017-12-08
The single-photon, photoelectron-photoion coincidence spectrum of N2 has been recorded at high (~1.5 cm-1) resolution in the region between the N2+ X 2Σg+, v+ = 0 and 1 ionization thresholds by using a double imaging spectrometer and intense vacuum-ultraviolet light from the Synchrotron SOLEIL. This approach provides the relative photoionization cross section, the photoelectron energy distribution, and the photoelectron angular distribution as a function of photon energy. The region of interest contains autoionizing valence states, vibrationally autoionizing Rydberg states converging to vibrationally excited levels of the N2+ X 2Σg+ ground state, and electronically autoionizing states converging to the N2+ A 2Π and B 2Σu+ states. The wavelength resolution is sufficient to resolve rotational structure in the autoionizing states, but the electron energy resolution is insufficient to resolve rotational structure in the photoion spectrum. Here, a simplified approach based on multichannel quantum defect theory is used to predict the photoelectron angular distribution parameters, β, and the results are in reasonably good agreement with experiment.
Evidence for hubs in human functional brain networks
Power, Jonathan D; Schlaggar, Bradley L; Lessov-Schlaggar, Christina N; Petersen, Steven E
2013-01-01
Hubs integrate and distribute information in powerful ways due to the number and positioning of their contacts in a network. Several resting state functional connectivity MRI reports have implicated regions of the default mode system as brain hubs; we demonstrate that previous degree-based approaches to hub identification may have identified portions of large brain systems rather than critical nodes of brain networks. We utilize two methods to identify hub-like brain regions: 1) finding network nodes that participate in multiple sub-networks of the brain, and 2) finding spatial locations where several systems are represented within a small volume. These methods converge on a distributed set of regions that differ from previous reports on hubs. This work identifies regions that support multiple systems, leading to spatially constrained predictions about brain function that may be tested in terms of lesions, evoked responses, and dynamic patterns of activity. PMID:23972601
DNA mimic proteins: functions, structures, and bioinformatic analysis.
Wang, Hao-Ching; Ho, Chun-Han; Hsu, Kai-Cheng; Yang, Jinn-Moon; Wang, Andrew H-J
2014-05-13
DNA mimic proteins have DNA-like negative surface charge distributions, and they function by occupying the DNA binding sites of DNA binding proteins to prevent these sites from being accessed by DNA. DNA mimic proteins control the activities of a variety of DNA binding proteins and are involved in a wide range of cellular mechanisms such as chromatin assembly, DNA repair, transcription regulation, and gene recombination. However, the sequences and structures of DNA mimic proteins are diverse, making them difficult to predict by bioinformatic search. To date, only a few DNA mimic proteins have been reported. These DNA mimics were not found by searching for functional motifs in their sequences but were revealed only by structural analysis of their charge distribution. This review highlights the biological roles and structures of 16 reported DNA mimic proteins. We also discuss approaches that might be used to discover new DNA mimic proteins.
Application of a Modal Approach in Solving the Static Stability Problem for Electric Power Systems
NASA Astrophysics Data System (ADS)
Sharov, J. V.
2017-12-01
The application of a modal approach to solving the static stability problem for power systems is examined. It is proposed to use the matrix exponent norm as a generalized transition function of the power system's disturbed motion. Based on the concepts of a stability radius and the pseudospectrum of the Jacobian matrix, the necessary and sufficient conditions for the existence of static margins were determined. The capabilities and advantages of the modal approach in designing centralized or distributed control are demonstrated, together with the prospects for analyzing nonlinear oscillations and ensuring dynamic stability.
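Numerically, the quantities named above are easy to probe for a given Jacobian. The sketch below evaluates the matrix exponential norm as a transition function, the spectral abscissa, and a frequency-sweep proxy for the complex (unstructured) stability radius; the matrix is an arbitrary stable example, not a power system Jacobian.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-0.5,  2.0],
              [-1.0, -0.1]])            # illustrative stable, non-normal Jacobian

alpha = max(np.linalg.eigvals(A).real)  # spectral abscissa (< 0: asymptotic stability)
ts = np.linspace(0.0, 20.0, 200)
norms = [np.linalg.norm(expm(A * t), 2) for t in ts]
print("spectral abscissa:", alpha)
print("peak of ||exp(A t)||:", max(norms))   # can exceed 1 for non-normal A

# Proxy for the complex stability radius of a Hurwitz matrix:
# the minimum over real frequencies w of sigma_min(A - i w I).
ws = np.linspace(-10.0, 10.0, 2001)
radius = min(np.linalg.svd(A - 1j * w * np.eye(2), compute_uv=False)[-1]
             for w in ws)
print("approximate stability radius:", radius)
```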
Dynamic modeling and parameter estimation of a radial and loop type distribution system network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jun Qui; Heng Chen; Girgis, A.A.
1993-05-01
This paper presents a new identification approach to three-phase power system modeling and model reduction, treating the power system network as a multi-input, multi-output (MIMO) process. The model estimate can be obtained in discrete-time input-output form, discrete- or continuous-time state-space variable form, or frequency-domain impedance transfer function matrix form. An algorithm for determining the model structure of this MIMO process is described. The effect of measurement noise on the approach is also discussed. This approach has been applied to a sample system, and simulation results are also presented in this paper.
Distributed Electrical Energy Systems: Needs, Concepts, Approaches and Vision (in Chinese)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Yingchen; Zhang, Jun; Gao, Wenzhong
Intelligent distributed electrical energy systems (IDEES) are characterized by vast numbers of system components, diversified component types, and difficulties in operation and management, with the result that the traditional centralized power system management approach no longer fits their operation. Thus, it is believed that blockchain technology is one of the important feasible technical paths for building future large-scale distributed electrical energy systems. An IDEES is inherently both social and technical in character; as a result, a distributed electrical energy system needs to be divided into multiple layers, and at each layer a blockchain is utilized to model and manage its logical and physical functionalities. The blockchains at different layers coordinate with each other to achieve successful operation of the IDEES. Specifically, the multi-layer blockchains, named the 'blockchain group', consist of a distributed data access and service blockchain, an intelligent property management blockchain, a power system analysis blockchain, an intelligent contract operation blockchain, and an intelligent electricity trading blockchain. It is expected that the blockchain group can self-organize into a complex, autonomous and distributed IDEES. In this complex system, frequent and in-depth interactions and computing will give rise to intelligence, and it is expected that such intelligence can bring stable, reliable and efficient electrical energy production, transmission and consumption.
An efficient distribution method for nonlinear transport problems in stochastic porous media
NASA Astrophysics Data System (ADS)
Ibrahima, F.; Tchelepi, H.; Meyer, D. W.
2015-12-01
Because geophysical data are inexorably sparse and incomplete, stochastic treatments of simulated responses are convenient to explore possible scenarios and assess risks in subsurface problems. In particular, understanding how uncertainties propagate in porous media with nonlinear two-phase flow is essential, yet challenging, in reservoir simulation and hydrology. We give a computationally efficient and numerically accurate method to estimate the one-point probability density function (PDF) and cumulative distribution function (CDF) of the water saturation for the stochastic Buckley-Leverett problem when the probability distributions of the permeability and porosity fields are available. The method draws inspiration from the streamline approach and expresses the distributions of interest essentially in terms of an analytically derived mapping and the distribution of the time of flight. In a large class of applications the latter can be estimated at low computational cost (even via conventional Monte Carlo). Once the water saturation distribution is determined, any one-point statistics thereof can be obtained, especially its average and standard deviation. Moreover, information that is rarely available in other approaches, yet crucial, such as the probability of rare events and saturation quantiles (e.g. P10, P50 and P90), can be derived from the method. We provide various examples and comparisons with Monte Carlo simulations to illustrate the performance of the method.
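Once the saturation distribution is in hand (here stood in for by Monte Carlo samples rather than the analytical mapping), the quantile and rare-event outputs the method advertises are one-liners:

```python
import numpy as np

# Stand-in samples of water saturation at one location; in the actual
# method these statistics come from the analytically derived distribution.
rng = np.random.default_rng(42)
saturation = np.clip(rng.normal(0.55, 0.12, 10_000), 0.0, 1.0)

p10, p50, p90 = np.percentile(saturation, [10, 50, 90])
print(f"P10={p10:.3f}  P50={p50:.3f}  P90={p90:.3f}")       # saturation quantiles
print("P(S > 0.8) =", np.mean(saturation > 0.8))            # rare-event probability
```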
Applications of species distribution modeling to paleobiology
NASA Astrophysics Data System (ADS)
Svenning, Jens-Christian; Fløjgaard, Camilla; Marske, Katharine A.; Nógues-Bravo, David; Normand, Signe
2011-10-01
Species distribution modeling (SDM: statistical and/or mechanistic approaches to the assessment of range determinants and prediction of species occurrence) offers new possibilities for estimating and studying past organism distributions. SDM complements fossil and genetic evidence by providing (i) quantitative and potentially high-resolution predictions of past organism distributions, (ii) statistically formulated, testable ecological hypotheses regarding past distributions and communities, and (iii) statistical assessment of range determinants. In this article, we provide an overview of applications of SDM to paleobiology, outlining the methodology, reviewing SDM-based studies in paleobiology or at the interface of paleo- and neobiology, discussing assumptions and uncertainties as well as how to handle them, and providing a synthesis and outlook. Key methodological issues for SDM applications to paleobiology include predictor variables (types and properties; special emphasis is given to paleoclimate), model validation (particularly important given the emphasis on cross-temporal predictions in paleobiological applications), and the integration of SDM and genetics approaches. Over the last few years the number of studies using SDM to address paleobiology-related questions has increased considerably. While some of these studies only use SDM (23%), most combine it with genetically inferred patterns (49%), paleoecological records (22%), or both (6%). A large number of SDM-based studies have addressed the role of Pleistocene glacial refugia in biogeography and evolution, especially in Europe, but also in many other regions. SDM-based approaches are also beginning to contribute to a suite of other research questions, such as historical constraints on current distributions and diversity patterns, the end-Pleistocene megafaunal extinctions, past community assembly, human paleobiogeography, Holocene paleoecology, and even deep-time biogeography (notably, providing insights into biogeographic dynamics >400 million years ago). We discuss important assumptions and uncertainties that affect the SDM approach to paleobiology (the equilibrium postulate, niche stability, changing atmospheric CO2 concentrations) as well as ways to address these (ensemble, functional SDM, and non-SDM ecoinformatics approaches). We conclude that the SDM approach offers important opportunities for advances in paleobiology by providing a quantitative ecological perspective, and thereby also offers the potential for an enhanced contribution of paleobiology to ecology and conservation biology, e.g., for estimating climate change impacts and for informing ecological restoration.
A new approach for beam hardening correction based on the local spectrum distributions
NASA Astrophysics Data System (ADS)
Rasoulpour, Naser; Kamali-Asl, Alireza; Hemmati, Hamidreza
2015-09-01
The energy dependence of material absorption and the polychromatic nature of x-ray beams in Computed Tomography (CT) cause a phenomenon called "beam hardening". The purpose of this study is to provide a novel approach for Beam Hardening (BH) correction. This approach is based on the linear attenuation coefficients of Local Spectrum Distributions (LSDs) at various depths of a phantom. The proposed method includes two steps. Firstly, the hardened spectra at various depths of the phantom (the LSDs) are estimated based on the Expectation Maximization (EM) algorithm for an arbitrary thickness interval of known materials in the phantom. The performance of the LSD estimation technique is evaluated by applying random Gaussian noise to the transmission data. Then, the linear attenuation coefficients with regard to the mean energy of the LSDs are obtained. Secondly, a correction function based on the calculated attenuation coefficients is derived in order to correct the polychromatic raw data. Since a correction function is used to convert the polychromatic data to monochromatic data, the effect of BH in the proposed reconstruction must be reduced in comparison with the polychromatic reconstruction. The proposed approach has been assessed in phantoms involving no more than two materials, but the correction function has been extended for use in phantoms constructed with more than two materials. The relative mean energy difference in the LSD estimations based on the noise-free transmission data was less than 1.5%, and it remained acceptable when random Gaussian noise was applied to the transmission data. The cupping artifact in the proposed reconstruction is effectively reduced, and the reconstruction profile is more uniform than the polychromatic reconstruction profile.
2012-10-01
[Fragmentary record: figure and text residue. The second ODE considered is the pendulum equation, implemented using a Crank-Nicolson-type scheme (text truncated in source); Figure 2 shows a phase-space plot of the pendulum example, with the fine solution (32768 time steps) in black and approximations in cyan and magenta.]
Analyzing degradation data with a random effects spline regression model
Fugate, Michael Lynn; Hamada, Michael Scott; Weaver, Brian Phillip
2017-03-17
This study proposes using a random effects spline regression model to analyze degradation data. Spline regression avoids having to specify a parametric function for the true degradation of an item. A distribution for the spline regression coefficients captures the variation of the true degradation curves from item to item. We illustrate the proposed methodology with a real example using a Bayesian approach. The Bayesian approach allows prediction of degradation of a population over time and estimation of reliability is easy to perform.
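A moment-based stand-in for the hierarchical fit conveys the structure: fit a spline basis per item, then summarize the coefficient cloud by its mean and covariance. The truncated-power basis, simulated data, and least-squares fits below are illustrative; the paper uses a Bayesian approach rather than this two-step shortcut.

```python
import numpy as np

rng = np.random.default_rng(3)
knots = np.array([0.33, 0.66])

def basis(t):
    # Cubic truncated-power spline basis: avoids specifying a parametric
    # form for the true degradation curve.
    cols = [np.ones_like(t), t, t**2, t**3]
    cols += [np.clip(t - k, 0, None)**3 for k in knots]
    return np.column_stack(cols)

t = np.linspace(0, 1, 25)
B = basis(t)

# Simulated degradation data: each item draws its own spline coefficients
# around a population mean, plus measurement noise.
true_mean = np.array([0.0, -1.0, -0.5, 0.2, 0.3, -0.1])
items = [B @ (true_mean + 0.1 * rng.standard_normal(6))
         + 0.02 * rng.standard_normal(25) for _ in range(30)]

coefs = np.array([np.linalg.lstsq(B, y, rcond=None)[0] for y in items])
mu, Sigma = coefs.mean(axis=0), np.cov(coefs.T)   # population distribution
print("population-mean degradation at t=1:", basis(np.array([1.0])) @ mu)
```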
An integrate-over-temperature approach for enhanced sampling.
Gao, Yi Qin
2008-02-14
A simple method is introduced to achieve efficient random walking in the energy space in molecular dynamics simulations which thus enhances the sampling over a large energy range. The approach is closely related to multicanonical and replica exchange simulation methods in that it allows configurations of the system to be sampled in a wide energy range by making use of Boltzmann distribution functions at multiple temperatures. A biased potential is quickly generated using this method and is then used in accelerated molecular dynamics simulations.
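A common way to write this kind of temperature-integrated bias, assumed here for illustration, is U'(x) = -(1/beta0) ln sum_k n_k exp(-beta_k U(x)); the sketch evaluates it on a toy double well to show the barrier flattening. The weights n_k are fixed and equal here, whereas in practice they are iterated to achieve the desired energy-space random walk.

```python
import numpy as np

temps = np.linspace(1.0, 3.0, 8)        # temperature ladder (reduced units)
betas = 1.0 / temps
n_k = np.ones_like(betas) / betas.size  # fixed, equal weights (assumption)
beta0 = betas[0]

def U(x):
    return (np.asarray(x, dtype=float)**2 - 1.0)**2   # toy double well

def U_biased(x):
    # U'(x) = -(1/beta0) * ln( sum_k n_k * exp(-beta_k * U(x)) ),
    # evaluated with a log-sum-exp for numerical stability.
    a = -np.outer(betas, np.atleast_1d(U(x))) + np.log(n_k)[:, None]
    m = a.max(axis=0)
    return -(m + np.log(np.exp(a - m).sum(axis=0))) / beta0

bare = float(U(0.0) - U(1.0))                        # barrier height = 1.0
biased = (U_biased(0.0) - U_biased(1.0)).item()      # noticeably lower
print("bare barrier:", bare, "biased barrier:", round(biased, 3))
```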
Shedding light into the function of the earliest vertebrate skeleton
NASA Astrophysics Data System (ADS)
Martinez-Perez, Carlos; Purnell, Mark; Rayfield, Emily; Donoghue, Philip
2016-04-01
Conodonts are an extinct group of jawless vertebrates, the first in our evolutionary lineage to develop a biomineralized skeleton. As such, the conodont skeleton is of great significance because of the insights it provides concerning the biology and function of the primitive vertebrate skeleton. Conodont function has been debated for a century and a half because of its paleoecological importance in Palaeozoic ecosystems. However, the lack of close extant representatives and the small size of conodont elements (under a millimetre in length) have strongly limited functional analysis, traditionally restricted to analogy. More recently, qualitative approaches have been developed, facilitating tests of element function based on occlusal performance and analysis of microwear and microstructure. In this work we extend these approaches using novel quantitative experimental methods, including Synchrotron Radiation X-ray Tomographic Microscopy and Finite Element Analysis, to test hypotheses of conodont function. The development of high resolution virtual models of conodont elements, together with biomechanical approaches using Finite Element Analysis informed by occlusal and microwear analyses, provided conclusive support for hypotheses of structural adaptation within the crown tissue microstructure, showing close topological co-variation of compressive and tensile stress distributions with different crystallite orientations. In addition, our computational analyses strongly support a tooth-like function for many conodont species. Above all, our study establishes a framework (experimental approach) in which the functional ecology of conodonts can be read from their rich taxonomy and phylogeny, representing an important attempt to understand the role of this abundant and diverse clade in Phanerozoic marine ecosystems.
The Filtered Abel Transform and Its Application in Combustion Diagnostics
NASA Technical Reports Server (NTRS)
Simons, Stephen N. (Technical Monitor); Yuan, Zeng-Guang
2003-01-01
Many non-intrusive combustion diagnosis methods generate line-of-sight projections of a flame field. To reconstruct the spatial field of the measured properties, these projections need to be deconvoluted. When the spatial field is axisymmetric, commonly used deconvolution methods include the Abel transform, the onion peeling method, and the two-dimensional Fourier transform method and its derivatives, such as the filtered back projection methods. This paper proposes a new approach for performing the Abel transform, which possesses the exactness of the Abel transform and the flexibility of incorporating various filters in the reconstruction process. The Abel transform is an exact method and the simplest among these commonly used methods. It is evinced in this paper that all exact reconstruction methods for axisymmetric distributions must be equivalent to the Abel transform because of its uniqueness and exactness. Detailed proof is presented to show that the two-dimensional Fourier method, when applied to axisymmetric cases, is identical to the Abel transform. Discrepancies among the various reconstruction methods stem from the different approximations made to perform numerical calculations. An equation relating the spectrum of a set of projection data to that of the corresponding spatial distribution is obtained, which shows that the spectrum of the projection is equal to the Abel transform of the spectrum of the corresponding spatial distribution. From the equation, if either the projection or the distribution is bandwidth limited, the other is also bandwidth limited, and both have the same bandwidth. If the two are not bandwidth limited, the Abel transform has a bias against low wave number components in most practical cases. This explains why the Abel transform and all exact deconvolution methods are sensitive to high wave number noise. The filtered Abel transform is based on the fact that the Abel transform of filtered projection data is equal to an integral transform of the original projection data with the kernel function being the Abel transform of the filtering function. The kernel function is independent of the projection data and can be obtained separately when the filtering function is selected. Users can select the best filtering function for a particular set of experimental data. Once the kernel function is obtained, it can be applied repeatedly to a number of projection data sets (rows) from the same experiment. When an entire flame image that contains a large number of projection lines needs to be processed, the new approach significantly reduces computational effort in comparison with the conventional approach, in which each projection data set is deconvoluted separately. Computer codes have been developed to perform the filtered Abel transform for an entire flame field. Measured soot volume fraction data of a jet diffusion flame are processed as an example.
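The Abel pair itself is compact enough to verify numerically; the sketch below implements the forward transform and the derivative-based inversion on a Gaussian, whose Abel transform is known in closed form. Discretization choices are illustrative, and the noise sensitivity of the derivative step is exactly what motivates filtering the projection before inversion.

```python
import numpy as np

# Abel pair on [0, R]:
#   forward:  F(y) = 2 * int_y^R f(r) r dr / sqrt(r^2 - y^2)
#   inverse:  f(r) = -(1/pi) * int_r^R F'(y) dy / sqrt(y^2 - r^2)
R, n = 5.0, 400
r = np.linspace(0, R, n)
f = np.exp(-r**2)                         # radial distribution
F_exact = np.sqrt(np.pi) * np.exp(-r**2)  # its exact Abel transform

def abel_forward(f, r):
    F = np.zeros_like(f)
    for i, y in enumerate(r[:-1]):
        rr, ff = r[i + 1:], f[i + 1:]     # skip the singular endpoint r = y
        F[i] = 2.0 * np.trapz(ff * rr / np.sqrt(rr**2 - y**2), rr)
    return F

print("max forward error:", np.abs(abel_forward(f, r) - F_exact)[: n // 2].max())

def abel_inverse(F, r):
    dF = np.gradient(F, r)                # the noise-sensitive step: this is
    g = np.zeros_like(F)                  # why filtering the projection helps
    for i, y in enumerate(r[:-1]):
        rr, dd = r[i + 1:], dF[i + 1:]
        g[i] = -np.trapz(dd / np.sqrt(rr**2 - y**2), rr) / np.pi
    return g

print("max inversion error:", np.abs(abel_inverse(F_exact, r) - f)[: n // 2].max())
```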
Genome-Wide Association Study of the Genetic Determinants of Emphysema Distribution
Boueiz, Adel; Lutz, Sharon M.; Cho, Michael H.; Hersh, Craig P.; Bowler, Russell P.; Washko, George R.; Halper-Stromberg, Eitan; Bakke, Per; Gulsvik, Amund; Laird, Nan M.; Beaty, Terri H.; Coxson, Harvey O.; Crapo, James D.; Silverman, Edwin K.; Castaldi, Peter J.
2017-01-01
Rationale: Emphysema has considerable variability in the severity and distribution of parenchymal destruction throughout the lungs. Upper lobe–predominant emphysema has emerged as an important predictor of response to lung volume reduction surgery. Yet, aside from alpha-1 antitrypsin deficiency, the genetic determinants of emphysema distribution remain largely unknown. Objectives: To identify the genetic influences of emphysema distribution in non–alpha-1 antitrypsin–deficient smokers. Methods: A total of 11,532 subjects with complete genotype and computed tomography densitometry data in the COPDGene (Genetic Epidemiology of Chronic Obstructive Pulmonary Disease [COPD]; non-Hispanic white and African American), ECLIPSE (Evaluation of COPD Longitudinally to Identify Predictive Surrogate Endpoints), and GenKOLS (Genetics of Chronic Obstructive Lung Disease) studies were analyzed. Two computed tomography scan emphysema distribution measures (difference between upper-third and lower-third emphysema; ratio of upper-third to lower-third emphysema) were tested for genetic associations in all study subjects. Separate analyses in each study population were followed by a fixed-effects meta-analysis. Single-nucleotide polymorphism–, gene-, and pathway-based approaches were used. In silico functional evaluation was also performed. Measurements and Main Results: We identified five loci associated with emphysema distribution at genome-wide significance. These loci included two previously reported associations with COPD susceptibility (4q31 near HHIP and 15q25 near CHRNA5) and three new associations near SOWAHB, TRAPPC9, and KIAA1462. Gene set analysis and in silico functional evaluation revealed pathways and cell types that may potentially contribute to the pathogenesis of emphysema distribution. Conclusions: This multicohort genome-wide association study identified new genomic loci associated with differential emphysematous destruction throughout the lungs. These findings may point to new biologic pathways on which to expand diagnostic and therapeutic approaches in chronic obstructive pulmonary disease. Clinical trial registered with www.clinicaltrials.gov (NCT 00608764). PMID:27669027
Pacific Yew: A Facultative Riparian Conifer with an Uncertain Future
Stanley Scher; Bert Schwarzschild
1989-01-01
Increasing demands for Pacific yew bark, a source of an anticancer agent, have generated interest in defining the yew resource and in exploring strategies to conserve this species. The distribution, riparian requirements and ecosystem functions of yew populations in coastal and inland forests of northern California are outlined and alternative approaches to conserving...
Portraits of Principal Practice: Time Allocation and School Principal Work
ERIC Educational Resources Information Center
Sebastian, James; Camburn, Eric M.; Spillane, James P.
2018-01-01
Purpose: The purpose of this study was to examine how school principals in urban settings distributed their time working on critical school functions. We also examined who principals worked with and how their time allocation patterns varied by school contextual characteristics. Research Method/Approach: The study was conducted in an urban school…
Probabilistic Assessment of Radiation Risk for Solar Particle Events
NASA Technical Reports Server (NTRS)
Kim, Myung-Hee Y.; Cucinotta, Francis A.
2008-01-01
For long duration missions outside of the protection of the Earth's magnetic field, exposure to solar particle events (SPEs) is a major safety concern for crew members during extra-vehicular activities (EVAs) on the lunar surface or during Earth-to-Moon or Earth-to-Mars transit. The large majority (90%) of SPEs have small or no health consequences because the doses are low and the particles do not penetrate to organ depths. However, there is an operational challenge in responding to events of unknown size and duration. We have developed a probabilistic approach to SPE risk assessment in support of mission design and operational planning. Using the historical database of proton measurements during the past 5 solar cycles, the functional form of the hazard function of SPE occurrence per cycle was found for a nonhomogeneous Poisson model. A typical hazard function was defined as a function of time within a non-specific future solar cycle of 4000 days duration. Distributions of particle fluences for a specified mission period were simulated, ranging from the 5th to the 95th percentile. Organ doses from large SPEs were assessed using NASA's baryon transport model, BRYNTRN. The SPE risk was analyzed with the organ dose distribution for the given particle fluences during a mission period. In addition to the total particle fluences of SPEs, the detailed energy spectra of protons, especially at high energies, were recognized as extremely important for assessing the cancer risk associated with energetic particles for large events. The probability of exceeding the NASA 30-day limit for blood forming organ (BFO) dose inside a typical spacecraft was calculated for various SPE sizes. This probabilistic approach to SPE protection will be combined with a probabilistic approach to the radiobiological factors that contribute to the uncertainties in projecting cancer risks in future work.
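As a rough illustration of the occurrence part of such a model, the sketch below draws SPE event times from a nonhomogeneous Poisson process by thinning; the bell-shaped hazard, its parameters, and the function names are invented for illustration and are not the fitted form from the 5-cycle proton database.

```python
import numpy as np

rng = np.random.default_rng(1)

def hazard(t, cycle_len=4000.0, peak=0.05):
    """Illustrative bell-shaped SPE rate (events/day) peaking mid-cycle;
    the actual functional form fitted to 5 solar cycles is not reproduced here."""
    return peak * np.exp(-0.5 * ((t - cycle_len / 2) / 600.0) ** 2)

def sample_spe_times(cycle_len=4000.0, lam_max=0.05):
    """Thinning (Lewis-Shedler) sampler for a nonhomogeneous Poisson process:
    propose events at the bounding rate lam_max, accept with prob hazard(t)/lam_max."""
    t, events = 0.0, []
    while t < cycle_len:
        t += rng.exponential(1.0 / lam_max)
        if t < cycle_len and rng.random() < hazard(t) / lam_max:
            events.append(t)
    return np.array(events)

times = sample_spe_times()
print(f"{times.size} simulated SPEs in one cycle")
```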
Backscattering from a Gaussian distributed, perfectly conducting, rough surface
NASA Technical Reports Server (NTRS)
Brown, G. S.
1977-01-01
The problem of scattering by random surfaces possessing many scales of roughness is analyzed. The approach is applicable to bistatic scattering from dielectric surfaces; however, this specific analysis is restricted to backscattering from a perfectly conducting surface in order to illustrate the method more clearly. The surface is assumed to be Gaussian distributed so that the surface height can be split into large and small scale components, relative to the electromagnetic wavelength. A first order perturbation approach is employed wherein the scattering solution for the large scale structure is perturbed by the small scale diffraction effects. The scattering from the large scale structure is treated via geometrical optics techniques. The effect of the large scale surface structure is shown to be equivalent to a convolution in k-space of the height spectrum with the following: the shadowing function, a polarization and surface slope dependent function, and a Gaussian factor resulting from the unperturbed geometrical optics solution. This solution provides a continuous transition between the near normal incidence geometrical optics and wide angle Bragg scattering results.
Laminar Motion of the Incompressible Fluids in Self-Acting Thrust Bearings with Spiral Grooves
Velescu, Cornel; Popa, Nicolae Calin
2014-01-01
We analyze the laminar motion of incompressible fluids in self-acting thrust bearings with spiral grooves with inner or external pumping. The purpose of the study is to find some mathematical relations useful to approach the theoretical functionality of these bearings having magnetic controllable fluids as incompressible fluids, in the presence of a controllable magnetic field. This theoretical study approaches the permanent motion regime. To validate the theoretical results, we compare them to some experimental results presented in previous papers. The laminar motion of incompressible fluids in bearings is described by the fundamental equations of fluid dynamics. We developed and particularized these equations by taking into consideration the geometrical and functional characteristics of these hydrodynamic bearings. Through the integration of the differential equation, we determined the pressure and speed distributions in bearings with length in the “pumping” direction. These pressure and speed distributions offer important information, both quantitative (concerning the bearing performances) and qualitative (evidence of the viscous-inertial effects, the fluid compressibility, etc.), for the laminar and permanent motion regime. PMID:24526896
A new method for calculating differential distributions directly in Mellin space
NASA Astrophysics Data System (ADS)
Mitov, Alexander
2006-12-01
We present a new method for the calculation of differential distributions directly in Mellin space without recourse to the usual momentum-fraction (or z-) space. The method is completely general and can be applied to any process. It is based on solving the integration-by-parts identities when one of the powers of the propagators is an abstract number. The method retains the full dependence on the Mellin variable and can be implemented in any program for solving the IBP identities based on algebraic elimination, like Laporta. General features of the method are: (1) faster reduction, (2) smaller number of master integrals compared to the usual z-space approach and (3) the master integrals satisfy difference instead of differential equations. This approach generalizes previous results related to fully inclusive observables like the recently calculated three-loop space-like anomalous dimensions and coefficient functions in inclusive DIS to more general processes requiring separate treatment of the various physical cuts. Many possible applications of this method exist, the most notable being the direct evaluation of the three-loop time-like splitting functions in QCD.
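As a peripheral illustration of working with exact Mellin moments (not of the IBP reduction itself, which requires dedicated machinery), the toy computation below keeps the full dependence on the moment variable N for a simple momentum-fraction distribution; the exponents and names are arbitrary assumptions.

```python
import sympy as sp

x = sp.symbols('x', positive=True)
N = sp.symbols('N', positive=True)
f = x**sp.Rational(-1, 2) * (1 - x)**3   # toy momentum-fraction distribution

# Mellin moment M(N) = int_0^1 x**(N-1) f(x) dx, evaluated in closed form in N,
# so the full N dependence is retained rather than sampled at integer moments
M = sp.simplify(sp.integrate(x**(N - 1) * f, (x, 0, 1), conds='none'))
print(M)
```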
Comparison of volatility function technique for risk-neutral densities estimation
NASA Astrophysics Data System (ADS)
Bahaludin, Hafizah; Abdullah, Mimi Hafizah
2017-08-01
Volatility function techniques using an interpolation approach play an important role in extracting the risk-neutral density (RND) of options. The aim of this study is to compare the performances of two interpolation approaches, namely smoothing spline and fourth-order polynomial, in extracting the RND. The implied volatility of options with respect to strike price/delta is interpolated to obtain a well-behaved density. The statistical analysis and forecast accuracy are tested using moments of the distribution. The difference between the first moment of the distribution and the price of the underlying asset at maturity is used as an input to analyze forecast accuracy. RNDs are extracted from Dow Jones Industrial Average (DJIA) index options with a one-month constant maturity for the period from January 2011 until December 2015. The empirical results suggest that estimation of the RND using a fourth-order polynomial is more appropriate than a smoothing spline, with the fourth-order polynomial giving the lowest mean square error (MSE). The results can help market participants capture market expectations of future developments of the underlying asset.
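A minimal sketch of the polynomial variant follows, using the standard Breeden-Litzenberger relation q(K) = exp(rT) * d2C/dK2 applied to call prices rebuilt from a fourth-order polynomial fit of implied volatility; the market snapshot numbers are invented and the smile fit is deliberately crude.

```python
import numpy as np
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Black-Scholes call price, vectorized over strikes."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

# hypothetical market snapshot (strikes and implied vols); values are made up
S, T, r = 100.0, 1 / 12, 0.01
K_obs = np.array([80, 90, 95, 100, 105, 110, 120.0])
iv_obs = np.array([0.32, 0.27, 0.25, 0.24, 0.235, 0.24, 0.27])

coef = np.polyfit(K_obs, iv_obs, 4)          # fourth-order polynomial in strike
K = np.linspace(82, 118, 400)
C = bs_call(S, K, T, r, np.polyval(coef, K))

# Breeden-Litzenberger: q(K) = exp(rT) * d2C/dK2, via second differences
dK = K[1] - K[0]
rnd = np.exp(r * T) * np.diff(C, 2) / dK**2
print(rnd.sum() * dK)                        # probability mass inside the window, < 1
```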
Nonprincipal plane scattering of flat plates and pattern control of horn antennas
NASA Technical Reports Server (NTRS)
Balanis, Constantine A.; Polka, Lesley A.; Liu, Kefeng
1989-01-01
With the geometrical theory of diffraction, the traditional method of high-frequency scattering analysis, prediction of the radar cross section of a perfectly conducting, flat, rectangular plate is limited to principal planes. Part A of this report predicts the radar cross section in nonprincipal planes using the method of equivalent currents. This technique is based on an asymptotic end-point reduction of the surface radiation integrals for an infinite wedge and enables nonprincipal plane prediction. The predicted radar cross sections for both horizontal and vertical polarizations are compared to moment method results and experimental data from Arizona State University's anechoic chamber. In part B, a variational calculus approach to pattern control of the horn antenna is outlined. The approach starts with optimization of the aperture field distribution so that control of the radiation pattern over a range of directions can be realized. A control functional is thus formulated. Next, a spectral analysis method is introduced to solve for the eigenfunctions from the extremal condition of the formulated functional. Solutions for the optimized aperture field distribution are then obtained.
Production of W+W- pairs via the γ*γ* → W+W- subprocess with photon transverse momenta
NASA Astrophysics Data System (ADS)
Łuszczak, Marta; Schäfer, Wolfgang; Szczurek, Antoni
2018-05-01
We discuss production of W+W- pairs in proton-proton collisions induced by two-photon fusion including, for the first time, the transverse momenta of the incoming photons. The unintegrated inelastic fluxes of photons (related to proton dissociation) are calculated based on modern parametrizations of deep inelastic structure functions over a broad range of their arguments (x and Q²). In our approach we can obtain separate contributions of the different W helicity states. Several one- and two-dimensional differential distributions are shown and discussed. The present results are compared to the results of previous calculations within the collinear factorization approach. Similar results are found except for some observables, e.g. the transverse momentum of the W+W- pair. We find large contributions to the cross section from the region of large photon virtualities. We show the decomposition of the total cross section, as well as the invariant mass distribution, into the polarisation states of both W bosons. The role of the longitudinal FL structure function is quantified. Its inclusion leads to a 4-5% decrease of the cross section, almost independent of MWW.
Direct Reconstruction of Two-Dimensional Currents in Thin Films from Magnetic-Field Measurements
NASA Astrophysics Data System (ADS)
Meltzer, Alexander Y.; Levin, Eitan; Zeldov, Eli
2017-12-01
An accurate determination of microscopic transport and magnetization currents is of central importance for the study of the electric properties of low-dimensional materials and interfaces, of superconducting thin films, and of electronic devices. Current distribution is usually derived from the measurement of the perpendicular component of the magnetic field above the surface of the sample, followed by numerical inversion of the Biot-Savart law. The inversion is commonly obtained by deriving the current stream function g, which is then differentiated in order to obtain the current distribution. However, this two-step procedure requires filtering at each step and, as a result, oversmooths the solution. To avoid this oversmoothing, we develop a direct procedure for inversion of the magnetic field that avoids use of the stream function. This approach provides enhanced accuracy of current reconstruction over a wide range of noise levels. We further introduce a reflection procedure that allows for the reconstruction of currents that cross the boundaries of the measurement window. The effectiveness of our approach is demonstrated by several numerical examples.
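For contrast, a compact sketch of the conventional two-step stream-function inversion that the authors improve upon is given below; the Wiener-style regularization constant and all names are assumptions, and periodic boundaries are implied by the FFT.

```python
import numpy as np

def invert_bz_to_current(bz, dx, height, eps=1e-3):
    """Conventional two-step inversion (sketch): recover the stream function g
    from the out-of-plane field Bz in Fourier space, then differentiate.
    bz is a square 2D map sampled a distance `height` above the film (SI units);
    eps regularizes the deconvolution, a crude stand-in for proper filtering."""
    mu0 = 4e-7 * np.pi
    n = bz.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(k, k, indexing='xy')
    kk = np.hypot(kx, ky)
    # forward kernel for a thin film: Bz(k) = (mu0/2) * k * exp(-k*height) * g(k)
    kernel = 0.5 * mu0 * kk * np.exp(-kk * height)
    g_k = np.fft.fft2(bz) * kernel / (kernel**2 + eps * kernel.max()**2)
    jx = np.real(np.fft.ifft2(1j * ky * g_k))     # Jx =  dg/dy
    jy = np.real(np.fft.ifft2(-1j * kx * g_k))    # Jy = -dg/dx
    return jx, jy
```

The regularized division is where the filtering (and hence the oversmoothing the paper targets) enters this conventional scheme.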
Lexical is as lexical does: computational approaches to lexical representation
Woollams, Anna M.
2015-01-01
In much of neuroimaging and neuropsychology, regions of the brain have been associated with ‘lexical representation’, with little consideration as to what this cognitive construct actually denotes. Within current computational models of word recognition, there are a number of different approaches to the representation of lexical knowledge. Structural lexical representations, found in original theories of word recognition, have been instantiated in modern localist models. However, such a representational scheme lacks neural plausibility in terms of economy and flexibility. Connectionist models have therefore adopted distributed representations of form and meaning. Semantic representations in connectionist models necessarily encode lexical knowledge. Yet when equipped with recurrent connections, connectionist models can also develop attractors for familiar forms that function as lexical representations. Current behavioural, neuropsychological and neuroimaging evidence shows a clear role for semantic information, but also suggests some modality- and task-specific lexical representations. A variety of connectionist architectures could implement these distributed functional representations, and further experimental and simulation work is required to discriminate between these alternatives. Future conceptualisations of lexical representations will therefore emerge from a synergy between modelling and neuroscience. PMID:25893204
NASA Astrophysics Data System (ADS)
Xia, Xiuli; Shao, Yuanzhi
2018-02-01
We report the magneto-electric behavior of a dual-modality biomedical nanoprobe, a ternary nanosystem consisting of gold and gadolinia clusters and water molecules, focusing on the effect of both nanoclusters on the structural and electronic properties of water. The hydrogen-oxygen bond lengths and angles, as well as the electronic charges of water molecules surrounding both nanoclusters, were calculated using Hubbard-U-corrected density functional theory aided by a molecular dynamics approach. The calculations reveal the existence of a magneto-electric interaction between the gold and gadolinium oxide nanoclusters, which influences the physical properties of the surrounding water remarkably. A broader (narrower) distribution of H-O bond lengths (H-O-H bond angles) was observed in the presence of either gold or gadolinia nanoclusters. The presence of the Gd6O9 cluster leads to larger charges on neighbouring oxygen atoms. The distribution of oxygen atom charges becomes broader when both Gd6O9 and Au13 clusters coexist. Ab initio calculation provides a feasible approach to exploring the most essential interactions among the functional components of a multimodal nanoprobe applied in an aqueous environment.
A seismological model for earthquakes induced by fluid extraction from a subsurface reservoir
NASA Astrophysics Data System (ADS)
Bourne, S. J.; Oates, S. J.; van Elk, J.; Doornhof, D.
2014-12-01
A seismological model is developed for earthquakes induced by subsurface reservoir volume changes. The approach is based on the work of Kostrov and McGarr linking total strain to the summed seismic moment in an earthquake catalog. We refer to the fraction of the total strain expressed as seismic moment as the strain partitioning function, α. A probability distribution for total seismic moment as a function of time is derived from an evolving earthquake catalog. The moment distribution is taken to be a Pareto Sum Distribution, with confidence bounds estimated using approximations given by Zaliapin et al. In this way available seismic moment is expressed in terms of reservoir volume change and hence compaction in the case of a depleting reservoir. The Pareto Sum Distribution for moment and the Pareto Distribution underpinning the Gutenberg-Richter Law are sampled using Monte Carlo methods to simulate synthetic earthquake catalogs for subsequent estimation of seismic ground motion hazard. We demonstrate the method by applying it to the Groningen gas field. A compaction model for the field, calibrated using various geodetic data, allows reservoir strain due to gas extraction to be expressed as a function of both spatial position and time since the start of production. Fitting with a generalized logistic function gives an empirical expression for the dependence of α on reservoir compaction. Probability density maps for earthquake event locations can then be calculated from the compaction maps. Predicted seismic moment is shown to be strongly dependent on planned gas production.
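To illustrate the Monte Carlo ingredient, the sketch below draws magnitudes from the Gutenberg-Richter law and accumulates total seismic moment over synthetic catalogs; the b-value, magnitude cutoff, event count, and moment-magnitude conversion are generic textbook choices, not the Groningen calibration.

```python
import numpy as np

rng = np.random.default_rng(0)

def synthetic_catalog(n_events, b=1.0, m_min=1.5):
    """Draw magnitudes from the Gutenberg-Richter law (exponential in magnitude,
    i.e. Pareto in seismic moment) and convert to moments in N*m via the
    Hanks-Kanamori relation M0 = 10**(1.5*Mw + 9.1)."""
    m = m_min + rng.exponential(scale=1.0 / (b * np.log(10)), size=n_events)
    return 10 ** (1.5 * m + 9.1)

# distribution of total moment for a fixed event count: a toy stand-in for the
# paper's Pareto Sum Distribution with catalog-derived parameters
totals = np.array([synthetic_catalog(200).sum() for _ in range(5000)])
print(np.percentile(totals, [5, 50, 95]))    # heavy upper tail from rare large events
```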
NASA Astrophysics Data System (ADS)
Ramnath, Vishal
2017-11-01
In the field of pressure metrology the effective area is Ae = A0(1 + λP), where A0 is the zero-pressure area and λ is the distortion coefficient, and the conventional practice is to construct univariate probability density functions (PDFs) for A0 and λ. As a result, analytical generalized non-Gaussian bivariate joint PDFs have not featured prominently in pressure metrology. Recently, extended-lambda-distribution-based quantile functions have been successfully utilized for summarizing univariate arbitrary PDFs of gas pressure balances. Motivated by this development, we investigate the feasibility and utility of extending and applying quantile functions to systems which naturally exhibit bivariate PDFs. Our approach is to utilize the GUM Supplement 1 methodology to solve and generate Monte Carlo based multivariate uncertainty data for an oil-based pressure balance laboratory standard that is used to generate known high pressures, which are in turn cross-floated against another pressure balance transfer standard in order to deduce the transfer standard's area. We then numerically analyse the uncertainty data by formulating and constructing an approximate bivariate quantile distribution that directly couples A0 and λ, in order to compare and contrast its accuracy to an exact GUM Supplement 2 based uncertainty quantification analysis.
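A minimal GUM-Supplement-1-style Monte Carlo sketch for the effective-area model is shown below; every numerical value is invented, and a correlated Gaussian pair stands in for the non-Gaussian bivariate PDF the paper actually constructs.

```python
import numpy as np

rng = np.random.default_rng(42)
M = 200_000                                  # Monte Carlo trials, GUM-S1 style

# Hypothetical cross-float results: all numbers below are illustrative
mu = [1.96115e-5, 6.0e-13]                   # A0 [m^2], lambda [1/Pa]
s_A0, s_lam, rho = 4.0e-10, 5.0e-14, -0.8    # assumed standard uncertainties, correlation
cov = [[s_A0**2, rho * s_A0 * s_lam],
       [rho * s_A0 * s_lam, s_lam**2]]
A0, lam = rng.multivariate_normal(mu, cov, size=M).T

P = 100e6                                    # 100 MPa working point
Ae = A0 * (1.0 + lam * P)                    # effective-area draws
print(np.percentile(Ae, [2.5, 50, 97.5]))    # coverage interval from quantiles
```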
Analytic Evolution of Singular Distribution Amplitudes in QCD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tandogan Kunkel, Asli
2014-08-01
Distribution amplitudes (DAs) are the basic functions that contain information about the quark momentum. DAs are necessary to describe hard exclusive processes in quantum chromodynamics. We describe a method of analytic evolution of DAs that have singularities, such as nonzero values at the end points of the support region, jumps at some points inside the support region, and cusps. We illustrate the method by applying it to the evolution of a flat (constant) DA and an antisymmetric flat DA, and then use the method for evolution of the two-photon generalized distribution amplitude. Our approach to DA evolution has advantages over the standard method of expansion in Gegenbauer polynomials [1, 2] and over a straightforward iteration of an initial distribution with the evolution kernel. Expansion in Gegenbauer polynomials requires an infinite number of terms in order to accurately reproduce functions in the vicinity of singular points. Straightforward iteration of an initial distribution produces logarithmically divergent terms at each iteration. In our method the logarithmic singularities are summed from the start, which immediately produces a continuous curve. Afterwards, in order to get precise results, only one or two iterations are needed.
[Sex differentiation in plants. Terms and notions].
Godin, V N
2007-01-01
There are two methodological approaches to the study of sex in plants: the descriptive-morphological approach and the quantitative approach. The former is based exclusively on the external morphological peculiarities of the generative organs of the flower; the latter is based on the functioning of individuals as parents of the coming generation. It has been suggested to recognize three flower types: staminate, pistillate, and complete. Depending on the distribution pattern of flowers of different sex types, there are monomorphic populations (all individuals form flowers of the same type) and heteromorphic populations (individuals have flowers of different types). Monomorphic populations include monoclinous, monoecious, gynomonoecious, andromonoecious, and polygamomonoecious ones. Among heteromorphic populations, dioecious, polygamodioecious, subdioecious, paradioecious, and trioecious ones are recognized. It is desirable to give up the usage of such terms as "bisexual", "polygamous", "functionally female", and "functionally male" flowers, "temporary dioecy", and some others. The notion "gender" has been established in English-language works for describing sex quantitatively; two additional terms have been proposed: "phenotypic gender" and "functional gender". The recently developed quantitative approach is at present in the process of accumulating material and is in need of further elaboration of its methodological basis. Analysis of the principal notions shows the necessity of forming their integrated structure and of correcting the usage of existing and new terms.
Desired Precision in Multi-Objective Optimization: Epsilon Archiving or Rounding Objectives?
NASA Astrophysics Data System (ADS)
Asadzadeh, M.; Sahraei, S.
2016-12-01
Multi-objective optimization (MO) aids in supporting the decision making process in water resources engineering and design problems. One of the main goals of solving a MO problem is to archive a set of solutions that is well-distributed across a wide range of all the design objectives. Modern MO algorithms use the epsilon dominance concept to define a mesh with pre-defined grid-cell size (often called epsilon) in the objective space and archive at most one solution at each grid-cell. Epsilon can be set to the desired precision level of each objective function to make sure that the difference between each pair of archived solutions is meaningful. This epsilon archiving process is computationally expensive in problems that have quick-to-evaluate objective functions. This research explores the applicability of a similar but computationally more efficient approach to respect the desired precision level of all objectives in the solution archiving process. In this alternative approach each objective function is rounded to the desired precision level before comparing any new solution to the set of archived solutions that already have rounded objective function values. This alternative solution archiving approach is compared to the epsilon archiving approach in terms of efficiency and quality of archived solutions for solving mathematical test problems and hydrologic model calibration problems.
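A minimal sketch of the rounding alternative is given below, assuming minimization and invented helper names; snapping objectives to the precision grid before a plain non-dominance check plays the role of the epsilon-dominance mesh.

```python
import numpy as np

def dominates(a, b):
    """Standard Pareto dominance for minimization."""
    return np.all(a <= b) and np.any(a < b)

def archive_rounded(candidates, eps):
    """Alternative archiving explored in the abstract (a minimal sketch):
    snap each objective vector to a grid of cell size eps (the desired
    precision), then keep only mutually non-dominated snapped points."""
    eps = np.asarray(eps, dtype=float)
    archive = []
    for c in candidates:
        r = np.round(np.asarray(c) / eps) * eps   # round to desired precision
        if any(dominates(a, r) or np.array_equal(a, r) for a in archive):
            continue                              # dominated or duplicate grid cell
        archive = [a for a in archive if not dominates(r, a)]
        archive.append(r)
    return archive

pts = np.random.default_rng(3).random((500, 2))   # stand-in objective vectors
print(len(archive_rounded(pts, eps=[0.05, 0.05])))
```

Rounding is done once per candidate, so the per-solution cost stays close to a plain dominance check instead of the grid bookkeeping of epsilon archiving.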
Simple Kinematic Pathway Approach (KPA) to Catchment-scale Travel Time and Water Age Distributions
NASA Astrophysics Data System (ADS)
Soltani, S. S.; Cvetkovic, V.; Destouni, G.
2017-12-01
The distribution of catchment-scale water travel times is strongly influenced by morphological dispersion and is partitioned between hillslope and larger, regional scales. We explore whether hillslope travel times are predictable using a simple semi-analytical "kinematic pathway approach" (KPA) that accounts for dispersion on two levels of morphological and macro-dispersion. The study gives new insights to shallow (hillslope) and deep (regional) groundwater travel times by comparing numerical simulations of travel time distributions, referred to as "dynamic model", with corresponding KPA computations for three different real catchment case studies in Sweden. KPA uses basic structural and hydrological data to compute transient water travel time (forward mode) and age (backward mode) distributions at the catchment outlet. Longitudinal and morphological dispersion components are reflected in KPA computations by assuming an effective Peclet number and topographically driven pathway length distributions, respectively. Numerical simulations of advective travel times are obtained by means of particle tracking using the fully-integrated flow model MIKE SHE. The comparison of computed cumulative distribution functions of travel times shows significant influence of morphological dispersion and groundwater recharge rate on the compatibility of the "kinematic pathway" and "dynamic" models. Zones of high recharge rate in "dynamic" models are associated with topographically driven groundwater flow paths to adjacent discharge zones, e.g. rivers and lakes, through relatively shallow pathway compartments. These zones exhibit more compatible behavior between "dynamic" and "kinematic pathway" models than the zones of low recharge rate. Interestingly, the travel time distributions of hillslope compartments remain almost unchanged with increasing recharge rates in the "dynamic" models. This robust "dynamic" model behavior suggests that flow path lengths and travel times in shallow hillslope compartments are controlled by topography, and therefore application and further development of the simple "kinematic pathway" approach is promising for their modeling.
A κ-generalized statistical mechanics approach to income analysis
NASA Astrophysics Data System (ADS)
Clementi, F.; Gallegati, M.; Kaniadakis, G.
2009-02-01
This paper proposes a statistical mechanics approach to the analysis of income distribution and inequality. A new distribution function, having its roots in the framework of κ-generalized statistics, is derived that is particularly suitable for describing the whole spectrum of incomes, from the low-middle income region up to the high income Pareto power-law regime. Analytical expressions for the shape, moments and some other basic statistical properties are given. Furthermore, several well-known econometric tools for measuring inequality, which all exist in a closed form, are considered. A method for parameter estimation is also discussed. The model is shown to fit the data on personal income for the United States remarkably well, and the analysis of inequality performed in terms of its parameters proves very powerful.
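For concreteness, a small sketch of the κ-deformed exponential and the resulting complementary CDF is given below; the parameter values are illustrative, not the United States estimates reported in the paper.

```python
import numpy as np

def exp_kappa(x, kappa):
    """Kaniadakis kappa-exponential; reduces to exp(x) as kappa -> 0."""
    return (np.sqrt(1.0 + kappa**2 * x**2) + kappa * x) ** (1.0 / kappa)

def ccdf_income(x, alpha, beta, kappa):
    """Complementary CDF of the kappa-generalized income model:
    P(X > x) = exp_kappa(-(x/beta)**alpha), exponential-like in the body and
    approaching a Pareto power law with exponent alpha/kappa in the tail."""
    return exp_kappa(-(x / beta) ** alpha, kappa)

# illustrative parameters only
x = np.logspace(-1, 2, 7)
print(ccdf_income(x, alpha=2.0, beta=1.0, kappa=0.7))
```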
Using Bayesian networks to support decision-focused information retrieval
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lehner, P.; Elsaesser, C.; Seligman, L.
This paper describes an approach to controlling the process of pulling data/information from distributed databases in a way that is specific to a person's decision-making context. Our prototype implementation of this approach uses a knowledge-based planner to generate a plan, an automatically constructed Bayesian network to evaluate the plan, specialized processing of the network to derive key information items that would substantially impact the evaluation of the plan (e.g., determining that replanning is needed), and automated construction of Standing Requests for Information (SRIs), which are automated functions that monitor changes and trends in distributed databases that are relevant to the key information items. The emphasis of this paper is on how Bayesian networks are used.
Numerical modeling of sorption kinetics of organic compounds to soil and sediment particles
NASA Astrophysics Data System (ADS)
Wu, Shian-chee; Gschwend, Phillip M.
1988-08-01
A numerical model is developed to simulate hydrophobic organic compound sorption kinetics, based on a retarded intraaggregate diffusion conceptualization of this solid-water exchange process. This model was used to ascertain the sensitivity of the sorption process for various sorbates to nonsteady solution concentrations and to polydisperse soil or sediment aggregate particle size distributions. Common approaches to modeling sorption kinetics amount to simplifications of our model and appear justified only when (1) the concentration fluctuations occur on a time scale which matches the sorption timescale of interest and (2) the particle size distribution is relatively narrow. Finally, a means is provided to estimate the extent of approach of a sorbing system to equilibrium as a function of aggregate size, chemical diffusivity and hydrophobicity, and system solids concentration.
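A toy version of the retarded intraaggregate diffusion picture is sketched below: explicit finite differences for radial diffusion into a sphere with a retardation factor, tracking the fractional approach to equilibrium; the geometry, coefficients, and names are illustrative assumptions, not the paper's calibrated model.

```python
import numpy as np

def sphere_uptake(D=1e-10, R_f=50.0, a=1e-4, n=60, t_end=5e3):
    """Explicit FD solution of retarded diffusion into a sphere of radius a [m]:
    dC/dt = (D/R_f) * (1/r^2) d/dr(r^2 dC/dr), C(a,t)=1, C(r,0)=0.
    Returns the volume-averaged concentration (fractional approach to
    equilibrium) sampled along the run."""
    Deff = D / R_f                             # retarded effective diffusivity
    dr = a / n
    dt = 0.2 * dr**2 / Deff                    # stability-limited time step
    r = np.linspace(0.0, a, n + 1)
    C = np.zeros(n + 1)
    C[-1] = 1.0                                # constant bulk concentration at r = a
    out = []
    for step in range(int(t_end / dt)):
        lap = np.zeros_like(C)
        lap[1:-1] = (C[2:] - 2 * C[1:-1] + C[:-2]) / dr**2 \
                    + (C[2:] - C[:-2]) / (r[1:-1] * dr)
        lap[0] = 6.0 * (C[1] - C[0]) / dr**2   # symmetry condition at the center
        C[:-1] += dt * Deff * lap[:-1]
        if step % 500 == 0:
            out.append(np.trapz(C * r**2, r) / np.trapz(r**2, r))
    return np.array(out)

print(sphere_uptake()[-1])                     # approaches 1 as the aggregate equilibrates
```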
Detailed validation of the bidirectional effect in various Case I and Case II waters.
Gleason, Arthur C R; Voss, Kenneth J; Gordon, Howard R; Twardowski, Michael; Sullivan, James; Trees, Charles; Weidemann, Alan; Berthon, Jean-François; Clark, Dennis; Lee, Zhong-Ping
2012-03-26
Simulated bidirectional reflectance distribution functions (BRDF) were compared with measurements made just beneath the water's surface. In Case I water, the set of simulations that varied the particle scattering phase function depending on chlorophyll concentration agreed more closely with the data than other models. In Case II water, however, the simulations using fixed phase functions agreed well with the data and were nearly indistinguishable from each other, on average. The results suggest that BRDF corrections in Case II water are feasible using single, average, particle scattering phase functions, but that the existing approach using variable particle scattering phase functions is still warranted in Case I water.