Science.gov

Sample records for probability-density estimation method

  1. Structural Reliability Using Probability Density Estimation Methods Within NESSUS

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C. (Technical Monitor); Godines, Cody Ric

    2003-01-01

    A reliability analysis studies a mathematical model of a physical system taking into account uncertainties of design variables, and common results are estimations of a response density, which also implies estimations of its parameters. Some common density parameters include the mean value, the standard deviation, and specific percentile(s) of the response, which are measures of central tendency, variation, and probability regions, respectively. Reliability analyses are important since the results can lead to different designs by calculating the probability of observing safe responses in each of the proposed designs. All of this is done at the expense of added computational time as compared to a single deterministic analysis, which will result in one value of the response out of many that make up the density of the response. Sampling methods, such as Monte Carlo (MC) and Latin hypercube sampling (LHS), can be used to perform reliability analyses and can compute nonlinear response density parameters even if the response is dependent on many random variables. Hence, both methods are very robust; however, they are computationally expensive to use in the estimation of the response density parameters. Both methods are 2 of the 13 stochastic methods that are contained within the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) program. NESSUS is a probabilistic finite element analysis (FEA) program that was developed through funding from NASA Glenn Research Center (GRC). It has the additional capability of being linked to other analysis programs; therefore, probabilistic fluid dynamics, fracture mechanics, and heat transfer are only a few of what is possible with this software. The LHS method is the newest addition to the stochastic methods within NESSUS. Part of this work was to enhance NESSUS with the LHS method. The new LHS module is complete, has been successfully integrated with NESSUS, and has been used to study four different test cases that have been
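
    As a rough illustration of the sampling comparison described above (not the NESSUS implementation itself), the sketch below estimates response density parameters (mean, standard deviation, 99th percentile) of a hypothetical two-variable response with plain Monte Carlo and with Latin hypercube sampling; the response function and the input distributions are illustrative assumptions.

    ```python
    # Hedged sketch: compare Monte Carlo and Latin hypercube sampling for
    # estimating response density parameters. The response g() and the input
    # distributions are illustrative assumptions, not the NESSUS test cases.
    import numpy as np
    from scipy.stats import norm, qmc

    def g(x):
        # hypothetical response function of two random design variables
        return x[:, 0] ** 2 + 3.0 * x[:, 1]

    rng = np.random.default_rng(0)
    n = 2000
    mu, sigma = np.array([10.0, 5.0]), np.array([1.0, 0.5])

    # plain Monte Carlo: independent normal draws
    x_mc = rng.normal(mu, sigma, size=(n, 2))

    # Latin hypercube sampling: stratified uniforms mapped through the normal ppf
    u = qmc.LatinHypercube(d=2, seed=0).random(n)
    x_lhs = norm.ppf(u, loc=mu, scale=sigma)

    for name, x in [("MC", x_mc), ("LHS", x_lhs)]:
        y = g(x)
        print(name, y.mean(), y.std(ddof=1), np.percentile(y, 99))
    ```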

  2. On the method of logarithmic cumulants for parametric probability density function estimation.

    PubMed

    Krylov, Vladimir A; Moser, Gabriele; Serpico, Sebastiano B; Zerubia, Josiane

    2013-10-01

    Parameter estimation of probability density functions is one of the major steps in the area of statistical image and signal processing. In this paper we explore several properties and limitations of the recently proposed method of logarithmic cumulants (MoLC) parameter estimation approach which is an alternative to the classical maximum likelihood (ML) and method of moments (MoM) approaches. We derive the general sufficient condition for a strong consistency of the MoLC estimates which represents an important asymptotic property of any statistical estimator. This result enables the demonstration of the strong consistency of MoLC estimates for a selection of widely used distribution families originating from (but not restricted to) synthetic aperture radar image processing. We then derive the analytical conditions of applicability of MoLC to samples for the distribution families in our selection. Finally, we conduct various synthetic and real data experiments to assess the comparative properties, applicability and small sample performance of MoLC notably for the generalized gamma and K families of distributions. Supervised image classification experiments are considered for medical ultrasound and remote-sensing SAR imagery. The obtained results suggest that MoLC is a feasible and computationally fast yet not universally applicable alternative to MoM. MoLC becomes especially useful when the direct ML approach turns out to be unfeasible. PMID:23799694
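
    As a concrete illustration of the MoLC idea (for the gamma family only; the paper's general conditions are not reproduced here), the sketch below matches the first two sample log-cumulants to their analytic expressions, E[ln X] = psi(k) + ln(theta) and Var[ln X] = psi_1(k), and solves for the parameters.

    ```python
    # Hedged sketch of the method of logarithmic cumulants (MoLC) for a gamma
    # distribution: match sample log-cumulants to psi(k) + ln(theta) and the
    # trigamma function psi_1(k). Illustrative only; not the paper's general
    # treatment of SAR-related families.
    import numpy as np
    from scipy.special import psi, polygamma
    from scipy.optimize import brentq
    from scipy.stats import gamma

    x = gamma.rvs(a=3.0, scale=2.0, size=5000, random_state=1)

    k1 = np.mean(np.log(x))            # first sample log-cumulant
    k2 = np.var(np.log(x))             # second sample log-cumulant

    # trigamma is monotonically decreasing, so the root below is unique
    shape = brentq(lambda k: polygamma(1, k) - k2, 1e-3, 1e3)
    scale = np.exp(k1 - psi(shape))
    print("MoLC estimates: shape =", shape, "scale =", scale)
    ```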

  3. Probability Density and CFAR Threshold Estimation for Hyperspectral Imaging

    SciTech Connect

    Clark, G A

    2004-09-21

    The work reported here shows the proof of principle (using a small data set) for a suite of algorithms designed to estimate the probability density function of hyperspectral background data and compute the appropriate Constant False Alarm Rate (CFAR) matched filter decision threshold for a chemical plume detector. Future work will provide a thorough demonstration of the algorithms and their performance with a large data set. The LASI (Large Aperture Search Initiative) Project involves instrumentation and image processing for hyperspectral images of chemical plumes in the atmosphere. The work reported here involves research and development on algorithms for reducing the false alarm rate in chemical plume detection and identification algorithms operating on hyperspectral image cubes. The chemical plume detection algorithms to date have used matched filters designed using generalized maximum likelihood ratio hypothesis testing algorithms [1, 2, 5, 6, 7, 12, 10, 11, 13]. One of the key challenges in hyperspectral imaging research is the high false alarm rate that often results from the plume detector [1, 2]. The overall goal of this work is to extend the classical matched filter detector to apply Constant False Alarm Rate (CFAR) methods to reduce the false alarm rate, or Probability of False Alarm P_FA of the matched filter [4, 8, 9, 12]. A detector designer is interested in minimizing the probability of false alarm while simultaneously maximizing the probability of detection P_D. This is summarized by the Receiver Operating Characteristic Curve (ROC) [10, 11], which is actually a family of curves depicting P_D vs. P_FA, parameterized by varying levels of signal to noise (or clutter) ratio (SNR or SCR). Often, it is advantageous to be able to specify a desired P_FA and develop a ROC curve (P_D vs. decision threshold r_0) for that case. That is the purpose of this work. Specifically, this work develops a set of algorithms and MATLAB
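
    As a generic illustration of how a CFAR threshold follows from an estimated background density (this is not the LASI algorithm suite), the sketch below sets the matched-filter decision threshold r_0 for a desired P_FA from simulated plume-free filter outputs, using either an empirical quantile or a fitted normal tail.

    ```python
    # Generic sketch: pick a decision threshold r_0 achieving a desired P_FA
    # from background-only matched-filter outputs. The background scores are
    # simulated; the LASI estimation algorithms are not reproduced here.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(2)
    background = rng.normal(0.0, 1.3, size=100_000)   # plume-free filter outputs
    p_fa = 1e-3

    r0_empirical = np.quantile(background, 1.0 - p_fa)
    mu, sigma = background.mean(), background.std(ddof=1)
    r0_parametric = norm.ppf(1.0 - p_fa, loc=mu, scale=sigma)

    print("CFAR thresholds:", r0_empirical, r0_parametric)
    print("achieved P_FA on background:", np.mean(background > r0_empirical))
    ```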

  4. Probability Density Function Method for Langevin Equations with Colored Noise

    SciTech Connect

    Wang, Peng; Tartakovsky, Alexandre M.; Tartakovsky, Daniel M.

    2013-04-05

    We present a novel method to derive closed-form, computable PDF equations for Langevin systems with colored noise. The derived equations govern the dynamics of joint or marginal probability density functions (PDFs) of state variables, and rely on a so-called Large-Eddy-Diffusivity (LED) closure. We demonstrate the accuracy of the proposed PDF method for linear and nonlinear Langevin equations, describing the classical Brownian displacement and dispersion in porous media.
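
    For context, the sketch below is a plain Monte Carlo reference (not the LED-closed PDF equations of the paper): it integrates a linear Langevin equation driven by exponentially correlated (Ornstein-Uhlenbeck) noise and estimates the resulting state PDF by kernel density estimation, which is the kind of target a derived PDF equation would be validated against. All parameters are illustrative.

    ```python
    # Monte Carlo reference for a linear Langevin equation with colored noise:
    # dx/dt = -a*x + xi(t), where xi is an Ornstein-Uhlenbeck process with
    # correlation time tau and stationary variance sig2. Illustrative only;
    # the LED closure itself is not implemented here.
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(3)
    a, tau, sig2 = 1.0, 0.5, 1.0
    dt, n_steps, n_paths = 1e-3, 20_000, 2_000

    x = np.zeros(n_paths)
    xi = np.zeros(n_paths)
    decay = np.exp(-dt / tau)
    kick = np.sqrt(sig2 * (1.0 - decay ** 2))
    for _ in range(n_steps):
        xi = xi * decay + kick * rng.standard_normal(n_paths)   # exact OU update
        x = x + dt * (-a * x + xi)                              # Euler step for the state

    pdf = gaussian_kde(x)                    # sampled marginal PDF of x
    grid = np.linspace(x.min(), x.max(), 5)
    print("PDF estimate at a few points:", pdf(grid))
    ```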

  5. Estimating probability densities from short samples: A parametric maximum likelihood approach

    NASA Astrophysics Data System (ADS)

    Dudok de Wit, T.; Floriani, E.

    1998-10-01

    A parametric method similar to autoregressive spectral estimators is proposed to determine the probability density function (PDF) of a random set. The method proceeds by maximizing the likelihood of the PDF, yielding estimates that perform equally well in the tails as in the bulk of the distribution. It is therefore well suited for the analysis of short sets drawn from smooth PDF's and stands out by the simplicity of its computational scheme. Its advantages and limitations are discussed.
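
    The sketch below illustrates the general idea of parametric maximum-likelihood density estimation for a short sample, using an exponential-of-polynomial PDF model normalized numerically; this family is assumed for illustration and may differ from the parametrization used in the paper.

    ```python
    # Hedged sketch: maximum-likelihood fit of p(x|a) = exp(sum_k a_k x^k) / Z(a)
    # to a short sample. The polynomial-exponent family and the support are
    # assumptions for illustration.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.integrate import trapezoid

    rng = np.random.default_rng(4)
    data = rng.normal(0.3, 0.2, size=60)          # short sample
    grid = np.linspace(-1.0, 2.0, 2001)           # support used for normalization
    order = 4

    def neg_log_likelihood(a):
        # a[k] multiplies x**k; polyval expects descending powers, hence a[::-1]
        log_p_unnorm = np.polyval(a[::-1], data).sum()
        z = trapezoid(np.exp(np.polyval(a[::-1], grid)), grid)
        return -(log_p_unnorm - data.size * np.log(z))

    res = minimize(neg_log_likelihood, x0=np.zeros(order + 1), method="Nelder-Mead",
                   options={"maxiter": 20000, "fatol": 1e-9, "xatol": 1e-7})
    pdf = np.exp(np.polyval(res.x[::-1], grid))
    pdf /= trapezoid(pdf, grid)                   # normalized fitted density
    print("fitted coefficients:", res.x)
    ```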

  6. Large Eddy Simulation and the Filtered Probability Density Function Method

    NASA Astrophysics Data System (ADS)

    Jones, W. P.; Navarro-Martinez, S.

    2009-12-01

    Recently there has been increased interest in modelling combustion processes with high levels of extinction and re-ignition. Such systems often lie beyond the scope of conventional single scalar-based models. Large Eddy Simulation (LES) has shown great potential for describing turbulent reactive systems, though combustion occurs at the smallest, unresolved scales of the flow and must be modelled. In the sub-grid Probability Density Function (pdf) method, approximations are devised to close the evolution equation for the joint pdf, which is then solved directly. The paper describes such an approach and concerns, in particular, the Eulerian stochastic field method of solving the pdf equation. The paper examines the capabilities of the LES-pdf method in capturing auto-ignition and extinction events in different partially premixed configurations with different fuels (hydrogen, methane and n-heptane). The results show that the LES-pdf formulation can capture different regimes without any parameter adjustments, independent of Reynolds numbers and fuel type.

  7. Numerical methods for high-dimensional probability density function equations

    NASA Astrophysics Data System (ADS)

    Cho, H.; Venturi, D.; Karniadakis, G. E.

    2016-01-01

    In this paper we address the problem of computing the numerical solution to kinetic partial differential equations involving many phase variables. These types of equations arise naturally in many different areas of mathematical physics, e.g., in particle systems (Liouville and Boltzmann equations), stochastic dynamical systems (Fokker-Planck and Dostupov-Pugachev equations), random wave theory (Malakhov-Saichev equations) and coarse-grained stochastic systems (Mori-Zwanzig equations). We propose three different classes of new algorithms addressing high-dimensionality: The first one is based on separated series expansions resulting in a sequence of low-dimensional problems that can be solved recursively and in parallel by using alternating direction methods. The second class of algorithms relies on truncation of interaction in low-orders that resembles the Bogoliubov-Born-Green-Kirkwood-Yvon (BBGKY) framework of kinetic gas theory and it yields a hierarchy of coupled probability density function equations. The third class of algorithms is based on high-dimensional model representations, e.g., the ANOVA method and probabilistic collocation methods. A common feature of all these approaches is that they are reducible to the problem of computing the solution to high-dimensional equations via a sequence of low-dimensional problems. The effectiveness of the new algorithms is demonstrated in numerical examples involving nonlinear stochastic dynamical systems and partial differential equations, with up to 120 variables.

  8. Parameterizing deep convection using the assumed probability density function method

    SciTech Connect

    Storer, R. L.; Griffin, B. M.; Hoft, Jan; Weber, J. K.; Raut, E.; Larson, Vincent E.; Wang, Minghuai; Rasch, Philip J.

    2015-01-06

    Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and mid-latitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.
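
    The toy sketch below illustrates why an assumed subgrid PDF coupled to Monte Carlo subcolumn sampling matters for nonlinear microphysics (it is not the parameterization used in the paper): averaging a quadratic process rate over an assumed lognormal PDF of cloud water gives a different answer than evaluating the rate at the grid-box mean.

    ```python
    # Toy illustration of the assumed-PDF / Monte Carlo sampling idea. The
    # lognormal PDF, the qc**2 "autoconversion" rate and all constants are
    # placeholders, not the scheme described in the paper.
    import numpy as np

    rng = np.random.default_rng(5)
    qc_mean, qc_rel_var = 2e-4, 1.0              # grid-mean cloud water (kg/kg), relative variance

    # lognormal parameters reproducing the assumed mean and relative variance
    sigma2 = np.log(1.0 + qc_rel_var)
    mu = np.log(qc_mean) - 0.5 * sigma2

    def autoconversion(qc, c=1350.0):
        return c * qc ** 2                       # toy nonlinear process rate

    subcolumns = rng.lognormal(mu, np.sqrt(sigma2), size=10_000)
    print("rate averaged over assumed PDF:", autoconversion(subcolumns).mean())
    print("rate at the grid mean        :", autoconversion(qc_mean))
    ```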

  9. Parameterizing deep convection using the assumed probability density function method

    NASA Astrophysics Data System (ADS)

    Storer, R. L.; Griffin, B. M.; Höft, J.; Weber, J. K.; Raut, E.; Larson, V. E.; Wang, M.; Rasch, P. J.

    2015-01-01

    Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and midlatitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.

  10. Parameterizing deep convection using the assumed probability density function method

    DOE PAGESBeta

    Storer, R. L.; Griffin, B. M.; Höft, J.; Weber, J. K.; Raut, E.; Larson, V. E.; Wang, M.; Rasch, P. J.

    2014-06-11

    Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and mid-latitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.

  11. Parameterizing deep convection using the assumed probability density function method

    DOE PAGESBeta

    Storer, R. L.; Griffin, B. M.; Höft, J.; Weber, J. K.; Raut, E.; Larson, V. E.; Wang, M.; Rasch, P. J.

    2015-01-06

    Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and midlatitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.

  12. Approximation of probability density functions by the Multilevel Monte Carlo Maximum Entropy method

    NASA Astrophysics Data System (ADS)

    Bierig, Claudio; Chernov, Alexey

    2016-06-01

    We develop a complete convergence theory for the Maximum Entropy method based on moment matching for a sequence of approximate statistical moments estimated by the Multilevel Monte Carlo method. Under appropriate regularity assumptions on the target probability density function, the proposed method is superior to the Maximum Entropy method with moments estimated by the Monte Carlo method. New theoretical results are illustrated in numerical examples.
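
    As background to the moment-matching step (the multilevel Monte Carlo estimation of the moments, which is the paper's contribution, is not reproduced), the sketch below recovers a Maximum Entropy density on a bounded interval from a few prescribed moments by minimizing the convex dual log Z(lambda) - sum_k lambda_k mu_k.

    ```python
    # Single-level sketch of Maximum Entropy density reconstruction from moments.
    # The target moments here are those of a standard normal, used purely for
    # illustration; the multilevel moment estimation of the paper is omitted.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.integrate import trapezoid

    lo, hi, n_moments = -4.0, 4.0, 4
    grid = np.linspace(lo, hi, 4001)
    powers = np.vstack([grid ** k for k in range(1, n_moments + 1)])   # shape (K, n)
    target = np.array([0.0, 1.0, 0.0, 3.0])                            # prescribed moments

    def dual(lam):
        w = np.exp(np.clip(lam @ powers, -700.0, 700.0))
        return np.log(trapezoid(w, grid)) - lam @ target

    def dual_grad(lam):
        w = np.exp(np.clip(lam @ powers, -700.0, 700.0))
        z = trapezoid(w, grid)
        return trapezoid(powers * w, grid, axis=1) / z - target

    res = minimize(dual, x0=np.zeros(n_moments), jac=dual_grad, method="BFGS")
    p = np.exp(res.x @ powers)
    p /= trapezoid(p, grid)                    # maxent density on the grid
    print("recovered moments:", trapezoid(powers * p, grid, axis=1))
    ```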

  13. Robust location and spread measures for nonparametric probability density function estimation.

    PubMed

    López-Rubio, Ezequiel

    2009-10-01

    Robustness against outliers is a desirable property of any unsupervised learning scheme. In particular, probability density estimators benefit from incorporating this feature. A possible strategy to achieve this goal is to substitute the sample mean and the sample covariance matrix by more robust location and spread estimators. Here we use the L1-median to develop a nonparametric probability density function (PDF) estimator. We prove its most relevant properties, and we show its performance in density estimation and classification applications. PMID:19885963
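
    The robust location estimate the paper builds on is the L1-median (geometric median); the sketch below computes it with the classical Weiszfeld iteration and contrasts it with the sample mean on outlier-contaminated data. The full PDF estimator of the paper is not reproduced.

    ```python
    # Weiszfeld iteration for the L1-median (geometric median); illustrative
    # companion to the paper's robust density estimator, not the estimator itself.
    import numpy as np

    def l1_median(x, n_iter=200, eps=1e-10):
        """x: (n_samples, n_dims) array; returns the geometric median."""
        m = x.mean(axis=0)                       # start from the ordinary mean
        for _ in range(n_iter):
            d = np.maximum(np.linalg.norm(x - m, axis=1), eps)
            w = 1.0 / d
            m_new = (w[:, None] * x).sum(axis=0) / w.sum()
            if np.linalg.norm(m_new - m) < eps:
                break
            m = m_new
        return m

    rng = np.random.default_rng(6)
    data = np.vstack([rng.normal(0.0, 1.0, size=(200, 2)),
                      rng.normal(25.0, 1.0, size=(20, 2))])   # ~10% gross outliers
    print("sample mean:", data.mean(axis=0))
    print("L1-median  :", l1_median(data))       # much less affected by outliers
    ```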

  14. Probability Density Estimation Using Isocontours and Isosurfaces: Application to Information-Theoretic Image Registration

    PubMed Central

    Rajwade, Ajit; Banerjee, Arunava; Rangarajan, Anand

    2010-01-01

    We present a new geometric approach for determining the probability density of the intensity values in an image. We drop the notion of an image as a set of discrete pixels and assume a piecewise-continuous representation. The probability density can then be regarded as being proportional to the area between two nearby isocontours of the image surface. Our paper extends this idea to joint densities of image pairs. We demonstrate the application of our method to affine registration between two or more images using information-theoretic measures such as mutual information. We show cases where our method outperforms existing methods such as simple histograms, histograms with partial volume interpolation, Parzen windows, etc., under fine intensity quantization for affine image registration under significant image noise. Furthermore, we demonstrate results on simultaneous registration of multiple images, as well as for pairs of volume data sets, and show some theoretical properties of our density estimator. Our approach requires the selection of only an image interpolant. The method neither requires any kind of kernel functions (as in Parzen windows), which are unrelated to the structure of the image in itself, nor does it rely on any form of sampling for density estimation. PMID:19147876

  15. A probability density function method for acoustic field uncertainty analysis

    NASA Astrophysics Data System (ADS)

    James, Kevin R.; Dowling, David R.

    2005-11-01

    Acoustic field predictions, whether analytical or computational, rely on knowledge of the environmental, boundary, and initial conditions. When knowledge of these conditions is uncertain, acoustic field predictions will also be uncertain, even if the techniques for field prediction are perfect. Quantifying acoustic field uncertainty is important for applications that require accurate field amplitude and phase predictions, like matched-field techniques for sonar, nondestructive evaluation, bio-medical ultrasound, and atmospheric remote sensing. Drawing on prior turbulence research, this paper describes how an evolution equation for the probability density function (PDF) of the predicted acoustic field can be derived and used to quantify predicted-acoustic-field uncertainties arising from uncertain environmental, boundary, or initial conditions. Example calculations are presented in one and two spatial dimensions for the one-point PDF for the real and imaginary parts of a harmonic field, and show that predicted field uncertainty increases with increasing range and frequency. In particular, at 500 Hz in an ideal 100 m deep underwater sound channel with a 1 m root-mean-square depth uncertainty, the PDF results presented here indicate that at a range of 5 km, all phases and a 10 dB range of amplitudes will have non-negligible probability. Evolution equations for the two-point PDF are also derived.

  16. ANNz2 - Photometric redshift and probability density function estimation using machine-learning

    NASA Astrophysics Data System (ADS)

    Sadeh, Iftach

    2014-05-01

    Large photometric galaxy surveys allow the study of questions at the forefront of science, such as the nature of dark energy. The success of such surveys depends on the ability to measure the photometric redshifts of objects (photo-zs), based on limited spectral data. A new major version of the public photo-z estimation software, ANNz, is presented here. The new code incorporates several machine-learning methods, such as artificial neural networks and boosted decision/regression trees, which are all used in concert. The objective of the algorithm is to dynamically optimize the performance of the photo-z estimation, and to properly derive the associated uncertainties. In addition to single-value solutions, the new code also generates full probability density functions in two independent ways.

  17. SAR amplitude probability density function estimation based on a generalized Gaussian model.

    PubMed

    Moser, Gabriele; Zerubia, Josiane; Serpico, Sebastiano B

    2006-06-01

    In the context of remotely sensed data analysis, an important problem is the development of accurate models for the statistics of the pixel intensities. Focusing on synthetic aperture radar (SAR) data, this modeling process turns out to be a crucial task, for instance, for classification or for denoising purposes. In this paper, an innovative parametric estimation methodology for SAR amplitude data is proposed that adopts a generalized Gaussian (GG) model for the complex SAR backscattered signal. A closed-form expression for the corresponding amplitude probability density function (PDF) is derived and a specific parameter estimation algorithm is developed in order to deal with the proposed model. Specifically, the recently proposed "method-of-log-cumulants" (MoLC) is applied, which stems from the adoption of the Mellin transform (instead of the usual Fourier transform) in the computation of characteristic functions and from the corresponding generalization of the concepts of moment and cumulant. For the developed GG-based amplitude model, the resulting MoLC estimates turn out to be numerically feasible and are also analytically proved to be consistent. The proposed parametric approach was validated by using several real ERS-1, XSAR, E-SAR, and NASA/JPL airborne SAR images, and the experimental results prove that the method models the amplitude PDF better than several previously proposed parametric models for backscattering phenomena. PMID:16764268

  18. Unification of field theory and maximum entropy methods for learning probability densities

    NASA Astrophysics Data System (ADS)

    Kinney, Justin B.

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.

  19. Estimating the probability density of the scattering cross section from Rayleigh scattering experiments

    NASA Astrophysics Data System (ADS)

    Hengartner, Nicolas; Talbot, Lawrence; Shepherd, Ian; Bickel, Peter

    1995-06-01

    An important parameter in the experimental study of dynamics of combustion is the probability distribution of the effective Rayleigh scattering cross section. This cross section cannot be observed directly. Instead, pairs of measurements of laser intensities and Rayleigh scattering counts are observed. Our aim is to provide estimators for the probability density function of the scattering cross section from such measurements. The probability distribution is derived first for the number of recorded photons in the Rayleigh scattering experiment. In this approach the laser intensity measurements are treated as known covariates. This departs from the usual practice of normalizing the Rayleigh scattering counts by the laser intensities. For distributions supported on finite intervals, two estimators are developed, one based on expansion of the density in

  20. A Tomographic Method for the Reconstruction of Local Probability Density Functions

    NASA Technical Reports Server (NTRS)

    Sivathanu, Y. R.; Gore, J. P.

    1993-01-01

    A method of obtaining the probability density function (PDF) of local properties from path integrated measurements is described. The approach uses a discrete probability function (DPF) method to infer the PDF of the local extinction coefficient from measurements of the PDFs of the path integrated transmittance. The local PDFs obtained using the method are compared with those obtained from direct intrusive measurements in propylene/air and ethylene/air diffusion flames. The results of this comparison are good.

  1. Identification of contaminant point source in surface waters based on backward location probability density function method

    NASA Astrophysics Data System (ADS)

    Cheng, Wei Ping; Jia, Yafei

    2010-04-01

    A backward location probability density function (BL-PDF) method capable of identifying the location of point sources in surface waters is presented in this paper. The relation between the forward location probability density function (FL-PDF) and the backward location probability density, based on adjoint analysis, is validated using depth-averaged free-surface flow and mass transport models and several surface water test cases. The solutions of the backward location PDF transport equation agreed well with the forward location PDF computed using the pollutant concentration at the monitoring points. Using this relation and the distribution of the concentration detected at the monitoring points, an effective point source identification method is established. The numerical error of the backward location PDF simulation is found to be sensitive to the irregularity of the computational meshes, the diffusivity, and the velocity gradients. The performance of the identification method is evaluated with respect to random error and the number of observed values. In addition to hypothetical cases, a real case was studied to identify the source location where a dye tracer was instantaneously injected into a stream. The study indicated that the proposed source identification method is effective, robust, and quite efficient in surface waters; the number of advection-diffusion equations that must be solved is equal to the number of observations.

  2. Probability density function estimation for characterizing hourly variability of ionospheric total electron content

    NASA Astrophysics Data System (ADS)

    Turel, N.; Arikan, F.

    2010-12-01

    Ionospheric channel characterization is an important task for both HF and satellite communications. The inherent space-time variability of the ionosphere can be observed through the total electron content (TEC), which can be obtained using GPS receivers. In this study, within-the-hour variability of the ionosphere over high-latitude, midlatitude, and equatorial regions is investigated by estimating a parametric model for the probability density function (PDF) of GPS-TEC. The PDF is a useful tool in defining the statistical structure of communication channels. For this study, half a solar cycle of data is collected for 18 GPS stations. Histograms of TEC, corresponding to experimental probability distributions, are used to estimate the parameters of five different PDFs. The best-fitting distribution to the TEC data is obtained using the maximum likelihood ratio of the estimated parametric distributions. It is observed that all of the midlatitude stations and most of the high-latitude and equatorial stations are distributed lognormally. A representative distribution can easily be obtained for stations located in midlatitudes using solar zenith normalization. The stations located at very high latitudes or in equatorial regions cannot be described using only one PDF. Due to significant seasonal variability, different distributions are required for summer and winter.
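
    The sketch below mirrors the model-selection step in generic form (the candidate set, preprocessing, and data in the study may differ): several parametric PDFs are fitted to a positive-valued sample standing in for hourly TEC values, and the family with the largest maximized log-likelihood is kept.

    ```python
    # Generic sketch: fit candidate parametric PDFs by maximum likelihood and
    # select the best-fitting family by log-likelihood. The synthetic sample
    # stands in for GPS-TEC data; the study's exact candidates may differ.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    tec = rng.lognormal(mean=3.0, sigma=0.4, size=500)    # synthetic TEC-like sample

    candidates = {"lognormal": stats.lognorm, "gamma": stats.gamma,
                  "weibull": stats.weibull_min, "rayleigh": stats.rayleigh,
                  "normal": stats.norm}

    scores = {}
    for name, dist in candidates.items():
        params = dist.fit(tec)                            # maximum-likelihood fit
        scores[name] = np.sum(dist.logpdf(tec, *params))  # maximized log-likelihood

    print(sorted(scores.items(), key=lambda kv: -kv[1]))
    print("best-fitting family:", max(scores, key=scores.get))
    ```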

  3. Fast and accurate probability density estimation in large high dimensional astronomical datasets

    NASA Astrophysics Data System (ADS)

    Gupta, Pramod; Connolly, Andrew J.; Gardner, Jeffrey P.

    2015-01-01

    Astronomical surveys will generate measurements of hundreds of attributes (e.g. color, size, shape) on hundreds of millions of sources. Analyzing these large, high-dimensional data sets will require efficient algorithms for data analysis. An example of this is probability density estimation, which is at the heart of many classification problems such as the separation of stars and quasars based on their colors. Popular density estimation techniques use binning or kernel density estimation. Kernel density estimation has a small memory footprint but often requires large computational resources. Binning has small computational requirements, but binning is usually implemented with multi-dimensional arrays, which leads to memory requirements that scale exponentially with the number of dimensions. Hence both techniques do not scale well to large data sets in high dimensions. We present an alternative approach of binning implemented with hash tables (BASH tables). This approach uses the sparseness of data in the high-dimensional space to ensure that the memory requirements are small. However, hashing requires some extra computation, so a priori it is not clear whether the reduction in memory requirements will lead to increased computational requirements. Through an implementation of BASH tables in C++ we show that the additional computational requirements of hashing are negligible. Hence this approach has small memory and computational requirements. We apply our density estimation technique to photometric selection of quasars using non-parametric Bayesian classification and show that the accuracy of the classification is the same as that of earlier approaches. Since the BASH table approach is one to three orders of magnitude faster than the earlier approaches, it may be useful in various other applications of density estimation in astrostatistics.
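
    The sketch below shows the core idea in miniature (the paper's implementation is in C++ and more elaborate): bin counts are stored sparsely in a hash table keyed by the tuple of bin indices, so memory grows with the number of occupied bins rather than exponentially with the dimensionality.

    ```python
    # Minimal hash-table binning ("BASH-table"-style) density estimate.
    # Illustrative Python sketch; the paper's C++ implementation is not shown.
    import numpy as np
    from collections import defaultdict

    def build_table(data, low, width, n_bins):
        idx = np.clip(np.floor((data - low) / width).astype(int), 0, n_bins - 1)
        table = defaultdict(int)
        for key in map(tuple, idx):
            table[key] += 1
        return table

    def density(table, low, width, n_samples, point, n_bins):
        key = tuple(np.clip(np.floor((point - low) / width).astype(int), 0, n_bins - 1))
        return table.get(key, 0) / (n_samples * np.prod(width))

    rng = np.random.default_rng(8)
    dim, n, n_bins = 6, 100_000, 20
    colors = rng.normal(0.0, 1.0, size=(n, dim))          # e.g. photometric colors
    low, high = colors.min(axis=0), colors.max(axis=0)
    width = (high - low) / n_bins

    table = build_table(colors, low, width, n_bins)
    print("occupied bins:", len(table), "of", n_bins ** dim, "possible")
    print("density at the origin:", density(table, low, width, n, np.zeros(dim), n_bins))
    ```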

  4. A H-infinity Fault Detection and Diagnosis Scheme for Discrete Nonlinear System Using Output Probability Density Estimation

    SciTech Connect

    Zhang Yumin; Lum, Kai-Yew; Wang Qingguo

    2009-03-05

    In this paper, an H-infinity fault detection and diagnosis (FDD) scheme for faults in a class of discrete nonlinear systems, based on output probability density estimation, is presented. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process, and its square-root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time dynamic model including nonlinearities and uncertainties. A weighted mean value is given as an integral of the square-root PDF along the space direction, which yields a function of time only that can be used to construct a residual signal. Thus, the classical nonlinear filter approach can be used to detect and diagnose faults in the system. A feasible detection criterion is obtained first, and a new H-infinity adaptive fault diagnosis algorithm is then investigated to estimate the fault. A simulation example is given to demonstrate the effectiveness of the proposed approaches.

  5. Probability density function method for variable-density pressure-gradient-driven turbulence and mixing

    SciTech Connect

    Bakosi, Jozsef; Ristorcelli, Raymond J

    2010-01-01

    Probability density function (PDF) methods are extended to variable-density pressure-gradient-driven turbulence. We apply the new method to compute the joint PDF of density and velocity in a non-premixed binary mixture of different-density molecularly mixing fluids under gravity. The full time-evolution of the joint PDF is captured in the highly non-equilibrium flow: starting from a quiescent state, transitioning to fully developed turbulence and finally dissipated by molecular diffusion. High-Atwood-number effects (as distinguished from the Boussinesq case) are accounted for: both hydrodynamic turbulence and material mixing are treated at arbitrary density ratios, with the specific volume, mass flux and all their correlations in closed form. An extension of the generalized Langevin model, originally developed for the Lagrangian fluid particle velocity in constant-density shear-driven turbulence, is constructed for variable-density pressure-gradient-driven flows. The persistent small-scale anisotropy, a fundamentally 'non-Kolmogorovian' feature of flows under external acceleration forces, is captured by a tensorial diffusion term based on the external body force. The material mixing model for the fluid density, an active scalar, is developed based on the beta distribution. The beta-PDF is shown to be capable of capturing the mixing asymmetry and that it can accurately represent the density through transition, in fully developed turbulence and in the decay process. The joint model for hydrodynamics and active material mixing yields a time-accurate evolution of the turbulent kinetic energy and Reynolds stress anisotropy without resorting to gradient diffusion hypotheses, and represents the mixing state by the density PDF itself, eliminating the need for dubious mixing measures. Direct numerical simulations of the homogeneous Rayleigh-Taylor instability are used for model validation.

  6. Model Assembly for Estimating Cell Surviving Fraction for Both Targeted and Nontargeted Effects Based on Microdosimetric Probability Densities

    PubMed Central

    Sato, Tatsuhiko; Hamada, Nobuyuki

    2014-01-01

    We here propose a new model assembly for estimating the surviving fraction of cells irradiated with various types of ionizing radiation, considering both targeted and nontargeted effects in the same framework. The probability densities of specific energies in two scales, which are the cell nucleus and its substructure called a domain, were employed as the physical index for characterizing the radiation fields. In the model assembly, our previously established double stochastic microdosimetric kinetic (DSMK) model was used to express the targeted effect, whereas a newly developed model was used to express the nontargeted effect. The radioresistance caused by overexpression of anti-apoptotic protein Bcl-2 known to frequently occur in human cancer was also considered by introducing the concept of the adaptive response in the DSMK model. The accuracy of the model assembly was examined by comparing the computationally and experimentally determined surviving fraction of Bcl-2 cells (Bcl-2 overexpressing HeLa cells) and Neo cells (neomycin resistant gene-expressing HeLa cells) irradiated with microbeam or broadbeam of energetic heavy ions, as well as the WI-38 normal human fibroblasts irradiated with X-ray microbeam. The model assembly reproduced very well the experimentally determined surviving fraction over a wide range of dose and linear energy transfer (LET) values. Our newly established model assembly will be worth being incorporated into treatment planning systems for heavy-ion therapy, brachytherapy, and boron neutron capture therapy, given critical roles of the frequent Bcl-2 overexpression and the nontargeted effect in estimating therapeutic outcomes and harmful effects of such advanced therapeutic modalities. PMID:25426641

  7. Model assembly for estimating cell surviving fraction for both targeted and nontargeted effects based on microdosimetric probability densities.

    PubMed

    Sato, Tatsuhiko; Hamada, Nobuyuki

    2014-01-01

    We here propose a new model assembly for estimating the surviving fraction of cells irradiated with various types of ionizing radiation, considering both targeted and nontargeted effects in the same framework. The probability densities of specific energies in two scales, which are the cell nucleus and its substructure called a domain, were employed as the physical index for characterizing the radiation fields. In the model assembly, our previously established double stochastic microdosimetric kinetic (DSMK) model was used to express the targeted effect, whereas a newly developed model was used to express the nontargeted effect. The radioresistance caused by overexpression of anti-apoptotic protein Bcl-2 known to frequently occur in human cancer was also considered by introducing the concept of the adaptive response in the DSMK model. The accuracy of the model assembly was examined by comparing the computationally and experimentally determined surviving fraction of Bcl-2 cells (Bcl-2 overexpressing HeLa cells) and Neo cells (neomycin resistant gene-expressing HeLa cells) irradiated with microbeam or broadbeam of energetic heavy ions, as well as the WI-38 normal human fibroblasts irradiated with X-ray microbeam. The model assembly reproduced very well the experimentally determined surviving fraction over a wide range of dose and linear energy transfer (LET) values. Our newly established model assembly will be worth being incorporated into treatment planning systems for heavy-ion therapy, brachytherapy, and boron neutron capture therapy, given critical roles of the frequent Bcl-2 overexpression and the nontargeted effect in estimating therapeutic outcomes and harmful effects of such advanced therapeutic modalities. PMID:25426641

  8. Non-stationary random vibration analysis of a 3D train-bridge system using the probability density evolution method

    NASA Astrophysics Data System (ADS)

    Yu, Zhi-wu; Mao, Jian-feng; Guo, Feng-qi; Guo, Wei

    2016-03-01

    Rail irregularity is one of the main sources causing train-bridge random vibration. A new random vibration theory for the coupled train-bridge systems is proposed in this paper. First, number theory method (NTM) with 2N-dimensional vectors for the stochastic harmonic function (SHF) of rail irregularity power spectrum density was adopted to determine the representative points of spatial frequencies and phases to generate the random rail irregularity samples, and the non-stationary rail irregularity samples were modulated with the slowly varying function. Second, the probability density evolution method (PDEM) was employed to calculate the random dynamic vibration of the three-dimensional (3D) train-bridge system by a program compiled on the MATLAB® software platform. Eventually, the Newmark-β integration method and double edge difference method of total variation diminishing (TVD) format were adopted to obtain the mean value curve, the standard deviation curve and the time-history probability density information of responses. A case study was presented in which the ICE-3 train travels on a three-span simply-supported high-speed railway bridge with excitation of random rail irregularity. The results showed that compared to the Monte Carlo simulation, the PDEM has higher computational efficiency for the same accuracy, i.e., an improvement by 1-2 orders of magnitude. Additionally, the influences of rail irregularity and train speed on the random vibration of the coupled train-bridge system were discussed.

  9. A Galerkin-based formulation of the probability density evolution method for general stochastic finite element systems

    NASA Astrophysics Data System (ADS)

    Papadopoulos, Vissarion; Kalogeris, Ioannis

    2016-05-01

    The present paper proposes a Galerkin finite element projection scheme for the solution of the partial differential equations (pde's) involved in the probability density evolution method, for the linear and nonlinear static analysis of stochastic systems. According to the principle of preservation of probability, the probability density evolution of a stochastic system is expressed by its corresponding Fokker-Planck (FP) stochastic partial differential equation. Direct integration of the FP equation is feasible only for simple systems with a small number of degrees of freedom, due to analytical and/or numerical intractability. However, rewriting the FP equation conditioned to the random event description, a generalized density evolution equation (GDEE) can be obtained, which can be reduced to a one-dimensional pde. Two Galerkin finite element method schemes are proposed for the numerical solution of the resulting pde's, namely a time-marching discontinuous Galerkin scheme and the Streamline Upwind/Petrov-Galerkin (SUPG) scheme. In addition, a reformulation of the classical GDEE is proposed, which implements the principle of probability preservation in space instead of time, making this approach suitable for the stochastic analysis of finite element systems. The advantages of the FE Galerkin methods, and in particular the SUPG, over finite difference schemes like the modified Lax-Wendroff, which is the most frequently used method for the solution of the GDEE, are illustrated with numerical examples and explored further.

  10. Estimated probability density functions for the times between flashes in the storms of 12 September 1975, 26 August 1975, and 13 July 1976

    NASA Technical Reports Server (NTRS)

    Tretter, S. A.

    1977-01-01

    A report is given to supplement the progress report of June 17, 1977. In that progress report, gamma, lognormal, and Rayleigh probability density functions were fitted to the times between lightning flashes in the storms of 9/12/75, 8/26/75, and 7/13/76 by the maximum likelihood method. The goodness of fit is checked by the Kolmogorov-Smirnov test. Plots of the estimated densities along with normalized histograms are included to provide a visual check on the goodness of fit. The lognormal densities are the most peaked and have the highest tails. This results in the best fit to the normalized histogram in most cases. The Rayleigh densities have peaks that are too broad and rounded to give good fits. In addition, they have the lowest tails. The gamma densities fall in between and give the best fit in a few cases.
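
    The sketch below reproduces the type of procedure described (maximum-likelihood fits of gamma, lognormal, and Rayleigh densities checked with a Kolmogorov-Smirnov test), but on synthetic inter-flash times rather than the 1975-1976 storm records.

    ```python
    # Fit gamma, lognormal and Rayleigh densities to times between flashes by
    # maximum likelihood and check each fit with a Kolmogorov-Smirnov test.
    # Synthetic data are used in place of the storm records.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(9)
    times = rng.lognormal(mean=2.0, sigma=0.8, size=300)   # synthetic seconds between flashes

    for name, dist in [("gamma", stats.gamma),
                       ("lognormal", stats.lognorm),
                       ("rayleigh", stats.rayleigh)]:
        params = dist.fit(times, floc=0.0)                 # MLE with location fixed at zero
        d_stat, p_value = stats.kstest(times, dist.cdf, args=params)
        print(f"{name:10s}  KS D = {d_stat:.3f}   p = {p_value:.3f}")
    ```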

  11. A method for evaluating the expectation value of a power spectrum using the probability density function of phases

    SciTech Connect

    Caliandro, G.A.; Torres, D.F.; Rea, N. E-mail: dtorres@aliga.ieec.uab.es

    2013-07-01

    Here, we present a new method to evaluate the expectation value of the power spectrum of a time series. A statistical approach is adopted to define the method. After its demonstration, it is validated showing that it leads to the known properties of the power spectrum when the time series contains a periodic signal. The approach is also validated in general with numerical simulations. The method puts into evidence the importance that is played by the probability density function of the phases associated to each time stamp for a given frequency, and how this distribution can be perturbed by the uncertainties of the parameters in the pulsar ephemeris. We applied this method to solve the power spectrum in the case the first derivative of the pulsar frequency is unknown and not negligible. We also undertook the study of the most general case of a blind search, in which both the frequency and its first derivative are uncertain. We found the analytical solutions of the above cases invoking the sum of Fresnel's integrals squared.

  12. From data to probability densities without histograms

    NASA Astrophysics Data System (ADS)

    Berg, Bernd A.; Harris, Robert C.

    2008-09-01

    When one deals with data drawn from continuous variables, a histogram is often inadequate to display their probability density. It deals inefficiently with statistical noise, and binsizes are free parameters. In contrast to that, the empirical cumulative distribution function (obtained after sorting the data) is parameter free. But it is a step function, so that its differentiation does not give a smooth probability density. Based on Fourier series expansion and Kolmogorov tests, we introduce a simple method, which overcomes this problem. Error bars on the estimated probability density are calculated using a jackknife method. We give several examples and provide computer code reproducing them. You may want to look at the corresponding figures 4 to 9 first.

    Program summary
    Program title: cdf_to_pd
    Catalogue identifier: AEBC_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEBC_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 2758
    No. of bytes in distributed program, including test data, etc.: 18 594
    Distribution format: tar.gz
    Programming language: Fortran 77
    Computer: Any capable of compiling and executing Fortran code
    Operating system: Any capable of compiling and executing Fortran code
    Classification: 4.14, 9
    Nature of problem: When one deals with data drawn from continuous variables, a histogram is often inadequate to display the probability density. It deals inefficiently with statistical noise, and binsizes are free parameters. In contrast to that, the empirical cumulative distribution function (obtained after sorting the data) is parameter free. But it is a step function, so that its differentiation does not give a smooth probability density.
    Solution method: Based on Fourier series expansion and Kolmogorov tests, we introduce a simple method, which

  13. Probability densities in strong turbulence

    NASA Astrophysics Data System (ADS)

    Yakhot, Victor

    2006-03-01

    In this work we, using Mellin’s transform combined with the Gaussian large-scale boundary condition, calculate probability densities (PDFs) of velocity increments P(δu,r), velocity derivatives P(u,r) and the PDF of the fluctuating dissipation scales Q(η,Re), where Re is the large-scale Reynolds number. The resulting expressions strongly deviate from the Log-normal PDF P(δu,r) often quoted in the literature. It is shown that the probability density of the small-scale velocity fluctuations includes information about the large (integral) scale dynamics which is responsible for the deviation of P(δu,r) from P(δu,r). An expression for the function D(h) of the multifractal theory, free from spurious logarithms recently discussed in [U. Frisch, M. Martins Afonso, A. Mazzino, V. Yakhot, J. Fluid Mech. 542 (2005) 97] is also obtained.

  14. A transported probability density function/photon Monte Carlo method for high-temperature oxy-natural gas combustion with spectral gas and wall radiation

    NASA Astrophysics Data System (ADS)

    Zhao, X. Y.; Haworth, D. C.; Ren, T.; Modest, M. F.

    2013-04-01

    A computational fluid dynamics model for high-temperature oxy-natural gas combustion is developed and exercised. The model features detailed gas-phase chemistry and radiation treatments (a photon Monte Carlo method with line-by-line spectral resolution for gas and wall radiation - PMC/LBL) and a transported probability density function (PDF) method to account for turbulent fluctuations in composition and temperature. The model is first validated for a 0.8 MW oxy-natural gas furnace, and the level of agreement between model and experiment is found to be at least as good as any that has been published earlier. Next, simulations are performed with systematic model variations to provide insight into the roles of individual physical processes and their interplay in high-temperature oxy-fuel combustion. This includes variations in the chemical mechanism and the radiation model, and comparisons of results obtained with versus without the PDF method to isolate and quantify the effects of turbulence-chemistry interactions and turbulence-radiation interactions. In this combustion environment, it is found to be important to account for the interconversion of CO and CO2, and radiation plays a dominant role. The PMC/LBL model allows the effects of molecular gas radiation and wall radiation to be clearly separated and quantified. Radiation and chemistry are tightly coupled through the temperature, and correct temperature prediction is required for correct prediction of the CO/CO2 ratio. Turbulence-chemistry interactions influence the computed flame structure and mean CO levels. Strong local effects of turbulence-radiation interactions are found in the flame, but the net influence of TRI on computed mean temperature and species profiles is small. The ultimate goal of this research is to simulate high-temperature oxy-coal combustion, where accurate treatments of chemistry, radiation and turbulence-chemistry-particle-radiation interactions will be even more important.

  15. Trajectory versus probability density entropy.

    PubMed

    Bologna, M; Grigolini, P; Karagiorgis, M; Rosa, A

    2001-07-01

    We show that the widely accepted conviction that a connection can be established between the probability density entropy and the Kolmogorov-Sinai (KS) entropy is questionable. We adopt the definition of density entropy as a functional of a distribution density whose time evolution is determined by a transport equation, conceived as the only prescription to use for the calculation. Although the transport equation is built up for the purpose of affording a picture equivalent to that stemming from trajectory dynamics, no direct use of trajectory time evolution is allowed, once the transport equation is defined. With this definition in mind we prove that the detection of a time regime of increase of the density entropy with a rate identical to the KS entropy is possible only in a limited number of cases. The proposals made by some authors to establish a connection between the two entropies in general, violate our definition of density entropy and imply the concept of trajectory, which is foreign to that of density entropy. PMID:11461383

  16. Trajectory versus probability density entropy

    NASA Astrophysics Data System (ADS)

    Bologna, Mauro; Grigolini, Paolo; Karagiorgis, Markos; Rosa, Angelo

    2001-07-01

    We show that the widely accepted conviction that a connection can be established between the probability density entropy and the Kolmogorov-Sinai (KS) entropy is questionable. We adopt the definition of density entropy as a functional of a distribution density whose time evolution is determined by a transport equation, conceived as the only prescription to use for the calculation. Although the transport equation is built up for the purpose of affording a picture equivalent to that stemming from trajectory dynamics, no direct use of trajectory time evolution is allowed, once the transport equation is defined. With this definition in mind we prove that the detection of a time regime of increase of the density entropy with a rate identical to the KS entropy is possible only in a limited number of cases. The proposals made by some authors to establish a connection between the two entropies in general, violate our definition of density entropy and imply the concept of trajectory, which is foreign to that of density entropy.

  17. Modulation Based on Probability Density Functions

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.

    2009-01-01

    A proposed method of modulating a sinusoidal carrier signal to convey digital information involves the use of histograms representing probability density functions (PDFs) that characterize samples of the signal waveform. The method is based partly on the observation that when a waveform is sampled (whether by analog or digital means) over a time interval at least as long as one half cycle of the waveform, the samples can be sorted by frequency of occurrence, thereby constructing a histogram representing a PDF of the waveform during that time interval.
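
    The toy sketch below illustrates the concept rather than the proposed scheme: two waveform states with the same peak amplitude but different amplitude PDFs are distinguished at the receiver by comparing histograms of the sampled waveform.

    ```python
    # Toy demonstration: distinguish two waveform states by the histogram (PDF)
    # of their sampled amplitudes. Waveforms, noise level and the nearest-
    # histogram decision rule are illustrative choices only.
    import numpy as np

    def signature(samples, bins):
        h, _ = np.histogram(samples, bins=bins, density=True)
        return h

    t = np.linspace(0.0, 1.0, 2000, endpoint=False)
    bins = np.linspace(-1.5, 1.5, 31)
    sine = np.sin(2 * np.pi * t)                      # arcsine-shaped amplitude PDF
    tri = 2.0 * np.abs(2.0 * t - 1.0) - 1.0           # uniform amplitude PDF
    refs = {0: signature(sine, bins), 1: signature(tri, bins)}

    def detect(received):
        sig = signature(received, bins)
        return min(refs, key=lambda bit: np.sum((refs[bit] - sig) ** 2))

    rng = np.random.default_rng(10)
    received = tri + rng.normal(0.0, 0.1, size=t.size)   # transmit the "1" state
    print("decoded bit:", detect(received))
    ```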

  18. Direct propagation of probability density functions in hydrological equations

    NASA Astrophysics Data System (ADS)

    Kunstmann, Harald; Kastens, Marko

    2006-06-01

    Sustainable decisions in hydrological risk management require detailed information on the probability density function (pdf) of the model output. Only then can probabilities for the failure of a specific management option or the exceedance of critical thresholds (e.g. of pollutants) be derived. A new approach of uncertainty propagation in hydrological equations is developed that directly propagates the probability density functions of uncertain model input parameters into the corresponding probability density functions of the model output. The basics of the methodology are presented and central applications to different disciplines in hydrology are shown. This work focuses on the following basic hydrological equations: (1) pumping test analysis (Theis equation, propagation of uncertainties in recharge and transmissivity), (2) 1-dimensional groundwater contaminant transport equation (Gauss equation, propagation of uncertainties in decay constant and dispersivity), (3) evapotranspiration estimation (Penman-Monteith equation, propagation of uncertainty in roughness length). The direct propagation of probability densities is restricted to functions that are monotonically increasing or decreasing or that can be separated into corresponding monotonic branches so that inverse functions can be derived. In cases where no analytic solutions for the inverse functions could be derived, semi-analytical approximations were used. It is shown that the results of direct probability density function propagation are in perfect agreement with results obtained from corresponding Monte Carlo derived frequency distributions. Direct pdf propagation, however, has the advantage that it yields exact solutions for the resulting hydrological pdfs rather than approximating discontinuous frequency distributions. It is additionally shown that the type of the resulting pdf depends on the specific values (order of magnitude, respectively) of the standard deviation of the input pdf. The dependency of skewness and kurtosis
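
    A minimal instance of the direct-propagation idea: for a monotone relation y = g(x), the output density is p_Y(y) = p_X(g^{-1}(y)) |dg^{-1}/dy|. The sketch below propagates an uncertain decay constant with a lognormal PDF through y = exp(-x*T) (a stand-in for a simple decay relation, not one of the paper's hydrological equations) and checks the result against a Monte Carlo histogram.

    ```python
    # Direct propagation of a PDF through a monotone function, checked against
    # Monte Carlo. The lognormal input and the decay relation are illustrative.
    import numpy as np
    from scipy.stats import lognorm

    T = 2.0                                   # elapsed time (illustrative)
    x_pdf = lognorm(s=0.5, scale=0.8)         # PDF of the uncertain decay constant

    def y_pdf(y):
        x = -np.log(y) / T                    # inverse of y = exp(-x*T)
        return x_pdf.pdf(x) / (T * y)         # change-of-variables Jacobian

    rng = np.random.default_rng(11)
    y_samples = np.exp(-T * x_pdf.rvs(size=200_000, random_state=rng))
    hist, edges = np.histogram(y_samples, bins=50, range=(1e-3, 1.0), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    print("max |direct - Monte Carlo|:", np.max(np.abs(y_pdf(centers) - hist)))
    ```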

  19. Carrier Modulation Via Waveform Probability Density Function

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.

    2004-01-01

    Beyond the classic modes of carrier modulation by varying amplitude (AM), phase (PM), or frequency (FM), we extend the modulation domain of an analog carrier signal to include a class of general modulations which are distinguished by their probability density function histogram. Separate waveform states are easily created by varying the pdf of the transmitted waveform. Individual waveform states are assignable as proxies for digital ONEs or ZEROs. At the receiver, these states are easily detected by accumulating sampled waveform statistics and performing periodic pattern matching, correlation, or statistical filtering. No fundamental natural laws are broken in the detection process. We show how a typical modulation scheme would work in the digital domain and suggest how to build an analog version. We propose that clever variations of the modulating waveform (and thus the histogram) can provide simple steganographic encoding.

  20. Carrier Modulation Via Waveform Probability Density Function

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.

    2006-01-01

    Beyond the classic modes of carrier modulation by varying amplitude (AM), phase (PM), or frequency (FM), we extend the modulation domain of an analog carrier signal to include a class of general modulations which are distinguished by their probability density function histogram. Separate waveform states are easily created by varying the pdf of the transmitted waveform. Individual waveform states are assignable as proxies for digital ones or zeros. At the receiver, these states are easily detected by accumulating sampled waveform statistics and performing periodic pattern matching, correlation, or statistical filtering. No fundamental physical laws are broken in the detection process. We show how a typical modulation scheme would work in the digital domain and suggest how to build an analog version. We propose that clever variations of the modulating waveform (and thus the histogram) can provide simple steganographic encoding.

  1. Application of the response probability density function technique to biodynamic models.

    PubMed

    Hershey, R L; Higgins, T H

    1978-01-01

    A method has been developed, which we call the "response probability density function technique," which has applications in predicting the probability of injury in a wide range of biodynamic situations. The method, which was developed in connection with sonic boom damage prediction, utilized the probability density function of the excitation force and the probability density function of the sensitivity of the material being acted upon. The method is especially simple to use when both these probability density functions are lognormal. Studies thus far have shown that the stresses from sonic booms, as well as the strengths of glass and mortars, are distributed lognormally. Some biodynamic processes also have lognormal distributions and are, therefore, amenable to modeling by this technique. In particular, this paper discusses the application of the response probability density function technique to the analysis of the thoracic response to air blast and the prediction of skull fracture from head impact. PMID:623590
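
    The lognormal special case noted above admits a compact calculation: if both the excitation (stress) and the material sensitivity (strength) are lognormal, the logarithm of their ratio is normal, so the failure probability reduces to a single evaluation of the standard normal CDF. The sketch below illustrates this with made-up parameter values and checks it by Monte Carlo; it is not the authors' sonic-boom model.

```python
# Lognormal stress-strength sketch: with lognormal excitation (stress) and
# lognormal sensitivity (strength), ln(stress) - ln(strength) is normal, so the
# failure probability is one normal-CDF evaluation. All numbers are made up.
import numpy as np
from scipy.stats import norm

mu_s, sigma_s = np.log(2.0), 0.35      # assumed ln-mean and ln-sd of the stress
mu_r, sigma_r = np.log(5.0), 0.25      # assumed ln-mean and ln-sd of the strength

p_fail = norm.cdf((mu_s - mu_r) / np.hypot(sigma_s, sigma_r))
print(f"closed-form failure probability: {p_fail:.4e}")

# Monte Carlo cross-check of the same probability.
rng = np.random.default_rng(1)
stress = rng.lognormal(mu_s, sigma_s, 1_000_000)
strength = rng.lognormal(mu_r, sigma_r, 1_000_000)
print(f"Monte Carlo estimate:            {np.mean(stress > strength):.4e}")
```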

  2. Probability density function learning by unsupervised neurons.

    PubMed

    Fiori, S

    2001-10-01

    In a recent work, we introduced the concept of the pseudo-polynomial adaptive activation function neuron (FAN) and presented an unsupervised information-theoretic learning theory for such a structure. The learning model is based on entropy optimization and provides a way of learning probability distributions from incomplete data. The aim of the present paper is to illustrate some theoretical features of the FAN neuron, to extend its learning theory to asymmetrical density function approximation, and to provide an analytical and numerical comparison with other known density function estimation methods, with special emphasis on the universal approximation ability. The paper also provides a survey of PDF learning from incomplete data, as well as results of several experiments performed on real-world problems and signals. PMID:11709808

  3. Downlink Probability Density Functions for EOS-McMurdo Sound

    NASA Technical Reports Server (NTRS)

    Christopher, P.; Jackson, A. H.

    1996-01-01

    The visibility times and communication link dynamics for the Earth Observations Satellite (EOS)-McMurdo Sound direct downlinks have been studied. The 16 day EOS periodicity may be shown with the Goddard Trajectory Determination System (GTDS), and the entire 16 day period should be simulated for representative link statistics. We desire many attributes of the downlink, however, and a faster orbit determination method is therefore desirable. We use the method of osculating elements for speed and accuracy in simulating the EOS orbit. The accuracy of the method of osculating elements is demonstrated by closely reproducing the observed 16 day Landsat periodicity. An autocorrelation function method is used to show the correlation spike at 16 days. The entire 16 day record of passes over McMurdo Sound is then used to generate statistics for innage time, outage time, elevation angle, antenna angle rates, and propagation loss. The elevation angle probability density function is compared with a 1967 analytic approximation which has been used for medium to high altitude satellites. One practical result of this comparison is seen to be the rare occurrence of zenith passes. The new result is functionally different from the earlier result, with a heavy emphasis on low elevation angles. EOS is one of a large class of sun synchronous satellites which may be downlinked to McMurdo Sound. We examine delay statistics for an entire group of sun synchronous satellites ranging from 400 km to 1000 km altitude. Outage probability density function results are presented three dimensionally.

  4. Probability density function transformation using seeded localized averaging

    SciTech Connect

    Dimitrov, N. B.; Jordanov, V. T.

    2011-07-01

    Seeded Localized Averaging (SLA) is a spectrum acquisition method that averages pulse-heights in dynamic windows. SLA sharpens peaks in the acquired spectra. This work investigates the transformation of the original probability density function (PDF) in the process of applying the SLA procedure. We derive an analytical expression for the resulting probability density function after an application of SLA. In addition, we prove the following properties: (1) for symmetric distributions, SLA preserves both the mean and the symmetry; (2) for unimodal symmetric distributions, SLA reduces the variance, sharpening the distribution's peak. Our results are the first to prove these properties, reinforcing past experimental observations. Specifically, our results imply that in the typical case of a spectral peak with a Gaussian PDF the full width at half maximum (FWHM) of the transformed peak becomes narrower even with averaging of only two pulse-heights. While the Gaussian shape is no longer preserved, our results include an analytical expression for the resulting distribution. Examples of the transformation of other PDFs are presented. (authors)

  5. Nonstationary Probability Densities of Nonlinear Multi-Degree-of-Freedom Systems under Gaussian White Noise Excitations

    NASA Astrophysics Data System (ADS)

    Jin, X. L.; Huang, Z. L.

    The nonstationary probability densities of system responses are obtained for nonlinear multi-degree-of-freedom systems subject to stochastic parametric and external excitations. First, the stochastic averaging method is used to obtain the averaged Itô equation for amplitude envelopes of the system response. Then, the corresponding Fokker-Planck-Kolmogorov equation governing the nonstationary probability density of the amplitude envelopes is deduced. By applying the Galerkin method, the nonstationary probability density can be expressed as a series expansion in terms of a set of orthogonal base functions with time-dependent coefficients. Finally, the nonstationary probability densities for the amplitude response, as well as those for the state-space response, are solved approximately. To illustrate the applicability, the proposed method is applied to a two-degree-of-freedom van der Pol oscillator subject to external excitations of Gaussian white noises.

  6. Protein single-model quality assessment by feature-based probability density functions.

    PubMed

    Cao, Renzhi; Cheng, Jianlin

    2016-01-01

    Protein quality assessment (QA) has played an important role in protein structure prediction. We developed a novel single-model quality assessment method-Qprob. Qprob calculates the absolute error for each protein feature value against the true quality scores (i.e. GDT-TS scores) of protein structural models, and uses them to estimate its probability density distribution for quality assessment. Qprob has been blindly tested on the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11) as MULTICOM-NOVEL server. The official CASP result shows that Qprob ranks as one of the top single-model QA methods. In addition, Qprob makes contributions to our protein tertiary structure predictor MULTICOM, which is officially ranked 3rd out of 143 predictors. The good performance shows that Qprob is good at assessing the quality of models of hard targets. These results demonstrate that this new probability density distribution based method is effective for protein single-model quality assessment and is useful for protein structure prediction. The webserver of Qprob is available at: http://calla.rnet.missouri.edu/qprob/. The software is now freely available in the web server of Qprob. PMID:27041353

  7. Sediment sound speed inversion with time-frequency analysis and modal arrival time probability density functions.

    PubMed

    Michalopoulou, Zoi-Heleni; Pole, Andrew

    2016-07-01

    The dispersion pattern of a received signal is critical for understanding physical properties of the propagation medium. The objective of this work is to accurately estimate sediment sound speed using modal arrival times obtained from dispersion curves extracted via time-frequency analysis of acoustic signals. A particle filter is used that estimates probability density functions of modal frequencies arriving at specific times. Employing this information, probability density functions of arrival times for modal frequencies are constructed. Samples of arrival time differences are then obtained and are propagated backwards through an inverse acoustic model. As a result, probability density functions of sediment sound speed are estimated. Maximum a posteriori estimates indicate that inversion is successful. It is also demonstrated that multiple frequency processing offers an advantage over inversion at a single frequency, producing results with reduced variance. PMID:27475202

  8. Protein single-model quality assessment by feature-based probability density functions

    PubMed Central

    Cao, Renzhi; Cheng, Jianlin

    2016-01-01

    Protein quality assessment (QA) has played an important role in protein structure prediction. We developed a novel single-model quality assessment method–Qprob. Qprob calculates the absolute error for each protein feature value against the true quality scores (i.e. GDT-TS scores) of protein structural models, and uses them to estimate its probability density distribution for quality assessment. Qprob has been blindly tested on the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11) as MULTICOM-NOVEL server. The official CASP result shows that Qprob ranks as one of the top single-model QA methods. In addition, Qprob makes contributions to our protein tertiary structure predictor MULTICOM, which is officially ranked 3rd out of 143 predictors. The good performance shows that Qprob is good at assessing the quality of models of hard targets. These results demonstrate that this new probability density distribution based method is effective for protein single-model quality assessment and is useful for protein structure prediction. The webserver of Qprob is available at: http://calla.rnet.missouri.edu/qprob/. The software is now freely available in the web server of Qprob. PMID:27041353

  9. A unified optical damage criterion based on the probability density distribution of detector signals

    NASA Astrophysics Data System (ADS)

    Somoskoi, T.; Vass, Cs.; Mero, M.; Mingesz, R.; Bozoki, Z.; Osvay, K.

    2013-11-01

    Various methods and procedures have been developed so far to test laser induced optical damage. The question naturally arises as to what the respective sensitivities of these diverse methods are. To make a suitable comparison, the processing of the measured primary signal has to be at least similar across the various methods, and one needs to establish a proper damage criterion that is universally applicable to every method. We defined damage criteria based on the probability density distribution of the obtained detector signals. This was determined by the kernel density estimation procedure. We have tested the entire evaluation procedure in four well-known detection techniques: direct observation of the sample by optical microscopy; monitoring of the change in the light scattering power of the target surface; and the detection of the generated photoacoustic waves both in the bulk of the sample and in the surrounding air.
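
    A minimal sketch of the kernel-density step described above, using SciPy's gaussian_kde on synthetic detector signals. The low-density outlier rule at the end is a hypothetical damage criterion used only for illustration; the paper's actual criteria are defined per detection technique.

```python
# Kernel-density estimate of synthetic detector signals with a hypothetical
# low-density outlier rule standing in for a damage criterion (illustration
# only; not the criteria or data of the paper).
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
undamaged = rng.normal(1.0, 0.05, 500)         # routine shots (synthetic)
damaged = rng.normal(1.6, 0.20, 12)            # occasional damage events (synthetic)
signals = np.concatenate([undamaged, damaged])

kde = gaussian_kde(signals)                    # estimated PDF of the detector signals
density = kde(signals)

# Hypothetical criterion: flag shots whose signal falls where the estimated
# density is below 5% of the density peak.
flagged = np.where(density < 0.05 * density.max())[0]
print(f"{flagged.size} shots flagged as possible damage events")
```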

  10. Probability Density Functions of Observed Rainfall in Montana

    NASA Technical Reports Server (NTRS)

    Larsen, Scott D.; Johnson, L. Ronald; Smith, Paul L.

    1995-01-01

    The question of whether a rain rate probability density function (PDF) can vary uniformly between precipitation events is examined. Image analysis on large samples of radar echoes is possible because of advances in technology. The data provided by such an analysis easily allow development of radar reflectivity factor (and by extension rain rate) distributions. Finding a PDF becomes a matter of finding a function that describes the curve approximating the resulting distributions. Ideally, one PDF would exist for all cases, or many PDFs would have the same functional form with only systematic variations in parameters (such as size or shape). Satisfying either of these cases would validate the theoretical basis of the Area Time Integral (ATI). Using the method of moments and Elderton's curve selection criteria, the Pearson Type 1 equation was identified as a potential fit for 89 percent of the observed distributions. Further analysis indicates that the Type 1 curve does approximate the shape of the distributions but quantitatively does not produce a great fit.

  11. Representation of Probability Density Functions from Orbit Determination using the Particle Filter

    NASA Technical Reports Server (NTRS)

    Mashiku, Alinda K.; Garrison, James; Carpenter, J. Russell

    2012-01-01

    Statistical orbit determination enables us to obtain estimates of the state and the statistical information of its region of uncertainty. In order to obtain an accurate representation of the probability density function (PDF) that incorporates higher order statistical information, we propose the use of nonlinear estimation methods such as the Particle Filter. The Particle Filter (PF) is capable of providing a PDF representation of the state estimates whose accuracy is dependent on the number of particles or samples used. For this method to be applicable to real case scenarios, we need a way of accurately representing the PDF in a compressed manner with little information loss. Hence we propose using the Independent Component Analysis (ICA) as a non-Gaussian dimensional reduction method that is capable of maintaining higher order statistical information obtained using the PF. Methods such as the Principal Component Analysis (PCA) are based on utilizing up to second order statistics, and hence will not suffice in maintaining maximum information content. Both the PCA and the ICA are applied to two scenarios that involve a highly eccentric orbit with a lower a priori uncertainty covariance and a less eccentric orbit with a higher a priori uncertainty covariance, to illustrate the capability of the ICA in relation to the PCA.
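
    The following sketch illustrates the dimension-reduction comparison in spirit only: a synthetic non-Gaussian particle cloud (a stand-in for a PF state ensemble) is compressed with PCA and with FastICA from scikit-learn, and the excess kurtosis retained by the components is compared. The mixing model and sizes are assumptions, not the paper's orbit scenarios.

```python
# Synthetic illustration of compressing a non-Gaussian particle cloud (a
# stand-in for a PF state ensemble) with PCA and with ICA, comparing the excess
# kurtosis retained by the components. The mixing model is assumed, not the
# paper's orbit scenarios.
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(3)
sources = rng.laplace(size=(5000, 3))              # heavy-tailed latent sources
mixing = rng.normal(size=(3, 6))
particles = sources @ mixing + 0.01 * rng.normal(size=(5000, 6))

pca_comp = PCA(n_components=3).fit_transform(particles)
ica_comp = FastICA(n_components=3, random_state=0).fit_transform(particles)

# Laplace sources have excess kurtosis 3; the ICA components should retain more
# of that higher-order structure than the variance-ranked PCA components.
print("PCA component kurtosis:", np.round(kurtosis(pca_comp, axis=0), 2))
print("ICA component kurtosis:", np.round(kurtosis(ica_comp, axis=0), 2))
```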

  12. Probability density function modeling for sub-powered interconnects

    NASA Astrophysics Data System (ADS)

    Pater, Flavius; Amaricǎi, Alexandru

    2016-06-01

    This paper proposes three mathematical models for the reliability probability density function of interconnects supplied at sub-threshold voltages: spline curve approximations, Gaussian models, and sine interpolation. The proposed analysis aims at determining the most appropriate fit for the switching delay versus probability of correct switching for sub-powered interconnects. We compare the three mathematical models with Monte Carlo simulations of interconnects for 45 nm CMOS technology supplied at 0.25 V.

  13. Assumed Probability Density Functions for Shallow and Deep Convection

    NASA Astrophysics Data System (ADS)

    Bogenschutz, Peter A.; Krueger, Steven K.; Khairoutdinov, Marat

    2010-04-01

    The assumed joint probability density function (PDF) between vertical velocity and conserved temperature and total water scalars has been suggested to be a relatively computationally inexpensive and unified subgrid-scale (SGS) parameterization for boundary layer clouds and turbulent moments. This paper analyzes the performance of five families of PDFs using large-eddy simulations of deep convection, shallow convection, and a transition from stratocumulus to trade wind cumulus. Three of the PDF families are based on the double Gaussian form and the remaining two are the single Gaussian and a Double Delta Function (analogous to a mass flux model). The assumed PDF method is tested for grid sizes as small as 0.4 km to as large as 204.8 km. In addition, studies are performed for PDF sensitivity to errors in the input moments and for how well the PDFs diagnose some higher-order moments. In general, the double Gaussian PDFs more accurately represent SGS cloud structure and turbulence moments in the boundary layer compared to the single Gaussian and Double Delta Function PDFs for the range of grid sizes tested. This is especially true for small SGS cloud fractions. While the most complex PDF, Lewellen-Yoh, better represents shallow convective cloud properties (cloud fraction and liquid water mixing ratio) compared to the less complex Analytic Double Gaussian 1 PDF, there appears to be no advantage in implementing Lewellen-Yoh for deep convection. However, the Analytic Double Gaussian 1 PDF better represents the liquid water flux, is less sensitive to errors in the input moments, and diagnoses higher order moments more accurately. Between the Lewellen-Yoh and Analytic Double Gaussian 1 PDFs, it appears that neither family is distinctly better at representing cloudy layers. However, due to the reduced computational cost and fairly robust results, it appears that the Analytic Double Gaussian 1 PDF could be an ideal family for SGS cloud and turbulence representation in coarse

  14. Empirical and quadrature approximation of acoustic field and array response probability density functions.

    PubMed

    Hayward, Thomas J; Oba, Roger M

    2013-07-01

    Numerical methods are presented for approximating the probability density functions (pdf's) of acoustic fields and receiver-array responses induced by a given joint pdf of a set of acoustic environmental parameters. An approximation to the characteristic function of the random acoustic field (the inverse Fourier transform of the field pdf) is first obtained either by construction of the empirical characteristic function (ECF) from a random sample of the acoustic parameters, or by application of generalized Gaussian quadrature to approximate the integral defining the characteristic function. The Fourier transform is then applied to obtain an approximation of the pdf by a continuous function of the field variables. Application of both the ECF and generalized Gaussian quadrature is demonstrated in an example of a shallow-water ocean waveguide with two-dimensional uncertainty of sound speed and attenuation coefficient in the ocean bottom. Both approximations lead to a smoother estimate of the field pdf than that provided by a histogram, with generalized Gaussian quadrature providing a smoother estimate at the tails of the pdf. Potential applications to acoustic system performance quantification and to nonparametric acoustic signal processing are discussed. PMID:23862782

  15. A new estimator method for GARCH models

    NASA Astrophysics Data System (ADS)

    Onody, R. N.; Favaro, G. M.; Cazaroto, E. R.

    2007-06-01

    The GARCH (p, q) model is a very interesting stochastic process with widespread applications and a central role in empirical finance. The Markovian GARCH (1, 1) model has only 3 control parameters and a much discussed question is how to estimate them when a series of some financial asset is given. Besides the maximum likelihood estimator technique, there is another method which uses the variance, the kurtosis and the autocorrelation time to determine them. We propose here to use the standardized 6th moment. The set of parameters obtained in this way produces a very good probability density function and a much better time autocorrelation function. This is true for both studied indexes: NYSE Composite and FTSE 100. The probability of return to the origin is investigated at different time horizons for both Gaussian and Laplacian GARCH models. In spite of the fact that these models show almost identical performances with respect to the final probability density function and to the time autocorrelation function, their scaling properties are, however, very different. The Laplacian GARCH model gives a better scaling exponent for the NYSE time series, whereas the Gaussian dynamics fits better the FTSE scaling exponent.
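
    A small sketch of the extra statistic proposed above, the standardized sixth moment, estimated here from a synthetic Gaussian GARCH(1,1) series. The parameter values are assumed for illustration; an actual application would use NYSE or FTSE returns and match the sample moment against its theoretical GARCH expression.

```python
# Standardized 6th moment m6 = E[(x - mean)^6] / std^6 estimated from a toy
# Gaussian GARCH(1,1) series; the parameter values are assumed for illustration.
import numpy as np

def standardized_moment(x, order):
    """Central moment of the given order divided by std**order."""
    c = x - x.mean()
    return float(np.mean(c ** order) / x.std() ** order)

rng = np.random.default_rng(4)
a0, a1, b1 = 1e-5, 0.08, 0.90          # assumed GARCH(1,1) parameters
n = 100_000
x = np.zeros(n)
sig2 = a0 / (1.0 - a1 - b1)            # start from the stationary variance
for t in range(1, n):
    sig2 = a0 + a1 * x[t - 1] ** 2 + b1 * sig2
    x[t] = np.sqrt(sig2) * rng.standard_normal()

print("kurtosis (4th standardized moment):", round(standardized_moment(x, 4), 2))
print("standardized 6th moment:           ", round(standardized_moment(x, 6), 2))
```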

  16. Representation of layer-counted proxy records as probability densities on error-free time axes

    NASA Astrophysics Data System (ADS)

    Boers, Niklas; Goswami, Bedartha; Ghil, Michael

    2016-04-01

    Time series derived from paleoclimatic proxy records exhibit substantial dating uncertainties in addition to the measurement errors of the proxy values. For radiometrically dated proxy archives, Goswami et al. [1] have recently introduced a framework rooted in Bayesian statistics that successfully propagates the dating uncertainties from the time axis to the proxy axis. The resulting proxy record consists of a sequence of probability densities over the proxy values, conditioned on prescribed age values. One of the major benefits of this approach is that the proxy record is represented on an accurate, error-free time axis. Such unambiguous dating is crucial, for instance, in comparing different proxy records. This approach, however, is not directly applicable to proxy records with layer-counted chronologies, as for example ice cores, which are typically dated by counting quasi-annually deposited ice layers. Hence the nature of the chronological uncertainty in such records is fundamentally different from that in radiometrically dated ones. Here, we introduce a modification of the Goswami et al. [1] approach that is specifically designed for layer-counted proxy records, instead of radiometrically dated ones. We apply our method to isotope ratios and dust concentrations in the NGRIP core, using a published 60,000-year chronology [2]. It is shown that the further one goes into the past, the more the layer-counting errors accumulate and lead to growing uncertainties in the probability density sequence for the proxy values that results from the proposed approach. For the older parts of the record, these uncertainties affect more and more a statistically sound estimation of proxy values. This difficulty implies that great care has to be exercised when comparing and in particular aligning specific events among different layer-counted proxy records. On the other hand, when attempting to derive stochastic dynamical models from the proxy records, one is only interested in the

  17. Nonstationary probability densities of a class of nonlinear system excited by external colored noise

    NASA Astrophysics Data System (ADS)

    Qi, LuYuan; Xu, Wei; Gu, XuDong

    2012-03-01

    This paper deals with the approximate nonstationary probability density of a class of nonlinear vibrating systems excited by colored noise. First, the stochastic averaging method is adopted to obtain the averaged Itô equation for the amplitude of the system. The corresponding Fokker-Planck-Kolmogorov equation governing the evolutionary probability density function is deduced. Then, the approximate solution of the Fokker-Planck-Kolmogorov equation is derived by applying the Galerkin method. The solution is expressed as a series expansion in terms of a set of proper basis functions with time-dependent coefficients. Finally, an example is given to illustrate the proposed procedure. The validity of the proposed method is confirmed by Monte Carlo simulation.

  18. Probability density function characterization for aggregated large-scale wind power based on Weibull mixtures

    DOE PAGES

    Gomez-Lazaro, Emilio; Bueso, Maria C.; Kessler, Mathieu; Martin-Martinez, Sergio; Zhang, Jie; Hodge, Bri -Mathias; Molina-Garcia, Angel

    2016-02-02

    Here, the Weibull probability distribution has been widely applied to characterize wind speeds for wind energy resources. Wind power generation modeling is different, however, due in particular to power curve limitations, wind turbine control methods, and transmission system operation requirements. These differences are even greater for aggregated wind power generation in power systems with high wind penetration. Consequently, models based on one Weibull component can provide poor characterizations for aggregated wind power generation. With this aim, the present paper focuses on discussing Weibull mixtures to characterize the probability density function (PDF) for aggregated wind power generation. PDFs of wind power data are first classified according to hourly and seasonal patterns. The selection of the number of components in the mixture is analyzed through two well-known criteria: the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). Finally, the optimal number of Weibull components for maximum likelihood is explored for the defined patterns, including the estimated weight, scale, and shape parameters. Results show that multi-Weibull models are more suitable to characterize aggregated wind power data due to the impact of distributed generation, the variety of wind speed values, and wind power curtailment.

  19. Analytical Formulation of the Single-visit Completeness Joint Probability Density Function

    NASA Astrophysics Data System (ADS)

    Garrett, Daniel; Savransky, Dmitry

    2016-09-01

    We derive an exact formulation of the multivariate integral representing the single-visit obscurational and photometric completeness joint probability density function for arbitrary distributions for planetary parameters. We present a derivation of the region of nonzero values of this function, which extends previous work, and discuss the time and computational complexity costs and benefits of the method. We present a working implementation and demonstrate excellent agreement between this approach and Monte Carlo simulation results.

  20. Kappa distribution and Probability Density Functions in Solar Wind

    NASA Astrophysics Data System (ADS)

    Jurac, S.

    2004-12-01

    A signature of statistical intermittency is the presence of large deviations from the average value: this increased probability of finding extreme deviations is characterized by probability density functions (PDFs) which exhibit non-Gaussian power-law tails. Such power-law distributions have been observed over decades in biology, chemistry, finance and other fields. Known examples include heartbeat histograms, price distributions, turbulent fluid flow and many other non-equilibrium systems. It is shown that the Kappa distribution represents a good description of the PDFs observed in the solar wind. The asymmetric fluctuations in variance over time observed in solar wind PDFs are Gamma distributed. It is shown that, by assuming such a distribution of variance, the Kappa distribution can be analytically derived.
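
    The variance-mixing idea can be illustrated numerically. The sketch below mixes a Gaussian over a Gamma-distributed precision (inverse variance), the textbook construction that yields a Student-t, i.e. kappa-like, distribution, and compares its tails with those of a plain Gaussian. This illustrates the mechanism rather than reproducing the paper's derivation, and the shape parameter is assumed.

```python
# Illustration (not the paper's derivation) of how variance mixing produces
# power-law-like tails: the precision (inverse variance) is drawn from a Gamma
# distribution, the textbook construction yielding a Student-t, i.e.
# kappa-like, distribution. The shape parameter is assumed.
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(5)
n = 1_000_000
kappa = 3.0                                        # assumed shape parameter
precision = rng.gamma(shape=kappa, scale=1.0 / kappa, size=n)
mixed = rng.standard_normal(n) / np.sqrt(precision)
gauss = rng.standard_normal(n)

print("P(|x| > 5 sigma), variance-mixed:", np.mean(np.abs(mixed) > 5 * mixed.std()))
print("P(|x| > 5 sigma), plain Gaussian:", np.mean(np.abs(gauss) > 5.0))
print("excess kurtosis of mixed sample: ", round(float(kurtosis(mixed)), 2))
```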

  1. Zeeman mapping of probability densities in square quantum wells using magnetic probes

    NASA Astrophysics Data System (ADS)

    Prechtl, G.; Heiss, W.; Bonanni, A.; Jantsch, W.; Mackowski, S.; Janik, E.; Karczewski, G.

    2000-06-01

    We use a method to probe experimentally the probability density of carriers confined in semiconductor quantum structures. The exciton Zeeman splitting in quantum wells containing a single, ultranarrow magnetic layer is studied as a function of the layer position. In particular, a system consisting of a 1/4 monolayer MnTe embedded at varying positions in nonmagnetic CdTe/CdMgTe quantum wells is investigated. The sp-d exchange interaction results in a drastic increase of the Zeeman splitting, which, because of the strongly localized nature of this interaction, sensitively depends on the position of the MnTe submonolayer in the quantum well. For various interband transitions we show that the dependence of the exciton Zeeman splitting on the position of the magnetic layer directly maps the probability density of free holes in the growth direction.

  2. Analysis of 2-d ultrasound cardiac strain imaging using joint probability density functions.

    PubMed

    Ma, Chi; Varghese, Tomy

    2014-06-01

    Ultrasound frame rates play a key role for accurate cardiac deformation tracking. Insufficient frame rates lead to an increase in signal de-correlation artifacts resulting in erroneous displacement and strain estimation. Joint probability density distributions generated from estimated axial strain and its associated signal-to-noise ratio provide a useful approach to assess the minimum frame rate requirements. Previous reports have demonstrated that bi-modal distributions in the joint probability density indicate inaccurate strain estimation over a cardiac cycle. In this study, we utilize similar analysis to evaluate a 2-D multi-level displacement tracking and strain estimation algorithm for cardiac strain imaging. The effect of different frame rates, final kernel dimensions and a comparison of radio frequency and envelope based processing are evaluated using echo signals derived from a 3-D finite element cardiac model and five healthy volunteers. Cardiac simulation model analysis demonstrates that the minimum frame rates required to obtain accurate joint probability distributions for the signal-to-noise ratio and strain, for a final kernel dimension of 1 λ by 3 A-lines, was around 42 Hz for radio frequency signals. On the other hand, even a frame rate of 250 Hz with envelope signals did not replicate the ideal joint probability distribution. For the volunteer study, clinical data was acquired only at a 34 Hz frame rate, which appears to be sufficient for radio frequency analysis. We also show that an increase in the final kernel dimensions significantly affect the strain probability distribution and joint probability density function generated, with a smaller effect on the variation in the accumulated mean strain estimated over a cardiac cycle. Our results demonstrate that radio frequency frame rates currently achievable on clinical cardiac ultrasound systems are sufficient for accurate analysis of the strain probability distribution, when a multi-level 2-D
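
    A joint probability density of the kind used above can be approximated by a normalized two-dimensional histogram of the paired estimates. The sketch below does this for synthetic strain and SNR samples; a bimodal joint PDF of the type associated with poor tracking would show up as two separated peaks. All values are made up.

```python
# Joint probability density of two estimated quantities (axial strain and its
# SNR) approximated by a normalized 2-D histogram. The samples are synthetic;
# a bimodal joint PDF would appear as two separated peaks.
import numpy as np

rng = np.random.default_rng(6)
strain = np.concatenate([rng.normal(0.10, 0.01, 8000),     # well-tracked estimates
                         rng.normal(0.02, 0.02, 2000)])    # de-correlated estimates
snr_db = np.concatenate([rng.normal(20.0, 2.0, 8000),
                         rng.normal(5.0, 3.0, 2000)])

pdf, strain_edges, snr_edges = np.histogram2d(strain, snr_db, bins=50, density=True)

i, j = np.unravel_index(np.argmax(pdf), pdf.shape)          # locate the dominant peak
print(f"joint-PDF peak near strain = {strain_edges[i]:.3f}, SNR = {snr_edges[j]:.1f} dB")
```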

  3. Effect of Non-speckle Echo Signals on Tissue Characteristics for Liver Fibrosis using Probability Density Function of Ultrasonic B-mode image

    NASA Astrophysics Data System (ADS)

    Mori, Shohei; Hirata, Shinnosuke; Yamaguchi, Tadashi; Hachiya, Hiroyuki

    To develop a quantitative diagnostic method for liver fibrosis using an ultrasound B-mode image, a probability imaging method of tissue characteristics based on a multi-Rayleigh model, which expresses a probability density function of echo signals from liver fibrosis, has been proposed. In this paper, an effect of non-speckle echo signals on tissue characteristics estimated from the multi-Rayleigh model was evaluated. Non-speckle signals were determined and removed using the modeling error of the multi-Rayleigh model. The correct tissue characteristics of fibrotic tissue could be estimated with the removal of non-speckle signals.
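
    A multi-Rayleigh amplitude model of the general kind referred to above can be written as a weighted mixture of Rayleigh components. The sketch below evaluates such a mixture with SciPy; the weights and scale parameters are assumed for illustration and are not fitted to liver data.

```python
# Multi-Rayleigh amplitude model: a weighted mixture of Rayleigh components.
# Weights and scales are assumed for illustration, not fitted to liver data.
import numpy as np
from scipy.stats import rayleigh

weights = np.array([0.6, 0.3, 0.1])       # mixture weights (assumed), summing to 1
scales = np.array([0.8, 1.5, 3.0])        # Rayleigh scale of each tissue class (assumed)

def multi_rayleigh_pdf(x):
    """Weighted sum of Rayleigh PDFs evaluated at amplitude(s) x."""
    x = np.atleast_1d(x)[:, None]
    return np.sum(weights * rayleigh.pdf(x, scale=scales), axis=1)

amplitudes = np.linspace(0.0, 10.0, 6)
print(np.round(multi_rayleigh_pdf(amplitudes), 4))
# Samples with a large modeling error under this mixture can be treated as
# non-speckle signals and removed before estimating tissue characteristics.
```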

  4. Probability Density Function for Waves Propagating in a Straight PEC Rough Wall Tunnel

    SciTech Connect

    Pao, H

    2004-11-08

    The probability density function for waves propagating in a straight perfect electrical conductor (PEC) rough wall tunnel is deduced from mathematical models of the random electromagnetic fields. By the central limit theorem, the field propagating in caves or tunnels is a complex-valued Gaussian random process. The probability density function for the single-mode field amplitude in such a structure is Ricean. Since both the expected value and the standard deviation of this field depend only on radial position, the probability density function, which gives the power distribution, is a radially dependent function. The radio channel places fundamental limitations on the performance of wireless communication systems in tunnels and caves. The transmission path between the transmitter and receiver can vary from a simple direct line of sight to one that is severely obstructed by rough walls and corners. Unlike wired channels that are stationary and predictable, radio channels can be extremely random and difficult to analyze. In fact, modeling the radio channel has historically been one of the more challenging parts of any radio system design; this is often done using statistical methods. In this contribution, we present the most important statistical property, the field probability density function, of waves propagating in a straight PEC rough wall tunnel. This work studies only the simplest case, a PEC boundary, which is not the real-world situation, but the methods and conclusions developed herein are applicable to real-world problems in which the boundary is dielectric. The mechanisms behind electromagnetic wave propagation in caves or tunnels are diverse, but can generally be attributed to reflection, diffraction, and scattering. Because of the multiple reflections from rough walls, the electromagnetic waves travel along different paths of varying lengths. The interactions between these waves cause multipath fading at any location, and the strengths of the waves decrease as the distance
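
    For reference, a single-mode Ricean amplitude PDF of the kind mentioned above can be evaluated directly with SciPy, whose rice distribution takes the shape parameter b = nu/sigma and scale sigma. The numbers below are placeholders, not values from the tunnel model.

```python
# Reference evaluation of a Ricean amplitude PDF with SciPy (placeholder values,
# not parameters of the tunnel model). SciPy's rice distribution is
# parameterized as rice.pdf(r, b, scale=sigma) with b = nu/sigma, where nu is
# the coherent component and sigma the diffuse spread.
import numpy as np
from scipy.stats import rice

nu, sigma = 1.0, 0.5              # assumed coherent amplitude and diffuse spread
r = np.linspace(0.0, 4.0, 9)
print(np.round(rice.pdf(r, nu / sigma, scale=sigma), 4))
# As nu/sigma -> 0 the Ricean PDF tends to a Rayleigh; for large nu/sigma it
# approaches a Gaussian centered near nu.
```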

  5. Using Prediction Markets to Generate Probability Density Functions for Climate Change Risk Assessment

    NASA Astrophysics Data System (ADS)

    Boslough, M.

    2011-12-01

    Climate-related uncertainty is traditionally presented as an error bar, but it is becoming increasingly common to express it in terms of a probability density function (PDF). PDFs are a necessary component of probabilistic risk assessments, for which simple "best estimate" values are insufficient. Many groups have generated PDFs for climate sensitivity using a variety of methods. These PDFs are broadly consistent, but vary significantly in their details. One axiom of the verification and validation community is, "codes don't make predictions, people make predictions." This is a statement of the fact that subject domain experts generate results using assumptions within a range of epistemic uncertainty and interpret them according to their expert opinion. Different experts with different methods will arrive at different PDFs. For effective decision support, a single consensus PDF would be useful. We suggest that market methods can be used to aggregate an ensemble of opinions into a single distribution that expresses the consensus. Prediction markets have been shown to be highly successful at forecasting the outcome of events ranging from elections to box office returns. In prediction markets, traders can take a position on whether some future event will or will not occur. These positions are expressed as contracts that are traded in a double-auction market that aggregates price, which can be interpreted as a consensus probability that the event will take place. Since climate sensitivity cannot directly be measured, it cannot be predicted. However, the changes in global mean surface temperature are a direct consequence of climate sensitivity, changes in forcing, and internal variability. Viable prediction markets require an undisputed event outcome on a specific date. Climate-related markets exist on Intrade.com, an online trading exchange. One such contract is titled "Global Temperature Anomaly for Dec 2011 to be greater than 0.65 Degrees C." Settlement is based

  6. Interpolation of probability densities in ENDF and ENDL

    SciTech Connect

    Hedstrom, G

    2006-01-27

    Suppose that we are given two probability densities p_0(E') and p_1(E') for the energy E' of an outgoing particle, p_0(E') corresponding to energy E_0 of the incident particle and p_1(E') corresponding to incident energy E_1. If E_0 < E_1, the problem is how to define p_α(E') for intermediate incident energies E_α = (1 - α)E_0 + αE_1 with 0 < α < 1. In this note the author considers three ways to do it. The author begins with unit-base interpolation, which is standard in ENDL and is sometimes used in ENDF, then describes the equiprobable bins used by some Monte Carlo codes, and closes with a discussion of interpolation by corresponding points, which is commonly used in ENDF.
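
    The sketch below implements generic unit-base interpolation between two outgoing-energy densities: each density is mapped onto the unit interval of its own energy range (with the Jacobian keeping it normalized), the mapped densities and the energy ranges are interpolated linearly, and the result is mapped back. The toy densities and grids are made up; this is not code from ENDF/ENDL processing tools.

```python
# Hedged sketch of generic unit-base interpolation between two outgoing-energy
# densities p0(E') and p1(E'). Grids and densities are made up; this is not
# code from ENDF/ENDL processing tools.
import numpy as np

def unit_base_interpolate(E0, p0, E1, p1, alpha, n=201):
    """Return (E_grid, p_alpha) for an intermediate incident-energy fraction alpha."""
    def to_unit(E, p):
        # Map onto x = (E' - Emin)/(Emax - Emin); the Jacobian keeps p normalized.
        span = E[-1] - E[0]
        return (E - E[0]) / span, p * span, span
    x0, q0, s0 = to_unit(E0, p0)
    x1, q1, s1 = to_unit(E1, p1)
    x = np.linspace(0.0, 1.0, n)
    q = (1 - alpha) * np.interp(x, x0, q0) + alpha * np.interp(x, x1, q1)
    Emin = (1 - alpha) * E0[0] + alpha * E1[0]      # interpolate the energy range itself
    span = (1 - alpha) * s0 + alpha * s1
    return Emin + x * span, q / span

# Two triangular toy densities with different outgoing-energy ranges.
E0 = np.linspace(0.0, 2.0, 51); p0 = np.maximum(0.0, 1.0 - np.abs(E0 - 1.0))
E1 = np.linspace(0.0, 4.0, 51); p1 = 0.5 * np.maximum(0.0, 1.0 - np.abs(E1 / 2.0 - 1.0))
E, p = unit_base_interpolate(E0, p0, E1, p1, alpha=0.5)
print("normalization of interpolated density:", round(float(np.sum(p) * (E[1] - E[0])), 3))
```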

  7. On singular probability densities generated by extremal dynamics

    NASA Astrophysics Data System (ADS)

    Garcia, Guilherme J. M.; Dickman, Ronald

    2004-02-01

    Extremal dynamics is the mechanism that drives the Bak-Sneppen model into a (self-organized) critical state, marked by a singular stationary probability density p(x). With the aim of understanding this phenomenon, we study the BS model and several variants via mean-field theory and simulation. In all cases, we find that p(x) is singular at one or more points, as a consequence of extremal dynamics. Furthermore we show that the extremal barrier xi always belongs to the 'prohibited' interval, in which p(x) = 0. Our simulations indicate that the Bak-Sneppen universality class is robust with regard to changes in the updating rule: we find the same value for the exponent π for all variants. Mean-field theory, which furnishes an exact description for the model on a complete graph, reproduces the character of the probability distribution found in simulations. For the modified processes mean-field theory takes the form of a functional equation for p(x).

  8. Probability density distribution of velocity differences at high Reynolds numbers

    NASA Technical Reports Server (NTRS)

    Praskovsky, Alexander A.

    1993-01-01

    Recent understanding of fine-scale turbulence structure in high Reynolds number flows is mostly based on Kolmogorov's original and revised models. The main finding of these models is that intrinsic characteristics of fine-scale fluctuations are universal at high Reynolds numbers, i.e., the functional behavior of any small-scale parameter is the same in all flows if the Reynolds number is high enough. The only large-scale quantity that directly affects small-scale fluctuations is the energy flux through the cascade. In dynamical equilibrium between large- and small-scale motions, this flux is equal to the mean rate of energy dissipation epsilon. The pdd of velocity differences is a very important characteristic for both the basic understanding of fully developed turbulence and engineering problems. Hence, it is important to test the findings that (1) the tails of the probability density distribution (pdd) behave as P(Δu) ∝ exp(-b(r)|Δu|/σ_Δu), and (2) the logarithmic decrement b(r) scales as b(r) ∝ r^0.15 when the separation r lies in the inertial subrange in high Reynolds number laboratory shear flows.

  9. Efficiency issues related to probability density function comparison

    SciTech Connect

    Kelly, P.M.; Cannon, M.; Barros, J.E.

    1996-03-01

    The CANDID project (Comparison Algorithm for Navigating Digital Image Databases) employs probability density functions (PDFs) of localized feature information to represent the content of an image for search and retrieval purposes. A similarity measure between PDFs is used to identify database images that are similar to a user-provided query image. Unfortunately, signature comparison involving PDFs is a very time-consuming operation. In this paper, we look into some efficiency considerations when working with PDFs. Since PDFs can take on many forms, we look into tradeoffs between accurate representation and efficiency of manipulation for several data sets. In particular, we typically represent each PDF as a Gaussian mixture (e.g. as a weighted sum of Gaussian kernels) in the feature space. We find that by constraining all Gaussian kernels to have principal axes that are aligned to the natural axes of the feature space, computations involving these PDFs are simplified. We can also constrain the Gaussian kernels to be hyperspherical rather than hyperellipsoidal, simplifying computations even further, and yielding an order of magnitude speedup in signature comparison. This paper illustrates the tradeoffs encountered when using these constraints.
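
    The constraints described above correspond to the 'diag' (axis-aligned) and 'spherical' (hyperspherical) covariance types of a standard Gaussian-mixture implementation. The sketch below uses scikit-learn's GaussianMixture as a stand-in for the CANDID signatures and simply counts the covariance parameters each constraint leaves to store and manipulate; the data are synthetic.

```python
# Stand-in illustration using scikit-learn's GaussianMixture (this is not the
# CANDID code): 'diag' aligns every kernel's principal axes with the feature
# axes, 'spherical' further forces hyperspherical kernels, and both reduce the
# number of covariance parameters relative to unconstrained 'full' kernels.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)
features = rng.normal(size=(2000, 8))          # synthetic localized-feature vectors

for cov_type in ("full", "diag", "spherical"):
    gmm = GaussianMixture(n_components=5, covariance_type=cov_type,
                          random_state=0).fit(features)
    n_cov = np.asarray(gmm.covariances_).size
    print(f"{cov_type:9s}: {n_cov:4d} stored covariance parameters")
```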

  10. Probability density functions for use when calculating standardised drought indices

    NASA Astrophysics Data System (ADS)

    Svensson, Cecilia; Prosdocimi, Ilaria; Hannaford, Jamie

    2015-04-01

    Time series of drought indices like the standardised precipitation index (SPI) and standardised flow index (SFI) require a statistical probability density function to be fitted to the observed (generally monthly) precipitation and river flow data. Once fitted, the quantiles are transformed to a Normal distribution with mean = 0 and standard deviation = 1. These transformed data are the SPI/SFI, which are widely used in drought studies, including for drought monitoring and early warning applications. Different distributions were fitted to rainfall and river flow data accumulated over 1, 3, 6 and 12 months for 121 catchments in the United Kingdom. These catchments represent a range of catchment characteristics in a mid-latitude climate. Both rainfall and river flow data have a lower bound at 0, as rains and flows cannot be negative. Their empirical distributions also tend to have positive skewness, and therefore the Gamma distribution has often been a natural and suitable choice for describing the data statistically. However, after transformation of the data to Normal distributions to obtain the SPIs and SFIs for the 121 catchments, the distributions are rejected in 11% and 19% of cases, respectively, by the Shapiro-Wilk test. Three-parameter distributions traditionally used in hydrological applications, such as the Pearson type 3 for rainfall and the Generalised Logistic and Generalised Extreme Value distributions for river flow, tend to make the transformed data fit better, with rejection rates of 5% or less. However, none of these three-parameter distributions have a lower bound at zero. This means that the lower tail of the fitted distribution may potentially go below zero, which would result in a lower limit to the calculated SPI and SFI values (as observations can never reach into this lower tail of the theoretical distribution). The Tweedie distribution can overcome the problems found when using either the Gamma or the above three-parameter distributions. The
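
    A minimal sketch of the SPI pipeline described above: fit a Gamma distribution to accumulated precipitation totals, push each observation through the fitted CDF, and map the result to standard-normal quantiles. The data are synthetic; an operational SPI implementation would also handle zero-precipitation months and apply the goodness-of-fit checks discussed in the abstract.

```python
# Sketch of the SPI pipeline on synthetic monthly precipitation totals: Gamma
# fit (lower bound fixed at zero), CDF transform, then standard-normal quantiles.
# An operational SPI also treats zero-precipitation months separately.
import numpy as np
from scipy.stats import gamma, norm, shapiro

rng = np.random.default_rng(8)
precip = rng.gamma(shape=2.0, scale=30.0, size=360)       # 30 years of monthly totals (mm)

a, loc, scale = gamma.fit(precip, floc=0)
spi = norm.ppf(gamma.cdf(precip, a, loc=loc, scale=scale))

print(f"SPI mean {spi.mean():.3f}, std {spi.std():.3f}")  # close to 0 and 1 if the fit is adequate
print(f"Shapiro-Wilk p-value for normality of SPI: {shapiro(spi).pvalue:.3f}")
```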

  11. Spectral discrete probability density function of measured wind turbine noise in the far field.

    PubMed

    Ashtiani, Payam; Denison, Adelaide

    2015-01-01

    Of interest is the spectral character of wind turbine noise at typical residential set-back distances. In this paper, a spectral statistical analysis has been applied to immission measurements conducted at three locations. This method provides discrete probability density functions for the Turbine ONLY component of the measured noise. This analysis is completed for one-third octave sound levels, at integer wind speeds, and is compared to existing metrics for measuring acoustic comfort as well as previous discussions on low-frequency noise sources. PMID:25905097

  12. Probability density function analysis for optical turbulence with applications to underwater communications systems

    NASA Astrophysics Data System (ADS)

    Bernotas, Marius P.; Nelson, Charles

    2016-05-01

    The Weibull and Exponentiated Weibull probability density functions have been examined for the free space regime using heuristically derived shape and scale parameters. This paper extends current literature to the underwater channel and explores use of experimentally derived parameters. Data gathered in a short range underwater channel emulator was analyzed using a nonlinear curve fitting methodology to optimize the scale and shape parameters of the PDFs. This method provides insight into the scaled effects of underwater optical turbulence on a long range link, and may yield a general set of equations for determining the PDF for an underwater optical link.

  13. Spectral Discrete Probability Density Function of Measured Wind Turbine Noise in the Far Field

    PubMed Central

    Ashtiani, Payam; Denison, Adelaide

    2015-01-01

    Of interest is the spectral character of wind turbine noise at typical residential set-back distances. In this paper, a spectral statistical analysis has been applied to immission measurements conducted at three locations. This method provides discrete probability density functions for the Turbine ONLY component of the measured noise. This analysis is completed for one-third octave sound levels, at integer wind speeds, and is compared to existing metrics for measuring acoustic comfort as well as previous discussions on low-frequency noise sources. PMID:25905097

  14. Probability density function of a passive scalar in turbulent shear flows

    SciTech Connect

    Kollmann, W.; Janicka, J.

    1982-10-01

    The transport equation for the probability density function of a scalar in turbulent shear flow is analyzed and the closure based on the gradient flux model for the turbulent flux and an integral model for the scalar dissipation term is put forward. The probability density function equation is complemented by a two-equation turbulence model. Application to several shear flows proves the capability of the closure model to determine the probability density function of passive scalars.

  15. 3D model retrieval using probability density-based shape descriptors.

    PubMed

    Akgül, Ceyhun Burak; Sankur, Bülent; Yemez, Yücel; Schmitt, Francis

    2009-06-01

    We address content-based retrieval of complete 3D object models by a probabilistic generative description of local shape properties. The proposed shape description framework characterizes a 3D object with sampled multivariate probability density functions of its local surface features. This density-based descriptor can be efficiently computed via kernel density estimation (KDE) coupled with fast Gauss transform. The non-parametric KDE technique allows reliable characterization of a diverse set of shapes and yields descriptors which remain relatively insensitive to small shape perturbations and mesh resolution. Density-based characterization also induces a permutation property which can be used to guarantee invariance at the shape matching stage. As proven by extensive retrieval experiments on several 3D databases, our framework provides state-of-the-art discrimination over a broad and heterogeneous set of shape categories. PMID:19372614

  16. Assessment of probability density function based on POD reduced-order model for ensemble-based data assimilation

    NASA Astrophysics Data System (ADS)

    Kikuchi, Ryota; Misaka, Takashi; Obayashi, Shigeru

    2015-10-01

    An integrated method of a proper orthogonal decomposition based reduced-order model (ROM) and data assimilation is proposed for the real-time prediction of an unsteady flow field. In this paper, a particle filter (PF) and an ensemble Kalman filter (EnKF) are compared for data assimilation and the difference in the predicted flow fields is evaluated focusing on the probability density function (PDF) of the model variables. The proposed method is demonstrated using identical twin experiments of an unsteady flow field around a circular cylinder at the Reynolds number of 1000. The PF and EnKF are employed to estimate temporal coefficients of the ROM based on the observed velocity components in the wake of the circular cylinder. The prediction accuracy of ROM-PF is significantly better than that of ROM-EnKF due to the flexibility of PF for representing a PDF compared to EnKF. Furthermore, the proposed method reproduces the unsteady flow field several orders of magnitude faster than the reference numerical simulation based on the Navier-Stokes equations.

  17. Turbulent combustion analysis with various probability density functions

    NASA Astrophysics Data System (ADS)

    Kim, Yongmo; Chung, T. J.

    A finite element method for the computation of confined, axisymmetric, turbulent diffusion flames is developed. This algorithm adopts the coupled velocity-pressure formulation to improve the convergence rate in variable-viscosity/variable-density flows. In order to minimize the numerical diffusion, the streamline upwind/Petrov-Galerkin formulation is employed. Turbulence is represented by the k-epsilon model, and the combustion process involves an irreversible one-step reaction at an infinite rate. The mean mixture properties were obtained by three methods based on the diffusion flame concept: without using a pdf, with a double-delta pdf, and with a beta pdf. A comparison is made between the combustion models with and without the pdf application, and the effects of turbulence on combustion are discussed. The numerical results are compared with available experimental data.
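
    In the presumed beta-pdf approach mentioned above, the two beta parameters follow from the mean and variance of the mixture fraction supplied by the turbulence model. The sketch below applies the standard moment relations and verifies them numerically; the mean and variance values are assumed for illustration.

```python
# Sketch of the presumed beta-PDF step: the two beta parameters follow from the
# mixture-fraction mean and variance via the standard moment relations. The
# mean and variance values below are assumed for illustration.
import numpy as np
from scipy.stats import beta

fmean, fvar = 0.3, 0.02                   # assumed mixture-fraction mean and variance
g = fmean * (1.0 - fmean) / fvar - 1.0
a, b = fmean * g, (1.0 - fmean) * g       # a = 2.85, b = 6.65 for these values

f = np.linspace(1e-6, 1.0 - 1e-6, 2001)
pdf = beta.pdf(f, a, b)
df = f[1] - f[0]
print("recovered mean:    ", round(float(np.sum(f * pdf) * df), 4))
print("recovered variance:", round(float(np.sum((f - fmean) ** 2 * pdf) * df), 4))
```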

  18. Incorporating photometric redshift probability density information into real-space clustering measurements

    NASA Astrophysics Data System (ADS)

    Myers, Adam D.; White, Martin; Ball, Nicholas M.

    2009-11-01

    The use of photometric redshifts in cosmology is increasing. Often, however these photo-z are treated like spectroscopic observations, in that the peak of the photometric redshift, rather than the full probability density function (PDF), is used. This overlooks useful information inherent in the full PDF. We introduce a new real-space estimator for one of the most used cosmological statistics, the two-point correlation function, that weights by the PDF of individual photometric objects in a manner that is optimal when Poisson statistics dominate. As our estimator does not bin based on the PDF peak, it substantially enhances the clustering signal by usefully incorporating information from all photometric objects that overlap the redshift bin of interest. As a real-world application, we measure quasi-stellar object (QSO) clustering in the Sloan Digital Sky Survey (SDSS). We find that our simplest binned estimator improves the clustering signal by a factor equivalent to increasing the survey size by a factor of 2-3. We also introduce a new implementation that fully weights between pairs of objects in constructing the cross-correlation and find that this pair-weighted estimator improves clustering signal in a manner equivalent to increasing the survey size by a factor of 4-5. Our technique uses spectroscopic data to anchor the distance scale and it will be particularly useful where spectroscopic data (e.g. from BOSS) overlap deeper photometry (e.g. from Pan-STARRS, DES or the LSST). We additionally provide simple, informative expressions to determine when our estimator will be competitive with the autocorrelation of spectroscopic objects. Although we use QSOs as an example population, our estimator can and should be applied to any clustering estimate that uses photometric objects.

  19. Simulation Of Wave Function And Probability Density Of Modified Poschl Teller Potential Derived Using Supersymmetric Quantum Mechanics

    SciTech Connect

    Angraini, Lily Maysari; Suparmi; Variani, Viska Inda

    2010-12-23

    SUSY quantum mechanics can be applied to solve the Schrodinger equation for high dimensional systems that can be reduced into one dimensional systems and represented in lowering and raising operators. Lowering and raising operators can be obtained using the relationship between the original Hamiltonian equation and the (super) potential equation. In this paper SUSY quantum mechanics is used as a method to obtain the wave function and the energy level of the Modified Poschl Teller potential. The graphs of the wave function and the probability density are simulated using the Delphi 7.0 programming language. Finally, the expectation value of a quantum mechanics operator can be calculated analytically using the integral form or the probability density graph produced by the program.

  20. Simulation Of Wave Function And Probability Density Of Modified Poschl Teller Potential Derived Using Supersymmetric Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Angraini, Lily Maysari; Suparmi; Variani, Viska Inda

    2010-12-01

    SUSY quantum mechanics can be applied to solve the Schrodinger equation for high dimensional systems that can be reduced into one dimensional systems and represented in lowering and raising operators. Lowering and raising operators can be obtained using the relationship between the original Hamiltonian equation and the (super) potential equation. In this paper SUSY quantum mechanics is used as a method to obtain the wave function and the energy level of the Modified Poschl Teller potential. The graphs of the wave function and the probability density are simulated using the Delphi 7.0 programming language. Finally, the expectation value of a quantum mechanics operator can be calculated analytically using the integral form or the probability density graph produced by the program.

  1. Entrainment Rate in Shallow Cumuli: Dependence on Entrained Dry Air Sources and Probability Density Functions

    NASA Astrophysics Data System (ADS)

    Lu, C.; Liu, Y.; Niu, S.; Vogelmann, A. M.

    2012-12-01

    In situ aircraft cumulus observations from the RACORO field campaign are used to estimate entrainment rate for individual clouds using a recently developed mixing fraction approach. The entrainment rate is computed based on the observed state of the cloud core and the state of the air that is laterally mixed into the cloud at its edge. The computed entrainment rate decreases when the air is entrained from increasing distance from the cloud core edge; this is because the air farther away from cloud edge is drier than the neighboring air that is within the humid shells around cumulus clouds. Probability density functions of entrainment rate are well fitted by lognormal distributions at different heights above cloud base for different dry air sources (i.e., different source distances from the cloud core edge). Such lognormal distribution functions are appropriate for inclusion into future entrainment rate parameterization in large scale models. To the authors' knowledge, this is the first time that probability density functions of entrainment rate have been obtained in shallow cumulus clouds based on in situ observations. The reason for the wide spread of entrainment rate is that the observed clouds are affected by entrainment mixing processes to different extents, which is verified by the relationships between the entrainment rate and cloud microphysics/dynamics. The entrainment rate is negatively correlated with liquid water content and cloud droplet number concentration due to the dilution and evaporation in entrainment mixing processes. The entrainment rate is positively correlated with relative dispersion (i.e., ratio of standard deviation to mean value) of liquid water content and droplet size distributions, consistent with the theoretical expectation that entrainment mixing processes are responsible for microphysics fluctuations and spectral broadening. The entrainment rate is negatively correlated with vertical velocity and dissipation rate because entrainment
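
    A quick sketch of the lognormal fit described above, applied to a synthetic sample standing in for entrainment rates at one level (the values and units are placeholders, not RACORO data). The location parameter is fixed at zero because entrainment rates are positive.

```python
# Lognormal fit to a synthetic sample standing in for entrainment rates at one
# level (placeholder values, not RACORO data); location fixed at zero since
# rates are positive.
import numpy as np
from scipy.stats import lognorm, kstest

rng = np.random.default_rng(9)
rates = rng.lognormal(mean=np.log(1.5e-3), sigma=0.8, size=400)     # per metre (assumed)

shape, loc, scale = lognorm.fit(rates, floc=0)
print(f"fitted sigma = {shape:.2f}, median rate = {scale:.2e} m^-1")
ks = kstest(rates, "lognorm", args=(shape, loc, scale))
print(f"KS p-value of the lognormal fit: {ks.pvalue:.3f}")
```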

  2. Evaluation of muscle force classification using shape analysis of the sEMG probability density function: a simulation study.

    PubMed

    Ayachi, F S; Boudaoud, S; Marque, C

    2014-08-01

    In this work, we propose to classify, by simulation, the shape variability (or non-Gaussianity) of the surface electromyogram (sEMG) amplitude probability density function (PDF), according to contraction level, using high-order statistics (HOS) and a recent functional formalism, the core shape modeling (CSM). According to recent studies, based on simulated and/or experimental conditions, the sEMG PDF shape seems to be modified by many factors as: contraction level, fatigue state, muscle anatomy, used instrumentation, and also motor control parameters. For sensitivity evaluation against these several sources (physiological, instrumental, and neural control) of variability, a large-scale simulation (25 muscle anatomies, ten parameter configurations, three electrode arrangements) is performed, by using a recent sEMG-force model and parallel computing, to classify sEMG data from three contraction levels (20, 50, and 80% MVC). A shape clustering algorithm is then launched using five combinations of HOS parameters, the CSM method and compared to amplitude clustering with classical indicators [average rectified value (ARV) and root mean square (RMS)]. From the results screening, it appears that the CSM method obtains, using Laplacian electrode arrangement, the highest classification scores, after ARV and RMS approaches, and followed by one HOS combination. However, when some critical confounding parameters are changed, these scores decrease. These simulation results demonstrate that the shape screening of the sEMG amplitude PDF is a complex task which needs both efficient shape analysis methods and specific signal recording protocol to be properly used for tracking neural drive and muscle activation strategies with varying force contraction in complement to classical amplitude estimators. PMID:24961179
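
    As a toy illustration of HOS-type shape descriptors for an amplitude PDF, the sketch below computes skewness and excess kurtosis for a Gaussian-like and a more impulsive synthetic signal; both descriptors are near zero in the Gaussian case and depart from zero as the PDF shape becomes non-Gaussian. The paper's HOS combinations and the CSM functional descriptor are considerably richer than this.

```python
# Toy HOS shape descriptors of an amplitude PDF: skewness and excess kurtosis
# are near zero for a Gaussian signal and move away from zero as the PDF shape
# departs from Gaussianity. The two signals are synthetic stand-ins only.
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(10)
signals = {
    "gaussian-like": rng.standard_normal(20000),
    "impulsive":     rng.laplace(size=20000),
}
for name, s in signals.items():
    print(f"{name:14s} skewness={skew(s):+.2f}  excess kurtosis={kurtosis(s):+.2f}")
```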

  3. Predicting Ligand Binding Sites on Protein Surfaces by 3-Dimensional Probability Density Distributions of Interacting Atoms

    PubMed Central

    Jian, Jhih-Wei; Elumalai, Pavadai; Pitti, Thejkiran; Wu, Chih Yuan; Tsai, Keng-Chang; Chang, Jeng-Yih; Peng, Hung-Pin; Yang, An-Suei

    2016-01-01

    Predicting ligand binding sites (LBSs) on protein structures, which are obtained either from experimental or computational methods, is a useful first step in functional annotation or structure-based drug design for the protein structures. In this work, the structure-based machine learning algorithm ISMBLab-LIG was developed to predict LBSs on protein surfaces with input attributes derived from the three-dimensional probability density maps of interacting atoms, which were reconstructed on the query protein surfaces and were relatively insensitive to local conformational variations of the tentative ligand binding sites. The prediction accuracy of the ISMBLab-LIG predictors is comparable to that of the best LBS predictors benchmarked on several well-established testing datasets. More importantly, the ISMBLab-LIG algorithm has substantial tolerance to the prediction uncertainties of computationally derived protein structure models. As such, the method is particularly useful for predicting LBSs not only on experimental protein structures without known LBS templates in the database but also on computationally predicted model protein structures with structural uncertainties in the tentative ligand binding sites. PMID:27513851

  4. Development and evaluation of probability density functions for a set of human exposure factors

    SciTech Connect

    Maddalena, R.L.; McKone, T.E.; Bodnar, A.; Jacobson, J.

    1999-06-01

    The purpose of this report is to describe efforts carried out during 1998 and 1999 at the Lawrence Berkeley National Laboratory to assist the U.S. EPA in developing and ranking the robustness of a set of default probability distributions for exposure assessment factors. Among the current needs of the exposure-assessment community is the need to provide data for linking exposure, dose, and health information in ways that improve environmental surveillance, improve predictive models, and enhance risk assessment and risk management (NAS, 1994). The U.S. Environmental Protection Agency (EPA) Office of Emergency and Remedial Response (OERR) plays a lead role in developing national guidance and planning future activities that support the EPA Superfund Program. OERR is in the process of updating its 1989 Risk Assessment Guidance for Superfund (RAGS) as part of the EPA Superfund reform activities. Volume III of RAGS, when completed in 1999 will provide guidance for conducting probabilistic risk assessments. This revised document will contain technical information including probability density functions (PDFs) and methods used to develop and evaluate these PDFs. The PDFs provided in this EPA document are limited to those relating to exposure factors.

  5. Probability density functions of the average and difference intensities of Friedel opposites.

    PubMed

    Shmueli, U; Flack, H D

    2010-11-01

    Trigonometric series for the average (A) and difference (D) intensities of Friedel opposites were carefully rederived and were normalized to minimize their dependence on sin(theta)/lambda. Probability density functions (hereafter p.d.f.s) of these series were then derived by the Fourier method [Shmueli, Weiss, Kiefer & Wilson (1984). Acta Cryst. A40, 651-660] and their expressions, which admit any chemical composition of the unit-cell contents, were obtained for the space group P1. Histograms of A and D were then calculated for an assumed random-structure model and for 3135 Friedel pairs of a published solved crystal structure, and were compared with the p.d.f.s after the latter were scaled up to the histograms. Good agreement was obtained for the random-structure model and a qualitative one for the published solved structure. The results indicate that the residual discrepancy is mainly due to the presumed statistical independence of the p.d.f.'s characteristic function on the contributions of the interatomic vectors. PMID:20962376

  6. Evaluation of joint probability density function models for turbulent nonpremixed combustion with complex chemistry

    NASA Technical Reports Server (NTRS)

    Smith, N. S. A.; Frolov, S. M.; Bowman, C. T.

    1996-01-01

    Two types of mixing sub-models are evaluated in connection with a joint-scalar probability density function method for turbulent nonpremixed combustion. Model calculations are made and compared to simulation results for homogeneously distributed methane-air reaction zones mixing and reacting in decaying turbulence within a two-dimensional enclosed domain. The comparison is arranged to ensure that both the simulation and model calculations a) make use of exactly the same chemical mechanism, b) do not involve non-unity Lewis number transport of species, and c) are free from radiation loss. The modified Curl mixing sub-model was found to provide superior predictive accuracy over the simple relaxation-to-mean submodel in the case studied. Accuracy to within 10-20% was found for global means of major species and temperature; however, nitric oxide prediction accuracy was lower and highly dependent on the choice of mixing sub-model. Both mixing submodels were found to produce non-physical mixing behavior for mixture fractions removed from the immediate reaction zone. A suggestion for a further modified Curl mixing sub-model is made in connection with earlier work done in the field.
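
    For readers unfamiliar with the mixing sub-models being compared, the following Python sketch applies a generic modified-Curl mixing step to an ensemble of notional particles; the ensemble size, pair count, and scalar values are hypothetical, and the coupling to chemistry and transport used in the study is omitted.

        import numpy as np

        def modified_curl_step(phi, n_pairs, rng):
            """One modified-Curl mixing step on an ensemble of notional particles:
            random pairs move partway toward their pair mean by a uniform random
            fraction. A generic sketch of the sub-model named above, without the
            coupling to reaction or transport."""
            idx = rng.permutation(len(phi))[: 2 * n_pairs].reshape(-1, 2)
            a = rng.uniform(size=(n_pairs, 1))
            pair_mean = phi[idx].mean(axis=1, keepdims=True)
            phi[idx] = phi[idx] + a * (pair_mean - phi[idx])
            return phi

        rng = np.random.default_rng(6)
        mixture_fraction = rng.uniform(size=1000)       # notional scalar ensemble
        for _ in range(50):
            mixture_fraction = modified_curl_step(mixture_fraction, n_pairs=100, rng=rng)
        print(mixture_fraction.mean(), mixture_fraction.var())  # mean conserved, variance decays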

  7. Probability densities for quantum-mechanical collision resonances in reactive scattering

    NASA Astrophysics Data System (ADS)

    Thompson, Todd C.; Truhlar, Donald G.

    1983-10-01

    We present contour maps of the probability density |ψ|² for reactive compound-state resonances in two collinear reactions: H + FH → HF + H on a model low-barrier surface and H + H₂ → H₂ + H on the Porter-Karplus surface no. 2. The maps clearly show the Fermi-resonance schizoid character of the compound states.

  8. Dynamic Graphics in Excel for Teaching Statistics: Understanding the Probability Density Function

    ERIC Educational Resources Information Center

    Coll-Serrano, Vicente; Blasco-Blasco, Olga; Alvarez-Jareno, Jose A.

    2011-01-01

    In this article, we show a dynamic graphic in Excel that is used to introduce an important concept in our subject, Statistics I: the probability density function. This interactive graphic seeks to facilitate conceptual understanding of the main aspects analysed by the learners.

  9. Derivation of Probability Density Function of Signal-to-Interference-Plus-Noise Ratio for the MS-to-MS Interference Analysis

    PubMed Central

    2013-01-01

    This paper provides an analytical derivation of the probability density function of signal-to-interference-plus-noise ratio in the scenario where mobile stations interfere with each other. This analysis considers cochannel interference and adjacent channel interference. This could also remove the need for Monte Carlo simulations when evaluating the interference effect between mobile stations. Numerical verification shows that the analytical result agrees well with a Monte Carlo simulation. Also, we applied analytical methods for evaluating the interference effect between mobile stations using adjacent frequency bands. The analytical derivation of the probability density function can be used to provide the technical criteria for sharing a frequency band. PMID:24453792

  10. The role of presumed probability density functions in the simulation of nonpremixed turbulent combustion

    NASA Astrophysics Data System (ADS)

    Coclite, A.; Pascazio, G.; De Palma, P.; Cutrone, L.

    2016-07-01

    Flamelet-Progress-Variable (FPV) combustion models allow the evaluation of all thermochemical quantities in a reacting flow by computing only the mixture fraction Z and a progress variable C. When using such a method to predict turbulent combustion in conjunction with a turbulence model, a probability density function (PDF) is required to evaluate statistical averages (e.g., Favre averages) of chemical quantities. The choice of the PDF is a compromise between computational cost and accuracy. The aim of this paper is to investigate the influence of the PDF choice and its modeling aspects on the prediction of turbulent combustion. Three different models are considered: the standard one, based on a β-distribution for Z and a Dirac distribution for C; a model employing a β-distribution for both Z and C; and a third model using a β-distribution for Z and the statistically most likely distribution (SMLD) for C. The standard model, although widely used, takes into account neither the interaction between turbulence and chemical kinetics nor the dependence of the progress variable on its variance in addition to its mean. The SMLD approach establishes a systematic framework to incorporate information from an arbitrary number of moments, thus providing an improvement over conventionally employed presumed-PDF closure models. The rationale behind the choice of the three PDFs is described in some detail, and the prediction capability of the corresponding models is tested against well-known test cases, namely the Sandia flames and H2-air supersonic combustion.
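
    As a minimal sketch of the presumed-PDF idea (not the authors' FPV implementation), the Python snippet below averages an arbitrary flamelet quantity over a β-distribution whose shape parameters are recovered from a prescribed mean and variance of Z; the temperature profile and moment values are hypothetical.

        import numpy as np
        from scipy import stats

        def presumed_beta_average(phi, z_mean, z_var, n=2000):
            """Average phi(Z) over a presumed beta PDF with given mean and variance.
            Generic sketch of a presumed-PDF closure; phi stands for any tabulated
            flamelet quantity."""
            # Map (mean, variance) to beta shape parameters a, b.
            g = z_mean * (1.0 - z_mean) / z_var - 1.0
            a, b = z_mean * g, (1.0 - z_mean) * g
            z = np.linspace(1e-6, 1.0 - 1e-6, n)
            w = stats.beta.pdf(z, a, b)
            return np.sum(phi(z) * w) / np.sum(w)

        # Example: average a hypothetical temperature profile T(Z) peaking at Z = 0.3.
        T = lambda z: 300.0 + 1500.0 * np.exp(-((z - 0.3) / 0.1) ** 2)
        print(presumed_beta_average(T, z_mean=0.3, z_var=0.01))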

  11. Model-based prognostics for batteries which estimates useful life and uses a probability density function

    NASA Technical Reports Server (NTRS)

    Saha, Bhaskar (Inventor); Goebel, Kai F. (Inventor)

    2012-01-01

    This invention develops a mathematical model to describe battery behavior during individual discharge cycles as well as over its cycle life. The basis for the form of the model has been linked to the internal processes of the battery and validated using experimental data. Effects of temperature and load current have also been incorporated into the model. Subsequently, the model has been used in a Particle Filtering framework to make predictions of remaining useful life for individual discharge cycles as well as for cycle life. The prediction performance was found to be satisfactory as measured by performance metrics customized for prognostics for a sample case. The work presented here provides initial steps towards a comprehensive health management solution for energy storage devices.

  12. The quiet Sun magnetic field observed with ZIMPOL on THEMIS. I. The probability density function

    NASA Astrophysics Data System (ADS)

    Bommier, V.; Martínez González, M.; Bianda, M.; Frisch, H.; Asensio Ramos, A.; Gelly, B.; Landi Degl'Innocenti, E.

    2009-11-01

    Context: The quiet Sun magnetic field probability density function (PDF) remains poorly known. Modeling this field also introduces a magnetic filling factor that is also poorly known. With these two quantities, PDF and filling factor, the statistical description of the quiet Sun magnetic field is complex and needs to be clarified. Aims: In the present paper, we propose a procedure that combines direct determinations and inversion results to derive the magnetic field vector and filling factor, and their PDFs. Methods: We used spectro-polarimetric observations taken with the ZIMPOL polarimeter mounted on the THEMIS telescope. The target was a quiet region at disk center. We analyzed the data by means of the UNNOFIT inversion code, with which we inferred the distribution of the mean magnetic field α B, α being the magnetic filling factor. The distribution of α was derived by an independent method, directly from the spectro-polarimetric data. The magnetic field PDF p(B) could then be inferred. By introducing a joint PDF for the filling factor and the magnetic field strength, we have clarified the definition of the PDF of the quiet Sun magnetic field when the latter is assumed not to be volume-filling. Results: The most frequent local average magnetic field strength is found to be 13 G. We find that the magnetic filling factor is related to the magnetic field strength by the simple law α = B_1/B with B1 = 15 G. This result is compatible with the Hanle weak-field determinations, as well as with the stronger field determinations from the Zeeman effect (kGauss field filling 1-2% of space). From linear fits, we obtain the analytical dependence of the magnetic field PDF. Our analysis has also revealed that the magnetic field in the quiet Sun is isotropically distributed in direction. Conclusions: We conclude that the quiet Sun is a complex medium where magnetic fields having different field strengths and filling factors coexist. Further observations with a better

  13. A Projection and Density Estimation Method for Knowledge Discovery

    PubMed Central

    Stanski, Adam; Hellwich, Olaf

    2012-01-01

    A key ingredient of modern data analysis is probability density estimation. However, it is well known that the curse of dimensionality prevents a proper estimation of densities in high dimensions. The problem is typically circumvented by using a fixed set of assumptions about the data, e.g., by assuming partial independence of features, data on a manifold, or a customized kernel. These fixed assumptions limit the applicability of a method. In this paper we propose a framework that uses a flexible set of assumptions instead. It allows a model to be tailored to various problems by means of 1d-decompositions. The approach achieves a fast runtime and is not limited by the curse of dimensionality, as all estimations are performed in 1d-space. The wide range of applications is demonstrated on two very different real-world examples. The first is data mining software that allows the fully automatic discovery of patterns; the software is publicly available for evaluation. As a second example, an image segmentation method is realized. It achieves state-of-the-art performance on a benchmark dataset although it uses only a fraction of the training data and very simple features. PMID:23049675
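
    A minimal sketch of the underlying idea, estimating densities only in one-dimensional projections so the curse of dimensionality is avoided, is shown below; the data, the random projection direction, and the use of a Gaussian kernel density estimate are illustrative assumptions, not the authors' full decomposition framework.

        import numpy as np
        from scipy import stats

        # Project high-dimensional data onto one direction, then estimate a 1d density.
        rng = np.random.default_rng(5)
        X = rng.normal(size=(1000, 50))                  # hypothetical 50-d data
        w = rng.normal(size=50); w /= np.linalg.norm(w)  # random projection direction
        proj = X @ w

        kde = stats.gaussian_kde(proj)                   # 1d estimate, no curse of dimensionality
        grid = np.linspace(proj.min(), proj.max(), 5)
        print(kde(grid))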

  14. PDV Uncertainty Estimation & Methods Comparison

    SciTech Connect

    Machorro, E.

    2011-11-01

    Several methods are presented for estimating the rapidly changing instantaneous frequency of a time varying signal that is contaminated by measurement noise. Useful a posteriori error estimates for several methods are verified numerically through Monte Carlo simulation. However, given the sampling rates of modern digitizers, sub-nanosecond variations in velocity are shown to be reliably measurable in most (but not all) cases. Results support the hypothesis that in many PDV regimes of interest, sub-nanosecond resolution can be achieved.

  15. Properties of the probability density function of the non-central chi-squared distribution

    NASA Astrophysics Data System (ADS)

    András, Szilárd; Baricz, Árpád

    2008-10-01

    In this paper we consider the probability density function (pdf) of a non-central χ² distribution with an arbitrary number of degrees of freedom. We prove that this function can be represented as a finite sum, and we deduce a partial derivative formula. Moreover, we show that the pdf is log-concave when the number of degrees of freedom is greater than or equal to 2. At the end of the paper we present some Turán-type inequalities for this function, and an elegant application of the monotone form of l'Hospital's rule in probability theory is given.
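
    The finite-sum representation derived in the paper is specific to its assumptions, but the density itself is easy to evaluate; the sketch below computes the non-central χ² pdf from the standard Poisson-weighted mixture of central χ² densities (series truncated) and compares it against scipy's reference implementation. Parameter values are arbitrary.

        import numpy as np
        from scipy import stats

        def ncx2_pdf_series(x, df, nc, terms=200):
            """Non-central chi-squared PDF via the textbook Poisson-weighted mixture
            of central chi-squared densities (series truncated at `terms`); this is
            not the finite-sum form derived in the paper."""
            j = np.arange(terms)
            weights = stats.poisson.pmf(j, nc / 2.0)
            return np.sum(weights[:, None] * stats.chi2.pdf(x[None, :], df + 2 * j[:, None]), axis=0)

        x = np.linspace(0.1, 30.0, 5)
        print(ncx2_pdf_series(x, df=4, nc=3.0))
        print(stats.ncx2.pdf(x, 4, 3.0))   # reference implementation for comparison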

  16. Eulerian Mapping Closure Approach for Probability Density Function of Concentration in Shear Flows

    NASA Technical Reports Server (NTRS)

    He, Guowei; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    The Eulerian mapping closure approach is developed for uncertainty propagation in computational fluid mechanics. The approach is used to study the Probability Density Function (PDF) for the concentration of species advected by a random shear flow. An analytical argument shows that fluctuation of the concentration field at one point in space is non-Gaussian and exhibits stretched exponential form. An Eulerian mapping approach provides an appropriate approximation to both convection and diffusion terms and leads to a closed mapping equation. The results obtained describe the evolution of the initial Gaussian field, which is in agreement with direct numerical simulations.

  17. Breather turbulence versus soliton turbulence: Rogue waves, probability density functions, and spectral features.

    PubMed

    Akhmediev, N; Soto-Crespo, J M; Devine, N

    2016-08-01

    Turbulence in integrable systems exhibits a noticeable scientific advantage: it can be expressed in terms of the nonlinear modes of these systems. Whether the majority of the excitations in the system are breathers or solitons defines the properties of the turbulent state. In the two extreme cases we can call such states "breather turbulence" or "soliton turbulence." The number of rogue waves, the probability density functions of the chaotic wave fields, and their physical spectra are all specific for each of these two situations. Understanding these extreme cases also helps in studies of mixed turbulent states when the wave field contains both solitons and breathers, thus revealing intermediate characteristics. PMID:27627303

  18. Equations for probability density and for the phase of wave function in quantum mechanics and superconductivity

    SciTech Connect

    Mkrtchyan, A. R.; Hayrapetyan, A. G.; Khachatryan, B. V.; Petrosyan, R. G.; Avakyan, R. M.

    2009-08-15

    A fourth-order linear differential equation is obtained for the probability density in the case of a non-Hermitian Hamiltonian (quasistationary states with complex energy). A third-order nonlinear differential equation for the square of the modulus of the order parameter and for the phase is obtained by making use of the Ginzburg-Landau equations. Three integrals of 'motion' are found in the absence of an external magnetic field, and two integrals are found in its presence. These integrals are analyzed, and new analytical solutions are obtained.

  19. A Delta-Sigma Modulator Using a Non-uniform Quantizer Adjusted for the Probability Density of Input Signals

    NASA Astrophysics Data System (ADS)

    Kitayabu, Toru; Hagiwara, Mao; Ishikawa, Hiroyasu; Shirai, Hiroshi

    A novel delta-sigma modulator is proposed that employs a non-uniform quantizer whose spacing is adjusted by reference to the statistical properties of the input signal. The proposed delta-sigma modulator has less quantization noise than one using a uniform quantizer with the same number of output values. For the quantizer on its own, Lloyd proposed a non-uniform quantizer that minimizes the average quantization noise power, provided the statistical properties of the input signal, i.e., its probability density, are given. However, that procedure cannot be applied directly to the quantizer in a delta-sigma modulator because it jeopardizes the modulator's stability. In this paper, a procedure is proposed that determines the spacing of the quantizer while avoiding instability. Simulation results show that the proposed method reduces quantization noise by up to 3.8 dB and 2.8 dB for input signals having a PAPR of 16 dB and 12 dB, respectively, compared to a modulator employing a uniform quantizer. Two alternative types of probability density function (PDF) are used in the proposed method for the calculation of the output values: the PDF of the input signal to the delta-sigma modulator, and an approximated PDF of the input signal to the quantizer inside the delta-sigma modulator. Evaluation of both approaches shows that the latter gives lower quantization noise.
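
    The quantizer design step referenced above (Lloyd's method, before the stability constraint added by the paper) can be sketched as follows; the Gaussian training signal and the number of levels are assumptions for illustration.

        import numpy as np

        def lloyd_max(samples, levels, iters=100):
            """Plain Lloyd-Max design of a non-uniform scalar quantizer from samples.
            Sketch of the classical algorithm referenced above; it ignores the
            delta-sigma stability constraint that the paper adds on top of it."""
            # Initialize output values from sample quantiles.
            y = np.quantile(samples, np.linspace(0.05, 0.95, levels))
            for _ in range(iters):
                # Decision thresholds are midpoints between adjacent output values.
                t = 0.5 * (y[:-1] + y[1:])
                idx = np.digitize(samples, t)
                # Each output value becomes the centroid (mean) of its cell.
                for k in range(levels):
                    cell = samples[idx == k]
                    if cell.size:
                        y[k] = cell.mean()
            return np.sort(y)

        rng = np.random.default_rng(1)
        x = rng.normal(size=20000)          # stand-in for the modulator input PDF
        print(lloyd_max(x, levels=8))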

  20. A biology-driven receptor model for daily pollen allergy risk in Korea based on Weibull probability density function

    NASA Astrophysics Data System (ADS)

    Kim, Kyu Rang; Kim, Mijin; Choe, Ho-Seong; Han, Mae Ja; Lee, Hye-Rim; Oh, Jae-Won; Kim, Baek-Jo

    2016-07-01

    Pollen is an important cause of respiratory allergic reactions. As individual sanitation has improved, allergy risk has increased, and this trend is expected to continue due to climate change. Atmospheric pollen concentration is highly influenced by weather conditions. Regression analysis and modeling of the relationships between airborne pollen concentrations and weather conditions were performed to analyze and forecast pollen conditions. Traditionally, daily pollen concentration has been estimated using regression models that describe the relationships between observed pollen concentrations and weather conditions. These models were able to forecast daily concentrations at the sites of observation, but lacked broader spatial applicability beyond those sites. To overcome this limitation, an integrated modeling scheme was developed that is designed to represent the underlying processes of pollen production and distribution. A maximum potential for airborne pollen is first determined using the Weibull probability density function. Then, daily pollen concentration is estimated using multiple regression models. Daily risk grade levels are determined based on the risk criteria used in Korea. The mean percentages of agreement between the observed and estimated levels were 81.4-88.2 % and 92.5-98.5 % for oak and Japanese hop pollens, respectively. The new models estimated daily pollen risk more accurately than the original statistical models because of the newly integrated biological response curves. They overestimated the seasonal mean concentration, however, and did not simulate all of the peak concentrations; this issue could be resolved by adding more variables that affect the prevalence and internal maturity of pollen.
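
    A hedged sketch of the Weibull step is given below: it fits a two-parameter Weibull density to a hypothetical sample of daily pollen concentrations using scipy; the data and parameter values are invented, and the paper's full receptor model (regression plus risk grading) is not reproduced.

        import numpy as np
        from scipy import stats

        # Hypothetical daily pollen concentrations (grains/m^3).
        rng = np.random.default_rng(2)
        conc = rng.weibull(a=2.2, size=120) * 80.0

        # Two-parameter Weibull fit (location fixed at zero for non-negative data).
        shape, loc, scale = stats.weibull_min.fit(conc, floc=0)
        print(f"shape k = {shape:.2f}, scale lambda = {scale:.1f}")

        # Evaluate the fitted density over the observed range.
        x = np.linspace(0, conc.max(), 50)
        pdf = stats.weibull_min.pdf(x, shape, loc, scale)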

  1. Ensemble Averaged Probability Density Function (APDF) for Compressible Turbulent Reacting Flows

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Liu, Nan-Suey

    2012-01-01

    In this paper, we present a concept of the averaged probability density function (APDF) for studying compressible turbulent reacting flows. The APDF is defined as an ensemble average of the fine grained probability density function (FG-PDF) with a mass density weighting. It can be used to exactly deduce the mass density weighted, ensemble averaged turbulent mean variables. The transport equation for APDF can be derived in two ways. One is the traditional way that starts from the transport equation of FG-PDF, in which the compressible Navier- Stokes equations are embedded. The resulting transport equation of APDF is then in a traditional form that contains conditional means of all terms from the right hand side of the Navier-Stokes equations except for the chemical reaction term. These conditional means are new unknown quantities that need to be modeled. Another way of deriving the transport equation of APDF is to start directly from the ensemble averaged Navier-Stokes equations. The resulting transport equation of APDF derived from this approach appears in a closed form without any need for additional modeling. The methodology of ensemble averaging presented in this paper can be extended to other averaging procedures: for example, the Reynolds time averaging for statistically steady flow and the Reynolds spatial averaging for statistically homogeneous flow. It can also be extended to a time or spatial filtering procedure to construct the filtered density function (FDF) for the large eddy simulation (LES) of compressible turbulent reacting flows.

  2. Regression approaches to derive generic and fish group-specific probability density functions of bioconcentration factors for metals.

    PubMed

    Tanaka, Taku; Ciffroy, Philippe; Stenberg, Kristofer; Capri, Ettore

    2010-11-01

    In the framework of environmental multimedia modeling studies dedicated to environmental and health risk assessments of chemicals, the bioconcentration factor (BCF) is a parameter commonly used, especially for fish. As for neutral lipophilic substances, it is assumed that BCF is independent of exposure levels of the substances. However, for metals some studies found the inverse relationship between BCF values and aquatic exposure concentrations for various aquatic species and metals, and also high variability in BCF data. To deal with the factors determining BCF for metals, we conducted regression analyses to evaluate the inverse relationships and introduce the concept of probability density function (PDF) for Cd, Cu, Zn, Pb, and As. In the present study, for building the regression model and derive the PDF of fish BCF, two statistical approaches are applied: ordinary regression analysis to estimate a regression model that does not consider the variation in data across different fish family groups; and hierarchical Bayesian regression analysis to estimate fish group-specific regression models. The results show that the BCF ranges and PDFs estimated for metals by both statistical approaches have less uncertainty than the variation of collected BCF data (the uncertainty is reduced by 9%-61%), and thus such PDFs proved to be useful to obtain accurate model predictions for environmental and health risk assessment concerning metals. PMID:20886641

  3. Comparison of two ways for representation of the forecast probability density function in ensemble-based sequential data assimilation

    NASA Astrophysics Data System (ADS)

    Nakano, Shinya

    2013-04-01

    In ensemble-based sequential data assimilation, the probability density function (PDF) at each time step is represented by ensemble members. These ensemble members are usually assumed to be Monte Carlo samples drawn from the PDF, and the probability density is associated with the concentration of the ensemble members. On the basis of the Monte Carlo approximation, the forecast ensemble, which is obtained by applying the dynamical model to each ensemble member, provides an approximation of the forecast PDF via the Chapman-Kolmogorov integral. In practical cases, however, the ensemble size is limited by the available computational resources and is typically much smaller than the system dimension. In such situations, the Monte Carlo approximation does not work well. When the ensemble size is smaller than the system dimension, the ensemble forms a simplex in a subspace. The simplex cannot represent the third- or higher-order moments of the PDF; it can represent only the Gaussian features of the PDF. As noted by Wang et al. (2004), the forecast ensemble obtained by applying the dynamical model to each member of the simplex ensemble provides an approximation of the mean and covariance of the forecast PDF in which the Taylor expansion of the dynamical model up to second order is considered, except that the uncertainties that cannot be represented by the ensemble members are ignored. Since the third- and higher-order nonlinearity is discarded, the forecast ensemble introduces some bias into the forecast. Using a small nonlinear model, the Lorenz 63 model, we also performed state-estimation experiments with both the simplex representation and the Monte Carlo representation, which correspond to the limited-size ensemble case and the large-size ensemble case, respectively. If we use the simplex representation, it is found that the estimates tend to have some bias which is likely to be caused by the nonlinearity of the system rather

  4. Methods for Cloud Cover Estimation

    NASA Technical Reports Server (NTRS)

    Glackin, D. L.; Huning, J. R.; Smith, J. H.; Logan, T. L.

    1984-01-01

    Several methods for cloud cover estimation are described relevant to assessing the performance of a ground-based network of solar observatories. The methods rely on ground and satellite data sources and provide meteorological or climatological information. One means of acquiring long-term observations of solar oscillations is the establishment of a ground-based network of solar observatories. Criteria for station site selection are: gross cloudiness, accurate transparency information, and seeing. Alternative methods for computing this duty cycle are discussed. The cycle, or alternatively a time history of solar visibility from the network, can then be input to a model to determine the effect of duty cycle on derived solar seismology parameters. Cloudiness from space is studied to examine various means by which the duty cycle might be computed. Cloudiness, and to some extent transparency, can potentially be estimated from satellite data.

  5. Computing approximate random Delta v magnitude probability densities. [for spacecraft trajectory correction

    NASA Technical Reports Server (NTRS)

    Chadwick, C.

    1984-01-01

    This paper describes the development and use of an algorithm to compute approximate statistics of the magnitude of a single random trajectory correction maneuver (TCM) Delta v vector. The TCM Delta v vector is modeled as a three-component Cartesian vector, each of whose components is a random variable having a normal (Gaussian) distribution with zero mean and possibly unequal standard deviations. The algorithm uses these standard deviations as input to produce approximations to (1) the mean and standard deviation of the magnitude of Delta v, (2) points of the probability density function of the magnitude of Delta v, and (3) points of the cumulative and inverse cumulative distribution functions of Delta v. The approximations are based on Monte Carlo techniques developed in a previous paper by the author and extended here. The algorithm described is expected to be useful in both pre-flight planning and in-flight analysis of maneuver propellant requirements for space missions.
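
    A minimal Monte Carlo sketch in Python of the quantity being approximated, the magnitude of a zero-mean Gaussian Delta-v vector with unequal per-axis standard deviations, is shown below; it is not the paper's algorithm, and the standard deviations are hypothetical.

        import numpy as np

        def delta_v_magnitude_stats(sigmas, n=200_000, seed=0):
            """Monte Carlo approximation of the magnitude of a zero-mean Gaussian
            Delta-v vector with per-axis standard deviations `sigmas` (a sketch of
            this kind of approximation, not the paper's algorithm)."""
            rng = np.random.default_rng(seed)
            v = rng.normal(scale=sigmas, size=(n, 3))
            mag = np.linalg.norm(v, axis=1)
            return mag.mean(), mag.std(), np.percentile(mag, [50, 90, 99])

        mean, std, pct = delta_v_magnitude_stats(sigmas=np.array([1.0, 1.0, 0.5]))
        print(mean, std, pct)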

  6. Probability density function formalism for optical coherence tomography signal analysis: a controlled phantom study.

    PubMed

    Weatherbee, Andrew; Sugita, Mitsuro; Bizheva, Kostadinka; Popov, Ivan; Vitkin, Alex

    2016-06-15

    The distribution of backscattered intensities as described by the probability density function (PDF) of tissue-scattered light contains information that may be useful for tissue assessment and diagnosis, including characterization of its pathology. In this Letter, we examine the PDF description of the light scattering statistics in a well characterized tissue-like particulate medium using optical coherence tomography (OCT). It is shown that for low scatterer density, the governing statistics depart considerably from a Gaussian description and follow the K distribution for both OCT amplitude and intensity. The PDF formalism is shown to be independent of the scatterer flow conditions; this is expected from theory, and suggests robustness and motion independence of the OCT amplitude (and OCT intensity) PDF metrics in the context of potential biomedical applications. PMID:27304274

  7. Pharmacokinetic parameter estimations by minimum relative entropy method.

    PubMed

    Amisaki, T; Eguchi, S

    1995-10-01

    For estimating pharmacokinetic parameters, we introduce the minimum relative entropy (MRE) method and compare its performance with least squares methods. There are several variants of least squares, such as ordinary least squares (OLS), weighted least squares, and iteratively reweighted least squares. In addition to these traditional methods, even extended least squares (ELS), a relatively new approach to nonlinear regression analysis, can be regarded as a variant of least squares. These methods differ from each other in their manner of handling weights. It has been recognized that least squares methods with an inadequate weighting scheme may give misleading results (the "choice of weights" problem). Although least squares with uniform weights, i.e., OLS, is rarely used in pharmacokinetic analysis, it embodies the principle of least squares. The objective function of OLS can be regarded as a distance between observed and theoretical pharmacokinetic values in the Euclidean space R^N, where N is the number of observations; thus OLS produces its estimates by minimizing the Euclidean distance. MRE, on the other hand, works by minimizing the relative entropy, which expresses the discrepancy between two probability densities. Because pharmacokinetic functions are not density functions in general, we use a particular form of the relative entropy whose domain is extended to the space of all positive functions. MRE never assumes any distribution for the errors involved in the observations, so it offers a possible solution to the choice-of-weights problem. Moreover, since the mathematical form of the relative entropy, i.e., an expectation of the log-ratio of two probability density functions, differs from that of a Euclidean distance, the behavior of MRE may differ from that of the least squares methods. To clarify the behavior of MRE, we have compared the performance of MRE with those of ELS and OLS by carrying out an intensive simulation study, where four pharmaco
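
    To make the contrast with least squares concrete, the sketch below minimizes a relative-entropy-type objective, extended to positive (non-normalized) functions, for a hypothetical one-compartment model; the model, data, and discretized objective are illustrative assumptions rather than the paper's exact formulation.

        import numpy as np
        from scipy.optimize import minimize

        def one_compartment(t, dose, V, k):
            """Hypothetical one-compartment bolus model C(t) = (dose/V) * exp(-k t)."""
            return (dose / V) * np.exp(-k * t)

        def generalized_kl(obs, pred):
            """Relative entropy extended to positive (non-normalized) functions,
            sum[f*log(f/g) - f + g], discretized at the sampling times."""
            return np.sum(obs * np.log(obs / pred) - obs + pred)

        t = np.array([0.5, 1, 2, 4, 8, 12.0])
        obs = np.array([9.1, 8.0, 6.3, 3.9, 1.6, 0.7])   # hypothetical concentrations

        res = minimize(lambda p: generalized_kl(obs, one_compartment(t, 100.0, *p)),
                       x0=[10.0, 0.2], bounds=[(1e-3, None), (1e-3, None)])
        print("V =", res.x[0], "k =", res.x[1])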

  8. Translating CFC-based piston ages into probability density functions of ground-water age in karst

    USGS Publications Warehouse

    Long, A.J.; Putnam, L.D.

    2006-01-01

    Temporal age distributions are equivalent to probability density functions (PDFs) of transit time. The type and shape of a PDF provides important information related to ground-water mixing at the well or spring and the complex nature of flow networks in karst aquifers. Chlorofluorocarbon (CFC) concentrations measured for samples from 12 locations in the karstic Madison aquifer were used to evaluate the suitability of various PDF types for this aquifer. Parameters of PDFs could not be estimated within acceptable confidence intervals for any of the individual sites. Therefore, metrics derived from CFC-based apparent ages were used to evaluate results of PDF modeling in a more general approach. The ranges of these metrics were established as criteria against which families of PDFs could be evaluated for their applicability to different parts of the aquifer. Seven PDF types, including five unimodal and two bimodal models, were evaluated. Model results indicate that unimodal models may be applicable to areas close to conduits that have younger piston (i.e., apparent) ages and that bimodal models probably are applicable to areas farther from conduits that have older piston ages. The two components of a bimodal PDF are interpreted as representing conduit and diffuse flow, and transit times of as much as two decades may separate these PDF components. Areas near conduits may be dominated by conduit flow, whereas areas farther from conduits having bimodal distributions probably have good hydraulic connection to both diffuse and conduit flow. © 2006 Elsevier B.V. All rights reserved.

  9. On the Evolution of the Density Probability Density Function in Strongly Self-gravitating Systems

    NASA Astrophysics Data System (ADS)

    Girichidis, Philipp; Konstandin, Lukas; Whitworth, Anthony P.; Klessen, Ralf S.

    2014-02-01

    The time evolution of the probability density function (PDF) of the mass density is formulated and solved for systems in free-fall using a simple approximate function for the collapse of a sphere. We demonstrate that a pressure-free collapse results in a power-law tail on the high-density side of the PDF. The slope quickly asymptotes to the functional form P_V(ρ) ∝ ρ^(-1.54) for the (volume-weighted) PDF and P_M(ρ) ∝ ρ^(-0.54) for the corresponding mass-weighted distribution. From the simple approximation of the PDF we derive analytic descriptions for mass accretion, finding that dynamically quiet systems with narrow density PDFs lead to retarded star formation and low star formation rates (SFRs). Conversely, strong turbulent motions that broaden the PDF accelerate the collapse causing a bursting mode of star formation. Finally, we compare our theoretical work with observations. The measured SFRs are consistent with our model during the early phases of the collapse. Comparison of observed column density PDFs with those derived from our model suggests that observed star-forming cores are roughly in free-fall.

  10. Probability density function of the intensity of a laser beam propagating in the maritime environment.

    PubMed

    Korotkova, Olga; Avramov-Zamurovic, Svetlana; Malek-Madani, Reza; Nelson, Charles

    2011-10-10

    A number of field experiments measuring the fluctuating intensity of a laser beam propagating along horizontal paths in the maritime environment were performed over sub-kilometer distances at the United States Naval Academy. Both above-ground and over-water links are explored. Two different detection schemes, one photographing the beam on a white board and the other capturing the beam directly with a CCD sensor, gave consistent results. The probability density function (pdf) of the fluctuating intensity is reconstructed with the help of two theoretical models, the Gamma-Gamma and the Gamma-Laguerre, and compared with the intensity histograms. The on-ground experimental results are found to be in good agreement with theoretical predictions. The results obtained over the water paths show appreciable discrepancies, especially in the case of the Gamma-Gamma model. These discrepancies are attributed to the presence of various scatterers along the path of the beam, such as water droplets, aerosols and other airborne particles. Our paper's main contribution is a methodology for computing the pdf of the laser beam intensity in the maritime environment using field measurements. PMID:21997043
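
    For reference, the widely used Gamma-Gamma pdf of normalized irradiance (one of the two models compared above) can be evaluated directly with scipy's modified Bessel function; the scintillation parameters below are arbitrary and the Gamma-Laguerre model is not reproduced here.

        import numpy as np
        from scipy.special import kv, gamma

        def gamma_gamma_pdf(I, a, b):
            """Gamma-Gamma PDF of normalized irradiance I (unit mean), with large-
            and small-scale scintillation parameters a and b (standard textbook form)."""
            I = np.asarray(I, dtype=float)
            coeff = 2.0 * (a * b) ** ((a + b) / 2.0) / (gamma(a) * gamma(b))
            return coeff * I ** ((a + b) / 2.0 - 1.0) * kv(a - b, 2.0 * np.sqrt(a * b * I))

        I = np.linspace(0.05, 3.0, 5)
        print(gamma_gamma_pdf(I, a=4.0, b=2.0))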

  11. Models for the probability densities of the turbulent plasma flux in magnetized plasmas

    NASA Astrophysics Data System (ADS)

    Bergsaker, A. S.; Fredriksen, Å; Pécseli, H. L.; Trulsen, J. K.

    2015-10-01

    Observations of turbulent transport in magnetized plasmas indicate that plasma losses can be due to coherent structures or bursts of plasma rather than a classical random walk or diffusion process. A model for synthetic data based on coherent plasma flux events is proposed, where all basic properties can be obtained analytically in terms of a few control parameters. One basic parameter in the present case is the density of burst events in a long time-record, together with parameters in a model of the individual pulse shapes and the statistical distribution of these parameters. The model and its extensions give the probability density of the plasma flux. An interesting property of the model is a prediction of a near-parabolic relation between skewness and kurtosis of the statistical flux distribution for a wide range of parameters. The model is generalized by allowing for an additive random noise component. When this noise dominates the signal we can find a transition to standard results for Gaussian random noise. Applications of the model are illustrated by data from the toroidal Blaamann plasma.

  12. Probability density of intensity fluctuations for laser beams disturbed by turbulent aero-engine exhaust

    NASA Astrophysics Data System (ADS)

    Ivanova, I. V.; Dmitriev, D. I.; Sirazetdinov, V. S.

    2007-02-01

    In this paper we analyze results of field and numerical experiments on the probability density of on-axis intensity fluctuations for 1.06 μm and 0.53 μm laser beams, in comparison with theoretical distributions (lognormal, exponential and K-distribution). The beams were propagated through an aviation engine exhaust at various angles between the jet and beam axes. It is shown that for the 0.53 μm beam the experimental data can be approximated by both the exponential and the K-distribution, while for the 1.06 μm radiation good agreement with the K-distribution is observed. Optimum conditions were chosen for recording images of turbulence-distorted laser beams with CCD cameras. For this purpose, the transfer characteristics of several CCD cameras of the same type were studied under various irradiation modes and registration settings. It is shown that the dynamic range of the cameras is used to maximum capacity for image recording when gamma correction is applied.

  13. On the evolution of the density probability density function in strongly self-gravitating systems

    SciTech Connect

    Girichidis, Philipp; Konstandin, Lukas; Klessen, Ralf S.; Whitworth, Anthony P.

    2014-02-01

    The time evolution of the probability density function (PDF) of the mass density is formulated and solved for systems in free-fall using a simple approximate function for the collapse of a sphere. We demonstrate that a pressure-free collapse results in a power-law tail on the high-density side of the PDF. The slope quickly asymptotes to the functional form P_V(ρ) ∝ ρ^(-1.54) for the (volume-weighted) PDF and P_M(ρ) ∝ ρ^(-0.54) for the corresponding mass-weighted distribution. From the simple approximation of the PDF we derive analytic descriptions for mass accretion, finding that dynamically quiet systems with narrow density PDFs lead to retarded star formation and low star formation rates (SFRs). Conversely, strong turbulent motions that broaden the PDF accelerate the collapse causing a bursting mode of star formation. Finally, we compare our theoretical work with observations. The measured SFRs are consistent with our model during the early phases of the collapse. Comparison of observed column density PDFs with those derived from our model suggests that observed star-forming cores are roughly in free-fall.

  14. Homogeneous clusters over India using probability density function of daily rainfall

    NASA Astrophysics Data System (ADS)

    Kulkarni, Ashwini

    2016-04-01

    The Indian landmass has been divided into homogeneous clusters by applying cluster analysis to the probability density function of a century-long time series of daily summer monsoon (June through September) rainfall at 357 grid points over India, each of approximately 100 km × 100 km. The analysis gives five clusters over the Indian landmass; only cluster 5 is a contiguous region, and all other clusters are spatially dispersed, which confirms the erratic behavior of daily rainfall over India. The area-averaged seasonal rainfall over cluster 5 has a very strong relationship with Indian summer monsoon rainfall; the rainfall variability over this region is also modulated by the most important mode of the climate system, the El Niño-Southern Oscillation (ENSO). This cluster could therefore be considered representative of the entire Indian landmass for examining monsoon variability. The two-sample Kolmogorov-Smirnov test supports that the cumulative distribution functions of daily rainfall over cluster 5 and over India as a whole do not differ significantly. The clustering algorithm is also applied to two time epochs, 1901-1975 and 1976-2010, to examine possible changes in the clusters during the recent warming period. The clusters are drastically different in the two periods; they are more dispersed in the recent period, implying a more erratic distribution of daily rainfall in recent decades.
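
    The two-sample Kolmogorov-Smirnov comparison mentioned above is straightforward to reproduce in outline; the gamma-distributed rainfall samples below are invented stand-ins for the cluster 5 and all-India daily rainfall series.

        import numpy as np
        from scipy import stats

        # Hypothetical daily-rainfall samples (mm/day) for cluster 5 and all-India.
        rng = np.random.default_rng(3)
        cluster5 = rng.gamma(shape=0.8, scale=10.0, size=5000)
        all_india = rng.gamma(shape=0.8, scale=10.5, size=5000)

        stat, p = stats.ks_2samp(cluster5, all_india)
        print(f"KS statistic = {stat:.3f}, p-value = {p:.3f}")
        # A large p-value means the two empirical CDFs are statistically indistinguishable.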

  15. Comparison of Fatigue Life Estimation Using Equivalent Linearization and Time Domain Simulation Methods

    NASA Technical Reports Server (NTRS)

    Mei, Chuh; Dhainaut, Jean-Michel

    2000-01-01

    The Monte Carlo simulation method, in conjunction with the finite element large-deflection modal formulation, is used to estimate the fatigue life of aircraft panels subjected to stationary Gaussian band-limited white-noise excitations. Ten loading cases varying from 106 dB to 160 dB OASPL with a bandwidth of 1024 Hz are considered. For each load case, response statistics are obtained from an ensemble of 10 response time histories. The finite element nonlinear modal procedure yields time histories, probability density functions (PDFs), power spectral densities, and higher statistical moments of the maximum deflection and stress/strain. The method of moments of the PSD with Dirlik's approach is employed to estimate the panel fatigue life.

  16. On recontamination and directional-bias problems in Monte Carlo simulation of PDF turbulence models. [probability density function

    NASA Technical Reports Server (NTRS)

    Hsu, Andrew T.

    1992-01-01

    Turbulent combustion cannot be simulated adequately by conventional moment-closure turbulence models. The probability density function (PDF) method offers an attractive alternative: in a PDF model, the chemical source terms are closed and do not require additional models. Because the number of computational operations grows only linearly in the Monte Carlo scheme, it is chosen over finite differencing schemes. A grid-dependent Monte Carlo scheme following J.Y. Chen and W. Kollmann has been studied in the present work. It was found that, in order to conserve the mass fractions absolutely, one needs to add a further restriction to the scheme, namely α_j + γ_j = α_(j-1) + γ_(j+1). A new algorithm was devised that satisfies this restriction in the case of pure diffusion or uniform flow problems. Using examples, it is shown that absolute conservation can be achieved. Although absolute conservation seems impossible for non-uniform flows, the present scheme reduces the error considerably.

  17. The probability density function in molecular gas in the G333 and Vela C molecular clouds

    NASA Astrophysics Data System (ADS)

    Cunningham, Maria

    2015-08-01

    The probability density function (PDF) is a simple analytical tool for determining the hierarchical spatial structure of molecular clouds. It has been used frequently in recent years with dust continuum emission, such as that from the Herschel space telescope and ALMA. These dust column density PDFs universally show a log-normal distribution in low column density gas, characteristic of unbound turbulent gas, and a power-law tail at high column densities, indicating the presence of gravitationally bound gas. We have recently conducted a PDF analysis of the molecular gas in the G333 and Vela C giant molecular cloud complexes, using transitions of CO, HCN, HNC, HCO+ and N2H+.The results show that CO and its isotopologues trace mostly the log-normal part of the PDF, while HCN and HCO+ trace both a log-normal part and a power law part to the distribution. On the other hand, HNC and N2H+ mostly trace only the power law tail. The difference between the PDFs of HCN and HNC is surprising, as is the similarity between HNC and the N2H+ PDFs. The most likely explanation for the similar distributions of HNC and N2H+ is that N2H+ is known to be enhanced in cool gas below 20K, where CO is depleted, while the reaction that forms HNC or HCN favours the former at similar low temperatures. The lack of evidence for a power law tail in 13CO and C18O, in conjunction for the results for the N2H+ PDF suggest that depletion of CO in the dense cores of these molecular clouds is significant. In conclusion, the PDF has proved to be a surprisingly useful tool for investigating not only the spatial distribution of molecular gas, but also the wide scale chemistry of molecular clouds.

  18. Sparse representation of photometric redshift probability density functions: preparing for petascale astronomy

    NASA Astrophysics Data System (ADS)

    Carrasco Kind, Matias; Brunner, Robert J.

    2014-07-01

    One of the consequences of entering the era of precision cosmology is the widespread adoption of photometric redshift probability density functions (PDFs). Both current and future photometric surveys are expected to obtain images of billions of distinct galaxies. As a result, storing and analysing all of these PDFs will be non-trivial and even more severe if a survey plans to compute and store multiple different PDFs. In this paper we propose the use of a sparse basis representation to fully represent individual photo-z PDFs. By using an orthogonal matching pursuit algorithm and a combination of Gaussian and Voigt basis functions, we demonstrate how our approach is superior to a multi-Gaussian fitting, as we require approximately half of the parameters for the same fitting accuracy with the additional advantage that an entire PDF can be stored by using a 4-byte integer per basis function, and we can achieve better accuracy by increasing the number of bases. By using data from the Canada-France-Hawaii Telescope Lensing Survey, we demonstrate that only 10-20 points per galaxy are sufficient to reconstruct both the individual PDFs and the ensemble redshift distribution, N(z), to an accuracy of 99.9 per cent when compared to the one built using the original PDFs computed with a resolution of δz = 0.01, reducing the required storage of 200 original values by a factor of 10-20. Finally, we demonstrate how this basis representation can be directly extended to a cosmological analysis, thereby increasing computational performance without losing resolution nor accuracy.
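
    A toy version of the sparse-representation step can be written with scikit-learn's orthogonal matching pursuit; the redshift grid, Gaussian-only dictionary, and the bimodal example PDF are assumptions for illustration (the paper also uses Voigt bases and integer-quantized coefficients).

        import numpy as np
        from sklearn.linear_model import OrthogonalMatchingPursuit

        # Dictionary of Gaussian basis functions on a redshift grid.
        z = np.linspace(0.0, 2.0, 200)
        centers = np.linspace(0.0, 2.0, 50)
        widths = [0.02, 0.05, 0.1]
        D = np.column_stack([np.exp(-0.5 * ((z - c) / w) ** 2)
                             for w in widths for c in centers])

        # A toy bimodal photo-z PDF to compress.
        pdf = 0.7 * np.exp(-0.5 * ((z - 0.4) / 0.05) ** 2) \
            + 0.3 * np.exp(-0.5 * ((z - 1.1) / 0.08) ** 2)

        # Keep only ~15 dictionary coefficients, in the spirit of the sparse representation.
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=15).fit(D, pdf)
        recon = omp.predict(D)
        print("nonzero coefficients:", np.count_nonzero(omp.coef_))
        print("max reconstruction error:", np.abs(recon - pdf).max())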

  19. Using the probability density function of radar reflectivity to identify precipitation in thunderstorms

    NASA Astrophysics Data System (ADS)

    Diop, C. A.

    2009-09-01

    In many studies discussing the statistical characterization of the rain rate, most authors have found that the probability density function (PDF) of the rain rate follows a lognormal law. However, a more careful analysis of the PDF of the radar reflectivity Z suggests that it is in fact a mixture of distributions. The purpose of this work is to identify the precipitation types that can coexist in a continental thunderstorm from the PDF of the radar reflectivity. The data used come from the NEXRAD S-band radar network, notably the level II database. The PDF is computed from reflectivities ranging from -10 dBZ to 70 dBZ. We find that the total distribution is a mixture of several populations, fitted by several Gaussian distributions with known parameters: the mean, the standard deviation and the proportion of each component in the mixture. Since rainfall is a sum of its parts and is composed of hydrometeors of various sizes, these statistical findings are in accordance with the physical properties of the precipitation. Each component of the mixture is then tentatively attributed to a physical type of precipitation. The first distribution, with low reflectivities, is assumed to represent the background of small-sized particles. The second component, centred around medium Z, corresponds to stratiform rain; the third population, located at larger Z, is due to heavy rain. Finally, a fourth population is present for hail.
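
    The mixture decomposition described above can be sketched with a standard Gaussian mixture fit; the reflectivity sample below is synthetic and the component means, widths and weights are invented, so the printout only illustrates the kind of output obtained.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        # Synthetic reflectivity sample (dBZ) mixing a weak-echo background,
        # stratiform rain and convective rain, in the spirit of the analysis above.
        rng = np.random.default_rng(4)
        dbz = np.concatenate([rng.normal(5, 6, 4000),
                              rng.normal(28, 5, 3000),
                              rng.normal(45, 4, 1000)]).reshape(-1, 1)

        gmm = GaussianMixture(n_components=3, random_state=0).fit(dbz)
        for w, m, v in zip(gmm.weights_, gmm.means_.ravel(), gmm.covariances_.ravel()):
            print(f"weight {w:.2f}  mean {m:5.1f} dBZ  std {np.sqrt(v):4.1f} dBZ")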

  20. Nonparametric estimation of plant density by the distance method

    USGS Publications Warehouse

    Patil, S.A.; Burnham, K.P.; Kovner, J.L.

    1979-01-01

    A relation between the plant density and the probability density function of the nearest neighbor distance (squared) from a random point is established under fairly broad conditions. Based upon this relationship, a nonparametric estimator for the plant density is developed and presented in terms of order statistics. Consistency and asymptotic normality of the estimator are discussed. An interval estimator for the density is obtained. The modifications of this estimator and its variance are given when the distribution is truncated. Simulation results are presented for regular, random and aggregated populations to illustrate the nonparametric estimator and its variance. A numerical example from field data is given. Merits and deficiencies of the estimator are discussed with regard to its robustness and variance.
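
    For orientation, the simple parametric counterpart of the distance method is shown below: under complete spatial randomness the squared point-to-nearest-plant distances are exponential, giving a closed-form maximum-likelihood density estimate. This is only a baseline sketch with invented distances, not the nonparametric order-statistics estimator developed in the paper.

        import numpy as np

        def poisson_density_estimate(distances):
            """Maximum-likelihood plant density under complete spatial randomness,
            using squared point-to-nearest-plant distances (simple parametric
            baseline, not the paper's nonparametric estimator)."""
            d2 = np.asarray(distances) ** 2
            return len(d2) / (np.pi * d2.sum())

        # Hypothetical nearest-neighbor distances (m) from random sample points.
        print(poisson_density_estimate([1.2, 0.8, 2.1, 0.5, 1.7, 0.9]))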

  1. Research on Parameter Estimation Methods for Alpha Stable Noise in a Laser Gyroscope’s Random Error

    PubMed Central

    Wang, Xueyun; Li, Kui; Gao, Pengyu; Meng, Suxia

    2015-01-01

    Alpha stable noise, determined by four parameters, has been found in the random error of a laser gyroscope. Accurate estimation of the four parameters is the key step in analyzing the properties of alpha stable noise. Three widely used estimation methods—the quantile, empirical characteristic function (ECF) and logarithmic moment methods—are analyzed and compared by Monte Carlo simulation in this paper. The estimation accuracy and the application conditions of all methods, as well as the causes of poor estimation accuracy, are illustrated. Finally, the highest-precision method, ECF, is applied to 27 groups of experimental data to estimate the parameters of alpha stable noise in a laser gyroscope's random error. The cumulative probability density curve of the experimental data is fitted better by an alpha stable distribution than by a Gaussian distribution, which verifies the existence of alpha stable noise in a laser gyroscope's random error. PMID:26230698
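
    A stripped-down version of the ECF idea is sketched below for the symmetric, centered case, where log(-log|phi(t)|) is linear in log|t| with slope alpha; it is a simplified illustration, not the full ECF procedure evaluated in the paper, and the sample is synthetic.

        import numpy as np
        from scipy import stats

        def ecf_alpha_gamma(x, t=(0.1, 0.2, 0.5, 1.0)):
            """Simplified empirical-characteristic-function estimate of the stability
            index alpha and scale gamma for a symmetric, centered alpha-stable sample:
            log(-log|phi(t)|) = alpha*log(gamma) + alpha*log|t| is fitted by least squares."""
            t = np.asarray(t)
            phi = np.array([np.mean(np.exp(1j * ti * x)) for ti in t])
            y = np.log(-np.log(np.abs(phi)))
            slope, intercept = np.polyfit(np.log(t), y, 1)
            alpha = slope
            gamma = np.exp(intercept / alpha)
            return alpha, gamma

        # Check on synthetic symmetric alpha-stable noise (alpha = 1.6, scale = 1).
        samples = stats.levy_stable.rvs(alpha=1.6, beta=0.0, size=20000, random_state=0)
        print(ecf_alpha_gamma(samples))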

  2. EUPDF: Eulerian Monte Carlo Probability Density Function Solver for Applications With Parallel Computing, Unstructured Grids, and Sprays

    NASA Technical Reports Server (NTRS)

    Raju, M. S.

    1998-01-01

    The success of any solution methodology used in the study of gas-turbine combustor flows depends a great deal on how well it can model the various complex and rate controlling processes associated with the spray's turbulent transport, mixing, chemical kinetics, evaporation, and spreading rates, as well as convective and radiative heat transfer and other phenomena. The phenomena to be modeled, which are controlled by these processes, often strongly interact with each other at different times and locations. In particular, turbulence plays an important role in determining the rates of mass and heat transfer, chemical reactions, and evaporation in many practical combustion devices. The influence of turbulence in a diffusion flame manifests itself in several forms, ranging from the so-called wrinkled, or stretched, flamelets regime to the distributed combustion regime, depending upon how turbulence interacts with various flame scales. Conventional turbulence models have difficulty treating highly nonlinear reaction rates. A solution procedure based on the composition joint probability density function (PDF) approach holds the promise of modeling various important combustion phenomena relevant to practical combustion devices (such as extinction, blowoff limits, and emissions predictions) because it can account for nonlinear chemical reaction rates without making approximations. In an attempt to advance the state-of-the-art in multidimensional numerical methods, we at the NASA Lewis Research Center extended our previous work on the PDF method to unstructured grids, parallel computing, and sprays. EUPDF, which was developed by M.S. Raju of Nyma, Inc., was designed to be massively parallel and could easily be coupled with any existing gas-phase and/or spray solvers. EUPDF can use an unstructured mesh with mixed triangular, quadrilateral, and/or tetrahedral elements. The application of the PDF method showed favorable results when applied to several supersonic

  3. Kinetic and dynamic probability-density-function descriptions of disperse turbulent two-phase flows

    NASA Astrophysics Data System (ADS)

    Minier, Jean-Pierre; Profeta, Christophe

    2015-11-01

    This article analyzes the status of two classical one-particle probability density function (PDF) descriptions of the dynamics of discrete particles dispersed in turbulent flows. The first PDF formulation considers only the process made up by particle position and velocity Z_p = (x_p, U_p) and is represented by its PDF p(t; y_p, V_p) which is the solution of a kinetic PDF equation obtained through a flux closure based on the Furutsu-Novikov theorem. The second PDF formulation includes fluid variables into the particle state vector, for example, the fluid velocity seen by particles Z_p = (x_p, U_p, U_s), and, consequently, handles an extended PDF p(t; y_p, V_p, V_s) which is the solution of a dynamic PDF equation. For high-Reynolds-number fluid flows, a typical formulation of the latter category relies on a Langevin model for the trajectories of the fluid seen or, conversely, on a Fokker-Planck equation for the extended PDF. In the present work, a new derivation of the kinetic PDF equation is worked out and new physical expressions of the dispersion tensors entering the kinetic PDF equation are obtained by starting from the extended PDF and integrating over the fluid seen. This demonstrates that, under the same assumption of a Gaussian colored noise and irrespective of the specific stochastic model chosen for the fluid seen, the kinetic PDF description is the marginal of a dynamic PDF one. However, a detailed analysis reveals that kinetic PDF models of particle dynamics in turbulent flows described by statistical correlations constitute incomplete stand-alone PDF descriptions and, moreover, that present kinetic-PDF equations are mathematically ill posed. This is shown to be the consequence of the non-Markovian characteristic of the stochastic process retained to describe the system and the use of an external colored noise. Furthermore, developments bring out that well-posed PDF descriptions are essentially due to a proper choice of the variables selected to describe physical systems

  4. Kinetic and dynamic probability-density-function descriptions of disperse turbulent two-phase flows.

    PubMed

    Minier, Jean-Pierre; Profeta, Christophe

    2015-11-01

    This article analyzes the status of two classical one-particle probability density function (PDF) descriptions of the dynamics of discrete particles dispersed in turbulent flows. The first PDF formulation considers only the process made up by particle position and velocity Z(p)=(x(p),U(p)) and is represented by its PDF p(t; y(p),V(p)) which is the solution of a kinetic PDF equation obtained through a flux closure based on the Furutsu-Novikov theorem. The second PDF formulation includes fluid variables into the particle state vector, for example, the fluid velocity seen by particles Z(p)=(x(p),U(p),U(s)), and, consequently, handles an extended PDF p(t; y(p),V(p),V(s)) which is the solution of a dynamic PDF equation. For high-Reynolds-number fluid flows, a typical formulation of the latter category relies on a Langevin model for the trajectories of the fluid seen or, conversely, on a Fokker-Planck equation for the extended PDF. In the present work, a new derivation of the kinetic PDF equation is worked out and new physical expressions of the dispersion tensors entering the kinetic PDF equation are obtained by starting from the extended PDF and integrating over the fluid seen. This demonstrates that, under the same assumption of a Gaussian colored noise and irrespective of the specific stochastic model chosen for the fluid seen, the kinetic PDF description is the marginal of a dynamic PDF one. However, a detailed analysis reveals that kinetic PDF models of particle dynamics in turbulent flows described by statistical correlations constitute incomplete stand-alone PDF descriptions and, moreover, that present kinetic-PDF equations are mathematically ill posed. This is shown to be the consequence of the non-Markovian characteristic of the stochastic process retained to describe the system and the use of an external colored noise. Furthermore, developments bring out that well-posed PDF descriptions are essentially due to a proper choice of the variables selected to

  5. Accuracy of the non-relativistic approximation to relativistic probability densities for a low-speed weak-gravity system

    NASA Astrophysics Data System (ADS)

    Liang, Shiuan-Ni; Lan, Boon Leong

    2015-11-01

    The Newtonian and general-relativistic position and velocity probability densities, which are calculated from the same initial Gaussian ensemble of trajectories using the same system parameters, are compared for a low-speed weak-gravity bouncing ball system. The Newtonian approximation to the general-relativistic probability densities does not always break down rapidly if the trajectories in the ensembles are chaotic -- the rapid breakdown occurs only if the initial position and velocity standard deviations are sufficiently small. This result is in contrast to the previously studied single-trajectory case where the Newtonian approximation to a general-relativistic trajectory will always break down rapidly if the two trajectories are chaotic. Similar rapid breakdown of the Newtonian approximation to the general-relativistic probability densities should also occur for other low-speed weak-gravity chaotic systems since it is due to sensitivity to the small difference between the two dynamical theories at low speed and weak gravity. For the bouncing ball system, the breakdown of the Newtonian approximation is transient because the Newtonian and general-relativistic probability densities eventually converge to invariant densities which are in close agreement.

  6. A method for estimating proportions

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.; Marion, B. P.

    1975-01-01

    A proportion estimation procedure is presented which requires only one set of ground truth data for determining the error matrix. The error matrix is then used to determine an unbiased estimate. The error matrix is shown to be directly related to the probability of misclassification, and it becomes more diagonally dominant as the number of passes used increases.
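
    To make the inversion step concrete, the sketch below (illustrative values only) forms the unbiased proportion estimate by inverting a hypothetical classification error matrix; the matrix entries, class count, and observed proportions are assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical 3-class error matrix estimated from one set of ground truth:
# M[i, j] = P(classified as class i | true class is j); columns sum to 1.
M = np.array([[0.90, 0.10, 0.05],
              [0.07, 0.85, 0.10],
              [0.03, 0.05, 0.85]])

# Proportions of pixels assigned to each class by the classifier.
q_observed = np.array([0.40, 0.35, 0.25])

# In expectation q = M @ p_true, so inverting the error matrix gives an
# (approximately) unbiased estimate of the true class proportions.
p_hat = np.linalg.solve(M, q_observed)
p_hat = p_hat / p_hat.sum()   # renormalize against sampling noise

print("estimated true class proportions:", p_hat)
```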

  7. Applications of the line-of-response probability density function resolution model in PET list mode reconstruction

    NASA Astrophysics Data System (ADS)

    Jian, Y.; Yao, R.; Mulnix, T.; Jin, X.; Carson, R. E.

    2015-01-01

    Resolution degradation in PET image reconstruction can be caused by inaccurate modeling of the physical factors in the acquisition process. Resolution modeling (RM) is a common technique that takes into account the resolution degrading factors in the system matrix. Our previous work has introduced a probability density function (PDF) method of deriving the resolution kernels from Monte Carlo simulation and parameterizing the lines of response (LORs) to reduce the number of kernels needed for image reconstruction. In addition, LOR-PDF allows different PDFs to be applied to LORs from different crystal layer pairs of the HRRT. In this study, a thorough test was performed with this new model (LOR-PDF) applied to two PET scanners: the HRRT and the Focus-220. A more uniform resolution distribution was observed in point source reconstructions by replacing the spatially-invariant kernels with the spatially-variant LOR-PDF. Specifically, from the center to the edge of the radial field of view (FOV) of the HRRT, the measured in-plane FWHMs of point sources in a warm background varied slightly from 1.7 mm to 1.9 mm in LOR-PDF reconstructions. In Minihot and contrast phantom reconstructions, LOR-PDF resulted in up to 9% higher contrast at any given noise level than the image-space resolution model. LOR-PDF also has the advantage of performing crystal-layer-dependent resolution modeling. The contrast improvement obtained by using LOR-PDF was verified statistically by replicate reconstructions. In addition, [11C]AFM rats imaged on the HRRT and [11C]PHNO rats imaged on the Focus-220 were utilized to demonstrate the advantage of the new model. Higher contrast between high-uptake regions only a few millimeters in diameter and the background was observed in LOR-PDF reconstructions than in other methods.

  8. Smoothing Methods for Estimating Test Score Distributions.

    ERIC Educational Resources Information Center

    Kolen, Michael J.

    1991-01-01

    Estimation/smoothing methods that are flexible enough to fit a wide variety of test score distributions are reviewed: kernel method, strong true-score model-based method, and method that uses polynomial log-linear models. Applications of these methods include describing/comparing test score distributions, estimating norms, and estimating…

  9. Probability density of spatially distributed soil moisture inferred from crosshole georadar traveltime measurements

    NASA Astrophysics Data System (ADS)

    Linde, N.; Vrugt, J. A.

    2009-04-01

    Geophysical models are increasingly used in hydrological simulations and inversions, where they are typically treated as an artificial data source with known uncorrelated "data errors". The model appraisal problem in classical deterministic linear and non-linear inversion approaches based on linearization is often addressed by calculating model resolution and model covariance matrices. These measures offer only a limited potential to assign a more appropriate "data covariance matrix" for future hydrological applications, simply because the regularization operators used to construct a stable inverse solution bear a strong imprint on such estimates and because the non-linearity of the geophysical inverse problem is not explored. We present a parallelized Markov Chain Monte Carlo (MCMC) scheme to efficiently derive the posterior spatially distributed radar slowness and water content between boreholes given first-arrival traveltimes. This method is called DiffeRential Evolution Adaptive Metropolis (DREAM_ZS) with snooker updater and sampling from past states. Our inverse scheme does not impose any smoothness on the final solution, and uses uniform prior ranges of the parameters. The posterior distribution of radar slowness is converted into spatially distributed soil moisture values using a petrophysical relationship. To benchmark the performance of DREAM_ZS, we first apply our inverse method to a synthetic two-dimensional infiltration experiment using 9421 traveltimes contaminated with Gaussian errors and 80 different model parameters, corresponding to a model discretization of 0.3 m × 0.3 m. After this, the method is applied to field data acquired in the vadose zone during snowmelt. This work demonstrates that fully non-linear stochastic inversion can be applied with few limiting assumptions to a range of common two-dimensional tomographic geophysical problems. The main advantage of DREAM_ZS is that it provides a full view of the posterior distribution of spatially

  10. An automatic locally-adaptive method to estimate heavily-tailed breakthrough curves from particle distributions

    NASA Astrophysics Data System (ADS)

    Pedretti, Daniele; Fernàndez-Garcia, Daniel

    2013-09-01

    Particle tracking methods to simulate solute transport deal with the issue of having to reconstruct smooth concentrations from a limited number of particles. This is an error-prone process that typically leads to large fluctuations in the determined late-time behavior of breakthrough curves (BTCs). Kernel density estimators (KDE) can be used to automatically reconstruct smooth BTCs from a small number of particles. The kernel approach incorporates the uncertainty associated with subsampling a large population by equipping each particle with a probability density function. Two broad classes of KDE methods can be distinguished depending on the parametrization of this function: global and adaptive methods. This paper shows that each method is likely to estimate a specific portion of the BTCs. Although global methods offer a valid approach to estimate early-time behavior and peak of BTCs, they exhibit important fluctuations at the tails where fewer particles exist. In contrast, locally adaptive methods improve tail estimation while oversmoothing both early-time and peak concentrations. Therefore a new method is proposed combining the strength of both KDE approaches. The proposed approach is universal and only needs one parameter (α) which slightly depends on the shape of the BTCs. Results show that, for the tested cases, heavily-tailed BTCs are properly reconstructed with α ≈ 0.5 .
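
    As a rough illustration of the two kernel families discussed above, the sketch below reconstructs a breakthrough curve from synthetic particle arrival times with (i) a single global bandwidth and (ii) Abramson-type locally adaptive bandwidths; the data, the Silverman bandwidth rule, and the exponent alpha are assumptions and do not reproduce the paper's combined estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical particle arrival times with a heavy late-time tail.
arrivals = rng.lognormal(mean=1.0, sigma=0.9, size=2000)

def kde_global(x, data, h):
    """Gaussian KDE with a single (global) bandwidth h."""
    u = (x[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

def kde_adaptive(x, data, h0, alpha=0.5):
    """Locally adaptive Gaussian KDE: the bandwidth of each particle is
    inflated where a pilot density is low (Abramson-type local factors)."""
    pilot = kde_global(data, data, h0)
    lam = (pilot / np.exp(np.mean(np.log(pilot)))) ** (-alpha)
    h_i = h0 * lam                                   # one bandwidth per particle
    u = (x[:, None] - data[None, :]) / h_i[None, :]
    k = np.exp(-0.5 * u**2) / (h_i[None, :] * np.sqrt(2 * np.pi))
    return k.sum(axis=1) / len(data)

h0 = 1.06 * arrivals.std() * len(arrivals) ** (-1 / 5)   # Silverman's rule
t = np.linspace(0.01, arrivals.max(), 400)
btc_global = kde_global(t, arrivals, h0)       # reliable near the peak
btc_adaptive = kde_adaptive(t, arrivals, h0)   # smoother late-time tail
print("late-time density, global vs adaptive:", btc_global[-1], btc_adaptive[-1])
```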

  11. Radiance and atmosphere propagation-based method for the target range estimation

    NASA Astrophysics Data System (ADS)

    Cho, Hoonkyung; Chun, Joohwan

    2012-06-01

    Target range estimation is traditionally based on radar and active sonar systems in modern combat systems. However, the performance of such active sensor devices is degraded tremendously by jamming signals from the enemy. This paper proposes a simple range estimation method between the target and the sensor. Passive IR sensors measure infrared (IR) radiance radiating from objects at different wavelengths, and this method shows robustness against electromagnetic jamming. The measured target radiance of each wavelength at the IR sensor depends on the emissive properties of the target material and is attenuated by various factors, in particular the distance between the sensor and the target and the atmospheric environment. MODTRAN is a tool that models atmospheric propagation of electromagnetic radiation. Based on the result from MODTRAN and the measured radiance, the target range is estimated. To statistically analyze the performance of the proposed method, we use maximum likelihood estimation (MLE) and evaluate the Cramer-Rao lower bound (CRLB) via the probability density function of the measured radiance. We also compare the CRLB with the variance of the ML estimate using Monte Carlo simulation.
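
    A hedged numerical illustration of the last step: for a simple exponential-attenuation radiance model standing in for MODTRAN output, the sketch computes the Cramer-Rao lower bound on the range and compares it with the Monte Carlo variance of the maximum likelihood estimate; all radiances, extinction coefficients, and noise levels are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)

# Hypothetical per-band source radiances and extinction coefficients
# (stand-ins for MODTRAN output); the sensor noise is additive Gaussian.
L0 = np.array([5.0, 3.0, 2.0])        # radiance at zero range, per band
alpha = np.array([0.8, 0.5, 0.3])     # atmospheric extinction per band (1/km)
sigma = 0.05                          # noise standard deviation
r_true = 2.0                          # true target range (km)

def mean_radiance(r):
    return L0 * np.exp(-alpha * r)

# CRLB: for Gaussian noise the Fisher information is
# I(r) = sum_k (d m_k / d r)^2 / sigma^2, with d m_k / d r = -alpha_k m_k(r).
dm = -alpha * mean_radiance(r_true)
crlb = sigma**2 / np.sum(dm**2)

# Monte Carlo: the ML estimate reduces to nonlinear least squares here.
def ml_estimate(y):
    cost = lambda r: np.sum((y - mean_radiance(r))**2)
    return minimize_scalar(cost, bounds=(0.1, 10.0), method="bounded").x

estimates = [ml_estimate(mean_radiance(r_true) + sigma * rng.standard_normal(3))
             for _ in range(2000)]
print("CRLB:", crlb, " ML variance:", np.var(estimates))
```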

  12. A Hybrid Method in Vegetation Height Estimation Using PolInSAR Images of Campaign BioSAR

    NASA Astrophysics Data System (ADS)

    Dehnavi, S.; Maghsoudi, Y.

    2015-12-01

    Recently, there has been plenty of research on the retrieval of forest height from PolInSAR data. This paper aims at the evaluation of a hybrid method for vegetation height estimation based on L-band multi-polarized airborne SAR images. The SAR data used in this paper were collected by the airborne E-SAR system. The objective of this research is firstly to describe each interferometric cross-correlation as a sum of contributions corresponding to single-bounce, double-bounce, and volume scattering processes. Then, an ESPRIT (Estimation of Signal Parameters via Rotational Invariance Techniques) algorithm is implemented to determine the interferometric phase of each local scatterer (ground and canopy). Secondly, the canopy height is estimated by the phase differencing method, according to the RVOG (Random Volume Over Ground) concept. The applied model-based decomposition method is distinctive in that it is not limited to a specific type of vegetation, unlike previous decomposition techniques. In fact, the use of a generalized probability density function based on the nth power of a cosine-squared function, characterized by two parameters, makes this method applicable to different vegetation types. Experimental results show the efficiency of the approach for vegetation height estimation in the test site.

  13. Coupled Monte Carlo Probability Density Function/ SPRAY/CFD Code Developed for Modeling Gas-Turbine Combustor Flows

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The success of any solution methodology for studying gas-turbine combustor flows depends a great deal on how well it can model various complex, rate-controlling processes associated with turbulent transport, mixing, chemical kinetics, evaporation and spreading rates of the spray, convective and radiative heat transfer, and other phenomena. These phenomena often strongly interact with each other at disparate time and length scales. In particular, turbulence plays an important role in determining the rates of mass and heat transfer, chemical reactions, and evaporation in many practical combustion devices. Turbulence manifests its influence in a diffusion flame in several forms depending on how turbulence interacts with various flame scales. These forms range from the so-called wrinkled, or stretched, flamelets regime, to the distributed combustion regime. Conventional turbulence closure models have difficulty in treating highly nonlinear reaction rates. A solution procedure based on the joint composition probability density function (PDF) approach holds the promise of modeling various important combustion phenomena relevant to practical combustion devices such as extinction, blowoff limits, and emissions predictions because it can handle the nonlinear chemical reaction rates without any approximation. In this approach, mean and turbulence gas-phase velocity fields are determined from a standard turbulence model; the joint composition field of species and enthalpy are determined from the solution of a modeled PDF transport equation; and a Lagrangian-based dilute spray model is used for the liquid-phase representation with appropriate consideration of the exchanges of mass, momentum, and energy between the two phases. The PDF transport equation is solved by a Monte Carlo method, and existing state-of-the-art numerical representations are used to solve the mean gasphase velocity and turbulence fields together with the liquid-phase equations. The joint composition PDF
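
    The following minimal sketch illustrates the particle Monte Carlo idea behind the composition PDF approach for a single, statistically homogeneous cell: notional particles carry one scalar, relax toward the mean through an IEM-type micromixing model, and react with a nonlinear rate evaluated exactly per particle (the feature highlighted above); the constants and one-scalar chemistry are assumptions and this is not the EUPDF implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

n_particles = 5000
dt, n_steps = 1.0e-3, 500
c_phi, omega = 2.0, 50.0     # mixing constant and turbulence frequency (assumed)
damkohler = 30.0             # nonlinear reaction-rate constant (assumed)

# Each notional particle carries one composition variable (a progress variable).
phi = rng.choice([0.0, 1.0], size=n_particles)   # initially unmixed

for _ in range(n_steps):
    mean_phi = phi.mean()
    # IEM (interaction by exchange with the mean) micromixing model.
    phi += -0.5 * c_phi * omega * (phi - mean_phi) * dt
    # Nonlinear reaction rate evaluated particle by particle: no closure
    # approximation is needed for the chemical source term.
    phi += damkohler * phi**2 * (1.0 - phi) * dt
    phi = np.clip(phi, 0.0, 1.0)

# Moments of the composition PDF follow directly from the particle ensemble.
print("mean:", phi.mean(), "variance:", phi.var())
```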

  14. An at-site flood estimation method in the context of nonstationarity I. A simulation study

    NASA Astrophysics Data System (ADS)

    Gado, Tamer A.; Nguyen, Van-Thanh-Van

    2016-04-01

    The stationarity of annual flood peak records is the traditional assumption of flood frequency analysis. In some cases, however, as a result of land-use and/or climate change, this assumption is no longer valid. Therefore, new statistical models are needed to capture dynamically the change of probability density functions over time, in order to obtain reliable flood estimation. In this study, an innovative method for nonstationary flood frequency analysis was presented. Here, the new method is based on detrending the flood series and applying the L-moments along with the GEV distribution to the transformed "stationary" series (hereafter, this is called the LM-NS). The LM-NS method was assessed through a comparative study with the maximum likelihood (ML) method for the nonstationary GEV model, as well as with the stationary (S) GEV model. The comparative study, based on Monte Carlo simulations, was carried out for three nonstationary GEV models: a linear dependence of the mean on time (GEV1), a quadratic dependence of the mean on time (GEV2), and linear dependence in both the mean and log standard deviation on time (GEV11). The simulation results indicated that the LM-NS method performs better than the ML method for most of the cases studied, whereas the stationary method provides the least accurate results. An additional advantage of the LM-NS method is to avoid the numerical problems (e.g., convergence problems) that may occur with the ML method when estimating parameters for small data samples.
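
    A minimal sketch of the LM-NS idea as described, on synthetic data: a fitted linear trend is removed from the peak series, and a GEV is then fitted to the detrended series by sample L-moments, using Hosking's widely used approximation for the shape parameter; the trend model and data are illustrative only.

```python
import numpy as np
from math import gamma, log

def sample_l_moments(x):
    """First three sample L-moments via probability-weighted moments."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    j = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((j - 1) / (n - 1) * x) / n
    b2 = np.sum((j - 1) * (j - 2) / ((n - 1) * (n - 2)) * x) / n
    l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
    return l1, l2, l3 / l2            # mean, L-scale, L-skewness

def gev_from_l_moments(l1, l2, t3):
    """GEV location/scale/shape from L-moments (Hosking's approximation)."""
    c = 2.0 / (3.0 + t3) - log(2.0) / log(3.0)
    k = 7.8590 * c + 2.9554 * c**2
    sigma = l2 * k / ((1.0 - 2.0**(-k)) * gamma(1.0 + k))
    mu = l1 - sigma * (1.0 - gamma(1.0 + k)) / k
    return mu, sigma, k

# Synthetic nonstationary annual peaks: linear trend in the mean (GEV1-type case).
rng = np.random.default_rng(3)
years = np.arange(50)
peaks = 100.0 + 0.8 * years + rng.gumbel(0.0, 15.0, size=50)

# Detrend, then fit the GEV to the transformed "stationary" series.
slope = np.polyfit(years, peaks, 1)[0]
detrended = peaks - slope * years
mu, sigma, k = gev_from_l_moments(*sample_l_moments(detrended))
print("GEV fit of detrended series (location, scale, shape):", mu, sigma, k)
```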

  15. Assessment of a three-dimensional line-of-response probability density function system matrix for PET.

    PubMed

    Yao, Rutao; Ramachandra, Ranjith M; Mahajan, Neeraj; Rathod, Vinay; Gunasekar, Noel; Panse, Ashish; Ma, Tianyu; Jian, Yiqiang; Yan, Jianhua; Carson, Richard E

    2012-11-01

    To achieve optimal PET image reconstruction through better system modeling, we developed a system matrix that is based on the probability density function for each line of response (LOR-PDF). The LOR-PDFs are grouped by LOR-to-detector incident angles to form a highly compact system matrix. The system matrix was implemented in the MOLAR list mode reconstruction algorithm for a small animal PET scanner. The impact of LOR-PDF on reconstructed image quality was assessed qualitatively as well as quantitatively in terms of contrast recovery coefficient (CRC) and coefficient of variance (COV), and its performance was compared with a fixed Gaussian (iso-Gaussian) line spread function. The LOR-PDFs of three coincidence signal emitting sources, (1) ideal positron emitter that emits perfect back-to-back γ rays (γγ) in air; (2) fluorine-18 (¹⁸F) nuclide in water; and (3) oxygen-15 (¹⁵O) nuclide in water, were derived, and assessed with simulated and experimental phantom data. The derived LOR-PDFs showed anisotropic and asymmetric characteristics dependent on LOR-detector angle, coincidence emitting source, and the medium, consistent with common PET physical principles. The comparison of the iso-Gaussian function and LOR-PDF showed that: (1) without positron range and acollinearity effects, the LOR-PDF achieved better or similar trade-offs of contrast recovery and noise for objects of 4 mm radius or larger, and this advantage extended to smaller objects (e.g. 2 mm radius sphere, 0.6 mm radius hot-rods) at higher iteration numbers; and (2) with positron range and acollinearity effects, the iso-Gaussian achieved similar or better resolution recovery depending on the significance of positron range effect. We conclude that the 3D LOR-PDF approach is an effective method to generate an accurate and compact system matrix. However, when used directly in expectation-maximization based list-mode iterative reconstruction algorithms such as MOLAR, its superiority is not clear

  16. Assessment of a Three-Dimensional Line-of-Response Probability Density Function System Matrix for PET

    PubMed Central

    Yao, Rutao; Ramachandra, Ranjith M.; Mahajan, Neeraj; Rathod, Vinay; Gunasekar, Noel; Panse, Ashish; Ma, Tianyu; Jian, Yiqiang; Yan, Jianhua; Carson, Richard E.

    2012-01-01

    To achieve optimal PET image reconstruction through better system modeling, we developed a system matrix that is based on the probability density function for each line of response (LOR-PDF). The LOR-PDFs are grouped by LOR-to-detector incident angles to form a highly compact system matrix. The system matrix was implemented in the MOLAR list mode reconstruction algorithm for a small animal PET scanner. The impact of LOR-PDF on reconstructed image quality was assessed qualitatively as well as quantitatively in terms of contrast recovery coefficient (CRC) and coefficient of variance (COV), and its performance was compared with a fixed Gaussian (iso-Gaussian) line spread function. The LOR-PDFs of 3 coincidence signal emitting sources, 1) ideal positron emitter that emits perfect back-to-back γ rays (γγ) in air; 2) fluorine-18 (18F) nuclide in water; and 3) oxygen-15 (15O) nuclide in water, were derived, and assessed with simulated and experimental phantom data. The derived LOR-PDFs showed anisotropic and asymmetric characteristics dependent on LOR-detector angle, coincidence emitting source, and the medium, consistent with common PET physical principles. The comparison of the iso-Gaussian function and LOR-PDF showed that: 1) without positron range and acolinearity effects, the LOR-PDF achieved better or similar trade-offs of contrast recovery and noise for objects of 4-mm radius or larger, and this advantage extended to smaller objects (e.g. 2-mm radius sphere, 0.6-mm radius hot-rods) at higher iteration numbers; and 2) with positron range and acolinearity effects, the iso-Gaussian achieved similar or better resolution recovery depending on the significance of positron range effect. We conclude that the 3-D LOR-PDF approach is an effective method to generate an accurate and compact system matrix. However, when used directly in expectation-maximization based list-mode iterative reconstruction algorithms such as MOLAR, its superiority is not clear. For this

  17. Assessment of a three-dimensional line-of-response probability density function system matrix for PET

    NASA Astrophysics Data System (ADS)

    Yao, Rutao; Ramachandra, Ranjith M.; Mahajan, Neeraj; Rathod, Vinay; Gunasekar, Noel; Panse, Ashish; Ma, Tianyu; Jian, Yiqiang; Yan, Jianhua; Carson, Richard E.

    2012-11-01

    To achieve optimal PET image reconstruction through better system modeling, we developed a system matrix that is based on the probability density function for each line of response (LOR-PDF). The LOR-PDFs are grouped by LOR-to-detector incident angles to form a highly compact system matrix. The system matrix was implemented in the MOLAR list mode reconstruction algorithm for a small animal PET scanner. The impact of LOR-PDF on reconstructed image quality was assessed qualitatively as well as quantitatively in terms of contrast recovery coefficient (CRC) and coefficient of variance (COV), and its performance was compared with a fixed Gaussian (iso-Gaussian) line spread function. The LOR-PDFs of three coincidence signal emitting sources, (1) ideal positron emitter that emits perfect back-to-back γ rays (γγ) in air; (2) fluorine-18 (18F) nuclide in water; and (3) oxygen-15 (15O) nuclide in water, were derived, and assessed with simulated and experimental phantom data. The derived LOR-PDFs showed anisotropic and asymmetric characteristics dependent on LOR-detector angle, coincidence emitting source, and the medium, consistent with common PET physical principles. The comparison of the iso-Gaussian function and LOR-PDF showed that: (1) without positron range and acollinearity effects, the LOR-PDF achieved better or similar trade-offs of contrast recovery and noise for objects of 4 mm radius or larger, and this advantage extended to smaller objects (e.g. 2 mm radius sphere, 0.6 mm radius hot-rods) at higher iteration numbers; and (2) with positron range and acollinearity effects, the iso-Gaussian achieved similar or better resolution recovery depending on the significance of positron range effect. We conclude that the 3D LOR-PDF approach is an effective method to generate an accurate and compact system matrix. However, when used directly in expectation-maximization based list-mode iterative reconstruction algorithms such as MOLAR, its superiority is not clear. For this

  18. Robust parameter estimation method for bilinear model

    NASA Astrophysics Data System (ADS)

    Ismail, Mohd Isfahani; Ali, Hazlina; Yahaya, Sharipah Soaad S.

    2015-12-01

    This paper proposes a parameter estimation method for the bilinear model, specifically the BL(1,0,1,1) model, with and without the presence of additive outliers (AO). In this study, the parameters of the BL(1,0,1,1) model are estimated using the nonlinear least squares (LS) method and also through robust approaches. The LS method employs the Newton-Raphson (NR) iterative procedure to estimate the parameters of the bilinear model, but LS estimates can be affected by the occurrence of outliers. As a solution, this study proposes robust approaches for dealing with the problem of outliers, specifically AO, in the BL(1,0,1,1) model. In the robust estimation method, we propose to modify the NR procedure with robust scale estimators. We introduce two robust scale estimators, namely the median absolute deviation (MADn) and Tn, in the linear autoregressive model AR(1), which are adequate and suitable for the bilinear BL(1,0,1,1) model. We use the estimated parameter value from the AR(1) model as an initial value for estimating the parameter values of the BL(1,0,1,1) model. The performance of the LS and robust estimation methods in estimating the coefficients of the BL(1,0,1,1) model is investigated through a simulation study and assessed in terms of bias. Numerical results show that the robust estimation method performs better than the LS method in estimating the parameters, both with and without the presence of AO.

  19. Reliability Estimation Methods for Liquid Rocket Engines

    NASA Astrophysics Data System (ADS)

    Hirata, Kunio; Masuya, Goro; Kamijo, Kenjiro

    Reliability estimation using the dispersive, binomial distribution method has traditionally been used to certify the reliability of liquid rocket engines, but its estimates have sometimes disagreed with the failure rates of flight engines. To obtain better results, the reliability growth model and the failure distribution method are applied to estimate the reliability of LE-7A engines, which have propelled the first stage of H-2A launch vehicles.

  20. Probability density adjoint for sensitivity analysis of the Mean of Chaos

    SciTech Connect

    Blonigan, Patrick J.; Wang, Qiqi

    2014-08-01

    Sensitivity analysis, especially adjoint based sensitivity analysis, is a powerful tool for engineering design which allows for the efficient computation of sensitivities with respect to many parameters. However, these methods break down when used to compute sensitivities of long-time averaged quantities in chaotic dynamical systems. This paper presents a new method for sensitivity analysis of ergodic chaotic dynamical systems, the density adjoint method. The method involves solving the governing equations for the system's invariant measure and its adjoint on the system's attractor manifold rather than in phase-space. This new approach is derived for and demonstrated on one-dimensional chaotic maps and the three-dimensional Lorenz system. It is found that the density adjoint computes very finely detailed adjoint distributions and accurate sensitivities, but suffers from large computational costs.

  1. Existence, uniqueness and regularity of a time-periodic probability density distribution arising in a sedimentation-diffusion problem

    NASA Technical Reports Server (NTRS)

    Nitsche, Ludwig C.; Nitsche, Johannes M.; Brenner, Howard

    1988-01-01

    The sedimentation and diffusion of a nonneutrally buoyant Brownian particle in a vertical fluid-filled cylinder of finite length, which is instantaneously inverted at regular intervals, are investigated analytically. A one-dimensional convective-diffusive equation is derived to describe the temporal and spatial evolution of the probability density; a periodicity condition is formulated; the applicability of Fredholm theory is established; and the parameter-space regions are determined within which the existence and uniqueness of solutions are guaranteed. Numerical results for sample problems are presented graphically and briefly characterized.

  2. Finite-size scaling of the magnetization probability density for the critical Ising model in slab geometry

    NASA Astrophysics Data System (ADS)

    Lopes Cardozo, David; Holdsworth, Peter C. W.

    2016-04-01

    The magnetization probability density in d = 2 and 3 dimensional Ising models in slab geometry of volume L_∥^(d−1) × L_⊥ is computed through Monte Carlo simulation at the critical temperature and zero magnetic field. The finite-size scaling of this distribution and its dependence on the system aspect ratio ρ = L_⊥/L_∥ and on the boundary conditions are discussed. In the limiting case ρ → 0 of a macroscopically large slab (L_∥ ≫ L_⊥), the distribution is found to scale as a Gaussian function for all tested system sizes and boundary conditions.

  3. Direct Density Derivative Estimation.

    PubMed

    Sasaki, Hiroaki; Noh, Yung-Kyun; Niu, Gang; Sugiyama, Masashi

    2016-06-01

    Estimating the derivatives of probability density functions is an essential step in statistical data analysis. A naive approach to estimate the derivatives is to first perform density estimation and then compute its derivatives. However, this approach can be unreliable because a good density estimator does not necessarily mean a good density derivative estimator. To cope with this problem, in this letter, we propose a novel method that directly estimates density derivatives without going through density estimation. The proposed method provides computationally efficient estimation for the derivatives of any order on multidimensional data with a hyperparameter tuning method and achieves the optimal parametric convergence rate. We further discuss an extension of the proposed method by applying regularized multitask learning and a general framework for density derivative estimation based on Bregman divergences. Applications of the proposed method to nonparametric Kullback-Leibler divergence approximation and bandwidth matrix selection in kernel density estimation are also explored. PMID:27140943
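
    For contrast with the direct approach proposed above, this is a minimal sketch of the naive two-step baseline the letter mentions: estimate the density with a Gaussian KDE and differentiate it analytically; the bandwidth rule and test data are assumptions.

```python
import numpy as np

def gaussian_kde_derivative(x_eval, data, h):
    """Naive density-derivative estimate: analytic derivative of a Gaussian KDE,
    f'(x) = -(1/(n h^2)) * sum_i u_i * phi(u_i),  with u_i = (x - x_i)/h."""
    u = (x_eval[:, None] - data[None, :]) / h
    phi = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)
    return -(u * phi).sum(axis=1) / (len(data) * h**2)

rng = np.random.default_rng(4)
data = rng.normal(0.0, 1.0, size=1000)
h = 1.06 * data.std() * len(data) ** (-1 / 5)    # Silverman's rule (illustrative)

x = np.linspace(-3.0, 3.0, 200)
d_est = gaussian_kde_derivative(x, data, h)
d_true = -x * np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)  # derivative of N(0,1) pdf
print("max absolute error of the naive estimate:", np.abs(d_est - d_true).max())
```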

  4. Probability Density Function for Waves Propagating in a Straight Rough Wall Tunnel

    SciTech Connect

    Pao, H

    2004-01-28

    The radio channel places fundamental limitations on the performance of wireless communication systems in tunnels and caves. The transmission path between the transmitter and receiver can vary from a simple direct line of sight to one that is severely obstructed by rough walls and corners. Unlike wired channels that are stationary and predictable, radio channels can be extremely random and difficult to analyze. In fact, modeling the radio channel has historically been one of the more challenging parts of any radio system design; this is often done using statistical methods. The mechanisms behind electromagnetic wave propagation are diverse, but can generally be attributed to reflection, diffraction, and scattering. Because of the multiple reflections from rough walls, the electromagnetic waves travel along different paths of varying lengths. The interactions between these waves cause multipath fading at any location, and the strengths of the waves decrease as the distance between the transmitter and receiver increases. As a consequence of the central limit theorem, the received signal is approximately a Gaussian random process. This means that the field propagating in a cave or tunnel is typically a complex-valued Gaussian random process.
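
    A small simulation of the central-limit argument above: superposing many randomly phased multipath contributions yields an approximately complex Gaussian field, so the received envelope is close to Rayleigh distributed; the number of paths and the amplitude distribution are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
n_paths, n_samples = 100, 20_000

# Random amplitudes and uniformly distributed phases for each multipath component.
amps = rng.uniform(0.5, 1.5, size=(n_samples, n_paths)) / np.sqrt(n_paths)
phases = rng.uniform(0.0, 2.0 * np.pi, size=(n_samples, n_paths))

# Superposition of the components: by the central limit theorem the real and
# imaginary parts are approximately Gaussian, i.e. the field is complex Gaussian.
field = np.sum(amps * np.exp(1j * phases), axis=1)
envelope = np.abs(field)

# Compare the empirical mean envelope with the Rayleigh prediction
# E[R] = sigma * sqrt(pi/2), where 2*sigma^2 = E[|field|^2].
sigma = np.sqrt(0.5 * np.mean(envelope**2))
print("empirical mean envelope:", envelope.mean())
print("Rayleigh prediction    :", sigma * np.sqrt(np.pi / 2.0))
```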

  5. On the thresholds, probability densities, and critical exponents of Bak-Sneppen-like models

    NASA Astrophysics Data System (ADS)

    Garcia, Guilherme J. M.; Dickman, Ronald

    2004-10-01

    We report a simple method to accurately determine the threshold and the exponent ν of the Bak-Sneppen (BS) model and also investigate the BS universality class. For the random-neighbor version of the BS model, we find the threshold x* = 0.33332(3), in agreement with the exact result x* = 1/3 given by mean-field theory. For the one-dimensional original model, we find x* = 0.6672(2), in good agreement with the results reported in the literature; for the anisotropic BS model we obtain x* = 0.7240(1). We study the finite-size effect x*(L) − x*(L→∞) ∝ L^(−ν), observed in a system with L sites, and find ν = 1.00(1) for the random-neighbor version, ν = 1.40(1) for the original model, and ν = 1.58(1) for the anisotropic case. Finally, we discuss the effect of defining the extremal site as the one which minimizes a general function f(x), instead of simply f(x) = x as in the original updating rule. We emphasize that models with extremal dynamics have singular stationary probability distributions p(x). Our simulations indicate the existence of two symmetry-based universality classes.

  6. Some statistical properties of surface slopes via remote sensing considering a non-Gaussian probability density function

    NASA Astrophysics Data System (ADS)

    Poom-Medina, José Luis; Álvarez-Borrego, Josué

    2016-07-01

    Theoretical relationships between the statistical properties of surface slope and the statistical properties of the image intensity in remotely sensed images are shown, considering a non-Gaussian probability density function of the surface slope. Considering a variable detector line-of-sight angle, and considering ocean waves moving along a single direction with the observer and the sun both in the vertical plane containing this direction, new expressions, using two different glitter functions, between the variance of the intensity of the image and the variance of the surface slopes are derived. In this case, skewness and kurtosis moments are taken into account. However, new expressions between correlation functions of the intensities in the image and surface slopes are numerically analyzed; for this case, only the skewness moments were considered. It is possible to observe more changes in these statistical relationships when the Rect function is used. The skewness and kurtosis values are in direct relation with the wind velocity on the sea surface.

  7. Steady-state probability density function of the phase error for a DPLL with an integrate-and-dump device

    NASA Technical Reports Server (NTRS)

    Simon, M.; Mileant, A.

    1986-01-01

    The steady-state behavior of a particular type of digital phase-locked loop (DPLL) with an integrate-and-dump circuit following the phase detector is characterized in terms of the probability density function (pdf) of the phase error in the loop. Although the loop is entirely digital from an implementation standpoint, it operates at two extremely different sampling rates. In particular, the combination of a phase detector and an integrate-and-dump circuit operates at a very high rate whereas the loop update rate is very slow by comparison. Because of this dichotomy, the loop can be analyzed by hybrid analog/digital (s/z domain) techniques. The loop is modeled in such a general fashion that previous analyses of the Real-Time Combiner (RTC), Subcarrier Demodulator Assembly (SDA), and Symbol Synchronization Assembly (SSA) fall out as special cases.

  8. Nonparametric estimation of population density for line transect sampling using FOURIER series

    USGS Publications Warehouse

    Crain, B.R.; Burnham, K.P.; Anderson, D.R.; Lake, J.L.

    1979-01-01

    A nonparametric, robust density estimation method is explored for the analysis of right-angle distances from a transect line to the objects sighted. The method is based on the FOURIER series expansion of a probability density function over an interval. With only mild assumptions, a general population density estimator of wide applicability is obtained.
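
    A hedged sketch of a cosine-series density estimator of this kind, applied to synthetic right-angle distances on [0, w]; the number of terms is fixed by hand, whereas the original approach includes a data-driven stopping rule that is not reproduced here.

```python
import numpy as np

def fourier_density(x_eval, distances, w, m):
    """Cosine-series estimate of the perpendicular-distance density on [0, w]:
    f(x) = 1/w + sum_k a_k cos(k*pi*x/w), with a_k estimated by sample averages."""
    distances = np.asarray(distances, dtype=float)
    n = len(distances)
    k = np.arange(1, m + 1)
    a_k = (2.0 / (n * w)) * np.cos(np.pi * np.outer(k, distances) / w).sum(axis=1)
    return 1.0 / w + np.cos(np.pi * np.outer(x_eval, k) / w) @ a_k

# Synthetic right-angle sighting distances with detectability falling off in x.
rng = np.random.default_rng(6)
w = 50.0                                        # truncation distance (illustrative)
dists = np.abs(rng.normal(0.0, 15.0, size=300))
dists = dists[dists <= w]

x = np.linspace(0.0, w, 200)
f_hat = fourier_density(x, dists, w, m=3)
# In line-transect sampling the density estimate hinges on f(0),
# which is simply the truncated series evaluated at x = 0.
print("estimated f(0):", fourier_density(np.array([0.0]), dists, w, m=3)[0])
```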

  9. Habitat suitability criteria via parametric distributions: estimation, model selection and uncertainty

    USGS Publications Warehouse

    Som, Nicholas A.; Goodman, Damon H.; Perry, Russell W.; Hardy, Thomas B.

    2015-01-01

    Previous methods for constructing univariate habitat suitability criteria (HSC) curves have ranged from professional judgement to kernel-smoothed density functions or combinations thereof. We present a new method of generating HSC curves that applies probability density functions as the mathematical representation of the curves. Compared with previous approaches, benefits of our method include (1) estimation of probability density function parameters directly from raw data, (2) quantitative methods for selecting among several candidate probability density functions, and (3) concise methods for expressing estimation uncertainty in the HSC curves. We demonstrate our method with a thorough example using data collected on the depth of water used by juvenile Chinook salmon (Oncorhynchus tschawytscha) in the Klamath River of northern California and southern Oregon. All R code needed to implement our example is provided in the appendix. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
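
    The workflow described above can be sketched as follows on synthetic depth data with an illustrative candidate set: fit each parametric density by maximum likelihood and rank the candidates by AIC; this is only a schematic of the selection step, not the authors' code.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
depths = rng.gamma(shape=4.0, scale=0.3, size=400)   # synthetic depth-use data (m)

candidates = {
    "gamma": stats.gamma,
    "lognorm": stats.lognorm,
    "weibull_min": stats.weibull_min,
}

aic = {}
for name, dist in candidates.items():
    params = dist.fit(depths, floc=0.0)        # ML fit with location fixed at 0
    loglik = np.sum(dist.logpdf(depths, *params))
    n_free = len(params) - 1                   # loc is not a free parameter
    aic[name] = 2 * n_free - 2 * loglik

best = min(aic, key=aic.get)
print("AIC by candidate:", aic)
print("selected HSC curve family:", best)
```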

  10. FINAL PROJECT REPORT DOE Early Career Principal Investigator Program Project Title: Developing New Mathematical Models for Multiphase Flows Based on a Fundamental Probability Density Function Approach

    SciTech Connect

    Shankar Subramaniam

    2009-04-01

    This final project report summarizes progress made towards the objectives described in the proposal entitled “Developing New Mathematical Models for Multiphase Flows Based on a Fundamental Probability Density Function Approach”. Substantial progress has been made in theory, modeling and numerical simulation of turbulent multiphase flows. The consistent mathematical framework based on probability density functions is described. New models are proposed for turbulent particle-laden flows and sprays.

  11. Computing rates of Markov models of voltage-gated ion channels by inverting partial differential equations governing the probability density functions of the conducting and non-conducting states

    PubMed Central

    Tveito, Aslak; Lines, Glenn T.; Edwards, Andrew G.; McCulloch, Andrew

    2016-01-01

    Markov models are ubiquitously used to represent the function of single ion channels. However, solving the inverse problem to construct a Markov model of single channel dynamics from bilayer or patch-clamp recordings remains challenging, particularly for channels involving complex gating processes. Methods for solving the inverse problem are generally based on data from voltage clamp measurements. Here, we describe an alternative approach to this problem based on measurements of voltage traces. The voltage traces define probability density functions of the functional states of an ion channel. These probability density functions can also be computed by solving a deterministic system of partial differential equations. The inversion is based on tuning the rates of the Markov models used in the deterministic system of partial differential equations such that the solution mimics the properties of the probability density function gathered from (pseudo) experimental data as well as possible. The optimization is done by defining a cost function to measure the difference between the deterministic solution and the solution based on experimental data. By invoking the properties of this function, it is possible to infer whether the rates of the Markov model are identifiable by our method. We present applications to Markov models well known from the literature. PMID:27154008

  12. Computing rates of Markov models of voltage-gated ion channels by inverting partial differential equations governing the probability density functions of the conducting and non-conducting states.

    PubMed

    Tveito, Aslak; Lines, Glenn T; Edwards, Andrew G; McCulloch, Andrew

    2016-07-01

    Markov models are ubiquitously used to represent the function of single ion channels. However, solving the inverse problem to construct a Markov model of single channel dynamics from bilayer or patch-clamp recordings remains challenging, particularly for channels involving complex gating processes. Methods for solving the inverse problem are generally based on data from voltage clamp measurements. Here, we describe an alternative approach to this problem based on measurements of voltage traces. The voltage traces define probability density functions of the functional states of an ion channel. These probability density functions can also be computed by solving a deterministic system of partial differential equations. The inversion is based on tuning the rates of the Markov models used in the deterministic system of partial differential equations such that the solution mimics the properties of the probability density function gathered from (pseudo) experimental data as well as possible. The optimization is done by defining a cost function to measure the difference between the deterministic solution and the solution based on experimental data. By invoking the properties of this function, it is possible to infer whether the rates of the Markov model are identifiable by our method. We present applications to Markov models well known from the literature. PMID:27154008

  13. A simple method to estimate interwell autocorrelation

    SciTech Connect

    Pizarro, J.O.S.; Lake, L.W.

    1997-08-01

    The estimation of autocorrelation in the lateral or interwell direction is important when performing reservoir characterization studies using stochastic modeling. This paper presents a new method to estimate the interwell autocorrelation based on parameters, such as the vertical range and the variance, that can be estimated with commonly available data. We used synthetic fields that were generated from stochastic simulations to provide data to construct the estimation charts. These charts relate the ratio of areal to vertical variance and the autocorrelation range (expressed variously) in two directions. Three different semivariogram models were considered: spherical, exponential and truncated fractal. The overall procedure is demonstrated using field data. We find that the approach gives the most self-consistent results when it is applied to previously identified facies. Moreover, the autocorrelation trends follow the depositional pattern of the reservoir, which gives confidence in the validity of the approach.

  14. Variational bayesian method of estimating variance components.

    PubMed

    Arakawa, Aisaku; Taniguchi, Masaaki; Hayashi, Takeshi; Mikawa, Satoshi

    2016-07-01

    We developed a Bayesian analysis approach by using a variational inference method, a so-called variational Bayesian method, to determine the posterior distributions of variance components. This variational Bayesian method and an alternative Bayesian method using Gibbs sampling were compared in estimating genetic and residual variance components from both simulated data and publicly available real pig data. In the simulated data set, we observed strong bias toward overestimation of genetic variance for the variational Bayesian method in the case of low heritability and low population size, and less bias was detected with larger population sizes in both methods examined. No differences in the estimates of variance components between the variational Bayesian method and Gibbs sampling were found in the real pig data. However, the posterior distributions of the variance components obtained with the variational Bayesian method had shorter tails than those obtained with Gibbs sampling. Consequently, the posterior standard deviations of the genetic and residual variances of the variational Bayesian method were lower than those of the method using Gibbs sampling. The computing time required was much shorter with the variational Bayesian method than with the method using Gibbs sampling. PMID:26877207

  15. Efficient Methods of Estimating Switchgrass Biomass Supplies

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Switchgrass (Panicum virgatum L.) is being developed as a biofuel feedstock for the United States. Efficient and accurate methods to estimate switchgrass biomass feedstock supply within a production area will be required by biorefineries. Our main objective was to determine the effectiveness of in...

  16. Cost estimating methods for advanced space systems

    NASA Technical Reports Server (NTRS)

    Cyr, Kelley

    1988-01-01

    The development of parametric cost estimating methods for advanced space systems in the conceptual design phase is discussed. The process of identifying variables which drive cost and the relationship between weight and cost are discussed. A theoretical model of cost is developed and tested using a historical data base of research and development projects.

  17. Estimation method for serial dilution experiments.

    PubMed

    Ben-David, Avishai; Davidson, Charles E

    2014-12-01

    Titration of microorganisms in infectious or environmental samples is a cornerstone of quantitative microbiology. A simple method is presented to estimate the microbial counts obtained with the serial dilution technique for microorganisms that can grow on bacteriological media and develop into a colony. The number (concentration) of viable microbial organisms is estimated from a single dilution plate (assay) without a need for replicate plates. Our method selects the best agar plate with which to estimate the microbial counts, and takes into account the colony size and plate area that both contribute to the likelihood of miscounting the number of colonies on a plate. The estimate of the optimal count given by our method can be used to narrow the search for the best (optimal) dilution plate and saves time. The required inputs are the plate size, the microbial colony size, and the serial dilution factors. The proposed approach shows relative accuracy well within ±0.1 log10 from data produced by computer simulations. The method maintains this accuracy even in the presence of dilution errors of up to 10% (for both the aliquot and diluent volumes), microbial counts between 10^4 and 10^12 colony-forming units, dilution ratios from 2 to 100, and plate-size to colony-size ratios between 6.25 and 200. PMID:25205541

  18. New evolution equations for the joint response-excitation probability density function of stochastic solutions to first-order nonlinear PDEs

    SciTech Connect

    Venturi, D.; Karniadakis, G.E.

    2012-08-30

    By using functional integral methods we determine new evolution equations satisfied by the joint response-excitation probability density function (PDF) associated with the stochastic solution to first-order nonlinear partial differential equations (PDEs). The theory is presented both for fully nonlinear and for quasilinear scalar PDEs subject to random boundary conditions, random initial conditions or random forcing terms. Particular applications are discussed for the classical linear and nonlinear advection equations and for the advection-reaction equation. By using a Fourier-Galerkin spectral method we obtain numerical solutions of the proposed response-excitation PDF equations. These numerical solutions are compared against those obtained by using more conventional statistical approaches such as probabilistic collocation and multi-element probabilistic collocation methods. It is found that the response-excitation approach yields accurate predictions of the statistical properties of the system. In addition, it allows one to directly ascertain the tails of the probability distributions, thus facilitating the assessment of rare events and associated risks. The computational cost of the response-excitation method is orders of magnitude smaller than that of more conventional statistical approaches when the PDE is subject to high-dimensional random boundary or initial conditions. The question of high-dimensionality for evolution equations involving multidimensional joint response-excitation PDFs is also addressed.

  19. A statistical method to estimate outflow volume in case of levee breach due to overtopping

    NASA Astrophysics Data System (ADS)

    Brandimarte, Luigia; Martina, Mario; Dottori, Francesco; Mazzoleni, Maurizio

    2015-04-01

    The aim of this study is to propose a statistical method to assess the water volume flowing out through a levee breach caused by overtopping, for three different levels of grass cover quality. The first step in the proposed methodology is the definition of the reliability function, i.e., the relation between the loading and resistance conditions on the levee system, in the case of overtopping. Secondly, the fragility curve, which relates the probability of failure to the loading condition on the levee system, is estimated once the stochastic variables in the reliability function have been defined. Thus, different fragility curves are assessed for different scenarios of grass cover quality. Then, a levee breach model is implemented and combined with a 1D hydrodynamic model in order to assess the outflow hydrograph given the water level in the main channel and stochastic values of the breach width. Finally, the water volume is estimated by combining the probability density function of the breach width with that of levee failure. The case study is located in the 98 km braided reach of the Po River, Italy, between the cross sections of Cremona and Borgoforte. The analysis shows how different countermeasures, in this case different grass cover qualities, can reduce the probability of failure of the levee system. In particular, for a given breach width, good levee cover quality can significantly reduce the outflowing water volume compared with poor cover quality, inducing a consequently lower flood risk within the flood-prone area.
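
    A schematic Monte Carlo version of the chain described above, with every ingredient hypothetical: a logistic fragility curve stands in for the estimated one, and a simple weir-type relation stands in for the coupled breach and 1D hydrodynamic models.

```python
import numpy as np

rng = np.random.default_rng(8)
n_sim = 100_000

# Fragility curve: probability of levee failure by overtopping as a function of the
# water level h (m) for one grass cover quality. The logistic shape is an assumption.
def fragility(h, h50=6.0, steepness=2.5):
    return 1.0 / (1.0 + np.exp(-steepness * (h - h50)))

# Hypothetical stand-in for the coupled breach + 1D hydrodynamic model: outflow
# volume grows with breach width b (m) and with the head above the floodplain.
def outflow_volume(b, h, duration_s=6 * 3600.0, cd=1.7):
    return cd * b * np.maximum(h - 4.0, 0.0) ** 1.5 * duration_s   # m^3

levels = rng.normal(6.2, 0.5, size=n_sim)          # flood water levels (assumed)
widths = rng.lognormal(np.log(40.0), 0.4, n_sim)   # breach widths (assumed)
fails = rng.random(n_sim) < fragility(levels)      # Bernoulli failure per realization

volumes = np.where(fails, outflow_volume(widths, levels), 0.0)
print("P(failure) =", fails.mean())
print("mean outflow volume given failure (m^3):", volumes[fails].mean())
```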

  20. Estimation of typhoon rainfall in GaoPing River: A Multivariate Maximum Entropy Method

    NASA Astrophysics Data System (ADS)

    Pei-Jui, Wu; Hwa-Lung, Yu

    2016-04-01

    Heavy rainfall from typhoons is the main driver of natural disasters in Taiwan, causing significant loss of human life and property. On average, 3.5 typhoons strike Taiwan every year, and Typhoon Morakot in 2009 was among the most serious in recorded history. Because the duration, path, and intensity of a typhoon also affect the temporal and spatial rainfall pattern in a specific region, characterizing typhoon rainfall types is advantageous when estimating rainfall amounts. This study develops a rainfall prediction model in three parts. First, the EEOF (extended empirical orthogonal function) method is used to classify typhoon events, decomposing the standardized rainfall of all stations for each typhoon event into EOFs and PCs (principal components); typhoon events that vary similarly in time and space are thereby grouped into similar typhoon types. Next, according to this classification, the PDF (probability density function) at each location and time is constructed by means of multivariate maximum entropy using the first through fourth statistical moments, which yields the probability at each station and each time. Finally, the BME (Bayesian Maximum Entropy) method is used to construct the typhoon rainfall prediction model and to estimate the rainfall for the case of the GaoPing River, located in southern Taiwan. This study could be useful for future typhoon rainfall prediction and for government typhoon disaster prevention.

  1. A computerized method to estimate friction coefficient from orientation distribution of meso-scale faults

    NASA Astrophysics Data System (ADS)

    Sato, Katsushi

    2016-08-01

    The friction coefficient controls the brittle strength of the Earth's crust for deformation recorded by faults. This study proposes a computerized method to determine the friction coefficient of meso-scale faults. The method is based on the analysis of orientation distribution of faults, and the principal stress axes and the stress ratio calculated by a stress tensor inversion technique. The method assumes that faults are activated according to the cohesionless Coulomb's failure criterion, where the fluctuations of fluid pressure and the magnitude of differential stress are assumed to induce faulting. In this case, the orientation distribution of fault planes is described by a probability density function that is visualized as linear contours on a Mohr diagram. The parametric optimization of the function for an observed fault population yields the friction coefficient. A test using an artificial fault-slip dataset successfully determines the internal friction angle (the arctangent of the friction coefficient) with its confidence interval of several degrees estimated by the bootstrap resampling technique. An application to natural faults cutting a Pleistocene forearc basin fill yields a friction coefficient around 0.7 which is experimentally predicted by the Byerlee's law.
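
    The bootstrap step can be sketched as below, with a placeholder estimator standing in for the orientation-distribution fit; the synthetic data and the placeholder are assumptions, and only the resampling logic reflects the procedure described.

```python
import numpy as np

rng = np.random.default_rng(9)

def estimate_friction(fault_sample):
    """Placeholder for the orientation-distribution optimization described above;
    here it simply returns a synthetic point estimate from the resampled faults."""
    return float(np.mean(fault_sample))

# Synthetic per-fault quantities standing in for the observed fault-slip data.
faults = rng.normal(0.7, 0.15, size=120)

n_boot = 2000
boot = np.array([
    estimate_friction(rng.choice(faults, size=len(faults), replace=True))
    for _ in range(n_boot)
])

mu_hat = estimate_friction(faults)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"friction coefficient ~ {mu_hat:.2f}, 95% bootstrap CI [{lo:.2f}, {hi:.2f}]")
print("internal friction angle (deg):", np.degrees(np.arctan(mu_hat)))
```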

  2. Cost estimating methods for advanced space systems

    NASA Technical Reports Server (NTRS)

    Cyr, Kelley

    1988-01-01

    Parametric cost estimating methods for space systems in the conceptual design phase are developed. The approach is to identify variables that drive cost such as weight, quantity, development culture, design inheritance, and time. The relationship between weight and cost is examined in detail. A theoretical model of cost is developed and tested statistically against a historical data base of major research and development programs. It is concluded that the technique presented is sound, but that it must be refined in order to produce acceptable cost estimates.

  3. Implicit solvent methods for free energy estimation

    PubMed Central

    Decherchi, Sergio; Masetti, Matteo; Vyalov, Ivan; Rocchia, Walter

    2014-01-01

    Solvation is a fundamental contribution in many biological processes and especially in molecular binding. Its estimation can be performed by means of several computational approaches. The aim of this review is to give an overview of existing theories and methods to estimate solvent effects giving a specific focus on the category of implicit solvent models and their use in Molecular Dynamics. In many of these models, the solvent is considered as a continuum homogenous medium, while the solute can be represented at the atomic detail and at different levels of theory. Despite their degree of approximation, implicit methods are still widely employed due to their trade-off between accuracy and efficiency. Their derivation is rooted in the statistical mechanics and integral equations disciplines, some of the related details being provided here. Finally, methods that combine implicit solvent models and molecular dynamics simulation, are briefly described. PMID:25193298

  4. Fusing probability density function into Dempster-Shafer theory of evidence for the evaluation of water treatment plant.

    PubMed

    Chowdhury, Shakhawat

    2013-05-01

    The evaluation of the status of a municipal drinking water treatment plant (WTP) is important. The evaluation depends on several factors, including human health risks from disinfection by-products (R), disinfection performance (D), and cost (C) of water production and distribution. The Dempster-Shafer theory (DST) of evidence can combine the individual status with respect to R, D, and C to generate a new indicator, from which the overall status of a WTP can be evaluated. In the DST, the ranges of different factors affecting the overall status are divided into several segments. The basic probability assignments (BPA) for each segment of these factors are provided by multiple experts, which are then combined to obtain the overall status. In assigning the BPA, the experts use their individual judgments, which can impart subjective biases in the overall evaluation. In this research, an approach has been introduced to avoid the assignment of subjective BPA. The factors contributing to the overall status were characterized using probability density functions (PDF). The cumulative probabilities for different segments of these factors were determined from the cumulative distribution function and then assigned as the BPA for these factors. A case study is presented to demonstrate the application of PDF in DST to evaluate a WTP, leading to the selection of the required level of upgrading for the WTP. PMID:22941202
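
    A minimal sketch of the two steps described above, under assumed distributions and segment boundaries: basic probability assignments over three status levels are taken from each factor's cumulative distribution, and the factors are then combined with Dempster's rule over singleton focal elements.

```python
from scipy import stats

STATUS = ("good", "fair", "poor")

def bpa_from_cdf(dist, cuts):
    """Basic probability assignment for three ordered segments of a factor,
    taken as the probability mass of the factor's distribution in each segment."""
    c1, c2 = dist.cdf(cuts[0]), dist.cdf(cuts[1])
    return {"good": c1, "fair": c2 - c1, "poor": 1.0 - c2}

def dempster_combine(m1, m2):
    """Dempster's rule of combination for BPAs whose focal elements are the
    singleton statuses: multiply agreeing masses and renormalize by 1 - conflict."""
    conflict = sum(m1[a] * m2[b] for a in STATUS for b in STATUS if a != b)
    return {a: m1[a] * m2[a] / (1.0 - conflict) for a in STATUS}

# Hypothetical factor distributions and segment boundaries (lower is better):
# R = health-risk index, D = disinfection-performance shortfall, C = relative cost.
m_R = bpa_from_cdf(stats.lognorm(s=0.4, scale=1.0), cuts=(0.8, 1.5))
m_D = bpa_from_cdf(stats.norm(loc=1.0, scale=0.3), cuts=(0.9, 1.3))
m_C = bpa_from_cdf(stats.gamma(a=3.0, scale=0.4), cuts=(1.0, 1.8))

overall = dempster_combine(dempster_combine(m_R, m_D), m_C)
print("combined status BPA:", {k: round(v, 3) for k, v in overall.items()})
```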

  5. Unit-Sphere Anisotropic Multiaxial Stochastic-Strength Model Probability Density Distribution for the Orientation of Critical Flaws

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel

    2013-01-01

    Models that predict the failure probability of monolithic glass and ceramic components under multiaxial loading have been developed by authors such as Batdorf, Evans, and Matsuo. These "unit-sphere" failure models assume that the strength-controlling flaws are randomly oriented, noninteracting planar microcracks of specified geometry but of variable size. This report develops a formulation to describe the probability density distribution of the orientation of critical strength-controlling flaws that results from an applied load. This distribution is a function of the multiaxial stress state, the shear sensitivity of the flaws, the Weibull modulus, and the strength anisotropy. Examples are provided showing the predicted response on the unit sphere for various stress states for isotropic and transversely isotropic (anisotropic) materials, including the most probable orientation of critical flaws for offset uniaxial loads with strength anisotropy. The author anticipates that this information could be used to determine anisotropic stiffness degradation or anisotropic damage evolution for individual brittle (or quasi-brittle) composite material constituents within finite element or micromechanics-based software.

  6. A method for estimating soil moisture availability

    NASA Technical Reports Server (NTRS)

    Carlson, T. N.

    1985-01-01

    A method for estimating values of soil moisture based on measurements of infrared surface temperature is discussed. A central element in the method is a boundary layer model. Although it has been shown that soil moistures determined by this method using satellite measurements do correspond in a coarse fashion to the antecedent precipitation, the accuracy and exact physical interpretation (with respect to ground water amounts) are not well known. This area of ignorance, which currently impedes the practical application of the method to problems in hydrology, meteorology and agriculture, is largely due to the absence of corresponding surface measurements. Preliminary field measurements made over France have led to the development of a promising vegetation formulation (Taconet et al., 1985), which has been incorporated in the model. It is necessary, however, to test the vegetation component, and the entire method, over a wide variety of surface conditions and crop canopies.

  7. Comparative yield estimation via shock hydrodynamic methods

    SciTech Connect

    Attia, A.V.; Moran, B.; Glenn, L.A.

    1991-06-01

    Shock time-of-arrival (TOA) data (CORRTEX) from recent underground nuclear explosions in saturated tuff were used to estimate yield via the simulated explosion-scaling method. The sensitivity of the derived yield to uncertainties in the measured shock Hugoniot, release adiabats, and gas porosity is the main focus of this paper. In this method for determining yield, we assume a point-source explosion in an infinite homogeneous material. The rock model is constructed from laboratory experiments on core samples taken prior to the explosion. Results show that increasing gas porosity from 0% to 2% causes a 15% increase in yield per ms/kt^(1/3). 6 refs., 4 figs.

  8. On methods of estimating cosmological bulk flows

    NASA Astrophysics Data System (ADS)

    Nusser, Adi

    2016-01-01

    We explore similarities and differences between several estimators of the cosmological bulk flow, B, from the observed radial peculiar velocities of galaxies. A distinction is made between two theoretical definitions of B as a dipole moment of the velocity field weighted by a radial window function. One definition involves the three-dimensional (3D) peculiar velocity, while the other is based on its radial component alone. Different methods attempt to infer B for one or the other of these definitions, which coincide only for a velocity field that is constant in space. We focus on the Wiener Filtering (WF) and the Constrained Minimum Variance (CMV) methodologies. Both methodologies require a prior expressed in terms of the radial velocity correlation function. Hoffman et al. compute B in top-hat windows from a WF realization of the 3D peculiar velocity field. Feldman et al. infer B directly from the observed velocities for the second definition of B. The WF methodology could easily be adapted to the second definition, in which case it would be equivalent to the CMV with the exception of the imposed constraint. For a prior with vanishing correlations or very noisy data, CMV reproduces the standard maximum likelihood estimate of B for the entire sample, independent of the radial weighting function. Therefore, this estimator is likely more susceptible to observational biases that could be present in measurements of distant galaxies. Finally, two additional estimators are proposed.

  9. Cost estimating methods for advanced space systems

    NASA Technical Reports Server (NTRS)

    Cyr, Kelley

    1994-01-01

    NASA is responsible for developing much of the nation's future space technology. Cost estimates for new programs are required early in the planning process so that decisions can be made accurately. Because of the long lead times required to develop space hardware, the cost estimates are frequently required 10 to 15 years before the program delivers hardware. The system design in the conceptual phases of a program is usually only vaguely defined, and the technology used is often state-of-the-art or beyond. These factors combine to make cost estimating for conceptual programs very challenging. This paper describes an effort to develop parametric cost estimating methods for space systems in the conceptual design phase. The approach is to identify variables that drive cost, such as weight, quantity, development culture, design inheritance, and time. The nature of the relationships between the driver variables and cost will be discussed. In particular, the relationship between weight and cost will be examined in detail. A theoretical model of cost will be developed and tested statistically against a historical database of major research and development projects.
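
    A toy illustration of a weight-driven parametric cost estimating relationship of the kind described here, fitted as a power law by least squares in log space; the weight/cost pairs are invented for the example, whereas the actual work regresses against a historical database of research and development programs.

    ```python
    import numpy as np

    # Hypothetical (weight in kg, development cost in $M) pairs for past programs.
    weight = np.array([120.0, 450.0, 800.0, 1500.0, 3200.0])
    cost = np.array([35.0, 90.0, 140.0, 260.0, 480.0])

    # Fit cost = a * weight**b  <=>  log(cost) = log(a) + b*log(weight).
    b, log_a = np.polyfit(np.log(weight), np.log(cost), 1)
    a = np.exp(log_a)

    def estimate_cost(w_kg):
        """Parametric cost estimating relationship (CER) driven by weight alone."""
        return a * w_kg ** b

    print(f"cost ~ {a:.2f} * weight^{b:.2f};  2000 kg -> {estimate_cost(2000):.0f} $M")
    ```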

  10. Probability density function treatment of turbulence/chemistry interactions during the ignition of a temperature-stratified mixture for application to HCCI engine modeling

    SciTech Connect

    Bisetti, Fabrizio; Chen, J.-Y.; Hawkes, Evatt R.; Chen, Jacqueline H.

    2008-12-15

    Homogeneous charge compression ignition (HCCI) engine technology promises to reduce NOx and soot emissions while achieving high thermal efficiency. Temperature and mixture stratification are regarded as effective means of controlling the start of combustion and reducing the abrupt pressure rise at high loads. Probability density function methods are currently being pursued as a viable approach to modeling the effects of turbulent mixing and mixture stratification on HCCI ignition. In this paper we present an assessment of the merits of three widely used mixing models in reproducing the moments of reactive scalars during the ignition of a lean hydrogen/air mixture (φ = 0.1, p = 41 atm, and T = 1070 K) under increasing temperature stratification and subject to decaying turbulence. The results from the solution of the evolution equation for a spatially homogeneous joint PDF of the reactive scalars are compared with available direct numerical simulation (DNS) data [E.R. Hawkes, R. Sankaran, P.P. Pebay, J.H. Chen, Combust. Flame 145 (1-2) (2006) 145-159]. The mixing models are found to be able to quantitatively reproduce the time history of the heat release rate, the first and second moments of temperature, and the hydroxyl radical mass fraction from the DNS results. Most importantly, the dependence of the heat release rate on the extent of the initial temperature stratification in the charge is also well captured. (author)
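
    The abstract does not name the three mixing models compared, so the sketch below only illustrates one common choice in transported-PDF methods, the interaction-by-exchange-with-the-mean (IEM) model, applied to a spatially homogeneous particle ensemble; the mixing frequency, model constant, and initial stratification are illustrative values.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Notional particle ensemble representing the PDF of one scalar
    # (e.g., temperature) in a spatially homogeneous reactor.
    n_particles = 10_000
    phi = rng.normal(loc=1070.0, scale=15.0, size=n_particles)  # initial stratification

    omega = 200.0        # mixing frequency [1/s], illustrative
    c_phi = 2.0          # standard IEM model constant
    dt = 1.0e-5          # time step [s]

    def iem_step(phi, dt, omega, c_phi):
        """One IEM (interaction by exchange with the mean) mixing step:
        each particle relaxes toward the ensemble mean, which drives the
        scalar variance toward zero while leaving the mean unchanged."""
        return phi - 0.5 * c_phi * omega * (phi - phi.mean()) * dt

    for _ in range(1000):
        phi = iem_step(phi, dt, omega, c_phi)

    print(f"mean = {phi.mean():.1f} K, std = {phi.std():.3f} K")  # variance decays, mean preserved
    ```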

  11. Probability densities for the sums of iterates of the sine-circle map in the vicinity of the quasiperiodic edge of chaos

    NASA Astrophysics Data System (ADS)

    Afsar, Ozgur; Tirnakli, Ugur

    2010-10-01

    We investigate the probability density of the rescaled sums of iterates of the sine-circle map within the quasiperiodic route to chaos. When the dynamical system is strongly mixing (i.e., ergodic), the standard central limit theorem (CLT) is expected to be valid, but at the edge of chaos, where iterates have strong correlations, the standard CLT is not necessarily valid anymore. We discuss here the main characteristics of the probability densities for the sums of iterates of deterministic dynamical systems which exhibit a quasiperiodic route to chaos. At the golden-mean onset of chaos for the sine-circle map, we numerically verify that the probability density appears to converge to a q-Gaussian with q < 1 as the golden mean value is approached.
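
    A minimal numerical sketch of the quantity studied: iterate the sine-circle map, form sums over many initial conditions, and histogram the centered, rescaled sums; the parameter Ω ≈ 0.606661 is used here as an approximation to the value giving a golden-mean winding number at K = 1, and the iteration counts are far smaller than those needed to resolve the q-Gaussian shape reported in the paper.

    ```python
    import numpy as np

    def sine_circle_map(theta, K=1.0, Omega=0.606661):
        """One iterate of the sine-circle map (mod 1). Omega approximates the
        value giving a golden-mean winding number at K = 1 (assumed, not exact)."""
        return (theta + Omega - (K / (2.0 * np.pi)) * np.sin(2.0 * np.pi * theta)) % 1.0

    rng = np.random.default_rng(2)
    n_ic, n_iter = 1000, 2048          # initial conditions and iterates per sum

    sums = np.empty(n_ic)
    for j in range(n_ic):
        theta = rng.random()
        total = 0.0
        for _ in range(n_iter):
            theta = sine_circle_map(theta)
            total += theta
        sums[j] = total

    # Center and rescale the sums; their histogram approximates the probability
    # density whose shape is compared against a q-Gaussian in the study.
    y = (sums - sums.mean()) / sums.std()
    hist, edges = np.histogram(y, bins=60, density=True)
    print(hist.max(), y.std())
    ```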

  12. An analytical method of estimating turbine performance

    NASA Technical Reports Server (NTRS)

    Kochendorfer, Fred D; Nettles, J Cary

    1949-01-01

    A method is developed by which the performance of a turbine over a range of operating conditions can be analytically estimated from the blade angles and flow areas. In order to use the method, certain coefficients that determine the weight flow and the friction losses must be approximated. The method is used to calculate the performance of the single-stage turbine of a commercial aircraft gas-turbine engine, and the calculated performance is compared with the performance indicated by experimental data. For the turbine of the typical example, the assumed pressure losses and turning angles give a calculated performance that represents the trends of the experimental performance with reasonable accuracy. The exact agreement between analytical performance and experimental performance is contingent upon the proper selection of a blading-loss parameter.

  13. An Analytical Method of Estimating Turbine Performance

    NASA Technical Reports Server (NTRS)

    Kochendorfer, Fred D; Nettles, J Cary

    1948-01-01

    A method is developed by which the performance of a turbine over a range of operating conditions can be analytically estimated from the blade angles and flow areas. In order to use the method, certain coefficients that determine the weight flow and friction losses must be approximated. The method is used to calculate the performance of the single-stage turbine of a commercial aircraft gas-turbine engine and the calculated performance is compared with the performance indicated by experimental data. For the turbine of the typical example, the assumed pressure losses and turning angles give a calculated performance that represents the trends of the experimental performance with reasonable accuracy. The exact agreement between analytical performance and experimental performance is contingent upon the proper selection of the blading-loss parameter. A variation of blading-loss parameter from 0.3 to 0.5 includes most of the experimental data from the turbine investigated.

  14. Method for estimation of protein isoelectric point.

    PubMed

    Pihlasalo, Sari; Auranen, Laura; Hänninen, Pekka; Härmä, Harri

    2012-10-01

    Adsorption of sample protein to Eu(3+) chelate-labeled nanoparticles is the basis of the developed noncompetitive and homogeneous method for the estimation of the protein isoelectric point (pI). The lanthanide ion of the nanoparticle surface-conjugated Eu(3+) chelate is dissociated at a low pH, therefore decreasing the luminescence signal. A nanoparticle-adsorbed sample protein prevents the dissociation of the chelate, leading to a high luminescence signal. The adsorption efficiency of the sample protein is reduced above the isoelectric point due to the decreased electrostatic attraction between the negatively charged protein and the negatively charged particle. Four proteins with isoelectric points ranging from ~5 to 9 were tested to show the performance of the method. These pI values measured with the developed method were close to the theoretical and experimental literature values. The method is sensitive and requires a low analyte concentration of submilligrams per liter, which is nearly 10000 times lower than the concentration required for the traditional isoelectric focusing. Moreover, the method is significantly faster and simpler than the existing methods, as a ready-to-go assay was prepared for the microtiter plate format. This mix-and-measure concept is a highly attractive alternative for routine laboratory work. PMID:22946671

  15. On the use of the noncentral chi-square density function for the distribution of helicopter spectral estimates

    NASA Technical Reports Server (NTRS)

    Garber, Donald P.

    1993-01-01

    A probability density function for the variability of ensemble averaged spectral estimates from helicopter acoustic signals in Gaussian background noise was evaluated. Numerical methods for calculating the density function and for determining confidence limits were explored. Density functions were predicted for both synthesized and experimental data and compared with observed spectral estimate variability.
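
    A small sketch of how such a density and its confidence limits can be evaluated with SciPy's noncentral chi-square distribution; the degrees of freedom (2K for K averaged periodogram bins) and the noncentrality set by an assumed tone-to-noise ratio are common modeling choices, not necessarily the paper's exact formulation.

    ```python
    import numpy as np
    from scipy.stats import ncx2

    K = 16                    # number of ensemble-averaged spectral estimates (assumed)
    df = 2 * K                # degrees of freedom for K averaged periodogram bins
    snr_per_bin = 3.0         # tone-to-noise power ratio in the bin (assumed)
    nc = 2 * K * snr_per_bin  # noncentrality parameter

    # Density of the (scaled) averaged spectral estimate.
    x = np.linspace(ncx2.ppf(0.001, df, nc), ncx2.ppf(0.999, df, nc), 400)
    pdf = ncx2.pdf(x, df, nc)

    # 95% confidence limits on the spectral-estimate variability.
    lower, upper = ncx2.ppf([0.025, 0.975], df, nc)
    print(f"95% interval: [{lower:.1f}, {upper:.1f}] (in units of noise power per bin)")
    ```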

  16. A Novel Method for Estimating Linkage Maps

    PubMed Central

    Tan, Yuan-De; Fu, Yun-Xin

    2006-01-01

    The goal of linkage mapping is to find the true order of loci on a chromosome. Since the number of possible orders is large even for a modest number of loci, the problem of finding the optimal solution is known as an NP-hard problem or traveling salesman problem (TSP). Although a number of algorithms are available, many either are low in the accuracy of recovering the true order of loci or require tremendous amounts of computational resources, thus making them difficult to use for reconstructing a large-scale map. We developed in this article a novel method called unidirectional growth (UG) to help solve this problem. The UG algorithm sequentially constructs the linkage map on the basis of novel results about additive distance. It not only is fast but also has a very high accuracy in recovering the true order of loci according to our simulation studies. Since the UG method requires n − 1 cycles to estimate the ordering of n loci, it is particularly useful for estimating linkage maps consisting of hundreds or even thousands of linked codominant loci on a chromosome. PMID:16783016

  17. Large Eddy Simulation/Probability Density Function Modeling of a Turbulent CH4/H2/N2 Jet Flame

    SciTech Connect

    Wang, Haifeng; Pope, Stephen B.

    2011-01-01

    In this work, we develop the large-eddy simulation (LES)/probability density function (PDF) simulation capability for turbulent combustion and apply it to a turbulent CH4/H2/N2 jet flame (DLR Flame A). The PDF code is verified to be second-order accurate with respect to the time-step size and the grid size in a manufactured one-dimensional test case. Three grids (64×64×16, 192×192×48, and 320×320×80) are used in the simulations of DLR Flame A to examine the effect of the grid resolution. The numerical solutions of the resolved mixture fraction, the mixture fraction squared, and the density are duplicated in the LES code and the PDF code to explore the numerical consistency between them. A single laminar flamelet profile is used to reduce the computational cost of treating the chemical reactions of the particles. The sensitivity of the LES results to the time-step size is explored. Both first- and second-order time splitting schemes are used for integrating the stochastic differential equations for the particles, and these are compared in the jet flame simulations. The numerical results are found to be sensitive to the grid resolution, and the 192×192×48 grid is adequate to capture the main flow fields of interest for this study. The numerical consistency between LES and PDF is confirmed by the small difference between their numerical predictions. Overall good agreement between the LES/PDF predictions and the experimental data is observed for the resolved flow fields and the composition fields, including for the mass fractions of the minor species and NO. The LES results are found to be insensitive to the time-step size for this particular flame. The first-order splitting scheme performs as well as the second-order splitting scheme in predicting the resolved mean and rms mixture fraction and the density for this flame.

  18. Momentum Probabilities for a Single Quantum Particle in Three-Dimensional Regular "Infinite" Wells: One Way of Promoting Understanding of Probability Densities

    ERIC Educational Resources Information Center

    Riggs, Peter J.

    2013-01-01

    Students often wrestle unsuccessfully with the task of correctly calculating momentum probability densities and have difficulty in understanding their interpretation. In the case of a particle in an "infinite" potential well, its momentum can take values that are not just those corresponding to the particle's quantised energies but…

  19. Demographic estimation methods for plants with dormancy

    USGS Publications Warehouse

    Kery, M.; Gregg, K.B.

    2004-01-01

    Demographic studies in plants appear simple because, unlike animals, plants do not run away. Plant individuals can be marked with, e.g., plastic tags, but often the coordinates of an individual may be sufficient to identify it. Vascular plants in temperate latitudes have a pronounced seasonal life-cycle, so most plant demographers survey their study plots once a year, often during or shortly after flowering. Life-states are pervasive in plants, hence the results of a demographic study for an individual can be summarized in a familiar encounter history, such as OVFVVF000. A zero means that an individual was not seen in a year, and a letter denotes its state for years when it was seen aboveground; V and F here stand for the vegetative and flowering states, respectively. Probabilities of survival and state transitions can then be obtained by mere counting. Problems arise when there is an unobservable dormant state, i.e., when plants may stay belowground for one or more growing seasons. Encounter histories such as OVFOOF000 may then occur, where the meaning of zeroes becomes ambiguous. A zero can mean either a dead or a dormant plant. Various ad hoc methods in wide use among plant ecologists have made strong assumptions about when a zero should be equated to a dormant individual. These methods have never been compared among each other. In our talk and in Kery et al. (submitted), we show that these ad hoc estimators provide spurious estimates of survival and should not be used. In contrast, if detection probabilities for aboveground plants are known or can be estimated, capture-recapture (CR) models can be used to estimate probabilities of survival and state-transitions and the fraction of the population that is dormant. We have used this approach in two studies of terrestrial orchids, Cleistes bifaria (Kery et al., submitted) and Cypripedium reginae (Kery & Gregg, submitted) in West Virginia, U.S.A. For Cleistes, our data comprised one population with a total of 620 marked

  20. An estimation method of the direct benefit of a waterlogging control project applicable to the changing environment

    NASA Astrophysics Data System (ADS)

    Zengmei, L.; Guanghua, Q.; Zishen, C.

    2015-05-01

    The direct benefit of a waterlogging control project is reflected in the reduction or avoidance of waterlogging loss. Before and after the construction of a waterlogging control project, the disaster-inducing environment in the waterlogging-prone zone is generally different. In addition, the category, quantity and spatial distribution of the disaster-bearing bodies are also changed to some extent. Therefore, under the changing environment, the direct benefit of a waterlogging control project should be the reduction of waterlogging losses compared to conditions with no control project. Moreover, the waterlogging losses with and without the project should be the mathematical expectations of the waterlogging losses when rainstorms of all frequencies meet various water levels in the drainage-accepting zone. An estimation model of the direct benefit of waterlogging control is therefore proposed. Firstly, on the basis of a copula function, the joint distribution of the rainstorms and the water levels is established, so as to obtain their joint probability density function. Secondly, according to the two-dimensional joint probability density distribution, the two-dimensional domain of integration is determined and divided into small domains; for each small domain, the model calculates its probability and the difference between the average waterlogging losses with and without the waterlogging control project (called the regional benefit of the waterlogging control project), conditional on rainstorms in the waterlogging-prone zone meeting the water level in the drainage-accepting zone. Finally, the weighted mean of the regional benefits of all small domains, with the probabilities as weights, gives the benefit of the waterlogging control project. Taking the estimation of the benefit of a waterlogging control project in Yangshan County, Guangdong Province, as an example, the paper briefly explains the procedures in waterlogging control project benefit estimation. The
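
    A compact numerical sketch of the computation described above: couple the rainstorm and water-level marginals with a Gaussian copula, partition the plane into small cells, and accumulate the probability-weighted loss reduction. The marginal distributions, copula correlation, and loss functions are all invented placeholders; the study fits these elements to data for Yangshan County.

    ```python
    import numpy as np
    from scipy.stats import gumbel_r, norm, multivariate_normal

    # Placeholder marginals: rainstorm depth X (mm) and receiving-water level Y (m).
    FX = gumbel_r(loc=80.0, scale=25.0)
    FY = gumbel_r(loc=2.0, scale=0.5)
    rho = 0.6                                    # assumed Gaussian-copula correlation
    biv = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])

    def copula_cdf(u, v):
        """Gaussian copula CDF C(u, v)."""
        return biv.cdf([norm.ppf(u), norm.ppf(v)])

    # Placeholder loss functions (million yuan) with and without the project.
    def loss_without(x, y):
        return max(0.0, 0.05 * (x - 60.0)) * (1.0 + 0.3 * max(0.0, y - 2.0))

    def loss_with(x, y):
        return 0.4 * loss_without(x, y)          # project assumed to cut losses by 60%

    # Partition the (x, y) plane into small cells and accumulate the
    # probability-weighted loss reduction: the expected direct benefit.
    xs = np.linspace(40.0, 200.0, 41)
    ys = np.linspace(1.0, 4.5, 36)
    benefit = 0.0
    for x0, x1 in zip(xs[:-1], xs[1:]):
        for y0, y1 in zip(ys[:-1], ys[1:]):
            u0, u1 = FX.cdf(x0), FX.cdf(x1)
            v0, v1 = FY.cdf(y0), FY.cdf(y1)
            p_cell = (copula_cdf(u1, v1) - copula_cdf(u0, v1)
                      - copula_cdf(u1, v0) + copula_cdf(u0, v0))
            xm, ym = 0.5 * (x0 + x1), 0.5 * (y0 + y1)
            benefit += p_cell * (loss_without(xm, ym) - loss_with(xm, ym))

    print(f"expected direct benefit ~ {benefit:.2f} million yuan")
    ```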

  1. Bayes method for low rank tensor estimation

    NASA Astrophysics Data System (ADS)

    Suzuki, Taiji; Kanagawa, Heishiro

    2016-03-01

    We investigate the statistical convergence rate of a Bayesian low-rank tensor estimator, and construct a Bayesian nonlinear tensor estimator. The problem setting is the regression problem where the regression coefficient forms a tensor structure. This problem setting occurs in many practical applications, such as collaborative filtering, multi-task learning, and spatio-temporal data analysis. The convergence rate of the Bayes tensor estimator is analyzed in terms of both in-sample and out-of-sample predictive accuracies. It is shown that a fast learning rate is achieved without any strong convexity of the observation. Moreover, we extend the tensor estimator to a nonlinear function estimator so that we estimate a function that is a tensor product of several functions.

  2. Comparisons of Four Methods for Estimating a Dynamic Factor Model

    ERIC Educational Resources Information Center

    Zhang, Zhiyong; Hamaker, Ellen L.; Nesselroade, John R.

    2008-01-01

    Four methods for estimating a dynamic factor model, the direct autoregressive factor score (DAFS) model, are evaluated and compared. The first method estimates the DAFS model using a Kalman filter algorithm based on its state space model representation. The second one employs the maximum likelihood estimation method based on the construction of a…

  3. The MIRD method of estimating absorbed dose

    SciTech Connect

    Weber, D.A.

    1991-01-01

    The estimate of absorbed radiation dose from internal emitters provides the information required to assess the radiation risk associated with the administration of radiopharmaceuticals for medical applications. The MIRD (Medical Internal Radiation Dose) system of dose calculation provides a systematic approach to combining the biologic distribution data and clearance data of radiopharmaceuticals and the physical properties of radionuclides to obtain dose estimates. This tutorial presents a review of the MIRD schema, the derivation of the equations used to calculate absorbed dose, and shows how the MIRD schema can be applied to estimate dose from radiopharmaceuticals used in nuclear medicine.
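
    The core of the MIRD schema is a sum, over source organs, of the cumulated activity multiplied by an S value (mean absorbed dose to the target per unit cumulated activity in the source). A minimal sketch with made-up numbers follows; real cumulated activities come from biokinetic data and real S values from MIRD pamphlets or dosimetry software, so the values below are placeholders only.

    ```python
    # Cumulated activities (time-integrated activity) in each source organ (MBq*h),
    # hypothetical values for illustration.
    cumulated_activity = {"liver": 120.0, "kidneys": 45.0, "remainder": 300.0}

    # S values: mean absorbed dose to the target per unit cumulated activity in
    # each source organ (mGy per MBq*h). Placeholder numbers, not MIRD data.
    s_value_to_target = {"liver": 3.2e-3, "kidneys": 1.1e-3, "remainder": 4.0e-4}

    def mird_dose(target_s_values, a_tilde):
        """MIRD schema: D(target) = sum over sources of A_tilde(source) * S(target <- source)."""
        return sum(a_tilde[src] * target_s_values[src] for src in a_tilde)

    print(f"absorbed dose to target ~ {mird_dose(s_value_to_target, cumulated_activity):.3f} mGy")
    ```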

  4. Statistically advanced, self-similar, radial probability density functions of atmospheric and under-expanded hydrogen jets

    NASA Astrophysics Data System (ADS)

    Ruggles, Adam J.

    2015-11-01

    This paper presents improved statistical insight regarding the self-similar scalar mixing process of atmospheric hydrogen jets and the downstream region of under-expanded hydrogen jets. Quantitative planar laser Rayleigh scattering imaging is used to probe both jets. The self-similarity of statistical moments up to the sixth order (beyond the literature established second order) is documented in both cases. This is achieved using a novel self-similar normalization method that facilitated a degree of statistical convergence that is typically limited to continuous, point-based measurements. This demonstrates that image-based measurements of a limited number of samples can be used for self-similar scalar mixing studies. Both jets exhibit the same radial trends of these moments demonstrating that advanced atmospheric self-similarity can be applied in the analysis of under-expanded jets. Self-similar histograms away from the centerline are shown to be the combination of two distributions. The first is attributed to turbulent mixing. The second, a symmetric Poisson-type distribution centered on zero mass fraction, progressively becomes the dominant and eventually sole distribution at the edge of the jet. This distribution is attributed to shot noise-affected pure air measurements, rather than a diffusive superlayer at the jet boundary. This conclusion is reached after a rigorous measurement uncertainty analysis and inspection of pure air data collected with each hydrogen data set. A threshold based upon the measurement noise analysis is used to separate the turbulent and pure air data, and thusly estimate intermittency. Beta-distributions (four parameters) are used to accurately represent the turbulent distribution moments. This combination of measured intermittency and four-parameter beta-distributions constitutes a new, simple approach to model scalar mixing. Comparisons between global moments from the data and moments calculated using the proposed model show excellent

  5. Statistical methods of estimating mining costs

    USGS Publications Warehouse

    Long, K.R.

    2011-01-01

    Until it was defunded in 1995, the U.S. Bureau of Mines maintained a Cost Estimating System (CES) for prefeasibility-type economic evaluations of mineral deposits and estimating costs at producing and non-producing mines. This system had a significant role in mineral resource assessments to estimate costs of developing and operating known mineral deposits and predicted undiscovered deposits. For legal reasons, the U.S. Geological Survey cannot update and maintain CES. Instead, statistical tools are under development to estimate mining costs from basic properties of mineral deposits such as tonnage, grade, mineralogy, depth, strip ratio, distance from infrastructure, rock strength, and work index. The first step was to reestimate "Taylor's Rule" which relates operating rate to available ore tonnage. The second step was to estimate statistical models of capital and operating costs for open pit porphyry copper mines with flotation concentrators. For a sample of 27 proposed porphyry copper projects, capital costs can be estimated from three variables: mineral processing rate, strip ratio, and distance from nearest railroad before mine construction began. Of all the variables tested, operating costs were found to be significantly correlated only with strip ratio.

  6. A Study of Variance Estimation Methods. Working Paper Series.

    ERIC Educational Resources Information Center

    Zhang, Fan; Weng, Stanley; Salvucci, Sameena; Hu, Ming-xiu

    This working paper contains reports of five studies of variance estimation methods. The first, An Empirical Study of Poststratified Estimator, by Fan Zhang uses data from the National Household Education Survey to illustrate use of poststratified estimation. The second paper, BRR Variance Estimation Using BPLX Hadamard Procedure, by Stanley Weng…

  7. Nutrient Estimation Using Subsurface Sensing Methods

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This report investigates the use of precision management techniques for measuring soil conductivity on feedlot surfaces to estimate nutrient value for crop production. An electromagnetic induction soil conductivity meter was used to collect apparent soil electrical conductivity (ECa) from feedlot p...

  8. Estimation Methods for One-Parameter Testlet Models

    ERIC Educational Resources Information Center

    Jiao, Hong; Wang, Shudong; He, Wei

    2013-01-01

    This study demonstrated the equivalence between the Rasch testlet model and the three-level one-parameter testlet model and explored the Markov Chain Monte Carlo (MCMC) method for model parameter estimation in WINBUGS. The estimation accuracy from the MCMC method was compared with those from the marginalized maximum likelihood estimation (MMLE)…

  9. A Method to Estimate the Probability that any Individual Cloud-to-Ground Lightning Stroke was Within any Radius of any Point

    NASA Technical Reports Server (NTRS)

    Huddleston, Lisa L.; Roeder, William P.; Merceret, Francis J.

    2011-01-01

    A new technique has been developed to estimate the probability that a nearby cloud-to-ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force Station. Future applications could include forensic meteorology.
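
    The integral described above can be approximated in a few lines by Monte Carlo sampling of the bivariate Gaussian defined by the location error ellipse; the ellipse parameters, point of interest, and radius below are invented, and the operational implementation referenced in the abstract uses an integration adapted from spacecraft debris-collision work rather than Monte Carlo.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Hypothetical lightning location error ellipse: most likely stroke location,
    # semi-major/semi-minor standard deviations (km) and orientation (deg).
    mu = np.array([2.0, -1.5])
    sigma_major, sigma_minor, theta_deg = 0.8, 0.4, 30.0

    # Build the covariance matrix from the ellipse parameters.
    t = np.radians(theta_deg)
    R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    cov = R @ np.diag([sigma_major**2, sigma_minor**2]) @ R.T

    # Point of interest (e.g., a launch pad) and radius of concern (km).
    point = np.array([0.0, 0.0])
    radius = 3.0

    # Monte Carlo: fraction of samples from the error distribution falling
    # inside the disk around the point of interest.
    samples = rng.multivariate_normal(mu, cov, size=1_000_000)
    inside = np.linalg.norm(samples - point, axis=1) <= radius
    print(f"P(stroke within {radius} km) ~ {inside.mean():.4f}")
    ```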

  10. A Method to Estimate the Probability That Any Individual Cloud-to-Ground Lightning Stroke Was Within Any Radius of Any Point

    NASA Technical Reports Server (NTRS)

    Huddleston, Lisa L.; Roeder, William P.; Merceret, Francis J.

    2010-01-01

    A new technique has been developed to estimate the probability that a nearby cloud-to-ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force station.

  11. Development of advanced acreage estimation methods

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr. (Principal Investigator)

    1980-01-01

    The use of the AMOEBA clustering/classification algorithm was investigated as a basis for both a color display generation technique and maximum likelihood proportion estimation procedure. An approach to analyzing large data reduction systems was formulated and an exploratory empirical study of spatial correlation in LANDSAT data was also carried out. Topics addressed include: (1) development of multiimage color images; (2) spectral spatial classification algorithm development; (3) spatial correlation studies; and (4) evaluation of data systems.

  12. Estimation of vegetation cover at subpixel resolution using LANDSAT data

    NASA Technical Reports Server (NTRS)

    Jasinski, Michael F.; Eagleson, Peter S.

    1986-01-01

    The present report summarizes the various approaches relevant to estimating canopy cover at subpixel resolution. The approaches are based on physical models of radiative transfer in non-homogeneous canopies and on empirical methods. The effects of vegetation shadows and topography are examined. Simple versions of the model are tested, using the Taos, New Mexico Study Area database. Emphasis has been placed on using relatively simple models requiring only one or two bands. Although most methods require some degree of ground truth, a two-band method is investigated whereby the percent cover can be estimated without ground truth by examining the limits of the data space. Future work is proposed which will incorporate additional surface parameters into the canopy cover algorithm, such as topography, leaf area, or shadows. The method involves deriving a probability density function for the percent canopy cover based on the joint probability density function of the observed radiances.

  13. An investigation of new methods for estimating parameter sensitivities

    NASA Technical Reports Server (NTRS)

    Beltracchi, Todd J.; Gabriele, Gary A.

    1989-01-01

    The method proposed for estimating sensitivity derivatives is based on the Recursive Quadratic Programming (RQP) method used in conjunction with a differencing formula to produce estimates of the sensitivities. This method is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFS method employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate the parameter sensitivities.

  14. A quasi-Newton approach to optimization problems with probability density constraints. [problem solving in mathematical programming

    NASA Technical Reports Server (NTRS)

    Tapia, R. A.; Vanrooy, D. L.

    1976-01-01

    A quasi-Newton method is presented for minimizing a nonlinear function while constraining the variables to be nonnegative and sum to one. The nonnegativity constraints were eliminated by working with the squares of the variables and the resulting problem was solved using Tapia's general theory of quasi-Newton methods for constrained optimization. A user's guide for a computer program implementing this algorithm is provided.
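
    A minimal sketch of the squared-variable substitution described above: replacing x_i by y_i**2 removes the nonnegativity constraints, leaving a single equality constraint on the sum of squares. The objective function is hypothetical, and a generic SLSQP solver stands in for the paper's quasi-Newton algorithm.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def f(x):
        """Example objective over a probability vector (hypothetical)."""
        return np.sum(x * np.log(np.maximum(x, 1e-12))) + np.sum((x - 0.25) ** 2)

    # Squared-variable substitution x_i = y_i**2 eliminates the nonnegativity
    # constraints; only the equality constraint sum(y_i**2) = 1 remains.
    def f_of_y(y):
        return f(y ** 2)

    cons = {"type": "eq", "fun": lambda y: np.sum(y ** 2) - 1.0}
    y0 = np.sqrt(np.full(4, 0.25))                 # feasible starting point
    res = minimize(f_of_y, y0, method="SLSQP", constraints=[cons])

    x_opt = res.x ** 2
    print(x_opt, x_opt.sum())                      # nonnegative and sums to one
    ```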

  15. Morphological method for estimation of simian virus 40 infectious titer.

    PubMed

    Landau, S M; Nosach, L N; Pavlova, G V

    1982-01-01

    The cytomorphologic method previously reported for titration of adenoviruses has been employed for estimating the infectious titer of simian virus 40 (SV 40). Infected cells forming intranuclear inclusions were determined. The method examined possesses a number of advantages over virus titration by plaque assay and cytopathic effect. The virus titer estimated by the method of inclusion counting and expressed as IFU/ml (Inclusion Forming Units/ml) corresponds to that estimated by plaque count and expressed as PFU/ml. PMID:6289780

  16. The augmented Lagrangian method for parameter estimation in elliptic systems

    NASA Technical Reports Server (NTRS)

    Ito, Kazufumi; Kunisch, Karl

    1990-01-01

    In this paper a new technique for the estimation of parameters in elliptic partial differential equations is developed. It is a hybrid method combining the output-least-squares and the equation error method. The new method is realized by an augmented Lagrangian formulation, and convergence as well as rate of convergence proofs are provided. Technically the critical step is the verification of a coercivity estimate of an appropriately defined Lagrangian functional. To obtain this coercivity estimate a seminorm regularization technique is used.

  17. Fused methods for visual saliency estimation

    NASA Astrophysics Data System (ADS)

    Danko, Amanda S.; Lyu, Siwei

    2015-02-01

    In this work, we present a new model of visual saliency by combing results from existing methods, improving upon their performance and accuracy. By fusing pre-attentive and context-aware methods, we highlight the abilities of state-of-the-art models while compensating for their deficiencies. We put this theory to the test in a series of experiments, comparatively evaluating the visual saliency maps and employing them for content-based image retrieval and thumbnail generation. We find that on average our model yields definitive improvements upon recall and f-measure metrics with comparable precisions. In addition, we find that all image searches using our fused method return more correct images and additionally rank them higher than the searches using the original methods alone.

  18. Advancing Methods for Estimating Cropland Area

    NASA Astrophysics Data System (ADS)

    King, L.; Hansen, M.; Stehman, S. V.; Adusei, B.; Potapov, P.; Krylov, A.

    2014-12-01

    Measurement and monitoring of complex and dynamic agricultural land systems is essential with increasing demands on food, feed, fuel and fiber production from growing human populations, rising consumption per capita, the expansion of crop oils in industrial products, and the encouraged emphasis on crop biofuels as an alternative energy source. Soybean is an important global commodity crop, and the area of land cultivated for soybean has risen dramatically over the past 60 years, occupying more than 5% of all global croplands (Monfreda et al 2008). Escalating demands for soy over the next twenty years are anticipated to be met by an increase of 1.5 times the current global production, resulting in expansion of soybean cultivated land area by nearly the same amount (Masuda and Goldsmith 2009). Soybean cropland area is estimated with the use of a sampling strategy and supervised non-linear hierarchical decision tree classification for the United States, Argentina and Brazil as the prototype in development of a new methodology for crop-specific agricultural area estimation. Comparison of our 30 m Landsat soy classification with the National Agricultural Statistics Service Cropland Data Layer (CDL) soy map shows a strong agreement in the United States for 2011, 2012, and 2013. RapidEye 5 m imagery was also classified for soy presence and absence and used at the field scale for validation and accuracy assessment of the Landsat soy maps, describing a nearly 1-to-1 relationship in the United States, Argentina and Brazil. The strong correlation found between all products suggests high accuracy and precision of the prototype and has proven to be a successful and efficient way to assess soybean cultivated area at the sub-national and national scale for the United States, with great potential for application elsewhere.

  19. Inversion Theorem Based Kernel Density Estimation for the Ordinary Least Squares Estimator of a Regression Coefficient

    PubMed Central

    Wang, Dongliang; Hutson, Alan D.

    2016-01-01

    The traditional confidence interval associated with the ordinary least squares estimator of linear regression coefficient is sensitive to non-normality of the underlying distribution. In this article, we develop a novel kernel density estimator for the ordinary least squares estimator via utilizing well-defined inversion based kernel smoothing techniques in order to estimate the conditional probability density distribution of the dependent random variable. Simulation results show that given a small sample size, our method significantly increases the power as compared with Wald-type CIs. The proposed approach is illustrated via an application to a classic small data set originally from Graybill (1961). PMID:26924882

  20. Comparison of three methods for estimating complete life tables

    NASA Astrophysics Data System (ADS)

    Ibrahim, Rose Irnawaty

    2013-04-01

    A question of interest in the demographic and actuarial fields is the estimation of the complete set of qx values when the data are given in age groups. When complete life tables are not available, estimating them from abridged life tables is necessary. Three methods, namely King's Osculatory Interpolation, Six-point Lagrangian Interpolation and the Heligman-Pollard Model, are compared using data on abridged life tables for the Malaysian population. Each of these methods was applied to the abridged data sets to estimate the complete sets of qx values. Then, the estimated complete sets of qx values were used to reproduce the estimated abridged ones by each of the three methods. The results were then compared with the actual values published in the abridged life tables. Among the three methods, the Six-point Lagrangian Interpolation method produces the best estimates of complete life tables from five-year abridged life tables.
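
    A small sketch of six-point Lagrangian interpolation applied to an abridged life table; here the survivorship column l_x is interpolated at single ages and single-age qx values are derived from it, which is one common convention and not necessarily the exact procedure of the study, and the pivotal ages and l_x values are invented.

    ```python
    import numpy as np

    def lagrange_interpolate(x_nodes, y_nodes, x):
        """Classical Lagrange interpolation through the given nodes."""
        total = 0.0
        for i, (xi, yi) in enumerate(zip(x_nodes, y_nodes)):
            term = yi
            for j, xj in enumerate(x_nodes):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total

    # Abridged life table: survivors l_x at six pivotal ages (illustrative numbers).
    ages = np.array([15, 20, 25, 30, 35, 40], dtype=float)
    lx = np.array([98500, 98200, 97800, 97300, 96600, 95700], dtype=float)

    # Six-point Lagrangian interpolation of l_x at single ages 20..35,
    # then single-age q_x = 1 - l_{x+1} / l_x for ages 20..34.
    single_ages = np.arange(20, 35)
    lx_single = np.array([lagrange_interpolate(ages, lx, a) for a in np.arange(20, 36)])
    qx_single = 1.0 - lx_single[1:] / lx_single[:-1]
    for a, q in zip(single_ages, qx_single):
        print(a, round(q, 6))
    ```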

  1. System and method for correcting attitude estimation

    NASA Technical Reports Server (NTRS)

    Josselson, Robert H. (Inventor)

    2010-01-01

    A system includes an angular rate sensor disposed in a vehicle for providing angular rates of the vehicle, and an instrument disposed in the vehicle for providing line-of-sight control with respect to a line-of-sight reference. The instrument includes an integrator which is configured to integrate the angular rates of the vehicle to form non-compensated attitudes. Also included is a compensator coupled across the integrator, in a feed-forward loop, for receiving the angular rates of the vehicle and outputting compensated angular rates of the vehicle. A summer combines the non-compensated attitudes and the compensated angular rates of the vehicle to form estimated vehicle attitudes for controlling the instrument with respect to the line-of-sight reference. The compensator is configured to provide error compensation to the instrument free of any feedback loop that uses an error signal. The compensator may include a transfer function providing a fixed gain to the received angular rates of the vehicle. The compensator may, alternatively, include a transfer function providing a variable gain as a function of frequency to operate on the received angular rates of the vehicle.

  2. Evaluation of Two Methods to Estimate and Monitor Bird Populations

    PubMed Central

    Taylor, Sandra L.; Pollard, Katherine S.

    2008-01-01

    Background Effective management depends upon accurately estimating trends in abundance of bird populations over time, and in some cases estimating abundance. Two population estimation methods, double observer (DO) and double sampling (DS), have been advocated for avian population studies and the relative merits and short-comings of these methods remain an area of debate. Methodology/Principal Findings We used simulations to evaluate the performances of these two population estimation methods under a range of realistic scenarios. For three hypothetical populations with different levels of clustering, we generated DO and DS population size estimates for a range of detection probabilities and survey proportions. Population estimates for both methods were centered on the true population size for all levels of population clustering and survey proportions when detection probabilities were greater than 20%. The DO method underestimated the population at detection probabilities less than 30% whereas the DS method remained essentially unbiased. The coverage probability of 95% confidence intervals for population estimates was slightly less than the nominal level for the DS method but was substantially below the nominal level for the DO method at high detection probabilities. Differences in observer detection probabilities did not affect the accuracy and precision of population estimates of the DO method. Population estimates for the DS method remained unbiased as the proportion of units intensively surveyed changed, but the variance of the estimates decreased with increasing proportion intensively surveyed. Conclusions/Significance The DO and DS methods can be applied in many different settings and our evaluations provide important information on the performance of these two methods that can assist researchers in selecting the method most appropriate for their particular needs. PMID:18728775

  3. A Novel Conditional Probability Density Distribution Surface for the Analysis of the Drop Life of Solder Joints Under Board Level Drop Impact

    NASA Astrophysics Data System (ADS)

    Gu, Jian; Lei, YongPing; Lin, Jian; Fu, HanGuang; Wu, Zhongwei

    2016-01-01

    The scatter of fatigue life data is a common problem and is usually described using the normal distribution or the Weibull distribution. For solder joints under drop impact, due to the complicated stress distribution, the relationship between the stress and the drop life is so far unknown. Furthermore, it is important to establish a function describing the change in standard deviation for solder joints under different drop impact levels. Therefore, in this study, a novel conditional probability density distribution surface (CPDDS) was established for the analysis of the drop life of solder joints. The relationship between the drop impact acceleration and the drop life is proposed, which comprehensively considers the stress distribution. A novel exponential model was adopted for describing the change of the standard deviation with the impact acceleration (0 → +∞). To validate the model, the drop life of Sn-3.0Ag-0.5Cu solder joints was analyzed. The probability density curve of the logarithm of the fatigue life distribution can be easily obtained for a certain acceleration level fixed on the acceleration level axis of the CPDDS. The P-A-N curve was also obtained using the functions μ(A) and σ(A), which can reflect the regularity of the life data for an overall reliability P.

  4. Robust time and frequency domain estimation methods in adaptive control

    NASA Technical Reports Server (NTRS)

    Lamaire, Richard Orville

    1987-01-01

    A robust identification method was developed for use in an adaptive control system. The type of estimator is called the robust estimator, since it is robust to the effects of both unmodeled dynamics and an unmeasurable disturbance. The development of the robust estimator was motivated by a need to provide guarantees in the identification part of an adaptive controller. To enable the design of a robust control system, a nominal model as well as a frequency-domain bounding function on the modeling uncertainty associated with this nominal model must be provided. Two estimation methods are presented for finding parameter estimates, and, hence, a nominal model. One of these methods is based on the well developed field of time-domain parameter estimation. In a second method of finding parameter estimates, a type of weighted least-squares fitting to a frequency-domain estimated model is used. The frequency-domain estimator is shown to perform better, in general, than the time-domain parameter estimator. In addition, a methodology for finding a frequency-domain bounding function on the disturbance is used to compute a frequency-domain bounding function on the additive modeling error due to the effects of the disturbance and the use of finite-length data. The performance of the robust estimator in both open-loop and closed-loop situations is examined through the use of simulations.

  5. Validation of the probability density function for the calculated radiant power of synchrotron radiation according to the Schwinger formalism

    NASA Astrophysics Data System (ADS)

    Klein, Roman

    2016-06-01

    Electron storage rings with appropriate design are primary source standards, the spectral radiant intensity of which can be calculated from measured parameters using the Schwinger equation. PTB uses the electron storage rings BESSY II and MLS for source-based radiometry in the spectral range from the near-infrared to the x-ray region. The uncertainty of the calculated radiant intensity depends on the uncertainty of the measured parameters used for the calculation. Up to now, the procedure described in the Guide to the Expression of Uncertainty in Measurement (GUM), i.e. the law of propagation of uncertainty assuming a linear measurement model, was used to determine the combined uncertainty of the calculated spectral intensity as well as the coverage interval. It has now been tested with a Monte Carlo simulation, according to Supplement 1 to the GUM, whether this procedure is valid for the rather complicated calculation by means of the Schwinger formalism and for different probability distributions of the input parameters. It was found that for typical uncertainties of the input parameters both methods yield similar results.
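
    The comparison described above can be illustrated on a stand-in nonlinear model (not the Schwinger formula): propagate the input uncertainties once with the first-order GUM law of propagation and once with a GUM Supplement 1 Monte Carlo simulation, then compare the resulting standard uncertainty and coverage interval. All parameter values are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Stand-in nonlinear model y = f(E, B, I): NOT the Schwinger formula, just a
    # placeholder with power-law dependence on the measured inputs.
    def model(E, B, I):
        return 1.0e-3 * E**2 * B * I

    # Measured input parameters (value, standard uncertainty), illustrative only.
    E, uE = 1.70, 0.002      # electron energy [GeV]
    B, uB = 1.30, 0.001      # magnetic flux density [T]
    I, uI = 0.200, 0.0005    # beam current [A]

    # GUM law of propagation (first order, uncorrelated inputs): numerical sensitivities.
    h = 1e-6
    cE = (model(E + h, B, I) - model(E - h, B, I)) / (2 * h)
    cB = (model(E, B + h, I) - model(E, B - h, I)) / (2 * h)
    cI = (model(E, B, I + h) - model(E, B, I - h)) / (2 * h)
    u_gum = np.sqrt((cE * uE)**2 + (cB * uB)**2 + (cI * uI)**2)

    # GUM Supplement 1: Monte Carlo propagation of the input distributions.
    n = 1_000_000
    y = model(rng.normal(E, uE, n), rng.normal(B, uB, n), rng.normal(I, uI, n))
    u_mc = y.std()
    ci_mc = np.percentile(y, [2.5, 97.5])

    print(f"u(y): GUM {u_gum:.3e} vs Monte Carlo {u_mc:.3e}")
    print(f"95% coverage interval (MC): [{ci_mc[0]:.4e}, {ci_mc[1]:.4e}]")
    ```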

  6. System and method for motor parameter estimation

    SciTech Connect

    Luhrs, Bin; Yan, Ting

    2014-03-18

    A system and method for determining unknown values of certain motor parameters includes a motor input device connectable to an electric motor having associated therewith values for known motor parameters and an unknown value of at least one motor parameter. The motor input device includes a processing unit that receives a first input from the electric motor comprising values for the known motor parameters of the electric motor and receives a second input comprising motor data on a plurality of reference motors, including values for motor parameters corresponding to the known motor parameters of the electric motor and values for motor parameters corresponding to the at least one unknown motor parameter value of the electric motor. The processor determines the unknown value of the at least one motor parameter from the first input and the second input and determines a motor management strategy for the electric motor based thereon.

  7. Estimate octane numbers using an enhanced method

    SciTech Connect

    Twu, C.H.; Coon, J.E.

    1997-03-01

    An improved model, based on the Twu-Coon method, is not only internally consistent, but also retains the same level of accuracy as the previous model in predicting octanes of gasoline blends. The enhanced model applies the same binary interaction parameters to components in each gasoline cut and their blends. Thus, the enhanced model can blend gasoline cuts in any order, in any combination or from any splitting of gasoline cuts and still yield the identical value of octane number for blending the same number of gasoline cuts. Setting binary interaction parameters to zero for identical gasoline cuts during the blending process is not required. The new model changes the old model's methodology so that the same binary interaction parameters can be applied between components inside a gasoline cut as are applied to the same components between gasoline cuts. The enhanced model is more consistent in methodology than the original model, but it has equal accuracy for predicting octane numbers of gasoline blends, and it has the same number of binary interaction parameters. The paper discusses background, enhancement of the Twu-Coon interaction model, and three examples: blend of 2 identical gasoline cuts, blend of 3 gasoline cuts, and blend of the same 3 gasoline cuts in a different order.

  8. Carbon footprint: current methods of estimation.

    PubMed

    Pandey, Divya; Agrawal, Madhoolika; Pandey, Jai Shanker

    2011-07-01

    Increasing greenhouse gas concentrations in the atmosphere are perturbing the environment and causing grievous global warming and associated consequences. Following the rule that only what is measurable is manageable, mensuration of the greenhouse gas intensiveness of different products, bodies, and processes is going on worldwide, expressed as their carbon footprints. The methodologies for carbon footprint calculations are still evolving, and it is emerging as an important tool for greenhouse gas management. The concept of carbon footprinting has permeated and is being commercialized in all areas of life and the economy, but there is little coherence in the definitions and calculations of carbon footprints among studies. There are disagreements in the selection of gases and the order of emissions to be covered in footprint calculations. Standards of greenhouse gas accounting are the common resources used in footprint calculations, although there is no mandatory provision for footprint verification. Carbon footprinting is intended to be a tool to guide the relevant emission cuts and verifications; its standardization at the international level is therefore necessary. The present review describes the prevailing carbon footprinting methods and raises the related issues. PMID:20848311

  9. Evaluating combinational illumination estimation methods on real-world images.

    PubMed

    Bing Li; Weihua Xiong; Weiming Hu; Funt, Brian

    2014-03-01

    Illumination estimation is an important component of color constancy and automatic white balancing. A number of methods of combining illumination estimates obtained from multiple subordinate illumination estimation methods now appear in the literature. These combinational methods aim to provide better illumination estimates by fusing the information embedded in the subordinate solutions. The existing combinational methods are surveyed and analyzed here with the goals of determining: 1) the effectiveness of fusing illumination estimates from multiple subordinate methods; 2) the best method of combination; 3) the underlying factors that affect the performance of a combinational method; and 4) the effectiveness of combination for illumination estimation in multiple-illuminant scenes. The various combinational methods are categorized in terms of whether or not they require supervised training and whether or not they rely on high-level scene content cues (e.g., indoor versus outdoor). Extensive tests and enhanced analyses using three data sets of real-world images are conducted. For consistency in testing, the images were labeled according to their high-level features (3D stages, indoor/outdoor) and this label data is made available on-line. The tests reveal that the trained combinational methods (direct combination by support vector regression in particular) clearly outperform both the non-combinational methods and those combinational methods based on scene content cues. PMID:23974624

  10. Estimating Tree Height-Diameter Models with the Bayesian Method

    PubMed Central

    Duan, Aiguo; Zhang, Jianguo; Xiang, Congwei

    2014-01-01

    Six candidate height-diameter models were used to analyze the height-diameter relationships. The common methods for estimating height-diameter models have taken the classical (frequentist) approach based on the frequency interpretation of probability, for example, the nonlinear least squares method (NLS) and the maximum likelihood method (ML). The Bayesian method has a distinct advantage over the classical methods in that the parameters to be estimated are regarded as random variables. In this study, the classical and Bayesian methods were each used to estimate the six height-diameter models. Both the classical method and the Bayesian method showed that the Weibull model was the “best” model using data1. In addition, based on the Weibull model, data2 was used to compare the Bayesian method with informative priors against the Bayesian method with uninformative priors and the classical method. The results showed that the improvement in prediction accuracy with the Bayesian method led to narrower confidence bands of the predicted values than those of the classical method, and the credible bands of the parameters with informative priors were also narrower than those with uninformative priors and the classical method. The estimated posterior distributions of the parameters can be set as new priors when estimating the parameters using data2. PMID:24711733
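
    A compact sketch of Bayesian estimation for one height-diameter curve using a random-walk Metropolis sampler; a Weibull-type form H = 1.3 + a(1 − exp(−b·D^c)) is chosen because the abstract singles out the Weibull model, but the synthetic data, priors, fixed error variance, and sampler settings are all illustrative assumptions rather than the study's setup.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Synthetic tree data: diameter D (cm) and height H (m) from a known curve + noise.
    D = rng.uniform(5, 40, size=150)
    H = 1.3 + 22.0 * (1.0 - np.exp(-0.04 * D**1.1)) + rng.normal(0.0, 1.0, size=D.size)

    def log_posterior(theta):
        """Log posterior for H = 1.3 + a(1 - exp(-b*D^c)) with weakly informative
        normal priors and Gaussian errors (sigma fixed at 1 for simplicity)."""
        a, b, c = theta
        if a <= 0 or b <= 0 or c <= 0:
            return -np.inf
        mu = 1.3 + a * (1.0 - np.exp(-b * D**c))
        log_lik = -0.5 * np.sum((H - mu) ** 2)
        log_prior = (-0.5 * ((a - 20) / 10) ** 2
                     - 0.5 * ((b - 0.05) / 0.05) ** 2
                     - 0.5 * ((c - 1.0) / 0.5) ** 2)
        return log_lik + log_prior

    # Random-walk Metropolis sampler.
    theta = np.array([20.0, 0.05, 1.0])
    step = np.array([0.5, 0.005, 0.05])
    lp = log_posterior(theta)
    samples = []
    for _ in range(20_000):
        prop = theta + step * rng.normal(size=3)
        lp_prop = log_posterior(prop)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples.append(theta.copy())

    post = np.array(samples[5000:])                       # drop burn-in
    print("posterior means (a, b, c):", post.mean(axis=0))
    print("95% credible interval for a:", np.percentile(post[:, 0], [2.5, 97.5]))
    ```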

  11. Estimating tree height-diameter models with the Bayesian method.

    PubMed

    Zhang, Xiongqing; Duan, Aiguo; Zhang, Jianguo; Xiang, Congwei

    2014-01-01

    Six candidate height-diameter models were used to analyze the height-diameter relationships. The common methods for estimating height-diameter models have taken the classical (frequentist) approach based on the frequency interpretation of probability, for example, the nonlinear least squares method (NLS) and the maximum likelihood method (ML). The Bayesian method has a distinct advantage over the classical methods in that the parameters to be estimated are regarded as random variables. In this study, the classical and Bayesian methods were each used to estimate the six height-diameter models. Both the classical method and the Bayesian method showed that the Weibull model was the "best" model using data1. In addition, based on the Weibull model, data2 was used to compare the Bayesian method with informative priors against the Bayesian method with uninformative priors and the classical method. The results showed that the improvement in prediction accuracy with the Bayesian method led to narrower confidence bands of the predicted values than those of the classical method, and the credible bands of the parameters with informative priors were also narrower than those with uninformative priors and the classical method. The estimated posterior distributions of the parameters can be set as new priors when estimating the parameters using data2. PMID:24711733

  12. Evaluation of Methods to Estimate Understory Fruit Biomass

    PubMed Central

    Lashley, Marcus A.; Thompson, Jeffrey R.; Chitwood, M. Colter; DePerno, Christopher S.; Moorman, Christopher E.

    2014-01-01

    Fleshy fruit is consumed by many wildlife species and is a critical component of forest ecosystems. Because fruit production may change quickly during forest succession, frequent monitoring of fruit biomass may be needed to better understand shifts in wildlife habitat quality. Yet, designing a fruit sampling protocol that is executable on a frequent basis may be difficult, and knowledge of accuracy within monitoring protocols is lacking. We evaluated the accuracy and efficiency of 3 methods to estimate understory fruit biomass (Fruit Count, Stem Density, and Plant Coverage). The Fruit Count method requires visual counts of fruit to estimate fruit biomass. The Stem Density method uses counts of all stems of fruit producing species to estimate fruit biomass. The Plant Coverage method uses land coverage of fruit producing species to estimate fruit biomass. Using linear regression models under a censored-normal distribution, we determined that the Fruit Count and Stem Density methods could accurately estimate fruit biomass; however, when comparing AIC values between models, the Fruit Count method was superior for estimating fruit biomass. After determining that Fruit Count was the superior method for accurately estimating fruit biomass, we conducted additional analyses to determine the sampling intensity (i.e., percentage of area) necessary to accurately estimate fruit biomass. The Fruit Count method accurately estimated fruit biomass at a 0.8% sampling intensity. In some cases, sampling 0.8% of an area may not be feasible. In these cases, we suggest sampling understory fruit production with the Fruit Count method at the greatest feasible sampling intensity, which could be valuable to assess annual fluctuations in fruit production. PMID:24819253
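
    The model-selection step can be sketched with a plain AIC comparison between two candidate predictors of fruit biomass. Ordinary Gaussian least squares is used below as a stand-in for the paper's censored-normal models, and the simulated covariates and response are illustrative assumptions only.

    ```python
    # AIC comparison of two single-covariate regression models for fruit biomass.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 120
    fruit_count = rng.poisson(30, n).astype(float)
    stem_density = fruit_count / 3 + rng.normal(0, 4, n)       # noisier proxy covariate
    biomass = 2.0 * fruit_count + rng.normal(0, 10, n)          # simulated response

    def gaussian_aic(y, X):
        """AIC of an OLS fit y ~ X (X includes an intercept column)."""
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        n_obs, k = X.shape
        sigma2 = resid @ resid / n_obs                 # ML variance estimate
        log_lik = -0.5 * n_obs * (np.log(2 * np.pi * sigma2) + 1)
        return 2 * (k + 1) - 2 * log_lik               # +1 parameter for sigma^2

    X_fc = np.column_stack([np.ones(n), fruit_count])
    X_sd = np.column_stack([np.ones(n), stem_density])
    print("AIC fruit count :", gaussian_aic(biomass, X_fc))
    print("AIC stem density:", gaussian_aic(biomass, X_sd))    # lower AIC is preferred
    ```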

  13. A source number estimation method for single optical fiber sensor

    NASA Astrophysics Data System (ADS)

    Hu, Junpeng; Huang, Zhiping; Su, Shaojing; Zhang, Yimeng; Liu, Chunwu

    2015-10-01

    The single-channel blind source separation (SCBSS) technique is of great significance in many fields, such as optical fiber communication, sensor detection, and image processing. Realizing blind source separation (BSS) from the data received by a single optical fiber sensor has a wide range of applications. The performance of many BSS algorithms and signal processing methods is degraded by inaccurate source number estimation. Many excellent algorithms have been proposed to estimate the source number in array signal processing, where multiple sensors are available, but they cannot be applied directly to the single-sensor case. This paper presents a source number estimation method for the data received by a single optical fiber sensor. Through a delay process, the single-sensor data are converted to a multi-dimensional form and the data covariance matrix is constructed, so that the estimation algorithms used in array signal processing can be applied. The information theoretic criteria (ITC) based methods, represented by AIC and MDL, and Gerschgorin's disk estimation (GDE) are introduced to estimate the source number of the single optical fiber sensor's received signal. To improve the performance of these estimation methods at low signal-to-noise ratio (SNR), a smoothing process is applied to the data covariance matrix, which reduces the fluctuation and uncertainty of its eigenvalues. Simulation results show that the ITC-based methods cannot estimate the source number effectively under colored noise. The GDE method, although its performance is poor at low SNR, is able to accurately estimate the number of sources under colored noise. The experiments also show that the proposed method can be applied to estimate the source number of single-sensor received data.
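
    The sketch below illustrates the core steps on synthetic data: a single-channel record is converted to a multi-dimensional form by delay processing (here, consecutive length-m blocks), the averaged covariance matrix is formed, and the MDL information-theoretic criterion is applied to its eigenvalues. The signal content, embedding dimension, and white (rather than colored) noise are illustrative assumptions; the paper's GDE variant and covariance smoothing details are not reproduced.

    ```python
    # Single-channel source number estimation via delay blocks + MDL criterion.
    import numpy as np

    rng = np.random.default_rng(3)
    fs, n_samples, m = 1000.0, 4800, 12               # sample rate, record length, block length
    t = np.arange(n_samples) / fs
    x = (np.exp(2j * np.pi * 60 * t) + 0.8 * np.exp(2j * np.pi * 140 * t)
         + 0.3 * (rng.standard_normal(n_samples) + 1j * rng.standard_normal(n_samples)))

    # Delay processing: cut the single-channel record into consecutive length-m
    # blocks, each acting as one pseudo "array" snapshot, and average their covariance.
    N = n_samples // m
    snapshots = x[: N * m].reshape(N, m)
    R = snapshots.conj().T @ snapshots / N

    eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]    # descending eigenvalues

    def mdl(k):
        """MDL cost of hypothesizing k sources among the m eigenvalues."""
        tail = eigvals[k:]
        geometric, arithmetic = np.exp(np.mean(np.log(tail))), np.mean(tail)
        return -N * (m - k) * np.log(geometric / arithmetic) + 0.5 * k * (2 * m - k) * np.log(N)

    print("eigenvalues:", np.round(eigvals, 2))
    print("estimated number of sources:", int(np.argmin([mdl(k) for k in range(m)])))
    ```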

  14. An automated method of tuning an attitude estimator

    NASA Technical Reports Server (NTRS)

    Mason, Paul A. C.; Mook, D. Joseph

    1995-01-01

    Attitude determination is a major element of the operation and maintenance of a spacecraft. There are several existing methods of determining the attitude of a spacecraft. One of the most commonly used methods utilizes the Kalman filter to estimate the attitude of the spacecraft. Given an accurate model of a system and adequate observations, a Kalman filter can produce accurate estimates of the attitude. If the system model, filter parameters, or observations are inaccurate, the attitude estimates may be degraded. Therefore, it is advantageous to develop a method of automatically tuning the Kalman filter to produce accurate estimates. In this paper, a three-axis attitude determination Kalman filter, which uses only magnetometer measurements, is developed and tested using real data. The appropriate filter parameters are found via the Process Noise Covariance Estimator (PNCE). The PNCE provides an optimal criterion for determining the best filter parameters.
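
    The sketch below uses a scalar Kalman filter on a synthetic random-walk state to show why tuning matters: the estimation error depends strongly on the assumed process-noise covariance Q, which is the quantity a tuning procedure such as the PNCE selects automatically. The model and numbers are illustrative assumptions, not the paper's three-axis magnetometer-based attitude filter.

    ```python
    # Scalar Kalman filter: effect of the assumed process-noise covariance Q.
    import numpy as np

    rng = np.random.default_rng(4)
    n, R_true, Q_true = 300, 0.5**2, 0.02**2
    truth = np.cumsum(rng.normal(0, np.sqrt(Q_true), n))      # slowly drifting state
    meas = truth + rng.normal(0, np.sqrt(R_true), n)          # noisy measurements

    def run_filter(Q, R):
        x, P, est = 0.0, 1.0, []
        for z in meas:
            P = P + Q                      # predict (state transition is identity)
            K = P / (P + R)                # Kalman gain
            x = x + K * (z - x)            # update with measurement residual
            P = (1 - K) * P
            est.append(x)
        return np.array(est)

    for Q in (1e-6, Q_true, 1e-1):         # under-, well-, and over-tuned Q
        rmse = np.sqrt(np.mean((run_filter(Q, R_true) - truth) ** 2))
        print(f"Q = {Q:g}: RMSE = {rmse:.3f}")
    ```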

  15. Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method

    ERIC Educational Resources Information Center

    Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey

    2013-01-01

    Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…

  16. A Novel Monopulse Angle Estimation Method for Wideband LFM Radars

    PubMed Central

    Zhang, Yi-Xiong; Liu, Qi-Fan; Hong, Ru-Jia; Pan, Ping-Ping; Deng, Zhen-Miao

    2016-01-01

    Traditional monopulse angle estimation is mainly based on phase comparison and amplitude comparison methods, which are commonly adopted in narrowband radars. In modern radar systems, wideband radars are becoming more and more important, while angle estimation for wideband signals has received little attention in previous works. As noise in wideband radars has a larger bandwidth than in narrowband radars, the challenge lies in accumulating energy from the high resolution range profile (HRRP) of monopulse. In wideband radars, linear frequency modulated (LFM) signals are frequently utilized. In this paper, we investigate the monopulse angle estimation problem for wideband LFM signals. To accumulate the energy of the received echo signals from different scatterers of a target, we propose utilizing a cross-correlation operation, which can achieve a good performance in low signal-to-noise ratio (SNR) conditions. In the proposed algorithm, the problem of angle estimation is converted to estimating the frequency of the cross-correlation function (CCF). Experimental results demonstrate that the proposed algorithm performs similarly to the traditional amplitude comparison method, indicating that it can be adopted for angle estimation. When adopting the proposed method, future radars may only need wideband signals for both tracking and imaging, which can greatly increase the data rate and strengthen the capability of anti-jamming. More importantly, the estimated angle does not become ambiguous at arbitrary angles, which can significantly extend the estimated angle range in wideband radars. PMID:27271629

  18. Methods for Estimating Medical Expenditures Attributable to Intimate Partner Violence

    ERIC Educational Resources Information Center

    Brown, Derek S.; Finkelstein, Eric A.; Mercy, James A.

    2008-01-01

    This article compares three methods for estimating the medical cost burden of intimate partner violence against U.S. adult women (18 years and older), 1 year postvictimization. To compute the estimates, prevalence data from the National Violence Against Women Survey are combined with cost data from the Medical Expenditure Panel Survey, the…

  19. Uncertainty estimation in seismo-acoustic reflection travel time inversion.

    PubMed

    Dettmer, Jan; Dosso, Stan E; Holland, Charles W

    2007-07-01

    This paper develops a nonlinear Bayesian inversion for high-resolution seabed reflection travel time data including rigorous uncertainty estimation and examination of statistical assumptions. Travel time data are picked on seismo-acoustic traces and inverted for a layered sediment sound-velocity model. Particular attention is paid to picking errors, which are often biased, correlated, and nonstationary. Non-Toeplitz data covariance matrices are estimated and included in the inversion along with unknown travel time offset (bias) parameters to account for these errors. Simulated experiments show that neglecting error covariances and biases can cause misleading inversion results with unrealistically high confidence. The inversion samples the posterior probability density and provides a solution in terms of one- and two-dimensional marginal probability densities, correlations, and credibility intervals. Statistical assumptions are examined through the data residuals with rigorous statistical tests. The method is applied to shallow-water data collected on the Malta Plateau during the SCARAB98 experiment. PMID:17614476
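
    The role of a non-diagonal data covariance can be seen in the small sketch below, where a one-parameter travel-time model is fit to synthetic picks with correlated, slightly biased errors using a Gaussian likelihood; accounting for the full covariance changes the estimate relative to assuming independent errors. The forward model, AR(1)-style covariance, and bias term are illustrative assumptions, far simpler than the paper's layered sound-velocity inversion.

    ```python
    # Effect of correlated picking errors on a simple travel-time fit.
    import numpy as np

    rng = np.random.default_rng(6)
    n = 40
    offsets = np.linspace(0.1, 2.0, n)               # source-receiver offsets (km)
    velocity_true = 1.6                              # km/s, single "layer" for simplicity
    t_true = offsets / velocity_true

    # Correlated, slightly biased picking errors (AR(1)-style covariance).
    sigma, rho, bias = 0.01, 0.8, 0.005
    C = sigma**2 * rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    t_obs = t_true + rng.multivariate_normal(np.zeros(n), C) + bias

    def neg_log_like(v, cov):
        resid = t_obs - offsets / v
        return 0.5 * resid @ np.linalg.solve(cov, resid)

    v_grid = np.linspace(1.4, 1.8, 401)
    full = v_grid[np.argmin([neg_log_like(v, C) for v in v_grid])]
    diag = v_grid[np.argmin([neg_log_like(v, np.diag(np.diag(C))) for v in v_grid])]
    print(f"velocity estimate with full covariance: {full:.3f} km/s")
    print(f"velocity estimate assuming independent errors: {diag:.3f} km/s")
    ```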

  20. Adaptive frequency estimation by MUSIC (Multiple Signal Classification) method

    NASA Astrophysics Data System (ADS)

    Karhunen, Juha; Nieminen, Esko; Joutsensalo, Jyrki

    During recent years, the eigenvector-based method called MUSIC has become very popular for estimating the frequencies of sinusoids in additive white noise. Adaptive realizations of the MUSIC method are studied using simulated data. Several of the adaptive realizations seem to give results in practice that are as good as those of the nonadaptive standard realization. The only exceptions are instantaneous gradient type algorithms, which need considerably more samples to achieve comparable performance. A new method is proposed for constructing initial estimates of the signal subspace. The method often dramatically improves the performance of instantaneous gradient type algorithms. The new signal subspace estimate can also be used to define a frequency estimator directly or to simplify eigenvector computation.
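
    A batch (nonadaptive) MUSIC frequency estimator is sketched below for two complex exponentials in white noise: a covariance matrix is built from delay snapshots, its eigendecomposition yields the noise subspace, and peaks of the pseudospectrum give the frequency estimates. The dimensions, frequencies, and known model order are illustrative assumptions; the adaptive subspace-tracking realizations studied in the paper are not shown.

    ```python
    # Batch MUSIC frequency estimation for two complex exponentials in white noise.
    import numpy as np
    from scipy.signal import find_peaks

    rng = np.random.default_rng(7)
    fs, n_samples, m, p = 1000.0, 2000, 20, 2        # p = number of complex exponentials
    t = np.arange(n_samples) / fs
    x = (np.exp(2j * np.pi * 110 * t) + 0.7 * np.exp(2j * np.pi * 270 * t)
         + 0.2 * (rng.standard_normal(n_samples) + 1j * rng.standard_normal(n_samples)))

    snaps = np.lib.stride_tricks.sliding_window_view(x, m)   # overlapping delay snapshots
    R = snaps.conj().T @ snaps / snaps.shape[0]              # sample covariance matrix

    eigvals, eigvecs = np.linalg.eigh(R)                     # ascending eigenvalues
    noise_subspace = eigvecs[:, : m - p]                     # spans the noise subspace

    freqs = np.linspace(0, fs / 2, 2000)
    steering = np.exp(2j * np.pi * np.outer(np.arange(m), freqs) / fs)   # m x F
    proj = np.linalg.norm(noise_subspace.conj().T @ steering, axis=0)
    pseudospectrum = 1.0 / proj**2                           # peaks where steering _|_ noise subspace

    peak_idx, _ = find_peaks(pseudospectrum)
    top2 = peak_idx[np.argsort(pseudospectrum[peak_idx])[-2:]]
    print("estimated frequencies (Hz):", np.sort(freqs[top2]))
    ```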

  1. Methods for Estimating Uncertainty in Factor Analytic Solutions

    EPA Science Inventory

    The EPA PMF (Environmental Protection Agency positive matrix factorization) version 5.0 and the underlying multilinear engine-executable ME-2 contain three methods for estimating uncertainty in factor analytic models: classical bootstrap (BS), displacement of factor elements (DI...

  2. Evapotranspiration: Mass balance measurements compared with flux estimation methods

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Evapotranspiration (ET) may be measured by mass balance methods and estimated by flux sensing methods. The mass balance methods are typically restricted in terms of the area that can be represented (e.g., surface area of weighing lysimeter (LYS) or equivalent representative area of neutron probe (NP...

  3. Estimation and classification by sigmoids based on mutual information

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1994-01-01

    An estimate of the probability density function of a random vector is obtained by maximizing the mutual information between the input and the output of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's method, applied to an estimated density, yields a recursive maximum likelihood estimator, consisting of a single internal layer of sigmoids, for a random variable or a random sequence. Applications to diamond classification and to the prediction of a sunspot process are demonstrated.

  4. Recent developments in the methods of estimating shooting distance.

    PubMed

    Zeichner, Arie; Glattstein, Baruch

    2002-03-01

    A review of developments during the past 10 years in the methods of estimating shooting distance is provided. This review discusses the examination of clothing targets, cadavers, and exhibits that cannot be processed in the laboratory. The methods include visual/microscopic examinations, color tests, and instrumental analysis of the gunshot residue deposits around the bullet entrance holes. The review does not cover shooting distance estimation from shotguns that fired pellet loads. PMID:12805985

  5. Using the Mercy Method for Weight Estimation in Indian Children

    PubMed Central

    Batmanabane, Gitanjali; Jena, Pradeep Kumar; Dikshit, Roshan

    2015-01-01

    This study was designed to compare the performance of a new weight estimation strategy (Mercy Method) with 12 existing weight estimation methods (APLS, Best Guess, Broselow, Leffler, Luscombe-Owens, Nelson, Shann, Theron, Traub-Johnson, Traub-Kichen) in children from India. Otherwise healthy children, 2 months to 16 years, were enrolled and weight, height, humeral length (HL), and mid-upper arm circumference (MUAC) were obtained by trained raters. Weight estimation was performed as described for each method. Predicted weights were regressed against actual weights and the slope, intercept, and Pearson correlation coefficient estimated. Agreement between estimated weight and actual weight was determined using Bland–Altman plots with log-transformation. Predictive performance of each method was assessed using mean error (ME), mean percentage error (MPE), and root mean square error (RMSE). Three hundred seventy-five children (7.5 ± 4.3 years, 22.1 ± 12.3 kg, 116.2 ± 26.3 cm) participated in this study. The Mercy Method (MM) offered the best correlation between actual and estimated weight when compared with the other methods (r2 = .967 vs .517-.844). The MM also demonstrated the lowest ME, MPE, and RMSE. Finally, the MM estimated weight within 20% of actual for nearly all children (96%) as opposed to the other methods for which these values ranged from 14% to 63%. The MM performed extremely well in Indian children with performance characteristics comparable to those observed for US children in whom the method was developed. It appears that the MM can be used in Indian children without modification, extending the utility of this weight estimation strategy beyond Western populations. PMID:27335932
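
    The comparison metrics quoted above are straightforward to compute; the sketch below evaluates mean error (ME), mean percentage error (MPE), root mean square error (RMSE), and the fraction of estimates within 20% of actual weight for a handful of made-up actual/estimated weight pairs. The numbers are purely illustrative and do not come from the study.

    ```python
    # Predictive-performance metrics used to compare weight estimation methods.
    import numpy as np

    actual = np.array([12.0, 18.5, 24.0, 31.0, 45.5, 52.0])       # kg
    estimated = np.array([11.4, 19.6, 22.8, 33.1, 43.0, 55.2])    # kg, hypothetical method

    error = estimated - actual
    me = error.mean()                                   # mean error
    mpe = np.mean(100.0 * error / actual)               # mean percentage error
    rmse = np.sqrt(np.mean(error**2))                   # root mean square error
    within_20pct = np.mean(np.abs(error / actual) <= 0.20) * 100

    print(f"ME = {me:.2f} kg, MPE = {mpe:.1f}%, RMSE = {rmse:.2f} kg, "
          f"within 20%: {within_20pct:.0f}%")
    ```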

  6. An investigation of new methods for estimating parameter sensitivities

    NASA Technical Reports Server (NTRS)

    Beltracchi, Todd J.; Gabriele, Gary A.

    1988-01-01

    Parameter sensitivity is defined as the estimation of changes in the modeling functions and the design variables due to small changes in the fixed parameters of the formulation. The current methods for estimating parameter sensitivities either require second-order information that is difficult to obtain or do not return reliable estimates of the derivatives. Additionally, all the methods assume that the set of active constraints does not change in a neighborhood of the estimation point. If the active set does in fact change, then any extrapolations based on these derivatives may be in error. The objective here is to investigate more efficient new methods for estimating parameter sensitivities when the active set changes. The new method is based on the recursive quadratic programming (RQP) method, used in conjunction with a differencing formula to produce estimates of the sensitivities. This is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFS method employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate the parameter sensitivity. To handle changes in the active set, a deflection algorithm is proposed for those cases where the new set of active constraints remains linearly independent. For those cases where dependencies occur, a directional derivative is proposed. A few simple examples are included for the algorithm, but extensive testing has not yet been performed.

  7. A posteriori pointwise error estimates for the boundary element method

    SciTech Connect

    Paulino, G.H.; Gray, L.J.; Zarikian, V.

    1995-01-01

    This report presents a new approach for a posteriori pointwise error estimation in the boundary element method. The estimator relies upon the evaluation of hypersingular integral equations, and is therefore intrinsic to the boundary integral equation approach. This property allows some theoretical justification by mathematically correlating the exact and estimated errors. A methodology is developed for approximating the error on the boundary as well as in the interior of the domain. In the interior, error estimates for both the function and its derivatives (e.g. potential and interior gradients for potential problems, displacements and stresses for elasticity problems) are presented. Extensive computational experiments have been performed for the two dimensional Laplace equation on interior domains, employing Dirichlet and mixed boundary conditions. The results indicate that the error estimates successfully track the form of the exact error curve. Moreover, a reasonable estimate of the magnitude of the actual error is also obtained.

  8. Two-dimensional location and direction estimating method.

    PubMed

    Haga, Teruhiro; Tsukamoto, Sosuke; Hoshino, Hiroshi

    2008-01-01

    In this paper, a method of estimating both the position and the rotation angle of an object on a measurement stage is proposed. The system utilizes radio communication technology and the directivity of an antenna. As a prototype system, a measurement stage (a circle 240 mm in diameter) with 36 antennas placed at 10-degree intervals was developed. Two transmitter antennas are set at a right angle on the stage as the target object, and the position and rotation angle are estimated by measuring the radio communication efficiency of each of the 36 antennas. The experimental results revealed that even when the estimated location is not very accurate (about a 30 mm error), the rotation angle is accurately estimated (about a 2.33 degree error on average). The result suggests that the proposed method will be useful for estimating the location and the direction of an object. PMID:19162938

  9. A Channelization-Based DOA Estimation Method for Wideband Signals.

    PubMed

    Guo, Rui; Zhang, Yue; Lin, Qianqiang; Chen, Zengping

    2016-01-01

    In this paper, we propose a novel direction of arrival (DOA) estimation method for wideband signals with sensor arrays. The proposed method splits the wideband array output into multiple frequency sub-channels and estimates the signal parameters using a digital channelization receiver. Based on the output sub-channels, a channelization-based incoherent signal subspace method (Channelization-ISM) and a channelization-based test of orthogonality of projected subspaces method (Channelization-TOPS) are proposed. Channelization-ISM applies narrowband signal subspace methods on each sub-channel independently. Then the arithmetic mean or geometric mean of the estimated DOAs from each sub-channel gives the final result. Channelization-TOPS measures the orthogonality between the signal and the noise subspaces of the output sub-channels to estimate DOAs. The proposed channelization-based method isolates signals in different bandwidths reasonably and improves the output SNR. It outperforms the conventional ISM and TOPS methods in estimation accuracy and dynamic range, especially in real environments. In addition, the parallel processing architecture makes it easy to implement in hardware. A wideband digital array radar (DAR) using direct wideband radio frequency (RF) digitization is presented. Experiments carried out in a microwave anechoic chamber with the wideband DAR are presented to demonstrate the performance. The results verify the effectiveness of the proposed method. PMID:27384566

  11. Comparison of several methods for estimating low speed stability derivatives

    NASA Technical Reports Server (NTRS)

    Fletcher, H. S.

    1971-01-01

    Methods presented in five different publications have been used to estimate the low-speed stability derivatives of two unpowered airplane configurations. One configuration had unswept lifting surfaces; the other configuration was the D-558-II swept-wing research airplane. The results of the computations were compared with each other, with existing wind-tunnel data, and with flight-test data for the D-558-II configuration to assess the relative merits of the methods for estimating derivatives. The results of the study indicated that, in general, for low subsonic speeds, no single method appeared consistently better for estimating all of the derivatives.

  12. A Computationally Efficient Method for Polyphonic Pitch Estimation

    NASA Astrophysics Data System (ADS)

    Zhou, Ruohua; Reiss, Joshua D.; Mattavelli, Marco; Zoia, Giorgio

    2009-12-01

    This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimation is obtained by means of a simple peak-picking procedure in the pitch energy spectrum. This spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Then incorrect estimates are removed according to spectral irregularity and knowledge of the harmonic structures of the musical notes played on commonly used musical instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and results demonstrate the high performance and computational efficiency of the approach.

  13. A robust method for rotation estimation using spherical harmonics representation.

    PubMed

    Althloothi, Salah; Mahoor, Mohammad H; Voyles, Richard M

    2013-06-01

    This paper presents a robust method for 3D object rotation estimation using spherical harmonics representation and the unit quaternion vector. The proposed method provides a closed-form solution for rotation estimation without recurrence relations or searching for point correspondences between two objects. The rotation estimation problem is cast as a minimization problem, which finds the optimum rotation angles between two objects of interest in the frequency domain. The optimum rotation angles are obtained by calculating the unit quaternion vector from a symmetric matrix, which is constructed from the two sets of spherical harmonics coefficients using an eigendecomposition technique. Our experimental results on hundreds of 3D objects show that our proposed method is very accurate in rotation estimation, robust to noisy data and missing surface points, and can handle intra-class variability between 3D objects. PMID:23475364
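
    The closing step described above, recovering a unit quaternion as the principal eigenvector of a 4x4 symmetric matrix, can be illustrated with the classical Davenport q-method on corresponding vector sets, as sketched below. The spherical harmonics front end of the paper is omitted, and the synthetic vectors and noise level are assumptions for illustration.

    ```python
    # Rotation as the principal eigenvector of a 4x4 symmetric matrix (q-method).
    import numpy as np

    def quat_to_matrix(q):
        """Rotation matrix of a unit quaternion q = (w, x, y, z), Hamilton convention."""
        w, x, y, z = q
        return np.array([
            [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
            [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
            [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
        ])

    rng = np.random.default_rng(8)
    q_true = rng.standard_normal(4)
    q_true /= np.linalg.norm(q_true)
    R_true = quat_to_matrix(q_true)

    a = rng.standard_normal((50, 3))                          # reference vectors (rows)
    b = a @ R_true.T + 1e-3 * rng.standard_normal(a.shape)    # rotated + noise

    # Build the 4x4 symmetric matrix whose principal eigenvector is the quaternion.
    B = a.T @ b / a.shape[0]                                  # profile matrix, sum of a_i b_i^T
    S = B + B.T
    z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
    K = np.zeros((4, 4))
    K[0, 0] = np.trace(B)
    K[0, 1:] = z
    K[1:, 0] = z
    K[1:, 1:] = S - np.trace(B) * np.eye(3)

    eigvals, eigvecs = np.linalg.eigh(K)                      # symmetric eigendecomposition
    q_est = eigvecs[:, -1]                                    # eigenvector of largest eigenvalue
    if q_est @ q_true < 0:                                    # resolve the +/- sign ambiguity
        q_est = -q_est
    print("quaternion estimation error:", np.linalg.norm(q_est - q_true))
    ```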

  14. A Fast Estimation Method of Railway Passengers' Flow

    NASA Astrophysics Data System (ADS)

    Nagasaki, Yusaku; Asuka, Masashi; Komaya, Kiyotoshi

    To evaluate a train schedule from the viewpoint of passengers' convenience, it is important to know each passenger's choice of trains and transfer stations to arrive at his/her destination. Because such passenger behavior is difficult to measure, estimation methods for railway passenger flow have been proposed to carry out such an evaluation. However, a train schedule planning system equipped with those methods is not practical because of the long time needed to complete the estimation. In this article, the authors propose a fast passenger flow estimation method that employs features of the passenger flow graph, using a preparative search based on each train's arrival time at each station. The authors also show the results of passenger flow estimation applied to a railway in an urban area.

  15. Evaluation of the Mercy weight estimation method in Ouelessebougou, Mali

    PubMed Central

    2014-01-01

    Background This study evaluated the performance of a new weight estimation strategy (Mercy Method) against four existing weight-estimation methods (APLS, ARC, Broselow, and Nelson) in children from Ouelessebougou, Mali. Methods Otherwise healthy children, 2 mos to 16 yrs, were enrolled and weight, height, humeral length (HL) and mid-upper arm circumference (MUAC) were obtained by trained raters. Weight estimation was performed as described for each method. Predicted weights were regressed against actual weights. Agreement between estimated and actual weight was determined using Bland-Altman plots with log-transformation. Predictive performance of each method was assessed using mean error (ME), mean percentage error (MPE), root mean square error (RMSE), and the percentage predicted within 10, 20 and 30% of actual weight. Results 473 children (8.1 ± 4.8 yr, 25.1 ± 14.5 kg, 120.9 ± 29.5 cm) participated in this study. The Mercy Method (MM) offered the best correlation between actual and estimated weight when compared with the other methods (r2 = 0.97 vs. 0.80-0.94). The MM also demonstrated the lowest ME (0.06 vs. 0.92-4.1 kg), MPE (1.6 vs. 7.8-19.8%) and RMSE (2.6 vs. 3.0-6.7). Finally, the MM estimated weight within 20% of actual for nearly all children (97%) as opposed to the other methods, for which these values ranged from 50-69%. Conclusions The MM performed extremely well in Malian children, with performance characteristics comparable to those observed for the U.S. and India, and could be used in sub-Saharan African children without modification, extending the utility of this weight estimation strategy. PMID:24650051

  16. Demographic estimation methods for plants with unobservable life-states

    USGS Publications Warehouse

    Kery, M.; Gregg, K.B.; Schaub, M.

    2005-01-01

    Demographic estimation of vital parameters in plants with an unobservable dormant state is complicated, because time of death is not known. Conventional methods assume that death occurs at a particular time after a plant has last been seen aboveground, but the consequences of assuming a particular duration of dormancy have never been tested. Capture-recapture methods do not make assumptions about time of death; however, problems with parameter estimability have not yet been resolved. To date, a critical comparative assessment of these methods is lacking. We analysed data from a 10 year study of Cleistes bifaria, a terrestrial orchid with frequent dormancy, and compared demographic estimates obtained by five varieties of the conventional methods and two capture-recapture methods. All conventional methods produced spurious unity survival estimates for some years or for some states, and estimates of demographic rates sensitive to the time of death assumption. In contrast, capture-recapture methods are more parsimonious in terms of assumptions, are based on well founded theory, and did not produce spurious estimates. In Cleistes, dormant episodes lasted for 1-4 years (mean 1.4, SD 0.74). The capture-recapture models estimated ramet survival rate at 0.86 (SE ≈ 0.01), ranging from 0.77-0.94 (SEs ≤ 0.1) in any one year. The average fraction dormant was estimated at 30% (SE 1.5), ranging from 16-47% (SEs ≤ 5.1) in any one year. Multistate capture-recapture models showed that survival rates were positively related to precipitation in the current year, but transition rates were more strongly related to precipitation in the previous than in the current year, with more ramets going dormant following dry years. Not all capture-recapture models of interest have estimable parameters; for instance, without excavating plants in years when they do not appear aboveground, it is not possible to obtain independent time-specific survival estimates for dormant plants. We introduce rigorous

  17. A new method for parameter estimation in nonlinear dynamical equations

    NASA Astrophysics Data System (ADS)

    Wang, Liu; He, Wen-Ping; Liao, Le-Jian; Wan, Shi-Quan; He, Tao

    2015-01-01

    Parameter estimation is an important scientific problem in various fields such as chaos control, chaos synchronization and other mathematical models. In this paper, a new method for parameter estimation in nonlinear dynamical equations is proposed based on evolutionary modelling (EM). The method exploits the self-organizing, adaptive and self-learning features of EM, which are inspired by biological natural selection, mutation and genetic inheritance. The performance of the new method is demonstrated by various numerical tests on the classic chaotic Lorenz system (Lorenz 1963). The results indicate that the new method can be used for fast and effective parameter estimation irrespective of whether some or all of the parameters of the Lorenz equations are unknown. Moreover, the new method has a good convergence rate. Noise is inevitable in observational data, so the influence of observational noise on the performance of the presented method has been investigated. The results indicate that strong noise, such as a signal-to-noise ratio (SNR) of 10 dB, has a larger influence on parameter estimation than relatively weak noise. However, the precision of the parameter estimation remains acceptable for relatively weak noise, e.g. an SNR of 20 or 30 dB, indicating that the presented method also has some robustness to noise.
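
    The sketch below conveys the flavor of evolutionary parameter estimation for the Lorenz system: SciPy's differential evolution, a readily available evolutionary optimizer, stands in for the paper's evolutionary-modelling scheme and recovers (sigma, rho, beta) from a short, noise-free synthetic trajectory. The time span, bounds, and optimizer settings are illustrative assumptions.

    ```python
    # Evolutionary-style parameter estimation for the Lorenz system.
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import differential_evolution

    def lorenz(t, state, sigma, rho, beta):
        x, y, z = state
        return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

    t_eval = np.linspace(0, 2, 201)
    x0 = [1.0, 1.0, 1.0]
    true_params = (10.0, 28.0, 8.0 / 3.0)
    observed = solve_ivp(lorenz, (0, 2), x0, t_eval=t_eval, args=true_params).y

    def cost(params):
        sol = solve_ivp(lorenz, (0, 2), x0, t_eval=t_eval, args=tuple(params))
        if not sol.success:
            return 1e9
        return np.mean((sol.y - observed) ** 2)   # trajectory mismatch

    result = differential_evolution(cost, bounds=[(5, 15), (20, 40), (1, 5)],
                                    seed=0, maxiter=60, tol=1e-8, polish=True)
    print("estimated (sigma, rho, beta):", np.round(result.x, 3))
    ```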

  18. Scanning linear estimation: improvements over region of interest (ROI) methods

    NASA Astrophysics Data System (ADS)

    Kupinski, Meredith K.; Clarkson, Eric W.; Barrett, Harrison H.

    2013-03-01

    In tomographic medical imaging, a signal activity is typically estimated by summing voxels from a reconstructed image. We introduce an alternative estimation scheme that operates on the raw projection data and offers a substantial improvement, as measured by the ensemble mean-square error (EMSE), when compared to using voxel values from a maximum-likelihood expectation-maximization (MLEM) reconstruction. The scanning-linear (SL) estimator operates on the raw projection data and is derived as a special case of maximum-likelihood estimation with a series of approximations to make the calculation tractable. The approximated likelihood accounts for background randomness, measurement noise and variability in the parameters to be estimated. When signal size and location are known, the SL estimate of signal activity is unbiased, i.e. the average estimate equals the true value. By contrast, unpredictable bias arising from the null functions of the imaging system affect standard algorithms that operate on reconstructed data. The SL method is demonstrated for two different tasks: (1) simultaneously estimating a signal’s size, location and activity; (2) for a fixed signal size and location, estimating activity. Noisy projection data are realistically simulated using measured calibration data from the multi-module multi-resolution small-animal SPECT imaging system. For both tasks, the same set of images is reconstructed using the MLEM algorithm (80 iterations), and the average and maximum values within the region of interest (ROI) are calculated for comparison. This comparison shows dramatic improvements in EMSE for the SL estimates. To show that the bias in ROI estimates affects not only absolute values but also relative differences, such as those used to monitor the response to therapy, the activity estimation task is repeated for three different signal sizes.

  19. Stability and error estimation for Component Adaptive Grid methods

    NASA Technical Reports Server (NTRS)

    Oliger, Joseph; Zhu, Xiaolei

    1994-01-01

    Component adaptive grid (CAG) methods for solving hyperbolic partial differential equations (PDE's) are discussed in this paper. Applying recent stability results for a class of numerical methods on uniform grids, the convergence of these methods for linear problems on component adaptive grids is established here. Furthermore, the computational error can be estimated on CAG's using the stability results. Using these estimates, the error can be controlled on CAG's. Thus, the solution can be computed efficiently on CAG's within a given error tolerance. Computational results for time dependent linear problems in one and two space dimensions are presented.

  20. Assessing the sensitivity of methods for estimating principal causal effects.

    PubMed

    Stuart, Elizabeth A; Jo, Booil

    2015-12-01

    The framework of principal stratification provides a way to think about treatment effects conditional on post-randomization variables, such as level of compliance. In particular, the complier average causal effect (CACE) - the effect of the treatment for those individuals who would comply with their treatment assignment under either treatment condition - is often of substantive interest. However, estimation of the CACE is not always straightforward, with a variety of estimation procedures and underlying assumptions, but little advice to help researchers select between methods. In this article, we discuss and examine two methods that rely on very different assumptions to estimate the CACE: a maximum likelihood ('joint') method that assumes the 'exclusion restriction' (ER), and a propensity score-based method that relies on 'principal ignorability.' We detail the assumptions underlying each approach, and assess each method's sensitivity to both its own assumptions and those of the other method using both simulated data and a motivating example. We find that the ER-based joint approach appears somewhat less sensitive to its assumptions, and that the performance of both methods is significantly improved when there are strong predictors of compliance. Interestingly, we also find that each method performs particularly well when the assumptions of the other approach are violated. These results highlight the importance of carefully selecting an estimation procedure whose assumptions are likely to be satisfied in practice and of having strong predictors of principal stratum membership. PMID:21971481

  1. A Simple Method to Estimate Harvest Index in Grain Crops

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Several methods have been proposed to simulate yield in crop simulation models. In this work we present a simple method to estimate harvest index (HI) of grain crops based on fractional post-anthesis growth (fG = fraction of growth that occurred post-anthesis). We propose that there is a linear or c...

  2. A Study of Methods for Estimating Distributions of Test Scores.

    ERIC Educational Resources Information Center

    Cope, Ronald T.; Kolen, Michael J.

    This study compared five density estimation techniques applied to samples from a population of 272,244 examinees' ACT English Usage and Mathematics Usage raw scores. Unsmoothed frequencies, kernel method, negative hypergeometric, four-parameter beta compound binomial, and Cureton-Tukey methods were applied to 500 replications of random samples of…
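
    One of the techniques named above, the kernel method, is easy to illustrate; the sketch below compares unsmoothed relative frequencies of simulated integer raw scores with a Gaussian kernel density estimate. The simulated scores are illustrative assumptions, not ACT data, and the other four techniques from the study are not shown.

    ```python
    # Gaussian kernel density estimate of a raw-score distribution vs. raw frequencies.
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(9)
    scores = np.clip(np.round(rng.normal(45, 12, 5000)), 0, 75).astype(int)  # fake 0-75 raw scores

    # Unsmoothed relative frequencies.
    values, counts = np.unique(scores, return_counts=True)
    rel_freq = counts / counts.sum()

    # Kernel density estimate evaluated at each possible score.
    kde = gaussian_kde(scores.astype(float))
    grid = np.arange(0, 76)
    density = kde(grid)

    for v in (30, 45, 60):
        print(f"score {v}: relative frequency = {rel_freq[values == v][0]:.4f}, "
              f"KDE density = {density[grid == v][0]:.4f}")
    ```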

  3. Evaluation of alternative methods for estimating reference evapotranspiration

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Evapotranspiration is an important component in water-balance and irrigation scheduling models. While the FAO-56 Penman-Monteith method has become the de facto standard for estimating reference evapotranspiration (ETo), it is a complex method requiring several weather parameters. Required weather ...

  4. Precision of two methods for estimating age from burbot otoliths

    USGS Publications Warehouse

    Edwards, W.H.; Stapanian, M.A.; Stoneman, A.T.

    2011-01-01

    Lower reproductive success and older age structure are associated with many burbot (Lota lota L.) populations that are declining or of conservation concern. Therefore, reliable methods for estimating the age of burbot are critical for effective assessment and management. In Lake Erie, burbot populations have declined in recent years due to the combined effects of an aging population (x̄ = 10 years in 2007) and extremely low recruitment since 2002. We examined otoliths from burbot (N = 91) collected in Lake Erie in 2007 and compared the estimates of burbot age by two agers, each using two established methods (cracked-and-burned and thin-section) of estimating ages from burbot otoliths. One ager was experienced at estimating age from otoliths, the other was a novice. Agreement (precision) between the two agers was higher for the thin-section method, particularly at ages 6–11 years, based on linear regression analyses and 95% confidence intervals. As expected, precision between the two methods was higher for the more experienced ager. Both agers reported that the thin sections offered clearer views of the annuli, particularly near the margins on otoliths from burbot ages ≥8. Slides for the thin sections required some costly equipment and more than 2 days to prepare. In contrast, preparing the cracked-and-burned samples was comparatively inexpensive and quick. We suggest use of the thin-section method for estimating the age structure of older burbot populations.

  5. Time domain attenuation estimation method from ultrasonic backscattered signals

    PubMed Central

    Ghoshal, Goutam; Oelze, Michael L.

    2012-01-01

    Ultrasonic attenuation is important not only as a parameter for characterizing tissue but also for compensating other parameters that are used to classify tissues. Several techniques have been explored for estimating ultrasonic attenuation from backscattered signals. In the present study, a technique is developed to estimate the local ultrasonic attenuation coefficient by analyzing the time domain backscattered signal. The proposed method incorporates an objective function that combines the diffraction pattern of the source/receiver with the attenuation slope in an integral equation. The technique was assessed through simulations and validated through experiments with a tissue mimicking phantom and fresh rabbit liver samples. The attenuation values estimated using the proposed technique were compared with the attenuation estimated using insertion loss measurements. For a data block size of 15 pulse lengths axially and 15 beamwidths laterally, the mean attenuation estimates from the tissue mimicking phantoms were within 10% of the estimates using insertion loss measurements. With a data block size of 20 pulse lengths axially and 20 beamwidths laterally, the error in the attenuation values estimated from the liver samples was within 10% of the attenuation values estimated from the insertion loss measurements. PMID:22779499

  6. Estimating Population Size Using the Network Scale Up Method

    PubMed Central

    Maltiel, Rachael; Raftery, Adrian E.; McCormick, Tyler H.; Baraff, Aaron J.

    2015-01-01

    We develop methods for estimating the size of hard-to-reach populations from data collected using network-based questions on standard surveys. Such data arise by asking respondents how many people they know in a specific group (e.g. people named Michael, intravenous drug users). The Network Scale up Method (NSUM) is a tool for producing population size estimates using these indirect measures of respondents’ networks. Killworth et al. (1998a,b) proposed maximum likelihood estimators of population size for a fixed effects model in which respondents’ degrees or personal network sizes are treated as fixed. We extend this by treating personal network sizes as random effects, yielding principled statements of uncertainty. This allows us to generalize the model to account for variation in people’s propensity to know people in particular subgroups (barrier effects), such as their tendency to know people like themselves, as well as their lack of awareness of or reluctance to acknowledge their contacts’ group memberships (transmission bias). NSUM estimates also suffer from recall bias, in which respondents tend to underestimate the number of members of larger groups that they know, and conversely for smaller groups. We propose a data-driven adjustment method to deal with this. Our methods perform well in simulation studies, generating improved estimates and calibrated uncertainty intervals, as well as in back estimates of real sample data. We apply them to data from a study of HIV/AIDS prevalence in Curitiba, Brazil. Our results show that when transmission bias is present, external information about its likely extent can greatly improve the estimates. The methods are implemented in the NSUM R package. PMID:26949438
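
    The basic fixed-effects scale-up estimator underlying these extensions is simple: scale the ratio of reported ties to the hidden group over reported personal network sizes by the total population size. The sketch below computes it for made-up survey responses; the degrees are taken as given, and the random-effects, barrier-effect, transmission-bias and recall-bias adjustments developed in the paper are omitted.

    ```python
    # Basic (Killworth-style) network scale-up estimate of a hidden-population size.
    import numpy as np

    total_population = 1_800_000                     # size of the general population (assumed)
    known_ties = np.array([0, 1, 0, 2, 0, 0, 1, 3, 0, 1])      # "how many X do you know?"
    degrees = np.array([290, 410, 150, 620, 330, 500, 270, 710, 180, 420])  # personal network sizes

    n_hat = total_population * known_ties.sum() / degrees.sum()
    print(f"estimated hidden-population size: {n_hat:,.0f}")
    ```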

  7. Benchmarking Method for Estimation of Biogas Upgrading Schemes

    NASA Astrophysics Data System (ADS)

    Blumberga, D.; Kuplais, Ģ.; Veidenbergs, I.; Dāce, E.

    2009-01-01

    The paper describes a new benchmarking method proposed for the estimation of different biogas upgrading schemes. The method has been developed to compare the indicators of alternative biogas purification and upgrading solutions and their threshold values. The chosen indicators cover both economic and ecological aspects of these solutions, e.g. the prime cost of biogas purification and storage, and the cost efficiency of greenhouse gas emission reduction. The proposed benchmarking method has been tested at "Daibe", a landfill for municipal solid waste.

  8. New method for the estimation of platelet ascorbic acid

    PubMed Central

    Lloyd, J. V.; Davis, P. S.; Lander, Harry

    1969-01-01

    Present techniques for the estimation of platelet ascorbic acid allow interference by other substances in the sample. A new and more specific method of analysis is presented. The proposed method owes its increased specificity to resolution of the extract by thin-layer chromatography. By this means ascorbic acid is separated from other reducing substances present. The separated ascorbic acid is eluted from the thin layer and estimated by a new and very sensitive procedure: ascorbic acid is made to react with ferric chloride and the ferrous ions so formed are estimated spectrophotometrically by the coloured derivative which they form with tripyridyl-s-triazine. Results obtained with normal blood platelets were consistently lower than simultaneous determinations by the dinitrophenylhydrazine (DNPH) method. PMID:5798633

  9. Fault detection in electromagnetic suspension systems with state estimation methods

    SciTech Connect

    Sinha, P.K.; Zhou, F.B.; Kutiyal, R.S. . Dept. of Engineering)

    1993-11-01

    High-speed maglev vehicles need a high level of safety that depends on the reliability of the whole vehicle system. There are many ways of attaining high reliability for the system. The conventional approach uses redundant hardware with majority-vote logic circuits. Hardware redundancy costs more, weighs more and occupies more space than analytically redundant methods. Analytically redundant systems use parameter identification and state estimation methods based on system models to detect and isolate faults of instruments (sensors), actuators and components. In this paper the authors use the Luenberger observer to estimate three state variables of the electromagnetic suspension system: position (airgap), vehicle velocity, and vertical acceleration. These estimates are compared with the corresponding sensor outputs for fault detection. In this paper, they consider fault detection and isolation (FDI) of the accelerometer, the sensor which provides the ride quality.
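
    The sketch below shows the observer-based fault detection idea on a toy second-order model: a Luenberger observer reconstructs the state from the measured output, and a sensor bias fault is flagged when the measurement residual exceeds a threshold. The plant, observer poles, fault size and threshold are illustrative assumptions, not the paper's electromagnetic suspension dynamics.

    ```python
    # Luenberger observer with a residual-based sensor fault flag.
    import numpy as np
    from scipy.signal import place_poles

    dt = 1e-3
    A = np.array([[0.0, 1.0], [-400.0, -20.0]])      # toy second-order suspension-like model
    B = np.array([0.0, 1.0])
    C = np.array([[1.0, 0.0]])                       # only the position (airgap) is measured

    # Observer gain via pole placement on the dual system: A - L C gets poles -80, -90.
    L = place_poles(A.T, C.T, [-80.0, -90.0]).gain_matrix.T.flatten()

    rng = np.random.default_rng(10)
    x = np.zeros(2)                                  # true plant state
    x_hat = np.zeros(2)                              # observer state

    for k in range(3000):
        u = 0.5 * np.sin(2 * np.pi * 2 * k * dt)
        y = float(C @ x) + 1e-5 * rng.standard_normal()   # position sensor reading
        if k > 1500:
            y += 0.02                                # injected sensor bias fault at t = 1.5 s
        innovation = y - float(C @ x_hat)            # residual between sensor and estimate
        # Euler integration of plant and observer.
        x = x + dt * (A @ x + B * u)
        x_hat = x_hat + dt * (A @ x_hat + B * u + L * innovation)
        if k % 500 == 0:
            print(f"t = {k*dt:.2f} s  |residual| = {abs(innovation):.2e}  "
                  f"fault flagged: {abs(innovation) > 5e-4}")
    ```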

  10. A compensation-defect model for the joint probability density function of the scalar difference and the length scale of dissipation elements

    NASA Astrophysics Data System (ADS)

    Wang, Lipo; Peters, Norbert

    2008-06-01

    Dissipation element analysis is a new approach for studying turbulent scalar fields. Gradient trajectories starting from each material point in a fluctuating scalar field ϕ'(x⃗,t) in ascending and descending directions will inevitably reach a maximal and a minimal point. The ensemble of material points sharing the same pair of ending points is named a dissipation element. Dissipation elements can be parametrized by the length scale l and the scalar difference Δϕ', which are defined as the length of the straight line connecting the two extremal points and the scalar difference at these points, respectively. The decomposition of a turbulent field into dissipation elements is space filling. This allows us to reconstruct certain statistical quantities of fine scale turbulence which cannot be obtained otherwise. The marginal probability density function (PDF) of the length scale distribution was modeled in previous work based on a Poisson random cutting-reconnection process and was compared with data from direct numerical simulation (DNS). The joint PDF of l and Δϕ' contains the important information needed for the modeling of scalar mixing in turbulence, such as the marginal PDF of the length of elements and conditional moments, as well as their scaling exponents. In order to be able to predict these quantities, there is a need to model the joint PDF. A compensation-defect model is put forward in this work, and the agreement between the model prediction and DNS results is satisfactory.