Science.gov

Sample records for probability-density estimation method

  1. A new parametric method of estimating the joint probability density

    NASA Astrophysics Data System (ADS)

    Alghalith, Moawia

    2017-04-01

    We present simple parametric methods that overcome major limitations of the literature on joint/marginal density estimation. In doing so, we do not assume any form of marginal or joint distribution. Furthermore, using our method, a multivariate density can be easily estimated if we know only one of the marginal densities. We apply our methods to financial data.

  2. Structural Reliability Using Probability Density Estimation Methods Within NESSUS

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C. (Technical Monitor); Godines, Cody Ric

    2003-01-01

    A reliability analysis studies a mathematical model of a physical system while taking into account uncertainties in the design variables; a common result is an estimate of the response density, which in turn implies estimates of its parameters. Common density parameters include the mean value, the standard deviation, and specific percentile(s) of the response, which are measures of central tendency, variation, and probability regions, respectively. Reliability analyses are important because their results can guide design choices by quantifying the probability of observing safe responses in each of the proposed designs. All of this comes at the expense of added computational time compared with a single deterministic analysis, which yields only one value of the response out of the many that make up the response density. Sampling methods, such as Monte Carlo (MC) and Latin hypercube sampling (LHS), can be used to perform reliability analyses and can compute nonlinear response density parameters even if the response depends on many random variables. Hence, both methods are very robust; however, they are computationally expensive when used to estimate the response density parameters. Both are two of the 13 stochastic methods contained within the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) program. NESSUS is a probabilistic finite element analysis (FEA) program that was developed through funding from NASA Glenn Research Center (GRC). It can also be linked to other analysis programs; therefore, probabilistic fluid dynamics, fracture mechanics, and heat transfer are only a few of the analyses possible with this software. The LHS method is the newest addition to the stochastic methods within NESSUS, and part of this work was to enhance NESSUS with the LHS method. The new LHS module is complete, has been successfully integrated with NESSUS, and has been used to study four different test cases that have been
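
    As an aside, the sketch below illustrates in generic terms how sampling methods of this kind estimate response-density parameters; it is not the NESSUS/LHS module, and the response function, variable distributions, and sample size are hypothetical stand-ins for an FEA run.

        # Hypothetical illustration (not the NESSUS implementation): estimate the mean,
        # standard deviation, and 99th percentile of a response R = f(X1, X2) by plain
        # Monte Carlo and by Latin hypercube sampling.
        import numpy as np
        from scipy.stats import norm, qmc

        def response(x1, x2):
            # Stand-in for an expensive finite element run.
            return x1**2 + 3.0 * np.sin(x2)

        n = 2000
        rng = np.random.default_rng(0)

        # Plain Monte Carlo: independent draws of the two normal design variables.
        x_mc = rng.normal(loc=[10.0, 2.0], scale=[1.0, 0.5], size=(n, 2))
        r_mc = response(x_mc[:, 0], x_mc[:, 1])

        # Latin hypercube sampling: stratified uniforms mapped through the inverse CDF.
        u = qmc.LatinHypercube(d=2, seed=0).random(n)
        x_lhs = norm.ppf(u, loc=[10.0, 2.0], scale=[1.0, 0.5])
        r_lhs = response(x_lhs[:, 0], x_lhs[:, 1])

        for name, r in (("MC", r_mc), ("LHS", r_lhs)):
            print(name, r.mean(), r.std(ddof=1), np.percentile(r, 99))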

  3. Simplified Computation for Nonparametric Windows Method of Probability Density Function Estimation.

    PubMed

    Joshi, Niranjan; Kadir, Timor; Brady, Michael

    2011-08-01

    Recently, Kadir and Brady proposed a method for estimating probability density functions (PDFs) for digital signals which they call the Nonparametric (NP) Windows method. The method involves constructing a continuous space representation of the discrete space and sampled signal by using a suitable interpolation method. NP Windows requires only a small number of observed signal samples to estimate the PDF and is completely data driven. In this short paper, we first develop analytical formulae to obtain the NP Windows PDF estimates for 1D, 2D, and 3D signals, for different interpolation methods. We then show that the original procedure to calculate the PDF estimate can be significantly simplified and made computationally more efficient by a judicious choice of the frame of reference. We have also outlined specific algorithmic details of the procedures enabling quick implementation. Our reformulation of the original concept has directly demonstrated a close link between the NP Windows method and the Kernel Density Estimator.

  4. Nonparametric maximum likelihood estimation of probability densities by penalty function methods

    NASA Technical Reports Server (NTRS)

    Demontricher, G. F.; Tapia, R. A.; Thompson, J. R.

    1974-01-01

    Unless it is known a priori exactly to which finite-dimensional manifold the probability density function giving rise to a set of samples belongs, the parametric maximum likelihood estimation procedure leads to poor estimates and is unstable, while the nonparametric maximum likelihood procedure is undefined. A very general theory of maximum penalized likelihood estimation, which should avoid many of these difficulties, is presented. It is demonstrated that each reproducing kernel Hilbert space leads, in a very natural way, to a maximum penalized likelihood estimator and that a well-known class of reproducing kernel Hilbert spaces gives polynomial splines as the nonparametric maximum penalized likelihood estimates.

  5. METAPHOR: a machine-learning-based method for the probability density estimation of photometric redshifts

    NASA Astrophysics Data System (ADS)

    Cavuoti, S.; Amaro, V.; Brescia, M.; Vellucci, C.; Tortora, C.; Longo, G.

    2017-02-01

    A variety of fundamental astrophysical science topics require the determination of very accurate photometric redshifts (photo-z). A plethora of methods have been developed, based either on template model fitting or on empirical explorations of the photometric parameter space. Machine-learning-based techniques are not explicitly dependent on physical priors and are able to produce accurate photo-z estimates within the photometric ranges derived from the spectroscopic training set. These estimates, however, are not easy to characterize in terms of a photo-z probability density function (PDF), because the analytical relation mapping the photometric parameters on to the redshift space is virtually unknown. We present METAPHOR (Machine-learning Estimation Tool for Accurate PHOtometric Redshifts), a method designed to provide a reliable PDF of the error distribution for empirical techniques. The method is implemented as a modular workflow, whose internal engine for photo-z estimation makes use of the MLPQNA neural network (Multi Layer Perceptron with Quasi Newton learning rule), with the possibility to easily replace the specific machine-learning model chosen to predict photo-z. We present a summary of results on SDSS-DR9 galaxy data, used also to perform a direct comparison with PDFs obtained by the LE PHARE spectral energy distribution template fitting. We show that METAPHOR is capable of estimating the precision and reliability of photometric redshifts obtained with three different self-adaptive techniques, i.e. MLPQNA, Random Forest and the standard K-Nearest Neighbors models.

  6. An empirical method for estimating probability density functions of gridded daily minimum and maximum temperature

    NASA Astrophysics Data System (ADS)

    Lussana, C.

    2013-04-01

    The presented work investigates the probability density functions (PDFs) of gridded daily minimum (TN) and maximum (TX) temperature, with the intent of both characterising a region and detecting extreme values. The empirical PDF estimation procedure has been realised using the most recent years of gridded temperature analysis fields available at ARPA Lombardia, in Northern Italy. The spatial interpolation is based on an implementation of Optimal Interpolation using observations from a dense surface network of automated weather stations. An effort has been made to identify both the time period and the spatial areas with a stable data density; otherwise, the elaboration could be influenced by the unsettled station distribution. The PDF used in this study is based on the Gaussian distribution; nevertheless, it is designed to have an asymmetrical (skewed) shape in order to enable distinction between warming and cooling events. Once the occurrence of extreme events has been properly defined, the information can be delivered to users on a local scale in a concise way, such as: TX extremely cold/hot or TN extremely cold/hot.

  7. Probability density estimation using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Likas, Aristidis

    2001-04-01

    We present an approach for the estimation of probability density functions (pdfs) given a set of observations. It is based on the use of feedforward multilayer neural networks with sigmoid hidden units. The particular characteristic of the method is that the output of the network is not itself a pdf; therefore, the computation of the network's integral is required. When this integral cannot be performed analytically, one is forced to resort to numerical integration techniques. It turns out that this is quite tricky when coupled with subsequent training procedures. Several modifications of the original approach (Modha and Fainman, 1994) are proposed, most of them related to the numerical treatment of the integral and the employment of a preprocessing phase in which the network parameters are initialized using supervised training. Experimental results using several test problems indicate that the proposed method is very effective and in most cases superior to the method of Gaussian mixtures.
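
    The sketch below shows only the normalization idea described above, with hypothetical fixed network weights: a positive network output is turned into a pdf by dividing by its numerically computed integral. The likelihood training loop that the paper couples to this step is omitted.

        # Minimal sketch of the normalization step, with hypothetical fixed weights
        # (the paper's contribution, coupling this with likelihood training, is omitted).
        import numpy as np

        rng = np.random.default_rng(1)
        W1, b1 = rng.normal(size=(8, 1)), rng.normal(size=8)   # 8 sigmoid hidden units
        W2, b2 = rng.normal(size=8), 0.0                        # linear output unit

        def net(x):
            h = 1.0 / (1.0 + np.exp(-(np.outer(x, W1[:, 0]) + b1)))  # sigmoid activations
            return np.exp(h @ W2 + b2)                                # positive, unnormalized output

        grid = np.linspace(-5.0, 5.0, 2001)
        y = net(grid)
        Z = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(grid))   # trapezoidal integral of the output

        def pdf(x):
            return net(np.atleast_1d(x)) / Z

        p = pdf(grid)
        print(np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(grid)))  # ~1.0 by construction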

  8. Estimation of probability densities using scale-free field theories.

    PubMed

    Kinney, Justin B

    2014-07-01

    The question of how best to estimate a continuous probability density from finite data is an intriguing open problem at the interface of statistics and physics. Previous work has argued that this problem can be addressed in a natural way using methods from statistical field theory. Here I describe results that allow this field-theoretic approach to be rapidly and deterministically computed in low dimensions, making it practical for use in day-to-day data analysis. Importantly, this approach does not impose a privileged length scale for smoothness of the inferred probability density, but rather learns a natural length scale from the data due to the tradeoff between goodness of fit and an Occam factor. Open source software implementing this method in one and two dimensions is provided.

  9. Estimation of probability densities using scale-free field theories

    NASA Astrophysics Data System (ADS)

    Kinney, Justin B.

    2014-07-01

    The question of how best to estimate a continuous probability density from finite data is an intriguing open problem at the interface of statistics and physics. Previous work has argued that this problem can be addressed in a natural way using methods from statistical field theory. Here I describe results that allow this field-theoretic approach to be rapidly and deterministically computed in low dimensions, making it practical for use in day-to-day data analysis. Importantly, this approach does not impose a privileged length scale for smoothness of the inferred probability density, but rather learns a natural length scale from the data due to the tradeoff between goodness of fit and an Occam factor. Open source software implementing this method in one and two dimensions is provided.

  10. Online Reinforcement Learning Using a Probability Density Estimation.

    PubMed

    Agostini, Alejandro; Celaya, Enric

    2017-01-01

    Function approximation in online, incremental, reinforcement learning needs to deal with two fundamental problems: biased sampling and nonstationarity. In this kind of task, biased sampling occurs because samples are obtained from specific trajectories dictated by the dynamics of the environment and are usually concentrated in particular convergence regions, which in the long term tend to dominate the approximation in the less sampled regions. The nonstationarity comes from the recursive nature of the estimations typical of temporal difference methods. This nonstationarity has a local profile, varying not only along the learning process but also along different regions of the state space. We propose to deal with these problems using an estimation of the probability density of samples represented with a Gaussian mixture model. To deal with the nonstationarity problem, we use the common approach of introducing a forgetting factor in the updating formula. However, instead of using the same forgetting factor for the whole domain, we make it dependent on the local density of samples, which we use to estimate the nonstationarity of the function at any given input point. To address the biased sampling problem, the forgetting factor applied to each mixture component is modulated according to the new information provided in the updating, rather than forgetting depending only on time, thus avoiding undesired distortions of the approximation in less sampled regions.
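
    The following is a loose, hypothetical illustration of density-dependent forgetting, not the authors' update equations: an online one-dimensional Gaussian mixture in which forgetting is applied to a component only in proportion to the responsibility the incoming sample gives it.

        # Loose illustration of density-dependent forgetting (not the authors' equations):
        # an online 1-D Gaussian mixture whose per-component forgetting factor depends on
        # the responsibility the new sample assigns to that component.
        import numpy as np

        mu = np.array([-1.0, 1.0])      # component means
        var = np.array([1.0, 1.0])      # component variances
        w = np.array([0.5, 0.5])        # mixing weights
        s = np.array([1.0, 1.0])        # accumulated responsibility per component
        LAM_MIN = 0.95                  # strongest forgetting when a component gets full responsibility

        def gauss(x, m, v):
            return np.exp(-0.5 * (x - m) ** 2 / v) / np.sqrt(2.0 * np.pi * v)

        def update(x):
            global mu, var, w, s
            r = w * gauss(x, mu, var)
            r = r / r.sum()                         # responsibilities of the new sample
            lam = 1.0 - (1.0 - LAM_MIN) * r         # forget only where new information arrives
            s = lam * s + r
            eta = r / s                             # per-component learning rates
            d = x - mu
            mu = mu + eta * d
            var = np.maximum(var + eta * (d ** 2 - var), 1e-3)
            w = s / s.sum()

        rng = np.random.default_rng(2)
        for x in rng.normal(2.0, 0.5, size=500):    # data drifted to a new region (nonstationarity)
            update(x)
        print(mu, var, w)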

  11. Conditional Probability Density Functions Arising in Bearing Estimation

    DTIC Science & Technology

    1994-05-01

    [Fragmentary record: only report-documentation and table-of-contents text is available.] The recoverable content indicates that the report derives the conditional probability density functions arising in bearing-angle estimation and compares results obtained using the calculated density functions with a better-known performance measure: the Cramer-Rao bound. Listed topics include the sampling interval, propagation delay, and covariance singularities.

  12. Probability Density and CFAR Threshold Estimation for Hyperspectral Imaging

    SciTech Connect

    Clark, G A

    2004-09-21

    The work reported here shows the proof of principle (using a small data set) for a suite of algorithms designed to estimate the probability density function of hyperspectral background data and compute the appropriate Constant False Alarm Rate (CFAR) matched filter decision threshold for a chemical plume detector. Future work will provide a thorough demonstration of the algorithms and their performance with a large data set. The LASI (Large Aperture Search Initiative) Project involves instrumentation and image processing for hyperspectral images of chemical plumes in the atmosphere. The work reported here involves research and development on algorithms for reducing the false alarm rate in chemical plume detection and identification algorithms operating on hyperspectral image cubes. The chemical plume detection algorithms to date have used matched filters designed using generalized maximum likelihood ratio hypothesis testing algorithms [1, 2, 5, 6, 7, 12, 10, 11, 13]. One of the key challenges in hyperspectral imaging research is the high false alarm rate that often results from the plume detector [1, 2]. The overall goal of this work is to extend the classical matched filter detector to apply Constant False Alarm Rate (CFAR) methods to reduce the false alarm rate, or Probability of False Alarm P_FA, of the matched filter [4, 8, 9, 12]. A detector designer is interested in minimizing the probability of false alarm while simultaneously maximizing the probability of detection P_D. This is summarized by the Receiver Operating Characteristic Curve (ROC) [10, 11], which is actually a family of curves depicting P_D vs. P_FA parameterized by varying levels of signal to noise (or clutter) ratio (SNR or SCR). Often, it is advantageous to be able to specify a desired P_FA and develop a ROC curve (P_D vs. decision threshold r_0) for that case. That is the purpose of this work. Specifically, this work develops a set of algorithms and MATLAB
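
    As a toy proof of principle in the same spirit (not the LASI algorithms or their MATLAB code), the sketch below estimates the background distribution of matched-filter scores and sets the decision threshold r_0 so that the probability of false alarm matches a target P_FA; the background scores are simulated.

        # Toy CFAR threshold selection from an estimated background distribution.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        background = rng.normal(0.0, 1.0, size=20000)   # stand-in for plume-free filter outputs
        P_FA = 1e-3

        # Nonparametric choice: empirical (1 - P_FA) quantile of the background scores.
        r0_empirical = np.quantile(background, 1.0 - P_FA)

        # Parametric choice: fit a Gaussian background model and invert its upper tail.
        mu, sigma = stats.norm.fit(background)
        r0_gaussian = stats.norm.ppf(1.0 - P_FA, loc=mu, scale=sigma)

        print(r0_empirical, r0_gaussian)
        print("achieved P_FA:", np.mean(background > r0_empirical))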

  13. Application of a maximum entropy method to estimate the probability density function of nonlinear or chaotic behavior in structural health monitoring data

    NASA Astrophysics Data System (ADS)

    Livingston, Richard A.; Jin, Shuang

    2005-05-01

    Bridges and other civil structures can exhibit nonlinear and/or chaotic behavior under ambient traffic or wind loadings. The probability density function (pdf) of the observed structural responses thus plays an important role for long-term structural health monitoring, LRFR and fatigue life analysis. However, the actual pdf of such structural response data often has a very complicated shape due to its fractal nature. Various conventional methods to approximate it can often lead to biased estimates. This paper presents recent research progress at the Turner-Fairbank Highway Research Center of the FHWA in applying a novel probabilistic scaling scheme for enhanced maximum entropy evaluation to find the most unbiased pdf. The maximum entropy method is applied with a fractal interpolation formulation based on contraction mappings through an iterated function system (IFS). Based on a fractal dimension determined from the entire response data set by an algorithm involving the information dimension, a characteristic uncertainty parameter, called the probabilistic scaling factor, can be introduced. This allows significantly enhanced maximum entropy evaluation through the added inferences about the fine scale fluctuations in the response data. Case studies using the dynamic response data sets collected from a real world bridge (Commodore Barry Bridge, PA) and from the simulation of a classical nonlinear chaotic system (the Lorenz system) are presented in this paper. The results illustrate the advantages of the probabilistic scaling method over conventional approaches for finding the unbiased pdf especially in the critical tail region that contains the larger structural responses.

  14. Nonparametric probability density estimation by optimization theoretic techniques

    NASA Technical Reports Server (NTRS)

    Scott, D. W.

    1976-01-01

    Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability estimate uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
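
    For the first estimator discussed above, a minimal modern illustration of a kernel estimate whose scaling factor is chosen automatically from the sample is given below; it uses SciPy's gaussian_kde, whose default bandwidth rule is the n**(-1/(d+4)) rule of thumb commonly attributed to Scott, and is not the 1976 interactive procedure itself.

        # Kernel density estimate with an automatically chosen scaling factor.
        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(4)
        sample = np.concatenate([rng.normal(-2.0, 0.5, 300), rng.normal(1.0, 1.0, 700)])

        kde = gaussian_kde(sample)               # bandwidth chosen from the sample itself
        x = np.linspace(-5.0, 5.0, 401)
        density = kde(x)

        print("scaling factor:", kde.factor)     # n**(-1/5) for one-dimensional data
        print("approximate integral:", np.sum(density) * (x[1] - x[0]))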

  15. Probability Density Function Method for Langevin Equations with Colored Noise

    SciTech Connect

    Wang, Peng; Tartakovsky, Alexandre M.; Tartakovsky, Daniel M.

    2013-04-05

    We present a novel method to derive closed-form, computable PDF equations for Langevin systems with colored noise. The derived equations govern the dynamics of joint or marginal probability density functions (PDFs) of state variables, and rely on a so-called Large-Eddy-Diffusivity (LED) closure. We demonstrate the accuracy of the proposed PDF method for linear and nonlinear Langevin equations, describing the classical Brownian displacement and dispersion in porous media.

  16. A Non-Parametric Probability Density Estimator and Some Applications.

    DTIC Science & Technology

    1984-05-01

    [Fragmentary record: only front matter and list-of-figures text is available.] The thesis, by Ronald P. Fuchs, Major, USAF (School of Engineering), develops a non-parametric probability density estimator and some applications; the listed figures and tables cover sensitivity to support estimation, density estimates with and without subsampling, comparisons of distribution-function average square errors, and ASE for basic and parameterized estimates.

  17. Estimating probability densities from short samples: A parametric maximum likelihood approach

    NASA Astrophysics Data System (ADS)

    Dudok de Wit, T.; Floriani, E.

    1998-10-01

    A parametric method similar to autoregressive spectral estimators is proposed to determine the probability density function (PDF) of a random data set. The method proceeds by maximizing the likelihood of the PDF, yielding estimates that perform equally well in the tails as in the bulk of the distribution. It is therefore well suited for the analysis of short data sets drawn from smooth PDFs and stands out by the simplicity of its computational scheme. Its advantages and limitations are discussed.
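
    In the same spirit (but not the authors' parametrization, which is not reproduced here), the sketch below fits a low-order exponential-family density to a deliberately short sample by maximizing the likelihood, with the normalization constant evaluated numerically.

        # Hedged sketch: maximum likelihood fit of p(x) ~ exp(t1*x + t2*x^2 + t3*x^3 + t4*x^4)
        # to a short sample, normalizing numerically on a grid.
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(5)
        sample = rng.normal(0.3, 1.2, size=80)          # deliberately short sample

        grid = np.linspace(sample.min() - 5.0, sample.max() + 5.0, 2001)
        dx = grid[1] - grid[0]
        grid_pow = np.vstack([grid**k for k in (1, 2, 3, 4)])
        samp_pow = np.vstack([sample**k for k in (1, 2, 3, 4)])

        def neg_log_likelihood(theta):
            log_unnorm = theta @ grid_pow
            m = log_unnorm.max()
            log_Z = m + np.log(np.sum(np.exp(log_unnorm - m)) * dx)   # log of the normalizer
            return -(np.sum(theta @ samp_pow) - len(sample) * log_Z)

        res = minimize(neg_log_likelihood, x0=np.array([0.0, -0.5, 0.0, -0.01]),
                       method="Nelder-Mead")
        print(res.x, res.fun)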

  18. METAPHOR: Probability density estimation for machine learning based photometric redshifts

    NASA Astrophysics Data System (ADS)

    Amaro, V.; Cavuoti, S.; Brescia, M.; Vellucci, C.; Tortora, C.; Longo, G.

    2017-06-01

    We present METAPHOR (Machine-learning Estimation Tool for Accurate PHOtometric Redshifts), a method able to provide a reliable PDF for photometric galaxy redshifts estimated through empirical techniques. METAPHOR is a modular workflow, mainly based on the MLPQNA neural network as the internal engine to derive photometric galaxy redshifts, but allowing MLPQNA to be easily replaced by any other method to predict photo-z's and their PDF. We present here the results of a validation test of the workflow on galaxies from SDSS-DR9, also showing the universality of the method by replacing MLPQNA with KNN and Random Forest models. The validation test also includes a comparison with the PDFs derived from a traditional SED template fitting method (Le Phare).

  19. Numerical methods for high-dimensional probability density function equations

    SciTech Connect

    Cho, H.; Venturi, D.; Karniadakis, G.E.

    2016-01-15

    In this paper we address the problem of computing the numerical solution to kinetic partial differential equations involving many phase variables. These types of equations arise naturally in many different areas of mathematical physics, e.g., in particle systems (Liouville and Boltzmann equations), stochastic dynamical systems (Fokker–Planck and Dostupov–Pugachev equations), random wave theory (Malakhov–Saichev equations) and coarse-grained stochastic systems (Mori–Zwanzig equations). We propose three different classes of new algorithms addressing high-dimensionality: The first one is based on separated series expansions resulting in a sequence of low-dimensional problems that can be solved recursively and in parallel by using alternating direction methods. The second class of algorithms relies on truncation of interaction in low-orders that resembles the Bogoliubov–Born–Green–Kirkwood–Yvon (BBGKY) framework of kinetic gas theory and it yields a hierarchy of coupled probability density function equations. The third class of algorithms is based on high-dimensional model representations, e.g., the ANOVA method and probabilistic collocation methods. A common feature of all these approaches is that they are reducible to the problem of computing the solution to high-dimensional equations via a sequence of low-dimensional problems. The effectiveness of the new algorithms is demonstrated in numerical examples involving nonlinear stochastic dynamical systems and partial differential equations, with up to 120 variables.

  1. Numerical methods for high-dimensional probability density function equations

    NASA Astrophysics Data System (ADS)

    Cho, H.; Venturi, D.; Karniadakis, G. E.

    2016-01-01

    In this paper we address the problem of computing the numerical solution to kinetic partial differential equations involving many phase variables. These types of equations arise naturally in many different areas of mathematical physics, e.g., in particle systems (Liouville and Boltzmann equations), stochastic dynamical systems (Fokker-Planck and Dostupov-Pugachev equations), random wave theory (Malakhov-Saichev equations) and coarse-grained stochastic systems (Mori-Zwanzig equations). We propose three different classes of new algorithms addressing high-dimensionality: The first one is based on separated series expansions resulting in a sequence of low-dimensional problems that can be solved recursively and in parallel by using alternating direction methods. The second class of algorithms relies on truncation of interaction in low-orders that resembles the Bogoliubov-Born-Green-Kirkwood-Yvon (BBGKY) framework of kinetic gas theory and it yields a hierarchy of coupled probability density function equations. The third class of algorithms is based on high-dimensional model representations, e.g., the ANOVA method and probabilistic collocation methods. A common feature of all these approaches is that they are reducible to the problem of computing the solution to high-dimensional equations via a sequence of low-dimensional problems. The effectiveness of the new algorithms is demonstrated in numerical examples involving nonlinear stochastic dynamical systems and partial differential equations, with up to 120 variables.

  2. Parameterizing deep convection using the assumed probability density function method

    DOE PAGES

    Storer, R. L.; Griffin, B. M.; Höft, J.; ...

    2015-01-06

    Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and midlatitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.

  3. Parameterizing deep convection using the assumed probability density function method

    DOE PAGES

    Storer, R. L.; Griffin, B. M.; Höft, J.; ...

    2014-06-11

    Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and mid-latitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.

  4. Parameterizing deep convection using the assumed probability density function method

    SciTech Connect

    Storer, R. L.; Griffin, B. M.; Hoft, Jan; Weber, J. K.; Raut, E.; Larson, Vincent E.; Wang, Minghuai; Rasch, Philip J.

    2015-01-06

    Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and mid-latitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.

  5. Probability density estimation using isocontours and isosurfaces: applications to information-theoretic image registration.

    PubMed

    Rajwade, Ajit; Banerjee, Arunava; Rangarajan, Anand

    2009-03-01

    We present a new, geometric approach for determining the probability density of the intensity values in an image. We drop the notion of an image as a set of discrete pixels, and assume a piecewise-continuous representation. The probability density can then be regarded as being proportional to the area between two nearby isocontours of the image surface. Our paper extends this idea to joint densities of image pairs. We demonstrate the application of our method to affine registration between two or more images using information theoretic measures such as mutual information. We show cases where our method outperforms existing methods such as simple histograms, histograms with partial volume interpolation, Parzen windows, etc. under fine intensity quantization for affine image registration under significant image noise. Furthermore, we demonstrate results on simultaneous registration of multiple images, as well as for pairs of volume datasets, and show some theoretical properties of our density estimator. Our approach requires the selection of only an image interpolant. The method neither requires any kind of kernel functions (as in Parzen windows) which are unrelated to the structure of the image in itself, nor does it rely on any form of sampling for density estimation.
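
    For context, the sketch below implements the standard joint-histogram mutual information estimator, one of the baselines mentioned above; the isocontour/area-based estimator that the paper proposes is not reproduced here, and the test "images" are synthetic arrays.

        # Standard joint-histogram mutual information estimator (a baseline, not the
        # isocontour method of the paper).
        import numpy as np

        def mutual_information(img1, img2, bins=32):
            joint, _, _ = np.histogram2d(img1.ravel(), img2.ravel(), bins=bins)
            pxy = joint / joint.sum()                       # joint pmf of quantized intensities
            px = pxy.sum(axis=1, keepdims=True)
            py = pxy.sum(axis=0, keepdims=True)
            nz = pxy > 0
            return np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))

        rng = np.random.default_rng(6)
        a = rng.random((64, 64))
        b = 0.7 * a + 0.3 * rng.random((64, 64))            # partially dependent image pair
        print(mutual_information(a, b), mutual_information(a, rng.random((64, 64))))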

  6. Probability Density Estimation Using Isocontours and Isosurfaces: Application to Information-Theoretic Image Registration

    PubMed Central

    Rajwade, Ajit; Banerjee, Arunava; Rangarajan, Anand

    2010-01-01

    We present a new geometric approach for determining the probability density of the intensity values in an image. We drop the notion of an image as a set of discrete pixels and assume a piecewise-continuous representation. The probability density can then be regarded as being proportional to the area between two nearby isocontours of the image surface. Our paper extends this idea to joint densities of image pairs. We demonstrate the application of our method to affine registration between two or more images using information-theoretic measures such as mutual information. We show cases where our method outperforms existing methods such as simple histograms, histograms with partial volume interpolation, Parzen windows, etc., under fine intensity quantization for affine image registration under significant image noise. Furthermore, we demonstrate results on simultaneous registration of multiple images, as well as for pairs of volume data sets, and show some theoretical properties of our density estimator. Our approach requires the selection of only an image interpolant. The method neither requires any kind of kernel functions (as in Parzen windows), which are unrelated to the structure of the image in itself, nor does it rely on any form of sampling for density estimation. PMID:19147876

  7. Goal-Oriented Probability Density Function Methods for Uncertainty Quantification

    DTIC Science & Technology

    2015-12-11

    [Fragmentary record: only report-documentation text and part of a list of presentations is available.] The recoverable items include a presentation on probability density function methods in the stochastic modeling of nonlinear dynamical systems (University of Delaware, Apr. 17th, 2015) and a further presentation by D. Venturi.

  8. SAR amplitude probability density function estimation based on a generalized Gaussian model.

    PubMed

    Moser, Gabriele; Zerubia, Josiane; Serpico, Sebastiano B

    2006-06-01

    In the context of remotely sensed data analysis, an important problem is the development of accurate models for the statistics of the pixel intensities. Focusing on synthetic aperture radar (SAR) data, this modeling process turns out to be a crucial task, for instance, for classification or for denoising purposes. In this paper, an innovative parametric estimation methodology for SAR amplitude data is proposed that adopts a generalized Gaussian (GG) model for the complex SAR backscattered signal. A closed-form expression for the corresponding amplitude probability density function (PDF) is derived and a specific parameter estimation algorithm is developed in order to deal with the proposed model. Specifically, the recently proposed "method-of-log-cumulants" (MoLC) is applied, which stems from the adoption of the Mellin transform (instead of the usual Fourier transform) in the computation of characteristic functions and from the corresponding generalization of the concepts of moment and cumulant. For the developed GG-based amplitude model, the resulting MoLC estimates turn out to be numerically feasible and are also analytically proved to be consistent. The proposed parametric approach was validated by using several real ERS-1, XSAR, E-SAR, and NASA/JPL airborne SAR images, and the experimental results prove that the method models the amplitude PDF better than several previously proposed parametric models for backscattering phenomena.

  9. Dictionary-based probability density function estimation for high-resolution SAR data

    NASA Astrophysics Data System (ADS)

    Krylov, Vladimir; Moser, Gabriele; Serpico, Sebastiano B.; Zerubia, Josiane

    2009-02-01

    In the context of remotely sensed data analysis, a crucial problem is represented by the need to develop accurate models for the statistics of pixel intensities. In this work, we develop a parametric finite mixture model for the statistics of pixel intensities in high resolution synthetic aperture radar (SAR) images. This method is an extension of a previously existing method for lower resolution images. The method integrates the stochastic expectation maximization (SEM) scheme and the method of log-cumulants (MoLC) with an automatic technique to select, for each mixture component, an optimal parametric model taken from a predefined dictionary of parametric probability density functions (pdfs). The proposed dictionary consists of eight state-of-the-art SAR-specific pdfs: Nakagami, log-normal, generalized Gaussian Rayleigh, heavy-tailed Rayleigh, Weibull, K-root, Fisher and generalized Gamma. The designed scheme is endowed with a novel initialization procedure and an algorithm to automatically estimate the optimal number of mixture components. The experimental results with a set of several high resolution COSMO-SkyMed images demonstrate the high accuracy of the designed algorithm, both from the viewpoint of a visual comparison of the histograms, and from the viewpoint of quantitative accuracy measures such as the correlation coefficient (above 99.5%). The method proves to be effective on all the considered images, remaining accurate for multimodal and highly heterogeneous scenes.

  10. Unification of field theory and maximum entropy methods for learning probability densities.

    PubMed

    Kinney, Justin B

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.

  11. A Tomographic Method for the Reconstruction of Local Probability Density Functions

    NASA Technical Reports Server (NTRS)

    Sivathanu, Y. R.; Gore, J. P.

    1993-01-01

    A method of obtaining the probability density function (PDF) of local properties from path integrated measurements is described. The approach uses a discrete probability function (DPF) method to infer the PDF of the local extinction coefficient from measurements of the PDFs of the path integrated transmittance. The local PDFs obtained using the method are compared with those obtained from direct intrusive measurements in propylene/air and ethylene/air diffusion flames. The results of this comparison are good.

  12. Fast and accurate probability density estimation in large high dimensional astronomical datasets

    NASA Astrophysics Data System (ADS)

    Gupta, Pramod; Connolly, Andrew J.; Gardner, Jeffrey P.

    2015-01-01

    Astronomical surveys will generate measurements of hundreds of attributes (e.g. color, size, shape) on hundreds of millions of sources. Analyzing these large, high dimensional data sets will require efficient algorithms for data analysis. An example of this is probability density estimation, which is at the heart of many classification problems such as the separation of stars and quasars based on their colors. Popular density estimation techniques use binning or kernel density estimation. Kernel density estimation has a small memory footprint but often requires large computational resources. Binning has small computational requirements, but binning is usually implemented with multi-dimensional arrays, which leads to memory requirements that scale exponentially with the number of dimensions. Hence both techniques do not scale well to large data sets in high dimensions. We present an alternative approach of binning implemented with hash tables (BASH tables). This approach uses the sparseness of data in the high dimensional space to ensure that the memory requirements are small. However, hashing requires some extra computation, so a priori it is not clear if the reduction in memory requirements will lead to increased computational requirements. Through an implementation of BASH tables in C++ we show that the additional computational requirements of hashing are negligible. Hence this approach has small memory and computational requirements. We apply our density estimation technique to photometric selection of quasars using non-parametric Bayesian classification and show that the accuracy of the classification is the same as the accuracy of earlier approaches. Since the BASH table approach is one to three orders of magnitude faster than the earlier approaches, it may be useful in various other applications of density estimation in astrostatistics.
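
    The core idea is straightforward to sketch. The Python fragment below (the paper's implementation is in C++, and the bin edges, widths, and synthetic data here are hypothetical) stores counts in a dictionary keyed by the tuple of bin indices, so memory grows with the number of occupied bins rather than with the full multi-dimensional array.

        # Sketch of binning with a hash table: only occupied bins consume memory.
        import numpy as np
        from collections import defaultdict

        def build_bash_table(data, low, width):
            counts = defaultdict(int)
            idx = np.floor((data - low) / width).astype(int)
            for key in map(tuple, idx):
                counts[key] += 1
            return counts, len(data)

        def density(counts, n, point, low, width):
            key = tuple(np.floor((point - low) / width).astype(int))
            return counts.get(key, 0) / (n * np.prod(width))

        rng = np.random.default_rng(7)
        colors = rng.normal(size=(100000, 6))                  # e.g. six photometric colors
        low, width = np.full(6, -5.0), np.full(6, 0.5)
        table, n = build_bash_table(colors, low, width)
        print("occupied bins:", len(table), "of", 20 ** 6, "possible")
        print("density near the origin:", density(table, n, np.zeros(6), low, width))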

  13. Probability density based gradient projection method for inverse kinematics of a robotic human body model.

    PubMed

    Lura, Derek; Wernke, Matthew; Alqasemi, Redwan; Carey, Stephanie; Dubey, Rajiv

    2012-01-01

    This paper presents the probability-density-based gradient projection (GP) of the null space of the Jacobian for a 25 degree-of-freedom bilateral robotic human body model (RHBM). This method was used to predict the inverse kinematics of the RHBM and maximize the similarity between predicted inverse kinematic poses and recorded data of 10 subjects performing activities of daily living. The density function was created for discrete increments of the workspace. The number of increments in each direction (x, y, and z) was varied from 1 to 20. Performance of the method was evaluated by finding the root mean square (RMS) of the difference between the predicted joint angles and the joint angles recorded from motion capture. The amount of data included in the creation of the probability density function was varied from 1 to 10 subjects, creating sets of subjects included in and excluded from the density function. The performance of the GP method for subjects included and excluded from the density function was evaluated to test the robustness of the method. Accuracy of the GP method varied with the degree of incremental division of the workspace: increasing the number of increments decreased the RMS error of the method, with the average RMS error of included subjects ranging from 7.7° to 3.7°. However, increasing the number of increments also decreased the robustness of the method.
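
    For readers unfamiliar with gradient projection, the sketch below shows the generic null-space projection step on a planar 3-link arm rather than the 25-DOF RHBM; the secondary objective H is a hypothetical Gaussian log-density around a "most probable" pose, standing in for the density built from the motion-capture data.

        # Generic null-space gradient projection on a planar 3-link arm.
        import numpy as np

        L = np.array([0.30, 0.30, 0.25])                 # hypothetical link lengths

        def fk(q):
            a = np.cumsum(q)
            return np.array([np.sum(L * np.cos(a)), np.sum(L * np.sin(a))])

        def jacobian(q):
            a = np.cumsum(q)
            J = np.zeros((2, 3))
            for i in range(3):
                J[0, i] = -np.sum(L[i:] * np.sin(a[i:]))
                J[1, i] = np.sum(L[i:] * np.cos(a[i:]))
            return J

        q_pref = np.array([0.4, 0.6, -0.3])              # stand-in for the density mode
        grad_H = lambda q: -(q - q_pref)                 # gradient of the Gaussian log-density

        q = np.array([0.1, 0.2, 0.1])
        target = np.array([0.55, 0.35])
        for _ in range(200):                             # simple resolved-rate iteration
            J = jacobian(q)
            J_pinv = np.linalg.pinv(J)
            dq = J_pinv @ (target - fk(q)) + (np.eye(3) - J_pinv @ J) @ grad_H(q)
            q = q + 0.1 * dq                             # task motion plus null-space preference
        print(fk(q), q)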

  14. A probability density function discretization and approximation method for the dynamic load identification of stochastic structures

    NASA Astrophysics Data System (ADS)

    Liu, Jie; Sun, Xingsheng; Li, Kun; Jiang, Chao; Han, Xu

    2015-11-01

    For structures containing random parameters with multi-peak probability density functions (PDFs) or large coefficients of variation, an analytical method of probability density function discretization and approximation (PDFDA) is proposed for dynamic load identification. Dynamic loads are expressed as functions of time and of the random parameters in the time domain, and the forward model is established through the discretized convolution integral of the loads and the corresponding unit-pulse response functions. The PDF of each random parameter is discretized into several subintervals, and in each subinterval the original PDF curve is approximated by a uniform-distribution PDF with equal probability value. Then the joint distribution model is built and the equivalent deterministic equations are solved to identify the unknown loads. Inverse analysis is performed separately for each variable in the joint distribution model, using regularization because of the noise-contaminated measured responses. In order to assess the accuracy of the identified results, PDF curves and statistical properties of the loads are obtained based on the specially assumed distributions of the identified loads. Numerical simulations demonstrate the efficiency and superiority of the presented method.

  15. Application of maximum likelihood to direct methods: the probability density function of the triple-phase sums. XI.

    PubMed

    Rius, Jordi

    2006-09-01

    The maximum-likelihood method is applied to direct methods to derive a more general probability density function of the triple-phase sums which is capable of predicting negative values. This study also proves that maximization of the origin-free modulus sum function S yields, within the limitations imposed by the assumed approximations, the maximum-likelihood estimates of the phases. It thus represents the formal theoretical justification of the S function that was initially derived from Patterson-function arguments [Rius (1993). Acta Cryst. A49, 406-409].

  16. A Monte Carlo method for the PDF (Probability Density Functions) equations of turbulent flow

    NASA Astrophysics Data System (ADS)

    Pope, S. B.

    1980-02-01

    The transport equations of joint probability density functions (pdfs) in turbulent flows are simulated using a Monte Carlo method because finite difference solutions of the equations are impracticable, mainly due to the large dimensionality of the pdfs. Attention is focused on the equation for the joint pdf of chemical and thermodynamic properties in turbulent reactive flows. It is shown that the Monte Carlo method provides a true simulation of this equation and that the amount of computation required increases only linearly with the number of properties considered. Consequently, the method can be used to solve the pdf equation for turbulent flows involving many chemical species and complex reaction kinetics. Monte Carlo calculations of the pdf of temperature in a turbulent mixing layer are reported. These calculations are in good agreement with the measurements of Batt (1977).

  17. A H-infinity Fault Detection and Diagnosis Scheme for Discrete Nonlinear System Using Output Probability Density Estimation

    SciTech Connect

    Zhang Yumin; Lum, Kai-Yew; Wang Qingguo

    2009-03-05

    In this paper, an H-infinity fault detection and diagnosis (FDD) scheme for a class of discrete nonlinear systems using output probability density estimation is presented. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process, and its square-root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time dynamic model that includes nonlinearities and uncertainties. A weighted mean value is defined as an integral of the square-root PDF along the space direction, which yields a function of time only and can be used to construct a residual signal. Thus, the classical nonlinear filter approach can be used to detect and diagnose faults in the system. A feasible detection criterion is obtained first, and a new H-infinity adaptive fault diagnosis algorithm is further investigated to estimate the fault. A simulation example is given to demonstrate the effectiveness of the proposed approaches.

  18. A probability density function method for detecting atrial fibrillation using R-R intervals.

    PubMed

    Hong-Wei, Lu; Ying, Sun; Min, Lin; Pi-Ding, Li; Zheng, Zheng

    2009-01-01

    A probability density function (PDF) method is proposed for investigating the structure of the reconstructed attractor of R-R intervals. By constructing the PDF of the distance between two points in the reconstructed phase space of R-R intervals of normal sinus rhythm (NSR) and atrial fibrillation (AF), it is found that the distributions of the PDF of NSR and AF R-R intervals have significant differences. By taking advantage of these differences, a characteristic parameter k(n), which represents the sum of the slopes at n points in the filtered PDF curve, is put forward to classify 400 segments each of NSR and AF R-R intervals from the MIT-BIH Atrial Fibrillation database. Parameters such as the number of R-R intervals, the number of embedding dimensions and the slope are optimized for the best detection performance. Results demonstrate that the new algorithm has a fast response speed, with R-R interval segments as short as 40, and shows a sensitivity of 0.978 and a specificity of 0.990 in the best detection performance.
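
    As an illustration of the first step only (phase-space reconstruction and the distance PDF), with hypothetical embedding dimension, segment length, and synthetic R-R series, the sketch below is not the paper's detector and does not compute k(n).

        # Distance PDF of a time-delay embedded R-R interval series.
        import numpy as np

        def distance_pdf(rr, dim=3, bins=50):
            # Time-delay embedding with unit delay: row j is (rr[j], rr[j+1], ..., rr[j+dim-1]).
            n = len(rr) - dim + 1
            emb = np.stack([rr[i:i + n] for i in range(dim)], axis=1)
            diff = emb[:, None, :] - emb[None, :, :]
            d = np.sqrt((diff ** 2).sum(-1))[np.triu_indices(n, k=1)]   # pairwise distances
            hist, edges = np.histogram(d, bins=bins, density=True)
            return hist, edges

        rng = np.random.default_rng(8)
        rr_nsr = 0.80 + 0.05 * np.sin(np.arange(200) / 5.0) + 0.01 * rng.normal(size=200)  # regular rhythm
        rr_af = 0.80 + 0.15 * rng.normal(size=200)                                          # irregular rhythm
        for name, rr in (("NSR", rr_nsr), ("AF", rr_af)):
            pdf, _ = distance_pdf(rr)
            print(name, np.round(pdf[:5], 2))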

  19. Wall-boundary conditions in probability density function methods and application to a turbulent channel flow

    NASA Astrophysics Data System (ADS)

    Minier, Jean-Pierre; Pozorski, Jacek

    1999-09-01

    An application of a probability density function (PDF), or Lagrangian stochastic, approach to the case of high-Reynolds number wall-bounded turbulent flows is presented. The model simulates the instantaneous velocity and dissipation rate attached to a large number of particles and the wall-boundary conditions are formulated directly in terms of the particle properties. The present conditions aim at reproducing statistical results of the logarithmic region and are therefore in the spirit of wall functions. A new derivation of these boundary conditions and a discussion of the resulting behavior for different mean variables, such as the Reynolds stress components, is proposed. Thus, the present paper complements the work of Dreeben and Pope [Phys. Fluids 9, 2692 (1997)] who proposed similar wall-boundary particle conditions. Numerical implementation of these conditions in a standalone two-dimensional PDF code and a pressure-correction algorithm are detailed. Moments up to the fourth order are presented for a high-Reynolds number channel flow and are analyzed. This case helps to clarify how the method works in practice, to validate the boundary conditions and to assess the model and the code performance.

  20. Probability density function method for variable-density pressure-gradient-driven turbulence and mixing

    SciTech Connect

    Bakosi, Jozsef; Ristorcelli, Raymond J

    2010-01-01

    Probability density function (PDF) methods are extended to variable-density pressure-gradient-driven turbulence. We apply the new method to compute the joint PDF of density and velocity in a non-premixed binary mixture of different-density molecularly mixing fluids under gravity. The full time-evolution of the joint PDF is captured in the highly non-equilibrium flow: starting from a quiescent state, transitioning to fully developed turbulence and finally dissipated by molecular diffusion. High-Atwood-number effects (as distinguished from the Boussinesq case) are accounted for: both hydrodynamic turbulence and material mixing are treated at arbitrary density ratios, with the specific volume, mass flux and all their correlations in closed form. An extension of the generalized Langevin model, originally developed for the Lagrangian fluid particle velocity in constant-density shear-driven turbulence, is constructed for variable-density pressure-gradient-driven flows. The persistent small-scale anisotropy, a fundamentally 'non-Kolmogorovian' feature of flows under external acceleration forces, is captured by a tensorial diffusion term based on the external body force. The material mixing model for the fluid density, an active scalar, is developed based on the beta distribution. The beta-PDF is shown to be capable of capturing the mixing asymmetry and that it can accurately represent the density through transition, in fully developed turbulence and in the decay process. The joint model for hydrodynamics and active material mixing yields a time-accurate evolution of the turbulent kinetic energy and Reynolds stress anisotropy without resorting to gradient diffusion hypotheses, and represents the mixing state by the density PDF itself, eliminating the need for dubious mixing measures. Direct numerical simulations of the homogeneous Rayleigh-Taylor instability are used for model validation.

  1. ARMA Estimators of Probability Densities with Exponential or Regularly Varying Fourier Coefficients.

    DTIC Science & Technology

    1987-06-01

    [Fragmentary record: only scattered body text and reference-list entries are available.] The recoverable text discusses the selection of the smoothing parameter of the estimator (see Hart 1985 and Diggle and Hall 1986 for more on this subject) and the integrated squared errors of the cross... Reference fragment: Diggle, P.J. and Hall, P. (1986). The selection of terms in an orthogonal series density estimator. J. Amer. Statist. Assoc. 81, 230-233.

  2. Spline Histogram Method for Reconstruction of Probability Density Functions of Clusters of Galaxies

    NASA Astrophysics Data System (ADS)

    Docenko, Dmitrijs; Berzins, Karlis

    We describe the spline histogram algorithm, which is useful for visualizing the probability density function when setting up a statistical hypothesis test. The spline histogram is constructed from discrete data measurements using tensioned cubic spline interpolation of the cumulative distribution function, which is then differentiated and smoothed using the Savitzky-Golay filter. The optimal width of the filter is determined by minimization of the Integrated Square Error function. The current distribution of the TCSplin algorithm, written in f77 with IDL and Gnuplot visualization scripts, is available from www.virac.lv/en/soft.html.
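
    A rough Python sketch of the pipeline just described is given below, with two stated simplifications: an ordinary (untensioned) cubic spline replaces the tensioned spline, and the Savitzky-Golay window width is fixed rather than chosen by minimizing the integrated square error as in TCSplin.

        # Simplified spline-histogram pipeline: empirical CDF -> spline -> derivative -> smoothing.
        import numpy as np
        from scipy.interpolate import CubicSpline
        from scipy.signal import savgol_filter

        rng = np.random.default_rng(9)
        data = np.sort(rng.normal(0.0, 1.0, size=300))          # e.g. velocities in a cluster

        cdf = (np.arange(1, len(data) + 1) - 0.5) / len(data)   # empirical CDF at the data points
        spline = CubicSpline(data, cdf)                         # interpolate the CDF

        x = np.linspace(data[0], data[-1], 1000)
        pdf_raw = spline(x, 1)                                  # derivative of the interpolated CDF
        pdf = np.clip(savgol_filter(pdf_raw, window_length=101, polyorder=3), 0.0, None)

        print(np.sum(0.5 * (pdf[1:] + pdf[:-1]) * np.diff(x)))  # should be close to 1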

  3. Non-stationary random vibration analysis of a 3D train-bridge system using the probability density evolution method

    NASA Astrophysics Data System (ADS)

    Yu, Zhi-wu; Mao, Jian-feng; Guo, Feng-qi; Guo, Wei

    2016-03-01

    Rail irregularity is one of the main sources of train-bridge random vibration. A new random vibration theory for coupled train-bridge systems is proposed in this paper. First, the number theory method (NTM) with 2N-dimensional vectors for the stochastic harmonic function (SHF) of the rail irregularity power spectral density was adopted to determine the representative points of spatial frequencies and phases used to generate the random rail irregularity samples, and the non-stationary rail irregularity samples were modulated with a slowly varying function. Second, the probability density evolution method (PDEM) was employed to calculate the random dynamic vibration of the three-dimensional (3D) train-bridge system with a program compiled on the MATLAB® software platform. Finally, the Newmark-β integration method and the double-edge difference method of total variation diminishing (TVD) format were adopted to obtain the mean value curve, the standard deviation curve and the time-history probability density information of the responses. A case study is presented in which the ICE-3 train travels on a three-span simply supported high-speed railway bridge with excitation by random rail irregularity. The results showed that, compared to Monte Carlo simulation, the PDEM has higher computational efficiency for the same accuracy, i.e., an improvement by 1-2 orders of magnitude. Additionally, the influences of rail irregularity and train speed on the random vibration of the coupled train-bridge system are discussed.

  4. A Galerkin-based formulation of the probability density evolution method for general stochastic finite element systems

    NASA Astrophysics Data System (ADS)

    Papadopoulos, Vissarion; Kalogeris, Ioannis

    2016-05-01

    The present paper proposes a Galerkin finite element projection scheme for the solution of the partial differential equations (PDEs) involved in the probability density evolution method, for the linear and nonlinear static analysis of stochastic systems. According to the principle of preservation of probability, the probability density evolution of a stochastic system is expressed by its corresponding Fokker-Planck (FP) stochastic partial differential equation. Direct integration of the FP equation is feasible only for simple systems with a small number of degrees of freedom, due to analytical and/or numerical intractability. However, by rewriting the FP equation conditioned on the random event description, a generalized density evolution equation (GDEE) can be obtained, which can be reduced to a one-dimensional PDE. Two Galerkin finite element schemes are proposed for the numerical solution of the resulting PDEs, namely a time-marching discontinuous Galerkin scheme and the Streamline Upwind/Petrov-Galerkin (SUPG) scheme. In addition, a reformulation of the classical GDEE is proposed, which implements the principle of probability preservation in space instead of time, making this approach suitable for the stochastic analysis of finite element systems. The advantages of the FE Galerkin methods, and in particular the SUPG scheme, over finite difference schemes such as the modified Lax-Wendroff, which is the most frequently used method for the solution of the GDEE, are illustrated with numerical examples and explored further.

  5. A method for evaluating the expectation value of a power spectrum using the probability density function of phases

    SciTech Connect

    Caliandro, G.A.; Torres, D.F.; Rea, N. E-mail: dtorres@aliga.ieec.uab.es

    2013-07-01

    Here, we present a new method to evaluate the expectation value of the power spectrum of a time series. A statistical approach is adopted to define the method. After its demonstration, it is validated by showing that it leads to the known properties of the power spectrum when the time series contains a periodic signal. The approach is also validated in general with numerical simulations. The method puts into evidence the important role played by the probability density function of the phases associated with each time stamp for a given frequency, and how this distribution can be perturbed by the uncertainties of the parameters in the pulsar ephemeris. We applied this method to solve the power spectrum in the case in which the first derivative of the pulsar frequency is unknown and not negligible. We also undertook the study of the most general case of a blind search, in which both the frequency and its first derivative are uncertain. We found the analytical solutions of the above cases invoking the sum of Fresnel's integrals squared.

  6. Urban stormwater capture curve using three-parameter mixed exponential probability density function and NRCS runoff curve number method.

    PubMed

    Kim, Sangdan; Han, Suhee

    2010-01-01

    Most related literature regarding the design of urban non-point-source management systems assumes that precipitation event depths follow the 1-parameter exponential probability density function, in order to reduce the mathematical complexity of the derivation process. However, the way the rainfall is expressed is the most important factor for analyzing stormwater; thus, a better mathematical expression, which represents the probability distribution of rainfall depths, is suggested in this study. Also, the rainfall-runoff calculation procedure required for deriving a stormwater-capture curve is modified using the U.S. Natural Resources Conservation Service (Washington, D.C.) (NRCS) runoff curve number method, to account for the nonlinearity of the rainfall-runoff relation and, at the same time, to obtain a more verifiable and representative curve for design when applying it to urban drainage areas with complicated land-use characteristics, such as occur in Korea. The result of developing the stormwater-capture curve from the rainfall data in Busan, Korea, confirms that the methodology suggested in this study provides a better solution than the pre-existing one.
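
    A minimal sketch of the NRCS curve number relation used in that rainfall-runoff step, assuming depths in inches and the conventional 0.2S initial-abstraction ratio (the function name and example values are illustrative):

      def nrcs_runoff_depth(p_in, cn):
          """Direct runoff depth Q (inches) for event rainfall depth p_in and curve number cn."""
          s = 1000.0 / cn - 10.0      # potential maximum retention, S
          ia = 0.2 * s                # initial abstraction, Ia = 0.2 S (conventional default)
          if p_in <= ia:
              return 0.0
          return (p_in - ia) ** 2 / (p_in - ia + s)

      print(nrcs_runoff_depth(2.0, 80))   # runoff from a 2-inch event on curve-number-80 land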

  7. Cell survival fraction estimation based on the probability densities of domain and cell nucleus specific energies using improved microdosimetric kinetic models.

    PubMed

    Sato, Tatsuhiko; Furusawa, Yoshiya

    2012-10-01

    Estimation of the survival fractions of cells irradiated with various particles over a wide linear energy transfer (LET) range is of great importance in the treatment planning of charged-particle therapy. Two computational models were developed for estimating survival fractions based on the concept of the microdosimetric kinetic model. They were designated as the double-stochastic microdosimetric kinetic and stochastic microdosimetric kinetic models. The former model takes into account the stochastic natures of both domain and cell nucleus specific energies, whereas the latter model represents the stochastic nature of domain specific energy by its approximated mean value and variance to reduce the computational time. The probability densities of the domain and cell nucleus specific energies are the fundamental quantities for expressing survival fractions in these models. These densities are calculated using the microdosimetric and LET-estimator functions implemented in the Particle and Heavy Ion Transport code System (PHITS) in combination with the convolution or database method. Both the double-stochastic microdosimetric kinetic and stochastic microdosimetric kinetic models can reproduce the measured survival fractions for high-LET and high-dose irradiations, whereas a previously proposed microdosimetric kinetic model predicts lower values for these fractions, mainly due to intrinsic ignorance of the stochastic nature of cell nucleus specific energies in the calculation. The models we developed should contribute to a better understanding of the mechanism of cell inactivation, as well as improve the accuracy of treatment planning of charged-particle therapy.

  8. Sensor Fusion Based on an Integrated Neural Network and Probability Density Function (PDF) Dual Kalman Filter for On-Line Estimation of Vehicle Parameters and States.

    PubMed

    Vargas-Melendez, Leandro; Boada, Beatriz L; Boada, Maria Jesus L; Gauchia, Antonio; Diaz, Vicente

    2017-04-29

    Vehicles with a high center of gravity (COG), such as light trucks and heavy vehicles, are prone to rollover. This kind of accident causes nearly 33% of all deaths from passenger vehicle crashes. Nowadays, these vehicles are incorporating roll stability control (RSC) systems to improve their safety. Most of the RSC systems require the vehicle roll angle as a known input variable to predict the lateral load transfer. The vehicle roll angle can be directly measured by a dual antenna global positioning system (GPS), but it is expensive. For this reason, it is important to estimate the vehicle roll angle from sensors installed onboard in current vehicles. On the other hand, knowledge of the vehicle's parameter values is essential to obtain an accurate vehicle response. Some vehicle parameters cannot be easily obtained, and they can vary over time. In this paper, an algorithm for the simultaneous on-line estimation of the vehicle's roll angle and parameters is proposed. This algorithm uses a probability density function (PDF)-based truncation method in combination with a dual Kalman filter (DKF) to guarantee that both the vehicle's states and parameters remain within bounds that have a physical meaning, using the information obtained from sensors mounted on vehicles. Experimental results show the effectiveness of the proposed algorithm.
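
    As a rough one-dimensional illustration of the truncation idea (not the authors' DKF algorithm), a Gaussian estimate of a single state can be constrained to a physically meaningful interval by replacing it with the corresponding truncated normal distribution; the helper and the example numbers below are assumptions:

      from scipy.stats import truncnorm

      def truncate_estimate(mean, std, lo, hi):
          # Express the physical bounds in standard-normal units, then use the
          # truncated normal's moments as the constrained estimate.
          a, b = (lo - mean) / std, (hi - mean) / std
          dist = truncnorm(a, b, loc=mean, scale=std)
          return dist.mean(), dist.std()

      # e.g. a roll-angle estimate of 9 deg +/- 4 deg constrained to [-8, 8] deg
      m, s = truncate_estimate(9.0, 4.0, -8.0, 8.0)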

  9. Sensor Fusion Based on an Integrated Neural Network and Probability Density Function (PDF) Dual Kalman Filter for On-Line Estimation of Vehicle Parameters and States

    PubMed Central

    Vargas-Melendez, Leandro; Boada, Beatriz L.; Boada, Maria Jesus L.; Gauchia, Antonio; Diaz, Vicente

    2017-01-01

    Vehicles with a high center of gravity (COG), such as light trucks and heavy vehicles, are prone to rollover. This kind of accident causes nearly 33% of all deaths from passenger vehicle crashes. Nowadays, these vehicles are incorporating roll stability control (RSC) systems to improve their safety. Most of the RSC systems require the vehicle roll angle as a known input variable to predict the lateral load transfer. The vehicle roll angle can be directly measured by a dual antenna global positioning system (GPS), but it is expensive. For this reason, it is important to estimate the vehicle roll angle from sensors installed onboard in current vehicles. On the other hand, knowledge of the vehicle’s parameter values is essential to obtain an accurate vehicle response. Some vehicle parameters cannot be easily obtained, and they can vary over time. In this paper, an algorithm for the simultaneous on-line estimation of the vehicle’s roll angle and parameters is proposed. This algorithm uses a probability density function (PDF)-based truncation method in combination with a dual Kalman filter (DKF) to guarantee that both the vehicle’s states and parameters remain within bounds that have a physical meaning, using the information obtained from sensors mounted on vehicles. Experimental results show the effectiveness of the proposed algorithm. PMID:28468252

  10. Numerical calculation of light scattering from metal and dielectric randomly rough Gaussian surfaces using microfacet slope probability density function based method

    NASA Astrophysics Data System (ADS)

    Wang, Shouyu; Xue, Liang; Yan, Keding

    2017-07-01

    Light scattering from randomly rough surfaces is of great significance in various fields such as remote sensing and target identification. As numerical methods can obtain scattering distributions without complex setups and complicated operations, they have become important tools in light scattering studies. However, most of them suffer from a huge computing load and low operating efficiency, limiting their applications in dynamic measurements and high-speed detection. Here, to overcome these disadvantages, a microfacet slope probability density function based method is presented, providing scattering information without computing an ensemble average over numerous scattered fields; thus it can obtain light scattering distributions with extremely fast speed. Additionally, it can reach high computing accuracy, as quantitatively verified against mature light scattering computing algorithms. It is believed the provided approach is useful in light scattering studies and offers potential for real-time detection.

  11. Probability density functions for hyperbolic and isodiachronic locations.

    PubMed

    Spiesberger, John L; Wahlberg, Magnus

    2002-12-01

    Animal locations are sometimes estimated with hyperbolic techniques by estimating the difference in distances of their sounds between pairs of receivers. Each pair specifies the animal's location to a hyperboloid because the speed of sound is assumed to be spatially homogeneous. Sufficient numbers of intersecting hyperboloids specify the location. A nonlinear method is developed for computing probability density functions for location. The method incorporates a priori probability density functions for the receiver locations, the speed of sound, winds, and the errors in the differences in travel time. The traditional linear approximation method overestimates bounds for probability density functions by one or two orders of magnitude compared with the more accurate nonlinear method. The nonlinear method incorporates a generalization of hyperbolic methods because the average speed of sound is allowed to vary between different receivers and the source. The resulting "isodiachronic" surface is the locus of points on which the difference in travel time is constant. Isodiachronic locations yield correct location errors in situations where hyperbolic methods yield incorrect results, particularly when the speed of propagation varies significantly between a source and different receivers.
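
    In symbols (our paraphrase of the description above), with the source at x, receivers at r_i and r_j, path-averaged sound speeds c_i and c_j, and a measured travel-time difference τ_ij, the isodiachronic surface is the set of points satisfying

      \[
        \frac{\lVert \mathbf{x} - \mathbf{r}_i \rVert}{c_i}
        - \frac{\lVert \mathbf{x} - \mathbf{r}_j \rVert}{c_j} = \tau_{ij} ,
      \]

    which reduces to the familiar hyperboloid when c_i = c_j.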

  12. Predicting critical transitions in dynamical systems from time series using nonstationary probability density modeling.

    PubMed

    Kwasniok, Frank

    2013-11-01

    A time series analysis method for predicting the probability density of a dynamical system is proposed. A nonstationary parametric model of the probability density is estimated from data within a maximum likelihood framework and then extrapolated to forecast the future probability density and explore the system for critical transitions or tipping points. A full systematic account of parameter uncertainty is taken. The technique is generic, independent of the underlying dynamics of the system. The method is verified on simulated data and then applied to prediction of Arctic sea-ice extent.

  13. Predicting critical transitions in dynamical systems from time series using nonstationary probability density modeling

    NASA Astrophysics Data System (ADS)

    Kwasniok, Frank

    2013-11-01

    A time series analysis method for predicting the probability density of a dynamical system is proposed. A nonstationary parametric model of the probability density is estimated from data within a maximum likelihood framework and then extrapolated to forecast the future probability density and explore the system for critical transitions or tipping points. A full systematic account of parameter uncertainty is taken. The technique is generic, independent of the underlying dynamics of the system. The method is verified on simulated data and then applied to prediction of Arctic sea-ice extent.

  14. Retrievals of atmospheric columnar carbon dioxide and methane from GOSAT observations with photon path-length probability density function (PPDF) method

    NASA Astrophysics Data System (ADS)

    Bril, A.; Oshchepkov, S.; Yokota, T.; Yoshida, Y.; Morino, I.; Uchino, O.; Belikov, D. A.; Maksyutov, S. S.

    2014-12-01

    We retrieved the column-averaged dry air mole fraction of atmospheric carbon dioxide (XCO2) and methane (XCH4) from the radiance spectra measured by the Greenhouse gases Observing SATellite (GOSAT) for 48 months of the satellite operation from June 2009. A recent version of the Photon path-length Probability Density Function (PPDF)-based algorithm was used to estimate XCO2 and optical path modifications in terms of PPDF parameters. We also present results of numerical simulations for over-land observations and "sharp edge" tests for sun-glint mode to discuss the algorithm accuracy under conditions of strong optical path modification. For the methane abundance retrieved from the 1.67-µm absorption band we applied an optical path correction based on PPDF parameters from the 1.6-µm carbon dioxide (CO2) absorption band. Similarly to the CO2-proxy technique, this correction assumes identical light path modifications in the 1.67-µm and 1.6-µm bands. However, the proxy approach needs pre-defined XCO2 values to compute XCH4, whilst the PPDF-based approach does not use prior assumptions on CO2 concentrations. Post-processing data correction for XCO2 and XCH4 over land observations was performed using a regression matrix based on multivariate analysis of variance (MANOVA). The MANOVA statistics were applied to the GOSAT retrievals using reference collocated measurements of the Total Carbon Column Observing Network (TCCON). The regression matrix was constructed using the parameters that were found to correlate with GOSAT-TCCON discrepancies: PPDF parameters α and ρ, which are mainly responsible for shortening and lengthening of the optical path due to atmospheric light scattering; solar and satellite zenith angles; surface pressure; and surface albedo in three GOSAT short wave infrared (SWIR) bands. Application of the post-correction generally improves the statistical characteristics of the GOSAT-TCCON correlation diagrams for individual stations as well as for aggregated data. In addition to the analysis of the

  15. Trajectory versus probability density entropy.

    PubMed

    Bologna, M; Grigolini, P; Karagiorgis, M; Rosa, A

    2001-07-01

    We show that the widely accepted conviction that a connection can be established between the probability density entropy and the Kolmogorov-Sinai (KS) entropy is questionable. We adopt the definition of density entropy as a functional of a distribution density whose time evolution is determined by a transport equation, conceived as the only prescription to use for the calculation. Although the transport equation is built up for the purpose of affording a picture equivalent to that stemming from trajectory dynamics, no direct use of trajectory time evolution is allowed once the transport equation is defined. With this definition in mind we prove that the detection of a time regime of increase of the density entropy with a rate identical to the KS entropy is possible only in a limited number of cases. The proposals made by some authors to establish a connection between the two entropies in general violate our definition of density entropy and imply the concept of trajectory, which is foreign to that of density entropy.

  16. Modulation Based on Probability Density Functions

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.

    2009-01-01

    A proposed method of modulating a sinusoidal carrier signal to convey digital information involves the use of histograms representing probability density functions (PDFs) that characterize samples of the signal waveform. The method is based partly on the observation that when a waveform is sampled (whether by analog or digital means) over a time interval at least as long as one half cycle of the waveform, the samples can be sorted by frequency of occurrence, thereby constructing a histogram representing a PDF of the waveform during that time interval.

  17. A weighted bootstrap method for the determination of probability density functions of freshwater distribution coefficients (Kds) of Co, Cs, Sr and I radioisotopes.

    PubMed

    Durrieu, G; Ciffroy, P; Garnier, J-M

    2006-11-01

    The objective of the study was to provide global probability density functions (PDFs) representing the uncertainty of distribution coefficients (Kds) in freshwater for radioisotopes of Co, Cs, Sr and I. A comprehensive database containing Kd values referenced in 61 articles was first built, and quality scores were assigned to each data point according to various criteria (e.g. presentation of data, contact times, pH, solid-to-liquid ratio, expert judgement). A weighted bootstrapping procedure was then set up in order to build PDFs, in such a way that more importance is given to the most relevant data points (i.e. those corresponding to typical natural environments). However, it was also assessed that the relevance and the robustness of the PDFs determined by our procedure depended on the number of Kd values in the database. Owing to the large database, conditional PDFs were also proposed for site studies where some parametric information is known (e.g. pH, contact time between radionuclides and particles, solid-to-liquid ratio). Such conditional PDFs reduce the uncertainty on the Kd values. These global and conditional PDFs are useful for end-users of dose models because the uncertainty and sensitivity of Kd values are taken into account.
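
    A minimal sketch of a weighted bootstrap of this kind, with quality scores used as sampling weights (the helper, the Kd values and the scores below are illustrative assumptions, not data from the study):

      import numpy as np

      def weighted_bootstrap(values, weights, n_boot=5000, rng=None):
          rng = np.random.default_rng(rng)
          values = np.asarray(values, dtype=float)
          p = np.asarray(weights, dtype=float)
          p = p / p.sum()                       # normalize quality scores to probabilities
          # Each bootstrap replicate resamples the Kd values with probability
          # proportional to their quality scores.
          return rng.choice(values, size=(n_boot, values.size), replace=True, p=p)

      kd = [120.0, 300.0, 80.0, 1500.0]         # hypothetical Kd values (L/kg)
      scores = [3, 5, 1, 4]                     # hypothetical quality scores
      boot = weighted_bootstrap(kd, scores)
      geo_means = np.exp(np.log(boot).mean(axis=1))   # bootstrap distribution of the geometric mean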

  18. Effects of combined dimension reduction and tabulation on the simulations of a turbulent premixed flame using a large-eddy simulation/probability density function method

    NASA Astrophysics Data System (ADS)

    Kim, Jeonglae; Pope, Stephen B.

    2014-05-01

    A turbulent lean-premixed propane-air flame stabilised by a triangular cylinder as a flame-holder is simulated to assess the accuracy and computational efficiency of combined dimension reduction and tabulation of chemistry. The computational condition matches the Volvo rig experiments. For the reactive simulation, the Lagrangian Large-Eddy Simulation/Probability Density Function (LES/PDF) formulation is used. A novel two-way coupling approach between LES and PDF is applied to obtain resolved density to reduce its statistical fluctuations. Composition mixing is evaluated by the modified Interaction-by-Exchange with the Mean (IEM) model. A baseline case uses In Situ Adaptive Tabulation (ISAT) to calculate chemical reactions efficiently. Its results demonstrate good agreement with the experimental measurements in turbulence statistics, temperature, and minor species mass fractions. For dimension reduction, 11 and 16 represented species are chosen and a variant of Rate Controlled Constrained Equilibrium (RCCE) is applied in conjunction with ISAT to each case. All the quantities in the comparison are indistinguishable from the baseline results using ISAT only. The combined use of RCCE/ISAT reduces the computational time for chemical reaction by more than 50%. However, for the current turbulent premixed flame, chemical reaction takes only a minor portion of the overall computational cost, in contrast to non-premixed flame simulations using LES/PDF, presumably due to the restricted manifold of purely premixed flame in the composition space. Instead, composition mixing is the major contributor to cost reduction since the mean-drift term, which is computationally expensive, is computed for the reduced representation. Overall, a reduction of more than 15% in the computational cost is obtained.

  19. Adaptive Hessian-based Non-stationary Gaussian Process Response Surface Method for Probability Density Approximation with Application to Bayesian Solution of Large-scale Inverse Problems

    DTIC Science & Technology

    2011-10-01

    covariance matrix, for example, is out of the question. Usually, the method of choice for computing statistics is Markov chain Monte Carlo (MCMC). Up to a normalization constant (which is not required by Markov chain Monte Carlo methods), d(m) = π_like × π_prior. Denoting J = −log d(m), ...

  20. Use of ELVIS II platform for random process modelling and analysis of its probability density function

    NASA Astrophysics Data System (ADS)

    Maslennikova, Yu. S.; Nugmanov, I. S.

    2016-08-01

    The problem of probability density function estimation for a random process is one of the most common in practice. There are several methods to solve this problem. The laboratory work presented here uses methods of mathematical statistics to detect patterns in realizations of a random process. On the basis of ergodic theory, we construct an algorithm for estimating the univariate probability density function of a random process. Correlation analysis of the realizations is applied to estimate the necessary sample size and observation time. Hypothesis testing for two probability distributions (normal and Cauchy) is performed on the experimental data using the χ2 criterion. To facilitate understanding and to clarify the problem being solved, we use the ELVIS II platform and the LabVIEW software package, which allow us to make the necessary calculations, display the results of the experiment and, most importantly, control the experiment. At the same time, students are introduced to the LabVIEW software package and its capabilities.
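
    A compact sketch of the same workflow outside LabVIEW, assuming a synthetic realization, a histogram-based density estimate, and a χ2 goodness-of-fit check against a fitted normal distribution (bin count and sample size are arbitrary choices):

      import numpy as np
      from scipy import stats

      x = np.random.default_rng(0).normal(size=2000)        # stand-in for the recorded realization

      counts, edges = np.histogram(x, bins=20)
      density = counts / (counts.sum() * np.diff(edges))     # histogram estimate of the PDF

      mu, sigma = x.mean(), x.std(ddof=1)                    # fit the normal hypothesis
      expected = counts.sum() * np.diff(stats.norm(mu, sigma).cdf(edges))
      chi2_stat = ((counts - expected) ** 2 / expected).sum()
      # Compare chi2_stat with stats.chi2.ppf(0.95, df=len(counts) - 1 - 2) to accept or
      # reject the normal hypothesis; a Cauchy hypothesis can be tested the same way.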

  1. Velocity analysis with local event slopes related probability density function

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Lu, Wenkai; Zhang, Yingqiang

    2015-12-01

    Macro velocity model plays a key role in seismic imaging and inversion. The performance of traditional velocity analysis methods is degraded by multiples and amplitude-versus-offset (AVO) anomalies. Local event slopes, containing the subsurface velocity information, have been widely used to accomplish common time-domain seismic processing, imaging and velocity estimation. In this paper, we propose a method for velocity analysis with probability density function (PDF) related to local event slopes. We first estimate local event slopes with phase information in the Fourier domain. An adaptive filter is applied to improve the performance of slopes estimator in the low signal-to-noise ratio (SNR) situation. Second, the PDF is approximated with the histogram function, which is related to attributes derived from local event slopes. As a graphical representation of the data distribution, the histogram function can be computed efficiently. By locating the ray path of the first arrival on the semblance image with straight-ray segments assumption, automatic velocity picking is carried out to establish velocity model. Unlike local event slopes based velocity estimation strategies such as averaging filters and image warping, the proposed method does not make the assumption that the errors of mapped velocity values are symmetrically distributed or that the variation of amplitude along the offset is slight. Extension of the method to prestack time-domain migration velocity estimation is also given. With synthetic and field examples, we demonstrate that our method can achieve high resolution, even in the presence of multiples, strong amplitude variations and polarity reversals.

  2. Probability density functions in turbulent channel flow

    NASA Technical Reports Server (NTRS)

    Dinavahi, Surya P. G.

    1992-01-01

    The probability density functions (pdf's) of the fluctuating velocity components, as well as their first and second derivatives, are calculated using data from the direct numerical simulations (DNS) of fully developed turbulent channel flow. It is observed that, beyond the buffer region, the pdf of each of these quantities is independent of the distance from the channel wall. It is further observed that, beyond the buffer region, the pdf's for all the first derivatives collapse onto a single universal curve and those of the second derivatives also collapse onto another universal curve, irrespective of the distance from the wall. The kinetic-energy dissipation rate exhibits log normal behavior.

  3. Carrier Modulation Via Waveform Probability Density Function

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.

    2006-01-01

    Beyond the classic modes of carrier modulation by varying amplitude (AM), phase (PM), or frequency (FM), we extend the modulation domain of an analog carrier signal to include a class of general modulations which are distinguished by their probability density function histogram. Separate waveform states are easily created by varying the pdf of the transmitted waveform. Individual waveform states are assignable as proxies for digital ones or zeros. At the receiver, these states are easily detected by accumulating sampled waveform statistics and performing periodic pattern matching, correlation, or statistical filtering. No fundamental physical laws are broken in the detection process. We show how a typical modulation scheme would work in the digital domain and suggest how to build an analog version. We propose that clever variations of the modulating waveform (and thus the histogram) can provide simple steganographic encoding.

  4. Carrier Modulation Via Waveform Probability Density Function

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.

    2004-01-01

    Beyond the classic modes of carrier modulation by varying amplitude (AM), phase (PM), or frequency (FM), we extend the modulation domain of an analog carrier signal to include a class of general modulations which are distinguished by their probability density function histogram. Separate waveform states are easily created by varying the pdf of the transmitted waveform. Individual waveform states are assignable as proxies for digital ONEs or ZEROs. At the receiver, these states are easily detected by accumulating sampled waveform statistics and performing periodic pattern matching, correlation, or statistical filtering. No fundamental natural laws are broken in the detection process. We show how a typical modulation scheme would work in the digital domain and suggest how to build an analog version. We propose that clever variations of the modulating waveform (and thus the histogram) can provide simple steganographic encoding.

  5. Continuation of probability density functions using a generalized Lyapunov approach

    NASA Astrophysics Data System (ADS)

    Baars, S.; Viebahn, J. P.; Mulder, T. E.; Kuehn, C.; Wubs, F. W.; Dijkstra, H. A.

    2017-05-01

    Techniques from numerical bifurcation theory are very useful to study transitions between steady fluid flow patterns and the instabilities involved. Here, we provide computational methodology to use parameter continuation in determining probability density functions of systems of stochastic partial differential equations near fixed points, under a small noise approximation. Key innovation is the efficient solution of a generalized Lyapunov equation using an iterative method involving low-rank approximations. We apply and illustrate the capabilities of the method using a problem in physical oceanography, i.e. the occurrence of multiple steady states of the Atlantic Ocean circulation.
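
    For context (our summary of the small-noise setting, not an equation quoted from the paper): linearizing the stochastic dynamics about a fixed point as dx = Ax dt + B dW, the stationary covariance C of the approximating Gaussian probability density satisfies a Lyapunov equation,

      \[
        A C + C A^{\mathsf T} + B B^{\mathsf T} = 0 ,
      \]

    and it is a generalized form of this equation that the iterative low-rank solver mentioned above targets.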

  6. Downlink Probability Density Functions for EOS-McMurdo Sound

    NASA Technical Reports Server (NTRS)

    Christopher, P.; Jackson, A. H.

    1996-01-01

    The visibility times and communication link dynamics for the Earth Observations Satellite (EOS)-McMurdo Sound direct downlinks have been studied. The 16 day EOS periodicity may be shown with the Goddard Trajectory Determination System (GTDS), and the entire 16 day period should be simulated for representative link statistics. We desire many attributes of the downlink, however, and a faster orbital determination method is desirable. We use the method of osculating elements for speed and accuracy in simulating the EOS orbit. The accuracy of the method of osculating elements is demonstrated by closely reproducing the observed 16 day Landsat periodicity. An autocorrelation function method is used to show the correlation spike at 16 days. The entire 16 day record of passes over McMurdo Sound is then used to generate statistics for innage time, outage time, elevation angle, antenna angle rates, and propagation loss. The elevation angle probability density function is compared with a 1967 analytic approximation which has been used for medium to high altitude satellites. One practical result of this comparison is seen to be the rare occurrence of zenith passes. The new result is functionally different from the earlier result, with a heavy emphasis on low elevation angles. EOS is one of a large class of sun synchronous satellites which may be downlinked to McMurdo Sound. We examine delay statistics for an entire group of sun synchronous satellites ranging from 400 km to 1000 km altitude. Outage probability density function results are presented three dimensionally.

  7. Protein single-model quality assessment by feature-based probability density functions.

    PubMed

    Cao, Renzhi; Cheng, Jianlin

    2016-04-04

    Protein quality assessment (QA) has played an important role in protein structure prediction. We developed a novel single-model quality assessment method-Qprob. Qprob calculates the absolute error for each protein feature value against the true quality scores (i.e. GDT-TS scores) of protein structural models, and uses them to estimate its probability density distribution for quality assessment. Qprob has been blindly tested on the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11) as MULTICOM-NOVEL server. The official CASP result shows that Qprob ranks as one of the top single-model QA methods. In addition, Qprob makes contributions to our protein tertiary structure predictor MULTICOM, which is officially ranked 3rd out of 143 predictors. The good performance shows that Qprob is good at assessing the quality of models of hard targets. These results demonstrate that this new probability density distribution based method is effective for protein single-model quality assessment and is useful for protein structure prediction. The webserver of Qprob is available at: http://calla.rnet.missouri.edu/qprob/. The software is now freely available in the web server of Qprob.

  8. A method for density estimation based on expectation identities

    NASA Astrophysics Data System (ADS)

    Peralta, Joaquín; Loyola, Claudia; Loguercio, Humberto; Davis, Sergio

    2017-06-01

    We present a simple and direct method for non-parametric estimation of a one-dimensional probability density, based on the application of the recent conjugate variables theorem. The method expands the logarithm of the probability density ln P(x|I) in terms of a complete basis and numerically solves for the coefficients of the expansion using a linear system of equations. No Monte Carlo sampling is needed. We present preliminary results that show the practical usefulness of the method for modeling statistical data.

  9. Probability densities of the effective neutrino masses mβ and mββ

    NASA Astrophysics Data System (ADS)

    Di Iura, Andrea; Meloni, Davide

    2017-08-01

    We compute the probability densities of the effective neutrino masses mβ and mββ using the Kernel Density Estimate (KDE) approach applied to a distribution of points in the (mmin ,mββ) and (mβ ,mββ) planes, obtained using the available Probability Distribution Functions (PDFs) of the neutrino mixing angles and mass differences, with the additional constraints coming from cosmological data on the sum of the neutrino masses. We show that the reconstructed probability densities strongly depend on the assumed set of cosmological data: for ∑jmj ≤ 0.68 eV at 95% CL a sizable portion of the allowed values is already excluded by null results of experiments searching for mββ and mβ, whereas in the case ∑jmj ≤ 0.23 eV at 95% CL the bulk of the probability densities lies below the current bounds.
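
    The KDE step itself is standard; a minimal two-dimensional Gaussian-kernel sketch is given below, with synthetic points standing in for the (mβ, mββ) samples of the paper (the values and the default bandwidth rule are assumptions):

      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(1)
      samples = np.vstack([rng.normal(0.05, 0.010, 5000),    # hypothetical m_beta values (eV)
                           rng.normal(0.03, 0.008, 5000)])   # hypothetical m_betabeta values (eV)

      kde = gaussian_kde(samples)              # bandwidth set by Scott's rule unless overridden
      gx, gy = np.mgrid[0.0:0.10:100j, 0.0:0.08:100j]
      density = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)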

  10. Probability Density Function Analysis of Turbulent Condensation Using GPU Hardware

    NASA Astrophysics Data System (ADS)

    Keedy, Ryan; Riley, James; Aliseda, Alberto

    2014-11-01

    Growth of liquid droplets by condensation is an important phenomenon in many environmental and industrial applications. In a homogenous, supersaturated environment, condensation will tend to narrow the diameter distribution of a poly-disperse collection of droplets. However, free shear turbulence can broaden the diameter distribution due to intermittency in the mixing and by subjecting droplets to non-Gaussian supersaturation statistics. In order to understand the condensation behavior of water droplets in a turbulent flow, it is necessary to understand the dispersion of the droplets and transported scalars. We describe a hybrid approach for predicting droplet growth and dispersion in a turbulent mixing layer and compare our computational predictions to experimental data. The approach utilizes a finite-volume code to calculate the fluid velocity field and a particle-mesh Monte Carlo method to track the locations and thermodynamics of the large number of stochastic particles throughout the domain required to resolve the Probability Density Function of the water vapor and droplets. The particle tracking algorithm is designed to take advantage of the computational power of a large number of GPU cores, with significant speed-up when compared against a baseline CPU configuration.

  11. Time-dependent probability density function in cubic stochastic processes.

    PubMed

    Kim, Eun-Jin; Hollerbach, Rainer

    2016-11-01

    We report time-dependent probability density functions (PDFs) for a nonlinear stochastic process with a cubic force using analytical and computational studies. Analytically, a transition probability is formulated by using a path integral and is computed by the saddle-point solution (instanton method) and a new nonlinear transformation of time. The predicted PDF p(x,t) in general involves a time integral, and useful PDFs with explicit dependence on x and t are presented in certain limits (e.g., in the short and long time limits). Numerical simulations of the Fokker-Planck equation provide exact time evolution of the PDFs and confirm analytical predictions in the limit of weak noise. In particular, we show that transient PDFs behave drastically differently from the stationary PDFs in regard to the asymmetry (skewness) and kurtosis. Specifically, while stationary PDFs are symmetric with the kurtosis smaller than 3, transient PDFs are skewed with the kurtosis larger than 3; transient PDFs are much broader than stationary PDFs. We elucidate the effect of nonlinear interaction on the strong fluctuations and intermittency in the relaxation process.

  12. Stationary and Non-stationary Response Probability Density Function of a Beam under Poisson White Noise

    NASA Astrophysics Data System (ADS)

    Vasta, M.; Di Paola, M.

    In this paper an approximate explicit probability density function for the analysis of external oscillations of a linear and geometric nonlinear simply supported beam driven by random pulses is proposed. The adopted impulsive loading model is the Poisson white noise, that is, a process having Dirac's delta occurrences with random intensity distributed in time according to Poisson's law. The response probability density function can be obtained by solving the related Kolmogorov-Feller (KF) integro-differential equation. An approximate solution, using the path integral method, is derived by transforming the KF equation into a first-order partial differential equation. The method of characteristics is then applied to obtain an explicit solution. Different levels of approximation, depending on the physical assumptions on the transition probability density function, are found, and the solution for the response density is obtained as a series expansion using convolution integrals.

  13. Investigations of turbulent scalar fields using probability density function approach

    NASA Technical Reports Server (NTRS)

    Gao, Feng

    1991-01-01

    Scalar fields undergoing random advection have attracted much attention from researchers in both the theoretical and practical sectors. Research interest spans from the study of the small scale structures of turbulent scalar fields to the modeling and simulations of turbulent reacting flows. The probability density function (PDF) method is an effective tool in the study of turbulent scalar fields, especially for those which involve chemical reactions. It has been argued that a one-point, joint PDF approach is the one to choose from among many simulation and closure methods for turbulent combustion and chemically reacting flows based on its practical feasibility in the foreseeable future for multiple reactants. Instead of the multi-point PDF, the joint PDF of a scalar and its gradient which represents the roles of both scalar and scalar diffusion is introduced. A proper closure model for the molecular diffusion term in the PDF equation is investigated. Another direction in this research is to study the mapping closure method that has been recently proposed to deal with the PDF's in turbulent fields. This method seems to have captured the physics correctly when applied to diffusion problems. However, if the turbulent stretching is included, the amplitude mapping has to be supplemented by either adjusting the parameters representing turbulent stretching at each time step or by introducing the coordinate mapping. This technique is still under development and seems to be quite promising. The final objective of this project is to understand some fundamental properties of the turbulent scalar fields and to develop practical numerical schemes that are capable of handling turbulent reacting flows.

  14. Representation of Probability Density Functions from Orbit Determination using the Particle Filter

    NASA Technical Reports Server (NTRS)

    Mashiku, Alinda K.; Garrison, James; Carpenter, J. Russell

    2012-01-01

    Statistical orbit determination enables us to obtain estimates of the state and the statistical information of its region of uncertainty. In order to obtain an accurate representation of the probability density function (PDF) that incorporates higher order statistical information, we propose the use of nonlinear estimation methods such as the Particle Filter. The Particle Filter (PF) is capable of providing a PDF representation of the state estimates whose accuracy is dependent on the number of particles or samples used. For this method to be applicable to real case scenarios, we need a way of accurately representing the PDF in a compressed manner with little information loss. Hence we propose using the Independent Component Analysis (ICA) as a non-Gaussian dimensional reduction method that is capable of maintaining higher order statistical information obtained using the PF. Methods such as the Principal Component Analysis (PCA) are based on utilizing up to second order statistics, and hence will not suffice in maintaining maximum information content. Both the PCA and the ICA are applied to two scenarios that involve a highly eccentric orbit with a lower a priori uncertainty covariance and a less eccentric orbit with a higher a priori uncertainty covariance, to illustrate the capability of the ICA in relation to the PCA.

  15. Exact probability-density function for phase-measurement interferometry

    NASA Astrophysics Data System (ADS)

    Ho, Keang-Po; Kahn, Joseph M.

    1995-09-01

    Conventional analyses of the accuracy of phase-measurement interferometry derive a figure of merit that is either a variance or a signal-to-noise ratio. We derive the probability-density function of the phase-measurement output, so that the measurement confidence interval can be determined. We include both laser phase noise and additive Gaussian noise, and we consider both unmodulated interferometers and those employing phase or frequency modulation. For both unmodulated and modulated interferometers the confidence interval can be obtained by numerical integration of the probability-density function. For the modulated interferometer we derive a series summation for the confidence interval. For both unmodulated and modulated interferometers we derive approximate analytical expressions for the confidence interval, which we show to be extremely accurate at high signal-to-noise ratios.

  16. Probability density functions of instantaneous Stokes parameters on weak scattering

    NASA Astrophysics Data System (ADS)

    Chen, Xi; Korotkova, Olga

    2017-10-01

    The single-point probability density functions (PDF) of the instantaneous Stokes parameters of a polarized plane-wave light field scattered from a three-dimensional, statistically stationary, weak medium with Gaussian statistics and Gaussian correlation function have been studied for the first time. Apart from the scattering geometry the PDF distributions of the scattered light have been related to the illumination's polarization state and the correlation properties of the medium.

  17. Assumed Probability Density Functions for Shallow and Deep Convection

    NASA Astrophysics Data System (ADS)

    Bogenschutz, Peter A.; Krueger, Steven K.; Khairoutdinov, Marat

    2010-04-01

    The assumed joint probability density function (PDF) between vertical velocity and conserved temperature and total water scalars has been suggested to be a relatively computationally inexpensive and unified subgrid-scale (SGS) parameterization for boundary layer clouds and turbulent moments. This paper analyzes the performance of five families of PDFs using large-eddy simulations of deep convection, shallow convection, and a transition from stratocumulus to trade wind cumulus. Three of the PDF families are based on the double Gaussian form and the remaining two are the single Gaussian and a Double Delta Function (analogous to a mass flux model). The assumed PDF method is tested for grid sizes as small as 0.4 km to as large as 204.8 km. In addition, studies are performed for PDF sensitivity to errors in the input moments and for how well the PDFs diagnose some higher-order moments. In general, the double Gaussian PDFs more accurately represent SGS cloud structure and turbulence moments in the boundary layer compared to the single Gaussian and Double Delta Function PDFs for the range of grid sizes tested. This is especially true for small SGS cloud fractions. While the most complex PDF, Lewellen-Yoh, better represents shallow convective cloud properties (cloud fraction and liquid water mixing ratio) compared to the less complex Analytic Double Gaussian 1 PDF, there appears to be no advantage in implementing Lewellen-Yoh for deep convection. However, the Analytic Double Gaussian 1 PDF better represents the liquid water flux, is less sensitive to errors in the input moments, and diagnoses higher order moments more accurately. Between the Lewellen-Yoh and Analytic Double Gaussian 1 PDFs, it appears that neither family is distinctly better at representing cloudy layers. However, due to the reduced computational cost and fairly robust results, it appears that the Analytic Double Gaussian 1 PDF could be an ideal family for SGS cloud and turbulence representation in coarse

  18. A new estimator method for GARCH models

    NASA Astrophysics Data System (ADS)

    Onody, R. N.; Favaro, G. M.; Cazaroto, E. R.

    2007-06-01

    The GARCH (p, q) model is a very interesting stochastic process with widespread applications and a central role in empirical finance. The Markovian GARCH (1, 1) model has only 3 control parameters, and a much discussed question is how to estimate them when a time series of some financial asset is given. Besides the maximum likelihood estimator technique, there is another method which uses the variance, the kurtosis and the autocorrelation time to determine them. We propose here to use the standardized 6th moment. The set of parameters obtained in this way produces a very good probability density function and a much better time autocorrelation function. This is true for both studied indexes: NYSE Composite and FTSE 100. The probability of return to the origin is investigated at different time horizons for both Gaussian and Laplacian GARCH models. Although these models show almost identical performance with respect to the final probability density function and the time autocorrelation function, their scaling properties are very different. The Laplacian GARCH model gives a better scaling exponent for the NYSE time series, whereas the Gaussian dynamics fits the FTSE scaling exponent better.
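
    A minimal GARCH(1,1) sketch with Gaussian innovations, computing the sample statistics involved in such moment-based estimation (the parameter values are arbitrary illustrations, not fitted to either index):

      import numpy as np

      def simulate_garch11(omega, alpha, beta, n, rng=None):
          # r_t = sigma_t * z_t,  sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2
          rng = np.random.default_rng(rng)
          r = np.zeros(n)
          var = omega / (1.0 - alpha - beta)        # start from the unconditional variance
          for t in range(n):
              r[t] = np.sqrt(var) * rng.standard_normal()
              var = omega + alpha * r[t] ** 2 + beta * var
          return r

      r = simulate_garch11(omega=1e-5, alpha=0.08, beta=0.90, n=100_000)
      s = (r - r.mean()) / r.std()
      kurtosis = (s ** 4).mean()          # 4th standardized moment (3 for a Gaussian)
      sixth_moment = (s ** 6).mean()      # standardized 6th moment proposed above (15 for a Gaussian)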

  19. Probability Density Function at the 3D Anderson Transition

    NASA Astrophysics Data System (ADS)

    Rodriguez, Alberto; Vasquez, Louella J.; Roemer, Rudolf

    2009-03-01

    The probability density function (PDF) for the wavefunction amplitudes is studied at the metal-insulator transition of the 3D Anderson model, for very large systems up to L^3=240^3. The implications of the multifractal nature of the state upon the PDF are presented in detail. A formal expression between the PDF and the singularity spectrum f(α) is given. The PDF can be easily used to carry out a numerical multifractal analysis and it appears as a valid alternative to the more usual approach based on the scaling law of the generalized inverse participation ratios.

  20. Representation of layer-counted proxy records as probability densities on error-free time axes

    NASA Astrophysics Data System (ADS)

    Boers, Niklas; Goswami, Bedartha; Ghil, Michael

    2016-04-01

    Time series derived from paleoclimatic proxy records exhibit substantial dating uncertainties in addition to the measurement errors of the proxy values. For radiometrically dated proxy archives, Goswami et al. [1] have recently introduced a framework rooted in Bayesian statistics that successfully propagates the dating uncertainties from the time axis to the proxy axis. The resulting proxy record consists of a sequence of probability densities over the proxy values, conditioned on prescribed age values. One of the major benefits of this approach is that the proxy record is represented on an accurate, error-free time axis. Such unambiguous dating is crucial, for instance, in comparing different proxy records. This approach, however, is not directly applicable to proxy records with layer-counted chronologies, as for example ice cores, which are typically dated by counting quasi-annually deposited ice layers. Hence the nature of the chronological uncertainty in such records is fundamentally different from that in radiometrically dated ones. Here, we introduce a modification of the Goswami et al. [1] approach that is specifically designed for layer-counted proxy records, instead of radiometrically dated ones. We apply our method to isotope ratios and dust concentrations in the NGRIP core, using a published 60,000-year chronology [2]. It is shown that the further one goes into the past, the more the layer-counting errors accumulate and lead to growing uncertainties in the probability density sequence for the proxy values that results from the proposed approach. For the older parts of the record, these uncertainties affect more and more a statistically sound estimation of proxy values. This difficulty implies that great care has to be exercised when comparing and in particular aligning specific events among different layer-counted proxy records. On the other hand, when attempting to derive stochastic dynamical models from the proxy records, one is only interested in the

  1. Probability density function characterization for aggregated large-scale wind power based on Weibull mixtures

    DOE PAGES

    Gomez-Lazaro, Emilio; Bueso, Maria C.; Kessler, Mathieu; ...

    2016-02-02

    Here, the Weibull probability distribution has been widely applied to characterize wind speeds for wind energy resources. Wind power generation modeling is different, however, due in particular to power curve limitations, wind turbine control methods, and transmission system operation requirements. These differences are even greater for aggregated wind power generation in power systems with high wind penetration. Consequently, models based on one-Weibull component can provide poor characterizations for aggregated wind power generation. With this aim, the present paper focuses on discussing Weibull mixtures to characterize the probability density function (PDF) for aggregated wind power generation. PDFs of wind power data are firstly classified attending to hourly and seasonal patterns. The selection of the number of components in the mixture is analyzed through two well-known different criteria: the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). Finally, the optimal number of Weibull components for maximum likelihood is explored for the defined patterns, including the estimated weight, scale, and shape parameters. Results show that multi-Weibull models are more suitable to characterize aggregated wind power data due to the impact of distributed generation, variety of wind speed values and wind power curtailment.
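
    For reference, the two selection criteria cited take their usual forms, for a mixture with k free parameters, maximized likelihood L̂ and n observations,

      \[
        \mathrm{AIC} = 2k - 2\ln\hat{L}, \qquad \mathrm{BIC} = k\ln n - 2\ln\hat{L},
      \]

    with the number of Weibull components chosen so as to minimize the criterion.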

  2. Probability density function characterization for aggregated large-scale wind power based on Weibull mixtures

    SciTech Connect

    Gomez-Lazaro, Emilio; Bueso, Maria C.; Kessler, Mathieu; Martin-Martinez, Sergio; Zhang, Jie; Hodge, Bri -Mathias; Molina-Garcia, Angel

    2016-02-02

    Here, the Weibull probability distribution has been widely applied to characterize wind speeds for wind energy resources. Wind power generation modeling is different, however, due in particular to power curve limitations, wind turbine control methods, and transmission system operation requirements. These differences are even greater for aggregated wind power generation in power systems with high wind penetration. Consequently, models based on one-Weibull component can provide poor characterizations for aggregated wind power generation. With this aim, the present paper focuses on discussing Weibull mixtures to characterize the probability density function (PDF) for aggregated wind power generation. PDFs of wind power data are firstly classified attending to hourly and seasonal patterns. The selection of the number of components in the mixture is analyzed through two well-known different criteria: the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). Finally, the optimal number of Weibull components for maximum likelihood is explored for the defined patterns, including the estimated weight, scale, and shape parameters. Results show that multi-Weibull models are more suitable to characterize aggregated wind power data due to the impact of distributed generation, variety of wind speed values and wind power curtailment.

  3. Analytical Formulation of the Single-visit Completeness Joint Probability Density Function

    NASA Astrophysics Data System (ADS)

    Garrett, Daniel; Savransky, Dmitry

    2016-09-01

    We derive an exact formulation of the multivariate integral representing the single-visit obscurational and photometric completeness joint probability density function for arbitrary distributions for planetary parameters. We present a derivation of the region of nonzero values of this function, which extends previous work, and discuss the time and computational complexity costs and benefits of the method. We present a working implementation and demonstrate excellent agreement between this approach and Monte Carlo simulation results.

  4. Analysis of Two-Dimensional Ultrasound Cardiac Strain Imaging using Joint Probability Density Functions

    PubMed Central

    Ma, Chi; Varghese, Tomy

    2014-01-01

    Ultrasound frame rates play a key role for accurate cardiac deformation tracking. Insufficient frame rates lead to an increase in signal decorrelation artifacts, resulting in erroneous displacement and strain estimation. Joint probability density distributions generated from estimated axial strain and its associated signal-to-noise ratio provide a useful approach to assess the minimum frame rate requirements. Previous reports have demonstrated that bimodal distributions in the joint probability density indicate inaccurate strain estimation over a cardiac cycle. In this study, we utilize similar analysis to evaluate a two-dimensional multi-level displacement tracking and strain estimation algorithm for cardiac strain imaging. The impact of different frame rates and final kernel dimensions, and a comparison of radiofrequency and envelope-based processing, are evaluated using echo signals derived from a three-dimensional finite element cardiac model and 5 healthy volunteers. Cardiac simulation model analysis demonstrates that the minimum frame rate required to obtain accurate joint probability distributions for the signal-to-noise ratio and strain, for a final kernel dimension of 1 λ by 3 A-lines, was around 42 Hz for radiofrequency signals. On the other hand, even a frame rate of 250 Hz with envelope signals did not replicate the ideal joint probability distribution. For the volunteer study, clinical data were acquired only at a 34 Hz frame rate, which appears to be sufficient for radiofrequency analysis. We also show that an increase in the final kernel dimensions significantly impacts the strain probability distribution and joint probability density function generated, with a smaller impact on the variation in the accumulated mean strain estimated over a cardiac cycle. Our results demonstrate that radiofrequency frame rates currently achievable on clinical cardiac ultrasound systems are sufficient for accurate analysis of the strain probability distribution, when a multi

  5. Effect of Non-speckle Echo Signals on Tissue Characteristics for Liver Fibrosis using Probability Density Function of Ultrasonic B-mode image

    NASA Astrophysics Data System (ADS)

    Mori, Shohei; Hirata, Shinnosuke; Yamaguchi, Tadashi; Hachiya, Hiroyuki

    To develop a quantitative diagnostic method for liver fibrosis using an ultrasound B-mode image, a probability imaging method of tissue characteristics based on a multi-Rayleigh model, which expresses a probability density function of echo signals from liver fibrosis, has been proposed. In this paper, an effect of non-speckle echo signals on tissue characteristics estimated from the multi-Rayleigh model was evaluated. Non-speckle signals were determined and removed using the modeling error of the multi-Rayleigh model. The correct tissue characteristics of fibrotic tissue could be estimated with the removal of non-speckle signals.

  6. Probability Density Function for Waves Propagating in a Straight PEC Rough Wall Tunnel

    SciTech Connect

    Pao, H

    2004-11-08

    The probability density function for waves propagating in a straight perfect electrical conductor (PEC) rough wall tunnel is deduced from the mathematical models of the random electromagnetic fields. The field propagating in caves or tunnels is a complex-valued Gaussian random process by the Central Limit Theorem. The probability density function for a single modal field amplitude in such a structure is Ricean. Since both the expected value and the standard deviation of this field depend only on radial position, the probability density function, which determines the power distribution, is a radially dependent function. The radio channel places fundamental limitations on the performance of wireless communication systems in tunnels and caves. The transmission path between the transmitter and receiver can vary from a simple direct line of sight to one that is severely obstructed by rough walls and corners. Unlike wired channels that are stationary and predictable, radio channels can be extremely random and difficult to analyze. In fact, modeling the radio channel has historically been one of the more challenging parts of any radio system design; this is often done using statistical methods. In this contribution, we present the most important statistical property, the field probability density function, of waves propagating in a straight PEC rough wall tunnel. This work studies only the simplest case, a PEC boundary, which is not the real world, but the methods and conclusions developed herein are applicable to real-world problems in which the boundary is dielectric. The mechanisms behind electromagnetic wave propagation in caves or tunnels are diverse, but can generally be attributed to reflection, diffraction, and scattering. Because of the multiple reflections from rough walls, the electromagnetic waves travel along different paths of varying lengths. The interactions between these waves cause multipath fading at any location, and the strengths of the waves decrease as the distance
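
    For reference (a standard result, not an equation quoted from the report), the Ricean density for a field amplitude r with coherent (specular) component ν and per-quadrature Gaussian variance σ² is

      \[
        f(r) = \frac{r}{\sigma^{2}}
               \exp\!\left(-\frac{r^{2}+\nu^{2}}{2\sigma^{2}}\right)
               I_{0}\!\left(\frac{r\nu}{\sigma^{2}}\right), \qquad r \ge 0 ,
      \]

    where I_0 is the modified Bessel function of the first kind of order zero; for ν = 0 it reduces to the Rayleigh density.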

  7. Using Prediction Markets to Generate Probability Density Functions for Climate Change Risk Assessment

    NASA Astrophysics Data System (ADS)

    Boslough, M.

    2011-12-01

    Climate-related uncertainty is traditionally presented as an error bar, but it is becoming increasingly common to express it in terms of a probability density function (PDF). PDFs are a necessary component of probabilistic risk assessments, for which simple "best estimate" values are insufficient. Many groups have generated PDFs for climate sensitivity using a variety of methods. These PDFs are broadly consistent, but vary significantly in their details. One axiom of the verification and validation community is, "codes don't make predictions, people make predictions." This is a statement of the fact that subject domain experts generate results using assumptions within a range of epistemic uncertainty and interpret them according to their expert opinion. Different experts with different methods will arrive at different PDFs. For effective decision support, a single consensus PDF would be useful. We suggest that market methods can be used to aggregate an ensemble of opinions into a single distribution that expresses the consensus. Prediction markets have been shown to be highly successful at forecasting the outcome of events ranging from elections to box office returns. In prediction markets, traders can take a position on whether some future event will or will not occur. These positions are expressed as contracts that are traded in a double-auction market that aggregates price, which can be interpreted as a consensus probability that the event will take place. Since climate sensitivity cannot be measured directly, it cannot be predicted. However, the changes in global mean surface temperature are a direct consequence of climate sensitivity, changes in forcing, and internal variability. Viable prediction markets require an undisputed event outcome on a specific date. Climate-related markets exist on Intrade.com, an online trading exchange. One such contract is titled "Global Temperature Anomaly for Dec 2011 to be greater than 0.65 Degrees C." Settlement is based

  8. Probability density distribution of velocity differences at high Reynolds numbers

    NASA Technical Reports Server (NTRS)

    Praskovsky, Alexander A.

    1993-01-01

    Recent understanding of fine-scale turbulence structure in high Reynolds number flows is mostly based on Kolmogorov's original and revised models. The main finding of these models is that intrinsic characteristics of fine-scale fluctuations are universal ones at high Reynolds numbers, i.e., the functional behavior of any small-scale parameter is the same in all flows if the Reynolds number is high enough. The only large-scale quantity that directly affects small-scale fluctuations is the energy flux through a cascade. In dynamical equilibrium between large- and small-scale motions, this flux is equal to the mean rate of energy dissipation epsilon. The probability density distribution (pdd) of velocity difference is a very important characteristic for both the basic understanding of fully developed turbulence and engineering problems. Hence, it is important to test the findings: (1) the functional behavior of the tails of the pdd, P(δu), is proportional to exp(-b(r)|δu|/σ_δu), and (2) the logarithmic decrement b(r) scales as b(r) ∝ r^0.15 when the separation r lies in the inertial subrange in high Reynolds number laboratory shear flows.
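
    Written as display math (with σ_{δu} the rms of the velocity difference at separation r, as stated in the abstract), the two tested scalings are:

```latex
P(\delta u) \;\propto\; \exp\!\left(-\,b(r)\,\frac{|\delta u|}{\sigma_{\delta u}}\right),
\qquad
b(r) \;\propto\; r^{0.15}, \quad r \ \text{in the inertial subrange}.
```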

  9. Efficiency issues related to probability density function comparison

    NASA Astrophysics Data System (ADS)

    Kelly, Patrick M.; Cannon, T. Michael; Barros, Julio E.

    1996-03-01

    The CANDID project (comparison algorithm for navigating digital image databases) employs probability density functions (PDFs) of localized feature information to represent the content of an image for search and retrieval purposes. A similarity measure between PDFs is used to identify database images that are similar to a user-provided query image. Unfortunately, signature comparison involving PDFs is a very time-consuming operation. In this paper, we look into some efficiency considerations when working with PDFs. Since PDFs can take on many forms, we look into tradeoffs between accurate representation and efficiency of manipulation for several data sets. In particular, we typically represent each PDF as a Gaussian mixture (i.e., as a weighted sum of Gaussian kernels) in the feature space. We find that by constraining all Gaussian kernels to have principal axes that are aligned to the natural axes of the feature space, computations involving these PDFs are simplified. We can also constrain the Gaussian kernels to be hyperspherical rather than hyperellipsoidal, simplifying computations even further, and yielding an order of magnitude speedup in signature comparison. This paper illustrates the tradeoffs encountered when using these constraints.
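
    The computational simplification mentioned above comes from the fact that the integral of a product of two axis-aligned (diagonal-covariance) Gaussians factorizes into one-dimensional terms. The sketch below illustrates that idea for a mixture-overlap term; it is only an illustration of the constraint, not the CANDID similarity measure itself, and the function names are hypothetical.

```python
import numpy as np

def gauss_overlap_diag(mu1, var1, mu2, var2):
    """Integral of the product of two axis-aligned Gaussians (diagonal covariances).

    Because the covariances are diagonal, the d-dimensional integral factorizes
    into a product of one-dimensional Gaussian evaluations.
    """
    var = var1 + var2
    return np.prod(np.exp(-0.5 * (mu1 - mu2) ** 2 / var) / np.sqrt(2.0 * np.pi * var))

def mixture_cross_term(w1, mu1, var1, w2, mu2, var2):
    """Integral of p(x) q(x) dx for two diagonal-covariance Gaussian mixtures."""
    return sum(
        a * b * gauss_overlap_diag(m1, v1, m2, v2)
        for a, m1, v1 in zip(w1, mu1, var1)
        for b, m2, v2 in zip(w2, mu2, var2)
    )
```

    Constraining the kernels to be hyperspherical amounts to using a single variance per kernel, which removes the per-dimension bookkeeping entirely and gives the further speedup the abstract reports.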

  10. Spectral Discrete Probability Density Function of Measured Wind Turbine Noise in the Far Field

    PubMed Central

    Ashtiani, Payam; Denison, Adelaide

    2015-01-01

    Of interest is the spectral character of wind turbine noise at typical residential set-back distances. In this paper, a spectral statistical analysis has been applied to immission measurements conducted at three locations. This method provides discrete probability density functions for the Turbine ONLY component of the measured noise. This analysis is completed for one-third octave sound levels, at integer wind speeds, and is compared to existing metrics for measuring acoustic comfort as well as previous discussions on low-frequency noise sources. PMID:25905097

  11. Spectral discrete probability density function of measured wind turbine noise in the far field.

    PubMed

    Ashtiani, Payam; Denison, Adelaide

    2015-01-01

    Of interest is the spectral character of wind turbine noise at typical residential set-back distances. In this paper, a spectral statistical analysis has been applied to immission measurements conducted at three locations. This method provides discrete probability density functions for the Turbine ONLY component of the measured noise. This analysis is completed for one-third octave sound levels, at integer wind speeds, and is compared to existing metrics for measuring acoustic comfort as well as previous discussions on low-frequency noise sources.

  12. Boundary conditions for probability density function transport equations in fluid mechanics.

    PubMed

    Valiño, Luis; Hierro, Juan

    2003-04-01

    The behavior of the probability density function (PDF) transport equation at the limits of the probability space is studied from the point of view of fluid mechanics. Different boundary conditions are considered depending on the nature of the variable considered (velocity, scalar, and position). A study of the implications of entrance and exit conditions is performed, showing that a new term should be added to the PDF transport equation to preserve normalization in some nonstationary processes. In practice, this term is taken into account naturally in particle methods. Finally, the existence of discontinuities at the limits is also investigated.

  13. Analysis of 2-d ultrasound cardiac strain imaging using joint probability density functions.

    PubMed

    Ma, Chi; Varghese, Tomy

    2014-06-01

    Ultrasound frame rates play a key role in accurate cardiac deformation tracking. Insufficient frame rates lead to an increase in signal de-correlation artifacts, resulting in erroneous displacement and strain estimation. Joint probability density distributions generated from estimated axial strain and its associated signal-to-noise ratio provide a useful approach to assess the minimum frame rate requirements. Previous reports have demonstrated that bi-modal distributions in the joint probability density indicate inaccurate strain estimation over a cardiac cycle. In this study, we utilize similar analysis to evaluate a 2-D multi-level displacement tracking and strain estimation algorithm for cardiac strain imaging. The effect of different frame rates, final kernel dimensions, and a comparison of radio frequency and envelope-based processing are evaluated using echo signals derived from a 3-D finite element cardiac model and five healthy volunteers. Cardiac simulation model analysis demonstrates that the minimum frame rate required to obtain accurate joint probability distributions for the signal-to-noise ratio and strain, for a final kernel dimension of 1 λ by 3 A-lines, was around 42 Hz for radio frequency signals. On the other hand, even a frame rate of 250 Hz with envelope signals did not replicate the ideal joint probability distribution. For the volunteer study, clinical data was acquired only at a 34 Hz frame rate, which appears to be sufficient for radio frequency analysis. We also show that an increase in the final kernel dimensions significantly affects the strain probability distribution and joint probability density function generated, with a smaller effect on the variation in the accumulated mean strain estimated over a cardiac cycle. Our results demonstrate that radio frequency frame rates currently achievable on clinical cardiac ultrasound systems are sufficient for accurate analysis of the strain probability distribution, when a multi-level 2-D

  14. Non-linear Inversion of Noise Cross-correlations Using Probability Density Functions of Surface Waves Dispersion

    NASA Astrophysics Data System (ADS)

    Gaudot, I.; Beucler, E.; Mocquet, A.; Drilleau, M.; Le Feuvre, M.

    2015-12-01

    Cross-correlations of ambient seismic noise are widely used to retrieve information on the medium between pairs of stations. For periods between 1 and 50 s, the diffuse wavefield is dominated by microseismic energy which travels mostly as surface waves. Therefore, such waves are mainly reconstructed in the cross-correlations, and information about the structure is obtained using dispersion analysis, i.e., computing phase or group velocities. Classical group velocity determination relies on tracking the maximum energy in the dispersion diagrams in order to get a unique dispersion curve. This procedure may often present problems due to the presence of several maxima. Moreover, the estimation of the associated measurement errors usually depends on ad hoc user-defined criteria. We handle the non-uniqueness of the problem by inverting the whole dispersion diagram using a non-linear inversion scheme. For each frequency, the seismic energy is mapped into a time-dependent probability density function. The resulting map is inverted for the S-wave velocity structure using a Markov-chain Monte Carlo algorithm. Each time a new model is randomly sampled, the misfit value is computed according to the position of the corresponding group velocity curve in the probability density function map. This method is applied to the analysis of vertical component noise cross-correlations computed from seismic data recorded in western Europe by the temporary PYROPE and IBERARRAY networks. The inversion of the fundamental mode Rayleigh wave dispersion diagrams between 5 and 50 s period gives a set of 1D S-wave velocity models, which are regionalized to infer a 3D S-wave velocity model of western France.
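
    A minimal sketch of the kind of misfit evaluation described above is given below, assuming the dispersion diagram has already been converted into a frequency-by-velocity map of probability density; the function names, the nearest-grid lookup, and the log-likelihood form are illustrative assumptions rather than the authors' exact implementation.

```python
import numpy as np

def dispersion_misfit(pdf_map, freqs, velocity_grid, trial_group_velocity):
    """Score a trial dispersion curve against a probability density map.

    pdf_map[i, j] is the density of observing group velocity velocity_grid[j]
    at frequency freqs[i]; accumulating more density means a better fit.
    """
    log_likelihood = 0.0
    for i, f in enumerate(freqs):
        v = trial_group_velocity(f)                      # forward-modelled curve at f
        j = np.argmin(np.abs(velocity_grid - v))         # nearest velocity-grid node
        log_likelihood += np.log(pdf_map[i, j] + 1e-12)  # guard against log(0)
    return -log_likelihood                               # smaller misfit = better model
```

    Inside a Markov-chain Monte Carlo loop this misfit would drive the accept/reject step for each randomly perturbed 1D velocity model.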

  15. Sliding-mode control design for nonlinear systems using probability density function shaping.

    PubMed

    Liu, Yu; Wang, Hong; Hou, Chaohuan

    2014-02-01

    In this paper, we propose a sliding-mode-based stochastic distribution control algorithm for nonlinear systems, where the sliding-mode controller is designed to stabilize the stochastic system and stochastic distribution control tries to shape the sliding surface as close as possible to the desired probability density function. Kullback-Leibler divergence is introduced to the stochastic distribution control, and the parameter of the stochastic distribution controller is updated at each sample interval rather than using a batch mode. It is shown that the estimated weight vector will converge to its ideal value and the system will be asymptotically stable under the rank-condition, which is much weaker than the persistent excitation condition. The effectiveness of the proposed algorithm is illustrated by simulation.

  16. 3D model retrieval using probability density-based shape descriptors.

    PubMed

    Akgül, Ceyhun Burak; Sankur, Bülent; Yemez, Yücel; Schmitt, Francis

    2009-06-01

    We address content-based retrieval of complete 3D object models by a probabilistic generative description of local shape properties. The proposed shape description framework characterizes a 3D object with sampled multivariate probability density functions of its local surface features. This density-based descriptor can be efficiently computed via kernel density estimation (KDE) coupled with fast Gauss transform. The non-parametric KDE technique allows reliable characterization of a diverse set of shapes and yields descriptors which remain relatively insensitive to small shape perturbations and mesh resolution. Density-based characterization also induces a permutation property which can be used to guarantee invariance at the shape matching stage. As proven by extensive retrieval experiments on several 3D databases, our framework provides state-of-the-art discrimination over a broad and heterogeneous set of shape categories.

  17. Nonparametric Estimation by the Method of Sieves.

    DTIC Science & Technology

    1983-07-01

    high-speed memory to which Accomando has added a board with 64k 16-bit words. The programs will reconstruct a 60x60 phantom in about fifteen or twenty ... 1971. 8. Budinger, T. F., Gullberg, G. T., and Huesman, R. H., Emission computed tomography, chapter 5 in Image Reconstruction from Projections: Implementation ... section reconstruction. Phys. Med. Biol. 22, 511-521, 1977. ... 29. Kronmal, R. and Tarter, M., The estimation of probability densities and cumulatives by

  18. Impact of sampling volume on the probability density function of steady state concentration

    NASA Astrophysics Data System (ADS)

    Schwede, Ronnie L.; Cirpka, Olaf A.; Nowak, Wolfgang; Neuweiler, Insa

    2008-12-01

    In recent years, statistical theory has been used to compute the ensemble mean and variance of solute concentration in aquifer formations with second-order stationary velocity fields. The merit of accurately estimating the mean and variance of concentration, however, remains unclear without knowing the shape of the probability density function (pdf). In a setup where a conservative solute is continuously injected into a domain, the concentration is bounded between zero and the concentration value in the injected solution. At small travel distances close to the fringe of the plume, an observation point may fall into the plume or outside, so that the statistical concentration distribution clusters at the two limiting values. Obviously, this results in non-Gaussian pdf's of concentration. With increasing travel distance, the lateral plume boundaries are smoothed, resulting in increased probability of intermediate concentrations. Likewise, averaging the concentration in a larger sampling volume, as typically done in field measurements, leads to higher probabilities of intermediate concentrations. We present semianalytical results of concentration pdf's for measurements with point-like or larger support volumes based on stochastic theory applied to stationary media. To this end, we employ a reversed auxiliary transport problem, in which we use analytical expressions for first and second central spatial lateral moments with an assumed Gaussian pdf for the uncertainty of the first lateral moment and Gauss-like shapes in individual cross sections. The resulting concentration pdf can be reasonably fitted by beta distributions. The results are compared to Monte Carlo simulations of flow and steady state transport in 3-D heterogeneous domains. In both methods the shape of the concentration pdf changes with distance to the contaminant source: Near the source, the distribution is multimodal, whereas it becomes a unimodal beta distribution far away from the contaminant source
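
    Fitting a beta distribution to bounded, normalized concentration samples (as suggested by the semianalytical results above) can be done directly with standard tools; the sketch below uses synthetic samples in place of Monte Carlo transport output and fixes the support to [0, 1] (all names and values are illustrative).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical normalized concentrations c/c0 in (0, 1) at one observation point;
# in practice these would come from the Monte Carlo transport realizations.
samples = np.clip(rng.beta(0.4, 2.5, size=5000), 1e-6, 1.0 - 1e-6)

# Fit a beta distribution with its support fixed to [0, 1]
a, b, loc, scale = stats.beta.fit(samples, floc=0.0, fscale=1.0)
c = np.linspace(0.01, 0.99, 99)
fitted_pdf = stats.beta(a, b).pdf(c)
```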

  19. Stochastic chaos induced by diffusion processes with identical spectral density but different probability density functions.

    PubMed

    Lei, Youming; Zheng, Fan

    2016-12-01

    Stochastic chaos induced by diffusion processes, with identical spectral density but different probability density functions (PDFs), is investigated in selected lightly damped Hamiltonian systems. The threshold amplitude of the diffusion processes for the onset of chaos is derived by using the stochastic Melnikov method together with a mean-square criterion. Two quasi-Hamiltonian systems, namely, a damped single pendulum and a damped Duffing oscillator perturbed by stochastic excitations, are used as illustrative examples. Four different cases of stochastic processes are taken as the driving excitations. It is shown that in these two systems the spectral density of the diffusion processes completely determines the threshold amplitude for chaos, regardless of the shape of their PDFs, Gaussian or otherwise. Furthermore, the mean top Lyapunov exponent is employed to verify the analytical results. The results obtained by numerical simulations are in accordance with the analytical results. This demonstrates that the stochastic Melnikov method is effective in predicting the onset of chaos in quasi-Hamiltonian systems.

  20. Stochastic chaos induced by diffusion processes with identical spectral density but different probability density functions

    NASA Astrophysics Data System (ADS)

    Lei, Youming; Zheng, Fan

    2016-12-01

    Stochastic chaos induced by diffusion processes, with identical spectral density but different probability density functions (PDFs), is investigated in selected lightly damped Hamiltonian systems. The threshold amplitude of the diffusion processes for the onset of chaos is derived by using the stochastic Melnikov method together with a mean-square criterion. Two quasi-Hamiltonian systems, namely, a damped single pendulum and a damped Duffing oscillator perturbed by stochastic excitations, are used as illustrative examples. Four different cases of stochastic processes are taken as the driving excitations. It is shown that in these two systems the spectral density of the diffusion processes completely determines the threshold amplitude for chaos, regardless of the shape of their PDFs, Gaussian or otherwise. Furthermore, the mean top Lyapunov exponent is employed to verify the analytical results. The results obtained by numerical simulations are in accordance with the analytical results. This demonstrates that the stochastic Melnikov method is effective in predicting the onset of chaos in quasi-Hamiltonian systems.

  1. Assessment of probability density function based on POD reduced-order model for ensemble-based data assimilation

    NASA Astrophysics Data System (ADS)

    Kikuchi, Ryota; Misaka, Takashi; Obayashi, Shigeru

    2015-10-01

    An integrated method of a proper orthogonal decomposition-based reduced-order model (ROM) and data assimilation is proposed for the real-time prediction of an unsteady flow field. In this paper, a particle filter (PF) and an ensemble Kalman filter (EnKF) are compared for data assimilation, and the difference in the predicted flow fields is evaluated focusing on the probability density function (PDF) of the model variables. The proposed method is demonstrated using identical twin experiments of an unsteady flow field around a circular cylinder at a Reynolds number of 1000. The PF and EnKF are employed to estimate temporal coefficients of the ROM based on the observed velocity components in the wake of the circular cylinder. The prediction accuracy of ROM-PF is significantly better than that of ROM-EnKF due to the flexibility of PF in representing a PDF compared to EnKF. Furthermore, the proposed method reproduces the unsteady flow field several orders of magnitude faster than the reference numerical simulation based on the Navier-Stokes equations.

  2. Probability Density Function of the Output Current of Cascaded Multiplexer/Demultiplexers in Transparent Optical Networks

    NASA Astrophysics Data System (ADS)

    Rebola, João L.; Cartaxo, Adolfo V. T.

    The influence of the concatenation of arbitrary optical multiplexers/demultiplexers (MUX/DEMUXs) on the probability density function (PDF) of the output current of a transparent optical network is assessed. All PDF results obtained analytically are compared with estimates from Monte Carlo simulation and an excellent agreement is achieved. The non-Gaussian behavior of the PDFs, previously reported by other authors for square-law detectors, is significantly enhanced as the number of nodes increases, due to the noise accumulation along the cascade of MUX/DEMUXs. Increasing the MUX/DEMUX bandwidth and detuning also enhances the non-Gaussian behavior of the PDFs. The variation of the PDF shape with the detuning depends strongly on the number of nodes. Explanations for the accuracy of the Gaussian approximation (GA) in assessing the performance of a concatenation of optical MUX/DEMUXs are also provided. For infinite extinction ratio and tuned MUX/DEMUXs, the GA error probabilities are, in general, pessimistic, due to the inaccurate estimation of the error probability for both bits. For low extinction ratio, the GA is very accurate due to a balance between the error probabilities estimated for the bits "1" and "0." As the detuning increases, the GA estimates can become optimistic.

  3. Using skew-logistic probability density function as a model for age-specific fertility rate pattern.

    PubMed

    Asili, Sahar; Rezaei, Sadegh; Najjar, Lotfollah

    2014-01-01

    Fertility rate is one of the most important global indexes. Past researchers found models that fit age-specific fertility rates. For example, mixture probability density functions have been proposed for situations with bi-modal fertility patterns. Such models are less useful for unimodal age-specific fertility rate patterns, so a model based on a skew-symmetric (skew-normal) pdf was proposed by Mazzuco and Scarpa (2011), which is flexible for both unimodal and bimodal fertility patterns. In this paper, we introduce the skew-logistic probability density function as a better model: its residuals are smaller than those of the skew-normal model and it can estimate the parameters of the model more precisely.
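
    As a point of reference, in the usual skew-symmetric construction (the paper's exact parameterization may differ), the skew-logistic density with location ξ, scale ω > 0 and shape α is built from the logistic density g and its distribution function G:

```latex
f(x;\xi,\omega,\alpha) = \frac{2}{\omega}\, g\!\left(\frac{x-\xi}{\omega}\right) G\!\left(\alpha\,\frac{x-\xi}{\omega}\right),
\qquad
g(z) = \frac{e^{-z}}{\left(1+e^{-z}\right)^{2}},
\qquad
G(z) = \frac{1}{1+e^{-z}} .
```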

  4. Particle number and probability density functional theory and A-representability.

    PubMed

    Pan, Xiao-Yin; Sahni, Viraht

    2010-04-28

    In Hohenberg-Kohn density functional theory, the energy E is expressed as a unique functional of the ground state density rho(r): E = E[rho] with the internal energy component F(HK)[rho] being universal. Knowledge of the functional F(HK)[rho] by itself, however, is insufficient to obtain the energy: the particle number N is primary. By emphasizing this primacy, the energy E is written as a nonuniversal functional of N and probability density p(r): E = E[N,p]. The set of functions p(r) satisfies the constraints of normalization to unity and non-negativity, exists for each N; N = 1, ..., infinity, and defines the probability density or p-space. A particle number N and probability density p(r) functional theory is constructed. Two examples for which the exact energy functionals E[N,p] are known are provided. The concept of A-representability is introduced, by which it is meant the set of functions Psi(p) that leads to probability densities p(r) obtained as the quantum-mechanical expectation of the probability density operator, and which satisfies the above constraints. We show that the set of functions p(r) of p-space is equivalent to the A-representable probability density set. We also show via the Harriman and Gilbert constructions that the A-representable and N-representable probability density p(r) sets are equivalent.
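
    Collecting the relations stated in the abstract into display form (the link ρ(r) = N p(r) between the particle density and the probability density is the standard one assumed here):

```latex
E = E[N, p], \qquad \rho(\mathbf{r}) = N\, p(\mathbf{r}), \qquad
\int p(\mathbf{r})\, d\mathbf{r} = 1, \qquad p(\mathbf{r}) \ge 0,
\qquad N = 1, 2, \ldots
```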

  5. Incorporating photometric redshift probability density information into real-space clustering measurements

    NASA Astrophysics Data System (ADS)

    Myers, Adam D.; White, Martin; Ball, Nicholas M.

    2009-11-01

    The use of photometric redshifts in cosmology is increasing. Often, however, these photo-z are treated like spectroscopic observations, in that the peak of the photometric redshift, rather than the full probability density function (PDF), is used. This overlooks useful information inherent in the full PDF. We introduce a new real-space estimator for one of the most used cosmological statistics, the two-point correlation function, that weights by the PDF of individual photometric objects in a manner that is optimal when Poisson statistics dominate. As our estimator does not bin based on the PDF peak, it substantially enhances the clustering signal by usefully incorporating information from all photometric objects that overlap the redshift bin of interest. As a real-world application, we measure quasi-stellar object (QSO) clustering in the Sloan Digital Sky Survey (SDSS). We find that our simplest binned estimator improves the clustering signal by a factor equivalent to increasing the survey size by a factor of 2-3. We also introduce a new implementation that fully weights between pairs of objects in constructing the cross-correlation and find that this pair-weighted estimator improves the clustering signal in a manner equivalent to increasing the survey size by a factor of 4-5. Our technique uses spectroscopic data to anchor the distance scale and it will be particularly useful where spectroscopic data (e.g. from BOSS) overlap deeper photometry (e.g. from Pan-STARRS, DES or the LSST). We additionally provide simple, informative expressions to determine when our estimator will be competitive with the autocorrelation of spectroscopic objects. Although we use QSOs as an example population, our estimator can and should be applied to any clustering estimate that uses photometric objects.
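
    One plausible reading of the simplest binned estimator described above is to weight each photometric object by the fraction of its redshift PDF lying inside the redshift bin of interest, and then to carry those weights into the pair counts; the sketch below is an illustration under that assumption, not the published estimator itself.

```python
import numpy as np

def bin_weights(pdf_grid, z_grid, z_lo, z_hi):
    """Fraction of each object's redshift PDF inside the bin [z_lo, z_hi).

    pdf_grid has shape (n_objects, n_z) and each row is normalized over z_grid.
    """
    in_bin = (z_grid >= z_lo) & (z_grid < z_hi)
    return np.trapz(pdf_grid[:, in_bin], z_grid[in_bin], axis=1)

# In a weighted cross-correlation, a photometric-spectroscopic pair would then
# contribute its weight w_i instead of a unit count to the pair histogram.
```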

  6. Improved fMRI time-series registration using joint probability density priors

    NASA Astrophysics Data System (ADS)

    Bhagalia, Roshni; Fessler, Jeffrey A.; Kim, Boklye; Meyer, Charles R.

    2009-02-01

    Functional MRI (fMRI) time-series studies are plagued by varying degrees of subject head motion. Faithful head motion correction is essential to accurately detect brain activation using statistical analyses of these time-series. Mutual information (MI) based slice-to-volume (SV) registration is used for motion estimation when the rate of change of head position is large. SV registration accounts for head motion between slice acquisitions by estimating an independent rigid transformation for each slice in the time-series. Consequently, each MI optimization uses intensity counts from a single time-series slice, making the algorithm susceptible to noise for low-complexity end-slices (i.e., slices near the top of the head scans). This work focuses on improving the accuracy of MI-based SV registration of end-slices by using joint probability density priors derived from registered high-complexity center-slices (i.e., slices near the middle of the head scans). Results show that the use of such priors can significantly improve SV registration accuracy.

  7. Simulation Of Wave Function And Probability Density Of Modified Poschl Teller Potential Derived Using Supersymmetric Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Angraini, Lily Maysari; Suparmi; Variani, Viska Inda

    2010-12-01

    SUSY quantum mechanics can be applied to solve the Schrodinger equation for high-dimensional systems that can be reduced to one-dimensional systems and represented in terms of lowering and raising operators. The lowering and raising operators can be obtained using the relationship between the original Hamiltonian equation and the (super)potential equation. In this paper, SUSY quantum mechanics is used as a method to obtain the wave function and the energy levels of the Modified Poschl-Teller potential. The wave function and probability density graphs are simulated using the Delphi 7.0 programming language. Finally, the expectation value of a quantum-mechanical operator can be calculated analytically in integral form or from the probability density graph produced by the program.

  8. Simulation Of Wave Function And Probability Density Of Modified Poschl Teller Potential Derived Using Supersymmetric Quantum Mechanics

    SciTech Connect

    Angraini, Lily Maysari; Suparmi; Variani, Viska Inda

    2010-12-23

    SUSY quantum mechanics can be applied to solve the Schrodinger equation for high-dimensional systems that can be reduced to one-dimensional systems and represented in terms of lowering and raising operators. The lowering and raising operators can be obtained using the relationship between the original Hamiltonian equation and the (super)potential equation. In this paper, SUSY quantum mechanics is used as a method to obtain the wave function and the energy levels of the Modified Poschl-Teller potential. The wave function and probability density graphs are simulated using the Delphi 7.0 programming language. Finally, the expectation value of a quantum-mechanical operator can be calculated analytically in integral form or from the probability density graph produced by the program.

  9. Approximation to the Probability Density at the Output of a Photomultiplier Tube

    NASA Technical Reports Server (NTRS)

    Stokey, R. J.; Lee, P. J.

    1983-01-01

    The probability density of the integrated output of a photomultiplier tube (PMT) is approximated by the Gaussian, Rayleigh, and Gamma probability densities. The accuracy of the approximations depends on the signal energy alpha: the Gamma distribution is accurate for all alpha, the Rayleigh distribution is accurate for small alpha (roughly 1 photon or less), and the Gaussian distribution is accurate for large alpha (roughly 10 photons or more).

  10. Entrainment Rate in Shallow Cumuli: Dependence on Entrained Dry Air Sources and Probability Density Functions

    NASA Astrophysics Data System (ADS)

    Lu, C.; Liu, Y.; Niu, S.; Vogelmann, A. M.

    2012-12-01

    In situ aircraft cumulus observations from the RACORO field campaign are used to estimate entrainment rate for individual clouds using a recently developed mixing fraction approach. The entrainment rate is computed based on the observed state of the cloud core and the state of the air that is laterally mixed into the cloud at its edge. The computed entrainment rate decreases when the air is entrained from increasing distance from the cloud core edge; this is because the air farther away from cloud edge is drier than the neighboring air that is within the humid shells around cumulus clouds. Probability density functions of entrainment rate are well fitted by lognormal distributions at different heights above cloud base for different dry air sources (i.e., different source distances from the cloud core edge). Such lognormal distribution functions are appropriate for inclusion into future entrainment rate parameterization in large scale models. To the authors' knowledge, this is the first time that probability density functions of entrainment rate have been obtained in shallow cumulus clouds based on in situ observations. The reason for the wide spread of entrainment rate is that the observed clouds are affected by entrainment mixing processes to different extents, which is verified by the relationships between the entrainment rate and cloud microphysics/dynamics. The entrainment rate is negatively correlated with liquid water content and cloud droplet number concentration due to the dilution and evaporation in entrainment mixing processes. The entrainment rate is positively correlated with relative dispersion (i.e., ratio of standard deviation to mean value) of liquid water content and droplet size distributions, consistent with the theoretical expectation that entrainment mixing processes are responsible for microphysics fluctuations and spectral broadening. The entrainment rate is negatively correlated with vertical velocity and dissipation rate because entrainment
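
    Fitting a lognormal to a pooled sample of entrainment-rate estimates is straightforward; the sketch below uses synthetic values in place of the aircraft-derived estimates and fixes the location parameter at zero, as appropriate for a positive quantity (variable names and parameter values are illustrative).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical entrainment-rate sample at one height above cloud base (km^-1);
# real values would come from the mixing-fraction estimates for individual clouds.
rates = rng.lognormal(mean=0.0, sigma=0.7, size=300)

# Fit a lognormal with the location fixed at zero, as appropriate for a positive quantity
shape, loc, scale = stats.lognorm.fit(rates, floc=0.0)
mu, sigma = np.log(scale), shape              # parameters of the underlying normal
x = np.linspace(rates.min(), rates.max(), 200)
fitted_pdf = stats.lognorm(shape, loc=0.0, scale=scale).pdf(x)
```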

  11. Non-linear Inversion of Probability Density Functions of Surface Wave Dispersion

    NASA Astrophysics Data System (ADS)

    Beucler, E.; Drilleau, M.; Gaudot, I.; Mocquet, A.; Bodin, T.; Lognonne, P. H.

    2016-12-01

    A commonly used approach for inferring 3D shear wave velocity structure from surface wave measurements relies on regionalization of group (or phase) velocity curves at different frequencies as an intermediate step before inversion at depth for each grid point. This choice relies on tracking the maximum energy in the dispersion diagram in order to get a unique dispersion curve, and the estimate of the associated measurement uncertainties usually depends on ad hoc user-defined criteria. We present an alternative by directly inverting the waveform, once it is converted into probability density functions of dispersion, in order to obtain a posterior probability of the 1D shear wave structure integrated along the ray path. For each 1D S-wave velocity trial model, the corresponding group velocity curve is compared to the dispersion diagram. The goodness of fit is then directly measured by the likelihood. Different types of parameterizations for the S-wave velocity structure can be chosen; here we use Bézier curves in order to ensure smooth variations and a fast forward problem. For each depth of the 1D shear wave posterior probabilities, path-averaged velocities can be regionalized using a classical least-squares criterion. We show inversion results for cross-correlations of ambient seismic noise in a regional context and, at the global scale, for multiple-orbit surface wave trains. This latter approach can be used for planetary purposes in the event of deployment of a single seismic station on another planet, such as the InSight mission.

  12. Rapid Classification of Protein Structure Models Using Unassigned Backbone RDCs and Probability Density Profile Analysis (PDPA)

    PubMed Central

    Bansal, Sonal; Miao, Xijiang; Adams, Michael W. W.; Prestegard, James H.; Valafar, Homayoun

    2009-01-01

    A method of identifying the best structural model for a protein of unknown structure from a list of structural candidates using unassigned 15N-1H residual dipolar coupling (RDC) data and probability density profile analysis (PDPA) is described. Ten candidate structures have been obtained for the structural genomics target protein PF2048.1 using ROBETTA. 15N-1H residual dipolar couplings have been measured from NMR spectra of the protein in two alignment media and these data have been analyzed using PDPA to rank the models in terms of their ability to represent the actual structure. A number of advantages in using this method to characterize a protein structure become apparent. RDCs can easily and rapidly be acquired, and without the need for assignment, the cost and duration of data acquisition is greatly reduced. The approach is quite robust with respect to imprecise and missing data. In the case of PF2048.1, a 79 residue protein, only 58 and 55 of the total RDC data were observed. The method can accelerate structure determination at higher resolution using traditional NMR spectroscopy by providing a starting point for the addition of NOEs and other NMR structural data. PMID:18321742

  13. Rapid classification of protein structure models using unassigned backbone RDCs and probability density profile analysis (PDPA).

    PubMed

    Bansal, Sonal; Miao, Xijiang; Adams, Michael W W; Prestegard, James H; Valafar, Homayoun

    2008-05-01

    A method of identifying the best structural model for a protein of unknown structure from a list of structural candidates using unassigned 15N-1H residual dipolar coupling (RDC) data and probability density profile analysis (PDPA) is described. Ten candidate structures have been obtained for the structural genomics target protein PF2048.1 using ROBETTA. 15N-1H residual dipolar couplings have been measured from NMR spectra of the protein in two alignment media and these data have been analyzed using PDPA to rank the models in terms of their ability to represent the actual structure. A number of advantages in using this method to characterize a protein structure become apparent. RDCs can easily and rapidly be acquired, and without the need for assignment, the cost and duration of data acquisition is greatly reduced. The approach is quite robust with respect to imprecise and missing data. In the case of PF2048.1, a 79 residue protein, only 58 and 55 of the total RDC data were observed. The method can accelerate structure determination at higher resolution using traditional NMR spectroscopy by providing a starting point for the addition of NOEs and other NMR structural data.

  14. Rapid classification of protein structure models using unassigned backbone RDCs and probability density profile analysis (PDPA)

    NASA Astrophysics Data System (ADS)

    Bansal, Sonal; Miao, Xijiang; Adams, Michael W. W.; Prestegard, James H.; Valafar, Homayoun

    2008-05-01

    A method of identifying the best structural model for a protein of unknown structure from a list of structural candidates using unassigned 15N-1H residual dipolar coupling (RDC) data and probability density profile analysis (PDPA) is described. Ten candidate structures have been obtained for the structural genomics target protein PF2048.1 using ROBETTA. 15N-1H residual dipolar couplings have been measured from NMR spectra of the protein in two alignment media and these data have been analyzed using PDPA to rank the models in terms of their ability to represent the actual structure. A number of advantages in using this method to characterize a protein structure become apparent. RDCs can easily and rapidly be acquired, and without the need for assignment, the cost and duration of data acquisition is greatly reduced. The approach is quite robust with respect to imprecise and missing data. In the case of PF2048.1, a 79 residue protein, only 58 and 55 of the total RDC data were observed. The method can accelerate structure determination at higher resolution using traditional NMR spectroscopy by providing a starting point for the addition of NOEs and other NMR structural data.

  15. Probability density function modeling of scalar mixing from concentrated sources in turbulent channel flow

    NASA Astrophysics Data System (ADS)

    Bakosi, J.; Franzese, P.; Boybeyi, Z.

    2007-11-01

    Dispersion of a passive scalar from concentrated sources in fully developed turbulent channel flow is studied with the probability density function (PDF) method. The joint PDF of velocity, turbulent frequency and scalar concentration is represented by a large number of Lagrangian particles. A stochastic near-wall PDF model combines the generalized Langevin model of Haworth and Pope [Phys. Fluids 29, 387 (1986)] with Durbin's [J. Fluid Mech. 249, 465 (1993)] method of elliptic relaxation to provide a mathematically exact treatment of convective and viscous transport with a nonlocal representation of the near-wall Reynolds stress anisotropy. The presence of walls is incorporated through the imposition of no-slip and impermeability conditions on particles without the use of damping or wall-functions. Information on the turbulent time scale is supplied by the gamma-distribution model of van Slooten et al. [Phys. Fluids 10, 246 (1998)]. Two different micromixing models are compared that incorporate the effect of small scale mixing on the transported scalar: the widely used interaction by exchange with the mean and the interaction by exchange with the conditional mean model. Single-point velocity and concentration statistics are compared to direct numerical simulation and experimental data at Reτ=1080 based on the friction velocity and the channel half width. The joint model accurately reproduces a wide variety of conditional and unconditional statistics in both physical and composition space.

  16. Predicting Ligand Binding Sites on Protein Surfaces by 3-Dimensional Probability Density Distributions of Interacting Atoms

    PubMed Central

    Jian, Jhih-Wei; Elumalai, Pavadai; Pitti, Thejkiran; Wu, Chih Yuan; Tsai, Keng-Chang; Chang, Jeng-Yih; Peng, Hung-Pin; Yang, An-Suei

    2016-01-01

    Predicting ligand binding sites (LBSs) on protein structures, which are obtained either from experimental or computational methods, is a useful first step in functional annotation or structure-based drug design for the protein structures. In this work, the structure-based machine learning algorithm ISMBLab-LIG was developed to predict LBSs on protein surfaces with input attributes derived from the three-dimensional probability density maps of interacting atoms, which were reconstructed on the query protein surfaces and were relatively insensitive to local conformational variations of the tentative ligand binding sites. The prediction accuracy of the ISMBLab-LIG predictors is comparable to that of the best LBS predictors benchmarked on several well-established testing datasets. More importantly, the ISMBLab-LIG algorithm has substantial tolerance to the prediction uncertainties of computationally derived protein structure models. As such, the method is particularly useful for predicting LBSs not only on experimental protein structures without known LBS templates in the database but also on computationally predicted model protein structures with structural uncertainties in the tentative ligand binding sites. PMID:27513851

  17. Development and evaluation of probability density functions for a set of human exposure factors

    SciTech Connect

    Maddalena, R.L.; McKone, T.E.; Bodnar, A.; Jacobson, J.

    1999-06-01

    The purpose of this report is to describe efforts carried out during 1998 and 1999 at the Lawrence Berkeley National Laboratory to assist the U.S. EPA in developing and ranking the robustness of a set of default probability distributions for exposure assessment factors. Among the current needs of the exposure-assessment community is to provide data for linking exposure, dose, and health information in ways that improve environmental surveillance, improve predictive models, and enhance risk assessment and risk management (NAS, 1994). The U.S. Environmental Protection Agency (EPA) Office of Emergency and Remedial Response (OERR) plays a lead role in developing national guidance and planning future activities that support the EPA Superfund Program. OERR is in the process of updating its 1989 Risk Assessment Guidance for Superfund (RAGS) as part of the EPA Superfund reform activities. Volume III of RAGS, when completed in 1999, will provide guidance for conducting probabilistic risk assessments. This revised document will contain technical information including probability density functions (PDFs) and the methods used to develop and evaluate these PDFs. The PDFs provided in this EPA document are limited to those relating to exposure factors.

  18. Probability density function of a puff dispersing from the wall of a turbulent channel

    NASA Astrophysics Data System (ADS)

    Nguyen, Quoc; Papavassiliou, Dimitrios

    2015-11-01

    Study of dispersion of passive contaminants in turbulence has proved to be helpful in understanding fundamental heat and mass transfer phenomena. Many simulation and experimental works have been carried out to locate and track motions of scalar markers in a flow. One method is to combine Direct Numerical Simulation (DNS) and Lagrangian Scalar Tracking (LST) to record locations of markers. While this has proved to be useful, high computational cost remains a concern. In this study, we develop a model that could reproduce results obtained by DNS and LST for turbulent flow. Puffs of markers with different Schmidt numbers were released into a flow field at a frictional Reynolds number of 150. The point of release was at the channel wall, so that both diffusion and convection contribute to the puff dispersion pattern, defining different stages of dispersion. Based on outputs from DNS and LST, we seek the most suitable and feasible probability density function (PDF) that represents distribution of markers in the flow field. The PDF would play a significant role in predicting heat and mass transfer in wall turbulence, and would prove to be helpful where DNS and LST are not always available.

  19. Probability density functions of the average and difference intensities of Friedel opposites.

    PubMed

    Shmueli, U; Flack, H D

    2010-11-01

    Trigonometric series for the average (A) and difference (D) intensities of Friedel opposites were carefully rederived and were normalized to minimize their dependence on sin(θ)/λ. Probability density functions (hereafter p.d.f.s) of these series were then derived by the Fourier method [Shmueli, Weiss, Kiefer & Wilson (1984). Acta Cryst. A40, 651-660] and their expressions, which admit any chemical composition of the unit-cell contents, were obtained for the space group P1. Histograms of A and D were then calculated for an assumed random-structure model and for 3135 Friedel pairs of a published solved crystal structure, and were compared with the p.d.f.s after the latter were scaled up to the histograms. Good agreement was obtained for the random-structure model and qualitative agreement for the published solved structure. The results indicate that the residual discrepancy is mainly due to the presumed statistical independence of the contributions of the interatomic vectors in the p.d.f.'s characteristic function.
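
    For reference, the average and difference intensities of a Friedel pair (h, -h) are conventionally defined as follows (standard notation; the paper's normalization may differ):

```latex
A(\mathbf{h}) = \tfrac{1}{2}\left[\,|F(\mathbf{h})|^{2} + |F(-\mathbf{h})|^{2}\,\right],
\qquad
D(\mathbf{h}) = |F(\mathbf{h})|^{2} - |F(-\mathbf{h})|^{2}.
```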

  20. Evaluation of joint probability density function models for turbulent nonpremixed combustion with complex chemistry

    NASA Technical Reports Server (NTRS)

    Smith, N. S. A.; Frolov, S. M.; Bowman, C. T.

    1996-01-01

    Two types of mixing sub-models are evaluated in connection with a joint-scalar probability density function method for turbulent nonpremixed combustion. Model calculations are made and compared to simulation results for homogeneously distributed methane-air reaction zones mixing and reacting in decaying turbulence within a two-dimensional enclosed domain. The comparison is arranged to ensure that both the simulation and model calculations a) make use of exactly the same chemical mechanism, b) do not involve non-unity Lewis number transport of species, and c) are free from radiation loss. The modified Curl mixing sub-model was found to provide superior predictive accuracy over the simple relaxation-to-mean sub-model in the case studied. Accuracy to within 10-20% was found for global means of major species and temperature; however, nitric oxide prediction accuracy was lower and highly dependent on the choice of mixing sub-model. Both mixing sub-models were found to produce non-physical mixing behavior for mixture fractions removed from the immediate reaction zone. A suggestion for a further modified Curl mixing sub-model is made in connection with earlier work done in the field.
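
    As a reference point for the two closures being compared, the interaction-by-exchange-with-the-mean (IEM) micromixing model relaxes each notional particle's composition toward the local mean, while the conditional-mean variant (IECM) replaces that mean by the mean conditioned on mixture fraction; a common textbook form (not necessarily the exact constants used in the paper) is:

```latex
\frac{d\phi^{*}}{dt} = -\tfrac{1}{2}\, C_{\phi}\, \omega \left(\phi^{*} - \langle \phi \rangle\right)
\quad \text{(IEM)},
\qquad
\frac{d\phi^{*}}{dt} = -\tfrac{1}{2}\, C_{\phi}\, \omega \left(\phi^{*} - \langle \phi \,|\, \xi \rangle\right)
\quad \text{(IECM)}.
```

    Here φ* is a notional-particle composition, ω a turbulence (mixing) frequency, C_φ a model constant, and ξ the mixture fraction on which the conditional mean is taken.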

  1. Error Estimates for Mixed Methods.

    DTIC Science & Technology

    1979-03-01

    This paper presents abstract error estimates for mixed methods for the approximate solution of elliptic boundary value problems. These estimates are...then applied to obtain quasi-optimal error estimates in the usual Sobolev norms for four examples: three mixed methods for the biharmonic problem and a mixed method for 2nd order elliptic problems. (Author)

  2. Probability density of strong intensity fluctuations of laser radiation in a weakly absorbing random medium

    SciTech Connect

    Almaev, R Kh; Suvorov, A A

    2010-01-31

    Based on the quasi-optic parabolic equation, we derived analytically an expression for the probability density of strong intensity fluctuations of radiation propagating in a random attenuating medium. This probability density is compared with that obtained experimentally. It is shown that the agreement between the theory and the experiment in the entire range of variations in the radiation intensity is achieved by the combined account for the effect of small random attenuation on the radiation propagation and the action of noises on the radiation receiver. (lasers)

  3. Fading probability density function of free-space optical communication channels with pointing error

    NASA Astrophysics Data System (ADS)

    Zhao, Zhijun; Liao, Rui

    2011-06-01

    The turbulent atmosphere causes wavefront distortion, beam wander, and beam broadening of a laser beam. These effects result in average power loss and instantaneous power fading at the receiver aperture and thus degrade performance of a free-space optical (FSO) communication system. In addition to the atmospheric turbulence, a FSO communication system may also suffer from laser beam pointing error. The pointing error causes excessive power loss and power fading. This paper proposes and studies an analytical method for calculating the FSO channel fading probability density function (pdf) induced by both atmospheric turbulence and pointing error. This method is based on the fast-tracked laser beam fading profile and the joint effects of beam wander and pointing error. In order to evaluate the proposed analytical method, large-scale numerical wave-optics simulations are conducted. Three types of pointing errors are studied, namely, the Gaussian random pointing error, the residual tracking error, and the sinusoidal sway pointing error. The FSO system employs a collimated Gaussian laser beam propagating along a horizontal path. The propagation distances range from 0.25 miles to 2.5 miles. The refractive index structure parameter is chosen to be Cn^2 = 5×10^-15 m^-2/3 and Cn^2 = 5×10^-13 m^-2/3. The studied cases cover from weak to strong fluctuations. The fading pdf curves of channels with pointing error calculated using the analytical method match accurately the corresponding pdf curves obtained directly from large-scale wave-optics simulations. They also give accurate average bit-error-rate (BER) curves and outage probabilities. Both the lognormal and the best-fit gamma-gamma fading pdf curves deviate from those of the corresponding simulation curves, and they produce overoptimistic average BER curves and outage probabilities.

  4. Generation of time histories with a specified auto spectral density and probability density function

    SciTech Connect

    Smallwood, D.O.

    1996-08-01

    It is recognized that some dynamic and noise environments are characterized by time histories which are not Gaussian. An example is high intensity acoustic noise. Another example is some transportation vibration. A better simulation of these environments can be generated if a zero-mean non-Gaussian time history can be reproduced with a specified auto (or power) spectral density (ASD or PSD) and a specified probability density function (pdf). After the required time history is synthesized, the waveform can be used for simulation purposes. For example, modern waveform reproduction techniques can be used to reproduce the waveform on electrodynamic or electrohydraulic shakers. Or the waveforms can be used in digital simulations. A method is presented for the generation of realizations of zero-mean non-Gaussian random time histories with a specified ASD and pdf. First, a Gaussian time history with the specified auto (or power) spectral density (ASD) is generated. A monotonic nonlinear function relating the Gaussian waveform to the desired realization is then established based on the Cumulative Distribution Function (CDF) of the desired waveform and the known CDF of a Gaussian waveform. The established function is used to transform the Gaussian waveform to a realization of the desired waveform. Since the transformation preserves the zero-crossings and peaks of the original Gaussian waveform, and does not introduce any substantial discontinuities, the ASD is not substantially changed. Several methods are available to generate a realization of a Gaussian distributed waveform with a known ASD; the method of Smallwood and Paez (1993) is an example. However, the generation of random noise with a specified ASD but with a non-Gaussian distribution is less well known.
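
    A minimal sketch of the described procedure, under assumed details (the spectral-shaping step, the standardization, and the band-limited example ASD are illustrative, and scaling of the ASD amplitude is not treated rigorously here):

```python
import numpy as np
from scipy import stats

def non_gaussian_realization(n, sample_rate, asd, target_dist, seed=None):
    """Zero-mean non-Gaussian time history with a prescribed spectral shape.

    1. Shape white Gaussian noise in the frequency domain with sqrt(asd(f)) to
       impose the desired auto spectral density (up to an overall scale).
    2. Map the standardized Gaussian samples through target_dist.ppf(Phi(z)) so
       the marginal distribution matches the desired non-Gaussian CDF.
    """
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n, d=1.0 / sample_rate)
    spectrum = np.fft.rfft(rng.standard_normal(n)) * np.sqrt(asd(freqs))
    shaped = np.fft.irfft(spectrum, n=n)
    z = (shaped - shaped.mean()) / shaped.std()      # standardized Gaussian history
    return target_dist.ppf(stats.norm.cdf(z))        # monotonic CDF-to-CDF map

# Example: band-limited ASD between 20 and 200 Hz and a zero-mean Laplace target
history = non_gaussian_realization(
    n=2 ** 14, sample_rate=1024.0,
    asd=lambda f: np.where((f > 20.0) & (f < 200.0), 1.0, 1e-8),
    target_dist=stats.laplace(loc=0.0, scale=1.0),
)
```

    Because the CDF map is monotonic, zero crossings and peaks of the Gaussian realization are preserved, which is why the ASD is only mildly distorted by the transformation.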

  5. Dynamic Graphics in Excel for Teaching Statistics: Understanding the Probability Density Function

    ERIC Educational Resources Information Center

    Coll-Serrano, Vicente; Blasco-Blasco, Olga; Alvarez-Jareno, Jose A.

    2011-01-01

    In this article, we show a dynamic graphic in Excel that is used to introduce an important concept in our subject, Statistics I: the probability density function. This interactive graphic seeks to facilitate conceptual understanding of the main aspects analysed by the learners.

  6. Dynamic Graphics in Excel for Teaching Statistics: Understanding the Probability Density Function

    ERIC Educational Resources Information Center

    Coll-Serrano, Vicente; Blasco-Blasco, Olga; Alvarez-Jareno, Jose A.

    2011-01-01

    In this article, we show a dynamic graphic in Excel that is used to introduce an important concept in our subject, Statistics I: the probability density function. This interactive graphic seeks to facilitate conceptual understanding of the main aspects analysed by the learners.

  7. Kernel density estimator methods for Monte Carlo radiation transport

    NASA Astrophysics Data System (ADS)

    Banerjee, Kaushik

    In this dissertation, the Kernel Density Estimator (KDE), a nonparametric probability density estimator, is studied and used to represent global Monte Carlo (MC) tallies. KDE is also employed to remove the singularities from two important Monte Carlo tallies, namely point detector and surface crossing flux tallies. Finally, KDE is also applied to accelerate the Monte Carlo fission source iteration for criticality problems. In the conventional MC calculation histograms are used to represent global tallies which divide the phase space into multiple bins. Partitioning the phase space into bins can add significant overhead to the MC simulation and the histogram provides only a first order approximation to the underlying distribution. The KDE method is attractive because it can estimate MC tallies in any location within the required domain without any particular bin structure. Post-processing of the KDE tallies is sufficient to extract detailed, higher order tally information for an arbitrary grid. The quantitative and numerical convergence properties of KDE tallies are also investigated and they are shown to be superior to conventional histograms as well as the functional expansion tally developed by Griesheimer. Monte Carlo point detector and surface crossing flux tallies are two widely used tallies but they suffer from an unbounded variance. As a result, the central limit theorem can not be used for these tallies to estimate confidence intervals. By construction, KDE tallies can be directly used to estimate flux at a point but the variance of this point estimate does not converge as 1/N, which is not unexpected for a point quantity. However, an improved approach is to modify both point detector and surface crossing flux tallies directly by using KDE within a variance reduction approach by taking advantage of the fact that KDE estimates the underlying probability density function. This methodology is demonstrated by several numerical examples and demonstrates that

  8. The role of presumed probability density functions in the simulation of nonpremixed turbulent combustion

    NASA Astrophysics Data System (ADS)

    Coclite, A.; Pascazio, G.; De Palma, P.; Cutrone, L.

    2016-07-01

    Flamelet-Progress-Variable (FPV) combustion models allow the evaluation of all thermochemical quantities in a reacting flow by computing only the mixture fraction Z and a progress variable C. When using such a method to predict turbulent combustion in conjunction with a turbulence model, a probability density function (PDF) is required to evaluate statistical averages (e.g., Favre averages) of chemical quantities. The choice of the PDF is a compromise between computational cost and accuracy. The aim of this paper is to investigate the influence of the PDF choice and its modeling aspects on the prediction of turbulent combustion. Three different models are considered: the standard one, based on the choice of a β-distribution for Z and a Dirac distribution for C; a model employing a β-distribution for both Z and C; and a third model obtained using a β-distribution for Z and the statistically most likely distribution (SMLD) for C. The standard model, although widely used, does not take into account the interaction between turbulence and chemical kinetics, nor the dependence of the progress variable on its variance in addition to its mean. The SMLD approach establishes a systematic framework to incorporate information from an arbitrary number of moments, thus providing an improvement over conventionally employed presumed PDF closure models. The rationale behind the choice of the three PDFs is described in some detail and the prediction capability of the corresponding models is tested against well-known test cases, namely the Sandia flames and H2-air supersonic combustion.
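
    A presumed β-PDF closure of the kind described above averages any flamelet quantity φ(Z) against a beta density whose parameters are fixed by the resolved mean and variance of Z; the sketch below is a generic illustration of that step (function names and the quadrature are assumptions, not the authors' code).

```python
import numpy as np
from scipy import stats

def presumed_beta_average(mean_z, var_z, phi_of_z, n_points=400):
    """Average a flamelet quantity phi(Z) against a presumed beta PDF of Z.

    The beta parameters are fixed by the resolved mean and variance of the
    mixture fraction; the variance must satisfy var_z < mean_z * (1 - mean_z).
    """
    gamma = mean_z * (1.0 - mean_z) / max(var_z, 1e-12) - 1.0
    a, b = mean_z * gamma, (1.0 - mean_z) * gamma
    z = np.linspace(1e-6, 1.0 - 1e-6, n_points)
    pdf = stats.beta(a, b).pdf(z)
    return np.trapz(phi_of_z(z) * pdf, z) / np.trapz(pdf, z)

# Example: a temperature-like quantity peaking near a stoichiometric Z of 0.055
mean_phi = presumed_beta_average(0.1, 0.01, lambda z: np.exp(-((z - 0.055) / 0.03) ** 2))
```

    The Dirac choice for C in the standard model corresponds to collapsing the analogous integral over C onto its mean, which is exactly the simplification the other two models relax.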

  9. Model-based prognostics for batteries which estimates useful life and uses a probability density function

    NASA Technical Reports Server (NTRS)

    Saha, Bhaskar (Inventor); Goebel, Kai F. (Inventor)

    2012-01-01

    This invention develops a mathematical model to describe battery behavior during individual discharge cycles as well as over its cycle life. The basis for the form of the model has been linked to the internal processes of the battery and validated using experimental data. Effects of temperature and load current have also been incorporated into the model. Subsequently, the model has been used in a Particle Filtering framework to make predictions of remaining useful life for individual discharge cycles as well as for cycle life. The prediction performance was found to be satisfactory as measured by performance metrics customized for prognostics for a sample case. The work presented here provides initial steps towards a comprehensive health management solution for energy storage devices.
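    The following is a minimal bootstrap (sampling-importance-resampling) particle filter for capacity fade, in the spirit of the particle filtering framework mentioned above; the exponential fade model, noise levels, and end-of-life threshold are assumptions for illustration, not the patented battery model.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical capacity-fade data: C_k = C_0 * exp(-lam * k) plus measurement noise.
true_lam, c0, sigma_meas = 0.004, 1.0, 0.01
cycles = np.arange(60)
measured = c0 * np.exp(-true_lam * cycles) + rng.normal(0, sigma_meas, cycles.size)

# Bootstrap particle filter over the state (capacity, fade rate).
n = 2000
cap = np.full(n, c0) + rng.normal(0, 0.01, n)
lam = rng.uniform(0.001, 0.01, n)
for z in measured[1:]:
    # Propagate one cycle: exponential fade with small process noise on the rate.
    lam = np.abs(lam + rng.normal(0, 1e-4, n))
    cap = cap * np.exp(-lam)
    # Weight by the Gaussian measurement likelihood and resample.
    w = np.exp(-0.5 * ((z - cap) / sigma_meas) ** 2)
    w /= w.sum()
    idx = rng.choice(n, size=n, p=w)
    cap, lam = cap[idx], lam[idx]

# Predict remaining useful life: cycles until capacity drops below 70% of nominal.
eol = 0.7 * c0
rul = np.log(cap / eol) / lam            # from cap * exp(-lam * t) = eol
print(np.percentile(rul, [5, 50, 95]))   # RUL estimate with its spread (a PDF summary)
```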

  10. Derivation of an eigenvalue probability density function relating to the Poincaré disk

    NASA Astrophysics Data System (ADS)

    Forrester, Peter J.; Krishnapur, Manjunath

    2009-09-01

    A result of Zyczkowski and Sommers (2000 J. Phys. A: Math. Gen. 33 2045-57) gives the eigenvalue probability density function for the top N × N sub-block of a Haar distributed matrix from U(N + n). In the case n >= N, we rederive this result, starting from knowledge of the distribution of the sub-blocks, introducing the Schur decomposition and integrating over all variables except the eigenvalues. The integration is done by identifying a recursive structure which reduces the dimension. This approach is inspired by an analogous approach which has been recently applied to determine the eigenvalue probability density function for random matrices A-1B, where A and B are random matrices with entries standard complex normals. We relate the eigenvalue distribution of the sub-blocks to a many-body quantum state, and to the one-component plasma, on the pseudosphere.

  11. Reconstruction of probability density function of intensity fluctuations relevant to free-space laser communications through atmospheric turbulence

    NASA Astrophysics Data System (ADS)

    Majumdar, Arun K.; Luna, Carlos E.; Idell, Paul S.

    2007-09-01

    A new method of reconstructing and predicting an unknown probability density function (PDF) characterizing the statistics of intensity fluctuations of optical beams propagating through atmospheric turbulence is presented in this paper. The method is based on a series expansion of generalized Laguerre polynomials; the expansion coefficients are expressed in terms of the higher-order moments of the intensity statistics. The method generates the PDF from the data moments without any prior knowledge of the specific statistics and converges smoothly. The utility of the reconstructed PDF for free-space laser communication, in terms of calculating the average bit error rate and the probability of fading, is pointed out. Simulated numerical results are compared with known non-Gaussian test PDFs (log-normal, Rice-Nakagami, and Gamma-Gamma distributions) and show excellent agreement. The accuracy of the reconstructed PDF is also evaluated.
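    A hedged sketch of the moment-based series idea: a gamma "base" weight is matched to the sample mean and variance, and the generalized-Laguerre coefficients are computed as sample averages of the polynomials (equivalently, combinations of the data moments). The normalization conventions and the log-normal test data are assumptions and may differ from the paper's formulation.

```python
import numpy as np
from scipy.special import eval_genlaguerre, gammaln
from scipy.stats import gamma

# Hypothetical intensity record standing in for measured irradiance fluctuations.
rng = np.random.default_rng(3)
samples = rng.lognormal(mean=0.0, sigma=0.3, size=50_000)

# Moment-match a gamma base weight, then expand the remainder in generalized
# Laguerre polynomials; each coefficient is a sample average of L_m, i.e. a
# combination of the data moments.
m1, var = samples.mean(), samples.var()
theta = var / m1                 # scale of the base gamma weight
alpha = m1**2 / var - 1.0        # base weight y^alpha e^{-y} / Gamma(alpha + 1)
y = samples / theta

def coeff(m):
    # c_m = m! Gamma(alpha+1) / Gamma(m+alpha+1) * E[L_m^{(alpha)}(Y)]
    lognorm = gammaln(m + 1) + gammaln(alpha + 1) - gammaln(m + alpha + 1)
    return np.exp(lognorm) * eval_genlaguerre(m, alpha, y).mean()

def reconstructed_pdf(x, n_terms=8):
    base = gamma(a=alpha + 1.0, scale=theta).pdf(x)    # base weight in terms of x
    series = sum(coeff(m) * eval_genlaguerre(m, alpha, x / theta)
                 for m in range(n_terms))
    return base * series

x = np.linspace(0.2, 3.0, 5)
print(reconstructed_pdf(x))      # compare against a histogram of `samples`
```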

  12. A Projection and Density Estimation Method for Knowledge Discovery

    PubMed Central

    Stanski, Adam; Hellwich, Olaf

    2012-01-01

    A key ingredient of modern data analysis is probability density estimation. However, it is well known that the curse of dimensionality prevents a proper estimation of densities in high dimensions. The problem is typically circumvented by using a fixed set of assumptions about the data, e.g., by assuming partial independence of features, data on a manifold, or a customized kernel. These fixed assumptions limit the applicability of a method. In this paper we propose a framework that uses a flexible set of assumptions instead. It allows a model to be tailored to various problems by means of 1d-decompositions. The approach achieves a fast runtime and is not limited by the curse of dimensionality, as all estimations are performed in 1d-space. The wide range of applications is demonstrated on two very different real-world examples. The first is a data mining software package that allows the fully automatic discovery of patterns; the software is publicly available for evaluation. As a second example, an image segmentation method is realized. It achieves state-of-the-art performance on a benchmark dataset although it uses only a fraction of the training data and very simple features. PMID:23049675

  13. A projection and density estimation method for knowledge discovery.

    PubMed

    Stanski, Adam; Hellwich, Olaf

    2012-01-01

    A key ingredient of modern data analysis is probability density estimation. However, it is well known that the curse of dimensionality prevents a proper estimation of densities in high dimensions. The problem is typically circumvented by using a fixed set of assumptions about the data, e.g., by assuming partial independence of features, data on a manifold, or a customized kernel. These fixed assumptions limit the applicability of a method. In this paper we propose a framework that uses a flexible set of assumptions instead. It allows a model to be tailored to various problems by means of 1d-decompositions. The approach achieves a fast runtime and is not limited by the curse of dimensionality, as all estimations are performed in 1d-space. The wide range of applications is demonstrated on two very different real-world examples. The first is a data mining software package that allows the fully automatic discovery of patterns; the software is publicly available for evaluation. As a second example, an image segmentation method is realized. It achieves state-of-the-art performance on a benchmark dataset although it uses only a fraction of the training data and very simple features.

  14. PDV Uncertainty Estimation & Methods Comparison

    SciTech Connect

    Machorro, E.

    2011-11-01

    Several methods are presented for estimating the rapidly changing instantaneous frequency of a time varying signal that is contaminated by measurement noise. Useful a posteriori error estimates for several methods are verified numerically through Monte Carlo simulation. However, given the sampling rates of modern digitizers, sub-nanosecond variations in velocity are shown to be reliably measurable in most (but not all) cases. Results support the hypothesis that in many PDV regimes of interest, sub-nanosecond resolution can be achieved.

  15. On fading probability density functions of fast-tracked and untracked free-space optical communication channels

    NASA Astrophysics Data System (ADS)

    Zhao, Zhijun; Liao, Rui

    2011-03-01

    Free-space optical (FSO) communication systems suffer from average power loss and instantaneous power fading due to atmospheric turbulence. The channel fading probability density function (pdf) is of critical importance for FSO communication system design and evaluation. The performance and reliability of FSO communication systems can be greatly enhanced if fast-tracking devices are employed at the transmitter to compensate for laser beam wander at the receiver aperture. The fast-tracking method is especially effective when the communication distance is long. This paper studies the fading probability density functions of both fast-tracked and untracked FSO communication channels. Large-scale wave-optics simulations are conducted for both tracked and untracked lasers. In the simulations, the Kolmogorov spectrum is adopted, and it is assumed that the outer scale is infinitely large and the inner scale is negligibly small. The fading pdfs of both fast-tracked and untracked FSO channels are obtained from the simulations. Results show that the fast-tracked channel fading can be accurately modeled as gamma-distributed if the receiver aperture size is smaller than the coherence radius. An analytical method is given for calculating the untracked fading pdfs of both point-like and finite-size receiver apertures from the fast-tracked fading pdf. For point-like apertures, the analytical method gives pdfs close to the well-known gamma-gamma pdfs if off-axis effects are omitted in the formulation. When off-axis effects are taken into consideration, the untracked pdfs obtained using the analytical method fit the simulation pdfs better than gamma-gamma distributions for point-like apertures, and closely fit the simulation pdfs for finite-size apertures, where gamma-gamma pdfs deviate from the simulations significantly.

  16. Parameter estimation of social forces in pedestrian dynamics models via a probabilistic method.

    PubMed

    Corbetta, Alessandro; Muntean, Adrian; Vafayi, Kiamars

    2015-04-01

    Focusing on a specific crowd dynamics situation, including real-life experiments and measurements, our paper targets a twofold aim: (1) we present a Bayesian probabilistic method to estimate the value and the uncertainty (in the form of a probability density function) of parameters in crowd dynamics models from experimental data; and (2) we introduce a fitness measure for the models to classify a couple of model structures (forces) according to their fitness to the experimental data, preparing the stage for a more general model-selection and validation strategy inspired by probabilistic data analysis. Finally, we review the essential aspects of our experimental setup and measurement technique.

  17. Breather turbulence versus soliton turbulence: Rogue waves, probability density functions, and spectral features.

    PubMed

    Akhmediev, N; Soto-Crespo, J M; Devine, N

    2016-08-01

    Turbulence in integrable systems exhibits a noticeable scientific advantage: it can be expressed in terms of the nonlinear modes of these systems. Whether the majority of the excitations in the system are breathers or solitons defines the properties of the turbulent state. In the two extreme cases we can call such states "breather turbulence" or "soliton turbulence." The number of rogue waves, the probability density functions of the chaotic wave fields, and their physical spectra are all specific for each of these two situations. Understanding these extreme cases also helps in studies of mixed turbulent states when the wave field contains both solitons and breathers, thus revealing intermediate characteristics.

  18. Breather turbulence versus soliton turbulence: Rogue waves, probability density functions, and spectral features

    NASA Astrophysics Data System (ADS)

    Akhmediev, N.; Soto-Crespo, J. M.; Devine, N.

    2016-08-01

    Turbulence in integrable systems exhibits a noticeable scientific advantage: it can be expressed in terms of the nonlinear modes of these systems. Whether the majority of the excitations in the system are breathers or solitons defines the properties of the turbulent state. In the two extreme cases we can call such states "breather turbulence" or "soliton turbulence." The number of rogue waves, the probability density functions of the chaotic wave fields, and their physical spectra are all specific for each of these two situations. Understanding these extreme cases also helps in studies of mixed turbulent states when the wave field contains both solitons and breathers, thus revealing intermediate characteristics.

  19. Eulerian Mapping Closure Approach for Probability Density Function of Concentration in Shear Flows

    NASA Technical Reports Server (NTRS)

    He, Guowei; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    The Eulerian mapping closure approach is developed for uncertainty propagation in computational fluid mechanics. The approach is used to study the Probability Density Function (PDF) for the concentration of species advected by a random shear flow. An analytical argument shows that fluctuation of the concentration field at one point in space is non-Gaussian and exhibits stretched exponential form. An Eulerian mapping approach provides an appropriate approximation to both convection and diffusion terms and leads to a closed mapping equation. The results obtained describe the evolution of the initial Gaussian field, which is in agreement with direct numerical simulations.

  20. Equivalent probability density moments determine equivalent epidemics in an SIRS model with temporary immunity.

    PubMed

    Carr, Thomas W

    2017-02-01

    In an SIRS compartment model for a disease we consider the effect of different probability distributions for remaining immune. We show that, to first approximation, the first three moments of the corresponding probability densities are sufficient to describe well the oscillatory solutions corresponding to recurrent epidemics. Specifically, increasing the fraction who lose immunity, increasing the mean immunity time, and decreasing the heterogeneity of the population all favor the onset of epidemics and increase their severity. We consider six different distributions, some symmetric about their mean and some asymmetric, and show that, by tuning their parameters so that they have equivalent moments, they all exhibit equivalent dynamical behavior.
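    One common way to encode a non-exponential immunity period in an ODE model, sketched below, is the linear chain trick: the immune class is split into n sequential stages, giving an Erlang-distributed immune time whose variance shrinks as n grows. This is a generic SIRS sketch with illustrative parameter values, not the paper's specific six distributions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# SIRS with an Erlang(n, n/tau) immunity period via the linear chain trick:
# R is split into n sequential stages, each left at rate n/tau, so the total
# immune time has mean tau and variance tau^2 / n (less heterogeneity as n grows).
beta, gamma_rec, tau, n_stages = 0.5, 0.2, 40.0, 4   # illustrative values

def rhs(t, y):
    s, i = y[0], y[1]
    r = y[2:]
    rate = n_stages / tau
    ds = -beta * s * i + rate * r[-1]       # last immune stage returns to S
    di = beta * s * i - gamma_rec * i
    dr = np.empty(n_stages)
    dr[0] = gamma_rec * i - rate * r[0]
    dr[1:] = rate * (r[:-1] - r[1:])
    return np.concatenate(([ds, di], dr))

y0 = np.concatenate(([0.99, 0.01], np.zeros(n_stages)))
sol = solve_ivp(rhs, (0.0, 600.0), y0, max_step=0.5)
print(sol.y[1].max(), sol.y[1, -1])   # peak and final infective fraction
```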

  1. A Delta-Sigma Modulator Using a Non-uniform Quantizer Adjusted for the Probability Density of Input Signals

    NASA Astrophysics Data System (ADS)

    Kitayabu, Toru; Hagiwara, Mao; Ishikawa, Hiroyasu; Shirai, Hiroshi

    A novel delta-sigma modulator that employs a non-uniform quantizer whose spacing is adjusted by reference to the statistical properties of the input signal is proposed. The proposed delta-sigma modulator has less quantization noise than one that uses a uniform quantizer with the same number of output values. With respect to the quantizer on its own, Lloyd proposed a non-uniform quantizer that is optimal for minimizing the average quantization noise power, applicable when the statistical properties of the input signal, i.e., its probability density, are given. However, that procedure cannot be applied directly to the quantizer in a delta-sigma modulator because it jeopardizes the modulator's stability. In this paper, a procedure is proposed that determines the spacing of the quantizer while avoiding instability. Simulation results show that the proposed method reduces quantization noise by up to 3.8 dB and 2.8 dB for input signals having a PAPR of 16 dB and 12 dB, respectively, compared to a modulator employing a uniform quantizer. Two alternative types of probability density function (PDF) are used in the proposed method for the calculation of the output values: the PDF of the input signal to the delta-sigma modulator, and an approximated PDF of the input signal to the quantizer inside the delta-sigma modulator. Both approaches are evaluated, and the latter is found to give lower quantization noise.
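    For reference, Lloyd's classical design of a non-uniform scalar quantizer for a known input PDF alternates two steps: thresholds are set to midpoints of adjacent output levels, and each output level is set to the conditional mean of its cell. The sketch below assumes a Gaussian input PDF and does not reproduce the paper's stability-preserving adjustment inside the delta-sigma loop.

```python
import numpy as np
from scipy.stats import norm

# Lloyd(-Max) design of a scalar non-uniform quantizer for a given input PDF.
pdf_mean, pdf_std, n_levels = 0.0, 1.0, 8
dist = norm(pdf_mean, pdf_std)

levels = np.linspace(-2, 2, n_levels)            # initial guess
for _ in range(200):
    thresholds = 0.5 * (levels[:-1] + levels[1:])
    edges = np.concatenate(([-np.inf], thresholds, [np.inf]))
    # Conditional mean of a Gaussian over (a, b):
    #   E[X | a < X < b] = mu - sigma^2 * (f(b) - f(a)) / (F(b) - F(a))
    num = dist.pdf(edges[1:]) - dist.pdf(edges[:-1])
    den = dist.cdf(edges[1:]) - dist.cdf(edges[:-1])
    levels = pdf_mean - pdf_std**2 * num / den

print(np.round(levels, 3))   # non-uniform spacing: denser where the PDF is high
```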

  2. A biology-driven receptor model for daily pollen allergy risk in Korea based on Weibull probability density function.

    PubMed

    Kim, Kyu Rang; Kim, Mijin; Choe, Ho-Seong; Han, Mae Ja; Lee, Hye-Rim; Oh, Jae-Won; Kim, Baek-Jo

    2017-02-01

    Pollen is an important cause of respiratory allergic reactions. As individual sanitation has improved, allergy risk has increased, and this trend is expected to continue due to climate change. Atmospheric pollen concentration is highly influenced by weather conditions. Regression analysis and modeling of the relationships between airborne pollen concentrations and weather conditions were performed to analyze and forecast pollen conditions. Traditionally, daily pollen concentration has been estimated using regression models that describe the relationships between observed pollen concentrations and weather conditions. These models were able to forecast daily concentrations at the sites of observation, but lacked broader spatial applicability beyond those sites. To overcome this limitation, an integrated modeling scheme was developed that is designed to represent the underlying processes of pollen production and distribution. A maximum potential for airborne pollen is first determined using the Weibull probability density function. Then, daily pollen concentration is estimated using multiple regression models. Daily risk grade levels are determined based on the risk criteria used in Korea. The mean percentages of agreement between the observed and estimated levels were 81.4-88.2 % and 92.5-98.5 % for oak and Japanese hop pollens, respectively. The new models estimated daily pollen risk more accurately than the original statistical models because of the newly integrated biological response curves. Although they overestimated seasonal mean concentration, they did not simulate all of the peak concentrations. This issue would be resolved by adding more variables that affect the prevalence and internal maturity of pollens.

  3. A biology-driven receptor model for daily pollen allergy risk in Korea based on Weibull probability density function

    NASA Astrophysics Data System (ADS)

    Kim, Kyu Rang; Kim, Mijin; Choe, Ho-Seong; Han, Mae Ja; Lee, Hye-Rim; Oh, Jae-Won; Kim, Baek-Jo

    2017-02-01

    Pollen is an important cause of respiratory allergic reactions. As individual sanitation has improved, allergy risk has increased, and this trend is expected to continue due to climate change. Atmospheric pollen concentration is highly influenced by weather conditions. Regression analysis and modeling of the relationships between airborne pollen concentrations and weather conditions were performed to analyze and forecast pollen conditions. Traditionally, daily pollen concentration has been estimated using regression models that describe the relationships between observed pollen concentrations and weather conditions. These models were able to forecast daily concentrations at the sites of observation, but lacked broader spatial applicability beyond those sites. To overcome this limitation, an integrated modeling scheme was developed that is designed to represent the underlying processes of pollen production and distribution. A maximum potential for airborne pollen is first determined using the Weibull probability density function. Then, daily pollen concentration is estimated using multiple regression models. Daily risk grade levels are determined based on the risk criteria used in Korea. The mean percentages of agreement between the observed and estimated levels were 81.4-88.2 % and 92.5-98.5 % for oak and Japanese hop pollens, respectively. The new models estimated daily pollen risk more accurately than the original statistical models because of the newly integrated biological response curves. Although they overestimated seasonal mean concentration, they did not simulate all of the peak concentrations. This issue would be resolved by adding more variables that affect the prevalence and internal maturity of pollens.

  4. A biology-driven receptor model for daily pollen allergy risk in Korea based on Weibull probability density function

    NASA Astrophysics Data System (ADS)

    Kim, Kyu Rang; Kim, Mijin; Choe, Ho-Seong; Han, Mae Ja; Lee, Hye-Rim; Oh, Jae-Won; Kim, Baek-Jo

    2016-07-01

    Pollen is an important cause of respiratory allergic reactions. As individual sanitation has improved, allergy risk has increased, and this trend is expected to continue due to climate change. Atmospheric pollen concentration is highly influenced by weather conditions. Regression analysis and modeling of the relationships between airborne pollen concentrations and weather conditions were performed to analyze and forecast pollen conditions. Traditionally, daily pollen concentration has been estimated using regression models that describe the relationships between observed pollen concentrations and weather conditions. These models were able to forecast daily concentrations at the sites of observation, but lacked broader spatial applicability beyond those sites. To overcome this limitation, an integrated modeling scheme was developed that is designed to represent the underlying processes of pollen production and distribution. A maximum potential for airborne pollen is first determined using the Weibull probability density function. Then, daily pollen concentration is estimated using multiple regression models. Daily risk grade levels are determined based on the risk criteria used in Korea. The mean percentages of agreement between the observed and estimated levels were 81.4-88.2 % and 92.5-98.5 % for oak and Japanese hop pollens, respectively. The new models estimated daily pollen risk more accurately than the original statistical models because of the newly integrated biological response curves. Although they overestimated seasonal mean concentration, they did not simulate all of the peak concentrations. This issue would be resolved by adding more variables that affect the prevalence and internal maturity of pollens.

  5. Lie symmetry analysis of the Lundgren-Monin-Novikov equations for multi-point probability density functions of turbulent flow

    NASA Astrophysics Data System (ADS)

    Wacławczyk, M.; Grebenev, V. N.; Oberlack, M.

    2017-04-01

    The problem of turbulence statistics described by the Lundgren-Monin-Novikov (LMN) hierarchy of integro-differential equations is studied in terms of its group properties. For this we perform a Lie group analysis of a truncated LMN chain which presents the first two equations in an infinite set of integro-differential equations for the multi-point probability density functions (pdf’s) of velocity. A complete set of point transformations is derived for the one-point pdf’s and the independent variables: sample space of velocity, space and time. For this purpose we use a direct method based on the canonical Lie-Bäcklund operator. Due to the one-way coupling of correlation equations, the present results are complete in the sense that no additional symmetries exist for the first leading equation, even if the full infinite hierarchy is considered.

  6. The probability density function tail of the Kardar-Parisi-Zhang equation in the strongly non-linear regime

    NASA Astrophysics Data System (ADS)

    Anderson, Johan; Johansson, Jonas

    2016-12-01

    An analytical derivation of the probability density function (PDF) tail describing the strongly correlated interface growth governed by the nonlinear Kardar-Parisi-Zhang equation is provided. The PDF tail exactly coincides with a Tracy-Widom distribution, i.e., a PDF tail proportional to exp(-c w_2^{3/2}), where w_2 is the width of the interface. The PDF tail is computed by the instanton method in the strongly non-linear regime within the Martin-Siggia-Rose framework using a careful treatment of the non-linear interactions. In addition, the effect of spatial dimensions on the PDF tail scaling is discussed. This gives a novel approach to understanding the rightmost PDF tail of the interface width distribution, and the analysis suggests that there is no upper critical dimension.

  7. Numerical study of traffic flow considering the probability density distribution of the traffic density

    NASA Astrophysics Data System (ADS)

    Guo, L. M.; Zhu, H. B.; Zhang, N. X.

    The probability density distribution of the traffic density is analyzed based on empirical data. It is found that the beta distribution fits the measured traffic density perfectly. A modified traffic model is then proposed to simulate the microscopic traffic flow, in which the probability density distribution of the traffic density is taken into account. The model also contains the behavior of drivers' speed adaptation by taking into account differences in driving behavior and the dynamic headway. In addition to the flux-density diagrams, the velocity evolution diagrams and the spatio-temporal profiles of vehicles are also given. The synchronized flow phase and the wide moving jam phase are indicated, which is a challenge for cellular automaton traffic models. Furthermore, the phenomenon of high-speed car-following is exhibited, which has previously been observed in measured data. The results demonstrate the effectiveness of the proposed model in detecting the complicated dynamic phenomena of the traffic flow.
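    A minimal sketch of the distribution-fitting step: fit a beta distribution to normalized density (occupancy) data by the method of moments and by maximum likelihood, then check the fit. The synthetic occupancy sample stands in for the empirical measurements.

```python
import numpy as np
from scipy import stats

# Fit a beta distribution to normalized traffic density (occupancy in [0, 1]).
rng = np.random.default_rng(11)
occupancy = rng.beta(a=2.0, b=6.0, size=5000)   # synthetic stand-in for measured data

# Method-of-moments estimate ...
m, v = occupancy.mean(), occupancy.var()
factor = m * (1 - m) / v - 1.0
a_mm, b_mm = m * factor, (1 - m) * factor

# ... and maximum-likelihood fit with the support fixed to [0, 1].
a_ml, b_ml, _, _ = stats.beta.fit(occupancy, floc=0, fscale=1)

print((a_mm, b_mm), (a_ml, b_ml))
# A Kolmogorov-Smirnov test gives a quick goodness-of-fit check.
print(stats.kstest(occupancy, "beta", args=(a_ml, b_ml)))
```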

  8. Ensemble Averaged Probability Density Function (APDF) for Compressible Turbulent Reacting Flows

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Liu, Nan-Suey

    2012-01-01

    In this paper, we present a concept of the averaged probability density function (APDF) for studying compressible turbulent reacting flows. The APDF is defined as an ensemble average of the fine grained probability density function (FG-PDF) with a mass density weighting. It can be used to exactly deduce the mass density weighted, ensemble averaged turbulent mean variables. The transport equation for APDF can be derived in two ways. One is the traditional way that starts from the transport equation of FG-PDF, in which the compressible Navier-Stokes equations are embedded. The resulting transport equation of APDF is then in a traditional form that contains conditional means of all terms from the right hand side of the Navier-Stokes equations except for the chemical reaction term. These conditional means are new unknown quantities that need to be modeled. Another way of deriving the transport equation of APDF is to start directly from the ensemble averaged Navier-Stokes equations. The resulting transport equation of APDF derived from this approach appears in a closed form without any need for additional modeling. The methodology of ensemble averaging presented in this paper can be extended to other averaging procedures: for example, the Reynolds time averaging for statistically steady flow and the Reynolds spatial averaging for statistically homogeneous flow. It can also be extended to a time or spatial filtering procedure to construct the filtered density function (FDF) for the large eddy simulation (LES) of compressible turbulent reacting flows.

  9. Probability density of the empirical wavelet coefficients of a noisy chaos

    NASA Astrophysics Data System (ADS)

    Garcin, Matthieu; Guégan, Dominique

    2014-05-01

    We are interested in the random empirical wavelet coefficients of a noisy signal when this signal is a unidimensional or multidimensional chaos. More precisely, we provide an expression for the conditional probability density of such coefficients, given a discrete observation grid. The noise is assumed to be described by a symmetric alpha-stable random variable. If the noise is a dynamic noise, then we present the exact expression of the probability density of each wavelet coefficient of the noisy signal. If we face measurement noise, then the noise has a non-linear influence and we propose two approximations. The first relies on a Taylor expansion, whereas the second, relying on an Edgeworth expansion, improves the first general Taylor approximation if the cumulants of the noise are defined. We give some illustrations of these theoretical results for the logistic map, the tent map, and a multidimensional chaos, the Hénon map, disrupted by Gaussian or Cauchy noise.

  10. Polynomial probability distribution estimation using the method of moments

    PubMed Central

    Mattsson, Lars; Rydén, Jesper

    2017-01-01

    We suggest a procedure for estimating Nth-degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is set up algorithmically to aid applicability and to ensure rigor in use. To show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal, and Weibull, as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram–Charlier type. It is concluded that this is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular, this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, to show an advanced application of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation. PMID:28394949
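    One straightforward way to set up the method-of-moments system is sketched below: on a fixed support [L, U], the coefficients of p(x) = Σ a_j x^j are chosen so that the normalization and the first N moments match the target. The Weibull target, the support, and the polynomial degree are illustrative assumptions; the paper's algorithmic safeguards are not reproduced.

```python
import numpy as np
from scipy.stats import weibull_min

# Polynomial PDF approximation by the method of moments on a fixed support [L, U].
N = 6
L, U = 0.0, 3.0
target = weibull_min(c=2.0)                           # unit-scale Weibull, shape 2
moments = [1.0] + [target.moment(k) for k in range(1, N + 1)]   # m_0 = 1, m_1, ..., m_N

# A[k, j] = integral_L^U x^(k+j) dx = (U^(k+j+1) - L^(k+j+1)) / (k+j+1)
A = np.array([[(U**(k + j + 1) - L**(k + j + 1)) / (k + j + 1)
               for j in range(N + 1)] for k in range(N + 1)])
coeffs = np.linalg.solve(A, moments)                  # a_0, ..., a_N

x = np.linspace(L, U, 7)
p_poly = np.polyval(coeffs[::-1], x)                  # polynomial approximation
print(np.round(p_poly, 4))
print(np.round(target.pdf(x), 4))                     # compare with the exact Weibull PDF
```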

  11. Methods for Cloud Cover Estimation

    NASA Technical Reports Server (NTRS)

    Glackin, D. L.; Huning, J. R.; Smith, J. H.; Logan, T. L.

    1984-01-01

    Several methods for cloud cover estimation are described relevant to assessing the performance of a ground-based network of solar observatories. The methods rely on ground and satellite data sources and provide meteorological or climatological information. One means of acquiring long-term observations of solar oscillations is the establishment of a ground-based network of solar observatories. Criteria for station site selection are: gross cloudiness, accurate transparency information, and seeing. Alternative methods for computing this duty cycle are discussed. The cycle, or alternatively a time history of solar visibility from the network, can then be input to a model to determine the effect of duty cycle on derived solar seismology parameters. Cloudiness from space is studied to examine various means by which the duty cycle might be computed. Cloudiness, and to some extent transparency, can potentially be estimated from satellite data.

  12. Methods for Cloud Cover Estimation

    NASA Technical Reports Server (NTRS)

    Glackin, D. L.; Huning, J. R.; Smith, J. H.; Logan, T. L.

    1984-01-01

    Several methods for cloud cover estimation are described relevant to assessing the performance of a ground-based network of solar observatories. The methods rely on ground and satellite data sources and provide meteorological or climatological information. One means of acquiring long-term observations of solar oscillations is the establishment of a ground-based network of solar observatories. Criteria for station site selection are: gross cloudiness, accurate transparency information, and seeing. Alternative methods for computing this duty cycle are discussed. The cycle, or alternatively a time history of solar visibility from the network, can then be input to a model to determine the effect of duty cycle on derived solar seismology parameters. Cloudiness from space is studied to examine various means by which the duty cycle might be computed. Cloudiness, and to some extent transparency, can potentially be estimated from satellite data.

  13. Probability density function formalism for optical coherence tomography signal analysis: a controlled phantom study.

    PubMed

    Weatherbee, Andrew; Sugita, Mitsuro; Bizheva, Kostadinka; Popov, Ivan; Vitkin, Alex

    2016-06-15

    The distribution of backscattered intensities as described by the probability density function (PDF) of tissue-scattered light contains information that may be useful for tissue assessment and diagnosis, including characterization of its pathology. In this Letter, we examine the PDF description of the light scattering statistics in a well characterized tissue-like particulate medium using optical coherence tomography (OCT). It is shown that for low scatterer density, the governing statistics depart considerably from a Gaussian description and follow the K distribution for both OCT amplitude and intensity. The PDF formalism is shown to be independent of the scatterer flow conditions; this is expected from theory, and suggests robustness and motion independence of the OCT amplitude (and OCT intensity) PDF metrics in the context of potential biomedical applications.

  14. Computing approximate random Delta v magnitude probability densities. [for spacecraft trajectory correction

    NASA Technical Reports Server (NTRS)

    Chadwick, C.

    1984-01-01

    This paper describes the development and use of an algorithm to compute approximate statistics of the magnitude of a single random trajectory correction maneuver (TCM) Delta v vector. The TCM Delta v vector is modeled as a three-component Cartesian vector, each of whose components is a random variable having a normal (Gaussian) distribution with zero mean and possibly unequal standard deviations. The algorithm uses these standard deviations as input to produce approximations to (1) the mean and standard deviation of the magnitude of Delta v, (2) points of the probability density function of the magnitude of Delta v, and (3) points of the cumulative and inverse cumulative distribution functions of Delta v. The approximations are based on Monte Carlo techniques developed in a previous paper by the author and extended here. The algorithm described is expected to be useful in both pre-flight planning and in-flight analysis of maneuver propellant requirements for space missions.
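    A minimal Monte Carlo version of the computation described above: draw the three Gaussian components with unequal standard deviations, form the magnitude, and read off its mean, standard deviation, PDF points, and quantiles. The per-axis sigma values are placeholders.

```python
import numpy as np

# Monte Carlo approximation of statistics of |dv| for a zero-mean Gaussian
# TCM vector with unequal component standard deviations (placeholder values).
rng = np.random.default_rng(2024)
sigma = np.array([1.0, 2.5, 0.5])          # m/s, per-axis 1-sigma (illustrative)
dv = rng.normal(0.0, sigma, size=(200_000, 3))
mag = np.linalg.norm(dv, axis=1)

print("mean |dv|      :", mag.mean())
print("std  |dv|      :", mag.std())
# Points of the PDF of |dv| via a normalized histogram ...
density, edges = np.histogram(mag, bins=60, density=True)
# ... and points of the cumulative / inverse-cumulative distribution.
probs = np.array([0.5, 0.9, 0.99])
print("quantiles      :", np.quantile(mag, probs))
print("P(|dv| < 3 m/s):", (mag < 3.0).mean())
```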

  15. Scale-wise evolution of rainfall probability density functions fingerprints the rainfall generation mechanism

    NASA Astrophysics Data System (ADS)

    Molini, Annalisa; Katul, Gabriel G.; Porporato, Amilcare

    2010-04-01

    The cross-scale probabilistic structure of rainfall intensity records collected over time scales ranging from hours to decades at sites dominated by both convective and frontal systems is investigated. Across these sites, intermittency build-up from slow to fast time-scales is analyzed in terms of heavy tailed and asymmetric signatures in the scale-wise evolution of rainfall probability density functions (pdfs). The analysis demonstrates that rainfall records dominated by convective storms develop heavier-tailed power law pdfs toward finer scales when compared with their frontal systems counterpart. Also, a concomitant marked asymmetry build-up emerges at such finer time scales. A scale-dependent probabilistic description of such fat tails and asymmetry appearance is proposed based on a modified q-Gaussian model, able to describe the cross-scale rainfall pdfs in terms of the nonextensivity parameter q, a lacunarity (intermittency) correction and a tail asymmetry coefficient, linked to the rainfall generation mechanism.

  16. The Sherrington-Kirkpatrick spin glass model in the presence of a random field with a joint Gaussian probability density function for the exchange interactions and random fields

    NASA Astrophysics Data System (ADS)

    Hadjiagapiou, Ioannis A.

    2014-03-01

    Magnetic systems with disorder form an important class of systems that are under intensive study, since they reflect real materials. One such class is that of spin glasses, which combine randomness and frustration. The Sherrington-Kirkpatrick Ising spin glass with random couplings in the presence of a random magnetic field is investigated in detail within the framework of the replica method. The two random variables (exchange interaction and random magnetic field) are drawn from a joint Gaussian probability density function characterized by a correlation coefficient ρ. The thermodynamic properties and phase diagrams are studied with respect to the natural parameters of both random components of the system contained in the probability density. The de Almeida-Thouless line is explored as a function of temperature, ρ, and the other system parameters. The entropy at zero temperature, as well as at non-zero temperatures, is partly negative or positive, acquiring positive branches as h0 increases.

  17. Probability density of the orbital angular momentum mode of Hankel-Bessel beams in an atmospheric turbulence.

    PubMed

    Zhu, Yu; Liu, Xiaojun; Gao, Jie; Zhang, Yixin; Zhao, Fengsheng

    2014-04-07

    We develop a novel model of the probability density of the orbital angular momentum (OAM) modes for Hankel-Bessel beams in a paraxial turbulence channel based on the Rytov approximation. The results show that there are multiple peaks of the mode probability density along the radial direction. The peak position of the mode probability density moves toward the beam center as the non-Kolmogorov turbulence parameters and the generalized refractive-index structure parameter increase, and as the OAM quantum number, propagation distance, and wavelength of the beams decrease. Additionally, a larger OAM quantum number and a smaller non-Kolmogorov turbulence parameter can be selected in order to obtain a larger mode probability density. The probability density of the OAM mode crosstalk increases as the quantum number deviation and the wavelength decrease. Because of the focusing properties of Hankel-Bessel beams in a turbulence channel, Hankel-Bessel beams are, compared with Laguerre-Gaussian beams, a good light source for weakening turbulence-induced spreading of the beams and mitigating the effects of turbulence on the probability density of the OAM mode.

  18. Translating CFC-based piston ages into probability density functions of ground-water age in karst

    USGS Publications Warehouse

    Long, A.J.; Putnam, L.D.

    2006-01-01

    Temporal age distributions are equivalent to probability density functions (PDFs) of transit time. The type and shape of a PDF provides important information related to ground-water mixing at the well or spring and the complex nature of flow networks in karst aquifers. Chlorofluorocarbon (CFC) concentrations measured for samples from 12 locations in the karstic Madison aquifer were used to evaluate the suitability of various PDF types for this aquifer. Parameters of PDFs could not be estimated within acceptable confidence intervals for any of the individual sites. Therefore, metrics derived from CFC-based apparent ages were used to evaluate results of PDF modeling in a more general approach. The ranges of these metrics were established as criteria against which families of PDFs could be evaluated for their applicability to different parts of the aquifer. Seven PDF types, including five unimodal and two bimodal models, were evaluated. Model results indicate that unimodal models may be applicable to areas close to conduits that have younger piston (i.e., apparent) ages and that bimodal models probably are applicable to areas farther from conduits that have older piston ages. The two components of a bimodal PDF are interpreted as representing conduit and diffuse flow, and transit times of as much as two decades may separate these PDF components. Areas near conduits may be dominated by conduit flow, whereas areas farther from conduits having bimodal distributions probably have good hydraulic connection to both diffuse and conduit flow. © 2006 Elsevier B.V. All rights reserved.

  19. Ellipsoidal Guaranteed Estimation Method for Satellite Collision Avoidance

    NASA Astrophysics Data System (ADS)

    Kim, Y.; Lee, J.; Ovseevich, A.

    2012-01-01

    The article presents a new guaranteed approach to determine the small region of deviations around the nominal Keplerian orbit position of an Earth-orbiting satellite caused by a set of acting external disturbing forces and by the initial conditions. Only very restricted information about the disturbances is assumed: their maximum values, with no assumptions about the probability density governing their distribution. The reachable set of satellite deviations is approximated by a state-vector ellipsoid whose components can include the satellite position and velocity. Mathematical equations that allow one to find this ellipsoid are developed on the basis of the linear Euler-Hill equations of satellite orbital motion. The approach can be applied to various problems of satellite collision avoidance with other satellites or space debris, as well as to establishing safe space traffic control norms. In particular, at CSA it is being considered for planning collision avoidance manoeuvres of the Earth observation satellite family RADARSAT, SCISAT and newly developed satellites. The general approach of ellipsoidal estimation was originally developed by the Russian academician F. Chernousko. The problem considered in the article was studied by his followers, some of whom participated in the development of the method together with its founder.

  20. Methods of estimating liner compression.

    PubMed

    Leonardi, S; Penry, J F; Tangorra, F M; Thompson, P D; Reinemann, D J

    2015-10-01

    The aim of this study was to compare 2 methods of measuring overpressure (OP) using a new test device designed to make OP measurements more quickly and accurately. Overpressure was measured with no pulsation (OP np) and with limited pulsation (OP lp) repeatedly on the same cow during a single milking. Each of the 6 liners (3 round liners and 3 triangular liners) used in this study was tested on the same 6 experimental cows. Both OP np and OP lp were measured on all 4 teats of each experimental cow twice for each liner. The order of OP np and OP lp alternated sequentially for each cow test. The OP results for the 6 liners were also compared with liner compression estimated for the same liners with a novel artificial teat sensor (ATS). The OP lp method showed small but significantly higher values than the OP np method (13.9 vs. 13.4 kPa). The OP lp method is recommended as the preferred method, as it more closely approximates normal milking conditions. Overpressure values decreased significantly between the first and the following measurements (from 15.0 to 12.4 kPa). We recommend performing the OP test at a consistent time, 1 min after attaching the teatcup to a well-stimulated teat, to reduce the variability produced by OP changing during the peak flow period. The new test device had several advantages over previously published methods of measuring OP. A high correlation between OP and liner compression estimated by the ATS was found, but difficulties were noted when using the ATS with triangular liners. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  1. Multi-Detection Events, Probability Density Functions, and Reduced Location Area

    SciTech Connect

    Eslinger, Paul W.; Schrom, Brian T.

    2016-03-01

    Several efforts have been made in the Comprehensive Nuclear-Test-Ban Treaty (CTBT) community to assess the benefits of combining detections of radionuclides to improve the location estimates available from atmospheric transport modeling (ATM) backtrack calculations. We present a Bayesian estimation approach rather than a simple dilution field of regard approach to allow xenon detections and non-detections to be combined mathematically. This system represents one possible probabilistic approach to radionuclide event formation. Application of this method to a recent interesting radionuclide event shows a substantial reduction in the location uncertainty of that event.

  2. Probability density functions of the stream flow discharge in linearized diffusion wave models

    NASA Astrophysics Data System (ADS)

    Chang, Ching-Min; Yeh, Hund-Der

    2016-12-01

    This article considers stream flow discharge moving through channels subject to lateral inflow, described by a linearized diffusion wave equation. The variability of the lateral inflow is manifested by random fluctuations in time, which are the only source of uncertainty in quantifying the flow discharge. The stochastic nature of the stream flow discharge is described by the probability density function (PDF) obtained using the theory of distributions. The PDF of the stream flow discharge depends on the hydraulic properties of the stream flow, such as the wave celerity and hydraulic diffusivity, as well as on the temporal correlation scale of the lateral inflow rate fluctuations. The focus of this analysis is placed on the influence of the temporal correlation scale and the wave celerity coefficient on the PDF of the flow discharge. The analysis demonstrates that a larger temporal correlation scale causes an increase in the PDF of the lateral inflow rate and, in turn, in the PDF of the flow discharge, which is also positively affected by the wave celerity coefficient.

  3. Vertical Overlap of Probability Density Functions of Cloud and Precipitation Hydrometeors

    NASA Astrophysics Data System (ADS)

    Ovchinnikov, M.; Lim, K. S. S.; Larson, V. E.; Wong, M.; Thayer-Calder, K.; Ghan, S. J.

    2016-12-01

    Coarse-resolution climate models increasingly rely on probability density functions (PDFs) to represent subgrid-scale variability of prognostic variables. While PDFs characterize the horizontal variability, a separate treatment is needed to account for the vertical structure of clouds and precipitation. When sub-columns are drawn from these PDFs for microphysics or radiation parameterizations, appropriate vertical correlations must be enforced via PDF overlap specifications. This study evaluates the representation of PDF overlap in the Subgrid Importance Latin Hypercube Sampler (SILHS) employed in the assumed-PDF turbulence and cloud scheme called the Cloud Layers Unified By Binormals (CLUBB). PDF overlap in CLUBB-SILHS simulations of continental and tropical oceanic deep convection is compared with the overlap of PDFs of various microphysics variables in cloud-resolving model (CRM) simulations of the same cases that explicitly predict the 3D structure of cloud and precipitation fields. CRM results show that PDF overlap varies significantly between different hydrometeor types, as well as between PDFs of mass and number mixing ratios for each species, a distinction that the current SILHS implementation does not make. Specifically, faster-falling species, such as rain and graupel, exhibit significantly higher vertical coherence in their distributions than slow-falling cloud liquid and ice. Development of a PDF overlap treatment linked to hydrometeor properties, such as fall speeds, in addition to the currently implemented dependency on the turbulent convective length scale, will be discussed.
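    As a generic illustration of why an overlap specification matters, the sketch below draws sub-columns from per-level gamma marginals through a Gaussian copula whose inter-level correlation decays over an assumed decorrelation length; a longer length (a stand-in for fast-falling rain) yields more vertically coherent sub-columns than a short one (a stand-in for cloud liquid). This is a hypothetical construction, not the SILHS implementation.

```python
import numpy as np
from scipy.stats import norm, gamma

# Draw sub-columns from per-level hydrometeor PDFs with a prescribed vertical
# overlap, using a Gaussian copula with an assumed decorrelation length.
n_levels, n_subcolumns = 40, 500
dz = 100.0                                    # layer thickness, m (illustrative)
z = np.arange(n_levels) * dz

def subcolumns(decorrelation_length, rng):
    corr = np.exp(-np.abs(z[:, None] - z[None, :]) / decorrelation_length)
    chol = np.linalg.cholesky(corr + 1e-10 * np.eye(n_levels))
    gauss = chol @ rng.standard_normal((n_levels, n_subcolumns))
    u = norm.cdf(gauss)                       # correlated uniforms, one row per level
    # Per-level marginal PDF of mixing ratio: gamma with a level-dependent mean.
    mean_profile = 1e-4 * np.exp(-((z - 2000.0) / 1000.0) ** 2) + 1e-6
    return gamma(a=2.0, scale=mean_profile[:, None] / 2.0).ppf(u)

rng = np.random.default_rng(5)
rain_like = subcolumns(decorrelation_length=2000.0, rng=rng)   # coherent columns
cloud_like = subcolumns(decorrelation_length=300.0, rng=rng)   # weak overlap
print(rain_like.shape, cloud_like.shape)
```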

  4. Application of the compound probability density function for characterization of breast masses in ultrasound B scans

    NASA Astrophysics Data System (ADS)

    Shankar, P. M.; Piccoli, C. W.; Reid, J. M.; Forsberg, F.; Goldberg, B. B.

    2005-05-01

    The compound probability density function (pdf) is investigated for the ability of its parameters to classify masses in ultrasonic B scan breast images. Results of 198 images (29 malignant and 70 benign cases and two images per case) are reported and compared to the classification performance reported by us earlier in this journal. A new parameter, the speckle factor, calculated from the parameters of the compound pdf was explored to separate benign and malignant masses. The receiver operating characteristic curve for the parameter resulted in an Az value of 0.852. This parameter was combined with one of the parameters from our previous work, namely the ratio of the K distribution parameter at the site and away from the site. This combined parameter resulted in an Az value of 0.955. In conclusion, the parameters of the K distribution and the compound pdf may be useful in the classification of breast masses. These parameters can be calculated in an automated fashion. It should be possible to combine the results of the ultrasonic image analysis with those of traditional mammography, thereby increasing the accuracy of breast cancer diagnosis.

  5. Probability density function model equation for particle charging in a homogeneous dusty plasma.

    PubMed

    Pandya, R V; Mashayek, F

    2001-09-01

    In this paper, we use the direct interaction approximation (DIA) to obtain an approximate integrodifferential equation for the probability density function (PDF) of charge (q) on dust particles in homogeneous dusty plasma. The DIA is used to solve the closure problem which appears in the PDF equation due to the interactions between the phase space density of plasma particles and the phase space density of dust particles. The equation simplifies to a differential form under the condition when the fluctuations in phase space density for plasma particles change very rapidly in time and is correlated for very short times. The result is a Fokker-Planck type equation with extra terms having third and fourth order differentials in q, which account for the discrete nature of distribution of plasma particles and the interaction between fluctuations. Approximate macroscopic equations for the time evolution of the average charge and the higher order moments of the fluctuations in charge on the dust particles are obtained from the differential PDF equation. These equations are computed, in the case of a Maxwellian plasma, to show the effect of density fluctuations of plasma particles on the statistics of dust charge.

  6. Scale-wise evolution of rainfall probability density functions fingerprints the rainfall generation mechanism

    NASA Astrophysics Data System (ADS)

    Molini, Annalisa; Katul, Gabriel; Porporato, Amilcare

    2010-05-01

    Possible linkages between climatic fluctuations in rainfall at low frequencies and local intensity fluctuations within single storms are now receiving significant attention in climate change research. To progress on a narrower scope of this problem, the cross-scale probabilistic structure of rainfall intensity records collected over time scales ranging from hours to decades at sites dominated by either convective or frontal systems is investigated. Across these sites, intermittency buildup from slow to fast time scales is analyzed in terms of its heavy-tailed and asymmetric signatures in the scale-wise evolution of rainfall probability density functions (pdfs). The analysis demonstrates that rainfall records dominated by convective storms develop heavier-tailed power-law pdfs across finer scales when compared with their frontal-system counterparts. A concomitant marked asymmetry buildup also emerges across finer time scales, necessitating skewed probability laws for quantifying the scale-wise evolution of rainfall pdfs. A scale-dependent probabilistic description of such fat-tail, peakedness and asymmetry appearance is proposed and tested by using a modified q-Gaussian model, able to describe the scale-wise evolution of rainfall pdfs in terms of the nonextensivity parameter q, a lacunarity (intermittency) correction γ and a tail asymmetry coefficient c, also functions of q.

  7. On the evolution of the density probability density function in strongly self-gravitating systems

    SciTech Connect

    Girichidis, Philipp; Konstandin, Lukas; Klessen, Ralf S.; Whitworth, Anthony P.

    2014-02-01

    The time evolution of the probability density function (PDF) of the mass density is formulated and solved for systems in free-fall using a simple approximate function for the collapse of a sphere. We demonstrate that a pressure-free collapse results in a power-law tail on the high-density side of the PDF. The slope quickly asymptotes to the functional form P_V(ρ) ∝ ρ^(-1.54) for the (volume-weighted) PDF and P_M(ρ) ∝ ρ^(-0.54) for the corresponding mass-weighted distribution. From the simple approximation of the PDF we derive analytic descriptions for mass accretion, finding that dynamically quiet systems with narrow density PDFs lead to retarded star formation and low star formation rates (SFRs). Conversely, strong turbulent motions that broaden the PDF accelerate the collapse causing a bursting mode of star formation. Finally, we compare our theoretical work with observations. The measured SFRs are consistent with our model during the early phases of the collapse. Comparison of observed column density PDFs with those derived from our model suggests that observed star-forming cores are roughly in free-fall.

  8. On the Evolution of the Density Probability Density Function in Strongly Self-gravitating Systems

    NASA Astrophysics Data System (ADS)

    Girichidis, Philipp; Konstandin, Lukas; Whitworth, Anthony P.; Klessen, Ralf S.

    2014-02-01

    The time evolution of the probability density function (PDF) of the mass density is formulated and solved for systems in free-fall using a simple approximate function for the collapse of a sphere. We demonstrate that a pressure-free collapse results in a power-law tail on the high-density side of the PDF. The slope quickly asymptotes to the functional form P_V(ρ) ∝ ρ^(-1.54) for the (volume-weighted) PDF and P_M(ρ) ∝ ρ^(-0.54) for the corresponding mass-weighted distribution. From the simple approximation of the PDF we derive analytic descriptions for mass accretion, finding that dynamically quiet systems with narrow density PDFs lead to retarded star formation and low star formation rates (SFRs). Conversely, strong turbulent motions that broaden the PDF accelerate the collapse causing a bursting mode of star formation. Finally, we compare our theoretical work with observations. The measured SFRs are consistent with our model during the early phases of the collapse. Comparison of observed column density PDFs with those derived from our model suggests that observed star-forming cores are roughly in free-fall.

  9. Probability density function of the intensity of a laser beam propagating in the maritime environment.

    PubMed

    Korotkova, Olga; Avramov-Zamurovic, Svetlana; Malek-Madani, Reza; Nelson, Charles

    2011-10-10

    A number of field experiments measuring the fluctuating intensity of a laser beam propagating along horizontal paths in the maritime environment were performed over sub-kilometer distances at the United States Naval Academy. Both above-ground and over-water links are explored. Two different detection schemes, one photographing the beam on a white board, and the other capturing the beam directly using a CCD sensor, gave consistent results. The probability density function (pdf) of the fluctuating intensity is reconstructed with the help of two theoretical models, the Gamma-Gamma and the Gamma-Laguerre, and compared with the intensity histograms. It is found that the on-ground experimental results are in good agreement with theoretical predictions. The results obtained over the water paths show appreciable discrepancies, especially in the case of the Gamma-Gamma model. These discrepancies are attributed to the presence of various scatterers along the path of the beam, such as water droplets, aerosols and other airborne particles. Our paper's main contribution is to provide a methodology for computing the pdf of the laser beam intensity in the maritime environment using field measurements.
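    For reference, the unit-mean Gamma-Gamma irradiance PDF commonly used in this context can be checked against samples generated as the product of two independent unit-mean gamma variates, which is exactly Gamma-Gamma distributed; the α and β values below are placeholders rather than fitted turbulence parameters.

```python
import numpy as np
from scipy.special import kv, gammaln

def gamma_gamma_pdf(I, alpha, beta):
    """Unit-mean Gamma-Gamma irradiance PDF commonly used for FSO scintillation."""
    lognorm = ((alpha + beta) / 2.0) * np.log(alpha * beta) \
        - gammaln(alpha) - gammaln(beta)
    return 2.0 * np.exp(lognorm) * I ** ((alpha + beta) / 2.0 - 1.0) \
        * kv(alpha - beta, 2.0 * np.sqrt(alpha * beta * I))

# Synthetic irradiance: product of two independent unit-mean gamma variates
# (small-scale and large-scale fluctuations); alpha, beta are placeholder values.
alpha, beta = 4.0, 2.5
rng = np.random.default_rng(9)
I = rng.gamma(alpha, 1.0 / alpha, 100_000) * rng.gamma(beta, 1.0 / beta, 100_000)

hist, edges = np.histogram(I, bins=80, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print(np.max(np.abs(hist - gamma_gamma_pdf(centers, alpha, beta))))  # rough check
```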

  10. Homogeneous clusters over India using probability density function of daily rainfall

    NASA Astrophysics Data System (ADS)

    Kulkarni, Ashwini

    2017-07-01

    The Indian landmass has been divided into homogeneous clusters by applying cluster analysis to the probability density function of a century-long time series of daily summer monsoon (June through September) rainfall at 357 grids over India, each of approximately 100 km × 100 km. The analysis gives five clusters over the Indian landmass; only cluster 5 turns out to be a contiguous region, while all other clusters are dispersed, which confirms the erratic behavior of daily rainfall over India. The area-averaged seasonal rainfall over cluster 5 has a very strong relationship with Indian summer monsoon rainfall; also, the rainfall variability over this region is modulated by the most important mode of the climate system, i.e., the El Nino Southern Oscillation (ENSO). This cluster could therefore be considered representative of the entire Indian landmass when examining monsoon variability. The two-sample Kolmogorov-Smirnov test supports that the cumulative distribution functions of daily rainfall over cluster 5 and over India as a whole do not differ significantly. The clustering algorithm is also applied to the two epochs 1901-1975 and 1976-2010 to examine possible changes in the clusters in the recent warming period. The clusters are drastically different in the two periods; they are more dispersed in the recent period, implying a more erratic distribution of daily rainfall.

  11. Vertical overlap of probability density functions of cloud and precipitation hydrometeors: CLOUD AND PRECIPITATION PDF OVERLAP

    SciTech Connect

    Ovchinnikov, Mikhail; Lim, Kyo-Sun Sunny; Larson, Vincent E.; Wong, May; Thayer-Calder, Katherine; Ghan, Steven J.

    2016-11-05

    Coarse-resolution climate models increasingly rely on probability density functions (PDFs) to represent subgrid-scale variability of prognostic variables. While PDFs characterize the horizontal variability, a separate treatment is needed to account for the vertical structure of clouds and precipitation. When sub-columns are drawn from these PDFs for microphysics or radiation parameterizations, appropriate vertical correlations must be enforced via PDF overlap specifications. This study evaluates the representation of PDF overlap in the Subgrid Importance Latin Hypercube Sampler (SILHS) employed in the assumed PDF turbulence and cloud scheme called the Cloud Layers Unified By Binormals (CLUBB). PDF overlap in CLUBB-SILHS simulations of continental and tropical oceanic deep convection is compared with overlap of PDF of various microphysics variables in cloud-resolving model (CRM) simulations of the same cases that explicitly predict the 3D structure of cloud and precipitation fields. CRM results show that PDF overlap varies significantly between different hydrometeor types, as well as between PDFs of mass and number mixing ratios for each species, a distinction that the current SILHS implementation does not make. In CRM simulations that explicitly resolve cloud and precipitation structures, faster falling species, such as rain and graupel, exhibit significantly higher coherence in their vertical distributions than slow falling cloud liquid and ice. These results suggest that to improve the overlap treatment in the sub-column generator, the PDF correlations need to depend on hydrometeor properties, such as fall speeds, in addition to the currently implemented dependency on the turbulent convective length scale.

  12. Vertical overlap of probability density functions of cloud and precipitation hydrometeors

    NASA Astrophysics Data System (ADS)

    Ovchinnikov, Mikhail; Lim, Kyo-Sun Sunny; Larson, Vincent E.; Wong, May; Thayer-Calder, Katherine; Ghan, Steven J.

    2016-11-01

    Coarse-resolution climate models increasingly rely on probability density functions (PDFs) to represent subgrid-scale variability of prognostic variables. While PDFs characterize the horizontal variability, a separate treatment is needed to account for the vertical structure of clouds and precipitation. When subcolumns are drawn from these PDFs for microphysics or radiation parameterizations, appropriate vertical correlations must be enforced via PDF overlap specifications. This study evaluates the representation of PDF overlap in the Subgrid Importance Latin Hypercube Sampler (SILHS) employed in the assumed PDF turbulence and cloud scheme called the Cloud Layers Unified by Binormals (CLUBB). PDF overlap in CLUBB-SILHS simulations of continental and tropical oceanic deep convection is compared with overlap of PDF of various microphysics variables in cloud-resolving model (CRM) simulations of the same cases that explicitly predict the 3-D structure of cloud and precipitation fields. CRM results show that PDF overlap varies significantly between different hydrometeor types, as well as between PDFs of mass and number mixing ratios for each species—a distinction that the current SILHS implementation does not make. In CRM simulations that explicitly resolve cloud and precipitation structures, faster falling species, such as rain and graupel, exhibit significantly higher coherence in their vertical distributions than slow falling cloud liquid and ice. These results suggest that to improve the overlap treatment in the subcolumn generator, the PDF correlations need to depend on hydrometeor properties, such as fall speeds, in addition to the currently implemented dependency on the turbulent convective length scale.

  13. Modeling of aerosol formation in a turbulent jet with the transported population balance equation-probability density function approach

    NASA Astrophysics Data System (ADS)

    Di Veroli, G. Y.; Rigopoulos, S.

    2011-04-01

    Processes involving particle formation in turbulent flows feature complex interactions between turbulence and the various physicochemical processes involved. An example of such a process is aerosol formation in a turbulent jet, a process investigated experimentally by Lesniewski and Friedlander [Proc. R. Soc. London, Ser. A 454, 2477 (1998)]. Polydispersed particle formation can be described mathematically by a population balance (also called general dynamic) equation, but its formulation and use within a turbulent flow are riddled with problems, as straightforward averaging results in unknown correlations. In this paper we employ a probability density function formalism in conjunction with the population balance equation (the PBE-PDF method) to simulate and study the experiments of Lesniewski and Friedlander. The approach allows studying the effects of turbulence-particle formation interaction, as well as the prediction of the particle size distribution and the incorporation of kinetics of arbitrary complexity in the population balance equation. It is found that turbulence critically affects the first stages of the process, while it seems to have a secondary effect downstream. While Lesniewski and Friedlander argued that the bulk of the nucleation arises in the initial mixing layer, our results indicate that most of the particles nucleate downstream. The full particle size distributions are obtained via our method and can be compared to the experimental results showing good agreement. The sources of uncertainties in the experiments and the kinetic expressions are analyzed, and the underlying mechanisms that affect the evolution of particle size distribution are discussed.

  14. On recontamination and directional-bias problems in Monte Carlo simulation of PDF turbulence models. [probability density function

    NASA Technical Reports Server (NTRS)

    Hsu, Andrew T.

    1992-01-01

    Turbulent combustion cannot be simulated adequately by conventional moment closure turbulence models. The probability density function (PDF) method offers an attractive alternative: in a PDF model, the chemical source terms are closed and do not require additional models. Because the number of computational operations grows only linearly in the Monte Carlo scheme, it is chosen over finite differencing schemes. A grid dependent Monte Carlo scheme following J.Y. Chen and W. Kollmann has been studied in the present work. It was found that in order to conserve the mass fractions absolutely, one needs to add further restrictions to the scheme, namely α_j + γ_j = α_(j-1) + γ_(j+1). A new algorithm was devised that satisfied this restriction in the case of pure diffusion or uniform flow problems. Using examples, it is shown that absolute conservation can be achieved. Although for non-uniform flows absolute conservation seems impossible, the present scheme has reduced the error considerably.

  15. On the probability density function and characteristic function moments of image steganalysis in the log prediction error wavelet subband

    NASA Astrophysics Data System (ADS)

    Bao, Zhenkun; Li, Xiaolong; Luo, Xiangyang

    2017-01-01

    Extracting informative statistic features is the most essential technical issue of steganalysis. Among various steganalysis methods, probability density function (PDF) and characteristic function (CF) moments are two important types of features due to the excellent ability for distinguishing the cover images from the stego ones. The two types of features are quite similar in definition. The only difference is that the PDF moments are computed in the spatial domain, while the CF moments are computed in the Fourier-transformed domain. Then, the comparison between PDF and CF moments is an interesting question of steganalysis. Several theoretical results have been derived, and CF moments are proved better than PDF moments in some cases. However, in the log prediction error wavelet subband of wavelet decomposition, some experiments show that the result is opposite and lacks a rigorous explanation. To solve this problem, a comparison result based on the rigorous proof is presented: the first-order PDF moment is proved better than the CF moment, while the second-order CF moment is better than the PDF moment. It tries to open the theoretical discussion on steganalysis and the question of finding suitable statistical features.
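
    The two feature families compared above can be computed directly from a subband histogram. The sketch below follows one common steganalysis convention (an assumption, not necessarily the exact definitions used in the paper): the n-th PDF moment is an absolute moment of the normalized histogram, and the n-th CF moment is a frequency-weighted moment of the histogram's DFT magnitude.

```python
import numpy as np

def pdf_and_cf_moments(coefficients, order, n_bins=256):
    """PDF and CF moments of a (wavelet-subband) coefficient histogram.

    Convention assumed here: the n-th PDF moment is the n-th absolute moment
    of the normalized histogram; the n-th CF moment is the frequency-weighted
    moment of the histogram's DFT magnitude (empirical characteristic function).
    """
    hist, edges = np.histogram(coefficients, bins=n_bins)
    hist = hist / hist.sum()                      # normalized histogram ~ PDF
    centers = 0.5 * (edges[1:] + edges[:-1])

    pdf_moment = np.sum(np.abs(centers) ** order * hist)

    cf = np.abs(np.fft.rfft(hist))                # characteristic-function magnitude
    freqs = np.arange(1, cf.size)                 # skip the DC component
    cf_moment = np.sum(freqs ** order * cf[1:]) / np.sum(cf[1:])
    return pdf_moment, cf_moment

# Example with synthetic, Laplacian-like prediction-error coefficients
# (placeholder data, not image features from the paper).
rng = np.random.default_rng(2)
coeffs = rng.laplace(scale=2.0, size=50_000)
print(pdf_and_cf_moments(coeffs, order=1))
print(pdf_and_cf_moments(coeffs, order=2))
```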

  16. Nonparametric estimation of plant density by the distance method

    USGS Publications Warehouse

    Patil, S.A.; Burnham, K.P.; Kovner, J.L.

    1979-01-01

    A relation between the plant density and the probability density function of the nearest neighbor distance (squared) from a random point is established under fairly broad conditions. Based upon this relationship, a nonparametric estimator for the plant density is developed and presented in terms of order statistics. Consistency and asymptotic normality of the estimator are discussed. An interval estimator for the density is obtained. The modifications of this estimator and its variance are given when the distribution is truncated. Simulation results are presented for regular, random and aggregated populations to illustrate the nonparametric estimator and its variance. A numerical example from field data is given. Merits and deficiencies of the estimator are discussed with regard to its robustness and variance.
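
    For context, the classical parametric counterpart of such distance-method estimators is easy to state: under complete spatial randomness, π times the squared point-to-nearest-plant distance is exponentially distributed with mean 1/λ. The sketch below implements that benchmark estimator (not the paper's nonparametric order-statistics estimator) and checks it on a simulated random population.

```python
import numpy as np

def density_from_point_distances(nn_distances_sq):
    """Classical distance-method estimate of plant density.

    Under complete spatial randomness, pi * R^2 (R = distance from a random
    sample point to the nearest plant) is exponential with mean 1/density,
    which gives the unbiased estimator below.  This is a parametric
    benchmark, not the paper's nonparametric estimator.
    """
    d2 = np.asarray(nn_distances_sq, dtype=float)
    n = d2.size
    return (n - 1) / (np.pi * d2.sum())

# Synthetic check: a Poisson pattern of known density sampled at random points.
rng = np.random.default_rng(3)
true_density, side = 2.0, 50.0
plants = rng.uniform(0, side, size=(rng.poisson(true_density * side**2), 2))
sample_points = rng.uniform(5, side - 5, size=(200, 2))   # avoid edge effects
d2 = ((sample_points[:, None, :] - plants[None, :, :]) ** 2).sum(-1).min(axis=1)
print(density_from_point_distances(d2))   # should be close to 2.0
```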

  17. The probability density function in molecular gas in the G333 and Vela C molecular clouds

    NASA Astrophysics Data System (ADS)

    Cunningham, Maria

    2015-08-01

    The probability density function (PDF) is a simple analytical tool for determining the hierarchical spatial structure of molecular clouds. It has been used frequently in recent years with dust continuum emission, such as that from the Herschel space telescope and ALMA. These dust column density PDFs universally show a log-normal distribution in low column density gas, characteristic of unbound turbulent gas, and a power-law tail at high column densities, indicating the presence of gravitationally bound gas. We have recently conducted a PDF analysis of the molecular gas in the G333 and Vela C giant molecular cloud complexes, using transitions of CO, HCN, HNC, HCO+ and N2H+. The results show that CO and its isotopologues trace mostly the log-normal part of the PDF, while HCN and HCO+ trace both a log-normal part and a power-law part of the distribution. On the other hand, HNC and N2H+ mostly trace only the power-law tail. The difference between the PDFs of HCN and HNC is surprising, as is the similarity between the HNC and N2H+ PDFs. The most likely explanation for the similar distributions of HNC and N2H+ is that N2H+ is known to be enhanced in cool gas below 20 K, where CO is depleted, while the reaction that forms HNC or HCN favours the former at similarly low temperatures. The lack of evidence for a power-law tail in 13CO and C18O, in conjunction with the results for the N2H+ PDF, suggests that depletion of CO in the dense cores of these molecular clouds is significant. In conclusion, the PDF has proved to be a surprisingly useful tool for investigating not only the spatial distribution of molecular gas, but also the wide-scale chemistry of molecular clouds.

  18. Annular wave packets at Dirac points in graphene and their probability-density oscillation.

    PubMed

    Luo, Ji; Valencia, Daniel; Lu, Junqiang

    2011-12-14

    Wave packets in graphene whose central wave vector is at Dirac points are investigated by numerical calculations. Starting from an initial Gaussian function, these wave packets form into annular peaks that propagate in all directions like ripple-rings on a water surface. At the beginning, electronic probability alternates between the central peak and the ripple-rings and transient oscillation occurs at the center. As time increases, the ripple-rings propagate at the fixed Fermi speed, and their widths remain unchanged. The axial symmetry of the energy dispersion leads to the circular symmetry of the wave packets. The fixed speed and widths, however, are attributed to the linearity of the energy dispersion. Interference between states that, respectively, belong to two branches of the energy dispersion leads to multiple ripple-rings and the probability-density oscillation. In a magnetic field, annular wave packets become confined and no longer propagate to infinity. If the initial Gaussian width differs greatly from the magnetic length, expanding and shrinking ripple-rings form and disappear alternately in a limited spread, and the wave packet resumes the Gaussian form frequently. The probability thus oscillates persistently between the central peak and the ripple-rings. If the initial Gaussian width is close to the magnetic length, the wave packet retains the Gaussian form and its height and width oscillate with a period determined by the first Landau energy. The wave-packet evolution is determined jointly by the initial state and the magnetic field, through the electronic structure of graphene in a magnetic field.

  19. Probability density functions for radial anisotropy: implications for the upper 1200 km of the mantle

    NASA Astrophysics Data System (ADS)

    Beghein, Caroline; Trampert, Jeannot

    2004-01-01

    The presence of radial anisotropy in the upper mantle, transition zone and top of the lower mantle is investigated by applying a model space search technique to Rayleigh and Love wave phase velocity models. Probability density functions are obtained independently for S-wave anisotropy, P-wave anisotropy, intermediate parameter η, Vp, Vs and density anomalies. The likelihoods for P-wave and S-wave anisotropy beneath continents cannot be explained by a dry olivine-rich upper mantle at depths larger than 220 km. Indeed, while shear-wave anisotropy tends to disappear below 220 km depth in continental areas, P-wave anisotropy is still present but its sign changes compared to the uppermost mantle. This could be due to an increase with depth of the amount of pyroxene relative to olivine in these regions, although the presence of water, partial melt or a change in the deformation mechanism cannot be ruled out as yet. A similar observation is made for old oceans, but not for young ones where V_SH > V_SV appears likely down to 670 km depth and V_PH > V_PV down to 400 km depth. The change of sign in P-wave anisotropy seems to be qualitatively correlated with the presence of the Lehmann discontinuity, generally observed beneath continents and some oceans but not beneath ridges. Parameter η shows a similar age-related depth pattern as shear-wave anisotropy in the uppermost mantle and it undergoes the same change of sign as P-wave anisotropy at 220 km depth. The ratio between d ln V_S and d ln V_P suggests that a chemical component is needed to explain the anomalies in most places at depths greater than 220 km. More tests are needed to infer the robustness of the results for density, but they do not affect the results for anisotropy.

  20. Spatial correlations and probability density function of the phase difference in a developed speckle-field: numerical and natural experiments

    NASA Astrophysics Data System (ADS)

    Mysina, N. Yu; Maksimova, L. A.; Gorbatenko, B. B.; Ryabukho, V. P.

    2015-10-01

    Investigated are statistical properties of the phase difference of oscillations in speckle-fields at two points in the far-field diffraction region, with different shapes of the scatterer aperture. Statistical and spatial nonuniformity of the probability density function of the field phase difference is established. Numerical experiments show that, for the speckle-fields with an oscillating alternating-sign transverse correlation function, a significant nonuniformity of the probability density function of the phase difference in the correlation region of the field complex amplitude, with the most probable values 0 and π, is observed. A natural statistical interference experiment using Young diagrams has confirmed the results of numerical experiments.

  1. EUPDF: Eulerian Monte Carlo Probability Density Function Solver for Applications With Parallel Computing, Unstructured Grids, and Sprays

    NASA Technical Reports Server (NTRS)

    Raju, M. S.

    1998-01-01

    The success of any solution methodology used in the study of gas-turbine combustor flows depends a great deal on how well it can model the various complex and rate controlling processes associated with the spray's turbulent transport, mixing, chemical kinetics, evaporation, and spreading rates, as well as convective and radiative heat transfer and other phenomena. The phenomena to be modeled, which are controlled by these processes, often strongly interact with each other at different times and locations. In particular, turbulence plays an important role in determining the rates of mass and heat transfer, chemical reactions, and evaporation in many practical combustion devices. The influence of turbulence in a diffusion flame manifests itself in several forms, ranging from the so-called wrinkled, or stretched, flamelets regime to the distributed combustion regime, depending upon how turbulence interacts with various flame scales. Conventional turbulence models have difficulty treating highly nonlinear reaction rates. A solution procedure based on the composition joint probability density function (PDF) approach holds the promise of modeling various important combustion phenomena relevant to practical combustion devices (such as extinction, blowoff limits, and emissions predictions) because it can account for nonlinear chemical reaction rates without making approximations. In an attempt to advance the state-of-the-art in multidimensional numerical methods, we at the NASA Lewis Research Center extended our previous work on the PDF method to unstructured grids, parallel computing, and sprays. EUPDF, which was developed by M.S. Raju of Nyma, Inc., was designed to be massively parallel and could easily be coupled with any existing gas-phase and/or spray solvers. EUPDF can use an unstructured mesh with mixed triangular, quadrilateral, and/or tetrahedral elements. The application of the PDF method showed favorable results when applied to several supersonic

  3. A method for estimating proportions

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.; Marion, B. P.

    1975-01-01

    A proportion estimation procedure is presented which requires only one set of ground truth data for determining the error matrix. The error matrix is then used to determine an unbiased estimate. The error matrix is shown to be directly related to the probability of misclassification, and becomes more diagonally dominant as the number of passes used increases.
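
    A minimal sketch of the idea follows (with an illustrative two-class error matrix; the numbers are assumptions, not values from the paper): if E[i, j] is the probability that a pixel of true class j is classified as class i, the expected classified proportions satisfy q = E p, so the bias-corrected proportion estimate follows from solving that linear system.

```python
import numpy as np

# Error (confusion-probability) matrix estimated from one set of ground
# truth: entry [i, j] = P(classified as i | true class j).  Columns sum to 1.
E = np.array([[0.90, 0.15],
              [0.10, 0.85]])

# Observed proportions of classified pixels in the full (unlabeled) scene.
q_observed = np.array([0.62, 0.38])

# Since q = E @ p_true for the expected classified proportions, correcting
# the classification bias amounts to solving the linear system for p_true.
p_estimated = np.linalg.solve(E, q_observed)
print(p_estimated)          # bias-corrected class proportions
```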

  4. Kinetic and dynamic probability-density-function descriptions of disperse turbulent two-phase flows.

    PubMed

    Minier, Jean-Pierre; Profeta, Christophe

    2015-11-01

    This article analyzes the status of two classical one-particle probability density function (PDF) descriptions of the dynamics of discrete particles dispersed in turbulent flows. The first PDF formulation considers only the process made up by particle position and velocity Z_p = (x_p, U_p) and is represented by its PDF p(t; y_p, V_p) which is the solution of a kinetic PDF equation obtained through a flux closure based on the Furutsu-Novikov theorem. The second PDF formulation includes fluid variables into the particle state vector, for example, the fluid velocity seen by particles Z_p = (x_p, U_p, U_s), and, consequently, handles an extended PDF p(t; y_p, V_p, V_s) which is the solution of a dynamic PDF equation. For high-Reynolds-number fluid flows, a typical formulation of the latter category relies on a Langevin model for the trajectories of the fluid seen or, conversely, on a Fokker-Planck equation for the extended PDF. In the present work, a new derivation of the kinetic PDF equation is worked out and new physical expressions of the dispersion tensors entering the kinetic PDF equation are obtained by starting from the extended PDF and integrating over the fluid seen. This demonstrates that, under the same assumption of a Gaussian colored noise and irrespective of the specific stochastic model chosen for the fluid seen, the kinetic PDF description is the marginal of a dynamic PDF one. However, a detailed analysis reveals that kinetic PDF models of particle dynamics in turbulent flows described by statistical correlations constitute incomplete stand-alone PDF descriptions and, moreover, that present kinetic-PDF equations are mathematically ill posed. This is shown to be the consequence of the non-Markovian characteristic of the stochastic process retained to describe the system and the use of an external colored noise. Furthermore, developments bring out that well-posed PDF descriptions are essentially due to a proper choice of the variables selected to

  5. Extracting gridded probability density functions for precipitation intensity from point measurements

    NASA Astrophysics Data System (ADS)

    Haerter, Jan; Eggert, Bastian; Moseley, Christopher; Piani, Claudio; Berg, Peter

    2016-04-01

    A common complication arising in comparisons of modeled data, e.g. from regional climate models or re-analysis, to measurements, e.g. rain gauge data collected at a single position, is that the resolutions do not match. As a result, a direct comparison of the probability density functions of precipitation rates is not possible, since the gridded data represent an average over an area and a time interval while the point data represent only a temporal average. The spatial resolution of the point data can be considered "infinitely high". This especially constitutes an obstacle in statistical downscaling approaches such as statistical bias correction, or the proper assessment of extremes as computed by climate models. It is well known from the Taylor hypothesis that considerable spatio-temporal information about a dynamical process, such as the eddies of the atmospheric flow, is already contained in a point measurement. Applying the Taylor hypothesis to the statistical distribution functions of precipitation intensity, we show that a gridded spatio-temporal process can be approximated very well by the zero-dimensional analog, i.e. the statistics at a single point. All that needs to be done is to use a coarser temporal resolution for the point time series when comparing to the gridded data; much better results are achieved when the point data are coarsened in this way. The remaining question is how to extract the proper "scale-adapted" temporal resolution in practice, when only an observed point time series and some gridded model data sets are available. We show that this is indeed possible by use of the model alone. Indeed, we find that models which misrepresent precipitation intensity still serve well in producing proper scale-adaptation, i.e. the model is sufficient in representing larger-scale atmospheric dynamics even though precipitation formation is misrepresented. Our results may have relevance to improved statistical downscaling as well as the
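
    A minimal sketch of the scale-adaptation step follows. It coarsens the point time series over candidate averaging windows and keeps the window whose wet-interval intensity distribution is closest to that of the gridded data; the Kolmogorov-Smirnov distance used as the matching criterion here is an assumption, not necessarily the measure used by the authors, and the synthetic series are placeholders.

```python
import numpy as np
from scipy import stats

def coarsen(series, window):
    """Average a 1-D time series over non-overlapping windows."""
    n = (series.size // window) * window
    return series[:n].reshape(-1, window).mean(axis=1)

def best_window(point_series, gridded_series, windows):
    """Pick the coarsening window whose intensity PDF best matches the grid.

    The KS distance between the wet-interval distributions is the matching
    criterion here (an assumption; other distances would work as well).
    """
    scores = []
    for w in windows:
        coarse = coarsen(point_series, w)
        d, _ = stats.ks_2samp(coarse[coarse > 0], gridded_series[gridded_series > 0])
        scores.append(d)
    return windows[int(np.argmin(scores))]

# Toy demonstration: a "gridded" series built from 3-step averages of the
# point series should be matched best by a window near 3.
rng = np.random.default_rng(13)
point = rng.gamma(0.3, 2.0, 8 * 365 * 24)                  # hourly point intensities
grid = coarsen(point, 3) * rng.uniform(0.8, 1.2, point.size // 3)
print(best_window(point, grid, [1, 2, 3, 6, 12]))
```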

  6. Solution of a torsional Schrödinger equation with a periodic potential of general form. The probability amplitude and probability density

    NASA Astrophysics Data System (ADS)

    Turovtsev, V. V.; Orlov, M. Yu.; Orlov, Yu. D.

    2017-08-01

    Analytic expressions for the probability density of states of a molecule with internal rotation and the probability of finding the state in the potential well are derived for the first time. Two methods are proposed for assigning conformers to potential wells. A quantitative measure of localization and delocalization of a state in the potential well is introduced. The rotational symmetry number is generalized to the case of asymmetric rotation. On the basis of the localization criterion, a model is developed for calculating the internal rotation contribution to thermodynamic properties of individual conformers with low rotational barriers and/or at a high temperature.

  7. Applications of the line-of-response probability density function resolution model in PET list mode reconstruction

    PubMed Central

    Jian, Y; Yao, R; Mulnix, T; Jin, X; Carson, R E

    2016-01-01

    Resolution degradation in PET image reconstruction can be caused by inaccurate modeling of the physical factors in the acquisition process. Resolution modeling (RM) is a common technique that takes into account the resolution degrading factors in the system matrix. Our previous work has introduced a probability density function (PDF) method of deriving the resolution kernels from Monte Carlo simulation and parameterizing the LORs to reduce the number of kernels needed for image reconstruction. In addition, LOR-PDF allows different PDFs to be applied to LORs from different crystal layer pairs of the HRRT. In this study, a thorough test was performed with this new model (LOR-PDF) applied to two PET scanners - the HRRT and Focus-220. A more uniform resolution distribution was observed in point source reconstructions by replacing the spatially-invariant kernels with the spatially-variant LOR-PDF. Specifically, from the center to the edge of radial field of view (FOV) of the HRRT, the measured in-plane FWHMs of point sources in a warm background varied slightly from 1.7 mm to 1.9 mm in LOR-PDF reconstructions. In Minihot and contrast phantom reconstructions, LOR-PDF resulted in up to 9% higher contrast at any given noise level than image-space resolution model. LOR-PDF also has the advantage in performing crystal-layer-dependent resolution modeling. The contrast improvement by using LOR-PDF was verified statistically by replicate reconstructions. In addition, [11C]AFM rats imaged on the HRRT and [11C]PHNO rats imaged on the Focus-220 were utilized to demonstrate the advantage of the new model. Higher contrast between high-uptake regions of only a few millimeter diameter and the background was observed in LOR-PDF reconstruction than in other methods. PMID:25490063

  8. Applications of the line-of-response probability density function resolution model in PET list mode reconstruction.

    PubMed

    Jian, Y; Yao, R; Mulnix, T; Jin, X; Carson, R E

    2015-01-07

    Resolution degradation in PET image reconstruction can be caused by inaccurate modeling of the physical factors in the acquisition process. Resolution modeling (RM) is a common technique that takes into account the resolution degrading factors in the system matrix. Our previous work has introduced a probability density function (PDF) method of deriving the resolution kernels from Monte Carlo simulation and parameterizing the LORs to reduce the number of kernels needed for image reconstruction. In addition, LOR-PDF allows different PDFs to be applied to LORs from different crystal layer pairs of the HRRT. In this study, a thorough test was performed with this new model (LOR-PDF) applied to two PET scanners-the HRRT and Focus-220. A more uniform resolution distribution was observed in point source reconstructions by replacing the spatially-invariant kernels with the spatially-variant LOR-PDF. Specifically, from the center to the edge of radial field of view (FOV) of the HRRT, the measured in-plane FWHMs of point sources in a warm background varied slightly from 1.7 mm to 1.9 mm in LOR-PDF reconstructions. In Minihot and contrast phantom reconstructions, LOR-PDF resulted in up to 9% higher contrast at any given noise level than image-space resolution model. LOR-PDF also has the advantage in performing crystal-layer-dependent resolution modeling. The contrast improvement by using LOR-PDF was verified statistically by replicate reconstructions. In addition, [(11)C]AFM rats imaged on the HRRT and [(11)C]PHNO rats imaged on the Focus-220 were utilized to demonstrate the advantage of the new model. Higher contrast between high-uptake regions of only a few millimeter diameter and the background was observed in LOR-PDF reconstruction than in other methods.

  9. Nonlinear Regression Methods for Estimation

    DTIC Science & Technology

    2005-09-01

    ...accuracy when the geometric dilution of precision (GDOP) causes collinearity, which in turn brings about poor position estimates. The main goal is... measurements are needed to wash out the measurement noise. Furthermore, the measurement arrangement's geometry (GDOP) strongly impacts the achievable... [The remainder of the excerpt is index residue listing a Newton algorithm, geometric dilution of precision (GDOP), initial parameter estimates, iterative least squares (ILS), and Kalman filtering.]

  10. An automatic locally-adaptive method to estimate heavily-tailed breakthrough curves from particle distributions

    NASA Astrophysics Data System (ADS)

    Pedretti, Daniele; Fernàndez-Garcia, Daniel

    2013-09-01

    Particle tracking methods to simulate solute transport deal with the issue of having to reconstruct smooth concentrations from a limited number of particles. This is an error-prone process that typically leads to large fluctuations in the determined late-time behavior of breakthrough curves (BTCs). Kernel density estimators (KDE) can be used to automatically reconstruct smooth BTCs from a small number of particles. The kernel approach incorporates the uncertainty associated with subsampling a large population by equipping each particle with a probability density function. Two broad classes of KDE methods can be distinguished depending on the parametrization of this function: global and adaptive methods. This paper shows that each method is likely to estimate a specific portion of the BTCs. Although global methods offer a valid approach to estimate early-time behavior and peak of BTCs, they exhibit important fluctuations at the tails where fewer particles exist. In contrast, locally adaptive methods improve tail estimation while oversmoothing both early-time and peak concentrations. Therefore a new method is proposed combining the strengths of both KDE approaches. The proposed approach is universal and only needs one parameter (α) which slightly depends on the shape of the BTCs. Results show that, for the tested cases, heavily-tailed BTCs are properly reconstructed with α ≈ 0.5.
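
    As a point of reference for the global branch of such estimators, the sketch below reconstructs a breakthrough curve from synthetic particle arrival times with a single-bandwidth Gaussian KDE (the lognormal arrival times and Silverman bandwidth are assumptions for illustration); the late-time fluctuations it leaves are exactly what motivates the locally adaptive correction described above.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Particle arrival times at a control plane (placeholder for particle-
# tracking output); a heavy-tailed sample mimics a late-time BTC tail.
rng = np.random.default_rng(4)
arrival_times = rng.lognormal(mean=2.0, sigma=1.0, size=2000)

# Global Gaussian KDE: one bandwidth for all particles.  It resolves the
# early-time rise and the peak well but tends to fluctuate (or over-smooth)
# in the sparsely sampled late-time tail.
kde = gaussian_kde(arrival_times, bw_method="silverman")
t = np.linspace(0.0, arrival_times.max(), 500)
btc = kde(t)          # reconstructed flux-averaged breakthrough curve
```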

  11. Empirical methods in the evaluation of estimators

    Treesearch

    Gerald S. Walton; C.J. DeMars; C.J. DeMars

    1973-01-01

    The authors discuss the problem of selecting estimators of density and survival by making use of data on a forest-defoliating larva, the spruce budworm. Various estimators are compared. The results show that, among the estimators considered, ratio-type estimators are superior in terms of bias and variance. The methods used in making comparisons, particularly simulation...

  12. Iterative Methods for Parameter Estimation

    DTIC Science & Technology

    1990-12-01

    [Table-of-contents excerpt covering finite impulse response (FIR) systems and fixed data algorithms, including the Gauss-Seidel method.] ...potential to provide a less biased least squares solution than a correlation method formulation [Ref. 3]. ...A very simple and straightforward iterative algorithm is the Gauss-Seidel method [Ref. 7]. We drop the superscript M from a^M for simplicity. Unless...

  13. Radiance and atmosphere propagation-based method for the target range estimation

    NASA Astrophysics Data System (ADS)

    Cho, Hoonkyung; Chun, Joohwan

    2012-06-01

    Target range estimation is traditionally based on radar and active sonar systems in modern combat systems. However, the performance of such active sensor devices is degraded tremendously by jamming signals from the enemy. This paper proposes a simple range estimation method between the target and the sensor. Passive IR sensors measure infrared (IR) radiance radiating from objects at different wavelengths, and this method shows robustness against electromagnetic jamming. The measured target radiance at each wavelength at the IR sensor depends on the emissive properties of the target material and is attenuated by various factors, in particular the distance between the sensor and the target and the atmospheric environment. MODTRAN is a tool that models atmospheric propagation of electromagnetic radiation. Based on the results from MODTRAN and the measured radiance, the target range is estimated. To statistically analyze the performance of the proposed method, we use maximum likelihood estimation (MLE) and evaluate the Cramer-Rao lower bound (CRLB) via the probability density function of the measured radiance, and we compare the CRLB with the variance of the ML estimate using Monte Carlo simulation.
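
    The Monte Carlo comparison of an ML estimator against the CRLB can be sketched with a toy radiance model. The closed-form attenuation law, noise level and geometry below are assumptions standing in for the MODTRAN-based propagation used in the paper; the point is only the mechanics of grid-search ML estimation and the CRLB for a scalar range parameter.

```python
import numpy as np

# Hypothetical radiance model: Beer-Lambert attenuation plus spherical
# spreading, L(r) = L0 * exp(-a*r) / r**2 (an assumption, not MODTRAN output).
L0, a, sigma = 1.0e6, 2.0e-4, 1.0e-3
radiance = lambda r: L0 * np.exp(-a * r) / r**2
dradiance = lambda r: -L0 * np.exp(-a * r) * (a / r**2 + 2.0 / r**3)

true_range, n_meas, n_trials = 3000.0, 16, 2000
grid = np.linspace(1000.0, 6000.0, 4001)
rng = np.random.default_rng(5)

estimates = np.empty(n_trials)
for i in range(n_trials):
    y = radiance(true_range) + sigma * rng.standard_normal(n_meas)
    # With Gaussian noise the ML estimate minimizes the sum of squared residuals.
    sse = ((y[:, None] - radiance(grid)[None, :]) ** 2).sum(axis=0)
    estimates[i] = grid[np.argmin(sse)]

# CRLB for a scalar parameter with n_meas independent Gaussian measurements.
crlb = sigma**2 / (n_meas * dradiance(true_range) ** 2)
print(f"ML variance : {estimates.var():.1f} m^2")
print(f"CRLB        : {crlb:.1f} m^2")
```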

  14. Tunnel Cost-Estimating Methods.

    DTIC Science & Technology

    1981-10-01

    This sensitivity analysis provided insight into which factors were critical, requiring reliable quantitative determination, and which factors could... segment and reach. Some of these estimates reflected genuine uncertainty while others were used to test the sensitivity of costs to changes in a... [The remainder of the excerpt is unreadable program-listing residue and has been omitted.]

  15. Spatial correlations and probability density function of the phase difference in a developed speckle-field: numerical and natural experiments

    SciTech Connect

    Mysina, N Yu; Maksimova, L A; Ryabukho, V P; Gorbatenko, B B

    2015-10-31

    Investigated are statistical properties of the phase difference of oscillations in speckle-fields at two points in the far-field diffraction region, with different shapes of the scatterer aperture. Statistical and spatial nonuniformity of the probability density function of the field phase difference is established. Numerical experiments show that, for the speckle-fields with an oscillating alternating-sign transverse correlation function, a significant nonuniformity of the probability density function of the phase difference in the correlation region of the field complex amplitude, with the most probable values 0 and π, is observed. A natural statistical interference experiment using Young diagrams has confirmed the results of numerical experiments.

  16. Coupled Monte Carlo Probability Density Function/ SPRAY/CFD Code Developed for Modeling Gas-Turbine Combustor Flows

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The success of any solution methodology for studying gas-turbine combustor flows depends a great deal on how well it can model various complex, rate-controlling processes associated with turbulent transport, mixing, chemical kinetics, evaporation and spreading rates of the spray, convective and radiative heat transfer, and other phenomena. These phenomena often strongly interact with each other at disparate time and length scales. In particular, turbulence plays an important role in determining the rates of mass and heat transfer, chemical reactions, and evaporation in many practical combustion devices. Turbulence manifests its influence in a diffusion flame in several forms depending on how turbulence interacts with various flame scales. These forms range from the so-called wrinkled, or stretched, flamelets regime, to the distributed combustion regime. Conventional turbulence closure models have difficulty in treating highly nonlinear reaction rates. A solution procedure based on the joint composition probability density function (PDF) approach holds the promise of modeling various important combustion phenomena relevant to practical combustion devices such as extinction, blowoff limits, and emissions predictions because it can handle the nonlinear chemical reaction rates without any approximation. In this approach, mean and turbulence gas-phase velocity fields are determined from a standard turbulence model; the joint composition field of species and enthalpy are determined from the solution of a modeled PDF transport equation; and a Lagrangian-based dilute spray model is used for the liquid-phase representation with appropriate consideration of the exchanges of mass, momentum, and energy between the two phases. The PDF transport equation is solved by a Monte Carlo method, and existing state-of-the-art numerical representations are used to solve the mean gasphase velocity and turbulence fields together with the liquid-phase equations. The joint composition PDF

  17. An at-site flood estimation method in the context of nonstationarity I. A simulation study

    NASA Astrophysics Data System (ADS)

    Gado, Tamer A.; Nguyen, Van-Thanh-Van

    2016-04-01

    The stationarity of annual flood peak records is the traditional assumption of flood frequency analysis. In some cases, however, as a result of land-use and/or climate change, this assumption is no longer valid. Therefore, new statistical models are needed to capture dynamically the change of probability density functions over time, in order to obtain reliable flood estimation. In this study, an innovative method for nonstationary flood frequency analysis was presented. Here, the new method is based on detrending the flood series and applying the L-moments along with the GEV distribution to the transformed "stationary" series (hereafter, this is called the LM-NS). The LM-NS method was assessed through a comparative study with the maximum likelihood (ML) method for the nonstationary GEV model, as well as with the stationary (S) GEV model. The comparative study, based on Monte Carlo simulations, was carried out for three nonstationary GEV models: a linear dependence of the mean on time (GEV1), a quadratic dependence of the mean on time (GEV2), and linear dependence in both the mean and log standard deviation on time (GEV11). The simulation results indicated that the LM-NS method performs better than the ML method for most of the cases studied, whereas the stationary method provides the least accurate results. An additional advantage of the LM-NS method is to avoid the numerical problems (e.g., convergence problems) that may occur with the ML method when estimating parameters for small data samples.
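
    A compact sketch of the LM-NS idea is given below: remove a fitted linear trend so the residual series can be treated as stationary, then fit the GEV by sample L-moments using Hosking's closed-form approximations. The synthetic flood record and the simple mean-only detrending are assumptions for illustration, not the exact transformation used in the paper.

```python
import numpy as np
from math import gamma, log

def lmoment_gev_fit(x):
    """Fit a GEV distribution by L-moments (Hosking's approximations).

    Returns (location xi, scale alpha, shape k) in Hosking's sign
    convention (k > 0 corresponds to a bounded upper tail).
    """
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    j = np.arange(1, n + 1)
    # Unbiased probability-weighted moments and the first three L-moments.
    b0 = x.mean()
    b1 = np.sum((j - 1) / (n - 1) * x) / n
    b2 = np.sum((j - 1) * (j - 2) / ((n - 1) * (n - 2)) * x) / n
    lam1, lam2, lam3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
    tau3 = lam3 / lam2
    # Hosking's approximation for the GEV shape, then scale and location.
    c = 2.0 / (3.0 + tau3) - log(2.0) / log(3.0)
    k = 7.8590 * c + 2.9554 * c ** 2
    alpha = lam2 * k / ((1.0 - 2.0 ** (-k)) * gamma(1.0 + k))
    xi = lam1 + alpha * (gamma(1.0 + k) - 1.0) / k
    return xi, alpha, k

# LM-NS-style use (a sketch): remove a fitted linear trend in the mean so the
# residual series is treated as stationary, then fit the GEV by L-moments.
rng = np.random.default_rng(6)
years = np.arange(1950, 2016)
floods = 100.0 + 0.8 * (years - years[0]) + rng.gumbel(0.0, 25.0, years.size)
trend = np.polyval(np.polyfit(years, floods, 1), years)
detrended = floods - trend + floods.mean()
print(lmoment_gev_fit(detrended))
```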

  18. Methods of Estimating Strategic Intentions

    DTIC Science & Technology

    1982-05-01

    ...a considerable impact on subsequent escalation and responses. The previous research performed by MATHTECH dealt with an assessment of methods for... or dimensions on which Admiral X perceives these types as similar or different (Rosenberg and Jones, 1972; Rosenberg and Sedlak, 1972)... [The remainder of the excerpt consists of fragmentary bibliography entries and has been omitted.]

  19. Assessment of a Three-Dimensional Line-of-Response Probability Density Function System Matrix for PET

    PubMed Central

    Yao, Rutao; Ramachandra, Ranjith M.; Mahajan, Neeraj; Rathod, Vinay; Gunasekar, Noel; Panse, Ashish; Ma, Tianyu; Jian, Yiqiang; Yan, Jianhua; Carson, Richard E.

    2012-01-01

    To achieve optimal PET image reconstruction through better system modeling, we developed a system matrix that is based on the probability density function for each line of response (LOR-PDF). The LOR-PDFs are grouped by LOR-to-detector incident angles to form a highly compact system matrix. The system matrix was implemented in the MOLAR list mode reconstruction algorithm for a small animal PET scanner. The impact of LOR-PDF on reconstructed image quality was assessed qualitatively as well as quantitatively in terms of contrast recovery coefficient (CRC) and coefficient of variance (COV), and its performance was compared with a fixed Gaussian (iso-Gaussian) line spread function. The LOR-PDFs of 3 coincidence signal emitting sources, 1) ideal positron emitter that emits perfect back-to-back γ rays (γγ) in air; 2) fluorine-18 (18F) nuclide in water; and 3) oxygen-15 (15O) nuclide in water, were derived, and assessed with simulated and experimental phantom data. The derived LOR-PDFs showed anisotropic and asymmetric characteristics dependent on LOR-detector angle, coincidence emitting source, and the medium, consistent with common PET physical principles. The comparison of the iso-Gaussian function and LOR-PDF showed that: 1) without positron range and acolinearity effects, the LOR-PDF achieved better or similar trade-offs of contrast recovery and noise for objects of 4-mm radius or larger, and this advantage extended to smaller objects (e.g. 2-mm radius sphere, 0.6-mm radius hot-rods) at higher iteration numbers; and 2) with positron range and acolinearity effects, the iso-Gaussian achieved similar or better resolution recovery depending on the significance of positron range effect. We conclude that the 3-D LOR-PDF approach is an effective method to generate an accurate and compact system matrix. However, when used directly in expectation-maximization based list-mode iterative reconstruction algorithms such as MOLAR, its superiority is not clear. For this

  20. A very efficient approach to compute the first-passage probability density function in a time-changed Brownian model: Applications in finance

    NASA Astrophysics Data System (ADS)

    Ballestra, Luca Vincenzo; Pacelli, Graziella; Radi, Davide

    2016-12-01

    We propose a numerical method to compute the first-passage probability density function in a time-changed Brownian model. In particular, we derive an integral representation of such a density function in which the integrand functions must be obtained solving a system of Volterra equations of the first kind. In addition, we develop an ad-hoc numerical procedure to regularize and solve this system of integral equations. The proposed method is tested on three application problems of interest in mathematical finance, namely the calculation of the survival probability of an indebted firm, the pricing of a single-knock-out put option and the pricing of a double-knock-out put option. The results obtained reveal that the novel approach is extremely accurate and fast, and performs significantly better than the finite difference method.
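
    A brute-force Monte Carlo histogram offers a simple cross-check for first-passage densities in cases with known closed forms. The sketch below (plain drifted Brownian motion to a constant barrier, with illustrative parameters) is only such a reference; the method of the paper targets general time-changed Brownian models through the Volterra system described above.

```python
import numpy as np

# Monte Carlo reference for the first-passage density of a drifted Brownian
# motion to a constant barrier; this plain case has a known closed form
# (inverse Gaussian) and is only meant as a sanity check.
mu, sigma, barrier = 0.1, 0.2, 1.0
dt, n_steps, n_paths = 1e-2, 4000, 50_000
rng = np.random.default_rng(7)

hit_times = np.full(n_paths, np.nan)
x = np.zeros(n_paths)
alive = np.ones(n_paths, dtype=bool)
for step in range(1, n_steps + 1):
    x[alive] += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(alive.sum())
    crossed = alive & (x >= barrier)
    hit_times[crossed] = step * dt
    alive &= ~crossed

t = np.linspace(0.01, n_steps * dt, 400)
exact = barrier / (sigma * np.sqrt(2 * np.pi * t**3)) * np.exp(
    -(barrier - mu * t) ** 2 / (2 * sigma**2 * t))
# np.histogram(hit_times[~np.isnan(hit_times)], bins=100, density=True)
# approximates `exact` up to time-discretization bias near t = 0.
```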

  1. Probability density function selection based on the characteristics of wind speed data

    NASA Astrophysics Data System (ADS)

    Yürüşen, N. Y.; Melero, Julio J.

    2016-09-01

    The probabilistic approach has an important place in the wind energy research field as it provides cheap and fast initial information for experts with the help of simulations and estimations. Wind energy experts have been using the Weibull distribution for wind speed data for many years. Nevertheless, there exist cases where the Weibull distribution is inappropriate, with data presenting bimodal or multimodal behaviour or poor fits in high, null and low winds, which can cause serious energy estimation errors. This paper presents a procedure for dealing with wind speed data taking into account non-Weibull distributions or data treatment when needed. The procedure detects deviations from the unimodal (Weibull) distribution and proposes other possible distributions to be used. The deviations of the used distributions with respect to the real data are assessed with the Root Mean Square Error (RMSE) and the annual energy production (AEP).
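
    The first step of such a procedure, fitting a Weibull distribution and scoring it against the empirical histogram, can be sketched as follows. The bimodal synthetic wind sample and the 30-bin histogram are assumptions for illustration; the RMSE criterion mirrors the one named above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
# Placeholder hourly wind speeds (m/s); a mixture makes the sample bimodal,
# the situation in which the procedure would flag the Weibull fit.
wind = np.concatenate([rng.weibull(2.0, 6000) * 4.0,
                       rng.weibull(3.0, 4000) * 9.0])

# Fit a 2-parameter Weibull (location fixed at zero) and score it against
# the empirical histogram with an RMSE criterion.
shape, loc, scale = stats.weibull_min.fit(wind, floc=0.0)
hist, edges = np.histogram(wind, bins=30, density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
rmse = np.sqrt(np.mean((hist - stats.weibull_min.pdf(centers, shape, loc, scale)) ** 2))
print(f"Weibull k={shape:.2f}, c={scale:.2f} m/s, RMSE={rmse:.4f}")
```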

  2. Probability density functions for the variable solar wind near the solar cycle minimum

    NASA Astrophysics Data System (ADS)

    Vörös, Z.; Leitner, M.; Narita, Y.; Consolini, G.; Kovács, P.; Tóth, A.; Lichtenberger, J.

    2015-08-01

    Unconditional and conditional statistics are used for studying the histograms of magnetic field multiscale fluctuations in the solar wind near the solar cycle minimum in 2008. The unconditional statistics involves the magnetic data during the whole year in 2008. The conditional statistics involves the magnetic field time series split into concatenated subsets of data according to a threshold in dynamic pressure. The threshold separates fast-stream leading edge compressional and trailing edge uncompressional fluctuations. The histograms obtained from these data sets are associated with both multiscale (B) and small-scale (δB) magnetic fluctuations, the latter corresponding to time-delayed differences. It is shown here that, by keeping flexibility but avoiding the unnecessary redundancy in modeling, the histograms can be effectively described by a limited set of theoretical probability distribution functions (PDFs), such as the normal, lognormal, kappa, and log-kappa functions. In a statistical sense the model PDFs correspond to additive and multiplicative processes exhibiting correlations. It is demonstrated here that the skewed small-scale histograms inherent in turbulent cascades are better described by the skewed log-kappa than by the symmetric kappa model. Nevertheless, the observed skewness is rather small, resulting in potential difficulties of estimation of the third-order moments. This paper also investigates the dependence of the statistical convergence of PDF model parameters, goodness of fit, and skewness on the data sample size. It is shown that the minimum lengths of data intervals required for the robust estimation of parameters are scale, process, and model dependent.

  3. Existence, uniqueness and regularity of a time-periodic probability density distribution arising in a sedimentation-diffusion problem

    NASA Technical Reports Server (NTRS)

    Nitsche, Ludwig C.; Nitsche, Johannes M.; Brenner, Howard

    1988-01-01

    The sedimentation and diffusion of a nonneutrally buoyant Brownian particle in a vertical fluid-filled cylinder of finite length which is instantaneously inverted at regular intervals are investigated analytically. A one-dimensional convective-diffusive equation is derived to describe the temporal and spatial evolution of the probability density; a periodicity condition is formulated; the applicability of Fredholm theory is established; and the parameter-space regions are determined within which the existence and uniqueness of solutions are guaranteed. Numerical results for sample problems are presented graphically and briefly characterized.

  4. Finite-size scaling of the magnetization probability density for the critical Ising model in slab geometry.

    PubMed

    Cardozo, David Lopes; Holdsworth, Peter C W

    2016-04-27

    The magnetization probability density in d = 2 and 3 dimensional Ising models in slab geometry of volume L_∥^(d-1) × L_⊥ is computed through Monte-Carlo simulation at the critical temperature and zero magnetic field. The finite-size scaling of this distribution and its dependence on the system aspect-ratio ρ = L_⊥/L_∥ and boundary conditions are discussed. In the limiting case ρ → 0 of a macroscopically large slab (L_∥ ≫ L_⊥) the distribution is found to scale as a Gaussian function for all tested system sizes and boundary conditions.

  5. Probability Density Functions of Floating Potential Fluctuations Due to Local Electron Flux Intermittency in a Linear ECR Plasma

    NASA Astrophysics Data System (ADS)

    Yoshimura, Shinji; Terasaka, Kenichiro; Tanaka, Eiki; Aramaki, Mitsutoshi; Tanaka, Masayoshi Y.

    An intermittent behavior of local electron flux in a laboratory ECR plasma is statistically analyzed by means of probability density functions (PDFs). The PDF constructed from a time series of the floating potential signal on a Langmuir probe has a fat tail on the negative value side, which reflects the intermittency of the local electron flux. The PDF of the waiting time, which is defined by the time interval between two successive events, is found to exhibit an exponential distribution, suggesting that the phenomenon is characterized by a stationary Poisson process. The underlying Poisson process is also confirmed by the fact that the number of events in given time intervals is Poisson distributed.
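
    The waiting-time analysis can be sketched in a few lines: threshold the signal, difference the event indices, and compare the waiting-time distribution with an exponential. The synthetic floating-potential trace and the thresholds below are assumptions for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
# Placeholder floating-potential trace: Gaussian background plus randomly
# timed negative spikes standing in for intermittent electron-flux events.
n = 200_000
signal = rng.normal(0.0, 1.0, n)
spike_idx = np.flatnonzero(rng.random(n) < 1e-3)
signal[spike_idx] -= 8.0

# Detect events by thresholding and compute waiting times between them.
events = np.flatnonzero(signal < -5.0)
waiting = np.diff(events)

# For a stationary Poisson process the waiting times are exponential; the
# KS p-value is only indicative because the scale is estimated from the data.
rate = 1.0 / waiting.mean()
d, p = stats.kstest(waiting, "expon", args=(0.0, waiting.mean()))
print(f"rate = {rate:.2e} per sample, KS p-value = {p:.2f}")
```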

  6. Finite-size scaling of the magnetization probability density for the critical Ising model in slab geometry

    NASA Astrophysics Data System (ADS)

    Lopes Cardozo, David; Holdsworth, Peter C. W.

    2016-04-01

    The magnetization probability density in d = 2 and 3 dimensional Ising models in slab geometry of volume L_∥^(d-1) × L_⊥ is computed through Monte-Carlo simulation at the critical temperature and zero magnetic field. The finite-size scaling of this distribution and its dependence on the system aspect-ratio ρ = L_⊥/L_∥ and boundary conditions are discussed. In the limiting case ρ → 0 of a macroscopically large slab (L_∥ ≫ L_⊥) the distribution is found to scale as a Gaussian function for all tested system sizes and boundary conditions.
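
    A minimal Metropolis sketch of how such a magnetization histogram is accumulated is given below. It uses a small periodic square lattice at the exactly known critical temperature; the slab geometry, aspect ratios and boundary conditions studied in the paper are not reproduced here, and the lattice is kept deliberately small so the pure-Python loop stays quick.

```python
import numpy as np

L = 12                                             # lattice side (illustrative)
T_c = 2.0 / np.log(1.0 + np.sqrt(2.0))             # exact 2-D critical temperature
n_sweeps, n_thermalize = 5_000, 1_000
rng = np.random.default_rng(10)
spins = rng.choice([-1, 1], size=(L, L))
magnetizations = []

for sweep in range(n_sweeps):
    for _ in range(L * L):
        i, j = rng.integers(L), rng.integers(L)
        neighbors = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                     + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * neighbors          # energy change of a single flip
        # Metropolis acceptance at temperature T_c (k_B = J = 1).
        if dE <= 0.0 or rng.random() < np.exp(-dE / T_c):
            spins[i, j] *= -1
    if sweep >= n_thermalize:
        magnetizations.append(spins.sum() / (L * L))

# The normalized histogram approximates the finite-size magnetization
# probability density at criticality (bimodal for such a small lattice).
pdf, edges = np.histogram(magnetizations, bins=51, range=(-1, 1), density=True)
```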

  7. The Flight Optimization System Weights Estimation Method

    NASA Technical Reports Server (NTRS)

    Wells, Douglas P.; Horvath, Bryce L.; McCullers, Linwood A.

    2017-01-01

    FLOPS has been the primary aircraft synthesis software used by the Aeronautics Systems Analysis Branch at NASA Langley Research Center. It was created for rapid conceptual aircraft design and advanced technology impact assessments. FLOPS is a single computer program that includes weights estimation, aerodynamics estimation, engine cycle analysis, propulsion data scaling and interpolation, detailed mission performance analysis, takeoff and landing performance analysis, noise footprint estimation, and cost analysis. It is well known as a baseline and common denominator for aircraft design studies. FLOPS is capable of calibrating a model to known aircraft data, making it useful for new aircraft and modifications to existing aircraft. The weight estimation method in FLOPS is known to be of high fidelity for conventional tube with wing aircraft and a substantial amount of effort went into its development. This report serves as a comprehensive documentation of the FLOPS weight estimation method. The development process is presented with the weight estimation process.

  8. Determination of the structure of wood from the self-diffusion probability densities of a fluid observed by position-exchange NMR spectroscopy.

    PubMed

    Telkki, Ville-Veikko; Jokisaari, Jukka

    2009-02-28

    Self-diffusion of a fluid absorbed in a solid matrix is restricted by the walls of the matrix. We demonstrate that the local self-diffusion probability densities (propagators) of fluid molecules can be measured by position-exchange nuclear magnetic resonance spectroscopy (POXSY), and analysis of the shape of the propagators reveals the local size-distributions of the voids in the matrix. We also show that, in the case of rectangular voids, size-distribution can be calculated in a long diffusion-time limit without any assumptions about the shape of the distribution. Pinus sylvestris pine wood was used as a sample material in the experiments, and the results show that this method gives detailed information about the structure of wood.

  9. A method of estimating log weights.

    Treesearch

    Charles N. Mann; Hilton H. Lysons

    1972-01-01

    This paper presents a practical method of estimating the weights of logs before they are yarded. Knowledge of log weights is required to achieve optimum loading of modern yarding equipment. Truckloads of logs are weighed and measured to obtain a local density index (pounds per cubic foot) for a species of logs. The density index is then used to estimate the weights of...

  10. Multivariate Density Estimation and Remote Sensing

    NASA Technical Reports Server (NTRS)

    Scott, D. W.

    1983-01-01

    Current efforts to develop methods and computer algorithms to effectively represent multivariate data commonly encountered in remote sensing applications are described. While this may involve scatter diagrams, multivariate representations of nonparametric probability density estimates are emphasized. The density function provides a useful graphical tool for looking at data and a useful theoretical tool for classification. This approach is called a thunderstorm data analysis.

  11. Probability density function of non-reactive solute concentration in heterogeneous porous formations.

    PubMed

    Bellin, Alberto; Tonina, Daniele

    2007-10-30

    Available models of solute transport in heterogeneous formations fall short of providing a complete characterization of the predicted concentration. This is a serious drawback especially in risk analysis where confidence intervals and probability of exceeding threshold values are required. Our contribution to fill this gap of knowledge is a probability distribution model for the local concentration of conservative tracers migrating in heterogeneous aquifers. Our model accounts for dilution, mechanical mixing within the sampling volume and spreading due to formation heterogeneity. It is developed by modeling local concentration dynamics with an Ito Stochastic Differential Equation (SDE) that under the hypothesis of statistical stationarity leads to the Beta probability distribution function (pdf) for the solute concentration. This model shows large flexibility in capturing the smoothing effect of the sampling volume and the associated reduction of the probability of exceeding large concentrations. Furthermore, it is fully characterized by the first two moments of the solute concentration, and these are the same pieces of information required for standard geostatistical techniques employing Normal or Log-Normal distributions. Additionally, we show that in the absence of pore-scale dispersion and for point concentrations the pdf model converges to the binary distribution of [Dagan, G., 1982. Stochastic modeling of groundwater flow by unconditional and conditional probabilities, 2, The solute transport. Water Resour. Res. 18 (4), 835-848.], while it approaches the Normal distribution for sampling volumes much larger than the characteristic scale of the aquifer heterogeneity. Furthermore, we demonstrate that the same model with the spatial moments replacing the statistical moments can be applied to estimate the proportion of the plume volume where solute concentrations are above or below critical thresholds. Application of this model to point and vertically averaged bromide
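
    Because the Beta model is fully characterized by the first two concentration moments, exceedance probabilities follow directly once those moments are supplied. The sketch below matches a Beta pdf to an assumed mean and variance of the normalized concentration (illustrative numbers, not values from the paper) and evaluates the probability of exceeding a threshold.

```python
import numpy as np
from scipy import stats

def beta_exceedance(mean_c, var_c, threshold):
    """Probability that the normalized concentration exceeds a threshold.

    The local concentration, scaled by the source concentration so it lies
    in [0, 1], is modeled with a Beta pdf whose two parameters are matched
    to the first two concentration moments.
    """
    if not 0.0 < var_c < mean_c * (1.0 - mean_c):
        raise ValueError("moments incompatible with a Beta distribution")
    nu = mean_c * (1.0 - mean_c) / var_c - 1.0
    a, b = mean_c * nu, (1.0 - mean_c) * nu
    return stats.beta.sf(threshold, a, b)

# Example: moderate dilution, moderate concentration variance, threshold at
# 30% of the source concentration (all values illustrative).
print(beta_exceedance(mean_c=0.2, var_c=0.01, threshold=0.3))
```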

  12. Probability Density Function for Waves Propagating in a Straight Rough Wall Tunnel

    SciTech Connect

    Pao, H

    2004-01-28

    The radio channel places fundamental limitations on the performance of wireless communication systems in tunnels and caves. The transmission path between the transmitter and receiver can vary from a simple direct line of sight to one that is severely obstructed by rough walls and corners. Unlike wired channels that are stationary and predictable, radio channels can be extremely random and difficult to analyze. In fact, modeling the radio channel has historically been one of the more challenging parts of any radio system design; this is often done using statistical methods. The mechanisms behind electromagnetic wave propagation are diverse, but can generally be attributed to reflection, diffraction, and scattering. Because of the multiple reflections from rough walls, the electromagnetic waves travel along different paths of varying lengths. The interactions between these waves cause multipath fading at any location, and the strengths of the waves decrease as the distance between the transmitter and receiver increases. As a consequence of the central limit theorem, the received signals are approximately Gaussian random process. This means that the field propagating in a cave or tunnel is typically a complex-valued Gaussian random process.
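
    The central-limit argument above can be illustrated with a toy multipath sum: adding many reflected components with random phases drives the received field toward a complex Gaussian process, whose envelope is then approximately Rayleigh distributed. The number of paths and the amplitude distribution below are assumptions, not the report's rough-wall tunnel computation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(12)
# Sum of many multipath components with random phases; by the central limit
# theorem the received field tends to a complex circular Gaussian process.
n_paths, n_samples = 100, 10_000
phases = rng.uniform(0.0, 2.0 * np.pi, size=(n_samples, n_paths))
amplitudes = rng.uniform(0.5, 1.0, size=n_paths) / np.sqrt(n_paths)
field = (amplitudes * np.exp(1j * phases)).sum(axis=1)

# The envelope of a complex Gaussian field is Rayleigh distributed; the KS
# p-value is only indicative because the scale is fitted from the same data.
envelope = np.abs(field)
scale = stats.rayleigh.fit(envelope, floc=0.0)[1]
d, p = stats.kstest(envelope, "rayleigh", args=(0.0, scale))
print(f"Rayleigh scale = {scale:.3f}, KS p-value = {p:.2f}")
```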

  13. A comparison of ground truth estimation methods.

    PubMed

    Biancardi, Alberto M; Jirapatnakul, Artit C; Reeves, Anthony P

    2010-05-01

    Knowledge of the exact shape of a lesion, or ground truth (GT), is necessary for the development of diagnostic tools by means of algorithm validation, measurement metric analysis, and accurate size estimation. Four methods that estimate GTs from multiple readers' documentations by considering the spatial location of voxels were compared: the thresholded probability map at 0.50 (TPM(0.50)) and at 0.75 (TPM(0.75)), simultaneous truth and performance level estimation (STAPLE), and truth estimate from self distances (TESD). A subset of the publicly available Lung Image Database Consortium archive was used, selecting pulmonary nodules documented by all four radiologists. The pair-wise similarities between the estimated GTs were analyzed by computing the respective Jaccard coefficients. Then, with respect to the readers' marking volumes, the estimated volumes were ranked and the sign test of the differences between them was performed. Results show that (a) the rank variations among the four methods and the volume differences between STAPLE and TESD are not statistically significant; (b) TPM(0.50) estimates are statistically larger; (c) TPM(0.75) estimates are statistically smaller; and (d) there is some spatial disagreement in the estimates as the one-sided 90% confidence intervals between TPM(0.75) and TPM(0.50), TPM(0.75) and STAPLE, TPM(0.75) and TESD, TPM(0.50) and STAPLE, TPM(0.50) and TESD, STAPLE and TESD, respectively, show: [0.67, 1.00], [0.67, 1.00], [0.77, 1.00], [0.93, 1.00], [0.85, 1.00], [0.85, 1.00]. The method used to estimate the GT is important: the differences highlighted that STAPLE and TESD, notwithstanding a few weaknesses, appear to be equally viable GT estimators, while the increased availability of computing power is decreasing the appeal afforded to TPMs. Ultimately, the choice of which GT estimation method, between the two, should be preferred depends on the specific characteristics of the marked data that is used with respect to the two elements that differentiate the
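
    As a hedged illustration of the simplest of the four estimators, the snippet below builds a voxel-wise probability map from several hypothetical binary reader masks and thresholds it at 0.50 and 0.75, then compares the two masks with a Jaccard coefficient; STAPLE and TESD are more involved and are not reproduced here.

      import numpy as np

      rng = np.random.default_rng(1)

      # Four hypothetical binary reader segmentations of the same 32x32x32 volume.
      readers = [(rng.random((32, 32, 32)) > 0.6).astype(float) for _ in range(4)]

      prob_map = np.mean(readers, axis=0)   # voxel-wise fraction of readers marking the voxel

      tpm_050 = prob_map >= 0.50            # thresholded probability map at 0.50
      tpm_075 = prob_map >= 0.75            # thresholded probability map at 0.75

      def jaccard(a, b):
          # Jaccard coefficient between two binary masks.
          inter = np.logical_and(a, b).sum()
          union = np.logical_or(a, b).sum()
          return inter / union if union else 1.0

      print("volumes:", tpm_050.sum(), tpm_075.sum())
      print("Jaccard(TPM_0.50, TPM_0.75):", jaccard(tpm_050, tpm_075))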

  14. Nonparametric Bayesian methods for benchmark dose estimation.

    PubMed

    Guha, Nilabja; Roy, Anindya; Kopylev, Leonid; Fox, John; Spassova, Maria; White, Paul

    2013-09-01

    The article proposes and investigates the performance of two Bayesian nonparametric estimation procedures in the context of benchmark dose estimation in toxicological animal experiments. The methodology is illustrated using several existing animal dose-response data sets and is compared with traditional parametric methods available in standard benchmark dose estimation software (BMDS), as well as with a published model-averaging approach and a frequentist nonparametric approach. These comparisons, together with simulation studies, suggest that the nonparametric methods provide considerable flexibility in terms of model fit and can be a very useful tool in benchmark dose estimation studies, especially when standard parametric models fail to fit the data adequately. © 2013 Society for Risk Analysis.

  15. Standard methods for spectral estimation and prewhitening

    SciTech Connect

    Stearns, S.D.

    1986-07-01

    A standard FFT periodogram-averaging method for power spectral estimation is described in detail, with examples that the reader can use to verify his own software. The parameters that must be specified in order to repeat a given spectral estimate are listed. A standard technique for prewhitening is also described, again with repeatable examples and a summary of the parameters that must be specified.
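
    The following is a minimal sketch of FFT periodogram averaging of the kind described; it is not the report's standard procedure (the segment length, window choice, and omission of the one-sided factor of two are arbitrary simplifications).

      import numpy as np

      def averaged_periodogram(x, fs, seg_len=256):
          # Average non-overlapping, Hann-windowed FFT periodograms
          # (density scaling; the one-sided factor of 2 is omitted for brevity).
          window = np.hanning(seg_len)
          scale = fs * np.sum(window ** 2)
          n_segs = len(x) // seg_len
          psd = np.zeros(seg_len // 2 + 1)
          for k in range(n_segs):
              seg = x[k * seg_len:(k + 1) * seg_len] * window
              psd += np.abs(np.fft.rfft(seg)) ** 2 / scale
          psd /= n_segs
          freqs = np.fft.rfftfreq(seg_len, d=1.0 / fs)
          return freqs, psd

      # Test signal: a 50 Hz tone in white noise.
      fs = 1000.0
      t = np.arange(0, 8.0, 1.0 / fs)
      x = np.sin(2 * np.pi * 50.0 * t) + np.random.default_rng(2).normal(0, 1, t.size)
      freqs, psd = averaged_periodogram(x, fs)
      print("peak frequency (Hz):", freqs[np.argmax(psd)])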

  16. Variational adaptive correlation method for flow estimation.

    PubMed

    Becker, Florian; Wieneke, Bernhard; Petra, Stefania; Schröder, Andreas; Schnörr, Christoph

    2012-06-01

    A variational approach is presented to the estimation of turbulent fluid flow from particle image sequences in experimental fluid mechanics. The approach comprises two coupled optimizations for adapting size and shape of a Gaussian correlation window at each location and for estimating the flow, respectively. The method copes with a wide range of particle densities and image noise levels without any data-specific parameter tuning. Based on a careful implementation of a multiscale nonlinear optimization technique, we demonstrate robustness of the solution over typical experimental scenarios and highest estimation accuracy for an international benchmark data set (PIV Challenge).

  17. Source estimation methods for atmospheric dispersion

    NASA Astrophysics Data System (ADS)

    Shankar Rao, K.

    Both forward and backward transport modeling methods are being developed for characterization of sources in atmospheric releases of toxic agents. Forward modeling methods, which describe the atmospheric transport from sources to receptors, use forward-running transport and dispersion models or computational fluid dynamics models which are run many times, and the resulting dispersion field is compared to observations from multiple sensors. Forward modeling methods include Bayesian updating and inference schemes using stochastic Monte Carlo or Markov Chain Monte Carlo sampling techniques. Backward or inverse modeling methods use only one model run in the reverse direction from the receptors to estimate the upwind sources. Inverse modeling methods include adjoint and tangent linear models, Kalman filters, and variational data assimilation, among others. This survey paper discusses these source estimation methods and lists the key references. The need for assessing uncertainties in the characterization of sources using atmospheric transport and dispersion models is emphasized.

  18. An investigation of student understanding of classical ideas related to quantum mechanics: Potential energy diagrams and spatial probability density

    NASA Astrophysics Data System (ADS)

    Stephanik, Brian Michael

    This dissertation describes the results of two related investigations into introductory student understanding of ideas from classical physics that are key elements of quantum mechanics. One investigation probes the extent to which students are able to interpret and apply potential energy diagrams (i.e., graphs of potential energy versus position). The other probes the extent to which students are able to reason classically about probability and spatial probability density. The results of these investigations revealed significant conceptual and reasoning difficulties that students encounter with these topics. The findings guided the design of instructional materials to address the major problems. Results from post-instructional assessments are presented that illustrate the impact of the curricula on student learning.

  19. Steady-state probability density function of the phase error for a DPLL with an integrate-and-dump device

    NASA Technical Reports Server (NTRS)

    Simon, M.; Mileant, A.

    1986-01-01

    The steady-state behavior of a particular type of digital phase-locked loop (DPLL) with an integrate-and-dump circuit following the phase detector is characterized in terms of the probability density function (pdf) of the phase error in the loop. Although the loop is entirely digital from an implementation standpoint, it operates at two extremely different sampling rates. In particular, the combination of a phase detector and an integrate-and-dump circuit operates at a very high rate whereas the loop update rate is very slow by comparison. Because of this dichotomy, the loop can be analyzed by hybrid analog/digital (s/z domain) techniques. The loop is modeled in such a general fashion that previous analyses of the Real-Time Combiner (RTC), Subcarrier Demodulator Assembly (SDA), and Symbol Synchronization Assembly (SSA) fall out as special cases.

  20. Large-eddy simulation/probability density function modeling of local extinction and re-ignition in Sandia Flame E

    NASA Astrophysics Data System (ADS)

    Wang, Haifeng; Popov, Pavel; Hiremath, Varun; Lantz, Steven; Viswanathan, Sharadha; Pope, Stephen

    2010-11-01

    A large-eddy simulation (LES)/probability density function (PDF) code is developed and applied to the study of local extinction and re-ignition in Sandia Flame E. The modified Curl mixing model is used to account for the sub-filter scalar mixing; the ARM1 mechanism is used for the chemical reaction; and the in-situ adaptive tabulation (ISAT) algorithm is used to accelerate the chemistry calculations. Calculations are performed on different grids to study the resolution requirement for this flame. Then, with sufficient grid resolution, full-scale LES/PDF calculations are performed to study the flame characteristics and the turbulence-chemistry interactions. Sensitivity to the mixing frequency model is explored in order to understand the behavior of sub-filter scalar mixing in the context of LES. The simulation results are compared to the experimental data to demonstrate the capability of the code. Comparison is also made to previous RANS/PDF simulations.

  1. Nonparametric estimation of population density for line transect sampling using FOURIER series

    USGS Publications Warehouse

    Crain, B.R.; Burnham, K.P.; Anderson, D.R.; Lake, J.L.

    1979-01-01

    A nonparametric, robust density estimation method is explored for the analysis of right-angle distances from a transect line to the objects sighted. The method is based on the FOURIER series expansion of a probability density function over an interval. With only mild assumptions, a general population density estimator of wide applicability is obtained.
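
    A hedged sketch of a Fourier cosine-series estimator of the perpendicular-distance density of the general kind described is given below; the truncation distance, number of terms, simulated data, and transect length are all assumptions for illustration, and the exact formulation in the paper may differ.

      import numpy as np

      def fourier_density(distances, w, m=4):
          # Cosine-series estimate of the pdf of perpendicular distances on [0, w]:
          #   f(x) = 1/w + sum_k a_k cos(k*pi*x/w),  a_k = (2/(n*w)) * sum_i cos(k*pi*x_i/w)
          x = np.asarray(distances, dtype=float)
          n = x.size
          k = np.arange(1, m + 1)
          a = (2.0 / (n * w)) * np.cos(np.outer(k, x) * np.pi / w).sum(axis=1)
          def f(xq):
              xq = np.atleast_1d(xq)
              return 1.0 / w + (a[:, None] * np.cos(np.outer(k, xq) * np.pi / w)).sum(axis=0)
          return f, a

      # Hypothetical perpendicular sighting distances (metres) and truncation width.
      rng = np.random.default_rng(3)
      dists = np.abs(rng.normal(0.0, 20.0, size=120))
      dists = dists[dists <= 60.0]
      f_hat, coeffs = fourier_density(dists, w=60.0, m=4)

      # Line-transect density estimate: D = n * f(0) / (2 * L), L = transect length (assumed).
      L = 5000.0
      D = dists.size * f_hat(0.0)[0] / (2.0 * L)
      print("f(0) estimate:", f_hat(0.0)[0], " objects per square metre:", D)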

  2. Habitat suitability criteria via parametric distributions: estimation, model selection and uncertainty

    USGS Publications Warehouse

    Som, Nicholas A.; Goodman, Damon H.; Perry, Russell W.; Hardy, Thomas B.

    2016-01-01

    Previous methods for constructing univariate habitat suitability criteria (HSC) curves have ranged from professional judgement to kernel-smoothed density functions or combinations thereof. We present a new method of generating HSC curves that applies probability density functions as the mathematical representation of the curves. Compared with previous approaches, benefits of our method include (1) estimation of probability density function parameters directly from raw data, (2) quantitative methods for selecting among several candidate probability density functions, and (3) concise methods for expressing estimation uncertainty in the HSC curves. We demonstrate our method with a thorough example using data collected on the depth of water used by juvenile Chinook salmon (Oncorhynchus tschawytscha) in the Klamath River of northern California and southern Oregon. All R code needed to implement our example is provided in the appendix. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
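
    A hedged sketch of the general workflow the abstract describes (parameter estimation directly from raw data plus quantitative model selection) is shown below, using simulated depth observations and a small set of candidate distributions ranked by AIC; the authors' actual implementation is the R code in their appendix and is not reproduced here.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(4)
      depths = rng.gamma(shape=3.0, scale=0.3, size=200)   # hypothetical depth-use data (m)

      candidates = {
          "gamma": stats.gamma,
          "lognorm": stats.lognorm,
          "weibull_min": stats.weibull_min,
      }

      results = {}
      for name, dist in candidates.items():
          params = dist.fit(depths, floc=0)                 # ML fit with location fixed at 0
          loglik = np.sum(dist.logpdf(depths, *params))
          k = len(params) - 1                               # free parameters (loc is fixed)
          results[name] = 2 * k - 2 * loglik                # AIC

      for name, aic in sorted(results.items(), key=lambda kv: kv[1]):
          print(f"{name:12s} AIC = {aic:.1f}")
      print("selected distribution:", min(results, key=results.get))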

  3. FINAL PROJECT REPORT DOE Early Career Principal Investigator Program Project Title: Developing New Mathematical Models for Multiphase Flows Based on a Fundamental Probability Density Function Approach

    SciTech Connect

    Shankar Subramaniam

    2009-04-01

    This final project report summarizes progress made towards the objectives described in the proposal entitled “Developing New Mathematical Models for Multiphase Flows Based on a Fundamental Probability Density Function Approach”. Substantial progress has been made in theory, modeling and numerical simulation of turbulent multiphase flows. The consistent mathematical framework based on probability density functions is described. New models are proposed for turbulent particle-laden flows and sprays.

  4. Computing rates of Markov models of voltage-gated ion channels by inverting partial differential equations governing the probability density functions of the conducting and non-conducting states.

    PubMed

    Tveito, Aslak; Lines, Glenn T; Edwards, Andrew G; McCulloch, Andrew

    2016-07-01

    Markov models are ubiquitously used to represent the function of single ion channels. However, solving the inverse problem to construct a Markov model of single channel dynamics from bilayer or patch-clamp recordings remains challenging, particularly for channels involving complex gating processes. Methods for solving the inverse problem are generally based on data from voltage clamp measurements. Here, we describe an alternative approach to this problem based on measurements of voltage traces. The voltage traces define probability density functions of the functional states of an ion channel. These probability density functions can also be computed by solving a deterministic system of partial differential equations. The inversion is based on tuning the rates of the Markov models used in the deterministic system of partial differential equations such that the solution mimics the properties of the probability density function gathered from (pseudo) experimental data as well as possible. The optimization is done by defining a cost function to measure the difference between the deterministic solution and the solution based on experimental data. By invoking the properties of this function, it is possible to infer whether the rates of the Markov model are identifiable by our method. We present applications to Markov models well known from the literature.

  5. Parameter estimation methods for chaotic intercellular networks.

    PubMed

    Mariño, Inés P; Ullner, Ekkehard; Zaikin, Alexey

    2013-01-01

    We have investigated simulation-based techniques for parameter estimation in chaotic intercellular networks. The proposed methodology combines a synchronization-based framework for parameter estimation in coupled chaotic systems with some state-of-the-art computational inference methods borrowed from the field of computational statistics. The first method is a stochastic optimization algorithm, known as accelerated random search method, and the other two techniques are based on approximate Bayesian computation. The latter is a general methodology for non-parametric inference that can be applied to practically any system of interest. The first method based on approximate Bayesian computation is a Markov Chain Monte Carlo scheme that generates a series of random parameter realizations for which a low synchronization error is guaranteed. We show that accurate parameter estimates can be obtained by averaging over these realizations. The second ABC-based technique is a Sequential Monte Carlo scheme. The algorithm generates a sequence of "populations", i.e., sets of randomly generated parameter values, where the members of a certain population attain a synchronization error that is smaller than the error attained by members of the previous population. Again, we show that accurate estimates can be obtained by averaging over the parameter values in the last population of the sequence. We have analysed how effective these methods are from a computational perspective. For the numerical simulations we have considered a network that consists of two modified repressilators with identical parameters, coupled by the fast diffusion of the autoinducer across the cell membranes.
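
    The abstract describes ABC schemes that accept parameter draws yielding a low synchronization error. The toy sketch below is not the authors' coupled-repressilator setup; it applies plain ABC rejection to a deliberately simple one-parameter model to show the accept/reject mechanics, with a summary-statistic distance standing in for the synchronization error.

      import numpy as np

      rng = np.random.default_rng(5)

      # Toy "system": observations from a model with one unknown rate parameter.
      true_theta = 1.7
      data = rng.exponential(scale=1.0 / true_theta, size=200)

      def simulate(theta, n, rng):
          # Forward model; in the paper's setting this would be the coupled-cell simulation.
          return rng.exponential(scale=1.0 / theta, size=n)

      def distance(sim, obs):
          # Summary-statistic distance (stand-in for the synchronization error).
          return abs(sim.mean() - obs.mean())

      # ABC rejection: keep prior draws whose simulated data lie close to the observations.
      n_draws, eps = 20_000, 0.02
      prior = rng.uniform(0.1, 5.0, size=n_draws)
      accepted = [th for th in prior
                  if distance(simulate(th, data.size, rng), data) < eps]

      print(f"accepted {len(accepted)} draws; posterior mean ≈ {np.mean(accepted):.3f} "
            f"(true value {true_theta})")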

  6. A simple method to estimate interwell autocorrelation

    SciTech Connect

    Pizarro, J.O.S.; Lake, L.W.

    1997-08-01

    The estimation of autocorrelation in the lateral or interwell direction is important when performing reservoir characterization studies using stochastic modeling. This paper presents a new method to estimate the interwell autocorrelation based on parameters, such as the vertical range and the variance, that can be estimated with commonly available data. We used synthetic fields that were generated from stochastic simulations to provide data to construct the estimation charts. These charts relate the ratio of areal to vertical variance and the autocorrelation range (expressed variously) in two directions. Three different semivariogram models were considered: spherical, exponential and truncated fractal. The overall procedure is demonstrated using field data. We find that the approach gives the most self-consistent results when it is applied to previously identified facies. Moreover, the autocorrelation trends follow the depositional pattern of the reservoir, which gives confidence in the validity of the approach.

  7. Variational bayesian method of estimating variance components.

    PubMed

    Arakawa, Aisaku; Taniguchi, Masaaki; Hayashi, Takeshi; Mikawa, Satoshi

    2016-07-01

    We developed a Bayesian analysis approach by using a variational inference method, a so-called variational Bayesian method, to determine the posterior distributions of variance components. This variational Bayesian method and an alternative Bayesian method using Gibbs sampling were compared in estimating genetic and residual variance components from both simulated data and publicly available real pig data. In the simulated data set, we observed strong bias toward overestimation of genetic variance for the variational Bayesian method in the case of low heritability and low population size, and less bias was detected with larger population sizes in both methods examined. No differences in the estimates of variance components between the variational Bayesian method and Gibbs sampling were found in the real pig data. However, the posterior distributions of the variance components obtained with the variational Bayesian method had shorter tails than those obtained with the Gibbs sampling. Consequently, the posterior standard deviations of the genetic and residual variances of the variational Bayesian method were lower than those of the method using Gibbs sampling. The computing time required was much shorter with the variational Bayesian method than with the method using Gibbs sampling. © 2016 Japanese Society of Animal Science.

  8. A nonparametric method for penetrance function estimation.

    PubMed

    Alarcon, F; Bonaïti-Pellié, C; Harari-Kermadec, H

    2009-01-01

    In diseases caused by a deleterious gene mutation, knowledge of age-specific cumulative risks is necessary for medical management of mutation carriers. When pedigrees are ascertained through at least one affected individual, ascertainment bias can be corrected by using a parametric method such as the Proband's phenotype Exclusion Likelihood, or PEL, that uses a survival analysis approach based on the Weibull model. This paper proposes a nonparametric method for penetrance function estimation that corrects for ascertainment on at least one affected: the Index Discarding EuclideAn Likelihood or IDEAL. IDEAL is compared with PEL, using family samples simulated from a Weibull distribution and under alternative models. We show that, under Weibull assumption and asymptotic conditions, IDEAL and PEL both provide unbiased risk estimates. However, when the true risk function deviates from a Weibull distribution, we show that the PEL might provide biased estimates while IDEAL remains unbiased.
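
    As a small hedged aside on the Weibull-based survival approach mentioned above, the cumulative risk (penetrance) by a given age under a Weibull model can be computed as in this sketch; the shape and scale values are invented.

      import numpy as np

      def weibull_penetrance(age, shape, scale):
          # Cumulative risk (penetrance) by a given age under a Weibull model:
          #   F(t) = 1 - exp(-(t / scale) ** shape)
          return 1.0 - np.exp(-(np.asarray(age, dtype=float) / scale) ** shape)

      # Hypothetical parameters: roughly 50% cumulative risk by age 60.
      ages = np.array([30, 40, 50, 60, 70, 80])
      risks = np.round(weibull_penetrance(ages, shape=3.0, scale=65.0), 2)
      print(dict(zip(ages.tolist(), risks.tolist())))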

  9. Cost estimating methods for advanced space systems

    NASA Technical Reports Server (NTRS)

    Cyr, Kelley

    1988-01-01

    The development of parametric cost estimating methods for advanced space systems in the conceptual design phase is discussed. The process of identifying variables which drive cost and the relationship between weight and cost are discussed. A theoretical model of cost is developed and tested using a historical data base of research and development projects.

  10. Karatsuba's method for estimating Kloosterman sums

    NASA Astrophysics Data System (ADS)

    Korolev, M. A.

    2016-08-01

    Using Karatsuba's method, we obtain estimates for Kloosterman sums modulo a prime, in which the number of terms is less than an arbitrarily small fixed power of the modulus. These bounds refine similar results obtained earlier by Bourgain and Garaev. Bibliography: 16 titles.

  11. A Flexible Method of Estimating Luminosity Functions

    NASA Astrophysics Data System (ADS)

    Kelly, Brandon C.; Fan, Xiaohui; Vestergaard, Marianne

    2008-08-01

    We describe a Bayesian approach to estimating luminosity functions. We derive the likelihood function and posterior probability distribution for the luminosity function, given the observed data, and we compare the Bayesian approach with maximum likelihood by simulating sources from a Schechter function. For our simulations, confidence intervals derived from bootstrapping the maximum likelihood estimate can be too narrow, while confidence intervals derived from the Bayesian approach are valid. We develop our statistical approach for a flexible model where the luminosity function is modeled as a mixture of Gaussian functions. Statistical inference is performed using Markov chain Monte Carlo (MCMC) methods, and we describe a Metropolis-Hastings algorithm to perform the MCMC. The MCMC simulates random draws from the probability distribution of the luminosity function parameters, given the data, and we use a simulated data set to show how these random draws may be used to estimate the probability distribution for the luminosity function. In addition, we show how the MCMC output may be used to estimate the probability distribution of any quantities derived from the luminosity function, such as the peak in the space density of quasars. The Bayesian method we develop has the advantage that it is able to place accurate constraints on the luminosity function even beyond the survey detection limits, and that it provides a natural way of estimating the probability distribution of any quantities derived from the luminosity function, including those that rely on information beyond the survey detection limits.
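
    A compact, hedged sketch of a random-walk Metropolis-Hastings sampler of the kind mentioned is shown below, applied to a deliberately simple one-parameter posterior rather than the paper's Gaussian-mixture luminosity function model.

      import numpy as np

      rng = np.random.default_rng(6)

      # Toy data and log-posterior: unknown mean of a Gaussian with known sigma, flat prior.
      data = rng.normal(loc=2.0, scale=1.0, size=50)

      def log_post(mu):
          return -0.5 * np.sum((data - mu) ** 2)   # up to an additive constant

      # Random-walk Metropolis-Hastings.
      n_iter, step = 10_000, 0.5
      chain = np.empty(n_iter)
      mu, lp = 0.0, log_post(0.0)
      for i in range(n_iter):
          prop = mu + step * rng.normal()
          lp_prop = log_post(prop)
          if np.log(rng.random()) < lp_prop - lp:   # accept with prob min(1, ratio)
              mu, lp = prop, lp_prop
          chain[i] = mu

      burned = chain[2000:]
      print("posterior mean:", burned.mean(), " 95% interval:",
            np.percentile(burned, [2.5, 97.5]))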

  12. Estimation of typhoon rainfall in GaoPing River: A Multivariate Maximum Entropy Method

    NASA Astrophysics Data System (ADS)

    Pei-Jui, Wu; Hwa-Lung, Yu

    2016-04-01

    Heavy rainfall from typhoons is the main driver of natural disasters in Taiwan, causing significant loss of human lives and property. On average, 3.5 typhoons strike Taiwan every year, and Typhoon Morakot in 2009 was among the most severe on record. Because the duration, path, and intensity of a typhoon affect the temporal and spatial rainfall pattern in a given region, identifying characteristic typhoon rainfall types is advantageous when estimating rainfall amounts. This study develops a rainfall prediction model in three parts. First, the extended empirical orthogonal function (EEOF) method is used to classify typhoon events, decomposing the standardized rainfall patterns of all stations for each event into EOFs and principal components (PCs), so that events that vary similarly in time and space are grouped into the same typhoon type. Next, according to this classification, probability density functions (PDFs) are constructed in space and time by applying the multivariate maximum entropy method to the first through fourth statistical moments, giving the probability at each station and each time. Finally, the Bayesian Maximum Entropy (BME) method is used to build the typhoon rainfall prediction model and to estimate rainfall for the GaoPing River, located in southern Taiwan. This study could be useful for future typhoon rainfall prediction and for government typhoon disaster prevention.

  13. A statistical method to estimate outflow volume in case of levee breach due to overtopping

    NASA Astrophysics Data System (ADS)

    Brandimarte, Luigia; Martina, Mario; Dottori, Francesco; Mazzoleni, Maurizio

    2015-04-01

    The aim of this study is to propose a statistical method to assess the outflowing water volume through a levee breach, due to overtopping, for three different grass cover qualities. The first step in the proposed methodology is the definition of the reliability function, that is, the relation between loading and resistance conditions on the levee system, in case of overtopping. Secondly, the fragility curve, which relates the probability of failure to the loading condition on the levee system, is estimated after defining the stochastic variables in the reliability function. Different fragility curves are thus assessed for different scenarios of grass cover quality. Then, a levee breach model is implemented and combined with a 1D hydrodynamic model in order to assess the outflow hydrograph given the water level in the main channel and stochastic values of the breach width. Finally, the water volume is estimated as a combination of the probability density functions of the breach width and levee failure. The case study is located in the 98 km braided reach of the Po River, Italy, between the cross-sections of Cremona and Borgoforte. The analysis showed how different countermeasures, in this case different grass cover qualities, can reduce the probability of failure of the levee system. In particular, for a given breach width, good levee cover quality can significantly reduce the outflowing water volume compared to bad cover quality, inducing a consequently lower flood risk within the flood-prone area.

  14. An evaluation of the assumed beta probability density function subgrid-scale model for large eddy simulation of nonpremixed, turbulent combustion with heat release

    SciTech Connect

    Wall, Clifton; Boersma, Bendiks Jan; Moin, Parviz

    2000-10-01

    The assumed beta distribution model for the subgrid-scale probability density function (PDF) of the mixture fraction in large eddy simulation of nonpremixed, turbulent combustion is tested, a priori, for a reacting jet having significant heat release (density ratio of 5). The assumed beta distribution is tested as a model for both the subgrid-scale PDF and the subgrid-scale Favre PDF of the mixture fraction. The beta model is successful in approximating both types of PDF but is slightly more accurate in approximating the normal (non-Favre) PDF. To estimate the subgrid-scale variance of mixture fraction, which is required by the beta model, both a scale similarity model and a dynamic model are used. Predictions using the dynamic model are found to be more accurate. The beta model is used to predict the filtered value of a function chosen to resemble the reaction rate. When no model is used, errors in the predicted value are of the same order as the actual value. The beta model is found to reduce this error by about a factor of two, providing a significant improvement. (c) 2000 American Institute of Physics.
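
    The following hedged sketch shows the general operation being tested: given a filtered mixture-fraction mean and an estimated subgrid variance, build a beta PDF by the method of moments and integrate a nonlinear function against it to obtain its filtered value. The moments and the surrogate reaction-rate-like function are invented for illustration.

      import numpy as np
      from scipy import stats

      def beta_params(mean, var):
          # Method-of-moments beta parameters for the subgrid mixture-fraction PDF.
          common = mean * (1.0 - mean) / var - 1.0
          return mean * common, (1.0 - mean) * common

      def filtered_value(func, mean, var, n=2001):
          # Filtered value of func(Z): integral of func(Z) * P_beta(Z; mean, var) dZ.
          a, b = beta_params(mean, var)
          z = np.linspace(1e-6, 1.0 - 1e-6, n)
          return np.trapz(func(z) * stats.beta.pdf(z, a, b), z)

      # Surrogate for a reaction-rate-like function peaking near a "stoichiometric" Z.
      z_st = 0.3
      rate_like = lambda z: np.exp(-((z - z_st) / 0.05) ** 2)

      mean_z, var_z = 0.25, 0.01                 # assumed filtered mean and subgrid variance
      print("no-model estimate  rate(mean_Z):", rate_like(mean_z))
      print("beta-PDF filtered value        :", filtered_value(rate_like, mean_z, var_z))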

  15. A computerized method to estimate friction coefficient from orientation distribution of meso-scale faults

    NASA Astrophysics Data System (ADS)

    Sato, Katsushi

    2016-08-01

    The friction coefficient controls the brittle strength of the Earth's crust for deformation recorded by faults. This study proposes a computerized method to determine the friction coefficient of meso-scale faults. The method is based on the analysis of the orientation distribution of faults, together with the principal stress axes and the stress ratio calculated by a stress tensor inversion technique. The method assumes that faults are activated according to the cohesionless Coulomb failure criterion, where fluctuations of fluid pressure and the magnitude of differential stress are assumed to induce faulting. In this case, the orientation distribution of fault planes is described by a probability density function that is visualized as linear contours on a Mohr diagram. The parametric optimization of the function for an observed fault population yields the friction coefficient. A test using an artificial fault-slip dataset successfully determines the internal friction angle (the arctangent of the friction coefficient) with a confidence interval of several degrees estimated by the bootstrap resampling technique. An application to natural faults cutting a Pleistocene forearc basin fill yields a friction coefficient of around 0.7, consistent with the value experimentally predicted by Byerlee's law.

  16. Cost estimating methods for advanced space systems

    NASA Technical Reports Server (NTRS)

    Cyr, Kelley

    1988-01-01

    Parametric cost estimating methods for space systems in the conceptual design phase are developed. The approach is to identify variables that drive cost such as weight, quantity, development culture, design inheritance, and time. The relationship between weight and cost is examined in detail. A theoretical model of cost is developed and tested statistically against a historical data base of major research and development programs. It is concluded that the technique presented is sound, but that it must be refined in order to produce acceptable cost estimates.

  17. Implicit solvent methods for free energy estimation

    PubMed Central

    Decherchi, Sergio; Masetti, Matteo; Vyalov, Ivan; Rocchia, Walter

    2014-01-01

    Solvation makes a fundamental contribution to many biological processes, especially molecular binding. Its estimation can be performed by means of several computational approaches. The aim of this review is to give an overview of existing theories and methods to estimate solvent effects, with a specific focus on the category of implicit solvent models and their use in Molecular Dynamics. In many of these models, the solvent is considered as a homogeneous continuum medium, while the solute can be represented at atomic detail and at different levels of theory. Despite their degree of approximation, implicit methods are still widely employed due to their trade-off between accuracy and efficiency. Their derivation is rooted in statistical mechanics and integral equation theory, some of the related details being provided here. Finally, methods that combine implicit solvent models and molecular dynamics simulation are briefly described. PMID:25193298

  18. Parameter Estimation Methods for Chaotic Intercellular Networks

    PubMed Central

    Mariño, Inés P.; Ullner, Ekkehard; Zaikin, Alexey

    2013-01-01

    We have investigated simulation-based techniques for parameter estimation in chaotic intercellular networks. The proposed methodology combines a synchronization-based framework for parameter estimation in coupled chaotic systems with some state-of-the-art computational inference methods borrowed from the field of computational statistics. The first method is a stochastic optimization algorithm, known as accelerated random search method, and the other two techniques are based on approximate Bayesian computation. The latter is a general methodology for non-parametric inference that can be applied to practically any system of interest. The first method based on approximate Bayesian computation is a Markov Chain Monte Carlo scheme that generates a series of random parameter realizations for which a low synchronization error is guaranteed. We show that accurate parameter estimates can be obtained by averaging over these realizations. The second ABC-based technique is a Sequential Monte Carlo scheme. The algorithm generates a sequence of “populations”, i.e., sets of randomly generated parameter values, where the members of a certain population attain a synchronization error that is smaller than the error attained by members of the previous population. Again, we show that accurate estimates can be obtained by averaging over the parameter values in the last population of the sequence. We have analysed how effective these methods are from a computational perspective. For the numerical simulations we have considered a network that consists of two modified repressilators with identical parameters, coupled by the fast diffusion of the autoinducer across the cell membranes. PMID:24282513

  19. Clustering method for estimating principal diffusion directions

    PubMed Central

    Nazem-Zadeh, Mohammad-Reza; Jafari-Khouzani, Kourosh; Davoodi-Bojd, Esmaeil; Jiang, Quan; Soltanian-Zadeh, Hamid

    2012-01-01

    Diffusion tensor magnetic resonance imaging (DTMRI) is a non-invasive tool for the investigation of white matter structure within the brain. However, the traditional tensor model is unable to characterize anisotropies of orders higher than two in heterogeneous areas containing more than one fiber population. To resolve this issue, high angular resolution diffusion imaging (HARDI) with a large number of diffusion encoding gradients is used along with reconstruction methods such as Q-ball. Using HARDI data, the fiber orientation distribution function (ODF) on the unit sphere is calculated and used to extract the principal diffusion directions (PDDs). Fast and accurate estimation of PDDs is a prerequisite for tracking algorithms that deal with fiber crossings. In this paper, the PDDs are defined as the directions around which the ODF data is concentrated. Estimates of the PDDs based on this definition are less sensitive to noise in comparison with the previous approaches. A clustering approach to estimate the PDDs is proposed, which is an extension of fuzzy c-means clustering developed for the orientation of points on a sphere. The minimum description length (MDL) principle is used to estimate the number of PDDs. Using both simulated and real diffusion data, the proposed method has been evaluated and compared with some previous protocols. Experimental results show that the proposed clustering algorithm is more accurate, more resistant to noise, and faster than some of the techniques currently in use. PMID:21642005

  20. A method for estimating soil moisture availability

    NASA Technical Reports Server (NTRS)

    Carlson, T. N.

    1985-01-01

    A method for estimating values of soil moisture based on measurements of infrared surface temperature is discussed. A central element in the method is a boundary layer model. Although it has been shown that soil moistures determined by this method using satellite measurements do correspond in a coarse fashion to the antecedent precipitation, the accuracy and exact physical interpretation (with respect to ground water amounts) are not well known. This area of ignorance, which currently impedes the practical application of the method to problems in hydrology, meteorology and agriculture, is largely due to the absence of corresponding surface measurements. Preliminary field measurements made over France have led to the development of a promising vegetation formulation (Taconet et al., 1985), which has been incorporated in the model. It is necessary, however, to test the vegetation component, and the entire method, over a wide variety of surface conditions and crop canopies.

  1. Child survivorship estimation: methods and data analysis.

    PubMed

    Feeney, G

    1991-01-01

    "The past 20 years have seen extensive elaboration, refinement, and application of the original Brass method for estimating infant and child mortality from child survivorship data. This experience has confirmed the overall usefulness of the methods beyond question, but it has also shown that...estimates must be analyzed in relation to other relevant information before useful conclusions about the level and trend of mortality can be drawn.... This article aims to illustrate the importance of data analysis through a series of examples, including data for the Eastern Malaysian state of Sarawak, Mexico, Thailand, and Indonesia. Specific maneuvers include plotting completed parity distributions and 'time-plotting' mean numbers of children ever born from successive censuses. A substantive conclusion of general interest is that data for older women are not so widely defective as generally supposed."

  2. Unit-Sphere Anisotropic Multiaxial Stochastic-Strength Model Probability Density Distribution for the Orientation of Critical Flaws

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel

    2013-01-01

    Models that predict the failure probability of monolithic glass and ceramic components under multiaxial loading have been developed by authors such as Batdorf, Evans, and Matsuo. These "unit-sphere" failure models assume that the strength-controlling flaws are randomly oriented, noninteracting planar microcracks of specified geometry but of variable size. This report develops a formulation to describe the probability density distribution of the orientation of critical strength-controlling flaws that results from an applied load. This distribution is a function of the multiaxial stress state, the shear sensitivity of the flaws, the Weibull modulus, and the strength anisotropy. Examples are provided showing the predicted response on the unit sphere for various stress states for isotropic and transversely isotropic (anisotropic) materials--including the most probable orientation of critical flaws for offset uniaxial loads with strength anisotropy. The author anticipates that this information could be used to determine anisotropic stiffness degradation or anisotropic damage evolution for individual brittle (or quasi-brittle) composite material constituents within finite element or micromechanics-based software

  3. Fusing probability density function into Dempster-Shafer theory of evidence for the evaluation of water treatment plant.

    PubMed

    Chowdhury, Shakhawat

    2013-05-01

    The evaluation of the status of a municipal drinking water treatment plant (WTP) is important. The evaluation depends on several factors, including, human health risks from disinfection by-products (R), disinfection performance (D), and cost (C) of water production and distribution. The Dempster-Shafer theory (DST) of evidence can combine the individual status with respect to R, D, and C to generate a new indicator, from which the overall status of a WTP can be evaluated. In the DST, the ranges of different factors affecting the overall status are divided into several segments. The basic probability assignments (BPA) for each segment of these factors are provided by multiple experts, which are then combined to obtain the overall status. In assigning the BPA, the experts use their individual judgments, which can impart subjective biases in the overall evaluation. In this research, an approach has been introduced to avoid the assignment of subjective BPA. The factors contributing to the overall status were characterized using the probability density functions (PDF). The cumulative probabilities for different segments of these factors were determined from the cumulative density function, which were then assigned as the BPA for these factors. A case study is presented to demonstrate the application of PDF in DST to evaluate a WTP, leading to the selection of the required level of upgradation for the WTP.
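
    To illustrate the combination mechanics being discussed, here is a minimal, hedged sketch of Dempster's rule combining two basic probability assignments over a small frame of discernment; the segment labels and mass values are invented, and the step of deriving BPAs from the factor PDFs is reduced to fixed numbers.

      from itertools import product

      def dempster_combine(m1, m2):
          # Dempster's rule: combine two BPAs given as {frozenset: mass} dictionaries.
          combined, conflict = {}, 0.0
          for (a, wa), (b, wb) in product(m1.items(), m2.items()):
              inter = a & b
              if inter:
                  combined[inter] = combined.get(inter, 0.0) + wa * wb
              else:
                  conflict += wa * wb
          return {k: v / (1.0 - conflict) for k, v in combined.items()}, conflict

      # Hypothetical frame of discernment for overall WTP status.
      GOOD, FAIR, POOR = "good", "fair", "poor"
      theta = frozenset({GOOD, FAIR, POOR})

      # Invented BPAs (in the paper these would come from the factor PDFs, not expert judgment).
      m_risk = {frozenset({GOOD}): 0.5, frozenset({FAIR, POOR}): 0.3, theta: 0.2}
      m_cost = {frozenset({GOOD, FAIR}): 0.6, frozenset({POOR}): 0.1, theta: 0.3}

      m_combined, conflict = dempster_combine(m_risk, m_cost)
      for focal, mass in sorted(m_combined.items(), key=lambda kv: -kv[1]):
          print(set(focal), round(mass, 3))
      print("conflict mass K =", round(conflict, 3))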

  4. Evaluation of Presumed Probability-Density-Function Models in Non-Premixed Flames by using Large Eddy Simulation

    NASA Astrophysics Data System (ADS)

    Cao, Hong-Jun; Zhang, Hui-Qiang; Lin, Wen-Yi

    2012-05-01

    Four kinds of presumed probability-density-function (PDF) models for non-premixed turbulent combustion are evaluated in flames with various stoichiometric mixture fractions by using large eddy simulation (LES). The LES code is validated against the experimental data of a classical turbulent jet flame (Sandia flame D). The mean and rms temperatures obtained by the presumed PDF models are compared with the LES results. The β-function model achieves good predictions across the different flames. The rms temperature predicted by the double-δ function model is very small and unphysical in the vicinity of the maximum mean temperature. The clipped-Gaussian model and the multi-δ function model give worse predictions on the extremely fuel-rich or fuel-lean sides due to clipping at the boundaries of the mixture fraction space. The results also show that the overall prediction performance of the presumed PDF models is better at intermediate stoichiometric mixture fractions than at very small or very large ones.

  5. A comparison of Monte Carlo-based Bayesian parameter estimation methods for stochastic models of genetic networks

    PubMed Central

    Zaikin, Alexey; Míguez, Joaquín

    2017-01-01

    We compare three state-of-the-art Bayesian inference methods for the estimation of the unknown parameters in a stochastic model of a genetic network. In particular, we introduce a stochastic version of the paradigmatic synthetic multicellular clock model proposed by Ullner et al., 2007. By introducing dynamical noise in the model and assuming that the partial observations of the system are contaminated by additive noise, we enable a principled mechanism to represent experimental uncertainties in the synthesis of the multicellular system and pave the way for the design of probabilistic methods for the estimation of any unknowns in the model. Within this setup, we tackle the Bayesian estimation of a subset of the model parameters. Specifically, we compare three Monte Carlo based numerical methods for the approximation of the posterior probability density function of the unknown parameters given a set of partial and noisy observations of the system. The schemes we assess are the particle Metropolis-Hastings (PMH) algorithm, the nonlinear population Monte Carlo (NPMC) method and the approximate Bayesian computation sequential Monte Carlo (ABC-SMC) scheme. We present an extensive numerical simulation study, which shows that while the three techniques can effectively solve the problem there are significant differences both in estimation accuracy and computational efficiency. PMID:28797087

  6. A comparison of Monte Carlo-based Bayesian parameter estimation methods for stochastic models of genetic networks.

    PubMed

    Mariño, Inés P; Zaikin, Alexey; Míguez, Joaquín

    2017-01-01

    We compare three state-of-the-art Bayesian inference methods for the estimation of the unknown parameters in a stochastic model of a genetic network. In particular, we introduce a stochastic version of the paradigmatic synthetic multicellular clock model proposed by Ullner et al., 2007. By introducing dynamical noise in the model and assuming that the partial observations of the system are contaminated by additive noise, we enable a principled mechanism to represent experimental uncertainties in the synthesis of the multicellular system and pave the way for the design of probabilistic methods for the estimation of any unknowns in the model. Within this setup, we tackle the Bayesian estimation of a subset of the model parameters. Specifically, we compare three Monte Carlo based numerical methods for the approximation of the posterior probability density function of the unknown parameters given a set of partial and noisy observations of the system. The schemes we assess are the particle Metropolis-Hastings (PMH) algorithm, the nonlinear population Monte Carlo (NPMC) method and the approximate Bayesian computation sequential Monte Carlo (ABC-SMC) scheme. We present an extensive numerical simulation study, which shows that while the three techniques can effectively solve the problem there are significant differences both in estimation accuracy and computational efficiency.

  7. Comparative yield estimation via shock hydrodynamic methods

    SciTech Connect

    Attia, A.V.; Moran, B.; Glenn, L.A.

    1991-06-01

    Shock time-of-arrival (TOA) data (CORRTEX) from recent underground nuclear explosions in saturated tuff were used to estimate yield via the simulated explosion-scaling method. The sensitivity of the derived yield to uncertainties in the measured shock Hugoniot, release adiabats, and gas porosity is the main focus of this paper. In this method for determining yield, we assume a point-source explosion in an infinite homogeneous material. The rock model is formulated from laboratory experiments on core samples taken prior to the explosion. Results show that increasing gas porosity from 0% to 2% causes a 15% increase in yield per ms/kt^(1/3). 6 refs., 4 figs.

  8. Cancer incidence estimation method: an Apulian experience.

    PubMed

    Nannavecchia, Anna M; Rashid, Ivan; Cuccaro, Francesco; Chieti, Antonio; Bruno, Danila; Burgio Lo Monaco, Maria G; Tanzarella, Cinzia; Bisceglia, Lucia

    2017-09-01

    The Cancer Registry of Puglia (RTP) was instituted in 2008 as a regional population-based cancer registry. It consists of six sections (Foggia, Barletta-Andria-Tran, Bari, Brindisi, Lecce, and Taranto) and covers more than 4 000 000 inhabitants. At present, four of six sections have received accreditation by AIRTUM (53% of regional population). To capture possible regional geographic variability in cancer incidence and to support health services planning, we developed an original estimation method to ensure complete territorial coverage. Incidence data from the four accredited RTP sections for the shared incidence period 2006-2008, regional hospitalization data for 2001-2009, and mortality data for 2006-2009 were considered. To take into account specific health features of different provinces, we estimated cancer incidence rates for the nonaccredited sections using a combination of the accredited sections' rates and a factor that combines mortality and hospitalization ratios available for all sections. Finally, we validated the method and applied it to estimate regional cancer rates as the population-weighted average of the accredited sections' rates and the nonaccredited sections' adjusted rates. The validation process shows that the estimated rates are close to real incidence data. The most frequent neoplasms in Apulia are breast (direct standardized rate 96.8 per 100 000 inhabitants), colon-rectum (36.6), and thyroid cancer (25.3) in women and prostate (70.2), lung (68.4), and colon-rectum cancer (52.2) in men. This method could be useful to assess cancer incidence when complete cancer registration data are not available, but hospitalization, mortality, and neighbouring incidence data are available.

  9. Cost estimating methods for advanced space systems

    NASA Technical Reports Server (NTRS)

    Cyr, Kelley

    1994-01-01

    NASA is responsible for developing much of the nation's future space technology. Cost estimates for new programs are required early in the planning process so that decisions can be made accurately. Because of the long lead times required to develop space hardware, the cost estimates are frequently required 10 to 15 years before the program delivers hardware. The system design in conceptual phases of a program is usually only vaguely defined and the technology used is so often state-of-the-art or beyond. These factors combine to make cost estimating for conceptual programs very challenging. This paper describes an effort to develop parametric cost estimating methods for space systems in the conceptual design phase. The approach is to identify variables that drive cost such as weight, quantity, development culture, design inheritance and time. The nature of the relationships between the driver variables and cost will be discussed. In particular, the relationship between weight and cost will be examined in detail. A theoretical model of cost will be developed and tested statistically against a historical database of major research and development projects.

  10. Advancing methods for global crop area estimation

    NASA Astrophysics Data System (ADS)

    King, M. L.; Hansen, M.; Adusei, B.; Stehman, S. V.; Becker-Reshef, I.; Ernst, C.; Noel, J.

    2012-12-01

    Cropland area estimation is a challenge, made difficult by the variety of cropping systems, including crop types, management practices, and field sizes. A MODIS-derived indicator mapping product (1) developed from 16-day MODIS composites has been used to target crop type at national scales for the stratified sampling (2) of higher spatial resolution data, providing a standardized approach to estimating cultivated area. A global prototype is being developed using soybean, a global commodity crop with recent land cover and land use change (LCLUC) dynamics and a relatively unambiguous spectral signature, for the United States, Argentina, Brazil, and China, which together represent nearly ninety percent of soybean production. Supervised classification of soy cultivated area is performed for 40 km² sample blocks using time-series Landsat imagery. This method, given appropriate data for representative sampling with higher spatial resolution, represents an efficient and accurate approach for large area crop type estimation. Results for the United States sample blocks have exhibited strong agreement with the National Agricultural Statistics Service's (NASS's) Cropland Data Layer (CDL). A confusion matrix showed a 91.56% agreement and a kappa of 0.67 between the two products. Field measurements and RapidEye imagery have been collected for the USA, Brazil, and Argentina to further assess product accuracies. The results of this research will demonstrate the value of MODIS crop type indicator products and Landsat sample data in estimating soybean cultivated area at national scales, enabling an internally consistent global assessment of annual soybean production.
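
    As a small hedged sketch of the agreement statistics quoted above (the pixel counts below are invented, not the study's confusion matrix), overall agreement and Cohen's kappa can be computed from a two-class crop/non-crop confusion matrix as follows.

      import numpy as np

      def cohens_kappa(confusion):
          # Cohen's kappa from a square confusion matrix of pixel counts.
          confusion = np.asarray(confusion, dtype=float)
          total = confusion.sum()
          p_observed = np.trace(confusion) / total
          p_expected = np.sum(confusion.sum(axis=0) * confusion.sum(axis=1)) / total ** 2
          return (p_observed - p_expected) / (1.0 - p_expected)

      # Invented soy / non-soy pixel counts (rows: classification, columns: reference).
      confusion = [[420, 55],
                   [40, 485]]
      print("overall agreement:", np.trace(np.asarray(confusion)) / np.sum(confusion))
      print("Cohen's kappa    :", round(cohens_kappa(confusion), 3))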

  11. Probability density function treatment of turbulence/chemistry interactions during the ignition of a temperature-stratified mixture for application to HCCI engine modeling

    SciTech Connect

    Bisetti, Fabrizio; Chen, J.-Y.; Hawkes, Evatt R.; Chen, Jacqueline H.

    2008-12-15

    Homogeneous charge compression ignition (HCCI) engine technology promises to reduce NOx and soot emissions while achieving high thermal efficiency. Temperature and mixture stratification are regarded as effective means of controlling the start of combustion and reducing the abrupt pressure rise at high loads. Probability density function methods are currently being pursued as a viable approach to modeling the effects of turbulent mixing and mixture stratification on HCCI ignition. In this paper we present an assessment of the merits of three widely used mixing models in reproducing the moments of reactive scalars during the ignition of a lean hydrogen/air mixture (φ = 0.1, p = 41 atm, and T = 1070 K) under increasing temperature stratification and subject to decaying turbulence. The results from the solution of the evolution equation for a spatially homogeneous joint PDF of the reactive scalars are compared with available direct numerical simulation (DNS) data [E.R. Hawkes, R. Sankaran, P.P. Pebay, J.H. Chen, Combust. Flame 145 (1-2) (2006) 145-159]. The mixing models are found to quantitatively reproduce the time history of the heat release rate, the first and second moments of temperature, and the hydroxyl radical mass fraction from the DNS results. Most importantly, the dependence of the heat release rate on the extent of the initial temperature stratification in the charge is also well captured. (author)

  12. Collisional energy transfer probability densities P(E, J; E', J') for monatomics colliding with large molecules.

    PubMed

    Barker, John R; Weston, Ralph E

    2010-10-07

    Collisional energy transfer remains an important area of uncertainty in master equation simulations. Quasi-classical trajectory (QCT) calculations were used to examine the energy transfer probability density distribution (energy transfer kernel), which depends on translational temperature, on the nature of the collision partners, and on the initial and final total internal energies and angular momenta: P(E, J; E', J'). For this purpose, model potential energy functions were taken from the literature or were formulated for pyrazine + Ar and for ethane + Ar collisions. For each collision pair, batches of 10^5 trajectories were computed with three selected initial vibrational energies and five selected values for initial total angular momentum. Most trajectories were carried out with relative translational energy distributions at 300 K, but some were carried out at 1000 or 1200 K. In addition, some trajectories were computed for artificially "heavy" ethane, in which the H-atoms were assigned masses of 20 amu. The results were binned according to (ΔE, ΔJ), and a least-squares analysis was carried out by omitting the quasi-elastic trajectories from consideration. By trial-and-error, an empirical function was identified that fitted all 45 batches of trajectories with moderate accuracy. The results reveal significant correlations between initial and final energies and angular momenta. In particular, a strong correlation between ΔE and ΔJ depends on the smallest rotational constant in the excited polyatomic. These results show that the final rotational energy distribution is not independent of the initial distribution, showing that the plausible simplifying assumption described by Smith and Gilbert [Int. J. Chem. Kinet. 1988, 20, 307-329] and extended by Miller, Klippenstein, and Raffy [J. Phys. Chem. A 2002, 106, 4904-4913] is invalid for the systems studied.

  13. A method of estimating physician requirements.

    PubMed

    Scitovsky, A A; McCall, N

    1976-01-01

    This article describes and applies a method of estimating physician requirements for the United States based on physician utilization rates of members of two comprehensive prepaid plans of medical care providing first-dollar coverage for practically all physician services. The plan members' physician utilization rates by age and sex and by field of specialty of the physician were extrapolated to the entire population of the United States. On the basis of data for 1966, it was found that 34 percent more physicians than were available would have been required to give the entire population the amount and type of care received by the plan members. The "shortage" of primary care physicians (general practice, internal medicine, and pediatrics combined) was found to be considerably greater than of physicians in the surgical specialties taken together (41 percent as compared to 21 percent). The paper discusses in detail the various assumptions underlying this method and stresses the need for careful evaluation of all methods of estimating physician requirements.

  14. On the use of the noncentral chi-square density function for the distribution of helicopter spectral estimates

    NASA Technical Reports Server (NTRS)

    Garber, Donald P.

    1993-01-01

    A probability density function for the variability of ensemble averaged spectral estimates from helicopter acoustic signals in Gaussian background noise was evaluated. Numerical methods for calculating the density function and for determining confidence limits were explored. Density functions were predicted for both synthesized and experimental data and compared with observed spectral estimate variability.
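
    A hedged sketch of evaluating a noncentral chi-square density and confidence limits for an averaged spectral estimate is given below using scipy's ncx2; the degrees of freedom, noncentrality, and scaling are placeholder assumptions rather than the values used in the report.

      import numpy as np
      from scipy import stats

      # Assumed parameters: 2K degrees of freedom for K averaged complex spectral bins,
      # noncentrality set by an assumed tone-to-noise power ratio in that bin.
      K = 16
      dof = 2 * K
      snr_per_bin = 3.0
      nc = dof * snr_per_bin          # noncentrality parameter (illustrative choice)

      dist = stats.ncx2(df=dof, nc=nc)

      # Density over a range of normalized spectral-estimate values.
      x = np.linspace(0, dist.ppf(0.999), 5)
      print("pdf samples:", dist.pdf(x).round(5))

      # Two-sided 95% confidence limits on the normalized estimate.
      lo, hi = dist.ppf([0.025, 0.975])
      print(f"95% interval of the normalized estimate: [{lo:.1f}, {hi:.1f}]")
      print("mean of distribution:", dist.mean())   # equals dof + nc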

  15. Demographic estimation methods for plants with dormancy

    USGS Publications Warehouse

    Kery, M.; Gregg, K.B.

    2004-01-01

    Demographic studies in plants appear simple because unlike animals, plants do not run away. Plant individuals can be marked with, e.g., plastic tags, but often the coordinates of an individual may be sufficient to identify it. Vascular plants in temperate latitudes have a pronounced seasonal life cycle, so most plant demographers survey their study plots once a year, often during or shortly after flowering. Life-states are pervasive in plants, hence the results of a demographic study for an individual can be summarized in a familiar encounter history, such as 0VFVVF000. A zero means that an individual was not seen in a year and a letter denotes its state for years when it was seen aboveground. V and F here stand for vegetative and flowering states, respectively. Probabilities of survival and state transitions can then be obtained by mere counting. Problems arise when there is an unobservable dormant state, i.e., when plants may stay belowground for one or more growing seasons. Encounter histories such as 0VF00F000 may then occur where the meaning of zeroes becomes ambiguous. A zero can either mean a dead or a dormant plant. Various ad hoc methods in wide use among plant ecologists have made strong assumptions about when a zero should be equated to a dormant individual. These methods have never been compared among each other. In our talk and in Kéry et al. (submitted), we show that these ad hoc estimators provide spurious estimates of survival and should not be used. In contrast, if detection probabilities for aboveground plants are known or can be estimated, capture-recapture (CR) models can be used to estimate probabilities of survival and state transitions and the fraction of the population that is dormant. We have used this approach in two studies of terrestrial orchids, Cleistes bifaria (Kéry et al., submitted) and Cypripedium reginae (Kéry & Gregg, submitted) in West Virginia, U.S.A. For Cleistes, our data comprised one population with a total of 620
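
    The "mere counting" step for fully observable histories can be sketched as below with hypothetical encounter histories; this deliberately ignores the dormancy and detection problem that the capture-recapture models discussed here are introduced to solve.

      from collections import Counter

      # Hypothetical encounter histories: 0 = not seen, V = vegetative, F = flowering.
      histories = ["VVFVF", "VF0FF", "FVVV0", "VVVVV", "0VFFV"]

      transitions = Counter()
      for h in histories:
          for a, b in zip(h, h[1:]):
              if a != "0":                      # only transitions out of an observed state
                  transitions[(a, b)] += 1

      # Naive state-transition "probabilities" by counting (treating 0 as death/unseen).
      for state in "VF":
          total = sum(n for (a, _), n in transitions.items() if a == state)
          for (a, b), n in sorted(transitions.items()):
              if a == state:
                  print(f"P({a} -> {b}) ≈ {n}/{total} = {n / total:.2f}")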

  16. Prediction of carbohydrate binding sites on protein surfaces with 3-dimensional probability density distributions of interacting atoms.

    PubMed

    Tsai, Keng-Chang; Jian, Jhih-Wei; Yang, Ei-Wen; Hsu, Po-Chiang; Peng, Hung-Pin; Chen, Ching-Tai; Chen, Jun-Bo; Chang, Jeng-Yih; Hsu, Wen-Lian; Yang, An-Suei

    2012-01-01

    Non-covalent protein-carbohydrate interactions mediate molecular targeting in many biological processes. Prediction of non-covalent carbohydrate binding sites on protein surfaces not only provides insights into the functions of the query proteins; information on key carbohydrate-binding residues could suggest site-directed mutagenesis experiments, design therapeutics targeting carbohydrate-binding proteins, and provide guidance in engineering protein-carbohydrate interactions. In this work, we show that non-covalent carbohydrate binding sites on protein surfaces can be predicted with relatively high accuracy when the query protein structures are known. The prediction capabilities were based on a novel encoding scheme of the three-dimensional probability density maps describing the distributions of 36 non-covalent interacting atom types around protein surfaces. One machine learning model was trained for each of the 30 protein atom types. The machine learning algorithms predicted tentative carbohydrate binding sites on query proteins by recognizing the characteristic interacting atom distribution patterns specific for carbohydrate binding sites from known protein structures. The prediction results for all protein atom types were integrated into surface patches as tentative carbohydrate binding sites based on normalized prediction confidence level. The prediction capabilities of the predictors were benchmarked by a 10-fold cross validation on 497 non-redundant proteins with known carbohydrate binding sites. The predictors were further tested on an independent test set with 108 proteins. The residue-based Matthews correlation coefficient (MCC) for the independent test was 0.45, with prediction precision and sensitivity (or recall) of 0.45 and 0.49 respectively. In addition, 111 unbound carbohydrate-binding protein structures for which the structures were determined in the absence of the carbohydrate ligands were predicted with the trained predictors. The overall
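
    For readers who want to reproduce the benchmark metrics quoted above, the following sketch computes the residue-based Matthews correlation coefficient, precision and sensitivity from a binary confusion matrix; the counts are made up for illustration and are not the paper's results.

```python
# Residue-level Matthews correlation coefficient, precision and sensitivity
# from a binary confusion matrix; the counts below are made up for illustration.
import math

tp, fp, tn, fn = 450, 550, 8000, 470   # hypothetical residue counts

precision = tp / (tp + fp)
recall = tp / (tp + fn)                # sensitivity
mcc = (tp * tn - fp * fn) / math.sqrt(
    (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)
)
print(f"precision={precision:.2f} recall={recall:.2f} MCC={mcc:.2f}")
```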

  17. Prediction of Carbohydrate Binding Sites on Protein Surfaces with 3-Dimensional Probability Density Distributions of Interacting Atoms

    PubMed Central

    Tsai, Keng-Chang; Jian, Jhih-Wei; Yang, Ei-Wen; Hsu, Po-Chiang; Peng, Hung-Pin; Chen, Ching-Tai; Chen, Jun-Bo; Chang, Jeng-Yih; Hsu, Wen-Lian; Yang, An-Suei

    2012-01-01

    Non-covalent protein-carbohydrate interactions mediate molecular targeting in many biological processes. Prediction of non-covalent carbohydrate binding sites on protein surfaces not only provides insights into the functions of the query proteins; information on key carbohydrate-binding residues could suggest site-directed mutagenesis experiments, design therapeutics targeting carbohydrate-binding proteins, and provide guidance in engineering protein-carbohydrate interactions. In this work, we show that non-covalent carbohydrate binding sites on protein surfaces can be predicted with relatively high accuracy when the query protein structures are known. The prediction capabilities were based on a novel encoding scheme of the three-dimensional probability density maps describing the distributions of 36 non-covalent interacting atom types around protein surfaces. One machine learning model was trained for each of the 30 protein atom types. The machine learning algorithms predicted tentative carbohydrate binding sites on query proteins by recognizing the characteristic interacting atom distribution patterns specific for carbohydrate binding sites from known protein structures. The prediction results for all protein atom types were integrated into surface patches as tentative carbohydrate binding sites based on normalized prediction confidence level. The prediction capabilities of the predictors were benchmarked by a 10-fold cross validation on 497 non-redundant proteins with known carbohydrate binding sites. The predictors were further tested on an independent test set with 108 proteins. The residue-based Matthews correlation coefficient (MCC) for the independent test was 0.45, with prediction precision and sensitivity (or recall) of 0.45 and 0.49 respectively. In addition, 111 unbound carbohydrate-binding protein structures for which the structures were determined in the absence of the carbohydrate ligands were predicted with the trained predictors. The overall

  18. Momentum Probabilities for a Single Quantum Particle in Three-Dimensional Regular "Infinite" Wells: One Way of Promoting Understanding of Probability Densities

    ERIC Educational Resources Information Center

    Riggs, Peter J.

    2013-01-01

    Students often wrestle unsuccessfully with the task of correctly calculating momentum probability densities and have difficulty in understanding their interpretation. In the case of a particle in an "infinite" potential well, its momentum can take values that are not just those corresponding to the particle's quantised energies but…
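
    As a concrete illustration of the kind of calculation the article targets, the sketch below obtains the momentum probability density of a one-dimensional "infinite" well eigenstate by numerically Fourier transforming the position-space wavefunction (units with ħ = 1); a three-dimensional rectangular well factorizes into a product of three such one-dimensional densities. The well width and quantum number are arbitrary choices.

```python
# Sketch: momentum probability density for a 1D infinite square well eigenstate,
# obtained by numerically Fourier-transforming psi_n(x); units with hbar = 1.
import numpy as np

L, n = 1.0, 2                      # well width and quantum number (assumed)
x = np.linspace(0.0, L, 2001)
psi = np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

p = np.linspace(-40, 40, 801)
# phi(p) = (2*pi)^(-1/2) * integral psi(x) exp(-i p x) dx
phase = np.exp(-1j * np.outer(p, x))
phi = np.trapz(phase * psi, x, axis=1) / np.sqrt(2.0 * np.pi)
density = np.abs(phi) ** 2         # momentum probability density

print("normalization check (should be close to 1):", np.trapz(density, p))
```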

  19. Probability density of orbital angular momentum mode of autofocusing Airy beam carrying power-exponent-phase vortex through weak anisotropic atmosphere turbulence.

    PubMed

    Yan, Xu; Guo, Lixin; Cheng, Mingjian; Li, Jiangting; Huang, Qingqing; Sun, Ridong

    2017-06-26

    The probability densities of orbital angular momentum (OAM) modes of the autofocusing Airy beam (AAB) carrying power-exponent-phase vortex (PEPV) after passing through the weak anisotropic non-Kolmogorov turbulent atmosphere are theoretically formulated. It is found that the AAB carrying PEPV is the result of the weighted superposition of multiple OAM modes at differing positions within the beam cross-section, and the mutual crosstalk among different OAM modes will compensate the distortion of each OAM mode and be helpful for boosting the anti-jamming performance of the communication link. Based on numerical calculations, the role of the wavelength, waist width, topological charge and power order of PEPV in the probability density distribution variations of OAM modes of the AAB carrying PEPV is explored. Analysis shows that a relatively small beam waist and longer wavelength are good for separating the detection regions between signal OAM mode and crosstalk OAM modes. The probability density distribution of the signal OAM mode does not change obviously with the topological charge variation; but it will be greatly enhanced with the increase of power order. Furthermore, it is found that the detection region center position of crosstalk OAM mode is an emergent property resulting from power order and topological charge. Therefore, the power order can be introduced as an extra steering parameter to modulate the probability density distributions of OAM modes. These results provide guidelines for the design of an optimal detector, which has potential application in optical vortex communication systems.

  20. Thermal imaging method for estimating oxygen saturation.

    PubMed

    Tepper, Michal; Neeman, Rotem; Milstein, Yonat; David, Moshe Ben; Gannot, Israel

    2009-01-01

    The objective of this study is to develop a minimally invasive thermal imaging method to determine the oxygenation level of an internal tissue. In this method, the tissue is illuminated using an optical fiber by several wavelengths in the visible and near-IR range. Each wavelength is absorbed by the tissue and thus causes an increase in its temperature. The temperature increase is observed by a coherent waveguide bundle in the mid-IR range. The thermal imaging of the tissue is done using a thermal camera through the coherent bundle. Analyzing the temperature rise allows estimating the tissue composition in general, and specifically the oxygenation level. Such a system enables imaging of the temperature within body cavities through a commercial endoscope. As an intermediate stage, the method is applied and tested on exposed skin tissue. A curve-fitting algorithm is used to find the most suitable saturation value affecting the temperature function. The algorithm is tested on a theoretical tissue model with various parameters, implemented for this study, and on agar phantom models. The calculated saturation values are in agreement with the real saturation values.
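
    A hedged sketch of the curve-fitting step described above: a saturation parameter is fitted by least squares to temperature rises measured at several wavelengths. The linear absorption model, extinction coefficients and measured values below are hypothetical placeholders, not the authors' tissue model.

```python
# Hedged sketch: least-squares fit of an oxygen-saturation parameter from
# temperature rises measured at several wavelengths. The linear absorption
# model and extinction coefficients below are hypothetical placeholders.
import numpy as np
from scipy.optimize import curve_fit

# Assumed extinction coefficients of Hb and HbO2 at the probe wavelengths
eps_hb = np.array([0.80, 0.30, 0.20, 0.75])
eps_hbo2 = np.array([0.25, 0.45, 0.60, 0.70])

def temp_rise(wl_index, saturation, scale):
    """Temperature rise ~ scale * effective absorption at each wavelength."""
    mu_eff = saturation * eps_hbo2[wl_index] + (1 - saturation) * eps_hb[wl_index]
    return scale * mu_eff

wl_index = np.arange(4)
measured = np.array([0.42, 0.41, 0.47, 0.66])      # made-up temperature rises (K)

popt, pcov = curve_fit(temp_rise, wl_index, measured,
                       p0=[0.5, 1.0], bounds=([0, 0], [1, np.inf]))
print(f"estimated saturation = {popt[0]:.2f}")
```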

  1. A computer simulated phantom study of tomotherapy dose optimization based on probability density functions (PDF) and potential errors caused by low reproducibility of PDF.

    PubMed

    Sheng, Ke; Cai, Jing; Brookeman, James; Molloy, Janelle; Christopher, John; Read, Paul

    2006-09-01

    Lung tumor motion trajectories measured by four-dimensional CT or dynamic MRI can be converted to a probability density function (PDF), which describes the probability of the tumor being at a certain position, for PDF based treatment planning. Using this method in simulated sequential tomotherapy, we study the dose reduction of normal tissues and, more importantly, the effect of PDF reproducibility on the accuracy of dosimetry. For these purposes, realistic PDFs were obtained from two dynamic MRI scans of a healthy volunteer within a 2-week interval. The first PDF was accumulated from a 300 s scan and the second PDF was calculated from variable scan times from 5 s (one breathing cycle) to 300 s. Optimized beam fluences based on the second PDF were delivered to the hypothetical gross target volume (GTV) of a lung phantom that moved following the first PDF. The reproducibility between the two PDFs varied from low (78%) to high (94.8%) when the second scan time increased from 5 s to 300 s. When a highly reproducible PDF was used in optimization, the dose coverage of the GTV was maintained; phantom lung receiving 10%-20% prescription dose was reduced by 40%-50% and the mean phantom lung dose was reduced by 9.6%. However, optimization based on a PDF with low reproducibility resulted in a 50% underdosed GTV. The dosimetric error increased nearly exponentially as the PDF error increased. Therefore, although the dose to the tumor-surrounding tissue can be theoretically reduced by PDF based treatment planning, the reliability and applicability of this method depend strongly on whether a reproducible PDF exists and is measurable. By correlating the dosimetric error with the PDF error, a useful guideline for PDF data acquisition and patient qualification for PDF based planning can be derived.
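
    As a minimal sketch of how a positional PDF is formed from a motion trace and how reproducibility between two PDFs might be quantified, the snippet below histograms two synthetic breathing traces and reports their overlap; the traces and the overlap definition are assumptions, not the study's MRI data or its reproducibility metric.

```python
# Sketch: build positional PDFs from two motion traces and compare their
# reproducibility with a simple histogram-overlap measure. The synthetic
# breathing traces and the overlap definition are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 300, 30000)                       # 300 s at 100 Hz
trace1 = 10 * np.abs(np.sin(2 * np.pi * t / 4)) + rng.normal(0, 0.5, t.size)
trace2 = 10 * np.abs(np.sin(2 * np.pi * t / 4.2)) + rng.normal(0, 0.8, t.size)

bins = np.linspace(-2, 14, 65)
pdf1, _ = np.histogram(trace1, bins=bins, density=True)
pdf2, _ = np.histogram(trace2, bins=bins, density=True)

width = np.diff(bins)
overlap = np.sum(np.minimum(pdf1, pdf2) * width)     # 1.0 = identical PDFs
print(f"PDF reproducibility (overlap) = {overlap:.3f}")
```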

  2. Efficient resampling methods for nonsmooth estimating functions

    PubMed Central

    ZENG, DONGLIN

    2009-01-01

    Summary We propose a simple and general resampling strategy to estimate variances for parameter estimators derived from nonsmooth estimating functions. This approach applies to a wide variety of semiparametric and nonparametric problems in biostatistics. It does not require solving estimating equations and is thus much faster than the existing resampling procedures. Its usefulness is illustrated with heteroscedastic quantile regression and censored data rank regression. Numerical results based on simulated and real data are provided. PMID:17925303

  3. Identification of local extinction topology in axisymmetric bluff-body diffusion flames with a reactedness-mixture fraction presumed probability density function model

    NASA Astrophysics Data System (ADS)

    Koutmos, P.; Marazioti, P.

    2001-04-01

    The effects of finite-rate chemistry, such as partial extinctions and re-ignitions, are investigated in turbulent non-premixed reacting flows stabilized in the wake of an axisymmetric bluff-body burner. A two-dimensional large-eddy simulation procedure is employed that uses a partial equilibrium/two-scalar reactedness mixture fraction probability density function (PDF) combustion sub-model, which is applied at the sub-grid scale (SGS) level. An anisotropic sub-grid eddy-viscosity and two equations for the SGS turbulence kinetic and scalar energies complete the SGS closure model. The scalar covariances required in the joint PDF formulation are obtained from an extended scale-similarity assumption between the resolved and the sub-grid fluctuations. Extinction due to strong turbulence/chemistry interactions is recognized with the help of a critical, locally variable, turbulent Damköhler number criterion, while transient localized extinctions and re-ignitions are treated with a Lagrangian transport equation for a reactedness progress variable. Comparisons with available experimental data suggested that the formulated approach was capable of identifying the effects of large-scale vortex structure activity, which were inherent in the reacting wake and dominant in the counterpart isothermal flows, that otherwise would have been obscured if a standard time-averaged procedure had been used. Additionally, the post-extinction and re-ignition behaviour and its time-varying interaction with the large-scale structure dynamics were more appropriately addressed within the context of the present time-dependent method.

  4. An estimation method of the direct benefit of a waterlogging control project applicable to the changing environment

    NASA Astrophysics Data System (ADS)

    Zengmei, L.; Guanghua, Q.; Zishen, C.

    2015-05-01

    The direct benefit of a waterlogging control project is reflected by the reduction or avoidance of waterlogging loss. Before and after the construction of a waterlogging control project, the disaster-inducing environment in the waterlogging-prone zone is generally different. In addition, the category, quantity and spatial distribution of the disaster-bearing bodies also change to some extent. Therefore, under the changing environment, the direct benefit of a waterlogging control project should be the reduction of waterlogging losses compared to conditions with no control project. Moreover, the waterlogging losses with or without the project should be the mathematical expectations of the waterlogging losses when rainstorms of all frequencies meet various water levels in the drainage-accepting zone. Accordingly, an estimation model of the direct benefit of waterlogging control is proposed. Firstly, on the basis of a Copula function, the joint distribution of the rainstorms and the water levels is established, so as to obtain their joint probability density function. Secondly, according to the two-dimensional joint probability density distribution, the two-dimensional domain of integration is determined and divided into small domains, so as to calculate, for each small domain, its probability and the difference between the average waterlogging losses with and without a waterlogging control project (the regional benefit of the project) under the condition that rainstorms in the waterlogging-prone zone meet the water level in the drainage-accepting zone. Finally, the weighted mean of these regional benefits over all small domains, with probability as the weight, gives the benefit of the waterlogging control project. Taking the benefit estimation of a waterlogging control project in Yangshan County, Guangdong Province, as an example, the paper briefly explains the procedures in waterlogging control project benefit estimation. The
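
    A hedged sketch of the expected-benefit idea: couple rainstorm depth and drainage-accepting water level through a Gaussian copula and average the loss reduction over the joint distribution. The marginal distributions, correlation and loss functions below are invented placeholders, and Monte Carlo sampling stands in for the paper's integration over a discretized two-dimensional domain.

```python
# Hedged sketch: expected direct benefit as the mean loss reduction over a
# copula-coupled joint distribution of rainstorm depth and water level.
# Marginals, correlation and the loss functions are invented placeholders.
import numpy as np
from scipy.stats import norm, gumbel_r

rng = np.random.default_rng(1)
rho, n = 0.6, 200_000

# Gaussian copula: correlated uniforms
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
u = norm.cdf(z)

# Assumed marginals: rainstorm depth (mm) and water level (m)
rain = gumbel_r.ppf(u[:, 0], loc=80, scale=25)
level = gumbel_r.ppf(u[:, 1], loc=2.0, scale=0.5)

def loss_without(r, h):          # placeholder waterlogging-loss function
    return np.maximum(r - 60, 0) * (1 + 0.5 * np.maximum(h - 2.0, 0))

def loss_with(r, h):             # placeholder loss with the project in place
    return 0.3 * loss_without(r, h)

benefit = np.mean(loss_without(rain, level) - loss_with(rain, level))
print(f"expected direct benefit (arbitrary units): {benefit:.1f}")
```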

  5. Foliage penetration obscuration probability density function analysis from overhead canopy photos for gimbaled linear-mode and Geiger-mode airborne lidar

    NASA Astrophysics Data System (ADS)

    Burton, Robin R.

    2010-04-01

    Three-dimensional (3D) Light Detection And Ranging (LIDAR) systems designed for foliage penetration can produce good bare-earth products in medium to medium-heavy obscuration environments, but product creation becomes increasingly more difficult as the obscuration level increases. A prior knowledge of the obscuration environment over large areas is hard to obtain. The competing factors of area coverage rate and product quality are difficult to balance. Ground-based estimates of obscuration levels are labor intensive and only capture a small portion of the area of interest. Estimates of obscuration levels derived from airborne data require that the area of interest has been collected previously. Recently, there has been a focus on lacunarity (scale dependent measure of translational invariance) to quantify the gap structure of canopies. While this approach is useful, it needs to be evaluated relative to the size of the instantaneous field-of-view (IFOV) of the system under consideration. In this paper, the author reports on initial results to generate not just average obscuration values from overhead canopy photographs, but to generate obscuration probability density functions (PDFs) for both gimbaled linear-mode and geiger-mode airborne LIDAR. In general, gimbaled linear-mode (LM) LIDAR collects data with higher signal-to-noise (SNR), but is limited to smaller areas and cannot collect at higher altitudes. Conversely, geiger-mode (GM) LIDAR has a much lower SNR, but is capable of higher area rates and collecting data at higher altitudes. To date, geiger-mode LIDAR obscurant penetration theory has relied on a single obscuration value, but recent work has extended it to use PDFs1. Whether or not the inclusion of PDFs significantly changes predicted results and more closely matches actual results awaits the generation of PDFs over specific ground truth targets and comparison to actual collections of those ground truth targets. Ideally, examination of individual PDFs

  6. Bayesian methods for parameter estimation in effective field theories

    SciTech Connect

    Schindler, M.R.; Phillips, D.R.

    2009-03-15

    We demonstrate and explicate Bayesian methods for fitting the parameters that encode the impact of short-distance physics on observables in effective field theories (EFTs). We use Bayes' theorem together with the principle of maximum entropy to account for the prior information that these parameters should be natural, i.e., O(1) in appropriate units. Marginalization can then be employed to integrate the resulting probability density function (pdf) over the EFT parameters that are not of specific interest in the fit. We also explore marginalization over the order of the EFT calculation, M, and over the variable, R, that encodes the inherent ambiguity in the notion that these parameters are O(1). This results in a very general formula for the pdf of the EFT parameters of interest given a data set, D. We use this formula and the simpler 'augmented χ²' in a toy problem for which we generate pseudo-data. These Bayesian methods, when used in combination with the 'naturalness prior', facilitate reliable extractions of EFT parameters in cases where χ² methods are ambiguous at best. We also examine the problem of extracting the nucleon mass in the chiral limit, M₀, and the nucleon sigma term, from pseudo-data on the nucleon mass as a function of the pion mass. We find that Bayesian techniques can provide reliable information on M₀, even if some of the data points used for the extraction lie outside the region of applicability of the EFT.
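
    A toy sketch of the "naturalness prior" idea: the posterior for one EFT coefficient is computed on a grid by combining a χ² likelihood from pseudo-data with Gaussian priors of width R on all coefficients, marginalizing the remaining coefficients by summation. The quadratic data model and all numbers below are assumptions, not the authors' formulas.

```python
# Toy sketch of a naturalness prior: posterior for a single coefficient a1
# given pseudo-data, combining a chi^2 likelihood with Gaussian priors of
# width R on all coefficients. The model y = a0 + a1*x + a2*x^2 is an assumption.
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.05, 0.3, 10)
sigma = 0.02
y = 1.0 + 0.7 * x - 1.5 * x**2 + rng.normal(0, sigma, x.size)   # pseudo-data

R = 1.0                                    # naturalness scale (assumed)
a1_grid = np.linspace(-5, 5, 1001)

def log_post(a1):
    # marginalize a0 and a2 by summing over a coarse grid (illustrative only)
    a0g = np.linspace(0.8, 1.2, 41)
    a2g = np.linspace(-4, 1, 51)
    A0, A2 = np.meshgrid(a0g, a2g, indexing="ij")
    resid = y[None, None, :] - (A0[..., None] + a1 * x + A2[..., None] * x**2)
    chi2 = np.sum((resid / sigma) ** 2, axis=-1)
    # prior: all coefficients natural, i.e. Gaussian with width R
    logp = -0.5 * chi2 - 0.5 * (A0**2 + a1**2 + A2**2) / R**2
    return np.log(np.sum(np.exp(logp - logp.max()))) + logp.max()

lp = np.array([log_post(a) for a in a1_grid])
post = np.exp(lp - lp.max())
post /= np.trapz(post, a1_grid)
print(f"posterior mean of a1: {np.trapz(a1_grid * post, a1_grid):.2f}")
```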

  7. Online Direct Density-Ratio Estimation Applied to Inlier-Based Outlier Detection.

    PubMed

    du Plessis, Marthinus Christoffel; Shiino, Hiroaki; Sugiyama, Masashi

    2015-09-01

    Many machine learning problems, such as nonstationarity adaptation, outlier detection, dimensionality reduction, and conditional density estimation, can be effectively solved by using the ratio of probability densities. Since the naive two-step procedure of first estimating the probability densities and then taking their ratio performs poorly, methods to directly estimate the density ratio from two sets of samples without density estimation have been extensively studied recently. However, these methods are batch algorithms that use the whole data set to estimate the density ratio, and they are inefficient in the online setup, where training samples are provided sequentially and solutions are updated incrementally without storing previous samples. In this letter, we propose two online density-ratio estimators based on the adaptive regularization of weight vectors. Through experiments on inlier-based outlier detection, we demonstrate the usefulness of the proposed methods.
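
    For orientation, the snippet below is a batch, uLSIF-style least-squares sketch of direct density-ratio estimation used for inlier-based outlier detection (a low ratio flags an outlier); it is not the online adaptive-regularization estimators proposed in the letter, and the kernel width, centers and regularizer are assumptions.

```python
# Batch uLSIF-style sketch of direct density-ratio estimation r(x) = p_nu/p_de
# for inlier-based outlier detection (low ratio => outlier). Kernel width,
# centers and the regularizer are assumptions; this is not the online method.
import numpy as np

rng = np.random.default_rng(3)
x_nu = rng.normal(0, 1, (500, 1))                 # inlier samples (numerator)
x_de = np.vstack([rng.normal(0, 1, (450, 1)),     # test samples (denominator)
                  rng.normal(5, 1, (50, 1))])     # ... including some outliers

centers = x_nu[rng.choice(len(x_nu), 100, replace=False)]
sigma, lam = 1.0, 0.1

def K(x, c):
    d2 = ((x[:, None, :] - c[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

Phi_de = K(x_de, centers)                         # (n_de, b)
Phi_nu = K(x_nu, centers)                         # (n_nu, b)
H = Phi_de.T @ Phi_de / len(x_de)
h = Phi_nu.mean(axis=0)
alpha = np.linalg.solve(H + lam * np.eye(len(centers)), h)

ratio = Phi_de @ alpha                            # density-ratio values on x_de
print("lowest-ratio (most outlying) indices:", np.argsort(ratio)[:5])
```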

  8. Comparisons of Four Methods for Estimating a Dynamic Factor Model

    ERIC Educational Resources Information Center

    Zhang, Zhiyong; Hamaker, Ellen L.; Nesselroade, John R.

    2008-01-01

    Four methods for estimating a dynamic factor model, the direct autoregressive factor score (DAFS) model, are evaluated and compared. The first method estimates the DAFS model using a Kalman filter algorithm based on its state space model representation. The second one employs the maximum likelihood estimation method based on the construction of a…

  9. Comparisons of Four Methods for Estimating a Dynamic Factor Model

    ERIC Educational Resources Information Center

    Zhang, Zhiyong; Hamaker, Ellen L.; Nesselroade, John R.

    2008-01-01

    Four methods for estimating a dynamic factor model, the direct autoregressive factor score (DAFS) model, are evaluated and compared. The first method estimates the DAFS model using a Kalman filter algorithm based on its state space model representation. The second one employs the maximum likelihood estimation method based on the construction of a…

  10. Statistical methods of estimating mining costs

    USGS Publications Warehouse

    Long, K.R.

    2011-01-01

    Until it was defunded in 1995, the U.S. Bureau of Mines maintained a Cost Estimating System (CES) for prefeasibility-type economic evaluations of mineral deposits and estimating costs at producing and non-producing mines. This system had a significant role in mineral resource assessments to estimate costs of developing and operating known mineral deposits and predicted undiscovered deposits. For legal reasons, the U.S. Geological Survey cannot update and maintain CES. Instead, statistical tools are under development to estimate mining costs from basic properties of mineral deposits such as tonnage, grade, mineralogy, depth, strip ratio, distance from infrastructure, rock strength, and work index. The first step was to reestimate "Taylor's Rule" which relates operating rate to available ore tonnage. The second step was to estimate statistical models of capital and operating costs for open pit porphyry copper mines with flotation concentrators. For a sample of 27 proposed porphyry copper projects, capital costs can be estimated from three variables: mineral processing rate, strip ratio, and distance from nearest railroad before mine construction began. Of all the variables tested, operating costs were found to be significantly correlated only with strip ratio.

  11. Statistically advanced, self-similar, radial probability density functions of atmospheric and under-expanded hydrogen jets

    NASA Astrophysics Data System (ADS)

    Ruggles, Adam J.

    2015-11-01

    This paper presents improved statistical insight regarding the self-similar scalar mixing process of atmospheric hydrogen jets and the downstream region of under-expanded hydrogen jets. Quantitative planar laser Rayleigh scattering imaging is used to probe both jets. The self-similarity of statistical moments up to the sixth order (beyond the literature established second order) is documented in both cases. This is achieved using a novel self-similar normalization method that facilitated a degree of statistical convergence that is typically limited to continuous, point-based measurements. This demonstrates that image-based measurements of a limited number of samples can be used for self-similar scalar mixing studies. Both jets exhibit the same radial trends of these moments demonstrating that advanced atmospheric self-similarity can be applied in the analysis of under-expanded jets. Self-similar histograms away from the centerline are shown to be the combination of two distributions. The first is attributed to turbulent mixing. The second, a symmetric Poisson-type distribution centered on zero mass fraction, progressively becomes the dominant and eventually sole distribution at the edge of the jet. This distribution is attributed to shot noise-affected pure air measurements, rather than a diffusive superlayer at the jet boundary. This conclusion is reached after a rigorous measurement uncertainty analysis and inspection of pure air data collected with each hydrogen data set. A threshold based upon the measurement noise analysis is used to separate the turbulent and pure air data, and thusly estimate intermittency. Beta-distributions (four parameters) are used to accurately represent the turbulent distribution moments. This combination of measured intermittency and four-parameter beta-distributions constitutes a new, simple approach to model scalar mixing. Comparisons between global moments from the data and moments calculated using the proposed model show excellent
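
    A minimal sketch of the proposed mixing model: samples are split at a noise-based threshold, intermittency is estimated as the turbulent fraction, and a four-parameter beta distribution is fitted to the turbulent part. The synthetic samples and the threshold value are assumptions, not the Rayleigh-scattering data.

```python
# Sketch of the intermittency + beta-distribution model: split samples at a
# noise-based threshold, estimate intermittency as the turbulent fraction and
# fit a four-parameter beta distribution to the turbulent mass fractions.
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(4)
turb = rng.beta(2.0, 6.0, 7000) * 0.5             # turbulent mass fractions
air = rng.normal(0.0, 0.004, 3000)                # shot-noise-affected pure air
samples = np.concatenate([turb, air])

threshold = 0.012                     # from a measurement-noise analysis (assumed)
turbulent = samples[samples > threshold]
intermittency = turbulent.size / samples.size

# Four-parameter beta fit (shape a, shape b, loc, scale); loc/scale are guesses
a, b, loc, scale = beta.fit(turbulent, loc=0.0, scale=0.6)
mean_model = intermittency * (loc + scale * a / (a + b))
print(f"intermittency={intermittency:.2f}, modeled mean={mean_model:.3f}, "
      f"sample mean={samples.mean():.3f}")
```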

  12. A Study of Variance Estimation Methods. Working Paper Series.

    ERIC Educational Resources Information Center

    Zhang, Fan; Weng, Stanley; Salvucci, Sameena; Hu, Ming-xiu

    This working paper contains reports of five studies of variance estimation methods. The first, An Empirical Study of Poststratified Estimator, by Fan Zhang uses data from the National Household Education Survey to illustrate use of poststratified estimation. The second paper, BRR Variance Estimation Using BPLX Hadamard Procedure, by Stanley Weng…

  13. Evaluating Methods for Estimating Program Effects

    ERIC Educational Resources Information Center

    Reichardt, Charles S.

    2011-01-01

    I define a treatment effect in terms of a comparison of outcomes and provide a typology of all possible comparisons that can be used to estimate treatment effects, including comparisons that are relatively unknown in both the literature and practice. I then assess the relative merit, worth, and value of all possible comparisons based on the…

  14. Evaluating Methods for Estimating Program Effects

    ERIC Educational Resources Information Center

    Reichardt, Charles S.

    2011-01-01

    I define a treatment effect in terms of a comparison of outcomes and provide a typology of all possible comparisons that can be used to estimate treatment effects, including comparisons that are relatively unknown in both the literature and practice. I then assess the relative merit, worth, and value of all possible comparisons based on the…

  15. Nutrient Estimation Using Subsurface Sensing Methods

    USDA-ARS?s Scientific Manuscript database

    This report investigates the use of precision management techniques for measuring soil conductivity on feedlot surfaces to estimate nutrient value for crop production. An electromagnetic induction soil conductivity meter was used to collect apparent soil electrical conductivity (ECa) from feedlot p...

  16. New High Throughput Methods to Estimate Chemical ...

    EPA Pesticide Factsheets

    EPA has made many recent advances in high throughput bioactivity testing. However, concurrent advances in rapid, quantitative prediction of human and ecological exposures have been lacking, despite the clear importance of both measures for a risk-based approach to prioritizing and screening chemicals. A recent report by the National Research Council of the National Academies, Exposure Science in the 21st Century: A Vision and a Strategy (NRC 2012) laid out a number of applications in chemical evaluation of both toxicity and risk in critical need of quantitative exposure predictions, including screening and prioritization of chemicals for targeted toxicity testing, focused exposure assessments or monitoring studies, and quantification of population vulnerability. Despite these significant needs, for the majority of chemicals (e.g. non-pesticide environmental compounds) there are no or limited estimates of exposure. For example, exposure estimates exist for only 7% of the ToxCast Phase II chemical list. In addition, the data required for generating exposure estimates for large numbers of chemicals is severely lacking (Egeghy et al. 2012). This SAP reviewed the use of EPA's ExpoCast model to rapidly estimate potential chemical exposures for prioritization and screening purposes. The focus was on bounded chemical exposure values for people and the environment for the Endocrine Disruptor Screening Program (EDSP) Universe of Chemicals. In addition to exposure, the SAP

  17. New High Throughput Methods to Estimate Chemical ...

    EPA Pesticide Factsheets

    EPA has made many recent advances in high throughput bioactivity testing. However, concurrent advances in rapid, quantitative prediction of human and ecological exposures have been lacking, despite the clear importance of both measures for a risk-based approach to prioritizing and screening chemicals. A recent report by the National Research Council of the National Academies, Exposure Science in the 21st Century: A Vision and a Strategy (NRC 2012) laid out a number of applications in chemical evaluation of both toxicity and risk in critical need of quantitative exposure predictions, including screening and prioritization of chemicals for targeted toxicity testing, focused exposure assessments or monitoring studies, and quantification of population vulnerability. Despite these significant needs, for the majority of chemicals (e.g. non-pesticide environmental compounds) there are no or limited estimates of exposure. For example, exposure estimates exist for only 7% of the ToxCast Phase II chemical list. In addition, the data required for generating exposure estimates for large numbers of chemicals is severely lacking (Egeghy et al. 2012). This SAP reviewed the use of EPA's ExpoCast model to rapidly estimate potential chemical exposures for prioritization and screening purposes. The focus was on bounded chemical exposure values for people and the environment for the Endocrine Disruptor Screening Program (EDSP) Universe of Chemicals. In addition to exposure, the SAP

  18. A Method to Estimate the Probability that any Individual Cloud-to-Ground Lightning Stroke was Within any Radius of any Point

    NASA Technical Reports Server (NTRS)

    Huddleston, Lisa L.; Roeder, William P.; Merceret, Francis J.

    2011-01-01

    A new technique has been developed to estimate the probability that a nearby cloud-to-ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force Station. Future applications could include forensic meteorology.
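
    As a sketch of the quantity being computed, the snippet below estimates the probability that a stroke with a bivariate Gaussian location error falls within a given radius of an arbitrary point by Monte Carlo sampling rather than by the adapted collision-probability integral; the error-ellipse parameters, point of interest and radius are made up.

```python
# Sketch: probability that a stroke with a bivariate-Gaussian location error
# lies within radius R of an arbitrary point, estimated by Monte Carlo rather
# than the adapted collision-probability integral. All values are made up.
import numpy as np

rng = np.random.default_rng(5)

mean = np.array([0.0, 0.0])            # most likely stroke location (km)
smaj, smin = 0.9, 0.4                  # error-ellipse sigmas (km), assumed
theta = np.deg2rad(30)                 # ellipse orientation
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
cov = rot @ np.diag([smaj**2, smin**2]) @ rot.T

point = np.array([0.8, 0.5])           # point of interest (km), off the ellipse center
radius = 1.0                           # km

samples = rng.multivariate_normal(mean, cov, size=1_000_000)
inside = np.linalg.norm(samples - point, axis=1) <= radius
print(f"P(stroke within {radius} km of point) ≈ {inside.mean():.3f}")
```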

  19. A Method to Estimate the Probability That Any Individual Cloud-to-Ground Lightning Stroke Was Within Any Radius of Any Point

    NASA Technical Reports Server (NTRS)

    Huddleston, Lisa L.; Roeder, William P.; Merceret, Francis J.

    2010-01-01

    A new technique has been developed to estimate the probability that a nearby cloud-to-ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force station.

  20. Estimation Methods for One-Parameter Testlet Models

    ERIC Educational Resources Information Center

    Jiao, Hong; Wang, Shudong; He, Wei

    2013-01-01

    This study demonstrated the equivalence between the Rasch testlet model and the three-level one-parameter testlet model and explored the Markov Chain Monte Carlo (MCMC) method for model parameter estimation in WINBUGS. The estimation accuracy from the MCMC method was compared with those from the marginalized maximum likelihood estimation (MMLE)…

  1. Estimation Methods for One-Parameter Testlet Models

    ERIC Educational Resources Information Center

    Jiao, Hong; Wang, Shudong; He, Wei

    2013-01-01

    This study demonstrated the equivalence between the Rasch testlet model and the three-level one-parameter testlet model and explored the Markov Chain Monte Carlo (MCMC) method for model parameter estimation in WINBUGS. The estimation accuracy from the MCMC method was compared with those from the marginalized maximum likelihood estimation (MMLE)…

  2. Quantum Estimation Methods for Quantum Illumination

    NASA Astrophysics Data System (ADS)

    Sanz, M.; Las Heras, U.; García-Ripoll, J. J.; Solano, E.; Di Candia, R.

    2017-02-01

    Quantum illumination consists in shining quantum light on a target region immersed in a bright thermal bath with the aim of detecting the presence of a possible low-reflective object. If the signal is entangled with the receiver, then a suitable choice of the measurement offers a gain with respect to the optimal classical protocol employing coherent states. Here, we tackle this detection problem by using quantum estimation techniques to measure the reflectivity parameter of the object, showing an enhancement in the signal-to-noise ratio up to 3 dB with respect to the classical case when implementing only local measurements. Our approach employs the quantum Fisher information to provide an upper bound for the error probability, supplies the concrete estimator saturating the bound, and extends the quantum illumination protocol to non-Gaussian states. As an example, we show how Schrödinger's cat states may be used for quantum illumination.

  3. Quantum Estimation Methods for Quantum Illumination.

    PubMed

    Sanz, M; Las Heras, U; García-Ripoll, J J; Solano, E; Di Candia, R

    2017-02-17

    Quantum illumination consists in shining quantum light on a target region immersed in a bright thermal bath with the aim of detecting the presence of a possible low-reflective object. If the signal is entangled with the receiver, then a suitable choice of the measurement offers a gain with respect to the optimal classical protocol employing coherent states. Here, we tackle this detection problem by using quantum estimation techniques to measure the reflectivity parameter of the object, showing an enhancement in the signal-to-noise ratio up to 3 dB with respect to the classical case when implementing only local measurements. Our approach employs the quantum Fisher information to provide an upper bound for the error probability, supplies the concrete estimator saturating the bound, and extends the quantum illumination protocol to non-Gaussian states. As an example, we show how Schrödinger's cat states may be used for quantum illumination.

  4. An automated method for serum magnesium estimation

    PubMed Central

    Whitmore, D. N.; Evans, D. I. K.

    1964-01-01

    An automated method for magnesium determination in serum is described using conventional AutoAnalyser equipment. The method gives results comparable with those obtained by the flame photometer. The method may prove particularly useful with subnormal serum magnesium levels. PMID:14227433

  5. Development of advanced acreage estimation methods

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr. (Principal Investigator)

    1980-01-01

    The use of the AMOEBA clustering/classification algorithm was investigated as a basis for both a color display generation technique and a maximum likelihood proportion estimation procedure. An approach to analyzing large data reduction systems was formulated and an exploratory empirical study of spatial correlation in LANDSAT data was also carried out. Topics addressed include: (1) development of multi-image color images; (2) spectral-spatial classification algorithm development; (3) spatial correlation studies; and (4) evaluation of data systems.

  6. alphaPDE: A new multivariate technique for parameter estimation

    SciTech Connect

    Knuteson, B.; Miettinen, H.; Holmstrom, L.

    2002-06-01

    We present alphaPDE, a new multivariate analysis technique for parameter estimation. The method is based on a direct construction of joint probability densities of known variables and the parameters to be estimated. We show how posterior densities and best-value estimates are then obtained for the parameters of interest by a straightforward manipulation of these densities. The method is essentially non-parametric and allows for an intuitive graphical interpretation. We illustrate the method by outlining how it can be used to estimate the mass of the top quark, and we explain how the method is applied to an ensemble of events containing background.
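
    A toy sketch of the core alphaPDE idea: estimate the joint density of an observable and a parameter from simulated events with a kernel density estimator, then evaluate it at the observed value to obtain a posterior and a best-value estimate for the parameter. The toy event generator and observed value are assumptions, not the top-quark analysis.

```python
# Toy sketch: build a joint density of (observable x, parameter m) from
# simulated events with a kernel density estimate, then slice it at the
# observed x to get an (unnormalized) posterior for m.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(6)

# Simulated training events: parameter m drawn flat, observable x ~ N(m, 15)
m_train = rng.uniform(150, 200, 20000)
x_train = rng.normal(m_train, 15.0)

joint = gaussian_kde(np.vstack([x_train, m_train]))

x_obs = 172.0                                    # one observed event (assumed)
m_grid = np.linspace(150, 200, 201)
post = joint(np.vstack([np.full_like(m_grid, x_obs), m_grid]))
post /= np.trapz(post, m_grid)

print(f"best-value estimate of the parameter: {m_grid[np.argmax(post)]:.1f}")
```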

  7. Estimation of vegetation cover at subpixel resolution using LANDSAT data

    NASA Technical Reports Server (NTRS)

    Jasinski, Michael F.; Eagleson, Peter S.

    1986-01-01

    The present report summarizes the various approaches relevant to estimating canopy cover at subpixel resolution. The approaches are based on physical models of radiative transfer in non-homogeneous canopies and on empirical methods. The effects of vegetation shadows and topography are examined. Simple versions of the model are tested, using the Taos, New Mexico Study Area database. Emphasis has been placed on using relatively simple models requiring only one or two bands. Although most methods require some degree of ground truth, a two-band method is investigated whereby the percent cover can be estimated without ground truth by examining the limits of the data space. Future work is proposed which will incorporate additional surface parameters into the canopy cover algorithm, such as topography, leaf area, or shadows. The method involves deriving a probability density function for the percent canopy cover based on the joint probability density function of the observed radiances.

  8. An investigation of new methods for estimating parameter sensitivities

    NASA Technical Reports Server (NTRS)

    Beltracchi, Todd J.; Gabriele, Gary A.

    1989-01-01

    The method proposed for estimating sensitivity derivatives is based on the Recursive Quadratic Programming (RQP) method used in conjunction with a differencing formula to produce estimates of the sensitivities. This method is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFS method employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate the parameter sensitivity.

  9. [Weighted estimation methods for multistage sampling survey data].

    PubMed

    Hou, Xiao-Yan; Wei, Yong-Yue; Chen, Feng

    2009-06-01

    Multistage sampling techniques are widely applied in cross-sectional epidemiological studies, yet methods based on the assumption of independence are still used to analyze such complex survey data. This paper introduces the application of weighted estimation methods to complex survey data. A brief overview of the basic theory is given, and a practical analysis illustrates the weighted estimation algorithm on stratified two-stage cluster sampling data. For multistage sampling survey data, weighted estimation can be used to obtain unbiased point estimates and more reasonable variance estimates, and thus to make proper statistical inference by correcting for the effects of clustering, stratification and unequal selection probabilities.
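
    A minimal sketch of design-weighted estimation for a stratified two-stage sample: a weighted mean plus a with-replacement between-PSU variance approximation within strata. The toy data, weights and design are assumptions, not the survey analyzed in the paper.

```python
# Minimal sketch of weighted estimation for a stratified two-stage design:
# design-weighted mean and a between-PSU (with-replacement) variance
# approximation within strata. Toy data, weights and design are assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
df = pd.DataFrame({
    "stratum": np.repeat([1, 1, 2, 2], 50),
    "psu":     np.repeat([1, 2, 3, 4], 50),
    "weight":  np.repeat([120.0, 80.0, 200.0, 150.0], 50),
    "y":       rng.normal(np.repeat([10.0, 11.0, 14.0, 15.0], 50), 2.0),
})

w, y = df["weight"].to_numpy(), df["y"].to_numpy()
weighted_mean = np.average(y, weights=w)

# Linearize the ratio mean: z_hi = sum of w*(y - mean) over PSU hi, then take
# the between-PSU variance within each stratum.
df["z"] = w * (y - weighted_mean)
psu_z = df.groupby(["stratum", "psu"])["z"].sum()
W = w.sum()
var = 0.0
for _, z_h in psu_z.groupby(level="stratum"):
    n_h = len(z_h)
    var += n_h / (n_h - 1) * np.sum((z_h - z_h.mean()) ** 2) / W**2

print(f"weighted mean = {weighted_mean:.2f}, design-based SE ≈ {np.sqrt(var):.2f}")
```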

  10. A modified DOI-based method to statistically estimate the depth of investigation of dc resistivity surveys

    NASA Astrophysics Data System (ADS)

    Deceuster, John; Etienne, Adélaïde; Robert, Tanguy; Nguyen, Frédéric; Kaufmann, Olivier

    2014-04-01

    Several techniques are available to estimate the depth of investigation or to identify possible artifacts in dc resistivity surveys. Commonly, the depth of investigation (DOI) is mainly estimated by using an arbitrarily chosen cut-off value on a selected indicator (resolution, sensitivity or DOI index). Ranges of cut-off values are recommended in the literature for the different indicators. However, small changes in threshold values may induce strong variations in the estimated depths of investigation. To overcome this problem, we developed a new statistical method to estimate the DOI of dc resistivity surveys based on a modified DOI index approach. This method is composed of 5 successive steps. First, two inversions are performed by using different resistivity reference models for the inversion (0.1 and 10 times the arithmetic mean of the logarithm of the observed apparent resistivity values). Inversion models are extended to the edges of the survey line and to a depth range of three times the pseudodepth of investigation of the largest array spacing used. In step 2, we compute the histogram of a newly defined scaled DOI index. Step 3 consists of the fitting of the mixture of two Gaussian distributions (G1 and G2) to the cumulative distribution function of the scaled DOI index values. Based on this fitting, step 4 focuses on the computation of an interpretation index (II) defined for every cell j of the model as the relative probability density that the cell j belongs to G1, which describes the Gaussian distribution of the cells with a scaled DOI index close to 0.0. In step 5, a new inversion is performed by using a third resistivity reference model (the arithmetic mean of the logarithm of the observed apparent resistivity values). The final electrical resistivity image is produced by using II as alpha blending values allowing the visual discrimination between well-constrained areas and poorly-constrained cells.
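
    A hedged sketch of steps 3 and 4: fit a two-component Gaussian mixture to scaled DOI-index values and take, for each model cell, the posterior probability of belonging to the low-index component G1 as the interpretation index. The synthetic index values are assumptions, and the mixture is fitted here by maximum likelihood to the samples rather than to the cumulative distribution function as in the paper.

```python
# Hedged sketch of steps 3-4: two-component Gaussian mixture on scaled
# DOI-index values; the posterior membership of the low-index component G1
# serves as an interpretation index per model cell. Index values are synthetic.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(8)
doi_index = np.concatenate([
    np.abs(rng.normal(0.02, 0.02, 3000)),   # well-constrained cells (near 0)
    rng.normal(0.6, 0.25, 1200),            # poorly constrained cells
]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(doi_index)
g1 = int(np.argmin(gmm.means_.ravel()))     # component closest to 0 = G1

interpretation_index = gmm.predict_proba(doi_index)[:, g1]
print("fraction of cells with II > 0.5:",
      np.mean(interpretation_index > 0.5).round(2))
```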

  11. Advancing Methods for Estimating Cropland Area

    NASA Astrophysics Data System (ADS)

    King, L.; Hansen, M.; Stehman, S. V.; Adusei, B.; Potapov, P.; Krylov, A.

    2014-12-01

    Measurement and monitoring of complex and dynamic agricultural land systems is essential with increasing demands on food, feed, fuel and fiber production from growing human populations, rising consumption per capita, the expansion of crop oils in industrial products, and the encouraged emphasis on crop biofuels as an alternative energy source. Soybean is an important global commodity crop, and the area of land cultivated for soybean has risen dramatically over the past 60 years, occupying more than 5% of all global croplands (Monfreda et al. 2008). Escalating demands for soy over the next twenty years are anticipated to be met by an increase of 1.5 times the current global production, resulting in expansion of soybean cultivated land area by nearly the same amount (Masuda and Goldsmith 2009). Soybean cropland area is estimated with the use of a sampling strategy and supervised non-linear hierarchical decision tree classification for the United States, Argentina and Brazil as the prototype in development of a new methodology for crop-specific agricultural area estimation. Comparison of our 30 m Landsat soy classification with the National Agricultural Statistics Service Cropland Data Layer (CDL) soy map shows a strong agreement in the United States for 2011, 2012, and 2013. RapidEye 5 m imagery was also classified for soy presence and absence and used at the field scale for validation and accuracy assessment of the Landsat soy maps, describing a nearly 1 to 1 relationship in the United States, Argentina and Brazil. The strong correlation found between all products suggests high accuracy and precision of the prototype and has proven to be a successful and efficient way to assess soybean cultivated area at the sub-national and national scale for the United States with great potential for application elsewhere.

  12. A Joint Analytic Method for Estimating Aquitard Hydraulic Parameters.

    PubMed

    Zhuang, Chao; Zhou, Zhifang; Illman, Walter A

    2017-01-10

    The vertical hydraulic conductivity (Kv), elastic (Sske), and inelastic (Sskv) skeletal specific storage of aquitards are three of the most critical parameters in land subsidence investigations. Two new analytic methods are proposed to estimate the three parameters. The first analytic method is based on a new concept of delay time ratio for estimating Kv and Sske of an aquitard subject to long-term stable, cyclic hydraulic head changes at boundaries. The second analytic method estimates the Sskv of the aquitard subject to linearly declining hydraulic heads at boundaries. Both methods are based on analytical solutions for flow within the aquitard, and they are jointly employed to obtain the three parameter estimates. This joint analytic method is applied to estimate the Kv, Sske, and Sskv of a 34.54-m thick aquitard for which the deformation progress has been recorded by an extensometer located in Shanghai, China. The estimated results are then calibrated by PEST (Doherty 2005), a parameter estimation code coupled with a one-dimensional aquitard-drainage model. The Kv and Sske estimated by the joint analytic method are quite close to those estimated via inverse modeling and performed much better in simulating elastic deformation than the estimates obtained from the stress-strain diagram method of Ye and Xue (2005). The newly proposed joint analytic method is an effective tool that provides reasonable initial values for calibrating land subsidence models.

  13. A quasi-Newton approach to optimization problems with probability density constraints. [problem solving in mathematical programming

    NASA Technical Reports Server (NTRS)

    Tapia, R. A.; Vanrooy, D. L.

    1976-01-01

    A quasi-Newton method is presented for minimizing a nonlinear function while constraining the variables to be nonnegative and sum to one. The nonnegativity constraints were eliminated by working with the squares of the variables and the resulting problem was solved using Tapia's general theory of quasi-Newton methods for constrained optimization. A user's guide for a computer program implementing this algorithm is provided.
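
    A hedged sketch of the squared-variable substitution: writing x_i = y_i^2 / sum_j y_j^2 makes the variables nonnegative and sum to one, after which an off-the-shelf unconstrained quasi-Newton (BFGS) minimizer can be applied. Folding the sum-to-one constraint into the substitution is a simplification of the paper's constrained quasi-Newton treatment, and the objective below is an arbitrary example.

```python
# Hedged sketch: square the variables (and normalize) so that x lies on the
# probability simplex, then minimize over the unconstrained variables y with
# BFGS. The objective is an arbitrary example, not the paper's application.
import numpy as np
from scipy.optimize import minimize

def f(x):                        # example objective on the probability simplex
    target = np.array([0.5, 0.3, 0.2])
    return np.sum((x - target) ** 2) + 0.1 * np.sum(x * np.log(x + 1e-12))

def g(y):                        # objective in the unconstrained variables
    x = y**2 / np.sum(y**2)
    return f(x)

y0 = np.ones(3)                  # starting point (uniform x)
res = minimize(g, y0, method="BFGS")
x_opt = res.x**2 / np.sum(res.x**2)
print("optimal x on the simplex:", np.round(x_opt, 3), "sum =", x_opt.sum().round(6))
```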

  14. Optical method of atomic ordering estimation

    SciTech Connect

    Prutskij, T.; Attolini, G.

    2013-12-04

    It is well known that within metal-organic vapor-phase epitaxy (MOVPE) grown semiconductor III-V ternary alloys atomically ordered regions are spontaneously formed during the epitaxial growth. This ordering leads to bandgap reduction and to valence bands splitting, and therefore to anisotropy of the photoluminescence (PL) emission polarization. The same phenomenon occurs within quaternary semiconductor alloys. While the ordering in ternary alloys is widely studied, for quaternaries there have been only a few detailed experimental studies of it, probably because of the absence of appropriate methods of its detection. Here we propose an optical method to reveal atomic ordering within quaternary alloys by measuring the PL emission polarization.

  15. Rapid Methods for Estimating Navigation Channel Shoaling

    DTIC Science & Technology

    2009-01-01

    of the data dependence and lack of accounting for channel width. Mayor-Mora et al. (1976) developed an analytical method for infilling in a...1.2 cm/day near the end of the monitoring. The predictive expression of Mayor-Mora et al. (1976)...decision-support or initial planning studies that must be done quickly. Vicente and Uva (1984) present a method based on the assumption that a

  16. System and method for correcting attitude estimation

    NASA Technical Reports Server (NTRS)

    Josselson, Robert H. (Inventor)

    2010-01-01

    A system includes an angular rate sensor disposed in a vehicle for providing angular rates of the vehicle, and an instrument disposed in the vehicle for providing line-of-sight control with respect to a line-of-sight reference. The instrument includes an integrator which is configured to integrate the angular rates of the vehicle to form non-compensated attitudes. Also included is a compensator coupled across the integrator, in a feed-forward loop, for receiving the angular rates of the vehicle and outputting compensated angular rates of the vehicle. A summer combines the non-compensated attitudes and the compensated angular rates of the vehicle to form estimated vehicle attitudes for controlling the instrument with respect to the line-of-sight reference. The compensator is configured to provide error compensation to the instrument free of any feedback loop that uses an error signal. The compensator may include a transfer function providing a fixed gain to the received angular rates of the vehicle. The compensator may, alternatively, include a transfer function providing a variable gain as a function of frequency to operate on the received angular rates of the vehicle.

  17. Comparison of three methods for estimating complete life tables

    NASA Astrophysics Data System (ADS)

    Ibrahim, Rose Irnawaty

    2013-04-01

    A question of interest in the demographic and actuarial fields is the estimation of the complete sets of qx values when the data are given in age groups. When complete life tables are not available, estimating them from abridged life tables is necessary. Three methods, King's Osculatory Interpolation, Six-point Lagrangian Interpolation and the Heligman-Pollard Model, are compared using data from abridged life tables for the Malaysian population. Each of the methods considered was applied to the abridged data sets to estimate the complete sets of qx values. The estimated complete sets of qx values were then used to produce estimated abridged ones by each of the three methods. The results were then compared with the actual values published in the abridged life tables. Among the three methods, the Six-point Lagrangian Interpolation method produces the best estimates of complete life tables from five-year abridged life tables.
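
    A minimal sketch of the six-point Lagrangian approach: survivorship l(x) is interpolated at single ages from the six nearest quinquennial ages, and single-age qx values are recovered as 1 - l(x+1)/l(x). The abridged l(x) values below are illustrative, not the Malaysian data.

```python
# Minimal sketch of six-point Lagrangian interpolation for expanding an
# abridged life table: interpolate survivorship l(x) at single ages from six
# surrounding quinquennial ages, then recover q(x) = 1 - l(x+1)/l(x).
import numpy as np
from scipy.interpolate import lagrange

abridged_age = np.array([20, 25, 30, 35, 40, 45, 50, 55])
abridged_lx = np.array([97500, 97100, 96600, 96000, 95200, 94100, 92500, 90300.0])

def lx_single(age):
    """Interpolate l(age) using the six nearest pivotal ages."""
    i = np.clip(np.searchsorted(abridged_age, age) - 3, 0, len(abridged_age) - 6)
    pts = slice(i, i + 6)
    return lagrange(abridged_age[pts], abridged_lx[pts])(age)

ages = np.arange(30, 41)
lx = np.array([lx_single(a) for a in ages])
qx = 1.0 - lx[1:] / lx[:-1]
for a, q in zip(ages[:-1], qx):
    print(f"q({a}) = {q:.5f}")
```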

  18. Bayesian methods to estimate urban growth potential

    USGS Publications Warehouse

    Smith, Jordan W.; Smart, Lindsey S.; Dorning, Monica; Dupéy, Lauren Nicole; Méley, Andréanne; Meentemeyer, Ross K.

    2017-01-01

    Urban growth often influences the production of ecosystem services. The impacts of urbanization on landscapes can subsequently affect landowners’ perceptions, values and decisions regarding their land. Within land-use and land-change research, very few models of dynamic landscape-scale processes like urbanization incorporate empirically-grounded landowner decision-making processes. Very little attention has focused on the heterogeneous decision-making processes that aggregate to influence broader-scale patterns of urbanization. We examine the land-use tradeoffs faced by individual landowners in one of the United States’ most rapidly urbanizing regions − the urban area surrounding Charlotte, North Carolina. We focus on the land-use decisions of non-industrial private forest owners located across the region’s development gradient. A discrete choice experiment is used to determine the critical factors influencing individual forest owners’ intent to sell their undeveloped properties across a series of experimentally varied scenarios of urban growth. Data are analyzed using a hierarchical Bayesian approach. The estimates derived from the survey data are used to modify a spatially-explicit trend-based urban development potential model, derived from remotely-sensed imagery and observed changes in the region’s socioeconomic and infrastructural characteristics between 2000 and 2011. This modeling approach combines the theoretical underpinnings of behavioral economics with spatiotemporal data describing a region’s historical development patterns. By integrating empirical social preference data into spatially-explicit urban growth models, we begin to more realistically capture processes as well as patterns that drive the location, magnitude and rates of urban growth.

  19. Understanding Rasch measurement: estimation methods for Rasch measures.

    PubMed

    Linacre, J M

    1999-01-01

    Rasch parameter estimation methods can be classified as non-iterative and iterative. Non-iterative methods include the normal approximation algorithm (PROX) for complete dichotomous data. Iterative methods fall into three types. Datum-by-datum methods include Gaussian least-squares, minimum chi-square, and the pairwise (PAIR) method. Marginal methods without distributional assumptions include conditional maximum-likelihood estimation (CMLE), joint maximum-likelihood estimation (JMLE) and log-linear approaches. Marginal methods with distributional assumptions include marginal maximum-likelihood estimation (MMLE) and the normal approximation algorithm (PROX) for missing data. Estimates from all methods are characterized by standard errors and quality-control fit statistics. Standard errors can be local (defined relative to the measure of a particular item) or general (defined relative to the abstract origin of the scale). They can also be ideal (as though the data fit the model) or inflated by the misfit to the model present in the data. Five computer programs, implementing different estimation methods, produce statistically equivalent estimates. Nevertheless, comparing estimates from different programs requires care.

  20. Robust time and frequency domain estimation methods in adaptive control

    NASA Technical Reports Server (NTRS)

    Lamaire, Richard Orville

    1987-01-01

    A robust identification method was developed for use in an adaptive control system. The type of estimator is called the robust estimator, since it is robust to the effects of both unmodeled dynamics and an unmeasurable disturbance. The development of the robust estimator was motivated by a need to provide guarantees in the identification part of an adaptive controller. To enable the design of a robust control system, a nominal model as well as a frequency-domain bounding function on the modeling uncertainty associated with this nominal model must be provided. Two estimation methods are presented for finding parameter estimates, and, hence, a nominal model. One of these methods is based on the well developed field of time-domain parameter estimation. In a second method of finding parameter estimates, a type of weighted least-squares fitting to a frequency-domain estimated model is used. The frequency-domain estimator is shown to perform better, in general, than the time-domain parameter estimator. In addition, a methodology for finding a frequency-domain bounding function on the disturbance is used to compute a frequency-domain bounding function on the additive modeling error due to the effects of the disturbance and the use of finite-length data. The performance of the robust estimator in both open-loop and closed-loop situations is examined through the use of simulations.

  1. System and method for motor parameter estimation

    SciTech Connect

    Luhrs, Bin; Yan, Ting

    2014-03-18

    A system and method for determining unknown values of certain motor parameters includes a motor input device connectable to an electric motor having associated therewith values for known motor parameters and an unknown value of at least one motor parameter. The motor input device includes a processing unit that receives a first input from the electric motor comprising values for the known motor parameters for the electric motor and receives a second input comprising motor data on a plurality of reference motors, including values for motor parameters corresponding to the known motor parameters of the electric motor and values for motor parameters corresponding to the at least one unknown motor parameter value of the electric motor. The processor determines the unknown value of the at least one motor parameter from the first input and the second input and determines a motor management strategy for the electric motor based thereon.

  2. Carbon footprint: current methods of estimation.

    PubMed

    Pandey, Divya; Agrawal, Madhoolika; Pandey, Jai Shanker

    2011-07-01

    Increasing greenhouse gas concentrations in the atmosphere are perturbing the environment, causing grievous global warming and its associated consequences. Following the rule that only the measurable is manageable, measurement of the greenhouse gas intensiveness of different products, bodies, and processes is going on worldwide, expressed as their carbon footprints. The methodologies for carbon footprint calculation are still evolving, and carbon footprinting is emerging as an important tool for greenhouse gas management. The concept of carbon footprinting has permeated and is being commercialized in all areas of life and the economy, but there is little coherence in the definitions and calculations of carbon footprints among studies. There are disagreements over the selection of gases and the order of emissions to be covered in footprint calculations. Standards of greenhouse gas accounting are the common resources used in footprint calculations, although there is no mandatory provision for footprint verification. Because carbon footprinting is intended to be a tool to guide relevant emission cuts and verifications, its standardization at the international level is therefore necessary. The present review describes the prevailing carbon footprinting methods and raises the related issues.

  3. A Comparative Study of Distribution System Parameter Estimation Methods

    SciTech Connect

    Sun, Yannan; Williams, Tess L.; Gourisetti, Sri Nikhil Gup

    2016-07-17

    In this paper, we compare two parameter estimation methods for distribution systems: residual sensitivity analysis and state-vector augmentation with a Kalman filter. These two methods were originally proposed for transmission systems, and are still the most commonly used methods for parameter estimation. Distribution systems have much lower measurement redundancy than transmission systems; therefore, estimating parameters is much more difficult. To increase the robustness of parameter estimation, the two methods are applied with combined measurement snapshots (measurement sets taken at different points in time), so that the redundancy for computing the parameter values is increased. The advantages and disadvantages of both methods are discussed. The results of this paper show that state-vector augmentation is a better approach for parameter estimation in distribution systems. Simulation studies are done on a modified version of the IEEE 13-Node Test Feeder with varying levels of measurement noise and non-zero error in the other system model parameters.

  4. A Method of Estimating Item Characteristic Functions Using the Maximum Likelihood Estimate of Ability

    ERIC Educational Resources Information Center

    Samejima, Fumiko

    1977-01-01

    A method of estimating item characteristic functions is proposed, in which a set of test items, whose operating characteristics are known and which give a constant test information function for a wide range of ability, are used. The method is based on maximum likelihood estimation procedures. (Author/JKS)

  5. Validation of the probability density function for the calculated radiant power of synchrotron radiation according to the Schwinger formalism

    NASA Astrophysics Data System (ADS)

    Klein, Roman

    2016-06-01

    Electron storage rings with appropriate design are primary source standards, the spectral radiant intensity of which can be calculated from measured parameters using the Schwinger equation. PTB uses the electron storage rings BESSY II and MLS for source-based radiometry in the spectral range from the near-infrared to the x-ray region. The uncertainty of the calculated radiant intensity depends on the uncertainty of the measured parameters used for the calculation. Up to now the procedure described in the guide to the expression of uncertainty in measurement (GUM), i.e. the law of propagation of uncertainty, assuming a linear measurement model, was used to determine the combined uncertainty of the calculated spectral intensity, and for the determination of the coverage interval as well. Now it has been tested with a Monte Carlo simulation, according to Supplement 1 to the GUM, whether this procedure is valid for the rather complicated calculation by means of the Schwinger formalism and for different probability distributions of the input parameters. It was found that for typical uncertainties of the input parameters both methods yield similar results.
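
    To make the comparison concrete, the sketch below (Python) propagates uncertainty through a toy nonlinear model y = a*b**2/c, once with the GUM law of propagation and once with a GUM Supplement 1 style Monte Carlo. The model, the input values and the choice of distributions are assumptions of this sketch and have nothing to do with the Schwinger formalism itself.

        import numpy as np

        def model(a, b, c):
            # Toy stand-in for a nonlinear measurement model (NOT the Schwinger formula).
            return a * b**2 / c

        a0, ua = 1.00, 0.02      # normally distributed input
        b0, ub = 2.00, 0.05      # normally distributed input
        c0, uc = 4.00, 0.10      # rectangular input with standard uncertainty uc

        # GUM law of propagation of uncertainty (first-order, linearized model).
        dy_da = b0**2 / c0
        dy_db = 2 * a0 * b0 / c0
        dy_dc = -a0 * b0**2 / c0**2
        u_lpu = np.sqrt((dy_da * ua)**2 + (dy_db * ub)**2 + (dy_dc * uc)**2)

        # GUM Supplement 1 style Monte Carlo propagation of the same inputs.
        rng = np.random.default_rng(1)
        n = 1_000_000
        a = rng.normal(a0, ua, n)
        b = rng.normal(b0, ub, n)
        c = rng.uniform(c0 - np.sqrt(3) * uc, c0 + np.sqrt(3) * uc, n)  # half-width sqrt(3)*uc
        y = model(a, b, c)
        lo, hi = np.percentile(y, [2.5, 97.5])   # 95 % coverage interval from the MC distribution

        print(f"LPU: y = {model(a0, b0, c0):.4f} +/- {u_lpu:.4f}")
        print(f"MC : mean = {y.mean():.4f}, std = {y.std(ddof=1):.4f}, 95% interval = [{lo:.4f}, {hi:.4f}]")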

  6. Variance Difference between Maximum Likelihood Estimation Method and Expected A Posteriori Estimation Method Viewed from Number of Test Items

    ERIC Educational Resources Information Center

    Mahmud, Jumailiyah; Sutikno, Muzayanah; Naga, Dali S.

    2016-01-01

    The aim of this study is to determine the variance difference between the maximum likelihood and expected a posteriori estimation methods viewed from the number of test items of an aptitude test. The variance reflects the accuracy generated by both the maximum likelihood and Bayes estimation methods. The test consists of three subtests, each with 40 multiple-choice…

  7. Research on the estimation method for Earth rotation parameters

    NASA Astrophysics Data System (ADS)

    Yao, Yibin

    2008-12-01

    In this paper, methods of Earth rotation parameter (ERP) estimation based on IGS SINEX files of GPS solutions are discussed in detail. Two different approaches to ERP estimation are considered: one is the parameter transformation method, and the other is direct adjustment with restrictive conditions. The daily SINEX files produced by IGS GPS tracking stations can be used to estimate ERP, and the parameter transformation method simplifies the process. The results indicate that a systematic error exists in ERP estimated from GPS observations alone. For the daily GPS SINEX files, why this distinct systematic error appears in the ERP, whether it affects the estimation of other parameters, and how large its influence is, are questions that need further study.

  8. Investigating the Stability of Four Methods for Estimating Item Bias.

    ERIC Educational Resources Information Center

    Perlman, Carole L.; And Others

    The reliability of item bias estimates was studied for four methods: (1) the transformed delta method; (2) Shepard's modified delta method; (3) Rasch's one-parameter residual analysis; and (4) the Mantel-Haenszel procedure. Bias statistics were computed for each sample using all methods. Data were from administration of multiple-choice items from…

  9. Investigating the Stability of Four Methods for Estimating Item Bias.

    ERIC Educational Resources Information Center

    Perlman, Carole L.; And Others

    The reliability of item bias estimates was studied for four methods: (1) the transformed delta method; (2) Shepard's modified delta method; (3) Rasch's one-parameter residual analysis; and (4) the Mantel-Haenszel procedure. Bias statistics were computed for each sample using all methods. Data were from administration of multiple-choice items from…

  10. Evaluation of Methods to Estimate Understory Fruit Biomass

    PubMed Central

    Lashley, Marcus A.; Thompson, Jeffrey R.; Chitwood, M. Colter; DePerno, Christopher S.; Moorman, Christopher E.

    2014-01-01

    Fleshy fruit is consumed by many wildlife species and is a critical component of forest ecosystems. Because fruit production may change quickly during forest succession, frequent monitoring of fruit biomass may be needed to better understand shifts in wildlife habitat quality. Yet, designing a fruit sampling protocol that is executable on a frequent basis may be difficult, and knowledge of accuracy within monitoring protocols is lacking. We evaluated the accuracy and efficiency of 3 methods to estimate understory fruit biomass (Fruit Count, Stem Density, and Plant Coverage). The Fruit Count method requires visual counts of fruit to estimate fruit biomass. The Stem Density method uses counts of all stems of fruit producing species to estimate fruit biomass. The Plant Coverage method uses land coverage of fruit producing species to estimate fruit biomass. Using linear regression models under a censored-normal distribution, we determined the Fruit Count and Stem Density methods could accurately estimate fruit biomass; however, when comparing AIC values between models, the Fruit Count method was the superior method for estimating fruit biomass. After determining that Fruit Count was the superior method to accurately estimate fruit biomass, we conducted additional analyses to determine the sampling intensity (i.e., percentage of area) necessary to accurately estimate fruit biomass. The Fruit Count method accurately estimated fruit biomass at a 0.8% sampling intensity. In some cases, sampling 0.8% of an area may not be feasible. In these cases, we suggest sampling understory fruit production with the Fruit Count method at the greatest feasible sampling intensity, which could be valuable to assess annual fluctuations in fruit production. PMID:24819253

  11. The use of spatial dose gradients and probability density function to evaluate the effect of internal organ motion for prostate IMRT treatment planning

    NASA Astrophysics Data System (ADS)

    Jiang, Runqing; Barnett, Rob B.; Chow, James C. L.; Chen, Jeff Z. Y.

    2007-03-01

    The aim of this study is to investigate the effects of internal organ motion on IMRT treatment planning of prostate patients using a spatial dose gradient and probability density function. Spatial dose distributions were generated from a Pinnacle3 planning system using a co-planar, five-field intensity modulated radiation therapy (IMRT) technique. Five plans were created for each patient using equally spaced beams but shifting the angular displacement of the beam by 15° increments. Dose profiles taken through the isocentre in anterior-posterior (A-P), right-left (R-L) and superior-inferior (S-I) directions for IMRT plans were analysed by exporting RTOG file data from Pinnacle. The convolution of the 'static' dose distribution D0(x, y, z) and probability density function (PDF), denoted as P(x, y, z), was used to analyse the combined effect of repositioning error and internal organ motion. Organ motion leads to an enlarged beam penumbra. The amount of percentage mean dose deviation (PMDD) depends on the dose gradient and organ motion probability density function. Organ motion dose sensitivity was defined by the rate of change in PMDD with standard deviation of motion PDF and was found to increase with the maximum dose gradient in anterior, posterior, left and right directions. Due to common inferior and superior field borders of the field segments, the sharpest dose gradient will occur in the inferior or both superior and inferior penumbrae. Thus, prostate motion in the S-I direction produces the highest dose difference. The PMDD is within 2.5% when standard deviation is less than 5 mm, but the PMDD is over 2.5% in the inferior direction when standard deviation is higher than 5 mm in the inferior direction. Verification of prostate organ motion in the inferior directions is essential. The margin of the planning target volume (PTV) significantly impacts on the confidence of tumour control probability (TCP) and level of normal tissue complication probability (NTCP
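
    The convolution step described above can be illustrated in one dimension. The sketch below (Python) blurs an assumed static dose profile with zero-mean Gaussian motion PDFs of increasing width and reports a percentage mean dose deviation over a nominal target region; the grid, field size, sigma values and target extent are all invented for the illustration.

        import numpy as np

        # 1-D illustration of D_blurred(x) = (D0 * P)(x): a static profile with sharp
        # penumbrae convolved with a Gaussian motion/setup PDF.  All numbers are invented.
        dx = 0.5                                              # grid spacing, mm
        x = np.arange(-100, 100 + dx, dx)
        d_static = np.where(np.abs(x) <= 40, 100.0, 0.0)      # idealized 80 mm field
        beam_kernel = np.exp(-0.5 * (np.arange(-10, 10 + dx, dx) / 3.0) ** 2)
        d_static = np.convolve(d_static, beam_kernel, mode="same")
        d_static *= 100.0 / d_static.max()                    # renormalize to 100 %

        def blurred_dose(sigma_mm):
            """Convolve the static profile with a zero-mean Gaussian PDF of width sigma_mm."""
            kx = np.arange(-5 * sigma_mm, 5 * sigma_mm + dx, dx)
            pdf = np.exp(-0.5 * (kx / sigma_mm) ** 2)
            pdf /= pdf.sum()                                  # discrete PDF sums to one
            return np.convolve(d_static, pdf, mode="same")

        target = np.abs(x) <= 35                              # nominal target region
        for sigma in (2.0, 5.0, 8.0):                         # motion standard deviations, mm
            d_blur = blurred_dose(sigma)
            pmdd = 100.0 * (d_static[target].mean() - d_blur[target].mean()) / d_static[target].mean()
            print(f"sigma = {sigma:3.0f} mm -> PMDD = {pmdd:5.2f} %")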

  12. A new method for the estimation of the completeness magnitude

    NASA Astrophysics Data System (ADS)

    Godano, C.

    2017-02-01

    The estimation of the magnitude of completeness mc has strong consequences for any statistical analysis of a seismic catalogue and for the evaluation of seismic hazard. Here a new method for its estimation is presented. The goodness of the method has been tested using 104 simulated catalogues. Then the method has been applied to five experimental seismic catalogues: Greece, Italy, Japan, Northern California and Southern California.

  13. An evaluation of methods for estimating decadal stream loads

    USGS Publications Warehouse

    Lee, Casey; Hirsch, Robert M.; Schwarz, Gregory; Holtschlag, David J.; Preston, Stephen D.; Crawford, Charles G.; Vecchia, Skip V.

    2016-01-01

    Effective management of water resources requires accurate information on the mass, or load of water-quality constituents transported from upstream watersheds to downstream receiving waters. Despite this need, no single method has been shown to consistently provide accurate load estimates among different water-quality constituents, sampling sites, and sampling regimes. We evaluate the accuracy of several load estimation methods across a broad range of sampling and environmental conditions. This analysis uses random sub-samples drawn from temporally-dense data sets of total nitrogen, total phosphorus, nitrate, and suspended-sediment concentration, and includes measurements of specific conductance which was used as a surrogate for dissolved solids concentration. Methods considered include linear interpolation and ratio estimators, regression-based methods historically employed by the U.S. Geological Survey, and newer flexible techniques including Weighted Regressions on Time, Season, and Discharge (WRTDS) and a generalized non-linear additive model. No single method is identified to have the greatest accuracy across all constituents, sites, and sampling scenarios. Most methods provide accurate estimates of specific conductance (used as a surrogate for total dissolved solids or specific major ions) and total nitrogen – lower accuracy is observed for the estimation of nitrate, total phosphorus and suspended sediment loads. Methods that allow for flexibility in the relation between concentration and flow conditions, specifically Beale’s ratio estimator and WRTDS, exhibit greater estimation accuracy and lower bias. Evaluation of methods across simulated sampling scenarios indicate that (1) high-flow sampling is necessary to produce accurate load estimates, (2) extrapolation of sample data through time or across more extreme flow conditions reduces load estimate accuracy, and (3) WRTDS and methods that use a Kalman filter or smoothing to correct for departures between
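
    As a sketch of one of the method families evaluated here, the code below (Python) applies the commonly quoted bias-corrected (Beale) ratio estimator of mean daily load to synthetic daily flow and load data; the formula is the standard textbook form, and all data values are invented rather than drawn from the study.

        import numpy as np

        def beale_ratio_load(sampled_load, sampled_flow, mean_flow_all_days):
            """Bias-corrected (Beale) ratio estimate of mean daily load.
            sampled_load / sampled_flow: values on the n sampled days;
            mean_flow_all_days: mean flow over every day of the estimation period."""
            l = np.asarray(sampled_load, dtype=float)
            q = np.asarray(sampled_flow, dtype=float)
            n = l.size
            ml, mq = l.mean(), q.mean()
            s_lq = np.cov(l, q, ddof=1)[0, 1]        # sample covariance of load and flow
            s_qq = np.var(q, ddof=1)                 # sample variance of flow
            correction = (1.0 + s_lq / (n * ml * mq)) / (1.0 + s_qq / (n * mq**2))
            return mean_flow_all_days * (ml / mq) * correction

        if __name__ == "__main__":
            rng = np.random.default_rng(2)
            flow = rng.lognormal(mean=2.0, sigma=0.6, size=365)            # synthetic daily flow
            conc = 0.5 * flow**0.3 * rng.lognormal(0.0, 0.2, size=365)     # synthetic concentration
            load = conc * flow
            days = rng.choice(365, size=24, replace=False)                 # roughly two samples per month
            est = beale_ratio_load(load[days], flow[days], flow.mean())
            print(f"true mean daily load = {load.mean():.2f}, Beale estimate = {est:.2f}")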

  14. An evaluation of methods for estimating decadal stream loads

    NASA Astrophysics Data System (ADS)

    Lee, Casey J.; Hirsch, Robert M.; Schwarz, Gregory E.; Holtschlag, David J.; Preston, Stephen D.; Crawford, Charles G.; Vecchia, Aldo V.

    2016-11-01

    Effective management of water resources requires accurate information on the mass, or load of water-quality constituents transported from upstream watersheds to downstream receiving waters. Despite this need, no single method has been shown to consistently provide accurate load estimates among different water-quality constituents, sampling sites, and sampling regimes. We evaluate the accuracy of several load estimation methods across a broad range of sampling and environmental conditions. This analysis uses random sub-samples drawn from temporally-dense data sets of total nitrogen, total phosphorus, nitrate, and suspended-sediment concentration, and includes measurements of specific conductance which was used as a surrogate for dissolved solids concentration. Methods considered include linear interpolation and ratio estimators, regression-based methods historically employed by the U.S. Geological Survey, and newer flexible techniques including Weighted Regressions on Time, Season, and Discharge (WRTDS) and a generalized non-linear additive model. No single method is identified to have the greatest accuracy across all constituents, sites, and sampling scenarios. Most methods provide accurate estimates of specific conductance (used as a surrogate for total dissolved solids or specific major ions) and total nitrogen - lower accuracy is observed for the estimation of nitrate, total phosphorus and suspended sediment loads. Methods that allow for flexibility in the relation between concentration and flow conditions, specifically Beale's ratio estimator and WRTDS, exhibit greater estimation accuracy and lower bias. Evaluation of methods across simulated sampling scenarios indicate that (1) high-flow sampling is necessary to produce accurate load estimates, (2) extrapolation of sample data through time or across more extreme flow conditions reduces load estimate accuracy, and (3) WRTDS and methods that use a Kalman filter or smoothing to correct for departures between

  15. A source number estimation method for single optical fiber sensor

    NASA Astrophysics Data System (ADS)

    Hu, Junpeng; Huang, Zhiping; Su, Shaojing; Zhang, Yimeng; Liu, Chunwu

    2015-10-01

    The single-channel blind source separation (SCBSS) technique is of great significance in many fields, such as optical fiber communication, sensor detection and image processing, and realizing blind source separation (BSS) from data received by a single optical fiber sensor has a wide range of applications. The performance of many BSS algorithms and signal processing methods is degraded by inaccurate source number estimation. Many excellent algorithms have been proposed to estimate the source number in array signal processing with multiple sensors, but they cannot be applied directly to the single-sensor case. This paper presents a source number estimation method for data received by a single optical fiber sensor. By delay processing, the single-sensor data are converted to a multi-dimensional form and the data covariance matrix is constructed, so that the estimation algorithms used in array signal processing can be applied. The information theoretic criteria (ITC) based methods, represented by AIC and MDL, and Gerschgorin's disk estimation (GDE) are introduced to estimate the source number of the single optical fiber sensor's received signal. To improve the performance of these estimation methods at low signal-to-noise ratio (SNR), a smoothing process is applied to the data covariance matrix, which reduces the fluctuation and uncertainty of its eigenvalues. Simulation results show that the ITC-based methods cannot estimate the source number effectively under colored noise. The GDE method, although its performance is poor at low SNR, is able to estimate the number of sources accurately under colored noise. The experiments also show that the proposed method can be applied to estimate the source number of single-sensor received data.
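
    The delay-embedding and information-theoretic step can be sketched as follows (Python): a single-channel record is converted to a pseudo multichannel matrix by delays, the covariance matrix is formed, and the Wax-Kailath MDL criterion is applied to its eigenvalues. The embedding dimension, the synthetic two-tone signal and the omission of the smoothing step are assumptions of this sketch, not the record's exact procedure.

        import numpy as np

        def delay_embed(x, dim):
            """Convert a single-channel record into a (dim x N) matrix of delayed copies."""
            n = len(x) - dim + 1
            return np.stack([x[i:i + n] for i in range(dim)])

        def mdl_source_number(x, dim=10):
            """Wax-Kailath MDL criterion applied to the delay-embedded covariance eigenvalues."""
            y = delay_embed(np.asarray(x, dtype=float), dim)
            n_snap = y.shape[1]
            cov = (y @ y.T) / n_snap
            eig = np.sort(np.linalg.eigvalsh(cov))[::-1]          # descending eigenvalues
            mdl = np.empty(dim)
            for k in range(dim):
                tail = eig[k:]
                geo = np.exp(np.mean(np.log(tail)))               # geometric mean of the noise eigenvalues
                ari = np.mean(tail)                               # arithmetic mean of the noise eigenvalues
                mdl[k] = -n_snap * (dim - k) * np.log(geo / ari) + 0.5 * k * (2 * dim - k) * np.log(n_snap)
            return int(np.argmin(mdl))

        if __name__ == "__main__":
            rng = np.random.default_rng(3)
            t = np.arange(4000) / 1000.0
            record = (np.sin(2 * np.pi * 37 * t) + 0.8 * np.sin(2 * np.pi * 91 * t)
                      + 0.3 * rng.standard_normal(t.size))        # single-sensor record: two tones plus noise
            # Each real tone spans two eigenvalues of the embedded covariance,
            # so two tones should give an estimate near 4.
            print("estimated signal subspace dimension:", mdl_source_number(record))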

  16. Stochastic BER estimation for coherent QPSK transmission systems with digital carrier phase recovery.

    PubMed

    Zhang, Fan; Gao, Yan; Luo, Yazhi; Chen, Zhangyuan; Xu, Anshi

    2010-04-26

    We propose a stochastic bit error ratio estimation approach based on a statistical analysis of the retrieved signal phase for coherent optical QPSK systems with digital carrier phase recovery. A family of generalized exponential functions is applied to fit the probability density function of the signal samples. The method provides reasonable performance estimation in the presence of both linear and nonlinear transmission impairments while greatly reducing the computational burden compared to Monte Carlo simulation.
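
    The pdf-fitting and tail-integration idea can be sketched as follows (Python), using scipy.stats.gennorm (the generalized normal family) as a stand-in for the generalized exponential functions mentioned above, toy phase-error samples, and a first-order +/- pi/4 decision boundary for QPSK; none of these choices is taken from the paper.

        import numpy as np
        from scipy.stats import gennorm

        # Hypothetical retrieved-phase errors after carrier phase recovery (radians).
        rng = np.random.default_rng(4)
        phase_err = 0.12 * rng.standard_normal(50_000)

        # Fit a generalized-exponential-type family to the samples.
        beta, loc, scale = gennorm.fit(phase_err)

        # To first order, a QPSK symbol error occurs when the phase error leaves the
        # +/- pi/4 decision sector; integrate both tails of the fitted pdf.
        boundary = np.pi / 4
        p_sym = gennorm.sf(boundary, beta, loc, scale) + gennorm.cdf(-boundary, beta, loc, scale)
        print(f"fitted shape beta = {beta:.2f}, estimated symbol-error probability = {p_sym:.3e}")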

  17. Method for estimating air-drying times of lumber

    Treesearch

    William T. Simpson; C. Arthur Hart

    2001-01-01

    Published information on estimated air-drying times of lumber is of limited usefulness because it is restricted to a specific location or to the time of year the lumber is stacked for drying. At best, these estimates give a wide range of possible times over a broad range of possible locations and stacking dates. In this paper, we describe a method for estimating air-...

  18. Evaluating maximum likelihood estimation methods to determine the hurst coefficients

    NASA Astrophysics Data System (ADS)

    Kendziorski, C. M.; Bassingthwaighte, J. B.; Tonellato, P. J.

    1999-12-01

    A maximum likelihood estimation method implemented in S-PLUS (S-MLE) to estimate the Hurst coefficient (H) is evaluated. The Hurst coefficient, with 0.5 < H < 1, characterizes long memory time series by quantifying the rate of decay of the autocorrelation function. S-MLE was developed to estimate H for fractionally differenced (fd) processes. However, in practice it is difficult to distinguish between fd processes and fractional Gaussian noise (fGn) processes. Thus, the method is evaluated for estimating H for both fd and fGn processes. S-MLE gave biased results of H for fGn processes of any length and for fd processes of lengths less than 2^10. A modified method is proposed to correct for this bias. It gives reliable estimates of H for both fd and fGn processes of length greater than or equal to 2^11.

  19. An automated method of tuning an attitude estimator

    NASA Technical Reports Server (NTRS)

    Mason, Paul A. C.; Mook, D. Joseph

    1995-01-01

    Attitude determination is a major element of the operation and maintenance of a spacecraft. There are several existing methods of determining the attitude of a spacecraft. One of the most commonly used methods utilizes the Kalman filter to estimate the attitude of the spacecraft. Given an accurate model of a system and adequate observations, a Kalman filter can produce accurate estimates of the attitude. If the system model, filter parameters, or observations are inaccurate, the attitude estimates may be degraded. Therefore, it is advantageous to develop a method of automatically tuning the Kalman filter to produce the accurate estimates. In this paper, a three-axis attitude determination Kalman filter, which uses only magnetometer measurements, is developed and tested using real data. The appropriate filter parameters are found via the Process Noise Covariance Estimator (PNCE). The PNCE provides an optimal criterion for determining the best filter parameters.

  20. Time evolution of the probability density function of a gamma-ray burst: a possible indication of the turbulent origin of gamma-ray bursts

    NASA Astrophysics Data System (ADS)

    Bhatt, Nilay; Bhattacharyya, Subir

    2012-02-01

    The time series of a gamma-ray burst (GRB) is a non-stationary time series and all of its statistical properties vary with time. Considering that each GRB is a different manifestation of the same stochastic process, we have studied the time-dependent and time-averaged probability density functions (pdfs), which characterize the underlying stochastic process. The pdfs are fitted with a Gaussian distribution function. It has been argued that the Gaussian pdfs possibly indicate the turbulent origin of GRBs. The spectral and temporal evolutions of GRBs are also studied using the evolution of spectral forms, colour-colour diagrams and hysteresis loops. The results do not contradict the interpretation of the turbulence of GRBs.
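
    The pdf-fitting step can be sketched as follows (Python) on a synthetic stand-in for a detrended count-rate series; the real analysis of course uses time-resolved GRB light curves, so the data and the goodness-of-fit check here are purely illustrative.

        import numpy as np
        from scipy.stats import norm, kstest

        # Synthetic stand-in for a (detrended) count-rate series.
        rng = np.random.default_rng(5)
        rate = rng.normal(loc=500.0, scale=40.0, size=5000)

        # Time-averaged pdf of the fluctuations: fit a Gaussian and check the fit.
        mu, sigma = norm.fit(rate)
        # Note: the KS p-value is only approximate when the parameters are fitted to the same data.
        stat, p_value = kstest(rate, "norm", args=(mu, sigma))
        print(f"Gaussian fit: mu = {mu:.1f}, sigma = {sigma:.1f}; KS statistic = {stat:.3f} (p ~ {p_value:.2f})")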

  1. Estimation of Glomerular Podocyte Number: A Selection of Valid Methods

    PubMed Central

    Bertram, John F.; Nicholas, Susanne B.; White, Kathryn

    2013-01-01

    The podocyte depletion hypothesis has emerged as an important unifying concept in glomerular pathology. The estimation of podocyte number is therefore often a critical component of studies of progressive renal diseases. Despite this, there is little uniformity in the biomedical literature with regard to the methods used to estimate this important parameter. Here we review a selection of valid methods for estimating podocyte number: exhaustive enumeration method, Weibel and Gomez method, disector/Cavalieri combination, disector/fractionator combination, and thick-and-thin section method. We propose the use of the disector/fractionator method for studies in which controlled sectioning of tissue is feasible, reserving the Weibel and Gomez method for studies based on archival or routine pathology material. PMID:23833256

  2. Methods for Estimating Medical Expenditures Attributable to Intimate Partner Violence

    ERIC Educational Resources Information Center

    Brown, Derek S.; Finkelstein, Eric A.; Mercy, James A.

    2008-01-01

    This article compares three methods for estimating the medical cost burden of intimate partner violence against U.S. adult women (18 years and older), 1 year postvictimization. To compute the estimates, prevalence data from the National Violence Against Women Survey are combined with cost data from the Medical Expenditure Panel Survey, the…

  3. Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method

    ERIC Educational Resources Information Center

    Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey

    2013-01-01

    Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…

  4. A new method for estimating extreme rainfall probabilities

    SciTech Connect

    Harper, G.A.; O'Hara, T.F.; Morris, D.I.

    1994-02-01

    As part of an EPRI-funded research program, the Yankee Atomic Electric Company developed a new method for estimating probabilities of extreme rainfall. It can be used, along with other techniques, to improve the estimation of probable maximum precipitation values for specific basins or regions.

  5. Methods for Estimating Medical Expenditures Attributable to Intimate Partner Violence

    ERIC Educational Resources Information Center

    Brown, Derek S.; Finkelstein, Eric A.; Mercy, James A.

    2008-01-01

    This article compares three methods for estimating the medical cost burden of intimate partner violence against U.S. adult women (18 years and older), 1 year postvictimization. To compute the estimates, prevalence data from the National Violence Against Women Survey are combined with cost data from the Medical Expenditure Panel Survey, the…

  6. Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method

    ERIC Educational Resources Information Center

    Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey

    2013-01-01

    Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…

  7. Performance of sampling methods to estimate log characteristics for wildlife.

    Treesearch

    Lisa J. Bate; Torolf R. Torgersen; Michael J. Wisdom; Edward O. Garton

    2004-01-01

    Accurate estimation of the characteristics of log resources, or coarse woody debris (CWD), is critical to effective management of wildlife and other forest resources. Despite the importance of logs as wildlife habitat, methods for sampling logs have traditionally focused on silvicultural and fire applications. These applications have emphasized estimates of log volume...

  8. A Novel Monopulse Angle Estimation Method for Wideband LFM Radars

    PubMed Central

    Zhang, Yi-Xiong; Liu, Qi-Fan; Hong, Ru-Jia; Pan, Ping-Ping; Deng, Zhen-Miao

    2016-01-01

    Traditional monopulse angle estimation is mainly based on phase-comparison and amplitude-comparison methods, which are commonly adopted in narrowband radars. In modern radar systems, wideband radars are becoming more and more important, while angle estimation for wideband signals has been little studied in previous works. Because the noise in wideband radars has a larger bandwidth than in narrowband radars, the challenge lies in accumulating the energy of the high resolution range profile (HRRP) of the monopulse channels. In wideband radars, linear frequency modulated (LFM) signals are frequently utilized. In this paper, we investigate the monopulse angle estimation problem for wideband LFM signals. To accumulate the energy of the received echo signals from different scatterers of a target, we propose a cross-correlation operation, which achieves good performance in low signal-to-noise ratio (SNR) conditions. In the proposed algorithm, the problem of angle estimation is converted to estimating the frequency of the cross-correlation function (CCF). Experimental results demonstrate performance similar to the traditional amplitude-comparison method, indicating that the proposed method can be adopted for angle estimation. When adopting the proposed method, future radars may need only wideband signals for both tracking and imaging, which can greatly increase the data rate and strengthen the capability of anti-jamming. More importantly, the estimated angle does not become ambiguous at arbitrary angles, which can significantly extend the estimated angle range in wideband radars. PMID:27271629

  9. A Novel Monopulse Angle Estimation Method for Wideband LFM Radars.

    PubMed

    Zhang, Yi-Xiong; Liu, Qi-Fan; Hong, Ru-Jia; Pan, Ping-Ping; Deng, Zhen-Miao

    2016-06-03

    Traditional monopulse angle estimation is mainly based on phase-comparison and amplitude-comparison methods, which are commonly adopted in narrowband radars. In modern radar systems, wideband radars are becoming more and more important, while angle estimation for wideband signals has been little studied in previous works. Because the noise in wideband radars has a larger bandwidth than in narrowband radars, the challenge lies in accumulating the energy of the high resolution range profile (HRRP) of the monopulse channels. In wideband radars, linear frequency modulated (LFM) signals are frequently utilized. In this paper, we investigate the monopulse angle estimation problem for wideband LFM signals. To accumulate the energy of the received echo signals from different scatterers of a target, we propose a cross-correlation operation, which achieves good performance in low signal-to-noise ratio (SNR) conditions. In the proposed algorithm, the problem of angle estimation is converted to estimating the frequency of the cross-correlation function (CCF). Experimental results demonstrate performance similar to the traditional amplitude-comparison method, indicating that the proposed method can be adopted for angle estimation. When adopting the proposed method, future radars may need only wideband signals for both tracking and imaging, which can greatly increase the data rate and strengthen the capability of anti-jamming. More importantly, the estimated angle does not become ambiguous at arbitrary angles, which can significantly extend the estimated angle range in wideband radars.

  10. An improved method of monopulse estimation in PD radar

    NASA Astrophysics Data System (ADS)

    Wang, Jun; Guo, Peng; Lei, Peng; Wei, Shaoming

    2011-10-01

    Monopulse estimation is an angle measurement method with a high data rate, high measurement precision and good anti-jamming ability, since the angle information of the target is obtained by comparing echoes received in two or more simultaneous antenna beams. However, the data rate of this method decreases due to coherent integration when applied in pulse Doppler (PD) radar. This paper presents an improved method of monopulse estimation in PD radar, in which the received echoes are selected by shifting before coherent integration, detection and angle measurement. It can increase the data rate while maintaining angle measurement precision, and its validity is verified by theoretical analysis and simulation results.

  11. Uncertainty estimation in seismo-acoustic reflection travel time inversion.

    PubMed

    Dettmer, Jan; Dosso, Stan E; Holland, Charles W

    2007-07-01

    This paper develops a nonlinear Bayesian inversion for high-resolution seabed reflection travel time data including rigorous uncertainty estimation and examination of statistical assumptions. Travel time data are picked on seismo-acoustic traces and inverted for a layered sediment sound-velocity model. Particular attention is paid to picking errors which are often biased, correlated, and nonstationary. Non-Toeplitz data covariance matrices are estimated and included in the inversion along with unknown travel time offset (bias) parameters to account for these errors. Simulated experiments show that neglecting error covariances and biases can cause misleading inversion results with unrealistically high confidence. The inversion samples the posterior probability density and provides a solution in terms of one- and two-dimensional marginal probability densities, correlations, and credibility intervals. Statistical assumptions are examined through the data residuals with rigorous statistical tests. The method is applied to shallow-water data collected on the Malta Plateau during the SCARAB98 experiment.

  12. Computerised prostate boundary estimation of ultrasound images using radial bas-relief method.

    PubMed

    Liu, Y J; Ng, W S; Teo, M Y; Lim, H C

    1997-09-01

    A new method is presented for automatic prostate boundary detection in ultrasound images taken transurethrally or transrectally. This is one of the stages in the implementation of a robotic procedure for prostate surgery performed by a robot known as the robot for urology (UROBOT). Unlike most edge detection methods, which detect object edges by means of either a spatial filter (such as Sobel, Laplacian or something of that nature) or a texture descriptor (local signal-to-noise ratio, joint probability density function etc.), this new approach employs a technique called radial bas-relief (RBR) to outline the prostate boundary area automatically. The results show that the RBR method works well in the detection of the prostate boundary in ultrasound images. It can also be useful for boundary detection problems in medical images where the object boundary is hard to detect using traditional edge detection algorithms, such as ultrasound of the uterus and kidney.

  13. Comparison of haemoglobin estimates using direct & indirect cyanmethaemoglobin methods

    PubMed Central

    Bansal, Priyanka Gupta; Toteja, Gurudayal Singh; Bhatia, Neena; Gupta, Sanjeev; Kaur, Manpreet; Adhikari, Tulsi; Garg, Ashok Kumar

    2016-01-01

    Background & objectives: Estimation of haemoglobin is the most widely used method to assess anaemia. Although the direct cyanmethaemoglobin method is the recommended method for estimation of haemoglobin, it may not be feasible under field conditions. Hence, the present study was undertaken to compare the indirect cyanmethaemoglobin method against the conventional direct method for haemoglobin estimation. Methods: Haemoglobin levels were estimated for 888 adolescent girls aged 11-18 yr residing in an urban slum in Delhi by both the direct and indirect cyanmethaemoglobin methods, and the results were compared. Results: The mean haemoglobin levels for 888 whole blood samples estimated by the direct and indirect cyanmethaemoglobin methods were 116.1 ± 12.7 and 110.5 ± 12.5 g/l, respectively, with a mean difference of 5.67 g/l (95% confidence interval: 5.45 to 5.90, P<0.001), which is equivalent to 0.567 g%. The prevalence of anaemia was reported as 59.6 and 78.2 per cent by the direct and indirect methods, respectively. Sensitivity and specificity of the indirect cyanmethaemoglobin method were 99.2 and 56.4 per cent, respectively. Using regression analysis, a prediction equation was developed for indirect haemoglobin values. Interpretation & conclusions: The present findings revealed that the indirect cyanmethaemoglobin method overestimated the prevalence of anaemia as compared to the direct method. However, if a correction factor is applied, the indirect method could be successfully used for estimating true haemoglobin levels. More studies should be undertaken to establish the agreement and correction factor between direct and indirect cyanmethaemoglobin methods. PMID:28256465
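
    The regression-based correction step can be sketched as follows (Python), on synthetic paired readings whose offset merely mimics the kind of systematic difference reported above; the slope, intercept and anaemia cut-off shown are therefore illustrative, not the study's prediction equation.

        import numpy as np

        # Synthetic paired measurements in g/l (values invented): the indirect method
        # reads systematically low, mimicking the offset described in the study.
        rng = np.random.default_rng(6)
        direct = rng.normal(116.0, 12.7, size=400)
        indirect = direct - 5.7 + rng.normal(0.0, 3.0, size=400)

        # Prediction equation: regress direct values on indirect values, then use it
        # to correct new indirect readings.
        slope, intercept = np.polyfit(indirect, direct, deg=1)
        print(f"predicted direct Hb = {slope:.3f} * indirect + {intercept:.2f}")

        corrected = slope * indirect + intercept
        cutoff = 120.0                                    # illustrative anaemia cut-off, g/l
        for name, values in [("direct", direct), ("indirect", indirect), ("corrected", corrected)]:
            print(f"anaemia prevalence ({name}): {100.0 * np.mean(values < cutoff):.1f} %")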

  14. A comparison of methods for estimating the geoelectric field

    NASA Astrophysics Data System (ADS)

    Weigel, R. S.

    2017-02-01

    The geoelectric field is the primary input used for estimation of geomagnetically induced currents (GICs) in conducting systems. We compare three methods for estimating the geoelectric field given the measured geomagnetic field at four locations in the U.S. during time intervals with average Kp in the range of 2-3 and when the measurements had few data spikes and no baseline jumps. The methods include using (1) a preexisting 1-D conductivity model, (2) a conventional 3-D frequency domain method, and (3) a robust and remote reference 3-D frequency domain method. The quality of the estimates is determined using the power spectrum (in the period range 9.1 to 18,725 s) of estimation errors along with the prediction efficiency summary statistic. It is shown that with respect to these quality metrics, Method 1 produces average out-of-sample electric field estimation errors with a variance that can be equal to or larger than the average measured variance (due to underestimation or overestimation, respectively), and Method 3 produces reliable but slightly lower quality estimates than Method 2 for the time intervals and locations considered.
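
    In the spirit of Method 1, the sketch below (Python) estimates a horizontal E component from one horizontal B component in the frequency domain using the surface impedance of a uniform half-space, Z(omega) = sqrt(i*omega*mu0/sigma). The conductivity, sampling interval and synthetic B series are assumptions, and the paper's actual 1-D conductivity models are layered rather than uniform.

        import numpy as np

        MU0 = 4e-7 * np.pi                              # vacuum permeability, H/m

        def geoelectric_from_b(b_tesla, dt, sigma=1e-3):
            """Plane-wave estimate of a horizontal E field (V/m) from the orthogonal
            horizontal B component, for a uniform half-space of conductivity sigma (S/m)."""
            n = len(b_tesla)
            B = np.fft.rfft(b_tesla)
            omega = 2 * np.pi * np.fft.rfftfreq(n, dt)
            Z = np.sqrt(1j * omega * MU0 / sigma)       # half-space surface impedance (ohm)
            E = Z * B / MU0                             # E = Z * H, with H = B / mu0
            return np.fft.irfft(E, n)

        if __name__ == "__main__":
            dt = 60.0                                   # assumed 1-minute geomagnetic data
            t = np.arange(0, 24 * 3600, dt)
            # Synthetic disturbance: 50 nT and 25 nT oscillations at two ULF periods.
            b = 50e-9 * np.sin(2 * np.pi * t / 1800) + 25e-9 * np.sin(2 * np.pi * t / 300)
            e = geoelectric_from_b(b, dt, sigma=1e-3)
            print(f"peak |E| ~ {1e6 * np.abs(e).max():.0f} mV/km")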

  15. Methods for Estimating Uncertainty in Factor Analytic Solutions

    EPA Science Inventory

    The EPA PMF (Environmental Protection Agency positive matrix factorization) version 5.0 and the underlying multilinear engine-executable ME-2 contain three methods for estimating uncertainty in factor analytic models: classical bootstrap (BS), displacement of factor elements (DI...

  16. Methods for Estimating Uncertainty in Factor Analytic Solutions

    EPA Science Inventory

    The EPA PMF (Environmental Protection Agency positive matrix factorization) version 5.0 and the underlying multilinear engine-executable ME-2 contain three methods for estimating uncertainty in factor analytic models: classical bootstrap (BS), displacement of factor elements (DI...

  17. A method for the estimation of urinary testosterone

    PubMed Central

    Ismail, A. A. A.; Harkness, R. A.

    1966-01-01

    1. A method has been developed for the estimation of testosterone in human urine by using acid hydrolysis followed by a quantitative form of a modified Girard reaction that separates a `conjugated-ketone' fraction from a urine extract; this is followed by column chromatography on alumina and paper chromatography. 2. Comparison of methods of estimation of testosterone in the final fraction shows that estimation by gas–liquid chromatography is more reproducible than by colorimetric methods applied to the same eluates from the paper chromatogram. 3. The mean recovery of testosterone by gas–liquid chromatography is 79·5%, and this method appears to be specific for testosterone. 4. The procedure is relatively rapid. Six determinations can be performed by one worker in 2 days. 5. Results of determinations on human urine are briefly presented. In general, they are similar to earlier estimates, but the maximal values are lower. PMID:5964968

  18. A bootstrap method for estimating uncertainty of water quality trends

    USGS Publications Warehouse

    Hirsch, Robert M.; Archfield, Stacey A.; DeCicco, Laura

    2015-01-01

    Estimation of the direction and magnitude of trends in surface water quality remains a problem of great scientific and practical interest. The Weighted Regressions on Time, Discharge, and Season (WRTDS) method was recently introduced as an exploratory data analysis tool to provide flexible and robust estimates of water quality trends. This paper enhances the WRTDS method through the introduction of the WRTDS Bootstrap Test (WBT), an extension of WRTDS that quantifies the uncertainty in WRTDS-estimates of water quality trends and offers various ways to visualize and communicate these uncertainties. Monte Carlo experiments are applied to estimate the Type I error probabilities for this method. WBT is compared to other water-quality trend-testing methods appropriate for data sets of one to three decades in length with sampling frequencies of 6–24 observations per year. The software to conduct the test is in the EGRETci R-package.
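
    As a much-simplified illustration of the resampling idea (not the WBT itself, which resamples within the WRTDS model), the sketch below (Python) applies a generic moving-block bootstrap to annual concentration values and reports an uncertainty interval for a simple linear trend slope; the block length and synthetic data are assumptions.

        import numpy as np

        def block_bootstrap_trend(years, values, block_len=3, n_boot=2000, seed=7):
            """Moving-block bootstrap interval for a linear trend slope (illustration only)."""
            rng = np.random.default_rng(seed)
            years = np.asarray(years, dtype=float)
            values = np.asarray(values, dtype=float)
            n = len(values)
            n_blocks = int(np.ceil(n / block_len))
            slopes = np.empty(n_boot)
            for b in range(n_boot):
                starts = rng.integers(0, n - block_len + 1, size=n_blocks)
                idx = np.concatenate([np.arange(s, s + block_len) for s in starts])[:n]
                slopes[b] = np.polyfit(years[idx], values[idx], 1)[0]   # slope of the resampled pairs
            lo, hi = np.percentile(slopes, [2.5, 97.5])
            return np.polyfit(years, values, 1)[0], (lo, hi)

        if __name__ == "__main__":
            rng = np.random.default_rng(8)
            yrs = np.arange(1995, 2015)
            conc = 2.0 - 0.02 * (yrs - yrs[0]) + 0.15 * rng.standard_normal(yrs.size)
            slope, (lo, hi) = block_bootstrap_trend(yrs, conc)
            print(f"trend = {slope:.4f} per year, 95% bootstrap interval = [{lo:.4f}, {hi:.4f}]")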

  19. Evapotranspiration: Mass balance measurements compared with flux estimation methods

    USDA-ARS?s Scientific Manuscript database

    Evapotranspiration (ET) may be measured by mass balance methods and estimated by flux sensing methods. The mass balance methods are typically restricted in terms of the area that can be represented (e.g., surface area of weighing lysimeter (LYS) or equivalent representative area of neutron probe (NP...

  20. Comparison of haemoglobin estimates using direct & indirect cyanmethaemoglobin methods.

    PubMed

    Bansal, Priyanka Gupta; Toteja, Gurudayal Singh; Bhatia, Neena; Gupta, Sanjeev; Kaur, Manpreet; Adhikari, Tulsi; Garg, Ashok Kumar

    2016-10-01

    Estimation of haemoglobin is the most widely used method to assess anaemia. Although the direct cyanmethaemoglobin method is the recommended method for estimation of haemoglobin, it may not be feasible under field conditions. Hence, the present study was undertaken to compare the indirect cyanmethaemoglobin method against the conventional direct method for haemoglobin estimation. Haemoglobin levels were estimated for 888 adolescent girls aged 11-18 yr residing in an urban slum in Delhi by both the direct and indirect cyanmethaemoglobin methods, and the results were compared. The mean haemoglobin levels for 888 whole blood samples estimated by the direct and indirect cyanmethaemoglobin methods were 116.1 ± 12.7 and 110.5 ± 12.5 g/l, respectively, with a mean difference of 5.67 g/l (95% confidence interval: 5.45 to 5.90, P<0.001), which is equivalent to 0.567 g%. The prevalence of anaemia was reported as 59.6 and 78.2 per cent by the direct and indirect methods, respectively. Sensitivity and specificity of the indirect cyanmethaemoglobin method were 99.2 and 56.4 per cent, respectively. Using regression analysis, a prediction equation was developed for indirect haemoglobin values. The present findings revealed that the indirect cyanmethaemoglobin method overestimated the prevalence of anaemia as compared to the direct method. However, if a correction factor is applied, the indirect method could be successfully used for estimating true haemoglobin levels. More studies should be undertaken to establish the agreement and correction factor between direct and indirect cyanmethaemoglobin methods.

  1. A hybrid displacement estimation method for ultrasonic elasticity imaging.

    PubMed

    Chen, Lujie; Housden, R; Treece, Graham; Gee, Andrew; Prager, Richard

    2010-04-01

    Axial displacement estimation is fundamental to many freehand quasistatic ultrasonic strain imaging systems. In this paper, we present a novel estimation method that combines the strengths of quality-guided tracking, multi-level correlation, and phase-zero search to achieve high levels of accuracy and robustness. The paper includes a full description of the hybrid method, in vivo examples to illustrate the method's clinical relevance, and finite element simulations to assess its accuracy. Quantitative and qualitative comparisons are made with leading single- and multi-level alternatives. In the in vivo examples, the hybrid method produces fewer obvious peak-hopping errors, and in simulation, the hybrid method is found to reduce displacement estimation errors by 5 to 50%. With typical clinical data, the hybrid method can generate more than 25 strain images per second on commercial hardware; this is comparable with the alternative approaches considered in this paper.

  2. An investigation of new methods for estimating parameter sensitivities

    NASA Technical Reports Server (NTRS)

    Beltracchi, Todd J.; Gabriele, Gary A.

    1988-01-01

    Parameter sensitivity is defined as the estimation of changes in the modeling functions and the design variables due to small changes in the fixed parameters of the formulation. Current methods for estimating parameter sensitivities either require difficult-to-obtain second-order information or do not return reliable estimates of the derivatives. Additionally, all the methods assume that the set of active constraints does not change in a neighborhood of the estimation point. If the active set does in fact change, then any extrapolations based on these derivatives may be in error. The objective here is to investigate more efficient new methods for estimating parameter sensitivities when the active set changes. The new method is based on the recursive quadratic programming (RQP) method used in conjunction with a differencing formula to produce estimates of the sensitivities. This is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFS method employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate the parameter sensitivity. To handle changes in the active set, a deflection algorithm is proposed for those cases where the new set of active constraints remains linearly independent. For those cases where dependencies occur, a directional derivative is proposed. A few simple examples are included for the algorithm, but extensive testing has not yet been performed.

  3. Estimation and classification by sigmoids based on mutual information

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1994-01-01

    An estimate of the probability density function of a random vector is obtained by maximizing the mutual information between the input and the output of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's method, applied to an estimated density, yields a recursive maximum likelihood estimator, consisting of a single internal layer of sigmoids, for a random variable or a random sequence. Applications to diamond classification and to the prediction of a sunspot process are demonstrated.

  4. A posteriori pointwise error estimates for the boundary element method

    SciTech Connect

    Paulino, G.H.; Gray, L.J.; Zarikian, V.

    1995-01-01

    This report presents a new approach for a posteriori pointwise error estimation in the boundary element method. The estimator relies upon the evaluation of hypersingular integral equations, and is therefore intrinsic to the boundary integral equation approach. This property allows some theoretical justification by mathematically correlating the exact and estimated errors. A methodology is developed for approximating the error on the boundary as well as in the interior of the domain. In the interior, error estimates for both the function and its derivatives (e.g. potential and interior gradients for potential problems, displacements and stresses for elasticity problems) are presented. Extensive computational experiments have been performed for the two dimensional Laplace equation on interior domains, employing Dirichlet and mixed boundary conditions. The results indicate that the error estimates successfully track the form of the exact error curve. Moreover, a reasonable estimate of the magnitude of the actual error is also obtained.

  5. Relative Camera Pose Estimation Method Using Optimization on the Manifold

    NASA Astrophysics Data System (ADS)

    Cheng, C.; Hao, X.; Li, J.

    2017-05-01

    To solve the problem of relative camera pose estimation, a method using optimization with respect to the manifold is proposed. First, the general state estimation model using optimization is derived, going from the maximum-a-posteriori (MAP) model to the nonlinear least squares (NLS) model. Then the camera pose estimation model is cast in this general state estimation framework, with the rigid body transformation parameterized by the Lie group/algebra. The Jacobian of the point-pose model with respect to the Lie group/algebra is derived in detail, and thus the optimization model of the rigid body transformation is established. Experimental results show that, compared with the original algorithms, the approaches with optimization obtain higher accuracy in both rotation and translation, while avoiding the singularity of the Euler angle parameterization of rotation. Thus the proposed method can estimate relative camera pose with high accuracy and robustness.

  6. Methods for Estimation of Market Power in Electric Power Industry

    NASA Astrophysics Data System (ADS)

    Turcik, M.; Oleinikova, I.; Junghans, G.; Kolcun, M.

    2012-01-01

    The article addresses the topical issue of the newly arisen market power phenomenon in the electric power industry. The authors point out the importance of effective instruments and methods for credible estimation of market power on a liberalized electricity market, as well as the forms and consequences of market power abuse. The fundamental principles and methods of market power estimation are given along with the most common relevant indicators. Furthermore, a proposal for determining the relevant market place, taking into account the specific features of the power system, and a theoretical example of estimating the residual supply index (RSI) in the electricity market are given.
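
    The residual supply index itself is a simple ratio, and the sketch below (Python) computes it for each supplier as RSI_i = (total available capacity - capacity of supplier i) / system demand; the capacities, the demand and the 1.1 pivotal-supplier threshold are invented or merely commonly cited values, not figures from the article.

        # Residual Supply Index (RSI) illustration with invented capacities and demand.
        # RSI_i = (total available capacity - capacity of supplier i) / system demand;
        # values below roughly 1.1 are commonly cited as indicating a pivotal supplier.
        capacities_mw = {"GenCo A": 1200.0, "GenCo B": 800.0, "GenCo C": 600.0, "GenCo D": 400.0}
        demand_mw = 2300.0

        total_capacity = sum(capacities_mw.values())
        for name, cap in sorted(capacities_mw.items(), key=lambda kv: -kv[1]):
            rsi = (total_capacity - cap) / demand_mw
            flag = "pivotal" if rsi < 1.1 else "not pivotal"
            print(f"{name}: RSI = {rsi:.2f} ({flag})")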

  7. A Channelization-Based DOA Estimation Method for Wideband Signals

    PubMed Central

    Guo, Rui; Zhang, Yue; Lin, Qianqiang; Chen, Zengping

    2016-01-01

    In this paper, we propose a novel direction of arrival (DOA) estimation method for wideband signals with sensor arrays. The proposed method splits the wideband array output into multiple frequency sub-channels and estimates the signal parameters using a digital channelization receiver. Based on the output sub-channels, a channelization-based incoherent signal subspace method (Channelization-ISM) and a channelization-based test of orthogonality of projected subspaces method (Channelization-TOPS) are proposed. Channelization-ISM applies narrowband signal subspace methods on each sub-channel independently. Then the arithmetic mean or geometric mean of the estimated DOAs from each sub-channel gives the final result. Channelization-TOPS measures the orthogonality between the signal and the noise subspaces of the output sub-channels to estimate DOAs. The proposed channelization-based method isolates signals in different bandwidths reasonably and improves the output SNR. It outperforms the conventional ISM and TOPS methods on estimation accuracy and dynamic range, especially in real environments. Besides, the parallel processing architecture makes it easy to implement on hardware. A wideband digital array radar (DAR) using direct wideband radio frequency (RF) digitization is presented. Experiments carried out in a microwave anechoic chamber with the wideband DAR are presented to demonstrate the performance. The results verify the effectiveness of the proposed method. PMID:27384566

  8. Weight Estimation Methods in Children: A Systematic Review.

    PubMed

    Young, Kelly D; Korotzer, Noah C

    2016-10-01

    We seek to collect, review, evaluate, and synthesize the current literature focusing on all published methods of pediatric weight estimation. We conducted a literature review using PubMed and Web of Science databases, and the Google Scholar search engine, with the "similar articles" feature, as well as review of the bibliographies of identified studies. We excluded studies estimating weight of neonates, predominantly adults without separate information for children, child self-reported weight, and studies estimating outcomes other than weight. Quantitative outcomes of accuracy (proportion within 10% of actual weight), mean percentage error, and mean bias were preferred. Eighty studies met inclusion criteria with predominant methods: parent or health care worker weight estimation, age-based formulae, and length-based estimation without (eg, Broselow) or with adjustment for body habitus (eg, Pediatric Advanced Weight-Prediction in the Emergency Room, Mercy). Parent estimation was the most accurate at predicting total (actual) body weight, with length-based methods with habitus adjustment next. Length-based methods outperformed age-based formulae, and both tended to underestimate the weight of children from populations with high obesity rates and overestimate the weight of children from populations with high malnourishment rates. Health care worker estimation was not accurate. Parent estimation and length-based methods with adjustment for body habitus are the most accurate methods to predict children's total (actual) body weight. Age-based formulae and length-based methods without habitus adjustment likely tend to predict ideal body weight. Copyright © 2016 American College of Emergency Physicians. Published by Elsevier Inc. All rights reserved.
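
    As a concrete example of the age-based family of rules reviewed above, the sketch below (Python) implements one widely cited APLS-style formula, weight of roughly 2 x (age + 4) kg for ages 1-10; as the review notes, such formulae ignore habitus and tend toward ideal rather than actual body weight, and the age range enforced here is an assumption of the sketch.

        def apls_style_weight_kg(age_years: float) -> float:
            """One widely cited age-based rule, weight ~ 2 * (age + 4) kg (illustrative only).
            Age-based formulae of this kind ignore habitus and tend toward ideal body weight."""
            if not 1 <= age_years <= 10:
                raise ValueError("rules of this form are usually quoted for ages of about 1-10 years")
            return 2.0 * (age_years + 4.0)

        if __name__ == "__main__":
            for age in (1, 3, 5, 8, 10):
                print(f"age {age:>2} y -> estimated weight {apls_style_weight_kg(age):.0f} kg")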

  9. Simultaneous Estimation of Esomeprazole and Domperidone by UV Spectrophotometric Method

    PubMed Central

    Prabu, S. Lakshmana; Shirwaikar, A.; Shirwaikar, Annie; Kumar, C. Dinesh; Joseph, A.; Kumar, R.

    2008-01-01

    A novel, simple, sensitive and rapid spectrophotometric method has been developed for simultaneous estimation of esomeprazole and domperidone. The method involved solving simultaneous equations based on measurement of absorbance at two wavelengths, 301 nm and 284 nm, λ max of esomeprazole and domperidone respectively. Beer's law was obeyed in the concentration range of 5-20 μg/ml and 8-30 μg/ml for esomeprazole and domperidone respectively. The method was found to be precise, accurate, and specific. The proposed method was successfully applied to estimation of esomeprazole and domperidone in combined solid dosage form. PMID:20390100
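
    The two-wavelength simultaneous-equation calculation reduces to a 2 x 2 linear system, as sketched below (Python); the absorptivity and absorbance values are placeholders for illustration, not calibration data from the study.

        import numpy as np

        # Simultaneous-equation (Vierordt-type) calculation for a two-component mixture:
        # A(lambda) = a_x(lambda) * Cx + a_y(lambda) * Cy for a 1 cm path, so measurements
        # at two wavelengths give a 2 x 2 linear system in the concentrations.
        # The numbers below are placeholders, NOT calibration values from the cited work.
        A = np.array([0.52, 0.47])           # absorbances measured at 301 nm and 284 nm
        a_eso = np.array([0.045, 0.021])     # absorptivities of esomeprazole at the two wavelengths (per ug/ml)
        a_dom = np.array([0.012, 0.033])     # absorptivities of domperidone (per ug/ml)

        coeff = np.column_stack([a_eso, a_dom])     # rows: wavelengths, columns: analytes
        conc = np.linalg.solve(coeff, A)            # [C_esomeprazole, C_domperidone] in ug/ml
        print(f"esomeprazole ~ {conc[0]:.2f} ug/ml, domperidone ~ {conc[1]:.2f} ug/ml")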

  10. Islanding detection scheme based on adaptive identifier signal estimation method.

    PubMed

    Bakhshi, M; Noroozian, R; Gharehpetian, G B

    2017-09-12

    This paper proposes a novel, passive anti-islanding method for both inverter and synchronous machine-based distributed generation (DG) units. Unfortunately, when the active/reactive power mismatches are close to zero, the majority of passive anti-islanding methods cannot detect the islanding situation correctly. This study introduces a new islanding detection method based on the estimation of exponentially damped signals. The proposed method uses an adaptive identifier to estimate the frequency deviation of the point of common coupling (PCC) link as a target signal, so that islanding conditions with near-zero active power imbalance can be detected. The main advantage of the adaptive identifier over other signal estimation methods is its small sampling window. The adaptive identifier based islanding detection method introduces a new detection index, termed the decision signal, obtained by estimating the oscillation frequency of the PCC frequency, and can detect islanding conditions properly. In islanding conditions, the oscillation frequency of the PCC frequency approaches zero; thus, setting a threshold for the decision signal is not a tedious job. Non-islanding transient events that can cause a significant deviation in the PCC frequency are considered in the simulations, including different types of faults, load changes, capacitor bank switching, and motor starting. Further, for islanding events, the capability of the proposed islanding detection method is verified with near-zero active power mismatches. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  11. A Computationally Efficient Method for Polyphonic Pitch Estimation

    NASA Astrophysics Data System (ADS)

    Zhou, Ruohua; Reiss, Joshua D.; Mattavelli, Marco; Zoia, Giorgio

    2009-12-01

    This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimation is obtained by means of a simple peak-picking procedure in the pitch energy spectrum. Such spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Then the incorrect estimations are removed according to spectral irregularity and knowledge of the harmonic structures of the music notes played on commonly used music instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and results demonstrate the high performance and computational efficiency of the approach.

  12. A new class of methods for functional connectivity estimation

    NASA Astrophysics Data System (ADS)

    Lin, Wutu

Measuring functional connectivity from neural recordings is important in understanding processing in cortical networks. Covariance-based methods are the current gold standard for functional connectivity estimation. However, the link between pair-wise correlations and the physiological connections inside the neural network is unclear; therefore, the power of inferring a physiological basis from functional connectivity estimates is limited. To build a stronger tie and better understand the relationship between functional connectivity and the physiological neural network, we need (1) a realistic model to simulate different types of neural recordings with known ground truth for benchmarking, and (2) a new functional connectivity method that produces estimates closely reflecting the physiological basis. In this thesis, I (1) tune a spiking neural network model to match human sleep EEG data, (2) introduce a new class of methods for estimating connectivity from different kinds of neural signals and provide a theoretical proof of its superiority, and (3) apply it to simulated fMRI data as an application.

  13. Estimation of equilibrium constants using automated group contribution methods.

    PubMed

    Forsythe, R G; Karp, P D; Mavrovouniotis, M L

    1997-10-01

    Group contribution methods are frequently used for estimating physical properties of compounds from their molecular structures. An algorithm for estimating Gibbs energies of formation through group contribution methods has been automated in an object-oriented framework. The algorithm decomposes compound structures according to a basis set of groups. It permits the use of wildcards and is able to distinguish between ring groups and chain groups that use similar search structures. Past methods relied on manual decomposition of compounds into constituent groups. The software is written in Common LISP and requires < 2 min to estimate Gibbs energies of formation for a database of 780 species of varying size and complexity. The software allows rapid expansion to incorporate different basis sets and to estimate a variety of other physical properties.
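
    The sketch below illustrates the general group contribution idea described above: sum each group's contribution, weighted by its count in the decomposed structure, plus an origin term. The group names and contribution values are made-up placeholders, not the basis set or constants used by the authors.

```python
# Minimal group contribution sketch for a Gibbs energy of formation estimate.
# Group names, counts, and contribution values here are illustrative
# placeholders, not the authors' basis set.

GROUP_CONTRIBUTIONS_KJ_MOL = {   # hypothetical values
    "origin": -23.6,
    "-CH3": 7.9,
    "-CH2-": 7.9,
    "-OH": -158.5,
    "-COOH": -393.5,
}

def estimate_gibbs_formation(group_counts: dict) -> float:
    """Return an estimated Gibbs energy of formation in kJ/mol."""
    total = GROUP_CONTRIBUTIONS_KJ_MOL["origin"]
    for group, count in group_counts.items():
        total += count * GROUP_CONTRIBUTIONS_KJ_MOL[group]
    return total

# e.g. a crude chain-group decomposition of ethanol
print(estimate_gibbs_formation({"-CH3": 1, "-CH2-": 1, "-OH": 1}))
```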

  14. Scanning Linear Estimation: Improvements over Region of Interest (ROI) Methods

    PubMed Central

    Kupinski, Meredith K.; Clarkson, Eric W.; Barrett, Harrison H.

    2013-01-01

    In tomographic medical imaging, signal activity is typically estimated by summing voxels from a reconstructed image. We introduce an alternative estimation scheme that operates on the raw projection data and offers a substantial improvement, as measured by the ensemble mean-square error (EMSE), when compared to using voxel values from a maximum-likelihood expectation-maximization (MLEM) reconstruction. The scanning-linear (SL) estimator operates on the raw projection data and is derived as a special case of maximum-likelihood (ML) estimation with a series of approximations to make the calculation tractable. The approximated likelihood accounts for background randomness, measurement noise, and variability in the parameters to be estimated. When signal size and location are known, the SL estimate of signal activity is an unbiased estimator, i.e., the average estimate equals the true value. By contrast, standard algorithms that operate on reconstructed data are subject to unpredictable bias arising from the null functions of the imaging system. The SL method is demonstrated for two different tasks: 1) simultaneously estimating a signal's size, location, and activity; 2) for a fixed signal size and location, estimating activity. Noisy projection data are realistically simulated using measured calibration data from the multi-module multi-resolution (M3R) small-animal SPECT imaging system. For both tasks the same set of images is reconstructed using the MLEM algorithm (80 iterations), and the average and the maximum value within the ROI are calculated for comparison. This comparison shows dramatic improvements in EMSE for the SL estimates. To show that the bias in ROI estimates affects not only absolute values but also relative differences, such as those used to monitor response to therapy, the activity estimation task is repeated for three different signal sizes. PMID:23384998

  15. Demographic estimation methods for plants with unobservable life-states

    USGS Publications Warehouse

    Kery, M.; Gregg, K.B.; Schaub, M.

    2005-01-01

Demographic estimation of vital parameters in plants with an unobservable dormant state is complicated, because time of death is not known. Conventional methods assume that death occurs at a particular time after a plant has last been seen aboveground, but the consequences of assuming a particular duration of dormancy have never been tested. Capture-recapture methods do not make assumptions about time of death; however, problems with parameter estimability have not yet been resolved. To date, a critical comparative assessment of these methods is lacking. We analysed data from a 10 year study of Cleistes bifaria, a terrestrial orchid with frequent dormancy, and compared demographic estimates obtained by five varieties of the conventional methods and two capture-recapture methods. All conventional methods produced spurious unity survival estimates for some years or for some states, and estimates of demographic rates sensitive to the time-of-death assumption. In contrast, capture-recapture methods are more parsimonious in terms of assumptions, are based on well-founded theory, and did not produce spurious estimates. In Cleistes, dormant episodes lasted for 1-4 years (mean 1.4, SD 0.74). The capture-recapture models estimated ramet survival rate at 0.86 (SE ≈ 0.01), ranging from 0.77-0.94 (SEs ≤ 0.1) in any one year. The average fraction dormant was estimated at 30% (SE 1.5), ranging from 16-47% (SEs ≤ 5.1) in any one year. Multistate capture-recapture models showed that survival rates were positively related to precipitation in the current year, but transition rates were more strongly related to precipitation in the previous than in the current year, with more ramets going dormant following dry years. Not all capture-recapture models of interest have estimable parameters; for instance, without excavating plants in years when they do not appear aboveground, it is not possible to obtain independent time-specific survival estimates for dormant plants. We introduce rigorous

  16. A Group Contribution Method for Estimating Cetane and Octane Numbers

    SciTech Connect

    Kubic, William Louis

    2016-07-28

Much of the research on advanced biofuels is devoted to the study of novel chemical pathways for converting nonfood biomass into liquid fuels that can be blended with existing transportation fuels. Many compounds under consideration are not found in the existing fuel supplies. Often, the physical properties needed to assess the viability of a potential biofuel are not available. The only reliable information available may be the molecular structure. Group contribution methods for estimating physical properties from molecular structure have been used for more than 60 years. The most common application is estimation of thermodynamic properties. More recently, group contribution methods have been developed for estimating rate-dependent properties, including cetane and octane numbers. Often, published group contribution methods are limited in terms of the types of functional groups and the range of applicability. In this study, a new, broadly applicable group contribution method based on an artificial neural network was developed to estimate the cetane number, research octane number, and motor octane number of hydrocarbons and oxygenated hydrocarbons. The new method is more accurate over a greater range of molecular weights and structural complexity than existing group contribution methods for estimating cetane and octane numbers.

  17. A new method for parameter estimation in nonlinear dynamical equations

    NASA Astrophysics Data System (ADS)

    Wang, Liu; He, Wen-Ping; Liao, Le-Jian; Wan, Shi-Quan; He, Tao

    2015-01-01

Parameter estimation is an important scientific problem in various fields such as chaos control, chaos synchronization and other mathematical models. In this paper, a new method for parameter estimation in nonlinear dynamical equations is proposed based on evolutionary modelling (EM). This is achieved by utilizing characteristics of EM that include self-organizing, adaptive and self-learning features inspired by biological natural selection, mutation and genetic inheritance. The performance of the new method is demonstrated using various numerical tests on the classic chaotic Lorenz equations (Lorenz 1963). The results indicate that the new method can be used for fast and effective parameter estimation irrespective of whether some or all of the parameters of the Lorenz equations are unknown. Moreover, the new method has a good convergence rate. Noise is inevitable in observational data, so the influence of observational noise on the performance of the presented method has been investigated. The results indicate that strong noise, such as a signal-to-noise ratio (SNR) of 10 dB, has a larger influence on parameter estimation than relatively weak noise. However, the precision of the parameter estimation remains acceptable for relatively weak noise, e.g. an SNR of 20 or 30 dB, indicating that the presented method also has some robustness to noise.
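
    A minimal sketch of evolutionary parameter search on the Lorenz equations is given below. It uses a generic (mu + lambda) strategy with Gaussian mutation rather than the authors' specific EM algorithm, and the population size, mutation scale, noise level, and parameter bounds are all illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Generic (mu + lambda) evolutionary search for the Lorenz parameters
# (sigma, rho, beta), fitted to a short noisy trajectory. Illustrative
# stand-in only, not the authors' evolutionary modelling algorithm.

def lorenz(t, s, sigma, rho, beta):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

def simulate(params, t_eval, s0=(1.0, 1.0, 1.0)):
    sol = solve_ivp(lorenz, (t_eval[0], t_eval[-1]), s0,
                    t_eval=t_eval, args=tuple(params), rtol=1e-6)
    return sol.y

rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 201)
true_params = (10.0, 28.0, 8.0 / 3.0)
observed = simulate(true_params, t) + rng.normal(0.0, 0.2, (3, t.size))

def fitness(params):
    return -np.mean((simulate(params, t) - observed) ** 2)

mu, lam, mutation_scale = 5, 20, 0.5
population = [rng.uniform([5.0, 20.0, 1.0], [15.0, 35.0, 4.0])
              for _ in range(mu + lam)]
for generation in range(30):
    population.sort(key=fitness, reverse=True)       # best individuals first
    parents = population[:mu]
    offspring = [parents[rng.integers(mu)] + rng.normal(0.0, mutation_scale, 3)
                 for _ in range(lam)]
    population = parents + offspring

best = max(population, key=fitness)
print("estimated (sigma, rho, beta):", np.round(best, 2))
```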

  18. Comparison of volume estimation methods for pancreatic islet cells

    NASA Astrophysics Data System (ADS)

Dvořák, Jiří; Švihlík, Jan; Habart, David; Kybic, Jan

    2016-03-01

    In this contribution we study different methods of automatic volume estimation for pancreatic islets which can be used in the quality control step prior to the islet transplantation. The total islet volume is an important criterion in the quality control. Also, the individual islet volume distribution is interesting -- it has been indicated that smaller islets can be more effective. A 2D image of a microscopy slice containing the islets is acquired. The input of the volume estimation methods are segmented images of individual islets. The segmentation step is not discussed here. We consider simple methods of volume estimation assuming that the islets have spherical or ellipsoidal shape. We also consider a local stereological method, namely the nucleator. The nucleator does not rely on any shape assumptions and provides unbiased estimates if isotropic sections through the islets are observed. We present a simulation study comparing the performance of the volume estimation methods in different scenarios and an experimental study comparing the methods on a real dataset.
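
    The sphere-assumption estimate mentioned above can be sketched as follows: each islet's segmented 2D area is converted to an equivalent-circle diameter, and the volume of a sphere with that diameter is reported. The example areas are made up.

```python
import math

# Sphere-assumption volume estimate from segmented 2D islet areas.
# The areas below are made-up example values in um^2.

def sphere_volume_from_area(area_um2: float) -> float:
    """Volume (um^3) of a sphere whose cross-section has the given area."""
    d_eq = 2.0 * math.sqrt(area_um2 / math.pi)   # equivalent-circle diameter
    return math.pi * d_eq ** 3 / 6.0

islet_areas = [7850.0, 19600.0, 3120.0]          # hypothetical segmented areas
volumes = [sphere_volume_from_area(a) for a in islet_areas]
print("total islet volume (um^3):", round(sum(volumes), 1))
```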

  19. Scanning linear estimation: improvements over region of interest (ROI) methods.

    PubMed

    Kupinski, Meredith K; Clarkson, Eric W; Barrett, Harrison H

    2013-03-07

In tomographic medical imaging, signal activity is typically estimated by summing voxels from a reconstructed image. We introduce an alternative estimation scheme that operates on the raw projection data and offers a substantial improvement, as measured by the ensemble mean-square error (EMSE), when compared to using voxel values from a maximum-likelihood expectation-maximization (MLEM) reconstruction. The scanning-linear (SL) estimator operates on the raw projection data and is derived as a special case of maximum-likelihood estimation with a series of approximations to make the calculation tractable. The approximated likelihood accounts for background randomness, measurement noise and variability in the parameters to be estimated. When signal size and location are known, the SL estimate of signal activity is unbiased, i.e. the average estimate equals the true value. By contrast, unpredictable bias arising from the null functions of the imaging system affects standard algorithms that operate on reconstructed data. The SL method is demonstrated for two different tasks: (1) simultaneously estimating a signal's size, location and activity; (2) for a fixed signal size and location, estimating activity. Noisy projection data are realistically simulated using measured calibration data from the multi-module multi-resolution small-animal SPECT imaging system. For both tasks, the same set of images is reconstructed using the MLEM algorithm (80 iterations), and the average and maximum values within the region of interest (ROI) are calculated for comparison. This comparison shows dramatic improvements in EMSE for the SL estimates. To show that the bias in ROI estimates affects not only absolute values but also relative differences, such as those used to monitor the response to therapy, the activity estimation task is repeated for three different signal sizes.

  20. Motion estimation using point cluster method and Kalman filter.

    PubMed

    Senesh, M; Wolf, A

    2009-05-01

The most frequently used method in three-dimensional human gait analysis involves placing markers on the skin of the analyzed segment. This introduces a significant artifact, which strongly influences the bone position and orientation and joint kinematic estimates. In this study, we tested and evaluated the effect of adding a Kalman filter procedure to the previously reported point cluster technique (PCT) in the estimation of a rigid body motion. We demonstrated the procedures by motion analysis of a compound planar pendulum from indirect opto-electronic measurements of markers attached to an elastic appendage that is restrained to slide along the rigid body long axis. The elastic frequency is close to the pendulum frequency, as in the biomechanical problem, where the soft tissue frequency content is similar to the actual movement of the bones. Comparison of the real pendulum angle to that obtained by several estimation procedures--PCT, Kalman filter followed by PCT, and low pass filter followed by PCT--enables evaluation of the accuracy of the procedures. When comparing the maximal amplitude, no effect was noted by adding the Kalman filter; however, a closer look at the signal revealed that the estimated angle based only on the PCT method was very noisy, with fluctuations, while the estimated angle based on the Kalman filter followed by the PCT was a smooth signal. It was also noted that the instantaneous frequencies obtained from the estimated angle based on the PCT method are more dispersed than those obtained from the estimated angle based on the Kalman filter followed by the PCT method. Addition of a Kalman filter to the PCT method in the estimation procedure of rigid body motion results in a smoother signal that better represents the real motion, with less signal distortion than when using a digital low pass filter. Furthermore, it can be concluded that adding a Kalman filter to the PCT procedure substantially reduces the dispersion of the maximal and minimal
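
    The sketch below shows a minimal constant-velocity Kalman filter smoothing a noisy pendulum-angle measurement. It is only the filtering step, not the full Kalman-plus-PCT pipeline of the study; the pendulum frequency, noise level, and filter tuning are illustrative assumptions, and the correlated soft-tissue artifact of the paper is replaced here by white measurement noise.

```python
import numpy as np

# Constant-velocity Kalman filter applied to a noisy pendulum-angle signal.
# All signal and tuning parameters are illustrative assumptions.

dt = 0.01
t = np.arange(0.0, 10.0, dt)
true_angle = 0.3 * np.cos(2.0 * np.pi * 0.5 * t)         # "rigid body" motion
rng = np.random.default_rng(1)
measured = true_angle + rng.normal(0.0, 0.05, t.size)     # noisy marker-based angle

F = np.array([[1.0, dt], [0.0, 1.0]])                     # state: [angle, angular rate]
H = np.array([[1.0, 0.0]])                                # only the angle is measured
sigma_a, sigma_z = 5.0, 0.05                              # process / measurement noise
Q = sigma_a ** 2 * np.array([[dt ** 4 / 4, dt ** 3 / 2], [dt ** 3 / 2, dt ** 2]])
R = np.array([[sigma_z ** 2]])

x, P = np.zeros(2), np.eye(2)
filtered = np.empty(t.size)
for k, z in enumerate(measured):
    x, P = F @ x, F @ P @ F.T + Q                         # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                        # Kalman gain
    x = x + K @ (np.array([z]) - H @ x)                   # update
    P = (np.eye(2) - K @ H) @ P
    filtered[k] = x[0]

print("RMS error, raw measurement:", np.sqrt(np.mean((measured - true_angle) ** 2)))
print("RMS error, Kalman filtered:", np.sqrt(np.mean((filtered - true_angle) ** 2)))
```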

  1. The estimation of the measurement results with using statistical methods

    NASA Astrophysics Data System (ADS)

    Velychko, O.; Gordiyenko, T.

    2015-02-01

A number of international standards and guides describe various statistical methods that can be applied to the management, control and improvement of processes for the analysis of technical measurement results. An analysis of international standards and guides on statistical methods for the estimation of measurement results, with recommendations for their application in laboratories, is described. For this analysis, cause-and-effect Ishikawa diagrams concerning the application of statistical methods for the estimation of measurement results are constructed.

  2. Novel methods for the estimation of acceptable daily intake

    SciTech Connect

    Dourson, M.L.; Hertzberg, R.C.; Hartung, R.; Blackburn, K.

    1985-12-01

    This paper describes two general methods for estimating ADIs that circumvent some of the limitations inherent in current approaches. The first method is based on a graphic presentation of toxicity data and is also shown to be useful for estimating acceptable intakes for durations of toxicant exposure other than the entire lifetime. The second method uses dose-response or dose-effect data to calculate lower CLs on the dose rate associated with specified response or effect levels. These approaches should lead to firmer, better established ADIs through increased use of the entire spectrum of toxicity data.

  3. Stability and error estimation for Component Adaptive Grid methods

    NASA Technical Reports Server (NTRS)

    Oliger, Joseph; Zhu, Xiaolei

    1994-01-01

Component adaptive grid (CAG) methods for solving hyperbolic partial differential equations (PDEs) are discussed in this paper. Applying recent stability results for a class of numerical methods on uniform grids, the convergence of these methods for linear problems on component adaptive grids is established here. Furthermore, the computational error can be estimated on CAGs using the stability results. Using these estimates, the error can be controlled on CAGs. Thus, the solution can be computed efficiently on CAGs within a given error tolerance. Computational results for time dependent linear problems in one and two space dimensions are presented.

  4. Evaluation of alternative methods for estimating reference evapotranspiration

    USDA-ARS?s Scientific Manuscript database

    Evapotranspiration is an important component in water-balance and irrigation scheduling models. While the FAO-56 Penman-Monteith method has become the de facto standard for estimating reference evapotranspiration (ETo), it is a complex method requiring several weather parameters. Required weather ...

  5. Measuring landscape esthetics: the scenic beauty estimation method

    Treesearch

    Terry C. Daniel; Ron S. Boster

    1976-01-01

    The Scenic Beauty Estimation Method (SBE) provides quantitative measures of esthetic preferences for alternative wildland management systems. Extensive experimentation and testing with user, interest, and professional groups validated the method. SBE shows promise as an efficient and objective means for assessing the scenic beauty of public forests and wildlands, and...

  6. Comparison of Methods for Estimating and Testing Latent Variable Interactions.

    ERIC Educational Resources Information Center

    Moulder, Bradley C.; Algina, James

    2002-01-01

    Used simulation to compare structural equation modeling methods for estimating and testing hypotheses about an interaction between continuous variables. Findings indicate that the two-stage least squares procedure exhibited more bias and lower power than the other methods. The Jaccard-Wan procedure (J. Jaccard and C. Wan, 1995) and maximum…

  8. Assessing the sensitivity of methods for estimating principal causal effects.

    PubMed

    Stuart, Elizabeth A; Jo, Booil

    2015-12-01

The framework of principal stratification provides a way to think about treatment effects conditional on post-randomization variables, such as level of compliance. In particular, the complier average causal effect (CACE) - the effect of the treatment for those individuals who would comply with their treatment assignment under either treatment condition - is often of substantive interest. However, estimation of the CACE is not always straightforward, with a variety of estimation procedures and underlying assumptions, but little advice to help researchers select between methods. In this article, we discuss and examine two methods that rely on very different assumptions to estimate the CACE: a maximum likelihood ('joint') method that assumes the 'exclusion restriction' (ER), and a propensity score-based method that relies on 'principal ignorability.' We detail the assumptions underlying each approach, and assess each method's sensitivity to both its own assumptions and those of the other method using both simulated data and a motivating example. We find that the ER-based joint approach appears somewhat less sensitive to its assumptions, and that the performance of both methods is significantly improved when there are strong predictors of compliance. Interestingly, we also find that each method performs particularly well when the assumptions of the other approach are violated. These results highlight the importance of carefully selecting an estimation procedure whose assumptions are likely to be satisfied in practice and of having strong predictors of principal stratum membership.

  9. Rapid-estimation method for assessing scour at highway bridges

    USGS Publications Warehouse

    Holnbeck, Stephen R.

    1998-01-01

    A method was developed by the U.S. Geological Survey for rapid estimation of scour at highway bridges using limited site data and analytical procedures to estimate pier, abutment, and contraction scour depths. The basis for the method was a procedure recommended by the Federal Highway Administration for conducting detailed scour investigations, commonly referred to as the Level 2 method. Using pier, abutment, and contraction scour results obtained from Level 2 investigations at 122 sites in 10 States, envelope curves and graphical relations were developed that enable determination of scour-depth estimates at most bridge sites in a matter of a few hours. Rather than using complex hydraulic variables, surrogate variables more easily obtained in the field were related to calculated scour-depth data from Level 2 studies. The method was tested by having several experienced individuals apply the method in the field, and results were compared among the individuals and with previous detailed analyses performed for the sites. Results indicated that the variability in predicted scour depth among individuals applying the method generally was within an acceptable range, and that conservatively greater scour depths generally were obtained by the rapid-estimation method compared to the Level 2 method. The rapid-estimation method is considered most applicable for conducting limited-detail scour assessments and as a screening tool to determine those bridge sites that may require more detailed analysis. The method is designed to be applied only by a qualified professional possessing knowledge and experience in the fields of bridge scour, hydraulics, and flood hydrology, and having specific expertise with the Level 2 method.

  10. Change-in-ratio methods for estimating population size

    USGS Publications Warehouse

    Udevitz, Mark S.; Pollock, Kenneth H.; McCullough, Dale R.; Barrett, Reginald H.

    2002-01-01

    Change-in-ratio (CIR) methods can provide an effective, low cost approach for estimating the size of wildlife populations. They rely on being able to observe changes in proportions of population subclasses that result from the removal of a known number of individuals from the population. These methods were first introduced in the 1940’s to estimate the size of populations with 2 subclasses under the assumption of equal subclass encounter probabilities. Over the next 40 years, closed population CIR models were developed to consider additional subclasses and use additional sampling periods. Models with assumptions about how encounter probabilities vary over time, rather than between subclasses, also received some attention. Recently, all of these CIR models have been shown to be special cases of a more general model. Under the general model, information from additional samples can be used to test assumptions about the encounter probabilities and to provide estimates of subclass sizes under relaxations of these assumptions. These developments have greatly extended the applicability of the methods. CIR methods are attractive because they do not require the marking of individuals, and subclass proportions often can be estimated with relatively simple sampling procedures. However, CIR methods require a carefully monitored removal of individuals from the population, and the estimates will be of poor quality unless the removals induce substantial changes in subclass proportions. In this paper, we review the state of the art for closed population estimation with CIR methods. Our emphasis is on the assumptions of CIR methods and on identifying situations where these methods are likely to be effective. We also identify some important areas for future CIR research.
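
    For the simplest two-subclass case with equal encounter probabilities, the classical change-in-ratio estimator can be written in a few lines; the proportions and removal numbers below are hypothetical.

```python
# Classical two-subclass change-in-ratio estimator (equal encounter
# probabilities assumed): the pre-removal population size is recovered from
# the change in a subclass proportion caused by a known removal.

def cir_population_estimate(p1: float, p2: float,
                            removed_x: int, removed_total: int) -> float:
    """Pre-removal population size N1 = (R_x - p2 * R) / (p1 - p2)."""
    if abs(p1 - p2) < 1e-9:
        raise ValueError("removals did not change the subclass proportion")
    return (removed_x - p2 * removed_total) / (p1 - p2)

# e.g. the antlered proportion drops from 0.40 to 0.25 after removing
# 300 antlered animals out of 320 total removals (hypothetical numbers)
n1 = cir_population_estimate(p1=0.40, p2=0.25, removed_x=300, removed_total=320)
print(f"estimated pre-removal population: {n1:.0f}")
```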

  11. Precision of two methods for estimating age from burbot otoliths

    USGS Publications Warehouse

    Edwards, W.H.; Stapanian, M.A.; Stoneman, A.T.

    2011-01-01

Lower reproductive success and older age structure are associated with many burbot (Lota lota L.) populations that are declining or of conservation concern. Therefore, reliable methods for estimating the age of burbot are critical for effective assessment and management. In Lake Erie, burbot populations have declined in recent years due to the combined effects of an aging population (mean age = 10 years in 2007) and extremely low recruitment since 2002. We examined otoliths from burbot (N = 91) collected in Lake Erie in 2007 and compared the estimates of burbot age by two agers, each using two established methods (cracked-and-burned and thin-section) of estimating ages from burbot otoliths. One ager was experienced at estimating age from otoliths, the other was a novice. Agreement (precision) between the two agers was higher for the thin-section method, particularly at ages 6–11 years, based on linear regression analyses and 95% confidence intervals. As expected, precision between the two methods was higher for the more experienced ager. Both agers reported that the thin sections offered clearer views of the annuli, particularly near the margins on otoliths from burbot ages ≥8. Slides for the thin sections required some costly equipment and more than 2 days to prepare. In contrast, preparing the cracked-and-burned samples was comparatively inexpensive and quick. We suggest use of the thin-section method for estimating the age structure of older burbot populations.

  12. Assessment of Methods for Estimating Risk to Birds from ...

    EPA Pesticide Factsheets

    The U.S. EPA Ecological Risk Assessment Support Center (ERASC) announced the release of the final report entitled, Assessment of Methods for Estimating Risk to Birds from Ingestion of Contaminated Grit Particles. This report evaluates approaches for estimating the probability of ingestion by birds of contaminated particles such as pesticide granules or lead particles (i.e. shot or bullet fragments). In addition, it presents an approach for using this information to estimate the risk of mortality to birds from ingestion of lead particles. Response to ERASC Request #16

  13. Assessing the sensitivity of methods for estimating principal causal effects

    PubMed Central

    Stuart, Elizabeth A.; Jo, Booil

    2011-01-01

    The framework of principal stratification provides a way to think about treatment effects conditional on post-randomization variables, such as level of compliance. In particular, the complier average causal effect (CACE)–the effect of the treatment for those individuals who would comply with their treatment assignment under either treatment condition–is often of substantive interest. However, estimation of the CACE is not always straightforward, with a variety of estimation procedures and underlying assumptions, but little advice to help researchers select between methods. In this paper we discuss and examine two methods that rely on very different assumptions to estimate the CACE: a maximum likelihood (“joint”) method that assumes the “exclusion restriction,” and a propensity score based method that relies on “principal ignorability.” We detail the assumptions underlying each approach, and assess each method’s sensitivity to both its own assumptions and those of the other method using both simulated data and a motivating example. We find that the exclusion restriction based joint approach appears somewhat less sensitive to its assumptions, and that the performance of both methods is significantly improved when there are strong predictors of compliance. Interestingly, we also find that each method performs particularly well when the assumptions of the other approach are violated. These results highlight the importance of carefully selecting an estimation procedure whose assumptions are likely to be satisfied in practice and of having strong predictors of principal stratum membership. PMID:21971481

  14. Improving Density Estimation by Incorporating Spatial Information

    NASA Astrophysics Data System (ADS)

    Smith, Laura M.; Keegan, Matthew S.; Wittman, Todd; Mohler, George O.; Bertozzi, Andrea L.

    2010-12-01

Given discrete event data, we wish to produce a probability density that can model the relative probability of events occurring in a spatial region. Common methods of density estimation, such as Kernel Density Estimation, do not incorporate geographical information. Using these methods could result in nonnegligible portions of the support of the density in unrealistic geographic locations. For example, crime density estimation models that do not take geographic information into account may predict events in unlikely places such as oceans, mountains, and so forth. We propose a set of Maximum Penalized Likelihood Estimation methods based on Total Variation and Sobolev norm regularizers in conjunction with a priori high resolution spatial data to obtain more geographically accurate density estimates. We apply this method to a residential burglary data set of the San Fernando Valley using geographic features obtained from satellite images of the region and housing density information.
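
    For contrast with the penalized-likelihood approach proposed above, the sketch below shows an ordinary Gaussian kernel density estimate with a crude post-hoc geographic mask. The event coordinates and validity mask are synthetic, and the masking step is not the Total Variation / Sobolev method of the paper; it only illustrates how much mass an unconstrained KDE places in invalid locations.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Baseline sketch: plain Gaussian KDE of event locations, with density
# outside a hypothetical "valid region" zeroed out and the rest renormalized.

rng = np.random.default_rng(2)
candidates = rng.normal(loc=[2.0, 2.0], scale=0.6, size=(500, 2))
events = candidates[candidates.sum(axis=1) < 5.0]   # keep only "valid" locations

kde = gaussian_kde(events.T)

# Evaluate on a grid and apply a stand-in validity mask (e.g., land vs water).
xs, ys = np.meshgrid(np.linspace(0, 4, 100), np.linspace(0, 4, 100))
grid = np.vstack([xs.ravel(), ys.ravel()])
density = kde(grid).reshape(xs.shape)

valid = xs + ys < 5.0
masked = np.where(valid, density, 0.0)
cell_area = (xs[0, 1] - xs[0, 0]) * (ys[1, 0] - ys[0, 0])
masked /= masked.sum() * cell_area                  # renormalize to integrate to ~1

print("KDE mass originally outside the valid region:",
      round(float(density[~valid].sum() * cell_area), 3))
```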

  15. Estimation Methods for Mixed Logistic Models with Few Clusters.

    PubMed

    McNeish, Daniel

    2016-01-01

    For mixed models generally, it is well known that modeling data with few clusters will result in biased estimates, particularly of the variance components and fixed effect standard errors. In linear mixed models, small sample bias is typically addressed through restricted maximum likelihood estimation (REML) and a Kenward-Roger correction. Yet with binary outcomes, there is no direct analog of either procedure. With a larger number of clusters, estimation methods for binary outcomes that approximate the likelihood to circumvent the lack of a closed form solution such as adaptive Gaussian quadrature and the Laplace approximation have been shown to yield less-biased estimates than linearization estimation methods that instead linearly approximate the model. However, adaptive Gaussian quadrature and the Laplace approximation are approximating the full likelihood rather than the restricted likelihood; the full likelihood is known to yield biased estimates with few clusters. On the other hand, linearization methods linearly approximate the model, which allows for restricted maximum likelihood and the Kenward-Roger correction to be applied. Thus, the following question arises: Which is preferable, a better approximation of a biased function or a worse approximation of an unbiased function? We address this question with a simulation and an illustrative empirical analysis.

  16. Spectral estimation of plasma fluctuations. I. Comparison of methods

    SciTech Connect

Riedel, K.S.; Sidorenko, A.; Thomson, D.J.

    1994-03-01

The relative root mean squared errors (RMSE) of nonparametric methods for spectral estimation are compared for microwave scattering data of plasma fluctuations. These methods reduce the variance of the periodogram estimate by averaging the spectrum over a frequency bandwidth. As the bandwidth increases, the variance decreases, but the bias error increases. The plasma spectra vary by over four orders of magnitude, and therefore, using a spectral window is necessary. The smoothed tapered periodogram is compared with the adaptive multiple taper methods and hybrid methods. It is found that a hybrid method, which uses four orthogonal tapers and then applies a kernel smoother, performs best. For 300 point data segments, even an optimized smoothed tapered periodogram has a 24% larger relative RMSE than the hybrid method. Two new adaptive multitaper weightings which outperform Thomson's original adaptive weighting are presented.
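
    A minimal multitaper estimate with uniform weights (not the adaptive or hybrid kernel-smoothed weightings compared in the paper) can be sketched as below; the sampling rate, test signal, and taper parameters are illustrative assumptions.

```python
import numpy as np
from scipy.signal.windows import dpss

# Average the periodograms obtained with a few orthogonal DPSS (Slepian)
# tapers. Uniform weighting only; signal and parameters are illustrative.

rng = np.random.default_rng(3)
fs = 1.0e6                              # assumed sampling rate (Hz)
n = 300                                 # 300-point segment, as in the comparison above
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 1.2e5 * t) + 0.5 * rng.standard_normal(n)   # toy fluctuation signal

nw, k = 4.0, 4                          # time-bandwidth product, number of tapers
tapers = dpss(n, nw, Kmax=k)            # shape (k, n)

freqs = np.fft.rfftfreq(n, d=1.0 / fs)
eigenspectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2 / fs
multitaper_psd = eigenspectra.mean(axis=0)

peak = freqs[np.argmax(multitaper_psd)]
print(f"spectral peak near {peak / 1e3:.0f} kHz")
```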

  17. Simple method for quick estimation of aquifer hydrogeological parameters

    NASA Astrophysics Data System (ADS)

    Ma, C.; Li, Y. Y.

    2017-08-01

The development of simple and accurate methods to determine aquifer hydrogeological parameters is of importance for groundwater resources assessment and management. To address the problem of estimating aquifer parameters from unsteady pumping test data, a fitting function for the Theis well function was proposed using a fitting optimization method, and a unitary linear regression equation was then established. The aquifer parameters can be obtained by solving for the coefficients of the regression equation. The application of the proposed method is illustrated using two published data sets. Error statistics and analysis of the pumping drawdown showed that the method proposed in this paper yields quick and accurate estimates of the aquifer parameters. The proposed method can reliably identify the aquifer parameters from long-distance observed drawdowns and from early drawdowns. It is hoped that the proposed method will be helpful for practicing hydrogeologists and hydrologists.

  18. Bounded Self-Weights Estimation Method for Non-Local Means Image Denoising Using Minimax Estimators.

    PubMed

    Nguyen, Minh Phuong; Chun, Se Young

    2017-04-01

A non-local means (NLM) filter is a weighted average of a large number of non-local pixels with various image intensity values. NLM filters have been shown to deliver powerful denoising performance with excellent detail preservation by averaging many noisy pixels with appropriate values for the weights. The NLM weights between two different pixels are determined based on the similarity between the two patches that surround these pixels and a smoothing parameter. Another important factor that influences the denoising performance is the self-weight value for the same pixel. The recently introduced local James-Stein type center pixel weight estimation method (LJS) outperforms other existing methods when determining the contribution of the center pixels in the NLM filter. However, the LJS method may result in excessively large self-weight estimates since no upper bound is assumed, and the method uses a relatively large local area for estimating the self-weights, which may lead to a strong bias. In this paper, we investigate these issues in the LJS method and then propose novel local self-weight estimation methods using direct bounds (LMM-DB) and reparametrization (LMM-RP) based on Baranchik's minimax estimator. Both the LMM-DB and LMM-RP methods were evaluated using a wide range of natural images and a clinical MRI image together with various levels of additive Gaussian noise. Our proposed parameter selection methods yielded an improved bias-variance trade-off, a higher peak signal-to-noise ratio (PSNR), and fewer visual artifacts when compared with the results of the classical NLM and LJS methods. Our proposed methods also provide a heuristic way to select a suitable global smoothing parameter that can yield PSNR values close to the optimal values.

  19. Fault detection in electromagnetic suspension systems with state estimation methods

    SciTech Connect

Sinha, P.K.; Zhou, F.B.; Kutiyal, R.S.

    1993-11-01

High-speed maglev vehicles need a high level of safety that depends on the reliability of the whole vehicle system. There are many ways of attaining high reliability for the system. The conventional method uses redundant hardware with majority-vote logic circuits. Hardware redundancy costs more, weighs more and occupies more space than analytically redundant methods. Analytically redundant systems use parameter identification and state estimation methods based on system models to detect and isolate faults in instruments (sensors), actuators and components. In this paper the authors use a Luenberger observer to estimate three state variables of the electromagnetic suspension system: position (airgap), vehicle velocity, and vertical acceleration. These estimates are compared with the corresponding sensor outputs for fault detection. In this paper, they consider fault detection and isolation (FDI) of the accelerometer, the sensor which provides the ride quality.
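
    The sketch below shows the general Luenberger-observer residual idea on a generic second-order toy plant, not the electromagnetic suspension model of the paper; the plant matrices, observer poles, noise level, and fault threshold are all illustrative assumptions.

```python
import numpy as np
from scipy.signal import place_poles

# Discrete-time Luenberger observer used for residual-based fault detection
# on a toy position/velocity plant. All numbers are illustrative choices.

dt = 0.001
A = np.array([[1.0, dt], [0.0, 1.0]])           # toy plant dynamics
B = np.array([[0.0], [dt]])
C = np.array([[1.0, 0.0]])                       # only position is measured

# Observer gain L so that the error dynamics (A - L C) have fast stable poles.
L = place_poles(A.T, C.T, [0.6, 0.65]).gain_matrix.T

rng = np.random.default_rng(4)
x = np.zeros((2, 1))                             # true plant state
x_hat = np.zeros((2, 1))                         # observer state
threshold = 0.05                                 # hypothetical residual threshold

for k in range(2000):
    u = np.array([[np.sin(0.01 * k)]])
    y = C @ x + rng.normal(0.0, 0.002)
    if k > 1500:                                 # inject a sensor fault (bias)
        y = y + 0.2
    residual = y - C @ x_hat                     # innovation used for detection
    x_hat = A @ x_hat + B @ u + L @ residual
    x = A @ x + B @ u
    if abs(residual[0, 0]) > threshold:
        print(f"fault flagged at step {k}")
        break
```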

  20. Simplified triangle method for estimating evaporative fraction over soybean crops

    NASA Astrophysics Data System (ADS)

    Silva-Fuzzo, Daniela Fernanda; Rocha, Jansle Vieira

    2016-10-01

Accurate estimates are emerging with technological advances in remote sensing, and the triangle method has been demonstrated to be a useful tool for the estimation of evaporative fraction (EF). The purpose of this study was to estimate the EF using the triangle method at the regional level. We used data from the Moderate Resolution Imaging Spectroradiometer orbital sensor, referring to surface temperature and vegetation index, for a 10-year period (2002/2003 to 2011/2012) of cropping seasons in the state of Paraná, Brazil. The triangle method showed considerable results for the EF, and validation of the estimates against observed data from a climatological water balance gave values >0.8 for Willmott's modified "d" index and R2 values between 0.6 and 0.7 for some counties. The errors were low for all years analyzed, and the test showed that the estimated data are very close to the observed data. Based on the statistical validation, we can say that the triangle method is a consistent tool, is useful because it uses only remote sensing images as inputs, and can support large-scale agroclimatic monitoring, especially for countries of great territorial extent, such as Brazil, which lacks a dense network of meteorological ground stations.

  1. Models and estimation methods for clinical HIV-1 data

    NASA Astrophysics Data System (ADS)

    Verotta, Davide

    2005-12-01

Clinical HIV-1 data include many individual factors, such as compliance to treatment, pharmacokinetics, variability with respect to viral dynamics, race, sex, income, etc., which might directly influence or be associated with clinical outcome. These factors need to be taken into account to achieve a better understanding of clinical outcome, and mathematical models can provide a unifying framework to do so. The first objective of this paper is to demonstrate the development of comprehensive HIV-1 dynamics models that describe viral dynamics and also incorporate different factors influencing such dynamics. The second objective of this paper is to describe alternative estimation methods that can be applied to the analysis of data with such models. In particular, we consider: (i) simple but effective two-stage estimation methods, in which data from each patient are analyzed separately and summary statistics derived from the results, (ii) more complex nonlinear mixed effect models, used to pool all the patient data in a single analysis. Bayesian estimation methods are also considered, in particular: (iii) maximum a posteriori approximations, MAP, and (iv) Markov chain Monte Carlo, MCMC. Bayesian methods incorporate prior knowledge into the models, thus avoiding some of the model simplifications introduced when the data are analyzed using two-stage methods, or a nonlinear mixed effect framework. We demonstrate the development of the models and the different estimation methods using real AIDS clinical trial data involving patients receiving multiple drug regimens.

  2. New method for estimating low-earth-orbit collision probabilities

    NASA Technical Reports Server (NTRS)

    Vedder, John D.; Tabor, Jill L.

    1991-01-01

An unconventional but general method is described for estimating the probability of collision between an earth-orbiting spacecraft and orbital debris. This method uses a Monte Carlo simulation of the orbital motion of the target spacecraft and each discrete debris object to generate an empirical set of distances, each distance representing the separation between the spacecraft and the nearest debris object at random times. Using concepts from the asymptotic theory of extreme order statistics, an analytical density function is fitted to this set of minimum distances. From this function, it is possible to generate realistic collision estimates for the spacecraft.

  4. Estimation Method of Body Temperature from Upper Arm Temperature

    NASA Astrophysics Data System (ADS)

    Suzuki, Arata; Ryu, Kazuteru; Kanai, Nobuyuki

This paper proposes a method for estimating body temperature using the relation between the upper arm temperature and the atmospheric temperature. Conventional methods measure at the armpit or orally, because the body temperature read from the body surface is influenced by the atmospheric temperature. However, there is a correlation between the body surface temperature and the atmospheric temperature. By using this correlation, the body temperature can be estimated from the body surface temperature. The proposed method makes it possible to measure body temperature with a temperature sensor embedded in the cuff of a blood pressure monitor, so that blood pressure and body temperature can be measured simultaneously. The effectiveness of the proposed method is verified through an actual body temperature experiment. The proposed method might contribute to reducing the workloads of medical staff in home medical care, among other applications.

  5. A review of action estimation methods for galactic dynamics

    NASA Astrophysics Data System (ADS)

    Sanders, Jason L.; Binney, James

    2016-04-01

    We review the available methods for estimating actions, angles and frequencies of orbits in both axisymmetric and triaxial potentials. The methods are separated into two classes. Unless an orbit has been trapped by a resonance, convergent, or iterative, methods are able to recover the actions to arbitrarily high accuracy given sufficient computing time. Faster non-convergent methods rely on the potential being sufficiently close to a separable potential, and the accuracy of the action estimate cannot be improved through further computation. We critically compare the accuracy of the methods and the required computation time for a range of orbits in an axisymmetric multicomponent Galactic potential. We introduce a new method for estimating actions that builds on the adiabatic approximation of Schönrich & Binney and discuss the accuracy required for the actions, angles and frequencies using suitable distribution functions for the thin and thick discs, the stellar halo and a star stream. We conclude that for studies of the disc and smooth halo component of the Milky Way, the most suitable compromise between speed and accuracy is the Stäckel Fudge, whilst when studying streams the non-convergent methods do not offer sufficient accuracy and the most suitable method is computing the actions from an orbit integration via a generating function. All the software used in this study can be downloaded from https://github.com/jls713/tact.

  6. Modified cross-validation as a method for estimating parameter

    NASA Astrophysics Data System (ADS)

    Shi, Chye Rou; Adnan, Robiah

    2014-12-01

Best subsets regression is an effective approach for identifying models that attain the analysis objectives with as few predictors as is prudent. Subset models may estimate the regression coefficients and predict future responses with smaller variance than the full model using all predictors. The question of how to pick the subset size λ depends on the bias and the variance. There are various methods for picking the subset size λ; a common rule is to pick the smallest model that minimizes an estimate of the expected prediction error. Since data sets are often small, the repeated K-fold cross-validation method is the most widely used method to estimate prediction error and select the model; the data are reshuffled and re-stratified before each round. However, the "one-standard-error" rule of the repeated K-fold cross-validation method always picks the most parsimonious model. The objective of this research is to modify the existing cross-validation method to avoid overfitting and underfitting the model, and a modified cross-validation method is therefore proposed. This paper compares the existing cross-validation method with the modified cross-validation method. Our results indicate that the modified cross-validation method is better at submodel selection and evaluation than the other methods.
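
    For reference, the sketch below implements the plain repeated K-fold cross-validation baseline with the one-standard-error rule for choosing a subset size. The synthetic data, nested candidate subsets, and least-squares fits are illustrative assumptions, and the authors' modified rule is not reproduced here.

```python
import numpy as np

# Repeated K-fold cross-validation over candidate subset sizes, with the
# "one-standard-error" rule. Synthetic data and nested subsets stand in for
# a full best-subsets search.

rng = np.random.default_rng(5)
n, p = 100, 8
X = rng.standard_normal((n, p))
y = X[:, 0] * 2.0 + X[:, 1] * 1.0 + rng.standard_normal(n)   # only 2 true predictors

def cv_error(columns, n_splits=5, n_repeats=10):
    errors = []
    for _ in range(n_repeats):
        idx = rng.permutation(n)                  # reshuffle before each round
        for fold in np.array_split(idx, n_splits):
            train = np.setdiff1d(idx, fold)
            Xtr, Xte = X[train][:, columns], X[fold][:, columns]
            beta, *_ = np.linalg.lstsq(Xtr, y[train], rcond=None)
            errors.append(np.mean((y[fold] - Xte @ beta) ** 2))
    return np.mean(errors), np.std(errors) / np.sqrt(len(errors))

# Nested candidate subsets of increasing size (a stand-in for best subsets).
subsets = [list(range(k)) for k in range(1, p + 1)]
stats = [cv_error(cols) for cols in subsets]
means = np.array([m for m, _ in stats])
ses = np.array([s for _, s in stats])

best = int(np.argmin(means))
# one-standard-error rule: smallest subset within 1 SE of the best mean error
chosen = next(k for k in range(len(subsets)) if means[k] <= means[best] + ses[best])
print(f"minimum-CV subset size: {best + 1}, one-SE rule picks size: {chosen + 1}")
```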

  7. Estimating duration in partnership studies: issues, methods and examples

    PubMed Central

    Burington, Bart; Hughes, James P; Whittington, William L H; Stoner, Brad; Garnett, Geoff; Aral, Sevgi O; Holmes, King K

    2011-01-01

    Background and objectives Understanding the time course of sexual partnerships is important for understanding sexual behaviour, transmission risks for sexually transmitted infections (STI) and development of mathematical models of disease transmission. Study design The authors describe issues and biases relating to censoring, truncation and sampling that arise when estimating partnership duration. Recommendations for study design and analysis methods are presented and illustrated using data from a sexual-behaviour survey that enrolled individuals from an adolescent-health clinic and two STD clinics. Survey participants were queried, for each of (up to) four partnerships in the last 3 months, about the month and year of first sex, the number of days since last sex and whether partnerships were limited to single encounters. Participants were followed every 4 months for up to 1 year. Results After adjustment for censoring and truncation, the estimated median duration of sexual partnerships declined from 9 months (unadjusted) to 1.6 months (adjusted). Similarly, adjustment for censoring and truncation reduced the bias in relative risks for the effect of age in a Cox model. Other approaches, such as weighted estimation, also reduced bias in the estimated duration distribution. Conclusion Methods are available for estimating partnership duration from censored and truncated samples. Ignoring censoring, truncation and other sampling issues results in biased estimates. PMID:20332366

  8. Statistical methods of parameter estimation for deterministically chaotic time series.

    PubMed

    Pisarenko, V F; Sornette, D

    2004-03-01

We discuss the possibility of applying some standard statistical methods (the least-squares method, the maximum likelihood method, and the method of statistical moments for estimation of parameters) to a deterministically chaotic low-dimensional dynamical system (the logistic map) observed with noise. A "segmentation fitting" maximum likelihood (ML) method is suggested to estimate the structural parameter of the logistic map along with the initial value x(1), considered as an additional unknown parameter. The segmentation fitting method, called "piece-wise" ML, is similar in spirit to, but simpler and with smaller bias than, the "multiple shooting" approach previously proposed. Comparisons with different previously proposed techniques on simulated numerical examples give favorable results (at least for the investigated combinations of sample size N and noise level). Besides, unlike some suggested techniques, our method does not require a priori knowledge of the noise variance. We also clarify the nature of the inherent difficulties in the statistical analysis of deterministically chaotic time series and the status of previously proposed Bayesian approaches. We note the trade-off between the need to use a large number of data points in the ML analysis to decrease the bias (to guarantee consistency of the estimation) and the unstable nature of dynamical trajectories with exponentially fast loss of memory of the initial condition. The method of statistical moments for the estimation of the parameter of the logistic map is discussed. This method seems to be the unique method whose consistency for deterministically chaotic time series has so far been proved theoretically (not only numerically).
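
    The simple one-step least-squares estimate of the logistic-map parameter (one of the standard methods discussed above, not the proposed piece-wise ML method) can be sketched as follows; the true parameter, series length, and noise level are illustrative choices, and the estimator is biased for larger observational noise.

```python
import numpy as np

# One-step least-squares estimate of the logistic-map parameter r from a
# noisy series, using the map relation x[n+1] = r * x[n] * (1 - x[n]).

rng = np.random.default_rng(6)
r_true, n = 3.8, 200
x = np.empty(n)
x[0] = 0.3
for k in range(n - 1):
    x[k + 1] = r_true * x[k] * (1.0 - x[k])
observed = x + rng.normal(0.0, 0.005, n)        # small observational noise

u = observed[:-1] * (1.0 - observed[:-1])       # regressor x[n] * (1 - x[n])
r_hat = np.sum(observed[1:] * u) / np.sum(u ** 2)
print(f"true r = {r_true}, least-squares estimate = {r_hat:.3f}")
```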

  9. Hydrological model uncertainty due to spatial evapotranspiration estimation methods

    NASA Astrophysics Data System (ADS)

    Yu, Xuan; Lamačová, Anna; Duffy, Christopher; Krám, Pavel; Hruška, Jakub

    2016-05-01

Evapotranspiration (ET) continues to be a difficult process to estimate in seasonal and long-term water balances in catchment models. Approaches to estimate ET typically use vegetation parameters (e.g., leaf area index [LAI], interception capacity) obtained from field observation, remote sensing data, national or global land cover products, and/or simulated by ecosystem models. In this study we attempt to quantify the uncertainty that spatial evapotranspiration estimation introduces into hydrological simulations when the age of the forest is not precisely known. The Penn State Integrated Hydrologic Model (PIHM) was implemented for the Lysina headwater catchment, located at 50°03′N, 12°40′E in the western part of the Czech Republic. The spatial forest patterns were digitized from forest age maps made available by the Czech Forest Administration. Two ET methods were implemented in the catchment model: the Biome-BGC forest growth sub-model (1-way coupled to PIHM) and the fixed-seasonal LAI method. From these two approaches simulation scenarios were developed. We combined the estimated spatial forest age maps and two ET estimation methods to drive PIHM. A set of spatial hydrologic regime and streamflow regime indices were calculated from the modeling results for each method. Intercomparison of the hydrological responses to the spatial vegetation patterns suggested considerable variation in soil moisture and recharge and a small uncertainty in the groundwater table elevation and streamflow. The hydrologic modeling with ET estimated by Biome-BGC generated less uncertainty due to the plant physiology-based method. The implication of this research is that overall hydrologic variability induced by uncertain management practices was reduced by implementing vegetation models in the catchment models.

  10. Method for estimating the systemic burden of Pu from urinalyses

    SciTech Connect

    Leggett, R.W.; Eckerman, K.F.

    1987-03-01

    It is generally agreed that Langham's model for urinary excretion of Pu substantially overestimates the systemic burden several years after exposure. Improved estimates can be derived from information obtained since the development of that model, including comparative urine and autopsy data for occupationally exposed persons; reanalyzed and updated data for human subjects injected with Pu; and a large body of general physiological and Pu-specific information on the processes governing the behavior of Pu in the body. We examine modeling approaches based on each of these sets of information and show that the three approaches yield fairly consistent estimates of the urinary excretion rate over three decades after contamination of blood. Estimates from the various approaches are unified to obtain a single set of predicted urinary excretion rates that, in effect, is based on all three bodies of information. A simple method is described for using these excretion rates to estimate intakes and systemic burdens of Pu.

  11. The deposit size frequency method for estimating undiscovered uranium deposits

    USGS Publications Warehouse

    McCammon, R.B.; Finch, W.I.

    1993-01-01

The deposit size frequency (DSF) method has been developed as a generalization of the method that was used in the National Uranium Resource Evaluation (NURE) program to estimate the uranium endowment of the United States. The DSF method overcomes difficulties encountered during the NURE program when geologists were asked to provide subjective estimates of (1) the endowed fraction of an area judged favorable (factor F) for the occurrence of undiscovered uranium deposits and (2) the tons of endowed rock per unit area (factor T) within the endowed fraction of the favorable area. Because the magnitudes of factors F and T were unfamiliar to nearly all of the geologists, most geologists responded by estimating the number of undiscovered deposits likely to occur within the favorable area and the average size of these deposits. The DSF method combines factors F and T into a single factor (F??T) that represents the tons of endowed rock per unit area of the undiscovered deposits within the favorable area. Factor F??T, provided by the geologist, is the estimated number of undiscovered deposits per unit area in each of a number of specified deposit-size classes. The number of deposit-size classes and the size interval of each class are based on the data collected from the deposits in known (control) areas. The DSF method affords greater latitude in making subjective estimates than the NURE method and emphasizes more of the everyday experience of exploration geologists. Using the DSF method, new assessments have been made for the "young, organic-rich" surficial uranium deposits in Washington and Idaho and for the solution-collapse breccia pipe uranium deposits in the Grand Canyon region in Arizona and adjacent Utah. © 1993 Oxford University Press.

  12. Kernel bandwidth estimation for nonparametric modeling.

    PubMed

    Bors, Adrian G; Nasios, Nikolaos

    2009-12-01

Kernel density estimation is a nonparametric procedure for probability density modeling, which has found several applications in various fields. The smoothness and modeling ability of the functional approximation are controlled by the kernel bandwidth. In this paper, we describe a Bayesian estimation method for finding the bandwidth from a given data set. The proposed bandwidth estimation method is applied in three different computational-intelligence methods that rely on kernel density estimation: 1) scale space; 2) mean shift; and 3) quantum clustering. The third method is a novel approach that relies on the principles of quantum mechanics. This method is based on the analogy between data samples and quantum particles and uses the Schrödinger potential as a cost function. The proposed methodology is used for blind-source separation of modulated signals and for terrain segmentation based on topography information.

  13. MONTE CARLO ERROR ESTIMATION APPLIED TO NONDESTRUCTIVE ASSAY METHODS

    SciTech Connect

    R. ESTEP; ET AL

    2000-06-01

    Monte Carlo randomization of nuclear counting data into N replicate sets is the basis of a simple and effective method for estimating error propagation through complex analysis algorithms such as those using neural networks or tomographic image reconstructions. The error distributions of properly simulated replicate data sets mimic those of actual replicate measurements and can be used to estimate the std. dev. for an assay along with other statistical quantities. We have used this technique to estimate the standard deviation in radionuclide masses determined using the tomographic gamma scanner (TGS) and combined thermal/epithermal neutron (CTEN) methods. The effectiveness of this approach is demonstrated by a comparison of our Monte Carlo error estimates with the error distributions in actual replicate measurements and simulations of measurements. We found that the std. dev. estimated this way quickly converges to an accurate value on average and has a predictable error distribution similar to N actual repeat measurements. The main drawback of the Monte Carlo method is that N additional analyses of the data are required, which may be prohibitively time consuming with slow analysis algorithms.
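
    The replication idea can be sketched in a few lines: Poisson-randomize the measured counts into replicate data sets, run the same analysis on each, and take the spread of the results as the error estimate. The counts, efficiency factors, and stand-in analysis function below are made up; a real TGS or CTEN reconstruction would take the place of the trivial weighted sum.

```python
import numpy as np

# Monte Carlo replication of counting data for error estimation. All numbers
# and the "analysis" function are illustrative placeholders.

rng = np.random.default_rng(7)
measured_counts = np.array([1200.0, 950.0, 400.0, 2200.0])   # hypothetical detector counts
efficiencies = np.array([0.8, 0.9, 0.7, 1.1])                # hypothetical calibration factors

def analysis(counts: np.ndarray) -> float:
    """Stand-in analysis algorithm mapping raw counts to an assay value."""
    return float(np.sum(counts / efficiencies)) * 1e-3

n_replicates = 500
replicate_results = [analysis(rng.poisson(measured_counts)) for _ in range(n_replicates)]

print(f"assay value: {analysis(measured_counts):.3f}")
print(f"Monte Carlo std. dev. estimate: {np.std(replicate_results, ddof=1):.3f}")
```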

  14. Estimation of uncertainty for contour method residual stress measurements

    SciTech Connect

    Olson, Mitchell D.; DeWald, Adrian T.; Prime, Michael B.; Hill, Michael R.

    2014-12-03

    This paper describes a methodology for the estimation of measurement uncertainty for the contour method, where the contour method is an experimental technique for measuring a two-dimensional map of residual stress over a plane. Random error sources including the error arising from noise in displacement measurements and the smoothing of the displacement surfaces are accounted for in the uncertainty analysis. The output is a two-dimensional, spatially varying uncertainty estimate such that every point on the cross-section where residual stress is determined has a corresponding uncertainty value. Both numerical and physical experiments are reported, which are used to support the usefulness of the proposed uncertainty estimator. The uncertainty estimator shows the contour method to have larger uncertainty near the perimeter of the measurement plane. For the experiments, which were performed on a quenched aluminum bar with a cross section of 51 × 76 mm, the estimated uncertainty was approximately 5 MPa (σ/E = 7 · 10⁻⁵) over the majority of the cross-section, with localized areas of higher uncertainty, up to 10 MPa (σ/E = 14 · 10⁻⁵).
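
    A generic sketch of propagating random displacement noise through a smoothing step and reading off a spatially varying standard deviation; the displacement map, noise level, and smoothing filter are hypothetical, and the finite-element stress-calculation step of the actual contour method is omitted.

      import numpy as np
      from scipy.ndimage import uniform_filter

      rng = np.random.default_rng(2)
      true_disp = 0.05 * np.outer(np.hanning(50), np.hanning(60))   # hypothetical displacement map (mm)
      noise_sd = 0.002                                              # measurement noise (mm)

      trials = np.array([uniform_filter(true_disp + rng.normal(0.0, noise_sd, true_disp.shape), size=5)
                         for _ in range(200)])                      # smoothing stands in for surface fitting

      uncertainty_map = trials.std(axis=0)                          # spatially varying uncertainty
      print("max displacement uncertainty (mm):", round(float(uncertainty_map.max()), 5))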

  15. Estimation of uncertainty for contour method residual stress measurements

    DOE PAGES

    Olson, Mitchell D.; DeWald, Adrian T.; Prime, Michael B.; ...

    2014-12-03

    This paper describes a methodology for the estimation of measurement uncertainty for the contour method, where the contour method is an experimental technique for measuring a two-dimensional map of residual stress over a plane. Random error sources including the error arising from noise in displacement measurements and the smoothing of the displacement surfaces are accounted for in the uncertainty analysis. The output is a two-dimensional, spatially varying uncertainty estimate such that every point on the cross-section where residual stress is determined has a corresponding uncertainty value. Both numerical and physical experiments are reported, which are used to support the usefulness of the proposed uncertainty estimator. The uncertainty estimator shows the contour method to have larger uncertainty near the perimeter of the measurement plane. For the experiments, which were performed on a quenched aluminum bar with a cross section of 51 × 76 mm, the estimated uncertainty was approximately 5 MPa (σ/E = 7 · 10⁻⁵) over the majority of the cross-section, with localized areas of higher uncertainty, up to 10 MPa (σ/E = 14 · 10⁻⁵).

  16. Methods to estimate breeding values in honey bees.

    PubMed

    Brascamp, Evert W; Bijma, Piter

    2014-09-19

    Efficient methodologies based on animal models are widely used to estimate breeding values in farm animals. These methods are not applicable in honey bees because of their mode of reproduction. Observations are recorded on colonies, which consist of a single queen and thousands of workers that descended from the queen mated to 10 to 20 drones. Drones are haploid and sperms are copies of a drone's genotype. As a consequence, Mendelian sampling terms of full-sibs are correlated, such that the covariance matrix of Mendelian sampling terms is not diagonal. In this paper, we show how the numerator relationship matrix and its inverse can be obtained for honey bee populations. We present algorithms to derive the covariance matrix of Mendelian sampling terms that accounts for correlated terms. The resulting matrix is a block-diagonal matrix, with a small block for each full-sib family, and is easy to invert numerically. The method allows incorporating the within-colony distribution of progeny from drone-producing queens and drones, such that estimates of breeding values weigh information from relatives appropriately. Simulation shows that the resulting estimated breeding values are unbiased predictors of true breeding values. Benefits for response to selection, compared to an existing approximate method, appear to be limited (~5%). Benefits may however be greater when estimating genetic parameters. This work shows how the relationship matrix and its inverse can be developed for honey bee populations, and used to estimate breeding values and variance components.
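
    A small numerical illustration of the block-diagonal structure described above: with one covariance block per full-sib family (hypothetical values, not the authors' formulas), the full matrix can be inverted cheaply family by family.

      import numpy as np
      from scipy.linalg import block_diag

      # one small Mendelian-sampling covariance block per full-sib family (hypothetical values)
      family_blocks = [np.array([[0.50, 0.10],
                                 [0.10, 0.50]]),
                       np.array([[0.45, 0.12, 0.12],
                                 [0.12, 0.45, 0.12],
                                 [0.12, 0.12, 0.45]])]

      C = block_diag(*family_blocks)                                  # full covariance matrix
      C_inv = block_diag(*[np.linalg.inv(b) for b in family_blocks])  # family-by-family inverse
      print("blockwise inverse is exact:", np.allclose(C @ C_inv, np.eye(C.shape[0])))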

  17. A new colorimetric method for the estimation of glycosylated hemoglobin.

    PubMed

    Nayak, S S; Pattabiraman, T N

    1981-02-05

    A new colorimetric method, based on the phenol sulphuric acid reaction of carbohydrates, is described for the determination of glycosylated hemoglobin. Hemolyzates were treated with 1 mol/l oxalic acid in 2 mol/l HCl for 4 h at 100 degrees C, the protein was precipitated with trichloroacetic acid, and the free sugars and hydroxymethyl furfural in the protein free supernatant were treated with phenol and sulphuric acid to form the color. The new method is compared to the thiobarbituric acid method and the ion-exchange chromatographic method for the estimation of glycosylated hemoglobin in normals and diabetics. The increase in glycosylated hemoglobin in diabetic patients as estimated by the phenol-sulphuric acid method was more significant (P less than 0.001) than the increase observed by the thiobarbituric acid method (P less than 0.01). The correlation between the phenol-sulphuric acid method and the column method was better (r = 0.91) than the correlation between the thiobarbituric acid method and the column method (r = 0.84). No significant correlation between fasting and postprandial blood sugar level and glycosylated hemoglobin level as determined by the two colorimetric methods was observed in diabetic patients.

  18. Improvement of Source Number Estimation Method for Single Channel Signal

    PubMed Central

    Du, Bolun; He, Yunze

    2016-01-01

    Source number estimation methods for single-channel signals are investigated and improvements to each method are suggested in this work. First, the single-channel data are converted to multi-channel form by a delay process. Then, algorithms used in array signal processing, such as Gerschgorin's disk estimation (GDE) and the minimum description length (MDL) criterion, are introduced to estimate the source number of the received signal. Previous results have shown that MDL, based on information theoretic criteria (ITC), outperforms GDE at low SNR; however, it cannot handle signals containing colored noise. Conversely, the GDE method can eliminate the influence of colored noise, but its performance at low SNR is unsatisfactory. To resolve these shortcomings, this work improves both methods: a diagonal loading technique is employed to ameliorate the MDL method, and a jackknife technique is used to refine the data covariance matrix and thereby improve the performance of the GDE method. Simulation results show that the performance of both original methods is improved substantially. PMID:27736959
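
    A minimal sketch of MDL-based source number estimation from sample-covariance eigenvalues (the generic information-theoretic form, not the diagonally loaded variant proposed in the paper), applied to a synthetic multichannel mixture.

      import numpy as np

      def mdl_source_count(X):
          # X: (N snapshots, p channels); returns the MDL estimate of the signal subspace rank
          N, p = X.shape
          lam = np.sort(np.linalg.eigvalsh(X.conj().T @ X / N))[::-1]
          mdl = [-N * (p - k) * np.log(np.exp(np.mean(np.log(lam[k:]))) / np.mean(lam[k:]))
                 + 0.5 * k * (2 * p - k) * np.log(N) for k in range(p)]
          return int(np.argmin(mdl))

      rng = np.random.default_rng(3)
      N, p, k_true = 2000, 8, 2
      A = rng.standard_normal((p, k_true)) + 1j * rng.standard_normal((p, k_true))   # mixing matrix
      S = rng.standard_normal((N, k_true)) + 1j * rng.standard_normal((N, k_true))   # source signals
      X = S @ A.T + 0.5 * (rng.standard_normal((N, p)) + 1j * rng.standard_normal((N, p)))
      print("estimated number of sources:", mdl_source_count(X))   # expected: 2 for this synthetic case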

  19. Paradigms and commonalities in atmospheric source term estimation methods

    NASA Astrophysics Data System (ADS)

    Bieringer, Paul E.; Young, George S.; Rodriguez, Luna M.; Annunzio, Andrew J.; Vandenberghe, Francois; Haupt, Sue Ellen

    2017-05-01

    Modeling the downwind hazard area resulting from the unknown release of an atmospheric contaminant requires estimation of the source characteristics of a localized source from concentration or dosage observations and use of this information to model the subsequent transport and dispersion of the contaminant. This source term estimation problem is mathematically challenging because airborne material concentration observations and wind data are typically sparse and the turbulent wind field chaotic. Methods for addressing this problem fall into three general categories: forward modeling, inverse modeling, and nonlinear optimization. Because numerous methods have been developed on various foundations, they often have a disparate nomenclature. This situation poses challenges to those facing a new source term estimation problem, particularly when selecting the best method for the problem at hand. There is, however, much commonality between many of these methods, especially within each category. Here we seek to address the difficulties encountered when selecting an STE method by providing a synthesis of the various methods that highlights commonalities, potential opportunities for component exchange, and lessons learned that can be applied across methods.

  20. Inverse method for estimating shear stress in machining

    NASA Astrophysics Data System (ADS)

    Burns, T. J.; Mates, S. P.; Rhorer, R. L.; Whitenton, E. P.; Basak, D.

    2016-01-01

    An inverse method is presented for estimating shear stress in the work material in the region of chip-tool contact along the rake face of the tool during orthogonal machining. The method is motivated by a model of heat generation in the chip, which is based on a two-zone contact model for friction along the rake face, and an estimate of the steady-state flow of heat into the cutting tool. Given an experimentally determined discrete set of steady-state temperature measurements along the rake face of the tool, it is shown how to estimate the corresponding shear stress distribution on the rake face, even when no friction model is specified.

  1. Estimating Agricultural Water Use using the Operational Simplified Surface Energy Balance Evapotranspiration Estimation Method

    NASA Astrophysics Data System (ADS)

    Forbes, B. T.

    2015-12-01

    Due to the predominantly arid climate in Arizona, access to adequate water supply is vital to the economic development and livelihood of the State. Water supply has become increasingly important during periods of prolonged drought, which has strained reservoir water levels in the Desert Southwest over past years. Arizona's water use is dominated by agriculture, consuming about seventy-five percent of the total annual water demand. Tracking current agricultural water use is important for managers and policy makers so that current water demand can be assessed and current information can be used to forecast future demands. However, many croplands in Arizona are irrigated outside of areas where water use reporting is mandatory. To estimate irrigation withdrawals on these lands, we use a combination of field verification, evapotranspiration (ET) estimation, and irrigation system qualification. ET is typically estimated in Arizona using the Modified Blaney-Criddle method which uses meteorological data to estimate annual crop water requirements. The Modified Blaney-Criddle method assumes crops are irrigated to their full potential over the entire growing season, which may or may not be realistic. We now use the Operational Simplified Surface Energy Balance (SSEBop) ET data in a remote-sensing and energy-balance framework to estimate cropland ET. SSEBop data are of sufficient resolution (30m by 30m) for estimation of field-scale cropland water use. We evaluate our SSEBop-based estimates using ground-truth information and irrigation system qualification obtained in the field. Our approach gives the end user an estimate of crop consumptive use as well as inefficiencies in irrigation system performance—both of which are needed by water managers for tracking irrigated water use in Arizona.

  2. Method for estimating spin-spin interactions from magnetization curves

    NASA Astrophysics Data System (ADS)

    Tamura, Ryo; Hukushima, Koji

    2017-02-01

    We develop a method to estimate the spin-spin interactions in the Hamiltonian from the observed magnetization curve by machine learning based on Bayesian inference. In our method, plausible spin-spin interactions are determined by maximizing the posterior distribution, which is the conditional probability of the spin-spin interactions in the Hamiltonian for a given magnetization curve with observation noise. The conditional probability is obtained with the Markov chain Monte Carlo simulations combined with an exchange Monte Carlo method. The efficiency of our method is tested using synthetic magnetization curve data, and the results show that spin-spin interactions are estimated with a high accuracy. In particular, the relevant terms of the spin-spin interactions are successfully selected from the redundant interaction candidates by the l1 regularization in the prior distribution.
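
    As a loose analogy only, the sketch below shows L1 regularization (the MAP estimate under a Laplace prior) selecting a few relevant couplings from redundant candidates in a linear surrogate problem; it does not reproduce the exchange Monte Carlo treatment of the spin Hamiltonian.

      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(4)
      X = rng.standard_normal((200, 20))                  # candidate interaction "features" (hypothetical)
      true_J = np.zeros(20)
      true_J[[1, 5, 7]] = [1.2, -0.8, 0.5]                # only a few couplings are nonzero
      y = X @ true_J + 0.05 * rng.standard_normal(200)    # noisy observations

      model = Lasso(alpha=0.05).fit(X, y)
      print("selected couplings:", np.flatnonzero(np.abs(model.coef_) > 1e-3))   # typically [1 5 7]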

  3. Kernel density estimation applied to bond length, bond angle, and torsion angle distributions.

    PubMed

    McCabe, Patrick; Korb, Oliver; Cole, Jason

    2014-05-27

    We describe the method of kernel density estimation (KDE) and apply it to molecular structure data. KDE is a quite general nonparametric statistical method suitable even for multimodal data. The method generates smooth probability density function (PDF) representations and finds application in diverse fields such as signal processing and econometrics. KDE appears to have been under-utilized as a method in molecular geometry analysis, chemo-informatics, and molecular structure optimization. The resulting probability densities have advantages over histograms and, importantly, are also suitable for gradient-based optimization. To illustrate KDE, we describe its application to chemical bond length, bond valence angle, and torsion angle distributions and show the ability of the method to model arbitrary torsion angle distributions.
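
    A minimal sketch of Gaussian KDE on a synthetic bimodal torsion-angle sample (illustrative data, not the crystallographic distributions analyzed above).

      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(5)
      angles = np.concatenate([rng.normal(60, 10, 300), rng.normal(180, 15, 700)])  # torsion angles (deg)

      kde = gaussian_kde(angles)                 # bandwidth from Scott's rule by default
      grid = np.linspace(0, 360, 721)
      pdf = kde(grid)                            # smooth, differentiable density estimate
      print("mass in the 0-120 degree lobe:", round(float(kde.integrate_box_1d(0, 120)), 3))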

  4. Applications of truncated QR methods to sinusoidal frequency estimation

    NASA Technical Reports Server (NTRS)

    Hsieh, S. F.; Liu, K. J. R.; Yao, K.

    1990-01-01

    Three truncated QR methods are proposed for sinusoidal frequency estimation: (1) truncated QR without column pivoting (TQR), (2) truncated QR with preordered columns, and (3) truncated QR with column pivoting. It is demonstrated that the benefit of truncated SVD for high frequency resolution is achievable under the truncated QR approach with much lower computational cost. Other attractive features of the proposed methods include the ease of updating, which is difficult for the SVD method, and numerical stability. TQR methods thus offer efficient ways to identify sinusoids closely clustered in frequency under stationary and nonstationary conditions.

  5. Accurate position estimation methods based on electrical impedance tomography measurements

    NASA Astrophysics Data System (ADS)

    Vergara, Samuel; Sbarbaro, Daniel; Johansen, T. A.

    2017-08-01

    Electrical impedance tomography (EIT) is a technology that estimates the electrical properties of a body or a cross section. Its main advantages are its non-invasiveness, low cost and operation free of radiation. The estimation of the conductivity field leads to low resolution images compared with other technologies, and entails a high computational cost. However, in many applications the target information lies in a low intrinsic dimensionality of the conductivity field. The estimation of this low-dimensional information is addressed in this work. It proposes optimization-based and data-driven approaches for estimating this low-dimensional information. The accuracy of the results obtained with these approaches depends on modelling and experimental conditions. Optimization approaches are sensitive to model discretization, type of cost function and searching algorithms. Data-driven methods are sensitive to the assumed model structure and the data set used for parameter estimation. The system configuration and experimental conditions, such as number of electrodes and signal-to-noise ratio (SNR), also have an impact on the results. In order to illustrate the effects of all these factors, the position estimation of a circular anomaly is addressed. Optimization methods based on weighted error cost functions and derivative-free optimization algorithms provided the best results. Data-driven approaches based on linear models provided, in this case, good estimates, but the use of nonlinear models enhanced the estimation accuracy. The results obtained by optimization-based algorithms were less sensitive to experimental conditions, such as number of electrodes and SNR, than data-driven approaches. Position estimation mean squared errors for simulation and experimental conditions were more than twice as large for the optimization-based approaches as for the data-driven ones. The experimental position estimation mean squared error of the data-driven models using a 16-electrode setup was less

  6. A preliminary comparison of different methods for observer performance estimation

    NASA Astrophysics Data System (ADS)

    Massanes, Francesc; Brankov, Jovan G.

    2013-03-01

    In medical imaging, image quality is assessed by the degree to which a human observer can correctly perform a given diagnostic task. Therefore the image quality is typically quantified by using performance measurements from decision/detection theory like the receiver operating characteristic (ROC) curve and the area under the ROC curve (AUC). In this paper we compare five different AUC estimation techniques, widely used in the literature, including parametric and non-parametric methods. We compared each method by equivalence hypothesis testing using a model observer as well as data sets from a previously published human observer study. The main conclusions of this work are: 1) if a small number of images are scored, one cannot tell apart different AUC estimation methods due to large variability in AUC estimates, regardless of whether image scores are reported on a continuous or quantized scale. 2) If the number of scored images is large and image scores are reported on a continuous scale, all tested AUC estimation methods are statistically equivalent.
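
    One of the estimator families compared in studies of this kind is the non-parametric AUC; a minimal sketch of the Mann-Whitney form on hypothetical observer ratings follows.

      import numpy as np

      def auc_mann_whitney(pos, neg):
          # probability that a signal-present score exceeds a signal-absent score (ties count 1/2)
          pos, neg = np.asarray(pos)[:, None], np.asarray(neg)[None, :]
          return (pos > neg).mean() + 0.5 * (pos == neg).mean()

      rng = np.random.default_rng(6)
      signal_present = rng.normal(1.0, 1.0, 50)    # hypothetical observer ratings
      signal_absent = rng.normal(0.0, 1.0, 50)
      print("AUC estimate:", round(auc_mann_whitney(signal_present, signal_absent), 3))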

  7. Lidar method to estimate emission rates from extended sources

    USDA-ARS?s Scientific Manuscript database

    Currently, point measurements, often combined with models, are the primary means by which atmospheric emission rates are estimated from extended sources. However, these methods often fall short in their spatial and temporal resolution and accuracy. In recent years, lidar has emerged as a suitable to...

  8. Neural Network Based Method for Estimating Helicopter Low Airspeed

    DTIC Science & Technology

    1996-10-24

    The present invention relates generally to virtual sensors and, more particularly, to a means and method utilizing a neural network for estimating ... helicopter airspeed at speeds below about 50 knots using only fixed system parameters (i.e., parameters measured or determined in a reference frame fixed relative to the helicopter fuselage) as inputs to the neural network.

  9. A study of methods to estimate debris flow velocity

    USGS Publications Warehouse

    Prochaska, A.B.; Santi, P.M.; Higgins, J.D.; Cannon, S.H.

    2008-01-01

    Debris flow velocities are commonly back-calculated from superelevation events which require subjective estimates of radii of curvature of bends in the debris flow channel or predicted using flow equations that require the selection of appropriate rheological models and material property inputs. This research investigated difficulties associated with the use of these conventional velocity estimation methods. Radii of curvature estimates were found to vary with the extent of the channel investigated and with the scale of the media used, and back-calculated velocities varied among different investigated locations along a channel. Distinct populations of Bingham properties were found to exist between those measured by laboratory tests and those back-calculated from field data; thus, laboratory-obtained values would not be representative of field-scale debris flow behavior. To avoid these difficulties with conventional methods, a new preliminary velocity estimation method is presented that statistically relates flow velocity to the channel slope and the flow depth. This method presents ranges of reasonable velocity predictions based on 30 previously measured velocities. © 2008 Springer-Verlag.
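
    For reference, the conventional superelevation back-calculation mentioned above is usually written in the forced-vortex form v = sqrt(g R_c Δh / (k B)); the inputs and correction factor below are hypothetical.

      import math

      g = 9.81      # m/s^2
      R_c = 30.0    # estimated radius of curvature of the bend (m), a subjective input
      B = 8.0       # flow width across the bend (m)
      dh = 1.5      # superelevation, the banking height difference across the bend (m)
      k = 2.5       # vortex correction factor, commonly taken in the range of roughly 1 to 5

      v = math.sqrt(g * R_c * dh / (k * B))
      print("back-calculated velocity:", round(v, 1), "m/s")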

  10. Implementing the measurement interval midpoint method for change estimation

    Treesearch

    James A. Westfall; Thomas Frieswyk; Douglas M. Griffith

    2009-01-01

    The adoption of nationally consistent estimation procedures for the Forest Inventory and Analysis (FIA) program mandates changes in the methods used to develop resource trend information. Particularly, it is prescribed that changes in tree status occur at the midpoint of the measurement interval to minimize potential bias. The individual-tree characteristics requiring...

  11. Stress intensity estimates by a computer assisted photoelastic method

    NASA Technical Reports Server (NTRS)

    Smith, C. W.

    1977-01-01

    Following an introductory history, the frozen stress photoelastic method is reviewed together with analytical and experimental aspects of cracks in photoelastic models. Analytical foundations are then presented upon which a computer assisted frozen stress photoelastic technique is based for extracting estimates of stress intensity factors from three-dimensional cracked body problems. The use of the method is demonstrated for two currently important three-dimensional crack problems.

  12. An experiment to compare multiple methods for streamflow uncertainty estimation

    NASA Astrophysics Data System (ADS)

    Kiang, Julie; McMillan, Hilary; Gazoorian, Chris; Mason, Robert; Le Coz, Jerome; Renard, Benjamin; Mansanarez, Valentin; Westerberg, Ida; Petersen-Øverleir, Asgeir; Reitan, Trond; Sikorska, Anna; Seibert, Jan; Coxon, Gemma; Freer, Jim; Belleville, Arnaud; Hauet, Alexandre

    2017-04-01

    Stage-discharge rating curves are used to relate streamflow discharge to continuously measured river stage readings to create a continuous record of streamflow discharge. The stage-discharge relationship is estimated and refined using discrete streamflow measurements over time, during which both the discharge and stage are measured. There is uncertainty in the resulting rating curve due to multiple factors including the curve-fitting process, assumptions on the form of the model used, fluvial geomorphology of natural channels, and the approaches used to extrapolate the rating equation beyond available observations. This rating curve uncertainty leads to uncertainty in the streamflow timeseries, and therefore to uncertainty in predictive models that use the streamflow data. Many different methods have been proposed in the literature for estimating rating curve uncertainty, differing in mathematical rigor, in the assumptions made about the component errors, and in the information required to implement the method at any given site. This study describes the results of an international experiment to test and compare streamflow uncertainty estimation methods from 7 research groups across 9 institutions. The methods range from simple LOWESS fits to more complicated Bayesian methods that consider hydraulic principles directly. We evaluate these different methods when applied to three diverse gauging stations using standardized information (channel characteristics, hydrographs, and streamflow measurements). Our results quantify the resultant spread of the stage-discharge curves and compare the level of uncertainty attributed to the streamflow records by each different method. We provide insight into the sensitivity of streamflow uncertainty bounds to the choice of uncertainty estimation method, and discuss the implications for model uncertainty assessment.
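
    A minimal sketch of one simple member of this family of methods: fitting a power-law rating curve Q = a (h - h0)^b to synthetic gaugings and using the residual scatter as a crude uncertainty band (not any specific participant's method).

      import numpy as np
      from scipy.optimize import curve_fit

      def rating(h, a, b, h0):
          return a * np.clip(h - h0, 1e-6, None) ** b

      rng = np.random.default_rng(7)
      stage = np.sort(rng.uniform(0.5, 3.0, 40))                               # gauged stages (m)
      discharge = 12.0 * (stage - 0.2) ** 1.6 * rng.lognormal(0.0, 0.08, 40)   # gauged flows (m3/s)

      params, _ = curve_fit(rating, stage, discharge, p0=[10.0, 1.5, 0.1])
      sigma_log = (np.log(discharge) - np.log(rating(stage, *params))).std(ddof=3)

      q_hat = rating(2.5, *params)
      print("Q(2.5 m) estimate:", round(q_hat, 1), "m3/s, approx. 68% band:",
            round(q_hat * np.exp(-sigma_log), 1), "-", round(q_hat * np.exp(sigma_log), 1))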

  13. Adaptive Spectral Estimation Methods in Color Flow Imaging.

    PubMed

    Karabiyik, Yucel; Ekroll, Ingvild Kinn; Eik-Nes, Sturla H; Avdal, Jorgen; Lovstakken, Lasse

    2016-11-01

    Clutter rejection for color flow imaging (CFI) remains a challenge due to either a limited amount of temporal samples available or nonstationary tissue clutter. This is particularly the case for interleaved CFI and B-mode acquisitions. Low velocity blood signal is attenuated along with the clutter due to the long transition band of the available clutter filters, causing regions of biased mean velocity estimates or signal dropouts. This paper investigates how adaptive spectral estimation methods, Capon and blood iterative adaptive approach (BIAA), can be used to estimate the mean velocity in CFI without prior clutter filtering. The approach is based on confining the clutter signal in a narrow spectral region around the zero Doppler frequency while keeping the spectral side lobes below the blood signal level, allowing for the clutter signal to be removed by thresholding in the frequency domain. The proposed methods are evaluated using computer simulations, flow phantom experiments, and in vivo recordings from the common carotid and jugular vein of healthy volunteers. Capon and BIAA methods could estimate low blood velocities, which are normally attenuated by polynomial regression filters, and may potentially give better estimation of mean velocities for CFI at a higher computational cost. The Capon method decreased the bias by 81% in the transition band of the used polynomial regression filter for small packet size ( N=8 ) and low SNR (5 dB). Flow phantom and in vivo results demonstrate that the Capon method can provide color flow images and flow profiles with lower variance and bias especially in the regions close to the artery walls.
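
    A minimal sketch of the Capon (minimum-variance) spectral estimate on a short synthetic slow-time ensemble, with diagonal loading and a crude frequency-domain clutter cut; this illustrates the general idea only, not the authors' processing chain.

      import numpy as np

      def capon_spectrum(X, freqs, load=1e-2):
          # X: (N, M) complex slow-time data (packet size N, M spatial samples)
          N = X.shape[0]
          R = X @ X.conj().T / X.shape[1]
          R = R + load * np.trace(R).real / N * np.eye(N)            # diagonal loading for stability
          Rinv = np.linalg.inv(R)
          a = np.exp(2j * np.pi * np.outer(np.arange(N), freqs))     # Doppler steering vectors (N, F)
          return 1.0 / np.real(np.einsum('nf,nk,kf->f', a.conj(), Rinv, a))

      rng = np.random.default_rng(8)
      N, M = 8, 32
      n = np.arange(N)[:, None]
      clutter = 10.0 * (rng.standard_normal((1, M)) + 1j * rng.standard_normal((1, M))) * np.ones((N, 1))
      blood = (rng.standard_normal((1, M)) + 1j * rng.standard_normal((1, M))) * np.exp(2j * np.pi * 0.12 * n)
      X = clutter + blood + 0.05 * (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M)))

      freqs = np.linspace(-0.5, 0.5, 201)
      P = capon_spectrum(X, freqs)
      away = np.abs(freqs) > 0.05                                    # crude thresholding of the clutter region
      print("clutter ridge near f =", freqs[np.argmax(P)])           # expected near 0 for this synthetic case
      print("blood peak near f =", freqs[away][np.argmax(P[away])])  # expected near 0.12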

  14. Adaptive Spectral Estimation Methods in Color Flow Imaging.

    PubMed

    Karabiyik, Yucel; Ekroll, Ingvild Kinn; Eik-Nes, Sturla; Avdal, Jorgen; Lovstakken, Lasse

    2016-07-28

    Clutter rejection for color flow imaging (CFI) remains a challenge due to either a limited amount of temporal samples available or non-stationary tissue clutter. This is particularly the case for interleaved CFI and B-mode acquisitions. Low velocity blood signal is attenuated along with the clutter due to the long transition band of the available clutter filters, causing regions of biased mean velocity estimates or signal dropouts. This work investigates how adaptive spectral estimation methods, the Capon and BIAA, can be used to estimate the mean velocity in CFI without prior clutter filtering. The approach is based on confining the clutter signal in a narrow spectral region around the zero Doppler frequency while keeping the spectral side lobes below the blood signal level, allowing for the clutter signal to be removed by thresholding in the frequency domain. The proposed methods are evaluated using computer simulations, flow phantom experiments and in vivo recordings from the common carotid and jugular vein of healthy volunteers. Capon and BIAA methods could estimate low blood velocities which are normally attenuated by polynomial regression filters, and may potentially give better estimation of mean velocities for CFI at a higher computational cost. The Capon method decreased the bias by 81% in the transition band of the used polynomial regression filter for small packet size (N=8) and low SNR (5 dB). Flow phantom and in vivo results demonstrate that the Capon method can provide color flow images and flow profiles with lower variance and bias especially in the regions close to the artery walls.

  15. Three Different Methods of Estimating LAI in a Small Watershed

    NASA Astrophysics Data System (ADS)

    Speckman, H. N.; Ewers, B. E.; Beverly, D.

    2015-12-01

    Leaf area index (LAI) is a critical input of models that improve predictive understanding of ecology, hydrology, and climate change. Multiple techniques exist to quantify LAI, most of which are labor intensive, and all often fail to converge on similar estimates. Recent large-scale bark beetle induced mortality greatly altered LAI, which is now dominated by younger and more metabolically active trees compared to the pre-beetle forest. Tree mortality increases error in optical LAI estimates due to the lack of differentiation between live and dead branches in dense canopy. Our study aims to quantify LAI using three different LAI methods, and then to compare the techniques to each other and to topographic drivers to develop an effective predictive model of LAI. This study focuses on quantifying LAI within a small (~120 ha) beetle infested watershed in Wyoming's Snowy Range Mountains. The first technique estimated LAI using in-situ hemispherical canopy photographs that were then analyzed with Hemisfer software. The second LAI estimation technique was use of the Kaufmann 1982 allometrics from forest inventories conducted throughout the watershed, accounting for stand basal area, species composition, and the extent of bark beetle driven mortality. The final technique used airborne light detection and ranging (LIDAR) first DMS returns, which were used to estimate canopy heights and crown area. LIDAR final returns provided topographical information and were then ground-truthed during forest inventories. Once data was collected, a fractural analysis was conducted comparing the three methods. Species composition was driven by slope position and elevation. Ultimately, the three different techniques provided very different estimations of LAI, but each had its advantages: estimates from hemisphere photos were well correlated with SWE and snow depth measurements, forest inventories provided insight into stand health and composition, and LIDAR were able to quickly and

  16. Comparison of Methods for Estimating Low Flow Characteristics of Streams

    USGS Publications Warehouse

    Tasker, Gary D.

    1987-01-01

    Four methods for estimating the 7-day, 10-year and 7-day, 20-year low flows for streams are compared by the bootstrap method. The bootstrap method is a Monte Carlo technique in which random samples are drawn from an unspecified sampling distribution defined from observed data. The nonparametric nature of the bootstrap makes it suitable for comparing methods based on a flow series for which the true distribution is unknown. Results show that the two methods based on hypothetical distributions (Log-Pearson III and Weibull) had lower mean square errors than did the Box-Cox transformation method or the log-Boughton method, which is based on a fit of plotting positions.
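
    A minimal sketch of the bootstrap comparison idea: resample an observed (here synthetic) series of annual 7-day minima and compare the spread of two simple 7Q10 estimators, a fitted log-normal quantile and an empirical quantile.

      import numpy as np

      rng = np.random.default_rng(9)
      record = rng.lognormal(mean=1.0, sigma=0.5, size=35)      # hypothetical annual 7-day minimum flows

      def q10_lognormal(x):                                     # parametric: fitted log-normal 0.1 quantile
          mu, sd = np.log(x).mean(), np.log(x).std(ddof=1)
          return float(np.exp(mu - 1.2816 * sd))                # z_0.10 is about -1.2816

      def q10_empirical(x):                                     # nonparametric: empirical 0.1 quantile
          return float(np.quantile(x, 0.10))

      boot = np.array([[f(rng.choice(record, record.size, replace=True))
                        for f in (q10_lognormal, q10_empirical)] for _ in range(2000)])
      for name, col in zip(("log-normal", "empirical"), boot.T):
          print(name, "mean:", round(col.mean(), 3), "sd:", round(col.std(ddof=1), 3))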

  17. Uncertain Photometric Redshifts with Deep Learning Methods

    NASA Astrophysics Data System (ADS)

    D'Isanto, A.

    2017-06-01

    Accurate photometric redshift estimation is of fundamental importance in astronomy, as it allows redshift information to be obtained efficiently without spectroscopic analysis. We propose a method for determining accurate multi-modal photo-z probability density functions (PDFs) using Mixture Density Networks (MDN) and Deep Convolutional Networks (DCN). A comparison with a Random Forest (RF) is performed.

  18. Methods for Measuring and Estimating Methane Emission from Ruminants

    PubMed Central

    Storm, Ida M. L. D.; Hellwing, Anne Louise F.; Nielsen, Nicolaj I.; Madsen, Jørgen

    2012-01-01

    Simple Summary Knowledge about methods used in quantification of greenhouse gasses is currently needed due to international commitments to reduce the emissions. In the agricultural sector one important task is to reduce enteric methane emissions from ruminants. Different methods for quantifying these emissions are presently being used and others are under development, all with different conditions for application. For scientists and other persons working with the topic, it is very important to understand the advantages and disadvantages of the different methods in use. This paper gives a brief introduction to existing methods but also a description of newer methods and model-based techniques. Abstract This paper is a brief introduction to the different methods used to quantify the enteric methane emission from ruminants. A thorough knowledge of the advantages and disadvantages of these methods is very important in order to plan experiments, understand and interpret experimental results, and compare them with other studies. The aim of the paper is to describe the principles, advantages and disadvantages of different methods used to quantify the enteric methane emission from ruminants. The best-known methods (chambers/respiration chambers, the SF6 technique, and the in vitro gas production technique), as well as the newer CO2 methods, are described. Model estimations, which are used to calculate national budgets and single-cow enteric emission from intake and diet composition, are also discussed. Other methods under development such as the micrometeorological technique, combined feeder and CH4 analyzer and proxy methods are briefly mentioned. Methods of choice for estimating enteric methane emission depend on aim, equipment, knowledge, time and money available, but interpretation of results obtained with a given method can be improved if knowledge about the disadvantages and advantages is used in the planning of experiments. PMID:26486915

  19. New method to estimate the cycling frontal area.

    PubMed

    Debraux, P; Bertucci, W; Manolova, A V; Rogier, S; Lodini, A

    2009-04-01

    The purpose of this study was to test the validity and reliability of a new method to estimate the projected frontal area of the body during cycling. To illustrate the use of this method in another cycling speciality (i.e. mountain bike), data from the new method (NM) were coupled with a powermeter measurement to determine the projected frontal area and the coefficient of drag in actual conditions. Nine male cyclists had their frontal area determined from digital photographic images in a laboratory while seated on their bicycles in two positions: Upright Position (UP) and Traditional Aerodynamic Position (TAP). For each position, the projected frontal area for the body of the cyclist as well as the cyclist and his bicycle were measured using a new method with computer-aided design software, the method of weighing photographs and the digitizing method. The results showed that no significant difference existed between the new method and the method of weighing photographs in the measurement of the frontal area of the body of cyclists in UP (p=0.43) and TAP (p=0.14), or between the new method and the digitizing method in measurement of the frontal area for the cyclist and his bicycle in UP (p=0.12) and TAP (p=0.31). The coefficients of variation of the new method and the method of weighing photographs were 0.1% and 1.26%, respectively. In conclusion, the new method was valid and reliable in estimating the frontal area compared with the method of weighing photographs and the digitizing method.

  20. Automatic method for estimation of in situ effective contact angle from X-ray micro tomography images of two-phase flow in porous media.

    PubMed

    Scanziani, Alessio; Singh, Kamaljit; Blunt, Martin J; Guadagnini, Alberto

    2017-06-15

    Multiphase flow in porous media is strongly influenced by the wettability of the system, which affects the arrangement of the interfaces of different phases residing in the pores. We present a method for estimating the effective contact angle, which quantifies the wettability and controls the local capillary pressure within the complex pore space of natural rock samples, based on the physical constraint of constant curvature of the interface between two fluids. This algorithm is able to extract a large number of measurements from a single rock core, resulting in a characteristic distribution of effective in situ contact angle for the system, that is modelled as a truncated Gaussian probability density distribution. The method is first validated on synthetic images, where the exact angle is known analytically; then the results obtained from measurements within the pore space of rock samples imaged at a resolution of a few microns are compared to direct manual assessment. Finally the method is applied to X-ray micro computed tomography (micro-CT) scans of two Ketton cores after waterflooding, that display water-wet and mixed-wet behaviour. The resulting distribution of in situ contact angles is characterized in terms of a mixture of truncated Gaussian densities.
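
    A minimal sketch of fitting a Gaussian truncated to [0, 180] degrees to a contact-angle sample by maximum likelihood (synthetic angles, not the Ketton measurements).

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import truncnorm

      lo, hi = 0.0, 180.0
      angles = truncnorm.rvs((lo - 55) / 18, (hi - 55) / 18, loc=55, scale=18,
                             size=500, random_state=1)          # synthetic contact angles (degrees)

      def neg_loglik(p):
          mu, sigma = p
          a, b = (lo - mu) / sigma, (hi - mu) / sigma
          return -np.sum(truncnorm.logpdf(angles, a, b, loc=mu, scale=sigma))

      fit = minimize(neg_loglik, x0=[90.0, 30.0], bounds=[(1.0, 179.0), (1.0, 90.0)])
      print("fitted mean and spread (deg):", np.round(fit.x, 1))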

  1. Investigation of the HD-sEMG probability density function shapes with varying muscle force using data fusion and shape descriptors.

    PubMed

    Al Harrach, Mariam; Boudaoud, Sofiane; Carriou, Vincent; Laforet, Jeremy; Letocart, Adrien J; Grosset, Jean-François; Marin, Frédéric

    2017-08-01

    This work presents an evaluation of the High Density surface Electromyogram (HD-sEMG) Probability Density Function (PDF) shape variation according to contraction level. Using PDF shape descriptors, namely High Order Statistics (HOS) and Shape Distances (SD), we address the absence of a consensus on how sEMG non-Gaussianity evolves with force. This is motivated by the fact that PDF shape information is relevant in physiological assessment of the muscle architecture and function, such as contraction level classification, in complement to classical amplitude parameters. Accordingly, both experimental and simulation studies are presented in this work. For data fusion, the watershed image processing technique was used. This technique allowed us to find the dominant PDF shape variation profiles from the 64 signals. The experimental protocol consisted of three isometric isotonic contractions of 30, 50 and 70% of the Maximum Voluntary Contraction (MVC). This protocol was performed by six subjects and recorded using an 8 × 8 HD-sEMG grid. For the simulation study, the muscle modeling was done using a fast computing cylindrical HD-sEMG generation model. This model was personalized by morphological parameters obtained by sonography. Moreover, a set of model parameter configurations was compared as a focused sensitivity analysis of the PDF shape variation. Further, monopolar, bipolar and Laplacian electrode configurations were investigated in both experimental and simulation studies. Results indicated that sEMG PDF shape variations according to force increase are mainly dependent on the Motor Unit (MU) spatial recruitment strategy, the MU type distribution within the muscle, and the electrode arrangement used. Consequently, these statistics can give us an insight into non measurable parameters and specifications of the studied muscle, primarily the MU type distribution. Copyright © 2017 Elsevier Ltd. All rights reserved.
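
    A minimal sketch of HOS-type shape descriptors, skewness and excess kurtosis, computed for a synthetic super-Gaussian signal standing in for an sEMG amplitude record.

      import numpy as np
      from scipy.stats import skew, kurtosis

      rng = np.random.default_rng(17)
      semg_like = rng.laplace(0.0, 1.0, 20000)    # synthetic, super-Gaussian "sEMG-like" sample

      # a Laplace sample has skewness near 0 and excess kurtosis near 3
      print("skewness:", round(skew(semg_like), 2), " excess kurtosis:", round(kurtosis(semg_like), 2))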

  2. Estimate capital for operational risk using peak over threshold method

    NASA Astrophysics Data System (ADS)

    Saputri, Azizah Anugrahwati; Noviyanti, Lienda; Soleh, Achmad Zanbar

    2015-12-01

    Operational risk is inherent in bank activities. To cover this risk, a bank reserves a fund called capital. A bank often uses the Basic Indicator Approach (BIA), the Standardized Approach (SA), or the Advanced Measurement Approach (AMA) to estimate the capital amount. BIA and SA are less objective than AMA, since BIA and SA use proxy loss data while AMA uses actual losses. In this research, we define the capital as an OpVaR (i.e., the worst loss at a given confidence level), which is estimated by the peak-over-threshold method.
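
    A minimal sketch of the peak-over-threshold capital estimate: fit a generalized Pareto distribution to losses above a threshold and read off a high quantile (OpVaR). The loss data, threshold choice, and confidence level are hypothetical.

      import numpy as np
      from scipy.stats import genpareto, lognorm

      losses = lognorm.rvs(s=1.2, scale=5e4, size=3000, random_state=11)   # synthetic operational losses

      u = np.quantile(losses, 0.95)                      # threshold choice (95th percentile here)
      exc = losses[losses > u] - u
      xi, _, beta = genpareto.fit(exc, floc=0)           # GPD shape and scale of the tail

      n, n_u, q = losses.size, exc.size, 0.999           # 99.9% confidence level
      op_var = u + beta / xi * (((n / n_u) * (1 - q)) ** (-xi) - 1)   # POT quantile formula (xi != 0)
      print("threshold:", round(u), " OpVaR(99.9%):", round(op_var))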

  3. New Statistical Learning Methods for Estimating Optimal Dynamic Treatment Regimes

    PubMed Central

    Zhao, Ying-Qi; Zeng, Donglin; Laber, Eric B.; Kosorok, Michael R.

    2014-01-01

    Dynamic treatment regimes (DTRs) are sequential decision rules for individual patients that can adapt over time to an evolving illness. The goal is to accommodate heterogeneity among patients and find the DTR which will produce the best long term outcome if implemented. We introduce two new statistical learning methods for estimating the optimal DTR, termed backward outcome weighted learning (BOWL), and simultaneous outcome weighted learning (SOWL). These approaches convert individualized treatment selection into an either sequential or simultaneous classification problem, and can thus be applied by modifying existing machine learning techniques. The proposed methods are based on directly maximizing over all DTRs a nonparametric estimator of the expected long-term outcome; this is fundamentally different than regression-based methods, for example Q-learning, which indirectly attempt such maximization and rely heavily on the correctness of postulated regression models. We prove that the resulting rules are consistent, and provide finite sample bounds for the errors using the estimated rules. Simulation results suggest the proposed methods produce superior DTRs compared with Q-learning especially in small samples. We illustrate the methods using data from a clinical trial for smoking cessation. PMID:26236062
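
    A loose single-stage illustration of the outcome-weighted-learning idea behind BOWL/SOWL: treatment choice is cast as a classification problem weighted by outcome over propensity. This is not the authors' backward or simultaneous algorithms, and the data-generating rule below is hypothetical.

      import numpy as np
      from sklearn.svm import SVC

      rng = np.random.default_rng(16)
      n = 500
      X = rng.standard_normal((n, 3))                         # patient covariates
      A = rng.choice([-1, 1], size=n)                         # randomized treatment, propensity 0.5
      optimal = np.sign(X[:, 0] + 1e-12)                      # hypothetical true optimal rule
      reward = 1.0 + (A == optimal) + 0.3 * rng.standard_normal(n)   # larger when treated optimally

      weights = np.clip(reward, 0.0, None) / 0.5              # outcome weights over the known propensity
      rule = SVC(kernel="linear", C=1.0).fit(X, A, sample_weight=weights)
      print("agreement with the true rule:", round(float((rule.predict(X) == optimal).mean()), 2))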

  4. New Statistical Learning Methods for Estimating Optimal Dynamic Treatment Regimes.

    PubMed

    Zhao, Ying-Qi; Zeng, Donglin; Laber, Eric B; Kosorok, Michael R

    Dynamic treatment regimes (DTRs) are sequential decision rules for individual patients that can adapt over time to an evolving illness. The goal is to accommodate heterogeneity among patients and find the DTR which will produce the best long term outcome if implemented. We introduce two new statistical learning methods for estimating the optimal DTR, termed backward outcome weighted learning (BOWL), and simultaneous outcome weighted learning (SOWL). These approaches convert individualized treatment selection into an either sequential or simultaneous classification problem, and can thus be applied by modifying existing machine learning techniques. The proposed methods are based on directly maximizing over all DTRs a nonparametric estimator of the expected long-term outcome; this is fundamentally different than regression-based methods, for example Q-learning, which indirectly attempt such maximization and rely heavily on the correctness of postulated regression models. We prove that the resulting rules are consistent, and provide finite sample bounds for the errors using the estimated rules. Simulation results suggest the proposed methods produce superior DTRs compared with Q-learning especially in small samples. We illustrate the methods using data from a clinical trial for smoking cessation.

  5. Phenology of Net Ecosystem Exchange: A Simple Estimation Method

    NASA Astrophysics Data System (ADS)

    Losleben, M. V.

    2007-12-01

    Carbon sequestration is important to global carbon budget and ecosystem function and dynamics research. Direct measurement of Net Ecosystem Exchange (NEE), a measure of the carbon sequestration of an ecosystem, is instrument, labor, and fiscally intensive; thus there is value in establishing a simple, robust estimation method. Six ecosystem types across the United States, ranging from deciduous and coniferous forests to desert shrub land and grasslands, are compared. Initial results are promising: the proxy method closely matches instrumentally measured NEE for the onset and termination of carbon sequestration in a sub-alpine forest over the study period, 1997-2006. Moreover, the similarity of climatic signatures in all six ecosystems of this study suggests this proxy estimation method may be widely applicable across diverse environmental zones. This estimation method is simply the interpretation of annual accumulated daily precipitation plotted against the annual accumulated daily growing degree days above a zero degree C base. Applicability at sub-seasonal time scales will also be discussed in this presentation.
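
    A minimal sketch of the proxy described above: accumulate daily precipitation and growing degree-days (base 0 degrees C) through the year. The daily weather series here is synthetic, and the onset marker is a crude illustrative choice rather than the presenter's interpretation rule.

      import numpy as np

      rng = np.random.default_rng(12)
      doy = np.arange(1, 366)
      tmean = 5 + 15 * np.sin(2 * np.pi * (doy - 110) / 365) + rng.normal(0, 3, doy.size)  # deg C
      precip = rng.gamma(shape=0.4, scale=5.0, size=doy.size)                              # mm/day

      acc_gdd = np.cumsum(np.clip(tmean, 0, None))      # accumulated growing degree-days, base 0 C
      acc_precip = np.cumsum(precip)                    # accumulated precipitation (mm)

      onset = doy[np.searchsorted(acc_gdd, 0.05 * acc_gdd[-1])]   # crude onset marker (5% of annual GDD)
      print("annual precip (mm):", round(acc_precip[-1]), " annual GDD:", round(acc_gdd[-1]),
            " crude onset DOY:", int(onset))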

  6. An aerial survey method to estimate sea otter abundance

    USGS Publications Warehouse

    Bodkin, J.L.; Udevitz, M.S.; Garner, G.W.; Amstrup, Steven C.; Laake, J.L.; Manly, B. F. J.; McDonald, L.L.; Robertson, Donna G.

    1999-01-01

    Sea otters (Enhydra lutris) occur in shallow coastal habitats and can be highly visible on the sea surface. They generally rest in groups and their detection depends on factors that include sea conditions, viewing platform, observer technique and skill, distance, habitat and group size. While visible on the surface, they are difficult to see while diving and may dive in response to an approaching survey platform. We developed and tested an aerial survey method that uses intensive searches within portions of strip transects to adjust for availability and sightability biases. Correction factors are estimated independently for each survey and observer. In tests of our method using shore-based observers, we estimated detection probabilities of 0.52-0.72 in standard strip-transects and 0.96 in intensive searches. We used the survey method in Prince William Sound, Alaska to estimate a sea otter population size of 9,092 (SE = 1422). The new method represents an improvement over various aspects of previous methods, but additional development and testing will be required prior to its broad application.
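
    A minimal numeric illustration of the adjustment logic: raw strip counts are scaled by a detection probability estimated from intensively searched subunits and by the sampled fraction of the study area. All numbers are hypothetical, and the variance estimation used in the actual survey is omitted.

      strip_count = 412       # otters counted on standard strip transects
      seen_in_strips = 63     # otters detected on strip passes over the intensively searched subunits
      found_intensive = 100   # otters found in those same subunits by intensive search

      p_detect = seen_in_strips / found_intensive      # survey- and observer-specific correction
      sampled_fraction = 0.18                          # strips as a fraction of the study area

      N_hat = strip_count / p_detect / sampled_fraction
      print("detection probability:", round(p_detect, 2), " abundance estimate:", round(N_hat))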

  7. Parameter estimation method for blurred cell images from fluorescence microscope

    NASA Astrophysics Data System (ADS)

    He, Fuyun; Zhang, Zhisheng; Luo, Xiaoshu; Zhao, Shulin

    2016-10-01

    Microscopic cell image analysis is indispensable to cell biology. Images of cells can easily degrade due to optical diffraction or focus shift, which results in a low signal-to-noise ratio (SNR) and poor image quality, affecting the accuracy of cell analysis and identification. For a quantitative analysis of cell images, restoring blurred images to improve the SNR is the first step. A parameter estimation method for defocused microscopic cell images based on the power law properties of the power spectrum of cell images is proposed. The circular Radon transform (CRT) is used to identify the zero-mode of the power spectrum. The parameter of the CRT curve is initially estimated by an improved differential evolution algorithm. Following this, the parameters are optimized through the gradient descent method. Using synthetic experiments, it was confirmed that the proposed method effectively increased the peak SNR (PSNR) of the recovered images with high accuracy. Furthermore, experimental results on actual microscopic cell images verified the superiority of the proposed parameter estimation method over other methods in terms of qualitative visual quality as well as quantitative gradient and PSNR measures.
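
    A minimal sketch of the two-stage estimation idea (global differential evolution followed by local gradient-based refinement) on a generic curve-fitting stand-in, not the circular Radon transform model of the paper.

      import numpy as np
      from scipy.optimize import differential_evolution, minimize

      rng = np.random.default_rng(13)
      x = np.linspace(0.0, 1.0, 200)
      y = 2.0 * np.cos(12.0 * x) + 0.1 * rng.standard_normal(x.size)   # noisy data, true params (2, 12)

      def sse(p):
          return np.sum((p[0] * np.cos(p[1] * x) - y) ** 2)

      coarse = differential_evolution(sse, bounds=[(0.0, 5.0), (1.0, 30.0)], seed=0)   # global search
      fine = minimize(sse, coarse.x, method="L-BFGS-B", bounds=[(0.0, 5.0), (1.0, 30.0)])  # refinement
      print("coarse estimate:", np.round(coarse.x, 2), " refined:", np.round(fine.x, 2))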

  8. Noninvasive method of estimating human newborn regional cerebral blood flow

    SciTech Connect

    Younkin, D.P.; Reivich, M.; Jaggi, J.; Obrist, W.; Delivoria-Papadopoulos, M.

    1982-12-01

    A noninvasive method of estimating regional cerebral blood flow (rCBF) in premature and full-term babies has been developed. Based on a modification of the ¹³³Xe inhalation rCBF technique, this method uses eight extracranial NaI scintillation detectors and an i.v. bolus injection of ¹³³Xe (approximately 0.5 mCi/kg). Arterial xenon concentration was estimated with an external chest detector. Cerebral blood flow was measured in 15 healthy, neurologically normal premature infants. Using Obrist's method of two-compartment analysis, normal values were calculated for flow in both compartments, relative weight and fractional flow in the first compartment (gray matter), initial slope of gray matter blood flow, mean cerebral blood flow, and initial slope index of mean cerebral blood flow. The application of this technique to newborns, its relative advantages, and its potential uses are discussed.

  9. Method to Estimate the Dissolved Air Content in Hydraulic Fluid

    NASA Technical Reports Server (NTRS)

    Hauser, Daniel M.

    2011-01-01

    In order to verify the air content in hydraulic fluid, an instrument was needed to measure the dissolved air content before the fluid was loaded into the system. The instrument also needed to measure the dissolved air content in situ and in real time during the de-aeration process. The current methods used to measure the dissolved air content require the fluid to be drawn from the hydraulic system, and additional offline laboratory processing time is involved. During laboratory processing, there is a potential for contamination to occur, especially when subsaturated fluid is to be analyzed. A new method measures the amount of dissolved air in hydraulic fluid through the use of a dissolved oxygen meter. The device measures the dissolved air content through an in situ, real-time process that requires no additional offline laboratory processing time. The method utilizes an instrument that measures the partial pressure of oxygen in the hydraulic fluid. By using a standardized calculation procedure that relates the oxygen partial pressure to the volume of dissolved air in solution, the dissolved air content is estimated. The technique employs luminescent quenching technology to determine the partial pressure of oxygen in the hydraulic fluid. An estimated Henry's law coefficient for oxygen and nitrogen in hydraulic fluid is calculated using a standard method to estimate the solubility of gases in lubricants. The amount of dissolved oxygen in the hydraulic fluid is estimated using the Henry's solubility coefficient and the measured partial pressure of oxygen in solution. The amount of dissolved nitrogen that is in solution is estimated by assuming that the ratio of dissolved nitrogen to dissolved oxygen is equal to the ratio of the gas solubility of nitrogen to oxygen at atmospheric pressure and temperature. The technique was performed at atmospheric pressure and room temperature. The technique could be theoretically carried out at higher pressures and elevated
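
    A worked example of the estimation logic, with a hypothetical sensor reading, solubility coefficient, and N2-to-O2 ratio standing in for the values the method derives from lubricant solubility correlations.

      p_O2 = 0.15          # bar, O2 partial pressure reported by the luminescent-quenching sensor (hypothetical)
      H_O2 = 0.30          # assumed solubility coefficient, (vol gas / vol fluid) per bar of O2
      n2_per_o2 = 1.8      # assumed ratio of dissolved N2 to dissolved O2 volumes in air-saturated fluid

      v_O2 = H_O2 * p_O2                   # dissolved O2, volume of gas per volume of fluid
      v_air = v_O2 * (1.0 + n2_per_o2)     # total dissolved air, adding the inferred N2
      print("estimated dissolved air content:", round(v_air, 3), "vol gas per vol fluid")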

  10. NEW COMPLETENESS METHODS FOR ESTIMATING EXOPLANET DISCOVERIES BY DIRECT DETECTION

    SciTech Connect

    Brown, Robert A.; Soummer, Remi

    2010-05-20

    We report on new methods for evaluating realistic observing programs that search stars for planets by direct imaging, where observations are selected from an optimized star list and stars can be observed multiple times. We show how these methods bring critical insight into the design of the mission and its instruments. These methods provide an estimate of the outcome of the observing program: the probability distribution of discoveries (detection and/or characterization) and an estimate of the occurrence rate of planets (η). We show that these parameters can be accurately estimated from a single mission simulation, without the need for a complete Monte Carlo mission simulation, and we prove the accuracy of this new approach. Our methods provide tools to define a mission for a particular science goal; for example, a mission can be defined by the expected number of discoveries and its confidence level. We detail how an optimized star list can be built and how successive observations can be selected. Our approach also provides other critical mission attributes, such as the number of stars expected to be searched and the probability of zero discoveries. Because these attributes depend strongly on the mission scale (telescope diameter, observing capabilities and constraints, mission lifetime, etc.), our methods are directly applicable to the design of such future missions and provide guidance to the mission and instrument design based on scientific performance. We illustrate our new methods with practical calculations and exploratory design reference missions for the James Webb Space Telescope (JWST) operating with a distant starshade to reduce scattered and diffracted starlight on the focal plane. We estimate that five habitable Earth-mass planets would be discovered and characterized with spectroscopy, with a probability of zero discoveries of 0.004, assuming a small fraction of JWST observing time (7%), η = 0.3, and 70 observing visits, limited by starshade

  11. Dental age estimation using Willems method: A digital orthopantomographic study

    PubMed Central

    Mohammed, Rezwana Begum; Krishnamraju, P. V.; Prasanth, P. S.; Sanghvi, Praveen; Lata Reddy, M. Asha; Jyotsna, S.

    2014-01-01

    In recent years, age estimation has become increasingly important in living people for a variety of reasons, including identifying criminal and legal responsibility, and for many other social events such as a birth certificate, marriage, beginning a job, joining the army, and retirement. Objectives: The aim of this study was to assess the developmental stages of the seven left mandibular teeth for estimation of dental age (DA) in different age groups and to evaluate the possible correlation between DA and chronological age (CA) in a South Indian population using Willems method. Materials and Methods: Digital orthopantomograms of 332 subjects (166 males, 166 females) who met the study criteria were obtained. Assessment of mandibular teeth (from central incisor to the second molar on the left quadrant) development was undertaken and DA was assessed using Willems method. Results and Discussion: The present study showed a significant correlation between DA and CA in both males (r = 0.71) and females (r = 0.88). The overall mean difference between the estimated DA and CA for males was 0.69 ± 2.14 years (P < 0.001) while for females, it was 0.08 ± 1.34 years (P > 0.05). Willems method underestimated the mean age of males by 0.69 years and females by 0.08 years and showed that females mature earlier than males in the selected population. The mean difference between DA and CA according to Willems method was 0.39 years and is statistically significant (P < 0.05). Conclusion: This study showed a significant relation between DA and CA. Thus, digital radiographic assessment of mandibular teeth development can be used to generate mean DA using Willems method and also the estimated age range for an individual of unknown CA. PMID:25191076

  12. Smeared star spot location estimation using directional integral method.

    PubMed

    Hou, Wang; Liu, Haibo; Lei, Zhihui; Yu, Qifeng; Liu, Xiaochun; Dong, Jing

    2014-04-01

    Image smearing significantly affects the accuracy of attitude determination of most star sensors. To ensure the accuracy and reliability of a star sensor under image smearing conditions, a novel directional integral method is presented for high-precision star spot location estimation to improve the accuracy of attitude determination. Simulations based on the orbit data of the challenging mini-satellite payload satellite were performed. Simulation results demonstrated that the proposed method exhibits high performance and good robustness, which indicates that the method can be applied effectively.

  13. Methods of Mmax Estimation East of the Rocky Mountains

    USGS Publications Warehouse

    Wheeler, Russell L.

    2009-01-01

    Several methods have been used to estimate the magnitude of the largest possible earthquake (Mmax) in parts of the Central and Eastern United States and adjacent Canada (CEUSAC). Each method has pros and cons. The largest observed earthquake in a specified area provides an unarguable lower bound on Mmax in the area. Beyond that, all methods are undermined by the enigmatic nature of geologic controls on the propagation of large CEUSAC ruptures. Short historical-seismicity records decrease the defensibility of several methods that are based on characteristics of small areas in most of CEUSAC. Methods that use global tectonic analogs of CEUSAC encounter uncertainties in understanding what 'analog' means. Five of the methods produce results that are inconsistent with paleoseismic findings from CEUSAC seismic zones or individual active faults.

  14. Comparison of different ore reserve estimation methods using conditional simulation

    SciTech Connect

    Baafi, E.Y.; Kim, Y.C.

    1983-12-01

    The authors discuss the results of a series of comparative studies where the polygonal, inverse distance squared (IDS), and kriging methods were compared for their local estimation accuracy. The study was performed on simulated coal deposits and the evaluation parameter chosen was the coal thickness. For this study many conditionally simulated deposits having different degrees of seam thickness continuity were first developed. Each simulated deposit covered an area of 9km x 9km that was subdivided into 100 mine-planning blocks of 900m x 900m. Thickness estimation was made for these mine-planning blocks using surrounding DDH assays. In all cases tested, from perfect continuity to a complete lack of continuity in coal-seam thickness, kriging gave the best estimates, followed by the IDS and polygonal methods.
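
    A minimal sketch of the inverse-distance-squared (IDS) block estimate, one of the three methods compared above, on hypothetical drill-hole thickness data.

      import numpy as np

      rng = np.random.default_rng(14)
      holes = rng.uniform(0, 900, size=(25, 2))                          # drill-hole locations (m)
      thickness = 2.0 + 0.002 * holes[:, 0] + rng.normal(0, 0.2, 25)     # coal thickness at holes (m)

      def ids_estimate(target, pts, vals, power=2.0, eps=1e-9):
          d = np.linalg.norm(pts - target, axis=1)
          w = 1.0 / (d ** power + eps)                                   # inverse-distance-squared weights
          return float(np.sum(w * vals) / np.sum(w))

      block_center = np.array([450.0, 450.0])
      print("IDS thickness estimate at block center (m):",
            round(ids_estimate(block_center, holes, thickness), 2))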

  15. A generic computerized method for estimate of familial risks.

    PubMed Central

    Colombet, Isabelle; Xu, Yigang; Jaulent, Marie-Christine; Desages, Daniel; Degoulet, Patrice; Chatellier, Gilles

    2002-01-01

    Most guidelines developed for cancer screening and for cardiovascular risk management use rules to estimate familial risk. These rules are complex, difficult to memorize, and require collecting a complete pedigree. This paper describes a generic computerized method to estimate familial risks and its implementation in an internet-based application. The program is based on 3 generic models: a model of the family, a model of familial risk, and a display model for the pedigree. The family model represents each member of the family and supports construction and display of a family tree. The familial risk model is generic and allows easy updating of the program with new diseases or new rules. It was possible to implement guidelines dealing with breast and colorectal cancer and cardiovascular disease prevention. A first evaluation with general practitioners showed that the program was usable. Its impact on the quality of familial risk estimates should be documented further. PMID:12463810

  16. Methods to estimate irrigated reference crop evapotranspiration - a review.

    PubMed

    Kumar, R; Jat, M K; Shankar, V

    2012-01-01

    Efficient water management of crops requires accurate irrigation scheduling which, in turn, requires accurate measurement of crop water requirement. Irrigation is applied to replenish depleted moisture for optimum plant growth. Reference evapotranspiration plays an important role in the determination of water requirements for crops and irrigation scheduling. Various models/approaches, ranging from empirical to physically based distributed models, are available for the estimation of reference evapotranspiration. Mathematical models are useful tools to estimate the evapotranspiration and water requirement of crops, which is essential information for designing or choosing the best water management practices. In this paper, the most commonly used models/approaches, which are suitable for the estimation of daily water requirements for agricultural crops grown in different agro-climatic regions, are reviewed. Further, an effort has been made to compare the accuracy of various widely used methods under different climatic conditions.

  17. Experimental evaluation of chromatic dispersion estimation method using polynomial fitting

    NASA Astrophysics Data System (ADS)

    Jiang, Xin; Wang, Junyi; Pan, Zhongqi

    2014-11-01

    We experimentally validate a non-data-aided, modulation-format-independent chromatic dispersion (CD) estimation method based on a polynomial fitting algorithm in a single-carrier coherent optical system carrying a 40 Gb/s polarization-division-multiplexed quadrature-phase-shift-keying (PDM-QPSK) signal. The non-data-aided CD estimation for arbitrary modulation formats is achieved by measuring the differential phase between frequencies f±fs/2 (fs is the symbol rate) in digital coherent receivers. The estimation range for a 40 Gb/s PDM-QPSK signal can be up to 20,000 ps/nm with a measurement accuracy of ±200 ps/nm. The maximum CD measurement is 25,000 ps/nm with a measurement error of 2%.

  18. pyGMMis: Mixtures-of-Gaussians density estimation method

    NASA Astrophysics Data System (ADS)

    Melchior, Peter; Goulding, Andy D.

    2016-11-01

    pyGMMis is a mixtures-of-Gaussians density estimation method that accounts for arbitrary incompleteness in the process that creates the samples, as long as the incompleteness is known over the entire feature space and does not depend on the sample density (missing at random). pyGMMis uses the Expectation-Maximization procedure and generates its best guess of the unobserved samples on the fly. It can also incorporate a uniform "background" distribution as well as independent multivariate normal measurement errors for each of the observed samples, and then recovers an estimate of the error-free distribution from which both observed and unobserved samples are drawn. The code automatically segments the data into localized neighborhoods, and is capable of performing density estimation with millions of samples and thousands of model components on machines with sufficient memory.

  19. A tool for the estimation of the distribution of landslide area in R

    NASA Astrophysics Data System (ADS)

    Rossi, M.; Cardinali, M.; Fiorucci, F.; Marchesini, I.; Mondini, A. C.; Santangelo, M.; Ghosh, S.; Riguer, D. E. L.; Lahousse, T.; Chang, K. T.; Guzzetti, F.

    2012-04-01

    We have developed a tool in R (the free software environment for statistical computing, http://www.r-project.org/) to estimate the probability density and the frequency density of landslide area. The tool implements parametric and non-parametric approaches to the estimation of the probability density and the frequency density of landslide area, including: (i) Histogram Density Estimation (HDE), (ii) Kernel Density Estimation (KDE), and (iii) Maximum Likelihood Estimation (MLE). The tool is available as a standard Open Geospatial Consortium (OGC) Web Processing Service (WPS), and is accessible through the web using different GIS software clients. We tested the tool to compare Double Pareto and Inverse Gamma models for the probability density of landslide area in different geological, morphological and climatological settings, and to compare landslides shown in inventory maps prepared using different mapping techniques, including (i) field mapping, (ii) visual interpretation of monoscopic and stereoscopic aerial photographs, (iii) visual interpretation of monoscopic and stereoscopic VHR satellite images and (iv) semi-automatic detection and mapping from VHR satellite images. Results show that both models are applicable in different geomorphological settings. In most cases the two models provided very similar results. Non-parametric estimation methods (i.e., HDE and KDE) provided reasonable results for all the tested landslide datasets. For some of the datasets, MLE failed to provide a result because of convergence problems. The two tested models (Double Pareto and Inverse Gamma) gave very similar results for large and very large datasets (> 150 samples). Differences in the modeling results were observed for small datasets affected by systematic biases. A distinct rollover was observed in all analyzed landslide datasets, except for a few datasets obtained from landslide inventories prepared through field mapping or by semi-automatic mapping from VHR satellite imagery
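
    As an illustration of the non-parametric route mentioned above, the following minimal Python sketch (not the R/WPS tool itself) applies Kernel Density Estimation to a set of landslide areas; the synthetic lognormal areas and the log10 change of variables are assumptions made for the example.

```python
# Minimal sketch (not the R/WPS tool itself): kernel density estimation of the
# probability density of landslide area, one of the non-parametric approaches
# (KDE) listed above. The synthetic lognormal areas are illustrative only.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
areas_m2 = rng.lognormal(mean=7.0, sigma=1.2, size=500)   # hypothetical landslide areas [m^2]

# Areas span orders of magnitude, so estimate the density of log10(area)
# and change variables back to a density per unit area.
log_area = np.log10(areas_m2)
kde = gaussian_kde(log_area)                              # Gaussian KDE, Scott's-rule bandwidth

grid = np.linspace(log_area.min(), log_area.max(), 200)
dens_log = kde(grid)                                      # density in log10(area) space
dens_area = dens_log / (10.0**grid * np.log(10.0))        # density per m^2 of landslide area

print("modal (rollover) area: ~%.0f m^2" % 10.0**grid[np.argmax(dens_area)])
```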

  20. The Lyapunov dimension and its estimation via the Leonov method

    NASA Astrophysics Data System (ADS)

    Kuznetsov, N. V.

    2016-06-01

    Along with the widely used numerical methods for estimating and computing the Lyapunov dimension, there is an effective analytical approach proposed by G.A. Leonov in 1991. The Leonov method is based on the direct Lyapunov method with special Lyapunov-like functions. The advantage of the method is that it allows one to estimate the Lyapunov dimension of invariant sets without localization of the set in the phase space and, in many cases, to obtain an exact Lyapunov dimension formula. In this work the invariance of the Lyapunov dimension with respect to diffeomorphisms and its connection with the Leonov method are discussed. For discrete-time dynamical systems, an analog of the Leonov method is suggested. The connection between the Leonov method and the key related works is presented in a simple but rigorous way: Kaplan and Yorke (the concept of the Lyapunov dimension, 1979), Douady and Oesterlé (upper bounds of the Hausdorff dimension via the Lyapunov dimension of maps, 1980), Constantin, Eden, Foiaş, and Temam (upper bounds of the Hausdorff dimension via the Lyapunov exponents and Lyapunov dimension of dynamical systems, 1985-90), and the numerical calculation of the Lyapunov exponents and dimension.
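
    The Kaplan-Yorke construction referenced above turns an ordered spectrum of Lyapunov exponents into a Lyapunov dimension; the following minimal sketch implements that formula, with example exponent values that are illustrative only (roughly those quoted for the classical Lorenz system), not results from the paper.

```python
# Minimal sketch of the Kaplan-Yorke (Lyapunov) dimension computed from an
# ordered Lyapunov exponent spectrum: D_KY = j + (sum_{i<=j} lambda_i)/|lambda_{j+1}|,
# with j the largest index whose cumulative sum is non-negative. The example
# exponents are illustrative (roughly those quoted for the classical Lorenz system).
import numpy as np

def kaplan_yorke_dimension(exponents):
    lam = np.sort(np.asarray(exponents, dtype=float))[::-1]   # descending order
    csum = np.cumsum(lam)
    nonneg = np.where(csum >= 0)[0]
    if nonneg.size == 0:
        return 0.0                          # every partial sum negative
    j = nonneg[-1]                          # largest j with cumulative sum >= 0
    if j == lam.size - 1:
        return float(lam.size)              # sum never turns negative
    return (j + 1) + csum[j] / abs(lam[j + 1])

print(kaplan_yorke_dimension([0.906, 0.0, -14.572]))   # ~2.06 for these example values
```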

  1. Comparison of Accelerometry Methods for Estimating Physical Activity.

    PubMed

    Kerr, Jacqueline; Marinac, Catherine R; Ellis, Katherine; Godbole, Suneeta; Hipp, Aaron; Glanz, Karen; Mitchell, Jonathan; Laden, Francine; James, Peter; Berrigan, David

    2017-03-01

    This study aimed to compare physical activity estimates across different accelerometer wear locations, wear time protocols, and data processing techniques. A convenience sample of middle-aged to older women wore a GT3X+ accelerometer at the wrist and hip for 7 d. Physical activity estimates were calculated using three data processing techniques: single-axis cut points, raw vector magnitude thresholds, and machine learning algorithms applied to the raw data from the three axes. Daily estimates were compared for the 321 women using generalized estimating equations. A total of 1420 d were analyzed. Compliance rates for the hip versus wrist location only varied by 2.7%. All differences between techniques, wear locations, and wear time protocols were statistically significant (P < 0.05). Mean minutes per day in physical activity varied from 22 to 67 depending on location and method. On the hip, the 1952-count cut point found at least 150 min of physical activity per week in 22% of participants, the raw vector magnitude threshold found 32%, and the machine-learned algorithm found 74% of participants with 150 min of walking/running per week. The wrist algorithms found 59% and 60% of participants with 150 min of physical activity per week using the raw vector magnitude and machine-learned techniques, respectively. When the wrist device was worn overnight, up to 4% more participants met guidelines. Estimates varied by 52% across techniques and by as much as 41% across wear locations. Findings suggest that researchers should be cautious when comparing physical activity estimates from different studies. Efforts to standardize accelerometry-based estimates of physical activity are needed. A first step might be to report on multiple procedures until a consensus is achieved.
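
    As a concrete illustration of the single-axis cut-point technique compared above, the sketch below counts the minutes at or above the 1952-count hip threshold mentioned in the abstract and checks the 150 min per week criterion; the counts array and epoch handling are hypothetical.

```python
# Minimal sketch of the single-axis cut-point technique: minutes at or above
# 1952 counts per minute (the hip cut point cited above) are counted as
# moderate-to-vigorous physical activity. The counts array is hypothetical.
import numpy as np

CUT_POINT = 1952                                             # counts per minute, hip, single axis

rng = np.random.default_rng(1)
counts_per_min = rng.integers(0, 6000, size=7 * 24 * 60)     # one hypothetical week of 1-min epochs

mvpa_minutes = int(np.sum(counts_per_min >= CUT_POINT))
meets_guideline = mvpa_minutes >= 150                        # 150 min/week guideline
print(mvpa_minutes, meets_guideline)
```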

  2. Networked Estimation with an Area-Triggered Transmission Method

    PubMed Central

    Nguyen, Vinh Hao; Suh, Young Soo

    2008-01-01

    This paper is concerned with the networked estimation problem in which sensor data are transmitted over the network. In the event-driven sampling scheme known as level-crossing or send-on-delta, sensor data are transmitted to the estimator node if the difference between the current sensor value and the last transmitted one is greater than a given threshold. Event-driven sampling generally requires fewer transmissions than time-driven sampling. However, the transmission rate of the send-on-delta method becomes large when the sensor noise is large, since the noise increases the variation of the sensor data. Motivated by this issue, we propose another event-driven sampling method, called area-triggered, in which sensor data are sent only when the integral of the differences between the current sensor value and the last transmitted one is greater than a given threshold. Through theoretical analysis and simulation results, we show that in certain cases the proposed method not only reduces the data transmission rate but also improves estimation performance in comparison with the conventional event-driven method. PMID:27879742
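
    A minimal sketch of the two triggering rules being compared: send-on-delta transmits when the deviation from the last transmitted value exceeds a threshold, whereas the area-triggered rule transmits when the time integral of that deviation exceeds a threshold. The signal, noise level, and thresholds below are illustrative assumptions.

```python
# Minimal sketch of the two event-driven transmission rules: send-on-delta
# transmits when |x(t) - x_last| exceeds a threshold, while the area-triggered
# rule transmits when the accumulated integral of that difference exceeds a
# threshold. Signal, noise level, and thresholds are illustrative only.
import numpy as np

def send_on_delta(x, delta):
    sent, last = [0], x[0]
    for k in range(1, len(x)):
        if abs(x[k] - last) > delta:
            sent.append(k)
            last = x[k]
    return sent

def area_triggered(x, dt, area_threshold):
    sent, last, area = [0], x[0], 0.0
    for k in range(1, len(x)):
        area += abs(x[k] - last) * dt           # integral of the deviation since last send
        if area > area_threshold:
            sent.append(k)
            last, area = x[k], 0.0
    return sent

rng = np.random.default_rng(0)
t = np.arange(0.0, 10.0, 0.01)
x = np.sin(t) + 0.05 * rng.standard_normal(t.size)   # noisy sensor signal

print(len(send_on_delta(x, delta=0.1)),
      len(area_triggered(x, dt=0.01, area_threshold=0.05)))
```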

  3. Estimating the extreme low-temperature event using nonparametric methods

    NASA Astrophysics Data System (ADS)

    D'Silva, Anisha

    This thesis presents a new method of estimating the one-in-N low temperature threshold using a non-parametric statistical method, kernel density estimation, applied to daily average wind-adjusted temperatures. We apply our One-in-N Algorithm to local gas distribution companies (LDCs), as they have to forecast the daily natural gas needs of their consumers. In winter, demand for natural gas is high. Extreme low temperature events are not directly related to an LDC's gas demand forecasting, but knowledge of extreme low temperatures is important to ensure that an LDC has enough capacity to meet customer demands when extreme low temperatures are experienced. We present a detailed explanation of our One-in-N Algorithm and compare it to methods using the generalized extreme value distribution, the normal distribution, and the variance-weighted composite distribution. We show that our One-in-N Algorithm estimates the one-in-N low temperature threshold more accurately than these alternatives according to the root mean square error (RMSE) measure at a 5% level of significance. The One-in-N Algorithm is tested by counting the number of times the daily average wind-adjusted temperature is less than or equal to the one-in-N low temperature threshold.
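
    The following is a minimal sketch of the kernel-density idea behind a one-in-N low-temperature threshold, not the thesis's exact One-in-N Algorithm: fit a Gaussian KDE to daily average wind-adjusted winter temperatures and invert its numerical CDF at the probability corresponding to one day in N winters. The synthetic data, N = 20, and the 90-day winter length are assumptions.

```python
# Minimal sketch of the kernel-density idea behind a one-in-N low-temperature
# threshold (not the thesis's exact One-in-N Algorithm): fit a Gaussian KDE to
# daily average wind-adjusted winter temperatures and invert its numerical CDF
# at the probability of one day in N winters. Data, N = 20, and the 90-day
# winter length are assumptions for illustration.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
winter_temps_c = rng.normal(loc=-5.0, scale=6.0, size=30 * 90)   # 30 winters x 90 days

N = 20                                        # one-in-20-year event
p = 1.0 / (N * 90)                            # target probability for a single winter day

kde = gaussian_kde(winter_temps_c)
grid = np.linspace(winter_temps_c.min() - 10.0, winter_temps_c.max(), 2000)
cdf = np.cumsum(kde(grid)) * (grid[1] - grid[0])   # numerical CDF of the fitted density

threshold = grid[np.searchsorted(cdf, p)]
print("one-in-%d low temperature threshold: %.1f C" % (N, threshold))
```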

  4. Vegetation index methods for estimating evapotranspiration by remote sensing

    USGS Publications Warehouse

    Glenn, Edward P.; Nagler, Pamela L.; Huete, Alfredo R.

    2010-01-01

    Evapotranspiration (ET) is the largest term after precipitation in terrestrial water budgets. Accurate estimates of ET are needed for numerous agricultural and natural resource management tasks and to project changes in hydrological cycles due to potential climate change. We explore recent methods that combine vegetation indices (VI) from satellites with ground measurements of actual ET (ETa) and meteorological data to project ETa over a wide range of biome types and scales of measurement, from local to global estimates. The majority of these use time-series imagery from the Moderate Resolution Imaging Spectrometer on the Terra satellite to project ET over seasons and years. The review explores the theoretical basis for the methods, the types of ancillary data needed, and their accuracy and limitations. Coefficients of determination between modeled ETa and measured ETa are in the range of 0.45–0.95, and root mean square errors are in the range of 10–30% of mean ETa values across biomes, similar to methods that use thermal infrared bands to estimate ETa and within the range of accuracy of the ground measurements by which they are calibrated or validated. The advent of frequent-return satellites such as Terra and planned replacement platforms, and the increasing number of moisture and carbon flux tower sites over the globe, have made these methods feasible. Examples of operational algorithms for ET in agricultural and natural ecosystems are presented. The goal of the review is to enable potential end-users from different disciplines to adapt these methods to new applications that require spatially-distributed ET estimates.

  5. Advances in Time Estimation Methods for Molecular Data.

    PubMed

    Kumar, Sudhir; Hedges, S Blair

    2016-04-01

    Molecular dating has become central to placing a temporal dimension on the tree of life. Methods for estimating divergence times have been developed for over 50 years, beginning with the proposal of the molecular clock in 1962. We categorize the chronological development of these methods into four generations based on the timing of their origin. In the first generation approaches (1960s-1980s), a strict molecular clock was assumed to date divergences. In the second generation approaches (1990s), the equality of evolutionary rates between species was first tested and then a strict molecular clock applied to estimate divergence times. The third generation approaches (since ∼2000) account for differences in evolutionary rates across the tree by using a statistical model, obviating the need to assume a clock or to test the equality of evolutionary rates among species. Bayesian methods in the third generation require a specific or uniform prior on the speciation process and enable the inclusion of uncertainty in clock calibrations. The fourth generation approaches (since 2012) allow rates to vary from branch to branch, but do not need prior selection of a statistical model to describe the rate variation or the specification of a speciation model. With high accuracy, comparable to Bayesian approaches, and speeds that are orders of magnitude faster, fourth generation methods are able to produce reliable timetrees of thousands of species using genome-scale data. We found that early time estimates from second generation studies are similar to those of third and fourth generation studies, indicating that methodological advances have not fundamentally altered the timetree of life, but rather have facilitated time estimation by enabling the inclusion of more species. Nonetheless, we feel an urgent need for testing the accuracy and precision of third and fourth generation methods, including their robustness to misspecification of priors in the analysis of large phylogenies and data

  6. Dental age estimation using Willems method: A digital orthopantomographic study.

    PubMed

    Mohammed, Rezwana Begum; Krishnamraju, P V; Prasanth, P S; Sanghvi, Praveen; Lata Reddy, M Asha; Jyotsna, S

    2014-07-01

    In recent years, age estimation has become increasingly important in living people for a variety of reasons, including identifying criminal and legal responsibility, and for many other social events such as a birth certificate, marriage, beginning a job, joining the army, and retirement. The aim of this study was to assess the developmental stages of the left seven mandibular teeth for estimation of dental age (DA) in different age groups and to evaluate the possible correlation between DA and chronological age (CA) in a South Indian population using the Willems method. Digital orthopantomograms of 332 subjects (166 males, 166 females) who fit the study criteria were obtained. Assessment of mandibular teeth (from the central incisor to the second molar on the left quadrant) development was undertaken and DA was assessed using the Willems method. The present study showed a significant correlation between DA and CA in both males (r = 0.71) and females (r = 0.88). The overall mean difference between the estimated DA and CA for males was 0.69 ± 2.14 years (P < 0.001) while for females, it was 0.08 ± 1.34 years (P > 0.05). The Willems method underestimated the mean age of males by 0.69 years and of females by 0.08 years and showed that females mature earlier than males in the selected population. The mean difference between DA and CA according to the Willems method was 0.39 years and is statistically significant (P < 0.05). This study showed a significant relation between DA and CA. Thus, digital radiographic assessment of mandibular teeth development can be used to generate a mean DA using the Willems method and also the estimated age range for an individual of unknown CA.

  7. Vegetation Index Methods for Estimating Evapotranspiration by Remote Sensing

    NASA Astrophysics Data System (ADS)

    Glenn, Edward P.; Nagler, Pamela L.; Huete, Alfredo R.

    2010-12-01

    Evapotranspiration (ET) is the largest term after precipitation in terrestrial water budgets. Accurate estimates of ET are needed for numerous agricultural and natural resource management tasks and to project changes in hydrological cycles due to potential climate change. We explore recent methods that combine vegetation indices (VI) from satellites with ground measurements of actual ET (ETa) and meteorological data to project ETa over a wide range of biome types and scales of measurement, from local to global estimates. The majority of these use time-series imagery from the Moderate Resolution Imaging Spectrometer on the Terra satellite to project ET over seasons and years. The review explores the theoretical basis for the methods, the types of ancillary data needed, and their accuracy and limitations. Coefficients of determination between modeled ETa and measured ETa are in the range of 0.45-0.95, and root mean square errors are in the range of 10-30% of mean ETa values across biomes, similar to methods that use thermal infrared bands to estimate ETa and within the range of accuracy of the ground measurements by which they are calibrated or validated. The advent of frequent-return satellites such as Terra and planned replacement platforms, and the increasing number of moisture and carbon flux tower sites over the globe, have made these methods feasible. Examples of operational algorithms for ET in agricultural and natural ecosystems are presented. The goal of the review is to enable potential end-users from different disciplines to adapt these methods to new applications that require spatially-distributed ET estimates.

  8. A Subspace Method for Dynamical Estimation of Evoked Potentials

    PubMed Central

    Georgiadis, Stefanos D.; Ranta-aho, Perttu O.; Tarvainen, Mika P.; Karjalainen, Pasi A.

    2007-01-01

    It is a challenge in evoked potential (EP) analysis to incorporate prior physiological knowledge for estimation. In this paper, we address the problem of single-channel trial-to-trial EP characteristics estimation. Prior information about phase-locked properties of the EPs is assessed by means of the estimated signal subspace and eigenvalue decomposition. Then, for situations in which dynamic fluctuations from stimulus to stimulus can be expected, prior information can be exploited by means of state-space modeling and recursive Bayesian mean square estimation methods (Kalman filtering and smoothing). We demonstrate that a few dominant eigenvectors of the data correlation matrix are able to model trend-like changes of some components of the EPs, and that the Kalman smoother algorithm is to be preferred in terms of better tracking capabilities and mean square error reduction. We also demonstrate the effect of strong artifacts, particularly eye blinks, on the quality of the signal subspace and EP estimates by means of independent component analysis applied as a preprocessing step on the multichannel measurements. PMID:18288257

  9. A Statistical Method for Estimating Luminosity Functions Using Truncated Data

    NASA Astrophysics Data System (ADS)

    Schafer, Chad M.

    2007-06-01

    The observational limitations of astronomical surveys lead to significant statistical inference challenges. One such challenge is the estimation of luminosity functions given redshift (z) and absolute magnitude (M) measurements from an irregularly truncated sample of objects. This is a bivariate density estimation problem; we develop here a statistically rigorous method which (1) does not assume a strict parametric form for the bivariate density; (2) does not assume independence between redshift and absolute magnitude (and hence allows evolution of the luminosity function with redshift); (3) does not require dividing the data into arbitrary bins; and (4) naturally incorporates a varying selection function. We accomplish this by decomposing the bivariate density φ(z,M) via log φ(z,M) = f(z) + g(M) + h(z,M,θ), where f and g are estimated nonparametrically and h takes an assumed parametric form. There is a simple way of estimating the integrated mean squared error of the estimator; smoothing parameters are selected to minimize this quantity. Results are presented from the analysis of a sample of quasars.

  10. Impedance-estimation methods, modeling methods, articles of manufacture, impedance-modeling devices, and estimated-impedance monitoring systems

    SciTech Connect

    Richardson, John G

    2009-11-17

    An impedance estimation method includes measuring three or more impedances of an object having a periphery using three or more probes coupled to the periphery. The three or more impedance measurements are made at a first frequency. Three or more additional impedance measurements of the object are made using the three or more probes. The three or more additional impedance measurements are made at a second frequency different from the first frequency. An impedance of the object at a point within the periphery is estimated based on the impedance measurements and the additional impedance measurements.

  11. Estimating surface acoustic impedance with the inverse method.

    PubMed

    Piechowicz, Janusz

    2011-01-01

    Sound field parameters are predicted with numerical methods in sound control systems, in the acoustic design of buildings, and in sound field simulations. Those methods define the acoustic properties of surfaces, such as sound absorption coefficients or acoustic impedance, to determine boundary conditions. Several in situ measurement techniques have been developed; one of them uses two microphones to measure direct and reflected sound over a planar test surface. Another approach is used in the inverse boundary elements method, in which estimating the acoustic impedance of a surface is expressed as an inverse boundary problem. The boundary values can be found from multipoint sound pressure measurements in the interior of a room. This method can be applied to arbitrarily shaped surfaces. This investigation is part of a research programme on using inverse methods in industrial room acoustics.

  12. Estimating functions and the generalized method of moments

    PubMed Central

    Jesus, Joao; Chandler, Richard E.

    2011-01-01

    Estimating functions provide a very general framework for statistical inference, and are particularly useful when one is either unable or unwilling to specify a likelihood function. This paper aims to provide an accessible review of estimating function theory that has potential for application to the analysis and modelling of a wide range of complex systems. Assumptions are given in terms that can be checked relatively easily in practice, and some of the more technical derivations are relegated to an online supplement for clarity of exposition. The special case of the generalized method of moments is considered in some detail. The main points are illustrated by considering the problem of inference for a class of stochastic rainfall models based on point processes, with simulations used to demonstrate the performance of the methods. PMID:23226587

  13. Simple Method for Soil Moisture Estimation from Sentinel-1 Data

    NASA Astrophysics Data System (ADS)

    Gilewski, Paweł Grzegorz; Kedzior, Mateusz Andrzej; Zawadzki, Jaroslaw

    2016-08-01

    In this paper, the authors calculated high-resolution volumetric soil moisture (SM) by means of Sentinel-1 data for the Kampinos National Park in Poland and verified the obtained results. To do so, linear regression coefficients (LRC) between in-situ SM measurements and Sentinel-1 radar backscatter values were calculated. Next, the LRC were applied to obtain SM estimates from Sentinel-1 data. Sentinel-1 SM was verified against in-situ measurements and low-resolution SMOS SM estimates using Pearson's linear correlation coefficient. The simple SM retrieval method from radar data used in this study gives better results for meadows and when Sentinel-1 data in VH polarisation are used. Further research should be conducted to prove the usefulness of the proposed method.

  14. Adaptive error covariances estimation methods for ensemble Kalman filters

    SciTech Connect

    Zhen, Yicun; Harlim, John

    2015-08-01

    This paper presents a computationally fast algorithm for estimating both the system and observation noise covariances of nonlinear dynamics, which can be used in an ensemble Kalman filtering framework. The new method is a modification of Belanger's recursive method that avoids the expensive computational cost of inverting error covariance matrices of products of innovation processes of different lags when the number of observations becomes large. When we use only products of innovation processes up to one lag, the computational cost is indeed comparable to a recently proposed method by Berry and Sauer. However, our method is more flexible since it allows for using information from products of innovation processes of more than one lag. Extensive numerical comparisons between the proposed method and both the original Belanger scheme and the Berry-Sauer scheme are shown in various examples, ranging from low-dimensional linear and nonlinear systems of SDEs to the 40-dimensional stochastically forced Lorenz-96 model. Our numerical results suggest that the proposed scheme is as accurate as the original Belanger scheme on low-dimensional problems and has a wider range of more accurate estimates compared to the Berry-Sauer method on the L-96 example.

  15. Methods to Develop Inhalation Cancer Risk Estimates for ...

    EPA Pesticide Factsheets

    This document summarizes the approaches and rationale for the technical and scientific considerations used to derive inhalation cancer risks for emissions of chromium and nickel compounds from electric utility steam generating units. The purpose of this document is to discuss the methods used to develop inhalation cancer risk estimates associated with emissions of chromium and nickel compounds from coal- and oil-fired electric utility steam generating units (EGUs) in support of EPA's recently proposed Air Toxics Rule.

  16. Optimizing Probability of Detection Point Estimate Demonstration

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M.

    2017-01-01

    Probability of detection (POD) analysis is used in assessing the reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-1823 and the associated mh1823 POD software give the most common methods of POD analysis. Real flaws, such as cracks and crack-like flaws, are the targets of detection with these NDE methods. A reliably detectable crack size is required for safe-life analysis of fracture-critical parts. The paper provides a discussion on optimizing probability of detection (POD) demonstration experiments using the point estimate method. The POD point estimate method is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size within some tolerance is used in the demonstration. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false (POF) calls while keeping the flaw sizes in the set as small as possible.
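
    A minimal sketch of the binomial arithmetic underlying a point-estimate demonstration: with n same-size flaws and a maximum number of allowed misses, the probability of passing the demonstration (PPD) for a true POD p is a binomial tail sum, and the classic 29-of-29 case demonstrates roughly 90% POD at 95% confidence. The numbers below are illustrative, not NASA procedure values.

```python
# Minimal sketch of the binomial arithmetic behind a point-estimate POD
# demonstration: with n same-size flaws and at most m allowed misses, the
# probability of passing the demonstration (PPD) for a true POD p is a binomial
# tail sum. Numbers are illustrative, not NASA procedure values.
from math import comb

def prob_pass(p, n=29, misses_allowed=0):
    """PPD = P(at most `misses_allowed` misses out of n flaws) for true POD p."""
    return sum(comb(n, k) * (1.0 - p) ** k * p ** (n - k)
               for k in range(misses_allowed + 1))

# Chance of passing a 29-of-29 demonstration when the true POD is only 0.90:
print(round(prob_pass(0.90), 3))        # ~0.047

# Demonstrated POD at 95% confidence for 29 hits in 29 trials: the POD value
# p0 for which 29/29 would still be seen 5% of the time, p0 = 0.05**(1/29).
print(round(0.05 ** (1.0 / 29.0), 3))   # ~0.902
```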

  17. Geometric estimation method for x-ray digital intraoral tomosynthesis

    NASA Astrophysics Data System (ADS)

    Li, Liang; Yang, Yao; Chen, Zhiqiang

    2016-06-01

    It is essential for accurate image reconstruction to obtain a set of parameters that describes the x-ray scanning geometry. A geometric estimation method is presented for x-ray digital intraoral tomosynthesis (DIT) in which the detector remains stationary while the x-ray source rotates. The main idea is to estimate the three-dimensional (3-D) coordinates of each shot position using at least two small opaque balls adhering to the detector surface as the positioning markers. From the radiographs containing these balls, the position of each x-ray focal spot can be calculated independently relative to the detector center no matter what kind of scanning trajectory is used. A 3-D phantom which roughly simulates DIT was designed to evaluate the performance of this method both quantitatively and qualitatively in the sense of mean square error and structural similarity. Results are also presented for real data acquired with a DIT experimental system. These results prove the validity of this geometric estimation method.

  18. A robust method for estimating landfill methane emissions.

    PubMed

    Figueroa, Veronica K; Mackie, Kevin R; Guarriello, Nick; Cooper, C David

    2009-08-01

    Because municipal solid waste (MSW) landfills emit significant amounts of methane, a potent greenhouse gas, there is considerable interest in quantifying surficial methane emissions from landfills. The authors present a method to estimate methane emissions, using ambient air volatile organic compound (VOC) measurements taken above the surface of the landfill. Using a hand-held monitor, hundreds of VOC concentrations can be taken easily in a day, and simple meteorological data can be recorded at the same time. The standard Gaussian dispersion equations are inverted and solved by matrix methods to determine the methane emission rates at hundreds of point locations throughout a MSW landfill. These point emission rates are then summed to give the total landfill emission rate. This method is tested on a central Florida MSW landfill using data from 3 different days, taken 6 and 12 months apart. A sensitivity study is conducted, and the emission estimates are most sensitive to the input meteorological parameters of wind speed and stability class. Because of the many measurements that are used, the results are robust. When the emission estimates were used as inputs into a dispersion model, a reasonable scatterplot fit of the individual concentration measurement data resulted.

  19. Improving stochastic estimates with inference methods: Calculating matrix diagonals

    NASA Astrophysics Data System (ADS)

    Selig, Marco; Oppermann, Niels; Enßlin, Torsten A.

    2012-02-01

    Estimating the diagonal entries of a matrix that is not directly accessible but only available as a linear operator in the form of a computer routine is a common necessity in many computational applications, especially in image reconstruction and statistical inference. Here, methods of statistical inference are used to improve the accuracy or the computational costs of matrix probing methods to estimate matrix diagonals. In particular, the generalized Wiener filter methodology, as developed within information field theory, is shown to significantly improve estimates based on only a few sampling probes, in cases in which some form of continuity of the solution can be assumed. The strength, length scale, and precise functional form of the exploited autocorrelation function of the matrix diagonal are determined from the probes themselves. The developed algorithm is successfully applied to mock and real-world problems. These performance tests show that, in situations where a matrix diagonal has to be calculated from only a small number of computationally expensive probes, a speedup by a factor of 2 to 10 is possible with the proposed method.
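
    For reference, the basic probing estimator that the paper improves on can be sketched in a few lines: with random ±1 (Rademacher) probe vectors z, the elementwise average of z ⊙ (A z) converges to the diagonal of A, and only a routine applying A to a vector is needed. The information-field-theory (Wiener filter) refinement itself is not reproduced here.

```python
# Minimal sketch of the basic probing estimator the paper starts from: for
# random +/-1 (Rademacher) probes z, the elementwise mean of z * (A @ z)
# converges to diag(A); only a routine applying A to a vector is required.
# The information-field-theory (Wiener filter) refinement is not shown here.
import numpy as np

def estimate_diagonal(apply_A, n, n_probes=50, seed=None):
    rng = np.random.default_rng(seed)
    acc = np.zeros(n)
    for _ in range(n_probes):
        z = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe vector
        acc += z * apply_A(z)                 # z * (A z) has expectation diag(A)
    return acc / n_probes

# Demo with an explicit matrix standing in for an expensive black-box operator.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200))
A = A @ A.T                                   # symmetric positive semi-definite test matrix
est = estimate_diagonal(lambda v: A @ v, n=200, n_probes=200)
print(np.max(np.abs(est - np.diag(A)) / np.diag(A)))   # worst-case relative error
```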

  20. Minimally important difference estimates and methods: a protocol

    PubMed Central

    Johnston, Bradley C; Ebrahim, Shanil; Carrasco-Labra, Alonso; Furukawa, Toshi A; Patrick, Donald L; Crawford, Mark W; Hemmelgarn, Brenda R; Schunemann, Holger J; Guyatt, Gordon H; Nesrallah, Gihad

    2015-01-01

    Introduction Patient-reported outcomes (PROs) are often the outcomes of greatest importance to patients. The minimally important difference (MID) provides a measure of the smallest change in the PRO that patients perceive as important. An anchor-based approach is the most appropriate method for MID determination. No study or database currently exists that provides all anchor-based MIDs associated with PRO instruments; nor are there any accepted standards for appraising the credibility of MID estimates. Our objectives are to complete a systematic survey of the literature to collect and characterise published anchor-based MIDs associated with PRO instruments used in evaluating the effects of interventions on chronic medical and psychiatric conditions and to assess their credibility. Methods and analysis We will search MEDLINE, EMBASE and PsycINFO (1989 to present) to identify studies addressing methods to estimate anchor-based MIDs of target PRO instruments or reporting empirical ascertainment of anchor-based MIDs. Teams of two reviewers will screen titles and abstracts, review full texts of citations, and extract relevant data. On the basis of findings from studies addressing methods to estimate anchor-based MIDs, we will summarise the available methods and develop an instrument addressing the credibility of empirically ascertained MIDs. We will evaluate the credibility of all studies reporting on the empirical ascertainment of anchor-based MIDs using the credibility instrument, and assess the instrument's inter-rater reliability. We will separately present reports for adult and paediatric populations. Ethics and dissemination No research ethics approval was required as we will be using aggregate data from published studies. Our work will summarise anchor-based methods available to establish MIDs, provide an instrument to assess the credibility of available MIDs, determine the reliability of that instrument, and provide a comprehensive compendium of published anchor

  1. SCoPE: an efficient method of Cosmological Parameter Estimation

    SciTech Connect

    Das, Santanu; Souradeep, Tarun E-mail: tarun@iucaa.ernet.in

    2014-07-01

    The Markov Chain Monte Carlo (MCMC) sampler is widely used for cosmological parameter estimation from CMB and other data. However, due to the intrinsic serial nature of the MCMC sampler, convergence is often very slow. Here we present a fast and independently written Monte Carlo method for cosmological parameter estimation, named the Slick Cosmological Parameter Estimator (SCoPE), that employs delayed rejection to increase the acceptance rate of a chain and pre-fetching that allows an individual chain to run on parallel CPUs. An inter-chain covariance update is also incorporated to prevent clustering of the chains, allowing faster and better mixing. We use an adaptive method for covariance calculation to calculate and update the covariance automatically as the chains progress. Our analysis shows that the acceptance probability of each step in SCoPE is more than 95% and that the convergence of the chains is faster. Using SCoPE, we carry out cosmological parameter estimation with different cosmological models using WMAP-9 and Planck results. One of the current research interests in cosmology is quantifying the nature of dark energy. We analyze the cosmological parameters from two illustrative, commonly used parameterisations of dark energy models. We also assess whether the primordial helium fraction in the universe can be constrained by the present CMB data from WMAP-9 and Planck. The results from our MCMC analysis help us, on the one hand, to understand the workings of SCoPE better, and, on the other hand, provide a completely independent estimation of cosmological parameters from the WMAP-9 and Planck data.

  2. Methods for estimating low-flow statistics for Massachusetts streams

    USGS Publications Warehouse

    Ries, Kernell G.; Friesz, Paul J.

    2000-01-01

    Methods and computer software are described in this report for determining flow duration, low-flow frequency statistics, and August median flows. These low-flow statistics can be estimated for unregulated streams in Massachusetts using different methods depending on whether the location of interest is at a streamgaging station, a low-flow partial-record station, or an ungaged site where no data are available. Low-flow statistics for streamgaging stations can be estimated using standard U.S. Geological Survey methods described in the report. The MOVE.1 mathematical method and a graphical correlation method can be used to estimate low-flow statistics for low-flow partial-record stations. The MOVE.1 method is recommended when the relation between measured flows at a partial-record station and daily mean flows at a nearby, hydrologically similar streamgaging station is linear, and the graphical method is recommended when the relation is curved. Equations are presented for computing the variance and equivalent years of record for estimates of low-flow statistics for low-flow partial-record stations when either a single or multiple index stations are used to determine the estimates. The drainage-area ratio method or regression equations can be used to estimate low-flow statistics for ungaged sites where no data are available. The drainage-area ratio method is generally as accurate as or more accurate than regression estimates when the drainage-area ratio for an ungaged site is between 0.3 and 1.5 times the drainage area of the index data-collection site. Regression equations were developed to estimate the natural, long-term 99-, 98-, 95-, 90-, 85-, 80-, 75-, 70-, 60-, and 50-percent duration flows; the 7-day, 2-year and the 7-day, 10-year low flows; and the August median flow for ungaged sites in Massachusetts. Streamflow statistics and basin characteristics for 87 to 133 streamgaging stations and low-flow partial-record stations were used to develop the equations. The
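
    As a small illustration of the drainage-area ratio method described above, the sketch below scales a low-flow statistic from an index gage by the ratio of drainage areas and enforces the 0.3-1.5 ratio range noted in the report; the areas and the index statistic are hypothetical.

```python
# Minimal sketch of the drainage-area ratio method: a low-flow statistic at an
# ungaged site is scaled from an index gage by the ratio of drainage areas,
# with the 0.3-1.5 ratio range noted above used as an applicability check.
# The areas and the index statistic are hypothetical.
def drainage_area_ratio_estimate(q_index, area_index_km2, area_ungaged_km2):
    ratio = area_ungaged_km2 / area_index_km2
    if not 0.3 <= ratio <= 1.5:
        raise ValueError(f"area ratio {ratio:.2f} outside 0.3-1.5; "
                         "regression equations may be more appropriate")
    return q_index * ratio

# A hypothetical 7-day, 10-year low flow of 0.85 m^3/s at a 120 km^2 index gage,
# transferred to a hydrologically similar 90 km^2 ungaged site:
print(drainage_area_ratio_estimate(0.85, 120.0, 90.0))   # ~0.64 m^3/s
```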

  3. The composite method: An improved method for stream-water solute load estimation

    USGS Publications Warehouse

    Aulenbach, Brent T.; Hooper, R.P.

    2006-01-01

    The composite method is an alternative method for estimating stream-water solute loads, combining aspects of two commonly used methods: the regression-model method (which is used by the composite method to predict variations in concentrations between collected samples) and a period-weighted approach (which is used by the composite method to apply the residual concentrations from the regression model over time). The extensive dataset collected at the outlet of the Panola Mountain Research Watershed (PMRW) near Atlanta, Georgia, USA, was used in data analyses for illustrative purposes. A bootstrap (subsampling) experiment (using the composite method and the PMRW dataset along with various fixed-interval and large storm sampling schemes) obtained load estimates for the 8-year study period with a magnitude of the bias of less than 1%, even for estimates that included the fewest number of samples. Precisions were always <2% on a study period and annual basis, and <2% precisions were obtained for quarterly and monthly time intervals for estimates that had better sampling. The bias and precision of composite-method load estimates varies depending on the variability in the regression-model residuals, how residuals systematically deviated from the regression model over time, sampling design, and the time interval of the load estimate. The regression-model method did not estimate loads precisely during shorter time intervals, from annually to monthly, because the model could not explain short-term patterns in the observed concentrations. Load estimates using the period-weighted approach typically are biased as a result of sampling distribution and are accurate only with extensive sampling. The formulation of the composite method facilitates exploration of patterns (trends) contained in the unmodelled portion of the load. Published in 2006 by John Wiley & Sons, Ltd.
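
    A minimal sketch of the composite idea described above: a regression model predicts concentration between samples, the residuals at sampled times are carried forward through time (here by linear interpolation) and added back, and the load is the sum of adjusted concentration times discharge. The log C versus log Q regression form and all data below are invented stand-ins, not the Panola Mountain dataset.

```python
# Minimal sketch of the composite-method idea: a regression model predicts
# concentration between samples, residuals at the sampled times are linearly
# interpolated through time and added back, and load is the sum of adjusted
# concentration times discharge. The log C ~ log Q regression form and all
# data below are invented stand-ins, not the Panola Mountain dataset.
import numpy as np

t = np.arange(0.0, 365.0, 1.0)                           # days
rng = np.random.default_rng(3)
Q = 1.0 + 0.5 * np.sin(2 * np.pi * t / 365) + 0.1 * rng.random(t.size)   # discharge [m^3/s]
C_true = 5.0 * Q ** -0.3 * (1 + 0.1 * np.sin(2 * np.pi * t / 30))        # "true" conc. [mg/L]

sample_idx = np.arange(0, t.size, 14)                    # biweekly water-quality samples
b, a = np.polyfit(np.log(Q[sample_idx]), np.log(C_true[sample_idx]), 1)  # log C = a + b log Q
C_reg = np.exp(a + b * np.log(Q))                        # regression-model concentration

resid = C_true[sample_idx] - C_reg[sample_idx]           # residuals at sample times
C_composite = C_reg + np.interp(t, t[sample_idx], resid) # period-weighted residual correction

load_kg = np.sum(C_composite * Q) * 86400.0 / 1000.0     # (g/m^3)*(m^3/s), daily steps -> kg/yr
print(round(load_kg))
```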

  4. Method to estimate center of rigidity using vibration recordings

    USGS Publications Warehouse

    Safak, Erdal; Celebi, Mehmet

    1990-01-01

    A method to estimate the center of rigidity of buildings by using vibration recordings is presented. The method is based on the criterion that the coherence of translational motions with the rotational motion is minimum at the center of rigidity. Since the coherence is a function of frequency, a gross but frequency-independent measure of the coherency is defined as the integral of the coherence function over the frequency. The center of rigidity is determined by minimizing this integral. The formulation is given for two-dimensional motions. Two examples are presented for the method; a rectangular building with ambient-vibration recordings, and a triangular building with earthquake-vibration recordings. Although the examples given are for buildings, the method can be applied to any structure with two-dimensional motions.
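
    The criterion described above can be sketched for one horizontal direction under a rigid-floor, small-rotation assumption: the translation at a trial point x along the floor is taken as the recorded translation plus x times the rotational motion, its coherence with the rotation is integrated over frequency, and the x minimizing that integral is the estimated center of rigidity. All signals and the geometry below are synthetic assumptions.

```python
# Minimal sketch of the criterion described above, for one horizontal direction
# under a rigid-floor, small-rotation assumption: the translation at a trial
# point x along the floor is u(x) = u0 + x*theta, its coherence with the
# rotational motion theta is integrated over frequency, and the x minimizing
# that integral is the estimated center of rigidity. All signals are synthetic.
import numpy as np
from scipy.signal import coherence

fs = 100.0
t = np.arange(0.0, 60.0, 1.0 / fs)
rng = np.random.default_rng(4)

theta = rng.standard_normal(t.size)          # rotational (torsional) motion
u_cr = rng.standard_normal(t.size)           # translation at the true center (independent of theta)
x_true = 3.0                                 # true center of rigidity [m] from the sensor
u0 = u_cr - x_true * theta                   # translation recorded at the sensor (x = 0)

candidates = np.linspace(-10.0, 10.0, 81)
scores = []
for x in candidates:
    u_x = u0 + x * theta                     # translation transferred to the trial point x
    f, coh = coherence(u_x, theta, fs=fs, nperseg=512)
    scores.append(np.sum(coh) * (f[1] - f[0]))   # frequency-integrated coherence

print("estimated center of rigidity: %.2f m" % candidates[int(np.argmin(scores))])
```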

  5. Evaluation of estimation methods for organic carbon normalized sorption coefficients

    USGS Publications Warehouse

    Baker, James R.; Mihelcic, James R.; Luehrs, Dean C.; Hickey, James P.

    1997-01-01

    A critically evaluated set of 94 soil-water partition coefficients normalized to soil organic carbon content (Koc) is presented for 11 classes of organic chemicals. This data set is used to develop and evaluate Koc estimation methods using three different descriptors. The three types of descriptors used in predicting Koc were the octanol/water partition coefficient (Kow), molecular connectivity (mXt), and linear solvation energy relationships (LSERs). The best results were obtained estimating Koc from Kow, though a slight improvement in the correlation coefficient was obtained by using a two-parameter regression with Kow and the third-order difference term from mXt. Molecular connectivity correlations seemed to be best suited for use with specific chemical classes. The LSER provided a better fit than mXt but not as good as the correlation with Kow. The correlation to predict Koc from Kow was developed for 72 chemicals; log Koc = 0.903 log Kow + 0.094. This correlation accounts for 91% of the variability in the data for chemicals with log Kow ranging from 1.7 to 7.0. The expression to determine the 95% confidence interval on the estimated Koc is provided along with an example for two chemicals of different hydrophobicity showing the confidence interval of the retardation factor determined from the estimated Koc. The data showed that the correlation is not likely to be applicable for chemicals with log Kow < 1.7. Finally, the Koc correlation developed using Kow as a descriptor was compared with three nonclass-specific correlations and two 'commonly used' class-specific correlations to determine which method(s) are most suitable.
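
    A worked example of the reported correlation, log Koc = 0.903 log Kow + 0.094, is given below, followed by a retardation-factor calculation using the standard R = 1 + (ρb/θ) foc Koc relation; the bulk density, porosity, and foc values are illustrative assumptions, not values from the paper.

```python
# Worked example of the reported correlation log Koc = 0.903 log Kow + 0.094
# (reported for log Kow between 1.7 and 7.0), followed by a retardation-factor
# step using the standard R = 1 + (rho_b/theta)*foc*Koc relation. The bulk
# density, porosity, and foc values are illustrative assumptions, not values
# from the paper.
def koc_from_kow(log_kow):
    if not 1.7 <= log_kow <= 7.0:
        raise ValueError("correlation reported only for log Kow in [1.7, 7.0]")
    return 10.0 ** (0.903 * log_kow + 0.094)        # Koc in L/kg

def retardation_factor(koc, foc=0.01, bulk_density_kg_per_L=1.6, porosity=0.35):
    kd = koc * foc                                  # distribution coefficient [L/kg]
    return 1.0 + (bulk_density_kg_per_L / porosity) * kd

for log_kow in (2.5, 6.0):                          # moderately vs strongly hydrophobic
    koc = koc_from_kow(log_kow)
    print(f"log Kow = {log_kow}: Koc ~ {koc:.0f} L/kg, R ~ {retardation_factor(koc):.0f}")
```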

  6. Reliability of field methods for estimating body fat.

    PubMed

    Loenneke, Jeremy P; Barnes, Jeremy T; Wilson, Jacob M; Lowery, Ryan P; Isaacs, Melissa N; Pujol, Thomas J

    2013-09-01

    When health professionals measure the fitness levels of clients, body composition is usually estimated. In practice, the reliability of the measurement may be more important than the actual validity, as reliability determines how much change is needed to be considered meaningful. Therefore, the purpose of this study was to determine the reliability of two bioelectrical impedance analysis (BIA) devices (in athlete and non-athlete mode) and compare that to 3-site skinfold (SKF) readings. Twenty-one college students attended the laboratory on two occasions and had their measurements taken in the following order: body mass, height, SKF, Tanita body fat-350 (BF-350) and Omron HBF-306C. There were no significant pairwise differences between Visit 1 and Visit 2 for any of the estimates (P>0.05). Pearson product-moment correlations ranged from r = 0.933 for HBF-350 in athlete mode (A) to r = 0.994 for SKF. The ICCs ranged from 0.93 for HBF-350(A) to 0.992 for SKF, and the minimal differences (MDs) ranged from 1.8% for SKF to 5.1% for BF-350(A). The current study found that SKF and HBF-306C(A) were the most reliable (<2%) methods of estimating BF%, with the other methods (BF-350, BF-350(A), HBF-306C) producing minimal differences greater than 2%. In conclusion, the SKF method presented the best reliability because of its low minimal difference, suggesting this method may be the best field method to track changes over time if an experienced tester is available. However, if technical error is a concern, the practitioner may use the HBF-306C(A) because it had a minimal difference value comparable to SKF.

  7. How Accurately Do Spectral Methods Estimate Effective Elastic Thickness?

    NASA Astrophysics Data System (ADS)

    Perez-Gussinye, M.; Lowry, A. R.; Watts, A. B.; Velicogna, I.

    2002-12-01

    The effective elastic thickness, Te, is an important parameter that has the potential to provide information on the long-term thermal and mechanical properties of the lithosphere. Previous studies have estimated Te using both forward and inverse (spectral) methods. While there is generally good agreement between the results obtained using these methods, spectral methods are limited because they depend on the spectral estimator and the window size chosen for analysis. In order to address this problem, we have used a multitaper technique which yields optimal estimates of the bias and variance of the Bouguer coherence function relating topography and gravity anomaly data. The technique has been tested using realistic synthetic topography and gravity. Synthetic data were generated assuming surface and sub-surface (buried) loading of an elastic plate with fractal statistics consistent with real data sets. The cases of uniform and spatially varying Te are examined. The topography and gravity anomaly data consist of 2000x2000 km grids sampled at an 8 km interval. The bias in the Te estimate is assessed from the difference between the true Te value and the mean from analyzing 100 overlapping windows within the 2000x2000 km data grids. For the case in which Te is uniform, the bias and variance decrease with window size and increase with increasing true Te value. In the case of a spatially varying Te, however, there is a trade-off between spatial resolution and variance. With increasing window size the variance of the Te estimate decreases, but the spatial changes in Te are smeared out. We find that for a Te distribution consisting of a strong central circular region of Te=50 km (radius 600 km) and progressively smaller Te towards its edges, the 800x800 and 1000x1000 km windows gave the best compromise between spatial resolution and variance. Our studies demonstrate that assumed stationarity of the relationship between gravity and topography data yields good results even in

  8. Methods for cost estimation in software project management

    NASA Astrophysics Data System (ADS)

    Briciu, C. V.; Filip, I.; Indries, I. I.

    2016-02-01

    The speed at which the processes used in the software development field have changed makes forecasting the overall cost of a software project very difficult. Many researchers have considered this task unachievable, but a group of scientists holds that it can be solved using already known mathematical methods (e.g., multiple linear regression) and newer techniques such as genetic programming and neural networks. The paper presents a solution for building a cost estimation model for software project management using genetic algorithms, starting from the PROMISE datasets related to the COCOMO 81 model. In the first part of the paper, a summary of the major achievements in the research area of finding a model for estimating overall project costs is presented together with a description of the existing software development process models. In the last part, a basic mathematical model for genetic programming is proposed, including a description of the chosen fitness function and chromosome representation. The perspective of the described model is linked with the current reality of software development, taking the software product life cycle as a basis together with the current challenges and innovations in the software development area. Based on the authors' experience and an analysis of the existing models and the product life cycle, it was concluded that estimation models should be adapted to new technologies and emerging systems and that they depend largely on the chosen software development method.

  9. A method to estimate groundwater depletion from confining layers

    USGS Publications Warehouse

    Konikow, L.F.; Neuzil, C.E.

    2007-01-01

    Although depletion of storage in low-permeability confining layers is the source of much of the groundwater produced from many confined aquifer systems, it is all too frequently overlooked or ignored. This makes effective management of groundwater resources difficult by masking how much water has been derived from storage and, in some cases, the total amount of water that has been extracted from an aquifer system. Analyzing confining layer storage is viewed as troublesome because of the additional computational burden and because the hydraulic properties of confining layers are poorly known. In this paper we propose a simplified method for computing estimates of confining layer depletion, as well as procedures for approximating confining layer hydraulic conductivity (K) and specific storage (Ss) using geologic information. The latter makes the technique useful in developing countries and other settings where minimal data are available or when scoping calculations are needed. As such, our approach may be helpful for estimating the global transfer of groundwater to surface water. A test of the method on a synthetic system suggests that the computational errors will generally be small. Larger errors will probably result from inaccuracy in confining layer property estimates, but these may be no greater than errors in more sophisticated analyses. The technique is demonstrated by application to two aquifer systems: the Dakota artesian aquifer system in South Dakota and the coastal plain aquifer system in Virginia. In both cases, depletion from confining layers was substantially larger than depletion from the aquifers.

  10. Laser heating method for estimation of carbon nanotube purity

    NASA Astrophysics Data System (ADS)

    Terekhov, S. V.; Obraztsova, E. D.; Lobach, A. S.; Konov, V. I.

    A new method of carbon nanotube purity estimation has been developed on the basis of Raman spectroscopy. The spectra of carbon soot containing different amounts of nanotubes were registered under heating by a probing laser beam with step-by-step increases in power density. The material temperature in the laser spot was estimated from the position of the tangential Raman mode, which demonstrates a linear thermal shift (-0.012 cm-1/K) from its room-temperature position of 1592 cm-1. The rate of material temperature rise versus laser power density (the slope of the corresponding graph) appeared to correlate strongly with the nanotube content in the soot. The influence of the experimental conditions on the slope value was excluded via a simultaneous measurement of a reference sample with a high nanotube content (95 vol.%). After calibration (done by comparing the Raman and transmission electron microscopy data for the nanotube percentage in the same samples), the Raman-based method is able to provide a quantitative purity estimate for any nanotube-containing material.
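
    A minimal sketch of the temperature readout described above, using the numbers given in the abstract (a tangential-mode position of 1592 cm-1 at room temperature and a -0.012 cm-1/K shift); the assumed room temperature of 295 K and the example peak positions are illustrative.

```python
# Minimal sketch of the temperature readout described above: the tangential
# Raman mode sits at 1592 cm^-1 at room temperature and shifts linearly by
# -0.012 cm^-1/K, so the spot temperature can be read from the peak position.
# The assumed room temperature (295 K) and example peak positions are
# illustrative, not measured values from the paper.
G_BAND_ROOM_CM1 = 1592.0      # cm^-1 at room temperature (from the abstract)
SHIFT_PER_K = -0.012          # cm^-1 per kelvin (from the abstract)
T_ROOM_K = 295.0              # assumed room temperature

def spot_temperature(peak_position_cm1):
    return T_ROOM_K + (peak_position_cm1 - G_BAND_ROOM_CM1) / SHIFT_PER_K

# Peak positions at increasing probe-laser power density (illustrative):
for pos in (1592.0, 1590.2, 1587.8):
    print(f"{pos:.1f} cm^-1 -> {spot_temperature(pos):.0f} K")
```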

  11. Causes and methods to estimate cryptic sources of fishing mortality.

    PubMed

    Gilman, E; Suuronen, P; Hall, M; Kennelly, S

    2013-10-01

    Cryptic, not readily detectable, components of fishing mortality are not routinely accounted for in fisheries management because of a lack of adequate data, and for some components, a lack of accurate estimation methods. Cryptic fishing mortalities can cause adverse ecological effects, are a source of wastage, reduce the sustainability of fishery resources and, when unaccounted for, can cause errors in stock assessments and population models. Sources of cryptic fishing mortality are (1) pre-catch losses, where catch dies from the fishing operation but is not brought onboard when the gear is retrieved, (2) ghost-fishing mortality by fishing gear that was abandoned, lost or discarded, (3) post-release mortality of catch that is retrieved and then released alive but later dies as a result of stress and injury sustained from the fishing interaction, (4) collateral mortalities indirectly caused by various ecological effects of fishing and (5) losses due to synergistic effects of multiple interacting sources of stress and injury from fishing operations, or from cumulative stress and injury caused by repeated sub-lethal interactions with fishing operations. To fill a gap in international guidance on best practices, causes and methods for estimating each component of cryptic fishing mortality are described, and considerations for their effective application are identified. Research priorities to fill gaps in understanding the causes and estimating cryptic mortality are highlighted. © 2013 The Authors. Journal of Fish Biology © 2013 The Fisheries Society of the British Isles.

  12. A power function method for estimating base flow.

    PubMed

    Lott, Darline A; Stewart, Mark T

    2013-01-01

    Analytical base flow separation techniques are often used to determine the base flow contribution to total stream flow. Most analytical methods derive base flow from discharge records alone without using basin-specific variables other than basin area. This paper derives a power function for estimating base flow, of the form aQ^b + cQ, an analytical method calibrated against an integrated basin variable, specific conductance, that relates base flow to total discharge and is consistent with the observed mathematical behavior of dissolved solids in stream flow with varying discharge. The method has the advantages of being uncomplicated, reproducible, and applicable to hydrograph separation in basins with limited specific conductance data. The power function relationship between base flow and discharge holds over a wide range of basin areas. It better replicates base flow determined by mass balance methods than analytical methods such as filters or smoothing routines that are not calibrated to natural tracers or to empirical basin- and gauge-specific variables. Also, it can be used with discharge during periods without specific conductance values, including separating base flow from quick flow for single events. However, it may overestimate base flow during very high flow events. Application of the geochemical mass balance and power function base flow separation methods to stream flow and specific conductance records from multiple gauges in the same basin suggests that analytical base flow separation methods must be calibrated at each gauge. Using average values of coefficients introduces a potentially significant and unknown error in base flow as compared with mass balance methods.
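
    A minimal sketch of the power-function separation, base flow = aQ^b + cQ, calibrated against a conductance-derived (mass balance) base-flow series as the paper describes; the discharge record and reference base flow below are synthetic stand-ins, and capping base flow at total discharge is a simple safeguard added here, not part of the published method.

```python
# Minimal sketch of the power-function separation BF = a*Q^b + c*Q, calibrated
# against a conductance-derived (mass balance) base-flow series as described
# above. The discharge record and reference base flow are synthetic stand-ins;
# capping base flow at total discharge is a simple safeguard added here.
import numpy as np
from scipy.optimize import curve_fit

def power_function_bf(Q, a, b, c):
    return a * Q ** b + c * Q

rng = np.random.default_rng(5)
Q = np.exp(rng.normal(1.0, 0.8, size=365))                 # synthetic daily discharge
bf_ref = 0.8 * Q ** 0.6 + 0.1 * Q + 0.05 * rng.standard_normal(Q.size)
bf_ref = np.clip(bf_ref, 0.0, Q)                           # stand-in for mass-balance base flow

(a, b, c), _ = curve_fit(power_function_bf, Q, bf_ref, p0=(1.0, 0.5, 0.1))
base_flow = np.minimum(power_function_bf(Q, a, b, c), Q)   # base flow cannot exceed discharge

print(f"a={a:.2f}, b={b:.2f}, c={c:.2f}, base-flow index={base_flow.sum() / Q.sum():.2f}")
```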

  13. Intensity estimation method of LED array for visible light communication

    NASA Astrophysics Data System (ADS)

    Ito, Takanori; Yendo, Tomohiro; Arai, Shintaro; Yamazato, Takaya; Okada, Hiraku; Fujii, Toshiaki

    2013-03-01

    This paper focuses on a road-to-vehicle visible light communication (VLC) system that uses an LED traffic light as the transmitter and a camera as the receiver. The traffic light is composed of a hundred LEDs arranged in a two-dimensional plane. In this system, data are sent as two-dimensional brightness patterns by controlling each LED of the traffic light individually, and they are received as images by the camera. A problem arises when neighboring LEDs in the received image merge, either because of the small number of pixels when the receiver is far from the transmitter or because of blurring caused by camera defocus; the bit error rate (BER) then increases owing to errors in recognizing LED intensities. To solve this problem, we propose a method that estimates the intensity of the LEDs by solving the inverse problem of the communication channel characteristics from the transmitter to the receiver. The proposed method is evaluated by BER characteristics obtained through computer simulation and experiments. The results show that the proposed method estimates intensities more accurately than conventional methods, especially when the received image is strongly blurred and the number of pixels is small.
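
    The paper's channel inversion is not specified in detail here; the sketch below illustrates the general idea of recovering LED intensities from a blurred observation by solving a regularized linear inverse problem. The 1-D blur matrix, noise level, and regularization weight are assumptions for illustration, not values from the paper.

    ```python
    import numpy as np

    # Hypothetical channel: each camera pixel observes a weighted mix of
    # neighbouring LEDs (blur from defocus / limited resolution).
    n_leds = 16                      # LEDs along one row of the array
    true_intensity = np.random.randint(0, 2, n_leds).astype(float)  # on/off pattern

    # Simple 1-D Gaussian blur matrix H as a stand-in for the measured channel response.
    H = np.zeros((n_leds, n_leds))
    for i in range(n_leds):
        for j in range(n_leds):
            H[i, j] = np.exp(-0.5 * ((i - j) / 1.5) ** 2)

    received = H @ true_intensity + 0.01 * np.random.randn(n_leds)

    # Estimate LED intensities by solving the inverse problem (ridge-regularised
    # least squares keeps the inversion stable when H is ill-conditioned).
    lam = 1e-2
    estimate = np.linalg.solve(H.T @ H + lam * np.eye(n_leds), H.T @ received)
    bits = (estimate > 0.5).astype(int)
    print(bits, true_intensity.astype(int))
    ```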

  14. A new method of SC image processing for confluence estimation.

    PubMed

    Soleimani, Sajjad; Mirzaei, Mohsen; Toncu, Dana-Cristina

    2017-10-01

    Stem cell images are a strong instrument for estimating confluency during culturing for therapeutic processes. Laboratory conditions such as lighting, the cell container support, and the image acquisition equipment affect image quality and, consequently, the estimation efficiency. This paper describes an efficient image processing method for cell pattern recognition and morphological analysis of images affected by an uneven background. The proposed enhancement algorithm couples a novel BM3D-filter-based image denoising method with an adaptive thresholding technique that corrects the uneven background. The algorithm provides a faster, easier, and more reliable method than manual measurement for assessing the confluency of stem cell cultures. The present scheme proves valid for predicting the confluency and growth of stem cells at early stages for tissue engineering in reparatory clinical surgery. The method is capable of processing cell images that already contain various defects caused by personnel mishandling or microscope limitations, and therefore provides proper information even from the worst original images available. Copyright © 2017 Elsevier Ltd. All rights reserved.
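
    A rough sketch of the pipeline's idea follows, assuming a grayscale image scaled to [0, 1]. Gaussian smoothing stands in for the BM3D denoiser, and a large-window local mean plays the role of the adaptive threshold; the window sizes and offset are illustrative, not the authors' settings.

    ```python
    import numpy as np
    from scipy import ndimage

    def estimate_confluency(image, denoise_sigma=1.0, background_sigma=25.0, offset=0.02):
        """Rough confluency estimate for a grayscale cell image in [0, 1].

        Gaussian smoothing stands in for the BM3D denoiser used in the paper,
        and a large-window local mean implements the adaptive threshold that
        compensates for uneven background illumination."""
        denoised = ndimage.gaussian_filter(image, denoise_sigma)
        local_background = ndimage.gaussian_filter(denoised, background_sigma)
        cell_mask = denoised > local_background + offset
        return cell_mask.mean()          # fraction of the field flagged as cells

    # Hypothetical test image: a bright cell patch on a sloped, noisy background.
    rng = np.random.default_rng(0)
    img = np.linspace(0.2, 0.4, 256)[None, :] * np.ones((256, 256))
    img += 0.05 * rng.standard_normal((256, 256))
    img[64:128, 64:160] += 0.3          # a patch of "cells"
    print(f"confluency ~ {estimate_confluency(img):.2f}")
    ```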

  15. Estimation of regionalized compositions: A comparison of three methods

    USGS Publications Warehouse

    Pawlowsky, V.; Olea, R.A.; Davis, J.C.

    1995-01-01

    A regionalized composition is a random vector function whose components are positive and sum to a constant at every point of the sampling region. Consequently, the components of a regionalized composition are necessarily spatially correlated. This spatial dependence, induced by the constant-sum constraint, is a spurious spatial correlation and may lead to misinterpretations of statistical analyses. Furthermore, the cross-covariance matrices of the regionalized composition are singular, as is the coefficient matrix of the cokriging system of equations. Three methods of performing estimation or prediction of a regionalized composition at unsampled points are discussed: (1) the direct approach of estimating each variable separately; (2) the basis method, which is applicable only when a random function is available that can be regarded as the size of the regionalized composition under study; and (3) the logratio approach, using the additive-log-ratio transformation proposed by J. Aitchison, which allows statistical analysis of compositional data. We present a brief theoretical review of these three methods and compare them using compositional data from the Lyons West Oil Field in Kansas (USA). It is shown that, although there are no important numerical differences, the direct approach leads to invalid results, whereas the basis method and the additive-log-ratio approach are comparable. © 1995 International Association for Mathematical Geology.
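
    For reference, a minimal sketch of Aitchison's additive log-ratio (alr) transform and its inverse, which is what allows a composition to be analyzed (for example, kriged) in unconstrained space and then mapped back to the simplex; the 3-part composition is a made-up example.

    ```python
    import numpy as np

    def alr(x):
        # Additive log-ratio transform: log of each part relative to the last part.
        x = np.asarray(x, dtype=float)
        return np.log(x[..., :-1] / x[..., -1:])

    def alr_inverse(y):
        # Back-transform to the simplex (parts are positive and sum to 1).
        expy = np.exp(y)
        total = 1.0 + expy.sum(axis=-1, keepdims=True)
        return np.concatenate([expy / total, 1.0 / total], axis=-1)

    # Hypothetical 3-part composition (e.g. sand/silt/clay fractions).
    composition = np.array([0.60, 0.25, 0.15])
    z = alr(composition)                 # unconstrained coordinates, safe to average or krige
    print(alr_inverse(z), composition)   # the round trip recovers the original parts
    ```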

  16. Understanding Methods for Estimating HIV-Associated Maternal Mortality

    PubMed Central

    Rosen, James E.; de Zoysa, Isabelle; Dehne, Karl; Mangiaterra, Viviana; Abdool-Karim, Quarraisha

    2012-01-01

    The impact of HIV on maternal mortality, and more broadly on the health of women, remains poorly documented and understood. Two recent reports attempt to address the conceptual and methodological challenges that arise in estimating HIV-related maternal mortality and trends. This paper presents and compares the methods and discusses how they affect estimates at global and regional levels. Country examples of likely patterns of mortality among women of reproductive age are provided to illustrate the critical interactions between HIV and complications of pregnancy in high-HIV-burden countries. The implications for collaboration between HIV and reproductive health programmes are discussed, in support of accelerated action to reach the Millennium Development Goals and improve the health of women. PMID:21966594

  17. GPS receiver CODE bias estimation: A comparison of two methods

    NASA Astrophysics Data System (ADS)

    McCaffrey, Anthony M.; Jayachandran, P. T.; Themens, D. R.; Langley, R. B.

    2017-04-01

    The Global Positioning System (GPS) is a valuable tool in the measurement and monitoring of ionospheric total electron content (TEC). To obtain accurate GPS-derived TEC, satellite and receiver hardware biases, known as differential code biases (DCBs), must be estimated and removed. The Center for Orbit Determination in Europe (CODE) provides monthly averages of receiver DCBs for a significant number of stations in the International Global Navigation Satellite Systems Service (IGS) network. A comparison of the monthly receiver DCBs provided by CODE with DCBs estimated using the minimization of standard deviations (MSD) method on both daily and monthly time intervals is presented. Calibrated TEC obtained using CODE-derived DCBs is accurate to within 0.74 TEC units (TECU) in differenced slant TEC (sTEC), while calibrated sTEC using MSD-derived DCBs results in an accuracy of 1.48 TECU.
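
    The MSD idea can be illustrated with a toy calculation: scan candidate receiver-bias values and keep the one that makes vertical TEC, computed from slant TEC through a simple mapping function, most consistent across observations. The synthetic TEC values, mapping function, and bias below are assumptions for illustration only.

    ```python
    import numpy as np

    # Simplified illustration of the minimization-of-standard-deviations idea:
    # the receiver bias is chosen so that, after calibration, vertical TEC
    # computed from different observations is as consistent as possible.
    rng = np.random.default_rng(1)
    n_obs = 500
    true_vtec = 20.0 + 5.0 * rng.standard_normal(n_obs)            # TECU
    elevation = np.radians(rng.uniform(20, 90, n_obs))
    mapping = 1.0 / np.sin(elevation)                               # simple mapping function
    true_receiver_dcb = 7.5                                         # TECU

    # Measured slant TEC (satellite biases assumed already removed) plus noise.
    stec = true_vtec * mapping + true_receiver_dcb + 0.5 * rng.standard_normal(n_obs)

    candidates = np.arange(0.0, 15.0, 0.01)
    spread = [np.std((stec - b) / mapping) for b in candidates]
    estimated_dcb = candidates[int(np.argmin(spread))]
    print(f"estimated receiver DCB ~ {estimated_dcb:.2f} TECU")
    ```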

  18. Estimating return on investment in translational research: methods and protocols.

    PubMed

    Grazier, Kyle L; Trochim, William M; Dilts, David M; Kirk, Rosalind

    2013-12-01

    Assessing the value of clinical and translational research funding on accelerating the translation of scientific knowledge is a fundamental issue faced by the National Institutes of Health (NIH) and its Clinical and Translational Awards (CTSAs). To address this issue, the authors propose a model for measuring the return on investment (ROI) of one key CTSA program, the clinical research unit (CRU). By estimating the economic and social inputs and outputs of this program, this model produces multiple levels of ROI: investigator, program, and institutional estimates. A methodology, or evaluation protocol, is proposed to assess the value of this CTSA function, with specific objectives, methods, descriptions of the data to be collected, and how data are to be filtered, analyzed, and evaluated. This article provides an approach CTSAs could use to assess the economic and social returns on NIH and institutional investments in these critical activities.

  19. Estimating Return on Investment in Translational Research: Methods and Protocols

    PubMed Central

    Trochim, William; Dilts, David M.; Kirk, Rosalind

    2014-01-01

    Assessing the value of clinical and translational research funding on accelerating the translation of scientific knowledge is a fundamental issue faced by the National Institutes of Health and its Clinical and Translational Awards (CTSA). To address this issue, the authors propose a model for measuring the return on investment (ROI) of one key CTSA program, the clinical research unit (CRU). By estimating the economic and social inputs and outputs of this program, this model produces multiple levels of ROI: investigator, program and institutional estimates. A methodology, or evaluation protocol, is proposed to assess the value of this CTSA function, with specific objectives, methods, descriptions of the data to be collected, and how data are to be filtered, analyzed, and evaluated. This paper provides an approach CTSAs could use to assess the economic and social returns on NIH and institutional investments in these critical activities. PMID:23925706

  20. Estimating Bacterial Diversity for Ecological Studies: Methods, Metrics, and Assumptions

    PubMed Central

    Birtel, Julia; Walser, Jean-Claude; Pichon, Samuel; Bürgmann, Helmut; Matthews, Blake

    2015-01-01

    Methods to estimate microbial diversity have developed rapidly in an effort to understand the distribution and diversity of microorganisms in natural environments. For bacterial communities, the 16S rRNA gene is the phylogenetic marker gene of choice, but most studies select only a specific region of the 16S rRNA to estimate bacterial diversity. Whereas biases derived from DNA extraction, primer choice, and PCR amplification are well documented, we here address how the choice of variable region can influence a wide range of standard ecological metrics, such as species richness, phylogenetic diversity, β-diversity, and rank-abundance distributions. We used Illumina paired-end sequencing to estimate the bacterial diversity of 20 natural lakes across Switzerland derived from three trimmed variable 16S rRNA regions (V3, V4, V5). Species richness, phylogenetic diversity, community composition, β-diversity, and rank-abundance distributions differed significantly between 16S rRNA regions. Overall, patterns of diversity quantified by the V3 and V5 regions were more similar to one another than those assessed by the V4 region. Similar results were obtained when analyzing the datasets with different sequence similarity thresholds used during sequence clustering and when the same analysis was applied to a reference dataset of sequences from the Greengenes database. In addition, we measured species richness from the same lake samples using ARISA fingerprinting, but did not find a strong relationship between species richness estimated by Illumina and ARISA. We conclude that the selection of the 16S rRNA region significantly influences the estimation of bacterial diversity and species distributions, and that caution is warranted when comparing data from different variable regions as well as when using different sequencing techniques. PMID:25915756

  1. Spectrophotometric estimation of tamsulosin hydrochloride by acid-dye method.

    PubMed

    Shrivastava, Alankar; Saxena, Prachi; Gupta, Vipin B

    2011-01-01

    A new spectrophotometric method for the estimation of tamsulosin hydrochloride in pharmaceutical dosage forms has been developed and validated. The method is based on the reaction between the drug and bromophenol blue, and the resulting complex was measured at 421 nm. The slope, intercept, and correlation coefficient were found to be 0.054, -0.020, and 0.999, respectively. The method was validated in terms of specificity, linearity, range, precision, and accuracy, and can be used to determine the drug in both tablet and capsule formulations. The reaction was optimized with respect to dye concentration, buffer pH, buffer volume, and shaking time. Maximum stability of the chromophore was achieved using pH 2 and 2 ml of buffer; the shaking time was 2 min and the dye used was 2 ml of a 0.05% w/v solution. The limits of detection and quantitation (LOD and LOQ) were also established, and the stoichiometry of the reaction was determined using the mole-ratio and Job's continuous-variation methods. The benzenoid form of the dye (blue) ionizes into the quinonoid form (purple) in the presence of the buffer and reacts with the protonated form of the drug in a 1:1 ratio to form an ion-pair complex (yellow).

  2. Nonparametric entropy estimation using kernel densities.

    PubMed

    Lake, Douglas E

    2009-01-01

    The entropy of experimental data from the biological and medical sciences provides additional information over summary statistics. Calculating entropy involves estimates of probability density functions, which can be effectively accomplished using kernel density methods. Kernel density estimation has been widely studied and a univariate implementation is readily available in MATLAB. The traditional definition of Shannon entropy is part of a larger family of statistics, called Renyi entropy, which are useful in applications that require a measure of the Gaussianity of data. Of particular note is the quadratic entropy which is related to the Friedman-Tukey (FT) index, a widely used measure in the statistical community. One application where quadratic entropy is very useful is the detection of abnormal cardiac rhythms, such as atrial fibrillation (AF). Asymptotic and exact small-sample results for optimal bandwidth and kernel selection to estimate the FT index are presented and lead to improved methods for entropy estimation.
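
    As a sketch of the quadratic (Renyi) entropy mentioned above, the following uses the closed-form information potential of a Gaussian kernel density estimate; the bandwidth rule and the test data are illustrative choices, not those of the paper.

    ```python
    import numpy as np

    def quadratic_entropy(x, bandwidth=None):
        """Renyi quadratic entropy estimate, H2 = -log ∫ p(x)^2 dx, from a
        Gaussian kernel density estimate. The information potential
        ∫ p̂(x)^2 dx has a closed form: the average of Gaussian kernels with
        variance 2*h^2 evaluated at all pairwise differences."""
        x = np.asarray(x, dtype=float)
        n = x.size
        if bandwidth is None:
            bandwidth = 1.06 * np.std(x) * n ** (-1 / 5)   # Silverman-style rule of thumb
        diff = x[:, None] - x[None, :]
        var = 2.0 * bandwidth ** 2
        potential = np.mean(np.exp(-diff ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var))
        return -np.log(potential)

    # Gaussian data should give a higher quadratic entropy than strongly peaked data.
    rng = np.random.default_rng(0)
    print(quadratic_entropy(rng.standard_normal(1000)))
    print(quadratic_entropy(rng.laplace(scale=0.3, size=1000)))
    ```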

  3. A Method to Estimate the Probability that Any Individual Cloud-to-Ground Lightning Stroke was Within Any Radius of Any Point

    NASA Technical Reports Server (NTRS)

    Huddleston, Lisa; Roeder, WIlliam P.; Merceret, Francis J.

    2011-01-01

    A new technique has been developed to estimate the probability that a nearby cloud-to-ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force station. Future applications could include forensic meteorology.
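
    The paper integrates the bivariate Gaussian of the location error ellipse over the circle of interest; a simple Monte Carlo stand-in for that integral is sketched below, with a made-up error ellipse, point of interest, and radius.

    ```python
    import numpy as np

    def prob_within_radius(mean, cov, point, radius, n_samples=200_000, seed=0):
        """Monte Carlo estimate of the probability that a stroke whose location
        is described by a bivariate Gaussian (mean, cov) lies within `radius`
        of an arbitrary `point`, which need not be inside the error ellipse."""
        rng = np.random.default_rng(seed)
        samples = rng.multivariate_normal(mean, cov, size=n_samples)
        dist = np.linalg.norm(samples - np.asarray(point), axis=1)
        return np.mean(dist <= radius)

    # Hypothetical stroke solution: most likely location and error covariance (km, km^2).
    mean = [0.0, 0.0]
    cov = [[0.25, 0.10],
           [0.10, 0.49]]
    # Point of interest (e.g. a launch pad) about 1 km away; 0.9 km "risk" radius.
    print(prob_within_radius(mean, cov, point=[1.0, 0.3], radius=0.9))
    ```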

  4. Estimating Fuel Cycle Externalities: Analytical Methods and Issues, Report 2

    SciTech Connect

    Barnthouse, L.W.; Cada, G.F.; Cheng, M.-D.; Easterly, C.E.; Kroodsma, R.L.; Lee, R.; Shriner, D.S.; Tolbert, V.R.; Turner, R.S.

    1994-07-01

    that also have not been fully addressed. This document contains two types of papers that seek to fill part of this void. Some of the papers describe analytical methods that can be applied to one of the five steps of the damage function approach. The other papers discuss some of the complex issues that arise in trying to estimate externalities. This report, the second in a series of eight reports, is part of a joint study by the U.S. Department of Energy (DOE) and the Commission of the European Communities (EC)* on the externalities of fuel cycles. Most of the papers in this report were originally written as working papers during the initial phases of this study. The report provides descriptions of the (non-radiological) atmospheric dispersion modeling that the study uses; reviews much of the relevant literature on ecological and health effects, and on the economic valuation of those impacts; contains several papers on some of the more complex and contentious issues in estimating externalities; and describes a method for depicting the quality of scientific information that a study uses. The analytical methods and issues that this report discusses generally pertain to more than one of the fuel cycles, though not necessarily to all of them. The report is divided into six parts, each one focusing on a different subject area.

  5. Streamflow-Characteristic Estimation Methods for Unregulated Streams of Tennessee

    USGS Publications Warehouse

    Law, George S.; Tasker, Gary D.; Ladd, David E.

    2009-01-01

    Streamflow-characteristic estimation methods for unregulated rivers and streams of Tennessee were developed by the U.S. Geological Survey in cooperation with the Tennessee Department of Environment and Conservation. Streamflow estimates are provided for 1,224 stream sites. Streamflow characteristics include the 7-consecutive-day, 10-year recurrence-interval low flow, the 30-consecutive-day, 5-year recurrence-interval low flow, the mean annual and mean summer flows, and the 99.5-, 99-, 98-, 95-, 90-, 80-, 70-, 60-, 50-, 40-, 30-, 20-, and 10-percent flow durations. Estimation methods include regional regression (RRE) equations and the region-of-influence (ROI) method. Both methods use zero-flow probability screening to estimate zero-flow quantiles. A low flow and flow duration (LFFD) computer program (TDECv301) performs zero-flow screening and calculation of nonzero-streamflow characteristics using the RRE equations and ROI method and provides quality measures including the 90-percent prediction interval and equivalent years of record. The U.S. Geological Survey StreamStats geographic information system automates the calculation of basin characteristics and streamflow characteristics. In addition, basin characteristics can be manually input to the stand-alone version of the computer program (TDECv301) to calculate streamflow characteristics in Tennessee. The RRE equations were computed using multivariable regression analysis. The two regions used for this study, the western part of the State (West) and the central and eastern part of the State (Central+East), are separated by the Tennessee River as it flows south to north from Hardin County to Stewart County. The West region uses data from 124 of the 1,224 streamflow sites, and the Central+East region uses data from 893 of the 1,224 streamflow sites. The study area also includes parts of the adjacent States of Georgia, North Carolina, Virginia, Alabama, Kentucky, and Mississippi. Total drainage area, a geology

  6. Using optimal estimation method for upper atmospheric Lidar temperature retrieval

    NASA Astrophysics Data System (ADS)

    Zou, Rongshi; Pan, Weilin; Qiao, Shuai

    2016-07-01

    Conventional ground-based Rayleigh lidar temperature retrievals use an integration technique with a known limitation: temperatures retrieved at the greatest heights must be discarded because a seeding value has to be assumed to initialize the integration at the highest altitude. Here we suggest a method that can incorporate information from various sources to improve the quality of the retrieval. This approach inverts the lidar equation via the optimal estimation method (OEM), based on Bayesian theory together with a Gaussian statistical model. It presents several advantages over the conventional approach: (1) it can incorporate information from multiple heterogeneous sources; (2) it provides diagnostic information about retrieval quality; and (3) it can determine the vertical resolution and the maximum height up to which the retrieval is largely independent of the a priori profile. This paper compares one-hour temperature profiles retrieved using the conventional and optimal estimation methods at Golmud, Qinghai province, China. The OEM results show better agreement with the SABER profile than the conventional retrieval, although in some regions the retrieved temperature is much lower than the SABER profile, a result that differs from previous studies and requires further investigation. The success of applying OEM to temperature retrieval supports its use as a retrieval framework for large synthetic observation systems that include various active remote sensing instruments, by incorporating all available measurement information into the model and analyzing groups of measurements simultaneously to improve the results.
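
    A minimal sketch of a linear optimal-estimation retrieval follows, using the standard Gaussian MAP solution and the averaging-kernel diagnostic; the forward model, a priori profile, and covariances are toy values, not the lidar configuration used in the paper.

    ```python
    import numpy as np

    def oem_retrieval(y, K, x_a, S_a, S_e):
        """Linear optimal-estimation (maximum a posteriori) retrieval:
        x_hat = x_a + (K^T S_e^-1 K + S_a^-1)^-1 K^T S_e^-1 (y - K x_a).
        Also returns the averaging-kernel matrix A, which diagnoses how much
        of the solution comes from the measurement versus the a priori."""
        S_a_inv = np.linalg.inv(S_a)
        S_e_inv = np.linalg.inv(S_e)
        gain = np.linalg.solve(K.T @ S_e_inv @ K + S_a_inv, K.T @ S_e_inv)
        x_hat = x_a + gain @ (y - K @ x_a)
        A = gain @ K
        return x_hat, A

    # Tiny synthetic example: a 3-level "temperature" state and 3 measurements.
    rng = np.random.default_rng(2)
    K = np.array([[1.0, 0.3, 0.0],
                  [0.0, 1.0, 0.3],
                  [0.0, 0.0, 1.0]])
    x_true = np.array([220.0, 235.0, 250.0])
    x_a = np.array([225.0, 230.0, 245.0])        # a priori profile (K)
    S_a = np.diag([25.0, 25.0, 25.0])            # a priori covariance
    S_e = np.diag([4.0, 4.0, 4.0])               # measurement-noise covariance
    y = K @ x_true + rng.multivariate_normal(np.zeros(3), S_e)
    x_hat, A = oem_retrieval(y, K, x_a, S_a, S_e)
    print(x_hat, np.trace(A))                    # trace(A) = degrees of freedom for signal
    ```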

  7. CME Velocity and Acceleration Error Estimates Using the Bootstrap Method

    NASA Astrophysics Data System (ADS)

    Michalek, Grzegorz; Gopalswamy, Nat; Yashiro, Seiji

    2017-08-01

    The bootstrap method is used to determine errors of basic attributes of coronal mass ejections (CMEs) visually identified in images obtained by the Solar and Heliospheric Observatory (SOHO) mission's Large Angle and Spectrometric Coronagraph (LASCO) instruments. The basic parameters of CMEs are stored, among others, in a database known as the SOHO/LASCO CME catalog and are widely employed for many research studies. The basic attributes of CMEs (e.g. velocity and acceleration) are obtained from manually generated height-time plots. The subjective nature of manual measurements introduces random errors that are difficult to quantify, and in many studies the impact of such measurement errors is overlooked. In this study we present a new way to estimate measurement errors in the basic attributes of CMEs. The approach is computer-intensive because it requires repeating the original data analysis procedure many times using replicate datasets; this is commonly called the bootstrap method in the literature. We show that the bootstrap approach can be used to estimate the errors of the basic attributes of CMEs having moderately large numbers of height-time measurements. The velocity errors are mostly small and depend chiefly on the number of height-time points measured for a particular event. The acceleration errors, in contrast, are significant: for more than half of all CMEs they are larger than the acceleration itself.
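
    A compact sketch of the bootstrap idea applied to height-time points is shown below, assuming a quadratic height-time model; the measurement values, number of resamples, and the choice of reporting the speed at the mid-time are illustrative.

    ```python
    import numpy as np

    def bootstrap_kinematics(t, h, n_boot=2000, seed=0):
        """Bootstrap standard errors of CME speed and acceleration obtained from
        a quadratic fit h(t) = c0 + c1*t + c2*t^2 to height-time points.
        Resampling the (t, h) pairs with replacement mimics the scatter that
        manual measurements introduce."""
        rng = np.random.default_rng(seed)
        t, h = np.asarray(t, float), np.asarray(h, float)
        speeds, accels = [], []
        for _ in range(n_boot):
            idx = rng.integers(0, t.size, t.size)          # resample with replacement
            c2, c1, _ = np.polyfit(t[idx], h[idx], 2)      # h = c2*t^2 + c1*t + c0
            speeds.append(c1 + 2.0 * c2 * t.mean())        # speed evaluated at mid-time
            accels.append(2.0 * c2)
        return np.std(speeds), np.std(accels)

    # Hypothetical LASCO-style height-time measurements (solar radii vs hours).
    t = np.linspace(0.0, 3.0, 10)
    h = 3.0 + 1.5 * t + 0.1 * t**2 + 0.05 * np.random.default_rng(1).standard_normal(10)
    print(bootstrap_kinematics(t, h))
    ```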

  8. Probabilistic seismic hazard assessment of Italy using kernel estimation methods

    NASA Astrophysics Data System (ADS)

    Zuccolo, Elisa; Corigliano, Mirko; Lai, Carlo G.

    2013-07-01

    A representation of seismic hazard is proposed for Italy based on the zone-free approach developed by Woo (BSSA 86(2):353-362, 1996a), which is based on a kernel estimation method governed by concepts of fractal geometry and self-organized seismicity, not requiring the definition of seismogenic zoning. The purpose is to assess the influence of seismogenic zoning on the results obtained for the probabilistic seismic hazard analysis (PSHA) of Italy using the standard Cornell's method. The hazard has been estimated for outcropping rock site conditions in terms of maps and uniform hazard spectra for a selected site, with 10 % probability of exceedance in 50 years. Both spectral acceleration and spectral displacement have been considered as ground motion parameters. Differences in the results of PSHA between the two methods are compared and discussed. The analysis shows that, in areas such as Italy, characterized by a reliable earthquake catalog and in which faults are generally not easily identifiable, a zone-free approach can be considered a valuable tool to address epistemic uncertainty within a logic tree framework.
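
    A zone-free hazard model starts from a smoothed activity-rate density; the sketch below illustrates that step with Gaussian kernels whose bandwidth grows with magnitude. Woo's formulation uses fractal (power-law) kernels, so the kernel shape and bandwidth law here are simplifying assumptions, and the mini-catalogue is invented.

    ```python
    import numpy as np

    def kernel_activity_rate(eq_xy, eq_mag, grid_xy, catalog_years, h0=20.0, d=1.5):
        """Zone-free activity-rate density (events/yr/km^2) at grid points,
        obtained by smoothing catalogued epicentres with Gaussian kernels whose
        bandwidth grows with magnitude, h(M) = h0 * exp(d*(M - 5))."""
        rates = np.zeros(len(grid_xy))
        for (x, y), m in zip(eq_xy, eq_mag):
            h = h0 * np.exp(d * (m - 5.0))
            r2 = (grid_xy[:, 0] - x) ** 2 + (grid_xy[:, 1] - y) ** 2
            rates += np.exp(-0.5 * r2 / h**2) / (2.0 * np.pi * h**2)
        return rates / catalog_years

    # Hypothetical mini-catalogue (km coordinates) and a coarse evaluation grid.
    eq_xy = np.array([[10.0, 12.0], [14.0, 9.0], [60.0, 55.0]])
    eq_mag = np.array([5.2, 4.8, 6.0])
    gx, gy = np.meshgrid(np.linspace(0, 80, 5), np.linspace(0, 80, 5))
    grid = np.column_stack([gx.ravel(), gy.ravel()])
    print(kernel_activity_rate(eq_xy, eq_mag, grid, catalog_years=100.0).round(6))
    ```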

  9. Bayesian Threshold Estimation

    ERIC Educational Resources Information Center

    Gustafson, S. C.; Costello, C. S.; Like, E. C.; Pierce, S. J.; Shenoy, K. N.

    2009-01-01

    Bayesian estimation of a threshold time (hereafter simply threshold) for the receipt of impulse signals is accomplished given the following: 1) data, consisting of the number of impulses received in a time interval from zero to one and the time of the largest time impulse; 2) a model, consisting of a uniform probability density of impulse time…

  11. A Simplified Method for the Estimation of Nickel in Urine

    PubMed Central

    Morgan, J. Gwynne

    1960-01-01

    A simplification of Sandell's method for estimating nickel in urine is described. Nickel is a normal constituent of most articles of food and between 0·01 and 0·03 p.p.m. are found in normal urine. There is a slight increase of urinary nickel in workers engaged in the carbonyl process. After accidental inhalation of nickel carbonyl, urinary nickel increased in a few hours and reached a maximum about the fourth day, returning to normal in 10 to 14 days. Although an increase of urinary nickel gives an indication of nickel carbonyl absorption, clinical signs and symptoms remain the best guide of the severity of poisoning. PMID:14424117

  12. Comparative study on parameter estimation methods for attenuation relationships

    NASA Astrophysics Data System (ADS)

    Sedaghati, Farhad; Pezeshk, Shahram

    2016-12-01

    In this paper, the performance, advantages, and disadvantages of various regression methods for deriving the coefficients of an attenuation relationship have been investigated. A database containing 350 records from 85 earthquakes with moment magnitudes of 5-7.6 and Joyner-Boore distances up to 100 km in Europe and the Middle East has been considered. The functional form proposed by Ambraseys et al (2005 Bull. Earthq. Eng. 3 1-53) is selected to compare the chosen regression methods. Statistical tests reveal that although the estimated parameters differ for each method, the overall results are very similar. In essence, the weighted least squares method and one-stage maximum likelihood perform better than the other regression methods considered. Moreover, using a blind weighting matrix or a weighting matrix based on the number of records does not improve the performance of the results. Further, to obtain the true standard deviation, pure error analysis is necessary. Assuming that correlation exists between different records of a specific earthquake, one-stage maximum likelihood with the true variance obtained from pure error analysis is the preferred method for computing the coefficients of a ground motion prediction equation.
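
    To make the regression step concrete, here is a weighted least-squares fit of a deliberately simplified ground-motion functional form; the model, weights, and synthetic records are illustrative and are not the Ambraseys et al. (2005) form or the study's database.

    ```python
    import numpy as np

    # Weighted least-squares fit of a simplified ground-motion model
    #   log10(Y) = c1 + c2*M + c3*log10(sqrt(Rjb^2 + h^2)),  with h fixed,
    # which is a reduced stand-in for a full attenuation-relationship form.
    rng = np.random.default_rng(3)
    n = 200
    M = rng.uniform(5.0, 7.6, n)                      # moment magnitude
    R = rng.uniform(1.0, 100.0, n)                    # Joyner-Boore distance (km)
    h = 8.0
    logY_true = -1.5 + 0.35 * M - 1.1 * np.log10(np.sqrt(R**2 + h**2))
    logY = logY_true + 0.3 * rng.standard_normal(n)   # scatter lumped into one term

    X = np.column_stack([np.ones(n), M, np.log10(np.sqrt(R**2 + h**2))])
    w = 1.0 / (0.25 + 0.05 * M)                       # hypothetical per-record weights
    W = np.sqrt(w)[:, None]
    coeffs, *_ = np.linalg.lstsq(W * X, np.sqrt(w) * logY, rcond=None)
    print(coeffs)                                     # recovered c1, c2, c3
    ```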

  13. Residual fatigue life estimation using a nonlinear ultrasound modulation method

    NASA Astrophysics Data System (ADS)

    Piero Malfense Fierro, Gian; Meo, Michele

    2015-02-01

    Predicting the residual fatigue life of a material is not a simple task and requires the development and association of many variables that, as standalone tasks, can be difficult to determine. This work develops a modulated nonlinear elastic wave spectroscopy method for evaluating the residual fatigue life of a metallic component. An aluminium specimen (AA6082-T6) was tested at predetermined fatigue stages throughout its fatigue life using a dual-frequency ultrasound method. A modulated nonlinear parameter was derived, which describes the relationship between the generation of modulated (sideband) responses of a dual-frequency signal and the linear response. The sideband generation from the dual-frequency (two-signal output) system was shown to increase as the residual fatigue life decreased, so as a standalone measurement it can be used to show an increase in a material's damage. A baseline-free method was developed by linking a theoretical model, obtained by combining the Paris law and the Nazarov-Sutin crack equation, to experimental nonlinear modulation measurements. The results showed good correlation between the derived theoretical model and the modulated nonlinear parameter, allowing baseline-free estimation of a material's residual fatigue life. Advantages and disadvantages of these methods are discussed, and further methods that would increase the accuracy of residual fatigue life detection are presented.
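
    One way to compute such a modulated nonlinear parameter from a recorded dual-frequency signal is sketched below: take the amplitude of the first-order sidebands around the high-frequency carrier relative to the carrier itself. The sampling rate, probe frequencies, and damage level are synthetic, and the exact parameter definition in the paper may differ.

    ```python
    import numpy as np

    def modulated_nonlinear_parameter(signal, fs, f_low, f_high):
        """Ratio of the first-order sideband amplitudes (f_high ± f_low) to the
        carrier amplitude at f_high: a simple stand-in for the modulated
        nonlinear parameter used to track accumulating fatigue damage."""
        spec = np.abs(np.fft.rfft(signal * np.hanning(signal.size)))
        freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
        amp = lambda f: spec[np.argmin(np.abs(freqs - f))]
        return (amp(f_high - f_low) + amp(f_high + f_low)) / amp(f_high)

    # Synthetic dual-frequency response with weak amplitude modulation
    # (the kind of sideband generation a closing fatigue crack produces).
    fs, T = 1_000_000.0, 0.01                       # 1 MHz sampling, 10 ms record
    t = np.arange(0.0, T, 1.0 / fs)
    f_low, f_high = 5_000.0, 200_000.0
    damage = 0.05                                   # grows as residual life decreases
    x = np.sin(2*np.pi*f_low*t) + (1.0 + damage*np.sin(2*np.pi*f_low*t)) * np.sin(2*np.pi*f_high*t)
    print(modulated_nonlinear_parameter(x, fs, f_low, f_high))
    ```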

  14. Analytical method to estimate resin cement diffusion into dentin

    NASA Astrophysics Data System (ADS)

    de Oliveira Ferraz, Larissa Cristina; Ubaldini, Adriana Lemos Mori; de Oliveira, Bruna Medeiros Bertol; Neto, Antonio Medina; Sato, Fracielle; Baesso, Mauro Luciano; Pascotto, Renata Corrêa

    2016-05-01

    This study analyzed the diffusion of two resin luting agents (resin cements) into dentin, with the aim of presenting an analytical method for estimating the thickness of the diffusion zone. Class V cavities were prepared in the buccal and lingual surfaces of molars (n=9). Indirect composite inlays were luted into the cavities with either a self-adhesive or a self-etch resin cement. The teeth were sectioned bucco-lingually and the cement-dentin interface was analyzed by micro-Raman spectroscopy (MRS) and scanning electron microscopy. Across the interface, the peak intensities of the Raman bands collected from the functional groups of the resin monomer (C-O-C, 1113 cm-1) in the cements and from the mineral content (P-O, 961 cm-1) in dentin evolved as sigmoid-shaped functions. A Boltzmann function (BF) was then fitted to the 1113 cm-1 peak intensities to estimate the resin cement diffusion into dentin. The BF identified a resin cement-dentin diffusion zone of 1.8±0.4 μm for the self-adhesive cement and 2.5±0.3 μm for the self-etch cement. This analysis allowed the authors to estimate the diffusion of the resin cements into the dentin. Fitting the MRS data to the BF contributed to and is relevant for future studies of the adhesive interface.
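
    The fitting step can be sketched as follows: a Boltzmann sigmoid is fitted to a simulated line scan of the 1113 cm-1 peak intensity, and the width of the transition is reported as the diffusion-zone thickness. The positions, noise level, and the 4*dx width convention are assumptions for illustration; the abstract does not state the paper's exact width criterion.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def boltzmann(x, top, bottom, x0, dx):
        # Sigmoidal Boltzmann function fitted to the Raman peak-intensity profile.
        return bottom + (top - bottom) / (1.0 + np.exp((x - x0) / dx))

    # Hypothetical micro-Raman line scan across the cement-dentin interface:
    # the C-O-C (1113 cm^-1) intensity decays from cement into dentin.
    position = np.linspace(-5.0, 5.0, 41)                    # micrometres
    intensity = boltzmann(position, 1.0, 0.05, 0.0, 0.6)
    intensity += 0.02 * np.random.default_rng(4).standard_normal(position.size)

    params, _ = curve_fit(boltzmann, position, intensity, p0=[1.0, 0.0, 0.0, 1.0])
    top, bottom, x0, dx = params
    # One possible convention: report the 12%-88% transition width (~4*dx)
    # as the thickness of the diffusion zone.
    print(f"diffusion zone ~ {4.0 * abs(dx):.2f} um (centre at {x0:.2f} um)")
    ```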

  15. New method for volumetric estimation of functioning thyroid tissue

    NASA Astrophysics Data System (ADS)

    Feiglin, David H.; Krol, Andrzej; Gagne, George M.; Hellwig, Bradford J.; Thomas, Frank D.

    2000-04-01

    99m-Tc-pertechnetate SPECT studies of thyroid phantoms with volumes in the 13-198 cc range, and of a number of patients, were performed. Modified reconstruction and analytic methods were applied to the data analysis, involving: (1) transformation of the initial ECT data by a suitable mathematical function to compress the dynamic range of the data; (2) tomographic reconstruction resulting in transaxial (TV) images; (3) decompression of the TV images via a reverse function; (4) Gaussian smoothing of the decompressed TV (dTV) images; (5) subtraction of the Compton dTV images from the photopeak dTV images; (6) parallel operations on the dTV images: Sobel edge detection and impulse filtering; (7) combining the filtered images via the AND operator; (8) edge tracing in the combined image by a human operator; and (9) volume estimation by a computer. Based on the phantom studies, it was established that the proposed technique yielded volumes with a relative error not exceeding 10%. For the patient studies, the obtained volumes were also compared with palpation estimates. It was found that the inter-observer variability of the thyroid volume estimate depends on the physician's experience with manual palpation and may result in relative errors exceeding 50%.

  16. Sub-pixel Area Calculation Methods for Estimating Irrigated Areas.

    PubMed

    Thenkabail, Prasad S; Biradar, Chandrashekar M; Noojipady, Praveen; Cai, Xueliang; Dheeravath, Venkateswarlu; Li, Yuanjie; Velpuri, Manohar; Gumma, Muralikrishna; Pandey, Suraj

    2007-10-31

    The goal of this paper was to develop and demonstrate practical methods for computing sub-pixel areas (SPAs) from coarse-resolution satellite sensor data. The methods were tested and verified using: (a) a global irrigated area map (GIAM) at 10-km resolution based primarily on AVHRR data, and (b) an irrigated area map for India at 500-m resolution based primarily on MODIS data. The sub-pixel irrigated areas (SPIAs) from coarse-resolution satellite sensor data were estimated by multiplying the full pixel irrigated areas (FPIAs) with irrigated area fractions (IAFs). Three methods were presented for IAF computation: (a) Google Earth estimate (IAF-GEE); (b) high-resolution imagery (IAF-HRI); and (c) sub-pixel de-composition technique (IAF-SPDT). The IAF-GEE involved the use of "zoom-in-views" of sub-meter to 4-meter very high resolution imagery (VHRI) from Google Earth and helped determine the total area available for irrigation (TAAI), or net irrigated area, which does not consider intensity or seasonality of irrigation. The IAF-HRI is a well-known method that uses finer-resolution data to determine SPAs of the coarser-resolution imagery. The IAF-SPDT is a unique and innovative method wherein SPAs are determined based on the precise location of every pixel of a class in a 2-dimensional brightness-greenness-wetness (BGW) feature-space plot of red band versus near-infrared band spectral reflectivity. The SPIAs computed using IAF-SPDT for the GIAM were within 2% of the SPIAs computed using the well-known IAF-HRI, and the fractions from the two methods were significantly correlated. The IAF-HRI and IAF-SPDT help determine annualized or gross irrigated areas (AIA), which do consider intensity or seasonality (e.g., the sum of areas from season 1, season 2, and continuous year-round crops). The national census-based irrigated areas for the top 40 irrigated nations (which cover about 90% of global irrigation) were significantly better related (and had lesser uncertainties and errors) when compared to SPIAs than
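
    The core SPIA arithmetic is simply the full-pixel area multiplied by the irrigated-area fraction; a toy example with made-up pixel sizes and fractions:

    ```python
    import numpy as np

    # Sub-pixel irrigated area (SPIA) = full-pixel irrigated area (FPIA) x IAF.
    # The pixel size and fractions below are illustrative, not values from the paper.
    pixel_area_km2 = 10.0 * 10.0                 # e.g. a 10-km GIAM pixel
    iaf = np.array([0.85, 0.40, 0.10, 0.0])      # fractions from GEE / HRI / SPDT-type methods
    full_pixel_irrigated = np.array([1, 1, 1, 0]) * pixel_area_km2   # FPIA per pixel
    sub_pixel_irrigated = full_pixel_irrigated * iaf                 # SPIA per pixel
    print(sub_pixel_irrigated.sum(), "km^2 of sub-pixel irrigated area")
    ```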

  17. Dental age estimation in Brazilian HIV children using Willems' method.

    PubMed

    de Souza, Rafael Boschetti; da Silva Assunção, Luciana Reichert; Franco, Ademir; Zaroni, Fábio Marzullo; Holderbaum, Rejane Maria; Fernandes, Ângela

    2015-12-01

    The notification of Human Immunodeficiency Virus (HIV) infection in Brazilian children was first reported in 1984. Since then, more than 21 thousand children have become infected. Approximately 99.6% of infected children aged less than 13 years acquired the virus vertically. In this context, most of these children are abandoned after birth or lose their relatives soon thereafter, growing up with uncertain identification. The present study aims to estimate the dental age of Brazilian HIV patients compared with healthy patients matched by age and gender. The sample consisted of 160 panoramic radiographs of male (n: 80) and female (n: 80) patients aged between 4 and 15 years (mean age: 8.88 years), divided into HIV (n: 80) and control (n: 80) groups. The sample was analyzed by three trained examiners using Willems' method (2001). The Intraclass Correlation Coefficient (ICC) was applied to test intra- and inter-examiner agreement, and Student's paired t-test was used to determine the age association between the HIV and control groups. Intra-examiner (ICC: 0.993 to 0.997) and inter-examiner (ICC: 0.991 to 0.995) agreement tests indicated high reproducibility of the method between the examiners (P<0.01). Willems' method revealed a slight statistical overestimation in the HIV (2.86 months; P=0.019) and control (1.90 months; P=0.039) groups. However, analysis stratified by gender indicated that the overestimation was concentrated only in male HIV (3.85 months; P=0.001) and control (2.86 months; P=0.022) patients. The statistically significant differences are not clinically relevant, since only a few months of discrepancy are detected when applying Willems' method to a Brazilian HIV sample, making this method highly recommended for dental age estimation of both HIV-positive and healthy children of unknown age.

  18. Estimation of Convective Momentum Fluxes Using Satellite-Based Methods

    NASA Astrophysics Data System (ADS)

    Jewett, C.; Mecikalski, J. R.

    2009-12-01

    Research and case studies have shown that convection plays a significant role in large-scale environmental circulations. Convective momentum fluxes (CMFs) have been studied for many years using in-situ and aircraft measurements, along with numerical simulations. Despite these successes, however, little work has been conducted on methods that use satellite remote sensing as a tool to diagnose these fluxes. Satellite data can provide continuous analysis across regions devoid of ground-based remote sensing. The project's overall goal is therefore to develop a synergistic approach for retrieving CMFs using a collection of instruments including GOES, TRMM, CloudSat, MODIS, and QuikScat; this particular study focuses on the work using TRMM and QuikScat and on the methodology for using CloudSat. Sound research has already been conducted on computing CMFs using the GOES instruments (Jewett and Mecikalski 2009, submitted to J. Geophys. Res.). Using satellite-derived winds, namely mesoscale atmospheric motion vectors (MAMVs) as described by Bedka and Mecikalski (2005), one can obtain the actual winds occurring within a convective environment as perturbed by convection. Surface outflow boundaries and upper-tropospheric anvil outflow produce "perturbation" winds on smaller, convective scales. Combined with estimated vertical motion retrieved using geostationary infrared imagery, CMFs were estimated using MAMVs, with an average profile calculated across a convective regime or a domain covered by active storms. This study involves estimating draft tilt from TRMM PR radar reflectivity and sub-cloud-base fluxes using QuikScat data. The "slope" of falling hydrometeors (relative to Earth) in the data is related to the u', v', and w' winds within convection. The main up- and downdrafts within convection are described by precipitation patterns (Mecikalski 2003). Vertical motion estimates are made using model results for deep convection

  19. Probability density function (Pdf) of daily rainfall depths by means of superstatistics of hydro-climatic fluctuations for African test cities

    NASA Astrophysics Data System (ADS)

    Topa, M. E.; De Paola, F.; Giugni, M.; Kombe, W.; Touré, H.

    2012-04-01

    The dynamics of hydro-climatic processes fluctuate over a wide range of temporal scales. Such fluctuations are often unpredictable for ecosystems, and adaptation to them represents a great challenge for the survival and stability of species. An unsolved issue is how much these fluctuations of climatic variables at different temporal scales can influence the frequency and intensity of extreme events, and how much these events can modify the life of ecosystems. It is now widely accepted that an increase in the frequency and intensity of extreme events will be one of the strongest characteristics of global climate change, with major social and biotic implications (Porporato et al 2006). Recent field experiments (Gutshick and BassiriRad, 2003) and numerical analyses (Porporato et al 2004) have shown that extreme events can have non-negligible consequences for organisms in water-limited ecosystems. Adaptation measures and the responses of species and ecosystems to hydro-climatic variations are therefore strongly interconnected with the probabilistic structure of these fluctuations. Generally, the nonlinear intermittent dynamics of a state variable z (a rainfall depth or the interarrival time between two storms) at short time scales (for example, daily) are described by a probability density function (pdf) p(z|υ), where υ is the parameter of the distribution. If the parameter υ varies because the external forcing fluctuates at a longer temporal scale, z reaches a new "local" equilibrium. When the temporal scale of the variation of υ is larger than that of z, the probability distribution of z can be obtained as a superposition of the temporary equilibria (the "superstatistics" approach), i.e.: p(z) = ∫ p(z|υ)·φ(υ)dυ (1) where p(z|υ) is the conditional probability of z given υ, while φ(υ) is the pdf of υ (Beck, 2001; Benjamin and Cornell, 1970). The present work, carried out within FP7-ENV-2010 CLUVA (CLimate Change
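
    Equation (1) can be evaluated numerically for any choice of conditional pdf and parameter distribution; the sketch below uses an exponential conditional pdf for daily rainfall depth with a gamma-distributed rate, which is a common superstatistics example and not necessarily the distributions used in this study.

    ```python
    import numpy as np
    from scipy import stats, integrate

    def superstat_pdf(z, conditional_pdf, param_pdf, lower, upper):
        """Marginal pdf p(z) = ∫ p(z|v) φ(v) dv, obtained by integrating the
        conditional pdf over the slowly fluctuating parameter v (eq. 1)."""
        return np.array([
            integrate.quad(lambda v: conditional_pdf(zi, v) * param_pdf(v), lower, upper)[0]
            for zi in np.atleast_1d(z)
        ])

    # Illustrative choice: exponential daily rainfall depths whose rate v
    # fluctuates with a gamma distribution at longer (e.g. seasonal) scales.
    cond = lambda z, v: v * np.exp(-v * z)                 # p(z|v), z >= 0
    phi = stats.gamma(a=3.0, scale=0.5).pdf                # φ(v)
    z = np.array([0.5, 2.0, 10.0, 30.0])
    print(superstat_pdf(z, cond, phi, 0.0, 50.0))
    # The mixture has a heavier tail than any single exponential, which is why
    # superstatistics is used to describe the frequency of extreme depths.
    ```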

  20. Matrix Methods for Estimating the Coherence Functions from Estimates of the Cross-Spectral Density Matrix

    DOE PAGES

    Smallwood, D. O.

    1996-01-01

    It is shown that the usual method for estimating the coherence functions (ordinary, partial, and multiple) for a general multiple-input/multiple-output problem can be expressed as a modified form of Cholesky decomposition of the cross-spectral density matrix of the input and output records. The results can be equivalently obtained using singular value decomposition (SVD) of the cross-spectral density matrix. Using SVD suggests a new form of fractional coherence. The formulation as an SVD problem also suggests a way to order the inputs when a natural physical order of the inputs is absent.
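
    A minimal numerical illustration of the Cholesky route to the multiple coherence at a single frequency is given below; the 2-input/1-output cross-spectral density matrix is invented for the example.

    ```python
    import numpy as np

    def multiple_coherence(G):
        """Multiple coherence of the last channel (the output) with respect to
        all preceding channels (the inputs), computed from the cross-spectral
        density matrix G at a single frequency via Cholesky factorisation.
        The squared magnitude of the last diagonal term of the factor is the
        residual output spectrum after the inputs are removed, so the
        coherence is 1 - residual/G_yy."""
        L = np.linalg.cholesky(G)
        residual = np.abs(L[-1, -1]) ** 2
        return 1.0 - residual / np.real(G[-1, -1])

    # Hypothetical 2-input / 1-output CSD matrix at one frequency (Hermitian,
    # positive definite); channel ordering is [input1, input2, output].
    G = np.array([[4.0,        1.0 + 0.5j, 1.2 - 0.3j],
                  [1.0 - 0.5j, 3.0,        0.8 + 0.2j],
                  [1.2 + 0.3j, 0.8 - 0.2j, 2.5       ]])
    print(f"multiple coherence ~ {multiple_coherence(G):.3f}")
    ```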