Science.gov

Sample records for probability-density estimation method

  1. Accurate photometric redshift probability density estimation - method comparison and application

    NASA Astrophysics Data System (ADS)

    Rau, Markus Michael; Seitz, Stella; Brimioulle, Fabrice; Frank, Eibe; Friedrich, Oliver; Gruen, Daniel; Hoyle, Ben

    2015-10-01

    We introduce an ordinal classification algorithm for photometric redshift estimation, which significantly improves the reconstruction of photometric redshift probability density functions (PDFs) for individual galaxies and galaxy samples. As a use case we apply our method to CFHTLS galaxies. The ordinal classification algorithm treats distinct redshift bins as ordered values, which improves the quality of photometric redshift PDFs, compared with non-ordinal classification architectures. We also propose a new single value point estimate of the galaxy redshift, which can be used to estimate the full redshift PDF of a galaxy sample. This method is competitive in terms of accuracy with contemporary algorithms, which stack the full redshift PDFs of all galaxies in the sample, but requires orders of magnitude less storage space. The methods described in this paper greatly improve the log-likelihood of individual object redshift PDFs, when compared with a popular neural network code (ANNZ). In our use case, this improvement reaches 50 per cent for high-redshift objects (z ≥ 0.75). We show that using these more accurate photometric redshift PDFs will lead to a reduction in the systematic biases by up to a factor of 4, when compared with less accurate PDFs obtained from commonly used methods. The cosmological analyses we examine and find improvement upon are the following: gravitational lensing cluster mass estimates, modelling of angular correlation functions and modelling of cosmic shear correlation functions.
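
    A minimal sketch of the cumulative-binary construction commonly used for ordinal classification over ordered redshift bins follows; the synthetic data, bin edges, and random-forest classifier are illustrative assumptions and do not reproduce the authors' CFHTLS pipeline or their point-estimate procedure.

```python
# Sketch: ordinal classification over ordered redshift bins via cumulative
# binary classifiers (one model per threshold), with per-bin probabilities
# recovered by differencing. Synthetic data, arbitrary bin edges, and a
# generic classifier stand in for the paper's actual pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 5000
z_true = rng.uniform(0.0, 1.5, n)                              # hypothetical redshifts
colors = np.column_stack([z_true + 0.1 * rng.standard_normal(n)
                          for _ in range(4)])                  # toy "photometry"

edges = np.linspace(0.0, 1.5, 16)                              # 15 ordered redshift bins
labels = np.digitize(z_true, edges[1:-1])                      # bin index per galaxy

# One binary classifier per internal threshold: P(z > edges[k+1] | colors)
cum_models = []
for k in range(len(edges) - 2):
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(colors, (labels > k).astype(int))
    cum_models.append(clf)

def redshift_pdf(x):
    """Per-bin probabilities for feature rows x, from differenced cumulatives."""
    x = np.atleast_2d(x)
    p_gt = np.array([m.predict_proba(x)[:, 1] for m in cum_models])   # (K-1, n)
    cum = np.vstack([np.ones((1, x.shape[0])), p_gt, np.zeros((1, x.shape[0]))])
    pdf = np.clip(cum[:-1] - cum[1:], 0.0, None)   # P(bin k) = P(z>t_{k-1}) - P(z>t_k)
    return pdf / pdf.sum(axis=0)

print(redshift_pdf(colors[:1]).ravel().round(3))
```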

  2. Structural Reliability Using Probability Density Estimation Methods Within NESSUS

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C. (Technical Monitor); Godines, Cody Ric

    2003-01-01

    A reliability analysis studies a mathematical model of a physical system, taking into account uncertainties in the design variables; common results are estimates of a response density, which also imply estimates of its parameters. Some common density parameters include the mean value, the standard deviation, and specific percentile(s) of the response, which are measures of central tendency, variation, and probability regions, respectively. Reliability analyses are important since the results can lead to different designs by calculating the probability of observing safe responses in each of the proposed designs. All of this is done at the expense of added computational time compared with a single deterministic analysis, which yields one value of the response out of the many that make up the response density. Sampling methods, such as Monte Carlo (MC) and Latin hypercube sampling (LHS), can be used to perform reliability analyses and can compute nonlinear response density parameters even if the response depends on many random variables. Hence, both methods are very robust; however, they are computationally expensive to use in the estimation of the response density parameters. They are two of the 13 stochastic methods contained within the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) program. NESSUS is a probabilistic finite element analysis (FEA) program that was developed through funding from NASA Glenn Research Center (GRC). It has the additional capability of being linked to other analysis programs; therefore, probabilistic fluid dynamics, fracture mechanics, and heat transfer analyses are only a few of the possibilities with this software. The LHS method is the newest addition to the stochastic methods within NESSUS. Part of this work was to enhance NESSUS with the LHS method. The new LHS module is complete, has been successfully integrated with NESSUS, and has been used to study four different test cases that have been
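
    The sampling comparison can be sketched outside of NESSUS: the snippet below estimates the mean, standard deviation, and 99th percentile of a toy response with plain Monte Carlo and with Latin hypercube sampling; the response function and input distributions are arbitrary stand-ins.

```python
# Sketch: estimating response-density parameters (mean, std, 99th percentile)
# with Monte Carlo vs. Latin hypercube sampling. The response function and
# input distributions are arbitrary stand-ins, not a NESSUS model.
import numpy as np
from scipy import stats

def response(x1, x2):
    # toy response function of two uncertain design variables
    return x1**2 + 3.0 * x2

def monte_carlo(n, rng):
    x1 = stats.norm(10.0, 2.0).rvs(n, random_state=rng)
    x2 = stats.lognorm(s=0.25, scale=5.0).rvs(n, random_state=rng)
    return response(x1, x2)

def latin_hypercube(n, rng):
    # one stratified uniform sample per equal-probability slice, per variable
    u = (np.arange(n) + rng.uniform(size=(2, n))) / n
    u = np.array([rng.permutation(row) for row in u])
    x1 = stats.norm(10.0, 2.0).ppf(u[0])
    x2 = stats.lognorm(s=0.25, scale=5.0).ppf(u[1])
    return response(x1, x2)

rng = np.random.default_rng(1)
for name, sampler in [("MC", monte_carlo), ("LHS", latin_hypercube)]:
    r = sampler(2000, rng)
    print(f"{name}: mean={r.mean():.2f}  std={r.std(ddof=1):.2f}  "
          f"p99={np.percentile(r, 99):.2f}")
```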

  3. On the method of logarithmic cumulants for parametric probability density function estimation.

    PubMed

    Krylov, Vladimir A; Moser, Gabriele; Serpico, Sebastiano B; Zerubia, Josiane

    2013-10-01

    Parameter estimation of probability density functions is one of the major steps in the area of statistical image and signal processing. In this paper we explore several properties and limitations of the recently proposed method of logarithmic cumulants (MoLC) parameter estimation approach which is an alternative to the classical maximum likelihood (ML) and method of moments (MoM) approaches. We derive the general sufficient condition for a strong consistency of the MoLC estimates which represents an important asymptotic property of any statistical estimator. This result enables the demonstration of the strong consistency of MoLC estimates for a selection of widely used distribution families originating from (but not restricted to) synthetic aperture radar image processing. We then derive the analytical conditions of applicability of MoLC to samples for the distribution families in our selection. Finally, we conduct various synthetic and real data experiments to assess the comparative properties, applicability and small sample performance of MoLC notably for the generalized gamma and K families of distributions. Supervised image classification experiments are considered for medical ultrasound and remote-sensing SAR imagery. The obtained results suggest that MoLC is a feasible and computationally fast yet not universally applicable alternative to MoM. MoLC becomes especially useful when the direct ML approach turns out to be unfeasible.
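
    To make the MoLC idea concrete, here is a hedged sketch for the two-parameter gamma family, for which the first two log-cumulants are ψ(k) + ln θ and ψ₁(k); the data are synthetic and the code is a generic illustration rather than the authors' implementation.

```python
# Sketch: method of logarithmic cumulants (MoLC) for the gamma family.
# For X ~ Gamma(shape k, scale theta): E[ln X] = psi(k) + ln(theta),
# Var[ln X] = psi_1(k). Solve the second equation for k, then back out theta.
# Synthetic data; a generic illustration, not the paper's implementation.
import numpy as np
from scipy.special import digamma, polygamma
from scipy.optimize import brentq

rng = np.random.default_rng(0)
x = rng.gamma(shape=3.0, scale=2.0, size=20000)   # "observed" amplitudes

k1 = np.mean(np.log(x))                           # first log-cumulant
k2 = np.var(np.log(x))                            # second log-cumulant

# psi_1 is strictly decreasing on (0, inf), so the root is unique.
k_hat = brentq(lambda k: polygamma(1, k) - k2, 1e-3, 1e3)
theta_hat = np.exp(k1 - digamma(k_hat))

print(f"MoLC estimates: shape={k_hat:.3f}, scale={theta_hat:.3f}")
```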

  4. On the method of logarithmic cumulants for parametric probability density function estimation.

    PubMed

    Krylov, Vladimir A; Moser, Gabriele; Serpico, Sebastiano B; Zerubia, Josiane

    2013-10-01

    Parameter estimation of probability density functions is one of the major steps in the area of statistical image and signal processing. In this paper we explore several properties and limitations of the recently proposed method of logarithmic cumulants (MoLC) parameter estimation approach which is an alternative to the classical maximum likelihood (ML) and method of moments (MoM) approaches. We derive the general sufficient condition for a strong consistency of the MoLC estimates which represents an important asymptotic property of any statistical estimator. This result enables the demonstration of the strong consistency of MoLC estimates for a selection of widely used distribution families originating from (but not restricted to) synthetic aperture radar image processing. We then derive the analytical conditions of applicability of MoLC to samples for the distribution families in our selection. Finally, we conduct various synthetic and real data experiments to assess the comparative properties, applicability and small sample performance of MoLC notably for the generalized gamma and K families of distributions. Supervised image classification experiments are considered for medical ultrasound and remote-sensing SAR imagery. The obtained results suggest that MoLC is a feasible and computationally fast yet not universally applicable alternative to MoM. MoLC becomes especially useful when the direct ML approach turns out to be unfeasible. PMID:23799694

  5. Nonparametric maximum likelihood estimation of probability densities by penalty function methods

    NASA Technical Reports Server (NTRS)

    Demontricher, G. F.; Tapia, R. A.; Thompson, J. R.

    1974-01-01

    Unless it is known a priori exactly to which finite-dimensional manifold the probability density function giving rise to a set of samples belongs, the parametric maximum likelihood estimation procedure leads to poor estimates and is unstable, while the nonparametric maximum likelihood procedure is undefined. A very general theory of maximum penalized likelihood estimation which should avoid many of these difficulties is presented. It is demonstrated that each reproducing kernel Hilbert space leads, in a very natural way, to a maximum penalized likelihood estimator and that a well-known class of reproducing kernel Hilbert spaces gives polynomial splines as the nonparametric maximum penalized likelihood estimates.

  6. Estimation of the four-wave mixing noise probability-density function by the multicanonical Monte Carlo method.

    PubMed

    Neokosmidis, Ioannis; Kamalakis, Thomas; Chipouras, Aristides; Sphicopoulos, Thomas

    2005-01-01

    The performance of high-powered wavelength-division multiplexed (WDM) optical networks can be severely degraded by four-wave-mixing- (FWM-) induced distortion. The multicanonical Monte Carlo method (MCMC) is used to calculate the probability-density function (PDF) of the decision variable of a receiver, limited by FWM noise. Compared with the conventional Monte Carlo method previously used to estimate this PDF, the MCMC method is much faster and can accurately estimate smaller error probabilities. The method takes into account the correlation between the components of the FWM noise, unlike the Gaussian model, which is shown not to provide accurate results. PMID:15648621

  7. Conditional probability density function estimation with sigmoidal neural networks.

    PubMed

    Sarajedini, A; Hecht-Nielsen, R; Chau, P M

    1999-01-01

    Real-world problems can often be couched in terms of conditional probability density function estimation. In particular, pattern recognition, signal detection, and financial prediction are among the multitude of applications requiring conditional density estimation. Previous developments in this direction have used neural nets to estimate statistics of the distribution or the marginal or joint distributions of the input-output variables. We have modified the joint distribution estimating sigmoidal neural network to estimate the conditional distribution. Thus, the probability density of the output conditioned on the inputs is estimated using a neural network. We have derived and implemented the learning laws to train the network. We show that this network has computational advantages over a brute force ratio of joint and marginal distributions. We also compare its performance to a kernel conditional density estimator in a larger scale (higher dimensional) problem simulating more realistic conditions.
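
    For reference, the kernel conditional density estimator used as a baseline comparison can be written in a few lines; the Gaussian kernels, fixed bandwidths, and toy data below are illustrative assumptions.

```python
# Sketch: kernel conditional density estimate
#   p(y | x) ~ sum_i K_hx(x - x_i) K_hy(y - y_i) / sum_i K_hx(x - x_i).
# Gaussian kernels and fixed bandwidths are illustrative choices.
import numpy as np

def gauss(u, h):
    return np.exp(-0.5 * (u / h) ** 2) / (h * np.sqrt(2.0 * np.pi))

def conditional_density(x0, y_grid, x_data, y_data, hx=0.2, hy=0.2):
    wx = gauss(x0 - x_data, hx)                       # weights from the x-kernel
    num = (wx[None, :] * gauss(y_grid[:, None] - y_data[None, :], hy)).sum(axis=1)
    return num / wx.sum()

rng = np.random.default_rng(0)
x_data = rng.uniform(-2, 2, 2000)
y_data = np.sin(x_data) + 0.2 * rng.standard_normal(2000)   # toy input-output relation

y_grid = np.linspace(-2, 2, 201)
p = conditional_density(1.0, y_grid, x_data, y_data)
print("mode of p(y | x=1):", y_grid[np.argmax(p)])          # expect roughly sin(1) ~ 0.84
```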

  8. Probability Density and CFAR Threshold Estimation for Hyperspectral Imaging

    SciTech Connect

    Clark, G A

    2004-09-21

    The work reported here shows the proof of principle (using a small data set) for a suite of algorithms designed to estimate the probability density function of hyperspectral background data and compute the appropriate Constant False Alarm Rate (CFAR) matched filter decision threshold for a chemical plume detector. Future work will provide a thorough demonstration of the algorithms and their performance with a large data set. The LASI (Large Aperture Search Initiative) Project involves instrumentation and image processing for hyperspectral images of chemical plumes in the atmosphere. The work reported here involves research and development on algorithms for reducing the false alarm rate in chemical plume detection and identification algorithms operating on hyperspectral image cubes. The chemical plume detection algorithms to date have used matched filters designed using generalized maximum likelihood ratio hypothesis testing algorithms [1, 2, 5, 6, 7, 12, 10, 11, 13]. One of the key challenges in hyperspectral imaging research is the high false alarm rate that often results from the plume detector [1, 2]. The overall goal of this work is to extend the classical matched filter detector to apply Constant False Alarm Rate (CFAR) methods to reduce the false alarm rate, or probability of false alarm P_FA, of the matched filter [4, 8, 9, 12]. A detector designer is interested in minimizing the probability of false alarm while simultaneously maximizing the probability of detection P_D. This is summarized by the Receiver Operating Characteristic Curve (ROC) [10, 11], which is actually a family of curves depicting P_D vs. P_FA, parameterized by varying levels of signal to noise (or clutter) ratio (SNR or SCR). Often, it is advantageous to be able to specify a desired P_FA and develop a ROC curve (P_D vs. decision threshold r_0) for that case. That is the purpose of this work. Specifically, this work develops a set of algorithms and MATLAB
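
    The CFAR thresholding step can be illustrated on synthetic data: estimate the background PDF of the matched-filter output (here with a Gaussian KDE) and solve for the threshold r_0 whose upper tail equals the desired P_FA. This is a generic sketch, not the LASI algorithm suite.

```python
# Sketch of the CFAR threshold idea: estimate the PDF of the matched-filter
# output under background-only data, then choose the threshold r0 whose
# upper-tail probability equals the desired P_FA. Synthetic background data
# and a Gaussian KDE stand in for the hyperspectral processing chain.
import numpy as np
from scipy.stats import gaussian_kde
from scipy.optimize import brentq

rng = np.random.default_rng(0)
background = rng.gamma(shape=4.0, scale=1.0, size=50000)  # toy filter outputs

kde = gaussian_kde(background)

def tail_prob(r0):
    # P(filter output > r0) under the estimated background density
    return kde.integrate_box_1d(r0, np.inf)

p_fa_target = 1e-3
r0 = brentq(lambda r: tail_prob(r) - p_fa_target, background.min(), background.max())
print(f"CFAR threshold r0 = {r0:.2f} for P_FA = {p_fa_target:g}")
print("empirical P_FA on the background data:", np.mean(background > r0))
```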

  9. Nonparametric probability density estimation by optimization theoretic techniques

    NASA Technical Reports Server (NTRS)

    Scott, D. W.

    1976-01-01

    Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability estimate uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
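
    Choosing the kernel scaling factor from the sample alone is the part of the first estimator that is easiest to illustrate; the sketch below compares Silverman's rule of thumb with a leave-one-out likelihood search, both generic textbook recipes rather than the 1976 algorithm.

```python
# Sketch: choosing the kernel scaling factor (bandwidth) from the sample only,
# via Silverman's rule of thumb and a simple leave-one-out likelihood search.
# Generic textbook recipes, not the specific algorithm proposed in the report.
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(1, 1.0, 700)])
n = x.size

iqr = np.percentile(x, 75) - np.percentile(x, 25)
h_silverman = 0.9 * min(x.std(ddof=1), iqr / 1.34) * n ** (-0.2)

def loo_log_likelihood(h):
    d = x[:, None] - x[None, :]
    k = np.exp(-0.5 * (d / h) ** 2) / (h * np.sqrt(2 * np.pi))
    np.fill_diagonal(k, 0.0)                    # leave each point out of its own estimate
    return np.sum(np.log(k.sum(axis=1) / (n - 1)))

grid = np.linspace(0.05, 1.0, 60)
h_cv = grid[np.argmax([loo_log_likelihood(h) for h in grid])]
print(f"Silverman h = {h_silverman:.3f},  leave-one-out h = {h_cv:.3f}")
```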

  10. Probability Density Function Method for Langevin Equations with Colored Noise

    SciTech Connect

    Wang, Peng; Tartakovsky, Alexandre M.; Tartakovsky, Daniel M.

    2013-04-05

    We present a novel method to derive closed-form, computable PDF equations for Langevin systems with colored noise. The derived equations govern the dynamics of joint or marginal probability density functions (PDFs) of state variables, and rely on a so-called Large-Eddy-Diffusivity (LED) closure. We demonstrate the accuracy of the proposed PDF method for linear and nonlinear Langevin equations, describing the classical Brownian displacement and dispersion in porous media.

  11. Numerical methods for high-dimensional probability density function equations

    NASA Astrophysics Data System (ADS)

    Cho, H.; Venturi, D.; Karniadakis, G. E.

    2016-01-01

    In this paper we address the problem of computing the numerical solution to kinetic partial differential equations involving many phase variables. These types of equations arise naturally in many different areas of mathematical physics, e.g., in particle systems (Liouville and Boltzmann equations), stochastic dynamical systems (Fokker-Planck and Dostupov-Pugachev equations), random wave theory (Malakhov-Saichev equations) and coarse-grained stochastic systems (Mori-Zwanzig equations). We propose three different classes of new algorithms addressing high-dimensionality: The first one is based on separated series expansions resulting in a sequence of low-dimensional problems that can be solved recursively and in parallel by using alternating direction methods. The second class of algorithms relies on truncation of interaction in low-orders that resembles the Bogoliubov-Born-Green-Kirkwood-Yvon (BBGKY) framework of kinetic gas theory and it yields a hierarchy of coupled probability density function equations. The third class of algorithms is based on high-dimensional model representations, e.g., the ANOVA method and probabilistic collocation methods. A common feature of all these approaches is that they are reducible to the problem of computing the solution to high-dimensional equations via a sequence of low-dimensional problems. The effectiveness of the new algorithms is demonstrated in numerical examples involving nonlinear stochastic dynamical systems and partial differential equations, with up to 120 variables.

  12. Parameterizing deep convection using the assumed probability density function method

    DOE PAGES

    Storer, R. L.; Griffin, B. M.; Höft, J.; Weber, J. K.; Raut, E.; Larson, V. E.; Wang, M.; Rasch, P. J.

    2014-06-11

    Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and mid-latitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.

  13. Parameterizing deep convection using the assumed probability density function method

    SciTech Connect

    Storer, R. L.; Griffin, B. M.; Hoft, Jan; Weber, J. K.; Raut, E.; Larson, Vincent E.; Wang, Minghuai; Rasch, Philip J.

    2015-01-06

    Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and mid-latitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.

  14. Parameterizing deep convection using the assumed probability density function method

    DOE PAGES

    Storer, R. L.; Griffin, B. M.; Höft, J.; Weber, J. K.; Raut, E.; Larson, V. E.; Wang, M.; Rasch, P. J.

    2015-01-06

    Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and midlatitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.

  15. Robust location and spread measures for nonparametric probability density function estimation.

    PubMed

    López-Rubio, Ezequiel

    2009-10-01

    Robustness against outliers is a desirable property of any unsupervised learning scheme. In particular, probability density estimators benefit from incorporating this feature. A possible strategy to achieve this goal is to substitute the sample mean and the sample covariance matrix by more robust location and spread estimators. Here we use the L1-median to develop a nonparametric probability density function (PDF) estimator. We prove its most relevant properties, and we show its performance in density estimation and classification applications.
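
    The L1-median used as the robust location estimate can be computed with Weiszfeld's fixed-point iteration; the snippet below is a standard textbook version applied to synthetic outlier-contaminated data, not the paper's estimator code.

```python
# Sketch: L1-median (spatial/geometric median) via Weiszfeld's iteration,
# compared with the sample mean on outlier-contaminated 2-D data.
import numpy as np

def l1_median(X, n_iter=200, eps=1e-9):
    m = X.mean(axis=0)                         # start from the sample mean
    for _ in range(n_iter):
        d = np.linalg.norm(X - m, axis=1)
        d = np.maximum(d, eps)                 # avoid division by zero
        w = 1.0 / d
        m_new = (w[:, None] * X).sum(axis=0) / w.sum()
        if np.linalg.norm(m_new - m) < 1e-10:
            break
        m = m_new
    return m

rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, size=(300, 2))
outliers = rng.normal(15.0, 1.0, size=(30, 2))   # 10% gross outliers
X = np.vstack([clean, outliers])

print("sample mean :", X.mean(axis=0).round(2))  # pulled toward the outliers
print("L1-median   :", l1_median(X).round(2))    # stays near the origin
```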

  16. SAR amplitude probability density function estimation based on a generalized Gaussian model.

    PubMed

    Moser, Gabriele; Zerubia, Josiane; Serpico, Sebastiano B

    2006-06-01

    In the context of remotely sensed data analysis, an important problem is the development of accurate models for the statistics of the pixel intensities. Focusing on synthetic aperture radar (SAR) data, this modeling process turns out to be a crucial task, for instance, for classification or for denoising purposes. In this paper, an innovative parametric estimation methodology for SAR amplitude data is proposed that adopts a generalized Gaussian (GG) model for the complex SAR backscattered signal. A closed-form expression for the corresponding amplitude probability density function (PDF) is derived and a specific parameter estimation algorithm is developed in order to deal with the proposed model. Specifically, the recently proposed "method-of-log-cumulants" (MoLC) is applied, which stems from the adoption of the Mellin transform (instead of the usual Fourier transform) in the computation of characteristic functions and from the corresponding generalization of the concepts of moment and cumulant. For the developed GG-based amplitude model, the resulting MoLC estimates turn out to be numerically feasible and are also analytically proved to be consistent. The proposed parametric approach was validated by using several real ERS-1, XSAR, E-SAR, and NASA/JPL airborne SAR images, and the experimental results prove that the method models the amplitude PDF better than several previously proposed parametric models for backscattering phenomena. PMID:16764268

  17. Computing light statistics in heterogeneous media based on a mass weighted probability density function method.

    PubMed

    Jenny, Patrick; Mourad, Safer; Stamm, Tobias; Vöge, Markus; Simon, Klaus

    2007-08-01

    Based on the transport theory, we present a modeling approach to light scattering in turbid material. It uses an efficient and general statistical description of the material's scattering and absorption behavior. The model estimates the spatial distribution of intensity and the flow direction of radiation, both of which are required, e.g., for adaptable predictions of the appearance of colors in halftone prints. This is achieved by employing a computational particle method, which solves a model equation for the probability density function of photon positions and propagation directions. In this framework, each computational particle represents a finite probability of finding a photon in a corresponding state, including properties like wavelength. Model evaluations and verifications conclude the discussion.

  18. Unification of field theory and maximum entropy methods for learning probability densities.

    PubMed

    Kinney, Justin B

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.

  19. A Tomographic Method for the Reconstruction of Local Probability Density Functions

    NASA Technical Reports Server (NTRS)

    Sivathanu, Y. R.; Gore, J. P.

    1993-01-01

    A method of obtaining the probability density function (PDF) of local properties from path integrated measurements is described. The approach uses a discrete probability function (DPF) method to infer the PDF of the local extinction coefficient from measurements of the PDFs of the path integrated transmittance. The local PDFs obtained using the method are compared with those obtained from direct intrusive measurements in propylene/air and ethylene/air diffusion flames. The results of this comparison are good.

  20. Identification of contaminant point source in surface waters based on backward location probability density function method

    NASA Astrophysics Data System (ADS)

    Cheng, Wei Ping; Jia, Yafei

    2010-04-01

    A backward location probability density function (BL-PDF) method capable of identifying the location of point sources in surface waters is presented in this paper. The relation between the forward location probability density function (FL-PDF) and the backward location probability density, based on adjoint analysis, is validated using depth-averaged free-surface flow and mass transport models and several surface water test cases. The solutions of the backward location PDF transport equation agreed well with the forward location PDF computed using the pollutant concentration at the monitoring points. Using this relation and the distribution of the concentration detected at the monitoring points, an effective point source identification method is established. The numerical error of the backward location PDF simulation is found to be sensitive to the irregularity of the computational meshes, diffusivity, and velocity gradients. The performance of the identification method is evaluated with respect to the random error and the number of observed values. In addition to hypothetical cases, a real case was studied to identify the source location where a dye tracer was instantaneously injected into a stream. The study indicated that the proposed source identification method is effective, robust, and quite efficient in surface waters; the number of advection-diffusion equations that need to be solved is equal to the number of observations.

  1. Fast and accurate probability density estimation in large high dimensional astronomical datasets

    NASA Astrophysics Data System (ADS)

    Gupta, Pramod; Connolly, Andrew J.; Gardner, Jeffrey P.

    2015-01-01

    Astronomical surveys will generate measurements of hundreds of attributes (e.g. color, size, shape) on hundreds of millions of sources. Analyzing these large, high dimensional data sets will require efficient algorithms for data analysis. An example of this is probability density estimation, which is at the heart of many classification problems such as the separation of stars and quasars based on their colors. Popular density estimation techniques use binning or kernel density estimation. Kernel density estimation has a small memory footprint but often requires large computational resources. Binning has small computational requirements, but it is usually implemented with multi-dimensional arrays, which leads to memory requirements that scale exponentially with the number of dimensions. Hence, neither technique scales well to large data sets in high dimensions. We present an alternative approach: binning implemented with hash tables (BASH tables). This approach uses the sparseness of data in the high dimensional space to ensure that the memory requirements are small. However, hashing requires some extra computation, so a priori it is not clear whether the reduction in memory requirements will lead to increased computational requirements. Through an implementation of BASH tables in C++, we show that the additional computational requirements of hashing are negligible. Hence this approach has small memory and computational requirements. We apply our density estimation technique to photometric selection of quasars using non-parametric Bayesian classification and show that the accuracy of the classification is the same as the accuracy of earlier approaches. Since the BASH table approach is one to three orders of magnitude faster than the earlier approaches, it may be useful in various other applications of density estimation in astrostatistics.
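
    The hash-table-binning idea maps directly onto a dictionary keyed by bin-index tuples; the following Python sketch is a schematic stand-in for the paper's C++ BASH tables, with arbitrary data and bin width.

```python
# Sketch: sparse binning with a hash table (dict keyed by bin-index tuples),
# so memory scales with the number of occupied bins rather than with a full
# d-dimensional grid. Schematic stand-in for the C++ BASH-table implementation.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
d, n = 6, 200_000
data = rng.normal(size=(n, d))                     # toy "colors" in 6 dimensions

bin_width = 0.5
counts = defaultdict(int)
for row in data:
    counts[tuple(np.floor(row / bin_width).astype(int))] += 1

cell_volume = bin_width ** d

def density(point):
    key = tuple(np.floor(np.asarray(point) / bin_width).astype(int))
    return counts.get(key, 0) / (n * cell_volume)

full_grid_cells = int(np.ceil((data.max() - data.min()) / bin_width)) ** d
print(f"occupied bins: {len(counts):,}  vs. dense grid cells: {full_grid_cells:,}")
print("estimated density at the origin:", density(np.zeros(d)))
```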

  2. Probability density based gradient projection method for inverse kinematics of a robotic human body model.

    PubMed

    Lura, Derek; Wernke, Matthew; Alqasemi, Redwan; Carey, Stephanie; Dubey, Rajiv

    2012-01-01

    This paper presents the probability density based gradient projection (GP) of the null space of the Jacobian for a 25 degree of freedom bilateral robotic human body model (RHBM). This method was used to predict the inverse kinematics of the RHBM and maximize the similarity between predicted inverse kinematic poses and recorded data of 10 subjects performing activities of daily living. The density function was created for discrete increments of the workspace. The number of increments in each direction (x, y, and z) was varied from 1 to 20. Performance of the method was evaluated by finding the root mean squared (RMS) error between the predicted joint angles and the joint angles recorded from motion capture. The amount of data included in the creation of the probability density function was varied from 1 to 10 subjects, creating sets of subjects included in and excluded from the density function. The performance of the GP method for subjects included in and excluded from the density function was evaluated to test the robustness of the method. Accuracy of the GP method varied with the incremental division of the workspace: increasing the number of increments decreased the RMS error of the method, with the average RMS error for included subjects ranging from 7.7° to 3.7°. However, increasing the number of increments also decreased the robustness of the method.

  3. A Priori Knowledge and Probability Density Based Segmentation Method for Medical CT Image Sequences

    PubMed Central

    Tan, Hanqing; Yang, Benqiang

    2014-01-01

    This paper briefly introduces a novel segmentation strategy for CT image sequences. As the first step of our strategy, we extract a priori intensity statistics from the object region, which is manually segmented by radiologists. Then we define a search scope for the object and calculate the probability density for each pixel in the scope using a voting mechanism. Moreover, we generate an optimal initial level set contour based on the a priori shape of the object in the previous slice. Finally, the modified distance regularization level set method uses boundary features and probability density to determine the final object. The main contributions of this paper are as follows: a priori knowledge is effectively used to guide the determination of objects, and a modified distance regularization level set method can accurately extract the actual contour of the object in a short time. The proposed method is compared to seven other state-of-the-art medical image segmentation methods on abdominal CT image sequence datasets. The evaluation results demonstrate that our method performs better and has potential for segmentation in CT image sequences. PMID:24967402

  4. Probability density function method for variable-density pressure-gradient-driven turbulence and mixing

    SciTech Connect

    Bakosi, Jozsef; Ristorcelli, Raymond J

    2010-01-01

    Probability density function (PDF) methods are extended to variable-density pressure-gradient-driven turbulence. We apply the new method to compute the joint PDF of density and velocity in a non-premixed binary mixture of different-density molecularly mixing fluids under gravity. The full time-evolution of the joint PDF is captured in the highly non-equilibrium flow: starting from a quiescent state, transitioning to fully developed turbulence and finally dissipated by molecular diffusion. High-Atwood-number effects (as distinguished from the Boussinesq case) are accounted for: both hydrodynamic turbulence and material mixing are treated at arbitrary density ratios, with the specific volume, mass flux and all their correlations in closed form. An extension of the generalized Langevin model, originally developed for the Lagrangian fluid particle velocity in constant-density shear-driven turbulence, is constructed for variable-density pressure-gradient-driven flows. The persistent small-scale anisotropy, a fundamentally 'non-Kolmogorovian' feature of flows under external acceleration forces, is captured by a tensorial diffusion term based on the external body force. The material mixing model for the fluid density, an active scalar, is developed based on the beta distribution. The beta-PDF is shown to be capable of capturing the mixing asymmetry and that it can accurately represent the density through transition, in fully developed turbulence and in the decay process. The joint model for hydrodynamics and active material mixing yields a time-accurate evolution of the turbulent kinetic energy and Reynolds stress anisotropy without resorting to gradient diffusion hypotheses, and represents the mixing state by the density PDF itself, eliminating the need for dubious mixing measures. Direct numerical simulations of the homogeneous Rayleigh-Taylor instability are used for model validation.

  5. Analysis of altered gait cycle duration in amyotrophic lateral sclerosis based on nonparametric probability density function estimation.

    PubMed

    Wu, Yunfeng; Shi, Lei

    2011-04-01

    Human locomotion is regulated by the central nervous system (CNS). The neurophysiological changes in the CNS due to amyotrophic lateral sclerosis (ALS) may cause altered gait cycle duration (stride interval) or other alterations of the gait rhythm. This article used a statistical method to analyze the altered stride interval in patients with ALS. We first estimated the probability density functions (PDFs) of stride interval from the outlier-processed gait rhythm time series, by using the nonparametric Parzen-window approach. Based on the PDFs estimated, the mean of the left-foot stride interval and the modified Kullback-Leibler divergence (MKLD) can be computed to serve as dominant features. In the classification experiments, the least squares support vector machine (LS-SVM) with Gaussian kernels was applied to distinguish the stride patterns in ALS patients. According to the results obtained with the stride interval time series recorded from 16 healthy control subjects and 13 patients with ALS, the key findings of the present study are summarized as follows. (1) It is observed that the mean of stride interval computed based on the PDF for the left foot is correlated with that for the right foot in patients with ALS. (2) The MKLD parameter of the gait in ALS is significantly different from that in healthy controls. (3) The diagnostic performance of the nonlinear LS-SVM, evaluated by the leave-one-out cross-validation method, is superior to that obtained by the linear discriminant analysis. The LS-SVM can effectively separate the stride patterns between the groups of healthy controls and ALS patients with an overall accuracy of 82.8% and an area of 0.869 under the receiver operating characteristic curve. PMID:21130016
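
    The two features described (a PDF-based mean and a Kullback-Leibler style divergence between left- and right-foot stride-interval PDFs) can be sketched with a Parzen-window (Gaussian KDE) estimate; the stride data below are synthetic, and a plain symmetrized KL divergence stands in for the paper's exact MKLD definition.

```python
# Sketch: Parzen-window PDFs of stride intervals plus a symmetrized
# Kullback-Leibler divergence between the left- and right-foot PDFs.
# Synthetic stride data; a plain symmetric KL stands in for the paper's MKLD.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
left = rng.normal(1.10, 0.04, 300)       # toy left-foot stride intervals [s]
right = rng.normal(1.13, 0.05, 300)      # toy right-foot stride intervals [s]

grid = np.linspace(0.9, 1.4, 500)
dx = grid[1] - grid[0]
p = gaussian_kde(left)(grid)
q = gaussian_kde(right)(grid)
p /= p.sum() * dx                        # normalise on the grid
q /= q.sum() * dx

def kl(a, b):
    return np.sum(a * np.log((a + 1e-12) / (b + 1e-12))) * dx

mean_left = np.sum(grid * p) * dx        # mean of the left-foot stride PDF
sym_kl = kl(p, q) + kl(q, p)
print(f"left-foot PDF mean = {mean_left:.3f} s,  symmetric KL = {sym_kl:.3f}")
```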

  6. Analysis of altered gait cycle duration in amyotrophic lateral sclerosis based on nonparametric probability density function estimation.

    PubMed

    Wu, Yunfeng; Shi, Lei

    2011-04-01

    Human locomotion is regulated by the central nervous system (CNS). The neurophysiological changes in the CNS due to amyotrophic lateral sclerosis (ALS) may cause altered gait cycle duration (stride interval) or other alterations of the gait rhythm. This article used a statistical method to analyze the altered stride interval in patients with ALS. We first estimated the probability density functions (PDFs) of stride interval from the outlier-processed gait rhythm time series, by using the nonparametric Parzen-window approach. Based on the PDFs estimated, the mean of the left-foot stride interval and the modified Kullback-Leibler divergence (MKLD) can be computed to serve as dominant features. In the classification experiments, the least squares support vector machine (LS-SVM) with Gaussian kernels was applied to distinguish the stride patterns in ALS patients. According to the results obtained with the stride interval time series recorded from 16 healthy control subjects and 13 patients with ALS, the key findings of the present study are summarized as follows. (1) It is observed that the mean of stride interval computed based on the PDF for the left foot is correlated with that for the right foot in patients with ALS. (2) The MKLD parameter of the gait in ALS is significantly different from that in healthy controls. (3) The diagnostic performance of the nonlinear LS-SVM, evaluated by the leave-one-out cross-validation method, is superior to that obtained by the linear discriminant analysis. The LS-SVM can effectively separate the stride patterns between the groups of healthy controls and ALS patients with an overall accuracy of 82.8% and an area of 0.869 under the receiver operating characteristic curve.

  7. Model assembly for estimating cell surviving fraction for both targeted and nontargeted effects based on microdosimetric probability densities.

    PubMed

    Sato, Tatsuhiko; Hamada, Nobuyuki

    2014-01-01

    We here propose a new model assembly for estimating the surviving fraction of cells irradiated with various types of ionizing radiation, considering both targeted and nontargeted effects in the same framework. The probability densities of specific energies in two scales, which are the cell nucleus and its substructure called a domain, were employed as the physical index for characterizing the radiation fields. In the model assembly, our previously established double stochastic microdosimetric kinetic (DSMK) model was used to express the targeted effect, whereas a newly developed model was used to express the nontargeted effect. The radioresistance caused by overexpression of anti-apoptotic protein Bcl-2 known to frequently occur in human cancer was also considered by introducing the concept of the adaptive response in the DSMK model. The accuracy of the model assembly was examined by comparing the computationally and experimentally determined surviving fraction of Bcl-2 cells (Bcl-2 overexpressing HeLa cells) and Neo cells (neomycin resistant gene-expressing HeLa cells) irradiated with microbeam or broadbeam of energetic heavy ions, as well as the WI-38 normal human fibroblasts irradiated with X-ray microbeam. The model assembly reproduced very well the experimentally determined surviving fraction over a wide range of dose and linear energy transfer (LET) values. Our newly established model assembly will be worth being incorporated into treatment planning systems for heavy-ion therapy, brachytherapy, and boron neutron capture therapy, given critical roles of the frequent Bcl-2 overexpression and the nontargeted effect in estimating therapeutic outcomes and harmful effects of such advanced therapeutic modalities.

  8. Non-stationary random vibration analysis of a 3D train-bridge system using the probability density evolution method

    NASA Astrophysics Data System (ADS)

    Yu, Zhi-wu; Mao, Jian-feng; Guo, Feng-qi; Guo, Wei

    2016-03-01

    Rail irregularity is one of the main sources of train-bridge random vibration. A new random vibration theory for coupled train-bridge systems is proposed in this paper. First, the number theory method (NTM) with 2N-dimensional vectors for the stochastic harmonic function (SHF) of the rail irregularity power spectral density was adopted to determine the representative points of spatial frequencies and phases used to generate the random rail irregularity samples, and the non-stationary rail irregularity samples were modulated with a slowly varying function. Second, the probability density evolution method (PDEM) was employed to calculate the random dynamic vibration of the three-dimensional (3D) train-bridge system by a program compiled on the MATLAB® software platform. Finally, the Newmark-β integration method and the double edge difference method of the total variation diminishing (TVD) format were adopted to obtain the mean value curve, the standard deviation curve, and the time-history probability density information of the responses. A case study was presented in which the ICE-3 train travels on a three-span simply supported high-speed railway bridge with excitation by random rail irregularity. The results showed that, compared to Monte Carlo simulation, the PDEM has higher computational efficiency for the same accuracy, i.e., an improvement of 1-2 orders of magnitude. Additionally, the influences of rail irregularity and train speed on the random vibration of the coupled train-bridge system were discussed.

  9. Estimated probability density functions for the times between flashes in the storms of 12 September 1975, 26 August 1975, and 13 July 1976

    NASA Technical Reports Server (NTRS)

    Tretter, S. A.

    1977-01-01

    A report is given to supplement the progress report of June 17, 1977. In that progress report, gamma, lognormal, and Rayleigh probability density functions were fitted to the times between lightning flashes in the storms of 9/12/75, 8/26/75, and 7/13/76 by the maximum likelihood method. The goodness of fit is checked by the Kolmogoroff-Smirnoff test. Plots of the estimated densities along with normalized histograms are included to provide a visual check on the goodness of fit. The lognormal densities are the most peaked and have the highest tails. This results in the best fit to the normalized histogram in most cases. The Rayleigh densities have peaks that are too broad and rounded to give good fits. In addition, they have the lowest tails. The gamma densities fall in between and give the best fit in a few cases.
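
    The fit-and-test workflow described here maps directly onto standard statistics libraries; in the sketch below, the inter-flash times are synthetic placeholders for the storm records, and the three candidate densities are fitted by maximum likelihood and checked with a Kolmogorov-Smirnov test.

```python
# Sketch: fit gamma, lognormal, and Rayleigh densities to inter-flash times by
# maximum likelihood and check goodness of fit with the Kolmogorov-Smirnov
# test. The "inter-flash times" are synthetic placeholders for the storm data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
times = rng.lognormal(mean=2.0, sigma=0.8, size=400)   # toy seconds between flashes

for name, dist in [("gamma", stats.gamma),
                   ("lognormal", stats.lognorm),
                   ("rayleigh", stats.rayleigh)]:
    params = dist.fit(times, floc=0.0)                 # ML fit with location pinned at 0
    ks = stats.kstest(times, dist.cdf, args=params)
    print(f"{name:9s}  KS statistic = {ks.statistic:.3f}  p-value = {ks.pvalue:.3g}")
```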

  10. A method for evaluating the expectation value of a power spectrum using the probability density function of phases

    SciTech Connect

    Caliandro, G.A.; Torres, D.F.; Rea, N. E-mail: dtorres@aliga.ieec.uab.es

    2013-07-01

    Here, we present a new method to evaluate the expectation value of the power spectrum of a time series. A statistical approach is adopted to define the method. After its demonstration, it is validated by showing that it leads to the known properties of the power spectrum when the time series contains a periodic signal. The approach is also validated in general with numerical simulations. The method highlights the important role played by the probability density function of the phases associated with each time stamp for a given frequency, and shows how this distribution can be perturbed by the uncertainties of the parameters in the pulsar ephemeris. We applied this method to solve for the power spectrum in the case where the first derivative of the pulsar frequency is unknown and not negligible. We also undertook the study of the most general case of a blind search, in which both the frequency and its first derivative are uncertain. We found the analytical solutions of the above cases by invoking the sum of Fresnel's integrals squared.

  11. Retrievals of atmospheric columnar carbon dioxide and methane from GOSAT observations with photon path-length probability density function (PPDF) method

    NASA Astrophysics Data System (ADS)

    Bril, A.; Oshchepkov, S.; Yokota, T.; Yoshida, Y.; Morino, I.; Uchino, O.; Belikov, D. A.; Maksyutov, S. S.

    2014-12-01

    We retrieved the column-averaged dry air mole fraction of atmospheric carbon dioxide (XCO2) and methane (XCH4) from the radiance spectra measured by the Greenhouse gases Observing SATellite (GOSAT) over 48 months of satellite operation from June 2009. A recent version of the photon path-length probability density function (PPDF)-based algorithm was used to estimate XCO2 and optical path modifications in terms of PPDF parameters. We also present results of numerical simulations for over-land observations and "sharp edge" tests for sun-glint mode to discuss the algorithm accuracy under conditions of strong optical path modification. For the methane abundance retrieved from the 1.67-µm absorption band, we applied an optical path correction based on PPDF parameters from the 1.6-µm carbon dioxide (CO2) absorption band. Similarly to the CO2-proxy technique, this correction assumes identical light path modifications in the 1.67-µm and 1.6-µm bands. However, the proxy approach needs pre-defined XCO2 values to compute XCH4, whilst the PPDF-based approach does not use prior assumptions on CO2 concentrations. Post-processing data correction for XCO2 and XCH4 over-land observations was performed using a regression matrix based on multivariate analysis of variance (MANOVA). The MANOVA statistics were applied to the GOSAT retrievals using reference collocated measurements of the Total Carbon Column Observing Network (TCCON). The regression matrix was constructed using the parameters that were found to correlate with GOSAT-TCCON discrepancies: PPDF parameters α and ρ, which are mainly responsible for shortening and lengthening of the optical path due to atmospheric light scattering; solar and satellite zenith angles; surface pressure; and surface albedo in three GOSAT short wave infrared (SWIR) bands. Application of the post-correction generally improves statistical characteristics of the GOSAT-TCCON correlation diagrams for individual stations as well as for aggregated data. In addition to the analysis of the

  12. Modulation Based on Probability Density Functions

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.

    2009-01-01

    A proposed method of modulating a sinusoidal carrier signal to convey digital information involves the use of histograms representing probability density functions (PDFs) that characterize samples of the signal waveform. The method is based partly on the observation that when a waveform is sampled (whether by analog or digital means) over a time interval at least as long as one half cycle of the waveform, the samples can be sorted by frequency of occurrence, thereby constructing a histogram representing a PDF of the waveform during that time interval.

  13. Effects of combined dimension reduction and tabulation on the simulations of a turbulent premixed flame using a large-eddy simulation/probability density function method

    NASA Astrophysics Data System (ADS)

    Kim, Jeonglae; Pope, Stephen B.

    2014-05-01

    A turbulent lean-premixed propane-air flame stabilised by a triangular cylinder as a flame-holder is simulated to assess the accuracy and computational efficiency of combined dimension reduction and tabulation of chemistry. The computational condition matches the Volvo rig experiments. For the reactive simulation, the Lagrangian Large-Eddy Simulation/Probability Density Function (LES/PDF) formulation is used. A novel two-way coupling approach between LES and PDF is applied to obtain resolved density to reduce its statistical fluctuations. Composition mixing is evaluated by the modified Interaction-by-Exchange with the Mean (IEM) model. A baseline case uses In Situ Adaptive Tabulation (ISAT) to calculate chemical reactions efficiently. Its results demonstrate good agreement with the experimental measurements in turbulence statistics, temperature, and minor species mass fractions. For dimension reduction, 11 and 16 represented species are chosen and a variant of Rate Controlled Constrained Equilibrium (RCCE) is applied in conjunction with ISAT to each case. All the quantities in the comparison are indistinguishable from the baseline results using ISAT only. The combined use of RCCE/ISAT reduces the computational time for chemical reaction by more than 50%. However, for the current turbulent premixed flame, chemical reaction takes only a minor portion of the overall computational cost, in contrast to non-premixed flame simulations using LES/PDF, presumably due to the restricted manifold of purely premixed flame in the composition space. Instead, composition mixing is the major contributor to cost reduction since the mean-drift term, which is computationally expensive, is computed for the reduced representation. Overall, a reduction of more than 15% in the computational cost is obtained.

  14. Use of ELVIS II platform for random process modelling and analysis of its probability density function

    NASA Astrophysics Data System (ADS)

    Maslennikova, Yu. S.; Nugmanov, I. S.

    2016-08-01

    The problem of probability density function estimation for a random process is one of the most common in practice. There are several methods to solve this problem. The presented laboratory work uses methods of mathematical statistics to detect patterns in the realization of a random process. On the basis of ergodic theory, we construct an algorithm for estimating the univariate probability density function of a random process. Correlation analysis of realizations is applied to estimate the necessary sample size and observation time. Hypothesis testing for two probability distributions (normal and Cauchy) is performed on the experimental data using the χ2 criterion. To facilitate understanding and clarity of the problem being solved, we use the ELVIS II platform and the LabVIEW software package, which allow us to make the necessary calculations, display the results of the experiment and, most importantly, control the experiment. At the same time, students are introduced to the LabVIEW software package and its capabilities.

  15. Direct propagation of probability density functions in hydrological equations

    NASA Astrophysics Data System (ADS)

    Kunstmann, Harald; Kastens, Marko

    2006-06-01

    Sustainable decisions in hydrological risk management require detailed information on the probability density function (pdf) of the model output. Only then can probabilities for the failure of a specific management option or the exceedance of critical thresholds (e.g. of pollutants) be derived. A new approach to uncertainty propagation in hydrological equations is developed that directly propagates the probability density functions of uncertain model input parameters into the corresponding probability density functions of model output. The basics of the methodology are presented and central applications to different disciplines in hydrology are shown. This work focuses on the following basic hydrological equations: (1) pumping test analysis (Theis-equation, propagation of uncertainties in recharge and transmissivity), (2) 1-dim groundwater contaminant transport equation (Gauss-equation, propagation of uncertainties in decay constant and dispersivity), (3) evapotranspiration estimation (Penman-Monteith-equation, propagation of uncertainty in roughness length). The direct propagation of probability densities is restricted to functions that are monotonically increasing or decreasing or that can be separated into corresponding monotonic branches so that inverse functions can be derived. In cases where no analytic solutions for the inverse functions could be derived, semi-analytical approximations were used. It is shown that the results of direct probability density function propagation are in perfect agreement with results obtained from corresponding Monte Carlo derived frequency distributions. Direct pdf propagation, however, has the advantage that it yields exact solutions for the resulting hydrological pdfs rather than approximating discontinuous frequency distributions. It is additionally shown that the type of the resulting pdf depends on the specific values (or their order of magnitude) of the standard deviation of the input pdf. The dependency of skewness and kurtosis
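
    The change-of-variables step behind direct PDF propagation, p_Y(y) = p_X(g^-1(y)) |dg^-1/dy| for a monotone model y = g(x), is easy to cross-check against Monte Carlo; the model and input PDF below are simple illustrative stand-ins for the hydrological equations listed above.

```python
# Sketch: direct propagation of a PDF through a monotone model y = g(x),
#   p_Y(y) = p_X(g^{-1}(y)) * |d g^{-1}/dy|,
# checked against a Monte Carlo histogram. The model g and the input PDF are
# illustrative stand-ins for the hydrological equations discussed above.
import numpy as np
from scipy import stats

x_dist = stats.lognorm(s=0.5, scale=1.0)      # uncertain input (e.g. a transmissivity)

def g(x):                                     # monotone model
    return np.log(1.0 + x)

def g_inv(y):
    return np.expm1(y)

def dginv_dy(y):
    return np.exp(y)

y_grid = np.linspace(0.05, 2.0, 200)
p_y_direct = x_dist.pdf(g_inv(y_grid)) * np.abs(dginv_dy(y_grid))

y_mc = g(x_dist.rvs(size=200_000, random_state=np.random.default_rng(0)))
hist, edges = np.histogram(y_mc, bins=60, range=(0.05, 2.0), density=True)

centers = 0.5 * (edges[:-1] + edges[1:])
p_y_at_centers = np.interp(centers, y_grid, p_y_direct)
print("max |direct - Monte Carlo| over the bins:",
      round(float(np.abs(p_y_at_centers - hist).max()), 3))
```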

  16. Carrier Modulation Via Waveform Probability Density Function

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.

    2004-01-01

    Beyond the classic modes of carrier modulation by varying amplitude (AM), phase (PM), or frequency (FM), we extend the modulation domain of an analog carrier signal to include a class of general modulations which are distinguished by their probability density function histogram. Separate waveform states are easily created by varying the pdf of the transmitted waveform. Individual waveform states are assignable as proxies for digital ONEs or ZEROs. At the receiver, these states are easily detected by accumulating sampled waveform statistics and performing periodic pattern matching, correlation, or statistical filtering. No fundamental natural laws are broken in the detection process. We show how a typical modulation scheme would work in the digital domain and suggest how to build an analog version. We propose that clever variations of the modulating waveform (and thus the histogram) can provide simple steganographic encoding.
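
    One way to see the idea is to give each digital symbol a waveform whose sample histogram differs (for example, a sine segment has an arcsine-shaped amplitude PDF while a triangle segment has a uniform one) and to classify received segments by matching histograms; everything in the sketch below is an illustrative toy, not the proposed modulation hardware.

```python
# Sketch: modulation states distinguished by the waveform's sample histogram
# (its empirical PDF). A sine symbol has an arcsine-shaped amplitude PDF and a
# triangle symbol a uniform one; the receiver classifies each segment by
# matching its histogram to the two templates. Toy illustration only.
import numpy as np

fs, f0, n_per_symbol = 10_000, 50, 2000       # several carrier cycles per symbol
t = np.arange(n_per_symbol) / fs

def symbol_wave(bit, rng):
    phase = rng.uniform(0, 2 * np.pi)
    if bit == 0:
        return np.sin(2 * np.pi * f0 * t + phase)         # arcsine amplitude PDF
    saw = (f0 * t + phase / (2 * np.pi)) % 1.0
    return 2.0 * np.abs(2.0 * saw - 1.0) - 1.0            # uniform amplitude PDF

bins = np.linspace(-1.0, 1.0, 21)

def histogram(w):
    h, _ = np.histogram(w, bins=bins, density=True)
    return h

rng = np.random.default_rng(0)
templates = [histogram(symbol_wave(b, rng)) for b in (0, 1)]

bits = rng.integers(0, 2, size=200)
rx = [symbol_wave(b, rng) + 0.05 * rng.standard_normal(n_per_symbol) for b in bits]
decoded = [int(np.argmin([np.abs(histogram(w) - tpl).sum() for tpl in templates]))
           for w in rx]
print("bit error rate:", np.mean(np.array(decoded) != bits))
```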

  17. Carrier Modulation Via Waveform Probability Density Function

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.

    2006-01-01

    Beyond the classic modes of carrier modulation by varying amplitude (AM), phase (PM), or frequency (FM), we extend the modulation domain of an analog carrier signal to include a class of general modulations which are distinguished by their probability density function histogram. Separate waveform states are easily created by varying the pdf of the transmitted waveform. Individual waveform states are assignable as proxies for digital one's or zero's. At the receiver, these states are easily detected by accumulating sampled waveform statistics and performing periodic pattern matching, correlation, or statistical filtering. No fundamental physical laws are broken in the detection process. We show how a typical modulation scheme would work in the digital domain and suggest how to build an analog version. We propose that clever variations of the modulating waveform (and thus the histogram) can provide simple steganographic encoding.

  18. Application of the response probability density function technique to biodynamic models.

    PubMed

    Hershey, R L; Higgins, T H

    1978-01-01

    A method has been developed, which we call the "response probability density function technique," which has applications in predicting the probability of injury in a wide range of biodynamic situations. The method, which was developed in connection with sonic boom damage prediction, utilized the probability density function of the excitation force and the probability density function of the sensitivity of the material being acted upon. The method is especially simple to use when both these probability density functions are lognormal. Studies thus far have shown that the stresses from sonic booms, as well as the strengths of glass and mortars, are distributed lognormally. Some biodynamic processes also have lognormal distributions and are, therefore, amenable to modeling by this technique. In particular, this paper discusses the application of the response probability density function technique to the analysis of the thoracic response to air blast and the prediction of skull fracture from head impact. PMID:623590
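
    When both the excitation and the material sensitivity are lognormal, the failure probability reduces to a single normal CDF evaluation; the sketch below uses illustrative numbers rather than the sonic-boom or biodynamic data.

```python
# Sketch of the response-PDF idea in the lognormal case: if the excitation
# (stress) and the sensitivity (strength) are both lognormal and independent,
# P(failure) = P(stress > strength) has a closed form via the normal CDF.
# The numerical values are illustrative, not the sonic-boom or biodynamic data.
import numpy as np
from scipy.stats import norm

mu_s, sigma_s = np.log(2.0), 0.40     # ln-mean / ln-std of the stress
mu_r, sigma_r = np.log(6.0), 0.30     # ln-mean / ln-std of the strength

p_fail_closed = norm.cdf((mu_s - mu_r) / np.hypot(sigma_s, sigma_r))

# Monte Carlo cross-check of the same probability
rng = np.random.default_rng(0)
s = rng.lognormal(mu_s, sigma_s, 1_000_000)
r = rng.lognormal(mu_r, sigma_r, 1_000_000)
print(f"closed form P(failure) = {p_fail_closed:.4f},  Monte Carlo = {(s > r).mean():.4f}")
```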

  19. Probability density function learning by unsupervised neurons.

    PubMed

    Fiori, S

    2001-10-01

    In a recent work, we introduced the concept of pseudo-polynomial adaptive activation function neuron (FAN) and presented an unsupervised information-theoretic learning theory for such structure. The learning model is based on entropy optimization and provides a way of learning probability distributions from incomplete data. The aim of the present paper is to illustrate some theoretical features of the FAN neuron, to extend its learning theory to asymmetrical density function approximation, and to provide an analytical and numerical comparison with other known density function estimation methods, with special emphasis to the universal approximation ability. The paper also provides a survey of PDF learning from incomplete data, as well as results of several experiments performed on real-world problems and signals. PMID:11709808

  20. Downlink Probability Density Functions for EOS-McMurdo Sound

    NASA Technical Reports Server (NTRS)

    Christopher, P.; Jackson, A. H.

    1996-01-01

    The visibility times and communication link dynamics for the Earth Observations Satellite (EOS)-McMurdo Sound direct downlinks have been studied. The 16 day EOS periodicity may be shown with the Goddard Trajectory Determination System (GTDS), and the entire 16 day period should be simulated for representative link statistics. We desire many attributes of the downlink, however, so a faster orbit determination method is desirable. We use the method of osculating elements for speed and accuracy in simulating the EOS orbit. The accuracy of the method of osculating elements is demonstrated by closely reproducing the observed 16 day Landsat periodicity. An autocorrelation function method is used to show the correlation spike at 16 days. The entire 16 day record of passes over McMurdo Sound is then used to generate statistics for innage time, outage time, elevation angle, antenna angle rates, and propagation loss. The elevation angle probability density function is compared with a 1967 analytic approximation which has been used for medium- to high-altitude satellites. One practical result of this comparison is seen to be the rare occurrence of zenith passes. The new result is functionally different from the earlier result, with a heavy emphasis on low elevation angles. EOS is one of a large class of sun synchronous satellites which may be downlinked to McMurdo Sound. We examine delay statistics for an entire group of sun synchronous satellites ranging from 400 km to 1000 km altitude. Outage probability density function results are presented three dimensionally.

  1. Construction of Coarse-Grained Models by Reproducing Equilibrium Probability Density Function

    NASA Astrophysics Data System (ADS)

    Lu, Shi-Jing; Zhou, Xin

    2015-01-01

    The present work proposes a novel methodology for constructing coarse-grained (CG) models, which aims at minimizing the difference between the CG model and the corresponding original system. The difference is defined as a functional of their equilibrium conformational probability densities, and is then estimated from equilibrium averages of many independent physical quantities denoted as basis functions. An orthonormalization strategy is adopted to obtain independent basis functions from a sufficiently large set of preselected physical quantities of interest. Thus the current method is named the probability density matching coarse-graining (PMCG) scheme; it effectively takes into account the overall characteristics of the original system when constructing the CG model, and it is a natural improvement of the usual CG scheme wherein some physical quantities are chosen intuitively without considering their correlations. We verify the general PMCG framework by constructing a one-site CG water model from the TIP3P model. Both the liquid structure and the pressure of the TIP3P water system are found to be well reproduced at the same time by the constructed CG model.

  2. Probability Density Functions of Observed Rainfall in Montana

    NASA Technical Reports Server (NTRS)

    Larsen, Scott D.; Johnson, L. Ronald; Smith, Paul L.

    1995-01-01

    The question of whether a rain rate probability density function (PDF) can vary uniformly between precipitation events is examined. Image analysis on large samples of radar echoes is possible because of advances in technology. The data provided by such an analysis easily allow development of radar reflectivity factor (and, by extension, rain rate) distributions. Finding a PDF becomes a matter of finding a function that describes the curve approximating the resulting distributions. Ideally, one PDF would exist for all cases, or many PDFs with the same functional form and only systematic variations in parameters (such as size or shape) would exist. Satisfying either of these cases will validate the theoretical basis of the Area Time Integral (ATI). Using the method of moments and Elderton's curve selection criteria, the Pearson Type 1 equation was identified as a potential fit for 89 percent of the observed distributions. Further analysis indicates that the Type 1 curve does approximate the shape of the distributions but quantitatively does not produce a great fit.
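
    A Pearson Type 1 density is a beta density on a finite support, so the method-of-moments step described above reduces to two algebraic relations. The sketch below applies it to synthetic rain-rate-like data (the radar-derived distributions themselves are not reproduced here); taking the support to be the observed range is an assumption of the example.

```python
# Minimal method-of-moments fit of a Pearson Type I (i.e. a beta on a finite support)
# to a rain-rate-like sample. The data here are synthetic stand-ins for the radar
# derived distributions discussed in the abstract.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
rain = rng.gamma(shape=1.5, scale=4.0, size=5000)        # hypothetical rain rates (mm/h)

a, b = rain.min(), rain.max()                            # assume support ~ observed range
y = (rain - a) / (b - a)                                 # rescale to [0, 1]
m, v = y.mean(), y.var()

common = m * (1.0 - m) / v - 1.0                         # method-of-moments relations
alpha, beta = m * common, (1.0 - m) * common

print(f"Pearson Type I (beta) parameters: alpha={alpha:.2f}, beta={beta:.2f}")

# Qualitative goodness-of-fit check, mirroring the abstract's "shape vs. fit" remark
ks = stats.kstest(y, "beta", args=(alpha, beta))
print(f"KS statistic = {ks.statistic:.3f}, p-value = {ks.pvalue:.3g}")
```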

  3. Representation of Probability Density Functions from Orbit Determination using the Particle Filter

    NASA Technical Reports Server (NTRS)

    Mashiku, Alinda K.; Garrison, James; Carpenter, J. Russell

    2012-01-01

    Statistical orbit determination enables us to obtain estimates of the state and the statistical information of its region of uncertainty. In order to obtain an accurate representation of the probability density function (PDF) that incorporates higher order statistical information, we propose the use of nonlinear estimation methods such as the Particle Filter. The Particle Filter (PF) is capable of providing a PDF representation of the state estimates whose accuracy is dependent on the number of particles or samples used. For this method to be applicable to real case scenarios, we need a way of accurately representing the PDF in a compressed manner with little information loss. Hence we propose using the Independent Component Analysis (ICA) as a non-Gaussian dimensional reduction method that is capable of maintaining higher order statistical information obtained using the PF. Methods such as the Principal Component Analysis (PCA) are based on utilizing up to second order statistics, hence will not suffice in maintaining maximum information content. Both the PCA and the ICA are applied to two scenarios that involve a highly eccentric orbit with a lower a priori uncertainty covariance and a less eccentric orbit with a higher a priori uncertainty covariance, to illustrate the capability of the ICA in relation to the PCA.
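
    The compression idea can be illustrated with off-the-shelf tools: scikit-learn's PCA and FastICA applied to a synthetic, heavy-tailed particle cloud (a stand-in for an orbit-determination posterior, not the authors' code). The number of particles, the state dimension, and the number of retained components below are arbitrary choices for the example.

```python
# Sketch of compressing a particle-filter state cloud with PCA vs. FastICA
# (scikit-learn), as the abstract proposes. The "particles" here are a synthetic
# non-Gaussian cloud, not an orbit-determination posterior.
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(3)
n, d = 5000, 6                                   # particles x state dimension
latent = rng.laplace(size=(n, d))                # heavy-tailed (non-Gaussian) sources
mixing = rng.normal(size=(d, d))
particles = latent @ mixing.T                    # observed particle cloud

k = 3                                            # retained components (hypothetical)
pca_codes = PCA(n_components=k).fit_transform(particles)
ica_codes = FastICA(n_components=k, random_state=0).fit_transform(particles)

# ICA components should preserve more of the higher-order (non-Gaussian) structure;
# excess kurtosis of PCA scores tends to sit closer to zero (more Gaussian).
print("excess kurtosis, PCA components:", np.round(kurtosis(pca_codes), 2))
print("excess kurtosis, ICA components:", np.round(kurtosis(ica_codes), 2))
```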

  4. Probability density function modeling for sub-powered interconnects

    NASA Astrophysics Data System (ADS)

    Pater, Flavius; Amaricǎi, Alexandru

    2016-06-01

    This paper proposes three mathematical models for the reliability probability density function of interconnects supplied at sub-threshold voltages: spline curve approximations, Gaussian models, and sine interpolation. The proposed analysis aims at determining the most appropriate fit for the switching delay versus probability of correct switching for sub-powered interconnects. We compare the three mathematical models with Monte Carlo simulations of interconnects in a 45 nm CMOS technology supplied at 0.25 V.

  5. Robust functional statistics applied to Probability Density Function shape screening of sEMG data.

    PubMed

    Boudaoud, S; Rix, H; Al Harrach, M; Marin, F

    2014-01-01

    Recent studies pointed out possible shape modifications of the Probability Density Function (PDF) of surface electromyographical (sEMG) data according to several contexts like fatigue and muscle force increase. Following this idea, criteria have been proposed to monitor these shape modifications, mainly using High Order Statistics (HOS) parameters like skewness and kurtosis. In experimental conditions, these parameters must be estimated from small sample sizes. This small sample size induces errors in the estimated HOS parameters, hindering real-time and precise sEMG PDF shape monitoring. Recently, a functional formalism, the Core Shape Model (CSM), has been used to analyse shape modifications of PDF curves. In this work, taking inspiration from the CSM method, robust functional statistics are proposed to emulate both skewness and kurtosis behaviors. These functional statistics combine both kernel density estimation and PDF shape distances to evaluate shape modifications even in the presence of small sample sizes. Then, the proposed statistics are tested, using Monte Carlo simulations, on both normal and log-normal PDFs that mimic the observed sEMG PDF shape behavior during muscle contraction. According to the obtained results, the functional statistics seem to be more robust than HOS parameters to the small-sample-size effect and more accurate in sEMG PDF shape screening applications.
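
    The following sketch illustrates the general idea of stabilizing shape descriptors with kernel density estimation under small sample sizes; it is not the authors' CSM-based statistic. Skewness and kurtosis are computed both directly from small log-normal samples and from a Gaussian-KDE-smoothed density, and their Monte Carlo spread is compared.

```python
# Illustrative sketch (not the authors' CSM-based statistic): compare raw sample
# skewness/kurtosis against values computed from a Gaussian-KDE-smoothed density,
# for small samples drawn from a log-normal "sEMG-like" distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def kde_shape(sample, grid_pts=512):
    """Skewness and excess kurtosis of a gaussian_kde fit, by numerical integration."""
    kde = stats.gaussian_kde(sample)
    lo, hi = sample.min() - 3 * sample.std(), sample.max() + 3 * sample.std()
    x = np.linspace(lo, hi, grid_pts)
    p = kde(x)
    p /= np.trapz(p, x)
    mu = np.trapz(x * p, x)
    var = np.trapz((x - mu) ** 2 * p, x)
    skew = np.trapz((x - mu) ** 3 * p, x) / var ** 1.5
    kurt = np.trapz((x - mu) ** 4 * p, x) / var ** 2 - 3.0
    return skew, kurt

n, trials = 50, 200                       # deliberately small sample size
raw, smoothed = [], []
for _ in range(trials):
    s = rng.lognormal(mean=0.0, sigma=0.5, size=n)
    raw.append((stats.skew(s), stats.kurtosis(s)))
    smoothed.append(kde_shape(s))

print("raw HOS       (mean, std):", np.mean(raw, 0).round(2), np.std(raw, 0).round(2))
print("KDE-based HOS (mean, std):", np.mean(smoothed, 0).round(2), np.std(smoothed, 0).round(2))
```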

  6. Robust functional statistics applied to Probability Density Function shape screening of sEMG data.

    PubMed

    Boudaoud, S; Rix, H; Al Harrach, M; Marin, F

    2014-01-01

    Recent studies pointed out possible shape modifications of the Probability Density Function (PDF) of surface electromyographical (sEMG) data according to several contexts like fatigue and muscle force increase. Following this idea, criteria have been proposed to monitor these shape modifications, mainly using High Order Statistics (HOS) parameters like skewness and kurtosis. In experimental conditions, these parameters must be estimated from small sample sizes. This small sample size induces errors in the estimated HOS parameters, hindering real-time and precise sEMG PDF shape monitoring. Recently, a functional formalism, the Core Shape Model (CSM), has been used to analyse shape modifications of PDF curves. In this work, taking inspiration from the CSM method, robust functional statistics are proposed to emulate both skewness and kurtosis behaviors. These functional statistics combine both kernel density estimation and PDF shape distances to evaluate shape modifications even in the presence of small sample sizes. Then, the proposed statistics are tested, using Monte Carlo simulations, on both normal and log-normal PDFs that mimic the observed sEMG PDF shape behavior during muscle contraction. According to the obtained results, the functional statistics seem to be more robust than HOS parameters to the small-sample-size effect and more accurate in sEMG PDF shape screening applications. PMID:25570426

  7. Statistical analysis of gait maturation in children based on probability density functions.

    PubMed

    Wu, Yunfeng; Zhong, Zhangting; Lu, Meng; He, Jia

    2011-01-01

    Analysis of gait patterns in children is useful for the study of maturation of locomotor control. In this paper, we utilized the Parzen-window method to estimate the probability density functions (PDFs) of the stride interval for 50 children. With the estimated PDFs, the statistical measures, i.e., averaged stride interval (ASI), variation of stride interval (VSI), PDF skewness (SK), and PDF kurtosis (KU), were computed for the gait maturation in three age groups (aged 3-5 years, 6-8 years, and 10-14 years) of young children. The results indicated that the ASI and VSI values are significantly different between the three age groups. The VSI decreases rapidly until 8 years of age and then continues to decrease at a slower rate. The SK values of the PDFs for all three age groups are positive, which shows a slight imbalance in the stride interval distribution within each age group. In addition, the decrease of the KU values of the PDFs is age-dependent, which suggests the effect of musculo-skeletal growth on gait maturation in young children. PMID:22254641
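
    A compact sketch of the same pipeline on synthetic stride intervals: a Parzen-window (Gaussian-kernel) estimate of the PDF, followed by the four summary measures ASI, VSI, SK and KU computed from the estimated density. The synthetic stride-interval model below is an assumption of the example, not the children's data.

```python
# Sketch of the paper's pipeline on synthetic stride-interval data: Parzen-window
# (Gaussian-kernel) PDF estimation, followed by the four summary measures
# ASI, VSI, SK and KU computed from the estimated density.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(5)
stride = rng.normal(1.05, 0.05, 300) + 0.02 * rng.standard_exponential(300)  # seconds (synthetic)

kde = gaussian_kde(stride)                     # Parzen window with Gaussian kernel
x = np.linspace(stride.min() - 0.2, stride.max() + 0.2, 1000)
p = kde(x)
p /= np.trapz(p, x)

asi = np.trapz(x * p, x)                       # averaged stride interval
vsi = np.trapz((x - asi) ** 2 * p, x)          # variation of stride interval
sk = np.trapz((x - asi) ** 3 * p, x) / vsi ** 1.5
ku = np.trapz((x - asi) ** 4 * p, x) / vsi ** 2

print(f"ASI={asi:.3f} s, VSI={vsi:.5f} s^2, SK={sk:.2f}, KU={ku:.2f}")
```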

  8. Representation of layer-counted proxy records as probability densities on error-free time axes

    NASA Astrophysics Data System (ADS)

    Boers, Niklas; Goswami, Bedartha; Ghil, Michael

    2016-04-01

    Time series derived from paleoclimatic proxy records exhibit substantial dating uncertainties in addition to the measurement errors of the proxy values. For radiometrically dated proxy archives, Goswami et al. [1] have recently introduced a framework rooted in Bayesian statistics that successfully propagates the dating uncertainties from the time axis to the proxy axis. The resulting proxy record consists of a sequence of probability densities over the proxy values, conditioned on prescribed age values. One of the major benefits of this approach is that the proxy record is represented on an accurate, error-free time axis. Such unambiguous dating is crucial, for instance, in comparing different proxy records. This approach, however, is not directly applicable to proxy records with layer-counted chronologies, as for example ice cores, which are typically dated by counting quasi-annually deposited ice layers. Hence the nature of the chronological uncertainty in such records is fundamentally different from that in radiometrically dated ones. Here, we introduce a modification of the Goswami et al. [1] approach that is specifically designed for layer-counted proxy records, instead of radiometrically dated ones. We apply our method to isotope ratios and dust concentrations in the NGRIP core, using a published 60,000-year chronology [2]. It is shown that the further one goes into the past, the more the layer-counting errors accumulate and lead to growing uncertainties in the probability density sequence for the proxy values that results from the proposed approach. For the older parts of the record, these uncertainties affect more and more a statistically sound estimation of proxy values. This difficulty implies that great care has to be exercised when comparing and in particular aligning specific events among different layer-counted proxy records. On the other hand, when attempting to derive stochastic dynamical models from the proxy records, one is only interested in the

  9. Probability density function characterization for aggregated large-scale wind power based on Weibull mixtures

    DOE PAGES

    Gomez-Lazaro, Emilio; Bueso, Maria C.; Kessler, Mathieu; Martin-Martinez, Sergio; Zhang, Jie; Hodge, Bri -Mathias; Molina-Garcia, Angel

    2016-02-02

    Here, the Weibull probability distribution has been widely applied to characterize wind speeds for wind energy resources. Wind power generation modeling is different, however, due in particular to power curve limitations, wind turbine control methods, and transmission system operation requirements. These differences are even greater for aggregated wind power generation in power systems with high wind penetration. Consequently, models based on a single Weibull component can provide poor characterizations for aggregated wind power generation. With this aim, the present paper focuses on discussing Weibull mixtures to characterize the probability density function (PDF) for aggregated wind power generation. PDFs of wind power data are first classified according to hourly and seasonal patterns. The selection of the number of components in the mixture is analyzed through two well-known criteria: the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). Finally, the optimal number of Weibull components for maximum likelihood is explored for the defined patterns, including the estimated weight, scale, and shape parameters. Results show that multi-Weibull models are more suitable to characterize aggregated wind power data due to the impact of distributed generation, the variety of wind speed values and wind power curtailment.
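
    A hedged sketch of the fitting-and-selection step described above: K-component Weibull mixtures are fitted to synthetic, bimodal "wind power" data by direct likelihood maximization, and K is chosen with AIC/BIC. The optimizer settings and the synthetic data are assumptions of the example; the paper's own estimation procedure may differ.

```python
# Sketch: fit K-component Weibull mixtures by direct likelihood maximization and
# pick K with AIC/BIC, in the spirit of the paper. Data are synthetic stand-ins
# for normalized aggregated wind power.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

data = np.concatenate([weibull_min.rvs(1.8, scale=0.25, size=1500, random_state=1),
                       weibull_min.rvs(3.5, scale=0.70, size=1000, random_state=2)])

def neg_loglik(theta, x, k):
    # theta packs unnormalized log-weights, log-shapes, log-scales for k components
    logw, logc, logs = np.split(theta, 3)
    w = np.exp(logw); w /= w.sum()
    pdf = sum(w[j] * weibull_min.pdf(x, np.exp(logc[j]), scale=np.exp(logs[j]))
              for j in range(k))
    return -np.sum(np.log(pdf + 1e-300))

results = {}
for k in (1, 2, 3):
    theta0 = np.concatenate([np.zeros(k),                       # equal weights
                             np.log(np.full(k, 2.0)),           # shape guesses
                             np.log(np.quantile(data, np.linspace(0.2, 0.8, k)))])
    res = minimize(neg_loglik, theta0, args=(data, k), method="Nelder-Mead",
                   options={"maxiter": 20000, "xatol": 1e-6, "fatol": 1e-6})
    n_par = 3 * k - 1                                            # weights sum to one
    aic = 2 * n_par + 2 * res.fun
    bic = n_par * np.log(len(data)) + 2 * res.fun
    results[k] = (aic, bic)
    print(f"K={k}: AIC={aic:.1f}  BIC={bic:.1f}")

print("best K by BIC:", min(results, key=lambda k: results[k][1]))
```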

  10. Analytical Formulation of the Single-visit Completeness Joint Probability Density Function

    NASA Astrophysics Data System (ADS)

    Garrett, Daniel; Savransky, Dmitry

    2016-09-01

    We derive an exact formulation of the multivariate integral representing the single-visit obscurational and photometric completeness joint probability density function for arbitrary distributions for planetary parameters. We present a derivation of the region of nonzero values of this function, which extends previous work, and discuss the time and computational complexity costs and benefits of the method. We present a working implementation and demonstrate excellent agreement between this approach and Monte Carlo simulation results.

  11. Effect of Non-speckle Echo Signals on Tissue Characteristics for Liver Fibrosis using Probability Density Function of Ultrasonic B-mode image

    NASA Astrophysics Data System (ADS)

    Mori, Shohei; Hirata, Shinnosuke; Yamaguchi, Tadashi; Hachiya, Hiroyuki

    To develop a quantitative diagnostic method for liver fibrosis using an ultrasound B-mode image, a probability imaging method of tissue characteristics based on a multi-Rayleigh model, which expresses the probability density function of echo signals from liver fibrosis, has been proposed. In this paper, the effect of non-speckle echo signals on the tissue characteristics estimated from the multi-Rayleigh model was evaluated. Non-speckle signals were detected and removed using the modeling error of the multi-Rayleigh model. The correct tissue characteristics of fibrotic tissue could be estimated after the removal of non-speckle signals.

  12. Probability Density Function for Waves Propagating in a Straight PEC Rough Wall Tunnel

    SciTech Connect

    Pao, H

    2004-11-08

    The probability density function for waves propagating in a straight perfect electrical conductor (PEC) rough wall tunnel is deduced from mathematical models of the random electromagnetic fields. The field propagating in caves or tunnels is a complex-valued Gaussian random process by the Central Limit Theorem. The probability density function for the single modal field amplitude in such a structure is Ricean. Since both the expected value and the standard deviation of this field depend only on radial position, the probability density function, which describes the power distribution, is a radially dependent function. The radio channel places fundamental limitations on the performance of wireless communication systems in tunnels and caves. The transmission path between the transmitter and receiver can vary from a simple direct line of sight to one that is severely obstructed by rough walls and corners. Unlike wired channels that are stationary and predictable, radio channels can be extremely random and difficult to analyze. In fact, modeling the radio channel has historically been one of the more challenging parts of any radio system design; this is often done using statistical methods. In this contribution, we present the most important statistical property, the field probability density function, of waves propagating in a straight PEC rough wall tunnel. This work studies only the simplest case, a PEC boundary, which is an idealization, but the methods and conclusions developed herein are applicable to real-world problems in which the boundary is dielectric. The mechanisms behind electromagnetic wave propagation in caves or tunnels are diverse, but can generally be attributed to reflection, diffraction, and scattering. Because of the multiple reflections from rough walls, the electromagnetic waves travel along different paths of varying lengths. The interactions between these waves cause multipath fading at any location, and the strengths of the waves decrease as the distance
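
    The Ricean claim is easy to check numerically: a single modal field modeled as a complex Gaussian with a nonzero deterministic (line-of-sight) component has a Rice-distributed magnitude. The sketch below uses illustrative, not tunnel-specific, parameters and scipy's rice distribution, whose shape parameter is the ratio of the deterministic amplitude to the per-quadrature standard deviation.

```python
# Sketch of the Ricean-amplitude claim: a single modal field modeled as a complex
# Gaussian with a nonzero deterministic (line-of-sight) component has a Rice-
# distributed magnitude. Parameters below are illustrative, not tunnel-specific.
import numpy as np
from scipy.stats import rice, kstest

rng = np.random.default_rng(7)
nu, sigma = 1.5, 0.6                              # LOS amplitude, scatter std per quadrature
field = (nu + sigma * rng.normal(size=50_000)) + 1j * sigma * rng.normal(size=50_000)
amplitude = np.abs(field)

# scipy's rice uses shape b = nu/sigma, loc = 0 and scale = sigma
stat, p = kstest(amplitude, "rice", args=(nu / sigma, 0.0, sigma))
print(f"KS statistic vs. Rice(b=nu/sigma, scale=sigma): {stat:.4f} (p={p:.2f})")
```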

  13. Using Prediction Markets to Generate Probability Density Functions for Climate Change Risk Assessment

    NASA Astrophysics Data System (ADS)

    Boslough, M.

    2011-12-01

    Climate-related uncertainty is traditionally presented as an error bar, but it is becoming increasingly common to express it in terms of a probability density function (PDF). PDFs are a necessary component of probabilistic risk assessments, for which simple "best estimate" values are insufficient. Many groups have generated PDFs for climate sensitivity using a variety of methods. These PDFs are broadly consistent, but vary significantly in their details. One axiom of the verification and validation community is, "codes don't make predictions, people make predictions." This is a statement of the fact that subject domain experts generate results using assumptions within a range of epistemic uncertainty and interpret them according to their expert opinion. Different experts with different methods will arrive at different PDFs. For effective decision support, a single consensus PDF would be useful. We suggest that market methods can be used to aggregate an ensemble of opinions into a single distribution that expresses the consensus. Prediction markets have been shown to be highly successful at forecasting the outcome of events ranging from elections to box office returns. In prediction markets, traders can take a position on whether some future event will or will not occur. These positions are expressed as contracts that are traded in a double-auction market that aggregates price, which can be interpreted as a consensus probability that the event will take place. Since climate sensitivity cannot directly be measured, it cannot be predicted. However, the changes in global mean surface temperature are a direct consequence of climate sensitivity, changes in forcing, and internal variability. Viable prediction markets require an undisputed event outcome on a specific date. Climate-related markets exist on Intrade.com, an online trading exchange. One such contract is titled "Global Temperature Anomaly for Dec 2011 to be greater than 0.65 Degrees C." Settlement is based

  14. Probability density distribution of velocity differences at high Reynolds numbers

    NASA Technical Reports Server (NTRS)

    Praskovsky, Alexander A.

    1993-01-01

    Recent understanding of fine-scale turbulence structure in high Reynolds number flows is mostly based on Kolmogorov's original and revised models. The main finding of these models is that intrinsic characteristics of fine-scale fluctuations are universal ones at high Reynolds numbers, i.e., the functional behavior of any small-scale parameter is the same in all flows if the Reynolds number is high enough. The only large-scale quantity that directly affects small-scale fluctuations is the energy flux through a cascade. In dynamical equilibrium between large- and small-scale motions, this flux is equal to the mean rate of energy dissipation epsilon. The pdd of velocity difference is a very important characteristic for both the basic understanding of fully developed turbulence and engineering problems. Hence, it is important to test the findings: (1) the tails of the probability density distribution (pdd) behave as $P(\delta u) \propto \exp(-b(r)\,|\delta u|/\sigma_{\delta u})$, and (2) the logarithmic decrement scales as $b(r) \propto r^{0.15}$ when the separation r lies in the inertial subrange in high Reynolds number laboratory shear flows.

  15. Efficiency issues related to probability density function comparison

    SciTech Connect

    Kelly, P.M.; Cannon, M.; Barros, J.E.

    1996-03-01

    The CANDID project (Comparison Algorithm for Navigating Digital Image Databases) employs probability density functions (PDFs) of localized feature information to represent the content of an image for search and retrieval purposes. A similarity measure between PDFs is used to identify database images that are similar to a user-provided query image. Unfortunately, signature comparison involving PDFs is a very time-consuming operation. In this paper, we look into some efficiency considerations when working with PDFs. Since PDFs can take on many forms, we look into tradeoffs between accurate representation and efficiency of manipulation for several data sets. In particular, we typically represent each PDF as a Gaussian mixture (e.g. as a weighted sum of Gaussian kernels) in the feature space. We find that by constraining all Gaussian kernels to have principal axes that are aligned to the natural axes of the feature space, computations involving these PDFs are simplified. We can also constrain the Gaussian kernels to be hyperspherical rather than hyperellipsoidal, simplifying computations even further, and yielding an order of magnitude speedup in signature comparison. This paper illustrates the tradeoffs encountered when using these constraints.
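
    The simplification from spherical kernels can be seen in a generic correlation-type similarity between two Gaussian-mixture signatures (not necessarily CANDID's exact measure): the cross term between two spherical Gaussians has a closed form, so comparing mixtures costs only pairwise distances between kernel centres.

```python
# Illustration of why spherical Gaussian kernels simplify PDF-signature comparison
# (generic correlation-type similarity, not necessarily CANDID's exact measure):
# the cross term between two spherical Gaussians has the closed form
#   integral N(x; m1, s1^2 I) N(x; m2, s2^2 I) dx
#     = (2*pi*(s1^2+s2^2))**(-d/2) * exp(-||m1-m2||^2 / (2*(s1^2+s2^2))),
# so comparing two mixtures needs only pairwise distances between kernel centres.
import numpy as np

def spherical_cross_term(m1, s1, m2, s2):
    d = m1.shape[-1]
    var = s1 ** 2 + s2 ** 2
    sq = np.sum((m1[:, None, :] - m2[None, :, :]) ** 2, axis=-1)
    return (2 * np.pi * var) ** (-d / 2) * np.exp(-sq / (2 * var))

def mixture_inner(w1, m1, s1, w2, m2, s2):
    return w1 @ spherical_cross_term(m1, s1, m2, s2) @ w2

def similarity(sigA, sigB):
    """Normalized correlation <A,B>/sqrt(<A,A><B,B>) between two PDF signatures."""
    ab = mixture_inner(*sigA, *sigB)
    aa = mixture_inner(*sigA, *sigA)
    bb = mixture_inner(*sigB, *sigB)
    return ab / np.sqrt(aa * bb)

rng = np.random.default_rng(8)
d, k = 16, 8                                     # feature dimension, kernels per signature
sigA = (np.full(k, 1 / k), rng.normal(size=(k, d)), 0.5)
sigB = (np.full(k, 1 / k), rng.normal(size=(k, d)) + 0.2, 0.5)
print("signature similarity:", round(float(similarity(sigA, sigB)), 4))
```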

  16. Efficiency issues related to probability density function comparison

    NASA Astrophysics Data System (ADS)

    Kelly, Patrick M.; Cannon, T. Michael; Barros, Julio E.

    1996-03-01

    The CANDID project (comparison algorithm for navigating digital image databases) employs probability density functions (PDFs) of localized feature information to represent the content of an image for search and retrieval purposes. A similarity measure between PDFs is used to identify database images that are similar to a user-provided query image. Unfortunately, signature comparison involving PDFs is a very time-consuming operation. In this paper, we look into some efficiency considerations when working with PDFs. Since PDFs can take on many forms, we look into tradeoffs between accurate representation and efficiency of manipulation for several data sets. In particular, we typically represent each PDF as a Gaussian mixture (e.g. as a weighted sum of Gaussian kernels) in the feature space. We find that by constraining all Gaussian kernels to have principal axes that are aligned to the natural axes of the feature space, computations involving these PDFs are simplified. We can also constrain the Gaussian kernels to be hyperspherical rather than hyperellipsoidal, simplifying computations even further, and yielding an order of magnitude speedup in signature comparison. This paper illustrates the tradeoffs encountered when using these constraints.

  17. The probability density function (PDF) of Lagrangian Turbulence

    NASA Astrophysics Data System (ADS)

    Birnir, B.

    2012-12-01

    The statistical theory of Lagrangian turbulence is derived from the stochastic Navier-Stokes equation. Assuming that the noise in fully-developed turbulence is a generic noise determined by the general theorems in probability, the central limit theorem and the large deviation principle, we are able to formulate and solve the Kolmogorov-Hopf equation for the invariant measure of the stochastic Navier-Stokes equations. The intermittency corrections to the scaling exponents of the structure functions require a multiplicative noise (multiplying the fluid velocity) in the stochastic Navier-Stokes equation. We let this multiplicative noise consist of a simple (Poisson) jump process and then show how the Feynman-Kac formula produces the log-Poissonian processes found by She and Leveque, Waymire and Dubrulle. These log-Poissonian processes give the intermittency corrections that agree with modern direct Navier-Stokes simulations (DNS) and experiments. The probability density function (PDF) plays a key role when direct Navier-Stokes simulations or experimental results are compared with theory. The statistical theory of turbulence, including the scaling of the structure functions of turbulence, is determined by the invariant measure of the Navier-Stokes equation, and the PDFs for the various statistics (one-point, two-point, N-point) can be obtained by taking the trace of the corresponding invariant measures. Hopf derived in 1952 a functional equation for the characteristic function (Fourier transform) of the invariant measure. In distinction to the nonlinear Navier-Stokes equation, this is a linear functional differential equation. The PDFs obtained from the invariant measures for the velocity differences (two-point statistics) are shown to be the four-parameter generalized hyperbolic distributions found by Barndorff-Nielsen. These PDFs have heavy tails and a convex peak at the origin. A suitable projection of the Kolmogorov-Hopf equations is the

  18. SHORT COMMUNICATION: Assigning probability density functions in a context of information shortage

    NASA Astrophysics Data System (ADS)

    Cordero, Raul R.; Roth, Pedro

    2004-08-01

    In the context of experimental information shortage, uncertainty evaluation of a directly measured quantity involves obtaining its standard uncertainty as the standard deviation of an assigned probability density function (pdf) that is assumed to apply. In this article, we present a criterion to select the appropriate pdf associated with the estimate of a quantity by seeking that pdf which is the most probable among those which agree with the available information. As examples, we apply this criterion to assign the proper pdf to a measurand assuming that we know just its estimate, or both its estimate and its standard uncertainty. Our results agree with those obtained by applying the principle of maximum entropy to both situations.
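
    For reference, the two textbook maximum-entropy assignments that the abstract's criterion is compared against are: bounds only, giving a uniform pdf, and estimate plus standard uncertainty, giving a Gaussian pdf. The sketch below computes the standard uncertainty and a 95% coverage interval for each case with illustrative numbers; the paper's own "most probable pdf" argument may differ in detail.

```python
# Two textbook maximum-entropy assignments, for comparison with the abstract's
# "most probable pdf" criterion (the paper's own argument may differ in detail):
#   * only lower/upper bounds known     -> uniform pdf
#   * estimate + standard uncertainty   -> Gaussian pdf
# For each case, the standard uncertainty and a 95% coverage interval follow directly.
import numpy as np
from scipy.stats import norm

def assign_uniform(lower, upper):
    est = 0.5 * (lower + upper)
    u = (upper - lower) / (2 * np.sqrt(3))          # standard deviation of U(lower, upper)
    half95 = 0.95 * (upper - lower) / 2             # symmetric 95% interval of the uniform pdf
    return est, u, (est - half95, est + half95)

def assign_gaussian(est, u):
    k95 = norm.ppf(0.975)                           # ~1.96
    return est, u, (est - k95 * u, est + k95 * u)

print(assign_uniform(9.9, 10.3))                    # quantity only known to lie in [9.9, 10.3]
print(assign_gaussian(10.1, 0.12))                  # quantity with a stated estimate and uncertainty
```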

  19. Probability density function analysis for optical turbulence with applications to underwater communications systems

    NASA Astrophysics Data System (ADS)

    Bernotas, Marius P.; Nelson, Charles

    2016-05-01

    The Weibull and Exponentiated Weibull probability density functions have been examined for the free space regime using heuristically derived shape and scale parameters. This paper extends current literature to the underwater channel and explores use of experimentally derived parameters. Data gathered in a short range underwater channel emulator was analyzed using a nonlinear curve fitting methodology to optimize the scale and shape parameters of the PDFs. This method provides insight into the scaled effects of underwater optical turbulence on a long range link, and may yield a general set of equations for determining the PDF for an underwater optical link.
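
    A sketch of the nonlinear curve-fitting step on synthetic data: the shape and scale parameters of an Exponentiated Weibull pdf are fitted to a normalized irradiance histogram with scipy's curve_fit. The synthetic sample and the bin count are assumptions of the example; the measured underwater data are not reproduced here.

```python
# Sketch of the abstract's approach on synthetic data: fit the shape/scale parameters
# of an Exponentiated Weibull pdf to a normalized irradiance histogram by nonlinear
# least squares (scipy curve_fit). The measured underwater data are not reproduced here.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import exponweib

# synthetic "received irradiance" samples, arbitrary units
samples = exponweib.rvs(a=2.0, c=1.5, scale=1.0, size=20_000, random_state=10)

hist, edges = np.histogram(samples, bins=80, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

def ew_pdf(x, a, c, scale):
    return exponweib.pdf(x, a, c, loc=0.0, scale=scale)

popt, _ = curve_fit(ew_pdf, centers, hist, p0=(1.0, 1.0, np.mean(samples)),
                    bounds=(1e-3, np.inf))
print("fitted (a, c, scale):", np.round(popt, 3))
```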

  20. Spectral Discrete Probability Density Function of Measured Wind Turbine Noise in the Far Field

    PubMed Central

    Ashtiani, Payam; Denison, Adelaide

    2015-01-01

    Of interest is the spectral character of wind turbine noise at typical residential set-back distances. In this paper, a spectral statistical analysis has been applied to immission measurements conducted at three locations. This method provides discrete probability density functions for the Turbine ONLY component of the measured noise. This analysis is completed for one-third octave sound levels, at integer wind speeds, and is compared to existing metrics for measuring acoustic comfort as well as previous discussions on low-frequency noise sources. PMID:25905097

  1. Spectral discrete probability density function of measured wind turbine noise in the far field.

    PubMed

    Ashtiani, Payam; Denison, Adelaide

    2015-01-01

    Of interest is the spectral character of wind turbine noise at typical residential set-back distances. In this paper, a spectral statistical analysis has been applied to immission measurements conducted at three locations. This method provides discrete probability density functions for the Turbine ONLY component of the measured noise. This analysis is completed for one-third octave sound levels, at integer wind speeds, and is compared to existing metrics for measuring acoustic comfort as well as previous discussions on low-frequency noise sources.

  2. Spectral discrete probability density function of measured wind turbine noise in the far field.

    PubMed

    Ashtiani, Payam; Denison, Adelaide

    2015-01-01

    Of interest is the spectral character of wind turbine noise at typical residential set-back distances. In this paper, a spectral statistical analysis has been applied to immission measurements conducted at three locations. This method provides discrete probability density functions for the Turbine ONLY component of the measured noise. This analysis is completed for one-third octave sound levels, at integer wind speeds, and is compared to existing metrics for measuring acoustic comfort as well as previous discussions on low-frequency noise sources. PMID:25905097

  3. Firing statistics of inhibitory neuron with delayed feedback. I. Output ISI probability density.

    PubMed

    Vidybida, A K; Kravchuk, K G

    2013-06-01

    Activity of an inhibitory neuron with delayed feedback is considered in the framework of point stochastic processes. The neuron receives excitatory input impulses from a Poisson stream, and inhibitory impulses from the feedback line with a delay. We investigate here how the presence of inhibitory feedback affects the output firing statistics. Using the binding neuron (BN) as a model, we derive analytically the exact expressions for the output interspike interval (ISI) probability density, mean output ISI and coefficient of variation as functions of the model's parameters for the case of threshold 2. Using the leaky integrate-and-fire (LIF) model, as well as the BN model with higher thresholds, these statistical quantities are found numerically. In contrast to the previously studied situation of no feedback, the ISI probability densities found here for both the BN and the LIF neuron become bimodal and have a discontinuity of jump type. Nevertheless, the presence of inhibitory delayed feedback was not found to affect substantially the output ISI coefficient of variation, which ranges between 0.5 and 1. It is concluded that the introduction of delayed inhibitory feedback can radically change neuronal output firing statistics. These statistics are also distinct from what was found previously (Vidybida and Kravchuk, 2009) by a similar method for an excitatory neuron with delayed feedback.

  4. Sliding-mode control design for nonlinear systems using probability density function shaping.

    PubMed

    Liu, Yu; Wang, Hong; Hou, Chaohuan

    2014-02-01

    In this paper, we propose a sliding-mode-based stochastic distribution control algorithm for nonlinear systems, where the sliding-mode controller is designed to stabilize the stochastic system and stochastic distribution control tries to shape the sliding surface as close as possible to the desired probability density function. Kullback-Leibler divergence is introduced to the stochastic distribution control, and the parameter of the stochastic distribution controller is updated at each sample interval rather than using a batch mode. It is shown that the estimated weight vector will converge to its ideal value and the system will be asymptotically stable under the rank-condition, which is much weaker than the persistent excitation condition. The effectiveness of the proposed algorithm is illustrated by simulation.

  5. Using skew-logistic probability density function as a model for age-specific fertility rate pattern.

    PubMed

    Asili, Sahar; Rezaei, Sadegh; Najjar, Lotfollah

    2014-01-01

    Fertility rate is one of the most important global indexes. Past researchers have found models that fit age-specific fertility rates. For example, mixture probability density functions have been proposed for situations with bimodal fertility patterns. Such models are less useful for unimodal age-specific fertility rate patterns, so a model based on a skew-symmetric (skew-normal) pdf was proposed by Mazzuco and Scarpa (2011), which is flexible for both unimodal and bimodal fertility patterns. In this paper, we introduce the skew-logistic probability density function as a better model: its residuals are smaller than those of the skew-normal model and it can estimate the parameters of the model more precisely. PMID:24967404
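
    The sketch below fits an Azzalini-type skew-logistic pdf, scaled by a total fertility rate, to a made-up age-specific fertility schedule by nonlinear least squares. Both the ASFR numbers and the exact parameterization are assumptions of the example and may differ from the paper's.

```python
# Sketch of an Azzalini-type skew-logistic pdf fitted to an age-specific fertility
# rate (ASFR) curve by nonlinear least squares. The ASFR values below are made-up
# illustrative numbers, and the paper's exact parameterization may differ.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import logistic

def skew_logistic_pdf(x, loc, scale, shape):
    z = (x - loc) / scale
    return 2.0 / scale * logistic.pdf(z) * logistic.cdf(shape * z)

def asfr_model(age, total_fertility, loc, scale, shape):
    # ASFR integrates (roughly) to the total fertility rate
    return total_fertility * skew_logistic_pdf(age, loc, scale, shape)

ages = np.arange(15, 50)
asfr = np.array([  5, 10, 18, 30, 45, 62, 80, 97, 110, 118, 120, 117, 110, 100,  88,
                  75, 62, 50, 39, 30, 22, 16, 11,  8,   5,   3,   2,   1,   1,   0.5,
                  0.3, 0.2, 0.1, 0.1, 0.05]) / 1000.0   # hypothetical births per woman per year

popt, _ = curve_fit(asfr_model, ages, asfr, p0=(2.0, 27.0, 5.0, 1.0))
print("TFR, loc, scale, shape:", np.round(popt, 2))
```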

  6. Comparative assessment of surface fluxes from different sources using probability density distributions

    NASA Astrophysics Data System (ADS)

    Gulev, Sergey; Tilinina, Natalia; Belyaev, Konstantin

    2015-04-01

    Surface turbulent heat fluxes from modern-era and first-generation reanalyses (NCEP-DOE, ERA-Interim, MERRA, NCEP-CFSR, JRA) as well as from satellite products (SEAFLUX, IFREMER, HOAPS) were intercompared using the framework of probability distributions for sensible and latent heat fluxes. For approximation of the probability distributions and estimation of extreme flux values, the Modified Fisher-Tippett (MFT) distribution has been used. Besides mean flux values, consideration is given to the comparative analysis of (i) parameters of the MFT probability density functions (scale and location), (ii) extreme flux values corresponding to high-order percentiles of fluxes (e.g. the 99th and higher) and (iii) the fractional contribution of extreme surface flux events to the total surface turbulent fluxes integrated over months and seasons. The latter was estimated using both the fractional distribution derived from the MFT and empirical estimates based upon occurrence histograms. The strongest differences in the parameters of the probability distributions of surface fluxes and extreme surface flux values between different reanalyses are found in the western boundary current extension regions and at high latitudes, while the highest differences in the fractional contributions of surface fluxes may occur in mid-ocean regions, being closely associated with atmospheric synoptic dynamics. Generally, satellite surface flux products demonstrate relatively stronger extreme fluxes compared to reanalyses, even in the Northern Hemisphere midlatitudes where the data assimilation input in reanalyses is quite dense compared to the Southern Ocean regions.

  7. Incorporating photometric redshift probability density information into real-space clustering measurements

    NASA Astrophysics Data System (ADS)

    Myers, Adam D.; White, Martin; Ball, Nicholas M.

    2009-11-01

    The use of photometric redshifts in cosmology is increasing. Often, however, these photo-z are treated like spectroscopic observations, in that the peak of the photometric redshift, rather than the full probability density function (PDF), is used. This overlooks useful information inherent in the full PDF. We introduce a new real-space estimator for one of the most used cosmological statistics, the two-point correlation function, that weights by the PDF of individual photometric objects in a manner that is optimal when Poisson statistics dominate. As our estimator does not bin based on the PDF peak, it substantially enhances the clustering signal by usefully incorporating information from all photometric objects that overlap the redshift bin of interest. As a real-world application, we measure quasi-stellar object (QSO) clustering in the Sloan Digital Sky Survey (SDSS). We find that our simplest binned estimator improves the clustering signal by a factor equivalent to increasing the survey size by a factor of 2-3. We also introduce a new implementation that fully weights between pairs of objects in constructing the cross-correlation and find that this pair-weighted estimator improves clustering signal in a manner equivalent to increasing the survey size by a factor of 4-5. Our technique uses spectroscopic data to anchor the distance scale and it will be particularly useful where spectroscopic data (e.g. from BOSS) overlap deeper photometry (e.g. from Pan-STARRS, DES or the LSST). We additionally provide simple, informative expressions to determine when our estimator will be competitive with the autocorrelation of spectroscopic objects. Although we use QSOs as an example population, our estimator can and should be applied to any clustering estimate that uses photometric objects.

  8. Simulation Of Wave Function And Probability Density Of Modified Poschl Teller Potential Derived Using Supersymmetric Quantum Mechanics

    SciTech Connect

    Angraini, Lily Maysari; Suparmi,; Variani, Viska Inda

    2010-12-23

    SUSY quantum mechanics can be applied to solve the Schrodinger equation for high-dimensional systems that can be reduced to a one-dimensional system and represented in terms of lowering and raising operators. The lowering and raising operators can be obtained using the relationship between the original Hamiltonian and the (super)potential. In this paper SUSY quantum mechanics is used as a method to obtain the wave function and the energy levels of the modified Poschl-Teller potential. The wave function and the probability density are plotted using the Delphi 7.0 programming language. Finally, the expectation value of a quantum mechanical operator can be calculated analytically using the integral form or the probability density graph produced by the program.

  9. Entrainment Rate in Shallow Cumuli: Dependence on Entrained Dry Air Sources and Probability Density Functions

    NASA Astrophysics Data System (ADS)

    Lu, C.; Liu, Y.; Niu, S.; Vogelmann, A. M.

    2012-12-01

    In situ aircraft cumulus observations from the RACORO field campaign are used to estimate entrainment rate for individual clouds using a recently developed mixing fraction approach. The entrainment rate is computed based on the observed state of the cloud core and the state of the air that is laterally mixed into the cloud at its edge. The computed entrainment rate decreases when the air is entrained from increasing distance from the cloud core edge; this is because the air farther away from cloud edge is drier than the neighboring air that is within the humid shells around cumulus clouds. Probability density functions of entrainment rate are well fitted by lognormal distributions at different heights above cloud base for different dry air sources (i.e., different source distances from the cloud core edge). Such lognormal distribution functions are appropriate for inclusion into future entrainment rate parameterization in large scale models. To the authors' knowledge, this is the first time that probability density functions of entrainment rate have been obtained in shallow cumulus clouds based on in situ observations. The reason for the wide spread of entrainment rate is that the observed clouds are affected by entrainment mixing processes to different extents, which is verified by the relationships between the entrainment rate and cloud microphysics/dynamics. The entrainment rate is negatively correlated with liquid water content and cloud droplet number concentration due to the dilution and evaporation in entrainment mixing processes. The entrainment rate is positively correlated with relative dispersion (i.e., ratio of standard deviation to mean value) of liquid water content and droplet size distributions, consistent with the theoretical expectation that entrainment mixing processes are responsible for microphysics fluctuations and spectral broadening. The entrainment rate is negatively correlated with vertical velocity and dissipation rate because entrainment
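
    The lognormal-fit step is straightforward to reproduce on synthetic data: a minimal sketch, assuming per-cloud entrainment rates with an arbitrary median and spread, fits scipy's lognorm with the location fixed at zero and reports a goodness-of-fit check.

```python
# Minimal sketch: fit a lognormal distribution to a sample of per-cloud entrainment
# rates and check the fit, as the abstract reports for the RACORO clouds. The values
# here are synthetic; units (e.g. km^-1) are illustrative only.
import numpy as np
from scipy.stats import lognorm, kstest

rng = np.random.default_rng(11)
entrainment = rng.lognormal(mean=np.log(1.2), sigma=0.6, size=400)   # synthetic rates

shape, loc, scale = lognorm.fit(entrainment, floc=0)      # fix loc=0 for a pure lognormal
print(f"fitted sigma={shape:.2f}, median={scale:.2f}")
print(kstest(entrainment, "lognorm", args=(shape, loc, scale)))
```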

  10. Predicting Ligand Binding Sites on Protein Surfaces by 3-Dimensional Probability Density Distributions of Interacting Atoms

    PubMed Central

    Jian, Jhih-Wei; Elumalai, Pavadai; Pitti, Thejkiran; Wu, Chih Yuan; Tsai, Keng-Chang; Chang, Jeng-Yih; Peng, Hung-Pin; Yang, An-Suei

    2016-01-01

    Predicting ligand binding sites (LBSs) on protein structures, which are obtained either from experimental or computational methods, is a useful first step in functional annotation or structure-based drug design for the protein structures. In this work, the structure-based machine learning algorithm ISMBLab-LIG was developed to predict LBSs on protein surfaces with input attributes derived from the three-dimensional probability density maps of interacting atoms, which were reconstructed on the query protein surfaces and were relatively insensitive to local conformational variations of the tentative ligand binding sites. The prediction accuracy of the ISMBLab-LIG predictors is comparable to that of the best LBS predictors benchmarked on several well-established testing datasets. More importantly, the ISMBLab-LIG algorithm has substantial tolerance to the prediction uncertainties of computationally derived protein structure models. As such, the method is particularly useful for predicting LBSs not only on experimental protein structures without known LBS templates in the database but also on computationally predicted model protein structures with structural uncertainties in the tentative ligand binding sites. PMID:27513851

  11. Development and evaluation of probability density functions for a set of human exposure factors

    SciTech Connect

    Maddalena, R.L.; McKone, T.E.; Bodnar, A.; Jacobson, J.

    1999-06-01

    The purpose of this report is to describe efforts carried out during 1998 and 1999 at the Lawrence Berkeley National Laboratory to assist the U.S. EPA in developing and ranking the robustness of a set of default probability distributions for exposure assessment factors. Among the current needs of the exposure-assessment community is the need to provide data for linking exposure, dose, and health information in ways that improve environmental surveillance, improve predictive models, and enhance risk assessment and risk management (NAS, 1994). The U.S. Environmental Protection Agency (EPA) Office of Emergency and Remedial Response (OERR) plays a lead role in developing national guidance and planning future activities that support the EPA Superfund Program. OERR is in the process of updating its 1989 Risk Assessment Guidance for Superfund (RAGS) as part of the EPA Superfund reform activities. Volume III of RAGS, when completed in 1999, will provide guidance for conducting probabilistic risk assessments. This revised document will contain technical information including probability density functions (PDFs) and methods used to develop and evaluate these PDFs. The PDFs provided in this EPA document are limited to those relating to exposure factors.

  12. Evaluation of joint probability density function models for turbulent nonpremixed combustion with complex chemistry

    NASA Technical Reports Server (NTRS)

    Smith, N. S. A.; Frolov, S. M.; Bowman, C. T.

    1996-01-01

    Two types of mixing sub-models are evaluated in connection with a joint-scalar probability density function method for turbulent nonpremixed combustion. Model calculations are made and compared to simulation results for homogeneously distributed methane-air reaction zones mixing and reacting in decaying turbulence within a two-dimensional enclosed domain. The comparison is arranged to ensure that both the simulation and model calculations a) make use of exactly the same chemical mechanism, b) do not involve non-unity Lewis number transport of species, and c) are free from radiation loss. The modified Curl mixing sub-model was found to provide superior predictive accuracy over the simple relaxation-to-mean submodel in the case studied. Accuracy to within 10-20% was found for global means of major species and temperature; however, nitric oxide prediction accuracy was lower and highly dependent on the choice of mixing sub-model. Both mixing submodels were found to produce non-physical mixing behavior for mixture fractions removed from the immediate reaction zone. A suggestion for a further modified Curl mixing sub-model is made in connection with earlier work done in the field.
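
    An illustrative sketch of the two mixing sub-models acting on an ensemble of notional scalar particles: the relaxation-to-mean (IEM-type) model pulls every particle toward the ensemble mean, while a modified-Curl step mixes random pairs by a uniformly distributed extent. The mixing constants and time step are illustrative assumptions; the abstract's joint-scalar PDF transport and chemistry are not reproduced.

```python
# Illustrative sketch of the two mixing sub-models compared in the abstract, acting on
# an ensemble of notional scalar "particles": relaxation-to-mean (IEM) and a modified
# Curl pairwise-mixing step. Mixing frequencies/constants are illustrative only.
import numpy as np

rng = np.random.default_rng(12)

def iem_step(phi, omega, dt, c_phi=2.0):
    """Relaxation-to-mean: every particle decays toward the ensemble mean."""
    mean = phi.mean()
    return mean + (phi - mean) * np.exp(-0.5 * c_phi * omega * dt)

def modified_curl_step(phi, omega, dt, c_phi=2.0):
    """Modified Curl: random pairs mix by a uniformly distributed extent."""
    phi = phi.copy()
    n = phi.size
    n_pairs = rng.poisson(0.5 * c_phi * omega * dt * n)      # expected number of mixing events
    for _ in range(n_pairs):
        i, j = rng.choice(n, size=2, replace=False)
        extent = rng.uniform()                                # partial mixing
        mean_pair = 0.5 * (phi[i] + phi[j])
        phi[i] += extent * (mean_pair - phi[i])
        phi[j] += extent * (mean_pair - phi[j])
    return phi

phi0 = rng.choice([0.0, 1.0], size=5000).astype(float)       # bimodal initial scalar field
a = b = phi0
for _ in range(50):
    a, b = iem_step(a, omega=10.0, dt=0.01), modified_curl_step(b, omega=10.0, dt=0.01)

# IEM shrinks the PDF toward the mean without changing its shape, while Curl-type
# mixing reshapes it; that shape difference is why species predictions differ.
print("variance after mixing  IEM:", round(a.var(), 4), "  modified Curl:", round(b.var(), 4))
```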

  13. Dynamic Graphics in Excel for Teaching Statistics: Understanding the Probability Density Function

    ERIC Educational Resources Information Center

    Coll-Serrano, Vicente; Blasco-Blasco, Olga; Alvarez-Jareno, Jose A.

    2011-01-01

    In this article, we show a dynamic graphic in Excel that is used to introduce an important concept in our subject, Statistics I: the probability density function. This interactive graphic seeks to facilitate conceptual understanding of the main aspects analysed by the learners.

  14. Derivation of Probability Density Function of Signal-to-Interference-Plus-Noise Ratio for the MS-to-MS Interference Analysis

    PubMed Central

    2013-01-01

    This paper provides an analytical derivation of the probability density function of signal-to-interference-plus-noise ratio in the scenario where mobile stations interfere with each other. This analysis considers cochannel interference and adjacent channel interference. This could also remove the need for Monte Carlo simulations when evaluating the interference effect between mobile stations. Numerical verification shows that the analytical result agrees well with a Monte Carlo simulation. Also, we applied analytical methods for evaluating the interference effect between mobile stations using adjacent frequency bands. The analytical derivation of the probability density function can be used to provide the technical criteria for sharing a frequency band. PMID:24453792

  15. A method of estimating optimal catchment model parameters

    NASA Astrophysics Data System (ADS)

    Ibrahim, Yaacob; Liong, Shie-Yui

    1993-09-01

    A review of a calibration method developed earlier (Ibrahim and Liong, 1992) is presented. The method generates optimal values for single events. It entails randomizing the calibration parameters over bounds such that a system response under consideration is bounded. Within the bounds, which are narrow and generated automatically, an explicit response surface representation of the response is obtained using experimental design techniques and regression analysis. The optimal values are obtained by searching on the response surface for a point at which the predicted response is equal to the measured response and the value of the joint probability density function at that point in a transformed space is the highest. The method is demonstrated on a catchment in Singapore. The issue of global optimal values is addressed by applying the method to wider bounds. The results indicate that the optimal values arising from the narrow set of bounds are, indeed, global. Improvements designed to achieve comparably accurate estimates at less expense are introduced. A linear response surface model is used. Two approximations of the model are studied. The first is to fit the model using data points generated from simple Monte Carlo simulation; the second is to approximate the model by a Taylor series expansion. Very good results are obtained from both approximations. Two methods of obtaining a single estimate from the individual events' estimates of the parameters are presented. The simulated and measured hydrographs of four verification storms using these estimates compare quite well.

  16. Recent work on the probability density function (PDF) of concentration in atmospheric dispersion

    SciTech Connect

    Chatwin, P.C.; Lewis, D.M.; Mole, N.; Sullivan, P.J.

    1996-12-31

    Since Pasquill's classic research, there have been revolutionary changes in the tools available to workers in air pollution meteorology. These include vast improvements in concentration sensors and data acquisition systems, and (of course) more and more powerful computers. While it has always been realized that the concentration $\Gamma$ of a pollutant is a random function of position and time, it is now widely recognized as a consequence of these developments that the random fluctuations of $\Gamma$ about its mean $\mu$ are large. In particular, methods of risk assessment that ignore these fluctuations are likely to be seriously wrong. A fundamental quantity for describing fluctuations is the pdf (probability density function) $p(\theta; x, t)$ of concentration $\Gamma(x, t)$, where $(x, t)$ denotes position in space-time and $p(\theta; x, t) = (d/d\theta)\,\mathrm{prob}[\Gamma(x, t) \le \theta]$. Here $\theta$ denotes the range of possible concentration values. The mean concentration $\mu(x, t)$ and the central moments of concentration $\mu_n(x, t)$, $n > 1$, are related to $p(\theta; x, t)$ by $\mu = \int_0^\infty \theta\, p\, d\theta$ and $\mu_n = \int_0^\infty (\theta - \mu)^n p\, d\theta$. The rms concentration fluctuation (standard deviation) $\sigma(x, t)$, the skewness $S(x, t)$ and the kurtosis $K(x, t)$ are defined as follows: $\sigma^2 = \mu_2$, $S = \mu_3/\sigma^3$, $K = \mu_4/\sigma^4$.

  17. The role of presumed probability density functions in the simulation of nonpremixed turbulent combustion

    NASA Astrophysics Data System (ADS)

    Coclite, A.; Pascazio, G.; De Palma, P.; Cutrone, L.

    2016-07-01

    Flamelet-Progress-Variable (FPV) combustion models allow the evaluation of all thermochemical quantities in a reacting flow by computing only the mixture fraction Z and a progress variable C. When using such a method to predict turbulent combustion in conjunction with a turbulence model, a probability density function (PDF) is required to evaluate statistical averages (e.g., Favre averages) of chemical quantities. The choice of the PDF is a compromise between computational costs and accuracy level. The aim of this paper is to investigate the influence of the PDF choice and its modeling aspects on the prediction of turbulent combustion. Three different models are considered: the standard one, based on the choice of a β-distribution for Z and a Dirac-distribution for C; a model employing a β-distribution for both Z and C; and a third model obtained using a β-distribution for Z and the statistically most likely distribution (SMLD) for C. The standard model, although widely used, does not take into account the interaction between turbulence and chemical kinetics, nor the dependence of the progress variable on its variance in addition to its mean. The SMLD approach establishes a systematic framework to incorporate information from an arbitrary number of moments, thus providing an improvement over conventionally employed presumed PDF closure models. The rationale behind the choice of the three PDFs is described in some detail and the prediction capability of the corresponding models is tested against well-known test cases, namely, the Sandia flames, and H2-air supersonic combustion.
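
    The standard presumed-PDF step can be sketched directly: the beta parameters follow from the transported mean and variance of Z, and any flamelet quantity is averaged by quadrature against that beta density. The flamelet "temperature" profile below is hypothetical; the SMLD closure for the progress variable is not implemented here.

```python
# Sketch of the "standard" presumed-PDF step: average a thermochemical quantity
# phi(Z) over a beta distribution in mixture fraction Z whose parameters come from
# the transported mean and variance of Z. This illustrates the beta-PDF closure only,
# not the SMLD closure proposed for the progress variable.
import numpy as np
from scipy.stats import beta
from scipy.integrate import quad

def presumed_beta_average(phi, z_mean, z_var):
    # valid for 0 < z_var < z_mean * (1 - z_mean)
    gamma = z_mean * (1.0 - z_mean) / z_var - 1.0
    a, b = z_mean * gamma, (1.0 - z_mean) * gamma
    value, _ = quad(lambda z: phi(z) * beta.pdf(z, a, b), 0.0, 1.0)
    return value

# hypothetical flamelet quantity: temperature peaking near a "stoichiometric" Z of 0.3
phi = lambda z: 300.0 + 1700.0 * np.exp(-((z - 0.3) / 0.1) ** 2)

print("mean T, low variance :", round(presumed_beta_average(phi, 0.3, 0.002), 1))
print("mean T, high variance:", round(presumed_beta_average(phi, 0.3, 0.02), 1))
```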

  18. Model-based prognostics for batteries which estimates useful life and uses a probability density function

    NASA Technical Reports Server (NTRS)

    Saha, Bhaskar (Inventor); Goebel, Kai F. (Inventor)

    2012-01-01

    This invention develops a mathematical model to describe battery behavior during individual discharge cycles as well as over its cycle life. The basis for the form of the model has been linked to the internal processes of the battery and validated using experimental data. Effects of temperature and load current have also been incorporated into the model. Subsequently, the model has been used in a Particle Filtering framework to make predictions of remaining useful life for individual discharge cycles as well as for cycle life. The prediction performance was found to be satisfactory as measured by performance metrics customized for prognostics for a sample case. The work presented here provides initial steps towards a comprehensive health management solution for energy storage devices.

  19. Relation between the probability density and other properties of a stationary random process.

    PubMed

    Sokolov, I M

    1999-09-01

    We consider the Pope-Ching differential equation [Phys. Fluids A 5, 1529 (1993)] connecting the probability density $p_x(x)$ of a stationary, homogeneous stochastic process x(t) and the conditional moments of its squared velocity and acceleration. We show that the solution of the Pope-Ching equation can be expressed as $n(x)\,\langle |v|^{-1} \rangle$, where n(x) is the mean number of crossings of the x level per unit time and $\langle |v|^{-1} \rangle$ is the mean inverse velocity of crossing. This result shows that the probability density at x is fully determined by a one-point measurement of crossing velocities, and does not imply knowledge of the x(t) behavior outside of the infinitesimally narrow window near x. PMID:11970158

  20. Research on Parameter Estimation Methods for Alpha Stable Noise in a Laser Gyroscope's Random Error.

    PubMed

    Wang, Xueyun; Li, Kui; Gao, Pengyu; Meng, Suxia

    2015-01-01

    Alpha stable noise, determined by four parameters, has been found in the random error of a laser gyroscope. Accurate estimation of the four parameters is the key process for analyzing the properties of alpha stable noise. Three widely used estimation methods, the quantile, empirical characteristic function (ECF) and logarithmic moment methods, are analyzed and compared against Monte Carlo simulation in this paper. The estimation accuracy and the application conditions of all methods, as well as the causes of poor estimation accuracy, are illustrated. Finally, the highest-precision method, the ECF, is applied to 27 groups of experimental data to estimate the parameters of alpha stable noise in a laser gyroscope's random error. The cumulative probability density curve of the experimental data fitted by an alpha stable distribution is better than that by a Gaussian distribution, which verifies the existence of alpha stable noise in a laser gyroscope's random error.
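
    A small sketch of why the Gaussian fit in the last sentence falls short: symmetric alpha-stable samples are generated with scipy's levy_stable, a Gaussian is fitted to them, and the empirical tail probability is compared with the Gaussian prediction. The alpha value and sample size are illustrative assumptions, not the gyroscope data.

```python
# Sketch of why a Gaussian fit under-represents laser-gyro noise when alpha-stable
# components are present: generate symmetric alpha-stable samples (scipy), fit a
# Gaussian, and compare tail probabilities. Parameter values are illustrative only.
import numpy as np
from scipy.stats import levy_stable, norm

alpha, beta_skew = 1.7, 0.0                      # heavy-tailed, symmetric
x = levy_stable.rvs(alpha, beta_skew, loc=0.0, scale=1.0, size=20_000, random_state=14)

mu, sigma = norm.fit(x)                          # a Gaussian fit ignores the heavy tails
threshold = 5 * sigma
empirical_tail = np.mean(np.abs(x - mu) > threshold)
gaussian_tail = 2 * norm.sf(threshold / sigma)

print(f"P(|x - mu| > 5 sigma): empirical {empirical_tail:.4f} vs Gaussian {gaussian_tail:.2e}")
```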

  1. On the scaling of probability density functions with apparent power-law exponents less than unity

    NASA Astrophysics Data System (ADS)

    Christensen, K.; Farid, N.; Pruessner, G.; Stapleton, M.

    2008-04-01

    We derive general properties of the finite-size scaling of probability density functions and show that when the apparent exponent $\tilde{\tau}$ of a probability density is less than 1, the associated finite-size scaling ansatz has a scaling exponent $\tau$ equal to 1, provided that the fraction of events in the universal scaling part of the probability density function is non-vanishing in the thermodynamic limit. We find the general result that $\tau \ge 1$ and $\tau \ge \tilde{\tau}$. Moreover, we show that if the scaling function $\mathcal{G}(x)$ approaches a non-zero constant for small arguments, $\lim_{x \to 0} \mathcal{G}(x) > 0$, then $\tau = \tilde{\tau}$. However, if the scaling function vanishes for small arguments, $\lim_{x \to 0} \mathcal{G}(x) = 0$, then $\tau = 1$, again assuming a non-vanishing fraction of universal events. Finally, we apply the formalism developed to examples from the literature, including some where misunderstandings of the theory of scaling have led to erroneous conclusions.

  2. A projection and density estimation method for knowledge discovery.

    PubMed

    Stanski, Adam; Hellwich, Olaf

    2012-01-01

    A key ingredient of modern data analysis is probability density estimation. However, it is well known that the curse of dimensionality prevents a proper estimation of densities in high dimensions. The problem is typically circumvented by using a fixed set of assumptions about the data, e.g., by assuming partial independence of features, data on a manifold, or a customized kernel. These fixed assumptions limit the applicability of a method. In this paper we propose a framework that uses a flexible set of assumptions instead. It allows a model to be tailored to various problems by means of 1D decompositions. The approach achieves a fast runtime and is not limited by the curse of dimensionality, as all estimations are performed in 1D space. The wide range of applications is demonstrated on two very different real-world examples. The first is a data mining software package that allows the fully automatic discovery of patterns; the software is publicly available for evaluation. As a second example, an image segmentation method is realized. It achieves state-of-the-art performance on a benchmark dataset although it uses only a fraction of the training data and very simple features.

  3. A Projection and Density Estimation Method for Knowledge Discovery

    PubMed Central

    Stanski, Adam; Hellwich, Olaf

    2012-01-01

    A key ingredient of modern data analysis is probability density estimation. However, it is well known that the curse of dimensionality prevents a proper estimation of densities in high dimensions. The problem is typically circumvented by using a fixed set of assumptions about the data, e.g., by assuming partial independence of features, data on a manifold, or a customized kernel. These fixed assumptions limit the applicability of a method. In this paper we propose a framework that uses a flexible set of assumptions instead. It allows a model to be tailored to various problems by means of 1D decompositions. The approach achieves a fast runtime and is not limited by the curse of dimensionality, as all estimations are performed in 1D space. The wide range of applications is demonstrated on two very different real-world examples. The first is a data mining software package that allows the fully automatic discovery of patterns; the software is publicly available for evaluation. As a second example, an image segmentation method is realized. It achieves state-of-the-art performance on a benchmark dataset although it uses only a fraction of the training data and very simple features. PMID:23049675
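
    The following is a generic illustration of the projection-plus-1D-estimation idea, not the authors' exact algorithm: the data are projected onto their principal axes and the joint density is modeled as a product of one-dimensional kernel density estimates, so that every estimation step is performed in 1D.

      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(2)
      n, d = 2000, 10
      A = rng.normal(size=(d, d))
      X = rng.normal(size=(n, d)) @ A.T          # correlated 10-dimensional data

      # Principal axes via SVD of the centered data.
      mean = X.mean(axis=0)
      _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
      proj = (X - mean) @ Vt.T                   # coordinates along the principal axes

      # One 1D kernel density estimate per projected coordinate.
      kdes = [gaussian_kde(proj[:, j]) for j in range(d)]

      def log_density(x_new):
          # Approximate log-density under the product-of-1D-KDEs model.
          z = (np.atleast_2d(x_new) - mean) @ Vt.T
          return sum(np.log(kdes[j](z[:, j])) for j in range(d))

      print(log_density(X[:3]))                  # log-density of three training points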

  4. PDV Uncertainty Estimation & Methods Comparison

    SciTech Connect

    Machorro, E.

    2011-11-01

    Several methods are presented for estimating the rapidly changing instantaneous frequency of a time varying signal that is contaminated by measurement noise. Useful a posteriori error estimates for several methods are verified numerically through Monte Carlo simulation. However, given the sampling rates of modern digitizers, sub-nanosecond variations in velocity are shown to be reliably measurable in most (but not all) cases. Results support the hypothesis that in many PDV regimes of interest, sub-nanosecond resolution can be achieved.
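
    One simple baseline among the spectral methods alluded to above is short-time Fourier (spectrogram) peak tracking, sketched below; the chirp, the noise level, the digitizer rate, and the 1550 nm probe wavelength are illustrative assumptions rather than parameters from the report.

      import numpy as np
      from scipy.signal import spectrogram

      fs = 20e9                                  # assumed digitizer rate, 20 GS/s
      t = np.arange(0, 2e-6, 1.0 / fs)
      f_inst = 1.0e9 + 0.5e9 * (t / t[-1])       # assumed linearly rising beat frequency
      phase = 2.0 * np.pi * np.cumsum(f_inst) / fs
      sig = np.cos(phase) + 0.3 * np.random.default_rng(3).normal(size=t.size)

      f, tt, Sxx = spectrogram(sig, fs=fs, nperseg=1024, noverlap=768)
      f_peak = f[np.argmax(Sxx, axis=0)]         # peak frequency in each time slice

      wavelength = 1550e-9                       # assumed probe wavelength (m)
      velocity = wavelength * f_peak / 2.0       # basic PDV relation v = lambda * f_beat / 2
      print(velocity[:5])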

  5. A model for the probability density function of downwelling irradiance under ocean waves.

    PubMed

    Shen, Meng; Xu, Zao; Yue, Dick K P

    2011-08-29

    We present a statistical model that analytically quantifies the probability density function (PDF) of the downwelling light irradiance under random ocean waves, modeling the surface as independent and identically distributed flat facets. The model can incorporate the separate effects of surface short waves and volume light scattering. The theoretical model captures the characteristics of the PDF, from a skewed to a near-Gaussian shape, as the depth increases from shallow to deep water. The model yields a closed-form asymptote for the tail probability, which decays at a rate between exponential and Gaussian for increasingly extreme values. The model is validated by comparisons with existing field measurements and Monte Carlo simulation.

  6. Breather turbulence versus soliton turbulence: Rogue waves, probability density functions, and spectral features.

    PubMed

    Akhmediev, N; Soto-Crespo, J M; Devine, N

    2016-08-01

    Turbulence in integrable systems exhibits a noticeable scientific advantage: it can be expressed in terms of the nonlinear modes of these systems. Whether the majority of the excitations in the system are breathers or solitons defines the properties of the turbulent state. In the two extreme cases we can call such states "breather turbulence" or "soliton turbulence." The number of rogue waves, the probability density functions of the chaotic wave fields, and their physical spectra are all specific for each of these two situations. Understanding these extreme cases also helps in studies of mixed turbulent states when the wave field contains both solitons and breathers, thus revealing intermediate characteristics. PMID:27627303

  7. Eulerian Mapping Closure Approach for Probability Density Function of Concentration in Shear Flows

    NASA Technical Reports Server (NTRS)

    He, Guowei; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    The Eulerian mapping closure approach is developed for uncertainty propagation in computational fluid mechanics. The approach is used to study the probability density function (PDF) for the concentration of species advected by a random shear flow. An analytical argument shows that the fluctuation of the concentration field at one point in space is non-Gaussian and exhibits a stretched-exponential form. An Eulerian mapping approach provides an appropriate approximation to both the convection and diffusion terms and leads to a closed mapping equation. The results describe the evolution of an initially Gaussian field and are in agreement with direct numerical simulations.

  8. Breather turbulence versus soliton turbulence: Rogue waves, probability density functions, and spectral features

    NASA Astrophysics Data System (ADS)

    Akhmediev, N.; Soto-Crespo, J. M.; Devine, N.

    2016-08-01

    Turbulence in integrable systems exhibits a noticeable scientific advantage: it can be expressed in terms of the nonlinear modes of these systems. Whether the majority of the excitations in the system are breathers or solitons defines the properties of the turbulent state. In the two extreme cases we can call such states "breather turbulence" or "soliton turbulence." The number of rogue waves, the probability density functions of the chaotic wave fields, and their physical spectra are all specific for each of these two situations. Understanding these extreme cases also helps in studies of mixed turbulent states when the wave field contains both solitons and breathers, thus revealing intermediate characteristics.

  9. A biology-driven receptor model for daily pollen allergy risk in Korea based on Weibull probability density function

    NASA Astrophysics Data System (ADS)

    Kim, Kyu Rang; Kim, Mijin; Choe, Ho-Seong; Han, Mae Ja; Lee, Hye-Rim; Oh, Jae-Won; Kim, Baek-Jo

    2016-07-01

    Pollen is an important cause of respiratory allergic reactions. As individual sanitation has improved, allergy risk has increased, and this trend is expected to continue due to climate change. Atmospheric pollen concentration is highly influenced by weather conditions. Regression analysis and modeling of the relationships between airborne pollen concentrations and weather conditions were performed to analyze and forecast pollen conditions. Traditionally, daily pollen concentration has been estimated using regression models that describe the relationships between observed pollen concentrations and weather conditions. These models were able to forecast daily concentrations at the sites of observation, but lacked broader spatial applicability beyond those sites. To overcome this limitation, an integrated modeling scheme was developed that is designed to represent the underlying processes of pollen production and distribution. A maximum potential for airborne pollen is first determined using the Weibull probability density function. Then, daily pollen concentration is estimated using multiple regression models. Daily risk grade levels are determined based on the risk criteria used in Korea. The mean percentages of agreement between the observed and estimated levels were 81.4-88.2 % and 92.5-98.5 % for oak and Japanese hop pollens, respectively. The new models estimated daily pollen risk more accurately than the original statistical models because of the newly integrated biological response curves. They overestimated the seasonal mean concentration, however, and did not simulate all of the peak concentrations. This issue could be resolved by adding more variables that affect the prevalence and internal maturity of pollens.
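
    A small sketch of the Weibull-envelope idea follows: a Weibull probability density over the day of the pollen season defines a maximum potential, which a (hypothetical) daily weather factor then modulates before concentrations are mapped to risk grades; the shape, scale, regression coefficients, and grade thresholds are invented for illustration.

      import numpy as np
      from scipy.stats import weibull_min

      days = np.arange(1, 61)                        # day of the pollen season
      shape, scale = 2.2, 25.0                       # assumed Weibull parameters
      potential = weibull_min.pdf(days, shape, scale=scale)
      potential /= potential.max()                   # normalized envelope (max = 1)

      # Hypothetical daily weather factor in [0, 1] from temperature and rain.
      temp = 15 + 10 * np.sin(np.pi * days / 60)
      rain = (days % 7 == 0).astype(float)
      weather_factor = np.clip(0.05 * (temp - 10) - 0.4 * rain, 0.0, 1.0)

      peak_concentration = 800.0                     # assumed seasonal peak (grains/m^3)
      daily_concentration = peak_concentration * potential * weather_factor

      # Map concentrations to example risk grades (thresholds are illustrative only).
      grades = np.digitize(daily_concentration, [50, 200, 500])
      print(list(zip(days[:10], daily_concentration[:10].round(1), grades[:10])))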

  10. Parameter estimation of social forces in pedestrian dynamics models via a probabilistic method.

    PubMed

    Corbetta, Alessandro; Muntean, Adrian; Vafayi, Kiamars

    2015-04-01

    Focusing on a specific crowd dynamics situation, including real life experiments and measurements, our paper targets a twofold aim: (1) we present a Bayesian probabilistic method to estimate the value and the uncertainty (in the form of a probability density function) of parameters in crowd dynamic models from the experimental data; and (2) we introduce a fitness measure for the models to classify a couple of model structures (forces) according to their fitness to the experimental data, preparing the stage for a more general model-selection and validation strategy inspired by probabilistic data analysis. Finally, we review the essential aspects of our experimental setup and measurement technique.
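
    The sketch below illustrates, in a generic way, what a Bayesian estimate of a model parameter as a probability density looks like: a grid posterior under a Gaussian observation model for a toy repulsive interaction. It is not the paper's social-force model or its measurement data.

      import numpy as np

      rng = np.random.default_rng(4)

      def model_acceleration(distance, strength):
          # Toy repulsive interaction: acceleration decays exponentially with distance.
          return strength * np.exp(-distance)

      # Synthetic observations generated with a "true" strength of 1.5.
      dist = rng.uniform(0.2, 3.0, size=200)
      obs = model_acceleration(dist, 1.5) + rng.normal(0.0, 0.1, size=dist.size)

      strength_grid = np.linspace(0.0, 4.0, 801)
      sigma = 0.1
      log_post = np.empty_like(strength_grid)        # flat prior over the grid
      for i, s in enumerate(strength_grid):
          resid = obs - model_acceleration(dist, s)
          log_post[i] = -0.5 * np.sum((resid / sigma) ** 2)

      post = np.exp(log_post - log_post.max())
      post /= np.trapz(post, strength_grid)          # normalized posterior PDF

      mean = np.trapz(strength_grid * post, strength_grid)
      std = np.sqrt(np.trapz((strength_grid - mean) ** 2 * post, strength_grid))
      print("posterior mean %.3f +/- %.3f" % (mean, std))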

  11. Ensemble Averaged Probability Density Function (APDF) for Compressible Turbulent Reacting Flows

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Liu, Nan-Suey

    2012-01-01

    In this paper, we present a concept of the averaged probability density function (APDF) for studying compressible turbulent reacting flows. The APDF is defined as an ensemble average of the fine grained probability density function (FG-PDF) with a mass density weighting. It can be used to exactly deduce the mass density weighted, ensemble averaged turbulent mean variables. The transport equation for APDF can be derived in two ways. One is the traditional way that starts from the transport equation of FG-PDF, in which the compressible Navier-Stokes equations are embedded. The resulting transport equation of APDF is then in a traditional form that contains conditional means of all terms from the right hand side of the Navier-Stokes equations except for the chemical reaction term. These conditional means are new unknown quantities that need to be modeled. Another way of deriving the transport equation of APDF is to start directly from the ensemble averaged Navier-Stokes equations. The resulting transport equation of APDF derived from this approach appears in a closed form without any need for additional modeling. The methodology of ensemble averaging presented in this paper can be extended to other averaging procedures: for example, the Reynolds time averaging for statistically steady flow and the Reynolds spatial averaging for statistically homogeneous flow. It can also be extended to a time or spatial filtering procedure to construct the filtered density function (FDF) for the large eddy simulation (LES) of compressible turbulent reacting flows.

  12. Regression approaches to derive generic and fish group-specific probability density functions of bioconcentration factors for metals.

    PubMed

    Tanaka, Taku; Ciffroy, Philippe; Stenberg, Kristofer; Capri, Ettore

    2010-11-01

    In the framework of environmental multimedia modeling studies dedicated to environmental and health risk assessments of chemicals, the bioconcentration factor (BCF) is a commonly used parameter, especially for fish. For neutral lipophilic substances, it is assumed that the BCF is independent of the exposure levels of the substances. For metals, however, some studies found an inverse relationship between BCF values and aquatic exposure concentrations for various aquatic species and metals, as well as high variability in BCF data. To deal with the factors determining BCF for metals, we conducted regression analyses to evaluate the inverse relationships and to introduce the concept of a probability density function (PDF) for Cd, Cu, Zn, Pb, and As. In the present study, two statistical approaches are applied to build the regression models and derive the PDFs of fish BCF: ordinary regression analysis, which estimates a regression model that does not consider the variation in data across different fish family groups; and hierarchical Bayesian regression analysis, which estimates fish group-specific regression models. The results show that the BCF ranges and PDFs estimated for metals by both statistical approaches have less uncertainty than the variation of the collected BCF data (the uncertainty is reduced by 9%-61%), and thus such PDFs proved to be useful for obtaining accurate model predictions for environmental and health risk assessment concerning metals.
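
    A minimal sketch of the ordinary-regression step is given below: log10(BCF) is regressed against log10 of the exposure concentration and the residual spread defines a simple lognormal PDF around the regression line; the data are synthetic placeholders, and the hierarchical Bayesian, group-specific variant is not reproduced.

      import numpy as np

      rng = np.random.default_rng(5)
      conc = 10 ** rng.uniform(-2, 2, size=150)              # exposure concentration (ug/L)
      log_bcf = 3.0 - 0.6 * np.log10(conc) + rng.normal(0, 0.3, conc.size)  # inverse relation

      slope, intercept = np.polyfit(np.log10(conc), log_bcf, 1)
      resid = log_bcf - (intercept + slope * np.log10(conc))
      sigma = resid.std(ddof=2)                              # spread of the conditional PDF

      print("log10(BCF) = %.2f %+.2f * log10(C),  residual sd = %.2f"
            % (intercept, slope, sigma))

      # Conditional PDF of BCF at a given concentration, assuming lognormal residuals.
      c0 = 1.0
      mu0 = intercept + slope * np.log10(c0)
      print("at C = 1 ug/L: median BCF = %.0f, 95%% interval = [%.0f, %.0f]"
            % (10 ** mu0, 10 ** (mu0 - 1.96 * sigma), 10 ** (mu0 + 1.96 * sigma)))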

  13. Probability density function formalism for optical coherence tomography signal analysis: a controlled phantom study.

    PubMed

    Weatherbee, Andrew; Sugita, Mitsuro; Bizheva, Kostadinka; Popov, Ivan; Vitkin, Alex

    2016-06-15

    The distribution of backscattered intensities as described by the probability density function (PDF) of tissue-scattered light contains information that may be useful for tissue assessment and diagnosis, including characterization of its pathology. In this Letter, we examine the PDF description of the light scattering statistics in a well characterized tissue-like particulate medium using optical coherence tomography (OCT). It is shown that for low scatterer density, the governing statistics depart considerably from a Gaussian description and follow the K distribution for both OCT amplitude and intensity. The PDF formalism is shown to be independent of the scatterer flow conditions; this is expected from theory, and suggests robustness and motion independence of the OCT amplitude (and OCT intensity) PDF metrics in the context of potential biomedical applications.

  14. Probability density function formalism for optical coherence tomography signal analysis: a controlled phantom study.

    PubMed

    Weatherbee, Andrew; Sugita, Mitsuro; Bizheva, Kostadinka; Popov, Ivan; Vitkin, Alex

    2016-06-15

    The distribution of backscattered intensities as described by the probability density function (PDF) of tissue-scattered light contains information that may be useful for tissue assessment and diagnosis, including characterization of its pathology. In this Letter, we examine the PDF description of the light scattering statistics in a well characterized tissue-like particulate medium using optical coherence tomography (OCT). It is shown that for low scatterer density, the governing statistics depart considerably from a Gaussian description and follow the K distribution for both OCT amplitude and intensity. The PDF formalism is shown to be independent of the scatterer flow conditions; this is expected from theory, and suggests robustness and motion independence of the OCT amplitude (and OCT intensity) PDF metrics in the context of potential biomedical applications. PMID:27304274

  15. Computing approximate random Delta v magnitude probability densities. [for spacecraft trajectory correction

    NASA Technical Reports Server (NTRS)

    Chadwick, C.

    1984-01-01

    This paper describes the development and use of an algorithm to compute approximate statistics of the magnitude of a single random trajectory correction maneuver (TCM) Delta v vector. The TCM Delta v vector is modeled as a three-component Cartesian vector, each of whose components is a random variable having a normal (Gaussian) distribution with zero mean and possibly unequal standard deviations. The algorithm uses these standard deviations as input to produce approximations to (1) the mean and standard deviation of the magnitude of Delta v, (2) points of the probability density function of the magnitude of Delta v, and (3) points of the cumulative and inverse cumulative distribution functions of Delta v. The approximations are based on Monte Carlo techniques developed in a previous paper by the author and extended here. The algorithm described is expected to be useful in both pre-flight planning and in-flight analysis of maneuver propellant requirements for space missions.
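
    The quantities the algorithm approximates are easy to reproduce by brute force, as in the following Monte Carlo sketch for the magnitude of a zero-mean Gaussian Delta v vector with unequal per-axis standard deviations; the sample size and standard deviations are illustrative.

      import numpy as np

      rng = np.random.default_rng(6)
      sigmas = np.array([0.8, 1.0, 2.5])            # m/s, unequal per-axis deviations
      dv = rng.normal(0.0, sigmas, size=(200_000, 3))
      mag = np.linalg.norm(dv, axis=1)

      print("mean |dv| = %.3f m/s, std = %.3f m/s" % (mag.mean(), mag.std()))
      # Points of the PDF (histogram density) and of the cumulative / inverse CDF.
      pdf, edges = np.histogram(mag, bins=100, density=True)
      for p in (0.5, 0.9, 0.99):
          print("P(|dv| <= %.3f m/s) = %.2f" % (np.quantile(mag, p), p))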

  16. Methods for Cloud Cover Estimation

    NASA Technical Reports Server (NTRS)

    Glackin, D. L.; Huning, J. R.; Smith, J. H.; Logan, T. L.

    1984-01-01

    Several methods for cloud cover estimation are described relevant to assessing the performance of a ground-based network of solar observatories. The methods rely on ground and satellite data sources and provide meteorological or climatological information. One means of acquiring long-term observations of solar oscillations is the establishment of a ground-based network of solar observatories. Criteria for station site selection are: gross cloudiness, accurate transparency information, and seeing. Alternative methods for computing this duty cycle are discussed. The cycle, or alternatively a time history of solar visibility from the network, can then be input to a model to determine the effect of duty cycle on derived solar seismology parameters. Cloudiness from space is studied to examine various means by which the duty cycle might be computed. Cloudiness, and to some extent transparency, can potentially be estimated from satellite data.

  17. The Sherrington-Kirkpatrick spin glass model in the presence of a random field with a joint Gaussian probability density function for the exchange interactions and random fields

    NASA Astrophysics Data System (ADS)

    Hadjiagapiou, Ioannis A.

    2014-03-01

    Magnetic systems with disorder form an important class of systems under intensive study, since they reflect real systems. One such class is that of spin glasses, which combine randomness and frustration. The Sherrington-Kirkpatrick Ising spin glass with random couplings in the presence of a random magnetic field is investigated in detail within the framework of the replica method. The two random variables (exchange integral interaction and random magnetic field) are drawn from a joint Gaussian probability density function characterized by a correlation coefficient ρ. The thermodynamic properties and phase diagrams are studied with respect to the natural parameters of both random components of the system contained in the probability density. The de Almeida-Thouless line is explored as a function of temperature, ρ, and the other system parameters. The entropy at zero as well as at nonzero temperature is partly negative or positive, acquiring positive branches as h0 increases.

  18. Probability density of the orbital angular momentum mode of Hankel-Bessel beams in an atmospheric turbulence.

    PubMed

    Zhu, Yu; Liu, Xiaojun; Gao, Jie; Zhang, Yixin; Zhao, Fengsheng

    2014-04-01

    We develop a novel model of the probability density of the orbital angular momentum (OAM) modes for Hankel-Bessel beams in a paraxial turbulence channel, based on the Rytov approximation. The results show that there are multiple peaks of the mode probability density along the radial direction. The peak position of the mode probability density moves toward the beam center as the non-Kolmogorov turbulence parameter and the generalized refractive-index structure parameter increase, and as the OAM quantum number, propagation distance, and wavelength of the beams decrease. Additionally, a larger OAM quantum number and a smaller non-Kolmogorov turbulence parameter can be selected in order to obtain a larger mode probability density. The probability density of OAM mode crosstalk increases as the quantum-number deviation and the wavelength decrease. Because of the focusing properties of Hankel-Bessel beams in the turbulence channel, compared with Laguerre-Gaussian beams, Hankel-Bessel beams are a good light source for weakening the turbulent spreading of the beams and for mitigating the effects of turbulence on the probability density of the OAM mode.

  19. Translating CFC-based piston ages into probability density functions of ground-water age in karst

    USGS Publications Warehouse

    Long, A.J.; Putnam, L.D.

    2006-01-01

    Temporal age distributions are equivalent to probability density functions (PDFs) of transit time. The type and shape of a PDF provide important information related to ground-water mixing at the well or spring and the complex nature of flow networks in karst aquifers. Chlorofluorocarbon (CFC) concentrations measured for samples from 12 locations in the karstic Madison aquifer were used to evaluate the suitability of various PDF types for this aquifer. Parameters of PDFs could not be estimated within acceptable confidence intervals for any of the individual sites. Therefore, metrics derived from CFC-based apparent ages were used to evaluate results of PDF modeling in a more general approach. The ranges of these metrics were established as criteria against which families of PDFs could be evaluated for their applicability to different parts of the aquifer. Seven PDF types, including five unimodal and two bimodal models, were evaluated. Model results indicate that unimodal models may be applicable to areas close to conduits that have younger piston (i.e., apparent) ages and that bimodal models probably are applicable to areas farther from conduits that have older piston ages. The two components of a bimodal PDF are interpreted as representing conduit and diffuse flow, and transit times of as much as two decades may separate these PDF components. Areas near conduits may be dominated by conduit flow, whereas areas farther from conduits having bimodal distributions probably have good hydraulic connection to both diffuse and conduit flow. © 2006 Elsevier B.V. All rights reserved.

  20. Ellipsoidal Guaranteed Estimation Method for Satellite Collision Avoidance

    NASA Astrophysics Data System (ADS)

    Kim, Y.; Lee, J.; Ovseevich, A.

    2012-01-01

    The article presents a new guaranteed approach to determining a small area of deviations around an Earth-orbiting satellite's nominal Keplerian orbit position, caused by a set of acting external disturbing forces and by the initial conditions. Only very restricted information is assumed about the disturbances: maximum values, with no assumptions about the law of their probability density distribution. The reachable area of satellite deviations is approximated by a state-vector ellipsoid that can include the satellite position and velocity as vector components. Mathematical equations that allow one to find the ellipsoid are developed on the basis of the linear Euler-Hill equations of satellite orbital motion. The approach can be applied to various problems of satellite collision avoidance with other satellites or space debris, as well as to establishing potentially safe space traffic control norms. In particular, at CSA it is being considered for planning collision avoidance manoeuvres of the Earth observation satellite family RADARSAT, SCISAT and newly developed satellites. The general approach of ellipsoidal estimation was originally developed by the Russian academician F. L. Chernousko. The problem considered in the article was studied by his followers, some of whom participated in the development of the method together with its founder.

  1. Spatial-temporal filtering method based on kernel density estimation in suppressing background clutter

    NASA Astrophysics Data System (ADS)

    Tian, Yuexin; Liu, Yinghui; Gao, Kun; Shu, Yuwen; Ni, Guoqiang

    2014-11-01

    A temporal-spatial filtering algorithm based on a kernel density estimation structure is presented for background suppression in this paper. The algorithm can be divided into spatial filtering and temporal filtering. In spatial filtering, a smoothing process is applied to the background of an infrared image sequence using the kernel density estimation algorithm. In temporal filtering, the probability density of the image gray values after spatial filtering is calculated with the kernel density estimation algorithm. The background residual and blind pixels are picked out based on their gray values and are further filtered. The algorithm is validated on a real infrared image sequence. The image sequence is processed using the Fuller kernel filter, the uniform kernel filter, and a high-pass filter. Quantitative analysis shows that the temporal-spatial filtering algorithm based on the nonparametric method is a satisfactory way to suppress background clutter in infrared images. The SNR is significantly improved as well.
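
    A simplified sketch of the temporal-filtering idea is shown below: for each pixel, a Gaussian-kernel density of its gray values is estimated from the temporal stack, and pixels whose current value has low estimated density are flagged; the synthetic sequence, bandwidth, and threshold are arbitrary choices and the spatial-filtering stage is omitted.

      import numpy as np

      rng = np.random.default_rng(7)
      frames = rng.normal(100.0, 5.0, size=(50, 64, 64))     # synthetic IR sequence (T, H, W)
      current = frames[-1].copy()
      current[30:33, 40:43] += 40.0                          # implant a small bright "target"

      history = frames[:-1]                                  # frames used for the density estimate
      h = 3.0                                                # kernel bandwidth (gray levels)

      # Gaussian-kernel density of the current gray value, estimated per pixel from
      # its own temporal history: p(v) = mean_t N(v; v_t, h).
      diff = (current[None, :, :] - history) / h
      density = np.mean(np.exp(-0.5 * diff ** 2), axis=0) / (h * np.sqrt(2 * np.pi))

      threshold = 1e-4
      candidates = density < threshold                       # low-density pixels = candidate residuals/targets
      print("flagged pixels:", int(candidates.sum()))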

  2. On the Evolution of the Density Probability Density Function in Strongly Self-gravitating Systems

    NASA Astrophysics Data System (ADS)

    Girichidis, Philipp; Konstandin, Lukas; Whitworth, Anthony P.; Klessen, Ralf S.

    2014-02-01

    The time evolution of the probability density function (PDF) of the mass density is formulated and solved for systems in free-fall using a simple approximate function for the collapse of a sphere. We demonstrate that a pressure-free collapse results in a power-law tail on the high-density side of the PDF. The slope quickly asymptotes to the functional form P_V(ρ) ∝ ρ^(-1.54) for the (volume-weighted) PDF and P_M(ρ) ∝ ρ^(-0.54) for the corresponding mass-weighted distribution. From the simple approximation of the PDF we derive analytic descriptions for mass accretion, finding that dynamically quiet systems with narrow density PDFs lead to retarded star formation and low star formation rates (SFRs). Conversely, strong turbulent motions that broaden the PDF accelerate the collapse causing a bursting mode of star formation. Finally, we compare our theoretical work with observations. The measured SFRs are consistent with our model during the early phases of the collapse. Comparison of observed column density PDFs with those derived from our model suggests that observed star-forming cores are roughly in free-fall.

  3. Probability density functions for axial ratios of sectioning profiles of anisotropically arranged elliptical microvessels.

    PubMed

    Krasnoperov, R A; Gerasimov, A N

    2009-06-01

    The article theoretically considers probability density functions (PDFs) for the axial ratio (X/Y) of sectioning profiles of elliptical microvessels (MVs) arranged with anisotropy in a biological tissue volume. A technique for the PDF(X/Y) calculations in the case of anisotropy of the elliptical MVs is described. The essence of this technique is introducing anisotropy into PDF(α,φ), i.e. the function of the joint distribution of the polar and planar angles α and φ, which define the mutual orientation of the elliptical MVs and the sectioning planes. With the aid of this technique, the anisotropy cases are studied with PDF(α,φ) given by pair combinations of the following distributions: (i) a uniform distribution of the angles α and/or φ, (ii) the angle α distribution with PDF(α) = sin α (α in [0, π/2]), and (iii) Gaussian distributions of the α or φ values. Specifically, PDF(X/Y) curves are obtained for MVs with the true, or three-dimensional, axial ratio X_0/Y_0 = 2.0, and the anisotropy effects on the X/Y expected frequencies are analysed. The conclusions of this analysis, the PDF(X/Y) calculation technique, and the PDF(X/Y) curves obtained are useful for stereological reconstruction of anisotropically organised microcirculatory networks, with the ellipticity of their MVs being taken into consideration. PMID:19318110

  4. Stable nonequilibrium probability densities and phase transitions for mean-field models in the thermodynamic limit

    SciTech Connect

    Bonilla, L.L.

    1987-02-01

    A nonlinear Fokker-Planck equation is derived to describe the cooperative behavior of general stochastic systems interacting via mean-field couplings, in the limit of an infinite number of such systems. Disordered systems are also considered. In the weak-noise limit, a general result yields the possibility of having bifurcations from stationary solutions of the nonlinear Fokker-Planck equation into stable time-dependent solutions. The latter are interpreted as nonequilibrium probability distributions (states), and the bifurcations to them as nonequilibrium phase transitions. In the thermodynamic limit, results for three models are given for illustrative purposes. A model of self-synchronization of nonlinear oscillators presents a Hopf bifurcation to a time-periodic probability density, which can be analyzed for any value of the noise. The effects of disorder are illustrated by a simplified version of the Sompolinsky-Zippelius model of spin glasses. Finally, results for the Fukuyama-Lee-Fisher model of charge-density waves are given. A singular perturbation analysis shows that the depinning transition is a bifurcation problem modified by the disorder noise due to impurities. Far from the bifurcation point, the CDW is either pinned or free, obeying (to leading order) the Grüner-Zawadowski-Chaikin equation. Near the bifurcation, the disorder noise drastically modifies the pattern, giving a quenched average of the CDW current which is constant. Critical exponents are found to depend on the noise, and they are larger than Fisher's values for the two probability distributions considered.

  5. An analytical model for regular respiratory signals derived from the probability density function of Rayleigh distribution.

    PubMed

    Li, Xin; Li, Ye

    2015-01-01

    Regular respiratory signals (RRSs) acquired with physiological sensing systems (e.g., the life-detection radar system) can be used to locate survivors trapped in debris in disaster rescue, or to predict the breathing motion to allow beam delivery under free-breathing conditions in external beam radiotherapy. Among the existing analytical models for RRSs, the harmonic-based random model (HRM) is shown to be the most accurate, but it is found to be subject to considerable error if the RRS has a slowly descending end-of-exhale (EOE) phase. This defect of the HRM motivates us to construct a more accurate analytical model for the RRS. In this paper, we derive a new analytical RRS model from the probability density function of the Rayleigh distribution. We evaluate the derived RRS model by using it to fit a real-life RRS in the least-squares sense, and the evaluation shows that our model exhibits lower error and fits the slowly descending EOE phases of the real-life RRS better than the HRM. PMID:26736208

  6. On the evolution of the density probability density function in strongly self-gravitating systems

    SciTech Connect

    Girichidis, Philipp; Konstandin, Lukas; Klessen, Ralf S.; Whitworth, Anthony P.

    2014-02-01

    The time evolution of the probability density function (PDF) of the mass density is formulated and solved for systems in free-fall using a simple approximate function for the collapse of a sphere. We demonstrate that a pressure-free collapse results in a power-law tail on the high-density side of the PDF. The slope quickly asymptotes to the functional form P_V(ρ) ∝ ρ^(-1.54) for the (volume-weighted) PDF and P_M(ρ) ∝ ρ^(-0.54) for the corresponding mass-weighted distribution. From the simple approximation of the PDF we derive analytic descriptions for mass accretion, finding that dynamically quiet systems with narrow density PDFs lead to retarded star formation and low star formation rates (SFRs). Conversely, strong turbulent motions that broaden the PDF accelerate the collapse causing a bursting mode of star formation. Finally, we compare our theoretical work with observations. The measured SFRs are consistent with our model during the early phases of the collapse. Comparison of observed column density PDFs with those derived from our model suggests that observed star-forming cores are roughly in free-fall.

  7. Analyses of turbulence in a wind tunnel by a multifractal theory for probability density functions

    NASA Astrophysics Data System (ADS)

    Arimitsu, Toshihico; Arimitsu, Naoko; Mouri, Hideaki

    2012-06-01

    The probability density functions (PDFs) for energy dissipation rates, created from time-series data of grid turbulence in a wind tunnel, are analyzed at high precision using the theoretical formulae for PDFs within the multifractal PDF theory, which is constructed under the assumption that there are two main elements constituting fully developed turbulence, i.e. coherent and incoherent elements. The tail part of the PDF, representing intermittent coherent motion, is determined by a Tsallis-type PDF for singularity exponents essentially with one parameter, with the help of a new scaling relation whose validity is checked for the case of the grid turbulence. For the central part of the PDF, representing contributions from both the coherent motion and the fluctuating incoherent motion surrounding it, we introduce a trial function specified by three adjustable parameters which, remarkably, represents scaling behaviors in a much wider range than the inertial range alone. From the investigation of the difference between two difference formulae approximating the velocity time derivative, it is revealed that the connection point between the central and tail parts of the PDF extracted by the theoretical analyses is actually the boundary between the two kinds of instabilities associated, respectively, with the coherent and incoherent elements.

  8. Application of the compound probability density function for characterization of breast masses in ultrasound B scans.

    PubMed

    Shankar, P M; Piccoli, C W; Reid, J M; Forsberg, F; Goldberg, B B

    2005-05-21

    The compound probability density function (pdf) is investigated for the ability of its parameters to classify masses in ultrasonic B scan breast images. Results of 198 images (29 malignant and 70 benign cases and two images per case) are reported and compared to the classification performance reported by us earlier in this journal. A new parameter, the speckle factor, calculated from the parameters of the compound pdf was explored to separate benign and malignant masses. The receiver operating characteristic curve for the parameter resulted in an A(z) value of 0.852. This parameter was combined with one of the parameters from our previous work, namely the ratio of the K distribution parameter at the site and away from the site. This combined parameter resulted in an A(z) value of 0.955. In conclusion, the parameters of the K distribution and the compound pdf may be useful in the classification of breast masses. These parameters can be calculated in an automated fashion. It should be possible to combine the results of the ultrasonic image analysis with those of traditional mammography, thereby increasing the accuracy of breast cancer diagnosis.

  9. Direct measurement of the velocity joint probability density function and higher order moments in turbulent boundary layer flows

    NASA Astrophysics Data System (ADS)

    Soria, Julio; Atkinson, Callum

    2013-11-01

    This work shows how the joint probability density function (JPDF) of the streamwise and wall normal velocity components of a zero-pressure gradient turbulent boundary layer (ZPG-TBL) can be directly measured using the methodology and theoretical framework proposed by Soria & Willert (2012) MST 23, 065301. Higher order moments including Reynolds stresses can be computed directly from two-component (2C) JPDFs of the streamwise and wall normal velocity components by taking moments of the 2C-JPDF. The base data for the direct measurement of the 2C-JPDF are single-exposed image pairs typically used to determine instantaneous 2C-2D particle image velocimetry (PIV) fields. However, in the new direct measurement method, the instantaneous velocity samples necessary to build up the JPDF never need to be determined, which avoids the problems in PIV due to large velocity gradients that are typically encountered in turbulent wall-bounded flows. This new method has been applied to single-exposed image pairs acquired over a range of Reynolds numbers ranging up to Reτ = 19500 in ZPG-TBL experiments. This paper presents directly measured 2C-JPDFs across the ZPG-TBL as well as higher moment distributions determined from these 2C-JPDFs. The financial support of the Australian Research Council to undertake this research is gratefully acknowledged.
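
    Once a two-component JPDF is available, higher-order moments follow by numerical integration, as in the sketch below, which bins a synthetic correlated Gaussian sample into a JPDF with numpy and takes its second moments to recover the Reynolds stresses; the paper's direct construction of the JPDF from single-exposed image pairs is not reproduced.

      import numpy as np

      rng = np.random.default_rng(8)
      cov = [[1.0, -0.4], [-0.4, 0.25]]                 # assumed <u'u'>, <u'v'>, <v'v'>
      uv = rng.multivariate_normal([0.0, 0.0], cov, size=500_000)

      H, u_edges, v_edges = np.histogram2d(uv[:, 0], uv[:, 1], bins=120, density=True)
      uc = 0.5 * (u_edges[:-1] + u_edges[1:])           # bin centers
      vc = 0.5 * (v_edges[:-1] + v_edges[1:])
      du = np.diff(u_edges)[0]
      dv = np.diff(v_edges)[0]

      U, V = np.meshgrid(uc, vc, indexing="ij")         # H is indexed as H[u_bin, v_bin]
      uu = np.sum(U * U * H) * du * dv                  # <u'u'> from the JPDF
      vv = np.sum(V * V * H) * du * dv                  # <v'v'>
      uv_stress = np.sum(U * V * H) * du * dv           # <u'v'> (Reynolds shear stress)

      print("<u'u'>=%.3f  <v'v'>=%.3f  <u'v'>=%.3f" % (uu, vv, uv_stress))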

  10. Comparison of Fatigue Life Estimation Using Equivalent Linearization and Time Domain Simulation Methods

    NASA Technical Reports Server (NTRS)

    Mei, Chuh; Dhainaut, Jean-Michel

    2000-01-01

    The Monte Carlo simulation method in conjunction with the finite element large deflection modal formulation are used to estimate fatigue life of aircraft panels subjected to stationary Gaussian band-limited white-noise excitations. Ten loading cases varying from 106 dB to 160 dB OASPL with bandwidth 1024 Hz are considered. For each load case, response statistics are obtained from an ensemble of 10 response time histories. The finite element nonlinear modal procedure yields time histories, probability density functions (PDF), power spectral densities and higher statistical moments of the maximum deflection and stress/strain. The method of moments of PSD with Dirlik's approach is employed to estimate the panel fatigue life.

  11. On recontamination and directional-bias problems in Monte Carlo simulation of PDF turbulence models. [probability density function

    NASA Technical Reports Server (NTRS)

    Hsu, Andrew T.

    1992-01-01

    Turbulent combustion cannot be simulated adequately by conventional moment-closure turbulence models. The probability density function (PDF) method offers an attractive alternative: in a PDF model, the chemical source terms are closed and do not require additional models. Because the number of computational operations grows only linearly in the Monte Carlo scheme, it is chosen over finite differencing schemes. A grid-dependent Monte Carlo scheme following J. Y. Chen and W. Kollmann has been studied in the present work. It was found that in order to conserve the mass fractions absolutely, one needs to add a further restriction to the scheme, namely α_j + γ_j = α_{j-1} + γ_{j+1}. A new algorithm was devised that satisfies this restriction in the case of pure diffusion or uniform flow problems. Using examples, it is shown that absolute conservation can be achieved. Although for non-uniform flows absolute conservation seems impossible, the present scheme has reduced the error considerably.

  12. Comparisons of model predictions for radar backscatter amplitude probability density functions with measurements from SAXON

    NASA Astrophysics Data System (ADS)

    Thompson, D. R.; Gotwols, B. L.

    1994-05-01

    Data ranging from L to Ka band were collected from radars mounted on the Forschungsplatform Nordsee during the Synthetic Aperture Radar and X Band Ocean Nonlinearities experiment in November 1990. In this paper we examine, for each of these radars, the total amplitude probability density function (pdf) of the field backscattered from the ocean surface. These pdfs are compared with predictions from a simulation based on our time-dependent scattering model. We find that for lower incidence angles (˜20°), the agreement between the measured and computed pdfs is generally quite good. At these small incidence angles the behavior of the pdfs is determined by the local tilting of the long-wave surface. No modulation of the shortwave spectral density over the long-wave phase is needed to obtain good agreement. For larger incidence angles (˜45°) the agreement between the measured and predicted pdfs is not so good; the major discrepancy is that the tails of the predicted pdfs are somewhat too short. In this study we have attempted to account for the hydrodynamic modulation of the short-scale waves using an approximate procedure based on the assumption that the hydrodynamic modulation is due to the interaction of the short-scale waves with the orbital velocity of the long waves. With this procedure we are able to obtain agreement between the measured and computed pdfs at 45° incidence, although the strength of the hydrodynamic modulation needs to be adjusted. Our simulation procedure will be discussed in some detail. Also, we will show how our results are related to more conventional measurements of so-called modulation transfer functions and give some arguments as to why in many cases the correlation between the backscattered power and the long-wave surface velocity can be rather low.

  13. Nonparametric estimation of plant density by the distance method

    USGS Publications Warehouse

    Patil, S.A.; Burnham, K.P.; Kovner, J.L.

    1979-01-01

    A relation between the plant density and the probability density function of the nearest neighbor distance (squared) from a random point is established under fairly broad conditions. Based upon this relationship, a nonparametric estimator for the plant density is developed and presented in terms of order statistics. Consistency and asymptotic normality of the estimator are discussed. An interval estimator for the density is obtained. The modifications of this estimator and its variance are given when the distribution is truncated. Simulation results are presented for regular, random and aggregated populations to illustrate the nonparametric estimator and its variance. A numerical example from field data is given. Merits and deficiencies of the estimator are discussed with regard to its robustness and variance.

  14. EUPDF: Eulerian Monte Carlo Probability Density Function Solver for Applications With Parallel Computing, Unstructured Grids, and Sprays

    NASA Technical Reports Server (NTRS)

    Raju, M. S.

    1998-01-01

    The success of any solution methodology used in the study of gas-turbine combustor flows depends a great deal on how well it can model the various complex and rate controlling processes associated with the spray's turbulent transport, mixing, chemical kinetics, evaporation, and spreading rates, as well as convective and radiative heat transfer and other phenomena. The phenomena to be modeled, which are controlled by these processes, often strongly interact with each other at different times and locations. In particular, turbulence plays an important role in determining the rates of mass and heat transfer, chemical reactions, and evaporation in many practical combustion devices. The influence of turbulence in a diffusion flame manifests itself in several forms, ranging from the so-called wrinkled, or stretched, flamelets regime to the distributed combustion regime, depending upon how turbulence interacts with various flame scales. Conventional turbulence models have difficulty treating highly nonlinear reaction rates. A solution procedure based on the composition joint probability density function (PDF) approach holds the promise of modeling various important combustion phenomena relevant to practical combustion devices (such as extinction, blowoff limits, and emissions predictions) because it can account for nonlinear chemical reaction rates without making approximations. In an attempt to advance the state-of-the-art in multidimensional numerical methods, we at the NASA Lewis Research Center extended our previous work on the PDF method to unstructured grids, parallel computing, and sprays. EUPDF, which was developed by M.S. Raju of Nyma, Inc., was designed to be massively parallel and could easily be coupled with any existing gas-phase and/or spray solvers. EUPDF can use an unstructured mesh with mixed triangular, quadrilateral, and/or tetrahedral elements. The application of the PDF method showed favorable results when applied to several supersonic

  15. Kinetic and dynamic probability-density-function descriptions of disperse turbulent two-phase flows

    NASA Astrophysics Data System (ADS)

    Minier, Jean-Pierre; Profeta, Christophe

    2015-11-01

    This article analyzes the status of two classical one-particle probability density function (PDF) descriptions of the dynamics of discrete particles dispersed in turbulent flows. The first PDF formulation considers only the process made up by particle position and velocity Zp=(xp,Up) and is represented by its PDF p (t ;yp,Vp) which is the solution of a kinetic PDF equation obtained through a flux closure based on the Furutsu-Novikov theorem. The second PDF formulation includes fluid variables into the particle state vector, for example, the fluid velocity seen by particles Zp=(xp,Up,Us) , and, consequently, handles an extended PDF p (t ;yp,Vp,Vs) which is the solution of a dynamic PDF equation. For high-Reynolds-number fluid flows, a typical formulation of the latter category relies on a Langevin model for the trajectories of the fluid seen or, conversely, on a Fokker-Planck equation for the extended PDF. In the present work, a new derivation of the kinetic PDF equation is worked out and new physical expressions of the dispersion tensors entering the kinetic PDF equation are obtained by starting from the extended PDF and integrating over the fluid seen. This demonstrates that, under the same assumption of a Gaussian colored noise and irrespective of the specific stochastic model chosen for the fluid seen, the kinetic PDF description is the marginal of a dynamic PDF one. However, a detailed analysis reveals that kinetic PDF models of particle dynamics in turbulent flows described by statistical correlations constitute incomplete stand-alone PDF descriptions and, moreover, that present kinetic-PDF equations are mathematically ill posed. This is shown to be the consequence of the non-Markovian characteristic of the stochastic process retained to describe the system and the use of an external colored noise. Furthermore, developments bring out that well-posed PDF descriptions are essentially due to a proper choice of the variables selected to describe physical systems

  16. Accuracy of the non-relativistic approximation to relativistic probability densities for a low-speed weak-gravity system

    NASA Astrophysics Data System (ADS)

    Liang, Shiuan-Ni; Lan, Boon Leong

    2015-11-01

    The Newtonian and general-relativistic position and velocity probability densities, which are calculated from the same initial Gaussian ensemble of trajectories using the same system parameters, are compared for a low-speed weak-gravity bouncing ball system. The Newtonian approximation to the general-relativistic probability densities does not always break down rapidly if the trajectories in the ensembles are chaotic: the rapid breakdown occurs only if the initial position and velocity standard deviations are sufficiently small. This result is in contrast to the previously studied single-trajectory case, where the Newtonian approximation to a general-relativistic trajectory will always break down rapidly if the two trajectories are chaotic. Similar rapid breakdown of the Newtonian approximation to the general-relativistic probability densities should also occur for other low-speed weak-gravity chaotic systems, since it is due to sensitivity to the small difference between the two dynamical theories at low speed and weak gravity. For the bouncing ball system, the breakdown of the Newtonian approximation is transient because the Newtonian and general-relativistic probability densities eventually converge to invariant densities which are in close agreement.

  17. Smoothing Methods for Estimating Test Score Distributions.

    ERIC Educational Resources Information Center

    Kolen, Michael J.

    1991-01-01

    Estimation/smoothing methods that are flexible enough to fit a wide variety of test score distributions are reviewed: kernel method, strong true-score model-based method, and method that uses polynomial log-linear models. Applications of these methods include describing/comparing test score distributions, estimating norms, and estimating…

  18. An automatic locally-adaptive method to estimate heavily-tailed breakthrough curves from particle distributions

    NASA Astrophysics Data System (ADS)

    Pedretti, Daniele; Fernàndez-Garcia, Daniel

    2013-09-01

    Particle tracking methods to simulate solute transport deal with the issue of having to reconstruct smooth concentrations from a limited number of particles. This is an error-prone process that typically leads to large fluctuations in the determined late-time behavior of breakthrough curves (BTCs). Kernel density estimators (KDE) can be used to automatically reconstruct smooth BTCs from a small number of particles. The kernel approach incorporates the uncertainty associated with subsampling a large population by equipping each particle with a probability density function. Two broad classes of KDE methods can be distinguished depending on the parametrization of this function: global and adaptive methods. This paper shows that each class is likely to estimate a specific portion of the BTCs well. Although global methods offer a valid approach to estimating the early-time behavior and the peak of BTCs, they exhibit large fluctuations at the tails, where fewer particles exist. In contrast, locally adaptive methods improve tail estimation while oversmoothing both early-time and peak concentrations. Therefore a new method is proposed that combines the strengths of both KDE approaches. The proposed approach is universal and only needs one parameter (α), which depends only slightly on the shape of the BTCs. Results show that, for the tested cases, heavily tailed BTCs are properly reconstructed with α ≈ 0.5.
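
    The contrast between the two KDE classes can be sketched as follows, with a fixed-bandwidth (global) Gaussian KDE and an Abramson-style locally adaptive KDE whose bandwidths grow where particles are sparse; the paper's specific combination rule and its parameter α are not reproduced, and the heavy-tailed arrival times are synthetic.

      import numpy as np

      rng = np.random.default_rng(9)
      arrival_times = rng.lognormal(mean=1.0, sigma=0.9, size=500)   # heavy-tailed "particle" arrivals

      def gaussian_kde_fixed(x_eval, data, h):
          # Global KDE: one bandwidth h for all data points.
          z = (x_eval[:, None] - data[None, :]) / h
          return np.mean(np.exp(-0.5 * z ** 2), axis=1) / (h * np.sqrt(2 * np.pi))

      def gaussian_kde_adaptive(x_eval, data, h0):
          # Adaptive KDE: local bandwidths h0 * (pilot(x_i)/g)**(-1/2) (Abramson-style).
          pilot = gaussian_kde_fixed(data, data, h0)
          g = np.exp(np.mean(np.log(pilot)))            # geometric mean of the pilot density
          h_i = h0 * (pilot / g) ** -0.5
          z = (x_eval[:, None] - data[None, :]) / h_i[None, :]
          k = np.exp(-0.5 * z ** 2) / (h_i[None, :] * np.sqrt(2 * np.pi))
          return np.mean(k, axis=1)

      h0 = 1.06 * arrival_times.std() * arrival_times.size ** -0.2   # Silverman-type reference bandwidth
      t = np.linspace(0.1, 40.0, 400)
      btc_global = gaussian_kde_fixed(t, arrival_times, h0)
      btc_adaptive = gaussian_kde_adaptive(t, arrival_times, h0)

      i30 = np.searchsorted(t, 30.0)
      print("late-time (t ~ 30) densities: global %.2e, adaptive %.2e"
            % (btc_global[i30], btc_adaptive[i30]))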

  19. Coupled Monte Carlo Probability Density Function/ SPRAY/CFD Code Developed for Modeling Gas-Turbine Combustor Flows

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The success of any solution methodology for studying gas-turbine combustor flows depends a great deal on how well it can model various complex, rate-controlling processes associated with turbulent transport, mixing, chemical kinetics, evaporation and spreading rates of the spray, convective and radiative heat transfer, and other phenomena. These phenomena often strongly interact with each other at disparate time and length scales. In particular, turbulence plays an important role in determining the rates of mass and heat transfer, chemical reactions, and evaporation in many practical combustion devices. Turbulence manifests its influence in a diffusion flame in several forms depending on how turbulence interacts with various flame scales. These forms range from the so-called wrinkled, or stretched, flamelets regime, to the distributed combustion regime. Conventional turbulence closure models have difficulty in treating highly nonlinear reaction rates. A solution procedure based on the joint composition probability density function (PDF) approach holds the promise of modeling various important combustion phenomena relevant to practical combustion devices such as extinction, blowoff limits, and emissions predictions because it can handle the nonlinear chemical reaction rates without any approximation. In this approach, mean and turbulence gas-phase velocity fields are determined from a standard turbulence model; the joint composition field of species and enthalpy are determined from the solution of a modeled PDF transport equation; and a Lagrangian-based dilute spray model is used for the liquid-phase representation with appropriate consideration of the exchanges of mass, momentum, and energy between the two phases. The PDF transport equation is solved by a Monte Carlo method, and existing state-of-the-art numerical representations are used to solve the mean gasphase velocity and turbulence fields together with the liquid-phase equations. The joint composition PDF

  20. Nearest-neighbor nonparametric method for estimating the configurational entropy of complex molecules.

    PubMed

    Hnizdo, Vladimir; Darian, Eva; Fedorowicz, Adam; Demchuk, Eugene; Li, Shengqiao; Singh, Harshinder

    2007-02-01

    A method for estimating the configurational (i.e., non-kinetic) part of the entropy of internal motion in complex molecules is introduced that does not assume any particular parametric form for the underlying probability density function. It is based on the nearest-neighbor (NN) distances of the points of a sample of internal molecular coordinates obtained by a computer simulation of a given molecule. As the method does not make any assumptions about the underlying potential energy function, it accounts fully for any anharmonicity of internal molecular motion. It provides an asymptotically unbiased and consistent estimate of the configurational part of the entropy of the internal degrees of freedom of the molecule. The NN method is illustrated by estimating the configurational entropy of internal rotation of capsaicin and two stereoisomers of tartaric acid, and by providing a much closer upper bound on the configurational entropy of internal rotation of a pentapeptide molecule than that obtained by the standard quasi-harmonic method. As a measure of dependence between any two internal molecular coordinates, a general coefficient of association based on the information-theoretic quantity of mutual information is proposed. Using NN estimates of this measure, statistical clustering procedures can be employed to group the coordinates into clusters of manageable dimensions and characterized by minimal dependence between coordinates belonging to different clusters.

  1. Assessment of a three-dimensional line-of-response probability density function system matrix for PET.

    PubMed

    Yao, Rutao; Ramachandra, Ranjith M; Mahajan, Neeraj; Rathod, Vinay; Gunasekar, Noel; Panse, Ashish; Ma, Tianyu; Jian, Yiqiang; Yan, Jianhua; Carson, Richard E

    2012-11-01

    To achieve optimal PET image reconstruction through better system modeling, we developed a system matrix that is based on the probability density function for each line of response (LOR-PDF). The LOR-PDFs are grouped by LOR-to-detector incident angles to form a highly compact system matrix. The system matrix was implemented in the MOLAR list mode reconstruction algorithm for a small animal PET scanner. The impact of LOR-PDF on reconstructed image quality was assessed qualitatively as well as quantitatively in terms of contrast recovery coefficient (CRC) and coefficient of variance (COV), and its performance was compared with a fixed Gaussian (iso-Gaussian) line spread function. The LOR-PDFs of three coincidence signal emitting sources, (1) ideal positron emitter that emits perfect back-to-back γ rays (γγ) in air; (2) fluorine-18 (¹⁸F) nuclide in water; and (3) oxygen-15 (¹⁵O) nuclide in water, were derived, and assessed with simulated and experimental phantom data. The derived LOR-PDFs showed anisotropic and asymmetric characteristics dependent on LOR-detector angle, coincidence emitting source, and the medium, consistent with common PET physical principles. The comparison of the iso-Gaussian function and LOR-PDF showed that: (1) without positron range and acollinearity effects, the LOR-PDF achieved better or similar trade-offs of contrast recovery and noise for objects of 4 mm radius or larger, and this advantage extended to smaller objects (e.g. 2 mm radius sphere, 0.6 mm radius hot-rods) at higher iteration numbers; and (2) with positron range and acollinearity effects, the iso-Gaussian achieved similar or better resolution recovery depending on the significance of positron range effect. We conclude that the 3D LOR-PDF approach is an effective method to generate an accurate and compact system matrix. However, when used directly in expectation-maximization based list-mode iterative reconstruction algorithms such as MOLAR, its superiority is not clear

  2. An at-site flood estimation method in the context of nonstationarity I. A simulation study

    NASA Astrophysics Data System (ADS)

    Gado, Tamer A.; Nguyen, Van-Thanh-Van

    2016-04-01

    The stationarity of annual flood peak records is the traditional assumption of flood frequency analysis. In some cases, however, as a result of land-use and/or climate change, this assumption is no longer valid. Therefore, new statistical models are needed to capture dynamically the change of probability density functions over time, in order to obtain reliable flood estimates. In this study, an innovative method for nonstationary flood frequency analysis is presented. The new method is based on detrending the flood series and applying the L-moments along with the GEV distribution to the transformed "stationary" series (hereafter called LM-NS). The LM-NS method was assessed through a comparative study with the maximum likelihood (ML) method for the nonstationary GEV model, as well as with the stationary (S) GEV model. The comparative study, based on Monte Carlo simulations, was carried out for three nonstationary GEV models: a linear dependence of the mean on time (GEV1), a quadratic dependence of the mean on time (GEV2), and a linear dependence of both the mean and the log standard deviation on time (GEV11). The simulation results indicated that the LM-NS method performs better than the ML method for most of the cases studied, whereas the stationary method provides the least accurate results. An additional advantage of the LM-NS method is that it avoids the numerical problems (e.g., convergence problems) that may occur with the ML method when estimating parameters from small data samples.
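
    A rough sketch of the detrend-then-fit idea is given below; note that the paper estimates the GEV parameters with L-moments, whereas this sketch falls back on scipy's maximum-likelihood genextreme.fit as a stand-in, so it is not the LM-NS estimator itself, and the synthetic peak series and trend are invented.

      import numpy as np
      from scipy.stats import genextreme

      years = np.arange(1960, 2020)
      # Synthetic nonstationary annual peaks: linear trend in the mean (GEV1-like case).
      peaks = 100.0 + 0.8 * (years - years[0]) + genextreme.rvs(-0.1, loc=0, scale=15,
                                                                size=years.size,
                                                                random_state=10)

      slope, intercept = np.polyfit(years, peaks, 1)
      detrended = peaks - (intercept + slope * years)        # transformed "stationary" series

      c, loc, scale = genextreme.fit(detrended)              # ML fit used here as a stand-in
      target_year = 2030
      q100_detrended = genextreme.ppf(0.99, c, loc=loc, scale=scale)   # 100-year quantile
      q100 = q100_detrended + intercept + slope * target_year          # re-attach the trend
      print("estimated 100-year flood for %d: %.1f" % (target_year, q100))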

  3. Probability density function selection based on the characteristics of wind speed data

    NASA Astrophysics Data System (ADS)

    Yürüşen, N. Y.; Melero, Julio J.

    2016-09-01

    The probabilistic approach has an important place in the wind energy research field as it provides cheap and fast initial information for experts with the help of simulations and estimations. Wind energy experts have been using the Weibull distribution for wind speed data for many years. Nevertheless, there are cases where the Weibull distribution is inappropriate: data presenting bimodal or multimodal behaviour, or poor fits at high, null, and low wind speeds, can cause serious energy estimation errors. This paper presents a procedure for dealing with wind speed data that takes into account non-Weibull distributions or data treatment when needed. The procedure detects deviations from the unimodal (Weibull) distribution and proposes other possible distributions to be used. The deviations of the fitted distributions from the real data are quantified with the Root Mean Square Error (RMSE) and the annual energy production (AEP).
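    A minimal sketch of the kind of deviation check described above, assuming a hypothetical wind-speed record in m/s: fit a Weibull candidate and score it against the empirical histogram with an RMSE. The paper's full procedure (candidate set, AEP comparison, data treatment) is not reproduced.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
wind = rng.weibull(2.0, 5000) * 8.0          # synthetic hourly wind speeds (m/s)

hist, edges = np.histogram(wind, bins=30, density=True)
centres = 0.5 * (edges[:-1] + edges[1:])

shape, loc, scale = stats.weibull_min.fit(wind, floc=0.0)
fitted = stats.weibull_min.pdf(centres, shape, loc, scale)
rmse = np.sqrt(np.mean((fitted - hist) ** 2))
print(f"Weibull k = {shape:.2f}, c = {scale:.2f} m/s, RMSE = {rmse:.4f}")
```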

  4. Probability density adjoint for sensitivity analysis of the Mean of Chaos

    SciTech Connect

    Blonigan, Patrick J. Wang, Qiqi

    2014-08-01

    Sensitivity analysis, especially adjoint based sensitivity analysis, is a powerful tool for engineering design which allows for the efficient computation of sensitivities with respect to many parameters. However, these methods break down when used to compute sensitivities of long-time averaged quantities in chaotic dynamical systems. This paper presents a new method for sensitivity analysis of ergodic chaotic dynamical systems, the density adjoint method. The method involves solving the governing equations for the system's invariant measure and its adjoint on the system's attractor manifold rather than in phase-space. This new approach is derived for and demonstrated on one-dimensional chaotic maps and the three-dimensional Lorenz system. It is found that the density adjoint computes very finely detailed adjoint distributions and accurate sensitivities, but suffers from large computational costs.
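    A minimal sketch of only the invariant-measure ingredient mentioned above, approximated by histogramming a long orbit of the logistic map; the adjoint solve on the attractor, which is the paper's contribution, is not reproduced.

```python
import numpy as np

def logistic_orbit(x0=0.123, r=4.0, n=200_000, burn=1000):
    """Iterate the logistic map and return the post-burn-in orbit."""
    x, out = x0, np.empty(n)
    for i in range(burn + n):
        x = r * x * (1.0 - x)
        if i >= burn:
            out[i - burn] = x
    return out

orbit = logistic_orbit()
density, edges = np.histogram(orbit, bins=200, range=(0.0, 1.0), density=True)
# For r = 4 the exact invariant density is 1/(pi*sqrt(x*(1-x))), a handy check.
```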

  5. Probability density functions for the variable solar wind near the solar cycle minimum

    NASA Astrophysics Data System (ADS)

    Vörös, Z.; Leitner, M.; Narita, Y.; Consolini, G.; Kovács, P.; Tóth, A.; Lichtenberger, J.

    2015-08-01

    Unconditional and conditional statistics are used for studying the histograms of magnetic field multiscale fluctuations in the solar wind near the solar cycle minimum in 2008. The unconditional statistics involve the magnetic data for the whole year 2008. The conditional statistics involve the magnetic field time series split into concatenated subsets of data according to a threshold in dynamic pressure. The threshold separates fast-stream leading edge compressional and trailing edge uncompressional fluctuations. The histograms obtained from these data sets are associated with both multiscale (B) and small-scale (δB) magnetic fluctuations, the latter corresponding to time-delayed differences. It is shown here that, by keeping flexibility but avoiding the unnecessary redundancy in modeling, the histograms can be effectively described by a limited set of theoretical probability distribution functions (PDFs), such as the normal, lognormal, kappa, and log-kappa functions. In a statistical sense the model PDFs correspond to additive and multiplicative processes exhibiting correlations. It is demonstrated here that the skewed small-scale histograms inherent in turbulent cascades are better described by the skewed log-kappa than by the symmetric kappa model. Nevertheless, the observed skewness is rather small, resulting in potential difficulties in estimating the third-order moments. This paper also investigates the dependence of the statistical convergence of PDF model parameters, goodness of fit, and skewness on the data sample size. It is shown that the minimum lengths of data intervals required for the robust estimation of parameters are scale, process, and model dependent.

  6. Finite-size scaling of the magnetization probability density for the critical Ising model in slab geometry

    NASA Astrophysics Data System (ADS)

    Lopes Cardozo, David; Holdsworth, Peter C. W.

    2016-04-01

    The magnetization probability density in d = 2 and 3 dimensional Ising models in slab geometry of volume $L_{\parallel}^{d-1}\times L_{\perp}$ is computed through Monte Carlo simulation at the critical temperature and zero magnetic field. The finite-size scaling of this distribution and its dependence on the system aspect ratio $\rho = L_{\perp}/L_{\parallel}$ and boundary conditions are discussed. In the limiting case $\rho \to 0$ of a macroscopically large slab ($L_{\parallel}\gg L_{\perp}$) the distribution is found to scale as a Gaussian function for all tested system sizes and boundary conditions.
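    A minimal sketch of a magnetization-histogram measurement, assuming a small square 2D Ising lattice with periodic boundaries at the critical temperature; the slab geometry, aspect-ratio scaling, and boundary-condition study of the paper are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 12
T_c = 2.0 / np.log(1.0 + np.sqrt(2.0))        # exact 2D critical temperature
spins = rng.choice([-1, 1], size=(L, L))
mags = []

for sweep in range(4000):
    for _ in range(L * L):                     # one Metropolis sweep
        i, j = rng.integers(L, size=2)
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * nb
        if dE <= 0 or rng.random() < np.exp(-dE / T_c):
            spins[i, j] *= -1
    if sweep >= 500:                           # discard burn-in sweeps
        mags.append(spins.mean())

hist, edges = np.histogram(mags, bins=50, range=(-1, 1), density=True)
```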

  7. Existence, uniqueness and regularity of a time-periodic probability density distribution arising in a sedimentation-diffusion problem

    NASA Technical Reports Server (NTRS)

    Nitsche, Ludwig C.; Nitsche, Johannes M.; Brenner, Howard

    1988-01-01

    The sedimentation and diffusion of a nonneutrally buoyant Brownian particle in a vertical fluid-filled cylinder of finite length which is instantaneously inverted at regular intervals are investigated analytically. A one-dimensional convective-diffusive equation is derived to describe the temporal and spatial evolution of the probability density; a periodicity condition is formulated; the applicability of Fredholm theory is established; and the parameter-space regions are determined within which the existence and uniqueness of solutions are guaranteed. Numerical results for sample problems are presented graphically and briefly characterized.

  8. An assessment of vapour pressure estimation methods.

    PubMed

    O'Meara, Simon; Booth, Alastair Murray; Barley, Mark Howard; Topping, David; McFiggans, Gordon

    2014-09-28

    Laboratory measurements of vapour pressures for atmospherically relevant compounds were collated and used to assess the accuracy of vapour pressure estimates generated by seven estimation methods and impacts on predicted secondary organic aerosol. Of the vapour pressure estimation methods that were applicable to all the test set compounds, the Lee-Kesler [Reid et al., The Properties of Gases and Liquids, 1987] method showed the lowest mean absolute error and the Nannoolal et al. [Nannoolal et al., Fluid Phase Equilib., 2008, 269, 117-133] method showed the lowest mean bias error (when both used normal boiling points estimated using the Nannoolal et al. [Nannoolal et al., Fluid Phase Equilib., 2004, 226, 45-63] method). The effect of varying vapour pressure estimation methods on secondary organic aerosol (SOA) mass loading and composition was investigated using an absorptive partitioning equilibrium model. The Myrdal and Yalkowsky [Myrdal and Yalkowsky, Ind. Eng. Chem. Res., 1997, 36, 2494-2499] vapour pressure estimation method using the Nannoolal et al. [Nannoolal et al., Fluid Phase Equilib., 2004, 226, 45-63] normal boiling point gave the most accurate estimation of SOA loading despite not being the most accurate for vapour pressures alone. PMID:25105180

  9. Optimum quantisers for a Gaussian input probability density and for the magnitude-error distortion measure

    NASA Technical Reports Server (NTRS)

    Demaret, J. C.

    1975-01-01

    The parameters of non-uniform and uniform quantizers with up to ten bits of quantization, optimum for a Gaussian input probability density and for the magnitude-error distortion criterion, are computed. Optimum quantizers must be understood as quantizers with minimum distortion. The numerical method used for the optimization converges relatively rapidly. A comparison between optimum non-uniform quantizers and optimum uniform quantizers is made.
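    A minimal sketch (not the paper's numerical scheme): a Lloyd-style iteration for a minimum magnitude-error (mean-absolute-error) quantizer of a unit Gaussian source, using the fact that the optimal reconstruction level of each cell is its conditional median and the optimal boundaries are midpoints of adjacent levels.

```python
import numpy as np

rng = np.random.default_rng(0)
samples = np.sort(rng.standard_normal(10**6))   # Monte Carlo Gaussian source
levels = np.linspace(-2.0, 2.0, 8)              # 3-bit quantizer, initial guess

for _ in range(50):
    boundaries = 0.5 * (levels[:-1] + levels[1:])          # midpoints between levels
    cells = np.searchsorted(boundaries, samples)           # assign samples to cells
    levels = np.array([np.median(samples[cells == k]) for k in range(levels.size)])

boundaries = 0.5 * (levels[:-1] + levels[1:])
distortion = np.mean(np.abs(samples - levels[np.searchsorted(boundaries, samples)]))
print(levels, distortion)
```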

  10. Probability density function of non-reactive solute concentration in heterogeneous porous formations.

    PubMed

    Bellin, Alberto; Tonina, Daniele

    2007-10-30

    Available models of solute transport in heterogeneous formations do not provide a complete characterization of the predicted concentration. This is a serious drawback, especially in risk analysis where confidence intervals and probabilities of exceeding threshold values are required. Our contribution to fill this gap of knowledge is a probability distribution model for the local concentration of conservative tracers migrating in heterogeneous aquifers. Our model accounts for dilution, mechanical mixing within the sampling volume and spreading due to formation heterogeneity. It is developed by modeling local concentration dynamics with an Ito Stochastic Differential Equation (SDE) that, under the hypothesis of statistical stationarity, leads to the Beta probability distribution function (pdf) for the solute concentration. This model shows large flexibility in capturing the smoothing effect of the sampling volume and the associated reduction of the probability of exceeding large concentrations. Furthermore, it is fully characterized by the first two moments of the solute concentration, and these are the same pieces of information required for standard geostatistical techniques employing Normal or Log-Normal distributions. Additionally, we show that in the absence of pore-scale dispersion and for point concentrations the pdf model converges to the binary distribution of [Dagan, G., 1982. Stochastic modeling of groundwater flow by unconditional and conditional probabilities, 2, The solute transport. Water Resour. Res. 18 (4), 835-848.], while it approaches the Normal distribution for sampling volumes much larger than the characteristic scale of the aquifer heterogeneity. Furthermore, we demonstrate that the same model with the spatial moments replacing the statistical moments can be applied to estimate the proportion of the plume volume where solute concentrations are above or below critical thresholds. Application of this model to point and vertically averaged bromide
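    A minimal sketch of the moments-to-Beta step implied above: assumed values for the first two moments of a normalized concentration give the Beta parameters and an exceedance probability for an illustrative threshold. The moment values and threshold are hypothetical.

```python
from scipy import stats

mean, var = 0.3, 0.02                      # hypothetical normalized concentration moments
nu = mean * (1.0 - mean) / var - 1.0       # must be positive for a valid Beta pdf
a, b = mean * nu, (1.0 - mean) * nu        # method-of-moments Beta parameters
p_exceed = stats.beta.sf(0.6, a, b)        # P(C > 0.6); the threshold is illustrative
print(a, b, p_exceed)
```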

  11. Direct Density Derivative Estimation.

    PubMed

    Sasaki, Hiroaki; Noh, Yung-Kyun; Niu, Gang; Sugiyama, Masashi

    2016-06-01

    Estimating the derivatives of probability density functions is an essential step in statistical data analysis. A naive approach to estimate the derivatives is to first perform density estimation and then compute its derivatives. However, this approach can be unreliable because a good density estimator does not necessarily mean a good density derivative estimator. To cope with this problem, in this letter, we propose a novel method that directly estimates density derivatives without going through density estimation. The proposed method provides computationally efficient estimation for the derivatives of any order on multidimensional data with a hyperparameter tuning method and achieves the optimal parametric convergence rate. We further discuss an extension of the proposed method by applying regularized multitask learning and a general framework for density derivative estimation based on Bregman divergences. Applications of the proposed method to nonparametric Kullback-Leibler divergence approximation and bandwidth matrix selection in kernel density estimation are also explored. PMID:27140943
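    A minimal sketch of the naive two-step route the letter argues against: build a Gaussian kernel density estimate and differentiate it analytically. The letter's direct estimator is not reproduced here.

```python
import numpy as np

def kde_and_derivative(grid, data, h):
    """Gaussian KDE and its first derivative evaluated on grid."""
    u = (grid[:, None] - data[None, :]) / h
    k = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)
    f = k.mean(axis=1) / h                  # density estimate
    df = (-u * k).mean(axis=1) / h**2       # its analytical derivative
    return f, df

rng = np.random.default_rng(0)
data = rng.normal(size=500)
grid = np.linspace(-3.0, 3.0, 200)
f_hat, df_hat = kde_and_derivative(grid, data, h=0.3)
```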

  12. Probability Density Function for Waves Propagating in a Straight Rough Wall Tunnel

    SciTech Connect

    Pao, H

    2004-01-28

    The radio channel places fundamental limitations on the performance of wireless communication systems in tunnels and caves. The transmission path between the transmitter and receiver can vary from a simple direct line of sight to one that is severely obstructed by rough walls and corners. Unlike wired channels that are stationary and predictable, radio channels can be extremely random and difficult to analyze. In fact, modeling the radio channel has historically been one of the more challenging parts of any radio system design; this is often done using statistical methods. The mechanisms behind electromagnetic wave propagation are diverse, but can generally be attributed to reflection, diffraction, and scattering. Because of the multiple reflections from rough walls, the electromagnetic waves travel along different paths of varying lengths. The interactions between these waves cause multipath fading at any location, and the strengths of the waves decrease as the distance between the transmitter and receiver increases. As a consequence of the central limit theorem, the received signals are approximately Gaussian random processes. This means that the field propagating in a cave or tunnel is typically a complex-valued Gaussian random process.
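    A minimal sketch of the central-limit argument above: summing many multipath components with assumed random amplitudes and phases yields an approximately complex Gaussian field, so the received envelope is close to Rayleigh-distributed. The path amplitudes and counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_samples = 50, 100_000
amps = rng.uniform(0.5, 1.5, size=(n_paths, 1))            # assumed path amplitudes
phases = rng.uniform(0.0, 2.0 * np.pi, size=(n_paths, n_samples))
field = np.sum(amps * np.exp(1j * phases), axis=0)          # complex received field
envelope = np.abs(field)                                    # approximately Rayleigh
```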

  13. Estimation of avidin activity by two methods.

    PubMed

    Borza, B; Marcheş, F; Repanovici, R; Burducea, O; Popa, L M

    1991-01-01

    The biological activity of avidin was estimated by two different methods. The spectrophotometric method used avidin titration with biotin in the presence of 4-hydroxyazobenzene-2'-carboxylic acid as indicator. In the radioisotopic determination, titration with tritiated biotin was performed. Both methods led to the same results, but the spectrophotometric one consumes less avidin and is more rapid, making it more convenient.

  14. [Bayesian methods for genomic breeding value estimation].

    PubMed

    Wang, Chonglong; Ding, Xiangdong; Liu, Jianfeng; Yin, Zongjun; Zhang, Qin

    2014-02-01

    Estimation of genomic breeding values is the key step in genomic selection. The successful application of genomic selection depends on the accuracy of genomic estimated breeding values, which is mostly determined by the estimation method. Bayes-type and BLUP-type methods are the two main methods which have been widely studied and used. Here, we systematically introduce the currently proposed Bayesian methods, and summarize their effectiveness and improvements. Results from both simulated and real data showed that the accuracies of Bayesian methods are higher than those of BLUP methods, especially for traits influenced by QTLs with large effects. Because the theories and computation of Bayesian methods are relatively complicated, their use in practical breeding is less common than that of BLUP methods. However, with the development of fast algorithms and the improvement of computer hardware, the computational problem of Bayesian methods is expected to be solved. In addition, further studies on the genetic architecture of traits will provide Bayesian methods with more accurate prior information, which will make their advantage in accuracy of genomic estimated breeding values more prominent. Therefore, the application of Bayesian methods will be more extensive.

  15. Large-eddy simulation/probability density function modeling of local extinction and re-ignition in Sandia Flame E

    NASA Astrophysics Data System (ADS)

    Wang, Haifeng; Popov, Pavel; Hiremath, Varun; Lantz, Steven; Viswanathan, Sharadha; Pope, Stephen

    2010-11-01

    A large-eddy simulation (LES)/probability density function (PDF) code is developed and applied to the study of local extinction and re-ignition in Sandia Flame E. The modified Curl mixing model is used to account for the sub-filter scalar mixing; the ARM1 mechanism is used for the chemical reaction; and the in-situ adaptive tabulation (ISAT) algorithm is used to accelerate the chemistry calculations. Calculations are performed on different grids to study the resolution requirement for this flame. Then, with sufficient grid resolution, full-scale LES/PDF calculations are performed to study the flame characteristics and the turbulence-chemistry interactions. Sensitivity to the mixing frequency model is explored in order to understand the behavior of sub-filter scalar mixing in the context of LES. The simulation results are compared to the experimental data to demonstrate the capability of the code. Comparison is also made to previous RANS/PDF simulations.

  16. Evaluating methods for estimating existential risks.

    PubMed

    Tonn, Bruce; Stiefel, Dorian

    2013-10-01

    Researchers and commissions contend that the risk of human extinction is high, but none of these estimates have been based upon a rigorous methodology suitable for estimating existential risks. This article evaluates several methods that could be used to estimate the probability of human extinction. Traditional methods evaluated include: simple elicitation; whole evidence Bayesian; evidential reasoning using imprecise probabilities; and Bayesian networks. Three innovative methods are also considered: influence modeling based on environmental scans; simple elicitation using extinction scenarios as anchors; and computationally intensive possible-worlds modeling. Evaluation criteria include: level of effort required by the probability assessors; level of effort needed to implement the method; ability of each method to model the human extinction event; ability to incorporate scientific estimates of contributory events; transparency of the inputs and outputs; acceptability to the academic community (e.g., with respect to intellectual soundness, familiarity, verisimilitude); credibility and utility of the outputs of the method to the policy community; difficulty of communicating the method's processes and outputs to nonexperts; and accuracy in other contexts. The article concludes by recommending that researchers assess the risks of human extinction by combining these methods. PMID:23551083

  17. Nonparametric estimation of population density for line transect sampling using FOURIER series

    USGS Publications Warehouse

    Crain, B.R.; Burnham, K.P.; Anderson, D.R.; Lake, J.L.

    1979-01-01

    A nonparametric, robust density estimation method is explored for the analysis of right-angle distances from a transect line to the objects sighted. The method is based on the FOURIER series expansion of a probability density function over an interval. With only mild assumptions, a general population density estimator of wide applicability is obtained.
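    A minimal sketch of a Fourier (cosine) series density estimator on [0, w] for right-angle transect distances, with an arbitrary fixed number of terms; the paper's rule for choosing the number of terms is not reproduced.

```python
import numpy as np

def fourier_density(distances, w, m=4):
    """Return f(x) estimated from the first m cosine terms on [0, w]."""
    x = np.asarray(distances, dtype=float)
    n = x.size
    a = np.array([2.0 / (n * w) * np.cos(k * np.pi * x / w).sum()
                  for k in range(1, m + 1)])

    def f(t):
        t = np.asarray(t, dtype=float)
        series = sum(a[k - 1] * np.cos(k * np.pi * t / w) for k in range(1, m + 1))
        return 1.0 / w + series

    return f

# f(0), the estimated density at the transect line, is the quantity commonly
# used in line-transect population density estimation.
```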

  18. FINAL PROJECT REPORT DOE Early Career Principal Investigator Program Project Title: Developing New Mathematical Models for Multiphase Flows Based on a Fundamental Probability Density Function Approach

    SciTech Connect

    Shankar Subramaniam

    2009-04-01

    This final project report summarizes progress made towards the objectives described in the proposal entitled “Developing New Mathematical Models for Multiphase Flows Based on a Fundamental Probability Density Function Approach”. Substantial progress has been made in theory, modeling and numerical simulation of turbulent multiphase flows. The consistent mathematical framework based on probability density functions is described. New models are proposed for turbulent particle-laden flows and sprays.

  19. Habitat suitability criteria via parametric distributions: estimation, model selection and uncertainty

    USGS Publications Warehouse

    Som, Nicholas A.; Goodman, Damon H.; Perry, Russell W.; Hardy, Thomas B.

    2016-01-01

    Previous methods for constructing univariate habitat suitability criteria (HSC) curves have ranged from professional judgement to kernel-smoothed density functions or combinations thereof. We present a new method of generating HSC curves that applies probability density functions as the mathematical representation of the curves. Compared with previous approaches, benefits of our method include (1) estimation of probability density function parameters directly from raw data, (2) quantitative methods for selecting among several candidate probability density functions, and (3) concise methods for expressing estimation uncertainty in the HSC curves. We demonstrate our method with a thorough example using data collected on the depth of water used by juvenile Chinook salmon (Oncorhynchus tshawytscha) in the Klamath River of northern California and southern Oregon. All R code needed to implement our example is provided in the appendix. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
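    A minimal sketch of fitting candidate probability density functions to depth-use data and selecting among them, assuming AIC as the selection criterion and synthetic depths; the paper's own candidate set, selection criteria, and uncertainty treatment may differ (its worked example is in R, whereas this sketch is in Python).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
depths = rng.gamma(shape=3.0, scale=0.4, size=300)   # hypothetical depths (m)

candidates = {"gamma": stats.gamma, "lognorm": stats.lognorm, "weibull": stats.weibull_min}
for name, dist in candidates.items():
    params = dist.fit(depths, floc=0.0)              # location fixed at zero
    loglik = np.sum(dist.logpdf(depths, *params))
    aic = 2 * (len(params) - 1) - 2 * loglik         # loc fixed, so one fewer free parameter
    print(f"{name:8s} AIC = {aic:.1f}")
```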

  20. Computing rates of Markov models of voltage-gated ion channels by inverting partial differential equations governing the probability density functions of the conducting and non-conducting states

    PubMed Central

    Tveito, Aslak; Lines, Glenn T.; Edwards, Andrew G.; McCulloch, Andrew

    2016-01-01

    Markov models are ubiquitously used to represent the function of single ion channels. However, solving the inverse problem to construct a Markov model of single channel dynamics from bilayer or patch-clamp recordings remains challenging, particularly for channels involving complex gating processes. Methods for solving the inverse problem are generally based on data from voltage clamp measurements. Here, we describe an alternative approach to this problem based on measurements of voltage traces. The voltage traces define probability density functions of the functional states of an ion channel. These probability density functions can also be computed by solving a deterministic system of partial differential equations. The inversion is based on tuning the rates of the Markov models used in the deterministic system of partial differential equations such that the solution mimics the properties of the probability density function gathered from (pseudo) experimental data as well as possible. The optimization is done by defining a cost function to measure the difference between the deterministic solution and the solution based on experimental data. By invoking the properties of this function, it is possible to infer whether the rates of the Markov model are identifiable by our method. We present applications to Markov models well known from the literature. PMID:27154008

  1. A simple method to estimate interwell autocorrelation

    SciTech Connect

    Pizarro, J.O.S.; Lake, L.W.

    1997-08-01

    The estimation of autocorrelation in the lateral or interwell direction is important when performing reservoir characterization studies using stochastic modeling. This paper presents a new method to estimate the interwell autocorrelation based on parameters, such as the vertical range and the variance, that can be estimated with commonly available data. We used synthetic fields that were generated from stochastic simulations to provide data to construct the estimation charts. These charts relate the ratio of areal to vertical variance and the autocorrelation range (expressed variously) in two directions. Three different semivariogram models were considered: spherical, exponential and truncated fractal. The overall procedure is demonstrated using field data. We find that the approach gives the most self-consistent results when it is applied to previously identified facies. Moreover, the autocorrelation trends follow the depositional pattern of the reservoir, which gives confidence in the validity of the approach.

  2. Variational bayesian method of estimating variance components.

    PubMed

    Arakawa, Aisaku; Taniguchi, Masaaki; Hayashi, Takeshi; Mikawa, Satoshi

    2016-07-01

    We developed a Bayesian analysis approach by using a variational inference method, a so-called variational Bayesian method, to determine the posterior distributions of variance components. This variational Bayesian method and an alternative Bayesian method using Gibbs sampling were compared in estimating genetic and residual variance components from both simulated data and publicly available real pig data. In the simulated data set, we observed strong bias toward overestimation of genetic variance for the variational Bayesian method in the case of low heritability and low population size, and less bias was detected with larger population sizes in both methods examined. No differences in the estimates of variance components between the variational Bayesian method and Gibbs sampling were found in the real pig data. However, the posterior distributions of the variance components obtained with the variational Bayesian method had shorter tails than those obtained with the Gibbs sampling. Consequently, the posterior standard deviations of the genetic and residual variances of the variational Bayesian method were lower than those of the method using Gibbs sampling. The computing time required was much shorter with the variational Bayesian method than with the method using Gibbs sampling.

  3. Density estimation with non-parametric methods

    NASA Astrophysics Data System (ADS)

    Fadda, D.; Slezak, E.; Bijaoui, A.

    1998-01-01

    One key issue in several astrophysical problems is the evaluation of the probability density function underlying an observational discrete data set. We here review two non-parametric density estimators which recently appeared in the astrophysical literature, namely the adaptive kernel density estimator and the Maximum Penalized Likelihood technique, and describe another method based on the wavelet transform. The efficiency of these estimators is tested by using extensive numerical simulations in the one-dimensional case. The results are in good agreement with theoretical functions and the three methods appear to yield consistent estimates. However, the Maximum Penalized Likelihood suffers from a lack of resolution and high computational cost due to its dependency on a minimization algorithm. The small differences between kernel and wavelet estimates are mainly explained by the ability of the wavelet method to take into account local gaps in the data distribution. This new approach is very promising, since smaller structures superimposed onto a larger one are detected only by this technique, especially when small samples are investigated. Thus, wavelet solutions appear to be better suited for subclustering studies. Nevertheless, kernel estimates seem more robust and are reliable solutions although some small-scale details can be missed. In order to check these estimators with respect to previous studies, two galaxy redshift samples, related to the galaxy cluster A3526 and to the Corona Borealis region, have been analyzed. In both these cases claims for bimodality are confirmed at a high confidence level. The complete version of this paper with the whole set of figures can be accessed from the electronic version of the A&A Suppl. Ser. managed by Editions de Physique as well as from the SISSA database (astro-ph/9704096).
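    A minimal sketch of the adaptive kernel estimator reviewed above (the Abramson/Silverman two-stage recipe): a fixed-bandwidth pilot estimate sets local bandwidth factors, which then rescale the kernel placed on each data point. The Maximum Penalized Likelihood and wavelet estimators are not reproduced.

```python
import numpy as np

def adaptive_kde(grid, data, h, alpha=0.5):
    def gauss(u):
        return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

    # Stage 1: pilot estimate at the data points with a fixed bandwidth h.
    pilot = gauss((data[:, None] - data[None, :]) / h).mean(axis=1) / h
    lam = (pilot / np.exp(np.mean(np.log(pilot)))) ** (-alpha)   # local factors

    # Stage 2: adaptive estimate with bandwidth h * lam_i for data point i.
    u = (grid[:, None] - data[None, :]) / (h * lam[None, :])
    return (gauss(u) / (h * lam[None, :])).mean(axis=1)

rng = np.random.default_rng(0)
sample = np.concatenate([rng.normal(-2.0, 0.3, 300), rng.normal(1.0, 1.0, 700)])
density = adaptive_kde(np.linspace(-4.0, 5.0, 400), sample, h=0.4)
```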

  4. New evolution equations for the joint response-excitation probability density function of stochastic solutions to first-order nonlinear PDEs

    SciTech Connect

    Venturi, D.; Karniadakis, G.E.

    2012-08-30

    By using functional integral methods we determine new evolution equations satisfied by the joint response-excitation probability density function (PDF) associated with the stochastic solution to first-order nonlinear partial differential equations (PDEs). The theory is presented for both fully nonlinear and for quasilinear scalar PDEs subject to random boundary conditions, random initial conditions or random forcing terms. Particular applications are discussed for the classical linear and nonlinear advection equations and for the advection-reaction equation. By using a Fourier-Galerkin spectral method we obtain numerical solutions of the proposed response-excitation PDF equations. These numerical solutions are compared against those obtained by using more conventional statistical approaches such as probabilistic collocation and multi-element probabilistic collocation methods. It is found that the response-excitation approach yields accurate predictions of the statistical properties of the system. In addition, it allows one to directly ascertain the tails of probabilistic distributions, thus facilitating the assessment of rare events and associated risks. The computational cost of the response-excitation method is orders of magnitude smaller than that of more conventional statistical approaches if the PDE is subject to high-dimensional random boundary or initial conditions. The question of high-dimensionality for evolution equations involving multidimensional joint response-excitation PDFs is also addressed.

  5. Estimation method for serial dilution experiments.

    PubMed

    Ben-David, Avishai; Davidson, Charles E

    2014-12-01

    Titration of microorganisms in infectious or environmental samples is a cornerstone of quantitative microbiology. A simple method is presented to estimate the microbial counts obtained with the serial dilution technique for microorganisms that can grow on bacteriological media and develop into a colony. The number (concentration) of viable microbial organisms is estimated from a single dilution plate (assay) without a need for replicate plates. Our method selects the best agar plate with which to estimate the microbial counts, and takes into account the colony size and plate area that both contribute to the likelihood of miscounting the number of colonies on a plate. The estimate of the optimal count given by our method can be used to narrow the search for the best (optimal) dilution plate and saves time. The required inputs are the plate size, the microbial colony size, and the serial dilution factors. The proposed approach shows relative accuracy well within ±0.1 log10 from data produced by computer simulations. The method maintains this accuracy even in the presence of dilution errors of up to 10% (for both the aliquot and diluent volumes), microbial counts between 10⁴ and 10¹² colony-forming units, dilution ratios from 2 to 100, and plate size to colony size ratios between 6.25 and 200.

  6. Cost estimating methods for advanced space systems

    NASA Technical Reports Server (NTRS)

    Cyr, Kelley

    1988-01-01

    The development of parametric cost estimating methods for advanced space systems in the conceptual design phase is discussed. The process of identifying variables which drive cost and the relationship between weight and cost are discussed. A theoretical model of cost is developed and tested using a historical data base of research and development projects.

  7. Estimation of typhoon rainfall in GaoPing River: A Multivariate Maximum Entropy Method

    NASA Astrophysics Data System (ADS)

    Pei-Jui, Wu; Hwa-Lung, Yu

    2016-04-01

    Heavy rainfall from typhoons is the main cause of natural disasters in Taiwan, resulting in significant losses of human lives and property. On average, 3.5 typhoons strike Taiwan every year, and Typhoon Morakot in 2009 had one of the most severe impacts in recorded history. Because the duration, path, and intensity of a typhoon affect the temporal and spatial rainfall pattern in a specific region, identifying the characteristics of typhoon rainfall types is advantageous when estimating rainfall amounts. This study develops a rainfall prediction model in three parts. First, the extended empirical orthogonal function (EEOF) is used to classify typhoon events, decomposing the standardized rainfall pattern at all stations for each event into EOFs and principal components (PCs), so that events which vary similarly in time and space are grouped into similar typhoon types. Next, according to this classification, probability density functions (PDFs) are constructed in space and time by means of multivariate maximum entropy using the first to fourth statistical moments, which yields the probability at each station and time. Finally, the Bayesian Maximum Entropy (BME) method is used to construct the typhoon rainfall prediction model and to estimate rainfall for the GaoPing River, located in southern Taiwan. This study could be useful for future typhoon rainfall prediction and for government typhoon disaster prevention.
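    A minimal sketch of the EOF/PC decomposition step described above, using a plain EOF via singular value decomposition on a hypothetical time-by-station rainfall matrix; the study itself uses the extended EOF, and the maximum entropy and BME steps are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
rain = rng.gamma(2.0, 5.0, size=(24, 40))            # 24 time steps x 40 stations
anomaly = rain - rain.mean(axis=0, keepdims=True)    # remove station means

u, s, eofs = np.linalg.svd(anomaly, full_matrices=False)
pcs = u * s                                          # principal component time series
explained = s**2 / np.sum(s**2)                      # variance fraction per EOF mode
```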

  8. A computerized method to estimate friction coefficient from orientation distribution of meso-scale faults

    NASA Astrophysics Data System (ADS)

    Sato, Katsushi

    2016-08-01

    The friction coefficient controls the brittle strength of the Earth's crust for deformation recorded by faults. This study proposes a computerized method to determine the friction coefficient of meso-scale faults. The method is based on the analysis of orientation distribution of faults, and the principal stress axes and the stress ratio calculated by a stress tensor inversion technique. The method assumes that faults are activated according to the cohesionless Coulomb's failure criterion, where the fluctuations of fluid pressure and the magnitude of differential stress are assumed to induce faulting. In this case, the orientation distribution of fault planes is described by a probability density function that is visualized as linear contours on a Mohr diagram. The parametric optimization of the function for an observed fault population yields the friction coefficient. A test using an artificial fault-slip dataset successfully determines the internal friction angle (the arctangent of the friction coefficient) with its confidence interval of several degrees estimated by the bootstrap resampling technique. An application to natural faults cutting a Pleistocene forearc basin fill yields a friction coefficient around 0.7, which is experimentally predicted by Byerlee's law.

  9. Cost estimating methods for advanced space systems

    NASA Technical Reports Server (NTRS)

    Cyr, Kelley

    1988-01-01

    Parametric cost estimating methods for space systems in the conceptual design phase are developed. The approach is to identify variables that drive cost such as weight, quantity, development culture, design inheritance, and time. The relationship between weight and cost is examined in detail. A theoretical model of cost is developed and tested statistically against a historical data base of major research and development programs. It is concluded that the technique presented is sound, but that it must be refined in order to produce acceptable cost estimates.

  10. Implicit solvent methods for free energy estimation

    PubMed Central

    Decherchi, Sergio; Masetti, Matteo; Vyalov, Ivan; Rocchia, Walter

    2014-01-01

    Solvation is a fundamental contribution in many biological processes and especially in molecular binding. Its estimation can be performed by means of several computational approaches. The aim of this review is to give an overview of existing theories and methods to estimate solvent effects, with a specific focus on the category of implicit solvent models and their use in Molecular Dynamics. In many of these models, the solvent is considered as a homogeneous continuum medium, while the solute can be represented in atomic detail and at different levels of theory. Despite their degree of approximation, implicit methods are still widely employed due to their trade-off between accuracy and efficiency. Their derivation is rooted in the statistical mechanics and integral equations disciplines, some of the related details being provided here. Finally, methods that combine implicit solvent models and molecular dynamics simulation are briefly described. PMID:25193298

  11. Unit-Sphere Anisotropic Multiaxial Stochastic-Strength Model Probability Density Distribution for the Orientation of Critical Flaws

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel

    2013-01-01

    Models that predict the failure probability of monolithic glass and ceramic components under multiaxial loading have been developed by authors such as Batdorf, Evans, and Matsuo. These "unit-sphere" failure models assume that the strength-controlling flaws are randomly oriented, noninteracting planar microcracks of specified geometry but of variable size. This report develops a formulation to describe the probability density distribution of the orientation of critical strength-controlling flaws that results from an applied load. This distribution is a function of the multiaxial stress state, the shear sensitivity of the flaws, the Weibull modulus, and the strength anisotropy. Examples are provided showing the predicted response on the unit sphere for various stress states for isotropic and transversely isotropic (anisotropic) materials--including the most probable orientation of critical flaws for offset uniaxial loads with strength anisotropy. The author anticipates that this information could be used to determine anisotropic stiffness degradation or anisotropic damage evolution for individual brittle (or quasi-brittle) composite material constituents within finite element or micromechanics-based software

  12. Fusing probability density function into Dempster-Shafer theory of evidence for the evaluation of water treatment plant.

    PubMed

    Chowdhury, Shakhawat

    2013-05-01

    The evaluation of the status of a municipal drinking water treatment plant (WTP) is important. The evaluation depends on several factors, including, human health risks from disinfection by-products (R), disinfection performance (D), and cost (C) of water production and distribution. The Dempster-Shafer theory (DST) of evidence can combine the individual status with respect to R, D, and C to generate a new indicator, from which the overall status of a WTP can be evaluated. In the DST, the ranges of different factors affecting the overall status are divided into several segments. The basic probability assignments (BPA) for each segment of these factors are provided by multiple experts, which are then combined to obtain the overall status. In assigning the BPA, the experts use their individual judgments, which can impart subjective biases in the overall evaluation. In this research, an approach has been introduced to avoid the assignment of subjective BPA. The factors contributing to the overall status were characterized using the probability density functions (PDF). The cumulative probabilities for different segments of these factors were determined from the cumulative density function, which were then assigned as the BPA for these factors. A case study is presented to demonstrate the application of PDF in DST to evaluate a WTP, leading to the selection of the required level of upgradation for the WTP.
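    A minimal sketch of the general idea (not the paper's WTP model): segment probabilities derived from assumed normal PDFs via their CDFs serve as basic probability assignments, which are then combined with Dempster's rule restricted to the same singleton segments. The segment edges, factor PDFs, and status labels are hypothetical.

```python
import numpy as np
from scipy import stats

def bpa_from_pdf(mean, std, edges):
    """Probability mass of N(mean, std) in each segment between edges."""
    cdf = stats.norm.cdf(edges, mean, std)
    return np.diff(cdf) / (cdf[-1] - cdf[0])

def dempster_combine(m1, m2):
    """Dempster's rule when all mass sits on the same singleton hypotheses."""
    joint = m1 * m2
    return joint / joint.sum()            # normalization removes the conflict mass

edges = np.array([0.0, 0.33, 0.66, 1.0])  # illustrative "poor"/"fair"/"good" segments
m_risk = bpa_from_pdf(0.45, 0.15, edges)  # hypothetical risk-factor PDF
m_cost = bpa_from_pdf(0.60, 0.20, edges)  # hypothetical cost-factor PDF
print(dempster_combine(m_risk, m_cost))
```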

  13. A method for estimating soil moisture availability

    NASA Technical Reports Server (NTRS)

    Carlson, T. N.

    1985-01-01

    A method for estimating values of soil moisture based on measurements of infrared surface temperature is discussed. A central element in the method is a boundary layer model. Although it has been shown that soil moistures determined by this method using satellite measurements do correspond in a coarse fashion to the antecedent precipitation, the accuracy and exact physical interpretation (with respect to ground water amounts) are not well known. This area of ignorance, which currently impedes the practical application of the method to problems in hydrology, meteorology and agriculture, is largely due to the absence of corresponding surface measurements. Preliminary field measurements made over France have led to the development of a promising vegetation formulation (Taconet et al., 1985), which has been incorporated in the model. It is necessary, however, to test the vegetation component, and the entire method, over a wide variety of surface conditions and crop canopies.

  14. Probability density function treatment of turbulence/chemistry interactions during the ignition of a temperature-stratified mixture for application to HCCI engine modeling

    SciTech Connect

    Bisetti, Fabrizio; Chen, J.-Y.; Hawkes, Evatt R.; Chen, Jacqueline H.

    2008-12-15

    Homogeneous charge compression ignition (HCCI) engine technology promises to reduce NOx and soot emissions while achieving high thermal efficiency. Temperature and mixture stratification are regarded as effective means of controlling the start of combustion and reducing the abrupt pressure rise at high loads. Probability density function methods are currently being pursued as a viable approach to modeling the effects of turbulent mixing and mixture stratification on HCCI ignition. In this paper we present an assessment of the merits of three widely used mixing models in reproducing the moments of reactive scalars during the ignition of a lean hydrogen/air mixture (φ = 0.1, p = 41 atm, and T = 1070 K) under increasing temperature stratification and subject to decaying turbulence. The results from the solution of the evolution equation for a spatially homogeneous joint PDF of the reactive scalars are compared with available direct numerical simulation (DNS) data [E.R. Hawkes, R. Sankaran, P.P. Pebay, J.H. Chen, Combust. Flame 145 (1-2) (2006) 145-159]. The mixing models are found able to quantitatively reproduce the time history of the heat release rate, first and second moments of temperature, and hydroxyl radical mass fraction from the DNS results. Most importantly, the dependence of the heat release rate on the extent of the initial temperature stratification in the charge is also well captured. (author)
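    A minimal sketch of one particle mixing-model update for a spatially homogeneous composition PDF, using the IEM model as a representative example; the abstract does not name the three models it compares, and the particle compositions, mixing frequency, and time step below are illustrative assumptions.

```python
import numpy as np

def iem_step(phi, omega, dt, c_phi=2.0):
    """Relax each particle composition toward the ensemble mean (IEM model)."""
    return phi - 0.5 * c_phi * omega * (phi - phi.mean(axis=0)) * dt

rng = np.random.default_rng(0)
# Hypothetical particle ensemble: columns are temperature and a scalar mass fraction.
particles = rng.normal(loc=[1070.0, 0.1], scale=[15.0, 0.01], size=(5000, 2))
for _ in range(100):
    particles = iem_step(particles, omega=1.0e3, dt=1.0e-5)   # mixing frequency, time step
```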

  15. On methods of estimating cosmological bulk flows

    NASA Astrophysics Data System (ADS)

    Nusser, Adi

    2016-01-01

    We explore similarities and differences between several estimators of the cosmological bulk flow, B, from the observed radial peculiar velocities of galaxies. A distinction is made between two theoretical definitions of B as a dipole moment of the velocity field weighted by a radial window function. One definition involves the three-dimensional (3D) peculiar velocity, while the other is based on its radial component alone. Different methods attempt at inferring B for either of these definitions which coincide only for the case of a velocity field which is constant in space. We focus on the Wiener Filtering (WF) and the Constrained Minimum Variance (CMV) methodologies. Both methodologies require a prior expressed in terms of the radial velocity correlation function. Hoffman et al. compute B in Top-Hat windows from a WF realization of the 3D peculiar velocity field. Feldman et al. infer B directly from the observed velocities for the second definition of B. The WF methodology could easily be adapted to the second definition, in which case it will be equivalent to the CMV with the exception of the imposed constraint. For a prior with vanishing correlations or very noisy data, CMV reproduces the standard Maximum Likelihood estimation for B of the entire sample independent of the radial weighting function. Therefore, this estimator is likely more susceptible to observational biases that could be present in measurements of distant galaxies. Finally, two additional estimators are proposed.

  16. Cost estimating methods for advanced space systems

    NASA Technical Reports Server (NTRS)

    Cyr, Kelley

    1994-01-01

    NASA is responsible for developing much of the nation's future space technology. Cost estimates for new programs are required early in the planning process so that decisions can be made accurately. Because of the long lead times required to develop space hardware, the cost estimates are frequently required 10 to 15 years before the program delivers hardware. The system design in conceptual phases of a program is usually only vaguely defined and the technology used is so often state-of-the-art or beyond. These factors combine to make cost estimating for conceptual programs very challenging. This paper describes an effort to develop parametric cost estimating methods for space systems in the conceptual design phase. The approach is to identify variables that drive cost such as weight, quantity, development culture, design inheritance and time. The nature of the relationships between the driver variables and cost will be discussed. In particular, the relationship between weight and cost will be examined in detail. A theoretical model of cost will be developed and tested statistically against a historical database of major research and development projects.

  17. Influence of ENSO SSTs on the spread of the probability density function for precipitation and land surface temperature

    NASA Astrophysics Data System (ADS)

    Chen, Mingyue; Kumar, Arun

    2015-08-01

    The impact of the interannual variations in ENSO SSTs on the spread of the probability density function (PDF) for the seasonal mean of variables of societal relevance is analyzed based on a large set of hindcasts from NCEP CFSv2. The study is focused on the analysis of global rainfall and 2-m temperature over land (T2m) for the December-January-February (DJF) seasonal mean. For rainfall, the spatial distribution of the ENSO SST induced changes in the spread of the PDF strongly resembles changes in the mean but has a smaller amplitude. Over the central-eastern equatorial Pacific, changes in the spread lead to a reduction in the signal-to-noise ratio (SNR) during El Niño years and to an increase in the SNR during La Niña years. Over the extratropics, year-to-year changes in the spread are relatively small. For T2m, the changes in spread have little systematic dependence on the ENSO SSTs, and the amplitudes of the changes in spread are much smaller than the corresponding changes in the ensemble mean. The results demonstrate small systematic year-to-year variations in the PDF spread, for example over the extratropics for rainfall and over most global land areas for T2m, and indicate that it might be a good practice in seasonal predictions to assume that the spread of seasonal means from year to year is constant and that the skill in seasonal forecast information resides primarily in the shift of the first moment of the seasonal mean PDF.

  18. Influence of ENSO SSTs on the spread of the probability density function for precipitation and land surface temperature

    NASA Astrophysics Data System (ADS)

    Chen, Mingyue; Kumar, Arun

    2014-09-01

    The impact of the interannual variations in ENSO SSTs on the spread of the probability density function (PDF) for the seasonal mean of variables of societal relevance is analyzed based on a large set of hindcasts from NCEP CFSv2. The study is focused on the analysis of global rainfall and 2-m temperature over land (T2m) for the December-January-February (DJF) seasonal mean. For rainfall, the spatial distribution of the ENSO SST induced changes in the spread of the PDF strongly resembles changes in the mean but has a smaller amplitude. Over the central-eastern equatorial Pacific, changes in the spread lead to a reduction in the signal-to-noise ratio (SNR) during El Niño years and to an increase in the SNR during La Niña years. Over the extratropics, year-to-year changes in the spread are relatively small. For T2m, the changes in spread have little systematic dependence on the ENSO SSTs, and the amplitudes of the changes in spread are much smaller than the corresponding changes in the ensemble mean. The results demonstrate small systematic year-to-year variations in the PDF spread, for example over the extratropics for rainfall and over most global land areas for T2m, and indicate that it might be a good practice in seasonal predictions to assume that the spread of seasonal means from year to year is constant and that the skill in seasonal forecast information resides primarily in the shift of the first moment of the seasonal mean PDF.

  19. An Analytical Method of Estimating Turbine Performance

    NASA Technical Reports Server (NTRS)

    Kochendorfer, Fred D; Nettles, J Cary

    1948-01-01

    A method is developed by which the performance of a turbine over a range of operating conditions can be analytically estimated from the blade angles and flow areas. In order to use the method, certain coefficients that determine the weight flow and friction losses must be approximated. The method is used to calculate the performance of the single-stage turbine of a commercial aircraft gas-turbine engine and the calculated performance is compared with the performance indicated by experimental data. For the turbine of the typical example, the assumed pressure losses and turning angles give a calculated performance that represents the trends of the experimental performance with reasonable accuracy. The exact agreement between analytical performance and experimental performance is contingent upon the proper selection of the blading-loss parameter. A variation of blading-loss parameter from 0.3 to 0.5 includes most of the experimental data from the turbine investigated.

  20. An analytical method of estimating turbine performance

    NASA Technical Reports Server (NTRS)

    Kochendorfer, Fred D; Nettles, J Cary

    1949-01-01

    A method is developed by which the performance of a turbine over a range of operating conditions can be analytically estimated from the blade angles and flow areas. In order to use the method, certain coefficients that determine the weight flow and the friction losses must be approximated. The method is used to calculate the performance of the single-stage turbine of a commercial aircraft gas-turbine engine and the calculated performance is compared with the performance indicated by experimental data. For the turbine of the typical example, the assumed pressure losses and the turning angles give a calculated performance that represents the trends of the experimental performance with reasonable accuracy. The exact agreement between analytical performance and experimental performance is contingent upon the proper selection of a blading-loss parameter.

  1. On the use of the noncentral chi-square density function for the distribution of helicopter spectral estimates

    NASA Technical Reports Server (NTRS)

    Garber, Donald P.

    1993-01-01

    A probability density function for the variability of ensemble averaged spectral estimates from helicopter acoustic signals in Gaussian background noise was evaluated. Numerical methods for calculating the density function and for determining confidence limits were explored. Density functions were predicted for both synthesized and experimental data and compared with observed spectral estimate variability.
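    A minimal sketch of evaluating a noncentral chi-square density and confidence limits with SciPy, as a stand-in for the numerical methods the report explores; the degrees of freedom and noncentrality below are assumptions, not values from the report.

```python
from scipy import stats

df, nc = 2 * 16, 40.0                                # e.g. 16 ensemble averages, assumed noncentrality
lo, hi = stats.ncx2.ppf([0.025, 0.975], df, nc)      # 95% limits on the scaled estimate
density_near_mean = stats.ncx2.pdf(df + nc, df, nc)  # pdf evaluated at the mean (df + nc)
print(lo, hi, density_near_mean)
```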

  2. Large Eddy Simulation/Probability Density Function Modeling of a Turbulent CH4/H2/N2 Jet Flame

    SciTech Connect

    Wang, Haifeng; Pope, Stephen B.

    2011-01-01

    In this work, we develop the large-eddy simulation (LES)/probability density function (PDF) simulation capability for turbulent combustion and apply it to a turbulent CH4/H2/N2 jet flame (DLR Flame A). The PDF code is verified to be second-order accurate with respect to the time-step size and the grid size in a manufactured one-dimensional test case. Three grids (64×64×16, 192×192×48, and 320×320×80) are used in the simulations of DLR Flame A to examine the effect of the grid resolution. The numerical solutions of the resolved mixture fraction, the mixture fraction squared, and the density are duplicated in the LES code and the PDF code to explore the numerical consistency between them. A single laminar flamelet profile is used to reduce the computational cost of treating the chemical reactions of the particles. The sensitivity of the LES results to the time-step size is explored. Both first and second-order time splitting schemes are used for integrating the stochastic differential equations for the particles, and these are compared in the jet flame simulations. The numerical results are found to be sensitive to the grid resolution, and the 192×192×48 grid is adequate to capture the main flow fields of interest for this study. The numerical consistency between LES and PDF is confirmed by the small difference between their numerical predictions. Overall good agreement between the LES/PDF predictions and the experimental data is observed for the resolved flow fields and the composition fields, including for the mass fractions of the minor species and NO. The LES results are found to be insensitive to the time-step size for this particular flame. The first-order splitting scheme performs as well as the second-order splitting scheme in predicting the resolved mean and rms mixture fraction and the density for this flame.

  3. A Novel Method for Estimating Linkage Maps

    PubMed Central

    Tan, Yuan-De; Fu, Yun-Xin

    2006-01-01

    The goal of linkage mapping is to find the true order of loci from a chromosome. Since the number of possible orders is large even for a modest number of loci, the problem of finding the optimal solution is known as an NP-hard problem or traveling salesman problem (TSP). Although a number of algorithms are available, many either are low in the accuracy of recovering the true order of loci or require tremendous amounts of computational resources, thus making them difficult to use for reconstructing a large-scale map. We developed in this article a novel method called unidirectional growth (UG) to help solve this problem. The UG algorithm sequentially constructs the linkage map on the basis of novel results about additive distance. It not only is fast but also has a very high accuracy in recovering the true order of loci according to our simulation studies. Since the UG method requires n − 1 cycles to estimate the ordering of n loci, it is particularly useful for estimating linkage maps consisting of hundreds or even thousands of linked codominant loci on a chromosome. PMID:16783016

  4. Momentum Probabilities for a Single Quantum Particle in Three-Dimensional Regular "Infinite" Wells: One Way of Promoting Understanding of Probability Densities

    ERIC Educational Resources Information Center

    Riggs, Peter J.

    2013-01-01

    Students often wrestle unsuccessfully with the task of correctly calculating momentum probability densities and have difficulty in understanding their interpretation. In the case of a particle in an "infinite" potential well, its momentum can take values that are not just those corresponding to the particle's quantised energies but…

  5. Demographic estimation methods for plants with dormancy

    USGS Publications Warehouse

    Kery, M.; Gregg, K.B.

    2004-01-01

    Demographic studies in plants appear simple because unlike animals, plants do not run away. Plant individuals can be marked with, e.g., plastic tags, but often the coordinates of an individual may be sufficient to identify it. Vascular plants in temperate latitudes have a pronounced seasonal life–cycle, so most plant demographers survey their study plots once a year often during or shortly after flowering. Life–states are pervasive in plants, hence the results of a demographic study for an individual can be summarized in a familiar encounter history, such as 0VFVVF000. A zero means that an individual was not seen in a year and a letter denotes its state for years when it was seen aboveground. V and F here stand for vegetative and flowering states, respectively. Probabilities of survival and state transitions can then be obtained by mere counting. Problems arise when there is an unobservable dormant state, i.e., when plants may stay belowground for one or more growing seasons. Encounter histories such as 0VF00F000 may then occur where the meaning of zeroes becomes ambiguous. A zero can either mean a dead or a dormant plant. Various ad hoc methods in wide use among plant ecologists have made strong assumptions about when a zero should be equated to a dormant individual. These methods have never been compared among each other. In our talk and in Kéry et al. (submitted), we show that these ad hoc estimators provide spurious estimates of survival and should not be used. In contrast, if detection probabilities for aboveground plants are known or can be estimated, capture–recapture (CR) models can be used to estimate probabilities of survival and state–transitions and the fraction of the population that is dormant. We have used this approach in two studies of terrestrial orchids, Cleistes bifaria (Kéry et al., submitted) and Cypripedium reginae (Kéry & Gregg, submitted) in West Virginia, U.S.A. For Cleistes, our data comprised one population with a total of 620

  6. A computer simulated phantom study of tomotherapy dose optimization based on probability density functions (PDF) and potential errors caused by low reproducibility of PDF

    SciTech Connect

    Sheng, Ke; Cai Jing; Brookeman, James; Molloy, Janelle; Christopher, John; Read, Paul

    2006-09-15

    Lung tumor motion trajectories measured by four-dimensional CT or dynamic MRI can be converted to a probability density function (PDF), which describes the probability of the tumor being at a certain position, for PDF-based treatment planning. Using this method in simulated sequential tomotherapy, we study the dose reduction to normal tissues and, more importantly, the effect of PDF reproducibility on the accuracy of dosimetry. For these purposes, realistic PDFs were obtained from two dynamic MRI scans of a healthy volunteer within a 2 week interval. The first PDF was accumulated from a 300 s scan and the second PDF was calculated from variable scan times from 5 s (one breathing cycle) to 300 s. Optimized beam fluences based on the second PDF were delivered to the hypothetical gross target volume (GTV) of a lung phantom that moved following the first PDF. The reproducibility between the two PDFs varied from low (78%) to high (94.8%) as the second scan time increased from 5 s to 300 s. When a highly reproducible PDF was used in optimization, the dose coverage of the GTV was maintained; the phantom lung volume receiving 10%-20% of the prescription dose was reduced by 40%-50% and the mean phantom lung dose was reduced by 9.6%. However, optimization based on a PDF with low reproducibility resulted in a 50% underdosed GTV. The dosimetric error increased nearly exponentially as the PDF error increased. Therefore, although the dose to tissue surrounding the tumor can be theoretically reduced by PDF-based treatment planning, the reliability and applicability of this method depend strongly on whether a reproducible PDF exists and is measurable. By correlating the dosimetric error with the PDF error, a useful guideline for PDF data acquisition and patient qualification for PDF-based planning can be derived.

  7. Demographic estimation methods for plants with dormancy

    USGS Publications Warehouse

    Kery, M.; Gregg, K.B.

    2004-01-01

    Demographic studies in plants appear simple because unlike animals, plants do not run away. Plant individuals can be marked with, e.g., plastic tags, but often the coordinates of an individual may be sufficient to identify it. Vascular plants in temperate latitudes have a pronounced seasonal life-cycle, so most plant demographers survey their study plots once a year, often during or shortly after flowering. Life-states are pervasive in plants, hence the results of a demographic study for an individual can be summarized in a familiar encounter history, such as 0VFVVF000. A zero means that an individual was not seen in a year and a letter denotes its state for years when it was seen aboveground. V and F here stand for vegetative and flowering states, respectively. Probabilities of survival and state transitions can then be obtained by mere counting. Problems arise when there is an unobservable dormant state, i.e., when plants may stay belowground for one or more growing seasons. Encounter histories such as 0VF00F000 may then occur where the meaning of zeroes becomes ambiguous. A zero can either mean a dead or a dormant plant. Various ad hoc methods in wide use among plant ecologists have made strong assumptions about when a zero should be equated to a dormant individual. These methods have never been compared with one another. In our talk and in Kery et al. (submitted), we show that these ad hoc estimators provide spurious estimates of survival and should not be used. In contrast, if detection probabilities for aboveground plants are known or can be estimated, capture-recapture (CR) models can be used to estimate probabilities of survival and state-transitions and the fraction of the population that is dormant. We have used this approach in two studies of terrestrial orchids, Cleistes bifaria (Kery et al., submitted) and Cypripedium reginae (Kery & Gregg, submitted) in West Virginia, U.S.A. For Cleistes, our data comprised one population with a total of 620 marked

  8. Foliage penetration obscuration probability density function analysis from overhead canopy photos for gimbaled linear-mode and Geiger-mode airborne lidar

    NASA Astrophysics Data System (ADS)

    Burton, Robin R.

    2010-04-01

    Three-dimensional (3D) Light Detection And Ranging (LIDAR) systems designed for foliage penetration can produce good bare-earth products in medium to medium-heavy obscuration environments, but product creation becomes increasingly difficult as the obscuration level increases. Prior knowledge of the obscuration environment over large areas is hard to obtain. The competing factors of area coverage rate and product quality are difficult to balance. Ground-based estimates of obscuration levels are labor intensive and only capture a small portion of the area of interest. Estimates of obscuration levels derived from airborne data require that the area of interest has been collected previously. Recently, there has been a focus on lacunarity (a scale-dependent measure of translational invariance) to quantify the gap structure of canopies. While this approach is useful, it needs to be evaluated relative to the size of the instantaneous field-of-view (IFOV) of the system under consideration. In this paper, the author reports initial results on generating not just average obscuration values from overhead canopy photographs, but obscuration probability density functions (PDFs) for both gimbaled linear-mode and Geiger-mode airborne LIDAR. In general, gimbaled linear-mode (LM) LIDAR collects data with a higher signal-to-noise ratio (SNR), but is limited to smaller areas and cannot collect at higher altitudes. Conversely, Geiger-mode (GM) LIDAR has a much lower SNR, but is capable of higher area rates and collecting data at higher altitudes. To date, Geiger-mode LIDAR obscurant penetration theory has relied on a single obscuration value, but recent work has extended it to use PDFs [1]. Whether or not the inclusion of PDFs significantly changes predicted results and more closely matches actual results awaits the generation of PDFs over specific ground truth targets and comparison to actual collections of those ground truth targets. Ideally, examination of individual PDFs

  9. An estimation method of the direct benefit of a waterlogging control project applicable to the changing environment

    NASA Astrophysics Data System (ADS)

    Zengmei, L.; Guanghua, Q.; Zishen, C.

    2015-05-01

    The direct benefit of a waterlogging control project is reflected by the reduction or avoidance of waterlogging loss. Before and after the construction of a waterlogging control project, the disaster-inducing environment in the waterlogging-prone zone is generally different. In addition, the category, quantity and spatial distribution of the disaster-bearing bodies are also changed more or less. Therefore, under the changing environment, the direct benefit of a waterlogging control project should be the reduction of waterlogging losses compared to conditions with no control project. Moreover, the waterlogging losses with or without the project should be the mathematical expectations of the waterlogging losses when rainstorms of all frequencies meet various water levels in the drainage-accepting zone. So an estimation model of the direct benefit of waterlogging control is proposed. Firstly, on the basis of a Copula function, the joint distribution of the rainstorms and the water levels is established, so as to obtain their joint probability density function. Secondly, according to the two-dimensional joint probability density distribution, the domain of integration is determined and divided into small domains; for each small domain, the probability is calculated, along with the difference between the average waterlogging losses with and without a waterlogging control project (called the regional benefit of the waterlogging control project) under the condition that rainstorms in the waterlogging-prone zone meet the water level in the drainage-accepting zone. Finally, the weighted mean of the project benefit over all small domains, with probability as the weight, gives the benefit of the waterlogging control project. Taking the estimation of the benefit of a waterlogging control project in Yangshan County, Guangdong Province, as an example, the paper briefly explains the procedures in waterlogging control project benefit estimation. The
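
    The following sketch illustrates the general shape of such a calculation under stated assumptions: a Gaussian copula joins two hypothetical marginals (rainstorm depth and receiving-water level), the domain is discretized into small cells, and the benefit is taken as the probability-weighted mean reduction in a placeholder loss function. None of the distributions, the correlation value, or the loss surface come from the paper.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical marginals: rainstorm depth (mm) and drainage-accepting water level (m).
    rain = stats.gamma(a=2.0, scale=40.0)
    level = stats.norm(loc=3.0, scale=0.8)
    rho = 0.5  # assumed Gaussian-copula correlation between the two variables

    def joint_pdf(x, y):
        """Gaussian-copula joint density built from the two marginals."""
        u, v = rain.cdf(x), level.cdf(y)
        z = stats.norm.ppf(np.clip(np.column_stack([u, v]), 1e-10, 1 - 1e-10))
        cop = stats.multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])
        c = cop.pdf(z) / (stats.norm.pdf(z[:, 0]) * stats.norm.pdf(z[:, 1]))
        return c * rain.pdf(x) * level.pdf(y)

    def loss(x, y, protected):
        """Placeholder waterlogging-loss surface; the project caps losses above a threshold."""
        base = 0.01 * x * np.maximum(y - 2.0, 0.0)
        return np.minimum(base, 1.0) if protected else base

    # Discretize the domain into small cells and take the probability-weighted mean benefit.
    xs = np.linspace(1, 300, 120)
    ys = np.linspace(1, 6, 80)
    X, Y = np.meshgrid(xs, ys)
    w = joint_pdf(X.ravel(), Y.ravel()).reshape(X.shape)
    w /= w.sum()
    benefit = np.sum(w * (loss(X, Y, False) - loss(X, Y, True)))
    print("expected direct benefit (arbitrary units):", benefit)
    ```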

  10. Statistically advanced, self-similar, radial probability density functions of atmospheric and under-expanded hydrogen jets

    NASA Astrophysics Data System (ADS)

    Ruggles, Adam J.

    2015-11-01

    This paper presents improved statistical insight regarding the self-similar scalar mixing process of atmospheric hydrogen jets and the downstream region of under-expanded hydrogen jets. Quantitative planar laser Rayleigh scattering imaging is used to probe both jets. The self-similarity of statistical moments up to the sixth order (beyond the literature-established second order) is documented in both cases. This is achieved using a novel self-similar normalization method that facilitated a degree of statistical convergence that is typically limited to continuous, point-based measurements. This demonstrates that image-based measurements of a limited number of samples can be used for self-similar scalar mixing studies. Both jets exhibit the same radial trends of these moments, demonstrating that advanced atmospheric self-similarity can be applied in the analysis of under-expanded jets. Self-similar histograms away from the centerline are shown to be the combination of two distributions. The first is attributed to turbulent mixing. The second, a symmetric Poisson-type distribution centered on zero mass fraction, progressively becomes the dominant and eventually sole distribution at the edge of the jet. This distribution is attributed to shot noise-affected pure air measurements, rather than a diffusive superlayer at the jet boundary. This conclusion is reached after a rigorous measurement uncertainty analysis and inspection of pure air data collected with each hydrogen data set. A threshold based upon the measurement noise analysis is used to separate the turbulent and pure air data, and thus estimate intermittency. Beta-distributions (four parameters) are used to accurately represent the turbulent distribution moments. This combination of measured intermittency and four-parameter beta-distributions constitutes a new, simple approach to model scalar mixing. Comparisons between global moments from the data and moments calculated using the proposed model show excellent
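
    A minimal sketch of the two-part PDF model described above, using synthetic data: samples are split at an assumed noise-based threshold, the fraction above it gives the intermittency, and a four-parameter beta distribution is fitted to the turbulent part. The numbers and the threshold rule are illustrative only, not the paper's measurements.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    # Synthetic radial samples: turbulent mixture fraction plus noise-dominated pure-air readings.
    turbulent = rng.beta(2.0, 5.0, 4000) * 0.6            # stand-in for mixed-fluid mass fraction
    pure_air = rng.normal(0.0, 0.01, 1000)                 # shot-noise-broadened zero readings
    samples = np.concatenate([turbulent, pure_air])

    threshold = 3 * 0.01   # assumed noise-based threshold separating air from turbulent fluid
    turb = samples[samples > threshold]
    gamma = turb.size / samples.size                       # measured intermittency

    # Four-parameter beta fit (shapes a, b plus location and scale) to the turbulent part.
    a, b, loc, scale = stats.beta.fit(turb)

    def model_pdf(x):
        """Intermittency-weighted mixture: beta for turbulent fluid, Gaussian for pure-air noise."""
        return gamma * stats.beta.pdf(x, a, b, loc, scale) + (1 - gamma) * stats.norm.pdf(x, 0.0, 0.01)

    print("intermittency:", round(gamma, 3), " beta params:", (round(a, 2), round(b, 2)))
    ```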

  11. Comparisons of Four Methods for Estimating a Dynamic Factor Model

    ERIC Educational Resources Information Center

    Zhang, Zhiyong; Hamaker, Ellen L.; Nesselroade, John R.

    2008-01-01

    Four methods for estimating a dynamic factor model, the direct autoregressive factor score (DAFS) model, are evaluated and compared. The first method estimates the DAFS model using a Kalman filter algorithm based on its state space model representation. The second one employs the maximum likelihood estimation method based on the construction of a…

  12. Statistical methods of estimating mining costs

    USGS Publications Warehouse

    Long, K.R.

    2011-01-01

    Until it was defunded in 1995, the U.S. Bureau of Mines maintained a Cost Estimating System (CES) for prefeasibility-type economic evaluations of mineral deposits and estimating costs at producing and non-producing mines. This system had a significant role in mineral resource assessments to estimate costs of developing and operating known mineral deposits and predicted undiscovered deposits. For legal reasons, the U.S. Geological Survey cannot update and maintain CES. Instead, statistical tools are under development to estimate mining costs from basic properties of mineral deposits such as tonnage, grade, mineralogy, depth, strip ratio, distance from infrastructure, rock strength, and work index. The first step was to reestimate "Taylor's Rule" which relates operating rate to available ore tonnage. The second step was to estimate statistical models of capital and operating costs for open pit porphyry copper mines with flotation concentrators. For a sample of 27 proposed porphyry copper projects, capital costs can be estimated from three variables: mineral processing rate, strip ratio, and distance from nearest railroad before mine construction began. Of all the variables tested, operating costs were found to be significantly correlated only with strip ratio.
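
    As a small illustration of the first step mentioned above, the sketch below re-estimates a Taylor's-Rule-type power law (operating rate proportional to tonnage raised to a power) by ordinary least squares on log-transformed data; the tonnage/rate values are synthetic placeholders, not USGS data.

    ```python
    import numpy as np

    # Re-estimating a Taylor's-Rule-type power law, rate = a * tonnage^b, by ordinary
    # least squares on log-transformed data. Classically the exponent b is near 0.75.
    rng = np.random.default_rng(11)
    tonnage = 10 ** rng.uniform(5, 9, 60)                               # ore tonnage (t), synthetic
    rate = 0.015 * tonnage ** 0.75 * np.exp(rng.normal(0, 0.3, 60))     # t/day, lognormal scatter

    b, log_a = np.polyfit(np.log(tonnage), np.log(rate), 1)             # slope, intercept
    print(f"rate ~ {np.exp(log_a):.4f} * tonnage^{b:.2f}")
    ```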

  13. A Study of Variance Estimation Methods. Working Paper Series.

    ERIC Educational Resources Information Center

    Zhang, Fan; Weng, Stanley; Salvucci, Sameena; Hu, Ming-xiu

    This working paper contains reports of five studies of variance estimation methods. The first, An Empirical Study of Poststratified Estimator, by Fan Zhang uses data from the National Household Education Survey to illustrate use of poststratified estimation. The second paper, BRR Variance Estimation Using BPLX Hadamard Procedure, by Stanley Weng…

  14. Estimation Methods for One-Parameter Testlet Models

    ERIC Educational Resources Information Center

    Jiao, Hong; Wang, Shudong; He, Wei

    2013-01-01

    This study demonstrated the equivalence between the Rasch testlet model and the three-level one-parameter testlet model and explored the Markov Chain Monte Carlo (MCMC) method for model parameter estimation in WINBUGS. The estimation accuracy from the MCMC method was compared with those from the marginalized maximum likelihood estimation (MMLE)…

  15. A Method to Estimate the Probability that any Individual Cloud-to-Ground Lightning Stroke was Within any Radius of any Point

    NASA Technical Reports Server (NTRS)

    Huddleston, Lisa L.; Roeder, William P.; Merceret, Francis J.

    2011-01-01

    A new technique has been developed to estimate the probability that a nearby cloud-to-ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force Station. Future applications could include forensic meteorology.
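
    A minimal sketch of the underlying computation, under assumptions: the stroke location error is modeled as a bivariate Gaussian with a hypothetical covariance, and the probability of falling within a given radius of an off-center point is estimated by Monte Carlo sampling rather than the closed-form integration adapted from the debris-collision method.

    ```python
    import numpy as np

    def prob_within_radius(mu, cov, point, radius, n=200_000, seed=0):
        """Monte Carlo estimate of P(stroke within `radius` of `point`), given the
        bivariate Gaussian location-error density (mean mu, covariance cov)."""
        rng = np.random.default_rng(seed)
        strokes = rng.multivariate_normal(mu, cov, size=n)
        d = np.hypot(strokes[:, 0] - point[0], strokes[:, 1] - point[1])
        return float(np.mean(d <= radius))

    # Hypothetical error ellipse (km^2 covariance) and a point of interest not at its center.
    mu = np.array([0.0, 0.0])
    cov = np.array([[0.25, 0.10], [0.10, 0.64]])
    print(prob_within_radius(mu, cov, point=(0.8, -0.3), radius=1.0))
    ```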

  16. A Method to Estimate the Probability That Any Individual Cloud-to-Ground Lightning Stroke Was Within Any Radius of Any Point

    NASA Technical Reports Server (NTRS)

    Huddleston, Lisa L.; Roeder, William P.; Merceret, Francis J.

    2010-01-01

    A new technique has been developed to estimate the probability that a nearby cloud-to-ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force Station.

  17. Development of advanced acreage estimation methods

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr. (Principal Investigator)

    1980-01-01

    The use of the AMOEBA clustering/classification algorithm was investigated as a basis for both a color display generation technique and maximum likelihood proportion estimation procedure. An approach to analyzing large data reduction systems was formulated and an exploratory empirical study of spatial correlation in LANDSAT data was also carried out. Topics addressed include: (1) development of multiimage color images; (2) spectral spatial classification algorithm development; (3) spatial correlation studies; and (4) evaluation of data systems.

  18. Estimation of vegetation cover at subpixel resolution using LANDSAT data

    NASA Technical Reports Server (NTRS)

    Jasinski, Michael F.; Eagleson, Peter S.

    1986-01-01

    The present report summarizes the various approaches relevant to estimating canopy cover at subpixel resolution. The approaches are based on physical models of radiative transfer in non-homogeneous canopies and on empirical methods. The effects of vegetation shadows and topography are examined. Simple versions of the model are tested, using the Taos, New Mexico Study Area database. Emphasis has been placed on using relatively simple models requiring only one or two bands. Although most methods require some degree of ground truth, a two-band method is investigated whereby the percent cover can be estimated without ground truth by examining the limits of the data space. Future work is proposed which will incorporate additional surface parameters into the canopy cover algorithm, such as topography, leaf area, or shadows. The method involves deriving a probability density function for the percent canopy cover based on the joint probability density function of the observed radiances.
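
    The sketch below is a simplified stand-in for the two-band, no-ground-truth idea mentioned above: fractional cover is obtained by linear scaling of a vegetation index between bare-soil and full-canopy endmembers taken from the limits (percentiles) of the data space. The endmember rule, band values, and synthetic scene are assumptions, not the report's model.

    ```python
    import numpy as np

    def fractional_cover(red, nir, low_pct=2, high_pct=98):
        """Per-pixel fractional canopy cover from two bands by linear scaling of an index,
        with bare-soil and full-canopy endmembers taken from the limits of the data space
        (percentiles), so no ground truth is required. Simplified stand-in only."""
        ndvi = (nir - red) / (nir + red + 1e-9)
        soil_end = np.percentile(ndvi, low_pct)      # assumed bare-soil limit of the data space
        veg_end = np.percentile(ndvi, high_pct)      # assumed full-canopy limit
        f = (ndvi - soil_end) / (veg_end - soil_end)
        return np.clip(f, 0.0, 1.0)

    # Synthetic scene: mixtures of soil (red 0.25, nir 0.30) and canopy (red 0.05, nir 0.45).
    rng = np.random.default_rng(10)
    truth = rng.uniform(0, 1, 10_000)
    red = 0.25 * (1 - truth) + 0.05 * truth + rng.normal(0, 0.01, truth.size)
    nir = 0.30 * (1 - truth) + 0.45 * truth + rng.normal(0, 0.01, truth.size)
    est = fractional_cover(red, nir)
    print("mean absolute error vs. synthetic truth:", np.mean(np.abs(est - truth)))
    ```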

  19. Bin mode estimation methods for Compton camera imaging

    NASA Astrophysics Data System (ADS)

    Ikeda, S.; Odaka, H.; Uemura, M.; Takahashi, T.; Watanabe, S.; Takeda, S.

    2014-10-01

    We study the image reconstruction problem of a Compton camera which consists of semiconductor detectors. The image reconstruction is formulated as a statistical estimation problem. We employ a bin-mode estimation (BME) and extend an existing framework to a Compton camera with multiple scatterers and absorbers. Two estimation algorithms are proposed: an accelerated EM algorithm for the maximum likelihood estimation (MLE) and a modified EM algorithm for the maximum a posteriori (MAP) estimation. Numerical simulations demonstrate the potential of the proposed methods.
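
    For readers unfamiliar with the EM iteration referred to above, here is a generic MLEM (EM for Poisson-binned data) sketch with a random, hypothetical system matrix; the paper's accelerated EM and MAP variants for multiple scatterers and absorbers are not reproduced.

    ```python
    import numpy as np

    def mlem(system_matrix, counts, n_iter=50):
        """Standard EM / MLEM update for Poisson-binned data:
        lambda_j <- lambda_j / sum_i A_ij * sum_i A_ij * y_i / (A lambda)_i."""
        A = np.asarray(system_matrix, dtype=float)   # shape (n_bins, n_pixels)
        y = np.asarray(counts, dtype=float)          # measured counts per bin
        lam = np.full(A.shape[1], y.sum() / A.shape[1])
        sens = A.sum(axis=0)                         # sensitivity of each image pixel
        for _ in range(n_iter):
            proj = A @ lam
            ratio = np.divide(y, proj, out=np.zeros_like(y), where=proj > 0)
            lam *= (A.T @ ratio) / np.maximum(sens, 1e-12)
        return lam

    # Toy example with a random (hypothetical) response matrix and simulated counts.
    rng = np.random.default_rng(2)
    A = rng.random((64, 16))
    truth = rng.random(16) * 10
    y = rng.poisson(A @ truth)
    print(mlem(A, y)[:5])
    ```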

  20. An investigation of new methods for estimating parameter sensitivities

    NASA Technical Reports Server (NTRS)

    Beltracchi, Todd J.; Gabriele, Gary A.

    1989-01-01

    The method proposed for estimating sensitivity derivatives is based on the Recursive Quadratic Programming (RQP) method, used in conjunction with a differencing formula to produce estimates of the sensitivities. This method is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFS method employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate the parameter sensitivity.

  1. A quasi-Newton approach to optimization problems with probability density constraints. [problem solving in mathematical programming

    NASA Technical Reports Server (NTRS)

    Tapia, R. A.; Vanrooy, D. L.

    1976-01-01

    A quasi-Newton method is presented for minimizing a nonlinear function while constraining the variables to be nonnegative and sum to one. The nonnegativity constraints were eliminated by working with the squares of the variables and the resulting problem was solved using Tapia's general theory of quasi-Newton methods for constrained optimization. A user's guide for a computer program implementing this algorithm is provided.
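
    A sketch of the variable-squaring idea under simplifying assumptions: writing x_i = s_i^2 removes the nonnegativity constraints, and here the sum-to-one constraint is folded in by normalizing, which differs from the paper's treatment via Tapia's constrained quasi-Newton theory; SciPy's BFGS stands in for the specific quasi-Newton scheme.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def minimize_on_simplex(f, n, s0=None):
        """Minimize f(x) over the probability simplex by squaring variables
        (x = s^2 / sum s^2) and running an unconstrained quasi-Newton (BFGS) solver on s.
        The normalization step is a simplification of the cited approach."""
        s0 = np.ones(n) if s0 is None else s0
        def g(s):
            x = s**2 / np.sum(s**2)
            return f(x)
        res = minimize(g, s0, method="BFGS")
        s = res.x
        return s**2 / np.sum(s**2)

    # Example: fit mixture weights x (nonnegative, summing to one) to a least-squares target.
    target = np.array([0.2, 0.5, 0.3])
    f = lambda x: np.sum((x - target) ** 2)
    print(minimize_on_simplex(f, 3))
    ```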

  2. The Stability of Four Methods for Estimating Item Bias.

    ERIC Educational Resources Information Center

    Bezruczko, Nikolaus; And Others

    The stability of bias estimates from J. Schueneman's chi-square method, the transformed Delta method, Rasch's one-parameter residual analysis, and the Mantel-Haenszel procedure was compared across small and large samples for a data set of 30,000 cases. Bias values for 30 samples were estimated for each method, and means and variances of item…

  3. Morphological method for estimation of simian virus 40 infectious titer.

    PubMed

    Landau, S M; Nosach, L N; Pavlova, G V

    1982-01-01

    The cytomorphologic method previously reported for titration of adenoviruses has been employed for estimating the infectious titer of simian virus 40 (SV 40). Infected cells forming intranuclear inclusions were determined. The method examined possesses a number of advantages over virus titration by plaque assay and cytopathic effect. The virus titer estimated by the method of inclusion counting and expressed as IFU/ml (Inclusion Forming Units/ml) corresponds to that estimated by plaque count and expressed as PFU/ml.

  4. Advancing Methods for Estimating Cropland Area

    NASA Astrophysics Data System (ADS)

    King, L.; Hansen, M.; Stehman, S. V.; Adusei, B.; Potapov, P.; Krylov, A.

    2014-12-01

    Measurement and monitoring of complex and dynamic agricultural land systems is essential with increasing demands on food, feed, fuel and fiber production from growing human populations, rising consumption per capita, the expansion of crop oils in industrial products, and the encouragement of crop biofuels as an alternative energy source. Soybean is an important global commodity crop, and the area of land cultivated for soybean has risen dramatically over the past 60 years, occupying more than 5% of all global croplands (Monfreda et al 2008). Escalating demands for soy over the next twenty years are anticipated to be met by an increase of 1.5 times the current global production, resulting in expansion of soybean cultivated land area by nearly the same amount (Masuda and Goldsmith 2009). Soybean cropland area is estimated with the use of a sampling strategy and supervised non-linear hierarchical decision tree classification for the United States, Argentina and Brazil as the prototype in development of a new methodology for crop-specific agricultural area estimation. Comparison of our 30 m Landsat soy classification with the National Agricultural Statistics Service Cropland Data Layer (CDL) soy map shows a strong agreement in the United States for 2011, 2012, and 2013. RapidEye 5 m imagery was also classified for soy presence and absence and used at the field scale for validation and accuracy assessment of the Landsat soy maps, showing a nearly 1:1 relationship in the United States, Argentina and Brazil. The strong correlation found between all products suggests high accuracy and precision of the prototype and has proven to be a successful and efficient way to assess soybean cultivated area at the sub-national and national scale for the United States with great potential for application elsewhere.

  5. A Monte Carlo method for variance estimation for estimators based on induced smoothing

    PubMed Central

    Jin, Zhezhen; Shao, Yongzhao; Ying, Zhiliang

    2015-01-01

    An important issue in statistical inference for semiparametric models is how to provide reliable and consistent variance estimation. Brown and Wang (2005. Standard errors and covariance matrices for smoothed rank estimators. Biometrika 92, 732–746) proposed a variance estimation procedure based on an induced smoothing for non-smooth estimating functions. Herein a Monte Carlo version is developed that does not require any explicit form for the estimating function itself, as long as numerical evaluation can be carried out. A general convergence theory is established, showing that any one-step iteration leads to a consistent variance estimator and continuation of the iterations converges at an exponential rate. The method is demonstrated through the Buckley–James estimator and the weighted log-rank estimators for censored linear regression, and rank estimation for multiple event times data. PMID:24812418

  6. Evaluation of Two Methods to Estimate and Monitor Bird Populations

    PubMed Central

    Taylor, Sandra L.; Pollard, Katherine S.

    2008-01-01

    Background Effective management depends upon accurately estimating trends in abundance of bird populations over time, and in some cases estimating abundance. Two population estimation methods, double observer (DO) and double sampling (DS), have been advocated for avian population studies and the relative merits and short-comings of these methods remain an area of debate. Methodology/Principal Findings We used simulations to evaluate the performances of these two population estimation methods under a range of realistic scenarios. For three hypothetical populations with different levels of clustering, we generated DO and DS population size estimates for a range of detection probabilities and survey proportions. Population estimates for both methods were centered on the true population size for all levels of population clustering and survey proportions when detection probabilities were greater than 20%. The DO method underestimated the population at detection probabilities less than 30% whereas the DS method remained essentially unbiased. The coverage probability of 95% confidence intervals for population estimates was slightly less than the nominal level for the DS method but was substantially below the nominal level for the DO method at high detection probabilities. Differences in observer detection probabilities did not affect the accuracy and precision of population estimates of the DO method. Population estimates for the DS method remained unbiased as the proportion of units intensively surveyed changed, but the variance of the estimates decreased with increasing proportion intensively surveyed. Conclusions/Significance The DO and DS methods can be applied in many different settings and our evaluations provide important information on the performance of these two methods that can assist researchers in selecting the method most appropriate for their particular needs. PMID:18728775

  7. Validation of the probability density function for the calculated radiant power of synchrotron radiation according to the Schwinger formalism

    NASA Astrophysics Data System (ADS)

    Klein, Roman

    2016-06-01

    Electron storage rings with appropriate design are primary source standards, the spectral radiant intensity of which can be calculated from measured parameters using the Schwinger equation. PTB uses the electron storage rings BESSY II and MLS for source-based radiometry in the spectral range from the near-infrared to the x-ray region. The uncertainty of the calculated radiant intensity depends on the uncertainty of the measured parameters used for the calculation. Up to now, the procedure described in the Guide to the Expression of Uncertainty in Measurement (GUM), i.e. the law of propagation of uncertainty assuming a linear measurement model, was used to determine the combined uncertainty of the calculated spectral intensity, as well as the coverage interval. Now it has been tested with a Monte Carlo simulation, according to Supplement 1 to the GUM, whether this procedure is valid for the rather complicated calculation by means of the Schwinger formalism and for different probability distributions of the input parameters. It was found that for typical uncertainties of the input parameters both methods yield similar results.
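
    The comparison described above can be mimicked on a toy model: the sketch below propagates input uncertainties through a hypothetical nonlinear function, once with the GUM law of propagation of uncertainty (numerical first derivatives) and once with a GUM Supplement 1 style Monte Carlo run. The function and the input values are placeholders, not the Schwinger calculation.

    ```python
    import numpy as np

    # Hypothetical nonlinear measurement model y = f(E, B, I) standing in for the
    # Schwinger-based radiant-intensity calculation (not the real formula).
    def f(E, B, I):
        return I * E**2 * np.sqrt(B)

    # Input estimates and standard uncertainties (hypothetical values).
    mu = np.array([1.7, 1.3, 0.25])      # E (GeV), B (T), I (A)
    u = np.array([0.001, 0.002, 0.0005])

    # GUM law of propagation: u_y^2 = sum_i (df/dx_i)^2 u_i^2, via central differences.
    eps = 1e-6
    grads = np.array([(f(*(mu + eps * e)) - f(*(mu - eps * e))) / (2 * eps)
                      for e in np.eye(3)])
    u_lpu = np.sqrt(np.sum((grads * u) ** 2))

    # GUM Supplement 1: Monte Carlo propagation of the input distributions.
    rng = np.random.default_rng(3)
    samples = rng.normal(mu, u, size=(200_000, 3))
    y = f(samples[:, 0], samples[:, 1], samples[:, 2])
    print("LPU:", u_lpu, " MC:", y.std(), " 95% interval:", np.percentile(y, [2.5, 97.5]))
    ```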

  8. Robust time and frequency domain estimation methods in adaptive control

    NASA Technical Reports Server (NTRS)

    Lamaire, Richard Orville

    1987-01-01

    A robust identification method was developed for use in an adaptive control system. The type of estimator is called the robust estimator, since it is robust to the effects of both unmodeled dynamics and an unmeasurable disturbance. The development of the robust estimator was motivated by a need to provide guarantees in the identification part of an adaptive controller. To enable the design of a robust control system, a nominal model as well as a frequency-domain bounding function on the modeling uncertainty associated with this nominal model must be provided. Two estimation methods are presented for finding parameter estimates, and, hence, a nominal model. One of these methods is based on the well developed field of time-domain parameter estimation. In a second method of finding parameter estimates, a type of weighted least-squares fitting to a frequency-domain estimated model is used. The frequency-domain estimator is shown to perform better, in general, than the time-domain parameter estimator. In addition, a methodology for finding a frequency-domain bounding function on the disturbance is used to compute a frequency-domain bounding function on the additive modeling error due to the effects of the disturbance and the use of finite-length data. The performance of the robust estimator in both open-loop and closed-loop situations is examined through the use of simulations.

  9. System and method for motor parameter estimation

    SciTech Connect

    Luhrs, Bin; Yan, Ting

    2014-03-18

    A system and method for determining unknown values of certain motor parameters includes a motor input device connectable to an electric motor having associated therewith values for known motor parameters and an unknown value of at least one motor parameter. The motor input device includes a processing unit that receives a first input from the electric motor comprising values for the known motor parameters for the electric motor and receives a second input comprising motor data on a plurality of reference motors, including values for motor parameters corresponding to the known motor parameters of the electric motor and values for motor parameters corresponding to the at least one unknown motor parameter value of the electric motor. The processor determines the unknown value of the at least one motor parameter from the first input and the second input and determines a motor management strategy for the electric motor based thereon.

  10. Adjoint method for estimating Jiles-Atherton hysteresis model parameters

    NASA Astrophysics Data System (ADS)

    Zaman, Mohammad Asif; Hansen, Paul C.; Neustock, Lars T.; Padhy, Punnag; Hesselink, Lambertus

    2016-09-01

    A computationally efficient method for identifying the parameters of the Jiles-Atherton hysteresis model is presented. Adjoint analysis is used in conjunction with an accelerated gradient descent optimization algorithm. The proposed method is used to estimate the Jiles-Atherton model parameters of two different materials. The obtained results are found to be in good agreement with the reported values. By comparing with existing methods of model parameter estimation, the proposed method is found to be computationally efficient and fast converging.

  11. Carbon footprint: current methods of estimation.

    PubMed

    Pandey, Divya; Agrawal, Madhoolika; Pandey, Jai Shanker

    2011-07-01

    Increasing greenhouse gas concentrations in the atmosphere are perturbing the environment and causing grievous global warming and associated consequences. Following the rule that only the measurable is manageable, mensuration of the greenhouse gas intensiveness of different products, bodies, and processes is going on worldwide, expressed as their carbon footprints. The methodologies for carbon footprint calculations are still evolving and it is emerging as an important tool for greenhouse gas management. The concept of carbon footprinting has permeated and is being commercialized in all areas of life and the economy, but there is little coherence in definitions and calculations of carbon footprints among the studies. There are disagreements in the selection of gases and the order of emissions to be covered in footprint calculations. Standards of greenhouse gas accounting are the common resources used in footprint calculations, although there is no mandatory provision of footprint verification. Carbon footprinting is intended to be a tool to guide the relevant emission cuts and verifications; its standardization at the international level is therefore necessary. The present review describes the prevailing carbon footprinting methods and raises the related issues. PMID:20848311

  12. Estimate octane numbers using an enhanced method

    SciTech Connect

    Twu, C.H.; Coon, J.E.

    1997-03-01

    An improved model, based on the Twu-Coon method, is not only internally consistent, but also retains the same level of accuracy as the previous model in predicting octanes of gasoline blends. The enhanced model applies the same binary interaction parameters to components in each gasoline cut and their blends. Thus, the enhanced model can blend gasoline cuts in any order, in any combination or from any splitting of gasoline cuts and still yield the identical value of octane number for blending the same number of gasoline cuts. Setting binary interaction parameters to zero for identical gasoline cuts during the blending process is not required. The new model changes the old model's methodology so that the same binary interaction parameters can be applied between components inside a gasoline cut as are applied to the same components between gasoline cuts. The enhanced model is more consistent in methodology than the original model, but it has equal accuracy for predicting octane numbers of gasoline blends, and it has the same number of binary interaction parameters. The paper discusses background, enhancement of the Twu-Coon interaction model, and three examples: blend of 2 identical gasoline cuts, blend of 3 gasoline cuts, and blend of the same 3 gasoline cuts in a different order.

  13. Carbon footprint: current methods of estimation.

    PubMed

    Pandey, Divya; Agrawal, Madhoolika; Pandey, Jai Shanker

    2011-07-01

    Increasing greenhouse gas concentrations in the atmosphere are perturbing the environment and causing grievous global warming and associated consequences. Following the rule that only the measurable is manageable, mensuration of the greenhouse gas intensiveness of different products, bodies, and processes is going on worldwide, expressed as their carbon footprints. The methodologies for carbon footprint calculations are still evolving and it is emerging as an important tool for greenhouse gas management. The concept of carbon footprinting has permeated and is being commercialized in all areas of life and the economy, but there is little coherence in definitions and calculations of carbon footprints among the studies. There are disagreements in the selection of gases and the order of emissions to be covered in footprint calculations. Standards of greenhouse gas accounting are the common resources used in footprint calculations, although there is no mandatory provision of footprint verification. Carbon footprinting is intended to be a tool to guide the relevant emission cuts and verifications; its standardization at the international level is therefore necessary. The present review describes the prevailing carbon footprinting methods and raises the related issues.

  14. Dental age estimation in Egyptian children, comparison between two methods.

    PubMed

    El-Bakary, Amal A; Hammad, Shaza M; Mohammed, Fatma

    2010-10-01

    The need to estimate age of living individuals is becoming increasingly more important in both forensic science and clinical dentistry. The study of the morphological parameters of teeth on dental radiographs of adult humans is more reliable than most other methods for age estimation. Willems and Cameriere methods are newly presented methods. The aim of this work was to evaluate the applicability of using these methods for Egyptian children. Digitalized panoramas taken from 286 Egyptian children (134 boys, 152 girls) with age range from 5 to 16 years were analyzed. The seven left permanent mandibular teeth were evaluated using the two methods. The results of this research showed that dental age estimated by both methods was significantly correlated to real age. However, Willems method was slightly more accurate (98.62%) compared to Cameriere method (98.02%). Therefore, both methods can be recommended for practical application in clinical dentistry and forensic procedures on the Egyptian population.

  15. Estimating tree height-diameter models with the Bayesian method.

    PubMed

    Zhang, Xiongqing; Duan, Aiguo; Zhang, Jianguo; Xiang, Congwei

    2014-01-01

    Six candidate height-diameter models were used to analyze the height-diameter relationships. The common methods for estimating height-diameter models have taken the classical (frequentist) approach based on the frequency interpretation of probability, for example, the nonlinear least squares method (NLS) and the maximum likelihood method (ML). The Bayesian method has a distinct advantage over the classical methods in that the parameters to be estimated are regarded as random variables. In this study, the classical and Bayesian methods were used to estimate the six height-diameter models. Both the classical and Bayesian methods showed that the Weibull model was the "best" model for data1. In addition, based on the Weibull model, data2 was used to compare the Bayesian method with informative priors against the Bayesian method with uninformative priors and the classical method. The results showed that the improvement in prediction accuracy with the Bayesian method led to narrower confidence bands for the predicted values than for the classical method, and the credible bands of the parameters with informative priors were also narrower than those with uninformative priors or the classical method. The estimated posterior distributions for the parameters can be set as new priors when estimating the parameters using data2.
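
    A minimal Bayesian sketch in the spirit of the study: synthetic height-diameter data are generated from a Weibull-type curve and the parameters are recovered with a plain random-walk Metropolis sampler under weakly informative priors. The model form, priors, and data are assumptions for illustration, not the paper's data1/data2 analysis.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Synthetic tree data from a Weibull-type height-diameter curve H = 1.3 + a(1 - exp(-b D))^c.
    D = rng.uniform(5, 40, 200)
    true = dict(a=25.0, b=0.07, c=1.1, sigma=1.2)
    H = 1.3 + true["a"] * (1 - np.exp(-true["b"] * D)) ** true["c"] + rng.normal(0, true["sigma"], D.size)

    def log_post(theta):
        """Gaussian likelihood plus weakly informative normal priors (assumed, not from the paper)."""
        a, b, c, log_sigma = theta
        if a <= 0 or b <= 0 or c <= 0:
            return -np.inf
        sigma = np.exp(log_sigma)
        mu = 1.3 + a * (1 - np.exp(-b * D)) ** c
        lp = -0.5 * ((a - 20) / 20) ** 2 - 0.5 * ((b - 0.1) / 0.1) ** 2 - 0.5 * ((c - 1) / 1) ** 2
        ll = -D.size * np.log(sigma) - 0.5 * np.sum((H - mu) ** 2) / sigma**2
        return lp + ll

    # Random-walk Metropolis sampler (a minimal stand-in for the paper's Bayesian machinery).
    theta = np.array([20.0, 0.1, 1.0, 0.0])
    step = np.array([0.5, 0.005, 0.05, 0.05])
    chain, lp = [], log_post(theta)
    for _ in range(20000):
        prop = theta + rng.normal(0, step)
        lpp = log_post(prop)
        if np.log(rng.random()) < lpp - lp:
            theta, lp = prop, lpp
        chain.append(theta.copy())
    post = np.array(chain[5000:])
    print("posterior means a, b, c, sigma:", post[:, :3].mean(axis=0), np.exp(post[:, 3]).mean())
    ```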

  16. Sedimentary phosphate method for estimating paleosalinities: a paleontological assumption.

    PubMed

    Guber, A L

    1969-11-01

    Paleosalinity values in certain rocks determined by the sedimentary phosphate method differ from salinity estimates based upon contained fossil assemblages, geochemical methods, and existing stratigraphic controls. Some anomalous values are related to the abundance of fossil organisms known to be concentrators of calcium phosphate. Because of the abundance and diversity of organisms which might introduce significant errors into paleosalinity estimates, the sedimentary phosphate method seemingly is of limited applicability.

  17. Stochastic BER estimation for coherent QPSK transmission systems with digital carrier phase recovery.

    PubMed

    Zhang, Fan; Gao, Yan; Luo, Yazhi; Chen, Zhangyuan; Xu, Anshi

    2010-04-26

    We propose a stochastic bit error ratio estimation approach based on a statistical analysis of the retrieved signal phase for coherent optical QPSK systems with digital carrier phase recovery. A family of generalized exponential functions is applied to fit the probability density function of the signal samples. The method provides reasonable performance estimation in the presence of both linear and nonlinear transmission impairments while greatly reducing the computational cost compared to Monte Carlo simulation.
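
    A sketch of the general idea under assumptions: a generalized-normal (exponential-power) density, one member of a generalized exponential family, is fitted to synthetic retrieved-phase samples with scipy.stats.gennorm, and the tail mass beyond the QPSK decision boundary gives an error-ratio estimate. The threshold, Gray-coding assumption, and data are illustrative, not the paper's fitting family or system model.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    # Synthetic retrieved phase error (rad) around a QPSK constellation point.
    phase_err = rng.normal(0.0, 0.12, 50_000)

    # Fit a generalized-normal (exponential-power) density; beta = 2 recovers a Gaussian.
    beta, loc, scale = stats.gennorm.fit(phase_err)

    # A QPSK symbol error occurs (approximately) when the phase error exceeds pi/4 in magnitude.
    p_tail = (stats.gennorm.sf(np.pi / 4, beta, loc, scale)
              + stats.gennorm.cdf(-np.pi / 4, beta, loc, scale))
    ber_estimate = p_tail / 2   # assumes Gray coding: one bit flips per most-likely symbol error
    print("fitted beta:", round(beta, 2), " estimated BER:", ber_estimate)
    ```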

  18. A source number estimation method for single optical fiber sensor

    NASA Astrophysics Data System (ADS)

    Hu, Junpeng; Huang, Zhiping; Su, Shaojing; Zhang, Yimeng; Liu, Chunwu

    2015-10-01

    The single-channel blind source separation (SCBSS) technique is of great significance in many fields, such as optical fiber communication, sensor detection, image processing and so on. Realizing blind source separation (BSS) from the data received by a single optical fiber sensor has a wide range of applications. The performance of many BSS algorithms and signal processing methods is degraded by inaccurate source number estimation. Many excellent algorithms have been proposed to deal with source number estimation in array signal processing with multiple sensors, but they cannot be applied directly to the single-sensor case. This paper presents a source number estimation method for data received by a single optical fiber sensor. Through a delay process, the single-sensor data are converted to a multi-dimensional form and the data covariance matrix is constructed, after which the estimation algorithms used in array signal processing can be utilized. The information theoretic criteria (ITC) based methods, represented by AIC and MDL, and Gerschgorin's disk estimation (GDE) are introduced to estimate the source number of the single optical fiber sensor's received signal. To improve the performance of these estimation methods at low signal-to-noise ratio (SNR), a smoothing process is applied to the data covariance matrix, which reduces the fluctuation and uncertainty of its eigenvalues. Simulation results show that the ITC-based methods cannot estimate the source number effectively under colored noise. The GDE method, although its performance is poor at low SNR, is able to accurately estimate the number of sources with colored noise. The experiments also show that the proposed method can be applied to estimate the source number of single-sensor received data.
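
    A simplified sketch of the pipeline described above: the single-channel record is delay-embedded into a pseudo multichannel matrix, the covariance is formed, and the MDL criterion is applied to its eigenvalues. The paper's smoothing scheme and the GDE criterion are not reproduced, and the test signal is synthetic.

    ```python
    import numpy as np

    def delay_embed(x, m):
        """Build an m-channel pseudo-array from a single-channel record by time delays."""
        n = x.size - m + 1
        return np.stack([x[i:i + n] for i in range(m)])

    def mdl_source_count(R, n_snapshots):
        """Minimum description length (MDL) criterion applied to the covariance eigenvalues."""
        ev = np.sort(np.linalg.eigvalsh(R))[::-1]
        m = ev.size
        scores = []
        for k in range(m):
            tail = ev[k:]
            gm = np.exp(np.mean(np.log(tail)))     # geometric mean of assumed noise eigenvalues
            am = np.mean(tail)                     # arithmetic mean
            scores.append(-n_snapshots * (m - k) * np.log(gm / am)
                          + 0.5 * k * (2 * m - k) * np.log(n_snapshots))
        return int(np.argmin(scores))

    rng = np.random.default_rng(6)
    t = np.arange(4096) / 4096.0
    # Two real sinusoids in white noise; each real tone occupies two signal eigenvalues,
    # so the expected answer from this toy example is 4.
    x = np.sin(2 * np.pi * 200 * t) + 0.7 * np.sin(2 * np.pi * 340 * t) + 0.3 * rng.normal(size=t.size)

    Y = delay_embed(x, m=12)
    R = Y @ Y.T / Y.shape[1]
    print("estimated signal subspace dimension:", mdl_source_count(R, Y.shape[1]))
    ```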

  19. A phase match based frequency estimation method for sinusoidal signals

    NASA Astrophysics Data System (ADS)

    Shen, Yan-Lin; Tu, Ya-Qing; Chen, Lin-Jun; Shen, Ting-Ao

    2015-04-01

    Accurate frequency estimation affects the ranging precision of linear frequency modulated continuous wave (LFMCW) radars significantly. To improve the ranging precision of LFMCW radars, a phase match based frequency estimation method is proposed. To obtain the frequency estimate, the linear prediction property, autocorrelation, and cross-correlation of sinusoidal signals are utilized. The analysis of computational complexity shows that the computational load of the proposed method is smaller than those of two-stage autocorrelation (TSA) and maximum likelihood. Simulations and field experiments are performed to validate the proposed method, and the results demonstrate that it achieves better frequency estimation precision than the Pisarenko harmonic decomposition, modified covariance, and TSA methods, which contributes to improving the precision of LFMCW radars effectively.

  20. Comparisons of Four Methods for Evapotranspiration Estimates in Jordan

    NASA Astrophysics Data System (ADS)

    Zhang, H.; Gorelick, S.; Yoon, J.

    2014-12-01

    We compared evapotranspiration (ET) estimates in Jordan calculated by four theoretically-different methods. The first method was the FAO Single Crop Coefficient method. Our calculation took into account 20 dominant crop species in Jordan, utilized the global Climate Forecast System Reanalysis (CFSR) data set, and generated spatially heterogeneous crop coefficients. The second approach was the Surface Energy Balance Algorithms for Land (SEBAL) method. It was used with Landsat TM/ETM+ images to calculate instantaneous ET at the moment of satellite overpass, and the results of multiple images were combined to derive seasonal and annual ET estimates. The third method was based on the 1-km land surface ET product from MODIS, which was calculated using MODIS-observed land cover and photosynthetically active radiation. The fourth method was based on the SWAT model, which combines the Penman-Monteith equation and vegetation growth to estimate daily ET rates at the watershed scale. The results show substantial differences in both magnitude and spatiotemporal patterns of ET estimates across different regions from the four methods. Such differences were particularly evident in the Highlands region, where irrigation plays a critical role in local water balance. Results also suggest that land cover data is a major source of uncertainty in estimating regional ET rates. Although it is difficult to conclude which method was more reliable due to the limited availability of validation data, the results suggest caution in developing and interpreting ET estimates in this arid environment.

  1. SAR imaging via modern 2-D spectral estimation methods.

    PubMed

    DeGraaf, S R

    1998-01-01

    This paper discusses the use of modern 2D spectral estimation algorithms for synthetic aperture radar (SAR) imaging. The motivation for applying power spectrum estimation methods to SAR imaging is to improve resolution, remove sidelobe artifacts, and reduce speckle compared to what is possible with conventional Fourier transform SAR imaging techniques. This paper makes two principal contributions to the field of adaptive SAR imaging. First, it is a comprehensive comparison of 2D spectral estimation methods for SAR imaging. It provides a synopsis of the algorithms available, discusses their relative merits for SAR imaging, and illustrates their performance on simulated and collected SAR imagery. Some of the algorithms presented or their derivations are new, as are some of the insights into or analyses of the algorithms. Second, this work develops multichannel variants of four related algorithms: the minimum variance method (MVM), reduced-rank MVM (RRMVM), adaptive sidelobe reduction (ASR), and space variant apodization (SVA), to estimate both reflectivity intensity and interferometric height from polarimetric displaced-aperture interferometric data. All of these interferometric variants are new. In the interferometric context, adaptive spectral estimation can improve the height estimates through a combination of adaptive nulling and averaging. Examples illustrate that MVM, ASR, and SVA offer significant advantages over Fourier methods for estimating both scattering intensity and interferometric height, and allow empirical comparison of the accuracies of Fourier, MVM, ASR, and SVA interferometric height estimates.
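
    As a pocket-sized illustration of the minimum variance idea (not the paper's 2-D multichannel implementation), the sketch below computes a 1-D Capon/MVM spectrum from overlapping subapertures of a toy signal containing two point scatterers; the diagonal loading level and scene are arbitrary choices.

    ```python
    import numpy as np

    def mvm_spectrum(x, L, freqs):
        """1-D minimum variance (Capon/MVM) spectrum, P(f) = 1 / (a(f)^H R^-1 a(f)),
        with R estimated from overlapping length-L subapertures of x."""
        N = x.size
        snaps = np.stack([x[i:i + L] for i in range(N - L + 1)])
        R = snaps.T @ snaps.conj() / snaps.shape[0]
        Rinv = np.linalg.inv(R + 1e-6 * np.trace(R).real / L * np.eye(L))  # diagonal loading
        spec = np.empty(len(freqs))
        for k, f in enumerate(freqs):
            a = np.exp(2j * np.pi * f * np.arange(L))
            spec[k] = 1.0 / np.real(a.conj() @ Rinv @ a)
        return spec

    # Toy scene: two point scatterers (complex exponentials) plus complex noise.
    rng = np.random.default_rng(8)
    n = np.arange(64)
    x = (np.exp(2j * np.pi * 0.20 * n) + 0.8 * np.exp(2j * np.pi * 0.29 * n)
         + 0.05 * (rng.normal(size=64) + 1j * rng.normal(size=64)))
    freqs = np.linspace(0.0, 0.5, 501)
    spec = mvm_spectrum(x, L=20, freqs=freqs)
    peaks = (spec[1:-1] > spec[:-2]) & (spec[1:-1] > spec[2:]) & (spec[1:-1] > 0.3 * spec.max())
    print("MVM peaks near normalized frequencies:", freqs[1:-1][peaks])
    ```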

  2. Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method

    ERIC Educational Resources Information Center

    Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey

    2013-01-01

    Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…

  3. A Novel Monopulse Angle Estimation Method for Wideband LFM Radars

    PubMed Central

    Zhang, Yi-Xiong; Liu, Qi-Fan; Hong, Ru-Jia; Pan, Ping-Ping; Deng, Zhen-Miao

    2016-01-01

    Traditional monopulse angle estimations are mainly based on phase comparison and amplitude comparison methods, which are commonly adopted in narrowband radars. In modern radar systems, wideband radars are becoming more and more important, while the angle estimation for wideband signals is little studied in previous works. As noise in wideband radars has larger bandwidth than narrowband radars, the challenge lies in the accumulation of energy from the high resolution range profile (HRRP) of monopulse. In wideband radars, linear frequency modulated (LFM) signals are frequently utilized. In this paper, we investigate the monopulse angle estimation problem for wideband LFM signals. To accumulate the energy of the received echo signals from different scatterers of a target, we propose utilizing a cross-correlation operation, which can achieve a good performance in low signal-to-noise ratio (SNR) conditions. In the proposed algorithm, the problem of angle estimation is converted to estimating the frequency of the cross-correlation function (CCF). Experimental results demonstrate the similar performance of the proposed algorithm compared with the traditional amplitude comparison method. It means that the proposed method for angle estimation can be adopted. When adopting the proposed method, future radars may only need wideband signals for both tracking and imaging, which can greatly increase the data rate and strengthen the capability of anti-jamming. More importantly, the estimated angle will not become ambiguous under an arbitrary angle, which can significantly extend the estimated angle range in wideband radars. PMID:27271629

  4. Methods for Estimating Medical Expenditures Attributable to Intimate Partner Violence

    ERIC Educational Resources Information Center

    Brown, Derek S.; Finkelstein, Eric A.; Mercy, James A.

    2008-01-01

    This article compares three methods for estimating the medical cost burden of intimate partner violence against U.S. adult women (18 years and older), 1 year postvictimization. To compute the estimates, prevalence data from the National Violence Against Women Survey are combined with cost data from the Medical Expenditure Panel Survey, the…

  5. A Novel Monopulse Angle Estimation Method for Wideband LFM Radars.

    PubMed

    Zhang, Yi-Xiong; Liu, Qi-Fan; Hong, Ru-Jia; Pan, Ping-Ping; Deng, Zhen-Miao

    2016-01-01

    Traditional monopulse angle estimations are mainly based on phase comparison and amplitude comparison methods, which are commonly adopted in narrowband radars. In modern radar systems, wideband radars are becoming more and more important, while the angle estimation for wideband signals is little studied in previous works. As noise in wideband radars has larger bandwidth than narrowband radars, the challenge lies in the accumulation of energy from the high resolution range profile (HRRP) of monopulse. In wideband radars, linear frequency modulated (LFM) signals are frequently utilized. In this paper, we investigate the monopulse angle estimation problem for wideband LFM signals. To accumulate the energy of the received echo signals from different scatterers of a target, we propose utilizing a cross-correlation operation, which can achieve a good performance in low signal-to-noise ratio (SNR) conditions. In the proposed algorithm, the problem of angle estimation is converted to estimating the frequency of the cross-correlation function (CCF). Experimental results demonstrate the similar performance of the proposed algorithm compared with the traditional amplitude comparison method. It means that the proposed method for angle estimation can be adopted. When adopting the proposed method, future radars may only need wideband signals for both tracking and imaging, which can greatly increase the data rate and strengthen the capability of anti-jamming. More importantly, the estimated angle will not become ambiguous under an arbitrary angle, which can significantly extend the estimated angle range in wideband radars. PMID:27271629

  6. ROBUST MAXIMUM LIKELIHOOD ESTIMATION IN Q-SPACE MRI.

    PubMed

    Landman, B A; Farrell, J A D; Smith, S A; Calabresi, P A; van Zijl, P C M; Prince, J L

    2008-05-14

    Q-space imaging is an emerging diffusion weighted MR imaging technique to estimate molecular diffusion probability density functions (PDF's) without the need to assume a Gaussian distribution. We present a robust M-estimator, Q-space Estimation by Maximizing Rician Likelihood (QEMRL), for diffusion PDF's based on maximum likelihood. PDF's are modeled by constrained Gaussian mixtures. In QEMRL, robust likelihood measures mitigate the impacts of imaging artifacts. In simulation and in vivo human spinal cord, the method improves reliability of estimated PDF's and increases tissue contrast. QEMRL enables more detailed exploration of the PDF properties than prior approaches and may allow acquisitions at higher spatial resolution.

  7. A bootstrap method for estimating uncertainty of water quality trends

    USGS Publications Warehouse

    Hirsch, Robert M.; Archfield, Stacey A.; DeCicco, Laura

    2015-01-01

    Estimation of the direction and magnitude of trends in surface water quality remains a problem of great scientific and practical interest. The Weighted Regressions on Time, Discharge, and Season (WRTDS) method was recently introduced as an exploratory data analysis tool to provide flexible and robust estimates of water quality trends. This paper enhances the WRTDS method through the introduction of the WRTDS Bootstrap Test (WBT), an extension of WRTDS that quantifies the uncertainty in WRTDS-estimates of water quality trends and offers various ways to visualize and communicate these uncertainties. Monte Carlo experiments are applied to estimate the Type I error probabilities for this method. WBT is compared to other water-quality trend-testing methods appropriate for data sets of one to three decades in length with sampling frequencies of 6–24 observations per year. The software to conduct the test is in the EGRETci R-package.

  8. A new FOA estimation method in SAR/GALILEO system

    NASA Astrophysics Data System (ADS)

    Liu, Gang; He, Bing; Li, Jilin

    2007-11-01

    The European Galileo Plan will include the Search and Rescue (SAR) transponder, which will become part of the future MEOSAR (Medium Earth Orbit Search and Rescue) system. The new SAR system can improve localization accuracy by measuring the frequency of arrival (FOA) and time of arrival (TOA) of beacons; FOA estimation is one of the most important parts. In this paper, we aim to find a good FOA algorithm with minimal estimation error, which must be less than 0.1 Hz. We propose a new method, called the Kay algorithm, for the SAR/GALILEO system by comparing several frequency estimation methods with the current methods used in the COSPAS-SARSAT system and by analyzing the distress beacon in terms of signal structure and spectral characteristics. The simulation shows that the Kay method for FOA estimation is better.
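
    Kay's estimator itself is simple to state: a weighted average of the sample-to-sample phase increments of the complex baseband signal. The sketch below implements it on a synthetic beacon tone; the sampling rate, frequency offset, and noise level are made-up values, and the real COSPAS-SARSAT/MEOSAR signal structure is not modeled.

    ```python
    import numpy as np

    def kay_foa(z, fs):
        """Kay's weighted phase-difference frequency estimator for a single complex tone.
        z  : complex baseband samples containing one dominant carrier
        fs : sampling rate (Hz)"""
        N = z.size
        t = np.arange(N - 1)
        w = 1.5 * N / (N**2 - 1) * (1 - ((t - (N / 2 - 1)) / (N / 2)) ** 2)  # weights sum to 1
        dphi = np.angle(z[1:] * np.conj(z[:-1]))      # phase increments between samples
        return fs * np.sum(w * dphi) / (2 * np.pi)

    # Synthetic carrier at a 123.4 Hz offset with additive complex noise (hypothetical values).
    rng = np.random.default_rng(7)
    fs, N, f0 = 4000.0, 2048, 123.4
    n = np.arange(N)
    z = np.exp(2j * np.pi * f0 * n / fs) + 0.05 * (rng.normal(size=N) + 1j * rng.normal(size=N))
    print("estimated FOA:", kay_foa(z, fs), "Hz")
    ```

    At this noise level the printed estimate should land close to the 123.4 Hz offset used to generate the tone.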

  9. Methods for Estimating Uncertainty in Factor Analytic Solutions

    EPA Science Inventory

    The EPA PMF (Environmental Protection Agency positive matrix factorization) version 5.0 and the underlying multilinear engine-executable ME-2 contain three methods for estimating uncertainty in factor analytic models: classical bootstrap (BS), displacement of factor elements (DI...

  10. Evapotranspiration: Mass balance measurements compared with flux estimation methods

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Evapotranspiration (ET) may be measured by mass balance methods and estimated by flux sensing methods. The mass balance methods are typically restricted in terms of the area that can be represented (e.g., surface area of weighing lysimeter (LYS) or equivalent representative area of neutron probe (NP...

  11. An investigation of new methods for estimating parameter sensitivities

    NASA Technical Reports Server (NTRS)

    Beltracchi, Todd J.; Gabriele, Gary A.

    1988-01-01

    Parameter sensitivity is defined as the estimation of changes in the modeling functions and the design variables due to small changes in the fixed parameters of the formulation. Current methods for estimating parameter sensitivities either require difficult-to-obtain second-order information or do not return reliable estimates of the derivatives. Additionally, all the methods assume that the set of active constraints does not change in a neighborhood of the estimation point. If the active set does in fact change, then any extrapolations based on these derivatives may be in error. The objective here is to investigate more efficient new methods for estimating parameter sensitivities when the active set changes. The new method is based on the recursive quadratic programming (RQP) method, used in conjunction with a differencing formula to produce estimates of the sensitivities. This is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFS method employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate the parameter sensitivity. To handle changes in the active set, a deflection algorithm is proposed for those cases where the new set of active constraints remains linearly independent. For those cases where dependencies occur, a directional derivative is proposed. A few simple examples are included for the algorithm, but extensive testing has not yet been performed.
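
    A toy version of the re-solve-and-difference idea, under the assumption that the active set stays fixed: the optimum of a small constrained problem is recomputed at p plus/minus h with warm starts and differenced centrally. SciPy's SLSQP stands in for the RQP solver, and the report's deflection and directional-derivative machinery for active-set changes is not reproduced.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def solve(p, x0):
        """Solve a small constrained problem for a fixed parameter p; SLSQP is used here
        as a stand-in for the recursive quadratic programming (RQP) solver."""
        obj = lambda x: (x[0] - p) ** 2 + (x[1] - 2.0) ** 2
        cons = [{"type": "ineq", "fun": lambda x: 4.0 - x[0] - x[1]}]   # x0 + x1 <= 4 (active here)
        return minimize(obj, x0, method="SLSQP", constraints=cons,
                        options={"ftol": 1e-12, "maxiter": 200}).x

    def optimum_sensitivity(p, h=1e-2):
        """Central-difference estimate of dx*/dp with warm starts, assuming the active set
        does not change over [p - h, p + h]."""
        x_base = solve(p, np.zeros(2))
        x_plus = solve(p + h, x_base)
        x_minus = solve(p - h, x_base)
        return (x_plus - x_minus) / (2.0 * h)

    # For this problem the analytical sensitivity is (0.5, -0.5) while the constraint is active.
    print(optimum_sensitivity(3.0))
    ```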

  12. A posteriori pointwise error estimates for the boundary element method

    SciTech Connect

    Paulino, G.H.; Gray, L.J.; Zarikian, V.

    1995-01-01

    This report presents a new approach for a posteriori pointwise error estimation in the boundary element method. The estimator relies upon the evaluation of hypersingular integral equations, and is therefore intrinsic to the boundary integral equation approach. This property allows some theoretical justification by mathematically correlating the exact and estimated errors. A methodology is developed for approximating the error on the boundary as well as in the interior of the domain. In the interior, error estimates for both the function and its derivatives (e.g. potential and interior gradients for potential problems, displacements and stresses for elasticity problems) are presented. Extensive computational experiments have been performed for the two dimensional Laplace equation on interior domains, employing Dirichlet and mixed boundary conditions. The results indicate that the error estimates successfully track the form of the exact error curve. Moreover, a reasonable estimate of the magnitude of the actual error is also obtained.

  13. Two-dimensional location and direction estimating method.

    PubMed

    Haga, Teruhiro; Tsukamoto, Sosuke; Hoshino, Hiroshi

    2008-01-01

    In this paper, a method for estimating both the position and the rotation angle of an object on a measurement stage is proposed. The system utilizes radio communication technology and the directivity of an antenna. As a prototype system, a measurement stage (a circle 240 mm in diameter) with 36 antennas placed every 10 degrees was developed. Two transmitter antennas are set at a right angle on the stage as the target object, and the position and the rotation angle are estimated by measuring the radio communication efficiency of each of the 36 antennas. The experimental results revealed that even when the estimated location is not very accurate (an error of about 30 mm), the rotation angle is accurately estimated (an error of about 2.33 degrees on average). The result suggests that the proposed method will be useful for estimating the location and the direction of an object.

  14. A Channelization-Based DOA Estimation Method for Wideband Signals.

    PubMed

    Guo, Rui; Zhang, Yue; Lin, Qianqiang; Chen, Zengping

    2016-01-01

    In this paper, we propose a novel direction of arrival (DOA) estimation method for wideband signals with sensor arrays. The proposed method splits the wideband array output into multiple frequency sub-channels and estimates the signal parameters using a digital channelization receiver. Based on the output sub-channels, a channelization-based incoherent signal subspace method (Channelization-ISM) and a channelization-based test of orthogonality of projected subspaces method (Channelization-TOPS) are proposed. Channelization-ISM applies narrowband signal subspace methods to each sub-channel independently; the arithmetic or geometric mean of the estimated DOAs from the sub-channels then gives the final result. Channelization-TOPS measures the orthogonality between the signal and noise subspaces of the output sub-channels to estimate DOAs. The proposed channelization-based method reasonably isolates signals occupying different bandwidths and improves the output SNR. It outperforms the conventional ISM and TOPS methods in estimation accuracy and dynamic range, especially in real environments. In addition, the parallel processing architecture makes the method easy to implement in hardware. A wideband digital array radar (DAR) using direct wideband radio frequency (RF) digitization is presented. Experiments carried out in a microwave anechoic chamber with the wideband DAR are presented to demonstrate the performance. The results verify the effectiveness of the proposed method. PMID:27384566
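
    As a rough illustration of the incoherent signal-subspace idea behind Channelization-ISM (not the authors' channelization receiver), the following Python sketch runs narrowband MUSIC on a few hypothetical frequency sub-channels of a uniform linear array and averages the per-channel DOA estimates; the array geometry, sub-channel frequencies, and signal model are all assumed.

      # Sketch: narrowband MUSIC per sub-channel, DOA estimates averaged.
      # One source, uniform linear array; all parameters are hypothetical.
      import numpy as np

      c = 3e8                      # propagation speed (m/s)
      M = 8                        # number of sensors
      d = 0.15                     # element spacing (m)
      true_doa = 20.0              # degrees
      sub_freqs = [0.8e9, 0.9e9, 1.0e9]   # sub-channel centre frequencies (Hz)
      snapshots = 200
      rng = np.random.default_rng(0)

      def steering(theta_deg, f):
          k = 2 * np.pi * f / c
          return np.exp(1j * k * d * np.arange(M) * np.sin(np.radians(theta_deg)))

      def music_doa(X, f, grid=np.arange(-90, 90, 0.1)):
          R = X @ X.conj().T / X.shape[1]          # sample covariance
          _, vecs = np.linalg.eigh(R)              # eigenvalues ascending
          En = vecs[:, :-1]                        # noise subspace (one source assumed)
          spectrum = [1.0 / np.linalg.norm(En.conj().T @ steering(t, f)) ** 2
                      for t in grid]
          return grid[int(np.argmax(spectrum))]

      estimates = []
      for f in sub_freqs:
          a = steering(true_doa, f)
          s = rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)
          noise = 0.1 * (rng.standard_normal((M, snapshots))
                         + 1j * rng.standard_normal((M, snapshots)))
          X = np.outer(a, s) + noise
          estimates.append(music_doa(X, f))

      print("per-channel DOAs:", estimates, "mean:", np.mean(estimates))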

  16. A robust method for rotation estimation using spherical harmonics representation.

    PubMed

    Althloothi, Salah; Mahoor, Mohammad H; Voyles, Richard M

    2013-06-01

    This paper presents a robust method for 3D object rotation estimation using a spherical harmonics representation and the unit quaternion vector. The proposed method provides a closed-form solution for rotation estimation without recurrence relations or searching for point correspondences between two objects. The rotation estimation problem is cast as a minimization problem, which finds the optimum rotation angles between two objects of interest in the frequency domain. The optimum rotation angles are obtained by calculating the unit quaternion vector from a symmetric matrix, which is constructed from the two sets of spherical harmonics coefficients using an eigendecomposition technique. Our experimental results on hundreds of 3D objects show that the proposed method is very accurate in rotation estimation, is robust to noisy data and missing surface points, and can handle intra-class variability between 3D objects. PMID:23475364

  17. A Fast Estimation Method of Railway Passengers' Flow

    NASA Astrophysics Data System (ADS)

    Nagasaki, Yusaku; Asuka, Masashi; Komaya, Kiyotoshi

    To evaluate a train schedule from the viewpoint of passengers' convenience, it is important to know each passenger's choice of trains and transfer stations to reach his/her destination. Because such passenger behavior is difficult to measure, methods for estimating railway passenger flow have been proposed to carry out this kind of evaluation. However, a train schedule planning system equipped with those methods is not practical, because the estimation takes too much time to complete. In this article, the authors propose a fast passenger flow estimation method that exploits features of the passenger-flow graph, using a preparatory search based on each train's arrival time at each station. The authors then show the results of passenger flow estimation applied to a railway in an urban area.

  18. Demographic estimation methods for plants with unobservable life-states

    USGS Publications Warehouse

    Kery, M.; Gregg, K.B.; Schaub, M.

    2005-01-01

    Demographic estimation of vital parameters in plants with an unobservable dormant state is complicated, because time of death is not known. Conventional methods assume that death occurs at a particular time after a plant has last been seen aboveground, but the consequences of assuming a particular duration of dormancy have never been tested. Capture-recapture methods do not make assumptions about time of death; however, problems with parameter estimability have not yet been resolved. To date, a critical comparative assessment of these methods is lacking. We analysed data from a 10-year study of Cleistes bifaria, a terrestrial orchid with frequent dormancy, and compared demographic estimates obtained by five varieties of the conventional methods and two capture-recapture methods. All conventional methods produced spurious unity survival estimates for some years or for some states, and estimates of demographic rates sensitive to the time-of-death assumption. In contrast, capture-recapture methods are more parsimonious in terms of assumptions, are based on well-founded theory, and did not produce spurious estimates. In Cleistes, dormant episodes lasted for 1-4 years (mean 1.4, SD 0.74). The capture-recapture models estimated ramet survival rate at 0.86 (SE ≈ 0.01), ranging from 0.77 to 0.94 (SEs ≤ 0.1) in any one year. The average fraction dormant was estimated at 30% (SE 1.5), ranging from 16 to 47% (SEs ≤ 5.1) in any one year. Multistate capture-recapture models showed that survival rates were positively related to precipitation in the current year, but transition rates were more strongly related to precipitation in the previous year than in the current year, with more ramets going dormant following dry years. Not all capture-recapture models of interest have estimable parameters; for instance, without excavating plants in years when they do not appear aboveground, it is not possible to obtain independent time-specific survival estimates for dormant plants. We introduce rigorous

  19. Comparison of volume estimation methods for pancreatic islet cells

    NASA Astrophysics Data System (ADS)

    Dvořák, Jiří; Švihlík, Jan; Habart, David; Kybic, Jan

    2016-03-01

    In this contribution we study different methods of automatic volume estimation for pancreatic islets, which can be used in the quality control step prior to islet transplantation. The total islet volume is an important criterion in the quality control. The individual islet volume distribution is also of interest -- it has been indicated that smaller islets can be more effective. A 2D image of a microscopy slice containing the islets is acquired. The input to the volume estimation methods is a set of segmented images of individual islets; the segmentation step is not discussed here. We consider simple methods of volume estimation assuming that the islets have a spherical or ellipsoidal shape. We also consider a local stereological method, namely the nucleator. The nucleator does not rely on any shape assumptions and provides unbiased estimates if isotropic sections through the islets are observed. We present a simulation study comparing the performance of the volume estimation methods in different scenarios and an experimental study comparing the methods on a real dataset.
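
    As a minimal illustration of the simplest shape-based estimator mentioned above (not the nucleator), the following Python sketch converts the segmented 2D area of each islet to the volume of a sphere of equal cross-sectional area; the pixel size and example areas are hypothetical.

      # Sketch, assuming spherical islets: equivalent-diameter sphere volume
      # from each segmented 2D cross-sectional area. Values are hypothetical.
      import numpy as np

      pixel_size_um = 2.0                           # microns per pixel (assumed)
      islet_areas_px = np.array([1200, 3400, 800])  # segmented areas in pixels

      areas_um2 = islet_areas_px * pixel_size_um ** 2
      radii_um = np.sqrt(areas_um2 / np.pi)         # radius of a circle of equal area
      volumes_um3 = 4.0 / 3.0 * np.pi * radii_um ** 3

      print("total islet volume (um^3):", volumes_um3.sum())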

  20. Motion estimation using point cluster method and Kalman filter.

    PubMed

    Senesh, M; Wolf, A

    2009-05-01

    The most frequently used method in three-dimensional human gait analysis involves placing markers on the skin of the analyzed segment. This introduces a significant artifact, which strongly influences the bone position and orientation and joint kinematic estimates. In this study, we tested and evaluated the effect of adding a Kalman filter procedure to the previously reported point cluster technique (PCT) in the estimation of a rigid body motion. We demonstrated the procedures by motion analysis of a compound planar pendulum from indirect opto-electronic measurements of markers attached to an elastic appendage that is restrained to slide along the rigid body's long axis. The elastic frequency is close to the pendulum frequency, as in the biomechanical problem, where the soft tissue frequency content is similar to the actual movement of the bones. Comparison of the real pendulum angle to that obtained by several estimation procedures--PCT, Kalman filter followed by PCT, and low pass filter followed by PCT--enables evaluation of the accuracy of the procedures. When comparing the maximal amplitude, no effect was noted from adding the Kalman filter; however, a closer look at the signal revealed that the estimated angle based only on the PCT method was very noisy, with fluctuations, while the estimated angle based on the Kalman filter followed by the PCT was a smooth signal. It was also noted that the instantaneous frequencies obtained from the estimated angle based on the PCT method are more dispersed than those obtained from the estimated angle based on the Kalman filter followed by the PCT method. Addition of a Kalman filter to the PCT method in the estimation procedure of rigid body motion results in a smoother signal that better represents the real motion, with less signal distortion than when using a digital low pass filter. Furthermore, it can be concluded that adding a Kalman filter to the PCT procedure substantially reduces the dispersion of the maximal and minimal
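
    For readers unfamiliar with the filtering step, the following minimal Python sketch applies a constant-velocity Kalman filter to a noisy angle signal; the model, noise covariances, and signal are hypothetical, and this is not the authors' PCT-coupled implementation.

      # Sketch: Kalman filtering of a noisy angle with a constant-velocity
      # state model; all parameters and the signal are hypothetical.
      import numpy as np

      dt = 0.01
      F = np.array([[1.0, dt], [0.0, 1.0]])   # state: [angle, angular rate]
      H = np.array([[1.0, 0.0]])              # the angle alone is measured
      Q = 1e-4 * np.eye(2)                    # assumed process noise covariance
      R = np.array([[0.05]])                  # assumed measurement noise covariance

      t = np.arange(0, 5, dt)
      truth = np.sin(2 * np.pi * 0.5 * t)
      z = truth + 0.2 * np.random.default_rng(1).standard_normal(t.size)

      x, P, filtered = np.zeros(2), np.eye(2), []
      for zk in z:
          x = F @ x                            # predict
          P = F @ P @ F.T + Q
          S = H @ P @ H.T + R                  # update
          K = P @ H.T @ np.linalg.inv(S)
          x = x + K @ (np.array([zk]) - H @ x)
          P = (np.eye(2) - K @ H) @ P
          filtered.append(x[0])

      print("RMS error raw:", np.sqrt(np.mean((z - truth) ** 2)),
            "filtered:", np.sqrt(np.mean((np.array(filtered) - truth) ** 2)))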

  1. Modeling an exhumed basin: A method for estimating eroded overburden

    USGS Publications Warehouse

    Poelchau, H.S.

    2001-01-01

    The Alberta Deep Basin in western Canada has undergone a large amount of erosion following deep burial in the Eocene. Basin modeling and simulation of burial and temperature history require estimates of maximum overburden for each gridpoint in the basin model. Erosion can be estimated using shale compaction trends. For instance, the widely used Magara method attempts to establish a sonic log gradient for shales and uses the extrapolation to a theoretical uncompacted shale value as a first indication of overcompaction and an estimate of the amount of erosion. Because such gradients are difficult to establish in many wells, an extension of this method was devised to help map erosion over a large area. Sonic Δt values of one suitable shale formation are calibrated with maximum-depth-of-burial estimates from sonic log extrapolation for several wells. The resulting regression equation can then be used to estimate and map maximum depth of burial or amount of erosion for all wells in which this formation has been logged. The example from the Alberta Deep Basin shows that the magnitude of erosion calculated by this method is conservative and comparable to independent estimates using vitrinite reflectance gradient methods. © 2001 International Association for Mathematical Geology.

  2. Stability and error estimation for Component Adaptive Grid methods

    NASA Technical Reports Server (NTRS)

    Oliger, Joseph; Zhu, Xiaolei

    1994-01-01

    Component adaptive grid (CAG) methods for solving hyperbolic partial differential equations (PDE's) are discussed in this paper. Applying recent stability results for a class of numerical methods on uniform grids, the convergence of these methods for linear problems on component adaptive grids is established here. Furthermore, the computational error can be estimated on CAG's using the stability results. Using these estimates, the error can be controlled on CAG's. Thus, the solution can be computed efficiently on CAG's within a given error tolerance. Computational results for time-dependent linear problems in one and two space dimensions are presented.

  3. The estimation of the measurement results with using statistical methods

    NASA Astrophysics Data System (ADS)

    Velychko, O.; Gordiyenko, T.

    2015-02-01

    A number of international standards and guides describe various statistical methods that are applied for the management, control, and improvement of processes and for the analysis of technical measurement results. An analysis of international standards and guides on statistical methods for the estimation of measurement results, with recommendations for their application in laboratories, is presented. To carry out this analysis of the standards and guides, cause-and-effect Ishikawa diagrams concerning the application of statistical methods for the estimation of measurement results are constructed.

  4. A Study of Methods for Estimating Distributions of Test Scores.

    ERIC Educational Resources Information Center

    Cope, Ronald T.; Kolen, Michael J.

    This study compared five density estimation techniques applied to samples from a population of 272,244 examinees' ACT English Usage and Mathematics Usage raw scores. Unsmoothed frequencies, kernel method, negative hypergeometric, four-parameter beta compound binomial, and Cureton-Tukey methods were applied to 500 replications of random samples of…

  5. Rapid-estimation method for assessing scour at highway bridges

    USGS Publications Warehouse

    Holnbeck, Stephen R.

    1998-01-01

    A method was developed by the U.S. Geological Survey for rapid estimation of scour at highway bridges using limited site data and analytical procedures to estimate pier, abutment, and contraction scour depths. The basis for the method was a procedure recommended by the Federal Highway Administration for conducting detailed scour investigations, commonly referred to as the Level 2 method. Using pier, abutment, and contraction scour results obtained from Level 2 investigations at 122 sites in 10 States, envelope curves and graphical relations were developed that enable determination of scour-depth estimates at most bridge sites in a matter of a few hours. Rather than using complex hydraulic variables, surrogate variables more easily obtained in the field were related to calculated scour-depth data from Level 2 studies. The method was tested by having several experienced individuals apply the method in the field, and results were compared among the individuals and with previous detailed analyses performed for the sites. Results indicated that the variability in predicted scour depth among individuals applying the method generally was within an acceptable range, and that conservatively greater scour depths generally were obtained by the rapid-estimation method compared to the Level 2 method. The rapid-estimation method is considered most applicable for conducting limited-detail scour assessments and as a screening tool to determine those bridge sites that may require more detailed analysis. The method is designed to be applied only by a qualified professional possessing knowledge and experience in the fields of bridge scour, hydraulics, and flood hydrology, and having specific expertise with the Level 2 method.

  6. Precision of two methods for estimating age from burbot otoliths

    USGS Publications Warehouse

    Edwards, W.H.; Stapanian, M.A.; Stoneman, A.T.

    2011-01-01

    Lower reproductive success and older age structure are associated with many burbot (Lota lota L.) populations that are declining or of conservation concern. Therefore, reliable methods for estimating the age of burbot are critical for effective assessment and management. In Lake Erie, burbot populations have declined in recent years due to the combined effects of an aging population (x̄ = 10 years in 2007) and extremely low recruitment since 2002. We examined otoliths from burbot (N = 91) collected in Lake Erie in 2007 and compared the estimates of burbot age by two agers, each using two established methods (cracked-and-burned and thin-section) of estimating ages from burbot otoliths. One ager was experienced at estimating age from otoliths, the other was a novice. Agreement (precision) between the two agers was higher for the thin-section method, particularly at ages 6–11 years, based on linear regression analyses and 95% confidence intervals. As expected, precision between the two methods was higher for the more experienced ager. Both agers reported that the thin sections offered clearer views of the annuli, particularly near the margins on otoliths from burbot ages ≥8. Slides for the thin sections required some costly equipment and more than 2 days to prepare. In contrast, preparing the cracked-and-burned samples was comparatively inexpensive and quick. We suggest use of the thin-section method for estimating the age structure of older burbot populations.

  7. Increasing confidence in mass discharge estimates using geostatistical methods.

    PubMed

    Cai, Zuansi; Wilson, Ryan D; Cardiff, Michael A; Kitanidis, Peter K

    2011-01-01

    Mass discharge is one metric rapidly gaining acceptance for assessing the performance of in situ groundwater remediation systems. Multilevel sampling transects provide the data necessary to make such estimates, often using the Thiessen Polygon method. This method, however, does not provide a direct estimate of uncertainty. We introduce a geostatistical mass discharge estimation approach that involves a rigorous analysis of data spatial variability and selection of an appropriate variogram model. High-resolution interpolation was applied to create a map of measurements across a transect, and the magnitude and uncertainty of mass discharge were quantified by conditional simulation. An important benefit of the approach is quantified uncertainty of the mass discharge estimate. We tested the approach on data from two sites monitored using multilevel transects. We also used the approach to explore the effect of lower spatial monitoring resolution on the accuracy and uncertainty of mass discharge estimates. This process revealed two important findings: (1) appropriate monitoring resolution is that which yielded an estimate comparable with the full dataset value, and (2) high-resolution sampling yields a more representative spatial data structure descriptor, which can then be used via conditional simulation to make subsequent mass discharge estimates from lower resolution sampling of the same transect. The implication of the latter is that a high-resolution multilevel transect needs to be sampled only once to obtain the necessary spatial data descriptor for a contaminant plume exhibiting minor temporal variability, and thereafter less spatially intensely to reduce costs.
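
    As a point of reference, the following minimal Python sketch shows the conventional Thiessen-polygon mass discharge calculation that the geostatistical approach is compared against, Md = Σ C_i q_i A_i over the polygons of a multilevel transect; the concentrations, Darcy fluxes, and polygon areas are hypothetical.

      # Sketch of the conventional Thiessen-polygon mass discharge estimate.
      # All values below are hypothetical.
      import numpy as np

      conc_g_per_m3 = np.array([0.120, 0.085, 0.040, 0.010])   # = mg/L
      darcy_flux_m_per_day = np.array([0.05, 0.04, 0.06, 0.05])
      polygon_area_m2 = np.array([2.0, 2.5, 2.0, 3.0])

      mass_discharge_g_per_day = np.sum(conc_g_per_m3 * darcy_flux_m_per_day * polygon_area_m2)
      print("mass discharge (g/day):", mass_discharge_g_per_day)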

  8. A simple and reliable method for estimating haemoglobin.

    PubMed Central

    Stott, G. J.; Lewis, S. M.

    1995-01-01

    A new colour scale has been devised for estimating haemoglobin levels by matching blood samples against ten levels of haemoglobin (3, 4, 5, 6, 7, 8, 9, 10, 12, and 14 g/dl) on the scale. Preliminary results show good correlations with spectrophotometric readings. The new device is being field tested and, if the initial promise is confirmed, will provide a simple and reliable method for estimating haemoglobin where laboratory facilities are not available. PMID:7614669

  9. Time domain attenuation estimation method from ultrasonic backscattered signals

    PubMed Central

    Ghoshal, Goutam; Oelze, Michael L.

    2012-01-01

    Ultrasonic attenuation is important not only as a parameter for characterizing tissue but also for compensating other parameters that are used to classify tissues. Several techniques have been explored for estimating ultrasonic attenuation from backscattered signals. In the present study, a technique is developed to estimate the local ultrasonic attenuation coefficient by analyzing the time domain backscattered signal. The proposed method incorporates an objective function that combines the diffraction pattern of the source/receiver with the attenuation slope in an integral equation. The technique was assessed through simulations and validated through experiments with a tissue mimicking phantom and fresh rabbit liver samples. The attenuation values estimated using the proposed technique were compared with the attenuation estimated using insertion loss measurements. For a data block size of 15 pulse lengths axially and 15 beamwidths laterally, the mean attenuation estimates from the tissue mimicking phantoms were within 10% of the estimates using insertion loss measurements. With a data block size of 20 pulse lengths axially and 20 beamwidths laterally, the error in the attenuation values estimated from the liver samples were within 10% of the attenuation values estimated from the insertion loss measurements. PMID:22779499

  10. Estimating Population Size Using the Network Scale Up Method

    PubMed Central

    Maltiel, Rachael; Raftery, Adrian E.; McCormick, Tyler H.; Baraff, Aaron J.

    2015-01-01

    We develop methods for estimating the size of hard-to-reach populations from data collected using network-based questions on standard surveys. Such data arise by asking respondents how many people they know in a specific group (e.g. people named Michael, intravenous drug users). The Network Scale up Method (NSUM) is a tool for producing population size estimates using these indirect measures of respondents’ networks. Killworth et al. (1998a,b) proposed maximum likelihood estimators of population size for a fixed effects model in which respondents’ degrees or personal network sizes are treated as fixed. We extend this by treating personal network sizes as random effects, yielding principled statements of uncertainty. This allows us to generalize the model to account for variation in people’s propensity to know people in particular subgroups (barrier effects), such as their tendency to know people like themselves, as well as their lack of awareness of or reluctance to acknowledge their contacts’ group memberships (transmission bias). NSUM estimates also suffer from recall bias, in which respondents tend to underestimate the number of members of larger groups that they know, and conversely for smaller groups. We propose a data-driven adjustment method to deal with this. Our methods perform well in simulation studies, generating improved estimates and calibrated uncertainty intervals, as well as in back estimates of real sample data. We apply them to data from a study of HIV/AIDS prevalence in Curitiba, Brazil. Our results show that when transmission bias is present, external information about its likely extent can greatly improve the estimates. The methods are implemented in the NSUM R package. PMID:26949438
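
    As a baseline for the models described above, the following minimal Python sketch computes the basic fixed-effects scale-up estimate, N · Σy_i / Σd_i, where y_i is the number of hidden-population members respondent i reports knowing and d_i is that respondent's personal network size; all numbers are hypothetical.

      # Sketch of the basic network scale-up estimator (the fixed-effects
      # starting point that the paper extends). All numbers are hypothetical.
      import numpy as np

      N_total = 1_800_000                    # size of the general population
      y = np.array([0, 2, 1, 0, 3, 1])       # reported hidden-population contacts
      d = np.array([250, 300, 180, 220, 400, 150])   # personal network sizes

      hidden_size = N_total * y.sum() / d.sum()
      print("estimated hidden-population size:", round(hidden_size))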

  11. Fault detection in electromagnetic suspension systems with state estimation methods

    SciTech Connect

    Sinha, P.K.; Zhou, F.B.; Kutiyal, R.S. . Dept. of Engineering)

    1993-11-01

    High-speed maglev vehicles need a high level of safety that depends on the whole vehicle system's reliability. There are many ways of attaining high reliability for the system. The conventional method uses redundant hardware with majority-vote logic circuits. Hardware redundancy costs more, weighs more, and occupies more space than analytically redundant methods. Analytically redundant systems use parameter identification and state estimation methods based on system models to detect and isolate faults in instruments (sensors), actuators, and components. In this paper the authors use the Luenberger observer to estimate three state variables of the electromagnetic suspension system: position (airgap), vehicle velocity, and vertical acceleration. These estimates are compared with the corresponding sensor outputs for fault detection. In this paper, they consider FDI of the accelerometer, the sensor which provides the ride-quality measurement.
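
    As a generic illustration of observer-based residual generation (not the suspension model used in the paper), the following Python sketch runs a discrete-time Luenberger observer on a simple position/velocity model and flags a fault when the measurement residual exceeds a threshold; the model, observer gain, and threshold are assumed.

      # Sketch: xhat[k+1] = A xhat[k] + B u[k] + L (y[k] - C xhat[k]); a fault
      # is suspected when the residual y - C xhat exceeds a threshold.
      # Model, gain L, noise level, and threshold are all hypothetical.
      import numpy as np

      dt = 0.01
      A = np.array([[1.0, dt], [0.0, 1.0]])   # simple position/velocity model
      B = np.array([[0.0], [dt]])
      C = np.array([[1.0, 0.0]])              # position (airgap) is measured
      L = np.array([[0.5], [2.0]])            # assumed observer gain (stable A - LC)

      rng = np.random.default_rng(0)
      x = np.array([[0.0], [0.0]])
      xhat = np.zeros((2, 1))
      threshold = 0.05

      for k in range(500):
          u = np.array([[np.sin(0.02 * k)]])
          x = A @ x + B @ u
          y = C @ x + 0.005 * rng.standard_normal((1, 1))
          if k > 300:                          # injected sensor fault (bias)
              y = y + 0.2
          residual = y - C @ xhat
          xhat = A @ xhat + B @ u + L @ residual
          if abs(residual[0, 0]) > threshold:
              print("fault suspected at step", k)
              break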

  12. Models and estimation methods for clinical HIV-1 data

    NASA Astrophysics Data System (ADS)

    Verotta, Davide

    2005-12-01

    Clinical HIV-1 data include many individual factors, such as compliance to treatment, pharmacokinetics, variability with respect to viral dynamics, race, sex, income, etc., which might directly influence or be associated with clinical outcome. These factors need to be taken into account to achieve a better understanding of clinical outcome, and mathematical models can provide a unifying framework to do so. The first objective of this paper is to demonstrate the development of comprehensive HIV-1 dynamics models that describe viral dynamics and also incorporate different factors influencing such dynamics. The second objective of this paper is to describe alternative estimation methods that can be applied to the analysis of data with such models. In particular, we consider: (i) simple but effective two-stage estimation methods, in which data from each patient are analyzed separately and summary statistics derived from the results; (ii) more complex nonlinear mixed effect models, used to pool all the patient data in a single analysis. Bayesian estimation methods are also considered, in particular: (iii) maximum a posteriori approximations, MAP, and (iv) Markov chain Monte Carlo, MCMC. Bayesian methods incorporate prior knowledge into the models, thus avoiding some of the model simplifications introduced when the data are analyzed using two-stage methods or a nonlinear mixed effect framework. We demonstrate the development of the models and the different estimation methods using real AIDS clinical trial data involving patients receiving multiple drug regimens.

  13. Kernel bandwidth estimation for nonparametric modeling.

    PubMed

    Bors, Adrian G; Nasios, Nikolaos

    2009-12-01

    Kernel density estimation is a nonparametric procedure for probability density modeling, which has found several applications in various fields. The smoothness and modeling ability of the functional approximation are controlled by the kernel bandwidth. In this paper, we describe a Bayesian estimation method for finding the bandwidth from a given data set. The proposed bandwidth estimation method is applied in three different computational-intelligence methods that rely on kernel density estimation: 1) scale space; 2) mean shift; and 3) quantum clustering. The third method is a novel approach that relies on the principles of quantum mechanics. This method is based on the analogy between data samples and quantum particles and uses the Schrödinger potential as a cost function. The proposed methodology is used for blind-source separation of modulated signals and for terrain segmentation based on topography information.
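
    As a minimal illustration of how the bandwidth enters a kernel density estimate (using Silverman's rule of thumb rather than the Bayesian procedure described in the paper), the following Python sketch builds a Gaussian KDE on synthetic data.

      # Sketch: Gaussian kernel density estimate with an explicit bandwidth.
      # The data are synthetic and the bandwidth rule is a stand-in.
      import numpy as np

      rng = np.random.default_rng(0)
      data = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(1, 1.0, 700)])

      n = data.size
      h = 1.06 * data.std(ddof=1) * n ** (-1 / 5)   # Silverman's rule of thumb

      def kde(x, samples, bandwidth):
          # Gaussian kernel density estimate evaluated at the points x
          u = (x[:, None] - samples[None, :]) / bandwidth
          return np.exp(-0.5 * u ** 2).sum(axis=1) / (samples.size * bandwidth * np.sqrt(2 * np.pi))

      grid = np.linspace(-5, 5, 201)
      density = kde(grid, data, h)
      print("bandwidth:", h, "integral ~", density.sum() * (grid[1] - grid[0]))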

  14. New method for estimating low-earth-orbit collision probabilities

    NASA Technical Reports Server (NTRS)

    Vedder, John D.; Tabor, Jill L.

    1991-01-01

    An unconventional but general method is described for estimating the probability of collision between an earth-orbiting spacecraft and orbital debris. This method uses a Monte Carlo simulation of the orbital motion of the target spacecraft and each discrete debris object to generate an empirical set of distances, each distance representing the separation between the spacecraft and the nearest debris object at random times. Using concepts from the asymptotic theory of extreme order statistics, an analytical density function is fitted to this set of minimum distances. From this function, it is possible to generate realistic collision estimates for the spacecraft.
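
    As a simplified illustration of the idea (with a Weibull model standing in for the asymptotic extreme-order-statistics fit, and synthetic data standing in for the Monte Carlo minima), the following Python sketch fits a density to simulated minimum separation distances and reads off the probability that the separation falls below a keep-out radius.

      # Sketch: fit a density to minimum separation distances, then estimate
      # the probability of a close approach. Data and radius are hypothetical.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      min_distances_km = rng.weibull(1.8, size=5000) * 25.0   # stand-in for simulated minima

      shape, loc, scale = stats.weibull_min.fit(min_distances_km, floc=0.0)
      keep_out_km = 1.0
      p_close_approach = stats.weibull_min.cdf(keep_out_km, shape, loc=loc, scale=scale)
      print("P(separation < 1 km) per sampling epoch:", p_close_approach)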

  15. Estimation Method of Body Temperature from Upper Arm Temperature

    NASA Astrophysics Data System (ADS)

    Suzuki, Arata; Ryu, Kazuteru; Kanai, Nobuyuki

    This paper proposes a method for estimating body temperature by using the relation between the upper arm temperature and the atmospheric temperature. Conventionally, body temperature has been measured in the armpit or orally, because the temperature at the body surface is influenced by the atmospheric temperature. However, there is a correlation between the body surface temperature and the atmospheric temperature. By using this correlation, the body temperature can be estimated from the body surface temperature. The proposed method makes it possible to measure body temperature with a temperature sensor embedded in a blood pressure monitor cuff. Therefore, simultaneous measurement of blood pressure and body temperature can be realized. The effectiveness of the proposed method is verified through an actual body temperature experiment. The proposed method might contribute to reducing the workload of medical staff in home medical care, and more.

  16. Objectivity and validity of EMG method in estimating anaerobic threshold.

    PubMed

    Kang, S-K; Kim, J; Kwon, M; Eom, H

    2014-08-01

    The purposes of this study were to verify and compare the performance of anaerobic threshold (AT) point estimates among different filtering intervals (9, 15, 20, 25, 30 s) and to investigate the interrelationships of AT point estimates obtained by the ventilatory threshold (VT) and by muscle fatigue thresholds determined from electromyographic (EMG) activity during incremental exercise on a cycle ergometer. Sixty-nine untrained male university students who nevertheless exercised regularly volunteered to participate in this study. The incremental exercise protocol was applied with a consistent stepwise increase in power output of 20 watts per minute until exhaustion. The AT point was also estimated in the same manner using the V-slope program with gas exchange parameters. In general, the estimated values of AT point-time computed by the EMG method were more consistent across the 5 filtering intervals and demonstrated higher correlations among themselves when compared with the values obtained by the VT method. The results of the present study suggest that EMG signals could be used as an alternative or a new option for estimating the AT point. The proposed computing procedure, implemented in Matlab for the analysis of EMG signals, also appeared to be valid and reliable, as it produced nearly identical values and high correlations with VT estimates. PMID:24988194

  17. A review of action estimation methods for galactic dynamics

    NASA Astrophysics Data System (ADS)

    Sanders, Jason L.; Binney, James

    2016-04-01

    We review the available methods for estimating actions, angles and frequencies of orbits in both axisymmetric and triaxial potentials. The methods are separated into two classes. Unless an orbit has been trapped by a resonance, convergent (iterative) methods are able to recover the actions to arbitrarily high accuracy given sufficient computing time. Faster non-convergent methods rely on the potential being sufficiently close to a separable potential, and the accuracy of the action estimate cannot be improved through further computation. We critically compare the accuracy of the methods and the required computation time for a range of orbits in an axisymmetric multicomponent Galactic potential. We introduce a new method for estimating actions that builds on the adiabatic approximation of Schönrich & Binney and discuss the accuracy required for the actions, angles and frequencies using suitable distribution functions for the thin and thick discs, the stellar halo and a star stream. We conclude that for studies of the disc and smooth halo component of the Milky Way, the most suitable compromise between speed and accuracy is the Stäckel Fudge, whilst when studying streams the non-convergent methods do not offer sufficient accuracy and the most suitable method is computing the actions from an orbit integration via a generating function. All the software used in this study can be downloaded from https://github.com/jls713/tact.

  18. Hydrological model uncertainty due to spatial evapotranspiration estimation methods

    NASA Astrophysics Data System (ADS)

    Yu, Xuan; Lamačová, Anna; Duffy, Christopher; Krám, Pavel; Hruška, Jakub

    2016-05-01

    Evapotranspiration (ET) continues to be a difficult process to estimate in seasonal and long-term water balances in catchment models. Approaches to estimate ET typically use vegetation parameters (e.g., leaf area index [LAI], interception capacity) obtained from field observation, remote sensing data, national or global land cover products, and/or simulated by ecosystem models. In this study we attempt to quantify the uncertainty that spatial evapotranspiration estimation introduces into hydrological simulations when the age of the forest is not precisely known. The Penn State Integrated Hydrologic Model (PIHM) was implemented for the Lysina headwater catchment, located 50°03‧N, 12°40‧E in the western part of the Czech Republic. The spatial forest patterns were digitized from forest age maps made available by the Czech Forest Administration. Two ET methods were implemented in the catchment model: the Biome-BGC forest growth sub-model (1-way coupled to PIHM) and with the fixed-seasonal LAI method. From these two approaches simulation scenarios were developed. We combined the estimated spatial forest age maps and two ET estimation methods to drive PIHM. A set of spatial hydrologic regime and streamflow regime indices were calculated from the modeling results for each method. Intercomparison of the hydrological responses to the spatial vegetation patterns suggested considerable variation in soil moisture and recharge and a small uncertainty in the groundwater table elevation and streamflow. The hydrologic modeling with ET estimated by Biome-BGC generated less uncertainty due to the plant physiology-based method. The implication of this research is that overall hydrologic variability induced by uncertain management practices was reduced by implementing vegetation models in the catchment models.

  19. Global parameter estimation methods for stochastic biochemical systems

    PubMed Central

    2010-01-01

    Background The importance of stochasticity in cellular processes having low number of molecules has resulted in the development of stochastic models such as chemical master equation. As in other modelling frameworks, the accompanying rate constants are important for the end-applications like analyzing system properties (e.g. robustness) or predicting the effects of genetic perturbations. Prior knowledge of kinetic constants is usually limited and the model identification routine typically includes parameter estimation from experimental data. Although the subject of parameter estimation is well-established for deterministic models, it is not yet routine for the chemical master equation. In addition, recent advances in measurement technology have made the quantification of genetic substrates possible to single molecular levels. Thus, the purpose of this work is to develop practical and effective methods for estimating kinetic model parameters in the chemical master equation and other stochastic models from single cell and cell population experimental data. Results Three parameter estimation methods are proposed based on the maximum likelihood and density function distance, including probability and cumulative density functions. Since stochastic models such as chemical master equations are typically solved using a Monte Carlo approach in which only a finite number of Monte Carlo realizations are computationally practical, specific considerations are given to account for the effect of finite sampling in the histogram binning of the state density functions. Applications to three practical case studies showed that while maximum likelihood method can effectively handle low replicate measurements, the density function distance methods, particularly the cumulative density function distance estimation, are more robust in estimating the parameters with consistently higher accuracy, even for systems showing multimodality. Conclusions The parameter estimation methodologies

  20. A method for reliability estimation of heterogeneous systems

    NASA Astrophysics Data System (ADS)

    Mihalache, Alin; Guérin, Fabrice; Barreau, Mihaela; Todoskoff, Alexis; Bacivarov, Ioan; Bacivarov, Angelica

    2009-01-01

    Reliability estimation is becoming an important issue in the design process of complex heterogeneous systems. The concept of reliability is frequently seen as one of the least controlled aspects, and by some as the critical point. Since these systems are very complex to study, the evaluation of their reliability is extremely difficult. In this paper, we propose a global method to estimate mechatronic system reliability using operating field data. Since we have a small amount of data, we use an estimation method called the Bayesian Restoration Maximization (BRM) method, thus increasing the estimation accuracy. The BRM method requires some prior knowledge to be defined. For this purpose, we propose to define the prior distribution using a Monte-Carlo simulation based on a stochastic Petri net (SPN) model and on the operating field data. The stochastic Petri net model describes the functional and dysfunctional behaviours. In this study, we deal with the case of n repairable systems observed until a deterministic censoring time (for example, this censoring time may be the warranty period of an ABS system). We treat repair as the replacement of the failing component by an identical one in the case of the electronic and mechanical subsystems; in the case of software, the defect is rectified on all the subsystems. We simulate the failure times and compute the confidence interval. The proposed method allows reliability to be evaluated both for the n mechatronic systems and for their different subsystems.

  1. The deposit size frequency method for estimating undiscovered uranium deposits

    USGS Publications Warehouse

    McCammon, R.B.; Finch, W.I.

    1993-01-01

    The deposit size frequency (DSF) method has been developed as a generalization of the method that was used in the National Uranium Resource Evaluation (NURE) program to estimate the uranium endowment of the United States. The DSF method overcomes difficulties encountered during the NURE program when geologists were asked to provide subjective estimates of (1) the endowed fraction of an area judged favorable (factor F) for the occurrence of undiscovered uranium deposits and (2) the tons of endowed rock per unit area (factor T) within the endowed fraction of the favorable area. Because the magnitudes of factors F and T were unfamiliar to nearly all of the geologists, most geologists responded by estimating the number of undiscovered deposits likely to occur within the favorable area and the average size of these deposits. The DSF method combines factors F and T into a single factor (F·T) that represents the tons of endowed rock per unit area of the undiscovered deposits within the favorable area. Factor F·T, provided by the geologist, is the estimated number of undiscovered deposits per unit area in each of a number of specified deposit-size classes. The number of deposit-size classes and the size interval of each class are based on the data collected from the deposits in known (control) areas. The DSF method affords greater latitude in making subjective estimates than the NURE method and emphasizes more of the everyday experience of exploration geologists. Using the DSF method, new assessments have been made for the "young, organic-rich" surficial uranium deposits in Washington and Idaho and for the solution-collapse breccia pipe uranium deposits in the Grand Canyon region in Arizona and adjacent Utah. © 1993 Oxford University Press.
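
    As a minimal illustration of the DSF bookkeeping, the following Python sketch sums, over hypothetical deposit-size classes, the geologist-supplied deposit density per unit area times the favorable area times a representative class tonnage; all values are invented for illustration.

      # Sketch of the DSF arithmetic: endowment = sum over size classes of
      # (deposits per unit area) x (favorable area) x (class tonnage).
      # Size classes, densities, and area below are hypothetical.
      import numpy as np

      favorable_area_km2 = 450.0
      class_tonnage = np.array([5e4, 2e5, 1e6])          # representative tons per deposit
      deposits_per_km2 = np.array([4e-3, 1e-3, 2e-4])    # estimated undiscovered deposits / km^2

      endowed_tons = np.sum(deposits_per_km2 * favorable_area_km2 * class_tonnage)
      print("estimated endowment (tons of endowed rock):", endowed_tons)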

  3. Estimation of uncertainty for contour method residual stress measurements

    SciTech Connect

    Olson, Mitchell D.; DeWald, Adrian T.; Prime, Michael B.; Hill, Michael R.

    2014-12-03

    This paper describes a methodology for the estimation of measurement uncertainty for the contour method, where the contour method is an experimental technique for measuring a two-dimensional map of residual stress over a plane. Random error sources including the error arising from noise in displacement measurements and the smoothing of the displacement surfaces are accounted for in the uncertainty analysis. The output is a two-dimensional, spatially varying uncertainty estimate such that every point on the cross-section where residual stress is determined has a corresponding uncertainty value. Both numerical and physical experiments are reported, which are used to support the usefulness of the proposed uncertainty estimator. The uncertainty estimator shows the contour method to have larger uncertainty near the perimeter of the measurement plane. For the experiments, which were performed on a quenched aluminum bar with a cross section of 51 × 76 mm, the estimated uncertainty was approximately 5 MPa (σ/E = 7 · 10⁻⁵) over the majority of the cross-section, with localized areas of higher uncertainty, up to 10 MPa (σ/E = 14 · 10⁻⁵).

  4. Estimating Agricultural Water Use using the Operational Simplified Surface Energy Balance Evapotranspiration Estimation Method

    NASA Astrophysics Data System (ADS)

    Forbes, B. T.

    2015-12-01

    Due to the predominantly arid climate in Arizona, access to adequate water supply is vital to the economic development and livelihood of the State. Water supply has become increasingly important during periods of prolonged drought, which has strained reservoir water levels in the Desert Southwest over past years. Arizona's water use is dominated by agriculture, consuming about seventy-five percent of the total annual water demand. Tracking current agricultural water use is important for managers and policy makers so that current water demand can be assessed and current information can be used to forecast future demands. However, many croplands in Arizona are irrigated outside of areas where water use reporting is mandatory. To estimate irrigation withdrawals on these lands, we use a combination of field verification, evapotranspiration (ET) estimation, and irrigation system qualification. ET is typically estimated in Arizona using the Modified Blaney-Criddle method which uses meteorological data to estimate annual crop water requirements. The Modified Blaney-Criddle method assumes crops are irrigated to their full potential over the entire growing season, which may or may not be realistic. We now use the Operational Simplified Surface Energy Balance (SSEBop) ET data in a remote-sensing and energy-balance framework to estimate cropland ET. SSEBop data are of sufficient resolution (30m by 30m) for estimation of field-scale cropland water use. We evaluate our SSEBop-based estimates using ground-truth information and irrigation system qualification obtained in the field. Our approach gives the end user an estimate of crop consumptive use as well as inefficiencies in irrigation system performance—both of which are needed by water managers for tracking irrigated water use in Arizona.

  5. Improvement of Source Number Estimation Method for Single Channel Signal

    PubMed Central

    Du, Bolun; He, Yunze

    2016-01-01

    Source number estimation methods for single channel signals have been investigated, and improvements for each method are suggested in this work. First, the single channel data is converted to multi-channel form by a delay process. Then, algorithms used in array signal processing, such as Gerschgorin's disk estimation (GDE) and minimum description length (MDL), are introduced to estimate the source number of the received signal. Previous results have shown that MDL, based on information theoretic criteria (ITC), obtains superior performance to GDE at low SNR; however, it cannot handle signals containing colored noise. On the contrary, the GDE method can eliminate the influence of colored noise, but its performance at low SNR is not satisfactory. To resolve these problems and contradictions, this work makes substantial improvements to the two methods. A diagonal loading technique is employed to ameliorate the MDL method, and a jackknife technique is used to optimize the data covariance matrix in order to improve the performance of the GDE method. Simulation results illustrate that the performance of the original methods is improved considerably. PMID:27736959
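
    As a point of reference for the eigenvalue-based criterion, the following Python sketch applies the classical MDL source-number estimator to the eigenvalues of a sample covariance matrix built from synthetic multichannel data (two sinusoidal sources), not from the delay-embedded single-channel data used in the paper.

      # Sketch: classical MDL source-number criterion from covariance eigenvalues.
      # The multichannel data here are synthetic and hypothetical.
      import numpy as np

      rng = np.random.default_rng(0)
      p, N = 6, 1000                                  # channels, snapshots
      t = np.arange(N)
      s = np.vstack([np.sin(0.2 * t), np.sin(0.45 * t + 1.0)])   # two sources
      A = rng.standard_normal((p, 2))                             # mixing matrix
      X = A @ s + 0.3 * rng.standard_normal((p, N))

      eigvals = np.sort(np.linalg.eigvalsh(X @ X.T / N))[::-1]    # descending

      def mdl(k, lam, n_snap):
          tail = lam[k:]                              # presumed noise eigenvalues
          m = tail.size
          g = np.exp(np.mean(np.log(tail)))           # geometric mean
          a = np.mean(tail)                           # arithmetic mean
          return -n_snap * m * np.log(g / a) + 0.5 * k * (2 * len(lam) - k) * np.log(n_snap)

      scores = [mdl(k, eigvals, N) for k in range(p)]
      print("estimated number of sources:", int(np.argmin(scores)))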

  6. Detecting diversity: emerging methods to estimate species diversity.

    PubMed

    Iknayan, Kelly J; Tingley, Morgan W; Furnas, Brett J; Beissinger, Steven R

    2014-02-01

    Estimates of species richness and diversity are central to community and macroecology and are frequently used in conservation planning. Commonly used diversity metrics account for undetected species primarily by controlling for sampling effort. Yet the probability of detecting an individual can vary among species, observers, survey methods, and sites. We review emerging methods to estimate alpha, beta, gamma, and metacommunity diversity through hierarchical multispecies occupancy models (MSOMs) and multispecies abundance models (MSAMs) that explicitly incorporate observation error in the detection process for species or individuals. We examine advantages, limitations, and assumptions of these detection-based hierarchical models for estimating species diversity. Accounting for imperfect detection using these approaches has influenced conclusions of comparative community studies and creates new opportunities for testing theory. PMID:24315534

  7. Inverse method for estimating shear stress in machining

    NASA Astrophysics Data System (ADS)

    Burns, T. J.; Mates, S. P.; Rhorer, R. L.; Whitenton, E. P.; Basak, D.

    2016-01-01

    An inverse method is presented for estimating shear stress in the work material in the region of chip-tool contact along the rake face of the tool during orthogonal machining. The method is motivated by a model of heat generation in the chip, which is based on a two-zone contact model for friction along the rake face, and an estimate of the steady-state flow of heat into the cutting tool. Given an experimentally determined discrete set of steady-state temperature measurements along the rake face of the tool, it is shown how to estimate the corresponding shear stress distribution on the rake face, even when no friction model is specified.

  8. A method of complex background estimation in astronomical images

    NASA Astrophysics Data System (ADS)

    Popowicz, A.; Smolka, B.

    2015-09-01

    In this paper, we present a novel approach to the estimation of strongly varying backgrounds in astronomical images by means of small-objects removal and subsequent missing pixels interpolation. The method is based on the analysis of a pixel local neighbourhood and utilizes the morphological distance transform. In contrast to popular background-estimation techniques, our algorithm allows for accurate extraction of complex structures, like galaxies or nebulae. Moreover, it does not require multiple tuning parameters, since it relies on physical properties of CCD image sensors - the gain and the readout noise characteristics. The comparison with other widely used background estimators revealed higher accuracy of the proposed technique. The superiority of the novel method is especially significant for the most challenging fluctuating backgrounds. The size of filtered-out objects is tunable; therefore, the algorithm may eliminate a wide range of foreground structures, including the dark current impulses, cosmic rays or even entire galaxies in deep field images.

  9. Optimal Input Signal Design for Data-Centric Estimation Methods

    PubMed Central

    Deshpande, Sunil; Rivera, Daniel E.

    2013-01-01

    Data-centric estimation methods such as Model-on-Demand and Direct Weight Optimization form attractive techniques for estimating unknown functions from noisy data. These methods rely on generating a local function approximation from a database of regressors at the current operating point with the process repeated at each new operating point. This paper examines the design of optimal input signals formulated to produce informative data to be used by local modeling procedures. The proposed method specifically addresses the distribution of the regressor vectors. The design is examined for a linear time-invariant system under amplitude constraints on the input. The resulting optimization problem is solved using semidefinite relaxation methods. Numerical examples show the benefits in comparison to a classical PRBS input design. PMID:24317042

  10. Boundary estimation method for ultrasonic 3D imaging

    NASA Astrophysics Data System (ADS)

    Ohashi, Gosuke; Ohya, Akihisa; Natori, Michiya; Nakajima, Masato

    1993-09-01

    The authors developed a new method for automatically and efficiently estimating the boundaries of soft tissue and amniotic fluid, in order to obtain a fine three-dimensional image of the fetus from information given by ultrasonic echo images. The aim of this boundary estimation is to provide clear three-dimensional images by shading the surface of the fetus and uterine wall using the Lambert shading method. Normally a random granular pattern called 'speckle' appears on an ultrasonic echo image. Therefore, it is difficult to estimate the soft tissue boundary satisfactorily via a simple method such as threshold-value processing. Accordingly, the authors devised a method for classifying voxels into three categories using a neural network: soft tissue, amniotic fluid, and boundary. The shape of the grey-level histogram, computed over the peripheral region of the voxel, was the basis for the classification. Application to clinical data has shown a fine estimation of the boundary between the fetus or the uterine wall and the amniotic fluid, enabling the details of the three-dimensional structure to be observed.

  11. Characterization of optical traps using on-line estimation methods

    NASA Astrophysics Data System (ADS)

    Gorman, Jason J.; LeBrun, Thomas W.; Balijepalli, Arvind; Gagnon, Cedric; Lee, Dongjin

    2005-08-01

    System identification methods are presented for the estimation of the characteristic frequency of an optically trapped particle. These methods are more amenable to automated on-line measurements and are believed to be less prone to erroneous results compared to techniques based on thermal noise analysis. Optical tweezers have been shown to be an effective tool in measuring the complex interactions of micro-scale particles with piconewton resolution. However, the accuracy of the measurements depends heavily on knowledge of the trap stiffness and the viscous drag coefficient for the trapped particle. The most commonly referenced approach to measuring the trap stiffness is the power spectrum method, which provides the characteristic frequency for the trap based on the roll-off of the frequency response of a trapped particle excited by thermal fluctuations. However, the reliance on thermal fluctuations to excite the trapping dynamics results in a large degree of uncertainty in the estimated characteristic frequency. These issues are addressed by two parameter estimation methods which can be implemented on-line for fast trap characterization. The first is a frequency domain system identification approach which combines swept-sine frequency testing with a least-squares transfer function fitting algorithm. The second is a recursive least-squares parameter estimation scheme. The algorithms and results from simulation studies are discussed in detail.
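
    As a rough sketch of the recursive estimation idea (not the authors' implementation), the following Python example models the sampled position of an overdamped trapped bead as a first-order autoregressive process with coefficient a = exp(-2·pi·fc·dt) and tracks a with recursive least squares; the trap parameters and sampling rate are hypothetical.

      # Sketch: on-line characteristic-frequency estimation via scalar RLS on
      # x[k+1] = a x[k] + w[k], with a = exp(-2*pi*fc*dt). Values hypothetical.
      import numpy as np

      fs = 10_000.0                      # sampling rate (Hz)
      dt = 1.0 / fs
      fc_true = 500.0                    # "true" characteristic frequency (Hz)
      a_true = np.exp(-2 * np.pi * fc_true * dt)

      rng = np.random.default_rng(0)
      n = 20_000
      x = np.zeros(n)
      for k in range(n - 1):             # simulate the bead position
          x[k + 1] = a_true * x[k] + rng.standard_normal()

      # scalar recursive least squares with a forgetting factor
      a_hat, P, lam = 0.0, 1e3, 0.999
      for k in range(n - 1):
          phi = x[k]
          err = x[k + 1] - a_hat * phi
          gain = P * phi / (lam + phi * P * phi)
          a_hat += gain * err
          P = (P - gain * phi * P) / lam

      fc_hat = -np.log(a_hat) / (2 * np.pi * dt)
      print("estimated characteristic frequency (Hz):", fc_hat)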

  12. A study of methods to estimate debris flow velocity

    USGS Publications Warehouse

    Prochaska, A.B.; Santi, P.M.; Higgins, J.D.; Cannon, S.H.

    2008-01-01

    Debris flow velocities are commonly back-calculated from superelevation events which require subjective estimates of radii of curvature of bends in the debris flow channel or predicted using flow equations that require the selection of appropriate rheological models and material property inputs. This research investigated difficulties associated with the use of these conventional velocity estimation methods. Radii of curvature estimates were found to vary with the extent of the channel investigated and with the scale of the media used, and back-calculated velocities varied among different investigated locations along a channel. Distinct populations of Bingham properties were found to exist between those measured by laboratory tests and those back-calculated from field data; thus, laboratory-obtained values would not be representative of field-scale debris flow behavior. To avoid these difficulties with conventional methods, a new preliminary velocity estimation method is presented that statistically relates flow velocity to the channel slope and the flow depth. This method presents ranges of reasonable velocity predictions based on 30 previously measured velocities. © 2008 Springer-Verlag.
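
    As a point of reference, the following minimal Python sketch shows the conventional superelevation (forced-vortex) back-calculation discussed above, v = sqrt(g·Rc·dh/(k·W)); the channel geometry and the correction factor k (taken as 1) are hypothetical.

      # Sketch of the conventional superelevation back-calculation of velocity.
      # Channel values and the correction factor are hypothetical.
      import math

      g = 9.81      # m/s^2
      Rc = 40.0     # radius of curvature of the bend (m)
      dh = 1.2      # superelevation of the flow surface across the bend (m)
      W = 8.0       # flow width (m)
      k = 1.0       # correction factor (assumed)

      v = math.sqrt(g * Rc * dh / (k * W))
      print("back-calculated velocity (m/s):", round(v, 2))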

  13. The Stability of "g" across Different Methods of Estimation.

    ERIC Educational Resources Information Center

    Ree, Malcolm James; Earles, James A.

    1991-01-01

    Fourteen estimates were made of "g" (general cognitive ability) from the normative sample of a multiple-aptitude test battery with a weighted sample representing 25,409,193 men and women. The methods, which included principal components, unrotated principal factors, and hierarchical factor analysis, are equivalent for this test. (SLD)

  14. Stress intensity estimates by a computer assisted photoelastic method

    NASA Technical Reports Server (NTRS)

    Smith, C. W.

    1977-01-01

    Following an introductory history, the frozen stress photoelastic method is reviewed together with analytical and experimental aspects of cracks in photoelastic models. Analytical foundations are then presented upon which a computer assisted frozen stress photoelastic technique is based for extracting estimates of stress intensity factors from three-dimensional cracked body problems. The use of the method is demonstrated for two currently important three-dimensional crack problems.

  15. Nonparametric methods for drought severity estimation at ungauged sites

    NASA Astrophysics Data System (ADS)

    Sadri, S.; Burn, D. H.

    2012-12-01

    The objective in frequency analysis is, given extreme events such as drought severity or duration, to estimate the relationship between that event and the associated return periods at a catchment. Neural networks and other artificial intelligence approaches in function estimation and regression analysis are relatively new techniques in engineering, providing an attractive alternative to traditional statistical models. There are, however, few applications of neural networks and support vector machines in the area of severity quantile estimation for drought frequency analysis. In this paper, we compare three methods for this task: multiple linear regression, radial basis function neural networks, and least squares support vector regression (LS-SVR). The area selected for this study includes 32 catchments in the Canadian Prairies. From each catchment drought severities are extracted and fitted to a Pearson type III distribution, which act as observed values. For each method-duration pair, we use a jackknife algorithm to produce estimated values at each site. The results from these three approaches are compared and analyzed, and it is found that LS-SVR provides the best quantile estimates and extrapolating capacity.
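
    As an illustration of the LS-SVR step, the sketch below solves the standard least-squares SVM linear system with an RBF kernel on made-up catchment descriptors; the kernel width, regularization constant, and data are assumptions, and the jackknife loop is omitted.

      import numpy as np

      def rbf_kernel(A, B, sigma=1.0):
          d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
          return np.exp(-d2 / (2 * sigma ** 2))

      def lssvr_fit(X, y, gamma=10.0, sigma=1.0):
          """Least-squares SVR: solve the LS-SVM linear system for (b, alpha)."""
          n = len(y)
          K = rbf_kernel(X, X, sigma)
          A = np.zeros((n + 1, n + 1))
          A[0, 1:] = 1.0
          A[1:, 0] = 1.0
          A[1:, 1:] = K + np.eye(n) / gamma
          rhs = np.concatenate([[0.0], y])
          sol = np.linalg.solve(A, rhs)
          return sol[0], sol[1:]                   # bias b, coefficients alpha

      def lssvr_predict(X_train, b, alpha, X_new, sigma=1.0):
          return rbf_kernel(X_new, X_train, sigma) @ alpha + b

      # Toy example: predict a drought-severity quantile from two catchment
      # descriptors (values are illustrative, not the paper's Prairie data).
      X = np.array([[1.2, 0.3], [2.0, 0.5], [0.8, 0.2], [1.5, 0.7], [2.4, 0.9]])
      y = np.array([10.0, 18.0, 7.0, 15.0, 24.0])
      b, alpha = lssvr_fit(X, y)
      print(lssvr_predict(X, b, alpha, np.array([[1.6, 0.6]])))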

  16. Three Different Methods of Estimating LAI in a Small Watershed

    NASA Astrophysics Data System (ADS)

    Speckman, H. N.; Ewers, B. E.; Beverly, D.

    2015-12-01

    Leaf area index (LAI) is a critical input of models that improve predictive understanding of ecology, hydrology, and climate change. Multiple techniques exist to quantify LAI, most of which are labor intensive, and they often fail to converge on similar estimates. Recent large-scale bark beetle induced mortality greatly altered LAI, which is now dominated by younger and more metabolically active trees compared to the pre-beetle forest. Tree mortality increases error in optical LAI estimates due to the lack of differentiation between live and dead branches in dense canopy. Our study aims to quantify LAI using three different LAI methods, and then to compare the techniques to each other and to topographic drivers to develop an effective predictive model of LAI. This study focuses on quantifying LAI within a small (~120 ha) beetle-infested watershed in Wyoming's Snowy Range Mountains. The first technique estimated LAI using in-situ hemispherical canopy photographs that were then analyzed with Hemisfer software. The second LAI estimation technique used the Kaufmann (1982) allometrics with forest inventories conducted throughout the watershed, accounting for stand basal area, species composition, and the extent of bark beetle driven mortality. The final technique used airborne light detection and ranging (LIDAR) first returns, which were used to estimate canopy heights and crown area. LIDAR final returns provided topographical information and were then ground-truthed during forest inventories. Once the data were collected, a fractal analysis was conducted comparing the three methods. Species composition was driven by slope position and elevation. Ultimately the three different techniques provided very different estimations of LAI, but each had its advantages: estimates from hemisphere photos were well correlated with SWE and snow depth measurements, forest inventories provided insight into stand health and composition, and LIDAR were able to quickly and

  17. A New Method for Deriving Global Estimates of Maternal Mortality.

    PubMed

    Wilmoth, John R; Mizoguchi, Nobuko; Oestergaard, Mikkel Z; Say, Lale; Mathers, Colin D; Zureick-Brown, Sarah; Inoue, Mie; Chou, Doris

    2012-07-13

    Maternal mortality is widely regarded as a key indicator of population health and of social and economic development. Its levels and trends are monitored closely by the United Nations and others, inspired in part by the UN's Millennium Development Goals (MDGs), which call for a three-fourths reduction in the maternal mortality ratio between 1990 and 2015. Unfortunately, the empirical basis for such monitoring remains quite weak, requiring the use of statistical models to obtain estimates for most countries. In this paper we describe a new method for estimating global levels and trends in maternal mortality. For countries lacking adequate data for direct calculation of estimates, we employed a parametric model that separates maternal deaths related to HIV/AIDS from all others. For maternal deaths unrelated to HIV/AIDS, the model consists of a hierarchical linear regression with three predictors and variable intercepts for both countries and regions. The uncertainty of estimates was assessed by simulating the estimation process, accounting for variability both in the data and in other model inputs. The method was used to obtain the most recent set of UN estimates, published in September 2010. Here, we provide a concise description and explanation of the approach, including a new analysis of the components of variability reflected in the uncertainty intervals. Final estimates provide evidence of a more rapid decline in the global maternal mortality ratio than suggested by previous work, including another study published in April 2010. We compare findings from the two recent studies and discuss topics for further research to help resolve differences. PMID:24416714
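
    A rough sketch of the kind of hierarchical regression described (random country intercepts only); the predictor names (gdp, gfr, sab), the synthetic data, the omitted region level, and the omitted HIV/AIDS split mean this is an illustration, not the UN model.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      # Illustrative panel: a log maternal mortality ratio modelled with three
      # country-level predictors and a random intercept per country.  All
      # variable names and data are placeholders.
      rng = np.random.default_rng(1)
      n_countries, n_years = 20, 10
      df = pd.DataFrame({
          "country": np.repeat(np.arange(n_countries), n_years),
          "gdp": rng.normal(8.0, 1.0, n_countries * n_years),
          "gfr": rng.normal(4.0, 0.5, n_countries * n_years),
          "sab": rng.uniform(0.3, 1.0, n_countries * n_years),
      })
      country_effect = rng.normal(0.0, 0.3, n_countries)[df["country"]]
      df["log_mmr"] = (6.0 - 0.4 * df["gdp"] + 0.5 * df["gfr"] - 1.0 * df["sab"]
                       + country_effect + rng.normal(0.0, 0.1, len(df)))

      model = smf.mixedlm("log_mmr ~ gdp + gfr + sab", df, groups=df["country"])
      print(model.fit().params)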

  18. Comparison of Methods for Estimating Low Flow Characteristics of Streams

    USGS Publications Warehouse

    Tasker, Gary D.

    1987-01-01

    Four methods for estimating the 7-day, 10-year and 7-day, 20-year low flows for streams are compared by the bootstrap method. The bootstrap method is a Monte Carlo technique in which random samples are drawn from an unspecified sampling distribution defined from observed data. The nonparametric nature of the bootstrap makes it suitable for comparing methods based on a flow series for which the true distribution is unknown. Results show that the two methods based on hypothetical distribution (Log-Pearson III and Weibull) had lower mean square errors than did the G. E. P. Box-D. R. Cox transformation method or the Log-W. C. Boughton method which is based on a fit of plotting positions.
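
    A compact sketch of the bootstrap comparison logic on synthetic annual 7-day minima: resample the record, fit each candidate distribution, and compare the mean square error of the resulting 10-year low-flow quantiles. The synthetic record, sample sizes, and number of resamples are arbitrary choices, not the study's data.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      flows = stats.pearson3.rvs(skew=0.8, loc=2.0, scale=0.6, size=60,
                                 random_state=rng)
      flows = np.maximum(flows, 0.05)          # keep annual minima positive
      true_q = np.quantile(flows, 0.1)         # treat the full record as "truth"

      def q10_lp3(sample):                     # log-Pearson III estimate
          params = stats.pearson3.fit(np.log(sample))
          return np.exp(stats.pearson3.ppf(0.1, *params))

      def q10_weibull(sample):                 # Weibull estimate
          params = stats.weibull_min.fit(sample, floc=0)
          return stats.weibull_min.ppf(0.1, *params)

      errors = {"LP3": [], "Weibull": []}
      for _ in range(100):                     # bootstrap resamples of 30 "years"
          sample = rng.choice(flows, size=30, replace=True)
          errors["LP3"].append(q10_lp3(sample) - true_q)
          errors["Weibull"].append(q10_weibull(sample) - true_q)

      for name, e in errors.items():
          print(name, "MSE:", np.mean(np.square(e)))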

  19. Statistical estimation of mineral age by K-Ar method

    SciTech Connect

    Vistelius, A.B.; Drubetzkoy, E.R.; Faas, A.V. )

    1989-11-01

    Statistical estimation of age from ⁴⁰Ar/⁴⁰K ratios may be considered a result of convolution of uniform and normal distributions with different weights for different minerals. Data from the Gul'shad Massif (near Balkhash, Kazakhstan, USSR) indicate that ⁴⁰Ar/⁴⁰K ratios reflecting the intensity of geochemical processes can be resolved using convolutions. Loss of ⁴⁰Ar in biotites is shown, whereas hornblende retained the original content of ⁴⁰Ar throughout the geological history of the massif. Results demonstrate that different estimation methods must be used for different minerals and different rocks when radiometric ages are employed for dating.

  20. Estimation of Defect's Geometric Parameters with a Thermal Method

    NASA Astrophysics Data System (ADS)

    Protasov, A.; Sineglazov, V.

    2003-03-01

    The problem of estimating a flaw's parameters was addressed in two stages. In the first stage, the relationship between the temperature difference on a heated sample's surface and the geometrical parameters of the flaw was estimated. For this purpose we solved a direct heat conduction problem for various combinations of the geometrical sizes of the flaw. In the second stage, we solved an inverse heat conduction problem using the H-infinity method of identification. The results showed good convergence to the real parameters.

  1. A new method for estimating growth transition matrices.

    PubMed

    Hillary, R M

    2011-03-01

    The vast majority of population models work using age or stage rather than length, but there are many cases where animals cannot be aged sensibly or accurately. For these cases length-based models form the logical alternative, but there has been little work done to develop and compare different methods of estimating the growth transition matrices to be used in such models. This article demonstrates how a consistent Bayesian framework for estimating growth parameters and a novel method for constructing length transition matrices account for variation in growth in a clear and consistent manner and avoid potentially subjective choices required by more established methods. The inclusion of the resultant growth uncertainty in population assessment models and the potential impact on management decisions is also addressed.

  2. Noninvasive method of estimating human newborn regional cerebral blood flow

    SciTech Connect

    Younkin, D.P.; Reivich, M.; Jaggi, J.; Obrist, W.; Delivoria-Papadopoulos, M.

    1982-12-01

    A noninvasive method of estimating regional cerebral blood flow (rCBF) in premature and full-term babies has been developed. Based on a modification of the ¹³³Xe inhalation rCBF technique, this method uses eight extracranial NaI scintillation detectors and an i.v. bolus injection of ¹³³Xe (approximately 0.5 mCi/kg). Arterial xenon concentration was estimated with an external chest detector. Cerebral blood flow was measured in 15 healthy, neurologically normal premature infants. Using Obrist's method of two-compartment analysis, normal values were calculated for flow in both compartments, relative weight and fractional flow in the first compartment (gray matter), initial slope of gray matter blood flow, mean cerebral blood flow, and initial slope index of mean cerebral blood flow. The application of this technique to newborns, its relative advantages, and its potential uses are discussed.

  3. An aerial survey method to estimate sea otter abundance

    USGS Publications Warehouse

    Bodkin, J.L.; Udevitz, M.S.

    1999-01-01

    Sea otters (Enhydra lutris) occur in shallow coastal habitats and can be highly visible on the sea surface. They generally rest in groups and their detection depends on factors that include sea conditions, viewing platform, observer technique and skill, distance, habitat and group size. While visible on the surface, they are difficult to see while diving and may dive in response to an approaching survey platform. We developed and tested an aerial survey method that uses intensive searches within portions of strip transects to adjust for availability and sightability biases. Correction factors are estimated independently for each survey and observer. In tests of our method using shore-based observers, we estimated detection probabilities of 0.52-0.72 in standard strip-transects and 0.96 in intensive searches. We used the survey method in Prince William Sound, Alaska to estimate a sea otter population size of 9,092 (SE = 1422). The new method represents an improvement over various aspects of previous methods, but additional development and testing will be required prior to its broad application.
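
    The bias-correction arithmetic can be sketched as below; the counts, areas, and detection ratio are invented numbers, and the real survey estimates detection probabilities separately for each survey and observer.

      # Strip-transect correction sketch: counts in standard strips are divided
      # by a detection probability estimated from intensive searches flown on a
      # subset of the strips.  All numbers below are illustrative only.
      strip_counts = [14, 9, 22, 5, 17]        # otters counted per standard strip
      area_surveyed = 18.0                     # km^2 covered by those strips
      total_area = 120.0                       # km^2 of sea otter habitat

      # Detection probability: groups seen in the standard pass divided by
      # groups found when the same plots were searched intensively.
      groups_standard_pass = 31
      groups_intensive_search = 48
      p_detect = groups_standard_pass / groups_intensive_search

      corrected_count = sum(strip_counts) / p_detect
      population_estimate = corrected_count * (total_area / area_surveyed)
      print(round(p_detect, 2), round(population_estimate))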

  4. A new analytical method for groundwater recharge and discharge estimation

    NASA Astrophysics Data System (ADS)

    Liang, Xiuyu; Zhang, You-Kuan

    2012-07-01

    A new analytical method was proposed for groundwater recharge and discharge estimation in an unconfined aquifer. The method is based on an analytical solution to the Boussinesq equation linearized in terms of h², where h is the water table elevation, with a time-dependent source term. The solution derived was validated with numerical simulation and was shown to be a better approximation than an existing solution to the Boussinesq equation linearized in terms of h. By calibrating against the observed water levels in a monitoring well during a period of 100 days, we showed that the method proposed in this study can be used to estimate daily recharge (R) and evapotranspiration (ET) as well as the lateral drainage. It was shown that the total R was reasonably estimated with a water-table fluctuation (WTF) method if the water table measurements away from a fixed-head boundary were used, but the total ET was overestimated and the total net recharge was underestimated because of the lack of consideration of lateral drainage and aquifer storage in the WTF method.
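
    For reference, the basic water-table fluctuation (WTF) estimate that the study compares against can be sketched as follows; the specific yield and water levels are illustrative, and this simple form ignores the lateral drainage and storage effects discussed above.

      import numpy as np

      # WTF sketch: recharge over a rise is the specific yield times the
      # water-table rise.  Values are assumed for illustration.
      specific_yield = 0.12
      water_levels = np.array([3.20, 3.22, 3.31, 3.45, 3.41, 3.38])  # m, daily
      rises = np.clip(np.diff(water_levels), 0, None)   # keep only rises
      recharge_mm = specific_yield * rises.sum() * 1000.0
      print(round(recharge_mm, 1), "mm over the record")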

  5. Method to Estimate the Dissolved Air Content in Hydraulic Fluid

    NASA Technical Reports Server (NTRS)

    Hauser, Daniel M.

    2011-01-01

    In order to verify the air content in hydraulic fluid, an instrument was needed to measure the dissolved air content before the fluid was loaded into the system. The instrument also needed to measure the dissolved air content in situ and in real time during the de-aeration process. The current methods used to measure the dissolved air content require the fluid to be drawn from the hydraulic system, and additional offline laboratory processing time is involved. During laboratory processing, there is a potential for contamination to occur, especially when subsaturated fluid is to be analyzed. A new method measures the amount of dissolved air in hydraulic fluid through the use of a dissolved oxygen meter. The device measures the dissolved air content through an in situ, real-time process that requires no additional offline laboratory processing time. The method utilizes an instrument that measures the partial pressure of oxygen in the hydraulic fluid. By using a standardized calculation procedure that relates the oxygen partial pressure to the volume of dissolved air in solution, the dissolved air content is estimated. The technique employs luminescent quenching technology to determine the partial pressure of oxygen in the hydraulic fluid. An estimated Henry's law coefficient for oxygen and nitrogen in hydraulic fluid is calculated using a standard method to estimate the solubility of gases in lubricants. The amount of dissolved oxygen in the hydraulic fluid is estimated using the Henry's solubility coefficient and the measured partial pressure of oxygen in solution. The amount of dissolved nitrogen that is in solution is estimated by assuming that the ratio of dissolved nitrogen to dissolved oxygen is equal to the ratio of the gas solubility of nitrogen to oxygen at atmospheric pressure and temperature. The technique was performed at atmospheric pressure and room temperature. The technique could be theoretically carried out at higher pressures and elevated
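
    A small numerical sketch of the calculation chain described above; the Henry's-law coefficients and the measured partial pressure are placeholders, not values for any particular hydraulic fluid.

      # Dissolved-air sketch: measured O2 partial pressure plus a Henry's-law
      # solubility coefficient gives dissolved O2; dissolved N2 is scaled by the
      # solubility-weighted atmospheric N2/O2 ratio, as the abstract describes.
      # All coefficient values below are assumed for illustration.
      p_o2_measured = 0.18          # atm, from the luminescent-quenching probe
      henry_o2 = 0.30               # (volume gas)/(volume fluid)/atm, assumed
      henry_n2 = 0.15               # assumed
      p_o2_atm, p_n2_atm = 0.21, 0.79

      dissolved_o2 = henry_o2 * p_o2_measured
      dissolved_n2 = dissolved_o2 * (henry_n2 * p_n2_atm) / (henry_o2 * p_o2_atm)
      total_dissolved_air = dissolved_o2 + dissolved_n2
      print(round(total_dissolved_air, 3), "volume of gas per volume of fluid")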

  6. Dental age estimation using Willems method: A digital orthopantomographic study

    PubMed Central

    Mohammed, Rezwana Begum; Krishnamraju, P. V.; Prasanth, P. S.; Sanghvi, Praveen; Lata Reddy, M. Asha; Jyotsna, S.

    2014-01-01

    In recent years, age estimation has become increasingly important in living people for a variety of reasons, including identifying criminal and legal responsibility, and for many other social events such as a birth certificate, marriage, beginning a job, joining the army, and retirement. Objectives: The aim of this study was to assess the developmental stages of the left seven mandibular teeth for estimation of dental age (DA) in different age groups and to evaluate the possible correlation between DA and chronological age (CA) in a South Indian population using the Willems method. Materials and Methods: Digital orthopantomograms of 332 subjects (166 males, 166 females) who met the study criteria were obtained. Assessment of mandibular teeth (from the central incisor to the second molar in the left quadrant) development was undertaken and DA was assessed using the Willems method. Results and Discussion: The present study showed a significant correlation between DA and CA in both males (r = 0.71) and females (r = 0.88). The overall mean difference between the estimated DA and CA for males was 0.69 ± 2.14 years (P < 0.001) while for females, it was 0.08 ± 1.34 years (P > 0.05). The Willems method underestimated the mean age of males by 0.69 years and females by 0.08 years and showed that females mature earlier than males in the selected population. The mean difference between DA and CA according to the Willems method was 0.39 years and is statistically significant (P < 0.05). Conclusion: This study showed a significant relation between DA and CA. Thus, digital radiographic assessment of mandibular teeth development can be used to generate mean DA using the Willems method and also the estimated age range for an individual of unknown CA. PMID:25191076

  7. Methods of Mmax Estimation East of the Rocky Mountains

    USGS Publications Warehouse

    Wheeler, Russell L.

    2009-01-01

    Several methods have been used to estimate the magnitude of the largest possible earthquake (Mmax) in parts of the Central and Eastern United States and adjacent Canada (CEUSAC). Each method has pros and cons. The largest observed earthquake in a specified area provides an unarguable lower bound on Mmax in the area. Beyond that, all methods are undermined by the enigmatic nature of geologic controls on the propagation of large CEUSAC ruptures. Short historical-seismicity records decrease the defensibility of several methods that are based on characteristics of small areas in most of CEUSAC. Methods that use global tectonic analogs of CEUSAC encounter uncertainties in understanding what 'analog' means. Five of the methods produce results that are inconsistent with paleoseismic findings from CEUSAC seismic zones or individual active faults.

  8. Estimation of quality factors by energy ratio method

    NASA Astrophysics Data System (ADS)

    Wang, Zong-Jun; Cao, Si-Yuan; Zhang, Hao-Ran; Qu, Ying-Ming; Yuan, Dian; Yang, Jin-Hao; Shao, Guan-Ming

    2015-03-01

    The quality factor Q, which reflects the energy attenuation of seismic waves in subsurface media, is a diagnostic tool for hydrocarbon detection and reservoir characterization. In this paper, we propose a new Q extraction method based on the energy ratio before and after the wavelet attenuation, named the energy-ratio method (ERM). The proposed method uses multipoint signal data in the time domain to estimate the wavelet energy without invoking the source wavelet spectrum, which is necessary in conventional Q extraction methods, and is applicable to any source wavelet spectrum; however, it requires high-precision seismic data. Forward zero-offset VSP modeling suggests that the ERM can be used for reliable Q inversion after nonintrinsic attenuation (geometric dispersion, reflection, and transmission loss) compensation. The application to real zero-offset VSP data shows that the Q values extracted by the ERM and spectral ratio methods are identical, which proves the reliability of the new method.

  9. Point estimation of simultaneous methods for solving polynomial equations

    NASA Astrophysics Data System (ADS)

    Petkovic, Miodrag S.; Petkovic, Ljiljana D.; Rancic, Lidija Z.

    2007-08-01

    The construction of computationally verifiable initial conditions which provide both the guaranteed and fast convergence of the numerical root-finding algorithm is one of the most important problems in solving nonlinear equations. Smale's "point estimation theory" from 1981 was a great advance in this topic; it treats convergence conditions and the domain of convergence in solving an equation f(z)=0 using only the information of f at the initial point z0. The study of a general problem of the construction of initial conditions of practical interest providing guaranteed convergence is very difficult, even in the case of algebraic polynomials. In the light of Smale's point estimation theory, an efficient approach based on some results concerning localization of polynomial zeros and convergent sequences is applied in this paper to iterative methods for the simultaneous determination of simple zeros of polynomials. We state new, improved initial conditions which provide the guaranteed convergence of frequently used simultaneous methods for solving algebraic equations: Ehrlich-Aberth's method, Ehrlich-Aberth's method with Newton's correction, Borsch-Supan's method with Weierstrass' correction and Halley-like (or Wang-Zheng) method. The introduced concept offers not only a clear insight into the convergence analysis of sequences generated by the considered methods, but also explicitly gives their order of convergence. The stated initial conditions are of significant practical importance since they are computationally verifiable; they depend only on the coefficients of a given polynomial, its degree n and initial approximations to polynomial zeros.
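
    For context, the basic Ehrlich-Aberth simultaneous iteration discussed above can be sketched as follows; the crude circle of initial approximations stands in for the computationally verifiable initial conditions that the paper actually studies.

      import numpy as np

      def aberth_ehrlich(coeffs, tol=1e-12, max_iter=100):
          """Simultaneous refinement of all simple zeros of a polynomial
          (coefficients given highest degree first) by the Ehrlich-Aberth step."""
          coeffs = np.asarray(coeffs, dtype=complex)
          n = len(coeffs) - 1
          # Crude initial approximations on a circle bounding all roots; the
          # paper's point-estimation conditions are about doing this rigorously.
          radius = 1 + np.max(np.abs(coeffs[1:] / coeffs[0]))
          z = radius * np.exp(2j * np.pi * (np.arange(n) + 0.25) / n)
          dcoeffs = np.polyder(coeffs)
          for _ in range(max_iter):
              w = np.polyval(coeffs, z) / np.polyval(dcoeffs, z)   # Newton steps
              diff = z[:, None] - z[None, :]
              np.fill_diagonal(diff, 1.0)          # placeholder, excluded below
              recip = 1.0 / diff
              np.fill_diagonal(recip, 0.0)         # drop the i == j terms
              dz = w / (1.0 - w * recip.sum(axis=1))
              z = z - dz
              if np.max(np.abs(dz)) < tol:
                  break
          return z

      # Roots of z^3 - 6z^2 + 11z - 6 = (z - 1)(z - 2)(z - 3).
      print(np.sort_complex(aberth_ehrlich([1, -6, 11, -6])))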

  10. The Lyapunov dimension and its estimation via the Leonov method

    NASA Astrophysics Data System (ADS)

    Kuznetsov, N. V.

    2016-06-01

    Along with widely used numerical methods for estimating and computing the Lyapunov dimension there is an effective analytical approach, proposed by G.A. Leonov in 1991. The Leonov method is based on the direct Lyapunov method with special Lyapunov-like functions. The advantage of the method is that it allows one to estimate the Lyapunov dimension of invariant sets without localization of the set in the phase space and, in many cases, to obtain an exact Lyapunov dimension formula effectively. In this work the invariance of the Lyapunov dimension with respect to diffeomorphisms and its connection with the Leonov method are discussed. For discrete-time dynamical systems an analog of the Leonov method is suggested. In a simple but rigorous way, the connection is presented between the Leonov method and the key related works: Kaplan and Yorke (the concept of the Lyapunov dimension, 1979), Douady and Oesterlé (upper bounds of the Hausdorff dimension via the Lyapunov dimension of maps, 1980), Constantin, Eden, Foiaş, and Temam (upper bounds of the Hausdorff dimension via the Lyapunov exponents and Lyapunov dimension of dynamical systems, 1985-90), and the numerical calculation of the Lyapunov exponents and dimension.
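
    The Kaplan-Yorke construction referenced above is easy to state in code: a sketch assuming the Lyapunov exponents are already known (the Lorenz-63 values used for the check are approximate literature values).

      import numpy as np

      def kaplan_yorke_dimension(lyapunov_exponents):
          """Lyapunov (Kaplan-Yorke) dimension from an ordered Lyapunov spectrum."""
          lam = np.sort(np.asarray(lyapunov_exponents, dtype=float))[::-1]
          cumsums = np.cumsum(lam)
          j_candidates = np.where(cumsums >= 0)[0]
          if len(j_candidates) == 0:
              return 0.0
          j = j_candidates[-1]            # largest j with sum of first j+1 >= 0
          if j == len(lam) - 1:
              return float(len(lam))      # the cumulative sum never turns negative
          return (j + 1) + cumsums[j] / abs(lam[j + 1])

      # Classical Lorenz-63 exponents at the standard parameters (approximate).
      print(kaplan_yorke_dimension([0.906, 0.0, -14.572]))   # about 2.06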

  11. Estimating the extreme low-temperature event using nonparametric methods

    NASA Astrophysics Data System (ADS)

    D'Silva, Anisha

    This thesis presents a new method of estimating the one-in-N low temperature threshold using a non-parametric statistical method called kernel density estimation applied to daily average wind-adjusted temperatures. We apply our One-in-N Algorithm to local gas distribution companies (LDCs), as they have to forecast the daily natural gas needs of their consumers. In winter, demand for natural gas is high. Extreme low temperature events are not directly related to an LDC's gas demand forecasting, but knowledge of extreme low temperatures is important to ensure that an LDC has enough capacity to meet customer demands when extreme low temperatures are experienced. We present a detailed explanation of our One-in-N Algorithm and compare it to the methods using the generalized extreme value distribution, the normal distribution, and the variance-weighted composite distribution. We show that our One-in-N Algorithm estimates the one-in-N low temperature threshold more accurately than the methods using the generalized extreme value distribution, the normal distribution, and the variance-weighted composite distribution according to the root mean square error (RMSE) measure at a 5% level of significance. The One-in-N Algorithm is tested by counting the number of times the daily average wind-adjusted temperature is less than or equal to the one-in-N low temperature threshold.
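
    A minimal sketch of the kernel-density step: fit a KDE to daily wind-adjusted temperatures and invert its CDF at a target probability. The synthetic data and the mapping from N winters to a daily probability are assumptions, not the thesis's definitions.

      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(0)
      winter_temps = rng.normal(-5.0, 8.0, size=30 * 120)   # 30 winters, 120 days

      N = 20                                   # one-in-20-winter event
      target_prob = 1.0 / (N * 120)            # assumed per-day probability

      kde = gaussian_kde(winter_temps)
      grid = np.linspace(winter_temps.min() - 20, winter_temps.max(), 2000)
      cdf = np.cumsum(kde(grid))
      cdf /= cdf[-1]                           # numerical CDF of the KDE
      threshold = np.interp(target_prob, cdf, grid)
      print(round(threshold, 1), "degrees")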

  12. Vegetation index methods for estimating evapotranspiration by remote sensing

    USGS Publications Warehouse

    Glenn, Edward P.; Nagler, Pamela L.; Huete, Alfredo R.

    2010-01-01

    Evapotranspiration (ET) is the largest term after precipitation in terrestrial water budgets. Accurate estimates of ET are needed for numerous agricultural and natural resource management tasks and to project changes in hydrological cycles due to potential climate change. We explore recent methods that combine vegetation indices (VI) from satellites with ground measurements of actual ET (ETa) and meteorological data to project ETa over a wide range of biome types and scales of measurement, from local to global estimates. The majority of these use time-series imagery from the Moderate Resolution Imaging Spectroradiometer on the Terra satellite to project ET over seasons and years. The review explores the theoretical basis for the methods, the types of ancillary data needed, and their accuracy and limitations. Coefficients of determination between modeled ETa and measured ETa are in the range of 0.45–0.95, and root mean square errors are in the range of 10–30% of mean ETa values across biomes, similar to methods that use thermal infrared bands to estimate ETa and within the range of accuracy of the ground measurements by which they are calibrated or validated. The advent of frequent-return satellites such as Terra and planned replacement platforms, and the increasing number of moisture and carbon flux tower sites over the globe, have made these methods feasible. Examples of operational algorithms for ET in agricultural and natural ecosystems are presented. The goal of the review is to enable potential end-users from different disciplines to adapt these methods to new applications that require spatially-distributed ET estimates.

  13. Impedance-estimation methods, modeling methods, articles of manufacture, impedance-modeling devices, and estimated-impedance monitoring systems

    DOEpatents

    Richardson, John G.

    2009-11-17

    An impedance estimation method includes measuring three or more impedances of an object having a periphery using three or more probes coupled to the periphery. The three or more impedance measurements are made at a first frequency. Three or more additional impedance measurements of the object are made using the three or more probes. The three or more additional impedance measurements are made at a second frequency different from the first frequency. An impedance of the object at a point within the periphery is estimated based on the impedance measurements and the additional impedance measurements.

  14. Comparison of the performance of two methods for height estimation.

    PubMed

    Edelman, Gerda; Alberink, Ivo; Hoogeboom, Bart

    2010-03-01

    In the case study, two methods of performing body height measurements in images are compared based on projective geometry and 3D modeling of the crime scene. Accuracy and stability of height estimations are tested using reconstruction images of test persons of known height. Given unchanged camera settings, predictions of both methods are accurate. However, as the camera had been moved in the case, new vanishing points and camera matches had to be created for the reconstruction images. 3D modeling still yielded accurate and stable estimations. Projective geometry produced incorrect predictions for test persons and unstable intervals for questioned persons. The latter is probably caused by the straight lines in the field of view being hard to discern. With the quality of material presented, which is representative for our case practice, using vanishing points may thus yield unstable results. The results underline the importance of performing validation experiments in casework. PMID:20158593

  15. Estimating surface acoustic impedance with the inverse method.

    PubMed

    Piechowicz, Janusz

    2011-01-01

    Sound field parameters are predicted with numerical methods in sound control systems, in the acoustic design of buildings, and in sound field simulations. Those methods define the acoustic properties of surfaces, such as sound absorption coefficients or acoustic impedance, to determine boundary conditions. Several in situ measurement techniques were developed; one of them uses 2 microphones to measure direct and reflected sound over a planar test surface. Another approach is used in the inverse boundary elements method, in which estimating the acoustic impedance of a surface is expressed as an inverse boundary problem. The boundary values can be found from multipoint sound pressure measurements in the interior of a room. This method can be applied to arbitrarily-shaped surfaces. This investigation is part of a research programme on using inverse methods in industrial room acoustics. PMID:21939599

  16. Adaptive error covariances estimation methods for ensemble Kalman filters

    SciTech Connect

    Zhen, Yicun; Harlim, John

    2015-08-01

    This paper presents a computationally fast algorithm for estimating both the system and observation noise covariances of nonlinear dynamics, which can be used in an ensemble Kalman filtering framework. The new method is a modification of Belanger's recursive method, to avoid the expensive computational cost of inverting error covariance matrices of products of innovation processes of different lags when the number of observations becomes large. When we use only products of innovation processes up to one lag, the computational cost is indeed comparable to the recently proposed method of Berry and Sauer. However, our method is more flexible since it allows for using information from products of innovation processes of more than one lag. Extensive numerical comparisons between the proposed method and both the original Belanger's and Berry–Sauer's schemes are shown in various examples, ranging from low-dimensional linear and nonlinear systems of SDEs to the 40-dimensional stochastically forced Lorenz-96 model. Our numerical results suggest that the proposed scheme is as accurate as the original Belanger's scheme on low-dimensional problems and has a wider range of more accurate estimates compared to Berry–Sauer's method on the L-96 example.

  17. Uncertainty in streamflow records - a comparison of multiple estimation methods

    NASA Astrophysics Data System (ADS)

    Kiang, Julie; Gazoorian, Chris; Mason, Robert; Le Coz, Jerome; Renard, Benjamin; Mansanarez, Valentin; McMillan, Hilary; Westerberg, Ida; Petersen-Øverleir, Asgeir; Reitan, Trond; Sikorska, Anna; Siebert, Jan; Coxon, Gemma; Freer, Jim; Belleville, Arnaud; Hauet, Alexandre

    2016-04-01

    Stage-discharge rating curves are used to relate streamflow discharge to continuously measured river stage readings in order to create a continuous record of streamflow discharge. The stage-discharge relationship is estimated and refined using discrete streamflow gaugings over time, during which both the discharge and stage are measured. The resulting rating curve has uncertainty due to multiple factors including the curve-fitting process, assumptions on the form of the model used, the changeable nature of natural channels, and the approaches used to extrapolate the rating equation beyond available observations. A number of different methods have been proposed for estimating rating curve uncertainty, differing in mathematical rigour, in the assumptions made about the component errors, and in the information required to implement the method at any given site. This study compares several methods that range from simple LOWESS fits to more complicated Bayesian methods that consider hydraulic principles directly. We evaluate these different methods when applied to a single gauging station using the same information (channel characteristics, hydrographs, and streamflow gaugings). We quantify the resultant spread of the stage-discharge curves and compare the level of uncertainty attributed to the streamflow record by the different methods.
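
    At the simple end of the spectrum compared here, one can fit a power-law rating curve and bootstrap its residuals; the sketch below does exactly that on synthetic gaugings with an assumed cease-to-flow stage, and is not one of the study's specific methods.

      import numpy as np

      rng = np.random.default_rng(2)
      h0_assumed = 0.20                        # cease-to-flow stage, assumed known
      stage = np.linspace(0.4, 2.5, 25)
      discharge = 12.0 * (stage - h0_assumed) ** 1.6 * rng.lognormal(0, 0.08, 25)

      # Fit log Q = log a + b * log(h - h0) by ordinary least squares.
      X = np.column_stack([np.ones_like(stage), np.log(stage - h0_assumed)])
      y = np.log(discharge)
      coef, *_ = np.linalg.lstsq(X, y, rcond=None)
      resid = y - X @ coef

      curves = []
      for _ in range(500):                     # residual bootstrap
          y_b = X @ coef + rng.choice(resid, size=len(resid), replace=True)
          c_b, *_ = np.linalg.lstsq(X, y_b, rcond=None)
          curves.append(np.exp(c_b[0] + c_b[1] * np.log(2.0 - h0_assumed)))

      print("Q at stage 2.0 m, 5th-95th percentile:",
            np.percentile(curves, [5, 95]).round(1))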

  18. Estimation of race admixture--a new method.

    PubMed

    Chakraborty, R

    1975-05-01

    The contribution of a parental population in the gene pool of a hybrid population which arose by hybridization with one or more other populations is estimated here at the population level from the probability of gene identity. The dynamics of accumulation of such admixture is studied incorporating the fluctuations due to finite size of the hybrid population. The method is illustrated with data on admixture in Cherokee Indians. PMID:1146991

  19. A Sensitivity Analysis of a Thin Film Conductivity Estimation Method

    SciTech Connect

    McMasters, Robert L; Dinwiddie, Ralph Barton

    2010-01-01

    An analysis method was developed for determining the thermal conductivity of a thin film on a substrate of known thermal properties using the flash diffusivity method. In order to determine the thermal conductivity of the film using this method, the volumetric heat capacity of the film must be known, as determined in a separate experiment. Additionally, the thermal properties of the substrate must be known, including conductivity and volumetric heat capacity. The ideal conditions for the experiment are a low conductivity film adhered to a higher conductivity substrate. As the film becomes thinner with respect to the substrate or, as the conductivity of the film approaches that of the substrate, the estimation of thermal conductivity of the film becomes more difficult. The present research examines the effect of inaccuracies in the known parameters on the estimation of the parameter of interest, the thermal conductivity of the film. As such, perturbations are introduced into the other parameters in the experiment, which are assumed to be known, to find the effect on the estimated thermal conductivity of the film. A baseline case is established with the following parameters: substrate thermal conductivity, 1.0 W/m-K; substrate volumetric heat capacity, 10⁶ J/m³-K; substrate thickness, 0.8 mm; film thickness, 0.2 mm; film volumetric heat capacity, 10⁶ J/m³-K; film thermal conductivity, 0.01 W/m-K; convection coefficient, 20 W/m²-K; and magnitude of heat absorbed during the flash, 1000 J/m². Each of these parameters, with the exception of film thermal conductivity, the parameter of interest, is varied from its baseline value, in succession, and placed into a synthetic experimental data file. Each of these data files is individually analyzed by the program to determine the effect on the estimated film conductivity, thus quantifying the vulnerability of the method to measurement errors.

  20. A robust method for estimating landfill methane emissions.

    PubMed

    Figueroa, Veronica K; Mackie, Kevin R; Guarriello, Nick; Cooper, C David

    2009-08-01

    Because municipal solid waste (MSW) landfills emit significant amounts of methane, a potent greenhouse gas, there is considerable interest in quantifying surficial methane emissions from landfills. The authors present a method to estimate methane emissions, using ambient air volatile organic compound (VOC) measurements taken above the surface of the landfill. Using a hand-held monitor, hundreds of VOC concentrations can be taken easily in a day, and simple meteorological data can be recorded at the same time. The standard Gaussian dispersion equations are inverted and solved by matrix methods to determine the methane emission rates at hundreds of point locations throughout a MSW landfill. These point emission rates are then summed to give the total landfill emission rate. This method is tested on a central Florida MSW landfill using data from 3 different days, taken 6 and 12 months apart. A sensitivity study is conducted, and the emission estimates are most sensitive to the input meteorological parameters of wind speed and stability class. Because of the many measurements that are used, the results are robust. When the emission estimates were used as inputs into a dispersion model, a reasonable scatterplot fit of the individual concentration measurement data resulted. PMID:19728486
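
    A sketch of the inversion idea: each above-background concentration is a linear combination of unknown point-source emission rates through a Gaussian plume kernel, so the system C = A q can be solved (non-negative least squares is used here). The dispersion-coefficient curves, geometry, and noise level are placeholders rather than the paper's site data or stability-class tables.

      import numpy as np
      from scipy.optimize import nnls

      def sigma_y(x):
          return 0.08 * x ** 0.9               # placeholder stability-class curve

      def sigma_z(x):
          return 0.06 * x ** 0.85              # placeholder

      def plume_kernel(dx, dy, u):
          """Ground-level concentration per unit emission rate for a
          ground-level point source (receptor at downwind distance dx)."""
          if dx <= 0:                          # receptor not downwind of source
              return 0.0
          sy, sz = sigma_y(dx), sigma_z(dx)
          return np.exp(-dy ** 2 / (2 * sy ** 2)) / (np.pi * u * sy * sz)

      rng = np.random.default_rng(3)
      sources = rng.uniform(0, 200, size=(40, 2))    # candidate emission cells (m)
      receptors = rng.uniform(0, 220, size=(60, 2))  # measurement locations (m)
      u_wind = 3.0                                   # m/s, wind along +x

      A = np.array([[plume_kernel(r[0] - s[0], r[1] - s[1], u_wind)
                     for s in sources] for r in receptors])
      q_true = rng.uniform(0, 5e-3, size=len(sources))   # kg/s per cell, synthetic
      c_meas = A @ q_true * (1 + 0.05 * rng.standard_normal(len(receptors)))

      q_est, _ = nnls(A, c_meas)
      print("total emission estimate (kg/s):", round(q_est.sum(), 4),
            "truth:", round(q_true.sum(), 4))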

  1. Improving stochastic estimates with inference methods: calculating matrix diagonals.

    PubMed

    Selig, Marco; Oppermann, Niels; Ensslin, Torsten A

    2012-02-01

    Estimating the diagonal entries of a matrix that is not directly accessible but only available as a linear operator in the form of a computer routine is a common necessity in many computational applications, especially in image reconstruction and statistical inference. Here, methods of statistical inference are used to improve the accuracy or reduce the computational cost of matrix probing methods to estimate matrix diagonals. In particular, the generalized Wiener filter methodology, as developed within information field theory, is shown to significantly improve estimates based on only a few sampling probes, in cases in which some form of continuity of the solution can be assumed. The strength, length scale, and precise functional form of the exploited autocorrelation function of the matrix diagonal are determined from the probes themselves. The developed algorithm is successfully applied to mock and real world problems. These performance tests show that, in situations where a matrix diagonal has to be calculated from only a small number of computationally expensive probes, a speedup by a factor of 2 to 10 is possible with the proposed method.
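
    The baseline probing step that the paper improves upon can be sketched as below (a Hutchinson-style stochastic diagonal estimate); the information-field-theory smoothing itself is not reproduced here.

      import numpy as np

      def probe_diagonal(matvec, n, n_probes=30, rng=None):
          """Stochastic diagonal estimate of an implicit matrix:
          diag(A) ~ E[z * (A z)] with Rademacher probe vectors z."""
          rng = rng or np.random.default_rng(0)
          num = np.zeros(n)
          den = np.zeros(n)
          for _ in range(n_probes):
              z = rng.choice([-1.0, 1.0], size=n)
              num += z * matvec(z)
              den += z * z
          return num / den

      # Example: the matrix is only available as a linear operator.
      rng = np.random.default_rng(1)
      A = rng.standard_normal((200, 200))
      A = A @ A.T                              # symmetric positive semi-definite
      estimate = probe_diagonal(lambda v: A @ v, 200, n_probes=50)
      print("mean relative error:",
            round(np.mean(np.abs(estimate - np.diag(A)) / np.diag(A)), 3))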

  2. Geometric estimation method for x-ray digital intraoral tomosynthesis

    NASA Astrophysics Data System (ADS)

    Li, Liang; Yang, Yao; Chen, Zhiqiang

    2016-06-01

    It is essential for accurate image reconstruction to obtain a set of parameters that describes the x-ray scanning geometry. A geometric estimation method is presented for x-ray digital intraoral tomosynthesis (DIT) in which the detector remains stationary while the x-ray source rotates. The main idea is to estimate the three-dimensional (3-D) coordinates of each shot position using at least two small opaque balls adhering to the detector surface as the positioning markers. From the radiographs containing these balls, the position of each x-ray focal spot can be calculated independently relative to the detector center no matter what kind of scanning trajectory is used. A 3-D phantom which roughly simulates DIT was designed to evaluate the performance of this method both quantitatively and qualitatively in the sense of mean square error and structural similarity. Results are also presented for real data acquired with a DIT experimental system. These results prove the validity of this geometric estimation method.

  3. The segmented-beat modulation method for ECG estimation.

    PubMed

    Agostinelli, A; Giuliani, C; Fioretti, S; Di Nardo, F; Burattini, L

    2015-08-01

    Electrocardiographic (ECG) tracings corrupted by noise with frequency components in the ECG frequency band may be rendered useless unless appropriately processed. The estimation of the clean ECG from such recordings, however, is quite challenging, since linear filtering is inappropriate. In the common situations in which the R peaks are detectable, template-based techniques have been proposed to estimate the ECG by template-beat concatenation. However, such techniques have the major limitation of not being able to reproduce physiological heart-rate and morphological variability. Thus, the aim of the present study was to propose the segmented-beat modulation method (SBMM) as a technique that overcomes this limitation. The SBMM is an improved template-based technique that provides good-quality estimations of ECG tracings characterized by some heart-rate and morphological variability. It segments the template ECG beat into QRS and TUP segments and then, before concatenation, it applies a modulation/demodulation process to the TUP segment so that the estimated-beat duration and morphology adjust to those of the corresponding original beat. To test its performance, the SBMM was applied to 19 ECG tracings from normal subjects. There were no errors in estimating the R peak location, and the errors in the QRS and TUP segments were low (≤65 μV and ≤30 μV, respectively), with the former being significantly higher than the latter. Eventually, TUP errors tended to increase with increasing heart-rate variability (correlation coefficient: 0.59, P < 10⁻²). In conclusion, the new SBMM proved to be a useful tool for providing good-quality ECG estimations of tracings characterized by heart-rate and morphological variability.

  4. SCoPE: an efficient method of Cosmological Parameter Estimation

    SciTech Connect

    Das, Santanu; Souradeep, Tarun E-mail: tarun@iucaa.ernet.in

    2014-07-01

    The Markov Chain Monte Carlo (MCMC) sampler is widely used for cosmological parameter estimation from CMB and other data. However, due to the intrinsic serial nature of the MCMC sampler, convergence is often very slow. Here we present a fast and independently written Monte Carlo method for cosmological parameter estimation named the Slick Cosmological Parameter Estimator (SCoPE), which employs delayed rejection to increase the acceptance rate of a chain, and pre-fetching that helps an individual chain to run on parallel CPUs. An inter-chain covariance update is also incorporated to prevent clustering of the chains, allowing faster and better mixing of the chains. We use an adaptive method for covariance calculation to calculate and update the covariance automatically as the chains progress. Our analysis shows that the acceptance probability of each step in SCoPE is more than 95% and the convergence of the chains is faster. Using SCoPE, we carry out cosmological parameter estimations with different cosmological models using WMAP-9 and Planck results. One of the current research interests in cosmology is quantifying the nature of dark energy. We analyze the cosmological parameters from two illustrative commonly used parameterisations of dark energy models. We also assess how well the primordial helium fraction in the universe can be constrained by the present CMB data from WMAP-9 and Planck. The results from our MCMC analysis on the one hand help us to understand the workability of SCoPE better, and on the other hand provide a completely independent estimation of cosmological parameters from WMAP-9 and Planck data.
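
    For orientation, a plain random-walk Metropolis sampler is sketched below on a toy posterior; it deliberately omits SCoPE's delayed rejection, pre-fetching, and inter-chain covariance updates.

      import numpy as np

      def metropolis(log_post, x0, step, n_steps, rng=None):
          """Plain random-walk Metropolis sampler (no delayed rejection,
          pre-fetching, or inter-chain covariance updates)."""
          rng = rng or np.random.default_rng(0)
          chain = [np.asarray(x0, dtype=float)]
          lp = log_post(chain[0])
          accepted = 0
          for _ in range(n_steps):
              proposal = chain[-1] + step * rng.standard_normal(len(x0))
              lp_new = log_post(proposal)
              if np.log(rng.uniform()) < lp_new - lp:
                  chain.append(proposal)
                  lp = lp_new
                  accepted += 1
              else:
                  chain.append(chain[-1])
          return np.array(chain), accepted / n_steps

      # Toy two-parameter "posterior": a correlated Gaussian.
      cov_inv = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.0]]))
      log_post = lambda x: -0.5 * x @ cov_inv @ x
      samples, acc_rate = metropolis(log_post, [0.0, 0.0], 0.5, 20000)
      print("acceptance rate:", round(acc_rate, 2),
            "posterior mean:", samples[2000:].mean(axis=0).round(2))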

  5. Methods for estimating low-flow statistics for Massachusetts streams

    USGS Publications Warehouse

    Ries, Kernell G.; Friesz, Paul J.

    2000-01-01

    Methods and computer software are described in this report for determining flow duration, low-flow frequency statistics, and August median flows. These low-flow statistics can be estimated for unregulated streams in Massachusetts using different methods depending on whether the location of interest is at a streamgaging station, a low-flow partial-record station, or an ungaged site where no data are available. Low-flow statistics for streamgaging stations can be estimated using standard U.S. Geological Survey methods described in the report. The MOVE.1 mathematical method and a graphical correlation method can be used to estimate low-flow statistics for low-flow partial-record stations. The MOVE.1 method is recommended when the relation between measured flows at a partial-record station and daily mean flows at a nearby, hydrologically similar streamgaging station is linear, and the graphical method is recommended when the relation is curved. Equations are presented for computing the variance and equivalent years of record for estimates of low-flow statistics for low-flow partial-record stations when either a single or multiple index stations are used to determine the estimates. The drainage-area ratio method or regression equations can be used to estimate low-flow statistics for ungaged sites where no data are available. The drainage-area ratio method is generally as accurate as or more accurate than regression estimates when the drainage-area ratio for an ungaged site is between 0.3 and 1.5 times the drainage area of the index data-collection site. Regression equations were developed to estimate the natural, long-term 99-, 98-, 95-, 90-, 85-, 80-, 75-, 70-, 60-, and 50-percent duration flows; the 7-day, 2-year and the 7-day, 10-year low flows; and the August median flow for ungaged sites in Massachusetts. Streamflow statistics and basin characteristics for 87 to 133 streamgaging stations and low-flow partial-record stations were used to develop the equations. The
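
    Two of the transfer techniques named above are short enough to sketch directly: MOVE.1 for a low-flow partial-record station and the drainage-area ratio for an ungauged site. The flows and areas below are invented, not Massachusetts data.

      import numpy as np

      def move1(partial_record, index_concurrent, index_stat):
          """MOVE.1 (Maintenance Of Variance Extension, type 1) on log flows:
          transfer a low-flow statistic from an index gauge to a partial-record
          station using concurrent measurements."""
          y = np.log(partial_record)           # measured flows at the partial site
          x = np.log(index_concurrent)         # same-day flows at the index gauge
          slope = np.std(y, ddof=1) / np.std(x, ddof=1)
          return np.exp(y.mean() + slope * (np.log(index_stat) - x.mean()))

      def drainage_area_ratio(index_stat, area_ungauged, area_index):
          """Drainage-area ratio transfer for an ungauged site."""
          return index_stat * (area_ungauged / area_index)

      # Illustrative numbers (m^3/s and km^2).
      partial = np.array([0.21, 0.35, 0.18, 0.50, 0.27])
      index_conc = np.array([0.90, 1.40, 0.80, 2.10, 1.10])
      q7_10_index = 0.6                  # 7-day, 10-year low flow at the index gauge
      print(move1(partial, index_conc, q7_10_index))
      print(drainage_area_ratio(q7_10_index, area_ungauged=45.0, area_index=60.0))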

  6. The composite method: An improved method for stream-water solute load estimation

    USGS Publications Warehouse

    Aulenbach, Brent T.; Hooper, R.P.

    2006-01-01

    The composite method is an alternative method for estimating stream-water solute loads, combining aspects of two commonly used methods: the regression-model method (which is used by the composite method to predict variations in concentrations between collected samples) and a period-weighted approach (which is used by the composite method to apply the residual concentrations from the regression model over time). The extensive dataset collected at the outlet of the Panola Mountain Research Watershed (PMRW) near Atlanta, Georgia, USA, was used in data analyses for illustrative purposes. A bootstrap (subsampling) experiment (using the composite method and the PMRW dataset along with various fixed-interval and large storm sampling schemes) obtained load estimates for the 8-year study period with a magnitude of the bias of less than 1%, even for estimates that included the fewest number of samples. Precisions were always <2% on a study period and annual basis, and <2% precisions were obtained for quarterly and monthly time intervals for estimates that had better sampling. The bias and precision of composite-method load estimates varies depending on the variability in the regression-model residuals, how residuals systematically deviated from the regression model over time, sampling design, and the time interval of the load estimate. The regression-model method did not estimate loads precisely during shorter time intervals, from annually to monthly, because the model could not explain short-term patterns in the observed concentrations. Load estimates using the period-weighted approach typically are biased as a result of sampling distribution and are accurate only with extensive sampling. The formulation of the composite method facilitates exploration of patterns (trends) contained in the unmodelled portion of the load. Published in 2006 by John Wiley & Sons, Ltd.
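
    A minimal sketch of the composite idea on synthetic data: a regression model predicts concentration from discharge at every time step, and the residuals at sample times are interpolated through time and added back so the estimate honours the observations. The regression form and sampling schedule are assumptions, not the PMRW analysis.

      import numpy as np

      rng = np.random.default_rng(4)
      t = np.arange(0, 365.0)                          # daily time steps
      Q = 2.0 + np.sin(2 * np.pi * t / 365) + 0.3 * rng.standard_normal(len(t))
      Q = np.maximum(Q, 0.2)                           # discharge
      true_logC = 0.5 - 0.3 * np.log(Q) + 0.2 * np.sin(2 * np.pi * t / 90)
      C_true = np.exp(true_logC)                       # "observed" concentration

      sample_idx = np.arange(0, 365, 14)               # biweekly samples
      X = np.column_stack([np.ones_like(sample_idx, dtype=float),
                           np.log(Q[sample_idx])])
      coef, *_ = np.linalg.lstsq(X, np.log(C_true[sample_idx]), rcond=None)

      logC_model = coef[0] + coef[1] * np.log(Q)       # regression prediction
      resid = np.log(C_true[sample_idx]) - logC_model[sample_idx]
      logC_composite = logC_model + np.interp(t, t[sample_idx], resid)

      seconds_per_day = 86400.0
      load = np.sum(np.exp(logC_composite) * Q * seconds_per_day)
      print(f"load estimate (concentration x volume units): {load:.0f}")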

  7. Evaluation of estimation methods for organic carbon normalized sorption coefficients

    USGS Publications Warehouse

    Baker, James R.; Mihelcic, James R.; Luehrs, Dean C.; Hickey, James P.

    1997-01-01

    A critically evaluated set of 94 soil water partition coefficients normalized to soil organic carbon content (Koc) is presented for 11 classes of organic chemicals. This data set is used to develop and evaluate Koc estimation methods using three different descriptors. The three types of descriptors used in predicting Koc were octanol/water partition coefficient (Kow), molecular connectivity (mXt) and linear solvation energy relationships (LSERs). The best results were obtained estimating Koc from Kow, though a slight improvement in the correlation coefficient was obtained by using a two-parameter regression with Kow and the third order difference term from mXt. Molecular connectivity correlations seemed to be best suited for use with specific chemical classes. The LSER provided a better fit than mXt but not as good as the correlation with Kow. The correlation to predict Koc from Kow was developed for 72 chemicals; log Koc = 0.903 log Kow + 0.094. This correlation accounts for 91% of the variability in the data for chemicals with log Kow ranging from 1.7 to 7.0. The expression to determine the 95% confidence interval on the estimated Koc is provided along with an example for two chemicals of different hydrophobicity showing the confidence interval of the retardation factor determined from the estimated Koc. The data showed that Koc is not likely to be applicable for chemicals with log Kow < 1.7. Finally, the Koc correlation developed using Kow as a descriptor was compared with three nonclass-specific correlations and two 'commonly used' class-specific correlations to determine which method(s) are most suitable.
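
    The reported correlation is simple enough to wrap in a helper, together with an illustrative retardation-factor calculation of the kind mentioned above; the soil properties in the example are assumed values.

      def estimate_log_koc(log_kow):
          """Koc estimate from the correlation reported in the abstract
          (log Koc = 0.903 log Kow + 0.094, for log Kow roughly 1.7-7.0)."""
          if not 1.7 <= log_kow <= 7.0:
              raise ValueError("correlation not recommended outside log Kow 1.7-7.0")
          return 0.903 * log_kow + 0.094

      def retardation_factor(log_kow, f_oc, bulk_density, porosity):
          """Retardation factor R = 1 + rho_b*Kd/n with Kd = Koc*f_oc
          (illustrative use of the estimated Koc)."""
          koc = 10 ** estimate_log_koc(log_kow)     # L/kg
          kd = koc * f_oc
          return 1 + bulk_density * kd / porosity

      # Example: a moderately hydrophobic chemical in a sandy aquifer (assumed).
      print(round(estimate_log_koc(3.4), 2))
      print(round(retardation_factor(3.4, f_oc=0.002, bulk_density=1.7,
                                     porosity=0.35), 1))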

  8. Method to estimate center of rigidity using vibration recordings

    USGS Publications Warehouse

    Safak, Erdal; Celebi, Mehmet

    1990-01-01

    A method to estimate the center of rigidity of buildings by using vibration recordings is presented. The method is based on the criterion that the coherence of translational motions with the rotational motion is minimum at the center of rigidity. Since the coherence is a function of frequency, a gross but frequency-independent measure of the coherency is defined as the integral of the coherence function over the frequency. The center of rigidity is determined by minimizing this integral. The formulation is given for two-dimensional motions. Two examples are presented for the method; a rectangular building with ambient-vibration recordings, and a triangular building with earthquake-vibration recordings. Although the examples given are for buildings, the method can be applied to any structure with two-dimensional motions.
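
    A sketch of the criterion on synthetic records: the translation inferred at a candidate point is least coherent with the rotational motion at the center of rigidity. A frequency-averaged coherence stands in for the integral over frequency, and the sensor layout and signals are invented.

      import numpy as np
      from scipy.signal import coherence

      rng = np.random.default_rng(5)
      fs, n = 100.0, 20000
      t = np.arange(n) / fs
      torsion = np.sin(2 * np.pi * 1.8 * t) + 0.3 * rng.standard_normal(n)
      translation_cr = np.sin(2 * np.pi * 1.2 * t) + 0.3 * rng.standard_normal(n)

      x_a, x_b, x_cr_true = 0.0, 30.0, 18.0    # sensor and true CR positions (m)
      u_a = translation_cr + torsion * (x_a - x_cr_true)
      u_b = translation_cr + torsion * (x_b - x_cr_true)
      theta = (u_b - u_a) / (x_b - x_a)        # rotation from the two records

      def mean_coherence(candidate):
          u_c = u_a + theta * (candidate - x_a)        # translation at candidate
          f, coh = coherence(u_c, theta, fs=fs, nperseg=1024)
          return np.mean(coh)                          # frequency-averaged measure

      candidates = np.linspace(0, 30, 61)
      scores = [mean_coherence(c) for c in candidates]
      print("estimated center of rigidity:",
            candidates[int(np.argmin(scores))], "m")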

  9. Estimates of tropical bromoform emissions using an inversion method

    NASA Astrophysics Data System (ADS)

    Ashfold, M. J.; Harris, N. R. P.; Manning, A. J.; Robinson, A. D.; Warwick, N. J.; Pyle, J. A.

    2013-08-01

    Bromine plays an important role in ozone chemistry in both the troposphere and stratosphere. When measured by mass, bromoform (CHBr3) is thought to be the largest organic source of bromine to the atmosphere. While seaweed and phytoplankton are known to be dominant sources, the size and the geographical distribution of CHBr3 emissions remain uncertain. Particularly little is known about emissions from the Maritime Continent, which have usually been assumed to be large, and which appear to be especially likely to reach the stratosphere. In this study we aim to use the first multi-annual set of CHBr3 measurements from this region, and an inversion method, to reduce this uncertainty. We find that local measurements of a short-lived gas like CHBr3 can only be used to constrain emissions from a relatively small, sub-regional domain. We then obtain detailed estimates of both the distribution and magnitude of CHBr3 emissions within this area. Our estimates appear to be relatively insensitive to the assumptions inherent in the inversion process. We extrapolate this information to produce estimated emissions for the entire tropics (defined as 20° S-20° N) of 225 Gg CHBr3 y⁻¹. This estimate is consistent with other recent studies, and suggests that CHBr3 emissions in the coastline-rich Maritime Continent may not be stronger than emissions in other parts of the tropics.

  10. Reliability of field methods for estimating body fat.

    PubMed

    Loenneke, Jeremy P; Barnes, Jeremy T; Wilson, Jacob M; Lowery, Ryan P; Isaacs, Melissa N; Pujol, Thomas J

    2013-09-01

    When health professionals measure the fitness levels of clients, body composition is usually estimated. In practice, the reliability of the measurement may be more important than the actual validity, as reliability determines how much change is needed to be considered meaningful. Therefore, the purpose of this study was to determine the reliability of two bioelectrical impedance analysis (BIA) devices (in athlete and non-athlete mode) and compare that to 3-site skinfold (SKF) readings. Twenty-one college students attended the laboratory on two occasions and had their measurements taken in the following order: body mass, height, SKF, Tanita body fat-350 (BF-350) and Omron HBF-306C. There were no significant pairwise differences between Visit 1 and Visit 2 for any of the estimates (P>0.05). The Pearson product correlations ranged from r = 0.933 for HBF-350 in the athlete mode (A) to r = 0.994 for SKF. The ICCs ranged from 0.93 for HBF-350(A) to 0.992 for SKF, and the minimal differences (MDs) ranged from 1.8% for SKF to 5.1% for BF-350(A). The current study found that SKF and HBF-306C(A) were the most reliable (<2%) methods of estimating BF%, with the other methods (BF-350, BF-350(A), HBF-306C) producing minimal differences greater than 2%. In conclusion, the SKF method presented the best reliability because of its low minimal difference, suggesting this method may be the best field method to track changes over time if you have an experienced tester. However, if technical error is a concern, the practitioner may use the HBF-306C(A) because it had a minimal difference value comparable to SKF. PMID:23701358

  11. A Generalized, Likelihood-Free Method for Posterior Estimation

    PubMed Central

    Turner, Brandon M.; Sederberg, Per B.

    2014-01-01

    Recent advancements in Bayesian modeling have allowed for likelihood-free posterior estimation. Such estimation techniques are crucial to the understanding of simulation-based models, whose likelihood functions may be difficult or even impossible to derive. However, current approaches are limited by their dependence on sufficient statistics and/or tolerance thresholds. In this article, we provide a new approach that requires no summary statistics, error terms, or thresholds, and is generalizable to all models in psychology that can be simulated. We use our algorithm to fit a variety of cognitive models with known likelihood functions to ensure the accuracy of our approach. We then apply our method to two real-world examples to illustrate the types of complex problems our method solves. In the first example, we fit an error-correcting criterion model of signal detection, whose criterion dynamically adjusts after every trial. We then fit two models of choice response time to experimental data: the Linear Ballistic Accumulator model, which has a known likelihood, and the Leaky Competing Accumulator model whose likelihood is intractable. The estimated posterior distributions of the two models allow for direct parameter interpretation and model comparison by means of conventional Bayesian statistics – a feat that was not previously possible. PMID:24258272
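
    A stripped-down illustration of likelihood-free posterior estimation in the same spirit: the likelihood of the observed data under a proposed parameter value is approximated by a kernel density estimate built from simulations at that value and plugged into a Metropolis sampler. The toy shifted log-normal "response time" model and all tuning constants are assumptions, not the authors' algorithm.

      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(6)

      def simulate(theta, n, rng):
          """Toy 'response time' model: a shifted log-normal, standing in for a
          cognitive model whose likelihood we pretend is unavailable."""
          shift, sigma = theta
          return shift + rng.lognormal(mean=0.0, sigma=sigma, size=n)

      observed = simulate((0.3, 0.5), 200, rng)

      def approx_log_like(theta, n_sim=1000):
          """Kernel-density approximation of the likelihood at theta."""
          sims = simulate(theta, n_sim, rng)
          kde = gaussian_kde(sims)
          return np.sum(np.log(np.maximum(kde(observed), 1e-300)))

      theta = np.array([0.5, 0.4])                     # starting values
      ll = approx_log_like(theta)
      samples = []
      for _ in range(500):                             # short Metropolis run
          prop = theta + rng.normal(0.0, 0.03, size=2)
          if min(prop) > 0:
              ll_prop = approx_log_like(prop)
              if np.log(rng.uniform()) < ll_prop - ll:
                  theta, ll = prop, ll_prop
          samples.append(theta.copy())
      print("approximate posterior means:", np.mean(samples, axis=0).round(2))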

  12. Methods for cost estimation in software project management

    NASA Astrophysics Data System (ADS)

    Briciu, C. V.; Filip, I.; Indries, I. I.

    2016-02-01

    The speed at which processes in the software development field have changed makes forecasting the overall cost of a software project very difficult. Many researchers have considered this task unachievable, but others argue it can be solved using well-known mathematical methods (e.g. multiple linear regression) and newer techniques such as genetic programming and neural networks. The paper presents a solution for building cost estimation models for software project management using genetic algorithms, starting from the PROMISE datasets related to the COCOMO 81 model. In the first part of the paper, a summary of the major achievements in the research area of finding a model for estimating overall project costs is presented, together with a description of the existing software development process models. In the last part, a basic mathematical model based on genetic programming is proposed, including a description of the chosen fitness function and chromosome representation. The perspective of the described model is linked with the current reality of software development, taking the software product life cycle and the current challenges and innovations in the software development area as a basis. Based on the author's experience and the analysis of the existing models and product life cycles, it was concluded that estimation models should be adapted to new technologies and emerging systems, and that they depend largely on the chosen software development method.
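
    For context, the PROMISE data used here are organised around COCOMO 81, whose basic effort relation is easy to sketch; the coefficients below are quoted from memory and should be checked against Boehm (1981) before use.

      # Basic COCOMO 81 effort relation Effort = a * KLOC^b (person-months);
      # coefficients quoted from memory, so treat them as illustrative.
      COCOMO81_BASIC = {
          "organic":       (2.4, 1.05),
          "semi-detached": (3.0, 1.12),
          "embedded":      (3.6, 1.20),
      }

      def basic_effort(kloc, mode="organic"):
          a, b = COCOMO81_BASIC[mode]
          return a * kloc ** b

      print(round(basic_effort(32, "semi-detached"), 1), "person-months")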

  13. A method to estimate groundwater depletion from confining layers

    USGS Publications Warehouse

    Konikow, L.F.; Neuzil, C.E.

    2007-01-01

    Although depletion of storage in low-permeability confining layers is the source of much of the groundwater produced from many confined aquifer systems, it is all too frequently overlooked or ignored. This makes effective management of groundwater resources difficult by masking how much water has been derived from storage and, in some cases, the total amount of water that has been extracted from an aquifer system. Analyzing confining layer storage is viewed as troublesome because of the additional computational burden and because the hydraulic properties of confining layers are poorly known. In this paper we propose a simplified method for computing estimates of confining layer depletion, as well as procedures for approximating confining layer hydraulic conductivity (K) and specific storage (Ss) using geologic information. The latter makes the technique useful in developing countries and other settings where minimal data are available or when scoping calculations are needed. As such, our approach may be helpful for estimating the global transfer of groundwater to surface water. A test of the method on a synthetic system suggests that the computational errors will generally be small. Larger errors will probably result from inaccuracy in confining layer property estimates, but these may be no greater than errors in more sophisticated analyses. The technique is demonstrated by application to two aquifer systems: the Dakota artesian aquifer system in South Dakota and the coastal plain aquifer system in Virginia. In both cases, depletion from confining layers was substantially larger than depletion from the aquifers.

  14. Causes and methods to estimate cryptic sources of fishing mortality.

    PubMed

    Gilman, E; Suuronen, P; Hall, M; Kennelly, S

    2013-10-01

    Cryptic, not readily detectable, components of fishing mortality are not routinely accounted for in fisheries management because of a lack of adequate data, and for some components, a lack of accurate estimation methods. Cryptic fishing mortalities can cause adverse ecological effects, are a source of wastage, reduce the sustainability of fishery resources and, when unaccounted for, can cause errors in stock assessments and population models. Sources of cryptic fishing mortality are (1) pre-catch losses, where catch dies from the fishing operation but is not brought onboard when the gear is retrieved, (2) ghost-fishing mortality by fishing gear that was abandoned, lost or discarded, (3) post-release mortality of catch that is retrieved and then released alive but later dies as a result of stress and injury sustained from the fishing interaction, (4) collateral mortalities indirectly caused by various ecological effects of fishing and (5) losses due to synergistic effects of multiple interacting sources of stress and injury from fishing operations, or from cumulative stress and injury caused by repeated sub-lethal interactions with fishing operations. To fill a gap in international guidance on best practices, causes and methods for estimating each component of cryptic fishing mortality are described, and considerations for their effective application are identified. Research priorities to fill gaps in understanding the causes and estimating cryptic mortality are highlighted. PMID:24090548

  15. Causes and methods to estimate cryptic sources of fishing mortality.

    PubMed

    Gilman, E; Suuronen, P; Hall, M; Kennelly, S

    2013-10-01

    Cryptic, not readily detectable, components of fishing mortality are not routinely accounted for in fisheries management because of a lack of adequate data, and for some components, a lack of accurate estimation methods. Cryptic fishing mortalities can cause adverse ecological effects, are a source of wastage, reduce the sustainability of fishery resources and, when unaccounted for, can cause errors in stock assessments and population models. Sources of cryptic fishing mortality are (1) pre-catch losses, where catch dies from the fishing operation but is not brought onboard when the gear is retrieved, (2) ghost-fishing mortality by fishing gear that was abandoned, lost or discarded, (3) post-release mortality of catch that is retrieved and then released alive but later dies as a result of stress and injury sustained from the fishing interaction, (4) collateral mortalities indirectly caused by various ecological effects of fishing and (5) losses due to synergistic effects of multiple interacting sources of stress and injury from fishing operations, or from cumulative stress and injury caused by repeated sub-lethal interactions with fishing operations. To fill a gap in international guidance on best practices, causes and methods for estimating each component of cryptic fishing mortality are described, and considerations for their effective application are identified. Research priorities to fill gaps in understanding the causes and estimating cryptic mortality are highlighted.

  16. Molecular-clock methods for estimating evolutionary rates and timescales.

    PubMed

    Ho, Simon Y W; Duchêne, Sebastián

    2014-12-01

    The molecular clock presents a means of estimating evolutionary rates and timescales using genetic data. These estimates can lead to important insights into evolutionary processes and mechanisms, as well as providing a framework for further biological analyses. To deal with rate variation among genes and among lineages, a diverse range of molecular-clock methods have been developed. These methods have been implemented in various software packages and differ in their statistical properties, ability to handle different models of rate variation, capacity to incorporate various forms of calibrating information and tractability for analysing large data sets. Choosing a suitable molecular-clock model can be a challenging exercise, but a number of model-selection techniques are available. In this review, we describe the different forms of evolutionary rate heterogeneity and explain how they can be accommodated in molecular-clock analyses. We provide an outline of the various clock methods and models that are available, including the strict clock, local clocks, discrete clocks and relaxed clocks. Techniques for calibration and clock-model selection are also described, along with methods for handling multilocus data sets. We conclude our review with some comments about the future of molecular clocks.

  17. Estimation of regionalized compositions: A comparison of three methods

    USGS Publications Warehouse

    Pawlowsky, V.; Olea, R.A.; Davis, J.C.

    1995-01-01

    A regionalized composition is a random vector function whose components are positive and sum to a constant at every point of the sampling region. Consequently, the components of a regionalized composition are necessarily spatially correlated. This spatial dependence, induced by the constant-sum constraint, is a spurious spatial correlation and may lead to misinterpretations of statistical analyses. Furthermore, the cross-covariance matrices of the regionalized composition are singular, as is the coefficient matrix of the cokriging system of equations. Three methods of performing estimation or prediction of a regionalized composition at unsampled points are discussed: (1) the direct approach of estimating each variable separately; (2) the basis method, which is applicable only when a random function is available that can be regarded as the size of the regionalized composition under study; (3) the logratio approach, using the additive-log-ratio transformation proposed by J. Aitchison, which allows statistical analysis of compositional data. We present a brief theoretical review of these three methods and compare them using compositional data from the Lyons West Oil Field in Kansas (USA). It is shown that, although there are no important numerical differences, the direct approach leads to invalid results, whereas the basis method and the additive-log-ratio approach are comparable. © 1995 International Association for Mathematical Geology.
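
    The additive log-ratio (alr) transformation mentioned above maps a D-part composition into D-1 unconstrained real variables that can be analysed with standard methods and then back-transformed. Below is a minimal sketch using the last component as the divisor and a toy two-row composition; it illustrates generic alr usage, not the paper's full cokriging workflow.

        import numpy as np

        def alr(comp):
            """Additive log-ratio transform: log of the first D-1 parts over the last part."""
            comp = np.asarray(comp, dtype=float)
            return np.log(comp[:, :-1] / comp[:, -1:])

        def alr_inverse(y):
            """Map alr coordinates back to compositions whose parts sum to 1."""
            expy = np.exp(y)
            total = 1.0 + expy.sum(axis=1, keepdims=True)
            return np.hstack([expy, np.ones((y.shape[0], 1))]) / total

        x = np.array([[0.6, 0.3, 0.1],      # toy 3-part compositions
                      [0.2, 0.5, 0.3]])
        z = alr(x)                          # unconstrained values, safe for standard statistics
        print(np.allclose(alr_inverse(z), x))   # True: the transform is invertible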

  18. Estimating Bacterial Diversity for Ecological Studies: Methods, Metrics, and Assumptions

    PubMed Central

    Birtel, Julia; Walser, Jean-Claude; Pichon, Samuel; Bürgmann, Helmut; Matthews, Blake

    2015-01-01

    Methods to estimate microbial diversity have developed rapidly in an effort to understand the distribution and diversity of microorganisms in natural environments. For bacterial communities, the 16S rRNA gene is the phylogenetic marker gene of choice, but most studies select only a specific region of the 16S rRNA to estimate bacterial diversity. Whereas biases derived from DNA extraction, primer choice and PCR amplification are well documented, here we address how the choice of variable region can influence a wide range of standard ecological metrics, such as species richness, phylogenetic diversity, β-diversity and rank-abundance distributions. We have used Illumina paired-end sequencing to estimate the bacterial diversity of 20 natural lakes across Switzerland derived from three trimmed variable 16S rRNA regions (V3, V4, V5). Species richness, phylogenetic diversity, community composition, β-diversity, and rank-abundance distributions differed significantly between 16S rRNA regions. Overall, patterns of diversity quantified by the V3 and V5 regions were more similar to one another than those assessed by the V4 region. Similar results were obtained when analyzing the datasets with different sequence similarity thresholds used during sequence clustering and when the same analysis was used on a reference dataset of sequences from the Greengenes database. In addition, we measured species richness from the same lake samples using ARISA Fingerprinting, but did not find a strong relationship between species richness estimated by Illumina and ARISA. We conclude that the selection of 16S rRNA region significantly influences the estimation of bacterial diversity and species distributions and that caution is warranted when comparing data from different variable regions as well as when using different sequencing techniques. PMID:25915756

  19. Estimating Return on Investment in Translational Research: Methods and Protocols

    PubMed Central

    Trochim, William; Dilts, David M.; Kirk, Rosalind

    2014-01-01

    Assessing the value of clinical and translational research funding on accelerating the translation of scientific knowledge is a fundamental issue faced by the National Institutes of Health and its Clinical and Translational Awards (CTSA). To address this issue, the authors propose a model for measuring the return on investment (ROI) of one key CTSA program, the clinical research unit (CRU). By estimating the economic and social inputs and outputs of this program, this model produces multiple levels of ROI: investigator, program and institutional estimates. A methodology, or evaluation protocol, is proposed to assess the value of this CTSA function, with specific objectives, methods, descriptions of the data to be collected, and how data are to be filtered, analyzed, and evaluated. This paper provides an approach CTSAs could use to assess the economic and social returns on NIH and institutional investments in these critical activities. PMID:23925706

  20. Estimating return on investment in translational research: methods and protocols.

    PubMed

    Grazier, Kyle L; Trochim, William M; Dilts, David M; Kirk, Rosalind

    2013-12-01

    Assessing the value of clinical and translational research funding on accelerating the translation of scientific knowledge is a fundamental issue faced by the National Institutes of Health (NIH) and its Clinical and Translational Awards (CTSAs). To address this issue, the authors propose a model for measuring the return on investment (ROI) of one key CTSA program, the clinical research unit (CRU). By estimating the economic and social inputs and outputs of this program, this model produces multiple levels of ROI: investigator, program, and institutional estimates. A methodology, or evaluation protocol, is proposed to assess the value of this CTSA function, with specific objectives, methods, descriptions of the data to be collected, and how data are to be filtered, analyzed, and evaluated. This article provides an approach CTSAs could use to assess the economic and social returns on NIH and institutional investments in these critical activities.

  1. Bayesian Threshold Estimation

    ERIC Educational Resources Information Center

    Gustafson, S. C.; Costello, C. S.; Like, E. C.; Pierce, S. J.; Shenoy, K. N.

    2009-01-01

    Bayesian estimation of a threshold time (hereafter simply threshold) for the receipt of impulse signals is accomplished given the following: 1) data, consisting of the number of impulses received in a time interval from zero to one and the time of the largest time impulse; 2) a model, consisting of a uniform probability density of impulse time…

  2. Spectrophotometric estimation of tamsulosin hydrochloride by acid-dye method

    PubMed Central

    Shrivastava, Alankar; Saxena, Prachi; Gupta, Vipin B.

    2011-01-01

    A new spectrophotometric method for the estimation of tamsulosin hydrochloride in pharmaceutical dosage forms has been developed and validated. The method is based on the reaction between the drug and bromophenol blue; the resulting complex was measured at 421 nm. The slope, intercept and correlation coefficient were found to be 0.054, -0.020 and 0.999, respectively. The method was validated in terms of specificity, linearity, range, precision, accuracy, LOD and LOQ, and it can be used to determine the drug in both tablet and capsule formulations. The reaction was optimized using four parameters: concentration of the dye, pH of the buffer, volume of the buffer and shaking time. Maximum stability of the chromophore was achieved at pH 2 with 2 ml of buffer; the shaking time was 2 min and the dye was used as 2 ml of a 0.05% w/v solution. The stoichiometry of the reaction was established using the mole-ratio and Job's continuous-variation methods. In the presence of the buffer, the benzenoid form of the dye (blue) ionizes into the quinonoid form (purple), which reacts with the protonated form of the drug in a 1:1 ratio to form an ion-pair complex (yellow). PMID:23781431
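
    Because the abstract reports a linear calibration (slope 0.054, intercept -0.020 for absorbance at 421 nm against concentration), recovering a concentration from a measured absorbance is a one-line inversion. A minimal sketch follows; the concentration units (assumed here to be µg/ml) and the example absorbance are assumptions, not values from the paper.

        # invert the reported linear calibration A = slope * C + intercept
        slope, intercept = 0.054, -0.020

        def concentration_from_absorbance(absorbance):
            """Concentration (assumed µg/ml) from absorbance measured at 421 nm."""
            return (absorbance - intercept) / slope

        print(round(concentration_from_absorbance(0.520), 2))   # -> 10.0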

  3. A Method to Estimate the Probability that Any Individual Cloud-to-Ground Lightning Stroke was Within Any Radius of Any Point

    NASA Technical Reports Server (NTRS)

    Huddleston, Lisa; Roeder, WIlliam P.; Merceret, Francis J.

    2011-01-01

    A new technique has been developed to estimate the probability that a nearby cloud-to-ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force station. Future applications could include forensic meteorology.
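
    The paper integrates the bivariate Gaussian of the location error ellipse analytically (adapting a spacecraft debris-collision calculation); the sketch below is only a brute-force Monte Carlo cross-check of the same quantity, the probability that the stroke fell within a given radius of an arbitrary point. The ellipse parameters, point of interest and radius are made-up values.

        import numpy as np

        rng = np.random.default_rng(0)

        def prob_within_radius(mu, cov, point, radius, n=200_000):
            """P(stroke within `radius` of `point`) for a stroke location that is
            bivariate normal with mean `mu` and covariance `cov` (the error ellipse)."""
            samples = rng.multivariate_normal(mu, cov, size=n)
            dist = np.linalg.norm(samples - np.asarray(point), axis=1)
            return float(np.mean(dist <= radius))

        # hypothetical error ellipse (km^2) centred at the reported stroke location,
        # evaluated for a facility 1 km east and 0.3 km north of that location
        cov = np.array([[0.16, 0.05],
                       [0.05, 0.04]])
        print(prob_within_radius(mu=[0.0, 0.0], cov=cov, point=[1.0, 0.3], radius=0.5))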

  4. Estimating Fuel Cycle Externalities: Analytical Methods and Issues, Report 2

    SciTech Connect

    Barnthouse, L.W.; Cada, G.F.; Cheng, M.-D.; Easterly, C.E.; Kroodsma, R.L.; Lee, R.; Shriner, D.S.; Tolbert, V.R.; Turner, R.S.

    1994-07-01

    that also have not been fully addressed. This document contains two types of papers that seek to fill part of this void. Some of the papers describe analytical methods that can be applied to one of the five steps of the damage function approach. The other papers discuss some of the complex issues that arise in trying to estimate externalities. This report, the second in a series of eight reports, is part of a joint study by the U.S. Department of Energy (DOE) and the Commission of the European Communities (EC)* on the externalities of fuel cycles. Most of the papers in this report were originally written as working papers during the initial phases of this study. The papers provide descriptions of the (non-radiological) atmospheric dispersion modeling that the study uses; reviews much of the relevant literature on ecological and health effects, and on the economic valuation of those impacts; contains several papers on some of the more complex and contentious issues in estimating externalities; and describes a method for depicting the quality of scientific information that a study uses. The analytical methods and issues that this report discusses generally pertain to more than one of the fuel cycles, though not necessarily to all of them. The report is divided into six parts, each one focusing on a different subject area.

  5. Streamflow-Characteristic Estimation Methods for Unregulated Streams of Tennessee

    USGS Publications Warehouse

    Law, George S.; Tasker, Gary D.; Ladd, David E.

    2009-01-01

    Streamflow-characteristic estimation methods for unregulated rivers and streams of Tennessee were developed by the U.S. Geological Survey in cooperation with the Tennessee Department of Environment and Conservation. Streamflow estimates are provided for 1,224 stream sites. Streamflow characteristics include the 7-consecutive-day, 10-year recurrence-interval low flow, the 30-consecutive-day, 5-year recurrence-interval low flow, the mean annual and mean summer flows, and the 99.5-, 99-, 98-, 95-, 90-, 80-, 70-, 60-, 50-, 40-, 30-, 20-, and 10-percent flow durations. Estimation methods include regional regression (RRE) equations and the region-of-influence (ROI) method. Both methods use zero-flow probability screening to estimate zero-flow quantiles. A low flow and flow duration (LFFD) computer program (TDECv301) performs zero-flow screening and calculation of nonzero-streamflow characteristics using the RRE equations and ROI method and provides quality measures including the 90-percent prediction interval and equivalent years of record. The U.S. Geological Survey StreamStats geographic information system automates the calculation of basin characteristics and streamflow characteristics. In addition, basin characteristics can be manually input to the stand-alone version of the computer program (TDECv301) to calculate streamflow characteristics in Tennessee. The RRE equations were computed using multivariable regression analysis. The two regions used for this study, the western part of the State (West) and the central and eastern part of the State (Central+East), are separated by the Tennessee River as it flows south to north from Hardin County to Stewart County. The West region uses data from 124 of the 1,224 streamflow sites, and the Central+East region uses data from 893 of the 1,224 streamflow sites. The study area also includes parts of the adjacent States of Georgia, North Carolina, Virginia, Alabama, Kentucky, and Mississippi. Total drainage area, a geology

  6. Using optimal estimation method for upper atmospheric Lidar temperature retrieval

    NASA Astrophysics Data System (ADS)

    Zou, Rongshi; Pan, Weilin; Qiao, Shuai

    2016-07-01

    Conventional ground-based Rayleigh lidar temperature retrieval uses an integration technique whose limitations necessitate abandoning temperatures retrieved at the greatest heights, because a seeding value must be assumed to initialize the integration at the highest altitude. Here we suggest a method that can incorporate information from various sources to improve the quality of the retrieval. This approach inverts the lidar equation via the optimal estimation method (OEM), based on Bayesian theory together with a Gaussian statistical model. It presents several advantages over the conventional approach: 1) the possibility of incorporating information from multiple heterogeneous sources; 2) diagnostic information about retrieval quality; and 3) the ability to determine the vertical resolution and the maximum height up to which the retrieval is largely independent of the a priori profile. This paper compares one-hour temperature profiles retrieved using the conventional and optimal estimation methods at Golmud, Qinghai province, China. The OEM results agree better with the SABER profile than the conventional retrieval does, although in some regions they are much lower than the SABER profile, a result that differs from previous studies and requires further investigation. The success of applying the OEM to temperature retrieval supports its use as a retrieval framework for large synthetic observation systems that include various active remote sensing instruments, incorporating all available measurement information into the model and analysing groups of measurements simultaneously to improve the results.
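
    For context, the linear-Gaussian form of the optimal estimation method is the standard maximum a posteriori update; the sketch below shows that update and the averaging kernel used as a diagnostic, on a toy three-level "profile". The forward model, covariances and numbers are invented, and the paper's actual forward model is the (nonlinear) lidar equation.

        import numpy as np

        def oem_linear(y, K, x_a, S_a, S_e):
            """Linear optimal estimation: x_hat = x_a + S_hat K^T S_e^-1 (y - K x_a),
            with S_hat = (K^T S_e^-1 K + S_a^-1)^-1 and averaging kernel A = S_hat K^T S_e^-1 K."""
            Se_inv = np.linalg.inv(S_e)
            S_hat = np.linalg.inv(K.T @ Se_inv @ K + np.linalg.inv(S_a))
            x_hat = x_a + S_hat @ K.T @ Se_inv @ (y - K @ x_a)
            A = S_hat @ K.T @ Se_inv @ K
            return x_hat, S_hat, A

        K = np.array([[1.0, 0.3, 0.0],                 # toy smoothing forward model
                      [0.2, 1.0, 0.2],
                      [0.0, 0.3, 1.0]])
        x_true = np.array([220.0, 240.0, 260.0])       # "true" temperatures (K)
        x_a = np.full(3, 230.0)                        # a priori profile
        S_a, S_e = 25.0**2 * np.eye(3), 2.0**2 * np.eye(3)
        y = K @ x_true + np.random.default_rng(1).normal(0, 2.0, 3)
        x_hat, S_hat, A = oem_linear(y, K, x_a, S_a, S_e)
        print(x_hat.round(1), np.trace(A).round(2))    # retrieval and degrees of freedom for signal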

  7. A Quantitative Method for Estimating Probable Public Costs of Hurricanes.

    PubMed

    BOSWELL; DEYLE; SMITH; BAKER

    1999-04-01

    A method is presented for estimating probable public costs resulting from damage caused by hurricanes, measured as local government expenditures approved for reimbursement under the Stafford Act Section 406 Public Assistance Program. The method employs a multivariate model developed through multiple regression analysis of an array of independent variables that measure meteorological, socioeconomic, and physical conditions related to the landfall of hurricanes within a local government jurisdiction. From the regression analysis we chose a log-log (base 10) model that explains 74% of the variance in the expenditure data using population and wind speed as predictors. We illustrate application of the method for a local jurisdiction, Lee County, Florida, USA. The results show that potential public costs range from $4.7 million for a category 1 hurricane with winds of 137 kilometers per hour (85 miles per hour) to $130 million for a category 5 hurricane with winds of 265 kilometers per hour (165 miles per hour). Based on these figures, we estimate expected annual public costs of $2.3 million. These cost estimates: (1) provide useful guidance for anticipating the magnitude of the federal, state, and local expenditures that would be required for the array of possible hurricanes that could affect that jurisdiction; (2) allow policy makers to assess the implications of alternative federal and state policies for providing public assistance to jurisdictions that experience hurricane damage; and (3) provide information needed to develop a contingency fund or other financial mechanism for assuring that the community has sufficient funds available to meet its obligations. KEY WORDS: Hurricane; Public costs; Local government; Disaster recovery; Disaster response; Florida; Stafford Act
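
    The abstract specifies the model form, a log-log (base 10) regression of public cost on population and maximum wind speed, but not the fitted coefficients, so the coefficients in the sketch below are hypothetical placeholders chosen only to illustrate how such a model is applied.

        import numpy as np

        # hypothetical coefficients; the paper's fitted values are not given in the abstract
        b0, b1, b2 = -6.0, 1.0, 3.0

        def predicted_public_cost(population, wind_kmh):
            """log10(cost) = b0 + b1*log10(population) + b2*log10(wind speed)."""
            return 10 ** (b0 + b1 * np.log10(population) + b2 * np.log10(wind_kmh))

        # e.g. a jurisdiction of 500,000 people and a 137 km/h (category 1) storm
        print(f"${predicted_public_cost(5e5, 137):,.0f}")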

  8. Analytical method to estimate resin cement diffusion into dentin

    NASA Astrophysics Data System (ADS)

    de Oliveira Ferraz, Larissa Cristina; Ubaldini, Adriana Lemos Mori; de Oliveira, Bruna Medeiros Bertol; Neto, Antonio Medina; Sato, Fracielle; Baesso, Mauro Luciano; Pascotto, Renata Corrêa

    2016-05-01

    This study analyzed the diffusion of two resin luting agents (resin cements) into dentin, with the aim of presenting an analytical method for estimating the thickness of the diffusion zone. Class V cavities were prepared in the buccal and lingual surfaces of molars (n=9). Indirect composite inlays were luted into the cavities with either a self-adhesive or a self-etch resin cement. The teeth were sectioned bucco-lingually and the cement-dentin interface was analyzed by using micro-Raman spectroscopy (MRS) and scanning electron microscopy. Evolution of peak intensities of the Raman bands, collected from the functional groups corresponding to the resin monomer (C–O–C, 1113 cm-1) present in the cements, and the mineral content (P–O, 961 cm-1) in dentin were sigmoid shaped functions. A Boltzmann function (BF) was then fitted to the peaks encountered at 1113 cm-1 to estimate the resin cement diffusion into dentin. The BF identified a resin cement-dentin diffusion zone of 1.8±0.4 μm for the self-adhesive cement and 2.5±0.3 μm for the self-etch cement. This analysis allowed the authors to estimate the diffusion of the resin cements into the dentin. Fitting the MRS data to the BF contributed to and is relevant for future studies of the adhesive interface.
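
    A minimal sketch of the curve-fitting step: a Boltzmann sigmoid is fitted to a synthetic depth profile of the 1113 cm-1 Raman band intensity, and a width is read off the fitted transition. The synthetic data, the initial guesses and the 10%-90% definition of the diffusion-zone thickness are assumptions; the paper may define the zone differently.

        import numpy as np
        from scipy.optimize import curve_fit

        def boltzmann(x, a1, a2, x0, dx):
            """Sigmoid used to model the monomer band intensity across the interface."""
            return a2 + (a1 - a2) / (1.0 + np.exp((x - x0) / dx))

        depth = np.linspace(0, 8, 60)                     # distance into dentin (µm)
        rng = np.random.default_rng(2)
        intensity = boltzmann(depth, 1.0, 0.05, 3.0, 0.5) + rng.normal(0, 0.02, depth.size)

        popt, _ = curve_fit(boltzmann, depth, intensity, p0=[1.0, 0.0, 4.0, 1.0])
        a1, a2, x0, dx = popt
        width_10_90 = abs(dx) * 2 * np.log(9)             # 10%-90% transition width
        print(f"estimated diffusion zone ≈ {width_10_90:.2f} µm")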

  9. Analytical method to estimate resin cement diffusion into dentin

    NASA Astrophysics Data System (ADS)

    de Oliveira Ferraz, Larissa Cristina; Ubaldini, Adriana Lemos Mori; de Oliveira, Bruna Medeiros Bertol; Neto, Antonio Medina; Sato, Fracielle; Baesso, Mauro Luciano; Pascotto, Renata Corrêa

    2016-05-01

    This study analyzed the diffusion of two resin luting agents (resin cements) into dentin, with the aim of presenting an analytical method for estimating the thickness of the diffusion zone. Class V cavities were prepared in the buccal and lingual surfaces of molars (n=9). Indirect composite inlays were luted into the cavities with either a self-adhesive or a self-etch resin cement. The teeth were sectioned bucco-lingually and the cement-dentin interface was analyzed by using micro-Raman spectroscopy (MRS) and scanning electron microscopy. Evolution of peak intensities of the Raman bands, collected from the functional groups corresponding to the resin monomer (C-O-C, 1113 cm-1) present in the cements, and the mineral content (P-O, 961 cm-1) in dentin were sigmoid shaped functions. A Boltzmann function (BF) was then fitted to the peaks encountered at 1113 cm-1 to estimate the resin cement diffusion into dentin. The BF identified a resin cement-dentin diffusion zone of 1.8±0.4 μm for the self-adhesive cement and 2.5±0.3 μm for the self-etch cement. This analysis allowed the authors to estimate the diffusion of the resin cements into the dentin. Fitting the MRS data to the BF contributed to and is relevant for future studies of the adhesive interface.

  10. Uncertainty Quantification in State Estimation using the Probabilistic Collocation Method

    SciTech Connect

    Lin, Guang; Zhou, Ning; Ferryman, Thomas A.; Tuffner, Francis K.

    2011-03-23

    In this study, a new efficient uncertainty quantification technique, the probabilistic collocation method (PCM) on sparse grid points, is employed to enable the evaluation of uncertainty in state estimation. The PCM allows us to use just a small number of ensembles to quantify the uncertainty in estimating the state variables of power systems. By using sparse grid points, the PCM approach can handle a large number of uncertain parameters in power systems at relatively low computational cost compared with classic Monte Carlo (MC) simulations. The algorithm and procedure are outlined, and the capability of the sparse-grid PCM approach is demonstrated by applying it to uncertainty quantification in state estimation of the IEEE 14-bus model as an example. MC simulations have also been conducted to verify the accuracy of the PCM approach. Comparing the MC and PCM results for the mean and standard deviation of the uncertain parameters shows that the PCM approach is computationally more efficient than MC simulation.
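
    As a one-dimensional illustration of the collocation idea (the paper's method extends this to many parameters via sparse grids), the sketch below propagates a single Gaussian uncertain parameter through a toy model using Gauss-Hermite collocation points and compares the resulting mean and standard deviation with a large Monte Carlo sample. The toy response function is an assumption.

        import numpy as np

        def response(z):
            """Toy nonlinear model of one uncertain parameter (standard normal)."""
            return np.exp(0.3 * z) + 0.1 * z**2

        # 5 collocation points: Gauss-Hermite nodes/weights, mapped to N(0, 1)
        nodes, weights = np.polynomial.hermite.hermgauss(5)
        vals = response(np.sqrt(2.0) * nodes)
        mean_pcm = np.sum(weights * vals) / np.sqrt(np.pi)
        std_pcm = np.sqrt(np.sum(weights * vals**2) / np.sqrt(np.pi) - mean_pcm**2)

        # Monte Carlo reference needs orders of magnitude more model evaluations
        mc = response(np.random.default_rng(3).standard_normal(200_000))
        print(f"PCM: mean {mean_pcm:.4f}, std {std_pcm:.4f}")
        print(f"MC:  mean {mc.mean():.4f}, std {mc.std():.4f}")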

  11. A method for sex estimation using the proximal femur.

    PubMed

    Curate, Francisco; Coelho, João; Gonçalves, David; Coelho, Catarina; Ferreira, Maria Teresa; Navega, David; Cunha, Eugénia

    2016-09-01

    The assessment of sex is crucial to the establishment of a biological profile of an unidentified skeletal individual. The best methods currently available for the sexual diagnosis of human skeletal remains generally rely on the presence of well-preserved pelvic bones, which is not always the case. Postcranial elements, including the femur, have been used to accurately estimate sex in skeletal remains from forensic and bioarcheological settings. In this study, we present an approach to estimate sex using two measurements (femoral neck width [FNW] and femoral neck axis length [FNAL]) of the proximal femur. FNW and FNAL were obtained in a training sample (114 females and 138 males) from the Luís Lopes Collection (National History Museum of Lisbon). Logistic regression and the C4.5 algorithm were used to develop models to predict sex in unknown individuals. Proposed cross-validated models correctly predicted sex in 82.5-85.7% of the cases. The models were also evaluated in a test sample (96 females and 96 males) from the Coimbra Identified Skeletal Collection (University of Coimbra), resulting in a sex allocation accuracy of 80.1-86.2%. This study supports the relative value of the proximal femur to estimate sex in skeletal remains, especially when other exceedingly dimorphic skeletal elements are not accessible for analysis. PMID:27373600
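
    A minimal sketch of the logistic-regression part of the approach, trained on synthetic stand-in measurements of femoral neck width (FNW) and femoral neck axis length (FNAL); the simulated means, spreads and the example individual are assumptions, not values from the Luís Lopes or Coimbra collections.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(4)
        n = 250
        fnw_f, fnal_f = rng.normal(30, 2.5, n), rng.normal(88, 4.5, n)   # simulated females (mm)
        fnw_m, fnal_m = rng.normal(34, 2.5, n), rng.normal(96, 4.5, n)   # simulated males (mm)
        X = np.column_stack([np.r_[fnw_f, fnw_m], np.r_[fnal_f, fnal_m]])
        y = np.r_[np.zeros(n), np.ones(n)]                               # 0 = female, 1 = male

        model = LogisticRegression().fit(X, y)
        # probability that an unknown individual with FNW = 32 mm, FNAL = 91 mm is male
        print(round(model.predict_proba([[32.0, 91.0]])[0, 1], 2))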

  12. A method for sex estimation using the proximal femur.

    PubMed

    Curate, Francisco; Coelho, João; Gonçalves, David; Coelho, Catarina; Ferreira, Maria Teresa; Navega, David; Cunha, Eugénia

    2016-09-01

    The assessment of sex is crucial to the establishment of a biological profile of an unidentified skeletal individual. The best methods currently available for the sexual diagnosis of human skeletal remains generally rely on the presence of well-preserved pelvic bones, which is not always the case. Postcranial elements, including the femur, have been used to accurately estimate sex in skeletal remains from forensic and bioarcheological settings. In this study, we present an approach to estimate sex using two measurements (femoral neck width [FNW] and femoral neck axis length [FNAL]) of the proximal femur. FNW and FNAL were obtained in a training sample (114 females and 138 males) from the Luís Lopes Collection (National History Museum of Lisbon). Logistic regression and the C4.5 algorithm were used to develop models to predict sex in unknown individuals. Proposed cross-validated models correctly predicted sex in 82.5-85.7% of the cases. The models were also evaluated in a test sample (96 females and 96 males) from the Coimbra Identified Skeletal Collection (University of Coimbra), resulting in a sex allocation accuracy of 80.1-86.2%. This study supports the relative value of the proximal femur to estimate sex in skeletal remains, especially when other exceedingly dimorphic skeletal elements are not accessible for analysis.

  13. Matrix Methods for Estimating the Coherence Functions from Estimates of the Cross-Spectral Density Matrix

    DOE PAGES

    Smallwood, D. O.

    1996-01-01

    It is shown that the usual method for estimating the coherence functions (ordinary, partial, and multiple) for a general multiple-input/multiple-output problem can be expressed as a modified form of Cholesky decomposition of the cross-spectral density matrix of the input and output records. The results can be equivalently obtained using singular value decomposition (SVD) of the cross-spectral density matrix. Using SVD suggests a new form of fractional coherence. The formulation as a SVD problem also suggests a way to order the inputs when a natural physical order of the inputs is absent.
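
    For orientation, the sketch below computes the ordinary coherence, gamma^2 = |Gxy|^2 / (Gxx * Gyy), from cross- and auto-spectral density estimates of a simulated input/output pair; the matrix (Cholesky/SVD) formulation in the paper generalizes this to partial and multiple coherences for many records. The simulated system and noise level are assumptions.

        import numpy as np
        from scipy.signal import csd, welch

        fs = 1000.0
        rng = np.random.default_rng(5)
        x = rng.standard_normal(20_000)                                        # input record
        y = np.convolve(x, np.ones(8) / 8, mode="same") + 0.5 * rng.standard_normal(x.size)

        f, Gxy = csd(x, y, fs=fs, nperseg=1024)          # cross-spectral density
        _, Gxx = welch(x, fs=fs, nperseg=1024)           # auto-spectral densities
        _, Gyy = welch(y, fs=fs, nperseg=1024)
        coherence = np.abs(Gxy) ** 2 / (Gxx * Gyy)
        print(coherence[:5].round(2))   # near 1 where the output is dominated by the input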

  14. Residual fatigue life estimation using a nonlinear ultrasound modulation method

    NASA Astrophysics Data System (ADS)

    Piero Malfense Fierro, Gian; Meo, Michele

    2015-02-01

    Predicting the residual fatigue life of a material is not a simple task and requires the development and association of many variables that as standalone tasks can be difficult to determine. This work develops a modulated nonlinear elastic wave spectroscopy method for the evaluation of a metallic component's residual fatigue life. An aluminium specimen (AA6082-T6) was tested at predetermined fatigue stages throughout its fatigue life using a dual-frequency ultrasound method. A modulated nonlinear parameter was derived, which described the relationship between the generation of modulated (sideband) responses of a dual frequency signal and the linear response. The sideband generation from the dual frequency (two signal output system) was shown to increase as the residual fatigue life decreased, and as a standalone measurement method it can be used to show an increase in a material's damage. A baseline-free method was developed by linking a theoretical model, obtained by combining the Paris law and the Nazarov-Sutin crack equation, to experimental nonlinear modulation measurements. The results showed good correlation between the derived theoretical model and the modulated nonlinear parameter, allowing for baseline-free material residual fatigue life estimation. Advantages and disadvantages of these methods are discussed, and further methods that would increase the accuracy of residual fatigue life detection are presented.
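
    A rough sketch of how a modulated nonlinear parameter can be formed from a dual-frequency measurement: take the spectrum of the received signal and compare the sideband amplitudes at f1 ± f2 with the two fundamental amplitudes. The signal model, frequencies and the particular ratio used here are assumptions made for illustration and are not necessarily the definition derived in the paper.

        import numpy as np

        fs, T = 1_000_000, 0.05
        t = np.arange(0, T, 1 / fs)
        f1, f2 = 200_000.0, 30_000.0            # probing and pumping frequencies (Hz)
        damage = 0.02                           # toy strength of the nonlinear mixing
        signal = np.sin(2*np.pi*f1*t) * (1 + damage*np.sin(2*np.pi*f2*t)) + np.sin(2*np.pi*f2*t)

        spectrum = np.abs(np.fft.rfft(signal)) / t.size
        freqs = np.fft.rfftfreq(t.size, 1 / fs)
        amp = lambda f: spectrum[np.argmin(np.abs(freqs - f))]

        # one possible modulation index: sideband energy normalised by the fundamentals
        nonlinear_parameter = (amp(f1 - f2) + amp(f1 + f2)) / (amp(f1) * amp(f2))
        print(f"modulation index ≈ {nonlinear_parameter:.3f}")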

  15. Dental age estimation in Brazilian HIV children using Willems' method.

    PubMed

    de Souza, Rafael Boschetti; da Silva Assunção, Luciana Reichert; Franco, Ademir; Zaroni, Fábio Marzullo; Holderbaum, Rejane Maria; Fernandes, Ângela

    2015-12-01

    Notification of Human Immunodeficiency Virus (HIV) infection in Brazilian children was first reported in 1984. Since that time, more than 21 thousand children have become infected. Approximately 99.6% of the children aged less than 13 years were vertically infected. In this context, many of these children are abandoned after birth or lose their relatives in the near future, growing up with uncertain identification. The present study aims to estimate the dental age of Brazilian HIV patients compared with healthy patients paired by age and gender. The sample consisted of 160 panoramic radiographs of male (n: 80) and female (n: 80) patients aged between 4 and 15 years (mean age: 8.88 years), divided into HIV (n: 80) and control (n: 80) groups. The sample was analyzed by three trained examiners using Willems' method (2001). The Intraclass Correlation Coefficient (ICC) was applied to test intra- and inter-examiner agreement, and Student's paired t-test was used to determine the age association between the HIV and control groups. Intra-examiner (ICC: 0.993 to 0.997) and inter-examiner (ICC: 0.991 to 0.995) agreement tests indicated high reproducibility of the method between the examiners (P<0.01). Willems' method revealed a slight statistical overestimation in both the HIV (2.86 months; P=0.019) and control (1.90 months; P=0.039) groups. However, analysis stratified by gender indicates that the overestimation was concentrated only in male HIV (3.85 months; P=0.001) and control (2.86 months; P=0.022) patients. The statistically significant differences are not clinically relevant, since only a few months of discrepancy are detected when applying Willems' method to a Brazilian HIV sample, making this method highly recommendable for dental age estimation of both HIV-infected and healthy children of unknown age.

  16. Estimation of Convective Momentum Fluxes Using Satellite-Based Methods

    NASA Astrophysics Data System (ADS)

    Jewett, C.; Mecikalski, J. R.

    2009-12-01

    Research and case studies have shown that convection plays a significant role in large-scale environmental circulations. Convective momentum fluxes (CMFs) have been studied for many years using in-situ and aircraft measurements, along with numerical simulations. However, despite these successes, little work has been conducted on methods that use satellite remote sensing as a tool to diagnose these fluxes. Satellite data can provide continuous analysis across regions void of ground-based remote sensing. Therefore, the project's overall goal is to develop a synergistic approach for retrieving CMFs using a collection of instruments including GOES, TRMM, CloudSat, MODIS, and QuikScat. However, this particular study will focus on the work using TRMM and QuikScat, and the methodology of using CloudSat. Sound research has already been conducted for computing CMFs using the GOES instruments (Jewett and Mecikalski 2009, submitted to J. Geophys. Res.). Using satellite-derived winds, namely mesoscale atmospheric motion vectors (MAMVs) as described by Bedka and Mecikalski (2005), one can obtain the actual winds occurring within a convective environment as perturbed by convection. Surface outflow boundaries and upper-tropospheric anvil outflow will produce “perturbation” winds on smaller, convective scales. Combined with estimated vertical motion retrieved using geostationary infrared imagery, CMFs were estimated using MAMVs, with an average profile being calculated across a convective regime or a domain covered by active storms. This study involves estimating draft-tilt from TRMM PR radar reflectivity and sub-cloud base fluxes using QuikScat data. The “slope” of falling hydrometeors (relative to Earth) in the data is related to u', v' and w' winds within convection. The main up- and down-drafts within convection are described by precipitation patterns (Mecikalski 2003). Vertical motion estimates are made using model results for deep convection

  17. Method for estimating absolute lung volumes at constant inflation pressure.

    PubMed

    Hills, B A; Barrow, R E

    1979-10-01

    A method has been devised for measuring functional residual capacity in the intact killed animal or absolute lung volumes in any excised lung preparation without changing the inflation pressure. This is achieved by titrating the absolute pressure of a chamber in which the preparation is compressed until a known volume of air has entered the lungs. This technique was used to estimate the volumes of five intact rabbit lungs and five rigid containers of known dimensions by means of Boyle's law. Results were found to agree to within +/- 1% with values determined by alternative methods. In the discussion the advantage of determining absolute lung volumes at almost any stage in a study of lung mechanics without the determination itself changing inflation pressure and, hence, lung volume is emphasized. PMID:511699
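
    As a sketch of the underlying Boyle's-law bookkeeping (the abstract does not give the working, so the assumptions here are ours): if the gas initially in the lungs occupies volume V1 at ambient absolute pressure P1, the chamber is raised to absolute pressure P2, the lung volume stays fixed because the inflation pressure is held constant, and the measured volume ΔV of entering air is expressed at P2, then

        % Boyle's law applied to the gas originally in the lungs, which is compressed
        % into the volume V1 - \Delta V at the final chamber pressure P2:
        \[
          P_1 V_1 = P_2\,(V_1 - \Delta V)
          \qquad\Longrightarrow\qquad
          V_1 = \frac{P_2\,\Delta V}{P_2 - P_1}.
        \]

    If ΔV is instead metered at the initial pressure P1, the same bookkeeping gives V1 = P1 ΔV / (P2 - P1).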

  18. Methods for estimating the population contribution to environmental change.

    PubMed

    Raskin, P D

    1995-12-01

    "This paper introduces general methods for quantitative analysis of the role of population in environmental change. The approach is applicable over a wide range of environmental issues, and arbitrary regions and time periods. First, a single region is considered, appropriate formulae derived, and the limitations to quantitative approaches discussed. The approach is contrasted to earlier formulations, and shown to avoid weaknesses in a common approximation. Next, the analysis is extended to the multiple region problem. An apparent paradox in aggregating regional estimates is illuminated, and the risk of misleading results is underscored. The methods are applied to the problem of climate change with two case studies, an historical period and a future scenario, used to illustrate the results. The contribution of change in population to change in green house gas emissions is shown to be significant, but not dominant in both industrialized and developing regions."

  19. A variable circular-plot method for estimating bird numbers

    USGS Publications Warehouse

    Reynolds, R.T.; Scott, J.M.; Nussbaum, R.A.

    1980-01-01

    A bird census method is presented that is designed for tall, structurally complex vegetation types, and rugged terrain. With this method the observer counts all birds seen or heard around a station, and estimates the horizontal distance from the station to each bird. Count periods at stations vary according to the avian community and structural complexity of the vegetation. The density of each species is determined by inspecting a histogram of the number of individuals per unit area in concentric bands of predetermined widths about the stations, choosing the band (with outside radius x) where the density begins to decline, and summing the number of individuals counted within the circle of radius x and dividing by the area (πx²). Although all observations beyond radius x are rejected with this procedure, coefficients of maximum distance.
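
    A minimal sketch of the per-station arithmetic for one species, under a simplified reading of the rule described above: bin the detection distances into concentric bands, find the band where the per-area density first declines, take its outside radius x, and divide the count within x by πx². The band width and the toy detection distances are assumptions.

        import numpy as np

        def circular_plot_density(distances, band_width=10.0):
            """Density (birds per unit area) from detection distances at one station."""
            d = np.asarray(distances, dtype=float)
            edges = np.arange(0.0, d.max() + band_width, band_width)
            counts, _ = np.histogram(d, bins=edges)
            per_area = counts / (np.pi * (edges[1:] ** 2 - edges[:-1] ** 2))
            declines = np.flatnonzero(per_area[1:] < per_area[:-1])   # bands where density drops
            band = declines[0] + 1 if declines.size else len(per_area) - 1
            x = edges[band + 1]                                       # outside radius of that band
            return np.sum(d <= x) / (np.pi * x ** 2)

        rng = np.random.default_rng(6)
        near = 40.0 * np.sqrt(rng.uniform(size=60))        # uniform density inside 40 m
        far = rng.uniform(40.0, 80.0, size=8)              # sparse detections farther out
        print(circular_plot_density(np.concatenate([near, far])))     # birds per square metre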

  20. A new assimilation method with physical mechanism to estimate evapotranspiration

    NASA Astrophysics Data System (ADS)

    Ye, Wen; Xu, Xinyi

    2016-04-01

    The accurate estimation of regional evapotranspiration has been a research hotspot in hydrology and water resources, both domestically and abroad. A new, easier-to-apply assimilation method with a physical mechanism is proposed to estimate evapotranspiration. Based on the evapotranspiration (ET) calculation method with soil-moisture recurrence relations in the Distributed Time Variant Gain Model (DTVGM) and the Ensemble Kalman Filter (EnKF), an assimilation system was constructed for the recursive calculation of evapotranspiration, using ET retrievals from the Two-Layer Remote Sensing Model as the "observations". By updating the filter in the model with the assimilated evapotranspiration, the model estimates are corrected synchronously and more accurate, time-continuous series of evapotranspiration are obtained. Verification against observations at the Xiaotangshan Observatory and hydrological stations in the basin shows that the correlation coefficient between remote-sensing-retrieved and actual evapotranspiration reaches 0.97, and the Nash-Sutcliffe efficiency coefficient of the DTVGM model is 0.80. Using typical daily evapotranspiration from remote sensing and data from the DTVGM model, the hydrological simulation for the Shahe Basin in Beijing was assimilated to obtain a continuous evapotranspiration time series. The results show that the average relative error between the remote sensing values and the DTVGM simulations is about 12.3%, whereas the error between the remote sensing retrievals and the assimilated values is 4.5%, indicating that the EnKF assimilation results are closer to the "real" data and better than the evapotranspiration simulated by the DTVGM alone. Keywords: evapotranspiration assimilation; Ensemble Kalman Filter; distributed hydrological model; Two-Layer Remote Sensing Model.
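
    For reference, the core of an ensemble Kalman filter analysis step for a scalar state (here a daily ET value in mm) is only a few lines; the sketch below uses the standard stochastic-EnKF update with perturbed observations. The ensemble size, forecast spread and observation error are invented numbers, and the real system couples this update to the DTVGM soil-moisture recurrence and the remote-sensing ET retrieval.

        import numpy as np

        def enkf_update(ensemble, obs, obs_err_std, H=1.0):
            """Stochastic EnKF analysis: x_a = x_f + K (y + eps - H x_f),
            with K = P_f H / (H P_f H + R) estimated from the ensemble."""
            rng = np.random.default_rng(7)
            Pf = np.var(ensemble, ddof=1)                      # forecast error variance
            K = Pf * H / (H * Pf * H + obs_err_std ** 2)       # Kalman gain
            perturbed_obs = obs + rng.normal(0.0, obs_err_std, size=ensemble.size)
            return ensemble + K * (perturbed_obs - H * ensemble)

        forecast = np.random.default_rng(8).normal(3.2, 0.6, size=50)   # model ET ensemble (mm/day)
        analysis = enkf_update(forecast, obs=3.8, obs_err_std=0.3)      # remote-sensing "observation"
        print(forecast.mean().round(2), analysis.mean().round(2))       # analysis pulled toward 3.8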

  1. A QUALITATIVE METHOD TO ESTIMATE HSI DISPLAY COMPLEXITY

    SciTech Connect

    Jacques Hugo; David Gertman

    2013-04-01

    There is mounting evidence that complex computer system displays in control rooms contribute to cognitive complexity and, thus, to the probability of human error. Research shows that reaction time increases and response accuracy decreases as the number of elements in the display screen increases. However, in terms of supporting the control room operator, approaches focusing on addressing display complexity solely in terms of information density and its location and patterning will fall short of delivering a properly designed interface. This paper argues that information complexity and semantic complexity are mandatory components when considering display complexity and that the addition of these concepts assists in understanding and resolving differences between designers and the preferences and performance of operators. This paper concludes that a number of simplified methods, when combined, can be used to estimate the impact that a particular display may have on the operator's ability to perform a function accurately and effectively. We present a mixed qualitative and quantitative approach and a method for complexity estimation.

  2. Method of Estimating Continuous Cooling Transformation Curves of Glasses

    NASA Technical Reports Server (NTRS)

    Zhu, Dongmei; Zhou, Wancheng; Ray, Chandra S.; Day, Delbert E.

    2006-01-01

    A method is proposed for estimating the critical cooling rate and continuous cooling transformation (CCT) curve from isothermal TTT data of glasses. The critical cooling rates and CCT curves for a group of lithium disilicate glasses containing different amounts of Pt as nucleating agent, estimated through this method, are compared with the experimentally measured values. By analysis of the experimental and calculated data for the lithium disilicate glasses, a simple relationship between the amount crystallized in the glasses during continuous cooling, $X$, and the undercooling, $\Delta T$, was found to be $X = A R^{-4} \exp(B \Delta T)$, where $\Delta T$ is the temperature difference between the theoretical melting point of the glass composition and the temperature in question, $R$ is the cooling rate, and $A$ and $B$ are constants. The relation between the amount of crystallization during continuous cooling and during an isothermal hold can be expressed as $X_{cT}/X_{iT} = (4/B)^{4} \Delta T^{-4}$, where $X_{cT}$ is the amount crystallized in a glass during continuous cooling for a time $t$ when the temperature reaches $T$, and $X_{iT}$ is the amount crystallized during an isothermal hold at temperature $T$ for a time $t$.

  3. Study on color difference estimation method of medicine biochemical analysis

    NASA Astrophysics Data System (ADS)

    Wang, Chunhong; Zhou, Yue; Zhao, Hongxia; Sun, Jiashi; Zhou, Fengkun

    2006-01-01

    Biochemical analysis is an important inspection and diagnosis method in hospital clinics, and the biochemical analysis of urine is one important item. Urine test paper shows a corresponding color for each detection item and degree of illness. The color difference between the standard threshold and the color of the urine test paper can therefore be used to judge the degree of illness, enabling further analysis and diagnosis. Color is a three-dimensional perceptual variable, whereas reflectance is one-dimensional; a color-difference estimation method for urine tests can thus offer better precision and convenience than the conventional one-dimensional reflectance test and can support an accurate diagnosis. A digital camera makes it easy to capture an image of the urine test paper and carry out the urine biochemical analysis conveniently. In the experiment, the color image of the urine test paper was taken by a common color digital camera and saved on a computer on which a simple color-space conversion (RGB -> XYZ -> L*a*b*) and the calculation software were installed. Test samples are graded according to intelligent detection of quantitative color. The images taken at each time point are saved on the computer, so the whole course of the illness can be monitored. This method can also be used in other medical biochemical analyses related to color. The experimental results show that this test method is quick and accurate; it can be used in hospitals, calibration organizations and the home, so its application prospects are extensive.
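
    The colour pipeline named in the abstract (RGB -> XYZ -> L*a*b*, followed by a colour difference against the standard threshold) can be sketched in a few lines. The conversion below assumes sRGB input under a D65 white point and uses the simple CIE76 ΔE; the two example colours are invented.

        import numpy as np

        M = np.array([[0.4124, 0.3576, 0.1805],       # sRGB (D65) -> XYZ matrix
                      [0.2126, 0.7152, 0.0722],
                      [0.0193, 0.1192, 0.9505]])
        WHITE = np.array([0.95047, 1.0, 1.08883])     # D65 reference white

        def srgb_to_lab(rgb):
            """Convert 0-255 sRGB values to CIE L*a*b* (D65)."""
            c = np.asarray(rgb, dtype=float) / 255.0
            c = np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)   # linearise
            xyz = (M @ c) / WHITE
            f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
            return np.array([116 * f[1] - 16, 500 * (f[0] - f[1]), 200 * (f[1] - f[2])])

        def delta_e(lab1, lab2):
            """CIE76 colour difference."""
            return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))

        pad = srgb_to_lab([180, 160, 40])    # colour read from the test-pad image (invented)
        ref = srgb_to_lab([200, 180, 60])    # nearest standard threshold colour (invented)
        print(round(delta_e(pad, ref), 1))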

  4. Application of Common Mid-Point Method to Estimate Asphalt

    NASA Astrophysics Data System (ADS)

    Zhao, Shan; Al-Aadi, Imad

    2015-04-01

    3-D radar is a multi-array stepped-frequency ground-penetrating radar (GPR) that can measure at a very close sampling interval in both in-line and cross-line directions. Constructing asphalt layers in accordance with specified thicknesses is crucial for pavement structure capacity and pavement performance. The common mid-point (CMP) method is a multi-offset measurement method that can improve the accuracy of the asphalt layer thickness estimation. In this study, the viability of using 3-D radar to predict asphalt concrete pavement thickness with an extended CMP method was investigated. GPR signals were collected on asphalt pavements with various thicknesses. Time domain resolution of the 3-D radar was improved by applying a zero-padding technique in the frequency domain. The performance of the 3-D radar was then compared to that of the air-coupled horn antenna. The study concluded that 3-D radar can be used to predict asphalt layer thickness accurately with the CMP method when the layer thickness is larger than 0.13 m. The lack of time domain resolution of 3-D radar can be solved by frequency-domain zero-padding. Keywords: asphalt pavement thickness, 3-D Radar, stepped-frequency, common mid-point method, zero padding.
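
    The CMP geometry leads to the hyperbolic travel-time relation t(x)^2 = t0^2 + x^2/v^2 for a reflector beneath the mid-point, so a straight-line fit of t^2 against x^2 yields the wave velocity v and zero-offset time t0, and the thickness follows as d = v*t0/2. The offsets and picked travel times below are hypothetical, consistent with roughly 0.15 m of asphalt; they are not data from this study.

        import numpy as np

        offsets = np.array([0.1, 0.2, 0.3, 0.4, 0.5])                  # antenna offsets (m)
        times = np.array([2.64, 3.01, 3.54, 4.17, 4.86]) * 1e-9        # picked two-way times (s)

        # t^2 = t0^2 + x^2 / v^2 is linear in x^2, so fit a straight line
        slope, intercept = np.polyfit(offsets ** 2, times ** 2, 1)
        v = 1.0 / np.sqrt(slope)                                        # EM velocity in the layer
        t0 = np.sqrt(intercept)                                         # zero-offset two-way time
        print(f"v ≈ {v:.3g} m/s, thickness ≈ {v * t0 / 2:.3f} m")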

  5. Comparison of carbon and biomass estimation methods for European forests

    NASA Astrophysics Data System (ADS)

    Neumann, Mathias; Mues, Volker; Harkonen, Sanna; Mura, Matteo; Bouriaud, Olivier; Lang, Mait; Achten, Wouter; Thivolle-Cazat, Alain; Bronisz, Karol; Merganicova, Katarina; Decuyper, Mathieu; Alberdi, Iciar; Astrup, Rasmus; Schadauer, Klemens; Hasenauer, Hubert

    2015-04-01

    National and international reporting systems as well as research, enterprises and political stakeholders require information on carbon stocks of forests. Terrestrial assessment systems like forest inventory data in combination with carbon calculation methods are often used for this purpose. To assess the effect of the calculation method used, a comparative analysis was done using the carbon calculation methods from 13 European countries and the research plots from ICP Forests (International Co-operative Programme on Assessment and Monitoring of Air Pollution Effects on Forests). These methods are applied to five European tree species (Fagus sylvatica L., Quercus robur L., Betula pendula Roth, Picea abies (L.) Karst. and Pinus sylvestris L.) using a standardized theoretical tree dataset to avoid biases due to data collection and sample design. The carbon calculation methods use allometric biomass and volume functions, carbon and biomass expansion factors or a combination thereof. The results of the analysis show a high variation in the results for total tree carbon as well as for carbon in the single tree compartments. The same pattern is found when comparing the respective volume estimates. This is consistent for all five tree species, and the variation remains when the results are grouped according to the European forest regions. Possible explanations are differences in the sample material used for the biomass models, the model variables or differences in the definition of tree compartments. The analysed carbon calculation methods have a strong effect on the results both for single trees and forest stands. To avoid misinterpretation, the calculation method has to be chosen carefully and accompanied by quality checks, and it requires particular consideration in comparative studies to prevent biased and misleading conclusions.

  6. The Mayfield method of estimating nesting success: A model, estimators and simulation results

    USGS Publications Warehouse

    Hensler, G.L.; Nichols, J.D.

    1981-01-01

    Using a nesting model proposed by Mayfield we show that the estimator he proposes is a maximum likelihood estimator (m.l.e.). M.l.e. theory allows us to calculate the asymptotic distribution of this estimator, and we propose an estimator of the asymptotic variance. Using these estimators we give approximate confidence intervals and tests of significance for daily survival. Monte Carlo simulation results show the performance of our estimators and tests under many sets of conditions. A traditional estimator of nesting success is shown to be quite inferior to the Mayfield estimator. We give sample sizes required for a given accuracy under several sets of conditions.
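
    The Mayfield estimator itself is simple enough to state inline: the daily survival rate is one minus the number of losses divided by the total exposure-days, its asymptotic variance is s(1-s)/exposure, and nest success over a full nesting period is the daily rate raised to the period length. The sketch below uses made-up exposure data.

        import math

        def mayfield(exposure_days, losses, nest_period_days):
            """Mayfield constant-survival model: daily survival (the MLE), its
            asymptotic standard error, and success over the whole nesting period."""
            s = 1.0 - losses / exposure_days
            se = math.sqrt(s * (1.0 - s) / exposure_days)
            return s, se, s ** nest_period_days

        s, se, success = mayfield(exposure_days=734.0, losses=31, nest_period_days=27)
        print(f"daily survival {s:.4f} ± {se:.4f}, nest success {success:.2f}")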

  7. Estimated Accuracy of Three Common Trajectory Statistical Methods

    NASA Technical Reports Server (NTRS)

    Kabashnikov, Vitaliy P.; Chaikovsky, Anatoli P.; Kucsera, Tom L.; Metelskaya, Natalia S.

    2011-01-01

    Three well-known trajectory statistical methods (TSMs), namely concentration field (CF), concentration weighted trajectory (CWT), and potential source contribution function (PSCF) methods were tested using known sources and artificially generated data sets to determine the ability of TSMs to reproduce spatial distribution of the sources. In the works by other authors, the accuracy of the trajectory statistical methods was estimated for particular species and at specified receptor locations. We have obtained a more general statistical estimation of the accuracy of source reconstruction and have found optimum conditions to reconstruct source distributions of atmospheric trace substances. Only virtual pollutants of the primary type were considered. In real world experiments, TSMs are intended for application to a priori unknown sources. Therefore, the accuracy of TSMs has to be tested with all possible spatial distributions of sources. An ensemble of geographical distributions of virtual sources was generated. Spearman's rank order correlation coefficient between spatial distributions of the known virtual and the reconstructed sources was taken to be a quantitative measure of the accuracy. Statistical estimates of the mean correlation coefficient and a range of the most probable values of correlation coefficients were obtained. All the TSMs that were considered here showed similar close results. The maximum of the ratio of the mean correlation to the width of the correlation interval containing the most probable correlation values determines the optimum conditions for reconstruction. An optimal geographical domain roughly coincides with the area supplying most of the substance to the receptor. The optimal domain's size is dependent on the substance decay time. Under optimum reconstruction conditions, the mean correlation coefficients can reach 0.70-0.75. The boundaries of the interval with the most probable correlation values are 0.6-0.9 for the decay time of 240 h.
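
    Of the three TSMs compared, the PSCF is the easiest to state compactly: grid the domain, count all trajectory endpoints falling in each cell (n_ij) and those belonging to arrivals with concentrations above a threshold (m_ij), and take PSCF_ij = m_ij / n_ij. Below is a minimal sketch on a few toy endpoints; the grid step, coordinates and polluted flags are assumptions.

        import numpy as np

        def pscf(lats, lons, polluted, grid_step=1.0):
            """PSCF_ij = m_ij / n_ij on a lat/lon grid of the trajectory endpoints."""
            i = np.floor(np.asarray(lats) / grid_step).astype(int)
            j = np.floor(np.asarray(lons) / grid_step).astype(int)
            cells = {}
            for ii, jj, bad in zip(i, j, polluted):
                n, m = cells.get((ii, jj), (0, 0))
                cells[(ii, jj)] = (n + 1, m + int(bad))
            return {cell: m / n for cell, (n, m) in cells.items()}

        lats = [50.2, 50.7, 51.3, 50.4, 52.1, 50.6]
        lons = [10.1, 10.4, 11.2, 10.8, 12.5, 10.2]
        polluted = [True, True, False, True, False, False]   # receptor exceeded threshold?
        print(pscf(lats, lons, polluted))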

  8. Estimation of Anthocyanin Content of Berries by NIR Method

    NASA Astrophysics Data System (ADS)

    Zsivanovits, G.; Ludneva, D.; Iliev, A.

    2010-01-01

    Anthocyanin contents of fruits were estimated by VIS spectrophotometer and compared with spectra measured by NIR spectrophotometer (600-1100 nm, step 10 nm). The aim was to find a relationship between the NIR method and the traditional spectrophotometric method. The testing protocol, using NIR, is easier, faster and non-destructive. NIR spectra were prepared in pairs, reflectance and transmittance. A modular spectrocomputer, realized on the basis of a monochromator and peripherals Bentham Instruments Ltd (GB) and a photometric camera created at Canning Research Institute, were used. An important feature of this camera is the possibility offered for a simultaneous measurement of both transmittance and reflectance with geometry patterns T0/180 and R0/45. The collected spectra were analyzed by CAMO Unscrambler 9.1 software, with PCA, PLS, PCR methods. Based on the analyzed spectra, quality- and quantity-sensitive calibrations were prepared. The results showed that the NIR method allows measurement of the total anthocyanin content in fresh berry fruits or processed products without destroying them.
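
    Calibrations of this kind are typically partial least squares regressions of the measured spectra against the reference (VIS spectrophotometric) anthocyanin values; the sketch below fits and cross-validates such a model on synthetic spectra. The simulated wavelength grid, band shape and concentration range are assumptions, not data from this study.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(9)
        wavelengths = np.arange(600, 1110, 10)                      # 600-1100 nm, 10 nm step
        anthocyanin = rng.uniform(20, 200, size=60)                 # reference values (assumed mg/100 g)
        band = np.exp(-((wavelengths - 700) ** 2) / (2 * 40 ** 2))  # toy pigment-related band
        spectra = np.outer(anthocyanin, band) + rng.normal(0, 2.0, (60, wavelengths.size))

        pls = PLSRegression(n_components=3)
        predicted = cross_val_predict(pls, spectra, anthocyanin, cv=5).ravel()
        print(f"cross-validated r = {np.corrcoef(predicted, anthocyanin)[0, 1]:.2f}")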

  9. Estimation of Anthocyanin Content of Berries by NIR Method

    SciTech Connect

    Zsivanovits, G.; Ludneva, D.; Iliev, A.

    2010-01-21

    Anthocyanin contents of fruits were estimated by VIS spectrophotometer and compared with spectra measured by NIR spectrophotometer (600-1100 nm, step 10 nm). The aim was to find a relationship between the NIR method and the traditional spectrophotometric method. The testing protocol, using NIR, is easier, faster and non-destructive. NIR spectra were prepared in pairs, reflectance and transmittance. A modular spectrocomputer, realized on the basis of a monochromator and peripherals Bentham Instruments Ltd (GB) and a photometric camera created at Canning Research Institute, were used. An important feature of this camera is the possibility offered for a simultaneous measurement of both transmittance and reflectance with geometry patterns T0/180 and R0/45. The collected spectra were analyzed by CAMO Unscrambler 9.1 software, with PCA, PLS, PCR methods. Based on the analyzed spectra, quality- and quantity-sensitive calibrations were prepared. The results showed that the NIR method allows measurement of the total anthocyanin content in fresh berry fruits or processed products without destroying them.

  10. Estimating recharge at Yucca Mountain, Nevada, USA: Comparison of methods

    USGS Publications Warehouse

    Flint, A.L.; Flint, L.E.; Kwicklis, E.M.; Fabryka-Martin, J. T.; Bodvarsson, G.S.

    2002-01-01

    Obtaining values of net infiltration, groundwater travel time, and recharge is necessary at the Yucca Mountain site, Nevada, USA, in order to evaluate the expected performance of a potential repository as a containment system for high-level radioactive waste. However, the geologic complexities of this site, its low precipitation and net infiltration, with numerous mechanisms operating simultaneously to move water through the system, provide many challenges for the estimation of the spatial distribution of recharge. A variety of methods appropriate for arid environments has been applied, including water-balance techniques, calculations using Darcy's law in the unsaturated zone, a soil-physics method applied to neutron-hole water-content data, inverse modeling of thermal profiles in boreholes extending through the thick unsaturated zone, chloride mass balance, atmospheric radionuclides, and empirical approaches. These methods indicate that near-surface infiltration rates at Yucca Mountain are highly variable in time and space, with local (point) values ranging from zero to several hundred millimeters per year. Spatially distributed net-infiltration values average 5 mm/year, with the highest values approaching 20 mm/year near Yucca Crest. Site-scale recharge estimates range from less than 1 to about 12 mm/year. These results have been incorporated into a site-scale model that has been calibrated using these data sets that reflect infiltration processes acting on highly variable temporal and spatial scales. The modeling study predicts highly non-uniform recharge at the water table, distributed significantly differently from the non-uniform infiltration pattern at the surface.
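
    Several of the methods listed reduce to compact formulas; as one example, the chloride mass balance estimates recharge from the ratio of chloride delivered by precipitation to chloride concentrated in unsaturated-zone pore water, R ≈ P · Cl_p / Cl_uz. The sketch below is generic, and the numbers are illustrative rather than Yucca Mountain data.

        # chloride mass balance at steady state: recharge R ≈ P * Cl_p / Cl_uz
        P = 170.0      # mm/yr mean annual precipitation (illustrative)
        Cl_p = 0.6     # mg/L effective chloride in precipitation (wet + dry deposition)
        Cl_uz = 25.0   # mg/L chloride in unsaturated-zone pore water
        print(f"recharge ≈ {P * Cl_p / Cl_uz:.1f} mm/yr")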

  11. Estimating recharge at Yucca Mountain, Nevada, USA: comparison of methods

    NASA Astrophysics Data System (ADS)

    Flint, Alan L.; Flint, Lorraine E.; Kwicklis, Edward M.; Fabryka-Martin, June T.; Bodvarsson, Gudmundur S.

    2002-02-01

    Obtaining values of net infiltration, groundwater travel time, and recharge is necessary at the Yucca Mountain site, Nevada, USA, in order to evaluate the expected performance of a potential repository as a containment system for high-level radioactive waste. However, the geologic complexities of this site, its low precipitation and net infiltration, with numerous mechanisms operating simultaneously to move water through the system, provide many challenges for the estimation of the spatial distribution of recharge. A variety of methods appropriate for arid environments has been applied, including water-balance techniques, calculations using Darcy's law in the unsaturated zone, a soil-physics method applied to neutron-hole water-content data, inverse modeling of thermal profiles in boreholes extending through the thick unsaturated zone, chloride mass balance, atmospheric radionuclides, and empirical approaches. These methods indicate that near-surface infiltration rates at Yucca Mountain are highly variable in time and space, with local (point) values ranging from zero to several hundred millimeters per year. Spatially distributed net-infiltration values average 5 mm/year, with the highest values approaching 20 mm/year near Yucca Crest. Site-scale recharge estimates range from less than 1 to about 12 mm/year. These results have been incorporated into a site-scale model that has been calibrated using these data sets that reflect infiltration processes acting on highly variable temporal and spatial scales. The modeling study predicts highly non-uniform recharge at the water table, distributed significantly differently from the non-uniform infiltration pattern at the surface.

  12. Estimating recharge at Yucca Mountain, Nevada, USA: comparison of methods

    SciTech Connect

    Flint, A. L.; Flint, L. E.; Kwicklis, E. M.; Fabryka-Martin, J. T.; Bodvarsson, G. S.

    2001-11-01

    Obtaining values of net infiltration, groundwater travel time, and recharge is necessary at the Yucca Mountain site, Nevada, USA, in order to evaluate the expected performance of a potential repository as a containment system for high-level radioactive waste. However, the geologic complexities of this site, its low precipitation and net infiltration, with numerous mechanisms operating simultaneously to move water through the system, provide many challenges for the estimation of the spatial distribution of recharge. A variety of methods appropriate for arid environments has been applied, including water-balance techniques, calculations using Darcy's law in the unsaturated zone, a soil-physics method applied to neutron-hole water-content data, inverse modeling of thermal profiles in boreholes extending through the thick unsaturated zone, chloride mass balance, atmospheric radionuclides, and empirical approaches. These methods indicate that near-surface infiltration rates at Yucca Mountain are highly variable in time and space, with local (point) values ranging from zero to several hundred millimeters per year. Spatially distributed net-infiltration values average 5 mm/year, with the highest values approaching 20 mm/year near Yucca Crest. Site-scale recharge estimates range from less than 1 to about 12 mm/year. These results have been incorporated into a site-scale model that has been calibrated using these data sets that reflect infiltration processes acting on highly variable temporal and spatial scales. The modeling study predicts highly non-uniform recharge at the water table, distributed significantly differently from the non-uniform infiltration pattern at the surface.

  13. Computational methods estimating uncertainties for profile reconstruction in scatterometry

    NASA Astrophysics Data System (ADS)

    Gross, H.; Rathsfeld, A.; Scholze, F.; Model, R.; Bär, M.

    2008-04-01

    The solution of the inverse problem in scatterometry, i.e. the determination of periodic surface structures from light diffraction patterns, is incomplete without knowledge of the uncertainties associated with the reconstructed surface parameters. With decreasing feature sizes of lithography masks, increasing demands on metrology techniques arise. Scatterometry as a non-imaging indirect optical method is applied to periodic line-space structures in order to determine geometric parameters like side-wall angles, heights, top and bottom widths and to evaluate the quality of the manufacturing process. The numerical simulation of the diffraction process is based on the finite element solution of the Helmholtz equation. The inverse problem seeks to reconstruct the grating geometry from measured diffraction patterns. Restricting the class of gratings and the set of measurements, this inverse problem can be reformulated as a non-linear operator equation in Euclidean spaces. The operator maps the grating parameters to the efficiencies of diffracted plane wave modes. We employ a Gauss-Newton type iterative method to solve this operator equation and end up minimizing the deviation of the measured efficiency or phase shift values from the simulated ones. The reconstruction properties and the convergence of the algorithm, however, are controlled by the local conditioning of the non-linear mapping and the uncertainties of the measured efficiencies or phase shifts. In particular, the uncertainties of the reconstructed geometric parameters essentially depend on the uncertainties of the input data and can be estimated by various methods. We compare the results obtained from a Monte Carlo procedure to the estimations gained from the approximate covariance matrix of the profile parameters close to the optimal solution and apply them to EUV masks illuminated by plane waves with wavelengths in the range of 13 nm.
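
    A minimal sketch of a Gauss-Newton iteration with an approximate covariance of the recovered parameters is given below. The forward map is a toy function standing in for the finite-element diffraction solver, so the numbers are purely illustrative.

```python
# Gauss-Newton iteration and approximate parameter covariance, sketched with a
# toy forward model in place of the finite-element diffraction solver.
import numpy as np

def forward(p):
    x = np.linspace(0.0, 1.0, 20)
    return p[0] * np.exp(-p[1] * x) + p[2] * x          # toy "efficiencies"

def jacobian(p, eps=1e-6):
    f0 = forward(p)
    J = np.empty((f0.size, p.size))
    for j in range(p.size):
        dp = p.copy(); dp[j] += eps
        J[:, j] = (forward(dp) - f0) / eps               # finite-difference column
    return J

p_true = np.array([1.0, 3.0, 0.5])
rng = np.random.default_rng(1)
y = forward(p_true) + rng.normal(scale=0.01, size=20)    # "measured" data

p = np.array([0.5, 1.0, 0.0])                            # initial guess
for _ in range(20):
    r = y - forward(p)
    J = jacobian(p)
    p = p + np.linalg.lstsq(J, r, rcond=None)[0]         # Gauss-Newton step

J = jacobian(p)
sigma2 = np.sum((y - forward(p)) ** 2) / (y.size - p.size)
cov = sigma2 * np.linalg.inv(J.T @ J)                    # approximate covariance
print(p, np.sqrt(np.diag(cov)))
```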

  14. A comparison of spectral estimation methods for the analysis of sibilant fricatives

    PubMed Central

    Reidy, Patrick F.

    2015-01-01

    It has been argued that, to ensure accurate spectral feature estimates for sibilants, the spectral estimation method should include a low-variance spectral estimator; however, no empirical evaluation of estimation methods in terms of feature estimates has been given. The spectra of /s/ and /ʃ/ were estimated with different methods that varied the pre-emphasis filter and estimator. These methods were evaluated in terms of effects on two features (centroid and degree of sibilance) and on the detection of four linguistic contrasts within these features. Estimation method affected the spectral features but none of the tested linguistic contrasts. PMID:25920873
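
    As a small illustration of one of the two features, the sketch below computes a spectral centroid from a windowed DFT power spectrum; the frame is synthetic noise rather than a recorded fricative, and the estimator comparison studied in the paper is not reproduced here.

```python
# Spectral centroid (first spectral moment) from a windowed DFT power spectrum.
import numpy as np

fs = 44100
rng = np.random.default_rng(2)
frame = rng.normal(size=1024) * np.hanning(1024)      # stand-in fricative frame

power = np.abs(np.fft.rfft(frame)) ** 2
freqs = np.fft.rfftfreq(frame.size, d=1.0 / fs)
centroid_hz = np.sum(freqs * power) / np.sum(power)   # amplitude-weighted mean frequency
print(f"Spectral centroid ≈ {centroid_hz:.0f} Hz")
```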

  15. Estimating Earth's modal Q with epicentral stacking method

    NASA Astrophysics Data System (ADS)

    Chen, X.; Park, J. J.

    2014-12-01

    The attenuation rates of Earth's normal modes are the most important constraints on the anelastic state of Earth's deep interior. Yet current measurements of Earth's attenuation rates suffer from three sources of bias: the mode coupling effect, the beating effect, and the background noise, which together lead to significant uncertainties in the attenuation rates. In this research, we present a new technique to estimate the attenuation rates of Earth's normal modes - the epicentral stacking method. Rather than using the conventional geographical coordinate system, we instead deal with Earth's normal modes in the epicentral coordinate system, in which only 5 singlets rather than 2l+1 are excited. By stacking records from the same events at a series of time lags, we are able to recover the time-varying amplitudes of the 5 excited singlets, and thus measure their attenuation rates. The advantage of our method is that it enhances the SNR through stacking and minimizes the background noise effect, yet it avoids the beating effect problem commonly associated with the conventional multiplet stacking method by singling out the singlets. The attenuation rates measured from our epicentral stacking method seem to be reliable measurements in that: a) the measured attenuation rates are generally consistent among the 10 large events we used, except for a few events with unexplained larger attenuation rates; b) the log of the singlet amplitudes plotted against time lag falls very close to a straight line, suggesting an accurate estimation of the attenuation rate. The Q measurements from our method are consistently lower than previous modal Q measurements, but closer to the PREM model. For example, for mode 0S25, whose Coriolis force coupling is negligible, our measured Q is between 190 and 210 depending on the event, while the PREM modal Q of 0S25 is 205, and previous modal Q measurements are as high as 242. The difference between our results and previous measurements might be due to the lower
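
    The final fitting step described above can be sketched as a straight-line fit to the log of the stacked amplitude versus time lag, with Q recovered from the slope via A(t) ∝ exp(-πft/Q). The frequency, lag spacing and noise level below are illustrative, not values from the study.

```python
# Fit ln A(t) = const - (pi * f / Q) * t and convert the slope to Q.
# Frequency and amplitudes are synthetic stand-ins.
import numpy as np

f_hz = 3.3e-3                        # illustrative modal frequency (~a few mHz)
Q_true = 200.0
t = np.arange(0, 48) * 3600.0        # hourly time lags, in seconds
noise = 0.01 * np.random.default_rng(3).normal(size=t.size)
amp = np.exp(-np.pi * f_hz * t / Q_true) * (1 + noise)

slope, _ = np.polyfit(t, np.log(amp), 1)
Q_est = -np.pi * f_hz / slope
print(f"Estimated Q ≈ {Q_est:.0f}")
```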

  16. Effect of packing density on strain estimation by Fry method

    NASA Astrophysics Data System (ADS)

    Srivastava, Deepak; Ojha, Arun

    2015-04-01

    The Fry method is a graphical technique that uses the relative movement of material points, typically the grain centres or centroids, and yields the finite strain ellipse as the central vacancy of a point distribution. Application of the Fry method assumes an anticlustered and isotropic grain centre distribution in undistorted samples. This assumption is, however, difficult to test in practice. As an alternative, the sedimentological degree of sorting is routinely used as an approximation for the degree of clustering and anisotropy. The effect of sorting on the Fry method has already been explored by earlier workers. This study tests the effect of the tightness of packing, the packing density, which equals the ratio of the area occupied by all the grains to the total area of the sample. A practical advantage of using the degree of sorting or the packing density is that these parameters, unlike the degree of clustering or anisotropy, do not vary during a constant volume homogeneous distortion. Using computer graphics simulations and programming, we approach the issue of packing density in four steps: (i) generation of several sets of random point distributions such that each set has the same degree of sorting but differs from the other sets with respect to the packing density, (ii) two-dimensional homogeneous distortion of each point set by various known strain ratios and orientations, (iii) estimation of strain in each distorted point set by the Fry method, and (iv) error estimation by comparing the known strain and those given by the Fry method. Both the absolute errors and the relative root mean squared errors give consistent results. For a given degree of sorting, the Fry method gives better results in samples having greater than 30% packing density. This is because the grain centre distributions show stronger clustering and a greater degree of anisotropy with the decrease in the packing density. As compared to the degree of sorting alone, a
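
    A compact sketch of the two ingredients discussed above, the Fry construction (all pairwise separation vectors of grain centres) and the packing density (grain area over sample area), is given below with randomly generated grain centres and radii.

```python
# Fry plot point cloud and packing density for a synthetic set of grain centres.
import numpy as np

rng = np.random.default_rng(4)
centres = rng.uniform(0, 100, size=(300, 2))          # stand-in grain centres
radii = rng.uniform(1.0, 2.0, size=300)               # stand-in grain radii

# Fry construction: every pairwise separation vector, translated to a common origin.
diff = centres[:, None, :] - centres[None, :, :]
fry_points = diff[~np.eye(centres.shape[0], dtype=bool)].reshape(-1, 2)

# Packing density: total grain area divided by sample area (100 x 100 here).
packing_density = np.pi * np.sum(radii ** 2) / (100.0 * 100.0)
print(fry_points.shape, f"packing density ≈ {100 * packing_density:.0f}%")
```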

  17. Estimates of tropical bromoform emissions using an inversion method

    NASA Astrophysics Data System (ADS)

    Ashfold, M. J.; Harris, N. R. P.; Manning, A. J.; Robinson, A. D.; Warwick, N. J.; Pyle, J. A.

    2014-01-01

    Bromine plays an important role in ozone chemistry in both the troposphere and stratosphere. When measured by mass, bromoform (CHBr3) is thought to be the largest organic source of bromine to the atmosphere. While seaweed and phytoplankton are known to be dominant sources, the size and the geographical distribution of CHBr3 emissions remains uncertain. Particularly little is known about emissions from the Maritime Continent, which have usually been assumed to be large, and which appear to be especially likely to reach the stratosphere. In this study we aim to reduce this uncertainty by combining the first multi-annual set of CHBr3 measurements from this region, and an inversion process, to investigate systematically the distribution and magnitude of CHBr3 emissions. The novelty of our approach lies in the application of the inversion method to CHBr3. We find that local measurements of a short-lived gas like CHBr3 can be used to constrain emissions from only a relatively small, sub-regional domain. We then obtain detailed estimates of CHBr3 emissions within this area, which appear to be relatively insensitive to the assumptions inherent in the inversion process. We extrapolate this information to produce estimated emissions for the entire tropics (defined as 20° S-20° N) of 225 Gg CHBr3 yr-1. The ocean in the area we base our extrapolations upon is typically somewhat shallower, and more biologically productive, than the tropical average. Despite this, our tropical estimate is lower than most other recent studies, and suggests that CHBr3 emissions in the coastline-rich Maritime Continent may not be stronger than emissions in other parts of the tropics.

  18. A practical method of estimating energy expenditure during tennis play.

    PubMed

    Novas, A M P; Rowbottom, D G; Jenkins, D G

    2003-03-01

    This study aimed to develop a practical method of estimating energy expenditure (EE) during tennis. Twenty-four elite female tennis players first completed a tennis-specific graded test in which five different intensity levels were applied randomly. Each intensity level was intended to simulate a "game" of singles tennis and comprised six 14 s periods of activity alternated with 20 s of active rest. Oxygen consumption (VO2) and heart rate (HR) were measured continuously and each player's rating of perceived exertion (RPE) was recorded at the end of each intensity level. Rate of energy expenditure (EE(VO2)) during the test was calculated using the sum of VO2 during play and the 'O2 debt' during recovery, divided by the duration of the activity. There were significant individual linear relationships between EE(VO2) and RPE, and between EE(VO2) and HR (r ≥ 0.89 and r ≥ 0.93; p < 0.05). On a second occasion, six players completed a 60-min singles tennis match during which VO2, HR and RPE were recorded; EE(VO2) was compared with EE predicted from the previously derived RPE and HR regression equations. Analysis found that EE(VO2) was overestimated by EE(RPE) (92 +/- 76 kJ x h(-1)) and EE(HR) (435 +/- 678 kJ x h(-1)), but the error of estimation for EE(RPE) (t = -3.01; p = 0.03) was less than 5%, whereas for EE(HR) the error was 20.7%. The results of the study show that RPE can be used to estimate the energetic cost of playing tennis.
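
    The calibration-and-prediction idea can be sketched as an individual linear fit of EE(VO2) against RPE followed by prediction for a match; all numbers below are invented for illustration and are not the study's data.

```python
# Individual linear calibration of energy expenditure against RPE, then
# prediction for a match and a percent error against a hypothetical measurement.
import numpy as np

rpe = np.array([9, 11, 13, 15, 17], dtype=float)            # graded-test RPE
ee_vo2 = np.array([900, 1150, 1400, 1650, 1900], float)     # kJ/h from VO2 + O2 debt

slope, intercept = np.polyfit(rpe, ee_vo2, 1)

match_rpe = 14.0
ee_pred = slope * match_rpe + intercept
ee_measured = 1480.0                                         # hypothetical match value
error_pct = 100 * (ee_pred - ee_measured) / ee_measured
print(f"Predicted EE = {ee_pred:.0f} kJ/h, error = {error_pct:+.1f}%")
```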

  19. A robust method for estimating optimal treatment regimes.

    PubMed

    Zhang, Baqun; Tsiatis, Anastasios A; Laber, Eric B; Davidian, Marie

    2012-12-01

    A treatment regime is a rule that assigns a treatment, among a set of possible treatments, to a patient as a function of his/her observed characteristics, hence "personalizing" treatment to the patient. The goal is to identify the optimal treatment regime that, if followed by the entire population of patients, would lead to the best outcome on average. Given data from a clinical trial or observational study, for a single treatment decision, the optimal regime can be found by assuming a regression model for the expected outcome conditional on treatment and covariates, where, for a given set of covariates, the optimal treatment is the one that yields the most favorable expected outcome. However, treatment assignment via such a regime is suspect if the regression model is incorrectly specified. Recognizing that, even if misspecified, such a regression model defines a class of regimes, we instead consider finding the optimal regime within such a class by finding the regime that optimizes an estimator of overall population mean outcome. To take into account possible confounding in an observational study and to increase precision, we use a doubly robust augmented inverse probability weighted estimator for this purpose. Simulations and application to data from a breast cancer clinical trial demonstrate the performance of the method. PMID:22550953
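
    A heavily simplified sketch of the doubly robust (AIPW) value estimator for one fixed candidate regime is shown below, using scikit-learn fits as stand-ins for the working propensity and outcome models and simulated data; the paper goes further by optimizing this value over a class of regimes.

```python
# AIPW estimate of the mean outcome under a candidate regime d(x), on simulated data.
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(5)
n = 2000
x = rng.normal(size=(n, 1))
a = rng.binomial(1, 0.5, size=n)                     # randomized treatment
y = 1.0 + x[:, 0] + a * (0.5 - x[:, 0]) + rng.normal(size=n)

def regime(x):                                       # candidate rule: treat if x < 0.5
    return (x[:, 0] < 0.5).astype(int)

prop = LogisticRegression().fit(x, a)                # working propensity model
out = LinearRegression().fit(np.c_[x, a], y)         # working outcome model

d = regime(x)
pi_d = np.where(d == 1, prop.predict_proba(x)[:, 1], prop.predict_proba(x)[:, 0])
m_d = out.predict(np.c_[x, d])                       # predicted outcome under d(x)
c_d = (a == d).astype(float)                         # indicator of following the regime

value = np.mean(c_d * y / pi_d - (c_d - pi_d) / pi_d * m_d)
print(f"AIPW estimate of mean outcome under regime d: {value:.3f}")
```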

  20. Estimating rotavirus vaccine effectiveness in Japan using a screening method

    PubMed Central

    Araki, Kaoru; Hara, Megumi; Sakanishi, Yuta; Shimanoe, Chisato; Nishida, Yuichiro; Matsuo, Muneaki; Tanaka, Keitaro

    2016-01-01

    Rotavirus gastroenteritis is a highly contagious, acute viral disease that imposes a significant health burden worldwide. In Japan, rotavirus vaccines have been commercially available since 2011 for voluntary vaccination, but vaccine coverage and effectiveness have not been evaluated. In the absence of a vaccination registry in Japan, vaccination coverage in the general population was estimated according to the number of vaccines supplied by the manufacturer, the number of children who received financial support for vaccination, and the size of the target population. Patients with rotavirus gastroenteritis were identified by reviewing the medical records of all children who consulted 6 major hospitals in Saga Prefecture with gastroenteritis symptoms. Vaccination status among these patients was investigated by reviewing their medical records or interviewing their guardians by telephone. Vaccine effectiveness was determined using a screening method. Vaccination coverage increased with time, and it was twice as high in municipalities where the vaccination fee was subsidized. In the 2012/13 season, vaccination coverage in Saga Prefecture was 14.9%, whereas the proportion of patients vaccinated was 5.1% among those with clinically diagnosed rotavirus gastroenteritis and 1.9% among those hospitalized for rotavirus gastroenteritis. Thus, vaccine effectiveness was estimated as 69.5% and 88.8%, respectively. This is the first study to evaluate rotavirus vaccination coverage and effectiveness in Japan since vaccination began. PMID:26680277
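
    The screening-method calculation behind these estimates uses only the proportion of cases vaccinated (PCV) and the population coverage (PPV); the sketch below plugs in the figures quoted in the abstract (small rounding differences are expected).

```python
# Screening-method vaccine effectiveness:
# VE = 1 - [PCV / (1 - PCV)] * [(1 - PPV) / PPV]
def screening_ve(pcv, ppv):
    return 1.0 - (pcv / (1.0 - pcv)) * ((1.0 - ppv) / ppv)

ppv = 0.149   # 2012/13 coverage in Saga Prefecture
print(f"clinically diagnosed: VE ≈ {100 * screening_ve(0.051, ppv):.1f}%")   # ~69%
print(f"hospitalized:         VE ≈ {100 * screening_ve(0.019, ppv):.1f}%")   # ~89%
```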

  1. Predictive methods for estimating pesticide flux to air

    SciTech Connect

    Woodrow, J.E.; Seiber, J.N.

    1996-10-01

    Published evaporative flux values for pesticides volatilizing from soil, plants, and water were correlated with compound vapor pressures (VP), modified by compound properties appropriate to the treated matrix (e.g., soil adsorption coefficient [Koc], water solubility [Sw]). These correlations were formulated as Ln-Ln plots with correlation coefficients (r^2) in the range 0.93-0.99: (1) soil surface - Ln flux vs Ln (VP/[Koc x Sw]); (2) soil incorporation - Ln flux vs Ln [(VP x AR)/(Koc x Sw x d)] (AR = application rate, d = incorporation depth); (3) plants - Ln flux vs Ln VP; and (4) water - Ln (flux/water conc) vs Ln (VP/Sw). Using estimated flux values from the plant correlation as source terms in the EPA's SCREEN-2 dispersion model gave downwind concentrations that agreed to within 65-114% of measured concentrations. Further validation using other treated matrices is in progress. These predictive methods for estimating flux, when coupled with downwind dispersion modeling, provide tools for limiting downwind exposures.
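
    The correlation form for the soil-surface case can be sketched as an ordinary ln-ln regression; the compound properties and flux values below are placeholders, not the published data set.

```python
# ln(flux) regressed against ln(VP / (Koc * Sw)) for the soil-surface case.
import numpy as np

vp  = np.array([1e-3, 5e-3, 2e-2, 1e-1])      # vapor pressure (placeholder units)
koc = np.array([500., 200., 100., 50.])       # soil adsorption coefficient
sw  = np.array([10., 30., 100., 300.])        # water solubility
flux = np.array([0.02, 0.3, 4.0, 60.0])       # measured flux (placeholder)

x = np.log(vp / (koc * sw))
y = np.log(flux)
b, a = np.polyfit(x, y, 1)
r2 = np.corrcoef(x, y)[0, 1] ** 2
print(f"ln(flux) = {a:.2f} + {b:.2f} ln(VP/(Koc*Sw)),  r^2 = {r2:.3f}")

# Predicted flux for a new compound (placeholder properties):
print(np.exp(a + b * np.log(5e-2 / (80.0 * 150.0))))
```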

  2. An automatic iris occlusion estimation method based on high-dimensional density estimation.

    PubMed

    Li, Yung-Hui; Savvides, Marios

    2013-04-01

    Iris masks play an important role in iris recognition. They indicate which part of the iris texture map is useful and which part is occluded or contaminated by noisy image artifacts such as eyelashes, eyelids, eyeglasses frames, and specular reflections. The accuracy of the iris mask is extremely important. The performance of the iris recognition system will decrease dramatically when the iris mask is inaccurate, even when the best recognition algorithm is used. Traditionally, rule-based algorithms have been used to estimate iris masks from iris images. However, the accuracy of the iris masks generated this way is questionable. In this work, we propose to use Figueiredo and Jain's Gaussian Mixture Models (FJ-GMMs) to model the underlying probabilistic distributions of both valid and invalid regions on iris images. We also explored possible features and found that the Gabor Filter Bank (GFB) provides the most discriminative information for our goal. Finally, we applied the Simulated Annealing (SA) technique to optimize the parameters of the GFB in order to achieve the best recognition rate. Experimental results show that the masks generated by the proposed algorithm increase the iris recognition rate on both the ICE2 and UBIRIS datasets, verifying the effectiveness and importance of our proposed method for iris occlusion estimation. PMID:22868651
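
    A very rough sketch of the classification idea is given below, with scikit-learn's standard GaussianMixture standing in for FJ-GMMs and random vectors standing in for Gabor-filter-bank features; it is meant only to show how region likelihoods would be compared.

```python
# Label feature vectors as valid or occluded by comparing mixture-model likelihoods.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)
feat_valid    = rng.normal(0.0, 1.0, size=(500, 8))   # placeholder GFB features (valid iris)
feat_occluded = rng.normal(2.0, 1.0, size=(500, 8))   # placeholder GFB features (occlusions)

gmm_valid    = GaussianMixture(n_components=3, random_state=0).fit(feat_valid)
gmm_occluded = GaussianMixture(n_components=3, random_state=0).fit(feat_occluded)

test = rng.normal(1.0, 1.0, size=(10, 8))
mask = gmm_valid.score_samples(test) > gmm_occluded.score_samples(test)  # True = keep pixel
print(mask)
```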

  3. [Methods for the estimation of the renal function].

    PubMed

    Fontseré Baldellou, Néstor; Bonal I Bastons, Jordi; Romero González, Ramón

    2007-10-13

    Chronic kidney disease is one of the pathologies with the highest incidence and prevalence in present-day health systems. The ambulatory application of methods that allow suitable detection, monitoring and stratification of renal function is of crucial importance. Because of the imprecision of serum creatinine on its own, a set of predictive equations for the estimation of the glomerular filtration rate has been developed. Nevertheless, it is essential for the physician to know their limitations: situations of normal renal function and hyperfiltration, certain associated pathologies, and extremes of nutritional status and age. In these cases, the application of isotopic techniques for the calculation of renal function is more advisable.
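
    As an example of the kind of creatinine-based predictive equation the article discusses, the sketch below implements the widely used Cockcroft-Gault formula (which strictly estimates creatinine clearance rather than measured GFR); it is given only for illustration and shares the limitations noted above.

```python
# Cockcroft-Gault creatinine clearance estimate (mL/min).
def cockcroft_gault(age_years, weight_kg, serum_creatinine_mg_dl, female=False):
    crcl = (140 - age_years) * weight_kg / (72.0 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl

print(f"{cockcroft_gault(60, 70, 1.1, female=True):.0f} mL/min")
```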

  4. Strengths and Limitations of Period Estimation Methods for Circadian Data

    PubMed Central

    Troup, Eilidh; Halliday, Karen J.; Millar, Andrew J.

    2014-01-01

    A key step in the analysis of circadian data is to make an accurate estimate of the underlying period. There are many different techniques and algorithms for determining period, all with different assumptions and with differing levels of complexity. Choosing which algorithm, which implementation and which measures of accuracy to use can offer many pitfalls, especially for the non-expert. We have developed the BioDare system, an online service allowing data-sharing (including public dissemination), data-processing and analysis. Circadian experiments are the main focus of BioDare hence performing period analysis is a major feature of the system. Six methods have been incorporated into BioDare: Enright and Lomb-Scargle periodograms, FFT-NLLS, mFourfit, MESA and Spectrum Resampling. Here we review those six techniques, explain the principles behind each algorithm and evaluate their performance. In order to quantify the methods' accuracy, we examine the algorithms against artificial mathematical test signals and model-generated mRNA data. Our re-implementation of each method in Java allows meaningful comparisons of the computational complexity and computing time associated with each algorithm. Finally, we provide guidelines on which algorithms are most appropriate for which data types, and recommendations on experimental design to extract optimal data for analysis. PMID:24809473
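
    One of the six methods listed, the Lomb-Scargle periodogram, can be sketched directly with SciPy on an unevenly sampled synthetic circadian series, as below; this is not the BioDare Java implementation.

```python
# Lomb-Scargle period estimate for an unevenly sampled synthetic circadian series.
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(7)
t = np.sort(rng.uniform(0, 120, size=200))            # hours, uneven sampling
y = np.sin(2 * np.pi * t / 24.5) + 0.3 * rng.normal(size=t.size)
y -= y.mean()

periods = np.linspace(18, 32, 500)                    # candidate periods (h)
omega = 2 * np.pi / periods                           # angular frequencies
power = lombscargle(t, y, omega)
print(f"Estimated period ≈ {periods[np.argmax(power)]:.1f} h")
```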

  5. A new rapid method for rockfall energies and distances estimation

    NASA Astrophysics Data System (ADS)

    Giacomini, Anna; Ferrari, Federica; Thoeni, Klaus; Lambert, Cedric

    2016-04-01

    Rockfalls are characterized by long travel distances and significant energies. Over the last decades, three main methods have been proposed in the literature to assess the rockfall runout: empirical, process-based and GIS-based methods (Dorren, 2003). Process-based methods take into account the physics of rockfall by simulating the motion of a falling rock along a slope and they are generally based on a probabilistic rockfall modelling approach that allows for taking into account the uncertainties associated with the rockfall phenomenon. Their application has the advantage of evaluating the energies, bounce heights and distances along the path of a falling block, hence providing valuable information for the design of mitigation measures (Agliardi et al., 2009); however, the implementation of rockfall simulations can be time-consuming and data-demanding. This work focuses on the development of a new methodology for estimating the expected kinetic energies and distances of the first impact at the base of a rock cliff, subject to the conditions that the geometry of the cliff and the properties of the representative block are known. The method is based on an extensive two-dimensional sensitivity analysis, conducted by means of kinematic simulations based on probabilistic modelling of two-dimensional rockfall trajectories (Ferrari et al., 2016). To account for the uncertainty associated with the estimation of the input parameters, the study was based on 78400 rockfall scenarios performed by systematically varying the input parameters that are likely to affect the block trajectory, its energy and distance at the base of the rock wall. The variation of the geometry of the rock cliff (in terms of height and slope angle), the roughness of the rock surface and the properties of the outcropping material were considered. A simplified and idealized rock wall geometry was adopted. The analysis of the results allowed us to find empirical laws that relate impact energies

  6. A Method for Estimation of Death Tolls in Disastrous Earthquake

    NASA Astrophysics Data System (ADS)

    Pai, C.; Tien, Y.; Teng, T.

    2004-12-01

    Fatality tolls caused by a disastrous earthquake are among the most important items of earthquake damage and losses. If the potential tolls and the distribution of fatalities in individual districts can be estimated precisely as soon as the earthquake occurs, it not only makes emergency programs and disaster management more effective but also supplies critical information for planning and managing the disaster and for allotting rescue manpower and medical resources in a timely manner. In this study, we estimate the death tolls caused by the Chi-Chi earthquake in individual districts based on the Attributive Database of Victims, population data, digital maps and Geographic Information Systems. In general, many factors are involved, including the characteristics of ground motions, geological conditions, types and usage habits of buildings, distribution of population and socio-economic conditions, all of which are related to the damage and losses induced by a disastrous earthquake. The density of seismic stations in Taiwan is at present the greatest in the world, and complete seismic data are readily available from the Central Weather Bureau's earthquake rapid-reporting systems, mostly within about a minute or less after an earthquake. It therefore becomes possible to estimate earthquake death tolls in Taiwan from this preliminary information. First, we form the arithmetic mean of the three components of the Peak Ground Acceleration (PGA) to give a PGA Index for each seismic station, according to the mainshock data of the Chi-Chi earthquake. The Kriging interpolation method and GIS software are then used, with the PGA Index and the geographical coordinates of the individual stations, to supply the distribution of iso-seismic intensity contours in all districts and to resolve the problem of districts that contain no seismic station. The population density depends on
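
    The first step described above, a per-station PGA index as the arithmetic mean of the three ground-motion components, is trivial to sketch; the station values below are invented.

```python
# PGA index per station = arithmetic mean of the three ground-motion components.
import numpy as np

# columns: vertical, north-south, east-west PGA (gal); one row per station
pga = np.array([[120.0, 310.0, 280.0],
                [ 45.0,  98.0, 110.0],
                [200.0, 520.0, 470.0]])
pga_index = pga.mean(axis=1)
print(pga_index)    # one index value per station, ready for kriging interpolation
```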

  7. Seismic Methods of Identifying Explosions and Estimating Their Yield

    NASA Astrophysics Data System (ADS)

    Walter, W. R.; Ford, S. R.; Pasyanos, M.; Pyle, M. L.; Myers, S. C.; Mellors, R. J.; Pitarka, A.; Rodgers, A. J.; Hauk, T. F.

    2014-12-01

    Seismology plays a key national security role in detecting, locating, identifying and determining the yield of explosions from a variety of causes, including accidents, terrorist attacks and nuclear testing treaty violations (e.g. Koper et al., 2003, 1999; Walter et al. 1995). A collection of mainly empirical forensic techniques has been successfully developed over many years to obtain source information on explosions from their seismic signatures (e.g. Bowers and Selby, 2009). However, a lesson from the three declared DPRK nuclear explosions since 2006 is that our historic collection of data may not be representative of future nuclear test signatures (e.g. Selby et al., 2012). To have confidence in identifying future explosions amongst the background of other seismic signals, and in accurately estimating their yield, we need to put our empirical methods on a firmer physical footing. Goals of current research are to improve our physical understanding of the mechanisms by which explosions generate S- and surface waves, and to advance our ability to numerically model and predict them. As part of that process we are re-examining regional seismic data from a variety of nuclear test sites including the DPRK and the former Nevada Test Site (now the Nevada National Security Site (NNSS)). Newer relative location and amplitude techniques can be employed to better quantify differences between explosions and to understand those differences in terms of depth, media and other properties. We are also making use of the Source Physics Experiments (SPE) at NNSS. The SPE chemical explosions are explicitly designed to improve our understanding of emplacement and source material effects on the generation of shear and surface waves (e.g. Snelson et al., 2013). Finally, we are also exploring the value of combining seismic information with other technologies, including acoustic and InSAR techniques, to better understand the source characteristics. Our goal is to improve our explosion models
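
    Yield estimation of the kind mentioned above often rests on empirical magnitude-yield relations of the form mb = a + b log10(Y). The sketch below uses generic placeholder coefficients, not values from this abstract or from any particular test site.

```python
# Invert an assumed empirical magnitude-yield relation mb = a + b*log10(Y) for yield.
a, b = 4.45, 0.75            # hypothetical calibration constants (illustrative only)

def yield_kt_from_mb(mb):
    return 10 ** ((mb - a) / b)

for mb in (4.5, 5.0, 5.5):
    print(f"mb {mb:.1f} -> Y ≈ {yield_kt_from_mb(mb):.1f} kt (given these coefficients)")
```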

  8. The Equivalence of Two Methods of Parameter Estimation for the Rasch Model.

    ERIC Educational Resources Information Center

    Blackwood, Larry G.; Bradley, Edwin L.

    1989-01-01

    Two methods of estimating parameters in the Rasch model are compared. The equivalence of likelihood estimations from the model of G. J. Mellenbergh and P. Vijn (1981) and from usual unconditional maximum likelihood (UML) estimation is demonstrated. Mellenbergh and Vijn's model is a convenient method of calculating UML estimates. (SLD)

  9. Application of age estimation methods based on teeth eruption: how easy is Olze method to use?

    PubMed

    De Angelis, D; Gibelli, D; Merelli, V; Botto, M; Ventura, F; Cattaneo, C

    2014-09-01

    The development of new methods for age estimation has become with time an urgent issue because of increasing immigration, in order to estimate accurately the age of those subjects who lack valid identity documents. Methods of age estimation are divided into skeletal and dental ones, and among the latter, Olze's method is one of the most recent, since it was introduced in 2010 with the aim of identifying the legal ages of 18 and 21 years by evaluating the different stages of development of the periodontal ligament of the third molars with closed root apices. The present study aims at verifying the applicability of the method to daily forensic practice, with special focus on interobserver repeatability. Olze's method was applied by three different observers (two physicians and one dentist without specific training in Olze's method) to 61 orthopantomograms from subjects of mixed ethnicity aged between 16 and 51 years. The analysis took into consideration the lower third molars. The results provided by the different observers were then compared in order to verify the interobserver error. Results showed that the interobserver error varies between 43 and 57% for the right lower third molar (M48) and between 23 and 49% for the left lower third molar (M38). The chi-square test did not show significant differences according to the side of the teeth or the type of professional figure. The results prove that Olze's method is not easy to apply when used by personnel without adequate training, because of an intrinsic interobserver error. Since it is however a crucial method in age determination, it should be used only by experienced observers after intensive and specific training.

  10. Novel method of channel estimation for WCDMA downlink

    NASA Astrophysics Data System (ADS)

    Sheng, Bin; You, XiaoHu

    2001-10-01

    A novel scheme for channel estimation is proposed in this paper for the WCDMA downlink, where a pilot channel is transmitted simultaneously with a data traffic channel. The proposed scheme exploits channel information in both the pilot and data traffic channels by combining channel estimates from the two channels. It is demonstrated by computer simulations that the performance of the Rake receiver is noticeably improved.

  11. Software Effort Estimation Accuracy: A Comparative Study of Estimations Based on Software Sizing and Development Methods

    ERIC Educational Resources Information Center

    Lafferty, Mark T.

    2010-01-01

    The number of project failures and those projects completed over cost and over schedule has been a significant issue for software project managers. Among the many reasons for failure, inaccuracy in software estimation--the basis for project bidding, budgeting, planning, and probability estimates--has been identified as a root cause of a high…

  12. A Simulation Study Comparison of Bayesian Estimation with Conventional Methods for Estimating Unknown Change Points

    ERIC Educational Resources Information Center

    Wang, Lijuan; McArdle, John J.

    2008-01-01

    The main purpose of this research is to evaluate the performance of a Bayesian approach for estimating unknown change points using Monte Carlo simulations. The univariate and bivariate unknown change point mixed models were presented and the basic idea of the Bayesian approach for estimating the models was discussed. The performance of Bayesian…

  13. Estimation of HIV infection and incubation via state space models.

    PubMed

    Tan, W Y; Ye, Z

    2000-09-01

    By using the state space model (Kalman filter model) of the HIV epidemic, in this paper we have developed a general Bayesian procedure to estimate simultaneously the HIV infection distribution, the HIV incubation distribution, the numbers of susceptible people, infective people and AIDS cases. The basic approach is to use the Gibbs sampling method combined with the weighted bootstrap method. We have applied this method to the San Francisco AIDS incidence data from January 1981 to December 1992. The results show clearly that both the probability density function of the HIV infection and the probability density function of the HIV incubation are curves with two peaks. The results of the HIV infection distribution are clearly consistent with the finding by Tan et al. [W.Y. Tan, S.C. Tang, S.R. Lee, Estimation of HIV seroconversion and effects of age in San Francisco homosexual populations, J. Appl. Stat. 25 (1998) 85]. The results of HIV incubation distribution seem to confirm the staged model used by Satten and Longini [G. Satten, I. Longini, Markov chain with measurement error: estimating the 'true' course of marker of the progression of human immunodeficiency virus disease, Appl. Stat. 45 (1996) 275]. PMID:10942785

  14. Variational methods to estimate terrestrial ecosystem model parameters

    NASA Astrophysics Data System (ADS)

    Delahaies, Sylvain; Roulstone, Ian

    2016-04-01

    Carbon is at the basis of the chemistry of life. Its ubiquity in the Earth system is the result of complex recycling processes. Present in the atmosphere in the form of carbon dioxide, it is taken up by marine and terrestrial ecosystems and stored within living biomass and decaying organic matter. Then soil chemistry and a non-negligible amount of time transform the dead matter into fossil fuels. Throughout this cycle, carbon dioxide is released into the atmosphere through respiration and the combustion of fossil fuels. Model-data fusion techniques allow us to combine our understanding of these complex processes with an ever-growing amount of observational data to help improve models and predictions. The data assimilation linked ecosystem carbon (DALEC) model is a simple box model simulating the carbon budget allocation for terrestrial ecosystems. Over the last decade several studies have demonstrated the relative merit of various inverse modelling strategies (MCMC, EnKF, 4D-Var) to estimate model parameters and initial carbon stocks for DALEC and to quantify the uncertainty in the predictions. Despite its simplicity, DALEC represents the basic processes at the heart of more sophisticated models of the carbon cycle. Using adjoint-based methods we study inverse problems for DALEC with various data streams (8-day MODIS LAI, monthly MODIS LAI, NEE). The framework of constrained optimization allows us to incorporate ecological common sense into the variational framework. We use resolution matrices to study the nature of the inverse problems and to obtain data importance and information content for the different types of data. We study how varying the time step affects the solutions, and we show how "spin up" naturally improves the conditioning of the inverse problems.

  15. Analytic Method to Estimate Particle Acceleration in Flux Ropes

    NASA Technical Reports Server (NTRS)

    Guidoni, S. E.; Karpen, J. T.; DeVore, C. R.

    2015-01-01

    The mechanism that accelerates particles to the energies required to produce the observed high-energy emission in solar flares is not well understood. Drake et al. (2006) proposed a kinetic mechanism for accelerating electrons in contracting magnetic islands formed by reconnection. In this model, particles that gyrate around magnetic field lines transit from island to island, increasing their energy by Fermi acceleration in those islands that are contracting. Based on these ideas, we present an analytic model to estimate the energy gain of particles orbiting around field lines inside a flux rope (2.5D magnetic island). We calculate the change in the velocity of the particles as the flux rope evolves in time. The method assumes a simple profile for the magnetic field of the evolving island; it can be applied to any case where flux ropes are formed. In our case, the flux-rope evolution is obtained from our recent high-resolution, compressible 2.5D MHD simulations of breakout eruptive flares. The simulations allow us to resolve in detail the generation and evolution of large-scale flux ropes as a result of sporadic and patchy reconnection in the flare current sheet. Our results show that the initial energy of particles can be increased by 2-5 times in a typical contracting island, before the island reconnects with the underlying arcade. Therefore, particles need to transit only from 3-7 islands to increase their energies by two orders of magnitude. These macroscopic regions, filled with a large number of particles, may explain the large observed rates of energetic electron production in flares. We conclude that this mechanism is a promising candidate for electron acceleration in flares, but further research is needed to extend our results to 3D flare conditions.
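
    The island-count argument above can be checked with one line of arithmetic: with an energy gain factor of 2-5 per contracting island, the number of islands needed for a factor of ~100 gain is log(100)/log(gain).

```python
# Check: how many contracting islands give two orders of magnitude in energy?
import math

for gain in (2.0, 5.0):
    n_islands = math.log(100.0) / math.log(gain)
    print(f"gain {gain:g} per island -> ~{n_islands:.1f} islands for a 100x increase")
# gain 2 -> ~6.6 islands, gain 5 -> ~2.9 islands, consistent with "3-7 islands".
```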

  16. On-line estimation of nonlinear physical systems

    USGS Publications Warehouse

    Christakos, G.

    1988-01-01

    Recursive algorithms for estimating states of nonlinear physical systems are presented. Orthogonality properties are rediscovered and the associated polynomials are used to linearize state and observation models of the underlying random processes. This requires some key hypotheses regarding the structure of these processes, which may then take account of a wide range of applications. The latter include streamflow forecasting, flood estimation, environmental protection, earthquake engineering, and mine planning. The proposed estimation algorithm may be compared favorably to Taylor series-type filters, nonlinear filters which approximate the probability density by Edgeworth or Gram-Charlier series, as well as to conventional statistical linearization-type estimators. Moreover, the method has several advantages over nonrecursive estimators like disjunctive kriging. To link theory with practice, some numerical results for a simulated system are presented, in which responses from the proposed and extended Kalman algorithms are compared. ?? 1988 International Association for Mathematical Geology.

  17. [Using Lamendin and Meindl-Lovejoy methods for age at death estimation of the unknown person].

    PubMed

    Bednarek, Jarosław; Engelgardt, Piotr; Bloch-Bogusławska, Elzbieta; Sliwka, Karol

    2002-01-01

    The paper presents the precise description of two methods used for age estimation on the base of single rooted tooth and cranial suture obliteration. Using the methods mentioned above, the age at death of the unknown person was estimated. A comparison of the estimated age and chronological age derived after identification, showed high usefulness of the mentioned methods.

  18. Bayesian Methods for Radiation Detection and Dosimetry

    SciTech Connect

    Peter G. Groer

    2002-09-29

    We performed work in three areas: radiation detection, and external and internal radiation dosimetry. In radiation detection we developed Bayesian techniques to estimate the net activity of high and low activity radioactive samples. These techniques have the advantage that the remaining uncertainty about the net activity is described by probability densities. Graphs of the densities show the uncertainty in pictorial form. Figure 1 below demonstrates this point. We applied stochastic processes to obtain Bayesian estimates of 222Rn-daughter products from observed counting rates. In external radiation dosimetry we studied and developed Bayesian methods to estimate radiation doses to an individual from radiation-induced chromosome aberrations. We analyzed chromosome aberrations after exposure to gammas and neutrons and developed a method for dose estimation after criticality accidents. The research in internal radiation dosimetry focused on parameter estimation for compartmental models from observed compartmental activities. From the estimated probability densities of the model parameters we were able to derive the densities of compartmental activities for a two-compartment catenary model at different times. We also calculated the average activities and their standard deviations for a simple two-compartment model.
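
    A hedged sketch, not the authors' exact formulation, of a Bayesian net-activity estimate is given below: Poisson counts for sample-plus-background and background alone, conjugate Gamma posteriors on the two rates, and Monte Carlo samples of their difference; all counts and times are invented.

```python
# Monte Carlo posterior for a net counting rate from gross and background counts.
import numpy as np

gross_counts, t_gross = 180, 600.0       # counts, seconds (illustrative)
bkg_counts,   t_bkg   = 130, 600.0

rng = np.random.default_rng(8)
# Gamma(shape = counts + 1, scale = 1/time) posteriors under a flat prior on each rate
lam_gross = rng.gamma(gross_counts + 1, 1.0 / t_gross, size=100_000)
lam_bkg   = rng.gamma(bkg_counts + 1, 1.0 / t_bkg, size=100_000)
net = lam_gross - lam_bkg                # posterior samples of the net rate (cps)

print(f"posterior mean net rate = {net.mean():.4f} cps, "
      f"95% interval = ({np.quantile(net, 0.025):.4f}, {np.quantile(net, 0.975):.4f})")
```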

  19. Dynamic State Estimation Utilizing High Performance Computing Methods

    SciTech Connect

    Schneider, Kevin P.; Huang, Zhenyu; Yang, Bo; Hauer, Matthew L.; Nieplocha, Jaroslaw

    2009-03-18

    The state estimation tools which are currently deployed in power system control rooms are based on a quasi-steady-state assumption. As a result, the suite of operational tools that rely on state estimation results as inputs do not have dynamic information available, and their accuracy is compromised. This paper presents an overview of the Kalman filtering process and then focuses on the implementation of the prediction component on multiple processors.
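
    For reference, the prediction component of a linear Kalman filter propagates both the state and its covariance; the sketch below uses small placeholder matrices rather than a power-system model.

```python
# Kalman filter prediction step for a linear state-space model x_k = A x_{k-1} + w.
import numpy as np

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])              # state-transition matrix (placeholder)
Q = 0.01 * np.eye(2)                    # process-noise covariance (placeholder)

def kalman_predict(x, P):
    x_pred = A @ x                      # predicted state
    P_pred = A @ P @ A.T + Q            # predicted state covariance
    return x_pred, P_pred

x0 = np.array([1.0, 0.0])
P0 = np.eye(2)
print(kalman_predict(x0, P0))
```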

  20. Estimation of Organ Activity using Four Different Methods of Background Correction in Conjugate View Method

    PubMed Central

    Shanei, Ahmad; Afshin, Maryam; Moslehi, Masoud; Rastaghi, Sedighe

    2015-01-01

    To make an accurate estimation of the uptake of radioactivity in an organ using the conjugate view method, corrections of physical factors, such as background activity, scatter, and attenuation are needed. The aim of this study was to evaluate the accuracy of four different methods for background correction in activity quantification of the heart in myocardial perfusion scans. The organ activity was calculated using the conjugate view method. A number of 22 healthy volunteers were injected with 17–19 mCi of 99mTc-methoxy-isobutyl-isonitrile (MIBI) at rest or during exercise. Images were obtained by a dual-headed gamma camera. Four methods for background correction were applied: (1) Conventional correction (referred to as the Gates' method), (2) Buijs method, (3) BgdA subtraction, (4) BgdB subtraction. To evaluate the accuracy of these methods, the results of the calculations using the above-mentioned methods were compared with the reference results. The calculated uptake in the heart using conventional method, Buijs method, BgdA subtraction, and BgdB subtraction methods was 1.4 ± 0.7% (P < 0.05), 2.6 ± 0.6% (P < 0.05), 1.3 ± 0.5% (P < 0.05), and 0.8 ± 0.3% (P < 0.05) of injected dose (I.D) at rest and 1.8 ± 0.6% (P > 0.05), 3.1 ± 0.8% (P > 0.05), 1.9 ± 0.8% (P < 0.05), and 1.2 ± 0.5% (P < 0.05) of I.D, during exercise. The mean estimated myocardial uptake of 99mTc-MIBI was dependent on the correction method used. Comparison among the four different methods of background activity correction applied in this study showed that the Buijs method was the most suitable method for background correction in myocardial perfusion scan. PMID:26955568
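
    The basic conjugate-view quantification that the background corrections feed into combines the geometric mean of opposed views with a transmission factor; the sketch below uses the standard textbook form without the source-thickness correction, and all numbers are invented.

```python
# Conjugate-view activity estimate: A = sqrt(I_ant * I_post / T) / C,
# with T = exp(-mu * L) the transmission through patient thickness L and
# C the system sensitivity.  I_ant and I_post are assumed already
# background-corrected (the subject of the paper); values are hypothetical.
import math

I_ant, I_post = 5200.0, 4300.0      # background-corrected count rates (cps)
mu, L = 0.12, 20.0                  # effective attenuation coeff (1/cm), thickness (cm)
C = 90.0                            # system sensitivity (cps per MBq)

T = math.exp(-mu * L)
activity_MBq = math.sqrt(I_ant * I_post / T) / C
print(f"estimated organ activity ≈ {activity_MBq:.1f} MBq")
```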

  1. Estimation of Organ Activity using Four Different Methods of Background Correction in Conjugate View Method.

    PubMed

    Shanei, Ahmad; Afshin, Maryam; Moslehi, Masoud; Rastaghi, Sedighe

    2015-01-01

    To make an accurate estimation of the uptake of radioactivity in an organ using the conjugate view method, corrections of physical factors, such as background activity, scatter, and attenuation are needed. The aim of this study was to evaluate the accuracy of four different methods for background correction in activity quantification of the heart in myocardial perfusion scans. The organ activity was calculated using the conjugate view method. A number of 22 healthy volunteers were injected with 17-19 mCi of (99m)Tc-methoxy-isobutyl-isonitrile (MIBI) at rest or during exercise. Images were obtained by a dual-headed gamma camera. Four methods for background correction were applied: (1) Conventional correction (referred to as the Gates' method), (2) Buijs method, (3) BgdA subtraction, (4) BgdB subtraction. To evaluate the accuracy of these methods, the results of the calculations using the above-mentioned methods were compared with the reference results. The calculated uptake in the heart using conventional method, Buijs method, BgdA subtraction, and BgdB subtraction methods was 1.4 ± 0.7% (P < 0.05), 2.6 ± 0.6% (P < 0.05), 1.3 ± 0.5% (P < 0.05), and 0.8 ± 0.3% (P < 0.05) of injected dose (I.D) at rest and 1.8 ± 0.6% (P > 0.05), 3.1 ± 0.8% (P > 0.05), 1.9 ± 0.8% (P < 0.05), and 1.2 ± 0.5% (P < 0.05) of I.D, during exercise. The mean estimated myocardial uptake of (99m)Tc-MIBI was dependent on the correction method used. Comparison among the four different methods of background activity correction applied in this study showed that the Buijs method was the most suitable method for background correction in myocardial perfusion scan. PMID:26955568

  2. Estimation of Organ Activity using Four Different Methods of Background Correction in Conjugate View Method.

    PubMed

    Shanei, Ahmad; Afshin, Maryam; Moslehi, Masoud; Rastaghi, Sedighe

    2015-01-01

    To make an accurate estimation of the uptake of radioactivity in an organ using the conjugate view method, corrections of physical factors, such as background activity, scatter, and attenuation are needed. The aim of this study was to evaluate the accuracy of four different methods for background correction in activity quantification of the heart in myocardial perfusion scans. The organ activity was calculated using the conjugate view method. A number of 22 healthy volunteers were injected with 17-19 mCi of (99m)Tc-methoxy-isobutyl-isonitrile (MIBI) at rest or during exercise. Images were obtained by a dual-headed gamma camera. Four methods for background correction were applied: (1) Conventional correction (referred to as the Gates' method), (2) Buijs method, (3) BgdA subtraction, (4) BgdB subtraction. To evaluate the accuracy of these methods, the results of the calculations using the above-mentioned methods were compared with the reference results. The calculated uptake in the heart using conventional method, Buijs method, BgdA subtraction, and BgdB subtraction methods was 1.4 ± 0.7% (P < 0.05), 2.6 ± 0.6% (P < 0.05), 1.3 ± 0.5% (P < 0.05), and 0.8 ± 0.3% (P < 0.05) of injected dose (I.D) at rest and 1.8 ± 0.6% (P > 0.05), 3.1 ± 0.8% (P > 0.05), 1.9 ± 0.8% (P < 0.05), and 1.2 ± 0.5% (P < 0.05) of I.D, during exercise. The mean estimated myocardial uptake of (99m)Tc-MIBI was dependent on the correction method used. Comparison among the four different methods of background activity correction applied in this study showed that the Buijs method was the most suitable method for background correction in myocardial perfusion scan.

  3. An Investigation of Methods for Improving Estimation of Test Score Distributions.

    ERIC Educational Resources Information Center

    Hanson, Bradley A.

    Three methods of estimating test score distributions that may improve on using the observed frequencies (OBFs) as estimates of a population test score distribution are considered: the kernel method (KM); the polynomial method (PM); and the four-parameter beta binomial method (FPBBM). The assumption each method makes about the smoothness of the…

  4. Food portion estimation by children with obesity: the effects of estimation method and food type.

    PubMed

    Friedman, Alinda; Bennett, Tesia G; Barbarich, Bobbi N; Keaschuk, Rachel A; Ball, Geoff D C

    2012-02-01

    Several factors influence children's ability to report accurate information about their dietary intake. To date, one understudied area of dietary assessment research relates to children's ability to estimate portion sizes of food. The purpose of this cross-sectional research was to examine food portion size estimation accuracy in 7- to 18-year-old children with obesity. Two within-subject experiments (Experiment 1: n=28, Experiment 2: n=27) were conducted in Edmonton, Alberta, Canada, during 2007-2008. Three types of portion size measurement aids (PSMAs) (eg, measuring cups and spoons, household objects [full and half-sized], and modeling clay) were counterbalanced in a Latin Square design for participants to estimate four types of foods (ie, solid, liquid, amorphous pieces, and amorphous masses). Analyses of variance conducted on percent of signed and absolute errors yielded significant PSMA type×food type interactions (P<0.01) in both experiments. Across all food types, for Experiments 1 and 2, measuring cups and spoons produced the least accurate estimates with respect to absolute error (54.2% and 53.1%, respectively), whereas modeling clay produced the most accurate estimates (40.6% and 33.2%, respectively). Half sizes of household objects also yielded enhanced accuracy (47.9% to 37.2%). Finally, there were significant differences in accuracy between amorphous pieces (eg, grapes) vs amorphous masses (eg, mashed potatoes; P<0.01), indicating that there are qualitative differences in how different amorphous foods are estimated. These data are relevant when collecting food intake data from children with obesity and indicate that different PSMAs may be needed to optimize food portion size estimation accuracy for different food types. PMID:22732463
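
    The two accuracy measures used above reduce to simple percent errors relative to the true portion size, as in the sketch below with a single hypothetical estimate.

```python
# Percent signed error and percent absolute error of a portion-size estimate.
def percent_signed_error(estimated, actual):
    return 100.0 * (estimated - actual) / actual

def percent_absolute_error(estimated, actual):
    return abs(percent_signed_error(estimated, actual))

print(percent_signed_error(150, 200), percent_absolute_error(150, 200))  # -25.0, 25.0
```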

  5. Iterative methods for distributed parameter estimation in parabolic PDE

    SciTech Connect

    Vogel, C.R.; Wade, J.G.

    1994-12-31

    The goal of the work presented is the development of effective iterative techniques for large-scale inverse or parameter estimation problems. In this extended abstract, a detailed description of the mathematical framework in which the authors view these problems is presented, followed by an outline of the ideas and algorithms developed. Distributed parameter estimation problems often arise in mathematical modeling with partial differential equations. They can be viewed as inverse problems; the 'forward problem' is that of using the fully specified model to predict the behavior of the system. The inverse or parameter estimation problem is: given the form of the model and some observed data from the system being modeled, determine the unknown parameters of the model. These problems are of great practical and mathematical interest, and the development of efficient computational algorithms is an active area of study.

  6. PHREATOPHYTE WATER USE ESTIMATED BY EDDY-CORRELATION METHODS.

    USGS Publications Warehouse

    Weaver, H.L.; Weeks, E.P.; Campbell, G.S.; Stannard, D.I.; Tanner, B.D.

    1986-01-01

    Water use was estimated for three phreatophyte communities: a saltcedar community and an alkali-Sacaton grass community in New Mexico, and a greasewood rabbit-brush-saltgrass community in Colorado. These water-use estimates were calculated from eddy-correlation measurements using three different analyses, since the direct eddy-correlation measurements did not satisfy a surface energy balance. The analysis that seems to be most accurate indicated the saltcedar community used from 58 to 87 cm (23 to 34 in.) of water each year. The other two communities used about two-thirds this quantity.
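
    The core eddy-correlation quantity is a covariance of high-frequency fluctuations; the sketch below computes a latent heat flux from synthetic vertical-wind and humidity series and is not tied to the communities or analyses of the study.

```python
# Latent heat flux from the covariance of vertical-wind and specific-humidity fluctuations.
import numpy as np

rng = np.random.default_rng(9)
n = 18000                                                   # e.g. 30 min at 10 Hz
w = rng.normal(0.0, 0.3, size=n)                            # vertical wind (m/s)
q = 0.008 + 0.00005 * w + rng.normal(0, 0.0002, size=n)     # specific humidity (kg/kg)

rho_air = 1.2            # air density (kg/m^3)
lambda_v = 2.45e6        # latent heat of vaporization (J/kg)

wp = w - w.mean()
qp = q - q.mean()
LE = rho_air * lambda_v * np.mean(wp * qp)                  # W/m^2
print(f"latent heat flux ≈ {LE:.0f} W/m^2")
```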

  7. Fast, moment-based estimation methods for delay network tomography

    SciTech Connect

    Lawrence, Earl Christophre; Michailidis, George; Nair, Vijayan N

    2008-01-01

    Consider the delay network tomography problem where the goal is to estimate distributions of delays at the link-level using data on end-to-end delays. These measurements are obtained using probes that are injected at nodes located on the periphery of the network and sent to other nodes also located on the periphery. Much of the previous literature deals with discrete delay distributions by discretizing the data into small bins. This paper considers more general models with a focus on computationally efficient estimation. The moment-based schemes presented here are designed to function well for larger networks and for applications like monitoring that require speedy solutions.

  8. Regional and longitudinal estimation of product lifespan distribution: a case study for automobiles and a simplified estimation method.

    PubMed

    Oguchi, Masahiro; Fuse, Masaaki

    2015-02-01

    Product lifespan estimates are important information for understanding progress toward sustainable consumption and for estimating the stocks and end-of-life flows of products. Previous publications have reported actual product lifespans; however, quantitative data are still limited for many countries and years. This study presents regional and longitudinal estimation of the lifespan distribution of consumer durables, taking passenger cars as an example, and proposes a simplified method for estimating product lifespan distribution. We estimated lifespan distribution parameters for 17 countries based on the age profile of in-use cars. Sensitivity analysis demonstrated that the shape parameter of the lifespan distribution can be replaced by a constant value for all the countries and years. This enabled a simplified estimation that does not require detailed data on the age profile. Applying the simplified method, we estimated the trend in average lifespans of passenger cars from 2000 to 2009 for 20 countries. Average lifespan differed greatly between countries (9-23 years) and was increasing in many countries. This suggests consumer behavior differs greatly among countries and has changed over time, even in developed countries. The results suggest that inappropriate assumptions of average lifespan may cause significant inaccuracy in estimating the stocks and end-of-life flows of products.
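
    Assuming a Weibull lifespan distribution (the abstract does not name the distribution explicitly), fixing the shape parameter reduces the estimation to recovering the scale from an average lifespan, as sketched below with illustrative values.

```python
# With a fixed Weibull shape, the scale follows from mean = scale * Gamma(1 + 1/shape).
from math import gamma

shape = 2.5                      # assumed constant shape parameter (illustrative)
average_lifespan_yr = 13.0       # e.g. an observed national average lifespan

scale = average_lifespan_yr / gamma(1.0 + 1.0 / shape)
print(f"Weibull scale ≈ {scale:.1f} years for shape {shape}")
```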

  9. Characterization, parameter estimation, and aircraft response statistics of atmospheric turbulence

    NASA Technical Reports Server (NTRS)

    Mark, W. D.

    1981-01-01

    A non-Gaussian three-component model of atmospheric turbulence is postulated that accounts for readily observable features of turbulence velocity records, their autocorrelation functions, and their spectra. Methods for computing probability density functions and mean exceedance rates of a generic aircraft response variable are developed using non-Gaussian turbulence characterizations readily extracted from velocity recordings. A maximum likelihood method is developed for optimal estimation of the integral scale and intensity of records possessing von Karman transverse or longitudinal spectra. Formulas for the variances of such parameter estimates are developed. The maximum likelihood and least-squares approaches are combined to yield a method for estimating the autocorrelation function parameters of a two-component model for turbulence.

  10. COMPARISON OF METHODS FOR ESTIMATING GROUND-WATER PUMPAGE FOR IRRIGATION.

    USGS Publications Warehouse

    Frenzel, Steven A.

    1985-01-01

    Ground-water pumpage for irrigation was measured at 32 sites on the eastern Snake River Plain in southern Idaho during 1983. Pumpage at these sites also was estimated by three commonly used methods, and pumpage estimates were compared to measured values to determine the accuracy of each estimate. Statistical comparisons of estimated and metered pumpage using an F-test showed that only estimates made using the instantaneous discharge method were not significantly different (α = 0.01) from metered values. Pumpage estimates made using the power consumption method reflect variability in pumping efficiency among sites. Pumpage estimates made using the crop-consumptive use method reflect variability in water-management practices. Pumpage estimates made using the instantaneous discharge method reflect variability in discharges at each site during the irrigation season.

  11. A Practical Method of Policy Analysis by Estimating Effect Size

    ERIC Educational Resources Information Center

    Phelps, James L.

    2011-01-01

    The previous articles on class size and other productivity research paint a complex and confusing picture of the relationship between policy variables and student achievement. Missing is a conceptual scheme capable of combining the seemingly unrelated research and dissimilar estimates of effect size into a unified structure for policy analysis and…

  12. Assessing Methods for Generalizing Experimental Impact Estimates to Target Populations

    ERIC Educational Resources Information Center

    Kern, Holger L.; Stuart, Elizabeth A.; Hill, Jennifer; Green, Donald P.

    2016-01-01

    Randomized experiments are considered the gold standard for causal inference because they can provide unbiased estimates of treatment effects for the experimental participants. However, researchers and policymakers are often interested in using a specific experiment to inform decisions about other target populations. In education research,…

  13. Assessment of in silico methods to estimate aquatic species sensitivity

    EPA Science Inventory

    Determining the sensitivity of a diversity of species to environmental contaminants continues to be a significant challenge in ecological risk assessment because toxicity data are generally limited to a few standard species. In many cases, QSAR models are used to estimate toxici...

  14. Estimation method for national methane emission from solid waste landfills

    NASA Astrophysics Data System (ADS)

    Kumar, Sunil; Gaikwad, S. A.; Shekdar, A. V.; Kshirsagar, P. S.; Singh, R. N.

    In keeping with global efforts to inventory methane emissions, municipal solid waste (MSW) landfills are recognised as one of the major sources of anthropogenic methane. In India, most solid waste is disposed of by landfilling in low-lying areas located in and around urban centres, resulting in the generation of large quantities of biogas containing a sizeable proportion of methane. After a critical review of the literature on methodologies for estimating methane emissions, the default methodology of the IPCC 1996 guidelines was used for estimation. However, since the default methodology assumes that all potential methane is emitted in the year of waste deposition, a triangular model for landfill biogas generation has been proposed and the results are compared. The methodology proposed for methane emissions from landfills based on a triangular model is more realistic and can be used for estimation on a global basis. Methane emissions from MSW landfills for the years 1980-1999 have been estimated, and these could be used in computing national inventories of methane emission.
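
    A minimal numerical sketch of the contrast described above, assuming an illustrative triangular release profile (the rise and decline lengths and the potentials are made-up placeholders, not the paper's parameters): the default method books each year's methane potential in the deposition year, while the triangular model spreads it over subsequent years.

        import numpy as np

        def triangular_weights(rise, fall):
            """Normalized triangular release profile: linear rise over `rise` years
            to a peak, then linear decline over `fall` years (assumed shape)."""
            up = np.arange(1, rise + 1, dtype=float)
            down = np.arange(fall - 1, 0, -1, dtype=float) * up[-1] / fall
            w = np.concatenate([up, down])
            return w / w.sum()

        # Methane generation potential of waste landfilled each year (Gg CH4);
        # a constant placeholder for the 1980-1999 deposits.
        potential = np.full(20, 100.0)
        w = triangular_weights(rise=5, fall=15)

        # Default method: the whole potential is emitted in the deposition year.
        default_emissions = potential.copy()

        # Triangular model: spread each year's potential over subsequent years.
        tri_emissions = np.zeros(len(potential) + len(w))
        for i, p in enumerate(potential):
            tri_emissions[i:i + len(w)] += p * w

        print(default_emissions[:10])
        print(np.round(tri_emissions[:10], 1))
        print(default_emissions.sum(), round(tri_emissions.sum(), 1))  # same total potential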

  15. Methods to explain genomic estimates of breeding value

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genetic markers allow animal breeders to locate, estimate, and trace inheritance of many unknown genes that affect quantitative traits. Traditional models use pedigree data to compute expected proportions of genes identical by descent (assumed the same for all traits). Newer genomic models use thous...

  16. A Modified Frequency Estimation Equating Method for the Common-Item Nonequivalent Groups Design

    ERIC Educational Resources Information Center

    Wang, Tianyou; Brennan, Robert L.

    2009-01-01

    Frequency estimation, also called poststratification, is an equating method used under the common-item nonequivalent groups design. A modified frequency estimation method is proposed here, based on altering one of the traditional assumptions in frequency estimation in order to correct for equating bias. A simulation study was carried out to…

  17. Etalon-photometric method for estimation of tissues density at x-ray images

    NASA Astrophysics Data System (ADS)

    Buldakov, Nicolay S.; Buldakova, Tatyana I.; Suyatinov, Sergey I.

    2016-04-01

    The etalon-photometric method for quantitative estimation of the physical density of pathological entities is considered. The method consists in using an etalon (reference standard) during the registration and estimation of the photometric characteristics of objects. An algorithm for estimating physical density in X-ray images is offered.

  18. Pain from the life cycle perspective: Evaluation and Measurement through psychophysical methods of category estimation and magnitude estimation 1

    PubMed Central

    Sousa, Fátima Aparecida Emm Faleiros; da Silva, Talita de Cássia Raminelli; Siqueira, Hilze Benigno de Oliveira Moura; Saltareli, Simone; Gomez, Rodrigo Ramon Falconi; Hortense, Priscilla

    2016-01-01

    Abstract Objective: to describe acute and chronic pain from the perspective of the life cycle. Methods: participants: 861 people in pain. The Multidimensional Pain Evaluation Scale (MPES) was used. Results: in the category estimation method, the highest-ranked descriptor of chronic pain for children/adolescents was "Annoying" and for adults "Uncomfortable"; the highest-ranked descriptor of acute pain for children/adolescents was "Complicated" and for adults "Unbearable". In the magnitude estimation method, the highest-ranked descriptor of chronic pain was "Desperate" and of acute pain "Terrible". Conclusions: the MPES is a reliable scale that can be applied during different stages of development. PMID:27556875

  19. Quantitative estimation of poikilocytosis by the coherent optical method

    NASA Astrophysics Data System (ADS)

    Safonova, Larisa P.; Samorodov, Andrey V.; Spiridonov, Igor N.

    2000-05-01

    An investigation of the necessity and reliability requirements for determining poikilocytosis in hematology has shown that existing techniques suffer from grave shortcomings. To determine the deviation of erythrocyte shape from the normal (rounded) one in blood smears, it is expedient to use an integrative estimate. An algorithm based on the correlation between erythrocyte morphological parameters and properties of the spatial-frequency spectrum of the blood smear is suggested. During analytical and experimental research, an integrative form parameter (IFP) was proposed which characterizes an increase in the relative concentration of cells with altered form above 5% and the predominating type of poikilocytes. An algorithm for statistically reliable estimation of the IFP on standard stained blood smears has been developed. To provide a quantitative characterization of the morphological features of cells, a form vector has been proposed, and its validity for differentiating poikilocytes was shown.

  20. A Simple Echocardiographic Method To Estimate Pulmonary Vascular Resistance

    PubMed Central

    Opotowsky, Alexander R.; Clair, Mathieu; Afilalo, Jonathan; Landzberg, Michael J.; Waxman, Aaron B.; Moko, Lilamarie; Maron, Bradley; Vaidya, Anjali; Forfia, Paul R.

    2015-01-01

    Pulmonary hypertension is comprised of heterogeneous diagnoses with distinct hemodynamic pathophysiology. Identifying elevated pulmonary vascular resistance (PVR) is critical for appropriate treatment. We reviewed data for patients seen at referral PH clinics who underwent echocardiography and right heart catheterization within 1 year. We derived equations to estimate PVR based on the ratio of estimated pulmonary artery (PA) systolic pressure (PASP_Doppler) to RVOT VTI. We validated these equations in a separate sample and compared them to a published model based on the ratio of transtricuspid flow velocity to RVOT VTI (Model 1, Abbas et al 2003). The derived models were: Model 2: PVR = 1.2 × (PASP_Doppler / RVOT VTI); Model 3: PVR = (PASP_Doppler / RVOT VTI) + 3 if a notch is present. The cohort included 217 patients with mean PA pressure = 45.3 ± 11.9 mmHg, PVR = 7.3 ± 5.0 WU and PA wedge pressure = 14.8 ± 8.1 mmHg; just over one-third had PA wedge pressure >15 mmHg (35.5%) and 82.0% had PVR >3 WU. Model 1 systematically underestimated PVR, especially with high PVR. The derived models demonstrated no systematic bias. Model 3 correlated best with PVR (r = 0.80 vs. 0.73 and 0.77 for Models 1 and 2 respectively). Model 3 had superior discriminatory power for PVR >3 WU (AUC = 0.946) and PVR >5 WU (AUC = 0.924), though all models discriminated well. Model 3 estimated PVR >3 was 98.3% sensitive and 61.1% specific for PVR >3 WU (PPV = 93%; NPV = 88%). In conclusion, we present an equation to estimate PVR, using the ratio of PASP_Doppler to RVOT VTI and a constant designating presence of RVOT VTI mid-systolic notching, which provides superior agreement with PVR across a wide range of values. PMID:23735649
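
    A minimal sketch of the two derived models as reconstructed above, with PASP_Doppler in mmHg, RVOT VTI in cm, and PVR in Wood units; the example inputs are hypothetical patient values.

        def pvr_model2(pasp_doppler, rvot_vti):
            """Model 2: PVR ~ 1.2 * PASP_Doppler / RVOT VTI (Wood units)."""
            return 1.2 * pasp_doppler / rvot_vti

        def pvr_model3(pasp_doppler, rvot_vti, notch_present):
            """Model 3: PVR ~ PASP_Doppler / RVOT VTI, plus 3 if mid-systolic notching is present."""
            return pasp_doppler / rvot_vti + (3.0 if notch_present else 0.0)

        # Hypothetical patient: PASP 80 mmHg, RVOT VTI 12 cm, mid-systolic notch present.
        print(pvr_model2(80.0, 12.0))          # ~8 WU
        print(pvr_model3(80.0, 12.0, True))    # ~9.7 WU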

  1. Developing methods for timely and relevant mission impact estimation

    NASA Astrophysics Data System (ADS)

    Grimaila, Michael R.; Fortson, Larry W., Jr.; Sutton, Janet L.; Mills, Robert F.

    2009-05-01

    Military organizations embed information systems and networking technologies into their core mission processes as a means to increase operational efficiency, improve decision making quality, and shorten the "kill chain". Unfortunately, this dependence can place the mission at risk when the loss or degradation of the confidentiality, integrity, availability, non-repudiation, or authenticity of a critical information resource or flow occurs. Since the accuracy, conciseness, and timeliness of the information used in command decision making processes impacts the quality of these decisions, and hence the operational mission outcome, it is imperative to explicitly recognize, quantify, and document critical mission-information dependencies in order to gain a true appreciation of operational risk. We conjecture that what is needed is a structured process to provide decision makers with real-time awareness of the status of critical information resources and timely notification of estimated mission impact, from the time an information incident is declared, until the incident is fully remediated. In this paper, we discuss our initial research towards the development of a mission impact estimation engine which fuses information from subject matter experts, historical mission impacts, and explicit mission models to provide the ability to estimate the mission impacts resulting from an information incident in real-time.

  2. Methods for estimating drought streamflow probabilities for Virginia streams

    USGS Publications Warehouse

    Austin, Samuel H.

    2014-01-01

    Maximum likelihood logistic regression model equations used to estimate drought flow probabilities for Virginia streams are presented for 259 hydrologic basins in Virginia. Winter streamflows were used to estimate the likelihood of streamflows during the subsequent drought-prone summer months. The maximum likelihood logistic regression models identify probable streamflows from 5 to 8 months in advance. More than 5 million streamflow daily values collected over the period of record (January 1, 1900 through May 16, 2012) were compiled and analyzed over a minimum 10-year (maximum 112-year) period of record. The analysis yielded the 46,704 equations with statistically significant fit statistics and parameter ranges published in two tables in this report. These model equations produce summer month (July, August, and September) drought flow threshold probabilities as a function of streamflows during the previous winter months (November, December, January, and February). Example calculations are provided, demonstrating how to use the equations to estimate probable streamflows as much as 8 months in advance.
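
    A hedged, generic sketch of the modelling idea (synthetic data and scikit-learn rather than the report's fitted equations): regress an indicator of summer flow falling below a drought threshold on winter streamflow with logistic regression, then read off a probability for a new winter observation.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(1)

        # Synthetic training data: mean winter flow (cfs) and an indicator of whether
        # the subsequent summer flow dropped below a drought threshold.
        winter_flow = rng.lognormal(mean=4.0, sigma=0.5, size=300)
        p_true = 1.0 / (1.0 + np.exp(0.08 * (winter_flow - 60.0)))   # drier winters -> higher risk
        below_threshold = rng.random(300) < p_true

        model = LogisticRegression()
        model.fit(np.log(winter_flow).reshape(-1, 1), below_threshold)

        # Probability that summer flow falls below the drought threshold given a
        # winter mean flow of 45 cfs (hypothetical value).
        print(model.predict_proba(np.log([[45.0]]))[0, 1])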

  3. Estimation of lithofacies proportions using well and well test data

    SciTech Connect

    Hu, L.Y.; Blanc, G.; Noetinger, B.

    1996-12-31

    A crucial step of the commonly used geostatistical methods for modeling heterogeneous reservoirs (e.g. the sequential indicator simulation and the truncated Gaussian functions) is the estimation of the lithofacies local proportion (or probability density) functions. Well-test derived permeabilities show good correlation with lithofacies proportions around wells. Integrating well and well-test data in estimating lithofacies proportions could permit the building of more realistic models of reservoir heterogeneity. However this integration is difficult because of the different natures and measurement scales of these two types of data. This paper presents a two step approach to integrating well and well-test data into heterogeneous reservoir modeling. First lithofacies proportions in well-test investigation areas are estimated using a new kriging algorithm called KISCA. KISCA consists in kriging jointly the proportions of all lithofacies in a well-test investigation area so that the corresponding well-test derived permeability is respected through a weighted power averaging of lithofacies permeabilities. For multiple well-tests, an iterative process is used in KISCA to account for their interaction. After this, the estimated proportions are combined with lithofacies indicators at wells for estimating proportion (or probability density) functions over the entire reservoir field using a classical kriging method. Some numerical examples were considered to test the proposed method for estimating lithofacies proportions. In addition, a synthetic lithofacies reservoir model was generated and a well-test simulation was performed. The comparison between the experimental and estimated proportions in the well-test investigation area demonstrates the validity of the proposed method.
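
    As a hedged sketch of the constraint that the kriging step is designed to honor (the weighted power-averaging relation itself, not the KISCA algorithm), the snippet below compares the power average of facies permeabilities implied by a set of kriged proportions against a well-test permeability; the exponent omega and all numbers are assumptions.

        import numpy as np

        def power_average_perm(proportions, facies_perm, omega=0.5):
            """Weighted power average of facies permeabilities (omega is assumed)."""
            p = np.asarray(proportions, dtype=float)
            k = np.asarray(facies_perm, dtype=float)
            return np.sum(p * k**omega) ** (1.0 / omega)

        # Hypothetical facies permeabilities (mD) and kriged proportions near a well.
        k_facies = [500.0, 50.0, 1.0]          # channel sand, silty sand, shale
        p_kriged = [0.30, 0.50, 0.20]

        k_welltest = 120.0                     # well-test derived permeability (mD)
        print(power_average_perm(p_kriged, k_facies), "vs well test", k_welltest)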

  4. Practical Aspects of the Equation-Error Method for Aircraft Parameter Estimation

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2006-01-01

    Various practical aspects of the equation-error approach to aircraft parameter estimation were examined. The analysis was based on simulated flight data from an F-16 nonlinear simulation, with realistic noise sequences added to the computed aircraft responses. This approach exposes issues related to the parameter estimation techniques and results, because the true parameter values are known for simulation data. The issues studied include differentiating noisy time series, maximum likelihood parameter estimation, biases in equation-error parameter estimates, accurate computation of estimated parameter error bounds, comparisons of equation-error parameter estimates with output-error parameter estimates, analyzing data from multiple maneuvers, data collinearity, and frequency-domain methods.
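
    A minimal sketch of the equation-error idea on a toy one-degree-of-freedom roll model (not the F-16 simulation): numerically differentiate the noisy state and regress the derivative on the states and controls by ordinary least squares. Any small bias in the estimates comes from noise entering the regressors, one of the issues examined above.

        import numpy as np

        rng = np.random.default_rng(2)

        # Simulate a simple roll-rate model  p_dot = Lp*p + Lda*da  (truth: Lp=-2, Lda=5).
        dt, n = 0.02, 1500
        t = np.arange(n) * dt
        da = 0.1 * np.sign(np.sin(0.5 * t))          # doublet-like aileron input
        p = np.zeros(n)
        for k in range(n - 1):
            p[k + 1] = p[k] + dt * (-2.0 * p[k] + 5.0 * da[k])

        p_meas = p + 0.002 * rng.standard_normal(n)  # noisy measurement

        # Equation error: regress the differentiated state on the regressors.
        p_dot = np.gradient(p_meas, dt)              # simple numerical differentiation
        X = np.column_stack([p_meas, da])
        theta, *_ = np.linalg.lstsq(X, p_dot, rcond=None)
        print(theta)                                  # estimates of [Lp, Lda]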

  5. Full 3-D transverse oscillations: a method for tissue motion estimation.

    PubMed

    Salles, Sebastien; Liebgott, Hervé; Garcia, Damien; Vray, Didier

    2015-08-01

    We present a new method to estimate 4-D (3-D + time) tissue motion. The method combines 3-D phase-based motion estimation with an unconventional beamforming strategy. The beamforming technique allows us to obtain full 3-D RF volumes with axial, lateral, and elevation modulations. Based on these images, we propose a method to estimate 3-D motion that uses phase images instead of amplitude images. First, volumes featuring 3-D oscillations are created using only a single apodization function, and then the 3-D displacement between two consecutive volumes is estimated simultaneously by applying this 3-D phase-based estimation. The validity of the method is investigated by conducting simulations and phantom experiments. The results are compared with those obtained with two other conventional estimation methods: block matching and optical flow. The results show that the proposed method outperforms the conventional methods, especially in the transverse directions.

  6. An Estimation Method of Waiting Time for Health Service at Hospital by Using a Portable RFID and Robust Estimation

    NASA Astrophysics Data System (ADS)

    Ishigaki, Tsukasa; Yamamoto, Yoshinobu; Nakamura, Yoshiyuki; Akamatsu, Motoyuki

    Patients who receive a health service from a doctor have to wait a long time at many hospitals. According to patient questionnaires, long waiting time is the worst factor in patient dissatisfaction with hospital service. The present paper describes a method for estimating the waiting time of each patient without an electronic medical chart system. The method applies a portable RFID system for data acquisition and robust estimation of the probability distribution of the consultation and test time with the doctor, to achieve highly accurate waiting-time estimation. We carried out data acquisition during health services at a real hospital and verified the efficiency of the proposed method. The proposed system can be widely used as a data acquisition system in various fields such as marketing services, entertainment, or human behavior measurement.

  7. Nonlinear Attitude Filtering Methods

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Crassidis, John L.; Cheng, Yang

    2005-01-01

    This paper provides a survey of modern nonlinear filtering methods for attitude estimation. Early applications relied mostly on the extended Kalman filter for attitude estimation. Since these applications, several new approaches have been developed that have proven to be superior to the extended Kalman filter. Several of these approaches maintain the basic structure of the extended Kalman filter, but employ various modifications in order to provide better convergence or improve other performance characteristics. Examples of such approaches include: filter QUEST, extended QUEST, the super-iterated extended Kalman filter, the interlaced extended Kalman filter, and the second-order Kalman filter. Filters that propagate and update a discrete set of sigma points rather than using linearized equations for the mean and covariance are also reviewed. A two-step approach is discussed with a first-step state that linearizes the measurement model and an iterative second step to recover the desired attitude states. These approaches are all based on the Gaussian assumption that the probability density function is adequately specified by its mean and covariance. Other approaches that do not require this assumption are reviewed, including particle filters and a Bayesian filter based on a non-Gaussian, finite-parameter probability density function on SO(3). Finally, the predictive filter, nonlinear observers and adaptive approaches are shown. The strengths and weaknesses of the various approaches are discussed.
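
    As a hedged illustration of the sigma-point idea mentioned above (a generic unscented transform, not any particular attitude filter), the sketch below propagates a mean and covariance through a nonlinearity by pushing 2n+1 deterministically chosen points through it; the weighting constants are one common choice.

        import numpy as np

        def unscented_transform(x_mean, P, f, alpha=1.0, beta=2.0, kappa=None):
            """Propagate mean/covariance through a nonlinearity f using sigma points."""
            n = len(x_mean)
            if kappa is None:
                kappa = 3.0 - n                      # a common heuristic choice
            lam = alpha**2 * (n + kappa) - n
            S = np.linalg.cholesky((n + lam) * P)    # columns are the sigma offsets
            sigma = np.vstack([x_mean, x_mean + S.T, x_mean - S.T])
            wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
            wc = wm.copy()
            wm[0] = lam / (n + lam)
            wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
            y = np.array([f(s) for s in sigma])
            y_mean = wm @ y
            dy = y - y_mean
            P_y = (wc[:, None] * dy).T @ dy
            return y_mean, P_y

        # Example: propagate a 2-D Gaussian through a mild nonlinearity.
        f = lambda x: np.array([np.sin(x[0]), x[0] * x[1]])
        print(unscented_transform(np.array([0.3, 1.0]), np.diag([0.01, 0.04]), f))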

  8. Statistics of Sxy estimates

    NASA Technical Reports Server (NTRS)

    Freilich, M. H.; Pawka, S. S.

    1987-01-01

    The statistics of Sxy estimates derived from orthogonal-component measurements are examined. Based on results of Goodman (1957), the probability density function (pdf) for Sxy(f) estimates is derived, and a closed-form solution for arbitrary moments of the distribution is obtained. Characteristic functions are used to derive the exact pdf of Sxy(tot). In practice, a simple Gaussian approximation is found to be highly accurate even for relatively few degrees of freedom. Implications for experiment design are discussed, and a maximum-likelihood estimator for a posterior estimation is outlined.

  9. Wind Power Error Estimation in Resource Assessments

    PubMed Central

    Rodríguez, Osvaldo; del Río, Jesús A.; Jaramillo, Oscar A.; Martínez, Manuel

    2015-01-01

    Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, probability density function, and wind turbine power curves. This method uses the actual wind speed data without prior statistical treatment based on 28 wind turbine power curves, which were fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, thereby yielding an error of 5%. The proposed error propagation complements the traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the power production leveled cost or the investment time return. The implementation of this method increases the reliability of techno-economic resource assessment studies. PMID:26000444
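
    A hedged sketch of the propagation step only, using a generic cubic-region power curve rather than one of the 28 fitted curves: evaluate the curve at wind speeds perturbed by a ±10% measurement error and compare the resulting power outputs. The aggregate 10%-to-5% figure quoted above comes from the authors' full procedure, not from this toy curve.

        import numpy as np

        def power_curve(v, v_cut_in=3.0, v_rated=12.0, v_cut_out=25.0, p_rated=2000.0):
            """Generic turbine power curve (kW): cubic between cut-in and rated speed."""
            v = np.asarray(v, dtype=float)
            cubic = p_rated * (v**3 - v_cut_in**3) / (v_rated**3 - v_cut_in**3)
            return np.where((v >= v_cut_in) & (v < v_rated), cubic,
                            np.where((v >= v_rated) & (v <= v_cut_out), p_rated, 0.0))

        v_measured = np.array([4.0, 6.5, 9.0, 11.0, 13.0])   # hypothetical wind speeds (m/s)
        rel_err = 0.10                                        # 10% wind-speed measurement error

        p_nominal = power_curve(v_measured)
        p_high = power_curve(v_measured * (1 + rel_err))
        p_low = power_curve(v_measured * (1 - rel_err))

        # Relative power error implied by the wind-speed error, per speed bin.
        print((p_high - p_low) / (2 * p_nominal + 1e-9))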

  10. Wind power error estimation in resource assessments.

    PubMed

    Rodríguez, Osvaldo; Del Río, Jesús A; Jaramillo, Oscar A; Martínez, Manuel

    2015-01-01

    Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, probability density function, and wind turbine power curves. This method uses the actual wind speed data without prior statistical treatment based on 28 wind turbine power curves, which were fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, thereby yielding an error of 5%. The proposed error propagation complements the traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the power production leveled cost or the investment time return. The implementation of this method increases the reliability of techno-economic resource assessment studies.

  12. Sound speed estimation and source localization with linearization and particle filtering.

    PubMed

    Lin, Tao; Michalopoulou, Zoi-Heleni

    2014-03-01

    A method is developed for the estimation of source location and sound speed in the water column relying on linearization. The Jacobian matrix, necessary for the proposed linearization approach, includes derivatives with respect to empirical orthogonal function coefficients instead of sound speed directly. First, the inversion technique is tested on synthetic arrival times, using Gaussian distributions for the errors in the considered arrival times. The approach is efficient, requiring a few iterations, and produces accurate results. Probability densities of the estimates are calculated for different levels of noise in the arrival times. Subsequently, particle filtering is employed for the estimation of arrival times from signals recorded during the Shallow Water 06 experiment. It has been shown in the past that particle filtering can be employed for the successful estimation of multipath arrival times from short-range data and, consequently, in geometry, bathymetry, and sound speed inversion. Here, probability density functions of arrival times computed via particle filtering are propagated backward through the proposed inversion process. Inversion estimates are consistent with values reported in the literature for the same quantities. Lastly, it is shown that results are consistent with estimates resulting from fast simulated annealing applied to the same data.
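
    A generic sketch of the linearization step, assuming a toy two-parameter forward model in place of the arrival-time predictor and EOF coefficients: iterate Gauss-Newton updates built from a finite-difference Jacobian until the residual to the observed data is minimized.

        import numpy as np

        def forward(m, x):
            """Toy nonlinear forward model standing in for the travel-time predictor."""
            a, b = m
            return a * np.exp(-b * x)

        def jacobian(m, x, eps=1e-6):
            """Central finite-difference Jacobian of the forward model w.r.t. m."""
            J = np.zeros((len(x), len(m)))
            for j in range(len(m)):
                dm = np.zeros(len(m))
                dm[j] = eps
                J[:, j] = (forward(m + dm, x) - forward(m - dm, x)) / (2 * eps)
            return J

        rng = np.random.default_rng(3)
        x = np.linspace(0.0, 2.0, 40)
        m_true = np.array([2.0, 1.3])
        d_obs = forward(m_true, x) + 0.01 * rng.standard_normal(len(x))

        m = np.array([1.0, 0.5])                     # initial guess
        for _ in range(8):                           # a few Gauss-Newton iterations
            residual = d_obs - forward(m, x)
            m = m + np.linalg.lstsq(jacobian(m, x), residual, rcond=None)[0]
        print(m)                                     # approaches m_true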

  13. Simple and robust baseline estimation method for multichannel SAR-GMTI systems

    NASA Astrophysics Data System (ADS)

    Chen, Zhao-Yan; Wang, Tong; Ma, Nan

    2016-07-01

    In this paper, the authors propose an approach for estimating the effective baseline for the ground moving target indication (GMTI) mode of synthetic aperture radar (SAR), which is different from any previous work. The authors show that the new method leads to a simpler and more robust baseline estimate. This method employs a baseline search operation, where the degree of coherence (DOC) serves as a metric to judge whether the optimum baseline estimate is obtained. The rationale behind this method is that the more accurate the baseline estimate, the higher the coherence of the two channels after co-registering with the estimated baseline value. The merits of the proposed method are twofold: it is simple to design and robust to Doppler centroid estimation error. The performance of the proposed method is good, and its effectiveness is tested with real SAR data.
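
    A hedged toy version of the search loop: for each candidate co-registration offset (standing in for a candidate baseline), align the two channels and compute the degree of coherence, then keep the candidate that maximizes it. Real multichannel SAR co-registration is considerably more involved.

        import numpy as np

        rng = np.random.default_rng(4)

        # Two complex channels: channel 2 is channel 1 delayed by 7 samples plus noise,
        # standing in for the offset induced by the effective baseline.
        n, true_shift = 4096, 7
        s1 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
        s2 = np.roll(s1, true_shift) + 0.3 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

        def degree_of_coherence(a, b):
            return np.abs(np.vdot(a, b)) / np.sqrt(np.vdot(a, a).real * np.vdot(b, b).real)

        candidates = range(0, 16)                     # candidate baseline-induced shifts
        doc = [degree_of_coherence(s1, np.roll(s2, -k)) for k in candidates]
        print(max(zip(doc, candidates)))              # highest coherence at the true shift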

  14. Advanced Method to Estimate Fuel Slosh Simulation Parameters

    NASA Technical Reports Server (NTRS)

    Schlee, Keith; Gangadharan, Sathya; Ristow, James; Sudermann, James; Walker, Charles; Hubert, Carl

    2005-01-01

    The nutation (wobble) of a spinning spacecraft in the presence of energy dissipation is a well-known problem in dynamics and is of particular concern for space missions. The nutation of a spacecraft spinning about its minor axis typically grows exponentially and the rate of growth is characterized by the Nutation Time Constant (NTC). For launch vehicles using spin-stabilized upper stages, fuel slosh in the spacecraft propellant tanks is usually the primary source of energy dissipation. For analytical prediction of the NTC this fuel slosh is commonly modeled using simple mechanical analogies such as pendulums or rigid rotors coupled to the spacecraft. Identifying model parameter values which adequately represent the sloshing dynamics is the most important step in obtaining an accurate NTC estimate. Analytic determination of the slosh model parameters has met with mixed success and is made even more difficult by the introduction of propellant management devices and elastomeric diaphragms. By subjecting full-sized fuel tanks with actual flight fuel loads to motion similar to that experienced in flight and measuring the forces experienced by the tanks these parameters can be determined experimentally. Currently, the identification of the model parameters is a laborious trial-and-error process in which the equations of motion for the mechanical analog are hand-derived, evaluated, and their results are compared with the experimental results. The proposed research is an effort to automate the process of identifying the parameters of the slosh model using a MATLAB/SimMechanics-based computer simulation of the experimental setup. Different parameter estimation and optimization approaches are evaluated and compared in order to arrive at a reliable and effective parameter identification process. To evaluate each parameter identification approach, a simple one-degree-of-freedom pendulum experiment is constructed and motion is induced using an electric motor. By applying the

  15. Numerical method for estimating the size of chaotic regions of phase space

    SciTech Connect

    Henyey, F.S.; Pomphrey, N.

    1987-10-01

    A numerical method for estimating irregular volumes of phase space is derived. The estimate weights the irregular area on a surface of section with the average return time to the section. We illustrate the method by application to the stadium and oval billiard systems and also apply the method to the continuous Henon-Heiles system. 15 refs., 10 figs. (LSP)

  16. Semi-quantitative method to estimate levels of Campylobacter

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Introduction: Research projects utilizing live animals and/or systems often require reliable, accurate quantification of Campylobacter following treatments. Even with marker strains, conventional methods designed to quantify are labor and material intensive requiring either serial dilutions or MPN ...

  17. A history-based method to estimate animal preference.

    PubMed

    Maia, Caroline Marques; Volpato, Gilson Luiz

    2016-01-01

    Giving animals their preferred items (e.g., environmental enrichment) has been suggested as a method to improve animal welfare, thus raising the question of how to determine what animals want. Most studies have employed choice tests for detecting animal preferences. However, whether choice tests represent animal preferences remains a matter of controversy. Here, we present a history-based method to analyse data from individual choice tests to discriminate between preferred and non-preferred items. This method differentially weighs choices from older and recent tests performed over time. Accordingly, we provide both a preference index that identifies preferred items contrasted with non-preferred items in successive multiple-choice tests and methods to detect the strength of animal preferences for each item. We achieved this goal by investigating colour choices in the Nile tilapia fish species. PMID:27350213

  18. A history-based method to estimate animal preference

    PubMed Central

    Maia, Caroline Marques; Volpato, Gilson Luiz

    2016-01-01

    Giving animals their preferred items (e.g., environmental enrichment) has been suggested as a method to improve animal welfare, thus raising the question of how to determine what animals want. Most studies have employed choice tests for detecting animal preferences. However, whether choice tests represent animal preferences remains a matter of controversy. Here, we present a history-based method to analyse data from individual choice tests to discriminate between preferred and non-preferred items. This method differentially weighs choices from older and recent tests performed over time. Accordingly, we provide both a preference index that identifies preferred items contrasted with non-preferred items in successive multiple-choice tests and methods to detect the strength of animal preferences for each item. We achieved this goal by investigating colour choices in the Nile tilapia fish species. PMID:27350213

  19. Method of estimating pulse response using an impedance spectrum

    DOEpatents

    Morrison, John L; Morrison, William H; Christophersen, Jon P; Motloch, Chester G

    2014-10-21

    Electrochemical Impedance Spectrum data are used to predict pulse performance of an energy storage device. The impedance spectrum may be obtained in-situ. A simulation waveform includes a pulse wave with a period greater than or equal to the period of the lowest frequency used in the impedance measurement. Fourier series coefficients of the pulse train can be obtained. The number of harmonic constituents in the Fourier series is selected so as to appropriately resolve the response, but the maximum frequency should be less than or equal to the highest frequency used in the impedance measurement. Using a current pulse as an example, the Fourier coefficients of the pulse are multiplied by the impedance spectrum at corresponding frequencies to obtain Fourier coefficients of the voltage response to the desired pulse. The Fourier coefficients of the response are then summed and reassembled to obtain the overall time domain estimate of the voltage using the Fourier series analysis.
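
    A numerical sketch of the reconstruction described above, with a synthetic series-resistor-plus-parallel-RC impedance standing in for a measured spectrum: form the Fourier coefficients of a rectangular current pulse train, multiply each by the impedance at the corresponding harmonic frequency, and sum the voltage harmonics.

        import numpy as np

        # Pulse-train parameters: period T (s), pulse width w (s), amplitude I0 (A).
        T, w, I0 = 10.0, 2.0, 1.0
        n_harm = 50                                   # keep harmonics within the measured band

        # Synthetic "measured" impedance spectrum: R0 in series with a parallel RC.
        R0, R1, C1 = 0.05, 0.10, 200.0
        Z = lambda f: R0 + R1 / (1 + 1j * 2 * np.pi * f * R1 * C1)

        t = np.linspace(0.0, 2 * T, 2000)
        v = np.full_like(t, (I0 * w / T) * Z(0.0).real)          # DC term
        for k in range(1, n_harm + 1):
            fk = k / T
            # Complex Fourier coefficient of the rectangular pulse train at harmonic k.
            ck = (I0 / (np.pi * k)) * np.sin(np.pi * k * w / T) * np.exp(-1j * np.pi * k * w / T)
            # Voltage harmonic = current harmonic x impedance; twice the real part
            # combines the +k and -k terms of the series.
            v += 2.0 * np.real(ck * Z(fk) * np.exp(2j * np.pi * fk * t))
        print(v[:5])                                  # time-domain voltage estimate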

  20. Evaluation of acidity estimation methods for mine drainage, Pennsylvania, USA.

    PubMed

    Park, Daeryong; Park, Byungtae; Mendinsky, Justin J; Paksuchon, Benjaphon; Suhataikul, Ratda; Dempsey, Brian A; Cho, Yunchul

    2015-01-01

    Eighteen sites impacted by abandoned mine drainage (AMD) in Pennsylvania were sampled and measured for pH, acidity, alkalinity, metal ions, and sulfate. This study compared the accuracy of four acidity calculation methods with measured hot peroxide acidity and identified the most accurate calculation method for each site as a function of pH and sulfate concentration. Method E1 was the sum of proton and acidity based on total metal concentrations; method E2 added alkalinity; method E3 also accounted for aluminum speciation and temperature effects; and method E4 accounted for sulfate speciation. To evaluate errors between measured and predicted acidity, the Nash-Sutcliffe efficiency (NSE), the coefficient of determination (R²), and the root mean square error to standard deviation ratio (RSR) methods were applied. The error evaluation results show that E1, E2, E3, and E4 sites were most accurate at 0, 9, 4, and 5 of the sites, respectively. Sites where E2 was most accurate had pH greater than 4.0 and less than 400 mg/L of sulfate. Sites where E3 was most accurate had pH greater than 4.0 and sulfate greater than 400 mg/L with two exceptions. Sites where E4 was most accurate had pH less than 4.0 and more than 400 mg/L sulfate with one exception. The results indicate that acidity in AMD-affected streams can be accurately predicted by using pH, alkalinity, sulfate, Fe(II), Mn(II), and Al(III) concentrations in one or more of the identified equations, and that the appropriate equation for prediction can be selected based on pH and sulfate concentration. PMID:25399119
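
    As a hedged sketch of the kind of calculation behind these methods (a generic calculated-acidity formula, not necessarily the exact E1-E4 definitions), acidity can be expressed in mg/L as CaCO3 from the proton concentration and the charge-weighted dissolved metal concentrations, minus alkalinity.

        def calculated_acidity(pH, fe2_mg_l, mn2_mg_l, al3_mg_l, alkalinity_mg_l=0.0):
            """Calculated acidity in mg/L as CaCO3: 50 * (meq/L of H+ and dissolved
            metals) minus alkalinity. Charges/atomic weights assumed: Fe(II) 2/55.85,
            Mn(II) 2/54.94, Al(III) 3/26.98."""
            meq_per_l = (1000.0 * 10.0 ** (-pH)        # H+ in mmol/L = meq/L
                         + 2.0 * fe2_mg_l / 55.85
                         + 2.0 * mn2_mg_l / 54.94
                         + 3.0 * al3_mg_l / 26.98)
            return 50.0 * meq_per_l - alkalinity_mg_l

        # Hypothetical AMD sample: pH 3.5, 40 mg/L Fe(II), 10 mg/L Mn(II), 8 mg/L Al, no alkalinity.
        print(calculated_acidity(3.5, 40.0, 10.0, 8.0))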

  2. Performance of different detrending methods in turbulent flux estimation

    NASA Astrophysics Data System (ADS)

    Donateo, Antonio; Cava, Daniela; Contini, Daniele

    2015-04-01

    The eddy covariance is the most direct, efficient and reliable method to measure the turbulent flux of a scalar (Baldocchi, 2003). Required conditions for high-quality eddy covariance measurements include, among others, stationarity of the measured data and fully developed turbulence. The simplest method for obtaining the fluctuating components for covariance calculation according to Reynolds averaging rules under ideal stationary conditions is the so-called mean removal method. However, steady state conditions rarely exist in the atmosphere, because of the diurnal cycle, changes in meteorological conditions, or sensor drift. All these phenomena produce trends or low-frequency changes superimposed on the turbulent signal. Different methods for trend removal have been proposed in the literature; however, general agreement on how to separate low-frequency perturbations from turbulence has not yet been reached. The most commonly applied methods are linear detrending (Gash and Culf, 1996) and the high-pass filter, namely the moving average (Moncrieff et al., 2004). Moreover, Vickers and Mahrt (2003) proposed a multi-resolution decomposition method in order to select an appropriate time scale for mean removal as a function of atmospheric stability conditions. The present work investigates the performance of these different detrending methods in removing the low-frequency contribution to the turbulent flux calculation, including also a spectral filter by a Fourier decomposition of the time series. The different methods have been applied to the calculation of the turbulent fluxes for different scalars (temperature, ultrafine particle number concentration, carbon dioxide and water vapour concentration). A comparison of the detrending methods will be performed also for different measurement sites, namely an urban site, a suburban area, and a remote area in Antarctica. Moreover, the performance of the moving average in detrending time series has been analyzed as a function of the
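
    A minimal sketch contrasting the three detrending choices named above on a synthetic scalar series before forming an eddy-covariance-style flux: block mean removal, linear detrending, and a moving-average high-pass filter. Window length and all signals are illustrative; in practice both the vertical wind and the scalar would be treated consistently.

        import numpy as np
        from scipy.signal import detrend

        rng = np.random.default_rng(5)
        dt, n = 0.1, 6000                             # 10 Hz sampling, 10-minute block
        t = np.arange(n) * dt
        w = 0.3 * rng.standard_normal(n)              # vertical wind fluctuations
        c = 0.5 * w + 0.2 * rng.standard_normal(n) + 0.002 * t   # scalar with a slow drift

        def fluct_mean(x):
            return x - x.mean()

        def fluct_linear(x):
            return detrend(x, type="linear")

        def fluct_movavg(x, win=600):                 # 60 s moving-average high-pass
            return x - np.convolve(x, np.ones(win) / win, mode="same")

        # For simplicity only the scalar is detrended here; the wind gets mean removal.
        for name, f in [("mean removal", fluct_mean),
                        ("linear detrend", fluct_linear),
                        ("moving average", fluct_movavg)]:
            flux = np.mean(fluct_mean(w) * f(c))      # eddy-covariance-style flux proxy
            print(name, round(flux, 4))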

  3. Comparative evaluation of two quantitative precipitation estimation methods in Korea

    NASA Astrophysics Data System (ADS)

    Ko, H.; Nam, K.; Jung, H.

    2013-12-01

    The spatial distribution and intensity of rainfall are necessary inputs for hydrological models, particularly grid-based distributed models. Weather radar offers much higher spatial resolution (1 km x 1 km) than rain gauges (~13 km), although radar measures rainfall indirectly while rain gauges observe it directly. Radar also provides areal, gridded rainfall information, whereas rain gauges provide point data. Therefore, radar rainfall data can be useful as input to hydrological models. In this study, we compared two QPE schemes for producing radar rainfall for hydrological use. The two methods are 1) spatial adjustment and 2) real-time Z-R relationship adjustment (hereafter RAR; Radar-Aws Rain rate). We computed and analyzed statistics such as ME (Mean Error), RMSE (Root Mean Square Error), and correlation using a cross-validation method (here, the leave-one-out method).
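
    A hedged sketch of the evaluation step only: a leave-one-out comparison of an interpolated estimate against gauges, reporting ME, RMSE, and correlation. Inverse-distance interpolation of the remaining gauges stands in here for the radar QPE field; all coordinates and rain values are synthetic.

        import numpy as np

        rng = np.random.default_rng(6)

        # Hypothetical gauge network: coordinates (km) and observed rain (mm/h).
        xy = rng.uniform(0.0, 100.0, size=(25, 2))
        rain = rng.gamma(2.0, 2.0, size=25)

        def idw(target, xy, values, power=2.0):
            d = np.linalg.norm(xy - target, axis=1) + 1e-6
            wgt = d ** -power
            return np.sum(wgt * values) / np.sum(wgt)

        # Leave-one-out: estimate each gauge from all the others.
        est = np.array([idw(xy[i], np.delete(xy, i, axis=0), np.delete(rain, i))
                        for i in range(len(rain))])

        me = np.mean(est - rain)
        rmse = np.sqrt(np.mean((est - rain) ** 2))
        corr = np.corrcoef(est, rain)[0, 1]
        print(f"ME={me:.2f}  RMSE={rmse:.2f}  r={corr:.2f}")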

  4. Estimation of mechanical properties of nanomaterials using artificial intelligence methods

    NASA Astrophysics Data System (ADS)

    Vijayaraghavan, V.; Garg, A.; Wong, C. H.; Tai, K.

    2014-09-01

    Computational modeling tools such as molecular dynamics (MD), ab initio, finite element modeling or continuum mechanics models have been extensively applied to study the properties of carbon nanotubes (CNTs) based on given input variables such as temperature, geometry and defects. Artificial intelligence techniques can be used to further complement the application of numerical methods in characterizing the properties of CNTs. In this paper, we have introduced the application of multi-gene genetic programming (MGGP) and support vector regression to formulate the mathematical relationship between the compressive strength of CNTs and input variables such as temperature and diameter. The predictions of compressive strength of CNTs made by these models are compared to those generated using MD simulations. The results indicate that the MGGP method can be deployed as a powerful tool for predicting the compressive strength of carbon nanotubes.
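
    A hedged illustration of the support vector regression component using scikit-learn on synthetic data; the strength-temperature-diameter relationship below is made up and does not come from the MD simulations.

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(7)

        # Synthetic stand-in for MD results: strength decreases with temperature and
        # varies with diameter, plus noise (values are illustrative only).
        temperature = rng.uniform(100.0, 900.0, 200)            # K
        diameter = rng.uniform(0.5, 3.0, 200)                   # nm
        strength = 120.0 - 0.05 * temperature + 10.0 / diameter + rng.normal(0, 2.0, 200)

        X = np.column_stack([temperature, diameter])
        model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=0.5))
        model.fit(X, strength)

        print(model.predict([[300.0, 1.0]]))   # predicted strength at 300 K, 1 nm diameter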

  5. A method for estimating abundance of mobile populations using telemetry and counts of unmarked animals

    USGS Publications Warehouse

    Clement, Matthew; O'Keefe, Joy M; Walters, Brianne

    2015-01-01

    While numerous methods exist for estimating abundance when detection is imperfect, these methods may not be appropriate due to logistical difficulties or unrealistic assumptions. In particular, if highly mobile taxa are frequently absent from survey locations, methods that estimate a probability of detection conditional on presence will generate biased abundance estimates. Here, we propose a new estimator for estimating abundance of mobile populations using telemetry and counts of unmarked animals. The estimator assumes that the target population conforms to a fission-fusion grouping pattern, in which the population is divided into groups that frequently change in size and composition. If assumptions are met, it is not necessary to locate all groups in the population to estimate abundance. We derive an estimator, perform a simulation study, conduct a power analysis, and apply the method to field data. The simulation study confirmed that our estimator is asymptotically unbiased with low bias, narrow confidence intervals, and good coverage, given a modest survey effort. The power analysis provided initial guidance on survey effort. When applied to small data sets obtained by radio-tracking Indiana bats, abundance estimates were reasonable, although imprecise. The proposed method has the potential to improve abundance estimates for mobile species that have a fission-fusion social structure, such as Indiana bats, because it does not condition detection on presence at survey locations and because it avoids certain restrictive assumptions.

  6. A method for estimating both the solubility parameters and molar volumes of liquids

    NASA Technical Reports Server (NTRS)

    Fedors, R. F.

    1974-01-01

    Development of an indirect method of estimating the solubility parameter of high molecular weight polymers. The proposed method of estimating the solubility parameter, like Small's method, is based on group additive constants, but is believed to be superior to Small's method for two reasons: (1) the contributions of a much larger number of functional groups have been evaluated, and (2) the method requires only a knowledge of the structural formula of the compound.
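
    A minimal sketch of the group-additivity calculation under stated assumptions: the solubility parameter is the square root of the summed group cohesive energies divided by the summed group molar volumes. The three group values below are rough illustrative numbers, not Fedors' published constants, which should be used in real applications.

        from math import sqrt

        # Rough, illustrative group contributions: (cohesive energy J/mol, molar volume cm^3/mol).
        GROUPS = {
            "CH3": (4710.0, 33.5),
            "CH2": (4940.0, 16.1),
            "OH":  (29800.0, 10.0),
        }

        def solubility_parameter(group_counts):
            """Solubility parameter ((J/cm^3)^0.5) and molar volume from group counts."""
            e = sum(n * GROUPS[g][0] for g, n in group_counts.items())
            v = sum(n * GROUPS[g][1] for g, n in group_counts.items())
            return sqrt(e / v), v

        # Example: n-butanol approximated as CH3 + 3*CH2 + OH.
        print(solubility_parameter({"CH3": 1, "CH2": 3, "OH": 1}))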

  7. Effects of Vertical Scaling Methods on Linear Growth Estimation

    ERIC Educational Resources Information Center

    Lei, Pui-Wa; Zhao, Yu

    2012-01-01

    Vertical scaling is necessary to facilitate comparison of scores from test forms of different difficulty levels. It is widely used to enable the tracking of student growth in academic performance over time. Most previous studies on vertical scaling methods assume relatively long tests and large samples. Little is known about their performance when…

  8. Fourier methods for estimating power system stability limits

    SciTech Connect

    Marceau, R.J.; Galiana, F.D. (Dept. of Electrical Engineering); Mailhot, R.; Denomme, F.; McGillis, D.T.

    1994-05-01

    This paper shows how the use of new generation tools such as a generalized shell for dynamic security analysis can help improve the understanding of fundamental power systems behavior. Using the ELISA prototype shell as a laboratory tool, it is shown that the signal energy of the network impulse response acts as a barometer to define the relative severity of a contingency with respect to some parameter, for instance power generation or power transfer. In addition, for a given contingency, as the parameter is varied and a network approaches instability, signal energy increases smoothly and predictably towards an asymptote which defines the network's stability limit: this, in turn, permits comparison of the severity of different contingencies. Using a Fourier transform approach, it is shown that this behavior can be explained in terms of the effect of increasing power on the damping component of a power system's dominant poles. A simple function is derived which estimates network stability limits with surprising accuracy from two or three simulations, provided that at least one of these is within 5% of the limit. These results hold notwithstanding the presence of many active, nonlinear voltage-support elements (i.e. generators, synchronous condensers, SVCs, static excitation systems, etc.) in the network.

  9. Comparison of some biased estimation methods (including ordinary subset regression) in the linear model

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.

    1975-01-01

    Ridge, Marquardt's generalized inverse, shrunken, and principal components estimators are discussed in terms of the objectives of point estimation of parameters, estimation of the predictive regression function, and hypothesis testing. It is found that as the normal equations approach singularity, more consideration must be given to estimable functions of the parameters as opposed to estimation of the full parameter vector; that biased estimators all introduce constraints on the parameter space; that adoption of mean squared error as a criterion of goodness should be independent of the degree of singularity; and that ordinary least-squares subset regression is the best overall method.
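
    A hedged numerical sketch comparing ordinary least squares with two of the biased estimators discussed above (ridge and principal-components regression) on a nearly collinear design; the ridge constant and the number of retained components are arbitrary choices.

        import numpy as np

        rng = np.random.default_rng(8)

        n = 60
        x1 = rng.standard_normal(n)
        x2 = x1 + 0.05 * rng.standard_normal(n)        # nearly collinear regressor
        X = np.column_stack([x1, x2])
        y = 1.0 * x1 + 1.0 * x2 + 0.5 * rng.standard_normal(n)

        # Ordinary least squares (unstable when X'X is near-singular).
        b_ols = np.linalg.solve(X.T @ X, X.T @ y)

        # Ridge estimator: (X'X + kI)^(-1) X'y, with an arbitrary k.
        k = 1.0
        b_ridge = np.linalg.solve(X.T @ X + k * np.eye(2), X.T @ y)

        # Principal-components estimator: regress on the leading component only.
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        keep = 1
        b_pc = Vt[:keep].T @ np.diag(1.0 / s[:keep]) @ U[:, :keep].T @ y

        print("OLS  ", b_ols)
        print("ridge", b_ridge)
        print("PC   ", b_pc)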

  10. Comparing the estimation methods of stable distributions with respect to robustness properties

    NASA Astrophysics Data System (ADS)

    Celik, Nuri; Erden, Samet; Sarikaya, M. Zeki

    2016-04-01

    In statistical applications, some data sets may exhibit features such as high skewness, high kurtosis, and heavy tails that are incompatible with the normality assumption, especially in finance and engineering. For this reason, modeling such data sets with α-stable distributions is a reasonable approach. Stable distributions have four parameters. Estimation methods for these unknown model parameters have been studied in the literature. In this study, we briefly describe these proposed estimation methods and compare the estimators with respect to their robustness properties in a comprehensive simulation study, since the robustness of an estimator is an important consideration for appropriate modeling.

  11. Feasible methods to estimate disease based price indexes.

    PubMed

    Bradley, Ralph

    2013-05-01

    There is a consensus that statistical agencies should report medical data by disease rather than by service. This study computes price indexes that are necessary to deflate nominal disease expenditures and to decompose their growth into price, treated prevalence and output per patient growth. Unlike previous studies, it uses methods that can be implemented by the Bureau of Labor Statistics (BLS). For the calendar years 2005-2010, I find that these feasible disease based indexes are approximately 1% lower on an annual basis than indexes computed by current methods at BLS. This gives evidence that traditional medical price indexes have not accounted for the more efficient use of medical inputs in treating most diseases.

  12. A TRMM Rainfall Estimation Method Applicable to Land Areas

    NASA Technical Reports Server (NTRS)

    Prabhakara, C.; Iacovazzi, R., Jr.; Oki, R.; Weinman, J. A.

    1998-01-01

    Utilizing multi-spectral, dual-polarization Special Sensor Microwave Imager (SSM/I) radiometer measurements, we have developed in this study a method to retrieve the average rain rate, R_fR, in a mesoscale grid box of 2° x 3° over land. The key parameter of this method is the fractional rain area, f_R, in that grid box, which is determined with the help of a threshold on the 85 GHz scattering depression deduced from the SSM/I data. In order to demonstrate the usefulness of this method, nine months of R_fR are retrieved from SSM/I data over three grid boxes in the Northeastern United States. These retrievals are then compared with the corresponding ground-truth average rain rate, R_g, deduced from 15-minute rain gauges. Based on nine months of rain rate retrievals over three grid boxes, we find that R_fR can explain about 64% of the variance contained in R_g. A similar evaluation of the grid-box-average rain rates R_GSCAT and R_SRL, given by the NASA/GSCAT and NOAA/SRL rain retrieval algorithms, is performed. This evaluation reveals that R_GSCAT and R_SRL can explain only about 42% of the variance contained in R_g. In our method, a threshold on the 85 GHz scattering depression is used primarily to determine the fractional rain area in a mesoscale grid box. Quantitative information pertaining to the 85 GHz scattering depression in the grid box is disregarded. In the NASA/GSCAT and NOAA/SRL methods, on the other hand, this quantitative information is included. Based on the performance of all three methods, we infer that the magnitude of the scattering depression is a poor indicator of rain rate. Furthermore, from maps based on the observations made by SSM/I on land and ocean we find that there is a significant redundancy in the information content of the SSM/I multi-spectral observations. This leads us to infer that observations of SSM/I at 19 and 37 GHz add only marginal information to that

  13. A new gaze estimation method considering external light.

    PubMed

    Lee, Jong Man; Lee, Hyeon Chang; Gwon, Su Yeong; Jung, Dongwook; Pan, Weiyuan; Cho, Chul Woo; Park, Kang Ryoung; Kim, Hyun-Cheol; Cha, Jihun

    2015-01-01

    Gaze tracking systems usually utilize near-infrared (NIR) lights and NIR cameras, and the performance of such systems is mainly affected by external light sources that include NIR components. This is ascribed to the production of additional (imposter) corneal specular reflection (SR) caused by the external light, which makes it difficult to discriminate between the correct SR as caused by the NIR illuminator of the gaze tracking system and the imposter SR. To overcome this problem, a new method is proposed for determining the correct SR in the presence of external light, based on the relationship between the corneal SR and the pupil movable area, together with the relative position of the pupil and the corneal SR. The experimental results showed that the proposed method makes the gaze tracking system robust to the existence of external light. PMID:25769050

  14. Data-Driven Method to Estimate Nonlinear Chemical Equivalence

    PubMed Central

    Mayo, Michael; Collier, Zachary A.; Winton, Corey; Chappell, Mark A

    2015-01-01

    There is great need to express the impacts of chemicals found in the environment in terms of effects from alternative chemicals of interest. Methods currently employed in fields such as life-cycle assessment, risk assessment, mixtures toxicology, and pharmacology rely mostly on heuristic arguments to justify the use of linear relationships in the construction of “equivalency factors,” which aim to model these concentration-concentration correlations. However, the use of linear models, even at low concentrations, oversimplifies the nonlinear nature of the concentration-response curve, therefore introducing error into calculations involving these factors. We address this problem by reporting a method to determine a concentration-concentration relationship between two chemicals based on the full extent of experimentally derived concentration-response curves. Although this method can be easily generalized, we develop and illustrate it from the perspective of toxicology, in which we provide equations relating the sigmoid and non-monotone, or “biphasic,” responses typical of the field. The resulting concentration-concentration relationships are manifestly nonlinear for nearly any chemical level, even at the very low concentrations common to environmental measurements. We demonstrate the method using real-world examples of toxicological data which may exhibit sigmoid and biphasic mortality curves. Finally, we use our models to calculate equivalency factors, and show that traditional results are recovered only when the concentration-response curves are “parallel,” which has been noted before, but we make formal here by providing mathematical conditions on the validity of this approach. PMID:26158701
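
    A hedged sketch of the core mapping: compose one fitted concentration-response curve with the inverse of the other to find the concentration of chemical B producing the same response as a given concentration of chemical A. Hill-type curves and all parameters below are hypothetical; the varying ratio in the output is the nonlinearity discussed above.

        def hill(c, top, ec50, slope):
            """Sigmoid concentration-response (fraction responding)."""
            return top * c**slope / (ec50**slope + c**slope)

        def hill_inverse(r, top, ec50, slope):
            """Concentration producing response r under the same Hill curve."""
            return ec50 * (r / (top - r)) ** (1.0 / slope)

        # Hypothetical fitted parameters for chemicals A and B.
        A = dict(top=1.0, ec50=10.0, slope=1.2)
        B = dict(top=1.0, ec50=45.0, slope=2.0)

        def equivalent_conc_of_B(conc_A):
            """Concentration of B giving the same response as conc_A of A."""
            return hill_inverse(hill(conc_A, **A), **B)

        for cA in [0.1, 1.0, 10.0, 50.0]:
            print(cA, round(equivalent_conc_of_B(cA), 3))   # ratio is not constant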

  15. Comparison of ready biodegradation estimation methods for fragrance materials.

    PubMed

    Boethling, Robert

    2014-11-01

    Biodegradability is fundamental to the assessment of environmental exposure and risk from organic chemicals. Predictive models can be used to pursue both regulatory and chemical design (green chemistry) objectives, which are most effectively met when models are easy to use and available free of charge. The objective of this work was to evaluate no-cost estimation programs with respect to prediction of ready biodegradability. Fragrance materials, which are structurally diverse and have significant exposure potential, were used for this purpose. Using a database of 222 fragrance compounds with measured ready biodegradability, 10 models were compared on the basis of overall accuracy, sensitivity, specificity, and Matthews correlation coefficient (MCC), a measure of quality for binary classification. The 10 models were VEGA© Non-Interactive Client, START (Toxtree©), Biowin©1-6, and two models based on inductive machine learning. Applicability domain (AD) was also considered. Overall accuracy was ca. 70% and varied little over all models, but sensitivity, specificity and MCC showed wider variation. Based on MCC, the best models for fragrance compounds were Biowin6, VEGA and Biowin3. VEGA performance was slightly better for the <50% of the compounds it identified as having "high reliability" predictions (AD index >0.8). However, removing compounds with one and only one quaternary carbon yielded similar improvement in predictivity for VEGA, START, and Biowin3/6, with a smaller penalty in reduced coverage. Of the nine compounds for which the eight models (VEGA, START, Biowin1-6) all disagreed with the measured value, measured analog data were available for seven, and all supported the predicted value. VEGA, Biowin3 and Biowin6 are judged suitable for ready biodegradability screening of fragrance compounds.
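
    For reference, a minimal sketch of the Matthews correlation coefficient used above to rank the models, computed from a binary confusion matrix; the counts in the example are hypothetical.

        from math import sqrt

        def mcc(tp, tn, fp, fn):
            """Matthews correlation coefficient for binary classification."""
            denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
            return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

        # Hypothetical confusion matrix for a ready-biodegradability model on 222 compounds.
        print(mcc(tp=80, tn=75, fp=35, fn=32))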

  16. Systematic variational method for statistical nonlinear state and parameter estimation.

    PubMed

    Ye, Jingxin; Rey, Daniel; Kadakia, Nirag; Eldridge, Michael; Morone, Uriel I; Rozdeba, Paul; Abarbanel, Henry D I; Quinn, John C

    2015-11-01

    In statistical data assimilation one evaluates the conditional expected values, conditioned on measurements, of interesting quantities on the path of a model through observation and prediction windows. This often requires working with very high dimensional integrals in the discrete time descriptions of the observations and model dynamics, which become functional integrals in the continuous-time limit. Two familiar methods for performing these integrals include (1) Monte Carlo calculations and (2) variational approximations using the method of Laplace plus perturbative corrections to the dominant contributions. We attend here to aspects of the Laplace approximation and develop an annealing method for locating the variational path satisfying the Euler-Lagrange equations that comprises the major contribution to the integrals. This begins with the identification of the minimum action path starting with a situation where the model dynamics is totally unresolved in state space, and the consistent minimum of the variational problem is known. We then proceed to slowly increase the model resolution, seeking to remain in the basin of the minimum action path, until a path that gives the dominant contribution to the integral is identified. After a discussion of some general issues, we give examples of the assimilation process for some simple, instructive models from the geophysical literature. Then we explore a slightly richer model of the same type with two distinct time scales. This is followed by a model characterizing the biophysics of individual neurons. PMID:26651756

  17. A comparative study of Interaural Time Delay estimation methods.

    PubMed

    Katz, Brian F G; Noisternig, Markus

    2014-06-01

    The Interaural Time Delay (ITD) is an important binaural cue for sound source localization. Calculations of ITD values are obtained either from measured time domain Head-Related Impulse Responses (HRIRs) or from their frequency transform Head-Related Transfer Functions (HRTFs). Numerous methods exist in current literature, based on a variety of definitions and assumptions of the nature of the ITD as an acoustic cue. This work presents a thorough comparative study of the degree of variability between some of the most common methods for calculating the ITD from measured data. Thirty-two different calculations or variations are compared for positions on the horizontal plane for the HRTF measured on both a KEMAR mannequin and a rigid sphere. Specifically, the spatial variations of the methods are investigated. Included is a discussion of the primary potential causes of these differences, such as the existence of multiple peaks in the HRIR of the contra-lateral ear for azimuths near the inter-aural axis due to multipath propagation and head/pinnae shadowing. PMID:24907816
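
    For orientation, the sketch below illustrates one of the common ITD definitions compared in studies of this kind: the lag that maximizes the interaural cross-correlation of the measured HRIRs. It is only one of the many variants examined, and the impulse responses and sampling rate are synthetic placeholders.

    import numpy as np

    def itd_cross_correlation(hrir_left, hrir_right, fs):
        """ITD in seconds; positive when the left-ear response lags the right-ear one."""
        xcorr = np.correlate(hrir_left, hrir_right, mode="full")
        lags = np.arange(-len(hrir_right) + 1, len(hrir_left))
        return lags[np.argmax(xcorr)] / fs

    # Toy usage: two identical impulse responses offset by 14 samples (~0.29 ms).
    fs = 48000
    h = np.exp(-np.arange(256) / 20.0)
    left = np.concatenate([np.zeros(14), h])      # arrives later at the left ear
    right = np.concatenate([h, np.zeros(14)])
    print(itd_cross_correlation(left, right, fs))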

  18. Method to estimate water storage capacity of capillary barriers - Discussion

    SciTech Connect

    Gee, Glendon W. ); Ward, Anderson L. ); Meyer, Philip D. )

    1998-11-01

    This is a brief comment on a previously published paper. The paper by Stormont and Morris [JGGE 124(4):297-302] provides an interesting approach to computing the water storage capacity of capillary barriers used as landfill covers. They correctly show that available water storage capacity can be increased by up to a factor of two for a silt loam soil when it is used in a capillary barrier rather than existing as a deep soil profile. For this very reason such a capillary barrier, utilizing silt loam soil, was constructed and successfully tested at the U.S. Department of Energy's Hanford Site in southeastern Washington State. Silt loam soil provides optimal water storage for capillary barriers and ensures minimal drainage. Fewer benefits are obtained when capillary barriers utilize more sandy soils. We would endorse a limited application of the method of Stormont and Morris. We suggest that there will be large uncertainties in field capacity, wilting point, and water retention characteristics, and only when these uncertainties are accounted for can such a method be used to provide sound engineering judgement for cover design. A recommended procedure for using this method would include actual field measurements of the soil hydraulic properties of the cover materials.

  19. Real-Time Parameter Estimation Method Applied to a MIMO Process and its Comparison with an Offline Identification Method

    SciTech Connect

    Kaplanoglu, Erkan; Safak, Koray K.; Varol, H. Selcuk

    2009-01-12

    An experiment-based method is proposed for parameter estimation of a class of linear multivariable systems. The method was applied to a pressure-level control process. Experimental time-domain input/output data were utilized in a gray-box modeling approach. The form of the system transfer function matrix elements is assumed to be known a priori. Continuous-time system transfer function matrix parameters were estimated in real time by the least-squares method. Simulation results using the experimentally determined system transfer function matrix compare very well with the experimental results. For comparison, and as an alternative to the proposed real-time estimation method, we also implemented an offline identification method using artificial neural networks and obtained fairly good results. The proposed methods can be implemented conveniently on a desktop PC equipped with a data acquisition board for parameter estimation of moderately complex linear multivariable systems.

  20. Estimating Isometric Tension of Finger Muscle Using Needle EMG Signals and the Twitch Contraction Model

    NASA Astrophysics Data System (ADS)

    Tachibana, Hideyuki; Suzuki, Takafumi; Mabuchi, Kunihiko

    We address a method for estimating the isometric muscle tension of fingers, as fundamental research toward a neural signal-based prosthesis of fingers. We utilize needle electromyogram (EMG) signals, which carry approximately the same information as peripheral neural signals. The estimation algorithm comprises two convolution operations. The first convolution is between a normal distribution and a spike array detected from the needle EMG signals; it estimates the probability density of spike-invoking times in the muscle. In this convolution, we hypothesize that each motor unit in a muscle fires spikes independently according to the same probability density function. The second convolution is between the result of the first convolution and the isometric twitch, viz., the impulse response of the motor unit. The result of the calculation is the sum of the estimated tensions of all muscle fibers, i.e., the muscle tension. We confirmed a good correlation between the estimated and actual muscle tension, with correlation coefficients >0.9 in 59% and >0.8 in 89% of all trials.
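
    The two-convolution idea described above can be sketched in a few lines of Python. The spike times, the width of the normal distribution, and the twitch model below are hypothetical placeholders rather than values from the cited work.

    import numpy as np

    fs = 1000.0                               # sampling rate (Hz), assumed
    t = np.arange(0.0, 2.0, 1.0 / fs)         # 2-s analysis window

    # 1) Spike train detected from the needle EMG (spike times are made up here).
    spike_times = np.array([0.10, 0.18, 0.25, 0.31, 0.36, 0.40, 0.44])
    spikes = np.zeros_like(t)
    spikes[(spike_times * fs).astype(int)] = 1.0

    # 2) First convolution: normal distribution -> spike-time probability density.
    sigma = 0.02                              # smoothing width (s), assumed
    tg = np.arange(-4.0 * sigma, 4.0 * sigma, 1.0 / fs)
    gauss = np.exp(-0.5 * (tg / sigma) ** 2)
    gauss /= gauss.sum()
    density = np.convolve(spikes, gauss, mode="same")

    # 3) Second convolution with the twitch impulse response of a motor unit
    #    (modelled here as a simple rise-and-decay curve).
    tc = 0.05                                 # twitch contraction time (s), assumed
    tw = np.arange(0.0, 0.5, 1.0 / fs)
    twitch = (tw / tc) * np.exp(1.0 - tw / tc)
    tension = np.convolve(density, twitch, mode="full")[: t.size]

    print(f"peak estimated tension (arbitrary units): {tension.max():.3f}")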

  1. EXPERIMENTAL METHODS TO ESTIMATE ACCUMULATED SOLIDS IN NUCLEAR WASTE TANKS

    SciTech Connect

    Duignan, M.; Steeper, T.; Steimke, J.

    2012-12-10

    devices and techniques were very effective for estimating the movement, location, and concentrations of the solids representing plutonium and are expected to perform well at a larger scale. The operation of the techniques and their measurement accuracies will be discussed, as well as the overall results of the accumulated solids test.

  2. A Practical Torque Estimation Method for Interior Permanent Magnet Synchronous Machine in Electric Vehicles

    PubMed Central

    Zhu, Yuan

    2015-01-01

    The torque output accuracy of the IPMSM in electric vehicles using a state-of-the-art MTPA strategy depends heavily on the accuracy of the machine parameters; thus, a torque estimation method is necessary for the safety of the vehicle. In this paper, a torque estimation method based on a flux estimator with a modified low-pass filter is presented. Moreover, by taking into account the non-ideal characteristics of the inverter, the torque estimation accuracy is improved significantly. The effectiveness of the proposed method is demonstrated through MATLAB/Simulink simulation and experiment. PMID:26114557
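
    The following is an illustrative sketch of a generic flux-estimator torque estimate for a permanent magnet machine, in the spirit of the record above but not the authors' exact algorithm: the pure integrator of the back-EMF is replaced by a first-order low-pass filter, and torque follows from the cross product of stator flux and current in the alpha-beta frame. Machine parameters and waveforms are assumed inputs, and the inverter non-ideality compensation of the cited work is omitted.

    import numpy as np

    def estimate_torque(v_ab, i_ab, Rs, pole_pairs, dt, wc=10.0):
        """v_ab, i_ab: arrays of shape (N, 2) holding alpha/beta voltages and currents."""
        psi = np.zeros_like(v_ab)                  # stator flux linkage estimate
        for k in range(1, v_ab.shape[0]):
            emf = v_ab[k] - Rs * i_ab[k]
            # Modified integrator: d(psi)/dt = emf - wc*psi (first-order low-pass)
            psi[k] = psi[k - 1] + dt * (emf - wc * psi[k - 1])
        # Electromagnetic torque from the flux/current cross product.
        return 1.5 * pole_pairs * (psi[:, 0] * i_ab[:, 1] - psi[:, 1] * i_ab[:, 0])

    # Usage (hypothetical data): torque = estimate_torque(v_ab, i_ab, Rs=0.05,
    #                                                     pole_pairs=4, dt=1e-4)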

  3. A Practical Torque Estimation Method for Interior Permanent Magnet Synchronous Machine in Electric Vehicles.

    PubMed

    Wu, Zhihong; Lu, Ke; Zhu, Yuan

    2015-01-01

    The torque output accuracy of the IPMSM in electric vehicles using a state-of-the-art MTPA strategy depends heavily on the accuracy of the machine parameters; thus, a torque estimation method is necessary for the safety of the vehicle. In this paper, a torque estimation method based on a flux estimator with a modified low-pass filter is presented. Moreover, by taking into account the non-ideal characteristics of the inverter, the torque estimation accuracy is improved significantly. The effectiveness of the proposed method is demonstrated through MATLAB/Simulink simulation and experiment.

  4. A Practical Torque Estimation Method for Interior Permanent Magnet Synchronous Machine in Electric Vehicles.

    PubMed

    Wu, Zhihong; Lu, Ke; Zhu, Yuan

    2015-01-01

    The torque output accuracy of the IPMSM in electric vehicles using a state-of-the-art MTPA strategy depends heavily on the accuracy of the machine parameters; thus, a torque estimation method is necessary for the safety of the vehicle. In this paper, a torque estimation method based on a flux estimator with a modified low-pass filter is presented. Moreover, by taking into account the non-ideal characteristics of the inverter, the torque estimation accuracy is improved significantly. The effectiveness of the proposed method is demonstrated through MATLAB/Simulink simulation and experiment. PMID:26114557

  5. RELICA: a method for estimating the reliability of independent components.

    PubMed

    Artoni, Fiorenzo; Menicucci, Danilo; Delorme, Arnaud; Makeig, Scott; Micera, Silvestro

    2014-12-01

    Independent Component Analysis (ICA) is a widely applied data-driven method for parsing brain and non-brain EEG source signals, mixed by volume conduction to the scalp electrodes, into a set of maximally temporally and often functionally independent components (ICs). Many ICs may be identified with a precise physiological or non-physiological origin. However, this process is hindered by partial instability in ICA results that can arise from noise in the data. Here we propose RELICA (RELiable ICA), a novel method to characterize IC reliability within subjects. RELICA first computes IC "dipolarity," a measure of physiological plausibility, plus a measure of IC consistency across multiple decompositions of bootstrap versions of the input data. RELICA then uses these two measures to visualize and cluster the separated ICs, providing a within-subject measure of IC reliability that does not involve checking for its occurrence across subjects. We demonstrate the use of RELICA on EEG data recorded from 14 subjects performing a working memory experiment and show that many brain and ocular artifact ICs are correctly classified as "stable" (highly repeatable across decompositions of bootstrapped versions of the input data). Many stable ICs appear to originate in the brain, while other stable ICs account for identifiable non-brain processes such as line noise. RELICA might be used with any linear blind source separation algorithm to reduce the risk of basing conclusions on unstable or physiologically un-interpretable component processes. PMID:25234117
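
    A minimal sketch of the bootstrap idea behind within-subject component reliability is given below; it is not the RELICA implementation. Bootstrap versions of the data are decomposed with FastICA, and each reference component is scored by the best absolute correlation of its mixing-matrix column with any column from each bootstrap decomposition. The data matrix X (time samples by channels) is an assumed input.

    import numpy as np
    from sklearn.decomposition import FastICA

    def component_stability(X, n_components=10, n_boot=20, seed=0):
        """X: EEG data, shape (n_time_samples, n_channels). Returns one score per IC."""
        rng = np.random.default_rng(seed)
        ref = FastICA(n_components=n_components, random_state=seed).fit(X)
        ref_maps = ref.mixing_                        # channels x components
        scores = np.zeros((n_boot, n_components))
        for b in range(n_boot):
            idx = rng.integers(0, X.shape[0], X.shape[0])   # bootstrap time points
            boot = FastICA(n_components=n_components, random_state=seed).fit(X[idx])
            corr = np.corrcoef(ref_maps.T, boot.mixing_.T)[:n_components, n_components:]
            scores[b] = np.abs(corr).max(axis=1)      # best match per reference IC
        return scores.mean(axis=0)                    # mean repeatability, 0..1

    # Usage (hypothetical data): stability = component_stability(eeg_data, n_components=15)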

  6. Automated methods for estimation of sperm flagellar bending parameters.

    PubMed

    Brokaw, C J

    1984-01-01

    Parameters to describe flagellar bending patterns can be obtained by a microcomputer procedure that uses a set of parameters to synthesize model bending patterns, compares the model bending patterns with digitized and filtered data from flagellar photographs, and uses the Simplex method to vary the parameters until a solution with minimum root mean square differences between the model and the data is found. Parameters for Chlamydomonas bending patterns have been obtained from comparison of shear angle curves for the model and the data. To avoid the determination of the orientation of the basal end of the flagellum, which is required for calculation of shear angles, parameters for sperm flagella have been obtained by comparison of curves of curvature as a function of length for the model and for the data. A constant curvature model, modified from that originally used for Chlamydomonas flagella, has been used for obtaining parameters from sperm flagella, but the methods can be applied using other models for synthesizing the model bending patterns.
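
    The general fitting loop described above (synthesize a model pattern from parameters, compare it with digitized data by the root-mean-square difference, and let the simplex method adjust the parameters) can be sketched as follows. The damped sinusoidal curvature profile used as the "model" is a simplified placeholder, not the constant-curvature flagellar model of the cited work.

    import numpy as np
    from scipy.optimize import minimize

    s = np.linspace(0.0, 1.0, 200)              # normalized arclength along the flagellum

    def model_curvature(params, s):
        amplitude, wavelength, decay = params
        return amplitude * np.exp(-decay * s) * np.sin(2.0 * np.pi * s / wavelength)

    # "Digitized" curvature data, faked here so the example runs stand-alone.
    true_params = (2.0, 0.5, 0.8)
    data = model_curvature(true_params, s) + 0.05 * np.random.default_rng(1).normal(size=s.size)

    def rms_error(params):
        return np.sqrt(np.mean((model_curvature(params, s) - data) ** 2))

    fit = minimize(rms_error, x0=(1.0, 0.45, 0.5), method="Nelder-Mead")
    print(fit.x)        # recovered parameters, close to true_params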

  7. The use of copulas to practical estimation of multivariate stochastic differential equation mixed effects models

    SciTech Connect

    Rupšys, P.

    2015-10-28

    A system of stochastic differential equations (SDEs) with mixed-effects parameters and a multivariate normal copula density function was used to develop a tree height model for Scots pine trees in Lithuania. A two-step maximum likelihood parameter estimation method is used and computational guidelines are given. After fitting the conditional probability density functions to outside-bark diameter at breast height and total tree height, a bivariate normal copula distribution model was constructed. Predictions from the mixed-effects parameters SDE tree height model calculated during this research were compared with regression tree height equations. The results are implemented in the symbolic computational language MAPLE.

  8. The use of copulas to practical estimation of multivariate stochastic differential equation mixed effects models

    NASA Astrophysics Data System (ADS)

    Rupšys, P.

    2015-10-01

    A system of stochastic differential equations (SDEs) with mixed-effects parameters and a multivariate normal copula density function was used to develop a tree height model for Scots pine trees in Lithuania. A two-step maximum likelihood parameter estimation method is used and computational guidelines are given. After fitting the conditional probability density functions to outside-bark diameter at breast height and total tree height, a bivariate normal copula distribution model was constructed. Predictions from the mixed-effects parameters SDE tree height model calculated during this research were compared with regression tree height equations. The results are implemented in the symbolic computational language MAPLE.

  9. System and Method for Outlier Detection via Estimating Clusters

    NASA Technical Reports Server (NTRS)

    Iverson, David J. (Inventor)

    2016-01-01

    An efficient method and system for real-time or offline analysis of multivariate sensor data for use in anomaly detection, fault detection, and system health monitoring is provided. Models automatically derived from training data, typically nominal system data acquired from sensors in normally operating conditions or from detailed simulations, are used to identify unusual, out-of-family data samples (outliers) that indicate possible system failure or degradation. Outliers are determined by analyzing the degree of deviation of current system behavior from the models formed from the nominal system data. The deviation of current system behavior is presented as an easy-to-interpret numerical score along with a measure of the relative contribution of each system parameter to any off-nominal deviation. The techniques described herein may also be used to "clean" the training data.
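
    The following sketch illustrates the general idea of a cluster-based deviation score (it is not the patented system): clusters are learned from nominal training data, and a new sample is scored by its distance to the nearest cluster centre, with per-parameter contributions reported alongside. Data shapes and the cluster count are assumptions.

    import numpy as np
    from sklearn.cluster import KMeans

    def fit_nominal_model(train, n_clusters=8, seed=0):
        """train: array (n_samples, n_parameters) of nominal sensor data."""
        return KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit(train)

    def deviation_score(model, sample):
        diffs = model.cluster_centers_ - sample          # offsets to every centre
        d = np.linalg.norm(diffs, axis=1)
        nearest = np.argmin(d)
        contributions = diffs[nearest] ** 2 / max(d[nearest] ** 2, 1e-12)
        return d[nearest], contributions                 # score, per-parameter share

    # Usage (hypothetical data):
    # model = fit_nominal_model(np.random.default_rng(0).normal(size=(500, 4)))
    # score, contrib = deviation_score(model, np.array([3.0, 0.0, 0.0, 0.5]))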

  10. Method and system for non-linear motion estimation

    NASA Technical Reports Server (NTRS)

    Lu, Ligang (Inventor)

    2011-01-01

    A method and system for extrapolating and interpolating a visual signal, including determining a first motion vector between a first pixel position in a first image and a second pixel position in a second image, determining a second motion vector between the second pixel position in the second image and a third pixel position in a third image, determining a third motion vector between one of the first pixel position in the first image and the second pixel position in the second image, and the second pixel position in the second image and the third pixel position in the third image, using a non-linear model, and determining a position of a fourth pixel in a fourth image based upon the third motion vector.

  11. Statistical classification methods for estimating ancestry using morphoscopic traits.

    PubMed

    Hefner, Joseph T; Ousley, Stephen D

    2014-07-01

    Ancestry assessments using cranial morphoscopic traits currently rely on subjective trait lists and observer experience rather than empirical support. The trait list approach, which is untested, unverified, and in many respects unrefined, is relied upon because of tradition and subjective experience. Our objective was to examine the utility of frequently cited morphoscopic traits and to explore eleven appropriate and novel methods for classifying an unknown cranium into one of several reference groups. Based on these results, artificial neural networks (aNNs), OSSA, support vector machines, and random forest models showed mean classification accuracies of at least 85%. The aNNs had the highest overall classification rate (87.8%), and random forests showed the smallest difference between the highest (90.4%) and lowest (76.5%) classification accuracies. The results of this research demonstrate that morphoscopic traits can be successfully used to assess ancestry without relying only on the experience of the observer.

  12. Comparative Evaluation of Two Methods to Estimate Natural Gas Production in Texas

    EIA Publications

    2003-01-01

    This report describes an evaluation conducted by the Energy Information Administration (EIA) in August 2003 of two methods that estimate natural gas production in Texas. The first method (parametric method) was used by EIA from February through August 2003 and the second method (multinomial method) replaced it starting in September 2003, based on the results of this evaluation.

  13. A method to estimate weight and dimensions of large and small gas turbine engines

    NASA Technical Reports Server (NTRS)

    Onat, E.; Klees, G. W.

    1979-01-01

    A computerized method was developed to estimate weight and envelope dimensions of large and small gas turbine engines within ±5% to ±10%. The method is based on correlations of component weight and design features of 29 data base engines. Rotating components were estimated by a preliminary design procedure which is sensitive to blade geometry, operating conditions, material properties, shaft speed, hub-tip ratio, etc. The development and justification of the method selected, and the various methods of analysis, are discussed.

  14. Study on Comparison of Bidding and Pricing Behavior Distinction between Estimate Methods

    NASA Astrophysics Data System (ADS)

    Morimoto, Emi; Namerikawa, Susumu

    The most notable recent trend in bidding and pricing behavior is the increasing number of bids placed just above the criteria for low-price bidding investigations. The contractor's markup is the difference between the bid price and the execution price; in Japanese public works bidding, it is therefore the difference between the low-price investigation criterion and the execution price. In practice, bidders' strategies and behavior have been controlled by public engineers' budgets, and estimation and bidding are inseparably linked in the Japanese public works procurement system. A trial of the unit-price-type estimation method began in 2004, while the accumulated estimation method remains one of the standard methods for public works, so two standard estimation methods now coexist in Japan. In this study, we carried out a statistical analysis of bid information for civil engineering works procured by the Ministry of Land, Infrastructure, and Transportation in 2008. The analysis raises several issues showing that bidding and pricing behavior is related to the estimation method used in Japanese public works bidding. The two standard estimation methods produce different outcomes for the number of bidders (the bid/no-bid decision) and the distribution of bid prices (the markup decision). The comparison of bid price distributions showed that, for large-sized public works estimated with the unit-price-type method, the percentage of bids concentrated at the low-price investigation criterion tended to be higher than under the accumulated estimation method. In addition, the number of bidders for public works estimated with the unit-price-type method tends to increase significantly; the use of unit-price estimation is likely to have been one of the factors in construction companies' decisions to participate in these biddings.

  15. A new TDOA estimation method in Three-satellite interference localisation

    NASA Astrophysics Data System (ADS)

    Dou, Huijing; Lei, Qian; Li, Wenxue; Xing, Qingqing

    2015-05-01

    Time difference of arrival (TDOA) estimation is the key to three-satellite interference localisation; to improve localisation accuracy, the TDOA parameter must be estimated accurately and efficiently. Building on the wavelet-transform correlation TDOA estimation algorithm, and combining the correlation method with a Hilbert-transform subtraction, we put forward a high-precision TDOA estimation method for three-satellite interference location. The proposed algorithm exploits the fact that the zero-crossing point of the Hilbert transform of the correlation function corresponds to the correlation peak: the absolute value of the Hilbert transform is subtracted from the correlation function obtained by the wavelet-transform correlation method, which sharpens the peak and improves the TDOA estimation precision, so that the positioning is more accurate and effective.
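
    The peak-sharpening step described above can be sketched as follows: the absolute value of the Hilbert transform of the cross-correlation function is subtracted from the correlation itself, which suppresses the correlation away from the true peak (where the Hilbert transform crosses zero). The signals and delay are synthetic, and the wavelet preprocessing of the cited method is omitted.

    import numpy as np
    from scipy.signal import hilbert

    rng = np.random.default_rng(0)
    fs = 1.0e6                                  # sample rate (Hz), assumed
    n = 4096
    delay = 37                                  # true TDOA in samples
    s = rng.normal(size=n)
    x1 = s + 0.1 * rng.normal(size=n)
    x2 = np.roll(s, delay) + 0.1 * rng.normal(size=n)

    corr = np.correlate(x2, x1, mode="full")
    lags = np.arange(-n + 1, n)
    # Subtract |Hilbert transform of the correlation| to sharpen the main peak.
    sharpened = corr - np.abs(np.imag(hilbert(corr)))

    print("plain correlation peak (s):", lags[np.argmax(corr)] / fs)
    print("sharpened peak (s):        ", lags[np.argmax(sharpened)] / fs)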

  16. A comparison of methods to estimate photosynthetic light absorption in leaves with contrasting morphology.

    PubMed

    Olascoaga, Beñat; Mac Arthur, Alasdair; Atherton, Jon; Porcar-Castell, Albert

    2016-03-01

    Accurate temporal and spatial measurements of leaf optical traits (i.e., absorption, reflectance and transmittance) are paramount to photosynthetic studies. These optical traits are also needed to couple radiative transfer and physiological models to facilitate the interpretation of optical data. However, estimating leaf optical traits in leaves with complex morphologies remains a challenge. Leaf optical traits can be measured using integrating spheres, either by placing the leaf sample in one of the measuring ports (External Method) or by placing the sample inside the sphere (Internal Method). However, in leaves with complex morphology (e.g., needles), the External Method presents limitations associated with gaps between the leaves, and the Internal Method presents uncertainties related to the estimation of total leaf area. We introduce a modified version of the Internal Method, which bypasses the effect of gaps and the need to estimate total leaf area, by painting the leaves black and measuring them before and after painting. We assess and compare the new method with the External Method using a broadleaf and two conifer species. Both methods yielded similar leaf absorption estimates for the broadleaf, but absorption estimates were higher with the External Method for the conifer species. Factors explaining the differences between methods, their trade-offs and their advantages and limitations are also discussed. We suggest that the new method can be used to estimate leaf absorption in any type of leaf independently of its morphology, and be used to study further the impact of gap fraction in the External Method.

  17. An improved method for nonlinear parameter estimation: a case study of the Rössler model

    NASA Astrophysics Data System (ADS)

    He, Wen-Ping; Wang, Liu; Jiang, Yun-Di; Wan, Shi-Quan

    2016-08-01

    Parameter estimation is an important research topic in nonlinear dynamics. Based on the evolutionary algorithm (EA), Wang et al. (2014) presented a new scheme for nonlinear parameter estimation, and numerical tests indicate that its estimation precision is satisfactory. However, the convergence rate of the EA is relatively slow when multiple unknown parameters in a multidimensional dynamical system are estimated simultaneously. To solve this problem, an improved method for parameter estimation of nonlinear dynamical equations is provided in the present paper. The main idea of the improved scheme is to use all of the known time series for all of the components of a dynamical system to estimate the parameters of a single component one by one, instead of estimating all of the parameters in all of the components simultaneously; thus, all of the parameters can be estimated stage by stage. The performance of the improved method was tested using a classic chaotic system, the Rössler model. The numerical tests show that the amended parameter estimation scheme greatly improves the searching efficiency and that there is a significant increase in the convergence rate of the EA, particularly for multiparameter estimation in multidimensional dynamical equations. Moreover, the results indicate that the accuracy of parameter estimation and the CPU time consumed by the presented method have no obvious dependence on the sample size.

  18. Multivariate drought frequency estimation using copula method in Southwest China

    NASA Astrophysics Data System (ADS)

    Hao, Cui; Zhang, Jiahua; Yao, Fengmei

    2015-12-01

    Drought over Southwest China occurs frequently and has an obvious seasonal character. Proper management of regional droughts requires knowledge of the expected frequency or probability of specific climate conditions. This study utilized k-means classification and copulas to characterize the regional drought occurrence probability and return period based on trivariate drought properties, i.e., drought duration, severity, and peak. A drought event was defined as occurring when the 3-month Standardized Precipitation Evapotranspiration Index (SPEI) was less than -0.99, in accordance with the regional climate characteristics. The region was then classified into six clusters by the k-means method based on annual and seasonal precipitation and temperature, and marginal probability distributions were established for each drought property in each sub-region. Several copula types were tested for best fit, and the Student t copula was recognized as the best one to integrate drought duration, severity, and peak. The results indicated that a proper classification is important for regional drought frequency analysis and that copulas are useful tools for exploring the associations of correlated drought variables and analyzing drought frequency. The Student t copula was a robust and proper function for drought joint probability and return period analysis, which is important for analyzing and predicting regional drought risks.

  19. Variable methods to estimate the ionospheric horizontal gradient

    NASA Astrophysics Data System (ADS)

    Nagarajoo, Karthigesu

    2016-06-01

    DGPS, or differential Global Positioning System, is a system in which the range error at a reference station (after eliminating errors due to its clock, hardware delay, and multipath) is removed from the range measurement of a user viewing the same satellite, presuming that the satellite paths to the reference station and the user experience common errors due to the ionosphere, clock errors, etc. Under this assumption, the error due to ionospheric refraction is taken to be the same for the two closely spaced paths (such as a baseline of 10 km between the reference station and the user, as used in the simulations throughout this paper unless otherwise stated), and thus the presence of an ionospheric horizontal gradient is ignored. If a user's path is exposed to a drastically large ionospheric gradient, the large difference in ionospheric delay between the reference station and the user can result in a significant position error for the user. Several examples of extremely large ionospheric gradients that could cause significant user errors have been observed. The ionospheric horizontal gradient could instead be obtained from the gradient of the Total Electron Content (TEC) observed from a number of received GPS satellites at one or more reference stations, or from empirical models updated with real-time data. To investigate the former, in this work the dual-frequency method has been used to obtain both South-North and East-West gradients by using four receiving stations separated in those directions. In addition, observation data from Navy Ionospheric Monitoring System (NIMS) receivers and the TEC contour map from the Rutherford Appleton Laboratory (RAL), UK, have also been used in order to define the magnitude and direction of the gradient.

  20. Estimating basin scale evapotranspiration (ET) by water balance and remote sensing methods

    USGS Publications Warehouse

    Senay, G.B.; Leake, S.; Nagler, P.L.; Artan, G.; Dickinson, J.; Cordova, J.T.; Glenn, E.P.

    2011-01-01

    Evapotranspiration (ET) is an important hydrological process that can be studied and estimated at multiple spatial scales ranging from a leaf to a river basin. We present a review of methods in estimating basin scale ET and its applications in understanding basin water balance dynamics. The review focuses on two aspects of ET: (i) how the basin scale water balance approach is used to estimate ET; and (ii) how ‘direct’ measurement and modelling approaches are used to estimate basin scale ET. Obviously, the basin water balance-based ET requires the availability of good precipitation and discharge data to calculate ET as a residual on longer time scales (annual) where net storage changes are assumed to be negligible. ET estimated from such a basin water balance principle is generally used for validating the performance of ET models. On the other hand, many of the direct estimation methods involve the use of remotely sensed data to estimate spatially explicit ET and use basin-wide averaging to estimate basin scale ET. The direct methods can be grouped into soil moisture balance modelling, satellite-based vegetation index methods, and methods based on satellite land surface temperature measurements that convert potential ET into actual ET using a proportionality relationship. The review also includes the use of complementary ET estimation principles for large area applications. The review identifies the need to compare and evaluate the different ET approaches using standard data sets in basins covering different hydro-climatic regions of the world.

  1. An Efficient Acoustic Density Estimation Method with Human Detectors Applied to Gibbons in Cambodia

    PubMed Central

    Kidney, Darren; Rawson, Benjamin M.; Borchers, David L.; Stevenson, Ben C.; Marques, Tiago A.; Thomas, Len

    2016-01-01

    Some animal species are hard to see but easy to hear. Standard visual methods for estimating population density for such species are often ineffective or inefficient, but methods based on passive acoustics show more promise. We develop spatially explicit capture-recapture (SECR) methods for territorial vocalising species, in which humans act as an acoustic detector array. We use SECR and estimated bearing data from a single-occasion acoustic survey of a gibbon population in northeastern Cambodia to estimate the density of calling groups. The properties of the estimator are assessed using a simulation study, in which a variety of survey designs are also investigated. We then present a new form of the SECR likelihood for multi-occasion data which accounts for the stochastic availability of animals. In the context of gibbon surveys this allows model-based estimation of the proportion of groups that produce territorial vocalisations on a given day, thereby enabling the density of groups, instead of the density of calling groups, to be estimated. We illustrate the performance of this new estimator by simulation. We show that it is possible to estimate density reliably from human acoustic detections of visually cryptic species using SECR methods. For gibbon surveys we also show that incorporating observers' estimates of bearings to detected groups substantially improves estimator performance. Using the new form of the SECR likelihood we demonstrate that estimates of availability, in addition to population density and detection function parameters, can be obtained from multi-occasion data, and that the detection function parameters are not confounded with the availability parameter. This acoustic SECR method provides a means of obtaining reliable density estimates for territorial vocalising species. It is also efficient in terms of data requirements since it only requires routine survey data. We anticipate that the low-tech field requirements will make this method

  2. Applicability of Demirjian's four methods and Willems method for age estimation in a sample of Turkish children.

    PubMed

    Akkaya, Nursel; Yilanci, Hümeyra Özge; Göksülük, Dinçer

    2015-09-01

    The aim of this study was to evaluate the applicability of five dental methods, including Demirjian's original, revised, four teeth, and alternate four teeth methods and the Willems method, for age estimation in a sample of Turkish children. Panoramic radiographs of 799 children (412 females, 387 males) aged between 2.20 and 15.99 years were examined by two observers. A repeated measures ANOVA was performed to compare dental methods among gender and age groups. All five methods overestimated the chronological age on average. Among these, the Willems method was found to be the most accurate, showing overestimations of 0.07 and 0.15 years for males and females, respectively. It was followed by Demirjian's four teeth methods, revised and original methods. According to the results, the Willems method can be recommended for dental age estimation of Turkish children in forensic applications.

  3. Methods to estimate the between‐study variance and its uncertainty in meta‐analysis†

    PubMed Central

    Jackson, Dan; Viechtbauer, Wolfgang; Bender, Ralf; Bowden, Jack; Knapp, Guido; Kuss, Oliver; Higgins, Julian PT; Langan, Dean; Salanti, Georgia

    2015-01-01

    Meta‐analyses are typically used to estimate the overall/mean of an outcome of interest. However, inference about between‐study variability, which is typically modelled using a between‐study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between‐study variance, has been long challenged. Our aim is to identify known methods for estimation of the between‐study variance and its corresponding uncertainty, and to summarise the simulation and empirical evidence that compares them. We identified 16 estimators for the between‐study variance, seven methods to calculate confidence intervals, and several comparative studies. Simulation studies suggest that for both dichotomous and continuous data the estimator proposed by Paule and Mandel and for continuous data the restricted maximum likelihood estimator are better alternatives to estimate the between‐study variance. Based on the scenarios and results presented in the published studies, we recommend the Q‐profile method and the alternative approach based on a ‘generalised Cochran between‐study variance statistic’ to compute corresponding confidence intervals around the resulting estimates. Our recommendations are based on a qualitative evaluation of the existing literature and expert consensus. Evidence‐based recommendations require an extensive simulation study where all methods would be compared under the same scenarios. © 2015 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd. PMID:26332144

  4. Joint state and parameter estimation of the hemodynamic model by particle smoother expectation maximization method

    NASA Astrophysics Data System (ADS)

    Aslan, Serdar; Taylan Cemgil, Ali; Akın, Ata

    2016-08-01

    Objective. In this paper, we aimed for the robust estimation of the parameters and states of the hemodynamic model by using the blood oxygen level dependent signal. Approach. In the fMRI literature, there are only a few successful methods that are able to make a joint estimation of the states and parameters of the hemodynamic model. In this paper, we implemented a maximum likelihood based method called the particle smoother expectation maximization (PSEM) algorithm for the joint state and parameter estimation. Main results. Former sequential Monte Carlo methods were only reliable in the hemodynamic state estimates. They were claimed to outperform the local linearization (LL) filter and the extended Kalman filter (EKF). The PSEM algorithm is compared with the most successful method, called the square-root cubature Kalman smoother (SCKS), for both state and parameter estimation. SCKS was found to be better than the dynamic expectation maximization (DEM) algorithm, which was shown to be a better estimator than EKF, LL and particle filters. Significance. PSEM was more accurate than SCKS for both the state and the parameter estimation. Hence, PSEM seems to be the most accurate method for system identification and state estimation in the hemodynamic model inversion literature. This paper does not compare its results with the Tikhonov-regularized Newton CKF (TNF-CKF), a recent robust method which works in the filtering sense.

  5. Loss Factor Estimation Using the Impulse Response Decay Method on a Stiffened Structure

    NASA Technical Reports Server (NTRS)

    Cabell, Randolph; Schiller, Noah; Allen, Albert; Moeller, Mark

    2009-01-01

    High-frequency vibroacoustic modeling is typically performed using energy-based techniques such as Statistical Energy Analysis (SEA). Energy models require an estimate of the internal damping loss factor. Unfortunately, the loss factor is difficult to estimate analytically, and experimental methods such as the power injection method can require extensive measurements over the structure of interest. This paper discusses the implications of estimating damping loss factors using the impulse response decay method (IRDM) from a limited set of response measurements. An automated procedure for implementing IRDM is described and then evaluated using data from a finite element model of a stiffened, curved panel. Estimated loss factors are compared with loss factors computed using a power injection method and a manual curve fit. The paper discusses the sensitivity of the IRDM loss factor estimates to damping of connected subsystems and the number and location of points in the measurement ensemble.
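
    A minimal sketch of the impulse response decay method is shown below: the squared band-limited impulse response is backward-integrated (Schroeder integration), the early decay slope is fitted in dB/s, and the band loss factor follows from eta = 2.2 / (f * T60). The synthetic single-mode impulse response stands in for a measured one.

    import numpy as np

    fs = 10000.0                                 # sample rate (Hz), assumed
    f_band = 500.0                               # band centre frequency (Hz)
    eta_true = 0.02                              # loss factor used to build the test signal
    t = np.arange(0.0, 1.0, 1.0 / fs)
    h = np.exp(-np.pi * f_band * eta_true * t) * np.cos(2.0 * np.pi * f_band * t)

    # Schroeder backward integration of the squared impulse response.
    edc = np.cumsum(h[::-1] ** 2)[::-1]
    edc_db = 10.0 * np.log10(edc / edc[0])

    # Fit the decay slope (dB/s) over the -5 dB to -25 dB portion of the curve.
    mask = (edc_db <= -5.0) & (edc_db >= -25.0)
    slope = np.polyfit(t[mask], edc_db[mask], 1)[0]
    T60 = -60.0 / slope
    eta = 2.2 / (f_band * T60)
    print(f"estimated loss factor: {eta:.4f} (target {eta_true})")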

  6. Method for Estimating Low-Frequency Return Current of DC Electric Railcar

    NASA Astrophysics Data System (ADS)

    Hatsukade, Satoru

    The estimation of the harmonic current of railcars is necessary for achieving compatibility between train signaling systems and railcar equipment. However, although several theoretical analysis methods exist for estimating the harmonic current of railcars using switching functions, there are no theoretical methods for estimating low-frequency currents at frequencies below the power converter's carrier frequency. This paper describes a method for estimating the spectrum (frequency and amplitude) of the low-frequency return current of DC electric railcars. First, relationships between the return current and characteristics of the DC electric railcars, such as mass and acceleration, are determined. Then, mathematical (not numerical) results for the low-frequency current are obtained from the time-current curve of a DC electric railcar by using Fourier series expansions. Finally, measurement results clearly show the effectiveness of the estimation method developed in this study.
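
    The Fourier series step described above can be sketched as follows; the trapezoidal time-current profile (accelerate, cruise, brake) is a made-up stand-in for a measured or simulated run of duration T.

    import numpy as np

    T = 120.0                                   # run duration (s), assumed
    t = np.linspace(0.0, T, 12000, endpoint=False)
    # Made-up time-current curve: linear ramp-up, constant cruise, zero while braking.
    current = np.where(t < 30, 400.0 * t / 30.0, np.where(t < 90, 150.0, 0.0))

    def harmonic_amplitude(i_t, t, T, k):
        """Amplitude (A) of the k-th harmonic of the fundamental 1/T (k >= 1)."""
        a_k = 2.0 / T * np.trapz(i_t * np.cos(2.0 * np.pi * k * t / T), t)
        b_k = 2.0 / T * np.trapz(i_t * np.sin(2.0 * np.pi * k * t / T), t)
        return np.hypot(a_k, b_k)

    for k in range(1, 6):
        print(f"{1000.0 * k / T:6.2f} mHz : {harmonic_amplitude(current, t, T, k):7.1f} A")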

  7. A Comparison of Methods for Estimating Quadratic Effects in Nonlinear Structural Equation Models

    PubMed Central

    Harring, Jeffrey R.; Weiss, Brandi A.; Hsu, Jui-Chen

    2012-01-01

    Two Monte Carlo simulations were performed to compare methods for estimating and testing hypotheses of quadratic effects in latent variable regression models. The methods considered in the current study were (a) a 2-stage moderated regression approach using latent variable scores, (b) an unconstrained product indicator approach, (c) a latent moderated structural equation method, (d) a fully Bayesian approach, and (e) marginal maximum likelihood estimation. Of the 5 estimation methods, it was found that overall the methods based on maximum likelihood estimation and the Bayesian approach performed best in terms of bias, root-mean-square error, standard error ratios, power, and Type I error control, although key differences were observed. Similarities as well as disparities among methods are highlighted and general recommendations articulated. As a point of comparison, all 5 approaches were fit to educational reading data using a reparameterized version of the latent quadratic model. PMID:22429193

  8. Using Resampling To Estimate the Precision of an Empirical Standard-Setting Method.

    ERIC Educational Resources Information Center

    Muijtjens, Arno M. M.; Kramer, Anneke W. M.; Kaufman, David M.; Van der Vleuten, Cees P. M.

    2003-01-01

    Developed a method to estimate the cutscore precisions for empirical standard-setting methods by using resampling. Illustrated the method with two actual datasets consisting of 86 Dutch medical residents and 155 Canadian medical students taking objective structured clinical examinations. Results show the applicability of the method. (SLD)

  9. Comparison of Parametric and Nonparametric Bootstrap Methods for Estimating Random Error in Equipercentile Equating

    ERIC Educational Resources Information Center

    Cui, Zhongmin; Kolen, Michael J.

    2008-01-01

    This article considers two methods of estimating standard errors of equipercentile equating: the parametric bootstrap method and the nonparametric bootstrap method. Using a simulation study, these two methods are compared under three sample sizes (300, 1,000, and 3,000), for two test content areas (the Iowa Tests of Basic Skills Maps and Diagrams…
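
    For reference, the nonparametric bootstrap standard error of a generic statistic can be sketched as below; a simple percentile statistic stands in for the equipercentile equating function, which is omitted for brevity, and the toy score data are synthetic.

    import numpy as np

    def bootstrap_se(scores, statistic, n_boot=1000, seed=0):
        """Nonparametric bootstrap standard error of `statistic` over raw scores."""
        rng = np.random.default_rng(seed)
        reps = [statistic(rng.choice(scores, size=scores.size, replace=True))
                for _ in range(n_boot)]
        return np.std(reps, ddof=1)

    scores = np.random.default_rng(1).integers(0, 41, size=300)   # toy raw scores
    print(bootstrap_se(scores, lambda x: np.percentile(x, 75)))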

  10. Novel and simple non-parametric methods of estimating the joint and marginal densities

    NASA Astrophysics Data System (ADS)

    Alghalith, Moawia

    2016-07-01

    We introduce very simple non-parametric methods that overcome key limitations of the existing literature on both joint and marginal density estimation. In doing so, we do not assume any form of the marginal or joint distribution a priori. Furthermore, our method circumvents the bandwidth selection problem. We compare our method to the kernel density method.

  11. A Simple Joint Estimation Method of Residual Frequency Offset and Sampling Frequency Offset for DVB Systems

    NASA Astrophysics Data System (ADS)

    Kwon, Ki-Won; Cho, Yongsoo

    This letter presents a simple joint estimation method for residual frequency offset (RFO) and sampling frequency offset (STO) in OFDM-based digital video broadcasting (DVB) systems. The proposed method selects a continual pilot (CP) subset from an unsymmetrically and non-uniformly distributed CP set to obtain an unbiased estimator. Simulation results show that the proposed method using a properly selected CP subset is unbiased and performs robustly.

  12. A method for estimating and removing streaking artifacts in quantitative susceptibility mapping.

    PubMed

    Li, Wei; Wang, Nian; Yu, Fang; Han, Hui; Cao, Wei; Romero, Rebecca; Tantiwongkosi, Bundhit; Duong, Timothy Q; Liu, Chunlei

    2015-03-01

    Quantitative susceptibility mapping (QSM) is a novel MRI method for quantifying tissue magnetic property. In the brain, it reflects the molecular composition and microstructure of the local tissue. However, susceptibility maps reconstructed from single-orientation data still suffer from streaking artifacts which obscure structural details and small lesions. We propose and have developed a general method for estimating streaking artifacts and subtracting them from susceptibility maps. Specifically, this method uses a sparse linear equation and least-squares (LSQR)-algorithm-based method to derive an initial estimation of magnetic susceptibility, a fast quantitative susceptibility mapping method to estimate the susceptibility boundaries, and an iterative approach to estimate the susceptibility artifact from ill-conditioned k-space regions only. With a fixed set of parameters for the initial susceptibility estimation and subsequent streaking artifact estimation and removal, the method provides an unbiased estimate of tissue susceptibility with negligible streaking artifacts, as compared to multi-orientation QSM reconstruction. This method allows for improved delineation of white matter lesions in patients with multiple sclerosis and small structures of the human brain with excellent anatomical details. The proposed methodology can be extended to other existing QSM algorithms.

  13. Inter-Method Discrepancies in Brain Volume Estimation May Drive Inconsistent Findings in Autism

    PubMed Central

    Katuwal, Gajendra J.; Baum, Stefi A.; Cahill, Nathan D.; Dougherty, Chase C.; Evans, Eli; Evans, David W.; Moore, Gregory J.; Michael, Andrew M.

    2016-01-01

    Previous studies applying automatic preprocessing methods on Structural Magnetic Resonance Imaging (sMRI) report inconsistent neuroanatomical abnormalities in Autism Spectrum Disorder (ASD). In this study we investigate inter-method differences as a possible cause behind these inconsistent findings. In particular, we focus on the estimation of the following brain volumes: gray matter (GM), white matter (WM), cerebrospinal fluid (CSF), and total intracranial volume (TIV). Brain volumes were estimated from T1-weighted sMRIs of 417 ASD subjects and 459 typically developing controls (TDC) from the ABIDE dataset using three popular preprocessing methods: SPM, FSL, and FreeSurfer (FS). Brain volumes estimated by the three methods were correlated but had significant inter-method differences; except TIVSPM vs. TIVFS, all inter-method differences were significant. ASD vs. TDC group differences in all brain volume estimates were dependent on the method used. SPM showed that TIV, GM, and CSF volumes of ASD were larger than TDC with statistical significance, whereas FS and FSL did not show significant differences in any of the volumes; in some cases, the direction of the differences was opposite to SPM. When methods were compared with each other, they showed differential biases for autism, and several biases were larger than the ASD vs. TDC differences of the respective methods. After manual inspection, we found inter-method segmentation mismatches in the cerebellum, sub-cortical structures, and inter-sulcal CSF. In addition, to validate automated TIV estimates we performed manual segmentation on a subset of subjects. Results indicate that SPM estimates are closest to manual segmentation, followed by FS, while FSL estimates were significantly lower. In summary, we show that ASD vs. TDC brain volume differences are method dependent and that these inter-method discrepancies can contribute to inconsistent neuroimaging findings in general. We suggest cross-validation across methods and emphasize the

  14. A new method for estimating the number of non-differentially expressed genes.

    PubMed

    Wu, J; Liu, C Y; Chen, W T; Ma, W Y; Ding, Y

    2016-01-01

    Control of the false discovery rate is a statistical method that is widely used when identifying differentially expressed genes in high-throughput sequencing assays. It is often calculated using an adaptive linear step-up procedure in which the number of non-differentially expressed genes must be estimated accurately. In this paper, we discuss the estimation of this parameter and point out defects in the original estimation method. We also propose a new estimation method and provide an error estimate. We compared the estimation results from the two methods in a simulation study that produced a mean, standard deviation, range, and root mean square error. The results revealed that there was little difference in the mean between the two methods, but the standard deviation, range, and root mean square error obtained using the new method were much smaller than those produced by the original method, which indicates that the new method is more accurate and robust. Furthermore, we used real microarray data to verify this conclusion. Finally, we provide a suggestion for analyzing differentially expressed genes using statistical methods. PMID:27051004
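
    For context, a commonly used estimator of the number of true null (non-differentially expressed) genes is sketched below; it is not necessarily the estimator proposed in the record above. Under the null hypothesis p-values are uniform, so the count of p-values above a threshold lambda, scaled by 1/(1 - lambda), estimates the number of null genes.

    import numpy as np

    def estimate_m0(p_values, lam=0.5):
        """Estimate the number of null genes from a vector of p-values."""
        p = np.asarray(p_values)
        return min(p.size, np.sum(p > lam) / (1.0 - lam))

    # Toy example: 9000 null genes (uniform p-values) plus 1000 differentially
    # expressed genes whose p-values pile up near zero.
    rng = np.random.default_rng(0)
    p = np.concatenate([rng.uniform(size=9000), rng.beta(0.5, 20.0, size=1000)])
    print(round(estimate_m0(p)))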

  15. Methods for estimating monthly streamflow characteristics at ungaged sites in western Montana

    USGS Publications Warehouse

    Parrett, Charles; Cartier, Kenn D.

    1989-01-01

    Three methods were developed for estimating monthly streamflow characteristics for western Montana. The first method, based on multiple-regression equations, relates monthly streamflow characteristics to various basin and climatic variables. Standard errors range from 43 to 107%. The equations are generally not applicable to streams that receive or lose water as a result of geology or that have appreciable upstream storage or diversions. The second method, based on regression equations, relates monthly streamflow characteristics to channel width. Standard errors range from 41 to 111%. The equations are generally not applicable to streams with exposed bedrock, with braided or sand channels, or with recent alterations. The third method requires 12 once-monthly streamflow measurements at an ungaged site. These are then correlated with concurrent flows at some nearby gaged site, and the resulting relation is used to estimate the required monthly streamflow characteristic at the ungaged site. Standard errors range from 19 to 92%. Although generally substantially more reliable than the first or second method, this method may be unreliable if the measurement site and the gage site are not hydrologically similar. A procedure for weighting individual estimates, based on the variance and degree of independence of the individual estimating methods, was also developed. Standard errors range from 15 to 43% when all three methods are used. The weighted-average estimates from all three methods are generally substantially more reliable than any of the individual estimates. (USGS)
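
    The third method lends itself to a short sketch: the 12 once-monthly measurements at the ungaged site are related to concurrent gaged flows by a simple regression (log space is used here, a common choice, though not necessarily the exact procedure of the report), and the fitted relation transfers a long-term monthly statistic from the gaged site to the ungaged site. All flow values below are synthetic.

    import numpy as np

    # 12 once-monthly measurements at the ungaged site and concurrent gaged flows.
    gaged = np.array([220, 180, 150, 140, 160, 400, 900, 1300, 700, 380, 300, 250.0])
    ungaged = np.array([30, 25, 20, 18, 22, 60, 140, 200, 110, 55, 42, 35.0])

    slope, intercept = np.polyfit(np.log(gaged), np.log(ungaged), 1)

    def transfer(gaged_statistic):
        """Estimate the ungaged-site value for a gaged-site monthly statistic."""
        return float(np.exp(intercept + slope * np.log(gaged_statistic)))

    print(transfer(850.0))   # e.g. a mean monthly flow at the gaged site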

  16. Evaluating methods for estimating local effective population size with and without migration.

    PubMed

    Gilbert, Kimberly J; Whitlock, Michael C

    2015-08-01

    Effective population size is a fundamental parameter in population genetics, evolutionary biology, and conservation biology, yet its estimation can be fraught with difficulties. Several methods to estimate Ne from genetic data have been developed that take advantage of various approaches for inferring Ne. The ability of these methods to accurately estimate Ne, however, has not been comprehensively examined. In this study, we employ seven of the most cited methods for estimating Ne from genetic data (Colony2, CoNe, Estim, MLNe, ONeSAMP, TMVP, and NeEstimator including LDNe) across simulated datasets with populations experiencing migration or no migration. The simulated population demographies are an isolated population with no immigration, an island model metapopulation with a sink population receiving immigrants, and an isolation-by-distance stepping stone model of populations. We find considerable variance in the performance of these methods, both within and across demographic scenarios, with some methods performing very poorly. The most accurate estimates of Ne can be obtained by using LDNe, MLNe, or TMVP; however, each of these approaches is outperformed by another in a differing demographic scenario. Knowledge of the approximate demography of a population as well as the availability of temporal data largely improves Ne estimates.

  17. Evaluating methods for estimating local effective population size with and without migration.

    PubMed

    Gilbert, Kimberly J; Whitlock, Michael C

    2015-08-01

    Effective population size is a fundamental parameter in population genetics, evolutionary biology, and conservation biology, yet its estimation can be fraught with difficulties. Several methods to estimate Ne from genetic data have been developed that take advantage of various approaches for inferring Ne. The ability of these methods to accurately estimate Ne, however, has not been comprehensively examined. In this study, we employ seven of the most cited methods for estimating Ne from genetic data (Colony2, CoNe, Estim, MLNe, ONeSAMP, TMVP, and NeEstimator including LDNe) across simulated datasets with populations experiencing migration or no migration. The simulated population demographies are an isolated population with no immigration, an island model metapopulation with a sink population receiving immigrants, and an isolation-by-distance stepping stone model of populations. We find considerable variance in the performance of these methods, both within and across demographic scenarios, with some methods performing very poorly. The most accurate estimates of Ne can be obtained by using LDNe, MLNe, or TMVP; however, each of these approaches is outperformed by another in a differing demographic scenario. Knowledge of the approximate demography of a population as well as the availability of temporal data largely improves Ne estimates. PMID:26118738

  18. Methods to estimate the between-study variance and its uncertainty in meta-analysis.

    PubMed

    Veroniki, Areti Angeliki; Jackson, Dan; Viechtbauer, Wolfgang; Bender, Ralf; Bowden, Jack; Knapp, Guido; Kuss, Oliver; Higgins, Julian P T; Langan, Dean; Salanti, Georgia

    2016-03-01

    Meta-analyses are typically used to estimate the overall/mean of an outcome of interest. However, inference about between-study variability, which is typically modelled using a between-study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between-study variance, has been long challenged. Our aim is to identify known methods for estimation of the between-study variance and its corresponding uncertainty, and to summarise the simulation and empirical evidence that compares them. We identified 16 estimators for the between-study variance, seven methods to calculate confidence intervals, and several comparative studies. Simulation studies suggest that for both dichotomous and continuous data the estimator proposed by Paule and Mandel and for continuous data the restricted maximum likelihood estimator are better alternatives to estimate the between-study variance. Based on the scenarios and results presented in the published studies, we recommend the Q-profile method and the alternative approach based on a 'generalised Cochran between-study variance statistic' to compute corresponding confidence intervals around the resulting estimates. Our recommendations are based on a qualitative evaluation of the existing literature and expert consensus. Evidence-based recommendations require an extensive simulation study where all methods would be compared under the same scenarios. PMID:26332144
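
    For reference, the DerSimonian and Laird moment estimator discussed above (the widely used default that the review challenges) is easy to state in code; the recommended Paule-Mandel and restricted maximum likelihood estimators are iterative and not shown. The study effects and within-study variances below are illustrative only.

    import numpy as np

    def tau2_dersimonian_laird(y, v):
        """Moment estimator of the between-study variance (y: effects, v: variances)."""
        y, w = np.asarray(y, dtype=float), 1.0 / np.asarray(v, dtype=float)
        y_bar = np.sum(w * y) / np.sum(w)
        q = np.sum(w * (y - y_bar) ** 2)                 # Cochran's Q statistic
        denom = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        return max(0.0, (q - (len(y) - 1)) / denom)

    print(tau2_dersimonian_laird(y=[0.10, 0.55, -0.20, 0.62, 0.21],
                                 v=[0.02, 0.04, 0.03, 0.05, 0.02]))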

  19. Statistical Methods for Estimating the Uncertainty in the Best Basis Inventories

    SciTech Connect

    WILMARTH, S.R.

    2000-09-07

    This document describes the statistical methods used to determine sample-based uncertainty estimates for the Best Basis Inventory (BBI). For each waste phase, the equation for the inventory of an analyte in a tank is Inventory (kg or Ci) = Concentration x Density x Waste Volume, and the total inventory is the sum of the inventories in the different waste phases. Using tank sample data, statistical methods are used to obtain estimates of the mean concentration of an analyte and the density of the waste, together with their standard deviations. The volumes of waste in the different phases, and their standard deviations, are estimated from other types of data. The three estimates are multiplied to obtain the inventory estimate, and the standard deviations are combined to obtain the standard deviation of the inventory. The uncertainty estimate for the Best Basis Inventory is the approximate 95% confidence interval on the inventory.
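
    The propagation described above can be sketched as follows: the inventory is the product of mean concentration, density, and waste volume, and, to first order and assuming independence, the relative variances of the three factors add. The approximate 95% interval is taken as plus or minus two standard deviations, and the numbers are illustrative, not actual tank data.

    import math

    def inventory_with_uncertainty(conc, sd_conc, dens, sd_dens, vol, sd_vol):
        """Inventory = Concentration x Density x Waste Volume, with its standard deviation."""
        inv = conc * dens * vol
        rel_var = (sd_conc / conc) ** 2 + (sd_dens / dens) ** 2 + (sd_vol / vol) ** 2
        sd_inv = inv * math.sqrt(rel_var)
        return inv, sd_inv, (inv - 2.0 * sd_inv, inv + 2.0 * sd_inv)

    # Example values in consistent units so that the product comes out in kg.
    inv, sd, ci95 = inventory_with_uncertainty(120.0, 15.0, 1.4, 0.05, 2800.0, 150.0)
    print(f"inventory ~ {inv:.3g} +/- {sd:.2g} kg, approx. 95% interval {ci95}")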

  20. Method and system to estimate variables in an integrated gasification combined cycle (IGCC) plant

    DOEpatents

    Kumar, Aditya; Shi, Ruijie; Dokucu, Mustafa

    2013-09-17

    System and method to estimate variables in an integrated gasification combined cycle (IGCC) plant are provided. The system includes a sensor suite to measure respective plant input and output variables. An extended Kalman filter (EKF) receives sensed plant input variables and includes a dynamic model to generate a plurality of plant state estimates and a covariance matrix for the state estimates. A preemptive-constraining processor is configured to preemptively constrain the state estimates and covariance matrix to be free of constraint violations. A measurement-correction processor may be configured to correct constrained state estimates and a constrained covariance matrix based on processing of sensed plant output variables. The measurement-correction processor is coupled to update the dynamic model with corrected state estimates and a corrected covariance matrix. The updated dynamic model may be configured to estimate values for at least one plant variable not originally sensed by the sensor suite.

  1. Estimating of equilibrium formation temperature by curve fitting method and its problems

    SciTech Connect

    Kenso Takai; Masami Hyodo; Shinji Takasugi

    1994-01-20

    Determination of the true formation temperature from measured bottom-hole temperatures is important for geothermal reservoir evaluation after completion of well drilling. For estimation of the equilibrium formation temperature, we studied a non-linear least-squares fitting method adapting the Middleton Model (Chiba et al., 1988). It was pointed out that this method is applicable as a simple and relatively reliable means of estimating the equilibrium formation temperature after drilling. As a next step, we are studying the estimation of the equilibrium formation temperature from bottom-hole temperature data measured by MWD (measurement-while-drilling) systems. In this study, we have evaluated the applicability of the non-linear least-squares curve-fitting method and the numerical simulator (GEOTEMP2) for estimation of the equilibrium formation temperature while drilling.
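
    The curve-fitting step can be sketched with a generic build-up model fitted by nonlinear least squares, the equilibrium formation temperature being the fitted asymptote. The simple exponential approach-to-equilibrium model below is used purely as a stand-in; it is not the Middleton model of the cited study, and the data points are made up.

    import numpy as np
    from scipy.optimize import curve_fit

    shut_in_hours = np.array([6.0, 12.0, 18.0, 24.0, 36.0, 48.0])
    temps_c = np.array([151.0, 162.0, 169.0, 174.0, 180.0, 183.5])   # made-up data

    def buildup(t, t_eq, dt0, tau):
        """Temperature approaching the equilibrium value t_eq with time constant tau."""
        return t_eq - dt0 * np.exp(-t / tau)

    params, cov = curve_fit(buildup, shut_in_hours, temps_c, p0=(190.0, 50.0, 20.0))
    print(f"estimated equilibrium formation temperature: {params[0]:.1f} degC "
          f"(+/- {np.sqrt(cov[0, 0]):.1f})")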

  2. New Method for Estimation of Aeolian Sand Transport Rate Using Ceramic Sand Flux Sensor (UD-101)

    PubMed Central

    Udo, Keiko

    2009-01-01

    In this study, a new method for the estimation of aeolian sand transport rate was developed; the method employs a ceramic sand flux sensor (UD-101). UD-101 detects wind-blown sand impacting on its surface. The method was devised by considering the results of wind tunnel experiments that were performed using a vertical sediment trap and the UD-101. Field measurements to evaluate the estimation accuracy during the prevalence of unsteady winds were performed on a flat backshore. The results showed that aeolian sand transport rates estimated using the developed method were of the same order as those estimated using the existing method for high transport rates, i.e., for transport rates greater than 0.01 kg m−1 s−1. PMID:22291553

  3. A New Formulation of the Filter-Error Method for Aerodynamic Parameter Estimation in Turbulence

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Morelli, Eugene A.

    2015-01-01

    A new formulation of the filter-error method for estimating aerodynamic parameters in nonlinear aircraft dynamic models during turbulence was developed and demonstrated. The approach uses an estimate of the measurement noise covariance to identify the model parameters, their uncertainties, and the process noise covariance, in a relaxation method analogous to the output-error method. Prior information on the model parameters and uncertainties can be supplied, and a post-estimation correction to the uncertainty was included to account for colored residuals not considered in the theory. No tuning parameters requiring adjustment by the analyst are used in the estimation. The method was demonstrated in simulation using the NASA Generic Transport Model and then applied to flight data from the subscale T-2 jet-engine transport aircraft. Modeling results in different levels of turbulence were compared with results from time-domain output-error and frequency-domain equation-error methods to demonstrate the effectiveness of the approach.

  4. Summary of methods for calculating dynamic lateral stability and response and for estimating aerodynamic stability derivatives

    NASA Technical Reports Server (NTRS)

    Campbell, John P; Mckinney, Marion O

    1952-01-01

    A summary of methods for making dynamic lateral stability and response calculations and for estimating the aerodynamic stability derivatives required for use in these calculations is presented. The processes of performing calculations of the time histories of lateral motions, of the period and damping of these motions, and of the lateral stability boundaries are presented as a series of simple straightforward steps. Existing methods for estimating the stability derivatives are summarized and, in some cases, simple new empirical formulas are presented. Detailed estimation methods are presented for low-subsonic-speed conditions but only a brief discussion and a list of references are given for transonic and supersonic speed conditions.

  5. Brain aneurysm segmentation in CTA and 3DRA using geodesic active regions based on second order prototype features and nonparametric density estimation

    NASA Astrophysics Data System (ADS)

    Hernandez, Monica; Frangi, Alejandro F.

    2005-04-01

    Coupling the geodesic active contours model with statistical information based on regions introduces robustness in the segmentation of images with weak or inhomogeneous gradients. The estimation of the probability density function for each region involves both the definition of the features that describe the image inside the different regions and the method of density estimation itself. A Gaussian Mixture Model is frequently proposed for density estimation. This approach is based on the assumption that the intensity distribution of the image is the most discriminant feature in a region. However, the use of second-order features provides a better discrimination of the different regions, as these features represent more accurately the local properties of the image manifold. Due to the high dimensionality of the problem, the use of nonparametric density estimation methods becomes necessary. In this article, we present a novel method of introducing the second-order information of an image for nonparametric estimation of the probability density functions of the different tissues that are present in medical images. The novelty of the method stems from the use of the response of the image under an orthogonal harmonic operator set, projected onto a prototype space, for feature generation. The technique described here is applied to the segmentation of brain aneurysms in Computed Tomography Angiography (CTA) and 3D Rotational Angiography (3DRA), showing a qualitative improvement over the Gaussian Mixture Model approach.

  6. Fast 2D DOA Estimation Algorithm by an Array Manifold Matching Method with Parallel Linear Arrays

    PubMed Central

    Yang, Lisheng; Liu, Sheng; Li, Dong; Jiang, Qingping; Cao, Hailin

    2016-01-01

    In this paper, the problem of two-dimensional (2D) direction-of-arrival (DOA) estimation with parallel linear arrays is addressed. Two array manifold matching (AMM) approaches, in this work, are developed for the incoherent and coherent signals, respectively. The proposed AMM methods estimate the azimuth angle only with the assumption that the elevation angles are known or estimated. The proposed methods are time efficient since they do not require eigenvalue decomposition (EVD) or peak searching. In addition, the complexity analysis shows the proposed AMM approaches have lower computational complexity than many current state-of-the-art algorithms. The estimated azimuth angles produced by the AMM approaches are automatically paired with the elevation angles. More importantly, for estimating the azimuth angles of coherent signals, the aperture loss issue is avoided since a decorrelation procedure is not required for the proposed AMM method. Numerical studies demonstrate the effectiveness of the proposed approaches. PMID:26907301

  8. Estimation of design sea ice thickness with maximum entropy distribution by particle swarm optimization method

    NASA Astrophysics Data System (ADS)

    Tao, Shanshan; Dong, Sheng; Wang, Zhifeng; Jiang, Wensheng

    2016-06-01

    The maximum entropy distribution, which encompasses various recognized theoretical distributions, is a better curve for estimating the design thickness of sea ice. The method of moments and the empirical curve fitting method are commonly used parameter estimation methods for the maximum entropy distribution. In this study, we propose the particle swarm optimization method as a new parameter estimation method for the maximum entropy distribution, which has the advantage of avoiding the deviation introduced by simplifications made in other methods. We conducted a case study fitting the hindcast thickness of sea ice in the Liaodong Bay of the Bohai Sea using these three parameter estimation methods for the maximum entropy distribution. All methods implemented in this study pass the K-S test at the 0.05 significance level. In terms of the average sum of squared deviations, the empirical curve fitting method provides the best fit to the original data, while the method of moments provides the worst. Among all three methods, the particle swarm optimization method predicts the largest sea ice thickness for the same return period. As a result, we recommend using the particle swarm optimization method for the maximum entropy distribution for offshore structures mainly influenced by sea ice in winter, but using the empirical curve fitting method to reduce cost in the design of temporary and economical buildings.
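    The maximum entropy density itself is not given in the abstract, so the sketch below only illustrates the particle swarm optimization (PSO) step: particles carrying candidate parameter values are flown toward the best-known fits of an assumed two-parameter distribution (a Gumbel CDF stands in here) against empirical plotting-position probabilities. All data, search ranges, and PSO constants are hypothetical.

```python
import numpy as np

def gumbel_cdf(x, mu, beta):
    # Stand-in two-parameter distribution; the maximum entropy density itself
    # is not reproduced here.
    return np.exp(-np.exp(-(x - mu) / beta))

def pso_fit(x_sorted, p_emp, n_particles=30, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform([0.1, 0.01], [3.0, 2.0], size=(n_particles, 2))  # (mu, beta)
    vel = np.zeros_like(pos)
    def cost(p):
        return np.sum((gumbel_cdf(x_sorted, p[0], p[1]) - p_emp) ** 2)
    pbest = pos.copy()
    pbest_cost = np.array([cost(p) for p in pos])
    gbest = pbest[np.argmin(pbest_cost)].copy()
    w, c1, c2 = 0.7, 1.5, 1.5
    for _ in range(n_iter):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        pos[:, 1] = np.clip(pos[:, 1], 1e-3, None)   # keep the scale positive
        costs = np.array([cost(p) for p in pos])
        better = costs < pbest_cost
        pbest[better], pbest_cost[better] = pos[better], costs[better]
        gbest = pbest[np.argmin(pbest_cost)].copy()
    return gbest

# Hypothetical annual maximum ice thicknesses (m) with Weibull plotting positions
ice = np.sort(np.array([0.32, 0.41, 0.45, 0.51, 0.55, 0.62, 0.70, 0.78, 0.85, 0.93]))
p_emp = np.arange(1, ice.size + 1) / (ice.size + 1)
mu, beta = pso_fit(ice, p_emp)
print(f"mu = {mu:.3f}, beta = {beta:.3f}")
```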

  9. Probability density functions characterizing PSC particle size distribution parameters for NAT and STS derived from in situ measurements between 1989 and 2010 above McMurdo Station, Antarctica, and between 1991-2004 above Kiruna, Sweden

    NASA Astrophysics Data System (ADS)

    Deshler, Terry

    2016-04-01

    Balloon-borne optical particle counters were used to make in situ size resolved particle concentration measurements within polar stratospheric clouds (PSCs) over 20 years in the Antarctic and over 10 years in the Arctic. The measurements were made primarily during the late winter in the Antarctic and in the early and mid-winter in the Arctic. Measurements in early and mid-winter were also made during 5 years in the Antarctic. For the analysis bimodal lognormal size distributions are fit to 250 meter averages of the particle concentration data. The characteristics of these fits, along with temperature, water and nitric acid vapor mixing ratios, are used to classify the PSC observations as either NAT, STS, ice, or some mixture of these. The vapor mixing ratios are obtained from satellite when possible, otherwise assumptions are made. This classification of the data is used to construct probability density functions for NAT, STS, and ice number concentration, median radius and distribution width for mid and late winter clouds in the Antarctic and for early and mid-winter clouds in the Arctic. Additional analysis is focused on characterizing the temperature histories associated with the particle classes and the different time periods. The results from these analyses will be presented, and should be useful to set bounds for retrievals of PSC properties from remote measurements, and to constrain model representations of PSCs.

  10. Estimation of Melanin and Hemoglobin Using Spectral Reflectance Images Reconstructed from a Digital RGB Image by the Wiener Estimation Method

    PubMed Central

    Nishidate, Izumi; Maeda, Takaaki; Niizeki, Kyuichi; Aizu, Yoshihisa

    2013-01-01

    A multi-spectral diffuse reflectance imaging method based on a single snapshot of Red-Green-Blue images acquired with an exposure time of 65 ms (15 fps) was investigated for estimating melanin concentration, blood concentration, and oxygen saturation in human skin tissue. The technique utilizes the Wiener estimation method to deduce spectral reflectance images instantaneously from an RGB image. Using the resultant absorbance spectrum as a response variable and the extinction coefficients of melanin, oxygenated hemoglobin and deoxygenated hemoglobin as predictor variables, multiple regression analysis provides regression coefficients. Concentrations of melanin and total blood are then determined from the regression coefficients using conversion vectors that are numerically deduced in advance by Monte Carlo simulations of light transport in skin. Oxygen saturation is obtained directly from the regression coefficients. Experiments with a tissue-like agar gel phantom validated the method. In vivo experiments on fingers during upper limb occlusion demonstrated the ability of the method to evaluate physiological reactions of human skin. PMID:23783740
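    The core of the Wiener estimation step is a linear matrix W = R_s A^T (A R_s A^T + R_n)^(-1) that maps an RGB triplet back to a reflectance spectrum, where A is the camera system matrix, R_s the autocorrelation of training reflectances, and R_n the noise covariance. The sketch below builds such a matrix from synthetic sensitivities and training spectra; everything here (sensitivities, noise level, spectra) is an illustrative assumption, not the authors' calibration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_wl = 61                                    # e.g. 400-700 nm in 5 nm steps
wl = np.linspace(400, 700, n_wl)

# Hypothetical camera system matrix (3 x n_wl): Gaussian RGB sensitivities
def gauss(mu, sigma):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)
A = np.vstack([gauss(610, 40), gauss(540, 40), gauss(460, 40)])

# Hypothetical training reflectances (smooth random spectra) and their autocorrelation
train = np.clip(np.cumsum(rng.normal(0, 0.02, size=(200, n_wl)), axis=1) + 0.5, 0, 1)
R_s = train.T @ train / train.shape[0]

# Wiener estimation matrix: W = R_s A^T (A R_s A^T + R_n)^(-1)
R_n = 1e-4 * np.eye(3)                       # assumed sensor noise covariance
W = R_s @ A.T @ np.linalg.inv(A @ R_s @ A.T + R_n)

# Reconstruct a spectrum from a single RGB triplet
r_true = train[0]
r_hat = W @ (A @ r_true)
print("reconstruction RMSE:", np.sqrt(np.mean((r_hat - r_true) ** 2)))
```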

  11. Two methods of random seed generation to avoid over-segmentation with stochastic watershed: application to nuclear fuel micrographs.

    PubMed

    Tolosa, S Cativa; Blacher, S; Denis, A; Marajofsky, A; Pirard, J-P; Gommes, C J

    2009-10-01

    A stochastic version of the watershed algorithm is obtained by choosing randomly in the image the seeds from which the watershed regions are grown. The output of the procedure is a probability density function corresponding to the probability that each pixel belongs to a boundary. In the present paper, two stochastic seed-generation processes are explored to avoid over-segmentation. The first is a non-uniform Poisson process, the density of which is optimized on the basis of opening granulometry. The second process positions the seeds randomly within disks centred on the maxima of a distance map. The two methods are applied to characterize the grain structure of nuclear fuel pellets. Estimators are proposed for the total edge length and grain number per unit area, L(A) and N(A), which take advantage of the probabilistic nature of the probability density function and do not require segmentation.

  12. Handbook for cost estimating. A method for developing estimates of costs for generic actions for nuclear power plants

    SciTech Connect

    Ball, J.R.; Cohen, S.; Ziegler, E.Z.

    1984-10-01

    This document provides overall guidance to assist the NRC in preparing the types of cost estimates required by the Regulatory Analysis Guidelines and to assist in the assignment of priorities in resolving generic safety issues. The Handbook presents an overall cost model that allows the cost analyst to develop a chronological series of activities needed to implement a specific regulatory requirement throughout all applicable commercial LWR power plants and to identify the significant cost elements for each activity. References to available cost data are provided along with rules of thumb and cost factors to assist in evaluating each cost element. A suitable code-of-accounts data base is presented to assist in organizing and aggregating costs. Rudimentary cost analysis methods are described to allow the analyst to produce a constant-dollar, lifetime cost for the requirement. A step-by-step example cost estimate is included to demonstrate the overall use of the Handbook.

  13. A New Method for Radar Rainfall Estimation Using Merged Radar and Gauge Derived Fields

    NASA Astrophysics Data System (ADS)

    Hasan, M. M.; Sharma, A.; Johnson, F.; Mariethoz, G.; Seed, A.

    2014-12-01

    Accurate estimation of rainfall is critical for any hydrological analysis. The advantage of radar rainfall measurements is their ability to cover large areas. However, the uncertainties in the parameters of the power law, that links reflectivity to rainfall intensity, have to date precluded the widespread use of radars for quantitative rainfall estimates for hydrological studies. There is therefore considerable interest in methods that can combine the strengths of radar and gauge measurements by merging the two data sources. In this work, we propose two new developments to advance this area of research. The first contribution is a non-parametric radar rainfall estimation method (NPZR) which is based on kernel density estimation. Instead of using a traditional Z-R relationship, the NPZR accounts for the uncertainty in the relationship between reflectivity and rainfall intensity. More importantly, this uncertainty can vary for different values of reflectivity. The NPZR method reduces the Mean Square Error (MSE) of the estimated rainfall by 16 % compared to a traditionally fitted Z-R relation. Rainfall estimates are improved at 90% of the gauge locations when the method is applied to the densely gauged Sydney Terrey Hills radar region. A copula based spatial interpolation method (SIR) is used to estimate rainfall from gauge observations at the radar pixel locations. The gauge-based SIR estimates have low uncertainty in areas with good gauge density, whilst the NPZR method provides more reliable rainfall estimates than the SIR method, particularly in the areas of low gauge density. The second contribution of the work is to merge the radar rainfall field with spatially interpolated gauge rainfall estimates. The two rainfall fields are combined using a temporally and spatially varying weighting scheme that can account for the strengths of each method. The weight for each time period at each location is calculated based on the expected estimation error of each method
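    The abstract does not give the NPZR formulation, so the sketch below only illustrates the general kernel idea it alludes to: a Nadaraya-Watson (kernel-weighted) estimate of rainfall conditional on reflectivity, used in place of a fixed power-law Z-R relation so that the Z-R uncertainty can vary with reflectivity. The co-located data and the bandwidth are synthetic assumptions.

```python
import numpy as np

def kernel_rain_estimate(dbz_query, dbz_obs, rain_obs, bandwidth=2.0):
    """Nadaraya-Watson estimate of E[R | Z]: a kernel-weighted average of
    gauge rainfall, weighted by closeness in reflectivity (dBZ)."""
    w = np.exp(-0.5 * ((dbz_query[:, None] - dbz_obs[None, :]) / bandwidth) ** 2)
    return (w @ rain_obs) / w.sum(axis=1)

# Hypothetical co-located radar reflectivities (dBZ) and gauge rain rates (mm/h)
rng = np.random.default_rng(0)
dbz_obs = rng.uniform(10, 55, 500)
rain_obs = (10 ** (dbz_obs / 10) / 200) ** (1 / 1.6) * rng.lognormal(0, 0.3, 500)

dbz_query = np.array([20.0, 35.0, 50.0])
print(kernel_rain_estimate(dbz_query, dbz_obs, rain_obs))
```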

  14. IN-RESIDENCE, MULTIPLE ROUTE EXPOSURES TO CHLORPYRIFOS AND DIAZINON ESTIMATED BY INDIRECT METHOD MODELS

    EPA Science Inventory

    One of the objectives of the National Human Exposure Assessment Survey (NHEXAS) is to estimate exposures to several pollutants in multiple media and determine their distributions for the population of Arizona. This paper presents modeling methods used to estimate exposure dist...

  15. Fitting Multilevel Models with Ordinal Outcomes: Performance of Alternative Specifications and Methods of Estimation

    ERIC Educational Resources Information Center

    Bauer, Daniel J.; Sterba, Sonya K.

    2011-01-01

    Previous research has compared methods of estimation for fitting multilevel models to binary data, but there are reasons to believe that the results will not always generalize to the ordinal case. This article thus evaluates (a) whether and when fitting multilevel linear models to ordinal outcome data is justified and (b) which estimator to employ…

  16. Bayesian and Frequentist Methods for Estimating Joint Uncertainty of Freundlich Adsorption Isotherm Fitting Parameters

    EPA Science Inventory

    In this paper, we present methods for estimating Freundlich isotherm fitting parameters (K and N) and their joint uncertainty, which have been implemented into the freeware software platforms R and WinBUGS. These estimates were determined by both Frequentist and Bayesian analyse...

  17. Simplified sampling methods for estimating levels of lactobacilli in saliva in dental clinical practice.

    PubMed

    Gabre, P; Martinsson, T; Gahnberg, L

    1999-08-01

    The aim of the present study was to evaluate whether estimation of lactobacilli was possible with simplified saliva sampling methods. Dentocult LB (Orion Diagnostica AB, Trosa, Sweden) was used to estimate the number of lactobacilli in saliva sampled by 3 different methods from 96 individuals: (i) Collecting and pouring stimulated saliva over a Dentocult dip-slide; (ii) direct licking of the Dentocult LB dip-slide; (iii) contaminating a wooden spatula with saliva and pressing against the Dentocult dip-slide. The first method was in accordance with the manufacturer's instructions and selected as the 'gold standard'; the other 2 methods were compared with this result. The 2 simplified methods for estimating levels of lactobacilli in saliva showed good reliability and specificity. Sensitivity, defined as the ability to detect individuals with a high number of lactobacilli in saliva, was sufficient for the licking method (85%), but significantly reduced for the wooden spatula method (52%).

  18. Validation tests of an improved kernel density estimation method for identifying disease clusters

    NASA Astrophysics Data System (ADS)

    Cai, Qiang; Rushton, Gerard; Bhaduri, Budhendra

    2012-07-01

    The spatial filter method, which belongs to the class of kernel density estimation methods, has been used to make morbidity and mortality maps in several recent studies. We propose improvements in the method to include spatially adaptive filters to achieve constant standard error of the relative risk estimates; a staircase weight method for weighting observations to reduce estimation bias; and a parameter selection tool to enhance disease cluster detection performance, measured by sensitivity, specificity, and false discovery rate. We test the performance of the method using Monte Carlo simulations of hypothetical disease clusters over a test area of four counties in Iowa. The simulations include different types of spatial disease patterns and high-resolution population distribution data. Results confirm that the new features of the spatial filter method do substantially improve its performance in realistic situations comparable to those where the method is likely to be used.
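    A minimal sketch of the spatially adaptive filter idea mentioned above: at each grid point the filter radius grows until it covers a fixed population, so the resulting rate has a roughly constant standard error. The staircase weighting and the parameter selection tool described in the record are not reproduced; the population, cases, grid, and threshold are hypothetical.

```python
import numpy as np

def adaptive_filter_rates(grid_pts, pop_pts, pop, cases, pop_threshold=5000):
    """For each grid point, expand the circular filter until it captures at
    least pop_threshold people, then return cases/population inside it."""
    rates = np.empty(len(grid_pts))
    for i, g in enumerate(grid_pts):
        d = np.linalg.norm(pop_pts - g, axis=1)
        order = np.argsort(d)
        cum_pop = np.cumsum(pop[order])
        k = np.searchsorted(cum_pop, pop_threshold) + 1
        idx = order[:k]
        rates[i] = cases[idx].sum() / pop[idx].sum()
    return rates

# Hypothetical point populations and case counts on a unit square
rng = np.random.default_rng(2)
pop_pts = rng.uniform(0, 1, size=(1000, 2))
pop = rng.integers(50, 500, size=1000)
cases = rng.poisson(0.002 * pop)
grid = np.array([[x, y] for x in np.linspace(0, 1, 5) for y in np.linspace(0, 1, 5)])
print(adaptive_filter_rates(grid, pop_pts, pop, cases).round(4))
```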

  19. Estimating Rooftop Suitability for PV: A Review of Methods, Patents, and Validation Techniques

    SciTech Connect

    Melius, J.; Margolis, R.; Ong, S.

    2013-12-01

    A number of methods have been developed using remote sensing data to estimate rooftop area suitable for the installation of photovoltaics (PV) at various geospatial resolutions. This report reviews the literature and patents on methods for estimating rooftop-area appropriate for PV, including constant-value methods, manual selection methods, and GIS-based methods. This report also presents NREL's proposed method for estimating suitable rooftop area for PV using Light Detection and Ranging (LiDAR) data in conjunction with a GIS model to predict areas with appropriate slope, orientation, and sunlight. NREL's method is validated against solar installation data from New Jersey, Colorado, and California to compare modeled results to actual on-the-ground measurements.

  20. Joint estimation of TOA and DOA in IR-UWB system using a successive propagator method

    NASA Astrophysics Data System (ADS)

    Wang, Fangqiu; Zhang, Xiaofei; Wang, Chenghua; Zhou, Shengkui

    2015-10-01

    Impulse radio ultra-wideband (IR-UWB) ranging and positioning require accurate estimation of time-of-arrival (TOA) and direction-of-arrival (DOA). With a two-antenna receiver, both the TOA and DOA parameters can be estimated via the two-dimensional (2D) propagator method (PM), in which the 2D spectral peak searching, however, incurs much higher computational complexity. This paper proposes a successive PM algorithm for joint TOA and DOA estimation in IR-UWB systems that avoids 2D spectral peak searching. The proposed algorithm first obtains initial TOA estimates for the two antennas from the propagation matrix, then successively applies one-dimensional (1D) local searches to refine the TOA estimates for the two antennas, and finally obtains the DOA estimates via the difference in the TOAs between the two antennas. The proposed algorithm, which only requires 1D local searches, avoids the high computational cost of the 2D-PM algorithm. Furthermore, the proposed algorithm obtains automatically paired parameters and has better joint TOA and DOA estimation performance than the conventional PM algorithm, the estimation of signal parameters via rotational invariance techniques algorithm, and the matrix pencil algorithm, while its parameter estimates are very close to those of the 2D-PM algorithm. We also derive the mean square error of the TOA and DOA estimates of the proposed algorithm and the Cramer-Rao bound for TOA and DOA estimation. The simulation results verify the usefulness of the proposed algorithm.

  1. Estimating Small-area Populations by Age and Sex Using Spatial Interpolation and Statistical Inference Methods

    SciTech Connect

    Qai, Qiang; Rushton, Gerald; Bhaduri, Budhendra L; Bright, Eddie A; Coleman, Phil R

    2006-01-01

    The objective of this research is to compute population estimates by age and sex for small areas whose boundaries are different from those for which the population counts were made. In our approach, population surfaces and age-sex proportion surfaces are separately estimated. Age-sex population estimates for small areas and their confidence intervals are then computed using a binomial model with the two surfaces as inputs. The approach was implemented for Iowa using a 90 m resolution population grid (LandScan USA) and U.S. Census 2000 population. Three spatial interpolation methods, the areal weighting (AW) method, the ordinary kriging (OK) method, and a modification of the pycnophylactic method, were used on Census Tract populations to estimate the age-sex proportion surfaces. To verify the model, age-sex population estimates were computed for paired Block Groups that straddled Census Tracts and therefore were spatially misaligned with them. The pycnophylactic method and the OK method were more accurate than the AW method. The approach is general and can be used to estimate subgroup-count types of variables from information in existing administrative areas for custom-defined areas used as the spatial basis of support in other applications.

  2. Linear least-squares method for unbiased estimation of T1 from SPGR signals.

    PubMed

    Chang, Lin-Ching; Koay, Cheng Guan; Basser, Peter J; Pierpaoli, Carlo

    2008-08-01

    The longitudinal relaxation time, T(1), can be estimated from two or more spoiled gradient recalled echo images (SPGR) acquired with different flip angles and/or repetition times (TRs). The function relating signal intensity to flip angle and TR is nonlinear; however, a linear form proposed 30 years ago is currently widely used. Here we show that this linear method provides T(1) estimates that have similar precision but lower accuracy than those obtained with a nonlinear method. We also show that T(1) estimated by the linear method is biased due to improper accounting for noise in the fitting. This bias can be significant for clinical SPGR images; for example, T(1) estimated in brain tissue (800 ms < T(1) < 1600 ms) can be overestimated by 10% to 20%. We propose a weighting scheme that correctly accounts for the noise contribution in the fitting procedure. Monte Carlo simulations of SPGR experiments are used to evaluate the accuracy of the estimated T(1) from the widely-used linear, the proposed weighted-uncertainty linear, and the nonlinear methods. We show that the linear method with weighted uncertainties reduces the bias of the linear method, providing T(1) estimates comparable in precision and accuracy to those of the nonlinear method while reducing computation time significantly. PMID:18666108
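    For reference, the linear form discussed in the record rearranges the SPGR signal equation as S/sin(alpha) = E1 * (S/tan(alpha)) + M0*(1 - E1), with E1 = exp(-TR/T1), so T1 follows from the slope of a straight-line fit. The sketch below shows the plain (unweighted) version on noiseless synthetic signals; the weighting scheme the authors propose to remove the noise-induced bias is not reproduced.

```python
import numpy as np

def t1_linear_fit(signal, flip_deg, tr):
    """Classic linearization of the SPGR equation:
    S/sin(a) = E1 * S/tan(a) + M0*(1 - E1), with E1 = exp(-TR/T1),
    so T1 = -TR / ln(slope). Unweighted ordinary least squares."""
    a = np.deg2rad(flip_deg)
    y = signal / np.sin(a)
    x = signal / np.tan(a)
    slope, intercept = np.polyfit(x, y, 1)
    return -tr / np.log(slope)

# Hypothetical noiseless SPGR signals: T1 = 1000 ms, TR = 15 ms, flips 3 and 18 deg
tr, t1_true, m0 = 15.0, 1000.0, 1.0
flips = np.array([3.0, 18.0])
e1 = np.exp(-tr / t1_true)
s = m0 * (1 - e1) * np.sin(np.deg2rad(flips)) / (1 - e1 * np.cos(np.deg2rad(flips)))
print(f"recovered T1 = {t1_linear_fit(s, flips, tr):.1f} ms")
```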

  3. A novel method for estimating the number of species within a region

    PubMed Central

    Shtilerman, Elad; Thompson, Colin J.; Stone, Lewi; Bode, Michael; Burgman, Mark

    2014-01-01

    Ecologists are often required to estimate the number of species in a region or designated area. A number of diversity indices are available for this purpose and are based on sampling the area using quadrats or other means, and estimating the total number of species from these samples. In this paper, a novel theory and method for estimating the number of species is developed. The theory involves the use of the Laplace method for approximating asymptotic integrals. The method is shown to be successful by testing random simulated datasets. In addition, several real survey datasets are tested, including forests that contain a large number (tens to hundreds) of tree species, and an aquatic system with a large number of fish species. The method is shown to give accurate results, and in almost all cases found to be superior to existing tools for estimating diversity. PMID:24500169

  4. A simple method to estimate threshold friction velocity of wind erosion in the field

    NASA Astrophysics Data System (ADS)

    Li, Junran; Okin, Gregory S.; Herrick, Jeffrey E.; Belnap, Jayne; Munson, Seth M.; Miller, Mark E.

    2010-05-01

    This study provides a fast and easy-to-apply method to estimate threshold friction velocity (TFV) of wind erosion in the field. Wind tunnel experiments and a variety of ground measurements including air gun, pocket penetrometer, torvane, and roughness chain were conducted in Moab, Utah and cross-validated in the Mojave Desert, California. Patterns between TFV and ground measurements were examined to identify the optimum method for estimating TFV. The results show that TFVs were best predicted using the air gun and penetrometer measurements in the Moab sites. This empirical method, however, systematically underestimated TFVs in the Mojave Desert sites. Further analysis showed that TFVs in the Mojave sites can be satisfactorily estimated with a correction for rock cover, which is presumably the main cause of the underestimation of TFVs. The proposed method may be also applied to estimate TFVs in environments where other non-erodible elements such as postharvest residuals are found.

  5. Recursive least squares method of regression coefficients estimation as a special case of Kalman filter

    NASA Astrophysics Data System (ADS)

    Borodachev, S. M.

    2016-06-01

    A simple derivation of the recursive least squares (RLS) method equations is given as a special case of Kalman filter estimation of a constant system state under changing observation conditions. A numerical example illustrates the application of RLS to the multicollinearity problem.
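    A minimal sketch of the RLS recursion the abstract refers to, written in the Kalman-filter style of a gain, an innovation update, and a covariance update; the forgetting factor, the initialization, and the toy regression data are assumptions for illustration.

```python
import numpy as np

def rls(X, y, lam=1.0, delta=1000.0):
    """Recursive least squares: theta and P are updated one observation at a
    time, exactly as a Kalman filter would update a constant state.
    lam is a forgetting factor (lam = 1.0 recovers ordinary least squares)."""
    n = X.shape[1]
    theta = np.zeros(n)
    P = delta * np.eye(n)
    for x, t in zip(X, y):
        k = P @ x / (lam + x @ P @ x)          # gain (Kalman gain analogue)
        theta = theta + k * (t - x @ theta)    # innovation update
        P = (P - np.outer(k, x @ P)) / lam     # covariance update
    return theta

# Hypothetical regression: y = 2*x1 - 3*x2 + noise
rng = np.random.default_rng(3)
X = np.column_stack([rng.normal(size=200), rng.normal(size=200)])
y = X @ np.array([2.0, -3.0]) + rng.normal(0, 0.1, 200)
print(rls(X, y))
```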

  6. Experimental parameter estimation method for nonlinear viscoelastic composite material models: an application on arterial tissue.

    PubMed

    Sunbuloglu, Emin; Bozdag, Ergun; Toprak, Tuncer; Islak, Civan

    2013-01-01

    This study aims to establish a method of experimental parameter estimation for a large-deformation nonlinear viscoelastic continuous fibre-reinforced composite material model. Specifically, arterial tissue was investigated during the experimental research and parameter estimation studies, owing to the medical, scientific and socio-economic importance of soft tissue research. Using analytical formulations for specimens under combined inflation/extension/torsion of thick-walled cylindrical tubes, in vitro experiments were carried out on fresh sheep arterial segments, and parameter estimation procedures were applied to the experimental data. Model restrictions were pointed out using the outcomes of the parameter estimation. Needs for further study are discussed.

  7. A method to estimate weight and dimensions of aircraft gas turbine engines. Volume 1: Method of analysis

    NASA Technical Reports Server (NTRS)

    Pera, R. J.; Onat, E.; Klees, G. W.; Tjonneland, E.

    1977-01-01

    Weight and envelope dimensions of aircraft gas turbine engines are estimated within plus or minus 5% to 10% using a computer method based on correlations of component weight and design features of 29 data base engines. Rotating components are estimated by a preliminary design procedure where blade geometry, operating conditions, material properties, shaft speed, hub-tip ratio, etc., are the primary independent variables used. The development and justification of the method selected, the various methods of analysis, the use of the program, and a description of the input/output data are discussed.

  8. A review and comparison of some commonly used methods of estimating petroleum resource availability

    SciTech Connect

    Herbert, J.H.

    1982-10-01

    The purpose of this pedagogical report is to elucidate the characteristics of the principal methods of estimating the petroleum resource base. Other purposes are to indicate the logical similarities and data requirements of these different methods. The report should serve as a guide for the application and interpretation of the different methods.

  9. ZZ-Type a posteriori error estimators for adaptive boundary element methods on a curve.

    PubMed

    Feischl, Michael; Führer, Thomas; Karkulik, Michael; Praetorius, Dirk

    2014-01-01

    In the context of the adaptive finite element method (FEM), ZZ-error estimators named after Zienkiewicz and Zhu (1987) [52] are mathematically well-established and widely used in practice. In this work, we propose and analyze ZZ-type error estimators for the adaptive boundary element method (BEM). We consider weakly singular and hyper-singular integral equations and prove, in particular, convergence of the related adaptive mesh-refining algorithms. Throughout, the theoretical findings are underlined by numerical experiments.

  10. Method for estimating crack-extension resistance curve from residual strength data

    NASA Technical Reports Server (NTRS)

    Orange, T. W.

    1980-01-01

    A method is presented for estimating the crack extension resistance curve (R curve) from residual strength (maximum load against initial crack length) data for precracked fracture specimens. The method allows additional information to be inferred from simple test results, and that information is used to estimate the failure loads of more complicated structures. Numerical differentiation of the residual strength data is required, and the problems that it may present are discussed.

  11. Monitoring Hawaiian waterbirds: evaluation of sampling methods to produce reliable estimates

    USGS Publications Warehouse

    Camp, Richard J.; Brinck, Kevin W.; Paxton, Eben H.; Leopold, Christina

    2014-01-01

    We conducted field trials to assess several different methods of estimating the abundance of four endangered Hawaiian waterbirds: the Hawaiian duck (Anas wyvilliana), Hawaiian coot (Fulica alai), Hawaiian common moorhen (Gallinula chloropus sandvicensis) and Hawaiian stilt (Himantopus mexicanus knudseni). At two sites on Oʽahu, James Campbell National Wildlife Refuge and Hamakua Marsh, we conducted field trials where both solitary and paired observers counted birds and recorded the distance to observed birds. We then compared the results of estimates using the existing simple count, distance estimates from both point- and line-transect surveys, paired observer count estimates, bounded count, and Overton estimators. Comparing covariate recorded values among simultaneous observations revealed inconsistency between observers. We showed that the variation among simple counts means the current direct count survey, even if interpreted as a proportional index of abundance, incorporates many sources of uncertainty that are not taken into account. Analysis revealed violation of model assumptions that allowed us to discount distance-based estimates as a viable estimation technique. Among the remaining methods, point counts by paired observers produced the most precise estimates while meeting model assumptions. We present an example sampling protocol using paired observer counts. Finally, we suggest further research that will improve abundance estimates of Hawaiian waterbirds.

  12. Design of a Direction-of-Arrival Estimation Method Used for an Automatic Bearing Tracking System

    PubMed Central

    Guo, Feng; Liu, Huawei; Huang, Jingchang; Zhang, Xin; Zu, Xingshui; Li, Baoqing; Yuan, Xiaobing

    2016-01-01

    In this paper, we introduce a sub-band direction-of-arrival (DOA) estimation method suitable for employment within an automatic bearing tracking system. Inspired by the magnitude-squared coherence (MSC), we extend the MSC to the sub-band and propose the sub-band magnitude-squared coherence (SMSC) to measure the coherence between the frequency sub-bands of wideband signals. Then, we design a sub-band DOA estimation method which chooses a sub-band from the wideband signals by SMSC for the bearing tracking system. The simulations demonstrate that the sub-band method has a good tradeoff between the wideband methods and narrowband methods in terms of the estimation accuracy, spatial resolution, and computational cost. The proposed method was also tested in the field environment with the bearing tracking system, which also showed a good performance. PMID:27455267
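    As a rough illustration of the sub-band selection idea, the sketch below averages the ordinary magnitude-squared coherence between two channels over candidate frequency sub-bands and keeps the most coherent one. This is not the authors' SMSC definition, and the signals, bands, and parameters are synthetic assumptions.

```python
import numpy as np
from scipy.signal import coherence

def best_subband(x, y, fs, band_edges, nperseg=256):
    """Average the magnitude-squared coherence between two channels over each
    frequency sub-band and return the edges of the most coherent one."""
    f, cxy = coherence(x, y, fs=fs, nperseg=nperseg)
    scores = []
    for lo, hi in band_edges:
        mask = (f >= lo) & (f < hi)
        scores.append(cxy[mask].mean())
    return band_edges[int(np.argmax(scores))], scores

# Hypothetical two-microphone recording: a 300 Hz tone shared by both channels
fs = 4000
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(4)
s = np.sin(2 * np.pi * 300 * t)
x = s + 0.5 * rng.normal(size=t.size)
y = np.roll(s, 3) + 0.5 * rng.normal(size=t.size)     # small delay on channel 2
bands = [(0, 200), (200, 400), (400, 800), (800, 2000)]
print(best_subband(x, y, fs, bands)[0])
```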

  14. Parameter estimation of copula functions using an optimization-based method

    NASA Astrophysics Data System (ADS)

    Abdi, Amin; Hassanzadeh, Yousef; Talatahari, Siamak; Fakheri-Fard, Ahmad; Mirabbasi, Rasoul

    2016-02-01

    Application of copulas can be useful for accurate multivariate frequency analysis of hydrological phenomena. There are many copula functions, and several methods have been proposed for estimating their parameters. Since copula functions are mathematically complicated, estimating the copula parameters is a demanding task. In the present study, an optimization-based method (OBM) is proposed to obtain the parameters of copulas. The usefulness of the proposed method is illustrated on drought events. For this purpose, three commonly used copulas of the Archimedean family, namely the Clayton, Frank, and Gumbel copulas, are used to construct the joint probability distribution of drought characteristics at 60 gauging sites located in East-Azarbaijan province, Iran. The performance of the OBM was compared with two conventional methods, namely the method of moments and inference function for margins. The results illustrate the superiority of the OBM in estimating the copula parameters compared with the other methods considered.
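    For a concrete, simplified illustration of fitting a copula parameter by optimization, the sketch below maximizes the Clayton copula log-likelihood over pseudo-observations with a bounded scalar optimizer. The data are synthetic, and plain maximum likelihood is used here; the abstract does not specify the objective function of the OBM.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def clayton_neg_loglik(theta, u, v):
    """Negative log-likelihood of the Clayton copula (theta > 0)."""
    s = u ** (-theta) + v ** (-theta) - 1.0
    logc = (np.log1p(theta) - (theta + 1.0) * (np.log(u) + np.log(v))
            - (2.0 + 1.0 / theta) * np.log(s))
    return -np.sum(logc)

# Hypothetical drought severity/duration pairs converted to pseudo-observations
rng = np.random.default_rng(5)
z = rng.normal(size=(300, 2))
z[:, 1] = 0.7 * z[:, 0] + np.sqrt(1 - 0.49) * z[:, 1]           # induce dependence
u = (np.argsort(np.argsort(z[:, 0])) + 1) / (z.shape[0] + 1)    # rank-based margins
v = (np.argsort(np.argsort(z[:, 1])) + 1) / (z.shape[0] + 1)

res = minimize_scalar(clayton_neg_loglik, bounds=(1e-3, 20.0), args=(u, v),
                      method="bounded")
print(f"estimated Clayton parameter theta = {res.x:.2f}")
```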

  15. Time Domain Estimation of Arterial Parameters using the Windkessel Model and the Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Gostuski, Vladimir; Pastore, Ignacio; Rodriguez Palacios, Gaspar; Vaca Diez, Gustavo; Moscoso-Vasquez, H. Marcela; Risk, Marcelo

    2016-04-01

    Numerous parameter estimation techniques exist for characterizing the arterial system using electrical circuit analogs. However, they are often limited by their requirements and usually high computational burden. Therefore, a new method for estimating arterial parameters based on Monte Carlo simulation is proposed. A three-element Windkessel model was used to represent the arterial system. The approach was to reduce the error between the calculated and physiological aortic pressure by randomly generating arterial parameter values, while keeping the arterial resistance constant. This last value was obtained for each subject using the arterial flow, and was a necessary consideration in order to obtain a unique set of values for the arterial compliance and peripheral resistance. The estimation technique was applied to in vivo data containing steady beats in mongrel dogs, and it reliably estimated the Windkessel arterial parameters. Further, this method appears to be computationally efficient for on-line time-domain estimation of these parameters.
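    A minimal sketch of the idea under stated assumptions: a three-element Windkessel pressure is simulated from a hypothetical flow waveform, the characteristic (arterial) resistance is held fixed, and candidate (peripheral resistance, compliance) pairs are drawn at random, keeping the pair with the smallest pressure error. The waveform, units, and parameter ranges are illustrative, not the in vivo protocol.

```python
import numpy as np

def wk3_pressure(q, dt, Rc, Rp, C, p0=80.0):
    """Three-element Windkessel: C*dPc/dt = Q - Pc/Rp, P = Pc + Rc*Q,
    integrated with a simple forward-Euler step."""
    pc = np.empty_like(q)
    pc[0] = p0
    for k in range(1, q.size):
        pc[k] = pc[k - 1] + dt * (q[k - 1] - pc[k - 1] / Rp) / C
    return pc + Rc * q

# Hypothetical aortic flow over one beat and a "measured" pressure target
dt = 0.01
t = np.arange(0, 0.8, dt)
q = np.where(t < 0.3, 400 * np.sin(np.pi * t / 0.3), 0.0)
p_meas = wk3_pressure(q, dt, Rc=0.05, Rp=1.0, C=1.3)   # synthetic ground truth

# Monte Carlo search: Rc held fixed, (Rp, C) sampled at random, best fit kept
rng = np.random.default_rng(6)
best = (None, np.inf)
for _ in range(5000):
    Rp, C = rng.uniform(0.5, 2.0), rng.uniform(0.5, 3.0)
    err = np.mean((wk3_pressure(q, dt, 0.05, Rp, C) - p_meas) ** 2)
    if err < best[1]:
        best = ((Rp, C), err)
print("estimated (Rp, C):", np.round(best[0], 2))
```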

  16. Methods, approaches and data sources for estimating stocks of irregular migrants.

    PubMed

    Jandl, Michael

    2011-01-01

    This paper presents a comprehensive review of available methods for sizing irregular migrant populations as a particular group in the study of hidden populations. Based on the existing body of literature on the subject, a generic classification scheme is developed that divides existing estimation procedures into subcategories like “approaches”, “methods” and “estimation techniques”. For each of these categories, basic principles, methodical strengths and weaknesses, as well as practical problems, are identified and discussed with the use of existing examples. Special emphasis is placed on data requirements, data shortcomings and possible estimation biases. In addition, based on the empirical classification and quality assessment of country-specific estimates developed in the CLANDESTINO research project, the potential and requirements for replicating best practice models in other countries are explored. Finally, a number of conclusions on the appropriate design of estimation projects are offered.

  17. One-level prediction-A numerical method for estimating undiscovered metal endowment

    USGS Publications Warehouse

    McCammon, R.B.; Kork, J.O.

    1992-01-01

    One-level prediction has been developed as a numerical method for estimating undiscovered metal endowment within large areas. The method is based on a presumed relationship between a numerical measure of geologic favorability and the spatial distribution of metal endowment. Metal endowment within an unexplored area for which the favorability measure is greater than a favorability threshold level is estimated to be proportional to the area of that unexplored portion. The constant of proportionality is the ratio of the discovered endowment found within a suitably chosen control region, which has been explored, to the area of that explored region. In addition to the estimate of undiscovered endowment, a measure of the error of the estimate is also calculated. One-level prediction has been used to estimate the undiscovered uranium endowment in the San Juan basin, New Mexico, U.S.A. A subroutine to perform the necessary calculations is included. © 1992 Oxford University Press.
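    The proportionality at the heart of one-level prediction can be written as a one-liner: the endowment density of the explored control region multiplied by the favorable unexplored area. A sketch with hypothetical figures (the error measure described in the record is not reproduced):

```python
def one_level_prediction(discovered_endowment, explored_area,
                         favorable_unexplored_area):
    """Undiscovered endowment taken proportional to the favorable unexplored
    area, with the endowment density of the explored control region as the
    constant of proportionality."""
    density = discovered_endowment / explored_area
    return density * favorable_unexplored_area

# Hypothetical figures: 40,000 t discovered in a 2,000 km^2 control region,
# 750 km^2 of unexplored ground above the favorability threshold
print(one_level_prediction(40_000.0, 2_000.0, 750.0))   # -> 15000.0 t
```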

  18. A test of three methods for estimating stature from immature skeletal remains using long bone lengths.

    PubMed

    Cardoso, Hugo F V

    2009-01-01

    In this study, the accuracy of three methods for stature estimation of children from long bone lengths was investigated. The sample utilized consists of nine identified immature skeletons (seven males and two females) of known cadaver length, aged between 1 and 14 years old. Results show that stature (cadaver length) is consistently underestimated by all three methods (from a minimum of 2.9 cm to a maximum of 19.3 cm). The femur/stature ratio provided the least accurate estimates of stature, and predictions were not significantly improved by the other two methods. Differences between true and estimated stature were also greatest when using the length of lower limb bones. Given that the study sample children grew in less than optimal environmental conditions, compared with the children that contributed to the development of the methods, they are stunted and have proportionally shorter legs. This suggests that stature estimation methods are not universally applicable and that environmental differences within a population (e.g., socioeconomic status differences) or differing levels of modernization and social and economic development between nations are an important source of variation in stature and body proportions of children. The fallibility of stature estimation methods, when they do not consider such variation, can be somewhat minimized if stature is estimated from the length of upper limb bones.

  19. Single Tracking Location Methods Suppress Speckle Noise in Shear Wave Velocity Estimation

    PubMed Central

    Elegbe, Etana C.; McAleavey, Stephen A.

    2014-01-01

    In ultrasound-based elastography methods, the estimation of shear wave velocity typically involves the tracking of speckle motion due to an applied force. The errors in the estimates of tissue displacement, and thus shear wave velocity, are generally attributed to electronic noise and decorrelation due to physical processes. We present our preliminary findings on another source of error, namely, speckle-induced bias in phase estimation. We find that methods that involve tracking in a single location, as opposed to multiple locations, are less sensitive to this source of error since the measurement is differential in nature and cancels out speckle-induced phase errors. PMID:23493611

  20. An easy field method for estimating the abundance of culicid larval instars.

    PubMed

    Carron, Alexandre; Duchet, Claire; Gaven, Bruno; Lagneau, Christophe

    2003-12-01

    A new method is proposed that avoids manual counting of mosquito larvae when estimating larval abundance in the field. The method is based on a visual comparison of the abundance in a standardized sampling tray (called an abacus) with 5 (abacus 5) or 10 (abacus 10) diagrammatically prepared abundance classes. Accuracy under laboratory and field conditions and individual bias were evaluated, and both abaci provide a reliable estimation of abundance under both conditions. There is no individual bias, whether or not observers are familiar with the method. The abaci could also be used for a quick estimation of larval treatment effectiveness, and for the study of population dynamics and spatial distribution.