Structural Reliability Using Probability Density Estimation Methods Within NESSUS
NASA Technical Reports Server (NTRS)
Chamis, Christos C. (Technical Monitor); Godines, Cody Ric
2003-01-01
A reliability analysis studies a mathematical model of a physical system while taking into account uncertainties in the design variables; its common results are estimates of the response density and, consequently, of its parameters. Common density parameters include the mean value, the standard deviation, and specific percentile(s) of the response, which are measures of central tendency, variation, and probability regions, respectively. Reliability analyses are important since the results can lead to different designs by calculating the probability of observing safe responses in each of the proposed designs. All of this is done at the expense of added computational time compared to a single deterministic analysis, which results in only one value of the response out of the many that make up the response density. Sampling methods, such as Monte Carlo (MC) and Latin hypercube sampling (LHS), can be used to perform reliability analyses and can compute nonlinear response density parameters even if the response depends on many random variables. Hence, both methods are very robust; however, they are computationally expensive to use in the estimation of the response density parameters. Both methods are two of the 13 stochastic methods contained within the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) program. NESSUS is a probabilistic finite element analysis (FEA) program that was developed through funding from NASA Glenn Research Center (GRC). It has the additional capability of being linked to other analysis programs; therefore, probabilistic fluid dynamics, fracture mechanics, and heat transfer are only a few of the analyses possible with this software. The LHS method is the newest addition to the stochastic methods within NESSUS. Part of this work was to enhance NESSUS with the LHS method. The new LHS module is complete, has been successfully integrated with NESSUS, and has been used to study four different test cases that have been
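To make the MC/LHS contrast concrete, here is a minimal Python sketch (not NESSUS code) comparing plain Monte Carlo with Latin hypercube sampling for estimating the mean, standard deviation, and a percentile of a toy response; the response function, sample size, and variable ranges are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def latin_hypercube(n, d, rng):
    # One sample per stratum in each dimension; strata are shuffled
    # independently across dimensions.
    strata = rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T
    return (strata + rng.random((n, d))) / n

def response(x):
    # Toy nonlinear response; stands in for one deterministic FEA run per sample.
    return x[:, 0] ** 2 + 3.0 * np.sin(x[:, 1])

n, d = 200, 2
u_mc = rng.random((n, d))            # plain Monte Carlo points in the unit hypercube
u_lhs = latin_hypercube(n, d, rng)   # stratified Latin hypercube points

for name, u in (("MC ", u_mc), ("LHS", u_lhs)):
    g = response(-1.0 + 2.0 * u)     # uniform design variables on [-1, 1]^2
    print(name, "mean=%.3f  std=%.3f  p95=%.3f"
          % (g.mean(), g.std(ddof=1), np.quantile(g, 0.95)))
```

For smooth responses, the stratification in LHS typically reduces the variance of the estimated density parameters at the same sample count, which is the efficiency argument underlying its addition to NESSUS.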
On the method of logarithmic cumulants for parametric probability density function estimation.
Krylov, Vladimir A; Moser, Gabriele; Serpico, Sebastiano B; Zerubia, Josiane
2013-10-01
Parameter estimation of probability density functions is one of the major steps in the area of statistical image and signal processing. In this paper we explore several properties and limitations of the recently proposed method of logarithmic cumulants (MoLC) parameter estimation approach which is an alternative to the classical maximum likelihood (ML) and method of moments (MoM) approaches. We derive the general sufficient condition for a strong consistency of the MoLC estimates which represents an important asymptotic property of any statistical estimator. This result enables the demonstration of the strong consistency of MoLC estimates for a selection of widely used distribution families originating from (but not restricted to) synthetic aperture radar image processing. We then derive the analytical conditions of applicability of MoLC to samples for the distribution families in our selection. Finally, we conduct various synthetic and real data experiments to assess the comparative properties, applicability and small sample performance of MoLC notably for the generalized gamma and K families of distributions. Supervised image classification experiments are considered for medical ultrasound and remote-sensing SAR imagery. The obtained results suggest that MoLC is a feasible and computationally fast yet not universally applicable alternative to MoM. MoLC becomes especially useful when the direct ML approach turns out to be unfeasible. PMID:23799694
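As a concrete illustration of MoLC, the sketch below applies the log-cumulant equations to the gamma family (one of the families treated in the paper); the synthetic sample and the root-finding bracket are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.special import polygamma
from scipy.optimize import brentq

rng = np.random.default_rng(1)
x = rng.gamma(shape=3.0, scale=2.0, size=5000)   # synthetic positive-valued sample

# MoLC works on the log-sample: estimate the first two log-cumulants.
lx = np.log(x)
c1, c2 = lx.mean(), lx.var()

# Gamma family: c1 = psi(k) + ln(theta), c2 = psi'(k).
# psi'(k) = polygamma(1, k) is strictly decreasing, so the root is unique.
k_hat = brentq(lambda k: polygamma(1, k) - c2, 1e-6, 1e6)
theta_hat = np.exp(c1 - polygamma(0, k_hat))
print("estimated (shape, scale):", k_hat, theta_hat)   # close to (3.0, 2.0)
```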
Probability Density and CFAR Threshold Estimation for Hyperspectral Imaging
Clark, G A
2004-09-21
The work reported here shows the proof of principle (using a small data set) for a suite of algorithms designed to estimate the probability density function of hyperspectral background data and compute the appropriate Constant False Alarm Rate (CFAR) matched filter decision threshold for a chemical plume detector. Future work will provide a thorough demonstration of the algorithms and their performance with a large data set. The LASI (Large Aperture Search Initiative) Project involves instrumentation and image processing for hyperspectral images of chemical plumes in the atmosphere. The work reported here involves research and development on algorithms for reducing the false alarm rate in chemical plume detection and identification algorithms operating on hyperspectral image cubes. The chemical plume detection algorithms to date have used matched filters designed using generalized maximum likelihood ratio hypothesis testing algorithms [1, 2, 5, 6, 7, 12, 10, 11, 13]. One of the key challenges in hyperspectral imaging research is the high false alarm rate that often results from the plume detector [1, 2]. The overall goal of this work is to extend the classical matched filter detector by applying Constant False Alarm Rate (CFAR) methods to reduce the false alarm rate, or probability of false alarm P_FA, of the matched filter [4, 8, 9, 12]. A detector designer is interested in minimizing the probability of false alarm while simultaneously maximizing the probability of detection P_D. This is summarized by the Receiver Operating Characteristic (ROC) curve [10, 11], which is actually a family of curves depicting P_D vs. P_FA, parameterized by varying levels of signal-to-noise (or clutter) ratio (SNR or SCR). Often, it is advantageous to be able to specify a desired P_FA and develop a ROC curve (P_D vs. decision threshold r_0) for that case. That is the purpose of this work. Specifically, this work develops a set of algorithms and MATLAB
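A minimal sketch of the density-then-threshold idea, assuming a gamma-distributed background and a kernel density estimate in place of the LASI algorithms; the target P_FA, sample, and grid are illustrative.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
# Stand-in for matched-filter outputs over plume-free background pixels.
bg = rng.gamma(shape=4.0, scale=1.0, size=20000)

pfa_target = 1e-3
kde = gaussian_kde(bg)                           # background density estimate p(r)
r = np.linspace(0.0, 2.0 * bg.max(), 4000)
cdf = np.cumsum(kde(r)) * (r[1] - r[0])          # crude numerical CDF
tail = 1.0 - cdf                                 # P(R > r), the false-alarm curve
r0 = np.interp(pfa_target, tail[::-1], r[::-1])  # threshold where tail = P_FA
print("CFAR threshold r0 =", r0)
print("empirical P_FA at r0:", np.mean(bg > r0))
```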
Probability Density Function Method for Langevin Equations with Colored Noise
Wang, Peng; Tartakovsky, Alexandre M.; Tartakovsky, Daniel M.
2013-04-05
We present a novel method to derive closed-form, computable probability density function (PDF) equations for Langevin systems with colored noise. The derived equations govern the dynamics of joint or marginal PDFs of state variables, and rely on a so-called Large-Eddy-Diffusivity (LED) closure. We demonstrate the accuracy of the proposed PDF method for linear and nonlinear Langevin equations, describing the classical Brownian displacement and dispersion in porous media.
Estimating probability densities from short samples: A parametric maximum likelihood approach
NASA Astrophysics Data System (ADS)
Dudok de Wit, T.; Floriani, E.
1998-10-01
A parametric method similar to autoregressive spectral estimators is proposed to determine the probability density function (PDF) of a random data set. The method proceeds by maximizing the likelihood of the PDF, yielding estimates that perform equally well in the tails as in the bulk of the distribution. It is therefore well suited for the analysis of short sets drawn from smooth PDFs and stands out by the simplicity of its computational scheme. Its advantages and limitations are discussed.
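The sketch below fits a density of exponential-family form exp(polynomial) by maximizing the likelihood numerically, which is one simple reading of the parametric approach described; the polynomial order, finite support grid, and optimizer are assumptions, not the authors' scheme.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

rng = np.random.default_rng(3)
x = rng.normal(0.0, 1.0, size=50)              # deliberately short sample

deg = 4                                         # illustrative model order
grid = np.linspace(x.min() - 3.0, x.max() + 3.0, 2001)  # finite-support assumption
dx = grid[1] - grid[0]
G, X = np.vander(grid, deg + 1), np.vander(x, deg + 1)

def nll(a):
    # Negative mean log-likelihood of p(x) = exp(poly_a(x)) / Z(a).
    logZ = logsumexp(G @ a) + np.log(dx)        # normalization on the grid
    return logZ - (X @ a).mean()

res = minimize(nll, np.zeros(deg + 1), method="BFGS")
log_pdf = G @ res.x - (logsumexp(G @ res.x) + np.log(dx))
print("fitted pdf peaks near", grid[np.argmax(log_pdf)])
```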
Numerical methods for high-dimensional probability density function equations
NASA Astrophysics Data System (ADS)
Cho, H.; Venturi, D.; Karniadakis, G. E.
2016-01-01
In this paper we address the problem of computing the numerical solution to kinetic partial differential equations involving many phase variables. These types of equations arise naturally in many different areas of mathematical physics, e.g., in particle systems (Liouville and Boltzmann equations), stochastic dynamical systems (Fokker-Planck and Dostupov-Pugachev equations), random wave theory (Malakhov-Saichev equations) and coarse-grained stochastic systems (Mori-Zwanzig equations). We propose three different classes of new algorithms addressing high-dimensionality: the first one is based on separated series expansions resulting in a sequence of low-dimensional problems that can be solved recursively and in parallel by using alternating direction methods. The second class of algorithms relies on truncation of interactions at low orders, which resembles the Bogoliubov-Born-Green-Kirkwood-Yvon (BBGKY) framework of kinetic gas theory and yields a hierarchy of coupled probability density function equations. The third class of algorithms is based on high-dimensional model representations, e.g., the ANOVA method and probabilistic collocation methods. A common feature of all these approaches is that they are reducible to the problem of computing the solution to high-dimensional equations via a sequence of low-dimensional problems. The effectiveness of the new algorithms is demonstrated in numerical examples involving nonlinear stochastic dynamical systems and partial differential equations, with up to 120 variables.
Large Eddy Simulation and the Filtered Probability Density Function Method
NASA Astrophysics Data System (ADS)
Jones, W. P.; Navarro-Martinez, S.
2009-12-01
Recently there has been increased interest in modelling combustion processes with high levels of extinction and re-ignition. Such systems often lie beyond the scope of conventional single scalar-based models. Large Eddy Simulation (LES) has shown great potential for describing turbulent reactive systems, though combustion occurs at the smallest, unresolved scales of the flow and must be modelled. In the sub-grid Probability Density Function (pdf) method, approximations are devised to close the evolution equation for the joint pdf, which is then solved directly. The paper describes such an approach and concerns, in particular, the Eulerian stochastic field method of solving the pdf equation. The paper examines the capabilities of the LES-pdf method in capturing auto-ignition and extinction events in different partially premixed configurations with different fuels (hydrogen, methane and n-heptane). The results show that the LES-pdf formulation can capture the different regimes without any parameter adjustments, independent of Reynolds number and fuel type.
Parameterizing deep convection using the assumed probability density function method
Storer, R. L.; Griffin, B. M.; Höft, Jan; Weber, J. K.; Raut, E.; Larson, Vincent E.; Wang, Minghuai; Rasch, Philip J.
2015-01-06
Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and mid-latitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.
Parameterizing deep convection using the assumed probability density function method
NASA Astrophysics Data System (ADS)
Storer, R. L.; Griffin, B. M.; Höft, J.; Weber, J. K.; Raut, E.; Larson, V. E.; Wang, M.; Rasch, P. J.
2015-01-01
Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and midlatitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.
Parameterizing deep convection using the assumed probability density function method
Storer, R. L.; Griffin, B. M.; Höft, J.; Weber, J. K.; Raut, E.; Larson, V. E.; Wang, M.; Rasch, P. J.
2014-06-11
Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and mid-latitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.
Approximation of probability density functions by the Multilevel Monte Carlo Maximum Entropy method
NASA Astrophysics Data System (ADS)
Bierig, Claudio; Chernov, Alexey
2016-06-01
We develop a complete convergence theory for the Maximum Entropy method based on moment matching for a sequence of approximate statistical moments estimated by the Multilevel Monte Carlo method. Under appropriate regularity assumptions on the target probability density function, the proposed method is superior to the Maximum Entropy method with moments estimated by the Monte Carlo method. New theoretical results are illustrated in numerical examples.
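For flavor, here is a sketch of moment-matched Maximum Entropy reconstruction with moments estimated by plain Monte Carlo (the paper's contribution, replacing this step with a Multilevel Monte Carlo estimator and proving convergence, is not reproduced here); the target distribution, support, and moment order are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

rng = np.random.default_rng(4)
# Moments estimated by plain Monte Carlo (the paper refines this step with MLMC).
y = rng.beta(2.0, 5.0, size=100000)            # toy target supported on [0, 1]
K = 4
mu = np.array([np.mean(y ** k) for k in range(1, K + 1)])

grid = np.linspace(0.0, 1.0, 2001)
dx = grid[1] - grid[0]
V = np.vstack([grid ** k for k in range(1, K + 1)]).T   # moment features

def dual(lam):
    # Convex dual of max-entropy subject to the moment constraints <x^k> = mu_k.
    return logsumexp(V @ lam) + np.log(dx) - lam @ mu

lam = minimize(dual, np.zeros(K), method="BFGS").x
p = np.exp(V @ lam)
p /= np.trapz(p, grid)
print("reconstructed moments:", [np.trapz(grid ** k * p, grid) for k in (1, 2)])
```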
Robust location and spread measures for nonparametric probability density function estimation.
López-Rubio, Ezequiel
2009-10-01
Robustness against outliers is a desirable property of any unsupervised learning scheme. In particular, probability density estimators benefit from incorporating this feature. A possible strategy to achieve this goal is to substitute the sample mean and the sample covariance matrix by more robust location and spread estimators. Here we use the L1-median to develop a nonparametric probability density function (PDF) estimator. We prove its most relevant properties, and we show its performance in density estimation and classification applications. PMID:19885963
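The L1-median ingredient can be sketched directly; below is a standard Weiszfeld iteration (an assumption: the paper does not prescribe this particular solver) contrasted with the outlier-sensitive sample mean.

```python
import numpy as np

def l1_median(X, iters=200, eps=1e-9):
    # Weiszfeld iteration for the spatial (L1) median, a robust location estimate.
    m = X.mean(axis=0)
    for _ in range(iters):
        d = np.linalg.norm(X - m, axis=1)
        w = 1.0 / np.maximum(d, eps)            # clamp to handle coincident points
        m_new = (w[:, None] * X).sum(axis=0) / w.sum()
        if np.linalg.norm(m_new - m) < eps:
            break
        m = m_new
    return m

rng = np.random.default_rng(5)
X = rng.normal(0.0, 1.0, size=(200, 2))
X[:20] += 25.0                          # 10% gross outliers
print("sample mean :", X.mean(axis=0))  # dragged toward the outliers
print("L1-median   :", l1_median(X))    # stays near the bulk of the data
```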
Rajwade, Ajit; Banerjee, Arunava; Rangarajan, Anand
2010-01-01
We present a new geometric approach for determining the probability density of the intensity values in an image. We drop the notion of an image as a set of discrete pixels and assume a piecewise-continuous representation. The probability density can then be regarded as being proportional to the area between two nearby isocontours of the image surface. Our paper extends this idea to joint densities of image pairs. We demonstrate the application of our method to affine registration between two or more images using information-theoretic measures such as mutual information. We show cases where our method outperforms existing methods such as simple histograms, histograms with partial volume interpolation, Parzen windows, etc., under fine intensity quantization for affine image registration under significant image noise. Furthermore, we demonstrate results on simultaneous registration of multiple images, as well as for pairs of volume data sets, and show some theoretical properties of our density estimator. Our approach requires the selection of only an image interpolant. The method neither requires any kind of kernel functions (as in Parzen windows), which are unrelated to the structure of the image in itself, nor does it rely on any form of sampling for density estimation. PMID:19147876
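A hedged numerical rendering of the isocontour idea: by the coarea formula, the density of intensity values is proportional to the area between nearby isocontours, which can be approximated by weighting each location by 1/|∇I|. The synthetic image, clamping constant, and bin count below are illustrative; this is not the authors' implementation.

```python
import numpy as np

# Smooth synthetic "image" sampled on a grid; stands in for a piecewise-continuous
# interpolant of the pixel data.
yy, xx = np.mgrid[0:256, 0:256] / 255.0
img = np.sin(3.0 * xx) * np.cos(2.0 * yy)

gy, gx = np.gradient(img)
gmag = np.hypot(gx, gy)

# Coarea-formula view: the area between two nearby isocontours at level I is
# approximately the sum of 1/|grad| over locations where the image equals I.
edges = np.linspace(img.min(), img.max(), 65)
w = 1.0 / np.maximum(gmag, 1e-3)            # clamp to avoid flat-region blowup
hist, _ = np.histogram(img.ravel(), bins=edges, weights=w.ravel())
centers = 0.5 * (edges[1:] + edges[:-1])
pdf = hist / np.trapz(hist, centers)
print("density integrates to", np.trapz(pdf, centers))
```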
A probability density function method for acoustic field uncertainty analysis
NASA Astrophysics Data System (ADS)
James, Kevin R.; Dowling, David R.
2005-11-01
Acoustic field predictions, whether analytical or computational, rely on knowledge of the environmental, boundary, and initial conditions. When knowledge of these conditions is uncertain, acoustic field predictions will also be uncertain, even if the techniques for field prediction are perfect. Quantifying acoustic field uncertainty is important for applications that require accurate field amplitude and phase predictions, like matched-field techniques for sonar, nondestructive evaluation, bio-medical ultrasound, and atmospheric remote sensing. Drawing on prior turbulence research, this paper describes how an evolution equation for the probability density function (PDF) of the predicted acoustic field can be derived and used to quantify predicted-acoustic-field uncertainties arising from uncertain environmental, boundary, or initial conditions. Example calculations are presented in one and two spatial dimensions for the one-point PDF for the real and imaginary parts of a harmonic field, and show that predicted field uncertainty increases with increasing range and frequency. In particular, at 500 Hz in an ideal 100 m deep underwater sound channel with a 1 m root-mean-square depth uncertainty, the PDF results presented here indicate that at a range of 5 km, all phases and a 10 dB range of amplitudes will have non-negligible probability. Evolution equations for the two-point PDF are also derived.
ANNz2 - Photometric redshift and probability density function estimation using machine-learning
NASA Astrophysics Data System (ADS)
Sadeh, Iftach
2014-05-01
Large photometric galaxy surveys allow the study of questions at the forefront of science, such as the nature of dark energy. The success of such surveys depends on the ability to measure the photometric redshifts of objects (photo-zs), based on limited spectral data. A new major version of the public photo-z estimation software, ANNz, is presented here. The new code incorporates several machine-learning methods, such as artificial neural networks and boosted decision/regression trees, which are all used in concert. The objective of the algorithm is to dynamically optimize the performance of the photo-z estimation, and to properly derive the associated uncertainties. In addition to single-value solutions, the new code also generates full probability density functions in two independent ways.
SAR amplitude probability density function estimation based on a generalized Gaussian model.
Moser, Gabriele; Zerubia, Josiane; Serpico, Sebastiano B
2006-06-01
In the context of remotely sensed data analysis, an important problem is the development of accurate models for the statistics of the pixel intensities. Focusing on synthetic aperture radar (SAR) data, this modeling process turns out to be a crucial task, for instance, for classification or for denoising purposes. In this paper, an innovative parametric estimation methodology for SAR amplitude data is proposed that adopts a generalized Gaussian (GG) model for the complex SAR backscattered signal. A closed-form expression for the corresponding amplitude probability density function (PDF) is derived and a specific parameter estimation algorithm is developed in order to deal with the proposed model. Specifically, the recently proposed "method-of-log-cumulants" (MoLC) is applied, which stems from the adoption of the Mellin transform (instead of the usual Fourier transform) in the computation of characteristic functions and from the corresponding generalization of the concepts of moment and cumulant. For the developed GG-based amplitude model, the resulting MoLC estimates turn out to be numerically feasible and are also analytically proved to be consistent. The proposed parametric approach was validated by using several real ERS-1, XSAR, E-SAR, and NASA/JPL airborne SAR images, and the experimental results prove that the method models the amplitude PDF better than several previously proposed parametric models for backscattering phenomena. PMID:16764268
Unification of field theory and maximum entropy methods for learning probability densities
NASA Astrophysics Data System (ADS)
Kinney, Justin B.
2015-09-01
The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.
NASA Astrophysics Data System (ADS)
Hengartner, Nicolas; Talbot, Lawrence; Shepherd, Ian; Bickel, Peter
1995-06-01
An important parameter in the experimental study of combustion dynamics is the probability distribution of the effective Rayleigh scattering cross section. This cross section cannot be observed directly. Instead, pairs of measurements of laser intensities and Rayleigh scattering counts are observed. Our aim is to provide estimators for the probability density function of the scattering cross section from such measurements. The probability distribution is derived first for the number of recorded photons in the Rayleigh scattering experiment. In this approach the laser intensity measurements are treated as known covariates. This departs from the usual practice of normalizing the Rayleigh scattering counts by the laser intensities. For distributions supported on finite intervals, two estimators are proposed, one based on expansion of the density in
A Tomographic Method for the Reconstruction of Local Probability Density Functions
NASA Technical Reports Server (NTRS)
Sivathanu, Y. R.; Gore, J. P.
1993-01-01
A method of obtaining the probability density function (PDF) of local properties from path integrated measurements is described. The approach uses a discrete probability function (DPF) method to infer the PDF of the local extinction coefficient from measurements of the PDFs of the path integrated transmittance. The local PDFs obtained using the method are compared with those obtained from direct intrusive measurements in propylene/air and ethylene/air diffusion flames. The results of this comparison are good.
NASA Astrophysics Data System (ADS)
Cheng, Wei Ping; Jia, Yafei
2010-04-01
A backward location probability density function (BL-PDF) method capable of identifying the location of point sources in surface waters is presented in this paper. The relation between the forward location probability density function (FL-PDF) and the backward location probability density, based on adjoint analysis, is validated using depth-averaged free-surface flow and mass transport models and several surface water test cases. The solutions of the backward location PDF transport equation agreed well with the forward location PDF computed using the pollutant concentration at the monitoring points. Using this relation and the distribution of the concentration detected at the monitoring points, an effective point source identification method is established. The numerical error of the backward location PDF simulation is found to be sensitive to the irregularity of the computational meshes, the diffusivity, and the velocity gradients. The performance of the identification method is evaluated with respect to random error and the number of observed values. In addition to hypothetical cases, a real case was studied to identify the source location where a dye tracer was instantaneously injected into a stream. The study indicated that the proposed source identification method is effective, robust, and quite efficient in surface waters; the number of advection-diffusion equations to be solved equals the number of observations.
NASA Astrophysics Data System (ADS)
Turel, N.; Arikan, F.
2010-12-01
Ionospheric channel characterization is an important task for both HF and satellite communications. The inherent space-time variability of the ionosphere can be observed through total electron content (TEC), which can be obtained using GPS receivers. In this study, within-the-hour variability of the ionosphere over high-latitude, midlatitude, and equatorial regions is investigated by estimating a parametric model for the probability density function (PDF) of GPS-TEC. The PDF is a useful tool in defining the statistical structure of communication channels. For this study, data spanning half a solar cycle are collected for 18 GPS stations. Histograms of TEC, corresponding to experimental probability distributions, are used to estimate the parameters of five different PDFs. The best-fitting distribution to the TEC data is obtained using the maximum likelihood ratio of the estimated parametric distributions. It is observed that all of the midlatitude stations and most of the high-latitude and equatorial stations are distributed lognormally. A representative distribution can easily be obtained for stations located at midlatitudes using solar zenith normalization. The stations located at very high latitudes or in equatorial regions cannot be described using only one PDF distribution. Due to significant seasonal variability, different distributions are required for summer and winter.
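The model-selection step can be sketched as follows: fit several candidate families by maximum likelihood and keep the one with the highest log-likelihood. The candidate list and the synthetic lognormal "TEC" sample below are assumptions for illustration; the paper's five families are not all named in the abstract.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
tec = rng.lognormal(mean=3.0, sigma=0.4, size=2000)   # stand-in for GPS-TEC samples

candidates = {
    "lognormal": stats.lognorm,
    "gamma": stats.gamma,
    "weibull": stats.weibull_min,
    "normal": stats.norm,
    "rayleigh": stats.rayleigh,
}

scores = {}
for name, dist in candidates.items():
    params = dist.fit(tec)                            # maximum-likelihood fit
    scores[name] = np.sum(dist.logpdf(tec, *params))  # total log-likelihood

best = max(scores, key=scores.get)
print("best-fitting family:", best)
```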
Fast and accurate probability density estimation in large high dimensional astronomical datasets
NASA Astrophysics Data System (ADS)
Gupta, Pramod; Connolly, Andrew J.; Gardner, Jeffrey P.
2015-01-01
Astronomical surveys will generate measurements of hundreds of attributes (e.g. color, size, shape) on hundreds of millions of sources. Analyzing these large, high dimensional data sets will require efficient algorithms for data analysis. An example of this is probability density estimation, which is at the heart of many classification problems such as the separation of stars and quasars based on their colors. Popular density estimation techniques use binning or kernel density estimation. Kernel density estimation has a small memory footprint but often requires large computational resources. Binning has small computational requirements, but it is usually implemented with multi-dimensional arrays, which leads to memory requirements that scale exponentially with the number of dimensions. Hence neither technique scales well to large data sets in high dimensions. We present an alternative approach of binning implemented with hash tables (BASH tables). This approach uses the sparseness of data in the high dimensional space to ensure that the memory requirements are small. However, hashing requires some extra computation, so a priori it is not clear whether the reduction in memory requirements will lead to increased computational requirements. Through an implementation of BASH tables in C++ we show that the additional computational requirements of hashing are negligible. Hence this approach has small memory and computational requirements. We apply our density estimation technique to photometric selection of quasars using non-parametric Bayesian classification and show that the accuracy of the classification is the same as that of earlier approaches. Since the BASH table approach is one to three orders of magnitude faster than the earlier approaches, it may be useful in various other applications of density estimation in astrostatistics.
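A minimal sketch of hash-table binning in the spirit of BASH tables (the paper's implementation is in C++; this Python dict version only illustrates the data structure); the bin width and dimensionality are illustrative.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(8)
d, n = 10, 100000
X = rng.normal(size=(n, d))             # sparse occupancy in a 10-D attribute space

lo, width = X.min(axis=0), 0.5          # illustrative bin width
keys = np.floor((X - lo) / width).astype(np.int64)

counts = defaultdict(int)               # hash table instead of a dense 10-D array
for k in map(tuple, keys):
    counts[k] += 1

def density(x):
    # Point density estimate: occupied-bin count / (n * bin volume).
    k = tuple(np.floor((x - lo) / width).astype(np.int64))
    return counts.get(k, 0) / (n * width ** d)

print(len(counts), "occupied bins; a dense grid would grow exponentially with d")
print("p at first sample ~", density(X[0]))
```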
Zhang Yumin; Lum, Kai-Yew; Wang Qingguo
2009-03-05
In this paper, an H-infinity fault detection and diagnosis (FDD) scheme, based on output probability density estimation, is presented for faults in a class of discrete nonlinear systems. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process, and its square-root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time dynamic model including nonlinearities and uncertainties. A weighted mean value is defined as an integral of the square-root PDF along the space direction, which yields a function of time only that can be used to construct a residual signal. Thus, the classical nonlinear filter approach can be used to detect and diagnose faults in the system. A feasible detection criterion is obtained first, and a new H-infinity adaptive fault diagnosis algorithm is then investigated to estimate the fault. A simulation example is given to demonstrate the effectiveness of the proposed approaches.
Bakosi, Jozsef; Ristorcelli, Raymond J
2010-01-01
Probability density function (PDF) methods are extended to variable-density pressure-gradient-driven turbulence. We apply the new method to compute the joint PDF of density and velocity in a non-premixed binary mixture of different-density molecularly mixing fluids under gravity. The full time-evolution of the joint PDF is captured in the highly non-equilibrium flow: starting from a quiescent state, transitioning to fully developed turbulence and finally dissipated by molecular diffusion. High-Atwood-number effects (as distinguished from the Boussinesq case) are accounted for: both hydrodynamic turbulence and material mixing are treated at arbitrary density ratios, with the specific volume, mass flux and all their correlations in closed form. An extension of the generalized Langevin model, originally developed for the Lagrangian fluid particle velocity in constant-density shear-driven turbulence, is constructed for variable-density pressure-gradient-driven flows. The persistent small-scale anisotropy, a fundamentally 'non-Kolmogorovian' feature of flows under external acceleration forces, is captured by a tensorial diffusion term based on the external body force. The material mixing model for the fluid density, an active scalar, is developed based on the beta distribution. The beta-PDF is shown to be capable of capturing the mixing asymmetry and that it can accurately represent the density through transition, in fully developed turbulence and in the decay process. The joint model for hydrodynamics and active material mixing yields a time-accurate evolution of the turbulent kinetic energy and Reynolds stress anisotropy without resorting to gradient diffusion hypotheses, and represents the mixing state by the density PDF itself, eliminating the need for dubious mixing measures. Direct numerical simulations of the homogeneous Rayleigh-Taylor instability are used for model validation.
Sato, Tatsuhiko; Hamada, Nobuyuki
2014-01-01
We here propose a new model assembly for estimating the surviving fraction of cells irradiated with various types of ionizing radiation, considering both targeted and nontargeted effects in the same framework. The probability densities of specific energies in two scales, which are the cell nucleus and its substructure called a domain, were employed as the physical index for characterizing the radiation fields. In the model assembly, our previously established double stochastic microdosimetric kinetic (DSMK) model was used to express the targeted effect, whereas a newly developed model was used to express the nontargeted effect. The radioresistance caused by overexpression of anti-apoptotic protein Bcl-2 known to frequently occur in human cancer was also considered by introducing the concept of the adaptive response in the DSMK model. The accuracy of the model assembly was examined by comparing the computationally and experimentally determined surviving fraction of Bcl-2 cells (Bcl-2 overexpressing HeLa cells) and Neo cells (neomycin resistant gene-expressing HeLa cells) irradiated with microbeam or broadbeam of energetic heavy ions, as well as the WI-38 normal human fibroblasts irradiated with X-ray microbeam. The model assembly reproduced very well the experimentally determined surviving fraction over a wide range of dose and linear energy transfer (LET) values. Our newly established model assembly will be worth being incorporated into treatment planning systems for heavy-ion therapy, brachytherapy, and boron neutron capture therapy, given critical roles of the frequent Bcl-2 overexpression and the nontargeted effect in estimating therapeutic outcomes and harmful effects of such advanced therapeutic modalities. PMID:25426641
NASA Astrophysics Data System (ADS)
Papadopoulos, Vissarion; Kalogeris, Ioannis
2016-05-01
The present paper proposes a Galerkin finite element projection scheme for the solution of the partial differential equations (PDEs) involved in the probability density evolution method, for the linear and nonlinear static analysis of stochastic systems. According to the principle of preservation of probability, the probability density evolution of a stochastic system is expressed by its corresponding Fokker-Planck (FP) stochastic partial differential equation. Direct integration of the FP equation is feasible only for simple systems with a small number of degrees of freedom, due to analytical and/or numerical intractability. However, rewriting the FP equation conditioned on the random event description, a generalized density evolution equation (GDEE) can be obtained, which can be reduced to a one-dimensional PDE. Two Galerkin finite element schemes are proposed for the numerical solution of the resulting PDEs, namely a time-marching discontinuous Galerkin scheme and the Streamline Upwind/Petrov-Galerkin (SUPG) scheme. In addition, a reformulation of the classical GDEE is proposed, which implements the principle of probability preservation in space instead of time, making this approach suitable for the stochastic analysis of finite element systems. The advantages of the FE Galerkin methods, and in particular of SUPG over finite difference schemes like the modified Lax-Wendroff (the most frequently used method for the solution of the GDEE), are illustrated with numerical examples and explored further.
NASA Astrophysics Data System (ADS)
Yu, Zhi-wu; Mao, Jian-feng; Guo, Feng-qi; Guo, Wei
2016-03-01
Rail irregularity is one of the main sources of train-bridge random vibration. A new random vibration theory for coupled train-bridge systems is proposed in this paper. First, the number theory method (NTM) with 2N-dimensional vectors for the stochastic harmonic function (SHF) of the rail irregularity power spectral density was adopted to determine the representative points of spatial frequencies and phases for generating the random rail irregularity samples, and the non-stationary rail irregularity samples were modulated with a slowly varying function. Second, the probability density evolution method (PDEM) was employed to calculate the random dynamic vibration of the three-dimensional (3D) train-bridge system by a program compiled on the MATLAB® software platform. Finally, the Newmark-β integration method and the double-edge difference method in total variation diminishing (TVD) format were adopted to obtain the mean value curve, the standard deviation curve and the time-history probability density information of the responses. A case study is presented in which the ICE-3 train travels on a three-span simply supported high-speed railway bridge with excitation from random rail irregularity. The results showed that, compared to Monte Carlo simulation, the PDEM has higher computational efficiency for the same accuracy, i.e., an improvement by 1-2 orders of magnitude. Additionally, the influences of rail irregularity and train speed on the random vibration of the coupled train-bridge system were discussed.
NASA Technical Reports Server (NTRS)
Tretter, S. A.
1977-01-01
A report is given to supplement the progress report of June 17, 1977. In that progress report, gamma, lognormal, and Rayleigh probability density functions were fitted to the times between lightning flashes in the storms of 9/12/75, 8/26/75, and 7/13/76 by the maximum likelihood method. The goodness of fit is checked by the Kolmogorov-Smirnov test. Plots of the estimated densities along with normalized histograms are included to provide a visual check on the goodness of fit. The lognormal densities are the most peaked and have the highest tails. This results in the best fit to the normalized histogram in most cases. The Rayleigh densities have too broad and rounded peaks to give good fits. In addition, they have the lowest tails. The gamma densities fall in between and give the best fit in a few cases.
Caliandro, G.A.; Torres, D.F.; Rea, N. E-mail: dtorres@aliga.ieec.uab.es
2013-07-01
Here, we present a new method to evaluate the expectation value of the power spectrum of a time series. A statistical approach is adopted to define the method. After its demonstration, it is validated by showing that it leads to the known properties of the power spectrum when the time series contains a periodic signal. The approach is also validated in general with numerical simulations. The method highlights the important role played by the probability density function of the phases associated with each time stamp for a given frequency, and how this distribution can be perturbed by the uncertainties of the parameters in the pulsar ephemeris. We applied this method to solve for the power spectrum in the case where the first derivative of the pulsar frequency is unknown and not negligible. We also undertook the study of the most general case of a blind search, in which both the frequency and its first derivative are uncertain. We found analytical solutions for the above cases by invoking the sum of squared Fresnel integrals.
From data to probability densities without histograms
NASA Astrophysics Data System (ADS)
Berg, Bernd A.; Harris, Robert C.
2008-09-01
When one deals with data drawn from continuous variables, a histogram is often inadequate to display their probability density. It deals inefficiently with statistical noise, and binsizes are free parameters. In contrast to that, the empirical cumulative distribution function (obtained after sorting the data) is parameter free. But it is a step function, so that its differentiation does not give a smooth probability density. Based on Fourier series expansion and Kolmogorov tests, we introduce a simple method which overcomes this problem. Error bars on the estimated probability density are calculated using a jackknife method. We give several examples and provide computer code reproducing them. You may want to look at the corresponding figures 4 to 9 first.
Program summary
Program title: cdf_to_pd
Catalogue identifier: AEBC_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEBC_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 2758
No. of bytes in distributed program, including test data, etc.: 18 594
Distribution format: tar.gz
Programming language: Fortran 77
Computer: Any capable of compiling and executing Fortran code
Operating system: Any capable of compiling and executing Fortran code
Classification: 4.14, 9
Nature of problem: When one deals with data drawn from continuous variables, a histogram is often inadequate to display the probability density. It deals inefficiently with statistical noise, and binsizes are free parameters. In contrast to that, the empirical cumulative distribution function (obtained after sorting the data) is parameter free. But it is a step function, so that its differentiation does not give a smooth probability density.
Solution method: Based on Fourier series expansion and Kolmogorov tests, we introduce a simple method which
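The published code is Fortran 77; purely as an illustration of the underlying idea, the Python sketch below expands the difference between the empirical CDF and the uniform CDF in a sine series and differentiates the smooth result to obtain a density. The fixed term count m stands in for the paper's Kolmogorov-test selection, and the jackknife error bars are omitted.

```python
import numpy as np

rng = np.random.default_rng(9)
raw = rng.beta(2.0, 4.0, size=500)

# Map the data to [0, 1] (the method sorts the data and works with the ECDF).
x = np.sort((raw - raw.min()) / (raw.max() - raw.min()))
n, m = len(x), 8          # m Fourier terms; the paper selects m via Kolmogorov tests

k = np.arange(1, m + 1)
# Sine-series coefficients of F_emp(x) - x, in closed form from the sorted sample.
d = 2.0 / (n * np.pi * k) * np.cos(np.pi * np.outer(k, x)).sum(axis=1)

grid = np.linspace(0.0, 1.0, 501)
F = grid + np.sin(np.pi * np.outer(grid, k)) @ d               # smooth CDF estimate
f = 1.0 + np.cos(np.pi * np.outer(grid, k)) @ (np.pi * k * d)  # its derivative: the PDF
print("PDF integrates to", np.trapz(f, grid))
```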
Probability densities in strong turbulence
NASA Astrophysics Data System (ADS)
Yakhot, Victor
2006-03-01
In this work, using Mellin's transform combined with the Gaussian large-scale boundary condition, we calculate probability densities (PDFs) of velocity increments P(δu,r), velocity derivatives P(∂u/∂x), and the PDF of the fluctuating dissipation scales Q(η,Re), where Re is the large-scale Reynolds number. The resulting expressions strongly deviate from the log-normal PDF often quoted in the literature. It is shown that the probability density of the small-scale velocity fluctuations includes information about the large (integral) scale dynamics, which is responsible for the deviation of P(δu,r) from log-normality. An expression for the function D(h) of the multifractal theory, free from the spurious logarithms recently discussed in [U. Frisch, M. Martins Afonso, A. Mazzino, V. Yakhot, J. Fluid Mech. 542 (2005) 97], is also obtained.
NASA Astrophysics Data System (ADS)
Zhao, X. Y.; Haworth, D. C.; Ren, T.; Modest, M. F.
2013-04-01
A computational fluid dynamics model for high-temperature oxy-natural gas combustion is developed and exercised. The model features detailed gas-phase chemistry and radiation treatments (a photon Monte Carlo method with line-by-line spectral resolution for gas and wall radiation - PMC/LBL) and a transported probability density function (PDF) method to account for turbulent fluctuations in composition and temperature. The model is first validated for a 0.8 MW oxy-natural gas furnace, and the level of agreement between model and experiment is found to be at least as good as any that has been published earlier. Next, simulations are performed with systematic model variations to provide insight into the roles of individual physical processes and their interplay in high-temperature oxy-fuel combustion. This includes variations in the chemical mechanism and the radiation model, and comparisons of results obtained with versus without the PDF method to isolate and quantify the effects of turbulence-chemistry interactions and turbulence-radiation interactions (TRI). In this combustion environment, it is found to be important to account for the interconversion of CO and CO2, and radiation plays a dominant role. The PMC/LBL model allows the effects of molecular gas radiation and wall radiation to be clearly separated and quantified. Radiation and chemistry are tightly coupled through the temperature, and correct temperature prediction is required for correct prediction of the CO/CO2 ratio. Turbulence-chemistry interactions influence the computed flame structure and mean CO levels. Strong local effects of turbulence-radiation interactions are found in the flame, but the net influence of TRI on computed mean temperature and species profiles is small. The ultimate goal of this research is to simulate high-temperature oxy-coal combustion, where accurate treatments of chemistry, radiation and turbulence-chemistry-particle-radiation interactions will be even more important.
Trajectory versus probability density entropy.
Bologna, M; Grigolini, P; Karagiorgis, M; Rosa, A
2001-07-01
We show that the widely accepted conviction that a connection can be established between the probability density entropy and the Kolmogorov-Sinai (KS) entropy is questionable. We adopt the definition of density entropy as a functional of a distribution density whose time evolution is determined by a transport equation, conceived as the only prescription to use for the calculation. Although the transport equation is built up for the purpose of affording a picture equivalent to that stemming from trajectory dynamics, no direct use of trajectory time evolution is allowed, once the transport equation is defined. With this definition in mind we prove that the detection of a time regime of increase of the density entropy with a rate identical to the KS entropy is possible only in a limited number of cases. The proposals made by some authors to establish a connection between the two entropies in general, violate our definition of density entropy and imply the concept of trajectory, which is foreign to that of density entropy. PMID:11461383
Modulation Based on Probability Density Functions
NASA Technical Reports Server (NTRS)
Williams, Glenn L.
2009-01-01
A proposed method of modulating a sinusoidal carrier signal to convey digital information involves the use of histograms representing probability density functions (PDFs) that characterize samples of the signal waveform. The method is based partly on the observation that when a waveform is sampled (whether by analog or digital means) over a time interval at least as long as one half cycle of the waveform, the samples can be sorted by frequency of occurrence, thereby constructing a histogram representing a PDF of the waveform during that time interval.
Direct propagation of probability density functions in hydrological equations
NASA Astrophysics Data System (ADS)
Kunstmann, Harald; Kastens, Marko
2006-06-01
Sustainable decisions in hydrological risk management require detailed information on the probability density function (pdf) of the model output. Only then can probabilities for the failure of a specific management option or the exceedance of critical thresholds (e.g. of pollutants) be derived. A new approach to uncertainty propagation in hydrological equations is developed that directly propagates the probability density functions of uncertain model input parameters into the corresponding probability density functions of the model output. The basics of the methodology are presented and central applications to different disciplines in hydrology are shown. This work focuses on the following basic hydrological equations: (1) pumping test analysis (Theis equation, propagation of uncertainties in recharge and transmissivity), (2) 1-dim groundwater contaminant transport equation (Gauss equation, propagation of uncertainties in decay constant and dispersivity), (3) evapotranspiration estimation (Penman-Monteith equation, propagation of uncertainty in roughness length). The direct propagation of probability densities is restricted to functions that are monotonically increasing or decreasing or that can be separated into corresponding monotonic branches, so that inverse functions can be derived. In cases where no analytic solutions for the inverse functions could be derived, semi-analytical approximations were used. It is shown that the results of direct probability density function propagation are in perfect agreement with results obtained from corresponding Monte Carlo derived frequency distributions. Direct pdf propagation, however, has the advantage that it yields exact solutions for the resulting hydrological pdfs rather than approximating discontinuous frequency distributions. It is additionally shown that the type of the resulting pdf depends on the specific values (order of magnitude, respectively) of the standard deviation of the input pdf. The dependency of skewness and kurtosis
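For a monotonic model, direct pdf propagation is just the change-of-variables formula p_s(s) = p_T(g^{-1}(s)) |dg^{-1}/ds|. The sketch below applies it to a deliberately simplified drawdown-transmissivity relation s = c/T (an illustrative stand-in for the Theis analysis, not the paper's equations) and cross-checks against Monte Carlo, mirroring the agreement reported above.

```python
import numpy as np

# Uncertain input: transmissivity T ~ lognormal; output: drawdown s = g(T) = c / T,
# a monotonically decreasing map, so the inverse T = c / s exists without branches.
mu, sigma, c = 0.0, 0.5, 2.0   # illustrative parameters

def pdf_T(t):
    return np.exp(-(np.log(t) - mu) ** 2 / (2 * sigma ** 2)) / (t * sigma * np.sqrt(2 * np.pi))

def pdf_s(s):
    # Change of variables: p_s(s) = p_T(g^{-1}(s)) * |d g^{-1} / ds| with g^{-1}(s) = c/s.
    return pdf_T(c / s) * (c / s ** 2)

# Cross-check against a Monte Carlo derived frequency distribution.
rng = np.random.default_rng(10)
samples = c / rng.lognormal(mu, sigma, size=200000)
grid = np.linspace(0.3, 8.0, 200)
mc, edges = np.histogram(samples, bins=grid, density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
print("max |analytic - MC| =", np.abs(pdf_s(centers) - mc).max())
```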
Carrier Modulation Via Waveform Probability Density Function
NASA Technical Reports Server (NTRS)
Williams, Glenn L.
2004-01-01
Beyond the classic modes of carrier modulation by varying amplitude (AM), phase (PM), or frequency (FM), we extend the modulation domain of an analog carrier signal to include a class of general modulations which are distinguished by their probability density function histogram. Separate waveform states are easily created by varying the pdf of the transmitted waveform. Individual waveform states are assignable as proxies for digital ONEs or ZEROs. At the receiver, these states are easily detected by accumulating sampled waveform statistics and performing periodic pattern matching, correlation, or statistical filtering. No fundamental natural laws are broken in the detection process. We show how a typical modulation scheme would work in the digital domain and suggest how to build an analog version. We propose that clever variations of the modulating waveform (and thus the histogram) can provide simple steganographic encoding.
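A toy digital version of the idea, assuming two same-amplitude waveform states whose sample histograms differ (sine vs. triangle) and a two-sample Kolmogorov-Smirnov match at the receiver; none of these specific choices come from the report.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(11)
t = np.linspace(0.0, 1.0, 2000, endpoint=False)

def symbol(bit, phase):
    # Two carrier "states" with identical period and amplitude but distinct
    # sample PDFs: a sine (arcsine-shaped histogram) encodes 0, a triangle
    # wave (flat histogram) encodes 1.
    if bit == 0:
        return np.sin(2.0 * np.pi * (t + phase))
    return 2.0 * np.abs(2.0 * ((t + phase) % 1.0) - 1.0) - 1.0

ref0, ref1 = symbol(0, 0.0), symbol(1, 0.0)

def detect(wave):
    # Match the sample distribution of the received chunk to each reference.
    d0 = ks_2samp(wave, ref0).statistic
    d1 = ks_2samp(wave, ref1).statistic
    return 0 if d0 < d1 else 1

bits = [0, 1, 1, 0, 1]
rx = [symbol(b, rng.random()) + 0.05 * rng.normal(size=t.size) for b in bits]
print([detect(w) for w in rx])   # should recover [0, 1, 1, 0, 1]
```

Because the detection statistic depends only on the distribution of samples over a full period, it is insensitive to the carrier phase, which is part of the appeal claimed for the scheme.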
Carrier Modulation Via Waveform Probability Density Function
NASA Technical Reports Server (NTRS)
Williams, Glenn L.
2006-01-01
Beyond the classic modes of carrier modulation by varying amplitude (AM), phase (PM), or frequency (FM), we extend the modulation domain of an analog carrier signal to include a class of general modulations which are distinguished by their probability density function histogram. Separate waveform states are easily created by varying the pdf of the transmitted waveform. Individual waveform states are assignable as proxies for digital ones or zeros. At the receiver, these states are easily detected by accumulating sampled waveform statistics and performing periodic pattern matching, correlation, or statistical filtering. No fundamental physical laws are broken in the detection process. We show how a typical modulation scheme would work in the digital domain and suggest how to build an analog version. We propose that clever variations of the modulating waveform (and thus the histogram) can provide simple steganographic encoding.
Application of the response probability density function technique to biodynamic models.
Hershey, R L; Higgins, T H
1978-01-01
A method has been developed, which we call the "response probability density function technique," that has applications in predicting the probability of injury in a wide range of biodynamic situations. The method, which was developed in connection with sonic boom damage prediction, utilizes the probability density function of the excitation force and the probability density function of the sensitivity of the material being acted upon. The method is especially simple to use when both of these probability density functions are lognormal. Studies thus far have shown that the stresses from sonic booms, as well as the strengths of glass and mortars, are distributed lognormally. Some biodynamic processes also have lognormal distributions and are, therefore, amenable to modeling by this technique. In particular, this paper discusses the application of the response probability density function technique to the analysis of the thoracic response to air blast and the prediction of skull fracture from head impact. PMID:623590
Probability density function learning by unsupervised neurons.
Fiori, S
2001-10-01
In a recent work, we introduced the concept of pseudo-polynomial adaptive activation function neuron (FAN) and presented an unsupervised information-theoretic learning theory for such structure. The learning model is based on entropy optimization and provides a way of learning probability distributions from incomplete data. The aim of the present paper is to illustrate some theoretical features of the FAN neuron, to extend its learning theory to asymmetrical density function approximation, and to provide an analytical and numerical comparison with other known density function estimation methods, with special emphasis to the universal approximation ability. The paper also provides a survey of PDF learning from incomplete data, as well as results of several experiments performed on real-world problems and signals. PMID:11709808
Downlink Probability Density Functions for EOS-McMurdo Sound
NASA Technical Reports Server (NTRS)
Christopher, P.; Jackson, A. H.
1996-01-01
The visibility times and communication link dynamics for the Earth Observations Satellite (EOS)-McMurdo Sound direct downlinks have been studied. The 16 day EOS periodicity may be shown with the Goddard Trajectory Determination System (GTDS), and the entire 16 day period should be simulated for representative link statistics. We desire many attributes of the downlink, however, and a faster orbital determination method is desirable. We use the method of osculating elements for speed and accuracy in simulating the EOS orbit. The accuracy of the method of osculating elements is demonstrated by closely reproducing the observed 16 day Landsat periodicity. An autocorrelation function method is used to show the correlation spike at 16 days. The entire 16 day record of passes over McMurdo Sound is then used to generate statistics for innage time, outage time, elevation angle, antenna angle rates, and propagation loss. The elevation angle probability density function is compared with a 1967 analytic approximation which has been used for medium to high altitude satellites. One practical result of this comparison is seen to be the rare occurrence of zenith passes. The new result is functionally different from the earlier result, with a heavy emphasis on low elevation angles. EOS is one of a large class of sun synchronous satellites which may be downlinked to McMurdo Sound. We examine delay statistics for an entire group of sun synchronous satellites ranging from 400 km to 1000 km altitude. Outage probability density function results are presented three dimensionally.
Probability density function transformation using seeded localized averaging
Dimitrov, N. B.; Jordanov, V. T.
2011-07-01
Seeded Localized Averaging (SLA) is a spectrum acquisition method that averages pulse-heights in dynamic windows. SLA sharpens peaks in the acquired spectra. This work investigates the transformation of the original probability density function (PDF) in the process of applying the SLA procedure. We derive an analytical expression for the resulting probability density function after an application of SLA. In addition, we prove the following properties: (1) for symmetric distributions, SLA preserves both the mean and symmetry; (2) for uni-modal symmetric distributions, SLA reduces variance, sharpening the distribution's peak. Our results are the first to prove these properties, reinforcing past experimental observations. Specifically, our results imply that in the typical case of a spectral peak with Gaussian PDF the full width at half maximum (FWHM) of the transformed peak becomes narrower even with averaging of only two pulse-heights. While the Gaussian shape is no longer preserved, our results include an analytical expression for the resulting distribution. Examples of the transformation of other PDFs are presented. (authors)
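The direction of the variance-reduction property is easy to check numerically. The sketch below uses plain averaging of random pulse-height pairs rather than SLA's seeded dynamic windows, so it only demonstrates the narrowing effect for two averaged Gaussian pulse-heights (for which the width shrinks by sqrt(2)); the peak energy and counts are hypothetical:

    import numpy as np

    rng = np.random.default_rng(2)
    pulses = rng.normal(661.7, 3.0, 200_000)    # hypothetical Gaussian peak

    pairs = pulses.reshape(-1, 2).mean(axis=1)  # average two pulse-heights
    fwhm = lambda x: 2.3548 * x.std()           # Gaussian FWHM = 2.3548 sigma
    print(f"original FWHM: {fwhm(pulses):.2f}  averaged FWHM: {fwhm(pairs):.2f}")
    # SLA's seeded dynamic windows achieve a narrowing of this kind while
    # also preserving the mean, as proven in the paper.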
NASA Astrophysics Data System (ADS)
Jin, X. L.; Huang, Z. L.
The nonstationary probability densities of system responses are obtained for nonlinear multi-degree-of-freedom systems subject to stochastic parametric and external excitations. First, the stochastic averaging method is used to obtain the averaged Itô equation for amplitude envelopes of the system response. Then, the corresponding Fokker-Planck-Kolmogorov equation governing the nonstationary probability density of the amplitude envelopes is deduced. By applying the Galerkin method, the nonstationary probability density can be expressed as a series expansion in terms of a set of orthogonal base functions with time-dependent coefficients. Finally, the nonstationary probability densities for the amplitude response, as well as those for the state-space response, are solved approximately. To illustrate the applicability, the proposed method is applied to a two-degree-of-freedom van der Pol oscillator subject to external excitations of Gaussian white noises.
Protein single-model quality assessment by feature-based probability density functions.
Cao, Renzhi; Cheng, Jianlin
2016-01-01
Protein quality assessment (QA) has played an important role in protein structure prediction. We developed a novel single-model quality assessment method-Qprob. Qprob calculates the absolute error for each protein feature value against the true quality scores (i.e. GDT-TS scores) of protein structural models, and uses them to estimate its probability density distribution for quality assessment. Qprob has been blindly tested on the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11) as MULTICOM-NOVEL server. The official CASP result shows that Qprob ranks as one of the top single-model QA methods. In addition, Qprob makes contributions to our protein tertiary structure predictor MULTICOM, which is officially ranked 3rd out of 143 predictors. The good performance shows that Qprob is good at assessing the quality of models of hard targets. These results demonstrate that this new probability density distribution based method is effective for protein single-model quality assessment and is useful for protein structure prediction. The webserver of Qprob is available at: http://calla.rnet.missouri.edu/qprob/. The software is now freely available in the web server of Qprob. PMID:27041353
Michalopoulou, Zoi-Heleni; Pole, Andrew
2016-07-01
The dispersion pattern of a received signal is critical for understanding physical properties of the propagation medium. The objective of this work is to estimate accurately sediment sound speed using modal arrival times obtained from dispersion curves extracted via time-frequency analysis of acoustic signals. A particle filter is used that estimates probability density functions of modal frequencies arriving at specific times. Employing this information, probability density functions of arrival times for modal frequencies are constructed. Samples of arrival time differences are then obtained and are propagated backwards through an inverse acoustic model. As a result, probability density functions of sediment sound speed are estimated. Maximum a posteriori estimates indicate that inversion is successful. It is also demonstrated that multiple frequency processing offers an advantage over inversion at a single frequency, producing results with reduced variance. PMID:27475202
A unified optical damage criterion based on the probability density distribution of detector signals
NASA Astrophysics Data System (ADS)
Somoskoi, T.; Vass, Cs.; Mero, M.; Mingesz, R.; Bozoki, Z.; Osvay, K.
2013-11-01
Various methods and procedures have been developed so far to test laser-induced optical damage. The question naturally arises of the respective sensitivities of these diverse methods. To make a suitable comparison, the processing of the measured primary signal has to be at least similar across the various methods, and one needs to establish a proper damage criterion that is universally applicable to every method. We defined damage criteria based on the probability density distribution of the obtained detector signals, determined by the kernel density estimation procedure. We have tested the entire evaluation procedure on four well-known detection techniques: direct observation of the sample by optical microscopy; monitoring of the change in the light scattering power of the target surface; and the detection of the generated photoacoustic waves both in the bulk of the sample and in the surrounding air.
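A minimal version of a pdf-based criterion can be sketched with scipy's Gaussian kernel density estimator. The detector-signal populations and the particular threshold rule below are invented for illustration; they are not the criterion defined in the paper:

    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(3)
    # Hypothetical scattered-light signals: a large undamaged population
    # plus a small damaged population at higher signal level.
    signal = np.concatenate([rng.normal(1.0, 0.05, 900),
                             rng.normal(1.6, 0.20, 100)])

    kde = gaussian_kde(signal)              # kernel density estimate of the pdf
    x = np.linspace(0.5, 2.5, 400)
    pdf = kde(x)

    # One possible criterion in this spirit: flag damage when appreciable
    # probability mass lies well beyond the main (undamaged) mode.
    threshold = x[np.argmax(pdf)] + 5 * signal[signal < 1.2].std()
    print("P(signal > threshold) =", kde.integrate_box_1d(threshold, np.inf))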
Probability Density Functions of Observed Rainfall in Montana
NASA Technical Reports Server (NTRS)
Larsen, Scott D.; Johnson, L. Ronald; Smith, Paul L.
1995-01-01
The question of whether a rain rate probability density function (PDF) can vary uniformly between precipitation events is examined. Image analysis on large samples of radar echoes is possible because of advances in technology. The data provided by such an analysis easily allow development of radar reflectivity factor (and, by extension, rain rate) distributions. Finding a PDF becomes a matter of finding a function that describes the curve approximating the resulting distributions. Ideally, one PDF would exist for all cases; or many PDFs would exist that have the same functional form, with only systematic variations in parameters (such as size or shape). Satisfying either of these cases would validate the theoretical basis of the Area Time Integral (ATI). Using the method of moments and Elderton's curve selection criteria, the Pearson Type 1 equation was identified as a potential fit for 89 percent of the observed distributions. Further analysis indicates that the Type 1 curve does approximate the shape of the distributions but quantitatively does not produce a great fit.
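For reference, a method-of-moments fit of a Pearson Type 1 (beta) pdf proceeds by mapping the sample onto [0, 1] and matching the first two moments; Elderton's full curve-selection criteria are not reproduced here, and the stand-in "rain rates" below are synthetic:

    import numpy as np

    rng = np.random.default_rng(4)
    rate = rng.gamma(1.2, 8.0, 5000)      # synthetic stand-in rain rates

    # Method of moments for a Pearson Type 1 (beta) pdf on [lo, hi]
    lo, hi = rate.min(), rate.max()
    u = (rate - lo) / (hi - lo)           # map onto [0, 1]
    m, v = u.mean(), u.var()
    common = m * (1 - m) / v - 1
    alpha, beta = m * common, (1 - m) * common
    print(f"Pearson Type 1 parameters: alpha={alpha:.3f}, beta={beta:.3f}")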
Representation of Probability Density Functions from Orbit Determination using the Particle Filter
NASA Technical Reports Server (NTRS)
Mashiku, Alinda K.; Garrison, James; Carpenter, J. Russell
2012-01-01
Statistical orbit determination enables us to obtain estimates of the state and the statistical information of its region of uncertainty. In order to obtain an accurate representation of the probability density function (PDF) that incorporates higher order statistical information, we propose the use of nonlinear estimation methods such as the Particle Filter. The Particle Filter (PF) is capable of providing a PDF representation of the state estimates whose accuracy is dependent on the number of particles or samples used. For this method to be applicable to real case scenarios, we need a way of accurately representing the PDF in a compressed manner with little information loss. Hence we propose using the Independent Component Analysis (ICA) as a non-Gaussian dimensional reduction method that is capable of maintaining higher order statistical information obtained using the PF. Methods such as the Principal Component Analysis (PCA) are based on utilizing up to second order statistics, hence will not suffice in maintaining maximum information content. Both the PCA and the ICA are applied to two scenarios that involve a highly eccentric orbit with a lower a priori uncertainty covariance and a less eccentric orbit with a higher a priori uncertainty covariance, to illustrate the capability of the ICA in relation to the PCA.
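The PCA-versus-ICA contrast can be sketched with scikit-learn on a synthetic non-Gaussian "particle cloud"; the 6-D state, sample size, and mixing below are invented stand-ins for a particle filter posterior, not an orbit determination run:

    import numpy as np
    from sklearn.decomposition import PCA, FastICA

    rng = np.random.default_rng(5)
    # Heavy-tailed, non-Gaussian samples in 6-D, loosely mimicking a
    # posterior state pdf represented by particles.
    S = rng.laplace(size=(5000, 6)) ** 3
    particles = S @ rng.normal(size=(6, 6)).T

    Z_pca = PCA(n_components=3).fit_transform(particles)
    Z_ica = FastICA(n_components=3, random_state=0).fit_transform(particles)

    # Excess kurtosis retained per compressed coordinate: PCA keeps only
    # second-order structure, while ICA seeks maximally non-Gaussian axes.
    kurt = lambda z: ((z - z.mean(0)) ** 4).mean(0) / z.var(0) ** 2 - 3
    print("PCA kurtosis:", kurt(Z_pca))
    print("ICA kurtosis:", kurt(Z_ica))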
Probability density function modeling for sub-powered interconnects
NASA Astrophysics Data System (ADS)
Pater, Flavius; Amaricǎi, Alexandru
2016-06-01
This paper proposes three mathematical models for the reliability probability density function of interconnects supplied at sub-threshold voltages: spline curve approximations, Gaussian models, and sine interpolation. The proposed analysis aims at determining the most appropriate fit of the switching delay versus probability of correct switching for sub-powered interconnects. We compare the three mathematical models with Monte-Carlo simulations of interconnects for 45 nm CMOS technology supplied at 0.25 V.
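Two of the three candidate fits can be sketched as follows; the Monte-Carlo points are replaced by a synthetic error-function curve plus noise, and the sine-interpolation variant is omitted for brevity. All numbers are illustrative:

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.interpolate import UnivariateSpline
    from scipy.special import erf

    # Synthetic stand-in for Monte-Carlo data: probability of correct
    # switching versus normalized delay of a sub-powered interconnect.
    delay = np.linspace(0.0, 2.0, 21)
    rng = np.random.default_rng(6)
    p_ok = 0.5 * (1 + erf((delay - 1.0) / 0.4)) + rng.normal(0, 0.01, 21)

    # Gaussian (error-function) model fit
    gauss = lambda d, mu, s: 0.5 * (1 + erf((d - mu) / (s * np.sqrt(2))))
    (mu, s), _ = curve_fit(gauss, delay, p_ok, p0=[1.0, 0.3])

    # Smoothing-spline alternative
    spl = UnivariateSpline(delay, p_ok, s=0.002)

    rms = lambda f: np.sqrt(np.mean((f - p_ok) ** 2))
    print(f"Gaussian rms: {rms(gauss(delay, mu, s)):.4f}  spline rms: {rms(spl(delay)):.4f}")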
Assumed Probability Density Functions for Shallow and Deep Convection
NASA Astrophysics Data System (ADS)
Bogenschutz, Peter A.; Krueger, Steven K.; Khairoutdinov, Marat
2010-04-01
The assumed joint probability density function (PDF) between vertical velocity and conserved temperature and total water scalars has been suggested to be a relatively computationally inexpensive and unified subgrid-scale (SGS) parameterization for boundary layer clouds and turbulent moments. This paper analyzes the performance of five families of PDFs using large-eddy simulations of deep convection, shallow convection, and a transition from stratocumulus to trade wind cumulus. Three of the PDF families are based on the double Gaussian form and the remaining two are the single Gaussian and a Double Delta Function (analogous to a mass flux model). The assumed PDF method is tested for grid sizes ranging from 0.4 km to 204.8 km. In addition, studies are performed for PDF sensitivity to errors in the input moments and for how well the PDFs diagnose some higher-order moments. In general, the double Gaussian PDFs more accurately represent SGS cloud structure and turbulence moments in the boundary layer compared to the single Gaussian and Double Delta Function PDFs for the range of grid sizes tested. This is especially true for small SGS cloud fractions. While the most complex PDF, Lewellen-Yoh, better represents shallow convective cloud properties (cloud fraction and liquid water mixing ratio) compared to the less complex Analytic Double Gaussian 1 PDF, there appears to be no advantage in implementing Lewellen-Yoh for deep convection. However, the Analytic Double Gaussian 1 PDF better represents the liquid water flux, is less sensitive to errors in the input moments, and diagnoses higher order moments more accurately. Between the Lewellen-Yoh and Analytic Double Gaussian 1 PDFs, it appears that neither family is distinctly better at representing cloudy layers. However, due to the reduced computational cost and fairly robust results, it appears that the Analytic Double Gaussian 1 PDF could be an ideal family for SGS cloud and turbulence representation in coarse
Hayward, Thomas J; Oba, Roger M
2013-07-01
Numerical methods are presented for approximating the probability density functions (pdf's) of acoustic fields and receiver-array responses induced by a given joint pdf of a set of acoustic environmental parameters. An approximation to the characteristic function of the random acoustic field (the inverse Fourier transform of the field pdf) is first obtained either by construction of the empirical characteristic function (ECF) from a random sample of the acoustic parameters, or by application of generalized Gaussian quadrature to approximate the integral defining the characteristic function. The Fourier transform is then applied to obtain an approximation of the pdf by a continuous function of the field variables. Application of both the ECF and generalized Gaussian quadrature is demonstrated in an example of a shallow-water ocean waveguide with two-dimensional uncertainty of sound speed and attenuation coefficient in the ocean bottom. Both approximations lead to a smoother estimate of the field pdf than that provided by a histogram, with generalized Gaussian quadrature providing a smoother estimate at the tails of the pdf. Potential applications to acoustic system performance quantification and to nonparametric acoustic signal processing are discussed. PMID:23862782
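The ECF route is compact enough to sketch directly: build the empirical characteristic function from a parameter sample, then invert it by numerical Fourier integration. The sample below is a simple Gaussian stand-in, not an acoustic-field computation:

    import numpy as np

    rng = np.random.default_rng(7)
    sample = rng.normal(1500.0, 20.0, 2000)   # stand-in random field values

    t = np.linspace(-0.5, 0.5, 1001)          # characteristic-function grid
    ecf = np.exp(1j * np.outer(t, sample)).mean(axis=1)   # empirical char. fn.

    # Fourier inversion: pdf(x) = (1/2pi) * integral of ecf(t) exp(-i t x) dt
    x = np.linspace(1400.0, 1600.0, 401)
    dt = t[1] - t[0]
    pdf = (np.exp(-1j * np.outer(x, t)) @ ecf).real * dt / (2 * np.pi)
    print("pdf integrates to", pdf.sum() * (x[1] - x[0]))

The resulting estimate is a smooth function of the field variable, in contrast to a histogram built from the same 2000 draws.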
A new estimator method for GARCH models
NASA Astrophysics Data System (ADS)
Onody, R. N.; Favaro, G. M.; Cazaroto, E. R.
2007-06-01
The GARCH (p, q) model is a very interesting stochastic process with widespread applications and a central role in empirical finance. The Markovian GARCH (1, 1) model has only 3 control parameters and a much discussed question is how to estimate them when a series of some financial asset is given. Besides the maximum likelihood estimator technique, there is another method which uses the variance, the kurtosis and the autocorrelation time to determine them. We propose here to use the standardized 6th moment. The set of parameters obtained in this way produces a very good probability density function and a much better time autocorrelation function. This is true for both studied indexes: NYSE Composite and FTSE 100. The probability of return to the origin is investigated at different time horizons for both Gaussian and Laplacian GARCH models. In spite of the fact that these models show almost identical performances with respect to the final probability density function and to the time autocorrelation function, their scaling properties are, however, very different. The Laplacian GARCH model gives a better scaling exponent for the NYSE time series, whereas the Gaussian dynamics fits better the FTSE scaling exponent.
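A moment-based estimator of this kind needs the sample variance, kurtosis, and standardized 6th moment of the returns. A sketch that generates a GARCH(1, 1) path and computes the three statistics (parameter values are illustrative):

    import numpy as np

    rng = np.random.default_rng(8)
    omega, alpha, beta = 1e-5, 0.09, 0.90   # hypothetical GARCH(1, 1) parameters
    n = 200_000
    r = np.empty(n)
    h = omega / (1 - alpha - beta)          # start at the unconditional variance
    for t in range(n):
        r[t] = np.sqrt(h) * rng.normal()
        h = omega + alpha * r[t] ** 2 + beta * h

    std = r.std()
    m4 = np.mean((r / std) ** 4)            # kurtosis
    m6 = np.mean((r / std) ** 6)            # standardized 6th moment
    print(f"variance={std**2:.3e}  kurtosis={m4:.2f}  6th moment={m6:.1f}")

Matching these sample statistics against their analytical GARCH expressions yields the parameter estimates; the abstract's point is that adding the standardized 6th moment to the usual variance/kurtosis/autocorrelation set improves the resulting fit.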
Representation of layer-counted proxy records as probability densities on error-free time axes
NASA Astrophysics Data System (ADS)
Boers, Niklas; Goswami, Bedartha; Ghil, Michael
2016-04-01
Time series derived from paleoclimatic proxy records exhibit substantial dating uncertainties in addition to the measurement errors of the proxy values. For radiometrically dated proxy archives, Goswami et al. [1] have recently introduced a framework rooted in Bayesian statistics that successfully propagates the dating uncertainties from the time axis to the proxy axis. The resulting proxy record consists of a sequence of probability densities over the proxy values, conditioned on prescribed age values. One of the major benefits of this approach is that the proxy record is represented on an accurate, error-free time axis. Such unambiguous dating is crucial, for instance, in comparing different proxy records. This approach, however, is not directly applicable to proxy records with layer-counted chronologies, as for example ice cores, which are typically dated by counting quasi-annually deposited ice layers. Hence the nature of the chronological uncertainty in such records is fundamentally different from that in radiometrically dated ones. Here, we introduce a modification of the Goswami et al. [1] approach that is specifically designed for layer-counted proxy records, instead of radiometrically dated ones. We apply our method to isotope ratios and dust concentrations in the NGRIP core, using a published 60,000-year chronology [2]. It is shown that the further one goes into the past, the more the layer-counting errors accumulate and lead to growing uncertainties in the probability density sequence for the proxy values that results from the proposed approach. For the older parts of the record, these uncertainties affect more and more a statistically sound estimation of proxy values. This difficulty implies that great care has to be exercised when comparing and in particular aligning specific events among different layer-counted proxy records. On the other hand, when attempting to derive stochastic dynamical models from the proxy records, one is only interested in the
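The qualitative accumulation of layer-counting errors can be mimicked with a toy ensemble in which each layer is independently missed or double-counted with small probabilities; the error rates and record length below are hypothetical. The age-error spread grows roughly like the square root of the layer count:

    import numpy as np

    rng = np.random.default_rng(9)
    n_real, n_layers = 200, 60_000          # ensemble size, ~60 kyr record
    p_miss = p_double = 0.01                # hypothetical counting error rates

    steps = np.array([-1, 0, 1], dtype=np.int8)
    err = rng.choice(steps, size=(n_real, n_layers),
                     p=[p_miss, 1 - p_miss - p_double, p_double])
    age_error = err.cumsum(axis=1, dtype=np.int32)  # accumulated age error (yr)
    for k in (1_000, 10_000, 60_000):
        print(f"layer {k:6d}: age-error std = {age_error[:, k - 1].std():6.1f} yr")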
Nonstationary probability densities of a class of nonlinear system excited by external colored noise
NASA Astrophysics Data System (ADS)
Qi, LuYuan; Xu, Wei; Gu, XuDong
2012-03-01
This paper deals with the approximate nonstationary probability density of a class of nonlinear vibrating systems excited by colored noise. First, the stochastic averaging method is adopted to obtain the averaged Itô equation for the amplitude of the system. The corresponding Fokker-Planck-Kolmogorov equation governing the evolutionary probability density function is deduced. Then, the approximate solution of the Fokker-Planck-Kolmogorov equation is derived by applying the Galerkin method. The solution is expressed as a sum of a series expansion in terms of a set of proper basis functions with time-dependent coefficients. Finally, an example is given to illustrate the proposed procedure. The validity of the proposed method is confirmed by Monte Carlo simulation.
Gomez-Lazaro, Emilio; Bueso, Maria C.; Kessler, Mathieu; Martin-Martinez, Sergio; Zhang, Jie; Hodge, Bri-Mathias; Molina-Garcia, Angel
2016-02-02
The Weibull probability distribution has been widely applied to characterize wind speeds for wind energy resources. Wind power generation modeling is different, however, due in particular to power curve limitations, wind turbine control methods, and transmission system operation requirements. These differences are even greater for aggregated wind power generation in power systems with high wind penetration. Consequently, models based on a single Weibull component can provide poor characterizations for aggregated wind power generation. With this aim, the present paper focuses on discussing Weibull mixtures to characterize the probability density function (PDF) for aggregated wind power generation. PDFs of wind power data are first classified according to hourly and seasonal patterns. The selection of the number of components in the mixture is analyzed through two well-known criteria: the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). Finally, the optimal number of Weibull components for maximum likelihood is explored for the defined patterns, including the estimated weight, scale, and shape parameters. Results show that multi-Weibull models are more suitable to characterize aggregated wind power data due to the impact of distributed generation, the variety of wind speed values, and wind power curtailment.
Analytical Formulation of the Single-visit Completeness Joint Probability Density Function
NASA Astrophysics Data System (ADS)
Garrett, Daniel; Savransky, Dmitry
2016-09-01
We derive an exact formulation of the multivariate integral representing the single-visit obscurational and photometric completeness joint probability density function for arbitrary distributions for planetary parameters. We present a derivation of the region of nonzero values of this function, which extends previous work, and discuss the time and computational complexity costs and benefits of the method. We present a working implementation and demonstrate excellent agreement between this approach and Monte Carlo simulation results.
Kappa distribution and Probability Density Functions in Solar Wind
NASA Astrophysics Data System (ADS)
Jurac, S.
2004-12-01
A signature of statistical intermittency is the presence of large deviations from the average value: this increased probability of finding extreme deviations is characterized by probability density functions (PDFs) which exhibit non-Gaussian power-law tails. Such power-law distributions have been observed over decades in biology, chemistry, finance and other fields. Known examples include heartbeat histograms, price distributions, turbulent fluid flow and many other non-equilibrium systems. It is shown that the Kappa distribution represents a good description of the PDFs observed in the solar wind. The asymmetric fluctuations in variance over time observed in solar wind PDFs are Gamma distributed. It is shown that, by assuming such a distribution of variance, the Kappa distribution can be analytically derived.
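The variance-mixture mechanism is easy to demonstrate numerically: Gaussian fluctuations whose variance is itself Gamma distributed develop far heavier tails than a fixed-variance Gaussian of the same total power. The Gamma parameters below are illustrative; the exact Kappa form follows from the specific mixing described above:

    import numpy as np

    rng = np.random.default_rng(10)
    n = 500_000

    var = rng.gamma(shape=2.0, scale=0.5, size=n)   # Gamma-distributed variance
    x = rng.normal(0.0, np.sqrt(var))               # compound (mixed) process
    gauss = rng.normal(0.0, x.std(), n)             # fixed-variance reference

    for s in (2, 4, 6):
        thr = s * x.std()
        print(f"P(|x| > {s} sigma): mixture {np.mean(abs(x) > thr):.2e}"
              f"  vs Gaussian {np.mean(abs(gauss) > thr):.2e}")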
Zeeman mapping of probability densities in square quantum wells using magnetic probes
NASA Astrophysics Data System (ADS)
Prechtl, G.; Heiss, W.; Bonanni, A.; Jantsch, W.; Mackowski, S.; Janik, E.; Karczewski, G.
2000-06-01
We use a method to probe experimentally the probability density of carriers confined in semiconductor quantum structures. The exciton Zeeman splitting in quantum wells containing a single, ultranarrow magnetic layer is studied depending on the layer position. In particular, a system consisting of a 1/4 monolayer MnTe embedded at varying positions in nonmagnetic CdTe/CdMgTe quantum wells is investigated. The sp-d exchange interaction results in a drastic increase of the Zeeman splitting, which, because of the strongly localized nature of this interaction, sensitively depends on the position of the MnTe submonolayer in the quantum well. For various interband transitions we show that the dependence of the exciton Zeeman splitting on the position of the magnetic layer directly maps the probability density of free holes in the growth direction.
Analysis of 2-d ultrasound cardiac strain imaging using joint probability density functions.
Ma, Chi; Varghese, Tomy
2014-06-01
Ultrasound frame rates play a key role for accurate cardiac deformation tracking. Insufficient frame rates lead to an increase in signal de-correlation artifacts resulting in erroneous displacement and strain estimation. Joint probability density distributions generated from estimated axial strain and its associated signal-to-noise ratio provide a useful approach to assess the minimum frame rate requirements. Previous reports have demonstrated that bi-modal distributions in the joint probability density indicate inaccurate strain estimation over a cardiac cycle. In this study, we utilize similar analysis to evaluate a 2-D multi-level displacement tracking and strain estimation algorithm for cardiac strain imaging. The effect of different frame rates, final kernel dimensions and a comparison of radio frequency and envelope based processing are evaluated using echo signals derived from a 3-D finite element cardiac model and five healthy volunteers. Cardiac simulation model analysis demonstrates that the minimum frame rates required to obtain accurate joint probability distributions for the signal-to-noise ratio and strain, for a final kernel dimension of 1 λ by 3 A-lines, was around 42 Hz for radio frequency signals. On the other hand, even a frame rate of 250 Hz with envelope signals did not replicate the ideal joint probability distribution. For the volunteer study, clinical data was acquired only at a 34 Hz frame rate, which appears to be sufficient for radio frequency analysis. We also show that an increase in the final kernel dimensions significantly affect the strain probability distribution and joint probability density function generated, with a smaller effect on the variation in the accumulated mean strain estimated over a cardiac cycle. Our results demonstrate that radio frequency frame rates currently achievable on clinical cardiac ultrasound systems are sufficient for accurate analysis of the strain probability distribution, when a multi-level 2-D
NASA Astrophysics Data System (ADS)
Mori, Shohei; Hirata, Shinnosuke; Yamaguchi, Tadashi; Hachiya, Hiroyuki
To develop a quantitative diagnostic method for liver fibrosis using an ultrasound B-mode image, a probability imaging method of tissue characteristics based on a multi-Rayleigh model, which expresses the probability density function of echo signals from liver fibrosis, has been proposed. In this paper, the effect of non-speckle echo signals on the tissue characteristics estimated from the multi-Rayleigh model was evaluated. Non-speckle signals were determined and removed using the modeling error of the multi-Rayleigh model. The correct tissue characteristics of fibrotic tissue could then be estimated with the removal of non-speckle signals.
Probability Density Function for Waves Propagating in a Straight PEC Rough Wall Tunnel
Pao, H
2004-11-08
The probability density function for waves propagating in a straight perfect electrical conductor (PEC) rough wall tunnel is deduced from the mathematical models of the random electromagnetic fields. The field propagating in caves or tunnels is a complex-valued Gaussian random process by the Central Limit Theorem. The probability density function for the single-mode field amplitude in such a structure is Ricean. Since both the expected value and the standard deviation of this field depend only on radial position, the probability density function, which gives the power distribution, is a radially dependent function. The radio channel places fundamental limitations on the performance of wireless communication systems in tunnels and caves. The transmission path between the transmitter and receiver can vary from a simple direct line of sight to one that is severely obstructed by rough walls and corners. Unlike wired channels that are stationary and predictable, radio channels can be extremely random and difficult to analyze. In fact, modeling the radio channel has historically been one of the more challenging parts of any radio system design; this is often done using statistical methods. In this contribution, we present the most important statistical property, the field probability density function, of waves propagating in a straight PEC rough wall tunnel. This work studies only the simplest case, a PEC boundary, which is an idealization of the real world, but the methods and conclusions developed herein are applicable to real-world problems in which the boundary is dielectric. The mechanisms behind electromagnetic wave propagation in caves or tunnels are diverse, but can generally be attributed to reflection, diffraction, and scattering. Because of the multiple reflections from rough walls, the electromagnetic waves travel along different paths of varying lengths. The interactions between these waves cause multipath fading at any location, and the strengths of the waves decrease as the distance
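The single-mode Ricean amplitude statistics described above are available directly in scipy; a short sketch with an assumed line-of-sight amplitude A and per-quadrature scatter sigma (both values invented for illustration):

    import numpy as np
    from scipy.stats import rice

    A, sigma = 1.0, 0.35          # assumed direct-path amplitude and scatter
    b = A / sigma                 # scipy's Rice shape parameter

    x = np.linspace(0.0, 3.0, 7)
    print(rice.pdf(x, b, scale=sigma))              # Ricean amplitude pdf
    print("mean amplitude:", rice.mean(b, scale=sigma))

In a tunnel model, A and sigma would vary with radial position, making the pdf radially dependent as the abstract notes.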
NASA Astrophysics Data System (ADS)
Boslough, M.
2011-12-01
Climate-related uncertainty is traditionally presented as an error bar, but it is becoming increasingly common to express it in terms of a probability density function (PDF). PDFs are a necessary component of probabilistic risk assessments, for which simple "best estimate" values are insufficient. Many groups have generated PDFs for climate sensitivity using a variety of methods. These PDFs are broadly consistent, but vary significantly in their details. One axiom of the verification and validation community is, "codes don't make predictions, people make predictions." This is a statement of the fact that subject domain experts generate results using assumptions within a range of epistemic uncertainty and interpret them according to their expert opinion. Different experts with different methods will arrive at different PDFs. For effective decision support, a single consensus PDF would be useful. We suggest that market methods can be used to aggregate an ensemble of opinions into a single distribution that expresses the consensus. Prediction markets have been shown to be highly successful at forecasting the outcome of events ranging from elections to box office returns. In prediction markets, traders can take a position on whether some future event will or will not occur. These positions are expressed as contracts that are traded in a double-auction market that aggregates price, which can be interpreted as a consensus probability that the event will take place. Since climate sensitivity cannot directly be measured, it cannot be predicted. However, the changes in global mean surface temperature are a direct consequence of climate sensitivity, changes in forcing, and internal variability. Viable prediction markets require an undisputed event outcome on a specific date. Climate-related markets exist on Intrade.com, an online trading exchange. One such contract is titled "Global Temperature Anomaly for Dec 2011 to be greater than 0.65 Degrees C." Settlement is based
Interpolation of probability densities in ENDF and ENDL
Hedstrom, G
2006-01-27
Suppose that we are given two probability densities p_0(E′) and p_1(E′) for the energy E′ of an outgoing particle, p_0(E′) corresponding to energy E_0 of the incident particle and p_1(E′) corresponding to incident energy E_1. If E_0 < E_1, the problem is how to define p_α(E′) for intermediate incident energies E_α = (1 − α)E_0 + αE_1 with 0 < α < 1. In this note the author considers three ways to do it, beginning with unit-base interpolation, which is standard in ENDL and is sometimes used in ENDF; then describing the equiprobable bins used by some Monte Carlo codes; and closing with a discussion of interpolation by corresponding points, which is commonly used in ENDF.
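Unit-base interpolation is short enough to sketch: rescale each density to the unit interval (carrying the Jacobian), interpolate linearly there, and map back to the interpolated energy range. The code below follows that recipe under assumed endpoint densities; it is not the ENDL/ENDF reference implementation:

    import numpy as np

    def unit_base_interp(x0, p0, x1, p1, alpha, n=200):
        # Interpolated support endpoints
        lo = (1 - alpha) * x0[0] + alpha * x1[0]
        hi = (1 - alpha) * x0[-1] + alpha * x1[-1]
        u = np.linspace(0.0, 1.0, n)
        # Densities on the unit base, including the Jacobian of each rescaling
        q0 = np.interp(u, (x0 - x0[0]) / (x0[-1] - x0[0]), p0) * (x0[-1] - x0[0])
        q1 = np.interp(u, (x1 - x1[0]) / (x1[-1] - x1[0]), p1) * (x1[-1] - x1[0])
        q = (1 - alpha) * q0 + alpha * q1
        return lo + u * (hi - lo), q / (hi - lo)   # undo Jacobian on new base

    x0 = np.linspace(0.0, 1.0, 50); p0 = 2.0 * (1.0 - x0)        # density at E_0
    x1 = np.linspace(0.0, 4.0, 50); p1 = 0.25 * np.ones_like(x1) # density at E_1
    x, p = unit_base_interp(x0, p0, x1, p1, alpha=0.5)
    print("normalization:", np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(x)))

A useful property of the scheme is that normalization is preserved: the interpolated density still integrates to one on its new energy range.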
On singular probability densities generated by extremal dynamics
NASA Astrophysics Data System (ADS)
Garcia, Guilherme J. M.; Dickman, Ronald
2004-02-01
Extremal dynamics is the mechanism that drives the Bak-Sneppen model into a (self-organized) critical state, marked by a singular stationary probability density p(x). With the aim of understanding this phenomenon, we study the BS model and several variants via mean-field theory and simulation. In all cases, we find that p(x) is singular at one or more points, as a consequence of extremal dynamics. Furthermore we show that the extremal barrier x_i always belongs to the 'prohibited' interval, in which p(x) = 0. Our simulations indicate that the Bak-Sneppen universality class is robust with regard to changes in the updating rule: we find the same value for the exponent π for all variants. Mean-field theory, which furnishes an exact description for the model on a complete graph, reproduces the character of the probability distribution found in simulations. For the modified processes mean-field theory takes the form of a functional equation for p(x).
Probability density distribution of velocity differences at high Reynolds numbers
NASA Technical Reports Server (NTRS)
Praskovsky, Alexander A.
1993-01-01
Recent understanding of fine-scale turbulence structure in high Reynolds number flows is mostly based on Kolmogorov's original and revised models. The main finding of these models is that intrinsic characteristics of fine-scale fluctuations are universal ones at high Reynolds numbers, i.e., the functional behavior of any small-scale parameter is the same in all flows if the Reynolds number is high enough. The only large-scale quantity that directly affects small-scale fluctuations is the energy flux through a cascade. In dynamical equilibrium between large- and small-scale motions, this flux is equal to the mean rate of energy dissipation ε. The probability density distribution (pdd) of velocity difference is a very important characteristic for both the basic understanding of fully developed turbulence and engineering problems. Hence, it is important to test the findings: (1) the tails of the pdd behave as P(Δu) ∝ exp(−b(r)|Δu|/σ_Δu), and (2) the logarithmic decrement scales as b(r) ∝ r^0.15 when the separation r lies in the inertial subrange, in high Reynolds number laboratory shear flows.
Efficiency issues related to probability density function comparison
Kelly, P.M.; Cannon, M.; Barros, J.E.
1996-03-01
The CANDID project (Comparison Algorithm for Navigating Digital Image Databases) employs probability density functions (PDFs) of localized feature information to represent the content of an image for search and retrieval purposes. A similarity measure between PDFs is used to identify database images that are similar to a user-provided query image. Unfortunately, signature comparison involving PDFs is a very time-consuming operation. In this paper, we look into some efficiency considerations when working with PDFs. Since PDFs can take on many forms, we look into tradeoffs between accurate representation and efficiency of manipulation for several data sets. In particular, we typically represent each PDF as a Gaussian mixture (e.g., as a weighted sum of Gaussian kernels) in the feature space. We find that by constraining all Gaussian kernels to have principal axes that are aligned to the natural axes of the feature space, computations involving these PDFs are simplified. We can also constrain the Gaussian kernels to be hyperspherical rather than hyperellipsoidal, simplifying computations even further, and yielding an order of magnitude speedup in signature comparison. This paper illustrates the tradeoffs encountered when using these constraints.
Probability density functions for use when calculating standardised drought indices
NASA Astrophysics Data System (ADS)
Svensson, Cecilia; Prosdocimi, Ilaria; Hannaford, Jamie
2015-04-01
Time series of drought indices like the standardised precipitation index (SPI) and standardised flow index (SFI) require a statistical probability density function to be fitted to the observed (generally monthly) precipitation and river flow data. Once fitted, the quantiles are transformed to a Normal distribution with mean = 0 and standard deviation = 1. These transformed data are the SPI/SFI, which are widely used in drought studies, including for drought monitoring and early warning applications. Different distributions were fitted to rainfall and river flow data accumulated over 1, 3, 6 and 12 months for 121 catchments in the United Kingdom. These catchments represent a range of catchment characteristics in a mid-latitude climate. Both rainfall and river flow data have a lower bound at 0, as rains and flows cannot be negative. Their empirical distributions also tend to have positive skewness, and therefore the Gamma distribution has often been a natural and suitable choice for describing the data statistically. However, after transformation of the data to Normal distributions to obtain the SPIs and SFIs for the 121 catchments, the distributions are rejected in 11% and 19% of cases, respectively, by the Shapiro-Wilk test. Three-parameter distributions traditionally used in hydrological applications, such as the Pearson type 3 for rainfall and the Generalised Logistic and Generalised Extreme Value distributions for river flow, tend to make the transformed data fit better, with rejection rates of 5% or less. However, none of these three-parameter distributions have a lower bound at zero. This means that the lower tail of the fitted distribution may potentially go below zero, which would result in a lower limit to the calculated SPI and SFI values (as observations can never reach into this lower tail of the theoretical distribution). The Tweedie distribution can overcome the problems found when using either the Gamma or the above three-parameter distributions. The
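The index construction itself is compact: fit the chosen pdf to the accumulated totals and push each observation through the fitted cdf into standard Normal quantiles. A sketch using a zero-bounded Gamma fit on synthetic monthly precipitation (all numbers invented):

    import numpy as np
    from scipy.stats import gamma, norm

    rng = np.random.default_rng(11)
    precip = rng.gamma(2.0, 30.0, 480)     # 40 years of monthly totals (mm)

    # Fit a Gamma pdf with the lower bound fixed at zero, then map each
    # observation through its cdf to a standard Normal quantile (the SPI).
    a, loc, scale = gamma.fit(precip, floc=0)
    spi = norm.ppf(gamma.cdf(precip, a, loc=loc, scale=scale))
    print(f"SPI mean = {spi.mean():.3f}, std = {spi.std():.3f}")  # ~0 and ~1

A goodness-of-fit check such as the Shapiro-Wilk test applied to the transformed values is then the basis for accepting or rejecting the candidate distribution, as done in the study above.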
Spectral discrete probability density function of measured wind turbine noise in the far field.
Ashtiani, Payam; Denison, Adelaide
2015-01-01
Of interest is the spectral character of wind turbine noise at typical residential set-back distances. In this paper, a spectral statistical analysis has been applied to immission measurements conducted at three locations. This method provides discrete probability density functions for the Turbine ONLY component of the measured noise. This analysis is completed for one-third octave sound levels, at integer wind speeds, and is compared to existing metrics for measuring acoustic comfort as well as previous discussions on low-frequency noise sources. PMID:25905097
NASA Astrophysics Data System (ADS)
Bernotas, Marius P.; Nelson, Charles
2016-05-01
The Weibull and Exponentiated Weibull probability density functions have been examined for the free space regime using heuristically derived shape and scale parameters. This paper extends current literature to the underwater channel and explores use of experimentally derived parameters. Data gathered in a short range underwater channel emulator was analyzed using a nonlinear curve fitting methodology to optimize the scale and shape parameters of the PDFs. This method provides insight into the scaled effects of underwater optical turbulence on a long range link, and may yield a general set of equations for determining the PDF for an underwater optical link.
Probability density function of a passive scalar in turbulent shear flows
Kollmann, W.; Janicka, J.
1982-10-01
The transport equation for the probability density function of a scalar in turbulent shear flow is analyzed and the closure based on the gradient flux model for the turbulent flux and an integral model for the scalar dissipation term is put forward. The probability density function equation is complemented by a two-equation turbulence model. Application to several shear flows proves the capability of the closure model to determine the probability density function of passive scalars.
3D model retrieval using probability density-based shape descriptors.
Akgül, Ceyhun Burak; Sankur, Bülent; Yemez, Yücel; Schmitt, Francis
2009-06-01
We address content-based retrieval of complete 3D object models by a probabilistic generative description of local shape properties. The proposed shape description framework characterizes a 3D object with sampled multivariate probability density functions of its local surface features. This density-based descriptor can be efficiently computed via kernel density estimation (KDE) coupled with fast Gauss transform. The non-parametric KDE technique allows reliable characterization of a diverse set of shapes and yields descriptors which remain relatively insensitive to small shape perturbations and mesh resolution. Density-based characterization also induces a permutation property which can be used to guarantee invariance at the shape matching stage. As proven by extensive retrieval experiments on several 3D databases, our framework provides state-of-the-art discrimination over a broad and heterogeneous set of shape categories. PMID:19372614
NASA Astrophysics Data System (ADS)
Kikuchi, Ryota; Misaka, Takashi; Obayashi, Shigeru
2015-10-01
An integrated method of a proper orthogonal decomposition based reduced-order model (ROM) and data assimilation is proposed for the real-time prediction of an unsteady flow field. In this paper, a particle filter (PF) and an ensemble Kalman filter (EnKF) are compared for data assimilation and the difference in the predicted flow fields is evaluated focusing on the probability density function (PDF) of the model variables. The proposed method is demonstrated using identical twin experiments of an unsteady flow field around a circular cylinder at the Reynolds number of 1000. The PF and EnKF are employed to estimate temporal coefficients of the ROM based on the observed velocity components in the wake of the circular cylinder. The prediction accuracy of ROM-PF is significantly better than that of ROM-EnKF due to the flexibility of PF for representing a PDF compared to EnKF. Furthermore, the proposed method reproduces the unsteady flow field several orders of magnitude faster than the reference numerical simulation based on the Navier-Stokes equations.
Turbulent combustion analysis with various probability density functions
NASA Astrophysics Data System (ADS)
Kim, Yongmo; Chung, T. J.
A finite element method for the computation of confined, axisymmetric, turbulent diffusion flames is developed. This algorithm adopts the coupled velocity-pressure formulation to improve the convergence rate in variable-viscosity/variable-density flows. In order to minimize the numerical diffusion, the streamline upwind/Petrov-Galerkin formulation is employed. Turbulence is represented by the k-epsilon model, and the combustion process involves an irreversible one-step reaction at an infinite rate. The mean mixture properties were obtained by three methods based on the diffusion flame concept: without using a pdf, with a double-delta pdf, and with a beta pdf. A comparison is made between the combustion models with and without the pdf application, and the effect of turbulence on combustion is discussed. The numerical results are compared with available experimental data.
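The presumed-pdf averaging step can be sketched as follows: the beta parameters are set from the modeled mean and variance of the mixture fraction, and any flamelet property is averaged against that pdf. The "temperature" profile below is a toy stand-in, not a methane-air flamelet:

    import numpy as np
    from scipy.stats import beta
    from scipy.integrate import quad

    def presumed_beta_mean(phi, z_mean, z_var):
        # Beta parameters from the mean and variance of mixture fraction Z
        # (requires 0 < z_var < z_mean * (1 - z_mean)).
        g = z_mean * (1 - z_mean) / z_var - 1
        a, b = z_mean * g, (1 - z_mean) * g
        val, _ = quad(lambda z: phi(z) * beta.pdf(z, a, b), 0.0, 1.0)
        return val

    # Toy flamelet property: temperature peaking at stoichiometric Z = 0.3
    phi = lambda z: 300 + 1700 * np.exp(-((z - 0.3) / 0.1) ** 2)
    print(presumed_beta_mean(phi, z_mean=0.3, z_var=0.01))  # turbulent mean
    print(phi(0.3))                                         # delta-pdf value

The comparison mirrors the one in the abstract: a delta pdf ignores sub-grid fluctuations and returns the laminar peak value, while the beta pdf smears the property over the fluctuating mixture fraction.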
NASA Astrophysics Data System (ADS)
Myers, Adam D.; White, Martin; Ball, Nicholas M.
2009-11-01
The use of photometric redshifts in cosmology is increasing. Often, however, these photo-z are treated like spectroscopic observations, in that the peak of the photometric redshift, rather than the full probability density function (PDF), is used. This overlooks useful information inherent in the full PDF. We introduce a new real-space estimator for one of the most used cosmological statistics, the two-point correlation function, that weights by the PDF of individual photometric objects in a manner that is optimal when Poisson statistics dominate. As our estimator does not bin based on the PDF peak, it substantially enhances the clustering signal by usefully incorporating information from all photometric objects that overlap the redshift bin of interest. As a real-world application, we measure quasi-stellar object (QSO) clustering in the Sloan Digital Sky Survey (SDSS). We find that our simplest binned estimator improves the clustering signal by a factor equivalent to increasing the survey size by a factor of 2-3. We also introduce a new implementation that fully weights between pairs of objects in constructing the cross-correlation and find that this pair-weighted estimator improves clustering signal in a manner equivalent to increasing the survey size by a factor of 4-5. Our technique uses spectroscopic data to anchor the distance scale and it will be particularly useful where spectroscopic data (e.g. from BOSS) overlap deeper photometry (e.g. from Pan-STARRS, DES or the LSST). We additionally provide simple, informative expressions to determine when our estimator will be competitive with the autocorrelation of spectroscopic objects. Although we use QSOs as an example population, our estimator can and should be applied to any clustering estimate that uses photometric objects.
NASA Astrophysics Data System (ADS)
Angraini, Lily Maysari; Suparmi, Variani, Viska Inda
2010-12-01
SUSY quantum mechanics can be applied to solve the Schrödinger equation for high-dimensional systems that can be reduced into one-dimensional systems and represented in lowering and raising operators. Lowering and raising operators can be obtained using the relationship between the original Hamiltonian equation and the (super)potential equation. In this paper SUSY quantum mechanics is used as a method to obtain the wave function and the energy levels of the modified Pöschl-Teller potential. The wave function and probability density graphs are simulated using the Delphi 7.0 programming language. Finally, the expectation value of a quantum mechanical operator can be calculated analytically in integral form or from the probability density graph produced by the program.
NASA Astrophysics Data System (ADS)
Lu, C.; Liu, Y.; Niu, S.; Vogelmann, A. M.
2012-12-01
In situ aircraft cumulus observations from the RACORO field campaign are used to estimate entrainment rate for individual clouds using a recently developed mixing fraction approach. The entrainment rate is computed based on the observed state of the cloud core and the state of the air that is laterally mixed into the cloud at its edge. The computed entrainment rate decreases when the air is entrained from increasing distance from the cloud core edge; this is because the air farther away from cloud edge is drier than the neighboring air that is within the humid shells around cumulus clouds. Probability density functions of entrainment rate are well fitted by lognormal distributions at different heights above cloud base for different dry air sources (i.e., different source distances from the cloud core edge). Such lognormal distribution functions are appropriate for inclusion into future entrainment rate parameterization in large scale models. To the authors' knowledge, this is the first time that probability density functions of entrainment rate have been obtained in shallow cumulus clouds based on in situ observations. The reason for the wide spread of entrainment rate is that the observed clouds are affected by entrainment mixing processes to different extents, which is verified by the relationships between the entrainment rate and cloud microphysics/dynamics. The entrainment rate is negatively correlated with liquid water content and cloud droplet number concentration due to the dilution and evaporation in entrainment mixing processes. The entrainment rate is positively correlated with relative dispersion (i.e., ratio of standard deviation to mean value) of liquid water content and droplet size distributions, consistent with the theoretical expectation that entrainment mixing processes are responsible for microphysics fluctuations and spectral broadening. The entrainment rate is negatively correlated with vertical velocity and dissipation rate because entrainment
Ayachi, F S; Boudaoud, S; Marque, C
2014-08-01
In this work, we propose to classify, by simulation, the shape variability (or non-Gaussianity) of the surface electromyogram (sEMG) amplitude probability density function (PDF) according to contraction level, using high-order statistics (HOS) and a recent functional formalism, the core shape modeling (CSM). According to recent studies, based on simulated and/or experimental conditions, the sEMG PDF shape seems to be modified by many factors, such as contraction level, fatigue state, muscle anatomy, the instrumentation used, and also motor control parameters. For sensitivity evaluation against these several sources (physiological, instrumental, and neural control) of variability, a large-scale simulation (25 muscle anatomies, ten parameter configurations, three electrode arrangements) is performed using a recent sEMG-force model and parallel computing, to classify sEMG data from three contraction levels (20, 50, and 80% MVC). A shape clustering algorithm is then launched using five combinations of HOS parameters and the CSM method, and compared to amplitude clustering with classical indicators [average rectified value (ARV) and root mean square (RMS)]. From the results screening, it appears that the CSM method obtains, using the Laplacian electrode arrangement, the highest classification scores after the ARV and RMS approaches, followed by one HOS combination. However, when some critical confounding parameters are changed, these scores decrease. These simulation results demonstrate that shape screening of the sEMG amplitude PDF is a complex task which needs both efficient shape analysis methods and a specific signal recording protocol to be properly used for tracking neural drive and muscle activation strategies with varying force contraction, in complement to classical amplitude estimators. PMID:24961179
Jian, Jhih-Wei; Elumalai, Pavadai; Pitti, Thejkiran; Wu, Chih Yuan; Tsai, Keng-Chang; Chang, Jeng-Yih; Peng, Hung-Pin; Yang, An-Suei
2016-01-01
Predicting ligand binding sites (LBSs) on protein structures, which are obtained either from experimental or computational methods, is a useful first step in functional annotation or structure-based drug design for the protein structures. In this work, the structure-based machine learning algorithm ISMBLab-LIG was developed to predict LBSs on protein surfaces with input attributes derived from the three-dimensional probability density maps of interacting atoms, which were reconstructed on the query protein surfaces and were relatively insensitive to local conformational variations of the tentative ligand binding sites. The prediction accuracy of the ISMBLab-LIG predictors is comparable to that of the best LBS predictors benchmarked on several well-established testing datasets. More importantly, the ISMBLab-LIG algorithm has substantial tolerance to the prediction uncertainties of computationally derived protein structure models. As such, the method is particularly useful for predicting LBSs not only on experimental protein structures without known LBS templates in the database but also on computationally predicted model protein structures with structural uncertainties in the tentative ligand binding sites. PMID:27513851
Development and evaluation of probability density functions for a set of human exposure factors
Maddalena, R.L.; McKone, T.E.; Bodnar, A.; Jacobson, J.
1999-06-01
The purpose of this report is to describe efforts carried out during 1998 and 1999 at the Lawrence Berkeley National Laboratory to assist the U.S. EPA in developing, and ranking the robustness of, a set of default probability distributions for exposure assessment factors. Among the current needs of the exposure-assessment community is data for linking exposure, dose, and health information in ways that improve environmental surveillance, improve predictive models, and enhance risk assessment and risk management (NAS, 1994). The U.S. Environmental Protection Agency (EPA) Office of Emergency and Remedial Response (OERR) plays a lead role in developing national guidance and planning future activities that support the EPA Superfund Program. OERR is in the process of updating its 1989 Risk Assessment Guidance for Superfund (RAGS) as part of the EPA Superfund reform activities. Volume III of RAGS, when completed in 1999, will provide guidance for conducting probabilistic risk assessments. This revised document will contain technical information including probability density functions (PDFs) and the methods used to develop and evaluate these PDFs. The PDFs provided in this EPA document are limited to those relating to exposure factors.
NASA Technical Reports Server (NTRS)
Smith, N. S. A.; Frolov, S. M.; Bowman, C. T.
1996-01-01
Two types of mixing sub-models are evaluated in connection with a joint-scalar probability density function method for turbulent nonpremixed combustion. Model calculations are made and compared to simulation results for homogeneously distributed methane-air reaction zones mixing and reacting in decaying turbulence within a two-dimensional enclosed domain. The comparison is arranged to ensure that both the simulation and model calculations (a) make use of exactly the same chemical mechanism, (b) do not involve non-unity Lewis number transport of species, and (c) are free from radiation loss. The modified Curl mixing sub-model was found to provide superior predictive accuracy over the simple relaxation-to-mean sub-model in the case studied. Accuracy to within 10-20% was found for global means of major species and temperature; however, nitric oxide prediction accuracy was lower and highly dependent on the choice of mixing sub-model. Both mixing sub-models were found to produce non-physical mixing behavior for mixture fractions removed from the immediate reaction zone. A suggestion for a further modified Curl mixing sub-model is made in connection with earlier work done in the field.
Probability density functions of the average and difference intensities of Friedel opposites.
Shmueli, U; Flack, H D
2010-11-01
Trigonometric series for the average (A) and difference (D) intensities of Friedel opposites were carefully rederived and were normalized to minimize their dependence on sin θ/λ. Probability density functions (hereafter p.d.f.s) of these series were then derived by the Fourier method [Shmueli, Weiss, Kiefer & Wilson (1984). Acta Cryst. A40, 651-660] and their expressions, which admit any chemical composition of the unit-cell contents, were obtained for the space group P1. Histograms of A and D were then calculated for an assumed random-structure model and for 3135 Friedel pairs of a published solved crystal structure, and were compared with the p.d.f.s after the latter were scaled up to the histograms. Good agreement was obtained for the random-structure model and a qualitative one for the published solved structure. The results indicate that the residual discrepancy is mainly due to the presumed statistical independence of the p.d.f.'s characteristic function on the contributions of the interatomic vectors. PMID:20962376
Probability densities for quantum-mechanical collision resonances in reactive scattering
NASA Astrophysics Data System (ADS)
Thompson, Todd C.; Truhlar, Donald G.
1983-10-01
We present contour maps of the probability density |ψ|² for reactive compound-state resonances in two collinear reactions: H + FH → HF + H on a model low-barrier surface and H + H₂ → H₂ + H on the Porter-Karplus surface no. 2. The maps clearly show the Fermi-resonance schizoid character of the compound states.
Dynamic Graphics in Excel for Teaching Statistics: Understanding the Probability Density Function
ERIC Educational Resources Information Center
Coll-Serrano, Vicente; Blasco-Blasco, Olga; Alvarez-Jareno, Jose A.
2011-01-01
In this article, we show a dynamic graphic in Excel that is used to introduce an important concept in our subject, Statistics I: the probability density function. This interactive graphic seeks to facilitate conceptual understanding of the main aspects analysed by the learners.
2013-01-01
This paper provides an analytical derivation of the probability density function of signal-to-interference-plus-noise ratio in the scenario where mobile stations interfere with each other. This analysis considers cochannel interference and adjacent channel interference. This could also remove the need for Monte Carlo simulations when evaluating the interference effect between mobile stations. Numerical verification shows that the analytical result agrees well with a Monte Carlo simulation. Also, we applied analytical methods for evaluating the interference effect between mobile stations using adjacent frequency bands. The analytical derivation of the probability density function can be used to provide the technical criteria for sharing a frequency band. PMID:24453792
NASA Astrophysics Data System (ADS)
Coclite, A.; Pascazio, G.; De Palma, P.; Cutrone, L.
2016-07-01
Flamelet-Progress-Variable (FPV) combustion models allow the evaluation of all thermochemical quantities in a reacting flow by computing only the mixture fraction Z and a progress variable C. When using such a method to predict turbulent combustion in conjunction with a turbulence model, a probability density function (PDF) is required to evaluate statistical averages (e.g., Favre averages) of chemical quantities. The choice of the PDF is a compromise between computational costs and accuracy level. The aim of this paper is to investigate the influence of the PDF choice and its modeling aspects on the prediction of turbulent combustion. Three different models are considered: the standard one, based on the choice of a β-distribution for Z and a Dirac-distribution for C; a model employing a β-distribution for both Z and C; and a third model obtained using a β-distribution for Z and the statistically most likely distribution (SMLD) for C. The standard model, although widely used, does not take into account the interaction between turbulence and chemical kinetics, nor the dependence of the progress variable not only on its mean but also on its variance. The SMLD approach establishes a systematic framework to incorporate information from an arbitrary number of moments, thus providing an improvement over conventionally employed presumed PDF closure models. The rationale behind the choice of the three PDFs is described in some detail, and the prediction capability of the corresponding models is tested against well-known test cases, namely the Sandia flames and H2-air supersonic combustion.
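To make the presumed-PDF averaging step concrete, the following sketch computes the mean of a tabulated chemical quantity under a β-distribution for Z whose parameters are matched to a given mean and variance (Python with SciPy; the lookup function and the moment values are placeholders, not data from the paper):

    import numpy as np
    from scipy.stats import beta

    def presumed_beta_average(f, z_mean, z_var):
        # Standard moment matching; requires 0 < z_var < z_mean * (1 - z_mean).
        g = z_mean * (1.0 - z_mean) / z_var - 1.0
        a, b = z_mean * g, (1.0 - z_mean) * g
        z = np.linspace(1e-6, 1.0 - 1e-6, 2001)
        w = beta.pdf(z, a, b)
        w /= w.sum()                      # discrete quadrature weights
        return np.sum(f(z) * w)

    # Placeholder flamelet lookup, e.g. a temperature-like table T(Z)
    T = lambda z: 300.0 + 1500.0 * np.exp(-((z - 0.3) / 0.1) ** 2)
    print(presumed_beta_average(T, z_mean=0.3, z_var=0.01))

The same machinery with a Dirac distribution for C amounts to evaluating the table at the mean progress variable, which is the simplification the standard model makes.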
NASA Technical Reports Server (NTRS)
Saha, Bhaskar (Inventor); Goebel, Kai F. (Inventor)
2012-01-01
This invention develops a mathematical model to describe battery behavior during individual discharge cycles as well as over its cycle life. The basis for the form of the model has been linked to the internal processes of the battery and validated using experimental data. Effects of temperature and load current have also been incorporated into the model. Subsequently, the model has been used in a Particle Filtering framework to make predictions of remaining useful life for individual discharge cycles as well as for cycle life. The prediction performance was found to be satisfactory as measured by performance metrics customized for prognostics for a sample case. The work presented here provides initial steps towards a comprehensive health management solution for energy storage devices.
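As a loose illustration of the particle-filtering idea described here (not the patented battery model itself), the sketch below tracks the rate parameter of an assumed exponential capacity-fade curve and extrapolates to an assumed end-of-life threshold:

    import numpy as np

    rng = np.random.default_rng(0)
    cycles = np.arange(100)
    capacity = np.exp(-0.02 * cycles) + 0.01 * rng.standard_normal(100)  # synthetic data

    n = 500
    k = rng.uniform(0.0, 0.1, n)                  # particles for the fade rate
    w = np.full(n, 1.0 / n)
    for t, y in zip(cycles, capacity):
        k += 5e-4 * rng.standard_normal(n)        # assumed process noise
        w *= np.exp(-0.5 * ((y - np.exp(-k * t)) / 0.01) ** 2)  # Gaussian likelihood
        w /= w.sum()
        if 1.0 / np.sum(w ** 2) < n / 2:          # resample on low effective size
            idx = rng.choice(n, size=n, p=w)
            k, w = k[idx], np.full(n, 1.0 / n)

    # Remaining life: cycles until capacity drops below 70% (assumed threshold)
    print("end-of-life cycle:", np.log(1.0 / 0.7) / np.average(k, weights=w))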
The quiet Sun magnetic field observed with ZIMPOL on THEMIS. I. The probability density function
NASA Astrophysics Data System (ADS)
Bommier, V.; Martínez González, M.; Bianda, M.; Frisch, H.; Asensio Ramos, A.; Gelly, B.; Landi Degl'Innocenti, E.
2009-11-01
Context: The quiet Sun magnetic field probability density function (PDF) remains poorly known. Modeling this field also introduces a magnetic filling factor that is also poorly known. With these two quantities, PDF and filling factor, the statistical description of the quiet Sun magnetic field is complex and needs to be clarified. Aims: In the present paper, we propose a procedure that combines direct determinations and inversion results to derive the magnetic field vector and filling factor, and their PDFs. Methods: We used spectro-polarimetric observations taken with the ZIMPOL polarimeter mounted on the THEMIS telescope. The target was a quiet region at disk center. We analyzed the data by means of the UNNOFIT inversion code, with which we inferred the distribution of the mean magnetic field αB, α being the magnetic filling factor. The distribution of α was derived by an independent method, directly from the spectro-polarimetric data. The magnetic field PDF p(B) could then be inferred. By introducing a joint PDF for the filling factor and the magnetic field strength, we have clarified the definition of the PDF of the quiet Sun magnetic field when the latter is assumed not to be volume-filling. Results: The most frequent local average magnetic field strength is found to be 13 G. We find that the magnetic filling factor is related to the magnetic field strength by the simple law α = B1/B with B1 = 15 G. This result is compatible with the Hanle weak-field determinations, as well as with the stronger field determinations from the Zeeman effect (kG fields filling 1-2% of space). From linear fits, we obtain the analytical dependence of the magnetic field PDF. Our analysis has also revealed that the magnetic field in the quiet Sun is isotropically distributed in direction. Conclusions: We conclude that the quiet Sun is a complex medium where magnetic fields having different field strengths and filling factors coexist. Further observations with a better
PDV Uncertainty Estimation & Methods Comparison
Machorro, E.
2011-11-01
Several methods are presented for estimating the rapidly changing instantaneous frequency of a time varying signal that is contaminated by measurement noise. Useful a posteriori error estimates for several methods are verified numerically through Monte Carlo simulation. However, given the sampling rates of modern digitizers, sub-nanosecond variations in velocity are shown to be reliably measurable in most (but not all) cases. Results support the hypothesis that in many PDV regimes of interest, sub-nanosecond resolution can be achieved.
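A common baseline for this kind of instantaneous-frequency estimate is short-time Fourier analysis with spectral peak tracking; a minimal sketch on a synthetic chirp follows (Python with SciPy; the sample rate, window length, and signal parameters are placeholders, not those of the report):

    import numpy as np
    from scipy.signal import spectrogram

    fs = 1e9                                        # assumed 1 GS/s digitizer
    t = np.arange(0, 1e-5, 1.0 / fs)
    x = np.cos(2 * np.pi * (1e6 + 5e10 * t) * t)    # linear-chirp test signal
    x += 0.1 * np.random.default_rng(0).standard_normal(t.size)

    f, ts, Sxx = spectrogram(x, fs=fs, nperseg=1024, noverlap=768)
    f_inst = f[np.argmax(Sxx, axis=0)]              # peak frequency per time slice
    print(f_inst[:5])

In photonic Doppler velocimetry the velocity then follows from the beat frequency as v = λf/2, with λ the laser wavelength; the a posteriori error estimates discussed in the report quantify how far such peak tracking can be trusted.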
A Projection and Density Estimation Method for Knowledge Discovery
Stanski, Adam; Hellwich, Olaf
2012-01-01
A key ingredient of modern data analysis is probability density estimation. However, it is well known that the curse of dimensionality prevents a proper estimation of densities in high dimensions. The problem is typically circumvented by using a fixed set of assumptions about the data, e.g., by assuming partial independence of features, data on a manifold, or a customized kernel. These fixed assumptions limit the applicability of a method. In this paper we propose a framework that uses a flexible set of assumptions instead. It allows a model to be tailored to various problems by means of 1d-decompositions. The approach achieves a fast runtime and is not limited by the curse of dimensionality, as all estimations are performed in 1d-space. The wide range of applications is demonstrated on two very different real-world examples. The first is a data mining software that allows the fully automatic discovery of patterns. The software is publicly available for evaluation. As a second example, an image segmentation method is realized. It achieves state-of-the-art performance on a benchmark dataset although it uses only a fraction of the training data and very simple features. PMID:23049675
Akhmediev, N; Soto-Crespo, J M; Devine, N
2016-08-01
Turbulence in integrable systems exhibits a noticeable scientific advantage: it can be expressed in terms of the nonlinear modes of these systems. Whether the majority of the excitations in the system are breathers or solitons defines the properties of the turbulent state. In the two extreme cases we can call such states "breather turbulence" or "soliton turbulence." The number of rogue waves, the probability density functions of the chaotic wave fields, and their physical spectra are all specific for each of these two situations. Understanding these extreme cases also helps in studies of mixed turbulent states when the wave field contains both solitons and breathers, thus revealing intermediate characteristics. PMID:27627303
Eulerian Mapping Closure Approach for Probability Density Function of Concentration in Shear Flows
NASA Technical Reports Server (NTRS)
He, Guowei; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
The Eulerian mapping closure approach is developed for uncertainty propagation in computational fluid mechanics. The approach is used to study the Probability Density Function (PDF) for the concentration of species advected by a random shear flow. An analytical argument shows that fluctuation of the concentration field at one point in space is non-Gaussian and exhibits stretched exponential form. An Eulerian mapping approach provides an appropriate approximation to both convection and diffusion terms and leads to a closed mapping equation. The results obtained describe the evolution of the initial Gaussian field, which is in agreement with direct numerical simulations.
Mkrtchyan, A. R.; Hayrapetyan, A. G.; Khachatryan, B. V.; Petrosyan, R. G.; Avakyan, R. M.
2009-08-15
A fourth-order linear differential equation is obtained for the probability density in the case of a non-Hermitian Hamiltonian (quasistationary states, i.e., complex energies). A third-order nonlinear differential equation for the square of the modulus of the order parameter and for the phase is obtained by making use of the Ginzburg-Landau equations. Three integrals of 'motion' are found in the absence of an external magnetic field and two integrals are found in the presence of an external magnetic field. An analysis of these integrals is conducted. New analytical solutions are obtained.
Properties of the probability density function of the non-central chi-squared distribution
NASA Astrophysics Data System (ADS)
András, Szilárd; Baricz, Árpád
2008-10-01
In this paper we consider the probability density function (pdf) of a non-central χ² distribution with an arbitrary number of degrees of freedom. We prove that this function can be represented as a finite sum and we deduce a partial derivative formula. Moreover, we show that the pdf is log-concave when the number of degrees of freedom is greater than or equal to 2. At the end of this paper we present some Turán-type inequalities for this function, and an elegant application of the monotone form of l'Hospital's rule in probability theory is given.
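For reference, the density in question has the standard closed form (k degrees of freedom, non-centrality λ, and I_ν the modified Bessel function of the first kind), here in LaTeX notation:

    f(x; k, \lambda) = \frac{1}{2} e^{-(x+\lambda)/2}
        \left( \frac{x}{\lambda} \right)^{k/4 - 1/2}
        I_{k/2 - 1}\!\left( \sqrt{\lambda x} \right), \qquad x > 0.

Expanding I_{k/2-1} in its power series gives the familiar Poisson-weighted mixture of central χ² densities, which is the representation such finite-sum and log-concavity results typically start from.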
NASA Astrophysics Data System (ADS)
Kitayabu, Toru; Hagiwara, Mao; Ishikawa, Hiroyasu; Shirai, Hiroshi
A novel delta-sigma modulator is proposed that employs a non-uniform quantizer whose spacing is adjusted by reference to the statistical properties of the input signal. The proposed delta-sigma modulator has less quantization noise than one that uses a uniform quantizer with the same number of output values. With respect to the quantizer on its own, Lloyd proposed a non-uniform quantizer that is best for minimizing the average quantization noise power. The applicable condition of the method is that the statistical properties of the input signal, the probability density, are given. However, the procedure cannot be directly applied to the quantizer in the delta-sigma modulator because it jeopardizes the modulator's stability. In this paper, a procedure is proposed that determines the spacing of the quantizer while avoiding instability. Simulation results show that the proposed method reduces quantization noise by up to 3.8 dB and 2.8 dB for input signals having a PAPR of 16 dB and 12 dB, respectively, compared to a modulator employing a uniform quantizer. Two alternative types of probability density function (PDF) are used in the proposed method for the calculation of the output values. One is the PDF of the input signal to the delta-sigma modulator and the other is an approximated PDF of the input signal to the quantizer inside the delta-sigma modulator. Both approaches are evaluated to find that the latter gives lower quantization noise.
NASA Astrophysics Data System (ADS)
Kim, Kyu Rang; Kim, Mijin; Choe, Ho-Seong; Han, Mae Ja; Lee, Hye-Rim; Oh, Jae-Won; Kim, Baek-Jo
2016-07-01
Pollen is an important cause of respiratory allergic reactions. As individual sanitation has improved, allergy risk has increased, and this trend is expected to continue due to climate change. Atmospheric pollen concentration is highly influenced by weather conditions. Regression analysis and modeling of the relationships between airborne pollen concentrations and weather conditions were performed to analyze and forecast pollen conditions. Traditionally, daily pollen concentration has been estimated using regression models that describe the relationships between observed pollen concentrations and weather conditions. These models were able to forecast daily concentrations at the sites of observation, but lacked broader spatial applicability beyond those sites. To overcome this limitation, an integrated modeling scheme was developed that is designed to represent the underlying processes of pollen production and distribution. A maximum potential for airborne pollen is first determined using the Weibull probability density function. Then, daily pollen concentration is estimated using multiple regression models. Daily risk grade levels are determined based on the risk criteria used in Korea. The mean percentages of agreement between the observed and estimated levels were 81.4-88.2% and 92.5-98.5% for oak and Japanese hop pollens, respectively. The new models estimated daily pollen risk more accurately than the original statistical models because of the newly integrated biological response curves. They overestimated seasonal mean concentrations, however, and did not simulate all of the peak concentrations. This issue would be resolved by adding more variables that affect the prevalence and internal maturity of pollens.
Ensemble Averaged Probability Density Function (APDF) for Compressible Turbulent Reacting Flows
NASA Technical Reports Server (NTRS)
Shih, Tsan-Hsing; Liu, Nan-Suey
2012-01-01
In this paper, we present a concept of the averaged probability density function (APDF) for studying compressible turbulent reacting flows. The APDF is defined as an ensemble average of the fine grained probability density function (FG-PDF) with a mass density weighting. It can be used to exactly deduce the mass density weighted, ensemble averaged turbulent mean variables. The transport equation for APDF can be derived in two ways. One is the traditional way that starts from the transport equation of FG-PDF, in which the compressible Navier-Stokes equations are embedded. The resulting transport equation of APDF is then in a traditional form that contains conditional means of all terms from the right hand side of the Navier-Stokes equations except for the chemical reaction term. These conditional means are new unknown quantities that need to be modeled. Another way of deriving the transport equation of APDF is to start directly from the ensemble averaged Navier-Stokes equations. The resulting transport equation of APDF derived from this approach appears in a closed form without any need for additional modeling. The methodology of ensemble averaging presented in this paper can be extended to other averaging procedures: for example, the Reynolds time averaging for statistically steady flow and the Reynolds spatial averaging for statistically homogeneous flow. It can also be extended to a time or spatial filtering procedure to construct the filtered density function (FDF) for the large eddy simulation (LES) of compressible turbulent reacting flows.
Tanaka, Taku; Ciffroy, Philippe; Stenberg, Kristofer; Capri, Ettore
2010-11-01
In the framework of environmental multimedia modeling studies dedicated to environmental and health risk assessments of chemicals, the bioconcentration factor (BCF) is a commonly used parameter, especially for fish. For neutral lipophilic substances, it is assumed that BCF is independent of exposure levels of the substances. However, for metals some studies have found an inverse relationship between BCF values and aquatic exposure concentrations for various aquatic species and metals, and also high variability in BCF data. To deal with the factors determining BCF for metals, we conducted regression analyses to evaluate the inverse relationships and introduce the concept of a probability density function (PDF) for Cd, Cu, Zn, Pb, and As. In the present study, for building the regression model and deriving the PDF of fish BCF, two statistical approaches are applied: ordinary regression analysis to estimate a regression model that does not consider the variation in data across different fish family groups; and hierarchical Bayesian regression analysis to estimate fish group-specific regression models. The results show that the BCF ranges and PDFs estimated for metals by both statistical approaches have less uncertainty than the variation of the collected BCF data (the uncertainty is reduced by 9%-61%), and thus such PDFs proved to be useful for obtaining accurate model predictions in environmental and health risk assessment concerning metals. PMID:20886641
NASA Astrophysics Data System (ADS)
Nakano, Shinya
2013-04-01
In ensemble-based sequential data assimilation, the probability density function (PDF) at each time step is represented by ensemble members. These ensemble members are usually assumed to be Monte Carlo samples drawn from the PDF, and the probability density is associated with the concentration of the ensemble members. On the basis of the Monte Carlo approximation, the forecast ensemble, which is obtained by applying the dynamical model to each ensemble member, provides an approximation of the forecast PDF on the basis of the Chapman-Kolmogorov integral. In practical cases, however, the ensemble size is limited by available computational resources, and it is typically much less than the system dimension. In such situations, the Monte Carlo approximation would not work well. When the ensemble size is less than the system dimension, the ensemble forms a simplex in a subspace. The simplex cannot represent the third- or higher-order moments of the PDF; it can represent only the Gaussian features of the PDF. As noted by Wang et al. (2004), the forecast ensemble obtained by applying the dynamical model to each member of the simplex ensemble provides an approximation of the mean and covariance of the forecast PDF in which the Taylor expansion of the dynamical model up to second order is considered, except that uncertainties which cannot be represented by the ensemble members are ignored. Since the third- and higher-order nonlinearity is discarded, the forecast ensemble would introduce some bias into the forecast. Using a small nonlinear model, the Lorenz 63 model, we also performed state-estimation experiments with both the simplex representation and the Monte Carlo representation, which correspond to the limited-size ensemble case and the large-size ensemble case, respectively. If we use the simplex representation, it is found that the estimates tend to have some bias which is likely to be caused by the nonlinearity of the system rather
Methods for Cloud Cover Estimation
NASA Technical Reports Server (NTRS)
Glackin, D. L.; Huning, J. R.; Smith, J. H.; Logan, T. L.
1984-01-01
Several methods for cloud cover estimation are described that are relevant to assessing the performance of a ground-based network of solar observatories. The methods rely on ground and satellite data sources and provide meteorological or climatological information. One means of acquiring long-term observations of solar oscillations is the establishment of a ground-based network of solar observatories. Criteria for station site selection are: gross cloudiness, accurate transparency information, and seeing. Alternative methods for computing the network duty cycle are discussed. The duty cycle, or alternatively a time history of solar visibility from the network, can then be input to a model to determine the effect of duty cycle on derived solar seismology parameters. Cloudiness from space is studied to examine various means by which the duty cycle might be computed. Cloudiness, and to some extent transparency, can potentially be estimated from satellite data.
NASA Technical Reports Server (NTRS)
Chadwick, C.
1984-01-01
This paper describes the development and use of an algorithm to compute approximate statistics of the magnitude of a single random trajectory correction maneuver (TCM) Delta v vector. The TCM Delta v vector is modeled as a three-component Cartesian vector each of whose components is a random variable having a normal (Gaussian) distribution with zero mean and possibly unequal standard deviations. The algorithm uses these standard deviations as input to produce approximations to (1) the mean and standard deviation of the magnitude of Delta v, (2) points of the probability density function of the magnitude of Delta v, and (3) points of the cumulative and inverse cumulative distribution functions of Delta v. The approximations are based on Monte Carlo techniques developed in a previous paper by the author and extended here. The algorithm described is expected to be useful in both pre-flight planning and in-flight analysis of maneuver propellant requirements for space missions.
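A minimal sketch of the Monte Carlo side of such an algorithm (Python with NumPy; the per-axis standard deviations are placeholders):

    import numpy as np

    rng = np.random.default_rng(0)
    sigma = np.array([1.0, 2.0, 0.5])               # assumed per-axis std devs, m/s
    dv = rng.standard_normal((100_000, 3)) * sigma  # zero-mean Gaussian components
    mag = np.linalg.norm(dv, axis=1)                # |Delta v| samples

    print("mean |dv|:", mag.mean())
    print("std  |dv|:", mag.std())
    print("99th percentile:", np.percentile(mag, 99))  # e.g., for propellant sizing

When the three standard deviations are equal, this magnitude follows a scaled Maxwell distribution; the unequal-sigma case treated by the paper has no comparably simple closed form, which is what motivates Monte Carlo approximation.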
Weatherbee, Andrew; Sugita, Mitsuro; Bizheva, Kostadinka; Popov, Ivan; Vitkin, Alex
2016-06-15
The distribution of backscattered intensities as described by the probability density function (PDF) of tissue-scattered light contains information that may be useful for tissue assessment and diagnosis, including characterization of its pathology. In this Letter, we examine the PDF description of the light scattering statistics in a well characterized tissue-like particulate medium using optical coherence tomography (OCT). It is shown that for low scatterer density, the governing statistics depart considerably from a Gaussian description and follow the K distribution for both OCT amplitude and intensity. The PDF formalism is shown to be independent of the scatterer flow conditions; this is expected from theory, and suggests robustness and motion independence of the OCT amplitude (and OCT intensity) PDF metrics in the context of potential biomedical applications. PMID:27304274
Pharmacokinetic parameter estimations by minimum relative entropy method.
Amisaki, T; Eguchi, S
1995-10-01
For estimating pharmacokinetic parameters, we introduce the minimum relative entropy (MRE) method and compare its performance with least squares methods. There are several variants of least squares, such as ordinary least squares (OLS), weighted least squares, and iteratively reweighted least squares. In addition to these traditional methods, even extended least squares (ELS), a relatively new approach to nonlinear regression analysis, can be regarded as a variant of least squares. These methods differ from each other in their manner of handling weights. It has been recognized that least squares methods with an inadequate weighting scheme may produce misleading results (the "choice of weights" problem). Although least squares with uniform weights, i.e., OLS, is rarely used in pharmacokinetic analysis, it embodies the principle of least squares. The objective function of OLS can be regarded as a distance between observed and theoretical pharmacokinetic values in the Euclidean space R^N, where N is the number of observations. Thus OLS produces its estimates by minimizing the Euclidean distance. On the other hand, MRE works by minimizing the relative entropy, which expresses the discrepancy between two probability densities. Because pharmacokinetic functions are not density functions in general, we use a particular form of the relative entropy whose domain is extended to the space of all positive functions. MRE never assumes any distribution of errors involved in observations. Thus, it can be a possible solution to the choice of weights problem. Moreover, since the mathematical form of the relative entropy, i.e., an expectation of the log-ratio of two probability density functions, is different from that of a usual Euclidean distance, the behavior of MRE may differ from those of least squares methods. To clarify the behavior of MRE, we have compared the performance of MRE with those of ELS and OLS by carrying out an intensive simulation study, where four pharmaco
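The abstract does not spell out the "particular form of the relative entropy"; a standard extension of this kind to unnormalized positive functions (sometimes called the I-divergence) adds correction terms so that f and g need not integrate to one, in LaTeX notation:

    D(f \,\|\, g) = \int \left[ f(x) \log \frac{f(x)}{g(x)} - f(x) + g(x) \right] dx \;\ge\; 0,

which reduces to the ordinary Kullback-Leibler divergence when both f and g are probability densities. Minimizing such a divergence between observed and model pharmacokinetic curves requires no assumption about the error distribution, which is the property the authors exploit.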
Translating CFC-based piston ages into probability density functions of ground-water age in karst
Long, A.J.; Putnam, L.D.
2006-01-01
Temporal age distributions are equivalent to probability density functions (PDFs) of transit time. The type and shape of a PDF provides important information related to ground-water mixing at the well or spring and the complex nature of flow networks in karst aquifers. Chlorofluorocarbon (CFC) concentrations measured for samples from 12 locations in the karstic Madison aquifer were used to evaluate the suitability of various PDF types for this aquifer. Parameters of PDFs could not be estimated within acceptable confidence intervals for any of the individual sites. Therefore, metrics derived from CFC-based apparent ages were used to evaluate results of PDF modeling in a more general approach. The ranges of these metrics were established as criteria against which families of PDFs could be evaluated for their applicability to different parts of the aquifer. Seven PDF types, including five unimodal and two bimodal models, were evaluated. Model results indicate that unimodal models may be applicable to areas close to conduits that have younger piston (i.e., apparent) ages and that bimodal models probably are applicable to areas farther from conduits that have older piston ages. The two components of a bimodal PDF are interpreted as representing conduit and diffuse flow, and transit times of as much as two decades may separate these PDF components. Areas near conduits may be dominated by conduit flow, whereas areas farther from conduits having bimodal distributions probably have good hydraulic connection to both diffuse and conduit flow. © 2006 Elsevier B.V. All rights reserved.
Models for the probability densities of the turbulent plasma flux in magnetized plasmas
NASA Astrophysics Data System (ADS)
Bergsaker, A. S.; Fredriksen, Å; Pécseli, H. L.; Trulsen, J. K.
2015-10-01
Observations of turbulent transport in magnetized plasmas indicate that plasma losses can be due to coherent structures or bursts of plasma rather than a classical random walk or diffusion process. A model for synthetic data based on coherent plasma flux events is proposed, where all basic properties can be obtained analytically in terms of a few control parameters. One basic parameter in the present case is the density of burst events in a long time-record, together with parameters in a model of the individual pulse shapes and the statistical distribution of these parameters. The model and its extensions give the probability density of the plasma flux. An interesting property of the model is a prediction of a near-parabolic relation between skewness and kurtosis of the statistical flux distribution for a wide range of parameters. The model is generalized by allowing for an additive random noise component. When this noise dominates the signal we can find a transition to standard results for Gaussian random noise. Applications of the model are illustrated by data from the toroidal Blaamann plasma.
On the Evolution of the Density Probability Density Function in Strongly Self-gravitating Systems
NASA Astrophysics Data System (ADS)
Girichidis, Philipp; Konstandin, Lukas; Whitworth, Anthony P.; Klessen, Ralf S.
2014-02-01
The time evolution of the probability density function (PDF) of the mass density is formulated and solved for systems in free-fall using a simple approximate function for the collapse of a sphere. We demonstrate that a pressure-free collapse results in a power-law tail on the high-density side of the PDF. The slope quickly asymptotes to the functional form P_V(ρ) ∝ ρ^(-1.54) for the (volume-weighted) PDF and P_M(ρ) ∝ ρ^(-0.54) for the corresponding mass-weighted distribution. From the simple approximation of the PDF we derive analytic descriptions for mass accretion, finding that dynamically quiet systems with narrow density PDFs lead to retarded star formation and low star formation rates (SFRs). Conversely, strong turbulent motions that broaden the PDF accelerate the collapse causing a bursting mode of star formation. Finally, we compare our theoretical work with observations. The measured SFRs are consistent with our model during the early phases of the collapse. Comparison of observed column density PDFs with those derived from our model suggests that observed star-forming cores are roughly in free-fall.
Korotkova, Olga; Avramov-Zamurovic, Svetlana; Malek-Madani, Reza; Nelson, Charles
2011-10-10
A number of field experiments measuring the fluctuating intensity of a laser beam propagating along horizontal paths in the maritime environment are performed over sub-kilometer distances at the United States Naval Academy. Both above-ground and over-water links are explored. Two different detection schemes, one photographing the beam on a white board, and the other capturing the beam directly using a CCD sensor, gave consistent results. The probability density function (pdf) of the fluctuating intensity is reconstructed with the help of two theoretical models, the Gamma-Gamma and the Gamma-Laguerre, and compared with the intensity's histograms. It is found that the on-ground experimental results are in good agreement with theoretical predictions. The results obtained above the water paths lead to appreciable discrepancies, especially in the case of the Gamma-Gamma model. These discrepancies are attributed to the presence of various scatterers along the path of the beam, such as water droplets, aerosols and other airborne particles. Our paper's main contribution is providing a methodology for computing the pdf of the laser beam intensity in the maritime environment using field measurements. PMID:21997043
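For reference, the Gamma-Gamma model invoked here has the standard form (α and β the effective large- and small-scale scintillation parameters, K_ν the modified Bessel function of the second kind, mean intensity normalized to one), in LaTeX notation:

    p(I) = \frac{2(\alpha\beta)^{(\alpha+\beta)/2}}{\Gamma(\alpha)\Gamma(\beta)}\,
           I^{\frac{\alpha+\beta}{2} - 1}
           K_{\alpha-\beta}\!\left( 2\sqrt{\alpha\beta I} \right), \qquad I > 0.

The Gamma-Laguerre alternative instead expands the pdf in a series of generalized Laguerre polynomials about a Gamma weight, with coefficients fixed by the measured moments, which gives it more freedom to follow empirical histograms such as those measured over water.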
Homogeneous clusters over India using probability density function of daily rainfall
NASA Astrophysics Data System (ADS)
Kulkarni, Ashwini
2016-04-01
The Indian landmass has been divided into homogeneous clusters by applying cluster analysis to the probability density function of a century-long time series of daily summer monsoon (June through September) rainfall at 357 grids over India, each of approximately 100 km × 100 km. The analysis gives five clusters over the Indian landmass; only cluster 5 is a contiguous region, and all other clusters are dispersed, which confirms the erratic behavior of daily rainfall over India. The area-averaged seasonal rainfall over cluster 5 has a very strong relationship with Indian summer monsoon rainfall; also, the rainfall variability over this region is modulated by the most important mode of the climate system, i.e., the El Nino Southern Oscillation (ENSO). This cluster could be considered representative of the entire Indian landmass for examining monsoon variability. The two-sample Kolmogorov-Smirnov test supports that the cumulative distribution functions of daily rainfall over cluster 5 and India as a whole do not differ significantly. The clustering algorithm is also applied to two time epochs, 1901-1975 and 1976-2010, to examine possible changes in the clusters in the recent warming period. The clusters are drastically different in the two time periods. They are more dispersed in the recent period, implying an even more erratic distribution of daily rainfall.
NASA Astrophysics Data System (ADS)
Ivanova, I. V.; Dmitriev, D. I.; Sirazetdinov, V. S.
2007-02-01
In this paper we analyze some results of natural and numerical experiments on the probability density of on-axis intensity fluctuations for 1.06 micron and 0.53 micron laser beams, in comparison with theoretical dependences (lognormal, exponential and K-distribution). The beams were propagated in an aviation engine exhaust at various angles between the jet and beam axes. It has been shown that for a beam with a wavelength of 0.53 microns the experimental data can be approximated by exponential and K-distributions, while for radiation with a wavelength of 1.06 microns good conformity to the K-distribution has been observed. Optimum conditions have been chosen for recording images of laser beams distorted by turbulence with CCD cameras. For this purpose the transfer characteristics of several CCD cameras of the same type have been studied at various irradiation modes and registration tunings. It has been shown that the dynamic range of the cameras is used to maximum capacity for image recording when gamma-correction is applied.
NASA Technical Reports Server (NTRS)
Mei, Chuh; Dhainaut, Jean-Michel
2000-01-01
The Monte Carlo simulation method in conjunction with the finite element large deflection modal formulation are used to estimate fatigue life of aircraft panels subjected to stationary Gaussian band-limited white-noise excitations. Ten loading cases varying from 106 dB to 160 dB OASPL with bandwidth 1024 Hz are considered. For each load case, response statistics are obtained from an ensemble of 10 response time histories. The finite element nonlinear modal procedure yields time histories, probability density functions (PDF), power spectral densities and higher statistical moments of the maximum deflection and stress/strain. The method of moments of PSD with Dirlik's approach is employed to estimate the panel fatigue life.
NASA Technical Reports Server (NTRS)
Hsu, Andrew T.
1992-01-01
Turbulent combustion cannot be simulated adequately by conventional moment closure turbulence models. The probability density function (PDF) method offers an attractive alternative: in a PDF model, the chemical source terms are closed and do not require additional models. Because the number of computational operations grows only linearly in the Monte Carlo scheme, it is chosen over finite differencing schemes. A grid-dependent Monte Carlo scheme following J.Y. Chen and W. Kollmann has been studied in the present work. It was found that in order to conserve the mass fractions absolutely, one needs to add a further restriction to the scheme, namely α_j + γ_j = α_(j-1) + γ_(j+1). A new algorithm was devised that satisfies this restriction in the case of pure diffusion or uniform flow problems. Using examples, it is shown that absolute conservation can be achieved. Although for non-uniform flows absolute conservation seems impossible, the present scheme has reduced the error considerably.
The probability density function in molecular gas in the G333 and Vela C molecular clouds
NASA Astrophysics Data System (ADS)
Cunningham, Maria
2015-08-01
The probability density function (PDF) is a simple analytical tool for determining the hierarchical spatial structure of molecular clouds. It has been used frequently in recent years with dust continuum emission, such as that from the Herschel space telescope and ALMA. These dust column density PDFs universally show a log-normal distribution in low column density gas, characteristic of unbound turbulent gas, and a power-law tail at high column densities, indicating the presence of gravitationally bound gas. We have recently conducted a PDF analysis of the molecular gas in the G333 and Vela C giant molecular cloud complexes, using transitions of CO, HCN, HNC, HCO+ and N2H+. The results show that CO and its isotopologues trace mostly the log-normal part of the PDF, while HCN and HCO+ trace both a log-normal part and a power-law part of the distribution. On the other hand, HNC and N2H+ mostly trace only the power-law tail. The difference between the PDFs of HCN and HNC is surprising, as is the similarity between the HNC and N2H+ PDFs. The most likely explanation for the similar distributions of HNC and N2H+ is that N2H+ is known to be enhanced in cool gas below 20 K, where CO is depleted, while the reaction that forms HNC or HCN favours the former at similarly low temperatures. The lack of evidence for a power-law tail in 13CO and C18O, in conjunction with the results for the N2H+ PDF, suggests that depletion of CO in the dense cores of these molecular clouds is significant. In conclusion, the PDF has proved to be a surprisingly useful tool for investigating not only the spatial distribution of molecular gas, but also the wide-scale chemistry of molecular clouds.
NASA Astrophysics Data System (ADS)
Carrasco Kind, Matias; Brunner, Robert J.
2014-07-01
One of the consequences of entering the era of precision cosmology is the widespread adoption of photometric redshift probability density functions (PDFs). Both current and future photometric surveys are expected to obtain images of billions of distinct galaxies. As a result, storing and analysing all of these PDFs will be non-trivial, and even more so if a survey plans to compute and store multiple different PDFs. In this paper we propose the use of a sparse basis representation to fully represent individual photo-z PDFs. By using an orthogonal matching pursuit algorithm and a combination of Gaussian and Voigt basis functions, we demonstrate how our approach is superior to a multi-Gaussian fitting, as we require approximately half the parameters for the same fitting accuracy, with the additional advantage that an entire PDF can be stored by using a 4-byte integer per basis function, and we can achieve better accuracy by increasing the number of bases. By using data from the Canada-France-Hawaii Telescope Lensing Survey, we demonstrate that only 10-20 points per galaxy are sufficient to reconstruct both the individual PDFs and the ensemble redshift distribution, N(z), to an accuracy of 99.9 per cent when compared to the one built using the original PDFs computed with a resolution of δz = 0.01, reducing the required storage of 200 original values by a factor of 10-20. Finally, we demonstrate how this basis representation can be directly extended to a cosmological analysis, thereby increasing computational performance without losing resolution or accuracy.
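A rough sketch of the sparse-fitting step, using a Gaussian dictionary and off-the-shelf orthogonal matching pursuit (Python with scikit-learn; the redshift grid, widths, and target PDF are placeholders, and the paper's dictionary also includes Voigt profiles):

    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    z = np.linspace(0.0, 2.0, 200)                 # grid at resolution δz = 0.01
    centers = np.linspace(0.0, 2.0, 100)
    widths = [0.02, 0.05, 0.1]
    # Dictionary: one Gaussian basis function per (center, width) pair
    D = np.column_stack([np.exp(-0.5 * ((z - c) / w) ** 2)
                         for c in centers for w in widths])

    # Placeholder bimodal photo-z PDF to be compressed
    pdf = (np.exp(-0.5 * ((z - 0.7) / 0.08) ** 2)
           + 0.4 * np.exp(-0.5 * ((z - 1.1) / 0.05) ** 2))

    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=15)  # ~10-20 bases per PDF
    omp.fit(D, pdf)
    print("max reconstruction error:", np.max(np.abs(omp.predict(D) - pdf)))

Storing only the ~15 selected (index, quantized amplitude) pairs rather than 200 grid values is what yields the order-of-magnitude compression quoted in the abstract.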
NASA Astrophysics Data System (ADS)
Diop, C. A.
2009-09-01
In many studies discussing the statistical characterization of the rain rate, most authors have found that the probability density function (PDF) of the rain rate follows a lognormal law. However, a more careful analysis of the PDF of the radar reflectivity Z suggests that it is in fact a mixture of distributions. The purpose of this work is to identify precipitation types that can coexist in a continental thunderstorm from the PDF of the radar reflectivity. The data used come from the NEXRAD S-band radar network, notably the level II database. We compute the PDF from reflectivities ranging from -10 dBZ to 70 dBZ. We find that the total distribution is a mixture of several populations, adjusted by several Gaussian distributions with known parameters: mean, standard deviation and proportion of each one in the mixture. Since it is known that the rainfall is a sum of its parts and is composed of hydrometeors of various sizes, these statistical findings are in accordance with the physical properties of the precipitation. Each component of the mixed distribution is then tentatively attributed to a physical character of the precipitation. The first distribution, with low reflectivities, is assumed to represent the background of small-sized particles. The second component, centred around medium Z, corresponds to stratiform rain; the third population, located at larger Z, is due to heavy rain. Finally, a fourth population is present for hail. *Observatoire Midi-Pyrénées, Laboratoire d'Aérologie, CNRS/Université Paul Sabatier, Toulouse, France **Université des Sciences et Technologies de Lille, UFR de Physique Fondamentale, Laboratoire d'Optique Atmosphérique, Lille, France
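The decomposition described can be reproduced in outline with a standard Gaussian mixture fit; a sketch follows (Python with scikit-learn; the synthetic reflectivity sample and the choice of four components stand in for the NEXRAD level II data and the populations named in the abstract):

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    # Synthetic dBZ sample: background, stratiform, heavy rain, hail-like
    Z = np.concatenate([rng.normal(5, 4, 4000), rng.normal(25, 5, 3000),
                        rng.normal(45, 4, 2000), rng.normal(60, 3, 500)])

    gmm = GaussianMixture(n_components=4, random_state=0).fit(Z.reshape(-1, 1))
    for mu, var, w in zip(gmm.means_.ravel(), gmm.covariances_.ravel(), gmm.weights_):
        print(f"mean {mu:5.1f} dBZ  std {np.sqrt(var):4.1f}  weight {w:.2f}")

Each fitted component reports exactly the three parameters the abstract mentions: mean, standard deviation, and proportion in the mixture.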
Nonparametric estimation of plant density by the distance method
Patil, S.A.; Burnham, K.P.; Kovner, J.L.
1979-01-01
A relation between the plant density and the probability density function of the nearest neighbor distance (squared) from a random point is established under fairly broad conditions. Based upon this relationship, a nonparametric estimator for the plant density is developed and presented in terms of order statistics. Consistency and asymptotic normality of the estimator are discussed. An interval estimator for the density is obtained. The modifications of this estimator and its variance are given when the distribution is truncated. Simulation results are presented for regular, random and aggregated populations to illustrate the nonparametric estimator and its variance. A numerical example from field data is given. Merits and deficiencies of the estimator are discussed with regard to its robustness and variance.
Research on Parameter Estimation Methods for Alpha Stable Noise in a Laser Gyroscope’s Random Error
Wang, Xueyun; Li, Kui; Gao, Pengyu; Meng, Suxia
2015-01-01
Alpha stable noise, determined by four parameters, has been found in the random error of a laser gyroscope. Accurate estimation of the four parameters is the key step in analyzing the properties of alpha stable noise. Three widely used estimation methods—quantile, empirical characteristic function (ECF) and logarithmic moment method—are analyzed and compared by Monte Carlo simulation in this paper. The estimation accuracy and the application conditions of all methods, as well as the causes of poor estimation accuracy, are illustrated. Finally, the highest-precision method, ECF, is applied to 27 groups of experimental data to estimate the parameters of alpha stable noise in a laser gyroscope's random error. The cumulative probability density curve of the experimental data is fitted better by an alpha stable distribution than by a Gaussian distribution, which verifies the existence of alpha stable noise in a laser gyroscope's random error. PMID:26230698
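The core of the ECF idea can be sketched in a few lines: for a symmetric α-stable sample the characteristic function satisfies |φ(t)| = exp(-γ^α |t|^α), so evaluating the empirical characteristic function at two frequencies isolates α (Python with SciPy; the frequencies t1 and t2 are illustrative choices, not tuned values from the paper):

    import numpy as np
    from scipy.stats import levy_stable

    x = levy_stable.rvs(alpha=1.5, beta=0.0, size=20000, random_state=0)

    def ecf_abs(x, t):
        return np.abs(np.mean(np.exp(1j * t * x)))   # |empirical char. function|

    t1, t2 = 0.2, 1.0
    # log|phi(t)| = -gamma**alpha * |t|**alpha, so the ratio isolates alpha
    alpha_hat = np.log(np.log(ecf_abs(x, t1)) / np.log(ecf_abs(x, t2))) / np.log(t1 / t2)
    print("estimated alpha:", alpha_hat)             # close to 1.5

Full ECF estimators such as Koutrouvelis's refine this with regression over many frequencies and also recover the skewness, scale, and location parameters.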
NASA Technical Reports Server (NTRS)
Raju, M. S.
1998-01-01
The success of any solution methodology used in the study of gas-turbine combustor flows depends a great deal on how well it can model the various complex and rate-controlling processes associated with the spray's turbulent transport, mixing, chemical kinetics, evaporation, and spreading rates, as well as convective and radiative heat transfer and other phenomena. The phenomena to be modeled, which are controlled by these processes, often strongly interact with each other at different times and locations. In particular, turbulence plays an important role in determining the rates of mass and heat transfer, chemical reactions, and evaporation in many practical combustion devices. The influence of turbulence in a diffusion flame manifests itself in several forms, ranging from the so-called wrinkled, or stretched, flamelets regime to the distributed combustion regime, depending upon how turbulence interacts with various flame scales. Conventional turbulence models have difficulty treating highly nonlinear reaction rates. A solution procedure based on the composition joint probability density function (PDF) approach holds the promise of modeling various important combustion phenomena relevant to practical combustion devices (such as extinction, blowoff limits, and emissions predictions) because it can account for nonlinear chemical reaction rates without making approximations. In an attempt to advance the state-of-the-art in multidimensional numerical methods, we at the NASA Lewis Research Center extended our previous work on the PDF method to unstructured grids, parallel computing, and sprays. EUPDF, which was developed by M.S. Raju of Nyma, Inc., was designed to be massively parallel and could easily be coupled with any existing gas-phase and/or spray solvers. EUPDF can use an unstructured mesh with mixed triangular, quadrilateral, and/or tetrahedral elements. The PDF method showed favorable results when applied to several supersonic
Kinetic and dynamic probability-density-function descriptions of disperse turbulent two-phase flows
NASA Astrophysics Data System (ADS)
Minier, Jean-Pierre; Profeta, Christophe
2015-11-01
This article analyzes the status of two classical one-particle probability density function (PDF) descriptions of the dynamics of discrete particles dispersed in turbulent flows. The first PDF formulation considers only the process made up by particle position and velocity Z_p = (x_p, U_p) and is represented by its PDF p(t; y_p, V_p), which is the solution of a kinetic PDF equation obtained through a flux closure based on the Furutsu-Novikov theorem. The second PDF formulation includes fluid variables in the particle state vector, for example, the fluid velocity seen by particles Z_p = (x_p, U_p, U_s), and, consequently, handles an extended PDF p(t; y_p, V_p, V_s), which is the solution of a dynamic PDF equation. For high-Reynolds-number fluid flows, a typical formulation of the latter category relies on a Langevin model for the trajectories of the fluid seen or, conversely, on a Fokker-Planck equation for the extended PDF. In the present work, a new derivation of the kinetic PDF equation is worked out and new physical expressions of the dispersion tensors entering the kinetic PDF equation are obtained by starting from the extended PDF and integrating over the fluid seen. This demonstrates that, under the same assumption of a Gaussian colored noise and irrespective of the specific stochastic model chosen for the fluid seen, the kinetic PDF description is the marginal of a dynamic PDF one. However, a detailed analysis reveals that kinetic PDF models of particle dynamics in turbulent flows described by statistical correlations constitute incomplete stand-alone PDF descriptions and, moreover, that present kinetic-PDF equations are mathematically ill posed. This is shown to be the consequence of the non-Markovian characteristic of the stochastic process retained to describe the system and the use of an external colored noise. Furthermore, developments bring out that well-posed PDF descriptions are essentially due to a proper choice of the variables selected to describe physical systems
NASA Astrophysics Data System (ADS)
Liang, Shiuan-Ni; Lan, Boon Leong
2015-11-01
The Newtonian and general-relativistic position and velocity probability densities, which are calculated from the same initial Gaussian ensemble of trajectories using the same system parameters, are compared for a low-speed weak-gravity bouncing ball system. The Newtonian approximation to the general-relativistic probability densities does not always break down rapidly if the trajectories in the ensembles are chaotic -- the rapid breakdown occurs only if the initial position and velocity standard deviations are sufficiently small. This result is in contrast to the previously studied single-trajectory case where the Newtonian approximation to a general-relativistic trajectory will always break down rapidly if the two trajectories are chaotic. Similar rapid breakdown of the Newtonian approximation to the general-relativistic probability densities should also occur for other low-speed weak-gravity chaotic systems since it is due to sensitivity to the small difference between the two dynamical theories at low speed and weak gravity. For the bouncing ball system, the breakdown of the Newtonian approximation is transient because the Newtonian and general-relativistic probability densities eventually converge to invariant densities which are close in agreement.
A method for estimating proportions
NASA Technical Reports Server (NTRS)
Guseman, L. F., Jr.; Marion, B. P.
1975-01-01
A proportion estimation procedure is presented which requires only one set of ground-truth data for determining the error matrix. The error matrix is then used to determine an unbiased estimate. The error matrix is shown to be directly related to the probabilities of misclassification, and becomes more diagonally dominant as the number of passes used increases.
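A minimal sketch of this correction, under the assumption that the error matrix E holds the probabilities P(classified as i | true class j) estimated from the ground-truth set:

    import numpy as np

    # Columns: true class; rows: classified as (estimated from ground truth)
    E = np.array([[0.9, 0.2],
                  [0.1, 0.8]])
    observed = np.array([0.4, 0.6])      # class proportions from the classifier

    true_props = np.linalg.solve(E, observed)   # unbiased corrected proportions
    print(true_props)                           # [0.286, 0.714]

The better conditioned (more diagonally dominant) E becomes, the closer the correction stays to the raw classifier output, which matches the abstract's observation about increasing the number of passes.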
NASA Astrophysics Data System (ADS)
Jian, Y.; Yao, R.; Mulnix, T.; Jin, X.; Carson, R. E.
2015-01-01
Resolution degradation in PET image reconstruction can be caused by inaccurate modeling of the physical factors in the acquisition process. Resolution modeling (RM) is a common technique that takes the resolution-degrading factors into account in the system matrix. Our previous work introduced a probability density function (PDF) method of deriving the resolution kernels from Monte Carlo simulation and parameterizing the LORs to reduce the number of kernels needed for image reconstruction. In addition, LOR-PDF allows different PDFs to be applied to LORs from different crystal layer pairs of the HRRT. In this study, a thorough test was performed with this new model (LOR-PDF) applied to two PET scanners—the HRRT and Focus-220. A more uniform resolution distribution was observed in point source reconstructions by replacing the spatially-invariant kernels with the spatially-variant LOR-PDF. Specifically, from the center to the edge of the radial field of view (FOV) of the HRRT, the measured in-plane FWHMs of point sources in a warm background varied only slightly, from 1.7 mm to 1.9 mm, in LOR-PDF reconstructions. In Minihot and contrast phantom reconstructions, LOR-PDF resulted in up to 9% higher contrast at any given noise level than the image-space resolution model. LOR-PDF also has the advantage of performing crystal-layer-dependent resolution modeling. The contrast improvement from LOR-PDF was verified statistically by replicate reconstructions. In addition, [11C]AFM rats imaged on the HRRT and [11C]PHNO rats imaged on the Focus-220 were utilized to demonstrate the advantage of the new model. Higher contrast between the background and high-uptake regions of only a few millimeters in diameter was observed in LOR-PDF reconstructions than in other methods.
NASA Astrophysics Data System (ADS)
Linde, N.; Vrugt, J. A.
2009-04-01
Geophysical models are increasingly used in hydrological simulations and inversions, where they are typically treated as an artificial data source with known uncorrelated "data errors". The model appraisal problem in classical deterministic linear and non-linear inversion approaches based on linearization is often addressed by calculating model resolution and model covariance matrices. These measures offer only a limited potential to assign a more appropriate "data covariance matrix" for future hydrological applications, simply because the regularization operators used to construct a stable inverse solution bear a strong imprint on such estimates and because the non-linearity of the geophysical inverse problem is not explored. We present a parallelized Markov Chain Monte Carlo (MCMC) scheme to efficiently derive the posterior spatially distributed radar slowness and water content between boreholes given first-arrival traveltimes. This method is called DiffeRential Evolution Adaptive Metropolis (DREAM_ZS) with snooker updater and sampling from past states. Our inverse scheme does not impose any smoothness on the final solution, and uses uniform prior ranges of the parameters. The posterior distribution of radar slowness is converted into spatially distributed soil moisture values using a petrophysical relationship. To benchmark the performance of DREAM_ZS, we first apply our inverse method to a synthetic two-dimensional infiltration experiment using 9421 traveltimes contaminated with Gaussian errors and 80 different model parameters, corresponding to a model discretization of 0.3 m × 0.3 m. After this, the method is applied to field data acquired in the vadose zone during snowmelt. This work demonstrates that fully non-linear stochastic inversion can be applied with few limiting assumptions to a range of common two-dimensional tomographic geophysical problems. The main advantage of DREAM_ZS is that it provides a full view of the posterior distribution of spatially
Smoothing Methods for Estimating Test Score Distributions.
ERIC Educational Resources Information Center
Kolen, Michael J.
1991-01-01
Estimation/smoothing methods that are flexible enough to fit a wide variety of test score distributions are reviewed: kernel method, strong true-score model-based method, and method that uses polynomial log-linear models. Applications of these methods include describing/comparing test score distributions, estimating norms, and estimating…
NASA Astrophysics Data System (ADS)
Pedretti, Daniele; Fernàndez-Garcia, Daniel
2013-09-01
Particle tracking methods to simulate solute transport deal with the issue of having to reconstruct smooth concentrations from a limited number of particles. This is an error-prone process that typically leads to large fluctuations in the determined late-time behavior of breakthrough curves (BTCs). Kernel density estimators (KDE) can be used to automatically reconstruct smooth BTCs from a small number of particles. The kernel approach incorporates the uncertainty associated with subsampling a large population by equipping each particle with a probability density function. Two broad classes of KDE methods can be distinguished depending on the parametrization of this function: global and adaptive methods. This paper shows that each method is likely to estimate a specific portion of the BTCs. Although global methods offer a valid approach to estimating the early-time behavior and peak of BTCs, they exhibit large fluctuations at the tails, where fewer particles exist. In contrast, locally adaptive methods improve tail estimation while oversmoothing both early-time and peak concentrations. Therefore, a new method is proposed that combines the strengths of both KDE approaches. The proposed approach is universal and needs only one parameter (α), which depends slightly on the shape of the BTCs. Results show that, for the tested cases, heavily tailed BTCs are properly reconstructed with α ≈ 0.5.
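A minimal sketch of the locally adaptive (Abramson-type) kernel ingredient the paper builds on, where each particle's bandwidth scales with a pilot density raised to the power −α; the toy lognormal arrival times and the use of a Gaussian pilot are assumptions, and the paper's actual global/adaptive blending rule is not reproduced here.

```python
import numpy as np
from scipy.stats import gaussian_kde

def adaptive_kde(samples, x, alpha=0.5):
    """Sample-point adaptive KDE: local bandwidths shrink where the pilot
    density is high and widen in the sparse tails (sensitivity alpha)."""
    pilot = gaussian_kde(samples)                   # global pilot estimate
    f_pilot = pilot(samples)
    g = np.exp(np.mean(np.log(f_pilot)))            # geometric mean of pilot
    h = pilot.factor * samples.std(ddof=1)          # global bandwidth (Scott)
    h_i = h * (f_pilot / g) ** (-alpha)             # per-particle bandwidths
    # Evaluate the mixture of per-particle Gaussian kernels at points x.
    z = (x[:, None] - samples[None, :]) / h_i[None, :]
    return np.mean(np.exp(-0.5 * z**2) / (np.sqrt(2 * np.pi) * h_i), axis=1)

arrival_times = np.random.default_rng(1).lognormal(0.0, 1.0, 500)  # toy BTC sample
grid = np.linspace(0.01, 30, 400)
btc = adaptive_kde(arrival_times, grid)             # smooth breakthrough curve
```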
Radiance and atmosphere propagation-based method for the target range estimation
NASA Astrophysics Data System (ADS)
Cho, Hoonkyung; Chun, Joohwan
2012-06-01
Target range estimation has traditionally been based on radar and active sonar in modern combat systems. However, the performance of such active sensor devices is degraded tremendously by jamming signals from the enemy. This paper proposes a simple method to estimate the range between a target and a sensor. Passive IR sensors measure the infrared (IR) radiance radiating from objects at different wavelengths, and this method is robust against electromagnetic jamming. The measured target radiance at each wavelength depends on the emissive properties of the target material and is attenuated by various factors, in particular the distance between the sensor and the target and the atmospheric environment. MODTRAN is a tool that models atmospheric propagation of electromagnetic radiation. Based on the results from MODTRAN and the measured radiance, the target range is estimated. To statistically analyze the performance of the proposed method, we use maximum likelihood estimation (MLE) and evaluate the Cramer-Rao lower bound (CRLB) via the probability density function of the measured radiance. We also compare the CRLB with the variance of the ML estimate using Monte Carlo simulation.
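The CRLB-versus-MLE comparison can be illustrated with a deliberately simplified measurement model: per-band radiance attenuated as exp(−βR) with additive Gaussian noise, standing in for MODTRAN transmittance tables. All coefficients below are invented for illustration.

```python
import numpy as np

beta = np.array([0.8e-3, 1.2e-3, 2.0e-3])   # per-band extinction coeffs (1/m), assumed
L0 = np.array([5.0, 3.0, 1.5])              # source radiances, assumed
sigma = 0.05                                 # sensor noise std, assumed

def mean_radiance(R):
    # Beer-Lambert stand-in for MODTRAN-derived transmittance.
    return L0 * np.exp(-beta * R)

def crlb(R):
    # Gaussian noise: Var(R_hat) >= sigma^2 / sum_k (d mu_k / dR)^2.
    dmu = -beta * mean_radiance(R)
    return sigma**2 / np.sum(dmu**2)

def mle(meas, grid=np.linspace(100.0, 5000.0, 20000)):
    # Grid-search ML range estimate from one multiband radiance measurement.
    sse = ((meas[None, :] - mean_radiance(grid[:, None]))**2).sum(axis=1)
    return grid[np.argmin(sse)]

rng = np.random.default_rng(2)
R_true = 1500.0
est = [mle(mean_radiance(R_true) + rng.normal(0, sigma, 3)) for _ in range(500)]
print(np.var(est), crlb(R_true))             # Monte Carlo ML variance vs CRLB
```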
NASA Technical Reports Server (NTRS)
1995-01-01
The success of any solution methodology for studying gas-turbine combustor flows depends a great deal on how well it can model various complex, rate-controlling processes associated with turbulent transport, mixing, chemical kinetics, evaporation and spreading rates of the spray, convective and radiative heat transfer, and other phenomena. These phenomena often strongly interact with each other at disparate time and length scales. In particular, turbulence plays an important role in determining the rates of mass and heat transfer, chemical reactions, and evaporation in many practical combustion devices. Turbulence manifests its influence in a diffusion flame in several forms depending on how turbulence interacts with various flame scales. These forms range from the so-called wrinkled, or stretched, flamelets regime, to the distributed combustion regime. Conventional turbulence closure models have difficulty in treating highly nonlinear reaction rates. A solution procedure based on the joint composition probability density function (PDF) approach holds the promise of modeling various important combustion phenomena relevant to practical combustion devices such as extinction, blowoff limits, and emissions predictions because it can handle the nonlinear chemical reaction rates without any approximation. In this approach, mean and turbulence gas-phase velocity fields are determined from a standard turbulence model; the joint composition field of species and enthalpy is determined from the solution of a modeled PDF transport equation; and a Lagrangian-based dilute spray model is used for the liquid-phase representation with appropriate consideration of the exchanges of mass, momentum, and energy between the two phases. The PDF transport equation is solved by a Monte Carlo method, and existing state-of-the-art numerical representations are used to solve the mean gas-phase velocity and turbulence fields together with the liquid-phase equations. The joint composition PDF…
A Hybrid Method in Vegetation Height Estimation Using PolInSAR Images of the BioSAR Campaign
NASA Astrophysics Data System (ADS)
Dehnavi, S.; Maghsoudi, Y.
2015-12-01
Recently, there has been much research on the retrieval of forest height from PolInSAR data. This paper evaluates a hybrid method for vegetation height estimation based on L-band multi-polarized airborne SAR images. The SAR data used in this paper were collected by the airborne E-SAR system. The first objective of this research is to describe each interferometric cross-correlation as a sum of contributions corresponding to single-bounce, double-bounce, and volume scattering processes. An ESPRIT (Estimation of Signal Parameters via Rotational Invariance Techniques) algorithm is then implemented to determine the interferometric phase of each local scatterer (ground and canopy). Secondly, the canopy height is estimated by the phase differencing method, according to the RVOG (Random Volume Over Ground) concept. The applied model-based decomposition method is unrivaled in that it is not limited to a specific type of vegetation, unlike previous decomposition techniques. In fact, the use of a generalized probability density function based on the nth power of a cosine-squared function, characterized by two parameters, makes this method useful for different vegetation types. Experimental results show the efficiency of the approach for vegetation height estimation in the test site.
An at-site flood estimation method in the context of nonstationarity I. A simulation study
NASA Astrophysics Data System (ADS)
Gado, Tamer A.; Nguyen, Van-Thanh-Van
2016-04-01
The stationarity of annual flood peak records is the traditional assumption of flood frequency analysis. In some cases, however, as a result of land-use and/or climate change, this assumption is no longer valid. Therefore, new statistical models are needed to capture dynamically the change of probability density functions over time, in order to obtain reliable flood estimates. In this study, an innovative method for nonstationary flood frequency analysis is presented. The new method is based on detrending the flood series and applying the L-moments along with the GEV distribution to the transformed "stationary" series (hereafter called the LM-NS method). The LM-NS method was assessed through a comparative study with the maximum likelihood (ML) method for the nonstationary GEV model, as well as with the stationary (S) GEV model. The comparative study, based on Monte Carlo simulations, was carried out for three nonstationary GEV models: a linear dependence of the mean on time (GEV1), a quadratic dependence of the mean on time (GEV2), and a linear dependence of both the mean and the log standard deviation on time (GEV11). The simulation results indicated that the LM-NS method performs better than the ML method for most of the cases studied, whereas the stationary method provides the least accurate results. An additional advantage of the LM-NS method is that it avoids the numerical problems (e.g., convergence problems) that may occur with the ML method when estimating parameters from small data samples.
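A minimal sketch of the LM-NS workflow on synthetic data: fit and remove a linear trend in the mean (the GEV1 case), fit a GEV to the detrended series, then re-express a quantile at a design year. scipy's maximum-likelihood fit stands in for the paper's L-moment estimation, and all numbers are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
years = np.arange(1950, 2010)
# Synthetic GEV1-type record: stationary GEV noise plus a linear trend in the mean.
floods = stats.genextreme.rvs(-0.1, loc=100, scale=30, size=years.size,
                              random_state=rng) + 0.8 * (years - years[0])

slope, intercept = np.polyfit(years, floods, 1)      # estimate the linear trend
detrended = floods - slope * (years - years.mean())  # "stationary" series

shape, loc, scale = stats.genextreme.fit(detrended)  # ML stand-in for L-moments
q100_stationary = stats.genextreme.ppf(1 - 1/100, shape, loc=loc, scale=scale)
# Re-apply the trend to express the 100-year quantile at a design year, e.g. 2020.
q100_2020 = q100_stationary + slope * (2020 - years.mean())
print(q100_2020)
```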
Yao, Rutao; Ramachandra, Ranjith M; Mahajan, Neeraj; Rathod, Vinay; Gunasekar, Noel; Panse, Ashish; Ma, Tianyu; Jian, Yiqiang; Yan, Jianhua; Carson, Richard E
2012-11-01
To achieve optimal PET image reconstruction through better system modeling, we developed a system matrix that is based on the probability density function for each line of response (LOR-PDF). The LOR-PDFs are grouped by LOR-to-detector incident angles to form a highly compact system matrix. The system matrix was implemented in the MOLAR list mode reconstruction algorithm for a small animal PET scanner. The impact of LOR-PDF on reconstructed image quality was assessed qualitatively as well as quantitatively in terms of contrast recovery coefficient (CRC) and coefficient of variance (COV), and its performance was compared with a fixed Gaussian (iso-Gaussian) line spread function. The LOR-PDFs of three coincidence signal emitting sources, (1) ideal positron emitter that emits perfect back-to-back γ rays (γγ) in air; (2) fluorine-18 (¹⁸F) nuclide in water; and (3) oxygen-15 (¹⁵O) nuclide in water, were derived, and assessed with simulated and experimental phantom data. The derived LOR-PDFs showed anisotropic and asymmetric characteristics dependent on LOR-detector angle, coincidence emitting source, and the medium, consistent with common PET physical principles. The comparison of the iso-Gaussian function and LOR-PDF showed that: (1) without positron range and acollinearity effects, the LOR-PDF achieved better or similar trade-offs of contrast recovery and noise for objects of 4 mm radius or larger, and this advantage extended to smaller objects (e.g. 2 mm radius sphere, 0.6 mm radius hot-rods) at higher iteration numbers; and (2) with positron range and acollinearity effects, the iso-Gaussian achieved similar or better resolution recovery depending on the significance of positron range effect. We conclude that the 3D LOR-PDF approach is an effective method to generate an accurate and compact system matrix. However, when used directly in expectation-maximization based list-mode iterative reconstruction algorithms such as MOLAR, its superiority is not clear
Robust parameter estimation method for bilinear model
NASA Astrophysics Data System (ADS)
Ismail, Mohd Isfahani; Ali, Hazlina; Yahaya, Sharipah Soaad S.
2015-12-01
This paper proposes a method of parameter estimation for the bilinear model, specifically the BL(1,0,1,1) model, without and with the presence of additive outliers (AO). In this study, the parameters of the BL(1,0,1,1) model are estimated using the nonlinear least squares (LS) method and also through robust approaches. The LS method employs the Newton-Raphson (NR) iterative procedure to estimate the parameters of the bilinear model, but LS estimates can be distorted by the occurrence of outliers. As a solution, this study proposes robust approaches to deal with the problem of outliers, specifically AO, in the BL(1,0,1,1) model. For the robust estimation method, we propose to modify the NR procedure with robust scale estimators. We introduce two robust scale estimators, the normalized median absolute deviation (MADn) and Tn, in the linear autoregressive model AR(1), which are adequate and suitable for the bilinear BL(1,0,1,1) model. We use the estimated parameter value of the AR(1) model as an initial value when estimating the parameter values of the BL(1,0,1,1) model. The performance of the LS and robust estimation methods in estimating the coefficients of the BL(1,0,1,1) model is investigated through a simulation study and assessed in terms of bias. Numerical results show that the robust estimation method performs better than the LS method in estimating the parameters both without and with the presence of AO.
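A short sketch of the robust-scale ingredient: the normalized median absolute deviation (MADn) and a Huber-type downweighting of residuals of the kind that can be plugged into a Newton-Raphson step. The Tn estimator and the bilinear-model details are omitted, and the weighting rule is a generic stand-in rather than the paper's exact modification.

```python
import numpy as np

def madn(x):
    # Normalized median absolute deviation: consistent for the standard
    # deviation under normality (factor 1.4826).
    return 1.4826 * np.median(np.abs(x - np.median(x)))

def huber_weights(resid, scale, c=1.345):
    # Downweight residuals beyond c * (robust scale); a generic robustified
    # step of the kind inserted into the NR iterations.
    u = resid / (c * scale)
    return np.where(np.abs(u) <= 1.0, 1.0, 1.0 / np.abs(u))

rng = np.random.default_rng(4)
resid = rng.normal(0, 1, 200)
resid[:5] += 15.0                      # inject additive outliers
print(np.std(resid), madn(resid))      # the std inflates; MADn barely moves
print(huber_weights(resid, madn(resid))[:8])
```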
Reliability Estimation Methods for Liquid Rocket Engines
NASA Astrophysics Data System (ADS)
Hirata, Kunio; Masuya, Goro; Kamijo, Kenjiro
Reliability estimation using the dispersive, binomial distribution method has traditionally been used to certify the reliability of liquid rocket engines, but its estimates have sometimes disagreed with the failure rates of flight engines. In order to obtain better results, the reliability growth model and the failure distribution method are applied to estimate the reliability of LE-7A engines, which have propelled the first stage of H-2A launch vehicles.
Probability density adjoint for sensitivity analysis of the Mean of Chaos
Blonigan, Patrick J.; Wang, Qiqi
2014-08-01
Sensitivity analysis, especially adjoint-based sensitivity analysis, is a powerful tool for engineering design that allows for the efficient computation of sensitivities with respect to many parameters. However, these methods break down when used to compute sensitivities of long-time-averaged quantities in chaotic dynamical systems. This paper presents a new method for sensitivity analysis of ergodic chaotic dynamical systems, the density adjoint method. The method involves solving the governing equations for the system's invariant measure and its adjoint on the system's attractor manifold rather than in phase space. This new approach is derived for and demonstrated on one-dimensional chaotic maps and the three-dimensional Lorenz system. It is found that the density adjoint computes very finely detailed adjoint distributions and accurate sensitivities, but suffers from large computational costs.
NASA Astrophysics Data System (ADS)
Lopes Cardozo, David; Holdsworth, Peter C. W.
2016-04-01
The magnetization probability density in d = 2 and 3 dimensional Ising models in a slab geometry of volume L∥^(d−1) × L⊥ is computed through Monte Carlo simulation at the critical temperature and zero magnetic field. The finite-size scaling of this distribution and its dependence on the system aspect ratio ρ = L⊥/L∥ and on the boundary conditions are discussed. In the limiting case ρ → 0 of a macroscopically large slab (L∥ ≫ L⊥), the distribution is found to scale as a Gaussian function for all tested system sizes and boundary conditions.
NASA Technical Reports Server (NTRS)
Nitsche, Ludwig C.; Nitsche, Johannes M.; Brenner, Howard
1988-01-01
The sedimentation and diffusion of a nonneutrally buoyant Brownian particle in a vertical fluid-filled cylinder of finite length which is instantaneously inverted at regular intervals are investigated analytically. A one-dimensional convective-diffusive equation is derived to describe the temporal and spatial evolution of the probability density; a periodicity condition is formulated; the applicability of Fredholm theory is established; and the parameter-space regions are determined within which the existence and uniqueness of solutions are guaranteed. Numerical results for sample problems are presented graphically and briefly characterized.
Direct Density Derivative Estimation.
Sasaki, Hiroaki; Noh, Yung-Kyun; Niu, Gang; Sugiyama, Masashi
2016-06-01
Estimating the derivatives of probability density functions is an essential step in statistical data analysis. A naive approach to estimate the derivatives is to first perform density estimation and then compute its derivatives. However, this approach can be unreliable because a good density estimator does not necessarily mean a good density derivative estimator. To cope with this problem, in this letter, we propose a novel method that directly estimates density derivatives without going through density estimation. The proposed method provides computationally efficient estimation for the derivatives of any order on multidimensional data with a hyperparameter tuning method and achieves the optimal parametric convergence rate. We further discuss an extension of the proposed method by applying regularized multitask learning and a general framework for density derivative estimation based on Bregman divergences. Applications of the proposed method to nonparametric Kullback-Leibler divergence approximation and bandwidth matrix selection in kernel density estimation are also explored. PMID:27140943
Probability Density Function for Waves Propagating in a Straight Rough Wall Tunnel
Pao, H
2004-01-28
The radio channel places fundamental limitations on the performance of wireless communication systems in tunnels and caves. The transmission path between the transmitter and receiver can vary from a simple direct line of sight to one that is severely obstructed by rough walls and corners. Unlike wired channels, which are stationary and predictable, radio channels can be extremely random and difficult to analyze. In fact, modeling the radio channel has historically been one of the more challenging parts of any radio system design; this is often done using statistical methods. The mechanisms behind electromagnetic wave propagation are diverse, but can generally be attributed to reflection, diffraction, and scattering. Because of the multiple reflections from rough walls, the electromagnetic waves travel along different paths of varying lengths. The interactions between these waves cause multipath fading at any location, and the strengths of the waves decrease as the distance between the transmitter and receiver increases. As a consequence of the central limit theorem, the received signal is approximately a Gaussian random process. This means that the field propagating in a cave or tunnel is typically a complex-valued Gaussian random process.
On the thresholds, probability densities, and critical exponents of Bak-Sneppen-like models
NASA Astrophysics Data System (ADS)
Garcia, Guilherme J. M.; Dickman, Ronald
2004-10-01
We report a simple method to accurately determine the threshold and the exponent ν of the Bak-Sneppen (BS) model and also investigate the BS universality class. For the random-neighbor version of the BS model, we find the threshold x* = 0.33332(3), in agreement with the exact result x* = 1/3 given by mean-field theory. For the one-dimensional original model, we find x* = 0.6672(2), in good agreement with the results reported in the literature; for the anisotropic BS model we obtain x* = 0.7240(1). We study the finite-size effect x*(L) − x*(L→∞) ∝ L^(−ν), observed in a system with L sites, and find ν = 1.00(1) for the random-neighbor version, ν = 1.40(1) for the original model, and ν = 1.58(1) for the anisotropic case. Finally, we discuss the effect of defining the extremal site as the one which minimizes a general function f(x), instead of simply f(x) = x as in the original updating rule. We emphasize that models with extremal dynamics have singular stationary probability distributions p(x). Our simulations indicate the existence of two symmetry-based universality classes.
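For the random-neighbor variant, the threshold can be illustrated in a few lines: replace the minimum-fitness site and one randomly chosen site each step, then read a low percentile of the stationary fitnesses as a rough threshold estimate. This crude estimator is only a stand-in for the paper's more careful procedure, and the run lengths below are kept modest.

```python
import numpy as np

rng = np.random.default_rng(5)
N, steps = 1000, 500_000
f = rng.uniform(0, 1, N)               # initial random fitnesses
for _ in range(steps):
    i = np.argmin(f)                   # extremal (minimum-fitness) site
    j = rng.integers(N)                # one random neighbor
    f[i] = rng.uniform()
    f[j] = rng.uniform()
# Stationary fitnesses pile up above x*; a low percentile approximates it.
print(np.percentile(f, 1.0))           # roughly 1/3 for large N, long runs
```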
NASA Astrophysics Data System (ADS)
Poom-Medina, José Luis; Álvarez-Borrego, Josué
2016-07-01
Theoretical relationships between the statistical properties of the surface slope and the statistical properties of the image intensity in remotely sensed images are shown, considering a non-Gaussian probability density function of the surface slope. Considering a variable detector line-of-sight angle, and considering ocean waves moving along a single direction with the observer and the sun both in the vertical plane containing this direction, new expressions relating the variance of the image intensity to the variance of the surface slopes are derived using two different glitter functions. In this case, the skewness and kurtosis moments are taken into account. New expressions relating the correlation functions of the image intensities and the surface slopes are also analyzed numerically; for this case, only the skewness moments were considered. More changes in these statistical relationships are observed when the Rect function is used. The skewness and kurtosis values are directly related to the wind velocity over the sea surface.
NASA Technical Reports Server (NTRS)
Simon, M.; Mileant, A.
1986-01-01
The steady-state behavior of a particular type of digital phase-locked loop (DPLL) with an integrate-and-dump circuit following the phase detector is characterized in terms of the probability density function (pdf) of the phase error in the loop. Although the loop is entirely digital from an implementation standpoint, it operates at two extremely different sampling rates. In particular, the combination of a phase detector and an integrate-and-dump circuit operates at a very high rate whereas the loop update rate is very slow by comparison. Because of this dichotomy, the loop can be analyzed by hybrid analog/digital (s/z domain) techniques. The loop is modeled in such a general fashion that previous analyses of the Real-Time Combiner (RTC), Subcarrier Demodulator Assembly (SDA), and Symbol Synchronization Assembly (SSA) fall out as special cases.
Nonparametric estimation of population density for line transect sampling using Fourier series
Crain, B.R.; Burnham, K.P.; Anderson, D.R.; Lake, J.L.
1979-01-01
A nonparametric, robust density estimation method is explored for the analysis of right-angle distances from a transect line to the objects sighted. The method is based on the Fourier series expansion of a probability density function over an interval. With only mild assumptions, a general population density estimator of wide applicability is obtained.
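A sketch of the cosine-series estimator on [0, w] in the standard line-transect form, f(y) = 1/w + Σ_k a_k cos(kπy/w) with a_k = (2/(nw)) Σ_i cos(kπx_i/w); the number of terms m and the toy half-normal distances are illustrative choices, not the paper's data-driven selection.

```python
import numpy as np

def fourier_density(x, w, m=4):
    """Cosine-series density estimate on [0, w] from right-angle sighting
    distances x; m=4 terms is an illustrative truncation."""
    n = len(x)
    a = np.array([2.0 / (n * w) * np.sum(np.cos(k * np.pi * x / w))
                  for k in range(1, m + 1)])
    def f(y):
        y = np.asarray(y, dtype=float)
        terms = sum(a[k - 1] * np.cos(k * np.pi * y / w) for k in range(1, m + 1))
        return 1.0 / w + terms
    return f

rng = np.random.default_rng(6)
dists = np.abs(rng.normal(0, 10, 300))        # toy sighting distances (m)
w = dists.max()                               # truncation width
f_hat = fourier_density(dists, w)
# Line-transect density uses f(0): D = n * f(0) / (2 L) for transect length L.
print(300 * f_hat(0.0) / (2 * 1000.0))        # toy animals per square metre
```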
Som, Nicholas A.; Goodman, Damon H.; Perry, Russell W.; Hardy, Thomas B.
2015-01-01
Previous methods for constructing univariate habitat suitability criteria (HSC) curves have ranged from professional judgement to kernel-smoothed density functions or combinations thereof. We present a new method of generating HSC curves that applies probability density functions as the mathematical representation of the curves. Compared with previous approaches, benefits of our method include (1) estimation of probability density function parameters directly from raw data, (2) quantitative methods for selecting among several candidate probability density functions, and (3) concise methods for expressing estimation uncertainty in the HSC curves. We demonstrate our method with a thorough example using data collected on the depth of water used by juvenile Chinook salmon (Oncorhynchus tshawytscha) in the Klamath River of northern California and southern Oregon. All R code needed to implement our example is provided in the appendix. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
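The "selecting among candidates" step can be made concrete with maximum-likelihood fits followed by an information-criterion ranking. The sketch below is in Python (the paper itself ships R code), the candidate set and toy gamma-distributed depths are assumptions, and AIC is one plausible selection rule rather than necessarily the paper's.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
depths = rng.gamma(4.0, 0.15, 400)            # toy juvenile-salmon depth data (m)

# Candidate probability density functions for the HSC curve.
candidates = {
    "gamma": stats.gamma,
    "lognorm": stats.lognorm,
    "weibull": stats.weibull_min,
}
for name, dist in candidates.items():
    params = dist.fit(depths, floc=0)         # ML fit with location fixed at zero
    loglik = np.sum(dist.logpdf(depths, *params))
    k = len(params) - 1                       # loc was fixed, not estimated
    print(name, 2 * k - 2 * loglik)           # AIC: lower is better
```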
Shankar Subramaniam
2009-04-01
This final project report summarizes progress made towards the objectives described in the proposal entitled “Developing New Mathematical Models for Multiphase Flows Based on a Fundamental Probability Density Function Approach”. Substantial progress has been made in theory, modeling and numerical simulation of turbulent multiphase flows. The consistent mathematical framework based on probability density functions is described. New models are proposed for turbulent particle-laden flows and sprays.
Tveito, Aslak; Lines, Glenn T.; Edwards, Andrew G.; McCulloch, Andrew
2016-01-01
Markov models are ubiquitously used to represent the function of single ion channels. However, solving the inverse problem to construct a Markov model of single channel dynamics from bilayer or patch-clamp recordings remains challenging, particularly for channels involving complex gating processes. Methods for solving the inverse problem are generally based on data from voltage clamp measurements. Here, we describe an alternative approach to this problem based on measurements of voltage traces. The voltage traces define probability density functions of the functional states of an ion channel. These probability density functions can also be computed by solving a deterministic system of partial differential equations. The inversion is based on tuning the rates of the Markov models used in the deterministic system of partial differential equations such that the solution mimics the properties of the probability density function gathered from (pseudo) experimental data as well as possible. The optimization is done by defining a cost function to measure the difference between the deterministic solution and the solution based on experimental data. By invoking the properties of this function, it is possible to infer whether the rates of the Markov model are identifiable by our method. We present applications to Markov models well known from the literature. PMID:27154008
A simple method to estimate interwell autocorrelation
Pizarro, J.O.S.; Lake, L.W.
1997-08-01
The estimation of autocorrelation in the lateral or interwell direction is important when performing reservoir characterization studies using stochastic modeling. This paper presents a new method to estimate the interwell autocorrelation based on parameters, such as the vertical range and the variance, that can be estimated with commonly available data. We used synthetic fields that were generated from stochastic simulations to provide data to construct the estimation charts. These charts relate the ratio of areal to vertical variance and the autocorrelation range (expressed variously) in two directions. Three different semivariogram models were considered: spherical, exponential and truncated fractal. The overall procedure is demonstrated using field data. We find that the approach gives the most self-consistent results when it is applied to previously identified facies. Moreover, the autocorrelation trends follow the depositional pattern of the reservoir, which gives confidence in the validity of the approach.
Variational bayesian method of estimating variance components.
Arakawa, Aisaku; Taniguchi, Masaaki; Hayashi, Takeshi; Mikawa, Satoshi
2016-07-01
We developed a Bayesian analysis approach by using a variational inference method, a so-called variational Bayesian method, to determine the posterior distributions of variance components. This variational Bayesian method and an alternative Bayesian method using Gibbs sampling were compared in estimating genetic and residual variance components from both simulated data and publicly available real pig data. In the simulated data set, we observed strong bias toward overestimation of the genetic variance for the variational Bayesian method in the case of low heritability and small population size, and less bias was detected with larger population sizes in both methods examined. No differences in the estimates of variance components between the variational Bayesian and the Gibbs sampling methods were found in the real pig data. However, the posterior distributions of the variance components obtained with the variational Bayesian method had shorter tails than those obtained with Gibbs sampling. Consequently, the posterior standard deviations of the genetic and residual variances from the variational Bayesian method were lower than those from the method using Gibbs sampling. The computing time required was much shorter with the variational Bayesian method than with the method using Gibbs sampling. PMID:26877207
Cost estimating methods for advanced space systems
NASA Technical Reports Server (NTRS)
Cyr, Kelley
1988-01-01
The development of parametric cost estimating methods for advanced space systems in the conceptual design phase is discussed. The process of identifying variables which drive cost and the relationship between weight and cost are discussed. A theoretical model of cost is developed and tested using a historical data base of research and development projects.
Estimation method for serial dilution experiments.
Ben-David, Avishai; Davidson, Charles E
2014-12-01
Titration of microorganisms in infectious or environmental samples is a cornerstone of quantitative microbiology. A simple method is presented to estimate the microbial counts obtained with the serial dilution technique for microorganisms that can grow on bacteriological media and develop into a colony. The number (concentration) of viable microbial organisms is estimated from a single dilution plate (assay) without the need for replicate plates. Our method selects the best agar plate with which to estimate the microbial counts, and takes into account the colony size and plate area, both of which contribute to the likelihood of miscounting the number of colonies on a plate. The estimate of the optimal count given by our method can be used to narrow the search for the best (optimal) dilution plate and saves time. The required inputs are the plate size, the microbial colony size, and the serial dilution factors. The proposed approach shows relative accuracy well within ±0.1 log10 of data produced by computer simulations. The method maintains this accuracy even in the presence of dilution errors of up to 10% (for both the aliquot and diluent volumes), microbial counts between 10^4 and 10^12 colony-forming units, dilution ratios from 2 to 100, and plate-size to colony-size ratios between 6.25 and 200. PMID:25205541
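A stripped-down version of the single-plate back-calculation: pick one plate by a simple optimal-count rule and scale its count by the dilution and the plated volume. The "nearest 50 colonies" rule is a crude stand-in for the paper's selection, which also accounts for colony size and plate area, and all numbers are illustrative.

```python
import numpy as np

counts = np.array([1800, 240, 31, 4, 0])      # colonies at dilutions 10^1..10^5
dilution = 10.0                               # serial dilution factor
volume_ml = 0.1                               # volume plated per dilution

best = np.argmin(np.abs(counts - 50))         # plate nearest an assumed optimum of 50
cfu_per_ml = counts[best] * dilution**(best + 1) / volume_ml
print(best, cfu_per_ml)                       # plate index 2 -> 3.1e5 CFU/ml
```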
Efficient Methods of Estimating Switchgrass Biomass Supplies
Technology Transfer Automated Retrieval System (TEKTRAN)
Switchgrass (Panicum virgatum L.) is being developed as a biofuel feedstock for the United States. Efficient and accurate methods to estimate switchgrass biomass feedstock supply within a production area will be required by biorefineries. Our main objective was to determine the effectiveness of in...
Venturi, D.; Karniadakis, G.E.
2012-08-30
By using functional integral methods we determine new evolution equations satisfied by the joint response-excitation probability density function (PDF) associated with the stochastic solution to first-order nonlinear partial differential equations (PDEs). The theory is presented for both fully nonlinear and quasilinear scalar PDEs subject to random boundary conditions, random initial conditions, or random forcing terms. Particular applications are discussed for the classical linear and nonlinear advection equations and for the advection-reaction equation. By using a Fourier-Galerkin spectral method we obtain numerical solutions of the proposed response-excitation PDF equations. These numerical solutions are compared against those obtained by using more conventional statistical approaches such as probabilistic collocation and multi-element probabilistic collocation methods. It is found that the response-excitation approach yields accurate predictions of the statistical properties of the system. In addition, it allows one to directly ascertain the tails of probabilistic distributions, thus facilitating the assessment of rare events and associated risks. The computational cost of the response-excitation method is orders of magnitude smaller than that of more conventional statistical approaches if the PDE is subject to high-dimensional random boundary or initial conditions. The question of high dimensionality for evolution equations involving multidimensional joint response-excitation PDFs is also addressed.
A statistical method to estimate outflow volume in case of levee breach due to overtopping
NASA Astrophysics Data System (ADS)
Brandimarte, Luigia; Martina, Mario; Dottori, Francesco; Mazzoleni, Maurizio
2015-04-01
The aim of this study is to propose a statistical method to assess the outflowing water volume through a levee breach, due to overtopping, for three different levels of grass cover quality. The first step in the proposed methodology is the definition of the reliability function, i.e., the relation between loading and resistance conditions on the levee system, in case of overtopping. Secondly, the fragility curve, which relates the probability of failure to the loading condition on the levee system, is estimated once the stochastic variables in the reliability function have been defined. Different fragility curves are thus assessed for different scenarios of grass cover quality. Then, a levee breach model is implemented and combined with a 1D hydrodynamic model in order to assess the outflow hydrograph given the water level in the main channel and stochastic values of the breach width. Finally, the water volume is estimated as a combination of the probability density functions of the breach width and levee failure. The case study is a 98-km braided reach of the Po River, Italy, between the cross-sections of Cremona and Borgoforte. The analysis showed how different countermeasures, in this case different levels of grass cover quality, can reduce the probability of failure of the levee system. In particular, for a given breach width, good levee cover quality can significantly reduce the outflowing water volume compared to bad cover quality, inducing a consequently lower flood risk within the flood-prone area.
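A Monte Carlo sketch of the volume estimate under stated assumptions: a logistic fragility curve, lognormal breach widths, and a broad-crested-weir outflow integrated over a fixed event duration. Every coefficient and distribution below is illustrative, not taken from the study, and the 1D hydrodynamic coupling is omitted.

```python
import numpy as np

rng = np.random.default_rng(8)

def fragility(head):
    # Assumed logistic P(failure | overtopping head); stands in for the
    # fragility curves fitted per grass-cover scenario.
    return 1.0 / (1.0 + np.exp(-(head - 0.3) / 0.1))

g, Cd, T = 9.81, 0.6, 6 * 3600.0       # gravity, weir coefficient, 6 h event
head = 0.4                             # overtopping head above levee crest (m)

widths = rng.lognormal(np.log(30.0), 0.4, 10000)   # breach-width samples (m)
q = Cd * widths * np.sqrt(g) * head**1.5           # toy weir outflow (m^3/s)
volumes = q * T * fragility(head)                  # failure-weighted volumes (m^3)
print(np.mean(volumes), np.percentile(volumes, [5, 95]))
```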
Estimation of typhoon rainfall in GaoPing River: A Multivariate Maximum Entropy Method
NASA Astrophysics Data System (ADS)
Pei-Jui, Wu; Hwa-Lung, Yu
2016-04-01
The heavy rainfall from typhoons is the main cause of natural disasters in Taiwan, resulting in significant losses of human lives and property. On average, 3.5 typhoons strike Taiwan every year, and Typhoon Morakot in 2009 had the most serious impact in recorded history. Because the duration, path, and intensity of a typhoon affect the temporal and spatial rainfall pattern in a specific region, identifying the characteristics of typhoon rainfall types is advantageous when estimating rainfall quantities. This study developed a rainfall prediction model in three parts. First, the EEOF (extended empirical orthogonal function) is used to classify typhoon events, decomposing the standardized rainfall patterns of all stations for each event into EOFs and PCs (principal components), so that events that vary similarly in time and space are grouped into the same typhoon type. Next, based on this classification, PDFs (probability density functions) are constructed in space and time by means of multivariate maximum entropy using the first through fourth statistical moments, yielding a probability for each station at each time. Finally, the BME (Bayesian Maximum Entropy) method is used to construct the typhoon rainfall prediction model and to estimate the rainfall for the case of the GaoPing River, located in southern Taiwan. This study could be useful for future typhoon rainfall prediction and for governmental typhoon disaster prevention.
Cost estimating methods for advanced space systems
NASA Technical Reports Server (NTRS)
Cyr, Kelley
1988-01-01
Parametric cost estimating methods for space systems in the conceptual design phase are developed. The approach is to identify variables that drive cost such as weight, quantity, development culture, design inheritance, and time. The relationship between weight and cost is examined in detail. A theoretical model of cost is developed and tested statistically against a historical data base of major research and development programs. It is concluded that the technique presented is sound, but that it must be refined in order to produce acceptable cost estimates.
NASA Astrophysics Data System (ADS)
Sato, Katsushi
2016-08-01
The friction coefficient controls the brittle strength of the Earth's crust for deformation recorded by faults. This study proposes a computerized method to determine the friction coefficient of meso-scale faults. The method is based on the analysis of the orientation distribution of faults, together with the principal stress axes and the stress ratio calculated by a stress tensor inversion technique. The method assumes that faults are activated according to the cohesionless Coulomb failure criterion, where fluctuations of fluid pressure and of the magnitude of differential stress are assumed to induce faulting. In this case, the orientation distribution of fault planes is described by a probability density function that can be visualized as linear contours on a Mohr diagram. Parametric optimization of this function for an observed fault population yields the friction coefficient. A test using an artificial fault-slip dataset successfully determines the internal friction angle (the arctangent of the friction coefficient), with a confidence interval of several degrees estimated by the bootstrap resampling technique. An application to natural faults cutting a Pleistocene forearc basin fill yields a friction coefficient of around 0.7, consistent with the experimental prediction of Byerlee's law.
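The bootstrap step can be sketched generically: resample the fault-slip data with replacement, re-run the fit on each resample, and take percentiles of the resulting friction angles. The fit_friction function below is a hypothetical placeholder for the paper's PDF-based optimization, and the data array is a stand-in for a real fault-slip dataset.

```python
import numpy as np

rng = np.random.default_rng(9)

def fit_friction(faults):
    # Hypothetical placeholder: pretend the optimizer returns a friction
    # coefficient for the resampled fault population.
    return 0.7 + 0.05 * np.mean(faults)

faults = rng.normal(0, 1, 120)                 # stand-in fault-slip data
boot = np.array([fit_friction(rng.choice(faults, faults.size, replace=True))
                 for _ in range(2000)])        # bootstrap replicates
angles = np.degrees(np.arctan(boot))           # internal friction angles (deg)
print(np.percentile(angles, [2.5, 97.5]))      # ~95% confidence interval
```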
Implicit solvent methods for free energy estimation
Decherchi, Sergio; Masetti, Matteo; Vyalov, Ivan; Rocchia, Walter
2014-01-01
Solvation is a fundamental contribution in many biological processes and especially in molecular binding. Its estimation can be performed by means of several computational approaches. The aim of this review is to give an overview of existing theories and methods to estimate solvent effects giving a specific focus on the category of implicit solvent models and their use in Molecular Dynamics. In many of these models, the solvent is considered as a continuum homogenous medium, while the solute can be represented at the atomic detail and at different levels of theory. Despite their degree of approximation, implicit methods are still widely employed due to their trade-off between accuracy and efficiency. Their derivation is rooted in the statistical mechanics and integral equations disciplines, some of the related details being provided here. Finally, methods that combine implicit solvent models and molecular dynamics simulation, are briefly described. PMID:25193298
Chowdhury, Shakhawat
2013-05-01
The evaluation of the status of a municipal drinking water treatment plant (WTP) is important. The evaluation depends on several factors, including, human health risks from disinfection by-products (R), disinfection performance (D), and cost (C) of water production and distribution. The Dempster-Shafer theory (DST) of evidence can combine the individual status with respect to R, D, and C to generate a new indicator, from which the overall status of a WTP can be evaluated. In the DST, the ranges of different factors affecting the overall status are divided into several segments. The basic probability assignments (BPA) for each segment of these factors are provided by multiple experts, which are then combined to obtain the overall status. In assigning the BPA, the experts use their individual judgments, which can impart subjective biases in the overall evaluation. In this research, an approach has been introduced to avoid the assignment of subjective BPA. The factors contributing to the overall status were characterized using the probability density functions (PDF). The cumulative probabilities for different segments of these factors were determined from the cumulative density function, which were then assigned as the BPA for these factors. A case study is presented to demonstrate the application of PDF in DST to evaluate a WTP, leading to the selection of the required level of upgradation for the WTP. PMID:22941202
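The combination step is Dempster's rule: multiply the two experts' masses over intersecting focal sets, discard the conflicting mass K, and renormalize. The sketch below implements the rule over a toy two-state frame; the status labels and mass values are illustrative, not the paper's.

```python
from itertools import product

def combine(m1, m2):
    # Dempster's rule of combination for two basic probability assignments;
    # focal sets are represented as sorted tuples of labels.
    K = sum(m1[a] * m2[b] for a, b in product(m1, m2)
            if not (set(a) & set(b)))                  # conflict mass
    out = {}
    for a, b in product(m1, m2):
        inter = tuple(sorted(set(a) & set(b)))
        if inter:
            out[inter] = out.get(inter, 0.0) + m1[a] * m2[b] / (1.0 - K)
    return out

# Two experts' BPAs over a WTP status frame {good, poor}.
m_risk = {("good",): 0.6, ("poor",): 0.2, ("good", "poor"): 0.2}
m_cost = {("good",): 0.3, ("poor",): 0.5, ("good", "poor"): 0.2}
print(combine(m_risk, m_cost))
```

For the two BPAs shown, the combined mass concentrates on "good" (0.5625) versus "poor" (0.375), with 0.0625 left on the ignorance set.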
NASA Technical Reports Server (NTRS)
Nemeth, Noel
2013-01-01
Models that predict the failure probability of monolithic glass and ceramic components under multiaxial loading have been developed by authors such as Batdorf, Evans, and Matsuo. These "unit-sphere" failure models assume that the strength-controlling flaws are randomly oriented, noninteracting planar microcracks of specified geometry but of variable size. This report develops a formulation to describe the probability density distribution of the orientation of critical strength-controlling flaws that results from an applied load. This distribution is a function of the multiaxial stress state, the shear sensitivity of the flaws, the Weibull modulus, and the strength anisotropy. Examples are provided showing the predicted response on the unit sphere for various stress states for isotropic and transversely isotropic (anisotropic) materials--including the most probable orientation of critical flaws for offset uniaxial loads with strength anisotropy. The author anticipates that this information could be used to determine anisotropic stiffness degradation or anisotropic damage evolution for individual brittle (or quasi-brittle) composite material constituents within finite element or micromechanics-based software
A method for estimating soil moisture availability
NASA Technical Reports Server (NTRS)
Carlson, T. N.
1985-01-01
A method for estimating values of soil moisture based on measurements of infrared surface temperature is discussed. A central element in the method is a boundary layer model. Although it has been shown that soil moistures determined by this method using satellite measurements do correspond in a coarse fashion to the antecedent precipitation, the accuracy and exact physical interpretation (with respect to ground water amounts) are not well known. This area of ignorance, which currently impedes the practical application of the method to problems in hydrology, meteorology and agriculture, is largely due to the absence of corresponding surface measurements. Preliminary field measurements made over France have led to the development of a promising vegetation formulation (Taconet et al., 1985), which has been incorporated in the model. It is necessary, however, to test the vegetation component, and the entire method, over a wide variety of surface conditions and crop canopies.
Comparative yield estimation via shock hydrodynamic methods
Attia, A.V.; Moran, B.; Glenn, L.A.
1991-06-01
Shock time-of-arrival (CORRTEX) data from recent underground nuclear explosions in saturated tuff were used to estimate yield via the simulated explosion-scaling method. The sensitivity of the derived yield to uncertainties in the measured shock Hugoniot, release adiabats, and gas porosity is the main focus of this paper. In this method for determining yield, we assume a point-source explosion in an infinite homogeneous material. The rock model is formulated using laboratory experiments on core samples taken prior to the explosion. Results show that increasing gas porosity from 0% to 2% causes a 15% increase in yield per ms/kt^(1/3).
On methods of estimating cosmological bulk flows
NASA Astrophysics Data System (ADS)
Nusser, Adi
2016-01-01
We explore similarities and differences between several estimators of the cosmological bulk flow, B, from the observed radial peculiar velocities of galaxies. A distinction is made between two theoretical definitions of B as a dipole moment of the velocity field weighted by a radial window function. One definition involves the three-dimensional (3D) peculiar velocity, while the other is based on its radial component alone. Different methods attempt to infer B for either of these definitions, which coincide only in the case of a velocity field that is constant in space. We focus on the Wiener Filtering (WF) and the Constrained Minimum Variance (CMV) methodologies. Both methodologies require a prior expressed in terms of the radial velocity correlation function. Hoffman et al. compute B in Top-Hat windows from a WF realization of the 3D peculiar velocity field. Feldman et al. infer B directly from the observed velocities for the second definition of B. The WF methodology could easily be adapted to the second definition, in which case it would be equivalent to the CMV with the exception of the imposed constraint. For a prior with vanishing correlations or very noisy data, CMV reproduces the standard Maximum Likelihood estimate of B for the entire sample, independent of the radial weighting function. Therefore, this estimator is likely more susceptible to observational biases that could be present in measurements of distant galaxies. Finally, two additional estimators are proposed.
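The standard Maximum Likelihood estimator mentioned above has a closed form: minimizing χ² = Σ_i (u_i − B·r̂_i)²/σ_i² over the bulk flow B gives a 3×3 linear system, sketched here on synthetic radial velocities with invented amplitudes and scatter.

```python
import numpy as np

rng = np.random.default_rng(10)
n = 500
rhat = rng.normal(size=(n, 3))
rhat /= np.linalg.norm(rhat, axis=1, keepdims=True)   # unit line-of-sight vectors
B_true = np.array([250.0, -150.0, 100.0])             # toy bulk flow (km/s)
sigma = np.full(n, 300.0)                             # per-galaxy velocity errors
u = rhat @ B_true + rng.normal(0, sigma)              # observed radial velocities

# Normal equations A B = b from minimizing the chi-square above.
A = (rhat[:, :, None] * rhat[:, None, :] / sigma[:, None, None]**2).sum(axis=0)
b = (u[:, None] * rhat / sigma[:, None]**2).sum(axis=0)
B_ml = np.linalg.solve(A, b)
print(B_ml)                                           # recovers ~B_true
```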
Cost estimating methods for advanced space systems
NASA Technical Reports Server (NTRS)
Cyr, Kelley
1994-01-01
NASA is responsible for developing much of the nation's future space technology. Cost estimates for new programs are required early in the planning process so that decisions can be made accurately. Because of the long lead times required to develop space hardware, cost estimates are frequently required 10 to 15 years before the program delivers hardware. The system design in the conceptual phases of a program is usually only vaguely defined, and the technology to be used is often state-of-the-art or beyond. These factors combine to make cost estimating for conceptual programs very challenging. This paper describes an effort to develop parametric cost estimating methods for space systems in the conceptual design phase. The approach is to identify variables that drive cost, such as weight, quantity, development culture, design inheritance, and time. The nature of the relationships between the driver variables and cost will be discussed. In particular, the relationship between weight and cost will be examined in detail. A theoretical model of cost will be developed and tested statistically against a historical database of major research and development projects.
Bisetti, Fabrizio; Chen, J.-Y.; Hawkes, Evatt R.; Chen, Jacqueline H.
2008-12-15
Homogeneous charge compression ignition (HCCI) engine technology promises to reduce NOx and soot emissions while achieving high thermal efficiency. Temperature and mixture stratification are regarded as effective means of controlling the start of combustion and reducing the abrupt pressure rise at high loads. Probability density function methods are currently being pursued as a viable approach to modeling the effects of turbulent mixing and mixture stratification on HCCI ignition. In this paper we present an assessment of the merits of three widely used mixing models in reproducing the moments of reactive scalars during the ignition of a lean hydrogen/air mixture (φ = 0.1, p = 41 atm, T = 1070 K) under increasing temperature stratification and subject to decaying turbulence. The results from the solution of the evolution equation for a spatially homogeneous joint PDF of the reactive scalars are compared with available direct numerical simulation (DNS) data [E.R. Hawkes, R. Sankaran, P.P. Pebay, J.H. Chen, Combust. Flame 145 (1-2) (2006) 145-159]. The mixing models are found to be able to quantitatively reproduce the time history of the heat release rate, the first and second moments of temperature, and the hydroxyl radical mass fraction from the DNS results. Most importantly, the dependence of the heat release rate on the extent of the initial temperature stratification in the charge is also well captured.
NASA Astrophysics Data System (ADS)
Afsar, Ozgur; Tirnakli, Ugur
2010-10-01
We investigate the probability density of rescaled sums of iterates of the sine-circle map within the quasiperiodic route to chaos. When the dynamical system is strongly mixing (i.e., ergodic), the standard central limit theorem (CLT) is expected to be valid, but at the edge of chaos, where iterates have strong correlations, the standard CLT is not necessarily valid anymore. We discuss here the main characteristics of the probability densities for the sums of iterates of deterministic dynamical systems which exhibit the quasiperiodic route to chaos. At the golden-mean onset of chaos for the sine-circle map, we numerically verify that the probability density appears to converge to a q-Gaussian with q < 1 as the golden mean value is approached.
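A minimal numerical experiment in the spirit of the paper: iterate the sine-circle map at the golden-mean critical point (K = 1, with the commonly quoted Ω ≈ 0.6066611 giving a golden-mean winding number), accumulate sums of iterates from many initial conditions, and histogram the centered, rescaled sums; the iterate counts are kept small for illustration.

```python
import numpy as np

K, Omega = 1.0, 0.6066610634701     # critical point, golden-mean winding number
N, M = 2**10, 2000                  # iterates per sum, number of initial points
rng = np.random.default_rng(11)

sums = np.empty(M)
for m in range(M):
    x = rng.uniform()               # random initial phase
    total = 0.0
    for _ in range(N):
        # Sine-circle map: x -> x + Omega - (K / 2 pi) sin(2 pi x)  (mod 1)
        x = (x + Omega - K / (2 * np.pi) * np.sin(2 * np.pi * x)) % 1.0
        total += x
    sums[m] = total
y = (sums - sums.mean()) / sums.std()        # centered, rescaled sums
hist, edges = np.histogram(y, bins=80, density=True)
```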
An analytical method of estimating turbine performance
NASA Technical Reports Server (NTRS)
Kochendorfer, Fred D; Nettles, J Cary
1949-01-01
A method is developed by which the performance of a turbine over a range of operating conditions can be analytically estimated from the blade angles and flow areas. In order to use the method, certain coefficients that determine the weight flow and the friction losses must be approximated. The method is used to calculate the performance of the single-stage turbine of a commercial aircraft gas-turbine engine, and the calculated performance is compared with the performance indicated by experimental data. For the turbine of the typical example, the assumed pressure losses and turning angles give a calculated performance that represents the trends of the experimental performance with reasonable accuracy. The exact agreement between analytical performance and experimental performance is contingent upon the proper selection of a blading-loss parameter.
An Analytical Method of Estimating Turbine Performance
NASA Technical Reports Server (NTRS)
Kochendorfer, Fred D; Nettles, J Cary
1948-01-01
A method is developed by which the performance of a turbine over a range of operating conditions can be analytically estimated from the blade angles and flow areas. In order to use the method, certain coefficients that determine the weight flow and friction losses must be approximated. The method is used to calculate the performance of the single-stage turbine of a commercial aircraft gas-turbine engine and the calculated performance is compared with the performance indicated by experimental data. For the turbine of the typical example, the assumed pressure losses and turning angles give a calculated performance that represents the trends of the experimental performance with reasonable accuracy. The exact agreement between analytical performance and experimental performance is contingent upon the proper selection of the blading-loss parameter. A variation of blading-loss parameter from 0.3 to 0.5 includes most of the experimental data from the turbine investigated.
Method for estimation of protein isoelectric point.
Pihlasalo, Sari; Auranen, Laura; Hänninen, Pekka; Härmä, Harri
2012-10-01
Adsorption of sample protein to Eu(3+) chelate-labeled nanoparticles is the basis of the developed noncompetitive and homogeneous method for the estimation of the protein isoelectric point (pI). The lanthanide ion of the nanoparticle surface-conjugated Eu(3+) chelate is dissociated at a low pH, therefore decreasing the luminescence signal. A nanoparticle-adsorbed sample protein prevents the dissociation of the chelate, leading to a high luminescence signal. The adsorption efficiency of the sample protein is reduced above the isoelectric point due to the decreased electrostatic attraction between the negatively charged protein and the negatively charged particle. Four proteins with isoelectric points ranging from ~5 to 9 were tested to show the performance of the method. These pI values measured with the developed method were close to the theoretical and experimental literature values. The method is sensitive and requires a low analyte concentration of submilligrams per liter, which is nearly 10000 times lower than the concentration required for the traditional isoelectric focusing. Moreover, the method is significantly faster and simpler than the existing methods, as a ready-to-go assay was prepared for the microtiter plate format. This mix-and-measure concept is a highly attractive alternative for routine laboratory work. PMID:22946671
A Novel Method for Estimating Linkage Maps
Tan, Yuan-De; Fu, Yun-Xin
2006-01-01
The goal of linkage mapping is to find the true order of loci on a chromosome. Since the number of possible orders is large even for a modest number of loci, the problem of finding the optimal solution is NP-hard, analogous to the traveling salesman problem (TSP). Although a number of algorithms are available, many either are low in the accuracy of recovering the true order of loci or require tremendous amounts of computational resources, thus making them difficult to use for reconstructing a large-scale map. In this article we develop a novel method, called unidirectional growth (UG), to help solve this problem. The UG algorithm sequentially constructs the linkage map on the basis of novel results about additive distance. It not only is fast but also has a very high accuracy in recovering the true order of loci according to our simulation studies. Since the UG method requires n − 1 cycles to estimate the ordering of n loci, it is particularly useful for estimating linkage maps consisting of hundreds or even thousands of linked codominant loci on a chromosome. PMID:16783016
NASA Technical Reports Server (NTRS)
Garber, Donald P.
1993-01-01
A probability density function for the variability of ensemble averaged spectral estimates from helicopter acoustic signals in Gaussian background noise was evaluated. Numerical methods for calculating the density function and for determining confidence limits were explored. Density functions were predicted for both synthesized and experimental data and compared with observed spectral estimate variability.
Large Eddy Simulation/Probability Density Function Modeling of a Turbulent CH4/H2/N2 Jet Flame
Wang, Haifeng; Pope, Stephen B.
2011-01-01
In this work, we develop the large-eddy simulation (LES)/probability density function (PDF) simulation capability for turbulent combustion and apply it to a turbulent CH4/H2/N2 jet flame (DLR Flame A). The PDF code is verified to be second-order accurate with respect to the time-step size and the grid size in a manufactured one-dimensional test case. Three grids (64×64×16, 192×192×48, 320×320×80) are used in the simulations of DLR Flame A to examine the effect of the grid resolution. The numerical solutions of the resolved mixture fraction, the mixture fraction squared, and the density are duplicated in the LES code and the PDF code to explore the numerical consistency between them. A single laminar flamelet profile is used to reduce the computational cost of treating the chemical reactions of the particles. The sensitivity of the LES results to the time-step size is explored. Both first- and second-order time splitting schemes are used for integrating the stochastic differential equations for the particles, and these are compared in the jet flame simulations. The numerical results are found to be sensitive to the grid resolution, and the 192×192×48 grid is adequate to capture the main flow fields of interest for this study. The numerical consistency between LES and PDF is confirmed by the small difference between their numerical predictions. Overall good agreement between the LES/PDF predictions and the experimental data is observed for the resolved flow fields and the composition fields, including for the mass fractions of the minor species and NO. The LES results are found to be insensitive to the time-step size for this particular flame. The first-order splitting scheme performs as well as the second-order splitting scheme in predicting the resolved mean and rms mixture fraction and the density for this flame.
ERIC Educational Resources Information Center
Riggs, Peter J.
2013-01-01
Students often wrestle unsuccessfully with the task of correctly calculating momentum probability densities and have difficulty in understanding their interpretation. In the case of a particle in an "infinite" potential well, its momentum can take values that are not just those corresponding to the particle's quantised energies but…
Demographic estimation methods for plants with dormancy
Kery, M.; Gregg, K.B.
2004-01-01
Demographic studies in plants appear simple because, unlike animals, plants do not run away. Plant individuals can be marked with, e.g., plastic tags, but often the coordinates of an individual may be sufficient to identify it. Vascular plants in temperate latitudes have a pronounced seasonal life-cycle, so most plant demographers survey their study plots once a year, often during or shortly after flowering. Life-states are pervasive in plants, hence the results of a demographic study for an individual can be summarized in a familiar encounter history, such as 0VFVVF000. A zero means that an individual was not seen in a year, and a letter denotes its state for years when it was seen aboveground; V and F here stand for vegetative and flowering states, respectively. Probabilities of survival and state transitions can then be obtained by mere counting. Problems arise when there is an unobservable dormant state, i.e., when plants may stay belowground for one or more growing seasons. Encounter histories such as 0VF00F000 may then occur, where the meaning of zeroes becomes ambiguous: a zero can mean either a dead or a dormant plant. Various ad hoc methods in wide use among plant ecologists have made strong assumptions about when a zero should be equated to a dormant individual. These methods have never been compared among each other. In our talk and in Kery et al. (submitted), we show that these ad hoc estimators provide spurious estimates of survival and should not be used. In contrast, if detection probabilities for aboveground plants are known or can be estimated, capture-recapture (CR) models can be used to estimate probabilities of survival and state-transitions and the fraction of the population that is dormant. We have used this approach in two studies of terrestrial orchids, Cleistes bifaria (Kery et al., submitted) and Cypripedium reginae (Kery & Gregg, submitted) in West Virginia, U.S.A. For Cleistes, our data comprised one population with a total of 620 marked
NASA Astrophysics Data System (ADS)
Zengmei, L.; Guanghua, Q.; Zishen, C.
2015-05-01
The direct benefit of a waterlogging control project is reflected by the reduction or avoidance of waterlogging loss. Before and after the construction of a waterlogging control project, the disaster-inducing environment in the waterlogging-prone zone is generally different, and the category, quantity and spatial distribution of the disaster-bearing bodies also change to some extent. Therefore, under a changing environment, the direct benefit of a waterlogging control project should be the reduction in waterlogging losses compared to conditions without the project. Moreover, the waterlogging losses with or without the project should be the mathematical expectations of the waterlogging losses when rainstorms of all frequencies meet various water levels in the drainage-accepting zone. An estimation model of the direct benefit of waterlogging control is therefore proposed. Firstly, on the basis of a Copula function, the joint distribution of the rainstorms and the water levels is established, so as to obtain their joint probability density function. Secondly, according to this two-dimensional joint probability density, the two-dimensional domain of integration is determined and divided into small domains; for each small domain, the probability is calculated along with the difference between the average waterlogging loss with and without the project (the regional benefit of the waterlogging control project under the condition that rainstorms in the waterlogging-prone zone meet the water level in the drainage-accepting zone). Finally, the weighted mean of the project benefit over all small domains, with probability as the weight, gives the benefit of the waterlogging control project. Taking the estimation of the benefit of a waterlogging control project in Yangshan County, Guangdong Province, as an example, the paper briefly explains the procedures in waterlogging control project benefit estimation. The
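A minimal numerical sketch of the weighted-mean calculation described above, assuming hypothetical lognormal and normal marginals, a Gaussian copula, and invented loss surfaces (none of which are taken from the study):

```python
import numpy as np
from scipy import stats

# Hypothetical marginals (assumptions): rainstorm depth ~ lognormal (mm),
# water level in the drainage-accepting zone ~ normal (m)
rain = stats.lognorm(s=0.5, scale=100.0)
level = stats.norm(loc=5.0, scale=1.0)
rho = 0.6  # Gaussian-copula correlation (assumed)

def gaussian_copula_pdf(u, v, rho):
    # Density of the bivariate Gaussian copula at uniforms (u, v)
    x, y = stats.norm.ppf(u), stats.norm.ppf(v)
    det = 1.0 - rho**2
    return np.exp(-(rho**2 * (x**2 + y**2) - 2*rho*x*y) / (2*det)) / np.sqrt(det)

# Grid over the two-dimensional domain of integration
r = np.linspace(rain.ppf(0.001), rain.ppf(0.999), 200)
h = np.linspace(level.ppf(0.001), level.ppf(0.999), 200)
R, H = np.meshgrid(r, h, indexing="ij")

# Joint density f(r,h) = c(F(r), G(h)) * f_r(r) * f_h(h) (Sklar's theorem)
joint = gaussian_copula_pdf(rain.cdf(R), level.cdf(H), rho) * rain.pdf(R) * level.pdf(H)

# Invented loss surfaces with and without the control project
loss_without = 1e4 * np.maximum(R - 50, 0) * np.maximum(H - 4, 0)
loss_with = 0.3 * loss_without

# Cell probabilities from the density, then the probability-weighted mean benefit
dA = (r[1] - r[0]) * (h[1] - h[0])
p = joint * dA
benefit = np.sum(p * (loss_without - loss_with)) / np.sum(p)
print(f"expected direct benefit: {benefit:,.0f}")
```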
Bayes method for low rank tensor estimation
NASA Astrophysics Data System (ADS)
Suzuki, Taiji; Kanagawa, Heishiro
2016-03-01
We investigate the statistical convergence rate of a Bayesian low-rank tensor estimator, and construct a Bayesian nonlinear tensor estimator. The problem setting is the regression problem where the regression coefficient forms a tensor structure. This problem setting occurs in many practical applications, such as collaborative filtering, multi-task learning, and spatio-temporal data analysis. The convergence rate of the Bayes tensor estimator is analyzed in terms of both in-sample and out-of-sample predictive accuracies. It is shown that a fast learning rate is achieved without any strong convexity assumption on the observation model. Moreover, we extend the tensor estimator to a nonlinear function estimator so that we estimate a function that is a tensor product of several functions.
Comparisons of Four Methods for Estimating a Dynamic Factor Model
ERIC Educational Resources Information Center
Zhang, Zhiyong; Hamaker, Ellen L.; Nesselroade, John R.
2008-01-01
Four methods for estimating a dynamic factor model, the direct autoregressive factor score (DAFS) model, are evaluated and compared. The first method estimates the DAFS model using a Kalman filter algorithm based on its state space model representation. The second one employs the maximum likelihood estimation method based on the construction of a…
The MIRD method of estimating absorbed dose
Weber, D.A.
1991-01-01
The estimate of absorbed radiation dose from internal emitters provides the information required to assess the radiation risk associated with the administration of radiopharmaceuticals for medical applications. The MIRD (Medical Internal Radiation Dose) system of dose calculation provides a systematic approach to combining the biologic distribution data and clearance data of radiopharmaceuticals and the physical properties of radionuclides to obtain dose estimates. This tutorial reviews the MIRD schema, derives the equations used to calculate absorbed dose, and shows how the schema can be applied to estimate dose from radiopharmaceuticals used in nuclear medicine.
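For reference, the central dose equation of the MIRD schema in standard notation (a reminder of the well-known form, not a quotation from this tutorial):

```latex
% Mean absorbed dose to a target region r_T from all source regions r_S:
% \tilde{A}(r_S) is the time-integrated (cumulated) activity in the source,
% and S(r_T \leftarrow r_S) is the mean dose per unit cumulated activity.
\[
  D(r_T) \;=\; \sum_{r_S} \tilde{A}(r_S)\, S(r_T \leftarrow r_S)
\]
```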
NASA Astrophysics Data System (ADS)
Ruggles, Adam J.
2015-11-01
This paper presents improved statistical insight regarding the self-similar scalar mixing process of atmospheric hydrogen jets and the downstream region of under-expanded hydrogen jets. Quantitative planar laser Rayleigh scattering imaging is used to probe both jets. The self-similarity of statistical moments up to the sixth order (beyond the literature established second order) is documented in both cases. This is achieved using a novel self-similar normalization method that facilitated a degree of statistical convergence that is typically limited to continuous, point-based measurements. This demonstrates that image-based measurements of a limited number of samples can be used for self-similar scalar mixing studies. Both jets exhibit the same radial trends of these moments demonstrating that advanced atmospheric self-similarity can be applied in the analysis of under-expanded jets. Self-similar histograms away from the centerline are shown to be the combination of two distributions. The first is attributed to turbulent mixing. The second, a symmetric Poisson-type distribution centered on zero mass fraction, progressively becomes the dominant and eventually sole distribution at the edge of the jet. This distribution is attributed to shot noise-affected pure air measurements, rather than a diffusive superlayer at the jet boundary. This conclusion is reached after a rigorous measurement uncertainty analysis and inspection of pure air data collected with each hydrogen data set. A threshold based upon the measurement noise analysis is used to separate the turbulent and pure air data, and thus estimate intermittency. Beta-distributions (four parameters) are used to accurately represent the turbulent distribution moments. This combination of measured intermittency and four-parameter beta-distributions constitutes a new, simple approach to model scalar mixing. Comparisons between global moments from the data and moments calculated using the proposed model show excellent
Statistical methods of estimating mining costs
Long, K.R.
2011-01-01
Until it was defunded in 1995, the U.S. Bureau of Mines maintained a Cost Estimating System (CES) for prefeasibility-type economic evaluations of mineral deposits and estimating costs at producing and non-producing mines. This system had a significant role in mineral resource assessments to estimate costs of developing and operating known mineral deposits and predicted undiscovered deposits. For legal reasons, the U.S. Geological Survey cannot update and maintain CES. Instead, statistical tools are under development to estimate mining costs from basic properties of mineral deposits such as tonnage, grade, mineralogy, depth, strip ratio, distance from infrastructure, rock strength, and work index. The first step was to reestimate "Taylor's Rule" which relates operating rate to available ore tonnage. The second step was to estimate statistical models of capital and operating costs for open pit porphyry copper mines with flotation concentrators. For a sample of 27 proposed porphyry copper projects, capital costs can be estimated from three variables: mineral processing rate, strip ratio, and distance from nearest railroad before mine construction began. Of all the variables tested, operating costs were found to be significantly correlated only with strip ratio.
A Study of Variance Estimation Methods. Working Paper Series.
ERIC Educational Resources Information Center
Zhang, Fan; Weng, Stanley; Salvucci, Sameena; Hu, Ming-xiu
This working paper contains reports of five studies of variance estimation methods. The first, An Empirical Study of Poststratified Estimator, by Fan Zhang uses data from the National Household Education Survey to illustrate use of poststratified estimation. The second paper, BRR Variance Estimation Using BPLX Hadamard Procedure, by Stanley Weng…
Nutrient Estimation Using Subsurface Sensing Methods
Technology Transfer Automated Retrieval System (TEKTRAN)
This report investigates the use of precision management techniques for measuring soil conductivity on feedlot surfaces to estimate nutrient value for crop production. An electromagnetic induction soil conductivity meter was used to collect apparent soil electrical conductivity (ECa) from feedlot p...
Estimation Methods for One-Parameter Testlet Models
ERIC Educational Resources Information Center
Jiao, Hong; Wang, Shudong; He, Wei
2013-01-01
This study demonstrated the equivalence between the Rasch testlet model and the three-level one-parameter testlet model and explored the Markov Chain Monte Carlo (MCMC) method for model parameter estimation in WINBUGS. The estimation accuracy from the MCMC method was compared with those from the marginalized maximum likelihood estimation (MMLE)…
NASA Technical Reports Server (NTRS)
Huddleston, Lisa L.; Roeder, William P.; Merceret, Francis J.
2011-01-01
A new technique has been developed to estimate the probability that a nearby cloud to ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force Station. Future applications could include forensic meteorology.
NASA Technical Reports Server (NTRS)
Huddleston, Lisa L.; Roeder, William P.; Merceret, Francis J.
2010-01-01
A new technique has been developed to estimate the probability that a nearby cloud-to-ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force station.
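A rough Monte Carlo cross-check of the integration step (the operational technique integrates the bivariate Gaussian directly rather than sampling; the ellipse parameters below are invented):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical lightning location error ellipse (bivariate Gaussian), km
mean = np.array([1.2, -0.4])          # most likely stroke location
cov = np.array([[0.50, 0.10],
                [0.10, 0.25]])        # covariance from the error ellipse

point = np.array([0.0, 0.0])          # point of interest (e.g., a launch pad)
radius = 1.0                          # km; note the point is off the ellipse center

# Sample stroke locations and count those within the radius of the point
samples = rng.multivariate_normal(mean, cov, size=1_000_000)
inside = np.linalg.norm(samples - point, axis=1) <= radius
print(f"P(stroke within {radius} km) ~ {inside.mean():.4f}")
```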
Development of advanced acreage estimation methods
NASA Technical Reports Server (NTRS)
Guseman, L. F., Jr. (Principal Investigator)
1980-01-01
The use of the AMOEBA clustering/classification algorithm was investigated as a basis for both a color display generation technique and a maximum likelihood proportion estimation procedure. An approach to analyzing large data reduction systems was formulated and an exploratory empirical study of spatial correlation in LANDSAT data was also carried out. Topics addressed include: (1) development of multi-image color images; (2) spectral spatial classification algorithm development; (3) spatial correlation studies; and (4) evaluation of data systems.
Estimation of vegetation cover at subpixel resolution using LANDSAT data
NASA Technical Reports Server (NTRS)
Jasinski, Michael F.; Eagleson, Peter S.
1986-01-01
The present report summarizes the various approaches relevant to estimating canopy cover at subpixel resolution. The approaches are based on physical models of radiative transfer in non-homogeneous canopies and on empirical methods. The effects of vegetation shadows and topography are examined. Simple versions of the model are tested, using the Taos, New Mexico Study Area database. Emphasis has been placed on using relatively simple models requiring only one or two bands. Although most methods require some degree of ground truth, a two-band method is investigated whereby the percent cover can be estimated without ground truth by examining the limits of the data space. Future work is proposed which will incorporate additional surface parameters into the canopy cover algorithm, such as topography, leaf area, or shadows. The method involves deriving a probability density function for the percent canopy cover based on the joint probability density function of the observed radiances.
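One concrete reading of a two-band approach is a linear mixture model in which each pixel lies between bare-soil and full-canopy end-members taken from the limits of the data space; a sketch with invented end-member signatures, not values from the Taos database:

```python
import numpy as np

# Reflectances in two bands (red, NIR) for a few pixels (made-up values)
pixels = np.array([[0.18, 0.25],
                   [0.10, 0.40],
                   [0.06, 0.52]])

# End-members read off the extremes of the data space, i.e. no ground truth:
soil = np.array([0.22, 0.20])     # bare-soil signature (assumed)
veg = np.array([0.04, 0.55])      # full-canopy signature (assumed)

# Least-squares solution of pixel = f*veg + (1-f)*soil for the cover fraction f
d = veg - soil
f = (pixels - soil) @ d / (d @ d)
print(np.clip(f, 0.0, 1.0))       # fractional canopy cover per pixel
```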
An investigation of new methods for estimating parameter sensitivities
NASA Technical Reports Server (NTRS)
Beltracchi, Todd J.; Gabriele, Gary A.
1989-01-01
The method proposed for estimating sensitivity derivatives is based on the Recursive Quadratic Programming (RQP) method, used in conjunction with a differencing formula to produce estimates of the sensitivities. This method is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFS method employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate the parameter sensitivity.
NASA Technical Reports Server (NTRS)
Tapia, R. A.; Vanrooy, D. L.
1976-01-01
A quasi-Newton method is presented for minimizing a nonlinear function while constraining the variables to be nonnegative and sum to one. The nonnegativity constraints were eliminated by working with the squares of the variables and the resulting problem was solved using Tapia's general theory of quasi-Newton methods for constrained optimization. A user's guide for a computer program implementing this algorithm is provided.
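A small sketch of the squared-variable device, using scipy's generic SLSQP solver in place of the report's quasi-Newton algorithm: writing x_i = y_i^2 makes nonnegativity automatic, and the sum-to-one constraint becomes a quadratic equality in y. The objective below is illustrative only.

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    # Example objective on the probability simplex (not from the report)
    return np.sum(x * np.log(np.maximum(x, 1e-12))) + np.sum((x - 0.3)**2)

def g(y):
    return f(y**2)                      # optimize over y, where x = y^2 >= 0

cons = {"type": "eq", "fun": lambda y: np.sum(y**2) - 1.0}
y0 = np.sqrt(np.full(4, 0.25))          # feasible start: x0 uniform on 4 variables
res = minimize(g, y0, method="SLSQP", constraints=cons)

x = res.x**2
print(x, x.sum())                       # nonnegative components summing to one
```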
The augmented Lagrangian method for parameter estimation in elliptic systems
NASA Technical Reports Server (NTRS)
Ito, Kazufumi; Kunisch, Karl
1990-01-01
In this paper a new technique for the estimation of parameters in elliptic partial differential equations is developed. It is a hybrid method combining the output-least-squares and the equation error method. The new method is realized by an augmented Lagrangian formulation, and convergence as well as rate of convergence proofs are provided. Technically the critical step is the verification of a coercivity estimate of an appropriately defined Lagrangian functional. To obtain this coercivity estimate a seminorm regularization technique is used.
Morphological method for estimation of simian virus 40 infectious titer.
Landau, S M; Nosach, L N; Pavlova, G V
1982-01-01
The cytomorphologic method previously reported for titration of adenoviruses has been employed for estimating the infectious titer of simian virus 40 (SV 40). Infected cells forming intranuclear inclusions were determined. The method examined possesses a number of advantages over virus titration by plaque assay and cytopathic effect. The virus titer estimated by the method of inclusion counting and expressed as IFU/ml (Inclusion Forming Units/ml) corresponds to that estimated by plaque count and expressed as PFU/ml. PMID:6289780
Fused methods for visual saliency estimation
NASA Astrophysics Data System (ADS)
Danko, Amanda S.; Lyu, Siwei
2015-02-01
In this work, we present a new model of visual saliency by combining results from existing methods, improving upon their performance and accuracy. By fusing pre-attentive and context-aware methods, we highlight the abilities of state-of-the-art models while compensating for their deficiencies. We put this theory to the test in a series of experiments, comparatively evaluating the visual saliency maps and employing them for content-based image retrieval and thumbnail generation. We find that on average our model yields definitive improvements upon recall and f-measure metrics with comparable precisions. In addition, we find that all image searches using our fused method return more correct images and additionally rank them higher than the searches using the original methods alone.
Advancing Methods for Estimating Cropland Area
NASA Astrophysics Data System (ADS)
King, L.; Hansen, M.; Stehman, S. V.; Adusei, B.; Potapov, P.; Krylov, A.
2014-12-01
Measurement and monitoring of complex and dynamic agricultural land systems is essential with increasing demands on food, feed, fuel and fiber production from growing human populations, rising consumption per capita, the expansion of crop oils in industrial products, and the encouraged emphasis on crop biofuels as an alternative energy source. Soybean is an important global commodity crop, and the area of land cultivated for soybean has risen dramatically over the past 60 years, occupying more than 5% of all global croplands (Monfreda et al 2008). Escalating demands for soy over the next twenty years are anticipated to be met by an increase of 1.5 times the current global production, resulting in expansion of soybean cultivated land area by nearly the same amount (Masuda and Goldsmith 2009). Soybean cropland area is estimated with the use of a sampling strategy and supervised non-linear hierarchical decision tree classification for the United States, Argentina and Brazil as the prototype in development of a new methodology for crop specific agricultural area estimation. Comparison of our 30 m Landsat soy classification with the National Agricultural Statistics Service Cropland Data Layer (CDL) soy map shows a strong agreement in the United States for 2011, 2012, and 2013. RapidEye 5 m imagery was also classified for soy presence and absence and used at the field scale for validation and accuracy assessment of the Landsat soy maps, describing a nearly 1 to 1 relationship in the United States, Argentina and Brazil. The strong correlation found between all products suggests high accuracy and precision of the prototype and has proven to be a successful and efficient way to assess soybean cultivated area at the sub-national and national scale for the United States with great potential for application elsewhere.
Wang, Dongliang; Hutson, Alan D.
2016-01-01
The traditional confidence interval associated with the ordinary least squares estimator of linear regression coefficient is sensitive to non-normality of the underlying distribution. In this article, we develop a novel kernel density estimator for the ordinary least squares estimator via utilizing well-defined inversion based kernel smoothing techniques in order to estimate the conditional probability density distribution of the dependent random variable. Simulation results show that given a small sample size, our method significantly increases the power as compared with Wald-type CIs. The proposed approach is illustrated via an application to a classic small data set originally from Graybill (1961). PMID:26924882
Comparison of three methods for estimating complete life tables
NASA Astrophysics Data System (ADS)
Ibrahim, Rose Irnawaty
2013-04-01
A question of interest in the demographic and actuarial fields is the estimation of the complete set of qx values when the data are given in age groups. When complete life tables are not available, estimating them from abridged life tables is necessary. Three methods, namely King's Osculatory Interpolation, Six-point Lagrangian Interpolation, and the Heligman-Pollard Model, are compared using data from abridged life tables for the Malaysian population. Each of these methods was applied to the abridged data sets to estimate the complete sets of qx values. Then, the estimated complete sets of qx values were used to reproduce the estimated abridged ones by each of the three methods. The results were then compared with the actual values published in the abridged life tables. Among the three methods, the Six-point Lagrangian Interpolation method produces the best estimates of complete life tables from five-year abridged life tables.
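For reference, six-point Lagrangian interpolation evaluates the unique degree-five polynomial through six known (age, qx) points; a generic sketch with invented rates rather than the Malaysian data:

```python
import numpy as np

def lagrange_interpolate(x_known, y_known, x_new):
    """Evaluate the Lagrange polynomial through the known points at x_new."""
    x_known = np.asarray(x_known, dtype=float)
    y_known = np.asarray(y_known, dtype=float)
    total = 0.0
    for i, (xi, yi) in enumerate(zip(x_known, y_known)):
        mask = np.arange(len(x_known)) != i
        # Lagrange basis polynomial l_i evaluated at x_new
        basis = np.prod((x_new - x_known[mask]) / (xi - x_known[mask]))
        total += yi * basis
    return total

# Six pivotal ages and invented abridged mortality rates (illustrative only)
ages = [0, 5, 10, 15, 20, 25]
qx = [0.0065, 0.0004, 0.0003, 0.0005, 0.0008, 0.0009]
print(lagrange_interpolate(ages, qx, 12))  # estimated qx at single age 12
```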
System and method for correcting attitude estimation
NASA Technical Reports Server (NTRS)
Josselson, Robert H. (Inventor)
2010-01-01
A system includes an angular rate sensor disposed in a vehicle for providing angular rates of the vehicle, and an instrument disposed in the vehicle for providing line-of-sight control with respect to a line-of-sight reference. The instrument includes an integrator which is configured to integrate the angular rates of the vehicle to form non-compensated attitudes. Also included is a compensator coupled across the integrator, in a feed-forward loop, for receiving the angular rates of the vehicle and outputting compensated angular rates of the vehicle. A summer combines the non-compensated attitudes and the compensated angular rates of the vehicle to form estimated vehicle attitudes for controlling the instrument with respect to the line-of-sight reference. The compensator is configured to provide error compensation to the instrument free of any feedback loop that uses an error signal. The compensator may include a transfer function providing a fixed gain to the received angular rates of the vehicle. The compensator may, alternatively, include a transfer function providing a variable gain as a function of frequency to operate on the received angular rates of the vehicle.
Evaluation of Two Methods to Estimate and Monitor Bird Populations
Taylor, Sandra L.; Pollard, Katherine S.
2008-01-01
Background Effective management depends upon accurately estimating trends in abundance of bird populations over time, and in some cases estimating abundance. Two population estimation methods, double observer (DO) and double sampling (DS), have been advocated for avian population studies and the relative merits and shortcomings of these methods remain an area of debate. Methodology/Principal Findings We used simulations to evaluate the performances of these two population estimation methods under a range of realistic scenarios. For three hypothetical populations with different levels of clustering, we generated DO and DS population size estimates for a range of detection probabilities and survey proportions. Population estimates for both methods were centered on the true population size for all levels of population clustering and survey proportions when detection probabilities were greater than 20%. The DO method underestimated the population at detection probabilities less than 30% whereas the DS method remained essentially unbiased. The coverage probability of 95% confidence intervals for population estimates was slightly less than the nominal level for the DS method but was substantially below the nominal level for the DO method at high detection probabilities. Differences in observer detection probabilities did not affect the accuracy and precision of population estimates of the DO method. Population estimates for the DS method remained unbiased as the proportion of units intensively surveyed changed, but the variance of the estimates decreased with increasing proportion intensively surveyed. Conclusions/Significance The DO and DS methods can be applied in many different settings and our evaluations provide important information on the performance of these two methods that can assist researchers in selecting the method most appropriate for their particular needs. PMID:18728775
NASA Astrophysics Data System (ADS)
Gu, Jian; Lei, YongPing; Lin, Jian; Fu, HanGuang; Wu, Zhongwei
2016-01-01
The scattering of fatigue life data is a common problem and is usually described using the normal distribution or Weibull distribution. For solder joints under drop impact, due to the complicated stress distribution, the relationship between the stress and the drop life is so far unknown. Furthermore, it is important to establish a function describing the change in standard deviation for solder joints under different drop impact levels. Therefore, in this study, a novel conditional probability density distribution surface (CPDDS) was established for the analysis of the drop life of solder joints. The relationship between the drop impact acceleration and the drop life is proposed, which comprehensively considers the stress distribution. A novel exponential model was adopted for describing the change of the standard deviation with the impact acceleration (0 → +∞). To validate the model, the drop life of Sn-3.0Ag-0.5Cu solder joints was analyzed. The probability density curve of the logarithm of the fatigue life distribution can be easily obtained for a certain acceleration level fixed on the acceleration level axis of the CPDDS. The P-A-N curve was also obtained using the functions μ(A) and σ(A), which can reflect the regularity of the life data for an overall reliability P.
Robust time and frequency domain estimation methods in adaptive control
NASA Technical Reports Server (NTRS)
Lamaire, Richard Orville
1987-01-01
A robust identification method was developed for use in an adaptive control system. The type of estimator is called the robust estimator, since it is robust to the effects of both unmodeled dynamics and an unmeasurable disturbance. The development of the robust estimator was motivated by a need to provide guarantees in the identification part of an adaptive controller. To enable the design of a robust control system, a nominal model as well as a frequency-domain bounding function on the modeling uncertainty associated with this nominal model must be provided. Two estimation methods are presented for finding parameter estimates, and, hence, a nominal model. One of these methods is based on the well developed field of time-domain parameter estimation. In a second method of finding parameter estimates, a type of weighted least-squares fitting to a frequency-domain estimated model is used. The frequency-domain estimator is shown to perform better, in general, than the time-domain parameter estimator. In addition, a methodology for finding a frequency-domain bounding function on the disturbance is used to compute a frequency-domain bounding function on the additive modeling error due to the effects of the disturbance and the use of finite-length data. The performance of the robust estimator in both open-loop and closed-loop situations is examined through the use of simulations.
NASA Astrophysics Data System (ADS)
Klein, Roman
2016-06-01
Electron storage rings with appropriate design are primary source standards, the spectral radiant intensity of which can be calculated from measured parameters using the Schwinger equation. PTB uses the electron storage rings BESSY II and MLS for source-based radiometry in the spectral range from the near-infrared to the x-ray region. The uncertainty of the calculated radiant intensity depends on the uncertainty of the measured parameters used for the calculation. Up to now, the procedure described in the Guide to the Expression of Uncertainty in Measurement (GUM), i.e. the law of propagation of uncertainty assuming a linear measurement model, has been used to determine the combined uncertainty of the calculated spectral intensity, and the coverage interval as well. It has now been tested with a Monte Carlo simulation, according to Supplement 1 of the GUM, whether this procedure is valid for the rather complicated calculation by means of the Schwinger formalism and for different probability distributions of the input parameters. It was found that for typical uncertainties of the input parameters both methods yield similar results.
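The comparison can be reproduced in miniature for any calculable output quantity; a sketch of GUM Supplement 1 style Monte Carlo propagation against the first-order law of propagation of uncertainty, using a toy model in place of the Schwinger equation (all values assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000

# Toy model standing in for the Schwinger calculation: y = f(E, I, B)
def f(E, I, B):
    return E**2 * I * B              # illustrative only

# Input estimates and standard uncertainties (assumed values)
E, uE = 1.70, 0.01                   # electron energy, GeV
I, uI = 0.200, 0.002                 # beam current, A
B, uB = 1.30, 0.01                   # magnetic flux density, T

# GUM Supplement 1: propagate the input distributions by sampling
y = f(rng.normal(E, uE, N), rng.normal(I, uI, N), rng.normal(B, uB, N))
print("MC:  mean =", y.mean(), " std =", y.std())
print("MC 95% coverage interval:", np.percentile(y, [2.5, 97.5]))

# Classical GUM: first-order law of propagation of uncertainty
dE, dI, dB = 2*E*I*B, E**2*B, E**2*I      # partial derivatives at the estimate
u = np.hypot(np.hypot(dE*uE, dI*uI), dB*uB)
print("LPU: value =", f(E, I, B), " combined u =", u)
```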
System and method for motor parameter estimation
Luhrs, Bin; Yan, Ting
2014-03-18
A system and method for determining unknown values of certain motor parameters includes a motor input device connectable to an electric motor having associated therewith values for known motor parameters and an unknown value of at least one motor parameter. The motor input device includes a processing unit that receives a first input from the electric motor comprising values for the known motor parameters and receives a second input comprising motor data on a plurality of reference motors, including values for motor parameters corresponding to the known motor parameters of the electric motor and values for motor parameters corresponding to the at least one unknown motor parameter value of the electric motor. The processor determines the unknown value of the at least one motor parameter from the first input and the second input and determines a motor management strategy for the electric motor based thereon.
Carbon footprint: current methods of estimation.
Pandey, Divya; Agrawal, Madhoolika; Pandey, Jai Shanker
2011-07-01
Increasing greenhouse gas concentrations in the atmosphere are perturbing the environment, causing grievous global warming and associated consequences. Following the rule that only the measurable is manageable, mensuration of the greenhouse gas intensiveness of different products, bodies, and processes is going on worldwide, expressed as their carbon footprints. The methodologies for carbon footprint calculations are still evolving, and the carbon footprint is emerging as an important tool for greenhouse gas management. The concept of carbon footprinting has permeated and is being commercialized in all areas of life and the economy, but there is little coherence in definitions and calculations of carbon footprints among studies. There are disagreements in the selection of gases and in the order of emissions to be covered in footprint calculations. Standards of greenhouse gas accounting are the common resources used in footprint calculations, although there is no mandatory provision of footprint verification. Carbon footprinting is intended to be a tool to guide relevant emission cuts and verifications; its standardization at the international level is therefore necessary. The present review describes the prevailing carbon footprinting methods and raises the related issues. PMID:20848311
Estimate octane numbers using an enhanced method
Twu, C.H.; Coon, J.E.
1997-03-01
An improved model, based on the Twu-Coon method, is not only internally consistent, but also retains the same level of accuracy as the previous model in predicting octanes of gasoline blends. The enhanced model applies the same binary interaction parameters to components in each gasoline cut and their blends. Thus, the enhanced model can blend gasoline cuts in any order, in any combination or from any splitting of gasoline cuts and still yield the identical value of octane number for blending the same number of gasoline cuts. Setting binary interaction parameters to zero for identical gasoline cuts during the blending process is not required. The new model changes the old model's methodology so that the same binary interaction parameters can be applied between components inside a gasoline cut as are applied to the same components between gasoline cuts. The enhanced model is more consistent in methodology than the original model, but it has equal accuracy for predicting octane numbers of gasoline blends, and it has the same number of binary interaction parameters. The paper discusses background, enhancement of the Twu-Coon interaction model, and three examples: blend of 2 identical gasoline cuts, blend of 3 gasoline cuts, and blend of the same 3 gasoline cuts in a different order.
Evaluating combinational illumination estimation methods on real-world images.
Bing Li; Weihua Xiong; Weiming Hu; Funt, Brian
2014-03-01
Illumination estimation is an important component of color constancy and automatic white balancing. A number of methods of combining illumination estimates obtained from multiple subordinate illumination estimation methods now appear in the literature. These combinational methods aim to provide better illumination estimates by fusing the information embedded in the subordinate solutions. The existing combinational methods are surveyed and analyzed here with the goals of determining: 1) the effectiveness of fusing illumination estimates from multiple subordinate methods; 2) the best method of combination; 3) the underlying factors that affect the performance of a combinational method; and 4) the effectiveness of combination for illumination estimation in multiple-illuminant scenes. The various combinational methods are categorized in terms of whether or not they require supervised training and whether or not they rely on high-level scene content cues (e.g., indoor versus outdoor). Extensive tests and enhanced analyses using three data sets of real-world images are conducted. For consistency in testing, the images were labeled according to their high-level features (3D stages, indoor/outdoor) and this label data is made available on-line. The tests reveal that the trained combinational methods (direct combination by support vector regression in particular) clearly outperform both the non-combinational methods and those combinational methods based on scene content cues. PMID:23974624
Estimating tree height-diameter models with the Bayesian method.
Zhang, Xiongqing; Duan, Aiguo; Zhang, Jianguo; Xiang, Congwei
2014-01-01
Six candidate height-diameter models were used to analyze the height-diameter relationships. The common methods for estimating the height-diameter models have taken the classical (frequentist) approach based on the frequency interpretation of probability, for example, the nonlinear least squares method (NLS) and the maximum likelihood method (ML). The Bayesian method has an exclusive advantage compared with classical method that the parameters to be estimated are regarded as random variables. In this study, the classical and Bayesian methods were used to estimate six height-diameter models, respectively. Both the classical method and Bayesian method showed that the Weibull model was the "best" model using data1. In addition, based on the Weibull model, data2 was used for comparing Bayesian method with informative priors with uninformative priors and classical method. The results showed that the improvement in prediction accuracy with Bayesian method led to narrower confidence bands of predicted value in comparison to that for the classical method, and the credible bands of parameters with informative priors were also narrower than uninformative priors and classical method. The estimated posterior distributions for parameters can be set as new priors in estimating the parameters using data2. PMID:24711733
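As a bare-bones illustration of the Bayesian route for one candidate model, the sketch below fits a Weibull-type height-diameter curve H = 1.3 + a(1 - exp(-b D^c)) with Gaussian errors by random-walk Metropolis sampling; the data are simulated and the priors are loose normals, not the study's:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated diameter-height data (not the study's data)
D = rng.uniform(5, 40, 200)
a0, b0, c0 = 25.0, 0.05, 1.1
H = 1.3 + a0*(1 - np.exp(-b0*D**c0)) + rng.normal(0, 1.0, 200)

def log_post(theta):
    a, b, c, log_s = theta
    if a <= 0 or b <= 0 or c <= 0:
        return -np.inf
    s = np.exp(log_s)
    mu = 1.3 + a*(1 - np.exp(-b*D**c))
    loglik = -0.5*np.sum(((H - mu)/s)**2) - len(H)*np.log(s)
    # Loose normal priors on (a, b, c); flat prior on log sigma (assumptions)
    logprior = -0.5*((a-20)/20)**2 - 0.5*((b-0.1)/0.1)**2 - 0.5*((c-1)/1)**2
    return loglik + logprior

# Random-walk Metropolis over (a, b, c, log sigma)
theta = np.array([20.0, 0.1, 1.0, 0.0])
step = np.array([0.5, 0.005, 0.02, 0.05])
lp = log_post(theta)
chain = []
for _ in range(20000):
    prop = theta + step*rng.normal(size=4)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain.append(theta)

chain = np.array(chain[5000:])                 # discard burn-in
print(chain.mean(axis=0))                      # posterior means
```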
Evaluation of Methods to Estimate Understory Fruit Biomass
Lashley, Marcus A.; Thompson, Jeffrey R.; Chitwood, M. Colter; DePerno, Christopher S.; Moorman, Christopher E.
2014-01-01
Fleshy fruit is consumed by many wildlife species and is a critical component of forest ecosystems. Because fruit production may change quickly during forest succession, frequent monitoring of fruit biomass may be needed to better understand shifts in wildlife habitat quality. Yet, designing a fruit sampling protocol that is executable on a frequent basis may be difficult, and knowledge of accuracy within monitoring protocols is lacking. We evaluated the accuracy and efficiency of 3 methods to estimate understory fruit biomass (Fruit Count, Stem Density, and Plant Coverage). The Fruit Count method requires visual counts of fruit to estimate fruit biomass. The Stem Density method uses counts of all stems of fruit producing species to estimate fruit biomass. The Plant Coverage method uses land coverage of fruit producing species to estimate fruit biomass. Using linear regression models under a censored-normal distribution, we determined the Fruit Count and Stem Density methods could accurately estimate fruit biomass; however, when comparing AIC values between models, the Fruit Count method was the superior method for estimating fruit biomass. After determining that Fruit Count was the superior method to accurately estimate fruit biomass, we conducted additional analyses to determine the sampling intensity (i.e., percentage of area) necessary to accurately estimate fruit biomass. The Fruit Count method accurately estimated fruit biomass at a 0.8% sampling intensity. In some cases, sampling 0.8% of an area may not be feasible. In these cases, we suggest sampling understory fruit production with the Fruit Count method at the greatest feasible sampling intensity, which could be valuable to assess annual fluctuations in fruit production. PMID:24819253
A source number estimation method for single optical fiber sensor
NASA Astrophysics Data System (ADS)
Hu, Junpeng; Huang, Zhiping; Su, Shaojing; Zhang, Yimeng; Liu, Chunwu
2015-10-01
The single-channel blind source separation (SCBSS) technique is of great significance in many fields, such as optical fiber communication, sensor detection, image processing and so on. Realizing blind source separation (BSS) from the data received by a single optical fiber sensor has a wide range of applications. The performance of many BSS algorithms and signal processing methods is worsened by inaccurate source number estimation. Many excellent algorithms have been proposed to deal with source number estimation in array signal processing, which involves multiple sensors, but they cannot be applied directly to the single-sensor condition. This paper presents a source number estimation method for the data received by a single optical fiber sensor. By a delay process, the single-sensor data are converted to multi-dimensional form and the data covariance matrix is constructed; the estimation algorithms used in array signal processing can then be utilized. Information theoretic criteria (ITC) based methods, represented by AIC and MDL, and Gerschgorin's disk estimation (GDE) are introduced to estimate the source number of the signal received by the single optical fiber sensor. To improve the performance of these estimation methods at low signal-to-noise ratio (SNR), a smoothing process is applied to the data covariance matrix, reducing the fluctuation and uncertainty of its eigenvalues. Simulation results prove that ITC-based methods cannot estimate the source number effectively under colored noise. The GDE method, although its performance is poor at low SNR, is able to accurately estimate the number of sources with colored noise. The experiments also show that the proposed method can be applied to estimate the source number of single-sensor received data.
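A compact sketch of the delay-embedding and information-criterion steps (white-noise case; the covariance smoothing and GDE steps are omitted, and all signal parameters are invented):

```python
import numpy as np
from scipy.linalg import hankel, eigh

rng = np.random.default_rng(0)

# Single-sensor record: two real sinusoids in white noise (illustrative)
n = 2000
t = np.arange(n)
x = np.sin(0.2*np.pi*t) + 0.8*np.sin(0.37*np.pi*t) + 0.5*rng.normal(size=n)

# Delay process: embed the scalar series into an m-dimensional pseudo-array
m = 12
X = hankel(x[:m], x[m-1:])           # m x (n-m+1) data matrix
Rxx = X @ X.T / X.shape[1]           # sample covariance matrix
lam = np.sort(eigh(Rxx, eigvals_only=True))[::-1]   # descending eigenvalues

# Wax-Kailath MDL criterion over candidate source counts k
N = X.shape[1]
def mdl(k):
    tail = lam[k:]
    ratio = np.exp(np.mean(np.log(tail))) / np.mean(tail)   # geo/arith mean
    return -N*(m - k)*np.log(ratio) + 0.5*k*(2*m - k)*np.log(N)

ks = np.arange(m)
# Each real sinusoid occupies an eigenvalue pair, so the estimate is typically 4
print("MDL source number estimate:", ks[np.argmin([mdl(k) for k in ks])])
```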
An automated method of tuning an attitude estimator
NASA Technical Reports Server (NTRS)
Mason, Paul A. C.; Mook, D. Joseph
1995-01-01
Attitude determination is a major element of the operation and maintenance of a spacecraft. There are several existing methods of determining the attitude of a spacecraft. One of the most commonly used methods utilizes the Kalman filter to estimate the attitude of the spacecraft. Given an accurate model of a system and adequate observations, a Kalman filter can produce accurate estimates of the attitude. If the system model, filter parameters, or observations are inaccurate, the attitude estimates may be degraded. Therefore, it is advantageous to develop a method of automatically tuning the Kalman filter to produce accurate estimates. In this paper, a three-axis attitude determination Kalman filter, which uses only magnetometer measurements, is developed and tested using real data. The appropriate filter parameters are found via the Process Noise Covariance Estimator (PNCE). The PNCE provides an optimal criterion for determining the best filter parameters.
Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method
ERIC Educational Resources Information Center
Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey
2013-01-01
Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…
Methods for Estimating Medical Expenditures Attributable to Intimate Partner Violence
ERIC Educational Resources Information Center
Brown, Derek S.; Finkelstein, Eric A.; Mercy, James A.
2008-01-01
This article compares three methods for estimating the medical cost burden of intimate partner violence against U.S. adult women (18 years and older), 1 year postvictimization. To compute the estimates, prevalence data from the National Violence Against Women Survey are combined with cost data from the Medical Expenditure Panel Survey, the…
A Novel Monopulse Angle Estimation Method for Wideband LFM Radars
Zhang, Yi-Xiong; Liu, Qi-Fan; Hong, Ru-Jia; Pan, Ping-Ping; Deng, Zhen-Miao
2016-01-01
Traditional monopulse angle estimations are mainly based on phase comparison and amplitude comparison methods, which are commonly adopted in narrowband radars. In modern radar systems, wideband radars are becoming more and more important, while the angle estimation for wideband signals is little studied in previous works. As noise in wideband radars has larger bandwidth than narrowband radars, the challenge lies in the accumulation of energy from the high resolution range profile (HRRP) of monopulse. In wideband radars, linear frequency modulated (LFM) signals are frequently utilized. In this paper, we investigate the monopulse angle estimation problem for wideband LFM signals. To accumulate the energy of the received echo signals from different scatterers of a target, we propose utilizing a cross-correlation operation, which can achieve a good performance in low signal-to-noise ratio (SNR) conditions. In the proposed algorithm, the problem of angle estimation is converted to estimating the frequency of the cross-correlation function (CCF). Experimental results demonstrate the similar performance of the proposed algorithm compared with the traditional amplitude comparison method. It means that the proposed method for angle estimation can be adopted. When adopting the proposed method, future radars may only need wideband signals for both tracking and imaging, which can greatly increase the data rate and strengthen the capability of anti-jamming. More importantly, the estimated angle will not become ambiguous under an arbitrary angle, which can significantly extend the estimated angle range in wideband radars. PMID:27271629
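The key idea, that a relative delay between two LFM channels appears as the beat frequency of their cross-product, can be sketched as follows; this is a simplified single-scatterer illustration with invented parameters and an exaggerated delay, not the authors' full algorithm:

```python
import numpy as np

rng = np.random.default_rng(7)

fs, T, k = 50e6, 100e-6, 2e11         # sample rate, pulse width, chirp rate (assumed)
t = np.arange(int(fs*T)) / fs
tau = 5e-7                             # inter-channel delay (exaggerated for clarity)

s1 = np.exp(1j*np.pi*k*t**2)           # channel 1 LFM echo
s2 = np.exp(1j*np.pi*k*(t - tau)**2)   # channel 2: same chirp, delayed by tau
noise = 0.3*(rng.standard_normal((2, t.size)) + 1j*rng.standard_normal((2, t.size)))
s1, s2 = s1 + noise[0], s2 + noise[1]

# The cross-product s1 * conj(s2) is (up to a constant phase) a tone at f = k*tau
ccf = s1 * np.conj(s2)
nfft = 1 << 14
spec = np.abs(np.fft.fft(ccf * np.hanning(t.size), nfft))
freqs = np.fft.fftfreq(nfft, 1/fs)
f_hat = abs(freqs[np.argmax(spec)])

print(f"estimated delay: {f_hat/k:.2e} s (true {tau:.1e} s)")
# The angle of arrival would then follow from the delay and antenna spacing.
```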
Uncertainty estimation in seismo-acoustic reflection travel time inversion.
Dettmer, Jan; Dosso, Stan E; Holland, Charles W
2007-07-01
This paper develops a nonlinear Bayesian inversion for high-resolution seabed reflection travel time data including rigorous uncertainty estimation and examination of statistical assumptions. Travel time data are picked on seismo-acoustic traces and inverted for a layered sediment sound-velocity model. Particular attention is paid to picking errors which are often biased, correlated, and nonstationary. Non-Toeplitz data covariance matrices are estimated and included in the inversion along with unknown travel time offset (bias) parameters to account for these errors. Simulated experiments show that neglecting error covariances and biases can cause misleading inversion results with unrealistically high confidence. The inversion samples the posterior probability density and provides a solution in terms of one- and two-dimensional marginal probability densities, correlations, and credibility intervals. Statistical assumptions are examined through the data residuals with rigorous statistical tests. The method is applied to shallow-water data collected on the Malta Plateau during the SCARAB98 experiment. PMID:17614476
Adaptive frequency estimation by MUSIC (Multiple Signal Classification) method
NASA Astrophysics Data System (ADS)
Karhunen, Juha; Nieminen, Esko; Joutsensalo, Jyrki
In recent years, the eigenvector-based method called MUSIC has become very popular for estimating the frequencies of sinusoids in additive white noise. Adaptive realizations of the MUSIC method are studied using simulated data. Several of the adaptive realizations seem to give results in practice that are as good as those of the nonadaptive standard realization. The only exceptions are instantaneous gradient type algorithms, which need considerably more samples to achieve comparable performance. A new method is proposed for constructing initial estimates of the signal subspace. The method often dramatically improves the performance of instantaneous gradient type algorithms. The new signal subspace estimate can also be used to define a frequency estimator directly or to simplify eigenvector computation.
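For orientation, the nonadaptive (batch) MUSIC estimator that the adaptive realizations approximate, in its standard textbook form with invented signal parameters:

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(3)

n, m = 4096, 10                        # samples, embedding dimension
f_true = [0.11, 0.13]                  # normalized frequencies (assumed)
t = np.arange(n)
x = sum(np.exp(2j*np.pi*f*t) for f in f_true)
x = x + 0.5*(rng.normal(size=n) + 1j*rng.normal(size=n))

# Sample covariance from overlapping length-m snapshots
X = np.lib.stride_tricks.sliding_window_view(x, m).T    # m x (n-m+1)
R = X @ X.conj().T / X.shape[1]

# Noise subspace: eigenvectors beyond the p strongest (eigh sorts ascending)
p = len(f_true)
w, V = np.linalg.eigh(R)
En = V[:, :m - p]

# MUSIC pseudospectrum peaks where the steering vector is orthogonal to En
freqs = np.linspace(0, 0.5, 2001)
a = np.exp(2j*np.pi*np.outer(np.arange(m), freqs))      # steering vectors
P = 1.0 / np.sum(np.abs(En.conj().T @ a)**2, axis=0)

peaks, _ = find_peaks(P)
top = peaks[np.argsort(P[peaks])[-2:]]
print(np.sort(freqs[top]))             # should recover ~0.11 and ~0.13
```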
Methods for Estimating Uncertainty in Factor Analytic Solutions
The EPA PMF (Environmental Protection Agency positive matrix factorization) version 5.0 and the underlying multilinear engine-executable ME-2 contain three methods for estimating uncertainty in factor analytic models: classical bootstrap (BS), displacement of factor elements (DI...
Evapotranspiration: Mass balance measurements compared with flux estimation methods
Technology Transfer Automated Retrieval System (TEKTRAN)
Evapotranspiration (ET) may be measured by mass balance methods and estimated by flux sensing methods. The mass balance methods are typically restricted in terms of the area that can be represented (e.g., surface area of weighing lysimeter (LYS) or equivalent representative area of neutron probe (NP...
Recent developments in the methods of estimating shooting distance.
Zeichner, Arie; Glattstein, Baruch
2002-03-01
A review of developments during the past 10 years in the methods of estimating shooting distance is provided. This review discusses the examination of clothing targets, cadavers, and exhibits that cannot be processed in the laboratory. The methods include visual/microscopic examinations, color tests, and instrumental analysis of the gunshot residue deposits around the bullet entrance holes. The review does not cover shooting distance estimation from shotguns that fired pellet loads. PMID:12805985
Using the Mercy Method for Weight Estimation in Indian Children
Batmanabane, Gitanjali; Jena, Pradeep Kumar; Dikshit, Roshan
2015-01-01
This study was designed to compare the performance of a new weight estimation strategy (Mercy Method) with 12 existing weight estimation methods (APLS, Best Guess, Broselow, Leffler, Luscombe-Owens, Nelson, Shann, Theron, Traub-Johnson, Traub-Kichen) in children from India. Otherwise healthy children, 2 months to 16 years, were enrolled and weight, height, humeral length (HL), and mid-upper arm circumference (MUAC) were obtained by trained raters. Weight estimation was performed as described for each method. Predicted weights were regressed against actual weights and the slope, intercept, and Pearson correlation coefficient estimated. Agreement between estimated weight and actual weight was determined using Bland–Altman plots with log-transformation. Predictive performance of each method was assessed using mean error (ME), mean percentage error (MPE), and root mean square error (RMSE). Three hundred seventy-five children (7.5 ± 4.3 years, 22.1 ± 12.3 kg, 116.2 ± 26.3 cm) participated in this study. The Mercy Method (MM) offered the best correlation between actual and estimated weight when compared with the other methods (r2 = .967 vs .517-.844). The MM also demonstrated the lowest ME, MPE, and RMSE. Finally, the MM estimated weight within 20% of actual for nearly all children (96%) as opposed to the other methods for which these values ranged from 14% to 63%. The MM performed extremely well in Indian children with performance characteristics comparable to those observed for US children in whom the method was developed. It appears that the MM can be used in Indian children without modification, extending the utility of this weight estimation strategy beyond Western populations. PMID:27335932
Estimation and classification by sigmoids based on mutual information
NASA Technical Reports Server (NTRS)
Baram, Yoram
1994-01-01
An estimate of the probability density function of a random vector is obtained by maximizing the mutual information between the input and the output of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's method, applied to an estimated density, yields a recursive maximum likelihood estimator, consisting of a single internal layer of sigmoids, for a random variable or a random sequence. Applications to diamond classification and to the prediction of a sunspot process are demonstrated.
An investigation of new methods for estimating parameter sensitivities
NASA Technical Reports Server (NTRS)
Beltracchi, Todd J.; Gabriele, Gary A.
1988-01-01
Parameter sensitivity is defined as the estimation of changes in the modeling functions and the design variables due to small changes in the fixed parameters of the formulation. The current methods for estimating parameter sensitivities either require second-order information that is difficult to obtain or do not return reliable estimates for the derivatives. Additionally, all of the methods assume that the set of active constraints does not change in a neighborhood of the estimation point. If the active set does in fact change, then any extrapolations based on these derivatives may be in error. The objective here is to investigate more efficient new methods for estimating parameter sensitivities when the active set changes. The new method is based on the recursive quadratic programming (RQP) method, used in conjunction with a differencing formula to produce estimates of the sensitivities. This is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFGS method employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate the parameter sensitivity. To handle changes in the active set, a deflection algorithm is proposed for those cases where the new set of active constraints remains linearly independent. For those cases where dependencies occur, a directional derivative is proposed. A few simple examples are included for the algorithm, but extensive testing has not yet been performed.
A posteriori pointwise error estimates for the boundary element method
Paulino, G.H.; Gray, L.J.; Zarikian, V.
1995-01-01
This report presents a new approach for a posteriori pointwise error estimation in the boundary element method. The estimator relies upon the evaluation of hypersingular integral equations, and is therefore intrinsic to the boundary integral equation approach. This property allows some theoretical justification by mathematically correlating the exact and estimated errors. A methodology is developed for approximating the error on the boundary as well as in the interior of the domain. In the interior, error estimates for both the function and its derivatives (e.g. potential and interior gradients for potential problems, displacements and stresses for elasticity problems) are presented. Extensive computational experiments have been performed for the two-dimensional Laplace equation on interior domains, employing Dirichlet and mixed boundary conditions. The results indicate that the error estimates successfully track the form of the exact error curve. Moreover, a reasonable estimate of the magnitude of the actual error is also obtained.
Two-dimensional location and direction estimating method.
Haga, Teruhiro; Tsukamoto, Sosuke; Hoshino, Hiroshi
2008-01-01
In this paper, a method for estimating both the position and the rotation angle of an object on a measurement stage is proposed. The system utilizes radio communication technology and the directivity of an antenna. As a prototype system, a measurement stage (a circle 240 mm in diameter) with 36 antennas placed at 10-degree intervals was developed. Two transmitter antennas are set at a right angle on the stage as the target object, and the position and the rotation angle are estimated by measuring the radio communication efficiency at each of the 36 antennas. The experimental results revealed that even when the estimated location is not very accurate (about a 30 mm error), the rotation angle is estimated accurately (about a 2.33-degree error on average). The results suggest that the proposed method will be useful for estimating the location and the direction of an object. PMID:19162938
A Channelization-Based DOA Estimation Method for Wideband Signals
Guo, Rui; Zhang, Yue; Lin, Qianqiang; Chen, Zengping
2016-01-01
In this paper, we propose a novel direction of arrival (DOA) estimation method for wideband signals with sensor arrays. The proposed method splits the wideband array output into multiple frequency sub-channels and estimates the signal parameters using a digital channelization receiver. Based on the output sub-channels, a channelization-based incoherent signal subspace method (Channelization-ISM) and a channelization-based test of orthogonality of projected subspaces method (Channelization-TOPS) are proposed. Channelization-ISM applies narrowband signal subspace methods to each sub-channel independently; the arithmetic or geometric mean of the DOAs estimated from each sub-channel then gives the final result. Channelization-TOPS measures the orthogonality between the signal and the noise subspaces of the output sub-channels to estimate DOAs. The proposed channelization-based method reasonably isolates signals occupying different bands and improves the output SNR. It outperforms the conventional ISM and TOPS methods in estimation accuracy and dynamic range, especially in real environments. Besides, the parallel processing architecture makes it easy to implement in hardware. A wideband digital array radar (DAR) using direct wideband radio frequency (RF) digitization is presented. Experiments carried out in a microwave anechoic chamber with the wideband DAR demonstrate the performance. The results verify the effectiveness of the proposed method. PMID:27384566
Comparison of several methods for estimating low speed stability derivatives
NASA Technical Reports Server (NTRS)
Fletcher, H. S.
1971-01-01
Methods presented in five different publications have been used to estimate the low-speed stability derivatives of two unpowered airplane configurations. One configuration had unswept lifting surfaces; the other was the D-558-II swept-wing research airplane. The results of the computations were compared with each other, with existing wind-tunnel data, and with flight-test data for the D-558-II configuration to assess the relative merits of the methods for estimating derivatives. The results of the study indicated that, in general, for low subsonic speeds, no one publication's methods appeared consistently better for estimating all derivatives.
A Computationally Efficient Method for Polyphonic Pitch Estimation
NASA Astrophysics Data System (ADS)
Zhou, Ruohua; Reiss, Joshua D.; Mattavelli, Marco; Zoia, Giorgio
2009-12-01
This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimate is obtained by means of a simple peak-picking procedure in the pitch energy spectrum. This spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Then, incorrect estimates are removed according to spectral irregularity and knowledge of the harmonic structures of the notes played on commonly used musical instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and the results demonstrate the high performance and computational efficiency of the approach.
A robust method for rotation estimation using spherical harmonics representation.
Althloothi, Salah; Mahoor, Mohammad H; Voyles, Richard M
2013-06-01
This paper presents a robust method for 3D object rotation estimation using spherical harmonics representation and the unit quaternion vector. The proposed method provides a closed-form solution for rotation estimation without recurrence relations or searching for point correspondences between two objects. The rotation estimation problem is cast as a minimization problem, which finds the optimum rotation angles between two objects of interest in the frequency domain. The optimum rotation angles are obtained by calculating the unit quaternion vector from a symmetric matrix, which is constructed from the two sets of spherical harmonics coefficients using an eigendecomposition technique. Our experimental results on hundreds of 3D objects show that the proposed method is very accurate in rotation estimation, is robust to noisy data and missing surface points, and can handle intra-class variability between 3D objects. PMID:23475364
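The quaternion-from-symmetric-matrix step is closely related to the classical closed-form absolute-orientation solution; the following Python/NumPy sketch is written in that spirit, with hypothetical corresponding 3-D vectors standing in for the two sets of spherical harmonics coefficients.

```python
import numpy as np

def rotation_quaternion(A, B):
    """Optimal unit quaternion aligning point set A with B (Horn-style).

    A, B : (n, 3) arrays of corresponding vectors (hypothetical stand-ins
           for per-degree spherical-harmonic coefficient triples).
    Returns q = (w, x, y, z) as the dominant eigenvector of a 4x4
    symmetric matrix built from the 3x3 cross-covariance.
    """
    S = A.T @ B                       # 3x3 cross-covariance
    Sxx, Sxy, Sxz = S[0]
    Syx, Syy, Syz = S[1]
    Szx, Szy, Szz = S[2]
    K = np.array([
        [Sxx + Syy + Szz, Syz - Szy,       Szx - Sxz,       Sxy - Syx],
        [Syz - Szy,       Sxx - Syy - Szz, Sxy + Syx,       Szx + Sxz],
        [Szx - Sxz,       Sxy + Syx,      -Sxx + Syy - Szz, Syz + Szy],
        [Sxy - Syx,       Szx + Sxz,       Syz + Szy,      -Sxx - Syy + Szz]])
    w, V = np.linalg.eigh(K)          # symmetric matrix: eigh is appropriate
    return V[:, -1]                   # eigenvector of the largest eigenvalue

# Toy check: B is A rotated 90 degrees about z; expect q близко (cos45, 0, 0, sin45).
A = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.], [1., 1., 0.]])
Rz = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
print(rotation_quaternion(A, A @ Rz.T))
```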
A Fast Estimation Method of Railway Passengers' Flow
NASA Astrophysics Data System (ADS)
Nagasaki, Yusaku; Asuka, Masashi; Komaya, Kiyotoshi
To evaluate a train schedule from the viewpoint of passengers' convenience, it is important to know each passenger's choice of trains and transfer stations to arrive at his/her destination. Because such passenger behavior is difficult to measure, estimation methods for railway passenger flow have been proposed to support this kind of evaluation. However, a train schedule planning system equipped with those methods is not practical, because the estimation takes too much time to complete. In this article, the authors propose a fast passenger-flow estimation method that exploits features of the passenger-flow graph, using a preparative search based on each train's arrival time at each station. The authors also show the results of applying the passenger-flow estimation to a railway in an urban area.
Evaluation of the Mercy weight estimation method in Ouelessebougou, Mali
2014-01-01
Background This study evaluated the performance of a new weight estimation strategy (Mercy Method) against four existing weight-estimation methods (APLS, ARC, Broselow, and Nelson) in children from Ouelessebougou, Mali. Methods Otherwise healthy children, 2 mos to 16 yrs, were enrolled, and weight, height, humeral length (HL) and mid-upper arm circumference (MUAC) were obtained by trained raters. Weight estimation was performed as described for each method. Predicted weights were regressed against actual weights. Agreement between estimated and actual weight was determined using Bland-Altman plots with log-transformation. Predictive performance of each method was assessed using mean error (ME), mean percentage error (MPE), root mean square error (RMSE), and the percent predicted within 10, 20 and 30% of actual weight. Results 473 children (8.1 ± 4.8 yr, 25.1 ± 14.5 kg, 120.9 ± 29.5 cm) participated in this study. The Mercy Method (MM) offered the best correlation between actual and estimated weight when compared with the other methods (r2 = 0.97 vs. 0.80-0.94). The MM also demonstrated the lowest ME (0.06 vs. 0.92-4.1 kg), MPE (1.6 vs. 7.8-19.8%) and RMSE (2.6 vs. 3.0-6.7). Finally, the MM estimated weight within 20% of actual for nearly all children (97%) as opposed to the other methods, for which these values ranged from 50-69%. Conclusions The MM performed extremely well in Malian children, with performance characteristics comparable to those observed in the U.S. and India, and could be used in sub-Saharan African children without modification, extending the utility of this weight estimation strategy. PMID:24650051
Demographic estimation methods for plants with unobservable life-states
Kery, M.; Gregg, K.B.; Schaub, M.
2005-01-01
Demographic estimation of vital parameters in plants with an unobservable dormant state is complicated, because the time of death is not known. Conventional methods assume that death occurs at a particular time after a plant has last been seen aboveground, but the consequences of assuming a particular duration of dormancy have never been tested. Capture-recapture methods do not make assumptions about the time of death; however, problems with parameter estimability have not yet been resolved. To date, a critical comparative assessment of these methods is lacking. We analysed data from a 10 year study of Cleistes bifaria, a terrestrial orchid with frequent dormancy, and compared demographic estimates obtained by five varieties of the conventional methods and two capture-recapture methods. All conventional methods produced spurious unity survival estimates for some years or for some states, and estimates of demographic rates sensitive to the time-of-death assumption. In contrast, capture-recapture methods are more parsimonious in terms of assumptions, are based on well-founded theory, and did not produce spurious estimates. In Cleistes, dormant episodes lasted for 1-4 years (mean 1.4, SD 0.74). The capture-recapture models estimated ramet survival rate at 0.86 (SE ≈ 0.01), ranging from 0.77-0.94 (SEs ≤ 0.1) in any one year. The average fraction dormant was estimated at 30% (SE 1.5), ranging from 16-47% (SEs ≤ 5.1) in any one year. Multistate capture-recapture models showed that survival rates were positively related to precipitation in the current year, but transition rates were more strongly related to precipitation in the previous year than in the current year, with more ramets going dormant following dry years. Not all capture-recapture models of interest have estimable parameters; for instance, without excavating plants in years when they do not appear aboveground, it is not possible to obtain independent time-specific survival estimates for dormant plants. We introduce rigorous
A new method for parameter estimation in nonlinear dynamical equations
NASA Astrophysics Data System (ADS)
Wang, Liu; He, Wen-Ping; Liao, Le-Jian; Wan, Shi-Quan; He, Tao
2015-01-01
Parameter estimation is an important scientific problem in various fields such as chaos control, chaos synchronization and other mathematical models. In this paper, a new method for parameter estimation in nonlinear dynamical equations is proposed based on evolutionary modelling (EM). The method exploits the self-organizing, adaptive and self-learning features of EM, which are inspired by biological natural selection, mutation and genetic inheritance. The performance of the new method is demonstrated by various numerical tests on the classic chaotic model, the Lorenz equation (Lorenz 1963). The results indicate that the new method can be used for fast and effective parameter estimation irrespective of whether some or all of the parameters of the Lorenz equation are unknown. Moreover, the new method has a good convergence rate. Noise is inevitable in observational data, so the influence of observational noise on the performance of the presented method has also been investigated. The results indicate that strong noise, such as a signal-to-noise ratio (SNR) of 10 dB, has a larger influence on parameter estimation than relatively weak noise. However, the precision of the parameter estimation remains acceptable for relatively weak noise, e.g. an SNR of 20 or 30 dB, indicating that the presented method also has some robustness to noise.
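The abstract does not spell out the evolutionary scheme, so purely as an illustration the following sketch uses SciPy's differential evolution (a generic evolutionary optimizer, not the paper's EM algorithm) to recover the Lorenz parameters from a short noisy trajectory.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution

def lorenz(t, s, sigma, rho, beta):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t = np.linspace(0, 2, 201)           # short window: chaos limits usable length
true = (10.0, 28.0, 8.0 / 3.0)
ref = solve_ivp(lorenz, (0, 2), [1, 1, 1], t_eval=t, args=true).y
obs = ref + 0.05 * np.random.default_rng(1).standard_normal(ref.shape)

def cost(p):
    # Squared trajectory mismatch between simulation and noisy observations.
    sim = solve_ivp(lorenz, (0, 2), [1, 1, 1], t_eval=t, args=tuple(p)).y
    return np.sum((sim - obs) ** 2)

res = differential_evolution(cost, bounds=[(5, 15), (20, 40), (1, 5)], seed=1)
print(res.x)   # should approach (10, 28, 8/3) for weak noise
```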
Scanning linear estimation: improvements over region of interest (ROI) methods
NASA Astrophysics Data System (ADS)
Kupinski, Meredith K.; Clarkson, Eric W.; Barrett, Harrison H.
2013-03-01
In tomographic medical imaging, a signal activity is typically estimated by summing voxels from a reconstructed image. We introduce an alternative estimation scheme that operates on the raw projection data and offers a substantial improvement, as measured by the ensemble mean-square error (EMSE), when compared to using voxel values from a maximum-likelihood expectation-maximization (MLEM) reconstruction. The scanning-linear (SL) estimator operates on the raw projection data and is derived as a special case of maximum-likelihood estimation with a series of approximations to make the calculation tractable. The approximated likelihood accounts for background randomness, measurement noise and variability in the parameters to be estimated. When signal size and location are known, the SL estimate of signal activity is unbiased, i.e. the average estimate equals the true value. By contrast, unpredictable bias arising from the null functions of the imaging system affect standard algorithms that operate on reconstructed data. The SL method is demonstrated for two different tasks: (1) simultaneously estimating a signal’s size, location and activity; (2) for a fixed signal size and location, estimating activity. Noisy projection data are realistically simulated using measured calibration data from the multi-module multi-resolution small-animal SPECT imaging system. For both tasks, the same set of images is reconstructed using the MLEM algorithm (80 iterations), and the average and maximum values within the region of interest (ROI) are calculated for comparison. This comparison shows dramatic improvements in EMSE for the SL estimates. To show that the bias in ROI estimates affects not only absolute values but also relative differences, such as those used to monitor the response to therapy, the activity estimation task is repeated for three different signal sizes.
Stability and error estimation for Component Adaptive Grid methods
NASA Technical Reports Server (NTRS)
Oliger, Joseph; Zhu, Xiaolei
1994-01-01
Component adaptive grid (CAG) methods for solving hyperbolic partial differential equations (PDE's) are discussed in this paper. Applying recent stability results for a class of numerical methods on uniform grids, the convergence of these methods for linear problems on component adaptive grids is established here. Furthermore, the computational error can be estimated on CAG's using the stability results. Using these estimates, the error can be controlled on CAG's. Thus, the solution can be computed efficiently on CAG's within a given error tolerance. Computational results for time-dependent linear problems in one and two space dimensions are presented.
Assessing the sensitivity of methods for estimating principal causal effects.
Stuart, Elizabeth A; Jo, Booil
2015-12-01
The framework of principal stratification provides a way to think about treatment effects conditional on post-randomization variables, such as level of compliance. In particular, the complier average causal effect (CACE) - the effect of the treatment for those individuals who would comply with their treatment assignment under either treatment condition - is often of substantive interest. However, estimation of the CACE is not always straightforward, with a variety of estimation procedures and underlying assumptions, but little advice to help researchers select between methods. In this article, we discuss and examine two methods that rely on very different assumptions to estimate the CACE: a maximum likelihood ('joint') method that assumes the 'exclusion restriction' (ER), and a propensity score-based method that relies on 'principal ignorability.' We detail the assumptions underlying each approach, and assess each method's sensitivity to both its own assumptions and those of the other method using both simulated data and a motivating example. We find that the ER-based joint approach appears somewhat less sensitive to its assumptions, and that the performance of both methods is significantly improved when there are strong predictors of compliance. Interestingly, we also find that each method performs particularly well when the assumptions of the other approach are violated. These results highlight the importance of carefully selecting an estimation procedure whose assumptions are likely to be satisfied in practice and of having strong predictors of principal stratum membership. PMID:21971481
A Simple Method to Estimate Harvest Index in Grain Crops
Technology Transfer Automated Retrieval System (TEKTRAN)
Several methods have been proposed to simulate yield in crop simulation models. In this work we present a simple method to estimate harvest index (HI) of grain crops based on fractional post-anthesis growth (fG = fraction of growth that occurred post-anthesis). We propose that there is a linear or c...
A Study of Methods for Estimating Distributions of Test Scores.
ERIC Educational Resources Information Center
Cope, Ronald T.; Kolen, Michael J.
This study compared five density estimation techniques applied to samples from a population of 272,244 examinees' ACT English Usage and Mathematics Usage raw scores. Unsmoothed frequencies, kernel method, negative hypergeometric, four-parameter beta compound binomial, and Cureton-Tukey methods were applied to 500 replications of random samples of…
Evaluation of alternative methods for estimating reference evapotranspiration
Technology Transfer Automated Retrieval System (TEKTRAN)
Evapotranspiration is an important component in water-balance and irrigation scheduling models. While the FAO-56 Penman-Monteith method has become the de facto standard for estimating reference evapotranspiration (ETo), it is a complex method requiring several weather parameters. Required weather ...
Precision of two methods for estimating age from burbot otoliths
Edwards, W.H.; Stapanian, M.A.; Stoneman, A.T.
2011-01-01
Lower reproductive success and older age structure are associated with many burbot (Lota lota L.) populations that are declining or of conservation concern. Therefore, reliable methods for estimating the age of burbot are critical for effective assessment and management. In Lake Erie, burbot populations have declined in recent years due to the combined effects of an aging population (x̄ = 10 years in 2007) and extremely low recruitment since 2002. We examined otoliths from burbot (N = 91) collected in Lake Erie in 2007 and compared the estimates of burbot age by two agers, each using two established methods (cracked-and-burned and thin-section) of estimating ages from burbot otoliths. One ager was experienced at estimating age from otoliths, the other was a novice. Agreement (precision) between the two agers was higher for the thin-section method, particularly at ages 6–11 years, based on linear regression analyses and 95% confidence intervals. As expected, precision between the two methods was higher for the more experienced ager. Both agers reported that the thin sections offered clearer views of the annuli, particularly near the margins on otoliths from burbot ages ≥8. Slides for the thin sections required some costly equipment and more than 2 days to prepare. In contrast, preparing the cracked-and-burned samples was comparatively inexpensive and quick. We suggest use of the thin-section method for estimating the age structure of older burbot populations.
Time domain attenuation estimation method from ultrasonic backscattered signals
Ghoshal, Goutam; Oelze, Michael L.
2012-01-01
Ultrasonic attenuation is important not only as a parameter for characterizing tissue but also for compensating other parameters that are used to classify tissues. Several techniques have been explored for estimating ultrasonic attenuation from backscattered signals. In the present study, a technique is developed to estimate the local ultrasonic attenuation coefficient by analyzing the time domain backscattered signal. The proposed method incorporates an objective function that combines the diffraction pattern of the source/receiver with the attenuation slope in an integral equation. The technique was assessed through simulations and validated through experiments with a tissue-mimicking phantom and fresh rabbit liver samples. The attenuation values estimated using the proposed technique were compared with the attenuation estimated using insertion loss measurements. For a data block size of 15 pulse lengths axially and 15 beamwidths laterally, the mean attenuation estimates from the tissue-mimicking phantoms were within 10% of the estimates using insertion loss measurements. With a data block size of 20 pulse lengths axially and 20 beamwidths laterally, the error in the attenuation values estimated from the liver samples was within 10% of the attenuation values estimated from the insertion loss measurements. PMID:22779499
Estimating Population Size Using the Network Scale Up Method
Maltiel, Rachael; Raftery, Adrian E.; McCormick, Tyler H.; Baraff, Aaron J.
2015-01-01
We develop methods for estimating the size of hard-to-reach populations from data collected using network-based questions on standard surveys. Such data arise by asking respondents how many people they know in a specific group (e.g. people named Michael, intravenous drug users). The Network Scale up Method (NSUM) is a tool for producing population size estimates using these indirect measures of respondents’ networks. Killworth et al. (1998a,b) proposed maximum likelihood estimators of population size for a fixed effects model in which respondents’ degrees or personal network sizes are treated as fixed. We extend this by treating personal network sizes as random effects, yielding principled statements of uncertainty. This allows us to generalize the model to account for variation in people’s propensity to know people in particular subgroups (barrier effects), such as their tendency to know people like themselves, as well as their lack of awareness of or reluctance to acknowledge their contacts’ group memberships (transmission bias). NSUM estimates also suffer from recall bias, in which respondents tend to underestimate the number of members of larger groups that they know, and conversely for smaller groups. We propose a data-driven adjustment method to deal with this. Our methods perform well in simulation studies, generating improved estimates and calibrated uncertainty intervals, as well as in back estimates of real sample data. We apply them to data from a study of HIV/AIDS prevalence in Curitiba, Brazil. Our results show that when transmission bias is present, external information about its likely extent can greatly improve the estimates. The methods are implemented in the NSUM R package. PMID:26949438
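The basic scale-up estimator that underlies NSUM (the fixed-degree estimator of Killworth et al., which the paper extends with random effects and bias adjustments) can be sketched in a few lines; the data below are hypothetical.

```python
import numpy as np

def nsum_basic(hidden_counts, known_counts, known_sizes, total_pop):
    """Killworth-style NSUM point estimate of a hidden population's size.

    hidden_counts : per-respondent "how many X do you know?" answers
    known_counts  : (n_resp, n_groups) counts for groups of known size
    known_sizes   : sizes of those known reference groups
    total_pop     : size of the total population
    """
    # Estimated personal network size (degree) of each respondent.
    degrees = known_counts.sum(axis=1) * total_pop / np.sum(known_sizes)
    # Scale-up: fraction of total network ties into the hidden group.
    return hidden_counts.sum() / degrees.sum() * total_pop

# Hypothetical toy survey: 500 respondents, 3 reference groups.
rng = np.random.default_rng(0)
known_sizes = np.array([20000, 5000, 1000])
known_counts = rng.poisson([2.0, 0.5, 0.1], size=(500, 3))
hidden = rng.poisson(0.3, size=500)
print(nsum_basic(hidden, known_counts, known_sizes, total_pop=1_000_000))
```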
Benchmarking Method for Estimation of Biogas Upgrading Schemes
NASA Astrophysics Data System (ADS)
Blumberga, D.; Kuplais, Ģ.; Veidenbergs, I.; Dāce, E.
2009-01-01
The paper describes a new benchmarking method proposed for the estimation of different biogas upgrading schemes. The method has been developed to compare the indicators of alternative biogas purification and upgrading solutions and their threshold values. The chosen indicators cover both economic and ecological aspects of these solutions, e.g. the prime cost of biogas purification and storage, and the cost efficiency of greenhouse gas emission reduction. The proposed benchmarking method has been tested at "Daibe", a landfill for solid municipal waste.
NASA Astrophysics Data System (ADS)
Wang, Lipo; Peters, Norbert
2008-06-01
Dissipation element analysis is a new approach for studying turbulent scalar fields. Gradient trajectories starting from each material point in a fluctuating scalar field ϕ'(x⃗,t), in the ascending and descending directions, will inevitably reach a maximal and a minimal point. The ensemble of material points sharing the same pair of ending points is named a dissipation element. Dissipation elements can be parametrized by the length scale l and the scalar difference Δϕ', defined respectively as the length of the straight line connecting the two extremal points and the scalar difference at these points. The decomposition of a turbulent field into dissipation elements is space filling. This allows us to reconstruct certain statistical quantities of fine-scale turbulence which cannot be obtained otherwise. The marginal probability density function (PDF) of the length scale distribution was modeled in previous work based on a Poisson random cutting-reconnection process and compared to data from direct numerical simulation (DNS). The joint PDF of l and Δϕ' contains the important information needed for the modeling of scalar mixing in turbulence, such as the marginal PDF of the length of elements and conditional moments, as well as their scaling exponents. In order to be able to predict these quantities, there is a need to model the joint PDF. A compensation-defect model is put forward in this work, and the agreement between the model prediction and DNS results is satisfactory.
New method for the estimation of platelet ascorbic acid
Lloyd, J. V.; Davis, P. S.; Lander, Harry
1969-01-01
Present techniques for the estimation of platelet ascorbic acid allow interference by other substances in the sample. A new and more specific method of analysis is presented. The proposed method owes its increased specificity to resolution of the extract by thin-layer chromatography, by which means ascorbic acid is separated from the other reducing substances present. The separated ascorbic acid is eluted from the thin layer and estimated by a new and very sensitive procedure: ascorbic acid is made to react with ferric chloride, and the ferrous ions so formed are estimated spectrophotometrically by the coloured derivative which they form with tripyridyl-s-triazine. Results obtained with normal blood platelets were consistently lower than simultaneous determinations by the dinitrophenylhydrazine (DNPH) method. PMID:5798633
Fault detection in electromagnetic suspension systems with state estimation methods
Sinha, P.K.; Zhou, F.B.; Kutiyal, R.S. . Dept. of Engineering)
1993-11-01
High-speed maglev vehicles need a high level of safety, which depends on the reliability of the whole vehicle system. There are many ways of attaining high reliability for the system. The conventional method uses redundant hardware with majority-vote logic circuits. Hardware redundancy costs more, weighs more and occupies more space than analytically redundant methods. Analytically redundant systems use parameter identification and state estimation methods based on system models to detect and isolate faults in instruments (sensors), actuators and components. In this paper the authors use the Luenberger observer to estimate three state variables of the electromagnetic suspension system: position (airgap), vehicle velocity, and vertical acceleration. These estimates are compared with the corresponding sensor outputs for fault detection. In this paper, they consider FDI of the accelerometer, the sensor which provides the ride-quality measurement.
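A minimal sketch of this residual-based scheme, assuming an illustrative discrete-time three-state model rather than the paper's maglev dynamics: a Luenberger observer predicts the outputs, and a residual exceeding a threshold flags a possible sensor fault.

```python
import numpy as np

# Hypothetical model x[k+1] = A x + B u, y = C x, with states
# (airgap, vertical velocity, vertical acceleration); matrices are
# illustrative, not the maglev suspension model from the paper.
A = np.array([[1.0, 0.01, 0.0],
              [0.0, 1.0,  0.01],
              [0.0, 0.0,  0.95]])
B = np.array([[0.0], [0.0], [0.1]])
C = np.eye(3)                    # one sensor per state, as in the paper
L = 0.5 * np.eye(3)              # observer gain; (A - L C) must be stable

x = np.array([0.01, 0.0, 0.0])   # true plant state
xh = np.zeros(3)                 # observer estimate
threshold = 0.05

for k in range(200):
    u = np.array([0.1 * np.sin(0.05 * k)])
    y = C @ x
    if k > 100:
        y[2] += 0.2              # inject an accelerometer fault
    residual = np.abs(y - C @ xh)
    if np.any(residual > threshold):
        print(f"k={k}: possible sensor fault, residual={residual.round(3)}")
        break
    # Luenberger update: model prediction corrected by the output residual.
    xh = A @ xh + B @ u + L @ (y - C @ xh)
    x = A @ x + B @ u
```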
A novel tracer method for estimating sewer exfiltration
NASA Astrophysics Data System (ADS)
Rieckermann, J.; Borsuk, M.; Reichert, P.; Gujer, W.
2005-05-01
A novel method is presented to estimate exfiltration from sewer systems using artificial tracers. The method relies upon use of an upstream indicator signal and a downstream reference signal to eliminate the dependence of exfiltration estimates on the accuracy of discharge measurement. An experimental design, a data analysis procedure, and an uncertainty assessment process are described and illustrated by a case study. In a 2-km reach of unknown condition, exfiltration was estimated at 9.9 ± 2.7%. Uncertainty in this estimate was primarily due to the use of sodium chloride (NaCl) as the tracer substance. NaCl is measured using conductivity, which is present at nonnegligible levels in wastewater, thus confounding accurate identification of tracer peaks. As estimates of exfiltration should have as low a measurement error as possible, future development of the method will concentrate on improved experimental design and tracer selection. Although the method is not intended to replace traditional CCTV inspections, it can provide additional information to urban water managers for rational rehabilitation planning.
Models and estimation methods for clinical HIV-1 data
NASA Astrophysics Data System (ADS)
Verotta, Davide
2005-12-01
Clinical HIV-1 data include many individual factors, such as compliance to treatment, pharmacokinetics, variability with respect to viral dynamics, race, sex, income, etc., which might directly influence or be associated with clinical outcome. These factors need to be taken into account to achieve a better understanding of clinical outcome, and mathematical models can provide a unifying framework to do so. The first objective of this paper is to demonstrate the development of comprehensive HIV-1 dynamics models that describe viral dynamics and also incorporate different factors influencing such dynamics. The second objective of this paper is to describe alternative estimation methods that can be applied to the analysis of data with such models. In particular, we consider: (i) simple but effective two-stage estimation methods, in which data from each patient are analyzed separately and summary statistics derived from the results; (ii) more complex nonlinear mixed effect models, used to pool all the patient data in a single analysis. Bayesian estimation methods are also considered, in particular: (iii) maximum a posteriori approximations, MAP, and (iv) Markov chain Monte Carlo, MCMC. Bayesian methods incorporate prior knowledge into the models, thus avoiding some of the model simplifications introduced when the data are analyzed using two-stage methods or a nonlinear mixed effect framework. We demonstrate the development of the models and the different estimation methods using real AIDS clinical trial data involving patients receiving multiple-drug regimens.
Estimation Method of Body Temperature from Upper Arm Temperature
NASA Astrophysics Data System (ADS)
Suzuki, Arata; Ryu, Kazuteru; Kanai, Nobuyuki
This paper proposes a method for estimating body temperature using the relation between the upper arm temperature and the atmospheric temperature. Conventional methods measure at the armpit or orally, because temperature readings taken at the body surface are influenced by the atmospheric temperature. However, there is a correlation between the body surface temperature and the atmospheric temperature, and by using this correlation the body temperature can be estimated from the body surface temperature. The proposed method makes it possible to measure body temperature with a temperature sensor embedded in the cuff of a blood pressure monitor; therefore, simultaneous measurement of blood pressure and body temperature can be realized. The effectiveness of the proposed method is verified through an actual body temperature experiment. The proposed method might contribute to reducing the workload of medical staff in home medical care, and more.
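A minimal sketch of the described correlation-based estimation, assuming hypothetical calibration data and a simple linear model relating body temperature to the upper-arm and atmospheric temperatures (the paper's actual model is not given in the abstract).

```python
import numpy as np

# Hypothetical calibration data: upper-arm surface and atmospheric
# temperatures paired with reference (core) body temperatures, in deg C.
t_arm = np.array([34.2, 33.8, 35.0, 34.6, 33.5])
t_atm = np.array([22.0, 18.5, 26.0, 24.0, 17.0])
t_body = np.array([36.5, 36.4, 36.7, 36.6, 36.3])

# Fit t_body ~ a * t_arm + b * t_atm + c by ordinary least squares.
X = np.column_stack([t_arm, t_atm, np.ones_like(t_arm)])
coef, *_ = np.linalg.lstsq(X, t_body, rcond=None)

def estimate_body_temp(arm, atm):
    # Apply the fitted linear correction for atmospheric influence.
    return coef[0] * arm + coef[1] * atm + coef[2]

print(estimate_body_temp(34.0, 20.0))   # estimate from cuff-sensor readings
```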
Electromechanical Mode Online Estimation using Regularized Robust RLS Methods
Zhou, Ning; Trudnowski, Daniel; Pierre, John W; Mittelstadt, William
2008-11-01
This paper proposes a regularized robust recursive least squares (R3LS) method for on-line estimation of power-system electromechanical modes based on synchronized phasor measurement unit (PMU) data. The proposed method utilizes an autoregressive moving average exogenous (ARMAX) model to account for typical measurement data, which include low-level pseudo-random probing, ambient, and ringdown data. A robust objective function is utilized to reduce the negative influence of non-typical data, which include outliers and missing data. A dynamic regularization method is introduced to help include a priori knowledge about the system and reduce the influence of under-determined problems. Based on a 17-machine simulation model, it is shown through the Monte-Carlo method that the proposed R3LS method can estimate and track electromechanical modes by effectively using combined typical and non-typical measurement data.
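As a simplified stand-in for R3LS, the following sketch implements ordinary exponentially weighted recursive least squares for an AR model; the paper's method adds the ARMAX structure, a robust loss for outliers and missing data, and dynamic regularization, all omitted here.

```python
import numpy as np

def rls_ar(y, order=4, lam=0.995, delta=100.0):
    """Exponentially weighted RLS estimate of AR(order) coefficients.

    A classic RLS update only: no robust re-weighting of non-typical
    data and no dynamic regularization as in the paper's R3LS.
    """
    theta = np.zeros(order)          # AR coefficient estimates
    P = delta * np.eye(order)        # inverse-covariance initialization
    for k in range(order, len(y)):
        phi = y[k - order:k][::-1]   # regressor of past samples
        e = y[k] - phi @ theta       # one-step prediction error
        K = P @ phi / (lam + phi @ P @ phi)
        theta = theta + K * e
        P = (P - np.outer(K, phi @ P)) / lam
    return theta

# Example: track a lightly damped 0.3 Hz oscillation sampled at 10 Hz.
rng = np.random.default_rng(0)
t = np.arange(0, 60, 0.1)
y = np.exp(-0.05 * t) * np.sin(2 * np.pi * 0.3 * t) \
    + 0.01 * rng.standard_normal(len(t))
a = rls_ar(y)
# The dominant root pair of the AR polynomial encodes mode frequency/damping.
print(np.roots(np.r_[1.0, -a]))
```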
A review of action estimation methods for galactic dynamics
NASA Astrophysics Data System (ADS)
Sanders, Jason L.; Binney, James
2016-04-01
We review the available methods for estimating actions, angles and frequencies of orbits in both axisymmetric and triaxial potentials. The methods are separated into two classes. Unless an orbit has been trapped by a resonance, convergent, or iterative, methods are able to recover the actions to arbitrarily high accuracy given sufficient computing time. Faster non-convergent methods rely on the potential being sufficiently close to a separable potential, and the accuracy of the action estimate cannot be improved through further computation. We critically compare the accuracy of the methods and the required computation time for a range of orbits in an axisymmetric multicomponent Galactic potential. We introduce a new method for estimating actions that builds on the adiabatic approximation of Schönrich & Binney and discuss the accuracy required for the actions, angles and frequencies using suitable distribution functions for the thin and thick discs, the stellar halo and a star stream. We conclude that for studies of the disc and smooth halo component of the Milky Way, the most suitable compromise between speed and accuracy is the Stäckel Fudge, whilst when studying streams the non-convergent methods do not offer sufficient accuracy and the most suitable method is computing the actions from an orbit integration via a generating function. All the software used in this study can be downloaded from https://github.com/jls713/tact.
Statistical methods of parameter estimation for deterministically chaotic time series.
Pisarenko, V F; Sornette, D
2004-03-01
We discuss the possibility of applying some standard statistical methods (the least-squares method, the maximum likelihood method, and the method of statistical moments for the estimation of parameters) to a deterministically chaotic low-dimensional dynamic system (the logistic map) containing observational noise. A "segmentation fitting" maximum likelihood (ML) method is suggested to estimate the structural parameter of the logistic map along with the initial value x(1), considered as an additional unknown parameter. The segmentation fitting method, called "piece-wise" ML, is similar in spirit to, but simpler than and with smaller bias than, the "multiple shooting" method previously proposed. Comparisons with different previously proposed techniques on simulated numerical examples give favorable results (at least for the investigated combinations of sample size N and noise level). Moreover, unlike some suggested techniques, our method does not require a priori knowledge of the noise variance. We also clarify the nature of the inherent difficulties in the statistical analysis of deterministically chaotic time series and the status of previously proposed Bayesian approaches. We note the trade-off between the need to use a large number of data points in the ML analysis to decrease the bias (to guarantee consistency of the estimation) and the unstable nature of dynamical trajectories, with exponentially fast loss of memory of the initial condition. The method of statistical moments for the estimation of the parameter of the logistic map is discussed. This method seems to be the only method whose consistency for deterministically chaotic time series has so far been proved theoretically (not only numerically). PMID:15089376
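For Gaussian observational noise, ML fitting of the structural parameter together with the initial value reduces to least squares; the following sketch fits a short segment by multi-start optimization, in the spirit of (but much simpler than) the piece-wise ML method.

```python
import numpy as np
from scipy.optimize import minimize

def traj(a, x1, n):
    """Iterate the logistic map x[t+1] = a * x[t] * (1 - x[t])."""
    x = np.empty(n)
    x[0] = x1
    for t in range(n - 1):
        x[t + 1] = a * x[t] * (1 - x[t])
    return x

rng = np.random.default_rng(2)
n = 12                               # short segment: memory of x(1) decays fast
y = traj(3.8, 0.3, n) + 0.01 * rng.standard_normal(n)

# With Gaussian observational noise, ML in (a, x1) is least squares.
cost = lambda p: np.sum((y - traj(p[0], p[1], n)) ** 2)
best = min((minimize(cost, [a0, x0], method="Nelder-Mead")
            for a0 in (3.6, 3.8, 4.0) for x0 in (0.2, 0.4, 0.6)),
           key=lambda r: r.fun)      # multi-start to handle a rugged surface
print(best.x)                        # should be close to (3.8, 0.3)
```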
Hydrological model uncertainty due to spatial evapotranspiration estimation methods
NASA Astrophysics Data System (ADS)
Yu, Xuan; Lamačová, Anna; Duffy, Christopher; Krám, Pavel; Hruška, Jakub
2016-05-01
Evapotranspiration (ET) continues to be a difficult process to estimate in seasonal and long-term water balances in catchment models. Approaches to estimating ET typically use vegetation parameters (e.g., leaf area index [LAI], interception capacity) obtained from field observation, remote sensing data, national or global land cover products, and/or simulated by ecosystem models. In this study we attempt to quantify the uncertainty that spatial evapotranspiration estimation introduces into hydrological simulations when the age of the forest is not precisely known. The Penn State Integrated Hydrologic Model (PIHM) was implemented for the Lysina headwater catchment, located at 50°03′N, 12°40′E in the western part of the Czech Republic. The spatial forest patterns were digitized from forest age maps made available by the Czech Forest Administration. Two ET methods were implemented in the catchment model: the Biome-BGC forest growth sub-model (one-way coupled to PIHM) and a fixed seasonal LAI method. From these two approaches, simulation scenarios were developed by combining the estimated spatial forest age maps with the two ET estimation methods to drive PIHM. A set of spatial hydrologic regime and streamflow regime indices were calculated from the modeling results for each method. Intercomparison of the hydrological responses to the spatial vegetation patterns suggested considerable variation in soil moisture and recharge and a small uncertainty in the groundwater table elevation and streamflow. The hydrologic modeling with ET estimated by Biome-BGC generated less uncertainty owing to its plant physiology-based method. The implication of this research is that the overall hydrologic variability induced by uncertain management practices was reduced by implementing vegetation models in the catchment models.
Global parameter estimation methods for stochastic biochemical systems
2010-01-01
Background The importance of stochasticity in cellular processes having low numbers of molecules has resulted in the development of stochastic models such as the chemical master equation. As in other modelling frameworks, the accompanying rate constants are important for end-applications like analyzing system properties (e.g. robustness) or predicting the effects of genetic perturbations. Prior knowledge of kinetic constants is usually limited, and the model identification routine typically includes parameter estimation from experimental data. Although the subject of parameter estimation is well-established for deterministic models, it is not yet routine for the chemical master equation. In addition, recent advances in measurement technology have made the quantification of genetic substrates possible down to single-molecule levels. Thus, the purpose of this work is to develop practical and effective methods for estimating kinetic model parameters in the chemical master equation and other stochastic models from single cell and cell population experimental data. Results Three parameter estimation methods are proposed based on the maximum likelihood and density function distance, including probability and cumulative density functions. Since stochastic models such as chemical master equations are typically solved using a Monte Carlo approach in which only a finite number of Monte Carlo realizations are computationally practical, specific considerations are given to account for the effect of finite sampling in the histogram binning of the state density functions. Applications to three practical case studies showed that while the maximum likelihood method can effectively handle low replicate measurements, the density function distance methods, particularly the cumulative density function distance estimation, are more robust in estimating the parameters with consistently higher accuracy, even for systems showing multimodality. Conclusions The parameter estimation methodologies
MONTE CARLO ERROR ESTIMATION APPLIED TO NONDESTRUCTIVE ASSAY METHODS
Estep, R.; et al.
2000-06-01
Monte Carlo randomization of nuclear counting data into N replicate sets is the basis of a simple and effective method for estimating error propagation through complex analysis algorithms such as those using neural networks or tomographic image reconstructions. The error distributions of properly simulated replicate data sets mimic those of actual replicate measurements and can be used to estimate the std. dev. for an assay along with other statistical quantities. We have used this technique to estimate the standard deviation in radionuclide masses determined using the tomographic gamma scanner (TGS) and combined thermal/epithermal neutron (CTEN) methods. The effectiveness of this approach is demonstrated by a comparison of our Monte Carlo error estimates with the error distributions in actual replicate measurements and simulations of measurements. We found that the std. dev. estimated this way quickly converges to an accurate value on average and has a predictable error distribution similar to N actual repeat measurements. The main drawback of the Monte Carlo method is that N additional analyses of the data are required, which may be prohibitively time consuming with slow analysis algorithms.
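A generic sketch of the replicate-randomization idea, assuming Poisson counting statistics and a stand-in analysis function in place of a TGS or CTEN reconstruction.

```python
import numpy as np

def mc_uncertainty(counts, analyze, n_rep=100, seed=0):
    """Monte Carlo error estimate for a complex analysis of counting data.

    counts  : array of raw nuclear counts (assumed Poisson-distributed)
    analyze : the full analysis chain mapping counts -> assay value
              (here a stand-in for e.g. a tomographic reconstruction)
    Returns the assay value and its estimated standard deviation.
    """
    rng = np.random.default_rng(seed)
    # Each replicate re-randomizes the counts with Poisson statistics,
    # then runs the complete analysis algorithm on the replicate set.
    replicates = [analyze(rng.poisson(counts)) for _ in range(n_rep)]
    return analyze(counts), np.std(replicates, ddof=1)

# Toy stand-in analysis: a weighted sum over 32 detector channels.
weights = np.linspace(0.5, 1.5, 32)
analyze = lambda c: float(weights @ c)
counts = np.full(32, 400)
print(mc_uncertainty(counts, analyze))
```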
Estimation of uncertainty for contour method residual stress measurements
Olson, Mitchell D.; DeWald, Adrian T.; Prime, Michael B.; Hill, Michael R.
2014-12-03
This paper describes a methodology for the estimation of measurement uncertainty for the contour method, where the contour method is an experimental technique for measuring a two-dimensional map of residual stress over a plane. Random error sources including the error arising from noise in displacement measurements and the smoothing of the displacement surfaces are accounted for in the uncertainty analysis. The output is a two-dimensional, spatially varying uncertainty estimate such that every point on the cross-section where residual stress is determined has a corresponding uncertainty value. Both numerical and physical experiments are reported, which are used to support the usefulness of the proposed uncertainty estimator. The uncertainty estimator shows the contour method to have larger uncertainty near the perimeter of the measurement plane. For the experiments, which were performed on a quenched aluminum bar with a cross section of 51 × 76 mm, the estimated uncertainty was approximately 5 MPa (σ/E = 7 · 10⁻⁵) over the majority of the cross-section, with localized areas of higher uncertainty, up to 10 MPa (σ/E = 14 · 10⁻⁵).
A new colorimetric method for the estimation of glycosylated hemoglobin.
Nayak, S S; Pattabiraman, T N
1981-02-01
A new colorimetric method, based on the phenol-sulphuric acid reaction of carbohydrates, is described for the determination of glycosylated hemoglobin. Hemolyzates were treated with 1 mol/l oxalic acid in 2 mol/l HCl for 4 h at 100 degrees C, the protein was precipitated with trichloroacetic acid, and the free sugars and hydroxymethyl furfural in the protein-free supernatant were treated with phenol and sulphuric acid to form the color. The new method is compared to the thiobarbituric acid method and the ion-exchange chromatographic method for the estimation of glycosylated hemoglobin in normal and diabetic subjects. The increase in glycosylated hemoglobin in diabetic patients as estimated by the phenol-sulphuric acid method was more significant (P less than 0.001) than the increase observed by the thiobarbituric acid method (P less than 0.01). The correlation between the phenol-sulphuric acid method and the column method was better (r = 0.91) than that between the thiobarbituric acid method and the column method (r = 0.84). No significant correlation between fasting and postprandial blood sugar levels and the glycosylated hemoglobin level as determined by the two colorimetric methods was observed in diabetic patients. PMID:7226519
Correction of Misclassifications Using a Proximity-Based Estimation Method
NASA Astrophysics Data System (ADS)
Niemistö, Antti; Shmulevich, Ilya; Lukin, Vladimir V.; Dolia, Alexander N.; Yli-Harja, Olli
2004-12-01
An estimation method for correcting misclassifications in signal and image processing is presented. The method is based on the use of context-based (temporal or spatial) information in a sliding-window fashion. The classes can be purely nominal, that is, an ordering of the classes is not required. The method employs nonlinear operations based on class proximities defined by a proximity matrix. Two case studies are presented. In the first, the proposed method is applied to one-dimensional signals for processing data that are obtained by a musical key-finding algorithm. In the second, the estimation method is applied to two-dimensional signals for correction of misclassifications in images. In the first case study, the proximity matrix employed by the estimation method follows directly from music perception studies, whereas in the second case study, the optimal proximity matrix is obtained with genetic algorithms as the learning rule in a training-based optimization framework. Simulation results are presented in both case studies and the degree of improvement in classification accuracy that is obtained by the proposed method is assessed statistically using Kappa analysis.
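One plausible instance of the window-based class-proximity operation described above (the paper's exact operator is not given in the abstract) is a generalized median-type filter that relabels each sample with the class minimizing the summed proximity cost over the window; a sketch with a hypothetical proximity matrix follows.

```python
import numpy as np

def proximity_filter(labels, D, window=5):
    """Correct misclassifications in a 1-D sequence of nominal labels.

    labels : integer class labels (purely nominal; no ordering needed)
    D      : (n_classes, n_classes) proximity matrix; D[i, j] is small
             when classes i and j are 'close'
    Each label is replaced by the class with the smallest total
    proximity cost to the labels inside the sliding window.
    """
    half = window // 2
    out = labels.copy()
    for i in range(len(labels)):
        neigh = labels[max(0, i - half):i + half + 1]
        costs = D[:, neigh].sum(axis=1)   # total cost per candidate class
        out[i] = np.argmin(costs)
    return out

# Toy example: 3 classes, class 2 'far' from 0 and 1 (hypothetical matrix).
D = np.array([[0, 1, 4],
              [1, 0, 4],
              [4, 4, 0]])
noisy = np.array([0, 0, 2, 0, 1, 1, 2, 1, 1])
print(proximity_filter(noisy, D))   # isolated 2's are relabeled
```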
Detecting diversity: emerging methods to estimate species diversity.
Iknayan, Kelly J; Tingley, Morgan W; Furnas, Brett J; Beissinger, Steven R
2014-02-01
Estimates of species richness and diversity are central to community and macroecology and are frequently used in conservation planning. Commonly used diversity metrics account for undetected species primarily by controlling for sampling effort. Yet the probability of detecting an individual can vary among species, observers, survey methods, and sites. We review emerging methods to estimate alpha, beta, gamma, and metacommunity diversity through hierarchical multispecies occupancy models (MSOMs) and multispecies abundance models (MSAMs) that explicitly incorporate observation error in the detection process for species or individuals. We examine advantages, limitations, and assumptions of these detection-based hierarchical models for estimating species diversity. Accounting for imperfect detection using these approaches has influenced conclusions of comparative community studies and creates new opportunities for testing theory. PMID:24315534
Inverse method for estimating shear stress in machining
NASA Astrophysics Data System (ADS)
Burns, T. J.; Mates, S. P.; Rhorer, R. L.; Whitenton, E. P.; Basak, D.
2016-01-01
An inverse method is presented for estimating shear stress in the work material in the region of chip-tool contact along the rake face of the tool during orthogonal machining. The method is motivated by a model of heat generation in the chip, which is based on a two-zone contact model for friction along the rake face, and an estimate of the steady-state flow of heat into the cutting tool. Given an experimentally determined discrete set of steady-state temperature measurements along the rake face of the tool, it is shown how to estimate the corresponding shear stress distribution on the rake face, even when no friction model is specified.
NASA Astrophysics Data System (ADS)
Forbes, B. T.
2015-12-01
Due to the predominantly arid climate in Arizona, access to adequate water supply is vital to the economic development and livelihood of the State. Water supply has become increasingly important during periods of prolonged drought, which has strained reservoir water levels in the Desert Southwest over past years. Arizona's water use is dominated by agriculture, consuming about seventy-five percent of the total annual water demand. Tracking current agricultural water use is important for managers and policy makers so that current water demand can be assessed and current information can be used to forecast future demands. However, many croplands in Arizona are irrigated outside of areas where water use reporting is mandatory. To estimate irrigation withdrawals on these lands, we use a combination of field verification, evapotranspiration (ET) estimation, and irrigation system qualification. ET is typically estimated in Arizona using the Modified Blaney-Criddle method which uses meteorological data to estimate annual crop water requirements. The Modified Blaney-Criddle method assumes crops are irrigated to their full potential over the entire growing season, which may or may not be realistic. We now use the Operational Simplified Surface Energy Balance (SSEBop) ET data in a remote-sensing and energy-balance framework to estimate cropland ET. SSEBop data are of sufficient resolution (30m by 30m) for estimation of field-scale cropland water use. We evaluate our SSEBop-based estimates using ground-truth information and irrigation system qualification obtained in the field. Our approach gives the end user an estimate of crop consumptive use as well as inefficiencies in irrigation system performance—both of which are needed by water managers for tracking irrigated water use in Arizona.
Optimal Input Signal Design for Data-Centric Estimation Methods.
Deshpande, Sunil; Rivera, Daniel E
2013-01-01
Data-centric estimation methods such as Model-on-Demand and Direct Weight Optimization form attractive techniques for estimating unknown functions from noisy data. These methods rely on generating a local function approximation from a database of regressors at the current operating point with the process repeated at each new operating point. This paper examines the design of optimal input signals formulated to produce informative data to be used by local modeling procedures. The proposed method specifically addresses the distribution of the regressor vectors. The design is examined for a linear time-invariant system under amplitude constraints on the input. The resulting optimization problem is solved using semidefinite relaxation methods. Numerical examples show the benefits in comparison to a classical PRBS input design. PMID:24317042
A study of methods to estimate debris flow velocity
Prochaska, A.B.; Santi, P.M.; Higgins, J.D.; Cannon, S.H.
2008-01-01
Debris flow velocities are commonly back-calculated from superelevation events, which require subjective estimates of the radii of curvature of bends in the debris flow channel, or predicted using flow equations that require the selection of appropriate rheological models and material property inputs. This research investigated difficulties associated with the use of these conventional velocity estimation methods. Radii of curvature estimates were found to vary with the extent of the channel investigated and with the scale of the media used, and back-calculated velocities varied among different investigated locations along a channel. Distinct populations of Bingham properties were found to exist between those measured by laboratory tests and those back-calculated from field data; thus, laboratory-obtained values would not be representative of field-scale debris flow behavior. To avoid these difficulties with conventional methods, a new preliminary velocity estimation method is presented that statistically relates flow velocity to the channel slope and the flow depth. This method presents ranges of reasonable velocity predictions based on 30 previously measured velocities. © 2008 Springer-Verlag.
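For reference, the conventional superelevation back-calculation the authors critique is the forced-vortex relation v = sqrt(Rc g dh / (k b)); the sketch below applies it with illustrative, assumed inputs, including the empirical correction factor k.

```python
import math

# Hedged sketch of the forced-vortex superelevation back-calculation.
def superelevation_velocity(radius_m, superelev_m, width_m, k=1.0):
    """radius_m: radius of curvature of the bend; superelev_m: flow-surface
    elevation difference across the channel; width_m: flow width;
    k: empirical correction factor (assumed here)."""
    g = 9.81  # m/s^2
    return math.sqrt(radius_m * g * superelev_m / (k * width_m))

print(f"{superelevation_velocity(30.0, 1.5, 8.0):.1f} m/s")
```

The subjectivity the abstract documents enters through radius_m (dependent on mapping extent and scale) and k, which is why the authors propose a slope- and depth-based statistical alternative.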
Computational methods for estimation of parameters in hyperbolic systems
NASA Technical Reports Server (NTRS)
Banks, H. T.; Ito, K.; Murphy, K. A.
1983-01-01
Approximation techniques for estimating spatially varying coefficients and unknown boundary parameters in second order hyperbolic systems are discussed. Methods for state approximation (cubic splines, tau-Legendre) and approximation of function space parameters (interpolatory splines) are outlined, and numerical findings for use of the resulting schemes in model "one-dimensional seismic inversion" problems are summarized.
Stress intensity estimates by a computer assisted photoelastic method
NASA Technical Reports Server (NTRS)
Smith, C. W.
1977-01-01
Following an introductory history, the frozen stress photoelastic method is reviewed together with analytical and experimental aspects of cracks in photoelastic models. Analytical foundations are then presented upon which a computer assisted frozen stress photoelastic technique is based for extracting estimates of stress intensity factors from three-dimensional cracked body problems. The use of the method is demonstrated for two currently important three-dimensional crack problems.
Nonparametric methods for drought severity estimation at ungauged sites
NASA Astrophysics Data System (ADS)
Sadri, S.; Burn, D. H.
2012-12-01
The objective of frequency analysis is to estimate, for extreme events such as drought severity or duration, the relationship between an event and its associated return period at a catchment. Neural networks and other artificial intelligence approaches to function estimation and regression analysis are relatively new techniques in engineering, providing an attractive alternative to traditional statistical models. There are, however, few applications of neural networks and support vector machines in the area of severity quantile estimation for drought frequency analysis. In this paper, we compare three methods for this task: multiple linear regression, radial basis function neural networks, and least squares support vector regression (LS-SVR). The area selected for this study includes 32 catchments in the Canadian Prairies. Drought severities are extracted from each catchment and fitted to a Pearson type III distribution, which acts as the source of observed values. For each method-duration pair, we use a jackknife algorithm to produce estimated values at each site. The results from these three approaches are compared and analyzed, and it is found that LS-SVR provides the best quantile estimates and extrapolation capacity.
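LS-SVR is closely related to kernel ridge regression, so the hedged sketch below uses scikit-learn's KernelRidge as a stand-in, with synthetic catchment attributes and leave-one-out (jackknife-style) prediction; all data and hyperparameters are assumptions.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(1)
X = rng.normal(size=(32, 3))   # 32 catchments, 3 physiographic attributes
y = X @ np.array([1.0, -0.5, 0.3]) + 0.1 * rng.normal(size=32)  # severity quantile

model = KernelRidge(kernel="rbf", alpha=0.1, gamma=0.5)
y_hat = cross_val_predict(model, X, y, cv=LeaveOneOut())  # jackknife-style
print(f"jackknife RMSE: {np.sqrt(np.mean((y - y_hat) ** 2)):.3f}")
```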
Three Different Methods of Estimating LAI in a Small Watershed
NASA Astrophysics Data System (ADS)
Speckman, H. N.; Ewers, B. E.; Beverly, D.
2015-12-01
Leaf area index (LAI) is a critical input of models that improve predictive understanding of ecology, hydrology, and climate change. Multiple techniques exist to quantify LAI; most are labor intensive, and they often fail to converge on similar estimates. Recent large-scale bark beetle induced mortality greatly altered LAI, which is now dominated by younger and more metabolically active trees compared to the pre-beetle forest. Tree mortality increases error in optical LAI estimates due to the lack of differentiation between live and dead branches in dense canopy. Our study aims to quantify LAI using three different LAI methods, and then to compare the techniques to each other and to topographic drivers to develop an effective predictive model of LAI. This study focuses on quantifying LAI within a small (~120 ha) beetle-infested watershed in Wyoming's Snowy Range Mountains. The first technique estimated LAI using in-situ hemispherical canopy photographs that were then analyzed with Hemisfer software. The second technique applied the Kaufmann (1982) allometrics to forest inventories conducted throughout the watershed, accounting for stand basal area, species composition, and the extent of bark beetle driven mortality. The final technique used airborne light detection and ranging (LIDAR) first returns to estimate canopy heights and crown area. LIDAR final returns provided topographical information and were then ground-truthed during forest inventories. Once the data were collected, a fractal analysis was conducted comparing the three methods. Species composition was driven by slope position and elevation. Ultimately the three different techniques provided very different estimates of LAI, but each had its advantages: estimates from hemisphere photos were well correlated with SWE and snow depth measurements, forest inventories provided insight into stand health and composition, and LIDAR were able to quickly and
A New Method for Deriving Global Estimates of Maternal Mortality.
Wilmoth, John R; Mizoguchi, Nobuko; Oestergaard, Mikkel Z; Say, Lale; Mathers, Colin D; Zureick-Brown, Sarah; Inoue, Mie; Chou, Doris
2012-07-13
Maternal mortality is widely regarded as a key indicator of population health and of social and economic development. Its levels and trends are monitored closely by the United Nations and others, inspired in part by the UN's Millennium Development Goals (MDGs), which call for a three-fourths reduction in the maternal mortality ratio between 1990 and 2015. Unfortunately, the empirical basis for such monitoring remains quite weak, requiring the use of statistical models to obtain estimates for most countries. In this paper we describe a new method for estimating global levels and trends in maternal mortality. For countries lacking adequate data for direct calculation of estimates, we employed a parametric model that separates maternal deaths related to HIV/AIDS from all others. For maternal deaths unrelated to HIV/AIDS, the model consists of a hierarchical linear regression with three predictors and variable intercepts for both countries and regions. The uncertainty of estimates was assessed by simulating the estimation process, accounting for variability both in the data and in other model inputs. The method was used to obtain the most recent set of UN estimates, published in September 2010. Here, we provide a concise description and explanation of the approach, including a new analysis of the components of variability reflected in the uncertainty intervals. Final estimates provide evidence of a more rapid decline in the global maternal mortality ratio than suggested by previous work, including another study published in April 2010. We compare findings from the two recent studies and discuss topics for further research to help resolve differences. PMID:24416714
Review of some results in bivariate density estimation
NASA Technical Reports Server (NTRS)
Scott, D. W.
1982-01-01
Results are reviewed for choosing smoothing parameters for some bivariate density estimators. Experience gained in comparing the effects of smoothing parameters on probability density estimators for univariate and bivariate data is summarized.
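For orientation, scipy's gaussian_kde defaults to Scott's rule for the smoothing parameter, one rule of the kind reviewed here; the bivariate data below are synthetic.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
data = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=500)

kde = gaussian_kde(data.T)          # expects shape (n_dims, n_points)
print("Scott's-rule bandwidth factor:", kde.factor)
print("density estimate at the origin:", kde([[0.0], [0.0]])[0])
```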
Methods for Measuring and Estimating Methane Emission from Ruminants
Storm, Ida M. L. D.; Hellwing, Anne Louise F.; Nielsen, Nicolaj I.; Madsen, Jørgen
2012-01-01
Simple Summary: Knowledge about methods used in the quantification of greenhouse gasses is currently needed due to international commitments to reduce emissions. In the agricultural sector one important task is to reduce enteric methane emissions from ruminants. Different methods for quantifying these emissions are presently in use and others are under development, all with different conditions for application. For scientists and others working on the topic, it is very important to understand the advantages and disadvantages of the different methods in use. This paper gives a brief introduction to existing methods as well as a description of newer methods and model-based techniques. Abstract: This paper is a brief introduction to the different methods used to quantify the enteric methane emission from ruminants. A thorough knowledge of the advantages and disadvantages of these methods is very important in order to plan experiments, understand and interpret experimental results, and compare them with other studies. The aim of the paper is to describe the principles, advantages and disadvantages of different methods used to quantify the enteric methane emission from ruminants. The best-known methods (chambers/respiration chambers, the SF6 technique, and the in vitro gas production technique) and the newer CO2 methods are described. Model estimations, which are used to calculate national budgets and single-cow enteric emissions from intake and diet composition, are also discussed. Other methods under development, such as the micrometeorological technique, the combined feeder and CH4 analyzer, and proxy methods, are briefly mentioned. The method of choice for estimating enteric methane emission depends on the aim and on the equipment, knowledge, time and money available, but the interpretation of results obtained with a given method can be improved if knowledge of its advantages and disadvantages is used in planning the experiments. PMID:26486915
Statistical estimation of mineral age by K-Ar method
Vistelius, A.B.; Drubetzkoy, E.R.; Faas, A.V.
1989-11-01
Statistical estimation of mineral age from ⁴⁰Ar/⁴⁰K ratios may be considered a result of convolution of uniform and normal distributions with different weights for different minerals. Data from the Gul'shad Massif (Nearbalkhash, Kazakhstan, USSR) indicate that ⁴⁰Ar/⁴⁰K ratios reflecting the intensity of geochemical processes can be resolved using convolutions. Loss of ⁴⁰Ar in biotites is shown, whereas hornblende retained its original content of ⁴⁰Ar throughout the geological history of the massif. The results demonstrate that different estimation methods must be used for different minerals and different rocks when radiometric ages are employed for dating.
A New Method to Estimate Halo Mass of Galaxy Groups
NASA Astrophysics Data System (ADS)
Lu, Yi; Yang, Xiaohu; Shen, Shiyin
2015-08-01
Reliable halo mass estimation for a given galaxy system plays an important role both in cosmology and in galaxy formation studies. Here we set out to improve the halo mass estimation for galaxy systems in which only a limited number of the brightest member galaxies have been observed. Using four mock galaxy samples constructed from semi-analytical formation models, the subhalo abundance matching method, and the conditional luminosity functions, respectively, we find that the luminosity gap between the brightest and the subsequent brightest member galaxies in a halo (group) can be used to significantly reduce the scatter in the halo mass estimation based on the luminosity of the brightest galaxy alone. Tests show that these corrections can reduce the scatter in the halo mass estimations by ~50% to ~70% in massive halos, depending on which member galaxies are considered. Compared to the traditional ranking method, we find that this method works better for groups with fewer than five members, or in observations with a very bright magnitude cut.
Phenology of Net Ecosystem Exchange: A Simple Estimation Method
NASA Astrophysics Data System (ADS)
Losleben, M. V.
2007-12-01
Carbon sequestration is important to research on the global carbon budget and on ecosystem function and dynamics. Direct measurement of Net Ecosystem Exchange (NEE), a measure of the carbon sequestration of an ecosystem, is instrument-, labor-, and cost-intensive, so there is value in establishing a simple, robust estimation method. Six ecosystem types across the United States, ranging from deciduous and coniferous forests to desert shrubland and grasslands, are compared. Initial results suggest that instrumentally measured NEE and this proxy method are in promising agreement, showing excellent temporal matches between the two methods for the onset and termination of carbon sequestration in a sub-alpine forest over the study period, 1997-2006. Moreover, the similarity of climatic signatures in all six ecosystems of this study suggests this proxy estimation method may be widely applicable across diverse environmental zones. The estimation method is simply the interpretation of annual accumulated daily precipitation plotted against the annual accumulated growing degree days above a 0 °C base. Applicability at sub-seasonal time scales will also be discussed in this presentation.
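A hedged sketch of the proxy's two ingredients, with synthetic weather: accumulate daily precipitation and growing degree days over the year, then interpret the trajectory of one plotted against the other.

```python
import numpy as np

rng = np.random.default_rng(3)
doy = np.arange(365)
t_mean = 10.0 * np.sin(2 * np.pi * doy / 365 - np.pi / 2) + 5.0  # synthetic deg C
precip = rng.gamma(shape=0.4, scale=5.0, size=365)               # synthetic mm/day

gdd = np.cumsum(np.maximum(t_mean, 0.0))  # accumulated growing degree days
acc_precip = np.cumsum(precip)            # accumulated precipitation (mm)

# The proxy is the curve (gdd[i], acc_precip[i]); print a few points:
for day in (90, 180, 270):
    print(day, round(gdd[day], 1), round(acc_precip[day], 1))
```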
An aerial survey method to estimate sea otter abundance
Bodkin, J.L.; Udevitz, M.S.
1999-01-01
Sea otters (Enhydra lutris) occur in shallow coastal habitats and can be highly visible on the sea surface. They generally rest in groups and their detection depends on factors that include sea conditions, viewing platform, observer technique and skill, distance, habitat and group size. While visible on the surface, they are difficult to see while diving and may dive in response to an approaching survey platform. We developed and tested an aerial survey method that uses intensive searches within portions of strip transects to adjust for availability and sightability biases. Correction factors are estimated independently for each survey and observer. In tests of our method using shore-based observers, we estimated detection probabilities of 0.52-0.72 in standard strip-transects and 0.96 in intensive searches. We used the survey method in Prince William Sound, Alaska to estimate a sea otter population size of 9,092 (SE = 1422). The new method represents an improvement over various aspects of previous methods, but additional development and testing will be required prior to its broad application.
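A hedged sketch of the general correction logic (not the paper's full estimator, which fits availability and sightability corrections per survey and observer): strip counts are scaled by the detection probability estimated from intensive searches, then expanded by the fraction of area surveyed. All numbers are illustrative.

```python
def adjusted_abundance(strip_count, p_detect, area_fraction_surveyed):
    """Scale raw strip-transect counts by estimated detection probability,
    then expand to the full survey area."""
    return strip_count / p_detect / area_fraction_surveyed

print(round(adjusted_abundance(strip_count=850, p_detect=0.62,
                               area_fraction_surveyed=0.20)))
```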
A new analytical method for groundwater recharge and discharge estimation
NASA Astrophysics Data System (ADS)
Liang, Xiuyu; Zhang, You-Kuan
2012-07-01
A new analytical method was proposed for groundwater recharge and discharge estimation in an unconfined aquifer. The method is based on an analytical solution to the Boussinesq equation linearized in terms of h², where h is the water table elevation, with a time-dependent source term. The solution derived was validated with numerical simulation and was shown to be a better approximation than an existing solution to the Boussinesq equation linearized in terms of h. By calibrating against the observed water levels in a monitoring well over a period of 100 days, we show that the method proposed in this study can be used to estimate daily recharge (R) and evapotranspiration (ET) as well as the lateral drainage. The total R was reasonably estimated with a water-table fluctuation (WTF) method if water table measurements away from a fixed-head boundary were used, but the total ET was overestimated and the total net recharge was underestimated because the WTF method does not account for lateral drainage and aquifer storage.
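For contrast, the water-table fluctuation method the authors benchmark against reduces, in its simplest form, to recharge = specific yield x water-table rise; a minimal sketch with assumed values:

```python
import numpy as np

def wtf_recharge(heads_m, specific_yield):
    """Sum positive day-to-day water-table rises times specific yield."""
    rises = np.diff(heads_m)
    return specific_yield * rises[rises > 0].sum()

heads = np.array([10.00, 10.05, 10.12, 10.08, 10.20, 10.18])  # daily heads, m
print(f"recharge ~ {wtf_recharge(heads, specific_yield=0.15):.3f} m")
```

This bookkeeping ignores lateral drainage and aquifer storage, which is precisely the limitation the abstract identifies.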
New Statistical Learning Methods for Estimating Optimal Dynamic Treatment Regimes
Zhao, Ying-Qi; Zeng, Donglin; Laber, Eric B.; Kosorok, Michael R.
2014-01-01
Dynamic treatment regimes (DTRs) are sequential decision rules for individual patients that can adapt over time to an evolving illness. The goal is to accommodate heterogeneity among patients and find the DTR which will produce the best long-term outcome if implemented. We introduce two new statistical learning methods for estimating the optimal DTR, termed backward outcome weighted learning (BOWL) and simultaneous outcome weighted learning (SOWL). These approaches convert individualized treatment selection into either a sequential or a simultaneous classification problem, and can thus be applied by modifying existing machine learning techniques. The proposed methods are based on directly maximizing over all DTRs a nonparametric estimator of the expected long-term outcome; this is fundamentally different from regression-based methods, for example Q-learning, which indirectly attempt such maximization and rely heavily on the correctness of postulated regression models. We prove that the resulting rules are consistent, and provide finite sample bounds for the errors using the estimated rules. Simulation results suggest the proposed methods produce superior DTRs compared with Q-learning, especially in small samples. We illustrate the methods using data from a clinical trial for smoking cessation. PMID:26236062
A semi-automatic multi-view depth estimation method
NASA Astrophysics Data System (ADS)
Wildeboer, Meindert Onno; Fukushima, Norishige; Yendo, Tomohiro; Panahpour Tehrani, Mehrdad; Fujii, Toshiaki; Tanimoto, Masayuki
2010-07-01
In this paper, we propose a semi-automatic depth estimation algorithm whereby the user defines object depth boundaries and disparity initialization. Automatic depth estimation methods generally have difficulty obtaining good depth results around object edges and in areas with low texture. The goal of our method is to improve the depth in these areas and to reduce view synthesis artifacts in Depth Image Based Rendering. Good view synthesis quality is very important in applications such as 3DTV and Free-viewpoint Television (FTV). In our proposed method, initial disparity values for smooth areas can be input through a so-called manual disparity map, and depth boundaries are defined by a manually created edge map which can be supplied for one or multiple frames. For evaluation we used MPEG multi-view videos, and we demonstrate that our algorithm can significantly improve the depth maps and reduce view synthesis artifacts.
Noninvasive method of estimating human newborn regional cerebral blood flow
Younkin, D.P.; Reivich, M.; Jaggi, J.; Obrist, W.; Delivoria-Papadopoulos, M.
1982-12-01
A noninvasive method of estimating regional cerebral blood flow (rCBF) in premature and full-term babies has been developed. Based on a modification of the ¹³³Xe inhalation rCBF technique, this method uses eight extracranial NaI scintillation detectors and an i.v. bolus injection of ¹³³Xe (approximately 0.5 mCi/kg). Arterial xenon concentration was estimated with an external chest detector. Cerebral blood flow was measured in 15 healthy, neurologically normal premature infants. Using Obrist's method of two-compartment analysis, normal values were calculated for flow in both compartments, relative weight and fractional flow in the first compartment (gray matter), initial slope of gray matter blood flow, mean cerebral blood flow, and initial slope index of mean cerebral blood flow. The application of this technique to newborns, its relative advantages, and its potential uses are discussed.
Method to Estimate the Dissolved Air Content in Hydraulic Fluid
NASA Technical Reports Server (NTRS)
Hauser, Daniel M.
2011-01-01
In order to verify the air content in hydraulic fluid, an instrument was needed to measure the dissolved air content before the fluid was loaded into the system. The instrument also needed to measure the dissolved air content in situ and in real time during the de-aeration process. The current methods used to measure the dissolved air content require the fluid to be drawn from the hydraulic system, and additional offline laboratory processing time is involved. During laboratory processing, there is a potential for contamination to occur, especially when subsaturated fluid is to be analyzed. A new method measures the amount of dissolved air in hydraulic fluid through the use of a dissolved oxygen meter. The device measures the dissolved air content through an in situ, real-time process that requires no additional offline laboratory processing time. The method utilizes an instrument that measures the partial pressure of oxygen in the hydraulic fluid. By using a standardized calculation procedure that relates the oxygen partial pressure to the volume of dissolved air in solution, the dissolved air content is estimated. The technique employs luminescent quenching technology to determine the partial pressure of oxygen in the hydraulic fluid. An estimated Henry's law coefficient for oxygen and nitrogen in hydraulic fluid is calculated using a standard method to estimate the solubility of gases in lubricants. The amount of dissolved oxygen in the hydraulic fluid is estimated using the Henry's solubility coefficient and the measured partial pressure of oxygen in solution. The amount of dissolved nitrogen that is in solution is estimated by assuming that the ratio of dissolved nitrogen to dissolved oxygen is equal to the ratio of the gas solubility of nitrogen to oxygen at atmospheric pressure and temperature. The technique was performed at atmospheric pressure and room temperature. The technique could be theoretically carried out at higher pressures and elevated
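A hedged sketch of the described calculation chain. The Bunsen-type solubility coefficients below are placeholders, not measured values, and taking the dissolved N2:O2 ratio as the air-saturation ratio (including the 79:21 partial-pressure split) is this sketch's interpretation of the stated assumption.

```python
def dissolved_air_fraction(p_o2_kpa, bunsen_o2=0.09, bunsen_n2=0.07,
                           p_atm_kpa=101.3):
    """Estimate total dissolved air (volume of gas per volume of fluid)."""
    vol_o2 = bunsen_o2 * (p_o2_kpa / p_atm_kpa)   # Henry's-law step
    vol_n2 = vol_o2 * (bunsen_n2 / bunsen_o2) * (79.0 / 21.0)  # assumed ratio
    return vol_o2 + vol_n2

# Air-saturated fluid: O2 partial pressure ~ 0.21 atm
print(f"dissolved air ~ {dissolved_air_fraction(21.3):.1%} v/v")
```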
NEW COMPLETENESS METHODS FOR ESTIMATING EXOPLANET DISCOVERIES BY DIRECT DETECTION
Brown, Robert A.; Soummer, Remi
2010-05-20
We report on new methods for evaluating realistic observing programs that search stars for planets by direct imaging, where observations are selected from an optimized star list and stars can be observed multiple times. We show how these methods bring critical insight into the design of the mission and its instruments. These methods provide an estimate of the outcome of the observing program: the probability distribution of discoveries (detection and/or characterization) and an estimate of the occurrence rate of planets (η). We show that these parameters can be accurately estimated from a single mission simulation, without the need for a complete Monte Carlo mission simulation, and we prove the accuracy of this new approach. Our methods provide tools to define a mission for a particular science goal; for example, a mission can be defined by the expected number of discoveries and its confidence level. We detail how an optimized star list can be built and how successive observations can be selected. Our approach also provides other critical mission attributes, such as the number of stars expected to be searched and the probability of zero discoveries. Because these attributes depend strongly on the mission scale (telescope diameter, observing capabilities and constraints, mission lifetime, etc.), our methods are directly applicable to the design of such future missions and provide guidance to the mission and instrument design based on scientific performance. We illustrate our new methods with practical calculations and exploratory design reference missions for the James Webb Space Telescope (JWST) operating with a distant starshade to reduce scattered and diffracted starlight on the focal plane. We estimate that five habitable Earth-mass planets would be discovered and characterized with spectroscopy, with a probability of zero discoveries of 0.004, assuming a small fraction of JWST observing time (7%), η = 0.3, and 70 observing visits, limited by starshade
Estimation of quality factors by energy ratio method
NASA Astrophysics Data System (ADS)
Wang, Zong-Jun; Cao, Si-Yuan; Zhang, Hao-Ran; Qu, Ying-Ming; Yuan, Dian; Yang, Jin-Hao; Shao, Guan-Ming
2015-03-01
The quality factor Q, which reflects the energy attenuation of seismic waves in subsurface media, is a diagnostic tool for hydrocarbon detection and reservoir characterization. In this paper, we propose a new Q extraction method based on the energy ratio before and after the wavelet attenuation, named the energy-ratio method (ERM). The proposed method uses multipoint signal data in the time domain to estimate the wavelet energy without invoking the source wavelet spectrum, which is necessary in conventional Q extraction methods, and is applicable to any source wavelet spectrum; however, it requires high-precision seismic data. Forward zero-offset VSP modeling suggests that the ERM can be used for reliable Q inversion after nonintrinsic attenuation (geometric dispersion, reflection, and transmission loss) compensation. The application to real zero-offset VSP data shows that the Q values extracted by the ERM and spectral ratio methods are identical, which proves the reliability of the new method.
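For comparison, the conventional spectral-ratio method mentioned alongside the ERM recovers Q from the slope of the log spectral ratio versus frequency, since ln(A2/A1) = const - (pi dt / Q) f; a minimal sketch with synthetic spectra:

```python
import numpy as np

def spectral_ratio_q(freqs_hz, amp1, amp2, dt_s):
    """Q from the slope of ln(amp2/amp1) against frequency; dt_s is the
    travel-time difference between the two measurement levels."""
    slope, _ = np.polyfit(freqs_hz, np.log(amp2 / amp1), 1)
    return -np.pi * dt_s / slope

f = np.linspace(10, 60, 26)
a1 = np.exp(-np.pi * f * 0.2 / 80.0)   # synthetic spectra, Q_true = 80
a2 = np.exp(-np.pi * f * 0.7 / 80.0)
print(f"Q ~ {spectral_ratio_q(f, a1, a2, dt_s=0.5):.1f}")
```

Note this route needs amplitude spectra at two levels; the ERM's selling point is working from time-domain energies without assuming the source wavelet spectrum.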
Experimental evaluation of chromatic dispersion estimation method using polynomial fitting
NASA Astrophysics Data System (ADS)
Jiang, Xin; Wang, Junyi; Pan, Zhongqi
2014-11-01
We experimentally validate a non-data-aided, modulation-format-independent chromatic dispersion (CD) estimation method based on a polynomial fitting algorithm in a single-carrier coherent optical system, using a 40 Gb/s polarization-division-multiplexed quadrature-phase-shift-keying (PDM-QPSK) signal. The non-data-aided CD estimation for arbitrary modulation formats is achieved by measuring the differential phase between frequencies f±fs/2 (fs is the symbol rate) in digital coherent receivers. The estimation range for a 40 Gb/s PDM-QPSK signal can be up to 20,000 ps/nm with a measurement accuracy of ±200 ps/nm. The maximum CD measurement is 25,000 ps/nm with a measurement error of 2%.
A probabilistic method for estimating system susceptibility to HPM
Mensing, R.W.
1989-05-18
Interruption of the operation of electronic systems by HPM is a stochastic process. Thus, a realistic estimate of system susceptibility to HPM is best expressed in terms of the probability that the HPM has an effect on the system (probability of effect). Estimating the susceptibility of complex electronic systems by extensive testing is not practical, so it is necessary to consider alternative approaches. One approach is to combine information from extensive low-level testing and computer modeling with limited high-level field test data. A method for estimating system susceptibility based on a pretest analysis of low-level test and computer model data, combined with a post-test analysis after high-level testing, is described in this paper. 4 figs.
The Lyapunov dimension and its estimation via the Leonov method
NASA Astrophysics Data System (ADS)
Kuznetsov, N. V.
2016-06-01
Along with the widely used numerical methods for estimating and computing the Lyapunov dimension there is an effective analytical approach, proposed by G.A. Leonov in 1991. The Leonov method is based on the direct Lyapunov method with special Lyapunov-like functions. The advantage of the method is that it allows one to estimate the Lyapunov dimension of invariant sets without localization of the set in the phase space and, in many cases, to obtain an exact Lyapunov dimension formula effectively. In this work the invariance of the Lyapunov dimension with respect to diffeomorphisms and its connection with the Leonov method are discussed. For discrete-time dynamical systems an analog of the Leonov method is suggested. In a simple but rigorous way, the connection is presented between the Leonov method and the key related works: Kaplan and Yorke (the concept of the Lyapunov dimension, 1979), Douady and Oesterlé (upper bounds of the Hausdorff dimension via the Lyapunov dimension of maps, 1980), Constantin, Eden, Foiaş, and Temam (upper bounds of the Hausdorff dimension via the Lyapunov exponents and Lyapunov dimension of dynamical systems, 1985-90), and the numerical calculation of the Lyapunov exponents and dimension.
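As a concrete reference point, the sketch below computes the Kaplan-Yorke (Lyapunov) dimension from an ordered Lyapunov spectrum, the quantity the Leonov method bounds analytically; the Lorenz-63 exponents used are approximate literature values.

```python
import numpy as np

def kaplan_yorke_dimension(exponents):
    """Kaplan-Yorke dimension j + (lam_1+...+lam_j)/|lam_{j+1}|, where j is
    the largest index with a non-negative partial sum of sorted exponents."""
    lams = np.sort(np.asarray(exponents, dtype=float))[::-1]
    csum = np.cumsum(lams)
    j = int(np.max(np.nonzero(csum >= 0)[0], initial=-1))
    if j < 0:
        return 0.0
    if j == len(lams) - 1:
        return float(len(lams))
    return (j + 1) + csum[j] / abs(lams[j + 1])

print(kaplan_yorke_dimension([0.906, 0.0, -14.572]))  # Lorenz-63, ~2.06
```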
Point estimation of simultaneous methods for solving polynomial equations
NASA Astrophysics Data System (ADS)
Petkovic, Miodrag S.; Petkovic, Ljiljana D.; Rancic, Lidija Z.
2007-08-01
The construction of computationally verifiable initial conditions which provide both the guaranteed and fast convergence of the numerical root-finding algorithm is one of the most important problems in solving nonlinear equations. Smale's "point estimation theory" from 1981 was a great advance in this topic; it treats convergence conditions and the domain of convergence in solving an equation f(z)=0 using only the information of f at the initial point z0. The study of a general problem of the construction of initial conditions of practical interest providing guaranteed convergence is very difficult, even in the case of algebraic polynomials. In the light of Smale's point estimation theory, an efficient approach based on some results concerning localization of polynomial zeros and convergent sequences is applied in this paper to iterative methods for the simultaneous determination of simple zeros of polynomials. We state new, improved initial conditions which provide the guaranteed convergence of frequently used simultaneous methods for solving algebraic equations: Ehrlich-Aberth's method, Ehrlich-Aberth's method with Newton's correction, Borsch-Supan's method with Weierstrass' correction and Halley-like (or Wang-Zheng) method. The introduced concept offers not only a clear insight into the convergence analysis of sequences generated by the considered methods, but also explicitly gives their order of convergence. The stated initial conditions are of significant practical importance since they are computationally verifiable; they depend only on the coefficients of a given polynomial, its degree n and initial approximations to polynomial zeros.
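The Weierstrass correction at the heart of several of the listed methods is easy to state in code. Below is a hedged sketch of the basic Weierstrass (Durand-Kerner) simultaneous iteration, without the Newton- or Halley-type corrections the paper analyzes, and with a common heuristic choice of starting points rather than the paper's computationally verifiable initial conditions.

```python
import numpy as np

def weierstrass_roots(coeffs, tol=1e-12, max_iter=200):
    """Simultaneously approximate all roots of a polynomial given its
    coefficients in descending order; the polynomial is made monic first."""
    c = np.asarray(coeffs, dtype=complex)
    c = c / c[0]
    n = len(c) - 1
    z = (0.4 + 0.9j) ** np.arange(1, n + 1)   # heuristic initial guesses
    for _ in range(max_iter):
        diff = z[:, None] - z[None, :]
        np.fill_diagonal(diff, 1.0)
        w = np.polyval(c, z) / diff.prod(axis=1)  # Weierstrass correction
        z = z - w
        if np.max(np.abs(w)) < tol:
            break
    return z

# (z-1)(z-2)(z-3) = z^3 - 6 z^2 + 11 z - 6
print(np.sort_complex(weierstrass_roots([1, -6, 11, -6])))
```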
A new simple method to estimate fracture pressure gradient
Rocha, L.A.; Bourgoyne, A.T.
1994-12-31
Designing safer and more economical wells calls for correctly estimating the fracture pressure gradient. A poor prediction of the fracture pressure gradient may lead to serious accidents such as lost circulation followed by a kick. Although these kinds of accidents can occur in any phase of the well, drilling shallow formations poses additional dangers due to shallow gas kicks, which have the potential to become a shallow gas blowout, sometimes leading to the formation of craters. Often, one of the main problems when estimating the fracture pressure gradient is the lack of data. In fact, drilling engineers generally face situations where only leak-off test data (frequently with questionable results) are available. This problem is typical when drilling shallow formations, where very little information is collected. This paper presents a new method to estimate the fracture pressure gradient. The proposed method has the advantage of (a) using only the knowledge of leak-off test data and (b) being independent of the pore pressure. The method is based on a new concept called pseudo-overburden pressure, defined as the overburden pressure a formation would exhibit if it were plastic. The method was applied in several areas of the world, such as the US Gulf Coast (Mississippi Canyon and Green Canyon), with very good results.
Estimating the extreme low-temperature event using nonparametric methods
NASA Astrophysics Data System (ADS)
D'Silva, Anisha
This thesis presents a new method of estimating the one-in-N low temperature threshold using a non-parametric statistical method called kernel density estimation applied to daily average wind-adjusted temperatures. We apply our One-in-N Algorithm to local gas distribution companies (LDCs), as they have to forecast the daily natural gas needs of their consumers. In winter, demand for natural gas is high. Extreme low temperature events are not directly related to an LDC's gas demand forecasting, but knowledge of extreme low temperatures is important to ensure that an LDC has enough capacity to meet customer demands when extreme low temperatures are experienced. We present a detailed explanation of our One-in-N Algorithm and compare it to methods using the generalized extreme value distribution, the normal distribution, and the variance-weighted composite distribution. We show that our One-in-N Algorithm estimates the one-in-N low temperature threshold more accurately than the methods using the generalized extreme value distribution, the normal distribution, and the variance-weighted composite distribution according to the root mean square error (RMSE) measure at a 5% level of significance. The One-in-N Algorithm is tested by counting the number of times the daily average wind-adjusted temperature is less than or equal to the one-in-N low temperature threshold.
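A hedged sketch of the One-in-N idea as described: fit a kernel density to daily wind-adjusted temperatures and invert its CDF at probability 1/N; the temperature sample below is synthetic.

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.optimize import brentq

rng = np.random.default_rng(4)
temps = rng.normal(loc=-5.0, scale=8.0, size=3000)  # synthetic winter data

kde = gaussian_kde(temps)
cdf = lambda x: kde.integrate_box_1d(-np.inf, x)

N = 30  # one-in-30 low-temperature event
threshold = brentq(lambda x: cdf(x) - 1.0 / N,
                   temps.min() - 20.0, temps.max())
print(f"one-in-{N} threshold ~ {threshold:.1f} degrees")
```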
Richardson, John G.
2009-11-17
An impedance estimation method includes measuring three or more impedances of an object having a periphery using three or more probes coupled to the periphery. The three or more impedance measurements are made at a first frequency. Three or more additional impedance measurements of the object are made using the three or more probes. The three or more additional impedance measurements are made at a second frequency different from the first frequency. An impedance of the object at a point within the periphery is estimated based on the impedance measurements and the additional impedance measurements.
Estimating surface acoustic impedance with the inverse method.
Piechowicz, Janusz
2011-01-01
Sound field parameters are predicted with numerical methods in sound control systems, in the acoustic design of buildings, and in sound field simulations. Those methods define the acoustic properties of surfaces, such as sound absorption coefficients or acoustic impedance, to determine boundary conditions. Several in situ measurement techniques have been developed; one of them uses two microphones to measure direct and reflected sound over a planar test surface. Another approach is used in the inverse boundary elements method, in which estimating the acoustic impedance of a surface is expressed as an inverse boundary problem. The boundary values can be found from multipoint sound pressure measurements in the interior of a room. This method can be applied to arbitrarily-shaped surfaces. This investigation is part of a research programme on using inverse methods in industrial room acoustics. PMID:21939599
NASA Astrophysics Data System (ADS)
Bambang Avip Priatna, M.; Lukman, Sumiaty, Encum
2016-02-01
This paper aims to determine the properties of the Correspondence Analysis (CA) estimator for estimating latent variable models. The method used is the High-Dimensional AIC (HAIC) method with simulated Bernoulli-distributed data. The stages are: (1) determine the CA matrix; (2) build a model of the CA estimator to estimate the latent variables using HAIC; (3) simulate the Bernoulli-distributed data with 1,000,748 repetitions. The simulation results show that the CA estimator models work well.
Adaptive error covariances estimation methods for ensemble Kalman filters
Zhen, Yicun; Harlim, John
2015-08-01
This paper presents a computationally fast algorithm for estimating both the system and observation noise covariances of nonlinear dynamics, which can be used in an ensemble Kalman filtering framework. The new method is a modification of Belanger's recursive method that avoids the expensive computational cost of inverting error covariance matrices of products of innovation processes of different lags when the number of observations becomes large. When we use only products of innovation processes up to one lag, the computational cost is indeed comparable to that of a method recently proposed by Berry and Sauer. However, our method is more flexible since it allows for using information from products of innovation processes of more than one lag. Extensive numerical comparisons between the proposed method and both the original Belanger scheme and the Berry-Sauer scheme are shown in various examples, ranging from low-dimensional linear and nonlinear SDE systems to the 40-dimensional stochastically forced Lorenz-96 model. Our numerical results suggest that the proposed scheme is as accurate as the original Belanger scheme on low-dimensional problems and has a wider range of more accurate estimates than the Berry-Sauer scheme on the L-96 example.
Rotate-and-Stare: A new method for PSF estimation.
NASA Astrophysics Data System (ADS)
Teuber, J.; Ostensen, R.; Stabell, R.; Florentin-Nielsen, R.
1994-12-01
We present a new and simple method for the determination of a digital Point Spread Function (PSF), utilizing the approximate circular symmetry in stellar images of normal quality. Using an optimal estimation of total intensity and object centering, the application of this type of PSF is found to be comparable to analytical or semi-analytical modelling, e.g., that employed in the DAOPHOT package (Stetson 1987). Further improvements are suggested.
A Sensitivity Analysis of a Thin Film Conductivity Estimation Method
McMasters, Robert L; Dinwiddie, Ralph Barton
2010-01-01
An analysis method was developed for determining the thermal conductivity of a thin film on a substrate of known thermal properties using the flash diffusivity method. In order to determine the thermal conductivity of the film using this method, the volumetric heat capacity of the film must be known, as determined in a separate experiment. Additionally, the thermal properties of the substrate must be known, including conductivity and volumetric heat capacity. The ideal conditions for the experiment are a low conductivity film adhered to a higher conductivity substrate. As the film becomes thinner with respect to the substrate, or as the conductivity of the film approaches that of the substrate, the estimation of thermal conductivity of the film becomes more difficult. The present research examines the effect of inaccuracies in the known parameters on the estimation of the parameter of interest, the thermal conductivity of the film. As such, perturbations are introduced into the other parameters in the experiment, which are assumed to be known, to find the effect on the estimated thermal conductivity of the film. A baseline case is established with the following parameters:
- Substrate thermal conductivity: 1.0 W/(m·K)
- Substrate volumetric heat capacity: 10⁶ J/(m³·K)
- Substrate thickness: 0.8 mm
- Film thickness: 0.2 mm
- Film volumetric heat capacity: 10⁶ J/(m³·K)
- Film thermal conductivity: 0.01 W/(m·K)
- Convection coefficient: 20 W/(m²·K)
- Magnitude of heat absorbed during the flash: 1000 J/m²
Each of these parameters, with the exception of film thermal conductivity, the parameter of interest, is varied from its baseline value, in succession, and placed into a synthetic experimental data file. Each of these data files is individually analyzed by the program to determine the effect on the estimated film conductivity, thus quantifying the vulnerability of the method to measurement errors.
Estimation of race admixture--a new method.
Chakraborty, R
1975-05-01
The contribution of a parental population in the gene pool of a hybrid population which arose by hybridization with one or more other populations is estimated here at the population level from the probability of gene identity. The dynamics of accumulation of such admixture is studied incorporating the fluctuations due to finite size of the hybrid population. The method is illustrated with data on admixture in Cherokee Indians. PMID:1146991
Improving stochastic estimates with inference methods: calculating matrix diagonals.
Selig, Marco; Oppermann, Niels; Ensslin, Torsten A
2012-02-01
Estimating the diagonal entries of a matrix that is not directly accessible but is available only as a linear operator in the form of a computer routine is a common necessity in many computational applications, especially in image reconstruction and statistical inference. Here, methods of statistical inference are used to improve the accuracy or the computational costs of matrix probing methods to estimate matrix diagonals. In particular, the generalized Wiener filter methodology, as developed within information field theory, is shown to significantly improve estimates based on only a few sampling probes, in cases in which some form of continuity of the solution can be assumed. The strength, length scale, and precise functional form of the exploited autocorrelation function of the matrix diagonal are determined from the probes themselves. The developed algorithm is successfully applied to mock and real-world problems. These performance tests show that, in situations where a matrix diagonal has to be calculated from only a small number of computationally expensive probes, a speedup by a factor of 2 to 10 is possible with the proposed method. PMID:22463179
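For context, the baseline being improved is plain matrix probing; a minimal sketch (without the paper's Wiener-filter refinement) estimates diag(A) as the probe-average of z * (A z) over random Rademacher vectors z.

```python
import numpy as np

rng = np.random.default_rng(5)
n, n_probes = 200, 30
A = rng.normal(size=(n, n))
A = A @ A.T / n               # stand-in for an operator known only via A @ z

Z = rng.choice([-1.0, 1.0], size=(n, n_probes))   # Rademacher probes
diag_est = np.mean(Z * (A @ Z), axis=1)           # E[z * (A z)] = diag(A)

rel_err = np.linalg.norm(diag_est - np.diag(A)) / np.linalg.norm(np.diag(A))
print(f"relative error with {n_probes} probes: {rel_err:.2f}")
```

The paper's contribution is, in effect, smoothing such raw probe estimates when the diagonal varies smoothly, which is where the reported factor 2 to 10 savings in probes comes from.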
Geometric estimation method for x-ray digital intraoral tomosynthesis
NASA Astrophysics Data System (ADS)
Li, Liang; Yang, Yao; Chen, Zhiqiang
2016-06-01
It is essential for accurate image reconstruction to obtain a set of parameters that describes the x-ray scanning geometry. A geometric estimation method is presented for x-ray digital intraoral tomosynthesis (DIT) in which the detector remains stationary while the x-ray source rotates. The main idea is to estimate the three-dimensional (3-D) coordinates of each shot position using at least two small opaque balls adhering to the detector surface as the positioning markers. From the radiographs containing these balls, the position of each x-ray focal spot can be calculated independently relative to the detector center no matter what kind of scanning trajectory is used. A 3-D phantom which roughly simulates DIT was designed to evaluate the performance of this method both quantitatively and qualitatively in the sense of mean square error and structural similarity. Results are also presented for real data acquired with a DIT experimental system. These results prove the validity of this geometric estimation method.
Minimally important difference estimates and methods: a protocol
Johnston, Bradley C; Ebrahim, Shanil; Carrasco-Labra, Alonso; Furukawa, Toshi A; Patrick, Donald L; Crawford, Mark W; Hemmelgarn, Brenda R; Schunemann, Holger J; Guyatt, Gordon H; Nesrallah, Gihad
2015-01-01
Introduction: Patient-reported outcomes (PROs) are often the outcomes of greatest importance to patients. The minimally important difference (MID) provides a measure of the smallest change in the PRO that patients perceive as important. An anchor-based approach is the most appropriate method for MID determination. No study or database currently exists that provides all anchor-based MIDs associated with PRO instruments; nor are there any accepted standards for appraising the credibility of MID estimates. Our objectives are to complete a systematic survey of the literature to collect and characterise published anchor-based MIDs associated with PRO instruments used in evaluating the effects of interventions on chronic medical and psychiatric conditions and to assess their credibility. Methods and analysis: We will search MEDLINE, EMBASE and PsycINFO (1989 to present) to identify studies addressing methods to estimate anchor-based MIDs of target PRO instruments or reporting empirical ascertainment of anchor-based MIDs. Teams of two reviewers will screen titles and abstracts, review full texts of citations, and extract relevant data. On the basis of findings from studies addressing methods to estimate anchor-based MIDs, we will summarise the available methods and develop an instrument addressing the credibility of empirically ascertained MIDs. We will evaluate the credibility of all studies reporting on the empirical ascertainment of anchor-based MIDs using the credibility instrument, and assess the instrument's inter-rater reliability. We will separately present reports for adult and paediatric populations. Ethics and dissemination: No research ethics approval was required as we will be using aggregate data from published studies. Our work will summarise anchor-based methods available to establish MIDs, provide an instrument to assess the credibility of available MIDs, determine the reliability of that instrument, and provide a comprehensive compendium of published anchor
SCoPE: an efficient method of Cosmological Parameter Estimation
Das, Santanu; Souradeep, Tarun E-mail: tarun@iucaa.ernet.in
2014-07-01
The Markov Chain Monte Carlo (MCMC) sampler is widely used for cosmological parameter estimation from CMB and other data. However, due to the intrinsic serial nature of the MCMC sampler, convergence is often very slow. Here we present a fast and independently written Monte Carlo method for cosmological parameter estimation, named the Slick Cosmological Parameter Estimator (SCoPE), that employs delayed rejection to increase the acceptance rate of a chain and pre-fetching that helps an individual chain to run on parallel CPUs. An inter-chain covariance update is also incorporated to prevent clustering of the chains, allowing faster and better mixing of the chains. We use an adaptive method for covariance calculation to calculate and update the covariance automatically as the chains progress. Our analysis shows that the acceptance probability of each step in SCoPE is more than 95% and the convergence of the chains is faster. Using SCoPE, we carry out cosmological parameter estimations with different cosmological models using WMAP-9 and Planck results. One of the current research interests in cosmology is quantifying the nature of dark energy. We analyze the cosmological parameters from two illustrative, commonly used parameterisations of dark energy models. We also assess whether the primordial helium fraction in the universe can be constrained by the present CMB data from WMAP-9 and Planck. The results from our MCMC analysis on the one hand help us to understand the workability of SCoPE better, and on the other hand provide a completely independent estimation of cosmological parameters from WMAP-9 and Planck data.
The composite method: An improved method for stream-water solute load estimation
Aulenbach, Brent T.; Hooper, R.P.
2006-01-01
The composite method is an alternative method for estimating stream-water solute loads, combining aspects of two commonly used methods: the regression-model method (which is used by the composite method to predict variations in concentrations between collected samples) and a period-weighted approach (which is used by the composite method to apply the residual concentrations from the regression model over time). The extensive dataset collected at the outlet of the Panola Mountain Research Watershed (PMRW) near Atlanta, Georgia, USA, was used in data analyses for illustrative purposes. A bootstrap (subsampling) experiment (using the composite method and the PMRW dataset along with various fixed-interval and large-storm sampling schemes) obtained load estimates for the 8-year study period with a bias of magnitude less than 1%, even for estimates that included the fewest samples. Precisions were always <2% on a study-period and annual basis, and <2% precisions were obtained for quarterly and monthly time intervals for estimates that had better sampling. The bias and precision of composite-method load estimates vary depending on the variability in the regression-model residuals, how residuals systematically deviate from the regression model over time, the sampling design, and the time interval of the load estimate. The regression-model method did not estimate loads precisely during shorter time intervals, from annual to monthly, because the model could not explain short-term patterns in the observed concentrations. Load estimates using the period-weighted approach typically are biased as a result of the sampling distribution and are accurate only with extensive sampling. The formulation of the composite method facilitates exploration of patterns (trends) contained in the unmodelled portion of the load. Published in 2006 by John Wiley & Sons, Ltd.
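A hedged sketch of the composite idea with synthetic data: a regression model predicts concentration everywhere, the residuals at sample times are interpolated through time and added back, and the corrected concentrations are integrated with discharge into a load.

```python
import numpy as np

t = np.arange(100)                           # days
q = 5.0 + 2.0 * np.sin(t / 7.0)              # discharge, m^3/s (synthetic)
c_model = 0.8 + 0.02 * q                     # regression prediction, mg/L

t_samp = np.array([3, 20, 41, 66, 88])       # sampled days
c_obs = c_model[t_samp] + np.array([0.05, -0.03, 0.02, 0.04, -0.01])

resid = np.interp(t, t_samp, c_obs - c_model[t_samp])  # period-weighted residuals
c_comp = c_model + resid                     # composite concentration

load_kg = np.sum(c_comp * q) * 86400.0 / 1000.0  # (g/m^3)(m^3/s) -> kg over period
print(f"estimated load ~ {load_kg:.0f} kg")
```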
NASA Astrophysics Data System (ADS)
Kawasaki, Makoto; Kohno, Ryuji
Wireless communication devices in the field of medical implants, such as cardiac pacemakers and capsule endoscopes, have been studied and developed to improve healthcare systems. In particular, it is very important to know the range and position of each device, because this contributes to optimizing the transmission power. We adopt the time-based approach to position estimation using ultra wideband signals. However, the propagation velocity inside the human body differs for each tissue and each frequency. Furthermore, the human body is formed of various tissues with complex structures. For this reason, the propagation velocity differs from point to point inside the human body, and the received signal is distorted by the channel inside the body. In this paper, we apply an adaptive template synthesis method in a multipath channel to calculate the propagation time accurately based on the output of the correlator between the transmitter and the receiver. Furthermore, we propose a position estimation method that uses an estimate of the propagation velocity inside the human body. In addition, we show by computer simulation that the proposed method can achieve accurate positioning with devices of the size of medical implants, such as a medicine capsule.
Method to estimate center of rigidity using vibration recordings
Safak, Erdal; Celebi, Mehmet
1990-01-01
A method to estimate the center of rigidity of buildings by using vibration recordings is presented. The method is based on the criterion that the coherence of translational motions with the rotational motion is minimum at the center of rigidity. Since the coherence is a function of frequency, a gross but frequency-independent measure of the coherency is defined as the integral of the coherence function over the frequency. The center of rigidity is determined by minimizing this integral. The formulation is given for two-dimensional motions. Two examples are presented for the method; a rectangular building with ambient-vibration recordings, and a triangular building with earthquake-vibration recordings. Although the examples given are for buildings, the method can be applied to any structure with two-dimensional motions.
Evaluation of estimation methods for organic carbon normalized sorption coefficients
Baker, James R.; Mihelcic, James R.; Luehrs, Dean C.; Hickey, James P.
1997-01-01
A critically evaluated set of 94 soil water partition coefficients normalized to soil organic carbon content (Koc) is presented for 11 classes of organic chemicals. This data set is used to develop and evaluate Koc estimation methods using three different descriptors. The three types of descriptors used in predicting Koc were the octanol/water partition coefficient (Kow), molecular connectivity (mXt) and linear solvation energy relationships (LSERs). The best results were obtained estimating Koc from Kow, though a slight improvement in the correlation coefficient was obtained by using a two-parameter regression with Kow and the third-order difference term from mXt. Molecular connectivity correlations seemed to be best suited for use with specific chemical classes. The LSER provided a better fit than mXt but not as good as the correlation with Kow. The correlation to predict Koc from Kow was developed for 72 chemicals: log Koc = 0.903 log Kow + 0.094. This correlation accounts for 91% of the variability in the data for chemicals with log Kow ranging from 1.7 to 7.0. The expression to determine the 95% confidence interval on the estimated Koc is provided along with an example for two chemicals of different hydrophobicity showing the confidence interval of the retardation factor determined from the estimated Koc. The data showed that Koc is not likely to be applicable for chemicals with log Kow < 1.7. Finally, the Koc correlation developed using Kow as a descriptor was compared with three nonclass-specific correlations and two 'commonly used' class-specific correlations to determine which method(s) are most suitable.
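Using the abstract's own correlation and the standard retardation-factor expression R = 1 + (bulk density / porosity) x foc x Koc, a short sketch; the soil properties below are illustrative assumptions.

```python
def koc_from_kow(log_kow: float) -> float:
    """Abstract's correlation, valid for log Kow between 1.7 and 7.0."""
    return 10 ** (0.903 * log_kow + 0.094)

def retardation(log_kow, bulk_density=1.6, porosity=0.35, foc=0.01):
    kd = foc * koc_from_kow(log_kow)   # L/kg, assuming Kd = foc * Koc
    return 1.0 + (bulk_density / porosity) * kd

for log_kow in (2.0, 4.0, 6.0):
    print(log_kow, round(retardation(log_kow), 1))
```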
Estimates of tropical bromoform emissions using an inversion method
NASA Astrophysics Data System (ADS)
Ashfold, M. J.; Harris, N. R. P.; Manning, A. J.; Robinson, A. D.; Warwick, N. J.; Pyle, J. A.
2013-08-01
Bromine plays an important role in ozone chemistry in both the troposphere and stratosphere. When measured by mass, bromoform (CHBr3) is thought to be the largest organic source of bromine to the atmosphere. While seaweed and phytoplankton are known to be dominant sources, the size and the geographical distribution of CHBr3 emissions remain uncertain. Particularly little is known about emissions from the Maritime Continent, which have usually been assumed to be large, and which appear to be especially likely to reach the stratosphere. In this study we aim to use the first multi-annual set of CHBr3 measurements from this region, together with an inversion method, to reduce this uncertainty. We find that local measurements of a short-lived gas like CHBr3 can only be used to constrain emissions from a relatively small, sub-regional domain. We then obtain detailed estimates of both the distribution and magnitude of CHBr3 emissions within this area. Our estimates appear to be relatively insensitive to the assumptions inherent in the inversion process. We extrapolate this information to produce estimated emissions for the entire tropics (defined as 20° S-20° N) of 225 Gg CHBr3 y⁻¹. This estimate is consistent with other recent studies, and suggests that CHBr3 emissions in the coastline-rich Maritime Continent may not be stronger than emissions in other parts of the tropics.
Modeling an exhumed basin: A method for estimating eroded overburden
Poelchau, H.S. )
1993-09-01
The Alberta Deep basin in western Canada has undergone a large amount of erosion after deep burial in the Eocene. Basin modeling and simulation of burial and temperature history require estimates of maximum overburden for each gridpoint in the basin. Erosion generally is estimated with shale compaction trends. For instance, the commonly used Magara technique attempts to establish a sonic log gradient for shales and uses the intercept with uncompacted shale values as a first indication of overcompaction and the amount of erosion. Since such gradients are difficult to establish in many wells, an extension of this method was devised to help map erosion over a large area. Sonic values of shales are calibrated against compaction gradients to give an equation for the total restored overburden for the same formation in several wells. This equation can then be used to estimate and map the total restored overburden for all wells in which this formation has been logged. The example from the Alberta Deep basin shows that the trend and magnitudes of erosion or overburden agree with independent estimates using vitrinite maturity values.
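A hedged sketch of a Magara-style estimate: fit an exponential shale compaction trend dt(z) = dt0 exp(-c z) to sonic transit times and read the eroded overburden from the intercept offset relative to an assumed uncompacted surface value dt0; all numbers below are synthetic.

```python
import numpy as np

def eroded_overburden(depths_m, dt_us_per_ft, dt0_uncompacted=200.0):
    """Fit ln(dt) = ln(dt0) - c*(z + E) against present depth z; solve for
    the eroded thickness E using an assumed uncompacted intercept dt0."""
    slope, intercept = np.polyfit(depths_m, np.log(dt_us_per_ft), 1)
    c = -slope
    return (np.log(dt0_uncompacted) - intercept) / c

z = np.array([800.0, 1200.0, 1600.0, 2000.0])   # present depths, m
dt = 200.0 * np.exp(-0.0004 * (z + 1000.0))     # synthetic: 1000 m eroded
print(f"estimated eroded overburden ~ {eroded_overburden(z, dt):.0f} m")
```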
How Accurately Do Spectral Methods Estimate Effective Elastic Thickness?
NASA Astrophysics Data System (ADS)
Perez-Gussinye, M.; Lowry, A. R.; Watts, A. B.; Velicogna, I.
2002-12-01
The effective elastic thickness, Te, is an important parameter that has the potential to provide information on the long-term thermal and mechanical properties of the lithosphere. Previous studies have estimated Te using both forward and inverse (spectral) methods. While there is generally good agreement between the results obtained using these methods, spectral methods are limited because they depend on the spectral estimator and the window size chosen for analysis. In order to address this problem, we have used a multitaper technique which yields optimal estimates of the bias and variance of the Bouguer coherence function relating topography and gravity anomaly data. The technique has been tested using realistic synthetic topography and gravity. Synthetic data were generated assuming surface and sub-surface (buried) loading of an elastic plate with fractal statistics consistent with real data sets. The cases of uniform and spatially varying Te are examined. The topography and gravity anomaly data consist of 2000x2000 km grids sampled at an 8 km interval. The bias in the Te estimate is assessed from the difference between the true Te value and the mean from analyzing 100 overlapping windows within the 2000x2000 km data grids. For the case in which Te is uniform, the bias and variance decrease with window size and increase with increasing true Te value. In the case of a spatially varying Te, however, there is a trade-off between spatial resolution and variance. With increasing window size the variance of the Te estimate decreases, but the spatial changes in Te are smeared out. We find that for a Te distribution consisting of a strong central circular region of Te=50 km (radius 600 km) and progressively smaller Te towards its edges, the 800x800 and 1000x1000 km windows gave the best compromise between spatial resolution and variance. Our studies demonstrate that assumed stationarity of the relationship between gravity and topography data yields good results even in
Methods for cost estimation in software project management
NASA Astrophysics Data System (ADS)
Briciu, C. V.; Filip, I.; Indries, I. I.
2016-02-01
The speed at which the processes used in the software development field change makes forecasting the overall cost of a software project very difficult. Many researchers have considered this task unachievable, while others hold that it can be solved using well-known mathematical methods (e.g. multiple linear regression) together with newer techniques such as genetic programming and neural networks. This paper presents a solution for building a cost estimation model for software project management using genetic algorithms, starting from the PROMISE datasets related to the COCOMO 81 model. The first part of the paper summarizes the major achievements in the research area of estimating overall project costs and describes the existing software development process models. The last part proposes a basic mathematical model based on genetic programming, including a description of the chosen fitness function and chromosome representation. The perspective of the described model is linked with the current reality of software development, taking the software product life cycle as a basis together with the current challenges and innovations in the software development area. Based on the authors' experience and an analysis of the existing models and product life cycles, it is concluded that estimation models should be adapted to new technologies and emerging systems, and that they depend largely on the chosen software development method.
Causes and methods to estimate cryptic sources of fishing mortality.
Gilman, E; Suuronen, P; Hall, M; Kennelly, S
2013-10-01
Cryptic, not readily detectable, components of fishing mortality are not routinely accounted for in fisheries management because of a lack of adequate data, and for some components, a lack of accurate estimation methods. Cryptic fishing mortalities can cause adverse ecological effects, are a source of wastage, reduce the sustainability of fishery resources and, when unaccounted for, can cause errors in stock assessments and population models. Sources of cryptic fishing mortality are (1) pre-catch losses, where catch dies from the fishing operation but is not brought onboard when the gear is retrieved, (2) ghost-fishing mortality by fishing gear that was abandoned, lost or discarded, (3) post-release mortality of catch that is retrieved and then released alive but later dies as a result of stress and injury sustained from the fishing interaction, (4) collateral mortalities indirectly caused by various ecological effects of fishing and (5) losses due to synergistic effects of multiple interacting sources of stress and injury from fishing operations, or from cumulative stress and injury caused by repeated sub-lethal interactions with fishing operations. To fill a gap in international guidance on best practices, causes and methods for estimating each component of cryptic fishing mortality are described, and considerations for their effective application are identified. Research priorities to fill gaps in understanding the causes and estimating cryptic mortality are highlighted. PMID:24090548
A method to estimate groundwater depletion from confining layers
Konikow, L.F.; Neuzil, C.E.
2007-01-01
Although depletion of storage in low-permeability confining layers is the source of much of the groundwater produced from many confined aquifer systems, it is all too frequently overlooked or ignored. This makes effective management of groundwater resources difficult by masking how much water has been derived from storage and, in some cases, the total amount of water that has been extracted from an aquifer system. Analyzing confining layer storage is viewed as troublesome because of the additional computational burden and because the hydraulic properties of confining layers are poorly known. In this paper we propose a simplified method for computing estimates of confining layer depletion, as well as procedures for approximating confining layer hydraulic conductivity (K) and specific storage (Ss) using geologic information. The latter makes the technique useful in developing countries and other settings where minimal data are available or when scoping calculations are needed. As such, our approach may be helpful for estimating the global transfer of groundwater to surface water. A test of the method on a synthetic system suggests that the computational errors will generally be small. Larger errors will probably result from inaccuracy in confining layer property estimates, but these may be no greater than errors in more sophisticated analyses. The technique is demonstrated by application to two aquifer systems: the Dakota artesian aquifer system in South Dakota and the coastal plain aquifer system in Virginia. In both cases, depletion from confining layers was substantially larger than depletion from the aquifers.
A new simple method to estimate fracture pressure gradient
Rocha, L.A.; Bourgoyne, A.T.
1996-09-01
Designing safer and more economic wells requires a correct estimate of the fracture pressure gradient; a poor prediction of the fracture pressure gradient may lead to serious accidents, such as lost circulation followed by a kick. Although these kinds of accidents can occur in any phase of the well, drilling shallow formations poses additional dangers from shallow gas kicks, which have the potential to become a shallow gas blowout and can sometimes lead to the formation of craters. This paper presents a new method to estimate the fracture pressure gradient. The proposed method has the advantages of (1) using only the knowledge of leakoff test data and (2) being independent of the pore pressure. The method is based on a new concept called pseudo-overburden pressure, defined as the overburden pressure a formation would exhibit if it were plastic. The method was applied in several areas of the world, such as the US Gulf Coast (Mississippi Canyon and Green Canyon), with very good results.
Intensity estimation method of LED array for visible light communication
NASA Astrophysics Data System (ADS)
Ito, Takanori; Yendo, Tomohiro; Arai, Shintaro; Yamazato, Takaya; Okada, Hiraku; Fujii, Toshiaki
2013-03-01
This paper focuses on a road-to-vehicle visible light communication (VLC) system that uses an LED traffic light as the transmitter and a camera as the receiver. The traffic light is composed of about a hundred LEDs arranged on a two-dimensional plane. In this system, data are sent as two-dimensional brightness patterns by controlling each LED of the traffic light individually, and they are received as images by the camera. A problem is that neighboring LEDs in the received image merge, either because few pixels cover the transmitter when the receiver is distant from it, or because of blurring caused by camera defocus. As a result, the bit error rate (BER) increases due to errors in recognizing the intensity of individual LEDs. To solve this problem, we propose a method that estimates the intensity of the LEDs by solving the inverse problem of the communication channel characteristic from transmitter to receiver. The proposed method is evaluated by BER characteristics obtained from computer simulation and experiments. The results show that the proposed method estimates intensity more accurately than conventional methods, especially when the received image is strongly blurred and the number of pixels is small.
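A minimal sketch of the inversion idea, not the authors' algorithm: if the received image is modeled as y = Hx + n with an assumed Gaussian blur-and-sampling matrix H, the LED intensities can be recovered by regularized least squares. The geometry is simplified to one dimension and all sizes and noise levels are invented.

```python
import numpy as np

def blur_matrix(n_leds, n_pixels, sigma):
    """Each pixel integrates light from nearby LEDs with Gaussian weights."""
    led_pos = np.linspace(0.0, 1.0, n_leds)
    pix_pos = np.linspace(0.0, 1.0, n_pixels)
    H = np.exp(-0.5 * ((pix_pos[:, None] - led_pos[None, :]) / sigma) ** 2)
    return H / H.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
x_true = rng.integers(0, 2, size=25).astype(float)    # on/off LED pattern
H = blur_matrix(25, 50, sigma=0.08)                   # heavily overlapping responses
y = H @ x_true + 0.01 * rng.standard_normal(50)       # blurred, noisy image

# Tikhonov-regularized inverse: x_hat = (H^T H + lam I)^-1 H^T y
lam = 1e-3
x_hat = np.linalg.solve(H.T @ H + lam * np.eye(25), H.T @ y)
bits = (x_hat > 0.5).astype(int)
print("bit errors:", int(np.sum(bits != x_true)))
```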
Estimating Return on Investment in Translational Research: Methods and Protocols
Trochim, William; Dilts, David M.; Kirk, Rosalind
2014-01-01
Assessing the value of clinical and translational research funding in accelerating the translation of scientific knowledge is a fundamental issue faced by the National Institutes of Health and its Clinical and Translational Science Awards (CTSA). To address this issue, the authors propose a model for measuring the return on investment (ROI) of one key CTSA program, the clinical research unit (CRU). By estimating the economic and social inputs and outputs of this program, the model produces ROI estimates at multiple levels: investigator, program, and institution. A methodology, or evaluation protocol, is proposed to assess the value of this CTSA function, with specific objectives, methods, descriptions of the data to be collected, and how data are to be filtered, analyzed, and evaluated. This paper provides an approach CTSAs could use to assess the economic and social returns on NIH and institutional investments in these critical activities. PMID:23925706
Estimating bacterial diversity for ecological studies: methods, metrics, and assumptions.
Birtel, Julia; Walser, Jean-Claude; Pichon, Samuel; Bürgmann, Helmut; Matthews, Blake
2015-01-01
Methods to estimate microbial diversity have developed rapidly in an effort to understand the distribution and diversity of microorganisms in natural environments. For bacterial communities, the 16S rRNA gene is the phylogenetic marker gene of choice, but most studies select only a specific region of the 16S rRNA to estimate bacterial diversity. Whereas biases derived from DNA extraction, primer choice and PCR amplification are well documented, we here address how the choice of variable region can influence a wide range of standard ecological metrics, such as species richness, phylogenetic diversity, β-diversity and rank-abundance distributions. We used Illumina paired-end sequencing to estimate the bacterial diversity of 20 natural lakes across Switzerland derived from three trimmed variable 16S rRNA regions (V3, V4, V5). Species richness, phylogenetic diversity, community composition, β-diversity, and rank-abundance distributions differed significantly between 16S rRNA regions. Overall, patterns of diversity quantified by the V3 and V5 regions were more similar to one another than those assessed by the V4 region. Similar results were obtained when analyzing the datasets with different sequence similarity thresholds during sequence clustering and when the same analysis was applied to a reference dataset of sequences from the Greengenes database. In addition, we measured species richness from the same lake samples using ARISA fingerprinting, but did not find a strong relationship between species richness estimated by Illumina and ARISA. We conclude that the selection of the 16S rRNA region significantly influences the estimation of bacterial diversity and species distributions, and that caution is warranted when comparing data from different variable regions as well as when using different sequencing techniques. PMID:25915756
Nonparametric entropy estimation using kernel densities.
Lake, Douglas E
2009-01-01
The entropy of experimental data from the biological and medical sciences provides additional information beyond summary statistics. Calculating entropy involves estimating probability density functions, which can be accomplished effectively using kernel density methods. Kernel density estimation has been widely studied, and a univariate implementation is readily available in MATLAB. The traditional definition of Shannon entropy is part of a larger family of statistics, called Renyi entropy, which is useful in applications that require a measure of the Gaussianity of data. Of particular note is the quadratic entropy, which is related to the Friedman-Tukey (FT) index, a widely used measure in the statistical community. One application where quadratic entropy is very useful is the detection of abnormal cardiac rhythms, such as atrial fibrillation (AF). Asymptotic and exact small-sample results for optimal bandwidth and kernel selection to estimate the FT index are presented and lead to improved methods for entropy estimation. PMID:19897106
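As a small illustration of the estimators involved, the sketch below computes resubstitution estimates of the Shannon entropy H = -E[log p(X)] and of the quadratic (order-2 Renyi) entropy H2 = -log E[p(X)] from a Gaussian kernel density estimate; the data are a synthetic stand-in, not cardiac records.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
x = rng.standard_normal(2000)        # synthetic stand-in for measured data

kde = gaussian_kde(x)                # bandwidth chosen by Scott's rule
p = kde(x)                           # density evaluated at the sample points

H_shannon = -np.mean(np.log(p))      # resubstitution Shannon estimate
H_quadratic = -np.log(np.mean(p))    # order-2 Renyi; tied to the FT index

print(f"Shannon   ~ {H_shannon:.3f}  (exact N(0,1): {0.5 * np.log(2 * np.pi * np.e):.3f})")
print(f"quadratic ~ {H_quadratic:.3f}  (exact N(0,1): {0.5 * np.log(4 * np.pi):.3f})")
```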
Estimating microwave emissivity of sea foam by Rayleigh method
NASA Astrophysics Data System (ADS)
Liu, Shu-Bo; Wei, En-Bo; Jia, Yan-Xia
2013-01-01
To estimate microwave emissivity of sea foam consisting of dense seawater-coated air bubbles, the effective medium approximation is applied by regarding the foam layer as an isotropic dielectric medium. The Rayleigh method is developed to calculate effective permittivity of the sea foam layer at different microwave frequencies, air volume fraction, and seawater coating thickness. With a periodic lattice model of coated bubbles and multilayered structures of effective foam media, microwave emissivities of sea foam layers with different effective permittivities obtained by the Rayleigh method are calculated. Good agreement is obtained by comparing model results with experimental data at 1.4, 10.8, and 36.5 GHz. Furthermore, sea foam microwave emissivities calculated by well-known effective permittivity formulas are investigated, such as the Silberstein, refractive model, and Maxwell-Garnett formulas. Their results are compared with those of our model. It is shown that the Rayleigh method gives more reasonable results.
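For comparison, the classical Maxwell-Garnett mixing rule named in the abstract fits in a few lines; the seawater permittivity and air volume fractions below are assumed example values, not those of the study.

```python
import numpy as np

def maxwell_garnett(eps_host, eps_incl, f):
    """Effective permittivity of spherical inclusions at volume fraction f
    in a host medium (classical Maxwell-Garnett mixing rule)."""
    d = eps_incl - eps_host
    return eps_host * (eps_incl + 2 * eps_host + 2 * f * d) / (eps_incl + 2 * eps_host - f * d)

eps_seawater = 73 - 61j    # assumed example value for seawater at low microwave frequency
eps_air = 1.0 + 0j
for f_air in (0.6, 0.8, 0.95):
    print(f"air fraction {f_air}: eps_eff = {maxwell_garnett(eps_seawater, eps_air, f_air):.2f}")
```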
NASA Technical Reports Server (NTRS)
Huddleston, Lisa; Roeder, WIlliam P.; Merceret, Francis J.
2011-01-01
A new technique has been developed to estimate the probability that a nearby cloud-to-ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force Station. Future applications could include forensic meteorology.
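A minimal sketch of the integration step, not the operational code: the bivariate Gaussian stroke-location density is integrated over a disk of radius r around an arbitrary point of interest by Monte Carlo sampling (all ellipse parameters and distances are invented).

```python
import numpy as np

def prob_within_radius(mu, cov, point, radius, n=1_000_000, seed=0):
    """P(stroke within `radius` of `point`) for stroke location ~ N(mu, cov)."""
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(mu, cov, size=n)
    return np.mean(np.linalg.norm(samples - point, axis=1) <= radius)

# error ellipse centred at the reported stroke location (km units);
# the point of interest need not lie inside the ellipse
mu = np.array([0.0, 0.0])
cov = np.array([[0.25, 0.0],      # semi-axes ~0.5 km and 0.2 km
                [0.0, 0.04]])
print(prob_within_radius(mu, cov, point=np.array([0.4, 0.0]), radius=0.5))
```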
Estimating Fuel Cycle Externalities: Analytical Methods and Issues, Report 2
Barnthouse, L.W.; Cada, G.F.; Cheng, M.-D.; Easterly, C.E.; Kroodsma, R.L.; Lee, R.; Shriner, D.S.; Tolbert, V.R.; Turner, R.S.
1994-07-01
that also have not been fully addressed. This document contains two types of papers that seek to fill part of this void. Some of the papers describe analytical methods that can be applied to one of the five steps of the damage function approach. The other papers discuss some of the complex issues that arise in trying to estimate externalities. This report, the second in a series of eight reports, is part of a joint study by the U.S. Department of Energy (DOE) and the Commission of the European Communities (EC)* on the externalities of fuel cycles. Most of the papers in this report were originally written as working papers during the initial phases of this study. The papers describe the (non-radiological) atmospheric dispersion modeling that the study uses; review much of the relevant literature on ecological and health effects, and on the economic valuation of those impacts; address several of the more complex and contentious issues in estimating externalities; and describe a method for depicting the quality of the scientific information that a study uses. The analytical methods and issues that this report discusses generally pertain to more than one of the fuel cycles, though not necessarily to all of them. The report is divided into six parts, each one focusing on a different subject area.
NASA Astrophysics Data System (ADS)
Kavehrad, Mohsen; Joseph, Myrlene
1986-12-01
The maximum entropy criterion for estimating an unknown probability density function from its moments is applied to the evaluation of the average error probability in digital communications. Accurate averages are obtained even when only a few moments are available. The method is stable, and the results compare well with those from the powerful and widely used Gauss quadrature rules (GQR) method. For the test cases presented in this work, the maximum entropy method typically achieved accurate results with only a few moments, while the GQR method required many more moments to reach the same accuracy. The method requires about the same number of moments as techniques based on orthogonal expansions. In addition, it provides an estimate of the probability density function of the target variable in a digital communication application.
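A minimal sketch of the moment-matching step under assumed conditions (bounded support, polynomial moment constraints): the maximum entropy density p(x) ∝ exp(Σk λk x^k) is found by minimizing the convex dual F(λ) = log Z(λ) - λ·μ, here for the first four moments of a standard Gaussian.

```python
import numpy as np
from scipy.optimize import minimize

L, K = 6.0, 4                                  # support half-width, number of moments
x = np.linspace(-L, L, 4001)
dx = x[1] - x[0]
powers = np.vstack([x ** k for k in range(1, K + 1)])
mu = np.array([0.0, 1.0, 0.0, 3.0])            # target: N(0,1) raw moments

def dual(lam):
    """Convex dual F(lam) = log Z(lam) - lam . mu for p ~ exp(lam . x^k)."""
    logp = lam @ powers
    m = logp.max()                              # guard against overflow
    return m + np.log(np.sum(np.exp(logp - m)) * dx) - lam @ mu

res = minimize(dual, np.zeros(K), method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-10, "fatol": 1e-12})
logp = res.x @ powers
p = np.exp(logp - logp.max())
p /= np.sum(p) * dx
print("recovered moments:",
      [round(float(np.sum(x ** k * p) * dx), 3) for k in range(1, K + 1)])
# the Gaussian case should drive lam toward (0, -0.5, 0, 0)
```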
Using optimal estimation method for upper atmospheric Lidar temperature retrieval
NASA Astrophysics Data System (ADS)
Zou, Rongshi; Pan, Weilin; Qiao, Shuai
2016-07-01
Conventional ground-based Rayleigh lidar temperature retrievals use an integration technique with the limitation that temperatures retrieved at the greatest heights must be abandoned, because a seed value has to be assumed to initialize the integration at the highest altitude. Here we suggest a method that can incorporate information from various sources to improve the quality of the retrieval. This approach inverts the lidar equation via the optimal estimation method (OEM), based on Bayesian theory together with a Gaussian statistical model. It presents several advantages over the conventional approach: (1) the possibility of incorporating information from multiple heterogeneous sources; (2) diagnostic information about retrieval quality; and (3) the ability to determine the vertical resolution and the maximum height up to which the retrieval is mostly independent of the a priori profile. This paper compares one-hour temperature profiles retrieved using the conventional and optimal estimation methods at Golmud, Qinghai province, China. The OEM results show better agreement with the SABER profile than the conventional ones, although in some regions the OEM retrieval is much lower than the SABER profile, a result that differs from previous studies; further work is needed to explain this phenomenon. The success of applying OEM to temperature retrieval validates its use as a retrieval framework in large synthetic observation systems that include various active remote sensing instruments, by incorporating all available measurement information into the model and analyzing groups of measurements simultaneously to improve the results.
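A minimal sketch of a linear-Gaussian optimal estimation retrieval in the Rodgers style, not the authors' lidar-specific implementation; the forward model, prior, and noise levels are all invented.

```python
import numpy as np

rng = np.random.default_rng(2)
n_state, n_meas = 20, 40
K = rng.standard_normal((n_meas, n_state)) / np.sqrt(n_state)  # forward model
x_true = 220.0 + 10.0 * np.sin(np.linspace(0, np.pi, n_state)) # "temperature"
x_a = np.full(n_state, 220.0)                                  # a priori state
S_a = 25.0 * np.eye(n_state)                                   # prior covariance
S_e = 1.0 * np.eye(n_meas)                                     # noise covariance
y = K @ x_true + rng.standard_normal(n_meas)

Se_inv = np.linalg.inv(S_e)
S_hat = np.linalg.inv(K.T @ Se_inv @ K + np.linalg.inv(S_a))   # posterior covariance
G = S_hat @ K.T @ Se_inv                                       # gain matrix
x_hat = x_a + G @ (y - K @ x_a)                                # MAP retrieval
A = G @ K                                                      # averaging kernel

print("rms retrieval error:", np.sqrt(np.mean((x_hat - x_true) ** 2)))
print("degrees of freedom for signal:", np.trace(A))
```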
ERIC Educational Resources Information Center
Gustafson, S. C.; Costello, C. S.; Like, E. C.; Pierce, S. J.; Shenoy, K. N.
2009-01-01
Bayesian estimation of a threshold time (hereafter simply threshold) for the receipt of impulse signals is accomplished given the following: 1) data, consisting of the number of impulses received in a time interval from zero to one and the time of the largest time impulse; 2) a model, consisting of a uniform probability density of impulse time…
NASA Astrophysics Data System (ADS)
Topa, M. E.; De Paola, F.; Giugni, M.; Kombe, W.; Touré, H.
2012-04-01
The dynamics of hydro-climatic processes fluctuate over a wide range of temporal scales. Such fluctuations are often unpredictable for ecosystems, and adapting to them represents a great challenge for the survival and stability of species. An unresolved issue is how far these fluctuations of climatic variables at different temporal scales can influence the frequency and intensity of extreme events, and how far those events can modify the life of ecosystems. It is now widely accepted that an increase in the frequency and intensity of extreme events will be one of the strongest characteristics of global climate change, with major social and biotic implications (Porporato et al., 2006). Recent field experiments (Gutshick and BassiriRad, 2003) and numerical analyses (Porporato et al., 2004) have shown that extreme events can have non-negligible consequences for organisms in water-limited ecosystems. Adaptation measures, and the responses of species and ecosystems to hydro-climatic variations, are therefore strongly tied to the probabilistic structure of these fluctuations. Generally, the non-linear intermittent dynamics of a state variable z (a rainfall depth, or the interarrival time between two storms) at short time scales (for example, daily) are described by a probability density function (pdf) p(z|υ), where υ is the parameter of the distribution. If that parameter υ varies because the external forcing fluctuates at a longer temporal scale, z reaches a new "local" equilibrium. When the temporal scale of the variation of υ is larger than that of z, the probability distribution of z can be obtained as a superposition of the temporary equilibria (the "superstatistics" approach): p(z) = ∫ p(z|υ)·φ(υ) dυ (1), where p(z|υ) is the probability of z conditional on υ, and φ(υ) is the pdf of υ (Beck, 2001; Benjamin and Cornell, 1970). The present work, carried out within FP7-ENV-2010 CLUVA (CLimate Change
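Equation (1) can be checked numerically for a concrete choice of distributions. In the sketch below, assumed purely for illustration, z is exponential at short scales with rate υ, and υ is gamma-distributed at the longer scale; the superposition integral then has a known closed (Lomax) form against which the quadrature can be compared.

```python
import numpy as np
from scipy import integrate, stats

a, b = 3.0, 2.0                     # gamma shape / rate for the parameter v

def p_marginal(z):
    """Eq. (1): superposition of exponentials p(z|v) = v exp(-v z)
    over a gamma pdf phi(v)."""
    f = lambda v: v * np.exp(-v * z) * stats.gamma.pdf(v, a, scale=1.0 / b)
    val, _ = integrate.quad(f, 0.0, np.inf)
    return val

for z in (0.5, 1.0, 4.0):
    closed = (a / b) * (1.0 + z / b) ** (-a - 1.0)   # known Lomax result
    print(f"z={z}: numeric {p_marginal(z):.5f}  closed-form {closed:.5f}")
```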
Semiempirical method for estimating the noise of a propeller
NASA Astrophysics Data System (ADS)
Samokhin, V. F.
2012-09-01
A semiempirical method for estimating the noise of a propeller on the basis of the Lighthill analogy is proposed. The main relations of the computational model for the acoustic-radiation power have been obtained from a dimensional analysis of the general solution of the inhomogeneous wave equation for the pulsed acoustic radiation from a propeller. A comparison of the calculated and experimental data on the acoustic-radiation power and the one-third octave spectra of the sound pressure of four- and eight-blade AV-72 and SV-24 propellers is presented.
A method for sex estimation using the proximal femur.
Curate, Francisco; Coelho, João; Gonçalves, David; Coelho, Catarina; Ferreira, Maria Teresa; Navega, David; Cunha, Eugénia
2016-09-01
The assessment of sex is crucial to the establishment of a biological profile of an unidentified skeletal individual. The best methods currently available for the sexual diagnosis of human skeletal remains generally rely on the presence of well-preserved pelvic bones, which is not always the case. Postcranial elements, including the femur, have been used to accurately estimate sex in skeletal remains from forensic and bioarcheological settings. In this study, we present an approach to estimate sex using two measurements (femoral neck width [FNW] and femoral neck axis length [FNAL]) of the proximal femur. FNW and FNAL were obtained in a training sample (114 females and 138 males) from the Luís Lopes Collection (National History Museum of Lisbon). Logistic regression and the C4.5 algorithm were used to develop models to predict sex in unknown individuals. Proposed cross-validated models correctly predicted sex in 82.5-85.7% of the cases. The models were also evaluated in a test sample (96 females and 96 males) from the Coimbra Identified Skeletal Collection (University of Coimbra), resulting in a sex allocation accuracy of 80.1-86.2%. This study supports the relative value of the proximal femur to estimate sex in skeletal remains, especially when other exceedingly dimorphic skeletal elements are not accessible for analysis. PMID:27373600
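A minimal sketch of the modelling approach on simulated measurements: a logistic regression of sex on FNW and FNAL. The distributions and resulting coefficients are invented; real use requires the reference-collection data rather than this synthetic stand-in.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 250
# crude, invented sex-specific measurement distributions in millimetres
fnw = np.r_[rng.normal(31, 2.2, n), rng.normal(36, 2.4, n)]
fnal = np.r_[rng.normal(88, 5.0, n), rng.normal(97, 5.5, n)]
sex = np.r_[np.zeros(n), np.ones(n)]          # 0 = female, 1 = male

X = np.column_stack([fnw, fnal])
model = LogisticRegression().fit(X, sex)
print("training accuracy:", model.score(X, sex))
print("P(male | FNW=33 mm, FNAL=92 mm):",
      model.predict_proba([[33.0, 92.0]])[0, 1])
```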
Analytical method to estimate resin cement diffusion into dentin
NASA Astrophysics Data System (ADS)
de Oliveira Ferraz, Larissa Cristina; Ubaldini, Adriana Lemos Mori; de Oliveira, Bruna Medeiros Bertol; Neto, Antonio Medina; Sato, Fracielle; Baesso, Mauro Luciano; Pascotto, Renata Corrêa
2016-05-01
This study analyzed the diffusion of two resin luting agents (resin cements) into dentin, with the aim of presenting an analytical method for estimating the thickness of the diffusion zone. Class V cavities were prepared in the buccal and lingual surfaces of molars (n=9). Indirect composite inlays were luted into the cavities with either a self-adhesive or a self-etch resin cement. The teeth were sectioned bucco-lingually and the cement-dentin interface was analyzed by using micro-Raman spectroscopy (MRS) and scanning electron microscopy. The evolution of the peak intensities of the Raman bands collected from the functional groups of the resin monomer (C–O–C, 1113 cm⁻¹) in the cements and from the mineral content (P–O, 961 cm⁻¹) in dentin followed sigmoid-shaped functions. A Boltzmann function (BF) was then fitted to the peaks encountered at 1113 cm⁻¹ to estimate the resin cement diffusion into dentin. The BF identified a resin cement-dentin diffusion zone of 1.8±0.4 μm for the self-adhesive cement and 2.5±0.3 μm for the self-etch cement. This analysis allowed the authors to estimate the diffusion of the resin cements into the dentin. Fitting the MRS data to the BF contributed to and is relevant for future studies of the adhesive interface.
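A minimal sketch of the fitting step on synthetic data: a Boltzmann sigmoid is fitted to a depth profile of Raman peak intensity, and a transition width is read off as the diffusion-zone estimate. The 10-90% width convention used here is an assumption of this sketch, not necessarily the authors' definition.

```python
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(x, a1, a2, x0, dx):
    """a1, a2: plateau intensities; x0: interface centre; dx: slope width."""
    return a2 + (a1 - a2) / (1.0 + np.exp((x - x0) / dx))

rng = np.random.default_rng(4)
depth = np.linspace(-5.0, 5.0, 60)                 # microns across the interface
signal = boltzmann(depth, 1.0, 0.1, 0.0, 0.45)
signal += 0.02 * rng.standard_normal(depth.size)   # measurement noise

popt, _ = curve_fit(boltzmann, depth, signal, p0=[1.0, 0.0, 0.0, 1.0])
a1, a2, x0, dx = popt
# one common convention: the 10-90% transition width of the sigmoid,
# ln(81) * |dx|, taken here as the diffusion-zone thickness estimate
print(f"diffusion zone ~ {np.log(81.0) * abs(dx):.2f} um (true dx = 0.45)")
```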
Smallwood, D. O.
1996-01-01
It is shown that the usual method for estimating the coherence functions (ordinary, partial, and multiple) for a general multiple-input/multiple-output problem can be expressed as a modified form of Cholesky decomposition of the cross-spectral density matrix of the input and output records. The results can be equivalently obtained using singular value decomposition (SVD) of the cross-spectral density matrix. Using SVD suggests a new form of fractional coherence. The formulation as an SVD problem also suggests a way to order the inputs when a natural physical order of the inputs is absent.
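A minimal sketch of the multiple-coherence computation at a single frequency line, from the partitioned cross-spectral density matrix; the transfer paths and noise level are invented, and the Cholesky factor is formed only to echo the decomposition viewpoint described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(5)
n_avg = 500                                    # spectral averages at one line
X = rng.standard_normal((2, n_avg)) + 1j * rng.standard_normal((2, n_avg))
h = np.array([0.8 - 0.2j, 0.5 + 0.4j])         # assumed input-output paths
noise = 0.3 * (rng.standard_normal(n_avg) + 1j * rng.standard_normal(n_avg))
y = h @ X + noise

Z = np.vstack([X, y])                          # inputs stacked over the output
G = (Z @ Z.conj().T) / n_avg                   # estimated CSD matrix

Gxx, Gxy, Gyy = G[:2, :2], G[:2, 2], G[2, 2].real
gamma2 = (Gxy.conj() @ np.linalg.solve(Gxx, Gxy)).real / Gyy
print("multiple coherence:", gamma2)           # near 1 for this low noise level

Lc = np.linalg.cholesky(G)                     # the decomposition viewpoint
print("Cholesky factor is lower triangular:", np.allclose(Lc, np.tril(Lc)))
```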
A variable circular-plot method for estimating bird numbers
Reynolds, R.T.; Scott, J.M.; Nussbaum, R.A.
1980-01-01
A bird census method is presented that is designed for tall, structurally complex vegetation types and rugged terrain. With this method the observer counts all birds seen or heard around a station and estimates the horizontal distance from the station to each bird. Count periods at stations vary according to the avian community and the structural complexity of the vegetation. The density of each species is determined by inspecting a histogram of the number of individuals per unit area in concentric bands of predetermined widths about the stations, choosing the band (with outside radius x) where the density begins to decline, and summing the number of individuals counted within the circle of radius x and dividing by the area (πx²). Although all observations beyond radius x are rejected with this procedure, coefficients of maximum distance.
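A minimal sketch with made-up detection distances, replacing the visual histogram inspection by a crude automatic decline rule (an assumption of this sketch, not part of the published method):

```python
import numpy as np

def circular_plot_density(distances, band_width=15.0, drop_fraction=0.75):
    """Density = n / (pi x^2), with x the outer radius of the last band
    before the per-band density falls clearly below the running mean."""
    d = np.asarray(distances)
    edges = np.arange(0.0, d.max() + band_width, band_width)
    counts, _ = np.histogram(d, bins=edges)
    band_density = counts / (np.pi * (edges[1:] ** 2 - edges[:-1] ** 2))
    running_mean = np.cumsum(band_density) / np.arange(1, band_density.size + 1)
    below = band_density[1:] < drop_fraction * running_mean[:-1]
    drop = int(np.argmax(below)) + 1 if below.any() else band_density.size
    x = edges[drop]
    return np.sum(d <= x) / (np.pi * x ** 2), x

rng = np.random.default_rng(6)
# 400 birds at uniform density out to 60 m, detectability fading beyond that
d = np.r_[60.0 * np.sqrt(rng.random(400)), rng.normal(80.0, 10.0, 100)]
density, radius = circular_plot_density(d)
print(f"density ~ {density:.4f} birds/m^2 within x = {radius:.0f} m "
      f"(true inner density {400 / (np.pi * 60**2):.4f})")
```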
Accuracy of age estimation of radiographic methods using developing teeth.
Maber, M; Liversidge, H M; Hector, M P
2006-05-15
Developing teeth are used to assess maturity and estimate age in a number of disciplines; however, the accuracy of the different methods has not been systematically investigated. The aim of this study was to determine the accuracy of several methods. Tooth formation was assessed from radiographs of healthy children attending a dental teaching hospital. The sample was 946 children (491 boys, 455 girls, aged 3-16.99 years) with similar numbers of children of Bangladeshi and British Caucasian ethnic origin. Panoramic radiographs were examined and seven mandibular teeth staged according to Demirjian's dental maturity scale [A. Demirjian, Dental development, CD-ROM, Silver Platter Education, University of Montreal, Montreal, 1993-1994; A. Demirjian, H. Goldstein, J.M. Tanner, A new system of dental age assessment, Hum. Biol. 45 (1973) 211-227; A. Demirjian, H. Goldstein, New systems for dental maturity based on seven and four teeth, Ann. Hum. Biol. 3 (1976) 411-421], Nolla [C.M. Nolla, The development of the permanent teeth, J. Dent. Child. 27 (1960) 254-266] and Haavikko [K. Haavikko, The formation and the alveolar and clinical eruption of the permanent teeth. An orthopantomographic study. Proc. Finn. Dent. Soc. 66 (1970) 103-170]. Dental age was calculated for each method, including an adaptation of Demirjian's method with updated scoring [G. Willems, A. Van Olmen, B. Spiessens, C. Carels, Dental age estimation in Belgian children: Demirjian's technique revisited, J. Forensic Sci. 46 (2001) 893-895]. The mean difference (+/-S.D. in years) between dental and real age was calculated for each method (and, in the case of Haavikko, for each tooth type) and tested using a t-test. The mean difference was also calculated for the age group 3-13.99 years for Haavikko (mean and individual teeth). Results show that the most accurate method was that of Willems [G. Willems, A. Van Olmen, B. Spiessens, C. Carels, Dental age estimation in Belgian children: Demirjian's technique revisited, J. Forensic Sci
Application of Common Mid-Point Method to Estimate Asphalt
NASA Astrophysics Data System (ADS)
Zhao, Shan; Al-Aadi, Imad
2015-04-01
3-D radar is a multi-array stepped-frequency ground-penetrating radar (GPR) that can measure at a very close sampling interval in both the in-line and cross-line directions. Constructing asphalt layers to the specified thicknesses is crucial for pavement structural capacity and pavement performance. The common mid-point (CMP) method is a multi-offset measurement method that can improve the accuracy of asphalt layer thickness estimation. In this study, the viability of using 3-D radar to predict asphalt concrete pavement thickness with an extended CMP method was investigated. GPR signals were collected on asphalt pavements of various thicknesses. The time-domain resolution of the 3-D radar was improved by applying a zero-padding technique in the frequency domain. The performance of the 3-D radar was then compared to that of an air-coupled horn antenna. The study concluded that 3-D radar can accurately predict asphalt layer thickness using the CMP method when the layer thickness is larger than 0.13 m, and that the limited time-domain resolution of 3-D radar can be addressed by frequency-domain zero-padding. Keywords: asphalt pavement thickness, 3-D radar, stepped-frequency, common mid-point method, zero padding.
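For a single layer, the CMP geometry reduces to the classical hyperbola t² = t0² + (x/v)², so a straight-line fit of t² against x² yields the wave speed v and the thickness h = v·t0/2. The sketch below runs this on synthetic travel-time picks with an assumed asphalt permittivity, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(7)
c = 0.3                                   # m/ns, free-space wave speed
v_true = c / np.sqrt(5.0)                 # assumed asphalt permittivity of 5
h_true = 0.20                             # m, true layer thickness

offsets = np.linspace(0.1, 1.0, 10)       # m, transmitter-receiver separations
t0 = 2.0 * h_true / v_true                # ns, zero-offset two-way time
t = np.sqrt(t0 ** 2 + (offsets / v_true) ** 2)
t += rng.normal(0.0, 0.02, t.size)        # travel-time picking noise (ns)

# t^2 = t0^2 + x^2 / v^2  ->  line fit of t^2 against x^2
slope, intercept = np.polyfit(offsets ** 2, t ** 2, 1)
v_est = 1.0 / np.sqrt(slope)
h_est = v_est * np.sqrt(intercept) / 2.0
print(f"v = {v_est:.3f} m/ns, thickness = {h_est:.3f} m (true {h_true} m)")
```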
Study on color difference estimation method of medicine biochemical analysis
NASA Astrophysics Data System (ADS)
Wang, Chunhong; Zhou, Yue; Zhao, Hongxia; Sun, Jiashi; Zhou, Fengkun
2006-01-01
Biochemical analysis is an important inspection and diagnostic method in hospital clinics, and the biochemical analysis of urine is one important item. A urine test strip shows a different color for each detection item and for each degree of illness. The color difference between a standard threshold and the color of the urine test strip can therefore be used to judge the degree of illness, enabling further analysis and diagnosis. Color is a three-dimensional physical variable with a psychological component, whereas reflectance is one-dimensional; a color-difference estimation method can thus achieve better precision and convenience in urine testing than the conventional one-dimensional reflectance method, allowing a more accurate diagnosis. A digital camera readily captures an image of the urine test strip and so supports convenient urine biochemical analysis. In the experiment, color images of urine test strips were taken with a popular color digital camera and saved on a computer running simple color space conversion (RGB -> XYZ -> L*a*b*) and calculation software. Test samples are graded according to intelligent detection of quantitative color. Because the images from every test are stored on the computer, the whole course of an illness can be monitored. This method can also be used in other medical biochemical analyses that involve color. Experimental results show that the test method is quick and accurate; it can be used in hospitals, calibration organizations, and homes, so its application prospects are extensive.
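A minimal sketch of the color pipeline described above: sRGB values are linearized, converted to XYZ and then to L*a*b* (D65 white point), and compared with the CIE76 difference ΔE*ab; the sample RGB values are made up for illustration.

```python
import numpy as np

def srgb_to_lab(rgb):
    """sRGB (0-255) -> XYZ -> CIE L*a*b* under the D65 reference white."""
    rgb = np.asarray(rgb, dtype=float) / 255.0
    rgb = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = M @ rgb
    xyz /= np.array([0.95047, 1.0, 1.08883])          # D65 white point
    f = np.where(xyz > (6/29) ** 3, np.cbrt(xyz), xyz / (3 * (6/29) ** 2) + 4/29)
    return np.array([116 * f[1] - 16,                 # L*
                     500 * (f[0] - f[1]),             # a*
                     200 * (f[1] - f[2])])            # b*

def delta_e76(rgb1, rgb2):
    """CIE76 color difference between two sRGB colors."""
    return np.linalg.norm(srgb_to_lab(rgb1) - srgb_to_lab(rgb2))

# e.g. compare a measured test-strip patch with a standard threshold color
print(delta_e76([200, 180, 60], [185, 170, 75]))
```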
Comparison of carbon and biomass estimation methods for European forests
NASA Astrophysics Data System (ADS)
Neumann, Mathias; Mues, Volker; Harkonen, Sanna; Mura, Matteo; Bouriaud, Olivier; Lang, Mait; Achten, Wouter; Thivolle-Cazat, Alain; Bronisz, Karol; Merganicova, Katarina; Decuyper, Mathieu; Alberdi, Iciar; Astrup, Rasmus; Schadauer, Klemens; Hasenauer, Hubert
2015-04-01
National and international reporting systems, as well as research, enterprises, and political stakeholders, require information on the carbon stocks of forests. Terrestrial assessment systems such as forest inventory data, in combination with carbon calculation methods, are often used for this purpose. To assess the effect of the calculation method used, a comparative analysis was carried out using the carbon calculation methods of 13 European countries and the research plots of ICP Forests (International Co-operative Programme on Assessment and Monitoring of Air Pollution Effects on Forests). These methods were applied to five European tree species (Fagus sylvatica L., Quercus robur L., Betula pendula Roth, Picea abies (L.) Karst. and Pinus sylvestris L.) using a standardized theoretical tree dataset to avoid biases due to data collection and sample design. The carbon calculation methods use allometric biomass and volume functions, carbon and biomass expansion factors, or a combination thereof. The results of the analysis show a high variation in the estimates of total tree carbon as well as of carbon in the individual tree compartments. The same pattern is found when comparing the respective volume estimates. This is consistent for all five tree species, and the variation remains when the results are grouped according to the European forest regions. Possible explanations are differences in the sample material used for the biomass models, in the model variables, or in the definition of tree compartments. The analysed carbon calculation methods thus have a strong effect on the results, both for single trees and for forest stands. To avoid misinterpretation, the calculation method has to be chosen carefully and accompanied by quality checks, and it deserves particular attention in comparative studies to avoid biased and misleading conclusions.
Method for Estimating the Presence of Clostridium perfringens in Food
Harmon, S. M.; Kautter, D. A.
1970-01-01
The methods currently used for the enumeration of Clostridium perfringens in food are often inadequate because of the rapid loss of viability of this organism when the sample is frozen or refrigerated. A method for estimating the presence of C. perfringens in food which utilizes the hemolytic and lecithinase activities of alpha toxin was developed. The hemolytic activity was measured in hemolysin indicator plates. Lecithinase activity of the extract was determined by the lecithovitellin test. Of 34 strains of C. perfringens associated with foodborne disease outbreaks, 32 produced sufficient alpha toxin in roast beef with gravy and in chicken broth to permit a reliable estimate of growth in these foods. Alpha toxin was extracted from food with 0.4 M saline buffered (at pH 8.0) with 0.05 M N-2-hydroxyethylpiperazine-N′-2-ethanesulfonic acid and concentrated by dialysis against 30% polyethylene glycol. A detectable quantity of alpha toxin was produced by approximately 10⁶ C. perfringens cells per g of substrate, and the amount increased in proportion to the cell population. Results obtained with food samples responsible for gastroenteritis in humans indicate that a correlation can be made between the amount of alpha toxin present and previous growth of C. perfringens in food regardless of whether the organisms are viable when the examination is performed. PMID:4321712
A Review of the Method of Moho Fold Estimation
NASA Astrophysics Data System (ADS)
Shin, Y.; Lim, M.; Park, Y.; Rim, H.
2010-12-01
We review the method of Moho fold estimation and its validation introduced in the recently published papers by Shin et al. (2009, 2007) and by Jin et al. (1994). The Tibetan Plateau, the study area of those papers, is greatly affected by heavy compression between the Eurasian and Indian Plates and consequently has particular deformation structures related to this collisional tectonic environment, including possible buckling of the very deep Moho. The recent method suggested by Shin et al. (2009) enables one to reveal the three-dimensional structure of the Moho fold and to validate its direction, amplitude, and wavelength by comparison with other geophysical (e.g. an elastic plate model under horizontal loading) or geodetic (e.g. current crustal movement from GPS) evidence. We also review several particular features of the Moho fold beneath Tibet. Finally, from the viewpoint of Moho fold estimation, we present a comparison of recent global gravity models, both satellite-based models (GGM03S, EIGEN-5S, ITG-GRACE2010S, GOCO01S, and GO_CONS_GCF_2DIR) and combination models that include terrestrial gravimetry (GGM03C, EIGEN-5C, EGM2008, EIGEN-GL04C, and EIGEN51C). References: [1] Jin, Y. et al., 1994, Nature, 371, 669-674. [2] Shin, Y. H. et al., 2009, Geophysical Research Letters, 36, L01302, doi:10.1029/2008GL036068. [3] Shin, Y. H. et al., 2007, Geophysical Journal International, 170, 971-985.
Method of Estimating Continuous Cooling Transformation Curves of Glasses
NASA Technical Reports Server (NTRS)
Zhu, Dongmei; Zhou, Wancheng; Ray, Chandra S.; Day, Delbert E.
2006-01-01
A method is proposed for estimating the critical cooling rate and continuous cooling transformation (CCT) curve from isothermal TTT data of glasses. The critical cooling rates and CCT curves for a group of lithium disilicate glasses containing different amounts of Pt as nucleating agent estimated through this method are compared with the experimentally measured values. By analysis of the experimental and calculated data for the lithium disilicate glasses, a simple relationship was found between the amount crystallized in the glasses during continuous cooling, X, and the undercooling, ΔT: X = A·R⁻⁴·exp(B·ΔT), where ΔT is the temperature difference between the theoretical melting point of the glass composition and the temperature in question, R is the cooling rate, and A and B are constants. The relation between the amount of crystallization during continuous cooling and during an isothermal hold can be expressed as X_cT/X_iT = (4/B)⁴·ΔT⁻⁴, where X_cT stands for the amount crystallized in a glass during continuous cooling for a time t when the temperature reaches T, and X_iT is the amount crystallized during an isothermal hold at temperature T for the same time t.
The Mayfield method of estimating nesting success: A model, estimators and simulation results
Hensler, G.L.; Nichols, J.D.
1981-01-01
Using a nesting model proposed by Mayfield we show that the estimator he proposes is a maximum likelihood estimator (m.l.e.). M.l.e. theory allows us to calculate the asymptotic distribution of this estimator, and we propose an estimator of the asymptotic variance. Using these estimators we give approximate confidence intervals and tests of significance for daily survival. Monte Carlo simulation results show the performance of our estimators and tests under many sets of conditions. A traditional estimator of nesting success is shown to be quite inferior to the Mayfield estimator. We give sample sizes required for a given accuracy under several sets of conditions.
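A minimal sketch of the Mayfield estimator with a normal-approximation confidence interval; the binomial-type variance formula is the standard asymptotic form assumed here, and the counts are invented.

```python
import math

def mayfield(exposure_days, losses, nest_period_days, z=1.96):
    """Mayfield daily survival estimate from total nest-days of exposure
    and the number of failed nests, with an approximate 95% CI and the
    implied probability that a nest survives the whole nesting period."""
    s = 1.0 - losses / exposure_days          # daily survival rate (MLE)
    se = math.sqrt(s * (1.0 - s) / exposure_days)
    ci = (s - z * se, s + z * se)
    return s, ci, s ** nest_period_days

s, ci, p_success = mayfield(exposure_days=450.0, losses=18, nest_period_days=25)
print(f"daily survival {s:.4f}, 95% CI ({ci[0]:.4f}, {ci[1]:.4f}), "
      f"nest success {p_success:.3f}")
```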
Estimated Accuracy of Three Common Trajectory Statistical Methods
NASA Technical Reports Server (NTRS)
Kabashnikov, Vitaliy P.; Chaikovsky, Anatoli P.; Kucsera, Tom L.; Metelskaya, Natalia S.
2011-01-01
Three well-known trajectory statistical methods (TSMs), namely concentration field (CF), concentration weighted trajectory (CWT), and potential source contribution function (PSCF) methods were tested using known sources and artificially generated data sets to determine the ability of TSMs to reproduce spatial distribution of the sources. In the works by other authors, the accuracy of the trajectory statistical methods was estimated for particular species and at specified receptor locations. We have obtained a more general statistical estimation of the accuracy of source reconstruction and have found optimum conditions to reconstruct source distributions of atmospheric trace substances. Only virtual pollutants of the primary type were considered. In real world experiments, TSMs are intended for application to a priori unknown sources. Therefore, the accuracy of TSMs has to be tested with all possible spatial distributions of sources. An ensemble of geographical distributions of virtual sources was generated. Spearman's rank-order correlation coefficient between spatial distributions of the known virtual and the reconstructed sources was taken to be a quantitative measure of the accuracy. Statistical estimates of the mean correlation coefficient and a range of the most probable values of correlation coefficients were obtained. All the TSMs that were considered here showed similar close results. The maximum of the ratio of the mean correlation to the width of the correlation interval containing the most probable correlation values determines the optimum conditions for reconstruction. An optimal geographical domain roughly coincides with the area supplying most of the substance to the receptor. The optimal domain's size is dependent on the substance decay time. Under optimum reconstruction conditions, the mean correlation coefficients can reach 0.70-0.75. The boundaries of the interval with the most probable correlation values are 0.6-0.9 for the decay time of 240 h
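Of the three TSMs, the PSCF is the most compact to illustrate: PSCF(i,j) = m_ij/n_ij, where n_ij counts all trajectory endpoints in grid cell (i,j) and m_ij counts only endpoints of trajectories whose receptor concentration exceeded a threshold. The sketch below uses entirely synthetic trajectories and concentrations.

```python
import numpy as np

rng = np.random.default_rng(8)
n_traj, n_pts = 400, 50
# back-trajectory endpoints: random walks away from the receptor at (0, 0)
steps = rng.normal(0.0, 0.5, size=(n_traj, n_pts, 2))
endpoints = np.cumsum(steps, axis=1)
conc = rng.lognormal(0.0, 1.0, size=n_traj)      # receptor concentrations

threshold = np.percentile(conc, 75)              # "high concentration" cutoff
edges = np.linspace(-10, 10, 41)                 # 0.5-unit grid cells

def endpoint_counts(pts):
    h, _, _ = np.histogram2d(pts[:, 0], pts[:, 1], bins=[edges, edges])
    return h

n_ij = endpoint_counts(endpoints.reshape(-1, 2))
m_ij = endpoint_counts(endpoints[conc > threshold].reshape(-1, 2))

with np.errstate(invalid="ignore", divide="ignore"):
    pscf = np.where(n_ij > 0, m_ij / n_ij, np.nan)
print("max PSCF value:", np.nanmax(pscf))
```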
Method for estimating road salt contamination of Norwegian lakes
NASA Astrophysics Data System (ADS)
Kitterød, Nils-Otto; Wike Kronvall, Kjersti; Turtumøygaard, Stein; Haaland, Ståle
2013-04-01
Consumption of road salt in Norway, used to improve winter road conditions, has tripled during the last two decades, and there is a need to quantify limits for the optimal use of road salt to avoid further environmental harm. The purpose of this study was to implement a methodology to estimate the chloride concentration in any given water body in Norway. This goal is feasible if the complexity of solute transport in the landscape is simplified. The idea was to keep computations as simple as possible in order to increase the spatial resolution of the input functions. The first simplification was to treat all roads exposed to regular salt application as steady-state sources of sodium chloride. This is valid if new road salt is applied before the previous contamination has been removed through precipitation. The main reasons for this assumption are the significant retention capacities of vegetation, organic matter, and soil. The second simplification was that the groundwater table is close to the surface. This assumption is valid for the major part of Norway, which means that topography is sufficient to delineate the catchment area at any location in the landscape. Given these two assumptions, we applied spatial functions of mass load (mass of NaCl per unit time) and conditional estimates of the normal water balance (volume of water per unit time) to calculate the steady-state chloride concentration along the lake perimeter. The spatial resolution of the mass load and of the estimated concentration along the lake perimeter was 25 m x 25 m, while the water balance had a 1 km x 1 km resolution. The method was validated for a limited number of Norwegian lakes, and the estimates have been compared to observations. Initial results indicate significant overlap between measurements and estimates, but only for lakes where road salt is the major contributor to chloride contamination. For lakes in catchments with high subsurface transmissivity, the groundwater table is not necessarily following the
Estimation of Anthocyanin Content of Berries by NIR Method
Zsivanovits, G.; Ludneva, D.; Iliev, A.
2010-01-21
Anthocyanin contents of fruits were estimated by VIS spectrophotometer and compared with spectra measured by NIR spectrophotometer (600-1100 nm step 10 nm). The aim was to find a relationship between NIR method and traditional spectrophotometric method. The testing protocol, using NIR, is easier, faster and non-destructive. NIR spectra were prepared in pairs, reflectance and transmittance. A modular spectrocomputer, realized on the basis of a monochromator and peripherals Bentham Instruments Ltd (GB) and a photometric camera created at Canning Research Institute, were used. An important feature of this camera is the possibility offered for a simultaneous measurement of both transmittance and reflectance with geometry patterns T0/180 and R0/45. The collected spectra were analyzed by CAMO Unscrambler 9.1 software, with PCA, PLS, PCR methods. Based on the analyzed spectra quality and quantity sensitive calibrations were prepared. The results showed that the NIR method allows measuring of the total anthocyanin content in fresh berry fruits or processed products without destroying them.
A comparison of spectral estimation methods for the analysis of sibilant fricatives
Reidy, Patrick F.
2015-01-01
It has been argued that, to ensure accurate spectral feature estimates for sibilants, the spectral estimation method should include a low-variance spectral estimator; however, no empirical evaluation of estimation methods in terms of feature estimates has been given. The spectra of /s/ and /ʃ/ were estimated with different methods that varied the pre-emphasis filter and estimator. These methods were evaluated in terms of effects on two features (centroid and degree of sibilance) and on the detection of four linguistic contrasts within these features. Estimation method affected the spectral features but none of the tested linguistic contrasts. PMID:25920873
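A minimal sketch of one spectral feature under two estimators: the spectral centroid computed from a plain periodogram and from a multitaper (DPSS) spectrum, the kind of low-variance estimator at issue; the "frication" signal is synthetic, not speech data.

```python
import numpy as np
from scipy.signal.windows import dpss

fs = 22050
rng = np.random.default_rng(9)
t = np.arange(4096) / fs
# noise shaped by a resonance near 4 kHz as a crude sibilant stand-in
x = rng.standard_normal(t.size)
x = np.convolve(x, np.sin(2 * np.pi * 4000 * t[:64]) * np.hanning(64), "same")

freqs = np.fft.rfftfreq(x.size, 1 / fs)

def centroid(power):
    """First spectral moment (amplitude-weighted mean frequency)."""
    return np.sum(freqs * power) / np.sum(power)

pgram = np.abs(np.fft.rfft(x)) ** 2
tapers = dpss(x.size, NW=4, Kmax=7)
multitaper = np.mean([np.abs(np.fft.rfft(x * w)) ** 2 for w in tapers], axis=0)

print(f"periodogram centroid: {centroid(pgram):7.1f} Hz")
print(f"multitaper centroid:  {centroid(multitaper):7.1f} Hz")
```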
Estimating recharge at Yucca Mountain, Nevada, USA: comparison of methods
NASA Astrophysics Data System (ADS)
Flint, Alan L.; Flint, Lorraine E.; Kwicklis, Edward M.; Fabryka-Martin, June T.; Bodvarsson, Gudmundur S.
2002-02-01
Obtaining values of net infiltration, groundwater travel time, and recharge is necessary at the Yucca Mountain site, Nevada, USA, in order to evaluate the expected performance of a potential repository as a containment system for high-level radioactive waste. However, the geologic complexities of this site, its low precipitation and net infiltration, with numerous mechanisms operating simultaneously to move water through the system, provide many challenges for the estimation of the spatial distribution of recharge. A variety of methods appropriate for arid environments has been applied, including water-balance techniques, calculations using Darcy's law in the unsaturated zone, a soil-physics method applied to neutron-hole water-content data, inverse modeling of thermal profiles in boreholes extending through the thick unsaturated zone, chloride mass balance, atmospheric radionuclides, and empirical approaches. These methods indicate that near-surface infiltration rates at Yucca Mountain are highly variable in time and space, with local (point) values ranging from zero to several hundred millimeters per year. Spatially distributed net-infiltration values average 5 mm/year, with the highest values approaching 20 mm/year near Yucca Crest. Site-scale recharge estimates range from less than 1 to about 12 mm/year. These results have been incorporated into a site-scale model that has been calibrated using these data sets that reflect infiltration processes acting on highly variable temporal and spatial scales. The modeling study predicts highly non-uniform recharge at the water table, distributed significantly differently from the non-uniform infiltration pattern at the surface.
Computational methods estimating uncertainties for profile reconstruction in scatterometry
NASA Astrophysics Data System (ADS)
Gross, H.; Rathsfeld, A.; Scholze, F.; Model, R.; Bär, M.
2008-04-01
The solution of the inverse problem in scatterometry, i.e. the determination of periodic surface structures from light diffraction patterns, is incomplete without knowledge of the uncertainties associated with the reconstructed surface parameters. With decreasing feature sizes of lithography masks, increasing demands on metrology techniques arise. Scatterometry as a non-imaging indirect optical method is applied to periodic line-space structures in order to determine geometric parameters like side-wall angles, heights, top and bottom widths and to evaluate the quality of the manufacturing process. The numerical simulation of the diffraction process is based on the finite element solution of the Helmholtz equation. The inverse problem seeks to reconstruct the grating geometry from measured diffraction patterns. Restricting the class of gratings and the set of measurements, this inverse problem can be reformulated as a non-linear operator equation in Euclidean spaces. The operator maps the grating parameters to the efficiencies of diffracted plane wave modes. We employ a Gauss-Newton type iterative method to solve this operator equation and end up minimizing the deviation of the measured efficiency or phase shift values from the simulated ones. The reconstruction properties and the convergence of the algorithm, however, is controlled by the local conditioning of the non-linear mapping and the uncertainties of the measured efficiencies or phase shifts. In particular, the uncertainties of the reconstructed geometric parameters essentially depend on the uncertainties of the input data and can be estimated by various methods. We compare the results obtained from a Monte Carlo procedure to the estimations gained from the approximative covariance matrix of the profile parameters close to the optimal solution and apply them to EUV masks illuminated by plane waves with wavelengths in the range of 13 nm.
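A minimal sketch of the covariance-based uncertainty estimate mentioned at the end of the abstract: near the optimum of a weighted least-squares reconstruction, cov(p) ≈ (JᵀWJ)⁻¹, with J the Jacobian of the model with respect to the profile parameters and W the inverse measurement variances. The forward model below is a smooth hypothetical stand-in, not the finite element diffraction solver.

```python
import numpy as np

def forward(p, x):
    """Hypothetical smooth stand-in for the FEM diffraction model."""
    return p[0] * np.exp(-p[1] * x) + p[2] * x

def jacobian(p, x, h=1e-6):
    """Central finite-difference Jacobian of the forward model."""
    J = np.empty((x.size, p.size))
    for k in range(p.size):
        dp = np.zeros_like(p); dp[k] = h
        J[:, k] = (forward(p + dp, x) - forward(p - dp, x)) / (2 * h)
    return J

x = np.linspace(0.0, 1.0, 30)             # e.g. diffraction orders / angles
p_opt = np.array([1.0, 2.0, 0.5])         # assumed optimal parameters
sigma = 0.01 * np.ones_like(x)            # measurement uncertainties

J = jacobian(p_opt, x)
W = np.diag(1.0 / sigma ** 2)
cov = np.linalg.inv(J.T @ W @ J)          # approximate parameter covariance
print("parameter standard deviations:", np.sqrt(np.diag(cov)))
```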
Estimating recharge at Yucca Mountain, Nevada, USA: Comparison of methods
Flint, A.L.; Flint, L.E.; Kwicklis, E.M.; Fabryka-Martin, J. T.; Bodvarsson, G.S.
2002-01-01
Obtaining values of net infiltration, groundwater travel time, and recharge is necessary at the Yucca Mountain site, Nevada, USA, in order to evaluate the expected performance of a potential repository as a containment system for high-level radioactive waste. However, the geologic complexities of this site, its low precipitation and net infiltration, with numerous mechanisms operating simultaneously to move water through the system, provide many challenges for the estimation of the spatial distribution of recharge. A variety of methods appropriate for arid environments has been applied, including water-balance techniques, calculations using Darcy's law in the unsaturated zone, a soil-physics method applied to neutron-hole water-content data, inverse modeling of thermal profiles in boreholes extending through the thick unsaturated zone, chloride mass balance, atmospheric radionuclides, and empirical approaches. These methods indicate that near-surface infiltration rates at Yucca Mountain are highly variable in time and space, with local (point) values ranging from zero to several hundred millimeters per year. Spatially distributed net-infiltration values average 5 mm/year, with the highest values approaching 20 mm/year near Yucca Crest. Site-scale recharge estimates range from less than 1 to about 12 mm/year. These results have been incorporated into a site-scale model that has been calibrated using these data sets that reflect infiltration processes acting on highly variable temporal and spatial scales. The modeling study predicts highly non-uniform recharge at the water table, distributed significantly differently from the non-uniform infiltration pattern at the surface.
Estimating recharge at Yucca Mountain, Nevada, USA: comparison of methods
Flint, A. L.; Flint, L. E.; Kwicklis, E. M.; Fabryka-Martin, J. T.; Bodvarsson, G. S.
2001-11-01
Obtaining values of net infiltration, groundwater travel time, and recharge is necessary at the Yucca Mountain site, Nevada, USA, in order to evaluate the expected performance of a potential repository as a containment system for high-level radioactive waste. However, the geologic complexities of this site, its low precipitation and net infiltration, with numerous mechanisms operating simultaneously to move water through the system, provide many challenges for the estimation of the spatial distribution of recharge. A variety of methods appropriate for arid environments has been applied, including water-balance techniques, calculations using Darcy's law in the unsaturated zone, a soil-physics method applied to neutron-hole water-content data, inverse modeling of thermal profiles in boreholes extending through the thick unsaturated zone, chloride mass balance, atmospheric radionuclides, and empirical approaches. These methods indicate that near-surface infiltration rates at Yucca Mountain are highly variable in time and space, with local (point) values ranging from zero to several hundred millimeters per year. Spatially distributed net-infiltration values average 5 mm/year, with the highest values approaching 20 mm/year near Yucca Crest. Site-scale recharge estimates range from less than 1 to about 12 mm/year. These results have been incorporated into a site-scale model that has been calibrated using these data sets that reflect infiltration processes acting on highly variable temporal and spatial scales. The modeling study predicts highly non-uniform recharge at the water table, distributed significantly differently from the non-uniform infiltration pattern at the surface.
Effect of packing density on strain estimation by Fry method
NASA Astrophysics Data System (ADS)
Srivastava, Deepak; Ojha, Arun
2015-04-01
The Fry method is a graphical technique that uses the relative movement of material points, typically grain centres or centroids, and yields the finite strain ellipse as the central vacancy of a point distribution. Application of the Fry method assumes an anticlustered and isotropic grain centre distribution in undistorted samples. This assumption is, however, difficult to test in practice. As an alternative, the sedimentological degree of sorting is routinely used as an approximation for the degree of clustering and anisotropy. The effect of sorting on the Fry method has already been explored by earlier workers. This study tests the effect of the tightness of packing, the packing density, which equals the ratio of the area occupied by all the grains to the total area of the sample. A practical advantage of using the degree of sorting or the packing density is that these parameters, unlike the degree of clustering or anisotropy, do not vary during a constant volume homogeneous distortion. Using computer graphics simulations and programming, we approach the issue of packing density in four steps: (i) generation of several sets of random point distributions such that each set has the same degree of sorting but differs from the other sets with respect to packing density, (ii) two-dimensional homogeneous distortion of each point set by various known strain ratios and orientations, (iii) estimation of strain in each distorted point set by the Fry method, and (iv) error estimation by comparing the known strain and those given by the Fry method. Both the absolute errors and the relative root mean squared errors give consistent results. For a given degree of sorting, the Fry method gives better results in samples having greater than 30% packing density. This is because the grain centre distributions show stronger clustering and a greater degree of anisotropy with the decrease in packing density. As compared to the degree of sorting alone, a
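The core Fry construction lends itself to a short sketch: the Fry plot is the cloud of all pairwise separation vectors between grain centres, and the packing density is the area ratio varied in the study. A minimal version on synthetic points, not the authors' simulation code:

```python
import numpy as np

def fry_points(centres):
    """All pairwise separation vectors between grain centres (n x 2 array).
    The central vacancy of this point cloud approximates the strain ellipse."""
    d = centres[None, :, :] - centres[:, None, :]   # (n, n, 2) differences
    mask = ~np.eye(len(centres), dtype=bool)        # drop self-pairs
    return d[mask]

def packing_density(grain_areas, sample_area):
    """Ratio of total grain area to sample area (the parameter varied here)."""
    return np.asarray(grain_areas).sum() / sample_area

# Homogeneous distortion of a point set by a known strain ratio Rs:
# pure shear stretches x by sqrt(Rs) and shortens y by 1/sqrt(Rs).
rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, size=(300, 2))
Rs = 1.5
strained = pts * np.array([np.sqrt(Rs), 1.0 / np.sqrt(Rs)])
fry = fry_points(strained)  # plot fry[:, 0] vs fry[:, 1] to see the vacancy
```

With a purely random (Poisson) point set such as this one the central vacancy is poorly defined, which is exactly why the degree of anticlustering, and hence the packing density, matters.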
Estimating Earth's modal Q with epicentral stacking method
NASA Astrophysics Data System (ADS)
Chen, X.; Park, J. J.
2014-12-01
The attenuation rates of Earth's normal modes are the most important constraints on the anelastic state of Earth's deep interior. Yet current measurements of Earth's attenuation rates suffer from three sources of bias: the mode coupling effect, the beating effect, and background noise, which together lead to significant uncertainties in the attenuation rates. In this research, we present a new technique to estimate the attenuation rates of Earth's normal modes - the epicentral stacking method. Rather than using the conventional geographical coordinate system, we instead deal with Earth's normal modes in the epicentral coordinate system, in which only 5 singlets rather than 2l+1 are excited. By stacking records from the same events at a series of time lags, we are able to recover the time-varying amplitudes of the 5 excited singlets, and thus measure their attenuation rates. The advantage of our method is that it enhances the SNR through stacking and minimizes the background noise effect, yet it avoids the beating effect problem commonly associated with the conventional multiplet stacking method by singling out the singlets. The attenuation rates measured from our epicentral stacking method seem to be reliable measurements in that: a) the measured attenuation rates are generally consistent among the 10 large events we used, except for a few events with unexplained larger attenuation rates; b) the plot of the log of singlet amplitude against time lag is very close to a straight line, suggesting an accurate estimation of the attenuation rate. The Q measurements from our method are consistently lower than previous modal Q measurements, but closer to the PREM model. For example, for mode 0S25, whose Coriolis force coupling is negligible, our measured Q is between 190 and 210 depending on the event, while the PREM modal Q of 0S25 is 205, and previous modal Q measurements are as high as 242. The difference between our results and previous measurements might be due to the lower
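The final estimation step described above is a straight-line fit: a singlet decaying with quality factor Q has amplitude A(t) = A0 exp(-πft/Q), so Q follows from the slope of ln A versus time lag. A minimal sketch with synthetic numbers (the mode frequency below is a placeholder, not a catalogue value):

```python
import numpy as np

def modal_q(time_lags_s, amplitudes, freq_hz):
    """Q from the straight-line fit of log singlet amplitude vs time lag:
    a free oscillation decays as A(t) = A0 * exp(-pi * f * t / Q), so the
    slope m of ln(A) against t gives Q = -pi * f / m."""
    m, _ = np.polyfit(time_lags_s, np.log(amplitudes), 1)
    return -np.pi * freq_hz / m

# Synthetic self-check (placeholder frequency, not a catalogue value):
f = 4.0e-3                                # Hz, order of a fundamental mode
t = np.linspace(0.0, 2.0e5, 50)           # time lags out to ~55 hours
a = 3.0 * np.exp(-np.pi * f * t / 200.0)  # noiseless decay with Q = 200
print(modal_q(t, a, f))                   # ~200.0
```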
Estimates of tropical bromoform emissions using an inversion method
NASA Astrophysics Data System (ADS)
Ashfold, M. J.; Harris, N. R. P.; Manning, A. J.; Robinson, A. D.; Warwick, N. J.; Pyle, J. A.
2014-01-01
Bromine plays an important role in ozone chemistry in both the troposphere and stratosphere. When measured by mass, bromoform (CHBr3) is thought to be the largest organic source of bromine to the atmosphere. While seaweed and phytoplankton are known to be dominant sources, the size and the geographical distribution of CHBr3 emissions remain uncertain. Particularly little is known about emissions from the Maritime Continent, which have usually been assumed to be large, and which appear to be especially likely to reach the stratosphere. In this study we aim to reduce this uncertainty by combining the first multi-annual set of CHBr3 measurements from this region, and an inversion process, to investigate systematically the distribution and magnitude of CHBr3 emissions. The novelty of our approach lies in the application of the inversion method to CHBr3. We find that local measurements of a short-lived gas like CHBr3 can be used to constrain emissions from only a relatively small, sub-regional domain. We then obtain detailed estimates of CHBr3 emissions within this area, which appear to be relatively insensitive to the assumptions inherent in the inversion process. We extrapolate this information to produce estimated emissions for the entire tropics (defined as 20° S-20° N) of 225 Gg CHBr3 yr-1. The ocean in the area we base our extrapolations upon is typically somewhat shallower, and more biologically productive, than the tropical average. Despite this, our tropical estimate is lower than most other recent studies, and suggests that CHBr3 emissions in the coastline-rich Maritime Continent may not be stronger than emissions in other parts of the tropics.
An extended stochastic method for seismic hazard estimation
NASA Astrophysics Data System (ADS)
Abd el-aal, A. K.; El-Eraki, M. A.; Mostafa, S. I.
2015-12-01
In this contribution, we developed an extended stochastic technique for seismic hazard assessment purposes. This technique builds on the stochastic method of Boore (2003; "Simulation of ground motion using the stochastic method", Pure Appl. Geophys. 160:635-676). The essential aim of the extended stochastic technique is to simulate ground motion in order to minimize the consequences of future earthquakes. The first step of this technique is defining the seismic sources which most affect the study area. Then, the maximum expected magnitude is defined for each of these seismic sources, followed by estimation of the ground motion using an empirical attenuation relationship. Finally, the site amplification is implemented in calculating the peak ground acceleration (PGA) at each site of interest. We tested and applied this developed technique at Cairo, Suez, Port Said, Ismailia, Zagazig and Damietta cities to predict the ground motion. It is also applied at Cairo, Zagazig and Damietta cities to estimate the maximum peak ground acceleration at actual soil conditions. In addition, 0.5, 1, 5, 10 and 20 % damping median response spectra are estimated using the extended stochastic simulation technique. The highest calculated acceleration value at bedrock conditions is found at Suez city, with a value of 44 cm s-2. These acceleration values decrease towards the north of the study area, reaching 14.1 cm s-2 at Damietta city. This is comparable to, and in agreement with, the results of previous seismic hazard studies of northern Egypt. This work can be used for seismic risk mitigation and earthquake engineering purposes.
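The last two steps of the workflow (empirical attenuation, then site amplification) can be sketched generically; the functional form and coefficients below are illustrative placeholders, not the relationship calibrated by the authors for northern Egypt:

```python
import numpy as np

def pga_bedrock(magnitude, dist_km, coeffs=(-1.0, 0.9, 1.2, 10.0)):
    """Generic empirical attenuation form ln(PGA) = c1 + c2*M - c3*ln(R + c4).
    These coefficients are illustrative placeholders only."""
    c1, c2, c3, c4 = coeffs
    return np.exp(c1 + c2 * magnitude - c3 * np.log(dist_km + c4))

def pga_surface(pga_rock, site_amplification):
    """Final step of the workflow: apply the site amplification factor
    for the actual soil conditions at the site of interest."""
    return pga_rock * site_amplification

# Per site: maximum expected magnitude of the controlling source, the
# source-to-site distance, then amplification for local soil conditions.
print(pga_surface(pga_bedrock(6.5, 40.0), 1.8))
```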
A TRMM Rainfall Estimation Method Applicable to Land Areas
NASA Technical Reports Server (NTRS)
Prabhakara, C.; Iacovazzi, R.; Weinman, J.; Dalu, G.
1999-01-01
Methods developed to estimate rain rate on a footprint scale over land with the satellite-borne multispectral dual-polarization Special Sensor Microwave Imager (SSM/I) radiometer have met with limited success. Variability of surface emissivity on land and beam filling are commonly cited as the weaknesses of these methods. On the contrary, we contend that a more significant reason for this lack of success is that the information content of spectral and polarization measurements of the SSM/I is limited because of significant redundancy. As a result, the complex nature and vertical distribution of frozen and melting ice particles of different densities, sizes, and shapes cannot be resolved satisfactorily. Extinction in the microwave region due to these complex particles can mask the extinction due to rain drops. For these reasons, theoretical models that attempt to retrieve rain rate do not succeed on a footprint scale. To illustrate the weakness of these models, consider as an example the brightness temperature measurement made by the radiometer in the 85 GHz channel (T85). Models indicate that T85 should be inversely related to the rain rate, because of scattering. However, rain rates derived from 15-minute rain gauges on land indicate that this is not true in a majority of footprints. This is also supported by ship-borne radar observations of rain in the Tropical Oceans and Global Atmosphere Coupled Ocean-Atmosphere Response Experiment (TOGA-COARE) region over the ocean. Based on these observations, we infer that theoretical models that attempt to retrieve rain rate do not succeed on a footprint scale. We do not follow the above path of rain retrieval on a footprint scale. Instead, we depend on the limited ability of the microwave radiometer to detect the presence of rain. This capability is useful to determine the rain area in a mesoscale region. We find in a given rain event that this rain area is closely related to the mesoscale-average rain rate
Estimating rotavirus vaccine effectiveness in Japan using a screening method.
Araki, Kaoru; Hara, Megumi; Sakanishi, Yuta; Shimanoe, Chisato; Nishida, Yuichiro; Matsuo, Muneaki; Tanaka, Keitaro
2016-05-01
Rotavirus gastroenteritis is a highly contagious, acute viral disease that imposes a significant health burden worldwide. In Japan, rotavirus vaccines have been commercially available since 2011 for voluntary vaccination, but vaccine coverage and effectiveness have not been evaluated. In the absence of a vaccination registry in Japan, vaccination coverage in the general population was estimated according to the number of vaccines supplied by the manufacturer, the number of children who received financial support for vaccination, and the size of the target population. Patients with rotavirus gastroenteritis were identified by reviewing the medical records of all children who consulted 6 major hospitals in Saga Prefecture with gastroenteritis symptoms. Vaccination status among these patients was investigated by reviewing their medical records or interviewing their guardians by telephone. Vaccine effectiveness was determined using a screening method. Vaccination coverage increased with time, and it was 2-times higher in municipalities where the vaccination fee was supported. In the 2012/13 season, vaccination coverage in Saga Prefecture was 14.9% whereas the proportion of patients vaccinated was 5.1% among those with clinically diagnosed rotavirus gastroenteritis and 1.9% among those hospitalized for rotavirus gastroenteritis. Thus, vaccine effectiveness was estimated as 69.5% and 88.8%, respectively. This is the first study to evaluate rotavirus vaccination coverage and effectiveness in Japan since vaccination began. PMID:26680277
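The screening method itself (often attributed to Farrington) compares the odds of vaccination among cases with the odds in the population. A minimal sketch using the rounded percentages quoted in the abstract, which reproduces the reported effectiveness up to rounding:

```python
def ve_screening(ppv, pcv):
    """Screening-method vaccine effectiveness from the proportion of
    cases vaccinated (PCV) and the population vaccination coverage (PPV),
    compared on the odds scale:
        VE = 1 - [PCV / (1 - PCV)] / [PPV / (1 - PPV)]
    """
    return 1.0 - (pcv / (1.0 - pcv)) / (ppv / (1.0 - ppv))

# 2012/13-season figures quoted above (14.9% coverage in Saga Prefecture):
print(ve_screening(0.149, 0.051))  # ~0.69: clinically diagnosed (69.5% reported)
print(ve_screening(0.149, 0.019))  # ~0.89: hospitalized cases (88.8% reported)
```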
Appendix A: other methods for estimating trends of Arctic birds
Bart, Jonathan; Brown, Stephen; Morrison, R.I. Guy; Smith, Paul A.
2012-01-01
The Arctic PRISM was designed to determine shorebird population size and trend. During an extensive peer review of PRISM, some reviewers suggested that measuring demographic rates or monitoring shorebirds on migration would be more appropriate than estimating population size on the breeding grounds. However, each method has its own limitations. For demographic monitoring, an unbiased estimate based on a large sample of first-year survivorship would be extremely difficult for shorebirds in the Arctic because the needed sample size would be unobtainable (in Canada at least) and the level of effort that would need to be expended (in both financial and human resources) would far exceed that of the current Arctic PRISM methodology. For migration monitoring, issues such as changes in use of monitored versus non-monitored sites, residency times, and detection rates introduce bias that has not yet been resolved. While we believe demographic and migration monitoring are very valuable and are already components of the PRISM approach (e.g., Tier 2 sites focus on the collection of demographic data), we do not believe that either is likely to achieve the PRISM accuracy target of an 80% power to detect a 50% decline.
Effect of radon measurement methods on dose estimation.
Kávási, Norbert; Kobayashi, Yosuke; Kovács, Tibor; Somlai, János; Jobbágy, Viktor; Nagy, Katalin; Deák, Eszter; Berhés, István; Bender, Tamás; Ishikawa, Tetsuo; Tokonami, Shinji; Vaupotic, Janja; Yoshinaga, Shinji; Yonehara, Hidenori
2011-05-01
Different radon measurement methods were applied in the old and new buildings of the Turkish bath of Eger, Hungary, in order to develop a radon measurement protocol. Measurements were also made of radon and thoron short-lived decay products, gamma dose from external sources, and radon in water. The most accurate results for dose estimation were provided by the application of personal radon meters. Estimated annual effective doses from radon and its short-lived decay products in the old and new buildings, using measured equilibrium factors of 0.2 and 0.1, were 0.83 and 0.17 mSv, respectively. The effective dose from thoron short-lived decay products was only 5 % of these values. The respective external gamma radiation effective doses were 0.19 and 0.12 mSv y(-1). The effective dose from the consumption of tap water containing radon was 0.05 mSv y(-1), while in the case of spring water, it was 0.14 mSv y(-1). PMID:21450699
[Methods for the estimation of the renal function].
Fontseré Baldellou, Néstor; Bonal I Bastons, Jordi; Romero González, Ramón
2007-10-13
Chronic kidney disease is one of the pathologies with the greatest incidence and prevalence in today's health systems. The ambulatory application of methods that allow suitable detection, monitoring and stratification of renal function is of crucial importance. Because of the imprecision of serum creatinine alone, a set of predictive equations for estimating the glomerular filtration rate has been developed. Nevertheless, it is essential for the physician to know their limitations: situations of normal renal function and hyperfiltration, certain associated pathologies, and extremes of nutritional status and age. In these cases, isotopic techniques for measuring renal function are preferable. PMID:17980123
The sisterhood method of estimating maternal mortality: the Matlab experience.
Shahidullah, M
1995-01-01
This study reports the results of a test of validation of the sisterhood method of measuring the level of maternal mortality using data from a Demographic Surveillance System (DSS) operating since 1966 in Matlab, Bangladesh. The records of maternal deaths that occurred during 1976-90 in the Matlab DSS area were used. One of the deceased woman's surviving brothers or sisters, aged 15 or older and born to the same mother, was asked if the deceased sister had died of maternity-related causes. Of the 384 maternal deaths for which siblings were interviewed, 305 deaths were correctly reported, 16 deaths were underreported, and the remaining 63 were misreported as nonmaternal deaths. Information on maternity-related deaths obtained in a sisterhood survey conducted in the Matlab DSS area was compared with the information recorded in the DSS. Results suggest that in places similar to Matlab, the sisterhood method can be used to provide an indication of the level of maternal mortality if no other data exist, though the method will produce negative bias in maternal mortality estimates. PMID:7618193
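The validation counts above imply the survey's sensitivity for maternal deaths, and hence the direction of the bias; a one-line check, treating "correctly reported" as the detected fraction:

```python
# Of 384 maternal deaths with an interviewed sibling: 305 correctly
# reported, 16 underreported, 63 misreported as nonmaternal.
correct, under, misreported = 305, 16, 63
sensitivity = correct / (correct + under + misreported)
print(round(sensitivity, 3))  # 0.794: roughly 1 in 5 maternal deaths missed,
                              # hence the negative bias noted above.
```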
Application of throughfall methods to estimate dry deposition of mercury
Lindberg, S.E.; Owens, J.G.; Stratton, W.
1992-12-31
Several dry deposition methods for mercury (Hg) are being developed and tested in our laboratory. These include big-leaf and multilayer resistance models, micrometeorological methods such as Bowen ratio gradient approaches, laboratory controlled plant chambers, and throughfall. We have previously described our initial results using modeling and gradient methods. Throughfall may be used to estimate Hg dry deposition if some simplifying assumptions are met. We describe here the application and initial results of throughfall studies at the Walker Branch Watershed forest, and discuss the influence of certain assumptions on interpretation of the data. Throughfall appears useful in that it can place a lower bound on dry deposition under field conditions. Our preliminary throughfall data indicate net dry deposition rates to a pine canopy which increase significantly from winter to summer, as previously predicted by our resistance model. Atmospheric data suggest that rainfall washoff of fine aerosol dry deposition at this site is not sufficient to account for all of the Hg in net throughfall. Potential additional sources include dry deposited gas-phase compounds, soil-derived coarse aerosols, and oxidation reactions at the leaf surface.
An automatic iris occlusion estimation method based on high-dimensional density estimation.
Li, Yung-Hui; Savvides, Marios
2013-04-01
Iris masks play an important role in iris recognition. They indicate which part of the iris texture map is useful and which part is occluded or contaminated by noisy image artifacts such as eyelashes, eyelids, eyeglasses frames, and specular reflections. The accuracy of the iris mask is extremely important. The performance of the iris recognition system will decrease dramatically when the iris mask is inaccurate, even when the best recognition algorithm is used. Traditionally, rule-based algorithms have been used to estimate iris masks from iris images; however, the accuracy of the masks generated this way is questionable. In this work, we propose to use Figueiredo and Jain's Gaussian Mixture Models (FJ-GMMs) to model the underlying probabilistic distributions of both valid and invalid regions on iris images. We also explored possible features and found that a Gabor Filter Bank (GFB) provides the most discriminative information for our goal. Finally, we applied the Simulated Annealing (SA) technique to optimize the parameters of the GFB in order to achieve the best recognition rate. Experimental results show that the masks generated by the proposed algorithm increase the iris recognition rate on both the ICE2 and UBIRIS datasets, verifying the effectiveness and importance of our proposed method for iris occlusion estimation. PMID:22868651
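A minimal sketch of the classification core only, assuming pre-computed Gabor-filter features for labeled valid/invalid pixels; sklearn's EM-fitted GaussianMixture stands in for the FJ-GMMs of the paper, and the simulated-annealing tuning of the filter bank is omitted:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_region_models(valid_feats, invalid_feats, k=4):
    """Fit one mixture model per region class on labeled training pixels.
    feats arrays are (n_pixels, n_gabor_features) filter-bank responses."""
    gm_valid = GaussianMixture(n_components=k, covariance_type="full").fit(valid_feats)
    gm_invalid = GaussianMixture(n_components=k, covariance_type="full").fit(invalid_feats)
    return gm_valid, gm_invalid

def iris_mask(feats, gm_valid, gm_invalid):
    """1 where the valid-texture model wins the per-pixel likelihood test."""
    return (gm_valid.score_samples(feats) >
            gm_invalid.score_samples(feats)).astype(np.uint8)
```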
New Method of Estimating Binary's Mass Ratios by Using Superhumps
NASA Astrophysics Data System (ADS)
Kato, Taichi; Osaki, Yoji
2013-12-01
We propose a new dynamical method of estimating binary's mass ratios by using the period of superhumps in SU UMa-type dwarf novae during the growing stage (the stage A superhumps). This method is based on the working hypothesis that the period of superhumps in the growing stage is determined by the dynamical precession rate at the 3:1 resonance radius, and is suggested in our new interpretation of the superhump period evolution during a superoutburst (2013, PASJ, 65, 95). By comparing objects having known mass ratios, we show that our method can provide sufficiently accurate mass ratios comparable to those obtained by eclipse observations in quiescence. One of the advantages of this method is that it requires neither an eclipse nor any experimental calibration. It is particularly suitable for exploring the low mass-ratio end of the evolution of cataclysmic variables, where the secondary is not detectable by conventional methods. Our analysis suggests that previous determinations of the mass ratio by using superhump periods during a superoutburst were systematically underestimated for low mass-ratio systems, and we provided a new calibration. It reveals that most WZ Sge-type dwarf novae have either secondaries close to the border of the lower main-sequence or brown dwarfs, and most of the objects have not yet reached the evolutionary stage of period bouncers. Our results are not in contradiction with an assumption that an observed minimum period (˜77 min) of ordinary hydrogen-rich cataclysmic variables is indeed the minimum period. We highlight how important the early observation of stage A superhumps is, and propose an effective future strategy of observation.
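Under the working hypothesis above, the stage A superhump period gives the mass ratio through the dynamical precession rate at the 3:1 resonance. Keeping only the leading term of the precession-rate expansion (the paper's full treatment includes higher-order corrections), the relation inverts in closed form; the example values below are hypothetical:

```python
def q_from_stage_a(p_orb, p_sh_stage_a):
    """Mass ratio q from the stage A superhump period.

    The precession fraction eps* = 1 - P_orb/P_sh is equated to the
    leading-order dynamical precession rate at the 3:1 resonance radius,
    eps* = (1/4) * q / (1 + q), and inverted for q. This keeps only the
    leading term, so it is a first approximation to the full method.
    """
    eps_star = 1.0 - p_orb / p_sh_stage_a
    return 4.0 * eps_star / (1.0 - 4.0 * eps_star)

# Hypothetical periods in days, not measurements from the paper:
print(q_from_stage_a(0.05600, 0.05780))  # ~0.14
```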
A new rapid method for rockfall energies and distances estimation
NASA Astrophysics Data System (ADS)
Giacomini, Anna; Ferrari, Federica; Thoeni, Klaus; Lambert, Cedric
2016-04-01
Rockfalls are characterized by long travel distances and significant energies. Over the last decades, three main methods have been proposed in the literature to assess the rockfall runout: empirical, process-based and GIS-based methods (Dorren, 2003). Process-based methods take into account the physics of rockfall by simulating the motion of a falling rock along a slope, and they are generally based on a probabilistic rockfall modelling approach that allows for taking into account the uncertainties associated with the rockfall phenomenon. Their application has the advantage of evaluating the energies, bounce heights and distances along the path of a falling block, hence providing valuable information for the design of mitigation measures (Agliardi et al., 2009); however, the implementation of rockfall simulations can be time-consuming and data-demanding. This work focuses on the development of a new methodology for estimating the expected kinetic energies and distances of the first impact at the base of a rock cliff, subject to the conditions that the geometry of the cliff and the properties of the representative block are known. The method is based on an extensive two-dimensional sensitivity analysis, conducted by means of kinematic simulations based on probabilistic modelling of two-dimensional rockfall trajectories (Ferrari et al., 2016). To take into account the uncertainty associated with the estimation of the input parameters, the study was based on 78,400 rockfall scenarios performed by systematically varying the input parameters that are likely to affect the block trajectory, its energy and distance at the base of the rock wall. The variation of the geometry of the rock cliff (in terms of height and slope angle), the roughness of the rock surface and the properties of the outcropping material were considered. A simplified and idealized rock wall geometry was adopted. The analysis of the results allowed finding empirical laws that relate impact energies
EVALUATION OF RIVER LOAD ESTIMATION METHODS FOR TOTAL PHOSPHORUS
Accurate estimates of pollutant loadings to the Great Lakes are required for trend detection, model development, and planning. On many major rivers, infrequent sampling of most pollutants makes these estimates difficult. However, most large rivers have complete daily flow records...
Practical Methods for Estimating Software Systems Fault Content and Location
NASA Technical Reports Server (NTRS)
Nikora, A.; Schneidewind, N.; Munson, J.
1999-01-01
Over the past several years, we have developed techniques to discriminate between fault-prone software modules and those that are not, to estimate a software system's residual fault content, to identify those portions of a software system having the highest estimated number of faults, and to estimate the effects of requirements changes on software quality.
Method for estimating the cooperativity length in polymers
NASA Astrophysics Data System (ADS)
Pieruccini, Marco; Alessandrini, Andrea
2015-05-01
The problem of estimating the size of the cooperatively rearranging regions (CRRs) in supercooled polymeric melts from an analysis of the α-process in ordinary relaxation experiments is addressed. The mechanism whereby a CRR changes its configuration is viewed as consisting of two distinct steps: a reduced number of monomers reaches initially an activated state, allowing for some local rearrangement; then, the subsequent regression of the energy fluctuation may take place through the configurational degrees of freedom, thus allowing for further rearrangements on larger length scales. The latter are indeed those to which the well-known scheme of Donth refers. Local readjustments are described in the framework of a canonical formalism on a stationary ensemble of small-scale regions, distributed over all possible energy thresholds for rearrangement. Large-scale configurational changes, instead, are described as spontaneous processes. Two main regimes are envisaged, depending on whether the role played by the configurational degrees of freedom in the regression of the energy fluctuation is significant or not. It is argued that the latter case is related to the occurrence of an Arrhenian dependence of the central relaxation rate. Consistency with Donth's scheme is demonstrated, and data from the literature confirm the agreement of the two methods of analysis when configurational degrees of freedom are relevant for the fluctuation regression. Poly(n-butyl methacrylate) is chosen in order to show how CRR size and temperature fluctuations at rearrangement can be estimated from stress relaxation experiments carried out by means of an atomic force microscopy setup. Cases in which the configurational pathway for regression is significantly hindered are considered. Relaxation in poly(dimethyl siloxane) confined in nanopores is taken as an example to suggest how a more complete view of the effects of configurational constraints would be possible if direct measurements of
A non-destructive dental method for age estimation.
Kvaal, S; Solheim, T
1994-06-01
Dental radiographs have rarely been used in dental age estimation methods for adults and the aim of this investigation was to derive formulae for age calculation based on measurements of teeth and their radiographs. Age-related changes were studied in 452 extracted, unsectioned incisors, canines and premolars. The length of the apical translucent zone and extent of the periodontal retraction were measured on the teeth while the pulp length and width as well as root length and width were measured on the radiographs and the ratios between the root and pulp measurements calculated. For all types of teeth significant, negative Pearson's correlation coefficients were found between age and the ratios between the pulp and the root width. In this study also, the correlation between age and the length of the apical translucent zone was weaker than expected. The periodontal retraction was significantly correlated with age in maxillary premolars alone. Multiple regression analyses showed inclusion of the ratio between the measurements of the pulp and the root on the radiographs for all teeth; the length of the apical translucency in five types; and periodontal retraction in only three types of teeth. The correlation coefficients ranged from r = 0.48 to r = 0.90 between the chronological and the calculated age using the formulae from this multiple regression study. The strongest coefficients were for premolars. These formulae may be recommended for use in odontological age estimations in forensic and archaeological cases where teeth are loose or can be extracted and where it is important that the teeth are not sectioned. PMID:9227083
Hardware architecture design of a fast global motion estimation method
NASA Astrophysics Data System (ADS)
Liang, Chaobing; Sang, Hongshi; Shen, Xubang
2015-12-01
VLSI implementation of gradient-based global motion estimation (GME) faces two main challenges: irregular data access and a high off-chip memory bandwidth requirement. We previously proposed a fast GME method that reduces computational complexity by choosing a certain number of small patches containing corners and using them in a gradient-based framework. A hardware architecture is designed to implement this method and further reduce the off-chip memory bandwidth requirement. On-chip memories are used to store coordinates of the corners and template patches, while the Gaussian pyramids of both the template and reference frame are stored in off-chip SDRAMs. By performing the geometric transform only on the coordinates of the center pixel of a 3-by-3 patch in the template image, a 5-by-5 area containing the warped 3-by-3 patch in the reference image is extracted from the SDRAMs by burst read. Patch-based and burst-mode data access helps to keep the off-chip memory bandwidth requirement at the minimum. Although patch size varies at different pyramid levels, all patches are processed in terms of 3-by-3 patches, so the utilization of the patch-processing circuit reaches 100%. FPGA implementation results show that the design utilizes 24,080 bits of on-chip memory, and for a sequence with a resolution of 352x288 and a frame rate of 60 Hz, the off-chip bandwidth requirement is only 3.96 Mbyte/s, compared with 243.84 Mbyte/s for the original gradient-based GME method. This design can be used in applications like video codecs, video stabilization, and super-resolution, where real-time GME is a necessity and minimum memory bandwidth requirement is appreciated.
Seismic Methods of Identifying Explosions and Estimating Their Yield
NASA Astrophysics Data System (ADS)
Walter, W. R.; Ford, S. R.; Pasyanos, M.; Pyle, M. L.; Myers, S. C.; Mellors, R. J.; Pitarka, A.; Rodgers, A. J.; Hauk, T. F.
2014-12-01
Seismology plays a key national security role in detecting, locating, identifying and determining the yield of explosions from a variety of causes, including accidents, terrorist attacks and nuclear testing treaty violations (e.g. Koper et al., 2003, 1999; Walter et al. 1995). A collection of mainly empirical forensic techniques has been successfully developed over many years to obtain source information on explosions from their seismic signatures (e.g. Bowers and Selby, 2009). However a lesson from the three DPRK declared nuclear explosions since 2006, is that our historic collection of data may not be representative of future nuclear test signatures (e.g. Selby et al., 2012). To have confidence in identifying future explosions amongst the background of other seismic signals, and accurately estimate their yield, we need to put our empirical methods on a firmer physical footing. Goals of current research are to improve our physical understanding of the mechanisms of explosion generation of S- and surface-waves, and to advance our ability to numerically model and predict them. As part of that process we are re-examining regional seismic data from a variety of nuclear test sites including the DPRK and the former Nevada Test Site (now the Nevada National Security Site (NNSS)). Newer relative location and amplitude techniques can be employed to better quantify differences between explosions and used to understand those differences in term of depth, media and other properties. We are also making use of the Source Physics Experiments (SPE) at NNSS. The SPE chemical explosions are explicitly designed to improve our understanding of emplacement and source material effects on the generation of shear and surface waves (e.g. Snelson et al., 2013). Finally we are also exploring the value of combining seismic information with other technologies including acoustic and InSAR techniques to better understand the source characteristics. Our goal is to improve our explosion models
Application of age estimation methods based on teeth eruption: how easy is Olze method to use?
De Angelis, D; Gibelli, D; Merelli, V; Botto, M; Ventura, F; Cattaneo, C
2014-09-01
The development of new methods for age estimation has become with time an urgent issue because of increasing immigration, in order to estimate accurately the age of those subjects who lack valid identity documents. Methods of age estimation are divided into skeletal and dental ones, and among the latter, Olze's method is one of the most recent, since it was introduced in 2010 with the aim of identifying the legal ages of 18 and 21 years by evaluating the different stages of development of the periodontal ligament of the third molars with closed root apices. The present study aims at verifying the applicability of the method to daily forensic practice, with special focus on interobserver repeatability. Olze's method was applied by three different observers (two physicians and one dentist without specific training in Olze's method) to 61 orthopantomograms from subjects of mixed ethnicity aged between 16 and 51 years. The analysis took into consideration the lower third molars. The results provided by the different observers were then compared in order to verify the interobserver error. Results showed that the interobserver error varies between 43 and 57 % for the right lower third molar (M48) and between 23 and 49 % for the left lower third molar (M38). The chi-square test did not show significant differences according to the side of the teeth or the type of professional figure. The results prove that Olze's method is not easy to apply when used by inadequately trained personnel, because of an intrinsic interobserver error. Since it is however a crucial method in age determination, it should be used only by experienced observers after intensive and specific training. PMID:24781787
A method for quantitatively estimating diffuse and discrete hydrothermal discharge
NASA Astrophysics Data System (ADS)
Baker, Edward T.; Massoth, Gary J.; Walker, Sharon L.; Embley, Robert W.
1993-07-01
Submarine hydrothermal fluids discharge as undiluted, high-temperature jets and as diffuse, highly diluted, low-temperature percolation. Estimates of the relative contribution of each discharge type, which are important for the accurate determination of local and global hydrothermal budgets, are difficult to obtain directly. In this paper we describe a new method of using measurements of hydrothermal tracers such as Fe/Mn, Fe/heat, and Mn/heat in high-temperature fluids, low-temperature fluids, and the neutrally buoyant plume to deduce the relative contribution of each discharge type. We sampled vent fluids from the north Cleft vent field on the Juan de Fuca Ridge in 1988, 1989 and 1991, and plume samples every year from 1986 to 1991. The tracers were, on average, 3 to 90 times greater in high-temperature than in low-temperature fluids, with plume values intermediate. A mixing model calculates that high-temperature fluids contribute only ˜ 3% of the fluid mass flux but > 90% of the hydrothermal Fe and > 60% of the hydrothermal Mn to the overlying plume. Three years of extensive camera-CTD sled tows through the vent field show that diffuse venting is restricted to a narrow fissure zone extending for 18 km along the axial strike. Linear plume theory applied to the temperature plumes detected when the sled crossed this zone yields a maximum likelihood estimate for the diffuse heat flux of 8.9 × 10^4 W/m, for a total flux of 534 MW, considering that diffuse venting is active along only one-third of the fissure system. For mean low- and high-temperature discharge of 25°C and 319°C, respectively, the discrete heat flux must be 266 MW to satisfy the mass flux partitioning. If the north Cleft vent field is globally representative, the assumption that high-temperature discharge dominates the mass flux in axial vent fields leads to an overestimation of the flux of many non-conservative hydrothermal species by about an order of magnitude.
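The mass-flux partition quoted above follows from the heat-flux partition once mean temperatures are fixed, since mass flux scales as heat flux divided by the temperature anomaly. A minimal check using the numbers in the abstract (the ambient deep-seawater temperature is an assumed placeholder):

```python
def mass_fraction_high(heat_high_mw, heat_low_mw, t_high_c, t_low_c,
                       t_ambient_c=2.0):
    """Convert a heat-flux partition into a mass-flux partition: mass flux
    scales as heat flux / (cp * dT), with cp cancelling in the ratio.
    t_ambient_c is an assumed placeholder for deep seawater."""
    m_high = heat_high_mw / (t_high_c - t_ambient_c)
    m_low = heat_low_mw / (t_low_c - t_ambient_c)
    return m_high / (m_high + m_low)

# Fluxes and mean temperatures quoted above: 266 MW of discrete discharge
# at 319 C and 534 MW of diffuse discharge at 25 C.
print(mass_fraction_high(266.0, 534.0, 319.0, 25.0))  # ~0.03: a few percent
```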
Stability over Time of Different Methods of Estimating School Performance
ERIC Educational Resources Information Center
Dumay, Xavier; Coe, Rob; Anumendem, Dickson Nkafu
2014-01-01
This paper aims to investigate how stability varies with the approach used in estimating school performance in a large sample of English primary schools. The results show that (a) raw performance is considerably more stable than adjusted performance, which in turn is slightly more stable than growth model estimates; (b) schools' performance…
ERIC Educational Resources Information Center
Lafferty, Mark T.
2010-01-01
The number of project failures and those projects completed over cost and over schedule has been a significant issue for software project managers. Among the many reasons for failure, inaccuracy in software estimation--the basis for project bidding, budgeting, planning, and probability estimates--has been identified as a root cause of a high…
ERIC Educational Resources Information Center
Wang, Lijuan; McArdle, John J.
2008-01-01
The main purpose of this research is to evaluate the performance of a Bayesian approach for estimating unknown change points using Monte Carlo simulations. The univariate and bivariate unknown change point mixed models were presented and the basic idea of the Bayesian approach for estimating the models was discussed. The performance of Bayesian…
Analytic Method to Estimate Particle Acceleration in Flux Ropes
NASA Technical Reports Server (NTRS)
Guidoni, S. E.; Karpen, J. T.; DeVore, C. R.
2015-01-01
The mechanism that accelerates particles to the energies required to produce the observed high-energy emission in solar flares is not well understood. Drake et al. (2006) proposed a kinetic mechanism for accelerating electrons in contracting magnetic islands formed by reconnection. In this model, particles that gyrate around magnetic field lines transit from island to island, increasing their energy by Fermi acceleration in those islands that are contracting. Based on these ideas, we present an analytic model to estimate the energy gain of particles orbiting around field lines inside a flux rope (2.5D magnetic island). We calculate the change in the velocity of the particles as the flux rope evolves in time. The method assumes a simple profile for the magnetic field of the evolving island; it can be applied to any case where flux ropes are formed. In our case, the flux-rope evolution is obtained from our recent high-resolution, compressible 2.5D MHD simulations of breakout eruptive flares. The simulations allow us to resolve in detail the generation and evolution of large-scale flux ropes as a result of sporadic and patchy reconnection in the flare current sheet. Our results show that the initial energy of particles can be increased by 2-5 times in a typical contracting island, before the island reconnects with the underlying arcade. Therefore, particles need to transit only from 3-7 islands to increase their energies by two orders of magnitude. These macroscopic regions, filled with a large number of particles, may explain the large observed rates of energetic electron production in flares. We conclude that this mechanism is a promising candidate for electron acceleration in flares, but further research is needed to extend our results to 3D flare conditions.
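The island-counting arithmetic is easy to verify: with a per-island gain factor g between 2 and 5, the number of transits needed for a hundredfold energy gain is log(100)/log(g):

```python
import math

# If each contracting island multiplies the particle energy by a factor g
# in the 2-5 range quoted above, the transits needed for a 100x gain are:
for g in (2.0, 3.0, 5.0):
    print(g, math.ceil(math.log(100.0) / math.log(g)))
# g=2 -> 7 transits, g=3 -> 5, g=5 -> 3: the "3-7 islands" of the abstract.
```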
Variational methods to estimate terrestrial ecosystem model parameters
NASA Astrophysics Data System (ADS)
Delahaies, Sylvain; Roulstone, Ian
2016-04-01
Carbon is at the basis of the chemistry of life. Its ubiquity in the Earth system is the result of complex recycling processes. Present in the atmosphere in the form of carbon dioxide, it is absorbed by marine and terrestrial ecosystems and stored within living biomass and decaying organic matter. Then soil chemistry and a non-negligible amount of time transform the dead matter into fossil fuels. Throughout this cycle, carbon dioxide is released into the atmosphere through respiration and combustion of fossil fuels. Model-data fusion techniques allow us to combine our understanding of these complex processes with an ever-growing amount of observational data to help improve models and predictions. The data assimilation linked ecosystem carbon (DALEC) model is a simple box model simulating the carbon budget allocation for terrestrial ecosystems. Over the last decade several studies have demonstrated the relative merit of various inverse modelling strategies (MCMC, EnKF, 4DVar) to estimate model parameters and initial carbon stocks for DALEC and to quantify the uncertainty in the predictions. Despite its simplicity, DALEC represents the basic processes at the heart of more sophisticated models of the carbon cycle. Using adjoint-based methods we study inverse problems for DALEC with various data streams (8-day MODIS LAI, monthly MODIS LAI, NEE). The framework of constrained optimization allows us to incorporate ecological common sense into the variational framework. We use resolution matrices to study the nature of the inverse problems and to obtain data importance and information content for the different types of data. We study how varying the time step affects the solutions, and we show how "spin up" naturally improves the conditioning of the inverse problems.
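A minimal sketch of the variational (4DVar-style) estimation applied to DALEC, with a placeholder linear observation operator standing in for the model; in the real problem the gradient comes from the adjoint of DALEC and the observations are the LAI/NEE streams:

```python
import numpy as np
from scipy.optimize import minimize

def make_cost(x_b, B_inv, y, H, R_inv):
    """Background + observation misfit cost and its gradient:
    J(x) = 0.5 (x-xb)' B^-1 (x-xb) + 0.5 (Hx-y)' R^-1 (Hx-y)."""
    def cost(x):
        dxb, dy = x - x_b, H @ x - y
        return 0.5 * dxb @ B_inv @ dxb + 0.5 * dy @ R_inv @ dy
    def grad(x):
        return B_inv @ (x - x_b) + H.T @ R_inv @ (H @ x - y)
    return cost, grad

n, m = 6, 40                           # parameters/stocks; observations
rng = np.random.default_rng(1)
x_b = np.ones(n); B_inv = np.eye(n) / 0.5**2        # prior ("background")
H = rng.normal(size=(m, n)); R_inv = np.eye(m) / 0.1**2
y = H @ (x_b + 0.3) + rng.normal(0, 0.1, m)         # synthetic data
cost, grad = make_cost(x_b, B_inv, y, H, R_inv)
x_a = minimize(cost, x_b, jac=grad, method="L-BFGS-B").x  # ~ x_b + 0.3
```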
Optimal filtering methods to structural damage estimation under ground excitation.
Hsieh, Chien-Shu; Liaw, Der-Cherng; Lin, Tzu-Hsuan
2013-01-01
This paper considers the problem of shear building damage estimation subject to earthquake ground excitation using the Kalman filtering approach. The structural damage is assumed to take the form of reduced elemental stiffness. Two damage estimation algorithms are proposed: one is the multiple model approach via the optimal two-stage Kalman estimator (OTSKE), and the other is the robust two-stage Kalman filter (RTSKF), an unbiased minimum-variance filtering approach to determine the locations and extents of the damage stiffness. A numerical example of a six-storey shear plane frame structure subject to base excitation is used to illustrate the usefulness of the proposed results. PMID:24453869
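Both estimators build on the standard Kalman predict/update recursion sketched below; the two-stage variants (OTSKE/RTSKF) additionally separate the structural state from the damage (stiffness-reduction) parameters, which is omitted here:

```python
import numpy as np

def kf_predict(x, P, F, Q):
    """Propagate the state estimate and covariance through the model F."""
    return F @ x, F @ P @ F.T + Q

def kf_update(x, P, z, H, R):
    """Correct the prediction with measurement z (e.g. floor responses)."""
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```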
Bayesian Methods for Radiation Detection and Dosimetry
Peter G. Groer
2002-09-29
We performed work in three areas: radiation detection, and external and internal radiation dosimetry. In radiation detection we developed Bayesian techniques to estimate the net activity of high and low activity radioactive samples. These techniques have the advantage that the remaining uncertainty about the net activity is described by probability densities. Graphs of the densities show the uncertainty in pictorial form. Figure 1 below demonstrates this point. We applied the theory of stochastic processes to obtain Bayesian estimates of 222Rn daughter products from observed counting rates. In external radiation dosimetry we studied and developed Bayesian methods to estimate radiation doses to an individual from radiation-induced chromosome aberrations. We analyzed chromosome aberrations after exposure to gammas and neutrons and developed a method for dose estimation after criticality accidents. The research in internal radiation dosimetry focused on parameter estimation for compartmental models from observed compartmental activities. From the estimated probability densities of the model parameters we were able to derive the densities for compartmental activities for a two-compartment catenary model at different times. We also calculated the average activities and their standard deviation for a simple two-compartment model.
Shanei, Ahmad; Afshin, Maryam; Moslehi, Masoud; Rastaghi, Sedighe
2015-01-01
To make an accurate estimation of the uptake of radioactivity in an organ using the conjugate view method, corrections of physical factors, such as background activity, scatter, and attenuation, are needed. The aim of this study was to evaluate the accuracy of four different methods for background correction in activity quantification of the heart in myocardial perfusion scans. The organ activity was calculated using the conjugate view method. Twenty-two healthy volunteers were injected with 17-19 mCi of (99m)Tc-methoxy-isobutyl-isonitrile (MIBI) at rest or during exercise. Images were obtained by a dual-headed gamma camera. Four methods for background correction were applied: (1) conventional correction (referred to as Gates' method), (2) the Buijs method, (3) BgdA subtraction, (4) BgdB subtraction. To evaluate the accuracy of these methods, the results of the calculations using the above-mentioned methods were compared with the reference results. The calculated uptake in the heart using the conventional method, the Buijs method, BgdA subtraction, and BgdB subtraction was 1.4 ± 0.7% (P < 0.05), 2.6 ± 0.6% (P < 0.05), 1.3 ± 0.5% (P < 0.05), and 0.8 ± 0.3% (P < 0.05) of injected dose (I.D.) at rest, and 1.8 ± 0.6% (P > 0.05), 3.1 ± 0.8% (P > 0.05), 1.9 ± 0.8% (P < 0.05), and 1.2 ± 0.5% (P < 0.05) of I.D. during exercise. The mean estimated myocardial uptake of (99m)Tc-MIBI was dependent on the correction method used. Comparison among the four different methods of background activity correction applied in this study showed that the Buijs method was the most suitable method for background correction in myocardial perfusion scans. PMID:26955568
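The underlying conjugate view computation is the geometric mean of background-corrected anterior and posterior counts with an attenuation correction. A minimal sketch with hypothetical numbers (not data from the study), taking the source self-attenuation factor as 1:

```python
import math

def conjugate_view_activity(counts_ant, counts_post, bkg_ant, bkg_post,
                            transmission, calib_cps_per_mbq):
    """Geometric-mean conjugate view estimate of organ activity (MBq):
    A = sqrt(Ia * Ip / T) / C, with Ia, Ip the background-corrected
    anterior/posterior count rates, T the transmission factor through the
    patient, and C the system calibration factor. The source
    self-attenuation correction is taken as 1 here for brevity."""
    ia = counts_ant - bkg_ant
    ip = counts_post - bkg_post
    return math.sqrt(ia * ip / transmission) / calib_cps_per_mbq

# Hypothetical values: 1200/950 cps with 300/260 cps background, body
# transmission 0.25 at 140 keV, calibration 90 cps/MBq -> ~17.5 MBq.
print(conjugate_view_activity(1200, 950, 300, 260, 0.25, 90.0))
```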
Dynamic State Estimation Utilizing High Performance Computing Methods
Schneider, Kevin P.; Huang, Zhenyu; Yang, Bo; Hauer, Matthew L.; Nieplocha, Jaroslaw
2009-03-18
The state estimation tools which are currently deployed in power system control rooms are based on a quasi-steady-state assumption. As a result, the suite of operational tools that rely on state estimation results as inputs do not have dynamic information available and their accuracy is compromised. This paper presents an overview of the Kalman filtering process and then focuses on the implementation of the prediction component on multiple processors.
An Investigation of Methods for Improving Estimation of Test Score Distributions.
ERIC Educational Resources Information Center
Hanson, Bradley A.
Three methods of estimating test score distributions that may improve on using the observed frequencies (OBFs) as estimates of a population test score distribution are considered: the kernel method (KM); the polynomial method (PM); and the four-parameter beta binomial method (FPBBM). The assumption each method makes about the smoothness of the…
Iterative methods for distributed parameter estimation in parabolic PDE
Vogel, C.R.; Wade, J.G.
1994-12-31
The goal of the work presented is the development of effective iterative techniques for large-scale inverse or parameter estimation problems. In this extended abstract, a detailed description of the mathematical framework in which the authors view these problems is presented, followed by an outline of the ideas and algorithms developed. Distributed parameter estimation problems often arise in mathematical modeling with partial differential equations. They can be viewed as inverse problems; the "forward problem" is that of using the fully specified model to predict the behavior of the system. The inverse or parameter estimation problem is: given the form of the model and some observed data from the system being modeled, determine the unknown parameters of the model. These problems are of great practical and mathematical interest, and the development of efficient computational algorithms is an active area of study.
PHREATOPHYTE WATER USE ESTIMATED BY EDDY-CORRELATION METHODS.
Weaver, H.L.; Weeks, E.P.; Campbell, G.S.; Stannard, D.I.; Tanner, B.D.
1986-01-01
Water use was estimated for three phreatophyte communities: a saltcedar community and an alkali-Sacaton grass community in New Mexico, and a greasewood rabbit-brush-saltgrass community in Colorado. These water-use estimates were calculated from eddy-correlation measurements using three different analyses, since the direct eddy-correlation measurements did not satisfy a surface energy balance. The analysis that seems to be most accurate indicated the saltcedar community used from 58 to 87 cm (23 to 34 in.) of water each year. The other two communities used about two-thirds this quantity.
Niibori, Y.; Tochiyama, O.; Chida, T.
1997-12-31
The authors have investigated the characteristic permeability on the basis of several probability density functions of permeability, applying the Monte Carlo method and FEM. It was found that its value does not depend on the type of probability density function of permeability, but on the arithmetic mean, the standard deviation and the skewness of permeability. This paper describes the use of the stochastic values of permeability for estimating the rate of radioactivity release to the accessible environment, applying the advection-dispersion model to two-dimensional, heterogeneous media. When a discrete probability density function (referred to as the Bernoulli trials) and the lognormal distribution have common values for the arithmetic mean, the standard deviation and the skewness of permeability, the calculated transport rates (described as the pseudo impulse responses) show good agreement for a Peclet number around 10 and a dimensionless standard deviation around 1. Further, it is found that the transport rates apparently depend not only on the arithmetic mean and the standard deviation, but also on the skewness of permeability. When the value of skewness does not follow the lognormal distribution, which has only two independent parameters (the mean and the standard deviation), the authors can replicate the three moments estimated from an observed distribution of permeability by using the Bernoulli trials, which have three independent parameters.
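The two-point ("Bernoulli trials") distribution has exactly three free parameters, so it can match the three moments in closed form. A sketch of that moment matching, here targeting a lognormal with a dimensionless standard deviation of 1 as in the cases discussed; the closed-form inversion is standard algebra, not code from the paper:

```python
import numpy as np

def two_point_match(mean, std, skew):
    """Two-point distribution (value k1 with probability p, k2 with 1-p)
    matching a target mean, standard deviation and skewness. Uses the
    identity skew = (1 - 2p) / sqrt(p * (1 - p)) for a two-point law."""
    u = skew / np.sqrt(4.0 + skew**2)          # u = 1 - 2p
    p = 0.5 * (1.0 - u)
    k1 = mean + std * np.sqrt((1.0 - p) / p)   # taken with probability p
    k2 = mean - std * np.sqrt(p / (1.0 - p))   # taken with probability 1-p
    return k1, k2, p

m, s = 1.0, 1.0                     # arithmetic mean and std (cv = 1)
cv = s / m
g = (3.0 + cv**2) * cv              # skewness of a lognormal with this cv (= 4)
print(two_point_match(m, s, g))     # k1 ~ 5.24 (p ~ 0.05), k2 ~ 0.76
```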
COMPARISON OF METHODS FOR ESTIMATING GROUND-WATER PUMPAGE FOR IRRIGATION.
Frenzel, Steven A.
1985-01-01
Ground-water pumpage for irrigation was measured at 32 sites on the eastern Snake River Plain in southern Idaho during 1983. Pumpage at these sites also was estimated by three commonly used methods, and pumpage estimates were compared to measured values to determine the accuracy of each estimate. Statistical comparisons of estimated and metered pumpage using an F-test showed that only estimates made using the instantaneous discharge method were not significantly different (alpha = 0.01) from metered values. Pumpage estimates made using the power consumption method reflect variability in pumping efficiency among sites. Pumpage estimates made using the crop-consumptive use method reflect variability in water-management practices. Pumpage estimates made using the instantaneous discharge method reflect variability in discharges at each site during the irrigation season.
Characterization, parameter estimation, and aircraft response statistics of atmospheric turbulence
NASA Technical Reports Server (NTRS)
Mark, W. D.
1981-01-01
A non-Gaussian three-component model of atmospheric turbulence is postulated that accounts for readily observable features of turbulence velocity records, their autocorrelation functions, and their spectra. Methods for computing probability density functions and mean exceedance rates of a generic aircraft response variable are developed using non-Gaussian turbulence characterizations readily extracted from velocity recordings. A maximum likelihood method is developed for optimal estimation of the integral scale and intensity of records possessing von Karman transverse or longitudinal spectra. Formulas for the variances of such parameter estimates are developed. The maximum likelihood and least-squares approaches are combined to yield a method for estimating the autocorrelation function parameters of a two-component model for turbulence.
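As an illustration of the spectral side of this, the von Karman longitudinal spectrum has a standard closed form, and intensity and integral scale can be fitted to a measured spectrum by an approximate (Whittle-type) maximum likelihood; this is a stand-in for the estimator developed in the report, not a reproduction of it:

```python
import numpy as np
from scipy.optimize import minimize

def vk_longitudinal_psd(omega, sigma2, L):
    """Von Karman longitudinal spectrum (one-sided, spatial frequency in
    rad/m): Phi = sigma^2 * (2L/pi) / (1 + (1.339*L*omega)^2)^(5/6)."""
    return sigma2 * (2.0 * L / np.pi) / (1.0 + (1.339 * L * omega) ** 2) ** (5.0 / 6.0)

def fit_vk(omega, periodogram):
    """Whittle-type approximate ML fit of intensity sigma^2 and integral
    scale L; periodogram ordinates are ~exponential with mean Phi."""
    def nll(p):
        s2, L = np.exp(p)                      # keep both positive
        phi = vk_longitudinal_psd(omega, s2, L)
        return np.sum(np.log(phi) + periodogram / phi)
    res = minimize(nll, x0=np.log([1.0, 300.0]), method="Nelder-Mead")
    return np.exp(res.x)

# Synthetic demo: a true spectrum observed with multiplicative noise.
om = np.linspace(1e-4, 0.1, 400)
true = vk_longitudinal_psd(om, 2.0, 500.0)
I = true * np.random.default_rng(2).exponential(1.0, om.size)
print(fit_vk(om, I))   # approximately (2.0, 500.0)
```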
A Practical Method of Policy Analysis by Estimating Effect Size
ERIC Educational Resources Information Center
Phelps, James L.
2011-01-01
The previous articles on class size and other productivity research paint a complex and confusing picture of the relationship between policy variables and student achievement. Missing is a conceptual scheme capable of combining the seemingly unrelated research and dissimilar estimates of effect size into a unified structure for policy analysis and…
Assessment of in silico methods to estimate aquatic species sensitivity
Determining the sensitivity of a diversity of species to environmental contaminants continues to be a significant challenge in ecological risk assessment because toxicity data are generally limited to a few standard species. In many cases, QSAR models are used to estimate toxici...
Estimation method for national methane emission from solid waste landfills
NASA Astrophysics Data System (ADS)
Kumar, Sunil; Gaikwad, S. A.; Shekdar, A. V.; Kshirsagar, P. S.; Singh, R. N.
In keeping with global efforts to inventory methane emissions, municipal solid waste (MSW) landfills are recognised as one of the major anthropogenic sources of methane. In India, most solid waste is disposed of by landfilling in low-lying areas located in and around urban centres, resulting in the generation of large quantities of biogas containing a sizeable proportion of methane. After a critical review of the literature on methodologies for estimating methane emissions, the default methodology of the IPCC 1996 guidelines has been used for estimation. However, as the default methodology assumes that all potential methane is emitted in the year of waste deposition, a triangular model for biogas from landfill has been proposed and the results are compared. The proposed triangular model for methane emissions from landfills is more realistic and can very well be used in estimation on a global basis. Methane emissions from MSW landfills for the years 1980-1999 have been estimated, which could be used in computing national inventories of methane emission.
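A minimal sketch of the triangular-release idea, in contrast to the default methodology's assumption that all methane is emitted in the deposition year; the rise and fall durations below are placeholders, not the paper's calibrated values:

```python
import numpy as np

def triangular_profile(t_peak_yr=5, t_end_yr=20):
    """Fraction of a deposition year's methane potential released in each
    subsequent year: linear rise to a peak, then linear decay to zero.
    The peak and end years are placeholder assumptions."""
    t = np.arange(t_end_yr + 1, dtype=float)
    w = np.where(t <= t_peak_yr, t / t_peak_yr,
                 np.clip((t_end_yr - t) / (t_end_yr - t_peak_yr), 0.0, None))
    return w / w.sum()                        # fractions summing to 1

def annual_emissions(methane_potential_by_year):
    """Convolve each deposition year's total potential with the profile."""
    return np.convolve(methane_potential_by_year, triangular_profile())

# e.g. unit methane potentials (Gg CH4) for waste landfilled 1980-1999:
print(annual_emissions(np.ones(20))[:25])
```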
REVIEW AND DEVELOPMENT OF ESTIMATION METHODS FOR WILDLAND FIRE EMISSIONS
The product will be a collection of information/data materials and/or operational data systems that provide organized data and estimates to identify the occurence of aggregated or individual fires, the material burned, and air pollutant emissions. An interim background document ...
Assessing Methods for Generalizing Experimental Impact Estimates to Target Populations
ERIC Educational Resources Information Center
Kern, Holger L.; Stuart, Elizabeth A.; Hill, Jennifer; Green, Donald P.
2016-01-01
Randomized experiments are considered the gold standard for causal inference because they can provide unbiased estimates of treatment effects for the experimental participants. However, researchers and policymakers are often interested in using a specific experiment to inform decisions about other target populations. In education research,…
Methods to explain genomic estimates of breeding value
Technology Transfer Automated Retrieval System (TEKTRAN)
Genetic markers allow animal breeders to locate, estimate, and trace inheritance of many unknown genes that affect quantitative traits. Traditional models use pedigree data to compute expected proportions of genes identical by descent (assumed the same for all traits). Newer genomic models use thous...
A Modified Frequency Estimation Equating Method for the Common-Item Nonequivalent Groups Design
ERIC Educational Resources Information Center
Wang, Tianyou; Brennan, Robert L.
2009-01-01
Frequency estimation, also called poststratification, is an equating method used under the common-item nonequivalent groups design. A modified frequency estimation method is proposed here, based on altering one of the traditional assumptions in frequency estimation in order to correct for equating bias. A simulation study was carried out to…
Etalon-photometric method for estimation of tissues density at x-ray images
NASA Astrophysics Data System (ADS)
Buldakov, Nicolay S.; Buldakova, Tatyana I.; Suyatinov, Sergey I.
2016-04-01
The etalon-photometric method for quantitative estimation of the physical density of pathological entities is considered. The method consists in using an etalon (reference standard) during the registration and estimation of the photometric characteristics of objects. An algorithm for estimating physical density in X-ray images is offered.
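A minimal sketch of the etalon idea, assuming a simple linear brightness-to-density calibration; the step-wedge gray levels, densities, and region-of-interest value below are invented for illustration.

```python
# Sketch: calibrate image brightness against a reference etalon in the frame,
# then apply the calibration to a region of interest. Values are invented.
import numpy as np

etalon_gray = np.array([40.0, 80.0, 120.0, 160.0, 200.0])   # measured levels
etalon_density = np.array([0.5, 1.0, 1.5, 2.0, 2.5])        # known steps (g/cm^3)

a, b = np.polyfit(etalon_gray, etalon_density, 1)            # linear calibration
roi_mean_gray = 135.0                                        # pathological region
print(f"estimated density ~ {a * roi_mean_gray + b:.2f} g/cm^3")
```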
Sousa, Fátima Aparecida Emm Faleiros; da Silva, Talita de Cássia Raminelli; Siqueira, Hilze Benigno de Oliveira Moura; Saltareli, Simone; Gomez, Rodrigo Ramon Falconi; Hortense, Priscilla
2016-01-01
Abstract Objective: to describe acute and chronic pain from the perspective of the life cycle. Methods: participants: 861 people in pain. The Multidimensional Pain Evaluation Scale (MPES) was used. Results: in the category estimation method, the highest descriptor of chronic pain for children/adolescents was "Annoying" and for adults "Uncomfortable"; the highest descriptor of acute pain for children/adolescents was "Complicated" and for adults "Unbearable". In the magnitude estimation method, the highest descriptor of chronic pain was "Desperate" and of acute pain "Terrible". Conclusions: the MPES is a reliable scale that can be applied during different stages of development. PMID:27556875
Flood frequency estimation by hydrological continuous simulation and classical methods
NASA Astrophysics Data System (ADS)
Brocca, L.; Camici, S.; Melone, F.; Moramarco, T.; Tarpanelli, A.
2009-04-01
In recent years, the effects of flood damages have motivated the development of new, complex methodologies for simulating the hydrologic/hydraulic behaviour of river systems, fundamental for territorial planning as well as for floodplain management and risk analysis. The evaluation of flood-prone areas can be carried out through various procedures that are usually based on the estimation of the peak discharge for an assigned probability of exceedance. In the case of ungauged or scarcely gauged catchments this is not straightforward, as the limited availability of historical peak flow data induces a relevant uncertainty in the flood frequency analysis. A possible solution to overcome this problem is the application of hydrological simulation studies in order to generate long synthetic discharge time series. For this purpose, new methodologies based on the stochastic generation of rainfall and temperature data have recently been proposed. The inferred information can be used as input for a continuous hydrological model to generate a synthetic time series of peak river flow and, hence, the flood frequency distribution at a given site. In this study, stochastic rainfall data have been generated via the Neyman-Scott Rectangular Pulses (NSRP) model, characterized by a flexible structure in which the model parameters broadly relate to underlying physical features observed in rainfall fields, and capable of preserving the statistical properties of a rainfall time series over a range of time scales. The peak river flow time series have been generated through a continuous hydrological model aimed at flood prediction and developed for the purpose (hereinafter named MISDc) (Brocca, L., Melone, F., Moramarco, T., Singh, V.P., 2008. A continuous rainfall-runoff model as tool for the critical hydrological scenario assessment in natural channels. In: M. Taniguchi, W.C. Burnett, Y. Fukushima, M. Haigh, Y. Umezawa (Eds), From headwater to the ocean
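The classical-method side of such a study reduces to fitting an extreme-value law to an annual-maximum series and reading off the quantile for an assigned exceedance probability. A minimal sketch, assuming a Gumbel model and synthetic maxima (both illustrative, not the paper's data):

```python
# Sketch: flood frequency from a synthetic annual-maximum series, standing in
# for the continuous-simulation output described above.
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(0)
annual_max = gumbel_r.rvs(loc=120.0, scale=35.0, size=500, random_state=rng)

loc, scale = gumbel_r.fit(annual_max)
p_exceed = 0.01                      # assigned exceedance probability
q100 = gumbel_r.ppf(1.0 - p_exceed, loc=loc, scale=scale)
print(f"100-year peak discharge estimate: {q100:.1f} m^3/s")
```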
Altini, Marco; Penders, Julien; Vullers, Ruud; Amft, Oliver
2015-01-01
Several methods to estimate energy expenditure (EE) using body-worn sensors exist; however, quantifications of the differences in estimation error are missing. In this paper, we compare three prevalent EE estimation methods and five body locations to provide a basis for selecting among methods, sensor number, and positioning. We considered 1) counts-based estimation methods, 2) activity-specific estimation methods using METs lookup, and 3) activity-specific estimation methods using accelerometer features. The latter two estimation methods utilize subsequent activity classification and EE estimation steps. Furthermore, we analyzed accelerometer sensor number and on-body positioning to derive optimal EE estimation results during various daily activities. To evaluate our approach, we implemented a study with 15 participants who wore five accelerometer sensors while performing a wide range of sedentary, household, lifestyle, and gym activities at different intensities. Indirect calorimetry was used in parallel to obtain EE reference data. Results show that activity-specific estimation methods using accelerometer features can outperform counts-based methods by 88% and activity-specific methods using METs lookup for active clusters by 23%. No differences were found between activity-specific methods using METs lookup and using accelerometer features for sedentary clusters. For activity-specific estimation methods using accelerometer features, differences in EE estimation error between the best combinations of each number of sensors (1 to 5), analyzed with repeated measures ANOVA, were not significant. Thus, we conclude that choosing the best performing single sensor does not reduce EE estimation accuracy compared to a five-sensor system and can reliably be used. However, EE estimation errors can increase up to 80% if a nonoptimal sensor location is chosen. PMID:24691168
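A minimal sketch of the two-step activity-specific pipeline the abstract describes (classify activity, then apply a per-activity regression); the features, classifier, and synthetic data are stand-ins, not the study's models:

```python
# Sketch: activity-specific EE estimation = classify activity from features,
# then apply a regression model fitted for that activity. Data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 600
X = rng.normal(size=(n, 4))                      # accelerometer features
activity = rng.integers(0, 3, size=n)            # 0=sedentary, 1=household, 2=gym
ee = 1.0 + 2.0 * activity + 0.5 * X[:, 0] + rng.normal(0, 0.2, n)  # kcal/min

clf = RandomForestClassifier(random_state=0).fit(X, activity)
models = {a: LinearRegression().fit(X[activity == a], ee[activity == a])
          for a in np.unique(activity)}

x_new = rng.normal(size=(1, 4))
a_hat = clf.predict(x_new)[0]                    # step 1: classify activity
print("estimated EE:", models[a_hat].predict(x_new)[0])  # step 2: regress EE
```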
Quantitative estimation of poikilocytosis by the coherent optical method
NASA Astrophysics Data System (ADS)
Safonova, Larisa P.; Samorodov, Andrey V.; Spiridonov, Igor N.
2000-05-01
An investigation of the necessity and required reliability of determining poikilocytosis in hematology has shown that existing techniques suffer from serious shortcomings. To determine the deviation of erythrocyte form from the normal (rounded) one in blood smears, it is expedient to use an integrative estimate. An algorithm based on the correlation of erythrocyte morphological parameters with properties of the spatial-frequency spectrum of the blood smear is suggested. During analytical and experimental research, an integrative form parameter (IFP) was proposed which characterizes the increase of the relative concentration of cells with changed form above 5% and the predominating type of poikilocytes. An algorithm for statistically reliable estimation of the IFP on standard stained blood smears has been developed. To provide quantitative characterization of the morphological features of cells, a form vector has been proposed, and its validity for poikilocyte differentiation was shown.
A life history method for estimating convective rainfall
NASA Technical Reports Server (NTRS)
Martin, D. W.
1981-01-01
The remote sensing of rain amounts, of great interest for a variety of operational applications including hydrology, hydroelectricity, and agriculture, is discussed. The microwave radiometer represents the most obvious technique; however, poor spatial and temporal resolution, together with the problems associated with estimating the effective rain layer height, make visible and IR techniques more promising at the present time. Based on the bivariate frequency distribution of brightness versus temperature, brightness enhancement or infrared techniques alone may be inadequate to deduce details of convective activity. It is implied that better estimates of rainfall will come from visible and IR observations combined than from either used alone. The technique identifies clouds with a high probability of rain as those which have large optical and presumably physical thickness, as measured by the visible albedo, in comparison with their height, determined by the intensity of the IR emission.
On optical mass estimation methods for galaxy groups
NASA Astrophysics Data System (ADS)
Pearson, R. J.; Ponman, T. J.; Norberg, P.; Robotham, A. S. G.; Farr, W. M.
2015-05-01
We examine the performance of a variety of different estimators for the mass of galaxy groups, based on their galaxy distribution alone. We draw galaxies from the Sloan Digital Sky Survey for a set of groups and clusters for which hydrostatic mass estimates based on high-quality Chandra X-ray data are available. These are used to calibrate the galaxy-based mass proxies, and to test their performance. Richness, luminosity, galaxy overdensity, rms radius and dynamical mass proxies are all explored. These different mass indicators all have their merits, and we argue that using them in combination can provide protection against being misled by the effects of dynamical disturbance or variations in star formation efficiency. Using them in this way leads us to infer the presence of significant non-statistical scatter in the X-ray based mass estimates we employ. We apply a similar analysis to a set of mock groups derived from applying a semi-analytic galaxy formation code to the Millennium dark matter simulation. The relations between halo mass and the mass proxies differ significantly in some cases from those seen in the observational groups, and we discuss possible reasons for this.
A stochastic framework for inequality constrained estimation
NASA Astrophysics Data System (ADS)
Roese-Koerner, Lutz; Devaraju, Balaji; Sneeuw, Nico; Schuh, Wolf-Dieter
2012-11-01
Quality description is one of the key features of geodetic inference. This is even more true if additional information about the parameters is available that could improve the accuracy of the estimate. However, if such additional information is provided in the form of inequality constraints, most of the standard tools of quality description (variance propagation, confidence ellipses, etc.) cannot be applied, as there is no analytical relationship between parameters and observations. Some analytical methods have been developed for describing the quality of inequality constrained estimates. However, these methods either ignore the probability mass in the infeasible region or the influence of inactive constraints and therefore yield only approximate results. In this article, a frequentist framework for quality description of inequality constrained least-squares estimates is developed, based on the Monte Carlo method. The quality is described in terms of highest probability density regions. Beyond this accuracy estimate, the proposed method makes it possible to determine the influence and contribution of each constraint on each parameter using Lagrange multipliers. Plausibility of the constraints is checked by hypothesis testing and by estimating the probability mass in the infeasible region. As more probability mass concentrates in less space, applying the proposed method results in smaller confidence regions compared to the unconstrained ordinary least-squares solution. The method is applied to describe the quality of estimates in the problem of approximating a time series with positive definite functions.
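A minimal sketch of the Monte Carlo idea, assuming a toy model with non-negativity constraints: re-solve the inequality-constrained least-squares problem for many noise realizations and summarize the spread of the constrained estimates. Bounds and noise levels are illustrative.

```python
# Sketch: Monte Carlo quality description of inequality-constrained
# least squares. The empirical percentile region is a crude stand-in
# for a highest-probability-density region.
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(2)
A = rng.normal(size=(50, 2))
x_true = np.array([0.8, 0.1])
bounds = ([0.0, 0.0], [np.inf, np.inf])          # non-negativity constraints

samples = []
for _ in range(2000):
    y = A @ x_true + rng.normal(0, 0.3, size=50)
    samples.append(lsq_linear(A, y, bounds=bounds).x)
samples = np.array(samples)

print(np.percentile(samples, [2.5, 97.5], axis=0))   # per-parameter 95% region
```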
Estimation of lithofacies proportions using well and well test data
Hu, L.Y.; Blanc, G.; Noetinger, B.
1996-12-31
A crucial step of the commonly used geostatistical methods for modeling heterogeneous reservoirs (e.g. the sequential indicator simulation and the truncated Gaussian functions) is the estimation of the lithofacies local proportion (or probability density) functions. Well-test derived permeabilities show good correlation with lithofacies proportions around wells. Integrating well and well-test data in estimating lithofacies proportions could permit the building of more realistic models of reservoir heterogeneity. However this integration is difficult because of the different natures and measurement scales of these two types of data. This paper presents a two step approach to integrating well and well-test data into heterogeneous reservoir modeling. First lithofacies proportions in well-test investigation areas are estimated using a new kriging algorithm called KISCA. KISCA consists in kriging jointly the proportions of all lithofacies in a well-test investigation area so that the corresponding well-test derived permeability is respected through a weighted power averaging of lithofacies permeabilities. For multiple well-tests, an iterative process is used in KISCA to account for their interaction. After this, the estimated proportions are combined with lithofacies indicators at wells for estimating proportion (or probability density) functions over the entire reservoir field using a classical kriging method. Some numerical examples were considered to test the proposed method for estimating lithofacies proportions. In addition, a synthetic lithofacies reservoir model was generated and a well-test simulation was performed. The comparison between the experimental and estimated proportions in the well-test investigation area demonstrates the validity of the proposed method.
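A minimal sketch of the weighted power-averaging step named in the abstract, under assumed proportions, permeabilities, and averaging exponent (all illustrative):

```python
# Sketch: combine lithofacies permeabilities with their local proportions to
# reproduce a well-test effective permeability via a power average. The
# exponent omega is a tuning parameter; all values are invented.
def power_average(proportions, permeabilities, omega=0.5):
    assert abs(sum(proportions) - 1.0) < 1e-9
    s = sum(p * k**omega for p, k in zip(proportions, permeabilities))
    return s ** (1.0 / omega)

# Three lithofacies with illustrative permeabilities (mD) and proportions
print(power_average([0.5, 0.3, 0.2], [500.0, 50.0, 1.0]))
```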
Practical Aspects of the Equation-Error Method for Aircraft Parameter Estimation
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
2006-01-01
Various practical aspects of the equation-error approach to aircraft parameter estimation were examined. The analysis was based on simulated flight data from an F-16 nonlinear simulation, with realistic noise sequences added to the computed aircraft responses. This approach exposes issues related to the parameter estimation techniques and results, because the true parameter values are known for simulation data. The issues studied include differentiating noisy time series, maximum likelihood parameter estimation, biases in equation-error parameter estimates, accurate computation of estimated parameter error bounds, comparisons of equation-error parameter estimates with output-error parameter estimates, analyzing data from multiple maneuvers, data collinearity, and frequency-domain methods.
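A minimal sketch of the equation-error idea: the parameters enter the state-rate equation linearly, so ordinary least squares applies directly. The pitch-moment regressors, parameter values, and noise below are synthetic stand-ins for the F-16 data:

```python
# Sketch: equation-error parameter estimation. With measured states and
# (numerically differentiated) state rates, the aerodynamic parameters are
# linear in the model and estimable by least squares. Data are synthetic.
import numpy as np

rng = np.random.default_rng(3)
n = 1000
alpha = rng.normal(0.05, 0.02, n)           # angle of attack
q = rng.normal(0.0, 0.1, n)                 # pitch rate
delta_e = rng.normal(0.0, 0.05, n)          # elevator deflection
theta_true = np.array([-1.2, -4.0, -1.5])   # illustrative moment derivatives

X = np.column_stack([alpha, q, delta_e])
z = X @ theta_true + rng.normal(0, 0.01, n)  # "measured" pitch-moment equation

theta_hat, *_ = np.linalg.lstsq(X, z, rcond=None)
print(theta_hat)
```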
NASA Astrophysics Data System (ADS)
Ishigaki, Tsukasa; Yamamoto, Yoshinobu; Nakamura, Yoshiyuki; Akamatsu, Motoyuki
Patients who receive care from a doctor often wait a long time at many hospitals. According to patient questionnaires, long waiting time is the worst factor in patients' dissatisfaction with hospital service. The present paper describes a method for estimating the waiting time of each patient without an electronic medical chart system. The method applies a portable RFID system to data acquisition and robustly estimates the probability distributions of consultation and test times by doctor, for highly accurate waiting time estimation. We carried out data acquisition at a real hospital and verified the efficiency of the proposed method. The proposed system can be widely used for data acquisition in various fields such as marketing, entertainment, or human behavior measurement.
Estimating the prevalence of anaemia: a comparison of three methods.
Sari, M.; de Pee, S.; Martini, E.; Herman, S.; Sugiatmi; Bloem, M. W.; Yip, R.
2001-01-01
OBJECTIVE: To determine the most effective method for analysing haemoglobin concentrations in large surveys in remote areas, and to compare two methods (indirect cyanmethaemoglobin and HemoCue) with the conventional method (direct cyanmethaemoglobin). METHODS: Samples of venous and capillary blood from 121 mothers in Indonesia were compared using all three methods. FINDINGS: When the indirect cyanmethaemoglobin method was used the prevalence of anaemia was 31-38%. When the direct cyanmethaemoglobin or HemoCue method was used the prevalence was 14-18%. Indirect measurement of cyanmethaemoglobin had the highest coefficient of variation and the largest standard deviation of the difference between the first and second assessment of the same blood sample (10-12 g/l indirect measurement vs 4 g/l direct measurement). In comparison with direct cyanmethaemoglobin measurement of venous blood, HemoCue had the highest sensitivity (82.4%) and specificity (94.2%) when used for venous blood. CONCLUSIONS: Where field conditions and local resources allow it, haemoglobin concentration should be assessed with the direct cyanmethaemoglobin method, the gold standard. However, the HemoCue method can be used for surveys involving different laboratories or which are conducted in relatively remote areas. In very hot and humid climates, HemoCue microcuvettes should be discarded if not used within a few days of opening the container. PMID:11436471
Golmakani, Nahid; Khaleghinezhad, Khosheh; Dadgar, Selmeh; Hashempor, Majid; Baharian, Nosrat
2015-01-01
Introduction: In developing countries, hemorrhage accounts for 30% of maternal deaths. Postpartum hemorrhage has been defined as blood loss of around 500 ml or more after completion of the third stage of labor. Most cases of postpartum hemorrhage occur during the first hour after birth. The most common reason for bleeding in the early hours after childbirth is uterine atony. Blood loss during delivery is usually estimated visually by the midwife, a practice with a high error rate; however, studies have shown that the use of a standard can improve the estimation. The aim of the research is to compare the estimation of postpartum hemorrhage using the weighing method and the National Guideline for postpartum hemorrhage estimation. Materials and Methods: This descriptive study was conducted on 112 females in the Omolbanin Maternity Department of Mashhad over a six-month period, from November 2012 to May 2013. Convenience (accessible) sampling was used. The data collection tools were case selection, observation and interview forms. For postpartum hemorrhage estimation, after the third stage of labor was complete, the quantity of bleeding was estimated in the first and second hours after delivery by the midwife in charge, using the National Guideline for vaginal delivery provided by the Maternal Health Office. Also, after visual estimation using the National Guideline, the sheets under the parturient in the first and second hours after delivery were exchanged and weighed. The data were analyzed using descriptive statistics and the t-test. Results: According to the results, a significant difference was found between the estimated blood loss based on the weighing method and that using the National Guideline (weighing method 62.68 ± 16.858 cc vs. National Guideline 45.31 ± 13.484 cc in the first hour after delivery; P < 0.001) and (weighing method 41.26 ± 10.518 vs. National Guideline 30.24 ± 8.439 in the second hour after delivery; P < 0.001). Conclusions
Nonlinear Attitude Filtering Methods
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Crassidis, John L.; Cheng, Yang
2005-01-01
This paper provides a survey of modern nonlinear filtering methods for attitude estimation. Early applications relied mostly on the extended Kalman filter for attitude estimation. Since these applications, several new approaches have been developed that have proven to be superior to the extended Kalman filter. Several of these approaches maintain the basic structure of the extended Kalman filter, but employ various modifications in order to provide better convergence or improve other performance characteristics. Examples of such approaches include: filter QUEST, extended QUEST, the super-iterated extended Kalman filter, the interlaced extended Kalman filter, and the second-order Kalman filter. Filters that propagate and update a discrete set of sigma points rather than using linearized equations for the mean and covariance are also reviewed. A two-step approach is discussed with a first-step state that linearizes the measurement model and an iterative second step to recover the desired attitude states. These approaches are all based on the Gaussian assumption that the probability density function is adequately specified by its mean and covariance. Other approaches that do not require this assumption are reviewed, including particle filters and a Bayesian filter based on a non-Gaussian, finite-parameter probability density function on SO(3). Finally, the predictive filter, nonlinear observers and adaptive approaches are shown. The strengths and weaknesses of the various approaches are discussed.
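For reference, a generic extended-Kalman-filter cycle, the baseline the surveyed methods build on; f, h and their Jacobians are placeholders to be supplied by the application:

```python
# Sketch: one generic EKF cycle (predict + update). The dynamics f, the
# measurement h, and their Jacobians F_jac, H_jac are application-supplied.
import numpy as np

def ekf_step(x, P, u, z, f, F_jac, h, H_jac, Q, R):
    # Predict: propagate state and covariance through the nonlinear dynamics
    x_pred = f(x, u)
    F = F_jac(x, u)
    P_pred = F @ P @ F.T + Q
    # Update: correct with the measurement via the linearized observation model
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```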
Computation of nonparametric convex hazard estimators via profile methods
Jankowski, Hanna K.; Wellner, Jon A.
2010-01-01
This paper proposes a profile likelihood algorithm to compute the nonparametric maximum likelihood estimator of a convex hazard function. The maximisation is performed in two steps: First the support reduction algorithm is used to maximise the likelihood over all hazard functions with a given point of minimum (or antimode). Then it is shown that the profile (or partially maximised) likelihood is quasi-concave as a function of the antimode, so that a bisection algorithm can be applied to find the maximum of the profile likelihood, and hence also the global maximum. The new algorithm is illustrated using both artificial and real data, including lifetime data for Canadian males and females. PMID:20300560
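A minimal sketch of the second stage, assuming the inner support-reduction step is available as a black-box profile function (the quadratic stand-in below is illustrative): a search over the antimode exploits quasi-concavity to find the global maximum.

```python
# Sketch: golden-section search for the maximizer of a quasi-concave profile
# likelihood as a function of the antimode. The profile function is a smooth
# stand-in for the support-reduction inner maximization.
import math

def golden_section_max(f, a, b, tol=1e-6):
    phi = (math.sqrt(5) - 1) / 2
    c, d = b - phi * (b - a), a + phi * (b - a)
    while abs(b - a) > tol:
        if f(c) > f(d):
            b, d = d, c
            c = b - phi * (b - a)
        else:
            a, c = c, d
            d = a + phi * (b - a)
    return (a + b) / 2

profile = lambda m: -(m - 2.3) ** 2            # stand-in profile log-likelihood
print(golden_section_max(profile, 0.0, 10.0))  # ~2.3
```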
A method to estimate optical distortion using planetary images
NASA Astrophysics Data System (ADS)
Kouyama, Toru; Yamazaki, Atsushi; Yamada, Manabu; Imamura, Takeshi
2013-09-01
We developed a method to calibrate optical distortion parameters for axisymmetric optical systems using images of a spherical target taken at a variety of distances. The method utilizes the fact that the influence of distortion on the apparent radius in the image changes with the disk size of the projected body. Because several planets can be used as the spherical target, this method enables us to obtain distortion parameters in space, and by using a large number of planetary images, the desired parameter accuracy can be achieved statistically. The applicability of the method was tested by applying it to simulated planetary images and to real Venus images taken by the Venus Monitoring Camera onboard ESA's Venus Express; the optical distortion was successfully retrieved with a pixel position error of less than 1 pixel. Venus is the planet most suitable for the proposed method because of the smooth, nearly spherical surface of the haze layer covering the planet.
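A minimal sketch of the kind of fit the method implies, assuming a single-coefficient radial distortion model r_img = r_pred(1 + k1 r_pred^2); the data and model order are illustrative, not the paper's:

```python
# Sketch: fit a radial distortion coefficient from apparent vs. predicted
# disk radii of a spherical target imaged at many distances. Synthetic data.
import numpy as np

rng = np.random.default_rng(9)
k1_true = 2.5e-6
r_pred = rng.uniform(20, 400, size=200)            # predicted radii (pixels)
r_img = r_pred * (1 + k1_true * r_pred**2) + rng.normal(0, 0.3, 200)

# Linear in k1: (r_img - r_pred) = k1 * r_pred^3, so least squares is closed form
k1_hat = np.sum((r_img - r_pred) * r_pred**3) / np.sum(r_pred**6)
print(f"k1 ~ {k1_hat:.3e} (true {k1_true:.3e})")
```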
Estimates of minimum patch size depend on the method of estimation and the condition of the habitat.
McCoy, Earl D; Mushinsky, Henry R
2007-06-01
Minimum patch size for a viable population can be estimated in several ways. The density-area method estimates minimum patch size as the smallest area in which no new individuals are encountered as one extends the arbitrary boundaries of a study area outward. The density-area method eliminates the assumption, which accompanies other methods, of no variation in density with size of habitat area, but it is untested in situations in which habitat loss has confined populations to small areas. We used a variant of the density-area method to study the minimum patch size for the gopher tortoise (Gopherus polyphemus) in Florida, USA, where this keystone species is being confined to ever smaller habitat fragments. The variant was based on the premise that individuals within populations are likely to occur at unusually high densities when confined to small areas, and it estimated minimum patch size as the smallest area beyond which density plateaus. The data for our study came from detailed surveys of 38 populations of the tortoise. For all 38 populations, the areas occupied were determined empirically, and for 19 of them, duplicate surveys were undertaken about a decade apart. We found that a consistent inverse density-area relationship was present over smaller areas. The minimum patch size estimated from the density-area relationship was at least 100 ha, which is substantially larger than previous estimates. The relative abundance of juveniles was inversely related to population density for sites with relatively poor habitat quality, indicating that the estimated minimum patch size could represent an extinction threshold. We concluded that a negative density-area relationship may be an inevitable consequence of excessive habitat loss. We also concluded that any detrimental effects of an inverse density-area relationship may be exacerbated by the deterioration in habitat quality that often accompanies habitat loss. Finally, we concluded that the value of any estimate of
Simple and robust baseline estimation method for multichannel SAR-GMTI systems
NASA Astrophysics Data System (ADS)
Chen, Zhao-Yan; Wang, Tong; Ma, Nan
2016-07-01
In this paper, the authors propose an approach for estimating the effective baseline for the ground moving target indication (GMTI) mode of synthetic aperture radar (SAR) that differs from previous work. The authors show that the new method leads to a simpler and more robust baseline estimate. The method employs a baseline search operation in which the degree of coherence (DOC) serves as a metric to judge whether the optimum baseline estimate has been obtained. The rationale behind this method is that the more accurate the baseline estimate, the higher the coherence of the two channels after co-registering with the estimated baseline value. The merits of the proposed method are twofold: it is simple to design and robust to Doppler centroid estimation error. The effectiveness of the method is tested with real SAR data.
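A minimal sketch of the coherence-driven search, simplified to a one-dimensional co-registration shift on synthetic channels (the real method searches the effective baseline; values are illustrative):

```python
# Sketch: co-register channel 2 with a trial shift, compute the degree of
# coherence, and keep the candidate that maximizes it. Signals are synthetic.
import numpy as np

def degree_of_coherence(s1, s2):
    num = np.abs(np.vdot(s1, s2))
    return num / np.sqrt(np.vdot(s1, s1).real * np.vdot(s2, s2).real)

rng = np.random.default_rng(4)
n, true_shift = 4096, 7
s1 = rng.normal(size=n) + 1j * rng.normal(size=n)
s2 = np.roll(s1, true_shift) + 0.1 * (rng.normal(size=n) + 1j * rng.normal(size=n))

candidates = range(-20, 21)
doc = [degree_of_coherence(s1, np.roll(s2, -k)) for k in candidates]
print("estimated shift:", list(candidates)[int(np.argmax(doc))])
```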
Wind power error estimation in resource assessments.
Rodríguez, Osvaldo; Del Río, Jesús A; Jaramillo, Oscar A; Martínez, Manuel
2015-01-01
Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, probability density function, and wind turbine power curves. This method uses the actual wind speed data without prior statistical treatment based on 28 wind turbine power curves, which were fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, thereby yielding an error of 5%. The proposed error propagation complements the traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment return time. The implementation of this method increases the reliability of techno-economic resource assessment studies. PMID:26000444
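A minimal sketch of the propagation step, assuming an interpolated power curve with invented points: push the wind-speed error through P(v) by a central difference.

```python
# Sketch: propagate a wind-speed measurement error through an interpolated
# turbine power curve. Curve points and the measured speed are invented.
import numpy as np

v_pts = np.array([3.0, 5.0, 8.0, 11.0, 14.0, 25.0])             # m/s
p_pts = np.array([0.0, 150.0, 900.0, 1900.0, 2000.0, 2000.0])   # kW

def power(v):
    return np.interp(v, v_pts, p_pts)

v_meas, rel_err = 7.2, 0.10                  # 10% wind-speed error
dv = v_meas * rel_err
dP = (power(v_meas + dv) - power(v_meas - dv)) / 2.0   # central difference
print(f"P = {power(v_meas):.0f} kW +/- {dP:.0f} kW")
```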
Numerical method for estimating the size of chaotic regions of phase space
Henyey, F.S.; Pomphrey, N.
1987-10-01
A numerical method for estimating irregular volumes of phase space is derived. The estimate weights the irregular area on a surface of section with the average return time to the section. We illustrate the method by application to the stadium and oval billiard systems and also apply the method to the continuous Henon-Heiles system.
Advanced Method to Estimate Fuel Slosh Simulation Parameters
NASA Technical Reports Server (NTRS)
Schlee, Keith; Gangadharan, Sathya; Ristow, James; Sudermann, James; Walker, Charles; Hubert, Carl
2005-01-01
The nutation (wobble) of a spinning spacecraft in the presence of energy dissipation is a well-known problem in dynamics and is of particular concern for space missions. The nutation of a spacecraft spinning about its minor axis typically grows exponentially and the rate of growth is characterized by the Nutation Time Constant (NTC). For launch vehicles using spin-stabilized upper stages, fuel slosh in the spacecraft propellant tanks is usually the primary source of energy dissipation. For analytical prediction of the NTC this fuel slosh is commonly modeled using simple mechanical analogies such as pendulums or rigid rotors coupled to the spacecraft. Identifying model parameter values which adequately represent the sloshing dynamics is the most important step in obtaining an accurate NTC estimate. Analytic determination of the slosh model parameters has met with mixed success and is made even more difficult by the introduction of propellant management devices and elastomeric diaphragms. By subjecting full-sized fuel tanks with actual flight fuel loads to motion similar to that experienced in flight and measuring the forces experienced by the tanks these parameters can be determined experimentally. Currently, the identification of the model parameters is a laborious trial-and-error process in which the equations of motion for the mechanical analog are hand-derived, evaluated, and their results are compared with the experimental results. The proposed research is an effort to automate the process of identifying the parameters of the slosh model using a MATLAB/SimMechanics-based computer simulation of the experimental setup. Different parameter estimation and optimization approaches are evaluated and compared in order to arrive at a reliable and effective parameter identification process. To evaluate each parameter identification approach, a simple one-degree-of-freedom pendulum experiment is constructed and motion is induced using an electric motor. By applying the
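A minimal sketch of the automation idea: fit the parameters of a damped-oscillator stand-in for the pendulum analog to measured force data with a least-squares optimizer. The model form, signal, and values are illustrative, not the study's SimMechanics setup.

```python
# Sketch: identify slosh-analog parameters by least-squares fitting of a
# damped oscillation to measured force data, instead of hand-tuning.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(11)
t = np.linspace(0, 10, 500)

def model_force(params, t):
    amp, freq, damping = params
    return amp * np.exp(-damping * t) * np.sin(2 * np.pi * freq * t)

true = [5.0, 0.8, 0.15]                    # N, Hz, 1/s (slosh-like response)
measured = model_force(true, t) + rng.normal(0, 0.2, t.size)

fit = least_squares(lambda p: model_force(p, t) - measured, x0=[1.0, 1.0, 0.1])
print("identified parameters:", fit.x)
```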
Semi-quantitative method to estimate levels of Campylobacter
Technology Transfer Automated Retrieval System (TEKTRAN)
Introduction: Research projects utilizing live animals and/or systems often require reliable, accurate quantification of Campylobacter following treatments. Even with marker strains, conventional methods designed to quantify are labor and material intensive requiring either serial dilutions or MPN ...
A history-based method to estimate animal preference.
Maia, Caroline Marques; Volpato, Gilson Luiz
2016-01-01
Giving animals their preferred items (e.g., environmental enrichment) has been suggested as a method to improve animal welfare, thus raising the question of how to determine what animals want. Most studies have employed choice tests for detecting animal preferences. However, whether choice tests represent animal preferences remains a matter of controversy. Here, we present a history-based method to analyse data from individual choice tests to discriminate between preferred and non-preferred items. This method differentially weighs choices from older and recent tests performed over time. Accordingly, we provide both a preference index that identifies preferred items contrasted with non-preferred items in successive multiple-choice tests and methods to detect the strength of animal preferences for each item. We achieved this goal by investigating colour choices in the Nile tilapia fish species. PMID:27350213
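A minimal sketch of a recency-weighted preference index of the kind described, with an assumed geometric weighting scheme (the paper's exact weights may differ):

```python
# Sketch: weight later choice tests more heavily so that recent behaviour
# dominates the preference score. The decay scheme is an assumption.
def preference_index(choices, item, decay=0.8):
    """choices: list of chosen items, oldest first."""
    n = len(choices)
    weights = [decay ** (n - 1 - t) for t in range(n)]   # newest weight = 1
    hits = sum(w for w, c in zip(weights, choices) if c == item)
    return hits / sum(weights)

tests = ["blue", "blue", "green", "green", "green"]      # successive choices
print({item: round(preference_index(tests, item), 3) for item in set(tests)})
```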
Method of estimating pulse response using an impedance spectrum
Morrison, John L; Morrison, William H; Christophersen, Jon P; Motloch, Chester G
2014-10-21
Electrochemical impedance spectrum data are used to predict the pulse performance of an energy storage device. The impedance spectrum may be obtained in-situ. A simulation waveform includes a pulse wave whose fundamental frequency is greater than or equal to the lowest frequency used in the impedance measurement. Fourier series coefficients of the pulse train can be obtained. The number of harmonic constituents in the Fourier series is selected so as to appropriately resolve the response, but the maximum frequency should be less than or equal to the highest frequency used in the impedance measurement. Using a current pulse as an example, the Fourier coefficients of the pulse are multiplied by the impedance spectrum at corresponding frequencies to obtain the Fourier coefficients of the voltage response to the desired pulse. The Fourier coefficients of the response are then summed and reassembled to obtain the overall time-domain estimate of the voltage using Fourier series analysis.
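A minimal sketch of the computation, assuming a rectangular current pulse train and a simple resistor plus parallel-RC impedance model (both illustrative): multiply each Fourier coefficient by Z at its harmonic frequency and re-sum.

```python
# Sketch: voltage response of an impedance Z(f) to a rectangular current
# pulse train, via Fourier series. The impedance model is illustrative.
import numpy as np

T, n_harm = 10.0, 200                 # pulse period (s), harmonics kept
duty, I0 = 0.2, 1.0                   # duty cycle and pulse amplitude (A)

def Z(f):                             # stand-in impedance spectrum: R0 + R1||C1
    R0, R1, C1 = 0.05, 0.03, 500.0
    return R0 + R1 / (1 + 1j * 2 * np.pi * f * R1 * C1)

t = np.linspace(0.0, 2 * T, 2000)
v = np.full_like(t, I0 * duty * Z(0.0).real)          # DC term
for k in range(1, n_harm + 1):
    # Complex Fourier coefficient of a rectangular pulse train
    ck = I0 * duty * np.sinc(k * duty) * np.exp(-1j * np.pi * k * duty)
    f_k = k / T
    v += 2 * np.real(ck * Z(f_k) * np.exp(2j * np.pi * f_k * t))
print("peak voltage response:", v.max())
```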
Performance of different detrending methods in turbulent flux estimation
NASA Astrophysics Data System (ADS)
Donateo, Antonio; Cava, Daniela; Contini, Daniele
2015-04-01
The eddy covariance is the most direct, efficient and reliable method to measure the turbulent flux of a scalar (Baldocchi, 2003). Required conditions for high-quality eddy covariance measurements are, among others, stationarity of the measured data and fully developed turbulence. The simplest method for obtaining the fluctuating components for covariance calculation according to Reynolds averaging rules under ideal stationary conditions is the so-called mean removal method. However, steady-state conditions rarely exist in the atmosphere, because of the diurnal cycle, changes in meteorological conditions, or sensor drift. All these phenomena produce trends or low-frequency changes superimposed on the turbulent signal. Different methods for trend removal have been proposed in the literature; however, general agreement on how to separate low-frequency perturbations from turbulence has not yet been reached. The most commonly applied methods are linear detrending (Gash and Culf, 1996) and the high-pass filter, namely the moving average (Moncrieff et al., 2004). Moreover, Vickers and Mahrt (2003) proposed a multi-resolution decomposition method in order to select an appropriate time scale for mean removal as a function of atmospheric stability conditions. The present work investigates the performance of these different detrending methods in removing the low-frequency contribution to the turbulent flux calculation, including also a spectral filter based on a Fourier decomposition of the time series. The different methods have been applied to the calculation of turbulent fluxes for different scalars (temperature, ultrafine particle number concentration, carbon dioxide and water vapour concentration). A comparison of the detrending methods will be performed also for different measurement sites, namely an urban site, a suburban area, and a remote area in Antarctica. Moreover, the performance of the moving average in detrending time series has been analyzed as a function of the
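A minimal sketch comparing three detrending choices ahead of a w'c' covariance calculation, on a synthetic series that mixes turbulence with a slow drift; window lengths and noise levels are illustrative:

```python
# Sketch: eddy-covariance flux w'c' under mean removal, linear detrending,
# and a moving-average high-pass filter. Data are synthetic.
import numpy as np

rng = np.random.default_rng(5)
n = 18000                                        # e.g. 30 min at 10 Hz
t = np.arange(n)
w = rng.normal(0, 0.3, n)                        # vertical wind (trend-free)
c = 400 + 0.0005 * t + 0.8 * w + rng.normal(0, 0.2, n)  # scalar with drift

def flux(w, c, detrend):
    return np.mean((w - w.mean()) * (c - detrend(c)))

mean_removal = lambda x: x.mean()
linear = lambda x: np.polyval(np.polyfit(t, x, 1), t)
moving_avg = lambda x, win=3000: np.convolve(x, np.ones(win) / win, "same")

for name, d in [("mean", mean_removal), ("linear", linear), ("moving", moving_avg)]:
    print(name, flux(w, c, d))
```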
High-orbit satellite magnitude estimation using photometric measurement method
NASA Astrophysics Data System (ADS)
Zhang, Shixue
2015-12-01
Accurate measurement of high-orbit satellite magnitudes is significant for space target surveillance. This paper proposes a satellite photometric measurement method based on image processing. We calculate the satellite magnitude by comparing the camera CCD output values of a known fixed star and the satellite, computing the luminance of an object on the acquired image using a background-removal method. According to observation parameters such as azimuth, elevation, height and the telescope configuration, we can draw the star map on the image and thus obtain the true magnitude of a given fixed star in the image. We derive a new method to calculate the magnitude of a satellite from the magnitude of the fixed star in the image. To guarantee the algorithm's stability, we evaluate the measurement precision of the method and analyze the restrictive conditions in actual application. We have conducted extensive experiments with our system using a large telescope in satellite surveillance, verifying the correctness of the algorithm. The experimental results show that the precision of the proposed algorithm in satellite magnitude measurement is 0.24 mv, and the method can be generalized to other related fields.
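The core of the method is differential photometry: the satellite magnitude follows from the ratio of background-removed CCD signals against a star of known magnitude in the same frame. A minimal sketch with invented numbers:

```python
# Sketch: differential photometry against a reference star in the same frame.
# Signal sums are assumed background-removed; all values are invented.
import math

m_star = 6.5          # catalog magnitude of the reference star
S_star = 84000.0      # background-removed CCD sum of the star (ADU)
S_sat = 21000.0       # background-removed CCD sum of the satellite (ADU)

m_sat = m_star - 2.5 * math.log10(S_sat / S_star)
print(f"satellite magnitude ~ {m_sat:.2f} mv")
```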
Comparative evaluation of two quantitative precipitation estimation methods in Korea
NASA Astrophysics Data System (ADS)
Ko, H.; Nam, K.; Jung, H.
2013-12-01
The spatial distribution and intensity of rainfall are necessary inputs for hydrological models, particularly grid-based distributed models. Weather radar has much higher spatial resolution (1 km x 1 km) than rain gauges (~13 km), although radar measures rainfall indirectly while rain gauges observe it directly. Radar also provides areal, gridded rainfall information, whereas rain gauges provide point data. Therefore, radar rainfall data can be useful as input data for hydrological models. In this study, we compared two QPE schemes for producing radar rainfall for hydrological utilization. The two methods are 1) spatial adjustment and 2) real-time Z-R relationship adjustment (hereafter RAR; Radar-AWS Rain rate). We computed and analyzed statistics such as ME (Mean Error), RMSE (Root Mean Square Error), and correlation using cross-validation (here, the leave-one-out method).
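A minimal sketch of a leave-one-out evaluation of a gauge-adjustment-style QPE scheme (here a simple mean-field bias correction; the data and scheme are stand-ins for the paper's methods):

```python
# Sketch: hold out each gauge, adjust the radar field without it, and
# accumulate ME/RMSE/correlation against the held-out observation.
import numpy as np

rng = np.random.default_rng(6)
gauge = rng.gamma(2.0, 3.0, size=100)            # gauge rainfall (mm)
radar = 0.9 * gauge + rng.normal(0.0, 1.0, 100)  # co-located radar estimates

errors = []
for i in range(len(gauge)):
    mask = np.arange(len(gauge)) != i
    bias = gauge[mask].sum() / radar[mask].sum() # mean-field bias w/o gauge i
    errors.append(bias * radar[i] - gauge[i])    # adjusted estimate vs held-out
errors = np.asarray(errors)

print("ME  :", errors.mean())
print("RMSE:", np.sqrt((errors**2).mean()))
print("corr:", np.corrcoef(radar, gauge)[0, 1])
```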
Estimation of mechanical properties of nanomaterials using artificial intelligence methods
NASA Astrophysics Data System (ADS)
Vijayaraghavan, V.; Garg, A.; Wong, C. H.; Tai, K.
2014-09-01
Computational modeling tools such as molecular dynamics (MD), ab initio, finite element modeling or continuum mechanics models have been extensively applied to study the properties of carbon nanotubes (CNTs) based on given input variables such as temperature, geometry and defects. Artificial intelligence techniques can be used to further complement the application of numerical methods in characterizing the properties of CNTs. In this paper, we introduce the application of multi-gene genetic programming (MGGP) and support vector regression to formulate the mathematical relationship between the compressive strength of CNTs and input variables such as temperature and diameter. The predictions of compressive strength of CNTs made by these models are compared to those generated using MD simulations. The results indicate that the MGGP method can be deployed as a powerful tool for predicting the compressive strength of carbon nanotubes.
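A minimal sketch of the support-vector-regression side of the comparison, trained on synthetic stand-ins for MD outputs; the feature set and hyperparameters are assumptions:

```python
# Sketch: SVR mapping (temperature, diameter) -> compressive strength.
# Training data are synthetic stand-ins for MD simulation results.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(10)
temp = rng.uniform(100, 900, 200)                # K
diam = rng.uniform(0.5, 3.0, 200)                # nm
strength = 120 - 0.05 * temp + 15 / diam + rng.normal(0, 2, 200)  # GPa (toy)

X = np.column_stack([temp, diam])
model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.5)).fit(X, strength)
print(model.predict([[300.0, 1.0]]))             # predicted strength (GPa)
```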
A Study on Channel Estimation Methods for Time-Domain Spreading MC-CDMA Systems
NASA Astrophysics Data System (ADS)
Nagate, Atsushi; Fujii, Teruya
As a candidate for the transmission technology of next generation mobile communication systems, time-domain spreading MC-CDMA systems have begun to attract much attention. In these systems, data and pilot symbols are spread in the time domain and code-multiplexed. To combat fading, channel estimation must be conducted using the code-multiplexed pilot symbols. Next generation systems are expected to use frequency bands higher than those of current systems, which raises the maximum Doppler frequency, so a more powerful channel estimation method is needed. Considering this, we propose a method for highly accurate channel estimation: a combination of a two-dimensional channel estimation method and an impulse-response-based channel estimation method. We evaluate the proposed method by computer simulations.
Estimation of missing rainfall data using spatial interpolation and imputation methods
NASA Astrophysics Data System (ADS)
Radi, Noor Fadhilah Ahmad; Zakaria, Roslinazairimah; Azman, Muhammad Az-zuhri
2015-02-01
This study aims to estimate missing rainfall data by dividing the analysis into three different percentages, namely 5%, 10% and 20%, in order to represent various cases of missing data. In practice, spatial interpolation methods are the first choice for estimating missing data. These methods include the normal ratio (NR), arithmetic average (AA), coefficient of correlation (CC) and inverse distance (ID) weighting methods. The methods consider the distance between the target and the neighbouring stations as well as the correlations between them. An alternative for handling missing data is imputation, the process of replacing missing data with substituted values. A once-common approach is single imputation, which allows parameter estimation. However, single imputation ignores the estimation of variability, which leads to underestimation of standard errors and overly narrow confidence intervals. To overcome this underestimation problem, multiple imputation is used, where each missing value is estimated with a distribution of imputations that reflects the uncertainty about the missing data. In this study, a comparison of spatial interpolation methods and multiple imputation for estimating missing rainfall data is presented. The performance of the estimation methods is assessed using the similarity index (S-index), mean absolute error (MAE) and coefficient of correlation (R).
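A minimal sketch of the inverse distance (ID) weighting estimate named in the abstract; station values, distances, and the power exponent are illustrative:

```python
# Sketch: inverse-distance-weighted estimate of a missing value at a target
# station from its neighbours. All numbers are invented.
def idw(neighbour_values, distances_km, power=2.0):
    weights = [1.0 / d ** power for d in distances_km]
    return sum(w * v for w, v in zip(weights, neighbour_values)) / sum(weights)

rain_mm = [12.0, 8.5, 20.1]        # neighbouring stations, same day
dist_km = [5.0, 12.0, 18.0]
print(f"estimated missing value: {idw(rain_mm, dist_km):.2f} mm")
```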
On using sample selection methods in estimating the price elasticity of firms' demand for insurance.
Marquis, M Susan; Louis, Thomas A
2002-01-01
We evaluate a technique based on sample selection models that has been used by health economists to estimate the price elasticity of firms' demand for insurance. We demonstrate that this technique produces inflated estimates of the price elasticity. We show that alternative methods lead to valid estimates. PMID:11845921
Estimation of IRT Graded Response Models: Limited versus Full Information Methods
ERIC Educational Resources Information Center
Forero, Carlos G.; Maydeu-Olivares, Alberto
2009-01-01
The performance of parameter estimates and standard errors in estimating F. Samejima's graded response model was examined across 324 conditions. Full information maximum likelihood (FIML) was compared with a 3-stage estimator for categorical item factor analysis (CIFA) when the unweighted least squares method was used in CIFA's third stage. CIFA…
A method for estimating both the solubility parameters and molar volumes of liquids
NASA Technical Reports Server (NTRS)
Fedors, R. F.
1974-01-01
Development of an indirect method of estimating the solubility parameter of high molecular weight polymers. The proposed method of estimating the solubility parameter, like Small's method, is based on group additive constants, but is believed to be superior to Small's method for two reasons: (1) the contributions of a much larger number of functional groups have been evaluated, and (2) the method requires only a knowledge of the structural formula of the compound.
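A minimal sketch of the group-additivity computation, with approximate cohesive-energy and molar-volume contributions of the kind tabulated by such methods (the numbers below are rough placeholders, not authoritative constants):

```python
# Sketch: group-additivity solubility parameter, delta = sqrt(sum_E / sum_V).
# The group contributions below are rough illustrative values only.
import math

GROUPS = {            # (delta_e [J/mol], delta_v [cm^3/mol]) -- approximate
    "CH3": (4710.0, 33.5),
    "CH2": (4940.0, 16.1),
    "OH":  (29800.0, 10.0),
}

def solubility_parameter(group_counts):
    e = sum(n * GROUPS[g][0] for g, n in group_counts.items())
    v = sum(n * GROUPS[g][1] for g, n in group_counts.items())
    return math.sqrt(e / v), v       # (J/cm^3)^0.5 = MPa^0.5, and molar volume

# e.g. ethanol ~ CH3 + CH2 + OH
delta, v = solubility_parameter({"CH3": 1, "CH2": 1, "OH": 1})
print(f"delta ~ {delta:.1f} MPa^0.5, V ~ {v:.1f} cm^3/mol")
```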
Clement, Matthew; O'Keefe, Joy M; Walters, Brianne
2015-01-01
While numerous methods exist for estimating abundance when detection is imperfect, these methods may not be appropriate due to logistical difficulties or unrealistic assumptions. In particular, if highly mobile taxa are frequently absent from survey locations, methods that estimate a probability of detection conditional on presence will generate biased abundance estimates. Here, we propose a new estimator for estimating abundance of mobile populations using telemetry and counts of unmarked animals. The estimator assumes that the target population conforms to a fission-fusion grouping pattern, in which the population is divided into groups that frequently change in size and composition. If assumptions are met, it is not necessary to locate all groups in the population to estimate abundance. We derive an estimator, perform a simulation study, conduct a power analysis, and apply the method to field data. The simulation study confirmed that our estimator is asymptotically unbiased with low bias, narrow confidence intervals, and good coverage, given a modest survey effort. The power analysis provided initial guidance on survey effort. When applied to small data sets obtained by radio-tracking Indiana bats, abundance estimates were reasonable, although imprecise. The proposed method has the potential to improve abundance estimates for mobile species that have a fission-fusion social structure, such as Indiana bats, because it does not condition detection on presence at survey locations and because it avoids certain restrictive assumptions.
Estimating School Efficiency: A Comparison of Methods Using Simulated Data.
ERIC Educational Resources Information Center
Bifulco, Robert; Bretschneider, Stuart
2001-01-01
Uses simulated data to assess the adequacy of two econometric and linear-programming techniques (data-envelopment analysis and corrected ordinary least squares) for measuring performance-based school reform. In complex data sets (simulated to contain measurement error and endogeneity), these methods are inadequate efficiency measures. (Contains 40…
A Simple Estimation Method for Aggregate Government Outsourcing
ERIC Educational Resources Information Center
Minicucci, Stephen; Donahue, John D.
2004-01-01
The scholarly and popular debate on the delegation to the private sector of governmental tasks rests on an inadequate empirical foundation, as no systematic data are collected on direct versus indirect service delivery. We offer a simple method for approximating levels of service outsourcing, based on relatively straightforward combinations of and…
Effects of Vertical Scaling Methods on Linear Growth Estimation
ERIC Educational Resources Information Center
Lei, Pui-Wa; Zhao, Yu
2012-01-01
Vertical scaling is necessary to facilitate comparison of scores from test forms of different difficulty levels. It is widely used to enable the tracking of student growth in academic performance over time. Most previous studies on vertical scaling methods assume relatively long tests and large samples. Little is known about their performance when…
Estimation of the size of the female sex worker population in Rwanda using three different methods.
Mutagoma, Mwumvaneza; Kayitesi, Catherine; Gwiza, Aimé; Ruton, Hinda; Koleros, Andrew; Gupta, Neil; Balisanga, Helene; Riedel, David J; Nsanzimana, Sabin
2015-10-01
HIV prevalence is disproportionately high among female sex workers compared to the general population. Many African countries lack useful data on the size of female sex worker populations to inform national HIV programmes. A female sex worker size estimation exercise using three different venue-based methodologies was conducted among female sex workers in all provinces of Rwanda in August 2010. The female sex worker national population size was estimated using capture-recapture and enumeration methods, and the multiplier method was used to estimate the size of the female sex worker population in Kigali. A structured questionnaire was also used to supplement the data. The estimated number of female sex workers by the capture-recapture method was 3205 (95% confidence interval: 2998-3412). The female sex worker size was estimated at 3348 using the enumeration method. In Kigali, the female sex worker size was estimated at 2253 (95% confidence interval: 1916-2524) using the multiplier method. Nearly 80% of all female sex workers in Rwanda were found to be based in the capital, Kigali. This study provided a first-time estimate of the female sex worker population size in Rwanda using capture-recapture, enumeration, and multiplier methods. The capture-recapture and enumeration methods provided similar estimates of female sex worker in Rwanda. Combination of such size estimation methods is feasible and productive in low-resource settings and should be considered vital to inform national HIV programmes. PMID:25336306
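For reference, a minimal sketch of the capture-recapture calculation (Lincoln-Petersen with Chapman's correction) that underlies size-estimation exercises like this one; the counts are invented:

```python
# Sketch: Chapman-corrected Lincoln-Petersen estimate with a normal-theory
# 95% confidence interval. Counts are illustrative, not the study's data.
import math

def chapman_estimate(n1, n2, m):
    """n1: first-capture count, n2: second-capture count, m: recaptures."""
    n_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1
    var = ((n1 + 1) * (n2 + 1) * (n1 - m) * (n2 - m)
           / ((m + 1) ** 2 * (m + 2)))
    half = 1.96 * math.sqrt(var)
    return n_hat, (n_hat - half, n_hat + half)

n_hat, ci = chapman_estimate(n1=900, n2=850, m=240)
print(f"N ~ {n_hat:.0f}, 95% CI ({ci[0]:.0f}, {ci[1]:.0f})")
```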
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1975-01-01
Ridge, Marquardt's generalized inverse, shrunken, and principal components estimators are discussed in terms of the objectives of point estimation of parameters, estimation of the predictive regression function, and hypothesis testing. It is found that as the normal equations approach singularity, more consideration must be given to estimable functions of the parameters as opposed to estimation of the full parameter vector; that biased estimators all introduce constraints on the parameter space; that adoption of mean squared error as a criterion of goodness should be independent of the degree of singularity; and that ordinary least-squares subset regression is the best overall method.
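A minimal sketch contrasting ordinary least squares with a ridge estimator on a nearly singular design, the situation the abstract addresses; data and the ridge parameter are illustrative:

```python
# Sketch: OLS vs. ridge on a near-collinear design. As the normal equations
# approach singularity, OLS coefficients become unstable; ridge shrinks them.
import numpy as np

rng = np.random.default_rng(7)
x1 = rng.normal(size=100)
x2 = x1 + rng.normal(0, 0.01, 100)           # nearly collinear column
X = np.column_stack([x1, x2])
y = X @ np.array([1.0, 1.0]) + rng.normal(0, 0.5, 100)

beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
k = 0.1                                      # ridge parameter
beta_ridge = np.linalg.solve(X.T @ X + k * np.eye(2), X.T @ y)
print("OLS  :", beta_ols)                    # typically unstable
print("ridge:", beta_ridge)                  # shrunken, stabilized
```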
Comparing the estimation methods of stable distributions with respect to robustness properties
NASA Astrophysics Data System (ADS)
Celik, Nuri; Erden, Samet; Sarikaya, M. Zeki
2016-04-01
In statistical applications, some data sets may exhibit features such as high skewness, high kurtosis, and heavy-tailedness that are incompatible with the normality assumption, especially in finance and engineering. For this reason, modeling such data sets with α-stable distributions is a reasonable approach. The stable distributions have four parameters. In the literature, various estimation methods have been studied for these unknown model parameters. In this study, we give brief information about these proposed estimation methods and compare the estimators with respect to their robustness properties in a comprehensive simulation study, since robustness is an important property of an estimator for appropriate modeling.
Sensor fusion method for off-road vehicle position estimation
NASA Astrophysics Data System (ADS)
Guo, Linsong; Zhang, Qin; Han, Shufeng
2002-07-01
A FOG-aided GPS fusion system was developed for positioning an off-road vehicle, consisting of a six-axis inertial measurement unit (IMU) and a Garmin global positioning system (GPS). An observation-based Kalman filter was designed to integrate the readings from both sensors so that the noise in the GPS signal was smoothed out, the redundant information was fused, and a high update rate of output signals was obtained. The drift error of the FOG was also compensated. By using this system, a low-cost GPS can replace an expensive, higher-accuracy GPS. Measurement and fusion results showed that the positioning error of the vehicle estimated using this fusion system was greatly reduced relative to a GPS-only system. At a vehicle speed of about 1.34 m/s, the mean bias on the East axis of the fusion system was 0.48 m compared to the GPS mean bias of 1.28 m, and the mean bias on the North axis was reduced to 0.32 m from 1.48 m. The update frequency of the fusion system was increased to 9 Hz from the 1 Hz of the GPS. A prototype system was installed on a sprayer for vehicle positioning measurement.
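A minimal sketch of the fusion idea on a single axis, assuming a position-velocity Kalman filter driven by IMU acceleration with invented rates and noise levels (not the paper's filter design):

```python
# Sketch: integrate IMU acceleration at a high rate in the predict step and
# correct with low-rate GPS position fixes. All values are illustrative.
import numpy as np

dt, gps_every = 0.1, 10                  # 10 Hz IMU, 1 Hz GPS
F = np.array([[1, dt], [0, 1]])          # state: [position, velocity]
B = np.array([0.5 * dt**2, dt])
H = np.array([[1.0, 0.0]])               # GPS observes position only
Q, R = np.diag([1e-4, 1e-3]), np.array([[1.28**2]])

x, P = np.zeros(2), np.eye(2)
rng = np.random.default_rng(8)
for k in range(100):
    a = 0.2 + rng.normal(0, 0.05)        # IMU acceleration reading
    x = F @ x + B * a                    # predict
    P = F @ P @ F.T + Q
    if k % gps_every == 0:               # GPS fix available this step
        z = np.array([0.5 * 0.2 * (k * dt) ** 2 + rng.normal(0, 1.28)])
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)          # update with GPS position
        P = (np.eye(2) - K @ H) @ P
print("fused position/velocity:", x)
```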
Development of methods to estimate beryllium exposure. Final report
Rice, C.H.
1988-06-30
The project was designed to access data, provide preliminary exposure rankings, and delineate the process for detailing retrospective exposure assessments for beryllium among workers at processing facilities. A literature review was conducted, and walk-through surveys were conducted at two facilities still in operation. More than 8000 environmental records were entered into a computer file. Descriptive statistics were then generated and the process of rank ordering exposures across facilities was begun. In efforts to formulate crude indices of exposure, job titles of persons in the NIOSH mortality study were reviewed and categorized for any beryllium exposure, chemical form of beryllium exposure, and exposure to acid mists. Daily Weighted Average exposure estimates were reviewed by job title, across all facilities. The mean exposure at each facility was calculated. The strategy developed for retrospective exposure assessment is described. Tasks included determination of the usefulness of the Pennsylvania Workers' Compensation files; cataloging the numbers of samples available from company sources; investigating data holdings at Oak Ridge National Laboratory; and obtaining records from the Department of Energy Library.
Hilliges, M; Johansson, O
1999-01-01
The proper assessment of neuron numbers in the nervous system during physiological and pathological conditions, as well as following various treatments, has always been an important part of neuroscience. The present paper evaluates three methods for numerical estimates of nerves in epithelium: I) unbiased nerve fiber profile and nerve fiber fragment estimation methods, II) the traditional method of counting whole nerve fibers, and III) the nerve fiber estimation method. In addition, an unbiased nerve length estimation method was evaluated. Of these four methods, the nerve length per volume method was theoretically optimal, but more time-consuming than the others. The numbers obtained with the methods of nerve fiber profile, nerve fragment and nerve fiber estimation are dependent on the thickness of the epithelium and the sections as well as certain shape factors of the counted fiber. However for those, the actual counting can readily be performed in the microscope and is consequently quick and relatively inexpensive. The statistical analysis showed a very good correlation (R > 0.96) between the three numerical methods, meaning that basically any method could be used. However, dependent on theoretical and practical considerations and the correlation statistics, it may be concluded that the nerve fiber profile or fragment estimation methods should be employed if differences in epithelial and section thickness and the nerve fibers shape factors can be controlled. Such drawbacks are not inherent in the nerve length estimation method and, thus, it can generally be applied. PMID:10197065
Satellite attitude dynamics and estimation with the implicit midpoint method
NASA Astrophysics Data System (ADS)
Hellström, Christian; Mikkola, Seppo
2009-07-01
We describe the application of the implicit midpoint integrator to the problem of attitude dynamics for low-altitude satellites without the use of quaternions. Initially, we consider the satellite to rotate without external torques applied to it. We compare the numerical solution with the exact solution in terms of Jacobi's elliptic functions. Then, we include the gravity-gradient torque, where the implicit midpoint integrator proves to be a fast, simple and accurate method. Higher-order versions of the implicit midpoint scheme are compared to Gauss-Legendre Runge-Kutta methods in terms of accuracy and processing time. Finally, we investigate the performance of a parameter-adaptive Kalman filter based on the implicit midpoint integrator for the determination of the principal moments of inertia through observations.
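A minimal sketch of an implicit-midpoint step for torque-free rigid-body rotation, solved here by fixed-point iteration (a production code would use Newton's method); inertia values are illustrative:

```python
# Sketch: implicit midpoint step for Euler's torque-free rigid-body equations,
# w_{n+1} = w_n + h * f((w_n + w_{n+1}) / 2), via fixed-point iteration.
import numpy as np

I = np.array([1.0, 2.0, 3.0])                  # principal moments of inertia

def euler_rhs(w):
    return np.array([(I[1] - I[2]) * w[1] * w[2] / I[0],
                     (I[2] - I[0]) * w[2] * w[0] / I[1],
                     (I[0] - I[1]) * w[0] * w[1] / I[2]])

def implicit_midpoint_step(w, h, iters=20):
    w_next = w.copy()
    for _ in range(iters):                     # fixed-point iteration
        w_next = w + h * euler_rhs(0.5 * (w + w_next))
    return w_next

w = np.array([0.1, 1.0, 0.1])
for _ in range(1000):
    w = implicit_midpoint_step(w, h=0.01)
# The scheme preserves the quadratic invariants (energy, |L|^2) very well
print("energy:", 0.5 * np.sum(I * w**2), " |L|^2:", np.sum((I * w) ** 2))
```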
A TRMM Rainfall Estimation Method Applicable to Land Areas
NASA Technical Reports Server (NTRS)
Prabhakara, C.; Iacovazzi, R., Jr.; Oki, R.; Weinman, J. A.
1998-01-01
Utilizing multi-spectral, dual-polarization Special Sensor Microwave Imager (SSM/I) radiometer measurements, we have developed in this study a method to retrieve the average rain rate, R(sub f(sub R)), in a mesoscale grid box of 2deg x 3deg over land. The key parameter of this method is the fractional rain area, f(sub R), in that grid box, which is determined with the help of a threshold on the 85 GHz scattering depression deduced from the SSM/I data. In order to demonstrate the usefulness of this method, nine months of R(sub f(sub R)) are retrieved from SSM/I data over three grid boxes in the Northeastern United States. These retrievals are then compared with the corresponding ground-truth average rain rate, R(sub g), deduced from 15-minute rain gauges. Based on nine months of rain rate retrievals over three grid boxes, we find that R(sub f(sub R)) can explain about 64% of the variance contained in R(sub g). A similar evaluation of the grid-box-average rain rates R(sub GSCAT) and R(sub SRL), given by the NASA/GSCAT and NOAA/SRL rain retrieval algorithms, is performed. This evaluation reveals that R(sub GSCAT) and R(sub SRL) can explain only about 42% of the variance contained in R(sub g). In our method, a threshold on the 85 GHz scattering depression is used primarily to determine the fractional rain area in a mesoscale grid box; quantitative information pertaining to the 85 GHz scattering depression in the grid box is disregarded. In the NASA/GSCAT and NOAA/SRL methods, on the other hand, this quantitative information is included. Based on the performance of all three methods, we infer that the magnitude of the scattering depression is a poor indicator of rain rate. Furthermore, from maps based on the observations made by SSM/I on land and ocean, we find that there is a significant redundancy in the information content of the SSM/I multi-spectral observations. This leads us to infer that observations of SSM/I at 19 and 37 GHz add only marginal information to that
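The core of the retrieval is a threshold-then-average rule; the following sketch illustrates it with hypothetical footprint data, an assumed threshold, and an assumed proportionality coefficient (the paper derives these from SSM/I data, not the values used here):

```python
import numpy as np

# Hypothetical 85 GHz brightness-temperature depressions (K) for the
# SSM/I footprints falling in one 2 x 3 degree grid box.
depression = np.array([0.5, 12.0, 3.0, 25.0, 0.0, 8.0, 1.5, 30.0])

THRESHOLD_K = 6.0   # assumed scattering-depression threshold (illustrative)
BETA = 20.0         # assumed mm/h per unit rain fraction (illustrative)

# Fractional rain area: share of footprints whose depression exceeds the
# threshold; the magnitude of the depression is deliberately ignored.
f_R = np.mean(depression > THRESHOLD_K)

# Grid-box-average rain rate taken proportional to the rain fraction.
R = BETA * f_R
print(f"f_R = {f_R:.2f}, R = {R:.1f} mm/h")
```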
Spectrophotometric method for the estimation of 6-aminopenicillanic acid.
Shaikh, K; Talati, P G; Gang, D M
1973-02-01
A simple, rapid, and sensitive method is described whereby 6-aminopenicillanic acid can be spectrophotometrically determined in the presence of penicillins and their degradation products without prior separation. d-(+)-Glucosamine is used as reagent. The effect of such parameters as pH, temperature, and time of heating on the formation of the chromophore is described. The recommended range is from 25 to 250 µg of 6-aminopenicillanic acid. PMID:4364173
Estimation of partial pressure during graphite conditioning by matrix method
NASA Astrophysics Data System (ADS)
Chaudhuri, P.; Prakash, A.; Reddy, D. C.
2008-05-01
Plasma Facing Components (PFC) of the SST-1 tokamak are designed to be compatible with ultra-high vacuum (UHV), as they are housed in the main vacuum vessel. Graphite is the most widely used plasma-facing material in present-day tokamaks; high thermal shock resistance and the low atomic number of carbon are its most important properties for this application. However, graphite is porous and absorbs gases, which may be released during plasma operation. Graphite tiles are therefore baked at a high temperature of about 1000 °C in high vacuum (10-5 Torr) for several hours before installing them in the tokamak, to remove the impurities (mainly water vapour and metal impurities) that may have been deposited during machining of the tiles. The measurements of the gases released from the graphite tiles during baking (such as H2, H2O, CO, CO2, hydrocarbons, etc.) are accomplished with the help of a Quadrupole Mass Analyzer (QMA). Since the output of this measurement is a mass spectrum and not the partial pressures of the residual gases, one needs a procedure to convert the spectrum into partial pressures. The conventional method of analysis is tedious and time-consuming. We propose a new approach based on constructing a set of linear equations and solving them using matrix operations. This is a simple method compared to the conventional one and also eliminates its limitations. A Fortran program has been developed which identifies the likely gases present in the vacuum system and calculates their partial pressures from the residual gas analyzer data. The application of this method of calculating partial pressures from mass spectra is discussed in detail in this paper.
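The matrix formulation amounts to a small linear least-squares problem: a cracking-pattern matrix maps partial pressures to mass-peak amplitudes, and the system is inverted. A minimal sketch follows (the paper's program is in Fortran; the sensitivities and peak currents below are illustrative numbers, not calibrated QMA data):

```python
import numpy as np

# Rows: mass peaks (m/z 2, 18, 28, 44); columns: candidate gases
# (H2, H2O, CO, CO2). Entries are assumed relative cracking-pattern
# sensitivities -- illustrative, not calibrated values.
A = np.array([
    [1.00, 0.01, 0.00, 0.00],   # m/z 2
    [0.00, 1.00, 0.00, 0.00],   # m/z 18
    [0.00, 0.00, 1.00, 0.11],   # m/z 28 (CO2 also fragments to CO+)
    [0.00, 0.00, 0.00, 1.00],   # m/z 44
])

b = np.array([2.1e-6, 8.0e-6, 1.3e-6, 0.9e-6])  # measured peak currents

# Solve A p = b in the least-squares sense for the partial pressures.
p, *_ = np.linalg.lstsq(A, b, rcond=None)
for gas, pp in zip(["H2", "H2O", "CO", "CO2"], p):
    print(f"{gas}: {pp:.2e} (pressure units of the calibration)")
```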
Data-Driven Method to Estimate Nonlinear Chemical Equivalence
Mayo, Michael; Collier, Zachary A.; Winton, Corey; Chappell, Mark A
2015-01-01
There is great need to express the impacts of chemicals found in the environment in terms of effects from alternative chemicals of interest. Methods currently employed in fields such as life-cycle assessment, risk assessment, mixtures toxicology, and pharmacology rely mostly on heuristic arguments to justify the use of linear relationships in the construction of “equivalency factors,” which aim to model these concentration-concentration correlations. However, the use of linear models, even at low concentrations, oversimplifies the nonlinear nature of the concentration-response curve, therefore introducing error into calculations involving these factors. We address this problem by reporting a method to determine a concentration-concentration relationship between two chemicals based on the full extent of experimentally derived concentration-response curves. Although this method can be easily generalized, we develop and illustrate it from the perspective of toxicology, in which we provide equations relating the sigmoid and non-monotone, or “biphasic,” responses typical of the field. The resulting concentration-concentration relationships are manifestly nonlinear for nearly any chemical level, even at the very low concentrations common to environmental measurements. We demonstrate the method using real-world examples of toxicological data which may exhibit sigmoid and biphasic mortality curves. Finally, we use our models to calculate equivalency factors, and show that traditional results are recovered only when the concentration-response curves are “parallel,” which has been noted before, but we make formal here by providing mathematical conditions on the validity of this approach. PMID:26158701
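A sketch of the central idea, assuming Hill-type sigmoid responses (the paper also treats biphasic curves, not shown): the equivalent concentration is obtained by passing through one response curve and back through the inverse of the other, which makes the equivalency factor concentration-dependent unless the curves are parallel:

```python
import numpy as np

def hill(c, ec50, n):
    """Sigmoid concentration-response curve (Hill equation)."""
    return c**n / (ec50**n + c**n)

def hill_inverse(r, ec50, n):
    """Concentration producing response r on a Hill curve."""
    return ec50 * (r / (1.0 - r))**(1.0 / n)

def equivalent_concentration(c_a, ec50_a, n_a, ec50_b, n_b):
    """Concentration of chemical B matching the response of
    concentration c_a of chemical A: c_b = f_B^{-1}(f_A(c_a))."""
    return hill_inverse(hill(c_a, ec50_a, n_a), ec50_b, n_b)

# Illustrative parameters. With unequal Hill slopes the "equivalency
# factor" c_b / c_a varies with concentration; only when n_a == n_b
# (parallel curves) does it collapse to the constant ec50_b / ec50_a.
for c_a in [0.01, 0.1, 1.0, 10.0]:
    c_b = equivalent_concentration(c_a, ec50_a=1.0, n_a=1.5,
                                   ec50_b=2.0, n_b=0.8)
    print(f"c_A = {c_a:6.2f} -> c_B = {c_b:8.3f}, factor = {c_b / c_a:6.2f}")
```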
Systematic variational method for statistical nonlinear state and parameter estimation
NASA Astrophysics Data System (ADS)
Ye, Jingxin; Rey, Daniel; Kadakia, Nirag; Eldridge, Michael; Morone, Uriel I.; Rozdeba, Paul; Abarbanel, Henry D. I.; Quinn, John C.
2015-11-01
In statistical data assimilation one evaluates the conditional expected values, conditioned on measurements, of interesting quantities on the path of a model through observation and prediction windows. This often requires working with very high dimensional integrals in the discrete time descriptions of the observations and model dynamics, which become functional integrals in the continuous-time limit. Two familiar methods for performing these integrals include (1) Monte Carlo calculations and (2) variational approximations using the method of Laplace plus perturbative corrections to the dominant contributions. We attend here to aspects of the Laplace approximation and develop an annealing method for locating the variational path satisfying the Euler-Lagrange equations that comprises the major contribution to the integrals. This begins with the identification of the minimum action path starting with a situation where the model dynamics is totally unresolved in state space, and the consistent minimum of the variational problem is known. We then proceed to slowly increase the model resolution, seeking to remain in the basin of the minimum action path, until a path that gives the dominant contribution to the integral is identified. After a discussion of some general issues, we give examples of the assimilation process for some simple, instructive models from the geophysical literature. Then we explore a slightly richer model of the same type with two distinct time scales. This is followed by a model characterizing the biophysics of individual neurons.
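A toy rendition of the annealing idea on a scalar map, with illustrative dynamics, noise level, and annealing schedule (the paper's examples are continuous geophysical and neuronal models): the model-error weight Rf starts near zero, where the minimizer is essentially the data, and is raised geometrically while each minimization is warm-started from the last so the search stays in the basin of the minimum action path:

```python
import numpy as np
from scipy.optimize import minimize

# Toy data-assimilation problem: logistic-map dynamics, noisy data.
rng = np.random.default_rng(0)
g = lambda x: 3.8 * x * (1.0 - x)           # assumed model dynamics
T = 50
truth = np.empty(T); truth[0] = 0.3
for k in range(T - 1):
    truth[k + 1] = g(truth[k])
y = truth + 0.05 * rng.standard_normal(T)   # observations

Rm = 1.0                                    # measurement-error weight

def action(x, Rf):
    meas = Rm * np.sum((x - y) ** 2)                 # data term
    model = Rf * np.sum((x[1:] - g(x[:-1])) ** 2)    # model term
    return meas + model

# Annealing on the model resolution Rf, warm-starting each solve.
x = y.copy()
for Rf in 0.01 * 2.0 ** np.arange(20):
    x = minimize(action, x, args=(Rf,), method="L-BFGS-B").x

print("RMS error of estimated path:", np.sqrt(np.mean((x - truth) ** 2)))
```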
Wakefulness estimation only using ballistocardiogram: nonintrusive method for sleep monitoring.
Chung, Gih Sung; Lee, Jeong Su; Hwang, Su Hwan; Lim, Young Kyu; Jeong, Do-Un; Park, Kwang Suk
2010-01-01
To evaluate sleep quality or the autonomic nervous system, many annoying electrodes have to be attached to the subject's body. This can disturb comfortable sleep and, because it makes the experiment very expensive, continuous sleep monitoring is difficult. Since heart rate reflects the autonomic nervous system, it is highly synchronized with sympathetic activation during the transition from non-REM sleep to wakefulness: when the transition occurs, the heart rate increases abruptly, clearly distinguished from other changes. Using this physiology, we tried to classify wakefulness during whole-night sleep. Our final goal is to adopt this method for continuous monitoring in daily life, for which the electrocardiogram (ECG) is not suitable, since subjects would have to attach the electrodes by themselves at home. From that point of view, we used the ballistocardiogram (BCG), the representative method for obtaining heart beats nonintrusively. For ten normal subjects, wakefulness classification using heart rate dynamics was performed. Nine subjects showed substantial agreement with the visually scored method, polysomnography (PSG), and only one subject showed moderate agreement in Cohen's kappa value. PMID:21096160
Estimation of solar radiation by using modified Heliosat-II method and COMS-MI imagery
NASA Astrophysics Data System (ADS)
Choi, Wonseok; Song, Ahram; Kim, Yongil
2015-10-01
Estimation of solar radiation is important basic research that can be used in solar energy resource estimation, prediction of crop yields, resource-related decision-making, and so on. Accordingly, diverse studies for estimating solar radiation have recently been performed in Korea. The Heliosat-II method is one of the most widely used models for estimating solar irradiance, and its accuracy has been demonstrated by many studies. However, Heliosat-II cannot be applied directly to estimate solar irradiance around Korea, because it is optimized for estimating the solar radiation of Europe: it estimates solar radiation using Meteosat meteorological satellite imagery and statistical data taken around Europe. Because these data do not cover Korea, Heliosat-II must be modified before it can be used to estimate the solar radiation of Korea. The purpose of this study is therefore to modify Heliosat-II for irradiance estimation using imagery from COMS-MI, the weather satellite of Korea. To this end, the error in the albedo was removed from the ground albedo image, which was produced from the apparent albedo and the atmospheric reflectance, and the method of producing the background albedo map used in Heliosat-II was modified to obtain a more precise one. Through the study, ground albedo correction could be successfully performed and background albedo maps could be successfully derived.
Wu, Zhihong; Lu, Ke; Zhu, Yuan
2015-01-01
The torque output accuracy of the IPMSM in electric vehicles using a state-of-the-art MTPA strategy depends highly on the accuracy of the machine parameters; thus, a torque estimation method is necessary for the safety of the vehicle. In this paper, a torque estimation method based on a flux estimator with a modified low-pass filter is presented. Moreover, by taking into account the non-ideal characteristics of the inverter, the torque estimation accuracy is improved significantly. The effectiveness of the proposed method is demonstrated through MATLAB/Simulink simulation and experiment. PMID:26114557
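A minimal sketch of the flux-estimator idea in the stationary frame, with illustrative machine and filter constants (the paper's specific modified filter and inverter compensation are not reproduced): the drifting open-loop integral of the back-EMF is replaced by a first-order low-pass filter:

```python
import numpy as np

# Illustrative machine and filter constants -- not from the paper.
R_S = 0.05        # stator resistance (ohm)
POLE_PAIRS = 4
W_C = 10.0        # low-pass-filter cut-off (rad/s)
DT = 1e-4         # sample time (s)

def torque_estimator(v_ab, i_ab):
    """Estimate torque from stationary-frame (alpha-beta) voltages and
    currents. The pure integrator psi = int(v - R*i) dt drifts on any
    DC offset, so it is replaced by the first-order LPF 1/(s + W_C)."""
    psi = np.zeros(2)
    torques = []
    for v, i in zip(v_ab, i_ab):
        emf = v - R_S * i
        psi += DT * (emf - W_C * psi)        # forward-Euler LPF update
        t_e = 1.5 * POLE_PAIRS * (psi[0] * i[1] - psi[1] * i[0])
        torques.append(t_e)
    return np.array(torques)

# Demo with trivial constant inputs, just to show the call shape.
v = np.tile([1.0, 0.0], (1000, 1))
i = np.tile([2.0, 1.0], (1000, 1))
print(torque_estimator(v, i)[-1])
```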
EXPERIMENTAL METHODS TO ESTIMATE ACCUMULATED SOLIDS IN NUCLEAR WASTE TANKS
Duignan, M.; Steeper, T.; Steimke, J.
2012-12-10
devices and techniques were very effective to estimate the movement, location, and concentrations of the solids representing plutonium and are expected to perform well at a larger scale. The operation of the techniques and their measurement accuracies will be discussed as well as the overall results of the accumulated solids test.
Comparative Evaluation of Two Methods to Estimate Natural Gas Production in Texas
2003-01-01
This report describes an evaluation conducted by the Energy Information Administration (EIA) in August 2003 of two methods that estimate natural gas production in Texas. The first method (parametric method) was used by EIA from February through August 2003 and the second method (multinomial method) replaced it starting in September 2003, based on the results of this evaluation.
NASA Astrophysics Data System (ADS)
Tachibana, Hideyuki; Suzuki, Takafumi; Mabuchi, Kunihiko
We address a method for estimating the isometric muscle tension of fingers, as fundamental research for a neural-signal-based finger prosthesis. We utilize needle electromyogram (EMG) signals, which carry approximately equivalent information to peripheral neural signals. The estimation algorithm comprises two convolution operations. The first convolution is between a normal distribution and a spike array detected from the needle EMG signals; it estimates the probability density of spike-invoking times in the muscle. In this convolution, we hypothesize that each motor unit in a muscle fires spikes independently, based on the same probability density function. The second convolution is between the result of the first convolution and the isometric twitch, viz., the impulse response of the motor unit. The result of the calculation is the sum of the estimated tensions of all muscle fibers, i.e., the muscle tension. We confirmed that there is good correlation between the estimated tension of the muscle and the actual tension, with correlation coefficients >0.9 in 59%, and >0.8 in 89%, of all trials.
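A sketch of the two convolutions with illustrative spike times, kernel width, and twitch shape (the paper identifies these from data):

```python
import numpy as np

FS = 1000                       # sampling rate (Hz); values illustrative
spikes = np.zeros(2 * FS)       # 2 s recording of detected spikes
spikes[[200, 450, 700, 760, 820, 1500]] = 1.0

# First convolution: spike array with a normal distribution, giving the
# probability density of spike-invoking times in the muscle.
sigma = 0.01                                   # 10 ms width (assumed)
tk = np.arange(-0.05, 0.05, 1 / FS)
kernel = np.exp(-0.5 * (tk / sigma) ** 2)
kernel /= kernel.sum()
rate = np.convolve(spikes, kernel, mode="same")

# Second convolution: with the isometric twitch, i.e. the motor unit's
# impulse response, modelled here as a rise-decay peaking at ~50 ms.
t = np.arange(0, 0.3, 1 / FS)
twitch = (t / 0.05) * np.exp(1 - t / 0.05)     # assumed shape

tension = np.convolve(rate, twitch, mode="full")[: len(spikes)] / FS
print(tension.max())
```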
A method to estimate weight and dimensions of large and small gas turbine engines
NASA Technical Reports Server (NTRS)
Onat, E.; Klees, G. W.
1979-01-01
A computerized method was developed to estimate weight and envelope dimensions of large and small gas turbine engines within ±5% to 10%. The method is based on correlations of component weight and design features of 29 data base engines. Rotating components were estimated by a preliminary design procedure which is sensitive to blade geometry, operating conditions, material properties, shaft speed, hub-tip ratio, etc. The development and justification of the method selected, and the various methods of analysis, are discussed.
System and Method for Outlier Detection via Estimating Clusters
NASA Technical Reports Server (NTRS)
Iverson, David J. (Inventor)
2016-01-01
An efficient method and system for real-time or offline analysis of multivariate sensor data for use in anomaly detection, fault detection, and system health monitoring is provided. Models automatically derived from training data, typically nominal system data acquired from sensors in normally operating conditions or from detailed simulations, are used to identify unusual, out of family data samples (outliers) that indicate possible system failure or degradation. Outliers are determined through analyzing a degree of deviation of current system behavior from the models formed from the nominal system data. The deviation of current system behavior is presented as an easy to interpret numerical score along with a measure of the relative contribution of each system parameter to any off-nominal deviation. The techniques described herein may also be used to "clean" the training data.
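The following is a generic sketch of cluster-based outlier scoring in this spirit, not the patented algorithm itself; the k-means model, cluster count, and score definition are assumptions for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans

# Learn clusters from nominal multivariate sensor data (illustrative).
rng = np.random.default_rng(1)
nominal = rng.normal(size=(5000, 4))            # 4 sensor channels
model = KMeans(n_clusters=20, n_init=10, random_state=0).fit(nominal)

def outlier_score(sample):
    """Deviation score: distance from the sample to the nearest cluster
    learned from nominal data, plus each parameter's contribution to
    that deviation (the 'which channel is off-nominal' readout)."""
    centers = model.cluster_centers_
    d = np.linalg.norm(centers - sample, axis=1)
    nearest = centers[np.argmin(d)]
    contrib = np.abs(sample - nearest)
    return d.min(), contrib

score, contribution = outlier_score(np.array([0.1, 4.0, -0.2, 0.3]))
print(score, contribution)      # channel 2 dominates the deviation
```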
Statistical classification methods for estimating ancestry using morphoscopic traits.
Hefner, Joseph T; Ousley, Stephen D
2014-07-01
Ancestry assessments using cranial morphoscopic traits currently rely on subjective trait lists and observer experience rather than empirical support. The trait list approach, which is untested, unverified, and in many respects unrefined, is relied upon because of tradition and subjective experience. Our objective was to examine the utility of frequently cited morphoscopic traits and to explore eleven appropriate and novel methods for classifying an unknown cranium into one of several reference groups. In these analyses, artificial neural networks (aNNs), OSSA, support vector machines, and random forest models showed mean classification accuracies of at least 85%. The aNNs had the highest overall classification rate (87.8%), and random forests showed the smallest difference between the highest (90.4%) and lowest (76.5%) classification accuracies. The results of this research demonstrate that morphoscopic traits can be successfully used to assess ancestry without relying only on the experience of the observer. PMID:24646108
Method and system for non-linear motion estimation
NASA Technical Reports Server (NTRS)
Lu, Ligang (Inventor)
2011-01-01
A method and system for extrapolating and interpolating a visual signal, including: determining a first motion vector between a first pixel position in a first image and a second pixel position in a second image; determining a second motion vector between the second pixel position in the second image and a third pixel position in a third image; determining a third motion vector from one of (a) the first pixel position in the first image and the second pixel position in the second image, and (b) the second pixel position in the second image and the third pixel position in the third image, using a non-linear model; and determining the position of a fourth pixel in a fourth image based upon the third motion vector.
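A sketch of one simple non-linear (constant-acceleration, i.e., quadratic-in-time) motion model consistent with the claim language; the specific model in the patent may differ:

```python
import numpy as np

def extrapolate_fourth_position(p1, p2, p3):
    """Given a pixel tracked through three images, extrapolate its
    position in a fourth image using a quadratic (constant-acceleration)
    motion model -- one simple choice of non-linear model."""
    v1 = p2 - p1            # first motion vector
    v2 = p3 - p2            # second motion vector
    a = v2 - v1             # frame-to-frame acceleration
    v3 = v2 + a             # third motion vector from the model
    return p3 + v3

p4 = extrapolate_fourth_position(np.array([10.0, 5.0]),
                                 np.array([12.0, 6.5]),
                                 np.array([15.0, 9.0]))
print(p4)   # -> [19. 12.5]
```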
NASA Astrophysics Data System (ADS)
Rupšys, P.
2015-10-01
A system of stochastic differential equations (SDEs) with mixed-effects parameters and a multivariate normal copula density function was used to develop a tree height model for Scots pine trees in Lithuania. A two-step maximum likelihood parameter estimation method is used, and computational guidelines are given. After fitting the conditional probability density functions to outside-bark diameter at breast height and total tree height, a bivariate normal copula distribution model was constructed. Predictions from the mixed-effects parameters SDE tree height model calculated during this research were compared to regression tree height equations. The results are implemented in the symbolic computational language MAPLE.
Conditional Density Estimation with HMM Based Support Vector Machines
NASA Astrophysics Data System (ADS)
Hu, Fasheng; Liu, Zhenqiu; Jia, Chunxin; Chen, Dechang
Conditional density estimation is very important in financial engineering, risk management, and other engineering computing problems. However, most regression models carry the latent assumption that the probability density is Gaussian, which is not necessarily true in many real-life applications. In this paper, we give a framework to estimate or predict the conditional density mixture dynamically. By combining the Input-Output HMM with SVM regression and building an SVM model in each state of the HMM, we can estimate a conditional density mixture instead of a single Gaussian. With an SVM in each node, this model can be applied not only to regression but to classification as well. We applied this model to denoise ECG data. The proposed method has the potential to be applied to other time series, such as stock market return predictions.
Study on Comparison of Bidding and Pricing Behavior Distinction between Estimate Methods
NASA Astrophysics Data System (ADS)
Morimoto, Emi; Namerikawa, Susumu
The most characteristic trend in bidding and pricing behavior in recent years is the increasing number of bids placed just above the criteria for low-price bidding investigations. The contractor's markup is the difference between the bidding price and the execution price; in Japanese public works bidding, it is therefore the difference between the low-price investigation criterion and the execution price. In practice, bidders' strategies and behavior have been controlled by public engineers' budgets, and estimation and bidding are inseparably linked in the Japanese public works procurement system. A trial of the unit-price-type estimation method began in 2004, while the accumulated estimation method remains one of the standard methods in public works; there are thus two types of standard estimation methods in Japan. In this study, we performed a statistical analysis of the bid information for civil engineering works for the Ministry of Land, Infrastructure, and Transportation in 2008. It highlights several ways in which bidding and pricing behavior is related to the estimation method used in Japanese public works bidding. The two types of standard estimation methods produce different results in the number of bidders (the bid/no-bid decision) and in the distribution of bid prices (the markup decision). The comparison of the bid-price distributions showed that, under the unit-price-type estimation method, the percentage of bids concentrated at the low-price investigation criterion tends to be higher in large-sized public works than under the accumulated estimation method. At the same time, the number of bidders for public works estimated by unit price tends to increase significantly; the unit-price estimation is likely to have been one of the factors in construction companies' decisions to participate in the biddings.
Olascoaga, Beñat; Mac Arthur, Alasdair; Atherton, Jon; Porcar-Castell, Albert
2016-03-01
Accurate temporal and spatial measurements of leaf optical traits (i.e., absorption, reflectance and transmittance) are paramount to photosynthetic studies. These optical traits are also needed to couple radiative transfer and physiological models to facilitate the interpretation of optical data. However, estimating leaf optical traits in leaves with complex morphologies remains a challenge. Leaf optical traits can be measured using integrating spheres, either by placing the leaf sample in one of the measuring ports (External Method) or by placing the sample inside the sphere (Internal Method). However, in leaves with complex morphology (e.g., needles), the External Method presents limitations associated with gaps between the leaves, and the Internal Method presents uncertainties related to the estimation of total leaf area. We introduce a modified version of the Internal Method, which bypasses the effect of gaps and the need to estimate total leaf area, by painting the leaves black and measuring them before and after painting. We assess and compare the new method with the External Method using a broadleaf and two conifer species. Both methods yielded similar leaf absorption estimates for the broadleaf, but absorption estimates were higher with the External Method for the conifer species. Factors explaining the differences between methods, their trade-offs and their advantages and limitations are also discussed. We suggest that the new method can be used to estimate leaf absorption in any type of leaf independently of its morphology, and be used to study further the impact of gap fraction in the External Method. PMID:26843207
An improved method for nonlinear parameter estimation: a case study of the Rössler model
NASA Astrophysics Data System (ADS)
He, Wen-Ping; Wang, Liu; Jiang, Yun-Di; Wan, Shi-Quan
2015-06-01
Parameter estimation is an important research topic in nonlinear dynamics. Based on the evolutionary algorithm (EA), Wang et al. (2014) present a new scheme for nonlinear parameter estimation, and numerical tests indicate that the estimation precision is satisfactory. However, the convergence rate of the EA is relatively slow when multiple unknown parameters in a multidimensional dynamical system are estimated simultaneously. To solve this problem, an improved method for parameter estimation of nonlinear dynamical equations is provided in the present paper. The main idea of the improved scheme is to use all of the known time series for all of the components of the dynamical equations to estimate the parameters of a single component at a time, instead of estimating all of the parameters in all of the components simultaneously. Thus, we can estimate all of the parameters stage by stage. The performance of the improved method was tested using a classic chaotic system, the Rössler model. The numerical tests show that the amended parameter estimation scheme can greatly improve the searching efficiency and that there is a significant increase in the convergence rate of the EA, particularly for multiparameter estimation in multidimensional dynamical equations. Moreover, the results indicate that the accuracy of parameter estimation and the CPU time consumed by the presented method have no obvious dependence on the sample size.
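To illustrate the component-by-component idea on the Rössler system, the sketch below swaps the paper's evolutionary algorithm for plain least squares, which the stage-wise decomposition makes possible because each Rössler equation is linear in its own parameters; all settings are illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Generate "observed" time series from the Roessler system; a, b, c
# play the role of the unknowns to be recovered.
a_true, b_true, c_true = 0.2, 0.2, 5.7
f = lambda t, s: [-s[1] - s[2], s[0] + a_true * s[1],
                  b_true + s[2] * (s[0] - c_true)]
t = np.linspace(0, 100, 20001)
x, y, z = solve_ivp(f, (0, 100), [1.0, 1.0, 1.0], t_eval=t).y

# Component-by-component estimation: differentiate each observed
# series, then solve a small linear least-squares problem per equation
# instead of searching all parameters at once.
dy = np.gradient(y, t)
dz = np.gradient(z, t)

# dy/dt = x + a*y        ->  regress (dy - x) on y to get a
a_est = np.linalg.lstsq(y[:, None], (dy - x)[:, None], rcond=None)[0].item()

# dz/dt = b + z*(x - c)  ->  dz - z*x = b - c*z = [1, -z] . [b, c]
B = np.column_stack([np.ones_like(z), -z])
bc = np.linalg.lstsq(B, dz - z * x, rcond=None)[0]
print(a_est, bc)    # should approach (0.2, [0.2, 5.7])
```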
Multivariate drought frequency estimation using copula method in Southwest China
NASA Astrophysics Data System (ADS)
Hao, Cui; Zhang, Jiahua; Yao, Fengmei
2015-12-01
Drought over Southwest China occurs frequently and has an obvious seasonal characteristic. Proper management of regional droughts requires knowledge of the expected frequency or probability of specific climate conditions. This study utilized k-means classification and copulas to demonstrate the regional drought occurrence probability and return period based on trivariate drought properties, i.e., drought duration, severity, and peak. A drought event in this study was defined as occurring when the 3-month Standardized Precipitation Evapotranspiration Index (SPEI) was less than -0.99, according to the regional climate characteristics. The next step was to classify the region into six clusters by the k-means method, based on annual and seasonal precipitation and temperature, and to establish marginal probability distributions for each drought property in each sub-region. Several copula types were tested for best fit, and the Student t copula was recognized as the best one for integrating drought duration, severity, and peak. The results indicated that a proper classification is important for a regional drought frequency analysis and that copulas are useful tools for exploring the associations of correlated drought variables and analyzing drought frequency. The Student t copula was a robust and proper function for drought joint probability and return period analysis, which is important for analyzing and predicting regional drought risks.
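For reference, the standard copula-based return-period construction used in studies of this kind (textbook form and notation, not quoted from the paper):

```latex
% u = F_D(d), v = F_S(s), w = F_P(p) are the marginal CDFs of duration,
% severity, and peak, linked by a copula C; mu is the mean interarrival
% time of drought events.
T_{\mathrm{or}} = \frac{\mu}{1 - C(u, v, w)}, \qquad
T_{\mathrm{and}} = \frac{\mu}{P(D > d,\ S > s,\ P > p)}
```

where the joint exceedance probability in the "and" case follows from C by inclusion-exclusion.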
Brassey, Charlotte A.; Maidment, Susannah C. R.; Barrett, Paul M.
2015-01-01
Body mass is a key biological variable, but difficult to assess from fossils. Various techniques exist for estimating body mass from skeletal parameters, but few studies have compared outputs from different methods. Here, we apply several mass estimation methods to an exceptionally complete skeleton of the dinosaur Stegosaurus. Applying a volumetric convex-hulling technique to a digital model of Stegosaurus, we estimate a mass of 1560 kg (95% prediction interval 1082–2256 kg) for this individual. By contrast, bivariate equations based on limb dimensions predict values between 2355 and 3751 kg and require implausible amounts of soft tissue and/or high body densities. When corrected for ontogenetic scaling, however, volumetric and linear equations are brought into close agreement. Our results raise concerns regarding the application of predictive equations to extinct taxa with no living analogues in terms of overall morphology and highlight the sensitivity of bivariate predictive equations to the ontogenetic status of the specimen. We emphasize the significance of rare, complete fossil skeletons in validating widely applied mass estimation equations based on incomplete skeletal material and stress the importance of accurately determining specimen age prior to further analyses. PMID:25740841
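A minimal sketch of the volumetric convex-hull step; a real analysis hulls each body segment of the digital skeleton and calibrates the hull-to-body volume ratio on extant taxa, so the point cloud, calibration factor, and density below are illustrative assumptions only:

```python
import numpy as np
from scipy.spatial import ConvexHull

# Stand-in point cloud for a (segmented) digital skeleton, in metres.
points = np.random.default_rng(0).normal(size=(5000, 3))

hull_volume = ConvexHull(points).volume   # m^3
BODY_PER_HULL = 1.2    # assumed body-volume / hull-volume calibration
DENSITY = 1000.0       # assumed bulk density, kg/m^3

mass = hull_volume * BODY_PER_HULL * DENSITY
print(f"estimated mass: {mass:.0f} kg")
```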
Variable methods to estimate the ionospheric horizontal gradient
NASA Astrophysics Data System (ADS)
Nagarajoo, Karthigesu
2016-06-01
DGPS, or differential Global Positioning System, is a system in which the range error at a reference station (after eliminating the error due to its clock, hardware delay, and multipath) is removed from the range measurement of a user viewing the same satellite, presuming that the satellite's paths to both the reference station and the user experience common errors due to the ionosphere, clock errors, etc. Under this assumption, the error due to ionospheric refraction is assumed to be the same for the two closely spaced paths (such as a baseline of 10 km between the reference station and the user, as used in the simulations throughout this paper unless otherwise stated), and the presence of a horizontal ionospheric gradient is thus ignored. If a user's path is exposed to a drastically large ionospheric gradient, the large difference in ionospheric delays between the reference station and the user can result in a significant position error for the user. Several examples of extremely large ionospheric gradients that could cause significant user errors have been observed. The ionospheric horizontal gradient could instead be obtained from the gradient of the Total Electron Content (TEC) observed from a number of received GPS satellites at one or more reference stations, or from empirical models updated with real-time data. To investigate the former, in this work the dual-frequency method has been used to obtain both South-North and East-West gradients, using four receiving stations separated in those directions. In addition, observation data from Navy Ionospheric Monitoring System (NIMS) receivers and the TEC contour map from the Rutherford Appleton Laboratory (RAL), UK, have been used to define the magnitude and direction of the gradient.
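For reference, the standard dual-frequency TEC relation underlying such gradient estimates (textbook form, not quoted from the paper); P1 and P2 are the pseudoranges in metres on carriers f1 and f2, and the horizontal gradient follows from differencing TEC between stations A and B separated by distance d_AB:

```latex
% P_1, P_2: pseudoranges (m) on frequencies f_1, f_2 (Hz); TEC in el/m^2.
\mathrm{TEC} = \frac{1}{40.3}\,\frac{f_1^{2} f_2^{2}}{f_1^{2}-f_2^{2}}\,(P_2 - P_1),
\qquad
\nabla\mathrm{TEC} \approx \frac{\mathrm{TEC}_B - \mathrm{TEC}_A}{d_{AB}}
```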
Magnetic Resonance Elastography as a Method to Estimate Myocardial Contractility
Kolipaka, Arunark; Aggarwal, Shivani R.; McGee, Kiaran P.; Anavekar, Nandan; Manduca, Armando; Ehman, Richard L.; Araoz, Philip A.
2012-01-01
Purpose To determine whether increasing epinephrine infusion in an in-vivo pig model is associated with an increase in end-systolic magnetic resonance elastography (MRE)-derived effective stiffness. Methods Finite element modeling (FEM) was performed to determine the range of myocardial wall thicknesses that could be used for analysis. MRE was then performed on 5 pigs to measure the end-systolic effective stiffness during epinephrine infusion. Epinephrine was continuously infused intravenously in each pig to increase the heart rate in increments of 20%. For each such increase, end-systolic effective stiffness was measured using MRE. In each pig, Student's t-test was used to compare effective end-systolic stiffness at baseline and at the initial infusion of epinephrine. Least-squares linear regression was performed to determine the correlation between normalized end-systolic effective stiffness and the increase in heart rate with epinephrine infusion. Results FEM showed that phase gradient inversion could be performed on wall thicknesses of approximately 1.5 cm or greater. Effective end-systolic stiffness significantly increased from baseline to the first infusion in all pigs (p=0.047). A linear correlation was found between normalized effective end-systolic stiffness and percent increase in heart rate by epinephrine infusion, with R2 ranging from 0.86 to 0.99 in 4 pigs; in one pig the R2 value was 0.1. A linear correlation with R2=0.58 was found between normalized effective end-systolic stiffness and percent increase in heart rate when pooling data points from all pigs. Conclusion Noninvasive MRE-derived end-systolic effective myocardial stiffness may be a surrogate for myocardial contractility. PMID:22334349
Estimating basin scale evapotranspiration (ET) by water balance and remote sensing methods
Senay, G.B.; Leake, S.; Nagler, P.L.; Artan, G.; Dickinson, J.; Cordova, J.T.; Glenn, E.P.
2011-01-01
Evapotranspiration (ET) is an important hydrological process that can be studied and estimated at multiple spatial scales ranging from a leaf to a river basin. We present a review of methods in estimating basin scale ET and its applications in understanding basin water balance dynamics. The review focuses on two aspects of ET: (i) how the basin scale water balance approach is used to estimate ET; and (ii) how ‘direct’ measurement and modelling approaches are used to estimate basin scale ET. Obviously, the basin water balance-based ET requires the availability of good precipitation and discharge data to calculate ET as a residual on longer time scales (annual) where net storage changes are assumed to be negligible. ET estimated from such a basin water balance principle is generally used for validating the performance of ET models. On the other hand, many of the direct estimation methods involve the use of remotely sensed data to estimate spatially explicit ET and use basin-wide averaging to estimate basin scale ET. The direct methods can be grouped into soil moisture balance modelling, satellite-based vegetation index methods, and methods based on satellite land surface temperature measurements that convert potential ET into actual ET using a proportionality relationship. The review also includes the use of complementary ET estimation principles for large area applications. The review identifies the need to compare and evaluate the different ET approaches using standard data sets in basins covering different hydro-climatic regions of the world.
Modified slanted-edge method and multidirectional modulation transfer function estimation.
Masaoka, Kenichiro; Yamashita, Takayuki; Nishida, Yukihiro; Sugawara, Masayuki
2014-03-10
The slanted-edge method specified in ISO Standard 12233, which measures the modulation transfer function (MTF) by analyzing an image of a slightly slanted knife-edge target, is not robust against noise because it takes the derivative of each data line in the edge-angle estimation. We propose here a modified method that estimates the edge angle by fitting a two-dimensional function to the image data. The method has a higher accuracy, precision, and robustness against noise than the ISO 12233 method and is applicable to any arbitrary pixel array, enabling a multidirectional MTF estimate in a single measurement of a starburst image. PMID:24663939
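For context, a sketch of the classic slanted-edge pipeline on synthetic data (the paper's contribution modifies the edge-angle estimation step, which is not reproduced here; the edge angle, blur, and oversampling factor below are illustrative):

```python
import numpy as np

# Synthetic slanted edge: 5-degree slant, soft transition ~ system blur.
h, w, theta = 100, 100, np.deg2rad(5.0)
yy, xx = np.mgrid[0:h, 0:w].astype(float)
dist = (xx - w / 2) * np.cos(theta) - (yy - h / 2) * np.sin(theta)
img = 1.0 / (1.0 + np.exp(-dist / 1.2))

# Project every pixel onto the edge normal and bin at 4x oversampling
# to build the edge-spread function (ESF).
bins = np.round(dist * 4).astype(int)
bins -= bins.min()
counts = np.bincount(bins.ravel())
esf = np.bincount(bins.ravel(), weights=img.ravel()) / np.maximum(counts, 1)

lsf = np.gradient(esf)                 # line-spread function
lsf *= np.hanning(len(lsf))            # window against noise
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                          # normalise to DC
freq = np.fft.rfftfreq(len(lsf), d=0.25)   # cycles/pixel (4x bins)
print(freq[:5], mtf[:5])
```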
An Efficient Acoustic Density Estimation Method with Human Detectors Applied to Gibbons in Cambodia
Kidney, Darren; Rawson, Benjamin M.; Borchers, David L.; Stevenson, Ben C.; Marques, Tiago A.; Thomas, Len
2016-01-01
Some animal species are hard to see but easy to hear. Standard visual methods for estimating population density for such species are often ineffective or inefficient, but methods based on passive acoustics show more promise. We develop spatially explicit capture-recapture (SECR) methods for territorial vocalising species, in which humans act as an acoustic detector array. We use SECR and estimated bearing data from a single-occasion acoustic survey of a gibbon population in northeastern Cambodia to estimate the density of calling groups. The properties of the estimator are assessed using a simulation study, in which a variety of survey designs are also investigated. We then present a new form of the SECR likelihood for multi-occasion data which accounts for the stochastic availability of animals. In the context of gibbon surveys this allows model-based estimation of the proportion of groups that produce territorial vocalisations on a given day, thereby enabling the density of groups, instead of the density of calling groups, to be estimated. We illustrate the performance of this new estimator by simulation. We show that it is possible to estimate density reliably from human acoustic detections of visually cryptic species using SECR methods. For gibbon surveys we also show that incorporating observers' estimates of bearings to detected groups substantially improves estimator performance. Using the new form of the SECR likelihood we demonstrate that estimates of availability, in addition to population density and detection function parameters, can be obtained from multi-occasion data, and that the detection function parameters are not confounded with the availability parameter. This acoustic SECR method provides a means of obtaining reliable density estimates for territorial vocalising species. It is also efficient in terms of data requirements since it only requires routine survey data. We anticipate that the low-tech field requirements will make this method
Parameter estimation of analog circuits based on the fractional wavelet method
NASA Astrophysics Data System (ADS)
Yong, Deng; He, Zhang
2015-03-01
Aiming at the problem of parameter estimation in analog circuits, a new approach is proposed. The approach is based on the fractional wavelet and derives the Volterra series model of the circuit under test (CUT). Using a gradient search algorithm on the Volterra model, the unknown parameters of the CUT are estimated and the Volterra model is identified. Simulations show that the parameter estimation results of the proposed method are better than those of other parameter estimation methods. Project supported by the Key Research Project of Sichuan Provincial Department of Education, China (No. 13ZA0186).
Methods to estimate the between-study variance and its uncertainty in meta-analysis
Jackson, Dan; Viechtbauer, Wolfgang; Bender, Ralf; Bowden, Jack; Knapp, Guido; Kuss, Oliver; Higgins, Julian PT; Langan, Dean; Salanti, Georgia
2015-01-01
Meta-analyses are typically used to estimate the overall mean of an outcome of interest. However, inference about between-study variability, which is typically modelled using a between-study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between-study variance, has been long challenged. Our aim is to identify known methods for estimation of the between-study variance and its corresponding uncertainty, and to summarise the simulation and empirical evidence that compares them. We identified 16 estimators for the between-study variance, seven methods to calculate confidence intervals, and several comparative studies. Simulation studies suggest that for both dichotomous and continuous data the estimator proposed by Paule and Mandel and for continuous data the restricted maximum likelihood estimator are better alternatives to estimate the between-study variance. Based on the scenarios and results presented in the published studies, we recommend the Q-profile method and the alternative approach based on a 'generalised Cochran between-study variance statistic' to compute corresponding confidence intervals around the resulting estimates. Our recommendations are based on a qualitative evaluation of the existing literature and expert consensus. Evidence-based recommendations require an extensive simulation study where all methods would be compared under the same scenarios. © 2015 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd. PMID:26332144
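For concreteness, the DerSimonian and Laird moment estimator mentioned above is short enough to state in full; a minimal sketch with illustrative data:

```python
import numpy as np

def dersimonian_laird_tau2(y, v):
    """DerSimonian-Laird moment estimator of the between-study variance
    tau^2, from study effects y and within-study variances v."""
    w = 1.0 / v
    ybar = np.sum(w * y) / np.sum(w)            # fixed-effect mean
    q = np.sum(w * (y - ybar) ** 2)             # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (q - (len(y) - 1)) / c)     # truncated at zero

# Illustrative meta-analysis: five log odds ratios and their variances.
y = np.array([0.10, 0.35, -0.05, 0.52, 0.20])
v = np.array([0.04, 0.09, 0.05, 0.12, 0.07])
print(dersimonian_laird_tau2(y, v))
```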
NASA Astrophysics Data System (ADS)
Aslan, Serdar; Taylan Cemgil, Ali; Akın, Ata
2016-08-01
Objective. In this paper, we aimed for the robust estimation of the parameters and states of the hemodynamic model by using the blood oxygen level dependent signal. Approach. In the fMRI literature, there are only a few successful methods that are able to make a joint estimation of the states and parameters of the hemodynamic model. In this paper, we implemented a maximum likelihood based method called the particle smoother expectation maximization (PSEM) algorithm for the joint state and parameter estimation. Main results. Former sequential Monte Carlo methods were only reliable in the hemodynamic state estimates. They were claimed to outperform the local linearization (LL) filter and the extended Kalman filter (EKF). The PSEM algorithm is compared with the most successful method, the square-root cubature Kalman smoother (SCKS), for both state and parameter estimation. SCKS was found to be better than the dynamic expectation maximization (DEM) algorithm, which was shown to be a better estimator than the EKF, LL and particle filters. Significance. PSEM was more accurate than SCKS for both the state and the parameter estimation. Hence, PSEM seems to be the most accurate method for system identification and state estimation in the hemodynamic model inversion literature. This paper does not compare its results with the Tikhonov-regularized Newton CKF (TNF-CKF), a recent robust method which works in the filtering sense.
Loss Factor Estimation Using the Impulse Response Decay Method on a Stiffened Structure
NASA Technical Reports Server (NTRS)
Cabell, Randolph; Schiller, Noah; Allen, Albert; Moeller, Mark
2009-01-01
High-frequency vibroacoustic modeling is typically performed using energy-based techniques such as Statistical Energy Analysis (SEA). Energy models require an estimate of the internal damping loss factor. Unfortunately, the loss factor is difficult to estimate analytically, and experimental methods such as the power injection method can require extensive measurements over the structure of interest. This paper discusses the implications of estimating damping loss factors using the impulse response decay method (IRDM) from a limited set of response measurements. An automated procedure for implementing IRDM is described and then evaluated using data from a finite element model of a stiffened, curved panel. Estimated loss factors are compared with loss factors computed using a power injection method and a manual curve fit. The paper discusses the sensitivity of the IRDM loss factor estimates to damping of connected subsystems and the number and location of points in the measurement ensemble.
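A minimal sketch of the IRDM pipeline on a synthetic single-band impulse response (band edges, filter order, decay-fit range, and the synthetic decay rate are illustrative assumptions): band-filter, integrate the squared response backwards (Schroeder curve), fit the initial decay slope, and convert to a loss factor via eta = 2.2/(fc*T60):

```python
import numpy as np
from scipy.signal import butter, sosfilt

FS = 8192
t = np.arange(0, 2.0, 1 / FS)

# Synthetic impulse response around 500 Hz with a known 8 dB/s decay.
h = np.sin(2 * np.pi * 500 * t) * 10 ** (-8.0 * t / 20)

def loss_factor_irdm(h, fs, f_lo, f_hi):
    """Impulse-response decay method: band-filter, Schroeder-integrate,
    fit the initial decay slope, convert slope to a loss factor."""
    sos = butter(4, [f_lo, f_hi], btype="band", fs=fs, output="sos")
    hb = sosfilt(sos, h)
    edc = np.cumsum(hb[::-1] ** 2)[::-1]        # Schroeder decay curve
    edc_db = 10 * np.log10(edc / edc[0])
    idx = np.where(edc_db > -20)[0]             # fit first 20 dB of decay
    slope = np.polyfit(idx / fs, edc_db[idx], 1)[0]   # dB/s (negative)
    fc = np.sqrt(f_lo * f_hi)                   # band centre frequency
    return -slope / (27.3 * fc)                 # eta = 2.2 / (fc * T60)

print(loss_factor_irdm(h, FS, 400, 630))        # ~5.8e-4 expected
```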
A biphasic parameter estimation method for quantitative analysis of dynamic renal scintigraphic data
NASA Astrophysics Data System (ADS)
Koh, T. S.; Zhang, Jeff L.; Ong, C. K.; Shuter, B.
2006-06-01
Dynamic renal scintigraphy is an established method in nuclear medicine, commonly used for the assessment of renal function. In this paper, a biphasic model fitting method is proposed for simultaneous estimation of both vascular and parenchymal parameters from renal scintigraphic data. These parameters include the renal plasma flow, vascular and parenchymal mean transit times, and the glomerular extraction rate. Monte Carlo simulation was used to evaluate the stability and confidence of the parameter estimates obtained by the proposed biphasic method, before applying the method on actual patient study cases to compare with the conventional fitting approach and other established renal indices. The various parameter estimates obtained using the proposed method were found to be consistent with the respective pathologies of the study cases. The renal plasma flow and extraction rate estimated by the proposed method were in good agreement with those previously obtained using dynamic computed tomography and magnetic resonance imaging.
A Comparison of Methods for Estimating Quadratic Effects in Nonlinear Structural Equation Models
Harring, Jeffrey R.; Weiss, Brandi A.; Hsu, Jui-Chen
2012-01-01
Two Monte Carlo simulations were performed to compare methods for estimating and testing hypotheses of quadratic effects in latent variable regression models. The methods considered in the current study were (a) a 2-stage moderated regression approach using latent variable scores, (b) an unconstrained product indicator approach, (c) a latent moderated structural equation method, (d) a fully Bayesian approach, and (e) marginal maximum likelihood estimation. Of the 5 estimation methods, it was found that overall the methods based on maximum likelihood estimation and the Bayesian approach performed best in terms of bias, root-mean-square error, standard error ratios, power, and Type I error control, although key differences were observed. Similarities as well as disparities among methods are highlighted and general recommendations articulated. As a point of comparison, all 5 approaches were fit to a reparameterized version of the latent quadratic model using educational reading data. PMID:22429193
Effect of spectral shape on acoustic fatigue life estimates
NASA Technical Reports Server (NTRS)
Miles, R. N.
1992-01-01
Methods for estimating fatigue life due to random loading are briefly reviewed. These methods include a probabilistic approach in which the expected value of the rate of damage accumulation is computed by integrating over the probability density of damaging events and a method which consists of analyzing the response time history to count damaging events. It is noted that it is necessary to employ a time domain approach to perform Rainflow counting, while simple peak counting may be accomplished using the probabilistic method. Data obtained indicate that Rainflow counting produces significantly different fatigue life predictions than other methods that are commonly used in acoustic fatigue predictions. When low-frequency oscillations are present in a signal along with high-frequency components, peak counting will produce substantially shorter fatigue lives than Rainflow counting. It is concluded that Rainflow counting is capable of providing reliable fatigue life predictions for acoustic fatigue studies.
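For reference, the probabilistic (peak-counting) estimate described above can be written compactly; this is the standard form under an assumed power-law S-N curve and Miner's rule, not an equation quoted from the paper:

```latex
% nu_p: peak occurrence rate; p(s): probability density of peak stress;
% N(s): cycles to failure from the S-N curve (Miner's rule).
E\!\left[\frac{dD}{dt}\right] = \nu_p \int_{0}^{\infty} \frac{p(s)}{N(s)}\,ds,
\qquad N(s) = C\,s^{-b}
```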
Using Resampling To Estimate the Precision of an Empirical Standard-Setting Method.
ERIC Educational Resources Information Center
Muijtjens, Arno M. M.; Kramer, Anneke W. M.; Kaufman, David M.; Van der Vleuten, Cees P. M.
2003-01-01
Developed a method to estimate the cutscore precisions for empirical standard-setting methods by using resampling. Illustrated the method with two actual datasets consisting of 86 Dutch medical residents and 155 Canadian medical students taking objective structured clinical examinations. Results show the applicability of the method. (SLD)
Novel and simple non-parametric methods of estimating the joint and marginal densities
NASA Astrophysics Data System (ADS)
Alghalith, Moawia
2016-07-01
We introduce very simple non-parametric methods that overcome key limitations of the existing literature on both the joint and marginal density estimation. In doing so, we do not assume any form of the marginal distribution or joint distribution a priori. Furthermore, our method circumvents the bandwidth selection problems. We compare our method to the kernel density method.
ERIC Educational Resources Information Center
Cui, Zhongmin; Kolen, Michael J.
2008-01-01
This article considers two methods of estimating standard errors of equipercentile equating: the parametric bootstrap method and the nonparametric bootstrap method. Using a simulation study, these two methods are compared under three sample sizes (300, 1,000, and 3,000), for two test content areas (the Iowa Tests of Basic Skills Maps and Diagrams…
NASA Astrophysics Data System (ADS)
Kwon, Ki-Won; Cho, Yongsoo
This letter presents a simple joint estimation method for residual frequency offset (RFO) and sampling frequency offset (STO) in OFDM-based digital video broadcasting (DVB) systems. The proposed method selects a continual pilot (CP) subset from an unsymmetrically and non-uniformly distributed CP set to obtain an unbiased estimator. Simulation results show that the proposed method using a properly selected CP subset is unbiased and performs robustly.
Parameters Estimation For A Patellofemoral Joint Of A Human Knee Using A Vector Method
NASA Astrophysics Data System (ADS)
Ciszkiewicz, A.; Knapczyk, J.
2015-08-01
Position and displacement analysis of a spherical model of a human knee joint using the vector method was presented. Sensitivity analysis and parameter estimation were performed using the evolutionary algorithm method. Computer simulations for the mechanism with estimated parameters proved the effectiveness of the prepared software. The method itself can be useful when solving problems concerning the displacement and loads analysis in the knee joint.
Parameters Estimation for the Spherical Model of the Human Knee Joint Using Vector Method
NASA Astrophysics Data System (ADS)
Ciszkiewicz, A.; Knapczyk, J.
2014-08-01
Position and displacement analysis of a spherical model of a human knee joint using the vector method is presented. Sensitivity analysis and parameter estimation were performed using an evolutionary algorithm. Computer simulations of the mechanism with the estimated parameters proved the effectiveness of the prepared software. The method itself can be useful for solving problems concerning displacement and load analysis in the knee joint.
Parameters estimation using the first passage times method in a jump-diffusion model
NASA Astrophysics Data System (ADS)
Khaldi, K.; Meddahi, S.
2016-06-01
This paper makes two contributions: (1) it presents a new method for estimating the parameters of a stochastic jump-diffusion process, namely the first passage time (FPT) method generalized to all passage times (GPT method); (2) using a time series of gold share prices, it compares the empirical estimates and forecasts obtained with the GPT method against those obtained with the method of moments and the FPT method applied to the Merton jump-diffusion (MJD) model.
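The GPT method itself cannot be reproduced from the abstract, but the moment-based baseline it is compared against can be sketched for the Merton model. The sketch below simulates MJD log-returns and recovers the drift and diffusion parameters from the first two moments, treating the jump law as known, which is a simplifying assumption.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate Merton jump-diffusion log-returns over steps of size dt:
# r = (mu - 0.5*sig^2)*dt + sig*sqrt(dt)*Z + compound-Poisson jumps,
# jump count ~ Poisson(lam*dt), jump sizes ~ N(gj, dj^2).
mu, sig, lam, gj, dj, dt, n = 0.08, 0.2, 1.0, -0.05, 0.1, 1 / 252, 100000
z = rng.normal(size=n)
nj = rng.poisson(lam * dt, size=n)
# At this lam*dt multi-jump steps are rare; approximate them crudely.
jumps = rng.normal(gj, dj, size=n) * nj
r = (mu - 0.5 * sig**2) * dt + sig * np.sqrt(dt) * z + jumps

# Method of moments with the jump law (lam, gj, dj) treated as known:
# E[r]   = (mu - 0.5*sig^2)*dt + lam*gj*dt
# Var[r] = sig^2*dt + lam*(gj^2 + dj^2)*dt
m1, m2 = r.mean(), r.var()
sig_hat = np.sqrt(max(m2 / dt - lam * (gj**2 + dj**2), 1e-12))
mu_hat = m1 / dt + 0.5 * sig_hat**2 - lam * gj
print(f"mu_hat={mu_hat:.3f}, sig_hat={sig_hat:.3f}")
```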
A comparison of de-noising methods for differential phase shift and associated rainfall estimation
NASA Astrophysics Data System (ADS)
Hu, Zhiqun; Liu, Liping; Wu, Linlin; Wei, Qing
2015-04-01
Measured differential phase shift ΦDP is known to be a noisy, unstable polarimetric radar variable, so the quality of ΦDP data has a direct impact on the estimation of specific differential phase KDP and, subsequently, on KDP-based rainfall estimation. Over the past decades, many ΦDP de-noising methods have been developed; however, the de-noising effects of these methods and their impact on KDP-based rainfall estimation lack comprehensive comparative analysis. In this study, simulated noisy ΦDP data were generated and de-noised using several methods, including finite-impulse response (FIR), Kalman, wavelet, traditional mean, and median filters. The biases were compared between KDP from simulated and observed ΦDP radial profiles after de-noising by these methods. The results suggest that the more sophisticated FIR, Kalman, and wavelet methods have a better de-noising effect than the traditional methods. After ΦDP was de-noised, the accuracy of KDP-based rainfall estimation increased significantly, based on the analysis of three actual rainfall events. The improvement in estimation was more obvious when KDP was estimated with ΦDP de-noised by the Kalman, FIR, and wavelet methods when the average rainfall was heavier than 5 mm h−1. However, the improvement was not significant when the precipitation intensity increased further, to rainfall rates beyond 10 mm h−1. The performance of the wavelet analysis was found to be the most stable of these filters.
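A reduced version of this comparison is straightforward to set up: de-noise a synthetic ΦDP radial profile with two of the simpler filters and compare the resulting KDP, computed as half the range derivative of ΦDP. A Savitzky-Golay filter stands in for the paper's FIR filter; the Kalman and wavelet filters are omitted, and the profile is synthetic.

```python
import numpy as np
from scipy.signal import medfilt, savgol_filter

rng = np.random.default_rng(5)

# Synthetic PhiDP radial profile (degrees) with additive noise.
r = np.arange(0.0, 60.0, 0.25)                  # range gates, km
phidp_true = 2.0 * np.cumsum(np.exp(-((r - 30) / 10) ** 2)) * 0.25
phidp = phidp_true + rng.normal(scale=2.0, size=r.size)

smoothed = {
    "median": medfilt(phidp, kernel_size=9),
    "FIR (Savitzky-Golay)": savgol_filter(phidp, window_length=21,
                                          polyorder=2),
}

kdp_true = 0.5 * np.gradient(phidp_true, r)     # KDP = 0.5 * dPhiDP/dr
for name, ph in smoothed.items():
    kdp = 0.5 * np.gradient(ph, r)
    err = np.abs(kdp - kdp_true).mean()
    print(f"{name:22s} mean |KDP error| = {err:.3f} deg/km")
```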
An Empirical Comparison of Tree-Based Methods for Propensity Score Estimation
Watkins, Stephanie; Jonsson-Funk, Michele; Brookhart, M Alan; Rosenberg, Steven A; O'Shea, T Michael; Daniels, Julie
2013-01-01
Objective To illustrate the use of ensemble tree-based methods (random forest classification [RFC] and bagging) for propensity score estimation and to compare these methods with logistic regression, in the context of evaluating the effect of physical and occupational therapy on preschool motor ability among very low birth weight (VLBW) children. Data Source We used secondary data from the Early Childhood Longitudinal Study Birth Cohort (ECLS-B) between 2001 and 2006. Study Design We estimated the predicted probability of treatment using tree-based methods and logistic regression (LR). We then modeled the exposure-outcome relation using weighted LR models while considering covariate balance and precision for each propensity score estimation method. Principal Findings Among approximately 500 VLBW children, therapy receipt was associated with moderately improved preschool motor ability. Overall, ensemble methods produced the best covariate balance (Mean Squared Difference: 0.03–0.07) and the most precise effect estimates compared to LR (Mean Squared Difference: 0.11). The overall magnitude of the effect estimates was similar between the RFC and LR estimation methods. Conclusion Propensity score estimation using RFC and bagging produced better covariate balance with increased precision compared to LR. Ensemble methods are a useful alternative to logistic regression for controlling confounding in observational studies. PMID:23701015
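A compact sketch of the comparison on synthetic confounded data: estimate propensity scores with logistic regression and with a random forest, form inverse-probability-of-treatment weights, and compare the weighted effect estimates. The data-generating model and the forest hyperparameters are illustrative assumptions; the bagging variant from the paper is omitted.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)

# Synthetic confounded data: X affects both treatment and outcome.
n = 500
X = rng.normal(size=(n, 4))
p_treat = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1])))
t = rng.binomial(1, p_treat)
y = 1.0 * t + X[:, 0] + rng.normal(size=n)       # true effect = 1.0

for name, model in [("LR", LogisticRegression(max_iter=1000)),
                    ("RFC", RandomForestClassifier(n_estimators=200,
                                                   min_samples_leaf=20,
                                                   random_state=0))]:
    ps = model.fit(X, t).predict_proba(X)[:, 1].clip(0.01, 0.99)
    w = t / ps + (1 - t) / (1 - ps)              # IPT weights
    effect = (np.average(y[t == 1], weights=w[t == 1])
              - np.average(y[t == 0], weights=w[t == 0]))
    print(f"{name}: weighted effect estimate = {effect:.2f}")
```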
Methods for estimating monthly streamflow characteristics at ungaged sites in western Montana
Parrett, Charles; Cartier, Kenn D.
1989-01-01
Three methods were developed for estimating monthly streamflow characteristics in western Montana. The first method, based on multiple-regression equations, relates monthly streamflow characteristics to various basin and climatic variables. Standard errors range from 43 to 107%. The equations are generally not applicable to streams that receive or lose water as a result of geology or that have appreciable upstream storage or diversions. The second method, also based on regression equations, relates monthly streamflow characteristics to channel width. Standard errors range from 41 to 111%. The equations are generally not applicable to streams with exposed bedrock, with braided or sand channels, or with recent alterations. The third method requires 12 once-monthly streamflow measurements at an ungaged site. The measurements are then correlated with concurrent flows at a nearby gaged site, and the resulting relation is used to estimate the required monthly streamflow characteristic at the ungaged site. Standard errors range from 19 to 92%. Although generally substantially more reliable than the first or second method, this method may be unreliable if the measurement site and the gage site are not hydrologically similar. A procedure for weighting individual estimates, based on the variance and degree of independence of the individual estimating methods, was also developed. Standard errors range from 15 to 43% when all three methods are used. The weighted-average estimates from all three methods are generally substantially more reliable than any of the individual estimates. (USGS)
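The weighting procedure lends itself to a short sketch: combine the three methods' estimates with weights inversely proportional to their variances, which is the minimum-variance combination when the estimates are independent. The report's adjustment for partial dependence among methods is not reproduced here, and all numbers below are hypothetical.

```python
import numpy as np

def weighted_estimate(estimates, variances):
    """Combine independent estimates with weights inversely
    proportional to their variances (minimum-variance combination)."""
    w = 1.0 / np.asarray(variances, dtype=float)
    est = np.sum(w * estimates) / np.sum(w)
    var = 1.0 / np.sum(w)                       # assumes independence
    return est, var

# e.g. monthly-flow estimates (m^3/s) from the three methods, with
# variances implied by their reported standard errors (hypothetical).
est, var = weighted_estimate([12.0, 9.5, 10.8], [9.0, 10.2, 2.3])
print(f"weighted estimate: {est:.2f} m^3/s (SE {var**0.5:.2f})")
```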
Methods to estimate the between-study variance and its uncertainty in meta-analysis.
Veroniki, Areti Angeliki; Jackson, Dan; Viechtbauer, Wolfgang; Bender, Ralf; Bowden, Jack; Knapp, Guido; Kuss, Oliver; Higgins, Julian P T; Langan, Dean; Salanti, Georgia
2016-03-01
Meta-analyses are typically used to estimate the overall mean of an outcome of interest. However, inference about between-study variability, which is typically modelled using a between-study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between-study variance, has long been challenged. Our aim is to identify known methods for estimating the between-study variance and its corresponding uncertainty, and to summarise the simulation and empirical evidence that compares them. We identified 16 estimators for the between-study variance, seven methods to calculate confidence intervals, and several comparative studies. Simulation studies suggest that the estimator proposed by Paule and Mandel (for both dichotomous and continuous data) and the restricted maximum likelihood estimator (for continuous data) are better alternatives for estimating the between-study variance. Based on the scenarios and results presented in the published studies, we recommend the Q-profile method and the alternative approach based on a 'generalised Cochran between-study variance statistic' to compute the corresponding confidence intervals around the resulting estimates. Our recommendations are based on a qualitative evaluation of the existing literature and expert consensus. Evidence-based recommendations would require an extensive simulation study in which all methods are compared under the same scenarios. PMID:26332144
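For reference, the long-challenged default mentioned above, the DerSimonian-Laird moment estimator, takes only a few lines; the alternatives the review favours (Paule-Mandel, REML, Q-profile intervals) are iterative and are not reproduced here. The data in the example are hypothetical log odds ratios.

```python
import numpy as np

def dersimonian_laird(y, v):
    """DerSimonian-Laird moment estimator of the between-study
    variance tau^2, given study effects y and within-study variances v."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v                                  # fixed-effect weights
    ybar = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - ybar) ** 2)              # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (Q - (len(y) - 1)) / c)      # truncate at zero

# Hypothetical meta-analysis: log odds ratios and their variances.
tau2 = dersimonian_laird([0.3, 0.1, 0.5, -0.1, 0.4],
                         [0.04, 0.09, 0.05, 0.12, 0.06])
print(f"DL estimate of tau^2: {tau2:.4f}")
```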
Method and system to estimate variables in an integrated gasification combined cycle (IGCC) plant
Kumar, Aditya; Shi, Ruijie; Dokucu, Mustafa
2013-09-17
System and method to estimate variables in an integrated gasification combined cycle (IGCC) plant are provided. The system includes a sensor suite to measure respective plant input and output variables. An extended Kalman filter (EKF) receives sensed plant input variables and includes a dynamic model to generate a plurality of plant state estimates and a covariance matrix for the state estimates. A preemptive-constraining processor is configured to preemptively constrain the state estimates and covariance matrix to be free of constraint violations. A measurement-correction processor may be configured to correct constrained state estimates and a constrained covariance matrix based on processing of sensed plant output variables. The measurement-correction processor is coupled to update the dynamic model with corrected state estimates and a corrected covariance matrix. The updated dynamic model may be configured to estimate values for at least one plant variable not originally sensed by the sensor suite.
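A minimal sketch of the pattern the patent describes, under heavy simplification: a linearized Kalman measurement update followed by a constraining step. Here the constraint is plain clipping of the state to physical bounds; the patent's preemptive constraining of both the state estimates and the covariance matrix is more elaborate, and all matrices below are hypothetical.

```python
import numpy as np

def constrained_ekf_update(x, P, z, H, R, lo, hi):
    """One linear(ized) measurement update followed by a simple
    state-constraining step (clipping to physical bounds)."""
    S = H @ P @ H.T + R                          # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x = x + K @ (z - H @ x)                      # corrected state
    P = (np.eye(len(x)) - K @ H) @ P             # corrected covariance
    x = np.clip(x, lo, hi)                       # enforce bounds
    return x, P

x, P = np.array([0.5, 2.0]), np.eye(2) * 0.3     # prior state, covariance
z = np.array([1.2])                              # one sensed output
H, R = np.array([[1.0, 0.0]]), np.array([[0.05]])
x, P = constrained_ekf_update(x, P, z, H, R, lo=[0.0, 0.0], hi=[1.0, 5.0])
print("constrained estimate:", x)
```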
New Method for Estimation of Aeolian Sand Transport Rate Using Ceramic Sand Flux Sensor (UD-101)
Udo, Keiko
2009-01-01
In this study, a new method for estimating the aeolian sand transport rate was developed; the method employs a ceramic sand flux sensor (UD-101), which detects wind-blown sand impacting on its surface. The method was devised by considering the results of wind tunnel experiments performed using a vertical sediment trap together with the UD-101. Field measurements to evaluate the estimation accuracy under unsteady winds were performed on a flat backshore. The results showed that aeolian sand transport rates estimated using the developed method were of the same order as those estimated using the existing method for high transport rates, i.e., for transport rates greater than 0.01 kg m−1 s−1. PMID:22291553
Spatial Statistics Preserving Interpolation Methods for Estimation of Missing Precipitation Data
NASA Astrophysics Data System (ADS)
El Sharif, H.; Teegavarapu, R. S.
2011-12-01
Spatial interpolation methods used for estimation of missing precipitation data at a site seldom check for their ability to preserve site and regional statistics. Such statistics are primarily defined by spatial correlations and other site-to-site statistics in a region. Preservation of site and regional statistics represents a means of assessing the validity of missing precipitation estimates at a site. This study will evaluate the efficacy of traditional deterministic and stochastic interpolation methods aimed at estimation of missing data in preserving site and regional statistics. New optimal spatial interpolation methods that are intended to preserve these statistics are also proposed and evaluated in this study. Rain gauge sites in the state of Kentucky, USA, are used as a case study for evaluation of existing and newly proposed methods. Several error and performance measures will be used to evaluate the methods and trade-offs in accuracy of estimation and preservation of site and regional statistics.
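One of the traditional deterministic baselines the study evaluates can be sketched directly: inverse-distance weighting of concurrent observations at neighboring gauges. The coordinates, rainfall values, and power parameter below are hypothetical; the statistic-preserving optimal methods proposed in the abstract are not reproduced.

```python
import numpy as np

def idw_estimate(target_xy, gauge_xy, gauge_p, power=2.0):
    """Inverse-distance-weighted estimate of precipitation at a site
    with a missing record, from concurrent values at other gauges."""
    d = np.linalg.norm(np.asarray(gauge_xy, float) - np.asarray(target_xy),
                       axis=1)
    w = 1.0 / np.maximum(d, 1e-9) ** power       # guard zero distance
    return np.sum(w * gauge_p) / np.sum(w)

# Hypothetical gauge coordinates (km) and same-day rainfall (mm).
p_hat = idw_estimate((10.0, 12.0),
                     [(0, 0), (25, 5), (12, 30), (8, 15)],
                     np.array([4.2, 6.0, 3.1, 5.5]))
print(f"estimated missing value: {p_hat:.2f} mm")
```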
A New Formulation of the Filter-Error Method for Aerodynamic Parameter Estimation in Turbulence
NASA Technical Reports Server (NTRS)
Grauer, Jared A.; Morelli, Eugene A.
2015-01-01
A new formulation of the filter-error method for estimating aerodynamic parameters in nonlinear aircraft dynamic models during turbulence was developed and demonstrated. The approach uses an estimate of the measurement noise covariance to identify the model parameters, their uncertainties, and the process noise covariance, in a relaxation method analogous to the output-error method. Prior information on the model parameters and uncertainties can be supplied, and a post-estimation correction to the uncertainty was included to account for colored residuals not considered in the theory. No tuning parameters requiring adjustment by the analyst are used in the estimation. The method was demonstrated in simulation using the NASA Generic Transport Model, then applied to flight data from the subscale T-2 jet-engine transport aircraft. Modeling results at different levels of turbulence were compared with results from time-domain output-error and frequency-domain equation-error methods to demonstrate the effectiveness of the approach.
Simplified Estimating Method for Shock Response Spectrum Envelope of V-Band Clamp Separation Shock
NASA Astrophysics Data System (ADS)
Iwasa, Takashi; Shi, Qinzhong
A simplified method for estimating the Shock Response Spectrum (SRS) envelope at the spacecraft interface near the V-band clamp separation device has been established. The simplified method is based on the pyroshock analysis method with a single-degree-of-freedom (DOF) model proposed in our previous paper. The only parameters required by the estimating method are geometrical information about the interface and the tension of the V-band clamp. Using these parameters, a simplified calculation of the SRS magnitude at the knee frequency is newly proposed. By comparing the estimation results with actual pyroshock test results, it was verified that the SRS envelope estimated with the simplified method appropriately covered the pyroshock test data of actual space satellite systems, except for some specific high-frequency responses.
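For readers unfamiliar with the quantity being enveloped, the sketch below computes a pseudo-acceleration SRS numerically: the peak response of a base-excited single-DOF oscillator, swept over natural frequencies. The decaying-sinusoid input and Q = 10 damping are illustrative assumptions; the paper's closed-form envelope from clamp tension and interface geometry is not reproduced.

```python
import numpy as np
from scipy.signal import lsim

def srs(accel, t, freqs, Q=10.0):
    """Pseudo-acceleration shock response spectrum: peak response of a
    base-excited single-DOF oscillator, swept over natural frequencies."""
    zeta = 1.0 / (2.0 * Q)
    out = []
    for fn in freqs:
        wn = 2 * np.pi * fn
        # Relative displacement z: z'' + 2*zeta*wn*z' + wn^2*z = -accel
        _, z, _ = lsim(([-1.0], [1.0, 2 * zeta * wn, wn ** 2]), accel, t)
        out.append(wn ** 2 * np.max(np.abs(z)))   # pseudo-acceleration
    return np.array(out)

# Synthetic separation-shock-like input: a decaying sinusoid (m/s^2).
t = np.linspace(0.0, 0.1, 10001)
accel = 500 * np.exp(-t / 0.01) * np.sin(2 * np.pi * 2000 * t)
freqs = np.logspace(2, 4, 30)                     # 100 Hz .. 10 kHz
print(srs(accel, t, freqs)[:5])
```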
A comparison of the methods for objective strain estimation from the Fry plots
NASA Astrophysics Data System (ADS)
Kumar, Rajan; Srivastava, Deepak C.; Ojha, Arun K.
2014-06-01
The Fry method is a graphical technique that displays the strain ellipse as a central vacancy in a point distribution, the Fry plot. For objective strain estimation from the Fry plot, the central vacancy must appear as a sharply focused ellipse. In practice, however, the diffuse appearance of the central vacancy in Fry plots introduces considerable subjectivity into direct strain estimation. Several alternative computer-based methods have recently been proposed for objective strain estimation from Fry plots. The relative merits and limitations of these methods are, however, not yet well understood.
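Constructing the Fry plot itself is simple and worth showing: translate every point pair to a common origin and collect the difference vectors. With a genuinely anticlustered distribution of grain centers the central vacancy approximates the strain ellipse; the uniform random points below merely exercise the code and give only a weak vacancy.

```python
import numpy as np

def fry_points(xy):
    """All pairwise difference vectors, translated to a common origin:
    plotting these produces the Fry diagram with its central vacancy."""
    xy = np.asarray(xy)
    diffs = xy[:, None, :] - xy[None, :, :]      # (n, n, 2) translations
    mask = ~np.eye(len(xy), dtype=bool)          # drop self-pairs
    return diffs[mask]

rng = np.random.default_rng(7)
pts = rng.uniform(0, 100, size=(300, 2))
pts[:, 0] *= 1.5                                 # impose ~1.5:1 strain
fry = fry_points(pts)
print(fry.shape)                                 # (n*(n-1), 2) plot points
```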
NASA Technical Reports Server (NTRS)
Campbell, John P; Mckinney, Marion O
1952-01-01
A summary of methods for making dynamic lateral stability and response calculations and for estimating the aerodynamic stability derivatives required for use in these calculations is presented. The processes of performing calculations of the time histories of lateral motions, of the period and damping of these motions, and of the lateral stability boundaries are presented as a series of simple straightforward steps. Existing methods for estimating the stability derivatives are summarized and, in some cases, simple new empirical formulas are presented. Detailed estimation methods are presented for low-subsonic-speed conditions but only a brief discussion and a list of references are given for transonic and supersonic speed conditions.
NASA Astrophysics Data System (ADS)
Tao, Shanshan; Dong, Sheng; Wang, Zhifeng; Jiang, Wensheng
2016-06-01
The maximum entropy distribution, which encompasses various recognized theoretical distributions as special cases, is well suited to estimating the design thickness of sea ice. The method of moments and the empirical curve-fitting method are commonly used to estimate the parameters of the maximum entropy distribution. In this study, we propose the particle swarm optimization method as a new parameter estimation method for the maximum entropy distribution, which has the advantage of avoiding the deviations introduced by the simplifications made in other methods. We conducted a case study fitting the hindcast thickness of sea ice in the Liaodong Bay of the Bohai Sea using these three parameter estimation methods. All methods implemented in this study pass the K-S test at the 0.05 significance level. In terms of the average sum of squared deviations, the empirical curve-fitting method provides the best fit to the original data, while the method of moments provides the worst. Among the three methods, the particle swarm optimization method predicts the largest sea ice thickness for the same return period. We therefore recommend the particle swarm optimization method for offshore structures mainly influenced by sea ice in winter, but the empirical curve-fitting method, to reduce cost, for the design of temporary and economical buildings.
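A small self-contained particle swarm optimizer illustrates the proposed estimation route: choose parameters that minimize the squared deviation between the model CDF and the empirical CDF. A two-parameter Weibull stands in for the maximum entropy distribution, and the swarm constants are conventional defaults, so the sketch shows only the mechanics, not the paper's exact fit.

```python
import numpy as np

rng = np.random.default_rng(8)

def pso(cost, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer over box bounds."""
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n_particles, len(bounds)))
    v = np.zeros_like(x)
    pbest, pcost = x.copy(), np.array([cost(p) for p in x])
    g = pbest[np.argmin(pcost)]
    for _ in range(iters):
        v = (w * v + c1 * rng.random(x.shape) * (pbest - x)
                   + c2 * rng.random(x.shape) * (g - x))
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
        g = pbest[np.argmin(pcost)]
    return g

# Fit a Weibull CDF (stand-in for the maximum entropy distribution) to
# synthetic ice-thickness data by minimizing squared CDF deviations.
data = np.sort(rng.weibull(2.0, 200) * 0.8)       # thickness, m
ecdf = (np.arange(1, data.size + 1) - 0.5) / data.size
cost = lambda p: np.sum((1 - np.exp(-(data / p[1]) ** p[0]) - ecdf) ** 2)
k, lam = pso(cost, [(0.5, 5.0), (0.1, 3.0)])
print(f"PSO fit: shape={k:.2f}, scale={lam:.2f}")
```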
Fast 2D DOA Estimation Algorithm by an Array Manifold Matching Method with Parallel Linear Arrays.
Yang, Lisheng; Liu, Sheng; Li, Dong; Jiang, Qingping; Cao, Hailin
2016-01-01
In this paper, the problem of two-dimensional (2D) direction-of-arrival (DOA) estimation with parallel linear arrays is addressed. Two array manifold matching (AMM) approaches are developed in this work, for incoherent and coherent signals, respectively. The proposed AMM methods estimate the azimuth angle only, under the assumption that the elevation angles are known or already estimated. The proposed methods are time efficient since they require neither eigenvalue decomposition (EVD) nor peak searching. In addition, the complexity analysis shows that the proposed AMM approaches have lower computational complexity than many current state-of-the-art algorithms. The estimated azimuth angles produced by the AMM approaches are automatically paired with the elevation angles. More importantly, when estimating the azimuth angles of coherent signals, the aperture loss issue is avoided since a decorrelation procedure is not required for the proposed AMM method. Numerical studies demonstrate the effectiveness of the proposed approaches. PMID:26907301