Science.gov

Sample records for accurate point estimates

  1. Regularization Based Iterative Point Match Weighting for Accurate Rigid Transformation Estimation.

    PubMed

    Liu, Yonghuai; De Dominicis, Luigi; Wei, Baogang; Chen, Liang; Martin, Ralph R

    2015-09-01

    Feature extraction and matching (FEM) for 3D shapes finds numerous applications in computer graphics and vision for object modeling, retrieval, morphing, and recognition. However, unavoidable incorrect matches lead to inaccurate estimation of the transformation relating different datasets. Inspired by AdaBoost, this paper proposes a novel iterative re-weighting method to tackle the challenging problem of evaluating point matches established by typical FEM methods. Weights are used to indicate the degree of belief that each point match is correct. Our method has three key steps: (i) estimation of the underlying transformation using weighted least squares, (ii) penalty parameter estimation via minimization of the weighted variance of the matching errors, and (iii) weight re-estimation taking into account both matching errors and information learnt in previous iterations. A comparative study, based on real shapes captured by two laser scanners, shows that the proposed method outperforms four other state-of-the-art methods in terms of evaluating point matches between overlapping shapes established by two typical FEM methods, resulting in more accurate estimates of the underlying transformation. This improved transformation can be used to better initialize the iterative closest point algorithm and its variants, making 3D shape registration more likely to succeed. PMID:26357287
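
    As a concrete illustration of step (i), the following is a minimal weighted least-squares rigid-transformation sketch in Python/NumPy (the closed-form Kabsch-style solution; the Gaussian re-weighting rule at the end is illustrative, not the paper's exact update):

      import numpy as np

      def weighted_rigid_transform(P, Q, w):
          # Minimize sum_i w_i * ||R @ P[i] + t - Q[i]||^2 over rotations R
          # and translations t (closed-form weighted Kabsch solution).
          w = w / w.sum()
          p_bar, q_bar = w @ P, w @ Q                 # weighted centroids
          H = ((P - p_bar) * w[:, None]).T @ (Q - q_bar)
          U, _, Vt = np.linalg.svd(H)
          d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
          R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
          return R, q_bar - R @ p_bar

      def reweight(P, Q, R, t, sigma):
          # Down-weight matches with large residuals; a Gaussian kernel stands
          # in for the paper's learned, history-aware weight update.
          r = np.linalg.norm(P @ R.T + t - Q, axis=1)
          return np.exp(-(r / sigma) ** 2)

    Alternating the two functions for a few iterations mirrors the estimate/re-weight loop described in the abstract.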

  2. Simple, fast and accurate eight points amplitude estimation method of sinusoidal signals for DSP based instrumentation

    NASA Astrophysics Data System (ADS)

    Vizireanu, D. N.; Halunga, S. V.

    2012-04-01

    A simple, fast and accurate amplitude estimation algorithm for sinusoidal signals in DSP-based instrumentation is proposed. It is shown that eight samples, used in two steps, are sufficient. A practical analytical formula for amplitude estimation is obtained. Numerical results are presented. Simulations have been performed for the cases in which the sampled signal is affected by white Gaussian noise and in which the samples are quantized to a given number of bits.

  3. Estimation method of point spread function based on Kalman filter for accurately evaluating real optical properties of photonic crystal fibers.

    PubMed

    Shen, Yan; Lou, Shuqin; Wang, Xin

    2014-03-20

    The evaluation accuracy of real optical properties of photonic crystal fibers (PCFs) is determined by the accurate extraction of air hole edges from microscope images of cross sections of practical PCFs. A novel estimation method of point spread function (PSF) based on Kalman filter is presented to rebuild the micrograph image of the PCF cross-section and thus evaluate real optical properties for practical PCFs. Through tests on both artificially degraded images and microscope images of cross sections of practical PCFs, we prove that the proposed method can achieve more accurate PSF estimation and lower PSF variance than the traditional Bayesian estimation method, and thus also reduce the defocus effect. With this method, we rebuild the microscope images of two kinds of commercial PCFs produced by Crystal Fiber and analyze the real optical properties of these PCFs. Numerical results are in accord with the product parameters. PMID:24663461

  4. Precision Pointing Control to and Accurate Target Estimation of a Non-Cooperative Vehicle

    NASA Technical Reports Server (NTRS)

    VanEepoel, John; Thienel, Julie; Sanner, Robert M.

    2006-01-01

    In 2004, NASA began investigating a robotic servicing mission for the Hubble Space Telescope (HST). Such a mission would not only require estimates of the HST attitude and rates in order to achieve capture by the proposed Hubble Robotic Vehicle (HRV), but also precision control to achieve the desired rate and maintain the orientation to successfully dock with HST. To generalize the situation, HST is the target vehicle and HRV is the chaser. This work presents a nonlinear approach for estimating the body rates of a non-cooperative target vehicle, and coupling this estimation to a control scheme. Non-cooperative in this context relates to the target vehicle no longer having the ability to maintain attitude control or transmit attitude knowledge.

  5. Accurate pointing of tungsten welding electrodes

    NASA Technical Reports Server (NTRS)

    Ziegelmeier, P.

    1971-01-01

    Thoriated tungsten is pointed accurately and quickly by using sodium nitrite. The point produced is smooth, and no effort is necessary to hold the tungsten rod concentric. The chemically produced point can be used several times longer than ground points. This method reduces the time and cost of preparing tungsten electrodes.

  6. Onboard Autonomous Corrections for Accurate IRF Pointing.

    NASA Astrophysics Data System (ADS)

    Jorgensen, J. L.; Betto, M.; Denver, T.

    2002-05-01

    Over the past decade, the Noise Equivalent Angle (NEA) of onboard attitude reference instruments has decreased from tens of arcseconds to the sub-arcsecond level. This improved performance is partly due to improved sensor technology with enhanced signal-to-noise ratios, and partly due to improved processing electronics which allow for more sophisticated and faster signal processing. However, the main reason for the increased precision is the application of onboard autonomy, which apart from simple outlier rejection also allows for the removal of "false positive" answers and other "unexpected" noise sources that would otherwise degrade the quality of the measurements (e.g. discrimination between signals caused by starlight and ionizing radiation). The utilization of autonomous signal processing has also provided the means for another onboard processing step, namely autonomous recovery from lost-in-space, where the attitude instrument, without a priori knowledge, derives the absolute attitude, i.e. in IRF coordinates, within fractions of a second. Combined with precise orbital state or position data, the absolute attitude information opens up multiple ways to improve mission performance, whether by reducing operations costs, increasing pointing accuracy, reducing mission expendables, or providing backup decision information in case of anomalies. The Advanced Stellar Compass (ASC) is a miniature, high-accuracy attitude instrument which features fully autonomous operations. The autonomy encompasses all direct steps from automatic health checkout at power-on, through fully automatic SEU and SEL handling and proton-induced sparkle removal, to recovery from "lost in space" and optical disturbance detection and handling. But apart from these more obvious autonomy functions, the ASC also features functions to handle and remove the aforementioned residuals. These functions encompass diverse operators such as a full orbital state vector model with automatic cloud

  7. Accurate pose estimation for forensic identification

    NASA Astrophysics Data System (ADS)

    Merckx, Gert; Hermans, Jeroen; Vandermeulen, Dirk

    2010-04-01

    In forensic authentication, one aims to identify the perpetrator among a series of suspects or distractors. A fundamental problem in any recognition system that aims for identification of subjects in a natural scene is the lack of constraints on viewing and imaging conditions. In forensic applications, identification proves even more challenging, since most surveillance footage is of abysmal quality. In this context, robust methods for pose estimation are paramount. In this paper we will therefore present a new pose estimation strategy for very low quality footage. Our approach uses 3D-2D registration of a textured 3D face model with the surveillance image to obtain accurate far field pose alignment. Starting from an inaccurate initial estimate, the technique uses novel similarity measures based on the monogenic signal to guide a pose optimization process. We will illustrate the descriptive strength of the introduced similarity measures by using them directly as a recognition metric. Through validation, using both real and synthetic surveillance footage, our pose estimation method is shown to be accurate, and robust to lighting changes and image degradation.

  8. Point estimates for probability moments

    PubMed Central

    Rosenblueth, Emilio

    1975-01-01

    Given a well-behaved real function Y of a real random variable X and the first two or three moments of X, expressions are derived for the moments of Y as linear combinations of powers of the point estimates y(x+) and y(x-), where x+ and x- are specific values of X. Higher-order approximations and approximations for discontinuous Y using more point estimates are also given. Second-moment approximations are generalized to the case when Y is a function of several variables. PMID:16578731
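
    For a symmetric distribution, the simplest instance of this scheme places weight 1/2 on each of x+ = μ + σ and x- = μ - σ; a minimal numerical sketch (function names are illustrative):

      import numpy as np

      def two_point_moments(y, mu, sigma):
          # Symmetric two-point estimate: moments of Y = y(X) are approximated
          # by equally weighted powers of the point estimates y(x+) and y(x-).
          yp, ym = y(mu + sigma), y(mu - sigma)
          EY = 0.5 * (yp + ym)                  # approximate E[Y]
          EY2 = 0.5 * (yp ** 2 + ym ** 2)       # approximate E[Y^2]
          return EY, np.sqrt(max(EY2 - EY ** 2, 0.0))

      # Example: Y = exp(X) with X having mean 0 and standard deviation 1.
      print(two_point_moments(np.exp, 0.0, 1.0))  # ~ (1.543, 1.175)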

  9. High energy laser testbed for accurate beam pointing control

    NASA Astrophysics Data System (ADS)

    Kim, Dojong; Kim, Jae Jun; Frist, Duane; Nagashima, Masaki; Agrawal, Brij

    2010-02-01

    Precision laser beam pointing is a key technology in High Energy Laser systems. In this paper, a laboratory High Energy Laser testbed developed at the Naval Postgraduate School is introduced. System identification is performed and a mathematical model is constructed to estimate system performance. New beam pointing control algorithms are designed based on this mathematical model. It is shown in both computer simulation and experiment that the adaptive filter algorithm can improve the pointing performance of the system.
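
    The abstract does not spell out the adaptive filter, so the sketch below uses a standard LMS update of the kind commonly applied to jitter rejection; the disturbance-feedforward setup and all names are assumptions:

      import numpy as np

      def lms_residual(x, d, n_taps=8, mu=0.01):
          # Adapt FIR weights w so that w @ x predicts the measured pointing
          # error d from the reference signal x; return the residual error.
          w = np.zeros(n_taps)
          e = np.zeros(len(d))
          for n in range(n_taps, len(d)):
              u = x[n - n_taps:n][::-1]     # most recent samples first
              e[n] = d[n] - w @ u           # residual pointing error
              w += 2 * mu * e[n] * u        # LMS gradient-descent update
          return e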

  10. Accurate and robust estimation of camera parameters using RANSAC

    NASA Astrophysics Data System (ADS)

    Zhou, Fuqiang; Cui, Yi; Wang, Yexin; Liu, Liu; Gao, He

    2013-03-01

    Camera calibration plays an important role in the field of machine vision applications. The popularly used calibration approach based on a 2D planar target sometimes fails to give reliable and accurate results due to inaccurate or incorrect localization of feature points. To solve this problem, an accurate and robust estimation method for camera parameters based on the RANSAC algorithm is proposed to detect unreliable feature points and provide corresponding solutions. Through this method, most of the outliers are removed and the calibration errors that are the main factors influencing measurement accuracy are reduced. Both simulated and real experiments have been carried out to evaluate the performance of the proposed method, and the results show that the proposed method is robust under large-noise conditions and quite effective in improving the calibration accuracy compared with the original approach.
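
    A generic RANSAC skeleton of the kind such a method builds on; the model-fitting and residual functions for the planar calibration target are problem-specific, so 'fit' and 'residuals' below are placeholders:

      import numpy as np

      def ransac(data, fit, residuals, n_min, n_iter=500, tol=1.0, rng=None):
          # Repeatedly fit a model to a random minimal subset, score it by
          # inlier count, then refit on the best consensus set.
          rng = rng or np.random.default_rng()
          best = None
          for _ in range(n_iter):
              sample = data[rng.choice(len(data), n_min, replace=False)]
              inliers = residuals(fit(sample), data) < tol
              if best is None or inliers.sum() > best.sum():
                  best = inliers
          return fit(data[best]), best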

  11. Fast and accurate estimation for astrophysical problems in large databases

    NASA Astrophysics Data System (ADS)

    Richards, Joseph W.

    2010-10-01

    A recent flood of astronomical data has created much demand for sophisticated statistical and machine learning tools that can rapidly draw accurate inferences from large databases of high-dimensional data. In this Ph.D. thesis, methods for statistical inference in such databases will be proposed, studied, and applied to real data. I use methods for low-dimensional parametrization of complex, high-dimensional data that are based on the notion of preserving the connectivity of data points in the context of a Markov random walk over the data set. I show how this simple parameterization of data can be exploited to: define appropriate prototypes for use in complex mixture models, determine data-driven eigenfunctions for accurate nonparametric regression, and find a set of suitable features to use in a statistical classifier. In this thesis, methods for each of these tasks are built up from simple principles, compared to existing methods in the literature, and applied to data from astronomical all-sky surveys. I examine several important problems in astrophysics, such as estimation of star formation history parameters for galaxies, prediction of redshifts of galaxies using photometric data, and classification of different types of supernovae based on their photometric light curves. Fast methods for high-dimensional data analysis are crucial in each of these problems because they all involve the analysis of complicated high-dimensional data in large, all-sky surveys. Specifically, I estimate the star formation history parameters for the nearly 800,000 galaxies in the Sloan Digital Sky Survey (SDSS) Data Release 7 spectroscopic catalog, determine redshifts for over 300,000 galaxies in the SDSS photometric catalog, and estimate the types of 20,000 supernovae as part of the Supernova Photometric Classification Challenge. Accurate predictions and classifications are imperative in each of these examples because these estimates are utilized in broader inference problems

  12. 31 CFR 205.24 - How are accurate estimates maintained?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    Treasury-State Agreement § 205.24 How are accurate estimates maintained? (a) If a State has knowledge that an estimate does not reasonably correspond to the State's cash needs for a Federal assistance...

  13. Accurate localization of needle entry point in interventional MRI.

    PubMed

    Daanen, V; Coste, E; Sergent, G; Godart, F; Vasseur, C; Rousseau, J

    2000-10-01

    In interventional magnetic resonance imaging (MRI), the systems designed to help the surgeon during biopsy must provide accurate knowledge of the positions of the target and also the entry point of the needle on the skin of the patient. In some cases, this needle entry point can be outside the B(0) homogeneity area, where the distortions may be larger than a few millimeters. In that case, major correction for geometric deformation must be performed. Moreover, the use of markers to highlight the needle entry point is inaccurate. The aim of this study was to establish a three-dimensional coordinate correction according to the position of the entry point of the needle. We also describe a 2-degree of freedom electromechanical device that is used to determine the needle entry point on the patient's skin with a laser spot. PMID:11042649

  14. Accurate Biomass Estimation via Bayesian Adaptive Sampling

    NASA Technical Reports Server (NTRS)

    Wheeler, Kevin R.; Knuth, Kevin H.; Castle, Joseph P.; Lvov, Nikolay

    2005-01-01

    The following concepts were introduced: a) Bayesian adaptive sampling for solving biomass estimation; b) characterization of MISR Rahman model parameters conditioned upon MODIS landcover; c) a rigorous non-parametric Bayesian approach to analytic mixture model determination; and d) a unique U.S. asset for science product validation and verification.

  15. Micromagnetometer calibration for accurate orientation estimation.

    PubMed

    Zhang, Zhi-Qiang; Yang, Guang-Zhong

    2015-02-01

    Micromagnetometers, together with inertial sensors, are widely used for attitude estimation for a wide variety of applications. However, appropriate sensor calibration, which is essential to the accuracy of attitude reconstruction, must be performed in advance. Thus far, many different magnetometer calibration methods have been proposed to compensate for errors such as scale, offset, and nonorthogonality. They have also been used to obviate magnetic errors due to soft and hard iron. However, in order to combine the magnetometer with an inertial sensor for attitude reconstruction, the alignment difference between the magnetometer and the axes of the inertial sensor must be determined as well. This paper proposes a practical means of sensor error correction by simultaneous consideration of sensor errors, magnetic errors, and alignment difference. We take the summation of the offset and hard iron error as the combined bias and then amalgamate the alignment difference and all the other errors into a transformation matrix. A two-step approach is presented to determine the combined bias and transformation matrix separately. In the first step, the combined bias is determined by finding an optimal ellipsoid that can best fit the sensor readings. In the second step, the intrinsic relationships of the raw sensor readings are explored to estimate the transformation matrix as a homogeneous linear least-squares problem. Singular value decomposition is then applied to estimate both the transformation matrix and magnetic vector. The proposed method is then applied to calibrate our sensor node. Although there is no ground truth for the combined bias and transformation matrix for our node, the consistency of calibration results among different trials and a root-mean-square error of less than 3° for orientation estimation have been achieved, which illustrates the effectiveness of the proposed sensor calibration method for practical applications. PMID:25265625
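
    A minimal version of the first step, simplified from a general ellipsoid to a sphere fit so that the combined bias drops out of a linear least-squares problem (illustrative only; the paper fits a full ellipsoid):

      import numpy as np

      def sphere_fit_bias(M):
          # Fit ||m - b||^2 = r^2 to raw readings M (N x 3). Expanding gives
          # ||m||^2 = 2 b.m + (r^2 - ||b||^2), linear in b and c = r^2 - |b|^2.
          A = np.hstack([2 * M, np.ones((len(M), 1))])
          y = (M ** 2).sum(axis=1)
          sol, *_ = np.linalg.lstsq(A, y, rcond=None)
          b = sol[:3]                       # combined bias (offset + hard iron)
          r = np.sqrt(sol[3] + b @ b)       # field magnitude on the fit sphere
          return b, r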

  16. Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method

    ERIC Educational Resources Information Center

    Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey

    2013-01-01

    Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…

  17. Localizing edges for estimating point spread function by removing outlier points

    NASA Astrophysics Data System (ADS)

    Li, Yong; Xu, Liangpeng; Jin, Hongbin; Zou, Junwei

    2016-02-01

    This paper presents an approach to detect sharp edges for estimating point spread function (PSF) of a lens. A category of PSF estimation methods detect sharp edges from low-resolution (LR) images and estimate PSF with the detected edges. Existing techniques usually rely on accurate detection of ending points of the profile normal to an edge. In practice, however, it is often very difficult to localize profiles accurately. Inaccurately localized profiles generate a poor PSF estimation. We employ the Random Sample Consensus (RANSAC) algorithm to rule out outlier points. In RANSAC, prior knowledge about a pattern shape is incorporated, and the edge points lying far away from the pattern shape will be removed. The proposed method is tested on images of saddle patterns. Experimental results show that the proposed method can robustly localize sharp edges from LR saddle pattern images and yield accurate PSF estimation.

  18. Accurate parameter estimation for unbalanced three-phase system.

    PubMed

    Chen, Yuan; So, Hing Cheung

    2014-01-01

    Smart grid is an intelligent power generation and control console in modern electricity networks, where the unbalanced three-phase power system is the commonly used model. Here, parameter estimation for this system is addressed. After converting the three-phase waveforms into a pair of orthogonal signals via the αβ-transformation, a nonlinear least squares (NLS) estimator is developed for accurately finding the frequency, phase, and voltage parameters. The estimator is realized by the Newton-Raphson scheme, whose global convergence is studied in this paper. Computer simulations show that the mean square error performance of the NLS method can attain the Cramér-Rao lower bound. Moreover, our proposal provides more accurate frequency estimation when compared with the complex least mean square (CLMS) and augmented CLMS. PMID:25162056
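
    A sketch of the signal path, assuming the amplitude-invariant form of the αβ-transformation; the NLS stage below uses SciPy's least-squares solver in place of the paper's Newton-Raphson scheme, and the single-tone model is a simplification:

      import numpy as np
      from scipy.optimize import least_squares

      def clarke(va, vb, vc):
          # Alpha-beta (Clarke) transformation: three-phase samples to a pair
          # of orthogonal signals (amplitude-invariant form).
          return (2 * va - vb - vc) / 3.0, (vb - vc) / np.sqrt(3.0)

      def nls_fit(t, alpha, beta, f0):
          # Fit frequency, phase, and the two amplitudes to the orthogonal pair.
          def resid(p):
              f, phi, A, B = p
              return np.concatenate([
                  A * np.cos(2 * np.pi * f * t + phi) - alpha,
                  B * np.sin(2 * np.pi * f * t + phi) - beta])
          x0 = [f0, 0.0, np.max(np.abs(alpha)), np.max(np.abs(beta))]
          return least_squares(resid, x0).x   # [f, phi, A, B]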

  19. Accurate pose estimation using single marker single camera calibration system

    NASA Astrophysics Data System (ADS)

    Pati, Sarthak; Erat, Okan; Wang, Lejing; Weidert, Simon; Euler, Ekkehard; Navab, Nassir; Fallavollita, Pascal

    2013-03-01

    Visual marker based tracking is one of the most widely used tracking techniques in Augmented Reality (AR) applications. Generally, multiple square markers are needed to perform robust and accurate tracking. Various marker based methods for calibrating relative marker poses have already been proposed. However, the calibration accuracy of these methods relies on the order of the image sequence and pre-evaluation of pose-estimation errors, making the method offline. Several studies have shown that the accuracy of pose estimation for an individual square marker depends on camera distance and viewing angle. We propose a method to accurately model the error in the estimated pose and translation of a camera using a single marker via an online method based on the Scaled Unscented Transform (SUT). Thus, the pose of each marker can be estimated with highly accurate calibration results, independent of the order of the image sequence, compared to cases where this knowledge is not used. This removes the need for multiple markers and an offline estimation system to calculate camera pose in an AR application.

  20. An Accurate Link Correlation Estimator for Improving Wireless Protocol Performance

    PubMed Central

    Zhao, Zhiwei; Xu, Xianghua; Dong, Wei; Bu, Jiajun

    2015-01-01

    Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation. PMID:25686314

  21. Accurate estimation of sigma(exp 0) using AIRSAR data

    NASA Technical Reports Server (NTRS)

    Holecz, Francesco; Rignot, Eric

    1995-01-01

    During recent years signature analysis, classification, and modeling of Synthetic Aperture Radar (SAR) data as well as estimation of geophysical parameters from SAR data have received a great deal of interest. An important requirement for the quantitative use of SAR data is the accurate estimation of the backscattering coefficient sigma(exp 0). In terrain with relief variations radar signals are distorted due to the projection of the scene topography into the slant range-Doppler plane. The effect of these variations is to change the physical size of the scattering area, leading to errors in the radar backscatter values and incidence angle. For this reason the local incidence angle, derived from sensor position and Digital Elevation Model (DEM) data, must always be considered. Especially in the airborne case, the antenna gain pattern can be an additional source of radiometric error, because the radar look angle is not known precisely as a result of the aircraft motions and the local surface topography. Consequently, radiometric distortions due to the antenna gain pattern must also be corrected for each resolution cell, by taking into account aircraft displacements (position and attitude) and the position of the backscatter element, defined by the DEM data. In this paper, a method to derive an accurate estimation of the backscattering coefficient using NASA/JPL AIRSAR data is presented. The results are evaluated in terms of geometric accuracy, radiometric variations of sigma(exp 0), and precision of the estimated forest biomass.

  22. Laser measuring system accurately locates point coordinates on photograph

    NASA Technical Reports Server (NTRS)

    Doede, J. H.; Lindenmeyer, C. W.; Vonderohe, R. H.

    1966-01-01

    Laser activated ultraprecision ranging apparatus interfaced with a computer determines point coordinates on a photograph. A helium-neon gas CW laser provides collimated light for a null balancing optical system. This system has no mechanical connection between the ranging apparatus and the photograph.

  23. Accurate Satellite-Derived Estimates of Tropospheric Ozone Radiative Forcing

    NASA Technical Reports Server (NTRS)

    Joiner, Joanna; Schoeberl, Mark R.; Vasilkov, Alexander P.; Oreopoulos, Lazaros; Platnick, Steven; Livesey, Nathaniel J.; Levelt, Pieternel F.

    2008-01-01

    Estimates of the radiative forcing due to anthropogenically-produced tropospheric O3 are derived primarily from models. Here, we use tropospheric ozone and cloud data from several instruments in the A-train constellation of satellites as well as information from the GEOS-5 Data Assimilation System to accurately estimate the instantaneous radiative forcing from tropospheric O3 for January and July 2005. We improve upon previous estimates of tropospheric ozone mixing ratios from a residual approach using the NASA Earth Observing System (EOS) Aura Ozone Monitoring Instrument (OMI) and Microwave Limb Sounder (MLS) by incorporating cloud pressure information from OMI. Since we cannot distinguish between natural and anthropogenic sources with the satellite data, our estimates reflect the total forcing due to tropospheric O3. We focus specifically on the magnitude and spatial structure of the cloud effect on both the short- and long-wave radiative forcing. The estimates presented here can be used to validate present-day O3 radiative forcing produced by models.

  24. Lidar point cloud representation of canopy structure for biomass estimation

    NASA Astrophysics Data System (ADS)

    Neuenschwander, A. L.; Krofcheck, D. J.; Litvak, M. E.

    2014-12-01

    Laser mapping systems (lidar) have become an essential remote sensing tool for determining local and regional estimates of biomass. Lidar data (possibly in conjunction with optical imagery) can be used to segment the landscape into either individual trees or clusters of trees. Canopy characteristics (e.g., maximum and mean height) for a segmented tree are typically derived from a rasterized canopy height model (CHM) and subsequently used in a regression model to estimate biomass. The process of rasterizing the lidar point cloud into a CHM, however, reduces the amount of information about the tree structure. Here, we compute statistics for each segmented tree from the raw lidar point cloud rather than a rasterized CHM. Working directly from the lidar point cloud enables a more accurate representation of the canopy structure. Biomass estimates from the point cloud method are compared against biomass estimates derived from a CHM for a Juniper savanna in New Mexico.

  25. Robust ODF smoothing for accurate estimation of fiber orientation.

    PubMed

    Beladi, Somaieh; Pathirana, Pubudu N; Brotchie, Peter

    2010-01-01

    Q-ball imaging was presented as a model-free, linear, and multimodal diffusion-sensitive approach to reconstruct the diffusion orientation distribution function (ODF) using diffusion-weighted MRI data. The ODFs are widely used to estimate fiber orientations. A smoothness constraint was proposed to achieve a balance between the angular resolution and noise stability of ODF constructs, and different regularization methods have been proposed for this purpose. However, these methods are not robust and are quite sensitive to the global regularization parameter. Although numerical methods such as the L-curve test can be used to define a globally appropriate regularization parameter, no single value is suitable for all regions of interest. This may result in oversmoothing and potentially neglect an existing fiber population. In this paper, we propose to include an interpolation step prior to the spherical harmonic decomposition. This interpolation, based on Delaunay triangulation, provides a reliable, robust, and accurate smoothing approach. The method is easy to implement and does not require other numerical methods to define the required parameters. The fiber orientations estimated using this approach are also more accurate compared to other common approaches. PMID:21096202

  26. Motion Estimation System Utilizing Point Cloud Registration

    NASA Technical Reports Server (NTRS)

    Chen, Qi (Inventor)

    2016-01-01

    A system and method of estimating motion of a machine is disclosed. The method may include determining a first point cloud and a second point cloud corresponding to an environment in a vicinity of the machine. The method may further include generating a first extended Gaussian image (EGI) for the first point cloud and a second EGI for the second point cloud. The method may further include determining a first EGI segment based on the first EGI and a second EGI segment based on the second EGI. The method may further include determining a first two-dimensional distribution for points in the first EGI segment and a second two-dimensional distribution for points in the second EGI segment. The method may further include estimating motion of the machine based on the first and second two-dimensional distributions.

  27. Accurate estimators of correlation functions in Fourier space

    NASA Astrophysics Data System (ADS)

    Sefusatti, E.; Crocce, M.; Scoccimarro, R.; Couchman, H. M. P.

    2016-08-01

    Efficient estimators of Fourier-space statistics for large numbers of objects rely on fast Fourier transforms (FFTs), which are affected by aliasing from unresolved small-scale modes due to the finite FFT grid. Aliasing takes the form of a sum over images, each of them corresponding to the Fourier content displaced by increasing multiples of the sampling frequency of the grid. These spurious contributions limit the accuracy in the estimation of Fourier-space statistics, and are typically ameliorated by simultaneously increasing grid size and discarding high-frequency modes. This results in inefficient estimates for, e.g., the power spectrum when the desired systematic biases are well under the per cent level. We show that using interlaced grids removes odd images, which include the dominant contribution to aliasing. In addition, we discuss the choice of interpolation kernel used to define density perturbations on the FFT grid and demonstrate that using higher-order interpolation kernels than the standard Cloud-In-Cell algorithm results in a significant reduction of the remaining images. We show that combining fourth-order interpolation with interlacing gives very accurate Fourier amplitudes and phases of density perturbations. This results in power spectrum and bispectrum estimates that have systematic biases below 0.01 per cent all the way to the Nyquist frequency of the grid, thus maximizing the use of unbiased Fourier coefficients for a given grid size and greatly reducing systematics for applications to large cosmological data sets.
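
    A one-dimensional sketch of the two ingredients, Cloud-In-Cell assignment and interlacing; grid-shift and normalization conventions are illustrative:

      import numpy as np

      def cic_density(x, n_grid, boxsize):
          # Cloud-In-Cell: share each particle linearly between its two
          # nearest grid points (periodic box).
          u = x / boxsize * n_grid
          i = np.floor(u).astype(int)
          f = u - i
          delta = np.zeros(n_grid)
          np.add.at(delta, i % n_grid, 1.0 - f)
          np.add.at(delta, (i + 1) % n_grid, f)
          return delta

      def interlaced_modes(x, n_grid, boxsize):
          # Average the modes of two grids offset by half a cell; the phase
          # factor exp(i k H/2) makes the odd aliasing images cancel.
          d1 = np.fft.rfft(cic_density(x, n_grid, boxsize))
          d2 = np.fft.rfft(cic_density(x + boxsize / (2 * n_grid), n_grid, boxsize))
          k = np.arange(d1.size)            # wavenumber in units of 2*pi/boxsize
          return 0.5 * (d1 + d2 * np.exp(1j * np.pi * k / n_grid))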

  28. Estimating Function Approaches for Spatial Point Processes

    NASA Astrophysics Data System (ADS)

    Deng, Chong

    Spatial point pattern data consist of locations of events that are often of interest in biological and ecological studies. Such data are commonly viewed as a realization from a stochastic process called a spatial point process. To fit a parametric spatial point process model to such data, likelihood-based methods have been widely studied. However, while maximum likelihood estimation is often too computationally intensive for Cox and cluster processes, pairwise likelihood methods such as composite likelihood and Palm likelihood usually suffer from a loss of information due to ignoring the correlation among pairs. For many types of correlated data other than spatial point processes, when likelihood-based approaches are not desirable, estimating functions have been widely used for model fitting. In this dissertation, we explore estimating function approaches for fitting spatial point process models. These approaches, which are based on asymptotically optimal estimating function theories, can be used to incorporate the correlation among data and yield more efficient estimators. We conducted a series of studies to demonstrate that these estimating function approaches are good alternatives for balancing the trade-off between computational complexity and estimation efficiency. First, we propose a new estimating procedure that improves the efficiency of the pairwise composite likelihood method in estimating clustering parameters. Our approach combines estimating functions derived from pairwise composite likelihood estimation and estimating functions that account for correlations among the pairwise contributions. Our method can be used to fit a variety of parametric spatial point process models and can yield more efficient estimators for the clustering parameters than pairwise composite likelihood estimation. We demonstrate its efficacy through a simulation study and an application to the longleaf pine data. Second, we further explore the quasi-likelihood approach on fitting

  29. Accurate Orientation Estimation Using AHRS under Conditions of Magnetic Distortion

    PubMed Central

    Yadav, Nagesh; Bleakley, Chris

    2014-01-01

    Low cost, compact attitude heading reference systems (AHRS) are now being used to track human body movements in indoor environments by estimation of the 3D orientation of body segments. In many of these systems, heading estimation is achieved by monitoring the strength of the Earth's magnetic field. However, the Earth's magnetic field can be locally distorted due to the proximity of ferrous and/or magnetic objects. Herein, we propose a novel method for accurate 3D orientation estimation using an AHRS, comprised of an accelerometer, gyroscope and magnetometer, under conditions of magnetic field distortion. The system performs online detection and compensation for magnetic disturbances, due to, for example, the presence of ferrous objects. The magnetic distortions are detected by exploiting variations in magnetic dip angle, relative to the gravity vector, and in magnetic strength. We investigate and show the advantages of using both magnetic strength and magnetic dip angle for detecting the presence of magnetic distortions. The correction method is based on a particle filter, which performs the correction using an adaptive cost function and by adapting the variance during particle resampling, so as to place more emphasis on the results of dead reckoning of the gyroscope measurements and less on the magnetometer readings. The proposed method was tested in an indoor environment in the presence of various magnetic distortions and under various accelerations (up to 3 g). In the experiments, the proposed algorithm achieves <2° static peak-to-peak error and <5° dynamic peak-to-peak error, significantly outperforming previous methods. PMID:25347584
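
    The detection stage can be sketched as threshold tests on field strength and on the dip angle relative to the gravity vector; thresholds and names below are illustrative, not the paper's values:

      import numpy as np

      def magnetic_disturbance(mag, grav, B_ref, dip_ref, tol_B=0.1, tol_dip=5.0):
          # Flag a sample when the field magnitude or the dip angle deviates
          # from its reference value.
          B = np.linalg.norm(mag)
          c = mag @ grav / (B * np.linalg.norm(grav))
          dip = np.degrees(np.arcsin(np.clip(c, -1.0, 1.0)))
          return abs(B - B_ref) / B_ref > tol_B or abs(dip - dip_ref) > tol_dip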

  30. Naïve Point Estimation

    ERIC Educational Resources Information Center

    Lindskog, Marcus; Winman, Anders; Juslin, Peter

    2013-01-01

    The capacity of short-term memory is a key constraint when people make online judgments requiring them to rely on samples retrieved from memory (e.g., Dougherty & Hunter, 2003). In this article, the authors compare 2 accounts of how people use knowledge of statistical distributions to make point estimates: either by retrieving precomputed…

  31. Fast and Accurate Learning When Making Discrete Numerical Estimates.

    PubMed

    Sanborn, Adam N; Beierholm, Ulrik R

    2016-04-01

    Many everyday estimation tasks have an inherently discrete nature, whether the task is counting objects (e.g., a number of paint buckets) or estimating discretized continuous variables (e.g., the number of paint buckets needed to paint a room). While Bayesian inference is often used for modeling estimates made along continuous scales, discrete numerical estimates have not received as much attention, despite their common everyday occurrence. Using two tasks, a numerosity task and an area estimation task, we invoke Bayesian decision theory to characterize how people learn discrete numerical distributions and make numerical estimates. Across three experiments with novel stimulus distributions we found that participants fell between two common decision functions for converting their uncertain representation into a response: drawing a sample from their posterior distribution and taking the maximum of their posterior distribution. While this was consistent with the decision function found in previous work using continuous estimation tasks, surprisingly the prior distributions learned by participants in our experiments were much more adaptive: When making continuous estimates, participants have required thousands of trials to learn bimodal priors, but in our tasks participants learned discrete bimodal and even discrete quadrimodal priors within a few hundred trials. This makes discrete numerical estimation tasks good testbeds for investigating how people learn and make estimates. PMID:27070155
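
    The two decision functions that bracketed participants' behavior are easy to state over a discrete hypothesis grid (a minimal sketch; the experimental models are richer):

      import numpy as np

      def decision_rules(prior, likelihood, rng=None):
          # 'prior' and 'likelihood' are arrays over the same discrete
          # hypotheses; return (posterior sample, posterior maximum).
          rng = rng or np.random.default_rng()
          post = prior * likelihood
          post = post / post.sum()
          return rng.choice(len(post), p=post), int(np.argmax(post))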

  32. Triple point of e-deuterium as an accurate thermometric fixed point

    SciTech Connect

    Pavese, F.; McConville, G.T.

    1986-01-01

    The triple point of deuterium (18.7 K) is the only possibility for excluding vapor pressure measurements in the definition of a temperature scale based on fixed points between 13.81 and 24.562 K. This paper reports an investigation made at the Istituto di Metrologia and Mound Laboratory, using extremely pure deuterium directly sealed at the production plant into small metal cells. The large contamination by HD of commercially available gas, which cannot be accounted and corrected for due to its increase in handling, was found to be very stable with time after sealing in IMGC cells. HD contamination can be limited to less than 100 ppm in Monsanto cells, both with n-D2 and e-D2, when filled directly from the thermal diffusion column and sealed at the factory. e-D2 requires a special deuterated catalyst. The triple point temperature of e-D2 has been determined to be: T(NPL-IPTS-68) = 18.7011 ± 0.002 K. 20 refs., 3 figs., 2 tabs.

  33. Method for estimation of protein isoelectric point.

    PubMed

    Pihlasalo, Sari; Auranen, Laura; Hänninen, Pekka; Härmä, Harri

    2012-10-01

    Adsorption of sample protein to Eu(3+) chelate-labeled nanoparticles is the basis of the developed noncompetitive and homogeneous method for the estimation of the protein isoelectric point (pI). The lanthanide ion of the nanoparticle surface-conjugated Eu(3+) chelate is dissociated at a low pH, therefore decreasing the luminescence signal. A nanoparticle-adsorbed sample protein prevents the dissociation of the chelate, leading to a high luminescence signal. The adsorption efficiency of the sample protein is reduced above the isoelectric point due to the decreased electrostatic attraction between the negatively charged protein and the negatively charged particle. Four proteins with isoelectric points ranging from ~5 to 9 were tested to show the performance of the method. These pI values measured with the developed method were close to the theoretical and experimental literature values. The method is sensitive and requires a low analyte concentration of submilligrams per liter, which is nearly 10000 times lower than the concentration required for the traditional isoelectric focusing. Moreover, the method is significantly faster and simpler than the existing methods, as a ready-to-go assay was prepared for the microtiter plate format. This mix-and-measure concept is a highly attractive alternative for routine laboratory work. PMID:22946671

  34. Is Commercially Available Point Finder Accurate and Reliable in Detecting Active Auricular Acupuncture Points?

    PubMed Central

    Maranets, Inna; Lin, Eric C.; DeZinno, Peggy

    2012-01-01

    Abstract Objectives This study was done to determine the specificity and sensitivity of a commercial Pointer Plus (point finder) in detecting a region of low skin resistance on the ear. Design This was a prospective blinded study. Setting/location The study was done at the Yale New Haven Hospital, New Haven, CT. Subjects The subjects were men and women who work at Yale New Haven Hospital. Interventions There were no interventions. Outcome measures Correlations were made between self-reported musculoskeletal pain and the detection of low skin resistance on the ear. Results The positive predictive value for the Pointer Plus detecting low skin resistance correlating to the neck region of the French auricular map is 0.76 (76%). The positive predictive value for the Pointer Plus detecting low skin resistance correlating to the low back region of the French auricular map is 0.25. The positive predictive value for the Pointer Plus detecting any low skin resistance on the external auricles in patients who complained of more than two musculoskeletal pains is 0.29. Conclusions The specificity and sensitivity of a commercial Pointer Plus (point finder) in detecting a region of low skin resistance on the ear are unreliable, depending on the correlating area of a published auricular map. Additional assessments are needed to support the clinical practice. PMID:22834870

  35. How Accurately Do Spectral Methods Estimate Effective Elastic Thickness?

    NASA Astrophysics Data System (ADS)

    Perez-Gussinye, M.; Lowry, A. R.; Watts, A. B.; Velicogna, I.

    2002-12-01

    The effective elastic thickness, Te, is an important parameter that has the potential to provide information on the long-term thermal and mechanical properties of the lithosphere. Previous studies have estimated Te using both forward and inverse (spectral) methods. While there is generally good agreement between the results obtained using these methods, spectral methods are limited because they depend on the spectral estimator and the window size chosen for analysis. In order to address this problem, we have used a multitaper technique which yields optimal estimates of the bias and variance of the Bouguer coherence function relating topography and gravity anomaly data. The technique has been tested using realistic synthetic topography and gravity. Synthetic data were generated assuming surface and sub-surface (buried) loading of an elastic plate with fractal statistics consistent with real data sets. The cases of uniform and spatially varying Te are examined. The topography and gravity anomaly data consist of 2000x2000 km grids sampled at 8 km intervals. The bias in the Te estimate is assessed from the difference between the true Te value and the mean from analyzing 100 overlapping windows within the 2000x2000 km data grids. For the case in which Te is uniform, the bias and variance decrease with window size and increase with increasing true Te value. In the case of a spatially varying Te, however, there is a trade-off between spatial resolution and variance. With increasing window size the variance of the Te estimate decreases, but the spatial changes in Te are smeared out. We find that for a Te distribution consisting of a strong central circular region of Te=50 km (radius 600 km) and progressively smaller Te towards its edges, the 800x800 and 1000x1000 km windows gave the best compromise between spatial resolution and variance. Our studies demonstrate that assumed stationarity of the relationship between gravity and topography data yields good results even in

  36. Accurate feature detection and estimation using nonlinear and multiresolution analysis

    NASA Astrophysics Data System (ADS)

    Rudin, Leonid; Osher, Stanley

    1994-11-01

    A program for feature detection and estimation using nonlinear and multiscale analysis was completed. The state-of-the-art edge detection was combined with multiscale restoration (as suggested by the first author) and robust results in the presence of noise were obtained. Successful applications to numerous images of interest to DOD were made. Also, a new market in the criminal justice field was developed, based, in part, on this work.

  37. Accurate tempo estimation based on harmonic + noise decomposition

    NASA Astrophysics Data System (ADS)

    Alonso, Miguel; Richard, Gael; David, Bertrand

    2006-12-01

    We present an innovative tempo estimation system that processes acoustic audio signals and does not use any high-level musical knowledge. Our proposal relies on a harmonic + noise decomposition of the audio signal by means of a subspace analysis method. Then, a technique to measure the degree of musical accentuation as a function of time is developed and separately applied to the harmonic and noise parts of the input signal. This is followed by a periodicity estimation block that calculates the salience of musical accents for a large number of potential periods. Next, a multipath dynamic programming stage searches among all the potential periodicities for the most consistent prospects through time, and finally the most energetic candidate is selected as the tempo. Our proposal is validated using a manually annotated test base containing 961 music signals from various musical genres. In addition, the performance of the algorithm under different configurations is compared. The robustness of the algorithm when processing signals of degraded quality is also measured.

  38. Bioaccessibility tests accurately estimate bioavailability of lead to quail

    USGS Publications Warehouse

    Beyer, W. Nelson; Basta, Nicholas T; Chaney, Rufus L.; Henry, Paula F.; Mosby, David; Rattner, Barnett A.; Scheckel, Kirk G.; Sprague, Dan; Weber, John

    2016-01-01

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from five Pb-contaminated Superfund sites had relative bioavailabilities from 33%-63%, with a mean of about 50%. Treatment of two of the soils with phosphorus significantly reduced the bioavailability of Pb. Bioaccessibility of Pb in the test soils was then measured in six in vitro tests and regressed on bioavailability. They were: the “Relative Bioavailability Leaching Procedure” (RBALP) at pH 1.5, the same test conducted at pH 2.5, the “Ohio State University In vitro Gastrointestinal” method (OSU IVG), the “Urban Soil Bioaccessible Lead Test”, the modified “Physiologically Based Extraction Test” and the “Waterfowl Physiologically Based Extraction Test.” All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the RBALP pH 2.5 and OSU IVG tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%). Additional Pb was associated with P (chloropyromorphite, hydroxypyromorphite and tertiary Pb phosphate), and with Pb carbonates, leadhillite (a lead sulfate carbonate hydroxide), and Pb sulfide. The formation of chloropyromorphite reduced the bioavailability of Pb and the amendment of Pb-contaminated soils with P may be a thermodynamically favored means to sequester Pb.

  39. Comparison of methods for accurate end-point detection of potentiometric titrations

    NASA Astrophysics Data System (ADS)

    Villela, R. L. A.; Borges, P. P.; Vyskočil, L.

    2015-01-01

    Detection of the end point in potentiometric titrations has wide application in experiments that demand very low measurement uncertainties, mainly for certifying reference materials. Simulations of experimental coulometric titration data and subsequent error analysis of the end-point values were conducted using a programming code. These simulations revealed that the Levenberg-Marquardt method is in general more accurate than the traditional second-derivative technique currently used for end-point detection in potentiometric titrations. The performance of the methods will be compared and presented in this paper.
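
    A sketch contrasting the two detectors on sampled titration data (potential E versus titrant volume v); the sigmoid model used for the Levenberg-Marquardt fit is a generic assumption, not the paper's exact curve model:

      import numpy as np
      from scipy.optimize import curve_fit

      def endpoint_second_derivative(v, E):
          # Traditional technique: end point where the second derivative of
          # the titration curve changes sign.
          d2 = np.gradient(np.gradient(E, v), v)
          i = np.argmax(np.abs(np.diff(np.sign(d2))))
          return v[i]

      def endpoint_lm(v, E):
          # Levenberg-Marquardt alternative: fit a sigmoid and take its
          # inflection point v0 as the end point.
          def sigmoid(v, E0, dE, v0, s):
              return E0 + dE / (1.0 + np.exp(-(v - v0) / s))
          p0 = [E.min(), np.ptp(E), v[np.argmax(np.gradient(E, v))], 0.1]
          popt, _ = curve_fit(sigmoid, v, E, p0=p0, method="lm")
          return popt[2]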

  40. Bioaccessibility tests accurately estimate bioavailability of lead to quail.

    PubMed

    Beyer, W Nelson; Basta, Nicholas T; Chaney, Rufus L; Henry, Paula F P; Mosby, David E; Rattner, Barnett A; Scheckel, Kirk G; Sprague, Daniel T; Weber, John S

    2016-09-01

    Hazards of soil-borne lead (Pb) to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, the authors measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from 5 Pb-contaminated Superfund sites had relative bioavailabilities from 33% to 63%, with a mean of approximately 50%. Treatment of 2 of the soils with phosphorus (P) significantly reduced the bioavailability of Pb. Bioaccessibility of Pb in the test soils was then measured in 6 in vitro tests and regressed on bioavailability: the relative bioavailability leaching procedure at pH 1.5, the same test conducted at pH 2.5, the Ohio State University in vitro gastrointestinal method, the urban soil bioaccessible lead test, the modified physiologically based extraction test, and the waterfowl physiologically based extraction test. All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the relative bioavailability leaching procedure at pH 2.5 and Ohio State University in vitro gastrointestinal tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%). Additional Pb was associated with P (chloropyromorphite, hydroxypyromorphite, and tertiary Pb phosphate) and with Pb carbonates, leadhillite (a lead sulfate carbonate hydroxide), and Pb sulfide. The formation of chloropyromorphite reduced the bioavailability of Pb, and the amendment of Pb-contaminated soils with P may be a thermodynamically favored means to sequester Pb. Environ Toxicol Chem 2016;35:2311-2319.

  41. Estimating Aircraft Heading Based on Laserscanner Derived Point Clouds

    NASA Astrophysics Data System (ADS)

    Koppanyi, Z.; Toth, C., K.

    2015-03-01

    Using LiDAR sensors for tracking and monitoring an operating aircraft is a new application. In this paper, we present data processing methods to estimate the heading of a taxiing aircraft using laser point clouds. During the data acquisition, a Velodyne HDL-32E laser scanner tracked a moving Cessna 172 airplane. The point clouds captured at different times were used for heading estimation. After addressing the problem and specifying the equation of motion to reconstruct the aircraft point cloud from the consecutive scans, three methods are investigated here. The first requires a reference model to estimate the relative angle from the captured data by fitting different cross-sections (horizontal profiles). In the second approach, the iterative closest point (ICP) method is used between the consecutive point clouds to determine the horizontal translation of the captured aircraft body. Regarding the ICP, three different versions were compared, namely the ordinary 3D, 3-DoF 3D and 2-DoF 3D ICP. It was found that the 2-DoF 3D ICP provides the best performance. Finally, the last algorithm searches for the unknown heading and velocity parameters by minimizing the volume of the reconstructed plane. The three methods were compared using three test data types, which are distinguished by object-sensor distance, heading and velocity. We found that the ICP algorithm fails at long distances and when the aircraft motion direction is perpendicular to the scan plane, but the first and the third methods give robust and accurate results at 40 m object distance and at ~12 knots for a small Cessna airplane.
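
    A sketch of the best-performing 2-DoF variant, which restricts the ICP update to a horizontal translation (the heading then follows from the direction of successive translations); implementation details are illustrative:

      import numpy as np
      from scipy.spatial import cKDTree

      def icp_2dof(src, dst, n_iter=30):
          # Translation-only ICP in the horizontal plane: match nearest
          # neighbors in 3D, but update only the (x, y) offset.
          t = np.zeros(2)
          tree = cKDTree(dst)
          for _ in range(n_iter):
              moved = src + np.array([t[0], t[1], 0.0])
              _, idx = tree.query(moved)
              t += (dst[idx, :2] - moved[:, :2]).mean(axis=0)
          return t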

  42. Wind profile estimation from point to point laser distortion data

    NASA Technical Reports Server (NTRS)

    Leland, Robert

    1989-01-01

    The author's results on the problem of using laser distortion data to estimate the wind profile along the path of the beam are presented. A new model for the dynamics of the index of refraction in a non-constant wind is developed. The model agrees qualitatively with theoretical predictions for the index of refraction statistics in linear wind shear, and is approximated by the predictions of Taylor's hypothesis in constant wind. A framework for a potential in-flight experiment is presented, and the estimation problem is discussed in a maximum likelihood context.

  7. [A New Method of Accurately Extracting Spectral Values for Discrete Sampling Points].

    PubMed

    Lü, Zhen-zhen; Liu, Guang-ming; Yang, Jin-song

    2015-08-01

    In the establishment of a remote sensing inversion model, measured data from discrete sampling points are related to the spectral values of the corresponding pixels of the remote sensing image, so that information retrieval can be realized. Accurate extraction of the spectral values is therefore essential for building the inversion model. Converting the target point layer to an ROI (region of interest) and then saving the ROI as ASCII is one of the methods researchers often use to extract spectral values. Analyzing the coordinates and spectral values extracted using the original coordinates in ENVI, we found that the extracted and original coordinates were inconsistent, and some spectral values did not belong to the pixel containing the sampling point. An inversion model based on such information cannot truly reflect the relationship between the target properties and the spectral values, so the model is meaningless. We divided each pixel into four equal parts and summarized the resulting rule: only when a sampling point fell in the upper-left quarter of a pixel were the extracted values correct. On this basis, this paper systematically studied the principle of extracting target coordinates and spectral values and proposes a new method for extracting the spectral values of the pixel in which a sampling point is located, within the ENVI software environment. First, the coordinates of one of the four corner points of the pixel containing each sampling point were extracted using the original coordinates in ENVI. Second, the quarter of the pixel in which each sampling point lay was determined by comparing the absolute differences in longitude and latitude between the original and extracted coordinates. Last, all points were adjusted to the upper-left corner of their pixels by the symmetry principle, and the spectral values were extracted in the same way as in the first step. The results indicated that the extracted spectrum
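
    A short sketch of the corner rule described above, assuming a north-up image with a known upper-left origin and square pixels (hypothetical grid parameters): the pixel containing a sampling point is found, and the extraction coordinate is snapped to that pixel's upper-left corner, the only position for which the extracted value was found to be correct.

      def snap_to_upper_left(lon, lat, lon0, lat0, psize):
          """Return the upper-left corner of the pixel containing (lon, lat).

          lon0, lat0: upper-left corner of the image; psize: pixel size (degrees).
          Latitude decreases downward in the image, hence the sign convention.
          """
          col = int((lon - lon0) // psize)
          row = int((lat0 - lat) // psize)
          return lon0 + col * psize, lat0 - row * psize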

  8. Accurate biopsy-needle depth estimation in limited-angle tomography using multi-view geometry

    NASA Astrophysics Data System (ADS)

    van der Sommen, Fons; Zinger, Sveta; de With, Peter H. N.

    2016-03-01

    Recently, compressed-sensing based algorithms have enabled volume reconstruction from projection images acquired over a relatively small angle (θ < 20°). These methods enable accurate depth estimation of surgical tools with respect to anatomical structures. However, they are computationally expensive and time consuming, rendering them unattractive for image-guided interventions. We propose an alternative approach for depth estimation of biopsy needles during image-guided interventions, in which we split the problem into two parts and solve them independently: needle-depth estimation and volume reconstruction. The complete proposed system consists of these two steps, preceded by needle extraction. First, we detect the biopsy needle in the projection images and remove it by interpolation. Next, we exploit epipolar geometry to find point-to-point correspondences in the projection images to triangulate the 3D position of the needle in the volume. Finally, we use the interpolated projection images to reconstruct the local anatomical structures and indicate the position of the needle within this volume. For validation of the algorithm, we have recorded a full CT scan of a phantom with an inserted biopsy needle. The performance of our approach ranges from a median error of 2.94 mm for a distributed viewing angle of 1° down to an error of 0.30 mm for an angle larger than 10°. Based on the results of this initial phantom study, we conclude that multi-view geometry offers an attractive alternative to time-consuming iterative methods for the depth estimation of surgical tools during C-arm-based image-guided interventions.
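
    The triangulation step lends itself to a compact sketch. Assuming the 3x4 projection matrices of two C-arm poses are known and the needle tip has been detected at image coordinates uv1 and uv2 (all hypothetical inputs), the standard linear (DLT) two-view triangulation recovers the 3D tip position:

      import numpy as np

      def triangulate(P1, P2, uv1, uv2):
          """Linear two-view triangulation of one 3D point.

          P1, P2: (3, 4) projection matrices; uv1, uv2: (u, v) image coordinates.
          """
          A = np.vstack([
              uv1[0] * P1[2] - P1[0],
              uv1[1] * P1[2] - P1[1],
              uv2[0] * P2[2] - P2[0],
              uv2[1] * P2[2] - P2[1],
          ])
          # homogeneous solution: right singular vector of the smallest singular value
          X = np.linalg.svd(A)[2][-1]
          return X[:3] / X[3]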

  9. Zero-Point Calibration for AGN Black-Hole Mass Estimates

    NASA Technical Reports Server (NTRS)

    Peterson, B. M.; Onken, C. A.

    2004-01-01

    We discuss the measurement and associated uncertainties of AGN reverberation-based black-hole masses, since these provide the zero-point calibration for scaling relationships that allow black-hole mass estimates for quasars. We find that reverberation-based mass estimates appear to be accurate to within a factor of about 3.

  10. Accurate estimation of motion blur parameters in noisy remote sensing image

    NASA Astrophysics Data System (ADS)

    Shi, Xueyan; Wang, Lin; Shao, Xiaopeng; Wang, Huilin; Tao, Zhong

    2015-05-01

    The relative motion between a remote sensing satellite sensor and objects is one of the most common reasons for remote sensing image degradation. It seriously weakens image data interpretation and information extraction. In practice, the point spread function (PSF) should be estimated first for image restoration. Accurately identifying the motion blur direction and length is crucial for constructing the PSF and restoring the image with precision. In general, the regular light-and-dark stripes in the spectrum can be employed to obtain these parameters using the Radon transform. However, the severe noise present in actual remote sensing images often renders the stripes indistinct, making the parameters difficult to calculate and the results relatively inaccurate. In this paper, an improved motion blur parameter identification method for noisy remote sensing images is proposed to solve this problem. The spectrum characteristics of noisy remote sensing images are analyzed first. An interactive image segmentation method based on graph theory, called GrabCut, is adopted to effectively extract the edge of the light center in the spectrum. The motion blur direction is estimated by applying the Radon transform to the segmentation result. In order to reduce random error, a method based on whole-column statistics is used when calculating the blur length. Finally, the Lucy-Richardson algorithm is applied to restore remote sensing images of the moon after estimating the blur parameters. The experimental results verify the effectiveness and robustness of our algorithm.
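
    A condensed sketch of the direction-estimation step, assuming numpy and scikit-image are available: the log-magnitude spectrum of the blurred image is computed, and the Radon transform picks out the stripe orientation. The GrabCut segmentation and the column-statistics length estimate used in the paper are omitted here.

      import numpy as np
      from skimage.transform import radon

      def blur_direction(image):
          """Estimate the motion-blur direction (degrees) from the image spectrum."""
          spectrum = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(image))))
          angles = np.arange(180)
          sinogram = radon(spectrum, theta=angles, circle=False)  # one projection per angle
          # the stripes align with the angle whose projection varies the most
          return angles[np.argmax(sinogram.var(axis=0))]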

  11. Radioisotopic Tie Points of the Quaternary Geomagnetic Instability Time Scale (GITS): How Accurate and Precise?

    NASA Astrophysics Data System (ADS)

    Singer, B. S.

    2014-12-01

    Reversals and excursions of the geomagnetic field are recorded globally by sedimentary and volcanic rocks. These geodynamo instabilities provide a rich set of chronostratigraphic tie points for the Quaternary period that can test the age models central to paleoclimate studies. Radioisotopic dating of volcanic rocks, mainly 40Ar/39Ar dating of lava flows, coupled with astronomically-dated deep sea sediments, reveals 10 polarity reversals and 27 field excursions during the Quaternary (Singer, 2014). A key question concerns the uncertainties associated with radioisotopic dates of those geodynamo instabilities that have been identified both in terrestrial volcanic rocks and in deep sea sediments. These particular features offer the highest confidence in linking 40Ar/39Ar dates to the global marine climate record. Geological issues aside, for rocks in which the build-up of 40Ar by decay of 40K may be overwhelmed by atmospheric 40Ar at the time of eruption, the uncertainty in 40Ar/39Ar dates derives from three sources: (1) analytical uncertainty associated with measurement of the isotopes, which is straightforward to estimate; (2) systematic uncertainties stemming from the age of standard minerals, such as the Fish Canyon sanidine, and from the 40K decay constant; and (3) systematic uncertainty introduced during analysis, mainly the size and reproducibility of procedural blanks. Whereas (1) and (2) control the precision of an age determination, (2) and (3) also control accuracy. In parallel with an astronomical calibration of 28.201 Ma for the Fish Canyon sanidine standard, awareness of the importance of procedural blanks, and a new generation of multi-collector mass spectrometers capable of exceptionally low-blank and isobar-free analysis, are improving both the accuracy and precision of 40Ar/39Ar dates. Results from lavas recording the Matuyama-Brunhes reversal, the Santa Rosa excursion, and the reversal at the top of the Cobb Mtn subchron demonstrate these advances. Current best

  12. Correction for solute/solvent interaction extends accurate freezing point depression theory to high concentration range.

    PubMed

    Fullerton, G D; Keener, C R; Cameron, I L

    1994-12-01

    The authors describe empirical corrections to ideally dilute expressions for freezing point depression of aqueous solutions to arrive at new expressions accurate up to three molal concentration. The method assumes non-ideality is due primarily to solute/solvent interactions, such that the correct free water mass Mwc is the mass of water in solution Mw minus I·Ms, where Ms is the mass of solute and I an empirical solute/solvent interaction coefficient. The interaction coefficient is easily derived from the constant in the linear regression fit to the experimental plot of Mw/Ms as a function of 1/ΔT (inverse freezing point depression). The I-value, when substituted into the new thermodynamic expressions derived from the assumption of equivalent activity of water in solution and ice, provides accurate predictions of freezing point depression (±0.05 °C) up to 2.5 molal concentration for all the test molecules evaluated: glucose, sucrose, glycerol, and ethylene glycol. The concentration limit is the approximate monolayer water coverage limit for the solutes, which suggests that direct solute/solute interactions are negligible below this limit. This is contrary to the view of many authors, due to the common practice of including hydration forces (a soft potential added to the hard-core atomic potential) in the interaction potential between solute particles. When this is recognized, the two viewpoints are in fundamental agreement. PMID:7699200
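
    The regression that yields I can be restated compactly. Writing μs for the solute molar mass and Kf for the cryoscopic constant (symbols introduced here for illustration), the ideally dilute law applied to the corrected free water mass gives

    \[ \Delta T = \frac{K_f \, M_s/\mu_s}{M_{wc}}, \qquad M_{wc} = M_w - I\,M_s \quad\Longrightarrow\quad \frac{M_w}{M_s} = \frac{K_f}{\mu_s}\,\frac{1}{\Delta T} + I, \]

    so the plot of Mw/Ms against 1/ΔT is linear, with slope Kf/μs and intercept I, exactly as the abstract describes.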

  13. ESTIMATION OF VIABLE AIRBORNE MICROBES DOWNWIND FROM A POINT SOURCE

    EPA Science Inventory

    Modification of the Pasquill atmospheric diffusion equations for estimating viable microbial airborne cell concentrations downwind from a continuous point source is presented. A graphical method is given to estimate the ground level cell concentration given (1) microbial death ra...

  14. ROM Plus®: accurate point-of-care detection of ruptured fetal membranes

    PubMed Central

    McQuivey, Ross W; Block, Jon E

    2016-01-01

    Accurate and timely diagnosis of rupture of fetal membranes is imperative to inform and guide gestational age-specific interventions to optimize perinatal outcomes and reduce the risk of serious complications, including preterm delivery and infections. The ROM Plus is a rapid, point-of-care, qualitative immunochromatographic diagnostic test that uses a unique monoclonal/polyclonal antibody approach to detect two different proteins found in amniotic fluid at high concentrations: alpha-fetoprotein and insulin-like growth factor binding protein-1. Clinical study results have uniformly demonstrated high diagnostic accuracy and performance characteristics with this point-of-care test that exceeds conventional clinical testing with external laboratory evaluation. The description, indications for use, procedural steps, and laboratory and clinical characterization of this assay are presented in this article. PMID:27274316

  15. Achieving accurate neutron-multiplicity analysis of metals and oxides with weighted point model equations.

    SciTech Connect

    Burward-Hoy, J. M.; Geist, W. H.; Krick, M. S.; Mayo, D. R.

    2004-01-01

    Neutron multiplicity counting is a technique for the rapid, nondestructive measurement of plutonium mass in pure and impure materials. This technique is very powerful because it uses the measured coincidence count rates to determine the sample mass without requiring a set of representative standards for calibration. Interpreting measured singles, doubles, and triples count rates using the three-parameter standard point model accurately determines plutonium mass, neutron multiplication, and the ratio of ({alpha},n) to spontaneous-fission neutrons (alpha) for oxides of moderate mass. However, underlying standard point model assumptions - including constant neutron energy and constant multiplication throughout the sample - cause significant biases for the mass, multiplication, and alpha in measurements of metal and large, dense oxides.

  16. What the hand can't tell the eye: illusion of space constancy during accurate pointing.

    PubMed

    Chua, Romeo; Enns, James T

    2005-03-01

    When we press an elevator button or pick up a coffee cup, different visual information is used to guide our reach and to form our conscious experience of these objects. But can the information guiding our hand be brought into awareness? The fact that we can see and feel our own hand in action suggests that it might be possible. However, the dual visual systems theory claims that on-line control of movement is governed by the dorsal stream of visual processing, which is largely unconscious. Two experiments are presented as strong tests of the hypothesis that the visual information guiding on-line pointing in healthy human adults is inaccessible for conscious report. Results show that participants are incapable of consciously accessing the information used in pointing, even though they can see and feel their hands in action and accurate performance depends on it. PMID:15551080

  17. What's the Point of a Raster? Advantages of 3D Point Cloud Processing over Raster-Based Methods for Accurate Geomorphic Analysis of High Resolution Topography.

    NASA Astrophysics Data System (ADS)

    Lague, D.

    2014-12-01

    High Resolution Topographic (HRT) datasets are predominantly stored and analyzed as 2D raster grids of elevations (i.e., Digital Elevation Models). Raster grid processing is common in GIS software and benefits from a large library of fast algorithms dedicated to geometrical analysis, drainage network computation, and topographic change measurement. Yet all instruments or methods currently generating HRT datasets (e.g., ALS, TLS, SFM, stereo satellite imagery) natively output 3D unstructured point clouds that are (i) non-regularly sampled, (ii) incomplete (e.g., submerged parts of river channels are rarely measured), and (iii) include 3D elements (e.g., vegetation, vertical features such as river banks or cliffs) that cannot be accurately described in a DEM. Interpolating the raw point cloud onto a 2D grid generally results in a loss of position accuracy and spatial resolution, and in a more or less controlled interpolation. Here I demonstrate how studying earth surface topography and processes directly on native 3D point cloud datasets offers several advantages over raster-based methods: point cloud methods preserve the accuracy of the original data, can better handle the evaluation of uncertainty associated with topographic change measurements, and are more suitable for studying vegetation characteristics and steep features of the landscape. In this presentation, I will illustrate and compare point-cloud-based and raster-based workflows with various examples involving ALS, TLS, and SFM for the analysis of bank erosion processes in bedrock and alluvial rivers, rockfall statistics (including rockfall volume estimates computed directly from point clouds), and the interaction of vegetation, hydraulics, and sedimentation in salt marshes. These workflows use two recently published algorithms for point cloud classification (CANUPO) and point cloud comparison (M3C2), now implemented in the open-source software CloudCompare.

  18. Application of Common Mid-Point Method to Estimate Asphalt

    NASA Astrophysics Data System (ADS)

    Zhao, Shan; Al-Aadi, Imad

    2015-04-01

    3-D radar is a multi-array stepped-frequency ground-penetrating radar (GPR) that can measure at a very close sampling interval in both in-line and cross-line directions. Constructing asphalt layers in accordance with specified thicknesses is crucial for pavement structure capacity and pavement performance. The common mid-point (CMP) method is a multi-offset measurement method that can improve the accuracy of asphalt layer thickness estimation. In this study, the viability of using 3-D radar to predict asphalt concrete pavement thickness with an extended CMP method was investigated. GPR signals were collected on asphalt pavements with various thicknesses. The time-domain resolution of the 3-D radar was improved by applying a zero-padding technique in the frequency domain. The performance of the 3-D radar was then compared to that of an air-coupled horn antenna. The study concluded that 3-D radar can be used with the CMP method to predict asphalt layer thickness accurately when the layer thickness is larger than 0.13 m. The lack of time-domain resolution of the 3-D radar can be remedied by zero padding in the frequency domain. Keywords: asphalt pavement thickness, 3-D radar, stepped-frequency, common mid-point method, zero padding.
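
    A minimal numpy sketch of the zero-padding step, with illustrative array names: appending zeros to the sampled spectrum before the inverse FFT interpolates the time-domain trace on a finer grid, sharpening the apparent arrival time of layer reflections (it adds no new information, only denser sampling).

      import numpy as np

      def to_time_domain(spectrum, pad_factor=4):
          """Inverse-transform a stepped-frequency GPR spectrum with zero padding."""
          n = len(spectrum)
          padded = np.concatenate([spectrum,
                                   np.zeros((pad_factor - 1) * n, dtype=complex)])
          # the effective time step shrinks by pad_factor
          return np.fft.ifft(padded)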

  19. Radiologists’ ability to accurately estimate and compare their own interpretative mammography performance to their peers

    PubMed Central

    Cook, Andrea J.; Elmore, Joann G.; Zhu, Weiwei; Jackson, Sara L.; Carney, Patricia A.; Flowers, Chris; Onega, Tracy; Geller, Berta; Rosenberg, Robert D.; Miglioretti, Diana L.

    2013-01-01

    Objective To determine whether U.S. radiologists accurately estimate their own interpretive performance of screening mammography and how they compare their performance to their peers’. Materials and Methods 174 radiologists from six Breast Cancer Surveillance Consortium (BCSC) registries completed a mailed survey between 2005 and 2006. Radiologists’ estimated and actual recall, false positive, and cancer detection rates and positive predictive value of biopsy recommendation (PPV2) for screening mammography were compared. Radiologists’ ratings of their performance as lower, similar, or higher than their peers were compared to their actual performance. Associations with radiologist characteristics were estimated using weighted generalized linear models. The study was approved by the institutional review boards of the participating sites, informed consent was obtained from radiologists, and procedures were HIPAA compliant. Results While most radiologists accurately estimated their cancer detection and recall rates (74% and 78% of radiologists), fewer accurately estimated their false positive rate and PPV2 (19% and 26%). Radiologists reported having similar (43%) or lower (31%) recall rates and similar (52%) or lower (33%) false positive rates compared to their peers, and similar (72%) or higher (23%) cancer detection rates and similar (72%) or higher (38%) PPV2. Estimation accuracy did not differ by radiologists’ characteristics, except that radiologists who interpret ≤1,000 mammograms annually were less accurate at estimating their recall rates. Conclusion Radiologists perceive their performance to be better than it actually is and at least as good as their peers’. Radiologists have particular difficulty estimating their false positive rates and PPV2. PMID:22915414

  1. Accurate Non-parametric Estimation of Recent Effective Population Size from Segments of Identity by Descent

    PubMed Central

    Browning, Sharon R.; Browning, Brian L.

    2015-01-01

    Existing methods for estimating historical effective population size from genetic data have been unable to accurately estimate effective population size during the most recent past. We present a non-parametric method for accurately estimating recent effective population size by using inferred long segments of identity by descent (IBD). We found that inferred segments of IBD contain information about effective population size from around 4 generations to around 50 generations ago for SNP array data and to over 200 generations ago for sequence data. In human populations that we examined, the estimates of effective size were approximately one-third of the census size. We estimate the effective population size of European-ancestry individuals in the UK four generations ago to be eight million and the effective population size of Finland four generations ago to be 0.7 million. Our method is implemented in the open-source IBDNe software package. PMID:26299365

  2. Accurate Astrometry and Photometry of Saturated and Coronagraphic Point Spread Functions

    SciTech Connect

    Marois, C; Lafreniere, D; Macintosh, B; Doyon, R

    2006-02-07

    For ground-based adaptive optics point source imaging, differential atmospheric refraction and flexure introduce a small drift of the point spread function (PSF) with time, and seeing and sky transmission variations modify the PSF flux. These effects need to be corrected to properly combine the images and obtain optimal signal-to-noise ratios, accurate relative astrometry and photometry of detected companions as well as precise detection limits. Usually, one can easily correct for these effects by using the PSF core, but this is impossible when high dynamic range observing techniques are used, like coronagraphy with a non-transmissive occulting mask, or if the stellar PSF core is saturated. We present a new technique that can solve these issues by using off-axis satellite PSFs produced by a periodic amplitude or phase mask conjugated to a pupil plane. It will be shown that these satellite PSFs track precisely the PSF position, its Strehl ratio and its intensity and can thus be used to register and to flux normalize the PSF. This approach can be easily implemented in existing adaptive optics instruments and should be considered for future extreme adaptive optics coronagraph instruments and in high-contrast imaging space observatories.

  3. Charged Point Defects in the Flatland: Accurate Formation Energy Calculations in Two-Dimensional Materials

    NASA Astrophysics Data System (ADS)

    Komsa, Hannu-Pekka; Berseneva, Natalia; Krasheninnikov, Arkady V.; Nieminen, Risto M.

    2014-07-01

    Impurities and defects frequently govern materials properties, with the most prominent example being the doping of bulk semiconductors where a minute amount of foreign atoms can be responsible for the operation of the electronic devices. Several computational schemes based on a supercell approach have been developed to get insights into types and equilibrium concentrations of point defects, which successfully work in bulk materials. Here, we show that many of these schemes cannot directly be applied to two-dimensional (2D) systems, as formation energies of charged point defects are dominated by large spurious electrostatic interactions between defects in inhomogeneous environments. We suggest two approaches that solve this problem and give accurate formation energies of charged defects in 2D systems in the dilute limit. Our methods, which are applicable to all kinds of charged defects in any 2D system, are benchmarked for impurities in technologically important h-BN and MoS2 2D materials, and they are found to perform equally well for substitutional and adatom impurities.

  4. Change-point detection in time-series data by relative density-ratio estimation.

    PubMed

    Liu, Song; Yamada, Makoto; Collier, Nigel; Sugiyama, Masashi

    2013-07-01

    The objective of change-point detection is to discover abrupt property changes lying behind time-series data. In this paper, we present a novel statistical change-point detection algorithm based on non-parametric divergence estimation between time-series samples from two retrospective segments. Our method uses the relative Pearson divergence as a divergence measure, and it is accurately and efficiently estimated by a method of direct density-ratio estimation. Through experiments on artificial and real-world datasets including human-activity sensing, speech, and Twitter messages, we demonstrate the usefulness of the proposed method. PMID:23500502
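
    A simplified sketch of the density-ratio machinery, for orientation only: it fits the ratio between two segment samples with Gaussian kernels and a closed-form least-squares solution (the uLSIF special case, i.e. relative parameter α = 0) and returns the empirical Pearson divergence as the change-point score. The authors' relative (α > 0) variant smooths the denominator but has the same structure; the kernel width and regularization values here are placeholders.

      import numpy as np

      def pearson_divergence(X, Y, sigma=1.0, lam=1e-3):
          """Change-point score between segment samples X (n, d) and Y (m, d)."""
          centers = X                                   # kernel centers on X
          def phi(A):                                   # Gaussian design matrix
              sq = ((A[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
              return np.exp(-sq / (2.0 * sigma ** 2))
          Kx, Ky = phi(X), phi(Y)
          H = Ky.T @ Ky / len(Y)                        # E_Y[phi phi^T]
          h = Kx.mean(axis=0)                           # E_X[phi]
          theta = np.linalg.solve(H + lam * np.eye(len(centers)), h)
          # empirical Pearson divergence: -E_Y[r^2]/2 + E_X[r] - 1/2
          return float(-0.5 * np.mean((Ky @ theta) ** 2)
                       + np.mean(Kx @ theta) - 0.5)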

  5. Bounded limit for the Monte Carlo point-flux-estimator

    SciTech Connect

    Grimesey, R.A.

    1981-01-01

    In a Monte Carlo random walk, the kernel K(R,E) is used as an expected-value estimator at every collision for the collided flux φc(r,E) at the detector point. A limiting value for the kernel is derived from a diffusion approximation for the probability current at a radius R1 from the detector point. The variance of the collided flux at the detector point is thus bounded using this asymptotic form for K(R,E). The bounded point flux estimator is derived. (WHK)
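
    For orientation, a standard form of the point-detector kernel and its bounded variant, assuming a homogeneous medium and isotropic scattering (a sketch; the report's diffusion-based limit may differ in detail):

    \[ K(R,E) = \frac{e^{-\Sigma_t(E)\,R}}{4\pi R^{2}}, \qquad \tilde K(R,E) = K\bigl(\max(R,\,R_1),\,E\bigr). \]

    The 1/R² factor makes the second moment of the estimator diverge for collisions arbitrarily close to the detector; capping the kernel at its value on the sphere of radius R₁ is what keeps the variance finite.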

  6. Leidenfrost Point and Estimate of the Vapour Layer Thickness

    ERIC Educational Resources Information Center

    Gianino, Concetto

    2008-01-01

    In this article I describe an experiment involving the Leidenfrost phenomenon, which is the long lifetime of a water drop when it is deposited on a metal that is much hotter than the boiling point of water. The experiment was carried out with high-school students. The Leidenfrost point is measured and the heat laws are used to estimate the…

  7. Accurate calculation of Stokes drag for point-particle tracking in two-way coupled flows

    NASA Astrophysics Data System (ADS)

    Horwitz, J. A. K.; Mani, A.

    2016-08-01

    In this work, we propose and test a method for calculating Stokes drag applicable to particle-laden fluid flows where two-way momentum coupling is important. In the point-particle formulation, particle dynamics are coupled to fluid dynamics via a source term that appears in the respective momentum equations. When the particle Reynolds number is small and the particle diameter is smaller than the fluid scales, it is common to approximate the momentum coupling source term as the Stokes drag. The Stokes drag force depends on the difference between the undisturbed fluid velocity evaluated at the particle location, and the particle velocity. However, owing to two-way coupling, the fluid velocity is modified in the neighborhood of a particle, relative to its undisturbed value. This causes the computed Stokes drag force to be underestimated in two-way coupled point-particle simulations. We develop estimates for the drag force error as function of the particle size relative to the grid size. Because the disturbance field created by the particle contaminates the surrounding fluid, correctly calculating the drag force cannot be done solely by direct interpolation of the fluid velocity. Instead, we develop a correction method that calculates the undisturbed fluid velocity from the computed disturbed velocity field by adding an estimate of the velocity disturbance created by the particle. The correction scheme is tested for a particle settling in an otherwise quiescent fluid and is found to reduce the error in computed settling velocity by an order of magnitude compared with common interpolation schemes.
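
    The correction can be summarized in two lines (notation introduced here for illustration): the Stokes drag on a particle of diameter dp uses the undisturbed velocity, which the scheme reconstructs by adding an estimate of the particle's self-induced disturbance back onto the computed fluid velocity at the particle location,

    \[ F_d = 3\pi\mu\, d_p \bigl(\tilde u(x_p) - v_p\bigr), \qquad \tilde u(x_p) \approx u_{\mathrm{comp}}(x_p) + u_{\mathrm{dist}}(x_p), \]

    where vp is the particle velocity, u_comp the two-way-coupled velocity interpolated to the particle, and u_dist the estimated disturbance created by the particle itself.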

  8. How accurately can we predict the melting points of drug-like compounds?

    PubMed

    Tetko, Igor V; Sushko, Yurii; Novotarskyi, Sergii; Patiny, Luc; Kondratov, Ivan; Petrenko, Alexander E; Charochkina, Larisa; Asiri, Abdullah M

    2014-12-22

    This article contributes a highly accurate model for predicting the melting points (MPs) of medicinal chemistry compounds. The model was developed using the largest published data set, comprising more than 47k compounds. The distributions of MPs in drug-like and drug lead sets showed that >90% of molecules melt within [50,250]°C. The final model achieved an RMSE of less than 33 °C for molecules from this temperature interval, which is the most important for medicinal chemistry users. This performance was achieved using a consensus model that performed calculations to a significantly higher accuracy than the individual models. We found that compounds with reactive and unstable groups were overrepresented among outlying compounds. These compounds could decompose during storage or measurement, thus introducing experimental errors. While filtering the data by removing outliers generally increased the accuracy of individual models, it did not significantly affect the results of the consensus models. The three distance-to-model measures analyzed did not allow us to flag molecules whose MP values fell outside the applicability domain of the model. We believe that this negative result and the public availability of data from this article will encourage future studies to develop better approaches to define the applicability domain of models. The final model, MP data, and identified reactive groups are available online at http://ochem.eu/article/55638. PMID:25489863

  9. Highly effective and accurate weak point monitoring method for advanced design rule (1x nm) devices

    NASA Astrophysics Data System (ADS)

    Ahn, Jeongho; Seong, ShiJin; Yoon, Minjung; Park, Il-Suk; Kim, HyungSeop; Ihm, Dongchul; Chin, Soobok; Sivaraman, Gangadharan; Li, Mingwei; Babulnath, Raghav; Lee, Chang Ho; Kurada, Satya; Brown, Christine; Galani, Rajiv; Kim, JaeHyun

    2014-04-01

    Historically, when semiconductor devices were manufactured at 45 nm or larger design rules, IC manufacturing yield was mainly determined by global random variations, and therefore the chip manufacturers / manufacturing teams were mainly responsible for yield improvement. With the introduction of sub-45 nm semiconductor technologies, yield started to be dominated by systematic variations, primarily centered on resolution problems, copper/low-k interconnects, and CMP. These local systematic variations, which have become decisively greater than global random variations, are design-dependent [1, 2], and therefore designers now share the responsibility for increasing yield with manufacturers / manufacturing teams. A widening manufacturing gap has led to a dramatic increase in design rules that are either too restrictive or do not guarantee a litho/etch hotspot-free design. The semiconductor industry is currently limited to 193 nm scanners, and no relief is expected from the equipment side to prevent or eliminate these systematic hotspots. Hence many design houses have developed innovative products that check hotspots with model-based lithography checks to validate design manufacturability, which also account for the complex two-dimensional effects that stem from aggressive scaling of 193 nm lithography. Most of these hotspots (a.k.a. weak points) are especially seen on Back End of the Line (BEOL) process levels like Mx ADI, Mx Etch, and Mx CMP. Inspecting some of these BEOL levels can be extremely challenging, as there is a great deal of wafer noise that can hinder an inspector's ability to detect and monitor the defects or weak points of interest. In this work we have attempted to accurately inspect the weak points using a novel broadband plasma optical inspection approach that enhances the defect signal from patterns of interest (POI) and precisely suppresses surrounding wafer noise. This new approach is a paradigm shift in wafer inspection

  10. On the accurate estimation of gap fraction during daytime with digital cover photography

    NASA Astrophysics Data System (ADS)

    Hwang, Y. R.; Ryu, Y.; Kimm, H.; Macfarlane, C.; Lang, M.; Sonnentag, O.

    2015-12-01

    Digital cover photography (DCP) has emerged as an indirect method to obtain gap fraction accurately. Thus far, however, subjective choices, such as setting the camera relative exposure value (REV) and the threshold in the histogram, have hindered accurate computation of gap fraction. Here we propose a novel method that enables gap fraction to be measured accurately by DCP during daytime under various sky conditions. The method computes gap fraction from a single unsaturated raw DCP image, which is corrected for scattering effects by canopies using a sky image reconstructed from the same raw image. To test the sensitivity of the derived gap fraction to diverse REVs, solar zenith angles, and canopy structures, we took photos at one-hour intervals between sunrise and midday under dense and sparse canopies with REVs from 0 to -5. The novel method showed little variation in gap fraction across REVs in both dense and sparse canopies over a diverse range of solar zenith angles. A perforated-panel experiment, used to test the accuracy of the estimated gap fraction, confirmed that the method yielded accurate and consistent gap fractions across different hole sizes, gap fractions, and solar zenith angles. These findings highlight that the novel method opens new opportunities to estimate gap fraction accurately during daytime from sparse to dense canopies, which will be useful for monitoring LAI precisely and validating satellite remote sensing LAI products efficiently.

  11. Estimating monthly temperature using point based interpolation techniques

    NASA Astrophysics Data System (ADS)

    Saaban, Azizan; Mah Hashim, Noridayu; Murat, Rusdi Indra Zuhdi

    2013-04-01

    This paper discusses the use of point-based interpolation to estimate the temperature at unallocated meteorology stations in Peninsular Malaysia, using data for the year 2010 collected from the Malaysian Meteorology Department. Two point-based interpolation methods, Inverse Distance Weighted (IDW) and Radial Basis Function (RBF), are considered. The accuracy of the methods is evaluated using the root mean square error (RMSE). The results show that RBF with a thin-plate-spline model is suitable as a temperature estimator for the months of January and December, while RBF with a multiquadric model is suitable for estimating the temperature in the remaining months.
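
    A minimal IDW sketch in Python, with illustrative names: the estimate at an unsampled location is the inverse-distance-weighted average of the station temperatures. The RBF alternatives in the paper would replace these weights with a fitted thin-plate-spline or multiquadric expansion.

      import numpy as np

      def idw(stations, temps, query, power=2.0):
          """stations: (n, 2) coordinates; temps: (n,) values; query: (2,) point."""
          d = np.linalg.norm(stations - query, axis=1)
          if np.any(d == 0):                  # query coincides with a station
              return float(temps[np.argmin(d)])
          w = d ** -power
          return float(np.sum(w * temps) / np.sum(w))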

  12. A new geometric-based model to accurately estimate arm and leg inertial estimates.

    PubMed

    Wicke, Jason; Dumas, Geneviève A

    2014-06-01

    Segment estimates of mass, center of mass, and moment of inertia are required input parameters for analyzing the forces and moments acting across the joints. The objectives of this study were to propose a new geometric model for limb segments, to evaluate it against criterion values obtained from DXA, and to compare its performance to five other popular models. Twenty-five female and 24 male college students participated in the study. For the criterion measures, the participants underwent a whole-body DXA scan, and estimates of segment mass, center of mass location, and moment of inertia (frontal plane) were computed directly from the DXA mass units. For the new model, the volume was determined from two standing photographs, frontal and sagittal. Each segment was modeled as a stack of slices, whose sections were ellipses where they did not adjoin another segment and sectioned ellipses where they did (e.g., upper arm and trunk). The lengths of the ellipse axes were obtained from the photographs. In addition, a sex-specific, non-uniform density function was developed for each segment. A series of anthropometric measurements was also taken, directly following the definitions provided for the different body segment models tested, and the same parameters were determined for each model. Comparison of the models showed that estimates from the new model were consistently closer to the DXA criterion than those from the other models, with an error of less than 5% for mass and moment of inertia and less than about 6% for center of mass location. PMID:24735506
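
    The volume computation reduces to a few lines. As a hedged sketch (slice semi-axes a_i, b_i read from the frontal and sagittal photographs; a uniform slice thickness dh is assumed here), each full elliptical slice contributes π·a_i·b_i·dh, and mass follows by weighting the slices with the sex-specific density function:

      import numpy as np

      def segment_volume(a, b, dh):
          """a, b: (n,) slice semi-axes [m]; dh: slice thickness [m]."""
          return float(np.sum(np.pi * np.asarray(a) * np.asarray(b) * dh))

      def segment_mass(a, b, dh, density):
          """density: (n,) slice densities [kg/m^3] from the density function."""
          return float(np.sum(np.pi * np.asarray(a) * np.asarray(b) * dh
                              * np.asarray(density)))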

  13. Accurate Estimation of the Entropy of Rotation-Translation Probability Distributions.

    PubMed

    Fogolari, Federico; Dongmo Foumthuim, Cedrix Jurgal; Fortuna, Sara; Soler, Miguel Angel; Corazza, Alessandra; Esposito, Gennaro

    2016-01-12

    The estimation of rotational and translational entropies in the context of ligand binding has been the subject of long-time investigations. The high dimensionality (six) of the problem and the limited amount of sampling often prevent the required resolution to provide accurate estimates by the histogram method. Recently, the nearest-neighbor distance method has been applied to the problem, but the solutions provided either address rotation and translation separately, therefore lacking correlations, or use a heuristic approach. Here we address rotational-translational entropy estimation in the context of nearest-neighbor-based entropy estimation, solve the problem numerically, and provide an exact and an approximate method to estimate the full rotational-translational entropy. PMID:26605696
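
    For context, a common statement of the 1-nearest-neighbor (Kozachenko-Leonenko) entropy estimator on which such nearest-neighbor methods build (the paper's contribution additionally requires a joint metric on the six-dimensional rotation-translation space):

    \[ \hat H = \frac{d}{N}\sum_{i=1}^{N}\ln \varepsilon_i + \ln V_d + \ln(N-1) + \gamma, \]

    where ε_i is the distance from sample i to its nearest neighbor, d the dimension (six for rigid poses), V_d the volume of the d-dimensional unit ball, and γ the Euler-Mascheroni constant.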

  14. A comparative study for the estimation of geodetic point velocity by artificial neural networks

    NASA Astrophysics Data System (ADS)

    Yilmaz, M.; Gullu, M.

    2014-06-01

    The space geodesy era provides velocity information, so that the positioning of geodetic points can take their time evolution into account. Geodetic point positions on the Earth's surface change over time due to plate tectonics, and these changes have to be accounted for in geodetic applications. The velocity field of a geodetic network is determined from GPS sessions. Velocities of newly established geodetic points within the network are estimated from this velocity field by interpolation methods. In this study, the utility of Artificial Neural Networks (ANN), widely applied in diverse fields of science, is investigated for estimating geodetic point velocities. A Back Propagation Artificial Neural Network (BPANN) and a Radial Basis Function Neural Network (RBFNN) are used to estimate the geodetic point velocities. In order to evaluate the performance of the ANNs, the velocities are also interpolated by the Kriging (KRIG) method. The results are compared in terms of root mean square error (RMSE) over five different geodetic networks. It was concluded that estimation of geodetic point velocity by BPANN is more effective and accurate than by KRIG when the points to be estimated outnumber the known points.

  15. Effects of LiDAR point density and landscape context on estimates of urban forest biomass

    NASA Astrophysics Data System (ADS)

    Singh, Kunwar K.; Chen, Gang; McCarter, James B.; Meentemeyer, Ross K.

    2015-03-01

    Light Detection and Ranging (LiDAR) data is being increasingly used as an effective alternative to conventional optical remote sensing to accurately estimate aboveground forest biomass ranging from individual tree to stand levels. Recent advancements in LiDAR technology have resulted in higher point densities and improved data accuracies accompanied by challenges for procuring and processing voluminous LiDAR data for large-area assessments. Reducing point density lowers data acquisition costs and overcomes computational challenges for large-area forest assessments. However, how does lower point density impact the accuracy of biomass estimation in forests containing a great level of anthropogenic disturbance? We evaluate the effects of LiDAR point density on the biomass estimation of remnant forests in the rapidly urbanizing region of Charlotte, North Carolina, USA. We used multiple linear regression to establish a statistical relationship between field-measured biomass and predictor variables derived from LiDAR data with varying densities. We compared the estimation accuracies between a general Urban Forest type and three Forest Type models (evergreen, deciduous, and mixed) and quantified the degree to which landscape context influenced biomass estimation. The explained biomass variance of the Urban Forest model, using adjusted R2, was consistent across the reduced point densities, with the highest difference of 11.5% between the 100% and 1% point densities. The combined estimates of Forest Type biomass models outperformed the Urban Forest models at the representative point densities (100% and 40%). The Urban Forest biomass model with development density of 125 m radius produced the highest adjusted R2 (0.83 and 0.82 at 100% and 40% LiDAR point densities, respectively) and the lowest RMSE values, highlighting a distance impact of development on biomass estimation. Our evaluation suggests that reducing LiDAR point density is a viable solution to regional

  16. Polynomial Fitting of DT-MRI Fiber Tracts Allows Accurate Estimation of Muscle Architectural Parameters

    PubMed Central

    Damon, Bruce M.; Heemskerk, Anneriet M.; Ding, Zhaohua

    2012-01-01

    Fiber curvature is a functionally significant muscle structural property, but its estimation from diffusion-tensor MRI fiber tracking data may be confounded by noise. The purpose of this study was to investigate the use of polynomial fitting of fiber tracts for improving the accuracy and precision of fiber curvature (κ) measurements. Simulated image datasets were created in order to provide data with known values for κ and pennation angle (θ). Simulations were designed to test the effects of increasing inherent fiber curvature (3.8, 7.9, 11.8, and 15.3 m−1), signal-to-noise ratio (50, 75, 100, and 150), and voxel geometry (13.8 and 27.0 mm3 voxel volume with isotropic resolution; 13.5 mm3 volume with an aspect ratio of 4.0) on κ and θ measurements. In the originally reconstructed tracts, θ was estimated accurately under most curvature and all imaging conditions studied; however, the estimates of κ were imprecise and inaccurate. Fitting the tracts to 2nd order polynomial functions provided accurate and precise estimates of κ for all conditions except very high curvature (κ=15.3 m−1), while preserving the accuracy of the θ estimates. Similarly, polynomial fitting of in vivo fiber tracking data reduced the κ values of fitted tracts from those of unfitted tracts and did not change the θ values. Polynomial fitting of fiber tracts allows accurate estimation of physiologically reasonable values of κ, while preserving the accuracy of θ estimation. PMID:22503094
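
    A compact sketch of the fitting step, with illustrative names: each coordinate of a tract is fit to a 2nd-order polynomial in a path parameter, and the curvature is evaluated analytically from the fitted derivatives as κ = |r′ × r″| / |r′|³. For κ in physical units (m⁻¹), the parameter should be cumulative arc length rather than the normalized parameter used here.

      import numpy as np

      def tract_curvature(points):
          """points: (n, 3) ordered fiber-tract positions; returns kappa along it."""
          s = np.linspace(0.0, 1.0, len(points))          # normalized path parameter
          coefs = [np.polyfit(s, points[:, k], 2) for k in range(3)]
          d1 = np.array([np.polyval(np.polyder(c, 1), s) for c in coefs]).T
          d2 = np.array([np.polyval(np.polyder(c, 2), s) for c in coefs]).T
          num = np.linalg.norm(np.cross(d1, d2), axis=1)
          return num / np.linalg.norm(d1, axis=1) ** 3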

  17. Skin Temperature Over the Carotid Artery, an Accurate Non-invasive Estimation of Near Core Temperature

    PubMed Central

    Imani, Farsad; Karimi Rouzbahani, Hamid Reza; Goudarzi, Mehrdad; Tarrahi, Mohammad Javad; Ebrahim Soltani, Alireza

    2016-01-01

    Background: During anesthesia, continuous body temperature monitoring is essential, especially in children. Anesthesia can increase the risk of body heat loss by three to four times. Hypothermia in children results in increased morbidity and mortality. Since the measurement points of the core body temperature are not easily accessible, near-core sites, such as the rectum, are used. Objectives: The purpose of this study was to measure the skin temperature over the carotid artery and compare it with the rectal temperature, in order to propose a model for accurate estimation of near-core body temperature. Patients and Methods: In total, 124 patients aged 2 - 6 years, undergoing elective surgery, were selected. The temperatures of the rectum and the skin over the carotid artery were measured. The patients were then randomly divided into two groups (each including 62 subjects), namely the modeling group (MG) and the validation group (VG). First, in the modeling group, the average temperatures of the rectum and the skin over the carotid artery were measured separately. The appropriate model was determined according to the significance of the model’s coefficients. The obtained model was used to predict the rectal temperature in the second group (VG). The correlation of the predicted values with the real values (the measured rectal temperatures) in the second group was investigated, and the difference in the average values of the two groups was examined for significance. Results: In the modeling group, the average rectal and carotid temperatures were 36.47 ± 0.54°C and 35.45 ± 0.62°C, respectively. The final model was: rectal temperature = 0.561 × carotid temperature + 16.583. The predicted value was calculated from the regression model and compared with the measured rectal value, showing no significant difference (P = 0.361). Conclusions: The present study was the first research, in which rectum temperature was compared with that

  18. Thermal Imaging of Earth for Accurate Pointing of Deep-Space Antennas

    NASA Technical Reports Server (NTRS)

    Ortiz, Gerardo; Lee, Shinhak

    2005-01-01

    A report discusses a proposal to use thermal (long-wavelength infrared) images of the Earth, as seen from spacecraft at interplanetary distances, for pointing antennas and telescopes toward the Earth for Ka-band and optical communications. The purpose is to overcome two limitations of using visible images: (1) at large Earth phase angles, the light from the Earth is too faint; and (2) performance is degraded by large albedo variations associated with weather changes. In particular, it is proposed to use images in the wavelength band of 8 to 13 μm, wherein the appearance of the Earth is substantially independent of the Earth phase angle and emissivity variations are small. The report addresses tracking requirements for optical and Ka-band communications, selection of the wavelength band, available signal level versus phase angle, background noise, and signal-to-noise ratio. Tracking errors are estimated for several conceptual systems employing currently available infrared image sensors. It is found that at Mars range, it should be possible to locate the centroid of the Earth image within a noise equivalent angle (a random angular error) between 10 and 150 nanoradians at a bias error of no more than 80 nanoradians.

  19. Accurate estimation of forest carbon stocks by 3-D remote sensing of individual trees.

    PubMed

    Omasa, Kenji; Qiu, Guo Yu; Watanuki, Kenichi; Yoshimi, Kenji; Akiyama, Yukihide

    2003-03-15

    Forests are one of the most important carbon sinks on Earth. However, owing to the complex structure, variable geography, and large area of forests, accurate estimation of forest carbon stocks is still a challenge for both site surveying and remote sensing. For these reasons, the Kyoto Protocol requires the establishment of methodologies for estimating the carbon stocks of forests (Kyoto Protocol, Article 5). A possible solution to this challenge is to remotely measure the carbon stocks of every tree in an entire forest. Here, we present a methodology for estimating carbon stocks of a Japanese cedar forest by using a high-resolution, helicopter-borne 3-dimensional (3-D) scanning lidar system that measures the 3-D canopy structure of every tree in a forest. Results show that a digital image (10-cm mesh) of woody canopy can be acquired. The treetop can be detected automatically with a reasonable accuracy. The absolute error ranges for tree height measurements are within 42 cm. Allometric relationships of height to carbon stocks then permit estimation of total carbon storage by measurement of carbon stocks of every tree. Thus, we suggest that our methodology can be used to accurately estimate the carbon stocks of Japanese cedar forests at a stand scale. Periodic measurements will reveal changes in forest carbon stocks. PMID:12680675

  1. A method to accurately estimate the muscular torques of human wearing exoskeletons by torque sensors.

    PubMed

    Hwang, Beomsoo; Jeon, Doyoung

    2015-01-01

    In exoskeletal robots, quantification of the user's muscular effort is important for recognizing the user's motion intentions and evaluating motor abilities. In this paper, we attempt to estimate the user's muscular effort accurately using joint torque sensors, whose measurements contain the dynamic effects of the human body, such as the inertial, Coriolis, and gravitational torques, as well as the torque produced by active muscular effort. It is important to extract the dynamic effects of the user's limb accurately from the measured torque. The user's limb dynamics are formulated, and a convenient method for identifying user-specific parameters is suggested for estimating the user's muscular torque in robotic exoskeletons. Experiments were carried out on a wheelchair-integrated lower-limb exoskeleton, EXOwheel, which was equipped with torque sensors in the hip and knee joints. The proposed methods were evaluated by 10 healthy participants during body-weight-supported gait training. The experimental results show that the torque sensors can estimate the muscular torque accurately under both relaxed and activated muscle conditions. PMID:25860074
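
    In the standard rigid-body form (a sketch of the decomposition the abstract describes, not necessarily the authors' notation), the muscular torque is what remains of the sensor reading after the identified limb dynamics are subtracted:

    \[ \tau_{\mathrm{muscle}} = \tau_{\mathrm{sensor}} - \bigl( M(q)\,\ddot q + C(q,\dot q)\,\dot q + g(q) \bigr), \]

    with M, C, and g the user-specific inertia, Coriolis, and gravity terms obtained from the parameter-identification step.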

  2. A Simulation Study Comparison of Bayesian Estimation with Conventional Methods for Estimating Unknown Change Points

    ERIC Educational Resources Information Center

    Wang, Lijuan; McArdle, John J.

    2008-01-01

    The main purpose of this research is to evaluate the performance of a Bayesian approach for estimating unknown change points using Monte Carlo simulations. The univariate and bivariate unknown change point mixed models were presented and the basic idea of the Bayesian approach for estimating the models was discussed. The performance of Bayesian…

  3. Correlations estimate volume distilled using gravity, boiling point

    SciTech Connect

    Moreno, A.; Consuelo Perez de Alba, M. del; Manriquez, L.; Guardia Mendoz, P. de la

    1995-10-23

    Mathematical and graphic correlations have been developed for estimating cumulative volume distilled as a function of crude API gravity and true boiling point (TBP). The correlations can be used for crudes with gravities of 21-34° API and boiling points of 150-540 °C. In distillation predictions for several Mexican and Iraqi crude oils, the correlations have exhibited accuracy comparable to that of laboratory measurements. The paper discusses the need for such correlations and their testing.

  4. Leidenfrost point and estimate of the vapour layer thickness

    NASA Astrophysics Data System (ADS)

    Gianino, Concetto

    2008-11-01

    In this article I describe an experiment involving the Leidenfrost phenomenon, which is the long lifetime of a water drop when it is deposited on a metal that is much hotter than the boiling point of water. The experiment was carried out with high-school students. The Leidenfrost point is measured and the heat laws are used to estimate the thickness of the vapour layer, d≈0.06 mm, which prevents the drop from touching the hotplate.
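
    One plausible reconstruction of the "heat laws" balance (an illustration under stated assumptions, not necessarily the article's exact derivation): conduction across the vapour film supplies the latent heat that evaporates the drop over its lifetime,

    \[ \frac{k_v\,\Delta T}{d}\,A\,t \approx m\,L_v \quad\Longrightarrow\quad d \approx \frac{k_v\,\Delta T\,A\,t}{m\,L_v}, \]

    with k_v the vapour thermal conductivity, ΔT the plate superheat, A the drop's base area, t the observed drop lifetime, m the drop mass, and L_v the latent heat of vaporization.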

  5. Derivative Estimation from Marginally Sampled Vector Point Functions.

    NASA Astrophysics Data System (ADS)

    Doswell, Charles A., III; Caracena, Fernando

    1988-01-01

    Several aspects of the problem of estimating derivatives from an irregular, discrete sample of vector observations are considered. It is shown that one must properly account for transformations from one vector representation to another if one is to preserve the original properties of a vector point function during such a transformation (e.g., from u and v wind components to speed and direction). A simple technique for calculating the linear kinematic properties of a vector point function (translation, curl, divergence, and deformation) is derived for any noncolinear triad of points. This technique is equivalent to a calculation done using line integrals, but is much more efficient. It is shown that estimating derivatives by mapping the vector components onto a grid and taking finite differences is not equivalent to estimating the derivatives and mapping those estimates onto a grid, whenever the original observations are taken on a discrete, irregular network. This problem is particularly important whenever the data network is sparse relative to the wavelength of the phenomena. It is shown that conventional mapping/differencing fails to use all the information in the data as well. Some suggestions for minimizing the errors in derivative estimation for general (nonlinear) vector point functions are discussed.
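
    A small sketch of the triad calculation, with illustrative names: assuming the wind varies linearly over the triangle, the three (u, v) observations determine all six coefficients of the linear field, from which the kinematic properties follow.

      import numpy as np

      def triad_kinematics(xy, uv):
          """xy: (3, 2) station positions; uv: (3, 2) observed wind components."""
          # linear model u = u0 + ux*x + uy*y (and likewise for v)
          A = np.column_stack([np.ones(3), xy[:, 0], xy[:, 1]])
          u0, ux, uy = np.linalg.solve(A, uv[:, 0])
          v0, vx, vy = np.linalg.solve(A, uv[:, 1])
          return {
              "translation": (u0, v0),
              "divergence": ux + vy,
              "vorticity": vx - uy,               # vertical component of the curl
              "stretching_deformation": ux - vy,
              "shearing_deformation": vx + uy,
          }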

  6. Voxel-based registration of simulated and real patient CBCT data for accurate dental implant pose estimation

    NASA Astrophysics Data System (ADS)

    Moreira, António H. J.; Queirós, Sandro; Morais, Pedro; Rodrigues, Nuno F.; Correia, André Ricardo; Fernandes, Valter; Pinho, A. C. M.; Fonseca, Jaime C.; Vilaça, João. L.

    2015-03-01

    The success of dental implant-supported prostheses is directly linked to the accuracy obtained during the implant's pose estimation (position and orientation). Although traditional impression techniques and recent digital acquisition methods are acceptably accurate, a simultaneously fast, accurate, and operator-independent methodology is still lacking. Hereto, an image-based framework is proposed to estimate the patient-specific implant's pose using cone-beam computed tomography (CBCT) and prior knowledge of the implanted model. The pose estimation is accomplished in a three-step approach: (1) a region-of-interest is extracted from the CBCT data using 2 operator-defined points at the implant's main axis; (2) a simulated CBCT volume of the known implanted model is generated through Feldkamp-Davis-Kress reconstruction and coarsely aligned to the defined axis; and (3) a voxel-based rigid registration is performed to optimally align both patient and simulated CBCT data, extracting the implant's pose from the optimal transformation. Three experiments were performed to evaluate the framework: (1) an in silico study using 48 implants distributed through 12 three-dimensional synthetic mandibular models; (2) an in vitro study using an artificial mandible with 2 dental implants acquired with an i-CAT system; and (3) two clinical case studies. The results showed positional errors of 67 ± 34 μm and 108 μm, and angular misfits of 0.15 ± 0.08° and 1.4°, for experiments 1 and 2, respectively. Moreover, in experiment 3, visual assessment of the clinical data showed a coherent alignment of the reference implant. Overall, a novel image-based framework for implant pose estimation from CBCT data was proposed, showing accurate results in agreement with dental prosthesis modelling requirements.

  7. Easy and accurate variance estimation of the nonparametric estimator of the partial area under the ROC curve and its application.

    PubMed

    Yu, Jihnhee; Yang, Luge; Vexler, Albert; Hutson, Alan D

    2016-06-15

    The receiver operating characteristic (ROC) curve is a popular technique with applications, for example, in investigating the accuracy of a biomarker in delineating between disease and non-disease groups. A common measure of the accuracy of a given diagnostic marker is the area under the ROC curve (AUC). In contrast with the AUC, the partial area under the ROC curve (pAUC) looks into the area for certain specificities (i.e., true negative rates) only, and it can often be clinically more relevant than examining the entire ROC curve. The pAUC is commonly estimated based on a U-statistic with the plug-in sample quantile, making the estimator a non-traditional U-statistic. In this article, we propose an accurate and easy method to obtain the variance of the nonparametric pAUC estimator. The proposed method is easy to implement for both a single biomarker test and the comparison of two correlated biomarkers, because it simply adapts the existing variance estimator of U-statistics. We show the accuracy and other advantages of the proposed variance estimation method by broadly comparing it with previously existing methods. Further, we develop an empirical likelihood inference method based on the proposed variance estimator through a simple implementation. In an application, we demonstrate that, depending on whether the inference is based on the AUC or the pAUC, we can reach different decisions about the prognostic ability of the same set of biomarkers. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26790540
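
    The nonparametric pAUC point estimator itself (a U-statistic with a plug-in sample quantile) is compact; in the sketch below the bootstrap is only a generic stand-in for the paper's analytic variance estimator, and all data are simulated:

        import numpy as np

        def pauc(cases, controls, fpr_max):
            """Nonparametric pAUC over false-positive rates in (0, fpr_max)."""
            q = np.quantile(controls, 1.0 - fpr_max)      # plug-in sample quantile
            y = controls[controls >= q]                   # thresholds with FPR <= fpr_max
            hits = np.sum(cases[:, None] > y[None, :])    # U-statistic kernel (ties ignored)
            return hits / (cases.size * controls.size)

        rng = np.random.default_rng(1)
        cases = rng.normal(1.0, 1.0, 200)      # diseased biomarker values
        controls = rng.normal(0.0, 1.0, 300)   # non-diseased values
        print(pauc(cases, controls, fpr_max=0.2))

        boot = [pauc(rng.choice(cases, cases.size), rng.choice(controls, controls.size), 0.2)
                for _ in range(500)]
        print(np.var(boot, ddof=1))            # crude bootstrap variance, for comparison only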

  8. Estimating the Effective Permittivity for Reconstructing Accurate Microwave-Radar Images.

    PubMed

    Lavoie, Benjamin R; Okoniewski, Michal; Fear, Elise C

    2016-01-01

    We present preliminary results from a method for estimating the optimal effective permittivity for reconstructing microwave-radar images. Using knowledge of how microwave-radar images are formed, we identify characteristics that are typical of good images, and define a fitness function to measure the relative image quality. We build a polynomial interpolant of the fitness function in order to identify the most likely permittivity values of the tissue. To make the estimation process more efficient, the polynomial interpolant is constructed using a locally and dimensionally adaptive sampling method that is a novel combination of stochastic collocation and polynomial chaos. Examples, using a series of simulated, experimental and patient data collected using the Tissue Sensing Adaptive Radar system, which is under development at the University of Calgary, are presented. These examples show how, using our method, accurate images can be reconstructed starting with only a broad estimate of the permittivity range. PMID:27611785

  9. Accurate estimation of object location in an image sequence using helicopter flight data

    NASA Technical Reports Server (NTRS)

    Tang, Yuan-Liang; Kasturi, Rangachar

    1994-01-01

    In autonomous navigation, it is essential to obtain a three-dimensional (3D) description of the static environment in which the vehicle is traveling. For a rotorcraft conducting low-altitude flight, this description is particularly useful for obstacle detection and avoidance. In this paper, we address the problem of 3D position estimation for static objects from a monocular sequence of images captured from a low-altitude flying helicopter. Since the environment is static, it is well known that the optical flow in the image will produce a radiating pattern from the focus of expansion. We propose a motion analysis system which utilizes the epipolar constraint to accurately estimate 3D positions of scene objects in a real world image sequence taken from a low-altitude flying helicopter. Results show that this approach gives good estimates of object positions near the rotorcraft's intended flight-path.
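
    For the special case of pure forward translation, depth follows directly from the radiating flow pattern: a point at image offset r from the focus of expansion (FOE) flows radially with magnitude |flow| = |r| * Tz / Z. A minimal sketch under that simplifying assumption (the paper's full system additionally exploits the epipolar constraint and helicopter flight data):

        import numpy as np

        def depth_from_foe(pts, flows, foe, tz):
            """Depth of static points for a camera translating by tz along its axis:
            Z = tz * |p - foe| / |flow| (pinhole model, pure forward translation)."""
            r = np.linalg.norm(pts - foe, axis=1)
            f = np.linalg.norm(flows, axis=1)
            return tz * r / f

        pts = np.array([[420.0, 240.0]])     # pixel 100 px right of the FOE
        flows = np.array([[2.0, 0.0]])       # radial flow of 2 px/frame
        print(depth_from_foe(pts, flows, foe=np.array([320.0, 240.0]), tz=1.0))  # -> 50.0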

  10. Effective Echo Detection and Accurate Orbit Estimation Algorithms for Space Debris Radar

    NASA Astrophysics Data System (ADS)

    Isoda, Kentaro; Sakamoto, Takuya; Sato, Toru

    Orbit estimation of space debris, objects of no inherent value orbiting the earth, is important for avoiding collisions with spacecraft. The Kamisaibara Spaceguard Center radar system was built in 2004 as the first radar facility in Japan devoted to the observation of space debris. In order to detect smaller debris, coherent integration is effective in improving the SNR (signal-to-noise ratio). However, it is difficult to apply coherent integration to real data because the motions of the targets are unknown. An effective algorithm is proposed for echo detection and orbit estimation of the faint echoes from space debris; the algorithm exploits the characteristics of the evaluation function. Experiments show that the proposed algorithm improves SNR by 8.32 dB and enables orbital parameters to be estimated accurately enough to allow re-tracking with a single radar.
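
    The gain from coherent integration is easy to demonstrate: averaging N phase-aligned pulses leaves the signal power unchanged while the noise power drops by a factor of N, i.e. a 10*log10(N) dB SNR improvement. The sketch below assumes the target motion has already been compensated, which is precisely the hard part the proposed algorithm addresses:

        import numpy as np

        rng = np.random.default_rng(0)
        n_pulses, n_samples, snr_single = 64, 1000, 0.1   # weak echo buried in noise

        echo = np.sqrt(snr_single) * np.exp(1j * 2 * np.pi * 0.05 * np.arange(n_samples))
        noise = (rng.standard_normal((n_pulses, n_samples))
                 + 1j * rng.standard_normal((n_pulses, n_samples))) / np.sqrt(2)
        pulses = echo + noise                  # same echo phase on every pulse

        avg = pulses.mean(axis=0)              # coherent integration
        snr_before = np.mean(np.abs(echo) ** 2) / np.mean(np.abs(noise) ** 2)
        snr_after = np.mean(np.abs(echo) ** 2) / np.mean(np.abs(avg - echo) ** 2)
        print(10 * np.log10(snr_after / snr_before))   # ~ 10*log10(64) ~ 18 dB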

  11. Parameter Estimation of Ion Current Formulations Requires Hybrid Optimization Approach to Be Both Accurate and Reliable

    PubMed Central

    Loewe, Axel; Wilhelms, Mathias; Schmid, Jochen; Krause, Mathias J.; Fischer, Fathima; Thomas, Dierk; Scholz, Eberhard P.; Dössel, Olaf; Seemann, Gunnar

    2016-01-01

    Computational models of cardiac electrophysiology have provided insights into arrhythmogenesis and paved the way toward tailored therapies in recent years. To fully leverage in silico models in future research, however, these models need to be adapted to reflect pathologies, genetic alterations, or pharmacological effects. A common approach is to leave the structure of established models unaltered and estimate the values of a set of parameters. Today’s high-throughput patch clamp data acquisition methods require robust, unsupervised algorithms that estimate parameters both accurately and reliably. In this work, two classes of optimization approaches are evaluated: gradient-based trust-region-reflective and derivative-free particle swarm algorithms. Using synthetic input data and different ion current formulations from the Courtemanche et al. electrophysiological model of human atrial myocytes, we show that neither of the two schemes alone succeeds in meeting all requirements. Sequential combination of the two algorithms improved the performance to some extent, but not satisfactorily. Thus, we propose a novel hybrid approach coupling the two algorithms in each iteration. This hybrid approach yielded very accurate estimates with minimal dependency on the initial guess using synthetic input data for which a ground truth parameter set exists. When applied to measured data, the hybrid approach yielded the best fit, again with minimal variation. Using the proposed algorithm, a single run is sufficient to estimate the parameters. The degree of superiority over the other investigated algorithms in terms of accuracy and robustness depended on the type of current. In contrast to the non-hybrid approaches, the proposed method proved to be optimal for data of arbitrary signal-to-noise ratio. The hybrid algorithm proposed in this work provides an important tool to integrate experimental data into computational models both accurately and robustly, allowing one to assess the often non
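
    One way to couple the two optimizer classes in each iteration can be sketched as below: a particle swarm step proposes candidates and a trust-region-reflective least-squares polish refines each one. The toy single-exponential "current" model and all settings are invented for illustration and differ from the authors' actual formulations:

        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(0)

        def residuals(p, t, data):
            g, tau = p                        # toy model: I(t) = g * exp(-t / tau)
            return g * np.exp(-t / tau) - data

        t = np.linspace(0.0, 10.0, 200)
        data = 2.0 * np.exp(-t / 3.0) + 0.01 * rng.standard_normal(t.size)

        lb, ub = np.array([0.1, 0.1]), np.array([10.0, 10.0])
        pos = rng.uniform(lb, ub, (20, 2))    # particle positions
        vel = np.zeros_like(pos)
        pbest = pos.copy()
        pbest_cost = np.array([np.sum(residuals(p, t, data) ** 2) for p in pos])

        for _ in range(30):
            gbest = pbest[np.argmin(pbest_cost)]
            vel = (0.7 * vel + 1.5 * rng.random(pos.shape) * (pbest - pos)
                   + 1.5 * rng.random(pos.shape) * (gbest - pos))   # PSO step
            pos = np.clip(pos + vel, lb, ub)
            for i, p in enumerate(pos):       # gradient-based polish of each particle
                sol = least_squares(residuals, p, args=(t, data),
                                    bounds=(lb, ub), method="trf", max_nfev=20)
                if 2 * sol.cost < pbest_cost[i]:       # sol.cost = 0.5 * sum(res**2)
                    pbest[i], pbest_cost[i] = sol.x, 2 * sol.cost

        print(pbest[np.argmin(pbest_cost)])   # ~ (2.0, 3.0)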

  12. Evaluating lidar point densities for effective estimation of aboveground biomass

    USGS Publications Warehouse

    Wu, Zhuoting; Dye, Dennis G.; Stoker, Jason; Vogel, John M.; Velasco, Miguel G.; Middleton, Barry R.

    2016-01-01

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) was recently established to provide airborne lidar data coverage on a national scale. As part of a broader research effort of the USGS to develop an effective remote sensing-based methodology for the creation of an operational biomass Essential Climate Variable (Biomass ECV) data product, we evaluated the performance of airborne lidar data at various pulse densities against Landsat 8 satellite imagery in estimating aboveground biomass for forests and woodlands in a study area in east-central Arizona, U.S. High point density airborne lidar data were randomly sampled to produce five lidar datasets with reduced densities ranging from 0.5 to 8 points/m2, corresponding to the point density range of 3DEP for providing national lidar coverage over time. Lidar-derived aboveground biomass estimate errors showed an overall decreasing trend as lidar point density increased from 0.5 to 8 points/m2. Landsat 8-based aboveground biomass estimates produced errors larger than those at the lowest lidar point density of 0.5 points/m2, and therefore Landsat 8 observations alone were ineffective relative to airborne lidar for generating a Biomass ECV product, at least for the forest and woodland vegetation types of the Southwestern U.S. While a national Biomass ECV product with optimal accuracy could potentially be achieved with 3DEP data at 8 points/m2, our results indicate that even lower density lidar data could be sufficient to provide a national Biomass ECV product with accuracies significantly higher than that from Landsat observations alone.
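
    Density reduction by random sampling, as used to derive the five test datasets, is a one-liner; the tile size and densities below simply mirror the numbers quoted in the abstract:

        import numpy as np

        def thin_to_density(points, area_m2, target_density, rng):
            """Randomly thin a point cloud to approximately target_density points/m2."""
            n_keep = min(int(target_density * area_m2), points.shape[0])
            return points[rng.choice(points.shape[0], size=n_keep, replace=False)]

        rng = np.random.default_rng(42)
        cloud = rng.uniform(0.0, 100.0, (73200, 3))   # ~7.32 points/m2 on a 100 m x 100 m tile
        for density in [0.5, 1, 2, 4, 8]:             # 8 exceeds the available density: all kept
            sub = thin_to_density(cloud, 100 * 100, density, rng)
            print(density, sub.shape[0])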

  13. Accurate reconstruction of viral quasispecies spectra through improved estimation of strain richness

    PubMed Central

    2015-01-01

    Background: Estimating the number of different species (richness) in a mixed microbial population has been a main focus in metagenomic research. Existing methods of species richness estimation ride on the assumption that the reads in each assembled contig correspond to only one of the microbial genomes in the population. This assumption and the underlying probabilistic formulations of existing methods are not useful for quasispecies populations where the strains are highly genetically related. The lack of knowledge on the number of different strains in a quasispecies population is observed to hinder the precision of existing Viral Quasispecies Spectrum Reconstruction (QSR) methods due to the uncontrolled reconstruction of a large number of in silico false positives. In this work, we formulated a novel probabilistic method for strain richness estimation specifically targeting viral quasispecies. By using this approach we improved our recently proposed spectrum reconstruction pipeline ViQuaS to achieve higher levels of precision in reconstructed quasispecies spectra without compromising the recall rates. We also discuss how one other existing popular QSR method named ShoRAH can be improved using this new approach. Results: On benchmark data sets, our estimation method provided accurate richness estimates (< 0.2 median estimation error) and improved the precision of ViQuaS by 2%-13% and F-score by 1%-9% without compromising the recall rates. We also demonstrate that our estimation method can be used to improve the precision and F-score of ShoRAH by 0%-7% and 0%-5% respectively. Conclusions: The proposed probabilistic estimation method can be used to estimate the richness of viral populations with a quasispecies behavior and to improve the accuracy of the quasispecies spectra reconstructed by the existing methods ViQuaS and ShoRAH in the presence of a moderate level of technical sequencing errors. Availability: http://sourceforge.net/projects/viquas/ PMID:26678073

  14. Intraocular lens power estimation by accurate ray tracing for eyes underwent previous refractive surgeries

    NASA Astrophysics Data System (ADS)

    Yang, Que; Wang, Shanshan; Wang, Kai; Zhang, Chunyu; Zhang, Lu; Meng, Qingyu; Zhu, Qiudong

    2015-08-01

    For normal eyes without a history of ocular surgery, traditional equations for calculating intraocular lens (IOL) power, such as SRK/T, Holladay, Haigis, and SRK-II, are all relatively accurate. However, for eyes that underwent refractive surgeries such as LASIK, or eyes diagnosed with keratoconus, these equations may cause significant postoperative refractive error, which may lead to poor satisfaction after cataract surgery. Although some methods have been proposed to solve this problem, such as the Haigis-L equation [1] or using preoperative data (data before LASIK) to estimate the K value [2], no precise equations are available for these eyes. Here, we introduce a novel intraocular lens power estimation method based on accurate ray tracing with the optical design software ZEMAX. Instead of using a traditional regression formula, we adopt the exact measured corneal elevation distribution, central corneal thickness, anterior chamber depth, axial length, and estimated effective lens plane as the input parameters. The calculated intraocular lens powers for a patient with keratoconus and a LASIK postoperative patient matched their visual outcomes after cataract surgery well.

  15. Accurate and quantitative polarization-sensitive OCT by unbiased birefringence estimator with noise-stochastic correction

    NASA Astrophysics Data System (ADS)

    Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki

    2016-03-01

    Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, etc. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal-to-noise ratio (SNR) and the contrast of phase retardation (or birefringence) images introduces a noise-bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for quantitative studies. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on SNR was not overcome, so the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurement was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we had developed a maximum a posteriori (MAP) estimator, and quantitative birefringence imaging was demonstrated [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes into account the stochastic property of SNR. This estimator uses a probability distribution function (PDF) of the true local retardation, which is proportional to birefringence, under a specific set of measurements of birefringence and SNR. The PDF was pre-computed by a Monte-Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and in vivo measurements of anterior and

  16. Software cost estimation using class point metrics (CPM)

    NASA Astrophysics Data System (ADS)

    Ghode, Aditi; Periyasamy, Kasilingam

    2011-12-01

    Estimating the cost of a software project is one of the most important and crucial tasks in maintaining software reliability. Many cost estimation models have been reported to date, but most of them have significant drawbacks due to rapid changes in technology. For example, Source Lines Of Code (SLOC) can only be counted when the software construction is complete. The Function Point (FP) metric is deficient in handling Object-Oriented Technology, as it was designed for procedural languages such as COBOL. Since Object-Oriented Programming became a popular development practice, most software companies have adopted the Unified Modeling Language (UML). The objective of this research is to develop a new cost estimation model that applies class diagrams to software cost estimation.

  17. Local surface sampling step estimation for extracting boundaries of planar point clouds

    NASA Astrophysics Data System (ADS)

    Brie, David; Bombardier, Vincent; Baeteman, Grégory; Bennis, Abdelhamid

    2016-09-01

    This paper presents a new approach to estimating the surface sampling step of planar point clouds acquired by a Terrestrial Laser Scanner (TLS), which varies with the distance to the surface and the angular position. The local surface sampling step is obtained from a first-order Taylor expansion of the planar point coordinates. It is then shown how to use it in Delaunay-based boundary point extraction. The resulting approach, which is implemented in the ModiBuilding software, is applied to two facade point clouds of a building. The first is acquired with a single station and the second with two stations. In both cases, the proposed approach performs very accurately and appears to be robust to variations in point cloud density.

  18. The accurate estimation of physicochemical properties of ternary mixtures containing ionic liquids via artificial neural networks.

    PubMed

    Cancilla, John C; Díaz-Rodríguez, Pablo; Matute, Gemma; Torrecilla, José S

    2015-02-14

    The estimation of the density and refractive index of ternary mixtures comprising the ionic liquid (IL) 1-butyl-3-methylimidazolium tetrafluoroborate, 2-propanol, and water at a fixed temperature of 298.15 K has been attempted through artificial neural networks. The results obtained indicate that this mathematical approach is well suited to the task. The mean prediction errors, obtained after simulating with a dataset never involved in the training process of the model, were 0.050% and 0.227% for refractive index and density estimation, respectively. These accurate results, attained using only the composition of the solutions (mass fractions), imply that ternary mixtures similar to the one analyzed can most likely be evaluated easily with this algorithmic tool. In addition, different chemical processes involving ILs can be monitored precisely, and the purity of the compounds in the studied mixtures can be indirectly assessed thanks to the high accuracy of the model. PMID:25583241
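
    A generic sketch of the approach with scikit-learn; the smooth synthetic property surfaces below are stand-ins for the measured refractive index and density data that the study fits:

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        w = rng.dirichlet((2, 2, 2), 400)[:, :2]   # mass fractions (IL, 2-propanol); water = rest
        y = np.column_stack([1.333 + 0.08 * w[:, 0] + 0.04 * w[:, 1],    # toy refractive index
                             0.997 + 0.21 * w[:, 0] - 0.22 * w[:, 1]])   # toy density (g/cm3)

        X_tr, X_te, y_tr, y_te = train_test_split(w, y, test_size=0.25, random_state=0)
        model = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                                           random_state=0))
        model.fit(X_tr, y_tr)
        err = np.abs(model.predict(X_te) - y_te) / np.abs(y_te)
        print(err.mean(axis=0) * 100)              # mean % error per property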

  19. Toward an Accurate Estimate of the Exfoliation Energy of Black Phosphorus: A Periodic Quantum Chemical Approach.

    PubMed

    Sansone, Giuseppe; Maschio, Lorenzo; Usvyat, Denis; Schütz, Martin; Karttunen, Antti

    2016-01-01

    The black phosphorus (black-P) crystal is formed of covalently bound layers of phosphorene stacked together by weak van der Waals interactions. An experimental measurement of the exfoliation energy of black-P is not available presently, making theoretical studies the most important source of information for the optimization of phosphorene production. Here, we provide an accurate estimate of the exfoliation energy of black-P on the basis of multilevel quantum chemical calculations, which include the periodic local Møller-Plesset perturbation theory of second order, augmented by higher-order corrections, which are evaluated with finite clusters mimicking the crystal. Very similar results are also obtained by density functional theory with the D3-version of Grimme's empirical dispersion correction. Our estimate of the exfoliation energy for black-P of -151 meV/atom is substantially larger than that of graphite, suggesting the need for different strategies to generate isolated layers for these two systems. PMID:26651397

  20. CUTE: Correlation Utilities and Two-point Estimation

    NASA Astrophysics Data System (ADS)

    Alonso, David

    2015-05-01

    CUTE (Correlation Utilities and Two-point Estimation) extracts any two-point statistic from enormous datasets with hundreds of millions of objects, such as large galaxy surveys. The computational time grows with the square of the number of objects to be correlated; technology provides multiple means to massively parallelize this problem, and CUTE is specifically designed for this kind of calculation. Two implementations are provided: one for execution on shared-memory machines using OpenMP and one that runs on graphical processing units (GPUs) using CUDA.
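
    The underlying statistic can be illustrated with a brute-force pair count and the natural estimator xi(r) = DD/RR - 1 (CUTE's contribution is doing this fast in parallel; estimators such as Landy-Szalay are common alternatives):

        import numpy as np
        from scipy.spatial.distance import pdist

        def xi_natural(data, randoms, edges):
            """Two-point correlation via O(N^2) pair counting in separation bins."""
            def norm_counts(pts):
                counts = np.histogram(pdist(pts), bins=edges)[0].astype(float)
                return counts / (len(pts) * (len(pts) - 1) / 2)   # per unique pair
            return norm_counts(data) / norm_counts(randoms) - 1.0

        rng = np.random.default_rng(0)
        data = rng.random((500, 3))       # stand-in for a galaxy catalogue (unit box)
        randoms = rng.random((2000, 3))   # unclustered reference catalogue
        print(xi_natural(data, randoms, edges=np.linspace(0.05, 0.5, 10)))  # ~ 0 here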

  1. Effects of LiDAR point density, sampling size and height threshold on estimation accuracy of crop biophysical parameters.

    PubMed

    Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong

    2016-05-30

    Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R2 = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m2), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters; however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and height threshold were required to obtain accurate corn LAI estimation when compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data. PMID:27410085

  2. A Fast Algorithm to Estimate the Deepest Points of Lakes for Regional Lake Registration.

    PubMed

    Shen, Zhanfeng; Yu, Xinju; Sheng, Yongwei; Li, Junli; Luo, Jiancheng

    2015-01-01

    When conducting image registration in the U.S. state of Alaska, it is very difficult to locate satisfactory ground control points (GCPs) because ice, snow, and lakes cover much of the ground. However, GCPs can be located by seeking stable points in extracted lake data. This paper defines a process to estimate the deepest points of lakes as the most stable ground control points for registration. We estimate the deepest point of a lake by computing the center point of the largest inner circle (LIC) of the polygon representing the lake. An LIC-seeking method based on Voronoi diagrams is proposed, and an algorithm based on medial axis simplification (MAS) is introduced. The proposed design also incorporates parallel data computing. A key issue, the selection of a policy for partitioning vector data, is carefully studied; the selected policy, which equalizes the algorithm complexity across partitions, is shown to be the optimal policy for parallel vector processing. Using several experimental applications, we conclude that the presented approach accurately estimates the deepest points in Alaskan lakes; furthermore, we achieve excellent efficiency using MAS and the complexity-equalization policy. PMID:26656598
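
    The LIC definition itself can be illustrated by a brute-force scan that maximizes the distance to the boundary over interior grid candidates; the paper's Voronoi and medial-axis machinery exists precisely to avoid this kind of exhaustive search (shapely assumed available):

        import numpy as np
        from shapely.geometry import Point, Polygon

        def deepest_point(lake, grid_step):
            """Approximate the largest-inner-circle centre of a lake polygon."""
            minx, miny, maxx, maxy = lake.bounds
            best, best_r = None, -1.0
            for x in np.arange(minx, maxx, grid_step):
                for y in np.arange(miny, maxy, grid_step):
                    p = Point(x, y)
                    if lake.contains(p):
                        r = lake.exterior.distance(p)   # inner-circle radius at p
                        if r > best_r:
                            best, best_r = (x, y), r
            return best, best_r

        lake = Polygon([(0, 0), (6, 0), (6, 2), (2, 2), (2, 5), (0, 5)])  # toy L-shaped lake
        print(deepest_point(lake, grid_step=0.1))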

  3. Applications of operational calculus: equations for the five-point rectangle and robust center point estimators

    SciTech Connect

    Silver, Gary L

    2009-01-01

    Equations for interpolating five data points in a rectangular array are seldom encountered in textbooks. This paper describes a new method that renders polynomial and exponential equations for the design. Operational center point estimators are often more resistant to the effects of an outlying datum than the mean.

  4. Lamb mode selection for accurate wall loss estimation via guided wave tomography

    SciTech Connect

    Huthwaite, P.; Ribichini, R.; Lowe, M. J. S.; Cawley, P.

    2014-02-18

    Guided wave tomography offers a method to accurately quantify wall thickness losses in pipes and vessels caused by corrosion. This is achieved using ultrasonic waves transmitted over distances of approximately 1-2 m, which are measured by an array of transducers and then used to reconstruct a map of wall thickness throughout the inspected region. To achieve accurate estimations of remnant wall thickness, it is vital that a suitable Lamb mode is chosen. This paper presents a detailed evaluation of the fundamental modes, S0 and A0, which are of primary interest in guided wave tomography thickness estimates since the higher order modes do not exist at all thicknesses, to compare their performance using both numerical and experimental data while considering a range of challenging phenomena. The sensitivity of A0 to thickness variations was shown to be superior to that of S0; however, the attenuation of A0 when a liquid loading was present was much higher than that of S0. A0 was also less sensitive than S0 to the presence of coatings on the surface of the pipe.

  5. Determining point charge arrays that produce accurate ionic crystal fields for atomic cluster calculations

    SciTech Connect

    Derenzo, Stephen E.; Klintenberg, Mattias K.; Weber, Marvin J.

    2000-02-01

    In performing atomic cluster calculations of local electronic structure defects in ionic crystals, the crystal is often modeled as a central cluster of 5-50 ions embedded in an array of point charges. For most crystals, however, a finite three-dimensional repeated array of unit cells generates electrostatic potentials that are in significant disagreement with the Madelung (infinite crystal) potentials computed by the Ewald method. This is illustrated for the cubic crystal CaF2. We present a novel algorithm for solving this problem for any crystal whose unit cell information is known: (1) the unit cell is used to generate a neutral array containing typically 10 000 point charges at their normal crystallographic positions; (2) the array is divided into zone 1 (a volume defined by the atomic cluster of interest), zone 2 (several hundred additional point charges that together with zone 1 fill a spherical volume), and zone 3 (all other point charges); (3) the Ewald formula is used to compute the site potentials at all point charges in zones 1 and 2; (4) a system of simultaneous linear equations is solved to find the zone 3 charge values that make the zone 1 and zone 2 site potentials exactly equal to their Ewald values and the total charge and dipole moments equal to zero, and (5) the solution is checked at 1000 additional points randomly chosen in zone 1. The method is applied to 33 different crystal types with 50-71 ions in zone 1. In all cases the accuracy determined in step 5 steadily improves as the sizes of zones 2 and 3 are increased, reaching a typical rms error of 1 μV in zone 1 for 500 point charges in zone 2 and 10 000 in zone 3. © 2000 American Institute of Physics.

  6. Accurate State Estimation and Tracking of a Non-Cooperative Target Vehicle

    NASA Technical Reports Server (NTRS)

    Thienel, Julie K.; Sanner, Robert M.

    2006-01-01

    Autonomous space rendezvous scenarios require knowledge of the target vehicle state in order for the chaser vehicle to dock safely. Ideally, the target vehicle state information is derived from telemetered data, or with the use of known tracking points on the target vehicle. However, if the target vehicle is non-cooperative and does not have the ability to maintain attitude control, or to transmit attitude knowledge, the docking becomes more challenging. This work presents a nonlinear approach for estimating the body rates of a non-cooperative target vehicle, and couples this estimation to a tracking control scheme. The approach is tested with the robotic servicing mission concept for the Hubble Space Telescope (HST). Such a mission would not only require estimates of the HST attitude and rates, but also precision control to achieve the desired rates and maintain the orientation needed to successfully dock with HST.

  7. Removing the thermal component from heart rate provides an accurate VO2 estimation in forest work.

    PubMed

    Dubé, Philippe-Antoine; Imbeau, Daniel; Dubeau, Denise; Lebel, Luc; Kolus, Ahmet

    2016-05-01

    Heart rate (HR) was monitored continuously in 41 forest workers performing brushcutting or tree planting work. 10-min seated rest periods were imposed during the workday to estimate the HR thermal component (ΔHRT) per Vogt et al. (1970, 1973). VO2 was measured using a portable gas analyzer during a morning submaximal step-test conducted at the work site, during a work bout over the course of the day (range: 9-74 min), and during an ensuing 10-min rest pause taken at the worksite. The VO2 values estimated from measured HR and from corrected HR (thermal component removed) were compared to the VO2 measured during work and rest. Varied levels of the HR thermal component (ΔHRTavg range: 0-38 bpm), originating from a wide range of ambient thermal conditions, thermal clothing insulation worn, and physical load exerted during work, were observed. Using raw HR significantly overestimated measured work VO2 by 30% on average (range: 1%-64%). 74% of the VO2 prediction error variance was explained by the HR thermal component. VO2 estimated from corrected HR was not statistically different from measured VO2. Work VO2 can thus be estimated accurately in the presence of thermal stress using Vogt et al.'s method, which can be implemented easily by the practitioner with inexpensive instruments. PMID:26851474
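
    The correction amounts to subtracting ΔHRT before applying an individual HR-to-VO2 calibration; the sketch below assumes a simple linear calibration from the morning step-test, with made-up numbers:

        def vo2_from_hr(hr_work, hr_rest, vo2_rest, slope, delta_hr_thermal):
            """Estimate work VO2 (L/min) from HR after removing the thermal component.
            slope: individual VO2-vs-HR slope from the step-test (L/min per bpm);
            delta_hr_thermal: HR elevation due to heat strain (bpm), per Vogt et al."""
            hr_corrected = hr_work - delta_hr_thermal
            return vo2_rest + slope * (hr_corrected - hr_rest)

        # Illustrative values only (not from the study):
        print(vo2_from_hr(hr_work=135.0, hr_rest=70.0, vo2_rest=0.35,
                          slope=0.025, delta_hr_thermal=20.0))
        # Using raw HR instead would inflate the estimate by slope * 20 = 0.5 L/min.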

  8. Accurate Estimation of the Intrinsic Dimension Using Graph Distances: Unraveling the Geometric Complexity of Datasets

    PubMed Central

    Granata, Daniele; Carnevale, Vincenzo

    2016-01-01

    The collective behavior of a large number of degrees of freedom can often be described by a handful of variables. This observation justifies the use of dimensionality reduction approaches to model complex systems and motivates the search for a small set of relevant “collective” variables. Here, we analyze this issue by focusing on the optimal number of variables needed to capture the salient features of a generic dataset and develop a novel estimator for the intrinsic dimension (ID). By approximating geodesics with minimum distance paths on a graph, we analyze the distribution of pairwise distances around the maximum and exploit its dependency on the dimensionality to obtain an ID estimate. We show that the estimator does not depend on the shape of the intrinsic manifold and is highly accurate, even for exceedingly small sample sizes. We apply the method to several relevant datasets from image recognition databases and protein multiple sequence alignments and discuss possible interpretations for the estimated dimension in light of the correlations among input variables and of the information content of the dataset. PMID:27510265

  9. Hybridization modeling of oligonucleotide SNP arrays for accurate DNA copy number estimation

    PubMed Central

    Wan, Lin; Sun, Kelian; Ding, Qi; Cui, Yuehua; Li, Ming; Wen, Yalu; Elston, Robert C.; Qian, Minping; Fu, Wenjiang J

    2009-01-01

    Affymetrix SNP arrays have been widely used for single-nucleotide polymorphism (SNP) genotype calling and DNA copy number variation inference. Although numerous methods have achieved high accuracy in these fields, most studies have paid little attention to the modeling of hybridization of probes to off-target allele sequences, which can affect the accuracy greatly. In this study, we address this issue and demonstrate that hybridization with mismatch nucleotides (HWMMN) occurs in all SNP probe-sets and has a critical effect on the estimation of allelic concentrations (ACs). We study sequence binding through binding free energy and then binding affinity, and develop a probe intensity composite representation (PICR) model. The PICR model allows the estimation of ACs at a given SNP through statistical regression. Furthermore, we demonstrate with cell-line data of known true copy numbers that the PICR model can achieve reasonable accuracy in copy number estimation at a single SNP locus, by using the ratio of the estimated AC of each sample to that of the reference sample, and can reveal subtle genotype structure of SNPs at abnormal loci. We also demonstrate with HapMap data that the PICR model yields accurate SNP genotype calls consistently across samples, laboratories and even across array platforms. PMID:19586935

  10. Rapid Bayesian point source inversion using pattern recognition --- bridging the gap between regional scaling relations and accurate physical modelling

    NASA Astrophysics Data System (ADS)

    Valentine, A. P.; Kaeufl, P.; De Wit, R. W. L.; Trampert, J.

    2014-12-01

    Obtaining knowledge about source parameters in (near) real-time during or shortly after an earthquake is essential for mitigating damage and directing resources in the aftermath of the event. Therefore, a variety of real-time source-inversion algorithms have been developed over recent decades. This has been driven by the ever-growing availability of dense seismograph networks in many seismogenic areas of the world and the significant advances in real-time telemetry. By definition, these algorithms rely on short time-windows of sparse, local and regional observations, resulting in source estimates that are highly sensitive to observational errors, noise and missing data. In order to obtain estimates more rapidly, many algorithms are either entirely based on empirical scaling relations or make simplifying assumptions about the Earth's structure, which can in turn lead to biased results. It is therefore essential that realistic uncertainty bounds are estimated along with the parameters. A natural means of propagating probabilistic information on source parameters through the entire processing chain from first observations to potential end users and decision makers is provided by the Bayesian formalism. We present a novel method based on pattern recognition allowing us to incorporate highly accurate physical modelling into an uncertainty-aware real-time inversion algorithm. The algorithm is based on a pre-computed Green's functions database, containing a large set of source-receiver paths in a highly heterogeneous crustal model. Unlike similar methods, which often employ a grid search, we use a supervised learning algorithm to relate synthetic waveforms to point source parameters. This training procedure has to be performed only once and leads to a representation of the posterior probability density function p(m|d) --- the distribution of source parameters m given observations d --- which can be evaluated quickly for new data. Owing to the flexibility of the pattern

  11. Methods for accurate estimation of net discharge in a tidal channel

    USGS Publications Warehouse

    Simpson, M.R.; Bland, R.

    2000-01-01

    Accurate estimates of net residual discharge in tidally affected rivers and estuaries are possible because of recently developed ultrasonic discharge measurement techniques. Previous discharge estimates using conventional mechanical current meters and methods based on stage/discharge relations or water slope measurements often yielded errors that were as great as or greater than the computed residual discharge. Ultrasonic measurement methods consist of: 1) the use of ultrasonic instruments for the measurement of a representative 'index' velocity used for in situ estimation of mean water velocity and 2) the use of the acoustic Doppler current discharge measurement system to calibrate the index velocity measurement data. Methods used to calibrate (rate) the index velocity to the channel velocity measured using the Acoustic Doppler Current Profiler are the most critical factors affecting the accuracy of net discharge estimation. The index velocity first must be related to mean channel velocity and then used to calculate instantaneous channel discharge. Finally, discharge is low-pass filtered to remove the effects of the tides. An ultrasonic velocity meter discharge-measurement site in a tidally affected region of the Sacramento-San Joaquin Rivers was used to study the accuracy of the index velocity calibration procedure. Calibration data consisting of ultrasonic velocity meter index velocity and concurrent acoustic Doppler discharge measurement data were collected during three time periods. Two sets of data were collected during spring tides (monthly maximum tidal currents) and one set during a neap tide (monthly minimum tidal current). The relative magnitudes of instrumental errors, acoustic Doppler discharge measurement errors, and calibration errors were evaluated. Calibration error was found to be the most significant source of error in estimating net discharge. Using a comprehensive calibration method, net discharge estimates developed from the three
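
    The overall chain (index-velocity rating, discharge computation, tidal low-pass filtering) can be sketched as follows; the rating data, fixed channel area, and the Butterworth filter are all invented stand-ins for the operational procedure:

        import numpy as np
        from scipy.signal import butter, filtfilt

        # 1) Rating: regress ADCP mean channel velocity on the index velocity.
        index_v = np.array([0.10, 0.40, 0.80, -0.30, -0.70, 0.55, -0.15])   # m/s
        adcp_v = np.array([0.12, 0.47, 0.93, -0.33, -0.80, 0.64, -0.16])    # m/s
        slope, intercept = np.polyfit(index_v, adcp_v, 1)

        # 2) Continuous record: index velocity -> channel velocity -> discharge.
        t = np.arange(30 * 24 * 4) / 4.0                            # 15-min samples, 30 days (hours)
        index_series = 0.6 * np.sin(2 * np.pi * t / 12.42) + 0.05   # M2 tide + small net flow
        area = 850.0                                                # channel area, m2 (stage-dependent in practice)
        q = area * (slope * index_series + intercept)               # instantaneous discharge, m3/s

        # 3) Low-pass filter (30 h cutoff; Nyquist = 2 cycles/h) to remove the tides.
        b_f, a_f = butter(4, (1.0 / 30.0) / 2.0)
        print(np.mean(filtfilt(b_f, a_f, q)))                       # net residual discharge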

  12. MIDAS robust trend estimator for accurate GPS station velocities without step detection

    NASA Astrophysics Data System (ADS)

    Blewitt, Geoffrey; Kreemer, Corné; Hammond, William C.; Gazeaux, Julien

    2016-03-01

    Automatic estimation of velocities from GPS coordinate time series is becoming required to cope with the exponentially increasing flood of available data, but problems detectable to the human eye are often overlooked. This motivates us to find an automatic and accurate estimator of trend that is resistant to common problems such as step discontinuities, outliers, seasonality, skewness, and heteroscedasticity. Developed here, Median Interannual Difference Adjusted for Skewness (MIDAS) is a variant of the Theil-Sen median trend estimator, for which the ordinary version is the median of slopes vij = (xj-xi)/(tj-ti) computed between all data pairs i > j. For normally distributed data, Theil-Sen and least squares trend estimates are statistically identical, but unlike least squares, Theil-Sen is resistant to undetected data problems. To mitigate both seasonality and step discontinuities, MIDAS selects data pairs separated by 1 year. This condition is relaxed for time series with gaps so that all data are used. Slopes from data pairs spanning a step function produce one-sided outliers that can bias the median. To reduce bias, MIDAS removes outliers and recomputes the median. MIDAS also computes a robust and realistic estimate of trend uncertainty. Statistical tests using GPS data in the rigid North American plate interior show ±0.23 mm/yr root-mean-square (RMS) accuracy in horizontal velocity. In blind tests using synthetic data, MIDAS velocities have an RMS accuracy of ±0.33 mm/yr horizontal, ±1.1 mm/yr up, with a 5th percentile range smaller than all 20 automatic estimators tested. Considering its general nature, MIDAS has the potential for broader application in the geosciences.
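
    A simplified MIDAS-style estimator following the outline above (slopes from pairs about one year apart, median, MAD-based trimming, re-median); the real algorithm's handling of gappy series and its uncertainty estimate are omitted:

        import numpy as np

        def midas_velocity(t, x, tol=0.01):
            """t in decimal years, x in mm; tol = pair-spacing tolerance (years)."""
            i, j = np.where(np.abs((t[None, :] - t[:, None]) - 1.0) < tol)
            v = (x[j] - x[i]) / (t[j] - t[i])          # slopes of ~1-year pairs
            med = np.median(v)
            mad = 1.4826 * np.median(np.abs(v - med))  # robust scale estimate
            return np.median(v[np.abs(v - med) < 2.0 * mad])

        rng = np.random.default_rng(0)
        t = np.sort(rng.uniform(2010.0, 2016.0, 600))
        x = 3.2 * (t - 2010.0) + 2.0 * np.sin(2 * np.pi * t) + rng.standard_normal(t.size)
        x[t > 2013.5] += 15.0                          # undetected step discontinuity
        print(midas_velocity(t, x))                    # near 3.2 mm/yr despite the step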

  13. High-Resolution Tsunami Inundation Simulations Based on Accurate Estimations of Coastal Waveforms

    NASA Astrophysics Data System (ADS)

    Oishi, Y.; Imamura, F.; Sugawara, D.; Furumura, T.

    2015-12-01

    We evaluate the accuracy of high-resolution tsunami inundation simulations in detail using the actual observational data of the 2011 Tohoku-Oki earthquake (Mw9.0) and investigate methodologies to improve the simulation accuracy. Due to the recent development of parallel computing technologies, high-resolution tsunami inundation simulations are conducted more commonly than before. To evaluate how accurately these simulations can reproduce inundation processes, we test several types of simulation configurations on a parallel computer, where we can utilize the observational data (e.g., offshore and coastal waveforms and inundation properties) that were recorded during the Tohoku-Oki earthquake. Before discussing the accuracy of inundation processes on land, the incident waves at coastal sites must be accurately estimated. However, for megathrust earthquakes, it is difficult to find a tsunami source that provides accurate estimations of tsunami waveforms at every coastal site because of the complex spatiotemporal distribution of the source and the limitations of observation. To overcome this issue, we employ a site-specific source inversion approach that increases the estimation accuracy within a specific coastal site by applying appropriate weighting to the observational data in the inversion process. We applied our source inversion technique to the Tohoku tsunami and conducted inundation simulations using 5-m resolution digital elevation model (DEM) data for the coastal areas around Miyako Bay and Sendai Bay. The estimated waveforms at the coastal wave gauges of these bays agree well with the observed waveforms. However, the simulations overestimate the inundation extent, indicating the need to improve the inundation model. We find that the value of Manning's roughness coefficient should be modified from the often-used value of n = 0.025 to n = 0.033 to obtain proper results at both cities. In this presentation, the simulation results with several

  14. Estimation of point explosion parameters by body-wave spectra

    NASA Astrophysics Data System (ADS)

    Tsereteli, Nino; Kereselidze, Zurab

    2014-05-01

    A radial model of a point explosion is presented. According to this model, the epicentral region consists of two qualitatively different spherical areas. In the first sphere, the explosion energy is spent on plastic deformation. The second spherical area, where the medium behaves elastically, is the region where the body waves are generated. The frequency spectrum of these waves represents the intrinsic frequencies of the natural oscillations of the point explosion. The Euler radial equation was used in modeling this process. Using the analytical equation for the discrete frequency spectrum, it is possible to solve the inverse seismological problem; in other words, it is possible to calculate the internal and external radii of the elastic area. Finally, we can obtain a sufficiently accurate analytic solution to define the linear characteristics of the point explosion area and to estimate the energy released.

  15. Toward an Accurate and Inexpensive Estimation of CCSD(T)/CBS Binding Energies of Large Water Clusters.

    PubMed

    Sahu, Nityananda; Singh, Gurmeet; Nandi, Apurba; Gadre, Shridhar R

    2016-07-21

    Owing to their steep scaling behavior, highly accurate CCSD(T) calculations, the contemporary gold standard of quantum chemistry, are prohibitively difficult for moderate- and large-sized water clusters even with high-end hardware. The molecular tailoring approach (MTA), a fragmentation-based technique, is found to be useful for enabling such high-level ab initio calculations. The present work reports the CCSD(T) level binding energies of many low-lying isomers of large (H2O)n (n = 16, 17, and 25) clusters employing aug-cc-pVDZ and aug-cc-pVTZ basis sets within the MTA framework. Accurate estimation of the CCSD(T) level binding energies [within 0.3 kcal/mol of the respective full calculation (FC) results] is achieved after effecting the grafting procedure, a protocol for minimizing the errors in the MTA-derived energies arising due to the approximate nature of MTA. The CCSD(T) level grafting procedure presented here hinges upon the well-known fact that the MP2 method, which scales as O(N^5), can be a suitable starting point for approximating the highly accurate CCSD(T) energies, which scale as O(N^7). On account of the requirement of only an MP2-level FC on the entire cluster, the current methodology ultimately leads to a cost-effective solution for CCSD(T) level accurate binding energies of large-sized water clusters even at the complete basis set limit utilizing off-the-shelf hardware. PMID:27351269
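
    The grafting correction reduces to one line of arithmetic under the stated assumption that the fragmentation error at the MP2 level transfers to CCSD(T); the numbers below are invented for illustration:

        # Grafted estimate: E_graft = E_CCSD(T)(MTA) + [E_MP2(full) - E_MP2(MTA)]
        e_ccsdt_mta = -1221.4032   # MTA-based CCSD(T) energy (hartree, illustrative)
        e_mp2_full = -1219.8761    # full-calculation MP2 on the entire cluster
        e_mp2_mta = -1219.8702     # MTA-based MP2 with the same fragmentation
        print(e_ccsdt_mta + (e_mp2_full - e_mp2_mta))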

  16. Accurate estimation of the RMS emittance from single current amplifier data

    SciTech Connect

    Stockli, Martin P.; Welton, R.F.; Keller, R.; Letchford, A.P.; Thomae, R.W.; Thomason, J.W.G.

    2002-05-31

    This paper presents the SCUBEEx rms emittance analysis, a self-consistent, unbiased elliptical exclusion method, which combines traditional data-reduction methods with statistical methods to obtain accurate estimates for the rms emittance. Rather than considering individual data, the method tracks the average current density outside a well-selected, variable boundary to separate the measured beam halo from the background. The average outside current density is assumed to be part of a uniform background and not part of the particle beam. Therefore the average outside current is subtracted from the data before evaluating the rms emittance within the boundary. As the boundary area is increased, the average outside current and the inside rms emittance form plateaus when all data containing part of the particle beam are inside the boundary. These plateaus mark the smallest acceptable exclusion boundary and provide unbiased estimates for the average background and the rms emittance. Small, trendless variations within the plateaus allow for determining the uncertainties of the estimates caused by variations of the measured background outside the smallest acceptable exclusion boundary. The robustness of the method is established with complementary variations of the exclusion boundary. This paper presents a detailed comparison between traditional data-reduction methods and SCUBEEx by analyzing two complementary sets of emittance data obtained with Lawrence Berkeley National Laboratory and ISIS H- ion sources.
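
    A condensed sketch of the exclusion-and-plateau idea: for growing elliptical boundaries, the average density outside is treated as uniform background, subtracted, and the rms emittance inside recomputed; both quantities plateau once the boundary encloses the whole beam. The Gaussian test beam and fixed-orientation ellipses simplify the actual self-consistent procedure:

        import numpy as np

        def rms_emittance(x, xp, w):
            w = np.clip(w, 0.0, None); W = w.sum()
            mx, mxp = (w * x).sum() / W, (w * xp).sum() / W
            sxx = (w * (x - mx) ** 2).sum() / W
            spp = (w * (xp - mxp) ** 2).sum() / W
            sxp = (w * (x - mx) * (xp - mxp)).sum() / W
            return np.sqrt(sxx * spp - sxp ** 2)

        rng = np.random.default_rng(0)
        X, XP = np.meshgrid(np.linspace(-20, 20, 161), np.linspace(-40, 40, 161))
        beam = np.exp(-(X ** 2 / (2 * 4.0 ** 2) + XP ** 2 / (2 * 8.0 ** 2)))
        data = beam + 0.02 + 0.005 * rng.standard_normal(X.shape)   # beam + background + noise

        for k in [1, 2, 3, 4, 5, 6]:                  # boundary scale, in beam sigmas
            inside = (X / (k * 4.0)) ** 2 + (XP / (k * 8.0)) ** 2 <= 1.0
            bg = data[~inside].mean()                 # average outside density
            eps = rms_emittance(X[inside], XP[inside], data[inside] - bg)
            print(k, round(bg, 4), round(eps, 2))     # eps plateaus near 4 * 8 = 32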

  17. Painfree and accurate Bayesian estimation of psychometric functions for (potentially) overdispersed data.

    PubMed

    Schütt, Heiko H; Harmeling, Stefan; Macke, Jakob H; Wichmann, Felix A

    2016-05-01

    The psychometric function describes how an experimental variable, such as stimulus strength, influences the behaviour of an observer. Estimation of psychometric functions from experimental data plays a central role in fields such as psychophysics, experimental psychology and the behavioural neurosciences. Experimental data may exhibit substantial overdispersion, which may result from non-stationarity in the behaviour of observers. Here we extend the standard binomial model, which is typically used for psychometric function estimation, to a beta-binomial model. We show that the use of the beta-binomial model makes it possible to determine accurate credible intervals even in data which exhibit substantial overdispersion. This goes beyond classical measures of overdispersion (goodness-of-fit), which can detect overdispersion but provide no method for correct inference on overdispersed data. We use Bayesian inference methods for estimating the posterior distribution of the parameters of the psychometric function. Unlike previous Bayesian psychometric inference methods, our software implementation, psignifit 4, performs numerical integration of the posterior within automatically determined bounds. This avoids the use of Markov chain Monte Carlo (MCMC) methods typically requiring expert knowledge. Extensive numerical tests show the validity of the approach and we discuss implications of overdispersion for experimental design. A comprehensive MATLAB toolbox implementing the method is freely available; a python implementation providing the basic capabilities is also available. PMID:27013261
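
    A minimal grid-based sketch of the beta-binomial idea with flat priors: scipy's beta-binomial replaces the plain binomial likelihood, with an overdispersion parameter eta mapped to the concentration. psignifit 4 itself uses carefully chosen priors, parameterizations, and automatically determined integration bounds:

        import numpy as np
        from scipy import stats

        x = np.array([0.5, 1.0, 2.0, 4.0, 8.0])   # stimulus levels (2AFC, toy data)
        n = np.array([40, 40, 40, 40, 40])        # trials per level
        k = np.array([22, 25, 31, 37, 39])        # correct responses

        def psi(x, m, w):                         # guess rate 0.5, fixed lapse 0.02
            return 0.5 + 0.48 * stats.norm.cdf((np.log(x) - m) / w)

        ms = np.linspace(-1.0, 2.0, 41)
        ws = np.linspace(0.2, 2.0, 28)
        etas = np.linspace(0.02, 0.5, 13)
        logpost = np.empty((ms.size, ws.size, etas.size))
        for i, m in enumerate(ms):
            for j, w in enumerate(ws):
                p = psi(x, m, w)
                for l, eta in enumerate(etas):
                    nu = 1.0 / eta - 1.0          # concentration of the beta mixing density
                    logpost[i, j, l] = stats.betabinom.logpmf(k, n, p * nu, (1 - p) * nu).sum()
        post = np.exp(logpost - logpost.max()); post /= post.sum()
        print((post.sum(axis=(1, 2)) * ms).sum())  # posterior mean (log-)threshold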

  1. Accurate estimation of human body orientation from RGB-D sensors.

    PubMed

    Liu, Wu; Zhang, Yongdong; Tang, Sheng; Tang, Jinhui; Hong, Richang; Li, Jintao

    2013-10-01

    Accurate estimation of human body orientation can significantly enhance the analysis of human behavior, which is a fundamental task in the field of computer vision. However, existing orientation estimation methods cannot handle the wide variety of body poses and appearances. In this paper, we propose an innovative RGB-D-based orientation estimation method to address these challenges. By utilizing RGB-D information, which can be acquired in real time by RGB-D sensors, our method is robust to cluttered environments, illumination changes and partial occlusions. Specifically, efficient static and motion cue extraction methods are proposed based on the RGB-D superpixels to reduce the noise of depth data. Since it is hard to discriminate all 360° of orientation using static cues or motion cues independently, we propose to utilize a dynamic Bayesian network system (DBNS) to effectively exploit the complementary nature of both static and motion cues. In order to verify our proposed method, we build an RGB-D-based human body orientation dataset that covers a wide diversity of poses and appearances. Our intensive experimental evaluations on this dataset demonstrate the effectiveness and efficiency of the proposed method. PMID:23893759

  2. Estimation of Distributed Fermat-Point Location for Wireless Sensor Networking

    PubMed Central

    Huang, Po-Hsian; Chen, Jiann-Liang; Larosa, Yanuarius Teofilus; Chiang, Tsui-Lien

    2011-01-01

    This work presents a localization scheme for use in wireless sensor networks (WSNs) that is based on a proposed connectivity-based RF localization strategy called the distributed Fermat-point location estimation algorithm (DFPLE). DFPLE applies the triangle formed by the intersections of three neighboring beacon nodes to location estimation. The Fermat point is determined as the point minimizing the total distance to the three vertices of the triangle. The estimated location area is then refined using the Fermat point to achieve minimum error in estimating sensor node locations. DFPLE solves the problems of large errors and poor performance encountered by localization schemes that are based on a bounding box algorithm. Performance analysis of a 200-node development environment reveals that, first, when the number of sensor nodes is below 150, the mean error decreases rapidly as the node density increases, and when the number of sensor nodes exceeds 170, the mean error remains below 1% as the node density increases. Second, when the number of beacon nodes is less than 60, normal nodes lack sufficient beacon nodes to enable their locations to be estimated; however, the mean error changes only slightly as the number of beacon nodes increases above 60. Simulation results revealed that the proposed algorithm for estimating sensor positions is more accurate than existing algorithms, and improves upon conventional bounding box strategies. PMID:22163851
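
    The Fermat point itself, the point minimizing the summed distance to the three beacon-derived vertices, can be computed with the Weiszfeld iteration; DFPLE's beacon-intersection construction and the refinement step are not reproduced here:

        import numpy as np

        def fermat_point(vertices, iters=100):
            """Weiszfeld iteration for the point minimizing total distance to the vertices."""
            p = vertices.mean(axis=0)                  # start at the centroid
            for _ in range(iters):
                d = np.linalg.norm(vertices - p, axis=1)
                if np.any(d < 1e-12):                  # landed exactly on a vertex
                    break
                w = 1.0 / d
                p = (w[:, None] * vertices).sum(axis=0) / w.sum()
            return p

        beacons = np.array([[0.0, 0.0], [10.0, 0.0], [3.0, 8.0]])
        print(fermat_point(beacons))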

  3. Point estimation of simultaneous methods for solving polynomial equations

    NASA Astrophysics Data System (ADS)

    Petkovic, Miodrag S.; Petkovic, Ljiljana D.; Rancic, Lidija Z.

    2007-08-01

    The construction of computationally verifiable initial conditions which provide both the guaranteed and fast convergence of the numerical root-finding algorithm is one of the most important problems in solving nonlinear equations. Smale's "point estimation theory" from 1981 was a great advance in this topic; it treats convergence conditions and the domain of convergence in solving an equation f(z)=0 using only the information of f at the initial point z0. The study of a general problem of the construction of initial conditions of practical interest providing guaranteed convergence is very difficult, even in the case of algebraic polynomials. In the light of Smale's point estimation theory, an efficient approach based on some results concerning localization of polynomial zeros and convergent sequences is applied in this paper to iterative methods for the simultaneous determination of simple zeros of polynomials. We state new, improved initial conditions which provide the guaranteed convergence of frequently used simultaneous methods for solving algebraic equations: Ehrlich-Aberth's method, Ehrlich-Aberth's method with Newton's correction, Borsch-Supan's method with Weierstrass' correction and Halley-like (or Wang-Zheng) method. The introduced concept offers not only a clear insight into the convergence analysis of sequences generated by the considered methods, but also explicitly gives their order of convergence. The stated initial conditions are of significant practical importance since they are computationally verifiable; they depend only on the coefficients of a given polynomial, its degree n and initial approximations to polynomial zeros.
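
    For reference, the basic Ehrlich-Aberth simultaneous iteration looks as follows; the paper's contribution, computationally verifiable initial conditions guaranteeing convergence, is not implemented in this sketch:

        import numpy as np

        def ehrlich_aberth(coeffs, z0, iters=50):
            """Simultaneous determination of all simple zeros of a polynomial.
            coeffs: highest degree first (np.polyval convention); z0: initial guesses."""
            z = z0.astype(complex)
            dcoeffs = np.polyder(coeffs)
            for _ in range(iters):
                newton = np.polyval(coeffs, z) / np.polyval(dcoeffs, z)   # f / f'
                diff = z[:, None] - z[None, :]
                np.fill_diagonal(diff, 1.0)            # dummy value, excluded below
                inv = 1.0 / diff
                np.fill_diagonal(inv, 0.0)             # drop the j == i terms
                z = z - newton / (1.0 - newton * inv.sum(axis=1))
            return z

        coeffs = np.array([1.0, 0.0, 0.0, -1.0])       # z^3 - 1
        z0 = np.array([1.5 + 0.5j, -1.0 + 1.0j, -0.5 - 1.5j])
        print(ehrlich_aberth(coeffs, z0))              # the three cube roots of unity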

  4. Quick and accurate estimation of the elastic constants using the minimum image method

    NASA Astrophysics Data System (ADS)

    Tretiakov, Konstantin V.; Wojciechowski, Krzysztof W.

    2015-04-01

    A method for determining the elastic properties using the minimum image method (MIM) is proposed and tested on a model system of particles interacting through the Lennard-Jones (LJ) potential. The elastic constants of the LJ system are determined in the thermodynamic limit, N → ∞, using the Monte Carlo (MC) method in the NVT and NPT ensembles. The simulation results show that the contribution of long-range interactions cannot be ignored when determining the elastic constants, because doing so leads to erroneous results. In addition, the simulations reveal that including the further interactions of each particle with all of its minimum image neighbors, even in the case of small systems, leads to results that are very close to the values of the elastic constants in the thermodynamic limit. This enables quick and accurate estimation of the elastic constants using very small samples.
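
    For context, the minimum image convention at the heart of the MIM evaluates each pair separation at the nearest periodic copy of the partner particle. A generic sketch for a cubic box (a standard textbook construction, not the authors' code):

      import numpy as np

      def minimum_image_displacement(r_i, r_j, box_length):
          # Displacement from particle j to particle i using the nearest
          # periodic image in a cubic box of side `box_length`.
          d = np.asarray(r_i, dtype=float) - np.asarray(r_j, dtype=float)
          return d - box_length * np.round(d / box_length)

      # Particles near opposite faces of a 10-unit box are actually close
      print(minimum_image_displacement([0.1, 0.0, 0.0], [9.9, 0.0, 0.0], 10.0))
      # -> approximately [0.2, 0, 0]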

  5. Exterior Orientation Estimation of Oblique Aerial Imagery Using Vanishing Points

    NASA Astrophysics Data System (ADS)

    Verykokou, Styliani; Ioannidis, Charalabos

    2016-06-01

    In this paper, a methodology for the calculation of rough exterior orientation (EO) parameters of multiple large-scale overlapping oblique aerial images is presented, for cases in which GPS/INS information is not available (e.g., for old datasets). It consists of five main steps: (a) determination of the overlapping image pairs and of the single image in which four ground control points have to be measured; (b) computation of the transformation parameters from every image to the coordinate reference system; (c) rough estimation of the camera interior orientation parameters; (d) estimation of the true horizon line and the nadir point of each image; (e) calculation of the rough EO parameters of each image. A software suite implementing the proposed methodology is tested using a set of UAV multi-perspective oblique aerial images. Several tests are performed to assess the errors, and they show that the estimated EO parameters can be used either as initial approximations for a bundle adjustment procedure or as rough georeferencing information for several applications, like 3D modelling, even by non-photogrammetrists, because of the minimal user intervention needed. Finally, comparisons with commercial software are made in terms of automation and correctness of the computed EO parameters.

  6. Incentives Increase Participation in Mass Dog Rabies Vaccination Clinics and Methods of Coverage Estimation Are Assessed to Be Accurate

    PubMed Central

    Steinmetz, Melissa; Czupryna, Anna; Bigambo, Machunde; Mzimbiri, Imam; Powell, George; Gwakisa, Paul

    2015-01-01

    In this study we show that incentives (dog collars and owner wristbands) are effective at increasing owner participation in mass dog rabies vaccination clinics, and we conclude that household questionnaire surveys and the mark-re-sight (transect survey) method for estimating post-vaccination coverage are accurate when all dogs, including puppies, are included. Incentives were distributed during central-point rabies vaccination clinics in northern Tanzania to quantify their effect on owner participation. In villages where incentives were handed out, participation increased, with an average of 34 more dogs being vaccinated. Through economies of scale, this represents a reduction in the cost-per-dog of $0.47, which is the price threshold below which the cost of the incentive must fall to be economically viable. Additionally, vaccination coverage levels were determined in ten villages through the gold-standard village-wide census technique, as well as through two cheaper and quicker methods (randomized household questionnaire and the transect survey). Cost data were also collected. Both non-gold-standard methods were found to be accurate when puppies were included in the calculations, although the transect survey and the household questionnaire survey over- and under-estimated the coverage, respectively. Given that additional demographic data can be collected through the household questionnaire survey, and that its estimate of coverage is more conservative, we recommend this method. Despite the use of incentives, the average vaccination coverage was below the 70% threshold required for eliminating rabies. We discuss the reasons and suggest solutions to improve coverage. Given recent international targets to eliminate rabies, this study provides valuable and timely data to help improve mass dog vaccination programs in Africa and elsewhere. PMID:26633821

  7. Incentives Increase Participation in Mass Dog Rabies Vaccination Clinics and Methods of Coverage Estimation Are Assessed to Be Accurate.

    PubMed

    Minyoo, Abel B; Steinmetz, Melissa; Czupryna, Anna; Bigambo, Machunde; Mzimbiri, Imam; Powell, George; Gwakisa, Paul; Lankester, Felix

    2015-12-01

    In this study we show that incentives (dog collars and owner wristbands) are effective at increasing owner participation in mass dog rabies vaccination clinics, and we conclude that household questionnaire surveys and the mark-re-sight (transect survey) method for estimating post-vaccination coverage are accurate when all dogs, including puppies, are included. Incentives were distributed during central-point rabies vaccination clinics in northern Tanzania to quantify their effect on owner participation. In villages where incentives were handed out, participation increased, with an average of 34 more dogs being vaccinated. Through economies of scale, this represents a reduction in the cost-per-dog of $0.47, which is the price threshold below which the cost of the incentive must fall to be economically viable. Additionally, vaccination coverage levels were determined in ten villages through the gold-standard village-wide census technique, as well as through two cheaper and quicker methods (randomized household questionnaire and the transect survey). Cost data were also collected. Both non-gold-standard methods were found to be accurate when puppies were included in the calculations, although the transect survey and the household questionnaire survey over- and under-estimated the coverage, respectively. Given that additional demographic data can be collected through the household questionnaire survey, and that its estimate of coverage is more conservative, we recommend this method. Despite the use of incentives, the average vaccination coverage was below the 70% threshold required for eliminating rabies. We discuss the reasons and suggest solutions to improve coverage. Given recent international targets to eliminate rabies, this study provides valuable and timely data to help improve mass dog vaccination programs in Africa and elsewhere. PMID:26633821

  8. A Simple yet Accurate Method for the Estimation of the Biovolume of Planktonic Microorganisms.

    PubMed

    Saccà, Alessandro

    2016-01-01

    Determining the biomass of microbial plankton is central to the study of fluxes of energy and materials in aquatic ecosystems. This is typically accomplished by applying proper volume-to-carbon conversion factors to group-specific abundances and biovolumes. A critical step in this approach is the accurate estimation of biovolume from two-dimensional (2D) data such as those available through conventional microscopy techniques or flow-through imaging systems. This paper describes a simple yet accurate method for the assessment of the biovolume of planktonic microorganisms, which works with any image analysis system allowing for the measurement of linear distances and the estimation of the cross sectional area of an object from a 2D digital image. The proposed method is based on Archimedes' principle about the relationship between the volume of a sphere and that of a cylinder in which the sphere is inscribed, plus a coefficient of 'unellipticity' introduced here. Validation and careful evaluation of the method are provided using a variety of approaches. The new method proved to be highly precise with all convex shapes characterised by approximate rotational symmetry, and combining it with an existing method specific for highly concave or branched shapes allows covering the great majority of cases with good reliability. Thanks to its accuracy, consistency, and low resource demands, the new method can conveniently be used in substitution of any extant method designed for convex shapes, and can readily be coupled with automated cell imaging technologies, including state-of-the-art flow-through imaging devices. PMID:27195667
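
    For intuition, the sphere-cylinder relationship underlying the method gives V = (2/3)·A·d for a spheroid with rotational symmetry, where A is the 2D silhouette area and d is the minor-axis length; the 'unellipticity' coefficient then corrects deviations from that ideal shape. A hedged sketch of the basic relation (the correction coefficient is paper-specific and defaults to 1 here):

      import math

      def biovolume(silhouette_area, minor_axis, unellipticity=1.0):
          # (2/3) * area * minor axis: exact for spheroids with rotational
          # symmetry, following the sphere-in-cylinder relation; the
          # `unellipticity` factor is a placeholder for the paper-specific
          # correction coefficient.
          return (2.0 / 3.0) * silhouette_area * minor_axis * unellipticity

      # Sanity check on a sphere of radius 5 um: area pi*r^2, minor axis 2r
      r = 5.0
      print(biovolume(math.pi * r**2, 2 * r))  # ~523.6
      print((4.0 / 3.0) * math.pi * r**3)      # ~523.6, the exact volume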

  9. A Simple yet Accurate Method for the Estimation of the Biovolume of Planktonic Microorganisms

    PubMed Central

    2016-01-01

    Determining the biomass of microbial plankton is central to the study of fluxes of energy and materials in aquatic ecosystems. This is typically accomplished by applying proper volume-to-carbon conversion factors to group-specific abundances and biovolumes. A critical step in this approach is the accurate estimation of biovolume from two-dimensional (2D) data such as those available through conventional microscopy techniques or flow-through imaging systems. This paper describes a simple yet accurate method for the assessment of the biovolume of planktonic microorganisms, which works with any image analysis system allowing for the measurement of linear distances and the estimation of the cross sectional area of an object from a 2D digital image. The proposed method is based on Archimedes’ principle about the relationship between the volume of a sphere and that of a cylinder in which the sphere is inscribed, plus a coefficient of ‘unellipticity’ introduced here. Validation and careful evaluation of the method are provided using a variety of approaches. The new method proved to be highly precise with all convex shapes characterised by approximate rotational symmetry, and combining it with an existing method specific for highly concave or branched shapes allows covering the great majority of cases with good reliability. Thanks to its accuracy, consistency, and low resource demands, the new method can conveniently be used in substitution of any extant method designed for convex shapes, and can readily be coupled with automated cell imaging technologies, including state-of-the-art flow-through imaging devices. PMID:27195667

  10. How Accurate and Robust Are the Phylogenetic Estimates of Austronesian Language Relationships?

    PubMed Central

    Greenhill, Simon J.; Drummond, Alexei J.; Gray, Russell D.

    2010-01-01

    We recently used computational phylogenetic methods on lexical data to test between two scenarios for the peopling of the Pacific. Our analyses of lexical data supported a pulse-pause scenario of Pacific settlement in which the Austronesian speakers originated in Taiwan around 5,200 years ago and rapidly spread through the Pacific in a series of expansion pulses and settlement pauses. We claimed that there was high congruence between traditional language subgroups and those observed in the language phylogenies, and that the estimated age of the Austronesian expansion at 5,200 years ago was consistent with the archaeological evidence. However, the congruence between the language phylogenies and the evidence from historical linguistics was not quantitatively assessed using tree comparison metrics. The robustness of the divergence time estimates to different calibration points was also not investigated exhaustively. Here we address these limitations by using a systematic tree comparison metric to calculate the similarity between the Bayesian phylogenetic trees and the subgroups proposed by historical linguistics, and by re-estimating the age of the Austronesian expansion using only the most robust calibrations. The results show that the Austronesian language phylogenies are highly congruent with the traditional subgroupings, and the date estimates are robust even when calculated using a restricted set of historical calibrations. PMID:20224774

  11. Accurate Estimation of the Fine Layering Effect on the Wave Propagation in the Carbonate Rocks

    NASA Astrophysics Data System (ADS)

    Bouchaala, F.; Ali, M. Y.

    2014-12-01

    The attenuation experienced by a seismic wave during its propagation can be divided into two main parts: scattering and intrinsic attenuation. Scattering is an elastic redistribution of the energy due to medium heterogeneities, whereas intrinsic attenuation is an inelastic phenomenon, mainly due to fluid-grain friction during the wave passage. The intrinsic attenuation is directly related to the physical characteristics of the medium, so this parameter can be used for medium characterization and fluid detection, which is beneficial for the oil and gas industry. The intrinsic attenuation is estimated by subtracting the scattering from the total attenuation; therefore, the accuracy of the intrinsic attenuation depends directly on the accuracy of the total attenuation and of the scattering. The total attenuation can be estimated from the recorded waves using in-situ methods such as the spectral ratio and frequency-shift methods. The scattering is estimated by treating the heterogeneities as a succession of stacked layers, each characterized by a single density and velocity. The accuracy of the scattering estimate is strongly dependent on the layer thicknesses, especially for media composed of carbonate rocks, which are known for their strong heterogeneity. Previous studies gave some assumptions for the choice of the layer thickness, but they showed limitations, especially in the case of carbonate rocks. In this study we established a relationship between the layer thicknesses and the frequency of propagation, after some mathematical development of the generalized O'Doherty-Anstey formula. We validated this relationship through synthetic tests and real data from a VSP carried out over an onshore oilfield in the emirate of Abu Dhabi in the United Arab Emirates, primarily composed of carbonate rocks. The results showed the utility of our relationship for an accurate estimation of the scattering.

  12. Point skin tests in allergology: estimation of point skin tests with histamine solutions of different concentration

    NASA Astrophysics Data System (ADS)

    Zuber, Janusz; Kruszewski, Jerzy; Klosowicz, Stanislaw J.; Zmija, Jozef

    1995-08-01

    The application of liquid crystal contact thermography to point skin tests used in allergology diagnostics has been studied. The effect of the concentration of histamine, adopted as the reference substance, on the observed temperature fields is presented. The results were confirmed by thermovision measurements. The correlation between the studied method and the visual estimation used to date is best in the temperature range observed as a blue color.

  13. Unbounded Binary Search for a Fast and Accurate Maximum Power Point Tracking

    NASA Astrophysics Data System (ADS)

    Kim, Yong Sin; Winston, Roland

    2011-12-01

    This paper presents a technique for maximum power point tracking (MPPT) of a concentrating photovoltaic system using cell-level power optimization. Perturb and observe (P&O) has been a standard for MPPT, but it introduces a tradeoff between the tracking speed and the accuracy of the maximum power delivered. The P&O algorithm is not suitable for rapid environmental changes caused by partial shading and self-shading, because its tracking time is linear in the length of the voltage range. Some research has addressed fast tracking, but the resulting methods rely on internal ad hoc parameters. In this paper, by using the proposed unbounded binary search algorithm for the MPPT, the tracking time becomes a logarithmic function of the voltage search range, without ad hoc parameters.
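
    To illustrate the binary-search idea on a unimodal power-voltage curve: halving the voltage bracket according to the sign of the local power slope makes the tracking time logarithmic in the width of the search range. The sketch below uses a synthetic P(V) curve and omits the 'unbounded' phase of the paper's algorithm, which first grows the bracket exponentially:

      def mppt_binary_search(power, v_lo, v_hi, dv=1e-3, tol=1e-6):
          # Bisect on the sign of the finite-difference slope dP/dV;
          # assumes power(V) has a single maximum inside [v_lo, v_hi].
          while v_hi - v_lo > tol:
              v_mid = 0.5 * (v_lo + v_hi)
              if power(v_mid + dv) > power(v_mid - dv):
                  v_lo = v_mid  # maximum lies to the right
              else:
                  v_hi = v_mid  # maximum lies to the left
          return 0.5 * (v_lo + v_hi)

      # Synthetic single-peak PV curve, maximum near 12.9 V
      p = lambda v: v * (5.0 - 0.01 * v ** 2)
      print(mppt_binary_search(p, 0.0, 25.0))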

  14. Can student health professionals accurately estimate alcohol content in commonly occurring drinks?

    PubMed Central

    Sinclair, Julia; Searle, Emma

    2016-01-01

    Objectives: Correct identification of alcohol as a contributor to, or comorbidity of, many psychiatric diseases requires health professionals to be competent and confident to take an accurate alcohol history. Being able to estimate (or calculate) the alcohol content in commonly consumed drinks is a prerequisite for quantifying levels of alcohol consumption. The aim of this study was to assess this ability in medical and nursing students. Methods: A cross-sectional survey of 891 medical and nursing students across different years of training was conducted. Students were asked the alcohol content of 10 different alcoholic drinks by seeing a slide of the drink (with picture, volume and percentage of alcohol by volume) for 30 s. Results: Overall, the mean number of correctly estimated drinks (out of the 10 tested) was 2.4, increasing to just over 3 if a 10% margin of error was used. Wine and premium strength beers were underestimated by over 50% of students. Those who drank alcohol themselves, or who were further on in their clinical training, did better on the task, but overall the levels remained low. Conclusions: Knowledge of, or the ability to work out, the alcohol content of commonly consumed drinks is poor, and further research is needed to understand the reasons for this and the impact this may have on the likelihood to undertake screening or initiate treatment. PMID:27536344
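
    For reference, the calculation the students were implicitly asked to perform follows directly from volume and ABV: in UK units (1 unit = 10 ml, about 8 g, of pure ethanol), units = volume(ml) × ABV(%) / 1000. A worked sketch:

      def uk_units(volume_ml, abv_percent):
          # UK alcohol units: 1 unit = 10 ml of pure ethanol
          return volume_ml * abv_percent / 1000.0

      def grams_ethanol(volume_ml, abv_percent):
          # Ethanol mass, using its density of ~0.789 g/ml
          return volume_ml * (abv_percent / 100.0) * 0.789

      # A 175 ml glass of 13% ABV wine
      print(uk_units(175, 13.0))       # ~2.3 units
      print(grams_ethanol(175, 13.0))  # ~18 g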

  15. Ultrasound Fetal Weight Estimation: How Accurate Are We Now Under Emergency Conditions?

    PubMed

    Dimassi, Kaouther; Douik, Fatma; Ajroudi, Mariem; Triki, Amel; Gara, Mohamed Faouzi

    2015-10-01

    The primary aim of this study was to evaluate the accuracy of sonographic estimation of fetal weight when performed at due date by first-line sonographers. This was a prospective study including 500 singleton pregnancies. Ultrasound examinations were performed by residents on delivery day. Estimated fetal weights (EFWs) were calculated and compared with the corresponding birth weights. The median absolute difference between EFW and birth weight was 200 g (100-330). This difference was within ±10% in 75.2% of the cases. The median absolute percentage error was 5.53% (2.70%-10.03%). Linear regression analysis revealed a good correlation between EFW and birth weight (r = 0.79, p < 0.0001). According to Bland-Altman analysis, bias was -85.06 g (95% limits of agreement: -663.33 to 494.21). In conclusion, EFWs calculated by residents were as accurate as those calculated by experienced sonographers. Nevertheless, predictive performance remains limited, with a low sensitivity in the diagnosis of macrosomia. PMID:26164286

  16. Ocean Lidar Measurements of Beam Attenuation and a Roadmap to Accurate Phytoplankton Biomass Estimates

    NASA Astrophysics Data System (ADS)

    Hu, Yongxiang; Behrenfeld, Mike; Hostetler, Chris; Pelon, Jacques; Trepte, Charles; Hair, John; Slade, Wayne; Cetinic, Ivona; Vaughan, Mark; Lu, Xiaomei; Zhai, Pengwang; Weimer, Carl; Winker, David; Verhappen, Carolus C.; Butler, Carolyn; Liu, Zhaoyan; Hunt, Bill; Omar, Ali; Rodier, Sharon; Lifermann, Anne; Josset, Damien; Hou, Weilin; MacDonnell, David; Rhew, Ray

    2016-06-01

    Beam attenuation coefficient, c, provides an important optical index of plankton standing stocks, such as phytoplankton biomass and total particulate carbon concentration. Unfortunately, c has proven difficult to quantify through remote sensing. Here, we introduce an innovative approach for estimating c using lidar depolarization measurements and diffuse attenuation coefficients from ocean color products or lidar measurements of Brillouin scattering. The new approach is based on a theoretical formula, established from Monte Carlo simulations, that links the depolarization ratio of sea water to the ratio of the diffuse attenuation Kd to the beam attenuation c (i.e., a multiple scattering factor). On July 17, 2014, the CALIPSO satellite was tilted 30° off-nadir for one nighttime orbit in order to minimize ocean surface backscatter and demonstrate the lidar ocean subsurface measurement concept from space. Depolarization ratios of ocean subsurface backscatter were measured accurately. Beam attenuation coefficients computed from the depolarization ratio measurements compare well with empirical estimates from ocean color measurements. We further verify the beam attenuation coefficient retrievals using aircraft-based high spectral resolution lidar (HSRL) data collocated with in-water optical measurements.

  17. Discrete state model and accurate estimation of loop entropy of RNA secondary structures.

    PubMed

    Zhang, Jian; Lin, Ming; Chen, Rong; Wang, Wei; Liang, Jie

    2008-03-28

    Conformational entropy makes an important contribution to the stability and folding of RNA molecules, but it is challenging to either measure or compute the conformational entropy associated with long loops. We develop optimized discrete k-state models of the RNA backbone, based on known RNA structures, for computing the entropy of loops, which are modeled as self-avoiding walks. To estimate the entropy of hairpin, bulge, internal, and multibranch loops of long length (up to 50), we develop an efficient sampling method based on the sequential Monte Carlo principle. Our method considers the excluded volume effect. It is general and can be applied to calculating the entropy of loops of longer length and arbitrary complexity. For loops of short length, our results are in good agreement with a recent theoretical model and experimental measurements. For long loops, our estimated entropy of hairpin loops is in excellent agreement with the Jacobson-Stockmayer extrapolation model. However, for bulge loops and more complex secondary structures such as internal and multibranch loops, we find that the Jacobson-Stockmayer extrapolation model has large errors. Based on the estimated entropy, we have developed empirical formulae for accurate calculation of the entropy of long loops in different secondary structures. Our study of the effect of asymmetric loop sizes suggests that the loop entropy of internal loops is largely determined by the total loop length and is only marginally affected by the asymmetric sizes of the two loops. Our finding suggests that the significant asymmetric effects of loop length in internal loops measured by experiments are likely to be partially enthalpic. Our method can be applied to develop improved energy parameters important for studying RNA stability and folding, and for predicting RNA secondary and tertiary structures. The discrete model and the program used to calculate loop entropy can be downloaded at http://gila.bioengr.uic.edu/resources/RNA.html. PMID:18376982
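
    For orientation only: the Jacobson-Stockmayer extrapolation mentioned above assumes loop entropy varies logarithmically with loop length, ΔS(n) ≈ ΔS(n₀) − c·k_B·ln(n/n₀), with c = 3/2 for ideal chains (in practice the coefficient is usually fitted). A sketch under these assumptions; the reference values below are placeholders, not numbers from the paper:

      import math

      K_B = 1.380649e-23  # Boltzmann constant, J/K

      def js_loop_entropy(n, n_ref, s_ref, coeff=1.5):
          # Jacobson-Stockmayer-style extrapolation from a reference loop
          # length n_ref to length n; coeff = 3/2 holds for ideal chains
          # and is typically refitted for real polymers.
          return s_ref - coeff * K_B * math.log(n / n_ref)

      # Placeholder reference entropy at a 10-nt loop, extrapolated to 50 nt
      print(js_loop_entropy(50, 10, s_ref=-1.0e-22))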

  18. An accurate and simple technique of determination of the maximum power point and measurement of some solar cell parameters

    NASA Astrophysics Data System (ADS)

    Deb, S.; Maitra, K.; Roychoudhuri, A.

    1985-06-01

    In the wake of the energy crisis, attempts are being made to develop a variety of energy conversion devices, such as solar cells. The single most important operational characteristic of a conversion element generating electricity is its V-I curve. Three points on this characteristic curve are of paramount importance: the short-circuit point, the open-circuit point, and the maximum power point. The present paper proposes a new, simple and accurate method of determining the maximum power point (Vm, Im) of the V-I characteristic, based on a geometrical interpretation. The method is general enough to be applicable to any energy conversion device having a nonlinear V-I characteristic. The paper also provides a method for determining the fill factor (FF), the series resistance (Rs), and the diode ideality factor (A) from a single set of connected observations.
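
    The fill factor determined by the paper is defined from the same characteristic points of the V-I curve: FF = (Vm·Im)/(Voc·Isc). A one-line sketch with illustrative values:

      def fill_factor(v_m, i_m, v_oc, i_sc):
          # Ratio of maximum-power-point power to the Voc * Isc product
          return (v_m * i_m) / (v_oc * i_sc)

      # Illustrative silicon-cell values
      print(fill_factor(v_m=0.50, i_m=2.8, v_oc=0.60, i_sc=3.0))  # ~0.78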

  19. Accurate estimation of retinal vessel width using bagged decision trees and an extended multiresolution Hermite model.

    PubMed

    Lupaşcu, Carmen Alina; Tegolo, Domenico; Trucco, Emanuele

    2013-12-01

    We present an algorithm estimating the width of retinal vessels in fundus camera images. The algorithm uses a novel parametric surface model of the cross-sectional intensities of vessels, and ensembles of bagged decision trees to estimate the local width from the parameters of the best-fit surface. We report comparative tests with REVIEW, currently the public database of reference for retinal width estimation, containing 16 images with 193 annotated vessel segments and 5066 profile points annotated manually by three independent experts. Comparative tests are reported also with our own set of 378 vessel widths selected sparsely in 38 images from the Tayside Scotland diabetic retinopathy screening programme and annotated manually by two clinicians. We obtain considerably better accuracies compared to leading methods in REVIEW tests and in Tayside tests. An important advantage of our method is its stability (success rate, i.e., meaningful measurement returned, of 100% on all REVIEW data sets and on the Tayside data set) compared to a variety of methods from the literature. We also find that results depend crucially on testing data and conditions, and discuss criteria for selecting a training set yielding optimal accuracy. PMID:24001930

  20. Closed-form solutions for estimating a rigid motion from plane correspondences extracted from point clouds

    NASA Astrophysics Data System (ADS)

    Khoshelham, Kourosh

    2016-04-01

    Registration is often a prerequisite step in processing point clouds. While planar surfaces are suitable features for registration, most of the existing plane-based registration methods rely on iterative solutions for the estimation of transformation parameters from plane correspondences. This paper presents a new closed-form solution for the estimation of a rigid motion from a set of point-plane correspondences. The role of normalization is investigated and its importance for accurate plane fitting and plane-based registration is shown. The paper also presents a thorough evaluation of the closed-form solutions and compares their performance with the iterative solution in terms of accuracy, robustness, stability and efficiency. The results suggest that the closed-form solution based on point-plane correspondences should be the method of choice in point cloud registration as it is significantly faster than the iterative solution, and performs as well as or better than the iterative solution in most situations. The normalization of the point coordinates is also recommended as an essential preprocessing step for point cloud registration. An implementation of the closed-form solutions in MATLAB is available at: http://people.eng.unimelb.edu.au/kkhoshelham/research.html#directmotion
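
    As a minimal illustration of the normalization point (a standard total-least-squares construction, not the paper's closed-form registration), the sketch below fits a plane by SVD after centering the points on their centroid; the centering is the normalization step that keeps the fit well-conditioned:

      import numpy as np

      def fit_plane(points):
          # Total-least-squares plane fit; returns (unit normal, centroid)
          # for the plane {x : n . (x - c) = 0}.
          pts = np.asarray(points, dtype=float)
          centroid = pts.mean(axis=0)
          centered = pts - centroid  # normalization: centre the coordinates
          _, _, vt = np.linalg.svd(centered, full_matrices=False)
          return vt[-1], centroid    # normal = direction of least variance

      # Noisy samples of the plane z = 0.1x + 0.2y + 3 (illustrative)
      rng = np.random.default_rng(0)
      xy = rng.uniform(-5, 5, size=(200, 2))
      z = 0.1 * xy[:, 0] + 0.2 * xy[:, 1] + 3 + rng.normal(0, 0.01, 200)
      print(fit_plane(np.column_stack([xy, z])))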

  1. Can endocranial volume be estimated accurately from external skull measurements in great-tailed grackles (Quiscalus mexicanus)?

    PubMed

    Logan, Corina J; Palmstrom, Christin R

    2015-01-01

    There is an increasing need to validate and collect data approximating brain size on individuals in the field to understand what evolutionary factors drive brain size variation within and across species. We investigated whether we could accurately estimate endocranial volume (a proxy for brain size), as measured by computerized tomography (CT) scans, using external skull measurements and/or by filling skulls with beads and pouring them out into a graduated cylinder for male and female great-tailed grackles. We found that while females had higher correlations than males, estimations of endocranial volume from external skull measurements or beads did not tightly correlate with CT volumes. We found no accuracy in the ability of external skull measures to predict CT volumes because the prediction intervals for most data points overlapped extensively. We conclude that we are unable to detect individual differences in endocranial volume using external skull measurements. These results emphasize the importance of validating and explicitly quantifying the predictive accuracy of brain size proxies for each species and each sex. PMID:26082858

  2. Estimating the Effects of Detection Heterogeneity and Overdispersion on Trends Estimated from Avian Point Counts

    EPA Science Inventory

    Point counts are a common method for sampling avian distribution and abundance. Though methods for estimating detection probabilities are available, many analyses use raw counts and do not correct for detectability. We use a removal model of detection within an N-mixture approa...

  3. Accurate Visual Heading Estimation at High Rotation Rate Without Oculomotor or Static-Depth Cues

    NASA Technical Reports Server (NTRS)

    Stone, Leland S.; Perrone, John A.; Null, Cynthia H. (Technical Monitor)

    1995-01-01

    It has been claimed that either oculomotor or static depth cues provide the signals about self-rotation necessary for accurate heading estimation at rotation rates of approximately 1 deg/s or more. We tested this hypothesis by simulating self-motion along a curved path with the eyes fixed in the head (plus or minus 16 deg/s of rotation). Curvilinear motion offers two advantages: 1) heading remains constant in retinotopic coordinates, and 2) there is no visual-oculomotor conflict (both actual and simulated eye position remain stationary). We simulated 400 ms of rotation combined with 16 m/s of translation at fixed angles with respect to gaze towards two vertical planes of random dots initially 12 and 24 m away, with a field of view of 45 degrees. Four subjects were asked to fixate a central cross and to respond whether they were translating to the left or right of straight-ahead gaze. From the psychometric curves, heading bias (mean) and precision (semi-interquartile range) were derived. The mean bias over 2-5 runs was 3.0, 4.0, -2.0, and -0.4 deg for the first author and three naive subjects, respectively (positive indicating towards the rotation direction). The mean precision was 2.0, 1.9, 3.1, and 1.6 deg, respectively. The ability of observers to make relatively accurate and precise heading judgments, despite the large rotational flow component, refutes the view that extra-flow-field information is necessary for human visual heading estimation at high rotation rates. Our results support models that process combined translational/rotational flow to estimate heading, but should not be construed to suggest that other cues do not play an important role when they are available to the observer.

  4. Effects of a More Accurate Polarizable Hamiltonian on Polymorph Free Energies Computed Efficiently by Reweighting Point-Charge Potentials.

    PubMed

    Dybeck, Eric C; Schieber, Natalie P; Shirts, Michael R

    2016-08-01

    We examine the free energies of three benzene polymorphs as a function of temperature in the point-charge OPLS-AA and GROMOS54A7 potentials as well as the polarizable AMOEBA09 potential. For this system, using a polarizable Hamiltonian instead of the cheaper point-charge potentials is shown to have a significantly smaller effect on the stability at 250 K than on the lattice energy at 0 K. The benzene I polymorph is found to be the most stable crystal structure in all three potentials examined and at all temperatures examined. For each potential, we report the free energies over a range of temperatures and discuss the added value of using full free energy methods over the minimized lattice energy to determine the relative crystal stability at finite temperatures. The free energies in the polarizable Hamiltonian are efficiently calculated using samples collected in a cheaper point-charge potential. The polarizable free energies are estimated from the point-charge trajectories using Boltzmann reweighting with MBAR. The high configuration-space overlap necessary for efficient Boltzmann reweighting is achieved by designing point-charge potentials with intramolecular parameters matching those in the expensive polarizable Hamiltonian. Finally, we compare the computational cost of this indirect reweighted free energy estimate to the cost of simulating directly in the expensive polarizable Hamiltonian. PMID:27341280
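
    The reweighting identity is easiest to see in its single-state (Zwanzig-style) form rather than full MBAR: averages in the target potential follow from reference-ensemble samples weighted by exp(-βΔU). A schematic sketch with placeholder data; as the abstract notes, the estimate is only efficient when configuration-space overlap is high:

      import numpy as np

      def reweighted_average(obs, delta_u, beta):
          # <O>_target = <O exp(-beta dU)>_ref / <exp(-beta dU)>_ref,
          # with dU = U_target - U_ref evaluated on reference samples.
          w = np.exp(-beta * (delta_u - delta_u.min()))  # stabilised weights
          return np.sum(obs * w) / np.sum(w)

      # Placeholder per-frame observables and energy differences
      rng = np.random.default_rng(1)
      obs = rng.normal(10.0, 1.0, size=1000)  # e.g. lattice energies, kJ/mol
      du = rng.normal(0.0, 0.5, size=1000)    # U_polarizable - U_point_charge
      beta = 1.0 / (0.0083145 * 250.0)        # 1/(k_B T) at 250 K, mol/kJ
      print(reweighted_average(obs, du, beta))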

  5. Bone Pose Estimation in the Presence of Soft Tissue Artifact Using Triangular Cosserat Point Elements.

    PubMed

    Solav, Dana; Rubin, M B; Cereatti, Andrea; Camomilla, Valentina; Wolf, Alon

    2016-04-01

    Accurate estimation of the position and orientation (pose) of a bone from a cluster of skin markers is limited mostly by the relative motion between the bone and the markers, which is known as the soft tissue artifact (STA). This work presents a method, based on continuum mechanics, to describe the kinematics of a cluster affected by STA. The cluster is characterized by triangular Cosserat point elements (TCPEs) defined by all combinations of three markers. The effects of the STA on the TCPEs are quantified using three parameters describing the strain in each TCPE and the relative rotation and translation between TCPEs. The method was evaluated using previously collected ex vivo kinematic data. Femur pose was estimated from 12 skin markers on the thigh, while its reference pose was measured using bone pins. Analysis revealed that instantaneous subsets of TCPEs exist which estimate bone position and orientation more accurately than Procrustes superimposition applied to the cluster of all markers. Some of these parameters correlate well with femur pose errors, which suggests that they can be used to select, at each instant, subsets of TCPEs leading to an improved estimation of the underlying bone pose. PMID:26194039

  6. Trend Estimation and Change Point Detection in Climatic Series

    NASA Astrophysics Data System (ADS)

    Bates, B. C.; Chandler, R. E.

    2011-12-01

    The problems of trend estimation and change point detection in climatic series have received substantial attention in recent years. Key issues include the magnitudes and directions of underlying trends, and the existence (or otherwise) of abrupt shifts in the mean background state. There are many procedures in use including: t-tests, Mann-Whitney and Pettit tests, linear and piecewise linear regression; cumulative sum analysis; hierarchical Bayesian change point analysis; Markov chain Monte Carlo methods; and reversible jump Markov chain Monte Carlo. The purpose of our presentation is to motivate wider use of modern regression techniques for trend estimation and change point detection in climatic series. We pay particular attention to the underlying statistical assumptions as their violation can lead to serious errors in data interpretation and study conclusions. In this context we consider two case studies. The first involves the application of local linear regression and a test for discontinuities in the regression function to the winter (December-March) North Atlantic Oscillation (NAO) index series for the period 1864-2010. This series exhibits a reversal from strongly negative values in the late 1960s to strongly positive NAO index values in the mid-1990s. The second involves the analysis of a seasonal (June to October) series of typhoon counts in the vicinity of Taiwan for the period 1970-2006. A previous investigation by other researchers concluded that an abrupt shift in this series occurred between 1999 and 2000. For both case studies, our findings indicate little evidence for abrupt shifts: rather, the decadal to multidecadal changes in the mean levels of both series appear well described by smooth trends. For the winter NAO index series, the trend is non-monotonic; for the typhoon counts, it can be regarded as linear on the square root scale. Our statistical results do not contradict those obtained by other researchers: our interpretation of these results

  7. Reconstruction of the activity of point sources for the accurate characterization of nuclear waste drums by segmented gamma scanning.

    PubMed

    Krings, Thomas; Mauerhofer, Eric

    2011-06-01

    This work improves the reliability and accuracy of the reconstruction of the total isotope activity content in heterogeneous nuclear waste drums containing point sources. The method is based on χ²-fits of the angular-dependent count rate distribution measured during drum rotation in segmented gamma scanning. A new analytical description of the angular count rate distribution is introduced, based on a more precise model of the collimated detector. The new description is validated and compared with the old description using MCNP5 simulations of angular-dependent count rate distributions of Co-60 and Cs-137 point sources. It is shown that the new model describes the angular-dependent count rate distribution significantly more accurately than the old model. Hence, the reconstruction of the activity is more accurate and the errors are considerably reduced, leading to more reliable results. Furthermore, the results are compared with the conventional reconstruction method, which assumes a homogeneous matrix and activity distribution. PMID:21353575

  8. Evaluation of pedotransfer functions for estimating the soil water retention points

    NASA Astrophysics Data System (ADS)

    Bahmani, Omid; Palangi, Sahar

    2016-06-01

    Direct measurement of soil moisture is often expensive and time-consuming. The aim of this study was to determine the best method for estimating soil moisture using the pedotransfer functions in the soil par2 model. Soil samples were selected from the UNSODA database in three textures: sandy loam, silty loam and clay. In clay soil, the Campbell model gave the better results at field capacity (FC) and wilting point (WP), with RMSE = (0.06, 0.09) and d = (0.65, 0.55), respectively. In silty loam soil, the Epic model gave an accurate estimate at FC with MBE = 0.00, and the Campbell model gave an acceptable result at WP with RMSE = 0.03 and d = 0.77. In sandy loam, the Hutson and Campbell models estimated FC and WP better than the others. The Hutson model also gave an acceptable estimate of TAW (Total Available Water), with RMSE = (0.03, 0.04, 0.04) and MBE = (0.02, 0.01, 0.01) for clay, sandy loam and silty loam, respectively. These results demonstrate that the moisture points are closely linked to soil texture, and that the PTF models produce results in agreement with the experimental observations.

  9. Curie point depth estimation of the Eastern Caribbean

    NASA Astrophysics Data System (ADS)

    Garcia, Andreina; Orihuela Guevara, Nuris

    2013-04-01

    In this paper we present an estimation of the Curie point depth (CPD) in the Eastern Caribbean. The CPD was estimated from satellite magnetic anomalies by applying the centroid method over the studied area. In order to calculate the CPD, the area was subdivided into square windows of side equal to 2°, with an overlap distance of 1°. As a result of this research, the Curie isotherm grid was obtained using the kriging interpolation method. Despite the oceanic nature of the Eastern Caribbean plate, this map reveals important lateral variations in the interior of the plate and at its boundaries. The lateral variations observed in the CPD are related to the complexity of thermal processes in the subsurface of the region. From a global perspective, the Earth's oceanic provinces show smooth CPD behavior, except at their plate boundaries. In this case, the Eastern Caribbean plate's CPD variations are related to both the plate's boundaries and its interior. The maximum CPD variations are observed at the southern boundary of the Caribbean plate (9 to 35 km) and over the Lesser Antilles and the Barbados prism (16 to 30 km). This behavior reflects the complex geologic evolution history of the studied area, in which the presence of extensive basalt and dolerite sills has been documented. These sills originated in various cycles of Cretaceous mantle activity and have been the main cause of the thickening of the oceanic crust in the interior of the Caribbean plate. At the same time, this thickening of the oceanic plate explains the existence of a Mohorovičić discontinuity with an average depth greater than in other regions of the planet, with slight irregularities related to highs of the ocean floor (Nicaragua and Beata Crests, Aves High), but not of a magnitude similar to the lateral variations revealed by the Curie isotherm map.

  10. Improved image quality in pinhole SPECT by accurate modeling of the point spread function in low magnification systems

    SciTech Connect

    Pino, Francisco; Roé, Nuria; Aguiar, Pablo; Falcon, Carles; Ros, Domènec; Pavía, Javier

    2015-02-15

    Purpose: Single photon emission computed tomography (SPECT) has become an important noninvasive imaging technique in small-animal research. Due to the high resolution required in small-animal SPECT systems, the spatially variant system response needs to be included in the reconstruction algorithm. Accurate modeling of the system response should result in a major improvement in the quality of reconstructed images. The aim of this study was to quantitatively assess the impact that an accurate modeling of spatially variant collimator/detector response has on image-quality parameters, using a low magnification SPECT system equipped with a pinhole collimator and a small gamma camera. Methods: Three methods were used to model the point spread function (PSF). For the first, only the geometrical pinhole aperture was included in the PSF. For the second, the septal penetration through the pinhole collimator was added. In the third method, the measured intrinsic detector response was incorporated. Tomographic spatial resolution was evaluated and contrast, recovery coefficients, contrast-to-noise ratio, and noise were quantified using a custom-built NEMA NU 4–2008 image-quality phantom. Results: A high correlation was found between the experimental data corresponding to intrinsic detector response and the fitted values obtained by means of an asymmetric Gaussian distribution. For all PSF models, resolution improved as the distance from the point source to the center of the field of view increased and when the acquisition radius diminished. An improvement of resolution was observed after a minimum of five iterations when the PSF modeling included more corrections. Contrast, recovery coefficients, and contrast-to-noise ratio were better for the same level of noise in the image when more accurate models were included. Ring-type artifacts were observed when the number of iterations exceeded 12. Conclusions: Accurate modeling of the PSF improves resolution, contrast, and recovery

  11. Crop area estimation based on remotely-sensed data with an accurate but costly subsample

    NASA Technical Reports Server (NTRS)

    Gunst, R. F.

    1983-01-01

    Alternatives to sampling-theory stratified and regression estimators of crop production and timber biomass were examined. An alternative estimator viewed as especially promising is the errors-in-variables regression estimator. Investigations established the need for caution with this estimator when the ratio of the two error variances is not precisely known.

  12. Zero-Cost Estimation of Zero-Point Energies.

    PubMed

    Császár, Attila G; Furtenbacher, Tibor

    2015-10-01

    An additive, linear, atom-type-based (ATB) scheme is developed allowing no-cost estimation of zero-point vibrational energies (ZPVE) of neutral, closed-shell molecules in their ground electronic states. The atom types employed correspond to those defined within the MM2 molecular mechanics force field approach. The reference training set of 156 molecules covers chained and branched alkanes, alkenes, cycloalkanes and cycloalkenes, alkynes, alcohols, aldehydes, carboxylic acids, amines, amides, ethers, esters, ketones, benzene derivatives, heterocycles, nucleobases, all the natural amino acids, some dipeptides and sugars, as well as further simple molecules and molecules containing several structural units, including several vitamins. A weighted linear least-squares fit of atom-type-based ZPVE increments results in recommended values for the following atoms, with the number of atom types defined in parentheses: H(8), D(1), B(1), C(6), N(7), O(3), F(1), Si(1), P(2), S(3), and Cl(1). The average accuracy of the ATB ZPVEs is considerably better than 1 kcal mol(-1), that is, better than chemical accuracy. The proposed ATB scheme could be extended to many more atoms and atom types, following a careful validation procedure; deviation from the MM2 atom types seems to be necessary, especially for third-row elements. PMID:26398318
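
    Applying the additive scheme is then a matter of summing per-atom increments over the molecule's atom types. A sketch with invented increment values (placeholders only; the fitted values are given in the paper):

      # Hypothetical atom-type increments in kcal/mol -- placeholders, NOT
      # the fitted values from the paper
      ZPVE_INCREMENT = {
          "C_sp3": 17.6,
          "H_on_C": 7.4,
          "O_sp3": 5.1,
          "H_on_O": 7.0,
      }

      def atb_zpve(atom_type_counts):
          # Additive atom-type-based ZPVE: sum of per-atom increments
          return sum(ZPVE_INCREMENT[t] * n for t, n in atom_type_counts.items())

      # Methanol (CH3OH): one sp3 carbon, three C-H hydrogens, one O-H group
      print(atb_zpve({"C_sp3": 1, "H_on_C": 3, "O_sp3": 1, "H_on_O": 1}))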

  13. Estimating the effects of detection heterogeneity and overdispersion on trends estimated from avian point counts.

    PubMed

    Etterson, Matthew A; Niemi, Gerald J; Danz, Nicholas P

    2009-12-01

    Point counts are a common method for sampling avian distribution and abundance. Although methods for estimating detection probabilities are available, many analyses use raw counts and do not correct for detectability. We use a removal model of detection within an N-mixture approach to estimate abundance trends corrected for imperfect detection. We compare the corrected trend estimates to those estimated from raw counts for 16 species using 15 years of monitoring data on three national forests in the western Great Lakes, USA. We also tested the effects of overdispersion by modeling both counts and removal mixtures under three statistical distributions: Poisson, zero-inflated Poisson, and negative binomial. For most species, the removal model produced estimates of detection probability that conformed to expectations. For many species, but not all, estimates of trends were similar regardless of statistical distribution or method of analysis. Within a given combination of likelihood (counts vs. mixtures) and statistical distribution, trends usually differed by both stand type and national forest, with species showing declines in some stand types and increases in others. For three species, Brown Creeper, Yellow-rumped Warbler, and Black-throated Green Warbler, temporal patterns in detectability resulted in substantial differences in estimated trends under the removal mixtures compared to the analysis of raw counts. Overall, we found that the zero-inflated Poisson was the best distribution for our data, although the Poisson or negative binomial performed better for a few species. The similarity in estimated trends that we observed among counts and removal mixtures was probably a result of both experimental design and sampling effort. First, the study was originally designed to avoid confounding observer effects with habitats or time. Second, our time series is relatively long and our sample sizes within years are large. PMID:20014578
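
    To sketch the removal model of detection used here: with a constant per-interval detection probability p and J equal count intervals, a bird is first detected in interval j with probability p(1−p)^(j−1), and the overall detection probability is 1−(1−p)^J, which can be used to correct raw counts for detectability. A minimal maximum-likelihood sketch on hypothetical interval counts (the paper embeds this in an N-mixture model, which the sketch does not reproduce):

      import numpy as np

      def removal_mle(first_detection_counts):
          # Grid-search MLE of the per-interval detection probability p from
          # first-detection counts, conditioning on detection during the count.
          counts = np.asarray(first_detection_counts, dtype=float)
          j = np.arange(1, counts.size + 1)
          best_p, best_ll = 0.01, -np.inf
          for p in np.linspace(0.01, 0.99, 981):
              cell = p * (1.0 - p) ** (j - 1)
              cell /= cell.sum()  # condition on detection within J intervals
              ll = np.sum(counts * np.log(cell))
              if ll > best_ll:
                  best_p, best_ll = p, ll
          return best_p

      # Hypothetical first-detection counts in three count intervals
      counts = [60, 25, 15]
      p = removal_mle(counts)
      print(p, sum(counts) / (1.0 - (1.0 - p) ** len(counts)))  # corrected N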

  14. A new approach based on embedding Green's functions into fixed-point iterations for highly accurate solution to Troesch's problem

    NASA Astrophysics Data System (ADS)

    Kafri, H. Q.; Khuri, S. A.; Sayfy, A.

    2016-03-01

    In this paper, a novel approach is introduced for the solution of the non-linear Troesch's boundary value problem. The underlying strategy is based on Green's functions and fixed-point iterations, including Picard's and Krasnoselskii-Mann's schemes. The resulting numerical solutions are compared with both the analytical solutions and numerical solutions that exist in the literature. Convergence of the iterative schemes is proved via manipulation of the contraction principle. It is observed that the method handles the boundary layer very efficiently, reduces lengthy calculations, provides rapid convergence, and yields accurate results particularly for large eigenvalues. Indeed, to our knowledge, this is the first time that this problem is solved successfully for very large eigenvalues, actually the rate of convergence increases as the magnitude of the eigenvalues increases.
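
    A minimal sketch of the underlying construction, assuming the standard Dirichlet Green's function for u'' on [0,1]: writing Troesch's problem y'' = λ sinh(λy), y(0) = 0, y(1) = 1 as y(x) = x + ∫₀¹ G(x,s) λ sinh(λ y(s)) ds, with G(x,s) = s(x−1) for s ≤ x and x(s−1) otherwise, Picard's scheme simply iterates this map. Plain Picard as sketched converges only for modest λ; the large-eigenvalue behaviour reported in the paper relies on the Krasnoselskii-Mann variant, which is not implemented here.

      import numpy as np

      def troesch_picard(lam, n=201, iters=200, tol=1e-10):
          # Picard iteration y <- x + int_0^1 G(x,s) lam sinh(lam y(s)) ds
          # on a uniform grid, with trapezoidal quadrature weights.
          x = np.linspace(0.0, 1.0, n)
          s, xc = x[None, :], x[:, None]
          G = np.where(s <= xc, s * (xc - 1.0), xc * (s - 1.0))
          w = np.full(n, x[1] - x[0])
          w[0] *= 0.5
          w[-1] *= 0.5
          y = x.copy()  # initial guess: the straight line meeting the BCs
          for _ in range(iters):
              y_new = x + (G * (lam * np.sinh(lam * y))[None, :]) @ w
              if np.max(np.abs(y_new - y)) < tol:
                  return y_new
              y = y_new
          return y

      y = troesch_picard(lam=0.5)
      print(y[len(y) // 2])  # y(0.5), slightly below the linear guess of 0.5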

  15. On point estimation of the abnormality of a Mahalanobis index

    PubMed Central

    Elfadaly, Fadlalla G.; Garthwaite, Paul H.; Crawford, John R.

    2016-01-01

    Mahalanobis distance may be used as a measure of the disparity between an individual’s profile of scores and the average profile of a population of controls. The degree to which the individual’s profile is unusual can then be equated to the proportion of the population who would have a larger Mahalanobis distance than the individual. Several estimators of this proportion are examined. These include plug-in maximum likelihood estimators, medians, the posterior mean from a Bayesian probability matching prior, an estimator derived from a Taylor expansion, and two forms of polynomial approximation, one based on Bernstein polynomials and one on a quadrature method. Simulations show that some estimators, including the commonly used plug-in maximum likelihood estimators, can have substantial bias for small or moderate sample sizes. The polynomial approximations yield estimators that have low bias, with the quadrature method marginally preferred over Bernstein polynomials. However, the polynomial estimators sometimes yield infeasible estimates that are outside the 0–1 range. While none of the estimators is perfectly unbiased, the median estimators match their definition; in simulations their estimates of the proportion have a median error close to zero. The standard median estimator can give unrealistically small estimates (including 0), and an adjustment is proposed that ensures estimates are always credible. This latter estimator has much to recommend it when unbiasedness is not of paramount importance, while the quadrature method is recommended when bias is the dominant issue. PMID:27375307
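
    To make the plug-in estimator concrete: with known parameters, the squared Mahalanobis distance of a random control follows a χ² distribution with k degrees of freedom, so the proportion of controls exceeding the individual's distance is a χ² tail probability; the plug-in version substitutes the sample mean and covariance, which is the source of the small-sample bias the paper examines. A sketch:

      import numpy as np
      from scipy import stats

      def plugin_abnormality(profile, controls):
          # Plug-in estimate of the proportion of controls with a larger
          # Mahalanobis distance than `profile` (k scores per subject).
          x = np.asarray(controls, dtype=float)
          mu = x.mean(axis=0)
          cov = np.cov(x, rowvar=False)
          d = np.asarray(profile, dtype=float) - mu
          d2 = d @ np.linalg.solve(cov, d)  # squared Mahalanobis distance
          return stats.chi2.sf(d2, df=x.shape[1])  # chi-square tail area

      # Illustrative: a 3-score profile against 40 simulated controls
      rng = np.random.default_rng(2)
      print(plugin_abnormality([1.5, -0.5, 2.0], rng.normal(size=(40, 3))))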

  16. Accurate and robust registration of high-speed railway viaduct point clouds using closing conditions and external geometric constraints

    NASA Astrophysics Data System (ADS)

    Ji, Zheng; Song, Mengxiao; Guan, Haiyan; Yu, Yongtao

    2015-08-01

    This paper proposes an automatic method for registering multiple laser scans without a control network. The method first uses artificial targets to register adjacent scans pair-wise for initial transformation estimates; it then employs combined adjustments with closing conditions and external triangle constraints to globally register all scans along a long-range, high-speed railway corridor. The method uses (1) closing conditions to eliminate registration errors that gradually accumulate as the length of the corridor (the number of scan stations) increases, and (2) external geometric constraints to ensure the shape correctness of an elongated high-speed railway. A 640-m high-speed railway viaduct with twenty-one piers is used for the experiments, and a group of comparative experiments evaluates the robustness and efficiency of the proposed method in accurately registering long-range corridors.

  17. Accurate Point-of-Care Detection of Ruptured Fetal Membranes: Improved Diagnostic Performance Characteristics with a Monoclonal/Polyclonal Immunoassay

    PubMed Central

    Rogers, Linda C.; Scott, Laurie; Block, Jon E.

    2016-01-01

    OBJECTIVE Accurate and timely diagnosis of rupture of membranes (ROM) is imperative to allow for gestational age-specific interventions. This study compared the diagnostic performance characteristics between two methods used for the detection of ROM as measured in the same patient. METHODS Vaginal secretions were evaluated using the conventional fern test as well as a point-of-care monoclonal/polyclonal immunoassay test (ROM Plus®) in 75 pregnant patients who presented to labor and delivery with complaints of leaking amniotic fluid. Both tests were compared to analytical confirmation of ROM using three external laboratory tests. Diagnostic performance characteristics were calculated including sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy. RESULTS Diagnostic performance characteristics uniformly favored ROM detection using the immunoassay test compared to the fern test: sensitivity (100% vs. 77.8%), specificity (94.8% vs. 79.3%), PPV (75% vs. 36.8%), NPV (100% vs. 95.8%), and accuracy (95.5% vs. 79.1%). CONCLUSIONS The point-of-care immunoassay test provides improved diagnostic accuracy for the detection of ROM compared to fern testing. It has the potential of improving patient management decisions, thereby minimizing serious complications and perinatal morbidity. PMID:27199579
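
    All five reported performance characteristics derive from the 2×2 confusion matrix against the laboratory reference standard. A generic sketch of the definitions (the counts below are illustrative, not the study's data):

      def diagnostic_metrics(tp, fp, tn, fn):
          # Standard 2x2 diagnostic performance characteristics
          return {
              "sensitivity": tp / (tp + fn),
              "specificity": tn / (tn + fp),
              "ppv": tp / (tp + fp),
              "npv": tn / (tn + fn),
              "accuracy": (tp + tn) / (tp + fp + tn + fn),
          }

      # Illustrative counts only
      print(diagnostic_metrics(tp=9, fp=3, tn=55, fn=0))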

  18. An accurate modeling, simulation, and analysis tool for predicting and estimating Raman LIDAR system performance

    NASA Astrophysics Data System (ADS)

    Grasso, Robert J.; Russo, Leonard P.; Barrett, John L.; Odhner, Jefferson E.; Egbert, Paul I.

    2007-09-01

    BAE Systems presents the results of a program to model the performance of Raman LIDAR systems for the remote detection of atmospheric gases, air-polluting hydrocarbons, chemical and biological weapons, and other molecular species of interest. Our model, which integrates remote Raman spectroscopy, 2D and 3D LADAR, and USAF atmospheric propagation codes, permits accurate determination of the performance of a Raman LIDAR system. The very high predictive accuracy of our model is due to the very accurate calculation of the differential scattering cross section for the species of interest at user-selected wavelengths. We show excellent correlation of the calculated cross-section data used in our model with experimental data obtained from both laboratory measurements and the published literature. In addition, the use of standard USAF atmospheric models provides very accurate determination of the atmospheric extinction at both the excitation and Raman-shifted wavelengths.

  19. The effect of high leverage points on the maximum estimated likelihood for separation in logistic regression

    NASA Astrophysics Data System (ADS)

    Ariffin, Syaiba Balqish; Midi, Habshah; Arasan, Jayanthi; Rana, Md Sohel

    2015-02-01

    This article is concerned with the performance of the maximum estimated likelihood estimator in the presence of separation in the space of the independent variables and of high leverage points. The maximum likelihood estimator suffers from the problem of non-overlapping cases in the covariates, where the regression coefficients are not identifiable and the maximum likelihood estimate does not exist. Consequently, the iteration scheme fails to converge and gives faulty results. To remedy this problem, the maximum estimated likelihood estimator is put forward. It is evident that the maximum estimated likelihood estimator is resistant to separation and that its estimates always exist. The effect of high leverage points on the performance of the maximum estimated likelihood estimator is then investigated through real data sets and a Monte Carlo simulation study. The findings signify that the maximum estimated likelihood estimator fails to provide better parameter estimates in the presence of both separation and high leverage points.

  20. Bi-fluorescence imaging for estimating accurately the nuclear condition of Rhizoctonia spp.

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Aims: To simplify the determination of the nuclear condition of pathogenic Rhizoctonia, which currently needs to be performed either with two fluorescent dyes, which is more costly and time-consuming, or with only one fluorescent dye, which is less accurate. Methods and Results: A red primary ...

  1. Technical note: tree truthing: how accurate are substrate estimates in primate field studies?

    PubMed

    Bezanson, Michelle; Watts, Sean M; Jobin, Matthew J

    2012-04-01

    Field studies of primate positional behavior typically rely on ground-level estimates of substrate size, angle, and canopy location. These estimates potentially influence the identification of positional modes by the observer recording behaviors. In this study we aim to test ground-level estimates against direct measurements of support angles, diameters, and canopy heights in trees at La Suerte Biological Research Station in Costa Rica. After reviewing methods that have been used by past researchers, we provide data collected within trees that are compared to estimates obtained from the ground. We climbed five trees and measured 20 supports. Four observers collected measurements of each support from different locations on the ground. Diameter estimates varied from the direct tree measures by 0-28 cm (Mean: 5.44 ± 4.55). Substrate angles varied by 1-55° (Mean: 14.76 ± 14.02). Height in the tree was best estimated using a clinometer, as estimates with a two-meter reference placed by the tree varied by 3-11 meters (Mean: 5.31 ± 2.44). We determined that the best support size estimates were those generated relative to the size of the focal animal and divided into broader categories. Support angles were best estimated in 5° increments and then checked using a Haglöf clinometer in combination with a laser pointer. We conclude that three major factors should be addressed when estimating support features: observer error (e.g., experience and distance from the target), support deformity, and how support size and angle influence the positional mode selected by a primate individual. PMID:22371099

  2. Accurate state estimation for a hydraulic actuator via a SDRE nonlinear filter

    NASA Astrophysics Data System (ADS)

    Strano, Salvatore; Terzo, Mario

    2016-06-01

    The state estimation in hydraulic actuators is a fundamental tool for the detection of faults and a valid alternative to the installation of sensors. Due to the hard nonlinearities that characterize hydraulic actuators, the performance of linear and linearization-based techniques for state estimation is strongly limited. In order to overcome these limits, this paper focuses on an alternative nonlinear estimation method based on the State-Dependent Riccati Equation (SDRE). The technique is able to fully take into account the system nonlinearities and the measurement noise. A fifth-order nonlinear model is derived and employed for the synthesis of the estimator. Simulations and experimental tests have been conducted, and comparisons with the widely used Extended Kalman Filter (EKF) are illustrated. The results show the effectiveness of the SDRE-based technique for applications characterized by non-negligible nonlinearities such as dead zones and friction.
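
    As a rough illustration of the SDRE estimator structure (not the paper's fifth-order hydraulic model), the following sketch assumes a toy second-order system with a state-dependent factorization dx/dt = A(x)x and uses SciPy's continuous algebraic Riccati solver to recompute the filter gain at the current state estimate:

      import numpy as np
      from scipy.linalg import solve_continuous_are

      def A_of_x(x, g_l=9.81, damping=0.5):
          """State-dependent factorization dx/dt = A(x) x (toy pendulum-like model)."""
          s = np.sinc(x[0] / np.pi)                  # sin(x1)/x1, finite at x1 = 0
          return np.array([[0.0, 1.0],
                           [-g_l * s, -damping]])

      C = np.array([[1.0, 0.0]])                     # only the first state is measured
      Q = np.diag([1e-3, 1e-3])                      # process-noise intensity (tuning)
      R = np.array([[1e-2]])                         # measurement-noise intensity

      def sdre_filter_step(x_hat, y, dt):
          """One explicit-Euler update of an SDRE state estimator."""
          A = A_of_x(x_hat)
          # Filter Riccati equation: A P + P A' - P C' R^-1 C P + Q = 0
          P = solve_continuous_are(A.T, C.T, Q, R)
          L = P @ C.T @ np.linalg.inv(R)             # state-dependent estimator gain
          dx = A @ x_hat + (L @ (y - C @ x_hat)).ravel()
          return x_hat + dt * dx

      x_hat = sdre_filter_step(np.array([0.3, 0.0]), y=0.35, dt=1e-3)

    Because A(x) is refactored and the Riccati equation re-solved at every step, the gain adapts to the local nonlinearity, which is the property exploited here for dead zones and friction.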

  3. SU-F-BRF-09: A Non-Rigid Point Matching Method for Accurate Bladder Dose Summation in Cervical Cancer HDR Brachytherapy

    SciTech Connect

    Chen, H; Zhen, X; Zhou, L; Zhong, Z; Pompos, A; Yan, H; Jiang, S; Gu, X

    2014-06-15

    Purpose: To propose and validate a deformable point matching scheme for surface deformation to facilitate accurate bladder dose summation for fractionated HDR cervical cancer treatment. Methods: A deformable point matching scheme based on the thin plate spline robust point matching (TPS-RPM) algorithm is proposed for bladder surface registration. The surface of bladders segmented from fractional CT images is extracted and discretized with a triangular surface mesh. The deformation between the two bladder surfaces is obtained by matching the two meshes' vertices via the TPS-RPM algorithm, and the deformation vector field (DVF) characterizing this deformation is estimated by B-spline approximation. Numerically, the algorithm is quantitatively compared with the Demons algorithm using five clinical cervical cancer cases by several metrics: vertex-to-vertex distance (VVD), Hausdorff distance (HD), percent error (PE), and conformity index (CI). Experimentally, the algorithm is validated on a balloon phantom with 12 surface fiducial markers. The balloon is inflated with different amounts of water, and the displacement of the fiducial markers is benchmarked as ground truth to study the accuracy of the TPS-RPM-calculated DVFs. Results: In the numerical evaluation, the mean VVD is 3.7 (±2.0) mm after Demons and 1.3 (±0.9) mm after TPS-RPM. The mean HD is 14.4 mm after Demons and 5.3 mm after TPS-RPM. The mean PE is 101.7% after Demons and decreases to 18.7% after TPS-RPM. The mean CI is 0.63 after Demons and increases to 0.90 after TPS-RPM. In the phantom study, the mean Euclidean distance of the fiducials is 7.4±3.0 mm and 4.2±1.8 mm after Demons and TPS-RPM, respectively. Conclusions: The bladder wall deformation is more accurate using the feature-based TPS-RPM algorithm than the intensity-based Demons algorithm, indicating that TPS-RPM has the potential for accurate bladder dose deformation and dose summation for multi-fractional cervical HDR brachytherapy.
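
    The comparison metrics quoted above are straightforward to compute once corresponding vertices are available. A short sketch for VVD and HD (SciPy's directed Hausdorff routine is assumed; the TPS-RPM registration itself is not reproduced here):

      import numpy as np
      from scipy.spatial.distance import directed_hausdorff

      def vertex_to_vertex_distance(deformed_pts, target_pts):
          """Mean Euclidean distance between corresponding vertices (VVD)."""
          return np.linalg.norm(deformed_pts - target_pts, axis=1).mean()

      def hausdorff_distance(a, b):
          """Symmetric Hausdorff distance between two point clouds (HD)."""
          return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

      # Example: 100 surface points perturbed by a small random displacement.
      rng = np.random.default_rng(0)
      src = rng.random((100, 3))
      dst = src + 0.01 * rng.standard_normal((100, 3))
      print(vertex_to_vertex_distance(src, dst), hausdorff_distance(src, dst))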

  4. Accurate estimate of α variation and isotope shift parameters in Na and Mg+

    NASA Astrophysics Data System (ADS)

    Sahoo, B. K.

    2010-12-01

    We present accurate calculations of fine-structure constant variation coefficients and isotope shifts in Na and Mg+ using the relativistic coupled-cluster method. In our approach, we are able to discover the roles of various correlation effects explicitly to all orders in these calculations. Most of the results, especially for the excited states, are reported for the first time. It is possible to ascertain suitable anchor and probe lines for the studies of possible variation in the fine-structure constant by using the above results in the considered systems.

  5. A microbial clock provides an accurate estimate of the postmortem interval in a mouse model system

    PubMed Central

    Metcalf, Jessica L; Wegener Parfrey, Laura; Gonzalez, Antonio; Lauber, Christian L; Knights, Dan; Ackermann, Gail; Humphrey, Gregory C; Gebert, Matthew J; Van Treuren, Will; Berg-Lyons, Donna; Keepers, Kyle; Guo, Yan; Bullard, James; Fierer, Noah; Carter, David O; Knight, Rob

    2013-01-01

    Establishing the time since death is critical in every death investigation, yet existing techniques are susceptible to a range of errors and biases. For example, forensic entomology is widely used to assess the postmortem interval (PMI), but errors can range from days to months. Microbes may provide a novel method for estimating PMI that avoids many of these limitations. Here we show that postmortem microbial community changes are dramatic, measurable, and repeatable in a mouse model system, allowing PMI to be estimated within approximately 3 days over 48 days. Our results provide a detailed understanding of bacterial and microbial eukaryotic ecology within a decomposing corpse system and suggest that microbial community data can be developed into a forensic tool for estimating PMI. DOI: http://dx.doi.org/10.7554/eLife.01104.001 PMID:24137541

  6. Fast and accurate probability density estimation in large high dimensional astronomical datasets

    NASA Astrophysics Data System (ADS)

    Gupta, Pramod; Connolly, Andrew J.; Gardner, Jeffrey P.

    2015-01-01

    Astronomical surveys will generate measurements of hundreds of attributes (e.g. color, size, shape) on hundreds of millions of sources. Analyzing these large, high dimensional data sets will require efficient algorithms for data analysis. An example of this is probability density estimation, which is at the heart of many classification problems such as the separation of stars and quasars based on their colors. Popular density estimation techniques use binning or kernel density estimation. Kernel density estimation has a small memory footprint but often requires large computational resources. Binning has small computational requirements, but it is usually implemented with multi-dimensional arrays, which leads to memory requirements that scale exponentially with the number of dimensions. Hence neither technique scales well to large data sets in high dimensions. We present an alternative approach of binning implemented with hash tables (BASH tables). This approach uses the sparseness of data in the high dimensional space to ensure that the memory requirements are small. However, hashing requires some extra computation, so a priori it is not clear whether the reduction in memory requirements will lead to increased computational requirements. Through an implementation of BASH tables in C++ we show that the additional computational requirements of hashing are negligible. Hence this approach has small memory and computational requirements. We apply our density estimation technique to photometric selection of quasars using non-parametric Bayesian classification and show that the accuracy of the classification is the same as that of earlier approaches. Since the BASH table approach is one to three orders of magnitude faster than the earlier approaches, it may be useful in various other applications of density estimation in astrostatistics.
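
    The BASH-table idea is easy to prototype in any language with built-in hash maps. A minimal Python sketch (the paper's implementation is in C++) that stores only the occupied bins:

      import numpy as np
      from collections import Counter

      def bash_density(data, bin_width):
          """Sparse histogram density estimate: only occupied bins are stored,
          so memory scales with the number of occupied bins rather than with
          bins**dimensions."""
          keys = map(tuple, np.floor(data / bin_width).astype(int))
          counts = Counter(keys)
          cell_volume = bin_width ** data.shape[1]
          n = len(data)
          return {k: c / (n * cell_volume) for k, c in counts.items()}

      def density_at(point, table, bin_width):
          """Estimated density at a query point (zero for an empty bin)."""
          key = tuple(np.floor(point / bin_width).astype(int))
          return table.get(key, 0.0)

      # 10^5 points in 10 dimensions: a dense 10-D bin array would be infeasible,
      # but the hash table only holds the bins that actually contain data.
      rng = np.random.default_rng(1)
      X = rng.standard_normal((100_000, 10))
      table = bash_density(X, bin_width=1.0)
      print(len(table), density_at(X[0], table, 1.0))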

  7. Spectral estimation from laser scanner data for accurate color rendering of objects

    NASA Astrophysics Data System (ADS)

    Baribeau, Rejean

    2002-06-01

    Estimation methods are studied for the recovery of the spectral reflectance across the visible range from sensing at just three discrete laser wavelengths. Methods based on principal component analysis and on spline interpolation are judged by the CIE94 color differences for some reference data sets. These include the Macbeth color checker, the OSA-UCS color charts, some artist pigments, and a collection of miscellaneous surface colors. The optimal three sampling wavelengths are also investigated. It is found that color can be estimated with average accuracy ΔE94 = 2.3 when the optimal wavelengths 455 nm, 540 nm, and 610 nm are used.
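
    A sketch of the principal-component variant, assuming a training set of full reflectance spectra is available: the three laser samples fix the coefficients of the first three principal components, from which the full spectrum is reconstructed (the wavelength grid and sample indices below are illustrative):

      import numpy as np

      def fit_basis(training_spectra, n_components=3):
          """Mean and leading principal components of a reflectance data set
          (rows = spectra, columns = wavelengths)."""
          mean = training_spectra.mean(axis=0)
          _, _, Vt = np.linalg.svd(training_spectra - mean, full_matrices=False)
          return mean, Vt[:n_components]

      def reconstruct(samples, sample_idx, mean, basis):
          """Recover a full spectrum from reflectances measured at a few
          wavelengths; sample_idx gives their positions on the grid."""
          A = basis[:, sample_idx].T                 # 3 samples x 3 coefficients
          coeffs = np.linalg.solve(A, samples - mean[sample_idx])
          return mean + coeffs @ basis

      rng = np.random.default_rng(0)
      train = rng.random((200, 61))                  # stand-in for real spectra
      mean, B = fit_basis(train)
      idx = np.array([11, 28, 42])                   # 455, 540, 610 nm on a 5 nm grid
      est = reconstruct(train[0, idx], idx, mean, B) # estimated 400-700 nm spectrum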

  8. Crop area estimation based on remotely-sensed data with an accurate but costly subsample

    NASA Technical Reports Server (NTRS)

    Gunst, R. F.

    1985-01-01

    Research activities conducted under the auspices of National Aeronautics and Space Administration Cooperative Agreement NCC 9-9 are discussed. During this contract period, research efforts were concentrated in two primary areas. The first area is an investigation of the use of measurement error models as alternatives to least squares regression estimators of crop production or timber biomass. The second primary area of investigation is the estimation of the mixing proportion of two-component mixture models. This report lists publications, technical reports, submitted manuscripts, and oral presentations generated by these research efforts. Possible areas of future research are mentioned.

  9. The double-helix point spread function enables precise and accurate measurement of 3D single-molecule localization and orientation

    PubMed Central

    Backlund, Mikael P.; Lew, Matthew D.; Backer, Adam S.; Sahl, Steffen J.; Grover, Ginni; Agrawal, Anurag; Piestun, Rafael; Moerner, W. E.

    2014-01-01

    Single-molecule-based super-resolution fluorescence microscopy has recently been developed to surpass the diffraction limit by roughly an order of magnitude. These methods depend on the ability to precisely and accurately measure the position of a single-molecule emitter, typically by fitting its emission pattern to a symmetric estimator (e.g. centroid or 2D Gaussian). However, single-molecule emission patterns are not isotropic, and depend highly on the orientation of the molecule’s transition dipole moment, as well as its z-position. Failure to account for this fact can result in localization errors on the order of tens of nm for in-focus images, and ~50–200 nm for molecules at modest defocus. The latter range becomes especially important for three-dimensional (3D) single-molecule super-resolution techniques, which typically employ depths-of-field of up to ~2 μm. To address this issue we report the simultaneous measurement of precise and accurate 3D single-molecule position and 3D dipole orientation using the Double-Helix Point Spread Function (DH-PSF) microscope. We are thus able to significantly improve dipole-induced position errors, reducing standard deviations in lateral localization from ~2x worse than photon-limited precision (48 nm vs. 25 nm) to within 5 nm of photon-limited precision. Furthermore, by averaging many estimations of orientation we are able to improve from a lateral standard deviation of 116 nm (~4x worse than the precision, 28 nm) to 34 nm (within 6 nm). PMID:24817798

  10. Accurate radiocarbon age estimation using "early" measurements: a new approach to reconstructing the Paleolithic absolute chronology

    NASA Astrophysics Data System (ADS)

    Omori, Takayuki; Sano, Katsuhiro; Yoneda, Minoru

    2014-05-01

    This paper presents new correction approaches for "early" radiocarbon ages to reconstruct the Paleolithic absolute chronology. In order to discuss the time-space distribution of the replacement of archaic humans, including Neanderthals in Europe, by modern humans, a massive data set covering a wide area is needed. Today, several radiocarbon databases focused on the Paleolithic have been published and used for chronological studies. From the viewpoint of current analytical technology, however, these databases contain unreliable results that make interpretation of radiocarbon dates difficult. Most of these unreliable ages were published in the early days of radiocarbon analysis. In recent years, new analytical methods to determine highly accurate dates have been developed. Ultrafiltration and ABOx-SC methods, as new sample pretreatments for bone and charcoal respectively, have attracted attention because they can remove imperceptible contaminants and yield reliably accurate ages. In order to evaluate the reliability of "early" data, we investigated the differences and variabilities of radiocarbon ages under different pretreatments, and attempted to develop correction functions for assessing their reliability. It can be expected that the reliability of the corrected ages is increased and that they can be applied to chronological research together with recent ages. Here, we introduce the methodological frameworks and archaeological applications.

  11. Estimating Tipping Points in Feedback-Driven Financial Networks

    NASA Astrophysics Data System (ADS)

    Kostanjcar, Zvonko; Begusic, Stjepan; Stanley, Harry Eugene; Podobnik, Boris

    2016-09-01

    Much research has been conducted arguing that tipping points at which complex systems experience phase transitions are difficult to identify. To test the existence of tipping points in financial markets, based on the alternating offer strategic model we propose a network of bargaining agents who mutually either cooperate or defect, in which the feedback mechanism between trading and price dynamics is driven by an external "hidden" variable R that quantifies the degree of market overpricing. Due to the feedback mechanism, R fluctuates and oscillates over time, and thus periods when the market is underpriced and overpriced occur repeatedly. As the market becomes overpriced, bubbles are created that ultimately burst in a market crash. The probability that the index will drop in the next year exhibits a strong hysteresis behavior from which we calculate the tipping point. The probability distribution function of R has a bimodal shape characteristic of small systems near the tipping point. By examining the S&P500 index we illustrate the applicability of the model and demonstrate that the financial data exhibit a hysteresis and a tipping point that agree with the model predictions. We report a cointegration between the returns of the S&P 500 index and its intrinsic value.

  12. A Generalized Subspace Least Mean Square Method for High-resolution Accurate Estimation of Power System Oscillation Modes

    SciTech Connect

    Zhang, Peng; Zhou, Ning; Abdollahi, Ali

    2013-09-10

    A Generalized Subspace-Least Mean Square (GSLMS) method is presented for accurate and robust estimation of oscillation modes from exponentially damped power system signals. The method is based on the orthogonality of the signal and noise eigenvectors of the signal autocorrelation matrix. Performance of the proposed method is evaluated using Monte Carlo simulation and compared with the Prony method. Test results show that the GSLMS is highly resilient to noise and significantly outperforms the Prony method in tracking power system modes under noisy environments.
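
    The abstract does not spell out GSLMS, but the signal/noise eigenvector orthogonality it builds on is the same idea behind classical subspace estimators. A hedged numpy sketch of a MUSIC-style frequency pseudospectrum for a noisy, exponentially damped oscillation (frequency only; the method above also recovers damping):

      import numpy as np

      def music_pseudospectrum(x, model_order, m=40):
          """Pseudospectrum from the orthogonality between steering vectors
          and the noise subspace of the sample autocorrelation matrix."""
          N = len(x)
          snaps = np.array([x[i:i + m] for i in range(N - m)])
          Rxx = snaps.T @ snaps / (N - m)            # sample autocorrelation matrix
          _, V = np.linalg.eigh(Rxx)                 # eigenvalues in ascending order
          En = V[:, :m - model_order]                # noise-subspace eigenvectors
          freqs = np.linspace(0.0, 0.5, 2000)        # cycles/sample
          k = np.arange(m)
          P = np.array([1.0 / np.linalg.norm(En.T @ np.exp(2j * np.pi * f * k)) ** 2
                        for f in freqs])
          return freqs, P

      # Damped 0.3 Hz electromechanical-like mode sampled at 10 Hz, in noise.
      fs = 10.0
      t = np.arange(0, 60, 1 / fs)
      x = np.exp(-0.05 * t) * np.cos(2 * np.pi * 0.3 * t)
      x += 0.1 * np.random.default_rng(0).standard_normal(len(t))
      f, P = music_pseudospectrum(x, model_order=2)  # one real mode = order 2
      print(f[P.argmax()] * fs, "Hz")                # peak near 0.3 Hz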

  13. Accurate motion parameter estimation for colonoscopy tracking using a regression method

    NASA Astrophysics Data System (ADS)

    Liu, Jianfei; Subramanian, Kalpathi R.; Yoo, Terry S.

    2010-03-01

    Co-located optical and virtual colonoscopy images have the potential to provide important clinical information during routine colonoscopy procedures. In our earlier work, we presented an optical flow based algorithm to compute egomotion from live colonoscopy video, permitting navigation and visualization of the corresponding patient anatomy. In the original algorithm, motion parameters were estimated using the traditional Least Sum of Squares (LS) procedure, which can be unstable in the presence of optical flow vectors with large errors. In the improved algorithm, we use the Least Median of Squares (LMS) method, a robust regression method, for motion parameter estimation. Using the LMS method, we iteratively analyze and converge toward the main distribution of the flow vectors, while disregarding outliers. We show through three experiments the improvement in tracking results obtained using the LMS method in comparison to the LS estimator. The first experiment demonstrates better spatial accuracy in positioning the virtual camera in the sigmoid colon. The second and third experiments demonstrate the robustness of this estimator, resulting in longer tracked sequences: from 300 to 1310 in the ascending colon, and from 410 to 1316 in the transverse colon.
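
    The LMS idea generalizes well beyond egomotion. A self-contained sketch for a line fit (random two-point hypotheses, keep the one minimizing the median squared residual) illustrates the robustness that the tracker above exploits:

      import numpy as np

      def least_median_of_squares(x, y, n_trials=500, seed=0):
          """LMS line fit: among random 2-point candidate lines, keep the one
          that minimizes the median of squared residuals."""
          rng = np.random.default_rng(seed)
          best, best_med = None, np.inf
          for _ in range(n_trials):
              i, j = rng.choice(len(x), size=2, replace=False)
              if x[i] == x[j]:
                  continue
              slope = (y[j] - y[i]) / (x[j] - x[i])
              intercept = y[i] - slope * x[i]
              med = np.median((y - (slope * x + intercept)) ** 2)
              if med < best_med:
                  best, best_med = (slope, intercept), med
          return best

      # A third of the points are gross outliers; the true line is y = 2x + 1.
      rng = np.random.default_rng(0)
      x = np.linspace(0, 10, 99)
      y = 2 * x + 1 + 0.1 * rng.standard_normal(99)
      y[::3] += rng.uniform(-20, 20, size=33)
      print(least_median_of_squares(x, y))           # close to (2, 1)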

  14. Accurate Angle Estimator for High-Frame-Rate 2-D Vector Flow Imaging.

    PubMed

    Villagomez Hoyos, Carlos Armando; Stuart, Matthias Bo; Hansen, Kristoffer Lindskov; Nielsen, Michael Bachmann; Jensen, Jorgen Arendt

    2016-06-01

    This paper presents a novel approach for estimating 2-D flow angles using a high-frame-rate ultrasound method. The angle estimator features high accuracy and low standard deviation (SD) over the full 360° range. The method is validated on Field II simulations and phantom measurements using the experimental ultrasound scanner SARUS and a flow rig before being tested in vivo. An 8-MHz linear array transducer is used with defocused beam emissions. In the simulations of a spinning disk phantom, a 360° uniform behavior on the angle estimation is observed with a median angle bias of 1.01° and a median angle SD of 1.8°. Similar results are obtained on a straight vessel for both simulations and measurements, where the obtained angle biases are below 1.5° with SDs around 1°. Estimated velocity magnitudes are also kept under 10% bias and 5% relative SD in both simulations and measurements. An in vivo measurement is performed on a carotid bifurcation of a healthy individual. A 3-s acquisition during three heart cycles is captured. A consistent and repetitive vortex is observed in the carotid bulb during systoles. PMID:27093598

  15. Accurate estimation of influenza epidemics using Google search data via ARGO.

    PubMed

    Yang, Shihao; Santillana, Mauricio; Kou, S C

    2015-11-24

    Accurate real-time tracking of influenza outbreaks helps public health officials make timely and meaningful decisions that could save lives. We propose an influenza tracking model, ARGO (AutoRegression with GOogle search data), that uses publicly available online search data. In addition to having a rigorous statistical foundation, ARGO outperforms all previously available Google-search-based tracking models, including the latest version of Google Flu Trends, even though it uses only low-quality search data as input from publicly available Google Trends and Google Correlate websites. ARGO not only incorporates the seasonality in influenza epidemics but also captures changes in people's online search behavior over time. ARGO is also flexible, self-correcting, robust, and scalable, making it a potentially powerful tool that can be used for real-time tracking of other social events at multiple temporal and spatial resolutions. PMID:26553980

  16. Accurate estimation of influenza epidemics using Google search data via ARGO

    PubMed Central

    Yang, Shihao; Santillana, Mauricio; Kou, S. C.

    2015-01-01

    Accurate real-time tracking of influenza outbreaks helps public health officials make timely and meaningful decisions that could save lives. We propose an influenza tracking model, ARGO (AutoRegression with GOogle search data), that uses publicly available online search data. In addition to having a rigorous statistical foundation, ARGO outperforms all previously available Google-search–based tracking models, including the latest version of Google Flu Trends, even though it uses only low-quality search data as input from publicly available Google Trends and Google Correlate websites. ARGO not only incorporates the seasonality in influenza epidemics but also captures changes in people’s online search behavior over time. ARGO is also flexible, self-correcting, robust, and scalable, making it a potentially powerful tool that can be used for real-time tracking of other social events at multiple temporal and spatial resolutions. PMID:26553980
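
    The core of ARGO is an L1-regularized regression of the current flu level on its own recent history plus contemporaneous search-term frequencies. A hedged sketch with scikit-learn on stand-in data (the published model additionally re-fits every week on a rolling window and selects the penalty by cross-validation):

      import numpy as np
      from sklearn.linear_model import Lasso

      def argo_style_fit(ili, search_terms, n_lags=52, alpha=0.05):
          """Regress week-t %ILI on its previous n_lags values and on the
          week-t search-term frequencies, with an L1 penalty."""
          rows, targets = [], []
          for t in range(n_lags, len(ili)):
              lags = ili[t - n_lags:t][::-1]         # y_{t-1}, ..., y_{t-n_lags}
              rows.append(np.concatenate([lags, search_terms[t]]))
              targets.append(ili[t])
          model = Lasso(alpha=alpha, max_iter=50_000)
          model.fit(np.array(rows), np.array(targets))
          return model

      # Stand-in data: five years of weekly %ILI and 100 search-term series.
      rng = np.random.default_rng(0)
      ili = np.abs(np.sin(np.arange(260) * 2 * np.pi / 52)) + 0.1 * rng.random(260)
      terms = ili[:, None] * rng.random((260, 100)) + 0.05 * rng.random((260, 100))
      model = argo_style_fit(ili, terms)
      print((model.coef_ != 0).sum(), "active predictors")   # sparse selection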

  17. Raman spectroscopy for highly accurate estimation of the age of refrigerated porcine muscle

    NASA Astrophysics Data System (ADS)

    Timinis, Constantinos; Pitris, Costas

    2016-03-01

    The high water content of meat, combined with all the nutrients it contains, makes it vulnerable to spoilage at all stages of production and storage, even when refrigerated at 5 °C. A non-destructive and in situ tool for meat sample testing, which could provide an accurate indication of the storage time of meat, would be very useful for the control of meat quality as well as for consumer safety. The proposed solution is based on Raman spectroscopy, which is non-invasive and can be applied in situ. For the purposes of this project, 42 meat samples from 14 animals were obtained, and three Raman spectra per sample were collected every two days for two weeks. The spectra were subsequently processed and the sample age was calculated using a set of linear differential equations. In addition, the samples were classified in categories corresponding to the age in 2-day steps (i.e., 0, 2, 4, 6, 8, 10, 12 or 14 days old), using linear discriminant analysis and cross-validation. Contrary to other studies, where the samples were simply grouped into two categories (higher or lower quality, suitable or unsuitable for human consumption, etc.), in this study the age was predicted with a mean error of ~1 day (20%) or classified, in 2-day steps, with 100% accuracy. Although Raman spectroscopy has been used in the past for the analysis of meat samples, the proposed methodology predicts the sample age far more accurately than any previous report in the literature.
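
    The classification step described above maps directly onto standard tools. A sketch with scikit-learn on stand-in spectra (the array shapes and the synthetic age effect are illustrative, not the study's data):

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import cross_val_score

      # Stand-in: 128 spectra over 500 Raman-shift bins, ages 0-14 d in 2-d steps.
      rng = np.random.default_rng(0)
      ages = np.repeat(np.arange(0, 16, 2), 16)      # class labels (days)
      spectra = rng.standard_normal((len(ages), 500)) + 0.05 * ages[:, None]

      clf = LinearDiscriminantAnalysis()
      scores = cross_val_score(clf, spectra, ages, cv=8)   # cross-validated accuracy
      print("mean accuracy: %.2f" % scores.mean())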

  18. Techniques for accurate estimation of net discharge in a tidal channel

    USGS Publications Warehouse

    Simpson, Michael R.; Bland, Roger

    1999-01-01

    An ultrasonic velocity meter discharge-measurement site in a tidally affected region of the Sacramento-San Joaquin rivers was used to study the accuracy of the index velocity calibration procedure. Calibration data consisting of ultrasonic velocity meter index velocity and concurrent acoustic Doppler discharge measurement data were collected during three time periods. The relative magnitude of equipment errors, acoustic Doppler discharge measurement errors, and calibration errors were evaluated. Calibration error was the most significant source of error in estimating net discharge. Using a comprehensive calibration method, net discharge estimates developed from the three sets of calibration data differed by less than an average of 4 cubic meters per second. Typical maximum flow rates during the data-collection period averaged 750 cubic meters per second.

  19. Lower bound on reliability for Weibull distribution when shape parameter is not estimated accurately

    NASA Technical Reports Server (NTRS)

    Huang, Zhaofeng; Porter, Albert A.

    1991-01-01

    The mathematical relationships between the shape parameter Beta and estimates of reliability and a life limit lower bound for the two parameter Weibull distribution are investigated. It is shown that under rather general conditions, both the reliability lower bound and the allowable life limit lower bound (often called a tolerance limit) have unique global minimums over a range of Beta. Hence lower bound solutions can be obtained without assuming or estimating Beta. The existence and uniqueness of these lower bounds are proven. Some real data examples are given to show how these lower bounds can be easily established and to demonstrate their practicality. The method developed here has proven to be extremely useful when using the Weibull distribution in analysis of no-failure or few-failures data. The results are applicable not only in the aerospace industry but anywhere that system reliabilities are high.

  20. Lower bound on reliability for Weibull distribution when shape parameter is not estimated accurately

    NASA Technical Reports Server (NTRS)

    Huang, Zhaofeng; Porter, Albert A.

    1990-01-01

    The mathematical relationships between the shape parameter Beta and estimates of reliability and a life limit lower bound for the two parameter Weibull distribution are investigated. It is shown that under rather general conditions, both the reliability lower bound and the allowable life limit lower bound (often called a tolerance limit) have unique global minimums over a range of Beta. Hence lower bound solutions can be obtained without assuming or estimating Beta. The existence and uniqueness of these lower bounds are proven. Some real data examples are given to show how these lower bounds can be easily established and to demonstrate their practicality. The method developed here has proven to be extremely useful when using the Weibull distribution in analysis of no-failure or few-failures data. The results are applicable not only in the aerospace industry but anywhere that system reliabilities are high.
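
    A sketch of the "no assumed shape parameter" idea for zero-failure data, using the textbook lower confidence bound on reliability for a fixed Weibull shape, R_L = alpha**(t**beta / sum(T_i**beta)), and scanning beta over a plausible range; whether this particular bound matches the one derived in these reports is an assumption of the sketch:

      import numpy as np

      def reliability_lower_bound(t, test_times, alpha=0.05, betas=None):
          """Conservative reliability lower bound at mission time t from
          zero-failure test data, minimized over a grid of shape parameters
          so that no single beta has to be assumed or estimated."""
          if betas is None:
              betas = np.linspace(0.5, 5.0, 200)     # plausible shape range
          T = np.asarray(test_times, dtype=float)
          bounds = [alpha ** (t ** b / np.sum(T ** b)) for b in betas]
          i = int(np.argmin(bounds))
          return bounds[i], betas[i]

      # 20 units each tested for 1000 h without failure; mission time 500 h.
      rl, b_star = reliability_lower_bound(500.0, [1000.0] * 20)
      print("95%% lower bound on R: %.4f (worst-case beta %.2f)" % (rl, b_star))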

  1. Are satellite based rainfall estimates accurate enough for crop modelling under Sahelian climate?

    NASA Astrophysics Data System (ADS)

    Ramarohetra, J.; Sultan, B.

    2012-04-01

    Agriculture is considered the most climate-dependent human activity. In West Africa, and especially in the sudano-sahelian zone, rain-fed agriculture - which represents 93% of cultivated areas and is the means of support of 70% of the active population - is highly vulnerable to precipitation variability. To better understand and anticipate climate impacts on agriculture, crop models - which estimate crop yield from climate information (e.g. rainfall, temperature, insolation, humidity) - have been developed. These crop models are useful (i) in ex ante analysis to quantify the impact of implementing different strategies - crop management (e.g. choice of varieties, sowing date), crop insurance or medium-range weather forecasts - on yields, (ii) for early warning systems and (iii) to assess future food security. Yet, the successful application of these models depends on the accuracy of their climatic drivers. In the sudano-sahelian zone, the quality of precipitation estimates is therefore a key factor in understanding and anticipating climate impacts on agriculture via crop modelling and yield estimation. Different kinds of precipitation estimates can be used. Ground measurements have long time series but an insufficient network density, a large proportion of missing values, delays in reporting time, and limited availability. An answer to these shortcomings may lie in the field of remote sensing, which provides satellite-based precipitation estimates. However, satellite-based rainfall estimates (SRFE) are not a direct measurement but rather an estimation of precipitation. Used as an input for crop models, they determine the performance of the simulated yield, hence SRFE require validation. The SARRAH crop model is used to model three different varieties of pearl millet (HKP, MTDO, Souna3) in a square degree centred on 13.5°N and 2.5°E, in Niger. Eight daily satellite-based rainfall products (PERSIANN, CMORPH, TRMM 3b42-RT, GSMAP MKV+, GPCP, TRMM 3b42v6, RFEv2 and

  2. Plant DNA Barcodes Can Accurately Estimate Species Richness in Poorly Known Floras

    PubMed Central

    Costion, Craig; Ford, Andrew; Cross, Hugh; Crayn, Darren; Harrington, Mark; Lowe, Andrew

    2011-01-01

    Background Widespread uptake of DNA barcoding technology for vascular plants has been slow due to the relatively poor resolution of species discrimination (∼70%) and the low sequencing and amplification success of one of the two official barcoding loci, matK. Studies to date have mostly focused on finding a solution to these intrinsic limitations of the markers, rather than posing questions that can maximize the utility of DNA barcodes for plants with the current technology. Methodology/Principal Findings Here we test the ability of plant DNA barcodes using the two official barcoding loci, rbcLa and matK, plus an alternative barcoding locus, trnH-psbA, to estimate the species diversity of trees in a tropical rainforest plot. Species discrimination accuracy was similar to findings from previous studies, but species richness estimation accuracy proved higher, up to 89%. All combinations which included the trnH-psbA locus performed better at both species discrimination and richness estimation than matK, which showed little enhanced species discriminatory power when concatenated with rbcLa. The utility of the trnH-psbA locus is limited, however, by intraspecific variation observed in some angiosperm families, occurring as an inversion that obscures the monophyly of species. Conclusions/Significance We demonstrate for the first time, using a case study, the potential of plant DNA barcodes for the rapid estimation of species richness in taxonomically poorly known areas or cryptic populations, revealing a powerful new tool for rapid biodiversity assessment. The combination of the rbcLa and trnH-psbA loci performed better for this purpose than any two-locus combination that included matK. We show that although DNA barcodes fail to discriminate all species of plants, new perspectives and methods on biodiversity value and quantification may overshadow some of these shortcomings by applying barcode data in new ways. PMID:22096501

  3. Compact and accurate linear and nonlinear autoregressive moving average model parameter estimation using laguerre functions.

    PubMed

    Chon, K H; Cohen, R J; Holstein-Rathlou, N H

    1997-01-01

    A linear and nonlinear autoregressive moving average (ARMA) identification algorithm is developed for modeling time series data. The algorithm uses Laguerre expansion of kernels (LEK) to estimate Volterra-Wiener kernels. However, instead of estimating linear and nonlinear system dynamics via moving average models, as is the case for the Volterra-Wiener analysis, we propose an ARMA model-based approach. The proposed algorithm is essentially the same as LEK, but is extended to include past values of the output as well. Thus, all of the advantages associated with using the Laguerre function remain with our algorithm; but, by extending the algorithm to the linear and nonlinear ARMA model, a significant reduction in the number of Laguerre functions can be made compared with the Volterra-Wiener approach. This translates into a more compact system representation and makes the physiological interpretation of higher order kernels easier. Furthermore, simulation results show better performance of the proposed approach in estimating the system dynamics than LEK in certain cases, and it remains effective in the presence of significant additive measurement noise. PMID:9236985

  4. Accurate estimation of sea surface temperatures using dissolution-corrected calibrations for Mg/Ca paleothermometry

    NASA Astrophysics Data System (ADS)

    Rosenthal, Yair; Lohmann, George P.

    2002-09-01

    Paired δ18O and Mg/Ca measurements on the same foraminiferal shells offer the ability to independently estimate sea surface temperature (SST) changes and assess their temporal relationship to the growth and decay of continental ice sheets. The accuracy of this method is confounded, however, by the absence of a quantitative method to correct Mg/Ca records for alteration by dissolution. Here we describe dissolution-corrected calibrations for Mg/Ca paleothermometry in which the pre-exponent constant is a function of size-normalized shell weight (wt): (1) for G. ruber (212-300 μm), (Mg/Ca)_ruber = (0.025 wt + 0.11) e^(0.095T), and (2) for G. sacculifer (355-425 μm), (Mg/Ca)_sacc = (0.0032 wt + 0.181) e^(0.095T). The new calibrations improve the accuracy of SST estimates and are globally applicable. With this correction, eastern equatorial Atlantic SST during the Last Glacial Maximum is estimated to be 2.9° ± 0.4°C colder than today.
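
    Since the calibrations above are explicit, the SST estimate is just an inversion. A small sketch (the units of the size-normalized shell weight follow the original calibration data and are assumed here to be micrograms):

      import numpy as np

      def sst_from_mgca(mgca, shell_weight, species="ruber"):
          """Invert Mg/Ca = (m*wt + b) * exp(0.095*T) for temperature:
          T = ln(Mg/Ca / (m*wt + b)) / 0.095."""
          if species == "ruber":         # G. ruber, 212-300 um fraction
              m, b = 0.025, 0.11
          elif species == "sacculifer":  # G. sacculifer, 355-425 um fraction
              m, b = 0.0032, 0.181
          else:
              raise ValueError(species)
          return np.log(mgca / (m * shell_weight + b)) / 0.095

      # Example: Mg/Ca = 3.5 mmol/mol, size-normalized shell weight = 12.
      print(sst_from_mgca(3.5, 12.0))    # about 22.6 degrees C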

  5. Lung motion estimation using dynamic point shifting: An innovative model based on a robust point matching algorithm

    SciTech Connect

    Yi, Jianbing; Yang, Xuan; Li, Yan-Ran; Chen, Guoliang

    2015-10-15

    Purpose: Image-guided radiotherapy is an advanced 4D radiotherapy technique that has been developed in recent years. However, respiratory motion causes significant uncertainties in image-guided radiotherapy procedures. To address these issues, an innovative lung motion estimation model based on robust point matching is proposed in this paper. Methods: An innovative robust point matching algorithm using dynamic point shifting is proposed to estimate patient-specific lung motion during free breathing from 4D computed tomography data. The correspondence of the landmark points is determined from the Euclidean distance between the landmark points and the similarity between the local images centered at those points at the same time. To ensure that the points in the source image correspond to the points in the target image during other phases, virtual target points are first created and shifted based on the similarity between the local image centered at the source point and the local image centered at the virtual target point. Second, the target points are shifted by the constrained inverse function mapping the target points to the virtual target points. The source point set and the shifted target point set are used to estimate the transformation function between the source image and the target image. Results: The performance of the authors' method is evaluated on the two publicly available DIR-lab and POPI-model lung datasets. For target registration errors computed on 750 landmark points in six phases of the DIR-lab dataset and 37 landmark points in ten phases of the POPI-model dataset, the mean and standard deviation of the authors' method are 1.11 and 1.11 mm; they are 2.33 and 2.32 mm without considering image intensity, and 1.17 and 1.19 mm with sliding conditions. For the two phases of maximum inhalation and maximum exhalation in the DIR-lab dataset, with 300 landmark points in each case, the mean and standard deviation of target registration errors on the

  6. Higher Accurate Estimation of Axial and Bending Stiffnesses of Plates Clamped by Bolts

    NASA Astrophysics Data System (ADS)

    Naruse, Tomohiro; Shibutani, Yoji

    Equivalent stiffnesses of clamped plates should be prescribed not only to evaluate the strength of bolted joints by the "joint diagram" scheme but also to perform structural analyses of practical structures with many bolted joints. We estimated the axial and bending stiffnesses of clamped plates using finite element (FE) analyses, taking into account the contact conditions on the bearing surfaces and between the plates. FE models were constructed for bolted joints tightened with M8, 10, 12 and 16 bolts and plate thicknesses of 3.2, 4.5, 6.0 and 9.0 mm, and the axial and bending compliances were precisely evaluated. These compliances of the clamped plates were compared with those from the VDI 2230 (2003) code, in which an equivalent conical compressive stress field in the plate is assumed. The code gives an axial stiffness 11% larger and a bending stiffness 22% larger, and it cannot be applied to clamped plates of different thicknesses; the code therefore yields a lower bolt stress (an unsafe estimation). We modified the vertical angle tangent, tanφ, of the equivalent cone by adding a term in the logarithm of the thickness ratio t1/t2 and by fitting to the analysis results. The modified tanφ estimates the axial compliance with an error from -1.5% to 6.8% and the bending compliance with an error from -6.5% to 10%. Furthermore, the modified tanφ can take the thickness difference into consideration.

  7. Accurate estimation of airborne ultrasonic time-of-flight for overlapping echoes.

    PubMed

    Sarabia, Esther G; Llata, Jose R; Robla, Sandra; Torre-Ferrero, Carlos; Oria, Juan P

    2013-01-01

    In this work, an analysis of the transmission of ultrasonic signals generated by piezoelectric sensors for air applications is presented. Based on this analysis, an ultrasonic response model is obtained for its application to the recognition of objects and structured environments for navigation by autonomous mobile robots. This model enables the analysis of the ultrasonic response that is generated using a pair of sensors in transmitter-receiver configuration using the pulse-echo technique. This is very interesting for recognizing surfaces that simultaneously generate a multiple echo response. This model takes into account the effect of the radiation pattern, the resonant frequency of the sensor, the number of cycles of the excitation pulse, the dynamics of the sensor and the attenuation with distance in the medium. This model has been developed, programmed and verified through a battery of experimental tests. Using this model a new procedure for obtaining accurate time of flight is proposed. This new method is compared with traditional ones, such as threshold or correlation, to highlight its advantages and drawbacks. Finally the advantages of this method are demonstrated for calculating multiple times of flight when the echo is formed by several overlapping echoes. PMID:24284774

  8. Accurate Estimation of Airborne Ultrasonic Time-of-Flight for Overlapping Echoes

    PubMed Central

    Sarabia, Esther G.; Llata, Jose R.; Robla, Sandra; Torre-Ferrero, Carlos; Oria, Juan P.

    2013-01-01

    In this work, an analysis of the transmission of ultrasonic signals generated by piezoelectric sensors for air applications is presented. Based on this analysis, an ultrasonic response model is obtained for its application to the recognition of objects and structured environments for navigation by autonomous mobile robots. This model enables the analysis of the ultrasonic response that is generated using a pair of sensors in transmitter-receiver configuration using the pulse-echo technique. This is very interesting for recognizing surfaces that simultaneously generate a multiple echo response. This model takes into account the effect of the radiation pattern, the resonant frequency of the sensor, the number of cycles of the excitation pulse, the dynamics of the sensor and the attenuation with distance in the medium. This model has been developed, programmed and verified through a battery of experimental tests. Using this model a new procedure for obtaining accurate time of flight is proposed. This new method is compared with traditional ones, such as threshold or correlation, to highlight its advantages and drawbacks. Finally the advantages of this method are demonstrated for calculating multiple times of flight when the echo is formed by several overlapping echoes. PMID:24284774
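
    For contrast with the model-based approach above, the traditional correlation estimator it is compared against is only a few lines; the sketch below uses a synthetic 40 kHz burst and breaks down exactly when echoes overlap, which is the case this work addresses:

      import numpy as np

      def tof_by_correlation(received, template, fs):
          """Time of flight as the lag of maximum cross-correlation between
          the received signal and the emitted-pulse template."""
          corr = np.correlate(received, template, mode="full")
          lag = corr.argmax() - (len(template) - 1)
          return lag / fs

      # Synthetic 40 kHz burst sampled at 1 MHz, single echo delayed 2.5 ms.
      fs, f0 = 1_000_000, 40_000
      t = np.arange(0, 200e-6, 1 / fs)
      burst = np.sin(2 * np.pi * f0 * t) * np.hanning(len(t))
      rx = np.zeros(5000)
      d = int(2.5e-3 * fs)
      rx[d:d + len(burst)] += burst
      rx += 0.05 * np.random.default_rng(0).standard_normal(len(rx))
      print(tof_by_correlation(rx, burst, fs) * 1e3, "ms")   # about 2.5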

  9. Accurate estimations of stellar and interstellar transition lines of triply ionized germanium

    SciTech Connect

    Dutta, Narendra Nath; Majumder, Sonjoy

    2011-08-10

    In this paper, we report on weighted oscillator strengths of E1 transitions and transition probabilities of E2 transitions among different low-lying states of triply ionized germanium using the highly correlated relativistic coupled-cluster (RCC) method. Due to the abundance of Ge IV in the solar system, planetary nebulae, white dwarf stars, etc., the study of such transitions is important from an astrophysical point of view. The weighted oscillator strengths of E1 transitions are presented in length and velocity gauge forms to check the accuracy of the calculations. We find excellent agreement between calculated and experimental excitation energies. Oscillator strengths of a few transitions, wherever studied in the literature via other theoretical and experimental approaches, are compared with our RCC calculations.

  10. An Energy-Efficient Strategy for Accurate Distance Estimation in Wireless Sensor Networks

    PubMed Central

    Tarrío, Paula; Bernardos, Ana M.; Casar, José R.

    2012-01-01

    In line with recent research efforts made to conceive energy saving protocols and algorithms and power sensitive network architectures, in this paper we propose a transmission strategy to minimize the energy consumption in a sensor network when using a localization technique based on the measurement of the strength (RSS) or the time of arrival (TOA) of the received signal. In particular, we find the transmission power and the packet transmission rate that jointly minimize the total consumed energy, while ensuring at the same time a desired accuracy in the RSS or TOA measurements. We also propose some corrections to these theoretical results to take into account the effects of shadowing and packet loss in the propagation channel. The proposed strategy is shown to be effective in realistic scenarios providing energy savings with respect to other transmission strategies, and also guaranteeing a given accuracy in the distance estimations, which will serve to guarantee a desired accuracy in the localization result. PMID:23202218
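
    For the RSS case, the distance estimate comes from inverting the standard log-distance path-loss model; averaging over several received packets is what trades packet rate against accuracy. A sketch with illustrative calibration values:

      import numpy as np

      def distance_from_rss(rss_dbm, p0_dbm=-40.0, n=2.7, d0=1.0):
          """Invert the log-distance path-loss model
             RSS(d) = P0 - 10 n log10(d / d0),
          where P0 (power at reference distance d0) and the path-loss
          exponent n are calibration parameters; values here are examples."""
          return d0 * 10 ** ((p0_dbm - rss_dbm) / (10.0 * n))

      # Shadowing adds ~3 dB of noise per packet; averaging 20 packets helps.
      rng = np.random.default_rng(0)
      true_d = 8.0
      rss = -40.0 - 10 * 2.7 * np.log10(true_d) + 3.0 * rng.standard_normal(20)
      print(distance_from_rss(rss.mean()))           # close to 8 m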

  11. Developing accurate survey methods for estimating population sizes and trends of the critically endangered Nihoa Millerbird and Nihoa Finch.

    USGS Publications Warehouse

    Gorresen, P. Marcos; Camp, Richard J.; Brinck, Kevin W.; Farmer, Chris

    2012-01-01

    Point-transect surveys indicated that millerbirds were more abundant than shown by the strip-transect method, and were estimated at 802 birds in 2010 (95% CI = 652-964) and 704 birds in 2011 (95% CI = 579-837). Point-transect surveys yielded population estimates with improved precision, which will permit trends to be detected in shorter time periods and with greater statistical power than is available from strip-transect survey methods. Mean finch population estimates and associated uncertainty were not markedly different among the three survey methods, but the performance of models used to estimate density and population size is expected to improve as the data from additional surveys are incorporated. Using the point-transect survey, the mean finch population size was estimated at 2,917 birds in 2010 (95% CI = 2,037-3,965) and 2,461 birds in 2011 (95% CI = 1,682-3,348). Preliminary testing of the line-transect method in 2011 showed that it would not generate sufficient detections to effectively model bird density and, consequently, relatively precise population size estimates. Both species were fairly evenly distributed across Nihoa and appear to occur in all or nearly all available habitat. The time expended and area traversed by observers was similar among survey methods; however, point-transect surveys do not require that observers walk a straight transect line, thereby allowing them to avoid culturally or biologically sensitive areas and minimize the adverse effects of recurrent travel to any particular area. In general, point-transect surveys detect more birds than strip-survey methods, thereby improving precision and the resulting population size and trend estimation. The method is also better suited for the steep and uneven terrain of Nihoa.

  12. [Research on maize multispectral image accurate segmentation and chlorophyll index estimation].

    PubMed

    Wu, Qian; Sun, Hong; Li, Min-zan; Song, Yuan-yuan; Zhang, Yan-e

    2015-01-01

    In order to rapidly acquire maize growing information in the field, a non-destructive method of maize chlorophyll content index measurement was developed based on multi-spectral imaging and image processing technology. The experiment was conducted at Yangling in Shaanxi province of China, and the crop was Zheng-dan 958, planted in an experiment field of about 1000 m × 600 m. Firstly, a 2-CCD multi-spectral image monitoring system was used to acquire the canopy images. The system was based on a dichroic prism, allowing precise separation of the visible (Blue (B), Green (G), Red (R): 400-700 nm) and near-infrared (NIR, 760-1000 nm) bands. The multispectral images were output as RGB and NIR images via the system, which was fixed vertically to the ground at a distance of 2 m with an angular field of 50°. The SPAD index of each sample was measured synchronously to show the chlorophyll content index. Secondly, after image smoothing using an adaptive smooth filtering algorithm, the NIR maize image was selected to segment the maize leaves from the background, because there was a large difference shown in the gray histogram between plant and soil background. The NIR image segmentation algorithm was conducted in preliminary and accurate segmentation steps: (1) The results of the OTSU image segmentation method and the variable threshold algorithm were discussed. It was revealed that the latter was the better one for corn plant and weed segmentation. As a result, the variable threshold algorithm based on local statistics was selected for the preliminary image segmentation. Expansion and erosion were used to optimize the segmented image. (2) The region labeling algorithm was used to segment corn plants from soil and weed background with an accuracy of 95.59%. And then, the multi-spectral image of the maize canopy was accurately segmented in the R, G and B bands separately. Thirdly, the image parameters were abstracted based on the segmented visible and NIR images. The average gray
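
    A generic stand-in for the "variable threshold based on local statistics" step, written as a Niblack-style rule T(x, y) = local mean + k * local standard deviation over a sliding window (the window size and k below are illustrative, not the study's values):

      import numpy as np
      from scipy.ndimage import uniform_filter

      def local_stats_threshold(nir, window=51, k=-0.2):
          """Segment bright (plant) pixels in an NIR image with a threshold
          that adapts to the local mean and standard deviation."""
          img = nir.astype(float)
          mean = uniform_filter(img, window)
          sq_mean = uniform_filter(img ** 2, window)
          std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))
          return img > mean + k * std                # True = plant pixel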

  13. The challenges of accurately estimating time of long bone injury in children.

    PubMed

    Pickett, Tracy A

    2015-07-01

    The ability to determine the time an injury occurred can be of crucial significance in forensic medicine and holds special relevance to the investigation of child abuse. However, dating paediatric long bone injury, including fractures, is nuanced by complexities specific to the paediatric population. These challenges include the ability to identify bone injury in a growing or only partially-calcified skeleton, different injury patterns seen within the spectrum of the paediatric population, the effects of bone growth on healing as a separate entity from injury, differential healing rates seen at different ages, and the relative scarcity of information regarding healing rates in children, especially the very young. The challenges posed by these factors are compounded by a lack of consistency in defining and categorizing healing parameters. This paper sets out the primary limitations of existing knowledge regarding estimating timing of paediatric bone injury. Consideration and understanding of the multitude of factors affecting bone injury and healing in children will assist those providing opinion in the medical-legal forum. PMID:26048508

  14. Accurate estimation of normal incidence absorption coefficients with confidence intervals using a scanning laser Doppler vibrometer

    NASA Astrophysics Data System (ADS)

    Vuye, Cedric; Vanlanduit, Steve; Guillaume, Patrick

    2009-06-01

    When using optical measurements of the sound fields inside a glass tube, near the material under test, to estimate the reflection and absorption coefficients, not only these acoustical parameters but also confidence intervals can be determined. The sound fields are visualized using a scanning laser Doppler vibrometer (SLDV). In this paper the influence of different test signals on the quality of the results, obtained with this technique, is examined. The amount of data gathered during one measurement scan makes a thorough statistical analysis possible leading to the knowledge of confidence intervals. The use of a multi-sine, constructed on the resonance frequencies of the test tube, shows to be a very good alternative for the traditional periodic chirp. This signal offers the ability to obtain data for multiple frequencies in one measurement, without the danger of a low signal-to-noise ratio. The variability analysis in this paper clearly shows the advantages of the proposed multi-sine compared to the periodic chirp. The measurement procedure and the statistical analysis are validated by measuring the reflection ratio at a closed end and comparing the results with the theoretical value. Results of the testing of two building materials (an acoustic ceiling tile and linoleum) are presented and compared to supplier data.
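
    Constructing such a multi-sine is simple once the resonance frequencies are known: place one sine at each resonance with randomized phases (to keep the crest factor reasonable) and normalize. A sketch with hypothetical resonances of a roughly 1 m tube:

      import numpy as np

      def multisine(resonance_freqs, fs, duration, seed=0):
          """Excitation signal with all its energy on the given resonance
          frequencies, with random phases to limit the crest factor."""
          rng = np.random.default_rng(seed)
          t = np.arange(0, duration, 1 / fs)
          phases = rng.uniform(0, 2 * np.pi, len(resonance_freqs))
          x = sum(np.sin(2 * np.pi * f * t + p)
                  for f, p in zip(resonance_freqs, phases))
          return t, x / np.abs(x).max()

      # Hypothetical resonances f_n = n*c/(2L) of a ~1 m tube (c = 343 m/s).
      f_res = [171.5 * n for n in range(1, 9)]
      t, x = multisine(f_res, fs=8192, duration=2.0)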

  15. Accurate Estimation of Protein Folding and Unfolding Times: Beyond Markov State Models.

    PubMed

    Suárez, Ernesto; Adelman, Joshua L; Zuckerman, Daniel M

    2016-08-01

    Because standard molecular dynamics (MD) simulations are unable to access time scales of interest in complex biomolecular systems, it is common to "stitch together" information from multiple shorter trajectories using approximate Markov state model (MSM) analysis. However, MSMs may require significant tuning and can yield biased results. Here, by analyzing some of the longest protein MD data sets available (>100 μs per protein), we show that estimators constructed based on exact non-Markovian (NM) principles can yield significantly improved mean first-passage times (MFPTs) for protein folding and unfolding. In some cases, MSM bias of more than an order of magnitude can be corrected when identical trajectory data are reanalyzed by non-Markovian approaches. The NM analysis includes "history" information, higher order time correlations compared to MSMs, that is available in every MD trajectory. The NM strategy is insensitive to fine details of the states used and works well when a fine time-discretization (i.e., small "lag time") is used. PMID:27340835

  16. Thermal Conductivities in Solids from First Principles: Accurate Computations and Rapid Estimates

    NASA Astrophysics Data System (ADS)

    Carbogno, Christian; Scheffler, Matthias

    In spite of significant research efforts, a first-principles determination of the thermal conductivity κ at high temperatures has remained elusive. Boltzmann transport techniques that account for anharmonicity perturbatively become inaccurate under such conditions. Ab initio molecular dynamics (MD) techniques using the Green-Kubo (GK) formalism capture the full anharmonicity, but can become prohibitively costly to converge in time and size. We developed a formalism that accelerates such GK simulations by several orders of magnitude and that thus enables its application within the limited time and length scales accessible in ab initio MD. For this purpose, we determine the effective harmonic potential occurring during the MD, the associated temperature-dependent phonon properties and lifetimes. Interpolation in reciprocal and frequency space then allows to extrapolate to the macroscopic scale. For both force-field and ab initio MD, we validate this approach by computing κ for Si and ZrO2, two materials known for their particularly harmonic and anharmonic character. Eventually, we demonstrate how these techniques facilitate reasonable estimates of κ from existing MD calculations at virtually no additional computational cost.

  17. ProViDE: A software tool for accurate estimation of viral diversity in metagenomic samples

    PubMed Central

    Ghosh, Tarini Shankar; Mohammed, Monzoorul Haque; Komanduri, Dinakar; Mande, Sharmila Shekhar

    2011-01-01

    Given the absence of universal marker genes in the viral kingdom, researchers typically use BLAST (with stringent E-values) for taxonomic classification of viral metagenomic sequences. Since majority of metagenomic sequences originate from hitherto unknown viral groups, using stringent e-values results in most sequences remaining unclassified. Furthermore, using less stringent e-values results in a high number of incorrect taxonomic assignments. The SOrt-ITEMS algorithm provides an approach to address the above issues. Based on alignment parameters, SOrt-ITEMS follows an elaborate work-flow for assigning reads originating from hitherto unknown archaeal/bacterial genomes. In SOrt-ITEMS, alignment parameter thresholds were generated by observing patterns of sequence divergence within and across various taxonomic groups belonging to bacterial and archaeal kingdoms. However, many taxonomic groups within the viral kingdom lack a typical Linnean-like taxonomic hierarchy. In this paper, we present ProViDE (Program for Viral Diversity Estimation), an algorithm that uses a customized set of alignment parameter thresholds, specifically suited for viral metagenomic sequences. These thresholds capture the pattern of sequence divergence and the non-uniform taxonomic hierarchy observed within/across various taxonomic groups of the viral kingdom. Validation results indicate that the percentage of ‘correct’ assignments by ProViDE is around 1.7 to 3 times higher than that by the widely used similarity based method MEGAN. The misclassification rate of ProViDE is around 3 to 19% (as compared to 5 to 42% by MEGAN) indicating significantly better assignment accuracy. ProViDE software and a supplementary file (containing supplementary figures and tables referred to in this article) is available for download from http://metagenomics.atc.tcs.com/binning/ProViDE/ PMID:21544173

  18. A new method based on the subpixel Gaussian model for accurate estimation of asteroid coordinates

    NASA Astrophysics Data System (ADS)

    Savanevych, V. E.; Briukhovetskyi, O. B.; Sokovikova, N. S.; Bezkrovny, M. M.; Vavilova, I. B.; Ivashchenko, Yu. M.; Elenin, L. V.; Khlamov, S. V.; Movsesian, Ia. S.; Dashkova, A. M.; Pogorelov, A. V.

    2015-08-01

    We describe a new iteration method to estimate asteroid coordinates, based on a subpixel Gaussian model of the discrete object image. The method operates with continuous parameters (asteroid coordinates) in a discrete observational space (the set of pixel potentials) of the CCD frame. In this model, the form of the coordinate distribution of the photons hitting a pixel of the CCD frame is known a priori, while the associated parameters are determined from a real digital object image. The developed method, which is flexible in adapting to any form of object image, has a high measurement accuracy along with a low computational complexity, due to the maximum-likelihood procedure that is implemented to obtain the best fit, instead of a least-squares method and the Levenberg-Marquardt algorithm for minimization of the quadratic form. Since 2010, the method has been tested as the basis of our Collection Light Technology (COLITEC) software, which has been installed at several observatories across the world with the aim of the automatic discovery of asteroids and comets in sets of CCD frames. As a result, four comets (C/2010 X1 (Elenin), P/2011 NO1 (Elenin), C/2012 S1 (ISON) and P/2013 V3 (Nevski)) as well as more than 1500 small Solar system bodies (including five near-Earth objects (NEOs), 21 Trojan asteroids of Jupiter and one Centaur object) have been discovered. We discuss these results, which allowed us to compare the accuracy parameters of the new method and confirm its efficiency. In 2014, the COLITEC software was recommended to all members of the Gaia-FUN-SSO network for analysing observations as a tool to detect faint moving objects in frames.
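
    A least-squares stand-in for the subpixel fit (the method above uses a maximum-likelihood procedure rather than least squares, so this sketch only illustrates the Gaussian-model idea):

      import numpy as np
      from scipy.optimize import curve_fit

      def gauss2d(xy, amp, x0, y0, sigma, offset):
          """Circular 2-D Gaussian image model on a pixel grid."""
          x, y = xy
          return (offset + amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2)
                                        / (2 * sigma ** 2))).ravel()

      def subpixel_centroid(stamp):
          """Fit the Gaussian model to a small stamp around a detection and
          return the subpixel (x0, y0) of the object."""
          ny, nx = stamp.shape
          x, y = np.meshgrid(np.arange(nx), np.arange(ny))
          p0 = [stamp.max() - stamp.min(), nx / 2, ny / 2, 1.5, stamp.min()]
          popt, _ = curve_fit(gauss2d, (x, y), stamp.ravel(), p0=p0)
          return popt[1], popt[2]

      # Synthetic object centred at (4.3, 5.7) in an 11x11 stamp with noise.
      x, y = np.meshgrid(np.arange(11), np.arange(11))
      stamp = 100 * np.exp(-((x - 4.3) ** 2 + (y - 5.7) ** 2) / (2 * 1.5 ** 2))
      stamp += np.random.default_rng(0).normal(10.0, 1.0, stamp.shape)
      print(subpixel_centroid(stamp))                # close to (4.3, 5.7)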

  19. The effect of high leverage points on the logistic ridge regression estimator having multicollinearity

    NASA Astrophysics Data System (ADS)

    Ariffin, Syaiba Balqish; Midi, Habshah

    2014-06-01

    This article is concerned with the performance of the logistic ridge regression estimation technique in the presence of multicollinearity and high leverage points. In logistic regression, multicollinearity exists among predictors and in the information matrix. The maximum likelihood estimator suffers a huge setback in the presence of multicollinearity, which causes regression estimates to have unduly large standard errors. To remedy this problem, a logistic ridge regression estimator is put forward. It is evident that the logistic ridge regression estimator outperforms the maximum likelihood approach for handling multicollinearity. The effect of high leverage points is then investigated on the performance of the logistic ridge regression estimator through a real data set and a simulation study. The findings signify that the logistic ridge regression estimator fails to provide better parameter estimates in the presence of both high leverage points and multicollinearity.
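
    The ridge remedy for the multicollinearity half of the problem is one line in scikit-learn (the L2 penalty is what bounds the coefficients; it does not by itself protect against high leverage points, which is this article's point):

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      # Two nearly identical predictors: severe multicollinearity.
      rng = np.random.default_rng(0)
      x1 = rng.standard_normal(200)
      x2 = x1 + 0.01 * rng.standard_normal(200)
      X = np.column_stack([x1, x2])
      y = (x1 + rng.standard_normal(200) > 0).astype(int)

      # penalty="l2" with C = 1/lambda gives ridge-penalized logistic regression.
      ridge = LogisticRegression(penalty="l2", C=1.0).fit(X, y)
      print(ridge.coef_)    # moderate weights shared across the two predictors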

  20. Figure of merit of diamond power devices based on accurately estimated impact ionization processes

    NASA Astrophysics Data System (ADS)

    Hiraiwa, Atsushi; Kawarada, Hiroshi

    2013-07-01

    Although a high breakdown voltage or field is considered a major advantage of diamond, there has been a large spread in the breakdown voltages or fields of diamond devices in the literature. Most of these apparently contradictory results did not correctly reflect material properties because of specific device designs, such as punch-through structures and insufficient edge termination. Once these data were removed, the remaining few results, including a record-high breakdown field of 20 MV/cm, were theoretically reproduced by exactly calculating ionization integrals based on ionization coefficients that were obtained after compensating for possible errors involved in reported theoretical values. In this compensation, we newly developed a method for extracting an ionization coefficient from an arbitrary relationship between breakdown voltage and doping density in the Chynoweth framework. The breakdown field of diamond was estimated to depend on the doping density more than in other materials, and accordingly needs to be compared at the same doping density. The figure of merit (FOM) of diamond devices, obtained using these breakdown data, was comparable to the FOMs of 4H-SiC and wurtzite-GaN devices at room temperature, but was projected to be larger than the latter by more than one order of magnitude at higher temperatures of about 300 °C. Considering the relatively undeveloped state of diamond technology, there is room for further enhancement of the diamond FOM by improving breakdown voltage and mobility. Through these investigations, junction breakdown was found to be initiated by electrons or holes in a p⁻-type or n⁻-type drift layer, respectively. The breakdown voltages in the two types of drift layers differed from each other in a strict sense but were practically the same. Hence, we do not need to care about the conduction type of drift layers, but should rather exactly calculate the ionization integral without approximating ionization coefficients by a power

  1. Wind effect on PV module temperature: Analysis of different techniques for an accurate estimation.

    NASA Astrophysics Data System (ADS)

    Schwingshackl, Clemens; Petitta, Marcello; Ernst Wagner, Jochen; Belluardo, Giorgio; Moser, David; Castelli, Mariapina; Zebisch, Marc; Tetzlaff, Anke

    2013-04-01

    …temperature estimation using meteorological parameters. References: [1] Skoplaki, E. et al., 2008: A simple correlation for the operating temperature of photovoltaic modules of arbitrary mounting, Solar Energy Materials & Solar Cells 92, 1393-1402. [2] Skoplaki, E. et al., 2008: Operating temperature of photovoltaic modules: A survey of pertinent correlations, Renewable Energy 34, 23-29. [3] Koehl, M. et al., 2011: Modeling of the nominal operating cell temperature based on outdoor weathering, Solar Energy Materials & Solar Cells 95, 1638-1646. [4] Mattei, M. et al., 2005: Calculation of the polycrystalline PV module temperature using a simple method of energy balance, Renewable Energy 31, 553-567. [5] Kurtz, S. et al.: Evaluation of high-temperature exposure of rack-mounted photovoltaic modules.
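
    As a concrete example of module-temperature estimation from meteorological parameters, the sketch below implements the classic NOCT formula and a wind-adjusted variant that scales the thermal rise by a linear wind-speed heat-transfer coefficient of the Skoplaki type (h = 8.91 + 2.0*v). The NOCT value and the wind model are assumptions for illustration, not results from this work.

    def module_temperature_noct(t_air, irradiance, noct=45.0):
        """Classic NOCT estimate: T_mod = T_air + G/800 * (NOCT - 20)."""
        return t_air + irradiance / 800.0 * (noct - 20.0)

    def module_temperature_wind(t_air, irradiance, wind, noct=45.0):
        """Wind-adjusted variant: scale the thermal rise by the ratio of a
        linear heat-transfer coefficient h(v) = 8.91 + 2.0*v evaluated at
        NOCT conditions (v = 1 m/s) to h at the observed wind speed."""
        h_noct = 8.91 + 2.0 * 1.0
        h_now = 8.91 + 2.0 * wind
        return t_air + irradiance / 800.0 * (noct - 20.0) * h_noct / h_now

    print(module_temperature_noct(25.0, 1000.0))        # ~56 C in still air
    print(module_temperature_wind(25.0, 1000.0, 5.0))   # cooler under 5 m/s wind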

  2. Estimation of measurement accuracy of track point coordinates in nuclear photoemulsion

    NASA Astrophysics Data System (ADS)

    Shamanov, V. V.

    1995-03-01

    A simple method for an estimation of the measurement accuracy of track point coordinates in nuclear photoemulsion is described. The method is based on analysis of residual deviations of measured track points from a straight line approximating the track. Reliability of the algorithm is illustrated by Monte Carlo simulation. Examples of using the method for an estimation of the accuracy of track point coordinates measured with the microscope KSM-1 (VEB Carl Zeiss Jena) are given.
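
    The residual-based estimate described here reduces to a few lines: fit a straight line to the measured track points and convert the residual sum of squares (with n - 2 degrees of freedom) into a coordinate sigma. The Monte Carlo check below mirrors the paper's validation only in spirit; the noise level and track geometry are arbitrary.

    import numpy as np

    def coordinate_accuracy(x, y):
        """Estimate the measurement sigma of track-point coordinates from the
        residuals about a fitted straight line (n - 2 degrees of freedom)."""
        coeffs = np.polyfit(x, y, 1)
        residuals = y - np.polyval(coeffs, x)
        return np.sqrt(np.sum(residuals**2) / (len(x) - 2))

    # Monte Carlo check: simulate a straight track with known Gaussian
    # measurement noise and recover sigma.
    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 100.0, 30)          # micrometres along the track
    true_sigma = 0.35
    y = 0.02 * x + 1.0 + rng.normal(0.0, true_sigma, x.size)
    print(coordinate_accuracy(x, y))         # scatters around 0.35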

  3. An Efficient Operator for the Change Point Estimation in Partial Spline Model

    PubMed Central

    Han, Sung Won; Zhong, Hua; Putt, Mary

    2015-01-01

    In bioinformatics applications, estimating the starting and ending points of a drop-down in longitudinal data is important. One possible approach to estimating such change times is to use the partial spline model with change points. To estimate the change time, the minimum operator in terms of a smoothing parameter has been widely used, but we showed that the minimum operator causes large MSE of change point estimates. In this paper, we proposed the summation operator in terms of a smoothing parameter, and our simulation study showed that the summation operator gives smaller MSE for estimated change points than the minimum one. We also applied the proposed approach to experimental data on blood flow during photodynamic cancer therapy. PMID:25705072

  4. A Unique Equation to Estimate Flash Points of Selected Pure Liquids Application to the Correction of Probably Erroneous Flash Point Values

    NASA Astrophysics Data System (ADS)

    Catoire, Laurent; Naudet, Valérie

    2004-12-01

    A simple empirical equation is presented for the estimation of closed-cup flash points for pure organic liquids. Data needed for the estimation of a flash point (FP) are the normal boiling point (Teb), the standard enthalpy of vaporization at 298.15 K [ΔvapH°(298.15 K)] of the compound, and the number of carbon atoms (n) in the molecule. The bounds for this equation are: -100⩽FP(°C)⩽+200; 250⩽Teb(K)⩽650; 20⩽ΔvapH°(298.15 K)/(kJ mol⁻¹)⩽110; 1⩽n⩽21. Compared to other methods (empirical equations, structural group contribution methods, and neural network quantitative structure-property relationships), this simple equation is shown to predict the flash points accurately for a variety of compounds, whatever their chemical groups (monofunctional and polyfunctional compounds) and whatever their structure (linear, branched, cyclic). The same equation is shown to be valid for hydrocarbons, organic nitrogen compounds, organic oxygen compounds, organic sulfur compounds, organic halogen compounds, and organic silicon compounds. It seems that the flash points of organic deuterium compounds, organic tin compounds, organic nickel compounds, organic phosphorus compounds, organic boron compounds, and organic germanium compounds can also be predicted accurately by this equation. A mean absolute deviation of about 3 °C, a standard deviation of about 2 °C, and a maximum absolute deviation of 10 °C are obtained when predictions are compared to experimental data for more than 600 compounds. For all these compounds, the absolute deviation is equal to or lower than the reproducibility expected at a 95% confidence level for closed-cup flash point measurement. This estimation technique has its limitations concerning polyhalogenated compounds, for which the equation should be used with caution. The mean absolute deviation and maximum absolute deviation observed, and the fact that the equation provides unbiased predictions, lead to the conclusion that
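
    A sketch of how such a correlation would be applied, assuming a power-law form FP = a·Teb^b·(ΔvapH°)^c·n^d; the coefficients passed in below are illustrative placeholders, and the actual fitted constants must be taken from the paper.

    def flash_point_estimate(t_eb_k, dvap_h_kj, n_carbon, coeffs):
        """Power-law correlation FP = a * Teb**b * dHvap**c * n**d (kelvin).
        Pass the fitted constants (a, b, c, d) from the paper; the values used
        in the call below are illustrative placeholders, not the published fit."""
        a, b, c, d = coeffs
        return a * t_eb_k**b * dvap_h_kj**c * n_carbon**d

    # Ethanol-like inputs with placeholder coefficients, for shape only
    fp_k = flash_point_estimate(351.4, 42.3, 2, coeffs=(1.5, 0.8, 0.17, -0.06))
    print(f"estimated flash point: {fp_k - 273.15:.1f} C (illustrative only)")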

  5. Accurate state estimation from uncertain data and models: an application of data assimilation to mathematical models of human brain tumors

    PubMed Central

    2011-01-01

    Background Data assimilation refers to methods for updating the state vector (initial condition) of a complex spatiotemporal model (such as a numerical weather model) by combining new observations with one or more prior forecasts. We consider the potential feasibility of this approach for making short-term (60-day) forecasts of the growth and spread of a malignant brain cancer (glioblastoma multiforme) in individual patient cases, where the observations are synthetic magnetic resonance images of a hypothetical tumor. Results We apply a modern state estimation algorithm (the Local Ensemble Transform Kalman Filter), previously developed for numerical weather prediction, to two different mathematical models of glioblastoma, taking into account likely errors in model parameters and measurement uncertainties in magnetic resonance imaging. The filter can accurately shadow the growth of a representative synthetic tumor for 360 days (six 60-day forecast/update cycles) in the presence of a moderate degree of systematic model error and measurement noise. Conclusions The mathematical methodology described here may prove useful for other modeling efforts in biology and oncology. An accurate forecast system for glioblastoma may prove useful in clinical settings for treatment planning and patient counseling. Reviewers This article was reviewed by Anthony Almudevar, Tomas Radivoyevitch, and Kristin Swanson (nominated by Georg Luebeck). PMID:22185645
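
    The paper uses the Local Ensemble Transform Kalman Filter; as a rough flavor of ensemble data assimilation, the sketch below implements a single stochastic ensemble Kalman filter analysis step (perturbed observations), a simpler relative of the LETKF, with toy dimensions chosen arbitrarily.

    import numpy as np

    def enkf_update(ensemble, obs, obs_operator, obs_cov, rng):
        """One stochastic EnKF analysis step.
        ensemble: (n_state, n_members) forecast ensemble;
        obs: (n_obs,) observations; obs_operator: (n_obs, n_state) matrix."""
        n_state, n_mem = ensemble.shape
        x_pert = ensemble - ensemble.mean(axis=1, keepdims=True)
        hx = obs_operator @ ensemble
        hx_pert = hx - hx.mean(axis=1, keepdims=True)

        # Sample covariances and Kalman gain
        p_hx = x_pert @ hx_pert.T / (n_mem - 1)
        s = hx_pert @ hx_pert.T / (n_mem - 1) + obs_cov
        gain = p_hx @ np.linalg.solve(s, np.eye(len(obs)))

        # Perturbed observations keep the analysis spread consistent
        obs_pert = rng.multivariate_normal(obs, obs_cov, size=n_mem).T
        return ensemble + gain @ (obs_pert - hx)

    # e.g. update a 100-member ensemble of a 2-state toy model, one observation
    rng = np.random.default_rng(5)
    ens = rng.normal(size=(2, 100))
    h = np.array([[1.0, 0.0]])
    analysis = enkf_update(ens, np.array([0.8]), h, np.array([[0.1]]), rng)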

  6. Accurate Treatment of Electrostatics during Molecular Adsorption in Nanoporous Crystals without Assigning Point Charges to Framework Atoms

    SciTech Connect

    Watanabe, T; Manz, TA; Sholl, DS

    2011-03-24

    Molecular simulations have become an important complement to experiments for studying gas adsorption and separation in crystalline nanoporous materials. Conventionally, these simulations use force fields that model adsorbate-pore interactions by assigning point charges to the atoms of the adsorbent. The assignment of framework charges always introduces ambiguity because there are many different choices for defining point charges, even when the true electron density of a material is known. We show how to completely avoid such ambiguity by using the electrostatic potential energy surface (EPES) calculated from plane wave density functional theory (DFT). We illustrate this approach by simulating CO2 adsorption in four metal-organic frameworks (MOFs): IRMOF-1, ZIF-8, ZIF-90, and Zn(nicotinate)2. The resulting CO2 adsorption isotherms are insensitive to the exchange-correlation functional used in the DFT calculation of the EPES but are sensitive to changes in the crystal structure and lattice parameters. Isotherms computed from the DFT EPES are compared to those computed from several point charge models. This comparison makes possible, for the first time, an unbiased assessment of the accuracy of these point charge models for describing adsorption in MOFs. We find an unusually high Henry's constant (109 mmol/(g·bar)) and intermediate isosteric heat of adsorption (34.9 kJ/mol) for Zn(nicotinate)2, which makes it a potentially attractive material for CO2 adsorption applications.

  7. Reservoir evaluation of thin-bedded turbidites and hydrocarbon pore thickness estimation for an accurate quantification of resource

    NASA Astrophysics Data System (ADS)

    Omoniyi, Bayonle; Stow, Dorrik

    2016-04-01

    One of the major challenges in the assessment of and production from turbidite reservoirs is to take full account of thin- and medium-bedded turbidites (<10 cm and <30 cm, respectively). Although such thinner, low-pay sands may comprise a significant proportion of the reservoir succession, they can go unnoticed by conventional analysis and so negatively impact reserve estimation, particularly in fields producing from prolific thick-bedded turbidite reservoirs. Field development plans often take little note of such thin beds, which are therefore bypassed by mainstream production. In fact, the trapped and bypassed fluids can be vital where maximising field value and optimising production are key business drivers. We have studied in detail a succession of thin-bedded turbidites associated with thicker-bedded reservoir facies in the North Brae Field, UKCS, using a combination of conventional logs and cores to assess the significance of thin-bedded turbidites in computing hydrocarbon pore thickness (HPT). This quantity, an indirect measure of thickness, is critical for an accurate estimation of original-oil-in-place (OOIP). By using a combination of conventional and unconventional logging analysis techniques, we obtain three different results for the reservoir intervals studied. These results include estimated net sand thickness, average sand thickness, and their distribution trend within a 3D structural grid. The net sand thickness varies from 205 to 380 ft, and HPT ranges from 21.53 to 39.90 ft. We observe that an integrated approach (neutron-density cross plots conditioned to cores) to HPT quantification reduces the associated uncertainties significantly, resulting in estimation of 96% of actual HPT. Further work will focus on assessing the 3D dynamic connectivity of the low-pay sands with the surrounding thick-bedded turbidite facies.
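
    The HPT bookkeeping itself is simple; a minimal sketch, assuming HPT is accumulated per net bed as thickness × porosity × hydrocarbon saturation, showing how ignoring thin beds understates the total (all numbers illustrative):

    import numpy as np

    def hydrocarbon_pore_thickness(bed_thickness, porosity, water_saturation):
        """HPT = sum over net beds of h_i * phi_i * (1 - Sw_i); thin beds that
        fall below log resolution are the terms conventional analysis drops."""
        h = np.asarray(bed_thickness, dtype=float)
        phi = np.asarray(porosity, dtype=float)
        sw = np.asarray(water_saturation, dtype=float)
        return float(np.sum(h * phi * (1.0 - sw)))

    # Thick beds only vs. thick plus thin-bedded intervals (illustrative)
    thick = hydrocarbon_pore_thickness([40.0, 55.0], [0.22, 0.20], [0.30, 0.35])
    all_beds = thick + hydrocarbon_pore_thickness([0.2] * 60, [0.18] * 60, [0.45] * 60)
    print(thick, all_beds)   # the thin beds add a non-trivial slice of HPT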

  8. Comparative assessment of bone pose estimation using Point Cluster Technique and OpenSim.

    PubMed

    Lathrop, Rebecca L; Chaudhari, Ajit M W; Siston, Robert A

    2011-11-01

    Estimating the position of the bones from optical motion capture data is a challenge associated with human movement analysis. Bone pose estimation techniques such as the Point Cluster Technique (PCT) and simulations of movement through software packages such as OpenSim are used to minimize soft tissue artifact and estimate skeletal position; however, using different methods for analysis may produce differing kinematic results which could lead to differences in clinical interpretation such as a misclassification of normal or pathological gait. This study evaluated the differences present in knee joint kinematics as a result of calculating joint angles using various techniques. We calculated knee joint kinematics from experimental gait data using the standard PCT, the least squares approach in OpenSim applied to experimental marker data, and the least squares approach in OpenSim applied to the results of the PCT algorithm. Maximum and resultant RMS differences in knee angles were calculated between all techniques. We observed differences in flexion/extension, varus/valgus, and internal/external rotation angles between all approaches. The largest differences were between the PCT results and all results calculated using OpenSim. The RMS differences averaged nearly 5° for flexion/extension angles with maximum differences exceeding 15°. Average RMS differences were relatively small (< 1.08°) between results calculated within OpenSim, suggesting that the choice of marker weighting is not critical to the results of the least squares inverse kinematics calculations. The largest difference between techniques appeared to be a constant offset between the PCT and all OpenSim results, which may be due to differences in the definition of anatomical reference frames, scaling of musculoskeletal models, and/or placement of virtual markers within OpenSim. Different methods for data analysis can produce largely different kinematic results, which could lead to the misclassification

  9. An Evaluation of the Plant Density Estimator the Point-Centred Quarter Method (PCQM) Using Monte Carlo Simulation

    PubMed Central

    Khan, Md Nabiul Islam; Hijbeek, Renske; Berger, Uta; Koedam, Nico; Grueters, Uwe; Islam, S. M. Zahirul; Hasan, Md Asadul; Dahdouh-Guebas, Farid

    2016-01-01

    Background In the Point-Centred Quarter Method (PCQM), the mean distance of the first nearest plants in each quadrant of a number of random sample points is converted to plant density. It is a quick method for plant density estimation. In recent publications the estimator equations of simple PCQM (PCQM1) and higher order ones (PCQM2 and PCQM3, which use the distance of the second and third nearest plants, respectively) show discrepancy. This study attempts to review PCQM estimators in order to find the most accurate equation form. We tested the accuracy of different PCQM equations using Monte Carlo simulations in simulated (having ‘random’, ‘aggregated’ and ‘regular’ spatial patterns) plant populations and empirical ones. Principal Findings PCQM requires at least 50 sample points to ensure a desired level of accuracy. PCQM with a corrected estimator is more accurate than with a previously published estimator. The published PCQM versions (PCQM1, PCQM2 and PCQM3) show significant differences in accuracy of density estimation, i.e. the higher order PCQM provides higher accuracy. However, the corrected PCQM versions show no significant differences among them as tested in various spatial patterns except in plant assemblages with a strong repulsion (plant competition). If N is the number of sample points and R is distance, the corrected estimator of PCQM1 is 4(4N − 1)/(π ∑ R²) but not 12N/(π ∑ R²), of PCQM2 is 4(8N − 1)/(π ∑ R²) but not 28N/(π ∑ R²) and of PCQM3 is 4(12N − 1)/(π ∑ R²) but not 44N/(π ∑ R²) as published. Significance If the spatial pattern of a plant association is random, PCQM1 with a corrected equation estimator and over 50 sample points would be sufficient to provide accurate density estimation. PCQM using just the nearest tree in each quadrant is therefore sufficient, which facilitates sampling of trees, particularly in areas with just a few hundred trees per hectare. PCQM3 provides the best density estimations for all
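
    The corrected estimators quoted above are easy to exercise; the sketch below implements the general form 4(4kN − 1)/(π Σ R²) for order k and checks PCQM1 against a random pattern of known density (for k > 1 the distances would have to be the k-th nearest plant per quadrant):

    import numpy as np

    def pcqm_density(distances, order=1):
        """Corrected PCQM density estimator 4*(4*order*N - 1) / (pi * sum(R^2)),
        where distances holds one nearest-plant distance per quadrant for each
        of the N sample points (so 4*N values for PCQM1)."""
        r = np.asarray(distances, dtype=float)
        n_points = r.size // 4               # N sample points, 4 quadrants each
        return 4.0 * (4.0 * order * n_points - 1.0) / (np.pi * np.sum(r**2))

    # Sanity check on a random (Poisson-like) pattern of known density
    rng = np.random.default_rng(2)
    pts = rng.uniform(0.0, 100.0, size=(20000, 2))    # ~2 plants per unit area
    samples = rng.uniform(20.0, 80.0, size=(60, 2))   # >= 50 sample points advised
    d = []
    for s in samples:
        dx, dy = pts[:, 0] - s[0], pts[:, 1] - s[1]
        dist = np.hypot(dx, dy)
        for qx, qy in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:
            mask = (np.sign(dx) == qx) & (np.sign(dy) == qy)
            d.append(dist[mask].min())
    print(pcqm_density(d))   # should scatter around 2.0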

  10. On a fourth order accurate implicit finite difference scheme for hyperbolic conservation laws. II - Five-point schemes

    NASA Technical Reports Server (NTRS)

    Harten, A.; Tal-Ezer, H.

    1981-01-01

    This paper presents a family of two-level five-point implicit schemes for the solution of one-dimensional systems of hyperbolic conservation laws, which generalize the Crank–Nicolson scheme to fourth-order accuracy (4-4) in both time and space. These 4-4 schemes are nondissipative and unconditionally stable. Special attention is given to the system of linear equations associated with these 4-4 implicit schemes. The regularity of this system is analyzed and the efficiency of solution algorithms is examined. A two-datum representation of these 4-4 implicit schemes brings about a compactification of the stencil to three mesh points at each time level. This compact two-datum representation is particularly useful in deriving boundary treatments. Numerical results are presented to illustrate some properties of the proposed scheme.

  11. Point cloud modeling using the homogeneous transformation for non-cooperative pose estimation

    NASA Astrophysics Data System (ADS)

    Lim, Tae W.

    2015-06-01

    A modeling process to simulate point cloud range data that a lidar (light detection and ranging) sensor produces is presented in this paper in order to support the development of non-cooperative pose (relative attitude and position) estimation approaches which will help improve proximity operation capabilities between two adjacent vehicles. The algorithms in the modeling process were based on the homogeneous transformation, which has been employed extensively in robotics and computer graphics, as well as in recently developed pose estimation algorithms. Using a flash lidar in a laboratory testing environment, point cloud data of a test article was simulated and compared against the measured point cloud data. The simulated and measured data sets match closely, validating the modeling process. The modeling capability enables close examination of the characteristics of point cloud images of an object as it undergoes various translational and rotational motions. Relevant characteristics that will be crucial in non-cooperative pose estimation were identified such as shift, shadowing, perspective projection, jagged edges, and differential point cloud density. These characteristics will have to be considered in developing effective non-cooperative pose estimation algorithms. The modeling capability will allow extensive non-cooperative pose estimation performance simulations prior to field testing, saving development cost and providing performance metrics of the pose estimation concepts and algorithms under evaluation. The modeling process also provides "truth" pose of the test objects with respect to the sensor frame so that the pose estimation error can be quantified.
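
    The core operation of the modeling process, mapping a point cloud through a pose, is shown below as a minimal homogeneous-transformation sketch; the toy rotation and offset stand in for relative motion between sensor frames and are not drawn from the paper's test article.

    import numpy as np

    def homogeneous_transform(points, rotation, translation):
        """Apply a 4x4 homogeneous transformation to an (N, 3) point cloud."""
        t_mat = np.eye(4)
        t_mat[:3, :3] = rotation
        t_mat[:3, 3] = translation
        homog = np.hstack([points, np.ones((points.shape[0], 1))])
        return (t_mat @ homog.T).T[:, :3]

    # Rotate a toy target 30 degrees about z and offset it along the boresight,
    # emulating a pose change between two lidar frames.
    theta = np.radians(30.0)
    rot_z = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                      [np.sin(theta),  np.cos(theta), 0.0],
                      [0.0,            0.0,           1.0]])
    cloud = np.random.default_rng(3).uniform(-0.5, 0.5, size=(100, 3))
    moved = homogeneous_transform(cloud, rot_z, np.array([0.0, 0.0, 10.0]))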

  12. A plan for accurate estimation of daily area-mean rainfall during the CaPE experiment

    NASA Technical Reports Server (NTRS)

    Duchon, Claude E.

    1992-01-01

    The Convection and Precipitation/Electrification (CaPE) experiment took place in east central Florida from 8 July to 18 August, 1991. There were five research themes associated with CaPE. In broad terms they are: investigation of the evolution of the electric field in convective clouds, determination of meteorological and electrical conditions associated with lightning, development of mesoscale numerical forecasts (2-12 hr) and nowcasts (less than 2 hr) of convective initiation and remote estimation of rainfall. It is the last theme coupled with numerous raingage and streamgage measurements, satellite and aircraft remote sensing, radiosondes and other meteorological measurements in the atmospheric boundary layer that provide the basis for determining the hydrologic cycle for the CaPE experiment area. The largest component of the hydrologic cycle in this region is rainfall. An accurate determination of daily area-mean rainfall is important in correctly modeling its apportionment into runoff, infiltration and evapotranspiration. In order to achieve this goal a research plan was devised and initial analysis begun. The overall research plan is discussed with special emphasis placed on the adjustment of radar rainfall estimates to raingage rainfall.

  13. Reliability of Scales with General Structure: Point and Interval Estimation Using a Structural Equation Modeling Approach.

    ERIC Educational Resources Information Center

    Raykov, Tenko; Shrout, Patrick E.

    2002-01-01

    Discusses a method for obtaining point and interval estimates of reliability for composites of measures with a general structure. The approach is based on fitting a correspondingly constrained structural equation model and generalizes earlier covariance structure analysis methods for scale reliability estimation with congeneric tests. (SLD)

  14. Central blood pressure estimation by using N-point moving average method in the brachial pulse wave.

    PubMed

    Sugawara, Rie; Horinaka, Shigeo; Yagi, Hiroshi; Ishimura, Kimihiko; Honda, Takeharu

    2015-05-01

    Recently, a method of estimating the central systolic blood pressure (C-SBP) using an N-point moving average method on the radial or brachial artery waveform has been reported. We investigated the relationship between the C-SBP estimated from the brachial artery pressure waveform using the N-point moving average method and the C-SBP measured invasively using a catheter. C-SBP was calculated using an N/6 moving average method from the scaled right brachial artery pressure waveforms acquired with the VaSera VS-1500. This estimated C-SBP was compared with the invasively measured C-SBP within a few minutes. In 41 patients who underwent cardiac catheterization (mean age: 65 years), invasively measured C-SBP was significantly lower than right cuff-based brachial BP (138.2 ± 26.3 vs 141.0 ± 24.9 mm Hg, difference -2.78 ± 1.36 mm Hg, P = 0.048). The cuff-based SBP was significantly higher than the invasively measured C-SBP in subjects younger than 60 years. However, the C-SBP estimated by the N/6 moving average method from the scaled right brachial artery pressure waveforms and the invasively measured C-SBP did not significantly differ (137.8 ± 24.2 vs 138.2 ± 26.3 mm Hg, difference -0.49 ± 1.39, P = 0.73). The N/6-point moving average method applied to the non-invasively acquired brachial artery waveform calibrated by the cuff-based brachial SBP was an accurate, convenient and useful method for estimating C-SBP. Thus, C-SBP can be estimated simply by applying a regular arm cuff, which is highly feasible in routine clinical practice. PMID:25693855
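
    A minimal sketch of the moving-average estimate, reading the study's N/6 rule as a smoothing window of (sampling rate)/6 samples applied to the cuff-calibrated brachial waveform; that reading, and the synthetic pulse used to exercise it, are our assumptions.

    import numpy as np

    def estimate_csbp(brachial_wave, fs):
        """Peak of an N-point moving average of the cuff-calibrated brachial
        waveform, with the window taken here as fs/6 samples (our reading of
        the study's N/6 rule)."""
        n = max(1, int(round(fs / 6.0)))
        smoothed = np.convolve(brachial_wave, np.ones(n) / n, mode="valid")
        return float(smoothed.max())

    # Toy waveform: a crude 1.2 Hz pressure pulse sampled at 1 kHz
    fs = 1000.0
    t = np.arange(0.0, 2.0, 1.0 / fs)
    wave = 95.0 + 45.0 * np.clip(np.sin(2 * np.pi * 1.2 * t), 0.0, None) ** 2
    print(estimate_csbp(wave, fs))   # below the 140 mmHg brachial peak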

  15. Spatially shifting temporal points: estimating pooled within-time series variograms for scarce hydrological data

    NASA Astrophysics Data System (ADS)

    Bhowmik, A. K.; Cabral, P.

    2015-02-01

    Estimation of pooled within-time series (PTS) variograms is a frequently used technique for geostatistical interpolation of continuous hydrological variables in spatially data-scarce regions, provided that time series are available. The only available method for estimating PTS variograms averages semivariances, which are computed for individual time steps, over each spatial lag within a pooled time series. However, semivariances computed from a few paired comparisons for individual time steps are erratic and hence may hamper the precision of PTS variogram estimation. Here, we outline an alternative method for estimating PTS variograms by spatializing temporal data points and shifting them. The data were pooled by ensuring consistency of spatial structure and stationarity within a time series, while pooling a sufficient number of data points for reliable variogram estimation. The pooled spatial data point sets from different time steps were assigned to different coordinate sets on the same space. Then a semivariance was computed for each spatial lag within a pooled time series by comparing all point pairs separable by that spatial lag, and a PTS variogram was estimated by controlling the lower and upper boundary of spatial lags. Our method showed higher precision than the available method for PTS variogram estimation and was developed using the freely available R open source software environment. The method will reduce uncertainty in spatial variability modeling while preserving the spatiotemporal properties of the data for geostatistical interpolation of hydrological variables in spatially data-scarce developing countries.
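
    One way to realize the shifting idea in code (the authors worked in R; this is a Python sketch): give each time step's gauges a distinct, far-apart coordinate block so that only within-step pairs can fall inside the admissible lag range, then pool all pairs into one empirical variogram. The block offset and binning below are illustrative choices, not the paper's exact procedure.

    import numpy as np

    def pooled_variogram(coords_per_step, values_per_step, lag_edges, shift=1e6):
        """Pool several time steps by assigning each step's points to a distant
        coordinate block; pairs that straddle blocks exceed the largest
        admissible lag, so only within-step pairs enter each semivariance."""
        xs, vals = [], []
        for k, (xy, v) in enumerate(zip(coords_per_step, values_per_step)):
            xy = np.asarray(xy, dtype=float).copy()
            xy[:, 0] += k * shift           # spatial shift per time step
            xs.append(xy)
            vals.append(np.asarray(v, dtype=float))
        xy = np.vstack(xs)
        v = np.concatenate(vals)

        diff = xy[:, None, :] - xy[None, :, :]       # all pairwise lags
        lag = np.sqrt((diff ** 2).sum(-1))
        gsq = 0.5 * (v[:, None] - v[None, :]) ** 2   # squared increments
        iu = np.triu_indices(len(v), k=1)
        lag, gsq = lag[iu], gsq[iu]

        return np.array([gsq[(lag >= lo) & (lag < hi)].mean()
                         for lo, hi in zip(lag_edges[:-1], lag_edges[1:])])

    # Two time steps with five gauges each; lags binned up to 40 km
    c1 = [[0, 0], [10, 0], [20, 5], [5, 15], [30, 10]]
    c2 = [[2, 1], [12, 3], [18, 8], [7, 14], [28, 12]]
    v1 = [1.0, 1.4, 2.0, 1.1, 2.4]
    v2 = [0.9, 1.2, 2.2, 1.3, 2.1]
    print(pooled_variogram([c1, c2], [v1, v2], lag_edges=[0, 10, 20, 40]))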

  16. How accurate and precise are limited sampling strategies in estimating exposure to mycophenolic acid in people with autoimmune disease?

    PubMed

    Abd Rahman, Azrin N; Tett, Susan E; Staatz, Christine E

    2014-03-01

    Mycophenolic acid (MPA) is a potent immunosuppressant agent, which is increasingly being used in the treatment of patients with various autoimmune diseases. Dosing to achieve a specific target MPA area under the concentration-time curve from 0 to 12 h post-dose (AUC12) is likely to lead to better treatment outcomes in patients with autoimmune disease than a standard fixed-dose strategy. This review summarizes the available published data around concentration monitoring strategies for MPA in patients with autoimmune disease and examines the accuracy and precision of methods reported to date using limited concentration-time points to estimate MPA AUC12. A total of 13 studies were identified that assessed the correlation between single time points and MPA AUC12 and/or examined the predictive performance of limited sampling strategies in estimating MPA AUC12. The majority of studies investigated mycophenolate mofetil (MMF) rather than the enteric-coated mycophenolate sodium (EC-MPS) formulation of MPA. Correlations between MPA trough concentrations and MPA AUC12 estimated by full concentration-time profiling ranged from 0.13 to 0.94 across ten studies, with the highest associations (r² = 0.90-0.94) observed in lupus nephritis patients. Correlations were generally higher in autoimmune disease patients compared with renal allograft recipients and higher after MMF compared with EC-MPS intake. Four studies investigated use of a limited sampling strategy to predict MPA AUC12 determined by full concentration-time profiling. Three studies used a limited sampling strategy consisting of a maximum combination of three sampling time points with the latest sample drawn 3-6 h after MMF intake, whereas the remaining study tested all combinations of sampling times. MPA AUC12 was best predicted when three samples were taken at pre-dose and at 1 and 3 h post-dose with a mean bias and imprecision of 0.8 and 22.6 % for multiple linear regression analysis and of -5.5 and 23.0 % for
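
    To make the limited-sampling idea concrete, the sketch below fits a three-sample equation AUC12 ≈ a + b·C0 + c·C1 + d·C3 by least squares and reports bias and imprecision; the training data are synthetic stand-ins, so the fitted coefficients have no clinical meaning.

    import numpy as np

    # Synthetic stand-ins for full-profile training data; real coefficients
    # must come from patient concentration-time profiles.
    rng = np.random.default_rng(4)
    n_patients = 40
    c0 = rng.uniform(1.0, 4.0, n_patients)     # pre-dose MPA (mg/L)
    c1 = rng.uniform(5.0, 20.0, n_patients)    # 1 h post-dose
    c3 = rng.uniform(3.0, 12.0, n_patients)    # 3 h post-dose
    auc12 = 8.0 + 1.5 * c0 + 1.1 * c1 + 2.0 * c3 + rng.normal(0.0, 3.0, n_patients)

    design = np.column_stack([np.ones(n_patients), c0, c1, c3])
    coefs, *_ = np.linalg.lstsq(design, auc12, rcond=None)

    predicted = design @ coefs
    bias = np.mean((predicted - auc12) / auc12) * 100.0              # %
    imprecision = np.mean(np.abs(predicted - auc12) / auc12) * 100.0
    print(coefs, bias, imprecision)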

  17. Development of a new, robust and accurate, spectroscopic metric for scatterer size estimation in optical coherence tomography (OCT) images

    NASA Astrophysics Data System (ADS)

    Kassinopoulos, Michalis; Pitris, Costas

    2016-03-01

    The modulations appearing on the backscattering spectrum originating from a scatterer are related to its diameter as described by Mie theory for spherical particles. Many metrics for Spectroscopic Optical Coherence Tomography (SOCT) take advantage of this observation in order to enhance the contrast of Optical Coherence Tomography (OCT) images. However, none of these metrics has achieved high accuracy when calculating the scatterer size. In this work, Mie theory was used to further investigate the relationship between the degree of modulation in the spectrum and the scatterer size. From this study, a new spectroscopic metric, the bandwidth of the Correlation of the Derivative (COD) was developed which is more robust and accurate, compared to previously reported techniques, in the estimation of scatterer size. The self-normalizing nature of the derivative and the robustness of the first minimum of the correlation as a measure of its width, offer significant advantages over other spectral analysis approaches especially for scatterer sizes above 3 μm. The feasibility of this technique was demonstrated using phantom samples containing 6, 10 and 16 μm diameter microspheres as well as images of normal and cancerous human colon. The results are very promising, suggesting that the proposed metric could be implemented in OCT spectral analysis for measuring nuclear size distribution in biological tissues. A technique providing such information would be of great clinical significance since it would allow the detection of nuclear enlargement at the earliest stages of precancerous development.

  18. Optimization of tissue physical parameters for accurate temperature estimation from finite-element simulation of radiofrequency ablation.

    PubMed

    Subramanian, Swetha; Mast, T Douglas

    2015-10-01

    Computational finite element models are commonly used for the simulation of radiofrequency ablation (RFA) treatments. However, the accuracy of these simulations is limited by the lack of precise knowledge of tissue parameters. In this technical note, an inverse solver based on the unscented Kalman filter (UKF) is proposed to optimize values for specific heat, thermal conductivity, and electrical conductivity resulting in accurately simulated temperature elevations. A total of 15 RFA treatments were performed on ex vivo bovine liver tissue. For each RFA treatment, 15 finite-element simulations were performed using a set of deterministically chosen tissue parameters to estimate the mean and variance of the resulting tissue ablation. The UKF was implemented as an inverse solver to recover the specific heat, thermal conductivity, and electrical conductivity corresponding to the measured area of the ablated tissue region, as determined from gross tissue histology. These tissue parameters were then employed in the finite element model to simulate the position- and time-dependent tissue temperature. Results show good agreement between simulated and measured temperature. PMID:26352462

  19. Optimization of tissue physical parameters for accurate temperature estimation from finite-element simulation of radiofrequency ablation

    NASA Astrophysics Data System (ADS)

    Subramanian, Swetha; Mast, T. Douglas

    2015-09-01

    Computational finite element models are commonly used for the simulation of radiofrequency ablation (RFA) treatments. However, the accuracy of these simulations is limited by the lack of precise knowledge of tissue parameters. In this technical note, an inverse solver based on the unscented Kalman filter (UKF) is proposed to optimize values for specific heat, thermal conductivity, and electrical conductivity resulting in accurately simulated temperature elevations. A total of 15 RFA treatments were performed on ex vivo bovine liver tissue. For each RFA treatment, 15 finite-element simulations were performed using a set of deterministically chosen tissue parameters to estimate the mean and variance of the resulting tissue ablation. The UKF was implemented as an inverse solver to recover the specific heat, thermal conductivity, and electrical conductivity corresponding to the measured area of the ablated tissue region, as determined from gross tissue histology. These tissue parameters were then employed in the finite element model to simulate the position- and time-dependent tissue temperature. Results show good agreement between simulated and measured temperature.

  20. Identification of the monitoring point density needed to reliably estimate contaminant mass fluxes

    NASA Astrophysics Data System (ADS)

    Liedl, R.; Liu, S.; Fraser, M.; Barker, J.

    2005-12-01

    Plume monitoring frequently relies on the evaluation of point-scale measurements of concentration at observation wells which are located at control planes or 'fences' perpendicular to groundwater flow. Depth-specific concentration values are used to estimate the total mass flux of individual contaminants through the fence. Results of this approach, which is based on spatial interpolation, obviously depend on the density of the measurement points. Our contribution relates the accuracy of mass flux estimation to the point density and, in particular, allows us to identify a minimum point density needed to achieve a specified accuracy. In order to establish this relationship, concentration data from fences installed in the coal tar creosote plume at the Borden site are used. These fences are characterized by a rather high density of about 7 points/m² and it is reasonable to assume that the true mass flux is obtained with this point density. This mass flux is then compared with results for less dense grids down to about 0.1 points/m². Mass flux estimates obtained for this range of point densities are analyzed by the moving window method in order to reduce purely random fluctuations. For each position of the moving window the mass flux is estimated and the coefficient of variation (CV) is calculated to quantify the variability of the results. Thus, the CV provides a relative measure of accuracy in the estimated fluxes. By applying this approach to the Borden naphthalene plume at different times, it is found that the point density changes from sufficient to insufficient due to the temporally decreasing mass flux. By comparing the results of naphthalene and phenol at the same fence and at the same time, we can see that the same grid density might be sufficient for one compound but not for another. If a rather strict CV criterion of 5% is used, a grid of 7 points/m² is shown to allow for reliable estimates of the true mass fluxes only in the beginning of plume development when

  1. Point Cloud Based Relative Pose Estimation of a Satellite in Close Range

    PubMed Central

    Liu, Lujiang; Zhao, Gaopeng; Bo, Yuming

    2016-01-01

    Determination of the relative pose of satellites is essential in space rendezvous operations and on-orbit servicing missions. The key problems are the adoption of suitable sensor on board of a chaser and efficient techniques for pose estimation. This paper aims to estimate the pose of a target satellite in close range on the basis of its known model by using point cloud data generated by a flash LIDAR sensor. A novel model based pose estimation method is proposed; it includes a fast and reliable pose initial acquisition method based on global optimal searching by processing the dense point cloud data directly, and a pose tracking method based on Iterative Closest Point algorithm. Also, a simulation system is presented in this paper in order to evaluate the performance of the sensor and generate simulated sensor point cloud data. It also provides truth pose of the test target so that the pose estimation error can be quantified. To investigate the effectiveness of the proposed approach and achievable pose accuracy, numerical simulation experiments are performed; results demonstrate algorithm capability of operating with point cloud directly and large pose variations. Also, a field testing experiment is conducted and results show that the proposed method is effective. PMID:27271633

  2. Point Cloud Based Relative Pose Estimation of a Satellite in Close Range.

    PubMed

    Liu, Lujiang; Zhao, Gaopeng; Bo, Yuming

    2016-01-01

    Determination of the relative pose of satellites is essential in space rendezvous operations and on-orbit servicing missions. The key problems are the adoption of suitable sensor on board of a chaser and efficient techniques for pose estimation. This paper aims to estimate the pose of a target satellite in close range on the basis of its known model by using point cloud data generated by a flash LIDAR sensor. A novel model based pose estimation method is proposed; it includes a fast and reliable pose initial acquisition method based on global optimal searching by processing the dense point cloud data directly, and a pose tracking method based on Iterative Closest Point algorithm. Also, a simulation system is presented in this paper in order to evaluate the performance of the sensor and generate simulated sensor point cloud data. It also provides truth pose of the test target so that the pose estimation error can be quantified. To investigate the effectiveness of the proposed approach and achievable pose accuracy, numerical simulation experiments are performed; results demonstrate algorithm capability of operating with point cloud directly and large pose variations. Also, a field testing experiment is conducted and results show that the proposed method is effective. PMID:27271633

  3. A double-observer approach for estimating detection probability and abundance from point counts

    USGS Publications Warehouse

    Nichols, J.D.; Hines, J.E.; Sauer, J.R.; Fallon, F.W.; Fallon, J.E.; Heglund, P.J.

    2000-01-01

    Although point counts are frequently used in ornithological studies, basic assumptions about detection probabilities often are untested. We apply a double-observer approach developed to estimate detection probabilities for aerial surveys (Cook and Jacobson 1979) to avian point counts. At each point count, a designated 'primary' observer indicates to another ('secondary') observer all birds detected. The secondary observer records all detections of the primary observer as well as any birds not detected by the primary observer. Observers alternate primary and secondary roles during the course of the survey. The approach permits estimation of observer-specific detection probabilities and bird abundance. We developed a set of models that incorporate different assumptions about sources of variation (e.g. observer, bird species) in detection probability. Seventeen field trials were conducted, and models were fit to the resulting data using program SURVIV. Single-observer point counts generally miss varying proportions of the birds actually present, and observer and bird species were found to be relevant sources of variation in detection probabilities. Overall detection probabilities (probability of being detected by at least one of the two observers) estimated using the double-observer approach were very high (>0.95), yielding precise estimates of avian abundance. We consider problems with the approach and recommend possible solutions, including restriction of the approach to fixed-radius counts to reduce the effect of variation in the effective radius of detection among various observers and to provide a basis for using spatial sampling to estimate bird abundance on large areas of interest. We believe that most questions meriting the effort required to carry out point counts also merit serious attempts to estimate detection probabilities associated with the counts. The double-observer approach is a method that can be used for this purpose.
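
    A simplified likelihood version of the double-observer logic is sketched below: with independent detection probabilities pA and pB, a bird detected during an A-primary count is recorded first by A with probability pA/(pA + (1-pA)pB). This two-parameter model ignores the species and observer covariate structure the paper fits in SURVIV, and the counts are invented.

    import numpy as np
    from scipy.optimize import minimize

    def neg_log_lik(params, x1_a, x2_a, x1_b, x2_b):
        """Conditional binomial likelihood for the double-observer design under
        independent detection, with p_a and p_b on the logit scale."""
        p_a, p_b = 1 / (1 + np.exp(-params[0])), 1 / (1 + np.exp(-params[1]))
        pa_first = p_a / (p_a + (1 - p_a) * p_b)   # A-primary: seen first by A
        pb_first = p_b / (p_b + (1 - p_b) * p_a)
        return -(x1_a * np.log(pa_first) + x2_a * np.log(1 - pa_first)
                 + x1_b * np.log(pb_first) + x2_b * np.log(1 - pb_first))

    x1_a, x2_a = 46, 7    # A primary: detected by A / missed by A, seen by B
    x1_b, x2_b = 51, 4    # B primary
    fit = minimize(neg_log_lik, [0.0, 0.0], args=(x1_a, x2_a, x1_b, x2_b))
    p_a, p_b = 1 / (1 + np.exp(-fit.x))
    p_any = 1 - (1 - p_a) * (1 - p_b)              # detected by at least one
    n_hat = (x1_a + x2_a + x1_b + x2_b) / p_any    # abundance estimate
    print(p_a, p_b, p_any, n_hat)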

  4. Estimated results analysis and application of the precise point positioning based high-accuracy ionosphere delay

    NASA Astrophysics Data System (ADS)

    Wang, Shi-tai; Peng, Jun-huan

    2015-12-01

    The characterization of ionosphere delay estimated with precise point positioning is analyzed in this paper. The estimation, interpolation and application of the ionosphere delay are studied based on the processing of 24 h of data from 5 observation stations. The results show that the estimated ionosphere delay is affected by the hardware delay bias from the receiver, so that there is a difference between the estimated and interpolated results. The results also show that the RMSs (root mean squares) are bigger, while the STDs (standard deviations) are better than 0.11 m. When the satellite difference is used, the hardware delay bias can be canceled. The interpolated satellite-differenced ionosphere delay is better than 0.11 m. Although there is a difference between the estimated and interpolated ionosphere delay results, it does not affect their application in single-frequency positioning, and the positioning accuracy can reach the cm level.

  5. A Direct Latent Variable Modeling Based Method for Point and Interval Estimation of Coefficient Alpha

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.

    2015-01-01

    A direct approach to point and interval estimation of Cronbach's coefficient alpha for multiple component measuring instruments is outlined. The procedure is based on a latent variable modeling application with widely circulated software. As a by-product, using sample data the method permits ascertaining whether the population discrepancy…
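
    For contrast with the latent-variable approach described here, the classical sample formula for the alpha point estimate is only a few lines (this is not the SEM-based method of the article, which also yields interval estimates):

    import numpy as np

    def cronbach_alpha(items):
        """Point estimate of coefficient alpha for an (n_subjects, k_items)
        array: alpha = k/(k-1) * (1 - sum of item variances / total variance)."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1.0 - item_vars / total_var)

    # Four-item toy scale, five respondents
    scores = np.array([[3, 4, 3, 5], [2, 2, 3, 2], [4, 5, 4, 4],
                       [1, 2, 1, 2], [3, 3, 4, 3]])
    print(cronbach_alpha(scores))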

  6. Human Body 3D Posture Estimation Using Significant Points and Two Cameras

    PubMed Central

    Juang, Chia-Feng; Chen, Teng-Chang; Du, Wei-Chin

    2014-01-01

    This paper proposes a three-dimensional (3D) human posture estimation system that locates 3D significant body points based on 2D body contours extracted from two cameras without using any depth sensors. The 3D significant body points that are located by this system include the head, the center of the body, the tips of the feet, the tips of the hands, the elbows, and the knees. First, a linear support vector machine- (SVM-) based segmentation method is proposed to distinguish the human body from the background in red, green, and blue (RGB) color space. The SVM-based segmentation method uses not only normalized color differences but also included angle between pixels in the current frame and the background in order to reduce shadow influence. After segmentation, 2D significant points in each of the two extracted images are located. A significant point volume matching (SPVM) method is then proposed to reconstruct the 3D significant body point locations by using 2D posture estimation results. Experimental results show that the proposed SVM-based segmentation method shows better performance than other gray level- and RGB-based segmentation approaches. This paper also shows the effectiveness of the 3D posture estimation results in different postures. PMID:24883422

  7. Accurate experimental determination of the isotope effects on the triple point temperature of water. II. Combined dependence on the 18O and 17O abundances

    NASA Astrophysics Data System (ADS)

    Faghihi, V.; Kozicki, M.; Aerts-Bijma, A. T.; Jansen, H. G.; Spriensma, J. J.; Peruzzi, A.; Meijer, H. A. J.

    2015-12-01

    This paper is the second of two articles on the quantification of isotope effects on the triple point temperature of water. In this second article, we address the combined effects of 18O and 17O isotopes. We manufactured five triple point cells with waters whose 18O and 17O abundances widely exceed the natural abundance range while maintaining their natural 18O/17O relationship. The 2H isotopic abundance was kept close to that of VSMOW (Vienna Standard Mean Ocean Water). These cells realized triple point temperatures ranging from −220 μK to 1420 μK with respect to the temperature realized by a triple point cell filled with VSMOW. Our experiment allowed us to determine an accurate and reliable value for the newly defined combined 18,17O correction parameter of AO = 630 μK with a combined uncertainty of 10 μK. To apply this correction, only the 18O abundance of the TPW needs to be known (and the water needs to be of natural origin). Using the results of our two articles, we recommend a correction equation along with the coefficient values for isotopic compositions differing from that of VSMOW and compare the effect of this new equation on a number of triple point cells from the literature and from our own institute. Using our correction equation, the uncertainty in the isotope correction for triple point cell waters used around the world will be <1 μK.

  8. Observing Volcanic Thermal Anomalies from Space: How Accurate is the Estimation of the Hotspot's Size and Temperature?

    NASA Astrophysics Data System (ADS)

    Zaksek, K.; Pick, L.; Lombardo, V.; Hort, M. K.

    2015-12-01

    Measuring the heat emission from active volcanic features on the basis of infrared satellite images contributes to the volcano's hazard assessment. Because these thermal anomalies only occupy a small fraction (<1%) of a typically resolved target pixel (e.g. from Landsat 7, MODIS), the accurate determination of the hotspot's size and temperature is problematic. Conventionally this is overcome by comparing observations in at least two separate infrared spectral wavebands (Dual-Band method). We investigate the resolution limits of this thermal un-mixing technique by means of a uniquely designed indoor analog experiment. Therein the volcanic feature is simulated by an electrical heating alloy of 0.5 mm diameter installed on a plywood panel of high emissivity. Two thermographic cameras (VarioCam high resolution and ImageIR 8300 by Infratec) record images of the artificial heat source in wavebands comparable to those available from satellite data. These range from the short-wave infrared (1.4-3 µm) over the mid-wave infrared (3-8 µm) to the thermal infrared (8-15 µm). In the conducted experiment the pixel fraction of the hotspot was successively reduced by increasing the camera-to-target distance from 3 m to 35 m. On the basis of an individual target pixel the expected decrease of the hotspot pixel area with distance at a relatively constant wire temperature of around 600 °C was confirmed. The deviation of the hotspot's pixel fraction yielded by the Dual-Band method from the theoretically calculated one was found to be within 20% up to a target distance of 25 m. This means that a reliable estimation of the hotspot size is only possible if the hotspot is larger than about 3% of the pixel area, a resolution boundary below which most remotely sensed volcanic hotspots fall. Future efforts will focus on the investigation of a resolution limit for the hotspot's temperature by varying the alloy's amperage. Moreover, the un-mixing results for more realistic multi
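
    The Dual-Band unmixing that the experiment probes can be written down directly: the pixel radiance in each band is modeled as f·B(λ,T_hot) + (1-f)·B(λ,T_bg), and the two-band system is solved for f and T_hot. The band centers, background temperature and starting guesses below are illustrative assumptions.

    import numpy as np
    from scipy.optimize import fsolve

    H, C, KB = 6.626e-34, 2.998e8, 1.381e-23

    def planck(wavelength_m, temp_k):
        """Spectral radiance B(lambda, T) of a blackbody (W m^-3 sr^-1)."""
        return (2 * H * C**2 / wavelength_m**5
                / (np.exp(H * C / (wavelength_m * KB * temp_k)) - 1.0))

    def dual_band_unmix(rad_mir, rad_tir, t_bg, lam_mir=3.9e-6, lam_tir=11e-6):
        """Solve R_band = f*B(lam, T_hot) + (1-f)*B(lam, T_bg) in two bands
        for the hotspot fraction f and temperature T_hot."""
        def equations(p):
            f, t_hot = p
            return (f * planck(lam_mir, t_hot) + (1 - f) * planck(lam_mir, t_bg) - rad_mir,
                    f * planck(lam_tir, t_hot) + (1 - f) * planck(lam_tir, t_bg) - rad_tir)
        return fsolve(equations, x0=(0.01, 600.0 + 273.15))

    # Forward-simulate a pixel that is 2 % covered by ~600 C material, then invert
    t_bg, f_true, t_hot_true = 300.0, 0.02, 873.0
    r_mir = f_true * planck(3.9e-6, t_hot_true) + (1 - f_true) * planck(3.9e-6, t_bg)
    r_tir = f_true * planck(11e-6, t_hot_true) + (1 - f_true) * planck(11e-6, t_bg)
    print(dual_band_unmix(r_mir, r_tir, t_bg))   # recovers ~ (0.02, 873)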

  9. Monte Carlo point process estimation of electromyographic envelopes from motor cortical spikes for brain-machine interfaces

    NASA Astrophysics Data System (ADS)

    Liao, Yuxi; She, Xiwei; Wang, Yiwen; Zhang, Shaomin; Zhang, Qiaosheng; Zheng, Xiaoxiang; Principe, Jose C.

    2015-12-01

    Objective. Representation of movement in the motor cortex (M1) has been widely studied in brain-machine interfaces (BMIs). The electromyogram (EMG) has greater bandwidth than the conventional kinematic variables (such as position, velocity), and is functionally related to the discharge of cortical neurons. As the stochastic information of EMG is derived from the explicit spike time structure, point process (PP) methods will be a good solution for decoding EMG directly from neural spike trains. Previous studies usually assume linear or exponential tuning curves between neural firing and EMG, which may not be true. Approach. In our analysis, we estimate the tuning curves in a data-driven way and find both the traditional functional-excitatory and functional-inhibitory neurons, which are widely found across a rat’s motor cortex. To accurately decode EMG envelopes from M1 neural spike trains, the Monte Carlo point process (MCPP) method is implemented based on such nonlinear tuning properties. Main results. Better reconstruction of EMG signals is shown on baseline and extreme high peaks, as our method can better preserve the nonlinearity of the neural tuning during decoding. The MCPP improves the prediction accuracy (the normalized mean squared error) 57% and 66% on average compared with the adaptive point process filter using linear and exponential tuning curves respectively, for all 112 data segments across six rats. Compared to a Wiener filter using spike rates with an optimal window size of 50 ms, MCPP decoding EMG from a point process improves the normalized mean square error (NMSE) by 59% on average. Significance. These results suggest that neural tuning is constantly changing during task execution and therefore, the use of spike timing methodologies and estimation of appropriate tuning curves needs to be undertaken for better EMG decoding in motor BMIs.

  10. An opportunity for directly estimating the characteristics of zero-point dynamics in polyethylene crystals

    NASA Astrophysics Data System (ADS)

    Vettegren, V. I.; Slutsker, A. I.; Titenkov, L. S.; Kulik, V. B.; Gilyarov, V. L.

    2007-02-01

    For large polyethylene crystallites (100 × 60 × 60 nm), the width of the Raman band at 1129 cm-1 and the angular position of the x-ray equatorial 110 reflection were measured as a function of temperature over the range 5-300 K. It is found that the Raman bandwidth has an athermic (zero-point) component at low temperatures. This component is used to estimate the zero-point energies of torsional and bending vibrations of polyethylene molecules. These energies are close to those obtained from analyzing the x-ray diffraction data. It is concluded that the characteristics of zero-point dynamics can be determined directly from measuring the zero-point width of a Raman band.

  11. Estimating the melting point, entropy of fusion, and enthalpy of fusion of organic compounds via SPARC.

    PubMed

    Whiteside, T S; Hilal, S H; Brenner, A; Carreira, L A

    2016-08-01

    The entropy of fusion, enthalpy of fusion, and melting point of organic compounds can be estimated through three models developed using the SPARC (SPARC Performs Automated Reasoning in Chemistry) platform. The entropy of fusion is modelled through a combination of interaction terms and physical descriptors. The enthalpy of fusion is modelled as a function of the entropy of fusion, boiling point, and flexibility of the molecule. The melting point model is the enthalpy of fusion divided by the entropy of fusion. These models were developed in part to improve SPARC's vapour pressure and solubility models. These models have been tested on 904 unique compounds. The entropy model has an RMS of 12.5 J mol⁻¹ K⁻¹. The enthalpy model has an RMS of 4.87 kJ mol⁻¹. The melting point model has an RMS of 54.4 °C. PMID:27586365
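
    The melting point model reduces to a thermodynamic identity, Tm = ΔHfus/ΔSfus, once the other two models have produced their estimates; a minimal sketch with illustrative inputs:

    def melting_point_celsius(enthalpy_fusion_j_mol, entropy_fusion_j_mol_k):
        """Thermodynamic identity used by the SPARC model chain:
        Tm (K) = dH_fus / dS_fus, converted here to Celsius."""
        return enthalpy_fusion_j_mol / entropy_fusion_j_mol_k - 273.15

    # E.g., dH = 18.8 kJ/mol and dS = 54 J/(mol K) give Tm near 75 C
    print(melting_point_celsius(18800.0, 54.0))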

  12. Estimation of dew point temperature using neuro-fuzzy and neural network techniques

    NASA Astrophysics Data System (ADS)

    Kisi, Ozgur; Kim, Sungwon; Shiri, Jalal

    2013-11-01

    This study investigates the ability of two different artificial neural network (ANN) models, the generalized regression neural networks model (GRNNM) and the Kohonen self-organizing feature maps neural networks model (KSOFM), and two different adaptive neuro-fuzzy inference system (ANFIS) models, the ANFIS model with sub-clustering identification (ANFIS-SC) and the ANFIS model with grid partitioning identification (ANFIS-GP), for estimating daily dew point temperature. The climatic data, consisting of 8 years of daily records of air temperature, sunshine hours, wind speed, saturation vapor pressure, relative humidity, and dew point temperature from three weather stations, Daegu, Pohang, and Ulsan, in South Korea, were used in the study. The estimates of the ANN and ANFIS models were compared according to three different statistics: root mean square errors, mean absolute errors, and the determination coefficient. Comparison results revealed that the ANFIS-SC, ANFIS-GP, and GRNNM models showed almost the same accuracy and performed better than the KSOFM model. Results also indicated that the sunshine hours, wind speed, and saturation vapor pressure have little effect on dew point temperature. It was found that the dew point temperature could be successfully estimated using the mean air temperature (Tmean) and relative humidity (RH) variables.

  13. Estimation of the auto frequency response function at unexcited points using dummy masses

    NASA Astrophysics Data System (ADS)

    Hosoya, Naoki; Yaginuma, Shinji; Onodera, Hiroshi; Yoshimura, Takuya

    2015-02-01

    When structures with complex shapes offer limited space, vibration tests that use an exciter or impact hammer for the excitation are difficult. Although measuring the auto frequency response function at an unexcited point may not be practical via a vibration test, it can be obtained by treating the inertia force acting on a dummy mass as an external force on the target structure while exciting a different point. We propose a method to estimate the auto frequency response functions at unexcited points by attaching a small mass (dummy mass) comparable to the accelerometer mass. The validity of the proposed method is demonstrated by comparing the auto frequency response functions estimated at unexcited points in a beam structure to those obtained from numerical simulations. We also consider random measurement errors by finite element analysis and vibration tests, but not bias errors. Additionally, the applicability of the proposed method is demonstrated by applying it to estimate the auto frequency response function of the lower arm of a car suspension.

  14. Real-time estimation of FLE statistics for 3-D tracking with point-based registration.

    PubMed

    Wiles, Andrew D; Peters, Terry M

    2009-09-01

    Target registration error (TRE) has become a widely accepted error metric in point-based registration since the error metric was introduced in the 1990s. It is particularly prominent in image-guided surgery (IGS) applications where point-based registration is used in both image registration and optical tracking. In point-based registration, the TRE is a function of the fiducial marker geometry, the location of the target and the fiducial localizer error (FLE). While the first two items are easily obtained, the FLE is usually estimated using an a priori technique and applied without any knowledge of real-time information. However, if the FLE can be estimated in real time, particularly as it pertains to optical tracking, then the TRE can be estimated more robustly. In this paper, a method is presented where the FLE statistics are estimated from the latest measurement of the fiducial registration error (FRE) statistics. The solution is obtained by solving a linear system of equations of the form Ax=b for each marker at each time frame, where x holds the six independent FLE covariance parameters and b the six independent estimated FRE covariance parameters. The A matrix is only a function of the tool geometry and hence the inverse of the matrix can be computed a priori and used at each instant at which the FLE estimation is required, hence minimizing the level of computation at each frame. When using a good estimate of the FRE statistics, Monte Carlo simulations demonstrate that the root mean square of the FLE can be computed within a range of 70-90 μm. Robust estimation of the TRE for an optically tracked tool, using a good estimate of the FLE, will provide two enhancements in IGS. First, better patient-to-image registration will be obtained by using the TRE of the optical tool as a weighting factor in the point-based registration used to map the patient to image space. Second, the directionality of the TRE can be relayed back to the surgeon, giving the surgeon the option

  15. Estimates of Emissions and Chemical Lifetimes of NOx from Point Sources using OMI Retrievals

    NASA Astrophysics Data System (ADS)

    de Foy, B.

    2014-12-01

    We use three different methods to estimate emissions of NOx from large point sources based on OMI retrievals. The results are evaluated against data from the Continuous Emission Monitoring System (CEMS). The methods tested are: 1. Simple box model, 2. Two-dimensional Gaussian fit and 3. Exponentially-Modified Gaussian Fit. The sensitivity of the results to the plume speed and wind direction was explored by considering different ways of estimating these from wind measurements. The accuracy of the emissions estimates compared with the CEMS data was found to be variable from site to site. Furthermore, lifetimes obtained from some of the methods were found to be very short and are thought to be more representative of plume transport than of chemical transformation. We explore the strengths and weaknesses of the methods and consider avenues for improved estimates.

  16. Improving the blind restoration of retinal images by means of point-spread-function estimation assessment

    NASA Astrophysics Data System (ADS)

    Marrugo, Andrés G.; Millán, María S.; Šorel, Michal; Kotera, Jan; Šroubek, Filip

    2015-01-01

    Retinal images often suffer from blurring which hinders disease diagnosis and progression assessment. The restoration of the images is carried out by means of blind deconvolution, but the success of the restoration depends on the correct estimation of the point-spread-function (PSF) that blurred the image. The restoration can be space-invariant or space-variant. Because a retinal image has regions without texture or sharp edges, the blind PSF estimation may fail. In this paper we propose a strategy for the correct assessment of PSF estimation in retinal images for restoration by means of space-invariant or space-variant blind deconvolution. Our method is based on a decomposition in Zernike coefficients of the estimated PSFs to identify valid PSFs. This significantly improves the quality of the image restoration, revealed by the increased visibility of small details like small blood vessels and by the lack of restoration artifacts.

  17. A removal model for estimating detection probabilities from point-count surveys

    USGS Publications Warehouse

    Farnsworth, G.L.; Pollock, K.H.; Nichols, J.D.; Simons, T.R.; Hines, J.E.; Sauer, J.R.

    2002-01-01

    Use of point-count surveys is a popular method for collecting data on abundance and distribution of birds. However, analyses of such data often ignore potential differences in detection probability. We adapted a removal model to directly estimate detection probability during point-count surveys. The model assumes that singing frequency is a major factor influencing probability of detection when birds are surveyed using point counts. This may be appropriate for surveys in which most detections are by sound. The model requires counts to be divided into several time intervals. Point counts are often conducted for 10 min, where the number of birds recorded is divided into those first observed in the first 3 min, the subsequent 2 min, and the last 5 min. We developed a maximum-likelihood estimator for the detectability of birds recorded during counts divided into those intervals. This technique can easily be adapted to point counts divided into intervals of any length. We applied this method to unlimited-radius counts conducted in Great Smoky Mountains National Park. We used model selection criteria to identify whether detection probabilities varied among species, throughout the morning, throughout the season, and among different observers. We found differences in detection probability among species. Species that sing frequently such as Winter Wren (Troglodytes troglodytes) and Acadian Flycatcher (Empidonax virescens) had high detection probabilities (~90%) and species that call infrequently such as Pileated Woodpecker (Dryocopus pileatus) had low detection probability (36%). We also found detection probabilities varied with the time of day for some species (e.g. thrushes) and between observers for other species. We used the same approach to estimate detection probability and density for a subset of the observations with limited-radius point counts.
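
    A single-rate simplification of the removal model is sketched below: with a constant per-minute detection probability p, the chance that a bird is first recorded in an interval starting at time T of length t is (1-p)^T (1-(1-p)^t), and p is recovered by conditional maximum likelihood from the interval counts (the 3/2/5-minute split follows the paper; the counts are invented). The paper's full model also lets detection vary by species, observer and time.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def removal_mle(counts, interval_lengths):
        """MLE of a constant per-minute detection rate from removal-style
        counts, where counts[i] = birds first detected in interval i."""
        t = np.asarray(interval_lengths, dtype=float)
        starts = np.concatenate([[0.0], np.cumsum(t)[:-1]])

        def neg_ll(p):
            q = 1.0 - p
            cell = q**starts * (1.0 - q**t)
            cell = cell / cell.sum()      # condition on detection in the count
            return -np.sum(counts * np.log(cell))

        fit = minimize_scalar(neg_ll, bounds=(1e-6, 1 - 1e-6), method="bounded")
        p = fit.x
        total_detectability = 1.0 - (1.0 - p)**t.sum()
        return p, total_detectability

    # 10-min count split into 3, 2 and 5 min intervals, as in the paper
    print(removal_mle(np.array([42, 11, 13]), [3.0, 2.0, 5.0]))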

  18. State of charge estimation for LiMn2O4 power battery based on strong tracking sigma point Kalman filter

    NASA Astrophysics Data System (ADS)

    Li, Di; Ouyang, Jian; Li, Huiqi; Wan, Jiafu

    2015-04-01

    The State of Charge (SOC) estimation is important since it plays a crucial role in the operation of the Electric Vehicle (EV) power battery. This paper builds an Equivalent Circuit Model (ECM) of the LiMn2O4 power battery; extensive characterization experiments were undertaken for model identification, on the basis of which battery SOC estimation was realized. The SOC estimation was based on the Strong Tracking Sigma Point Kalman Filter (STSPKF) algorithm. The comparison of experimental and simulated results indicates that the STSPKF algorithm performs well in estimating the battery SOC: it tracks the state variables in real time and adjusts the error covariance by taking the Strong Tracking Factor (STF) into account. The results also show that the STSPKF algorithm estimates the SOC more accurately than the Extended Kalman Filter (EKF) algorithm.
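
    To make the strong-tracking idea concrete, here is a heavily simplified scalar sketch, assuming the common formulation in which a fading factor computed from an exponentially weighted innovation covariance inflates the predicted error covariance whenever the innovations grow beyond what the model predicts. The real STSPKF propagates sigma points through a full battery ECM, which is not attempted here; all numbers are illustrative.

    ```python
    import numpy as np

    A, Hm, Q, R, rho = 1.0, 1.0, 1e-4, 0.05, 0.95   # toy model constants

    def strong_tracking_filter(zs):
        x, P, V = 0.0, 1.0, 0.0
        out = []
        for k, z in enumerate(zs):
            x_pred = A * x
            innov = z - Hm * x_pred
            # Exponentially weighted innovation covariance estimate.
            V = innov**2 if k == 0 else (rho * V + innov**2) / (1.0 + rho)
            # Fading factor: inflate the predicted covariance only when
            # V exceeds its model-predicted value (clipped at 1).
            lam = max(1.0, (V - R) / (Hm * (A * P * A + Q) * Hm))
            P_pred = lam * (A * P * A) + Q
            K = P_pred * Hm / (Hm * P_pred * Hm + R)
            x = x_pred + K * innov
            P = (1.0 - K * Hm) * P_pred
            out.append(x)
        return np.array(out)

    # A step change at k=50 that a fixed-gain filter would track sluggishly.
    truth = np.concatenate([np.full(50, 0.8), np.full(50, 0.5)])
    zs = truth + np.random.default_rng(1).normal(0.0, np.sqrt(R), truth.size)
    print(strong_tracking_filter(zs)[[49, 55, 99]].round(3))
    ```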

  19. A non-rigid point matching method with local topology preservation for accurate bladder dose summation in high dose rate cervical brachytherapy

    NASA Astrophysics Data System (ADS)

    Chen, Haibin; Zhong, Zichun; Liao, Yuliang; Pompoš, Arnold; Hrycushko, Brian; Albuquerque, Kevin; Zhen, Xin; Zhou, Linghong; Gu, Xuejun

    2016-02-01

    GEC-ESTRO guidelines for high dose rate cervical brachytherapy advocate the reporting of the D2cc (the minimum dose received by the maximally exposed 2cc volume) to organs at risk. Due to large interfractional organ motion, reporting of accurate cumulative D2cc over a multifractional course is a non-trivial task requiring deformable image registration and deformable dose summation. To efficiently and accurately describe the point-to-point correspondence of the bladder wall over all treatment fractions while preserving local topologies, we propose a novel graphics processing unit (GPU)-based non-rigid point matching algorithm. This is achieved by introducing local anatomic information into the iterative update of correspondence matrix computation in the ‘thin plate splines-robust point matching’ (TPS-RPM) scheme. The performance of the GPU-based TPS-RPM with local topology preservation algorithm (TPS-RPM-LTP) was evaluated using four numerically simulated synthetic bladders having known deformations, a custom-made porcine bladder phantom embedded with twenty-one fiducial markers, and 29 fractional computed tomography (CT) images from seven cervical cancer patients. Results show that TPS-RPM-LTP achieved excellent geometric accuracy, with landmark residual distance error (RDE) of 0.7 ± 0.3 mm for the numerical synthetic data with different scales of bladder deformation and structure complexity, and 3.7 ± 1.8 mm and 1.6 ± 0.8 mm for the porcine bladder phantom with large and small deformation, respectively. The RDE accuracy of the urethral orifice landmarks in patient bladders was 3.7 ± 2.1 mm. When compared to the original TPS-RPM, the TPS-RPM-LTP improved landmark matching by reducing landmark RDE by 50 ± 19%, 37 ± 11% and 28 ± 11% for the synthetic, porcine phantom and the patient bladders, respectively. This was achieved with a computational time of less than 15 s in all cases.

  20. A non-rigid point matching method with local topology preservation for accurate bladder dose summation in high dose rate cervical brachytherapy.

    PubMed

    Chen, Haibin; Zhong, Zichun; Liao, Yuliang; Pompoš, Arnold; Hrycushko, Brian; Albuquerque, Kevin; Zhen, Xin; Zhou, Linghong; Gu, Xuejun

    2016-02-01

    GEC-ESTRO guidelines for high dose rate cervical brachytherapy advocate the reporting of the D2cc (the minimum dose received by the maximally exposed 2cc volume) to organs at risk. Due to large interfractional organ motion, reporting of accurate cumulative D2cc over a multifractional course is a non-trivial task requiring deformable image registration and deformable dose summation. To efficiently and accurately describe the point-to-point correspondence of the bladder wall over all treatment fractions while preserving local topologies, we propose a novel graphics processing unit (GPU)-based non-rigid point matching algorithm. This is achieved by introducing local anatomic information into the iterative update of correspondence matrix computation in the 'thin plate splines-robust point matching' (TPS-RPM) scheme. The performance of the GPU-based TPS-RPM with local topology preservation algorithm (TPS-RPM-LTP) was evaluated using four numerically simulated synthetic bladders having known deformations, a custom-made porcine bladder phantom embedded with twenty-one fiducial markers, and 29 fractional computed tomography (CT) images from seven cervical cancer patients. Results show that TPS-RPM-LTP achieved excellent geometric accuracy, with landmark residual distance error (RDE) of 0.7 ± 0.3 mm for the numerical synthetic data with different scales of bladder deformation and structure complexity, and 3.7 ± 1.8 mm and 1.6 ± 0.8 mm for the porcine bladder phantom with large and small deformation, respectively. The RDE accuracy of the urethral orifice landmarks in patient bladders was 3.7 ± 2.1 mm. When compared to the original TPS-RPM, the TPS-RPM-LTP improved landmark matching by reducing landmark RDE by 50 ± 19%, 37 ± 11% and 28 ± 11% for the synthetic, porcine phantom and the patient bladders, respectively. This was achieved with a computational time of less than 15 s in all cases.

  1. Comparison of Single-Point and Continuous Sampling Methods for Estimating Residential Indoor Temperature and Humidity.

    PubMed

    Johnston, James D; Magnusson, Brianna M; Eggett, Dennis; Collingwood, Scott C; Bernhardt, Scott A

    2015-01-01

    Residential temperature and humidity are associated with multiple health effects. Studies commonly use single-point measures to estimate indoor temperature and humidity exposures, but there is little evidence to support this sampling strategy. This study evaluated the relationship between single-point and continuous monitoring of air temperature, apparent temperature, relative humidity, and absolute humidity over four exposure intervals (5-min, 30-min, 24-hr, and 12-day) in 9 northern Utah homes, from March to June 2012. Three homes were sampled twice, for a total of 12 observation periods. Continuous data-logged sampling was conducted in homes for 2-3 weeks, and simultaneous single-point measures (n = 114) were collected using handheld thermo-hygrometers. Time-centered single-point measures were moderately correlated with short-term (30-min) data logger mean air temperature (r = 0.76, β = 0.74), apparent temperature (r = 0.79, β = 0.79), relative humidity (r = 0.70, β = 0.63), and absolute humidity (r = 0.80, β = 0.80). Data logger 12-day means were also moderately correlated with single-point air temperature (r = 0.64, β = 0.43) and apparent temperature (r = 0.64, β = 0.44), but were weakly correlated with single-point relative humidity (r = 0.53, β = 0.35) and absolute humidity (r = 0.52, β = 0.39). Of the single-point RH measures, 59 (51.8%) deviated more than ±5%, 21 (18.4%) deviated more than ±10%, and 6 (5.3%) deviated more than ±15% from data logger 12-day means. Where continuous indoor monitoring is not feasible, single-point sampling strategies should include multiple measures collected at prescribed time points based on local conditions. PMID:26030088

  2. Shear wavelength estimation based on inverse filtering and multiple-point shear wave generation

    NASA Astrophysics Data System (ADS)

    Kitazaki, Tomoaki; Kondo, Kengo; Yamakawa, Makoto; Shiina, Tsuyoshi

    2016-07-01

    Elastography provides important diagnostic information because tissue elasticity is related to pathological conditions. For example, in a mammary gland, higher-grade malignancies yield harder tumors. Estimating shear wave speed using time-of-flight enables quantitative tissue elasticity imaging. However, time-of-flight measurement is based on an assumption about the propagation direction of the shear wave, which is strongly affected by reflection and refraction and thus might cause artifacts. An alternative elasticity estimation approach based on shear wavelength was proposed and applied in passive configurations. To determine tissue elasticity more quickly and more accurately, we propose a new method for shear wave elasticity imaging that combines the shear-wavelength approach and inverse filtering with multiple shear wave sources induced by acoustic radiation force (ARF). The feasibility of the proposed method was verified using an elasticity phantom with a hard inclusion.

  3. Position Estimation of Access Points in 802.11 Wireless Networks

    SciTech Connect

    Kent, C A; Dowla, F U; Atwal, P K; Lennon, W J

    2003-12-05

    We developed a technique to locate wireless network nodes using multiple time-of-flight range measurements in a position estimate. When used with communication methods that allow propagation through walls, such as Ultra-Wideband and 802.11, we can locate network nodes in buildings and in caves where GPS is unavailable. This paper details the implementation on an 802.11a network where we demonstrated the ability to locate a network access point to within 20 feet.
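
    The underlying computation is classic multilateration: given ranges (converted from round-trip times) to stations at known positions, a Gauss-Newton least-squares solve recovers the unknown node position. The sketch below uses a made-up 2-D geometry and noise level.

    ```python
    import numpy as np

    def locate(anchors, ranges, iters=20):
        # Gauss-Newton least squares on range residuals; the Jacobian of
        # the predicted range w.r.t. position is the unit line-of-sight.
        x = np.mean(anchors, axis=0)
        for _ in range(iters):
            d = np.linalg.norm(anchors - x, axis=1)
            J = (x - anchors) / d[:, None]
            dx, *_ = np.linalg.lstsq(J, ranges - d, rcond=None)
            x = x + dx
            if np.linalg.norm(dx) < 1e-6:
                break
        return x

    # Hypothetical layout: four stations (metres) and noisy ranges to a
    # node at (12, 7).
    rng = np.random.default_rng(1)
    anchors = np.array([[0.0, 0.0], [30.0, 0.0], [0.0, 25.0], [30.0, 25.0]])
    truth = np.array([12.0, 7.0])
    ranges = np.linalg.norm(anchors - truth, axis=1) + rng.normal(0, 0.5, 4)
    print(locate(anchors, ranges))   # lands within ~1 m of the true position
    ```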

  4. Capacity Estimation Model for Signalized Intersections under the Impact of Access Point.

    PubMed

    Zhao, Jing; Li, Peng; Zhou, Xizhao

    2016-01-01

    Highway Capacity Manual 2010 provides various factors to adjust the base saturation flow rate for the capacity analysis of signalized intersections. No factor, however, is provided for the potential change in signalized intersection capacity caused by an access point close to the signalized intersection. This paper presents a theoretical model to estimate lane group capacity at signalized intersections with consideration of the effects of access points. Two scenarios of access point location, upstream or downstream of the signalized intersection, and the impacts of six types of access traffic flow are taken into account. The proposed capacity model was validated based on VISSIM simulation. Results of extensive numerical analysis reveal the substantial impact of an access point on capacity, which has an inverse correlation with both the number of major street lanes and the distance between the intersection and the access point. Moreover, among the six types of access traffic flow, access traffic flow 1 (right-turning traffic from the major street), flow 4 (left-turning traffic from the access point), and flow 5 (left-turning traffic from the major street) cause a more significant effect on lane group capacity than the others. Some guidance on the mitigation of the negative effect is provided for practitioners. PMID:26726998

  5. Capacity Estimation Model for Signalized Intersections under the Impact of Access Point

    PubMed Central

    Zhao, Jing; Li, Peng; Zhou, Xizhao

    2016-01-01

    Highway Capacity Manual 2010 provides various factors to adjust the base saturation flow rate for the capacity analysis of signalized intersections. No factor, however, is provided for the potential change in signalized intersection capacity caused by an access point close to the signalized intersection. This paper presents a theoretical model to estimate lane group capacity at signalized intersections with consideration of the effects of access points. Two scenarios of access point location, upstream or downstream of the signalized intersection, and the impacts of six types of access traffic flow are taken into account. The proposed capacity model was validated based on VISSIM simulation. Results of extensive numerical analysis reveal the substantial impact of an access point on capacity, which has an inverse correlation with both the number of major street lanes and the distance between the intersection and the access point. Moreover, among the six types of access traffic flow, access traffic flow 1 (right-turning traffic from the major street), flow 4 (left-turning traffic from the access point), and flow 5 (left-turning traffic from the major street) cause a more significant effect on lane group capacity than the others. Some guidance on the mitigation of the negative effect is provided for practitioners. PMID:26726998

  6. A removal model for estimating detection probabilities from point-count surveys

    USGS Publications Warehouse

    Farnsworth, G.L.; Pollock, K.H.; Nichols, J.D.; Simons, T.R.; Hines, J.E.; Sauer, J.R.

    2000-01-01

    We adapted a removal model to estimate detection probability during point count surveys. The model assumes that one factor influencing detection during point counts is the singing frequency of birds. This may be true for surveys recording forest songbirds, when most detections are by sound. The model requires counts to be divided into several time intervals. We used time intervals of 2, 5, and 10 min to develop a maximum-likelihood estimator for the detectability of birds during such surveys. We applied this technique to data from bird surveys conducted in Great Smoky Mountains National Park. We used model selection criteria to identify whether detection probabilities varied among species, throughout the morning, throughout the season, and among different observers. The overall detection probability for all birds was 75%. We found differences in detection probability among species. Species that sing frequently, such as Winter Wren and Acadian Flycatcher, had high detection probabilities (about 90%), and species that call infrequently, such as Pileated Woodpecker, had low detection probability (36%). We also found that detection probabilities varied with the time of day for some species (e.g. thrushes) and between observers for other species. This method of estimating detectability during point count surveys offers a promising new approach to using count data to address questions of bird abundance, density, and population trends.

  7. Articulated Non-Rigid Point Set Registration for Human Pose Estimation from 3D Sensors

    PubMed Central

    Ge, Song; Fan, Guoliang

    2015-01-01

    We propose a generative framework for 3D human pose estimation that is able to operate on both individual point sets and sequential depth data. We formulate human pose estimation as a point set registration problem, where we propose three new approaches to address several major technical challenges in this research. First, we integrate two registration techniques that have a complementary nature to cope with non-rigid and articulated deformations of the human body under a variety of poses. This unique combination allows us to handle point sets of complex body motion and large pose variation without any initial conditions, as required by most existing approaches. Second, we introduce an efficient pose tracking strategy to deal with sequential depth data, where the major challenge is the incomplete data due to self-occlusions and view changes. We introduce a visible point extraction method to initialize a new template for the current frame from the previous frame, which effectively reduces the ambiguity and uncertainty during registration. Third, to support robust and stable pose tracking, we develop a segment volume validation technique to detect tracking failures and to re-initialize pose registration if needed. The experimental results on both benchmark 3D laser scan and depth datasets demonstrate the effectiveness of the proposed framework when compared with state-of-the-art algorithms. PMID:26131673

  8. Sample-to-sample variations and biases in estimates of two-point correlation functions

    NASA Astrophysics Data System (ADS)

    Itoh, Makoto; Suginohara, Tatsushi; Suto, Yasushi

    1992-10-01

    A quantitative estimate of the sample-to-sample variations of two-point correlation functions was obtained by extracting subsamples from N-body simulation data, and the extent to which these subsamples reproduce the correlation functions estimated from the entire data was examined. The method used was more direct than that used in the studies of Barrow et al. (1984) and Ling et al. (1986): rather than estimating the variations of the correlation functions from pseudosamples which were created from single data sets as has been done in the previous investigations, several independent ensembles with different geometries were created in the present study. Thus, the problem was assessed more quantitatively and precisely.
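
    For readers who want to reproduce this kind of subsample experiment, the sketch below applies the simple natural estimator xi = DD/RR - 1 to halves of a toy (unclustered) catalogue; a real study would use N-body subsamples and typically a Landy-Szalay estimator, but the mechanics of pair counting and subsample-to-subsample spread are the same.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def xi_natural(data, edges, n_random=5, rng=np.random.default_rng(2)):
        # Natural estimator xi = DD/RR - 1; the random catalogue fills
        # the data's bounding box and sets the unclustered baseline.
        rand = rng.uniform(data.min(0), data.max(0),
                           size=(n_random * len(data), 3))
        dd = np.diff(cKDTree(data).count_neighbors(cKDTree(data), edges))
        rr = np.diff(cKDTree(rand).count_neighbors(cKDTree(rand), edges))
        return (dd / rr) * (len(rand) / len(data))**2 - 1.0

    edges = np.logspace(0.0, 1.3, 8)                # radial bins
    pts = np.random.default_rng(3).uniform(0.0, 100.0, size=(4000, 3))
    halves = [pts[pts[:, 0] < 50.0], pts[pts[:, 0] >= 50.0]]
    for sub in halves:                              # spread across subsamples
        print(xi_natural(sub, edges).round(3))
    ```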

  9. Point and Fixed Plot Sampling Inventory Estimates at the Savannah River Site, South Carolina.

    SciTech Connect

    Parresol, Bernard R.

    2004-02-01

    This report provides calculation of systematic point sampling volume estimates for trees greater than or equal to 5 inches diameter breast height (dbh) and fixed radius plot volume estimates for trees < 5 inches dbh at the Savannah River Site (SRS), Aiken County, South Carolina. The inventory of 622 plots was started in March 1999 and completed in January 2002 (Figure 1). Estimates are given in cubic foot volume. The analyses are presented in a series of Tables and Figures. In addition, a preliminary analysis of fuel levels on the SRS is given, based on depth measurements of the duff and litter layers on the 622 inventory plots plus line transect samples of down coarse woody material. Potential standing live fuels are also included. The fuels analyses are presented in a series of tables.

  10. Benchmark atomization energy of ethane: importance of accurate zero-point vibrational energies and diagonal Born-Oppenheimer corrections for a 'simple' organic molecule.

    SciTech Connect

    Karton, A.; Martin, J. M. L.; Ruscic, B.

    2007-06-01

    A benchmark calculation of the atomization energy of the 'simple' organic molecule C2H6 (ethane) has been carried out by means of W4 theory. While the molecule is straightforward in terms of one-particle and n-particle basis set convergence, its large zero-point vibrational energy (and anharmonic correction thereto) and nontrivial diagonal Born-Oppenheimer correction (DBOC) represent interesting challenges. For the W4 set of molecules and C2H6, we show that DBOCs to the total atomization energy are systematically overestimated at the SCF level, and that the correlation correction converges very rapidly with the basis set. Thus, even at the CISD/cc-pVDZ level, useful correlation corrections to the DBOC are obtained. When applying such a correction, overall agreement with experiment was only marginally improved, but a more significant improvement is seen when hydrogen-containing systems are considered in isolation. We conclude that for closed-shell organic molecules, the greatest obstacles to highly accurate computational thermochemistry may not lie in the solution of the clamped-nuclei Schroedinger equation, but rather in the zero-point vibrational energy and the diagonal Born-Oppenheimer correction.

  11. Estimating Accurate Relative Spacecraft Angular Position from DSN VLBI Phases Using X-Band Telemetry or DOR Tones

    NASA Technical Reports Server (NTRS)

    Bagri, Durgadas S.; Majid, Walid

    2009-01-01

    At present, spacecraft angular position with the Deep Space Network (DSN) is determined using group delay estimates from very long baseline interferometer (VLBI) phase measurements employing differential one-way ranging (DOR) tones. As an alternative to this approach, we propose estimating the position of a spacecraft to half a fringe cycle accuracy using the time variations between measured and calculated phases, as the Earth rotates, on DSN VLBI baseline(s). Combining the fringe location of the target with the phase allows a highly accurate estimate of the spacecraft angular position. This can be achieved using telemetry signals with data rates of at least 4-8 MSamples/s, or DOR tones.

  12. Accurate experimental determination of the isotope effects on the triple point temperature of water. I. Dependence on the 2H abundance

    NASA Astrophysics Data System (ADS)

    Faghihi, V.; Peruzzi, A.; Aerts-Bijma, A. T.; Jansen, H. G.; Spriensma, J. J.; van Geel, J.; Meijer, H. A. J.

    2015-12-01

    Variation in the isotopic composition of water is one of the major contributors to uncertainty in the realization of the triple point of water (TPW). Although the dependence of the TPW on the isotopic composition of the water has been known for years, there is still a lack of a detailed and accurate experimental determination of the values for the correction constants. This paper is the first of two articles (Part I and Part II) that address quantification of isotope abundance effects on the triple point temperature of water. In this paper, we describe our experimental assessment of the 2H isotope effect. We manufactured five triple point cells with prepared water mixtures with a range of 2H isotopic abundances widely encompassing the natural abundance range, while the 18O and 17O isotopic abundances were kept approximately constant and the 18O-17O ratio was close to the Meijer-Li relationship for natural waters. The selected range of 2H isotopic abundances led to cells that realised TPW temperatures from approximately -140 μK to +2500 μK with respect to the TPW temperature as realized by VSMOW (Vienna Standard Mean Ocean Water). Our experiment led to determination of the value for the δ2H correction parameter of A2H = 673 μK / (‰ deviation of δ2H from VSMOW) with a combined uncertainty of 4 μK (k = 1, or 1σ).
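
    Applying this correction is simple arithmetic: T - T_VSMOW = A2H * δ2H with the parameter value quoted above. The end points of the illustrative δ2H values below roughly reproduce the -140 μK to +2500 μK span of the five cells.

    ```python
    # Worked example: temperature offset of a TPW cell from its deuterium
    # content, using the correction parameter quoted in the abstract.
    A_2H = 673.0                          # microK per permil (quoted above)
    for d2H in (-0.21, 0.0, 1.0, 3.71):   # permil vs VSMOW, illustrative
        dT = A_2H * d2H                   # T - T_VSMOW in microK
        print(f"d2H = {d2H:+5.2f} permil -> T - T_VSMOW = {dT:+7.0f} microK")
    ```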

  13. THE IMPACT OF POINT-SOURCE SUBTRACTION RESIDUALS ON 21 cm EPOCH OF REIONIZATION ESTIMATION

    SciTech Connect

    Trott, Cathryn M.; Wayth, Randall B.; Tingay, Steven J.

    2012-09-20

    Precise subtraction of foreground sources is crucial for detecting and estimating 21 cm H I signals from the Epoch of Reionization (EoR). We quantify how imperfect point-source subtraction due to limitations of the measurement data set yields structured residual signal in the data set. We use the Cramer-Rao lower bound, as a metric for quantifying the precision with which a parameter may be measured, to estimate the residual signal in a visibility data set due to imperfect point-source subtraction. We then propagate these residuals into two metrics of interest for 21 cm EoR experiments, the angular power spectrum and the two-dimensional power spectrum, using a combination of full analytic covariant derivation, analytic variant derivation, and covariant Monte Carlo simulations. This methodology differs from previous work in two ways: (1) it uses information theory to set the point-source position error, rather than assuming a global rms error, and (2) it describes a method for propagating the errors analytically, thereby obtaining the full correlation structure of the power spectra. The methods are applied to two upcoming low-frequency instruments that are proposing to perform statistical EoR experiments: the Murchison Widefield Array and the Precision Array for Probing the Epoch of Reionization. In addition to the actual antenna configurations, we apply the methods to minimally redundant and maximally redundant configurations. We find that for peeling sources above 1 Jy, the amplitude of the residual signal, and its variance, will be smaller than the contribution from thermal noise for the observing parameters proposed for upcoming EoR experiments, and that optimal subtraction of bright point sources will not be a limiting factor for EoR parameter estimation. We then use the formalism to provide an ab initio analytic derivation motivating the 'wedge' feature in the two-dimensional power spectrum, complementing previous discussion in the literature.

  14. Estimation of boiling points using density functional theory with polarized continuum model solvent corrections.

    PubMed

    Chan, Poh Yin; Tong, Chi Ming; Durrant, Marcus C

    2011-09-01

    An empirical method for estimation of the boiling points of organic molecules based on density functional theory (DFT) calculations with polarized continuum model (PCM) solvent corrections has been developed. The boiling points are calculated as the sum of three contributions. The first term is calculated directly from the structural formula of the molecule, and is related to its effective surface area. The second is a measure of the electronic interactions between molecules, based on the DFT-PCM solvation energy, and the third is employed only for planar aromatic molecules. The method is applicable to a very diverse range of organic molecules, with normal boiling points in the range of -50 to 500 °C, and includes ten different elements (C, H, Br, Cl, F, N, O, P, S and Si). Plots of observed versus calculated boiling points gave R²=0.980 for a training set of 317 molecules, and R²=0.979 for a test set of 74 molecules. The role of intramolecular hydrogen bonding in lowering the boiling points of certain molecules is quantitatively discussed. PMID:21798775
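
    The three-contribution structure amounts to a linear model that can be fitted by least squares. The sketch below only illustrates that shape; the descriptor values, boiling points and resulting coefficients are all made up, whereas the study derives the first term from the structural formula and the second from DFT-PCM solvation energies.

    ```python
    import numpy as np

    # Hypothetical descriptors: [effective surface area, DFT-PCM
    # solvation term, planar-aromatic flag]; all values are made up.
    X = np.array([
        [ 90.0, 4.1, 0.0],
        [110.0, 6.5, 0.0],
        [ 95.0, 8.9, 0.0],
        [105.0, 7.8, 1.0],
        [140.0, 9.4, 1.0],
    ])
    bp_obs = np.array([36.0, 98.0, 118.0, 111.0, 218.0])   # deg C, made up

    # Least-squares fit of the three contributions.
    coef, *_ = np.linalg.lstsq(X, bp_obs, rcond=None)
    print("per-descriptor contributions:", coef.round(2))
    print("fitted boiling points:", (X @ coef).round(1))
    ```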

  15. Star Tracker Based ATP System Conceptual Design and Pointing Accuracy Estimation

    NASA Technical Reports Server (NTRS)

    Ortiz, Gerardo G.; Lee, Shinhak

    2006-01-01

    A star tracker based beaconless (a.k.a. non-cooperative beacon) acquisition, tracking and pointing concept for precisely pointing an optical communication beam is presented as an innovative approach to extend the range of high bandwidth (> 100 Mbps) deep space optical communication links throughout the solar system and to remove the need for a ground based high power laser as a beacon source. The basic approach for executing the ATP functions involves the use of stars as the reference sources from which the attitude knowledge is obtained and combined with high bandwidth gyroscopes for propagating the pointing knowledge to the beam pointing mechanism. Details of the conceptual design are presented including selection of an orthogonal telescope configuration and the introduction of an optical metering scheme to reduce misalignment error. Also, estimates are presented that demonstrate that aiming of the communications beam to the Earth based receive terminal can be achieved with a total system pointing accuracy of better than 850 nanoradians (3 sigma) from anywhere in the solar system.

  16. Vein visualization using a smart phone with multispectral Wiener estimation for point-of-care applications.

    PubMed

    Song, Jae Hee; Kim, Choye; Yoo, Yangmo

    2015-03-01

    Effective vein visualization is clinically important for various point-of-care applications, such as needle insertion. It can be achieved by utilizing ultrasound imaging or by applying infrared laser excitation and monitoring its absorption. However, while these approaches can be used for vein visualization, they are not suitable for point-of-care applications because of their cost, time, and accessibility. In this paper, a new vein visualization method based on multispectral Wiener estimation is proposed and its real-time implementation on a smart phone is presented. In the proposed method, a conventional RGB camera on a commercial smart phone (i.e., Galaxy Note 2, Samsung Electronics Inc., Suwon, Korea) is used to acquire reflectance information from veins. Wiener estimation is then applied to extract the multispectral information from the veins. To evaluate the performance of the proposed method, an experiment was conducted using a color calibration chart (ColorChecker Classic, X-rite, Grand Rapids, MI, USA) and an average root-mean-square error of 12.0% was obtained. In addition, an in vivo subcutaneous vein imaging experiment was performed to explore the clinical performance of the smart phone-based Wiener estimation. From the in vivo experiment, the veins at various sites were successfully localized using the reconstructed multispectral images and these results were confirmed by ultrasound B-mode and color Doppler images. These results indicate that the presented multispectral Wiener estimation method can be used for visualizing veins using a commercial smart phone for point-of-care applications (e.g., vein puncture guidance). PMID:24691170
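
    A minimal sketch of training-based Wiener estimation, assuming a set of training patches with known reflectance spectra (such as the color chart mentioned above): the estimation matrix combines the spectra-to-response cross-correlation with the inverse auto-correlation of the camera responses. All matrices below are random stand-ins for real calibration data.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_bands, n_train = 31, 24

    S = rng.uniform(0.0, 1.0, (n_bands, n_train))     # training reflectances
    A = rng.uniform(0.0, 1.0, (3, n_bands))           # camera sensitivities
    C = A @ S + rng.normal(0.0, 0.01, (3, n_train))   # observed RGB responses

    # Wiener matrix from cross- and auto-correlation of the training pairs.
    W = (S @ C.T) @ np.linalg.inv(C @ C.T)

    c_new = C[:, 0]              # an RGB measurement for one pixel
    s_hat = W @ c_new            # estimated multispectral reflectance
    print(s_hat.shape, float(np.abs(s_hat - S[:, 0]).mean()))
    ```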

  17. Iterative image reconstruction for positron emission tomography based on a detector response function estimated from point source measurements

    NASA Astrophysics Data System (ADS)

    Tohme, Michel S.; Qi, Jinyi

    2009-06-01

    The accuracy of the system model in an iterative reconstruction algorithm greatly affects the quality of reconstructed positron emission tomography (PET) images. For efficient computation in reconstruction, the system model in PET can be factored into a product of a geometric projection matrix and sinogram blurring matrix, where the former is often computed based on analytical calculation, and the latter is estimated using Monte Carlo simulations. Direct measurement of a sinogram blurring matrix is difficult in practice because of the requirement of a collimated source. In this work, we propose a method to estimate the 2D blurring kernels from uncollimated point source measurements. Since the resulting sinogram blurring matrix stems from actual measurements, it can take into account the physical effects in the photon detection process that are difficult or impossible to model in a Monte Carlo (MC) simulation, and hence provide a more accurate system model. Another advantage of the proposed method over MC simulation is that it can easily be applied to data that have undergone a transformation to reduce the data size (e.g., Fourier rebinning). Point source measurements were acquired with high count statistics in a relatively fine grid inside the microPET II scanner using a high-precision 2D motion stage. A monotonically convergent iterative algorithm has been derived to estimate the detector blurring matrix from the point source measurements. The algorithm takes advantage of the rotational symmetry of the PET scanner and explicitly models the detector block structure. The resulting sinogram blurring matrix is incorporated into a maximum a posteriori (MAP) image reconstruction algorithm. The proposed method has been validated using a 3 × 3 line phantom, an ultra-micro resolution phantom and a 22Na point source superimposed on a warm background. The results of the proposed method show improvements in both resolution and contrast ratio when compared with the MAP

  18. Iterative Image Reconstruction for Positron Emission Tomography Based on Detector Response Function Estimated from Point Source Measurements

    PubMed Central

    Tohme, Michel S.; Qi, Jinyi

    2009-01-01

    The accuracy of the system model in an iterative reconstruction algorithm greatly affects the quality of reconstructed positron emission tomography (PET) images. For efficient computation in reconstruction, the system model in PET can be factored into a product of a geometric projection matrix and sinogram blurring matrix, where the former is often computed based on analytical calculation, and the latter is estimated using Monte Carlo simulations. Direct measurement of sinogram blurring matrix is difficult in practice because of the requirement of a collimated source. In this work, we propose a method to estimate the 2D blurring kernels from uncollimated point source measurements. Since the resulting sinogram blurring matrix stems from actual measurements, it can take into account the physical effects in the photon detection process that are difficult or impossible to model in a Monte Carlo (MC) simulation, and hence provide a more accurate system model. Another advantage of the proposed method over MC simulation is that it can be easily applied to data that have undergone a transformation to reduce the data size (e.g., Fourier rebinning). Point source measurements were acquired with high count statistics in a relatively fine grid inside the microPET II scanner using a high-precision 2-D motion stage. A monotonically convergent iterative algorithm has been derived to estimate the detector blurring matrix from the point source measurements. The algorithm takes advantage of the rotational symmetry of the PET scanner and explicitly models the detector block structure. The resulting sinogram blurring matrix is incorporated into a maximum a posteriori (MAP) image reconstruction algorithm. The proposed method has been validated using a 3-by-3 line phantom, an ultra-micro resolution phantom, and a 22Na point source superimposed on a warm background. The results of the proposed method show improvements in both resolution and contrast ratio when compared with the MAP

  19. Quaternion-Based Unscented Kalman Filter for Accurate Indoor Heading Estimation Using Wearable Multi-Sensor System

    PubMed Central

    Yuan, Xuebing; Yu, Shuai; Zhang, Shengzhi; Wang, Guoping; Liu, Sheng

    2015-01-01

    Inertial navigation based on micro-electromechanical system (MEMS) inertial measurement units (IMUs) has attracted numerous researchers due to its high reliability and independence. Heading estimation, as one of the most important parts of inertial navigation, has been a research focus in this field. Heading estimation using magnetometers is perturbed by magnetic disturbances, such as indoor concrete structures and electronic equipment. The MEMS gyroscope is also used for heading estimation; however, gyroscope accuracy degrades over time. In this paper, a wearable multi-sensor system has been designed to obtain high-accuracy indoor heading estimation, based on a quaternion-based unscented Kalman filter (UKF) algorithm. The proposed multi-sensor system, comprising one three-axis accelerometer, three single-axis gyroscopes, one three-axis magnetometer and one microprocessor, minimizes size and cost. The wearable multi-sensor system was fixed on the waist of a pedestrian and on a quadrotor unmanned aerial vehicle (UAV) for heading estimation experiments in our college building. The results show that the mean heading estimation errors are less than 10° and 5° for the multi-sensor system fixed on the pedestrian's waist and on the quadrotor UAV, respectively, compared to the reference path. PMID:25961384

  20. Quaternion-based unscented Kalman filter for accurate indoor heading estimation using wearable multi-sensor system.

    PubMed

    Yuan, Xuebing; Yu, Shuai; Zhang, Shengzhi; Wang, Guoping; Liu, Sheng

    2015-01-01

    Inertial navigation based on micro-electromechanical system (MEMS) inertial measurement units (IMUs) has attracted numerous researchers due to its high reliability and independence. Heading estimation, as one of the most important parts of inertial navigation, has been a research focus in this field. Heading estimation using magnetometers is perturbed by magnetic disturbances, such as indoor concrete structures and electronic equipment. The MEMS gyroscope is also used for heading estimation; however, gyroscope accuracy degrades over time. In this paper, a wearable multi-sensor system has been designed to obtain high-accuracy indoor heading estimation, based on a quaternion-based unscented Kalman filter (UKF) algorithm. The proposed multi-sensor system, comprising one three-axis accelerometer, three single-axis gyroscopes, one three-axis magnetometer and one microprocessor, minimizes size and cost. The wearable multi-sensor system was fixed on the waist of a pedestrian and on a quadrotor unmanned aerial vehicle (UAV) for heading estimation experiments in our college building. The results show that the mean heading estimation errors are less than 10° and 5° for the multi-sensor system fixed on the pedestrian's waist and on the quadrotor UAV, respectively, compared to the reference path. PMID:25961384

  1. The Point Count Transect Method for Estimates of Biodiversity on Coral Reefs: Improving the Sampling of Rare Species.

    PubMed

    Roberts, T Edward; Bridge, Thomas C; Caley, M Julian; Baird, Andrew H

    2016-01-01

    Understanding patterns in species richness and diversity over environmental gradients (such as altitude and depth) is an enduring component of ecology. As most biological communities feature few common and many rare species, quantifying the presence and abundance of rare species is a crucial requirement for analysis of these patterns. Coral reefs present specific challenges for data collection, with limitations on time and site accessibility making efficiency crucial. Many commonly used methods, such as line intercept transects (LIT), are poorly suited to questions requiring the detection of rare events or species. Here, an alternative method for surveying reef-building corals is presented: the point count transect (PCT). The PCT consists of a count of coral colonies at a series of sample stations located at regular intervals along a transect. In contrast, the LIT records the proportion of each species occurring under a transect tape of a given length. The same site was surveyed using PCT and LIT to compare species richness estimates between the methods. The total number of species increased faster per individual sampled and per unit of time invested using PCT. Furthermore, 41 of the 44 additional species recorded by the PCT occurred ≤ 3 times, demonstrating the increased capacity of PCT to detect rare species. PCT provides a more accurate estimate of local-scale species richness than the LIT, and is an efficient alternative method for surveying reef corals to address questions associated with alpha-diversity, and rare or incidental events. PMID:27011368

  2. The Point Count Transect Method for Estimates of Biodiversity on Coral Reefs: Improving the Sampling of Rare Species

    PubMed Central

    Roberts, T. Edward; Bridge, Thomas C.; Caley, M. Julian; Baird, Andrew H.

    2016-01-01

    Understanding patterns in species richness and diversity over environmental gradients (such as altitude and depth) is an enduring component of ecology. As most biological communities feature few common and many rare species, quantifying the presence and abundance of rare species is a crucial requirement for analysis of these patterns. Coral reefs present specific challenges for data collection, with limitations on time and site accessibility making efficiency crucial. Many commonly used methods, such as line intercept transects (LIT), are poorly suited to questions requiring the detection of rare events or species. Here, an alternative method for surveying reef-building corals is presented: the point count transect (PCT). The PCT consists of a count of coral colonies at a series of sample stations located at regular intervals along a transect. In contrast, the LIT records the proportion of each species occurring under a transect tape of a given length. The same site was surveyed using PCT and LIT to compare species richness estimates between the methods. The total number of species increased faster per individual sampled and per unit of time invested using PCT. Furthermore, 41 of the 44 additional species recorded by the PCT occurred ≤ 3 times, demonstrating the increased capacity of PCT to detect rare species. PCT provides a more accurate estimate of local-scale species richness than the LIT, and is an efficient alternative method for surveying reef corals to address questions associated with alpha-diversity, and rare or incidental events. PMID:27011368

  3. Estimating CO2 emissions from point sources: a case study of an isolated power station

    NASA Astrophysics Data System (ADS)

    Utembe, S. R.; Jones, N.; Rayner, P. J.; Genkova, I.; Griffith, D. W. T.; O'Brien, D. M.; Lunney, C.; Clark, A. J.

    2014-12-01

    A methodology to estimate CO2 emissions from an isolated power plant is presented and illustrated for the Northern Power Station at Port Augusta, South Australia. The method involves measurement of in-situ and column-averaged CO2 at a site near the power plant, forward modelling (using WRF-Chem) of the observed signals, and inverse modelling to obtain an estimate of the fluxes from the power plant. By subtracting the simulated background CO2 (obtained from Monitoring Atmospheric Composition and Climate CO2 fields) from the observed and simulated signals, we are able to isolate the fluxes from the power plant that are mainly responsible for the variations in the CO2 concentrations. Although the enhancements of the surface concentration of CO2 are a factor of 10 larger than the enhancements in the column-averaged concentration, the forward transport model has difficulty predicting the in-situ data, which are complicated by sea breeze effects and influence from other local sources. Better simulation is obtained for the column-averaged data, leading to better estimates of fluxes. The ratio of our estimated emissions to the reported values is 1.06 ± 0.54. Modelling local biospheric fluxes makes little difference either to the estimated emissions or to the quality of the fit to the data. Variations in the large-scale concentration field have a larger impact, highlighting the importance of good boundary conditions even in the relatively homogeneous Southern Hemisphere. The estimates are insensitive to details of the calculation such as stack height or modelling of plume injection. We conclude that column-integrated measurements offer a reasonable trade-off between sensitivity and model capability for estimating point sources.

  4. Two-point correlation functions to characterize microgeometry and estimate permeabilities of synthetic and natural sandstones

    SciTech Connect

    Blair, S.C.; Berge, P.A.; Berryman, J.G.

    1993-08-01

    We have developed an image-processing method for characterizing the microstructure of rock and other porous materials, and for providing a quantitative means for understanding the dependence of physical properties on the pore structure. This method is based upon the statistical properties of the microgeometry as observed in scanning electron micrograph (SEM) images of cross sections of porous materials. The method utilizes a simple statistical function, called the spatial correlation function, which can be used to predict bounds on permeability and other physical properties. We obtain estimates of the porosity and specific surface area of the material from the two-point correlation function. The specific surface area can be related to the permeability of porous materials using a Kozeny-Carman relation, and we show that the specific surface area measured on images of sandstones is consistent with the specific surface area used in a simple flow model for computation of permeability. In this paper, we discuss the two-point spatial correlation function and its use in characterizing microstructure features such as pore and grain sizes. We present estimates of permeabilities found using SEM images of several different synthetic and natural sandstones. Comparison of the estimates to laboratory measurements shows good agreement. Finally, we briefly discuss extension of this technique to two-phase flow.
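
    The sketch below computes the two-point correlation S2(r) of a binary pore image along one axis and turns it into a rough permeability number, assuming the textbook relations S2(0) = porosity and s = -4 S2'(0) for the specific surface of an isotropic medium, together with a simple Kozeny-Carman form k = phi^3 / (2 s^2). The image, pixel size and Kozeny constant are illustrative, not values taken from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    img = (rng.random((512, 512)) < 0.25).astype(float)   # toy pore map

    def S2(img, max_lag):
        # Probability that two points separated by r pixels (along x)
        # both fall in the pore phase.
        return np.array([(img[:, :img.shape[1] - r] * img[:, r:]).mean()
                         for r in range(max_lag)])

    pixel = 1e-6                                 # assumed pixel size, metres
    s2 = S2(img, 40)
    phi = s2[0]                                  # porosity = S2(0)
    slope = (s2[1] - s2[0]) / pixel              # one-sided derivative at r=0
    s_v = -4.0 * slope                           # specific surface per volume
    k = phi**3 / (2.0 * s_v**2)                  # Kozeny-Carman estimate
    print(f"phi = {phi:.3f}, s = {s_v:.3g} 1/m, k = {k:.3g} m^2")
    ```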

  5. Analysis of open-loop conical scan pointing error and variance estimators

    NASA Technical Reports Server (NTRS)

    Alvarez, L. S.

    1993-01-01

    General pointing error and variance estimators for an open-loop conical scan (conscan) system are derived and analyzed. The conscan algorithm is modeled as a weighted least-squares estimator whose inputs are samples of receiver carrier power and its associated measurement uncertainty. When the assumptions of constant measurement noise and zero pointing error estimation are applied, the variance equation is then strictly a function of the carrier power to uncertainty ratio and the operator selectable radius and period input to the algorithm. The performance equation is applied to a 34-m mirror-based beam-waveguide conscan system interfaced with the Block V Receiver Subsystem tracking a Ka-band (32-GHz) downlink. It is shown that for a carrier-to-noise power ratio greater than or equal to 30 dB-Hz, the conscan period for Ka-band operation may be chosen well below the current DSN minimum of 32 sec. The analysis presented forms the basis of future conscan work in both research and development as well as for the upcoming DSN antenna controller upgrade for the new DSS-24 34-m beam-waveguide antenna.

  6. Estimating forest structure at five tropical forested sites using lidar point cloud data

    NASA Astrophysics Data System (ADS)

    Palace, M. W.; Sullivan, F.; Treuhaft, R. N.; Keller, M. M.

    2014-12-01

    Tropical forests are fundamental components in the global carbon cycle and are threatened by deforestation and climate change. Because of their importance in carbon dynamics, understanding the structural architecture of these forests is vital. Airborne lidar data provides a unique opportunity to examine not only the height of these forests, which is often used to estimate biomass, but also the crown geometry and vertical profile of the canopy. These structural attributes inform temporal and spatial aspects of carbon dynamics, providing insight into past disturbances and the growth of forests. We examined airborne lidar point cloud data from five sites in the Brazilian Amazon collected during the years 2012 to 2014. We generated digital elevation maps, canopy height models (CHM), and vertical vegetation profiles (VVP) in our analysis. We analyzed the CHM using crown delineation with an iterative maximum finding routine to find the tops of canopies, local maxima to determine edges of crowns, and two parameters that control termination of crown edges. We also ran textural analysis methods on the CHM and VVP. Using multiple linear regression models and boosted regression trees, we estimated forest structural parameters including biomass, stem density, basal area, width and depth of crowns, and stem size distribution. Structural attributes estimated from lidar point cloud data can improve our understanding of the carbon dynamics of tropical forests at the landscape and regional levels.
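
    In the same spirit as the crown delineation described above, a minimal local-maximum treetop detector takes only a few lines; the window size and height threshold stand in for the tunable parameters that control crown detection, and the CHM here is synthetic.

    ```python
    import numpy as np
    from scipy import ndimage

    # Synthetic canopy height model (CHM); a real one would be gridded
    # from the lidar point cloud.
    rng = np.random.default_rng(2)
    chm = ndimage.gaussian_filter(rng.random((200, 200)), sigma=6) * 40.0

    win, h_min = 7, 15.0        # search window (pixels), minimum height (m)
    # A pixel is a candidate treetop if it is the maximum of its window
    # and exceeds the height threshold.
    local_max = (chm == ndimage.maximum_filter(chm, size=win)) & (chm > h_min)
    tops = np.argwhere(local_max)
    print(f"{len(tops)} candidate crowns, heights up to {chm.max():.1f} m")
    ```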

  7. Minimum Number of Observation Points for LEO Satellite Orbit Estimation by OWL Network

    NASA Astrophysics Data System (ADS)

    Park, Maru; Jo, Jung Hyun; Cho, Sungki; Choi, Jin; Kim, Chun-Hwey; Park, Jang-Hyun; Yim, Hong-Suh; Choi, Young-Jun; Moon, Hong-Kyu; Bae, Young-Ho; Park, Sun-Youp; Kim, Ji-Hye; Roh, Dong-Goo; Jang, Hyun-Jung; Park, Young-Sik; Jeong, Min-Ji

    2015-12-01

    By using the Optical Wide-field Patrol (OWL) network developed by the Korea Astronomy and Space Science Institute (KASI), we generated right ascension and declination angle data from optical observations of Low Earth Orbit (LEO) satellites. We performed an analysis to verify the optimum number of observation points needed per arc for successful orbit estimation. The currently functioning OWL observatories are located in Daejeon (South Korea), Songino (Mongolia), and Oukaïmeden (Morocco). The Daejeon Observatory functions as a test bed. In this study, the observed targets were Gravity Probe B, COSMOS 1455, COSMOS 1726, COSMOS 2428, SEASAT 1, ATV-5, and CryoSat-2 (all in LEO). These satellites were observed from the test bed and the Songino Observatory of the OWL network during 21 nights in 2014 and 2015. After we estimated the orbit from systematically selected sets of observation points (20, 50, 100, and 150) for each pass, we compared the differences between the orbit estimates for each case and the Two Line Element set (TLE) from the Joint Space Operation Center (JSpOC). Then, we determined the average of the differences and selected the optimal number of observation points by comparing the average values.

  8. GGOS and the EOP - the key role of SLR for a stable estimation of highly accurate Earth orientation parameters

    NASA Astrophysics Data System (ADS)

    Bloßfeld, Mathis; Panzetta, Francesca; Müller, Horst; Gerstl, Michael

    2016-04-01

    The GGOS vision is to integrate geometric and gravimetric observation techniques to estimate consistent geodetic-geophysical parameters. In order to reach this goal, the common estimation of station coordinates, Stokes coefficients and Earth Orientation Parameters (EOP) is necessary. Satellite Laser Ranging (SLR) provides the ability to study correlations between the different parameter groups, since the observed satellite orbit dynamics are sensitive to the above-mentioned geodetic parameters. To decrease the correlations, SLR observations to multiple satellites have to be combined. In this paper, we compare the estimated EOP of (i) single-satellite SLR solutions and (ii) multi-satellite SLR solutions. Therefore, we jointly estimate station coordinates, EOP, Stokes coefficients and orbit parameters using different satellite constellations. A special focus of this investigation is the de-correlation of the different geodetic parameter groups achieved by combining SLR observations. Besides SLR observations to spherical satellites (commonly used), we discuss the impact of SLR observations to non-spherical satellites such as, e.g., the JASON-2 satellite. The goal of this study is to discuss the existing parameter interactions and to present a strategy for obtaining reliable estimates of station coordinates, EOP, orbit parameters and Stokes coefficients in one common adjustment. Thereby, the benefits of a multi-satellite SLR solution are evaluated.

  9. Assignment of Calibration Information to Deeper Phylogenetic Nodes is More Effective in Obtaining Precise and Accurate Divergence Time Estimates.

    PubMed

    Mello, Beatriz; Schrago, Carlos G

    2014-01-01

    Divergence time estimation has become an essential tool for understanding macroevolutionary events. Molecular dating aims to obtain reliable inferences, which, within a statistical framework, means jointly increasing the accuracy and precision of estimates. Bayesian dating methods exhibit the property of a linear relationship between uncertainty and estimated divergence dates. This relationship occurs even if the number of sites approaches infinity and places a limit on the maximum precision of node ages. However, how the placement of calibration information may affect the precision of divergence time estimates remains an open question. In this study, relying on simulated and empirical data, we investigated how the location of calibration within a phylogeny affects the accuracy and precision of time estimates. We found that calibration priors set at median and deep phylogenetic nodes were associated with higher precision values compared to analyses involving calibration at the shallowest node. The results were independent of the tree symmetry. An empirical mammalian dataset produced results that were consistent with those generated by the simulated sequences. Assigning time information to the deeper nodes of a tree is crucial to guarantee the accuracy and precision of divergence times. This finding highlights the importance of the appropriate choice of outgroups in molecular dating. PMID:24855333

  10. How accurately can students estimate their performance on an exam and how does this relate to their actual performance on the exam?

    NASA Astrophysics Data System (ADS)

    Rebello, N. Sanjay

    2012-02-01

    Research has shown that students' beliefs regarding their own abilities in math and science can influence their performance in these disciplines. I investigated the relationship between students' estimated performance and actual performance on five exams in a second-semester calculus-based physics class. Students were given about 72 hours after the completion of each of the five exams to estimate their individual score and the class mean score on each exam. Students were given extra credit worth 1% of the exam points for estimating their score within 2% of the actual score, and another 1% extra credit for estimating the class mean score within 2% of the correct value. I compared students' individual and mean score estimates with the actual scores to investigate the relationship between estimation accuracy and exam performance, as well as trends over the semester.

  11. Effect of distance-related heterogeneity on population size estimates from point counts

    USGS Publications Warehouse

    Efford, M.G.; Dawson, D.K.

    2009-01-01

    Point counts are used widely to index bird populations. Variation in the proportion of birds counted is a known source of error, and for robust inference it has been advocated that counts be converted to estimates of absolute population size. We used simulation to assess nine methods for the conduct and analysis of point counts when the data included distance-related heterogeneity of individual detection probability. Distance from the observer is a ubiquitous source of heterogeneity, because nearby birds are more easily detected than distant ones. Several recent methods (dependent double-observer, time of first detection, time of detection, independent multiple-observer, and repeated counts) do not account for distance-related heterogeneity, at least in their simpler forms. We assessed bias in estimates of population size by simulating counts with fixed radius w over four time intervals (occasions). Detection probability per occasion was modeled as a half-normal function of distance with scale parameter sigma and intercept g(0) = 1.0. Bias varied with sigma/w; values of sigma inferred from published studies often implied bias above 50% for a 100-m fixed-radius count. More critically, the bias of adjusted counts sometimes varied more than that of unadjusted counts, and inference from adjusted counts would be less robust. The problem was not solved by using mixture models or including distance as a covariate. Conventional distance sampling performed well in simulations, but its assumptions are difficult to meet in the field. We conclude that no existing method allows effective estimation of population size from point counts.
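
    The core of such a simulation is easy to reproduce: place birds uniformly in a disc of radius w, detect each with half-normal probability g(r) = exp(-r^2/(2 sigma^2)), and compare the raw count with the true abundance. The sigma values below are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    w, N = 100.0, 10000                  # count radius (m), true abundance

    r = w * np.sqrt(rng.random(N))       # uniform positions over the disc
    for sigma in (25.0, 50.0, 100.0):
        g = np.exp(-r**2 / (2.0 * sigma**2))          # half-normal, g(0)=1
        counted = (rng.random(N) < g).sum()
        # Expected counted fraction: (2 sigma^2/w^2)(1 - exp(-w^2/2sigma^2)).
        p = 2.0 * sigma**2 / w**2 * (1.0 - np.exp(-w**2 / (2.0 * sigma**2)))
        print(f"sigma={sigma:5.0f}: counted {counted} of {N} "
              f"(expected fraction {p:.2f})")
    ```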

  12. How Accurate Are German Work-Time Data? A Comparison of Time-Diary Reports and Stylized Estimates

    ERIC Educational Resources Information Center

    Otterbach, Steffen; Sousa-Poza, Alfonso

    2010-01-01

    This study compares work time data collected by the German Time Use Survey (GTUS) using the diary method with stylized work time estimates from the GTUS, the German Socio-Economic Panel, and the German Microcensus. Although on average the differences between the time-diary data and the interview data are not large, our results show that significant…

  13. Unbalanced and Minimal Point Equivalent Estimation Second-Order Split-Plot Designs

    NASA Technical Reports Server (NTRS)

    Parker, Peter A.; Kowalski, Scott M.; Vining, G. Geoffrey

    2007-01-01

    Restricting the randomization of hard-to-change factors in industrial experiments is often performed by employing a split-plot design structure. From an economic perspective, these designs minimize the experimental cost by reducing the number of resets of the hard-to-change factors. In this paper, unbalanced designs are considered for cases where the subplots are relatively expensive and the experimental apparatus accommodates an unequal number of runs per whole-plot. We provide construction methods for unbalanced second-order split-plot designs that possess the equivalence estimation optimality property, providing best linear unbiased estimates of the parameters independent of the variance components. Unbalanced versions of the central composite and Box-Behnken designs are developed. For cases where the subplot cost approaches the whole-plot cost, minimal point designs are proposed and illustrated with a split-plot Notz design.

  14. Estimate of Shock Standoff Distance Ahead of a General Stagnation Point

    NASA Technical Reports Server (NTRS)

    Reshotko, Eli

    1961-01-01

    The shock standoff distance ahead of a general rounded stagnation point has been estimated under the assumption of a constant-density shock layer. It is found that, with the exception of almost-two-dimensional bodies with very strong shock waves, the present theoretical calculations and the experimental data of Zakkay and Visich for toroids are well represented by the relation Delta_3D/R_s,x = (Delta_axisym/R_s)(2/(K + 1)), where Delta is the shock standoff distance, R_s,x is the smaller principal shock radius, and K is the ratio of the smaller to the larger of the principal shock radii.

  15. Surface roughness estimation at three points on the lunar surface using 23-CM monostatic radar

    NASA Technical Reports Server (NTRS)

    Simpson, R. A.

    1976-01-01

    Differences in quasi-specular scattering by the lunar surface have been observed at 23-cm wavelength by using earth-based radar. By taking advantage of libration, three subradar points were isolated, and distinct scattering laws were identified for terrain near Hipparchus, Sinus Medii, and the crater Schroeter F. Interpretations of lunar radar data should henceforth incorporate a recognition that these variations take place. Unidirectional rms surface slope estimates of 6-8 deg in the Central Highlands and 4-5 deg in old mare are appropriate to horizontal scales of 100 m.

  16. Precise Point Positioning with Ionosphere Estimation and application of Regional Ionospheric Maps

    NASA Astrophysics Data System (ADS)

    Galera Monico, J. F.; Marques, H. A.; Rocha, G. D. D. C.

    2015-12-01

    The ionosphere is one of the most difficult error sources to model in GPS positioning, especially when processing data collected by single-frequency receivers. For Precise Point Positioning (PPP) with single-frequency data, the available options include, for example, the Klobuchar model or Global Ionosphere Maps (GIM). GIMs contain Vertical Total Electron Content (VTEC) values that are commonly estimated from a global network with poor coverage in certain regions. For this reason, Regional Ionosphere Maps (RIM) have been developed from regional GNSS networks, for instance the La Plata Ionospheric Model (LPIM) developed within the context of SIRGAS (Geocentric Reference System for the Americas). The South American RIM is produced with data from nearly 50 GPS ground receivers; because these maps are generated hourly with a spatial resolution of one degree, they are expected to provide better accuracy in GPS positioning for that region. Another possibility for correcting ionospheric effects in PPP is to apply an ionosphere estimation technique based on the Kalman filter. In this case, the ionosphere can be treated as a stochastic process, and a good initial guess is necessary, which can be obtained from an ionospheric map. In this paper we present the methodology involved in ionosphere estimation using a Kalman filter, together with the application of global and regional ionospheric maps in the PPP as the first guess. The ionosphere estimation strategy was implemented in the in-house software RT_PPP, which is capable of performing PPP with either single- or dual-frequency data. GPS data from a Brazilian station near the equatorial region were processed, and results with regional maps were compared with those using global maps. Improvements of the order of 15% were observed. For ionosphere estimation, the estimated coordinates were compared with the ionosphere-free solution; after PPP convergence the results reached centimeter accuracy.
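
    The Kalman-filter treatment of the ionosphere described above can be illustrated with a minimal scalar sketch; this shows the general idea only, not the RT_PPP implementation, and the function name, noise values, and initial guess are all assumptions:

    ```python
    import numpy as np

    def filter_iono_delay(z, r=0.25, q=0.01, x0=5.0, p0=4.0):
        """Scalar Kalman filter for a slant ionospheric delay (metres) modelled
        as a random walk; x0 plays the role of the first guess taken from a
        GIM/RIM ionospheric map."""
        x, p, track = x0, p0, []
        for zk in z:
            p += q                      # time update: random-walk process noise
            k = p / (p + r)             # Kalman gain
            x += k * (zk - x)           # measurement update
            p *= (1.0 - k)
            track.append(x)
        return np.array(track)

    # e.g. noisy pseudo-observations of a slowly drifting 5 m delay
    rng = np.random.default_rng(0)
    obs = 5.0 + 0.002 * np.arange(600) + rng.normal(0.0, 0.5, 600)
    print(filter_iono_delay(obs)[-1])   # converges near the true delay
    ```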

  17. Estimation of ground reaction force and zero moment point on a powered ankle-foot prosthesis.

    PubMed

    Martinez-Villalpando, Ernesto C; Herr, Hugh; Farrell, Matthew

    2007-01-01

    The ground reaction force (GRF) and the zero moment point (ZMP) are important parameters for the advancement of biomimetic control of robotic lower-limb prosthetic devices. In this document a method to estimate GRF and ZMP on a motorized ankle-foot prosthesis (MIT Powered Ankle-Foot Prosthesis) is presented. The proposed method is based on the analysis of data collected from a sensory system embedded in the prosthetic device using a custom-designed wearable computing unit. In order to evaluate the performance of the estimation methods described, standing and walking clinical studies were conducted on a transtibial amputee. The results were statistically compared to standard analysis methodologies employed in a gait laboratory. The average RMS error and correlation factor were calculated for all experimental sessions. Using a static analysis procedure, the estimate of the vertical component of the GRF had an average correlation coefficient higher than 0.94. The estimated ZMP location had a distance error of less than 1 cm, equal to 4% of the anterior-posterior foot length or 12% of the medio-lateral foot width. PMID:18003052
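
    As background, the static relation between a measured ground-reaction wrench and the ZMP on the ground plane is standard; the sketch below states it under the simplifying assumption that the wrench is expressed at a reference point on the sole, ignoring sensor offsets and segment dynamics (so it is not the paper's embedded-sensor pipeline):

    ```python
    def zmp_from_wrench(fz, mx, my, x0=0.0, y0=0.0):
        """ZMP on the ground plane from a ground-reaction wrench expressed at
        the reference point (x0, y0, 0): the point about which the horizontal
        moment of the reaction force vanishes. Units: N, N.m, m."""
        if fz <= 0.0:
            raise ValueError("foot must be loaded (fz > 0)")
        return x0 - my / fz, y0 + mx / fz

    # e.g. 800 N vertical load with small frontal/sagittal moments
    print(zmp_from_wrench(fz=800.0, mx=4.0, my=-24.0))   # (0.03, 0.005)
    ```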

  18. Assessment of the point-source method for estimating dose rates to members of the public from exposure to patients with 131I thyroid treatment

    SciTech Connect

    Dewji, Shaheen Azim; Bellamy, Michael B.; Hertel, Nolan E.; Leggett, Richard Wayne; Sherbini, Sami; Saba, Mohammad S.; Eckerman, Keith F.

    2015-09-01

    The U.S. Nuclear Regulatory Commission (USNRC) initiated a contract with Oak Ridge National Laboratory (ORNL) to calculate radiation dose rates to members of the public that may result from exposure to patients recently administered iodine-131 (131I) as part of medical therapy. The main purpose was to compare dose rate estimates based on a point source and target with values derived from more realistic simulations that considered the time-dependent distribution of 131I in the patient and attenuation of emitted photons by the patient’s tissues. The external dose rate estimates were derived using Monte Carlo methods and two representations of the Phantom with Movable Arms and Legs, previously developed by ORNL and the USNRC, to model the patient and a nearby member of the public. Dose rates to tissues and effective dose rates were calculated for distances ranging from 10 to 300 cm between the phantoms and compared to estimates based on the point-source method, as well as to results of previous studies that estimated exposure from 131I patients. The point-source method overestimates dose rates to members of the public in very close proximity to an 131I patient but is a broadly accurate method of dose rate estimation at separation distances of 300 cm or more at times closer to administration.

  19. Assessment of the point-source method for estimating dose rates to members of the public from exposure to patients with 131I thyroid treatment

    DOE PAGESBeta

    Dewji, Shaheen Azim; Bellamy, Michael B.; Hertel, Nolan E.; Leggett, Richard Wayne; Sherbini, Sami; Saba, Mohammad S.; Eckerman, Keith F.

    2015-09-01

    The U.S. Nuclear Regulatory Commission (USNRC) initiated a contract with Oak Ridge National Laboratory (ORNL) to calculate radiation dose rates to members of the public that may result from exposure to patients recently administered iodine-131 (131I) as part of medical therapy. The main purpose was to compare dose rate estimates based on a point source and target with values derived from more realistic simulations that considered the time-dependent distribution of 131I in the patient and attenuation of emitted photons by the patient’s tissues. The external dose rate estimates were derived using Monte Carlo methods and two representations of the Phantom with Movable Arms and Legs, previously developed by ORNL and the USNRC, to model the patient and a nearby member of the public. Dose rates to tissues and effective dose rates were calculated for distances ranging from 10 to 300 cm between the phantoms and compared to estimates based on the point-source method, as well as to results of previous studies that estimated exposure from 131I patients. The point-source method overestimates dose rates to members of the public in very close proximity to an 131I patient but is a broadly accurate method of dose rate estimation at separation distances of 300 cm or more at times closer to administration.
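
    For orientation, the point-source method evaluated in these reports is the classical unshielded inverse-square estimate; a minimal sketch, where the specific gamma-ray constant is an approximate textbook value rather than one taken from the paper:

    ```python
    def point_source_dose_rate(activity_gbq, distance_m, gamma=0.052):
        """Unshielded point-source estimate: dose rate (mSv/h) = Gamma * A / r^2.
        Gamma ~ 0.052 mSv/h per GBq at 1 m is an approximate specific gamma-ray
        constant for 131I; no attenuation by the patient's tissues is modelled,
        which is why the method overestimates at very close range."""
        return gamma * activity_gbq / distance_m ** 2

    # e.g. 5.5 GBq administered activity, evaluated at 1 m and at 3 m
    print(point_source_dose_rate(5.5, 1.0), point_source_dose_rate(5.5, 3.0))
    ```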

  20. Accurate 3D rigid-body target motion and structure estimation by using GMTI/HRR with template information

    NASA Astrophysics Data System (ADS)

    Wu, Shunguang; Hong, Lang

    2008-04-01

    A framework is given for simultaneously estimating the motion and structure parameters of a 3D object by using high range resolution (HRR) and ground moving target indicator (GMTI) measurements with template information. By decoupling the motion and structure information and employing rigid-body constraints, we have developed the kinematic and measurement equations of the problem. Since the kinematic system is unobservable using only a single scan of HRR and GMTI measurements, we designed an architecture to run the motion and structure filters in parallel using multi-scan measurements. Moreover, to improve the estimation accuracy in large-noise and/or false-alarm environments, an interacting multi-template joint tracking (IMTJT) algorithm is proposed. Simulation results have shown that the averaged root mean square errors for both motion and structure state vectors are significantly reduced by using the template information.

  1. Dense and accurate motion and strain estimation in high resolution speckle images using an image-adaptive approach

    NASA Astrophysics Data System (ADS)

    Cofaru, Corneliu; Philips, Wilfried; Van Paepegem, Wim

    2011-09-01

    Digital image processing methods represent a viable and well acknowledged alternative to strain gauges and interferometric techniques for determining full-field displacements and strains in materials under stress. This paper presents an image-adaptive technique for dense motion and strain estimation using high-resolution speckle images that show the analyzed material in its original and deformed states. The algorithm starts by dividing the speckle image showing the original state into irregular cells, taking into consideration both the spatial and gradient image information present. Subsequently the Newton-Raphson digital image correlation technique is applied to calculate the corresponding motion for each cell. Adaptive spatial regularization in the form of the Geman-McClure robust spatial estimator is employed to increase the spatial consistency of the motion components of a cell with respect to the components of neighbouring cells. To obtain the final strain information, local least-squares fitting using a linear displacement model is performed on the horizontal and vertical displacement fields. To evaluate the presented image partitioning and strain estimation techniques, two numerical and two real experiments are employed. The numerical experiments simulate the deformation of a specimen with constant strain across the surface as well as small rigid-body rotations, while the real experiments involve specimens undergoing uniaxial stress. The results indicate very good accuracy of the recovered strains as well as better rotation insensitivity compared to classical techniques.
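
    The local least-squares strain step described above can be sketched directly: over each window, a linear displacement model is fitted to the horizontal and vertical displacement samples, and the small-strain components are read off the fitted gradients (a generic sketch, not the authors' exact code):

    ```python
    import numpy as np

    def window_strains(x, y, u, v):
        """Least-squares fit of a linear displacement model over one window:
        u = a0 + a1*x + a2*y and v = b0 + b1*x + b2*y, with the small-strain
        components read from the fitted displacement gradients."""
        A = np.column_stack([np.ones_like(x), x, y])
        (_, a1, a2), *_ = np.linalg.lstsq(A, u, rcond=None)
        (_, b1, b2), *_ = np.linalg.lstsq(A, v, rcond=None)
        return a1, b2, 0.5 * (a2 + b1)      # exx, eyy, exy

    # e.g. samples from a field with exx = 1e-3 and eyy = -3e-4
    x = np.array([0.0, 1.0, 0.0, 1.0, 0.5]); y = np.array([0.0, 0.0, 1.0, 1.0, 0.5])
    print(window_strains(x, y, 1e-3 * x, -3e-4 * y))
    ```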

  2. Estimation of the temperature dependent interaction between uncharged point defects in Si

    SciTech Connect

    Kamiyama, Eiji; Vanhellemont, Jan; Sueoka, Koji

    2015-01-15

    A method is described to estimate the temperature-dependent interaction between two uncharged point defects in Si based on DFT calculations. As an illustration, the formation of the uncharged di-vacancy V2 is discussed, based on the temperature-dependent attractive field between both vacancies. For that purpose, all irreducible configurations of two uncharged vacancies are determined, each with their weight given by the number of equivalent configurations. Using a standard 216-atom supercell, nineteen irreducible configurations of two vacancies are obtained. The binding energies of all these configurations are calculated. Each vacancy is surrounded by several attractive sites for another vacancy. The obtained temperature-dependent total volume of these attractive sites corresponds to a radius that is closely related to the capture radius for the formation of a di-vacancy used in continuum theory. The presented methodology can in principle also be applied to estimate the capture radius for pair formation of any type of point defects.

  3. Estimating dispersed and point source emissions of methane in East Anglia: results and implications

    NASA Astrophysics Data System (ADS)

    Harris, Neil; Connors, Sarah; Hancock, Ben; Jones, Pip; Murphy, Jonathan; Riddick, Stuart; Robinson, Andrew; Skelton, Robert; Manning, Alistair; Forster, Grant; Oram, David; O'Doherty, Simon; Young, Dickon; Stavert, Ann; Fisher, Rebecca; Lowry, David; Nisbet, Euan; Zazzeri, Guilia; Allen, Grant; Pitt, Joseph

    2016-04-01

    We have been investigating ways to estimate dispersed and point source emissions of methane. To do so we have used continuous measurements from a small network of instruments at 4 sites across East Anglia since 2012. These long-term series have been supplemented by measurements taken in focussed studies at landfills, which are important point sources of methane, and by measurements of the 13C:12C ratio in methane to provide additional information about its sources. These measurements have been analysed using the NAME InTEM inversion model to provide county-level emissions (~30 km x ~30 km) in East Anglia. A case study near a landfill just north of Cambridge was also analysed using a Gaussian plume model and the Windtrax dispersion model. The resulting emission estimates from the three techniques are consistent within the uncertainties, despite the different spatial scales being considered. A seasonal cycle in emissions from the landfill (identified by the isotopic measurements) is observed with higher emissions in winter than summer. This would be expected from consideration of the likely activity of methanogenic bacteria in the landfill, but is not currently represented in emission inventories such as the UK National Atmospheric Emissions Inventory. The possibility of assessing North Sea gas field emissions using ground-based measurements will also be discussed.
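
    For reference, the Gaussian plume model mentioned above has a standard closed form; the sketch below implements it for a single point source with reflection at the ground, with the dispersion lengths supplied by the caller (stability-class parameterizations are omitted):

    ```python
    import numpy as np

    def plume_concentration(q, u, y, z, sy, sz, h=0.0):
        """Gaussian plume with ground reflection: concentration (kg/m^3) at
        crosswind offset y and height z for a point source of strength q (kg/s)
        at effective height h, in wind speed u (m/s); sy, sz (m) are the lateral
        and vertical dispersion lengths at the downwind distance of interest."""
        lateral = np.exp(-y**2 / (2.0 * sy**2))
        vertical = (np.exp(-(z - h)**2 / (2.0 * sz**2))
                    + np.exp(-(z + h)**2 / (2.0 * sz**2)))
        return q / (2.0 * np.pi * u * sy * sz) * lateral * vertical

    # e.g. plume centreline at ground level, 0.1 kg/s source in a 5 m/s wind
    print(plume_concentration(q=0.1, u=5.0, y=0.0, z=0.0, sy=30.0, sz=15.0))
    ```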

  4. Use of the point load index in estimation of the strength rating for the RMR system

    NASA Astrophysics Data System (ADS)

    Karaman, Kadir; Kaya, Ayberk; Kesimal, Ayhan

    2015-06-01

    The Rock Mass Rating (RMR) system is a worldwide reference for design applications involving estimation of rock mass properties and tunnel support. In the RMR system, Uniaxial Compressive Strength (UCS) is an important input parameter for determining the strength rating of intact rock. In practice, there are difficulties in determining the UCS of rocks from problematic ground conditions when data are required rapidly. In this study, a combined strength rating chart was developed to overcome this problem, based on the experience gained in recent decades with the point load test. For this purpose, a total of 490 UCS and Point Load Index (PLI) data pairs collected from the accessible world literature and obtained from the Eastern Black Sea Region (EBSR) in Turkey were evaluated together. The UCS and PLI data pairs were classified for the cases of PLI < 1 and PLI > 1 MPa, and two different strength rating charts were suggested using regression analyses. The Variance Account For (VAF), Root Mean Square Error (RMSE), and Mean Absolute Percentage Error (MAPE) indices were calculated to compare the prediction capacity of the suggested strength rating charts. Further, a one-way analysis of variance (ANOVA) was performed to test whether the means of the calculated and predicted ratings are similar to each other. Findings of the analyses demonstrate that the combined strength rating chart for the cases of PLI < 1 and PLI > 1 MPa can be reliably used in estimating the strength ratings for the RMR system.

  5. A Roving Dual-Presentation Simultaneity-Judgment Task to Estimate the Point of Subjective Simultaneity

    PubMed Central

    Yarrow, Kielan; Martin, Sian E.; Di Costa, Steven; Solomon, Joshua A.; Arnold, Derek H.

    2016-01-01

    The most popular tasks with which to investigate the perception of subjective synchrony are the temporal order judgment (TOJ) and the simultaneity judgment (SJ). Here, we discuss a complementary approach—a dual-presentation (2x) SJ task—and focus on appropriate analysis methods for a theoretically desirable “roving” design. Two stimulus pairs are presented on each trial and the observer must select the most synchronous. To demonstrate this approach, in Experiment 1 we tested the 2xSJ task alongside TOJ, SJ, and simple reaction-time (RT) tasks using audiovisual stimuli. We interpret responses from each task using detection-theoretic models, which assume variable arrival times for sensory signals at critical brain structures for timing perception. All tasks provide similar estimates of the point of subjective simultaneity (PSS) on average, and PSS estimates from some tasks were correlated on an individual basis. The 2xSJ task produced lower and more stable estimates of model-based (and thus comparable) sensory/decision noise than the TOJ. In Experiment 2 we obtained similar results using RT, TOJ, ternary, and 2xSJ tasks for all combinations of auditory, visual, and tactile stimuli. In Experiment 3 we investigated attentional prior entry, using both TOJs and 2xSJs. We found that estimates of prior-entry magnitude correlated across these tasks. Overall, our study establishes the practicality of the roving dual-presentation SJ task, but also illustrates the additional complexity of the procedure. We consider ways in which this task might complement more traditional procedures, particularly when it is important to estimate both PSS and sensory/decisional noise. PMID:27047434

  6. A Roving Dual-Presentation Simultaneity-Judgment Task to Estimate the Point of Subjective Simultaneity.

    PubMed

    Yarrow, Kielan; Martin, Sian E; Di Costa, Steven; Solomon, Joshua A; Arnold, Derek H

    2016-01-01

    The most popular tasks with which to investigate the perception of subjective synchrony are the temporal order judgment (TOJ) and the simultaneity judgment (SJ). Here, we discuss a complementary approach-a dual-presentation (2x) SJ task-and focus on appropriate analysis methods for a theoretically desirable "roving" design. Two stimulus pairs are presented on each trial and the observer must select the most synchronous. To demonstrate this approach, in Experiment 1 we tested the 2xSJ task alongside TOJ, SJ, and simple reaction-time (RT) tasks using audiovisual stimuli. We interpret responses from each task using detection-theoretic models, which assume variable arrival times for sensory signals at critical brain structures for timing perception. All tasks provide similar estimates of the point of subjective simultaneity (PSS) on average, and PSS estimates from some tasks were correlated on an individual basis. The 2xSJ task produced lower and more stable estimates of model-based (and thus comparable) sensory/decision noise than the TOJ. In Experiment 2 we obtained similar results using RT, TOJ, ternary, and 2xSJ tasks for all combinations of auditory, visual, and tactile stimuli. In Experiment 3 we investigated attentional prior entry, using both TOJs and 2xSJs. We found that estimates of prior-entry magnitude correlated across these tasks. Overall, our study establishes the practicality of the roving dual-presentation SJ task, but also illustrates the additional complexity of the procedure. We consider ways in which this task might complement more traditional procedures, particularly when it is important to estimate both PSS and sensory/decisional noise. PMID:27047434

  7. Accurate estimate of the critical exponent nu for self-avoiding walks via a fast implementation of the pivot algorithm.

    PubMed

    Clisby, Nathan

    2010-02-01

    We introduce a fast implementation of the pivot algorithm for self-avoiding walks, which we use to obtain large samples of walks on the cubic lattice of up to 33 × 10^6 steps. Consequently the critical exponent nu for three-dimensional self-avoiding walks is determined to great accuracy; the final estimate is nu = 0.587597(7). The method can be adapted to other models of polymers with short-range interactions, on the lattice or in the continuum. PMID:20366773

  8. Estimating the Critical Point of Crowding in the Emergency Department for the Warning System

    NASA Astrophysics Data System (ADS)

    Chang, Y.; Pan, C.; Tseng, C.; Wen, J.

    2011-12-01

    The purpose of this study is to deduce a function from the admission/discharge rates of patient flow to estimate a "Critical Point" that provides a reference for warning systems regarding crowding in the emergency department (ED) of a hospital or medical clinic. In this study, an "Input-Throughput-Output" model was used in our established mathematical function to evaluate the critical point. The function is defined as dPin/dt = dPwait/dt + Cp × B + dPout/dt, where Pin = number of registered patients, Pwait = number of waiting patients, Cp = retention rate per bed (calculated for the critical point), B = number of licensed beds in the treatment area, and Pout = number of patients discharged from the treatment area. Using the average Cp of ED crowding, we could start the warning system at an appropriate time and then plan the necessary emergency response so that patients are processed more smoothly. It was concluded that ED crowding could be quantified using the average value of Cp and that this value could be used as a reference for medical staff to give optimal emergency medical treatment to patients. Therefore, additional practical work should be launched to collect more precise quantitative data.
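
    Since the balance equation above is linear in Cp, the critical retention rate per bed follows directly by rearrangement; a minimal sketch with hypothetical flow rates:

    ```python
    def retention_rate_per_bed(d_in, d_wait, d_out, beds):
        """Rearranging dPin/dt = dPwait/dt + Cp*B + dPout/dt for Cp."""
        return (d_in - d_wait - d_out) / beds

    # e.g. 30 registrations/h, queue growing by 5/h, 20 discharges/h, 25 beds
    print(retention_rate_per_bed(30.0, 5.0, 20.0, 25))   # 0.2 patients/bed/h
    ```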

  9. Benthic remineralisation rates in southern North Sea - from point measurements to areal estimates

    NASA Astrophysics Data System (ADS)

    Neumann, Andreas; Friedrich, Jana; van Beusekom, Justus; Naderipour, Céline

    2015-04-01

    The southern North Sea is enclosed by densely populated hinterland with intensive use by agriculture and industry, and is thus substantially affected by anthropogenic influences. As a coastal subsystem, this applies especially to the German Wadden Sea, a system of back-barrier tidal flats along the whole German Bight. Ongoing efforts to implement environmental protection policies during the last decades changed the significance of various pollutants such as reactive nitrogen or phosphate, which raises the need for constant monitoring of the coastal ecosystem to assess the efficiency of the employed environmental protection measures. Environmental monitoring is limited to point measurements, which thus have to be interpolated with appropriate models. However, existing models for estimating various sediment characteristics for the interpolation of point measurements appear insufficient when compared with actual field measurements in the southern North Sea. We therefore seek to improve these models by identifying and quantifying key variables of benthic solute fluxes through comprehensive measurements that cover the complete spatial and seasonal variability. We employ in-situ measurements with the eddy-correlation technique and flux chambers in combination with ex-situ incubations of sediment cores to establish benthic fluxes of oxygen and nutrients. Additional ex-situ measurements determine basic sediment characteristics such as permeability, volumetric reaction rates, and substrate concentration. With our first results we mapped the distribution of measured sediment permeability, which suggests that areas with water depth greater than 30 m are impervious, whereas sediment in shallower water at the Dogger Bank and along the coast is substantially permeable, with permeability between 10^-12 m2 and 10^-10 m2. This implies that benthic fluxes can be estimated with simple diffusion-type models for water depths >30 m, whereas estimates especially for coastal sediments require

  10. Curie Point Depth Estimates Beneath the Incipient Okavango Rift Zone, Northwest Botswana

    NASA Astrophysics Data System (ADS)

    Leseane, K.; Atekwana, E. A.; Mickus, K. L.; Mohamed, A.; Atekwana, E. A.

    2013-12-01

    We investigated the regional thermal structure of the crust beneath the Okavango Rift Zone (ORZ), surrounding cratons, and orogenic mobile belts using Curie Point Depth (CPD) estimates. Estimating the depth to the base of magnetic sources is important in understanding and constraining the thermal structure of the crust in zones of incipient continental rifting where no other data are available to image the crustal thermal structure. Our objective was to determine whether there are any thermal perturbations within the lithosphere during rift initiation. The top and bottom of the magnetized crust were calculated using two-dimensional (2D) power-density spectral analysis and three-dimensional (3D) inversions of the total-field magnetic data of Botswana in overlapping square windows of 1 degree × 1 degree. The calculated CPD estimates varied between ~8 km and ~24 km. The deepest CPD values (16-24 km) occur under the surrounding cratons and orogenic mobile belts, whereas the shallowest CPD values were found within the ORZ. CPD values of 8 to 10 km occur in the northeastern part of the ORZ, a site of more developed rift structures where hot springs are known to occur. CPD values of 12 to 16 km were obtained in the southwestern part of the ORZ, where rift structures are progressively less developed and where the rift terminates. The results suggest a possible thermal anomaly beneath the incipient ORZ. Further geophysical studies as part of the PRIDE (Project for Rift Initiation Development and Evolution) project are needed to confirm this proposition.
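
    One widely used spectral recipe for CPD estimation (the centroid method) can be sketched as follows; the band masks and wavenumber units are assumptions, and this is not necessarily the exact inversion the authors applied:

    ```python
    import numpy as np

    def curie_point_depth(k, power, lo, hi):
        """Centroid-method estimate from a radially averaged power spectrum
        P(k), k in rad/km: depth to top Zt from the high-wavenumber slope of
        ln sqrt(P), centroid depth Zc from the low-wavenumber slope of
        ln(sqrt(P)/k), and depth to base (the CPD) Zb = 2*Zc - Zt.
        lo and hi are boolean masks selecting the two fitting bands."""
        zt = -np.polyfit(k[hi], np.log(np.sqrt(power[hi])), 1)[0]
        zc = -np.polyfit(k[lo], np.log(np.sqrt(power[lo]) / k[lo]), 1)[0]
        return 2.0 * zc - zt
    ```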

  11. On the tridimensional estimation of the gaze point by a stereoscopic wearable eye tracker.

    PubMed

    Lanata, Antonio; Greco, Alberto; Valenza, Gaetano; Scilingo, Enzo Pasquale

    2015-08-01

    This paper reports a novel stereo-vision method (binocular system-geometrical mapped, BS-GM) to estimate the depth coordinates of the eye gaze point in a controlled 3D space of vision. The method's outcomes were compared on both 2D and 3D visual targets with both mono- and stereo-vision algorithms in order to assess the accuracy of the results. More specifically, we compared BS-GM with a monocular method and with two stereo-vision methodologies that differed in their mapping functions. All of the methods were implemented in the same head-mounted eye-tracking system, which is able to acquire both eyes. In the 2D visual space (i.e., the plane of vision) we compared BS-GM with a monocular method, a binocular system-linear mapped (BS-LM), and a binocular system-quadratic mapped (BS-QM). In the 3D space estimation, all of the binocular systems were compared with each other. Thirteen enrolled subjects observed 31 targets of known coordinates in a controlled environment. Results of the 2D comparison showed no statistically significant difference among the four methods, while the comparison in the 3D space of vision showed that the BS-GM method achieved significantly better accuracy than the BS-LM and BS-QM methods. Specifically, BS-GM showed an average percentage error of 3.47%. PMID:26736748

  12. Estimation of the skull insertion loss using an optoacoustic point source

    NASA Astrophysics Data System (ADS)

    Estrada, Héctor; Rebling, Johannes; Turner, Jake; Kneipp, Moritz; Shoham, Shy; Razansky, Daniel

    2016-03-01

    The acoustically-mismatched skull bone poses significant challenges for the application of ultrasonic and optical techniques in neuroimaging, still typically requiring invasive approaches using craniotomy or skull thinning. Optoacoustic imaging partially circumvents the acoustic distortions due to the skull because the induced wave is transmitted only once, as opposed to the round trip in pulse-echo ultrasonography. To this end, the mouse brain has been successfully imaged transcranially by optoacoustic scanning microscopy. Yet the skull may adversely affect the lateral and axial resolution of transcranial brain images. In order to accurately characterize the complex behavior of the optoacoustic signal as it traverses the skull, one needs to consider the ultrawideband nature of optoacoustic signals. Here the insertion loss of the murine skull has been measured by means of a hybrid optoacoustic-ultrasound scanning microscope having a spherically focused PVDF transducer and pulsed laser excitation at 532 nm of a 20 μm diameter absorbing microsphere acting as an optoacoustic point source. Accurate modeling of the acoustic transmission through the skull is further performed using a Fourier-domain expansion of a solid-plate model, based on the simultaneously acquired pulse-echo ultrasound image providing precise information about the skull's position and orientation relative to the optoacoustic source. Good qualitative agreement has been found between the solid-plate model and the experimental measurements. The presented strategy might pave the way for modeling skull effects and deriving efficient correction schemes to account for acoustic distortions introduced by an adult murine skull, thus improving the spatial resolution, effective penetration depth, and overall image quality of transcranial optoacoustic brain microscopy.

  13. Estimating the contribution of point sources to atmospheric metals using single-particle mass spectrometry

    NASA Astrophysics Data System (ADS)

    Snyder, David C.; Schauer, James J.; Gross, Deborah S.; Turner, Jay R.

    Single-particle mass spectra were collected using an Aerosol Time-of-Flight Mass Spectrometer (ATOFMS) during December of 2003 and February of 2004 at an industrially impacted location in East St. Louis, IL. Hourly integrated peak areas for twenty ions were evaluated for their suitability in representing metals/metalloids, particularly those reported in the US EPA Toxic Release Inventory (TRI). Of the initial twenty ions examined, six (Al, As, Cu, Hg, Ti, and V) were found to be unsuitable due to strong isobaric interferences with commonly observed organic fragments, and one (Be) was found to have no significant signal. The usability of three ions (Co, Cr, and Mn) was limited due to suspected isobaric interferences based on temporal comparisons with commonly observed organic fragments. The identity of the remaining ions (Sb, Ba, Cd, Ca, Fe, Ni, Pb, K, Se, and Zn) was substantiated by comparing their signals with the integrated hourly signals of one or more isotope ions. When compared with one-in-six-day integrated elemental data as determined by X-ray fluorescence spectroscopy (XRF), the daily integrated ATOFMS signal for several metal ions revealed a semi-quantitative relationship between ATOFMS peak area and XRF concentrations, although in some cases comparisons of these measurements were poor at low elemental concentrations/ion signals due to isobaric interferences. A method of estimating the impact of local point sources was developed using hourly integrated ATOFMS peak areas; this method attributed as much as 85% of the concentration of individual metals observed at the study site to local point sources. Hourly surface wind data were used in conjunction with TRI facility emissions data to reveal likely point sources impacting metal concentrations at the study site and to illustrate the utility of using single-particle mass spectral data to characterize atmospheric metals and identify point sources.

  14. Estimating forest biomass from LiDAR data: A comparison of the raster-based and point-cloud data approach

    NASA Astrophysics Data System (ADS)

    Garcia-Alonso, M.; Ferraz, A.; Saatchi, S. S.; Casas, A.; Koltunov, A.; Ustin, S.; Ramirez, C.; Balzter, H.

    2015-12-01

    Accurate knowledge of forest biomass and its dynamics is critical for better understanding the carbon cycle and improving forest management decisions to ensure forest sustainability. LiDAR technology provides accurate estimates of aboveground biomass in different ecosystems, minimizing the signal saturation problems that are common with other remote sensing technologies. LiDAR data processing can be based on two different approaches. The first is based on deriving structural metrics from returns classified as vegetation, while the second one is based on metrics derived from the canopy height model (CHM). The CHM is obtained by subtracting the digital elevation model (DEM) that was created from the ground returns, from the digital surface model (DSM), which was itself constructed using the maximum height within each grid cell. The former approach provides a better description of the vertical distribution of the vegetation, whereas the latter significantly reduces the computational burden involved in processing point cloud data at the expense of losing information. This study evaluates the performance of both approaches for biomass estimation over very different ecosystems, including a Mediterranean forest in the Sierra Nevada Mountains of California and a tropical forest in Barro Colorado Island (Panama). In addition, the effect of point density on the variables derived, and ultimately on the estimated biomass, will be assessed.
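
    The raster-based (CHM) approach described above reduces to a grid subtraction plus summary metrics; a minimal sketch, where the 2 m cover threshold and the metric choices are illustrative assumptions:

    ```python
    import numpy as np

    def chm_metrics(dsm, dem, cover_height=2.0):
        """Raster approach: canopy height model CHM = DSM - DEM, plus simple
        grid-cell height metrics of the kind regressed against biomass."""
        chm = np.clip(dsm - dem, 0.0, None)      # clamp below-ground artefacts
        return chm, {
            "mean_h": float(np.nanmean(chm)),
            "p95_h": float(np.nanpercentile(chm, 95)),
            "cover": float(np.mean(chm > cover_height)),
        }
    ```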

  15. Are Treponema pallidum Specific Rapid and Point-of-Care Tests for Syphilis Accurate Enough for Screening in Resource Limited Settings? Evidence from a Meta-Analysis

    PubMed Central

    Jafari, Yalda; Peeling, Rosanna W.; Shivkumar, Sushmita; Claessens, Christiane; Joseph, Lawrence; Pai, Nitika Pant

    2013-01-01

    Background Rapid and point-of-care (POC) tests for syphilis are an invaluable screening tool, yet inadequate evaluation of their diagnostic accuracy against best reference standards limits their widespread global uptake. To fill this gap, a systematic review and meta-analysis was conducted to evaluate the sensitivity and specificity of rapid and POC tests in blood and serum samples against Treponema pallidum (TP) specific reference standards. Methods Five electronic databases (1980–2012) were searched, data was extracted from 33 articles, and Bayesian hierarchical models were fit. Results In serum samples, against a TP specific reference standard point estimates with 95% credible intervals (CrI) for the sensitivities of popular tests were: i) Determine, 90.04% (80.45, 95.21), ii) SD Bioline, 87.06% (75.67, 94.50), iii) VisiTect, 85.13% (72.83, 92.57), and iv) Syphicheck, 74.48% (56.85, 88.44), while specificities were: i) Syphicheck, 99.14% (96.37, 100), ii) Visitect, 96.45% (91.92, 99.29), iii) SD Bioline, 95.85% (89.89, 99.53), and iv) Determine, 94.15% (89.26, 97.66). In whole blood samples, sensitivities were: i) Determine, 86.32% (77.26, 91.70), ii) SD Bioline, 84.50% (78.81, 92.61), iii) Syphicheck, 74.47% (63.94, 82.13), and iv) VisiTect, 74.26% (53.62, 83.68), while specificities were: i) Syphicheck, 99.58% (98.91, 99.96), ii) VisiTect, 99.43% (98.22, 99.98), iii) SD Bioline, 97.95%(92.54, 99.33), and iv) Determine, 95.85% (92.42, 97.74). Conclusions Rapid and POC treponemal tests reported sensitivity and specificity estimates comparable to laboratory-based treponemal tests. In resource limited settings, where access to screening is limited and where risk of patients lost to follow up is high, the introduction of these tests has already been shown to improve access to screening and treatment to prevent stillbirths and neonatal mortality due to congenital syphilis. Based on the evidence, it is concluded that rapid and POC tests are useful in resource

  16. Travel Time Estimation Using Freeway Point Detector Data Based on Evolving Fuzzy Neural Inference System

    PubMed Central

    Tang, Jinjun; Zou, Yajie; Ash, John; Zhang, Shen; Liu, Fang; Wang, Yinhai

    2016-01-01

    Travel time is an important measurement used to evaluate the extent of congestion within road networks. This paper presents a new method to estimate the travel time based on an evolving fuzzy neural inference system. The input variables in the system are traffic flow data (volume, occupancy, and speed) collected from loop detectors located at points both upstream and downstream of a given link, and the output variable is the link travel time. A first order Takagi-Sugeno fuzzy rule set is used to complete the inference. For training the evolving fuzzy neural network (EFNN), two learning processes are proposed: (1) a K-means method is employed to partition input samples into different clusters, and a Gaussian fuzzy membership function is designed for each cluster to measure the membership degree of samples to the cluster centers. As the number of input samples increases, the cluster centers are modified and membership functions are also updated; (2) a weighted recursive least squares estimator is used to optimize the parameters of the linear functions in the Takagi-Sugeno type fuzzy rules. Testing datasets consisting of actual and simulated data are used to test the proposed method. Three common criteria including mean absolute error (MAE), root mean square error (RMSE), and mean absolute relative error (MARE) are utilized to evaluate the estimation performance. Estimation results demonstrate the accuracy and effectiveness of the EFNN method through comparison with existing methods including: multiple linear regression (MLR), instantaneous model (IM), linear model (LM), neural network (NN), and cumulative plots (CP). PMID:26829639

  17. Travel Time Estimation Using Freeway Point Detector Data Based on Evolving Fuzzy Neural Inference System.

    PubMed

    Tang, Jinjun; Zou, Yajie; Ash, John; Zhang, Shen; Liu, Fang; Wang, Yinhai

    2016-01-01

    Travel time is an important measurement used to evaluate the extent of congestion within road networks. This paper presents a new method to estimate the travel time based on an evolving fuzzy neural inference system. The input variables in the system are traffic flow data (volume, occupancy, and speed) collected from loop detectors located at points both upstream and downstream of a given link, and the output variable is the link travel time. A first order Takagi-Sugeno fuzzy rule set is used to complete the inference. For training the evolving fuzzy neural network (EFNN), two learning processes are proposed: (1) a K-means method is employed to partition input samples into different clusters, and a Gaussian fuzzy membership function is designed for each cluster to measure the membership degree of samples to the cluster centers. As the number of input samples increases, the cluster centers are modified and membership functions are also updated; (2) a weighted recursive least squares estimator is used to optimize the parameters of the linear functions in the Takagi-Sugeno type fuzzy rules. Testing datasets consisting of actual and simulated data are used to test the proposed method. Three common criteria including mean absolute error (MAE), root mean square error (RMSE), and mean absolute relative error (MARE) are utilized to evaluate the estimation performance. Estimation results demonstrate the accuracy and effectiveness of the EFNN method through comparison with existing methods including: multiple linear regression (MLR), instantaneous model (IM), linear model (LM), neural network (NN), and cumulative plots (CP). PMID:26829639
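
    Of the two learning processes described, the second (the weighted recursive least squares update of the Takagi-Sugeno consequent parameters) is compact enough to sketch; the forgetting factor and initialization are assumptions, and this is a generic weighted RLS rather than the authors' exact formulation:

    ```python
    import numpy as np

    class WeightedRLS:
        """Weighted recursive least squares for the consequent parameters of
        one Takagi-Sugeno rule; w is the sample's Gaussian membership degree."""
        def __init__(self, n_inputs, lam=0.99):
            self.theta = np.zeros(n_inputs)      # linear consequent parameters
            self.P = 1e4 * np.eye(n_inputs)      # inverse-covariance surrogate
            self.lam = lam                       # forgetting factor

        def update(self, x, y, w):
            px = self.P @ x
            g = (w * px) / (self.lam + w * x @ px)           # gain vector
            self.theta = self.theta + g * (y - x @ self.theta)
            self.P = (self.P - np.outer(g, px)) / self.lam
            return self.theta
    ```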

  18. Parameter Estimation of Fossil Oysters from High Resolution 3D Point Cloud and Image Data

    NASA Astrophysics Data System (ADS)

    Djuricic, Ana; Harzhauser, Mathias; Dorninger, Peter; Nothegger, Clemens; Mandic, Oleg; Székely, Balázs; Molnár, Gábor; Pfeifer, Norbert

    2014-05-01

    A unique fossil oyster reef was excavated at Stetten in Lower Austria, which is also the highlight of the geo-edutainment park 'Fossilienwelt Weinviertel'. It provides the rare opportunity to study the Early Miocene flora and fauna of the Central Paratethys Sea. The site presents the world's largest fossil oyster biostrome, formed about 16.5 million years ago in a tropical estuary of the Korneuburg Basin. About 15,000 up to 80-cm-long shells of Crassostrea gryphoides cover a 400 m2 area. Our project 'Smart-Geology for the World's largest fossil oyster reef' combines methods of photogrammetry, geology and paleontology to document, evaluate and quantify the shell bed. This interdisciplinary approach will be applied to test hypotheses on the genesis of the taphocenosis (e.g., tsunami versus major storm) and to reconstruct pre- and post-event processes. Hence, we are focusing on using visualization technologies from photogrammetry in geology and paleontology in order to develop new methods for automatic and objective evaluation of 3D point clouds. These will be studied on the basis of a very dense surface reconstruction of the oyster reef. 'Smart Geology', as an extension of the classic discipline, exploits massive data, automatic interpretation, and visualization. Photogrammetry provides the tools for surface acquisition and objective, automated interpretation. We also want to stress the economic aspect of using automatic shape detection in paleontology, which saves manpower and increases efficiency during the monitoring and evaluation process. Currently, there are many well-known algorithms for 3D shape detection of certain objects. We are using dense 3D laser scanning data from an instrument utilizing the phase-shift measuring principle, which provides an accurate geometric basis (< 3 mm). However, the situation is difficult in this multiple-object scenario, where more than 15,000 complete or fragmentary parts of an object with random orientation are found. The goal

  19. Line-point intercept, grid-point intercept, and ocular estimate methods: their relative value for rangeland assessment and monitoring

    Technology Transfer Automated Retrieval System (TEKTRAN)

    We compared the utility of three methods for rangeland assessment and monitoring based on the number of species detected, foliar cover, precision (coefficient of variation) and time required for each method. We used four 70-m transects in 15 sites of five vegetation types (3 sites/type). Point inter...

  20. Accurate spike estimation from noisy calcium signals for ultrafast three-dimensional imaging of large neuronal populations in vivo

    PubMed Central

    Deneux, Thomas; Kaszas, Attila; Szalay, Gergely; Katona, Gergely; Lakner, Tamás; Grinvald, Amiram; Rózsa, Balázs; Vanzetta, Ivo

    2016-01-01

    Extracting neuronal spiking activity from large-scale two-photon recordings remains challenging, especially in mammals in vivo, where large noises often contaminate the signals. We propose a method, MLspike, which returns the most likely spike train underlying the measured calcium fluorescence. It relies on a physiological model including baseline fluctuations and distinct nonlinearities for synthetic and genetically encoded indicators. Model parameters can be either provided by the user or estimated from the data themselves. MLspike is computationally efficient thanks to its original discretization of probability representations; moreover, it can also return spike probabilities or samples. Benchmarked on extensive simulations and real data from seven different preparations, it outperformed state-of-the-art algorithms. Combined with the finding obtained from systematic data investigation (noise level, spiking rate and so on) that photonic noise is not necessarily the main limiting factor, our method allows spike extraction from large-scale recordings, as demonstrated on acousto-optical three-dimensional recordings of over 1,000 neurons in vivo. PMID:27432255

  1. Accurate spike estimation from noisy calcium signals for ultrafast three-dimensional imaging of large neuronal populations in vivo.

    PubMed

    Deneux, Thomas; Kaszas, Attila; Szalay, Gergely; Katona, Gergely; Lakner, Tamás; Grinvald, Amiram; Rózsa, Balázs; Vanzetta, Ivo

    2016-01-01

    Extracting neuronal spiking activity from large-scale two-photon recordings remains challenging, especially in mammals in vivo, where large noises often contaminate the signals. We propose a method, MLspike, which returns the most likely spike train underlying the measured calcium fluorescence. It relies on a physiological model including baseline fluctuations and distinct nonlinearities for synthetic and genetically encoded indicators. Model parameters can be either provided by the user or estimated from the data themselves. MLspike is computationally efficient thanks to its original discretization of probability representations; moreover, it can also return spike probabilities or samples. Benchmarked on extensive simulations and real data from seven different preparations, it outperformed state-of-the-art algorithms. Combined with the finding obtained from systematic data investigation (noise level, spiking rate and so on) that photonic noise is not necessarily the main limiting factor, our method allows spike extraction from large-scale recordings, as demonstrated on acousto-optical three-dimensional recordings of over 1,000 neurons in vivo. PMID:27432255

  2. Spin polarization of Co-Fe alloys estimated by point contact Andreev reflection and tunneling magnetoresistance

    NASA Astrophysics Data System (ADS)

    Karthik, S. V.; Nakatani, T. M.; Rajanikanth, A.; Takahashi, Y. K.; Hono, K.

    2009-04-01

    The compositional dependence of the spin polarization of Co100-xFex alloys has been studied by point contact Andreev reflection (PCAR) and tunneling magnetoresistance (TMR) measurements. The intrinsic spin polarization for the bcc Co75Fe25 alloy is P = 0.58±0.03 at 4.2 K, in contrast to pure Fe (P = 0.46±0.03) and Co (P = 0.45±0.03). The tunneling spin polarization values of a Co75Fe25 (110)-textured polycrystalline electrode and a (001) epitaxially grown electrode were estimated to be PT = 0.5±0.01 and PT = 0.57±0.01 at 8 K from the TMR ratios, using Julliere's model, for the MTJs prepared on oxidized Si and MgO (001) substrates. The spin polarizations obtained from the tunneling junctions and the PCAR experiments are discussed.
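
    For identical electrodes, Julliere's model lets the tunneling spin polarization be read off the TMR ratio directly, since TMR = 2*P^2/(1 - P^2); a minimal sketch with an illustrative TMR value:

    ```python
    import math

    def julliere_polarization(tmr):
        """Tunneling spin polarization for two identical electrodes from the
        TMR ratio via Julliere's model: TMR = 2*P^2 / (1 - P^2)."""
        return math.sqrt(tmr / (2.0 + tmr))

    print(round(julliere_polarization(0.96), 2))   # a 96% TMR ratio gives P ~ 0.57
    ```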

  3. Estimation of the Elevational Distance between Image Planes by Analysis of Ultrasonic Echoes from Point Scatterers

    NASA Astrophysics Data System (ADS)

    Suzuki, Atsuhiro; Hasegawa, Hideyuki; Kanai, Hiroshi

    2011-07-01

    There are two approaches to three-dimensional (3D) image reconstruction using a 1D array ultrasonic transducer: mechanical linear scanning and free-hand scanning. Mechanical scanning employs a motorized mechanism to translate the transducer linearly. However, the large size and weight of the scanning system sometimes make it inconvenient to use. In free-hand scanning, a sensor (e.g., electromagnetic or optical) is attached to the ultrasonic transducer to measure the position and orientation of the transducer. These techniques are sensitive to the usage environment. Recently, sensorless free-hand scanning techniques have been developed. Seabra et al. reported sensorless free-hand techniques for the carotid artery by monitoring the velocity of the ultrasound probe [J. C. R. Seabra, L. M. Pedro, and J. F. Ferandes: IEEE Trans. Biomed. Eng. 56 (2009) 1442]. This system achieved an accuracy of 2.5 mm [root mean square (RMS) error] in location. To develop an accurate sensorless measurement, we propose a novel method using the phase shift between ultrasonic RF echoes. In this study, we measured the transmit-receive directivity of a linear-array transducer using a silicone phantom and estimated the elevational distance between two 2D US images using the phase shift. An RMS accuracy of 49.9 µm, better than that of the previous sensorless free-hand method, could be achieved by the proposed method.

  4. Estimating the point accuracy of population registers using capture-recapture methods in Scotland.

    PubMed Central

    Garton, M J; Abdalla, M I; Reid, D M; Russell, I T

    1996-01-01

    STUDY OBJECTIVE: To estimate the point accuracy of adult registration on the community health index (CHI) by comparing it with the electoral register (ER) and the community charge register (CCR). DESIGN: Survey of overlapping samples from three registers to ascertain whether respondents were living at the addresses given on the registers, analysed by capture-recapture methods. SETTING: Aberdeen North and South parliamentary constituencies. PARTICIPANTS: Random samples of adult registrants aged at least 18 years from the CHI (n = 1000), ER (n = 998), and CCR (n = 956). MAIN RESULTS: Estimated sensitivities (the proportions of the target population registered at the address where they live) were: CHI--84.6% (95% confidence limits 82.4%, 86.7%); ER--90.0% (87.5%, 92.5%), and CCR--87.7% (85.3%, 90.3%). Positive predictive values (the proportions of registrants who were living at their stated addresses) were: CHI--84.6% (82.2%, 87.0%); ER--94.0% (90.9%, 97.1%), and CCR--93.7% (91.7%, 95.7%). CONCLUSIONS: The CHI assessed in this study was significantly less sensitive and predictive than the corresponding ER and CCR. Capture-recapture methods are effective in assessing the accuracy of population registers. PMID:8762363
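
    The two-source capture-recapture logic underlying such register comparisons is the classical Lincoln-Petersen estimator; a sketch of its bias-corrected (Chapman) form, shown for intuition rather than as the authors' exact model:

    ```python
    def chapman_estimate(n1, n2, m):
        """Bias-corrected two-source capture-recapture (Chapman) estimate of
        population size: n1 and n2 individuals verified on each register,
        m verified on both."""
        return (n1 + 1) * (n2 + 1) / (m + 1) - 1

    # e.g. hypothetical overlap between two register samples
    print(round(chapman_estimate(n1=900, n2=880, m=800)))   # ~990
    ```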

  5. Estimation of normalized point-source sensitivity of segment surface specifications for extremely large telescopes.

    PubMed

    Seo, Byoung-Joon; Nissly, Carl; Troy, Mitchell; Angeli, George; Bernier, Robert; Stepp, Larry; Williams, Eric

    2013-06-20

    We present a method which estimates the normalized point-source sensitivity (PSSN) of a segmented telescope when only information from a single segment surface is known. The estimation principle is based on a statistical approach with the assumption that all segment surfaces have the same power spectral density (PSD) as the given segment surface. As presented in this paper, the PSSN based on this statistical approach represents a worst-case scenario among statistical random realizations of telescopes whose segment surfaces all have the same PSD. Therefore, this method, which we call the vendor table, is expected to be useful for individual segment specification, such as the segment polishing specification. A specification based on the vendor table can be directly related to a science metric such as the PSSN and provides the mirror vendors significant flexibility by specifying a single overall PSSN value for them to meet. We build a vendor table for the Thirty Meter Telescope (TMT) and test it using multiple mirror samples from various mirror vendors to prove its practical utility. Accordingly, TMT plans to adopt this vendor table for its M1 segment final mirror polishing requirement. PMID:23842151

  6. Enhanced resolution edge and surface estimation from ladar point clouds containing multiple return data

    NASA Astrophysics Data System (ADS)

    Neilsen, Kevin D.; Budge, Scott E.

    2013-11-01

    Signal processing enables the detection of more returns in a digital ladar waveform by computing the surface response. Prior work has shown that obtaining the surface response can improve the range resolution by a factor of 2. However, this advantage presents a problem when forming a range image: each ladar shot crossing an edge contains multiple values. To exploit this information, the location of each return inside the spatial beam footprint is estimated by dividing the footprint into sections that correspond to each return and assigning the coordinates of the return to the centroid of the region. Increased resolution results on the edges of targets where multiple returns occur. Experiments focus on angled and slotted surfaces for both simulated and real data. Results show that the angle of incidence on a 75-deg surface is computed from a single waveform with an error of 1.4 deg, and that the width of a 19-cm-wide by 16-cm-deep slot is estimated with an error of 3.4 cm using real data. Point clouds show that the edges of the slotted surface are sharpened. These results can be used to improve features extracted from objects for applications such as automatic target recognition.

  7. Shorter sampling periods and accurate estimates of milk volume and components are possible for pasture based dairy herds milked with automated milking systems.

    PubMed

    Kamphuis, Claudia; Burke, Jennie K; Taukiri, Sarah; Petch, Susan-Fay; Turner, Sally-Anne

    2016-08-01

    Dairy cows grazing pasture and milked using automated milking systems (AMS) have lower milking frequencies than indoor-fed cows milked using AMS. Therefore, milk recording intervals used for herd testing indoor-fed cows may not be suitable for cows on pasture-based farms. We hypothesised that accurate standardised 24 h estimates could be determined for AMS herds with milk recording intervals shorter than the Gold Standard (48 h), but that the optimum milk recording interval would depend on the herd average for milking frequency. The Gold Standard protocol was applied on five commercial dairy farms with AMS between December 2011 and February 2013. From 12 milk recording test periods, involving 2211 cow-test days and 8049 cow milkings, standardised 24 h estimates for milk volume and milk composition were calculated for the Gold Standard protocol and compared with those collected during nine alternative sampling scenarios, including six shorter sampling periods and three in which a fixed number of milk samples per cow were collected. The results suggest that a 48 h milk recording protocol is unnecessarily long for collecting accurate estimates during milk recording on pasture-based AMS farms. Collection of only two milk samples per cow was optimal in terms of high concordance correlation coefficients for milk volume and components and a low proportion of missed cow-test days. Further research is required to determine the effects of diurnal variation in milk composition on standardised 24 h estimates for milk volume and components before a protocol based on a fixed number of samples could be considered. Based on the results of this study, New Zealand has adopted a split protocol for herd testing based on the average milking frequency for the herd (NZ Herd Test Standard 8100:2015). PMID:27600967

  8. RosenPoint: A Microsoft Excel-based program for the Rosenblueth point estimate method and an application in slope stability analysis

    NASA Astrophysics Data System (ADS)

    Wang, Jui-Pin; Huang, Duruo

    2012-11-01

    The Rosenblueth point estimate method is a probabilistic analysis for estimating the failure probability of a system, such as a slope. The essence of the approach is to use two point estimates, mean value ± standard deviation, to represent each variable in the safety evaluation. This simple and straightforward framework has led to its wide application, but for a system governed by n variables (with n large), mass computations (2^n function evaluations) are required during the analysis. This makes hand computation impractical, and a proper computing tool is needed. In this study, a Microsoft Excel-based program, RosenPoint, was developed for the Rosenblueth approach, and the program development, description, and modifications are given in detail. The program is successfully demonstrated by computing the failure probability of an infinite slope under earthquake conditions with a deterministic factor of safety (FOS) equal to 1.77. As the critical FOS is equal to 1.4, the slope that is considered stable by a conventional analysis is found to be associated with a substantial failure probability of around 20%. Since the current version of RosenPoint is designed for estimating slope failure probability, the program needs modification if it is used for other tasks. Owing to the separated programming structure of RosenPoint, the subroutine governing the FOS algorithms only needs to be replaced or recompiled when modification is needed. In addition, the capacity of the current RosenPoint is limited to 19 variables due to the dimension constraint of Excel spreadsheets (2^20 rows). However, the capacity can be easily improved by sacrificing output completeness. This program modification is also described in this paper.
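
    The computational core of the Rosenblueth method is easy to state: evaluate the FOS function at all 2^n sign combinations of mean ± one standard deviation and summarize the results. A minimal sketch for uncorrelated variables with equal weights, assuming a normal distribution of FOS when converting to a failure probability:

    ```python
    from itertools import product
    import math
    import numpy as np

    def rosenblueth_pem(fos, means, sds, critical=1.4):
        """Two-point estimate method for uncorrelated variables with equal
        weights: evaluate fos at all 2^n combinations of mean +/- one standard
        deviation, then P[FOS < critical] assuming FOS is normally distributed."""
        vals = [fos(*(m + sign * s for m, s, sign in zip(means, sds, signs)))
                for signs in product((-1.0, 1.0), repeat=len(means))]
        mu, sd = float(np.mean(vals)), float(np.std(vals))
        pf = 0.5 * (1.0 + math.erf((critical - mu) / (sd * math.sqrt(2.0))))
        return mu, sd, pf
    ```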

  9. Empirical Bayes Point Estimates of True Score Using a Compound Binomial Error Model. Research Memorandum 74-11.

    ERIC Educational Resources Information Center

    Kearns, Jack

    Empirical Bayes point estimates of true score may be obtained if the distribution of observed score for a fixed examinee is approximated in one of several ways by a well-known compound binomial model. The Bayes estimates of true score may be expressed in terms of the observed score distribution and the distribution of a hypothetical binomial test.…

  10. Point estimation of soil water infiltration process using Artificial Neural Networks for some calcareous soils

    NASA Astrophysics Data System (ADS)

    Parchami-Araghi, Farzin; Mirlatifi, Seyed Majid; Ghorbani Dashtaki, Shoja; Mahdian, Mohmmad Hossein

    2013-02-01

    Infiltration is one of the most important components of the hydrological cycle. The direct measurement of infiltration is laborious, time consuming, and expensive, and it often involves large spatial and temporal variability. Thus, any indirect estimation of this process is quite helpful. The main objective of this study was to predict the cumulative infiltration at specific time steps using readily available soil data and Artificial Neural Networks (ANNs). 210 double-ring infiltration data were collected from different regions of Iran. Basic soil properties of the two upper pedogenic layers (A and B horizons), including initial soil water content, soil water contents at field capacity (-33 kPa) and permanent wilting point (-1500 kPa), bulk density, particle-size distributions, organic carbon, gravel content (>2 mm size), and CaCO3 content, were determined. The feedforward multilayer perceptron ANN model was used to predict the cumulative infiltration at 5, 10, 15, 20, 30, 45, 60, 90, 120, 150, 180, 210, 240, and 270 min after the start of the infiltration experiment and at the time of the basic infiltration rate. The developed ANN models were categorized into type I and type II models. The basic soil properties of the first upper soil horizon were hierarchically used as inputs to develop the type I ANN models. In contrast, the type II ANN models were developed with the available soil properties of the two upper soil horizons implemented as inputs using the principal component analysis technique. Results of the reliability test for the developed ANN models indicated that the type I models, with an RMSE of 1.136-9.312 cm, had the best performance in estimating the cumulative infiltration. Type I ANN models, with a mean RMSD of 6.307 cm, also had the best performance in estimating the cumulative infiltration curve (CIC). Results indicated that, at the 1% probability level, the ANN-derived CIC can be accepted as one of the replications of a reliable infiltration experiment.

  11. A new methodology in fast and accurate matching of the 2D and 3D point clouds extracted by laser scanner systems

    NASA Astrophysics Data System (ADS)

    Torabi, M.; Mousavi G., S. M.; Younesian, D.

    2015-03-01

    Registration of point clouds is a common challenge in computer vision applications. As an application, the matching of train wheel profiles extracted from two viewpoints is studied in this paper. The registration problem is formulated as an optimization problem. An error minimization function for registration of two partially overlapping point clouds is presented. The error function is defined as the sum of the squared distances between the source points and their corresponding pairs, which is to be minimized. The corresponding pairs are obtained through Iterative Closest Point (ICP) variants. Here, a point-to-plane ICP variant is employed. Principal Component Analysis (PCA) is used to obtain tangent planes. It is thus shown that minimization of the proposed objective function reduces to the point-to-plane ICP variant. We utilized this algorithm to register point clouds of two partially overlapping train wheel profiles extracted from two viewpoints in 2D. A number of synthetic and real point clouds in 3D were also studied to evaluate the reliability and rate of convergence of our method compared with other registration methods.
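
    A minimal sketch of the two ingredients named above, PCA tangent planes and the point-to-plane error, assuming brute-force nearest-neighbour search and an arbitrary neighbourhood size k; this is an illustration, not the authors' implementation.

```python
import numpy as np

def pca_normals(cloud, k=8):
    """Estimate a unit normal at each point from the PCA of its k nearest
    neighbours: the eigenvector of the local covariance matrix with the
    smallest eigenvalue approximates the tangent-plane normal."""
    normals = np.empty_like(cloud)
    for i, p in enumerate(cloud):
        nbrs = cloud[np.argsort(np.sum((cloud - p) ** 2, axis=1))[:k]]
        w, v = np.linalg.eigh(np.cov(nbrs.T))   # ascending eigenvalues
        normals[i] = v[:, 0]
    return normals

def point_to_plane_error(source, target, target_normals):
    """Sum of squared point-to-plane distances: each source point is paired
    with its nearest target point and projected onto that point's normal."""
    err = 0.0
    for p in source:
        j = int(np.argmin(np.sum((target - p) ** 2, axis=1)))
        err += float(np.dot(target[j] - p, target_normals[j])) ** 2
    return err

# Inside one ICP iteration, this error is minimized over candidate rigid
# transforms of the source cloud; normals are computed on the target once.
```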

  12. Estimating the operating point of the cochlear transducer using low-frequency biased distortion products

    PubMed Central

    Brown, Daniel J.; Hartsock, Jared J.; Gill, Ruth M.; Fitzgerald, Hillary E.; Salt, Alec N.

    2009-01-01

    Distortion products in the cochlear microphonic (CM) and in the ear canal in the form of distortion product otoacoustic emissions (DPOAEs) are generated by nonlinear transduction in the cochlea and are related to the resting position of the organ of Corti (OC). A 4.8 Hz acoustic bias tone was used to displace the OC, while the relative amplitude and phase of distortion products evoked by a single tone [most often 500 Hz, 90 dB SPL (sound pressure level)] or two simultaneously presented tones (most often 4 kHz and 4.8 kHz, 80 dB SPL) were monitored. Electrical responses recorded from the round window, scala tympani and scala media of the basal turn, and acoustic emissions in the ear canal were simultaneously measured and compared during the bias. Bias-induced changes in the distortion products were similar to those predicted from computer models of a saturating transducer with a first-order Boltzmann distribution. Our results suggest that biased DPOAEs can be used to non-invasively estimate the OC displacement, producing a measurement equivalent to the transducer operating point obtained via Boltzmann analysis of the basal turn CM. Low-frequency biased DPOAEs might provide a diagnostic tool to objectively diagnose abnormal displacements of the OC, as might occur with endolymphatic hydrops. PMID:19354389

  13. Step change point estimation in the multivariate-attribute process variability using artificial neural networks and maximum likelihood estimation

    NASA Astrophysics Data System (ADS)

    Maleki, Mohammad Reza; Amiri, Amirhossein; Mousavi, Seyed Meysam

    2015-07-01

    In some statistical process control applications, the quality of the product or process is represented by a combination of correlated variable and attribute quality characteristics. In such processes, identifying the time at which the out-of-control state manifests can help quality engineers eliminate the assignable causes through proper corrective actions. In this paper, we first use an artificial neural network (ANN)-based method from the literature for detecting variance shifts and diagnosing the sources of variation in multivariate-attribute processes. Then, based on the quality characteristics responsible for the out-of-control state, we propose a modular ANN-based model for estimating the time of a step change in the multivariate-attribute process variability. We also compare the performance of the ANN-based estimator with an estimator based on maximum likelihood estimation (MLE). A numerical example based on a simulation study is used to evaluate the performance of the estimators in terms of accuracy and precision criteria. The results of the simulation study show that the proposed ANN-based estimator outperforms the MLE estimator under different out-of-control scenarios in which shifts of different magnitudes in the covariance matrix of the multivariate-attribute quality characteristics are manifested.
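
    For intuition about the MLE baseline, the sketch below profiles the likelihood over candidate change times for the simplest possible case, a univariate Gaussian mean shift; the record's multivariate-attribute variance-shift model would replace the residual sums of squares with the corresponding covariance-based likelihood.

```python
import numpy as np

def mle_step_change_point(x):
    """MLE of the time of a single step change in the mean of a Gaussian
    sequence: for each candidate tau the profile likelihood is maximized
    by the two segment means, so the MLE of tau minimizes the pooled
    residual sum of squares of the two segments."""
    best_tau, best_rss = 1, np.inf
    for tau in range(1, len(x)):          # change occurs after sample tau
        rss = (np.sum((x[:tau] - x[:tau].mean()) ** 2)
               + np.sum((x[tau:] - x[tau:].mean()) ** 2))
        if rss < best_rss:
            best_tau, best_rss = tau, rss
    return best_tau

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0.0, 1.0, 60), rng.normal(1.5, 1.0, 40)])
print(mle_step_change_point(x))   # close to the true change time, 60
```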

  14. A systematic approach for the accurate non-invasive estimation of blood glucose utilizing a novel light-tissue interaction adaptive modelling scheme

    NASA Astrophysics Data System (ADS)

    Rybynok, V. O.; Kyriacou, P. A.

    2007-10-01

    Diabetes is one of the biggest health challenges of the 21st century. The obesity epidemic, sedentary lifestyles and an ageing population mean the prevalence of the condition is currently doubling every generation. Diabetes is associated with serious chronic ill health, disability and premature mortality. Long-term complications, including heart disease, stroke, blindness, kidney disease and amputations, make the greatest contribution to the costs of diabetes care. Many of these long-term effects could be avoided with earlier, more effective monitoring and treatment. Currently, blood glucose can only be monitored through the use of invasive techniques. To date there is no widely accepted and readily available non-invasive monitoring technique to measure blood glucose, despite the many attempts. This paper tackles one of the most difficult non-invasive monitoring problems, that of blood glucose, and proposes a novel approach that will enable the accurate, calibration-free estimation of glucose concentration in blood. This approach is based on spectroscopic techniques and a new adaptive modelling scheme. The theoretical implementation and the effectiveness of the adaptive modelling scheme for this application are described, and a detailed mathematical evaluation is employed to show that such a scheme is capable of accurately extracting the concentration of glucose from a complex biological medium.

  15. A technique for estimating spatial sampling errors in coarse-scale soil moisture estimates derived from point-scale observations

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The validation of satellite surface soil moisture retrievals requires the spatial aggregation of point-scale ground soil moisture measurements up to coarse resolution satellite footprint scales (>10 km). In regions containing a limited number of ground measurements per satellite footprint, a large c...

  16. GIS based probabilistic analysis for shallow landslide susceptibility using Point Estimate Method

    NASA Astrophysics Data System (ADS)

    Park, Hyuck-Jin; Lee, Jung-Hyun

    2016-04-01

    The mechanical properties of soil materials (such as cohesion and friction angle) used in physically based models for landslide susceptibility analysis have been identified as a major source of uncertainty, caused by complex geological conditions and spatial variability. In addition, limited sampling is another source of uncertainty, since the input parameters are obtained over broad areas. Therefore, in order to properly account for the uncertainty in the mechanical parameters, the parameters were treated as random variables and a probabilistic analysis method was used. In many previous studies, Monte Carlo simulation has been widely used for the probabilistic analysis. However, since the Monte Carlo method requires a large number of repeated calculations and a great deal of computation time to evaluate the probability of failure, it is not easy to adopt this approach for an extensive, regional study area. Therefore, this study proposes an alternative probabilistic analysis approach using the Point Estimate Method (PEM), which overcomes this shortcoming of Monte Carlo simulation: PEM requires only the mean and standard deviation of the random variables and obtains the probability of failure with a simple calculation. The proposed approach was implemented in a GIS-based environment and applied to a study area which had experienced a large number of landslides. The spatial database for the input parameters and the landslide inventory map were constructed in a grid-based GIS environment. To evaluate the performance of the model, the results of the landslide susceptibility assessment were compared with the landslide inventories using an ROC graph.

  17. Effects of age, weight, and fat slaughter end points on estimates of breed and retained heterosis effects for carcass traits.

    PubMed

    Ríos-Utrera, A; Cundiff, L V; Gregory, K E; Koch, R M; Dikeman, M E; Koohmaraie, M; Van Vleck, L D

    2006-01-01

    The influence of different levels of adjusted fat thickness (AFT) and HCW slaughter end points (covariates) on estimates of breed and retained heterosis effects was studied for 14 carcass traits from serially slaughtered purebred and composite steers from the US Meat Animal Research Center (MARC). Contrasts among breed solutions were estimated at 0.7, 1.1, and 1.5 cm of AFT, and at 295.1, 340.5, and 385.9 kg of HCW. For constant slaughter age, contrasts were adjusted to the overall mean (432.5 d). Breed effects for Red Poll, Hereford, Limousin, Braunvieh, Pinzgauer, Gelbvieh, Simmental, Charolais, MARC I, MARC II, and MARC III were estimated as deviations from Angus. In addition, purebreds were pooled into 3 groups based on lean-to-fat ratio, and then differences were estimated among groups. Retention of combined individual and maternal heterosis was estimated for each composite. Mean retained heterosis for the 3 composites also was estimated. Breed rankings and expression of heterosis varied within and among end points. For example, Charolais had greater (P < 0.05) dressing percentages than Angus at the 2 largest levels of AFT and smaller (P < 0.01) percentages at the 2 largest levels of HCW, whereas the 2 breeds did not differ (P ≥ 0.05) at a constant age. The MARC III composite produced 9.7 kg more (P < 0.01) fat than Angus at AFT of 0.7 cm, but 7.9 kg less (P < 0.05) at AFT of 1.5 cm. For MARC III, the estimate of retained heterosis for HCW was significant (P < 0.05) at the lowest level of AFT, but at the intermediate and greatest levels estimates were nil. The pattern was the same for MARC I and MARC III for LM area. Adjustment for age resulted in near zero estimates of retained heterosis for AFT, and similarly, adjustment for HCW resulted in nil estimates of retained heterosis for LM area. For actual retail product as a percentage of HCW, the estimate of retained heterosis for MARC III was negative (-1.27%; P < 0.05) at 0.7 cm but was significantly

  18. Depth Estimation and Specular Removal for Glossy Surfaces Using Point and Line Consistency with Light-Field Cameras.

    PubMed

    Tao, Michael W; Su, Jong-Chyi; Wang, Ting-Chun; Malik, Jitendra; Ramamoorthi, Ravi

    2016-06-01

    Light-field cameras have now become available in both consumer and industrial applications, and recent papers have demonstrated practical algorithms for depth recovery from a passive single-shot capture. However, current light-field depth estimation methods are designed for Lambertian objects and fail or degrade for glossy or specular surfaces. The standard Lambertian photoconsistency measure considers the variance of different views, effectively enforcing point-consistency, i.e., that all views map to the same point in RGB space. This variance or point-consistency condition is a poor metric for glossy surfaces. In this paper, we present a novel theory of the relationship between light-field data and reflectance from the dichromatic model. We present a physically-based and practical method to estimate the light source color and separate specularity. We present a new photo consistency metric, line-consistency, which represents how viewpoint changes affect specular points. We then show how the new metric can be used in combination with the standard Lambertian variance or point-consistency measure to give us results that are robust against scenes with glossy surfaces. With our analysis, we can also robustly estimate multiple light source colors and remove the specular component from glossy objects. We show that our method outperforms current state-of-the-art specular removal and depth estimation algorithms in multiple real world scenarios using the consumer Lytro and Lytro Illum light field cameras. PMID:26372203

  19. Improved nonparametric estimation of the optimal diagnostic cut-off point associated with the Youden index under different sampling schemes.

    PubMed

    Yin, Jingjing; Samawi, Hani; Linder, Daniel

    2016-07-01

    A diagnostic cut-off point of a biomarker measurement is needed for classifying a random subject to be either diseased or healthy. However, the cut-off point is usually unknown and needs to be estimated by some optimization criteria. One important criterion is the Youden index, which has been widely adopted in practice. The Youden index, which is defined as the maximum of (sensitivity + specificity -1), directly measures the largest total diagnostic accuracy a biomarker can achieve. Therefore, it is desirable to estimate the optimal cut-off point associated with the Youden index. Sometimes, taking the actual measurements of a biomarker is very difficult and expensive, while ranking them without the actual measurement can be relatively easy. In such cases, ranked set sampling can give more precise estimation than simple random sampling, as ranked set samples are more likely to span the full range of the population. In this study, kernel density estimation is utilized to numerically solve for an estimate of the optimal cut-off point. The asymptotic distributions of the kernel estimators based on two sampling schemes are derived analytically and we prove that the estimators based on ranked set sampling are relatively more efficient than that of simple random sampling and both estimators are asymptotically unbiased. Furthermore, the asymptotic confidence intervals are derived. Intensive simulations are carried out to compare the proposed method using ranked set sampling with simple random sampling, with the proposed method outperforming simple random sampling in all cases. A real data set is analyzed for illustrating the proposed method. PMID:26756282
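
    A minimal sketch of the kernel-density route to the Youden-optimal cut-off for the simple-random-sampling case (the ranked-set-sampling refinement and the asymptotic intervals are not reproduced); data and bandwidth choices are illustrative.

```python
import numpy as np
from scipy.stats import gaussian_kde

def youden_cutoff(healthy, diseased, grid_size=512):
    """Estimate the cut-off maximizing the Youden index
    J(c) = sensitivity(c) + specificity(c) - 1, using Gaussian kernel
    density estimates: specificity is the healthy CDF at c, and
    sensitivity is one minus the diseased CDF at c."""
    kde_h, kde_d = gaussian_kde(healthy), gaussian_kde(diseased)
    grid = np.linspace(min(healthy.min(), diseased.min()),
                       max(healthy.max(), diseased.max()), grid_size)
    spec = np.array([kde_h.integrate_box_1d(-np.inf, c) for c in grid])
    sens = 1.0 - np.array([kde_d.integrate_box_1d(-np.inf, c) for c in grid])
    j = sens + spec - 1.0
    return grid[np.argmax(j)], float(j.max())

rng = np.random.default_rng(1)
cut, j = youden_cutoff(rng.normal(0, 1, 200), rng.normal(2, 1, 200))
print(cut, j)   # the optimal cut-off sits near the densities' crossing, ~1
```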

  20. On the Bayesness, minimaxity and admissibility of point estimators of allelic frequencies.

    PubMed

    Martínez, Carlos Alberto; Khare, Kshitij; Elzo, Mauricio A

    2015-10-21

    In this paper, decision theory was used to derive Bayes and minimax decision rules to estimate allelic frequencies and to explore their admissibility. Decision rules with uniformly smallest risk usually do not exist and one approach to solve this problem is to use the Bayes principle and the minimax principle to find decision rules satisfying some general optimality criterion based on their risk functions. Two cases were considered, the simpler case of biallelic loci and the more complex case of multiallelic loci. For each locus, the sampling model was a multinomial distribution and the prior was a Beta (biallelic case) or a Dirichlet (multiallelic case) distribution. Three loss functions were considered: squared error loss (SEL), Kullback-Leibler loss (KLL) and quadratic error loss (QEL). Bayes estimators were derived under these three loss functions and were subsequently used to find minimax estimators using results from decision theory. The Bayes estimators obtained from SEL and KLL turned out to be the same. Under certain conditions, the Bayes estimator derived from QEL led to an admissible minimax estimator (which was also equal to the maximum likelihood estimator). The SEL also allowed finding admissible minimax estimators. Some estimators had uniformly smaller variance than the MLE and under suitable conditions the remaining estimators also satisfied this property. In addition to their statistical properties, the estimators derived here allow variation in allelic frequencies, which is closer to the reality of finite populations exposed to evolutionary forces. PMID:26271891
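
    For the biallelic case under squared error loss, the Bayes rule reduces to the posterior mean of a Beta-binomial model; the sketch below shows only that piece, with invented counts, and does not reproduce the paper's minimax or admissibility arguments.

```python
def bayes_allele_freq(n_a, n_total, alpha=1.0, beta=1.0):
    """Bayes estimator of an allele frequency under squared error loss:
    a binomial likelihood (n_a copies of the allele among n_total gene
    copies) with a Beta(alpha, beta) prior yields a Beta posterior, and
    the estimator is its mean."""
    return (n_a + alpha) / (n_total + alpha + beta)

# 100 gene copies (50 diploid individuals), uniform Beta(1, 1) prior:
print(bayes_allele_freq(37, 100))   # ~0.373, shrunk slightly toward 0.5
print(37 / 100)                     # maximum likelihood estimate, 0.370
```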

  1. The estimation of pointing angle and normalized surface scattering cross section from GEOS-3 radar altimeter measurements

    NASA Technical Reports Server (NTRS)

    Brown, G. S.; Curry, W. J.

    1977-01-01

    The statistical error of the pointing angle estimation technique is determined as a function of the effective receiver signal-to-noise ratio. Other sources of error are addressed and evaluated, with inadequate calibration being of major concern. The impact of pointing error on the computation of the normalized surface scattering cross section (sigma) from radar data and on the waveform attitude-induced altitude bias is considered, and quantitative results are presented. Pointing angle and sigma processing algorithms are presented along with some initial data. The intensive mode clean vs. clutter AGC calibration problem is analytically resolved. The use of clutter AGC data in the intensive mode is confirmed as the correct calibration set for the sigma computations.

  2. Genetic diversity estimates point to immediate efforts for conserving the endangered Tibetan sheep of India

    PubMed Central

    Sharma, Rekha; Kumar, Brijesh; Arora, Reena; Ahlawat, Sonika; Mishra, A.K.; Tantia, M.S.

    2016-01-01

    Tibetan is a valuable Himalayan sheep breed classified as endangered. Knowledge of the level and distribution of genetic diversity in Tibetan sheep is important for designing conservation strategies for their sustainable survival and to preserve their evolutionary potential. Thus, for the first time, genetic variability in the Tibetan population was assessed with twenty five inter-simple sequence repeat markers. All the microsatellites were polymorphic and a total of 148 alleles were detected across these loci. The observed number of alleles across all the loci was more than the effective number of alleles and ranged from 3 (BM6506) to 11 (BM6526), with a mean of 5.920 ± 0.387 alleles per locus. The average observed heterozygosity was less than the expected heterozygosity. The observed and expected heterozygosity values ranged from 0.150 (BM1314) to 0.9 (OarCP20) with an overall mean of 0.473 ± 0.044, and from 0.329 (BM8125) to 0.885 (BM6526) with an overall mean of 0.672 ± 0.030, respectively. The lower heterozygosity pointed towards diminished genetic diversity in the population. Thirteen microsatellite loci exhibited significant (P < 0.05) departures from the Hardy–Weinberg proportions in the population. The estimate of heterozygote deficiency varied from − 0.443 (OarCP20) to 0.668 (OarFCB128) with a mean positive value of 0.302 ± 0.057. A normal 'L'-shaped distribution in the mode-shift test and a non-significant heterozygote excess under different models suggested the absence of a recent bottleneck in the existing Tibetan population. In view of the declining population of Tibetan sheep (fewer than 250) in the breeding tract, the need of the hour is immediate scientific management of the population so as to increase its size while retaining the founder alleles to the maximum possible extent. PMID:27014586

  4. Instantaneous and time-averaged dispersion and measurement models for estimation theory applications with elevated point source plumes

    NASA Technical Reports Server (NTRS)

    Diamante, J. M.; Englar, T. S., Jr.; Jazwinski, A. H.

    1977-01-01

    Estimation theory, which originated in guidance and control research, is applied to the analysis of air quality measurements and atmospheric dispersion models to provide reliable area-wide air quality estimates. A method for low dimensional modeling (in terms of the estimation state vector) of the instantaneous and time-average pollutant distributions is discussed. In particular, the fluctuating plume model of Gifford (1959) is extended to provide an expression for the instantaneous concentration due to an elevated point source. Individual models are also developed for all parameters in the instantaneous and the time-average plume equations, including the stochastic properties of the instantaneous fluctuating plume.

  5. Accurate 3D point cloud comparison and volumetric change analysis of Terrestrial Laser Scan data in a hard rock coastal cliff environment

    NASA Astrophysics Data System (ADS)

    Earlie, C. S.; Masselink, G.; Russell, P.; Shail, R.; Kingston, K.

    2013-12-01

    Our understanding of the evolution of hard rock coastlines is limited due to the episodic nature and 'slow' rate at which changes occur. High-resolution surveying techniques, such as Terrestrial Laser Scanning (TLS), have just begun to be adopted as a method of obtaining detailed point cloud data to monitor topographical changes over short periods of time (weeks to months). However, the difficulties involved in comparing consecutive point cloud data sets in a complex three-dimensional plane, such as occlusion due to surface roughness and the positioning of the data capture point in a constantly changing environment (a beach profile), mean that comparing data sets can lead to errors in the region of 10 - 20 cm. Meshing techniques are often used for point cloud data analysis of simple surfaces, but for surfaces such as rocky cliff faces this technique has been found to be ineffective. Recession rates of hard rock coastlines in the UK are typically determined using aerial photography or airborne LiDAR data, yet the detail of the important changes occurring to the cliff face and toe is missed by such techniques. In this study we apply an algorithm (M3C2 - Multiscale Model to Model Cloud Comparison), initially developed for analysing fluvial morphological change, that directly compares the point clouds using surface normals that are consistent with surface roughness and measures the change that occurs along the normal direction (Lague et al., 2013). The surface changes are analysed using a set of user-defined scales based on surface roughness and registration error. Once the correct parameters are defined, the volumetric cliff face changes are calculated by integrating the mean distance between the point clouds. The analysis has been undertaken at two hard rock sites identified for their active erosion, located on the UK's south west peninsula at Porthleven in south west Cornwall and Godrevy in north Cornwall. Alongside TLS point cloud data, in

  6. Incorporating variability in point estimates in risk assessment: Bridging the gap between LC50 and population endpoints.

    PubMed

    Stark, John D; Vargas, Roger I; Banks, John E

    2015-07-01

    Historically, point estimates such as the median lethal concentration (LC50) have been instrumental in assessing risks associated with toxicants to rare or economically important species. In recent years, growing awareness of the shortcomings of this approach has led to an increased focus on analyses using population endpoints. However, risk assessment of pesticides still relies heavily on large amounts of LC50 data amassed over decades in the laboratory. Despite the fact that these data are generally well replicated, little or no attention has been given to the sometimes high levels of variability associated with the generation of point estimates. This is especially important in agroecosystems, where arthropod predator-prey interactions are often disrupted by the use of pesticides. Using laboratory-derived data for 4 economically important species (2 fruit fly pest species and 2 braconid parasitoid species) and matrix-based population models, the authors demonstrate in the present study a method for bridging traditional point estimate risk assessments with population outcomes. The results illustrate that even closely related species can show strikingly divergent responses to the same exposures to pesticides. Furthermore, the authors show that using different values within the 95% confidence intervals of LC50 values can result in very different population outcomes, ranging from quick recovery to extinction for both pest and parasitoid species. The authors discuss the implications of these results and emphasize the need to incorporate variability and uncertainty in point estimates for use in risk assessment. PMID:25760716
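
    A toy sketch of the bridge the record describes: scale the survival entries of a hypothetical stage-structured (Leslie) matrix by factors spanning an LC50-style interval and compare the projected population sizes. The matrix entries and scaling factors are invented for illustration, not taken from the study.

```python
import numpy as np

def final_population(leslie, n0, steps=20):
    """Project a stage-structured population 'steps' time steps forward."""
    n = np.asarray(n0, dtype=float)
    for _ in range(steps):
        n = leslie @ n
    return n.sum()

# Hypothetical 3-stage matrix: fecundities on the top row, stage survival
# probabilities on the subdiagonal (values invented for illustration).
base = np.array([[0.0, 2.0, 4.0],
                 [0.5, 0.0, 0.0],
                 [0.0, 0.7, 0.0]])

# Scale survival by factors such as the effects implied by different
# points inside an LC50 95% confidence interval:
for factor in (1.0, 0.7, 0.4):
    m = base.copy()
    m[1:, :] *= factor            # toxicant exposure lowers stage survival
    print(factor, round(final_population(m, [100, 20, 5]), 1))
# Outcomes range from growth to decline across the interval.
```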

  7. Incorporating variability in point estimates in risk assessment: bridging the gap between LC50 and population endpoints

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Historically, the use of point estimates such as the LC50 has been instrumental in assessing the risk associated with toxicants to rare or economically important species. In recent years, growing awareness of the shortcomings of this approach has led to an increased focus on analyses using populatio...

  8. Bayesian Estimation of Fugitive Methane Point Source Emission Rates from a Single Downwind High-Frequency Gas Sensor

    EPA Science Inventory

    Bayesian Estimation of Fugitive Methane Point Source Emission Rates from a Single Downwind High-Frequency Gas Sensor With the tremendous advances in onshore oil and gas exploration and production (E&P) capability comes the realization that new tools are needed to support env...

  9. ESTIMATING THE RATE OF PLASMID TRANSFER: AN END-POINT METHOD

    EPA Science Inventory

    A method is described for determining the rate parameter of conjugative plasmid transfer that is based on single estimates of donor, recipient and transconjugant densities, and the growth rate in exponential phase of the mating culture. The formula for estimating the plasmid transfer ...

  10. Weakly Informative Prior for Point Estimation of Covariance Matrices in Hierarchical Models

    ERIC Educational Resources Information Center

    Chung, Yeojin; Gelman, Andrew; Rabe-Hesketh, Sophia; Liu, Jingchen; Dorie, Vincent

    2015-01-01

    When fitting hierarchical regression models, maximum likelihood (ML) estimation has computational (and, for some users, philosophical) advantages compared to full Bayesian inference, but when the number of groups is small, estimates of the covariance matrix (S) of group-level varying coefficients are often degenerate. One can do better, even from…

  11. Global shape estimates and GIS cartography of Io and Enceladus using new control point network

    NASA Astrophysics Data System (ADS)

    Nadezhdina, I.; Patraty, V.; Shishkina, L.; Zhukov, D.; Zubarev, A.; Karachevtseva, I.; Oberst, J.

    2012-04-01

    We have analyzed a total of 53 Galileo and Voyager images of Io and 54 Cassini images of Enceladus to derive new geodetic control point networks for the two satellites. To derive the network for Io we used a subset of 66 images from those used in previous control point network studies [1, 2]. Additionally, we have carried out new point measurements. We used recently reconstructed Galileo spacecraft trajectory data supplied by the spacecraft navigation team at JPL. A total of 1956 tie point measurements for Io and 4392 for Enceladus were carried out and processed by performing photogrammetric bundle block adjustments. Measurements and block adjustments were performed with the «PHOTOMOD» software [3], which was specially adapted for this study to accommodate global networks of small bodies such as Io and Enceladus. As a result, two catalogs with the Cartesian three-dimensional coordinates of 197 and 351 control points were obtained for Io and Enceladus, respectively. The control points for Io have a mean overall accuracy of 4985.7 m (RMS). The individual accuracies of the control points for Enceladus differ substantially over the surface (the range is from 0.1 to 36.0 km) because of limitations in image coverage and resolution. We also determined best-fit spheres, spheroids, and tri-axial ellipsoids. The centers of the models were found to be shifted from the coordinate system origin, attesting to possible errors in the ephemeris of Io. Conclusion and future work: A comparison of our results for Io with the most recent control point network analysis [2] revealed that we achieved the same control point accuracy using a smaller number of images and measurements (this study: 1956 measurements; DLR study: 4392). This probably attests to the fact that the now-available new navigation data are internally more consistent. An analysis of the data is currently in progress. We report that control point measurements and global network

  12. Automatic NMO Correction and Full Common Depth Point NMO Velocity Field Estimation in Anisotropic Media

    NASA Astrophysics Data System (ADS)

    Sedek, Mohamed; Gross, Lutz; Tyson, Stephen

    2016-07-01

    We present a new computational method for automatic normal moveout (NMO) correction that not only accurately flattens and corrects the far-offset data, but simultaneously provides the NMO velocity (v_nmo) for each individual seismic trace. The method is based on a predefined number of NMO velocity sweeps using linear vertical interpolation of different NMO velocities at each seismic trace. At each sweep, we measure the semblance between the zero-offset trace (pilot trace) and the next seismic trace using a trace-by-trace rather than sample-by-sample semblance measure; after all the sweeps are done, the one with the maximum semblance value is chosen, which is assumed to be the most suitable NMO velocity trace, i.e. the one that accurately flattens the seismic reflection events. The other traces follow the same process, and a final velocity field is then extracted. Isotropic, anisotropic and laterally heterogeneous synthetic geological models were built to test the method. Synthetic background noise ranging from 10 to 30% was applied to the models. In addition, the method was tested on Hess's VTI (vertical transverse isotropy) model. Furthermore, we tested our method on a real pre-stack seismic CDP gather from a gas field in Alaska. The results from the presented examples show an excellent NMO correction and a reasonably accurate extracted NMO velocity field.
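
    The core of any such velocity sweep is a hyperbolic moveout correction plus a trace-level semblance against the pilot trace. A minimal sketch of those two pieces and the sweep itself, not the authors' code:

```python
import numpy as np

def nmo_correct(trace, offset, v, dt):
    """Hyperbolic NMO correction of one trace: the sample for zero-offset
    time t0 is read from recorded time t(x) = sqrt(t0**2 + (offset/v)**2)."""
    n = len(trace)
    t0 = np.arange(n) * dt
    tx = np.sqrt(t0 ** 2 + (offset / v) ** 2)
    return np.interp(tx / dt, np.arange(n), trace, right=0.0)

def semblance(a, b):
    """Trace-level similarity between a corrected trace and the pilot
    (zero-offset) trace: normalized inner product."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def best_velocity(trace, pilot, offset, dt, v_sweep):
    """Keep the trial velocity that best flattens the trace onto the pilot."""
    return max(v_sweep,
               key=lambda v: semblance(nmo_correct(trace, offset, v, dt), pilot))
```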

  13. Impact of Footprint Diameter and Off-Nadir Pointing on the Precision of Canopy Height Estimates from Spaceborne Lidar

    NASA Technical Reports Server (NTRS)

    Pang, Yong; Lefskky, Michael; Sun, Guoqing; Ranson, Jon

    2011-01-01

    A spaceborne lidar mission could serve multiple scientific purposes including remote sensing of ecosystem structure, carbon storage, terrestrial topography and ice sheet monitoring. The measurement requirements of these different goals will require compromises in sensor design. Footprint diameters that would be larger than optimal for vegetation studies have been proposed. Some spaceborne lidar mission designs include the possibility that a lidar sensor would share a platform with another sensor, which might require off-nadir pointing at angles of up to 16°. To resolve multiple mission goals and sensor requirements, detailed knowledge of the sensitivity of sensor performance to these aspects of mission design is required. This research used a radiative transfer model to investigate the sensitivity of forest height estimates to footprint diameter, off-nadir pointing and their interaction over a range of forest canopy properties. An individual-based forest model was used to simulate stands of mixed conifer forest in the Tahoe National Forest (Northern California, USA) and stands of deciduous forests in the Bartlett Experimental Forest (New Hampshire, USA). Waveforms were simulated for stands generated by a forest succession model using footprint diameters of 20 m to 70 m. Off-nadir angles of 0° to 16° were considered for a 25 m footprint diameter. Footprint diameters in the range of 25 m to 30 m were optimal for estimates of maximum forest height (R² of 0.95 and RMSE of 3 m). As expected, the contribution of vegetation height to the vertical extent of the waveform decreased with larger footprints, while the contribution of terrain slope increased. Precision of estimates decreased with an increasing off-nadir pointing angle, but off-nadir pointing had less impact on height estimates in deciduous forests than in coniferous forests. When pointing off-nadir, the decrease in precision was dependent on local incidence angle (the angle between the off

  14. Calculation of the geometrical three-point parameter constant appearing in the second order accurate effective medium theory expression for the B-term diffusion coefficient in fully porous and porous-shell random sphere packings.

    PubMed

    Deridder, Sander; Desmet, Gert

    2012-02-01

    Using computational fluid dynamics (CFD), the effective B-term diffusion constant γ(eff) has been calculated for four different random sphere packings with different particle size distributions and packing geometries. Both fully porous and porous-shell sphere packings are considered. The obtained γ(eff)-values have subsequently been used to determine the value of the three-point geometrical constant (ζ₂) appearing in the 2nd-order accurate effective medium theory expression for γ(eff). It was found that, whereas the 1st-order accurate effective medium theory expression is accurate to within 5% over most part of the retention factor range, the 2nd-order accurate expression is accurate to within 1% when calculated with the best-fit ζ₂-value. Depending on the exact microscopic geometry, the best-fit ζ₂-values typically lie in the range of 0.20-0.30, holding over the entire range of intra-particle diffusion coefficients typically encountered for small molecules (0.1 ≤ D(pz)/D(m) ≤ 0.5). These values are in agreement with the ζ₂-value proposed by Thovert et al. for the random packing they considered. PMID:22236565

  15. A Robust and Accurate Two-Step Auto-Labeling Conditional Iterative Closest Points (TACICP) Algorithm for Three-Dimensional Multi-Modal Carotid Image Registration

    PubMed Central

    Guo, Hengkai; Wang, Guijin; Huang, Lingyun; Hu, Yuxin; Yuan, Chun; Li, Rui; Zhao, Xihai

    2016-01-01

    Atherosclerosis is among the leading causes of death and disability. Combining information from multi-modal vascular images is an effective and efficient way to diagnose and monitor atherosclerosis, in which image registration is a key technique. In this paper a feature-based registration algorithm, the Two-step Auto-labeling Conditional Iterative Closest Points (TACICP) algorithm, is proposed to align three-dimensional carotid image datasets from ultrasound (US) and magnetic resonance (MR). Based on 2D segmented contours, a coarse-to-fine strategy is employed with two steps: a rigid initialization step and a non-rigid refinement step. The Conditional Iterative Closest Points (CICP) algorithm is applied in the rigid initialization step to obtain a robust rigid transformation and the label configurations. The labels and the CICP algorithm with a non-rigid thin-plate-spline (TPS) transformation model are then used to solve the non-rigid carotid deformation between different body positions. The results demonstrate that the proposed TACICP algorithm has achieved an average registration error of less than 0.2 mm with no failure case, which is superior to the state-of-the-art feature-based methods. PMID:26881433

  16. Sigma-point Kalman filtering for battery management systems of LiPB-based HEV battery packs. Part 1: Introduction and state estimation

    NASA Astrophysics Data System (ADS)

    Plett, Gregory L.

    We have previously described algorithms for a battery management system (BMS) that uses Kalman filtering (KF) techniques to estimate such quantities as: cell self-discharge rate, state-of-charge (SOC), nominal capacity, resistance, and others. Since the dynamics of electrochemical cells are not linear, we used a non-linear extension to the original KF called the extended Kalman filter (EKF). We were able to achieve very good estimates of SOC and other states and parameters using EKF. However, some applications, e.g., the battery management system of a hybrid electric vehicle (HEV), can require even more accurate estimates than these. To see how to improve on EKF, we must examine the mathematical foundation of that algorithm in more detail than we presented in the prior work, to discover the assumptions that are made in its derivation. Since these suppositions are not met exactly in the BMS application, we explore an alternative non-linear Kalman filtering technique known as "sigma-point Kalman filtering" (SPKF), which has some theoretical advantages that manifest themselves in more accurate predictions. The computational complexity of SPKF is of the same order as EKF, so the gains are made at little or no additional cost. The SPKF method as applied to BMS algorithms is presented here in a series of two papers. This first paper is devoted primarily to deriving the EKF and SPKF algorithms using the framework of sequential probabilistic inference. This is done to show that the two algorithms, which at first may look quite different, are actually very similar in most respects; also, we discover why we might expect the SPKF to outperform EKF in non-linear estimation applications. Results are presented for a battery pack based on a third-generation prototype LiPB cell, and compared with prior results using EKF. As expected, SPKF outperforms EKF, both in its estimate of SOC and in its estimate of the error bounds thereof. The second paper presents some more
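
    The heart of an SPKF is the deterministic sigma-point construction of the unscented transform. A minimal sketch with typical scaling parameters (not necessarily the variant or tuning used in these papers):

```python
import numpy as np

def unscented_transform(f, mean, cov, alpha=1.0, beta=2.0, kappa=0.0):
    """Propagate a Gaussian (mean, cov) through a nonlinear function f
    using 2n+1 sigma points: deterministic samples chosen to match the
    input mean and covariance, then reweighted to approximate the output
    mean and covariance. This is the core step inside an SPKF update."""
    n = len(mean)
    lam = alpha ** 2 * (n + kappa) - n
    root = np.linalg.cholesky((n + lam) * cov)      # matrix square root
    pts = np.vstack([mean, mean + root.T, mean - root.T])
    wm = np.full(2 * n + 1, 0.5 / (n + lam))
    wm[0] = lam / (n + lam)
    wc = wm.copy()
    wc[0] += 1.0 - alpha ** 2 + beta                # covariance weight
    ys = np.array([f(p) for p in pts])
    y_mean = wm @ ys
    d = ys - y_mean
    y_cov = (wc[:, None] * d).T @ d
    return y_mean, y_cov

# e.g. a mildly nonlinear measurement function of a 2-state cell model:
m, P = unscented_transform(lambda x: np.array([x[0] * x[1], np.sin(x[0])]),
                           np.array([1.0, 0.5]), np.diag([0.04, 0.01]))
print(m, P)
```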

  17. Estimation of point source fugitive emission rates from a single sensor time series: A conditionally-sampled Gaussian plume reconstruction

    NASA Astrophysics Data System (ADS)

    Foster-Wittig, Tierney A.; Thoma, Eben D.; Albertson, John D.

    2015-08-01

    Emerging mobile fugitive emissions detection and measurement approaches require robust inverse source algorithms to be effective. Two Gaussian plume inverse approaches are described for estimating emission rates from ground-level point sources observed from remote vantage points. The techniques were tested using data from 41 controlled methane release experiments (14 studies) and further investigated using 7 field studies executed downwind of oil and gas well pads in Wyoming. Analyzed measurements were acquired from stationary observation locations 18-106 m downwind of the emission sources. From the fluctuating wind direction, the lateral plume geometry is reconstructed using a derived relationship between the wind direction and crosswind plume position. The crosswind plume spread is determined with both modeled and reconstructed Gaussian plume approaches and estimates of source emission rates are found through inversion. The source emission rates were compared to a simple point source Gaussian emission estimation approach that is part of Draft EPA Method OTM 33A. Compared to the known release rates, the modeled, reconstructed, and point source Gaussian controlled release results yield average percent errors of -5%, -2%, and 6% with standard deviations of 29%, 25%, and 37%, respectively. Compared to each other, the three methods agree within 30% for 78% of all 48 observations (41 CR and 7 Wyoming).
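
    The simple point-source Gaussian estimate mentioned above amounts to solving the standard plume equation for the emission rate Q. The sketch below shows that inversion with illustrative inputs; in practice the dispersion widths would come from stability-class curves or the reconstructed plume, not the constants used here.

```python
import math

def gaussian_plume_emission(conc, u, sigma_y, sigma_z, y=0.0, z=0.0, h=0.0):
    """Invert the Gaussian plume equation (with ground reflection) for the
    emission rate Q of a point source, given a time-averaged concentration
    'conc' (g/m^3), mean wind speed u (m/s), crosswind offset y, receptor
    height z, source height h, and dispersion widths sigma_y, sigma_z (m)."""
    shape = (math.exp(-y ** 2 / (2 * sigma_y ** 2))
             * (math.exp(-(z - h) ** 2 / (2 * sigma_z ** 2))
                + math.exp(-(z + h) ** 2 / (2 * sigma_z ** 2))))
    return conc * 2.0 * math.pi * u * sigma_y * sigma_z / shape

# Illustrative near-field observation of a ground-level source:
print(gaussian_plume_emission(conc=2e-4, u=3.0, sigma_y=4.0, sigma_z=2.5))
```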

  18. A hierarchical model combining distance sampling and time removal to estimate detection probability during avian point counts

    USGS Publications Warehouse

    Amundson, Courtney L.; Royle, J. Andrew; Handel, Colleen M.

    2014-01-01

    Imperfect detection during animal surveys biases estimates of abundance and can lead to improper conclusions regarding distribution and population trends. Farnsworth et al. (2005) developed a combined distance-sampling and time-removal model for point-transect surveys that addresses both availability (the probability that an animal is available for detection; e.g., that a bird sings) and perceptibility (the probability that an observer detects an animal, given that it is available for detection). We developed a hierarchical extension of the combined model that provides an integrated analysis framework for a collection of survey points at which both distance from the observer and time of initial detection are recorded. Implemented in a Bayesian framework, this extension facilitates evaluating covariates on abundance and detection probability, incorporating excess zero counts (i.e. zero-inflation), accounting for spatial autocorrelation, and estimating population density. Species-specific characteristics, such as behavioral displays and territorial dispersion, may lead to different patterns of availability and perceptibility, which may, in turn, influence the performance of such hierarchical models. Therefore, we first test our proposed model using simulated data under different scenarios of availability and perceptibility. We then illustrate its performance with empirical point-transect data for a songbird that consistently produces loud, frequent, primarily auditory signals, the Golden-crowned Sparrow (Zonotrichia atricapilla); and for 2 ptarmigan species (Lagopus spp.) that produce more intermittent, subtle, and primarily visual cues. Data were collected by multiple observers along point transects across a broad landscape in southwest Alaska, so we evaluated point-level covariates on perceptibility (observer and habitat), availability (date within season and time of day), and abundance (habitat, elevation, and slope), and included a nested point
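
    The core quantity in such a combined model is the product of availability and perceptibility. A minimal sketch assuming a constant cue rate (availability) and a half-normal distance function (perceptibility), which are standard choices rather than necessarily the exact forms of this hierarchical model:

```python
import math

def combined_detection_prob(d, sigma, phi, T):
    """Overall detection probability for a point count: availability (the
    animal gives a cue during a count of length T, cue rate phi) times
    perceptibility (half-normal decay with distance d, scale sigma)."""
    availability = 1.0 - math.exp(-phi * T)
    perceptibility = math.exp(-d ** 2 / (2.0 * sigma ** 2))
    return availability * perceptibility

# e.g. a bird 100 m from the observer during a 10-min count:
print(combined_detection_prob(d=100.0, sigma=80.0, phi=0.4, T=10.0))
```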

  19. Considerations for estimating remote operator dust exposure using fixed-point samples on continuous mining sections

    SciTech Connect

    Listak, J.M.; Goodman, G.V.R.; Jankowski, R.A.

    1999-07-01

    Respirable dust studies were conducted at several underground coal mining operations to evaluate and compare the dust measurements of fixed-point machine-mounted samples on a continuous miner and personal samples of the remote miner operator. Fixed-point sampling was conducted at the right rear corner of the continuous miner which corresponded to the traditional location of the operator's cab. Although it has been documented that higher concentrations of dust are present at the machine-mounted position, this work sought to determine whether a relationship exists between the concentrations at the fixed-point position and the dust levels experienced at the remote operator position and whether this relationship could be applied on an industry-wide basis. To achieve this objective, gravimetric samplers were used to collect respirable dust data on continuous miner sections. These samplers were placed at a fixed position at the cab location of the continuous mining machine and on or near the remote miner operator during the 1 shift/day sampling periods. Dust sampling took place at mines with a variety of geographic locations and in-mine conditions. The dust concentration data collected at each site and for each sampling period were reduced to ratios of fixed-point to operator concentration. The ratios were calculated to determine similarities, differences, and/or variability at the two positions. The data show that dust concentrations at the remote operator position were always lower than dust concentrations measured at the fixed-point continuous miner location. However, the ratios of fixed-point to remote operator dust levels showed little consistency from shift to shift or from operation to operation. The fact that these ratios are so variable may introduce some uncertainty into attempting to correlate dust exposures of the remote operator to dust levels measured on the continuous mining machine.

  20. A method for estimating spikelet number per panicle: Integrating image analysis and a 5-point calibration model

    NASA Astrophysics Data System (ADS)

    Zhao, Sanqin; Gu, Jiabing; Zhao, Youyong; Hassan, Muhammad; Li, Yinian; Ding, Weimin

    2015-11-01

    Spikelet number per panicle (SNPP) is one of the most important yield components used to estimate rice yields. The use of high-throughput quantitative image analysis methods for understanding the diversity of the panicle has increased rapidly. However, it is difficult to simultaneously extract panicle branch and spikelet/grain information from images at the same resolution due to the different scales of these traits. To use a lower resolution while meeting the accuracy requirement, we proposed an interdisciplinary method that integrates image analysis and a 5-point calibration model to rapidly estimate SNPP. First, a linear relationship model between the total length of the primary branch (TLPB) and the SNPP was established based on the physiological characteristics of the panicle. Second, the TLPB and area (the primary branch region) traits were rapidly extracted by developing an image analysis algorithm. Finally, a 5-point calibration method was adopted to improve the universality of the model. With the proposed method, the proportion of panicle samples for which the error of the SNPP estimate was less than 10% exceeded 90%. The estimation accuracy was consistent with the accuracy determined using manual measurements. The proposed method uses available concepts and techniques for automated estimation of rice yield information.

  2. Estimating abundance from repeated presence-absence data or point counts

    USGS Publications Warehouse

    Royle, J. Andrew; Nichols, J.D.

    2003-01-01

    We describe an approach for estimating occupancy rate or the proportion of area occupied when heterogeneity in detection probability exists as a result of variation in abundance of the organism under study. The key feature of such problems, which we exploit, is that variation in abundance induces variation in detection probability. Thus, heterogeneity in abundance can be modeled as heterogeneity in detection probability. Moreover, this linkage between heterogeneity in abundance and heterogeneity in detection probability allows one to exploit a heterogeneous detection probability model to estimate the underlying distribution of abundances. Therefore, our method allows estimation of abundance from repeated observations of the presence or absence of animals without having to uniquely mark individuals in the population.
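
    The linkage described above can be written down directly: with site abundance N ~ Poisson(λ) and per-individual detection probability r, a site's detection probability is 1 − (1 − r)^N. A minimal sketch of the marginal likelihood for one site, with the sum over the latent N truncated for illustration:

```python
import math

def site_loglik(y, visits, lam, r, n_max=50):
    """Log-likelihood for one site in the abundance-induced heterogeneity
    model: N ~ Poisson(lam); detection probability given N is
    p(N) = 1 - (1 - r)**N; y detections in 'visits' repeat surveys are
    Binomial(visits, p(N)); the latent N is summed out (truncated at n_max)."""
    total = 0.0
    for n in range(n_max + 1):
        p = 1.0 - (1.0 - r) ** n
        pois = math.exp(-lam) * lam ** n / math.factorial(n)
        binom = math.comb(visits, y) * p ** y * (1.0 - p) ** (visits - y)
        total += pois * binom
    return math.log(total)

# Summing site_loglik over sites and maximizing over (lam, r) estimates
# abundance from detection/non-detection data without marking individuals:
print(site_loglik(y=2, visits=5, lam=1.2, r=0.3))
```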

  3. Estimating a Meaningful Point of Change: A Comparison of Exploratory Techniques Based on Nonparametric Regression

    ERIC Educational Resources Information Center

    Klotsche, Jens; Gloster, Andrew T.

    2012-01-01

    Longitudinal studies are increasingly common in psychological research. Characterized by repeated measurements, longitudinal designs aim to observe phenomena that change over time. One important question involves identification of the exact point in time when the observed phenomena begin to meaningfully change above and beyond baseline…

  4. Screening-level estimates of mass discharge uncertainty from point measurement methods

    EPA Science Inventory

    The uncertainty of mass discharge measurements associated with point-scale measurement techniques was investigated by deriving analytical solutions for the mass discharge coefficient of variation for two simplified, conceptual models. In the first case, a depth-averaged domain w...

  5. Accurate blackbodies

    NASA Astrophysics Data System (ADS)

    Latvakoski, Harri M.; Watson, Mike; Topham, Shane; Scott, Deron; Wojcik, Mike; Bingham, Gail

    2010-07-01

    Infrared radiometers and spectrometers generally use blackbodies for calibration, and with the high accuracy needs of upcoming missions, blackbodies capable of meeting strict accuracy requirements are needed. One such mission, the NASA climate science mission Climate Absolute Radiance and Refractivity Observatory (CLARREO), which will measure Earth's emitted spectral radiance from orbit, has an absolute accuracy requirement of 0.1 K (3σ) at 220 K over most of the thermal infrared. Space Dynamics Laboratory (SDL) has a blackbody design capable of meeting strict modern accuracy requirements. This design is relatively simple to build, was developed for use on the ground or on-orbit, and is readily scalable in aperture size and required performance. These high-accuracy blackbodies are currently in use as a ground calibration unit and with a high-altitude balloon instrument. SDL is currently building a prototype blackbody to demonstrate the ability to achieve very high accuracy, and we expect it to have an emissivity of ~0.9999 from 1.5 to 50 μm, temperature uncertainties of ~25 mK, and radiance uncertainties of ~10 mK due to temperature gradients. The high emissivity and low thermal gradient uncertainties are achieved through cavity design, while the low temperature uncertainty is attained by including phase change materials such as mercury, gallium, and water in the blackbody. Blackbody temperature sensors are calibrated at the melt points of these materials, which are determined by heating through their melt points. This allows absolute temperature calibration traceable to the SI temperature scale.

  6. A novel asymmetric-loop molecular beacon-based two-phase hybridization assay for accurate and high-throughput detection of multiple drug resistance-conferring point mutations in Mycobacterium tuberculosis

    PubMed Central

    Chen, Qinghai; Wu, Nan; Xie, Meng; Zhang, Bo; Chen, Ming; Li, Jianjun; Zhuo, Lisha; Kuang, Hong; Fu, Weiling

    2012-01-01

    The accurate and high-throughput detection of drug resistance-related multiple point mutations remains a challenge. Although the combination of molecular beacons with bio-immobilization technology, such as microarray, is promising, its application is difficult due to the ineffective immobilization of molecular beacons on the chip surface. Here, we propose a novel asymmetric-loop molecular beacon in which the loop consists of 2 parts. One is complementary to a target, while the other is complementary to an oligonucleotide probe immobilized on the chip surface. With this novel probe, a two-phase hybridization assay can be used for simultaneously detecting multiple point mutations. This assay will have advantages, such as easy probe availability, multiplex detection, low background, and high-efficiency hybridization, and may provide a new avenue for the immobilization of molecular beacons and high-throughput detection of point mutations. PMID:22460100

  7. Using a genetic algorithm to estimate the details of earthquake slip distributions from point surface displacements

    NASA Astrophysics Data System (ADS)

    Lindsay, A.; McCloskey, J.; Nic Bhloscaidh, M.

    2016-03-01

    Examining fault activity over several earthquake cycles is necessary for long-term modeling of the fault strain budget and stress state. While this requires knowledge of coseismic slip distributions for successive earthquakes along the fault, these exist only for the most recent events. However, overlying the Sunda Trench, sparsely distributed coral microatolls are sensitive to tectonically induced changes in relative sea levels and provide a century-spanning paleogeodetic and paleoseismic record. Here we present a new technique called the Genetic Algorithm Slip Estimator to constrain slip distributions from observed surface deformations of corals. We identify a suite of models consistent with the observations, and from them we compute an ensemble estimate of the causative slip. We systematically test our technique using synthetic data. Applying the technique to observed coral displacements for the 2005 Nias-Simeulue earthquake and 2007 Mentawai sequence, we reproduce key features of slip present in previously published inversions, such as the magnitude and location of slip asperities. From the displacement data available for the 1797 and 1833 Mentawai earthquakes, we present slip estimates reproducing the observed displacements. The areas of highest modeled slip in the two paleoearthquakes are nonoverlapping, and our solutions appear to tile the plate interface, complementing one another. This observation is supported by the complex rupture pattern of the 2007 Mentawai sequence, underlining the need to examine earthquake occurrence through long-term strain budget and stress modeling. Although developed to estimate earthquake slip, the technique is readily adaptable to a wider range of applications.
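
    A toy sketch of the genetic-algorithm idea, with a generic linear forward operator G standing in for the elastic Green's functions; none of the published method's problem-specific operators or ensemble machinery is reproduced.

```python
import numpy as np

def ga_slip_estimate(G, d_obs, pop=60, gens=300, sigma=0.05, seed=0):
    """Toy genetic algorithm: candidate slip vectors are scored by the
    squared data misfit |G s - d_obs|^2; each generation keeps the best
    half, refills by uniform crossover of random elite parents, and adds
    Gaussian mutation, with slip kept non-negative."""
    rng = np.random.default_rng(seed)
    n = G.shape[1]
    population = rng.uniform(0.0, 1.0, size=(pop, n))
    for _ in range(gens):
        misfit = np.sum((population @ G.T - d_obs) ** 2, axis=1)
        elite = population[np.argsort(misfit)[: pop // 2]]
        pairs = elite[rng.integers(0, len(elite), size=(pop - len(elite), 2))]
        mask = rng.random((pop - len(elite), n)) < 0.5
        children = np.where(mask, pairs[:, 0], pairs[:, 1])
        children = (children + rng.normal(0.0, sigma, children.shape)).clip(0.0)
        population = np.vstack([elite, children])
    misfit = np.sum((population @ G.T - d_obs) ** 2, axis=1)
    return population[np.argmin(misfit)]

# Synthetic check: recover a random slip vector through a random forward map.
rng = np.random.default_rng(1)
G = rng.random((12, 6))           # stands in for elastic Green's functions
s_true = rng.random(6)
print(ga_slip_estimate(G, G @ s_true))
print(s_true)
```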

  8. Point Estimates and Confidence Intervals for Variable Importance in Multiple Linear Regression

    ERIC Educational Resources Information Center

    Thomas, D. Roland; Zhu, PengCheng; Decady, Yves J.

    2007-01-01

    The topic of variable importance in linear regression is reviewed, and a measure first justified theoretically by Pratt (1987) is examined in detail. Asymptotic variance estimates are used to construct individual and simultaneous confidence intervals for these importance measures. A simulation study of their coverage properties is reported, and an…

  9. ESTIMATING THE EXPOSURE POINT CONCENTRATION TERM USING PROUCL, VERSION 3.0

    EPA Science Inventory

    In superfund and RCRA Projects of the U.S. EPA, cleanup, exposure, and risk assessment decisions are often made based upon the mean concentrations of the contaminants of potential concern (COPC). A 95% upper confidence limit (UCL) of the population mean is used to estimate the e...

  10. Multiple automated headspace in-tube extraction for the accurate analysis of relevant wine aroma compounds and for the estimation of their relative liquid-gas transfer rates.

    PubMed

    Zapata, Julián; Lopez, Ricardo; Herrero, Paula; Ferreira, Vicente

    2012-11-30

    An automated headspace in-tube extraction (ITEX) method combined with multiple headspace extraction (MHE) has been developed to simultaneously provide information about the accurate content of 20 relevant aroma compounds in wine and about their relative transfer rates to the headspace, and hence about the relative strength of their interactions with the matrix. In the method, 5 μL (for alcohols, acetates and carbonyl alcohols) or 200 μL (for ethyl esters) of wine sample were introduced into a 2 mL vial, heated at 35°C and extracted with 32 (for alcohols, acetates and carbonyl alcohols) or 16 (for ethyl esters) 0.5 mL pumping strokes in four consecutive extraction and analysis cycles. The application of the classical theory of multiple extractions makes it possible to obtain a highly reliable estimate of the total amount of volatile compound present in the sample and a second parameter, β, which is simply the proportion of volatile not transferred to the trap in one extraction cycle, but which seems to be a reliable indicator of the actual volatility of the compound in that particular wine. A study with 20 wines of different types and 1 synthetic sample has revealed the existence of significant differences in the relative volatility of 15 out of 20 odorants. Differences are particularly intense for acetaldehyde and other carbonyls, but are also notable for alcohols and long-chain fatty acid ethyl esters. It is expected that these differences, likely linked to sulphur dioxide and some unknown specific compositional aspects of the wine matrix, can be responsible for relevant sensory changes, and may even explain why the same aroma composition can produce different aroma perceptions in two different wines. PMID:23102525
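
    The MHE arithmetic described above is a geometric series: if each cycle leaves a fraction β of the analyte behind, successive peak areas obey A_i = A_1·β^(i-1), so the total (exhaustive) peak area is A_1/(1-β). A minimal sketch with invented peak areas:

```python
import numpy as np

areas = np.array([1520.0, 1064.0, 745.0, 521.0])    # four consecutive cycles (hypothetical)
i = np.arange(len(areas))
slope, intercept = np.polyfit(i, np.log(areas), 1)  # ln A_i = ln A_1 + i * ln(beta)

beta = np.exp(slope)        # fraction NOT transferred to the trap per cycle
A1 = np.exp(intercept)
total = A1 / (1.0 - beta)   # sum of the geometric series = total peak area

print(f"beta = {beta:.3f}, estimated total area = {total:.0f}")
```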

  11. Enhancing efficiency and quality of statistical estimation of immunogenicity assay cut points through standardization and automation.

    PubMed

    Su, Cheng; Zhou, Lei; Hu, Zheng; Weng, Winnie; Subramani, Jayanthi; Tadkod, Vineet; Hamilton, Kortney; Bautista, Ami; Wu, Yu; Chirmule, Narendra; Zhong, Zhandong Don

    2015-10-01

    Biotherapeutics can elicit immune responses, which can alter the exposure, safety, and efficacy of the therapeutics. A well-designed and robust bioanalytical method is critical for the detection and characterization of relevant anti-drug antibody (ADA) and the success of an immunogenicity study. As a fundamental criterion in immunogenicity testing, assay cut points need to be statistically established with a risk-based approach to reduce subjectivity. This manuscript describes the development of a validated, web-based, multi-tier customized assay statistical tool (CAST) for assessing cut points of ADA assays. The tool provides an intuitive web interface that allows users to import experimental data generated from a standardized experimental design, select the assay factors, run the standardized analysis algorithms, and generate tables, figures, and listings (TFL). It allows bioanalytical scientists to perform complex statistical analysis at the click of a button to produce reliable assay parameters in support of immunogenicity studies. PMID:26130368
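
    As a hedged illustration of the kind of calculation such a tool standardizes, a common screening cut point is the 95th percentile of drug-naive sample responses, estimated parametrically after outlier exclusion. The simulated data, the log transform, and the boxplot outlier rule below are illustrative assumptions, not the CAST algorithms themselves:

```python
import numpy as np

rng = np.random.default_rng(1)
signals = rng.lognormal(mean=0.0, sigma=0.25, size=60)   # hypothetical negative-control responses

x = np.log(signals)
q1, q3 = np.percentile(x, [25, 75])
iqr = q3 - q1
kept = x[(x >= q1 - 1.5 * iqr) & (x <= q3 + 1.5 * iqr)]  # drop boxplot outliers

cut_log = kept.mean() + 1.645 * kept.std(ddof=1)         # one-sided 95th percentile, normal model
print("screening cut point:", np.exp(cut_log))
```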

  12. Estimating animal resource selection from telemetry data using point process models

    USGS Publications Warehouse

    Johnson, Devin S.; Hooten, Mevin B.; Kuhn, Carey E.

    2013-01-01

    To demonstrate the analysis of telemetry data with the point process approach, we analysed a data set of telemetry locations from northern fur seals (Callorhinus ursinus) in the Pribilof Islands, Alaska. Both a space–time and an aggregated space-only model were fitted. At the individual level, the space–time analysis showed little selection relative to the habitat covariates. However, at the study area level, the space-only model showed strong selection relative to the covariates.

  13. Estimation of reliability of linear point structures revealed in two-dimensional distributions of experimental data

    NASA Astrophysics Data System (ADS)

    Falomkina, O. V.; Pyatkov, Yu V.; Pyt'ev, Yu P.; Kamanin, D. V.

    2016-02-01

    In the experiments at the FOBOS spectrometer [1] dedicated to studying the spontaneous fission of the 248Cm and 252Cf nuclei, new unusual structures bounded by magic clusters were observed for the first time in the mass correlation distribution of fission fragments. The structures were interpreted as a manifestation of a new exotic decay called collinear cluster tri-partition (CCT). These pioneering results were later confirmed and detailed in a series of experiments at different time-of-flight spectrometers [2]. Interpretation of the results obtained requires estimation of the statistical reliability of the structures mentioned above. The report presents the results of solving this statistical reliability estimation problem on the basis of morphological image analysis [3].

  14. Estimation of precipitable water vapour using kinematic GNSS precise point positioning over an altitude range of 1 km

    NASA Astrophysics Data System (ADS)

    Webb, S. R.; Penna, N. T.; Clarke, P. J.; Webster, S.; Martin, I.

    2013-12-01

    The estimation of total precipitable water vapour (PWV) using kinematic GNSS has been investigated since around 2001, aiming to extend the use of static ground-based GNSS, from which PWV estimates are now operationally assimilated into numerical weather prediction models. To date, kinematic GNSS PWV studies suggest a PWV measurement agreement with radiosondes of 2-3 mm, almost commensurate with static GNSS measurement accuracy, but only shipborne experiments have so far been carried out. As a first step towards extending such sea-level-based studies to platforms that operate at a range of altitudes, such as airplanes or land-based vehicles, the kinematic GNSS estimation of PWV over an exactly repeated trajectory is considered. A data set was collected from a GNSS receiver and antenna mounted on a carriage of the Snowdon Mountain Railway, UK, which continually ascends and descends through 950 m of vertical relief. Static GNSS reference receivers were installed at the top and bottom of the altitude profile, and derived zenith wet delay (ZWD) was interpolated to the altitude of the train to provide reference values together with profile estimates from the 100 m resolution runs of the Met Office's Unified Model. We demonstrate similar GNSS accuracies as obtained from previous shipborne studies, namely a double-difference relative kinematic GNSS ZWD accuracy within 14 mm, and a kinematic GNSS precise point positioning ZWD accuracy within 15 mm. The latter is a more typical airborne PWV estimation scenario, i.e., one without reliance on ground-based GNSS reference stations. We show that the kinematic GPS-only precise point positioning ZWD estimation is enhanced by also incorporating GLONASS observations.
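
    The ZWD values estimated here convert to PWV through the standard proportionality PWV = Π(Tm)·ZWD, where Π depends on the weighted mean temperature Tm of the atmosphere. A sketch using the widely cited Bevis refractivity constants; the Tm and ZWD inputs are example values, not results from the study:

```python
def zwd_to_pwv(zwd_mm, tm_kelvin):
    """Convert zenith wet delay (mm) to precipitable water vapour (mm)."""
    rho_w = 1000.0     # liquid water density, kg m^-3
    r_v = 461.5        # specific gas constant of water vapour, J kg^-1 K^-1
    k2_prime = 0.221   # K Pa^-1  (22.1 K hPa^-1)
    k3 = 3.739e3       # K^2 Pa^-1 (3.739e5 K^2 hPa^-1)
    pi_factor = 1e6 / (rho_w * r_v * (k3 / tm_kelvin + k2_prime))
    return pi_factor * zwd_mm

print(zwd_to_pwv(100.0, 270.0))   # ~15 mm of PWV for 100 mm of ZWD
```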

  15. Roughness Estimation from Point Clouds - A Comparison of Terrestrial Laser Scanning and Image Matching by Unmanned Aerial Vehicle Acquisitions

    NASA Astrophysics Data System (ADS)

    Rutzinger, Martin; Bremer, Magnus; Ragg, Hansjörg

    2013-04-01

    Recently, terrestrial laser scanning (TLS) and matching of images acquired by unmanned aerial vehicles (UAVs) are operationally used for 3D geodata acquisition in Geoscience applications. However, the two systems cover different application domains in terms of acquisition conditions and data properties, i.e. accuracy and line of sight. In this study we investigate the major differences between the two platforms for terrain roughness estimation. Terrain roughness is an important input for various applications such as morphometry studies, geomorphologic mapping, and natural process modeling (e.g. rockfall, avalanche, and hydraulic modeling). Data have been collected simultaneously by TLS using an Optech ILRIS3D and a rotary UAV using an octocopter from twins.nrn for a 900 m² test site located in a riverbed in Tyrol, Austria (Judenbach, Mieming). The TLS point cloud has been acquired from three scan positions. These have been registered using the iterative closest point algorithm and a target-based referencing approach. For registration, geometric targets (spheres) with a diameter of 20 cm were used. These targets were measured with dGPS for absolute georeferencing. The TLS point cloud has an average point density of 19,000 pts/m², which represents a point spacing of about 5 mm. Fifteen images were acquired by the UAV at a height of 20 m using a calibrated camera with a focal length of 18.3 mm. A 3D point cloud containing RGB attributes was derived using APERO/MICMAC software, by a direct georeferencing approach based on the aircraft IMU data. The point cloud is finally co-registered with the TLS data to ensure optimal preparation for the analysis. The UAV point cloud has an average point density of 17,500 pts/m², which represents a point spacing of 7.5 mm. After registration and georeferencing, the level of detail of roughness representation in both point clouds has been compared considering elevation differences, roughness and representation of different grain
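
    One simple roughness measure that can be computed from either point cloud is the per-cell standard deviation of elevations after detrending. The sketch below uses a cell-mean detrend on synthetic points; the study's exact roughness definition may differ:

```python
import numpy as np

rng = np.random.default_rng(2)
pts = rng.uniform(0, 30, size=(50_000, 3))               # x, y over a 30 m x 30 m patch
pts[:, 2] = 0.05 * np.sin(pts[:, 0]) + rng.normal(0, 0.01, len(pts))  # synthetic terrain

cell = 1.0                                               # 1 m raster cells
ix = (pts[:, 0] // cell).astype(int)
iy = (pts[:, 1] // cell).astype(int)
nx, ny = ix.max() + 1, iy.max() + 1
rough = np.full((nx, ny), np.nan)
for i in range(nx):
    for j in range(ny):
        z = pts[(ix == i) & (iy == j), 2]
        if len(z) > 3:
            rough[i, j] = z.std(ddof=1)                  # detrended by the cell mean

print("median roughness [m]:", np.nanmedian(rough))
```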

  16. Iterative reconstruction of Fourier-rebinned PET data using sinogram blurring function estimated from point source scans

    PubMed Central

    Tohme, Michel S.; Qi, Jinyi

    2010-01-01

    Purpose: The accuracy of the system model that governs the transformation from the image space to the projection space in positron emission tomography (PET) greatly affects the quality of reconstructed images. For efficient computation in iterative reconstructions, the system model in PET can be factored into a product of geometric projection and sinogram blurring function. To further speed up reconstruction, fully 3D PET data can be rebinned into a stack of 2D sinograms and then be reconstructed using 2D iterative algorithms. The purpose of this work is to develop a method to estimate the sinogram blurring function to be used in reconstruction of Fourier-rebinned data. Methods: In a previous work, the authors developed an approach to estimating the sinogram blurring function of nonrebinned PET data from experimental scans of point sources. In this study, the authors extend this method to the estimation of sinogram blurring function for Fourier-rebinned PET data. A point source was scanned at a set of sampled positions in the microPET II scanner. The sinogram blurring function is considered to be separable between the transaxial and axial directions. A radially and angularly variant 2D blurring function is estimated from Fourier-rebinned point source scans to model the transaxial blurring with consideration of the detector block structure of the scanner; a space-variant 1D blurring kernel along the axial direction is estimated separately to model the correlation between neighboring planes due to detector intrinsic blurring and Fourier rebinning. The estimated sinogram blurring function is incorporated in a 2D maximum a posteriori (MAP) reconstruction algorithm for image reconstruction. Results: Physical phantom experiments were performed on the microPET II scanner to validate the proposed method. The authors compared the proposed method to 2D MAP reconstruction without sinogram blurring model and 2D MAP reconstruction with a Monte Carlo based blurring model. The

  17. Estimating Limit Reference Points for Western Pacific Leatherback Turtles (Dermochelys coriacea) in the U.S. West Coast EEZ.

    PubMed

    Curtis, K Alexandra; Moore, Jeffrey E; Benson, Scott R

    2015-01-01

    Biological limit reference points (LRPs) for fisheries catch represent upper bounds that avoid undesirable population states. LRPs can support consistent management evaluation among species and regions, and can advance ecosystem-based fisheries management. For transboundary species, LRPs prorated by local abundance can inform local management decisions when international coordination is lacking. We estimated LRPs for western Pacific leatherbacks in the U.S. West Coast Exclusive Economic Zone (WCEEZ) using three approaches with different types of information on local abundance. For the current application, the best-informed LRP used a local abundance estimate derived from nest counts, vital rate information, satellite tag data, and fishery observer data, and was calculated with a Potential Biological Removal estimator. Management strategy evaluation was used to set tuning parameters of the LRP estimators to satisfy risk tolerances for falling below population thresholds, and to evaluate sensitivity of population outcomes to bias in key inputs. We estimated local LRPs consistent with three hypothetical management objectives: allowing the population to rebuild to its maximum net productivity level (4.7 turtles per five years), limiting delay of population rebuilding (0.8 turtles per five years), or only preventing further decline (7.7 turtles per five years). These LRPs pertain to all human-caused removals and represent the WCEEZ contribution to meeting population management objectives within a broader international cooperative framework. We present multi-year estimates, because at low LRP values, annual assessments are prone to substantial error that can lead to volatile and costly management without providing further conservation benefit. The novel approach and the performance criteria used here are not a direct expression of the "jeopardy" standard of the U.S. Endangered Species Act, but they provide useful assessment information and could help guide international
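
    The Potential Biological Removal estimator named above has a standard closed form, PBR = N_min · 0.5·R_max · F_r, where N_min is conventionally the 20th percentile of a lognormal abundance estimate (Wade 1998). A sketch with invented inputs rather than the leatherback values from the study:

```python
import math

def pbr(n_hat, cv, r_max, f_r):
    # 20th percentile of a lognormal: divide the point estimate by exp(0.842 * sigma)
    sigma = math.sqrt(math.log(1.0 + cv ** 2))
    n_min = n_hat / math.exp(0.842 * sigma)
    return n_min * 0.5 * r_max * f_r

print(pbr(n_hat=500, cv=0.6, r_max=0.04, f_r=0.5))   # hypothetical inputs
```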

  19. Scaling non-point-source mercury emissions from two active industrial gold mines: influential variables and annual emission estimates.

    PubMed

    Eckley, C S; Gustin, M; Miller, M B; Marsik, F

    2011-01-15

    Open-pit gold mines encompass thousands of hectares of disturbed materials that are often naturally enriched in mercury (Hg). The objective of this study was to estimate annual non-point-source Hg emissions from two active gold mines in Nevada. This was achieved by measuring diel and seasonally representative Hg fluxes from mesocosms of materials collected from each mine. These measurements provided a framework for scaling emissions over space and time at each mine by identifying the important variables correlated with Hg flux. The validity of these correlations was tested by comparisons with measurements conducted in situ at the mines. Of the average diel fluxes obtained in situ (92 daily flux measurements), 81% were within the 95% prediction limits of the regressions developed from the laboratory-derived data. Some surfaces at the mines could not be simulated in the laboratory setting (e.g., material actively leached by cyanide solution and tailings saturated with cyanide solution), and as such in situ data were applied for scaling. Based on the surface areas of the materials and environmental conditions at the mines during the year of study, non-point-source Hg releases were estimated to be 19 and 109 kg·year(-1). These account for 56% and 14%, respectively, of the overall emissions from each mine (point + nonpoint sources). Material being heap-leached and active tailings impoundments were the major contributors to the releases (>60% combined) suggesting that as mining operations cease, releases will decline. PMID:21142061

  20. High-Precision Lunar Ranging and Gravitational Parameter Estimation With the Apache Point Observatory Lunar Laser-ranging Operation

    NASA Astrophysics Data System (ADS)

    Johnson, Nathan H.

    This dissertation is concerned with several problems of instrumentation and data analysis encountered by the Apache Point Observatory Lunar Laser-ranging Operation. Chapter 2 considers crosstalk between elements of a single-photon avalanche photodiode detector. Experimental and analytic methods were developed to determine crosstalk rates, and empirical findings are presented. Chapter 3 details electronics developments that have improved the quality of data collected by detectors of the same type. Chapter 4 explores the challenges of estimating gravitational parameters on the basis of ranging data collected by this and other experiments and presents resampling techniques for the derivation of standard errors for estimates of such parameters determined by the Planetary Ephemeris Program (PEP), a solar-system model and data-fitting code. Possible directions for future work are discussed in Chapter 5. A manual of instructions for working with PEP is presented as an appendix.

  1. Estimating SO2 emissions from a large point source using 10 year OMI SO2 observations: Afsin Elbistan Power Plant

    NASA Astrophysics Data System (ADS)

    Kaynak Tezel, Burcak; Firatli, Ertug

    2016-04-01

    SO2 pollution remains a problem for parts of Turkey, especially regions with large-scale coal power plants. In this study, 10 years of Ozone Monitoring Instrument (OMI) SO2 observations are used to estimate SO2 emissions from large point sources in Turkey. We aim to estimate SO2 emissions from coal power plants where no online monitoring is available and to improve the emissions given in current emission inventories with these top-down estimates. High-resolution yearly averaged maps are created on a domain over large point sources by oversampling SO2 columns for each grid cell for the years 2005-2014. This method reduces noise and yields a better signal from large point sources, and it has previously been used for coal power plants in the U.S. and India. The SO2 signal over selected power plants is observed with this method, and the spatiotemporal changes of the SO2 signal are analyzed. Under the assumption that OMI SO2 observations correlate with emissions, long-term OMI SO2 observation averages can be used to estimate emission levels of significant point sources. A two-dimensional Gaussian function is used to describe the relationship between OMI SO2 observations and emissions. Afsin Elbistan Power Plant, the largest-capacity coal power plant in Turkey, is investigated in detail as a case study. The satellite scans within 50 km of the power plant are selected and averaged over a 2 x 2 km2 gridded domain by a smoothing method for 2005-2014. The yearly averages of OMI SO2 are calculated to investigate the magnitude and the impact area of the SO2 emissions of the power plant. A significant increase in OMI SO2 observations over Afsin Elbistan from 2005 to 2009 was observed (over 2 times), possibly due to the capacity increase from 1715 to 2795 MW in 2006. Comparison between the yearly gross electricity production of the plant and OMI SO2 observations indicated consistency until 2009, but OMI SO2 observations indicated a rapid increase while gross electricity
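
    The oversampling step described above can be sketched directly: every satellite footprint centre within a search radius of a fine-grid node contributes to that node's multi-year mean, suppressing single-scan noise at the cost of spatial smoothing. The coordinates, columns, radius, and grid spacing below are synthetic stand-ins:

```python
import numpy as np

rng = np.random.default_rng(3)
lon = rng.uniform(36.8, 37.2, 20_000)   # footprint centres near a hypothetical plant
lat = rng.uniform(38.2, 38.6, 20_000)
so2 = rng.normal(0.3, 0.5, 20_000) + 2.0 * np.exp(
    -((lon - 37.0) ** 2 + (lat - 38.4) ** 2) / 0.002)   # synthetic plume signal + noise

gx, gy = np.meshgrid(np.arange(36.8, 37.2, 0.02), np.arange(38.2, 38.6, 0.02))
radius = 0.05                           # ~5 km search radius, in degrees
avg = np.empty_like(gx)
for k in range(gx.size):
    d2 = (lon - gx.flat[k]) ** 2 + (lat - gy.flat[k]) ** 2
    avg.flat[k] = so2[d2 < radius ** 2].mean()   # long-term mean at this fine-grid node

print("peak oversampled SO2 column:", avg.max())
```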

  2. A computer program for the estimation of protein and nucleic acid sequence diversity in random point mutagenesis libraries

    PubMed Central

    Volles, Michael J.; Lansbury, Peter T.

    2005-01-01

    A computer program for the generation and analysis of in silico random point mutagenesis libraries is described. The program operates by mutagenizing an input nucleic acid sequence according to mutation parameters specified by the user for each sequence position and type of point mutation. The program can mimic almost any type of random mutagenesis library, including those produced via error-prone PCR (ep-PCR), mutator Escherichia coli strains, chemical mutagenesis, and doped or random oligonucleotide synthesis. The program analyzes the generated nucleic acid sequences and/or the associated protein library to produce several estimates of library diversity (number of unique sequences, point mutations, and single point mutants) and the rate of saturation of these diversities during experimental screening or selection of clones. This information allows one to select the optimal screen size for a given mutagenesis library, necessary to efficiently obtain a certain coverage of the sequence-space. The program also reports the abundance of each specific protein mutation at each sequence position, which is useful as a measure of the level and type of mutation bias in the library. Alternatively, one can use the program to evaluate the relative merits of preexisting libraries, or to examine various hypothetical mutation schemes to determine the optimal method for creating a library that serves the screen/selection of interest. Simulated libraries of at least 10(9) sequences are accessible by the numerical algorithm with currently available personal computers; an analytical algorithm is also available which can rapidly calculate a subset of the numerical statistics in libraries of arbitrarily large size. A multi-type double-strand stochastic model of ep-PCR is developed in an appendix to demonstrate the applicability of the algorithm to amplifying mutagenesis procedures. Estimators of DNA polymerase mutation-type-specific error rates are derived using the model. Analyses of an
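
    A toy version of the numerical approach is easy to write down: mutate a template at a fixed per-base rate, then count unique sequences and unique single point mutants in the simulated library. The 300-nt template, the 0.5% rate, and the library size are invented; the published program supports position- and mutation-type-specific rates:

```python
import numpy as np

rng = np.random.default_rng(4)
bases = np.array(list("ACGT"))
seq = rng.choice(bases, size=300)       # hypothetical 300-nt template
template = "".join(seq)
rate = 0.005                            # per-base mutation probability (ep-PCR-like)

library = []
for _ in range(10_000):
    mut = seq.copy()
    hits = rng.random(len(seq)) < rate
    # note: the draw may pick the original base, so the effective rate is 3/4 of nominal
    mut[hits] = rng.choice(bases, hits.sum())
    library.append("".join(mut))

unique = set(library)
singles = {s for s in unique if sum(a != b for a, b in zip(s, template)) == 1}
print(f"{len(unique)} unique sequences, {len(singles)} unique single point mutants")
```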

  3. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10(exp 6)) periods of propagation with eight grid points per wavelength.

  4. Using ToxCast™ Data to Reconstruct Dynamic Cell State Trajectories and Estimate Toxicological Points of Departure

    PubMed Central

    Shah, Imran; Setzer, R. Woodrow; Jack, John; Houck, Keith A.; Judson, Richard S.; Knudsen, Thomas B.; Liu, Jie; Martin, Matthew T.; Reif, David M.; Richard, Ann M.; Thomas, Russell S.; Crofton, Kevin M.; Dix, David J.; Kavlock, Robert J.

    2015-01-01

    state trajectories and estimate toxicological points of departure. Environ Health Perspect 124:910–919; http://dx.doi.org/10.1289/ehp.1409029 PMID:26473631

  5. Youden Index and Optimal Cut-Point Estimated from Observations Affected by a Lower Limit of Detection

    PubMed Central

    Ruopp, Marcus D.; Perkins, Neil J.; Whitcomb, Brian W.; Schisterman, Enrique F.

    2008-01-01

    Summary: The receiver operating characteristic (ROC) curve is used to evaluate a biomarker's ability for classifying disease status. The Youden Index (J), the maximum potential effectiveness of a biomarker, is a common summary measure of the ROC curve. In biomarker development, levels may be unquantifiable below a limit of detection (LOD) and missing from the overall dataset. Disregarding these observations may negatively bias the ROC curve and thus J. Several correction methods have been suggested for mean estimation and testing; however, little has been written about the ROC curve or its summary measures. We adapt non-parametric (empirical) and semi-parametric (ROC-GLM [generalized linear model]) methods and propose parametric methods (maximum likelihood (ML)) to estimate J and the optimal cut-point (c*) for a biomarker affected by a LOD. We develop unbiased estimators of J and c* via ML for normally and gamma distributed biomarkers. Alpha-level confidence intervals are proposed using delta and bootstrap methods for the ML, semi-parametric, and non-parametric approaches, respectively. Simulation studies are conducted over a range of distributional scenarios and sample sizes evaluating estimators' bias, root-mean-square error, and coverage probability; the average bias was less than one percent for ML and GLM methods across scenarios and decreased with increased sample size. An example using polychlorinated biphenyl levels to classify women with and without endometriosis illustrates the potential benefits of these methods. We address the limitations and usefulness of each method in order to give researchers guidance in constructing appropriate estimates of biomarkers' true discriminating capabilities. PMID:18435502
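
    For data fully above the LOD, the empirical (non-parametric) version of the estimate is a simple scan over candidate thresholds, maximizing J = sensitivity + specificity - 1. The sketch below uses simulated biomarker values and omits the paper's LOD corrections and parametric ML estimators:

```python
import numpy as np

rng = np.random.default_rng(5)
controls = rng.normal(1.0, 0.5, 200)    # biomarker in non-diseased subjects
cases = rng.normal(1.8, 0.6, 150)       # biomarker in diseased subjects

thresholds = np.unique(np.concatenate([controls, cases]))
sens = np.array([(cases > c).mean() for c in thresholds])
spec = np.array([(controls <= c).mean() for c in thresholds])
j = sens + spec - 1.0                   # Youden index at each candidate cut-point

best = np.argmax(j)
print(f"J = {j[best]:.3f} at cut-point c* = {thresholds[best]:.3f}")
```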

  6. Accurate calculation of diffraction-limited encircled and ensquared energy.

    PubMed

    Andersen, Torben B

    2015-09-01

    Mathematical properties of the encircled and ensquared energy functions for the diffraction-limited point-spread function (PSF) are presented. These include power series and a set of linear differential equations that facilitate the accurate calculation of these functions. Asymptotic expressions are derived that provide very accurate estimates for the relative amount of energy in the diffraction PSF that falls outside a large square or rectangular detector. Tables with accurate values of the encircled and ensquared energy functions are also presented. PMID:26368873
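
    For the unaberrated circular aperture, the classical encircled-energy function that such series and asymptotic methods refine is E(v) = 1 - J0(v)^2 - J1(v)^2, with v = πr/(λN) the reduced radius for focal ratio N. A quick check at the first dark ring of the Airy pattern:

```python
from scipy.special import j0, j1

def encircled_energy(v):
    # fraction of total energy inside reduced radius v for the Airy PSF
    return 1.0 - j0(v) ** 2 - j1(v) ** 2

v_first_dark_ring = 3.8317   # first zero of J1: edge of the Airy disc
print(encircled_energy(v_first_dark_ring))   # ~0.838 of the energy lies inside the Airy disc
```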

  7. An Evaluation of Vegetation Filtering Algorithms for Improved Snow Depth Estimation from Point Cloud Observations in Mountain Environments

    NASA Astrophysics Data System (ADS)

    Vanderjagt, B. J.; Durand, M. T.; Lucieer, A.; Wallace, L.

    2014-12-01

    High-resolution snow depth measurements are possible through bare-earth (BE) differencing of point cloud datasets obtained using LiDAR and photogrammetry during snow-free and snow-covered conditions. The accuracy and resolution of these snow depth measurements are desirable in mountain environments, in which ground measurements are dangerous and difficult to perform and other remote sensing techniques are often characterized by large errors and uncertainties due to variable topography, vegetation, and snow properties. BE ground filtering algorithms make different assumptions about ground characteristics to differentiate between ground and non-ground features. Because of this, ground surfaces may have unique characteristics that confound ground filters depending on the location and terrain conditions. These include low-lying shrubs (<1 m), areas with high topographic relief, and areas with high surface roughness. We evaluate several different algorithms, including lowest point, kriging, and more sophisticated splining techniques such as the Multiscale Curvature Classification (MCC), to resolve snow depths. Understanding how these factors affect BE surface models and thus snow depth measurements is a valuable contribution towards improving the processing protocols associated with these relatively new snow observation techniques. We test the different BE filtering algorithms using LiDAR and photogrammetric measurements taken from an Unmanned Aerial Vehicle (UAV) in Southwest Tasmania, Australia during the winter and spring of 2013. The study area is characterized by sloping, uneven terrain and different types of vegetation, including eucalyptus and conifer trees as well as dense shrubs varying in height from 0.3 to 1.5 meters. Initial snow depth measurements using the unfiltered point clouds are characterized by large errors (~20-90 cm) due to the dense vegetation. Using filtering techniques instead of raw differencing improves the estimation of snow depth in
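
    The simplest of the filters compared above, taking the lowest return per grid cell as ground, can be sketched in a few lines on a synthetic shrub-covered slope (all parameters below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 20_000
x, y = rng.uniform(0, 50, n), rng.uniform(0, 50, n)
ground = 0.1 * x                                    # gently sloping terrain
z = ground + np.where(rng.random(n) < 0.3,
                      rng.uniform(0.3, 1.5, n),     # shrub returns above the ground
                      rng.normal(0, 0.02, n))       # true ground returns

cell = 2.0
key = (x // cell).astype(int) * 1000 + (y // cell).astype(int)
bare = {}
for k, zi in zip(key, z):
    bare[k] = min(bare.get(k, np.inf), zi)          # lowest return per 2 m cell = "ground"

residual = z - np.vectorize(bare.get)(key)
print("points kept as ground:", int((residual < 0.05).sum()))
```

    Snow depth would then be the difference between a snow-on surface and this bare-earth model; smarter filters such as MCC replace the cell minimum with curvature-based classification.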

  8. Impacts of real-time satellite clock errors on GPS precise point positioning-based troposphere zenith delay estimation

    NASA Astrophysics Data System (ADS)

    Shi, Junbo; Xu, Chaoqian; Li, Yihe; Gao, Yang

    2015-08-01

    Global Positioning System (GPS) has become a cost-effective tool to determine troposphere zenith total delay (ZTD) with accuracy comparable to other atmospheric sensors such as the radiosonde, the water vapor radiometer, the radio occultation and so on. However, the high accuracy of GPS troposphere ZTD estimates relies on the precise satellite orbit and clock products available with various latencies. Although the International GNSS Service (IGS) can provide predicted orbit and clock products for real-time applications, the predicted clock accuracy of 3 ns cannot always guarantee the high accuracy of troposphere ZTD estimates. Such limitations could be overcome by the use of the newly launched IGS real-time service which provides 5 cm orbit and 0.2-1.0 ns (an equivalent range error of 6-30 cm) clock products in real time. Considering the relatively larger magnitude of the clock error than that of the orbit error, this paper investigates the effect of real-time satellite clock errors on the GPS precise point positioning (PPP)-based troposphere ZTD estimation. Meanwhile, how the real-time satellite clock errors impact the GPS PPP-based troposphere ZTD estimation has also been studied to obtain the most precise ZTD solutions. First, two types of real-time satellite clock products are assessed with respect to the IGS final clock product in terms of accuracy and precision. Second, the real-time GPS PPP-based troposphere ZTD estimation is conducted using data from 34 selected IGS stations over three independent weeks in April, July and October, 2013. Numerical results demonstrate that the precision, rather than the accuracy, of the real-time satellite clock products impacts the real-time PPP-based ZTD solutions more significantly. In other words, the real-time satellite clock product with better precision leads to more precise real-time PPP-based troposphere ZTD solutions. Therefore, it is suggested that users should select and apply real-time satellite products with

  9. Position-dependent velocity of an effective temperature point for the estimation of the thermal diffusivity of solids

    NASA Astrophysics Data System (ADS)

    Balachandar, Settu; Shivaprakash, N. C.; Kameswara Rao, L.

    2016-01-01

    A new approach is proposed to estimate the thermal diffusivity of optically transparent solids at ambient temperature based on the velocity of an effective temperature point (ETP); the proposed concept is corroborated using a two-beam interferometer. 1D unsteady heat flow via step-temperature excitation is interpreted as a 'micro-scale rectilinear translatory motion' of an ETP. The velocity-dependent function is extracted by revisiting the Fourier heat diffusion equation. The relationship between the velocity of the ETP and thermal diffusivity is modeled using a standard solution. Under optimized thermal excitation, the product of the 'velocity of the ETP' and the distance is a new constitutive equation for the thermal diffusivity of the solid. The experimental approach involves the establishment of a 1D unsteady heat flow inside the sample through step-temperature excitation. In the moving isothermal surfaces, the ETP is identified using a two-beam interferometer. The arrival time of the ETP at a fixed distance away from the heat source is measured, and its velocity is calculated. The velocity of the ETP and a given distance are sufficient to estimate the thermal diffusivity of a solid. The proposed method is experimentally verified for BK7 glass samples and the measured results are found to match closely with the reported values.

  10. Polydimethylsiloxane-air partition ratios for semi-volatile organic compounds by GC-based measurement and COSMO-RS estimation: Rapid measurements and accurate modelling.

    PubMed

    Okeme, Joseph O; Parnis, J Mark; Poole, Justen; Diamond, Miriam L; Jantunen, Liisa M

    2016-08-01

    Polydimethylsiloxane (PDMS) shows promise for use as a passive air sampler (PAS) for semi-volatile organic compounds (SVOCs). To use PDMS as a PAS, knowledge of its chemical-specific partitioning behaviour and time to equilibrium is needed. Here we report on the effectiveness of two approaches for estimating the partitioning properties of polydimethylsiloxane (PDMS): values of PDMS-to-air partition ratios or coefficients (KPDMS-Air), and time to equilibrium of a range of SVOCs. Measured values of KPDMS-Air, Exp' at 25 °C obtained using the gas chromatography retention method (GC-RT) were compared with estimates from a poly-parameter linear free energy relationship (pp-LFER) and a COSMO-RS oligomer-based model. Target SVOCs included novel flame retardants (NFRs), polybrominated diphenyl ethers (PBDEs), polycyclic aromatic hydrocarbons (PAHs), organophosphate flame retardants (OPFRs), polychlorinated biphenyls (PCBs) and organochlorine pesticides (OCPs). Significant positive relationships were found between log KPDMS-Air, Exp' and estimates made using the pp-LFER model (log KPDMS-Air, pp-LFER) and the COSMOtherm program (log KPDMS-Air, COSMOtherm). The discrepancy and bias between measured and predicted values were much higher for COSMO-RS than for the pp-LFER model, indicating the anticipated better performance of the pp-LFER model relative to COSMO-RS. Calculations made using measured KPDMS-Air, Exp' values show that a PDMS PAS of 0.1 cm thickness will reach 25% of its equilibrium capacity in ∼1 day for alpha-hexachlorocyclohexane (α-HCH) to ∼500 years for tris (4-tert-butylphenyl) phosphate (TTBPP), which brackets the volatility range of all compounds tested. The results presented show the utility of the GC-RT method for rapid and precise measurements of KPDMS-Air. PMID:27179237

  11. Estimation of Minimal Breakdown Point in a GaP Plasma Structure and Discharge Features in Air and Argon Media

    NASA Astrophysics Data System (ADS)

    Kurt, H. Hilal; Tanrıverdi, Evrim

    2016-08-01

    We present gas discharge phenomena in argon and air media using a gallium phosphide (GaP) semiconductor and metal electrodes. The system has a large-diameter (D) semiconductor and a microscaled adjustable interelectrode gap (d). Both theoretical and experimental findings are discussed for a direct-current (dc) electric field (E) applied to this structure with parallel-plate geometry. As one of the main parameters, the pressure p takes an adjustable value from 0.26 kPa to 101 kPa. After collection of experimental data, a new theoretical formula is developed to estimate the minimal breakdown point of the system as a function of p and d. It is proven that the minimal breakdown point in the semiconductor and metal electrode system differs dramatically from that in metal and metal electrode systems. In addition, the surface charge density σ and spatial electron distribution n_e are calculated theoretically. Current-voltage characteristics (CVCs) demonstrate that there exist certain negative differential resistance (NDR) regions for small interelectrode separations (i.e., d = 50 μm) and low and moderate pressures between 3.7 kPa and 13 kPa in Ar medium. From the difference of currents in CVCs, the bifurcation of the discharge current is clarified for an applied voltage U. Since the current differences in NDRs have various values from 1 μA to 7.24 μA for different pressures, the GaP semiconductor plasma structure can be used in microwave diode systems due to its clear NDR region.
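
    For context, the classical metal-electrode benchmark that the semiconductor system is said to depart from is the Paschen curve, whose minimum has a closed form: with V(pd) = B·pd / (ln(A·pd) - ln(ln(1 + 1/γ))), the minimum sits at pd* = (e/A)·ln(1 + 1/γ). The A, B, and γ values below are textbook air values, purely illustrative and not the paper's fitted parameters:

```python
import math

A = 15.0      # ionization parameter for air, 1/(cm*Torr)
B = 365.0     # V/(cm*Torr)
gamma = 0.01  # secondary-emission coefficient of a metal cathode

pd_min = math.e / A * math.log(1.0 + 1.0 / gamma)   # Torr*cm at the curve minimum
v_min = B * pd_min                                  # minimum breakdown voltage, volts
print(f"minimum breakdown: {v_min:.0f} V at pd = {pd_min:.2f} Torr*cm")
```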

  12. Tracking Neural Modulation Depth by Dual Sequential Monte Carlo Estimation on Point Processes for Brain-Machine Interfaces.

    PubMed

    Wang, Yiwen; She, Xiwei; Liao, Yuxi; Li, Hongbao; Zhang, Qiaosheng; Zhang, Shaomin; Zheng, Xiaoxiang; Principe, Jose

    2016-08-01

    Classic brain-machine interface (BMI) approaches decode neural signals from the brain responsible for achieving specific motor movements, which subsequently command prosthetic devices. Brain activities adaptively change during the control of the neuroprosthesis in BMIs, where the alteration of the preferred direction and the modulation of the gain depth are observed. The static neural tuning models have been limited by fixed codes, resulting in a decay of decoding performance over the course of the movement and subsequent instability in motor performance. To achieve stable performance, we propose a dual sequential Monte Carlo adaptive point process method, which models and decodes the gradually changing modulation depth of individual neurons over the course of a movement. We use multichannel neural spike trains from the primary motor cortex of a monkey trained to perform a target pursuit task using a joystick. Our results show that our computational approach successfully tracks the neural modulation depth over time with better goodness-of-fit than classic static neural tuning models, resulting in smaller errors between the true kinematics and the estimations in both simulated and real data. Our novel decoding approach suggests that the brain may employ such strategies to achieve stable motor output, i.e., plastic neural tuning is a feature of neural systems. BMI users may benefit from this adaptive algorithm to achieve more complex and controlled movement outcomes. PMID:26584486

  13. Results from the HARPS-N 2014 Campaign to Estimate Accurately the Densities of Planets Smaller than 2.5 Earth Radii

    NASA Astrophysics Data System (ADS)

    Charbonneau, David; Harps-N Collaboration

    2015-01-01

    Although the NASA Kepler Mission has determined the physical sizes of hundreds of small planets, and we have in many cases characterized the star in detail, we know virtually nothing about the planetary masses: There are only 7 planets smaller than 2.5 Earth radii for which there exist published mass estimates with a precision better than 20 percent, the bare minimum value required to begin to distinguish between different models of composition. HARPS-N is an ultra-stable fiber-fed high-resolution spectrograph optimized for the measurement of very precise radial velocities. We have 80 nights of guaranteed time per year, of which half are dedicated to the study of small Kepler planets. In preparation for the 2014 season, we compared all available Kepler Objects of Interest to identify the ones for which our 40 nights could be used most profitably. We analyzed the Kepler light curves to constrain the stellar rotation periods, the lifetimes of active regions on the stellar surface, and the noise that would result in our radial velocities. We assumed various mass-radius relations to estimate the observing time required to achieve a mass measurement with a precision of 15%, giving preference to stars that had been well characterized through asteroseismology. We began by monitoring our long list of targets. Based on preliminary results we then selected our final short list, gathering typically 70 observations per target during summer 2014. The resulting mass measurements will have a significant impact on our understanding of these so-called super-Earths and small Neptunes. They would form a core dataset with which the international astronomical community can meaningfully seek to understand these objects and their formation in a quantitative fashion. HARPS-N was funded by the Swiss Space Office, the Harvard Origin of Life Initiative, the Scottish Universities Physics Alliance, the University of Geneva, the Smithsonian Astrophysical Observatory, the Italian National

  14. Accurate Evaluation of Quantum Integrals

    NASA Technical Reports Server (NTRS)

    Galant, David C.; Goorvitch, D.

    1994-01-01

    Combining an appropriate finite difference method with Richardson's extrapolation results in a simple, highly accurate numerical method for solving Schrödinger's equation. Important results are that error estimates are provided, and that one can extrapolate expectation values rather than the wavefunctions to obtain highly accurate expectation values. We discuss the eigenvalues, the error growth in repeated Richardson's extrapolation, and show that the expectation values calculated on a crude mesh can be extrapolated to obtain expectation values of high accuracy.

  15. Normal Tissue Complication Probability Estimation by the Lyman-Kutcher-Burman Method Does Not Accurately Predict Spinal Cord Tolerance to Stereotactic Radiosurgery

    SciTech Connect

    Daly, Megan E.; Luxton, Gary; Choi, Clara Y.H.; Gibbs, Iris C.; Chang, Steven D.; Adler, John R.; Soltys, Scott G.

    2012-04-01

    traditionally used to estimate spinal cord NTCP may not apply to the dosimetry of SRS. Further research with additional NTCP models is needed.

  16. An SVM-based classifier for estimating the state of various rotating components in agro-industrial machinery with a vibration signal acquired from a single point on the machine chassis.

    PubMed

    Ruiz-Gonzalez, Ruben; Gomez-Gil, Jaime; Gomez-Gil, Francisco Javier; Martínez-Martínez, Víctor

    2014-01-01

    The goal of this article is to assess the feasibility of estimating the state of various rotating components in agro-industrial machinery by employing just one vibration signal acquired from a single point on the machine chassis. To do so, a Support Vector Machine (SVM)-based system is employed. Experimental tests evaluated this system by acquiring vibration data from a single point of an agricultural harvester, while varying several of its working conditions. The whole process included two major steps. Initially, the vibration data were preprocessed through twelve feature extraction algorithms, after which the Exhaustive Search method selected the most suitable features. Secondly, the SVM-based system accuracy was evaluated by using Leave-One-Out cross-validation, with the selected features as the input data. The results of this study provide evidence that (i) accurate estimation of the status of various rotating components in agro-industrial machinery is possible by processing the vibration signal acquired from a single point on the machine structure; (ii) the vibration signal can be acquired with a uniaxial accelerometer, the orientation of which does not significantly affect the classification accuracy; and, (iii) when using an SVM classifier, an 85% mean cross-validation accuracy can be reached, which only requires a maximum of seven features as its input, and no significant improvements are noted between the use of either nonlinear or linear kernels. PMID:25372618
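
    A hedged sketch of the evaluation pipeline the abstract describes, an SVM with Leave-One-Out cross-validation on a small feature matrix, using scikit-learn. The random features and labels below stand in for the study's twelve extraction algorithms and harvester component states:

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(7)
X = rng.normal(size=(60, 7))        # up to seven selected features per vibration sample
y = rng.integers(0, 2, size=60)     # hypothetical component-state labels
X[y == 1] += 1.0                    # inject some class separation for the demo

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print(f"LOO cross-validation accuracy: {acc:.2f}")
```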

  18. A Solid-State 95Mo NMR and Computational Investigation of Dodecahedral and Square Antiprismatic Octacyanomolybdate(IV) Anions: Is the Point-Charge Approximation an Accurate Probe of Local Symmetry?

    SciTech Connect

    Forgeron, Michelle A.; Wasylishen, Roderick E.

    2006-06-21

    Solid-state 95Mo NMR spectroscopy is shown to be an efficient and effective tool for analyzing the diamagnetic octacyanomolybdate(IV) anions, Mo(CN)8(4-), of approximate dodecahedral, D2d, and square antiprismatic, D4d, symmetry. The sensitivity of the Mo magnetic shielding (σ) and electric field gradient (EFG) tensors to small changes in the local structure of these anions allows the approximate D2d and D4d Mo(CN)8(4-) anions to be readily distinguished. The use of high applied magnetic fields, 11.75, 17.63 and 21.1 T, amplifies the overall sensitivity of the NMR experiment and enables more accurate characterization of the Mo σ and EFG tensors. Although the magnitudes of the Mo σ and EFG interactions are comparable for the D2d and D4d Mo(CN)8(4-) anions, the relative values and orientations of the principal components of the Mo σ and EFG tensors give rise to 95Mo NMR line shapes that are significantly different at the fields utilized here. Quantum chemical calculations of the Mo σ and EFG tensors, using zeroth-order regular approximation density functional theory (ZORA DFT) and restricted Hartree-Fock (RHF) methods, have also been carried out and are in good agreement with experiment. The most significant and surprising result from the DFT and RHF calculations is a significant EFG at Mo for an isolated Mo(CN)8(4-) anion possessing an ideal square antiprismatic structure; this is contrary to the point-charge approximation, PCA, which predicts a zero EFG at Mo for this structure.

  19. A test of the 'one-point method' for estimating maximum carboxylation capacity from field-measured, light-saturated photosynthesis.

    PubMed

    De Kauwe, Martin G; Lin, Yan-Shih; Wright, Ian J; Medlyn, Belinda E; Crous, Kristine Y; Ellsworth, David S; Maire, Vincent; Prentice, I Colin; Atkin, Owen K; Rogers, Alistair; Niinemets, Ülo; Serbin, Shawn P; Meir, Patrick; Uddling, Johan; Togashi, Henrique F; Tarvainen, Lasse; Weerasinghe, Lasantha K; Evans, Bradley J; Ishida, F Yoko; Domingues, Tomas F

    2016-05-01

    Simulations of photosynthesis by terrestrial biosphere models typically need a specification of the maximum carboxylation rate (Vcmax). Estimating this parameter using A-Ci curves (net photosynthesis, A, vs intercellular CO2 concentration, Ci) is laborious, which limits availability of Vcmax data. However, many multispecies field datasets include net photosynthetic rate at saturating irradiance and at ambient atmospheric CO2 concentration (Asat) measurements, from which Vcmax can be extracted using a 'one-point method'. We used a global dataset of A-Ci curves (564 species from 46 field sites, covering a range of plant functional types) to test the validity of an alternative approach to estimate Vcmax from Asat via this 'one-point method'. If leaf respiration during the day (Rday) is known exactly, Vcmax can be estimated with an r(2) value of 0.98 and a root-mean-squared error (RMSE) of 8.19 μmol m(-2) s(-1). However, Rday typically must be estimated. Estimating Rday as 1.5% of Vcmax, we found that Vcmax could be estimated with an r(2) of 0.95 and an RMSE of 17.1 μmol m(-2) s(-1). The one-point method provides a robust means to expand current databases of field-measured Vcmax, giving new potential to improve vegetation models and quantify the environmental drivers of Vcmax variation. PMID:26719951
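
    The 'one-point method' algebra is short: assuming Asat is Rubisco-limited and Rday = 0.015·Vcmax, the Farquhar model A = Vcmax·(Ci - Γ*)/(Ci + Km) - Rday inverts to the function below. The kinetic constants are typical 25 °C literature values, used here only for illustration:

```python
def one_point_vcmax(asat, ci, gamma_star=42.75, kc=404.9, ko=278.4, o=210.0):
    # effective Michaelis constant Km = Kc * (1 + O/Ko), umol mol-1
    km = kc * (1.0 + o / ko)
    # invert Asat = Vcmax*(Ci - GammaStar)/(Ci + Km) - 0.015*Vcmax for Vcmax
    return asat / ((ci - gamma_star) / (ci + km) - 0.015)

print(one_point_vcmax(asat=20.0, ci=270.0))   # ~92 umol m-2 s-1, a plausible mid-range value
```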

  20. Accurate cloud-based smart IMT measurement, its validation and stroke risk stratification in carotid ultrasound: A web-based point-of-care tool for multicenter clinical trial.

    PubMed

    Saba, Luca; Banchhor, Sumit K; Suri, Harman S; Londhe, Narendra D; Araki, Tadashi; Ikeda, Nobutaka; Viskovic, Klaudija; Shafique, Shoaib; Laird, John R; Gupta, Ajay; Nicolaides, Andrew; Suri, Jasjit S

    2016-08-01

    This study presents AtheroCloud™, a novel cloud-based smart carotid intima-media thickness (cIMT) measurement tool using B-mode ultrasound for stroke/cardiovascular risk assessment and stratification. This is an anytime-anywhere clinical tool for routine screening and multi-center clinical trials. In this pilot study, the physician can upload ultrasound scans in one of the following formats (DICOM, JPEG, BMP, PNG, GIF or TIFF) directly into the proprietary cloud of AtheroPoint from the local server of the physician's office. They can then run the intelligent and automated AtheroCloud™ cIMT measurements in point-of-care settings in less than five seconds per image, while saving the vascular reports in the cloud. We statistically benchmark AtheroCloud™ cIMT readings against sonographer (a registered vascular technologist) readings and manual measurements derived from the tracings of the radiologist. Two hundred left/right common carotid artery (CCA) ultrasound scans from one hundred patients (75 M/25 F, mean age: 68±11 years) were collected in an IRB-approved study at Toho University, Japan, using a 7.5 MHz transducer (Toshiba, Tokyo, Japan). The measured cIMTs for the L/R carotid were as follows (in mm): (i) AtheroCloud™ (0.87±0.20, 0.77±0.20); (ii) sonographer (0.97±0.26, 0.89±0.29) and (iii) manual (0.90±0.20, 0.79±0.20), respectively. The coefficient of correlation (CC) between sonographer and manual for L/R cIMT was 0.74 (P<0.0001) and 0.65 (P<0.0001), while that between AtheroCloud™ and manual was 0.96 (P<0.0001) and 0.97 (P<0.0001), respectively. We observed that 91.15% of the population in AtheroCloud™ had a mean cIMT error less than 0.11 mm compared to the sonographer's 68.31%. The area under the curve for receiver operating characteristics was 0.99 for AtheroCloud™ against 0.81 for the sonographer. Our Framingham Risk Score stratified the population into three bins as follows: 39% in low-risk, 70.66% in medium-risk and 10.66% in high-risk bins

  1. A point-infiltration model for estimating runoff from rainfall on small basins in semiarid areas of Wyoming

    USGS Publications Warehouse

    Rankl, James G.

    1990-01-01

    A physically based point-infiltration model was developed for computing infiltration of rainfall into soils and the resulting runoff from small basins in Wyoming. The user describes a 'design storm' in terms of average rainfall intensity and storm duration. Information required to compute runoff for the design storm by using the model includes (1) soil type and description, and (2) two infiltration parameters and a surface-retention storage parameter. Parameter values are tabulated in the report. Rainfall and runoff data for three ephemeral-stream basins that contain only one type of soil were used to develop the model. Two assumptions were necessary: antecedent soil moisture is some long-term average, and storm rainfall is uniform in both time and space. The infiltration and surface-retention storage parameters were determined for the soil of each basin. Observed rainstorm and runoff data were used to develop a separation curve, or incipient-runoff curve, which distinguishes between runoff and nonrunoff rainfall data. The position of this curve defines the infiltration and surface-retention storage parameters. A procedure for applying the model to basins that contain more than one type of soil was developed using data from 7 of the 10 study basins. For these multiple-soil basins, the incipient-runoff curve defines the infiltration and retention-storage parameters for the soil having the highest runoff potential. Parameters were defined by ranking the soils according to their relative permeabilities and optimizing the position of the incipient-runoff curve by using measured runoff as a control for the fit. Analyses of runoff from multiple-soil basins indicate that the effective contributing area of runoff is less than the drainage area of the basin. In this study, the effective drainage area ranged from 41.6 to 71.1 percent of the total drainage area. Information on effective drainage area is useful in evaluating drainage area as an independent variable in
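
    A generic point-infiltration runoff calculation in this spirit, rainfall excess over a decaying infiltration capacity minus a retention store, can be sketched with a Horton-type capacity curve. The Horton form and every parameter below are stand-ins for the report's tabulated soil parameters, not the model itself:

```python
import numpy as np

def storm_runoff(intensity_mm_h, duration_h, f0=60.0, fc=8.0, k=2.0, retention_mm=3.0):
    t = np.linspace(0.0, duration_h, 1000)
    dt = t[1] - t[0]
    f_cap = fc + (f0 - fc) * np.exp(-k * t)              # infiltration capacity, mm/h
    excess = np.clip(intensity_mm_h - f_cap, 0.0, None)  # rainfall exceeding infiltration
    return max(np.sum(excess) * dt - retention_mm, 0.0)  # runoff depth, mm

print(storm_runoff(intensity_mm_h=40.0, duration_h=1.0))
```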

  2. Digital signal processing and control and estimation theory -- Points of tangency, area of intersection, and parallel directions

    NASA Technical Reports Server (NTRS)

    Willsky, A. S.

    1976-01-01

    A number of current research directions in the fields of digital signal processing and modern control and estimation theory were studied. Topics such as stability theory, linear prediction and parameter identification, system analysis and implementation, two-dimensional filtering, decentralized control and estimation, image processing, and nonlinear system theory were examined in order to uncover some of the basic similarities and differences in the goals, techniques, and philosophy of the two disciplines. An extensive bibliography is included.

  3. Point: Clarifying Policy Evidence With Potential-Outcomes Thinking—Beyond Exposure-Response Estimation in Air Pollution Epidemiology

    PubMed Central

    Zigler, Corwin Matthew; Dominici, Francesca

    2014-01-01

    The regulatory environment surrounding policies to control air pollution warrants a new type of epidemiologic evidence. Whereas air pollution epidemiology has typically informed policies with estimates of exposure-response relationships between pollution and health outcomes, these estimates alone cannot support current debates surrounding the actual health effects of air quality regulations. We argue that directly evaluating specific control strategies is distinct from estimating exposure-response relationships and that increased emphasis on estimating effects of well-defined regulatory interventions would enhance the evidence that supports policy decisions. Appealing to similar calls for accountability assessment of whether regulatory actions impact health outcomes, we aim to sharpen the analytic distinctions between studies that directly evaluate policies and those that estimate exposure-response relationships, with particular focus on perspectives for causal inference. Our goal is not to review specific methodologies or studies, nor is it to extoll the advantages of “causal” versus “associational” evidence. Rather, we argue that potential-outcomes perspectives can elevate current policy debates with more direct evidence of the extent to which complex regulatory interventions affect health. Augmenting the existing body of exposure-response estimates with rigorous evidence of the causal effects of well-defined actions will ensure that the highest-level epidemiologic evidence continues to support regulatory policies. PMID:25399414

  5. Estimation of point source fugitive emission rates from a single sensor time series: a conditionally-sampled Gaussian plume reconstruction

    EPA Science Inventory

    This paper presents a technique for determining the trace gas emission rate from a point source. The technique was tested using data from controlled methane release experiments and from measurement downwind of a natural gas production facility in Wyoming. Concentration measuremen...

  6. Effects of Varying Epoch Lengths, Wear Time Algorithms, and Activity Cut-Points on Estimates of Child Sedentary Behavior and Physical Activity from Accelerometer Data

    PubMed Central

    Banda, Jorge A.; Haydel, K. Farish; Davila, Tania; Desai, Manisha; Haskell, William L.; Matheson, Donna; Robinson, Thomas N.

    2016-01-01

    Objective To examine the effects of accelerometer epoch lengths, wear time (WT) algorithms, and activity cut-points on estimates of WT, sedentary behavior (SB), and physical activity (PA). Methods 268 7–11 year-olds with BMI ≥ 85th percentile for age and sex wore accelerometers on their right hips for 4–7 days. Data were processed and analyzed at epoch lengths of 1-, 5-, 10-, 15-, 30-, and 60-seconds. For each epoch length, WT minutes/day was determined using three common WT algorithms, and minutes/day and percent time spent in SB, light (LPA), moderate (MPA), and vigorous (VPA) PA were determined using five common activity cut-points. ANOVA tested differences in WT, SB, LPA, MPA, VPA, and MVPA when using the different epoch lengths, WT algorithms, and activity cut-points. Results WT minutes/day varied significantly by epoch length when using the NHANES WT algorithm (p < .0001), but did not vary significantly by epoch length when using the ≥ 20 minute consecutive zero or Choi WT algorithms. Minutes/day and percent time spent in SB, LPA, MPA, VPA, and MVPA varied significantly by epoch length for all sets of activity cut-points tested with all three WT algorithms (all p < .0001). Across all epoch lengths, minutes/day and percent time spent in SB, LPA, MPA, VPA, and MVPA also varied significantly across all sets of activity cut-points with all three WT algorithms (all p < .0001). Conclusions The common practice of converting WT algorithms and activity cut-point definitions to match different epoch lengths may introduce significant errors. Estimates of SB and PA from studies that process and analyze data using different epoch lengths, WT algorithms, and/or activity cut-points are not comparable, potentially leading to very different results, interpretations, and conclusions, misleading research and public policy. PMID:26938240

  7. A test of the 'one-point method' for estimating maximum carboxylation capacity from field-measured, light-saturated photosynthesis

    DOE PAGESBeta

    Martin G. De Kauwe; Serbin, Shawn P.; Lin, Yan-Shih; Wright, Ian J.; Medlyn, Belinda E.; Crous, Kristine Y.; Ellsworth, David S.; Maire, Vincent; Prentice, I. Colin; Atkin, Owen K.; Rogers, Alistair; Niinemets, Ulo; Meir, Patrick; Uddling, Johan; Togashi, Henrique F.; Tarvainen, Lasse; Weerasinghe, Lasantha K.; Evans, Bradley J.; Ishida, F. Yoko; Domingues, Tomas F.

    2015-12-31

    Here, simulations of photosynthesis by terrestrial biosphere models typically need a specification of the maximum carboxylation rate (Vcmax). Estimating this parameter using A–Ci curves (net photosynthesis, A, vs intercellular CO2 concentration, Ci) is laborious, which limits availability of Vcmax data. However, many multispecies field datasets include net photosynthetic rate at saturating irradiance and at ambient atmospheric CO2 concentration (Asat) measurements, from which Vcmax can be extracted using a ‘one-point method’.

  8. Sci—Thur AM: YIS - 11: Estimation of Bladder-Wall Cumulative Dose in Multi-Fraction Image-Based Gynaecological Brachytherapy Using Deformable Point Set Registration

    SciTech Connect

    Zakariaee, R; Brown, C J; Hamarneh, G; Parsons, C A; Spadinger, I

    2014-08-15

    Dosimetric parameters based on dose-volume histograms (DVH) of contoured structures are routinely used to evaluate the dose delivered to target structures and organs at risk. However, the DVH provides no information on the spatial distribution of the dose in situations of repeated fractions with changes in organ shape or size. The aim of this research was to develop methods to more accurately determine the geometrically localized, cumulative dose to the bladder wall in intracavitary brachytherapy for cervical cancer. The CT scans and treatment plans of 20 cervical cancer patients were used. Each patient was treated with five high-dose-rate (HDR) brachytherapy fractions with a prescribed dose of 600 cGy. The bladder inner and outer surfaces were delineated using MIM Maestro software (MIM Software Inc.) and were imported into MATLAB (MathWorks) as 3-dimensional point clouds constituting the “bladder wall”. A point-set registration toolbox for MATLAB, Coherent Point Drift (CPD), was used to non-rigidly transform the bladder-wall points from four of the fractions to the coordinate system of the remaining (reference) fraction, which was chosen to be the emptiest bladder for each patient. The doses were accumulated on the reference fraction and new cumulative dosimetric parameters were calculated. The LENT-SOMA toxicity scores of these patients were studied against the cumulative dose parameters. Based on this study, there was no significant correlation between the toxicity scores and the determined cumulative dose parameters.

  9. Future PMPs Estimation in Korea under AR5 RCP 8.5 Climate Change Scenario: Focus on Dew Point Temperature Change

    NASA Astrophysics Data System (ADS)

    Okjeong, Lee; Sangdan, Kim

    2016-04-01

    According to future climate change scenarios, future temperature is expected to increase gradually. Therefore, it is necessary to reflect the effects of these climate changes to predict Probable Maximum Precipitations (PMPs). In this presentation, PMPs will be estimated with future dew point temperature change. After selecting 174 major storm events from 1981 to 2005, new PMPs will be proposed with respect to storm areas (25, 100, 225, 400, 900, 2,025, 4,900, 10,000 and 19,600 km²) and storm durations (1, 2, 4, 6, 8, 12, 18, 24, 48 and 72 hours) using the Korea hydro-meteorological method. Also, an orographic transposition factor will be applied in place of the conventional terrain impact factor which has been used in previous Korean PMPs estimation reports. After estimating dew point temperature using future temperature and representative humidity information under the Korea Meteorological Administration AR5 RCP 8.5, changes in the PMPs under dew point temperature change will be investigated by comparison with present and future PMPs. This research was supported by a grant (14AWMP-B082564-01) from the Advanced Water Management Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.

  10. Automatic estimation of point-spread-function for deconvoluting out-of-focus optical coherence tomographic images using information entropy-based approach.

    PubMed

    Liu, Guozhong; Yousefi, Siavash; Zhi, Zhongwei; Wang, Ruikang K

    2011-09-12

    This paper proposes an automatic point spread function (PSF) estimation method to de-blur out-of-focus optical coherence tomography (OCT) images. The method utilizes the Richardson-Lucy deconvolution algorithm to deconvolve noisy defocused images with a family of Gaussian PSFs of different beam spot sizes. The best beam spot size is then automatically estimated based on the discontinuity of the information entropy of the recovered images. No prior knowledge of the OCT system's parameters or PSF is therefore required to deconvolve the image. The model does not account for diffraction or for coherent scattering of light by the sample. A series of experiments is performed on digital phantoms, a custom-built phantom doped with microspheres, fresh onion, and the human fingertip in vivo to show the performance of the proposed method. The method may also be useful in combination with other deconvolution algorithms for PSF estimation and image recovery. PMID:21935179
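
    The workflow lends itself to a compact sketch. The snippet below is an illustration, not the authors' code: Richardson-Lucy is implemented directly, a family of Gaussian PSFs is scanned, and the entropy 'discontinuity' is approximated by the largest jump between consecutive entropy values; the phantom, spot-size grid and iteration count are all assumptions.

      import numpy as np
      from scipy.signal import fftconvolve

      def gaussian_psf(size, spot):
          ax = np.arange(size) - size // 2
          xx, yy = np.meshgrid(ax, ax)
          psf = np.exp(-(xx**2 + yy**2) / (2.0 * spot**2))
          return psf / psf.sum()

      def richardson_lucy(image, psf, n_iter=30):
          est = np.full_like(image, image.mean())
          psf_m = psf[::-1, ::-1]                        # mirrored PSF
          for _ in range(n_iter):
              ratio = image / (fftconvolve(est, psf, mode="same") + 1e-12)
              est = est * fftconvolve(ratio, psf_m, mode="same")
          return est

      def entropy(img, bins=256):
          p, _ = np.histogram(img, bins=bins)
          p = p[p > 0] / p.sum()
          return -(p * np.log2(p)).sum()

      # Illustrative defocused phantom (not the paper's data)
      rng = np.random.default_rng(0)
      truth = np.zeros((64, 64)); truth[24:40, 24:40] = 1.0
      blurred = fftconvolve(truth, gaussian_psf(31, 2.0), mode="same")
      blurred = np.clip(blurred + 0.01 * rng.standard_normal(blurred.shape), 1e-6, None)

      spots = np.linspace(0.5, 5.0, 10)
      ents = [entropy(richardson_lucy(blurred, gaussian_psf(31, s))) for s in spots]
      best = spots[1:][np.argmax(np.abs(np.diff(ents)))]   # largest entropy jump
      print(best)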

  12. Global accuracy estimates of point and mean undulation differences obtained from gravity disturbances, gravity anomalies and potential coefficients

    NASA Technical Reports Server (NTRS)

    Jekeli, C.

    1979-01-01

    Through the method of truncation functions, the oceanic geoid undulation is divided into two constituents: an inner zone contribution expressed as an integral of surface gravity disturbances over a spherical cap; and an outer zone contribution derived from a finite set of potential harmonic coefficients. Global, average error estimates are formulated for undulation differences, thereby providing accuracies for a relative geoid. The error analysis focuses on the outer zone contribution for which the potential coefficient errors are modeled. The method of computing undulations based on gravity disturbance data for the inner zone is compared to the similar, conventional method which presupposes gravity anomaly data within this zone.

  13. Comparison between CT-based volumetric calculations and ICRU reference-point estimates of radiation doses delivered to bladder and rectum during intracavitary radiotherapy for cervical cancer

    SciTech Connect

    Pelloski, Christopher E.; Palmer, Matthew B.S.; Chronowski, Gregory M.; Jhingran, Anuja; Horton, John; Eifel, Patricia J. E-mail: peifel@mdanderson.org

    2005-05-01

    Purpose: To compare CT-based volumetric calculations and International Commission on Radiation Units and Measurements (ICRU) reference-point estimates of radiation doses to the bladder and rectum in patients with carcinoma of the uterine cervix treated with definitive low-dose-rate intracavitary radiotherapy (ICRT). Methods and Materials: Between November 2001 and March 2003, 60 patients were prospectively enrolled in a pilot study of ICRT with CT-based dosimetry. Most patients underwent two ICRT insertions. After insertion of an afterloading ICRT applicator, intraoperative orthogonal films were obtained to ensure proper positioning of the system and to facilitate subsequent planning. Treatments were prescribed using standard two-dimensional dosimetry and planning. Patients also underwent helical CT of the pelvis for three-dimensional reconstruction of the radiation dose distributions. The systems were loaded with ¹³⁷Cs sources using the Selectron remote afterloading system according to institutional practice for low-dose-rate brachytherapy. Three-dimensional dose distributions were generated using the Varian BrachyVision treatment planning system. The rectum was contoured from the bottom of the ischial tuberosities to the sigmoid flexure. The entire bladder was contoured. The minimal doses delivered to the 2 cm³ of bladder and rectum receiving the highest dose (D_BV2 and D_RV2, respectively) were determined from dose-volume histograms, and these estimates were compared with two-dimensionally derived estimates of the doses to the corresponding ICRU reference points. Results: A total of 118 unique intracavitary insertions were performed, and 93 were evaluated and the subject of this analysis. For the rectum, the estimated doses to the ICRU reference point did not differ significantly from the D_RV2 (p = 0.561); the mean (± standard deviation) difference was 21 cGy (± 344 cGy). The median volume of the rectum that received at least

  14. Curie point depth beneath the Barramiya-Red Sea coast area estimated from spectral analysis of aeromagnetic data

    NASA Astrophysics Data System (ADS)

    Abd El Nabi, Sami Hamed

    2012-01-01

    The geothermal regime beneath the Barramiya-Red Sea coast area of the Central Eastern Desert of Egypt has been determined by using the Curie point depth, which is temperature dependent. This study is based on the analysis of aeromagnetic data. The depths to the tops and centroids of the magnetic anomalies were calculated by the power spectrum method for the whole area. The investigation yields two new maps of the study area: the Curie point depth (CPD) and the surface heat flow (q). The coastal regions are characterized by high heat flow (83.6 mW/m²), due to the geothermic nature of the region, and shallow Curie depth (22.5 km); the CPD depends on the tectonic regime and morphology in the eastern part of the area. The western portion of the studied area has a lower heat flow (<50 mW/m²) and deeper Curie depth (~40 km), due to the existence of a large areal extent of negative Bouguer anomaly in the NE-SW direction. In addition to bordering the Red Sea margin, this high heat flow anomaly is associated with the increased earthquake swarm activity in the Abu Dabbab area.

  15. A Method to Estimate the Probability That Any Individual Lightning Stroke Contacted the Surface Within Any Radius of Any Point

    NASA Technical Reports Server (NTRS)

    Huddleston, Lisa L.; Roeder, William; Merceret, Francis J.

    2010-01-01

    A technique has been developed to calculate the probability that any nearby lightning stroke is within any radius of any point of interest. In practice, this provides the probability that a nearby lightning stroke was within a key distance of a facility, rather than the error ellipses centered on the stroke. This process takes the current bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to get the probability that the stroke is inside any specified radius. This new facility-centric technique will be much more useful to the space launch customers and may supersede the lightning error ellipse approach discussed in [5], [6].
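
    The integration step is easy to approximate numerically. A minimal Monte Carlo sketch follows; it is an illustration with made-up ellipse numbers, not the operational implementation:

      import numpy as np

      def prob_within_radius(mu, cov, center, radius, n=200_000, seed=1):
          """P(stroke within `radius` of `center`) for stroke ~ N(mu, cov)."""
          rng = np.random.default_rng(seed)
          pts = rng.multivariate_normal(mu, cov, size=n)
          return np.mean(np.hypot(*(pts - center).T) <= radius)

      # hypothetical error ellipse (covariance in km^2); point of interest
      # 1 km east of the reported stroke location
      p = prob_within_radius(mu=[0.0, 0.0],
                             cov=[[0.25, 0.05], [0.05, 0.49]],
                             center=[1.0, 0.0], radius=0.9)
      print(f"P(stroke within 0.9 km of the facility) = {p:.3f}")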

  16. EM Sounding Characterization of Soil Environment toward Estimation of Potential Pollutant Load from Non-point Sources

    NASA Astrophysics Data System (ADS)

    Mori, Y.; Ide, J.; Somura, H.; Morisawa, T.

    2010-12-01

    A multi-frequency electro-magnetic (EM) sounding method was applied to agricultural fields to investigate the characteristics of non-point pollution load. Soil environmental properties such as differences in land management were analyzed with electrical conductivity (EC) maps. In addition, vertical EC profiles obtained from EM soundings were compared with EC in drainage-ditch or river water. As a result, surface soil EC maps successfully captured the differences in land management caused by fertilizer application. Moreover, surface EC in the vertical profiles was strongly related to drainage-ditch or river EC, showing that most of the EC in the water was explained by the surface EC maps derived from the EM sounding data. The proposed method has the advantage of characterizing EC without sampling river water, a constraint we sometimes encountered during field surveys.

  17. Epidemiologic Behavior and Estimation of an Optimal Cut-Off Point for Homeostasis Model Assessment-2 Insulin Resistance: A Report from a Venezuelan Population

    PubMed Central

    Bermúdez, Valmore; Martínez, María Sofía; Apruzzese, Vanessa; Chávez-Castillo, Mervin; Gonzalez, Robys; Torres, Yaquelín; Bello, Luis; Añez, Roberto; Chacín, Maricarmen; Toledo, Alexandra; Cabrera, Mayela; Mengual, Edgardo; Ávila, Raquel; López-Miranda, José

    2014-01-01

    Background. Mathematical models such as Homeostasis Model Assessment have gained popularity in the evaluation of insulin resistance (IR). The purpose of this study was to estimate the optimal cut-off point for Homeostasis Model Assessment-2 Insulin Resistance (HOMA2-IR) in an adult population of Maracaibo, Venezuela. Methods. A descriptive, cross-sectional study with randomized, multistage sampling included 2,026 adult individuals. IR was evaluated through HOMA2-IR calculation in 602 metabolically healthy individuals. For cut-off point estimation, two approaches were applied: HOMA2-IR percentile distribution and construction of ROC curves using sensitivity and specificity for selection. Results. The HOMA2-IR arithmetic mean for the general population was 2.21 ± 1.42, with 2.18 ± 1.37 for women and 2.23 ± 1.47 for men (P = 0.466). When calculating HOMA2-IR for the healthy reference population, the resulting p75 was 2.00. Using ROC curves, the selected cut-off point was 1.95, with an area under the curve of 0.801, sensitivity of 75.3%, and specificity of 72.8%. Conclusions. We propose an optimal cut-off point of 2.00 for HOMA2-IR, offering high sensitivity and specificity, sufficient for proper assessment of IR in the adult population of our city, Maracaibo. The determination of population-specific cut-off points is needed to evaluate risk for public health problems, such as obesity and metabolic syndrome. PMID:27379332
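
    The cut-off selection from a ROC curve can be sketched as follows. This is an illustration with synthetic data, not the study's dataset, and Youden's J (sensitivity + specificity - 1) stands in for the authors' sensitivity/specificity criterion:

      import numpy as np
      from sklearn.metrics import roc_curve

      def optimal_cutoff(labels, score):
          """Cut-off maximizing Youden's J; returns (cut-off, sens, spec)."""
          fpr, tpr, thr = roc_curve(labels, score)
          k = np.argmax(tpr - fpr)
          return thr[k], tpr[k], 1.0 - fpr[k]

      # synthetic HOMA2-IR values: 300 insulin-sensitive, 120 resistant
      rng = np.random.default_rng(0)
      homa = np.r_[rng.lognormal(0.4, 0.4, 300), rng.lognormal(1.0, 0.4, 120)]
      labels = np.r_[np.zeros(300), np.ones(120)]
      print(optimal_cutoff(labels, homa))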

  18. Estimation of contribution from non-point sources to perfluorinated surfactants in a river by using boron as a wastewater tracer.

    PubMed

    Nishikoori, Hiroshi; Murakami, Michio; Sakai, Hiroshi; Oguma, Kumiko; Takada, Hideshige; Takizawa, Satoshi

    2011-08-01

    The contribution of non-point sources to perfluorinated surfactants (PFSs) in a river was evaluated by estimating their fluxes and by using boron (B) as a tracer. The utility of PFSs/B as an indicator for evaluating the impact of non-point sources was demonstrated. River water samples were collected from the Iruma River, upstream of the intake of drinking water treatment plants in Tokyo, during dry weather and wet weather, and 13 PFSs, dissolved organic carbon (DOC), total nitrogen (TN), and B were analyzed. Perfluorohexane sulfonate (PFHxS), perfluorooctane sulfonate (PFOS), perfluoroheptanoate (PFHpA), perfluorooctanoate (PFOA), perfluorononanoate (PFNA), perfluorodecanoate (PFDA), perfluoroundecanoate (PFUA), and perfluorododecanoate (PFDoDA) were detected on all sampling dates. The concentrations and fluxes of perfluorocarboxylates (PFCAs, e.g. PFOA and PFNA) were higher during wet weather, but those of perfluoroalkyl sulfonates (PFASs, e.g. PFHxS and PFOS) were not. The wet/dry ratios of PFSs/B (ratios of PFSs/B during wet weather to those during dry weather) agreed well with those of PFS fluxes (ratios of PFS fluxes during wet weather to those during dry weather), indicating that PFSs/B is useful for evaluating the contribution from non-point sources to PFSs in rivers. The wet/dry ratios of PFOA and PFNA were higher than those of other PFSs, DOC, and TN, showing that non-point sources contributed greatly to PFOA and PFNA in the water. This is the first study to use B as a wastewater tracer to estimate the contribution of non-point sources to PFSs in a river. PMID:21546052

  19. Point-source CO2 emission estimation from airborne sampled CO2 mass density: a case study for an industrial plant in Biganos, Southern France.

    NASA Astrophysics Data System (ADS)

    Carotenuto, Federico; Gioli, Beniamino; Toscano, Piero; Zaldei, Alessandro; Miglietta, Franco

    2013-04-01

    One interesting aspect of the airborne sampling of ground emissions of all types (from CO2 to particulate matter) is the ability to identify the source from which these emissions originated and, therefore, obtain an estimate of that ground source's strength. Recently, an aerial campaign was conducted to sample emissions from a paper production plant in Biganos (France). The campaign made use of a Sky Arrow ERA (Environmental Research Aircraft) equipped with a mobile flux platform system. This system couples (among other instrumentation) a turbulence probe (BAT) and a LICOR 7500 open-path infra-red gas analyzer, which also enables the estimation of high-resolution fluxes of different scalars via the spatially integrated eddy-covariance technique. Aircraft data showed a marked increase in CO2 mass density downwind of the industrial area, while vertical profile samplings showed that concentrations changed with altitude. The estimate of the CO2 source was obtained using a simple mass balance approach, that is, by integrating the product of the CO2 concentration and the mass flow rate through a cross-sectional area downwind of the point source. The results were compared with those obtained by means of a "forward-mode" Lagrangian dispersion model operated iteratively. The CO2 source strength was varied at each iteration to obtain an optimal convergence between the modeled atmospheric concentrations and the concentration data observed by the aircraft. The procedure makes use of wind speed and atmospheric turbulence data which are directly measured by the BAT probe at different altitudes. The two methods provided comparable estimates of the CO2 source, thus providing a substantial validation of the model-based iterative dispersion procedure. We consider that this data-model integration approach involving aircraft surveys and models may substantially enhance the estimation of point and area sources of any scalar, even in more complex
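
    The mass balance step reduces to a discrete surface integral over the downwind flight curtain. A minimal sketch with made-up numbers (real processing must also handle background variability and gaps in the curtain):

      import numpy as np

      def mass_balance_flux(conc, background, wind_perp, dy, dz):
          """Source strength [kg/s] from a crosswind-vertical curtain.

          conc, wind_perp: 2-D arrays [z, y] of gas density [kg/m^3] and the
          wind component normal to the curtain [m/s]; dy, dz: spacing [m].
          """
          return np.sum((conc - background) * wind_perp) * dy * dz

      # toy plume cross-section on a 20 x 50 grid with 10 m cells
      z, y = np.mgrid[0:20, 0:50]
      plume = 1e-4 * np.exp(-((y - 25.0) ** 2 / 50.0 + (z - 5.0) ** 2 / 8.0))
      print(mass_balance_flux(plume + 7e-4, 7e-4, np.full_like(plume, 3.0),
                              dy=10.0, dz=10.0))   # kg/s for this toy case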

  20. Structural Constraints and Earthquake Recurrence Estimates for the West Tahoe-Dollar Point Fault, Lake Tahoe Basin, California

    NASA Astrophysics Data System (ADS)

    Maloney, J. M.; Driscoll, N. W.; Kent, G.; Brothers, D. S.; Baskin, R. L.; Babcock, J. M.; Noble, P. J.; Karlin, R. E.

    2011-12-01

    Previous work in the Lake Tahoe Basin (LTB), California, identified the West Tahoe-Dollar Point Fault (WTDPF) as the most hazardous fault in the region. Onshore and offshore geophysical mapping delineated three segments of the WTDPF extending along the western margin of the LTB. The rupture patterns between the three WTDPF segments remain poorly understood. Fallen Leaf Lake (FLL), Cascade Lake, and Emerald Bay are three sub-basins of the LTB, located south of Lake Tahoe, that provide an opportunity to image primary earthquake deformation along the WTDPF and associated landslide deposits. We present results from recent (June 2011) high-resolution seismic CHIRP surveys in FLL and Cascade Lake, as well as complete multibeam swath bathymetry coverage of FLL. Radiocarbon dates obtained from the new piston cores acquired in FLL provide age constraints on the older FLL slide deposits and build on and complement previous work that dated the most recent event (MRE) in Fallen Leaf Lake at ~4.1-4.5 k.y. BP. The CHIRP data beneath FLL image slide deposits that appear to correlate with contemporaneous slide deposits in Emerald Bay and Lake Tahoe. A major slide imaged in FLL CHIRP data is slightly younger than the Tsoyowata ash (7950-7730 cal yrs BP) identified in sediment cores and appears synchronous with a major Lake Tahoe slide deposit (7890-7190 cal yrs BP). The equivalent age of these slides suggests the penultimate earthquake on the WTDPF may have triggered them. If correct, we postulate a recurrence interval of ~3-4 k.y. These results suggest the FLL segment of the WTDPF is near its seismic recurrence cycle. Additionally, CHIRP profiles acquired in Cascade Lake image the WTDPF for the first time in this sub-basin, which is located near the transition zone between the FLL and Rubicon Point Sections of the WTDPF. We observe two fault-strands trending N45°W across southern Cascade Lake for ~450 m. The strands produce scarps of ~5 m and ~2.7 m, respectively, on the lake

  1. Equipment Errors: A Prevalent Cause for Fallacy in Blood Pressure Recording - A Point Prevalence Estimate from an Indian Health University

    PubMed Central

    Mishra, Badrinarayan; Sinha, Nidhi Dinesh; Gidwani, Hitesh; Shukla, Sushil Kumar; Kawatra, Abhishek; Mehta, SC

    2013-01-01

    prevalent arm bladder cuff-mismatching can be important barriers to accurate BP measurement. PMID:23559698

  2. On the Choice of Access Point Selection Criterion and Other Position Estimation Characteristics for WLAN-Based Indoor Positioning.

    PubMed

    Laitinen, Elina; Lohan, Elena Simona

    2016-01-01

    Positioning based on Wireless Local Area Networks (WLAN) is one of the most promising technologies for indoor location-based services, generally using the information carried by Received Signal Strengths (RSS). One challenge, however, is the huge amount of data in the radiomap database, due to the enormous number of hearable Access Points (AP), which can make the positioning system very complex. This paper concentrates on WLAN-based indoor location by comparing fingerprinting, path loss and weighted centroid based positioning approaches in terms of complexity and performance, and by studying the effects of grid size and AP reduction under several choices of selection criterion. All results are based on real field measurements in three multi-floor buildings. We validate our earlier findings concerning several different AP selection criteria and conclude that the best results are obtained with a maximum RSS-based criterion, which also proved to be the most consistent among the different investigated approaches. We show that the weighted centroid based low-complexity method is very sensitive to AP reduction, while the path loss-based method remains robust even when a high percentage of APs is removed. Indeed, for fingerprinting, 50% of the APs can be removed safely with a properly chosen removal criterion without increasing the positioning error much. PMID:27213395
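
    For concreteness, a minimal sketch of a weighted centroid estimator with a maximum-RSS access point selection step; the linear-power weighting and all numbers are illustrative assumptions, not the paper's exact formulation:

      import numpy as np

      def weighted_centroid(ap_xy, rss_dbm):
          """Position as a centroid of AP coordinates, weighted by linear power."""
          w = 10.0 ** (np.asarray(rss_dbm, dtype=float) / 10.0)
          return (w[:, None] * np.asarray(ap_xy, dtype=float)).sum(0) / w.sum()

      def locate_with_strongest_k(ap_xy, rss_dbm, k=3):
          """Maximum-RSS selection criterion: keep only the k strongest APs."""
          idx = np.argsort(rss_dbm)[-k:]
          return weighted_centroid(np.asarray(ap_xy)[idx],
                                   np.asarray(rss_dbm)[idx])

      aps = [[0, 0], [10, 0], [0, 10], [10, 10]]          # known AP positions (m)
      print(locate_with_strongest_k(aps, [-45, -60, -70, -80]))  # near the first AP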

  4. Contaminant dispersion prediction and source estimation with integrated Gaussian-machine learning network model for point source emission in atmosphere.

    PubMed

    Ma, Denglong; Zhang, Zaoxiao

    2016-07-01

    Gas dispersion models are important for predicting gas concentrations when contaminant gas leakage occurs. Intelligent network models such as the radial basis function (RBF), back propagation (BP) neural network and support vector machine (SVM) models can be used for gas dispersion prediction. However, the prediction results from these network models, which take many inputs based on the original monitoring parameters, are not in good agreement with the experimental data. Therefore, a new series of machine learning algorithm (MLA) models, combining the classic Gaussian model with MLA algorithms, has been presented. The prediction results from the new models are greatly improved. Among these models, the Gaussian-SVM model performs best, and its computation time is close to that of the classic Gaussian dispersion model. Finally, the Gaussian-MLA models were applied to identifying the emission source parameters with the particle swarm optimization (PSO) method. The estimation performance of PSO with Gaussian-MLA is better than that with the Gaussian model, the Lagrangian stochastic (LS) dispersion model, and network models based on the original monitoring parameters. Hence, the new prediction model based on Gaussian-MLA is potentially a good method for predicting contaminant gas dispersion, as well as a good forward model for the emission source parameter identification problem. PMID:27035273
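
    A sketch of the hybrid idea follows. It is illustrative only: the plume formula is the textbook Gaussian model, the dispersion coefficients and 'observations' are invented, and scikit-learn's SVR stands in for the paper's Gaussian-SVM model.

      import numpy as np
      from sklearn.svm import SVR

      def gaussian_plume(q, u, y, z, h, sy, sz):
          """Textbook Gaussian plume concentration for a point source."""
          return (q / (2 * np.pi * u * sy * sz) * np.exp(-y**2 / (2 * sy**2))
                  * (np.exp(-(z - h) ** 2 / (2 * sz**2))
                     + np.exp(-(z + h) ** 2 / (2 * sz**2))))

      rng = np.random.default_rng(0)
      x = rng.uniform(50, 500, 200)                   # downwind distance (m)
      y = rng.uniform(-40, 40, 200)                   # crosswind offset (m)
      sy, sz = 0.22 * x**0.9, 0.12 * x**0.9           # hypothetical coefficients
      c_model = gaussian_plume(1.0, 4.0, y, 2.0, 10.0, sy, sz)
      c_obs = c_model * rng.lognormal(0.0, 0.3, 200)  # stand-in measurements

      # Gaussian output as the single physical feature; SVR learns the correction
      X = np.log(c_model[:, None] + 1e-12)
      svr = SVR(C=10.0, epsilon=0.01).fit(X, np.log(c_obs + 1e-12))
      c_pred = np.exp(svr.predict(X))                 # hybrid prediction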

  5. A Bayesian approach to estimation of a statistical change-point in the mean parameter for high dimensional non-linear time series

    NASA Astrophysics Data System (ADS)

    Speegle, Darrin; Steward, Robert

    2015-08-01

    We propose a semiparametric approach to infer the existence of, and estimate the location of, a statistical change-point in a nonlinear high dimensional time series contaminated with an additive noise component. In particular, we consider a p-dimensional stochastic process of independent multivariate normal observations where the mean function varies smoothly except at a single change-point. Our approach first involves a dimension reduction of the original time series through a random matrix multiplication. Next, we conduct a Bayesian analysis on the empirical detail coefficients of this dimensionally reduced time series after a wavelet transform. We also present a means to associate confidence bounds with the conclusions of our results. Aside from being computationally efficient and straightforward to implement, the primary advantage of our method is that it applies to a much larger class of time series whose mean functions are subject to only general smoothness conditions.
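
    A compressed sketch of the pipeline, for orientation only: the paper's wavelet-domain Bayesian analysis of detail coefficients is replaced here by a flat-prior Gaussian profile likelihood over candidate change times on the randomly projected series.

      import numpy as np

      def changepoint_posterior(X, d=4, seed=0):
          """Posterior over a single mean-shift time for an (n, p) series X."""
          n, p = X.shape
          rng = np.random.default_rng(seed)
          Y = X @ rng.standard_normal((p, d)) / np.sqrt(p)   # random projection
          loglik = np.full(n, -np.inf)
          for t in range(2, n - 2):                          # candidate change times
              rss = (((Y[:t] - Y[:t].mean(0)) ** 2).sum()
                     + ((Y[t:] - Y[t:].mean(0)) ** 2).sum())
              loglik[t] = -0.5 * n * d * np.log(rss / (n * d))
          post = np.exp(loglik - loglik.max())
          return post / post.sum()

      rng = np.random.default_rng(1)
      X = np.vstack([rng.standard_normal((60, 100)),
                     0.8 + rng.standard_normal((40, 100))])  # shift at t = 60
      print(changepoint_posterior(X).argmax())               # close to 60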

  6. Estimation of local anisotropy of plexiform bone: Comparison between depth sensing micro-indentation and Reference Point Indentation.

    PubMed

    Dall'Ara, E; Grabowski, P; Zioupos, P; Viceconti, M

    2015-11-26

    The recently developed Reference Point Indentation (RPI) technique allows the measurement of bone properties at the tissue level in vivo. The goal of this study was to compare the local anisotropic behaviour of bovine plexiform bone measured with depth-sensing micro-indentation tests and with RPI. Fifteen plexiform bone specimens were extracted from a bovine femur and polished down to 0.05 µm alumina paste for indentations along the axial, radial and circumferential directions (N=5 per group). Twenty-four micro-indentations (2.5 µm in depth; 10% of them were excluded because of testing problems) and four RPI indentations (~50 µm in depth) were performed on each sample. The local indentation modulus Eind was found to be highest for the axial direction (24.3±2.5 GPa) compared with the circumferential indentations (19% less stiff) and the radial direction (30% less stiff). RPI measurements were also found to be dependent on indentation direction (p<0.001), with the exception of the Indentation Distance Increase (IDI) (p=0.173). In particular, the unloading slope US1 followed trends similar to Eind: 0.47±0.03 N/µm for axial, 11% lower for circumferential and 17% lower for radial. Significant correlations were found between US1 and Eind (p=0.001; R²=0.58), while no significant relationship was found between IDI and any of the micro-indentation measurements (p>0.157). In conclusion, some of the RPI measurements can provide information about local anisotropy, but IDI cannot. Moreover, there is a linear relationship between most local mechanical properties measured with RPI and with micro-indentation, but IDI does not correlate with any micro-indentation measurement. PMID:26477406

  8. BeiDou phase bias estimation and its application in precise point positioning with triple-frequency observable

    NASA Astrophysics Data System (ADS)

    Gu, Shengfeng; Lou, Yidong; Shi, Chuang; Liu, Jingnan

    2015-10-01

    At present, the BeiDou system (BDS) enables the practical application of triple-frequency observables in the Asia-Pacific region. Of the many possible benefits from the additional signal, this study focuses on exploiting the contribution of zero-difference (ZD) ambiguity resolution (AR) to precise point positioning (PPP). A general modeling strategy for multi-frequency PPP AR is presented, in which the least squares ambiguity decorrelation adjustment (LAMBDA) method is employed in ambiguity fixing based on the full variance-covariance ambiguity matrix generated from the raw data processing model. Because reliable fixing of the BDS L1 ambiguity is more difficult, the LAMBDA method with partial ambiguity fixing is proposed to enable the independent and instantaneous resolution of extra wide-lane (EWL) and wide-lane (WL) ambiguities. This mechanism of sequential ambiguity fixing is demonstrated for resolving ZD satellite phase bias and performing triple-frequency PPP AR with two reference station networks with typical baselines of up to 400 and 800 km, respectively. Tests show that most of the EWL and WL phase biases of BDS have a consistency of better than 0.1 cycle, and this value decreases to 80% for the L1 phase bias in Experiment I, while all the solutions of Experiment II have a similar RMS of about 0.12 cycles. In addition, the repeatability of the daily mean phase bias agrees to 0.093 cycles and 0.095 cycles for EWL and WL on average, which is much smaller than the 0.20 cycles of L1. To assess the improvement of fixed PPP brought by applying the third frequency signal as well as the above phase bias, various ambiguity fixing strategies are considered in the numerical demonstration. It is shown that the impact of the additional signal is almost negligible when only the float solution is involved. It is also shown that fixing EWL and WL together, as opposed to single ambiguity fixing, leads to an improvement in PPP accuracy on average. Attributed to the efficient

  9. Estimation of diffuse and point source microbial pollution in the ribble catchment discharging to bathing waters in the north west of England.

    PubMed

    Wither, A; Greaves, J; Dunhill, I; Wyer, M; Stapleton, C; Kay, D; Humphrey, N; Watkins, J; Francis, C; McDonald, A; Crowther, J

    2005-01-01

    Achieving compliance with the mandatory standards of the 1976 Bathing Water Directive (76/160/EEC) is required at all U.K. identified bathing waters. In recent years, the Fylde coast has been an area of significant investment in 'point source' control, which did not prove sufficient, in isolation, to achieve compliance with the mandatory, let alone the guide, levels of water quality in the Directive. The potential impact of riverine sources of pollution was first confirmed by a study in 1997. The completion of sewerage system enhancements offered the potential for the study of faecal indicator delivery from upstream sources, comprising both point sources and diffuse agricultural sources. A research project to define these elements commenced in 2001. Initially, a desk study, reported here, estimated the principal infrastructure contributions within the Ribble catchment. A second phase of this investigation has involved acquisition of empirical water quality and hydrological data from the catchment during the 2002 bathing season. These data have been used to further calibrate the 'budgets' and 'delivery' modelling and are still being analysed. This paper reports the initial desk study approach to faecal indicator budget estimation using available data from the sewerage infrastructure and catchment sources of faecal indicators. PMID:15850190

  10. Regional lunar gravity anomaly recovery with the GRAIL Level-1b data, and pin-point crustal density estimation with the GRAIL Level-2 and LRO topography data

    NASA Astrophysics Data System (ADS)

    Hashimoto, M.; Heki, K.

    2014-12-01

    We report lunar gravity anomaly recovery using the GRAIL Level-1b and Level-2 data, downloaded from the PDS Geoscience Node at Washington University. First, we used the GNV1b (satellite position) and KBR1b (inter-satellite ranging) files of the Level-1b data to estimate the surface mass distribution on the Moon following the method of Sugano and Heki (EPS 2004; GRL 2005). We confirmed that we could recover gravity anomalies similar to the Level-2 data with a spatial resolution of ~0.8 degrees using low-altitude portions of the data. Next, we downloaded the GRAIL Level-2 data set (spherical harmonics complete to degree/order 660) together with the topography data from LRO laser altimetry, and tried to estimate the pin-point surface crustal density. First, we selected a square as large as ~60 km and compared the gravity and topography values at grid points within the square. They are roughly proportional, and the slope provides information on the density of the material making up the topography. This method, however, causes an apparent positive correlation between density and average topographic height of about 0.2 g/cm³/km, because we (wrongly) assume that the mass anomalies lie on the reference surface: the mass above (or below) the reference surface is then interpreted as heavier (or lighter) than its real density. We performed an a-posteriori correction of the altitude-dependent errors in the estimated density. We finally focus on a few positive gravity anomalies on the nearside (such as those close to the Copernicus crater) that are not associated with any topographic high. We will try to constrain the subsurface structure of the dense material responsible for the anomaly using both Level-1b and -2 data.
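
    The slope-to-density step can be illustrated with a Bouguer-slab simplification (g = 2πGρh), which is only a crude stand-in for the authors' grid comparison; the synthetic numbers below are invented.

      import numpy as np

      G = 6.674e-11                                # m^3 kg^-1 s^-2

      def slab_density(gravity_mgal, topo_m):
          """Density from the gravity-vs-topography slope within a window."""
          slope = np.polyfit(topo_m, gravity_mgal, 1)[0]   # mGal per metre
          return slope * 1e-5 / (2 * np.pi * G)            # kg/m^3

      rng = np.random.default_rng(0)
      h = rng.uniform(-500, 1500, 400)                     # topography (m)
      g_mgal = 2 * np.pi * G * 2600.0 * h / 1e-5 + rng.normal(0, 2, 400)
      print(slab_density(g_mgal, h))                       # recovers ~2600 kg/m^3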

  11. Grading More Accurately

    ERIC Educational Resources Information Center

    Rom, Mark Carl

    2011-01-01

    Grades matter. College grading systems, however, are often ad hoc and prone to mistakes. This essay focuses on one factor that contributes to high-quality grading systems: grading accuracy (or "efficiency"). I proceed in several steps. First, I discuss the elements of "efficient" (i.e., accurate) grading. Next, I present analytical results…

  12. Predict amine solution properties accurately

    SciTech Connect

    Cheng, S.; Meisen, A.; Chakma, A.

    1996-02-01

    Improved process design begins with using accurate physical property data. Especially in the preliminary design stage, physical property data such as density, viscosity, thermal conductivity and specific heat can affect the overall performance of absorbers, heat exchangers, reboilers and pumps. These properties can also influence temperature profiles in heat transfer equipment and thus control or affect the rate of amine breakdown. Aqueous-amine solution physical property data are available in graphical form; however, this form is not convenient for computer-based calculations. The equations developed here allow improved correlations of derived physical property estimates with published data. Expressions are given which can be used to estimate physical properties of methyldiethanolamine (MDEA), monoethanolamine (MEA) and diglycolamine (DGA) solutions.

  13. The application of iterative closest point (ICP) registration to improve 3D terrain mapping estimates using the flash 3D ladar system

    NASA Astrophysics Data System (ADS)

    Woods, Jack; Armstrong, Ernest E.; Armbruster, Walter; Richmond, Richard

    2010-04-01

    The primary purpose of this research was to develop an effective means of creating a 3D terrain map image (point cloud) in GPS-denied regions from a sequence of co-boresighted visible and 3D LADAR images. Both the visible and 3D LADAR cameras were hard-mounted to a vehicle. The vehicle was then driven around the streets of an abandoned village used as a training facility by the German Army, and imagery was collected. The visible and 3D LADAR images were then fused and 3D registration performed using a variation of the Iterative Closest Point (ICP) algorithm. The ICP algorithm is widely used for various spatial and geometric alignments of 3D imagery, producing a set of rotation and translation transformations between two 3D images. The ICP rotation and translation information obtained from registering the fused visible and 3D LADAR imagery was then used to calculate the x-y plane, range and intensity (xyzi) coordinates of various structures (buildings, vehicles, trees, etc.) along the driven path. The xyzi coordinate information was then combined to create a 3D terrain map (point cloud). In this paper, we describe the development and application of 3D imaging techniques (most specifically the ICP algorithm) used to improve spatial, range and intensity estimates of imagery collected during urban terrain mapping using a co-boresighted, commercially available digital video camera with a focal plane of 640×480 pixels and a 3D FLASH LADAR. Various representations of the reconstructed point clouds for the drive-through data will also be presented.
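
    For reference, a minimal point-to-point ICP (nearest-neighbour correspondences plus a Kabsch/SVD rigid fit); this is the textbook variant, not the authors' specific implementation:

      import numpy as np
      from scipy.spatial import cKDTree

      def best_rigid(A, B):
          """Least-squares R, t mapping point set A onto B (Kabsch/SVD)."""
          ca, cb = A.mean(0), B.mean(0)
          U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
          R = Vt.T @ U.T
          if np.linalg.det(R) < 0:               # guard against reflections
              Vt[-1] *= -1
              R = Vt.T @ U.T
          return R, cb - R @ ca

      def icp(src, dst, iters=30):
          """Align src to dst; returns aligned cloud and accumulated (R, t)."""
          tree = cKDTree(dst)
          R_tot, t_tot, cur = np.eye(3), np.zeros(3), src.copy()
          for _ in range(iters):
              _, idx = tree.query(cur)           # closest-point correspondences
              R, t = best_rigid(cur, dst[idx])
              cur = cur @ R.T + t
              R_tot, t_tot = R @ R_tot, R @ t_tot + t
          return cur, R_tot, t_tot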

  14. Accurate monotone cubic interpolation

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1991-01-01

    Monotone piecewise cubic interpolants are simple and effective. They are generally third-order accurate, except near strict local extrema, where accuracy degenerates to second-order due to the monotonicity constraint. Algorithms for piecewise cubic interpolants that preserve monotonicity as well as uniform third- and fourth-order accuracy are presented. The gain in accuracy is obtained by relaxing the monotonicity constraint in a geometric framework in which the median function plays a crucial role.
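
    The baseline these algorithms improve on is readily available: SciPy's PchipInterpolator implements the classic Fritsch-Carlson monotone cubic (third-order in general, second-order at strict extrema). A quick check of the monotonicity guarantee, with made-up data:

      import numpy as np
      from scipy.interpolate import PchipInterpolator

      x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
      y = np.array([0.0, 0.1, 0.9, 1.0, 1.0])     # monotone data with a plateau
      f = PchipInterpolator(x, y)                 # shape-preserving cubic
      xs = np.linspace(0.0, 4.0, 401)
      assert np.all(np.diff(f(xs)) >= -1e-12)     # no spurious oscillation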

  15. A Method to Estimate the Probability that any Individual Cloud-to-Ground Lightning Stroke was Within any Radius of any Point

    NASA Technical Reports Server (NTRS)

    Huddleston, Lisa L.; Roeder, William P.; Merceret, Francis J.

    2011-01-01

    A new technique has been developed to estimate the probability that a nearby cloud to ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even with the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force Station. Future applications could include forensic meteorology.

  17. High resolution measurements supported by electronic structure calculations of two naphthalene derivatives: [1,5]- and [1,6]-naphthyridine—Estimation of the zero point inertial defect for planar polycyclic aromatic compounds

    SciTech Connect

    Gruet, S.; Pirali, O.; Goubet, M. E-mail: manuel.goubet@univ-lille1.fr

    2014-06-21

    the semi-empirical relations to estimate the zero-point inertial defect (Δ₀) of polycyclic aromatic molecules and confirmed the contribution of low-frequency out-of-plane vibrational modes to the GS inertial defects of PAHs, which is indeed a key parameter to validate the analysis of such large molecules.

  18. Teleseismic Lg of Semipalatinsk and Novaya Zemlya Nuclear Explosions Recorded by the GRF (Gräfenberg) Array: Comparison with Regional Lg (BRV) and their Potential for Accurate Yield Estimation

    NASA Astrophysics Data System (ADS)

    Schlittenhardt, J.

    A comparison of regional and teleseismic log rms (root-mean-square) Lg amplitude measurements has been made for 14 underground nuclear explosions from the East Kazakh test site recorded both by the BRV (Borovoye) station in Kazakhstan and the GRF (Gräfenberg) array in Germany. The log rms Lg amplitudes observed at the BRV regional station at a distance of 690 km and at the teleseismic GRF array at a distance exceeding 4700 km show very similar relative values (standard deviation 0.048 magnitude units) for underground explosions of different sizes at the Shagan River test site. This result, as well as the comparison of BRV rms Lg magnitudes (calculated from the log rms amplitudes using an appropriate calibration) with magnitude determinations for P waves of global seismic networks (standard deviation 0.054 magnitude units), points to a high precision in estimating the relative source sizes of explosions from Lg-based single-station data. Similar results were also obtained by other investigators (Patton, 1988; Ringdal et al., 1992) using Lg data from different stations at different distances. Additionally, GRF log rms Lg and P-coda amplitude measurements were made for a larger data set from Novaya Zemlya and East Kazakh explosions, supplemented with mb(Lg) amplitude measurements using a modified version of Nuttli's (1973, 1986a) method. From this test of the relative performance of the three different magnitude scales, it was found that the Lg- and P-coda-based magnitudes performed equally well, whereas the modified Nuttli mb(Lg) magnitudes show greater scatter when compared to the worldwide mb reference magnitudes. Whether this result indicates that the rms amplitude measurements are superior to the zero-to-peak amplitude measurement of a single cycle used for the modified Nuttli method, however, cannot be finally assessed, since the calculated mb(Lg) magnitudes are only preliminary until appropriate attenuation corrections are available for the

  19. Using Mean Absolute Relative Phase, Deviation Phase and Point-Estimation Relative Phase to Measure Postural Coordination in a Serial Reaching Task

    PubMed Central

    Galgon, Anne K.; Shewokis, Patricia A.

    2016-01-01

    The objectives of this communication are to present the methods used to calculate mean absolute relative phase (MARP), deviation phase (DP) and point-estimate relative phase (PRP) and to compare their utility in measuring postural coordination during the performance of a serial reaching task. MARP and DP are derived from continuous relative phase time series representing the relationship between two body segments or joints during movements. MARP is a single measure used to quantify the coordination pattern, and DP measures the stability of the coordination pattern. PRP also quantifies coordination patterns, by measuring the relationship between the timing of maximal or minimal angular displacements of two segments within cycles of movement. Seven young adults practiced a bilateral serial reaching task 300 times over 3 days. Relative phase measures were used to evaluate inter-joint relationships for shoulder-hip (proximal) and hip-ankle (distal) postural coordination at early and late learning. MARP, PRP and DP distinguished between proximal and distal postural coordination. There was no effect of practice on any of the relative phase measures for the group, but individual differences were seen over practice. Combined, MARP and DP estimated the stability of in-phase and anti-phase postural coordination patterns; however, additional qualitative movement analyses may be needed to interpret findings in a serial task. We discuss the strengths and limitations of using MARP and DP, and compare MARP and DP to PRP measures in assessing coordination patterns in the context of various types of skillful tasks. Key points: MARP, DP and PRP measure coordination between segments or joint angles; the advantages and disadvantages of each measure should be considered in relation to the performance task; MARP and DP may capture coordination patterns and the stability of the patterns during discrete tasks or phases of movements within a task; PRP and the SD of PRP may capture coordination patterns and

  20. Location and depth estimation of point-dipole and line of dipoles using analytic signals of the magnetic gradient tensor and magnitude of vector components

    NASA Astrophysics Data System (ADS)

    Oruç, Bülent

    2010-01-01

    The magnetic gradient tensor (MGT) provides gradient components of potential fields with mathematical properties that allow processing techniques, e.g. analytic signal techniques. With the MGT emerging as a new tool for geophysical exploration, the mathematical modelling of gradient tensor fields is necessary for the interpretation of magnetic field measurements. The point-dipole and line of dipoles are used to approximate various magnetic objects. I investigate the maxima of the magnitude of magnetic vector components (MMVC) and analytic signals of the magnetic gradient tensor (ASMGT) resulting from point-dipole and line-of-dipoles sources in determining horizontal locations. I also present a method in which the depths of these sources are estimated from the ratio of the maximum of MMVC to the maximum of ASMGT. Theoretical examples have been carried out to test the feasibility of the method in obtaining source locations and depths. The method has been applied to the MMVC and ASMGT computed from the total field data over a basic/ultrabasic body at the emerald deposit of Socotó, Bahia, Brazil and a buried water supply pipe near Jadaguda Township, India. In both field examples, the method produces good correlations with previous interpretations.

  1. Investigating flow patterns and related dynamics in multi-instability turbulent plasmas using a three-point cross-phase time delay estimation velocimetry scheme

    NASA Astrophysics Data System (ADS)

    Brandt, C.; Thakur, S. C.; Tynan, G. R.

    2016-04-01

    Complexities of flow patterns in the azimuthal cross-section of a cylindrical magnetized helicon plasma and the corresponding plasma dynamics are investigated by means of a novel scheme for time delay estimation velocimetry. The advantage of the introduced method is its capability of calculating the time-averaged 2D velocity fields of propagating wave-like structures and patterns in complex spatiotemporal data. It is able to distinguish and visualize the details of simultaneously present, superimposed, entangled dynamics, and it can be applied to fluid-like systems exhibiting frequently repeating patterns (e.g., waves in plasmas, waves in fluids, dynamics in planetary atmospheres, etc.). The velocity calculations are based on time delay estimation obtained from cross-phase analysis of time series. Each velocity vector is unambiguously calculated from three time series measured at three different non-collinear spatial points. This method, when applied to fast imaging, has been crucial to understanding the rich plasma dynamics in the azimuthal cross-section of a cylindrical linear magnetized helicon plasma. The capabilities and the limitations of this velocimetry method are discussed and demonstrated for two completely different plasma regimes, i.e., for quasi-coherent wave dynamics and for complex broadband wave dynamics involving simultaneously present multiple instabilities.
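
    The geometric core of the scheme can be sketched briefly: two pairwise delays at three non-collinear points determine the slowness vector of a locally plane wave, and hence its velocity. The snippet below uses a cross-correlation peak in place of the paper's cross-phase estimator and synthetic pulses, so it only illustrates the geometry:

      import numpy as np

      def delay(a, b, fs):
          """Delay of b relative to a from the cross-correlation peak [s]."""
          xc = np.correlate(b - b.mean(), a - a.mean(), mode="full")
          return (np.argmax(xc) - (len(a) - 1)) / fs

      def plane_wave_velocity(p1, p2, p3, s1, s2, s3, fs):
          """Solve (r_j - r_1) . s = tau_1j for slowness s; v = s / |s|^2."""
          A = np.array([np.subtract(p2, p1), np.subtract(p3, p1)], float)
          tau = np.array([delay(s1, s2, fs), delay(s1, s3, fs)])
          s = np.linalg.solve(A, tau)
          return s / np.dot(s, s)

      fs = 1000.0
      t = np.arange(0, 0.5, 1 / fs)
      pulse = lambda d: np.exp(-((t - 0.25 - d) ** 2) / (2 * 0.02 ** 2))
      v = plane_wave_velocity((0, 0), (0.1, 0), (0, 0.1),
                              pulse(0.0), pulse(0.01), pulse(0.005), fs)
      print(v)   # close to (8, 4) m/s for these synthetic delays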

  2. Application of the N-point moving average method for brachial pressure waveform-derived estimation of central aortic systolic pressure.

    PubMed

    Shih, Yuan-Ta; Cheng, Hao-Min; Sung, Shih-Hsien; Hu, Wei-Chih; Chen, Chen-Huan

    2014-04-01

    The N-point moving average (NPMA) is a mathematical low-pass filter that can smooth peaked, noninvasively acquired radial pressure waveforms to estimate central aortic systolic pressure using a common denominator of N/4 (where N = the acquisition sampling frequency). The present study investigated whether the NPMA method can be applied to brachial pressure waveforms. In the derivation group, simultaneously recorded invasive high-fidelity brachial and central aortic pressure waveforms from 40 subjects were analyzed to identify the best common denominator. In the validation group, the NPMA method with the obtained common denominator was applied on noninvasive brachial pressure waveforms of 100 subjects. Validity was tested by comparing the noninvasive with the simultaneously recorded invasive central aortic systolic pressure. Noninvasive brachial pressure waveforms were calibrated to the cuff systolic and diastolic blood pressures. In the derivation study, an optimal denominator of N/6 was identified for NPMA to derive central aortic systolic pressure. The mean difference between the invasively/noninvasively estimated (N/6) and invasively measured central aortic systolic pressure was 0.1±3.5 and -0.6±7.6 mm Hg in the derivation and validation study, respectively. It satisfied the Association for the Advancement of Medical Instrumentation standard of 5±8 mm Hg. In conclusion, this method for estimating central aortic systolic pressure using either invasive or noninvasive brachial pressure waves requires a common denominator of N/6. By integrating the NPMA method into the ordinary oscillometric blood pressure determining process, convenient noninvasive central aortic systolic pressure values could be obtained with acceptable accuracy. PMID:24420554
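
    The filter itself is a one-liner. A minimal sketch follows; the toy waveform is invented, and a real input must already be calibrated to cuff systolic/diastolic pressures in mm Hg:

      import numpy as np

      def npma_central_sbp(brachial_wave_mmhg, fs):
          """Central aortic SBP via an N/6 moving average (N = sampling rate)."""
          n = max(1, int(round(fs / 6)))
          smooth = np.convolve(brachial_wave_mmhg, np.ones(n) / n, mode="same")
          return smooth.max()                  # peak of the filtered wave

      fs = 256.0
      t = np.arange(0.0, 1.0, 1.0 / fs)
      toy = 80 + 40 * np.clip(np.sin(2 * np.pi * 1.2 * t), 0, None) ** 2
      print(npma_central_sbp(toy, fs))         # slightly below the brachial peak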

  3. Estimated times to exhaustion and power outputs at the gas exchange threshold, physical working capacity at the rating of perceived exertion threshold, and respiratory compensation point.

    PubMed

    Bergstrom, Haley C; Housh, Terry J; Zuniga, Jorge M; Camic, Clayton L; Traylor, Daniel A; Schmidt, Richard J; Johnson, Glen O

    2012-10-01

The purposes of this study were to compare the power outputs and estimated times to exhaustion (T(lim)) at the gas exchange threshold (GET), physical working capacity at the rating of perceived exertion threshold (PWC(RPE)), and respiratory compensation point (RCP). Three male and 5 female subjects (mean ± SD: age, 22.4 ± 2.8 years) performed an incremental test to exhaustion on an electronically braked cycle ergometer to determine peak oxygen consumption rate, GET, and RCP. The PWC(RPE) was determined from ratings of perceived exertion data recorded during 3 continuous workbouts to exhaustion. The estimated T(lim) values for each subject at GET, PWC(RPE), and RCP were determined from power curve analyses (T(lim) = a·x^b). The results indicated that the PWC(RPE) (176 ± 55 W) was not significantly different from RCP (181 ± 54 W); however, GET (155 ± 42 W) was significantly less than PWC(RPE) and RCP. The estimated T(lim) for the GET (26.1 ± 9.8 min) was significantly greater than PWC(RPE) (14.6 ± 5.6 min) and RCP (11.2 ± 3.1 min). The PWC(RPE) occurred at a mean power output that was 13.5% greater than the GET and, therefore, it is likely that the perception of effort is not driven by the same mechanism that underlies the GET (i.e., lactate buffering). Furthermore, the PWC(RPE) and RCP were not significantly different and, therefore, these thresholds may be associated with the same mechanisms of fatigue, such as increased levels of interstitial and (or) arterial [K⁺]. PMID:22716291
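
    Since T(lim) = a·x^b is linear in log-log coordinates, the power curve analysis reduces to a straight-line fit. A sketch under invented workbout data (the numbers below are illustrative only, not the study's):

    ```python
    import numpy as np

    # Illustrative (power, time-to-exhaustion) pairs from three workbouts.
    P = np.array([200.0, 175.0, 150.0])   # W
    T = np.array([6.0, 12.0, 24.0])       # min
    b, log_a = np.polyfit(np.log(P), np.log(T), 1)
    a = np.exp(log_a)
    t_lim = lambda power: a * power**b    # estimated T(lim) at any power
    print(round(t_lim(155.0), 1))         # e.g. T(lim) at a GET-like power
    ```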

  4. Estimating extragalactic Faraday rotation

    NASA Astrophysics Data System (ADS)

    Oppermann, N.; Junklewitz, H.; Greiner, M.; Enßlin, T. A.; Akahori, T.; Carretti, E.; Gaensler, B. M.; Goobar, A.; Harvey-Smith, L.; Johnston-Hollitt, M.; Pratley, L.; Schnitzeler, D. H. F. M.; Stil, J. M.; Vacca, V.

    2015-03-01

Observations of Faraday rotation for extragalactic sources probe magnetic fields both inside and outside the Milky Way. Building on our earlier estimate of the Galactic contribution, we set out to estimate the extragalactic contributions. We discuss the problems involved; in particular, we point out that taking the difference between the observed values and the Galactic foreground reconstruction is not a good estimate for the extragalactic contributions. We point out a degeneracy between the contributions to the observed values due to extragalactic magnetic fields and observational noise and comment on the dangers of over-interpreting an estimate without taking into account its uncertainty information. To overcome these difficulties, we develop an extended reconstruction algorithm based on the assumption that the observational uncertainties are accurately described for a subset of the data, which can overcome the degeneracy with the extragalactic contributions. We present a probabilistic derivation of the algorithm and demonstrate its performance using a simulation, yielding a high quality reconstruction of the Galactic Faraday rotation foreground, a precise estimate of the typical extragalactic contribution, and a well-defined probabilistic description of the extragalactic contribution for each data point. We then apply this reconstruction technique to a catalog of Faraday rotation observations for extragalactic sources. The analysis is done for several different scenarios, for which we consider the error bars of different subsets of the data to accurately describe the observational uncertainties. By comparing the results, we argue that a split that singles out only data near the Galactic poles is the most robust approach. We find that the dispersion of extragalactic contributions to observed Faraday depths is most likely lower than 7 rad/m², in agreement with earlier results, and that the extragalactic contribution to an individual data point is poorly

  5. Optimal Cut-Off Points of Fasting Plasma Glucose for Two-Step Strategy in Estimating Prevalence and Screening Undiagnosed Diabetes and Pre-Diabetes in Harbin, China

    PubMed Central

    Sun, Bo; Lan, Li; Cui, Wenxiu; Xu, Guohua; Sui, Conglan; Wang, Yibaina; Zhao, Yashuang; Wang, Jian; Li, Hongyuan

    2015-01-01

    To identify optimal cut-off points of fasting plasma glucose (FPG) for two-step strategy in screening abnormal glucose metabolism and estimating prevalence in general Chinese population. A population-based cross-sectional study was conducted on 7913 people aged 20 to 74 years in Harbin. Diabetes and pre-diabetes were determined by fasting and 2 hour post-load glucose from the oral glucose tolerance test in all participants. Screening potential of FPG, cost per case identified by two-step strategy, and optimal FPG cut-off points were described. The prevalence of diabetes was 12.7%, of which 65.2% was undiagnosed. Twelve percent or 9.0% of participants were diagnosed with pre-diabetes using 2003 ADA criteria or 1999 WHO criteria, respectively. The optimal FPG cut-off points for two-step strategy were 5.6 mmol/l for previously undiagnosed diabetes (area under the receiver-operating characteristic curve of FPG 0.93; sensitivity 82.0%; cost per case identified by two-step strategy ¥261), 5.3 mmol/l for both diabetes and pre-diabetes or pre-diabetes alone using 2003 ADA criteria (0.89 or 0.85; 72.4% or 62.9%; ¥110 or ¥258), 5.0 mmol/l for pre-diabetes using 1999 WHO criteria (0.78; 66.8%; ¥399), and 4.9 mmol/l for IGT alone (0.74; 62.2%; ¥502). Using the two-step strategy, the underestimates of prevalence reduced to nearly 38% for pre-diabetes or 18.7% for undiagnosed diabetes, respectively. Approximately a quarter of the general population in Harbin was in hyperglycemic condition. Using optimal FPG cut-off points for two-step strategy in Chinese population may be more effective and less costly for reducing the missed diagnosis of hyperglycemic condition. PMID:25785585

  6. Optimal cut-off points of fasting plasma glucose for two-step strategy in estimating prevalence and screening undiagnosed diabetes and pre-diabetes in Harbin, China.

    PubMed

    Bao, Chundan; Zhang, Dianfeng; Sun, Bo; Lan, Li; Cui, Wenxiu; Xu, Guohua; Sui, Conglan; Wang, Yibaina; Zhao, Yashuang; Wang, Jian; Li, Hongyuan

    2015-01-01

    To identify optimal cut-off points of fasting plasma glucose (FPG) for two-step strategy in screening abnormal glucose metabolism and estimating prevalence in general Chinese population. A population-based cross-sectional study was conducted on 7913 people aged 20 to 74 years in Harbin. Diabetes and pre-diabetes were determined by fasting and 2 hour post-load glucose from the oral glucose tolerance test in all participants. Screening potential of FPG, cost per case identified by two-step strategy, and optimal FPG cut-off points were described. The prevalence of diabetes was 12.7%, of which 65.2% was undiagnosed. Twelve percent or 9.0% of participants were diagnosed with pre-diabetes using 2003 ADA criteria or 1999 WHO criteria, respectively. The optimal FPG cut-off points for two-step strategy were 5.6 mmol/l for previously undiagnosed diabetes (area under the receiver-operating characteristic curve of FPG 0.93; sensitivity 82.0%; cost per case identified by two-step strategy ¥261), 5.3 mmol/l for both diabetes and pre-diabetes or pre-diabetes alone using 2003 ADA criteria (0.89 or 0.85; 72.4% or 62.9%; ¥110 or ¥258), 5.0 mmol/l for pre-diabetes using 1999 WHO criteria (0.78; 66.8%; ¥399), and 4.9 mmol/l for IGT alone (0.74; 62.2%; ¥502). Using the two-step strategy, the underestimates of prevalence reduced to nearly 38% for pre-diabetes or 18.7% for undiagnosed diabetes, respectively. Approximately a quarter of the general population in Harbin was in hyperglycemic condition. Using optimal FPG cut-off points for two-step strategy in Chinese population may be more effective and less costly for reducing the missed diagnosis of hyperglycemic condition. PMID:25785585

  7. Accurate measurement of time

    NASA Astrophysics Data System (ADS)

    Itano, Wayne M.; Ramsey, Norman F.

    1993-07-01

    The paper discusses current methods for accurate measurements of time by conventional atomic clocks, with particular attention given to the principles of operation of atomic-beam frequency standards, atomic hydrogen masers, and atomic fountain and to the potential use of strings of trapped mercury ions as a time device more stable than conventional atomic clocks. The areas of application of the ultraprecise and ultrastable time-measuring devices that tax the capacity of modern atomic clocks include radio astronomy and tests of relativity. The paper also discusses practical applications of ultraprecise clocks, such as navigation of space vehicles and pinpointing the exact position of ships and other objects on earth using the GPS.

  8. Accurate quantum chemical calculations

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics are discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  9. Accurate determination of the superfluid-insulator transition in the one-dimensional Bose-Hubbard model

    NASA Astrophysics Data System (ADS)

    Zakrzewski, Jakub; Delande, Dominique

    2008-11-01

    The quantum phase transition point between the insulator and the superfluid phase at unit filling factor of the infinite one-dimensional Bose-Hubbard model is numerically computed with a high accuracy. The method uses the infinite system version of the time evolving block decimation algorithm, here tested in a challenging case. We provide also the accurate estimate of the phase transition point at double occupancy.

  10. A Method to Estimate the Probability that Any Individual Cloud-to-Ground Lightning Stroke was Within Any Radius of Any Point

    NASA Technical Reports Server (NTRS)

    Huddleston, Lisa; Roeder, WIlliam P.; Merceret, Francis J.

    2011-01-01

    A new technique has been developed to estimate the probability that a nearby cloud-to-ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force station. Future applications could include forensic meteorology.
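
    The described integral, a bivariate Gaussian density over a disk that need not be centered on the error ellipse, has no closed form, but it is straightforward to approximate. A Monte Carlo sketch (all names ours; the operational tool presumably uses a deterministic quadrature):

    ```python
    import numpy as np

    def prob_within_radius(mu, cov, center, radius, n=200_000, seed=0):
        """Monte Carlo estimate that a stroke with bivariate Gaussian
        location error (mean mu, covariance cov) fell within `radius`
        of `center`; `center` need not lie inside the error ellipse."""
        rng = np.random.default_rng(seed)
        pts = rng.multivariate_normal(mu, cov, size=n)
        return float(np.mean(np.hypot(*(pts - center).T) <= radius))
    ```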

  11. Myofascial trigger point pain.

    PubMed

    Jaeger, Bernadette

    2013-01-01

    Myofascial trigger point pain is an extremely prevalent cause of persistent pain disorders in all parts of the body, not just the head, neck, and face. Features include deep aching pain in any structure, referred from focally tender points in taut bands of skeletal muscle (the trigger points). Diagnosis depends on accurate palpation with 2-4 kg/cm2 of pressure for 10 to 20 seconds over the suspected trigger point to allow the referred pain pattern to develop. In the head and neck region, cervical muscle trigger points (key trigger points) often incite and perpetuate trigger points (satellite trigger points) and referred pain from masticatory muscles. Management requires identification and control of as many perpetuating factors as possible (posture, body mechanics, psychological stress or depression, poor sleep or nutrition). Trigger point therapies such as spray and stretch or trigger point injections are best used as adjunctive therapy. PMID:24864393

  12. Real-time estimation of prostate tumor rotation and translation with a kV imaging system based on an iterative closest point algorithm

    NASA Astrophysics Data System (ADS)

    Nasehi Tehrani, Joubin; O'Brien, Ricky T.; Rugaard Poulsen, Per; Keall, Paul

    2013-12-01

Previous studies have shown that during cancer radiotherapy a small translation or rotation of the tumor can lead to errors in dose delivery. Current best practice in radiotherapy accounts for tumor translations, but is unable to address rotation due to a lack of a reliable real-time estimate. We have developed a method based on the iterative closest point (ICP) algorithm that can compute rotation from kilovoltage x-ray images acquired during radiation treatment delivery. A total of 11 748 kilovoltage (kV) images acquired from ten patients (one fraction for each patient) were used to evaluate our tumor rotation algorithm. For each kV image, the three-dimensional coordinates of three fiducial markers inside the prostate were calculated. The three-dimensional coordinates were used as input to the ICP algorithm to calculate the real-time tumor rotation and translation around three axes. The results show that the root mean square error for real-time calculation of tumor displacement improved from a mean of 0.97 mm with stand-alone translation to a mean of 0.16 mm when real-time rotation and translation displacement were included with the ICP algorithm. The standard deviation (SD) of rotation for the ten patients was 2.3°, 0.89° and 0.72° for rotation around the right-left (RL), anterior-posterior (AP) and superior-inferior (SI) directions, respectively. The correlation between all six degrees of freedom showed that the highest correlation belonged to the AP and SI translation with a correlation of 0.67. The second highest correlation in our study was between the rotation around RL and rotation around AP, with a correlation of -0.33. Our real-time algorithm for calculation of rotation also confirms previous studies that have shown the maximum SD belongs to AP translation and rotation around RL. ICP is a reliable and fast algorithm for estimating real-time tumor rotation which could create a pathway to investigational clinical treatment studies requiring real

  13. Real-time estimation of prostate tumor rotation and translation with a kV imaging system based on an iterative closest point algorithm.

    PubMed

    Tehrani, Joubin Nasehi; O'Brien, Ricky T; Poulsen, Per Rugaard; Keall, Paul

    2013-12-01

Previous studies have shown that during cancer radiotherapy a small translation or rotation of the tumor can lead to errors in dose delivery. Current best practice in radiotherapy accounts for tumor translations, but is unable to address rotation due to a lack of a reliable real-time estimate. We have developed a method based on the iterative closest point (ICP) algorithm that can compute rotation from kilovoltage x-ray images acquired during radiation treatment delivery. A total of 11 748 kilovoltage (kV) images acquired from ten patients (one fraction for each patient) were used to evaluate our tumor rotation algorithm. For each kV image, the three-dimensional coordinates of three fiducial markers inside the prostate were calculated. The three-dimensional coordinates were used as input to the ICP algorithm to calculate the real-time tumor rotation and translation around three axes. The results show that the root mean square error for real-time calculation of tumor displacement improved from a mean of 0.97 mm with stand-alone translation to a mean of 0.16 mm when real-time rotation and translation displacement were included with the ICP algorithm. The standard deviation (SD) of rotation for the ten patients was 2.3°, 0.89° and 0.72° for rotation around the right-left (RL), anterior-posterior (AP) and superior-inferior (SI) directions, respectively. The correlation between all six degrees of freedom showed that the highest correlation belonged to the AP and SI translation with a correlation of 0.67. The second highest correlation in our study was between the rotation around RL and rotation around AP, with a correlation of -0.33. Our real-time algorithm for calculation of rotation also confirms previous studies that have shown the maximum SD belongs to AP translation and rotation around RL. ICP is a reliable and fast algorithm for estimating real-time tumor rotation which could create a pathway to investigational clinical treatment studies requiring real
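
    The rotation and translation recovered from three fiducial markers at each time step is the classic least-squares rigid alignment problem. A generic SVD (Kabsch) sketch of that inner step, not the authors' exact implementation:

    ```python
    import numpy as np

    def rigid_transform(src, dst):
        """Least-squares R, t with R @ src_i + t ~ dst_i (Kabsch/SVD);
        src, dst are (3, 3) arrays of fiducial marker coordinates."""
        c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
        H = (src - c_src).T @ (dst - c_dst)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflection
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        return R, c_dst - R @ c_src
    ```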

  14. Estimating contaminant mass discharge: A field comparison of the multilevel point measurement and the integral pumping investigation approaches and their uncertainties

    NASA Astrophysics Data System (ADS)

    Béland-Pelletier, Caroline; Fraser, Michelle; Barker, Jim; Ptak, Thomas

    2011-03-01

In this field study, two approaches to assess contaminant mass discharge were compared: the sampling of multilevel wells (MLS) and the integral groundwater investigation (or integral pumping test, IPT) that makes use of the concentration-time series obtained from pumping wells. The MLS approach used concentrations, hydraulic conductivity and gradient rather than direct chemical flux measurements, while the IPT made use of a simplified analytical inversion. The two approaches were applied at a control plane located approximately 40 m downgradient of a gasoline source at Canadian Forces Base Borden, Ontario, Canada. The methods yielded similar estimates of the mass discharging across the control plane. The sources of uncertainties in the mass discharge in each approach were evaluated, including the uncertainties inherent in the underlying assumptions and procedures. The maximum uncertainty of the MLS method was about 67%, and about 28% for the IPT method in this specific field situation. For the MLS method, the largest relative uncertainty (62%) was attributed to the limited sampling density (0.63 points/m²), through a novel comparison with a denser sampling grid nearby. A five-fold increase of the sampling grid density would have been required to reduce the overall relative uncertainty for the MLS method to about the same level as that for the IPT method. Uncertainty in the complete coverage of the control plane provided the largest relative uncertainty (37%) in the IPT method. While MLS or IPT methods to assess contaminant mass discharge are attractive assessment tools, the large relative uncertainty in either method found for this reasonably well-monitored and simple aquifer suggests that results in more complex plumes in more heterogeneous aquifers should be viewed with caution.
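
    The MLS-side arithmetic is a discrete flux integral over the control plane: concentration times Darcy flux times cell area, summed over sampling cells. A sketch with invented numbers (since 1 mg/L equals 1 g/m³, the units come out directly in g/s):

    ```python
    import numpy as np

    # Invented data for a 3-cell control plane.
    C = np.array([1.2, 0.4, 2.5])        # concentration, mg/L (= g/m^3)
    K = np.array([1e-4, 8e-5, 1.2e-4])   # hydraulic conductivity, m/s
    grad = 0.003                         # hydraulic gradient (-)
    A = np.array([0.5, 0.5, 0.5])        # cell area on the plane, m^2
    Md = float(np.sum(C * K * grad * A)) # mass discharge, g/s
    print(Md * 86400.0)                  # g/day
    ```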

15. Curie-point depths estimated from fractal spectral analyses of magnetic anomalies in the western United States and northeast Pacific Ocean

    NASA Astrophysics Data System (ADS)

    Wang, J.; Li, C.

    2011-12-01

We estimate Curie-point depths (Zb) of the western United States and northeast Pacific Ocean by analyzing radially averaged amplitude spectra of magnetic anomalies based on a fractal magnetization model. The amplitude spectrum of source magnetization is proportional to the wavenumber (k) raised to a fractal exponent (-β). We first test whether long-wavelength components are captured appropriately by using variable overlapping windows ranging in sizes from 75 × 75 km² to 200 × 200 km². For each sliding window, the amplitude spectrum is pre-multiplied by the factor k^β prior to computation. We then use the centroid method (Tanaka et al., 1999) to calculate Zb. We find that when the window size approaches 200 × 200 km² the resolution of estimated Zb is too low to reveal important geological features. For our study, fractal exponents larger than 0.6 will result in overcorrection. Considering the difficulty of simultaneous inversion of the depths to the top and centroid of magnetic sources (Zt and Z0, respectively) and β, we fix β = 0.5 for the whole study area. Note that β here is defined for the amplitude spectrum, which is equivalent to 1 for the power spectrum of 2D magnetic sources. Our results show that the estimated Curie depths range from 4 km to 40 km. The average Zb in the northern part of the northeast Pacific Ocean is about 14 km below sea level, and almost the same depths are found in the junction of the active and ancient Cascade arcs and the remnant track of the Yellowstone hotspot. Subduction beneath the North American plate and consequent magmatism can account for small Zb in the above-mentioned volcanic arc regions. The Mendocino Triple Junction separates the northeast Pacific into northern (mainly consisting of the Explorer, Juan de Fuca and Gorda plates) and southern parts. Both the Zb and the thickness of magnetic layer in the southern part are larger than those in the northern part. This contrast is due to the fact that the Pacific plate to the south
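
    A compact sketch of the centroid calculation after the fractal correction, assuming wavenumbers in rad/km and user-chosen spectral bands; band selection, windowing and units follow the paper only loosely:

    ```python
    import numpy as np

    def curie_depth(k, amp, hi, lo, beta=0.5):
        """Centroid-method Curie depth from a radially averaged amplitude
        spectrum. k: wavenumber in rad/km; amp: spectrum; hi/lo: boolean
        masks for the high- and low-wavenumber fitting bands."""
        A = amp * k**beta                # undo the fractal decay k**-beta
        Zt = -np.polyfit(k[hi], np.log(A[hi]), 1)[0]          # top depth
        Z0 = -np.polyfit(k[lo], np.log(A[lo] / k[lo]), 1)[0]  # centroid
        return 2.0 * Z0 - Zt             # depth to bottom (Curie depth), km
    ```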

  16. A rapid and accurate method for the quantitative estimation of natural polysaccharides and their fractions using high performance size exclusion chromatography coupled with multi-angle laser light scattering and refractive index detector.

    PubMed

    Cheong, Kit-Leong; Wu, Ding-Tao; Zhao, Jing; Li, Shao-Ping

    2015-06-26

In this study, a rapid and accurate method for quantitative analysis of natural polysaccharides and their different fractions was developed. First, high performance size exclusion chromatography (HPSEC) was used to separate natural polysaccharides, and the molecular masses of their fractions were then determined by multi-angle laser light scattering (MALLS). Finally, quantification of polysaccharides or their fractions was performed based on their response on a refractive index detector (RID) and their universal refractive index increment (dn/dc). Accuracy of the developed method for the quantification of individual and mixed polysaccharide standards, including konjac glucomannan, CM-arabinan, xyloglucan, larch arabinogalactan, oat β-glucan, dextran (410, 270, and 25 kDa), mixed xyloglucan and CM-arabinan, and mixed dextran 270 K and CM-arabinan was determined, and their average recoveries were between 90.6% and 98.3%. The limits of detection (LOD) and quantification (LOQ) ranged from 10.68 to 20.25 μg/mL and from 42.70 to 68.85 μg/mL, respectively. Compared with the conventional phenol-sulfuric acid assay and HPSEC coupled with evaporative light scattering detection (HPSEC-ELSD) analysis, the developed HPSEC-MALLS-RID method based on universal dn/dc for the quantification of polysaccharides and their fractions is simpler, more rapid, and more accurate, requiring neither individual polysaccharide standards nor calibration curves. The developed method was also successfully utilized for quantitative analysis of polysaccharides and their different fractions from three medicinal plants of the Panax genus, Panax ginseng, Panax notoginseng and Panax quinquefolius. The results suggested that the HPSEC-MALLS-RID method based on universal dn/dc could be used as a routine technique for the quantification of polysaccharides and their fractions in natural resources. PMID:25990349

  17. Fatty acid ethyl esters in hair as alcohol markers: estimating a reliable cut-off point by evaluation of 1,057 autopsy cases.

    PubMed

    Hastedt, Martin; Bossers, Lydia; Krumbiegel, Franziska; Herre, Sieglinde; Hartwig, Sven

    2013-06-01

Alcohol abuse is a widespread problem, especially in Western countries. Therefore, it is important to have markers of alcohol consumption with validated cut-off points. For many years research has focused on analysis of hair for alcohol markers, but data on the performance and reliability of cut-off values are still lacking. The evaluation of 1,057 cases from 2005 to 2011 provided a larger sample group for estimating an applicable cut-off value than earlier studies on fatty acid ethyl esters (FAEEs) in hair. The FAEEs concentrations in hair, police investigation reports, medical history, and the macroscopic and microscopic alcohol-typical results from autopsy, such as liver, pancreas, and cardiac findings, were taken into account in this study. In 80.2% of all 1,057 cases, pathologic findings that may be related to alcohol abuse were reported. The cases were divided into social drinkers (n = 168), alcohol abusers (n = 502), and cases without information on alcohol use. The median FAEEs concentration in the group of social drinkers was 0.302 ng/mg (range 0.008-14.3 ng/mg). In the group of alcohol abusers a median of 1.346 ng/mg (range 0.010-83.7 ng/mg) was found. Before June 2009 the hair FAEEs test was routinely applied to a proximal hair segment of 0-6 cm, changing to a routinely investigated hair length of 3 cm after 2009, as proposed by the Society of Hair Testing (SoHT). The method showed significant differences between the groups of social drinkers and alcoholics, leading to an improvement in the postmortem detection of alcohol abuse. Nevertheless, the performance of the method was rather poor, with an area under the curve calculated from receiver operating characteristic (ROC curve AUC) of 0.745. The optimum cut-off value for differentiation between social and chronic excessive drinking calculated for hair FAEEs was 1.08 ng/mg, with a sensitivity of 56% and a specificity of 80%. In relation to the "Consensus on Alcohol Markers 2012
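
    Cut-offs like the 1.08 ng/mg value are commonly chosen by maximizing Youden's J over the ROC curve; whether this study used exactly that criterion is not stated, so the sketch below (expecting NumPy arrays) is illustrative only:

    ```python
    import numpy as np

    def youden_cutoff(values, is_abuser):
        """FAEE cut-off maximizing sensitivity + specificity - 1.
        values: hair FAEE (ng/mg); is_abuser: boolean labels."""
        best_j, best_cut = -1.0, None
        for cut in np.unique(values):
            sens = np.mean(values[is_abuser] >= cut)
            spec = np.mean(values[~is_abuser] < cut)
            if sens + spec - 1.0 > best_j:
                best_j, best_cut = sens + spec - 1.0, cut
        return best_cut, best_j
    ```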

  18. Estimating Implementation and Operational Costs of an Integrated Tiered CD4 Service including Laboratory and Point of Care Testing in a Remote Health District in South Africa

    PubMed Central

    Cassim, Naseem; Coetzee, Lindi M.; Schnippel, Kathryn; Glencross, Deborah K.

    2014-01-01

Background An integrated tiered service delivery model (ITSDM) has been proposed to provide ‘full-coverage’ of CD4 services throughout South Africa. Five tiers are described, defined by testing volumes and number of referring health-facilities. These include: (1) Tier-1/decentralized point-of-care service (POC) in a single site; (2) Tier-2/POC-hub processing 30–40 samples from 8–10 health-clinics; (3) Tier-3/Community laboratories servicing ∼50 health-clinics, processing <150 samples/day; and high-volume centralized laboratories (Tier-4 and Tier-5) processing <300 or >600 samples/day and serving >100 or >200 health-clinics, respectively. The objective of this study was to establish costs of existing and ITSDM-tiers 1, 2 and 3 in a remote, under-serviced district in South Africa. Methods Historical health-facility workload volumes from the Pixley-ka-Seme district, and the total volumes of CD4 tests performed by the adjacent district referral CD4 laboratories, linked to locations of all referring clinics and related laboratory-to-result turn-around time (LTR-TAT) data, were extracted from the NHLS Corporate-Data-Warehouse for the period April-2012 to March-2013. Tiers were costed separately (as a cost-per-result) including equipment, staffing, reagents and test consumable costs. A one-way sensitivity analysis provided for changes in reagent price, test volumes and personnel time. Results The lowest cost-per-result was noted for the existing laboratory-based Tiers 4 and 5 ($6.24 and $5.37, respectively), but with related increased LTR-TAT of >24–48 hours. Full service coverage with TAT <6 hours could be achieved with placement of twenty-seven Tier-1/POC or eight Tier-2/POC-hubs, at a cost-per-result of $32.32 and $15.88, respectively. A single district Tier-3 laboratory also ensured ‘full service coverage’ and <24-hour LTR-TAT for the district at $7.42 per test. Conclusion Implementing a single Tier-3/community laboratory to extend and improve delivery

  19. Does the Spectrum model accurately predict trends in adult mortality? Evaluation of model estimates using empirical data from a rural HIV community cohort study in north-western Tanzania

    PubMed Central

    Michael, Denna; Kanjala, Chifundo; Calvert, Clara; Pretorius, Carel; Wringe, Alison; Todd, Jim; Mtenga, Balthazar; Isingo, Raphael; Zaba, Basia; Urassa, Mark

    2014-01-01

Introduction Spectrum epidemiological models are used by UNAIDS to provide global, regional and national HIV estimates and projections, which are then used for evidence-based health planning for HIV services. However, there are no validations of the Spectrum model against empirical serological and mortality data from populations in sub-Saharan Africa. Methods Serologic, demographic and verbal autopsy data have been regularly collected among over 30,000 residents in north-western Tanzania since 1994. Five-year age-specific mortality rates (ASMRs) per 1,000 person years and the probability of dying between 15 and 60 years of age (45Q15) were calculated and compared with the Spectrum model outputs. Mortality trends by HIV status are shown for periods before the introduction of antiretroviral therapy (1994–1999, 2000–2005) and the first 5 years afterwards (2005–2009). Results Among 30–34 year olds of both sexes, observed ASMRs per 1,000 person years were 13.33 (95% CI: 10.75–16.52) in the period 1994–1999, 11.03 (95% CI: 8.84–13.77) in 2000–2004, and 6.22 (95% CI: 4.75–8.15) in 2005–2009. Among the same age group, the ASMRs estimated by the Spectrum model were 10.55, 11.13 and 8.15 for the periods 1994–1999, 2000–2004 and 2005–2009, respectively. The cohort data, for both sexes combined, showed that the 45Q15 declined from 39% (95% CI: 27–55%) in 1994 to 22% (95% CI: 17–29%) in 2009, whereas the Spectrum model predicted a decline from 43% in 1994 to 37% in 2009. Conclusion From 1994 to 2009, the observed decrease in ASMRs was steeper in younger age groups than that predicted by the Spectrum model, perhaps because the Spectrum model under-estimated the ASMRs in 30–34 year olds in 1994–99. However, the Spectrum model predicted a greater decrease in 45Q15 mortality than observed in the cohort, although the reasons for this over-estimate are unclear. PMID:24438873
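
    The 45Q15 summary used in the comparison can be built from the nine 5-year ASMRs between ages 15 and 59, assuming a constant hazard within each band; a sketch with our own naming, not the cohort's exact life-table procedure:

    ```python
    import numpy as np

    def q45_15(asmr_per_1000):
        """45Q15 from nine 5-year ASMRs (ages 15-19 ... 55-59), assuming
        a constant hazard within each band: 5qx = 1 - exp(-5 * mx)."""
        m = np.asarray(asmr_per_1000, dtype=float) / 1000.0
        return 1.0 - float(np.prod(np.exp(-5.0 * m)))
    ```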

20. Thunderstorm activity on the early Earth: some estimates from the point of view of the role of electric discharges in the formation of prebiotic conditions

    NASA Astrophysics Data System (ADS)

    Serozhkin, Yu.

    2008-09-01

increase the quantity of lightning by 50% [7]. Examination of the charge-separation processes in clouds yields a very narrow range of atmospheric temperature and pressure within which charge separation is possible. It must be said that the electrostatic charging of thunderstorm clouds has not received a satisfactory explanation. One unexplained property is the formation, at an altitude of 6–8 km and a temperature of about −15°, of a negatively charged layer several hundred meters thick. At this altitude and pressure, water can exist in three phases, and in this layer charge is separated through the interaction of ice crystals with snow pellets. Above this layer lies the so-called charge reversal, an unexplained phenomenon whereby ice crystals below the layer are charged positively and above it negatively, while snow pellets are charged positively above the layer and negatively below it. The negatively charged layer thus consists of negatively charged ice crystals and snow pellets; positively charged snow pellets form the charge at the top of a cloud, and positively charged ice crystals form the positive charge at its bottom. It follows that the dependence of thunderstorm-cloud electrification on atmospheric parameters is extremely difficult to estimate. About the influence of pressure only general statements can be made: at a pressure corresponding to the charge-reversal point (about 250 Torr, at an altitude of 8 km), ordinary thunderstorm activity should decrease. This means that if the atmospheric pressure during the formation of prebiotic conditions was below 100 Torr, one must instead consider the role of electrical discharges connected with charge accumulation on particles (sand storms, tornadoes) or on ash during volcanic eruptions. What traces of thunderstorm activity can be sought in the past? It is known that cloud-to-ground lightning

  1. Estimating potential evapotranspiration with improved radiation estimation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Potential evapotranspiration (PET) is of great importance to estimation of surface energy budget and water balance calculation. The accurate estimation of PET will facilitate efficient irrigation scheduling, drainage design, and other agricultural and meteorological applications. However, accuracy o...

  2. Accurate skin dose measurements using radiochromic film in clinical applications

    SciTech Connect

    Devic, S.; Seuntjens, J.; Abdel-Rahman, W.; Evans, M.; Olivares, M.; Podgorsak, E.B.; Vuong, Te; Soares, Christopher G.

    2006-04-15

Megavoltage x-ray beams exhibit the well-known phenomenon of dose buildup within the first few millimeters of the incident phantom surface, or the skin. Results of the surface dose measurements, however, depend vastly on the measurement technique employed. Our goal in this study was to determine a correction procedure in order to obtain an accurate skin dose estimate at the clinically relevant depth based on radiochromic film measurements. To illustrate this correction, we have used as a reference point a depth of 70 µm. We used the new GAFCHROMIC® dosimetry films (HS, XR-T, and EBT) that have effective points of measurement at depths slightly larger than 70 µm. In addition to films, we also used an Attix parallel-plate chamber and a home-built extrapolation chamber to cover tissue-equivalent depths in the range from 4 µm to 1 mm of water-equivalent depth. Our measurements suggest that within the first millimeter of the skin region, the PDD for a 6 MV photon beam and field size of 10×10 cm² increases from 14% to 43%. For the three GAFCHROMIC® dosimetry film models, the 6 MV beam entrance skin dose measurement corrections due to their effective point of measurement are as follows: 15% for the EBT, 15% for the HS, and 16% for the XR-T model GAFCHROMIC® films. The correction factors for the exit skin dose due to the build-down region are negligible. There is a small field size dependence for the entrance skin dose correction factor when using the EBT GAFCHROMIC® film model. Finally, a procedure that uses EBT model GAFCHROMIC® film for an accurate measurement of the skin dose in a parallel-opposed pair 6 MV photon beam arrangement is described.

  3. How to Estimate the Cost of Point-of-Care CD4 Testing in Program Settings: An Example Using the Alere Pima™ Analyzer in South Africa

    PubMed Central

    Larson, Bruce; Schnippel, Kathryn; Ndibongo, Buyiswa; Long, Lawrence; Fox, Matthew P.; Rosen, Sydney

    2012-01-01

    Integrating POC CD4 testing technologies into HIV counseling and testing (HCT) programs may improve post-HIV testing linkage to care and treatment. As evaluations of these technologies in program settings continue, estimates of the costs of POC CD4 tests to the service provider will be needed and estimates have begun to be reported. Without a consistent and transparent methodology, estimates of the cost per CD4 test using POC technologies are likely to be difficult to compare and may lead to erroneous conclusions about costs and cost-effectiveness. This paper provides a step-by-step approach for estimating the cost per CD4 test from a provider's perspective. As an example, the approach is applied to one specific POC technology, the Pima™ Analyzer. The costing approach is illustrated with data from a mobile HCT program in Gauteng Province of South Africa. For this program, the cost per test in 2010 was estimated at $23.76 (material costs = $8.70; labor cost per test = $7.33; and equipment, insurance, and daily quality control = $7.72). Labor and equipment costs can vary widely depending on how the program operates and the number of CD4 tests completed over time. Additional costs not included in the above analysis, for on-going training, supervision, and quality control, are likely to increase further the cost per test. The main contribution of this paper is to outline a methodology for estimating the costs of incorporating POC CD4 testing technologies into an HCT program. The details of the program setting matter significantly for the cost estimate, so that such details should be clearly documented to improve the consistency, transparency, and comparability of cost estimates. PMID:22532854
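
    The headline figure is a simple sum of the three per-test components; a one-liner reproducing it (the 2010 figures are the paper's, the function name is ours):

    ```python
    # Per-test cost as the sum of the three components reported above.
    def cost_per_test(materials, labor, equipment_insurance_qc):
        return materials + labor + equipment_insurance_qc

    print(cost_per_test(8.70, 7.33, 7.72))  # 23.75 (~$23.76 after rounding)
    ```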

  4. A point matching algorithm based on reference point pair

    NASA Astrophysics Data System (ADS)

    Zou, Huanxin; Zhu, Youqing; Zhou, Shilin; Lei, Lin

    2016-03-01

Outliers and occlusions are important sources of degradation in real applications of point matching. In this paper, a novel point matching algorithm based on reference point pairs is proposed. In each iteration, it first eliminates dubious matches to obtain relatively accurate matching points (reference point pairs), and then calculates the shape contexts of the removed points with reference to them. After re-matching the removed points, the reference point pairs are combined to achieve better correspondences. Experiments on synthetic data validate the advantages of our method in comparison with some classical methods.

  5. A fast and accurate decoder for underwater acoustic telemetry

    NASA Astrophysics Data System (ADS)

    Ingraham, J. M.; Deng, Z. D.; Li, X.; Fu, T.; McMichael, G. A.; Trumbo, B. A.

    2014-07-01

    The Juvenile Salmon Acoustic Telemetry System, developed by the U.S. Army Corps of Engineers, Portland District, has been used to monitor the survival of juvenile salmonids passing through hydroelectric facilities in the Federal Columbia River Power System. Cabled hydrophone arrays deployed at dams receive coded transmissions sent from acoustic transmitters implanted in fish. The signals' time of arrival on different hydrophones is used to track fish in 3D. In this article, a new algorithm that decodes the received transmissions is described and the results are compared to results for the previous decoding algorithm. In a laboratory environment, the new decoder was able to decode signals with lower signal strength than the previous decoder, effectively increasing decoding efficiency and range. In field testing, the new algorithm decoded significantly more signals than the previous decoder and three-dimensional tracking experiments showed that the new decoder's time-of-arrival estimates were accurate. At multiple distances from hydrophones, the new algorithm tracked more points more accurately than the previous decoder. The new algorithm was also more than 10 times faster, which is critical for real-time applications on an embedded system.

  6. A fast and accurate decoder for underwater acoustic telemetry.

    PubMed

    Ingraham, J M; Deng, Z D; Li, X; Fu, T; McMichael, G A; Trumbo, B A

    2014-07-01

    The Juvenile Salmon Acoustic Telemetry System, developed by the U.S. Army Corps of Engineers, Portland District, has been used to monitor the survival of juvenile salmonids passing through hydroelectric facilities in the Federal Columbia River Power System. Cabled hydrophone arrays deployed at dams receive coded transmissions sent from acoustic transmitters implanted in fish. The signals' time of arrival on different hydrophones is used to track fish in 3D. In this article, a new algorithm that decodes the received transmissions is described and the results are compared to results for the previous decoding algorithm. In a laboratory environment, the new decoder was able to decode signals with lower signal strength than the previous decoder, effectively increasing decoding efficiency and range. In field testing, the new algorithm decoded significantly more signals than the previous decoder and three-dimensional tracking experiments showed that the new decoder's time-of-arrival estimates were accurate. At multiple distances from hydrophones, the new algorithm tracked more points more accurately than the previous decoder. The new algorithm was also more than 10 times faster, which is critical for real-time applications on an embedded system. PMID:25085162

  7. NNLOPS accurate associated HW production

    NASA Astrophysics Data System (ADS)

    Astill, William; Bizon, Wojciech; Re, Emanuele; Zanderighi, Giulia

    2016-06-01

We present a next-to-next-to-leading order accurate description of associated HW production consistently matched to a parton shower. The method is based on reweighting events obtained with the HW plus one jet NLO accurate calculation implemented in POWHEG, extended with the MiNLO procedure, to reproduce NNLO accurate Born distributions. Since the Born kinematics is more complex than the cases treated before, we use a parametrization of the Collins-Soper angles to reduce the number of variables required for the reweighting. We present phenomenological results at 13 TeV, with cuts suggested by the Higgs Cross Section Working Group.

  8. The Relationship of Actigraph Accelerometer Cut-Points for Estimating Physical Activity with Selected Health Outcomes: Results from NHANES 2003-06

    ERIC Educational Resources Information Center

    Loprinzi, Paul D.; Lee, Hyo; Cardinal, Bradley J.; Crespo, Carlos J.; Andersen, Ross E.; Smit, Ellen

    2012-01-01

    The purpose of this study was to examine the influence of child and adult cut-points on physical activity (PA) intensity, the prevalence of meeting PA guidelines, and association with selected health outcomes. Participants (6,578 adults greater than or equal to 18 years, and 3,174 children and adolescents less than or equal to 17 years) from the…

  9. An automatic registration algorithm for the scattered point clouds based on the curvature feature

    NASA Astrophysics Data System (ADS)

    He, Bingwei; Lin, Zeming; Li, Y. F.

    2013-03-01

Object modeling by the registration of multiple range images has important applications in reverse engineering and computer vision. In order to register multi-view scattered point clouds, a novel curvature-based automatic registration algorithm is proposed in this paper, which can solve the registration problem with partially overlapping point clouds. For two sets of scattered point clouds, the curvature of each point is estimated by using the quadratic surface fitting method. The feature points that have the maximum local curvature variations are then extracted. The initial matching points are acquired by computing the Hausdorff distance of curvature, and then the circumference shape feature of the local surface is used to obtain the accurate matching points from the initial matching points. Finally, the rotation and translation matrices are estimated using the quaternion method, and an iterative algorithm is used to improve the registration accuracy. Experimental results show that the algorithm is effective.

  10. How to accurately bypass damage

    PubMed Central

    Broyde, Suse; Patel, Dinshaw J.

    2016-01-01

    Ultraviolet radiation can cause cancer through DNA damage — specifically, by linking adjacent thymine bases. Crystal structures show how the enzyme DNA polymerase η accurately bypasses such lesions, offering protection. PMID:20577203

  11. Change point detection in risk adjusted control charts.

    PubMed

    Assareh, Hassan; Smith, Ian; Mengersen, Kerrie

    2015-12-01

Precise identification of the time when a change in a clinical process has occurred enables experts to identify a potential special cause more effectively. In this article, we develop change point estimation methods for a clinical dichotomous process in the presence of case mix. We apply Bayesian hierarchical models to formulate the change point where there exists a step change in the odds ratio and logit of risk of a Bernoulli process. Markov Chain Monte Carlo is used to obtain posterior distributions of the change point parameters including location and magnitude of changes and also corresponding probabilistic intervals and inferences. The performance of the Bayesian estimator is investigated through simulations and the result shows that precise estimates can be obtained when they are used in conjunction with the risk-adjusted CUSUM and EWMA control charts. In comparison with alternative EWMA and CUSUM estimators, more accurate and precise estimates are obtained by the Bayesian estimator. These advantages are enhanced when the probability quantification, flexibility and generalizability of the Bayesian change point detection model are also considered. The Deviance Information Criterion, as a model selection criterion in the Bayesian context, is applied to find the best change point model for a given dataset where there is no prior knowledge about the change type in the process. PMID:22025415
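
    The simplest Bayesian change point model for a Bernoulli sequence can be written in closed grid form when the before/after probabilities are treated as known; the paper instead samples them (and the risk-adjustment terms) with MCMC, so the sketch below is a deliberately reduced illustration:

    ```python
    import numpy as np

    def changepoint_posterior(x, p0, p1):
        """Posterior over the step-change location tau of a Bernoulli
        process whose success probability jumps from p0 to p1 (both
        treated as known here; flat prior over tau). x: 0/1 outcomes."""
        x = np.asarray(x, dtype=float)
        n = len(x)
        logpost = np.empty(n - 1)
        for tau in range(1, n):
            ll0 = np.sum(x[:tau] * np.log(p0) + (1 - x[:tau]) * np.log(1 - p0))
            ll1 = np.sum(x[tau:] * np.log(p1) + (1 - x[tau:]) * np.log(1 - p1))
            logpost[tau - 1] = ll0 + ll1
        w = np.exp(logpost - logpost.max())
        return w / w.sum()               # P(tau = 1 ... n-1 | data)
    ```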

  12. Active point out-of-plane ultrasound calibration

    NASA Astrophysics Data System (ADS)

    Cheng, Alexis; Guo, Xiaoyu; Zhang, Haichong K.; Kang, Hyunjae; Etienne-Cummings, Ralph; Boctor, Emad M.

    2015-03-01

Image-guided surgery systems are often used to provide surgeons with informational support. Due to several unique advantages such as ease of use, real-time image acquisition, and no ionizing radiation, ultrasound is a common intraoperative medical imaging modality used in image-guided surgery systems. To perform advanced forms of guidance with ultrasound, such as virtual image overlays or automated robotic actuation, an ultrasound calibration process must be performed. This process recovers the rigid body transformation between a tracked marker attached to the transducer and the ultrasound image. Point-based phantoms are considered to be accurate, but their calibration framework assumes that the point is in the image plane. In this work, we present the use of an active point phantom and a calibration framework that accounts for the elevational uncertainty of the point. Given the lateral and axial position of the point in the ultrasound image, we approximate a circle in the axial-elevational plane with a radius equal to the axial position. The standard approach transforms all of the imaged points to be a single physical point. In our approach, we minimize the distances between the circular subsets of each image, with them ideally intersecting at a single point. We ran simulations in noiseless and noisy cases, presenting results on out-of-plane estimation errors, calibration estimation errors, and point reconstruction precision. We also performed an experiment using a robot arm as the tracker, resulting in a point reconstruction precision of 0.64 mm.

  13. Using the Guttman Scale to Define and Estimate Measurement Error in Items over Time: The Case of Cognitive Decline and the Meaning of “Points Lost”

    PubMed Central

    Tractenberg, Rochelle E.; Yumoto, Futoshi; Aisen, Paul S.; Kaye, Jeffrey A.; Mislevy, Robert J.

    2012-01-01

We used a Guttman model to represent responses to test items over time as an approximation of what is often referred to as “points lost” in studies of cognitive decline or interventions. To capture this meaning of “point loss”, over four successive assessments, we assumed that once an item is incorrect, it cannot be correct at a later visit. If the loss of a point represents actual decline, then failure of an item to fit the Guttman model over time can be considered measurement error. This representation and definition of measurement error also permits testing the hypotheses that measurement error is constant for items in a test, and that error is independent of “true score”, which are two key consequences of the definition of “measurement error” (and thereby reliability) under Classical Test Theory. We tested the hypotheses by fitting our model to, and comparing our results from, four consecutive annual evaluations in three groups of elderly persons: a) cognitively normal (NC, N = 149); b) diagnosed with possible or probable AD (N = 78); and c) cognitively normal initially and a later diagnosis of AD (converters, N = 133). Of 16 items that converged, error-free measurement of “cognitive loss” was observed for 10 items in NC, eight in converters, and two in AD. We found that measurement error, as we defined it, was inconsistent over time and across cognitive functioning levels, violating the theory underlying reliability and other psychometric characteristics, and key regression assumptions. PMID:22363411
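
    Under this reading, measurement error is simply a "recovery" that the longitudinal Guttman pattern forbids: a 0 (incorrect) followed at any later visit by a 1. A sketch of the count, with a hypothetical array layout of our own choosing:

    ```python
    import numpy as np

    def guttman_violations(responses):
        """Count recoveries forbidden by the longitudinal Guttman
        pattern: a 0 (incorrect) followed at any later visit by a 1.
        responses: (n_persons, n_visits) 0/1 array for a single item."""
        r = np.asarray(responses)
        seen_fail = np.cumsum(r == 0, axis=1) > 0   # a 0 at or before t
        prior_fail = np.zeros_like(seen_fail)
        prior_fail[:, 1:] = seen_fail[:, :-1]       # a 0 strictly before t
        return int(np.sum(prior_fail & (r == 1)))
    ```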

  14. Spatio-temporal statistical model for the optimal combination of precipitation measured at different time scales for estimating unobserved point values and disaggregating to finer timescales

    NASA Astrophysics Data System (ADS)

    Bàrdossy, Andràs; Pegram, Geoffrey

    2015-04-01

    Precipitation observations are unique in space and time, so if not observed, the values can only be estimated. Many applications, such as the calculation of water balances, calibration of hydrological models or the provision of unbiased ground truth for remote sensing require full datasets. Thus a reliable estimation of the missing observations is of great importance. The problem is exacerbated by the ubiquitous decimation of gauge networks. We consider 2 problems as examples of the methodology: (i) infilling monthly data where some days are missing in the monthly records and (ii) infilling missing hourly values in daily records with the assistance of some nearby pluviometers. The key is that we need estimates of the distributions of the infilled values, not just their expectations, as we have found that the traditional 'best' values bias the spatial estimates. We first performed monthly precipitation interpolation using 311 full records, 31 stations of which were randomly decimated to artificially create incomplete records as inequality constraints. Interpolation was carried out (i) without using these 31 in any way and (ii) using them as inequality constraints, in the sense that we determine a lower limit by aggregating the surviving data in a decimated record. We compare the errors if (i) the 31 stations with incomplete records are not considered against (ii) the errors if the incomplete records are considered as inequalities, and found that the partially decimated data add considerable value, as compared to neglecting them. In a second application we performed a disaggregation in time. We take a set of complete hourly pluviometer data, then aggregate some stations to days. These then have their hourly missing data reconstructed and we evaluate the success of the procedure by cross-validation. In this application the daily sums for a location are considered as a constraint and the disaggregated daily data are compared to their observed hourly precipitation. The

  15. An Estimation of the Likelihood of Significant Eruptions During 2000-2009 Using Poisson Statistics on Two-Point Moving Averages of the Volcanic Time Series

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.

    2001-01-01

Since 1750, the number of cataclysmic volcanic eruptions (volcanic explosivity index (VEI)>=4) per decade spans 2-11, with 96 percent located in the tropics and extra-tropical Northern Hemisphere. A two-point moving average of the volcanic time series has higher values since the 1860's than before, being 8.00 in the 1910's (the highest value) and 6.50 in the 1980's, the highest since the 1910's peak. Because of the usual behavior of the first difference of the two-point moving averages, one infers that its value for the 1990's will measure approximately 6.50 ± 1, implying that approximately 7 ± 4 cataclysmic volcanic eruptions should be expected during the present decade (2000-2009). Because cataclysmic volcanic eruptions (especially those having VEI>=5) nearly always have been associated with short-term episodes of global cooling, the occurrence of even one might confuse our ability to assess the effects of global warming. Poisson probability distributions reveal that the probability of one or more events with a VEI>=4 within the next ten years is >99 percent. It is approximately 49 percent for an event with a VEI>=5, and 18 percent for an event with a VEI>=6. Hence, the likelihood that a climatically significant volcanic eruption will occur within the next ten years appears reasonably high.
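
    The quoted probabilities follow from P(at least one event) = 1 − e^(−λ); the per-decade rates below are back-solved from those probabilities and are therefore illustrative rather than the paper's tabulated values:

    ```python
    from math import exp

    # P(>=1 eruption per decade) = 1 - exp(-lam); rates lam are
    # back-solved from the quoted probabilities, hence illustrative.
    for vei, lam in [(">=4", 7.0), (">=5", 0.67), (">=6", 0.20)]:
        print(vei, round(1 - exp(-lam), 3))   # ~0.999, ~0.488, ~0.181
    ```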

  16. Revised Filter Profiles and Zero Points for Broadband Photometry

    NASA Astrophysics Data System (ADS)

    Mann, Andrew W.; von Braun, Kaspar

    2015-02-01

Estimating accurate bolometric fluxes for stars requires reliable photometry to absolutely flux calibrate the spectra. This is a significant problem for studies of very bright stars, which are generally saturated in modern photometric surveys. Instead we must rely on photometry with less precise calibration. We utilize precisely flux-calibrated spectra to derive improved filter bandpasses and zero points for the most common sources of photometry for bright stars. In total, we test 39 different filters in the General Catalog of Photometric Data as well as those from Tycho-2 and Hipparcos. We show that utilizing inaccurate filter profiles from the literature can create significant color terms resulting in fluxes that deviate by ≳10% from actual values. To remedy this we employ an empirical approach; we iteratively adjust the literature filter profile and zero point, convolve it with catalog spectra, and compare to the corresponding flux from the photometry. We adopt the passband values that produce the best agreement between photometry and spectroscopy and are independent of stellar color. We find that while most zero points change by < 5%, a few systems change by 10-15%. Our final profiles and zero points are similar to recent estimates from the literature. Based on determinations of systematic errors in our selected spectroscopic libraries, we estimate that most of our improved zero points are accurate to 0.5-1%.
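
    Each iteration of the empirical approach hinges on synthetic photometry: convolving a flux-calibrated spectrum with the candidate profile and comparing to the catalog magnitude. A sketch of that convolution in the photon-counting convention (names and zero-point handling are ours, not the paper's):

    ```python
    import numpy as np

    def synthetic_mag(wl, flux, filt_wl, filt_T, zp_flux):
        """Synthetic magnitude of a flux-calibrated spectrum through a
        candidate filter profile (photon-counting convention); comparing
        this to the catalog magnitude drives the iterative update."""
        T = np.interp(wl, filt_wl, filt_T, left=0.0, right=0.0)
        f_eff = np.trapz(flux * T * wl, wl) / np.trapz(T * wl, wl)
        return -2.5 * np.log10(f_eff / zp_flux)
    ```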

  17. Fully Automated Generation of Accurate Digital Surface Models with Sub-Meter Resolution from Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Wohlfeil, J.; Hirschmüller, H.; Piltz, B.; Börner, A.; Suppa, M.

    2012-07-01

    Modern pixel-wise image matching algorithms like Semi-Global Matching (SGM) are able to compute high resolution digital surface models from airborne and spaceborne stereo imagery. Although image matching itself can be performed automatically, there are prerequisites, like high geometric accuracy, which are essential for ensuring the high quality of resulting surface models. Especially for line cameras, these prerequisites currently require laborious manual interaction using standard tools, which is a growing problem due to continually increasing demand for such surface models. The tedious work includes partly or fully manual selection of tie- and/or ground control points for ensuring the required accuracy of the relative orientation of images for stereo matching. It also includes masking of large water areas that seriously reduce the quality of the results. Furthermore, a good estimate of the depth range is required, since accurate estimates can seriously reduce the processing time for stereo matching. In this paper an approach is presented that allows performing all these steps fully automated. It includes very robust and precise tie point selection, enabling the accurate calculation of the images' relative orientation via bundle adjustment. It is also shown how water masking and elevation range estimation can be performed automatically on the base of freely available SRTM data. Extensive tests with a large number of different satellite images from QuickBird and WorldView are presented as proof of the robustness and reliability of the proposed method.

  18. Estimating the Speed of Light with a TV Set.

    ERIC Educational Resources Information Center

    Schroeder, Michael C.; Smith, Charles W.

    1985-01-01

    A television set, piece of aluminum foil, and meter stick can be used to estimate the speed of light within a few percentage points. The activity provides students with success and generates interest in physical optics. Steps in the experiment are outlined along with suggestions for obtaining accurate results. (DH)

  19. High Frequency QRS ECG Accurately Detects Cardiomyopathy

    NASA Technical Reports Server (NTRS)

    Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds

    2005-01-01

    High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 ± 6.1%, mean ± SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present plus the clustering of RAZs together in contiguous leads. This RAZ score had an area under the receiver operator curve (ROC) of 0.91, and was 88% sensitive, 82% specific and 85% accurate for identifying cardiomyopathy at an optimum score cut-off of 140 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P < 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC = 0.77 and 0.77, respectively) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 ± 11.5 vs. 41.5 ± 13.6 mV, respectively, P < 0.003), but this parameter was even less accurate in distinguishing the two groups (area under ROC = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously with cut-offs of ≥ 40 points and ≥ 445 ms, respectively. In conclusion 12-lead HF QRS ECG employing

  20. Pointing to others: How the target gender influences pointing performance.

    PubMed

    Cleret de Langavant, Laurent; Jacquemot, Charlotte; Cruveiller, Virginie; Dupoux, Emmanuel; Bachoud-Lévi, Anne-Catherine

    2016-01-01

    Pointing is a communicative gesture that allows individuals to share information about surrounding objects with other humans. Patients with heterotopagnosia are specifically impaired in pointing to other humans' body parts but not in pointing to themselves or to objects. Here, we describe a female patient with heterotopagnosia who was more accurate in pointing to men's body parts than to women's body parts. We replicated this gender effect in healthy participants with faster reaction times for pointing to men's body parts than to women's body parts. We discuss the role of gender stereotypes in explaining why it is more difficult to point to women than to men. PMID:27593456

  1. A novel modelling framework to prioritize estimation of non-point source pollution parameters for quantifying pollutant origin and discharge in urban catchments.

    PubMed

    Fraga, I; Charters, F J; O'Sullivan, A D; Cochrane, T A

    2016-02-01

    Stormwater runoff in urban catchments contains heavy metals (zinc, copper, lead) and suspended solids (TSS) which can substantially degrade urban waterways. To identify these pollutant sources and quantify their loads the MEDUSA (Modelled Estimates of Discharges for Urban Stormwater Assessments) modelling framework was developed. The model quantifies pollutant build-up and wash-off from individual impervious roof, road and car park surfaces for individual rain events, incorporating differences in pollutant dynamics between surface types and rainfall characteristics. This requires delineating all impervious surfaces and their material types, the drainage network, rainfall characteristics and coefficients for the pollutant dynamics equations. An example application of the model to a small urban catchment demonstrates how the model can be used to identify the magnitude of pollutant loads, their spatial origin and the response of the catchment to changes in specific rainfall characteristics. A sensitivity analysis then identifies the key parameters influencing each pollutant load within the stormwater given the catchment characteristics, which allows development of a targeted calibration process that will enhance the certainty of the model outputs, while minimizing the data collection required for effective calibration. A detailed explanation of the modelling framework and pre-calibration sensitivity analysis is presented. PMID:26613353

  2. Airborne Light Detection and Ranging (lidar) Derived Deformation from the MW 6.0 24 August, 2014 South Napa Earthquake Estimated by Two and Three Dimensional Point Cloud Change Detection Techniques

    NASA Astrophysics Data System (ADS)

    Lyda, A. W.; Zhang, X.; Glennie, C. L.; Hudnut, K.; Brooks, B. A.

    2016-06-01

    Remote sensing via LiDAR (Light Detection And Ranging) has proven extremely useful in both Earth science and hazard related studies. Surveys taken before and after an earthquake, for example, can provide decimeter-level, 3D near-field estimates of land deformation that offer better spatial coverage of the near-field rupture zone than other geodetic methods (e.g., InSAR, GNSS, or alignment array). In this study, we compare and contrast estimates of deformation obtained from different pre- and post-event airborne laser scanning (ALS) data sets of the 2014 South Napa Earthquake using two change detection algorithms, Iterative Closest Point (ICP) and Particle Image Velocimetry (PIV). The ICP algorithm is a closest-point-based registration algorithm that can iteratively acquire three dimensional deformations from airborne LiDAR data sets. By employing a newly proposed partition scheme, the "moving window," to handle the large spatial scale point cloud over the earthquake rupture area, the ICP process applies a rigid registration to data sets within each overlapped window to enhance the change detection of the local, spatially varying surface deformation near the fault. The other algorithm, PIV, is a well-established, two dimensional image co-registration and correlation technique developed in fluid mechanics research and later applied to geotechnical studies. Adapted here for an earthquake with little vertical movement, the 3D point cloud is interpolated into a 2D DTM image and horizontal deformation is determined by assessing the cross-correlation of interrogation areas within the images to find the most likely deformation between two areas. Both the PIV process and the ICP algorithm further benefit from a novel use of urban geodetic markers presented here. Analogous to the persistent scatterer technique employed with differential radar observations, this new LiDAR application exploits a classified point cloud dataset to assist the change detection algorithms. Ground
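
    The PIV step reduces to finding, for each interrogation window, the shift that maximizes the cross-correlation between the pre- and post-event DTM images. A minimal FFT-based sketch (window extraction, sub-pixel refinement, and the vehicle-trajectory heuristics are omitted):

```python
import numpy as np

def piv_shift(win_a, win_b):
    """Integer-pixel displacement of win_b relative to win_a via FFT cross-correlation."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Indices above half the window size wrap around to negative shifts.
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shift)  # (row shift, column shift)
```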

  3. A comparison of the PROCAM and Framingham point-scoring systems for estimation of individual risk of coronary heart disease in the Second Northwick Park Heart Study.

    PubMed

    Cooper, Jackie A; Miller, George J; Humphries, Steve E

    2005-07-01

    We have compared the predictive value of the PROCAM and Framingham risk algorithms in healthy UK men from the Second Northwick Park Heart Study (NPHS-II) (50-64 years at entry), followed for a median of 10.8 years for coronary heart disease (CHD) events. For PROCAM, the area under the receiver operating characteristic (ROC) curve was 0.63 (95% CI, 0.59-0.67), and not significantly different (p = 0.46) from the Framingham score, 0.62 (0.58-0.66). Sensitivities for a 5% false-positive rate (DR(5)) were 13.8 and 12.4%, respectively. Calibration analysis for PROCAM gave a ratio of observed to expected events of 0.46 (Hosmer-Lemeshow test, p < 0.0001) and 0.47 for Framingham (p < 0.0001). Using measures taken at 5 years of high-density lipoprotein cholesterol and (estimated) low-density lipoprotein cholesterol levels increased the ROC by only 1%. An NPHS-II risk algorithm, developed using a 50% random subset, and including age, triglyceride, total cholesterol, smoking status, and systolic blood pressure at recruitment, gave an ROC of 0.64 (0.58-0.70) with a DR(5) of 10.7% when applied to the second half of the data. Adding family history and diabetes increased the DR(5) to 18.4% (p = 0.28). Adding lipoprotein(a) >26.3 mg/dL (relative risk 1.6, 1.1-2.4) gave a DR(5) of 15.5% (p = 0.55), while adding fibrinogen levels (relative risk for 1 SD increase = 1.5, 1.1-2.0) had essentially no additional impact (DR(5) = 16.9%, p = 0.95). Thus, the PROCAM algorithm is marginally better as a risk predictor in UK men than the Framingham score, but both significantly overestimate risk in UK men. The algorithm based on NPHS-II data performs similarly to those for PROCAM and Framingham with respect to discrimination, but gave an improved ratio of observed to expected events of 0.80 (p = 0.01), although no score had a high sensitivity. Any novel factors added to these algorithms will need to have a major impact on risk to increase sensitivity above that given by classical risk factors

  4. Accurate Alignment of Plasma Channels Based on Laser Centroid Oscillations

    SciTech Connect

    Gonsalves, Anthony; Nakamura, Kei; Lin, Chen; Osterhoff, Jens; Shiraishi, Satomi; Schroeder, Carl; Geddes, Cameron; Toth, Csaba; Esarey, Eric; Leemans, Wim

    2011-03-23

    A technique has been developed to accurately align a laser beam through a plasma channel by minimizing the shift in laser centroid and angle at the channel output. If only the shift in centroid or angle is measured, then accurate alignment is provided by minimizing laser centroid motion at the channel exit as the channel properties are scanned. The improvement in alignment accuracy provided by this technique is important for minimizing electron beam pointing errors in laser plasma accelerators.

  5. Nonlinear analysis and performance evaluation of the Annular Suspension and Pointing System (ASPS)

    NASA Technical Reports Server (NTRS)

    Joshi, S. M.

    1978-01-01

    The Annular Suspension and Pointing System (ASPS) can provide highly accurate fine pointing for a variety of solar-, stellar-, and Earth-viewing scientific instruments during Space Shuttle orbital missions. In this report, a detailed nonlinear mathematical model is developed for the ASPS/Space Shuttle system. The equations are augmented with nonlinear models of components such as magnetic actuators and gimbal torquers. Control systems and payload attitude state estimators are designed in order to obtain satisfactory pointing performance, and statistical pointing performance is predicted in the presence of measurement noise and disturbances.

  6. Clinically accurate fetal ECG parameters acquired from maternal abdominal sensors

    PubMed Central

    CLIFFORD, Gari; SAMENI, Reza; WARD, Mr. Jay; ROBINSON, Julian; WOLFBERG, Adam J.

    2011-01-01

    OBJECTIVE To evaluate the accuracy of a novel system for measuring fetal heart rate and ST-segment changes using non-invasive electrodes on the maternal abdomen. STUDY DESIGN Fetal ECGs were recorded using abdominal sensors from 32 term laboring women who had a fetal scalp electrode (FSE) placed for a clinical indication. RESULTS Good quality data for FHR estimation was available in 91.2% of the FSE segments, and 89.9% of the abdominal electrode segments. The root mean square (RMS) error between the FHR data calculated by both methods over all processed segments was 0.36 beats per minute. ST deviation from the isoelectric point ranged from 0 to 14.2% of R-wave amplitude. The RMS error between the ST change calculated by both methods averaged over all processed segments was 3.2%. CONCLUSION FHR and ST change acquired from the maternal abdomen is highly accurate and on average is clinically indistinguishable from FHR and ST change calculated using FSE data. PMID:21514560

  7. Fast and accurate propagation of coherent light

    PubMed Central

    Lewis, R. D.; Beylkin, G.; Monzón, L.

    2013-01-01

    We describe a fast algorithm to propagate, for any user-specified accuracy, a time-harmonic electromagnetic field between two parallel planes separated by a linear, isotropic and homogeneous medium. The analytical formulation of this problem (ca 1897) requires the evaluation of the so-called Rayleigh–Sommerfeld integral. If the distance between the planes is small, this integral can be accurately evaluated in the Fourier domain; if the distance is very large, it can be accurately approximated by asymptotic methods. In the large intermediate region of practical interest, where the oscillatory Rayleigh–Sommerfeld kernel must be applied directly, current numerical methods can be highly inaccurate without indicating this fact to the user. In our approach, for any user-specified accuracy ϵ>0, we approximate the kernel by a short sum of Gaussians with complex-valued exponents, and then efficiently apply the result to the input data using the unequally spaced fast Fourier transform. The resulting algorithm has computational complexity , where we evaluate the solution on an N×N grid of output points given an M×M grid of input samples. Our algorithm maintains its accuracy throughout the computational domain. PMID:24204184

  8. Fast and Accurate Construction of Confidence Intervals for Heritability.

    PubMed

    Schweiger, Regev; Kaufman, Shachar; Laaksonen, Reijo; Kleber, Marcus E; März, Winfried; Eskin, Eleazar; Rosset, Saharon; Halperin, Eran

    2016-06-01

    Estimation of heritability is fundamental in genetic studies. Recently, heritability estimation using linear mixed models (LMMs) has gained popularity because these estimates can be obtained from unrelated individuals collected in genome-wide association studies. Typically, heritability estimation under LMMs uses the restricted maximum likelihood (REML) approach. Existing methods for the construction of confidence intervals and estimators of SEs for REML rely on asymptotic properties. However, these assumptions are often violated because of the bounded parameter space, statistical dependencies, and limited sample size, leading to biased estimates and inflated or deflated confidence intervals. Here, we show that the estimation of confidence intervals by state-of-the-art methods is inaccurate, especially when the true heritability is relatively low or relatively high. We further show that these inaccuracies occur in datasets including thousands of individuals. Such biases are present, for example, in estimates of heritability of gene expression in the Genotype-Tissue Expression project and of lipid profiles in the Ludwigshafen Risk and Cardiovascular Health study. We also show that often the probability that the genetic component is estimated as 0 is high even when the true heritability is bounded away from 0, emphasizing the need for accurate confidence intervals. We propose a computationally efficient method, ALBI (accurate LMM-based heritability bootstrap confidence intervals), for estimating the distribution of the heritability estimator and for constructing accurate confidence intervals. Our method can be used as an add-on to existing methods for estimating heritability and variance components, such as GCTA, FaST-LMM, GEMMA, or EMMAX. PMID:27259052
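
    The core idea, replacing asymptotic theory with the simulated distribution of the estimator, can be illustrated with a generic parametric bootstrap. The sketch below uses a deliberately simple toy estimator and data model; it is not the ALBI implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_h2(y_pairs):
    """Toy estimator: twice the parent-offspring correlation (Falconer-style)."""
    r = np.corrcoef(y_pairs[:, 0], y_pairs[:, 1])[0, 1]
    return min(max(2.0 * r, 0.0), 1.0)   # truncate to the bounded space [0, 1]

def simulate(h2, n):
    """Parent/offspring phenotype pairs sharing covariance h2/2, unit variance."""
    shared = rng.normal(size=n) * np.sqrt(h2 / 2.0)
    resid = np.sqrt(1.0 - h2 / 2.0)
    return np.column_stack([shared + rng.normal(scale=resid, size=n),
                            shared + rng.normal(scale=resid, size=n)])

data = simulate(0.4, n=500)
h2_hat = estimate_h2(data)
# Parametric bootstrap: re-simulate at the point estimate, re-estimate,
# and read the confidence interval off the empirical percentiles.
boot = [estimate_h2(simulate(h2_hat, len(data))) for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"h2_hat = {h2_hat:.2f}, 95% bootstrap CI = ({lo:.2f}, {hi:.2f})")
```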

  9. Accurate calculation of Coulomb sums: Efficacy of Padé-like methods

    SciTech Connect

    Sarkar, B.; Bhattacharyya, K.

    1993-09-01

    The adequacy of numerical sequence accelerative transforms in providing accurate estimates of Coulomb sums is considered, referring particularly to distorted lattices. Performance of diagonal Padé approximants (DPA) in this context is critically assessed. Failure in the case of lattice vacancies is also demonstrated. The method of multiple-point Padé approximants (MPA) has been introduced for slowly convergent sequences and is shown to work well for both regular and distorted lattices, the latter being due either to impurities or vacancies. Viability of the two methods is also compared. In divergent situations with distortions owing to vacancies, a strategy of obtaining reliable results by separate applications of both DPA and MPA at appropriate places is also sketched. Representative calculations involve two basic cubic-lattice sums, one slowly convergent and the other divergent, from which very good quality estimates of Madelung constants for a number of common lattices follow.
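
    Diagonal Padé values can be generated directly from a series' partial sums with Wynn's epsilon algorithm, which makes DPA-style acceleration easy to reproduce (a generic sketch with no guard against zero denominators; not the authors' code):

```python
def wynn_epsilon(partial_sums):
    """Wynn's epsilon algorithm; even-order columns give diagonal Pade values.
    Pass an odd number of partial sums so the final column is even-order."""
    prev = [0.0] * (len(partial_sums) + 1)   # the epsilon_{-1} column (zeros)
    curr = list(partial_sums)                # the epsilon_0 column (the sums)
    for _ in range(len(partial_sums) - 1):
        nxt = [prev[j + 1] + 1.0 / (curr[j + 1] - curr[j])
               for j in range(len(curr) - 1)]
        prev, curr = curr, nxt
    return curr[0]   # most accelerated estimate

# Example: partial sums of the slowly convergent series 1 - 1/2 + 1/3 - ... = ln 2.
sums, s = [], 0.0
for k in range(1, 12):
    s += (-1) ** (k + 1) / k
    sums.append(s)
print(wynn_epsilon(sums))   # close to 0.693147..., far better than sums[-1]
```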

  10. Tipping Point

    MedlinePlus Videos and Cool Tools

    Tipping Point, by CPSC Blogger, September 22. A TV falls with about the same force as a child falling from the third story of a building.

  11. Toward Accurate and Quantitative Comparative Metagenomics.

    PubMed

    Nayfach, Stephen; Pollard, Katherine S

    2016-08-25

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341

  12. Accurate thickness measurement of graphene

    NASA Astrophysics Data System (ADS)

    Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T.

    2016-03-01

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.

  13. Baseline Estimation Algorithm with Block Adjustment for Multi-Pass Dual-Antenna InSAR

    NASA Astrophysics Data System (ADS)

    Jin, Guowang; Xiong, Xin; Xu, Qing; Gong, Zhihui; Zhou, Yang

    2016-06-01

    Baseline parameters and the interferometric phase offset are key parameters in InSAR (Interferometric Synthetic Aperture Radar) processing and need to be estimated accurately. Single-pass baseline estimation algorithms require large numbers of ground control points to estimate interferometric parameters when mosaicking multi-pass dual-antenna airborne InSAR data covering large areas. Moreover, errors in the estimated parameters produce large height discrepancies between passes. An estimation algorithm for interferometric parameters with block adjustment for multi-pass dual-antenna InSAR is therefore presented to reduce both the number of ground control points needed and the height discrepancies between passes. Baseline estimation experiments were carried out with multi-pass InSAR data obtained by a Chinese dual-antenna airborne InSAR system. Satisfactory results were obtained even with few ground control points, validating the proposed baseline estimation algorithm.

  14. Towards an accurate bioimpedance identification

    NASA Astrophysics Data System (ADS)

    Sanchez, B.; Louarroudi, E.; Bragos, R.; Pintelon, R.

    2013-04-01

    This paper describes the local polynomial method (LPM) for estimating the time-invariant bioimpedance frequency response function (FRF) under both the output-error (OE) and the errors-in-variables (EIV) identification frameworks, and compares it with the traditional cross- and autocorrelation spectral analysis techniques. The bioimpedance FRF is measured with the multisine electrical impedance spectroscopy (EIS) technique. To demonstrate the accuracy of the LPM approach, both the LPM and the classical cross- and autocorrelation spectral analysis techniques are evaluated on the same experimental data, coming from a nonsteady-state measurement of time-varying in vivo myocardial tissue. The estimated error sources at the measurement frequencies due to noise, σ_nZ, and the stochastic nonlinear distortions, σ_ZNL, have been converted to Ω and plotted over the bioimpedance spectrum for each framework. Finally, the impedance spectra have been fitted to a Cole impedance model using both an unweighted and a weighted complex nonlinear least squares (CNLS) algorithm. A table of the relative standard errors on the estimated parameters is provided to show which system identification framework should be used.
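
    The final fitting step has a compact generic form: the Cole model Z(ω) = R∞ + (R0 − R∞)/(1 + (jωτ)^α), fitted by complex nonlinear least squares on stacked real and imaginary parts. A sketch with synthetic data (parameter values and starting guesses are illustrative, and no weighting is applied):

```python
import numpy as np
from scipy.optimize import least_squares

def cole(params, w):
    """Cole impedance model Z(w) = Rinf + (R0 - Rinf) / (1 + (j*w*tau)**alpha)."""
    r0, rinf, tau, alpha = params
    return rinf + (r0 - rinf) / (1.0 + (1j * w * tau) ** alpha)

def fit_cole(w, z_meas, x0=(100.0, 20.0, 1e-5, 0.8)):
    """Unweighted CNLS: minimize stacked real/imaginary residuals."""
    resid = lambda p: np.concatenate([(cole(p, w) - z_meas).real,
                                      (cole(p, w) - z_meas).imag])
    return least_squares(resid, x0).x

w = 2 * np.pi * np.logspace(2, 6, 40)     # 100 Hz to 1 MHz
z = cole((80.0, 25.0, 2e-6, 0.75), w)     # synthetic "measurement"
print(fit_cole(w, z))                     # recovers the four parameters
```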

  15. ESTIMATING IRRIGATION COSTS

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Having accurate estimates of the cost of irrigation is important when making irrigation decisions. Estimates of fixed costs are critical for investment decisions. Operating cost estimates can assist in decisions regarding additional irrigations. This fact sheet examines the costs associated with ...

  16. Price Estimation Guidelines

    NASA Technical Reports Server (NTRS)

    Chamberlain, R. G.; Aster, R. W.; Firnett, P. J.; Miller, M. A.

    1985-01-01

    Improved Price Estimation Guidelines, IPEG4, program provides comparatively simple, yet relatively accurate estimate of price of manufactured product. IPEG4 processes user supplied input data to determine estimate of price per unit of production. Input data include equipment cost, space required, labor cost, materials and supplies cost, utility expenses, and production volume on industry wide or process wide basis.
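
    In spirit, such a model amortizes one-time and recurring costs over production volume. A minimal illustrative sketch (the cost categories mirror the inputs listed above, but the formula, lifetime, and margin are assumptions, not IPEG4's actual method):

```python
def price_per_unit(equipment_cost, space_cost_per_yr, labor_cost_per_yr,
                   materials_per_yr, utilities_per_yr, units_per_yr,
                   equipment_life_yr=10, margin=0.15):
    """Amortized annual cost divided by annual volume, plus a profit margin."""
    annual = (equipment_cost / equipment_life_yr + space_cost_per_yr
              + labor_cost_per_yr + materials_per_yr + utilities_per_yr)
    return annual * (1.0 + margin) / units_per_yr

# Hypothetical plant: $1M equipment, 100k units/yr of output.
print(price_per_unit(1_000_000, 50_000, 300_000, 200_000, 40_000,
                     units_per_yr=100_000))
```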

  17. Automated Point Cloud Correspondence Detection for Underwater Mapping Using AUVs

    NASA Technical Reports Server (NTRS)

    Hammond, Marcus; Clark, Ashley; Mahajan, Aditya; Sharma, Sumant; Rock, Stephen

    2015-01-01

    An algorithm for automating correspondence detection between point clouds composed of multibeam sonar data is presented. This allows accurate initialization for point cloud alignment techniques even in cases where accurate inertial navigation is not available, such as iceberg profiling or vehicles with low-grade inertial navigation systems. Techniques from computer vision literature are used to extract, label, and match keypoints between "pseudo-images" generated from these point clouds. Image matches are refined using RANSAC and information about the vehicle trajectory. The resulting correspondences can be used to initialize an iterative closest point (ICP) registration algorithm to estimate accumulated navigation error and aid in the creation of accurate, self-consistent maps. The results presented use multibeam sonar data obtained from multiple overlapping passes of an underwater canyon in Monterey Bay, California. Using strict matching criteria, the method detects 23 between-swath correspondence events in a set of 155 pseudo-images with zero false positives. Using less conservative matching criteria doubles the number of matches but introduces several false positive matches as well. Heuristics based on known vehicle trajectory information are used to eliminate these.
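
    A minimal keypoint-matching sketch in the same spirit, using OpenCV (ORB features and an affine RANSAC model are stand-ins chosen here for brevity; the paper does not specify these exact choices, and the trajectory-based refinement is omitted):

```python
import cv2
import numpy as np

def match_pseudo_images(img_a, img_b):
    """Match keypoints between two 8-bit grayscale pseudo-images, then reject
    outliers with a RANSAC-fitted rotation+translation+scale transform."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)
    src = np.float32([kp_a[m.queryIdx].pt for m in matches])
    dst = np.float32([kp_b[m.trainIdx].pt for m in matches])
    # RANSAC keeps only matches consistent with a single similarity transform;
    # the surviving correspondences can seed an ICP registration.
    transform, inliers = cv2.estimateAffinePartial2D(
        src, dst, method=cv2.RANSAC, ransacReprojThreshold=3.0)
    mask = inliers.ravel() == 1
    return transform, src[mask], dst[mask]
```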

  18. Accurate adiabatic correction in the hydrogen molecule

    SciTech Connect

    Pachucki, Krzysztof; Komasa, Jacek

    2014-12-14

    A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10{sup −12} at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H{sub 2}, HD, HT, D{sub 2}, DT, and T{sub 2} has been determined. For the ground state of H{sub 2} the estimated precision is 3 × 10{sup −7} cm{sup −1}, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present day theoretical predictions for the rovibrational levels.

  19. Accurate adiabatic correction in the hydrogen molecule

    NASA Astrophysics Data System (ADS)

    Pachucki, Krzysztof; Komasa, Jacek

    2014-12-01

    A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10-12 at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H2, HD, HT, D2, DT, and T2 has been determined. For the ground state of H2 the estimated precision is 3 × 10-7 cm-1, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present day theoretical predictions for the rovibrational levels.

  20. Accurate ab Initio Spin Densities

    PubMed Central

    2012-01-01

    We present an approach for the calculation of spin density distributions for molecules that require very large active spaces for a qualitatively correct description of their electronic structure. Our approach is based on the density-matrix renormalization group (DMRG) algorithm to calculate the spin density matrix elements as a basic quantity for the spatially resolved spin density distribution. The spin density matrix elements are directly determined from the second-quantized elementary operators optimized by the DMRG algorithm. As an analytic convergence criterion for the spin density distribution, we employ our recently developed sampling-reconstruction scheme [J. Chem. Phys.2011, 134, 224101] to build an accurate complete-active-space configuration-interaction (CASCI) wave function from the optimized matrix product states. The spin density matrix elements can then also be determined as an expectation value employing the reconstructed wave function expansion. Furthermore, the explicit reconstruction of a CASCI-type wave function provides insight into chemically interesting features of the molecule under study such as the distribution of α and β electrons in terms of Slater determinants, CI coefficients, and natural orbitals. The methodology is applied to an iron nitrosyl complex which we have identified as a challenging system for standard approaches [J. Chem. Theory Comput.2011, 7, 2740]. PMID:22707921

  1. Precise and Accurate Density Determination of Explosives Using Hydrostatic Weighing

    SciTech Connect

    B. Olinger

    2005-07-01

    Precise and accurate density determination requires weight measurements in air and water using sufficiently precise analytical balances, knowledge of the densities of air and water, knowledge of thermal expansions, availability of a density standard, and a method to estimate the time to achieve thermal equilibrium with water. Density distributions in pressed explosives are inferred from the densities of elements from a central slice.
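
    The underlying hydrostatic-weighing relation is compact. A sketch with a first-order air-buoyancy correction (the constants and sample numbers are illustrative; the paper's full procedure additionally handles thermal expansion and thermal-equilibrium timing):

```python
def hydrostatic_density(w_air_g, w_water_g, rho_water=0.99705, rho_air=0.0012):
    """Sample density (g/cm^3) from balance readings in air and in water,
    with an air-buoyancy correction (densities at roughly 25 C, 1 atm)."""
    return w_air_g / (w_air_g - w_water_g) * (rho_water - rho_air) + rho_air

# Hypothetical pressed pellet: 10.000 g in air, 4.650 g submerged.
print(hydrostatic_density(10.000, 4.650))   # ~1.86 g/cm^3
```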

  2. Accurate paleointensities - the multi-method approach

    NASA Astrophysics Data System (ADS)

    de Groot, Lennart

    2016-04-01

    The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic times) have seen significant improvements and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al, 2014). The Multispecimen approach was validated and the importance of additional tests and criteria to assess Multispecimen results must be emphasized. Recently, a non-heating, relative paleointensity technique was proposed -the pseudo-Thellier protocol- which shows great potential in both accuracy and efficiency, but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old, the actual field strength at the time of cooling is therefore reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method, but fail in other techniques. Scrutinizing and combing all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units - an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.

  3. Accurate Inventories Of Irrigated Land

    NASA Technical Reports Server (NTRS)

    Wall, S.; Thomas, R.; Brown, C.

    1992-01-01

    System for taking land-use inventories overcomes two problems in estimating extent of irrigated land: only small portion of large state surveyed in given year, and aerial photographs made on 1 day out of year do not provide adequate picture of areas growing more than one crop per year. Developed for state of California as guide to controlling, protecting, conserving, and distributing water within state. Adapted to any large area in which large amounts of irrigation water needed for agriculture. Combination of satellite images, aerial photography, and ground surveys yields data for computer analysis. Analyst also consults agricultural statistics, current farm reports, weather reports, and maps. These information sources aid in interpreting patterns, colors, textures, and shapes on Landsat-images.

  4. Accurate Weather Forecasting for Radio Astronomy

    NASA Astrophysics Data System (ADS)

    Maddalena, Ronald J.

    2010-01-01

    The NRAO Green Bank Telescope routinely observes at wavelengths from 3 mm to 1 m. As with all mm-wave telescopes, observing conditions depend upon the variable atmospheric water content. The site provides over 100 days/yr when opacities are low enough for good observing at 3 mm, but winds on the open-air structure reduce the time suitable for 3-mm observing where pointing is critical. Thus, to maximize productivity, the observing wavelength needs to match weather conditions. For 6 years the telescope has used a dynamic scheduling system (recently upgraded; www.gb.nrao.edu/DSS) that requires accurate multi-day forecasts for winds and opacities. Since opacity forecasts are not provided by the National Weather Service (NWS), I have developed an automated system that takes available forecasts, derives forecasted opacities, and deploys the results on the web in user-friendly graphical overviews (www.gb.nrao.edu/~rmaddale/Weather). The system relies on the "North American Mesoscale" models, which are updated by the NWS every 6 hrs, have a 12 km horizontal resolution, 1 hr temporal resolution, run to 84 hrs, and have 60 vertical layers that extend to 20 km. Each forecast consists of a time series of ground conditions, cloud coverage, etc., and, most importantly, temperature, pressure, and humidity as a function of height. I use Liebe's MPM model (Radio Science, 20, 1069, 1985) to determine the absorption in each layer for each hour for 30 observing wavelengths. Radiative transfer provides, for each hour and wavelength, the total opacity and the radio brightness of the atmosphere, which contributes substantially at some wavelengths to Tsys and the observational noise. Comparisons of measured and forecasted Tsys at 22.2 and 44 GHz imply that the forecasted opacities are good to about 0.01 nepers, which is sufficient for forecasting and accurate calibration. Reliability is high out to 2 days and degrades slowly for longer-range forecasts.
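
    The radiative-transfer step has a simple plane-parallel form: each layer emits according to its temperature and opacity and is attenuated by the layers below it. A toy sketch (layer values invented; the operational system derives per-layer absorption from Liebe's model):

```python
import math

def sky_brightness(layer_tau, layer_temp_k):
    """Zenith opacity (nepers) and brightness temperature (K).

    Layers are ordered from the telescope upward; Rayleigh-Jeans limit,
    cosmic background term omitted for brevity."""
    tau_below = 0.0   # accumulated opacity between telescope and current layer
    t_b = 0.0
    for tau, temp in zip(layer_tau, layer_temp_k):
        t_b += temp * (1.0 - math.exp(-tau)) * math.exp(-tau_below)
        tau_below += tau
    return tau_below, t_b

# Toy 3-layer atmosphere: per-layer opacities and mean temperatures.
print(sky_brightness([0.005, 0.003, 0.002], [280.0, 250.0, 220.0]))
```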

  5. Information geometric density estimation

    NASA Astrophysics Data System (ADS)

    Sun, Ke; Marchand-Maillet, Stéphane

    2015-01-01

    We investigate kernel density estimation where the kernel function varies from point to point. Density estimation in the input space amounts to finding a set of coordinates on a statistical manifold. This novel perspective helps to combine efforts from information geometry and machine learning to spawn a family of density estimators. We present example models with simulations. We discuss the principle and theory of such density estimation.
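
    A minimal example of a kernel that varies from point to point is a sample-point estimator whose bandwidths follow local density (a generic adaptive-KDE sketch with Gaussian kernels, not the paper's information-geometric construction):

```python
import numpy as np

def variable_kde(x_query, samples, k=10):
    """Sample-point KDE: each sample's bandwidth is its distance to the
    k-th nearest neighbour, so kernels adapt to local density (1-D case)."""
    d = np.abs(samples[:, None] - samples[None, :])   # pairwise distances
    h = np.sort(d, axis=1)[:, k]                      # per-sample bandwidths
    u = (x_query[:, None] - samples[None, :]) / h     # standardized offsets
    kern = np.exp(-0.5 * u**2) / (np.sqrt(2 * np.pi) * h)
    return kern.mean(axis=1)                          # density at each query

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0, 0.3, 300), rng.normal(4, 1.0, 300)])
print(variable_kde(np.linspace(-2, 8, 5), data))
```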

  6. Evaluation of Piloted Inputs for Onboard Frequency Response Estimation

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Martos, Borja

    2013-01-01

    Frequency response estimation results are presented using piloted inputs and a real-time estimation method recently developed for multisine inputs. Nonlinear simulations of an F-16 and of a Piper Saratoga research aircraft were subjected to different piloted test inputs while the short-period stabilator/elevator-to-pitch-rate frequency response was estimated. Results show that the method can produce accurate results using wide-band piloted inputs instead of multisines. A new metric is introduced for evaluating which da
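
    The quantity being estimated is the ratio of output to input spectra at the excited frequencies. A generic offline sketch of such a frequency response estimate (illustrative only; not the real-time method described above):

```python
import numpy as np

def frf_estimate(u, y, dt, freqs_hz):
    """Frequency response Y/U at selected frequencies via windowed DFTs."""
    n = len(u)
    f = np.fft.rfftfreq(n, dt)
    U = np.fft.rfft(u * np.hanning(n))
    Y = np.fft.rfft(y * np.hanning(n))
    idx = [int(np.argmin(np.abs(f - fk))) for fk in freqs_hz]
    return f[idx], Y[idx] / U[idx]   # complex gain; magnitude/phase follow

# Toy check: first-order lag y' = -2y + 2u driven by a two-tone input.
dt, t = 0.01, np.arange(0, 50, 0.01)
u = np.sin(2 * np.pi * 0.2 * t) + np.sin(2 * np.pi * 1.0 * t)
y = np.zeros_like(u)
for i in range(1, len(t)):                  # simple Euler integration
    y[i] = y[i - 1] + dt * (-2.0 * y[i - 1] + 2.0 * u[i - 1])
print(frf_estimate(u, y, dt, [0.2, 1.0]))   # gains near 2/(jw+2) at each tone
```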