Science.gov

Sample records for accurate point estimates

  1. Estimation method of point spread function based on Kalman filter for accurately evaluating real optical properties of photonic crystal fibers.

    PubMed

    Shen, Yan; Lou, Shuqin; Wang, Xin

    2014-03-20

    The evaluation accuracy of the real optical properties of photonic crystal fibers (PCFs) is determined by the accurate extraction of air hole edges from microscope images of cross sections of practical PCFs. A novel estimation method of the point spread function (PSF) based on a Kalman filter is presented to rebuild the micrograph image of the PCF cross-section and thus evaluate the real optical properties of practical PCFs. Through tests on both artificially degraded images and microscope images of cross sections of practical PCFs, we prove that the proposed method can achieve more accurate PSF estimation and lower PSF variance than the traditional Bayesian estimation method, and thus also reduce the defocus effect. With this method, we rebuild the microscope images of two kinds of commercial PCFs produced by Crystal Fiber and analyze the real optical properties of these PCFs. Numerical results are in accord with the product parameters.
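
    The abstract does not spell out the filter equations, so the following is only a minimal illustrative sketch: a scalar Kalman filter tracking one slowly varying PSF parameter (e.g., a Gaussian blur width) from noisy measurements. The function name, random-walk state model, and noise variances q and r are all hypothetical, not taken from the paper.

    ```python
    import numpy as np

    def kalman_psf_width(measurements, q=1e-4, r=1e-2):
        """Scalar Kalman filter for a slowly varying PSF width.

        measurements: noisy width estimates (one per image row, say)
        q: process-noise variance, r: measurement-noise variance
        (both hypothetical values, for illustration only).
        """
        x = measurements[0]   # initial state estimate
        p = 1.0               # initial state variance
        out = []
        for z in measurements:
            # Predict: random-walk state model, so the mean is unchanged
            p = p + q
            # Update: blend prediction and measurement by the Kalman gain
            k = p / (p + r)
            x = x + k * (z - x)
            p = (1.0 - k) * p
            out.append(x)
        return np.array(out)

    # Example: true width 2.0 px observed with noise
    rng = np.random.default_rng(0)
    z = 2.0 + 0.1 * rng.standard_normal(200)
    print(kalman_psf_width(z)[-1])   # converges near 2.0
    ```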

  2. Precision Pointing Control to and Accurate Target Estimation of a Non-Cooperative Vehicle

    NASA Technical Reports Server (NTRS)

    VanEepoel, John; Thienel, Julie; Sanner, Robert M.

    2006-01-01

    In 2004, NASA began investigating a robotic servicing mission for the Hubble Space Telescope (HST). Such a mission would require not only estimates of the HST attitude and rates in order to achieve capture by the proposed Hubble Robotic Vehicle (HRV), but also precision control to achieve the desired rate and maintain the orientation to successfully dock with HST. To generalize the situation, HST is the target vehicle and HRV is the chaser. This work presents a nonlinear approach for estimating the body rates of a non-cooperative target vehicle and coupling this estimation to a control scheme. Non-cooperative in this context means that the target vehicle no longer has the ability to maintain attitude control or transmit attitude knowledge.

  3. Accurate pointing of tungsten welding electrodes

    NASA Technical Reports Server (NTRS)

    Ziegelmeier, P.

    1971-01-01

    Thoriated tungsten is pointed accurately and quickly by using sodium nitrite. The point produced is smooth, and no effort is necessary to hold the tungsten rod concentric. The chemically produced point can be used several times longer than ground points. This method reduces the time and cost of preparing tungsten electrodes.

  4. Point estimates for probability moments

    PubMed Central

    Rosenblueth, Emilio

    1975-01-01

    Given a well-behaved real function Y of a real random variable X and the first two or three moments of X, expressions are derived for the moments of Y as linear combinations of powers of the point estimates y(x+) and y(x-), where x+ and x- are specific values of X. Higher-order approximations and approximations for discontinuous Y using more point estimates are also given. Second-moment approximations are generalized to the case when Y is a function of several variables. PMID:16578731
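
    Rosenblueth's two-point scheme for a symmetric X is simple enough to sketch directly from the abstract: evaluate Y at x± = μ ± σ and weight the two point estimates equally. The sketch below covers only this symmetric special case; the function name and example are ours.

    ```python
    import numpy as np

    def rosenblueth_two_point(y, mu, sigma):
        """Two-point estimate of the mean and variance of Y = y(X)
        for a symmetric X with given mean and standard deviation."""
        yp, ym = y(mu + sigma), y(mu - sigma)
        mean = 0.5 * (yp + ym)
        var = 0.25 * (yp - ym) ** 2   # second moment minus squared mean
        return mean, var

    # Example: Y = X**2 with X ~ (mu=0, sigma=1); the exact mean is 1
    m, v = rosenblueth_two_point(lambda x: x**2, 0.0, 1.0)
    print(m, v)   # 1.0, 0.0 -- exact mean; variance is underestimated here
    ```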

  5. Accurate photometric redshift probability density estimation - method comparison and application

    NASA Astrophysics Data System (ADS)

    Rau, Markus Michael; Seitz, Stella; Brimioulle, Fabrice; Frank, Eibe; Friedrich, Oliver; Gruen, Daniel; Hoyle, Ben

    2015-10-01

    We introduce an ordinal classification algorithm for photometric redshift estimation, which significantly improves the reconstruction of photometric redshift probability density functions (PDFs) for individual galaxies and galaxy samples. As a use case we apply our method to CFHTLS galaxies. The ordinal classification algorithm treats distinct redshift bins as ordered values, which improves the quality of photometric redshift PDFs, compared with non-ordinal classification architectures. We also propose a new single value point estimate of the galaxy redshift, which can be used to estimate the full redshift PDF of a galaxy sample. This method is competitive in terms of accuracy with contemporary algorithms, which stack the full redshift PDFs of all galaxies in the sample, but requires orders of magnitude less storage space. The methods described in this paper greatly improve the log-likelihood of individual object redshift PDFs, when compared with a popular neural network code (ANNZ). In our use case, this improvement reaches 50 per cent for high-redshift objects (z ≥ 0.75). We show that using these more accurate photometric redshift PDFs will lead to a reduction in the systematic biases by up to a factor of 4, when compared with less accurate PDFs obtained from commonly used methods. The cosmological analyses we examine and find improvement upon are the following: gravitational lensing cluster mass estimates, modelling of angular correlation functions and modelling of cosmic shear correlation functions.

  6. Accurate Biomass Estimation via Bayesian Adaptive Sampling

    NASA Technical Reports Server (NTRS)

    Wheeler, Kevin R.; Knuth, Kevin H.; Castle, Joseph P.; Lvov, Nikolay

    2005-01-01

    The following concepts were introduced: a) Bayesian adaptive sampling for solving biomass estimation; b) characterization of MISR Rahman model parameters conditioned upon MODIS landcover; c) a rigorous non-parametric Bayesian approach to analytic mixture model determination; and d) a unique U.S. asset for science product validation and verification.

  7. 31 CFR 205.24 - How are accurate estimates maintained?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 31 Money and Finance: Treasury ... Treasury-State Agreement § 205.24 How are accurate estimates maintained? (a) If a State has knowledge that an estimate does not reasonably correspond to the State's cash needs for a Federal assistance...

  8. Micromagnetometer calibration for accurate orientation estimation.

    PubMed

    Zhang, Zhi-Qiang; Yang, Guang-Zhong

    2015-02-01

    Micromagnetometers, together with inertial sensors, are widely used for attitude estimation in a wide variety of applications. However, appropriate sensor calibration, which is essential to the accuracy of attitude reconstruction, must be performed in advance. Thus far, many different magnetometer calibration methods have been proposed to compensate for errors such as scale, offset, and nonorthogonality. They have also been used to obviate magnetic errors due to soft and hard iron. However, in order to combine the magnetometer with an inertial sensor for attitude reconstruction, the alignment difference between the magnetometer and the axes of the inertial sensor must be determined as well. This paper proposes a practical means of sensor error correction by simultaneous consideration of sensor errors, magnetic errors, and alignment difference. We take the summation of the offset and hard iron error as the combined bias and then amalgamate the alignment difference and all the other errors into a transformation matrix. A two-step approach is presented to determine the combined bias and transformation matrix separately. In the first step, the combined bias is determined by finding an optimal ellipsoid that best fits the sensor readings. In the second step, the intrinsic relationships of the raw sensor readings are explored to estimate the transformation matrix as a homogeneous linear least-squares problem. Singular value decomposition is then applied to estimate both the transformation matrix and the magnetic vector. The proposed method is then applied to calibrate our sensor node. Although there is no ground truth for the combined bias and transformation matrix for our node, the consistency of calibration results among different trials and a less than 3° root mean square error for orientation estimation have been achieved, which illustrates the effectiveness of the proposed sensor calibration method for practical applications. PMID:25265625
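
    As an illustration of the first calibration step (finding the combined bias as the centre of a best-fit surface), here is a least-squares sketch. Note that the paper fits a full ellipsoid; the code below fits the simpler special case of a sphere, and all names and numbers are hypothetical.

    ```python
    import numpy as np

    def fit_combined_bias(mag):
        """Least-squares sphere fit to raw magnetometer readings.

        mag: (N, 3) array of raw readings. Returns the sphere centre,
        which plays the role of the combined offset + hard-iron bias,
        and the radius. Uses ||m - b||^2 = R^2 rearranged into the
        linear system 2 m.b + (R^2 - ||b||^2) = ||m||^2.
        """
        A = np.hstack([2.0 * mag, np.ones((len(mag), 1))])
        y = np.sum(mag ** 2, axis=1)
        sol, *_ = np.linalg.lstsq(A, y, rcond=None)
        center, c = sol[:3], sol[3]
        radius = np.sqrt(c + center @ center)
        return center, radius

    # Example: synthetic readings on a sphere around a known bias
    rng = np.random.default_rng(1)
    u = rng.standard_normal((500, 3))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    readings = 45.0 * u + np.array([3.0, -7.0, 12.0])  # hypothetical uT values
    print(fit_combined_bias(readings))  # recovers bias ~(3, -7, 12), radius ~45
    ```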

  9. An accurate method for two-point boundary value problems

    NASA Technical Reports Server (NTRS)

    Walker, J. D. A.; Weigand, G. G.

    1979-01-01

    A second-order method for solving two-point boundary value problems on a uniform mesh is presented where the local truncation error is obtained for use with the deferred correction process. In this simple finite difference method the tridiagonal nature of the classical method is preserved but the magnitude of each term in the truncation error is reduced by a factor of two. The method is applied to a number of linear and nonlinear problems and it is shown to produce more accurate results than either the classical method or the technique proposed by Keller (1969).
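
    The classical tridiagonal scheme the abstract builds on is easy to sketch for the model problem y'' = f(x) with Dirichlet boundary conditions; this shows the baseline method only, not the authors' improved truncation error or the deferred correction step.

    ```python
    import numpy as np
    from scipy.linalg import solve_banded

    def solve_bvp_fd(f, a, b, ya, yb, n=100):
        """Classical second-order finite differences for y'' = f(x),
        y(a) = ya, y(b) = yb, on a uniform mesh (tridiagonal system)."""
        x = np.linspace(a, b, n + 1)
        h = (b - a) / n
        # Interior equations: (y[i-1] - 2 y[i] + y[i+1]) = h^2 f(x[i])
        rhs = f(x[1:-1]) * h * h
        rhs[0] -= ya            # move known boundary values to the RHS
        rhs[-1] -= yb
        ab = np.zeros((3, n - 1))
        ab[0, 1:] = 1.0         # superdiagonal
        ab[1, :] = -2.0         # diagonal
        ab[2, :-1] = 1.0        # subdiagonal
        y = solve_banded((1, 1), ab, rhs)
        return x, np.concatenate([[ya], y, [yb]])

    # Example: y'' = -sin(x) on [0, pi]; the exact solution is y = sin(x)
    x, y = solve_bvp_fd(lambda t: -np.sin(t), 0.0, np.pi, 0.0, 0.0)
    print(np.max(np.abs(y - np.sin(x))))   # O(h^2) error
    ```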

  10. Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method

    ERIC Educational Resources Information Center

    Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey

    2013-01-01

    Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…

  11. Accurate Parameter Estimation for Unbalanced Three-Phase System

    PubMed Central

    Chen, Yuan

    2014-01-01

    Smart grid is an intelligent power generation and control console in modern electricity networks, where the unbalanced three-phase power system is the commonly used model. Here, parameter estimation for this system is addressed. After converting the three-phase waveforms into a pair of orthogonal signals via the αβ-transformation, the nonlinear least squares (NLS) estimator is developed for accurately finding the frequency, phase, and voltage parameters. The estimator is realized by the Newton-Raphson scheme, whose global convergence is studied in this paper. Computer simulations show that the mean square error performance of the NLS method can attain the Cramér-Rao lower bound. Moreover, our proposal provides more accurate frequency estimation when compared with the complex least mean square (CLMS) and augmented CLMS. PMID:25162056
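
    The αβ-transformation step is standard (the Clarke transformation) and can be sketched directly; the paper's NLS/Newton-Raphson fit is not reproduced here. The frequency read-out below via the phasor's phase slope is our own simple stand-in, not the paper's estimator.

    ```python
    import numpy as np

    def clarke_transform(va, vb, vc):
        """Amplitude-invariant alpha-beta (Clarke) transformation: maps
        three-phase samples onto a pair of orthogonal signals."""
        alpha = (2.0 * va - vb - vc) / 3.0
        beta = (vb - vc) / np.sqrt(3.0)
        return alpha, beta

    # Example: a balanced 50 Hz system gives a rotating phasor alpha + j*beta
    t = np.arange(0, 0.04, 1e-4)
    w = 2 * np.pi * 50.0
    va = np.cos(w * t)
    vb = np.cos(w * t - 2 * np.pi / 3)
    vc = np.cos(w * t + 2 * np.pi / 3)
    alpha, beta = clarke_transform(va, vb, vc)
    # The slope of the phasor's unwrapped phase recovers the frequency
    phase = np.unwrap(np.angle(alpha + 1j * beta))
    print(np.polyfit(t, phase, 1)[0] / (2 * np.pi))   # ~50.0 Hz
    ```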

  12. Accurate parameter estimation for unbalanced three-phase system.

    PubMed

    Chen, Yuan; So, Hing Cheung

    2014-01-01

    Smart grid is an intelligent power generation and control console in modern electricity networks, where the unbalanced three-phase power system is the commonly used model. Here, parameter estimation for this system is addressed. After converting the three-phase waveforms into a pair of orthogonal signals via the αβ-transformation, the nonlinear least squares (NLS) estimator is developed for accurately finding the frequency, phase, and voltage parameters. The estimator is realized by the Newton-Raphson scheme, whose global convergence is studied in this paper. Computer simulations show that the mean square error performance of the NLS method can attain the Cramér-Rao lower bound. Moreover, our proposal provides more accurate frequency estimation when compared with the complex least mean square (CLMS) and augmented CLMS.

  13. An Accurate Link Correlation Estimator for Improving Wireless Protocol Performance

    PubMed Central

    Zhao, Zhiwei; Xu, Xianghua; Dong, Wei; Bu, Jiajun

    2015-01-01

    Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation. PMID:25686314

  14. An accurate link correlation estimator for improving wireless protocol performance.

    PubMed

    Zhao, Zhiwei; Xu, Xianghua; Dong, Wei; Bu, Jiajun

    2015-02-12

    Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation.

  15. Accurate estimation of sigma(exp 0) using AIRSAR data

    NASA Technical Reports Server (NTRS)

    Holecz, Francesco; Rignot, Eric

    1995-01-01

    During recent years signature analysis, classification, and modeling of Synthetic Aperture Radar (SAR) data as well as estimation of geophysical parameters from SAR data have received a great deal of interest. An important requirement for the quantitative use of SAR data is the accurate estimation of the backscattering coefficient sigma(exp 0). In terrain with relief variations, radar signals are distorted due to the projection of the scene topography into the slant range-Doppler plane. The effect of these variations is to change the physical size of the scattering area, leading to errors in the radar backscatter values and incidence angle. For this reason the local incidence angle, derived from sensor position and Digital Elevation Model (DEM) data, must always be considered. Especially in the airborne case, the antenna gain pattern can be an additional source of radiometric error, because the radar look angle is not known precisely as a result of the aircraft motions and the local surface topography. Consequently, radiometric distortions due to the antenna gain pattern must also be corrected for each resolution cell, by taking into account aircraft displacements (position and attitude) and the position of the backscatter element, defined by the DEM data. In this paper, a method to derive an accurate estimation of the backscattering coefficient using NASA/JPL AIRSAR data is presented. The results are evaluated in terms of geometric accuracy, radiometric variations of sigma(exp 0), and precision of the estimated forest biomass.

  16. Accurate Satellite-Derived Estimates of Tropospheric Ozone Radiative Forcing

    NASA Technical Reports Server (NTRS)

    Joiner, Joanna; Schoeberl, Mark R.; Vasilkov, Alexander P.; Oreopoulos, Lazaros; Platnick, Steven; Livesey, Nathaniel J.; Levelt, Pieternel F.

    2008-01-01

    Estimates of the radiative forcing due to anthropogenically-produced tropospheric O3 are derived primarily from models. Here, we use tropospheric ozone and cloud data from several instruments in the A-train constellation of satellites as well as information from the GEOS-5 Data Assimilation System to accurately estimate the instantaneous radiative forcing from tropospheric O3 for January and July 2005. We improve upon previous estimates of tropospheric ozone mixing ratios from a residual approach using the NASA Earth Observing System (EOS) Aura Ozone Monitoring Instrument (OMI) and Microwave Limb Sounder (MLS) by incorporating cloud pressure information from OMI. Since we cannot distinguish between natural and anthropogenic sources with the satellite data, our estimates reflect the total forcing due to tropospheric O3. We focus specifically on the magnitude and spatial structure of the cloud effect on both the short- and long-wave radiative forcing. The estimates presented here can be used to validate present-day O3 radiative forcing produced by models.

  17. Accurate estimators of correlation functions in Fourier space

    NASA Astrophysics Data System (ADS)

    Sefusatti, E.; Crocce, M.; Scoccimarro, R.; Couchman, H. M. P.

    2016-08-01

    Efficient estimators of Fourier-space statistics for large numbers of objects rely on fast Fourier transforms (FFTs), which are affected by aliasing from unresolved small-scale modes due to the finite FFT grid. Aliasing takes the form of a sum over images, each of them corresponding to the Fourier content displaced by increasing multiples of the sampling frequency of the grid. These spurious contributions limit the accuracy in the estimation of Fourier-space statistics, and are typically ameliorated by simultaneously increasing grid size and discarding high-frequency modes. This results in inefficient estimates for, e.g., the power spectrum when the desired systematic biases are well below the per cent level. We show that using interlaced grids removes odd images, which include the dominant contribution to aliasing. In addition, we discuss the choice of interpolation kernel used to define density perturbations on the FFT grid and demonstrate that using higher-order interpolation kernels than the standard Cloud-In-Cell algorithm results in significant reduction of the remaining images. We show that combining fourth-order interpolation with interlacing gives very accurate Fourier amplitudes and phases of density perturbations. This results in power spectrum and bispectrum estimates that have systematic biases below 0.01 per cent all the way to the Nyquist frequency of the grid, thus maximizing the use of unbiased Fourier coefficients for a given grid size and greatly reducing systematics for applications to large cosmological data sets.

  18. Motion Estimation System Utilizing Point Cloud Registration

    NASA Technical Reports Server (NTRS)

    Chen, Qi (Inventor)

    2016-01-01

    A system and method for estimating the motion of a machine are disclosed. The method may include determining a first point cloud and a second point cloud corresponding to an environment in a vicinity of the machine. The method may further include generating a first extended Gaussian image (EGI) for the first point cloud and a second EGI for the second point cloud. The method may further include determining a first EGI segment based on the first EGI and a second EGI segment based on the second EGI. The method may further include determining a first two-dimensional distribution for points in the first EGI segment and a second two-dimensional distribution for points in the second EGI segment. The method may further include estimating motion of the machine based on the first and second two-dimensional distributions.

  19. Accurate heart rate estimation from camera recording via MUSIC algorithm.

    PubMed

    Fouladi, Seyyed Hamed; Balasingham, Ilangko; Ramstad, Tor Audun; Kansanen, Kimmo

    2015-01-01

    In this paper, we propose an algorithm to extract the heart rate frequency from video camera recordings using the MUltiple SIgnal Classification (MUSIC) algorithm. This leads to improved accuracy of the estimated heart rate frequency in cases where the performance is limited by the number of samples and the frame rate. Monitoring vital signs remotely can be exploited for both non-contact physiological and psychological diagnosis. The color variation recorded by ordinary cameras is used for heart rate monitoring. The orthogonality between the signal space and the noise space is used to find the heart rate frequency more accurately than traditional methods. It is shown via experimental results that the limitation of previous methods can be overcome by using subspace methods. PMID:26738015
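
    A minimal MUSIC sketch for a single real tone, to illustrate the subspace idea (the steering vectors of the true frequency are orthogonal to the noise subspace of the sample correlation matrix). The snapshot length, subspace dimension and search grid below are arbitrary choices, not the paper's settings.

    ```python
    import numpy as np

    def music_frequency(x, fs, p=2, grid=None):
        """Estimate the dominant frequency of a real sinusoid in noise
        via the MUSIC pseudospectrum (signal-subspace dimension p: a
        real tone occupies two complex exponentials)."""
        n = len(x)
        m = n // 3                       # correlation-matrix order (heuristic)
        # Sample correlation matrix from overlapping snapshots
        snaps = np.array([x[i:i + m] for i in range(n - m)])
        R = snaps.T @ snaps / len(snaps)
        w, v = np.linalg.eigh(R)         # eigenvalues in ascending order
        noise = v[:, :m - p]             # noise subspace
        if grid is None:
            grid = np.linspace(0.1, fs / 2 - 0.1, 2000)
        k = np.arange(m)
        pseudo = []
        for f in grid:
            a = np.exp(2j * np.pi * f * k / fs)   # steering vector
            pseudo.append(1.0 / np.linalg.norm(noise.T @ a) ** 2)
        return grid[int(np.argmax(pseudo))]

    # Example: 1.2 Hz "heart rate" tone sampled at 30 fps for 10 s
    fs = 30.0
    t = np.arange(0, 10, 1 / fs)
    rng = np.random.default_rng(2)
    sig = np.sin(2 * np.pi * 1.2 * t) + 0.5 * rng.standard_normal(len(t))
    print(music_frequency(sig, fs))   # ~1.2 Hz (72 bpm)
    ```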

  20. Accurate Orientation Estimation Using AHRS under Conditions of Magnetic Distortion

    PubMed Central

    Yadav, Nagesh; Bleakley, Chris

    2014-01-01

    Low cost, compact attitude heading reference systems (AHRS) are now being used to track human body movements in indoor environments by estimation of the 3D orientation of body segments. In many of these systems, heading estimation is achieved by monitoring the strength of the Earth's magnetic field. However, the Earth's magnetic field can be locally distorted due to the proximity of ferrous and/or magnetic objects. Herein, we propose a novel method for accurate 3D orientation estimation using an AHRS, comprised of an accelerometer, gyroscope and magnetometer, under conditions of magnetic field distortion. The system performs online detection and compensation for magnetic disturbances, due to, for example, the presence of ferrous objects. The magnetic distortions are detected by exploiting variations in magnetic dip angle, relative to the gravity vector, and in magnetic strength. We investigate and show the advantages of using both magnetic strength and magnetic dip angle for detecting the presence of magnetic distortions. The correction method is based on a particle filter, which performs the correction using an adaptive cost function and by adapting the variance during particle resampling, so as to place more emphasis on the results of dead reckoning of the gyroscope measurements and less on the magnetometer readings. The proposed method was tested in an indoor environment in the presence of various magnetic distortions and under various accelerations (up to 3 g). In the experiments, the proposed algorithm achieves <2° static peak-to-peak error and <5° dynamic peak-to-peak error, significantly outperforming previous methods. PMID:25347584

  1. Naïve Point Estimation

    ERIC Educational Resources Information Center

    Lindskog, Marcus; Winman, Anders; Juslin, Peter

    2013-01-01

    The capacity of short-term memory is a key constraint when people make online judgments requiring them to rely on samples retrieved from memory (e.g., Dougherty & Hunter, 2003). In this article, the authors compare 2 accounts of how people use knowledge of statistical distributions to make point estimates: either by retrieving precomputed…

  2. Linear pose estimation from points or lines

    NASA Technical Reports Server (NTRS)

    Ansar, A.; Daniilidis, K.

    2002-01-01

    We present a general framework which allows for a novel set of linear solutions to the pose estimation problem for both n points and n lines. We present a number of simulations which compare our results to two other recent linear algorithms as well as to iterative approaches.

  3. Large manual pointing errors, but accurate verbal reports, for indications of target azimuth

    PubMed Central

    Philbeck, John; Sargent, Jesse; Arthur, Joeanna; Dopkins, Steve

    2008-01-01

    Many tasks have been used to probe human directional knowledge, but relatively little is known about the comparative merits of different means of indicating target azimuth. Few studies have compared action-based versus non-action-based judgments for targets encircling the observer. This comparison promises to illuminate not only the perception of azimuths in the front and rear hemispaces, but also the frames of reference underlying various azimuth judgments, and ultimately their neural underpinnings. We compared a response in which participants aimed a pointer at a nearby target, with verbal azimuth estimates. Target locations were distributed between 20 and 340 deg. Non-visual pointing responses exhibited large constant errors (up to −32 deg) that tended to increase with target eccentricity. Pointing with eyes open also showed large errors (up to −21 deg). In striking contrast, verbal reports were highly accurate, with constant errors rarely exceeding ±5 deg. Under our testing conditions, these results are not likely to stem from differences in perception-based vs. action-based responses, but instead reflect the frames of reference underlying the pointing and verbal responses. When participants used the pointer to match the egocentric target azimuth rather than the exocentric target azimuth relative to the pointer, errors were reduced. PMID:18546661

  4. Fast and Accurate Learning When Making Discrete Numerical Estimates.

    PubMed

    Sanborn, Adam N; Beierholm, Ulrik R

    2016-04-01

    Many everyday estimation tasks have an inherently discrete nature, whether the task is counting objects (e.g., a number of paint buckets) or estimating discretized continuous variables (e.g., the number of paint buckets needed to paint a room). While Bayesian inference is often used for modeling estimates made along continuous scales, discrete numerical estimates have not received as much attention, despite their common everyday occurrence. Using two tasks, a numerosity task and an area estimation task, we invoke Bayesian decision theory to characterize how people learn discrete numerical distributions and make numerical estimates. Across three experiments with novel stimulus distributions we found that participants fell between two common decision functions for converting their uncertain representation into a response: drawing a sample from their posterior distribution and taking the maximum of their posterior distribution. While this was consistent with the decision function found in previous work using continuous estimation tasks, surprisingly the prior distributions learned by participants in our experiments were much more adaptive: When making continuous estimates, participants have required thousands of trials to learn bimodal priors, but in our tasks participants learned discrete bimodal and even discrete quadrimodal priors within a few hundred trials. This makes discrete numerical estimation tasks good testbeds for investigating how people learn and make estimates. PMID:27070155
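
    The two decision functions the study compares can be stated in a few lines. The discrete posterior below is made up purely for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # A hypothetical discrete posterior over counts 0..9
    values = np.arange(10)
    posterior = np.array([1, 2, 4, 8, 12, 8, 4, 2, 1, 1], dtype=float)
    posterior /= posterior.sum()

    # Decision function 1: draw a sample from the posterior
    sampled_estimate = rng.choice(values, p=posterior)

    # Decision function 2: take the maximum (MAP) of the posterior
    map_estimate = values[np.argmax(posterior)]

    print(sampled_estimate, map_estimate)
    # MAP always returns 4 here, while posterior sampling reproduces
    # trial-to-trial response variability; participants fell in between.
    ```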

  5. Fast and Accurate Learning When Making Discrete Numerical Estimates.

    PubMed

    Sanborn, Adam N; Beierholm, Ulrik R

    2016-04-01

    Many everyday estimation tasks have an inherently discrete nature, whether the task is counting objects (e.g., a number of paint buckets) or estimating discretized continuous variables (e.g., the number of paint buckets needed to paint a room). While Bayesian inference is often used for modeling estimates made along continuous scales, discrete numerical estimates have not received as much attention, despite their common everyday occurrence. Using two tasks, a numerosity task and an area estimation task, we invoke Bayesian decision theory to characterize how people learn discrete numerical distributions and make numerical estimates. Across three experiments with novel stimulus distributions we found that participants fell between two common decision functions for converting their uncertain representation into a response: drawing a sample from their posterior distribution and taking the maximum of their posterior distribution. While this was consistent with the decision function found in previous work using continuous estimation tasks, surprisingly the prior distributions learned by participants in our experiments were much more adaptive: When making continuous estimates, participants have required thousands of trials to learn bimodal priors, but in our tasks participants learned discrete bimodal and even discrete quadrimodal priors within a few hundred trials. This makes discrete numerical estimation tasks good testbeds for investigating how people learn and make estimates.

  6. Fast and Accurate Learning When Making Discrete Numerical Estimates

    PubMed Central

    Sanborn, Adam N.; Beierholm, Ulrik R.

    2016-01-01

    Many everyday estimation tasks have an inherently discrete nature, whether the task is counting objects (e.g., a number of paint buckets) or estimating discretized continuous variables (e.g., the number of paint buckets needed to paint a room). While Bayesian inference is often used for modeling estimates made along continuous scales, discrete numerical estimates have not received as much attention, despite their common everyday occurrence. Using two tasks, a numerosity task and an area estimation task, we invoke Bayesian decision theory to characterize how people learn discrete numerical distributions and make numerical estimates. Across three experiments with novel stimulus distributions we found that participants fell between two common decision functions for converting their uncertain representation into a response: drawing a sample from their posterior distribution and taking the maximum of their posterior distribution. While this was consistent with the decision function found in previous work using continuous estimation tasks, surprisingly the prior distributions learned by participants in our experiments were much more adaptive: When making continuous estimates, participants have required thousands of trials to learn bimodal priors, but in our tasks participants learned discrete bimodal and even discrete quadrimodal priors within a few hundred trials. This makes discrete numerical estimation tasks good testbeds for investigating how people learn and make estimates. PMID:27070155

  7. Bioaccessibility tests accurately estimate bioavailability of lead to quail

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb, we incorporated Pb-contaminated soils or Pb acetate into diets for Japanese quail (Coturnix japonica), fed the quail for 15 days, and ...

  8. BIOACCESSIBILITY TESTS ACCURATELY ESTIMATE BIOAVAILABILITY OF LEAD TO QUAIL

    EPA Science Inventory

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contami...

  9. Triple point of e-deuterium as an accurate thermometric fixed point

    SciTech Connect

    Pavese, F.; McConville, G.T.

    1986-01-01

    The triple point of deuterium (18.7 K) is the only possibility for excluding vapor pressure measurements in the definition of a temperature scale based on fixed points between 13.81 and 24.562 K. This paper reports an investigation made at the Istituto di Metrologia and Mound Laboratory, using extremely pure deuterium directly sealed at the production plant into small metal cells. The large contamination by HD of commercially available gas, which cannot be accounted and corrected for due to its increase in handling, was found to be very stable with time after sealing in IMGC cells. HD contamination can be limited to less than 100 ppm in Monsanto cells, both with n-D2 and e-D2, when filled directly from the thermal diffusion column and sealed at the factory. e-D2 requires a special deuterated catalyst. The triple point temperature of e-D2 has been determined to be: T(NPL-IPTS-68) = 18.7011 ± 0.002 K. 20 refs., 3 figs., 2 tabs.

  10. Does more accurate exposure prediction necessarily improve health effect estimates?

    PubMed

    Szpiro, Adam A; Paciorek, Christopher J; Sheppard, Lianne

    2011-09-01

    A unique challenge in air pollution cohort studies and similar applications in environmental epidemiology is that exposure is not measured directly at subjects' locations. Instead, pollution data from monitoring stations at some distance from the study subjects are used to predict exposures, and these predicted exposures are used to estimate the health effect parameter of interest. It is usually assumed that minimizing the error in predicting the true exposure will improve health effect estimation. We show in a simulation study that this is not always the case. We interpret our results in light of recently developed statistical theory for measurement error, and we discuss implications for the design and analysis of epidemiologic research.

  11. Accurate feature detection and estimation using nonlinear and multiresolution analysis

    NASA Astrophysics Data System (ADS)

    Rudin, Leonid; Osher, Stanley

    1994-11-01

    A program for feature detection and estimation using nonlinear and multiscale analysis was completed. The state-of-the-art edge detection was combined with multiscale restoration (as suggested by the first author) and robust results in the presence of noise were obtained. Successful applications to numerous images of interest to DOD were made. Also, a new market in the criminal justice field was developed, based in part on this work.

  12. Simulation model accurately estimates total dietary iodine intake.

    PubMed

    Verkaik-Kloosterman, Janneke; van 't Veer, Pieter; Ocké, Marga C

    2009-07-01

    One problem with estimating iodine intake is the lack of detailed data about the discretionary use of iodized kitchen salt and the iodization of industrially processed foods. To be able to take these uncertainties into account in estimating iodine intake, a simulation model combining deterministic and probabilistic techniques was developed. Data from the Dutch National Food Consumption Survey (1997-1998) and an update of the Food Composition database were used to simulate 3 different scenarios: Dutch iodine legislation until July 2008, Dutch iodine legislation after July 2008, and a potential future situation. Results from studies measuring iodine excretion during the former legislation are comparable with the iodine intakes estimated with our model. For both the former and current legislation, iodine intake was adequate for a large part of the Dutch population, but some young children (<5%) were at risk of intakes that were too low. In the scenario of a potential future situation using lower salt iodine levels, the percentage of the Dutch population with intakes that were too low increased (almost 10% of young children). To keep iodine intakes adequate, salt iodine levels should not be decreased unless many more foods contain iodized salt. Our model should be useful in predicting the effects of food reformulation or fortification on habitual nutrient intakes.

  13. The Effect of Lidar Point Density on LAI Estimation

    NASA Astrophysics Data System (ADS)

    Cawse-Nicholson, K.; van Aardt, J. A.; Romanczyk, P.; Kelbe, D.; Bandyopadhyay, M.; Yao, W.; Krause, K.; Kampe, T. U.

    2013-12-01

    Leaf Area Index (LAI) is an important measure of forest health, biomass and carbon exchange, and is most commonly defined as the ratio of the leaf area to ground area. LAI is understood over large spatial scales and describes leaf properties over an entire forest, thus airborne imagery is ideal for capturing such data. Spectral metrics such as the normalized difference vegetation index (NDVI) have been used in the past for LAI estimation, but these metrics may saturate for high LAI values. Light detection and ranging (lidar) is an active remote sensing technology that emits light (most often at the wavelength 1064 nm) and uses the return time to calculate the distance to intercepted objects. This yields information on three-dimensional structure and shape, which has been shown in recent studies to yield more accurate LAI estimates than NDVI. However, although lidar is a promising alternative for LAI estimation, minimum acquisition parameters (e.g. point density) required for accurate LAI retrieval are not yet well known. The objective of this study was to determine the minimum number of points per square meter required to describe the LAI measurements taken in-field. As part of a larger data collect, discrete lidar data were acquired by Kucera International Inc. over the Hemlock-Canadice State Forest, NY, USA in September 2012. The Leica ALS60 obtained a point density of 12 points per square meter and an effective ground sampling distance (GSD) of 0.15 m. Up to three returns with intensities were recorded per pulse. As part of the same experiment, an AccuPAR LP-80 was used to collect LAI estimates at 25 sites on the ground. Sites were spaced approximately 80 m apart and nine measurements were made in a grid pattern within a 20 x 20 m site. Dominant species include Hemlock, Beech, Sugar Maple and Oak. This study has the benefit of very high-density data, which will enable a detailed map of intra-forest LAI. Understanding LAI at fine scales may be particularly useful

  14. Bioaccessibility tests accurately estimate bioavailability of lead to quail

    USGS Publications Warehouse

    Beyer, W. Nelson; Basta, Nicholas T; Chaney, Rufus L.; Henry, Paula F.; Mosby, David; Rattner, Barnett A.; Scheckel, Kirk G.; Sprague, Dan; Weber, John

    2016-01-01

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from five Pb-contaminated Superfund sites had relative bioavailabilities from 33%-63%, with a mean of about 50%. Treatment of two of the soils with phosphorus significantly reduced the bioavailability of Pb. Bioaccessibility of Pb in the test soils was then measured in six in vitro tests and regressed on bioavailability. They were: the “Relative Bioavailability Leaching Procedure” (RBALP) at pH 1.5, the same test conducted at pH 2.5, the “Ohio State University In vitro Gastrointestinal” method (OSU IVG), the “Urban Soil Bioaccessible Lead Test”, the modified “Physiologically Based Extraction Test” and the “Waterfowl Physiologically Based Extraction Test.” All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the RBALP pH 2.5 and OSU IVG tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%). Additional Pb was associated with P (chloropyromorphite, hydroxypyromorphite and tertiary Pb phosphate), and with Pb carbonates, leadhillite (a lead sulfate carbonate hydroxide), and Pb sulfide. The formation of chloropyromorphite reduced the bioavailability of Pb and the amendment of Pb-contaminated soils with P may be a thermodynamically favored means to sequester Pb.

  15. Analysis of Measurement Error and Estimator Shape in Three-Point Hydraulic Gradient Estimators

    NASA Astrophysics Data System (ADS)

    McKenna, S. A.; Wahi, A. K.

    2003-12-01

    Three spatially separated measurements of head provide a means of estimating the magnitude and orientation of the hydraulic gradient. Previous work with three-point estimators has focused on the effect of the size (area) of the three-point estimator and measurement error on the final estimates of the gradient magnitude and orientation in laboratory and field studies (Mizell, 1980; Silliman and Frost, 1995; Silliman and Mantz, 2000; Ruskauff and Rumbaugh, 1996). However, a systematic analysis of the combined effects of measurement error, estimator shape and estimator orientation relative to the gradient orientation has not previously been conducted. Monte Carlo simulation with an underlying assumption of a homogeneous transmissivity field is used to examine the effects of uncorrelated measurement error on a series of eleven different three-point estimators having the same size but different shapes, as a function of the orientation of the true gradient. Results show that the variance in the estimates of both the magnitude and the orientation increases linearly with the increase in measurement error, in agreement with the results of stochastic theory for estimators that are small relative to the correlation length of transmissivity (Mizell, 1980). Three-point estimator shapes with base to height ratios between 0.5 and 5.0 provide accurate estimates of magnitude and orientation across all orientations of the true gradient. As an example, these results are applied to data collected from a monitoring network of 25 wells at the WIPP site during two different time periods. The simulation results are used to reduce the set of all possible combinations of three wells to those combinations with acceptable measurement errors relative to the amount of head drop across the estimator and base to height ratios between 0.5 and 5.0. These limitations reduce the set of all possible well combinations by 98 percent and show that size alone as defined by triangle area is not a valid

  16. Bioaccessibility tests accurately estimate bioavailability of lead to quail.

    PubMed

    Beyer, W Nelson; Basta, Nicholas T; Chaney, Rufus L; Henry, Paula F P; Mosby, David E; Rattner, Barnett A; Scheckel, Kirk G; Sprague, Daniel T; Weber, John S

    2016-09-01

    Hazards of soil-borne lead (Pb) to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, the authors measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from 5 Pb-contaminated Superfund sites had relative bioavailabilities from 33% to 63%, with a mean of approximately 50%. Treatment of 2 of the soils with phosphorus (P) significantly reduced the bioavailability of Pb. Bioaccessibility of Pb in the test soils was then measured in 6 in vitro tests and regressed on bioavailability: the relative bioavailability leaching procedure at pH 1.5, the same test conducted at pH 2.5, the Ohio State University in vitro gastrointestinal method, the urban soil bioaccessible lead test, the modified physiologically based extraction test, and the waterfowl physiologically based extraction test. All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the relative bioavailability leaching procedure at pH 2.5 and Ohio State University in vitro gastrointestinal tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%). Additional Pb was associated with P (chloropyromorphite, hydroxypyromorphite, and tertiary Pb phosphate) and with Pb carbonates, leadhillite (a lead sulfate carbonate hydroxide), and Pb sulfide. The formation of chloropyromorphite reduced the bioavailability of Pb, and the amendment of Pb-contaminated soils with P may be a thermodynamically favored means to sequester Pb. Environ Toxicol Chem 2016;35:2311-2319. Published 2016 Wiley Periodicals Inc. on behalf of

  17. Comparison of methods for accurate end-point detection of potentiometric titrations

    NASA Astrophysics Data System (ADS)

    Villela, R. L. A.; Borges, P. P.; Vyskočil, L.

    2015-01-01

    Detection of the end point in potentiometric titrations has wide application in experiments that demand very low measurement uncertainties, mainly for certifying reference materials. Simulations of experimental coulometric titration data and a consequent error analysis of the end-point values were conducted using a programming code. These simulations revealed that the Levenberg-Marquardt method is in general more accurate than the traditional second-derivative technique currently used for end-point detection in potentiometric titrations. The performance of the methods is compared and presented in this paper.
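
    The traditional second-derivative baseline is easy to sketch: the end point is taken where d²E/dV² crosses zero, i.e., at the inflection of the titration curve. The synthetic tanh-shaped curve below is hypothetical, and the Levenberg-Marquardt fit the paper recommends is not shown.

    ```python
    import numpy as np

    def end_point_second_derivative(volume, potential):
        """Second-derivative end-point detection: locate the zero
        crossing of d2E/dV2 near the steepest part of the curve,
        refined by linear interpolation between grid points."""
        d1 = np.gradient(potential, volume)
        d2 = np.gradient(d1, volume)
        i = np.argmax(d1)                     # steepest point of the curve
        j = i if d2[i] > 0 else i - 1         # bracket the sign change
        return volume[j] - d2[j] * (volume[j + 1] - volume[j]) / (d2[j + 1] - d2[j])

    # Example: synthetic sigmoidal titration curve, end point at 25.00 mL
    v = np.linspace(24.0, 26.0, 201)
    e = 400.0 * np.tanh((v - 25.0) / 0.05)    # mV, hypothetical shape
    print(end_point_second_derivative(v, e))  # ~25.00
    ```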

  18. Naïve point estimation.

    PubMed

    Lindskog, Marcus; Winman, Anders; Juslin, Peter

    2013-05-01

    The capacity of short-term memory is a key constraint when people make online judgments requiring them to rely on samples retrieved from memory (e.g., Dougherty & Hunter, 2003). In this article, the authors compare 2 accounts of how people use knowledge of statistical distributions to make point estimates: either by retrieving precomputed large-sample representations or by retrieving small samples of similar observations post hoc at the time of judgment, as constrained by short-term memory capacity (the naïve sampling model: Juslin, Winman, & Hansson, 2007). Results from four experiments support the predictions by the naïve sampling model, including that participants sometimes guess values that they, when probed, demonstrably know have the lowest probability of occurring. Experiment 1 also demonstrated the operations of an unpredicted recognition-based inference. Computational modeling also incorporating this process demonstrated that the data from all 4 experiments were better predicted by assuming a post hoc sampling process constrained by short-term memory capacity than by assuming abstraction of large-sample representations of the distribution. PMID:22905935

  19. Estimating Aircraft Heading Based on Laserscanner Derived Point Clouds

    NASA Astrophysics Data System (ADS)

    Koppanyi, Z.; Toth, C. K.

    2015-03-01

    Using LiDAR sensors for tracking and monitoring an operating aircraft is a new application. In this paper, we present data processing methods to estimate the heading of a taxiing aircraft using laser point clouds. During the data acquisition, a Velodyne HDL-32E laser scanner tracked a moving Cessna 172 airplane. The point clouds captured at different times were used for heading estimation. After addressing the problem and specifying the equation of motion to reconstruct the aircraft point cloud from the consecutive scans, three methods are investigated here. The first requires a reference model to estimate the relative angle from the captured data by fitting different cross-sections (horizontal profiles). In the second approach, the iterative closest point (ICP) method is used between the consecutive point clouds to determine the horizontal translation of the captured aircraft body. Regarding the ICP, three different versions were compared, namely, the ordinary 3D, 3-DoF 3D and 2-DoF 3D ICP. It was found that the 2-DoF 3D ICP provides the best performance. Finally, the last algorithm searches for the unknown heading and velocity parameters by minimizing the volume of the reconstructed plane. The three methods were compared using three test datasets which are distinguished by object-sensor distance, heading and velocity. We found that the ICP algorithm fails at long distances and when the aircraft motion direction is perpendicular to the scan plane, but the first and the third methods give robust and accurate results at 40 m object distance and at ~12 knots for a small Cessna airplane.

  20. Wind profile estimation from point to point laser distortion data

    NASA Technical Reports Server (NTRS)

    Leland, Robert

    1989-01-01

    The author's results on the problem of using laser distortion data to estimate the wind profile along the path of the beam are presented. A new model for the dynamics of the index of refraction in a non-constant wind is developed. The model agrees qualitatively with theoretical predictions for the index of refraction statistics in linear wind shear, and is approximated by the predictions of Taylor's hypothesis in constant wind. A framework for a potential in-flight experiment is presented, and the estimation problem is discussed in a maximum likelihood context.

  1. [A New Method of Accurately Extracting Spectral Values for Discrete Sampling Points].

    PubMed

    Lü, Zhen-zhen; Liu, Guang-ming; Yang, Jin-song

    2015-08-01

    In the establishment of remote sensing information inversion models, the actual measured data of discrete sampling points and the spectrum values of the corresponding pixels of the remote sensing image are used to establish a relation and thus to realize information retrieval. Accurate extraction of spectrum values is therefore very important for establishing the remote sensing inversion model. Converting the target point layer to an ROI (region of interest) and then saving the ROI as ASCII is one of the methods that researchers often use to extract spectral values. Analyzing the coordinates and spectrum values extracted using the original coordinates in ENVI, we found that the extracted and original coordinates were inconsistent and that part of the spectrum values did not belong to the pixel containing the sampling point. An inversion model based on such information cannot truly reflect the relationship between the target properties and the spectral values, so the model is meaningless. We divided each pixel into four equal parts and summarized the rule: only when a sampling point is distributed in the upper left corner of a pixel are the extracted values correct. On this basis, this paper systematically studied the principle of extracting target coordinates and spectral values and developed a new method for extracting the spectral values of the pixel in which a sampling point is located, in the ENVI software environment. Firstly, the coordinates of any of the four corner points of the pixel containing the sampling point are extracted using the sample points' original coordinates in ENVI. Secondly, the partition of the pixel in which a sampling point lies is judged by comparing the absolute values of the differences in longitude and latitude between the original and extracted coordinates. Lastly, all points are adjusted to the upper left corner of their pixels by the symmetry principle, and spectrum values are extracted in the same way as in the first step. The results indicated that the extracted spectrum

  2. Accurate biopsy-needle depth estimation in limited-angle tomography using multi-view geometry

    NASA Astrophysics Data System (ADS)

    van der Sommen, Fons; Zinger, Sveta; de With, Peter H. N.

    2016-03-01

    Recently, compressed-sensing based algorithms have enabled volume reconstruction from projection images acquired over a relatively small angle (θ < 20°). These methods enable accurate depth estimation of surgical tools with respect to anatomical structures. However, they are computationally expensive and time consuming, rendering them unattractive for image-guided interventions. We propose an alternative approach for depth estimation of biopsy needles during image-guided interventions, in which we split the problem into two parts and solve them independently: needle-depth estimation and volume reconstruction. The complete proposed system consists of the previous two steps, preceded by needle extraction. First, we detect the biopsy needle in the projection images and remove it by interpolation. Next, we exploit epipolar geometry to find point-to-point correspondences in the projection images to triangulate the 3D position of the needle in the volume. Finally, we use the interpolated projection images to reconstruct the local anatomical structures and indicate the position of the needle within this volume. For validation of the algorithm, we have recorded a full CT scan of a phantom with an inserted biopsy needle. The performance of our approach ranges from a median error of 2.94 mm for a distributed viewing angle of 1° down to an error of 0.30 mm for an angle larger than 10°. Based on the results of this initial phantom study, we conclude that multi-view geometry offers an attractive alternative to time-consuming iterative methods for the depth estimation of surgical tools during C-arm-based image-guided interventions.

  3. Accurate estimation of motion blur parameters in noisy remote sensing image

    NASA Astrophysics Data System (ADS)

    Shi, Xueyan; Wang, Lin; Shao, Xiaopeng; Wang, Huilin; Tao, Zhong

    2015-05-01

    The relative motion between a remote sensing satellite sensor and objects is one of the most common reasons for remote sensing image degradation. It seriously weakens image data interpretation and information extraction. In practice, the point spread function (PSF) must be estimated first for image restoration. Identifying the motion blur direction and length accurately is crucial for estimating the PSF and restoring the image with precision. In general, the regular light-and-dark stripes in the spectrum can be employed to obtain the parameters by using the Radon transform. However, serious noise in actual remote sensing images often makes the stripes unobvious, so the parameters become difficult to calculate and the error of the result relatively large. In this paper, an improved motion blur parameter identification method for noisy remote sensing images is proposed to solve this problem. The spectrum characteristics of noisy remote sensing images are analyzed first. An interactive image segmentation method based on graph theory, called GrabCut, is adopted to effectively extract the edge of the light center in the spectrum. The motion blur direction is estimated by applying the Radon transform to the segmentation result. In order to reduce random error, a method based on whole-column statistics is used when calculating the blur length. Finally, the Lucy-Richardson algorithm is applied to restore remote sensing images of the moon after estimating the blur parameters. The experimental results verify the effectiveness and robustness of our algorithm.
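
    The direction-finding step the abstract describes (a Radon transform over the stripe pattern of the log spectrum) can be sketched as follows, here selecting the projection angle with the largest variance. The segmentation (GrabCut) and length-estimation steps are omitted, and the reported angle's relation to the motion direction depends on the Radon implementation's convention.

    ```python
    import numpy as np
    from skimage.transform import radon

    def blur_direction(image):
        """Estimate the orientation of the stripes in the log-magnitude
        spectrum: the Radon projection taken along the stripe direction
        has the largest variance."""
        spec = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(image))))
        spec -= spec.mean()
        angles = np.arange(0.0, 180.0, 0.5)
        sinogram = radon(spec, theta=angles, circle=True)
        return angles[int(np.argmax(sinogram.var(axis=0)))]

    # Example: blur a random scene horizontally with a 15-px line kernel
    rng = np.random.default_rng(4)
    scene = rng.random((256, 256))
    kernel = np.zeros((15, 15))
    kernel[7, :] = 1.0 / 15.0
    blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) *
                                   np.fft.fft2(kernel, s=scene.shape)))
    # Prints the stripe angle in the spectrum (convention-dependent);
    # the motion direction is perpendicular to the stripes.
    print(blur_direction(blurred))
    ```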

  4. Zero-Point Calibration for AGN Black-Hole Mass Estimates

    NASA Technical Reports Server (NTRS)

    Peterson, B. M.; Onken, C. A.

    2004-01-01

    We discuss the measurement and associated uncertainties of AGN reverberation-based black-hole masses, since these provide the zero-point calibration for scaling relationships that allow black-hole mass estimates for quasars. We find that reverberation-based mass estimates appear to be accurate to within a factor of about 3.

  5. Modified estimators for the change point in hazard function

    NASA Astrophysics Data System (ADS)

    Karasoy, Durdu; Kadilar, Cem

    2009-07-01

    We propose consistent estimators for the change point in the hazard function by improving the estimators in [A.P. Basu, J.K. Ghosh, S.N. Joshi, On estimating change point in a failure rate, in: S.S. Gupta, J.O. Berger (Eds.), Statistical Decision Theory and Related Topics IV, vol. 2, Springer-Verlag, 1988, pp. 239-252] and [H.T. Nguyen, G.S. Rogers, E.A. Walker, Estimation in change point hazard rate model, Biometrika 71 (1984) 299-304]. By a simulation study, we show that the proposed estimators are more efficient than the original estimators in many cases.

  6. ROM Plus(®): accurate point-of-care detection of ruptured fetal membranes.

    PubMed

    McQuivey, Ross W; Block, Jon E

    2016-01-01

    Accurate and timely diagnosis of rupture of fetal membranes is imperative to inform and guide gestational age-specific interventions to optimize perinatal outcomes and reduce the risk of serious complications, including preterm delivery and infections. The ROM Plus is a rapid, point-of-care, qualitative immunochromatographic diagnostic test that uses a unique monoclonal/polyclonal antibody approach to detect two different proteins found in amniotic fluid at high concentrations: alpha-fetoprotein and insulin-like growth factor binding protein-1. Clinical study results have uniformly demonstrated high diagnostic accuracy and performance characteristics with this point-of-care test that exceeds conventional clinical testing with external laboratory evaluation. The description, indications for use, procedural steps, and laboratory and clinical characterization of this assay are presented in this article. PMID:27274316

  7. What's the Point of a Raster? Advantages of 3D Point Cloud Processing over Raster-Based Methods for Accurate Geomorphic Analysis of High Resolution Topography.

    NASA Astrophysics Data System (ADS)

    Lague, D.

    2014-12-01

    High Resolution Topographic (HRT) datasets are predominantly stored and analyzed as 2D raster grids of elevations (i.e., Digital Elevation Models). Raster grid processing is common in GIS software and benefits from a large library of fast algorithms dedicated to geometrical analysis, drainage network computation and topographic change measurement. Yet, all instruments or methods currently generating HRT datasets (e.g., ALS, TLS, SFM, stereo satellite imagery) natively output 3D unstructured point clouds that are (i) non-regularly sampled, (ii) incomplete (e.g., submerged parts of river channels are rarely measured), and (iii) include 3D elements (e.g., vegetation, vertical features such as river banks or cliffs) that cannot be accurately described in a DEM. Interpolating the raw point cloud onto a 2D grid generally results in a loss of position accuracy and spatial resolution, and in more or less controlled interpolation. Here I demonstrate how studying earth surface topography and processes directly on native 3D point cloud datasets offers several advantages over raster-based methods: point cloud methods preserve the accuracy of the original data, can better handle the evaluation of uncertainty associated with topographic change measurements, and are more suitable for studying vegetation characteristics and steep features of the landscape. In this presentation, I will illustrate and compare point-cloud-based and raster-based workflows with various examples involving ALS, TLS and SFM for the analysis of bank erosion processes in bedrock and alluvial rivers, rockfall statistics (including rockfall volume estimates directly from point clouds) and the interaction of vegetation, hydraulics and sedimentation in salt marshes. These workflows use two recently published algorithms for point cloud classification (CANUPO) and point cloud comparison (M3C2), now implemented in the open source software CloudCompare.

  8. Application of Common Mid-Point Method to Estimate Asphalt Pavement Thickness

    NASA Astrophysics Data System (ADS)

    Zhao, Shan; Al-Aadi, Imad

    2015-04-01

    3-D radar is a multi-array stepped-frequency ground-penetrating radar (GPR) that can measure at very close sampling intervals in both the in-line and cross-line directions. Constructing asphalt layers in accordance with specified thicknesses is crucial for pavement structural capacity and pavement performance. The common mid-point method (CMP) is a multi-offset measurement method that can improve the accuracy of asphalt layer thickness estimation. In this study, the viability of using 3-D radar to predict asphalt concrete pavement thickness with an extended CMP method was investigated. GPR signals were collected on asphalt pavements with various thicknesses. The time-domain resolution of the 3-D radar was improved by applying a zero-padding technique in the frequency domain. The performance of the 3-D radar was then compared to that of an air-coupled horn antenna. The study concluded that 3-D radar can accurately predict asphalt layer thickness using the CMP method when the layer thickness is larger than 0.13 m, and that the limited time-domain resolution of 3-D radar can be addressed by frequency-domain zero-padding. Keywords: asphalt pavement thickness, 3-D radar, stepped-frequency, common mid-point method, zero padding.
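
    The zero-padding trick is easy to demonstrate: appending zeros to the measured frequency response before the inverse FFT yields a time-domain trace sampled on a finer grid. It is an interpolation (no new information is added), but it makes closely spaced layer echoes easier to pick. The stepped-frequency parameters below are illustrative, not those of the radar in the paper.

```python
# Minimal sketch: zero-padding a stepped-frequency response before the
# inverse FFT interpolates the time-domain trace onto a finer grid.
import numpy as np

n_freq, pad_factor = 200, 8      # measured frequency bins, padding multiple
df = 10e6                        # frequency step of the radar sweep (Hz), illustrative
t0 = 2.0e-9                      # true two-way travel time of one reflector (s)

f = np.arange(n_freq) * df
spectrum = np.exp(-2j * np.pi * f * t0)   # ideal single-echo frequency response

trace = np.fft.ifft(spectrum)                                # native sampling
trace_padded = np.fft.ifft(spectrum, n=n_freq * pad_factor)  # zero-padded

dt_native = 1.0 / (n_freq * df)
dt_padded = 1.0 / (n_freq * pad_factor * df)
print(f"sample spacing: {dt_native*1e9:.3f} ns -> {dt_padded*1e9:.4f} ns")
print("picked echo time (s):", np.argmax(np.abs(trace_padded)) * dt_padded)
```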

  9. Confidence of the three-point estimator of frequency drift

    NASA Technical Reports Server (NTRS)

    Weiss, Marc A.; Hackman, Christine

    1993-01-01

    It was shown that a three-point second difference estimator is nearly optimal for estimating frequency drift in many common atomic oscillators. A formula for the uncertainty of this estimate as a function of the integration time and of the Allan variance associated with this integration time is derived.
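
    For concreteness, here is a minimal sketch of a three-point second-difference drift estimate (my notation, consistent with the usual definition rather than quoted from the report): with evenly spaced phase samples x(t), x(t+τ), x(t+2τ) from a clock with fractional frequency drift D, x ≈ x0 + y0·t + (D/2)·t², so the second difference of x divided by τ² recovers D.

```python
# Three-point second-difference drift estimate from evenly spaced phase
# (time-error) samples. With x(t) = x0 + y0*t + (D/2)*t**2, the second
# difference over step tau equals D*tau**2.
import numpy as np

tau = 86400.0                        # integration time: one day (illustrative)
D_true = 1e-15                       # fractional frequency drift per second
rng = np.random.default_rng(0)

t = np.array([0.0, tau, 2 * tau])
x = 0.5 * D_true * t**2 + rng.normal(0, 1e-9, 3)   # phase samples + white noise (s)

D_hat = (x[0] - 2 * x[1] + x[2]) / tau**2
print(f"estimated drift: {D_hat:.3e} /s (true {D_true:.1e} /s)")
```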

  10. A fast and accurate frequency estimation algorithm for sinusoidal signal with harmonic components

    NASA Astrophysics Data System (ADS)

    Hu, Jinghua; Pan, Mengchun; Zeng, Zhidun; Hu, Jiafei; Chen, Dixiang; Tian, Wugang; Zhao, Jianqiang; Du, Qingfa

    2016-10-01

    Frequency estimation is a fundamental problem in many applications, such as traditional vibration measurement, power system supervision, and microelectromechanical system sensor control. In this paper, a fast and accurate frequency estimation algorithm is proposed to deal with the low efficiency of traditional methods. The proposed algorithm consists of coarse and fine frequency estimation steps, and we demonstrate that applying a modified zero-crossing technique to the coarse step (locating the peak of the FFT amplitude spectrum) is more efficient than conventional searching methods. The proposed algorithm therefore requires fewer hardware and software resources and achieves even higher efficiency as the amount of experimental data increases. Experimental results with a modulated magnetic signal show that the root mean square error of frequency estimation is below 0.032 Hz with the proposed algorithm, which has lower computational complexity and better global performance than conventional frequency estimation methods.
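
    The coarse/fine split can be illustrated generically: a coarse estimate from the FFT peak, refined using zero-crossing instants. The paper's "modified zero-crossing technique" differs in detail; the sketch below only shows the general idea and assumes a reasonably clean single-tone signal.

```python
# Illustrative coarse/fine frequency estimation: FFT-peak search, then a
# zero-crossing refinement. Generic sketch, not the paper's exact algorithm.
import numpy as np

fs, n, f_true = 10_000.0, 8192, 123.4
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f_true * t) + 0.02 * np.random.default_rng(1).normal(size=n)

# Coarse step: peak of the FFT amplitude spectrum (resolution fs/n ~ 1.2 Hz).
f_coarse = np.argmax(np.abs(np.fft.rfft(x))) * fs / n

# Fine step: interpolate the first and last zero-crossing instants, count the
# half-periods between them using the coarse estimate, then refine.
s = np.sign(x)
idx = np.nonzero(s[:-1] * s[1:] < 0)[0]
t_cross = lambda i: (i + x[i] / (x[i] - x[i + 1])) / fs   # sub-sample crossing time
span = t_cross(idx[-1]) - t_cross(idx[0])
n_half = np.rint(2 * f_coarse * span)                     # half-periods in the span
f_fine = n_half / (2 * span)

print(f"coarse: {f_coarse:.2f} Hz, fine: {f_fine:.4f} Hz, true: {f_true} Hz")
```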

  11. Development of Classification and Story Building Data for Accurate Earthquake Damage Estimation

    NASA Astrophysics Data System (ADS)

    Sakai, Yuki; Fukukawa, Noriko; Arai, Kensuke

    We investigated a method of developing classification and story data for buildings from a census population database in order to estimate earthquake damage more accurately, especially in urban areas, presuming that the numbers of non-wooden or high-rise buildings correlate with the population. We formulated equations for estimating the numbers of wooden houses, low-to-mid-rise (1-9 story) and high-rise (over 10 story) non-wooden buildings in a 1 km mesh from night-time and daytime population databases, based on the building data we investigated and collected in 20 selected meshes in the Kanto area. The formulated equations estimated the numbers of the three building classes accurately, but in some special cases, such as meshes dominated by apartment blocks, the estimated values differed considerably from the actual values.

  12. Accurate Astrometry and Photometry of Saturated and Coronagraphic Point Spread Functions

    SciTech Connect

    Marois, C; Lafreniere, D; Macintosh, B; Doyon, R

    2006-02-07

    For ground-based adaptive optics point source imaging, differential atmospheric refraction and flexure introduce a small drift of the point spread function (PSF) with time, and seeing and sky transmission variations modify the PSF flux. These effects need to be corrected to properly combine the images and obtain optimal signal-to-noise ratios, accurate relative astrometry and photometry of detected companions as well as precise detection limits. Usually, one can easily correct for these effects by using the PSF core, but this is impossible when high dynamic range observing techniques are used, like coronagraphy with a non-transmissive occulting mask, or if the stellar PSF core is saturated. We present a new technique that can solve these issues by using off-axis satellite PSFs produced by a periodic amplitude or phase mask conjugated to a pupil plane. It will be shown that these satellite PSFs track precisely the PSF position, its Strehl ratio and its intensity and can thus be used to register and to flux normalize the PSF. This approach can be easily implemented in existing adaptive optics instruments and should be considered for future extreme adaptive optics coronagraph instruments and in high-contrast imaging space observatories.

  13. Do We Know Whether Researchers and Reviewers are Estimating Risk and Benefit Accurately?

    PubMed

    Hey, Spencer Phillips; Kimmelman, Jonathan

    2016-10-01

    Accurate estimation of risk and benefit is integral to good clinical research planning, ethical review, and study implementation. Some commentators have argued that various actors in clinical research systems are prone to biased or arbitrary risk/benefit estimation. In this commentary, we suggest the evidence supporting such claims is very limited. Most prior work has imputed risk/benefit beliefs based on past behavior or goals, rather than directly measuring them. We describe an approach - forecast analysis - that would enable direct and effective measurement of the quality of risk/benefit estimation. We then consider some objections and limitations to the forecasting approach. PMID:27197044

  15. On the accurate estimation of gap fraction during daytime with digital cover photography

    NASA Astrophysics Data System (ADS)

    Hwang, Y. R.; Ryu, Y.; Kimm, H.; Macfarlane, C.; Lang, M.; Sonnentag, O.

    2015-12-01

    Digital cover photography (DCP) has emerged as an indirect method to obtain gap fraction accurately. Thus far, however, the intervention of subjectivity, such as determining the camera relative exposure value (REV) and the threshold in the histogram, has hindered computing accurate gap fraction. Here we propose a novel method that enables us to measure gap fraction accurately during daytime under various sky conditions using DCP. The method computes gap fraction from a single unsaturated raw DCP image, corrected for scattering effects by canopies, and a sky image reconstructed from the raw-format image. To test the sensitivity of the derived gap fraction to diverse REVs, solar zenith angles and canopy structures, we took photos at one-hour intervals between sunrise and midday under dense and sparse canopies with REV 0 to -5. The novel method showed little variation in gap fraction across different REVs in both dense and sparse canopies over a diverse range of solar zenith angles. A perforated-panel experiment, used to test the accuracy of the estimated gap fraction, confirmed that the method yields accurate and consistent gap fractions across different hole sizes, gap fractions and solar zenith angles. These findings highlight that the novel method opens new opportunities to estimate gap fraction accurately during daytime from sparse to dense canopies, which will be useful for monitoring LAI precisely and validating satellite remote sensing LAI products efficiently.

  16. Accurate calculation of Stokes drag for point-particle tracking in two-way coupled flows

    NASA Astrophysics Data System (ADS)

    Horwitz, J. A. K.; Mani, A.

    2016-08-01

    In this work, we propose and test a method for calculating Stokes drag applicable to particle-laden fluid flows where two-way momentum coupling is important. In the point-particle formulation, particle dynamics are coupled to fluid dynamics via a source term that appears in the respective momentum equations. When the particle Reynolds number is small and the particle diameter is smaller than the fluid scales, it is common to approximate the momentum coupling source term as the Stokes drag. The Stokes drag force depends on the difference between the undisturbed fluid velocity evaluated at the particle location, and the particle velocity. However, owing to two-way coupling, the fluid velocity is modified in the neighborhood of a particle, relative to its undisturbed value. This causes the computed Stokes drag force to be underestimated in two-way coupled point-particle simulations. We develop estimates for the drag force error as a function of the particle size relative to the grid size. Because the disturbance field created by the particle contaminates the surrounding fluid, correctly calculating the drag force cannot be done solely by direct interpolation of the fluid velocity. Instead, we develop a correction method that calculates the undisturbed fluid velocity from the computed disturbed velocity field by adding an estimate of the velocity disturbance created by the particle. The correction scheme is tested for a particle settling in an otherwise quiescent fluid and is found to reduce the error in computed settling velocity by an order of magnitude compared with common interpolation schemes.
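
    The correction idea can be caricatured in a few lines. Suppose the interpolated fluid velocity at the particle equals the undisturbed velocity plus a fraction c of the slip, where c depends on the particle-to-grid size ratio and is what the paper's analysis provides; inverting that model recovers the undisturbed velocity before the drag is evaluated. The value of c below is illustrative, not from the paper.

```python
# Conceptual sketch: recover the undisturbed fluid velocity from the computed
# (disturbed) one before evaluating Stokes drag. The disturbance model
# u_disturbed = u_undisturbed + c*(v_particle - u_undisturbed) is a stand-in
# for the paper's estimate; c would come from the particle/grid size ratio.
import numpy as np

def stokes_drag(u_fluid, v_particle, d, mu):
    """Stokes drag on a sphere of diameter d in a fluid of viscosity mu."""
    return 3 * np.pi * mu * d * (u_fluid - v_particle)

def corrected_drag(u_disturbed, v_particle, d, mu, c):
    u_undisturbed = (u_disturbed - c * v_particle) / (1.0 - c)
    return stokes_drag(u_undisturbed, v_particle, d, mu)

# Settling particle in quiescent fluid: the self-induced disturbance (here
# c = 0.3 of the slip) makes the uncorrected drag about 30% too small.
mu, d, c = 1e-3, 50e-6, 0.3          # illustrative values only
v, u_dist = -0.01, -0.003            # m/s; u_dist = c*v since the fluid is at rest
print("uncorrected:", stokes_drag(u_dist, v, d, mu))
print("corrected:  ", corrected_drag(u_dist, v, d, mu, c))
```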

  17. Leidenfrost Point and Estimate of the Vapour Layer Thickness

    ERIC Educational Resources Information Center

    Gianino, Concetto

    2008-01-01

    In this article I describe an experiment involving the Leidenfrost phenomenon, which is the long lifetime of a water drop when it is deposited on a metal that is much hotter than the boiling point of water. The experiment was carried out with high-school students. The Leidenfrost point is measured and the heat laws are used to estimate the…

  18. How accurately can we predict the melting points of drug-like compounds?

    PubMed

    Tetko, Igor V; Sushko, Yurii; Novotarskyi, Sergii; Patiny, Luc; Kondratov, Ivan; Petrenko, Alexander E; Charochkina, Larisa; Asiri, Abdullah M

    2014-12-22

    This article contributes a highly accurate model for predicting the melting points (MPs) of medicinal chemistry compounds. The model was developed using the largest published data set, comprising more than 47k compounds. The distributions of MPs in drug-like and drug lead sets showed that >90% of molecules melt within [50,250]°C. The final model achieved an RMSE of less than 33 °C for molecules from this temperature interval, which is the most important for medicinal chemistry users. This performance was achieved using a consensus model that performed calculations to a significantly higher accuracy than the individual models. We found that compounds with reactive and unstable groups were overrepresented among outlying compounds. These compounds could decompose during storage or measurement, thus introducing experimental errors. While filtering the data by removing outliers generally increased the accuracy of individual models, it did not significantly affect the results of the consensus models. None of the three distance-to-model measures we analyzed allowed us to flag molecules whose MP values fell outside the applicability domain of the model. We believe that this negative result and the public availability of data from this article will encourage future studies to develop better approaches to defining the applicability domain of models. The final model, MP data, and identified reactive groups are available online at http://ochem.eu/article/55638.

  20. Optimization of Correlation Kernel Size for Accurate Estimation of Myocardial Contraction and Relaxation

    NASA Astrophysics Data System (ADS)

    Honjo, Yasunori; Hasegawa, Hideyuki; Kanai, Hiroshi

    2012-07-01

    For noninvasive and quantitative measurement of global two-dimensional (2D) heart wall motion, speckle tracking methods have been developed and applied. In these conventional methods, the frame rate is limited to about 200 Hz, corresponding to a sampling period of 5 ms. However, myocardial function during short periods, as obtained by these conventional speckle tracking methods, remains unclear owing to their low temporal and spatial resolution. Moreover, an important parameter, the optimal kernel size, has not been thoroughly investigated. In our previous study, the optimal kernel size was determined in a phantom experiment under a high signal-to-noise ratio (SNR), and the determined optimal kernel size was applied to the in vivo measurement of 2D displacements of the heart wall by block matching using normalized cross-correlation between RF echoes at a high frame rate of 860 Hz, corresponding to a temporal resolution of 1.1 ms. However, estimation under low SNRs and the effects of differences in echo characteristics, i.e., specular reflection and speckle-like echoes, were not considered, and the evaluation of accuracy in the estimation of the strain rate was still insufficient. In this study, the optimal kernel sizes were determined in a phantom experiment under several SNRs, and the myocardial strain rate was then estimated so that myocardial function could be measured at a high frame rate. In the basic experiment, the optimal kernel sizes at depths of 20, 40, 60, and 80 mm yielded similar results, in particular when the SNR was more than 15 dB. Moreover, it was found that the kernel size at the boundary must be set larger than that in the interior. The optimal sizes of the correlation kernel were seven times and four times the size of the point spread function around the boundary and inside the silicone rubber, respectively. To compare the optimal kernel sizes, which were determined in a phantom experiment, with other sizes, the radial strain
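
    The block-matching core of such speckle tracking is compact enough to sketch. The snippet below finds the integer-pixel displacement that maximizes the normalized cross-correlation of a kernel between two frames; the kernel size k is the parameter whose optimum the study investigates (too small is noise-sensitive, too large blurs the motion field). This is a generic illustration, not the authors' implementation.

```python
# Block matching by normalized cross-correlation (NCC): the kernel from
# frame A is searched in frame B; the displacement is the offset that
# maximizes NCC. Generic sketch with synthetic data.
import numpy as np

def ncc(a, b):
    a = a - a.mean(); b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_block(frame_a, frame_b, y, x, k, search):
    """Displacement of the k x k kernel at (y, x) within +/- search pixels."""
    kern = frame_a[y:y + k, x:x + k]
    best, best_dyx = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = frame_b[y + dy:y + dy + k, x + dx:x + dx + k]
            score = ncc(kern, cand)
            if score > best:
                best, best_dyx = score, (dy, dx)
    return best_dyx, best

# Synthetic test: frame_b is frame_a shifted by (2, -1) pixels.
rng = np.random.default_rng(0)
frame_a = rng.normal(size=(64, 64))
frame_b = np.roll(frame_a, (2, -1), axis=(0, 1))
print(match_block(frame_a, frame_b, 20, 20, k=12, search=4))
```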

  1. Children's Use of the Reference Point Strategy for Measurement Estimation

    ERIC Educational Resources Information Center

    Joram, Elana; Gabriele, Anthony J.; Bertheau, Myrna; Gelman, Rochel; Subrahmanyam, Kaveri

    2005-01-01

    Mathematics educators frequently recommend that students use strategies for measurement estimation, such as the reference point or benchmark strategy; however, little is known about the effects of using this strategy on estimation accuracy or representations of standard measurement units. One reason for the paucity of research in this area is that…

  2. Accurate Estimation of the Entropy of Rotation-Translation Probability Distributions.

    PubMed

    Fogolari, Federico; Dongmo Foumthuim, Cedrix Jurgal; Fortuna, Sara; Soler, Miguel Angel; Corazza, Alessandra; Esposito, Gennaro

    2016-01-12

    The estimation of rotational and translational entropies in the context of ligand binding has been the subject of long-standing investigation. The high dimensionality (six) of the problem and the limited amount of sampling often prevent the resolution required to provide accurate estimates by the histogram method. Recently, the nearest-neighbor distance method has been applied to the problem, but the solutions provided either address rotation and translation separately, therefore lacking correlations, or use a heuristic approach. Here we address rotational-translational entropy estimation in the context of nearest-neighbor-based entropy estimation, solve the problem numerically, and provide an exact and an approximate method to estimate the full rotational-translational entropy.
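
    As a reference point, the flat-space nearest-neighbor (Kozachenko-Leonenko) estimator on which such approaches build can be written in a few lines; handling the curved, correlated 6-D rotation-translation space is exactly what the paper adds beyond this. SciPy is assumed.

```python
# Kozachenko-Leonenko nearest-neighbor estimator of differential entropy:
# entropy is inferred from nearest-neighbor distances, avoiding the
# resolution problem of the histogram method in higher dimension.
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gammaln

def kl_entropy(samples):
    """Differential entropy estimate in nats (flat Euclidean space only)."""
    n, d = samples.shape
    # k=2 because the closest "neighbor" of each sample is itself.
    r, _ = cKDTree(samples).query(samples, k=2)
    r1 = r[:, 1]                                    # nearest-neighbor distances
    log_vd = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)   # log unit-ball volume
    return digamma(n) - digamma(1) + log_vd + d * np.mean(np.log(r1))

# Check against a 3-D standard Gaussian, whose entropy is known in closed form.
rng = np.random.default_rng(0)
x = rng.normal(size=(5000, 3))
print(kl_entropy(x), 1.5 * np.log(2 * np.pi * np.e))
```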

  3. Polynomial fitting of DT-MRI fiber tracts allows accurate estimation of muscle architectural parameters.

    PubMed

    Damon, Bruce M; Heemskerk, Anneriet M; Ding, Zhaohua

    2012-06-01

    Fiber curvature is a functionally significant muscle structural property, but its estimation from diffusion-tensor magnetic resonance imaging fiber tracking data may be confounded by noise. The purpose of this study was to investigate the use of polynomial fitting of fiber tracts for improving the accuracy and precision of fiber curvature (κ) measurements. Simulated image data sets were created in order to provide data with known values for κ and pennation angle (θ). Simulations were designed to test the effects of increasing inherent fiber curvature (3.8, 7.9, 11.8 and 15.3 m(-1)), signal-to-noise ratio (50, 75, 100 and 150) and voxel geometry (13.8- and 27.0-mm(3) voxel volume with isotropic resolution; 13.5-mm(3) volume with an aspect ratio of 4.0) on κ and θ measurements. In the originally reconstructed tracts, θ was estimated accurately under most curvature and all imaging conditions studied; however, the estimates of κ were imprecise and inaccurate. Fitting the tracts to second-order polynomial functions provided accurate and precise estimates of κ for all conditions except very high curvature (κ=15.3 m(-1)), while preserving the accuracy of the θ estimates. Similarly, polynomial fitting of in vivo fiber tracking data reduced the κ values of fitted tracts from those of unfitted tracts and did not change the θ values. Polynomial fitting of fiber tracts allows accurate estimation of physiologically reasonable values of κ, while preserving the accuracy of θ estimation.
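
    The fitting idea reduces to elementary calculus: fit the tract to a second-order polynomial and evaluate κ analytically from the fitted derivatives, so voxel-level noise no longer dominates the second derivative. A 2-D sketch follows (real DT-MRI tracts are 3-D curves, handled per coordinate); the arc radius and noise level are illustrative.

```python
# Curvature from a quadratic fit: kappa = |y''| / (1 + y'^2)^(3/2),
# with derivatives taken from the fitted polynomial, not the noisy points.
import numpy as np

def curvature_from_quadratic_fit(s, y):
    a, b, c = np.polyfit(s, y, 2)     # y(s) ~ a*s^2 + b*s + c
    dy = 2 * a * s + b
    d2y = 2 * a
    return np.abs(d2y) / (1 + dy**2) ** 1.5

# Noisy circular arc of radius 0.25 m (true curvature 4 m^-1, close to the
# paper's simulated 3.8 m^-1 condition).
rng = np.random.default_rng(0)
R = 0.25
s = np.linspace(-0.05, 0.05, 40)
y = R - np.sqrt(R**2 - s**2) + rng.normal(0, 2e-4, s.size)
print(curvature_from_quadratic_fit(s, y).mean())   # ~4 m^-1
```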

  4. Skin Temperature Over the Carotid Artery, an Accurate Non-invasive Estimation of Near Core Temperature

    PubMed Central

    Imani, Farsad; Karimi Rouzbahani, Hamid Reza; Goudarzi, Mehrdad; Tarrahi, Mohammad Javad; Ebrahim Soltani, Alireza

    2016-01-01

    Background: During anesthesia, continuous body temperature monitoring is essential, especially in children. Anesthesia can increase the risk of body heat loss three- to four-fold. Hypothermia in children results in increased morbidity and mortality. Since the measurement points of core body temperature are not easily accessible, near-core sites, like the rectum, are used. Objectives: The purpose of this study was to measure the skin temperature over the carotid artery and compare it with rectal temperature, in order to propose a model for accurate estimation of near-core body temperature. Patients and Methods: In total, 124 patients aged 2 - 6 years undergoing elective surgery were selected. The temperatures of the rectum and of the skin over the carotid artery were measured. The patients were then randomly divided into two groups (each including 62 subjects), namely the modeling group (MG) and the validation group (VG). First, in the modeling group, the average temperatures of the rectum and of the skin over the carotid artery were measured separately, and an appropriate model was determined according to the significance of the model's coefficients. The obtained model was used to predict rectal temperature in the second group (VG). The correlation of the predicted values with the real values (the measured rectal temperatures) in the second group was investigated, and the difference in the average values of the two groups was examined for significance. Results: In the modeling group, the average rectal and carotid temperatures were 36.47 ± 0.54°C and 35.45 ± 0.62°C, respectively. The final model was: rectal temperature = 0.561 × carotid temperature + 16.583. The predicted value was calculated based on the regression model and then compared with the measured rectal value, showing no significant difference (P = 0.361). Conclusions: The present study was the first research, in which rectum temperature was compared with that

  5. Mental health disorders among individuals with mental retardation: challenges to accurate prevalence estimates.

    PubMed Central

    Kerker, Bonnie D.; Owens, Pamela L.; Zigler, Edward; Horwitz, Sarah M.

    2004-01-01

    OBJECTIVES: The objectives of this literature review were to assess current challenges to estimating the prevalence of mental health disorders among individuals with mental retardation (MR) and to develop recommendations to improve such estimates for this population. METHODS: The authors identified 200 peer-reviewed articles, book chapters, government documents, or reports from national and international organizations on the mental health status of people with MR. Based on the study's inclusion criteria, 52 articles were included in the review. RESULTS: Available data reveal inconsistent estimates of the prevalence of mental health disorders among those with MR, but suggest that some mental health conditions are more common among these individuals than in the general population. Two main challenges to identifying accurate prevalence estimates were found: (1) health care providers have difficulty diagnosing mental health conditions among individuals with MR; and (2) methodological limitations of previous research inhibit confidence in study results. CONCLUSIONS: Accurate prevalence estimates are necessary to ensure the availability of appropriate treatment services. To this end, health care providers should receive more training regarding the mental health treatment of individuals with MR. Further, government officials should discuss mechanisms of collecting nationally representative data, and the research community should utilize consistent methods with representative samples when studying mental health conditions in this population. PMID:15219798

  6. Accurate estimation of forest carbon stocks by 3-D remote sensing of individual trees.

    PubMed

    Omasa, Kenji; Qiu, Guo Yu; Watanuki, Kenichi; Yoshimi, Kenji; Akiyama, Yukihide

    2003-03-15

    Forests are one of the most important carbon sinks on Earth. However, owing to the complex structure, variable geography, and large area of forests, accurate estimation of forest carbon stocks is still a challenge for both site surveying and remote sensing. For these reasons, the Kyoto Protocol requires the establishment of methodologies for estimating the carbon stocks of forests (Kyoto Protocol, Article 5). A possible solution to this challenge is to remotely measure the carbon stocks of every tree in an entire forest. Here, we present a methodology for estimating carbon stocks of a Japanese cedar forest by using a high-resolution, helicopter-borne 3-dimensional (3-D) scanning lidar system that measures the 3-D canopy structure of every tree in a forest. Results show that a digital image (10-cm mesh) of woody canopy can be acquired. The treetop can be detected automatically with a reasonable accuracy. The absolute error ranges for tree height measurements are within 42 cm. Allometric relationships of height to carbon stocks then permit estimation of total carbon storage by measurement of carbon stocks of every tree. Thus, we suggest that our methodology can be used to accurately estimate the carbon stocks of Japanese cedar forests at a stand scale. Periodic measurements will reveal changes in forest carbon stocks.

  7. A Method to Accurately Estimate the Muscular Torques of Human Wearing Exoskeletons by Torque Sensors

    PubMed Central

    Hwang, Beomsoo; Jeon, Doyoung

    2015-01-01

    In exoskeletal robots, the quantification of the user's muscular effort is important to recognize the user's motion intentions and evaluate motor abilities. In this paper, we attempt to estimate users' muscular efforts accurately using a joint torque sensor, whose measurement contains the dynamic effects of the human body, such as the inertial, Coriolis, and gravitational torques, as well as the torque from active muscular effort. It is important to extract the dynamic effects of the user's limb accurately from the measured torque. The user's limb dynamics are formulated, and a convenient method of identifying user-specific parameters is suggested for estimating the user's muscular torque in robotic exoskeletons. Experiments were carried out on a wheelchair-integrated lower-limb exoskeleton, EXOwheel, which was equipped with torque sensors in the hip and knee joints. The proposed methods were evaluated with 10 healthy participants during body-weight-supported gait training. The experimental results show that the torque sensors can estimate the muscular torque accurately under both relaxed and activated muscle conditions. PMID:25860074
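
    The estimation idea reduces to inverse dynamics: subtract the limb's own dynamic torques from the sensor reading. A one-degree-of-freedom sketch follows (all parameter values are illustrative; identifying these user-specific parameters is part of the paper's contribution).

```python
# Muscular torque = sensor reading minus the limb's own dynamics. For a
# single joint: tau_muscle = tau_sensor - (I*qdd + m*g*l*sin(q)); the
# Coriolis/centrifugal terms appear only with two or more joints.
import numpy as np

def muscular_torque(tau_sensor, q, qdd, m, l, I, g=9.81):
    """q: joint angle (rad), qdd: angular acceleration (rad/s^2),
    m: limb mass (kg), l: distance to center of mass (m), I: inertia (kg m^2)."""
    tau_dynamics = I * qdd + m * g * l * np.sin(q)
    return tau_sensor - tau_dynamics

# Example: knee joint mid-swing (illustrative numbers only).
print(muscular_torque(tau_sensor=12.0, q=0.4, qdd=2.0, m=3.5, l=0.25, I=0.3))
```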

  9. Thermal Imaging of Earth for Accurate Pointing of Deep-Space Antennas

    NASA Technical Reports Server (NTRS)

    Ortiz, Gerardo; Lee, Shinhak

    2005-01-01

    A report discusses a proposal to use thermal (long-wavelength infrared) images of the Earth, as seen from spacecraft at interplanetary distances, for pointing antennas and telescopes toward the Earth for Ka-band and optical communications. The purpose is to overcome two limitations of using visible images: (1) at large Earth phase angles, the light from the Earth is too faint; and (2) performance is degraded by large albedo variations associated with weather changes. In particular, it is proposed to use images in the wavelength band of 8 to 13 µm, wherein the appearance of the Earth is substantially independent of the Earth phase angle and emissivity variations are small. The report addresses tracking requirements for optical and Ka-band communications, selection of the wavelength band, available signal level versus phase angle, background noise, and signal-to-noise ratio. Tracking errors are estimated for several conceptual systems employing currently available infrared image sensors. It is found that at Mars range, it should be possible to locate the centroid of the Earth image within a noise equivalent angle (a random angular error) of between 10 and 150 nanoradians, with a bias error of no more than 80 nanoradians.

  10. Novel serologic biomarkers provide accurate estimates of recent Plasmodium falciparum exposure for individuals and communities

    PubMed Central

    Helb, Danica A.; Tetteh, Kevin K. A.; Felgner, Philip L.; Skinner, Jeff; Hubbard, Alan; Arinaitwe, Emmanuel; Mayanja-Kizza, Harriet; Ssewanyana, Isaac; Kamya, Moses R.; Beeson, James G.; Tappero, Jordan; Smith, David L.; Crompton, Peter D.; Rosenthal, Philip J.; Dorsey, Grant; Drakeley, Christopher J.; Greenhouse, Bryan

    2015-01-01

    Tools to reliably measure Plasmodium falciparum (Pf) exposure in individuals and communities are needed to guide and evaluate malaria control interventions. Serologic assays can potentially produce precise exposure estimates at low cost; however, current approaches based on responses to a few characterized antigens are not designed to estimate exposure in individuals. Pf-specific antibody responses differ by antigen, suggesting that selection of antigens with defined kinetic profiles will improve estimates of Pf exposure. To identify novel serologic biomarkers of malaria exposure, we evaluated responses to 856 Pf antigens by protein microarray in 186 Ugandan children, for whom detailed Pf exposure data were available. Using data-adaptive statistical methods, we identified combinations of antibody responses that maximized information on an individual’s recent exposure. Responses to three novel Pf antigens accurately classified whether an individual had been infected within the last 30, 90, or 365 d (cross-validated area under the curve = 0.86–0.93), whereas responses to six antigens accurately estimated an individual’s malaria incidence in the prior year. Cross-validated incidence predictions for individuals in different communities provided accurate stratification of exposure between populations and suggest that precise estimates of community exposure can be obtained from sampling a small subset of that community. In addition, serologic incidence predictions from cross-sectional samples characterized heterogeneity within a community similarly to 1 y of continuous passive surveillance. Development of simple ELISA-based assays derived from the successful selection strategy outlined here offers the potential to generate rich epidemiologic surveillance data that will be widely accessible to malaria control programs. PMID:26216993

  11. A Simulation Study Comparison of Bayesian Estimation with Conventional Methods for Estimating Unknown Change Points

    ERIC Educational Resources Information Center

    Wang, Lijuan; McArdle, John J.

    2008-01-01

    The main purpose of this research is to evaluate the performance of a Bayesian approach for estimating unknown change points using Monte Carlo simulations. The univariate and bivariate unknown change point mixed models were presented and the basic idea of the Bayesian approach for estimating the models was discussed. The performance of Bayesian…

  12. Leidenfrost point and estimate of the vapour layer thickness

    NASA Astrophysics Data System (ADS)

    Gianino, Concetto

    2008-11-01

    In this article I describe an experiment involving the Leidenfrost phenomenon, which is the long lifetime of a water drop when it is deposited on a metal that is much hotter than the boiling point of water. The experiment was carried out with high-school students. The Leidenfrost point is measured and the heat laws are used to estimate the thickness of the vapour layer, d≈0.06 mm, which prevents the drop from touching the hotplate.
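
    A back-of-the-envelope balance of the kind the article describes (my notation, assumed rather than quoted from the paper): the heat conducted across the vapour film supplies the latent heat that evaporates the drop over its lifetime,

```latex
\[
  \underbrace{k_v\,\frac{T_p - T_b}{d}\,A}_{\text{conduction through the film}}
  \;=\;
  \underbrace{\frac{m\,L_v}{t_L}}_{\text{evaporation rate}}
  \qquad\Longrightarrow\qquad
  d \;=\; \frac{k_v\,(T_p - T_b)\,A\,t_L}{m\,L_v},
\]
```

    with k_v the thermal conductivity of the vapour, T_p the plate temperature, T_b the boiling point, A the drop's base area, m its mass, L_v the latent heat of vaporisation and t_L the measured drop lifetime. Plausible classroom values give d of the order of a few hundredths of a millimetre, consistent with the quoted d ≈ 0.06 mm.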

  13. Voxel-based registration of simulated and real patient CBCT data for accurate dental implant pose estimation

    NASA Astrophysics Data System (ADS)

    Moreira, António H. J.; Queirós, Sandro; Morais, Pedro; Rodrigues, Nuno F.; Correia, André Ricardo; Fernandes, Valter; Pinho, A. C. M.; Fonseca, Jaime C.; Vilaça, João. L.

    2015-03-01

    The success of a dental implant-supported prosthesis is directly linked to the accuracy obtained during the implant's pose estimation (position and orientation). Although traditional impression techniques and recent digital acquisition methods are acceptably accurate, a simultaneously fast, accurate and operator-independent methodology is still lacking. Hereto, an image-based framework is proposed to estimate the patient-specific implant's pose using cone-beam computed tomography (CBCT) and prior knowledge of the implanted model. The pose estimation is accomplished in a three-step approach: (1) a region-of-interest is extracted from the CBCT data using 2 operator-defined points at the implant's main axis; (2) a simulated CBCT volume of the known implanted model is generated through Feldkamp-Davis-Kress reconstruction and coarsely aligned to the defined axis; and (3) a voxel-based rigid registration is performed to optimally align both patient and simulated CBCT data, extracting the implant's pose from the optimal transformation. Three experiments were performed to evaluate the framework: (1) an in silico study using 48 implants distributed through 12 three-dimensional synthetic mandibular models; (2) an in vitro study using an artificial mandible with 2 dental implants acquired with an i-CAT system; and (3) two clinical case studies. The results showed positional errors of 67 ± 34 µm and 108 µm, and angular misfits of 0.15 ± 0.08° and 1.4°, for experiments 1 and 2, respectively. Moreover, in experiment 3, visual assessment of the clinical data showed a coherent alignment of the reference implant. Overall, a novel image-based framework for implant pose estimation from CBCT data was proposed, showing accurate results in agreement with dental prosthesis modelling requirements.

  14. Estimating the Effective Permittivity for Reconstructing Accurate Microwave-Radar Images.

    PubMed

    Lavoie, Benjamin R; Okoniewski, Michal; Fear, Elise C

    2016-01-01

    We present preliminary results from a method for estimating the optimal effective permittivity for reconstructing microwave-radar images. Using knowledge of how microwave-radar images are formed, we identify characteristics that are typical of good images, and define a fitness function to measure the relative image quality. We build a polynomial interpolant of the fitness function in order to identify the most likely permittivity values of the tissue. To make the estimation process more efficient, the polynomial interpolant is constructed using a locally and dimensionally adaptive sampling method that is a novel combination of stochastic collocation and polynomial chaos. Examples, using a series of simulated, experimental and patient data collected using the Tissue Sensing Adaptive Radar system, which is under development at the University of Calgary, are presented. These examples show how, using our method, accurate images can be reconstructed starting with only a broad estimate of the permittivity range.
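
    The search strategy is easy to sketch: evaluate the image-quality fitness at a few candidate permittivities, fit a polynomial interpolant, and take its maximizer. The placeholder fitness and the plain polyfit below stand in for the paper's radar-derived fitness function and its adaptive stochastic-collocation/polynomial-chaos interpolant.

```python
# Pick the effective permittivity that maximizes a polynomial interpolant of
# an image-quality fitness. The fitness here is a toy stand-in.
import numpy as np

def fitness(eps):                    # placeholder: peaked near the "true" value
    return np.exp(-((eps - 9.0) / 2.5) ** 2)

eps_samples = np.linspace(4, 15, 7)  # coarse sampling: one reconstruction each
f_samples = fitness(eps_samples)

coeffs = np.polyfit(eps_samples, f_samples, 4)      # polynomial interpolant
eps_grid = np.linspace(4, 15, 1000)
eps_best = eps_grid[np.argmax(np.polyval(coeffs, eps_grid))]
print(f"estimated permittivity: {eps_best:.2f}")
```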

  17. Accurate estimation of object location in an image sequence using helicopter flight data

    NASA Technical Reports Server (NTRS)

    Tang, Yuan-Liang; Kasturi, Rangachar

    1994-01-01

    In autonomous navigation, it is essential to obtain a three-dimensional (3D) description of the static environment in which the vehicle is traveling. For a rotorcraft conducting low-altitude flight, this description is particularly useful for obstacle detection and avoidance. In this paper, we address the problem of 3D position estimation for static objects from a monocular sequence of images captured from a low-altitude flying helicopter. Since the environment is static, it is well known that the optical flow in the image will produce a radiating pattern from the focus of expansion. We propose a motion analysis system which utilizes the epipolar constraint to accurately estimate 3D positions of scene objects in a real-world image sequence taken from a low-altitude flying helicopter. Results show that this approach gives good estimates of object positions near the rotorcraft's intended flight path.

  18. Effective Echo Detection and Accurate Orbit Estimation Algorithms for Space Debris Radar

    NASA Astrophysics Data System (ADS)

    Isoda, Kentaro; Sakamoto, Takuya; Sato, Toru

    Orbit estimation of space debris, objects of no inherent value orbiting the earth, is a task that is important for avoiding collisions with spacecraft. The Kamisaibara Spaceguard Center radar system was built in 2004 as the first radar facility in Japan devoted to the observation of space debris. In order to detect smaller debris, coherent integration is effective in improving the SNR (signal-to-noise ratio). However, it is difficult to apply coherent integration to real data because the motions of the targets are unknown. An effective algorithm, which exploits the characteristics of the evaluation function, is proposed for echo detection and orbit estimation of the faint echoes from space debris. Experiments show that the proposed algorithm improves SNR by 8.32 dB and estimates orbital parameters accurately enough to allow re-tracking with a single radar.

  19. Parameter Estimation of Ion Current Formulations Requires Hybrid Optimization Approach to Be Both Accurate and Reliable

    PubMed Central

    Loewe, Axel; Wilhelms, Mathias; Schmid, Jochen; Krause, Mathias J.; Fischer, Fathima; Thomas, Dierk; Scholz, Eberhard P.; Dössel, Olaf; Seemann, Gunnar

    2016-01-01

    Computational models of cardiac electrophysiology have provided insights into arrhythmogenesis and paved the way toward tailored therapies in recent years. To fully leverage in silico models in future research, however, these models need to be adapted to reflect pathologies, genetic alterations, or pharmacological effects. A common approach is to leave the structure of established models unaltered and estimate the values of a set of parameters. Today's high-throughput patch clamp data acquisition methods require robust, unsupervised algorithms that estimate parameters both accurately and reliably. In this work, two classes of optimization approaches are evaluated: gradient-based trust-region-reflective and derivative-free particle swarm algorithms. Using synthetic input data and different ion current formulations from the Courtemanche et al. electrophysiological model of human atrial myocytes, we show that neither of the two schemes alone succeeds in meeting all requirements. Sequential combination of the two algorithms improved the performance to some extent but not satisfactorily. Thus, we propose a novel hybrid approach coupling the two algorithms in each iteration. This hybrid approach yielded very accurate estimates with minimal dependency on the initial guess using synthetic input data for which a ground-truth parameter set exists. When applied to measured data, the hybrid approach yielded the best fit, again with minimal variation. Using the proposed algorithm, a single run is sufficient to estimate the parameters. The degree of superiority over the other investigated algorithms in terms of accuracy and robustness depended on the type of current. In contrast to the non-hybrid approaches, the proposed method proved to be optimal for data of arbitrary signal-to-noise ratio. The hybrid algorithm proposed in this work provides an important tool to integrate experimental data into computational models both accurately and robustly, allowing one to assess the often non
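
    A minimal sketch of such a hybrid loop, with a toy sigmoidal "current" model standing in for the Courtemanche ion-current formulations: a small particle swarm explores the bounded parameter space, and each iteration's global best is polished by SciPy's trust-region-reflective least-squares solver. All model details and PSO constants here are illustrative assumptions, not the paper's settings.

```python
# Hybrid fit: derivative-free particle swarm with per-iteration refinement of
# the global best by a gradient-based trust-region-reflective step.
import numpy as np
from scipy.optimize import least_squares

def model(p, v):                        # toy "current" model i(v; p)
    return p[0] / (1 + np.exp(-(v - p[1]) / p[2]))

rng = np.random.default_rng(0)
v = np.linspace(-80, 40, 100)
p_true = np.array([1.0, -20.0, 8.0])
data = model(p_true, v) + rng.normal(0, 0.01, v.size)

residuals = lambda p: model(p, v) - data
lo, hi = np.array([0.1, -60.0, 1.0]), np.array([5.0, 20.0, 30.0])

n, w, c1, c2 = 20, 0.7, 1.5, 1.5        # swarm size and PSO constants
pos = rng.uniform(lo, hi, (n, 3)); vel = np.zeros((n, 3))
pbest = pos.copy(); pcost = np.array([np.sum(residuals(p)**2) for p in pos])
for _ in range(30):
    g = pbest[np.argmin(pcost)]
    g = least_squares(residuals, g, bounds=(lo, hi), method="trf").x  # refine
    gc = np.sum(residuals(g)**2)
    if gc < pcost.min():                # keep the refined global best
        i = np.argmin(pcost); pbest[i], pcost[i] = g, gc
    vel = w*vel + c1*rng.random((n, 3))*(pbest - pos) + c2*rng.random((n, 3))*(g - pos)
    pos = np.clip(pos + vel, lo, hi)
    cost = np.array([np.sum(residuals(p)**2) for p in pos])
    improved = cost < pcost
    pbest[improved], pcost[improved] = pos[improved], cost[improved]

print("estimate:", pbest[np.argmin(pcost)], "true:", p_true)
```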

  20. New methods determine pour point more accurately than ASTM D-97

    SciTech Connect

    Khan, H.U.; Dilawar, S.V.K.; Nautiyal, S.P.; Srivastava, S.P.

    1993-11-01

    A new, alternative method determines petroleum fluid pour points with ±1 °C precision and better accuracy than the standard ASTM D-97 procedure. The new method measures the pour point of transparent fluids by determining the wax appearance temperature (WAT). Pour points of waxy crude oils can also be determined by measuring a flow characteristic called restart pressure.

  1. Evaluating lidar point densities for effective estimation of aboveground biomass

    USGS Publications Warehouse

    Wu, Zhuoting; Dye, Dennis G.; Stoker, Jason; Vogel, John M.; Velasco, Miguel G.; Middleton, Barry R.

    2016-01-01

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) was recently established to provide airborne lidar data coverage on a national scale. As part of a broader research effort of the USGS to develop an effective remote sensing-based methodology for the creation of an operational biomass Essential Climate Variable (Biomass ECV) data product, we evaluated the performance of airborne lidar data at various pulse densities against Landsat 8 satellite imagery in estimating aboveground biomass for forests and woodlands in a study area in east-central Arizona, U.S. High-point-density airborne lidar data were randomly sampled to produce five lidar datasets with reduced densities ranging from 0.5 to 8 points/m2, corresponding to the point-density range of 3DEP for providing national lidar coverage over time. Lidar-derived aboveground biomass estimate errors showed an overall decreasing trend as lidar point density increased from 0.5 to 8 points/m2. Landsat 8-based aboveground biomass estimates produced errors larger than those from lidar even at the lowest point density of 0.5 point/m2, and therefore Landsat 8 observations alone were ineffective relative to airborne lidar for generating a Biomass ECV product, at least for the forest and woodland vegetation types of the Southwestern U.S. While a national Biomass ECV product with optimal accuracy could potentially be achieved with 3DEP data at 8 points/m2, our results indicate that even lower-density lidar data could be sufficient to provide a national Biomass ECV product with accuracies significantly higher than those from Landsat observations alone.

  2. Intraocular lens power estimation by accurate ray tracing for eyes underwent previous refractive surgeries

    NASA Astrophysics Data System (ADS)

    Yang, Que; Wang, Shanshan; Wang, Kai; Zhang, Chunyu; Zhang, Lu; Meng, Qingyu; Zhu, Qiudong

    2015-08-01

    For normal eyes with no history of ocular surgery, traditional equations for calculating intraocular lens (IOL) power, such as SRK-T, Holladay, Haigis, and SRK-II, are all relatively accurate. However, for eyes that underwent refractive surgeries such as LASIK, or eyes diagnosed with keratoconus, these equations may cause significant postoperative refractive error and thus poor satisfaction after cataract surgery. Although some methods have been proposed to solve this problem, such as the Haigis-L equation[1] or using preoperative data (from before LASIK) to estimate the K value[2], no precise equations are available for these eyes. Here, we introduce a novel intraocular lens power estimation method by accurate ray tracing with the optical design software ZEMAX. Instead of using a traditional regression formula, we adopt the exactly measured corneal elevation distribution, central corneal thickness, anterior chamber depth, axial length, and estimated effective lens plane as the input parameters. The calculated intraocular lens power for a patient with keratoconus and another post-LASIK patient agreed very well with their visual outcomes after cataract surgery.

  3. Accurate and quantitative polarization-sensitive OCT by unbiased birefringence estimator with noise-stochastic correction

    NASA Astrophysics Data System (ADS)

    Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki

    2016-03-01

    Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, and other fields. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal-to-noise ratio (SNR) and the contrast of phase retardation (or birefringence) images introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for quantitative studies. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on SNR was not overcome, so the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurements was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we developed a maximum a posteriori (MAP) estimator, and quantitative birefringence imaging was demonstrated [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes into account the stochastic property of SNR. This estimator uses a probability distribution function (PDF) of the true local retardation, which is proportional to birefringence, under a specific set of measurements of birefringence and SNR. The PDF was pre-computed by a Monte Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and in vivo measurements of anterior and

  4. READSCAN: a fast and scalable pathogen discovery program with accurate genome relative abundance estimation

    PubMed Central

    Rashid, Mamoon; Pain, Arnab

    2013-01-01

    Summary: READSCAN is a highly scalable parallel program to identify non-host sequences (of potential pathogen origin) and estimate their genome relative abundance in high-throughput sequence datasets. READSCAN accurately classified human and viral sequences in a simulated dataset of 20.1 million reads in <27 min using a small Beowulf compute cluster with 16 nodes (Supplementary Material).
    Availability: http://cbrc.kaust.edu.sa/readscan
    Contact: arnab.pain@kaust.edu.sa or raeece.naeem@gmail.com
    Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23193222

  5. Uncertainty of Areal Rainfall Estimation Using Point Measurements

    NASA Astrophysics Data System (ADS)

    McCarthy, D.; Dotto, C. B. S.; Sun, S.; Bertrand-Krajewski, J. L.; Deletic, A.

    2014-12-01

    The spatial variability of precipitation has a great influence on the quantity and quality of runoff water generated by hydrological processes. In practice, point rainfall measurements (e.g., rain gauges) are often used to represent areal rainfall in catchments. Spatial rainfall variability is difficult to capture precisely, even with many rain gauges; the rainfall uncertainty due to spatial variability should therefore be taken into account in order to provide reliable results from rainfall-driven process modelling. This study investigates the uncertainty of areal rainfall estimation due to rainfall spatial variability when point measurements are used. The areal rainfall is usually estimated as a weighted sum of data from the available point measurements. The expected error of the areal rainfall estimate is zero if the estimator is unbiased, and the variance of the error between the real and estimated areal rainfall indicates the uncertainty of the estimate. This error variance can be expressed as a function of variograms, which were originally used in geostatistics to characterize spatial variables and which can be evaluated using measurements from a dense rain gauge network. The areal rainfall errors are evaluated in two areas with distinct climate regimes and rainfall patterns: the Greater Lyon area in France and the Melbourne area in Australia. The variograms of the two areas are derived from 6-minute rainfall time series data from 2010 to 2013 and are then used to estimate the uncertainties of areal rainfall represented by different numbers of point measurements in synthetic catchments of various sizes. The error variance of areal rainfall using one point measurement in the centre of a 1-km2 catchment is 0.22 (mm/h)2 in Lyon; when the point measurement is placed at one corner of the same-size catchment, the error variance becomes 0.82 (mm/h)2, again for Lyon. Results for Melbourne were similar but presented larger uncertainty. Results
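
    The error-variance calculation described above can be sketched by Monte Carlo integration of a variogram: for a single gauge at x0 estimating the areal mean over a catchment A, Var[error] = 2·mean γ(x0, u) − mean γ(u, v) for u, v uniform over A. The exponential variogram parameters below are illustrative, not the fitted Lyon or Melbourne values, but the sketch reproduces the qualitative centre-versus-corner effect reported above.

```python
# Variogram-based error variance of estimating the areal mean rainfall of a
# 1 km x 1 km catchment from a single gauge, by Monte Carlo integration.
import numpy as np

def gamma(h, sill=0.9, corr_km=5.0):        # exponential variogram, (mm/h)^2
    return sill * (1 - np.exp(-h / corr_km))

rng = np.random.default_rng(0)
u = rng.uniform(0, 1, (20000, 2))           # random points in the catchment (km)
v = rng.uniform(0, 1, (20000, 2))

def error_variance(x0):
    g_point_area = gamma(np.linalg.norm(u - x0, axis=1)).mean()
    g_area_area = gamma(np.linalg.norm(u - v, axis=1)).mean()
    return 2 * g_point_area - g_area_area

print("gauge at centre:", error_variance(np.array([0.5, 0.5])))
print("gauge at corner:", error_variance(np.array([0.0, 0.0])))
```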

  6. Toward an Accurate Estimate of the Exfoliation Energy of Black Phosphorus: A Periodic Quantum Chemical Approach.

    PubMed

    Sansone, Giuseppe; Maschio, Lorenzo; Usvyat, Denis; Schütz, Martin; Karttunen, Antti

    2016-01-01

    The black phosphorus (black-P) crystal is formed of covalently bound layers of phosphorene stacked together by weak van der Waals interactions. An experimental measurement of the exfoliation energy of black-P is not available presently, making theoretical studies the most important source of information for the optimization of phosphorene production. Here, we provide an accurate estimate of the exfoliation energy of black-P on the basis of multilevel quantum chemical calculations, which include the periodic local Møller-Plesset perturbation theory of second order, augmented by higher-order corrections, which are evaluated with finite clusters mimicking the crystal. Very similar results are also obtained by density functional theory with the D3-version of Grimme's empirical dispersion correction. Our estimate of the exfoliation energy for black-P of -151 meV/atom is substantially larger than that of graphite, suggesting the need for different strategies to generate isolated layers for these two systems. PMID:26651397

  8. Accurate Estimation of Carotid Luminal Surface Roughness Using Ultrasonic Radio-Frequency Echo

    NASA Astrophysics Data System (ADS)

    Kitamura, Kosuke; Hasegawa, Hideyuki; Kanai, Hiroshi

    2012-07-01

    It would be useful to measure the minute surface roughness of the carotid arterial wall to detect atherosclerosis at an early stage. In conventional ultrasonography, the axial resolution of a B-mode image depends on the ultrasonic wavelength, 150 µm at 10 MHz, because a B-mode image is constructed using the amplitude of the radio-frequency (RF) echo. Therefore, the surface roughness caused by early-stage atherosclerosis cannot be measured using a conventional B-mode image, because the roughness is 10-20 µm. We have realized accurate transcutaneous estimation of such a minute surface profile using the lateral motion of the carotid arterial wall, which is estimated by block matching of received ultrasonic signals. However, the width of the region where the surface profile is estimated depends on the magnitude of the lateral displacement of the carotid arterial wall (i.e., if the lateral displacement of the arterial wall is 1 mm, the surface profile is estimated in a region 1 mm in width). In this study, the width was increased by combining surface profiles estimated using several ultrasonic beams. We first measured a fine wire, 13 µm in diameter, using ultrasonic equipment to obtain an ultrasonic beam profile for determining the optimal kernel size for block matching based on the correlation between RF echoes. Second, we estimated the lateral displacement and surface profile of a phantom with a sawtooth profile on its surface, and compared the surface profile measured by ultrasound with that measured by a laser profilometer. Finally, we estimated the lateral displacement and surface roughness of the carotid arterial walls of three healthy subjects (24-, 23-, and 23-year-old males) using the proposed method.

  9. Software cost estimation using class point metrics (CPM)

    NASA Astrophysics Data System (ADS)

    Ghode, Aditi; Periyasamy, Kasilingam

    2011-12-01

    Estimating the cost of a software project is one of the most important and crucial tasks in maintaining software reliability. Many cost estimation models have been reported to date, but most have significant drawbacks due to rapid changes in technology. For example, Source Lines Of Code (SLOC) can only be counted once software construction is complete. The Function Point (FP) metric is deficient in handling object-oriented technology, as it was designed for procedural languages such as COBOL. Since object-oriented programming became a popular development practice, most software companies have adopted the Unified Modeling Language (UML). The objective of this research is to develop a new cost estimation model that applies the class diagram to software cost estimation.

  10. Lamb mode selection for accurate wall loss estimation via guided wave tomography

    SciTech Connect

    Huthwaite, P.; Ribichini, R.; Lowe, M. J. S.; Cawley, P.

    2014-02-18

    Guided wave tomography offers a method to accurately quantify wall thickness losses in pipes and vessels caused by corrosion. This is achieved using ultrasonic waves transmitted over distances of approximately 1-2 m, which are measured by an array of transducers and then used to reconstruct a map of wall thickness throughout the inspected region. To achieve accurate estimates of remnant wall thickness, it is vital that a suitable Lamb mode is chosen. This paper presents a detailed evaluation of the fundamental modes, S₀ and A₀, which are of primary interest in guided wave tomography thickness estimates since the higher-order modes do not exist at all thicknesses; their performance is compared using both numerical and experimental data while considering a range of challenging phenomena. The sensitivity of A₀ to thickness variations was shown to be superior to that of S₀; however, the attenuation of A₀ under liquid loading was much higher than that of S₀. A₀ was also less sensitive than S₀ to the presence of coatings on the surface.

  11. Monte Carlo next-event point flux estimation for RCP01

    SciTech Connect

    Martz, R.L.; Gast, R.C.; Tyburski, L.J.

    1991-12-31

    Two next-event point estimators have been developed and programmed into the RCP01 Monte Carlo program for solving neutron transport problems in three-dimensional geometry with detailed energy description. These estimators use a simplified but accurate flux-at-a-point tallying technique. Anisotropic scattering in the lab system at the collision site is accounted for by determining the exit energy that corresponds to the angle between the location of the collision and the point detector. Elastic, inelastic, and thermal kernel scattering events are included in this formulation. An averaging technique is used in both estimators to eliminate the well-known problem of infinite variance due to collisions close to the point detector. In a novel approach to improve the estimator's efficiency, a Russian roulette scheme based on anticipated flux fall-off is employed where averaging is not appropriate. A second estimator successfully uses a simple rejection technique in conjunction with detailed tracking where averaging isn't needed. Test results show good agreement with known numeric solutions. Efficiencies are examined as a function of input parameter selection and problem difficulty.
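
    As a hedged, single-group sketch of the flux-at-a-point idea in this record, the snippet below tallies one collision's next-event contribution w·p(μ)·exp(-Σt·R)/R², and switches to a volume-averaged kernel inside a small sphere around the detector to avoid the 1/R² infinite-variance problem; the cross section, sphere radius, and isotropic scattering pdf are assumptions:

        import numpy as np

        SIGMA_T = 0.5   # total macroscopic cross section, 1/cm (assumed)
        R_AVG = 1.0     # radius of the averaging sphere around the detector, cm (assumed)

        def next_event_contribution(weight, collision_xyz, detector_xyz,
                                    p_mu=1.0 / (4.0 * np.pi)):
            """Point-detector tally for one collision in a homogeneous medium.

            p_mu: probability per steradian of scattering toward the detector
            (isotropic here; a real code evaluates the angular law and the
            exit energy matching the collision-to-detector angle).
            """
            R = np.linalg.norm(np.asarray(detector_xyz, float) -
                               np.asarray(collision_xyz, float))
            if R >= R_AVG:
                return weight * p_mu * np.exp(-SIGMA_T * R) / R**2
            # volume average of exp(-SIGMA_T*r)/r**2 over the sphere removes
            # the singularity: 3*(1 - exp(-SIGMA_T*R_AVG))/(SIGMA_T*R_AVG**3)
            return weight * p_mu * 3.0 * (1.0 - np.exp(-SIGMA_T * R_AVG)) / (SIGMA_T * R_AVG**3)

        print(next_event_contribution(1.0, (0.0, 0.0, 0.0), (3.0, 0.0, 0.0)))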

  12. Accurate Estimation of the Intrinsic Dimension Using Graph Distances: Unraveling the Geometric Complexity of Datasets

    NASA Astrophysics Data System (ADS)

    Granata, Daniele; Carnevale, Vincenzo

    2016-08-01

    The collective behavior of a large number of degrees of freedom can often be described by a handful of variables. This observation justifies the use of dimensionality-reduction approaches to model complex systems and motivates the search for a small set of relevant “collective” variables. Here, we analyze this issue by focusing on the optimal number of variables needed to capture the salient features of a generic dataset, and we develop a novel estimator for the intrinsic dimension (ID). By approximating geodesics with minimum-distance paths on a graph, we analyze the distribution of pairwise distances around its maximum and exploit its dependency on the dimensionality to obtain an ID estimate. We show that the estimator does not depend on the shape of the intrinsic manifold and is highly accurate, even for exceedingly small sample sizes. We apply the method to several relevant datasets from image recognition databases and protein multiple sequence alignments, and we discuss possible interpretations for the estimated dimension in light of the correlations among input variables and of the information content of the dataset.
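
    The following is a minimal sketch (not the authors' estimator) of the two ingredients the abstract describes: geodesic distances approximated by shortest paths on a k-nearest-neighbor graph, and an ID read off by matching the near-peak distance distribution to the hypersphere form sin^(d-1)(r). The neighbor count, binning, and toy data are assumptions:

        import numpy as np
        from sklearn.neighbors import kneighbors_graph
        from scipy.sparse.csgraph import shortest_path

        def intrinsic_dimension(X, k=10, d_max=20, bins=50):
            """Estimate the ID from the shape of the geodesic-distance pdf."""
            graph = kneighbors_graph(X, k, mode="distance")
            D = shortest_path(graph, method="D", directed=False)
            r = D[np.triu_indices_from(D, k=1)]
            r = r[np.isfinite(r)]
            hist, edges = np.histogram(r, bins=bins, density=True)
            centers = 0.5 * (edges[:-1] + edges[1:])
            r_mode = centers[np.argmax(hist)]
            u = np.pi * centers / (2.0 * r_mode)       # map the mode to pi/2
            keep = u <= np.pi                          # stay on the sphere support
            emp = hist[keep] / hist[keep].sum()
            best_d, best_err = 1, np.inf
            for d in range(1, d_max + 1):
                model = np.sin(u[keep]) ** (d - 1)
                model /= model.sum()
                err = np.mean((model - emp) ** 2)
                if err < best_err:
                    best_d, best_err = d, err
            return best_d

        # toy check: a 2-D plane embedded in 5-D should give an ID near 2
        rng = np.random.default_rng(0)
        X = np.hstack([rng.uniform(size=(1000, 2)), np.zeros((1000, 3))])
        print(intrinsic_dimension(X))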

  13. Accurate Estimation of the Intrinsic Dimension Using Graph Distances: Unraveling the Geometric Complexity of Datasets

    PubMed Central

    Granata, Daniele; Carnevale, Vincenzo

    2016-01-01

    The collective behavior of a large number of degrees of freedom can often be described by a handful of variables. This observation justifies the use of dimensionality-reduction approaches to model complex systems and motivates the search for a small set of relevant “collective” variables. Here, we analyze this issue by focusing on the optimal number of variables needed to capture the salient features of a generic dataset, and we develop a novel estimator for the intrinsic dimension (ID). By approximating geodesics with minimum-distance paths on a graph, we analyze the distribution of pairwise distances around its maximum and exploit its dependency on the dimensionality to obtain an ID estimate. We show that the estimator does not depend on the shape of the intrinsic manifold and is highly accurate, even for exceedingly small sample sizes. We apply the method to several relevant datasets from image recognition databases and protein multiple sequence alignments, and we discuss possible interpretations for the estimated dimension in light of the correlations among input variables and of the information content of the dataset. PMID:27510265

  14. Accurate Estimation of the Intrinsic Dimension Using Graph Distances: Unraveling the Geometric Complexity of Datasets.

    PubMed

    Granata, Daniele; Carnevale, Vincenzo

    2016-01-01

    The collective behavior of a large number of degrees of freedom can often be described by a handful of variables. This observation justifies the use of dimensionality-reduction approaches to model complex systems and motivates the search for a small set of relevant "collective" variables. Here, we analyze this issue by focusing on the optimal number of variables needed to capture the salient features of a generic dataset, and we develop a novel estimator for the intrinsic dimension (ID). By approximating geodesics with minimum-distance paths on a graph, we analyze the distribution of pairwise distances around its maximum and exploit its dependency on the dimensionality to obtain an ID estimate. We show that the estimator does not depend on the shape of the intrinsic manifold and is highly accurate, even for exceedingly small sample sizes. We apply the method to several relevant datasets from image recognition databases and protein multiple sequence alignments, and we discuss possible interpretations for the estimated dimension in light of the correlations among input variables and of the information content of the dataset. PMID:27510265

  15. Removing the thermal component from heart rate provides an accurate VO2 estimation in forest work.

    PubMed

    Dubé, Philippe-Antoine; Imbeau, Daniel; Dubeau, Denise; Lebel, Luc; Kolus, Ahmet

    2016-05-01

    Heart rate (HR) was monitored continuously in 41 forest workers performing brushcutting or tree planting work. Ten-minute seated rest periods were imposed during the workday to estimate the thermal component of HR (ΔHRT) following Vogt et al. (1970, 1973). VO2 was measured using a portable gas analyzer during a morning submaximal step test conducted at the work site, during a work bout over the course of the day (range: 9-74 min), and during an ensuing 10-min rest pause taken at the worksite. VO2 estimates from measured HR and from corrected HR (thermal component removed) were compared with VO2 measured during work and rest. Varied levels of the HR thermal component (ΔHRTavg range: 0-38 bpm) were observed, originating from a wide range of ambient thermal conditions, clothing insulation worn, and physical load exerted during work. Using raw HR significantly overestimated measured work VO2 by 30% on average (range: 1%-64%), and 74% of the VO2 prediction error variance was explained by the HR thermal component. VO2 estimated from corrected HR was not statistically different from measured VO2. Work VO2 can therefore be estimated accurately in the presence of thermal stress using Vogt et al.'s method, which the practitioner can implement easily with inexpensive instruments.
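
    A hedged sketch of the two-step recipe in this record: fit an individual linear HR-to-VO2 relation from the morning step test, then subtract the thermal component ΔHRT (the HR elevation measured during the seated rest pauses) from working HR before predicting VO2. All numbers below are invented placeholders, not study data:

        import numpy as np

        # step-test calibration points (assumed values): HR in bpm, VO2 in mL/kg/min
        hr_step = np.array([85.0, 100.0, 115.0, 130.0])
        vo2_step = np.array([12.0, 18.0, 24.0, 30.0])
        slope, intercept = np.polyfit(hr_step, vo2_step, 1)  # individual HR->VO2 line

        def work_vo2(hr_work, delta_hr_thermal):
            """Predict work VO2 from HR with the thermal component removed."""
            return slope * (hr_work - delta_hr_thermal) + intercept

        # a worker at 125 bpm with a 15 bpm thermal pulse
        print(work_vo2(125.0, 15.0))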

  16. Accurate State Estimation and Tracking of a Non-Cooperative Target Vehicle

    NASA Technical Reports Server (NTRS)

    Thienel, Julie K.; Sanner, Robert M.

    2006-01-01

    Autonomous space rendezvous scenarios require knowledge of the target vehicle state in order to safely dock with the chaser vehicle. Ideally, the target vehicle state information is derived from telemetered data, or with the use of known tracking points on the target vehicle. However, if the target vehicle is non-cooperative and does not have the ability to maintain attitude control, or transmit attitude knowledge, the docking becomes more challenging. This work presents a nonlinear approach for estimating the body rates of a non-cooperative target vehicle, and coupling this estimation to a tracking control scheme. The approach is tested with the robotic servicing mission concept for the Hubble Space Telescope (HST). Such a mission would not only require estimates of the HST attitude and rates, but also precision control to achieve the desired rate and maintain the orientation to successfully dock with HST.

  17. Local surface sampling step estimation for extracting boundaries of planar point clouds

    NASA Astrophysics Data System (ADS)

    Brie, David; Bombardier, Vincent; Baeteman, Grégory; Bennis, Abdelhamid

    2016-09-01

    This paper presents a new approach to estimating the surface sampling step of planar point clouds acquired by a Terrestrial Laser Scanner (TLS), which varies with the distance to the surface and the angular position. The local surface sampling step is obtained by a first-order Taylor expansion of the planar point coordinates. It is then shown how to use this step in Delaunay-based boundary point extraction. The resulting approach, implemented in the ModiBuilding software, is applied to two facade point clouds of a building, the first acquired with a single station and the second with two stations. In both cases, the proposed approach performs very accurately and appears robust to variations in point cloud density.

  18. Effects of LiDAR point density, sampling size and height threshold on estimation accuracy of crop biophysical parameters.

    PubMed

    Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong

    2016-05-30

    Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimates of these parameters are essential for corn yield forecasting, health monitoring, and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracy of these parameters is affected by multiple factors. In this study, we first estimated corn LAI, height, and biomass (R2 = 0.80, 0.874, and 0.838, respectively) using the original LiDAR data (7.32 points/m2), and the results showed that LiDAR data can estimate these biophysical parameters accurately. Second, we comprehensively investigated the effects of LiDAR point density, sampling size, and height threshold on the estimation accuracy of LAI, height, and biomass. Our findings indicate that LiDAR point density has an important effect on estimation accuracy; however, high point density did not always produce highly accurate estimates, and reduced point density could still deliver reasonable results. Furthermore, sampling size and height threshold are additional key factors that affect estimation accuracy, so optimal values of both should be determined to improve it. Our results also imply that a higher LiDAR point density and a larger sampling size and height threshold are required for accurate corn LAI estimation than for height and biomass estimation. In general, our results provide valuable guidance for LiDAR data acquisition and for the estimation of vegetation biophysical parameters using LiDAR data. PMID:27410085

  19. Effects of LiDAR point density, sampling size and height threshold on estimation accuracy of crop biophysical parameters.

    PubMed

    Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong

    2016-05-30

    Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimates of these parameters are essential for corn yield forecasting, health monitoring, and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracy of these parameters is affected by multiple factors. In this study, we first estimated corn LAI, height, and biomass (R2 = 0.80, 0.874, and 0.838, respectively) using the original LiDAR data (7.32 points/m2), and the results showed that LiDAR data can estimate these biophysical parameters accurately. Second, we comprehensively investigated the effects of LiDAR point density, sampling size, and height threshold on the estimation accuracy of LAI, height, and biomass. Our findings indicate that LiDAR point density has an important effect on estimation accuracy; however, high point density did not always produce highly accurate estimates, and reduced point density could still deliver reasonable results. Furthermore, sampling size and height threshold are additional key factors that affect estimation accuracy, so optimal values of both should be determined to improve it. Our results also imply that a higher LiDAR point density and a larger sampling size and height threshold are required for accurate corn LAI estimation than for height and biomass estimation. In general, our results provide valuable guidance for LiDAR data acquisition and for the estimation of vegetation biophysical parameters using LiDAR data.

  20. MIDAS robust trend estimator for accurate GPS station velocities without step detection

    NASA Astrophysics Data System (ADS)

    Blewitt, Geoffrey; Kreemer, Corné; Hammond, William C.; Gazeaux, Julien

    2016-03-01

    Automatic estimation of velocities from GPS coordinate time series is becoming required to cope with the exponentially increasing flood of available data, but problems detectable to the human eye are often overlooked. This motivates us to find an automatic and accurate estimator of trend that is resistant to common problems such as step discontinuities, outliers, seasonality, skewness, and heteroscedasticity. Developed here, Median Interannual Difference Adjusted for Skewness (MIDAS) is a variant of the Theil-Sen median trend estimator, for which the ordinary version is the median of slopes v_ij = (x_j - x_i)/(t_j - t_i) computed between all data pairs i > j. For normally distributed data, Theil-Sen and least squares trend estimates are statistically identical, but unlike least squares, Theil-Sen is resistant to undetected data problems. To mitigate both seasonality and step discontinuities, MIDAS selects data pairs separated by 1 year. This condition is relaxed for time series with gaps so that all data are used. Slopes from data pairs spanning a step function produce one-sided outliers that can bias the median. To reduce bias, MIDAS removes outliers and recomputes the median. MIDAS also computes a robust and realistic estimate of trend uncertainty. Statistical tests using GPS data in the rigid North American plate interior show ±0.23 mm/yr root-mean-square (RMS) accuracy in horizontal velocity. In blind tests using synthetic data, MIDAS velocities have an RMS accuracy of ±0.33 mm/yr horizontal, ±1.1 mm/yr up, with a 5th percentile range smaller than all 20 automatic estimators tested. Considering its general nature, MIDAS has the potential for broader application in the geosciences.
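
    A minimal sketch of the MIDAS recipe as summarized above (median of slopes from pairs about one year apart, then one outlier-trimming pass and a second median); the trimming constant and the two-pointer pairing are simplifications, not the published code:

        import numpy as np

        def midas_velocity(t, x, pair_span=1.0):
            """Simplified MIDAS trend from times t (decimal years) and coordinates x."""
            slopes, j = [], 0
            for i in range(len(t)):
                while j < len(t) and t[j] < t[i] + pair_span:  # first sample ~1 yr later
                    j += 1
                if j == len(t):
                    break
                slopes.append((x[j] - x[i]) / (t[j] - t[i]))
            slopes = np.asarray(slopes)
            med = np.median(slopes)
            sigma = 1.4826 * np.median(np.abs(slopes - med))   # robust scale via MAD
            trimmed = slopes[np.abs(slopes - med) < 2.0 * sigma]
            return np.median(trimmed if trimmed.size else slopes)

        # toy series: 4 mm/yr trend + annual cycle + noise, sampled daily
        rng = np.random.default_rng(1)
        t = np.arange(0, 6, 1 / 365.25)
        x = 4.0 * t + 2.0 * np.sin(2 * np.pi * t) + rng.normal(0, 1.0, t.size)
        print(midas_velocity(t, x))   # close to 4 despite the seasonality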

  1. Methods for accurate estimation of net discharge in a tidal channel

    USGS Publications Warehouse

    Simpson, M.R.; Bland, R.

    2000-01-01

    Accurate estimates of net residual discharge in tidally affected rivers and estuaries are possible because of recently developed ultrasonic discharge measurement techniques. Previous discharge estimates using conventional mechanical current meters and methods based on stage/discharge relations or water-slope measurements often yielded errors as great as or greater than the computed residual discharge. Ultrasonic measurement methods consist of: 1) the use of ultrasonic instruments to measure a representative 'index' velocity for in situ estimation of mean water velocity, and 2) the use of an acoustic Doppler discharge measurement system to calibrate the index velocity measurements. The method used to calibrate (rate) the index velocity to the channel velocity measured with the acoustic Doppler current profiler is the most critical factor affecting the accuracy of net discharge estimation. The index velocity must first be related to mean channel velocity and then used to calculate instantaneous channel discharge. Finally, the discharge is low-pass filtered to remove the effects of the tides. An ultrasonic velocity meter discharge-measurement site in a tidally affected region of the Sacramento-San Joaquin Rivers was used to study the accuracy of the index velocity calibration procedure. Calibration data consisting of ultrasonic velocity meter index velocities and concurrent acoustic Doppler discharge measurements were collected during three time periods: two sets of data during spring tides (monthly maximum tidal currents) and one set during a neap tide (monthly minimum tidal current). The relative magnitudes of instrumental errors, acoustic Doppler discharge measurement errors, and calibration errors were evaluated. Calibration error was found to be the most significant source of error in estimating net discharge. Using a comprehensive calibration method, net discharge estimates developed from the three
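
    The calibration-then-filter chain described here can be sketched in a few lines: rate the index velocity against ADCP mean channel velocity with a linear fit, form instantaneous discharge, and low-pass filter with a cutoff longer than the tidal periods. The rating numbers, constant channel area, and 30 h cutoff are assumptions:

        import numpy as np
        from scipy.signal import butter, filtfilt

        # 1) rating: linear fit of ADCP mean channel velocity vs. index velocity
        #    (synthetic calibration pairs stand in for concurrent field data)
        v_index_cal = np.linspace(-1.0, 1.0, 25)           # m/s, ultrasonic meter
        v_mean_cal = 0.9 * v_index_cal + 0.05              # m/s, ADCP
        a, b = np.polyfit(v_index_cal, v_mean_cal, 1)

        # 2) instantaneous discharge from a 30-day, 15-minute index-velocity record
        dt_hours = 0.25
        t = np.arange(0, 30 * 24, dt_hours)
        v_index = 0.8 * np.sin(2 * np.pi * t / 12.42) + 0.05   # M2 tide + residual flow
        area = 1500.0                                          # m^2, assumed constant
        q = (a * v_index + b) * area                           # m^3/s

        # 3) low-pass filter (~30 h cutoff) removes semidiurnal/diurnal signals
        b_f, a_f = butter(4, 2.0 * dt_hours / 30.0)            # Wn relative to Nyquist
        q_net = filtfilt(b_f, a_f, q)                          # net (residual) discharge
        print(q_net.mean())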

  2. MIDAS robust trend estimator for accurate GPS station velocities without step detection

    PubMed Central

    Blewitt, Geoffrey; Kreemer, Corné; Hammond, William C.; Gazeaux, Julien

    2016-01-01

    Automatic estimation of velocities from GPS coordinate time series is becoming required to cope with the exponentially increasing flood of available data, but problems detectable to the human eye are often overlooked. This motivates us to find an automatic and accurate estimator of trend that is resistant to common problems such as step discontinuities, outliers, seasonality, skewness, and heteroscedasticity. Developed here, Median Interannual Difference Adjusted for Skewness (MIDAS) is a variant of the Theil-Sen median trend estimator, for which the ordinary version is the median of slopes v_ij = (x_j - x_i)/(t_j - t_i) computed between all data pairs i > j. For normally distributed data, Theil-Sen and least squares trend estimates are statistically identical, but unlike least squares, Theil-Sen is resistant to undetected data problems. To mitigate both seasonality and step discontinuities, MIDAS selects data pairs separated by 1 year. This condition is relaxed for time series with gaps so that all data are used. Slopes from data pairs spanning a step function produce one-sided outliers that can bias the median. To reduce bias, MIDAS removes outliers and recomputes the median. MIDAS also computes a robust and realistic estimate of trend uncertainty. Statistical tests using GPS data in the rigid North American plate interior show ±0.23 mm/yr root-mean-square (RMS) accuracy in horizontal velocity. In blind tests using synthetic data, MIDAS velocities have an RMS accuracy of ±0.33 mm/yr horizontal, ±1.1 mm/yr up, with a 5th percentile range smaller than all 20 automatic estimators tested. Considering its general nature, MIDAS has the potential for broader application in the geosciences. PMID:27668140

  3. MIDAS robust trend estimator for accurate GPS station velocities without step detection

    PubMed Central

    Blewitt, Geoffrey; Kreemer, Corné; Hammond, William C.; Gazeaux, Julien

    2016-01-01

    Automatic estimation of velocities from GPS coordinate time series is becoming required to cope with the exponentially increasing flood of available data, but problems detectable to the human eye are often overlooked. This motivates us to find an automatic and accurate estimator of trend that is resistant to common problems such as step discontinuities, outliers, seasonality, skewness, and heteroscedasticity. Developed here, Median Interannual Difference Adjusted for Skewness (MIDAS) is a variant of the Theil-Sen median trend estimator, for which the ordinary version is the median of slopes v_ij = (x_j - x_i)/(t_j - t_i) computed between all data pairs i > j. For normally distributed data, Theil-Sen and least squares trend estimates are statistically identical, but unlike least squares, Theil-Sen is resistant to undetected data problems. To mitigate both seasonality and step discontinuities, MIDAS selects data pairs separated by 1 year. This condition is relaxed for time series with gaps so that all data are used. Slopes from data pairs spanning a step function produce one-sided outliers that can bias the median. To reduce bias, MIDAS removes outliers and recomputes the median. MIDAS also computes a robust and realistic estimate of trend uncertainty. Statistical tests using GPS data in the rigid North American plate interior show ±0.23 mm/yr root-mean-square (RMS) accuracy in horizontal velocity. In blind tests using synthetic data, MIDAS velocities have an RMS accuracy of ±0.33 mm/yr horizontal, ±1.1 mm/yr up, with a 5th percentile range smaller than all 20 automatic estimators tested. Considering its general nature, MIDAS has the potential for broader application in the geosciences.

  4. A Fast Algorithm to Estimate the Deepest Points of Lakes for Regional Lake Registration.

    PubMed

    Shen, Zhanfeng; Yu, Xinju; Sheng, Yongwei; Li, Junli; Luo, Jiancheng

    2015-01-01

    When conducting image registration in the U.S. state of Alaska, it is very difficult to locate satisfactory ground control points (GCPs) because ice, snow, and lakes cover much of the ground. However, GCPs can be located by seeking stable points in the extracted lake data. This paper defines a process for estimating the deepest points of lakes, taken as the most stable ground control points for registration. We estimate the deepest point of a lake by computing the center point of the largest inner circle (LIC) of the polygon representing the lake. An LIC-seeking method based on Voronoi diagrams is proposed, and an algorithm based on medial axis simplification (MAS) is introduced. The proposed design also incorporates parallel data computing. A key issue, the selection of a policy for partitioning the vector data, is carefully studied; the chosen policy, which equalizes algorithm complexity across partitions, is shown to be optimal for parallel vector processing. Through several experimental applications, we conclude that the presented approach accurately estimates the deepest points of Alaskan lakes and that MAS with the complexity-equalization policy achieves excellent parallel efficiency.
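
    As a hedged stand-in for the Voronoi/MAS pipeline (which the abstract does not spell out), the same largest-inner-circle center can be obtained with the pole-of-inaccessibility routine shipped with Shapely; the toy polygon is an assumption:

        from shapely.geometry import Polygon
        from shapely.ops import polylabel

        # toy lake outline; real input comes from the extracted lake polygons
        lake = Polygon([(0, 0), (6, 0), (6, 4), (3, 6), (0, 4)])

        # center of the largest inscribed circle, used as the "deepest point" proxy
        center = polylabel(lake, tolerance=0.01)
        print(center.wkt, center.distance(lake.exterior))  # LIC center and radius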

  5. A Fast Algorithm to Estimate the Deepest Points of Lakes for Regional Lake Registration.

    PubMed

    Shen, Zhanfeng; Yu, Xinju; Sheng, Yongwei; Li, Junli; Luo, Jiancheng

    2015-01-01

    When conducting image registration in the U.S. state of Alaska, it is very difficult to locate satisfactory ground control points (GCPs) because ice, snow, and lakes cover much of the ground. However, GCPs can be located by seeking stable points in the extracted lake data. This paper defines a process for estimating the deepest points of lakes, taken as the most stable ground control points for registration. We estimate the deepest point of a lake by computing the center point of the largest inner circle (LIC) of the polygon representing the lake. An LIC-seeking method based on Voronoi diagrams is proposed, and an algorithm based on medial axis simplification (MAS) is introduced. The proposed design also incorporates parallel data computing. A key issue, the selection of a policy for partitioning the vector data, is carefully studied; the chosen policy, which equalizes algorithm complexity across partitions, is shown to be optimal for parallel vector processing. Through several experimental applications, we conclude that the presented approach accurately estimates the deepest points of Alaskan lakes and that MAS with the complexity-equalization policy achieves excellent parallel efficiency. PMID:26656598

  6. A Fast Algorithm to Estimate the Deepest Points of Lakes for Regional Lake Registration

    PubMed Central

    Shen, Zhanfeng; Yu, Xinju; Sheng, Yongwei; Li, Junli; Luo, Jiancheng

    2015-01-01

    When conducting image registration in the U.S. state of Alaska, it is very difficult to locate satisfactory ground control points (GCPs) because ice, snow, and lakes cover much of the ground. However, GCPs can be located by seeking stable points in the extracted lake data. This paper defines a process for estimating the deepest points of lakes, taken as the most stable ground control points for registration. We estimate the deepest point of a lake by computing the center point of the largest inner circle (LIC) of the polygon representing the lake. An LIC-seeking method based on Voronoi diagrams is proposed, and an algorithm based on medial axis simplification (MAS) is introduced. The proposed design also incorporates parallel data computing. A key issue, the selection of a policy for partitioning the vector data, is carefully studied; the chosen policy, which equalizes algorithm complexity across partitions, is shown to be optimal for parallel vector processing. Through several experimental applications, we conclude that the presented approach accurately estimates the deepest points of Alaskan lakes and that MAS with the complexity-equalization policy achieves excellent parallel efficiency. PMID:26656598

  7. Accurate Relative Location Estimates for the North Korean Nuclear Tests Using Empirical Slowness Corrections

    NASA Astrophysics Data System (ADS)

    Gibbons, S. J.; Pabian, F.; Näsholm, S. P.; Kværna, T.; Mykkeltveit, S.

    2016-10-01

    modified velocity gradients reduce the residuals, the relative location uncertainties, and the sensitivity to the combination of stations used. The traveltime gradients appear to be overestimated for the regional phases, and teleseismic relative location estimates are likely to be more accurate despite an apparent lower precision. Calibrations for regional phases are essential given that smaller magnitude events are likely not to be recorded teleseismically. We discuss the implications for the absolute event locations. Placing the 2006 event under a local maximum of overburden at 41.293°N, 129.105°E would imply a location of 41.299°N, 129.075°E for the January 2016 event, providing almost optimal overburden for the later four events.

  8. Rapid Bayesian point source inversion using pattern recognition: bridging the gap between regional scaling relations and accurate physical modelling

    NASA Astrophysics Data System (ADS)

    Valentine, A. P.; Kaeufl, P.; De Wit, R. W. L.; Trampert, J.

    2014-12-01

    Obtaining knowledge about source parameters in (near) real-time during or shortly after an earthquake is essential for mitigating damage and directing resources in the aftermath of the event. Therefore, a variety of real-time source-inversion algorithms have been developed over recent decades. This has been driven by the ever-growing availability of dense seismograph networks in many seismogenic areas of the world and the significant advances in real-time telemetry. By definition, these algorithms rely on short time-windows of sparse, local and regional observations, resulting in source estimates that are highly sensitive to observational errors, noise and missing data. In order to obtain estimates more rapidly, many algorithms are either entirely based on empirical scaling relations or make simplifying assumptions about the Earth's structure, which can in turn lead to biased results. It is therefore essential that realistic uncertainty bounds are estimated along with the parameters. A natural means of propagating probabilistic information on source parameters through the entire processing chain, from first observations to potential end users and decision makers, is provided by the Bayesian formalism. We present a novel method based on pattern recognition allowing us to incorporate highly accurate physical modelling into an uncertainty-aware real-time inversion algorithm. The algorithm is based on a pre-computed Green's functions database, containing a large set of source-receiver paths in a highly heterogeneous crustal model. Unlike similar methods, which often employ a grid search, we use a supervised learning algorithm to relate synthetic waveforms to point source parameters. This training procedure has to be performed only once and leads to a representation of the posterior probability density function p(m|d), the distribution of source parameters m given observations d, which can be evaluated quickly for new data. Owing to the flexibility of the pattern

  9. Geo-accurate model extraction from three-dimensional image-derived point clouds

    NASA Astrophysics Data System (ADS)

    Nilosek, David; Sun, Shaohui; Salvaggio, Carl

    2012-06-01

    A methodology is proposed for automatically extracting primitive models of buildings in a scene from a three-dimensional point cloud derived from multi-view depth extraction techniques. By exploring the information provided by the two-dimensional images and the three-dimensional point cloud, and the relationship between the two, automated methods for extraction are presented. Using the inertial measurement unit (IMU) and global positioning system (GPS) data that accompany the aerial imagery, the geometry is derived in a world coordinate system so the model can be used with GIS software. This work uses imagery collected over downtown Rochester, New York, by the Rochester Institute of Technology's Digital Imaging and Remote Sensing Laboratory's WASP sensor platform. Multiple target buildings have their primitive three-dimensional model geometry extracted using modern point-cloud processing techniques.

  10. Change-point models to estimate the limit of detection.

    PubMed

    May, Ryan C; Chu, Haitao; Ibrahim, Joseph G; Hudgens, Michael G; Lees, Abigail C; Margolis, David M

    2013-12-10

    In many biological and environmental studies, measured data are subject to a limit of detection. The limit of detection is generally defined as the lowest concentration of analyte that can be differentiated from a blank sample with some certainty. Data falling below the limit of detection are left-censored, lying below a level that can be reliably quantified by the measuring device. A great deal of interest lies in estimating the limit of detection for a particular measurement device. In this paper, we propose a change-point model to estimate the limit of detection using data from an experiment with known analyte concentrations. Estimation of the limit of detection proceeds by a two-stage maximum likelihood method. Extensions are considered that allow for censored measurements and data from multiple experiments. A simulation study demonstrates that in some settings the change-point model provides less biased estimates of the limit of detection than conventional methods. The proposed method is then applied to data from an HIV pilot study.
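
    A hedged sketch of a change-point fit of this flavor: a flat noise floor below the change point τ and a linear response above it, with τ profiled over a grid and the remaining coefficients obtained by least squares (a stand-in for the paper's two-stage maximum likelihood; censoring is not handled here):

        import numpy as np

        def fit_change_point(conc, signal, n_grid=200):
            """Fit E[y] = c for x < tau and E[y] = c + b*(x - tau) for x >= tau."""
            best_sse, best = np.inf, None
            for tau in np.linspace(conc.min(), conc.max(), n_grid):
                ramp = np.clip(conc - tau, 0.0, None)   # 0 below tau, linear above
                X = np.column_stack([np.ones_like(conc), ramp])
                coef, *_ = np.linalg.lstsq(X, signal, rcond=None)
                resid = signal - X @ coef
                sse = resid @ resid
                if sse < best_sse:
                    best_sse, best = sse, (tau, coef[0], coef[1])
            return best   # (tau_hat, floor_hat, slope_hat)

        rng = np.random.default_rng(1)
        x = np.linspace(0.0, 10.0, 120)                 # known analyte concentrations
        y = np.where(x < 3, 0.5, 0.5 + 0.8 * (x - 3)) + rng.normal(0, 0.1, x.size)
        print(fit_change_point(x, y))                   # tau_hat should be near 3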

  11. Accurate estimation of human body orientation from RGB-D sensors.

    PubMed

    Liu, Wu; Zhang, Yongdong; Tang, Sheng; Tang, Jinhui; Hong, Richang; Li, Jintao

    2013-10-01

    Accurate estimation of human body orientation can significantly enhance the analysis of human behavior, a fundamental task in the field of computer vision. However, existing orientation estimation methods cannot handle the variety of body poses and appearances. In this paper, we propose an innovative RGB-D-based orientation estimation method to address these challenges. By utilizing RGB-D information, which can be acquired in real time by RGB-D sensors, our method is robust to cluttered environments, illumination change, and partial occlusions. Specifically, efficient static and motion cue extraction methods based on RGB-D superpixels are proposed to reduce the noise of depth data. Since it is hard to discriminate all 360° of orientation using static cues or motion cues independently, we propose a dynamic Bayesian network system (DBNS) to exploit the complementary nature of both cue types. To verify the proposed method, we built an RGB-D-based human body orientation dataset that covers a wide diversity of poses and appearances. Intensive experimental evaluations on this dataset demonstrate the effectiveness and efficiency of the proposed method. PMID:23893759

  12. Efficient and accurate estimation of relative order tensors from λ-maps

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, Rishi; Miao, Xijiang; Shealy, Paul; Valafar, Homayoun

    2009-06-01

    The rapid increase in the availability of RDC data from multiple alignment media in recent years has necessitated the development of more sophisticated analyses that extract the RDC data's full information content. This article presents an analysis of the distribution of RDCs from two media (2D-RDC data), using the information obtained from a λ-map. This article also introduces an efficient algorithm, which leverages these findings to extract the order tensors for each alignment medium using unassigned RDC data in the absence of any structural information. The results of applying this 2D-RDC analysis method to synthetic and experimental data are reported in this article. The relative order tensor estimates obtained from the 2D-RDC analysis are compared to order tensors obtained from the program REDCAT after using assignment and structural information. The final comparisons indicate that the relative order tensors estimated from the unassigned 2D-RDC method very closely match the results from methods that require assignment and structural information. The presented method is successful even in cases with small datasets. The results of analyzing experimental RDC data for the protein 1P7E are presented to demonstrate the potential of the presented work in accurately estimating the principal order parameters from RDC data that incompletely sample the RDC space. In addition to the new algorithm, a discussion of the uniqueness of the solutions is presented; no more than two clusters of distinct solutions have been shown to satisfy each λ-map.

  13. Accurate estimation of human body orientation from RGB-D sensors.

    PubMed

    Liu, Wu; Zhang, Yongdong; Tang, Sheng; Tang, Jinhui; Hong, Richang; Li, Jintao

    2013-10-01

    Accurate estimation of human body orientation can significantly enhance the analysis of human behavior, a fundamental task in the field of computer vision. However, existing orientation estimation methods cannot handle the variety of body poses and appearances. In this paper, we propose an innovative RGB-D-based orientation estimation method to address these challenges. By utilizing RGB-D information, which can be acquired in real time by RGB-D sensors, our method is robust to cluttered environments, illumination change, and partial occlusions. Specifically, efficient static and motion cue extraction methods based on RGB-D superpixels are proposed to reduce the noise of depth data. Since it is hard to discriminate all 360° of orientation using static cues or motion cues independently, we propose a dynamic Bayesian network system (DBNS) to exploit the complementary nature of both cue types. To verify the proposed method, we built an RGB-D-based human body orientation dataset that covers a wide diversity of poses and appearances. Intensive experimental evaluations on this dataset demonstrate the effectiveness and efficiency of the proposed method.

  14. Accurate estimation of the RMS emittance from single current amplifier data

    SciTech Connect

    Stockli, Martin P.; Welton, R.F.; Keller, R.; Letchford, A.P.; Thomae, R.W.; Thomason, J.W.G.

    2002-05-31

    This paper presents the SCUBEEx rms emittance analysis, a self-consistent, unbiased elliptical exclusion method, which combines traditional data-reduction methods with statistical methods to obtain accurate estimates of the rms emittance. Rather than considering individual data points, the method tracks the average current density outside a well-selected, variable boundary to separate the measured beam halo from the background. The average outside current density is assumed to be part of a uniform background and not part of the particle beam; therefore, the average outside current is subtracted from the data before the rms emittance is evaluated within the boundary. As the boundary area is increased, the average outside current and the inside rms emittance form plateaus once all data containing part of the particle beam are inside the boundary. These plateaus mark the smallest acceptable exclusion boundary and provide unbiased estimates for the average background and the rms emittance. Small, trendless variations within the plateaus allow the uncertainties of the estimates, caused by variations of the measured background outside the smallest acceptable exclusion boundary, to be determined. The robustness of the method is established with complementary variations of the exclusion boundary. This paper presents a detailed comparison between traditional data-reduction methods and SCUBEEx by analyzing two complementary sets of emittance data obtained with H⁻ ion sources from Lawrence Berkeley National Laboratory and ISIS.
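
    The plateau-seeking idea translates into a short scan: grow an elliptical exclusion boundary, treat the mean density outside it as uniform background, subtract that from the data inside, and watch the rms emittance level off. This sketch works on gridded (x, x') density values; the toy beam, background level, and boundary scales are assumptions:

        import numpy as np

        def rms_emittance(x, xp, w):
            """Weighted rms emittance sqrt(<x^2><x'^2> - <xx'>^2)."""
            ws = w.sum()
            xm, pm = (w * x).sum() / ws, (w * xp).sum() / ws
            sxx = (w * (x - xm) ** 2).sum() / ws
            spp = (w * (xp - pm) ** 2).sum() / ws
            sxp = (w * (x - xm) * (xp - pm)).sum() / ws
            return np.sqrt(max(sxx * spp - sxp**2, 0.0))

        def scubeex_scan(x, xp, density, scales=np.linspace(0.5, 4.0, 15)):
            """Emittance vs. boundary size; the plateau gives the unbiased estimate."""
            ws = density.sum()
            xm, pm = (density * x).sum() / ws, (density * xp).sum() / ws
            V = np.vstack([x - xm, xp - pm]).T
            inv = np.linalg.inv(np.cov(V.T, aweights=density))
            d2 = np.einsum("ij,jk,ik->i", V, inv, V)    # Mahalanobis radius^2
            out = []
            for s in scales:
                inside = d2 <= s**2
                bg = density[~inside].mean() if (~inside).any() else 0.0
                w_in = np.clip(density[inside] - bg, 0.0, None)  # subtract background
                out.append((s, rms_emittance(x[inside], xp[inside], w_in)))
            return out

        # toy measurement: Gaussian beam (emittance 0.5) on a uniform background
        xg, pg = np.meshgrid(np.linspace(-5, 5, 101), np.linspace(-5, 5, 101))
        x, xp = xg.ravel(), pg.ravel()
        density = np.exp(-(x**2 / 2.0 + xp**2 / 0.5)) + 0.02
        for s, eps in scubeex_scan(x, xp, density):
            print(f"scale {s:4.2f}  emittance {eps:6.3f}")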

  15. Motion estimation using point cluster method and Kalman filter.

    PubMed

    Senesh, M; Wolf, A

    2009-05-01

    The most frequently used method in three-dimensional human gait analysis involves placing markers on the skin of the analyzed segment. This introduces a significant artifact, which strongly influences estimates of bone position and orientation and of joint kinematics. In this study, we tested and evaluated the effect of adding a Kalman filter procedure to the previously reported point cluster technique (PCT) for estimating rigid body motion. We demonstrated the procedures by motion analysis of a compound planar pendulum from indirect opto-electronic measurements of markers attached to an elastic appendage that is restrained to slide along the rigid body's long axis. The elastic frequency is close to the pendulum frequency, as in the biomechanical problem, where the soft tissue frequency content is similar to the actual movement of the bones. Comparing the real pendulum angle to that obtained by several estimation procedures (PCT, Kalman filter followed by PCT, and low-pass filter followed by PCT) enables evaluation of the accuracy of the procedures. When comparing the maximal amplitude, no effect was noted from adding the Kalman filter; however, a closer look at the signal revealed that the estimated angle based on the PCT method alone was very noisy, with fluctuations, while the estimated angle based on the Kalman filter followed by the PCT was smooth. The instantaneous frequencies obtained from the estimated angle based on the PCT method alone were also more dispersed than those obtained when the Kalman filter preceded the PCT. Adding a Kalman filter to the PCT method thus results in a smoother signal that better represents the real motion, with less signal distortion than a digital low-pass filter. Furthermore, it can be concluded that adding a Kalman filter to the PCT procedure substantially reduces the dispersion of the maximal and minimal
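
    For readers who want the flavor of the filtering step, here is a minimal constant-velocity Kalman filter applied to one marker coordinate before any PCT rigid-body fit; the sampling interval and the noise levels q and r are assumptions to be tuned against the soft-tissue artifact:

        import numpy as np

        def kalman_position(z, dt=0.01, q=500.0, r=2.5e-3):
            """Filter noisy positions z with a constant-velocity state model."""
            z = np.asarray(z, float)
            F = np.array([[1.0, dt], [0.0, 1.0]])              # state transition
            Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                              [dt**2 / 2, dt]])                # process noise
            H = np.array([[1.0, 0.0]])                         # observe position only
            x, P = np.array([z[0], 0.0]), np.eye(2)
            out = np.empty_like(z)
            for k, zk in enumerate(z):
                x, P = F @ x, F @ P @ F.T + Q                  # predict
                S = float(H @ P @ H.T) + r                     # innovation variance
                K = (P @ H.T) / S                              # Kalman gain, shape (2, 1)
                x = x + (K * (zk - float(H @ x))).ravel()      # update state
                P = (np.eye(2) - K @ H) @ P
                out[k] = x[0]
            return out

        # toy marker: 2 Hz pendulum-like motion plus measurement noise (std 0.05)
        t = np.arange(0, 2, 0.01)
        rng = np.random.default_rng(3)
        truth = np.sin(2 * np.pi * 2 * t)
        smooth = kalman_position(truth + rng.normal(0, 0.05, t.size))
        print(np.std(smooth - truth))   # typically below the raw 0.05 noise level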

  16. Quick and accurate estimation of the elastic constants using the minimum image method

    NASA Astrophysics Data System (ADS)

    Tretiakov, Konstantin V.; Wojciechowski, Krzysztof W.

    2015-04-01

    A method for determining elastic properties using the minimum image method (MIM) is proposed and tested on a model system of particles interacting through the Lennard-Jones (LJ) potential. The elastic constants of the LJ system are determined in the thermodynamic limit, N → ∞, using the Monte Carlo (MC) method in the NVT and NPT ensembles. The simulation results show that the contribution of long-range interactions cannot be ignored when determining the elastic constants, as doing so leads to erroneous results. In addition, the simulations reveal that including the further interactions of each particle with all of its minimum-image neighbors, even for small systems, yields results very close to the values of the elastic constants in the thermodynamic limit. This enables quick and accurate estimation of the elastic constants using very small samples.
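
    The minimum image convention at the heart of the MIM is a one-liner: each displacement component is wrapped so that a particle interacts with the nearest periodic copy of its neighbor. A small sketch (cubic box assumed):

        import numpy as np

        def minimum_image(r_ij, box):
            """Wrap a displacement vector to its minimum image in a cubic box."""
            return r_ij - box * np.round(r_ij / box)

        box = 10.0                               # box side, LJ units (assumed)
        r_ij = np.array([9.2, -0.4, 4.9])        # raw displacement between particles
        print(minimum_image(r_ij, box))          # -> [-0.8 -0.4  4.9]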

  17. Pitfalls in accurate estimation of overdiagnosis: implications for screening policy and compliance.

    PubMed

    Feig, Stephen A

    2013-01-01

    Stories in the public media claiming that 30 to 50% of screen-detected breast cancers are overdiagnosed dissuade women from being screened, because overdiagnosed cancers would never result in death if undetected yet do result in unnecessary treatment. However, such concerns are unwarranted: the frequency of overdiagnosis, when properly calculated, is only 0 to 5%. In the previous issue of Breast Cancer Research, Duffy and Parmar report that accurate estimation of the rate of overdiagnosis must recognize the effect of lead time on detection rates and the consequent requirement for an adequate number of years of follow-up. These indispensable elements were absent from highly publicized studies that overestimated the frequency of overdiagnosis.

  18. Estimation of Distributed Fermat-Point Location for Wireless Sensor Networking

    PubMed Central

    Huang, Po-Hsian; Chen, Jiann-Liang; Larosa, Yanuarius Teofilus; Chiang, Tsui-Lien

    2011-01-01

    This work presents a localization scheme for wireless sensor networks (WSNs) based on a proposed connectivity-based RF localization strategy called the distributed Fermat-point location estimation algorithm (DFPLE). DFPLE uses the triangular area formed by the intersections of three neighboring beacon nodes; the Fermat point, the point minimizing the total distance to the three vertices of the triangle, is then used to refine the estimated location area and achieve minimum error in estimating sensor node locations. DFPLE solves the problems of large errors and poor performance encountered by localization schemes based on bounding-box algorithms. Performance analysis of a 200-node development environment reveals that when the number of sensor nodes is below 150, the mean error decreases rapidly as node density increases, and when the number of sensor nodes exceeds 170, the mean error remains below 1% as node density increases. When the number of beacon nodes is less than 60, normal nodes lack sufficient beacon nodes for their locations to be estimated; however, the mean error changes only slightly as the number of beacon nodes increases above 60. Simulation results reveal that the proposed algorithm estimates sensor positions more accurately than existing algorithms and improves upon conventional bounding-box strategies. PMID:22163851
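
    Because the abstract leans on the Fermat point of a beacon triangle, a compact way to compute it is the Weiszfeld iteration, which converges to the point minimizing the summed distance to the three vertices (valid when every triangle angle is below 120°); the beacon coordinates are assumptions:

        import numpy as np

        def fermat_point(vertices, n_iter=100):
            """Weiszfeld iteration for the point minimizing total vertex distance."""
            p = vertices.mean(axis=0)                # start from the centroid
            for _ in range(n_iter):
                d = np.linalg.norm(vertices - p, axis=1)
                d = np.maximum(d, 1e-12)             # guard against division by zero
                p = (vertices / d[:, None]).sum(axis=0) / (1.0 / d).sum()
            return p

        beacons = np.array([[0.0, 0.0], [4.0, 0.0], [1.0, 3.0]])
        print(fermat_point(beacons))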

  19. Estimation of distributed Fermat-point location for wireless sensor networking.

    PubMed

    Huang, Po-Hsian; Chen, Jiann-Liang; Larosa, Yanuarius Teofilus; Chiang, Tsui-Lien

    2011-01-01

    This work presents a localization scheme for wireless sensor networks (WSNs) based on a proposed connectivity-based RF localization strategy called the distributed Fermat-point location estimation algorithm (DFPLE). DFPLE uses the triangular area formed by the intersections of three neighboring beacon nodes; the Fermat point, the point minimizing the total distance to the three vertices of the triangle, is then used to refine the estimated location area and achieve minimum error in estimating sensor node locations. DFPLE solves the problems of large errors and poor performance encountered by localization schemes based on bounding-box algorithms. Performance analysis of a 200-node development environment reveals that when the number of sensor nodes is below 150, the mean error decreases rapidly as node density increases, and when the number of sensor nodes exceeds 170, the mean error remains below 1% as node density increases. When the number of beacon nodes is less than 60, normal nodes lack sufficient beacon nodes for their locations to be estimated; however, the mean error changes only slightly as the number of beacon nodes increases above 60. Simulation results reveal that the proposed algorithm estimates sensor positions more accurately than existing algorithms and improves upon conventional bounding-box strategies.

  20. A Simple yet Accurate Method for the Estimation of the Biovolume of Planktonic Microorganisms.

    PubMed

    Saccà, Alessandro

    2016-01-01

    Determining the biomass of microbial plankton is central to the study of fluxes of energy and materials in aquatic ecosystems. This is typically accomplished by applying proper volume-to-carbon conversion factors to group-specific abundances and biovolumes. A critical step in this approach is the accurate estimation of biovolume from two-dimensional (2D) data such as those available through conventional microscopy techniques or flow-through imaging systems. This paper describes a simple yet accurate method for assessing the biovolume of planktonic microorganisms, which works with any image analysis system that allows the measurement of linear distances and the estimation of the cross-sectional area of an object from a 2D digital image. The proposed method is based on Archimedes' relationship between the volume of a sphere and that of the cylinder in which the sphere is inscribed, plus a coefficient of 'unellipticity' introduced here. Validation and careful evaluation of the method are provided using a variety of approaches. The new method proved to be highly precise with all convex shapes characterised by approximate rotational symmetry, and combining it with an existing method specific to highly concave or branched shapes covers the great majority of cases with good reliability. Thanks to its accuracy, consistency, and low resource demand, the new method can conveniently be used in place of any existing method designed for convex shapes and can readily be coupled with automated cell imaging technologies, including state-of-the-art flow-through imaging devices. PMID:27195667

  1. A Simple yet Accurate Method for the Estimation of the Biovolume of Planktonic Microorganisms

    PubMed Central

    Saccà, Alessandro

    2016-01-01

    Determining the biomass of microbial plankton is central to the study of fluxes of energy and materials in aquatic ecosystems. This is typically accomplished by applying proper volume-to-carbon conversion factors to group-specific abundances and biovolumes. A critical step in this approach is the accurate estimation of biovolume from two-dimensional (2D) data such as those available through conventional microscopy techniques or flow-through imaging systems. This paper describes a simple yet accurate method for assessing the biovolume of planktonic microorganisms, which works with any image analysis system that allows the measurement of linear distances and the estimation of the cross-sectional area of an object from a 2D digital image. The proposed method is based on Archimedes' relationship between the volume of a sphere and that of the cylinder in which the sphere is inscribed, plus a coefficient of 'unellipticity' introduced here. Validation and careful evaluation of the method are provided using a variety of approaches. The new method proved to be highly precise with all convex shapes characterised by approximate rotational symmetry, and combining it with an existing method specific to highly concave or branched shapes covers the great majority of cases with good reliability. Thanks to its accuracy, consistency, and low resource demand, the new method can conveniently be used in place of any existing method designed for convex shapes and can readily be coupled with automated cell imaging technologies, including state-of-the-art flow-through imaging devices. PMID:27195667

  2. Incentives Increase Participation in Mass Dog Rabies Vaccination Clinics and Methods of Coverage Estimation Are Assessed to Be Accurate

    PubMed Central

    Minyoo, Abel B; Steinmetz, Melissa; Czupryna, Anna; Bigambo, Machunde; Mzimbiri, Imam; Powell, George; Gwakisa, Paul; Lankester, Felix

    2015-01-01

    In this study we show that incentives (dog collars and owner wristbands) are effective at increasing owner participation in mass dog rabies vaccination clinics, and we conclude that household questionnaire surveys and the mark-re-sight (transect survey) method for estimating post-vaccination coverage are accurate when all dogs, including puppies, are included. Incentives were distributed during central-point rabies vaccination clinics in northern Tanzania to quantify their effect on owner participation. In villages where incentives were handed out, participation increased, with an average of 34 more dogs being vaccinated. Through economies of scale, this represents a reduction in the cost-per-dog of $0.47, which is the price threshold below which the cost of the incentive must fall to be economically viable. Additionally, vaccination coverage levels were determined in ten villages through the gold-standard village-wide census technique as well as through two cheaper and quicker methods (a randomized household questionnaire and the transect survey). Cost data were also collected. Both non-gold-standard methods were found to be accurate when puppies were included in the calculations, although the transect survey overestimated and the household questionnaire survey underestimated the coverage. Given that additional demographic data can be collected through the household questionnaire survey, and that its coverage estimate is more conservative, we recommend this method. Despite the use of incentives, the average vaccination coverage was below the 70% threshold for eliminating rabies; we discuss the reasons and suggest solutions to improve coverage. Given recent international targets to eliminate rabies, this study provides valuable and timely data to help improve mass dog vaccination programs in Africa and elsewhere. PMID:26633821

  3. Incentives Increase Participation in Mass Dog Rabies Vaccination Clinics and Methods of Coverage Estimation Are Assessed to Be Accurate.

    PubMed

    Minyoo, Abel B; Steinmetz, Melissa; Czupryna, Anna; Bigambo, Machunde; Mzimbiri, Imam; Powell, George; Gwakisa, Paul; Lankester, Felix

    2015-12-01

    In this study we show that incentives (dog collars and owner wristbands) are effective at increasing owner participation in mass dog rabies vaccination clinics, and we conclude that household questionnaire surveys and the mark-re-sight (transect survey) method for estimating post-vaccination coverage are accurate when all dogs, including puppies, are included. Incentives were distributed during central-point rabies vaccination clinics in northern Tanzania to quantify their effect on owner participation. In villages where incentives were handed out, participation increased, with an average of 34 more dogs being vaccinated. Through economies of scale, this represents a reduction in the cost-per-dog of $0.47, which is the price threshold below which the cost of the incentive must fall to be economically viable. Additionally, vaccination coverage levels were determined in ten villages through the gold-standard village-wide census technique as well as through two cheaper and quicker methods (a randomized household questionnaire and the transect survey). Cost data were also collected. Both non-gold-standard methods were found to be accurate when puppies were included in the calculations, although the transect survey overestimated and the household questionnaire survey underestimated the coverage. Given that additional demographic data can be collected through the household questionnaire survey, and that its coverage estimate is more conservative, we recommend this method. Despite the use of incentives, the average vaccination coverage was below the 70% threshold for eliminating rabies; we discuss the reasons and suggest solutions to improve coverage. Given recent international targets to eliminate rabies, this study provides valuable and timely data to help improve mass dog vaccination programs in Africa and elsewhere.

  4. Incentives Increase Participation in Mass Dog Rabies Vaccination Clinics and Methods of Coverage Estimation Are Assessed to Be Accurate.

    PubMed

    Minyoo, Abel B; Steinmetz, Melissa; Czupryna, Anna; Bigambo, Machunde; Mzimbiri, Imam; Powell, George; Gwakisa, Paul; Lankester, Felix

    2015-12-01

    In this study we show that incentives (dog collars and owner wristbands) are effective at increasing owner participation in mass dog rabies vaccination clinics, and we conclude that household questionnaire surveys and the mark-re-sight (transect survey) method for estimating post-vaccination coverage are accurate when all dogs, including puppies, are included. Incentives were distributed during central-point rabies vaccination clinics in northern Tanzania to quantify their effect on owner participation. In villages where incentives were handed out, participation increased, with an average of 34 more dogs being vaccinated. Through economies of scale, this represents a reduction in the cost-per-dog of $0.47, which is the price threshold below which the cost of the incentive must fall to be economically viable. Additionally, vaccination coverage levels were determined in ten villages through the gold-standard village-wide census technique as well as through two cheaper and quicker methods (a randomized household questionnaire and the transect survey). Cost data were also collected. Both non-gold-standard methods were found to be accurate when puppies were included in the calculations, although the transect survey overestimated and the household questionnaire survey underestimated the coverage. Given that additional demographic data can be collected through the household questionnaire survey, and that its coverage estimate is more conservative, we recommend this method. Despite the use of incentives, the average vaccination coverage was below the 70% threshold for eliminating rabies; we discuss the reasons and suggest solutions to improve coverage. Given recent international targets to eliminate rabies, this study provides valuable and timely data to help improve mass dog vaccination programs in Africa and elsewhere. PMID:26633821

  5. How Accurate and Robust Are the Phylogenetic Estimates of Austronesian Language Relationships?

    PubMed Central

    Greenhill, Simon J.; Drummond, Alexei J.; Gray, Russell D.

    2010-01-01

    We recently used computational phylogenetic methods on lexical data to test between two scenarios for the peopling of the Pacific. Our analyses of lexical data supported a pulse-pause scenario of Pacific settlement in which the Austronesian speakers originated in Taiwan around 5,200 years ago and rapidly spread through the Pacific in a series of expansion pulses and settlement pauses. We claimed that there was high congruence between traditional language subgroups and those observed in the language phylogenies, and that the estimated age of the Austronesian expansion at 5,200 years ago was consistent with the archaeological evidence. However, the congruence between the language phylogenies and the evidence from historical linguistics was not quantitatively assessed using tree comparison metrics. The robustness of the divergence time estimates to different calibration points was also not investigated exhaustively. Here we address these limitations by using a systematic tree comparison metric to calculate the similarity between the Bayesian phylogenetic trees and the subgroups proposed by historical linguistics, and by re-estimating the age of the Austronesian expansion using only the most robust calibrations. The results show that the Austronesian language phylogenies are highly congruent with the traditional subgroupings, and the date estimates are robust even when calculated using a restricted set of historical calibrations. PMID:20224774

  6. Accurate Estimation of the Fine Layering Effect on the Wave Propagation in the Carbonate Rocks

    NASA Astrophysics Data System (ADS)

    Bouchaala, F.; Ali, M. Y.

    2014-12-01

    The attenuation experienced by a seismic wave during its propagation can be divided into two main parts: the scattering and the intrinsic attenuation. The scattering is an elastic redistribution of the energy due to the medium heterogeneities, whereas the intrinsic attenuation is an inelastic phenomenon, mainly due to fluid-grain friction during the wave passage. The intrinsic attenuation is directly related to the physical characteristics of the medium, so this parameter can be used for media characterization and fluid detection, which is beneficial for the oil and gas industry. The intrinsic attenuation is estimated by subtracting the scattering from the total attenuation; the accuracy of the intrinsic attenuation is therefore directly dependent on the accuracy of the total attenuation and the scattering. The total attenuation can be estimated from the recorded waves by using in-situ methods such as the spectral ratio and frequency shift methods. The scattering is estimated by treating the heterogeneities as a succession of stacked layers, each characterized by a single density and velocity. The accuracy of the scattering is strongly dependent on the layer thicknesses, especially in media composed of carbonate rocks, which are known for their strong heterogeneity. Previous studies gave some assumptions for the choice of the layer thickness, but they showed limitations, especially in the case of carbonate rocks. In this study we established a relationship between the layer thickness and the frequency of propagation through a mathematical development of the generalized O'Doherty-Anstey formula. We validated this relationship through synthetic tests and real data from a VSP carried out over an onshore oilfield in the emirate of Abu Dhabi, United Arab Emirates, primarily composed of carbonate rocks. The results showed the utility of our relationship for an accurate estimation of the scattering

  7. Reconstruction of the activity of point sources for the accurate characterization of nuclear waste drums by segmented gamma scanning.

    PubMed

    Krings, Thomas; Mauerhofer, Eric

    2011-06-01

    This work improves the reliability and accuracy of the reconstruction of the total isotope activity content in heterogeneous nuclear waste drums containing point sources. The method is based on χ(2)-fits of the angular dependent count rate distribution measured during a drum rotation in segmented gamma scanning. A new description of the analytical calculation of the angular count rate distribution is introduced, based on a more precise model of the collimated detector. The new description is validated and compared to the old description using MCNP5 simulations of angular dependent count rate distributions of Co-60 and Cs-137 point sources. It is shown that the new model describes the angular dependent count rate distribution significantly more accurately than the old model. Hence, the reconstruction of the activity is more accurate and the errors are considerably reduced, leading to more reliable results. Furthermore, the results are compared to the conventional reconstruction method assuming a homogeneous matrix and activity distribution.

  8. Point estimation of simultaneous methods for solving polynomial equations

    NASA Astrophysics Data System (ADS)

    Petkovic, Miodrag S.; Petkovic, Ljiljana D.; Rancic, Lidija Z.

    2007-08-01

    The construction of computationally verifiable initial conditions which provide both the guaranteed and fast convergence of the numerical root-finding algorithm is one of the most important problems in solving nonlinear equations. Smale's "point estimation theory" from 1981 was a great advance in this topic; it treats convergence conditions and the domain of convergence in solving an equation f(z)=0 using only the information of f at the initial point z0. The study of a general problem of the construction of initial conditions of practical interest providing guaranteed convergence is very difficult, even in the case of algebraic polynomials. In the light of Smale's point estimation theory, an efficient approach based on some results concerning localization of polynomial zeros and convergent sequences is applied in this paper to iterative methods for the simultaneous determination of simple zeros of polynomials. We state new, improved initial conditions which provide the guaranteed convergence of frequently used simultaneous methods for solving algebraic equations: Ehrlich-Aberth's method, Ehrlich-Aberth's method with Newton's correction, Borsch-Supan's method with Weierstrass' correction and Halley-like (or Wang-Zheng) method. The introduced concept offers not only a clear insight into the convergence analysis of sequences generated by the considered methods, but also explicitly gives their order of convergence. The stated initial conditions are of significant practical importance since they are computationally verifiable; they depend only on the coefficients of a given polynomial, its degree n and initial approximations to polynomial zeros.
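
    As a minimal illustration of the kind of simultaneous iteration analyzed above, the following Python sketch implements the Ehrlich-Aberth method (the paper presents theory, not code). The starting approximations and tolerance are illustrative and do not implement the paper's computationally verifiable initial conditions.

        import numpy as np

        def ehrlich_aberth(coeffs, z0, tol=1e-12, max_iter=100):
            """Simultaneously refine approximations z0 to all simple zeros of
            the polynomial with coefficients coeffs (highest degree first)."""
            z = np.array(z0, dtype=complex)
            dcoeffs = np.polyder(coeffs)
            for _ in range(max_iter):
                newton = np.polyval(coeffs, z) / np.polyval(dcoeffs, z)  # p/p'
                diff = z[:, None] - z[None, :]
                np.fill_diagonal(diff, 1.0)   # placeholder, avoids dividing by zero
                inv = 1.0 / diff
                np.fill_diagonal(inv, 0.0)    # drop the j == i terms
                step = newton / (1.0 - newton * inv.sum(axis=1))
                z = z - step
                if np.max(np.abs(step)) < tol:
                    break
            return z

        # Example: the three cube roots of unity from rough starting guesses
        print(ehrlich_aberth([1, 0, 0, -1], [1.2, -0.4 + 1.1j, -0.4 - 1.1j]))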

  9. So, What's the Answer? Moving Beyond the "Point Estimate"

    NASA Astrophysics Data System (ADS)

    Pathiraja, S. D.

    2014-12-01

    Uncertainty is an ever present issue in hydrology. The immense difficulties in resolving complex processes and their spatio-temporal variations, combined with a lack of observations means the accuracy of our predictions is often unknown. An immense amount of research is undertaken to quantify uncertainty in hydrologic predictions. With all this effort, there seems to be little appreciation of how this uncertainty quantification will be utilised in practice. Planners and decision makers who rely on modelling output from hydrologists and engineers are traditionally interested in a single point estimate - for instance, a single 1% Annual Exceedance Probability flow value. How can water scientists encourage practitioners to move beyond a "point estimate" mentality to appreciate the potentially wide range of uncertainty in our estimates? How can this uncertainty be translated into an appropriate risk based decision making framework? In this talk I investigate some of the problems with adopting a single answer for decision making and discuss some ways to promote a greater appreciation of uncertainty in practice.

  10. Exterior Orientation Estimation of Oblique Aerial Imagery Using Vanishing Points

    NASA Astrophysics Data System (ADS)

    Verykokou, Styliani; Ioannidis, Charalabos

    2016-06-01

    In this paper, a methodology for the calculation of rough exterior orientation (EO) parameters of multiple large-scale overlapping oblique aerial images, in the case that GPS/INS information is not available (e.g., for old datasets), is presented. It consists of five main steps: (a) the determination of the overlapping image pairs and the single image in which four ground control points have to be measured; (b) the computation of the transformation parameters from every image to the coordinate reference system; (c) the rough estimation of the camera interior orientation parameters; (d) the estimation of the true horizon line and the nadir point of each image; (e) the calculation of the rough EO parameters of each image. A developed software suite implementing the proposed methodology is tested using a set of UAV multi-perspective oblique aerial images. Several tests are performed to assess the errors and show that the estimated EO parameters can be used either as initial approximations for a bundle adjustment procedure or as rough georeferencing information for several applications, like 3D modelling, even by non-photogrammetrists, because of the minimal user intervention needed. Finally, comparisons with a commercial software package are made, in terms of automation and correctness of the computed EO parameters.

  11. Can student health professionals accurately estimate alcohol content in commonly occurring drinks?

    PubMed Central

    Sinclair, Julia; Searle, Emma

    2016-01-01

    Objectives: Correct identification of alcohol as a contributor to, or comorbidity of, many psychiatric diseases requires health professionals to be competent and confident to take an accurate alcohol history. Being able to estimate (or calculate) the alcohol content in commonly consumed drinks is a prerequisite for quantifying levels of alcohol consumption. The aim of this study was to assess this ability in medical and nursing students. Methods: A cross-sectional survey of 891 medical and nursing students across different years of training was conducted. Students were asked the alcohol content of 10 different alcoholic drinks by seeing a slide of the drink (with picture, volume and percentage of alcohol by volume) for 30 s. Results: Overall, the mean number of correctly estimated drinks (out of the 10 tested) was 2.4, increasing to just over 3 if a 10% margin of error was used. Wine and premium strength beers were underestimated by over 50% of students. Those who drank alcohol themselves, or who were further on in their clinical training, did better on the task, but overall the levels remained low. Conclusions: Knowledge of, or the ability to work out, the alcohol content of commonly consumed drinks is poor, and further research is needed to understand the reasons for this and the impact this may have on the likelihood to undertake screening or initiate treatment. PMID:27536344

  12. Ocean Lidar Measurements of Beam Attenuation and a Roadmap to Accurate Phytoplankton Biomass Estimates

    NASA Astrophysics Data System (ADS)

    Hu, Yongxiang; Behrenfeld, Mike; Hostetler, Chris; Pelon, Jacques; Trepte, Charles; Hair, John; Slade, Wayne; Cetinic, Ivona; Vaughan, Mark; Lu, Xiaomei; Zhai, Pengwang; Weimer, Carl; Winker, David; Verhappen, Carolus C.; Butler, Carolyn; Liu, Zhaoyan; Hunt, Bill; Omar, Ali; Rodier, Sharon; Lifermann, Anne; Josset, Damien; Hou, Weilin; MacDonnell, David; Rhew, Ray

    2016-06-01

    Beam attenuation coefficient, c, provides an important optical index of plankton standing stocks, such as phytoplankton biomass and total particulate carbon concentration. Unfortunately, c has proven difficult to quantify through remote sensing. Here, we introduce an innovative approach for estimating c using lidar depolarization measurements and diffuse attenuation coefficients from ocean color products or lidar measurements of Brillouin scattering. The new approach is based on a theoretical formula established from Monte Carlo simulations that links the depolarization ratio of sea water to the ratio of the diffuse attenuation coefficient Kd to the beam attenuation coefficient c (i.e., a multiple scattering factor). On July 17, 2014, the CALIPSO satellite was tilted 30° off-nadir for one nighttime orbit in order to minimize ocean surface backscatter and demonstrate the lidar ocean subsurface measurement concept from space. Depolarization ratios of ocean subsurface backscatter were measured accurately. Beam attenuation coefficients computed from the depolarization ratio measurements compare well with empirical estimates from ocean color measurements. We further verify the beam attenuation coefficient retrievals using aircraft-based high spectral resolution lidar (HSRL) data that are collocated with in-water optical measurements.

  13. mBEEF: An accurate semi-local Bayesian error estimation density functional

    NASA Astrophysics Data System (ADS)

    Wellendorff, Jess; Lundgaard, Keld T.; Jacobsen, Karsten W.; Bligaard, Thomas

    2014-04-01

    We present a general-purpose meta-generalized gradient approximation (MGGA) exchange-correlation functional generated within the Bayesian error estimation functional framework [J. Wellendorff, K. T. Lundgaard, A. Møgelhøj, V. Petzold, D. D. Landis, J. K. Nørskov, T. Bligaard, and K. W. Jacobsen, Phys. Rev. B 85, 235149 (2012)]. The functional is designed to give reasonably accurate density functional theory (DFT) predictions of a broad range of properties in materials physics and chemistry, while exhibiting a high degree of transferability. Particularly, it improves upon solid cohesive energies and lattice constants over the BEEF-vdW functional without compromising high performance on adsorption and reaction energies. We thus expect it to be particularly well-suited for studies in surface science and catalysis. An ensemble of functionals for error estimation in DFT is an intrinsic feature of exchange-correlation models designed this way, and we show how the Bayesian ensemble may provide a systematic analysis of the reliability of DFT based simulations.

  14. Greater contrast in Martian hydrological history from more accurate estimates of paleodischarge

    NASA Astrophysics Data System (ADS)

    Jacobsen, R. E.; Burr, D. M.

    2016-09-01

    Correlative width-discharge relationships from the Missouri River Basin are commonly used to estimate fluvial paleodischarge on Mars. However, hydraulic geometry provides alternative, and causal, width-discharge relationships derived from broader samples of channels, including those in reduced-gravity (submarine) environments. Comparison of these relationships implies that causal relationships from hydraulic geometry should yield more accurate and more precise discharge estimates. Our remote analysis of a Martian-terrestrial analog channel, combined with in situ discharge data, substantiates this implication. Applied to Martian features, these results imply that paleodischarges of interior channels of Noachian-Hesperian (~3.7 Ga) valley networks have been underestimated by a factor of several, whereas paleodischarges for smaller fluvial deposits of the Late Hesperian-Early Amazonian (~3.0 Ga) have been overestimated. Thus, these new paleodischarges significantly magnify the contrast between early and late Martian hydrologic activity. Width-discharge relationships from hydraulic geometry represent validated tools for quantifying fluvial input near candidate landing sites of upcoming missions.

  15. Area-to-point parameter estimation with geographically weighted regression

    NASA Astrophysics Data System (ADS)

    Murakami, Daisuke; Tsutsumi, Morito

    2015-07-01

    The modifiable areal unit problem (MAUP) is the problem whereby the aggregation of data into areal units influences the results of spatial data analysis. Standard geographically weighted regression (GWR), which ignores aggregation mechanisms, cannot be considered an efficient countermeasure to the MAUP. Accordingly, this study proposes a type of GWR with aggregation mechanisms, termed area-to-point (ATP) GWR herein. ATP GWR, which is closely related to geostatistical approaches, estimates the disaggregate-level local trend parameters by using aggregated variables. We examine the effectiveness of ATP GWR for mitigating the MAUP through a simulation study and an empirical study. The simulation study indicates that the proposed method is robust to the MAUP when the spatial scales of aggregation are not too global compared with the scale of the underlying spatial variations. The empirical studies demonstrate that the method provides intuitively consistent estimates.
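
    For readers unfamiliar with the underlying estimator, the sketch below shows standard point-level GWR, which ATP GWR extends: a weighted least-squares fit at a target location using a Gaussian distance kernel. The function signature and bandwidth handling are illustrative assumptions, not the authors' implementation.

        import numpy as np

        def gwr_at_point(X, y, coords, u, bandwidth):
            """Standard GWR: weighted least squares at location u.
            ATP GWR relates aggregated observations to disaggregate-level
            local trends; this sketch shows only the point-level estimator."""
            d = np.linalg.norm(coords - u, axis=1)       # distances to target point
            w = np.exp(-0.5 * (d / bandwidth) ** 2)      # Gaussian kernel weights
            W = np.diag(w)
            return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)   # (X'WX)^-1 X'Wy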

  16. Unbounded Binary Search for a Fast and Accurate Maximum Power Point Tracking

    NASA Astrophysics Data System (ADS)

    Kim, Yong Sin; Winston, Roland

    2011-12-01

    This paper presents a technique for maximum power point tracking (MPPT) of a concentrating photovoltaic system using cell level power optimization. Perturb and observe (P&O) has been a standard for MPPT, but it introduces a tradeoff between the tracking speed and the accuracy of the maximum power delivered. The P&O algorithm is not suitable for rapid environmental changes caused by partial shading and self-shading, because its tracking time is linear in the length of the voltage range. Some research has addressed fast tracking, but the proposed methods come with internal ad hoc parameters. In this paper, by using the proposed unbounded binary search algorithm for the MPPT, the tracking time becomes a logarithmic function of the voltage search range without ad hoc parameters.
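
    Because the P-V curve of a cell is unimodal, a binary search on the sign of dP/dV reaches the maximum power point in logarithmically many steps, which is the property the abstract highlights. The sketch below assumes a fixed voltage bracket for brevity (the paper's unbounded variant first brackets the peak by exponential expansion); names and tolerances are illustrative.

        def mppt_binary_search(power, v_lo, v_hi, dv=1e-3, tol=1e-4):
            """Locate the maximum of a unimodal P-V curve `power` by binary
            search on the sign of the finite-difference slope dP/dV."""
            while v_hi - v_lo > tol:
                v_mid = 0.5 * (v_lo + v_hi)
                slope = power(v_mid + dv) - power(v_mid - dv)
                if slope > 0:
                    v_lo = v_mid      # peak lies to the right
                else:
                    v_hi = v_mid      # peak lies to the left
            return 0.5 * (v_lo + v_hi)

        # Toy unimodal P-V curve peaking at 17.5 V
        v_mpp = mppt_binary_search(lambda v: v * max(0.0, 35.0 - v) / 17.5, 0.0, 35.0)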

  17. Smartphone-Based Accurate Analysis of Retinal Vasculature towards Point-of-Care Diagnostics

    PubMed Central

    Xu, Xiayu; Ding, Wenxiang; Wang, Xuemin; Cao, Ruofan; Zhang, Maiye; Lv, Peilin; Xu, Feng

    2016-01-01

    Retinal vasculature analysis is important for the early diagnostics of various eye and systemic diseases, making it a potentially useful biomarker, especially for resource-limited regions and countries. Here we developed a smartphone-based retinal image analysis system for point-of-care diagnostics that is able to load a fundus image, segment retinal vessels, analyze individual vessel width, and store or uplink results. The proposed system was not only evaluated on widely used public databases and compared with the state-of-the-art methods, but also validated on clinical images directly acquired with a smartphone. An Android app is also developed to facilitate on-site application of the proposed methods. Both visual assessment and quantitative assessment showed that the proposed methods achieved comparable results to the state-of-the-art methods that require high-standard workstations. The proposed system holds great potential for the early diagnostics of various diseases, such as diabetic retinopathy, for resource-limited regions and countries. PMID:27698369

  18. Closed-form solutions for estimating a rigid motion from plane correspondences extracted from point clouds

    NASA Astrophysics Data System (ADS)

    Khoshelham, Kourosh

    2016-04-01

    Registration is often a prerequisite step in processing point clouds. While planar surfaces are suitable features for registration, most of the existing plane-based registration methods rely on iterative solutions for the estimation of transformation parameters from plane correspondences. This paper presents a new closed-form solution for the estimation of a rigid motion from a set of point-plane correspondences. The role of normalization is investigated and its importance for accurate plane fitting and plane-based registration is shown. The paper also presents a thorough evaluation of the closed-form solutions and compares their performance with the iterative solution in terms of accuracy, robustness, stability and efficiency. The results suggest that the closed-form solution based on point-plane correspondences should be the method of choice in point cloud registration as it is significantly faster than the iterative solution, and performs as well as or better than the iterative solution in most situations. The normalization of the point coordinates is also recommended as an essential preprocessing step for point cloud registration. An implementation of the closed-form solutions in MATLAB is available at: http://people.eng.unimelb.edu.au/kkhoshelham/research.html#directmotion

  19. Can endocranial volume be estimated accurately from external skull measurements in great-tailed grackles (Quiscalus mexicanus)?

    PubMed Central

    Palmstrom, Christin R.

    2015-01-01

    There is an increasing need to validate and collect data approximating brain size on individuals in the field to understand what evolutionary factors drive brain size variation within and across species. We investigated whether we could accurately estimate endocranial volume (a proxy for brain size), as measured by computerized tomography (CT) scans, using external skull measurements and/or by filling skulls with beads and pouring them out into a graduated cylinder for male and female great-tailed grackles. We found that while females had higher correlations than males, estimations of endocranial volume from external skull measurements or beads did not tightly correlate with CT volumes. We found no accuracy in the ability of external skull measures to predict CT volumes because the prediction intervals for most data points overlapped extensively. We conclude that we are unable to detect individual differences in endocranial volume using external skull measurements. These results emphasize the importance of validating and explicitly quantifying the predictive accuracy of brain size proxies for each species and each sex. PMID:26082858

  20. Accurate Visual Heading Estimation at High Rotation Rate Without Oculomotor or Static-Depth Cues

    NASA Technical Reports Server (NTRS)

    Stone, Leland S.; Perrone, John A.; Null, Cynthia H. (Technical Monitor)

    1995-01-01

    It has been claimed that either oculomotor or static-depth cues provide the signals about self-rotation necessary for accurate heading estimation at rotation rates above approximately 1 deg/s. We tested this hypothesis by simulating self-motion along a curved path with the eyes fixed in the head (plus or minus 16 deg/s of rotation). Curvilinear motion offers two advantages: 1) heading remains constant in retinotopic coordinates, and 2) there is no visual-oculomotor conflict (both actual and simulated eye position remain stationary). We simulated 400 ms of rotation combined with 16 m/s of translation at fixed angles with respect to gaze towards two vertical planes of random dots initially 12 and 24 m away, with a field of view of 45 degrees. Four subjects were asked to fixate a central cross and to respond whether they were translating to the left or right of straight-ahead gaze. From the psychometric curves, heading bias (mean) and precision (semi-interquartile) were derived. The mean bias over 2-5 runs was 3.0, 4.0, -2.0, -0.4 deg for the first author and three naive subjects, respectively (positive indicating towards the rotation direction). The mean precision was 2.0, 1.9, 3.1, 1.6 deg, respectively. The ability of observers to make relatively accurate and precise heading judgments, despite the large rotational flow component, refutes the view that extra-flow-field information is necessary for human visual heading estimation at high rotation rates. Our results support models that process combined translational/rotational flow to estimate heading, but should not be construed to suggest that other cues do not play an important role when they are available to the observer.
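
    As a sketch of how bias and precision are read off a psychometric curve, the code below fits a cumulative Gaussian to hypothetical left/right heading judgments: the bias is the fitted mean, and the precision is the semi-interquartile range, which for a Gaussian equals 0.6745 times the fitted standard deviation. The data points are invented and SciPy is assumed.

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import norm

        # Hypothetical data: heading angle (deg) vs. proportion "right" responses
        headings = np.array([-8, -4, -2, 0, 2, 4, 8], dtype=float)
        p_right = np.array([0.02, 0.10, 0.30, 0.55, 0.75, 0.90, 0.99])

        psychometric = lambda x, mu, sigma: norm.cdf(x, loc=mu, scale=sigma)
        (mu, sigma), _ = curve_fit(psychometric, headings, p_right, p0=(0.0, 2.0))

        bias = mu                    # constant error of the heading judgments
        precision = 0.6745 * sigma   # semi-interquartile range of the fitted curve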

  1. Estimating the Effects of Detection Heterogeneity and Overdispersion on Trends Estimated from Avian Point Counts

    EPA Science Inventory

    Point counts are a common method for sampling avian distribution and abundance. Though methods for estimating detection probabilities are available, many analyses use raw counts and do not correct for detectability. We use a removal model of detection within an N-mixture approa...

  2. How accurately can we estimate energetic costs in a marine top predator, the king penguin?

    PubMed

    Halsey, Lewis G; Fahlman, Andreas; Handrich, Yves; Schmidt, Alexander; Woakes, Anthony J; Butler, Patrick J

    2007-01-01

    King penguins (Aptenodytes patagonicus) are one of the greatest consumers of marine resources. However, while their influence on the marine ecosystem is likely to be significant, only an accurate knowledge of their energy demands will indicate their true food requirements. Energy consumption has been estimated for many marine species using the heart rate-rate of oxygen consumption (f(H) - V(O2)) technique, and the technique has been applied successfully to answer eco-physiological questions. However, previous studies on the energetics of king penguins, based on developing or applying this technique, have raised a number of issues about the degree of validity of the technique for this species. These include the predictive validity of the present f(H) - V(O2) equations across different seasons and individuals and during different modes of locomotion. In many cases, these issues also apply to other species for which the f(H) - V(O2) technique has been applied. In the present study, the accuracy of three prediction equations for king penguins was investigated based on validity studies and on estimates of V(O2) from published, field f(H) data. The major conclusions from the present study are: (1) in contrast to that for walking, the f(H) - V(O2) relationship for swimming king penguins is not affected by body mass; (2) prediction equation (1), log V(O2) = -0.279 + 1.24 log f(H) + 0.0237 t - 0.0157 log f(H) t, derived in a previous study, is the most suitable equation presently available for estimating V(O2) in king penguins for all locomotory and nutritional states. A number of possible problems associated with producing an f(H) - V(O2) relationship are discussed in the present study. Finally, a statistical method to include easy-to-measure morphometric characteristics, which may improve the accuracy of f(H) - V(O2) prediction equations, is explained. PMID:17363231
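
    Prediction equation (1) quoted above can be applied directly. The sketch below assumes base-10 logarithms and takes the covariate t as given in the abstract (its units are not defined in this excerpt), so it is a transcription of the printed formula rather than a validated implementation.

        import math

        def predict_vo2(f_h, t):
            """Evaluate equation (1):
            log V(O2) = -0.279 + 1.24 log f(H) + 0.0237 t - 0.0157 log f(H) t."""
            log_fh = math.log10(f_h)
            log_vo2 = -0.279 + 1.24 * log_fh + 0.0237 * t - 0.0157 * log_fh * t
            return 10.0 ** log_vo2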

  3. Critical point estimation of the Lennard-Jones pure fluid and binary mixtures

    NASA Astrophysics Data System (ADS)

    Pérez-Pellitero, Javier; Ungerer, Philippe; Orkoulas, Gerassimos; Mackie, Allan D.

    2006-08-01

    The apparent critical point of the pure fluid and binary mixtures interacting with the Lennard-Jones potential has been calculated using Monte Carlo histogram reweighting techniques combined with either a fourth order cumulant calculation (Binder parameter) or a mixed-field study. By extrapolating these finite system size results through a finite size scaling analysis we estimate the infinite system size critical point. Excellent agreement is found between all methodologies as well as previous works, both for the pure fluid and the binary mixture studied. The combination of the proposed cumulant method with the use of finite size scaling is found to present advantages with respect to the mixed-field analysis since no matching to the Ising universal distribution is required while maintaining the same statistical efficiency. In addition, the accurate estimation of the finite critical point becomes straightforward while the scaling of density and composition is also possible and allows for the estimation of the line of critical points for a Lennard-Jones mixture.

  4. Accurate coronary modeling procedure using 2D calibrated projections based on 2D centerline points on a single projection

    NASA Astrophysics Data System (ADS)

    Movassaghi, Babak; Rasche, Volker; Viergever, Max A.; Niessen, Wiro J.

    2004-05-01

    For the diagnosis of ischemic heart disease, accurate quantitative analysis of the coronary arteries is important. In coronary angiography, a number of projections is acquired from which 3D models of the coronaries can be reconstructed. A significant limitation of the current 3D modeling procedures is the required user interaction for defining the centerlines of the vessel structures in the 2D projections. Currently, the 3D centerlines of the coronary tree structure are calculated based on the interactively determined centerlines in two projections. For every interactively selected centerline point in a first projection, the corresponding point in a second projection has to be determined interactively by the user. The correspondence is obtained based on the epipolar geometry. In this paper a method is proposed to retrieve all the information required for the modeling procedure by the interactive determination of the 2D centerline points in only one projection. For every determined 2D centerline point, the corresponding 3D centerline point is calculated by the analysis of the 1D gray value functions of the corresponding epipolar lines in space for all available 2D projections. This information is then used to build a 3D representation of the coronary arteries using coronary modeling techniques. The approach is illustrated on the analysis of calibrated phantom and calibrated coronary projection data.

  5. Curie Point Depth Estimates and Correlation with Subduction in Mexico

    NASA Astrophysics Data System (ADS)

    Manea, Marina; Manea, Vlad C.

    2011-08-01

    We investigate the regional thermal structure of the crust in Mexico using Curie Point Depth (CPD) estimates. The top and bottom of the magnetized crust were calculated using the power-density spectra of the total magnetic field from the freely available "Magnetic Anomaly Map of North America". We applied this method to estimate the regional crustal thermal structure in overlapping square windows of 2° × 2°. The CPD estimates range between 10 and 40 km and show several regions of relatively shallow and deep magnetic sources, with a general inverse correlation with measured heat flow. A deep CPD region (20-30 km) is located in the fore-arc area where the subducting Cocos plate has a flat-slab geometry. This deep region is bounded to the NW and SE by shallow CPD areas beneath the states of Michoacan (CPD = 12-16 km) and Oaxaca (CPD = ~16 km), respectively. There is a good spatial correlation between this deep CPD area and two main fracture zones located on the incoming Cocos plate (Orozco and O'Gorman fracture zones), suggesting that subduction plays an important role in setting apart different CPD provinces along the Mexican coast. Another deep CPD (16-32 km) area corresponds to the region where the Rivera plate subducts beneath the Jalisco block. The Trans-Mexican Volcanic Belt is characterized by a decrease in Curie depths from west (16-20 km) to east (10-12 km). Finally, several deep CPD areas are situated in the back-arc region where old Mesozoic terrains are present. Our results suggest that the main control on the crust's regional thermal structure in the fore-arc and volcanic arc regions is due to the subduction of the Cocos and Rivera plates beneath Mexico.

  6. Trend Estimation and Change Point Detection in Climatic Series

    NASA Astrophysics Data System (ADS)

    Bates, B. C.; Chandler, R. E.

    2011-12-01

    The problems of trend estimation and change point detection in climatic series have received substantial attention in recent years. Key issues include the magnitudes and directions of underlying trends, and the existence (or otherwise) of abrupt shifts in the mean background state. There are many procedures in use, including t-tests; Mann-Whitney and Pettit tests; linear and piecewise linear regression; cumulative sum analysis; hierarchical Bayesian change point analysis; Markov chain Monte Carlo methods; and reversible jump Markov chain Monte Carlo. The purpose of our presentation is to motivate wider use of modern regression techniques for trend estimation and change point detection in climatic series. We pay particular attention to the underlying statistical assumptions, as their violation can lead to serious errors in data interpretation and study conclusions. In this context we consider two case studies. The first involves the application of local linear regression and a test for discontinuities in the regression function to the winter (December-March) North Atlantic Oscillation (NAO) index series for the period 1864-2010. This series exhibits a reversal from strongly negative values in the late 1960s to strongly positive NAO index values in the mid-1990s. The second involves the analysis of a seasonal (June to October) series of typhoon counts in the vicinity of Taiwan for the period 1970-2006. A previous investigation by other researchers concluded that an abrupt shift in this series occurred between 1999 and 2000. For both case studies, our findings indicate little evidence for abrupt shifts: rather, the decadal to multidecadal changes in the mean levels of both series appear well described by smooth trends. For the winter NAO index series, the trend is non-monotonic; for the typhoon counts, it can be regarded as linear on the square root scale. Our statistical results do not contradict those obtained by other researchers: our interpretation of these results

  7. Crop area estimation based on remotely-sensed data with an accurate but costly subsample

    NASA Technical Reports Server (NTRS)

    Gunst, R. F.

    1983-01-01

    Alternatives to sampling-theory stratified and regression estimators of crop production and timber biomass were examined. An alternative estimator which is viewed as especially promising is the errors-in-variable regression estimator. Investigations established the need for caution with this estimator when the ratio of two error variances is not precisely known.

  8. Evaluation of pedotransfer functions for estimating the soil water retention points

    NASA Astrophysics Data System (ADS)

    Bahmani, Omid; Palangi, Sahar

    2016-06-01

    Direct measurement of soil moisture is often expensive and time-consuming. The aim of this study was to determine the best method for estimating soil moisture using the pedotransfer functions in the SOILPAR 2 model. Soil samples were selected from the UNSODA database for three textures: sandy loam, silty loam and clay. In clay soil, the Campbell model gave better results at field capacity (FC) and wilting point (WP), with RMSE = (0.06, 0.09) and d = (0.65, 0.55), respectively. In silty loam soil, the Epic model gave an accurate estimate at FC with MBE = 0.00, and the Campbell model gave an acceptable result at WP with RMSE = 0.03 and d = 0.77. In sandy loam, the Hutson and Campbell models estimated the FC and WP better than the others. The Hutson model also gave acceptable estimates of the TAW (Total Available Water), with RMSE = (0.03, 0.04, 0.04) and MBE = (0.02, 0.01, 0.01) for clay, sandy loam and silty loam, respectively. These results demonstrate that the moisture points are intrinsically linked to soil texture, and that the PTF models produce estimates in agreement with the experimental observations.
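
    The agreement statistics quoted above (RMSE, MBE and the index d) are standard; a short sketch of their usual definitions follows, assuming d is Willmott's index of agreement.

        import numpy as np

        def rmse(pred, obs):
            pred, obs = np.asarray(pred, float), np.asarray(obs, float)
            return float(np.sqrt(np.mean((pred - obs) ** 2)))

        def mbe(pred, obs):
            pred, obs = np.asarray(pred, float), np.asarray(obs, float)
            return float(np.mean(pred - obs))

        def willmott_d(pred, obs):
            # Index of agreement; the abstract's d is assumed to be Willmott's d
            pred, obs = np.asarray(pred, float), np.asarray(obs, float)
            obar = obs.mean()
            num = np.sum((pred - obs) ** 2)
            den = np.sum((np.abs(pred - obar) + np.abs(obs - obar)) ** 2)
            return float(1.0 - num / den)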

  9. Curie point depth estimation of the Eastern Caribbean

    NASA Astrophysics Data System (ADS)

    Garcia, Andreina; Orihuela Guevara, Nuris

    2013-04-01

    In this paper we present an estimation of the Curie point depth (CPD) in the Eastern Caribbean. The CPD was estimated from satellite magnetic anomalies by applying the centroid method over the studied area. In order to calculate the CPD, the area was subdivided into square windows with sides of 2° and an overlap distance of 1°. As a result, the Curie isotherm grid was obtained using the kriging interpolation method. Despite the oceanic nature of the Eastern Caribbean plate, this map reveals important lateral variations in the interior of the plate and at its boundaries. The lateral variations observed in the CPD are related to the complexity of thermal processes in the subsurface of the region. From a global perspective, the earth's oceanic provinces show smooth CPD behavior, except at their plate boundaries. In this case, the Eastern Caribbean plate's CPD variations are related to both the plate's boundaries and its interior. The maximum CPD variations are observed at the southern boundary of the Caribbean plate (9 to 35 km) and over the Lesser Antilles and the Barbados prism (16 to 30 km). This behavior reflects the complex geologic evolution of the studied area, in which the presence of extensive basalt and dolerite sills has been documented. These sills originated in various cycles of Cretaceous mantle activity and have been the main cause of the thickening of the oceanic crust in the interior of the Caribbean plate. At the same time, this thickening of the oceanic plate explains the existence of a Mohorovičić discontinuity with an average depth greater than in other regions of the planet, with slight irregularities related to highs of the ocean floor (the Nicaragua and Beata Crests, the Aves High) but not comparable in magnitude to the lateral variations revealed by the Curie isotherm map.
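
    A compact sketch of the centroid method named above, under common assumptions (conventions for amplitude versus power spectra and 2π factors vary between implementations, and the wavenumber bands are analyst choices): the top depth Z_t comes from the high-wavenumber slope of ln|A(k)|, the centroid depth Z_0 from the low-wavenumber slope of ln(|A(k)|/k), and the basal (Curie point) depth is Z_b = 2*Z_0 - Z_t.

        import numpy as np

        def curie_point_depth(k, amp, lo_band, hi_band):
            """Centroid method on a radially averaged amplitude spectrum amp(k).
            k must be positive; lo_band and hi_band are (kmin, kmax) tuples."""
            def slope(x, y, band):
                m = (x >= band[0]) & (x <= band[1])
                return np.polyfit(x[m], y[m], 1)[0]
            z_t = -slope(k, np.log(amp), hi_band)       # top of magnetized layer
            z_0 = -slope(k, np.log(amp / k), lo_band)   # centroid depth
            return 2.0 * z_0 - z_t                      # basal depth Z_b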

  10. Improved image quality in pinhole SPECT by accurate modeling of the point spread function in low magnification systems

    SciTech Connect

    Pino, Francisco; Roé, Nuria; Aguiar, Pablo; Falcon, Carles; Ros, Domènec; Pavía, Javier

    2015-02-15

    Purpose: Single photon emission computed tomography (SPECT) has become an important noninvasive imaging technique in small-animal research. Due to the high resolution required in small-animal SPECT systems, the spatially variant system response needs to be included in the reconstruction algorithm. Accurate modeling of the system response should result in a major improvement in the quality of reconstructed images. The aim of this study was to quantitatively assess the impact that an accurate modeling of spatially variant collimator/detector response has on image-quality parameters, using a low magnification SPECT system equipped with a pinhole collimator and a small gamma camera. Methods: Three methods were used to model the point spread function (PSF). For the first, only the geometrical pinhole aperture was included in the PSF. For the second, the septal penetration through the pinhole collimator was added. In the third method, the measured intrinsic detector response was incorporated. Tomographic spatial resolution was evaluated and contrast, recovery coefficients, contrast-to-noise ratio, and noise were quantified using a custom-built NEMA NU 4–2008 image-quality phantom. Results: A high correlation was found between the experimental data corresponding to intrinsic detector response and the fitted values obtained by means of an asymmetric Gaussian distribution. For all PSF models, resolution improved as the distance from the point source to the center of the field of view increased and when the acquisition radius diminished. An improvement of resolution was observed after a minimum of five iterations when the PSF modeling included more corrections. Contrast, recovery coefficients, and contrast-to-noise ratio were better for the same level of noise in the image when more accurate models were included. Ring-type artifacts were observed when the number of iterations exceeded 12. Conclusions: Accurate modeling of the PSF improves resolution, contrast, and recovery

  11. Zero-Cost Estimation of Zero-Point Energies.

    PubMed

    Császár, Attila G; Furtenbacher, Tibor

    2015-10-01

    An additive, linear, atom-type-based (ATB) scheme is developed allowing no-cost estimation of zero-point vibrational energies (ZPVE) of neutral, closed-shell molecules in their ground electronic states. The atom types employed correspond to those defined within the MM2 molecular mechanics force field approach. The reference training set of 156 molecules covers chained and branched alkanes, alkenes, cycloalkanes and cycloalkenes, alkynes, alcohols, aldehydes, carboxylic acids, amines, amides, ethers, esters, ketones, benzene derivatives, heterocycles, nucleobases, all the natural amino acids, some dipeptides and sugars, as well as further simple molecules and ones containing several structural units, including several vitamins. A weighted linear least-squares fit of atom-type-based ZPVE increments results in recommended values for the following atoms, with the number of atom types defined in parentheses: H(8), D(1), B(1), C(6), N(7), O(3), F(1), Si(1), P(2), S(3), and Cl(1). The average accuracy of the ATB ZPVEs is considerably better than 1 kcal mol(-1), that is, better than chemical accuracy. The proposed ATB scheme could be extended to many more atoms and atom types, following a careful validation procedure; deviation from the MM2 atom types seems to be necessary, especially for third-row elements. PMID:26398318

  12. Determining Sample Size for Accurate Estimation of the Squared Multiple Correlation Coefficient.

    ERIC Educational Resources Information Center

    Algina, James; Olejnik, Stephen

    2000-01-01

    Discusses determining sample size for estimation of the squared multiple correlation coefficient and presents regression equations that permit determination of the sample size for estimating this parameter for up to 20 predictor variables. (SLD)

  13. Accurate quantification of two key time points used in the determination of hydroxyl polyaluminum species by ferron timed spectrophotometry.

    PubMed

    Zhang, Jing; Yong, Xiaojing; Zhao, Dongyan; Shi, Qiuyi

    2015-01-01

    The content of mononuclear Al (Ala%) changes with its determination time (ta) under different dosages of Ferron (7-iodo-8-hydroxyquinoline-5-sulfonic acid, [Ferron]), and the change of Ala% with [Ferron] at different ta was systematically investigated for the first time. The most appropriate ta was thus found, together with the optimal [Ferron]. In addition, the judgment of the plateau (flat or level portion) of the complete reaction on the absorption-time curve, determined in the hydroxyl polyaluminum solution by Ferron timed spectrophotometry (Ferron assay), was digitized for the first time. The time point (tb) of the complete reaction between the medium polyaluminum species (Alb) and the Ferron reagent depends on the reaction extent and cannot be judged from time alone. The tb was thus accurately determined and reduced to half of its original value, which improved the experimental efficiency significantly. The Ferron assay was thereby completely optimized.

  14. Geometric constraints in semiclassical initial value representation calculations in Cartesian coordinates: accurate reduction in zero-point energy.

    PubMed

    Issack, Bilkiss B; Roy, Pierre-Nicholas

    2005-08-22

    An approach for the inclusion of geometric constraints in semiclassical initial value representation calculations is introduced. An important aspect of the approach is that Cartesian coordinates are used throughout. We devised an algorithm for the constrained sampling of initial conditions through the use of multivariate Gaussian distribution based on a projected Hessian. We also propose an approach for the constrained evaluation of the so-called Herman-Kluk prefactor in its exact log-derivative form. Sample calculations are performed for free and constrained rare-gas trimers. The results show that the proposed approach provides an accurate evaluation of the reduction in zero-point energy. Exact basis set calculations are used to assess the accuracy of the semiclassical results. Since Cartesian coordinates are used, the approach is general and applicable to a variety of molecular and atomic systems.

  15. On point estimation of the abnormality of a Mahalanobis index

    PubMed Central

    Elfadaly, Fadlalla G.; Garthwaite, Paul H.; Crawford, John R.

    2016-01-01

    Mahalanobis distance may be used as a measure of the disparity between an individual’s profile of scores and the average profile of a population of controls. The degree to which the individual’s profile is unusual can then be equated to the proportion of the population who would have a larger Mahalanobis distance than the individual. Several estimators of this proportion are examined. These include plug-in maximum likelihood estimators, medians, the posterior mean from a Bayesian probability matching prior, an estimator derived from a Taylor expansion, and two forms of polynomial approximation, one based on Bernstein polynomial and one on a quadrature method. Simulations show that some estimators, including the commonly-used plug-in maximum likelihood estimators, can have substantial bias for small or moderate sample sizes. The polynomial approximations yield estimators that have low bias, with the quadrature method marginally to be preferred over Bernstein polynomials. However, the polynomial estimators sometimes yield infeasible estimates that are outside the 0–1 range. While none of the estimators are perfectly unbiased, the median estimators match their definition; in simulations their estimates of the proportion have a median error close to zero. The standard median estimator can give unrealistically small estimates (including 0) and an adjustment is proposed that ensures estimates are always credible. This latter estimator has much to recommend it when unbiasedness is not of paramount importance, while the quadrature method is recommended when bias is the dominant issue. PMID:27375307
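
    The commonly used plug-in maximum likelihood estimator discussed above has a compact form: under a k-variate normal model, the proportion of the population with a larger Mahalanobis distance than the individual's squared distance d² is the chi-square survival function at d² with k degrees of freedom. The sketch below (SciPy assumed) simply plugs in a sample mean and covariance, which is exactly the estimator the paper shows can be substantially biased in small samples.

        import numpy as np
        from scipy.stats import chi2

        def plugin_abnormality(x, mean, cov):
            """Plug-in estimate of the proportion of a k-variate normal
            population whose Mahalanobis distance exceeds that of profile x."""
            diff = np.asarray(x, float) - np.asarray(mean, float)
            d2 = float(diff @ np.linalg.solve(np.asarray(cov, float), diff))
            return chi2.sf(d2, df=len(diff))   # P(chi-square_k > d2)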

  16. A robust and accurate center-frequency estimation (RACE) algorithm for improving motion estimation performance of SinMod on tagged cardiac MR images without known tagging parameters.

    PubMed

    Liu, Hong; Wang, Jie; Xu, Xiangyang; Song, Enmin; Wang, Qian; Jin, Renchao; Hung, Chih-Cheng; Fei, Baowei

    2014-11-01

    A robust and accurate center-frequency (CF) estimation (RACE) algorithm for improving the performance of the local sine-wave modeling (SinMod) method, which is a good motion estimation method for tagged cardiac magnetic resonance (MR) images, is proposed in this study. The RACE algorithm can automatically, effectively and efficiently produce a very appropriate CF estimate for the SinMod method, under the circumstance that the specified tagging parameters are unknown, on account of the following two key techniques: (1) the well-known mean-shift algorithm, which can provide accurate and rapid CF estimation; and (2) an original two-direction-combination strategy, which can further enhance the accuracy and robustness of CF estimation. Some other available CF estimation algorithms are brought out for comparison. Several validation approaches that can work on the real data without ground truths are specially designed. Experimental results on human body in vivo cardiac data demonstrate the significance of accurate CF estimation for SinMod, and validate the effectiveness of RACE in facilitating the motion estimation performance of SinMod.
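
    A minimal sketch of the 1-D mean-shift step that RACE builds on, treating the magnitude spectrum as sample weights. The bandwidth and names are illustrative, and the paper's two-direction-combination strategy is not reproduced here.

        import numpy as np

        def mean_shift_peak(freqs, magnitude, f0, bandwidth, iters=50, tol=1e-6):
            """Move the estimate to the magnitude-weighted mean of frequencies
            in a Gaussian window; converges to a spectral peak near f0."""
            f = float(f0)
            for _ in range(iters):
                w = magnitude * np.exp(-0.5 * ((freqs - f) / bandwidth) ** 2)
                f_new = float(np.sum(w * freqs) / np.sum(w))
                if abs(f_new - f) < tol:
                    break
                f = f_new
            return f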

  17. Accurate and robust registration of high-speed railway viaduct point clouds using closing conditions and external geometric constraints

    NASA Astrophysics Data System (ADS)

    Ji, Zheng; Song, Mengxiao; Guan, Haiyan; Yu, Yongtao

    2015-08-01

    This paper proposes an automatic method for registering multiple laser scans without a control network. The proposed registration method first uses artificial targets to pair-wise register adjacent scans for initial transformation estimates; the proposed registration method then employs combined adjustments with closing conditions and external triangle constraints to globally register all scans along a long-range, high-speed railway corridor. The proposed registration method uses (1) closing conditions to eliminate registration errors that gradually accumulate as the length of a corridor (the number of scan stations) increases, and (2) external geometric constraints to ensure the shape correctness of an elongated high-speed railway. A 640-m high-speed railway viaduct with twenty-one piers is used to conduct experiments using our proposed registration method. A group of comparative experiments is undertaken to evaluate the robustness and efficiency of the proposed registration method to accurately register long-range corridors.

  18. Accurate Point-of-Care Detection of Ruptured Fetal Membranes: Improved Diagnostic Performance Characteristics with a Monoclonal/Polyclonal Immunoassay

    PubMed Central

    Rogers, Linda C.; Scott, Laurie; Block, Jon E.

    2016-01-01

    OBJECTIVE Accurate and timely diagnosis of rupture of membranes (ROM) is imperative to allow for gestational age-specific interventions. This study compared the diagnostic performance characteristics between two methods used for the detection of ROM as measured in the same patient. METHODS Vaginal secretions were evaluated using the conventional fern test as well as a point-of-care monoclonal/polyclonal immunoassay test (ROM Plus®) in 75 pregnant patients who presented to labor and delivery with complaints of leaking amniotic fluid. Both tests were compared to analytical confirmation of ROM using three external laboratory tests. Diagnostic performance characteristics were calculated including sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy. RESULTS Diagnostic performance characteristics uniformly favored ROM detection using the immunoassay test compared to the fern test: sensitivity (100% vs. 77.8%), specificity (94.8% vs. 79.3%), PPV (75% vs. 36.8%), NPV (100% vs. 95.8%), and accuracy (95.5% vs. 79.1%). CONCLUSIONS The point-of-care immunoassay test provides improved diagnostic accuracy for the detection of ROM compared to fern testing. It has the potential of improving patient management decisions, thereby minimizing serious complications and perinatal morbidity. PMID:27199579

  19. The effect of high leverage points on the maximum estimated likelihood for separation in logistic regression

    NASA Astrophysics Data System (ADS)

    Ariffin, Syaiba Balqish; Midi, Habshah; Arasan, Jayanthi; Rana, Md Sohel

    2015-02-01

    This article is concerned with the performance of the maximum estimated likelihood estimator in the presence of separation in the space of the independent variables and high leverage points. The maximum likelihood estimator suffers from the problem of non-overlapping cases in the covariates, where the regression coefficients are not identifiable and the maximum likelihood estimate does not exist. Consequently, the iteration scheme fails to converge and gives faulty results. To remedy this problem, the maximum estimated likelihood estimator is put forward. It is evident that the maximum estimated likelihood estimator is resistant against separation and the estimates always exist. The effect of high leverage points is then investigated on the performance of the maximum estimated likelihood estimator through real data sets and a Monte Carlo simulation study. The findings signify that the maximum estimated likelihood estimator fails to provide better parameter estimates in the presence of both separation and high leverage points.

  1. Estimating the physicochemical properties of polyhalogenated aromatic and aliphatic compounds using UPPER: part 1. Boiling point and melting point.

    PubMed

    Admire, Brittany; Lian, Bo; Yalkowsky, Samuel H

    2015-01-01

    The UPPER (Unified Physicochemical Property Estimation Relationships) model uses enthalpic and entropic parameters to estimate 20 biologically relevant properties of organic compounds. The model has been validated by Lian and Yalkowsky on a data set of 700 hydrocarbons. The aim of this work is to expand the UPPER model to estimate the boiling and melting points of polyhalogenated compounds. In this work, 19 new group descriptors are defined and used to predict the transition temperatures of an additional 1288 compounds. The boiling points of 808 and the melting points of 742 polyhalogenated compounds are predicted with average absolute errors of 13.56 K and 25.85 K, respectively. PMID:25022475
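
    The additive spirit of UPPER can be illustrated with a toy group-contribution sketch: enthalpic and entropic increments are summed over group descriptors, and a transition temperature is estimated as their ratio (T = ΔH/ΔS). The group values below are invented placeholders, not the 19 descriptors fitted in the paper.

        # Hypothetical group increments (not the paper's fitted descriptors)
        GROUP_DH = {"CH3": 2.0, "CH2": 2.5, "Cl_aromatic": 3.1}        # kJ/mol
        GROUP_DS = {"CH3": 0.008, "CH2": 0.009, "Cl_aromatic": 0.010}  # kJ/(mol K)

        def transition_temperature(groups):
            """Estimate a transition temperature (K) from group counts,
            e.g. {"CH3": 2, "CH2": 4}."""
            dH = sum(n * GROUP_DH[g] for g, n in groups.items())
            dS = sum(n * GROUP_DS[g] for g, n in groups.items())
            return dH / dS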

  2. Bayesian parameter estimation of a k-ε model for accurate jet-in-crossflow simulations

    DOE PAGES

    Ray, Jaideep; Lefantzi, Sophia; Arunajatesan, Srinivasan; Dechant, Lawrence

    2016-05-31

    Reynolds-averaged Navier–Stokes models are not very accurate for high-Reynolds-number compressible jet-in-crossflow interactions. The inaccuracy arises from the use of inappropriate model parameters and model-form errors in the Reynolds-averaged Navier–Stokes model. In this study, the hypothesis is pursued that Reynolds-averaged Navier–Stokes predictions can be significantly improved by using parameters inferred from experimental measurements of a supersonic jet interacting with a transonic crossflow.

  3. Accurate state estimation for a hydraulic actuator via a SDRE nonlinear filter

    NASA Astrophysics Data System (ADS)

    Strano, Salvatore; Terzo, Mario

    2016-06-01

    The state estimation in hydraulic actuators is a fundamental tool for the detection of faults or a valid alternative to the installation of sensors. Due to the hard nonlinearities that characterize hydraulic actuators, the performance of linear/linearization-based techniques for state estimation is strongly limited. In order to overcome these limits, this paper focuses on an alternative nonlinear estimation method based on the State-Dependent Riccati Equation (SDRE). The technique is able to fully take into account the system nonlinearities and the measurement noise. A fifth-order nonlinear model is derived and employed for the synthesis of the estimator. Simulations and experimental tests have been conducted, and comparisons with the widely used Extended Kalman Filter (EKF) are illustrated. The results show the effectiveness of the SDRE-based technique for applications characterized by non-negligible nonlinearities such as dead zone and friction.

  4. The GFR and GFR decline cannot be accurately estimated in type 2 diabetics.

    PubMed

    Gaspari, Flavio; Ruggenenti, Piero; Porrini, Esteban; Motterlini, Nicola; Cannata, Antonio; Carrara, Fabiola; Jiménez Sosa, Alejandro; Cella, Claudia; Ferrari, Silvia; Stucchi, Nadia; Parvanova, Aneliya; Iliev, Ilian; Trevisan, Roberto; Bossi, Antonio; Zaletel, Jelka; Remuzzi, Giuseppe

    2013-07-01

    There are no adequate studies that have formally tested the performance of different estimating formulas in patients with type 2 diabetes both with and without overt nephropathy. Here we evaluated the agreement between baseline GFRs, GFR changes at month 6, and long-term GFR decline measured by iohexol plasma clearance or estimated by 15 creatinine-based formulas in 600 type 2 diabetics followed for a median of 4.0 years. Ninety patients were hyperfiltering. The number of those identified by estimation formulas ranged from 0 to 24; 58 were not identified by any formula. Baseline GFR was significantly underestimated and a 6-month GFR reduction was missed in hyperfiltering patients. Long-term GFR decline was also underestimated by all formulas in the whole study group and in hyper-, normo-, and hypofiltering patients considered separately. Five formulas generated positive slopes in hyperfiltering patients. Baseline concordance correlation coefficients and total deviation indexes ranged from 32.1% to 92.6% and from 0.21 to 0.53, respectively. Concordance correlation coefficients between estimated and measured long-term GFR decline ranged from -0.21 to 0.35. The agreement between estimated and measured values was also poor within each subgroup considered separately. Thus, our study questions the use of any estimation formula to identify hyperfiltering patients and monitor renal disease progression and response to treatment in type 2 diabetics without overt nephropathy.
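
    The agreement statistic quoted above can be computed as in the sketch below, assuming the authors' concordance correlation coefficient is Lin's: ccc = 2 cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))²).

        import numpy as np

        def concordance_ccc(x, y):
            """Lin's concordance correlation coefficient between paired
            measurements, e.g. estimated vs. measured GFR decline."""
            x, y = np.asarray(x, float), np.asarray(y, float)
            cov = np.mean((x - x.mean()) * (y - y.mean()))
            return 2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)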

  5. FAST TRACK COMMUNICATION Accurate estimate of α variation and isotope shift parameters in Na and Mg+

    NASA Astrophysics Data System (ADS)

    Sahoo, B. K.

    2010-12-01

    We present accurate calculations of fine-structure-constant variation coefficients and isotope shifts in Na and Mg+ using the relativistic coupled-cluster method. In our approach, we are able to determine explicitly, to all orders, the roles of various correlation effects in these calculations. Most of the results, especially for the excited states, are reported for the first time. Using the above results, it is possible to ascertain suitable anchor and probe lines for studies of a possible variation of the fine-structure constant in the considered systems.

  6. Some recommendations for an accurate estimation of Lanice conchilega density based on tube counts

    NASA Astrophysics Data System (ADS)

    van Hoey, Gert; Vincx, Magda; Degraer, Steven

    2006-12-01

    The tube building polychaete Lanice conchilega is a common and ecologically important species in intertidal and shallow subtidal sands. It builds a characteristic tube with ragged fringes and can retract rapidly into its tube to depths of more than 20 cm. Therefore, it is very difficult to sample L. conchilega individuals, especially with a Van Veen grab. Consequently, many studies have used tube counts as estimates of real densities. This study reports on some aspects to be considered when using tube counts as a density estimate of L. conchilega, based on intertidal and subtidal samples. Due to its accuracy and independence of sampling depth, the tube method is considered the prime method to estimate the density of L. conchilega. However, caution is needed when analyzing samples with fragile young individuals and samples from areas where temporary physical disturbance is likely to occur.

  7. A microbial clock provides an accurate estimate of the postmortem interval in a mouse model system

    PubMed Central

    Metcalf, Jessica L; Wegener Parfrey, Laura; Gonzalez, Antonio; Lauber, Christian L; Knights, Dan; Ackermann, Gail; Humphrey, Gregory C; Gebert, Matthew J; Van Treuren, Will; Berg-Lyons, Donna; Keepers, Kyle; Guo, Yan; Bullard, James; Fierer, Noah; Carter, David O; Knight, Rob

    2013-01-01

    Establishing the time since death is critical in every death investigation, yet existing techniques are susceptible to a range of errors and biases. For example, forensic entomology is widely used to assess the postmortem interval (PMI), but errors can range from days to months. Microbes may provide a novel method for estimating PMI that avoids many of these limitations. Here we show that postmortem microbial community changes are dramatic, measurable, and repeatable in a mouse model system, allowing PMI to be estimated within approximately 3 days over 48 days. Our results provide a detailed understanding of bacterial and microbial eukaryotic ecology within a decomposing corpse system and suggest that microbial community data can be developed into a forensic tool for estimating PMI. DOI: http://dx.doi.org/10.7554/eLife.01104.001 PMID:24137541

  8. Fast and accurate probability density estimation in large high dimensional astronomical datasets

    NASA Astrophysics Data System (ADS)

    Gupta, Pramod; Connolly, Andrew J.; Gardner, Jeffrey P.

    2015-01-01

    Astronomical surveys will generate measurements of hundreds of attributes (e.g., color, size, shape) on hundreds of millions of sources. Analyzing these large, high-dimensional data sets will require efficient algorithms for data analysis. An example is probability density estimation, which is at the heart of many classification problems such as the separation of stars and quasars based on their colors. Popular density estimation techniques use binning or kernel density estimation. Kernel density estimation has a small memory footprint but often requires large computational resources. Binning has small computational requirements, but it is usually implemented with multi-dimensional arrays, which leads to memory requirements that scale exponentially with the number of dimensions. Hence neither technique scales well to large data sets in high dimensions. We present an alternative approach: binning implemented with hash tables (BASH tables). This approach uses the sparseness of data in the high-dimensional space to ensure that the memory requirements remain small. However, hashing requires some extra computation, so a priori it is not clear whether the reduction in memory requirements will lead to increased computational requirements. Through an implementation of BASH tables in C++ we show that the additional computational requirements of hashing are negligible. Hence this approach has small memory and computational requirements. We apply our density estimation technique to photometric selection of quasars using non-parametric Bayesian classification and show that the accuracy of the classification is the same as that of earlier approaches. Since the BASH table approach is one to three orders of magnitude faster than the earlier approaches, it may be useful in various other applications of density estimation in astrostatistics.
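
    A minimal sketch of the core idea, assuming a Python dictionary stands in for the paper's C++ hash table: only occupied bins are stored, so memory grows with the number of populated cells rather than exponentially with the number of dimensions.

```python
# Sparse binning ("BASH-table"-style) density estimation with a hash map.
import numpy as np
from collections import defaultdict

def build_table(points, bin_width):
    counts = defaultdict(int)
    for p in points:
        key = tuple(np.floor(p / bin_width).astype(int))  # hashable bin id
        counts[key] += 1
    return counts

def density(counts, n, bin_width, dim, query):
    key = tuple(np.floor(query / bin_width).astype(int))
    return counts.get(key, 0) / (n * bin_width ** dim)    # histogram density

rng = np.random.default_rng(0)
pts = rng.normal(size=(100_000, 5))            # 5-dimensional data
table = build_table(pts, bin_width=0.5)
print(len(table), "occupied bins (a dense 5-D grid would be far larger)")
print(density(table, len(pts), 0.5, 5, np.zeros(5)))
```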

  9. Crop area estimation based on remotely-sensed data with an accurate but costly subsample

    NASA Technical Reports Server (NTRS)

    Gunst, R. F.

    1985-01-01

    Research activities conducted under the auspices of National Aeronautics and Space Administration Cooperative Agreement NCC 9-9 are discussed. During this contract period, research efforts were concentrated in two primary areas. The first area is an investigation of the use of measurement error models as alternatives to least squares regression estimators of crop production or timber biomass. The second primary area of investigation is the estimation of the mixing proportion of two-component mixture models. This report lists publications, technical reports, submitted manuscripts, and oral presentations generated by these research efforts. Possible areas of future research are mentioned.

  10. Spectral estimation from laser scanner data for accurate color rendering of objects

    NASA Astrophysics Data System (ADS)

    Baribeau, Rejean

    2002-06-01

    Estimation methods are studied for the recovery of the spectral reflectance across the visible range from sensing at just three discrete laser wavelengths. Methods based on principal component analysis and on spline interpolation are judged based on the CIE94 color differences for some reference data sets. These include the Macbeth color checker, the OSA-UCS color charts, some artist pigments, and a collection of miscellaneous surface colors. The optimal three sampling wavelengths are also investigated. It is found that color can be estimated with average accuracy ΔE94 = 2.3 when the optimal wavelengths 455 nm, 540 nm, and 610 nm are used.
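
    A hedged sketch of the principal-component variant of this approach: spectra are modeled as a mean plus a few basis spectra learned from a training set, and the basis coefficients are recovered from readings at the three laser wavelengths by least squares. The training spectra below are random stand-ins, not the reference data sets used in the paper.

```python
# Recover a full visible-range spectrum from 3 discrete samples via PCA.
import numpy as np

rng = np.random.default_rng(1)
wavelengths = np.arange(400, 701, 10)        # visible range, 10 nm steps
train = rng.random((200, wavelengths.size))  # stand-in training spectra

mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
B = Vt[:3].T                                 # basis: first 3 PCs, (n_wl, 3)

sample_idx = [np.argmin(abs(wavelengths - w)) for w in (455, 540, 610)]

def estimate_spectrum(readings):
    """Least-squares fit of 3 basis coefficients to 3 readings."""
    S = B[sample_idx]                        # basis sampled at laser lines
    c = np.linalg.lstsq(S, readings - mean[sample_idx], rcond=None)[0]
    return mean + B @ c

true = train[0]
est = estimate_spectrum(true[sample_idx])
print("RMS reconstruction error:", np.sqrt(np.mean((est - true) ** 2)))
```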

  11. Accurate radiocarbon age estimation using "early" measurements: a new approach to reconstructing the Paleolithic absolute chronology

    NASA Astrophysics Data System (ADS)

    Omori, Takayuki; Sano, Katsuhiro; Yoneda, Minoru

    2014-05-01

    This paper presents new correction approaches for "early" radiocarbon ages to reconstruct the Paleolithic absolute chronology. Discussing the spatio-temporal pattern of the replacement of archaic humans, including Neanderthals in Europe, by modern humans requires a large body of data covering a wide area. Several radiocarbon databases focused on the Paleolithic have been published and used for chronological studies. From the viewpoint of current analytical technology, however, every such database contains unreliable results that make interpretation of radiocarbon dates difficult. Most of these unreliable ages were published in the early days of radiocarbon analysis. In recent years, new analytical methods to determine highly accurate dates have been developed. Ultrafiltration and ABOx-SC methods, as new sample pretreatments for bone and charcoal respectively, have attracted attention because they can remove imperceptible contaminants and yield reliably accurate ages. To evaluate the reliability of "early" data, we investigated the differences and variability of radiocarbon ages obtained under different pretreatments, and attempted to develop correction functions for assessing their reliability. The corrected ages can then be used for chronological research alongside recent measurements. Here, we introduce the methodological framework and archaeological applications.

  12. SU-F-BRF-09: A Non-Rigid Point Matching Method for Accurate Bladder Dose Summation in Cervical Cancer HDR Brachytherapy

    SciTech Connect

    Chen, H; Zhen, X; Zhou, L; Zhong, Z; Pompos, A; Yan, H; Jiang, S; Gu, X

    2014-06-15

    Purpose: To propose and validate a deformable point matching scheme for surface deformation to facilitate accurate bladder dose summation for fractionated HDR cervical cancer treatment. Method: A deformable point matching scheme based on the thin-plate-spline robust point matching (TPS-RPM) algorithm is proposed for bladder surface registration. The surface of bladders segmented from fractional CT images is extracted and discretized with a triangular surface mesh. Deformation between the two bladder surfaces is obtained by matching the two meshes' vertices via the TPS-RPM algorithm, and the deformation vector fields (DVFs) characteristic of this deformation are estimated by B-spline approximation. Numerically, the algorithm is quantitatively compared with the Demons algorithm using five clinical cervical cancer cases by several metrics: vertex-to-vertex distance (VVD), Hausdorff distance (HD), percent error (PE), and conformity index (CI). Experimentally, the algorithm is validated on a balloon phantom with 12 surface fiducial markers. The balloon is inflated with different amounts of water, and the displacement of the fiducial markers is benchmarked as ground truth to study the accuracy of the TPS-RPM-calculated DVFs. Results: In the numerical evaluation, the mean VVD is 3.7 (±2.0) mm after Demons and 1.3 (±0.9) mm after TPS-RPM. The mean HD is 14.4 mm after Demons and 5.3 mm after TPS-RPM. The mean PE is 101.7% after Demons and decreases to 18.7% after TPS-RPM. The mean CI is 0.63 after Demons and increases to 0.90 after TPS-RPM. In the phantom study, the mean Euclidean distance of the fiducials is 7.4±3.0 mm and 4.2±1.8 mm after Demons and TPS-RPM, respectively. Conclusions: The bladder wall deformation is more accurate using the feature-based TPS-RPM algorithm than the intensity-based Demons algorithm, indicating that TPS-RPM has the potential for accurate bladder dose deformation and dose summation for multi-fractional cervical HDR brachytherapy. This work is supported in part by
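
    The following sketch illustrates only the final interpolation step described above, assuming vertex correspondences have already been found by TPS-RPM: a thin-plate-spline interpolant (scipy >= 1.7) turns sparse matched-vertex displacements into a dense deformation vector field. The geometry and numbers are synthetic.

```python
# Dense DVF from sparse matched surface vertices via thin-plate splines.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(2)
src = rng.random((300, 3)) * 50.0          # source-surface vertices (mm)
dst = src + np.array([1.0, -0.5, 2.0])     # toy matched target vertices

# Interpolate displacements (dst - src) with a thin-plate-spline kernel.
dvf = RBFInterpolator(src, dst - src, kernel="thin_plate_spline")

query = rng.random((10, 3)) * 50.0         # arbitrary points near the surface
print(dvf(query))                          # displacement vectors at queries
```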

  13. Deep Wideband Single Pointings and Mosaics in Radio Interferometry: How Accurately Do We Reconstruct Intensities and Spectral Indices of Faint Sources?

    NASA Astrophysics Data System (ADS)

    Rau, U.; Bhatnagar, S.; Owen, F. N.

    2016-11-01

    Many deep wideband wide-field radio interferometric surveys are being designed to accurately measure intensities, spectral indices, and polarization properties of faint source populations. In this paper, we compare various wideband imaging methods to evaluate the accuracy with which intensities and spectral indices of sources close to the confusion limit can be reconstructed. We simulated a wideband single-pointing (C-array, L-Band (1–2 GHz)) and 46-pointing mosaic (D-array, C-Band (4–8 GHz)) JVLA observation using a realistic brightness distribution ranging from 1 μJy to 100 mJy and time-, frequency-, polarization-, and direction-dependent instrumental effects. The main results from these comparisons are (a) errors in the reconstructed intensities and spectral indices are larger for weaker sources even in the absence of simulated noise, (b) errors are systematically lower for joint reconstruction methods (such as Multi-Term Multi-Frequency-Synthesis (MT-MFS)) along with A-Projection for accurate primary beam correction, and (c) use of MT-MFS for image reconstruction eliminates clean bias (which is present otherwise). Auxiliary tests include solutions for deficiencies of data partitioning methods (e.g., the use of masks to remove clean bias and hybrid methods to remove sidelobes from sources left un-deconvolved), the effect of sources not at pixel centers, and the consequences of various other numerical approximations within software implementations. This paper also demonstrates the level of detail at which such simulations must be done in order to reflect reality, to enable one to systematically identify specific reasons for every trend that is observed, and to estimate scientifically defensible imaging performance metrics and the associated computational complexity of the algorithms/analysis procedures. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.

  14. Accurate estimation of influenza epidemics using Google search data via ARGO.

    PubMed

    Yang, Shihao; Santillana, Mauricio; Kou, S C

    2015-11-24

    Accurate real-time tracking of influenza outbreaks helps public health officials make timely and meaningful decisions that could save lives. We propose an influenza tracking model, ARGO (AutoRegression with GOogle search data), that uses publicly available online search data. In addition to having a rigorous statistical foundation, ARGO outperforms all previously available Google-search-based tracking models, including the latest version of Google Flu Trends, even though it uses only low-quality search data as input from publicly available Google Trends and Google Correlate websites. ARGO not only incorporates the seasonality in influenza epidemics but also captures changes in people's online search behavior over time. ARGO is also flexible, self-correcting, robust, and scalable, making it a potentially powerful tool that can be used for real-time tracking of other social events at multiple temporal and spatial resolutions.
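
    In spirit, ARGO is an L1-regularized autoregression on past flu activity plus current search-term frequencies; the sketch below shows that structure with synthetic data and should not be read as the authors' implementation.

```python
# ARGO-style model: Lasso autoregression with exogenous search terms.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
T, n_terms, n_lags = 300, 20, 52
flu = np.abs(np.sin(np.arange(T) / 8.0)) + 0.05 * rng.normal(size=T)
search = flu[:, None] + 0.2 * rng.normal(size=(T, n_terms))  # proxy queries

# Design matrix: 52 weekly lags of flu activity + this week's search terms.
X, y = [], []
for t in range(n_lags, T):
    X.append(np.concatenate([flu[t - n_lags:t], search[t]]))
    y.append(flu[t])
X, y = np.array(X), np.array(y)

model = Lasso(alpha=1e-3).fit(X[:-50], y[:-50])   # train on earlier weeks
print("held-out R^2:", model.score(X[-50:], y[-50:]))
```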

  15. Raman spectroscopy for highly accurate estimation of the age of refrigerated porcine muscle

    NASA Astrophysics Data System (ADS)

    Timinis, Constantinos; Pitris, Costas

    2016-03-01

    The high water content of meat, combined with all the nutrients it contains, makes it vulnerable to spoilage at all stages of production and storage, even when refrigerated at 5 °C. A non-destructive and in situ tool for meat sample testing, which could provide an accurate indication of the storage time of meat, would be very useful for the control of meat quality as well as for consumer safety. The proposed solution is based on Raman spectroscopy, which is non-invasive and can be applied in situ. For the purposes of this project, 42 meat samples from 14 animals were obtained, and three Raman spectra per sample were collected every two days for two weeks. The spectra were subsequently processed, and the sample age was calculated using a set of linear differential equations. In addition, the samples were classified in categories corresponding to the age in 2-day steps (i.e., 0, 2, 4, 6, 8, 10, 12 or 14 days old), using linear discriminant analysis and cross-validation. Contrary to other studies, where the samples were simply grouped into two categories (higher or lower quality, suitable or unsuitable for human consumption, etc.), in this study the age was predicted with a mean error of ~1 day (20%) or classified, in 2-day steps, with 100% accuracy. Although Raman spectroscopy has been used in the past for the analysis of meat samples, the proposed methodology predicts the sample age far more accurately than any previous report in the literature.
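
    The classification step described above can be sketched as follows, with synthetic stand-ins for the Raman spectra: linear discriminant analysis with cross-validation assigning samples to 2-day age classes.

```python
# LDA classification of (synthetic) spectra into 2-day age classes.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
ages = np.repeat(np.arange(0, 16, 2), 15)   # classes 0, 2, ..., 14 days
spectra = rng.normal(size=(ages.size, 100)) # stand-in "spectra"
spectra += ages[:, None] * 0.05             # weak age-dependent drift

scores = cross_val_score(LinearDiscriminantAnalysis(), spectra, ages, cv=5)
print("mean cross-validated accuracy:", scores.mean())
```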

  16. Estimating Tipping Points in Feedback-Driven Financial Networks

    NASA Astrophysics Data System (ADS)

    Kostanjcar, Zvonko; Begusic, Stjepan; Stanley, Harry Eugene; Podobnik, Boris

    2016-09-01

    Much research has been conducted arguing that tipping points at which complex systems experience phase transitions are difficult to identify. To test the existence of tipping points in financial markets, we propose, based on the alternating-offer strategic model, a network of mutually bargaining agents in which the feedback mechanism between trading and price dynamics is driven by an external "hidden" variable R that quantifies the degree of market overpricing. Due to the feedback mechanism, R fluctuates and oscillates over time, so periods when the market is underpriced and overpriced occur repeatedly. As the market becomes overpriced, bubbles are created that ultimately burst in a market crash. The probability that the index will drop in the next year exhibits a strong hysteresis behavior from which we calculate the tipping point. The probability distribution function of R has a bimodal shape characteristic of small systems near the tipping point. By examining the S&P 500 index we illustrate the applicability of the model and demonstrate that the financial data exhibit a hysteresis and a tipping point that agree with the model predictions. We report a cointegration between the returns of the S&P 500 index and its intrinsic value.

  17. Are satellite based rainfall estimates accurate enough for crop modelling under Sahelian climate?

    NASA Astrophysics Data System (ADS)

    Ramarohetra, J.; Sultan, B.

    2012-04-01

    Agriculture is considered the most climate-dependent human activity. In West Africa, and especially in the Sudano-Sahelian zone, rain-fed agriculture - which represents 93% of cultivated areas and is the means of support of 70% of the active population - is highly vulnerable to precipitation variability. To better understand and anticipate climate impacts on agriculture, crop models - which estimate crop yield from climate information (e.g., rainfall, temperature, insolation, humidity) - have been developed. These crop models are useful (i) in ex ante analysis to quantify the impact of implementing different strategies - crop management (e.g., choice of varieties, sowing date), crop insurance or medium-range weather forecasts - on yields, (ii) for early warning systems and (iii) to assess future food security. Yet the successful application of these models depends on the accuracy of their climatic drivers. In the Sudano-Sahelian zone, the quality of precipitation estimates is thus a key factor in understanding and anticipating climate impacts on agriculture via crop modelling and yield estimation. Different kinds of precipitation estimates can be used. Ground measurements have long time series but an insufficient network density, a large proportion of missing values, delays in reporting, and limited availability. An answer to these shortcomings may lie in the field of remote sensing, which provides satellite-based precipitation estimates. However, satellite-based rainfall estimates (SRFE) are not a direct measurement but rather an estimation of precipitation. Used as input for crop models, they determine the performance of the simulated yield; hence SRFE require validation. The SARRAH crop model is used to model three different varieties of pearl millet (HKP, MTDO, Souna3) in a square degree centred on 13.5°N and 2.5°E, in Niger. Eight satellite-based rainfall daily products (PERSIANN, CMORPH, TRMM 3b42-RT, GSMAP MKV+, GPCP, TRMM 3b42v6, RFEv2 and

  18. Techniques for accurate estimation of net discharge in a tidal channel

    USGS Publications Warehouse

    Simpson, Michael R.; Bland, Roger

    1999-01-01

    An ultrasonic velocity meter discharge-measurement site in a tidally affected region of the Sacramento-San Joaquin rivers was used to study the accuracy of the index velocity calibration procedure. Calibration data consisting of ultrasonic velocity meter index velocity and concurrent acoustic Doppler discharge measurement data were collected during three time periods. The relative magnitude of equipment errors, acoustic Doppler discharge measurement errors, and calibration errors were evaluated. Calibration error was the most significant source of error in estimating net discharge. Using a comprehensive calibration method, net discharge estimates developed from the three sets of calibration data differed by less than an average of 4 cubic meters per second. Typical maximum flow rates during the data-collection period averaged 750 cubic meters per second.

  1. Plant DNA Barcodes Can Accurately Estimate Species Richness in Poorly Known Floras

    PubMed Central

    Costion, Craig; Ford, Andrew; Cross, Hugh; Crayn, Darren; Harrington, Mark; Lowe, Andrew

    2011-01-01

    Background Widespread uptake of DNA barcoding technology for vascular plants has been slow due to the relatively poor resolution of species discrimination (∼70%) and low sequencing and amplification success of one of the two official barcoding loci, matK. Studies to date have mostly focused on finding a solution to these intrinsic limitations of the markers, rather than posing questions that can maximize the utility of DNA barcodes for plants with the current technology. Methodology/Principal Findings Here we test the ability of plant DNA barcodes using the two official barcoding loci, rbcLa and matK, plus an alternative barcoding locus, trnH-psbA, to estimate the species diversity of trees in a tropical rainforest plot. Species discrimination accuracy was similar to findings from previous studies, but species richness estimation accuracy proved higher, up to 89%. All combinations that included the trnH-psbA locus performed better at both species discrimination and richness estimation than matK, which showed little enhanced species discriminatory power when concatenated with rbcLa. The utility of the trnH-psbA locus is limited, however, by intraspecific variation that in some angiosperm families occurs as an inversion, obscuring the monophyly of species. Conclusions/Significance We demonstrate for the first time, using a case study, the potential of plant DNA barcodes for the rapid estimation of species richness in taxonomically poorly known areas or cryptic populations, revealing a powerful new tool for rapid biodiversity assessment. The combination of the rbcLa and trnH-psbA loci performed better for this purpose than any two-locus combination that included matK. We show that although DNA barcodes fail to discriminate all species of plants, new perspectives and methods on biodiversity value and quantification may overshadow some of these shortcomings by applying barcode data in new ways. PMID:22096501

  2. Accurate distortion estimation and optimal bandwidth allocation for scalable H.264 video transmission over MIMO systems.

    PubMed

    Jubran, Mohammad K; Bansal, Manu; Kondi, Lisimachos P; Grover, Rohan

    2009-01-01

    In this paper, we propose an optimal strategy for the transmission of scalable video over packet-based multiple-input multiple-output (MIMO) systems. The scalable extension of H.264/AVC that provides combined temporal, quality and spatial scalability is used. For given channel conditions, we develop a method for estimating the distortion of the received video and propose different error concealment schemes. We show the accuracy of our distortion estimation algorithm in comparison with simulated wireless video transmission with packet errors. In the proposed MIMO system, we employ orthogonal space-time block codes (O-STBC) that guarantee independent transmission of different symbols within the block code. In the proposed constrained bandwidth allocation framework, we use the estimated end-to-end decoder distortion to optimally select the application layer parameters, i.e., quantization parameter (QP) and group of pictures (GOP) size, and physical layer parameters, i.e., rate-compatible punctured turbo (RCPT) code rate and symbol constellation. Results show a substantial performance gain from using different symbol constellations across the scalable layers as compared to a fixed constellation.

  3. Compact and accurate linear and nonlinear autoregressive moving average model parameter estimation using laguerre functions.

    PubMed

    Chon, K H; Cohen, R J; Holstein-Rathlou, N H

    1997-01-01

    A linear and nonlinear autoregressive moving average (ARMA) identification algorithm is developed for modeling time series data. The algorithm uses Laguerre expansion of kernels (LEK) to estimate Volterra-Wiener kernels. However, instead of estimating linear and nonlinear system dynamics via moving average models, as is the case for the Volterra-Wiener analysis, we propose an ARMA model-based approach. The proposed algorithm is essentially the same as LEK, but it is extended to include past values of the output as well. Thus, all of the advantages associated with using the Laguerre function remain with our algorithm; but, by extending the algorithm to the linear and nonlinear ARMA model, a significant reduction in the number of Laguerre functions can be made compared with the Volterra-Wiener approach. This translates into a more compact system representation and makes the physiological interpretation of higher-order kernels easier. Furthermore, simulation results show better performance of the proposed approach in estimating the system dynamics than LEK in certain cases, and it remains effective in the presence of significant additive measurement noise. PMID:9236985
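
    A minimal sketch of the Laguerre-expansion idea for the linear (moving-average) part only: discrete Laguerre functions are generated by a first-order filter cascade, and the output is regressed on the input convolved with each basis function. The ARMA extension described in the paper would additionally regress on past outputs; the pole and toy system below are illustrative assumptions.

```python
# Discrete Laguerre basis + least-squares fit of a linear system.
import numpy as np
from scipy.signal import lfilter

def laguerre_basis(n_funcs, n_samples, a=0.6):
    """Discrete Laguerre functions with pole a, returned as rows."""
    impulse = np.zeros(n_samples)
    impulse[0] = 1.0
    basis = []
    lk = lfilter([np.sqrt(1 - a**2)], [1.0, -a], impulse)  # first function
    for _ in range(n_funcs):
        basis.append(lk)
        lk = lfilter([-a, 1.0], [1.0, -a], lk)             # all-pass cascade
    return np.array(basis)

rng = np.random.default_rng(5)
u = rng.normal(size=500)                       # input signal
y = lfilter([0.2, 0.5, 0.3], [1.0], u)         # "unknown" FIR system

L = laguerre_basis(4, 500)
Phi = np.column_stack([np.convolve(u, lk)[:500] for lk in L])
theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print("fit residual norm:", np.linalg.norm(Phi @ theta - y))
```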

  4. Evaluation of the sample needed to accurately estimate outcome-based measurements of dairy welfare on farm.

    PubMed

    Endres, M I; Lobeck-Luchterhand, K M; Espejo, L A; Tucker, C B

    2014-01-01

    Dairy welfare assessment programs are becoming more common on US farms. Outcome-based measurements, such as locomotion, hock lesion, hygiene, and body condition scores (BCS), are included in these assessments. The objective of the current study was to investigate the proportion of cows in the pen or subsamples of pens on a farm needed to provide an accurate estimate of the previously mentioned measurements. In experiment 1, we evaluated cows in 52 high pens (50 farms) for lameness using a 1- to 5-scale locomotion scoring system (1 = normal and 5 = severely lame; 24.4 and 6% of animals were scored ≥ 3 or ≥ 4, respectively). Cows were also given a BCS using a 1- to 5-scale, where 1 = emaciated and 5 = obese; cows were rarely thin (BCS ≤ 2; 0.10% of cows) or fat (BCS ≥ 4; 0.11% of cows). Hygiene scores were assessed on a 1- to 5-scale with 1 = clean and 5 = severely dirty; 54.9% of cows had a hygiene score ≥ 3. Hock injuries were classified as 1 = no lesion, 2 = mild lesion, and 3 = severe lesion; 10.6% of cows had a score of 3. Subsets of data were created with 10 replicates of random sampling that represented 100, 90, 80, 70, 60, 50, 40, 30, 20, 15, 10, 5, and 3% of the cows measured/pen. In experiment 2, we scored the same outcome measures on all cows in lactating pens from 12 farms and evaluated using pen subsamples: high; high and fresh; high, fresh, and hospital; and high, low, and hospital. For both experiments, the association between the estimates derived from all subsamples and entire pen (experiment 1) or herd (experiment 2) prevalence was evaluated using linear regression. To be considered a good estimate, 3 criteria must be met: R(2)>0.9, slope = 1, and intercept = 0. In experiment 1, on average, recording 15% of the pen represented the percentage of clinically lame cows (score ≥ 3), whereas 30% needed to be measured to estimate severe lameness (score ≥ 4). Only 15% of the pen was needed to estimate the percentage of the herd with a hygiene
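
    The subsampling logic can be illustrated with a small simulation, assuming made-up pens and prevalences: score a random fraction of each pen, then regress subsample prevalence on full-pen prevalence and check the study's criteria (R^2 > 0.9, slope = 1, intercept = 0).

```python
# Simulated check of how well a pen subsample estimates prevalence.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
true_prev = rng.uniform(0.05, 0.5, size=52)          # 52 simulated pens
pen_size = 200

for frac in (0.15, 0.30):
    est = []
    for p in true_prev:
        cows = rng.random(pen_size) < p              # lame yes/no per cow
        sample = rng.choice(cows, size=int(frac * pen_size), replace=False)
        est.append(sample.mean())
    slope, intercept, r, *_ = stats.linregress(true_prev, est)
    print(f"{frac:.0%}: R^2={r**2:.3f} slope={slope:.2f} int={intercept:.3f}")
```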

  5. Critical points of multidimensional random Fourier series: Variance estimates

    NASA Astrophysics Data System (ADS)

    Nicolaescu, Liviu I.

    2016-08-01

    We investigate the number of critical points of a Gaussian random smooth function u_ɛ on the m-torus T^m := ℝ^m/ℤ^m approximating the Gaussian white noise as ɛ → 0. Let N(u_ɛ) denote the number of critical points of u_ɛ. We prove the existence of constants C, C′ such that as ɛ goes to zero, the expectation of the random variable ɛ^m N(u_ɛ) converges to C, while its variance is extremely small and behaves like C′ɛ^m.

  6. Estimation of melting points of organic compounds-II.

    PubMed

    Jain, Akash; Yalkowsky, Samuel H

    2006-12-01

    A model for calculation of melting points of organic compounds from structure is described. The model utilizes additive, constitutive and nonadditive, constitutive molecular properties to calculate the enthalpy of melting and the entropy of melting, respectively. Application of the model to over 2200 compounds, including a number of drugs with complex structures, gives an average absolute error of 30.1 degrees.
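
    The premise of the model, Tm = ΔHm/ΔSm with an additive enthalpy and a separately estimated entropy, can be sketched as follows; the group values are invented for illustration, and a constant entropy of about 56.5 J/(mol K) (Walden's rule) stands in for the paper's nonadditive entropy model.

```python
# Illustrative Tm = dHm / dSm with made-up group contributions.
HM_GROUPS = {"CH3": 0.8, "CH2": 1.9, "OH": 2.5}   # kJ/mol, hypothetical

def melting_point_K(groups, delta_s_m=56.5):
    """Tm from additive enthalpy of melting and a fixed entropy of melting.

    delta_s_m in J/(mol K); the real model derives the entropy from
    molecular flexibility and symmetry rather than using a constant.
    """
    delta_h_m = 1000.0 * sum(HM_GROUPS[g] * n for g, n in groups.items())
    return delta_h_m / delta_s_m

print(melting_point_K({"CH3": 1, "CH2": 7, "OH": 1}))  # e.g. 1-octanol
```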

  7. Voronoi-Based Curvature and Feature Estimation from Point Clouds.

    PubMed

    Mérigot, Quentin; Ovsjanikov, Maks; Guibas, Leonidas

    2011-06-01

    We present an efficient and robust method for extracting curvature information, sharp features, and normal directions of a piecewise smooth surface from its point cloud sampling in a unified framework. Our method is integral in nature and uses convolved covariance matrices of Voronoi cells of the point cloud which makes it provably robust in the presence of noise. We show that these matrices contain information related to curvature in the smooth parts of the surface, and information about the directions and angles of sharp edges around the features of a piecewise-smooth surface. Our method is applicable in both two and three dimensions, and can be easily parallelized, making it possible to process arbitrarily large point clouds, which was a challenge for Voronoi-based methods. In addition, we describe a Monte-Carlo version of our method, which is applicable in any dimension. We illustrate the correctness of both principal curvature information and feature extraction in the presence of varying levels of noise and sampling density on a variety of models. As a sample application, we use our feature detection method to segment point cloud samplings of piecewise-smooth surfaces.
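
    For intuition, here is the classic k-nearest-neighbor covariance analysis that Voronoi-based estimators improve upon (this is deliberately not the paper's Voronoi construction): eigenvectors of a local covariance matrix give a normal estimate, and the smallest-eigenvalue fraction is a common surface-variation proxy for curvature.

```python
# Baseline k-NN covariance analysis for normals and surface variation.
import numpy as np
from scipy.spatial import cKDTree

def local_normal_and_variation(points, idx, k=20):
    _, nbrs = cKDTree(points).query(points[idx], k=k)
    nb = points[nbrs] - points[nbrs].mean(axis=0)   # centered neighborhood
    evals, evecs = np.linalg.eigh(nb.T @ nb)        # ascending eigenvalues
    normal = evecs[:, 0]                            # least-variance direction
    variation = evals[0] / evals.sum()              # ~0 on a perfect plane
    return normal, variation

rng = np.random.default_rng(7)
xy = rng.random((2000, 2))
cloud = np.column_stack([xy, 0.1 * np.sin(4 * xy[:, 0])])  # wavy sheet
print(local_normal_and_variation(cloud, idx=0))
```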

  8. Accurate Estimation of Airborne Ultrasonic Time-of-Flight for Overlapping Echoes

    PubMed Central

    Sarabia, Esther G.; Llata, Jose R.; Robla, Sandra; Torre-Ferrero, Carlos; Oria, Juan P.

    2013-01-01

    In this work, an analysis of the transmission of ultrasonic signals generated by piezoelectric sensors for air applications is presented. Based on this analysis, an ultrasonic response model is obtained for its application to the recognition of objects and structured environments for navigation by autonomous mobile robots. This model enables the analysis of the ultrasonic response that is generated using a pair of sensors in transmitter-receiver configuration using the pulse-echo technique. This is very interesting for recognizing surfaces that simultaneously generate a multiple echo response. This model takes into account the effect of the radiation pattern, the resonant frequency of the sensor, the number of cycles of the excitation pulse, the dynamics of the sensor and the attenuation with distance in the medium. This model has been developed, programmed and verified through a battery of experimental tests. Using this model a new procedure for obtaining accurate time of flight is proposed. This new method is compared with traditional ones, such as threshold or correlation, to highlight its advantages and drawbacks. Finally the advantages of this method are demonstrated for calculating multiple times of flight when the echo is formed by several overlapping echoes. PMID:24284774

  9. ACCURATE ESTIMATIONS OF STELLAR AND INTERSTELLAR TRANSITION LINES OF TRIPLY IONIZED GERMANIUM

    SciTech Connect

    Dutta, Narendra Nath; Majumder, Sonjoy E-mail: sonjoy@gmail.com

    2011-08-10

    In this paper, we report weighted oscillator strengths of E1 transitions and transition probabilities of E2 transitions among different low-lying states of triply ionized germanium using the highly correlated relativistic coupled-cluster (RCC) method. Due to the abundance of Ge IV in the solar system, planetary nebulae, white dwarf stars, etc., the study of such transitions is important from an astrophysical point of view. The weighted oscillator strengths of E1 transitions are presented in length and velocity gauge forms to check the accuracy of the calculations. We find excellent agreement between calculated and experimental excitation energies. Oscillator strengths of a few transitions, where studied in the literature via other theoretical and experimental approaches, are compared with our RCC calculations.

  10. Lung motion estimation using dynamic point shifting: An innovative model based on a robust point matching algorithm

    SciTech Connect

    Yi, Jianbing; Yang, Xuan; Li, Yan-Ran; Chen, Guoliang

    2015-10-15

    Purpose: Image-guided radiotherapy is an advanced 4D radiotherapy technique that has been developed in recent years. However, respiratory motion causes significant uncertainties in image-guided radiotherapy procedures. To address these issues, an innovative lung motion estimation model based on a robust point matching algorithm is proposed in this paper. Methods: An innovative robust point matching algorithm using dynamic point shifting is proposed to estimate patient-specific lung motion during free breathing from 4D computed tomography data. The correspondence of the landmark points is determined from the Euclidean distance between the landmark points and the similarity between the local images that are centered at points at the same time. To ensure that the points in the source image correspond to the points in the target image during other phases, the virtual target points are first created and shifted based on the similarity between the local image centered at the source point and the local image centered at the virtual target point. Second, the target points are shifted by the constrained inverse function mapping the target points to the virtual target points. The source point set and shifted target point set are used to estimate the transformation function between the source image and target image. Results: The performance of the authors' method is evaluated on two publicly available DIR-lab and POPI-model lung datasets. For computing target registration errors on 750 landmark points in six phases of the DIR-lab dataset and 37 landmark points in ten phases of the POPI-model dataset, the mean and standard deviation by the authors' method are 1.11 and 1.11 mm, but they are 2.33 and 2.32 mm without considering image intensity, and 1.17 and 1.19 mm with sliding conditions. For the two phases of maximum inhalation and maximum exhalation in the DIR-lab dataset with 300 landmark points of each case, the mean and standard deviation of target registration errors on the

  11. An Energy-Efficient Strategy for Accurate Distance Estimation in Wireless Sensor Networks

    PubMed Central

    Tarrío, Paula; Bernardos, Ana M.; Casar, José R.

    2012-01-01

    In line with recent research efforts made to conceive energy saving protocols and algorithms and power sensitive network architectures, in this paper we propose a transmission strategy to minimize the energy consumption in a sensor network when using a localization technique based on the measurement of the strength (RSS) or the time of arrival (TOA) of the received signal. In particular, we find the transmission power and the packet transmission rate that jointly minimize the total consumed energy, while ensuring at the same time a desired accuracy in the RSS or TOA measurements. We also propose some corrections to these theoretical results to take into account the effects of shadowing and packet loss in the propagation channel. The proposed strategy is shown to be effective in realistic scenarios providing energy savings with respect to other transmission strategies, and also guaranteeing a given accuracy in the distance estimations, which will serve to guarantee a desired accuracy in the localization result. PMID:23202218
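
    As background for the RSS case, distance is typically recovered by inverting the log-distance path-loss model; the reference power and path-loss exponent below are assumed values, not the paper's.

```python
# RSS-based ranging with the log-distance path-loss model.
import numpy as np

P0 = -40.0   # RSS at reference distance d0 = 1 m, in dBm (assumed)
N_EXP = 2.7  # path-loss exponent for the environment (assumed)

def distance_from_rss(rss_dbm, d0=1.0, p0=P0, n=N_EXP):
    """Invert RSS(d) = P0 - 10 n log10(d / d0) for d."""
    return d0 * 10 ** ((p0 - rss_dbm) / (10.0 * n))

print(distance_from_rss(-67.0))   # ~10 m for these parameters
```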

  12. Accurate automatic estimation of total intracranial volume: a nuisance variable with less nuisance.

    PubMed

    Malone, Ian B; Leung, Kelvin K; Clegg, Shona; Barnes, Josephine; Whitwell, Jennifer L; Ashburner, John; Fox, Nick C; Ridgway, Gerard R

    2015-01-01

    Total intracranial volume (TIV/ICV) is an important covariate for volumetric analyses of the brain and brain regions, especially in the study of neurodegenerative diseases, where it can provide a proxy of maximum pre-morbid brain volume. The gold-standard method is manual delineation of brain scans, but this requires careful work by trained operators. We evaluated Statistical Parametric Mapping 12 (SPM12) automated segmentation for TIV measurement in place of manual segmentation and also compared it with SPM8 and FreeSurfer 5.3.0. For T1-weighted MRI acquired from 288 participants in a multi-centre clinical trial in Alzheimer's disease we find a high correlation between SPM12 TIV and manual TIV (R(2)=0.940, 95% Confidence Interval (0.924, 0.953)), with a small mean difference (SPM12 40.4±35.4ml lower than manual, amounting to 2.8% of the overall mean TIV in the study). The correlation with manual measurements (the key aspect when using TIV as a covariate) for SPM12 was significantly higher (p<0.001) than for either SPM8 (R(2)=0.577 CI (0.500, 0.644)) or FreeSurfer (R(2)=0.801 CI (0.744, 0.843)). These results suggest that SPM12 TIV estimates are an acceptable substitute for labour-intensive manual estimates even in the challenging context of multiple centres and the presence of neurodegenerative pathology. We also briefly discuss some aspects of the statistical modelling approaches to adjust for TIV. PMID:25255942

  13. Developing accurate survey methods for estimating population sizes and trends of the critically endangered Nihoa Millerbird and Nihoa Finch.

    USGS Publications Warehouse

    Gorresen, P. Marcos; Camp, Richard J.; Brinck, Kevin W.; Farmer, Chris

    2012-01-01

    Point-transect surveys indicated that millerbirds were more abundant than shown by the strip-transect method, and were estimated at 802 birds in 2010 (95% CI = 652–964) and 704 birds in 2011 (95% CI = 579–837). Point-transect surveys yielded population estimates with improved precision, which will permit trends to be detected in shorter time periods and with greater statistical power than is available from strip-transect survey methods. Mean finch population estimates and associated uncertainty were not markedly different among the three survey methods, but the performance of models used to estimate density and population size is expected to improve as the data from additional surveys are incorporated. Using the point-transect survey, the mean finch population size was estimated at 2,917 birds in 2010 (95% CI = 2,037–3,965) and 2,461 birds in 2011 (95% CI = 1,682–3,348). Preliminary testing of the line-transect method in 2011 showed that it would not generate sufficient detections to effectively model bird density and, consequently, relatively precise population size estimates. Both species were fairly evenly distributed across Nihoa and appear to occur in all or nearly all available habitat. The time expended and area traversed by observers were similar among survey methods; however, point-transect surveys do not require that observers walk a straight transect line, thereby allowing them to avoid culturally or biologically sensitive areas and minimize the adverse effects of recurrent travel to any particular area. In general, point-transect surveys detect more birds than strip-survey methods, thereby improving precision and the resulting population size and trend estimation. The method is also better suited for the steep and uneven terrain of Nihoa.

  14. [Research on maize multispectral image accurate segmentation and chlorophyll index estimation].

    PubMed

    Wu, Qian; Sun, Hong; Li, Min-zan; Song, Yuan-yuan; Zhang, Yan-e

    2015-01-01

    In order to rapidly acquire maize growth information in the field, a non-destructive method of maize chlorophyll content index measurement was conducted based on multi-spectral imaging and image processing technology. The experiment was conducted at Yangling in Shaanxi province of China, and the crop was Zheng-dan 958 planted in an experiment field of about 1000 m × 600 m. Firstly, a 2-CCD multi-spectral image monitoring system was used to acquire the canopy images. The system was based on a dichroic prism, allowing precise separation of the visible (Blue (B), Green (G), Red (R): 400-700 nm) and near-infrared (NIR, 760-1000 nm) bands. The multispectral images were output as RGB and NIR images via the system, which was fixed vertically above the ground at a distance of 2 m with an angular field of 50°. The SPAD index of each sample was measured synchronously to indicate the chlorophyll content. Secondly, after image smoothing using an adaptive smooth filtering algorithm, the NIR maize image was selected to segment the maize leaves from the background, because a large difference between plant and soil background was shown in the gray histogram. The NIR image segmentation algorithm followed steps of preliminary and accurate segmentation: (1) The results of the OTSU image segmentation method and the variable threshold algorithm were compared. It was revealed that the latter was better for corn plant and weed segmentation. As a result, the variable threshold algorithm based on local statistics was selected for the preliminary image segmentation. Expansion and erosion were used to optimize the segmented image. (2) The region labeling algorithm was used to segment corn plants from soil and weed background with an accuracy of 95.59%. And then, the multi-spectral image of the maize canopy was accurately segmented in the R, G and B bands separately. Thirdly, image parameters were extracted based on the segmented visible and NIR images. The average gray
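
    For reference, the Otsu baseline that the authors compared against can be written in a few lines: the threshold is chosen to maximize the between-class variance of the gray histogram. The image below is synthetic, not field data.

```python
# Otsu's threshold: maximize between-class variance of the histogram.
import numpy as np

def otsu_threshold(img, n_bins=256):
    hist, edges = np.histogram(img, bins=n_bins)
    p = hist / hist.sum()
    w0 = np.cumsum(p)                          # class-0 weight per threshold
    mids = 0.5 * (edges[:-1] + edges[1:])
    mu = np.cumsum(p * mids)                   # cumulative class-0 mass
    mu_t = mu[-1]                              # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    return mids[np.nanargmax(between)]

rng = np.random.default_rng(11)
soil = rng.normal(60, 10, size=(100, 100))
plant = rng.normal(180, 15, size=(100, 100))
img = np.where(rng.random((100, 100)) < 0.3, plant, soil)  # NIR-like scene
t = otsu_threshold(img)
print("threshold:", t, "plant fraction:", (img > t).mean())
```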

  15. Thermal Conductivities in Solids from First Principles: Accurate Computations and Rapid Estimates

    NASA Astrophysics Data System (ADS)

    Carbogno, Christian; Scheffler, Matthias

    In spite of significant research efforts, a first-principles determination of the thermal conductivity κ at high temperatures has remained elusive. Boltzmann transport techniques that account for anharmonicity perturbatively become inaccurate under such conditions. Ab initio molecular dynamics (MD) techniques using the Green-Kubo (GK) formalism capture the full anharmonicity, but can become prohibitively costly to converge in time and size. We developed a formalism that accelerates such GK simulations by several orders of magnitude and thus enables their application within the limited time and length scales accessible in ab initio MD. For this purpose, we determine the effective harmonic potential occurring during the MD, and the associated temperature-dependent phonon properties and lifetimes. Interpolation in reciprocal and frequency space then allows extrapolation to the macroscopic scale. For both force-field and ab initio MD, we validate this approach by computing κ for Si and ZrO2, two materials known for their particularly harmonic and anharmonic character, respectively. Eventually, we demonstrate how these techniques facilitate reasonable estimates of κ from existing MD calculations at virtually no additional computational cost.
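
    For reference, in one common convention the Green-Kubo expression for the thermal conductivity reads as follows, where J is the total heat flux of a system of volume V at temperature T:

```latex
\kappa = \frac{1}{3\, V k_{\mathrm{B}} T^{2}}
         \int_{0}^{\infty} \left\langle \mathbf{J}(t) \cdot \mathbf{J}(0) \right\rangle \, \mathrm{d}t
```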

  16. Accurate Estimation of Protein Folding and Unfolding Times: Beyond Markov State Models.

    PubMed

    Suárez, Ernesto; Adelman, Joshua L; Zuckerman, Daniel M

    2016-08-01

    Because standard molecular dynamics (MD) simulations are unable to access time scales of interest in complex biomolecular systems, it is common to "stitch together" information from multiple shorter trajectories using approximate Markov state model (MSM) analysis. However, MSMs may require significant tuning and can yield biased results. Here, by analyzing some of the longest protein MD data sets available (>100 μs per protein), we show that estimators constructed based on exact non-Markovian (NM) principles can yield significantly improved mean first-passage times (MFPTs) for protein folding and unfolding. In some cases, MSM bias of more than an order of magnitude can be corrected when identical trajectory data are reanalyzed by non-Markovian approaches. The NM analysis includes "history" information, higher order time correlations compared to MSMs, that is available in every MD trajectory. The NM strategy is insensitive to fine details of the states used and works well when a fine time-discretization (i.e., small "lag time") is used. PMID:27340835
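
    The baseline quantity being estimated can be computed directly from a long trajectory, as in this sketch with a toy three-state chain standing in for folded/intermediate/unfolded states; the paper's non-Markovian estimators are considerably more sophisticated than this direct average.

```python
# Direct mean first-passage time (MFPT) estimate from a discrete trajectory.
import numpy as np

def mfpt(traj, a, b, dt=1.0):
    """Average time between entering state a and next reaching state b."""
    times, t_start = [], None
    for t, s in enumerate(traj):
        if s == a and t_start is None:
            t_start = t
        elif s == b and t_start is not None:
            times.append((t - t_start) * dt)
            t_start = None
    return np.mean(times) if times else np.nan

rng = np.random.default_rng(8)
P = np.array([[0.98, 0.02, 0.00],    # toy 3-state transition matrix
              [0.05, 0.90, 0.05],
              [0.00, 0.02, 0.98]])
traj, s = [], 0
for _ in range(200_000):
    traj.append(s)
    s = rng.choice(3, p=P[s])
print("MFPT 0 -> 2:", mfpt(traj, 0, 2), "steps")
```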

  17. Accurate estimation of normal incidence absorption coefficients with confidence intervals using a scanning laser Doppler vibrometer

    NASA Astrophysics Data System (ADS)

    Vuye, Cedric; Vanlanduit, Steve; Guillaume, Patrick

    2009-06-01

    When optical measurements of the sound field inside a glass tube, near the material under test, are used to estimate the reflection and absorption coefficients, not only these acoustical parameters but also confidence intervals can be determined. The sound fields are visualized using a scanning laser Doppler vibrometer (SLDV). In this paper the influence of different test signals on the quality of the results obtained with this technique is examined. The amount of data gathered during one measurement scan makes a thorough statistical analysis possible, leading to knowledge of confidence intervals. A multi-sine constructed on the resonance frequencies of the test tube proves to be a very good alternative to the traditional periodic chirp. This signal offers the ability to obtain data for multiple frequencies in one measurement, without the danger of a low signal-to-noise ratio. The variability analysis in this paper clearly shows the advantages of the proposed multi-sine compared to the periodic chirp. The measurement procedure and the statistical analysis are validated by measuring the reflection ratio at a closed end and comparing the results with the theoretical value. Results of the testing of two building materials (an acoustic ceiling tile and linoleum) are presented and compared to supplier data.

  18. Wind effect on PV module temperature: Analysis of different techniques for an accurate estimation.

    NASA Astrophysics Data System (ADS)

    Schwingshackl, Clemens; Petitta, Marcello; Ernst Wagner, Jochen; Belluardo, Giorgio; Moser, David; Castelli, Mariapina; Zebisch, Marc; Tetzlaff, Anke

    2013-04-01

    temperature estimation using meteorological parameters.

  19. A new set of atomic radii for accurate estimation of solvation free energy by Poisson-Boltzmann solvent model.

    PubMed

    Yamagishi, Junya; Okimoto, Noriaki; Morimoto, Gentaro; Taiji, Makoto

    2014-11-01

    The Poisson-Boltzmann implicit solvent (PB) is widely used to estimate the solvation free energies of biomolecules in molecular simulations. An optimized set of atomic radii (PB radii) is an important parameter for PB calculations, which determines the distribution of dielectric constants around the solute. We here present new PB radii for the AMBER protein force field to accurately reproduce the solvation free energies obtained from explicit solvent simulations. The presented PB radii were optimized using results from explicit solvent simulations of the large systems. In addition, we discriminated PB radii for N- and C-terminal residues from those for nonterminal residues. The performances using our PB radii showed high accuracy for the estimation of solvation free energies at the level of the molecular fragment. The obtained PB radii are effective for the detailed analysis of the solvation effects of biomolecules.

  1. The effect of high leverage points on the logistic ridge regression estimator having multicollinearity

    NASA Astrophysics Data System (ADS)

    Ariffin, Syaiba Balqish; Midi, Habshah

    2014-06-01

    This article is concerned with the performance of the logistic ridge regression estimation technique in the presence of multicollinearity and high leverage points. In logistic regression, multicollinearity exists among predictors and in the information matrix. The maximum likelihood estimator suffers a huge setback in the presence of multicollinearity, which causes regression estimates to have unduly large standard errors. To remedy this problem, a logistic ridge regression estimator is put forward. It is evident that the logistic ridge regression estimator outperforms the maximum likelihood approach for handling multicollinearity. The effect of high leverage points on the performance of the logistic ridge regression estimator is then investigated through a real data set and a simulation study. The findings signify that the logistic ridge regression estimator fails to provide better parameter estimates in the presence of both high leverage points and multicollinearity.
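
    A minimal sketch of the estimator under discussion, using scikit-learn's L2-penalized logistic regression (its C parameter is the inverse ridge penalty; penalty=None requires scikit-learn >= 1.2) on synthetic, nearly collinear predictors:

```python
# Ridge-penalized vs. (near-)ML logistic regression under collinearity.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)        # near-perfect collinearity
X = np.column_stack([x1, x2])
y = (x1 + rng.normal(scale=0.5, size=n) > 0).astype(int)

mle_like = LogisticRegression(penalty=None).fit(X, y)   # ~ML estimator
ridge = LogisticRegression(penalty="l2", C=0.1).fit(X, y)
print("near-ML coefficients:", mle_like.coef_)   # typically large, unstable
print("ridge   coefficients:", ridge.coef_)      # shrunken, stable
```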

  2. Estimation of measurement accuracy of track point coordinates in nuclear photoemulsion

    NASA Astrophysics Data System (ADS)

    Shamanov, V. V.

    1995-03-01

    A simple method for estimating the measurement accuracy of track point coordinates in nuclear photoemulsion is described. The method is based on an analysis of the residual deviations of measured track points from a straight line approximating the track. The reliability of the algorithm is illustrated by Monte Carlo simulation. Examples of using the method to estimate the accuracy of track point coordinates measured with the microscope KSM-1 (VEB Carl Zeiss Jena) are given.
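
    The method reduces to fitting a straight line to the measured points and reading the coordinate accuracy off the residuals, with two degrees of freedom removed for the fitted line parameters; a sketch with synthetic track points:

```python
# Coordinate accuracy from straight-track residuals.
import numpy as np

def coordinate_sigma(x, y):
    """Residual std about a fitted line, ddof=2 for slope and intercept."""
    coeffs = np.polyfit(x, y, deg=1)
    residuals = y - np.polyval(coeffs, x)
    return np.sqrt(np.sum(residuals**2) / (len(x) - 2))

rng = np.random.default_rng(10)
x = np.linspace(0, 100, 30)                               # along the track
y = 0.3 * x + 5.0 + rng.normal(scale=0.08, size=x.size)   # true sigma 0.08
print(coordinate_sigma(x, y))                             # ~0.08
```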

  3. Reservoir evaluation of thin-bedded turbidites and hydrocarbon pore thickness estimation for an accurate quantification of resource

    NASA Astrophysics Data System (ADS)

    Omoniyi, Bayonle; Stow, Dorrik

    2016-04-01

    One of the major challenges in the assessment of and production from turbidite reservoirs is to take full account of thin- and medium-bedded turbidites (<10 cm and <30 cm, respectively). Although such thinner, low-pay sands may comprise a significant proportion of the reservoir succession, they can go unnoticed by conventional analysis and so negatively impact reserve estimation, particularly in fields producing from prolific thick-bedded turbidite reservoirs. Field development plans often take little note of such thin beds, which are therefore bypassed by mainstream production. In fact, the trapped and bypassed fluids can be vital where maximising field value and optimising production are key business drivers. We have studied in detail a succession of thin-bedded turbidites associated with thicker-bedded reservoir facies in the North Brae Field, UKCS, using a combination of conventional logs and cores to assess the significance of thin-bedded turbidites in computing hydrocarbon pore thickness (HPT). This quantity, being an indirect measure of thickness, is critical for an accurate estimation of original oil in place (OOIP). By using a combination of conventional and unconventional logging analysis techniques, we obtain three different results for the reservoir intervals studied. These results include estimated net sand thickness, average sand thickness, and their distribution trend within a 3D structural grid. The net sand thickness varies from 205 to 380 ft, and HPT ranges from 21.53 to 39.90 ft. We observe that an integrated approach (neutron-density cross plots conditioned to cores) to HPT quantification reduces the associated uncertainties significantly, resulting in estimation of 96% of actual HPT. Further work will focus on assessing the 3D dynamic connectivity of the low-pay sands with the surrounding thick-bedded turbidite facies.
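
    In a common formulation, the quantity in question is the sum of thickness × porosity × hydrocarbon saturation over net-pay intervals; the sketch below uses invented log values and cutoffs, not data from the Brae field study.

```python
# Hydrocarbon pore thickness: HPT = sum(h_i * phi_i * (1 - Sw_i)).
import numpy as np

thickness = np.array([0.08, 0.25, 0.05, 1.20, 0.15])   # bed thickness, m
porosity  = np.array([0.18, 0.21, 0.15, 0.23, 0.19])
sw        = np.array([0.35, 0.30, 0.55, 0.25, 0.40])   # water saturation

net = porosity >= 0.12                                 # net-sand cutoff
hpt = np.sum(thickness[net] * porosity[net] * (1.0 - sw[net]))
print(f"HPT = {hpt:.3f} m")
```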

  4. Performance evaluation of ocean color satellite models for deriving accurate chlorophyll estimates in the Gulf of Saint Lawrence

    NASA Astrophysics Data System (ADS)

    Montes-Hugo, M.; Bouakba, H.; Arnone, R.

    2014-06-01

    The understanding of phytoplankton dynamics in the Gulf of Saint Lawrence (GSL) is critical for managing major fisheries off the Canadian East coast. In this study, the accuracy of two atmospheric correction techniques (the NASA standard algorithm, SA, and Kuchinke's spectral optimization, KU) and three ocean color inversion models (Carder's empirical model for SeaWiFS (Sea-viewing Wide Field-of-View Sensor), EC, Lee's quasi-analytical model, QAA, and the Garver-Siegel-Maritorena semi-empirical model, GSM) for estimating the phytoplankton absorption coefficient at 443 nm (aph(443)) and the chlorophyll concentration (chl) in the GSL is examined. Each model was validated based on SeaWiFS images and shipboard measurements obtained during May 2000 and April 2001. In general, aph(443) estimates derived from coupling the KU and QAA models presented the smallest differences with respect to in situ determinations measured by high-pressure liquid chromatography (median absolute bias per cruise up to 0.005, RMSE up to 0.013). A change in the inversion approach used for estimating aph(443) values produced up to a 43.4% increase in prediction error, as inferred from the median relative bias per cruise. Likewise, the impact of applying different atmospheric correction schemes was secondary and represented an additive error of up to 24.3%. By using SeaDAS (SeaWiFS Data Analysis System) default values for the optical cross section of phytoplankton (i.e., a*ph(443) = aph(443)/chl = 0.056 m2 mg-1), the median relative bias of our chl estimates, as derived from the most accurate spaceborne aph(443) retrievals and with respect to in situ determinations, increased up to 29%.
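
    With that default cross section, the chlorophyll retrieval described above reduces to a simple ratio, shown here for reference:

```latex
\mathrm{chl} = \frac{a_{ph}(443)}{a^{*}_{ph}(443)},
\qquad a^{*}_{ph}(443) = 0.056~\mathrm{m^{2}\,mg^{-1}}
```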

  5. A Unique Equation to Estimate Flash Points of Selected Pure Liquids: Application to the Correction of Probably Erroneous Flash Point Values

    NASA Astrophysics Data System (ADS)

    Catoire, Laurent; Naudet, Valérie

    2004-12-01

    A simple empirical equation is presented for the estimation of closed-cup flash points for pure organic liquids. Data needed for the estimation of a flash point (FP) are the normal boiling point (Teb), the standard enthalpy of vaporization at 298.15 K [ΔvapH°(298.15 K)] of the compound, and the number of carbon atoms (n) in the molecule. The bounds for this equation are: -100⩽FP(°C)⩽+200; 250⩽Teb(K)⩽650; 20⩽ΔvapH°(298.15 K)/(kJ mol-1)⩽110; 1⩽n⩽21. Compared to other methods (empirical equations, structural group contribution methods, and neural network quantitative structure-property relationships), this simple equation is shown to accurately predict the flash points of a variety of compounds, whatever their chemical groups (monofunctional compounds and polyfunctional compounds) and whatever their structure (linear, branched, cyclic). The same equation is shown to be valid for hydrocarbons, organic nitrogen compounds, organic oxygen compounds, organic sulfur compounds, organic halogen compounds, and organic silicone compounds. It seems that the flash points of organic deuterium compounds, organic tin compounds, organic nickel compounds, organic phosphorus compounds, organic boron compounds, and organic germanium compounds can also be predicted accurately by this equation. A mean absolute deviation of about 3 °C, a standard deviation of about 2 °C, and a maximum absolute deviation of 10 °C are obtained when predictions are compared to experimental data for more than 600 compounds. For all these compounds, the absolute deviation is equal to or lower than the reproducibility expected at a 95% confidence level for closed-cup flash point measurement. This estimation technique has its limitations concerning polyhalogenated compounds, for which the equation should be used with caution. The mean absolute deviation and maximum absolute deviation observed, and the fact that the equation provides unbiased predictions, lead to the conclusion that

  7. Accurate Treatment of Electrostatics during Molecular Adsorption in Nanoporous Crystals without Assigning Point Charges to Framework Atoms

    SciTech Connect

    Watanabe, Taku; Manz, Thomas A.; Sholl, David S.

    2011-02-28

    Molecular simulations have become an important complement to experiments for studying gas adsorption and separation in crystalline nanoporous materials. Conventionally, these simulations use force fields that model adsorbate-pore interactions by assigning point charges to the atoms of the adsorbent. The assignment of framework charges always introduces ambiguity because there are many different choices for defining point charges, even when the true electron density of a material is known. We show how to completely avoid such ambiguity by using the electrostatic potential energy surface (EPES) calculated from plane wave density functional theory (DFT). We illustrate this approach by simulating CO2 adsorption in four metal-organic frameworks (MOFs): IRMOF-1, ZIF-8, ZIF-90, and Zn(nicotinate)2. The resulting CO2 adsorption isotherms are insensitive to the exchange-correlation functional used in the DFT calculation of the EPES but are sensitive to changes in the crystal structure and lattice parameters. Isotherms computed from the DFT EPES are compared to those computed from several point charge models. This comparison makes possible, for the first time, an unbiased assessment of the accuracy of these point charge models for describing adsorption in MOFs. We find an unusually high Henry’s constant (109 mmol/g·bar) and intermediate isosteric heat of adsorption (34.9 kJ/mol) for Zn(nicotinate)2, which makes it a potentially attractive material for CO2 adsorption applications.

  8. Accurate recovery of 4D left ventricular deformations using volumetric B-splines incorporating phase based displacement estimates

    NASA Astrophysics Data System (ADS)

    Chen, Jian; Tustison, Nicholas J.; Amini, Amir A.

    2006-03-01

    In this paper, an improved framework for the estimation of 3-D left-ventricular deformations from tagged MRI is presented. Contiguous short- and long-axis tagged MR images are collected and are used within a 4-D B-spline based deformable model to determine 4-D displacements and strains. An initial 4-D B-spline model fitted to sparse tag line data is first constructed by minimizing a 4-D Chamfer distance potential-based energy function for aligning isoparametric planes of the model with tag line locations; subsequently, dense virtual tag lines based on 2-D phase-based displacement estimates and the initial model are created. A final 4-D B-spline model with increased knots is fitted to the virtual tag lines. From the final model, we can extract accurate 3-D myocardial deformation fields and corresponding strain maps, which are local measures of non-rigid deformation. Lagrangian strains in simulated data are derived, which show improvement over our previous work. The method is also applied to 3-D tagged MRI data collected in a canine.

  9. An Evaluation of the Plant Density Estimator the Point-Centred Quarter Method (PCQM) Using Monte Carlo Simulation

    PubMed Central

    Khan, Md Nabiul Islam; Hijbeek, Renske; Berger, Uta; Koedam, Nico; Grueters, Uwe; Islam, S. M. Zahirul; Hasan, Md Asadul; Dahdouh-Guebas, Farid

    2016-01-01

    Background In the Point-Centred Quarter Method (PCQM), the mean distance of the first nearest plants in each quadrant of a number of random sample points is converted to plant density. It is a quick method for plant density estimation. In recent publications the estimator equations of simple PCQM (PCQM1) and higher order ones (PCQM2 and PCQM3, which use the distance of the second and third nearest plants, respectively) show discrepancies. This study attempts to review PCQM estimators in order to find the most accurate equation form. We tested the accuracy of different PCQM equations using Monte Carlo simulations in simulated plant populations (having ‘random’, ‘aggregated’ and ‘regular’ spatial patterns) and empirical ones. Principal Findings PCQM requires at least 50 sample points to ensure a desired level of accuracy. PCQM with a corrected estimator is more accurate than with a previously published estimator. The published PCQM versions (PCQM1, PCQM2 and PCQM3) show significant differences in accuracy of density estimation, i.e. the higher order PCQM provides higher accuracy. However, the corrected PCQM versions show no significant differences among them as tested in various spatial patterns, except in plant assemblages with a strong repulsion (plant competition). If N is the number of sample points and R is the distance, the corrected estimator of PCQM1 is 4(4N − 1)/(π ∑ R²) but not 12N/(π ∑ R²), of PCQM2 is 4(8N − 1)/(π ∑ R²) but not 28N/(π ∑ R²), and of PCQM3 is 4(12N − 1)/(π ∑ R²) but not 44N/(π ∑ R²) as published. Significance If the spatial pattern of a plant association is random, PCQM1 with a corrected equation estimator and over 50 sample points would be sufficient to provide accurate density estimation. PCQM using just the nearest tree in each quadrant is therefore sufficient, which facilitates sampling of trees, particularly in areas with just a few hundred trees per hectare. PCQM3 provides the best density estimations for all
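
    In display form, the corrected estimators quoted above are, with N the number of sample points and R the point-to-plant distances:

```latex
\hat{\lambda}_{\mathrm{PCQM1}} = \frac{4(4N-1)}{\pi \sum R^{2}}, \qquad
\hat{\lambda}_{\mathrm{PCQM2}} = \frac{4(8N-1)}{\pi \sum R^{2}}, \qquad
\hat{\lambda}_{\mathrm{PCQM3}} = \frac{4(12N-1)}{\pi \sum R^{2}}
```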

  10. On a fourth order accurate implicit finite difference scheme for hyperbolic conservation laws. II - Five-point schemes

    NASA Technical Reports Server (NTRS)

    Harten, A.; Tal-Ezer, H.

    1981-01-01

    This paper presents a family of two-level five-point implicit schemes for the solution of one-dimensional systems of hyperbolic conservation laws, which generalize the Crank-Nicolson scheme to fourth order accuracy (4-4) in both time and space. These 4-4 schemes are nondissipative and unconditionally stable. Special attention is given to the system of linear equations associated with these 4-4 implicit schemes. The regularity of this system is analyzed and the efficiency of solution algorithms is examined. A two-datum representation of these 4-4 implicit schemes brings about a compactification of the stencil to three mesh points at each time level. This compact two-datum representation is particularly useful in deriving boundary treatments. Numerical results are presented to illustrate some properties of the proposed scheme.

  11. Point cloud modeling using the homogeneous transformation for non-cooperative pose estimation

    NASA Astrophysics Data System (ADS)

    Lim, Tae W.

    2015-06-01

    A modeling process to simulate point cloud range data that a lidar (light detection and ranging) sensor produces is presented in this paper in order to support the development of non-cooperative pose (relative attitude and position) estimation approaches which will help improve proximity operation capabilities between two adjacent vehicles. The algorithms in the modeling process were based on the homogeneous transformation, which has been employed extensively in robotics and computer graphics, as well as in recently developed pose estimation algorithms. Using a flash lidar in a laboratory testing environment, point cloud data of a test article was simulated and compared against the measured point cloud data. The simulated and measured data sets match closely, validating the modeling process. The modeling capability enables close examination of the characteristics of point cloud images of an object as it undergoes various translational and rotational motions. Relevant characteristics that will be crucial in non-cooperative pose estimation were identified such as shift, shadowing, perspective projection, jagged edges, and differential point cloud density. These characteristics will have to be considered in developing effective non-cooperative pose estimation algorithms. The modeling capability will allow extensive non-cooperative pose estimation performance simulations prior to field testing, saving development cost and providing performance metrics of the pose estimation concepts and algorithms under evaluation. The modeling process also provides "truth" pose of the test objects with respect to the sensor frame so that the pose estimation error can be quantified.
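
    The homogeneous-transformation core of such a modeling process can be sketched in a few lines; the rotation, translation, and toy point cloud below are illustrative assumptions, not the paper's lidar model.

```python
import numpy as np

def homogeneous_transform(points, R, t):
    """Apply a 4x4 homogeneous transform built from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    homog = np.hstack([points, np.ones((points.shape[0], 1))])  # N x 4
    return (T @ homog.T).T[:, :3]

# Example: rotate a toy cloud 30 degrees about z and shift it away from the sensor.
theta = np.deg2rad(30.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
cloud = np.random.default_rng(2).uniform(-1.0, 1.0, size=(100, 3))
moved = homogeneous_transform(cloud, R, t=np.array([0.5, 0.0, 2.0]))
```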

  12. Central blood pressure estimation by using N-point moving average method in the brachial pulse wave.

    PubMed

    Sugawara, Rie; Horinaka, Shigeo; Yagi, Hiroshi; Ishimura, Kimihiko; Honda, Takeharu

    2015-05-01

    Recently, a method of estimating the central systolic blood pressure (C-SBP) using an N-point moving average applied to the radial or brachial artery waveform has been reported. We therefore investigated the relationship between the C-SBP estimated from the brachial artery pressure waveform using the N-point moving average method and the C-SBP measured invasively using a catheter. C-SBP was calculated from the scaled right brachial artery pressure waveforms recorded with a VaSera VS-1500, using an N/6 moving average. This estimated C-SBP was compared with the invasively measured C-SBP obtained within a few minutes. In 41 patients who underwent cardiac catheterization (mean age: 65 years), the invasively measured C-SBP was significantly lower than the right cuff-based brachial BP (138.2 ± 26.3 vs 141.0 ± 24.9 mm Hg, difference -2.78 ± 1.36 mm Hg, P = 0.048). The cuff-based SBP was significantly higher than the invasively measured C-SBP in subjects younger than 60 years. However, the C-SBP estimated with the N/6 moving average from the scaled right brachial artery pressure waveforms and the invasively measured C-SBP did not differ significantly (137.8 ± 24.2 vs 138.2 ± 26.3 mm Hg, difference -0.49 ± 1.39, P = 0.73). The N/6-point moving average method using the non-invasively acquired brachial artery waveform, calibrated by the cuff-based brachial SBP, was an accurate, convenient and useful method for estimating C-SBP. Thus, C-SBP can be estimated simply by applying a regular arm cuff, which makes the method highly feasible in practical medicine.
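
    At its core, the reported estimator is a moving average whose window is one sixth of the beat length. A sketch under that reading, with a synthetic waveform rather than the VaSera implementation:

```python
import numpy as np

def estimate_csbp(waveform, fs, heart_period_s):
    """Estimate central SBP as the peak of an N/6 moving average of the
    cuff-calibrated brachial pressure waveform (N = samples per beat)."""
    n = max(1, int(round(fs * heart_period_s / 6.0)))
    kernel = np.ones(n) / n
    return np.convolve(waveform, kernel, mode="same").max()

# Toy brachial-like waveform: 1 s beat sampled at 500 Hz, in mm Hg.
fs = 500.0
t = np.arange(0.0, 1.0, 1.0 / fs)
wave = 95.0 + 45.0 * np.clip(np.sin(2 * np.pi * t), 0.0, None) ** 2
print(f"estimated C-SBP: {estimate_csbp(wave, fs, 1.0):.1f} mm Hg")
```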

  13. How accurate and precise are limited sampling strategies in estimating exposure to mycophenolic acid in people with autoimmune disease?

    PubMed

    Abd Rahman, Azrin N; Tett, Susan E; Staatz, Christine E

    2014-03-01

    Mycophenolic acid (MPA) is a potent immunosuppressant agent, which is increasingly being used in the treatment of patients with various autoimmune diseases. Dosing to achieve a specific target MPA area under the concentration-time curve from 0 to 12 h post-dose (AUC12) is likely to lead to better treatment outcomes in patients with autoimmune disease than a standard fixed-dose strategy. This review summarizes the available published data on concentration monitoring strategies for MPA in patients with autoimmune disease and examines the accuracy and precision of methods reported to date using limited concentration-time points to estimate MPA AUC12. A total of 13 studies were identified that assessed the correlation between single time points and MPA AUC12 and/or examined the predictive performance of limited sampling strategies in estimating MPA AUC12. The majority of studies investigated mycophenolate mofetil (MMF) rather than the enteric-coated mycophenolate sodium (EC-MPS) formulation of MPA. Correlations between MPA trough concentrations and MPA AUC12 estimated by full concentration-time profiling ranged from 0.13 to 0.94 across ten studies, with the highest associations (r² = 0.90-0.94) observed in lupus nephritis patients. Correlations were generally higher in autoimmune disease patients compared with renal allograft recipients and higher after MMF compared with EC-MPS intake. Four studies investigated use of a limited sampling strategy to predict MPA AUC12 determined by full concentration-time profiling. Three studies used a limited sampling strategy consisting of a maximum combination of three sampling time points with the latest sample drawn 3-6 h after MMF intake, whereas the remaining study tested all combinations of sampling times. MPA AUC12 was best predicted when three samples were taken at pre-dose and at 1 and 3 h post-dose with a mean bias and imprecision of 0.8 and 22.6 % for multiple linear regression analysis and of -5.5 and 23.0 % for

  14. Estimating the gas transfer velocity: a prerequisite for more accurate and higher resolution GHG fluxes (lower Aare River, Switzerland)

    NASA Astrophysics Data System (ADS)

    Sollberger, S.; Perez, K.; Schubert, C. J.; Eugster, W.; Wehrli, B.; Del Sontro, T.

    2013-12-01

    Currently, carbon dioxide (CO2) and methane (CH4) emissions from lakes, reservoirs and rivers are readily investigated due to the global warming potential of those gases and the role these inland waters play in the carbon cycle. However, there is a lack of high spatiotemporally-resolved emission estimates, and how to accurately assess the gas transfer velocity (K) remains controversial. In anthropogenically-impacted systems where run-of-river reservoirs disrupt the flow of sediments by increasing the erosion and load accumulation patterns, the resulting production of carbonic greenhouse gases (GH-C) is likely to be enhanced. The GH-C flux is thus counteracting the terrestrial carbon sink in these environments that act as net carbon emitters. The aim of this project was to determine the GH-C emissions from a medium-sized river heavily impacted by several impoundments and channelization through a densely-populated region of Switzerland. Estimating gas emission from rivers is not trivial and recently several models have been put forth to do so; therefore a second goal of this project was to compare the river emission models available with direct measurements. Finally, we further validated the modeled fluxes by using a combined approach with water sampling, chamber measurements, and highly temporal GH-C monitoring using an equilibrator. We conducted monthly surveys along the 120 km of the lower Aare River where we sampled for dissolved CH4 ('manual' sampling) at a 5-km sampling resolution, and measured gas emissions directly with chambers over a 35 km section. We calculated fluxes (F) via the boundary layer equation (F=K×(Cw-Ceq)) that uses the water-air GH-C concentration (C) gradient (Cw-Ceq) and K, which is the most sensitive parameter. K was estimated using 11 different models found in the literature with varying dependencies on: river hydrology (n=7), wind (2), heat exchange (1), and river width (1). We found that chamber fluxes were always higher than boundary
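
    The boundary layer equation quoted above is a one-liner; the function name and numbers below are illustrative, and consistent units are the user's responsibility.

```python
def gas_flux(k_gas, c_water, c_equilibrium):
    """Boundary-layer flux F = K * (Cw - Ceq); with K in m/d and concentrations
    in mmol/m^3, F comes out in mmol m^-2 d^-1."""
    return k_gas * (c_water - c_equilibrium)

# Illustrative numbers only: supersaturated CH4 in river water.
print(gas_flux(k_gas=2.5, c_water=0.30, c_equilibrium=0.003))
```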

  15. Development of a new, robust and accurate, spectroscopic metric for scatterer size estimation in optical coherence tomography (OCT) images

    NASA Astrophysics Data System (ADS)

    Kassinopoulos, Michalis; Pitris, Costas

    2016-03-01

    The modulations appearing on the backscattering spectrum originating from a scatterer are related to its diameter, as described by Mie theory for spherical particles. Many metrics for Spectroscopic Optical Coherence Tomography (SOCT) take advantage of this observation in order to enhance the contrast of Optical Coherence Tomography (OCT) images. However, none of these metrics has achieved high accuracy when calculating the scatterer size. In this work, Mie theory was used to further investigate the relationship between the degree of modulation in the spectrum and the scatterer size. From this study, a new spectroscopic metric, the bandwidth of the Correlation of the Derivative (COD), was developed which is more robust and accurate than previously reported techniques in the estimation of scatterer size. The self-normalizing nature of the derivative and the robustness of the first minimum of the correlation as a measure of its width offer significant advantages over other spectral analysis approaches, especially for scatterer sizes above 3 μm. The feasibility of this technique was demonstrated using phantom samples containing 6, 10 and 16 μm diameter microspheres as well as images of normal and cancerous human colon. The results are very promising, suggesting that the proposed metric could be implemented in OCT spectral analysis for measuring nuclear size distribution in biological tissues. A technique providing such information would be of great clinical significance since it would allow the detection of nuclear enlargement at the earliest stages of precancerous development.

  16. Optimization of tissue physical parameters for accurate temperature estimation from finite-element simulation of radiofrequency ablation

    NASA Astrophysics Data System (ADS)

    Subramanian, Swetha; Mast, T. Douglas

    2015-09-01

    Computational finite element models are commonly used for the simulation of radiofrequency ablation (RFA) treatments. However, the accuracy of these simulations is limited by the lack of precise knowledge of tissue parameters. In this technical note, an inverse solver based on the unscented Kalman filter (UKF) is proposed to optimize values for specific heat, thermal conductivity, and electrical conductivity resulting in accurately simulated temperature elevations. A total of 15 RFA treatments were performed on ex vivo bovine liver tissue. For each RFA treatment, 15 finite-element simulations were performed using a set of deterministically chosen tissue parameters to estimate the mean and variance of the resulting tissue ablation. The UKF was implemented as an inverse solver to recover the specific heat, thermal conductivity, and electrical conductivity corresponding to the measured area of the ablated tissue region, as determined from gross tissue histology. These tissue parameters were then employed in the finite element model to simulate the position- and time-dependent tissue temperature. Results show good agreement between simulated and measured temperature.

  18. Inverse estimation of near-field temperature and surface heat flux via single point temperature measurement

    NASA Astrophysics Data System (ADS)

    Wu, Chen-Wu; Shu, Yong-Hua; Xie, Ji-Jia; Jiang, Jian-Zheng; Fan, Jing

    2016-05-01

    A concept was developed to inversely estimate the near-field temperature as well as the surface heat flux for the transient heat conduction problem with an unknown heat-flux boundary condition. The mathematical formula was derived for the inverse estimation of the near-field temperature and surface heat flux via a single-point temperature measurement. The experiments were carried out in a vacuum chamber and the theoretically predicted temperatures were verified at specific positions. The inverse estimation principle was validated and the estimation deviation was evaluated for the present configuration.

  19. Three-point bounds and other estimates for strongly nonlinear composites

    NASA Astrophysics Data System (ADS)

    Castañeda, P. Ponte

    1998-05-01

    A variational procedure due to Ponte Castañeda et al. [Phys. Rev. B 46, 4387 (1992)] is used to determine three-point bounds and other types of estimates for the effective response of strongly nonlinear composites with random microstructures. The variational procedure makes use of estimates for the effective properties of "linear comparison composites" to generate corresponding estimates for nonlinear composites. Several equivalent forms of the variational procedure are derived. In particular, it is shown that the mean-field theory of Wan et al. [Phys. Rev. B 54, 3946 (1996)], which also makes use of a linear comparison composite, together with a certain "decoupling approximation," leads to results that are precisely identical to those that can be obtained from the earlier variational procedure. Finally, three-point bounds and other estimates are computed for power-law composites with cell-type microstructures, and the results are compared with random resistor network simulations available from the literature.

  20. Point Cloud Based Relative Pose Estimation of a Satellite in Close Range.

    PubMed

    Liu, Lujiang; Zhao, Gaopeng; Bo, Yuming

    2016-06-04

    Determination of the relative pose of satellites is essential in space rendezvous operations and on-orbit servicing missions. The key problems are the adoption of a suitable sensor on board the chaser and efficient techniques for pose estimation. This paper aims to estimate the pose of a target satellite in close range on the basis of its known model by using point cloud data generated by a flash LIDAR sensor. A novel model-based pose estimation method is proposed; it includes a fast and reliable initial pose acquisition method based on global optimal searching, processing the dense point cloud data directly, and a pose tracking method based on the Iterative Closest Point algorithm. A simulation system is also presented in order to evaluate the performance of the sensor and generate simulated sensor point cloud data. It also provides the truth pose of the test target so that the pose estimation error can be quantified. To investigate the effectiveness of the proposed approach and the achievable pose accuracy, numerical simulation experiments are performed; the results demonstrate the algorithm's capability of operating directly on point clouds and handling large pose variations. A field testing experiment was also conducted, and the results show that the proposed method is effective.
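
    The pose-tracking stage rests on Iterative Closest Point, whose inner step is a closed-form rigid alignment of matched point pairs. A minimal SVD (Kabsch) sketch of that inner step is given below; it is a generic building block, not the authors' global-search initializer. ICP alternates this step with nearest-neighbor matching until convergence.

```python
import numpy as np

def rigid_align(source, target):
    """Best-fit rotation R and translation t mapping source onto target for
    already-matched N x 3 point sets (the closed-form inner step of ICP)."""
    mu_s, mu_t = source.mean(axis=0), target.mean(axis=0)
    H = (source - mu_s).T @ (target - mu_t)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T            # the diagonal guard prevents reflections
    t = mu_t - R @ mu_s
    return R, t
```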

  3. A double-observer approach for estimating detection probability and abundance from point counts

    USGS Publications Warehouse

    Nichols, J.D.; Hines, J.E.; Sauer, J.R.; Fallon, F.W.; Fallon, J.E.; Heglund, P.J.

    2000-01-01

    Although point counts are frequently used in ornithological studies, basic assumptions about detection probabilities often are untested. We apply a double-observer approach developed to estimate detection probabilities for aerial surveys (Cook and Jacobson 1979) to avian point counts. At each point count, a designated 'primary' observer indicates to another ('secondary') observer all birds detected. The secondary observer records all detections of the primary observer as well as any birds not detected by the primary observer. Observers alternate primary and secondary roles during the course of the survey. The approach permits estimation of observer-specific detection probabilities and bird abundance. We developed a set of models that incorporate different assumptions about sources of variation (e.g. observer, bird species) in detection probability. Seventeen field trials were conducted, and models were fit to the resulting data using program SURVIV. Single-observer point counts generally miss varying proportions of the birds actually present, and observer and bird species were found to be relevant sources of variation in detection probabilities. Overall detection probabilities (probability of being detected by at least one of the two observers) estimated using the double-observer approach were very high (>0.95), yielding precise estimates of avian abundance. We consider problems with the approach and recommend possible solutions, including restriction of the approach to fixed-radius counts to reduce the effect of variation in the effective radius of detection among various observers and to provide a basis for using spatial sampling to estimate bird abundance on large areas of interest. We believe that most questions meriting the effort required to carry out point counts also merit serious attempts to estimate detection probabilities associated with the counts. The double-observer approach is a method that can be used for this purpose.
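
    For reference, the overall probability cited above is the complement of both observers missing a bird; with observer-specific detection estimates p1 and p2:

```latex
\hat{p}_{\mathrm{overall}} = 1 - (1 - \hat{p}_{1})(1 - \hat{p}_{2})
```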

  4. Estimated results analysis and application of the precise point positioning based high-accuracy ionosphere delay

    NASA Astrophysics Data System (ADS)

    Wang, Shi-tai; Peng, Jun-huan

    2015-12-01

    The characteristics of the ionosphere delay estimated with precise point positioning are analyzed in this paper. The estimation, interpolation and application of the ionosphere delay are studied based on the processing of 24 h of data from 5 observation stations. The results show that the estimated ionosphere delay is affected by the hardware delay bias from the receiver, so that there is a difference between the estimated and interpolated results. The results also show that the RMSs (root mean squares) are larger, while the STDs (standard deviations) are better than 0.11 m. When the satellite difference is used, the hardware delay bias can be canceled. The interpolated satellite-differenced ionosphere delay is better than 0.11 m. Although there is a difference between the estimated and interpolated ionosphere delay results, it does not affect their application in single-frequency positioning, and the positioning accuracy can reach the centimeter level.

  5. Using CORINE land cover and the point survey LUCAS for area estimation

    NASA Astrophysics Data System (ADS)

    Gallego, Javier; Bamps, Catharina

    2008-12-01

    CORINE land cover 2000 (CLC2000) is a European land cover map produced by photo-interpretation of Landsat ETM+ images. Its direct use for area estimation can be strongly biased and does not generally report single crops. CLC areas need to be calibrated to give acceptable statistical results. LUCAS (land use/cover area frame survey) is a point survey carried out in 2001 and 2003 in the European Union (EU15) on a systematic sample of clusters of points. LUCAS is especially useful for area estimation in geographic units that do not coincide with administrative regions, such as a set of coastal areas defined with a 10 km buffer. Some variance estimation issues with systematic sampling of clusters are analysed. The contingency table obtained by overlaying CLC and LUCAS gives the fine-scale composition of CLC classes. Using CLC for post-stratification of LUCAS is equivalent to the direct calibration estimator when the sampling units are points. Stratification is easier to adapt to a scheme in which the sampling units are the clusters of points used in LUCAS 2001/2003.

  6. A Direct Latent Variable Modeling Based Method for Point and Interval Estimation of Coefficient Alpha

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.

    2015-01-01

    A direct approach to point and interval estimation of Cronbach's coefficient alpha for multiple component measuring instruments is outlined. The procedure is based on a latent variable modeling application with widely circulated software. As a by-product, using sample data the method permits ascertaining whether the population discrepancy…
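
    For context, the population quantity being estimated is the usual coefficient alpha for an instrument with k components Y_i and total score X (the standard definition, not anything specific to this paper's latent variable method):

```latex
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_{Y_i}^{2}}{\sigma_{X}^{2}}\right),
\qquad X = \sum_{i=1}^{k} Y_i
```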

  7. Human body 3D posture estimation using significant points and two cameras.

    PubMed

    Juang, Chia-Feng; Chen, Teng-Chang; Du, Wei-Chin

    2014-01-01

    This paper proposes a three-dimensional (3D) human posture estimation system that locates 3D significant body points based on 2D body contours extracted from two cameras without using any depth sensors. The 3D significant body points that are located by this system include the head, the center of the body, the tips of the feet, the tips of the hands, the elbows, and the knees. First, a linear support vector machine- (SVM-) based segmentation method is proposed to distinguish the human body from the background in red, green, and blue (RGB) color space. The SVM-based segmentation method uses not only normalized color differences but also the included angle between pixels in the current frame and the background in order to reduce shadow influence. After segmentation, 2D significant points in each of the two extracted images are located. A significant point volume matching (SPVM) method is then proposed to reconstruct the 3D significant body point locations by using 2D posture estimation results. Experimental results show that the proposed SVM-based segmentation method performs better than other gray-level- and RGB-based segmentation approaches. This paper also shows the effectiveness of the 3D posture estimation results in different postures.

  8. Observing Volcanic Thermal Anomalies from Space: How Accurate is the Estimation of the Hotspot's Size and Temperature?

    NASA Astrophysics Data System (ADS)

    Zaksek, K.; Pick, L.; Lombardo, V.; Hort, M. K.

    2015-12-01

    Measuring the heat emission from active volcanic features on the basis of infrared satellite images contributes to the volcano's hazard assessment. Because these thermal anomalies occupy only a small fraction (< 1 %) of a typically resolved target pixel (e.g. from Landsat 7, MODIS), the accurate determination of the hotspot's size and temperature is, however, problematic. Conventionally this is overcome by comparing observations in at least two separate infrared spectral wavebands (the Dual-Band method). We investigate the resolution limits of this thermal un-mixing technique by means of a uniquely designed indoor analog experiment. Therein the volcanic feature is simulated by an electrical heating alloy of 0.5 mm diameter installed on a plywood panel of high emissivity. Two thermographic cameras (VarioCam high resolution and ImageIR 8300 by Infratec) record images of the artificial heat source in wavebands comparable to those available from satellite data. These range from the short-wave infrared (1.4-3 µm) over the mid-wave infrared (3-8 µm) to the thermal infrared (8-15 µm). In the conducted experiment the pixel fraction of the hotspot was successively reduced by increasing the camera-to-target distance from 3 m to 35 m. On the basis of an individual target pixel, the expected decrease of the hotspot pixel area with distance at a relatively constant wire temperature of around 600 °C was confirmed. The deviation of the hotspot's pixel fraction yielded by the Dual-Band method from the theoretically calculated one was found to be within 20 % up to a target distance of 25 m. This means that a reliable estimation of the hotspot size is only possible if the hotspot is larger than about 3 % of the pixel area, a resolution boundary below which most remotely sensed volcanic hotspots fall. Future efforts will focus on the investigation of a resolution limit for the hotspot's temperature by varying the alloy's amperage. Moreover, the un-mixing results for more realistic multi
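
    The Dual-Band method referenced here is conventionally posed as a two-band mixture of Planck radiances B(λ, T), solved for the hot pixel fraction f and the hotspot temperature; this textbook form is an assumption on our part, not taken from the abstract:

```latex
R(\lambda_i) = f\,B(\lambda_i, T_{\mathrm{hot}}) + (1 - f)\,B(\lambda_i, T_{\mathrm{bg}}),
\qquad i = 1, 2
```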

  9. Estimating the Contribution of Impurities to the Uncertainty of Metal Fixed-Point Temperatures

    NASA Astrophysics Data System (ADS)

    Hill, K. D.

    2014-04-01

    The estimation of the uncertainty component attributable to impurities remains a central and important topic of fixed-point research. Various methods are available for this estimation, depending on the extent of the available information. The sum of individual estimates method has considerable appeal where there is adequate knowledge of the sensitivity coefficients for each of the impurity elements and sufficiently low uncertainty regarding their concentrations. The overall maximum estimate (OME) forsakes the behavior of the individual elements by assuming that the cryoscopic constant adequately represents (or is an upper bound for) the sensitivity coefficients of the individual impurities. Validation of these methods using melting and/or freezing curves is recommended to provide confidence. Recent investigations of indium, tin, and zinc fixed points are reported. Glow discharge mass spectrometry was used to determine the impurity concentrations of the metals used to fill the cells. Melting curves were analyzed to derive an experimental overall impurity concentration (assuming that all impurities have a sensitivity coefficient equivalent to that of the cryoscopic constant). The two values (chemical and experimental) for the overall impurity concentrations were then compared. Based on the data obtained, the pragmatic approach of choosing the larger of the chemical and experimentally derived quantities as the best estimate of the influence of impurities on the temperature of the freezing point is suggested rather than relying solely on the chemical analysis and the OME method to derive the uncertainty component attributable to impurities.

  10. Monte Carlo point process estimation of electromyographic envelopes from motor cortical spikes for brain-machine interfaces

    NASA Astrophysics Data System (ADS)

    Liao, Yuxi; She, Xiwei; Wang, Yiwen; Zhang, Shaomin; Zhang, Qiaosheng; Zheng, Xiaoxiang; Principe, Jose C.

    2015-12-01

    Objective. Representation of movement in the motor cortex (M1) has been widely studied in brain-machine interfaces (BMIs). The electromyogram (EMG) has greater bandwidth than the conventional kinematic variables (such as position, velocity), and is functionally related to the discharge of cortical neurons. As the stochastic information of EMG is derived from the explicit spike time structure, point process (PP) methods will be a good solution for decoding EMG directly from neural spike trains. Previous studies usually assume linear or exponential tuning curves between neural firing and EMG, which may not be true. Approach. In our analysis, we estimate the tuning curves in a data-driven way and find both the traditional functional-excitatory and functional-inhibitory neurons, which are widely found across a rat’s motor cortex. To accurately decode EMG envelopes from M1 neural spike trains, the Monte Carlo point process (MCPP) method is implemented based on such nonlinear tuning properties. Main results. Better reconstruction of EMG signals is shown on baseline and extreme high peaks, as our method can better preserve the nonlinearity of the neural tuning during decoding. The MCPP improves the prediction accuracy (normalized mean squared error) by 57% and 66% on average compared with the adaptive point process filter using linear and exponential tuning curves respectively, for all 112 data segments across six rats. Compared to a Wiener filter using spike rates with an optimal window size of 50 ms, MCPP decoding EMG from a point process improves the normalized mean square error (NMSE) by 59% on average. Significance. These results suggest that neural tuning is constantly changing during task execution and therefore, the use of spike timing methodologies and estimation of appropriate tuning curves needs to be undertaken for better EMG decoding in motor BMIs.

  11. A method of rapidly estimating the position of the laminar separation point

    NASA Technical Reports Server (NTRS)

    Von Doenhoff, Albert E

    1938-01-01

    A method is described of rapidly estimating the position of the laminar separation point from the given pressure distribution along a body; the method is applicable to a fairly wide variety of cases. The laminar separation point is found by the von Karman-Millikan method for a series of velocity distributions along a flat plate, which consist of a region of uniform velocity followed by a region of uniform decreased velocity. It is shown that such a velocity distribution can frequently replace the actual velocity distribution along a body insofar as the effects on laminar separation are concerned. An example of the application of the method is given by using it to calculate the position of the laminar separation point on the NACA 0012 airfoil section at zero lift. The agreement between the position of the separation point calculated according to the present method and that found from more elaborate computations is very good.

  12. Estimating the melting point, entropy of fusion, and enthalpy of fusion of organic compounds via SPARC.

    PubMed

    Whiteside, T S; Hilal, S H; Brenner, A; Carreira, L A

    2016-08-01

    The entropy of fusion, enthalpy of fusion, and melting point of organic compounds can be estimated through three models developed using the SPARC (SPARC Performs Automated Reasoning in Chemistry) platform. The entropy of fusion is modelled through a combination of interaction terms and physical descriptors. The enthalpy of fusion is modelled as a function of the entropy of fusion, boiling point, and flexibility of the molecule. The melting point model is the enthalpy of fusion divided by the entropy of fusion. These models were developed in part to improve SPARC's vapour pressure and solubility models. These models have been tested on 904 unique compounds. The entropy model has a RMS of 12.5 J mol(-1) K(-1). The enthalpy model has a RMS of 4.87 kJ mol(-1). The melting point model has a RMS of 54.4°C. PMID:27586365
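
    Since the melting-point model is literally the quotient of the other two models, the final step reduces to one line; the sketch below shows only that division (with invented inputs), not SPARC's underlying enthalpy and entropy models.

```python
def melting_point_celsius(dh_fus_kj_per_mol, ds_fus_j_per_mol_k):
    """T_m = dH_fus / dS_fus in kelvin, converted to degrees Celsius."""
    return dh_fus_kj_per_mol * 1000.0 / ds_fus_j_per_mol_k - 273.15

# Illustrative values of a plausible magnitude for a small organic compound.
print(f"{melting_point_celsius(18.0, 55.0):.1f} C")
```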

  13. An opportunity for directly estimating the characteristics of zero-point dynamics in polyethylene crystals

    NASA Astrophysics Data System (ADS)

    Vettegren, V. I.; Slutsker, A. I.; Titenkov, L. S.; Kulik, V. B.; Gilyarov, V. L.

    2007-02-01

    For large polyethylene crystallites (100 × 60 × 60 nm), the width of the Raman band at 1129 cm-1 and the angular position of the x-ray equatorial 110 reflection were measured as a function of temperature over the range 5-300 K. It is found that the Raman bandwidth has an athermic (zero-point) component at low temperatures. This component is used to estimate the zero-point energies of torsional and bending vibrations of polyethylene molecules. These energies are close to those obtained from analyzing the x-ray diffraction data. It is concluded that the characteristics of zero-point dynamics can be determined directly from measuring the zero-point width of a Raman band.

  15. Estimation of the auto frequency response function at unexcited points using dummy masses

    NASA Astrophysics Data System (ADS)

    Hosoya, Naoki; Yaginuma, Shinji; Onodera, Hiroshi; Yoshimura, Takuya

    2015-02-01

    For structures with complex shapes and space limitations, vibration tests using an exciter or impact hammer for the excitation are difficult. Although measuring the auto frequency response function at an unexcited point may not be practical via a vibration test, it can be obtained by treating the inertia force acting on a dummy mass as an external force on the target structure when a different point is excited. We propose a method to estimate the auto frequency response functions at unexcited points by attaching a small mass (dummy mass) comparable to the accelerometer mass. The validity of the proposed method is demonstrated by comparing the auto frequency response functions estimated at unexcited points in a beam structure to those obtained from numerical simulations. We also consider random measurement errors by finite element analysis and vibration tests, but not bias errors. Additionally, the applicability of the proposed method is demonstrated by applying it to estimate the auto frequency response function of the lower arm of a car suspension.

  16. A removal model for estimating detection probabilities from point-count surveys

    USGS Publications Warehouse

    Farnsworth, G.L.; Pollock, K.H.; Nichols, J.D.; Simons, T.R.; Hines, J.E.; Sauer, J.R.

    2002-01-01

    Use of point-count surveys is a popular method for collecting data on abundance and distribution of birds. However, analyses of such data often ignore potential differences in detection probability. We adapted a removal model to directly estimate detection probability during point-count surveys. The model assumes that singing frequency is a major factor influencing probability of detection when birds are surveyed using point counts. This may be appropriate for surveys in which most detections are by sound. The model requires counts to be divided into several time intervals. Point counts are often conducted for 10 min, where the number of birds recorded is divided into those first observed in the first 3 min, the subsequent 2 min, and the last 5 min. We developed a maximum-likelihood estimator for the detectability of birds recorded during counts divided into those intervals. This technique can easily be adapted to point counts divided into intervals of any length. We applied this method to unlimited-radius counts conducted in Great Smoky Mountains National Park. We used model selection criteria to identify whether detection probabilities varied among species, throughout the morning, throughout the season, and among different observers. We found differences in detection probability among species. Species that sing frequently such as Winter Wren (Troglodytes troglodytes) and Acadian Flycatcher (Empidonax virescens) had high detection probabilities (~90%) and species that call infrequently such as Pileated Woodpecker (Dryocopus pileatus) had low detection probability (36%). We also found detection probabilities varied with the time of day for some species (e.g. thrushes) and between observers for other species. We used the same approach to estimate detection probability and density for a subset of the observations with limited-radius point counts.
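
    One common way to formalize such a removal model (a standard parameterization assumed here for illustration, not necessarily the authors' exact likelihood) lets detections occur at a constant rate φ, so that a bird present at the point is first recorded in the interval (t_{i-1}, t_i] of a count of total length t_K with probability:

```latex
\pi_i = e^{-\phi\, t_{i-1}} - e^{-\phi\, t_i}, \qquad
p = 1 - e^{-\phi\, t_K}
```

    The interval counts are then multinomial given the total, and maximizing that likelihood yields both φ and the overall detection probability p.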

  17. A non-rigid point matching method with local topology preservation for accurate bladder dose summation in high dose rate cervical brachytherapy

    NASA Astrophysics Data System (ADS)

    Chen, Haibin; Zhong, Zichun; Liao, Yuliang; Pompoš, Arnold; Hrycushko, Brian; Albuquerque, Kevin; Zhen, Xin; Zhou, Linghong; Gu, Xuejun

    2016-02-01

    GEC-ESTRO guidelines for high dose rate cervical brachytherapy advocate the reporting of the D2cc (the minimum dose received by the maximally exposed 2cc volume) to organs at risk. Due to large interfractional organ motion, reporting of accurate cumulative D2cc over a multifractional course is a non-trivial task requiring deformable image registration and deformable dose summation. To efficiently and accurately describe the point-to-point correspondence of the bladder wall over all treatment fractions while preserving local topologies, we propose a novel graphic processing unit (GPU)-based non-rigid point matching algorithm. This is achieved by introducing local anatomic information into the iterative update of correspondence matrix computation in the ‘thin plate splines-robust point matching’ (TPS-RPM) scheme. The performance of the GPU-based TPS-RPM with local topology preservation algorithm (TPS-RPM-LTP) was evaluated using four numerically simulated synthetic bladders having known deformations, a custom-made porcine bladder phantom embedded with twenty one fiducial markers, and 29 fractional computed tomography (CT) images from seven cervical cancer patients. Results show that TPS-RPM-LTP achieved excellent geometric accuracy with landmark residual distance error (RDE) of 0.7  ±  0.3 mm for the numerical synthetic data with different scales of bladder deformation and structure complexity, and 3.7  ±  1.8 mm and 1.6  ±  0.8 mm for the porcine bladder phantom with large and small deformation, respectively. The RDE accuracy of the urethral orifice landmarks in patient bladders was 3.7  ±  2.1 mm. When compared to the original TPS-RPM, the TPS-RPM-LTP improved landmark matching by reducing landmark RDE by 50  ±  19%, 37  ±  11% and 28  ±  11% for the synthetic, porcine phantom and the patient bladders, respectively. This was achieved with a computational time of less than 15 s in all cases

  18. Interior-point methods for estimating seasonal parameters in discrete-time infectious disease models.

    PubMed

    Word, Daniel P; Young, James K; Cummings, Derek A T; Iamsirithaworn, Sopon; Laird, Carl D

    2013-01-01

    Infectious diseases remain a significant health concern around the world. Mathematical modeling of these diseases can help us understand their dynamics and develop more effective control strategies. In this work, we show the capabilities of interior-point methods and nonlinear programming (NLP) formulations to efficiently estimate parameters in multiple discrete-time disease models using measles case count data from three cities. These models include multiplicative measurement noise and incorporate seasonality into multiple model parameters. Our results show that nearly identical patterns are estimated even when assuming seasonality in different model parameters, and that these patterns show strong correlation to school term holidays across very different social settings and holiday schedules. We show that interior-point methods provide a fast and flexible approach to parameterizing models that can be an alternative to more computationally intensive methods. PMID:24167542

  1. Comparison of Single-Point and Continuous Sampling Methods for Estimating Residential Indoor Temperature and Humidity.

    PubMed

    Johnston, James D; Magnusson, Brianna M; Eggett, Dennis; Collingwood, Scott C; Bernhardt, Scott A

    2015-01-01

    Residential temperature and humidity are associated with multiple health effects. Studies commonly use single-point measures to estimate indoor temperature and humidity exposures, but there is little evidence to support this sampling strategy. This study evaluated the relationship between single-point and continuous monitoring of air temperature, apparent temperature, relative humidity, and absolute humidity over four exposure intervals (5 min, 30 min, 24 hr, and 12 days) in 9 northern Utah homes, from March to June 2012. Three homes were sampled twice, for a total of 12 observation periods. Continuous data-logged sampling was conducted in homes for 2-3 weeks, and simultaneous single-point measures (n = 114) were collected using handheld thermo-hygrometers. Time-centered single-point measures were moderately correlated with short-term (30-min) data logger mean air temperature (r = 0.76, β = 0.74), apparent temperature (r = 0.79, β = 0.79), relative humidity (r = 0.70, β = 0.63), and absolute humidity (r = 0.80, β = 0.80). Data logger 12-day means were also moderately correlated with single-point air temperature (r = 0.64, β = 0.43) and apparent temperature (r = 0.64, β = 0.44), but were weakly correlated with single-point relative humidity (r = 0.53, β = 0.35) and absolute humidity (r = 0.52, β = 0.39). Of the single-point RH measures, 59 (51.8%) deviated more than ±5%, 21 (18.4%) deviated more than ±10%, and 6 (5.3%) deviated more than ±15% from data logger 12-day means. Where continuous indoor monitoring is not feasible, single-point sampling strategies should include multiple measures collected at prescribed time points based on local conditions. PMID:26030088

  2. Shear wavelength estimation based on inverse filtering and multiple-point shear wave generation

    NASA Astrophysics Data System (ADS)

    Kitazaki, Tomoaki; Kondo, Kengo; Yamakawa, Makoto; Shiina, Tsuyoshi

    2016-07-01

    Elastography provides important diagnostic information because tissue elasticity is related to pathological conditions. For example, in a mammary gland, higher-grade malignancies yield harder tumors. Estimating shear wave speed from time-of-flight measurements enables quantitative tissue elasticity imaging. However, time-of-flight measurement rests on an assumption about the propagation direction of the shear wave, which is highly affected by reflection and refraction and thus might cause artifacts. An alternative elasticity estimation approach based on shear wavelength was proposed and applied to passive configurations. To determine the elasticity of tissue more quickly and more accurately, we proposed a new method for shear wave elasticity imaging that combines the shear wavelength approach and inverse filtering with multiple shear wave sources induced by acoustic radiation force (ARF). The feasibility of the proposed method was verified using an elasticity phantom with a hard inclusion.

  3. Position Estimation of Access Points in 802.11 Wireless Networks

    SciTech Connect

    Kent, C A; Dowla, F U; Atwal, P K; Lennon, W J

    2003-12-05

    We developed a technique to locate wireless network nodes by combining multiple time-of-flight range measurements into a position estimate. When used with communication methods whose signals propagate through walls, such as Ultra-Wideband and 802.11, we can locate network nodes in buildings and in caves where GPS is unavailable. This paper details the implementation on an 802.11a network, where we demonstrated the ability to locate a network access point to within 20 feet.

  4. A removal model for estimating detection probabilities from point-count surveys

    USGS Publications Warehouse

    Farnsworth, G.L.; Pollock, K.H.; Nichols, J.D.; Simons, T.R.; Hines, J.E.; Sauer, J.R.

    2000-01-01

    We adapted a removal model to estimate detection probability during point count surveys. The model assumes that one factor influencing detection during point counts is the singing frequency of birds. This is plausible for surveys of forest songbirds, where most detections are by sound. The model requires counts to be divided into several time intervals. We used time intervals of 2, 5, and 10 min to develop a maximum-likelihood estimator for the detectability of birds during such surveys. We applied this technique to data from bird surveys conducted in Great Smoky Mountains National Park. We used model selection criteria to identify whether detection probabilities varied among species, throughout the morning, throughout the season, and among different observers. The overall detection probability for all birds was 75%. We found differences in detection probability among species. Species that sing frequently, such as Winter Wren and Acadian Flycatcher, had high detection probabilities (about 90%), whereas species that call infrequently, such as Pileated Woodpecker, had low detection probability (36%). We also found that detection probabilities varied with the time of day for some species (e.g., thrushes) and between observers for other species. This method of estimating detectability during point count surveys offers a promising new approach to using count data to address questions of bird abundance, density, and population trends.
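
The removal estimator lends itself to a compact maximum-likelihood implementation. The sketch below assumes equal-length intervals and a constant per-interval detection probability p, a simplification of the interval structure described above:

```python
# Removal-model sketch: individuals first detected in interval j follow a
# geometric pattern p * (1 - p)^(j - 1); p is found by maximum likelihood.
import numpy as np
from scipy.optimize import minimize_scalar

def fit_removal(counts):
    """counts[j] = number of individuals first detected in interval j."""
    counts = np.asarray(counts, dtype=float)
    J = len(counts)

    def nll(p):
        cell = p * (1 - p) ** np.arange(J)   # P(first detection in interval j)
        cell /= cell.sum()                   # condition on detection within J intervals
        return -np.sum(counts * np.log(cell))

    p = minimize_scalar(nll, bounds=(1e-6, 1 - 1e-6), method="bounded").x
    return p, 1 - (1 - p) ** J               # per-interval and overall detectability

p_interval, p_overall = fit_removal([50, 20, 10])  # illustrative counts
```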

  5. Estimation of melting points of large set of persistent organic pollutants utilizing QSPR approach.

    PubMed

    Watkins, Marquita; Sizochenko, Natalia; Rasulev, Bakhtiyor; Leszczynski, Jerzy

    2016-03-01

    The presence of polyhalogenated persistent organic pollutants (POPs), such as Cl/Br-substituted benzenes, biphenyls, diphenyl ethers, and naphthalenes, has been identified in all environmental compartments. Exposure to these compounds can pose potential risks not only for ecological systems but also for human health. Therefore, efficient tools for comprehensive environmental risk assessment of POPs are required. Among the factors vital for environmental transport and fate processes is the melting point of a compound. In this study, we estimated the melting points of a large group (1419 compounds) of chloro- and bromo-derivatives of dibenzo-p-dioxins, dibenzofurans, biphenyls, naphthalenes, diphenyl ethers, and benzenes by utilizing quantitative structure-property relationship (QSPR) techniques. The compounds were classified by applying structure-based clustering methods followed by GA-PLS modeling. In addition, the random forest method was applied to develop more general models. Factors responsible for melting-point behavior and the predictive ability of each method are discussed.
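
As an illustration of the random-forest branch of such a QSPR workflow (descriptors and data below are placeholders, not the study's):

```python
# QSPR-style regression sketch: predict melting points from molecular
# descriptors with a random forest and rank descriptor importance.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1419, 30))                   # placeholder descriptor matrix
y = rng.normal(loc=150.0, scale=60.0, size=1419)  # placeholder melting points (deg C)

model = RandomForestRegressor(n_estimators=500, random_state=0)
r2_cv = cross_val_score(model, X, y, cv=5, scoring="r2")  # cross-validated fit
model.fit(X, y)
importances = model.feature_importances_  # factors driving melting-point behavior
```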

  6. Articulated Non-Rigid Point Set Registration for Human Pose Estimation from 3D Sensors

    PubMed Central

    Ge, Song; Fan, Guoliang

    2015-01-01

    We propose a generative framework for 3D human pose estimation that is able to operate on both individual point sets and sequential depth data. We formulate human pose estimation as a point set registration problem, where we propose three new approaches to address several major technical challenges in this research. First, we integrate two registration techniques that have a complementary nature to cope with non-rigid and articulated deformations of the human body under a variety of poses. This unique combination allows us to handle point sets of complex body motion and large pose variation without any initial conditions, as required by most existing approaches. Second, we introduce an efficient pose tracking strategy to deal with sequential depth data, where the major challenge is the incomplete data due to self-occlusions and view changes. We introduce a visible point extraction method to initialize a new template for the current frame from the previous frame, which effectively reduces the ambiguity and uncertainty during registration. Third, to support robust and stable pose tracking, we develop a segment volume validation technique to detect tracking failures and to re-initialize pose registration if needed. The experimental results on both benchmark 3D laser scan and depth datasets demonstrate the effectiveness of the proposed framework when compared with state-of-the-art algorithms. PMID:26131673

  7. Point and Fixed Plot Sampling Inventory Estimates at the Savannah River Site, South Carolina.

    SciTech Connect

    Parresol, Bernard, R.

    2004-02-01

    This report provides calculation of systematic point sampling volume estimates for trees greater than or equal to 5 inches diameter breast height (dbh) and fixed radius plot volume estimates for trees < 5 inches dbh at the Savannah River Site (SRS), Aiken County, South Carolina. The inventory of 622 plots was started in March 1999 and completed in January 2002 (Figure 1). Estimates are given in cubic foot volume. The analyses are presented in a series of Tables and Figures. In addition, a preliminary analysis of fuel levels on the SRS is given, based on depth measurements of the duff and litter layers on the 622 inventory plots plus line transect samples of down coarse woody material. Potential standing live fuels are also included. The fuels analyses are presented in a series of tables.

  8. Benchmark atomization energy of ethane : importance of accurate zero-point vibrational energies and diagonal Born-Oppenheimer corrections for a 'simple' organic molecule.

    SciTech Connect

    Karton, A.; Martin, J. M. L.; Ruscic, B.; Chemistry; Weizmann Institute of Science

    2007-06-01

    A benchmark calculation of the atomization energy of the 'simple' organic molecule C2H6 (ethane) has been carried out by means of W4 theory. While the molecule is straightforward in terms of one-particle and n-particle basis set convergence, its large zero-point vibrational energy (and anharmonic correction thereto) and nontrivial diagonal Born-Oppenheimer correction (DBOC) represent interesting challenges. For the W4 set of molecules and C2H6, we show that DBOCs to the total atomization energy are systematically overestimated at the SCF level, and that the correlation correction converges very rapidly with the basis set. Thus, even at the CISD/cc-pVDZ level, useful correlation corrections to the DBOC are obtained. When applying such a correction, overall agreement with experiment was only marginally improved, but a more significant improvement is seen when hydrogen-containing systems are considered in isolation. We conclude that for closed-shell organic molecules, the greatest obstacles to highly accurate computational thermochemistry may not lie in the solution of the clamped-nuclei Schroedinger equation, but rather in the zero-point vibrational energy and the diagonal Born-Oppenheimer correction.

  9. Quaternion-based unscented Kalman filter for accurate indoor heading estimation using wearable multi-sensor system.

    PubMed

    Yuan, Xuebing; Yu, Shuai; Zhang, Shengzhi; Wang, Guoping; Liu, Sheng

    2015-01-01

    Inertial navigation based on micro-electromechanical system (MEMS) inertial measurement units (IMUs) has attracted numerous researchers due to its high reliability and independence. Heading estimation, as one of the most important parts of inertial navigation, has been a research focus in this field. Heading estimation using magnetometers is perturbed by magnetic disturbances, such as indoor concrete structures and electronic equipment. The MEMS gyroscope is also used for heading estimation; however, its accuracy degrades over time due to drift. In this paper, a wearable multi-sensor system has been designed to obtain high-accuracy indoor heading estimation, based on a quaternion-based unscented Kalman filter (UKF) algorithm. The proposed multi-sensor system, comprising one three-axis accelerometer, three single-axis gyroscopes, one three-axis magnetometer, and one microprocessor, minimizes size and cost. The wearable multi-sensor system was fixed on the waist of a pedestrian and on a quadrotor unmanned aerial vehicle (UAV) for heading estimation experiments in our college building. The results show that the mean heading estimation errors are less than 10° and 5° for the multi-sensor system fixed on the waist of the pedestrian and on the quadrotor UAV, respectively, compared to the reference path. PMID:25961384
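
The paper's filter is a full quaternion UKF; as a one-dimensional intuition sketch only (all noise values assumed), a scalar Kalman filter can fuse drift-prone gyroscope rates with noisy magnetometer headings:

```python
# 1-D heading Kalman filter sketch: gyro integration in the predict step,
# magnetometer heading in the update step, with angle wrapping on the residual.
import numpy as np

def heading_kf(gyro_rates, mag_headings, dt=0.01, q=1e-4, r=np.deg2rad(5.0) ** 2):
    """Fuse gyro yaw rates (rad/s) with magnetometer headings (rad)."""
    theta, P = mag_headings[0], r        # initialize from the first magnetometer fix
    estimates = []
    for w, z in zip(gyro_rates, mag_headings):
        theta, P = theta + w * dt, P + q                    # predict: integrate gyro
        K = P / (P + r)                                     # Kalman gain
        innov = (z - theta + np.pi) % (2 * np.pi) - np.pi   # wrapped angle residual
        theta, P = theta + K * innov, (1 - K) * P           # update with magnetometer
        estimates.append(theta)
    return np.array(estimates)
```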

  12. Accurate experimental determination of the isotope effects on the triple point temperature of water. I. Dependence on the 2H abundance

    NASA Astrophysics Data System (ADS)

    Faghihi, V.; Peruzzi, A.; Aerts-Bijma, A. T.; Jansen, H. G.; Spriensma, J. J.; van Geel, J.; Meijer, H. A. J.

    2015-12-01

    Variation in the isotopic composition of water is one of the major contributors to uncertainty in the realization of the triple point of water (TPW). Although the dependence of the TPW on the isotopic composition of the water has been known for years, a detailed and accurate experimental determination of the values of the correction constants is still lacking. This paper is the first of two articles (Part I and Part II) that address quantification of isotope abundance effects on the triple point temperature of water. In this paper, we describe our experimental assessment of the 2H isotope effect. We manufactured five triple point cells with prepared water mixtures spanning a range of 2H isotopic abundances that widely encompasses the natural abundance range, while the 18O and 17O isotopic abundances were kept approximately constant and the 18O - 17O ratio was close to the Meijer-Li relationship for natural waters. The selected range of 2H isotopic abundances led to cells that realized TPW temperatures from approximately -140 μK to +2500 μK with respect to the TPW temperature as realized by VSMOW (Vienna Standard Mean Ocean Water). Our experiment led to a value for the δ2H correction parameter of A2H = 673 μK / (‰ deviation of δ2H from VSMOW) with a combined uncertainty of 4 μK (k = 1, i.e., 1σ).

  13. Vein visualization using a smart phone with multispectral Wiener estimation for point-of-care applications.

    PubMed

    Song, Jae Hee; Kim, Choye; Yoo, Yangmo

    2015-03-01

    Effective vein visualization is clinically important for various point-of-care applications, such as needle insertion. It can be achieved by utilizing ultrasound imaging or by applying infrared laser excitation and monitoring its absorption. However, while these approaches can be used for vein visualization, they are not suitable for point-of-care applications because of their cost, time, and accessibility. In this paper, a new vein visualization method based on multispectral Wiener estimation is proposed and its real-time implementation on a smart phone is presented. In the proposed method, a conventional RGB camera on a commercial smart phone (i.e., Galaxy Note 2, Samsung Electronics Inc., Suwon, Korea) is used to acquire reflectance information from veins. Wiener estimation is then applied to extract the multispectral information from the veins. To evaluate the performance of the proposed method, an experiment was conducted using a color calibration chart (ColorChecker Classic, X-rite, Grand Rapids, MI, USA) and an average root-mean-square error of 12.0% was obtained. In addition, an in vivo subcutaneous vein imaging experiment was performed to explore the clinical performance of the smart phone-based Wiener estimation. From the in vivo experiment, the veins at various sites were successfully localized using the reconstructed multispectral images and these results were confirmed by ultrasound B-mode and color Doppler images. These results indicate that the presented multispectral Wiener estimation method can be used for visualizing veins using a commercial smart phone for point-of-care applications (e.g., vein puncture guidance). PMID:24691170
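
A data-driven form of Wiener estimation can be sketched as below (variable shapes and the training-based correlation matrices are assumptions; this is not the authors' implementation). The matrix W is learned once from training pairs, e.g., color-chart patches, and then applied per pixel:

```python
# Wiener estimation sketch: learn a 3-channel-to-multispectral mapping from
# paired training data, then recover per-pixel spectra from an RGB camera.
import numpy as np

def wiener_matrix(train_spectra, train_rgb):
    """train_spectra: (n_bands, n_samples); train_rgb: (3, n_samples)."""
    Ksr = train_spectra @ train_rgb.T   # spectra-vs-RGB cross-correlation
    Krr = train_rgb @ train_rgb.T       # RGB autocorrelation
    return Ksr @ np.linalg.inv(Krr)     # (n_bands, 3) estimation matrix

def estimate_spectra(W, rgb_pixels):
    """rgb_pixels: (3, n_pixels) -> (n_bands, n_pixels) reflectance estimates."""
    return W @ rgb_pixels
```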

  14. Star Tracker Based ATP System Conceptual Design and Pointing Accuracy Estimation

    NASA Technical Reports Server (NTRS)

    Ortiz, Gerardo G.; Lee, Shinhak

    2006-01-01

    A star-tracker-based beaconless (a.k.a. non-cooperative beacon) acquisition, tracking, and pointing (ATP) concept for precisely pointing an optical communication beam is presented as an innovative approach to extend the range of high-bandwidth (> 100 Mbps) deep space optical communication links throughout the solar system and to remove the need for a ground-based high-power laser as a beacon source. The basic approach for executing the ATP functions involves using stars as the reference sources from which attitude knowledge is obtained, combined with high-bandwidth gyroscopes for propagating the pointing knowledge to the beam pointing mechanism. Details of the conceptual design are presented, including the selection of an orthogonal telescope configuration and the introduction of an optical metering scheme to reduce misalignment error. Estimates are also presented demonstrating that aiming of the communications beam at the Earth-based receive terminal can be achieved with a total system pointing accuracy of better than 850 nanoradians (3 sigma) from anywhere in the solar system.

  15. Assignment of Calibration Information to Deeper Phylogenetic Nodes is More Effective in Obtaining Precise and Accurate Divergence Time Estimates.

    PubMed

    Mello, Beatriz; Schrago, Carlos G

    2014-01-01

    Divergence time estimation has become an essential tool for understanding macroevolutionary events. Molecular dating aims to obtain reliable inferences, which, within a statistical framework, means jointly increasing the accuracy and precision of estimates. Bayesian dating methods exhibit the property of a linear relationship between uncertainty and estimated divergence dates. This relationship holds even as the number of sites approaches infinity and places a limit on the maximum precision of node ages. However, how the placement of calibration information affects the precision of divergence time estimates remains an open question. In this study, relying on simulated and empirical data, we investigated how the location of calibration within a phylogeny affects the accuracy and precision of time estimates. We found that calibration priors set at median and deep phylogenetic nodes were associated with higher precision than analyses involving calibration at the shallowest node. The results were independent of tree symmetry. An empirical mammalian dataset produced results consistent with those generated by the simulated sequences. Assigning time information to the deeper nodes of a tree is crucial to guarantee the accuracy and precision of divergence times. This finding highlights the importance of the appropriate choice of outgroups in molecular dating. PMID:24855333

  16. Iterative image reconstruction for positron emission tomography based on a detector response function estimated from point source measurements

    NASA Astrophysics Data System (ADS)

    Tohme, Michel S.; Qi, Jinyi

    2009-06-01

    The accuracy of the system model in an iterative reconstruction algorithm greatly affects the quality of reconstructed positron emission tomography (PET) images. For efficient computation in reconstruction, the system model in PET can be factored into a product of a geometric projection matrix and sinogram blurring matrix, where the former is often computed based on analytical calculation, and the latter is estimated using Monte Carlo simulations. Direct measurement of a sinogram blurring matrix is difficult in practice because of the requirement of a collimated source. In this work, we propose a method to estimate the 2D blurring kernels from uncollimated point source measurements. Since the resulting sinogram blurring matrix stems from actual measurements, it can take into account the physical effects in the photon detection process that are difficult or impossible to model in a Monte Carlo (MC) simulation, and hence provide a more accurate system model. Another advantage of the proposed method over MC simulation is that it can easily be applied to data that have undergone a transformation to reduce the data size (e.g., Fourier rebinning). Point source measurements were acquired with high count statistics in a relatively fine grid inside the microPET II scanner using a high-precision 2D motion stage. A monotonically convergent iterative algorithm has been derived to estimate the detector blurring matrix from the point source measurements. The algorithm takes advantage of the rotational symmetry of the PET scanner and explicitly models the detector block structure. The resulting sinogram blurring matrix is incorporated into a maximum a posteriori (MAP) image reconstruction algorithm. The proposed method has been validated using a 3 × 3 line phantom, an ultra-micro resolution phantom and a 22Na point source superimposed on a warm background. The results of the proposed method show improvements in both resolution and contrast ratio when compared with the MAP

  18. Rosiglitazone: can meta-analysis accurately estimate excess cardiovascular risk given the available data? Re-analysis of randomized trials using various methodologic approaches

    PubMed Central

    Friedrich, Jan O; Beyene, Joseph; Adhikari, Neill KJ

    2009-01-01

    … statistically significant. Conclusion: We have shown that alternative, reasonable methodological approaches to the rosiglitazone meta-analysis can yield increased or decreased risks that are either statistically significant or not significant at the p = 0.05 level for both myocardial infarction and cardiovascular death. Completion of ongoing trials may help to generate more accurate estimates of rosiglitazone's effect on cardiovascular outcomes. However, given that almost all point estimates suggest harm rather than benefit, and given the availability of alternative agents, the use of rosiglitazone may greatly decline before more definitive safety data are generated. PMID:19134216

  19. The Point Count Transect Method for Estimates of Biodiversity on Coral Reefs: Improving the Sampling of Rare Species.

    PubMed

    Roberts, T Edward; Bridge, Thomas C; Caley, M Julian; Baird, Andrew H

    2016-01-01

    Understanding patterns in species richness and diversity over environmental gradients (such as altitude and depth) is an enduring component of ecology. As most biological communities feature few common and many rare species, quantifying the presence and abundance of rare species is a crucial requirement for analysis of these patterns. Coral reefs present specific challenges for data collection, with limitations on time and site accessibility making efficiency crucial. Many commonly used methods, such as line intercept transects (LIT), are poorly suited to questions requiring the detection of rare events or species. Here, an alternative method for surveying reef-building corals is presented: the point count transect (PCT). The PCT consists of a count of coral colonies at a series of sample stations located at regular intervals along a transect. In contrast, the LIT records the proportion of each species occurring under a transect tape of a given length. The same site was surveyed using PCT and LIT to compare species richness estimates between the methods. The total number of species increased faster per individual sampled and per unit of time invested using PCT. Furthermore, 41 of the 44 additional species recorded by the PCT occurred ≤ 3 times, demonstrating the increased capacity of PCT to detect rare species. PCT provides a more accurate estimate of local-scale species richness than the LIT, and is an efficient alternative method for surveying reef corals to address questions associated with alpha-diversity and rare or incidental events. PMID:27011368

  2. How accurately can students estimate their performance on an exam and how does this relate to their actual performance on the exam?

    NASA Astrophysics Data System (ADS)

    Rebello, N. Sanjay

    2012-02-01

    Research has shown that students' beliefs regarding their own abilities in math and science can influence their performance in these disciplines. I investigated the relationship between students' estimated performance and actual performance on five exams in a second-semester calculus-based physics class. After the completion of each of the five exams, students were given about 72 hours to estimate their individual score and the class mean score on that exam. Students were given extra credit worth 1% of the exam points for estimating their own score within 2% of the actual score, and another 1% of extra credit for estimating the class mean score within 2% of the correct value. I compared students' individual and mean score estimates with the actual scores to investigate the relationship between estimation accuracy and exam performance, as well as trends over the semester.

  3. How Accurate Are German Work-Time Data? A Comparison of Time-Diary Reports and Stylized Estimates

    ERIC Educational Resources Information Center

    Otterbach, Steffen; Sousa-Poza, Alfonso

    2010-01-01

    This study compares work time data collected by the German Time Use Survey (GTUS) using the diary method with stylized work time estimates from the GTUS, the German Socio-Economic Panel, and the German Microcensus. Although on average the differences between the time-diary data and the interview data are not large, our results show that significant…

  4. Estimating forest structure at five tropical forested sites using lidar point cloud data

    NASA Astrophysics Data System (ADS)

    Palace, M. W.; Sullivan, F.; Treuhaft, R. N.; Keller, M. M.

    2014-12-01

    Tropical forests are fundamental components of the global carbon cycle and are threatened by deforestation and climate change. Because of their importance in carbon dynamics, understanding the structural architecture of these forests is vital. Airborne lidar data provide a unique opportunity to examine not only the height of these forests, which is often used to estimate biomass, but also the crown geometry and vertical profile of the canopy. These structural attributes inform temporal and spatial aspects of carbon dynamics, providing insight into the past disturbances and growth of forests. We examined airborne lidar point cloud data from five sites in the Brazilian Amazon collected from 2012 to 2014. We generated digital elevation maps, canopy height models (CHM), and vertical vegetation profiles (VVP) in our analysis. We analyzed the CHM using crown delineation, with an iterative maximum-finding routine to locate the tops of canopies, local maxima to determine the edges of crowns, and two parameters that control the termination of crown edges. We also applied textural analysis methods to the CHM and VVP. Using multiple linear regression models and boosted regression trees, we estimated forest structural parameters including biomass, stem density, basal area, width and depth of crowns, and stem size distribution. Structural attributes estimated from lidar point cloud data can improve our understanding of the carbon dynamics of tropical forests at the landscape and regional levels.
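
A minimal sketch of the canopy-top step of such crown delineation (window size and height threshold are illustrative):

```python
# Tree-top candidates on a canopy height model: a pixel is a candidate top if
# it equals the local maximum in a moving window and exceeds a height floor.
import numpy as np
from scipy.ndimage import maximum_filter

def find_tree_tops(chm, window=5, min_height=10.0):
    """chm: 2-D canopy height model (m); returns (row, col) of candidate tops."""
    local_max = maximum_filter(chm, size=window) == chm
    return np.argwhere(local_max & (chm >= min_height))
```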

  5. Minimum Number of Observation Points for LEO Satellite Orbit Estimation by OWL Network

    NASA Astrophysics Data System (ADS)

    Park, Maru; Jo, Jung Hyun; Cho, Sungki; Choi, Jin; Kim, Chun-Hwey; Park, Jang-Hyun; Yim, Hong-Suh; Choi, Young-Jun; Moon, Hong-Kyu; Bae, Young-Ho; Park, Sun-Youp; Kim, Ji-Hye; Roh, Dong-Goo; Jang, Hyun-Jung; Park, Young-Sik; Jeong, Min-Ji

    2015-12-01

    Using the Optical Wide-field Patrol (OWL) network developed by the Korea Astronomy and Space Science Institute (KASI), we generated right ascension and declination angle data from optical observations of Low Earth Orbit (LEO) satellites. We performed an analysis to determine the optimum number of observations needed per arc for successful orbit estimation. The currently functioning OWL observatories are located in Daejeon (South Korea), Songino (Mongolia), and Oukaïmeden (Morocco); the Daejeon observatory functions as a test bed. In this study, the observed targets were Gravity Probe B, COSMOS 1455, COSMOS 1726, COSMOS 2428, SEASAT 1, ATV-5, and CryoSat-2 (all in LEO). These satellites were observed from the test bed and the Songino observatory of the OWL network during 21 nights in 2014 and 2015. After estimating the orbit from systematically selected sets of observation points (20, 50, 100, and 150) for each pass, we compared the orbit estimates for each case against the Two-Line Element set (TLE) from the Joint Space Operations Center (JSpOC). We then averaged the differences and selected the optimal number of observation points on that basis.

  6. Analysis of open-loop conical scan pointing error and variance estimators

    NASA Technical Reports Server (NTRS)

    Alvarez, L. S.

    1993-01-01

    General pointing error and variance estimators for an open-loop conical scan (conscan) system are derived and analyzed. The conscan algorithm is modeled as a weighted least-squares estimator whose inputs are samples of receiver carrier power and its associated measurement uncertainty. When the assumptions of constant measurement noise and zero pointing error estimation are applied, the variance equation is then strictly a function of the carrier power to uncertainty ratio and the operator selectable radius and period input to the algorithm. The performance equation is applied to a 34-m mirror-based beam-waveguide conscan system interfaced with the Block V Receiver Subsystem tracking a Ka-band (32-GHz) downlink. It is shown that for a carrier-to-noise power ratio greater than or equal to 30 dB-Hz, the conscan period for Ka-band operation may be chosen well below the current DSN minimum of 32 sec. The analysis presented forms the basis of future conscan work in both research and development as well as for the upcoming DSN antenna controller upgrade for the new DSS-24 34-m beam-waveguide antenna.
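
The conscan estimator can be illustrated as a weighted least-squares fit of one cosine/sine pair to power samples taken around the scan circle; the model below is a simplified assumption, not the DSN implementation, and omits the beamwidth-dependent conversion from fit coefficients to angular error:

```python
# Conscan-style weighted least squares: received power over one scan period
# is modeled as P(theta) ~ p0 + cx*cos(theta) + cy*sin(theta); the fitted
# (cx, cy) pair is proportional to the two pointing-error components.
import numpy as np

def conscan_offset(scan_angles, power, weights=None):
    """scan_angles (rad) and power samples over one scan revolution."""
    A = np.column_stack([np.ones_like(scan_angles),
                         np.cos(scan_angles), np.sin(scan_angles)])
    w = np.sqrt(weights) if weights is not None else np.ones_like(power)
    coef, *_ = np.linalg.lstsq(A * w[:, None], power * w, rcond=None)
    p0, cx, cy = coef
    return cx, cy
```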

  7. Two-point correlation functions to characterize microgeometry and estimate permeabilities of synthetic and natural sandstones

    SciTech Connect

    Blair, S.C.; Berge, P.A.; Berryman, J.G.

    1993-08-01

    We have developed an image-processing method for characterizing the microstructure of rock and other porous materials, and for providing a quantitative means for understanding the dependence of physical properties on the pore structure. This method is based upon the statistical properties of the microgeometry as observed in scanning electron micrograph (SEM) images of cross sections of porous materials. The method utilizes a simple statistical function, called the spatial correlation function, which can be used to predict bounds on permeability and other physical properties. We obtain estimates of the porosity and specific surface area of the material from the two-point correlation function. The specific surface area can be related to the permeability of porous materials using a Kozeny-Carman relation, and we show that the specific surface area measured on images of sandstones is consistent with the specific surface area used in a simple flow model for computation of permeability. In this paper, we discuss the two-point spatial correlation function and its use in characterizing microstructure features such as pore and grain sizes. We present estimates of permeabilities found using SEM images of several different synthetic and natural sandstones. Comparison of the estimates to laboratory measurements shows good agreement. Finally, we briefly discuss extension of this technique to two-phase flow.
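
A compact way to compute the two-point correlation function of a segmented SEM image is via FFT autocorrelation; the sketch below (with a random placeholder image) recovers the porosity as the zero-lag value:

```python
# Two-point correlation S2 of a binary pore/grain image, computed with FFTs
# (periodic boundaries). S2 at zero lag equals the porosity; its behavior
# near the origin relates to the specific surface area used in
# Kozeny-Carman-style permeability estimates.
import numpy as np

def two_point_correlation(binary_img):
    """binary_img: 1 = pore, 0 = grain; returns the 2-D S2 map."""
    f = np.fft.fft2(binary_img)
    s2 = np.fft.ifft2(f * np.conj(f)).real / binary_img.size
    return np.fft.fftshift(s2)   # zero lag moved to the array center

img = (np.random.default_rng(1).random((256, 256)) < 0.2).astype(float)
s2 = two_point_correlation(img)
porosity = s2.max()   # ~0.2 for this placeholder image
```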

  8. Estimating the influence of impurities on the freezing point of tin

    NASA Astrophysics Data System (ADS)

    Fellmuth, Bernd; Hill, Kenneth D.

    2006-02-01

    The sum of individual estimates (SIE) and the overall maximum estimate (OME) are two methods recommended to estimate the influence of impurities on the temperatures of the liquid-solid phase transformations of high-purity substances. The methods are discussed starting with the basic crystallographic facts, and their application is demonstrated in detail by applying them to the freezing point of tin as a first example. The SIE method leads to a temperature correction with a corresponding uncertainty while the OME method yields only an uncertainty that is, perhaps not unexpectedly, larger than that of the SIE approach. The necessary sensitivity coefficients (derivatives of the liquidus lines) are tabulated, together with the equilibrium distribution coefficients. Other than the necessity of obtaining a complete elemental analysis of the fixed-point material using glow discharge mass spectrometry (or other suitable techniques), there remain no technical barriers to adopting the preferred SIE method. While the use of the method, and particularly the application of a temperature correction to account for the impurity influence, requires a paradigm shift within the thermometry community, improved interoperability and harmonization of approach are highly desirable goals. The SIE approach maximizes the application of scientific knowledge and represents the best chance of achieving these common goals.
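
The SIE bookkeeping reduces to a weighted sum; in the sketch below, impurity concentrations, liquidus slopes, and uncertainties are placeholders rather than the tabulated values:

```python
# SIE sketch: the freezing-point correction is the sum over impurities of
# concentration times that impurity's liquidus slope (sensitivity
# coefficient), with component uncertainties combined in quadrature.
import math

def sie_correction(impurities):
    """impurities: iterable of (mole_fraction, liquidus_slope_K, relative_u)."""
    dT = sum(x * slope for x, slope, _ in impurities)
    u = math.sqrt(sum((x * slope * ur) ** 2 for x, slope, ur in impurities))
    return dT, u   # correction and its combined standard uncertainty

# Hypothetical two-impurity tin sample (all numbers illustrative):
dT, u = sie_correction([(2e-8, -1.3e3, 0.3),
                        (5e-9, 0.4e3, 0.5)])
```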

  9. The number of alleles at a microsatellite defines the allele frequency spectrum and facilitates fast accurate estimation of theta.

    PubMed

    Haasl, Ryan J; Payseur, Bret A

    2010-12-01

    Theoretical work focused on microsatellite variation has produced a number of important results, including the expected distribution of repeat sizes and the expected squared difference in repeat size between two randomly selected samples. However, closed-form expressions for the sampling distribution and frequency spectrum of microsatellite variation have not been identified. Here, we use coalescent simulations of the stepwise mutation model to develop gamma and exponential approximations of the microsatellite allele frequency spectrum, a distribution central to the description of microsatellite variation across the genome. For both approximations, the parameter of biological relevance is the number of alleles at a locus, which we express as a function of θ, the population-scaled mutation rate, based on simulated data. Discovered relationships between θ, the number of alleles, and the frequency spectrum support the development of three new estimators of microsatellite θ. The three estimators exhibit roughly similar mean squared errors (MSEs) and all are biased. However, across a broad range of sample sizes and θ values, the MSEs of these estimators are frequently lower than all other estimators tested. The new estimators are also reasonably robust to mutation that includes step sizes greater than one. Finally, our approximation to the microsatellite allele frequency spectrum provides a null distribution of microsatellite variation. In this context, a preliminary analysis of the effects of demographic change on the frequency spectrum is performed. We suggest that simulations of the microsatellite frequency spectrum under evolutionary scenarios of interest may guide investigators to the use of relevant and sometimes novel summary statistics.

  10. Effect of distance-related heterogeneity on population size estimates from point counts

    USGS Publications Warehouse

    Efford, Murray G.; Dawson, Deanna K.

    2009-01-01

    Point counts are used widely to index bird populations. Variation in the proportion of birds counted is a known source of error, and for robust inference it has been advocated that counts be converted to estimates of absolute population size. We used simulation to assess nine methods for the conduct and analysis of point counts when the data included distance-related heterogeneity of individual detection probability. Distance from the observer is a ubiquitous source of heterogeneity, because nearby birds are more easily detected than distant ones. Several recent methods (dependent double-observer, time of first detection, time of detection, independent multiple-observer, and repeated counts) do not account for distance-related heterogeneity, at least in their simpler forms. We assessed bias in estimates of population size by simulating counts with fixed radius w over four time intervals (occasions). Detection probability per occasion was modeled as a half-normal function of distance with scale parameter sigma and intercept g(0) = 1.0. Bias varied with sigma/w; for values of sigma inferred from published studies, bias often exceeded 50% for a 100-m fixed-radius count. More critically, the bias of adjusted counts sometimes varied more than that of unadjusted counts, and inference from adjusted counts would be less robust. The problem was not solved by using mixture models or including distance as a covariate. Conventional distance sampling performed well in simulations, but its assumptions are difficult to meet in the field. We conclude that no existing method allows effective estimation of population size from point counts.
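
The simulation design described above is easy to reproduce in miniature (parameter values here are illustrative):

```python
# Half-normal detection simulation: individuals uniform over a disc of radius
# w, per-occasion detection probability declining with distance, g(0) = 1.
# A count-based estimator that assumes equal detectability underestimates N
# whenever sigma is small relative to w.
import numpy as np

rng = np.random.default_rng(0)
N, w, sigma, occasions = 10_000, 100.0, 50.0, 4
r = w * np.sqrt(rng.random(N))            # uniform density over the disc
p_occ = np.exp(-r**2 / (2 * sigma**2))    # half-normal detection function
p_any = 1 - (1 - p_occ) ** occasions      # detected on at least one occasion
detected = rng.random(N) < p_any
print("true N:", N, "expected count:", detected.sum())
```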

  11. Pollutant runoff from non-point sources and its estimation by runoff models.

    PubMed

    Noguchi, M; Hiwatashi, T; Mizuno, Y; Minematsu, M

    2002-01-01

    In order to attain a sound and sustainable water environment, it is important to carry out environmental management of the watershed. For this purpose, knowledge of the pollutant runoff mechanism from non-point sources becomes very important, especially under rainy conditions. At Isahaya, Nagasaki, Japan, a large sea-dyke construction and reclamation project is now under way, so reducing pollutant runoff, especially from non-point sources, is becoming more important. Several rainwater runoff models have been developed to predict the pollutant loads from non-point sources, and their results are compared with each other to assess prediction accuracy. In this paper, runoff analysis of both rainwater and pollutants has been carried out using three models: the tank model, the kinematic wave (K-W) model, and a model using the digital elevation model (DEM). For precise estimation, the parameters included in these models must be identified. Here, total nitrogen has been considered as the pollutant, and detachment rates are evaluated in correlation with land-use class, soil type, and moisture content. Finally, it is shown that pollutant runoff from non-point sources can be predicted fairly well, provided the model parameters are identified appropriately.
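
As a minimal illustration of the tank-model idea (a single tank with assumed coefficients; real applications calibrate several stacked tanks per land-use class):

```python
# Single-tank runoff sketch: rainfall fills storage; a side outlet above a
# threshold yields runoff, and a bottom outlet represents infiltration.
def tank_model(rainfall, a_side=0.2, h_side=5.0, a_bottom=0.05, dt=1.0):
    storage, runoff = 0.0, []
    for rain in rainfall:
        storage += rain * dt
        q = a_side * max(storage - h_side, 0.0)   # runoff from the side outlet
        f = a_bottom * storage                    # infiltration loss
        storage -= (q + f) * dt
        runoff.append(q)
    return runoff
```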

  12. Unbalanced and Minimal Point Equivalent Estimation Second-Order Split-Plot Designs

    NASA Technical Reports Server (NTRS)

    Parker, Peter A.; Kowalski, Scott M.; Vining, G. Geoffrey

    2007-01-01

    Restricting the randomization of hard-to-change factors in industrial experiments is often accomplished by employing a split-plot design structure. From an economic perspective, these designs minimize the experimental cost by reducing the number of resets of the hard-to-change factors. In this paper, unbalanced designs are considered for cases where the subplots are relatively expensive and the experimental apparatus accommodates an unequal number of runs per whole-plot. We provide construction methods for unbalanced second-order split-plot designs that possess the equivalence estimation optimality property, providing best linear unbiased estimates of the parameters independent of the variance components. Unbalanced versions of the central composite and Box-Behnken designs are developed. For cases where the subplot cost approaches the whole-plot cost, minimal point designs are proposed and illustrated with a split-plot Notz design.

  13. Estimate of Shock Standoff Distance Ahead of a General Stagnation Point

    NASA Technical Reports Server (NTRS)

    Reshotko, Eli

    1961-01-01

    The shock standoff distance ahead of a general rounded stagnation point has been estimated under the assumption of a constant-density shock layer. It is found that, with the exception of almost-two-dimensional bodies with very strong shock waves, the present theoretical calculations and the experimental data of Zakkay and Visich for toroids are well represented by the relation Delta_3D / R_(s,x) = (Delta_axisym / R_s) * (2 / (K + 1)), where Delta is the shock standoff distance, R_(s,x) is the smaller principal shock radius, and K is the ratio of the smaller to the larger of the principal shock radii.
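
In code form, the relation (as reconstructed above) is a one-line scaling of the axisymmetric standoff distance by 2 / (K + 1):

```python
# Shock standoff for a general rounded stagnation point, per the relation
# quoted in the abstract (reconstructed here).
def standoff_3d(delta_axisym_over_Rs, K):
    """Delta_3D / R_(s,x); K = 1 recovers axisymmetric, K -> 0 approaches 2-D."""
    return delta_axisym_over_Rs * 2.0 / (K + 1.0)

print(standoff_3d(0.05, 0.5))   # ~0.067 for an illustrative axisymmetric ratio
```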

  14. Precise Point Positioning with Ionosphere Estimation and application of Regional Ionospheric Maps

    NASA Astrophysics Data System (ADS)

    Galera Monico, J. F.; Marques, H. A.; Rocha, G. D. D. C.

    2015-12-01

    The ionosphere is one of the most difficult error sources to model in GPS positioning, especially when using data collected by single-frequency receivers. For Precise Point Positioning (PPP) with single-frequency data, the available options include, for example, the Klobuchar model or Global Ionosphere Maps (GIM). A GIM contains Vertical Total Electron Content (VTEC) values that are commonly estimated from a global network with poor coverage in certain regions. For this reason, Regional Ionosphere Maps (RIM) have been developed from local GNSS networks, for instance the La Plata Ionospheric Model (LPIM) developed within the context of SIRGAS (Geocentric Reference System for the Americas). The South American RIM are produced with data from nearly 50 GPS ground receivers, and because these maps are generated hourly with a spatial resolution of one degree, they are expected to provide better accuracy in GPS positioning for that region. Another way to correct for ionospheric effects in PPP is to apply an ionosphere estimation technique based on a Kalman filter. In this case, the ionosphere can be treated as a stochastic process, and a good initial guess is necessary, which can be obtained from an ionospheric map. In this paper we present the methodology involved in ionosphere estimation using a Kalman filter, as well as the application of global and regional ionospheric maps as the first guess in PPP. The ionosphere estimation strategy was implemented in the in-house software RT_PPP, which is capable of performing PPP with either single- or dual-frequency data. GPS data from a Brazilian station near the equatorial region were processed, and results obtained with regional maps were compared with those obtained with global maps. Improvements of the order of 15% were observed. In the case of ionosphere estimation, the estimated coordinates were compared with the ionosphere-free solution, and after PPP convergence the results reached centimeter-level accuracy.

  15. A Comparison of Real-Time Precise Point Positioning Zenith Total Delay Estimates

    NASA Astrophysics Data System (ADS)

    Ahmed, F.; Vaclavovic, P.; Dousa, J.; Teferle, F. N.; Laurichesse, D.; Bingley, R.

    2013-12-01

    The use of observations from Global Navigation Satellite Systems (GNSS) in operational meteorology is increasing worldwide due to the continuous evolution of GNSS. The assimilation of near real-time (NRT) GNSS-derived zenith total delay (ZTD) estimates into local, regional, and global scale numerical weather prediction (NWP) models is now in operation at a number of meteorological institutions. The development in recent years of NWP models with high update cycles for now-casting and monitoring of extreme weather events requires the estimation of ZTD with minimal latency, i.e., 5 to 10 minutes, while maintaining an adequate level of accuracy. The availability of real-time (RT) observations and products from the IGS RT service and associated analysis centers makes it possible to compute precise point positioning (PPP) solutions in RT, which provide ZTD along with position estimates. This study presents a comparison of RT ZTD estimates from three different PPP software packages (G-Nut/Tefnut, BNC2.7, and PPP-Wizard) with the state-of-the-art IGS Final Troposphere Product employing PPP in the Bernese GPS Software. Overall, the ZTD time series from the three packages agree fairly well, each following the variations of the other solutions but showing a bias with respect to the reference; after correcting for these biases, the RMS differences are on the order of 0.01 m. The application of PPP ambiguity resolution in one solution, or the use of different RT product streams, shows little impact on the ZTD estimates.

  16. Travel Time Estimation Using Freeway Point Detector Data Based on Evolving Fuzzy Neural Inference System.

    PubMed

    Tang, Jinjun; Zou, Yajie; Ash, John; Zhang, Shen; Liu, Fang; Wang, Yinhai

    2016-01-01

    Travel time is an important measurement used to evaluate the extent of congestion within road networks. This paper presents a new method to estimate the travel time based on an evolving fuzzy neural inference system. The input variables in the system are traffic flow data (volume, occupancy, and speed) collected from loop detectors located at points both upstream and downstream of a given link, and the output variable is the link travel time. A first order Takagi-Sugeno fuzzy rule set is used to complete the inference. For training the evolving fuzzy neural network (EFNN), two learning processes are proposed: (1) a K-means method is employed to partition input samples into different clusters, and a Gaussian fuzzy membership function is designed for each cluster to measure the membership degree of samples to the cluster centers. As the number of input samples increases, the cluster centers are modified and membership functions are also updated; (2) a weighted recursive least squares estimator is used to optimize the parameters of the linear functions in the Takagi-Sugeno type fuzzy rules. Testing datasets consisting of actual and simulated data are used to test the proposed method. Three common criteria including mean absolute error (MAE), root mean square error (RMSE), and mean absolute relative error (MARE) are utilized to evaluate the estimation performance. Estimation results demonstrate the accuracy and effectiveness of the EFNN method through comparison with existing methods including: multiple linear regression (MLR), instantaneous model (IM), linear model (LM), neural network (NN), and cumulative plots (CP).
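
A static, batch simplification of the Takagi-Sugeno machinery described above can be sketched as follows; the actual EFNN evolves its clusters and uses weighted recursive least squares online, so this is intuition only, with assumed rule count and membership width:

```python
# Simplified Takagi-Sugeno sketch: K-means clusters define rule centers,
# Gaussian memberships weight the samples, and one affine consequent per
# rule is fit by weighted least squares (batch stand-in for recursive LSQ).
import numpy as np
from sklearn.cluster import KMeans

def fit_ts_rules(X, y, n_rules=4, width=1.0):
    centers = KMeans(n_clusters=n_rules, n_init=10,
                     random_state=0).fit(X).cluster_centers_
    A = np.hstack([X, np.ones((len(X), 1))])
    thetas = []
    for c in centers:
        mu = np.exp(-np.sum((X - c) ** 2, axis=1) / (2 * width ** 2))
        sw = np.sqrt(mu)   # weighted least squares via sqrt-weighting
        thetas.append(np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)[0])
    return centers, np.array(thetas)

def predict_ts(X, centers, thetas, width=1.0):
    A = np.hstack([X, np.ones((len(X), 1))])
    mu = np.exp(-((X[:, None, :] - centers[None]) ** 2).sum(-1) / (2 * width ** 2))
    mu /= mu.sum(axis=1, keepdims=True) + 1e-12   # normalized firing strengths
    return (mu * (A @ thetas.T)).sum(axis=1)      # blended rule outputs
```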

  17. Assessment of the point-source method for estimating dose rates to members of the public from exposure to patients with 131I thyroid treatment

    SciTech Connect

    Dewji, Shaheen Azim; Bellamy, Michael B.; Hertel, Nolan E.; Leggett, Richard Wayne; Sherbini, Sami; Saba, Mohammad S.; Eckerman, Keith F.

    2015-09-01

    The U.S. Nuclear Regulatory Commission (USNRC) initiated a contract with Oak Ridge National Laboratory (ORNL) to calculate radiation dose rates to members of the public that may result from exposure to patients recently administered iodine-131 (131I) as part of medical therapy. The main purpose was to compare dose rate estimates based on a point source and target with values derived from more realistic simulations that considered the time-dependent distribution of 131I in the patient and attenuation of emitted photons by the patient’s tissues. The external dose rate estimates were derived using Monte Carlo methods and two representations of the Phantom with Movable Arms and Legs, previously developed by ORNL and the USNRC, to model the patient and a nearby member of the public. Dose rates to tissues and effective dose rates were calculated for distances ranging from 10 to 300 cm between the phantoms and compared to estimates based on the point-source method, as well as to results of previous studies that estimated exposure from 131I patients. The point-source method overestimates dose rates to members of the public in very close proximity to an 131I patient but is a broadly accurate method of dose rate estimation at separation distances of 300 cm or more at times closer to administration.
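
The point-source approximation itself is a one-line inverse-square computation (the dose-rate constant below is a placeholder, not a value from the report):

```python
# Point-source dose-rate approximation: dose rate ~ Gamma * A / d^2.
def point_source_dose_rate(activity_MBq, distance_m, gamma_const=1.0e-4):
    """Dose rate (mSv/h); gamma_const is a placeholder dose-rate constant
    in mSv*m^2/(MBq*h), not a value taken from the report."""
    return gamma_const * activity_MBq / distance_m ** 2

rate_1m = point_source_dose_rate(1000.0, 1.0)
rate_2m = point_source_dose_rate(1000.0, 2.0)   # one quarter of rate_1m
```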

  18. Assessment of the point-source method for estimating dose rates to members of the public from exposure to patients with 131I thyroid treatment

    DOE PAGES

    Dewji, Shaheen Azim; Bellamy, Michael B.; Hertel, Nolan E.; Leggett, Richard Wayne; Sherbini, Sami; Saba, Mohammad S.; Eckerman, Keith F.

    2015-09-01

    The U.S. Nuclear Regulatory Commission (USNRC) initiated a contract with Oak Ridge National Laboratory (ORNL) to calculate radiation dose rates to members of the public that may result from exposure to patients recently administered iodine-131 (131I) as part of medical therapy. The main purpose was to compare dose rate estimates based on a point source and target with values derived from more realistic simulations that considered the time-dependent distribution of 131I in the patient and attenuation of emitted photons by the patient’s tissues. The external dose rate estimates were derived using Monte Carlo methods and two representations of the Phantom with Movable Arms and Legs, previously developed by ORNL and the USNRC, to model the patient and a nearby member of the public. Dose rates to tissues and effective dose rates were calculated for distances ranging from 10 to 300 cm between the phantoms and compared to estimates based on the point-source method, as well as to results of previous studies that estimated exposure from 131I patients. The point-source method overestimates dose rates to members of the public in very close proximity to an 131I patient but is a broadly accurate method of dose rate estimation at separation distances of 300 cm or more at times closer to administration.

  19. Centroid Tracker Aim Point Estimation In The Presence Of Sensor Noise And Clutter

    NASA Astrophysics Data System (ADS)

    Van Rheeden, Don R.; Jones, Richard A.

    1986-03-01

    The development of automatic target tracking systems has enabled more accurate determination of target position, velocity, acceleration, and other parameters needed for weapons guidance and target designation. With the advent of low cost, high speed digital computers and image processing hardware, it has become increasingly feasible to incorporate digital image processing techniques in target tracking systems. When computing target position (aim point) from discrete images, several problems can arise. A major problem is caused by noise. In target tracking, noise originates from two sources. The first source is system and sensor noise, usually modeled as additive white Gaussian noise. The second is clutter near the target. Clutter objects can make it more difficult for the tracker to separate the target from its surrounding background, so that target pixels can be classified as background pixels and vice versa. This, in turn, causes errors in computing the target aim point. An investigation of the effects of system and sensor noise on target tracking is presented. Two statistical models are derived and simulation results are presented showing the accuracy of the models. The results obtained are applicable to the clutter noise case when clutter causes pixel classification errors which are random in nature.
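
    A thresholded intensity centroid is the simplest aim-point estimator of this kind, and it makes the sensitivity to pixel misclassification easy to see: every pixel the threshold assigns to the wrong class pulls the estimate. A minimal Python sketch with an invented test image:

        import numpy as np

        def aim_point(image, threshold):
            """Intensity centroid (row, col) of pixels classified as target.
            Pixels at or below the threshold are treated as background, so
            noise- or clutter-induced misclassification shifts the estimate."""
            w = np.where(image > threshold, image, 0.0).astype(float)
            rows, cols = np.indices(image.shape)
            return (rows * w).sum() / w.sum(), (cols * w).sum() / w.sum()

        img = np.array([[0.0, 1.0, 0.0],
                        [2.0, 8.0, 2.0],
                        [0.0, 1.0, 0.0]])
        print(aim_point(img, threshold=0.5))  # (1.0, 1.0)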

  20. Estimating dispersed and point source emissions of methane in East Anglia: results and implications

    NASA Astrophysics Data System (ADS)

    Harris, Neil; Connors, Sarah; Hancock, Ben; Jones, Pip; Murphy, Jonathan; Riddick, Stuart; Robinson, Andrew; Skelton, Robert; Manning, Alistair; Forster, Grant; Oram, David; O'Doherty, Simon; Young, Dickon; Stavert, Ann; Fisher, Rebecca; Lowry, David; Nisbet, Euan; Zazzeri, Guilia; Allen, Grant; Pitt, Joseph

    2016-04-01

    We have been investigating ways to estimate dispersed and point source emissions of methane. To do so we have used continuous measurements from a small network of instruments at 4 sites across East Anglia since 2012. These long-term series have been supplemented by measurements taken in focussed studies at landfills, which are important point sources of methane, and by measurements of the 13C:12C ratio in methane to provide additional information about its sources. These measurements have been analysed using the NAME InTEM inversion model to provide county-level emissions (~30 km x ~30 km) in East Anglia. A case study near a landfill just north of Cambridge was also analysed using a Gaussian plume model and the Windtrax dispersion model. The resulting emission estimates from the three techniques are consistent within the uncertainties, despite the different spatial scales being considered. A seasonal cycle in emissions from the landfill (identified by the isotopic measurements) is observed with higher emissions in winter than summer. This would be expected from consideration of the likely activity of methanogenic bacteria in the landfill, but is not currently represented in emission inventories such as the UK National Atmospheric Emissions Inventory. The possibility of assessing North Sea gas field emissions using ground-based measurements will also be discussed.
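
    Of the techniques mentioned, the Gaussian plume model is the most compact to write down. The sketch below gives the standard ground-reflected form in Python; the emission rate, wind speed and dispersion parameters are invented placeholders, not values from the East Anglia work.

        import numpy as np

        def plume_conc(q, u, y, z, h, sigma_y, sigma_z):
            """Gaussian plume concentration (g/m^3) at crosswind offset y (m)
            and height z (m), for emission rate q (g/s), wind speed u (m/s)
            and source height h (m), with total reflection at the ground."""
            lateral = np.exp(-y ** 2 / (2 * sigma_y ** 2))
            vertical = (np.exp(-(z - h) ** 2 / (2 * sigma_z ** 2))
                        + np.exp(-(z + h) ** 2 / (2 * sigma_z ** 2)))
            return q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

        # Hypothetical landfill source, receptor 500 m downwind on the axis:
        print(plume_conc(q=50.0, u=4.0, y=0.0, z=2.0, h=0.0,
                         sigma_y=35.0, sigma_z=18.0))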

  1. Estimation of the temperature dependent interaction between uncharged point defects in Si

    SciTech Connect

    Kamiyama, Eiji; Vanhellemont, Jan; Sueoka, Koji

    2015-01-15

    A method is described to estimate the temperature dependent interaction between two uncharged point defects in Si based on DFT calculations. As an illustration, the formation of the uncharged di-vacancy V2 is discussed, based on the temperature dependent attractive field between both vacancies. For that purpose, all irreducible configurations of two uncharged vacancies are determined, each with their weight given by the number of equivalent configurations. Using a standard 216-atom supercell, nineteen irreducible configurations of two vacancies are obtained. The binding energies of all these configurations are calculated. Each vacancy is surrounded by several attractive sites for another vacancy. The temperature dependent total volume of these attractive sites corresponds to a radius that is closely related to the capture radius for the formation of a di-vacancy used in continuum theory. The presented methodology can in principle also be applied to estimate the capture radius for pair formation of any type of point defects.

  2. A Roving Dual-Presentation Simultaneity-Judgment Task to Estimate the Point of Subjective Simultaneity

    PubMed Central

    Yarrow, Kielan; Martin, Sian E.; Di Costa, Steven; Solomon, Joshua A.; Arnold, Derek H.

    2016-01-01

    The most popular tasks with which to investigate the perception of subjective synchrony are the temporal order judgment (TOJ) and the simultaneity judgment (SJ). Here, we discuss a complementary approach—a dual-presentation (2x) SJ task—and focus on appropriate analysis methods for a theoretically desirable “roving” design. Two stimulus pairs are presented on each trial and the observer must select the most synchronous. To demonstrate this approach, in Experiment 1 we tested the 2xSJ task alongside TOJ, SJ, and simple reaction-time (RT) tasks using audiovisual stimuli. We interpret responses from each task using detection-theoretic models, which assume variable arrival times for sensory signals at critical brain structures for timing perception. All tasks provide similar estimates of the point of subjective simultaneity (PSS) on average, and PSS estimates from some tasks were correlated on an individual basis. The 2xSJ task produced lower and more stable estimates of model-based (and thus comparable) sensory/decision noise than the TOJ. In Experiment 2 we obtained similar results using RT, TOJ, ternary, and 2xSJ tasks for all combinations of auditory, visual, and tactile stimuli. In Experiment 3 we investigated attentional prior entry, using both TOJs and 2xSJs. We found that estimates of prior-entry magnitude correlated across these tasks. Overall, our study establishes the practicality of the roving dual-presentation SJ task, but also illustrates the additional complexity of the procedure. We consider ways in which this task might complement more traditional procedures, particularly when it is important to estimate both PSS and sensory/decisional noise. PMID:27047434

  3. Estimating the Critical Point of Crowding in the Emergency Department for the Warning System

    NASA Astrophysics Data System (ADS)

    Chang, Y.; Pan, C.; Tseng, C.; Wen, J.

    2011-12-01

    The purpose of this study is to deduce a function from the admission/discharge rates of patient flow to estimate a "Critical Point" that provides a reference for warning systems in regards to crowding in the emergency department (ED) of a hospital or medical clinic. In this study, a model of "Input-Throughput-Output" was used in our established mathematical function to evaluate the critical point. The function is defined as dPin/dt = dPwait/dt + Cp×B + dPout/dt, where Pin = number of registered patients, Pwait = number of waiting patients, Cp = retention rate per bed (calculated for the critical point), B = number of licensed beds in the treatment area, and Pout = number of patients discharged from the treatment area. Using the average Cp of ED crowding, we could start the warning system at an appropriate time and then plan the necessary emergency response to facilitate patient flow more smoothly. It was concluded that ED crowding could be quantified using the average value of Cp and that this value could be used as a reference for medical staff to give optimal emergency medical treatment to patients. Therefore, additional practical work should be launched to collect more precise quantitative data.
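
    Rearranged for Cp, the balance yields the retention rate per bed directly from observable rates. A toy Python example; the hourly figures are invented for illustration only.

        def retention_rate(rate_in, rate_wait, rate_out, beds):
            """Cp from the Input-Throughput-Output balance
            dPin/dt = dPwait/dt + Cp*B + dPout/dt, solved for Cp."""
            return (rate_in - rate_wait - rate_out) / beds

        # 18 registrations/h, waiting queue growing by 4/h, 10 discharges/h,
        # 20 licensed treatment beds:
        print(retention_rate(18.0, 4.0, 10.0, 20))  # Cp = 0.2 per bed per hour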

  4. Benthic remineralisation rates in southern North Sea - from point measurements to areal estimates

    NASA Astrophysics Data System (ADS)

    Neumann, Andreas; Friedrich, Jana; van Beusekom, Justus; Naderipour, Céline

    2015-04-01

    The southern North Sea is enclosed by densely populated hinterland with intensive use by agriculture and industry and thus substantially affected by anthropogenic influences. As a coastal subsystem, this applies especially to the German Wadden Sea, a system of back-barrier tidal flats along the whole German Bight. Ongoing efforts to implement environmental protection policies during the last decades changed the significance of various pollutants such as reactive nitrogen or phosphate, which raises the desire for constant monitoring of the coastal ecosystem to assess the efficiency of the employed environmental protection measures. Environmental monitoring is limited to point measurements which thus have to be interpolated with appropriate models. However, existing models to estimate various sediment characteristics for the interpolation of point measurements appear insufficient when compared with actual field measurements in the southern North Sea. We therefore seek to improve these models by identifying and quantifying key variables of benthic solute fluxes by comprehensive measurements which cover the complete spatial and seasonal variability. We employ in-situ measurements with the eddy-correlation technique and flux chambers in combination with ex-situ incubations of sediment cores to establish benthic fluxes of oxygen and nutrients. Additional ex-situ measurements determine basic sediment characteristics such as permeability, volumetric reaction rates, and substrate concentration. With our first results we mapped the distribution of measured sediment permeability, which suggest that areas with water depth greater than 30 m are impervious whereas sediment in shallower water at the Dogger Bank and along the coast is substantially permeable with permeability between 10-12 m2 and 10-10 m2. This implies that benthic fluxes can be estimated with simple diffusion-type models for water depths >30 m, whereas estimates especially for coastal sediments require

  5. Contaminant point source localization error estimates as functions of data quantity and model quality

    NASA Astrophysics Data System (ADS)

    Hansen, Scott K.; Vesselinov, Velimir V.

    2016-10-01

    We develop empirically-grounded error envelopes for localization of a point contamination release event in the saturated zone of a previously uncharacterized heterogeneous aquifer into which a number of plume-intercepting wells have been drilled. We assume that flow direction in the aquifer is known exactly and velocity is known to within a factor of two of our best guess from well observations prior to source identification. Other aquifer and source parameters must be estimated by interpretation of well breakthrough data via the advection-dispersion equation. We employ high performance computing to generate numerous random realizations of aquifer parameters and well locations, simulate well breakthrough data, and then employ unsupervised machine optimization techniques to estimate the most likely spatial (or space-time) location of the source. Tabulating the accuracy of these estimates from the multiple realizations, we relate the size of 90% and 95% confidence envelopes to the data quantity (number of wells) and model quality (fidelity of ADE interpretation model to actual concentrations in a heterogeneous aquifer with channelized flow). We find that for purely spatial localization of the contaminant source, increased data quantities can make up for reduced model quality. For space-time localization, we find similar qualitative behavior, but significantly degraded spatial localization reliability and less improvement from extra data collection. Since the space-time source localization problem is much more challenging, we also tried a multiple-initial-guess optimization strategy. This greatly enhanced performance, but gains from additional data collection remained limited.

  6. Curie Point Depth Estimates Beneath the Incipient Okavango Rift Zone, Northwest Botswana

    NASA Astrophysics Data System (ADS)

    Leseane, K.; Atekwana, E. A.; Mickus, K. L.; Mohamed, A.; Atekwana, E. A.

    2013-12-01

    We investigated the regional thermal structure of the crust beneath the Okavango Rift Zone (ORZ), surrounding cratons and orogenic mobile belts using Curie Point Depth (CPD) estimates. Estimating the depth to the base of magnetic sources is important in understanding and constraining the thermal structure of the crust in zones of incipient continental rifting where no other data are available to image the crustal thermal structure. Our objective was to determine whether there are any thermal perturbations within the lithosphere during rift initiation. The top and bottom of the magnetized crust were calculated using two-dimensional (2D) power-density spectra analysis and three-dimensional (3D) inversions of the total field magnetic data of Botswana in overlapping square windows of 1 degree x 1 degree. The calculated CPD estimates varied between ~8 km and ~24 km. The deepest CPD values (16-24 km) occur under the surrounding cratons and orogenic mobile belts, whereas the shallowest CPD values were found within the ORZ. CPD values of 8 to 10 km occur in the northeastern part of the ORZ, a site of more developed rift structures and where hot springs are known to occur. CPD values of 12 to 16 km were obtained in the southwestern part of the ORZ, where rift structures are progressively less developed and where the rift terminates. The results suggest a possible thermal anomaly beneath the incipient ORZ. Further geophysical studies as part of the PRIDE (Project for Rift Initiation Development and Evolution) project are needed to confirm this proposition.
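
    One common way to turn a radially averaged power spectrum into a CPD value is the centroid method: depths to the top (Zt) and centroid (Z0) of the magnetic layer come from linear fits over two wavenumber bands, and the basal depth follows as Zb = 2*Z0 - Zt. The Python sketch below shows that relation generically; it is not the specific 2D/3D procedure of this study, and the fitting bands are user-chosen index masks.

        import numpy as np

        def curie_depth(k, power, top_band, centroid_band):
            """Centroid-method CPD from a radially averaged power spectrum
            P(k), k in rad/km. Each band must select at least two samples."""
            ln_sqrt_p = np.log(np.sqrt(power))
            z_top = -np.polyfit(k[top_band], ln_sqrt_p[top_band], 1)[0]
            z_cen = -np.polyfit(k[centroid_band],
                                (ln_sqrt_p - np.log(k))[centroid_band], 1)[0]
            return 2.0 * z_cen - z_top  # depth to the base, in km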

  7. Estimation of the skull insertion loss using an optoacoustic point source

    NASA Astrophysics Data System (ADS)

    Estrada, Héctor; Rebling, Johannes; Turner, Jake; Kneipp, Moritz; Shoham, Shy; Razansky, Daniel

    2016-03-01

    The acoustically-mismatched skull bone poses significant challenges for the application of ultrasonic and optical techniques in neuroimaging, still typically requiring invasive approaches using craniotomy or skull thinning. Optoacoustic imaging partially circumvents the acoustic distortions due to the skull because the induced wave is transmitted only once as opposed to the round trip in pulse-echo ultrasonography. To this end, the mouse brain has been successfully imaged transcranially by optoacoustic scanning microscopy. Yet, the skull may adversely affect the lateral and axial resolution of transcranial brain images. In order to accurately characterize the complex behavior of the optoacoustic signal as it traverses through the skull, one needs to consider the ultrawideband nature of the optoacoustic signals. Here the insertion loss of murine skull has been measured by means of a hybrid optoacoustic-ultrasound scanning microscope having a spherically focused PVDF transducer and pulsed laser excitation at 532 nm of a 20 μm diameter absorbing microsphere acting as an optoacoustic point source. Accurate modeling of the acoustic transmission through the skull is further performed using a Fourier-domain expansion of a solid-plate model, based on the simultaneously acquired pulse-echo ultrasound image providing precise information about the skull's position and its orientation relative to the optoacoustic source. Good qualitative agreement has been found between the solid-plate model and experimental measurements. The presented strategy might pave the way for modeling skull effects and deriving efficient correction schemes to account for acoustic distortions introduced by an adult murine skull, thus improving the spatial resolution, effective penetration depth and overall image quality of transcranial optoacoustic brain microscopy.

  8. Accurate spike estimation from noisy calcium signals for ultrafast three-dimensional imaging of large neuronal populations in vivo

    PubMed Central

    Deneux, Thomas; Kaszas, Attila; Szalay, Gergely; Katona, Gergely; Lakner, Tamás; Grinvald, Amiram; Rózsa, Balázs; Vanzetta, Ivo

    2016-01-01

    Extracting neuronal spiking activity from large-scale two-photon recordings remains challenging, especially in mammals in vivo, where large noises often contaminate the signals. We propose a method, MLspike, which returns the most likely spike train underlying the measured calcium fluorescence. It relies on a physiological model including baseline fluctuations and distinct nonlinearities for synthetic and genetically encoded indicators. Model parameters can be either provided by the user or estimated from the data themselves. MLspike is computationally efficient thanks to its original discretization of probability representations; moreover, it can also return spike probabilities or samples. Benchmarked on extensive simulations and real data from seven different preparations, it outperformed state-of-the-art algorithms. Combined with the finding obtained from systematic data investigation (noise level, spiking rate and so on) that photonic noise is not necessarily the main limiting factor, our method allows spike extraction from large-scale recordings, as demonstrated on acousto-optical three-dimensional recordings of over 1,000 neurons in vivo. PMID:27432255

  9. Novel point estimation from a semiparametric ratio estimator (SPRE): long-term health outcomes from short-term linear data, with application to weight loss in obesity.

    PubMed

    Weissman-Miller, Deborah

    2013-01-01

    Point estimation is particularly important in predicting weight loss in individuals or small groups. In this analysis, a new health response function is based on a model of human response over time to estimate long-term health outcomes from a change point in short-term linear regression. This important estimation capability is addressed for small groups and single-subject designs in pilot studies for clinical trials, medical and therapeutic clinical practice. These estimations are based on a change point given by parameters derived from short-term participant data in ordinary least squares (OLS) regression. The development of the change point in initial OLS data and the point estimations are given in a new semiparametric ratio estimator (SPRE) model. The new response function is taken as a ratio of two-parameter Weibull distributions times a prior outcome value that steps estimated outcomes forward in time, where the shape and scale parameters are estimated at the change point. The Weibull distributions used in this ratio are derived from a Kelvin model in mechanics taken here to represent human beings. A distinct feature of the SPRE model in this article is that initial treatment response for a small group or a single subject is reflected in long-term response to treatment. This model is applied to weight loss in obesity in a secondary analysis of data from a classic weight loss study, which has been selected due to the dramatic increase in obesity in the United States over the past 20 years. A very small relative error between estimated and test data is shown for obesity treatment with the weight loss medication phentermine or placebo for the test dataset. An application of SPRE in clinical medicine or occupational therapy is to estimate long-term weight loss for a single subject or a small group near the beginning of treatment. PMID:24190595
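
    A hedged sketch of the forward-stepping idea: each new point estimate scales the previous outcome by a ratio of two-parameter Weibull terms evaluated at consecutive times. The survival-function form, the shape and scale values, and the weight numbers below are assumptions for illustration; the published SPRE model defines the exact ratio.

        import numpy as np

        def spre_step(y_prev, t_prev, t_next, shape, scale):
            """One forward step: prior outcome times a ratio of Weibull
            survival terms exp(-(t/scale)**shape) at consecutive times."""
            w = lambda t: np.exp(-(t / scale) ** shape)
            return y_prev * w(t_next) / w(t_prev)

        # Hypothetical weight (kg) stepped forward weekly from a change
        # point at week 4, with shape/scale taken as given:
        y, t = 92.0, 4.0
        for week in range(5, 9):
            y = spre_step(y, t, week, shape=1.2, scale=60.0)
            t = week
        print(round(y, 2))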

  10. Estimating forest biomass from LiDAR data: A comparison of the raster-based and point-cloud data approach

    NASA Astrophysics Data System (ADS)

    Garcia-Alonso, M.; Ferraz, A.; Saatchi, S. S.; Casas, A.; Koltunov, A.; Ustin, S.; Ramirez, C.; Balzter, H.

    2015-12-01

    Accurate knowledge of forest biomass and its dynamics is critical for better understanding the carbon cycle and improving forest management decisions to ensure forest sustainability. LiDAR technology provides accurate estimates of aboveground biomass in different ecosystems, minimizing the signal saturation problems that are common with other remote sensing technologies. LiDAR data processing can be based on two different approaches. The first is based on deriving structural metrics from returns classified as vegetation, while the second one is based on metrics derived from the canopy height model (CHM). The CHM is obtained by subtracting the digital elevation model (DEM) that was created from the ground returns, from the digital surface model (DSM), which was itself constructed using the maximum height within each grid cell. The former approach provides a better description of the vertical distribution of the vegetation, whereas the latter significantly reduces the computational burden involved in processing point cloud data at the expense of losing information. This study evaluates the performance of both approaches for biomass estimation over very different ecosystems, including a Mediterranean forest in the Sierra Nevada Mountains of California and a tropical forest in Barro Colorado Island (Panama). In addition, the effect of point density on the variables derived, and ultimately on the estimated biomass, will be assessed.
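
    The raster-based approach reduces to a per-cell subtraction once the two grids exist. A minimal Python sketch with invented grid values; negative differences are clipped since they are usually interpolation noise.

        import numpy as np

        def canopy_height_model(dsm, dem):
            """Raster CHM: per-cell difference between the digital surface
            model (maximum return height per cell) and the terrain model."""
            chm = np.asarray(dsm, dtype=float) - np.asarray(dem, dtype=float)
            return np.clip(chm, 0.0, None)

        chm = canopy_height_model([[312.4, 315.0]], [[295.1, 295.3]])
        print(chm.mean(), chm.max())  # grid metrics used as biomass predictors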

  11. Travel Time Estimation Using Freeway Point Detector Data Based on Evolving Fuzzy Neural Inference System.

    PubMed

    Tang, Jinjun; Zou, Yajie; Ash, John; Zhang, Shen; Liu, Fang; Wang, Yinhai

    2016-01-01

    Travel time is an important measurement used to evaluate the extent of congestion within road networks. This paper presents a new method to estimate the travel time based on an evolving fuzzy neural inference system. The input variables in the system are traffic flow data (volume, occupancy, and speed) collected from loop detectors located at points both upstream and downstream of a given link, and the output variable is the link travel time. A first order Takagi-Sugeno fuzzy rule set is used to complete the inference. For training the evolving fuzzy neural network (EFNN), two learning processes are proposed: (1) a K-means method is employed to partition input samples into different clusters, and a Gaussian fuzzy membership function is designed for each cluster to measure the membership degree of samples to the cluster centers. As the number of input samples increases, the cluster centers are modified and membership functions are also updated; (2) a weighted recursive least squares estimator is used to optimize the parameters of the linear functions in the Takagi-Sugeno type fuzzy rules. Testing datasets consisting of actual and simulated data are used to test the proposed method. Three common criteria including mean absolute error (MAE), root mean square error (RMSE), and mean absolute relative error (MARE) are utilized to evaluate the estimation performance. Estimation results demonstrate the accuracy and effectiveness of the EFNN method through comparison with existing methods including: multiple linear regression (MLR), instantaneous model (IM), linear model (LM), neural network (NN), and cumulative plots (CP). PMID:26829639

  12. Travel Time Estimation Using Freeway Point Detector Data Based on Evolving Fuzzy Neural Inference System

    PubMed Central

    Tang, Jinjun; Zou, Yajie; Ash, John; Zhang, Shen; Liu, Fang; Wang, Yinhai

    2016-01-01

    Travel time is an important measurement used to evaluate the extent of congestion within road networks. This paper presents a new method to estimate the travel time based on an evolving fuzzy neural inference system. The input variables in the system are traffic flow data (volume, occupancy, and speed) collected from loop detectors located at points both upstream and downstream of a given link, and the output variable is the link travel time. A first order Takagi-Sugeno fuzzy rule set is used to complete the inference. For training the evolving fuzzy neural network (EFNN), two learning processes are proposed: (1) a K-means method is employed to partition input samples into different clusters, and a Gaussian fuzzy membership function is designed for each cluster to measure the membership degree of samples to the cluster centers. As the number of input samples increases, the cluster centers are modified and membership functions are also updated; (2) a weighted recursive least squares estimator is used to optimize the parameters of the linear functions in the Takagi-Sugeno type fuzzy rules. Testing datasets consisting of actual and simulated data are used to test the proposed method. Three common criteria including mean absolute error (MAE), root mean square error (RMSE), and mean absolute relative error (MARE) are utilized to evaluate the estimation performance. Estimation results demonstrate the accuracy and effectiveness of the EFNN method through comparison with existing methods including: multiple linear regression (MLR), instantaneous model (IM), linear model (LM), neural network (NN), and cumulative plots (CP). PMID:26829639

  13. Parameter Estimation of Fossil Oysters from High Resolution 3D Point Cloud and Image Data

    NASA Astrophysics Data System (ADS)

    Djuricic, Ana; Harzhauser, Mathias; Dorninger, Peter; Nothegger, Clemens; Mandic, Oleg; Székely, Balázs; Molnár, Gábor; Pfeifer, Norbert

    2014-05-01

    A unique fossil oyster reef was excavated at Stetten in Lower Austria, which is also the highlight of the geo-edutainment park 'Fossilienwelt Weinviertel'. It provides the rare opportunity to study the Early Miocene flora and fauna of the Central Paratethys Sea. The site presents the world's largest fossil oyster biostrome, formed about 16.5 million years ago in a tropical estuary of the Korneuburg Basin. About 15,000 shells of Crassostrea gryphoides, each up to 80 cm long, cover a 400 m2 area. Our project 'Smart-Geology for the World's largest fossil oyster reef' combines methods of photogrammetry, geology and paleontology to document, evaluate and quantify the shell bed. This interdisciplinary approach will be applied to test hypotheses on the genesis of the taphocenosis (e.g. tsunami versus major storm) and to reconstruct pre- and post-event processes. Hence, we are focusing on using visualization technologies from photogrammetry in geology and paleontology in order to develop new methods for automatic and objective evaluation of 3D point clouds. These will be studied on the basis of a very dense surface reconstruction of the oyster reef. 'Smart Geology', as an extension of the classic discipline, exploits massive data, automatic interpretation, and visualization. Photogrammetry provides the tools for surface acquisition and objective, automated interpretation. We also want to stress the economic aspect of using automatic shape detection in paleontology, which saves manpower and increases efficiency during the monitoring and evaluation process. Currently, there are many well-known algorithms for 3D shape detection of certain objects. We are using dense 3D laser scanning data from an instrument utilizing the phase shift measuring principle, which provides an accurate geometric basis (< 3 mm). However, the situation is difficult in this multiple-object scenario, where more than 15,000 complete or fragmentary parts of an object with random orientation are found. The goal

  14. Line-point intercept, grid-point intercept, and ocular estimate methods: their relative value for rangeland assessment and monitoring

    Technology Transfer Automated Retrieval System (TEKTRAN)

    We compared the utility of three methods for rangeland assessment and monitoring based on the number of species detected, foliar cover, precision (coefficient of variation) and time required for each method. We used four 70-m transects in 15 sites of five vegetation types (3 sites/type). Point inter...

  15. Adaptive robust maximum power point tracking control for perturbed photovoltaic systems with output voltage estimation.

    PubMed

    Koofigar, Hamid Reza

    2016-01-01

    The problem of maximum power point tracking (MPPT) in photovoltaic (PV) systems, despite model uncertainties and variations in environmental conditions, is addressed. Introducing a mathematical description, an adaptive sliding mode control (ASMC) algorithm is first developed. Unlike many previous investigations, the output voltage is not required to be sensed, and neither the upper bound of the system uncertainties nor the variations of irradiance and temperature are required to be known. Estimating the output voltage by an update law, an adaptive-based H∞ tracking algorithm is then developed for the case in which the perturbations are energy-bounded. The stability analysis is presented for the proposed tracking control schemes, based on the Lyapunov stability theorem. From a comparison viewpoint, some numerical and experimental studies are also presented and discussed. PMID:26606851

  16. Adaptive robust maximum power point tracking control for perturbed photovoltaic systems with output voltage estimation.

    PubMed

    Koofigar, Hamid Reza

    2016-01-01

    The problem of maximum power point tracking (MPPT) in photovoltaic (PV) systems, despite model uncertainties and variations in environmental conditions, is addressed. Introducing a mathematical description, an adaptive sliding mode control (ASMC) algorithm is first developed. Unlike many previous investigations, the output voltage is not required to be sensed, and neither the upper bound of the system uncertainties nor the variations of irradiance and temperature are required to be known. Estimating the output voltage by an update law, an adaptive-based H∞ tracking algorithm is then developed for the case in which the perturbations are energy-bounded. The stability analysis is presented for the proposed tracking control schemes, based on the Lyapunov stability theorem. From a comparison viewpoint, some numerical and experimental studies are also presented and discussed.

  17. Consistent multi-time-point brain atrophy estimation from the boundary shift integral.

    PubMed

    Leung, Kelvin K; Ridgway, Gerard R; Ourselin, Sébastien; Fox, Nick C

    2012-02-15

    Brain atrophy measurement is increasingly important in studies of neurodegenerative diseases such as Alzheimer's disease (AD), with particular relevance to trials of potential disease-modifying drugs. Automated registration-based methods such as the boundary shift integral (BSI) have been developed to provide more precise measures of change from a pair of serial MR scans. However, when a method treats one image of the pair (typically the baseline) as the reference to which the other is compared, this systematic asymmetry risks introducing bias into the measurement. Recent concern about potential biases in longitudinal studies has led to several suggestions to use symmetric image registration, though some of these methods are limited to two time-points per subject. Therapeutic trials and natural history studies increasingly involve several serial scans; it would therefore be useful to have a method that can consistently estimate brain atrophy over multiple time-points. Here, we use the log-Euclidean concept of a within-subject average to develop affine registration and differential bias correction methods suitable for any number of time-points, yielding a longitudinally consistent multi-time-point BSI technique. Baseline, 12-month and 24-month MR scans of healthy controls, subjects with mild cognitive impairment and AD patients from the Alzheimer's Disease Neuroimaging Initiative are used for testing the bias in processing scans with different amounts of atrophy. Four tests are used to assess bias in brain volume loss from BSI: (a) inverse consistency with respect to ordering of pairs of scans 12 months apart; (b) transitivity consistency over three time-points; (c) randomly ordered back-to-back scans, expected to show no consistent change over subjects; and (d) linear regression of the atrophy rates calculated from the baseline and 12-month scans and the baseline and 24-month scans, where any additive bias should be indicated by a non-zero intercept. Results
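
    The within-subject average that keeps the registration symmetric is the log-Euclidean mean of the per-time-point transforms. A small Python sketch of that single ingredient (not the full BSI pipeline), using hypothetical 4x4 affine matrices:

        import numpy as np
        from scipy.linalg import expm, logm

        def log_euclidean_mean(transforms):
            """exp of the mean of the matrix logarithms: registering every
            time-point to this average treats baseline and follow-up scans
            symmetrically, for any number of time-points."""
            logs = [np.real(logm(t)) for t in transforms]
            return np.real(expm(np.mean(logs, axis=0)))

        # Hypothetical pure translations for three time-points:
        def shift(dx):
            m = np.eye(4)
            m[0, 3] = dx
            return m

        print(log_euclidean_mean([shift(0.0), shift(1.2), shift(2.4)])[0, 3])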

  18. The Look-point Aircraft Coordinate Estimator (LACE) and potential applications

    NASA Technical Reports Server (NTRS)

    Anderson, W. W.

    1979-01-01

    A look-point aircraft coordinate estimator (LACE) consisting of a windshield runway symbol projector, pilot input controls, microprocessor, and eye-alignment device is described. The estimator is used by a pilot to determine his aircraft's position relative to a runway or other visible terrain or target. The pilot initially superimposes and then corrects the superposition of the runway symbol over the runway during approach during periods when the runway is visible. Using the pilot's inputs the microprocessor calculates the position of the aircraft in terms of runway coordinates, then generates an approach trajectory and issues instructions to an autopilot. The microprocessor contains a model of the aircraft's dynamics and calculates a theoretical aircraft trajectory. The theoretical position of the aircraft is then used to drive the runway symbol, with the pilot's input being additive. The system thus acts as an aid in making low visibility approaches and landings when only an occasional glimpse of the runway is possible and no ground referenced landing systems are available. The system can also be used as an independent landing monitor for ground referenced landing systems.

  19. Estimation of normalized point-source sensitivity of segment surface specifications for extremely large telescopes.

    PubMed

    Seo, Byoung-Joon; Nissly, Carl; Troy, Mitchell; Angeli, George; Bernier, Robert; Stepp, Larry; Williams, Eric

    2013-06-20

    We present a method which estimates the normalized point-source sensitivity (PSSN) of a segmented telescope when only information from a single segment surface is known. The estimation principle is based on a statistical approach with an assumption that all segment surfaces have the same power spectral density (PSD) as the given segment surface. As presented in this paper, the PSSN based on this statistical approach represents a worst-case scenario among statistical random realizations of telescopes when all segment surfaces have the same PSD. Therefore, this method, which we call the vendor table, is expected to be useful for individual segment specification such as the segment polishing specification. The specification based on the vendor table can be directly related to a science metric such as PSSN and provides the mirror vendors significant flexibility by specifying a single overall PSSN value for them to meet. We build a vendor table for the Thirty Meter Telescope (TMT) and test it using multiple mirror samples from various mirror vendors to prove its practical utility. Accordingly, TMT has a plan to adopt this vendor table for its M1 segment final mirror polishing requirement.

  20. Shorter sampling periods and accurate estimates of milk volume and components are possible for pasture based dairy herds milked with automated milking systems.

    PubMed

    Kamphuis, Claudia; Burke, Jennie K; Taukiri, Sarah; Petch, Susan-Fay; Turner, Sally-Anne

    2016-08-01

    Dairy cows grazing pasture and milked using automated milking systems (AMS) have lower milking frequencies than indoor-fed cows milked using AMS. Therefore, milk recording intervals used for herd testing indoor-fed cows may not be suitable for cows on pasture-based farms. We hypothesised that accurate standardised 24 h estimates could be determined for AMS herds with milk recording intervals shorter than the Gold Standard (48 h), but that the optimum milk recording interval would depend on the herd average for milking frequency. The Gold Standard protocol was applied on five commercial dairy farms with AMS between December 2011 and February 2013. From 12 milk recording test periods, involving 2211 cow-test days and 8049 cow milkings, standardised 24 h estimates for milk volume and milk composition were calculated for the Gold Standard protocol and compared with those collected during nine alternative sampling scenarios, including six shorter sampling periods and three in which a fixed number of milk samples per cow were collected. The results suggest that a 48 h milk recording protocol is unnecessarily long for collecting accurate estimates during milk recording on pasture-based AMS farms. Collection of only two milk samples per cow was optimal in terms of high concordance correlation coefficients for milk volume and components and a low proportion of missed cow-test days. Further research is required to determine the effects of diurnal variation in milk composition on standardised 24 h estimates for milk volume and components before a protocol based on a fixed number of samples could be considered. Based on the results of this study, New Zealand has adopted a split protocol for herd testing based on the average milking frequency for the herd (NZ Herd Test Standard 8100:2015). PMID:27600967

  1. Contaminant point source localization error estimates as functions of data quantity and model quality

    DOE PAGES

    Hansen, Scott K.; Vesselinov, Velimir Valentinov

    2016-09-09

    We develop empirically-grounded error envelopes for localization of a point contamination release event in the saturated zone of a previously uncharacterized heterogeneous aquifer into which a number of plume-intercepting wells have been drilled. We assume that flow direction in the aquifer is known exactly and velocity is known to within a factor of two of our best guess from well observations prior to source identification. Other aquifer and source parameters must be estimated by interpretation of well breakthrough data via the advection-dispersion equation. We employ high performance computing to generate numerous random realizations of aquifer parameters and well locations, simulate well breakthrough data, and then employ unsupervised machine optimization techniques to estimate the most likely spatial (or space-time) location of the source. Tabulating the accuracy of these estimates from the multiple realizations, we relate the size of 90% and 95% confidence envelopes to the data quantity (number of wells) and model quality (fidelity of ADE interpretation model to actual concentrations in a heterogeneous aquifer with channelized flow). We find that for purely spatial localization of the contaminant source, increased data quantities can make up for reduced model quality. For space-time localization, we find similar qualitative behavior, but significantly degraded spatial localization reliability and less improvement from extra data collection. Since the space-time source localization problem is much more challenging, we also tried a multiple-initial-guess optimization strategy. This greatly enhanced performance, but gains from additional data collection remained limited.

  2. Subcutaneous nerve activity is more accurate than the heart rate variability in estimating cardiac sympathetic tone in ambulatory dogs with myocardial infarction

    PubMed Central

    Chan, Yi-Hsin; Tsai, Wei-Chung; Shen, Changyu; Han, Seongwook; Chen, Lan S.; Lin, Shien-Fong; Chen, Peng-Sheng

    2015-01-01

    Background We recently reported that subcutaneous nerve activity (SCNA) can be used to estimate sympathetic tone. Objectives To test the hypothesis that left thoracic SCNA is more accurate than heart rate variability (HRV) in estimating cardiac sympathetic tone in ambulatory dogs with myocardial infarction (MI). Methods We used an implanted radiotransmitter to study left stellate ganglion nerve activity (SGNA), vagal nerve activity (VNA), and thoracic SCNA in 9 dogs at baseline and up to 8 weeks after MI. HRV was determined by time-domain, frequency-domain and non-linear analyses. Results The correlation coefficients between integrated SGNA and SCNA averaged 0.74 (95% confidence interval (CI), 0.41–1.06) at baseline and 0.82 (95% CI, 0.63–1.01) after MI (P<.05 for both). The absolute values of these correlation coefficients were significantly larger than those between SGNA and HRV based on time-domain, frequency-domain and non-linear analyses, respectively, at baseline (P<.05 for all) and after MI (P<.05 for all). There was a clear increase in SGNA and SCNA at 2, 4, 6 and 8 weeks after MI, while HRV parameters showed no significant changes. Significant circadian variations were noted in SCNA, SGNA and all HRV parameters both at baseline and after MI. Atrial tachycardia (AT) episodes were invariably preceded by SCNA and SGNA, which increased progressively from 120, 90 and 60 to 30 s before AT onset. No such changes in HRV parameters were observed before AT onset. Conclusion SCNA is more accurate than HRV in estimating cardiac sympathetic tone in ambulatory dogs with MI. PMID:25778433

  3. A new methodology in fast and accurate matching of the 2D and 3D point clouds extracted by laser scanner systems

    NASA Astrophysics Data System (ADS)

    Torabi, M.; Mousavi G., S. M.; Younesian, D.

    2015-03-01

    Registration of point clouds is a common challenge in computer vision applications. As an application, matching of train wheel profiles extracted from two viewpoints is studied in this paper. The registration problem is formulated as an optimization problem. An error minimization function for registration of two partially overlapping point clouds is presented. The error function is defined as the sum of the squared distances between the source points and their corresponding pairs, which should be minimized. The corresponding pairs are obtained through Iterative Closest Point (ICP) variants. Here, a point-to-plane ICP variant is employed, with Principal Component Analysis (PCA) used to obtain the tangent planes. It is shown that minimization of the proposed objective function reduces to the point-to-plane ICP variant. We utilized this algorithm to register point clouds of two partially overlapping train wheel profiles extracted from two viewpoints in 2D. A number of synthetic and real point clouds in 3D are also studied to evaluate the reliability and convergence rate of our method compared with other registration methods.
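
    The two ingredients named above are short to sketch: a PCA normal for the local tangent plane, and the point-to-plane objective summed over matched pairs. A minimal Python illustration; the correspondence search and the minimisation step themselves are omitted.

        import numpy as np

        def pca_normal(neighbors):
            """Unit normal of the local tangent plane: the singular vector
            of the centred neighbourhood with the smallest singular value."""
            centered = neighbors - neighbors.mean(axis=0)
            _, _, vt = np.linalg.svd(centered, full_matrices=False)
            return vt[-1]

        def point_to_plane_error(src, dst, dst_normals):
            """Objective of a point-to-plane ICP step: sum of squared
            distances from each source point to the tangent plane of its
            already-matched closest destination point."""
            residuals = np.einsum('ij,ij->i', src - dst, dst_normals)
            return float(np.sum(residuals ** 2))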

  4. Estimating the operating point of the cochlear transducer using low-frequency biased distortion products

    PubMed Central

    Brown, Daniel J.; Hartsock, Jared J.; Gill, Ruth M.; Fitzgerald, Hillary E.; Salt, Alec N.

    2009-01-01

    Distortion products in the cochlear microphonic (CM) and in the ear canal in the form of distortion product otoacoustic emissions (DPOAEs) are generated by nonlinear transduction in the cochlea and are related to the resting position of the organ of Corti (OC). A 4.8 Hz acoustic bias tone was used to displace the OC, while the relative amplitude and phase of distortion products evoked by a single tone [most often 500 Hz, 90 dB SPL (sound pressure level)] or two simultaneously presented tones (most often 4 kHz and 4.8 kHz, 80 dB SPL) were monitored. Electrical responses recorded from the round window, scala tympani and scala media of the basal turn, and acoustic emissions in the ear canal were simultaneously measured and compared during the bias. Bias-induced changes in the distortion products were similar to those predicted from computer models of a saturating transducer with a first-order Boltzmann distribution. Our results suggest that biased DPOAEs can be used to non-invasively estimate the OC displacement, producing a measurement equivalent to the transducer operating point obtained via Boltzmann analysis of the basal turn CM. Low-frequency biased DPOAEs might provide a diagnostic tool to objectively diagnose abnormal displacements of the OC, as might occur with endolymphatic hydrops. PMID:19354389
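
    The underlying nonlinearity is a saturating first-order Boltzmann function of displacement, and shifting its operating point is what modulates the distortion products. A generic Python sketch with invented parameter values, not figures fitted to these recordings:

        import numpy as np

        def boltzmann_transducer(x, x0, s, v_max=1.0):
            """First-order Boltzmann transducer: saturating output versus
            organ of Corti displacement x; x0 is the operating point."""
            return v_max / (1.0 + np.exp(-(x - x0) / s))

        # A bias that displaces the operating point away from the inflection
        # makes the response to a probe tone asymmetric (even-order distortion):
        t = np.linspace(0.0, 0.01, 480)
        probe = 0.3 * np.sin(2 * np.pi * 500 * t)
        print(boltzmann_transducer(probe, x0=0.0, s=0.25).mean(),
              boltzmann_transducer(probe, x0=0.4, s=0.25).mean())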

  5. Image adaptive point-spread function estimation and deconvolution for in vivo confocal microscopy.

    PubMed

    Von Tiedemann, M; Fridberger, A; Ulfendahl, M; Tomo, I; Boutet de Monvel, J

    2006-01-01

    Visualizing deep inside the tissue of a thick biological sample often poses severe constraints on image conditions. Standard restoration techniques (denoising and deconvolution) can then be very useful, allowing one to increase the signal-to-noise ratio and the resolution of the images. In this paper, we consider the problem of obtaining a good determination of the point-spread function (PSF) of a confocal microscope, a prerequisite for applying deconvolution to three-dimensional image stacks acquired with this system. Because of scattering and optical distortion induced by the sample, the PSF has to be acquired anew for each experiment. To tackle this problem, we used a screening approach to estimate the PSF adaptively and automatically from the images. Small PSF-like structures were detected in the images, and a theoretical PSF model reshaped to match the geometric characteristics of these structures. We used numerical experiments to quantify the sensitivity of our detection method, and we demonstrated its usefulness by deconvolving images of the hearing organ acquired in vitro and in vivo.

  6. Sigma-point Kalman filtering for battery management systems of LiPB-based HEV battery packs. Part 2: Simultaneous state and parameter estimation

    NASA Astrophysics Data System (ADS)

    Plett, Gregory L.

    We have previously described algorithms for a battery management system (BMS) that uses Kalman filtering (KF) techniques to estimate such quantities as: cell self-discharge rate, state-of-charge, nominal capacity, resistance, and others. Since the dynamics of electrochemical cells are not linear, we used a nonlinear extension to the original KF called the extended Kalman filter (EKF). Now, we introduce an alternative nonlinear Kalman filtering technique known as "sigma-point Kalman filtering" (SPKF), which has some theoretical advantages that manifest themselves in more accurate predictions. The computational complexity of SPKF is of the same order as EKF, so the gains are made at little or no additional cost. This paper is the second in a two-part series. The first paper explored the theoretical background to the Kalman filter, the extended Kalman filter, and the sigma-point Kalman filter. It explained why the SPKF is often superior to the EKF and applied SPKF to estimate the state of a third-generation prototype lithium-ion polymer battery (LiPB) cell in dynamic conditions, including the state-of-charge of the cell. In this paper, we first investigate the use of the SPKF method to estimate battery parameters. A numerically efficient "square-root sigma-point Kalman filter" (SR-SPKF) is introduced for this purpose. Additionally, we discuss two SPKF-based methods for simultaneous estimation of both the quickly time-varying state and slowly time-varying parameters. Results are presented for a battery pack based on a fourth-generation prototype LiPB cell, and some limitations of the current approach, based on the probability density functions of estimation error, are also discussed.
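
    The step that distinguishes SPKF from EKF is the deterministic sigma-point set propagated through the nonlinear cell model. Below is a generic Python sketch of the scaled sigma points and weights, using the standard unscented-transform formulas rather than anything specific to the battery model:

        import numpy as np

        def sigma_points(mean, cov, alpha=1e-3, beta=2.0, kappa=0.0):
            """2n+1 scaled sigma points with mean and covariance weights."""
            n = len(mean)
            lam = alpha ** 2 * (n + kappa) - n
            root = np.linalg.cholesky((n + lam) * cov)
            pts = [mean] + [mean + c for c in root.T] + [mean - c for c in root.T]
            wm = np.full(2 * n + 1, 0.5 / (n + lam))
            wc = wm.copy()
            wm[0] = lam / (n + lam)
            wc[0] = wm[0] + (1.0 - alpha ** 2 + beta)
            return np.array(pts), wm, wc

        pts, wm, wc = sigma_points(np.array([1.0, 0.5]),
                                   np.array([[0.10, 0.02], [0.02, 0.20]]))
        print(wm @ pts)  # recovers the mean; propagate pts through the model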

  7. A systematic approach for the accurate non-invasive estimation of blood glucose utilizing a novel light-tissue interaction adaptive modelling scheme

    NASA Astrophysics Data System (ADS)

    Rybynok, V. O.; Kyriacou, P. A.

    2007-10-01

    Diabetes is one of the biggest health challenges of the 21st century. The obesity epidemic, sedentary lifestyles and an ageing population mean prevalence of the condition is currently doubling every generation. Diabetes is associated with serious chronic ill health, disability and premature mortality. Long-term complications, including heart disease, stroke, blindness, kidney disease and amputations, make the greatest contribution to the costs of diabetes care. Many of these long-term effects could be avoided with earlier, more effective monitoring and treatment. Currently, blood glucose can only be monitored through the use of invasive techniques. To date there is no widely accepted and readily available non-invasive monitoring technique to measure blood glucose, despite many attempts. This paper tackles one of the most difficult non-invasive monitoring problems, that of blood glucose, and proposes a novel approach intended to enable accurate, calibration-free estimation of glucose concentration in blood. The approach is based on spectroscopic techniques and a new adaptive modelling scheme. The theoretical implementation and effectiveness of the adaptive modelling scheme for this application are described, and a detailed mathematical evaluation is employed to show that such a scheme is capable of accurately extracting the concentration of glucose from a complex biological medium.

  8. Properties of alkali-metal atoms and alkaline-earth-metal ions for an accurate estimate of their long-range interactions

    NASA Astrophysics Data System (ADS)

    Kaur, Jasmeet; Nandy, D. K.; Arora, Bindiya; Sahoo, B. K.

    2015-01-01

    Accurate knowledge of interaction potentials between alkali-metal atoms and alkaline-earth ions is very useful in studies of cold atom physics. Here we carry out systematic theoretical studies of the long-range interactions of the Li, Na, K, and Rb alkali-metal atoms with the Ca+, Ba+, Sr+, and Ra+ alkaline-earth ions, which are largely motivated by their importance in a number of applications. These interactions are expressed as a power series in the inverse of the internuclear separation R. Both the dispersion and induction components of these interactions are determined accurately from the algebraic coefficients corresponding to each power combination in the series. Ultimately, these coefficients are expressed in terms of the electric multipole polarizabilities of the above-mentioned systems, which are calculated using the matrix elements obtained from a relativistic coupled-cluster method and core contributions from the random-phase approximation. We also compare our estimated polarizabilities with other available theoretical and experimental results to verify the accuracy of our calculations. In addition, we evaluate the lifetimes of the first two low-lying states of the ions using the above matrix elements. Graphical representations of the dispersion coefficients versus R are given for all the alkaline-earth ions paired with Rb.

  9. GIS based probabilistic analysis for shallow landslide susceptibility using Point Estimate Method

    NASA Astrophysics Data System (ADS)

    Park, Hyuck-Jin; Lee, Jung-Hyun

    2016-04-01

    The mechanical properties of soil materials (such as cohesion and friction angle) used in physically based models for landslide susceptibility analysis have been identified as a major source of uncertainty, caused by complex geological conditions and spatial variability. In addition, limited sampling is another source of uncertainty, since the input parameters are obtained over broad areas. Therefore, in order to properly account for the uncertainty in the mechanical parameters, the parameters were treated as random variables and a probabilistic analysis method was used. In many previous studies, Monte Carlo simulation has been widely used for the probabilistic analysis. However, since the Monte Carlo method requires a large number of repeated calculations and a great deal of computation time to evaluate the probability of failure, it is not easy to apply this approach to an extensive study area. Therefore, this study proposes an alternative probabilistic analysis approach using the Point Estimate Method (PEM), which overcomes this shortcoming of Monte Carlo simulation: PEM requires only the mean and standard deviation of the random variables and can obtain the probability of failure with a simple calculation, as sketched below. The proposed approach was implemented in a GIS-based environment and applied to a study area that experienced a large number of landslides. The spatial database for the input parameters and the landslide inventory map were constructed in a grid-based GIS environment. To evaluate the performance of the model, the results of the landslide susceptibility assessment were compared with the landslide inventories using a ROC graph.
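
    A minimal sketch of the idea for independent, symmetrically distributed inputs: evaluate the performance function at the 2^n combinations of mean +/- standard deviation (equal weights), take the first two moments of the factor of safety, and read a failure probability off an assumed normal distribution. The infinite-slope function and every number below are invented for illustration.

        import numpy as np
        from itertools import product
        from scipy.stats import norm

        def pem_failure_probability(g, means, stds):
            """Rosenblueth-style two-point estimate: moments of g from the
            2^n corner evaluations, then P(FS < 1) under a normal fit."""
            vals = np.array([g(means + np.array(s) * stds)
                             for s in product((-1.0, 1.0), repeat=len(means))])
            return norm.cdf((1.0 - vals.mean()) / vals.std())

        # Hypothetical factor of safety with cohesion (kPa) and friction
        # angle (deg) as the random variables:
        def fs(x):
            c, phi = x
            return (c + 50.0 * np.tan(np.radians(phi))) / 40.0

        print(pem_failure_probability(fs, np.array([15.0, 30.0]),
                                      np.array([5.0, 4.0])))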

  10. Effects of age, weight, and fat slaughter end points on estimates of breed and retained heterosis effects for carcass traits.

    PubMed

    Ríos-Utrera, A; Cundiff, L V; Gregory, K E; Koch, R M; Dikeman, M E; Koohmaraie, M; Van Vleck, L D

    2006-01-01

    The influence of different levels of adjusted fat thickness (AFT) and HCW slaughter end points (covariates) on estimates of breed and retained heterosis effects was studied for 14 carcass traits from serially slaughtered purebred and composite steers from the US Meat Animal Research Center (MARC). Contrasts among breed solutions were estimated at 0.7, 1.1, and 1.5 cm of AFT, and at 295.1, 340.5, and 385.9 kg of HCW. For constant slaughter age, contrasts were adjusted to the overall mean (432.5 d). Breed effects for Red Poll, Hereford, Limousin, Braunvieh, Pinzgauer, Gelbvieh, Simmental, Charolais, MARC I, MARC II, and MARC III were estimated as deviations from Angus. In addition, purebreds were pooled into 3 groups based on lean-to-fat ratio, and then differences were estimated among groups. Retention of combined individual and maternal heterosis was estimated for each composite. Mean retained heterosis for the 3 composites also was estimated. Breed rankings and expression of heterosis varied within and among end points. For example, Charolais had greater (P < 0.05) dressing percentages than Angus at the 2 largest levels of AFT and smaller (P < 0.01) percentages at the 2 largest levels of HCW, whereas the 2 breeds did not differ (P ≥ 0.05) at a constant age. The MARC III composite produced 9.7 kg more (P < 0.01) fat than Angus at AFT of 0.7 cm, but 7.9 kg less (P < 0.05) at AFT of 1.5 cm. For MARC III, the estimate of retained heterosis for HCW was significant (P < 0.05) at the lowest level of AFT, but at the intermediate and greatest levels estimates were nil. The pattern was the same for MARC I and MARC III for LM area. Adjustment for age resulted in near zero estimates of retained heterosis for AFT, and similarly, adjustment for HCW resulted in nil estimates of retained heterosis for LM area. For actual retail product as a percentage of HCW, the estimate of retained heterosis for MARC III was negative (-1.27%; P < 0.05) at 0.7 cm but was significantly

  12. Estimating the divergence point: a novel distributional analysis procedure for determining the onset of the influence of experimental variables.

    PubMed

    Reingold, Eyal M; Sheridan, Heather

    2014-01-01

    The divergence point analysis procedure is aimed at obtaining an estimate of the onset of the influence of an experimental variable on response latencies (e.g., fixation duration, reaction time). The procedure involves generating survival curves for two conditions, and using a bootstrapping technique to estimate the timing of the earliest discernible divergence between curves. In the present paper, several key extensions for this procedure were proposed and evaluated by conducting simulations and by reanalyzing data from previous studies. Our findings indicate that the modified versions of the procedure performed substantially better than the original procedure under conditions of low experimental power. Furthermore, unlike the original procedure, the modified procedures provided divergence point estimates for individual participants and permitted testing the significance of the difference between estimates across conditions. The advantages of the modified procedures are illustrated, the theoretical and methodological implications are discussed, and promising future directions are outlined.
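
    As a rough illustration of the core idea (not the published algorithm), the sketch below bootstraps the earliest time bin at which two survival curves separate by more than a fixed margin; the margin, run length, and synthetic latencies are all assumptions chosen for the demo.

        # Simplified divergence point analysis on synthetic latencies.
        import numpy as np

        rng = np.random.default_rng(0)
        cond_fast = rng.gamma(shape=8, scale=25, size=2000)        # baseline (ms)
        cond_slow = cond_fast + np.where(cond_fast > 180, 40, 0)   # slowed after ~180 ms
        t_grid = np.arange(0, 800)                                 # 1-ms bins

        def survival(samples, grid):
            """Proportion of latencies exceeding each time bin."""
            return (samples[None, :] > grid[:, None]).mean(axis=1)

        def divergence_point(fast, slow, grid, delta=0.04, run=15):
            """Earliest bin where the survival difference exceeds delta for `run` bins."""
            diff = survival(slow, grid) - survival(fast, grid)
            hits = np.convolve((diff > delta).astype(int), np.ones(run, int), mode="valid")
            idx = np.flatnonzero(hits == run)
            return grid[idx[0]] if idx.size else np.nan

        boots = [divergence_point(rng.choice(cond_fast, cond_fast.size),
                                  rng.choice(cond_slow, cond_slow.size), t_grid)
                 for _ in range(500)]
        print(f"divergence point estimate: {np.nanmedian(boots):.0f} ms")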

  13. Group vector space method for estimating enthalpy of vaporization of organic compounds at the normal boiling point.

    PubMed

    Wenying, Wei; Jinyu, Han; Wen, Xu

    2004-01-01

    The specific position of a group in the molecule has been considered, and a group vector space method for estimating the enthalpy of vaporization at the normal boiling point of organic compounds has been developed. An expression for the enthalpy of vaporization Delta(vap)H(T(b)) has been established and numerical values of the relative group parameters obtained. The average percent deviation of the estimation of Delta(vap)H(T(b)) is 1.16, which shows that the present method offers a significant improvement in predicting the enthalpy of vaporization at the normal boiling point compared with conventional group methods.

  14. Improved nonparametric estimation of the optimal diagnostic cut-off point associated with the Youden index under different sampling schemes.

    PubMed

    Yin, Jingjing; Samawi, Hani; Linder, Daniel

    2016-07-01

    A diagnostic cut-off point of a biomarker measurement is needed for classifying a random subject to be either diseased or healthy. However, the cut-off point is usually unknown and needs to be estimated by some optimization criteria. One important criterion is the Youden index, which has been widely adopted in practice. The Youden index, which is defined as the maximum of (sensitivity + specificity -1), directly measures the largest total diagnostic accuracy a biomarker can achieve. Therefore, it is desirable to estimate the optimal cut-off point associated with the Youden index. Sometimes, taking the actual measurements of a biomarker is very difficult and expensive, while ranking them without the actual measurement can be relatively easy. In such cases, ranked set sampling can give more precise estimation than simple random sampling, as ranked set samples are more likely to span the full range of the population. In this study, kernel density estimation is utilized to numerically solve for an estimate of the optimal cut-off point. The asymptotic distributions of the kernel estimators based on two sampling schemes are derived analytically and we prove that the estimators based on ranked set sampling are relatively more efficient than that of simple random sampling and both estimators are asymptotically unbiased. Furthermore, the asymptotic confidence intervals are derived. Intensive simulations are carried out to compare the proposed method using ranked set sampling with simple random sampling, with the proposed method outperforming simple random sampling in all cases. A real data set is analyzed for illustrating the proposed method.
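
    For the simple-random-sampling case, the estimator can be sketched in a few lines: build kernel density estimates for the two groups and pick the cut-off maximizing J(c) = sensitivity(c) + specificity(c) - 1. The synthetic biomarker distributions below are assumptions for illustration.

        # Youden-optimal cut-off from Gaussian kernel density estimates.
        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(1)
        healthy = rng.normal(0.0, 1.0, 300)
        diseased = rng.normal(1.5, 1.2, 300)

        kde_h, kde_d = gaussian_kde(healthy), gaussian_kde(diseased)
        grid = np.linspace(min(healthy.min(), diseased.min()),
                           max(healthy.max(), diseased.max()), 1000)

        # Smooth CDFs from the kernel densities at each candidate cut-off
        spec = np.array([kde_h.integrate_box_1d(-np.inf, c) for c in grid])  # P(healthy <= c)
        sens = 1.0 - np.array([kde_d.integrate_box_1d(-np.inf, c) for c in grid])

        youden = sens + spec - 1.0
        print(f"cut-off = {grid[np.argmax(youden)]:.3f}, J = {youden.max():.3f}")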

  16. On the Bayesness, minimaxity and admissibility of point estimators of allelic frequencies.

    PubMed

    Martínez, Carlos Alberto; Khare, Kshitij; Elzo, Mauricio A

    2015-10-21

    In this paper, decision theory was used to derive Bayes and minimax decision rules to estimate allelic frequencies and to explore their admissibility. Decision rules with uniformly smallest risk usually do not exist and one approach to solve this problem is to use the Bayes principle and the minimax principle to find decision rules satisfying some general optimality criterion based on their risk functions. Two cases were considered, the simpler case of biallelic loci and the more complex case of multiallelic loci. For each locus, the sampling model was a multinomial distribution and the prior was a Beta (biallelic case) or a Dirichlet (multiallelic case) distribution. Three loss functions were considered: squared error loss (SEL), Kullback-Leibler loss (KLL) and quadratic error loss (QEL). Bayes estimators were derived under these three loss functions and were subsequently used to find minimax estimators using results from decision theory. The Bayes estimators obtained from SEL and KLL turned out to be the same. Under certain conditions, the Bayes estimator derived from QEL led to an admissible minimax estimator (which was also equal to the maximum likelihood estimator). The SEL also allowed finding admissible minimax estimators. Some estimators had uniformly smaller variance than the MLE and under suitable conditions the remaining estimators also satisfied this property. In addition to their statistical properties, the estimators derived here allow variation in allelic frequencies, which is closer to the reality of finite populations exposed to evolutionary forces. PMID:26271891
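
    For the biallelic case the Bayes rule under squared error loss has a closed form worth spelling out: with x copies of one allele among n sampled gene copies and a Beta(alpha, beta) prior, the estimator is the posterior mean. The counts below are hypothetical; the choice alpha = beta = sqrt(n)/2 recovers the classical minimax rule under squared error loss.

        # Posterior-mean (Bayes, SEL) estimator of an allelic frequency.
        import math

        def bayes_allele_freq(x, n, alpha, beta):
            """Posterior mean under a Beta(alpha, beta) prior: (x + alpha) / (n + alpha + beta)."""
            return (x + alpha) / (n + alpha + beta)

        x, n = 37, 100   # hypothetical allele counts
        print(f"MLE             : {x / n:.4f}")
        print(f"Bayes, Beta(1,1): {bayes_allele_freq(x, n, 1.0, 1.0):.4f}")
        print(f"Minimax (SEL)   : {bayes_allele_freq(x, n, math.sqrt(n) / 2, math.sqrt(n) / 2):.4f}")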

  17. Universality: Accurate Checks in Dyson's Hierarchical Model

    NASA Astrophysics Data System (ADS)

    Godina, J. J.; Meurice, Y.; Oktay, M. B.

    2003-06-01

    In this talk we present high-accuracy calculations of the susceptibility near βc for Dyson's hierarchical model in D = 3. Using linear fitting, we estimate the leading (γ) and subleading (Δ) exponents. Independent estimates are obtained by calculating the first two eigenvalues of the linearized renormalization group transformation. We found γ = 1.29914073 ± 10^-8 and Δ = 0.4259469 ± 10^-7, independently of the choice of local integration measure (Ising or Landau-Ginzburg). After a suitable rescaling, the approximate fixed points for a large class of local measures coincide accurately with a fixed point constructed by Koch and Wittwer.

  18. Accurate approach for determining fresh-water carbonate (H2CO3(*)) alkalinity, using a single H3PO4 titration point.

    PubMed

    Birnhack, Liat; Sabach, Sara; Lahav, Ori

    2012-10-15

    A new, simple and accurate method is introduced for determining H(2)CO(3)(*) alkalinity in fresh waters dominated by the carbonate weak-acid system. The method relies on a single H(3)PO(4) dosage and two pH readings (acidic pH value target: pH~4.0). The computation algorithm is based on the concept that the overall alkalinity mass of a solution does not change upon the addition of a non-proton-accepting species. The accuracy of the new method was assessed batch-wise with both synthetic and actual tap waters and the results were compared to those obtained from two widely used alkalinity analysis methods (titration to pH~4.5 and the Gran titration method). The experimental results, which were deliberately obtained with simple laboratory equipment (glass buret, general-purpose pH electrode, magnetic stirrer) proved the method to be as accurate as the conventional methods at a wide range of alkalinity values (20-400 mg L(-1) as CaCO(3)). Analysis of the relative error attained in the proposed method as a function of the target (acidic) pH showed that at the range 4.0

  19. Accurate 3D point cloud comparison and volumetric change analysis of Terrestrial Laser Scan data in a hard rock coastal cliff environment

    NASA Astrophysics Data System (ADS)

    Earlie, C. S.; Masselink, G.; Russell, P.; Shail, R.; Kingston, K.

    2013-12-01

    Our understanding of the evolution of hard rock coastlines is limited due to the episodic nature and 'slow' rate at which changes occur. High-resolution surveying techniques, such as Terrestrial Laser Scanning (TLS), have just begun to be adopted as a method of obtaining detailed point cloud data to monitor topographical changes over short periods of time (weeks to months). However, the difficulties involved in comparing consecutive point cloud data sets in a complex three-dimensional plane, such as occlusion due to surface roughness and positioning of data capture point as a result of a consistently changing environment (a beach profile), mean that comparing data sets can lead to errors in the region of 10 - 20 cm. Meshing techniques are often used for point cloud data analysis for simple surfaces, but in surfaces such as rocky cliff faces, this technique has been found to be ineffective. Recession rates of hard rock coastlines in the UK are typically determined using aerial photography or airborne LiDAR data, yet the detail of the important changes occurring to the cliff face and toe are missed using such techniques. In this study we apply an algorithm (M3C2 - Multiscale Model to Model Cloud Comparison), initially developed for analysing fluvial morphological change, that directly compares point to point cloud data using surface normals that are consistent with surface roughness and measure the change that occurs along the normal direction (Lague et al., 2013). The surface changes are analysed using a set of user defined scales based on surface roughness and registration error. Once the correct parameters are defined, the volumetric cliff face changes are calculated by integrating the mean distance between the point clouds. The analysis has been undertaken at two hard rock sites identified for their active erosion located on the UK's south west peninsula at Porthleven in south west Cornwall and Godrevy in north Cornwall. Alongside TLS point cloud data, in

  20. Bayesian Estimation of Fugitive Methane Point Source Emission Rates from a Single Downwind High-Frequency Gas Sensor

    EPA Science Inventory

    Bayesian Estimation of Fugitive Methane Point Source Emission Rates from a Single Downwind High-Frequency Gas Sensor With the tremendous advances in onshore oil and gas exploration and production (E&P) capability comes the realization that new tools are needed to support env...

  1. Incorporating variability in point estimates in risk assessment: Bridging the gap between LC50 and population endpoints.

    PubMed

    Stark, John D; Vargas, Roger I; Banks, John E

    2015-07-01

    Historically, point estimates such as the median lethal concentration (LC50) have been instrumental in assessing risks associated with toxicants to rare or economically important species. In recent years, growing awareness of the shortcomings of this approach has led to an increased focus on analyses using population endpoints. However, risk assessment of pesticides still relies heavily on large amounts of LC50 data amassed over decades in the laboratory. Despite the fact that these data are generally well replicated, little or no attention has been given to the sometimes high levels of variability associated with the generation of point estimates. This is especially important in agroecosystems where arthropod predator-prey interactions are often disrupted by the use of pesticides. Using laboratory-derived data of 4 economically important species (2 fruit fly pest species and 2 braconid parasitoid species) and matrix-based population models, the authors demonstrate in the present study a method for bridging traditional point estimate risk assessments with population outcomes. The results illustrate that even closely related species can show strikingly divergent responses to the same exposures to pesticides. Furthermore, the authors show that using different values within the 95% confidence intervals of LC50 values can result in very different population outcomes, ranging from quick recovery to extinction for both pest and parasitoid species. The authors discuss the implications of these results and emphasize the need to incorporate variability and uncertainty in point estimates for use in risk assessment.
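
    The bridging idea can be sketched numerically: evaluate a dose-response at the lower, central, and upper LC50 values, scale a vital rate in a small projection matrix accordingly, and compare the resulting population growth rates. The matrix, dose-response slope, and LC50 interval below are hypothetical, not the study's data.

        # Propagating LC50 uncertainty into a matrix population model.
        import numpy as np

        def growth_rate(L):
            """Dominant eigenvalue (asymptotic growth rate) of a projection matrix."""
            return max(abs(np.linalg.eigvals(L)))

        def survival_at_dose(dose, lc50, slope=2.0):
            """Logistic dose-response: fraction surviving exposure."""
            return 1.0 / (1.0 + (dose / lc50) ** slope)

        base = np.array([[0.0, 5.0, 10.0],    # stage fecundities
                         [0.5, 0.0, 0.0],     # juvenile survival
                         [0.0, 0.7, 0.0]])    # subadult survival

        dose = 1.0
        for label, lc50 in [("lower 95% CI", 0.6), ("point estimate", 1.0), ("upper 95% CI", 1.6)]:
            L = base.copy()
            L[1, 0] *= survival_at_dose(dose, lc50)   # exposure reduces juvenile survival
            print(f"{label}: lambda = {growth_rate(L):.3f}")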

  2. Incorporating variability in point estimates in risk assessment: bridging the gap between LC50 and population endpoints

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Historically, the use of point estimates such as the LC50 has been instrumental in assessing the risk associated with toxicants to rare or economically important species. In recent years, growing awareness of the shortcomings of this approach has led to an increased focus on analyses using populatio...

  3. Weakly Informative Prior for Point Estimation of Covariance Matrices in Hierarchical Models

    ERIC Educational Resources Information Center

    Chung, Yeojin; Gelman, Andrew; Rabe-Hesketh, Sophia; Liu, Jingchen; Dorie, Vincent

    2015-01-01

    When fitting hierarchical regression models, maximum likelihood (ML) estimation has computational (and, for some users, philosophical) advantages compared to full Bayesian inference, but when the number of groups is small, estimates of the covariance matrix (S) of group-level varying coefficients are often degenerate. One can do better, even from…

  4. Automatic NMO Correction and Full Common Depth Point NMO Velocity Field Estimation in Anisotropic Media

    NASA Astrophysics Data System (ADS)

    Sedek, Mohamed; Gross, Lutz; Tyson, Stephen

    2016-07-01

    We present a new computational method of automatic normal moveout (NMO) correction that not only accurately flattens and corrects the far offset data, but simultaneously provides NMO velocity (v_nmo) for each individual seismic trace. The method is based on a predefined number of NMO velocity sweeps using linear vertical interpolation of different NMO velocities at each seismic trace. At each sweep, we measure the semblance between the zero offset trace (pilot trace) and the next seismic trace using a trace-by-trace rather than sample-by-sample based semblance measure; then after all the sweeps are done, the one with the maximum semblance value is chosen, which is assumed to be the most suitable NMO velocity trace that accurately flattens seismic reflection events. Other traces follow the same process, and a final velocity field is then extracted. Isotropic, anisotropic and laterally heterogeneous synthetic geological models were built to test the method. A range of synthetic background noise, ranging from 10 to 30 %, was applied to the models. In addition, the method was tested on Hess's VTI (vertical transverse isotropy) model. Furthermore, we tested our method on a real pre-stack seismic CDP gather from a gas field in Alaska. The results from the presented examples show an excellent NMO correction and extracted a reasonably accurate NMO velocity field.
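
    Stripped to one trace and one reflector, the sweep idea looks like this: NMO-correct a far-offset trace with each candidate velocity and keep the velocity whose corrected trace best matches the zero-offset pilot trace. Here the trace-by-trace semblance is reduced to a normalized cross-correlation, and the geometry is a single synthetic hyperbolic event; both are simplifying assumptions.

        # Velocity sweep for automatic NMO correction (single-trace sketch).
        import numpy as np

        dt, nt = 0.004, 500
        t = np.arange(nt) * dt
        wavelet = lambda ts: np.exp(-((t - ts) / 0.02) ** 2)    # Gaussian arrival at ts

        v_true, t0, offset = 2500.0, 0.8, 1200.0
        pilot = wavelet(t0)                                     # zero-offset trace
        far = wavelet(np.sqrt(t0**2 + (offset / v_true) ** 2))  # hyperbolic moveout

        def nmo_correct(trace, v):
            """Sample the trace at the moveout times sqrt(t^2 + x^2 / v^2)."""
            return np.interp(np.sqrt(t**2 + (offset / v) ** 2), t, trace, left=0.0, right=0.0)

        def semblance(a, b):
            return float(np.dot(a, b)) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

        best = max((semblance(nmo_correct(far, v), pilot), v)
                   for v in np.arange(1500.0, 4000.0, 25.0))
        print(f"picked v_nmo = {best[1]:.0f} m/s (true {v_true:.0f} m/s)")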

  5. Impact of Footprint Diameter and Off-Nadir Pointing on the Precision of Canopy Height Estimates from Spaceborne Lidar

    NASA Technical Reports Server (NTRS)

    Pang, Yong; Lefskky, Michael; Sun, Guoqing; Ranson, Jon

    2011-01-01

    A spaceborne lidar mission could serve multiple scientific purposes including remote sensing of ecosystem structure, carbon storage, terrestrial topography and ice sheet monitoring. The measurement requirements of these different goals will require compromises in sensor design. Footprint diameters that would be larger than optimal for vegetation studies have been proposed. Some spaceborne lidar mission designs include the possibility that a lidar sensor would share a platform with another sensor, which might require off-nadir pointing at angles of up to 16°. To resolve multiple mission goals and sensor requirements, detailed knowledge of the sensitivity of sensor performance to these aspects of mission design is required. This research used a radiative transfer model to investigate the sensitivity of forest height estimates to footprint diameter, off-nadir pointing and their interaction over a range of forest canopy properties. An individual-based forest model was used to simulate stands of mixed conifer forest in the Tahoe National Forest (Northern California, USA) and stands of deciduous forests in the Bartlett Experimental Forest (New Hampshire, USA). Waveforms were simulated for stands generated by a forest succession model using footprint diameters of 20 m to 70 m. Off-nadir angles of 0° to 16° were considered for a 25 m footprint diameter. Footprint diameters in the range of 25 m to 30 m were optimal for estimates of maximum forest height (R^2 of 0.95 and RMSE of 3 m). As expected, the contribution of vegetation height to the vertical extent of the waveform decreased with larger footprints, while the contribution of terrain slope increased. Precision of estimates decreased with an increasing off-nadir pointing angle, but off-nadir pointing had less impact on height estimates in deciduous forests than in coniferous forests. When pointing off-nadir, the decrease in precision was dependent on local incidence angle (the angle between the off

  6. Sigma-point Kalman filtering for battery management systems of LiPB-based HEV battery packs. Part 1: Introduction and state estimation

    NASA Astrophysics Data System (ADS)

    Plett, Gregory L.

    We have previously described algorithms for a battery management system (BMS) that uses Kalman filtering (KF) techniques to estimate such quantities as: cell self-discharge rate, state-of-charge (SOC), nominal capacity, resistance, and others. Since the dynamics of electrochemical cells are not linear, we used a non-linear extension to the original KF called the extended Kalman filter (EKF). We were able to achieve very good estimates of SOC and other states and parameters using EKF. However, some applications, e.g., the battery management system (BMS) of a hybrid electric vehicle (HEV), can require even more accurate estimates than these. To see how to improve on EKF, we must examine the mathematical foundation of that algorithm in more detail than we presented in the prior work to discover the assumptions that are made in its derivation. Since these suppositions are not met exactly in the BMS application, we explore an alternative non-linear Kalman filtering technique known as "sigma-point Kalman filtering" (SPKF), which has some theoretical advantages that manifest themselves in more accurate predictions. The computational complexity of SPKF is of the same order as EKF, so the gains are made at little or no additional cost. The SPKF method as applied to BMS algorithms is presented here in a series of two papers. This first paper is devoted primarily to deriving the EKF and SPKF algorithms using the framework of sequential probabilistic inference. This is done to show that the two algorithms, which at first may look quite different, are actually very similar in most respects; also, we discover why we might expect the SPKF to outperform EKF in non-linear estimation applications. Results are presented for a battery pack based on a third-generation prototype LiPB cell, and compared with prior results using EKF. As expected, SPKF outperforms EKF, both in its estimate of SOC and in its estimate of the error bounds thereof. The second paper presents some more
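
    The heart of SPKF is the unscented transform: push a small, deterministically chosen set of sigma points through the nonlinearity and recover the transformed mean and covariance from weighted sums. The scalar sketch below uses the standard weight formulas; the tanh "cell voltage" curve is a made-up stand-in, not a battery model from the papers.

        # Unscented transform of a scalar Gaussian through a nonlinearity.
        import numpy as np

        def unscented_transform(mean, var, f, alpha=1.0, beta=2.0, kappa=0.0):
            n = 1                                    # scalar state
            lam = alpha**2 * (n + kappa) - n
            s = np.sqrt((n + lam) * var)
            pts = np.array([mean, mean + s, mean - s])
            wm = np.array([lam / (n + lam), 0.5 / (n + lam), 0.5 / (n + lam)])
            wc = wm.copy()
            wc[0] += 1.0 - alpha**2 + beta           # covariance weight correction
            y = f(pts)
            y_mean = wm @ y
            return y_mean, wc @ (y - y_mean) ** 2

        f = lambda soc: 3.2 + 0.9 * np.tanh(4.0 * (soc - 0.5))   # hypothetical OCV curve
        m, v = unscented_transform(0.6, 0.01, f)
        print(f"mean {m:.4f}, variance {v:.6f}")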

  7. Estimation of point source fugitive emission rates from a single sensor time series: A conditionally-sampled Gaussian plume reconstruction

    NASA Astrophysics Data System (ADS)

    Foster-Wittig, Tierney A.; Thoma, Eben D.; Albertson, John D.

    2015-08-01

    Emerging mobile fugitive emissions detection and measurement approaches require robust inverse source algorithms to be effective. Two Gaussian plume inverse approaches are described for estimating emission rates from ground-level point sources observed from remote vantage points. The techniques were tested using data from 41 controlled methane release experiments (14 studies) and further investigated using 7 field studies executed downwind of oil and gas well pads in Wyoming. Analyzed measurements were acquired from stationary observation locations 18-106 m downwind of the emission sources. From the fluctuating wind direction, the lateral plume geometry is reconstructed using a derived relationship between the wind direction and crosswind plume position. The crosswind plume spread is determined with both modeled and reconstructed Gaussian plume approaches and estimates of source emission rates are found through inversion. The source emission rates were compared to a simple point source Gaussian emission estimation approach that is part of Draft EPA Method OTM 33A. Compared to the known release rates, the modeled, reconstructed, and point source Gaussian controlled release results yield average percent errors of -5%, -2%, and 6% with standard deviations of 29%, 25%, and 37%, respectively. Compared to each other, the three methods agree within 30% for 78% of all 48 observations (41 CR and 7 Wyoming).
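
    The point source Gaussian estimate in its simplest form inverts the ground-level plume equation for the emission rate Q. The sketch below assumes a ground-level source and receptor, a crude power-law parameterization of the dispersion coefficients, and hypothetical inputs; it is not the OTM 33A procedure itself.

        # Inverting the ground-level Gaussian plume equation for Q.
        import numpy as np

        def sigma_y(x):   # lateral spread (m), illustrative open-country power law
            return 0.22 * x / np.sqrt(1.0 + 0.0001 * x)

        def sigma_z(x):   # vertical spread (m), illustrative power law
            return 0.20 * x

        def emission_rate(conc, x, y, u):
            """Solve C = Q / (pi u sy sz) * exp(-y^2 / (2 sy^2)) for Q."""
            sy, sz = sigma_y(x), sigma_z(x)
            return conc * np.pi * u * sy * sz * np.exp(y**2 / (2.0 * sy**2))

        # Hypothetical observation: 60 m downwind, 5 m off the centerline, 3 m/s wind
        Q = emission_rate(conc=2.0e-4, x=60.0, y=5.0, u=3.0)   # conc in g/m^3
        print(f"estimated emission rate: {Q:.2f} g/s")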

  8. A hierarchical model combining distance sampling and time removal to estimate detection probability during avian point counts

    USGS Publications Warehouse

    Amundson, Courtney L.; Royle, J. Andrew; Handel, Colleen M.

    2014-01-01

    Imperfect detection during animal surveys biases estimates of abundance and can lead to improper conclusions regarding distribution and population trends. Farnsworth et al. (2005) developed a combined distance-sampling and time-removal model for point-transect surveys that addresses both availability (the probability that an animal is available for detection; e.g., that a bird sings) and perceptibility (the probability that an observer detects an animal, given that it is available for detection). We developed a hierarchical extension of the combined model that provides an integrated analysis framework for a collection of survey points at which both distance from the observer and time of initial detection are recorded. Implemented in a Bayesian framework, this extension facilitates evaluating covariates on abundance and detection probability, incorporating excess zero counts (i.e. zero-inflation), accounting for spatial autocorrelation, and estimating population density. Species-specific characteristics, such as behavioral displays and territorial dispersion, may lead to different patterns of availability and perceptibility, which may, in turn, influence the performance of such hierarchical models. Therefore, we first test our proposed model using simulated data under different scenarios of availability and perceptibility. We then illustrate its performance with empirical point-transect data for a songbird that consistently produces loud, frequent, primarily auditory signals, the Golden-crowned Sparrow (Zonotrichia atricapilla); and for 2 ptarmigan species (Lagopus spp.) that produce more intermittent, subtle, and primarily visual cues. Data were collected by multiple observers along point transects across a broad landscape in southwest Alaska, so we evaluated point-level covariates on perceptibility (observer and habitat), availability (date within season and time of day), and abundance (habitat, elevation, and slope), and included a nested point
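
    The two detection components the combined model separates can be written down compactly: availability from a time-removal process over J intervals and perceptibility as the plot-averaged half-normal distance function. The sketch below uses illustrative parameter values, not estimates from the study.

        # Availability x perceptibility under a combined removal/distance model.
        import numpy as np

        def availability(phi, n_intervals):
            """P(at least one cue during the count) with per-interval cue prob phi."""
            return 1.0 - (1.0 - phi) ** n_intervals

        def perceptibility(sigma, radius, n=10_000):
            """Average half-normal detection prob over a circular plot of given radius."""
            r = np.linspace(0.0, radius, n)
            g = np.exp(-r**2 / (2.0 * sigma**2))     # detection prob at distance r
            return np.sum(2.0 * r * g) * (r[1] - r[0]) / radius**2

        p = availability(phi=0.4, n_intervals=3) * perceptibility(sigma=70.0, radius=150.0)
        count = 12                                   # birds detected at one point
        print(f"overall detection prob {p:.3f}; corrected abundance ~ {count / p:.1f}")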

  9. One-norm geometric quantum discord and critical point estimation in the XY spin chain

    NASA Astrophysics Data System (ADS)

    Cheng, Chang-Cheng; Wang, Yao; Guo, Jin-Liang

    2016-11-01

    In contrast with entanglement and quantum discord (QD), we investigate the thermal quantum correlation in terms of Schatten one-norm geometric quantum discord (GQD) in the XY spin chain, and analyze their capabilities in detecting the critical point of quantum phase transition. We show that the one-norm GQD can reveal more properties about quantum correlation between two spins, especially for the long-range quantum correlation at finite temperature. Under the influences of site distance, anisotropy and temperature, one-norm GQD and its first derivative make it possible to detect the critical point efficiently for a general XY spin chain.

  10. Estimating abundance from repeated presence-absence data or point counts

    USGS Publications Warehouse

    Royle, J. Andrew; Nichols, J.D.

    2003-01-01

    We describe an approach for estimating occupancy rate or the proportion of area occupied when heterogeneity in detection probability exists as a result of variation in abundance of the organism under study. The key feature of such problems, which we exploit, is that variation in abundance induces variation in detection probability. Thus, heterogeneity in abundance can be modeled as heterogeneity in detection probability. Moreover, this linkage between heterogeneity in abundance and heterogeneity in detection probability allows one to exploit a heterogeneous detection probability model to estimate the underlying distribution of abundances. Therefore, our method allows estimation of abundance from repeated observations of the presence or absence of animals without having to uniquely mark individuals in the population.
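
    A compact version of the underlying model: site abundance N ~ Poisson(lambda) induces site detection probability p(N) = 1 - (1 - r)^N, so the marginal likelihood of repeated detection/non-detection data identifies both lambda and r. The simulation and starting values below are assumptions for the demo.

        # Maximum likelihood for the abundance-induced heterogeneity model.
        import numpy as np
        from scipy.stats import poisson, binom
        from scipy.optimize import minimize

        rng = np.random.default_rng(2)
        n_sites, n_visits, lam_true, r_true = 200, 5, 1.2, 0.3
        N = rng.poisson(lam_true, n_sites)
        det = rng.binomial(n_visits, 1.0 - (1.0 - r_true) ** N)   # detections per site

        def negloglik(params, n_max=50):
            lam = np.exp(params[0])                  # log scale keeps lambda positive
            r = 1.0 / (1.0 + np.exp(-params[1]))     # logistic keeps r in (0, 1)
            k = np.arange(n_max + 1)
            p_k = 1.0 - (1.0 - r) ** k               # detection prob given N = k
            lik = binom.pmf(det[:, None], n_visits, p_k[None, :]) @ poisson.pmf(k, lam)
            return -np.sum(np.log(lik + 1e-300))

        fit = minimize(negloglik, x0=[0.0, 0.0], method="Nelder-Mead")
        print(f"lambda_hat = {np.exp(fit.x[0]):.2f}, r_hat = {1 / (1 + np.exp(-fit.x[1])):.2f}")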

  11. A method for estimating spikelet number per panicle: Integrating image analysis and a 5-point calibration model

    PubMed Central

    Zhao, Sanqin; Gu, Jiabing; Zhao, Youyong; Hassan, Muhammad; Li, Yinian; Ding, Weimin

    2015-01-01

    Spikelet number per panicle (SNPP) is one of the most important yield components used to estimate rice yields. The use of high-throughput quantitative image analysis methods for understanding the diversity of the panicle has increased rapidly. However, it is difficult to simultaneously extract panicle branch and spikelet/grain information from images at the same resolution due to the different scales of these traits. To use a lower resolution and meet the accuracy requirement, we proposed an interdisciplinary method that integrated image analysis and a 5-point calibration model to rapidly estimate SNPP. First, a linear relationship model between the total length of the primary branch (TLPB) and the SNPP was established based on the physiological characteristics of the panicle. Second, the TLPB and area (the primary branch region) traits were rapidly extracted by developing image analysis algorithm. Finally, a 5-point calibration method was adopted to improve the universality of the model. The number of panicle samples that the error of the SNPP estimates was less than 10% was greater than 90% by the proposed method. The estimation accuracy was consistent with the accuracy determined using manual measurements. The proposed method uses available concepts and techniques for automated estimations of rice yield information. PMID:26542412

  12. Pain point system scale (PPSS): a method for postoperative pain estimation in retrospective studies

    PubMed Central

    Gkotsi, Anastasia; Petsas, Dimosthenis; Sakalis, Vasilios; Fotas, Asterios; Triantafyllidis, Argyrios; Vouros, Ioannis; Saridakis, Evangelos; Salpiggidis, Georgios; Papathanasiou, Athanasios

    2012-01-01

    Purpose Pain rating scales are widely used for pain assessment. Nevertheless, a new tool is required for pain assessment needs in retrospective studies. Methods The postoperative pain episodes, during the first postoperative day, of three patient groups were analyzed. Each pain episode was assessed by a visual analog scale, numerical rating scale, verbal rating scale, and a new tool – pain point system scale (PPSS) – based on the analgesics administered. The type of analgesic was defined based on the authors’ clinic protocol, patient comorbidities, pain assessment tool scores, and preadministered medications by an artificial neural network system. At each pain episode, each patient was asked to fill in the three pain scales. Bartlett’s test and Kaiser–Meyer–Olkin criterion were used to evaluate sample sufficiency. The proper scoring system was defined by varimax rotation. Spearman’s and Pearson’s coefficients assessed PPSS correlation to the known pain scales. Results A total of 262 pain episodes were evaluated in 124 patients. The PPSS scored one point for each dose of paracetamol, three points for each nonsteroidal antiinflammatory drug or codeine, and seven points for each dose of opioids. The correlation between the visual analog scale and PPSS was found to be strong and linear (rho: 0.715; P < 0.001 and Pearson: 0.631; P < 0.001). Conclusion PPSS correlated well with the known pain scale and could be used safely in the evaluation of postoperative pain in retrospective studies. PMID:23152699
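
    The scoring rule itself reduces to a lookup: one point per paracetamol dose, three per NSAID or codeine dose, and seven per opioid dose. A minimal helper, with a hypothetical patient record:

        # PPSS: points assigned per analgesic dose, summed over the first day.
        PPSS_POINTS = {"paracetamol": 1, "nsaid": 3, "codeine": 3, "opioid": 7}

        def ppss(doses):
            """Sum PPSS points over the analgesic doses administered."""
            return sum(PPSS_POINTS[drug] * n for drug, n in doses.items())

        day1 = {"paracetamol": 2, "nsaid": 1, "opioid": 1}   # hypothetical record
        print(f"PPSS = {ppss(day1)}")                        # 2*1 + 1*3 + 1*7 = 12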

  13. Using a focal-plane array to estimate antenna pointing errors

    NASA Technical Reports Server (NTRS)

    Zohar, S.; Vilnrotter, V. A.

    1991-01-01

    The use of extra collecting horns in the focal plane of an antenna as a means of determining the Direction of Arrival (DOA) of the signal impinging on it, provided it is within the antenna beam, is considered. Our analysis yields a relatively simple algorithm to extract the DOA from the horns' outputs. An algorithm which, in effect, measures the thermal noise of the horns' signals and determines its effect on the uncertainty of the extracted DOA parameters is developed. Both algorithms were implemented in software and tested on simulated data. Based on these tests, it is concluded that this is a viable approach to DOA determination. Though the results obtained are of general applicability, the particular motivation for the present work is their application to the pointing of a mechanically deformed antenna. It is anticipated that the pointing algorithm developed for a deformed antenna could be obtained as a small perturbation of the algorithm developed for an undeformed antenna. In this context, it should be pointed out that, with a deformed antenna, the array of horns and its associated circuitry constitute the main part of the deformation-compensation system. In this case, the pointing system proposed may be viewed as an additional task carried out by the deformation-compensation hardware.

  14. Screening-level estimates of mass discharge uncertainty from point measurement methods

    EPA Science Inventory

    The uncertainty of mass discharge measurements associated with point-scale measurement techniques was investigated by deriving analytical solutions for the mass discharge coefficient of variation for two simplified, conceptual models. In the first case, a depth-averaged domain w...

  15. Estimating a Meaningful Point of Change: A Comparison of Exploratory Techniques Based on Nonparametric Regression

    ERIC Educational Resources Information Center

    Klotsche, Jens; Gloster, Andrew T.

    2012-01-01

    Longitudinal studies are increasingly common in psychological research. Characterized by repeated measurements, longitudinal designs aim to observe phenomena that change over time. One important question involves identification of the exact point in time when the observed phenomena begin to meaningfully change above and beyond baseline…

  16. Point Estimates and Confidence Intervals for Variable Importance in Multiple Linear Regression

    ERIC Educational Resources Information Center

    Thomas, D. Roland; Zhu, PengCheng; Decady, Yves J.

    2007-01-01

    The topic of variable importance in linear regression is reviewed, and a measure first justified theoretically by Pratt (1987) is examined in detail. Asymptotic variance estimates are used to construct individual and simultaneous confidence intervals for these importance measures. A simulation study of their coverage properties is reported, and an…

  17. ESTIMATING THE EXPOSURE POINT CONCENTRATION TERM USING PROUCL, VERSION 3.0

    EPA Science Inventory

    In Superfund and RCRA projects of the U.S. EPA, cleanup, exposure, and risk assessment decisions are often made based upon the mean concentrations of the contaminants of potential concern (COPC). A 95% upper confidence limit (UCL) of the population mean is used to estimate the e...

  18. Using a genetic algorithm to estimate the details of earthquake slip distributions from point surface displacements

    NASA Astrophysics Data System (ADS)

    Lindsay, A.; McCloskey, J.; Nic Bhloscaidh, M.

    2016-03-01

    Examining fault activity over several earthquake cycles is necessary for long-term modeling of the fault strain budget and stress state. While this requires knowledge of coseismic slip distributions for successive earthquakes along the fault, these exist only for the most recent events. However, overlying the Sunda Trench, sparsely distributed coral microatolls are sensitive to tectonically induced changes in relative sea levels and provide a century-spanning paleogeodetic and paleoseismic record. Here we present a new technique called the Genetic Algorithm Slip Estimator to constrain slip distributions from observed surface deformations of corals. We identify a suite of models consistent with the observations, and from them we compute an ensemble estimate of the causative slip. We systematically test our technique using synthetic data. Applying the technique to observed coral displacements for the 2005 Nias-Simeulue earthquake and 2007 Mentawai sequence, we reproduce key features of slip present in previously published inversions such as the magnitude and location of slip asperities. From the displacement data available for the 1797 and 1833 Mentawai earthquakes, we present slip estimates reproducing observed displacements. The areas of highest modeled slip in the paleoearthquakes are nonoverlapping, and our solutions appear to tile the plate interface, complementing one another. This observation is supported by the complex rupture pattern of the 2007 Mentawai sequence, underlining the need to examine earthquake occurrence through long-term strain budget and stress modeling. Although developed to estimate earthquake slip, the technique is readily adaptable for a wider range of applications.

  19. A novel asymmetric-loop molecular beacon-based two-phase hybridization assay for accurate and high-throughput detection of multiple drug resistance-conferring point mutations in Mycobacterium tuberculosis.

    PubMed

    Chen, Qinghai; Wu, Nan; Xie, Meng; Zhang, Bo; Chen, Ming; Li, Jianjun; Zhuo, Lisha; Kuang, Hong; Fu, Weiling

    2012-04-01

    The accurate and high-throughput detection of drug resistance-related multiple point mutations remains a challenge. Although the combination of molecular beacons with bio-immobilization technology, such as microarray, is promising, its application is difficult due to the ineffective immobilization of molecular beacons on the chip surface. Here, we propose a novel asymmetric-loop molecular beacon in which the loop consists of 2 parts. One is complementary to a target, while the other is complementary to an oligonucleotide probe immobilized on the chip surface. With this novel probe, a two-phase hybridization assay can be used for simultaneously detecting multiple point mutations. This assay will have advantages, such as easy probe availability, multiplex detection, low background, and high-efficiency hybridization, and may provide a new avenue for the immobilization of molecular beacons and high-throughput detection of point mutations.

  20. Enhancing efficiency and quality of statistical estimation of immunogenicity assay cut points through standardization and automation.

    PubMed

    Su, Cheng; Zhou, Lei; Hu, Zheng; Weng, Winnie; Subramani, Jayanthi; Tadkod, Vineet; Hamilton, Kortney; Bautista, Ami; Wu, Yu; Chirmule, Narendra; Zhong, Zhandong Don

    2015-10-01

    Biotherapeutics can elicit immune responses, which can alter the exposure, safety, and efficacy of the therapeutics. A well-designed and robust bioanalytical method is critical for the detection and characterization of relevant anti-drug antibody (ADA) and the success of an immunogenicity study. As a fundamental criterion in immunogenicity testing, assay cut points need to be statistically established with a risk-based approach to reduce subjectivity. This manuscript describes the development of a validated, web-based, multi-tier customized assay statistical tool (CAST) for assessing cut points of ADA assays. The tool provides an intuitive web interface that allows users to import experimental data generated from a standardized experimental design, select the assay factors, run the standardized analysis algorithms, and generate tables, figures, and listings (TFL). It allows bioanalytical scientists to perform complex statistical analysis at the click of a button to produce reliable assay parameters in support of immunogenicity studies.

  1. Screening-level estimates of mass discharge uncertainty from point measurement methods.

    PubMed

    Brooks, Michael C; Cha, Ki Young; Wood, A Lynn; Annable, Michael D

    2015-01-01

    The uncertainty of mass discharge measurements associated with point-scale measurement techniques was investigated by deriving analytical solutions for the mass discharge coefficient of variation for two simplified, conceptual models. In the first case, a depth-averaged domain was assumed, consisting of one-dimensional groundwater flow perpendicular to a one-dimensional control plane of uniformly spaced sampling points. The contaminant flux along the control plane was assumed to be normally distributed. The second case consisted of one-dimensional groundwater flow perpendicular to a two-dimensional control plane of uniformly spaced sampling points. The contaminant flux in this case was assumed to be distributed according to a bivariate normal distribution. The center point for the flux distributions in both cases was allowed to vary in the domain of the control plane as a uniform random variable. Simplified equations for the uncertainty were investigated to facilitate screening-level evaluations of uncertainty as a function of sampling network design. Results were used to express uncertainty as a function of the length of the control plane and number of wells, or alternatively as a function of the sample spacing. Uncertainty was also expressed as a function of a new dimensionless parameter, Ω, defined as the ratio of the maximum local flux to the product of mass discharge and sample density. Expressing uncertainty as a function of Ω provided a convenient means to demonstrate the relationship between uncertainty, the magnitude of a local hot spot, magnitude of mass discharge, distribution of the contaminant across the control plane, and the sampling density. PMID:25965419

  3. Estimating animal resource selection from telemetry data using point process models

    USGS Publications Warehouse

    Johnson, Devin S.; Hooten, Mevin B.; Kuhn, Carey E.

    2013-01-01

    To demonstrate the analysis of telemetry data with the point process approach, we analysed a data set of telemetry locations from northern fur seals (Callorhinus ursinus) in the Pribilof Islands, Alaska. Both a space–time and an aggregated space-only model were fitted. At the individual level, the space–time analysis showed little selection relative to the habitat covariates. However, at the study area level, the space-only model showed strong selection relative to the covariates.

  4. The estimation of melting points and fusion enthalpies using experimental solubilities, estimated total phase change entropies, and mobile order and disorder theory.

    PubMed

    Chickos, James S; Nichols, Gary; Ruelle, Paul

    2002-01-01

    Melting points and fusion enthalpies are predicted for a series of 81 compounds by combining experimental solubilities in a variety of solvents, analyzed according to the theory of mobile order and disorder (MOD), with the total phase change entropy estimated by a group additivity method. The error associated in predicting melting points is dependent on the magnitude of the temperature predicted. An error of +/- 12 K (+/- 1 sigma) was obtained for compounds melting between ambient temperature and 350 K (24 entries). This error increased to +/- 23 K when the temperature range was expanded to 400 K (46 entries) and +/- 39 K for the temperature range 298-555 K (79 entries). Fusion enthalpies were predicted within +/- 2 sigma of the experimental values (+/- 6.4 kJ mol(-1)) for 79 entries. The uncertainty in the fusion enthalpy did not appear dependent on the magnitude of the melting point. Two outliers, adamantane and camphor, have significant phase transitions that occur below room temperature. Estimates of melting temperature and fusion enthalpy for these compounds were characterized by significantly larger errors.
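
    The identity tying the two predicted quantities together is simple: at the melting point the Gibbs energy of fusion vanishes, so T_fus is approximately the fusion enthalpy divided by the total phase change entropy. A one-function sketch with hypothetical inputs:

        # Melting point from fusion enthalpy and total phase change entropy.
        def melting_point(dH_fus_kJ_per_mol, dS_tpce_J_per_mol_K):
            """T_fus (K) ~ dH_fus / dS, with dS from group additivity."""
            return 1000.0 * dH_fus_kJ_per_mol / dS_tpce_J_per_mol_K

        # Hypothetical compound: dH_fus = 18.5 kJ/mol, estimated dS = 56 J/(mol K)
        print(f"T_fus ~ {melting_point(18.5, 56.0):.0f} K")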

  5. THEORETICAL ESTIMATES OF TWO-POINT SHEAR CORRELATION FUNCTIONS USING TANGLED MAGNETIC FIELDS

    SciTech Connect

    Pandey, Kanhaiya L.; Sethi, Shiv K.

    2012-03-20

    The existence of primordial magnetic fields can induce matter perturbations with additional power at small scales as compared to the usual ΛCDM model. We study its implication within the context of a two-point shear correlation function from gravitational lensing. We show that a primordial magnetic field can leave its imprints on the shear correlation function at angular scales ≲ a few arcminutes. The results are compared with CFHTLS data, which yield some of the strongest known constraints on the parameters (strength and spectral index) of the primordial magnetic field. We also discuss the possibility of detecting sub-nano Gauss fields using future missions such as SNAP.

  6. Gamma-point lattice free energy estimates from O(1) force calculations.

    PubMed

    Voss, Johannes; Vegge, Tejs

    2008-05-14

    We present a new method for estimating the vibrational free energy of crystal (and molecular) structures employing only a single force calculation, for a particularly displaced configuration, in addition to the calculation of the ground state configuration. This displacement vector is the sum of the phonon eigenvectors obtained from a fast (relative to, e.g., density-functional theory, DFT) Hessian calculation using interatomic potentials. These potentials are based here on effective charges obtained from a DFT calculation of the ground state electronic charge density but could also be based on other, e.g., empirical approaches.
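
    Once the Gamma-point frequencies are in hand, the harmonic vibrational free energy they feed into is F_vib = sum_i [hbar w_i / 2 + kB T ln(1 - exp(-hbar w_i / kB T))]. A sketch with hypothetical frequencies:

        # Harmonic vibrational free energy from Gamma-point phonon frequencies.
        import numpy as np

        HBAR = 1.054571817e-34   # J s
        KB = 1.380649e-23        # J/K
        EV = 1.602176634e-19     # J per eV

        def vibrational_free_energy(freqs_thz, T):
            """F_vib (eV) from phonon frequencies (THz) at temperature T (K)."""
            w = 2.0 * np.pi * np.asarray(freqs_thz) * 1e12   # angular frequencies (rad/s)
            f = HBAR * w / 2.0 + KB * T * np.log(1.0 - np.exp(-HBAR * w / (KB * T)))
            return f.sum() / EV

        print(f"F_vib = {vibrational_free_energy([5.0, 12.0, 18.0], 300.0):.4f} eV")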

  7. Trend estimation and change point detection in individual climatic series using flexible regression methods

    NASA Astrophysics Data System (ADS)

    Bates, Bryson C.; Chandler, Richard E.; Bowman, Adrian W.

    2012-08-01

    Over recent years, considerable attention has been given to the problem of detecting trends and change points (discontinuities) in climatic series. This has led to the use of a plethora of detection techniques, ranging from the very simple (e.g., linear regression and t-tests) to the relatively complex (e.g., Markov chain Monte Carlo methods). However, many of these techniques are quite restricted in their range of application and care is needed to avoid misinterpretation of their results. In this paper we highlight the availability of modern regression methods that allow for both smooth trends and abrupt changes, and a discontinuity test that enables discrimination between the two. Our framework can accommodate constant mean levels, linear or smooth trends, and can test for genuine change points in an objective and data-driven way. We demonstrate its capabilities using the winter (December-March) North Atlantic Oscillation, an annual mean relative humidity series and a seasonal (June to October) typhoon count series as case studies. We show that the framework is less restrictive than many alternatives in allowing the data to speak for themselves and can give different and more credible results from those of conventional methods. The research findings from such analyses can be used to appropriately inform the design of subsequent studies of temporal changes in underlying physical mechanisms, and the development of policy responses that are appropriate for smoothly varying rather than abrupt climate change (and vice versa).

  8. A quantum mechanical/neural net model for boiling points with error estimation.

    PubMed

    Chalk, A J; Beck, B; Clark, T

    2001-01-01

    We present QSPR models for normal boiling points employing a neural network approach and descriptors calculated using semiempirical MO theory (AM1 and PM3). These models are based on a data set of 6000 compounds with widely varying functionality and should therefore be applicable to a diverse range of systems. We include cross-validation by simultaneously training 10 different networks, each with different training and test sets. The predicted boiling point is given by the mean of the 10 results, and the individual error of each compound is related to the standard deviation of these predictions. For our best model we find that the standard deviation of the training error is 16.5 K for 6000 compounds and the correlation coefficient (R2) between our prediction and experiment is 0.96. We also examine the effect of different conformations and tautomerism on our calculated results. Large deviations between our predictions and experiment can generally be explained by experimental errors or problems with the semiempirical methods.
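
    The ensemble-with-error-bars scheme is easy to mimic: train several small networks on different random splits and take the mean as the prediction and the spread as a per-compound uncertainty. The sketch below uses synthetic descriptors and scikit-learn's MLPRegressor as stand-ins for the semiempirical descriptors and networks of the study.

        # Ensemble of neural nets: mean prediction plus per-sample spread.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(3)
        X = rng.normal(size=(600, 8))                 # stand-in descriptors
        y = X @ rng.normal(size=8) + rng.normal(0.0, 0.1, 600)

        models = []
        for seed in range(10):                        # 10 nets, 10 random splits
            idx = rng.permutation(len(X))[:500]
            m = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=seed)
            models.append(m.fit(X[idx], y[idx]))

        preds = np.stack([m.predict(X[:5]) for m in models])
        print("ensemble mean   :", preds.mean(axis=0).round(2))
        print("per-sample sigma:", preds.std(axis=0).round(3))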

  9. Roughness Estimation from Point Clouds - A Comparison of Terrestrial Laser Scanning and Image Matching by Unmanned Aerial Vehicle Acquisitions

    NASA Astrophysics Data System (ADS)

    Rutzinger, Martin; Bremer, Magnus; Ragg, Hansjörg

    2013-04-01

    Recently, terrestrial laser scanning (TLS) and matching of images acquired by unmanned aerial vehicles (UAV) are operationally used for 3D geodata acquisition in Geoscience applications. However, the two systems cover different application domains in terms of acquisition conditions and data properties i.e. accuracy and line of sight. In this study we investigate the major differences between the two platforms for terrain roughness estimation. Terrain roughness is an important input for various applications such as morphometry studies, geomorphologic mapping, and natural process modeling (e.g. rockfall, avalanche, and hydraulic modeling). Data has been collected simultaneously by TLS using an Optech ILRIS3D and a rotary UAV using an octocopter from twins.nrn for a 900 m² test site located in a riverbed in Tyrol, Austria (Judenbach, Mieming). The TLS point cloud has been acquired from three scan positions. These have been registered using an iterative closest point algorithm and a target-based referencing approach. For registration geometric targets (spheres) with a diameter of 20 cm were used. These targets were measured with dGPS for absolute georeferencing. The TLS point cloud has an average point density of 19,000 pts/m², which represents a point spacing of about 5 mm. 15 images were acquired by the UAV at a height of 20 m using a calibrated camera with focal length of 18.3 mm. A 3D point cloud containing RGB attributes was derived using APERO/MICMAC software, by a direct georeferencing approach based on the aircraft IMU data. The point cloud is finally co-registered with the TLS data to guarantee an optimal preparation in order to perform the analysis. The UAV point cloud has an average point density of 17,500 pts/m², which represents a point spacing of 7.5 mm. After registration and georeferencing the level of detail of roughness representation in both point clouds have been compared considering elevation differences, roughness and representation of different grain

  10. Iterative reconstruction of Fourier-rebinned PET data using sinogram blurring function estimated from point source scans

    PubMed Central

    Tohme, Michel S.; Qi, Jinyi

    2010-01-01

    Purpose: The accuracy of the system model that governs the transformation from the image space to the projection space in positron emission tomography (PET) greatly affects the quality of reconstructed images. For efficient computation in iterative reconstructions, the system model in PET can be factored into a product of geometric projection and sinogram blurring function. To further speed up reconstruction, fully 3D PET data can be rebinned into a stack of 2D sinograms and then be reconstructed using 2D iterative algorithms. The purpose of this work is to develop a method to estimate the sinogram blurring function to be used in reconstruction of Fourier-rebinned data. Methods: In a previous work, the authors developed an approach to estimating the sinogram blurring function of nonrebinned PET data from experimental scans of point sources. In this study, the authors extend this method to the estimation of sinogram blurring function for Fourier-rebinned PET data. A point source was scanned at a set of sampled positions in the microPET II scanner. The sinogram blurring function is considered to be separable between the transaxial and axial directions. A radially and angularly variant 2D blurring function is estimated from Fourier-rebinned point source scans to model the transaxial blurring with consideration of the detector block structure of the scanner; a space-variant 1D blurring kernel along the axial direction is estimated separately to model the correlation between neighboring planes due to detector intrinsic blurring and Fourier rebinning. The estimated sinogram blurring function is incorporated in a 2D maximum a posteriori (MAP) reconstruction algorithm for image reconstruction. Results: Physical phantom experiments were performed on the microPET II scanner to validate the proposed method. The authors compared the proposed method to 2D MAP reconstruction without sinogram blurring model and 2D MAP reconstruction with a Monte Carlo based blurring model. The

  11. In silico approaches to explore toxicity end points: issues and concerns for estimating human health effects.

    PubMed

    Matthews, Edwin J; Contrera, Joseph F

    2007-02-01

    The European Chemicals Bureau and the Organisation for Economic Cooperation and Development are currently compiling a sanctioned list of quantitative structure-activity relationship (QSAR) risk assessment models and data sets to predict the physiological properties, environmental fate, ecological effects and human health effects of new and existing chemicals in commerce in the European Union. This action implements the technical requirements of the European Commission's Registration, Evaluation and Authorisation of Chemicals legislation. The goal is to identify a battery of QSARs that can furnish rapid, reliable and cost-effective decision support information for regulatory decisions that can substitute for results from animal studies. This report discusses issues and concerns that need to be addressed when selecting QSARs to predict human health effect end points. PMID:17269899

  12. Estimating Limit Reference Points for Western Pacific Leatherback Turtles (Dermochelys coriacea) in the U.S. West Coast EEZ.

    PubMed

    Curtis, K Alexandra; Moore, Jeffrey E; Benson, Scott R

    2015-01-01

    Biological limit reference points (LRPs) for fisheries catch represent upper bounds that avoid undesirable population states. LRPs can support consistent management evaluation among species and regions, and can advance ecosystem-based fisheries management. For transboundary species, LRPs prorated by local abundance can inform local management decisions when international coordination is lacking. We estimated LRPs for western Pacific leatherbacks in the U.S. West Coast Exclusive Economic Zone (WCEEZ) using three approaches with different types of information on local abundance. For the current application, the best-informed LRP used a local abundance estimate derived from nest counts, vital rate information, satellite tag data, and fishery observer data, and was calculated with a Potential Biological Removal estimator. Management strategy evaluation was used to set tuning parameters of the LRP estimators to satisfy risk tolerances for falling below population thresholds, and to evaluate sensitivity of population outcomes to bias in key inputs. We estimated local LRPs consistent with three hypothetical management objectives: allowing the population to rebuild to its maximum net productivity level (4.7 turtles per five years), limiting delay of population rebuilding (0.8 turtles per five years), or only preventing further decline (7.7 turtles per five years). These LRPs pertain to all human-caused removals and represent the WCEEZ contribution to meeting population management objectives within a broader international cooperative framework. We present multi-year estimates, because at low LRP values, annual assessments are prone to substantial error that can lead to volatile and costly management without providing further conservation benefit. The novel approach and the performance criteria used here are not a direct expression of the "jeopardy" standard of the U.S. Endangered Species Act, but they provide useful assessment information and could help guide international
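
    The abstract does not spell out the estimator, but the standard Potential Biological Removal form (Wade 1998) on which such LRPs are commonly built is easy to state; the input values below are illustrative only.

      def pbr(n_min: float, r_max: float, f_r: float) -> float:
          """n_min: minimum abundance estimate; r_max: maximum net productivity
          rate; f_r: recovery factor in (0, 1], tuned here by management
          strategy evaluation."""
          return n_min * 0.5 * r_max * f_r

      print(pbr(n_min=200, r_max=0.04, f_r=0.1))   # -> 0.4 removals per year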

  15. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
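
    As a hedged illustration of high-order stencils (not the paper's algorithms), the weights of a centered first-derivative stencil can be recovered by matching Taylor moments; a (2m+1)-point stencil is accurate to order 2m.

      import numpy as np

      def central_fd_weights(m: int) -> np.ndarray:
          s = np.arange(-m, m + 1, dtype=float)     # stencil offsets
          A = np.vander(s, increasing=True).T       # A[k, j] = s_j**k
          b = np.zeros(2 * m + 1); b[1] = 1.0       # match d/dx, kill other moments
          return np.linalg.solve(A, b)              # f'(x) ~ (1/h) * w . f

      w = central_fd_weights(3)                     # 7-point, 6th-order stencil
      x, h = np.linspace(0.0, 1.0, 101), 0.01
      err = abs(w @ np.sin(x[47:54]) / h - np.cos(x[50]))
      print(w, err)                                 # error near machine precision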

  16. Theoretical analysis of errors when estimating snow distribution through point measurements

    NASA Astrophysics Data System (ADS)

    Trujillo, E.; Lehning, M.

    2015-06-01

    In recent years, marked improvements in our knowledge of the statistical properties of the spatial distribution of snow properties have been achieved thanks to improvements in measuring technologies (e.g., LIDAR, terrestrial laser scanning (TLS), and ground-penetrating radar (GPR)). Despite this, objective and quantitative frameworks for the evaluation of errors in snow measurements have been lacking. Here, we present a theoretical framework for quantitative evaluations of the uncertainty in average snow depth derived from point measurements over a profile section or an area. The error is defined as the expected value of the squared difference between the real mean of the profile/field and the sample mean from a limited number of measurements. The model is tested for one- and two-dimensional survey designs that range from a single measurement to an increasing number of regularly spaced measurements. Using high-resolution (~1 m) LIDAR snow depths at two locations in Colorado, we show that the sample errors follow the theoretical behavior. Furthermore, we show how the determination of the spatial location of the measurements can be reduced to an optimization problem for the case of a predefined number of measurements, or to the designation of an acceptable uncertainty level to determine the total number of regularly spaced measurements required to achieve such an error. On this basis, a series of figures are presented as an aid for snow survey design under the conditions described, and under the assumption of prior knowledge of the spatial covariance/correlation properties. With this methodology, better objective survey designs can be accomplished that are tailored to the specific applications for which the measurements are going to be used. The theoretical framework can be extended to other spatially distributed snow variables (e.g., SWE - snow water equivalent) whose statistical properties are comparable to those of snow depth.
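
    The paper's error metric, E[(field mean - sample mean)^2], can also be approached by Monte Carlo when the spatial covariance is known; the exponential covariance and the parameter values below are assumptions for illustration.

      import numpy as np

      rng = np.random.default_rng(0)
      L, dx, corr_len, sigma = 500.0, 1.0, 25.0, 0.3     # profile [m], grid, covariance
      x = np.arange(0.0, L, dx)
      C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
      Lc = np.linalg.cholesky(C + 1e-10 * np.eye(x.size))

      for n in (2, 5, 10, 25):
          idx = np.linspace(0, x.size - 1, n).astype(int)    # regular spacing
          fields = Lc @ rng.standard_normal((x.size, 2000))  # correlated profiles
          mse = np.mean((fields.mean(axis=0) - fields[idx].mean(axis=0)) ** 2)
          print(f"n={n:3d}  RMS error of the sample mean ~ {np.sqrt(mse):.3f} m")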

  17. High-Precision Lunar Ranging and Gravitational Parameter Estimation With the Apache Point Observatory Lunar Laser-ranging Operation

    NASA Astrophysics Data System (ADS)

    Johnson, Nathan H.

    This dissertation is concerned with several problems of instrumentation and data analysis encountered by the Apache Point Observatory Lunar Laser-ranging Operation. Chapter 2 considers crosstalk between elements of a single-photon avalanche photodiode detector. Experimental and analytic methods were developed to determine crosstalk rates, and empirical findings are presented. Chapter 3 details electronics developments that have improved the quality of data collected by detectors of the same type. Chapter 4 explores the challenges of estimating gravitational parameters on the basis of ranging data collected by this and other experiments and presents resampling techniques for the derivation of standard errors for estimates of such parameters determined by the Planetary Ephemeris Program (PEP), a solar-system model and data-fitting code. Possible directions for future work are discussed in Chapter 5. A manual of instructions for working with PEP is presented as an appendix.

  18. Estimating SO2 emissions from a large point source using 10 year OMI SO2 observations: Afsin Elbistan Power Plant

    NASA Astrophysics Data System (ADS)

    Kaynak Tezel, Burcak; Firatli, Ertug

    2016-04-01

    SO2 pollution is still a problem for parts of Turkey, especially regions with large-scale coal power plants. In this study, 10-year Ozone Monitoring Instrument (OMI) SO2 observations are used for estimating SO2 emissions from large point sources in Turkey. We aim to estimate SO2 emissions from coal power plants where no online monitoring is available and improve the emissions given in current emission inventories with these top-down estimates. High-resolution yearly averaged maps are created on a domain over large point sources by oversampling SO2 columns for each grid cell for the years 2005-2014. This method reduces the noise and results in a better signal from large point sources; it was previously used for coal power plants in the U.S. and India. The SO2 signal over selected power plants is observed with this method, and the spatiotemporal changes of the SO2 signal are analyzed. With the assumption that OMI SO2 observations correlate with emissions, long-term OMI SO2 observation averages can be used to estimate emission levels of significant point sources. A two-dimensional Gaussian function is used to describe the relationship between OMI SO2 observations and emissions. Afsin Elbistan Power Plant, the largest-capacity coal power plant in Turkey, is investigated in detail as a case study. The satellite scans within 50 km of the power plant are selected and averaged over a 2 x 2 km2 gridded domain using a smoothing method for 2005-2014. The yearly averages of OMI SO2 are calculated to investigate the magnitude and the impact area of the SO2 emissions of the power plant. A significant increase in OMI SO2 observations over Afsin Elbistan from 2005 to 2009 was observed (more than twofold), possibly due to the capacity increase from 1715 to 2795 MW in 2006. Comparison between the yearly gross electricity production of the plant and OMI SO2 observations indicated consistency until 2009, but OMI SO2 observations indicated a rapid increase while gross electricity
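
    A minimal sketch of the two-dimensional Gaussian step, assuming SciPy and a synthetic yearly averaged SO2 field; the grid and parameter values are illustrative, not the study's data.

      import numpy as np
      from scipy.optimize import curve_fit

      def gauss2d(xy, amp, x0, y0, sx, sy, bg):
          x, y = xy
          g = amp * np.exp(-(x - x0)**2 / (2 * sx**2) - (y - y0)**2 / (2 * sy**2))
          return (g + bg).ravel()

      # 2 x 2 km cells within +-50 km of the plant (distances in km)
      x, y = np.meshgrid(np.arange(-50, 50, 2.0), np.arange(-50, 50, 2.0))
      truth = gauss2d((x, y), 2.0, 3.0, -1.0, 12.0, 8.0, 0.3).reshape(x.shape)
      so2 = truth + 0.1 * np.random.default_rng(1).standard_normal(x.shape)

      popt, _ = curve_fit(gauss2d, (x, y), so2.ravel(), p0=(1, 0, 0, 10, 10, 0))
      print("amplitude, centre, widths, background:", np.round(popt, 2))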

  19. Multi-directional search from the primitive initial point for Gaussian mixture estimation using variational Bayes method.

    PubMed

    Ishikawa, Yuta; Takeuchi, Ichiro; Nakano, Ryohei

    2010-04-01

    Gaussian mixture model (GMM) is widely used in many applications because it can approximate various forms of probability distributions. In this paper, we are concerned with the GMM estimation problem using the variational Bayes (VB) method. In this approach, one can only find local optima because the free energy function of the problem is multimodal. In order to find better solutions, deterministic annealing was recently adapted to the VB method (the DAVB method). Here, we offer an alternative to the DAVB method for the GMM estimation problem. We propose a multi-directional search method from the primitive initial point (PIP), which is defined as the solution of the DAVB method at the highest temperature. Investigation of the curvature information of the original (not annealed) free energy function reveals that the PIP is a saddle point. An efficient multi-directional search strategy from the neighborhoods of the PIP is proposed using eigen-analysis of the Hessian matrix. Numerical experiments using real data sets demonstrate the effectiveness of our method.
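
    The DAVB/PIP search itself is not in standard libraries, but a baseline VB fit of a GMM (with random restarts) can be reproduced with scikit-learn for comparison; the data here are synthetic.

      import numpy as np
      from sklearn.mixture import BayesianGaussianMixture

      rng = np.random.default_rng(0)
      X = np.vstack([rng.normal(-2.0, 0.5, (200, 2)),
                     rng.normal(2.0, 0.8, (300, 2))])

      vb = BayesianGaussianMixture(n_components=5, n_init=10, random_state=0).fit(X)
      print(np.round(vb.weights_, 3))   # superfluous components get ~zero weight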

  20. Using ToxCast™ Data to Reconstruct Dynamic Cell State Trajectories and Estimate Toxicological Points of Departure

    PubMed Central

    Shah, Imran; Setzer, R. Woodrow; Jack, John; Houck, Keith A.; Judson, Richard S.; Knudsen, Thomas B.; Liu, Jie; Martin, Matthew T.; Reif, David M.; Richard, Ann M.; Thomas, Russell S.; Crofton, Kevin M.; Dix, David J.; Kavlock, Robert J.

    2015-01-01

    Citation: Shah I, et al. Using ToxCast data to reconstruct dynamic cell state trajectories and estimate toxicological points of departure. Environ Health Perspect 124:910–919; http://dx.doi.org/10.1289/ehp.1409029 PMID:26473631

  1. Accurate calculation of diffraction-limited encircled and ensquared energy.

    PubMed

    Andersen, Torben B

    2015-09-01

    Mathematical properties of the encircled and ensquared energy functions for the diffraction-limited point-spread function (PSF) are presented. These include power series and a set of linear differential equations that facilitate the accurate calculation of these functions. Asymptotic expressions are derived that provide very accurate estimates for the relative amount of energy in the diffraction PSF that falls outside a large square or rectangular detector. Tables with accurate values of the encircled and ensquared energy functions are also presented. PMID:26368873
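
    For the circular-pupil case, the classical encircled-energy result E(v) = 1 - J0(v)^2 - J1(v)^2, with v the normalized radial coordinate, offers a quick cross-check; this is textbook material, not the paper's series or differential-equation machinery.

      import numpy as np
      from scipy.special import j0, j1

      def encircled_energy(v):
          return 1.0 - j0(v)**2 - j1(v)**2

      v_first_dark_ring = 3.8317       # first zero of J1, i.e. r = 1.22 lambda f/D
      print(encircled_energy(v_first_dark_ring))   # ~0.838 of the total energy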

  2. Position-dependent velocity of an effective temperature point for the estimation of the thermal diffusivity of solids

    NASA Astrophysics Data System (ADS)

    Balachandar, Settu; Shivaprakash, N. C.; Kameswara Rao, L.

    2016-01-01

    A new approach is proposed to estimate the thermal diffusivity of optically transparent solids at ambient temperature based on the velocity of an effective temperature point (ETP), and the concept is corroborated using a two-beam interferometer. 1D unsteady heat flow via step-temperature excitation is interpreted as a ‘micro-scale rectilinear translatory motion’ of an ETP. The velocity-dependent function is extracted by revisiting the Fourier heat diffusion equation. The relationship between the velocity of the ETP and the thermal diffusivity is modeled using a standard solution. Under optimized thermal excitation, the product of the velocity of the ETP and the distance travelled constitutes a new constitutive equation for the thermal diffusivity of the solid. The experimental approach involves the establishment of a 1D unsteady heat flow inside the sample through step-temperature excitation. Among the moving isothermal surfaces, the ETP is identified using a two-beam interferometer. The arrival time of the ETP at a fixed distance away from the heat source is measured, and its velocity is calculated. The velocity of the ETP and a given distance are sufficient to estimate the thermal diffusivity of a solid. The proposed method is experimentally verified for BK7 glass samples and the measured results match the reported values closely.
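
    A plausible reading of the velocity-distance relation, assuming the classical half-space step-temperature solution (this derivation is our gloss, not quoted from the abstract):

      \[
      \frac{T(x,t)-T_\infty}{T_s - T_\infty}
        = \operatorname{erfc}\!\Big(\frac{x}{2\sqrt{\alpha t}}\Big)
      \quad\Rightarrow\quad
      x_{\mathrm{ETP}}(t) = 2\eta\sqrt{\alpha t},
      \]
      where \(\eta\) is the fixed similarity value defining the effective temperature point. Differentiating,
      \[
      v_{\mathrm{ETP}} = \eta\sqrt{\alpha/t},
      \qquad
      v_{\mathrm{ETP}}\, x_{\mathrm{ETP}} = 2\eta^{2}\alpha,
      \]
      so the product of the ETP velocity and the distance travelled is proportional to the thermal diffusivity \(\alpha\), consistent with the constitutive relation described above.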

  3. Novel applications using maximum-likelihood estimation in optical metrology and nuclear medical imaging: Point-diffraction interferometry and BazookaPET

    NASA Astrophysics Data System (ADS)

    Park, Ryeojin

    This dissertation aims to investigate two different applications in optics using maximum-likelihood (ML) estimation. The first application of ML estimation is in optical metrology. For this application, an innovative iterative search method called the synthetic phase-shifting (SPS) algorithm is proposed. This search algorithm is used for estimation of a wavefront that is described by a finite set of Zernike Fringe (ZF) polynomials. In this work, we estimate the ZF coefficients, or parameter values, of the wavefront using a single interferogram obtained from a point-diffraction interferometer (PDI). In order to find the estimates, we first calculate the squared difference between the measured and simulated interferograms. Under certain assumptions, this squared-difference image can be treated as an interferogram showing the phase difference between the true wavefront deviation and the simulated wavefront deviation. The wavefront deviation is defined as the difference between the reference and the test wavefronts. We calculate the phase difference using a traditional phase-shifting technique without physical phase-shifters. We present a detailed forward model for the PDI interferogram, including the effect of the finite size of a detector pixel. The algorithm was validated with computational studies and its performance and constraints are discussed. A prototype PDI was built and the algorithm was also experimentally validated. A large wavefront deviation was successfully estimated without using null optics or physical phase-shifters. The experimental result shows that the proposed algorithm has great potential to provide an accurate tool for non-null testing. The second application of ML estimation is in nuclear medical imaging. A high-resolution positron tomography scanner called BazookaPET is proposed. We have designed and developed a novel proof-of-concept detector element for a PET system called BazookaPET. In order to complete the PET configuration, at least
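
    The conventional four-step relation that the synthetic phase-shifting algorithm emulates without physical phase shifters can be sketched as follows; the fringe frames are synthesized for illustration.

      import numpy as np

      x = np.linspace(-1.0, 1.0, 256)
      phi = 2 * np.pi * x**2                    # a toy wavefront-difference phase
      frames = [1 + np.cos(phi + k * np.pi / 2) for k in range(4)]   # I1..I4
      phase = np.arctan2(frames[3] - frames[1], frames[0] - frames[2])
      # recovered phase equals phi up to 2*pi wrapping
      print(np.allclose(np.angle(np.exp(1j * (phase - phi))), 0.0, atol=1e-8))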

  4. Estimation of Minimal Breakdown Point in a GaP Plasma Structure and Discharge Features in Air and Argon Media

    NASA Astrophysics Data System (ADS)

    Kurt, H. Hilal; Tanrıverdi, Evrim

    2016-08-01

    We present gas discharge phenomena in argon and air media using a gallium phosphide (GaP) semiconductor and metal electrodes. The system has a large-diameter (D) semiconductor and a microscaled adjustable interelectrode gap (d). Both theoretical and experimental findings are discussed for a direct-current (dc) electric field (E) applied to this structure with parallel-plate geometry. As one of the main parameters, the pressure p takes an adjustable value from 0.26 kPa to 101 kPa. After collection of experimental data, a new theoretical formula is developed to estimate the minimal breakdown point of the system as a function of p and d. It is proven that the minimal breakdown point in the semiconductor and metal electrode system differs dramatically from that in metal and metal electrode systems. In addition, the surface charge density σ and spatial electron distribution n_e are calculated theoretically. Current-voltage characteristics (CVCs) demonstrate that there exist certain negative differential resistance (NDR) regions for small interelectrode separations (i.e., d = 50 μm) and low and moderate pressures between 3.7 kPa and 13 kPa in Ar medium. From the difference of currents in CVCs, the bifurcation of the discharge current is clarified for an applied voltage U. Since the current differences in NDRs have various values from 1 μA to 7.24 μA for different pressures, the GaP semiconductor plasma structure can be used in microwave diode systems due to its clear NDR region.
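
    For contrast with the paper's new semiconductor-electrode formula, the classical Paschen law for metal-metal parallel plates (the baseline the abstract says the GaP system deviates from) is shown below; A, B and gamma are gas-dependent constants, and the air values used are common literature figures, not the paper's fit.

      import numpy as np

      def paschen_vb(pd_torr_cm, A=15.0, B=365.0, gamma=0.01):
          """Breakdown voltage [V] as a function of p*d [Torr cm]."""
          return B * pd_torr_cm / np.log(A * pd_torr_cm / np.log(1.0 + 1.0 / gamma))

      print(paschen_vb(np.array([0.5, 1.0, 5.0, 10.0])))   # valid where A*pd is large enough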

  5. How accurately are maximal metabolic equivalents estimated based on the treadmill workload in healthy people and asymptomatic subjects with cardiovascular risk factors?

    PubMed

    Maeder, M T; Muenzer, T; Rickli, H; Brunner-La Rocca, H P; Myers, J; Ammann, P

    2008-08-01

    Maximal exercise capacity expressed as metabolic equivalents (METs) is rarely directly measured (measured METs; mMETs) but estimated from maximal workload (estimated METs; eMETs). We assessed the accuracy of predicting mMETs by eMETs in asymptomatic subjects. Thirty-four healthy volunteers without cardiovascular risk factors (controls) and 90 patients with at least one risk factor underwent cardiopulmonary exercise testing using individualized treadmill ramp protocols. The equation of the American College of Sports Medicine (ACSM) was employed to calculate eMETs. Despite a close correlation between eMETs and mMETs (patients: r = 0.82, controls: r = 0.88; p < 0.001 for both), eMETs were higher than mMETs in both patients [11.7 (8.9 - 13.4) vs. 8.2 (7.0 - 10.6) METs; p < 0.001] and controls [17.0 (16.2 - 18.2) vs. 15.6 (14.2 - 17.0) METs; p < 0.001]. The absolute [2.5 (1.6 - 3.7) vs. 1.3 (0.9 - 2.1) METs; p < 0.001] and the relative [28 (19 - 47) vs. 9 (6 - 14) %; p < 0.001] difference between eMETs and mMETs was higher in patients. In patients, ratio limits of agreement of 1.33 (×/÷ 1.40) between eMETs and mMETs were obtained, whereas the ratio limits of agreement were 1.09 (×/÷ 1.13) in controls. The ACSM equation is associated with a significant overestimation of mMETs even in young and fit subjects, and the overestimation is markedly more pronounced in older and less fit subjects with cardiovascular risk factors.
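
    The ACSM running equation commonly used to convert a final treadmill workload into eMETs is reproduced below; that this exact variant was used in the study is an assumption.

      def acsm_running_mets(speed_m_min: float, grade: float) -> float:
          # VO2 [mL/kg/min] = 3.5 + 0.2*speed + 0.9*speed*grade (speed in m/min,
          # grade as a fraction); 1 MET = 3.5 mL/kg/min
          vo2 = 3.5 + 0.2 * speed_m_min + 0.9 * speed_m_min * grade
          return vo2 / 3.5

      print(acsm_running_mets(speed_m_min=160.0, grade=0.05))   # ~12.2 METs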

  6. Estimating a societal value of earth science information in the assessment of non-point source pollutants

    NASA Astrophysics Data System (ADS)

    Bernknopf, Richard L.; Allison Lenkeit, K.; Dinitz, Laura B.; Loague, Keith

    The availability of potable groundwater supplies is a major environmental-quality concern throughout the U.S. Remediation measures exist as one possible means of "cleaning up" groundwater-contamination problems. An alternative preventive approach to mitigate future contamination incidents is regional-scale non-point source (NPS) vulnerability assessments. The method of assessing groundwater vulnerability in this study is founded on the Retardation Factor (RF), a screening index based on Earth Science information. In this chapter the RF index is used as the core of a risk-based regulation to permit the application of specific pesticides in specific soils to avoid future contamination. An integrated Earth Science-Economics model is developed to estimate the benefits of an ex ante informational approach to decision making in a regulatory framework. The RF-based preventive measure is then compared in a cost-effectiveness analysis to a wellhead treatment program in a hypothetical case study for the Hawaiian island of Oahu. The comparison demonstrates that an RF-based regulation has positive net benefits and under certain circumstances can be more efficient than the example wellhead treatment program.
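
    The screening index itself has a standard form, RF = 1 + rho_b * Kd / theta; whether the study used exactly this parameterization is an assumption, and the values below are illustrative.

      def retardation_factor(bulk_density, koc, foc, water_content):
          """RF = 1 + rho_b * Kd / theta, with Kd = Koc * foc."""
          kd = koc * foc                       # sorption coefficient [L/kg]
          return 1.0 + bulk_density * kd / water_content

      # e.g. rho_b = 1.4 kg/L, Koc = 60 L/kg, foc = 2%, theta = 0.3
      print(retardation_factor(1.4, 60.0, 0.02, 0.3))   # RF = 6.6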

  7. Temperature mapping in bread dough using SE and GE two-point MRI methods: experimental and theoretical estimation of uncertainty.

    PubMed

    Lucas, Tiphaine; Musse, Maja; Bornert, Mélanie; Davenel, Armel; Quellec, Stéphane

    2012-04-01

    Two-dimensional (2D)-SE, 2D-GE and tri-dimensional (3D)-GE two-point T1-weighted MRI methods were evaluated in this study in order to maximize the accuracy of temperature mapping of bread dough during thermal processing. Uncertainties were propagated throughout each protocol of measurement, and comparisons demonstrated that all the methods with comparable acquisition times minimized the temperature uncertainty to a similar extent. The experimental uncertainties obtained with low-field MRI were also compared to the theoretical estimations. Some discrepancies were reported between experimental and theoretical values of the uncertainty of temperature; however, experimental and theoretical trends with varying parameters agreed to a large extent for both SE and GE methods. The 2D-SE method was chosen for further applications on prefermented dough because of its lower sensitivity to susceptibility differences in porous media. It was applied for temperature mapping in prefermented dough during chilling prior to freezing and compared locally to optical fiber measurements.

  9. Two-compartment, two-sample technique for accurate estimation of effective renal plasma flow: Theoretical development and comparison with other methods

    SciTech Connect

    Lear, J.L.; Feyerabend, A.; Gregory, C.

    1989-08-01

    Discordance between effective renal plasma flow (ERPF) measurements from radionuclide techniques that use single versus multiple plasma samples was investigated. In particular, the authors determined whether effects of variations in distribution volume (Vd) of iodine-131 iodohippurate on measurement of ERPF could be ignored, an assumption implicit in the single-sample technique. The influence of Vd on ERPF was found to be significant, a factor indicating an important and previously unappreciated source of error in the single-sample technique. Therefore, a new two-compartment, two-plasma-sample technique was developed on the basis of the observations that while variations in Vd occur from patient to patient, the relationship between intravascular and extravascular components of Vd and the rate of iodohippurate exchange between the components are stable throughout a wide range of physiologic and pathologic conditions. The new technique was applied in a series of 30 studies in 19 patients. Results were compared with those achieved with the reference, single-sample, and slope-intercept techniques. The new two-compartment, two-sample technique yielded estimates of ERPF that more closely agreed with the reference multiple-sample method than either the single-sample or slope-intercept techniques.

  10. Normal Tissue Complication Probability Estimation by the Lyman-Kutcher-Burman Method Does Not Accurately Predict Spinal Cord Tolerance to Stereotactic Radiosurgery

    SciTech Connect

    Daly, Megan E.; Luxton, Gary; Choi, Clara Y.H.; Gibbs, Iris C.; Chang, Steven D.; Adler, John R.; Soltys, Scott G.

    2012-04-01

    traditionally used to estimate spinal cord NTCP may not apply to the dosimetry of SRS. Further research with additional NTCP models is needed.
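
    For reference, the standard Lyman-Kutcher-Burman form the title refers to is sketched below; all parameter values are placeholders, not the study's fits.

      import numpy as np
      from scipy.stats import norm

      def lkb_ntcp(dose_bins_gy, vol_fracs, td50=70.0, m=0.15, n=0.05):
          a = 1.0 / n                           # small n: serial organ (e.g. cord)
          geud = (np.sum(vol_fracs * dose_bins_gy**a))**(1.0 / a)   # Kutcher-Burman
          t = (geud - td50) / (m * td50)        # Lyman probit argument
          return norm.cdf(t)

      print(lkb_ntcp(np.array([10.0, 12.0, 14.0]), np.array([0.2, 0.5, 0.3])))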

  11. An SVM-Based Classifier for Estimating the State of Various Rotating Components in Agro-Industrial Machinery with a Vibration Signal Acquired from a Single Point on the Machine Chassis

    PubMed Central

    Ruiz-Gonzalez, Ruben; Gomez-Gil, Jaime; Gomez-Gil, Francisco Javier; Martínez-Martínez, Víctor

    2014-01-01

    The goal of this article is to assess the feasibility of estimating the state of various rotating components in agro-industrial machinery by employing just one vibration signal acquired from a single point on the machine chassis. To do so, a Support Vector Machine (SVM)-based system is employed. Experimental tests evaluated this system by acquiring vibration data from a single point of an agricultural harvester, while varying several of its working conditions. The whole process included two major steps. Initially, the vibration data were preprocessed through twelve feature extraction algorithms, after which the Exhaustive Search method selected the most suitable features. Secondly, the SVM-based system accuracy was evaluated by using Leave-One-Out cross-validation, with the selected features as the input data. The results of this study provide evidence that (i) accurate estimation of the status of various rotating components in agro-industrial machinery is possible by processing the vibration signal acquired from a single point on the machine structure; (ii) the vibration signal can be acquired with a uniaxial accelerometer, the orientation of which does not significantly affect the classification accuracy; and, (iii) when using an SVM classifier, an 85% mean cross-validation accuracy can be reached, which only requires a maximum of seven features as its input, and no significant improvements are noted between the use of either nonlinear or linear kernels. PMID:25372618
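
    The classification stage can be sketched with scikit-learn; the random feature matrix below merely stands in for the (at most seven) selected vibration features.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import LeaveOneOut, cross_val_score

      rng = np.random.default_rng(0)
      X = rng.standard_normal((60, 7))      # stand-in for selected features
      y = rng.integers(0, 2, 60)            # component-state labels

      acc = cross_val_score(SVC(kernel="linear", C=1.0), X, y, cv=LeaveOneOut())
      print("LOO accuracy:", acc.mean())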

  12. Estimation of the measurement uncertainty in quantitative determination of ketamine and norketamine in urine using a one-point calibration method.

    PubMed

    Ma, Yi-Chun; Wang, Che-Wei; Hung, Sih-Hua; Chang, Yan-Zin; Liu, Chia-Reiy; Her, Guor-Rong

    2012-09-01

    An approach was proposed for the estimation of measurement uncertainty for analytical methods based on one-point calibration. The proposed approach is similar to the popular multiple-point calibration approach. However, the standard deviation of calibration was estimated externally. The approach was applied to the estimation of measurement uncertainty for the quantitative determination of ketamine (K) and norketamine (NK) at a 100 ng/mL threshold concentration in urine. In addition to uncertainty due to calibration, sample analysis was the other major source of uncertainty. To include the variation due to matrix effect and temporal effect in sample analysis, different blank urines were spiked with K and NK and analyzed at equal time intervals within and between batches. The expanded uncertainties (k = 2) were estimated to be 10 and 8 ng/mL for K and NK, respectively.
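
    The combination implied by the approach is the usual quadrature sum expanded with k = 2; the component values below are illustrative, not the paper's budget.

      import numpy as np

      u_cal, u_sample = 2.1, 4.5            # standard uncertainties [ng/mL]
      U = 2 * np.hypot(u_cal, u_sample)     # expanded uncertainty, k = 2
      print(round(U, 1), "ng/mL")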

  13. A test of the 'one-point method' for estimating maximum carboxylation capacity from field-measured, light-saturated photosynthesis.

    PubMed

    De Kauwe, Martin G; Lin, Yan-Shih; Wright, Ian J; Medlyn, Belinda E; Crous, Kristine Y; Ellsworth, David S; Maire, Vincent; Prentice, I Colin; Atkin, Owen K; Rogers, Alistair; Niinemets, Ülo; Serbin, Shawn P; Meir, Patrick; Uddling, Johan; Togashi, Henrique F; Tarvainen, Lasse; Weerasinghe, Lasantha K; Evans, Bradley J; Ishida, F Yoko; Domingues, Tomas F

    2016-05-01

    Simulations of photosynthesis by terrestrial biosphere models typically need a specification of the maximum carboxylation rate (Vcmax). Estimating this parameter using A-Ci curves (net photosynthesis, A, vs intercellular CO2 concentration, Ci) is laborious, which limits availability of Vcmax data. However, many multispecies field datasets include net photosynthetic rate at saturating irradiance and at ambient atmospheric CO2 concentration (Asat) measurements, from which Vcmax can be extracted using a 'one-point method'. We used a global dataset of A-Ci curves (564 species from 46 field sites, covering a range of plant functional types) to test the validity of an alternative approach to estimate Vcmax from Asat via this 'one-point method'. If leaf respiration during the day (Rday) is known exactly, Vcmax can be estimated with an r² value of 0.98 and a root-mean-squared error (RMSE) of 8.19 μmol m⁻² s⁻¹. However, Rday typically must be estimated. Estimating Rday as 1.5% of Vcmax, we found that Vcmax could be estimated with an r² of 0.95 and an RMSE of 17.1 μmol m⁻² s⁻¹. The one-point method provides a robust means to expand current databases of field-measured Vcmax, giving new potential to improve vegetation models and quantify the environmental drivers of Vcmax variation. PMID:26719951
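
    A sketch of the one-point estimator as commonly formulated (Farquhar Rubisco-limited rate with Rday = 0.015 Vcmax); the Km and Γ* values are generic 25 °C literature numbers and are assumptions here, not taken from the abstract.

      def vcmax_one_point(asat, ci, km=710.0, gamma_star=42.75):
          """asat [μmol m⁻² s⁻¹], ci [μmol mol⁻¹]; returns Vcmax [μmol m⁻² s⁻¹].
          Solves asat = Vcmax*(ci - Γ*)/(ci + Km) - 0.015*Vcmax for Vcmax."""
          return asat / ((ci - gamma_star) / (ci + km) - 0.015)

      print(vcmax_one_point(asat=20.0, ci=280.0))   # ~89 μmol m⁻² s⁻¹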

  14. Accurate cloud-based smart IMT measurement, its validation and stroke risk stratification in carotid ultrasound: A web-based point-of-care tool for multicenter clinical trial.

    PubMed

    Saba, Luca; Banchhor, Sumit K; Suri, Harman S; Londhe, Narendra D; Araki, Tadashi; Ikeda, Nobutaka; Viskovic, Klaudija; Shafique, Shoaib; Laird, John R; Gupta, Ajay; Nicolaides, Andrew; Suri, Jasjit S

    2016-08-01

    This study presents AtheroCloud™, a novel cloud-based smart carotid intima-media thickness (cIMT) measurement tool using B-mode ultrasound for stroke/cardiovascular risk assessment and its stratification. This is an anytime-anywhere clinical tool for routine screening and multi-center clinical trials. In this pilot study, the physician can upload ultrasound scans in one of the following formats (DICOM, JPEG, BMP, PNG, GIF or TIFF) directly into the proprietary cloud of AtheroPoint from the local server of the physician's office. They can then run the intelligent and automated AtheroCloud™ cIMT measurements in point-of-care settings in less than five seconds per image, while saving the vascular reports in the cloud. We statistically benchmark AtheroCloud™ cIMT readings against sonographer (a registered vascular technologist) readings and manual measurements derived from the tracings of the radiologist. One hundred patients (75 M/25 F, mean age: 68±11 years; IRB approved, Toho University, Japan) contributed left/right common carotid artery (CCA) ultrasound scans (200 scans in total), acquired with a 7.5 MHz transducer (Toshiba, Tokyo, Japan). The measured cIMTs for the L/R carotid were as follows (in mm): (i) AtheroCloud™ (0.87±0.20, 0.77±0.20); (ii) sonographer (0.97±0.26, 0.89±0.29) and (iii) manual (0.90±0.20, 0.79±0.20), respectively. The coefficient of correlation (CC) between sonographer and manual for L/R cIMT was 0.74 (P<0.0001) and 0.65 (P<0.0001), while that between AtheroCloud™ and manual was 0.96 (P<0.0001) and 0.97 (P<0.0001), respectively. We observed that 91.15% of the population in AtheroCloud™ had a mean cIMT error less than 0.11 mm, compared to the sonographer's 68.31%. The area under the receiver operating characteristic curve was 0.99 for AtheroCloud™ against 0.81 for the sonographer. Our Framingham Risk Score stratified the population into three bins as follows: 39% in low-risk, 70.66% in medium-risk and 10.66% in high-risk bins

  16. A point-infiltration model for estimating runoff from rainfall on small basins in semiarid areas of Wyoming

    USGS Publications Warehouse

    Rankl, James G.

    1990-01-01

    A physically based point-infiltration model was developed for computing infiltration of rainfall into soils and the resulting runoff from small basins in Wyoming. The user describes a 'design storm' in terms of average rainfall intensity and storm duration. Information required to compute runoff for the design storm by using the model include (1) soil type and description, and (2) two infiltration parameters and a surface-retention storage parameter. Parameter values are tabulated in the report. Rainfall and runoff data for three ephemeral-stream basins that contain only one type of soil were used to develop the model. Two assumptions were necessary: antecedent soil moisture is some long-term average, and storm rainfall is uniform in both time and space. The infiltration and surface-retention storage parameters were determined for the soil of each basin. Observed rainstorm and runoff data were used to develop a separation curve, or incipient-runoff curve, which distinguishes between runoff and nonrunoff rainfall data. The position of this curve defines the infiltration and surface-retention storage parameters. A procedure for applying the model to basins that contain more than one type of soil was developed using data from 7 of the 10 study basins. For these multiple-soil basins, the incipient-runoff curve defines the infiltration and retention-storage parameters for the soil having the highest runoff potential. Parameters were defined by ranking the soils according to their relative permeabilities and optimizing the position of the incipient-runoff curve by using measured runoff as a control for the fit. Analyses of runoff from multiple-soil basins indicate that the effective contributing area of runoff is less than the drainage area of the basin. In this study, the effective drainage area ranged from 41.6 to 71.1 percent of the total drainage area. Information on effective drainage area is useful in evaluating drainage area as an independent variable in

  17. Digital signal processing and control and estimation theory -- Points of tangency, area of intersection, and parallel directions

    NASA Technical Reports Server (NTRS)

    Willsky, A. S.

    1976-01-01

    A number of current research directions in the fields of digital signal processing and modern control and estimation theory were studied. Topics such as stability theory, linear prediction and parameter identification, system analysis and implementation, two-dimensional filtering, decentralized control and estimation, image processing, and nonlinear system theory were examined in order to uncover some of the basic similarities and differences in the goals, techniques, and philosophy of the two disciplines. An extensive bibliography is included.

  18. Point: Clarifying Policy Evidence With Potential-Outcomes Thinking—Beyond Exposure-Response Estimation in Air Pollution Epidemiology

    PubMed Central

    Zigler, Corwin Matthew; Dominici, Francesca

    2014-01-01

    The regulatory environment surrounding policies to control air pollution warrants a new type of epidemiologic evidence. Whereas air pollution epidemiology has typically informed policies with estimates of exposure-response relationships between pollution and health outcomes, these estimates alone cannot support current debates surrounding the actual health effects of air quality regulations. We argue that directly evaluating specific control strategies is distinct from estimating exposure-response relationships and that increased emphasis on estimating effects of well-defined regulatory interventions would enhance the evidence that supports policy decisions. Appealing to similar calls for accountability assessment of whether regulatory actions impact health outcomes, we aim to sharpen the analytic distinctions between studies that directly evaluate policies and those that estimate exposure-response relationships, with particular focus on perspectives for causal inference. Our goal is not to review specific methodologies or studies, nor is it to extoll the advantages of “causal” versus “associational” evidence. Rather, we argue that potential-outcomes perspectives can elevate current policy debates with more direct evidence of the extent to which complex regulatory interventions affect health. Augmenting the existing body of exposure-response estimates with rigorous evidence of the causal effects of well-defined actions will ensure that the highest-level epidemiologic evidence continues to support regulatory policies. PMID:25399414

  20. Estimation of point source fugitive emission rates from a single sensor time series: a conditionally-sampled Gaussian plume reconstruction

    EPA Science Inventory

    This paper presents a technique for determining the trace gas emission rate from a point source. The technique was tested using data from controlled methane release experiments and from measurement downwind of a natural gas production facility in Wyoming. Concentration measuremen...
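
    Such techniques invert a forward plume model for the emission rate; a minimal version with the standard Gaussian plume (dispersion parameters and the measured enhancement are illustrative) looks like this:

      import numpy as np

      def plume_conc(q, u, y, z, h, sig_y, sig_z):
          """Gaussian plume concentration [g/m^3] for emission rate q [g/s]."""
          return (q / (2 * np.pi * u * sig_y * sig_z)
                  * np.exp(-y**2 / (2 * sig_y**2))
                  * (np.exp(-(z - h)**2 / (2 * sig_z**2))
                     + np.exp(-(z + h)**2 / (2 * sig_z**2))))

      c_meas = 4.0e-5                          # observed enhancement [g/m^3]
      unit = plume_conc(1.0, u=3.0, y=10.0, z=2.0, h=2.0, sig_y=20.0, sig_z=10.0)
      print("estimated Q [g/s]:", c_meas / unit)   # model is linear in q, so Q = C/unit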

  1. Effects of Varying Epoch Lengths, Wear Time Algorithms, and Activity Cut-Points on Estimates of Child Sedentary Behavior and Physical Activity from Accelerometer Data

    PubMed Central

    Banda, Jorge A.; Haydel, K. Farish; Davila, Tania; Desai, Manisha; Haskell, William L.; Matheson, Donna; Robinson, Thomas N.

    2016-01-01

    Objective To examine the effects of accelerometer epoch lengths, wear time (WT) algorithms, and activity cut-points on estimates of WT, sedentary behavior (SB), and physical activity (PA). Methods 268 7–11 year-olds with BMI ≥ 85th percentile for age and sex wore accelerometers on their right hips for 4–7 days. Data were processed and analyzed at epoch lengths of 1-, 5-, 10-, 15-, 30-, and 60-seconds. For each epoch length, WT minutes/day was determined using three common WT algorithms, and minutes/day and percent time spent in SB, light (LPA), moderate (MPA), and vigorous (VPA) PA were determined using five common activity cut-points. ANOVA tested differences in WT, SB, LPA, MPA, VPA, and MVPA when using the different epoch lengths, WT algorithms, and activity cut-points. Results WT minutes/day varied significantly by epoch length when using the NHANES WT algorithm (p < .0001), but did not vary significantly by epoch length when using the ≥ 20 minute consecutive zero or Choi WT algorithms. Minutes/day and percent time spent in SB, LPA, MPA, VPA, and MVPA varied significantly by epoch length for all sets of activity cut-points tested with all three WT algorithms (all p < .0001). Across all epoch lengths, minutes/day and percent time spent in SB, LPA, MPA, VPA, and MVPA also varied significantly across all sets of activity cut-points with all three WT algorithms (all p < .0001). Conclusions The common practice of converting WT algorithms and activity cut-point definitions to match different epoch lengths may introduce significant errors. Estimates of SB and PA from studies that process and analyze data using different epoch lengths, WT algorithms, and/or activity cut-points are not comparable, potentially leading to very different results, interpretations, and conclusions, misleading research and public policy. PMID:26938240
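
    The mechanics of the epoch-length effect are easy to demonstrate: the same count stream rebinned to longer epochs is classified differently. The cut-points below are hypothetical, not any of the published sets.

      import numpy as np

      rng = np.random.default_rng(0)
      counts_1s = rng.poisson(rng.uniform(0, 100, 3600))   # one hour of 1-s counts
      counts_60s = counts_1s.reshape(-1, 60).sum(axis=1)   # rebinned to 60-s epochs

      cut_1s, cut_60s = 50, 3000    # hypothetical cut-point and its 60x rescaling
      print((counts_1s >= cut_1s).mean(), (counts_60s >= cut_60s).mean())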

  2. A test of the 'one-point method' for estimating maximum carboxylation capacity from field-measured, light-saturated photosynthesis

    DOE PAGES

    Martin G. De Kauwe; Serbin, Shawn P.; Lin, Yan -Shih; Wright, Ian J.; Medlyn, Belinda E.; Crous, Kristine Y.; Ellsworth, David S.; Maire, Vincent; Prentice, I. Colin; Atkin, Owen K.; et al

    2015-12-31

    Here, simulations of photosynthesis by terrestrial biosphere models typically need a specification of the maximum carboxylation rate (Vcmax). Estimating this parameter using A–Ci curves (net photosynthesis, A, vs intercellular CO2 concentration, Ci) is laborious, which limits availability of Vcmax data. However, many multispecies field datasets include net photosynthetic rate at saturating irradiance and at ambient atmospheric CO2 concentration (Asat) measurements, from which Vcmax can be extracted using a ‘one-point method’.

  3. Sci—Thur AM: YIS - 11: Estimation of Bladder-Wall Cumulative Dose in Multi-Fraction Image-Based Gynaecological Brachytherapy Using Deformable Point Set Registration

    SciTech Connect

    Zakariaee, R; Brown, C J; Hamarneh, G; Parsons, C A; Spadinger, I

    2014-08-15

    Dosimetric parameters based on dose-volume histograms (DVH) of contoured structures are routinely used to evaluate dose delivered to target structures and organs at risk. However, the DVH provides no information on the spatial distribution of the dose in situations of repeated fractions with changes in organ shape or size. The aim of this research was to develop methods to more accurately determine geometrically localized, cumulative dose to the bladder wall in intracavitary brachytherapy for cervical cancer. The CT scans and treatment plans of 20 cervical cancer patients were used. Each patient was treated with five high-dose-rate (HDR) brachytherapy fractions of 600cGy prescribed dose. The bladder inner and outer surfaces were delineated using MIM Maestro software (MIM Software Inc.) and were imported into MATLAB (MathWorks) as 3-dimensional point clouds constituting the “bladder wall”. A point-set registration toolbox for MATLAB, Coherent Point Drift (CPD), was used to non-rigidly transform the bladder-wall points from four of the fractions to the coordinate system of the remaining (reference) fraction, which was chosen to be the emptiest bladder for each patient. The doses were accumulated on the reference fraction and new cumulative dosimetric parameters were calculated. The LENT-SOMA toxicity scores of these patients were studied against the cumulative dose parameters. Based on this study, there was no significant correlation between the toxicity scores and the determined cumulative dose parameters.

  4. Future PMPs Estimation in Korea under AR5 RCP 8.5 Climate Change Scenario: Focus on Dew Point Temperature Change

    NASA Astrophysics Data System (ADS)

    Okjeong, Lee; Sangdan, Kim

    2016-04-01

    According to future climate change scenarios, future temperature is expected to increase gradually. Therefore, it is necessary to reflect the effects of these climate changes when predicting Probable Maximum Precipitations (PMPs). In this presentation, PMPs will be estimated with future dew point temperature change. After selecting 174 major storm events from 1981 to 2005, new PMPs will be proposed with respect to storm areas (25, 100, 225, 400, 900, 2,025, 4,900, 10,000 and 19,600 km2) and storm durations (1, 2, 4, 6, 8, 12, 18, 24, 48 and 72 hours) using the Korea hydro-meteorological method. Also, an orographic transposition factor will be applied in place of the conventional terrain impact factor which has been used in previous Korean PMP estimation reports. After estimating dew point temperature using future temperature and representative humidity information under the Korea Meteorological Administration AR5 RCP 8.5 scenario, changes in the PMPs under dew point temperature change will be investigated by comparison of present and future PMPs. This research was supported by a grant (14AWMP-B082564-01) from the Advanced Water Management Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.
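
    Hydrometeorological PMP methods of this kind rest on moisture maximization: scaling the observed storm depth by the ratio of precipitable water at the maximum dew point to that at the storm dew point. The precipitable-water function below is a hypothetical stand-in for the agency lookup tables, and all numbers are illustrative.

      def moisture_max_factor(td_max_c, td_storm_c, precipitable_water):
          return precipitable_water(td_max_c) / precipitable_water(td_storm_c)

      pw = lambda td_c: 8.0 * 1.07**td_c      # crude exponential stand-in [mm]
      print(320.0 * moisture_max_factor(26.0, 22.0, pw))   # scaled storm depth [mm]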

  5. Modification and fixed-point analysis of a Kalman filter for orientation estimation based on 9D inertial measurement unit data.

    PubMed

    Brückner, Hans-Peter; Spindeldreier, Christian; Blume, Holger

    2013-01-01

    A common approach to high-accuracy sensor fusion based on 9D inertial measurement unit data is Kalman filtering. State-of-the-art floating-point filter algorithms differ in their computational complexity; nevertheless, real-time operation on a low-power microcontroller at high sampling rates is not possible. This work presents algorithmic modifications that reduce the computational demands of a two-step minimum-order Kalman filter. Furthermore, the required bit-width of a fixed-point filter version is explored. For evaluation, real-world data captured using an Xsens MTx inertial sensor are used. Changes in computational latency and orientation estimation accuracy due to the proposed algorithmic modifications and fixed-point number representation are evaluated in detail on a variety of processing platforms enabling on-board processing on wearable sensor platforms. PMID:24110597
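
    The representation question the paper studies can be illustrated with a Q15-style multiply; this shows only the quantization step, not the filter itself.

      import numpy as np

      def to_fixed(x, frac_bits=15):
          return np.int32(np.round(x * (1 << frac_bits)))    # Q1.15-style scaling

      def fixed_mul(a, b, frac_bits=15):
          return np.int32((np.int64(a) * np.int64(b)) >> frac_bits)

      gain, innov = 0.3721, -0.0514
      approx = fixed_mul(to_fixed(gain), to_fixed(innov)) / (1 << 15)
      print(approx, gain * innov)              # quantization error ~1e-5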

  6. A fast and accurate method for echocardiography strain rate imaging

    NASA Astrophysics Data System (ADS)

    Tavakoli, Vahid; Sahba, Nima; Hajebi, Nima; Nambakhsh, Mohammad Saleh

    2009-02-01

    Recently, strain and strain rate imaging have proved their superiority over classical motion estimation methods in myocardial evaluation as a novel technique for quantitative analysis of myocardial function. In this paper, we propose a novel strain rate imaging algorithm using a new optical flow technique which is faster and more accurate than the previous correlation-based methods. The new method presumes spatiotemporal constancy of the intensity and magnitude of the image. The method also makes use of the spline moment in a multiresolution approach. In addition, the cardiac central point is obtained using a combination of center of mass and endocardial tracking. It is shown that the proposed method helps overcome the intensity variations of ultrasound texture while preserving the ability of the motion estimation technique for different motions and orientations. Evaluation is performed on simulated, phantom (a contractile rubber balloon) and real sequences and proves that this technique is more accurate and faster than the previous methods.
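
    A generic dense optical-flow baseline of the kind such methods are compared against can be run with OpenCV's Farneback implementation; the speckle frames below are synthetic stand-ins for consecutive B-mode frames, and this is not the authors' spline-moment algorithm.

      import cv2
      import numpy as np

      noise = (np.random.default_rng(0).random((256, 256)) * 255).astype(np.uint8)
      prev = cv2.GaussianBlur(noise, (7, 7), 0)     # synthetic speckle frame
      curr = np.roll(prev, 2, axis=1)               # same frame shifted by 2 px

      flow = cv2.calcOpticalFlowFarneback(prev, curr, None, pyr_scale=0.5,
                                          levels=3, winsize=15, iterations=3,
                                          poly_n=5, poly_sigma=1.2, flags=0)
      print(flow[..., 0].mean())                    # ~2 px horizontal displacement
      strain_rate = np.gradient(flow[..., 0], axis=1)   # gradient of the velocity field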

  7. Global accuracy estimates of point and mean undulation differences obtained from gravity disturbances, gravity anomalies and potential coefficients

    NASA Technical Reports Server (NTRS)

    Jekeli, C.

    1979-01-01

    Through the method of truncation functions, the oceanic geoid undulation is divided into two constituents: an inner zone contribution expressed as an integral of surface gravity disturbances over a spherical cap; and an outer zone contribution derived from a finite set of potential harmonic coefficients. Global, average error estimates are formulated for undulation differences, thereby providing accuracies for a relative geoid. The error analysis focuses on the outer zone contribution for which the potential coefficient errors are modeled. The method of computing undulations based on gravity disturbance data for the inner zone is compared to the similar, conventional method which presupposes gravity anomaly data within this zone.

  8. Impact of single-point GPS integrated water vapor estimates on short-range WRF model forecasts over southern India

    NASA Astrophysics Data System (ADS)

    Kumar, Prashant; Gopalan, Kaushik; Shukla, Bipasha Paul; Shyam, Abhineet

    2016-09-01

    Specifying physically consistent and accurate initial conditions is one of the major challenges of numerical weather prediction (NWP) models. In this study, ground-based global positioning system (GPS) integrated water vapor (IWV) measurements available from the International Global Navigation Satellite Systems (GNSS) Service (IGS) station in Bangalore, India, are used to assess the impact of GPS data on NWP model forecasts over southern India. Two experiments are performed with and without assimilation of GPS-retrieved IWV observations during the Indian winter monsoon period (November-December 2012) using a four-dimensional variational (4D-Var) data assimilation method. Assimilation of GPS data improved the model IWV analysis as well as the subsequent forecasts. There is a positive impact of ~10 % over Bangalore and nearby regions. The Weather Research and Forecasting (WRF) model-predicted 24-h surface temperature forecasts also improved when compared with observations. Small but significant improvements were found in the rainfall forecasts compared to control experiments.

  9. A Method to Estimate the Probability That Any Individual Lightning Stroke Contacted the Surface Within Any Radius of Any Point

    NASA Technical Reports Server (NTRS)

    Huddleston, Lisa L.; Roeder, William; Merceret, Francis J.

    2010-01-01

    A technique has been developed to calculate the probability that any nearby lightning stroke is within any radius of any point of interest. In practice, this provides the probability that a nearby lightning stroke was within a key distance of a facility, rather than the error ellipses centered on the stroke. The process takes the bivariate Gaussian probability density provided by the lightning location error ellipse for the most likely location of a stroke and integrates it to obtain the probability that the stroke occurred inside any specified radius. This new facility-centric technique will be much more useful to space launch customers and may supersede the lightning error ellipse approach discussed in [5], [6].
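
    A direct numerical version of the described integral, assuming SciPy; the ellipse parameters, facility offset and radius are illustrative.

      import numpy as np
      from scipy.stats import multivariate_normal

      def prob_within_radius(dx, dy, sx, sy, rho, R, n=400):
          """P(stroke within R of a facility offset (dx, dy) from the most likely
          stroke location, given the error-ellipse covariance)."""
          cov = [[sx**2, rho * sx * sy], [rho * sx * sy, sy**2]]
          g = np.linspace(-R, R, n)
          X, Y = np.meshgrid(dx + g, dy + g)        # grid centred on the facility
          pdf = multivariate_normal([0.0, 0.0], cov).pdf(np.dstack((X, Y)))
          inside = (X - dx)**2 + (Y - dy)**2 <= R**2
          return float((pdf * inside).sum() * (g[1] - g[0])**2)

      print(prob_within_radius(dx=300, dy=-200, sx=250, sy=150, rho=0.3, R=500))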

  10. Point-source CO2 emission estimation from airborne sampled CO2 mass density: a case study for an industrial plant in Biganos, Southern France.

    NASA Astrophysics Data System (ADS)

    Carotenuto, Federico; Gioli, Beniamino; Toscano, Piero; Zaldei, Alessandro; Miglietta, Franco

    2013-04-01

    One interesting aspect of the airborne sampling of ground emissions of all types (from CO2 to particulate matter) is the ability to understand the source from which these emissions originated and, therefore, to obtain an estimate of that ground source's strength. Recently, an aerial campaign was conducted to sample emissions coming from a paper production plant in Biganos (France). The campaign made use of a Sky Arrow ERA (Environmental Research Aircraft) equipped with a mobile flux platform system. This system couples (among various instrumentation) a turbulence probe (BAT) and a LICOR 7500 open-path infra-red gas analyzer, which also enables the estimation of high-resolution fluxes of different scalars via the spatially integrated eddy-covariance technique. Aircraft data showed a marked increase in CO2 mass density downwind of the industrial area, while vertical profile samplings showed that concentrations changed with altitude. The CO2 source estimate was obtained using a simple mass balance approach, that is, by integrating the product of CO2 concentration and the mass flow rate through a cross-sectional area downwind of the point source. The results were compared with those obtained by means of a "forward-mode" Lagrangian dispersion model operated iteratively: the CO2 source strength was varied at each iteration to obtain an optimal convergence between the modeled atmospheric concentrations and the concentration data observed by the aircraft. The procedure makes use of wind speed and atmospheric turbulence data directly measured by the BAT probe at different altitudes. The two methods provided comparable estimates of the CO2 source, thus providing a substantial validation of the model-based iterative dispersion procedure. We consider that this data-model integration approach involving aircraft surveys and models may substantially enhance the estimation of point and area sources of any scalar, even in more complex
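
    A schematic of the mass-balance estimate described above, under simplifying assumptions (a regular grid of concentration enhancements and normal wind speeds on the downwind cross-section; all names and numbers hypothetical):

```python
import numpy as np

def mass_balance_source(c_enh, u_perp, dy, dz):
    """Point-source strength from a downwind crosswind plane.
    c_enh : 2D array of CO2 enhancement over background (kg/m^3)
    u_perp: 2D array of wind speed normal to the plane (m/s)
    dy, dz: grid spacing (m). Returns source strength in kg/s."""
    return np.sum(c_enh * u_perp) * dy * dz

# Toy example: 20 x 10 grid at 50 m spacing, uniform 5 m/s wind,
# Gaussian-shaped plume enhancement centered at 150 m height
y, z = np.meshgrid(np.linspace(-500, 500, 20), np.linspace(0, 450, 10))
c = 1e-6 * np.exp(-(y**2 / 2e5 + (z - 150.0)**2 / 2e4))  # kg/m^3
print(mass_balance_source(c, np.full_like(c, 5.0), dy=50.0, dz=50.0))
```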

  11. Structural Constraints and Earthquake Recurrence Estimates for the West Tahoe-Dollar Point Fault, Lake Tahoe Basin, California

    NASA Astrophysics Data System (ADS)

    Maloney, J. M.; Driscoll, N. W.; Kent, G.; Brothers, D. S.; Baskin, R. L.; Babcock, J. M.; Noble, P. J.; Karlin, R. E.

    2011-12-01

    Previous work in the Lake Tahoe Basin (LTB), California, identified the West Tahoe-Dollar Point Fault (WTDPF) as the most hazardous fault in the region. Onshore and offshore geophysical mapping delineated three segments of the WTDPF extending along the western margin of the LTB. The rupture patterns between the three WTDPF segments remain poorly understood. Fallen Leaf Lake (FLL), Cascade Lake, and Emerald Bay are three sub-basins of the LTB, located south of Lake Tahoe, that provide an opportunity to image primary earthquake deformation along the WTDPF and associated landslide deposits. We present results from recent (June 2011) high-resolution seismic CHIRP surveys in FLL and Cascade Lake, as well as complete multibeam swath bathymetry coverage of FLL. Radiocarbon dates obtained from the new piston cores acquired in FLL provide age constraints on the older FLL slide deposits and build on and complement previous work that dated the most recent event (MRE) in Fallen Leaf Lake at ~4.1-4.5 k.y. BP. The CHIRP data beneath FLL image slide deposits that appear to correlate with contemporaneous slide deposits in Emerald Bay and Lake Tahoe. A major slide imaged in FLL CHIRP data is slightly younger than the Tsoyowata ash (7950-7730 cal yrs BP) identified in sediment cores and appears synchronous with a major Lake Tahoe slide deposit (7890-7190 cal yrs BP). The equivalent age of these slides suggests the penultimate earthquake on the WTDPF may have triggered them. If correct, we postulate a recurrence interval of ~3-4 k.y. These results suggest the FLL segment of the WTDPF is near its seismic recurrence cycle. Additionally, CHIRP profiles acquired in Cascade Lake image the WTDPF for the first time in this sub-basin, which is located near the transition zone between the FLL and Rubicon Point Sections of the WTDPF. We observe two fault-strands trending N45°W across southern Cascade Lake for ~450 m. The strands produce scarps of ~5 m and ~2.7 m, respectively, on the lake

  12. Equipment Errors: A Prevalent Cause for Fallacy in Blood Pressure Recording - A Point Prevalence Estimate from an Indian Health University

    PubMed Central

    Mishra, Badrinarayan; Sinha, Nidhi Dinesh; Gidwani, Hitesh; Shukla, Sushil Kumar; Kawatra, Abhishek; Mehta, SC

    2013-01-01

    prevalent arm bladder cuff-mismatching can be important barriers to accurate BP measurement. PMID:23559698

  13. On the Choice of Access Point Selection Criterion and Other Position Estimation Characteristics for WLAN-Based Indoor Positioning.

    PubMed

    Laitinen, Elina; Lohan, Elena Simona

    2016-01-01

    The positioning based on Wireless Local Area Networks (WLAN) is one of the most promising technologies for indoor location-based services, generally using the information carried by Received Signal Strengths (RSS). One challenge, however, is the huge amount of data in the radiomap database due to the enormous number of hearable Access Points (AP) that could make the positioning system very complex. This paper concentrates on WLAN-based indoor location by comparing fingerprinting, path loss and weighted centroid based positioning approaches in terms of complexity and performance and studying the effects of grid size and AP reduction with several choices for appropriate selection criterion. All results are based on real field measurements in three multi-floor buildings. We validate our earlier findings concerning several different AP selection criteria and conclude that the best results are obtained with a maximum RSS-based criterion, which also proved to be the most consistent among the different investigated approaches. We show that the weighted centroid based low-complexity method is very sensitive to AP reduction, while the path loss-based method is also very robust to high percentage removals. Indeed, for fingerprinting, 50% of the APs can be removed safely with a properly chosen removal criterion without increasing the positioning error much. PMID:27213395
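
    Of the three approaches compared, the weighted centroid is the simplest to sketch. A minimal hypothetical form (the paper's exact weighting is not specified here; the exponent g is a tunable assumption) converts RSS from dBm to linear scale and averages the AP coordinates:

```python
import numpy as np

def weighted_centroid(ap_xy, rss_dbm, g=1.0):
    """Weighted-centroid position estimate: APs at ap_xy (N, 2),
    weights from linearized RSS raised to a tunable exponent g."""
    w = (10 ** (np.asarray(rss_dbm) / 10.0)) ** g   # dBm -> mW, optional exponent
    w /= w.sum()
    return w @ np.asarray(ap_xy, dtype=float)

# Three APs at known positions; strongest RSS dominates the estimate
aps = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
print(weighted_centroid(aps, rss_dbm=[-40, -60, -70]))
```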

  16. Contaminant dispersion prediction and source estimation with integrated Gaussian-machine learning network model for point source emission in atmosphere.

    PubMed

    Ma, Denglong; Zhang, Zaoxiao

    2016-07-01

    Gas dispersion models are important for predicting gas concentrations when a contaminant gas leakage occurs. Intelligent network models such as radial basis function (RBF) networks, back propagation (BP) neural networks and support vector machine (SVM) models can be used for gas dispersion prediction. However, the predictions from these network models, which take many inputs based on the original monitoring parameters, are not in good agreement with experimental data. Therefore, a new series of machine learning algorithm (MLA) models combining the classic Gaussian model with MLA algorithms is presented. The predictions from the new models are greatly improved. Among these models, the Gaussian-SVM model performs best, and its computation time is close to that of the classic Gaussian dispersion model. Finally, the Gaussian-MLA models were applied to identifying emission source parameters with the particle swarm optimization (PSO) method. The estimation performance of PSO with Gaussian-MLA is better than with the Gaussian model, the Lagrangian stochastic (LS) dispersion model, or network models based on the original monitoring parameters. Hence, the new Gaussian-MLA prediction model is potentially a good method for predicting contaminant gas dispersion, as well as a good forward model for the emission source parameter identification problem. PMID:27035273
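
    One way to read the Gaussian-MLA combination is that the physics-based plume prediction becomes the input feature of the learner instead of the raw monitoring parameters. A minimal sketch under that assumed reading (scikit-learn's SVR is a real API; the plume parameters and training data are hypothetical):

```python
import numpy as np
from sklearn.svm import SVR

def gaussian_plume(q, u, y, z, h, sy, sz):
    """Classic Gaussian plume concentration for source strength q (g/s),
    wind u (m/s), crosswind y, height z, stack height h, and dispersion
    parameters sy, sz (which would normally grow with downwind distance)."""
    return (q / (2 * np.pi * u * sy * sz) * np.exp(-y**2 / (2 * sy**2))
            * (np.exp(-(z - h)**2 / (2 * sz**2))
               + np.exp(-(z + h)**2 / (2 * sz**2))))

# Hypothetical training data: plume prediction as input, observed conc. as target
rng = np.random.default_rng(0)
gauss_pred = gaussian_plume(10.0, 3.0, rng.normal(0, 20, 200), 1.5, 8.0, 10.0, 15.0)
observed = gauss_pred * 1.2 + rng.normal(0, 1e-4, 200)   # stand-in measurements

model = SVR(kernel="rbf", C=10.0, epsilon=1e-5)
model.fit(gauss_pred.reshape(-1, 1), observed)           # learn the correction
print(model.predict(gauss_pred[:3].reshape(-1, 1)))
```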

  17. A test of the 'one-point method' for estimating maximum carboxylation capacity from field-measured, light-saturated photosynthesis

    SciTech Connect

    Martin G. De Kauwe; Serbin, Shawn P.; Lin, Yan -Shih; Wright, Ian J.; Medlyn, Belinda E.; Crous, Kristine Y.; Ellsworth, David S.; Maire, Vincent; Prentice, I. Colin; Atkin, Owen K.; Rogers, Alistair; Niinemets, Ulo; Meir, Patrick; Uddling, Johan; Togashi, Henrique F.; Tarvainen, Lasse; Weerasinghe, Lasantha K.; Evans, Bradley J.; Ishida, F. Yoko; Domingues, Tomas F.

    2015-12-31

    Here, simulations of photosynthesis by terrestrial biosphere models typically need a specification of the maximum carboxylation rate (Vcmax). Estimating this parameter using A–Ci curves (net photosynthesis, A, vs intercellular CO2 concentration, Ci) is laborious, which limits availability of Vcmax data. However, many multispecies field datasets include net photosynthetic rate at saturating irradiance and at ambient atmospheric CO2 concentration (Asat) measurements, from which Vcmax can be extracted using a ‘one-point method’.
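
    A sketch of the one-point idea, assuming the standard Rubisco-limited Farquhar form with dark respiration approximated as Rday = 0.015 Vcmax; the compensation point and Michaelis-Menten constant below are typical 25 °C values and, like the sample input, are assumptions here:

```python
def vcmax_one_point(asat, ci, gamma_star=42.75, km=717.0):
    """One-point estimate of Vcmax (umol m-2 s-1) from light-saturated
    photosynthesis Asat at ambient CO2.
    Rubisco-limited model: Asat = Vcmax * (Ci - gamma_star)/(Ci + Km) - Rday,
    with the common assumption Rday = 0.015 * Vcmax, solved for Vcmax."""
    return asat / ((ci - gamma_star) / (ci + km) - 0.015)

# Example: Asat = 20 umol m-2 s-1 at Ci = 275 umol mol-1 gives ~91 umol m-2 s-1
print(vcmax_one_point(20.0, 275.0))
```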

  18. BeiDou phase bias estimation and its application in precise point positioning with triple-frequency observable

    NASA Astrophysics Data System (ADS)

    Gu, Shengfeng; Lou, Yidong; Shi, Chuang; Liu, Jingnan

    2015-10-01

    At present, the BeiDou system (BDS) enables the practical application of triple-frequency observables in the Asia-Pacific region. Of the many possible benefits from the additional signal, this study focuses on exploiting the contribution of zero difference (ZD) ambiguity resolution (AR) to precise point positioning (PPP). A general modeling strategy for multi-frequency PPP AR is presented, in which the least squares ambiguity decorrelation adjustment (LAMBDA) method is employed in ambiguity fixing based on the full variance-covariance ambiguity matrix generated from the raw data processing model. Because reliable fixing of the BDS L1 ambiguity is more difficult, the LAMBDA method with partial ambiguity fixing is proposed to enable the independent and instantaneous resolution of the extra wide-lane (EWL) and wide-lane (WL) ambiguities. This mechanism of sequential ambiguity fixing is demonstrated for resolving ZD satellite phase bias and performing triple-frequency PPP AR with two reference station networks, with typical baselines of up to 400 and 800 km, respectively. Tests show that about of the EWL and WL phase biases of BDS have a consistency better than 0.1 cycle, and this value decreases to 80 % for the L1 phase bias in Experiment I, while all the solutions of Experiment II have a similar RMS of about 0.12 cycles. In addition, the repeatability of the daily mean phase bias agrees to 0.093 cycles and 0.095 cycles for EWL and WL on average, which is much smaller than the 0.20 cycles of L1. To assess the improvement of fixed PPP brought by applying the third frequency signal as well as the above phase bias, various ambiguity fixing strategies are considered in the numerical demonstration. It is shown that the impact of the additional signal is almost negligible when only the float solution is involved. It is also shown that fixing EWL and WL together, as opposed to single ambiguity fixing, leads to an improvement in PPP accuracy of about on average. Attributed to the efficient

  19. Accurate monotone cubic interpolation

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1991-01-01

    Monotone piecewise cubic interpolants are simple and effective. They are generally third-order accurate, except near strict local extrema where accuracy degenerates to second-order due to the monotonicity constraint. Algorithms for piecewise cubic interpolants, which preserve monotonicity as well as uniform third and fourth-order accuracy are presented. The gain of accuracy is obtained by relaxing the monotonicity constraint in a geometric framework in which the median function plays a crucial role.
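
    The paper relaxes the monotonicity limiter near extrema via a median function; as a baseline for what is being relaxed, here is a standard monotonicity-limited cubic Hermite slope computation (a Fritsch-Carlson-style sketch, not the paper's higher-order algorithm):

```python
import numpy as np

def monotone_cubic_slopes(x, y):
    """Limited nodal slopes for a monotone cubic Hermite interpolant:
    harmonic mean of adjacent secant slopes, flattened at local extrema.
    This is the standard limiter whose relaxation recovers accuracy."""
    h = np.diff(x)
    d = np.diff(y) / h                       # secant slopes
    m = np.empty_like(np.asarray(y, dtype=float))
    m[1:-1] = np.where(d[:-1] * d[1:] > 0,   # zero slope at sign changes
                       2 * d[:-1] * d[1:] / (d[:-1] + d[1:]), 0.0)
    m[0], m[-1] = d[0], d[-1]
    return m

x = np.linspace(0.0, 1.0, 6)
y = np.array([0.0, 0.1, 0.2, 0.9, 1.0, 1.0])
print(monotone_cubic_slopes(x, y))
```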

  20. A Method to Estimate the Probability that any Individual Cloud-to-Ground Lightning Stroke was Within any Radius of any Point

    NASA Technical Reports Server (NTRS)

    Huddleston, Lisa L.; Roeder, William P.; Merceret, Francis J.

    2011-01-01

    A new technique has been developed to estimate the probability that a nearby cloud-to-ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force Station. Future applications could include forensic meteorology.

  1. A Method to Estimate the Probability That Any Individual Cloud-to-Ground Lightning Stroke Was Within Any Radius of Any Point

    NASA Technical Reports Server (NTRS)

    Huddleston, Lisa L.; Roeder, William P.; Merceret, Francis J.

    2010-01-01

    A new technique has been developed to estimate the probability that a nearby cloud-to-ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force Station.

  2. High resolution measurements supported by electronic structure calculations of two naphthalene derivatives: [1,5]- and [1,6]-naphthyridine--estimation of the zero point inertial defect for planar polycyclic aromatic compounds.

    PubMed

    Gruet, S; Goubet, M; Pirali, O

    2014-06-21

    Polycyclic aromatic hydrocarbon (PAH) molecules are suspected to be present in the interstellar medium and to contribute to the broad and unresolved emission features, the so-called unidentified infrared bands. In the laboratory, very few studies report the rotationally resolved structure of this important class of molecules. In the present work, both experimental and theoretical approaches provide the first accurate determination of the rotational energy levels of two diazanaphthalenes: [1,5]- and [1,6]-naphthyridine. [1,6]-naphthyridine has been studied at high resolution, in the microwave (MW) region using a Fourier transform microwave spectrometer and in the far-infrared (FIR) region using synchrotron-based Fourier transform spectroscopy. The very accurate set of ground state (GS) constants deduced from the analysis of the MW spectrum allowed the analysis of the most intense modes in the FIR (ν38-GS centered at about 483 cm(-1) and ν34-GS centered at about 842 cm(-1)). In contrast with [1,6]-naphthyridine, pure rotation spectroscopy of [1,5]-naphthyridine cannot be performed for symmetry reasons, so the combined study of the two intense FIR modes (ν22-GS centered at about 166 cm(-1) and ν18-GS centered at about 818 cm(-1)) provided the GS and excited state constants. Although the analysis of the very dense rotational patterns of such large molecules remains very challenging, relatively accurate anharmonic density functional theory calculations proved to be a highly relevant tool supporting the analysis of both molecules. In addition, the good agreement between the experimental and calculated infrared spectra shows that the present theoretical approach should provide useful data for astrophysical models. Moreover, inertial defects calculated in the GS (ΔGS) of both molecules exhibit slightly negative values, as previously observed for planar species of this molecular family. We adjusted the semi-empirical relations to estimate the zero-point

  3. High resolution measurements supported by electronic structure calculations of two naphthalene derivatives: [1,5]- and [1,6]-naphthyridine—Estimation of the zero point inertial defect for planar polycyclic aromatic compounds

    SciTech Connect

    Gruet, S. E-mail: manuel.goubet@univ-lille1.fr; Pirali, O.; Goubet, M. E-mail: manuel.goubet@univ-lille1.fr

    2014-06-21

    the semi-empirical relations to estimate the zero-point inertial defect (Δ₀) of polycyclic aromatic molecules and confirmed the contribution of low frequency out-of-plane vibrational modes to the GS inertial defects of PAHs, which is indeed a key parameter to validate the analysis of such large molecules.

  4. Using Mean Absolute Relative Phase, Deviation Phase and Point-Estimation Relative Phase to Measure Postural Coordination in a Serial Reaching Task.

    PubMed

    Galgon, Anne K; Shewokis, Patricia A

    2016-03-01

    The objectives of this communication are to present the methods used to calculate mean absolute relative phase (MARP), deviation phase (DP) and point estimate relative phase (PRP) and to compare their utility in measuring postural coordination during the performance of a serial reaching task. MARP and DP are derived from continuous relative phase time series representing the relationship between two body segments or joints during movements. MARP is a single measure used to quantify the coordination pattern, and DP measures the stability of the coordination pattern. PRP also quantifies coordination patterns, by measuring the relationship between the timing of maximal or minimal angular displacements of two segments within cycles of movement. Seven young adults practiced a bilateral serial reaching task 300 times over 3 days. Relative phase measures were used to evaluate inter-joint relationships for shoulder-hip (proximal) and hip-ankle (distal) postural coordination at early and late learning. MARP, PRP and DP distinguished between proximal and distal postural coordination. There was no effect of practice on any of the relative phase measures for the group, but individual differences were seen over practice. Combined, MARP and DP estimated the stability of in-phase and anti-phase postural coordination patterns; however, additional qualitative movement analyses may be needed to interpret findings in a serial task. We discuss the strengths and limitations of using MARP and DP and compare them to PRP measures in assessing coordination patterns in the context of various types of skillful tasks. Key points: MARP, DP and PRP measure coordination between segments or joint angles; advantages and disadvantages of each measure should be considered in relation to the performance task; MARP and DP may capture coordination patterns and the stability of those patterns during discrete tasks or phases of movements within a task; PRP and the SD of PRP may capture coordination patterns and
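
    A hedged sketch of these computations for two joint-angle time series. The continuous relative phase (CRP) is taken from Hilbert-transform phases; MARP is computed here as the time-average of the absolute ensemble CRP, and DP as the across-trials SD of CRP averaged over normalized time, which is one common operationalization rather than the paper's exact procedure. All data are synthetic:

```python
import numpy as np
from scipy.signal import hilbert

def continuous_relative_phase(theta1, theta2):
    """CRP between two (centered) joint-angle time series via Hilbert phase."""
    p1 = np.angle(hilbert(theta1 - theta1.mean()))
    p2 = np.angle(hilbert(theta2 - theta2.mean()))
    return np.unwrap(p1) - np.unwrap(p2)

def marp_and_dp(trials1, trials2):
    """MARP: mean absolute ensemble CRP (deg); DP: mean across-trial SD (deg).
    trials1, trials2: arrays of shape (n_trials, n_samples)."""
    crp = np.array([continuous_relative_phase(a, b)
                    for a, b in zip(trials1, trials2)])
    marp = np.mean(np.abs(crp.mean(axis=0)))
    dp = np.mean(crp.std(axis=0))
    return np.degrees(marp), np.degrees(dp)

# Five synthetic trials of two segments with a slowly drifting phase offset
t = np.linspace(0, 2 * np.pi, 500)
trials1 = np.array([np.sin(t) for _ in range(5)])
trials2 = np.array([np.sin(t - 0.5 + 0.05 * k) for k in range(5)])
print(marp_and_dp(trials1, trials2))
```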

  5. Estimating Curie Point Depth and Heat Flow Map for Northern Red Sea Rift of Egypt and Its Surroundings, from Aeromagnetic Data

    NASA Astrophysics Data System (ADS)

    Saleh, Salah; Salk, Müjgan; Pamukçu, Oya

    2013-05-01

    In this study, we aim to map the Curie point depth surface for the northern Red Sea rift region and its surroundings based on the spectral analysis of aeromagnetic data. Spectral analysis technique was used to estimate the boundaries (top and bottom) of the magnetized crust. The Curie point depth (CPD) estimates of the Red Sea rift from 112 overlapping blocks vary from 5 to 20 km. The depths obtained for the bottom of the magnetized crust are assumed to correspond to Curie point depths where the magnetic layer loses its magnetization. Intermediate to deep Curie point depth anomalies (10-16 km) were observed in southern and central Sinai and the Gulf of Suez (intermediate heat flow) due to the uplifted basement rocks. The shallowest CPD of 5 km (associated with very high heat flow, ~235 mW m-2) is located at/around the axial trough of the Red Sea rift region especially at Brothers Island and Conrad Deep due to its association with both the concentration of rifting to the axial depression and the magmatic activity, whereas, beneath the Gulf of Aqaba, three Curie point depth anomalies belonging to three major basins vary from 10 km in the north to about 14 km in the south (with a mean heat flow of about 85 mW m-2). Moreover, low CPD anomalies (high heat flow) were also observed beneath some localities in the northern part of the Gulf of Suez at Hammam Fraun, at Esna city along River Nile, at west Ras Gharib in the eastern desert and at Safaga along the western shore line of the Red Sea rift. These resulted from deviatoric tensional stresses developing in the lithosphere which contribute to its further extension and may be due to the opening of the Gulf of Suez and/or the Red Sea rift. Furthermore, low CPD (with high heat flow anomaly) was observed in the eastern border of the study area, beneath northern Arabia, due to the quasi-vertical low-velocity anomaly which extends into the lower mantle and may be related to volcanism in northern Arabia. Dense microearthquakes
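
    A sketch of the centroid-based spectral estimate commonly used for this purpose (a Tanaka-style scheme assumed here, not necessarily the authors' exact implementation): the top depth Zt comes from the high-wavenumber slope of ln √P(k), the centroid depth Z0 from the low-wavenumber slope of ln(√P(k)/k), and the basal (Curie) depth is Zb = 2·Z0 − Zt. All inputs are synthetic:

```python
import numpy as np

def curie_depth(k, power, top_band, centroid_band):
    """Curie point depth (km) from a radially averaged power spectrum P(k),
    with k in rad/km and boolean masks selecting the two fit windows."""
    zt = -np.polyfit(k[top_band], np.log(np.sqrt(power[top_band])), 1)[0]
    z0 = -np.polyfit(k[centroid_band],
                     np.log(np.sqrt(power[centroid_band]) / k[centroid_band]), 1)[0]
    return 2.0 * z0 - zt

# Synthetic slab spectrum with top at 2 km and base at 15 km
k = np.linspace(0.01, 1.0, 200)
power = (np.exp(-2.0 * k) - np.exp(-15.0 * k)) ** 2
# Prints roughly 14 km: slightly below the true 15 km because the centroid
# approximation is biased at finite wavenumbers
print(curie_depth(k, power, top_band=k > 0.5, centroid_band=k < 0.06))
```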

  6. Location and depth estimation of point-dipole and line of dipoles using analytic signals of the magnetic gradient tensor and magnitude of vector components

    NASA Astrophysics Data System (ADS)

    Oruç, Bülent

    2010-01-01

    The magnetic gradient tensor (MGT) provides gradient components of potential fields with mathematical properties that allow processing techniques such as analytic signal methods. With the MGT emerging as a new tool for geophysical exploration, the mathematical modelling of gradient tensor fields is necessary for the interpretation of magnetic field measurements. The point-dipole and line of dipoles are used to approximate various magnetic objects. I investigate the maxima of the magnitude of magnetic vector components (MMVC) and of the analytic signals of the magnetic gradient tensor (ASMGT) resulting from point-dipole and line-of-dipoles sources in determining horizontal locations. I also present a method in which the depths of these sources are estimated from the ratio of the maximum of the MMVC to the maximum of the ASMGT. Theoretical examples have been carried out to test the feasibility of the method in obtaining source locations and depths. The method has been applied to the MMVC and ASMGT computed from total field data over a basic/ultrabasic body at the emerald deposit of Socotó, Bahia, Brazil and over a buried water supply pipe near Jadaguda Township, India. In both field examples, the method produces good correlations with previous interpretations.

  7. Investigating flow patterns and related dynamics in multi-instability turbulent plasmas using a three-point cross-phase time delay estimation velocimetry scheme

    NASA Astrophysics Data System (ADS)

    Brandt, C.; Thakur, S. C.; Tynan, G. R.

    2016-04-01

    Complexities of flow patterns in the azimuthal cross-section of a cylindrical magnetized helicon plasma and the corresponding plasma dynamics are investigated by means of a novel scheme for time delay estimation velocimetry. The advantage of this introduced method is the capability of calculating the time-averaged 2D velocity fields of propagating wave-like structures and patterns in complex spatiotemporal data. It is able to distinguish and visualize the details of simultaneously present superimposed entangled dynamics and it can be applied to fluid-like systems exhibiting frequently repeating patterns (e.g., waves in plasmas, waves in fluids, dynamics in planetary atmospheres, etc.). The velocity calculations are based on time delay estimation obtained from cross-phase analysis of time series. Each velocity vector is unambiguously calculated from three time series measured at three different non-collinear spatial points. This method, when applied to fast imaging, has been crucial to understand the rich plasma dynamics in the azimuthal cross-section of a cylindrical linear magnetized helicon plasma. The capabilities and the limitations of this velocimetry method are discussed and demonstrated for two completely different plasma regimes, i.e., for quasi-coherent wave dynamics and for complex broadband wave dynamics involving simultaneously present multiple instabilities.
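
    A hedged sketch of the underlying geometry: each pair of probes yields a time delay (estimated here from the cross-correlation peak rather than the paper's cross-phase spectra), each delay constrains the slowness vector s along that baseline via d·s = τ, and two non-collinear baselines determine s, hence the velocity v = s/|s|². All probe positions and signals are synthetic:

```python
import numpy as np

def delay(sig_a, sig_b, dt):
    """Time delay of sig_b relative to sig_a from the cross-correlation peak."""
    xc = np.correlate(sig_b, sig_a, mode="full")
    return (np.argmax(xc) - (len(sig_a) - 1)) * dt

def three_point_velocity(p0, p1, p2, s0, s1, s2, dt):
    """Velocity of a propagating pattern from three non-collinear probes:
    solve d_i . slowness = tau_i for the 2D slowness vector."""
    D = np.array([p1 - p0, p2 - p0], dtype=float)        # baselines (2 x 2)
    tau = np.array([delay(s0, s1, dt), delay(s0, s2, dt)])
    slow = np.linalg.solve(D, tau)                       # slowness (s/m)
    return slow / np.dot(slow, slow)                     # velocity (m/s)

# Toy 20 Hz plane wave moving in +x at 50 m/s, sampled at 10 kHz
dt, v_true = 1e-4, np.array([50.0, 0.0])
t = np.arange(4000) * dt
p0, p1, p2 = np.array([0.0, 0.0]), np.array([0.1, 0.0]), np.array([0.0, 0.1])

def wave(p):
    return np.sin(2 * np.pi * 20 * (t - np.dot(p, v_true) / np.dot(v_true, v_true)))

print(three_point_velocity(p0, p1, p2, wave(p0), wave(p1), wave(p2), dt))
```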

  8. Simple and Fast Continuous Estimation Method of Respiratory Frequency During Sleep using the Number of Extreme Points of Heart Rate Time Series

    NASA Astrophysics Data System (ADS)

    Yoshida, Yutaka; Yokoyama, Kiyoko; Ishii, Naohiro

    It is reported that the frequency component of approximately 0.25 Hz in the heart rate time series (respiratory sinus arrhythmia, RSA) corresponds to the respiratory frequency. In this paper, we propose a method for continuously estimating the respiratory frequency during sleep in real time, using the number of extreme points of the heart rate time series. The calculation is very simple, and the method can estimate the frequency continuously with a window width of about 18 beats. To evaluate the accuracy of the proposed method, the RSA frequency was calculated from heart rate time series recorded during supine rest. The minimum error rate, about 13.8%, was observed when the RSA had a time lag of about 11 s. When the RSA frequency time series was estimated during sleep, it varied regularly during non-REM sleep and irregularly during REM sleep. This result is consistent with previous reports on respiratory variability during sleep. Therefore, the proposed method could be applied to respiratory monitoring systems during sleep.
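
    A minimal sketch of the extreme-point idea: one oscillation contributes two extrema, so the frequency is approximately the extrema count divided by twice the window duration. The window length and all names are hypothetical:

```python
import numpy as np

def freq_from_extrema(rr_intervals, window_beats=18):
    """Estimate the RSA (respiratory) frequency of an RR-interval series
    by counting local extrema in a sliding window of ~18 beats."""
    x = np.asarray(rr_intervals, dtype=float)
    # Local extrema: sign change of the first difference
    ext = np.where(np.diff(np.sign(np.diff(x))) != 0)[0] + 1
    freqs = []
    for start in range(0, len(x) - window_beats):
        in_win = ext[(ext >= start) & (ext < start + window_beats)]
        duration = x[start:start + window_beats].sum()   # seconds
        freqs.append(len(in_win) / (2.0 * duration))     # two extrema per cycle
    return np.array(freqs)

# Toy RR series: ~1 s beats modulated at 0.25 Hz
t = np.cumsum(np.ones(120))
rr = 1.0 + 0.05 * np.sin(2 * np.pi * 0.25 * t)
print(freq_from_extrema(rr)[:5])   # ~0.25 Hz
```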

  9. Estimated times to exhaustion and power outputs at the gas exchange threshold, physical working capacity at the rating of perceived exertion threshold, and respiratory compensation point.

    PubMed

    Bergstrom, Haley C; Housh, Terry J; Zuniga, Jorge M; Camic, Clayton L; Traylor, Daniel A; Schmidt, Richard J; Johnson, Glen O

    2012-10-01

    The purposes of this study were to compare the power outputs and estimated times to exhaustion (T(lim)) at the gas exchange threshold (GET), physical working capacity at the rating of perceived exertion threshold (PWC(RPE)), and respiratory compensation point (RCP). Three male and 5 female subjects (mean ± SD: age, 22.4 ± 2.8 years) performed an incremental test to exhaustion on an electronically braked cycle ergometer to determine peak oxygen consumption rate, GET, and RCP. The PWC(RPE) was determined from ratings of perceived exertion data recorded during 3 continuous workbouts to exhaustion. The estimated T(lim) values for each subject at GET, PWC(RPE), and RCP were determined from power curve analyses (T(lim) = a·x^b). The results indicated that the PWC(RPE) (176 ± 55 W) was not significantly different from RCP (181 ± 54 W); however, GET (155 ± 42 W) was significantly less than PWC(RPE) and RCP. The estimated T(lim) for the GET (26.1 ± 9.8 min) was significantly greater than PWC(RPE) (14.6 ± 5.6 min) and RCP (11.2 ± 3.1 min). The PWC(RPE) occurred at a mean power output that was 13.5% greater than the GET and, therefore, it is likely that the perception of effort is not driven by the same mechanism that underlies the GET (i.e., lactate buffering). Furthermore, the PWC(RPE) and RCP were not significantly different and, therefore, these thresholds may be associated with the same mechanisms of fatigue, such as increased levels of interstitial and (or) arterial [K⁺]. PMID:22716291
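
    The power-curve estimation can be sketched as a log-log linear fit, since T(lim) = a·x^b linearizes to ln T = ln a + b ln x. The workbout data below are hypothetical:

```python
import numpy as np

# Hypothetical (power W, time-to-exhaustion min) pairs from three workbouts
p = np.array([200.0, 230.0, 260.0])
tlim = np.array([25.0, 12.0, 7.0])

b, ln_a = np.polyfit(np.log(p), np.log(tlim), 1)
a = np.exp(ln_a)

# Estimated time to exhaustion at a threshold power of interest (hypothetical)
print(a * 181.0 ** b)   # minutes
```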

  10. Application of the N-point moving average method for brachial pressure waveform-derived estimation of central aortic systolic pressure.

    PubMed

    Shih, Yuan-Ta; Cheng, Hao-Min; Sung, Shih-Hsien; Hu, Wei-Chih; Chen, Chen-Huan

    2014-04-01

    The N-point moving average (NPMA) is a mathematical low-pass filter that can smooth peaked noninvasively acquired radial pressure waveforms to estimate central aortic systolic pressure using a common denominator of N/4 (where N=the acquisition sampling frequency). The present study investigated whether the NPMA method can be applied to brachial pressure waveforms. In the derivation group, simultaneously recorded invasive high-fidelity brachial and central aortic pressure waveforms from 40 subjects were analyzed to identify the best common denominator. In the validation group, the NPMA method with the obtained common denominator was applied on noninvasive brachial pressure waveforms of 100 subjects. Validity was tested by comparing the noninvasive with the simultaneously recorded invasive central aortic systolic pressure. Noninvasive brachial pressure waveforms were calibrated to the cuff systolic and diastolic blood pressures. In the derivation study, an optimal denominator of N/6 was identified for NPMA to derive central aortic systolic pressure. The mean difference between the invasively/noninvasively estimated (N/6) and invasively measured central aortic systolic pressure was 0.1±3.5 and -0.6±7.6 mm Hg in the derivation and validation study, respectively. It satisfied the Association for the Advancement of Medical Instrumentation standard of 5±8 mm Hg. In conclusion, this method for estimating central aortic systolic pressure using either invasive or noninvasive brachial pressure waves requires a common denominator of N/6. By integrating the NPMA method into the ordinary oscillometric blood pressure determining process, convenient noninvasive central aortic systolic pressure values could be obtained with acceptable accuracy.
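
    A sketch of the NPMA operation itself: a plain moving average of width N/6 samples, where N is the sampling frequency, applied to a calibrated brachial waveform whose filtered peak is taken as the central SBP estimate. The waveform below is a synthetic stand-in:

```python
import numpy as np

def npma_central_sbp(waveform, fs, denominator=6):
    """N-point moving average estimate of central aortic SBP from a
    calibrated brachial waveform sampled at fs Hz: smooth with a window
    of fs/denominator points, then take the peak of the filtered wave."""
    n = max(1, int(round(fs / denominator)))
    kernel = np.ones(n) / n
    smoothed = np.convolve(waveform, kernel, mode="same")
    return smoothed.max()

# Toy calibrated brachial pulse (mmHg) at 250 Hz with a peaked systolic wave
fs = 250
t = np.linspace(0, 1, fs, endpoint=False)
pulse = (80 + 60 * np.exp(-((t - 0.15) / 0.05) ** 2)
         + 10 * np.exp(-((t - 0.4) / 0.1) ** 2))
print(npma_central_sbp(pulse, fs))   # lower than the 140 mmHg brachial peak
```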

  11. Analysing the Information Content of Point Measurements of the Soil Hydraulic State Variables by Global Sensitivity Analysis and Multiobjective Parameter Estimation

    NASA Astrophysics Data System (ADS)

    Werisch, Stefan; Lennartz, Franz; Schütze, Niels

    2015-04-01

    Inverse modeling has become a common approach to infer the parameters of the water retention and hydraulic conductivity functions from observations of the vadose zone state variables during dynamic experiments under varying boundary conditions. This study focuses on the estimation and investigation of the feasibility of effective soil hydraulic properties to describe the soil water flow in an undisturbed 1 m³ lysimeter. The lysimeter is equipped with 6 one-dimensional observation arrays consisting of 4 tensiometers and 4 water content probes each, leading to 6 replicated one-dimensional observations which establish the calibration database. Methods of global sensitivity analysis and multiobjective calibration strategies have been applied to examine the information content about the soil hydraulic parameters of the Mualem-van Genuchten (MvG) model contained in the individual data sets, to assess the tradeoffs between the different calibration data sets and to infer effective soil hydraulic properties for each of the arrays. The results show that (1) information about the MvG model parameters decreases with increasing depth, due to effects of overlapping soil layers and reduced soil water dynamics, (2) parameter uncertainty is affected by correlation between the individual parameters. Despite these difficulties, (3) effective one-dimensional parameter sets, which produce satisfying fits and have acceptable trade-offs, can be identified for all arrays, but (4) the array specific parameter sets vary significantly and cannot be transferred to simulate the water flow in other arrays, and (5) none of the parameter sets is suitable to simulate the integral water flow within the lysimeter. The results of the study challenge the feasibility of the inversely estimated soil hydraulic properties from multiple point measurements of the soil hydraulic state variables. Relying only on point measurements inverse modeling can lead to promising results regarding the observations

  13. Estimating extragalactic Faraday rotation

    NASA Astrophysics Data System (ADS)

    Oppermann, N.; Junklewitz, H.; Greiner, M.; Enßlin, T. A.; Akahori, T.; Carretti, E.; Gaensler, B. M.; Goobar, A.; Harvey-Smith, L.; Johnston-Hollitt, M.; Pratley, L.; Schnitzeler, D. H. F. M.; Stil, J. M.; Vacca, V.

    2015-03-01

    Observations of Faraday rotation for extragalactic sources probe magnetic fields both inside and outside the Milky Way. Building on our earlier estimate of the Galactic contribution, we set out to estimate the extragalactic contributions. We discuss the problems involved; in particular, we point out that taking the difference between the observed values and the Galactic foreground reconstruction is not a good estimate for the extragalactic contributions. We point out a degeneracy between the contributions to the observed values due to extragalactic magnetic fields and observational noise and comment on the dangers of over-interpreting an estimate without taking into account its uncertainty information. To overcome these difficulties, we develop an extended reconstruction algorithm based on the assumption that the observational uncertainties are accurately described for a subset of the data, which can overcome the degeneracy with the extragalactic contributions. We present a probabilistic derivation of the algorithm and demonstrate its performance using a simulation, yielding a high quality reconstruction of the Galactic Faraday rotation foreground, a precise estimate of the typical extragalactic contribution, and a well-defined probabilistic description of the extragalactic contribution for each data point. We then apply this reconstruction technique to a catalog of Faraday rotation observations for extragalactic sources. The analysis is done for several different scenarios, for which we consider the error bars of different subsets of the data to accurately describe the observational uncertainties. By comparing the results, we argue that a split that singles out only data near the Galactic poles is the most robust approach. We find that the dispersion of extragalactic contributions to observed Faraday depths is most likely lower than 7 rad/m2, in agreement with earlier results, and that the extragalactic contribution to an individual data point is poorly

  14. Optimal cut-off points of fasting plasma glucose for two-step strategy in estimating prevalence and screening undiagnosed diabetes and pre-diabetes in Harbin, China.

    PubMed

    Bao, Chundan; Zhang, Dianfeng; Sun, Bo; Lan, Li; Cui, Wenxiu; Xu, Guohua; Sui, Conglan; Wang, Yibaina; Zhao, Yashuang; Wang, Jian; Li, Hongyuan

    2015-01-01

    To identify optimal cut-off points of fasting plasma glucose (FPG) for two-step strategy in screening abnormal glucose metabolism and estimating prevalence in general Chinese population. A population-based cross-sectional study was conducted on 7913 people aged 20 to 74 years in Harbin. Diabetes and pre-diabetes were determined by fasting and 2 hour post-load glucose from the oral glucose tolerance test in all participants. Screening potential of FPG, cost per case identified by two-step strategy, and optimal FPG cut-off points were described. The prevalence of diabetes was 12.7%, of which 65.2% was undiagnosed. Twelve percent or 9.0% of participants were diagnosed with pre-diabetes using 2003 ADA criteria or 1999 WHO criteria, respectively. The optimal FPG cut-off points for two-step strategy were 5.6 mmol/l for previously undiagnosed diabetes (area under the receiver-operating characteristic curve of FPG 0.93; sensitivity 82.0%; cost per case identified by two-step strategy ¥261), 5.3 mmol/l for both diabetes and pre-diabetes or pre-diabetes alone using 2003 ADA criteria (0.89 or 0.85; 72.4% or 62.9%; ¥110 or ¥258), 5.0 mmol/l for pre-diabetes using 1999 WHO criteria (0.78; 66.8%; ¥399), and 4.9 mmol/l for IGT alone (0.74; 62.2%; ¥502). Using the two-step strategy, the underestimates of prevalence reduced to nearly 38% for pre-diabetes or 18.7% for undiagnosed diabetes, respectively. Approximately a quarter of the general population in Harbin was in hyperglycemic condition. Using optimal FPG cut-off points for two-step strategy in Chinese population may be more effective and less costly for reducing the missed diagnosis of hyperglycemic condition.
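
    The screening trade-off behind such cut-off points can be sketched with a standard ROC analysis. Here the Youden index stands in for the paper's sensitivity and cost-per-case optimization (an assumed criterion), and the FPG data are simulated; scikit-learn's roc_curve and roc_auc_score are real APIs:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
# Hypothetical FPG values (mmol/l): 0 = normal, 1 = undiagnosed diabetes
y = np.r_[np.zeros(900), np.ones(100)]
fpg = np.r_[rng.normal(5.0, 0.5, 900), rng.normal(7.0, 1.0, 100)]

fpr, tpr, thresholds = roc_curve(y, fpg)
best = np.argmax(tpr - fpr)               # Youden index J = sens + spec - 1
print(f"AUC={roc_auc_score(y, fpg):.2f}, "
      f"cut-off={thresholds[best]:.2f} mmol/l, sens={tpr[best]:.2f}")
```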

  17. Accurate quantum chemical calculations

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics are discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  19. Accurate Optical Reference Catalogs

    NASA Astrophysics Data System (ADS)

    Zacharias, N.

    2006-08-01

    Current and near future all-sky astrometric catalogs on the ICRF are reviewed with the emphasis on reference star data at optical wavelengths for user applications. The standard error of a Hipparcos Catalogue star position is now about 15 mas per coordinate. For the Tycho-2 data it is typically 20 to 100 mas, depending on magnitude. The USNO CCD Astrograph Catalog (UCAC) observing program was completed in 2004 and reductions toward the final UCAC3 release are in progress. This all-sky reference catalogue will have positional errors of 15 to 70 mas for stars in the 10 to 16 mag range, with a high degree of completeness. Proper motions for the about 60 million UCAC stars will be derived by combining UCAC astrometry with available early epoch data, including yet unpublished scans of the complete set of AGK2, Hamburg Zone astrograph and USNO Black Birch programs. Accurate positional and proper motion data are combined in the Naval Observatory Merged Astrometric Dataset (NOMAD) which includes Hipparcos, Tycho-2, UCAC2, USNO-B1, NPM+SPM plate scan data for astrometry, and is supplemented by multi-band optical photometry as well as 2MASS near infrared photometry. The Milli-Arcsecond Pathfinder Survey (MAPS) mission is currently being planned at USNO. This is a micro-satellite to obtain 1 mas positions, parallaxes, and 1 mas/yr proper motions for all bright stars down to about 15th magnitude. This program will be supplemented by a ground-based program to reach 18th magnitude on the 5 mas level.

  20. Application of modified export coefficient method on the load estimation of non-point source nitrogen and phosphorus pollution of soil and water loss in semiarid regions.

    PubMed

    Wu, Lei; Gao, Jian-en; Ma, Xiao-yi; Li, Dan

    2015-07-01

    The Chinese Loess Plateau is considered one of the most serious soil loss regions in the world; its annual sediment output accounts for 90 % of the total sediment load of the Yellow River. Most of the Loess Plateau has the very typical characteristic of "soil and water flowing together", and water flow in this area carries a high sand content. Serious soil loss also results in the loss of soil nitrogen and phosphorus. The special water and soil processes of the Loess Plateau mean that the loss mechanisms of water, sediment, nitrogen, and phosphorus differ from one another and are greatly different from those in other areas of China. In this study, a modified export coefficient method incorporating a rainfall erosivity factor was proposed to simulate and evaluate the non-point source (NPS) nitrogen and phosphorus loss load caused by soil and water loss in the Yanhe River basin of the hilly and gully area of the Loess Plateau. The results indicate that (1) compared with the traditional export coefficient method, the annual differences of NPS total nitrogen (TN) and total phosphorus (TP) load after considering the rainfall erosivity factor are obvious; the modified method is more in line with the general law of NPS pollution formation in a watershed and reflects the annual variability of NPS pollution more accurately. (2) Under both the traditional and modified conditions, annual changes of NPS TN and TP load in the four counties (districts) showed similar trends from 1999 to 2008; the load emission intensity is closely related not only to rainfall intensity but also to the regional distribution of land use and other pollution sources. (3) The output structure, source composition, and contribution rate of NPS pollution load under the modified method are basically the same as under the traditional method. The average output structure of TN from land use and rural life is about 66.5 and 17.1 %, and that of TP about 53.8 and 32.7 %; the maximum source composition of TN (59 %) is farmland; the maximum source
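
    A sketch of the modified export coefficient idea: the classic model L = Σ E_i·A_i scaled by a year-specific rainfall-erosivity ratio, which redistributes the load between wet and dry years. The coefficients, areas and erosivity values below are hypothetical:

```python
import numpy as np

def nps_load(export_coeffs, areas, erosivity, mean_erosivity):
    """Modified export coefficient load estimate (kg/yr) for one year.
    export_coeffs: kg/(km^2 yr) per land-use class; areas: km^2.
    The rainfall erosivity ratio rescales the classic static estimate."""
    base = np.dot(export_coeffs, areas)          # classic L = sum(E_i * A_i)
    return base * (erosivity / mean_erosivity)   # rainfall-erosivity adjustment

e_tn = np.array([250.0, 800.0, 120.0])   # forest, farmland, grassland
area = np.array([3000.0, 2500.0, 2200.0])
for year_r in (4500.0, 6200.0):          # dry vs wet year erosivity
    print(nps_load(e_tn, area, year_r, mean_erosivity=5300.0))
```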

  1. Real-time estimation of prostate tumor rotation and translation with a kV imaging system based on an iterative closest point algorithm.

    PubMed

    Tehrani, Joubin Nasehi; O'Brien, Ricky T; Poulsen, Per Rugaard; Keall, Paul

    2013-12-01

    Previous studies have shown that during cancer radiotherapy a small translation or rotation of the tumor can lead to errors in dose delivery. Current best practice in radiotherapy accounts for tumor translations, but is unable to address rotation due to the lack of a reliable real-time estimate. We have developed a method based on the iterative closest point (ICP) algorithm that can compute rotation from kilovoltage x-ray images acquired during radiation treatment delivery. A total of 11 748 kilovoltage (kV) images acquired from ten patients (one fraction for each patient) were used to evaluate our tumor rotation algorithm. For each kV image, the three-dimensional coordinates of three fiducial markers inside the prostate were calculated. These coordinates were used as input to the ICP algorithm to calculate the real-time tumor rotation and translation around three axes. The results show that the root mean square error of the real-time tumor displacement calculation improved from a mean of 0.97 mm with stand-alone translation to a mean of 0.16 mm when real-time rotation and translation were estimated together with the ICP algorithm. The standard deviation (SD) of rotation for the ten patients was 2.3°, 0.89° and 0.72° around the right-left (RL), anterior-posterior (AP) and superior-inferior (SI) directions, respectively. The correlation between all six degrees of freedom showed that the highest correlation belonged to AP and SI translation, with a correlation of 0.67. The second highest correlation in our study was between rotation around RL and rotation around AP, with a correlation of -0.33. Our real-time algorithm for calculation of rotation also confirms previous studies showing that the maximum SD belongs to AP translation and rotation around RL. ICP is a reliable and fast algorithm for estimating real-time tumor rotation, which could create a pathway to investigational clinical treatment studies requiring real
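
    With three matched fiducial markers, the core step of each ICP iteration reduces to a least-squares rigid alignment. A sketch of that step (a Kabsch-style SVD solution, offered as a generic illustration rather than the authors' implementation; marker coordinates are hypothetical):

```python
import numpy as np

def rigid_transform(ref, cur):
    """Best-fit rotation R and translation t mapping ref -> cur (N x 3 arrays),
    the least-squares core of each ICP iteration."""
    ref_c, cur_c = ref - ref.mean(axis=0), cur - cur.mean(axis=0)
    u, _, vt = np.linalg.svd(ref_c.T @ cur_c)
    d = np.sign(np.linalg.det(vt.T @ u.T))        # guard against reflection
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = cur.mean(axis=0) - r @ ref.mean(axis=0)
    return r, t

# Three prostate fiducials (mm), rotated 3 deg about SI and shifted 1 mm AP
ref = np.array([[10.0, 0.0, 0.0], [0.0, 10.0, 5.0], [-5.0, -5.0, 10.0]])
a = np.radians(3.0)
rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0,        0.0,       1.0]])
cur = ref @ rz.T + np.array([0.0, 1.0, 0.0])
r, t = rigid_transform(ref, cur)
print(np.degrees(np.arctan2(r[1, 0], r[0, 0])), t)   # ~3.0 deg, [0, 1, 0]
```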

  2. Estimating contaminant mass discharge: a field comparison of the multilevel point measurement and the integral pumping investigation approaches and their uncertainties.

    PubMed

    Béland-Pelletier, Caroline; Fraser, Michelle; Barker, Jim; Ptak, Thomas

    2011-03-25

    In this field study, two approaches to assess contaminant mass discharge were compared: the sampling of multilevel wells (MLS) and the integral groundwater investigation (or integral pumping test, IPT) that makes use of the concentration-time series obtained from pumping wells. The MLS approach used concentrations, hydraulic conductivity and gradient rather than direct chemical flux measurements, while the IPT made use of a simplified analytical inversion. The two approaches were applied at a control plane located approximately 40 m downgradient of a gasoline source at Canadian Forces Base Borden, Ontario, Canada. The methods yielded similar estimates of the mass discharging across the control plane. The sources of uncertainty in the mass discharge in each approach were evaluated, including the uncertainties inherent in the underlying assumptions and procedures. The maximum uncertainty of the MLS method was about 67%, and about 28% for the IPT method in this specific field situation. For the MLS method, the largest relative uncertainty (62%) was attributed to the limited sampling density (0.63 points/m(2)), through a novel comparison with a denser sampling grid nearby. A five-fold increase of the sampling grid density would have been required to reduce the overall relative uncertainty of the MLS method to about the same level as that of the IPT method. Uncertainty in the complete coverage of the control plane provided the largest relative uncertainty (37%) in the IPT method. While the MLS and IPT methods to assess contaminant mass discharge are attractive assessment tools, the large relative uncertainty in either method found for this reasonably well monitored and simple aquifer suggests that results in more complex plumes in more heterogeneous aquifers should be viewed with caution.
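
    A sketch of the MLS-style estimate: each sampling point contributes its concentration times the local Darcy flux (q = K·i) times its share of the control plane area. All values below are hypothetical:

```python
import numpy as np

def mls_mass_discharge(conc, k_sat, gradient, cell_area):
    """Contaminant mass discharge (g/day) across a control plane from
    multilevel point data: sum of C_j * q_j * A_j with Darcy flux q = K * i."""
    q = np.asarray(k_sat) * gradient            # Darcy flux (m/day)
    return np.sum(np.asarray(conc) * q * cell_area)

conc = [0.5, 2.0, 1.2, 0.1]          # g/m^3 at four sampling points
k_sat = [8.0, 6.5, 7.2, 9.0]         # m/day hydraulic conductivity
print(mls_mass_discharge(conc, k_sat, gradient=0.004, cell_area=1.6))
```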

  3. Estimating contaminant mass discharge: A field comparison of the multilevel point measurement and the integral pumping investigation approaches and their uncertainties

    NASA Astrophysics Data System (ADS)

    Béland-Pelletier, Caroline; Fraser, Michelle; Barker, Jim; Ptak, Thomas

    2011-03-01

    In this field study, two approaches to assess contaminant mass discharge were compared: the sampling of multilevel wells (MLS) and the integral groundwater investigation (or integral pumping test, IPT), which makes use of the concentration-time series obtained from pumping wells. The MLS approach used concentrations, hydraulic conductivity, and gradient rather than direct chemical flux measurements, while the IPT made use of a simplified analytical inversion. The two approaches were applied at a control plane located approximately 40 m downgradient of a gasoline source at Canadian Forces Base Borden, Ontario, Canada. The methods yielded similar estimates of the mass discharging across the control plane. The sources of uncertainty in the mass discharge in each approach were evaluated, including the uncertainties inherent in the underlying assumptions and procedures. The maximum uncertainty of the MLS method was about 67%, and about 28% for the IPT method in this specific field situation. For the MLS method, the largest relative uncertainty (62%) was attributed to the limited sampling density (0.63 points/m²), through a novel comparison with a denser sampling grid nearby. A five-fold increase of the sampling grid density would have been required to reduce the overall relative uncertainty for the MLS method to about the same level as that for the IPT method. Uncertainty in the complete coverage of the control plane provided the largest relative uncertainty (37%) in the IPT method. While the MLS and IPT methods to assess contaminant mass discharge are attractive assessment tools, the large relative uncertainty in either method found for this reasonably well-monitored and simple aquifer suggests that results in more complex plumes in more heterogeneous aquifers should be viewed with caution.
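
    The multilevel point measurement calculation described in these records reduces to summing concentration times Darcy flux over the control-plane area represented by each sampling point. A minimal sketch with hypothetical values follows (the study's uncertainty propagation is far more involved):

        import numpy as np

        # Hypothetical MLS data, one entry per sampling point on the control plane
        conc = np.array([1.2, 0.4, 2.5, 0.0])      # concentration (g/m^3)
        K = np.array([8e-5, 6e-5, 9e-5, 7e-5])     # hydraulic conductivity (m/s)
        gradient = 0.004                           # hydraulic gradient (dimensionless)
        area = np.full(4, 1.6)                     # plane area per point (m^2), ~0.63 points/m^2

        darcy_flux = K * gradient                  # q = K * i (m/s)
        mass_discharge = np.sum(conc * darcy_flux * area)  # g/s across the control plane
        print(mass_discharge * 86400.0, "g/day")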

  4. A rapid and accurate method for the quantitative estimation of natural polysaccharides and their fractions using high performance size exclusion chromatography coupled with multi-angle laser light scattering and refractive index detector.

    PubMed

    Cheong, Kit-Leong; Wu, Ding-Tao; Zhao, Jing; Li, Shao-Ping

    2015-06-26

    In this study, a rapid and accurate method for the quantitative analysis of natural polysaccharides and their different fractions was developed. Firstly, high performance size exclusion chromatography (HPSEC) was utilized to separate the natural polysaccharides. The molecular masses of their fractions were then determined by multi-angle laser light scattering (MALLS). Finally, quantification of the polysaccharides or their fractions was performed based on their response to a refractive index detector (RID) and their universal refractive index increment (dn/dc). The accuracy of the developed method was determined for the quantification of individual and mixed polysaccharide standards, including konjac glucomannan, CM-arabinan, xyloglucan, larch arabinogalactan, oat β-glucan, dextran (410, 270, and 25 kDa), mixed xyloglucan and CM-arabinan, and mixed dextran 270 K and CM-arabinan; average recoveries were between 90.6% and 98.3%. The limits of detection (LOD) and quantification (LOQ) ranged from 10.68 to 20.25 μg/mL and from 42.70 to 68.85 μg/mL, respectively. Compared to the conventional phenol-sulfuric acid assay and HPSEC coupled with evaporative light scattering detection (HPSEC-ELSD), the developed HPSEC-MALLS-RID method based on the universal dn/dc is much simpler, more rapid, and more accurate, requiring neither individual polysaccharide standards nor calibration curves. The method was also successfully applied to the quantitative analysis of polysaccharides and their different fractions from three medicinal plants of the genus Panax: Panax ginseng, Panax notoginseng, and Panax quinquefolius. The results suggest that the HPSEC-MALLS-RID method based on the universal dn/dc could be used as a routine technique for the quantification of polysaccharides and their fractions in natural resources. PMID:25990349
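
    The quantification principle here is that the refractive index detector response scales with concentration times dn/dc, so a single instrument constant replaces per-polysaccharide calibration curves. A minimal sketch under that assumption (the instrument constant and peak area are hypothetical; 0.146 mL/g is a commonly used dn/dc for polysaccharides in aqueous solution, not necessarily the paper's value):

        # RID response model: peak_area = k_rid * concentration * (dn/dc)
        k_rid = 2.0e7        # hypothetical instrument constant, determined once per detector
        dn_dc = 0.146        # universal refractive index increment (mL/g), assumed value
        peak_area = 3.6e3    # hypothetical integrated RID peak area of one HPSEC fraction

        conc_g_per_ml = peak_area / (k_rid * dn_dc)   # no polysaccharide-specific standard needed
        print(f"fraction concentration: {conc_g_per_ml * 1000.0:.2f} mg/mL")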

  6. Melting point, boiling point, and symmetry.

    PubMed

    Abramowitz, R; Yalkowsky, S H

    1990-09-01

    The relationship between the melting point of a compound and its chemical structure remains poorly understood. The melting point of a compound can be related to certain of its other physicochemical properties. The boiling point of a compound can be determined from additive constitutive properties, but the melting point can be estimated only with the aid of nonadditive constitutive parameters. The melting point of some non-hydrogen-bonding, rigid compounds can be estimated by the equation MP = 0.772 * BP + 110.8 * SIGMAL + 11.56 * ORTHO + 31.9 * EXPAN - 240.7, where MP is the melting point of the compound in Kelvin, BP is the boiling point, SIGMAL is the logarithm of the symmetry number, EXPAN is the cube of the eccentricity of the compound, and ORTHO is the number of groups that are ortho to another group.
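
    A direct transcription of the estimating equation, assuming SIGMAL is a base-10 logarithm (the abstract does not state the base) and with all temperatures in Kelvin:

        import math

        def estimate_melting_point(bp, symmetry_number, ortho_groups, eccentricity):
            """MP estimate (K) for rigid, non-hydrogen-bonding compounds per the equation above."""
            sigmal = math.log10(symmetry_number)   # assumed base-10 logarithm of symmetry number
            expan = eccentricity ** 3              # EXPAN: cube of the compound's eccentricity
            return 0.772 * bp + 110.8 * sigmal + 11.56 * ortho_groups + 31.9 * expan - 240.7

        # Hypothetical rigid compound: BP 500 K, symmetry number 4, no ortho groups
        print(estimate_melting_point(500.0, 4, 0, 0.0))   # ~212 K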

  7. Estimating Implementation and Operational Costs of an Integrated Tiered CD4 Service including Laboratory and Point of Care Testing in a Remote Health District in South Africa

    PubMed Central

    Cassim, Naseem; Coetzee, Lindi M.; Schnippel, Kathryn; Glencross, Deborah K.

    2014-01-01

    Background An integrated tiered service delivery model (ITSDM) has been proposed to provide 'full-coverage' of CD4 services throughout South Africa. Five tiers are described, defined by testing volumes and the number of referring health-facilities. These include: (1) Tier-1/decentralized point-of-care (POC) service in a single site; Tier-2/POC-hub servicing 8–10 health-clinics, processing <30–40 samples; Tier-3/community laboratories servicing ∼50 health-clinics, processing <150 samples/day; and high-volume centralized laboratories (Tier-4 and Tier-5) processing <300 or >600 samples/day and serving >100 or >200 health-clinics, respectively. The objective of this study was to establish the costs of the existing service and of ITSDM Tiers 1, 2, and 3 in a remote, under-serviced district in South Africa. Methods Historical health-facility workload volumes from the Pixley-ka-Seme district, the total volumes of CD4 tests performed by the adjacent district referral CD4 laboratories, the locations of all referring clinics, and the related laboratory-to-result turn-around time (LTR-TAT) data were extracted from the NHLS Corporate-Data-Warehouse for the period April 2012 to March 2013. Tiers were costed separately (as a cost-per-result), including equipment, staffing, reagent, and test consumable costs. A one-way sensitivity analysis provided for changes in reagent price, test volumes, and personnel time. Results The lowest cost-per-result was noted for the existing laboratory-based Tiers 4 and 5 ($6.24 and $5.37, respectively), but with a related increased LTR-TAT of >24–48 hours. Full service coverage with a TAT of <6 hours could be achieved with the placement of twenty-seven Tier-1/POC sites or eight Tier-2/POC-hubs, at a cost-per-result of $32.32 and $15.88, respectively. A single district Tier-3 laboratory also ensured full service coverage and a <24-hour LTR-TAT for the district at $7.42 per test. Conclusion Implementing a single Tier-3/community laboratory to extend and improve CD4 service delivery appears to be the most cost-effective option for this remote district.

  8. Fatty acid ethyl esters in hair as alcohol markers: estimating a reliable cut-off point by evaluation of 1,057 autopsy cases.

    PubMed

    Hastedt, Martin; Bossers, Lydia; Krumbiegel, Franziska; Herre, Sieglinde; Hartwig, Sven

    2013-06-01

    Alcohol abuse is a widespread problem, especially in Western countries, so it is important to have markers of alcohol consumption with validated cut-off points. For many years research has focused on the analysis of hair for alcohol markers, but data on the performance and reliability of cut-off values are still lacking. The evaluation of 1,057 cases from 2005 to 2011 provided a larger sample group for estimating an applicable cut-off value than earlier studies on fatty acid ethyl esters (FAEEs) in hair. The FAEE concentrations in hair, police investigation reports, medical history, and the macroscopic and microscopic alcohol-typical results from autopsy, such as liver, pancreas, and cardiac findings, were taken into account in this study. In 80.2% of all 1,057 cases, pathologic findings that may be related to alcohol abuse were reported. The cases were divided into social drinkers (n = 168), alcohol abusers (n = 502), and cases without information on alcohol use. The median FAEE concentration in the group of social drinkers was 0.302 ng/mg (range 0.008-14.3 ng/mg); in the group of alcohol abusers, a median of 1.346 ng/mg (range 0.010-83.7 ng/mg) was found. Before June 2009 the hair FAEE test was routinely applied to a proximal hair segment of 0-6 cm, changing to a routinely investigated hair length of 3 cm after 2009, as proposed by the Society of Hair Testing (SoHT). The method showed significant differences between the groups of social drinkers and alcoholics, leading to an improvement in the postmortem detection of alcohol abuse. Nevertheless, the performance of the method was rather poor, with an area under the receiver operating characteristic curve (ROC AUC) of 0.745. The optimal cut-off value for differentiating between social and chronic excessive drinking calculated for hair FAEEs was 1.08 ng/mg, with a sensitivity of 56% and a specificity of 80%. In relation to the "Consensus on Alcohol Markers 2012
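
    The cut-off reported here comes from a receiver operating characteristic analysis; the selection step can be sketched by maximising Youden's J (sensitivity + specificity - 1) over candidate concentrations. The group distributions below are simulated stand-ins, not the study data:

        import numpy as np

        rng = np.random.default_rng(0)
        # Simulated hair FAEE concentrations (ng/mg) for the two groups
        social = rng.lognormal(mean=-1.2, sigma=1.0, size=168)
        abusers = rng.lognormal(mean=0.3, sigma=1.2, size=502)

        best_j, best_cut = -1.0, None
        for cut in np.unique(np.concatenate([social, abusers])):
            sens = np.mean(abusers >= cut)    # true positive rate at this cut-off
            spec = np.mean(social < cut)      # true negative rate at this cut-off
            j = sens + spec - 1.0             # Youden's J statistic
            if j > best_j:
                best_j, best_cut = j, cut
        print(f"optimal cut-off ~ {best_cut:.2f} ng/mg (J = {best_j:.2f})")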

  9. Thunderstorm activity on the early Earth: some estimates from the point of view of the role of electric discharges in the formation of prebiotic conditions

    NASA Astrophysics Data System (ADS)

    Serozhkin, Yu.

    2008-09-01

    increase the quantity of lightning by 50% [7]. Examinations of the processes of charge separation in clouds point to a very narrow range of atmospheric temperature and pressure within which charge separation is possible. It must be said that the electrostatic charging of thunderstorm clouds has not yet received a satisfactory explanation. One unexplained property is the formation, at an altitude of 6-8 km and a temperature of about -15 °C, of a negatively charged layer a few hundred meters thick. At this altitude and pressure, water can exist in three phases, and charge separation occurs in this layer through the interaction of ice crystals with snow pellets. Above this layer there is a so-called charge reversal, an unexplained phenomenon whereby ice crystals below the layer are charged positively and above it negatively, while snow pellets above the layer are charged positively and below it negatively. The negatively charged layer thus consists of negatively charged ice crystals and snow pellets; positively charged snow pellets form a charge at the top of the cloud, and positively charged ice crystals form a positive charge at the bottom of the cloud. It follows that the dependence of the electrostatic charging of thunderstorm clouds on atmospheric parameters is extremely difficult to estimate. About the influence of pressure only general statements can be made: at a pressure corresponding to the charge-reversal point (about 250 Torr at an altitude of 8 km), ordinary thunderstorm activity should decrease. This means that if the atmospheric pressure during the formation of prebiotic conditions was less than 100 Torr, it is necessary to discuss the role of electrical discharges connected with the accumulation of charge on particles (sand storms, tornadoes) or on ash during volcanic eruptions. What traces of thunderstorm activity might be sought in the past? It is known that cloud-to-ground lightning

  10. How accurate are precipitation retrievals from space-borne passive microwave radiometers? - Evaluation of satellite retrieval errors in rain estimates from TMI, AMSR-E, SSM/I, SSMIS, AMSU-B, and MHS over the continental United States

    NASA Astrophysics Data System (ADS)

    Tang, Ling; Tian, Yudong; Lin, Xin

    2014-05-01

    Precipitation retrievals from space-borne passive microwave (PMW) radiometers are the major source of data in modern satellite-based global rainfall datasets. The error characteristics of these individual retrievals directly affect the merged end products and applications, but have not been systematically studied. In this paper, we undertake a critical investigation of the seasonal and sensor-type-dependent skill and errors of PMW radiometers over the continental United States (CONUS). A high-resolution ground radar-based dataset, NOAA's National Severe Storms Laboratory (NSSL) Q2 radar-derived precipitation estimates, is used as the ground reference. The high spatial and temporal resolution of the reference data allows near-instantaneous collocation (within 5 minutes) with the satellite overpasses and therefore a relatively precise comparison. We compare precipitation retrievals from twelve satellites, including six imagers (one each of TMI, AMSR-E, and SSM/I, and three SSMIS) and six sounders (three AMSU-B and three MHS), against the Q2 radar precipitation. Results show that precipitation retrievals from PMW radiometers exhibit fairly systematic biases depending on season and precipitation intensity, with overestimates in summer at moderate to high precipitation rates and underestimates in winter at low and moderate precipitation rates. This result also appears in satellite-based multi-sensor precipitation products, indicating that uncertainties are transferred from the single-sensor inputs to the multi-sensor precipitation estimates. Meanwhile, retrievals from the microwave imagers have notably better performance than those from the microwave sounders. Compared to the imagers, the sounders have biases about two times higher at small rain rates and two to three times higher at moderate to high rain rates. The sounders also have a narrower dynamic range and higher random errors, which are also detailed in the paper.

  11. How to estimate the cost of point-of-care CD4 testing in program settings: an example using the Alere Pima Analyzer in South Africa.

    PubMed

    Larson, Bruce; Schnippel, Kathryn; Ndibongo, Buyiswa; Long, Lawrence; Fox, Matthew P; Rosen, Sydney

    2012-01-01

    Integrating POC CD4 testing technologies into HIV counseling and testing (HCT) programs may improve post-HIV-testing linkage to care and treatment. As evaluations of these technologies in program settings continue, estimates of the cost of POC CD4 tests to the service provider will be needed, and such estimates have begun to be reported. Without a consistent and transparent methodology, estimates of the cost per CD4 test using POC technologies are likely to be difficult to compare and may lead to erroneous conclusions about costs and cost-effectiveness. This paper provides a step-by-step approach for estimating the cost per CD4 test from a provider's perspective. As an example, the approach is applied to one specific POC technology, the Pima Analyzer. The costing approach is illustrated with data from a mobile HCT program in Gauteng Province, South Africa. For this program, the cost per test in 2010 was estimated at $23.76 (material costs = $8.70; labor cost per test = $7.33; equipment, insurance, and daily quality control = $7.72). Labor and equipment costs can vary widely depending on how the program operates and the number of CD4 tests completed over time. Additional costs not included in the above analysis, for ongoing training, supervision, and quality control, are likely to increase the cost per test further. The main contribution of this paper is to outline a methodology for estimating the costs of incorporating POC CD4 testing technologies into an HCT program. The details of the program setting matter significantly for the cost estimate, so such details should be clearly documented to improve the consistency, transparency, and comparability of cost estimates.
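
    The arithmetic behind the provider-perspective estimate can be sketched with the reported per-test components; the equipment figures below are hypothetical numbers chosen only to reproduce the reported $7.72 equipment component:

        # Reported per-test components for the 2010 mobile HCT program
        materials_per_test = 8.70
        labor_per_test = 7.33

        # Equipment, insurance, and daily quality control amortised over annual volume
        annual_equipment_cost = 7720.0   # hypothetical annualised cost
        tests_per_year = 1000            # hypothetical annual test volume

        equipment_per_test = annual_equipment_cost / tests_per_year
        cost_per_test = materials_per_test + labor_per_test + equipment_per_test
        print(f"cost per CD4 test: ${cost_per_test:.2f}")  # $23.75 vs. $23.76 reported (rounding)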

  12. NNLOPS accurate associated HW production

    NASA Astrophysics Data System (ADS)

    Astill, William; Bizon, Wojciech; Re, Emanuele; Zanderighi, Giulia

    2016-06-01

    We present a next-to-next-to-leading order accurate description of associated HW production consistently matched to a parton shower. The method is based on reweighting events obtained with the HW plus one jet NLO accurate calculation implemented in POWHEG, extended with the MiNLO procedure, to reproduce NNLO accurate Born distributions. Since the Born kinematics is more complex than the cases treated before, we use a parametrization of the Collins-Soper angles to reduce the number of variables required for the reweighting. We present phenomenological results at 13 TeV, with cuts suggested by the Higgs Cross section Working Group.

  13. A fast and accurate decoder for underwater acoustic telemetry

    NASA Astrophysics Data System (ADS)

    Ingraham, J. M.; Deng, Z. D.; Li, X.; Fu, T.; McMichael, G. A.; Trumbo, B. A.

    2014-07-01

    The Juvenile Salmon Acoustic Telemetry System, developed by the U.S. Army Corps of Engineers, Portland District, has been used to monitor the survival of juvenile salmonids passing through hydroelectric facilities in the Federal Columbia River Power System. Cabled hydrophone arrays deployed at dams receive coded transmissions sent from acoustic transmitters implanted in fish. The signals' time of arrival on different hydrophones is used to track fish in 3D. In this article, a new algorithm that decodes the received transmissions is described and the results are compared to results for the previous decoding algorithm. In a laboratory environment, the new decoder was able to decode signals with lower signal strength than the previous decoder, effectively increasing decoding efficiency and range. In field testing, the new algorithm decoded significantly more signals than the previous decoder and three-dimensional tracking experiments showed that the new decoder's time-of-arrival estimates were accurate. At multiple distances from hydrophones, the new algorithm tracked more points more accurately than the previous decoder. The new algorithm was also more than 10 times faster, which is critical for real-time applications on an embedded system.

  14. A fast and accurate decoder for underwater acoustic telemetry.

    PubMed

    Ingraham, J M; Deng, Z D; Li, X; Fu, T; McMichael, G A; Trumbo, B A

    2014-07-01

    The Juvenile Salmon Acoustic Telemetry System, developed by the U.S. Army Corps of Engineers, Portland District, has been used to monitor the survival of juvenile salmonids passing through hydroelectric facilities in the Federal Columbia River Power System. Cabled hydrophone arrays deployed at dams receive coded transmissions sent from acoustic transmitters implanted in fish. The signals' time of arrival on different hydrophones is used to track fish in 3D. In this article, a new algorithm that decodes the received transmissions is described and the results are compared to results for the previous decoding algorithm. In a laboratory environment, the new decoder was able to decode signals with lower signal strength than the previous decoder, effectively increasing decoding efficiency and range. In field testing, the new algorithm decoded significantly more signals than the previous decoder and three-dimensional tracking experiments showed that the new decoder's time-of-arrival estimates were accurate. At multiple distances from hydrophones, the new algorithm tracked more points more accurately than the previous decoder. The new algorithm was also more than 10 times faster, which is critical for real-time applications on an embedded system. PMID:25085162

  15. The Relationship of Actigraph Accelerometer Cut-Points for Estimating Physical Activity with Selected Health Outcomes: Results from NHANES 2003-06

    ERIC Educational Resources Information Center

    Loprinzi, Paul D.; Lee, Hyo; Cardinal, Bradley J.; Crespo, Carlos J.; Andersen, Ross E.; Smit, Ellen

    2012-01-01

    The purpose of this study was to examine the influence of child and adult cut-points on physical activity (PA) intensity, the prevalence of meeting PA guidelines, and association with selected health outcomes. Participants (6,578 adults greater than or equal to 18 years, and 3,174 children and adolescents less than or equal to 17 years) from the…

  16. Study on the Realization of Zinc Point and the Zinc-Point Cell Comparison

    NASA Astrophysics Data System (ADS)

    Widiatmo, J. V.; Sakai, M.; Satou, K.; Yamazawa, K.; Tamba, J.; Arai, M.

    2011-01-01

    Continuing our studies on the aluminum, tin, and silver points, a study on the realization of the zinc point was conducted. Zinc-point cells were newly fabricated using 6N-nominal-grade zinc samples whose impurity elements were analyzed extensively by glow-discharge mass spectrometry (GDMS). The present paper reports temperature measurements made with the newly fabricated cells during the zinc freezing process, by which the zinc fixed point is defined, and the analysis of the freezing curves obtained. Comparisons of the zinc-point temperatures realized by the newly fabricated cells (cell-to-cell comparisons) were also conducted. The zinc-point depression due to impurity elements was calculated from the sum of individual estimates and the impurity element analysis. One of the cells evaluated was drawn out of its crucible and analyzed by GDMS at four points: around the center of the top, the middle, and the bottom, and around the outer part of the middle area. The purpose of this cell disassembly was to check whether any difference arose before and after cell fabrication, as well as any variation in the impurity element distribution within the ingot. Several findings were obtained from these studies. The first is that the homogeneity of the zinc ingot was within 30%, except for Pb, which was more concentrated in the center. The second is that the cell-to-cell temperature difference changes as solidification progresses; consequently, for an accurate cell-to-cell comparison, the locus on the freezing plateau where the comparison is made should be specified. The third is that the slope analysis accurately estimates the cell-to-cell differences and is consistent with the impurity analysis, showing that slope analysis gives extensive information about the effect of impurities on the zinc-point realization, especially after cell fabrication.

  17. Change point detection in risk adjusted control charts.

    PubMed

    Assareh, Hassan; Smith, Ian; Mengersen, Kerrie

    2015-12-01

    Precise identification of the time when a change in a clinical process has occurred enables experts to identify a potential special cause more effectively. In this article, we develop change point estimation methods for a clinical dichotomous process in the presence of case mix. We apply Bayesian hierarchical models to formulate the change point where there exists a step change in the odds ratio and logit of risk of a Bernoulli process. Markov chain Monte Carlo is used to obtain posterior distributions of the change point parameters, including the location and magnitude of changes, as well as the corresponding probabilistic intervals and inferences. The performance of the Bayesian estimator is investigated through simulations, and the results show that precise estimates can be obtained when it is used in conjunction with the risk-adjusted CUSUM and EWMA control charts. In comparison with alternative EWMA and CUSUM estimators, the Bayesian estimator yields more accurate and precise estimates. These advantages are further enhanced when the probability quantification, flexibility, and generalizability of the Bayesian change point detection model are also considered. The Deviance Information Criterion, as a model selection criterion in the Bayesian context, is applied to find the best change point model for a given dataset where there is no prior knowledge about the change type in the process.
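
    For a plain Bernoulli sequence with a single step change, the Bayesian change point posterior has a closed form under Beta priors, so the idea can be sketched without MCMC (the paper's risk-adjusted, case-mix models are richer and do require it). A minimal sketch:

        import numpy as np
        from scipy.special import betaln

        def changepoint_posterior(x, a=1.0, b=1.0):
            """Posterior over change locations for a Bernoulli sequence, Beta(a, b) priors."""
            x = np.asarray(x, dtype=float)
            n = len(x)
            logpost = np.empty(n - 1)
            for tau in range(1, n):                  # change occurs after observation tau
                s1, s2 = x[:tau].sum(), x[tau:].sum()
                # log marginal likelihood of each segment under its own Beta-Bernoulli model
                logpost[tau - 1] = (betaln(a + s1, b + tau - s1)
                                    + betaln(a + s2, b + (n - tau) - s2)
                                    - 2.0 * betaln(a, b))
            post = np.exp(logpost - logpost.max())
            return post / post.sum()

        # Simulated process: adverse-outcome risk steps from 0.1 to 0.3 after case 60
        rng = np.random.default_rng(1)
        data = np.concatenate([rng.random(60) < 0.1, rng.random(40) < 0.3])
        post = changepoint_posterior(data)
        print("posterior mode of change point:", np.argmax(post) + 1)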

  18. Accurate estimation of the elastic properties of porous fibers

    SciTech Connect

    Thissell, W.R.; Zurek, A.K.; Addessio, F.

    1997-05-01

    A procedure is described to calculate polycrystalline anisotropic fiber elastic properties with cylindrical symmetry and porosity. It uses a preferred orientation model (the Tome ellipsoidal self-consistent model) to determine the anisotropic elastic properties of highly oriented carbon fibers. The model predictions, corrected for porosity, are compared to back-calculated fiber elastic properties of an IM6/3501-6 unidirectional composite whose elastic properties have been determined via resonant ultrasound spectroscopy. The Halpin-Tsai equations used to back-calculate fiber elastic properties are found to be inappropriate for anisotropic composite constituents. Modifications are proposed to the Halpin-Tsai equations to expand their applicability to anisotropic reinforcement materials.

  19. Development of high-accuracy pointing verification for ALMA antenna

    NASA Astrophysics Data System (ADS)

    Matsuzawa, Ayumu; Saito, Masao; Iguchi, Satoru; Nakanishi, Kouichiro; Saito, Hiro

    2014-07-01

    Pointing performance of a radio telescope antenna is important in radio astronomical observations for obtaining accurate intensities of target sources. The pointing errors of the ALMA ACA antennas are required to be better than 0.6 arcsec rss, which corresponds to 1/10 and 1/20 of the field of view of the ALMA ACA 12-m and 7-m antennas at 950 GHz, respectively. Pointing verification measurements of the ACA antennas were performed using an optical pointing telescope (OPT) mounted on the antenna backup structure at the ALMA Operations Site Facility, 2900 m above sea level. The pointing errors in these OPT measurements have three distinct origins: the antenna itself, the atmosphere (optical seeing), and the OPT itself. To estimate the pointing errors of antenna origin, we need to accurately subtract the optical seeing and OPT components, while adding components that cannot be measured with the OPT. The ACA antenna verification test report demonstrated that all the ACA 7-m antennas meet the ALMA pointing specification. However, in about one-third of the datasets the estimated optical seeing was larger than the measured pointing errors. We therefore re-examined the procedure for estimating optical seeing by investigating its properties in high-sampling-rate OPT measurements taken while tracking a bright star for 15 minutes. In particular, we examined the relation between optical seeing and sampling rate derived from the Kolmogorov power spectral density. Our analysis indicated that the optical seeing at the ALMA site may have been overestimated in the verification test. We present a new relation between optical seeing and sampling rate that is proportional to the average wind velocity during the measurement. Using this new relation to derive the optical seeing halves the number of datasets in which the optical seeing exceeds the measured pointing errors. As a result, we successfully developed a new verification method for antenna pointing performance.

  20. Spatio-temporal statistical model for the optimal combination of precipitation measured at different time scales for estimating unobserved point values and disaggregating to finer timescales

    NASA Astrophysics Data System (ADS)

    Bàrdossy, Andràs; Pegram, Geoffrey

    2015-04-01

    Precipitation observations are unique in space and time, so if not observed, the values can only be estimated. Many applications, such as the calculation of water balances, the calibration of hydrological models, or the provision of unbiased ground truth for remote sensing, require complete datasets, so a reliable estimation of the missing observations is of great importance. The problem is exacerbated by the ubiquitous decimation of gauge networks. We consider two problems as examples of the methodology: (i) infilling monthly data where some days are missing in the monthly records, and (ii) infilling missing hourly values in daily records with the assistance of nearby pluviometers. The key point is that we need estimates of the distributions of the infilled values, not just their expectations, as we have found that the traditional 'best' values bias the spatial estimates. We first performed monthly precipitation interpolation using 311 full records, 31 of which were randomly decimated to artificially create incomplete records serving as inequality constraints. Interpolation was carried out (i) without using these 31 stations in any way and (ii) using them as inequality constraints, in the sense that a lower limit is determined by aggregating the surviving data in a decimated record. Comparing the errors when the 31 stations with incomplete records were ignored against the errors when the incomplete records were treated as inequalities, we found that the partially decimated data add considerable value compared with neglecting them. In a second application we performed a disaggregation in time: we took a set of complete hourly pluviometer data, aggregated some stations to days, reconstructed their missing hourly data, and evaluated the success of the procedure by cross-validation. In this application the daily sum for a location is treated as a constraint, and the disaggregated daily data are compared with the observed hourly precipitation.
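
    The disaggregation constraint in the second application can be illustrated in its simplest mass-conserving form: impose a nearby pluviometer's hourly pattern on the daily-only gauge's total. The paper's actual method estimates full distributions rather than this single scaled pattern, and all values below are hypothetical:

        import numpy as np

        daily_total = 12.0   # mm recorded at the daily-only gauge (hypothetical)
        # Hypothetical hourly record at a nearby pluviometer for the same day (mm)
        neighbor_hourly = np.array([0, 0, 0.4, 1.1, 2.3, 3.0, 1.9, 0.8] + [0] * 16, dtype=float)

        # Mass-conserving disaggregation: neighbor's diurnal pattern, target's daily sum
        weights = neighbor_hourly / neighbor_hourly.sum()
        estimated_hourly = daily_total * weights
        assert np.isclose(estimated_hourly.sum(), daily_total)  # daily sum is the constraint
        print(np.round(estimated_hourly, 2))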

  1. An Estimation of the Likelihood of Significant Eruptions During 2000-2009 Using Poisson Statistics on Two-Point Moving Averages of the Volcanic Time Series

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.

    2001-01-01

    Since 1750, the number of cataclysmic volcanic eruptions (volcanic explosivity index (VEI)>=4) per decade spans 2-11, with 96 percent located in the tropics and extra-tropical Northern Hemisphere. A two-point moving average of the volcanic time series has higher values since the 1860's than before, being 8.00 in the 1910's (the highest value) and 6.50 in the 1980's, the highest since the 1910's peak. Because of the usual behavior of the first difference of the two-point moving averages, one infers that its value for the 1990's will measure approximately 6.50 +/- 1, implying that approximately 7 +/- 4 cataclysmic volcanic eruptions should be expected during the present decade (2000-2009). Because cataclysmic volcanic eruptions (especially those having VEI>=5) nearly always have been associated with short-term episodes of global cooling, the occurrence of even one might confuse our ability to assess the effects of global warming. Poisson probability distributions reveal that the probability of one or more events with a VEI>=4 within the next ten years is >99 percent. It is approximately 49 percent for an event with a VEI>=5, and 18 percent for an event with a VEI>=6. Hence, the likelihood that a climatically significant volcanic eruption will occur within the next ten years appears reasonably high.
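
    The quoted probabilities follow directly from the Poisson law P(N >= 1) = 1 - exp(-lambda*T). A quick check, with decadal rates assumed here so as to be roughly consistent with the stated percentages (the rates are illustrative, not taken from the paper):

        import math

        def prob_at_least_one(rate_per_decade, decades=1.0):
            """P(N >= 1) for a Poisson process with the given decadal rate."""
            return 1.0 - math.exp(-rate_per_decade * decades)

        # Assumed decadal rates, chosen to illustrate the quoted probabilities
        for vei, rate in ((">=4", 7.0), (">=5", 0.67), (">=6", 0.20)):
            print(f"VEI {vei}: P(at least one in 2000-2009) = {prob_at_least_one(rate):.2f}")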

  2. Active point out-of-plane ultrasound calibration

    NASA Astrophysics Data System (ADS)

    Cheng, Alexis; Guo, Xiaoyu; Zhang, Haichong K.; Kang, Hyunjae; Etienne-Cummings, Ralph; Boctor, Emad M.

    2015-03-01

    Image-guided surgery systems are often used to provide surgeons with informational support. Due to several unique advantages such as ease of use, real-time image acquisition, and the absence of ionizing radiation, ultrasound is a common intraoperative medical imaging modality in image-guided surgery systems. To perform advanced forms of guidance with ultrasound, such as virtual image overlays or automated robotic actuation, an ultrasound calibration process must be performed. This process recovers the rigid body transformation between a tracked marker attached to the transducer and the ultrasound image. Point-based phantoms are considered to be accurate, but their calibration framework assumes that the point is in the image plane. In this work, we present the use of an active point phantom and a calibration framework that accounts for the elevational uncertainty of the point. Given the lateral and axial position of the point in the ultrasound image, we approximate a circle in the axial-elevational plane with a radius equal to the axial position. The standard approach transforms all of the imaged points to a single physical point; in our approach, we minimize the distances between the circular subsets of each image, which ideally intersect at a single point. We ran simulations for noiseless and noisy cases, presenting results on out-of-plane estimation errors, calibration estimation errors, and point reconstruction precision. We also performed an experiment using a robot arm as the tracker, resulting in a point reconstruction precision of 0.64 mm.

  3. Revised Filter Profiles and Zero Points for Broadband Photometry

    NASA Astrophysics Data System (ADS)

    Mann, Andrew W.; von Braun, Kaspar

    2015-02-01

    Estimating accurate bolometric fluxes for stars requires reliable photometry to absolutely flux calibrate the spectra. This is a significant problem for studies of very bright stars, which are generally saturated in modern photometric surveys. Instead we must rely on photometry with less precise calibration. We utilize precisely flux-calibrated spectra to derive improved filter bandpasses and zero points for the most common sources of photometry for bright stars. In total, we test 39 different filters in the General Catalog of Photometric Data as well as those from Tycho-2 and Hipparcos. We show that utilizing inaccurate filter profiles from the literature can create significant color terms resulting in fluxes that deviate by ≳10% from actual values. To remedy this we employ an empirical approach; we iteratively adjust the literature filter profile and zero point, convolve it with catalog spectra, and compare to the corresponding flux from the photometry. We adopt the passband values that produce the best agreement between photometry and spectroscopy and are independent of stellar color. We find that while most zero points change by <5%, a few systems change by 10-15%. Our final profiles and zero points are similar to recent estimates from the literature. Based on determinations of systematic errors in our selected spectroscopic libraries, we estimate that most of our improved zero points are accurate to 0.5-1%.
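
    The consistency check at the heart of this approach, convolving a flux-calibrated spectrum with a candidate bandpass and comparing the synthetic magnitude to the catalog value, can be sketched as follows; the spectrum, transmission curve, and zero-point flux below are all hypothetical:

        import numpy as np

        # Hypothetical spectrum and filter transmission on a common wavelength grid (angstroms)
        wave = np.linspace(4000.0, 7000.0, 3000)
        flux = 1e-13 * (wave / 5500.0) ** -2.0                 # erg/s/cm^2/A, toy star
        transmission = np.exp(-0.5 * ((wave - 5500.0) / 400.0) ** 2)

        # Synthetic band flux: transmission-weighted mean of the spectrum
        band_flux = np.trapz(flux * transmission, wave) / np.trapz(transmission, wave)

        zero_point_flux = 3.63e-13                             # hypothetical band zero point
        synthetic_mag = -2.5 * np.log10(band_flux / zero_point_flux)
        print(f"synthetic magnitude: {synthetic_mag:.3f}")     # compare against catalog photometry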

  4. Estimating the Speed of Light with a TV Set.

    ERIC Educational Resources Information Center

    Schroeder, Michael C.; Smith, Charles W.

    1985-01-01

    A television set, piece of aluminum foil, and meter stick can be used to estimate the speed of light within a few percentage points. The activity provides students with success and generates interest in physical optics. Steps in the experiment are outlined along with suggestions for obtaining accurate results. (DH)

  5. High Frequency QRS ECG Accurately Detects Cardiomyopathy

    NASA Technical Reports Server (NTRS)

    Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds

    2005-01-01

    High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 plus or minus 6.1%, mean plus or minus SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present plus the clustering of RAZs together in contiguous leads. This RAZ score had an area under the receiver operating characteristic (ROC) curve of 0.91, and was 88% sensitive, 82% specific and 85% accurate for identifying cardiomyopathy at the optimum score cut-off of 140 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P less than 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC = 0.77 and 0.77, respectively) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 plus or minus 11.5 vs. 41.5 plus or minus 13.6 mV, respectively, P less than 0.003), but this parameter was even less accurate in distinguishing the two groups (area under ROC = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously with cut-offs of greater than or equal to 40 points and greater than or equal to 445 ms, respectively. In conclusion, 12-lead HF QRS ECG employing

  6. An assessment of vapour pressure estimation methods.

    PubMed

    O'Meara, Simon; Booth, Alastair Murray; Barley, Mark Howard; Topping, David; McFiggans, Gordon

    2014-09-28

    Laboratory measurements of vapour pressures for atmospherically relevant compounds were collated and used to assess the accuracy of vapour pressure estimates generated by seven estimation methods, and the impact on predicted secondary organic aerosol. Of the vapour pressure estimation methods that were applicable to all the test set compounds, the Lee-Kesler [Reid et al., The Properties of Gases and Liquids, 1987] method showed the lowest mean absolute error and the Nannoolal et al. [Nannoolal et al., Fluid Phase Equilib., 2008, 269, 117-133] method showed the lowest mean bias error (when both used normal boiling points estimated using the Nannoolal et al. [Nannoolal et al., Fluid Phase Equilib., 2004, 226, 45-63] method). The effect of varying vapour pressure estimation methods on secondary organic aerosol (SOA) mass loading and composition was investigated using an absorptive partitioning equilibrium model. The Myrdal and Yalkowsky [Myrdal and Yalkowsky, Ind. Eng. Chem. Res., 1997, 36, 2494-2499] vapour pressure estimation method using the Nannoolal et al. [Nannoolal et al., Fluid Phase Equilib., 2004, 226, 45-63] normal boiling point gave the most accurate estimation of SOA loading despite not being the most accurate for vapour pressures alone. PMID:25105180

  7. Profitable capitation requires accurate costing.

    PubMed

    West, D A; Hicks, L L; Balas, E A; West, T D

    1996-01-01

    In the name of costing accuracy, nurses are asked to track inventory use on a per-treatment basis, while more significant costs, such as general overhead and nursing salaries, are usually allocated to patients or treatments on an average-cost basis. Accurate treatment costing and financial viability require analysis of all resources actually consumed in treatment delivery, including nursing services and inventory. More precise costing information enables more profitable decisions, as is demonstrated by comparing the ratio-of-cost-to-treatment method (aggregate costing) with alternative activity-based costing (ABC) methods. Nurses must participate in this costing process to ensure that capitation bids are based upon accurate costs rather than simple averages. PMID:8788799

  8. A novel modelling framework to prioritize estimation of non-point source pollution parameters for quantifying pollutant origin and discharge in urban catchments.

    PubMed

    Fraga, I; Charters, F J; O'Sullivan, A D; Cochrane, T A

    2016-02-01

    Stormwater runoff in urban catchments contains heavy metals (zinc, copper, lead) and total suspended solids (TSS) which can substantially degrade urban waterways. To identify these pollutant sources and quantify their loads, the MEDUSA (Modelled Estimates of Discharges for Urban Stormwater Assessments) modelling framework was developed. The model quantifies pollutant build-up and wash-off from individual impervious roof, road and car park surfaces for individual rain events, incorporating differences in pollutant dynamics between surface types and rainfall characteristics. This requires delineating all impervious surfaces and their material types, the drainage network, rainfall characteristics and the coefficients of the pollutant dynamics equations. An example application of the model to a small urban catchment demonstrates how the model can be used to identify the magnitude of pollutant loads, their spatial origin and the response of the catchment to changes in specific rainfall characteristics. A sensitivity analysis then identifies the key parameters influencing each pollutant load within the stormwater given the catchment characteristics, which allows the development of a targeted calibration process that will enhance the certainty of the model outputs while minimizing the data collection required for effective calibration. A detailed explanation of the modelling framework and pre-calibration sensitivity analysis is presented. PMID:26613353
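
    Frameworks of this kind typically parameterise per-surface pollutant dynamics with exponential build-up and wash-off equations; the sketch below uses the common textbook forms with illustrative coefficients, not the actual MEDUSA equations or calibrated values:

        import numpy as np

        def buildup(days_dry, b_max=2.0, k_b=0.4):
            """Exponential pollutant build-up (g/m^2) over an antecedent dry period."""
            return b_max * (1.0 - np.exp(-k_b * days_dry))

        def washoff(load, intensity_mm_per_h, duration_h, k_w=0.19):
            """Exponential wash-off: surface load mobilised by one rain event."""
            return load * (1.0 - np.exp(-k_w * intensity_mm_per_h * duration_h))

        load = buildup(days_dry=5.0)                        # roof load after 5 dry days
        event_load = washoff(load, intensity_mm_per_h=8.0, duration_h=1.5)
        print(f"washed-off load: {event_load:.3f} g/m^2")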

  9. Pointing to others: How the target gender influences pointing performance.

    PubMed

    Cleret de Langavant, Laurent; Jacquemot, Charlotte; Cruveiller, Virginie; Dupoux, Emmanuel; Bachoud-Lévi, Anne-Catherine

    2016-01-01

    Pointing is a communicative gesture that allows individuals to share information about surrounding objects with other humans. Patients with heterotopagnosia are specifically impaired in pointing to other humans' body parts but not in pointing to themselves or to objects. Here, we describe a female patient with heterotopagnosia who was more accurate in pointing to men's body parts than to women's body parts. We replicated this gender effect in healthy participants with faster reaction times for pointing to men's body parts than to women's body parts. We discuss the role of gender stereotypes in explaining why it is more difficult to point to women than to men. PMID:27593456

  10. A new TDOA estimation method in Three-satellite interference localisation

    NASA Astrophysics Data System (ADS)

    Dou, Huijing; Lei, Qian; Li, Wenxue; Xing, Qingqing

    2015-05-01

    Time difference of arrival (TDOA) parameter estimation is the key to three-satellite interference localisation; to improve the accuracy of three-satellite interference location, the TDOA parameter must be estimated accurately and effectively. Building on the wavelet-transform correlation TDOA estimation algorithm, and combining it with correlation and Hilbert-subtraction methods, we put forward a high-precision TDOA estimation method for three-satellite interference localisation. The proposed algorithm exploits the fact that the zero crossing of the Hilbert transform of the correlation function coincides with the correlation peak: subtracting the absolute value of the Hilbert transform of the wavelet-transform correlation function from the correlation function itself sharpens the peak and improves the TDOA estimation precision, so that the positioning is more accurate and effective.
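
    A minimal version of the correlation-minus-Hilbert sharpening described above (without the wavelet-transform stage) can be sketched as follows; the signals, delay, and noise level are synthetic:

        import numpy as np
        from scipy.signal import hilbert

        rng = np.random.default_rng(2)
        n = 1000
        sig = rng.standard_normal(n)                 # hypothetical interference waveform
        delay = 37                                   # true TDOA in samples
        x1 = sig + 0.1 * rng.standard_normal(n)
        x2 = np.roll(sig, delay) + 0.1 * rng.standard_normal(n)

        corr = np.correlate(x2, x1, mode="full")     # cross-correlation over all lags
        # The Hilbert transform of the correlation crosses zero at the correlation peak,
        # so subtracting its absolute value sharpens the peak
        sharpened = corr - np.abs(np.imag(hilbert(corr)))
        lags = np.arange(-n + 1, n)
        print("TDOA estimate (samples):", lags[np.argmax(sharpened)])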

  11. Airborne Light Detection and Ranging (lidar) Derived Deformation from the MW 6.0 24 August, 2014 South Napa Earthquake Estimated by Two and Three Dimensional Point Cloud Change Detection Techniques

    NASA Astrophysics Data System (ADS)

    Lyda, A. W.; Zhang, X.; Glennie, C. L.; Hudnut, K.; Brooks, B. A.

    2016-06-01

    Remote sensing via LiDAR (Light Detection And Ranging) has proven extremely useful in both Earth science and hazard-related studies. Surveys taken before and after an earthquake, for example, can provide decimeter-level, 3D near-field estimates of land deformation that offer better spatial coverage of the near-field rupture zone than other geodetic methods (e.g., InSAR, GNSS, or alignment arrays). In this study, we compare and contrast estimates of deformation obtained from different pre- and post-event airborne laser scanning (ALS) datasets of the 2014 South Napa Earthquake using two change detection algorithms, Iterative Closest Point (ICP) and Particle Image Velocimetry (PIV). The ICP algorithm is a closest-point-based registration algorithm that can iteratively recover three-dimensional deformations from airborne LiDAR datasets. By employing a newly proposed partition scheme, the "moving window," to handle the large spatial scale of the point cloud over the earthquake rupture area, the ICP process applies a rigid registration of the datasets within each overlapped window to enhance the change detection of the local, spatially varying surface deformation near the fault. The other algorithm, PIV, is a well-established two-dimensional image co-registration and correlation technique developed in fluid mechanics research and later applied to geotechnical studies. Adapted here for an earthquake with little vertical movement, the 3D point cloud is interpolated into a 2D DTM image, and horizontal deformation is determined by assessing the cross-correlation of interrogation areas within the images to find the most likely deformation between two areas. Both the PIV process and the ICP algorithm further benefit from a presented, novel use of urban geodetic markers. Analogous to the persistent scatterer technique employed with differential radar observations, this new LiDAR application exploits a classified point cloud dataset to assist the change detection algorithms.

  12. Direction Estimation Using Square Lattice and Cadastral Map Assembling

    NASA Astrophysics Data System (ADS)

    Takahashi, Yusuke; Fei, Liu; Ohyama, Wataru; Wakabayashi, Tetsushi; Kimura, Fumitaka

    This paper proposes a technique for direction estimation by means of square grid points in order to improve the performance of a cadastral map assembling technique based on the Merlin-Farber (MF) algorithm. The MF algorithm requires direction normalization of the segments (of the cadastral map) before assembling. The proposed direction estimation technique is based on spatial frequency analysis of the autocorrelation, computed by the MF algorithm, of square grid points drawn regularly at constant intervals on the segments. Since many square grid points are drawn over the entire area of the segments, the direction can be estimated more accurately with those points than with a single north arrow. To assemble two adjacent segments, the longest common boundary is detected by the MF algorithm. Evaluation experiments were performed to compare the accuracy and success rate of map assembling when the direction is estimated and normalized from the square grid points versus from the north arrow. A total of 324 map segments from 47 districts provided by the Institut Geographique National France are used in the experiments. While map assembling based on the north arrow tends to produce inaccurate cadastral maps, the proposed technique assembles the maps more accurately. The experimental results show that the proposed technique achieves a sufficient success rate and accuracy to effectively reduce the labor cost and time of cadastral map assembling.

  13. Accurate Alignment of Plasma Channels Based on Laser Centroid Oscillations

    SciTech Connect

    Gonsalves, Anthony; Nakamura, Kei; Lin, Chen; Osterhoff, Jens; Shiraishi, Satomi; Schroeder, Carl; Geddes, Cameron; Toth, Csaba; Esarey, Eric; Leemans, Wim

    2011-03-23

    A technique has been developed to accurately align a laser beam through a plasma channel by minimizing the shift in laser centroid and angle at the channel output. If only the shift in centroid or angle is measured, then accurate alignment is provided by minimizing laser centroid motion at the channel exit as the channel properties are scanned. The improvement in alignment accuracy provided by this technique is important for minimizing electron beam pointing errors in laser plasma accelerators.

  14. Short communication: Diet-induced variations in milk fatty acid composition have minor effects on the estimated melting point of milk fat in cows, goats, and ewes: Insights from a meta-analysis.

    PubMed

    Toral, P G; Bernard, L; Chilliard, Y; Glasser, F

    2013-02-01

    In ruminants, the ability to maintain milk fat melting point within physiological values could play a role in the regulation of milk fat secretion when milk fatty acid (FA) composition varies, such as in response to feeding factors. However, the relationship between milk fat fluidity and changes in milk FA composition is difficult to study experimentally. A meta-analysis was therefore conducted to compare the magnitude of diet-induced variations in milk FA composition and the calculated melting point of milk FA (used as a proxy to estimate the variations in the melting point of milk fat) in 3 dairy ruminant species (cow, goat, and sheep). The coefficient of variation (CV), a scale-free measure of statistical dispersion, was used to compare the variability of criteria differing in their order of magnitude. The analysis of a database of milk FA profiles from cows, goats, and sheep fed different dietary treatments (unsupplemented diets and diets supplemented with lipids rich in oleic acid, linoleic acid, linolenic acid, or C20-22 polyunsaturated FA) revealed that the variability of the calculated melting point of milk FA was narrow (CV of 5%) compared with the variability of milk FA percentages (CV of 18 to 72%). The regulation of the melting point of milk fat is thus probably involved in the control of diet-induced variations in milk fat secretion. The calculated melting point of ewe milk FA was approximately 3°C lower than that of goats or cows across all types of diets, which might be linked to differences in milk fat content (higher in sheep) or the structure of milk triacylglycerides among these species. Lipid supplementation increased the calculated melting point of C18 FA in milk, whereas that of total FA was significantly reduced by supplements rich in oleic, linoleic, and linolenic acids but not C20-22 polyunsaturated FA. However, the slight effects of dietary treatments on the calculated melting point of milk FA did not differ between cows, goats, and ewes.

  15. Accurate documentation and wound measurement.

    PubMed

    Hampton, Sylvie

    This article, part 4 in a series on wound management, addresses the sometimes routine yet crucial task of documentation. Clear and accurate records of a wound enable its progress to be determined so the appropriate treatment can be applied. Thorough records mean any practitioner picking up a patient's notes will know when the wound was last checked, how it looked and what dressing and/or treatment was applied, ensuring continuity of care. Documenting every assessment also has legal implications, demonstrating due consideration and care of the patient and the rationale for any treatment carried out. Part 5 in the series discusses wound dressing characteristics and selection.

  16. Nonlinear analysis and performance evaluation of the Annular Suspension and Pointing System (ASPS)

    NASA Technical Reports Server (NTRS)

    Joshi, S. M.

    1978-01-01

    The Annular Suspension and Pointing System (ASPS) can provide highly accurate fine pointing for a variety of solar-, stellar-, and Earth-viewing scientific instruments during Space Shuttle orbital missions. In this report, a detailed nonlinear mathematical model is developed for the ASPS/Space Shuttle system. The equations are augmented with nonlinear models of components such as magnetic actuators and gimbal torquers. Control systems and payload attitude state estimators are designed in order to obtain satisfactory pointing performance, and statistical pointing performance is predicted in the presence of measurement noise and disturbances.

  17. Pointing knowledge accuracy of the star tracker based ATP system

    NASA Astrophysics Data System (ADS)

    Lee, Shinhak; Ortiz, Gerardo G.; Alexander, James W.

    2005-04-01

    The pointing knowledge for deep space optical communications must be accurate, and the estimate update rate needs to be sufficiently high to compensate for spacecraft vibration. Our objective is to meet these two requirements, high accuracy and high update rate, using combinations of star trackers and inertial sensors. Star trackers are very accurate and provide absolute pointing knowledge, but with a low update rate that depends on the star magnitude. Inertial sensors, on the other hand, provide relative pointing knowledge at high update rates. In this paper, we describe how the star tracker and inertial sensor measurements are combined to reduce the pointing knowledge jitter. The method is based on 'iterative averaging' of the star tracker and gyro measurements. Angle sensor measurements fill in between successive gyro measurements for a higher update rate, and the total RMS error (or jitter) grows in a root-sum-squared (RSS) sense. The estimated pointing jitter is on the order of 150 nrad, which is well below the typical requirements of deep space optical communications. This 150 nrad jitter can be achieved with an 8 cm diameter telescope aperture. Additional assumptions include 1/25-pixel accuracy per star, SIRTF-class gyros (ARW = 0.0001 deg/root-hr), 5 Hz star trackers with a ~5.0 degree FOV, a detector of 1000 by 1000 pixels, and stars of roughly magnitude 9 to 9.5.
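
    One simple way to picture the combination of relative and absolute pointing knowledge is a fixed-gain blend: propagate with the gyro at a high rate and pull the estimate toward each star tracker fix. This toy sketch is not the paper's iterative averaging scheme, and all rates, gains, and noise levels are hypothetical:

        import numpy as np

        rng = np.random.default_rng(3)
        dt, n = 0.01, 2000                    # 100 Hz gyro updates, 20 s of data
        true_rate = 1e-4                      # rad/s, hypothetical slow line-of-sight drift
        truth = np.cumsum(np.full(n, true_rate * dt))

        gyro = true_rate + 1e-5 * rng.standard_normal(n)   # noisy rate measurements
        est = np.zeros(n)
        for k in range(1, n):
            est[k] = est[k - 1] + gyro[k] * dt             # propagate with the gyro
            if k % 20 == 0:                                # 5 Hz star tracker fix
                star = truth[k] + 5e-7 * rng.standard_normal()  # absolute but noisy (rad)
                est[k] = 0.9 * est[k] + 0.1 * star         # blend toward the absolute fix

        print(f"RMS pointing error: {np.sqrt(np.mean((est - truth) ** 2)):.2e} rad")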

  18. Clinically accurate fetal ECG parameters acquired from maternal abdominal sensors

    PubMed Central

    CLIFFORD, Gari; SAMENI, Reza; WARD, Mr. Jay; ROBINSON, Julian; WOLFBERG, Adam J.

    2011-01-01

    OBJECTIVE To evaluate the accuracy of a novel system for measuring fetal heart rate and ST-segment changes using non-invasive electrodes on the maternal abdomen. STUDY DESIGN Fetal ECGs were recorded using abdominal sensors from 32 term laboring women who had a fetal scalp electrode (FSE) placed for a clinical indication. RESULTS Good quality data for FHR estimation was available in 91.2% of the FSE segments, and 89.9% of the abdominal electrode segments. The root mean square (RMS) error between the FHR data calculated by both methods over all processed segments was 0.36 beats per minute. ST deviation from the isoelectric point ranged from 0 to 14.2% of R-wave amplitude. The RMS error between the ST change calculated by both methods averaged over all processed segments was 3.2%. CONCLUSION FHR and ST change acquired from the maternal abdomen is highly accurate and on average is clinically indistinguishable from FHR and ST change calculated using FSE data. PMID:21514560

  19. Fast and Accurate Construction of Confidence Intervals for Heritability.

    PubMed

    Schweiger, Regev; Kaufman, Shachar; Laaksonen, Reijo; Kleber, Marcus E; März, Winfried; Eskin, Eleazar; Rosset, Saharon; Halperin, Eran

    2016-06-01

    Estimation of heritability is fundamental in genetic studies. Recently, heritability estimation using linear mixed models (LMMs) has gained popularity because these estimates can be obtained from unrelated individuals collected in genome-wide association studies. Typically, heritability estimation under LMMs uses the restricted maximum likelihood (REML) approach. Existing methods for the construction of confidence intervals and estimators of SEs for REML rely on asymptotic properties. However, these assumptions are often violated because of the bounded parameter space, statistical dependencies, and limited sample size, leading to biased estimates and inflated or deflated confidence intervals. Here, we show that the estimation of confidence intervals by state-of-the-art methods is inaccurate, especially when the true heritability is relatively low or relatively high. We further show that these inaccuracies occur in datasets including thousands of individuals. Such biases are present, for example, in estimates of heritability of gene expression in the Genotype-Tissue Expression project and of lipid profiles in the Ludwigshafen Risk and Cardiovascular Health study. We also show that often the probability that the genetic component is estimated as 0 is high even when the true heritability is bounded away from 0, emphasizing the need for accurate confidence intervals. We propose a computationally efficient method, ALBI (accurate LMM-based heritability bootstrap confidence intervals), for estimating the distribution of the heritability estimator and for constructing accurate confidence intervals. Our method can be used as an add-on to existing methods for estimating heritability and variance components, such as GCTA, FaST-LMM, GEMMA, or EMMAX. PMID:27259052
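
    The following toy sketch shows the generic parametric-bootstrap construction of a heritability confidence interval of the kind ALBI refines; the data model, the estimator, and all numbers are illustrative assumptions, not the ALBI algorithm or a real LMM/REML fit.

    ```python
    import numpy as np

    # Toy parametric bootstrap CI for a heritability-like variance ratio.
    # Everything here (the linear data model, the estimator) is an assumption
    # made for illustration; it is not ALBI and not an LMM/REML analysis.
    rng = np.random.default_rng(1)
    n = 500
    g = rng.normal(0, 1, n)                  # known genetic score (toy model)

    def estimate_h2(y, g):
        """Variance explained by g as a fraction of total phenotypic variance."""
        beta = np.dot(g, y) / np.dot(g, g)
        return float(np.clip(np.var(beta * g) / np.var(y), 0.0, 1.0))

    y_obs = 0.6 * g + rng.normal(0, np.sqrt(1 - 0.36), n)
    h2_hat = estimate_h2(y_obs, g)

    # resample phenotypes under the fitted model, re-estimate each time
    boot = []
    for _ in range(1000):
        y_b = np.sqrt(h2_hat) * g + rng.normal(0, np.sqrt(1 - h2_hat), n)
        boot.append(estimate_h2(y_b, g))
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"h2 = {h2_hat:.2f}, 95% CI approx. [{lo:.2f}, {hi:.2f}]")
    ```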

  20. Tipping Points

    NASA Astrophysics Data System (ADS)

    Hansen, J.

    2007-12-01

    A climate tipping point, at least as I have used the phrase, refers to a situation in which a changing climate forcing has reached a point such that little additional forcing (or global temperature change) is needed to cause large, relatively rapid, climate change. Present examples include potential loss of all Arctic sea ice and instability of the West Antarctic and Greenland ice sheets. Tipping points are characterized by ready feedbacks that amplify the effect of forcings. The notion that these may be runaway feedbacks is a misconception. However, present "unrealized" global warming, due to the climate system's thermal inertia, exacerbates the difficulty of avoiding global warming tipping points. I argue that prompt efforts to slow CO2 emissions and absolutely reduce non-CO2 forcings are both essential if we are to avoid tipping points that would be disastrous for humanity and creation, the planet as civilization knows it.

  1. Toward Accurate and Quantitative Comparative Metagenomics.

    PubMed

    Nayfach, Stephen; Pollard, Katherine S

    2016-08-25

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341

  2. Toward Accurate and Quantitative Comparative Metagenomics

    PubMed Central

    Nayfach, Stephen; Pollard, Katherine S.

    2016-01-01

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341

  3. SPLASH: Accurate OH maser positions

    NASA Astrophysics Data System (ADS)

    Walsh, Andrew; Gomez, Jose F.; Jones, Paul; Cunningham, Maria; Green, James; Dawson, Joanne; Ellingsen, Simon; Breen, Shari; Imai, Hiroshi; Lowe, Vicki; Jones, Courtney

    2013-10-01

    The hydroxyl (OH) 18 cm lines are powerful and versatile probes of diffuse molecular gas that may trace a largely unstudied component of the Galactic ISM. SPLASH (the Southern Parkes Large Area Survey in Hydroxyl) is a large, unbiased and fully-sampled survey of OH emission, absorption and masers in the Galactic Plane that will achieve sensitivities an order of magnitude better than previous work. In this proposal, we request ATCA time to follow up OH maser candidates. This will give us accurate (~10") positions of the masers, which can be compared to other maser positions from HOPS, MMB and MALT-45, and will provide full polarisation measurements towards a sample of OH masers that have not been observed in MAGMO.

  4. Accurate thickness measurement of graphene

    NASA Astrophysics Data System (ADS)

    Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T.

    2016-03-01

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.

  5. Accurate thickness measurement of graphene.

    PubMed

    Shearer, Cameron J; Slattery, Ashley D; Stapleton, Andrew J; Shapter, Joseph G; Gibson, Christopher T

    2016-03-29

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.

  6. Price Estimation Guidelines

    NASA Technical Reports Server (NTRS)

    Chamberlain, R. G.; Aster, R. W.; Firnett, P. J.; Miller, M. A.

    1985-01-01

    The Improved Price Estimation Guidelines program, IPEG4, provides a comparatively simple yet relatively accurate estimate of the price of a manufactured product. IPEG4 processes user-supplied input data to determine an estimate of the price per unit of production. Input data include equipment cost, space required, labor cost, materials and supplies cost, utility expenses, and production volume on an industry-wide or process-wide basis.
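
    A hypothetical sketch of the kind of per-unit price calculation described: annualized costs divided by production volume. The cost categories follow the abstract, but the annualization rule and the sample figures are assumptions, not IPEG4's actual model.

    ```python
    # Rough sketch of a price-per-unit estimate in the spirit of IPEG4.
    # The equipment recovery period and all figures are illustrative
    # assumptions; IPEG4's actual cost model is not reproduced here.
    def price_per_unit(equipment_cost, floor_space_cost, labor_cost,
                       materials_cost, utilities_cost, annual_volume,
                       equipment_recovery_years=7):
        annual_equipment = equipment_cost / equipment_recovery_years
        annual_total = (annual_equipment + floor_space_cost + labor_cost
                        + materials_cost + utilities_cost)
        return annual_total / annual_volume

    price = price_per_unit(equipment_cost=1_400_000, floor_space_cost=80_000,
                           labor_cost=350_000, materials_cost=220_000,
                           utilities_cost=45_000, annual_volume=100_000)
    print(f"Estimated price: ${price:.2f} per unit")
    ```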

  7. Baseline Estimation Algorithm with Block Adjustment for Multi-Pass Dual-Antenna Insar

    NASA Astrophysics Data System (ADS)

    Jin, Guowang; Xiong, Xin; Xu, Qing; Gong, Zhihui; Zhou, Yang

    2016-06-01

    Baseline parameters and the interferometric phase offset need to be estimated accurately because they are key parameters in InSAR (Interferometric Synthetic Aperture Radar) processing. A baseline estimation algorithm that works on a single pass requires large numbers of ground control points to estimate the interferometric parameters when mosaicking multi-pass dual-antenna airborne InSAR data covering large areas. Moreover, errors in the estimated parameters cause large discrepancies between the heights derived from different passes. An interferometric parameter estimation algorithm with block adjustment for multi-pass dual-antenna InSAR is therefore presented to reduce the number of ground control points needed and the height discrepancies between passes. Baseline estimation experiments were performed on multi-pass InSAR data obtained with a Chinese dual-antenna airborne InSAR system. Satisfactory results were obtained even with few ground control points, validating the proposed baseline estimation algorithm.

  8. Tipping Point

    MedlinePlus

    ... Tipping Point by CPSC Blogger, September 22. Almost weekly, we see ...

  9. Optimization of Pilot Point Locations: an efficient and geostatistical perspective

    NASA Astrophysics Data System (ADS)

    Mehne, J.; Nowak, W.

    2012-04-01

    The pilot point method is a widespread method for calibrating ensembles of heterogeneous aquifer models on available field data such as hydraulic heads. The pilot points are virtual measurements of conductivity, introduced as localized carriers of information in the inverse procedure. For each heterogeneous aquifer realization, the pilot point values are calibrated until all calibration data are honored. Adequate placement and numbers of pilot points are crucial both for accurate representation of heterogeneity and for keeping the computational costs of calibration at an acceptable level. Current placement methods for pilot points either rely solely on the expertise of the modeler or involve computationally costly sensitivity analyses. None of the existing placement methods directly addresses the geostatistical character of the placement and calibration problem. This study presents a new method for optimal selection of pilot point locations. We combine ideas from Ensemble Kalman Filtering and geostatistical optimal design with straightforward optimization. In a first step, we emulate the pilot point method with a modified Ensemble Kalman Filter for parameter estimation at drastically reduced computational costs. This avoids the costly evaluation of sensitivity coefficients often used for optimal placement of pilot points. Second, we define task-driven objective functions for the optimal placement of pilot points, based on ideas from geostatistical optimal design of experiments. These objective functions can be evaluated at speed, without carrying out the actual calibration process, requiring nothing else but ensemble covariances that are available from step one. By formal optimization, we can find pilot point placement schemes that are optimal in representing the data for the task at hand with minimal numbers of pilot points. In small synthetic test applications, we demonstrate the promising computational performance and the geostatistically logical choice of
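
    The core of step one is a standard ensemble-Kalman parameter update built from ensemble covariances. A minimal sketch under toy assumptions (a random linear stand-in for the groundwater forward model, invented dimensions) is:

    ```python
    import numpy as np

    # Minimal ensemble-Kalman-style parameter update of the sort used to
    # emulate pilot-point calibration. The linear "forward model" H, the
    # dimensions, and the observation values are placeholder assumptions.
    rng = np.random.default_rng(2)
    n_ens, n_par, n_obs = 200, 50, 5

    K_ens = rng.normal(0, 1, (n_ens, n_par))      # ensemble of log-conductivities
    H = rng.normal(0, 1, (n_obs, n_par)) / n_par  # stand-in forward model
    obs = np.zeros(n_obs)                         # observed heads (toy values)
    obs_err = 0.01

    Y = K_ens @ H.T                               # simulated observations
    C_ky = np.cov(K_ens.T, Y.T)[:n_par, n_par:]   # param/obs cross-covariance
    C_yy = np.cov(Y.T) + obs_err**2 * np.eye(n_obs)
    gain = C_ky @ np.linalg.inv(C_yy)             # Kalman gain from the ensemble

    perturbed = obs + rng.normal(0, obs_err, (n_ens, n_obs))
    K_updated = K_ens + (perturbed - Y) @ gain.T
    print("posterior ensemble spread:", K_updated.std(axis=0).mean())
    ```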

  10. Accurate adiabatic correction in the hydrogen molecule

    NASA Astrophysics Data System (ADS)

    Pachucki, Krzysztof; Komasa, Jacek

    2014-12-01

    A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10^-12 at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H2, HD, HT, D2, DT, and T2 has been determined. For the ground state of H2 the estimated precision is 3 × 10^-7 cm^-1, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present day theoretical predictions for the rovibrational levels.

  11. Accurate adiabatic correction in the hydrogen molecule

    SciTech Connect

    Pachucki, Krzysztof; Komasa, Jacek

    2014-12-14

    A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10^-12 at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H2, HD, HT, D2, DT, and T2 has been determined. For the ground state of H2 the estimated precision is 3 × 10^-7 cm^-1, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present day theoretical predictions for the rovibrational levels.

  12. Accurate adiabatic correction in the hydrogen molecule.

    PubMed

    Pachucki, Krzysztof; Komasa, Jacek

    2014-12-14

    A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10(-12) at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H2, HD, HT, D2, DT, and T2 has been determined. For the ground state of H2 the estimated precision is 3 × 10(-7) cm(-1), which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present day theoretical predictions for the rovibrational levels. PMID:25494728

  13. Automated Point Cloud Correspondence Detection for Underwater Mapping Using AUVs

    NASA Technical Reports Server (NTRS)

    Hammond, Marcus; Clark, Ashley; Mahajan, Aditya; Sharma, Sumant; Rock, Stephen

    2015-01-01

    An algorithm for automating correspondence detection between point clouds composed of multibeam sonar data is presented. This allows accurate initialization for point cloud alignment techniques even in cases where accurate inertial navigation is not available, such as iceberg profiling or vehicles with low-grade inertial navigation systems. Techniques from computer vision literature are used to extract, label, and match keypoints between "pseudo-images" generated from these point clouds. Image matches are refined using RANSAC and information about the vehicle trajectory. The resulting correspondences can be used to initialize an iterative closest point (ICP) registration algorithm to estimate accumulated navigation error and aid in the creation of accurate, self-consistent maps. The results presented use multibeam sonar data obtained from multiple overlapping passes of an underwater canyon in Monterey Bay, California. Using strict matching criteria, the method detects 23 between-swath correspondence events in a set of 155 pseudo-images with zero false positives. Using less conservative matching criteria doubles the number of matches but introduces several false positive matches as well. Heuristics based on known vehicle trajectory information are used to eliminate these.
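
    A rough sketch of the matching step in Python with OpenCV, using ORB keypoints and a RANSAC-filtered homography; the file names are placeholders, and the paper's exact feature detector and geometric model may differ:

    ```python
    import cv2
    import numpy as np

    # Sketch of keypoint matching between two sonar "pseudo-images" followed
    # by RANSAC rejection of geometrically inconsistent matches. File names
    # are placeholders; the paper's exact detector/model may differ.
    img1 = cv2.imread("swath_pseudo_image_1.png", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("swath_pseudo_image_2.png", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # RANSAC keeps only matches consistent with a single transformation
    M, mask = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=5.0)
    print(f"{int(mask.sum())} inlier correspondences out of {len(matches)}")
    ```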

  14. Accurate paleointensities - the multi-method approach

    NASA Astrophysics Data System (ADS)

    de Groot, Lennart

    2016-04-01

    The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade, methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic times) have seen significant improvements, and various alternative techniques have been proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al., 2014). The multispecimen approach was validated, although the importance of additional tests and criteria to assess multispecimen results must be emphasized. Recently, a non-heating, relative paleointensity technique was proposed - the pseudo-Thellier protocol - which shows great potential in both accuracy and efficiency but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old, so the actual field strength at the time of cooling is reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units - an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.

  15. Precise and Accurate Density Determination of Explosives Using Hydrostatic Weighing

    SciTech Connect

    B. Olinger

    2005-07-01

    Precise and accurate density determination requires weight measurements in air and water using sufficiently precise analytical balances, knowledge of the densities of air and water, knowledge of thermal expansions, availability of a density standard, and a method to estimate the time to achieve thermal equilibrium with water. Density distributions in pressed explosives are inferred from the densities of elements from a central slice.
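
    The underlying relation is the standard hydrostatic-weighing formula (Archimedes' principle with an air-buoyancy correction). A minimal sketch with illustrative balance readings and nominal fluid densities; the paper's thermal-expansion and equilibrium-time corrections are not reproduced:

    ```python
    # Standard hydrostatic-weighing relation, given as a generic sketch;
    # the paper's full procedure (thermal expansion, equilibrium time,
    # density standards) is not reproduced here.
    def density_hydrostatic(w_air, w_water, rho_water=0.99705, rho_air=0.0012):
        """Sample density in g/cm^3 from balance readings (in grams) taken
        in air and in water; default fluid densities are nominal ~25 C values."""
        return (w_air / (w_air - w_water)) * (rho_water - rho_air) + rho_air

    # Illustrative numbers only
    print(f"{density_hydrostatic(w_air=12.3456, w_water=5.4321):.4f} g/cm^3")
    ```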

  16. Accurate Weather Forecasting for Radio Astronomy

    NASA Astrophysics Data System (ADS)

    Maddalena, Ronald J.

    2010-01-01

    The NRAO Green Bank Telescope routinely observes at wavelengths from 3 mm to 1 m. As with all mm-wave telescopes, observing conditions depend upon the variable atmospheric water content. The site provides over 100 days/yr when opacities are low enough for good observing at 3 mm, but winds on the open-air structure reduce the time suitable for 3-mm observing, where pointing is critical. Thus, to maximize productivity, the observing wavelength needs to match weather conditions. For 6 years the telescope has used a dynamic scheduling system (recently upgraded; www.gb.nrao.edu/DSS) that requires accurate multi-day forecasts for winds and opacities. Since opacity forecasts are not provided by the National Weather Service (NWS), I have developed an automated system that takes available forecasts, derives forecasted opacities, and deploys the results on the web in user-friendly graphical overviews (www.gb.nrao.edu/~rmaddale/Weather). The system relies on the "North American Mesoscale" models, which are updated by the NWS every 6 hrs, have a 12 km horizontal resolution and 1 hr temporal resolution, run to 84 hrs, and have 60 vertical layers that extend to 20 km. Each forecast consists of a time series of ground conditions, cloud coverage, etc., and, most importantly, temperature, pressure, and humidity as a function of height. I use Liebe's MWP model (Radio Science, 20, 1069, 1985) to determine the absorption in each layer for each hour for 30 observing wavelengths. Radiative transfer provides, for each hour and wavelength, the total opacity and the radio brightness of the atmosphere, which contributes substantially at some wavelengths to Tsys and the observational noise. Comparisons of measured and forecasted Tsys at 22.2 and 44 GHz imply that the forecasted opacities are good to about 0.01 nepers, which is sufficient for forecasting and accurate calibration. Reliability is high out to 2 days and degrades slowly for longer-range forecasts.
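
    The radiative-transfer step reduces to summing per-layer opacities and attenuated emission. A toy sketch with invented layer temperatures and opacities (the real system derives these from forecast profiles via Liebe's model):

    ```python
    import numpy as np

    # Toy per-layer opacity and radiative-transfer calculation. The layer
    # temperatures and opacities are invented numbers; the real system derives
    # them from forecast profiles of temperature, pressure, and humidity.
    T_layer = np.array([280.0, 260.0, 240.0, 220.0])  # layer temperatures [K]
    tau_layer = np.array([0.02, 0.015, 0.01, 0.005])  # zenith opacity per layer

    tau_total = tau_layer.sum()
    # Brightness seen from the ground: each layer's emission is attenuated
    # by the opacity of the layers between it and the observer (index 0 lowest).
    tau_below = np.concatenate(([0.0], np.cumsum(tau_layer)[:-1]))
    T_atm = np.sum(T_layer * (1 - np.exp(-tau_layer)) * np.exp(-tau_below))

    print(f"total opacity: {tau_total:.3f} nepers, sky brightness: {T_atm:.1f} K")
    ```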

  17. Evaluation of Piloted Inputs for Onboard Frequency Response Estimation

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Martos, Borja

    2013-01-01

    Frequency response estimation results are presented using piloted inputs and a real-time estimation method recently developed for multisine inputs. A nonlinear simulation of the F-16 and a Piper Saratoga research aircraft were subjected to different piloted test inputs while the short-period stabilator/elevator-to-pitch-rate frequency response was estimated. Results show that the method can produce accurate results using wide-band piloted inputs instead of multisines. A new metric is introduced for evaluating which data points to include in the analysis, and recommendations are provided for applying this method with piloted inputs.
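
    A generic way to estimate a frequency response from input/output records is the cross-spectral ratio H(f) = Pxy(f)/Pxx(f). The sketch below applies it to a synthetic first-order system standing in for the elevator-to-pitch-rate dynamics; it is not the paper's real-time multisine method:

    ```python
    import numpy as np
    from scipy import signal

    # Generic spectral frequency-response estimate H(f) = Pxy / Pxx from
    # input/output records. The signals and the stand-in dynamics are
    # synthetic; the paper's real-time method differs in detail.
    fs = 100.0
    t = np.arange(0.0, 60.0, 1.0 / fs)
    rng = np.random.default_rng(3)
    u = rng.normal(0, 1, t.size)                  # wide-band "piloted" input

    # toy first-order low-pass as a stand-in for the aircraft dynamics
    b, a = signal.butter(1, 2.0, fs=fs)
    y = signal.lfilter(b, a, u) + rng.normal(0, 0.05, t.size)

    f, Pxy = signal.csd(u, y, fs=fs, nperseg=1024)
    _, Pxx = signal.welch(u, fs=fs, nperseg=1024)
    H = Pxy / Pxx
    print("gain at 1 Hz approx.", np.abs(H[np.argmin(np.abs(f - 1.0))]))
    ```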

  18. Laser Guided Automated Calibrating System for Accurate Bracket Placement

    PubMed Central

    Anitha, A; Kumar, AJ; Mascarenhas, R; Husain, A

    2015-01-01

    Background: The basic premise of the preadjusted bracket system is accurate bracket positioning. It is widely recognized that accurate bracket placement is of critical importance in the efficient application of biomechanics and in realizing the full potential of a preadjusted edgewise appliance. Aim: The purpose of this study was to design a calibrating system to accurately detect a point on a plane and to determine the accuracy of the Laser Guided Automated Calibrating (LGAC) System. Materials and Methods: To the lowest order of approximation, a plane containing two parallel lines is used to verify the accuracy of the system. After prescribing the distance of a point from the line, images of the plane are captured from controlled angles and calibrated, and the point is identified with a laser marker. Results: The image was captured and analyzed using MATLAB ver. 7 software (The MathWorks Inc.). Each pixel in the image corresponded to a distance of 1 cm/413 (10 mm/413) = 0.0242 mm (L/P). This implies that any variation in distance above 0.024 mm can be measured and acted upon, which sets the highest possible accuracy for this system. Conclusion: A new automated system is introduced, having an accuracy of 0.024 mm, for accurate bracket placement. PMID:25745575

  19. Finite-size scaling of two-point statistics and the turbulent energy cascade generators.

    PubMed

    Cleve, Jochen; Dziekan, Thomas; Schmiegel, Jürgen; Barndorff-Nielsen, Ole E; Pearson, Bruce R; Sreenivasan, Katepalli R; Greiner, Martin

    2005-02-01

    Within the framework of random multiplicative energy cascade models of fully developed turbulence, finite-size-scaling expressions for two-point correlators and cumulants are derived, taking into account the observationally unavoidable conversion from an ultrametric to a Euclidean two-point distance. The comparison with two-point statistics of the surrogate energy dissipation, extracted from various wind tunnel and atmospheric boundary layer records, allows an accurate deduction of multiscaling exponents and cumulants, even at moderate Reynolds numbers for which simple power-law fits are not feasible. The extracted exponents serve as input for parametric estimates of the probabilistic cascade generator. Various cascade generators are evaluated.

  20. First- and second-order error estimates in Monte Carlo integration

    NASA Astrophysics Data System (ADS)

    Bakx, R.; Kleiss, R. H. P.; Versteegen, F.

    2016-11-01

    In Monte Carlo integration an accurate and reliable determination of the numerical integration error is essential. We point out the need for an independent estimate of the error on this error, for which we present an unbiased estimator. In contrast to the usual (first-order) error estimator, this second-order estimator is not guaranteed to be positive in an actual Monte Carlo computation. We propose an alternative and indicate how it can be computed in linear time without risk of large rounding errors. In addition, we comment on the relatively slow convergence of the second-order error estimate.
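
    For orientation, the sketch below computes a Monte Carlo estimate, its usual first-order error, and a textbook estimate of the uncertainty on that error via the fourth central moment. This illustrates the quantities discussed, not necessarily the paper's unbiased estimator or its linear-time formulation:

    ```python
    import numpy as np

    # Monte Carlo estimate of the integral of exp(x) on [0, 1], with the
    # first-order error (standard error of the mean) and a textbook estimate
    # of the error on that error via the fourth central moment.
    rng = np.random.default_rng(4)
    x = rng.uniform(0.0, 1.0, 100_000)
    f = np.exp(x)                       # integrand samples

    n = f.size
    mean = f.mean()
    var = f.var(ddof=1)
    err1 = np.sqrt(var / n)             # first-order error estimate

    # variance of the sample variance, then propagate to err1 = sqrt(var/n)
    m4 = np.mean((f - mean) ** 4)
    var_of_var = (m4 - (n - 3) / (n - 1) * var**2) / n
    err2 = np.sqrt(max(var_of_var, 0.0)) / (2 * err1 * n)

    print(f"I = {mean:.5f} +/- {err1:.5f} (+/- {err2:.6f} on the error)")
    ```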

  1. Accurate segmentation framework for the left ventricle wall from cardiac cine MRI

    NASA Astrophysics Data System (ADS)

    Sliman, H.; Khalifa, F.; Elnakib, A.; Soliman, A.; Beache, G. M.; Gimel'farb, G.; Emam, A.; Elmaghraby, A.; El-Baz, A.

    2013-10-01

    We propose a novel, fast, robust, bi-directional coupled parametric deformable model to segment the left ventricle (LV) wall borders using first- and second-order visual appearance features. These features are embedded in a new stochastic external force that preserves the topology of the LV wall while tracking the evolution of the deformable model's control points. To accurately estimate the marginal density at each control point, the empirical marginal grey-level distributions (first-order appearance) inside and outside the deformable model boundary are modeled with adaptive linear combinations of discrete Gaussians (LCDG). The second-order visual appearance of the LV wall is accurately modeled with a new rotationally invariant second-order Markov-Gibbs random field (MGRF). We tested the proposed segmentation approach on 15 data sets from 6 infarction patients using the Dice similarity coefficient (DSC) and the average distance (AD) between the ground truth and automated segmentation contours. Our approach achieves a mean DSC value of 0.926±0.022 and an AD value of 2.16±0.60, compared with two other level set methods that achieve 0.904±0.033 and 0.885±0.02 for DSC, and 2.86±1.35 and 5.72±4.70 for AD, respectively.
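
    The DSC used to score these segmentations is a standard overlap measure, 2|A∩B|/(|A|+|B|). A minimal implementation on toy binary masks:

    ```python
    import numpy as np

    # Standard Dice similarity coefficient between two binary masks,
    # shown on a toy example (the masks are invented for illustration).
    def dice(seg, truth):
        seg, truth = seg.astype(bool), truth.astype(bool)
        denom = seg.sum() + truth.sum()
        return 2.0 * np.logical_and(seg, truth).sum() / denom if denom else 1.0

    auto = np.zeros((64, 64), dtype=bool)
    auto[10:40, 10:40] = True           # automated contour (toy)
    manual = np.zeros((64, 64), dtype=bool)
    manual[12:42, 12:42] = True         # ground-truth contour (toy)
    print(f"DSC = {dice(auto, manual):.3f}")
    ```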

  2. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  3. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2013-07-01 2013-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  4. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2011-07-01 2011-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  5. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2014-07-01 2014-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  6. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2012-07-01 2012-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  7. Image Capture with Synchronized Multiple-Cameras for Extraction of Accurate Geometries

    NASA Astrophysics Data System (ADS)

    Koehl, M.; Delacourt, T.; Boutry, C.

    2016-06-01

    This paper presents a project for recording and modelling tunnels, traffic circles and roads from multiple sensors. The aim is the representation and accurate 3D modelling of a selection of road infrastructures as dense point clouds in order to extract profiles and metrics from them. These models will be used for the sizing of infrastructures in order to simulate exceptional convoy truck routes. The objective is to extract directly from the point clouds the heights, widths and lengths of bridges and tunnels and the diameter of gyratories, and to highlight potential obstacles for a convoy. Light, mobile and fast acquisition approaches based on images and videos from a set of synchronized sensors have been tested in order to obtain usable point clouds. The presented solution is based on a combination of multiple low-cost cameras mounted on an on-board device allowing dynamic captures. An experimental device containing GoPro Hero4 cameras has been set up and used for tests in static and mobile acquisitions. Various configurations using multiple synchronized cameras have been tested and are discussed in order to highlight the best operational configuration according to the shape of the acquired objects. As the precise calibration of each sensor and its optics is a major factor in the creation of accurate dense point clouds, and in order to reach the best quality available from such cameras, the internal parameters of the cameras' fisheye lenses have been estimated. Reference measurements were also taken with a 3D TLS (Faro Focus 3D) to allow accuracy assessment.

  8. Accurate segmentation of partially overlapping cervical cells based on dynamic sparse contour searching and GVF snake model.

    PubMed

    Guan, Tao; Zhou, Dongxiang; Liu, Yunhui

    2015-07-01

    Overlapping cells segmentation is one of the challenging topics in medical image processing. In this paper, we propose to approximately represent the cell contour as a set of sparse contour points, which can be further partitioned into two parts: the strong contour points and the weak contour points. We consider the cell contour extraction as a contour points locating problem and propose an effective and robust framework for segmentation of partially overlapping cells in cervical smear images. First, the cell nucleus and the background are extracted by a morphological filtering-based K-means clustering algorithm. Second, a gradient decomposition-based edge enhancement method is developed for enhancing the true edges belonging to the center cell. Then, a dynamic sparse contour searching algorithm is proposed to gradually locate the weak contour points in the cell overlapping regions based on the strong contour points. This algorithm involves the least squares estimation and a dynamic searching principle, and is thus effective to cope with the cell overlapping problem. Using the located contour points, the Gradient Vector Flow Snake model is finally employed to extract the accurate cell contour. Experiments have been performed on two cervical smear image datasets containing both single cells and partially overlapping cells. The high accuracy of the cell contour extraction result validates the effectiveness of the proposed method.
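
    The initial nucleus/background extraction rests on K-means clustering of pixel intensities. A minimal one-dimensional sketch (without the paper's morphological filtering, and on synthetic intensities):

    ```python
    import numpy as np

    # Minimal K-means intensity clustering of the kind used for the initial
    # nucleus/background extraction. The intensities are synthetic and the
    # paper's morphological filtering step is omitted; k=3 is an assumption
    # (nucleus, cytoplasm, background).
    rng = np.random.default_rng(5)
    img = np.concatenate([rng.normal(40, 5, 500),     # dark nucleus pixels
                          rng.normal(120, 10, 2000),  # cytoplasm
                          rng.normal(220, 8, 3000)])  # bright background

    k = 3
    centers = np.percentile(img, [10, 50, 90])        # simple initialization
    for _ in range(20):
        # assign each pixel to the nearest center, then recompute centers
        labels = np.argmin(np.abs(img[:, None] - centers[None, :]), axis=1)
        centers = np.array([img[labels == j].mean() for j in range(k)])

    print("cluster centers (intensity):", np.round(centers, 1))
    ```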

  9. Fast and Provably Accurate Bilateral Filtering.

    PubMed

    Chaudhury, Kunal N; Dabhade, Swapnil D

    2016-06-01

    The bilateral filter is a non-linear filter that uses a range filter along with a spatial filter to perform edge-preserving smoothing of images. A direct computation of the bilateral filter requires O(S) operations per pixel, where S is the size of the support of the spatial filter. In this paper, we present a fast and provably accurate algorithm for approximating the bilateral filter when the range kernel is Gaussian. In particular, for box and Gaussian spatial filters, the proposed algorithm can cut down the complexity to O(1) per pixel for any arbitrary S. The algorithm has a simple implementation involving N+1 spatial filterings, where N is the approximation order. We give a detailed analysis of the filtering accuracy that can be achieved by the proposed approximation in relation to the target bilateral filter. This allows us to estimate the order N required to obtain a given accuracy. We also present comprehensive numerical results to demonstrate that the proposed algorithm is competitive with the state-of-the-art methods in terms of speed and accuracy. PMID:27093722
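
    For reference, the direct O(S)-per-pixel computation that the paper accelerates looks as follows; this is the textbook bilateral filter with Gaussian spatial and range kernels and toy parameters, not the proposed O(1) approximation:

    ```python
    import numpy as np

    # Textbook direct bilateral filter (the O(S)-per-pixel baseline that the
    # paper's algorithm approximates in O(1)). Parameters are illustrative.
    def bilateral_direct(img, sigma_s=2.0, sigma_r=0.1, radius=4):
        out = np.zeros_like(img)
        ax = np.arange(-radius, radius + 1)
        xx, yy = np.meshgrid(ax, ax)
        spatial = np.exp(-(xx**2 + yy**2) / (2 * sigma_s**2))  # spatial kernel
        pad = np.pad(img, radius, mode="edge")
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                window = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
                # range kernel: weights fall off with intensity difference
                rng_w = np.exp(-(window - img[i, j])**2 / (2 * sigma_r**2))
                w = spatial * rng_w
                out[i, j] = (w * window).sum() / w.sum()
        return out

    noisy = np.clip(np.linspace(0, 1, 32)[None, :] +
                    0.05 * np.random.default_rng(6).normal(size=(32, 32)), 0, 1)
    print(bilateral_direct(noisy).shape)
    ```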

  10. Building dynamic population graph for accurate correspondence detection.

    PubMed

    Du, Shaoyi; Guo, Yanrong; Sanroma, Gerard; Ni, Dong; Wu, Guorong; Shen, Dinggang

    2015-12-01

    In medical imaging studies, there is an increasing trend toward discovering the intrinsic anatomical differences across individual subjects in a dataset, such as hand images for skeletal bone age estimation. Pair-wise matching is often used to detect correspondences between each individual subject and a pre-selected model image with manually placed landmarks. However, the large anatomical variability across individual subjects can easily compromise such a pair-wise matching step. In this paper, we present a new framework to simultaneously detect correspondences among a population of individual subjects by propagating all manually placed landmarks from a small set of model images through a dynamically constructed image graph. Specifically, we first establish graph links between models and individual subjects according to pair-wise shape similarity (the forward step). Next, we detect correspondences for the individual subjects with direct links to any of the model images, which is achieved by a new multi-model correspondence detection approach based on our recently published sparse point matching method. To correct inaccurate correspondences, we further apply an error detection mechanism that automatically detects wrong correspondences and then updates the image graph accordingly (the backward step). After that, all subject images with detected correspondences are included in the set of model images, and the above two steps of graph expansion and error correction are repeated until accurate correspondences for all subject images are established. Evaluations on real hand X-ray images demonstrate that our proposed method, using a dynamic graph construction approach, achieves much higher accuracy and robustness when compared with state-of-the-art pair-wise correspondence detection methods as well as a similar method using a static population graph.

  11. Hole-ness of point clouds

    NASA Astrophysics Data System (ADS)

    Gronz, Oliver; Seeger, Manuel; Klaes, Björn; Casper, Markus C.; Ries, Johannes B.

    2015-04-01

    Accurate and dense 3D models of soil surfaces can be used in various ways: They can be used as initial shapes for erosion models. They can be used as benchmark shapes for erosion model outputs. They can be used to derive metrics, such as random roughness... One easy and low-cost method to produce these models is structure from motion (SfM). Using this method, two questions arise: Does the soil moisture, which changes the colour, albedo and reflectivity of the soil, influence the model quality? How can the model quality be evaluated? To answer these questions, a suitable data set has been produced: soil has been placed on a tray and area