Science.gov

Sample records for accurate point estimates

  1. Robust and Accurate Vision-Based Pose Estimation Algorithm Based on Four Coplanar Feature Points

    PubMed Central

    Zhang, Zimiao; Zhang, Shihai; Li, Qiu

    2016-01-01

    Vision-based pose estimation is an important application of machine vision. Currently, analytical and iterative methods are used to solve the object pose. The analytical solutions generally take less computation time. However, the analytical solutions are extremely susceptible to noise. The iterative solutions minimize the distance error between feature points based on 2D image pixel coordinates. However, the non-linear optimization needs a good initial estimate of the true solution; otherwise, it is more time consuming than the analytical solutions. Moreover, the image processing error grows rapidly as the measurement range increases, which leads to pose estimation errors. All of these factors cause accuracy to decrease. To solve this problem, a novel pose estimation method based on four coplanar points is proposed. Firstly, the coordinates of the feature points are determined according to the linear constraints formed by the four points. The initial coordinates of the feature points acquired through this linear method are then optimized through an iterative method. Finally, the coordinate system of the object motion is established and a method is introduced to solve the object pose. Although the growing image processing error causes pose estimation errors as the measurement range increases, these errors can be decreased through the proposed coordinate system. The proposed method is compared with two other existing methods through experiments. Experimental results demonstrate that the proposed method works efficiently and stably. PMID:27999338
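
    For context, a common baseline for this task is solving a planar perspective-n-point problem. Below is a minimal sketch using OpenCV's planar PnP solver on four coplanar points; it is not the paper's algorithm, and the point coordinates and camera intrinsics are made-up placeholders.

    ```python
    # Baseline pose-from-4-coplanar-points using OpenCV's planar PnP solver.
    # This is NOT the paper's method, just a common reference approach.
    import cv2
    import numpy as np

    # Four coplanar feature points in the object frame (z = 0), in metres.
    # Values are illustrative placeholders.
    object_pts = np.array([[-0.05, -0.05, 0.0],
                           [ 0.05, -0.05, 0.0],
                           [ 0.05,  0.05, 0.0],
                           [-0.05,  0.05, 0.0]], dtype=np.float64)

    # Corresponding 2D pixel coordinates from image processing (assumed given).
    image_pts = np.array([[310.2, 245.1],
                          [330.8, 244.7],
                          [331.5, 265.3],
                          [309.9, 266.0]], dtype=np.float64)

    # Assumed pinhole intrinsics (fx, fy, cx, cy) and zero lens distortion.
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    dist = np.zeros(5)

    # IPPE is a solver specialized for planar (coplanar) point sets.
    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist,
                                  flags=cv2.SOLVEPNP_IPPE)
    R, _ = cv2.Rodrigues(rvec)   # rotation matrix of the object pose
    print("R =", R, "\nt =", tvec.ravel())
    ```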

  2. Precision Pointing Control to and Accurate Target Estimation of a Non-Cooperative Vehicle

    NASA Technical Reports Server (NTRS)

    VanEepoel, John; Thienel, Julie; Sanner, Robert M.

    2006-01-01

    In 2004, NASA began investigating a robotic servicing mission for the Hubble Space Telescope (HST). Such a mission would not only require estimates of the HST attitude and rates in order to achieve capture by the proposed Hubble Robotic Vehicle (HRV), but also precision control to achieve the desired rate and maintain the orientation to successfully dock with HST. To generalize the situation, HST is the target vehicle and HRV is the chaser. This work presents a nonlinear approach for estimating the body rates of a non-cooperative target vehicle, and coupling this estimation to a control scheme. Non-cooperative in this context relates to the target vehicle no longer having the ability to maintain attitude control or transmit attitude knowledge.

  2. BIOACCESSIBILITY TESTS ACCURATELY ESTIMATE BIOAVAILABILITY OF LEAD TO QUAIL

    EPA Pesticide Factsheets

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from five Pb-contaminated Superfund sites had relative bioavailabilities from 33%-63%, with a mean of about 50%. Treatment of two of the soils with P significantly reduced the bioavailability of Pb. The bioaccessibility of the Pb in the test soils was then measured in six in vitro tests and regressed on bioavailability. They were: the “Relative Bioavailability Leaching Procedure” (RBALP) at pH 1.5, the same test conducted at pH 2.5, the “Ohio State University In vitro Gastrointestinal” method (OSU IVG), the “Urban Soil Bioaccessible Lead Test”, the modified “Physiologically Based Extraction Test” and the “Waterfowl Physiologically Based Extraction Test.” All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the RBALP pH 2.5 and OSU IVG tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%). Additional Pb was associated with P (chloropyromorphite, hydroxypyromorphite and tertiary Pb phosphate), and with Pb carbonates, leadhillite (a lead sulfate carbonate hydroxide), and Pb sulfide.

  4. Accurate pointing of tungsten welding electrodes

    NASA Technical Reports Server (NTRS)

    Ziegelmeier, P.

    1971-01-01

    Thoriated tungsten is pointed accurately and quickly by using sodium nitrite. The point produced is smooth, and no effort is necessary to hold the tungsten rod concentric. The chemically produced point can be used several times longer than ground points. This method reduces the time and cost of preparing tungsten electrodes.

  5. Accurate Biomass Estimation via Bayesian Adaptive Sampling

    NASA Astrophysics Data System (ADS)

    Wheeler, K.; Knuth, K.; Castle, P.

    2005-12-01

    Typical estimates of standing wood derived from remote sensing sources take advantage of aggregate measurements of canopy heights (e.g. LIDAR) and canopy diameters (segmentation of IKONOS imagery) to obtain a wood volume estimate by assuming homogeneous species and a fixed function that returns volume. The validation of such techniques uses manually measured diameter at breast height (DBH) records. Our goal is to improve the accuracy and applicability of biomass estimation methods for heterogeneous forests and transitional areas. We are developing estimates with quantifiable uncertainty using a new form of estimation function, active sampling, and volumetric reconstruction image rendering for species-specific mass truth. Initially we are developing a Bayesian adaptive sampling method for the BRDF associated with the MISR Rahman model with respect to categorical biomes. This involves characterizing the probability distributions of the 3 free parameters of the Rahman model for the 6 categories of biomes used by MISR. Subsequently, these distributions can be used to determine the optimal sampling methodology to distinguish biomes during acquisition. We have a remotely controlled semi-autonomous helicopter that has stereo imaging, lidar, differential GPS, and spectrometers covering wavelengths from visible to NIR. We intend to automatically vary the waypoints of the flight path via the Bayesian adaptive sampling method. The second critical part of this work is in automating the validation of biomass estimates using machine vision techniques. This involves taking 2-D pictures of trees of known species, and then, via Bayesian techniques, reconstructing 3-D models of the trees to estimate the distribution moments associated with wood volume. Similar techniques have been developed by the medical imaging community. This then provides probability distributions conditional upon species. The final part of this work is in relating the BRDF actively sampled measurements to species

  6. Accurate pose estimation for forensic identification

    NASA Astrophysics Data System (ADS)

    Merckx, Gert; Hermans, Jeroen; Vandermeulen, Dirk

    2010-04-01

    In forensic authentication, one aims to identify the perpetrator among a series of suspects or distractors. A fundamental problem in any recognition system that aims for identification of subjects in a natural scene is the lack of constraints on viewing and imaging conditions. In forensic applications, identification proves even more challenging, since most surveillance footage is of abysmal quality. In this context, robust methods for pose estimation are paramount. In this paper we will therefore present a new pose estimation strategy for very low quality footage. Our approach uses 3D-2D registration of a textured 3D face model with the surveillance image to obtain accurate far field pose alignment. Starting from an inaccurate initial estimate, the technique uses novel similarity measures based on the monogenic signal to guide a pose optimization process. We will illustrate the descriptive strength of the introduced similarity measures by using them directly as a recognition metric. Through validation, using both real and synthetic surveillance footage, our pose estimation method is shown to be accurate, and robust to lighting changes and image degradation.

  7. Point estimates for probability moments

    PubMed Central

    Rosenblueth, Emilio

    1975-01-01

    Given a well-behaved real function Y of a real random variable X and the first two or three moments of X, expressions are derived for the moments of Y as linear combinations of powers of the point estimates y(x+) and y(x-), where x+ and x- are specific values of X. Higher-order approximations and approximations for discontinuous Y using more point estimates are also given. Second-moment approximations are generalized to the case when Y is a function of several variables. PMID:16578731
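
    For the symmetric two-point case described in this abstract, the estimates reduce to evaluating Y at x± = μ ± σ with equal weights. A minimal sketch follows, assuming zero skewness in X (the paper also treats skewed, discontinuous, and multivariate cases):

    ```python
    # Minimal sketch of Rosenblueth's two-point estimate method for the
    # symmetric case (skewness of X ~ 0): x± = mu ± sigma, weights 1/2.
    # Moments of Y = y(X) are linear combinations of y(x+) and y(x-).
    def two_point_moments(y, mu, sigma):
        """Approximate mean and variance of Y = y(X) from two point estimates."""
        y_plus, y_minus = y(mu + sigma), y(mu - sigma)
        mean_y = 0.5 * (y_plus + y_minus)           # E[Y] ~ (y+ + y-)/2
        mean_y2 = 0.5 * (y_plus**2 + y_minus**2)    # E[Y^2] ~ (y+^2 + y-^2)/2
        return mean_y, mean_y2 - mean_y**2

    # Example: Y = X^2 with mu = 2, sigma = 0.5; exact E[Y] = mu^2 + sigma^2.
    m, v = two_point_moments(lambda x: x * x, 2.0, 0.5)
    print(m, v)   # mean 4.25 matches mu^2 + sigma^2 exactly for quadratic y
    ```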

  8. Fast and accurate estimation for astrophysical problems in large databases

    NASA Astrophysics Data System (ADS)

    Richards, Joseph W.

    2010-10-01

    A recent flood of astronomical data has created much demand for sophisticated statistical and machine learning tools that can rapidly draw accurate inferences from large databases of high-dimensional data. In this Ph.D. thesis, methods for statistical inference in such databases will be proposed, studied, and applied to real data. I use methods for low-dimensional parametrization of complex, high-dimensional data that are based on the notion of preserving the connectivity of data points in the context of a Markov random walk over the data set. I show how this simple parameterization of data can be exploited to: define appropriate prototypes for use in complex mixture models, determine data-driven eigenfunctions for accurate nonparametric regression, and find a set of suitable features to use in a statistical classifier. In this thesis, methods for each of these tasks are built up from simple principles, compared to existing methods in the literature, and applied to data from astronomical all-sky surveys. I examine several important problems in astrophysics, such as estimation of star formation history parameters for galaxies, prediction of redshifts of galaxies using photometric data, and classification of different types of supernovae based on their photometric light curves. Fast methods for high-dimensional data analysis are crucial in each of these problems because they all involve the analysis of complicated high-dimensional data in large, all-sky surveys. Specifically, I estimate the star formation history parameters for the nearly 800,000 galaxies in the Sloan Digital Sky Survey (SDSS) Data Release 7 spectroscopic catalog, determine redshifts for over 300,000 galaxies in the SDSS photometric catalog, and estimate the types of 20,000 supernovae as part of the Supernova Photometric Classification Challenge. Accurate predictions and classifications are imperative in each of these examples because these estimates are utilized in broader inference problems

  9. Accurate Biomass Estimation via Bayesian Adaptive Sampling

    NASA Technical Reports Server (NTRS)

    Wheeler, Kevin R.; Knuth, Kevin H.; Castle, Joseph P.; Lvov, Nikolay

    2005-01-01

    The following concepts were introduced: a) Bayesian adaptive sampling for solving biomass estimation; b) characterization of MISR Rahman model parameters conditioned upon MODIS landcover; c) a rigorous non-parametric Bayesian approach to analytic mixture model determination; and d) a unique U.S. asset for science product validation and verification.

  10. Preparing Rapid, Accurate Construction Cost Estimates with a Personal Computer.

    ERIC Educational Resources Information Center

    Gerstel, Sanford M.

    1986-01-01

    An inexpensive and rapid method for preparing accurate cost estimates of construction projects in a university setting, using a personal computer, purchased software, and one estimator, is described. The case against defined estimates, the rapid estimating system, and adjusting standard unit costs are discussed. (MLW)

  11. Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method

    ERIC Educational Resources Information Center

    Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey

    2013-01-01

    Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…

  12. Accurate genome relative abundance estimation based on shotgun metagenomic reads.

    PubMed

    Xia, Li C; Cram, Jacob A; Chen, Ting; Fuhrman, Jed A; Sun, Fengzhu

    2011-01-01

    Accurate estimation of microbial community composition based on metagenomic sequencing data is fundamental for subsequent metagenomics analysis. Prevalent estimation methods are mainly based on directly summarizing alignment results or variants thereof, which often results in biased and/or unstable estimates. We have developed a unified probabilistic framework (named GRAMMy) by explicitly modeling read assignment ambiguities, genome size biases and read distributions along the genomes. A maximum likelihood method is employed to compute the Genome Relative Abundance of microbial communities using mixture model theory (GRAMMy). GRAMMy has been demonstrated to give estimates that are accurate and robust across both simulated and real read benchmark datasets. We applied GRAMMy to a collection of 34 metagenomic read sets from four metagenomics projects and identified 99 frequent species (minimally 0.5% abundant in at least 50% of the datasets) in the human gut samples. Our results show substantial improvements over previous studies, such as adjusting the over-estimated abundance of Bacteroides species in human gut samples, by providing a new reference-based strategy for metagenomic sample comparisons. GRAMMy can be used flexibly with many read assignment tools (mapping, alignment or composition-based), even with low-sensitivity mapping results from huge short-read datasets. It will be increasingly useful as an accurate and robust tool for abundance estimation with the growing size of read sets and the expanding database of reference genomes.
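
    The core of such a mixture-model framework can be sketched as a standard EM iteration: read-to-genome assignment probabilities in the E-step, abundance updates in the M-step, with genome lengths dividing out the size bias. The sketch below is schematic, not the authors' GRAMMy implementation; the likelihood matrix and genome lengths are toy values.

    ```python
    # Schematic EM for mixture-based relative abundance, in the spirit of
    # GRAMMy (not the authors' code). like[i, j] is an assumed precomputed
    # likelihood of read i under genome j (e.g. from alignment scores);
    # glen[j] is the genome length used to correct the size bias.
    import numpy as np

    def em_abundance(like, glen, n_iter=200):
        n_reads, n_genomes = like.shape
        a = np.full(n_genomes, 1.0 / n_genomes)   # mixing weights
        for _ in range(n_iter):
            # E-step: responsibility of genome j for read i
            z = like * a
            z /= z.sum(axis=1, keepdims=True)
            # M-step: update mixing proportions from soft assignments
            a = z.sum(axis=0) / n_reads
        # Convert read-mixing proportions to genome relative abundance,
        # dividing out genome length (longer genomes attract more reads).
        rel = a / glen
        return rel / rel.sum()

    # Toy example: 3 genomes, 6 reads with ambiguous hits.
    like = np.array([[0.9, 0.1, 0.0], [0.8, 0.2, 0.0], [0.5, 0.5, 0.0],
                     [0.0, 1.0, 0.0], [0.0, 0.4, 0.6], [0.0, 0.1, 0.9]])
    glen = np.array([2e6, 4e6, 1e6])
    print(em_abundance(like, glen))
    ```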

  13. Accurate absolute GPS positioning through satellite clock error estimation

    NASA Astrophysics Data System (ADS)

    Han, S.-C.; Kwon, J. H.; Jekeli, C.

    2001-05-01

    An algorithm for very accurate absolute positioning through Global Positioning System (GPS) satellite clock estimation has been developed. Using International GPS Service (IGS) precise orbits and measurements, GPS clock errors were estimated at 30-s intervals. Compared to values determined by the Jet Propulsion Laboratory, the agreement was at the level of about 0.1 ns (3 cm). The clock error estimates were then applied to an absolute positioning algorithm in both static and kinematic modes. For the static case, an IGS station was selected and the coordinates were estimated every 30 s. The estimated absolute position coordinates and the known values had a mean difference of up to 18 cm with standard deviation less than 2 cm. For the kinematic case, data obtained every second from a GPS buoy were tested and the result from the absolute positioning was compared to a differential GPS (DGPS) solution. The mean differences between the coordinates estimated by the two methods are less than 40 cm and the standard deviations are less than 25 cm. It was verified that this poorer standard deviation on 1-s position results is due to the clock error interpolation from 30-s estimates with Selective Availability (SA). After SA was turned off, higher-rate clock error estimates (such as 1 s) could be obtained by a simple interpolation with negligible corruption. Therefore, the proposed absolute positioning technique can be used to within a few centimeters' precision at any rate by estimating 30-s satellite clock errors and interpolating them.
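
    The interpolation step mentioned at the end is straightforward; a toy sketch follows (made-up clock values, simple linear interpolation as the abstract describes):

    ```python
    # Sketch of densifying 30-s satellite clock error estimates to 1-s
    # epochs. All values here are made up for illustration.
    import numpy as np

    t30 = np.arange(0, 300, 30)                        # estimate epochs [s]
    clk30 = 1e-9 * np.random.randn(t30.size).cumsum()  # toy clock errors [s]

    t1 = np.arange(0, 271)                             # desired 1-s epochs
    clk1 = np.interp(t1, t30, clk30)                   # linear interpolation

    # With SA off, the clock behaves smoothly enough that this simple
    # interpolation adds negligible error, per the abstract's finding.
    print(clk1[:5])
    ```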

  14. An Accurate Link Correlation Estimator for Improving Wireless Protocol Performance

    PubMed Central

    Zhao, Zhiwei; Xu, Xianghua; Dong, Wei; Bu, Jiajun

    2015-01-01

    Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation. PMID:25686314
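
    The underlying quantity, link correlation, is essentially a conditional packet-reception statistic. The sketch below computes that basic statistic from toy reception traces; it is not the paper's LACE estimator, which additionally fuses long-term and short-term link behaviors.

    ```python
    # Illustrative pairwise link correlation from packet reception traces
    # (1 = received). Toy data, not the LACE algorithm itself.
    import numpy as np

    rx_a = np.array([1, 1, 0, 1, 1, 0, 1, 1, 1, 0])   # link A trace
    rx_b = np.array([1, 1, 0, 1, 0, 0, 1, 1, 1, 0])   # link B trace

    p_b = rx_b.mean()                        # unconditional PRR of link B
    p_b_given_a = rx_b[rx_a == 1].mean()     # PRR of B when A received

    # Positive correlation: B succeeds more often when A succeeds.
    print(p_b, p_b_given_a)
    ```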

  15. Accurate estimation of sigma(exp 0) using AIRSAR data

    NASA Technical Reports Server (NTRS)

    Holecz, Francesco; Rignot, Eric

    1995-01-01

    During recent years signature analysis, classification, and modeling of Synthetic Aperture Radar (SAR) data as well as estimation of geophysical parameters from SAR data have received a great deal of interest. An important requirement for the quantitative use of SAR data is the accurate estimation of the backscattering coefficient sigma(exp 0). In terrain with relief variations radar signals are distorted due to the projection of the scene topography into the slant range-Doppler plane. The effect of these variations is to change the physical size of the scattering area, leading to errors in the radar backscatter values and incidence angle. For this reason the local incidence angle, derived from sensor position and Digital Elevation Model (DEM) data must always be considered. Especially in the airborne case, the antenna gain pattern can be an additional source of radiometric error, because the radar look angle is not known precisely as a result of the aircraft motions and the local surface topography. Consequently, radiometric distortions due to the antenna gain pattern must also be corrected for each resolution cell, by taking into account aircraft displacements (position and attitude) and position of the backscatter element, defined by the DEM data. In this paper, a method to derive an accurate estimation of the backscattering coefficient using NASA/JPL AIRSAR data is presented. The results are evaluated in terms of geometric accuracy, radiometric variations of sigma(exp 0), and precision of the estimated forest biomass.

  16. How accurate are physical property estimation programs for organosilicon compounds?

    PubMed

    Boethling, Robert; Meylan, William

    2013-11-01

    Organosilicon compounds are important in chemistry and commerce, and nearly 10% of new chemical substances for which premanufacture notifications are processed by the US Environmental Protection Agency (USEPA) contain silicon (Si). Yet, remarkably few measured values are submitted for key physical properties, and the accuracy of estimation programs such as the Estimation Programs Interface (EPI) Suite and the SPARC Performs Automated Reasoning in Chemistry (SPARC) system is largely unknown. To address this issue, the authors developed an extensive database of measured property values for organic compounds containing Si and evaluated the performance of no-cost estimation programs for several properties of importance in environmental assessment. These included melting point (mp), boiling point (bp), vapor pressure (vp), water solubility, n-octanol/water partition coefficient (log KOW), and Henry's law constant. For bp and the larger of 2 vp datasets, SPARC, MPBPWIN, and the USEPA's Toxicity Estimation Software Tool (TEST) had similar accuracy. For log KOW and water solubility, the authors tested 11 and 6 no-cost estimators, respectively. The best performers were Molinspiration and WSKOWWIN, respectively. The TEST's consensus mp method outperformed that of MPBPWIN by a considerable margin. Generally, the best programs estimated the listed properties of diverse organosilicon compounds with accuracy sufficient for chemical screening. The results also highlight areas where improvement is most needed.

  17. Accurate Satellite-Derived Estimates of Tropospheric Ozone Radiative Forcing

    NASA Technical Reports Server (NTRS)

    Joiner, Joanna; Schoeberl, Mark R.; Vasilkov, Alexander P.; Oreopoulos, Lazaros; Platnick, Steven; Livesey, Nathaniel J.; Levelt, Pieternel F.

    2008-01-01

    Estimates of the radiative forcing due to anthropogenically-produced tropospheric O3 are derived primarily from models. Here, we use tropospheric ozone and cloud data from several instruments in the A-train constellation of satellites as well as information from the GEOS-5 Data Assimilation System to accurately estimate the instantaneous radiative forcing from tropospheric O3 for January and July 2005. We improve upon previous estimates of tropospheric ozone mixing ratios from a residual approach using the NASA Earth Observing System (EOS) Aura Ozone Monitoring Instrument (OMI) and Microwave Limb Sounder (MLS) by incorporating cloud pressure information from OMI. Since we cannot distinguish between natural and anthropogenic sources with the satellite data, our estimates reflect the total forcing due to tropospheric O3. We focus specifically on the magnitude and spatial structure of the cloud effect on both the short- and long-wave radiative forcing. The estimates presented here can be used to validate present day O3 radiative forcing produced by models.

  18. Towards accurate and precise estimates of lion density.

    PubMed

    Elliot, Nicholas B; Gopalaswamy, Arjun M

    2016-12-13

    Reliable estimates of animal density are fundamental to our understanding of ecological processes and population dynamics. Furthermore, their accuracy is vital to conservation biology since wildlife authorities rely on these figures to make decisions. However, it is notoriously difficult to accurately estimate density for wide-ranging species such as carnivores that occur at low densities. In recent years, significant progress has been made in density estimation of Asian carnivores, but the methods have not been widely adapted to African carnivores. African lions (Panthera leo) provide an excellent example, as although abundance indices have been shown to produce poor inferences, they continue to be used to estimate lion density and inform management and policy. In this study we adapt a Bayesian spatially explicit capture-recapture model to estimate lion density in the Maasai Mara National Reserve (MMNR) and surrounding conservancies in Kenya. We utilize sightings data from a three-month survey period to produce statistically rigorous spatial density estimates. Overall posterior mean lion density was estimated to be 16.85 (posterior standard deviation = 1.30) lions over one year of age per 100 km² with a sex ratio of 2.2♀:1♂. We argue that such methods should be developed, improved and favored over less reliable methods such as track and call-up surveys. We caution against trend analyses based on surveys of differing reliability and call for a unified framework to assess lion numbers across their range in order for better informed management and policy decisions to be made.

  19. Motion Estimation System Utilizing Point Cloud Registration

    NASA Technical Reports Server (NTRS)

    Chen, Qi (Inventor)

    2016-01-01

    A system and method of estimating the motion of a machine is disclosed. The method may include determining a first point cloud and a second point cloud corresponding to an environment in a vicinity of the machine. The method may further include generating a first extended Gaussian image (EGI) for the first point cloud and a second EGI for the second point cloud. The method may further include determining a first EGI segment based on the first EGI and a second EGI segment based on the second EGI. The method may further include determining a first two-dimensional distribution for points in the first EGI segment and a second two-dimensional distribution for points in the second EGI segment. The method may further include estimating motion of the machine based on the first and second two-dimensional distributions.

  20. Accurate estimators of correlation functions in Fourier space

    NASA Astrophysics Data System (ADS)

    Sefusatti, E.; Crocce, M.; Scoccimarro, R.; Couchman, H. M. P.

    2016-08-01

    Efficient estimators of Fourier-space statistics for large numbers of objects rely on fast Fourier transforms (FFTs), which are affected by aliasing from unresolved small-scale modes due to the finite FFT grid. Aliasing takes the form of a sum over images, each of them corresponding to the Fourier content displaced by increasing multiples of the sampling frequency of the grid. These spurious contributions limit the accuracy in the estimation of Fourier-space statistics, and are typically ameliorated by simultaneously increasing grid size and discarding high-frequency modes. This results in inefficient estimates for, e.g., the power spectrum when the desired systematic biases are well under the per cent level. We show that using interlaced grids removes odd images, which include the dominant contribution to aliasing. In addition, we discuss the choice of interpolation kernel used to define density perturbations on the FFT grid and demonstrate that using higher order interpolation kernels than the standard Cloud-In-Cell algorithm results in significant reduction of the remaining images. We show that combining fourth-order interpolation with interlacing gives very accurate Fourier amplitudes and phases of density perturbations. This results in power spectrum and bispectrum estimates that have systematic biases below 0.01 per cent all the way to the Nyquist frequency of the grid, thus maximizing the use of unbiased Fourier coefficients for a given grid size and greatly reducing systematics for applications to large cosmological data sets.
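
    A 1D sketch of the two ingredients, CIC assignment and interlacing, is given below. It uses toy data; the CIC window deconvolution needed for unbiased amplitudes is omitted, and the sign of the half-cell phase follows numpy's forward-FFT convention.

    ```python
    # 1D sketch of cloud-in-cell (CIC) assignment with interlacing: two
    # grids offset by half a cell are combined in Fourier space, which
    # cancels the odd aliasing images that dominate near the Nyquist
    # frequency. Toy particle data; window deconvolution omitted.
    import numpy as np

    def cic(x, n, L):
        """CIC (linear) assignment of particle positions x onto n cells."""
        H = L / n
        s = x / H - 0.5                      # coordinate relative to cell centres
        i = np.floor(s).astype(int)
        w = s - i                            # linear weight toward the right cell
        rho = np.zeros(n)
        np.add.at(rho, i % n, 1.0 - w)
        np.add.at(rho, (i + 1) % n, w)
        return rho

    L, n = 1000.0, 256
    x = np.random.uniform(0, L, 100_000)     # toy "particles"

    rho1 = cic(x, n, L)                      # standard grid
    rho2 = cic((x + 0.5 * L / n) % L, n, L)  # same particles, half-cell shift

    d1 = np.fft.rfft(rho1 / rho1.mean() - 1.0)
    d2 = np.fft.rfft(rho2 / rho2.mean() - 1.0)

    k = np.arange(d1.size)                   # integer wavenumber index
    phase = np.exp(1j * np.pi * k / n)       # undo the half-cell shift
    d_int = 0.5 * (d1 + phase * d2)          # odd aliasing images cancel here

    power = np.abs(d_int) ** 2 / x.size      # raw (shot-noise dominated) power
    ```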

  1. How utilities can achieve more accurate decommissioning cost estimates

    SciTech Connect

    Knight, R.

    1999-07-01

    The number of commercial nuclear power plants that are undergoing decommissioning coupled with the economic pressure of deregulation has increased the focus on adequate funding for decommissioning. The introduction of spent-fuel storage and disposal of low-level radioactive waste into the cost analysis places even greater concern as to the accuracy of the fund calculation basis. The size and adequacy of the decommissioning fund have also played a major part in the negotiations for transfer of plant ownership. For all of these reasons, it is important that the operating plant owner reduce the margin of error in the preparation of decommissioning cost estimates. To date, all of these estimates have been prepared via the building block method. That is, numerous individual calculations defining the planning, engineering, removal, and disposal of plant systems and structures are performed. These activity costs are supplemented by the period-dependent costs reflecting the administration, control, licensing, and permitting of the program. This method will continue to be used in the foreseeable future until adequate performance data are available. The accuracy of the activity cost calculation is directly related to the accuracy of the inventory of plant system components, piping and equipment, and plant structural composition. Typically, it is left up to the cost-estimating contractor to develop this plant inventory. The data are generated by searching and analyzing property asset records, plant databases, piping and instrumentation drawings, piping system isometric drawings, and component assembly drawings. However, experience has shown that these sources may not be up to date, discrepancies may exist, there may be missing data, and the level of detail may not be sufficient. Again, typically, the time constraints associated with the development of the cost estimate preclude perfect resolution of the inventory questions. Another problem area in achieving accurate cost

  2. Estimating Function Approaches for Spatial Point Processes

    NASA Astrophysics Data System (ADS)

    Deng, Chong

    Spatial point pattern data consist of locations of events that are often of interest in biological and ecological studies. Such data are commonly viewed as a realization from a stochastic process called a spatial point process. To fit a parametric spatial point process model to such data, likelihood-based methods have been widely studied. However, while maximum likelihood estimation is often too computationally intensive for Cox and cluster processes, pairwise likelihood methods such as composite likelihood and Palm likelihood usually suffer from a loss of information due to ignoring the correlation among pairs. For many types of correlated data other than spatial point processes, when likelihood-based approaches are not desirable, estimating functions have been widely used for model fitting. In this dissertation, we explore estimating function approaches for fitting spatial point process models. These approaches, which are based on asymptotically optimal estimating function theories, can be used to incorporate the correlation among data and yield more efficient estimators. We conducted a series of studies to demonstrate that these estimating function approaches are good alternatives to balance the trade-off between computational complexity and estimation efficiency. First, we propose a new estimating procedure that improves the efficiency of the pairwise composite likelihood method in estimating clustering parameters. Our approach combines estimating functions derived from pairwise composite likelihood estimation and estimating functions that account for correlations among the pairwise contributions. Our method can be used to fit a variety of parametric spatial point process models and can yield more efficient estimators for the clustering parameters than pairwise composite likelihood estimation. We demonstrate its efficacy through a simulation study and an application to the longleaf pine data. Second, we further explore the quasi-likelihood approach on fitting

  3. Optimizing Probability of Detection Point Estimate Demonstration

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M.

    2017-01-01

    Probability of detection (POD) analysis is used in assessing the reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-1823 and the associated mh1823 POD software give the most common methods of POD analysis. Real flaws such as cracks and crack-like flaws are desired to be detected using these NDE methods. A reliably detectable crack size is required for safe-life analysis of fracture critical parts. The paper provides a discussion on optimizing probability of detection (POD) demonstration experiments using the point estimate method. The POD point estimate method is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for its probability density. Normally, a set of 29 flaws of the same size within some tolerance is used in the demonstration. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false call (POF) while keeping the flaw sizes in the set as small as possible.
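
    The binomial arithmetic behind the standard 29-flaw demonstration is easy to reproduce. A minimal sketch follows (scipy assumed; the 29/29 criterion and 0.90 POD target are the conventional values used in MIL-HDBK-1823-style demonstrations):

    ```python
    # Binomial arithmetic behind the 29-of-29 point estimate demonstration:
    # 29 straight detections demonstrates 90% POD with ~95% confidence,
    # because a procedure with true POD of only 0.90 would pass a 29/29
    # test less than 5% of the time.
    from scipy.stats import binom

    n = 29                          # flaws in the demonstration set
    print(1 - 0.90**n)              # confidence of the 29/29 criterion: ~0.953

    # Probability of passing the demonstration (PPD) as a function of the
    # procedure's true POD; the optimization discussed in the paper trades
    # this off against the probability of false calls.
    for true_pod in (0.90, 0.95, 0.99):
        ppd = binom.pmf(n, n, true_pod)   # all n flaws must be found
        print(true_pod, ppd)
    ```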

  4. Software Size Estimation Using Activity Point

    NASA Astrophysics Data System (ADS)

    Densumite, S.; Muenchaisri, P.

    2017-03-01

    Software size is widely recognized as an important parameter for effort and cost estimation. Currently there are many methods for measuring software size, including Source Lines of Code (SLOC), Function Points (FP), Netherlands Software Metrics Users Association (NESMA), Common Software Measurement International Consortium (COSMIC), and Use Case Points (UCP). SLOC is physically counted after the software is developed. Other methods compute size from functional, technical, and/or environmental aspects at an early phase of software development. In this research, an activity point approach is proposed as another software size estimation method. Activity points are computed using an activity diagram and adjusted with technical complexity factors (TCF), environment complexity factors (ECF), and people risk factors (PRF). An evaluation of the approach is presented.

  5. Naïve Point Estimation

    ERIC Educational Resources Information Center

    Lindskog, Marcus; Winman, Anders; Juslin, Peter

    2013-01-01

    The capacity of short-term memory is a key constraint when people make online judgments requiring them to rely on samples retrieved from memory (e.g., Dougherty & Hunter, 2003). In this article, the authors compare 2 accounts of how people use knowledge of statistical distributions to make point estimates: either by retrieving precomputed…

  6. Estimation of bone permeability using accurate microstructural measurements.

    PubMed

    Beno, Thoma; Yoon, Young-June; Cowin, Stephen C; Fritton, Susannah P

    2006-01-01

    While interstitial fluid flow is necessary for the viability of osteocytes, it is also believed to play a role in bone's mechanosensory system by shearing bone cell membranes or causing cytoskeleton deformation and thus activating biochemical responses that lead to the process of bone adaptation. However, the fluid flow properties that regulate bone's adaptive response are poorly understood. In this paper, we present an analytical approach to determine the degree of anisotropy of the permeability of the lacunar-canalicular porosity in bone. First, we estimate the total number of canaliculi emanating from each osteocyte lacuna based on published measurements from parallel-fibered shaft bones of several species (chick, rabbit, bovine, horse, dog, and human). Next, we determine the local three-dimensional permeability of the lacunar-canalicular porosity for these species using recent microstructural measurements and adapting a previously developed model. Results demonstrated that the number of canaliculi per osteocyte lacuna ranged from 41 for human to 115 for horse. Permeability coefficients were found to be different in three local principal directions, indicating local orthotropic symmetry of bone permeability in parallel-fibered cortical bone for all species examined. For the range of parameters investigated, the local lacunar-canalicular permeability varied more than three orders of magnitude, with the osteocyte lacunar shape and size along with the 3-D canalicular distribution determining the degree of anisotropy of the local permeability. This two-step theoretical approach to determine the degree of anisotropy of the permeability of the lacunar-canalicular porosity will be useful for accurate quantification of interstitial fluid movement in bone.

  7. Bayesian Missile System Reliability from Point Estimates

    DTIC Science & Technology

    2014-10-28

    This paper applies the Maximum Entropy Principle (MEP) to missile system reliability assessment, proposing the use of the MEP to convert point estimates or design values, as obtained pro forma from such assessments, into probability distributions.
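
    As a generic illustration of the MEP step (not the report's model): given only a point estimate of a mean on [0, 1], the maximum-entropy density is a truncated exponential whose rate is found by solving the moment constraint.

    ```python
    # Maximum-entropy distribution on [0, 1] consistent with a single
    # point estimate of the mean: p(x) ~ exp(lam * x). Solve for lam so
    # the distribution's mean matches the point estimate. Illustrative
    # values only; not taken from the report.
    import numpy as np
    from scipy.optimize import brentq

    def mean_of(lam):
        if abs(lam) < 1e-9:
            return 0.5                       # uniform limit as lam -> 0
        return (np.exp(lam) * (lam - 1) + 1) / (lam * (np.exp(lam) - 1))

    point_estimate = 0.93                    # e.g. a stated reliability value
    lam = brentq(lambda l: mean_of(l) - point_estimate, -50, 50)

    # p(x) = lam * exp(lam * x) / (exp(lam) - 1); e.g. a lower 5% bound:
    x = np.linspace(0, 1, 100001)
    pdf = lam * np.exp(lam * x) / (np.exp(lam) - 1)
    cdf = np.cumsum(pdf) / pdf.sum()
    print("lambda =", lam, " 5th percentile =", x[np.searchsorted(cdf, 0.05)])
    ```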

  8. Anatomy guided automated SPECT renal seed point estimation

    NASA Astrophysics Data System (ADS)

    Dwivedi, Shekhar; Kumar, Sailendra

    2010-04-01

    Quantification of SPECT (Single Photon Emission Computed Tomography) images can be more accurate if correct segmentation of the region of interest (ROI) is achieved. Segmenting the ROI from SPECT images is challenging due to poor image resolution. SPECT is utilized to study kidney function, and the challenge involved is to accurately locate the kidneys and bladder for analysis. This paper presents an automated method for generating the seed point location of both kidneys using the anatomical locations of the kidneys and bladder. The motivation for this work is based on the premise that the anatomical location of the bladder relative to the kidneys does not differ much between patients. A model is generated based on manual segmentation of the bladder and both kidneys on 10 patient datasets (including sum and max images). Centroids are estimated for the manually segmented bladder and kidneys. The relatively easier bladder segmentation is followed by feeding the bladder centroid coordinates into the model to generate seed points for the kidneys. The percentage errors observed in the centroid coordinates of the organs, from ground truth to the values estimated by our approach, are acceptable: approximately 1%, 6% and 2% in the X coordinates and approximately 2%, 5% and 8% in the Y coordinates of the bladder, left kidney and right kidney, respectively. Using a regression model and the location of the bladder, ROI generation for the kidneys is facilitated. The model-based seed point estimation will enhance the robustness of kidney ROI estimation for noisy cases.

  9. Fast and Accurate Learning When Making Discrete Numerical Estimates

    PubMed Central

    Sanborn, Adam N.; Beierholm, Ulrik R.

    2016-01-01

    Many everyday estimation tasks have an inherently discrete nature, whether the task is counting objects (e.g., a number of paint buckets) or estimating discretized continuous variables (e.g., the number of paint buckets needed to paint a room). While Bayesian inference is often used for modeling estimates made along continuous scales, discrete numerical estimates have not received as much attention, despite their common everyday occurrence. Using two tasks, a numerosity task and an area estimation task, we invoke Bayesian decision theory to characterize how people learn discrete numerical distributions and make numerical estimates. Across three experiments with novel stimulus distributions we found that participants fell between two common decision functions for converting their uncertain representation into a response: drawing a sample from their posterior distribution and taking the maximum of their posterior distribution. While this was consistent with the decision function found in previous work using continuous estimation tasks, surprisingly the prior distributions learned by participants in our experiments were much more adaptive: When making continuous estimates, participants have required thousands of trials to learn bimodal priors, but in our tasks participants learned discrete bimodal and even discrete quadrimodal priors within a few hundred trials. This makes discrete numerical estimation tasks good testbeds for investigating how people learn and make estimates. PMID:27070155
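
    The two decision functions contrasted above are easy to state in code. A toy sketch follows (made-up posterior; not the authors' fitted models):

    ```python
    # Two decision functions for turning a discrete posterior into a point
    # response: sampling from the posterior (probability matching) versus
    # taking the posterior maximum. Toy posterior over candidate counts.
    import numpy as np

    rng = np.random.default_rng(0)
    values = np.arange(1, 11)                   # candidate counts 1..10
    posterior = np.array([1, 2, 5, 9, 6, 3, 2, 4, 7, 3], dtype=float)
    posterior /= posterior.sum()

    sample_response = rng.choice(values, p=posterior)   # posterior sampling
    map_response = values[np.argmax(posterior)]         # posterior maximum

    print(sample_response, map_response)
    ```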

  10. Accurate detection and quantitation of heteroplasmic mitochondrial point mutations by pyrosequencing.

    PubMed

    White, Helen E; Durston, Victoria J; Seller, Anneke; Fratter, Carl; Harvey, John F; Cross, Nicholas C P

    2005-01-01

    Disease-causing mutations in mitochondrial DNA (mtDNA) are typically heteroplasmic and therefore interpretation of genetic tests for mitochondrial disorders can be problematic. Detection of low level heteroplasmy is technically demanding and it is often difficult to discriminate between the absence of a mutation or the failure of a technique to detect the mutation in a particular tissue. The reliable measurement of heteroplasmy in different tissues may help identify individuals who are at risk of developing specific complications and allow improved prognostic advice for patients and family members. We have evaluated Pyrosequencing technology for the detection and estimation of heteroplasmy for six mitochondrial point mutations associated with the following diseases: Leber's hereditary optic neuropathy (LHON), G3460A, G11778A, and T14484C; mitochondrial encephalopathy with lactic acidosis and stroke-like episodes (MELAS), A3243G; myoclonus epilepsy with ragged red fibers (MERRF), A8344G; and neurogenic muscle weakness, ataxia, and retinitis pigmentosa (NARP)/Leigh syndrome, T8993G/C. Results obtained from the Pyrosequencing assays for 50 patients with presumptive mitochondrial disease were compared to those obtained using the commonly used diagnostic technique of polymerase chain reaction (PCR) and restriction enzyme digestion. The Pyrosequencing assays provided accurate genotyping and quantitative determination of mutational load with a sensitivity and specificity of 100%. The MELAS A3243G mutation was detected reliably at a level of 1% heteroplasmy. We conclude that Pyrosequencing is a rapid and robust method for detecting heteroplasmic mitochondrial point mutations.

  11. BIOACCESSIBILITY TESTS ACCURATELY ESTIMATE BIOAVAILABILITY OF LEAD TO QUAIL

    EPA Science Inventory

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contami...

  12. Bioaccessibility tests accurately estimate bioavailability of lead to quail

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb, we incorporated Pb-contaminated soils or Pb acetate into diets for Japanese quail (Coturnix japonica), fed the quail for 15 days, and ...

  13. Triple point of e-deuterium as an accurate thermometric fixed point

    SciTech Connect

    Pavese, F.; McConville, G.T.

    1986-01-01

    The triple point of deuterium (18.7 K) is the only possibility for excluding vapor pressure measurements in the definition of a temperature scale based on fixed points between 13.81 K and 24.562 K. This paper reports an investigation made at the Istituto di Metrologia and Mound Laboratory, using extremely pure deuterium directly sealed at the production plant into small metal cells. The large contamination by HD of commercially available gas, which cannot be accounted and corrected for owing to its increase during handling, was found to be very stable with time after sealing in IMGC cells. HD contamination can be limited to less than 100 ppm in Monsanto cells, both with n-D2 and e-D2, when filled directly from the thermal diffusion column and sealed at the factory. e-D2 requires a special deuterated catalyst. The triple point temperature of e-D2 has been determined to be: T(NPL-IPTS-68) = 18.7011 ± 0.002 K. 20 refs., 3 figs., 2 tabs.

  14. The Effect of Lidar Point Density on LAI Estimation

    NASA Astrophysics Data System (ADS)

    Cawse-Nicholson, K.; van Aardt, J. A.; Romanczyk, P.; Kelbe, D.; Bandyopadhyay, M.; Yao, W.; Krause, K.; Kampe, T. U.

    2013-12-01

    Leaf Area Index (LAI) is an important measure of forest health, biomass and carbon exchange, and is most commonly defined as the ratio of leaf area to ground area. LAI is understood over large spatial scales and describes leaf properties over an entire forest; thus airborne imagery is ideal for capturing such data. Spectral metrics such as the normalized difference vegetation index (NDVI) have been used in the past for LAI estimation, but these metrics may saturate for high LAI values. Light detection and ranging (lidar) is an active remote sensing technology that emits light (most often at the wavelength 1064 nm) and uses the return time to calculate the distance to intercepted objects. This yields information on three-dimensional structure and shape, which has been shown in recent studies to yield more accurate LAI estimates than NDVI. However, although lidar is a promising alternative for LAI estimation, the minimum acquisition parameters (e.g. point density) required for accurate LAI retrieval are not yet well known. The objective of this study was to determine the minimum number of points per square meter required to describe the LAI measurements taken in-field. As part of a larger data collect, discrete lidar data were acquired by Kucera International Inc. over the Hemlock-Canadice State Forest, NY, USA in September 2012. The Leica ALS60 obtained a point density of 12 points per square meter and an effective ground sampling distance (GSD) of 0.15 m. Up to three returns with intensities were recorded per pulse. As part of the same experiment, an AccuPAR LP-80 was used to collect LAI estimates at 25 sites on the ground. Sites were spaced approximately 80 m apart and nine measurements were made in a grid pattern within a 20 x 20 m site. Dominant species include Hemlock, Beech, Sugar Maple and Oak. This study has the benefit of very high-density data, which will enable a detailed map of intra-forest LAI. Understanding LAI at fine scales may be particularly useful

  15. Accurate feature detection and estimation using nonlinear and multiresolution analysis

    NASA Astrophysics Data System (ADS)

    Rudin, Leonid; Osher, Stanley

    1994-11-01

    A program for feature detection and estimation using nonlinear and multiscale analysis was completed. The state-of-the-art edge detection was combined with multiscale restoration (as suggested by the first author) and robust results in the presence of noise were obtained. Successful applications to numerous images of interest to DOD were made. Also, a new market in the criminal justice field was developed, based in part, on this work.

  16. A hardware error estimate for floating-point computations

    NASA Astrophysics Data System (ADS)

    Lang, Tomás; Bruguera, Javier D.

    2008-08-01

    We propose a hardware-computed estimate of the roundoff error in floating-point computations. The estimate is computed concurrently with the execution of the program and gives an estimation of the accuracy of the result. The intention is to have a qualitative indication when the accuracy of the result is low. We aim for a simple implementation and a negligible effect on the execution of the program. Large errors due to roundoff occur in some computations, producing inaccurate results. However, usually these large errors occur only for some values of the data, so that the result is accurate in most executions. As a consequence, the computation of an estimate of the error during execution would allow the use of algorithms that produce accurate results most of the time. In contrast, if an error estimate is not available, the solution is to perform an error analysis. However, this analysis is complex or impossible in some cases, and it produces a worst-case error bound. The proposed approach is to keep with each value an estimate of its error, which is computed when the value is produced. This error is the sum of a propagated error, due to the errors of the operands, plus the generated error due to roundoff during the operation. Since roundoff errors are signed values (when rounding to nearest is used), the computation of the error allows for compensation when errors are of different sign. However, since the error estimate is of finite precision, it suffers from similar accuracy problems as any floating-point computation. Moreover, it is not an error bound. Ideally, the estimate should be large when the error is large and small when the error is small. Since this cannot be achieved always with an inexact estimate, we aim at assuring the first property always, and the second most of the time. As a minimum, we aim to produce a qualitative indication of the error. To indicate the accuracy of the value, the most appropriate type of error is the relative error. However
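
    In software, the "generated" roundoff of an addition can be obtained exactly with Knuth's TwoSum, which makes a simple emulation of the proposed running error estimate possible. A minimal sketch in Python follows (IEEE-754 doubles assumed; this is a software analogue, not the paper's hardware design):

    ```python
    # The generated roundoff of one addition can itself be computed in
    # floating point (Knuth's TwoSum); carrying the accumulated error
    # alongside each value emulates, in software, the running error
    # estimate the paper implements in hardware.
    def two_sum(a, b):
        """Return s = fl(a + b) and the exact roundoff error of that add."""
        s = a + b
        bp = s - a
        err = (a - (s - bp)) + (b - bp)
        return s, err

    # Carry a (value, error-estimate) pair through a summation.
    total, err = 0.0, 0.0
    for v in [1e16, 1.0, -1e16, 1.0]:
        total, e = two_sum(total, v)
        err += e                   # propagated + generated error estimate
    print(total, err)              # float sum gives 1.0; total + err is the exact 2.0
    ```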

  17. Accurate estimates of age at maturity from the growth trajectories of fishes and other ectotherms.

    PubMed

    Honsey, Andrew E; Staples, David F; Venturelli, Paul A

    2017-01-01

    Age at maturity (AAM) is a key life history trait that provides insight into ecology, evolution, and population dynamics. However, maturity data can be costly to collect or may not be available. Life history theory suggests that growth is biphasic for many organisms, with a change-point in growth occurring at maturity. If so, then it should be possible to use a biphasic growth model to estimate AAM from growth data. To test this prediction, we used the Lester biphasic growth model in a likelihood profiling framework to estimate AAM from length at age data. We fit our model to simulated growth trajectories to determine minimum data requirements (in terms of sample size, precision in length at age, and the cost to somatic growth of maturity) for accurate AAM estimates. We then applied our method to a large walleye Sander vitreus data set and show that our AAM estimates are in close agreement with conventional estimates when our model fits well. Finally, we highlight the potential of our method by applying it to length at age data for a variety of ectotherms. Our method shows promise as a tool for estimating AAM and other life history traits from contemporary and historical samples.

  18. Analysis of Measurement Error and Estimator Shape in Three-Point Hydraulic Gradient Estimators

    NASA Astrophysics Data System (ADS)

    McKenna, S. A.; Wahi, A. K.

    2003-12-01

    Three spatially separated measurements of head provide a means of estimating the magnitude and orientation of the hydraulic gradient. Previous work with three-point estimators has focused on the effect of the size (area) of the three-point estimator and measurement error on the final estimates of the gradient magnitude and orientation in laboratory and field studies (Mizell, 1980; Silliman and Frost, 1995; Silliman and Mantz, 2000; Ruskauff and Rumbaugh, 1996). However, a systematic analysis of the combined effects of measurement error, estimator shape and estimator orientation relative to the gradient orientation has not previously been conducted. Monte Carlo simulation with an underlying assumption of a homogeneous transmissivity field is used to examine the effects of uncorrelated measurement error on a series of eleven different three-point estimators having the same size but different shapes as a function of the orientation of the true gradient. Results show that the variance in the estimates of both the magnitude and the orientation increases linearly with the increase in measurement error, in agreement with the results of stochastic theory for estimators that are small relative to the correlation length of transmissivity (Mizell, 1980). Three-point estimator shapes with base to height ratios between 0.5 and 5.0 provide accurate estimates of magnitude and orientation across all orientations of the true gradient. As an example, these results are applied to data collected from a monitoring network of 25 wells at the WIPP site during two different time periods. The simulation results are used to reduce the set of all possible combinations of three wells to those combinations with acceptable measurement errors relative to the amount of head drop across the estimator and base to height ratios between 0.5 and 5.0. These limitations reduce the set of all possible well combinations by 98 percent and show that size alone as defined by triangle area is not a valid
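
    The estimator itself amounts to fitting a plane through three head measurements. A minimal sketch follows (made-up well coordinates and heads):

    ```python
    # Three-point hydraulic gradient estimator: fit the plane
    # h(x, y) = a*x + b*y + c through three head measurements; the
    # hydraulic gradient is (a, b). Illustrative coordinates and heads.
    import numpy as np

    xy = np.array([[0.0, 0.0],
                   [100.0, 10.0],
                   [30.0, 90.0]])         # well locations [m]
    h = np.array([10.00, 9.85, 9.91])     # measured heads [m]

    A = np.column_stack([xy, np.ones(3)])
    a, b, c = np.linalg.solve(A, h)       # exact plane through three points

    magnitude = np.hypot(a, b)                    # gradient magnitude
    azimuth = np.degrees(np.arctan2(-b, -a))      # down-gradient (flow) direction
    print(magnitude, azimuth)
    ```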

  19. Spontaneous fluctuation indices of the cardiovagal baroreflex accurately measure the baroreflex sensitivity at the operating point during upright tilt.

    PubMed

    Schwartz, Christopher E; Medow, Marvin S; Messer, Zachary; Stewart, Julian M

    2013-06-15

    Spontaneous fluctuation indices of cardiovagal baroreflex have been suggested to be inaccurate measures of baroreflex function during orthostatic stress compared with alternate open-loop methods (e.g. neck pressure/suction, modified Oxford method). We therefore tested the hypothesis that spontaneous fluctuation measurements accurately reflect local baroreflex gain (slope) at the operating point measured by the modified Oxford method, and that apparent differences between these two techniques during orthostasis can be explained by a resetting of the baroreflex function curve. We computed the sigmoidal baroreflex function curves supine and during 70° tilt in 12 young, healthy individuals. With the use of the modified Oxford method, slopes (gains) of supine and upright curves were computed at their maxima (Gmax) and operating points. These were compared with measurements of spontaneous indices in both positions. Supine spontaneous analyses of operating point slope were similar to calculated Gmax of the modified Oxford curve. In contrast, upright operating point was distant from the centering point of the reset curve and fell on the nonlinear portion of the curve. Whereas spontaneous fluctuation measurements were commensurate with the calculated slope of the upright modified Oxford curve at the operating point, they were significantly lower than Gmax. In conclusion, spontaneous measurements of cardiovagal baroreflex function accurately estimate the slope near operating points in both supine and upright position.

  20. Accurate tempo estimation based on harmonic + noise decomposition

    NASA Astrophysics Data System (ADS)

    Alonso, Miguel; Richard, Gael; David, Bertrand

    2006-12-01

    We present an innovative tempo estimation system that processes acoustic audio signals and does not use any high-level musical knowledge. Our proposal relies on a harmonic + noise decomposition of the audio signal by means of a subspace analysis method. Then, a technique to measure the degree of musical accentuation as a function of time is developed and separately applied to the harmonic and noise parts of the input signal. This is followed by a periodicity estimation block that calculates the salience of musical accents for a large number of potential periods. Next, a multipath dynamic programming searches among all the potential periodicities for the most consistent prospects through time, and finally the most energetic candidate is selected as tempo. Our proposal is validated using a manually annotated test-base containing 961 music signals from various musical genres. In addition, the performance of the algorithm under different configurations is compared. The robustness of the algorithm when processing signals of degraded quality is also measured.

  1. Fast and Accurate Estimates of Divergence Times from Big Data.

    PubMed

    Mello, Beatriz; Tao, Qiqing; Tamura, Koichiro; Kumar, Sudhir

    2017-01-01

    Ongoing advances in sequencing technology have led to an explosive expansion in the molecular data available for building increasingly larger and more comprehensive timetrees. However, Bayesian relaxed-clock approaches frequently used to infer these timetrees impose a large computational burden and discourage critical assessment of the robustness of inferred times to model assumptions, influence of calibrations, and selection of optimal data subsets. We analyzed eight large, recently published, empirical datasets to compare time estimates produced by RelTime (a non-Bayesian method) with those reported by using Bayesian approaches. We find that RelTime estimates are very similar to those from Bayesian approaches, yet RelTime requires orders of magnitude less computational time. This means that the use of RelTime will enable greater rigor in molecular dating, because faster computational speeds encourage more extensive testing of the robustness of inferred timetrees to prior assumptions (models and calibrations) and data subsets. Thus, RelTime provides a reliable and computationally thrifty approach for dating the tree of life using large-scale molecular datasets.

  2. Bioaccessibility tests accurately estimate bioavailability of lead to quail

    USGS Publications Warehouse

    Beyer, W. Nelson; Basta, Nicholas T; Chaney, Rufus L.; Henry, Paula F.; Mosby, David; Rattner, Barnett A.; Scheckel, Kirk G.; Sprague, Dan; Weber, John

    2016-01-01

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from five Pb-contaminated Superfund sites had relative bioavailabilities from 33%-63%, with a mean of about 50%. Treatment of two of the soils with phosphorus significantly reduced the bioavailability of Pb. Bioaccessibility of Pb in the test soils was then measured in six in vitro tests and regressed on bioavailability. They were: the “Relative Bioavailability Leaching Procedure” (RBALP) at pH 1.5, the same test conducted at pH 2.5, the “Ohio State University In vitro Gastrointestinal” method (OSU IVG), the “Urban Soil Bioaccessible Lead Test”, the modified “Physiologically Based Extraction Test” and the “Waterfowl Physiologically Based Extraction Test.” All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the RBALP pH 2.5 and OSU IVG tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%). Additional Pb was associated with P (chloropyromorphite, hydroxypyromorphite and tertiary Pb phosphate), and with Pb carbonates, leadhillite (a lead sulfate carbonate hydroxide), and Pb sulfide. The formation of chloropyromorphite reduced the bioavailability of Pb and the amendment of Pb-contaminated soils with P may be a thermodynamically favored means to sequester Pb.

  3. Bioaccessibility tests accurately estimate bioavailability of lead to quail.

    PubMed

    Beyer, W Nelson; Basta, Nicholas T; Chaney, Rufus L; Henry, Paula F P; Mosby, David E; Rattner, Barnett A; Scheckel, Kirk G; Sprague, Daniel T; Weber, John S

    2016-09-01

    Hazards of soil-borne lead (Pb) to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, the authors measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from 5 Pb-contaminated Superfund sites had relative bioavailabilities from 33% to 63%, with a mean of approximately 50%. Treatment of 2 of the soils with phosphorus (P) significantly reduced the bioavailability of Pb. Bioaccessibility of Pb in the test soils was then measured in 6 in vitro tests and regressed on bioavailability: the relative bioavailability leaching procedure at pH 1.5, the same test conducted at pH 2.5, the Ohio State University in vitro gastrointestinal method, the urban soil bioaccessible lead test, the modified physiologically based extraction test, and the waterfowl physiologically based extraction test. All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the relative bioavailability leaching procedure at pH 2.5 and Ohio State University in vitro gastrointestinal tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%). Additional Pb was associated with P (chloropyromorphite, hydroxypyromorphite, and tertiary Pb phosphate) and with Pb carbonates, leadhillite (a lead sulfate carbonate hydroxide), and Pb sulfide. The formation of chloropyromorphite reduced the bioavailability of Pb, and the amendment of Pb-contaminated soils with P may be a thermodynamically favored means to sequester Pb. Environ Toxicol Chem 2016;35:2311-2319. Published 2016 Wiley Periodicals Inc. on behalf of

  4. Estimating Aircraft Heading Based on Laserscanner Derived Point Clouds

    NASA Astrophysics Data System (ADS)

    Koppanyi, Z.; Toth, C., K.

    2015-03-01

    Using LiDAR sensors for tracking and monitoring an operating aircraft is a new application. In this paper, we present data processing methods to estimate the heading of a taxiing aircraft using laser point clouds. During the data acquisition, a Velodyne HDL-32E laser scanner tracked a moving Cessna 172 airplane. The point clouds captured at different times were used for heading estimation. After addressing the problem and specifying the equation of motion to reconstruct the aircraft point cloud from the consecutive scans, three methods are investigated here. The first requires a reference model to estimate the relative angle from the captured data by fitting different cross-sections (horizontal profiles). In the second approach, the iterative closest point (ICP) method is applied between consecutive point clouds to determine the horizontal translation of the captured aircraft body. Three ICP variants were compared, namely the ordinary 3D, 3-DoF 3D and 2-DoF 3D ICP; the 2-DoF 3D ICP provided the best performance. Finally, the last algorithm searches for the unknown heading and velocity parameters by minimizing the volume of the reconstructed airplane. The three methods were compared using three test data sets, distinguished by object-sensor distance, heading and velocity. We found that the ICP algorithm fails at long distances and when the aircraft motion direction is perpendicular to the scan plane, but the first and third methods give robust and accurate results at 40 m object distance and at ~12 knots for a small Cessna airplane.
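
    To make the 2-DoF 3D ICP variant concrete, the sketch below estimates only a horizontal (x, y) translation between consecutive scans while keeping the points in 3D, which matches the taxiing-aircraft setting. The function name and the simple mean-shift update are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_translation_2dof(src, dst, iters=20):
    """Estimate the horizontal (x, y) shift aligning src to dst.

    src, dst: (N, 3) and (M, 3) point clouds from consecutive scans.
    Only a planar translation is solved for (2-DoF), since a taxiing
    aircraft keeps a constant height.
    """
    tree = cKDTree(dst)
    t = np.zeros(2)
    moved = src.copy()
    for _ in range(iters):
        _, idx = tree.query(moved)            # closest-point matches
        delta = (dst[idx, :2] - moved[:, :2]).mean(axis=0)
        t += delta
        moved[:, :2] += delta
    return t  # heading follows from np.arctan2(t[1], t[0])
```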

  5. Point Estimation and Confidence Interval Estimation for Binomial and Multinomial Parameters

    DTIC Science & Technology

    1975-12-31

    Point Estimation and Confidence Interval Estimation for Binomial and Multinomial Parameters, by Ramesh Chandra, Union College. Report AES-7514, 1976 (AD-A021 208).

  6. Accurate microfour-point probe sheet resistance measurements on small samples.

    PubMed

    Thorsteinsson, Sune; Wang, Fei; Petersen, Dirch H; Hansen, Torben Mikael; Kjaer, Daniel; Lin, Rong; Kim, Jang-Yong; Nielsen, Peter F; Hansen, Ole

    2009-05-01

    We show that accurate sheet resistance measurements on small samples may be performed using microfour-point probes without applying correction factors. Using dual-configuration measurements, the sheet resistance may be extracted with high accuracy when the microfour-point probes are in the proximity of a mirror plane on small samples with dimensions of a few times the probe pitch. We calculate theoretically the size of the “sweet spot” where sufficiently accurate sheet resistances result, and show that even for very small samples it is feasible to do correction-free extraction of the sheet resistance with sufficient accuracy. As an example, the sheet resistance of a 40 µm (50 µm) square sample may be characterized with an accuracy of 0.3% (0.1%) using a 10 µm pitch microfour-point probe and assuming a probe alignment accuracy of ±2.5 µm.
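
    For orientation, the textbook relations for an equidistant collinear four-point probe on an infinite sheet are shown below (configuration A: current through the outer pins, voltage on the inner pair; configuration B: current through pins 1 and 3, voltage on pins 2 and 4). The dual-configuration technique measures both; the paper's contribution is quantifying how close to a sample edge or mirror plane such correction-free extraction remains sufficiently accurate.

```latex
R_A = \frac{R_s}{\pi}\,\ln 2, \qquad
R_B = \frac{R_s}{2\pi}\,\ln 3
\quad\Longrightarrow\quad
R_s = \frac{\pi R_A}{\ln 2} = \frac{2\pi R_B}{\ln 3}
```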

  7. Wind profile estimation from point to point laser distortion data

    NASA Technical Reports Server (NTRS)

    Leland, Robert

    1989-01-01

    The author's results on the problem of using laser distortion data to estimate the wind profile along the path of the beam are presented. A new model for the dynamics of the index of refraction in a non-constant wind is developed. The model agrees qualitatively with theoretical predictions for the index of refraction statistics in linear wind shear, and is approximated by the predictions of Taylor's hypothesis in constant wind. A framework for a potential in-flight experiment is presented, and the estimation problem is discussed in a maximum likelihood context.

  8. Accurate and Robust Attitude Estimation Using MEMS Gyroscopes and a Monocular Camera

    NASA Astrophysics Data System (ADS)

    Kobori, Norimasa; Deguchi, Daisuke; Takahashi, Tomokazu; Ide, Ichiro; Murase, Hiroshi

    In order to estimate accurate rotations of mobile robots and vehicles, we propose a hybrid system which combines a low-cost monocular camera with gyro sensors. Gyro sensors have drift errors that accumulate over time. A camera, on the other hand, cannot obtain the rotation continuously when feature points cannot be extracted from images, although its accuracy is better than that of gyro sensors. To solve these problems we propose a method for combining these sensors based on an Extended Kalman Filter. The errors of the gyro sensors are corrected by referring to the rotations obtained from the camera. In addition, by using a reliability judgment of the camera rotations and devising the state value of the Extended Kalman Filter, the proposed method performs well even when the rotation is not continuously observable from the camera. Experimental results showed the effectiveness of the proposed method.
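
    The fusion scheme (gyro integration for prediction, camera rotation for correction) can be illustrated with a scalar Kalman filter over yaw and gyro bias. This is a deliberately simplified sketch whose state, noise parameters, and function name are assumptions; the paper's Extended Kalman Filter additionally incorporates the reliability judgment of the camera rotations.

```python
import numpy as np

def kf_yaw_step(x, P, gyro_rate, dt, q=1e-6, r_cam=1e-4, cam_yaw=None):
    """One predict/update step fusing a gyro with occasional camera fixes.

    State x = [yaw, gyro_bias]; P is its 2x2 covariance. The gyro drives
    the prediction; when a camera yaw measurement is available, it
    corrects both the yaw and the accumulated gyro bias.
    """
    F = np.array([[1.0, -dt], [0.0, 1.0]])
    x = F @ x + np.array([gyro_rate * dt, 0.0])   # integrate biased rate
    P = F @ P @ F.T + q * np.eye(2)
    if cam_yaw is not None:                        # camera fix available
        H = np.array([[1.0, 0.0]])
        innovation = cam_yaw - H @ x
        S = H @ P @ H.T + r_cam
        K = P @ H.T / S
        x = x + (K * innovation).ravel()
        P = (np.eye(2) - K @ H) @ P
    return x, P
```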

  9. Accurate biopsy-needle depth estimation in limited-angle tomography using multi-view geometry

    NASA Astrophysics Data System (ADS)

    van der Sommen, Fons; Zinger, Sveta; de With, Peter H. N.

    2016-03-01

    Recently, compressed-sensing based algorithms have enabled volume reconstruction from projection images acquired over a relatively small angle (θ < 20°). These methods enable accurate depth estimation of surgical tools with respect to anatomical structures. However, they are computationally expensive and time consuming, rendering them unattractive for image-guided interventions. We propose an alternative approach for depth estimation of biopsy needles during image-guided interventions, in which we split the problem into two parts and solve them independently: needle-depth estimation and volume reconstruction. The complete proposed system consists of these two steps, preceded by needle extraction. First, we detect the biopsy needle in the projection images and remove it by interpolation. Next, we exploit epipolar geometry to find point-to-point correspondences in the projection images to triangulate the 3D position of the needle in the volume. Finally, we use the interpolated projection images to reconstruct the local anatomical structures and indicate the position of the needle within this volume. For validation of the algorithm, we have recorded a full CT scan of a phantom with an inserted biopsy needle. The performance of our approach ranges from a median error of 2.94 mm for a distributed viewing angle of 1° down to an error of 0.30 mm for an angle larger than 10°. Based on the results of this initial phantom study, we conclude that multi-view geometry offers an attractive alternative to time-consuming iterative methods for the depth estimation of surgical tools during C-arm-based image-guided interventions.
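
    The triangulation step is standard multi-view geometry; a minimal linear (DLT) triangulation of the needle tip from two projections is sketched below. The function name and the assumption of known 3x4 projection matrices for the two C-arm poses are illustrative.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 projection matrices; x1, x2: the needle tip in pixel
    coordinates in each projection image.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null vector of A = homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]
```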

  10. Zero-Point Calibration for AGN Black-Hole Mass Estimates

    NASA Technical Reports Server (NTRS)

    Peterson, B. M.; Onken, C. A.

    2004-01-01

    We discuss the measurement and associated uncertainties of AGN reverberation-based black-hole masses, since these provide the zero-point calibration for scaling relationships that allow black-hole mass estimates for quasars. We find that reverberation-based mass estimates appear to be accurate to within a factor of about 3.

  11. Accurate estimation of motion blur parameters in noisy remote sensing image

    NASA Astrophysics Data System (ADS)

    Shi, Xueyan; Wang, Lin; Shao, Xiaopeng; Wang, Huilin; Tao, Zhong

    2015-05-01

    The relative motion between a remote sensing satellite sensor and its targets is one of the most common causes of remote sensing image degradation, and it seriously weakens image interpretation and information extraction. In practice, the point spread function (PSF) must be estimated first for image restoration, so accurately identifying the motion blur direction and length is crucial for constructing the PSF and restoring the image with precision. In general, the regular light-and-dark stripes in the spectrum can be used to obtain these parameters via the Radon transform. However, the serious noise present in actual remote sensing images often obscures the stripes, making the parameters difficult to calculate and the results error-prone. In this paper, an improved motion blur parameter identification method for noisy remote sensing images is proposed to solve this problem. The spectrum characteristics of noisy remote sensing images are analyzed first. An interactive image segmentation method based on graph theory, called GrabCut, is adopted to effectively extract the edge of the light center in the spectrum. The motion blur direction is estimated by applying the Radon transform to the segmentation result. To reduce random error, a method based on whole-column statistics is used when calculating the blur length. Finally, the Lucy-Richardson algorithm is applied to restore remote sensing images of the moon after estimating the blur parameters. The experimental results verify the effectiveness and robustness of our algorithm.
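
    A compact sketch of the underlying Radon-transform idea, without the GrabCut segmentation step the paper adds for noisy imagery, might look as follows; the function name and the variance criterion for selecting the angle are assumptions.

```python
import numpy as np
from skimage.transform import radon

def blur_direction(img):
    """Estimate motion-blur direction (degrees) from the image spectrum.

    The stripes in the log magnitude spectrum of a motion-blurred image
    are parallel lines; the Radon projection taken along the stripe
    direction has the sharpest (highest-variance) profile.
    """
    spec = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(img))))
    angles = np.arange(0.0, 180.0)
    sinogram = radon(spec, theta=angles, circle=False)
    return angles[np.argmax(sinogram.var(axis=0))]
```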

  12. Radioisotopic Tie Points of the Quaternary Geomagnetic Instability Time Scale (GITS): How Accurate and Precise?

    NASA Astrophysics Data System (ADS)

    Singer, B. S.

    2014-12-01

    Reversals and excursions of the geomagnetic field are recorded globally by sedimentary and volcanic rocks. These geodynamo instabilities provide a rich set of chronostratigraphic tie points for the Quaternary period that can provide tests of age models central to paleoclimate studies. Radioisotopic dating of volcanic rocks, mainly 40Ar/39Ar dating of lava flows, coupled with astronomically-dated deep sea sediments, reveals 10 polarity reversals and 27 field excursions during the Quaternary (Singer, 2014). A key question concerns the uncertainties associated with radioisotopic dates of those geodynamo instabilities that have been identified both in terrestrial volcanic rocks and in deep sea sediments. These particular features offer the highest confidence in linking 40Ar/39Ar dates to the global marine climate record. Geological issues aside, for rocks in which the build-up of 40Ar by decay of 40K may be overwhelmed by atmospheric 40Ar at the time of eruption, the uncertainty in 40Ar/39Ar dates derives from three sources: (1) analytical uncertainty associated with measurement of the isotopes; this is straightforward to estimate; (2) systematic uncertainties stemming from the age of standard minerals, such as the Fish Canyon sanidine, and in the 40K decay constant; and (3) systematic uncertainty introduced during analysis, mainly the size and reproducibility of procedural blanks. Whereas 1 and 2 control the precision of an age determination, 2 and 3 also control accuracy. In parallel with an astronomical calibration of 28.201 Ma for the Fish Canyon sanidine standard, awareness of the importance of procedural blanks, and a new generation multi-collector mass spectrometer capable of exceptionally low-blank and isobar-free analysis, are improving both accuracy and precision of 40Ar/39Ar dates. Results from lavas recording the Matuyama-Brunhes reversal, the Santa Rosa excursion, and the reversal at the top of the Cobb Mtn subchron demonstrate these advances. Current best

  13. The MATPHOT Algorithm for Accurate and Precise Stellar Photometry and Astrometry Using Discrete Point Spread Functions

    NASA Astrophysics Data System (ADS)

    Mighell, K. J.

    2004-12-01

    I describe the key features of my MATPHOT algorithm for accurate and precise stellar photometry and astrometry using discrete Point Spread Functions. A discrete Point Spread Function (PSF) is a sampled version of a continuous two-dimensional PSF. The shape information about the photon scattering pattern of a discrete PSF is typically encoded using a numerical table (matrix) or a FITS image file. The MATPHOT algorithm shifts discrete PSFs within an observational model using a 21-pixel-wide damped sinc function, and position partial derivatives are computed using a five-point numerical differentiation formula. The MATPHOT algorithm achieves accurate and precise stellar photometry and astrometry of undersampled CCD observations by using supersampled discrete PSFs that are sampled 2, 3, or more times more finely than the observational data. I have written a C-language computer program called MPD which is based on the current implementation of the MATPHOT algorithm; all source code and documentation for MPD and support software are freely available at the following website: http://www.noao.edu/staff/mighell/matphot . I demonstrate the use of MPD and present a detailed MATPHOT analysis of simulated James Webb Space Telescope observations which demonstrates that millipixel relative astrometry and millimag photometric accuracy are achievable with very complicated space-based discrete PSFs. This work was supported by a grant from the National Aeronautics and Space Administration (NASA), Interagency Order No. S-13811-G, which was awarded by the Applied Information Systems Research (AISR) Program of NASA's Science Mission Directorate.
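
    The five-point numerical differentiation mentioned above is presumably the standard central-difference stencil, which for grid spacing h reads:

```latex
\frac{\partial f}{\partial x} \approx
\frac{-f(x+2h) + 8f(x+h) - 8f(x-h) + f(x-2h)}{12h}
```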

  14. Correction for solute/solvent interaction extends accurate freezing point depression theory to high concentration range.

    PubMed

    Fullerton, G D; Keener, C R; Cameron, I L

    1994-12-01

    The authors describe empirical corrections to ideally dilute expressions for the freezing point depression of aqueous solutions to arrive at new expressions accurate up to three molal concentration. The method assumes that non-ideality is due primarily to solute/solvent interactions, such that the correct free water mass M_wc is the mass of water in solution M_w minus I·M_s, where M_s is the mass of solute and I an empirical solute/solvent interaction coefficient. The interaction coefficient is easily derived from the constant in the linear regression fit to the experimental plot of M_w/M_s as a function of 1/ΔT (inverse freezing point depression). The I-value, when substituted into the new thermodynamic expressions derived from the assumption of equivalent activity of water in solution and ice, provides accurate predictions of the freezing point depression (±0.05 °C) up to 2.5 molal concentration for all the test molecules evaluated: glucose, sucrose, glycerol and ethylene glycol. The concentration limit is the approximate monolayer water coverage limit for these solutes, which suggests that direct solute/solute interactions are negligible below this limit. This is contrary to the view of many authors, owing to the common practice of including hydration forces (a soft potential added to the hard-core atomic potential) in the interaction potential between solute particles. When this is recognized, the two viewpoints are in fundamental agreement.
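
    In the abstract's notation, the correction and the regression used to obtain I can be written compactly (the slope label b is introduced here for illustration):

```latex
M_{wc} = M_w - I\,M_s, \qquad
\frac{M_w}{M_s} = \frac{b}{\Delta T} + I
```

    so that I is read off as the intercept of the linear fit of M_w/M_s against the inverse freezing point depression 1/ΔT.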

  15. Estimation of Lightning Striking Points Based on Measurements of Vertical E-Field

    NASA Astrophysics Data System (ADS)

    Michishita, Koji; Nishihira, Takayuki; Hongo, Yasuji; Yokoyama, Shigeru

    Accurate estimation of lightning striking points is practically important for finding fault locations on power lines. Recent progress in lightning location techniques has made it possible to estimate striking points with an accuracy of better than 1 km. To enhance the location accuracy, the authors measured the vertical electric field at six points around the Shirakawa area in Fukushima prefecture. The distance between measuring points is about 10 km, less than one tenth of the spacing between the direction finders (DFs) of the lightning location system. In this paper, the authors estimated the locations of lightning striking points using the time-of-arrival technique and compared them with the results obtained by the lightning location system. Furthermore, the cause of the discrepancy between the estimated striking points is discussed, and the location accuracy of the authors' method is evaluated.
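
    The time-of-arrival technique reduces to a small nonlinear least-squares problem, sketched below. The propagation-speed constant, function name, and plain least-squares formulation are assumptions for illustration, not the authors' code.

```python
import numpy as np
from scipy.optimize import least_squares

C = 3.0e8  # assumed propagation speed of the field change (m/s)

def locate_strike(sensors, arrival_times):
    """Locate a lightning striking point from field arrival times.

    sensors: (N, 2) sensor positions (m); arrival_times: (N,) times (s).
    Solves for the strike point (x, y) and the unknown emission time t0
    by minimizing the time-of-arrival residuals.
    """
    def residuals(p):
        x, y, t0 = p
        d = np.hypot(sensors[:, 0] - x, sensors[:, 1] - y)
        return arrival_times - (t0 + d / C)

    guess = np.array([*sensors.mean(axis=0), arrival_times.min()])
    return least_squares(residuals, guess).x
```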

  16. ROM Plus®: accurate point-of-care detection of ruptured fetal membranes

    PubMed Central

    McQuivey, Ross W; Block, Jon E

    2016-01-01

    Accurate and timely diagnosis of rupture of fetal membranes is imperative to inform and guide gestational age-specific interventions to optimize perinatal outcomes and reduce the risk of serious complications, including preterm delivery and infections. The ROM Plus is a rapid, point-of-care, qualitative immunochromatographic diagnostic test that uses a unique monoclonal/polyclonal antibody approach to detect two different proteins found in amniotic fluid at high concentrations: alpha-fetoprotein and insulin-like growth factor binding protein-1. Clinical study results have uniformly demonstrated high diagnostic accuracy and performance characteristics with this point-of-care test that exceeds conventional clinical testing with external laboratory evaluation. The description, indications for use, procedural steps, and laboratory and clinical characterization of this assay are presented in this article. PMID:27274316

  17. ROM Plus(®): accurate point-of-care detection of ruptured fetal membranes.

    PubMed

    McQuivey, Ross W; Block, Jon E

    2016-01-01

    Accurate and timely diagnosis of rupture of fetal membranes is imperative to inform and guide gestational age-specific interventions to optimize perinatal outcomes and reduce the risk of serious complications, including preterm delivery and infections. The ROM Plus is a rapid, point-of-care, qualitative immunochromatographic diagnostic test that uses a unique monoclonal/polyclonal antibody approach to detect two different proteins found in amniotic fluid at high concentrations: alpha-fetoprotein and insulin-like growth factor binding protein-1. Clinical study results have uniformly demonstrated high diagnostic accuracy and performance characteristics with this point-of-care test that exceeds conventional clinical testing with external laboratory evaluation. The description, indications for use, procedural steps, and laboratory and clinical characterization of this assay are presented in this article.

  18. What's the Point of a Raster? Advantages of 3D Point Cloud Processing over Raster-Based Methods for Accurate Geomorphic Analysis of High Resolution Topography.

    NASA Astrophysics Data System (ADS)

    Lague, D.

    2014-12-01

    High Resolution Topographic (HRT) datasets are predominantly stored and analyzed as 2D raster grids of elevations (i.e., Digital Elevation Models). Raster grid processing is common in GIS software and benefits from a large library of fast algorithms dedicated to geometrical analysis, drainage network computation and topographic change measurement. Yet, all instruments or methods currently generating HRT datasets (e.g., ALS, TLS, SFM, stereo satellite imagery) natively output 3D unstructured point clouds that are (i) non-regularly sampled, (ii) incomplete (e.g., submerged parts of river channels are rarely measured), and (iii) include 3D elements (e.g., vegetation, vertical features such as river banks or cliffs) that cannot be accurately described in a DEM. Interpolating the raw point cloud onto a 2D grid generally results in a loss of position accuracy, spatial resolution and in more or less controlled interpolation. Here I demonstrate how studying earth surface topography and processes directly on native 3D point cloud datasets offers several advantages over raster-based methods: point cloud methods preserve the accuracy of the original data, can better handle the evaluation of uncertainty associated with topographic change measurements and are more suitable to study vegetation characteristics and steep features of the landscape. In this presentation, I will illustrate and compare Point Cloud based and Raster based workflows with various examples involving ALS, TLS and SFM for the analysis of bank erosion processes in bedrock and alluvial rivers, rockfall statistics (including rockfall volume estimates computed directly from point clouds) and the interaction of vegetation/hydraulics and sedimentation in salt marshes. These workflows use two recently published algorithms for point cloud classification (CANUPO) and point cloud comparison (M3C2), now implemented in the open source software CloudCompare.

  19. Accurate and flexible calibration technique for fringe projection profilometry by using encoded points and Fourier analysis

    NASA Astrophysics Data System (ADS)

    González, Andrés. L.; Contreras, Carlos R.; Meneses, Jaime E.

    2014-05-01

    In order to obtain highly accurate measurements, three-dimensional reconstruction systems are used in industrial, medical, and research fields. Achieving high accuracy requires an appropriate calibration procedure. In fringe projection profilometry, this procedure establishes a relation between the absolute phase and the three-dimensional (3D) coordinates of the object under study; however, executing such a procedure normally requires a precise translation stage. A fringe projection system comprises a projector, a digital camera and a control unit, referred to in this paper as a projection-acquisition unit. Calibrating the projection-acquisition unit consists of establishing the parameters required to transform the phase of the projected fringes into metric coordinates of the object surface. These parameters are a function of the intrinsic and extrinsic parameters of both the camera and the projector, since the projector is modeled as an inverse camera. To this end, this paper proposes a novel and flexible calibration method that allows calibrating any device that works with fringe projection profilometry. The method uses a reference plane placed at random positions and the projection of an encoded pattern of control points. The camera parameters are computed using Zhang's calibration method, and the projector parameters are computed from the camera parameters and the phase of the pattern of control points, which is determined using Fourier analysis. Experimental results are presented to demonstrate the performance of the calibration method.

  20. Confidence of the three-point estimator of frequency drift

    NASA Technical Reports Server (NTRS)

    Weiss, Marc A.; Hackman, Christine

    1993-01-01

    It was shown that a three-point second difference estimator is nearly optimal for estimating frequency drift in many common atomic oscillators. A formula for the uncertainty of this estimate as a function of the integration time and of the Allan variance associated with this integration time is derived.
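
    For time (phase) data x(t) modeled as x(t) = x_0 + y_0 t + (D/2) t^2 with frequency drift D, the linear terms cancel in the second difference, so sampling at t_0, t_0 + τ and t_0 + 2τ yields the three-point drift estimator referred to above; the paper's contribution is the uncertainty of this estimate as a function of the Allan variance at τ.

```latex
\hat{D} = \frac{x(t_0) - 2\,x(t_0+\tau) + x(t_0+2\tau)}{\tau^{2}}
```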

  1. Accurate Non-parametric Estimation of Recent Effective Population Size from Segments of Identity by Descent.

    PubMed

    Browning, Sharon R; Browning, Brian L

    2015-09-03

    Existing methods for estimating historical effective population size from genetic data have been unable to accurately estimate effective population size during the most recent past. We present a non-parametric method for accurately estimating recent effective population size by using inferred long segments of identity by descent (IBD). We found that inferred segments of IBD contain information about effective population size from around 4 generations to around 50 generations ago for SNP array data and to over 200 generations ago for sequence data. In human populations that we examined, the estimates of effective size were approximately one-third of the census size. We estimate the effective population size of European-ancestry individuals in the UK four generations ago to be eight million and the effective population size of Finland four generations ago to be 0.7 million. Our method is implemented in the open-source IBDNe software package.

  2. Accurate Non-parametric Estimation of Recent Effective Population Size from Segments of Identity by Descent

    PubMed Central

    Browning, Sharon R.; Browning, Brian L.

    2015-01-01

    Existing methods for estimating historical effective population size from genetic data have been unable to accurately estimate effective population size during the most recent past. We present a non-parametric method for accurately estimating recent effective population size by using inferred long segments of identity by descent (IBD). We found that inferred segments of IBD contain information about effective population size from around 4 generations to around 50 generations ago for SNP array data and to over 200 generations ago for sequence data. In human populations that we examined, the estimates of effective size were approximately one-third of the census size. We estimate the effective population size of European-ancestry individuals in the UK four generations ago to be eight million and the effective population size of Finland four generations ago to be 0.7 million. Our method is implemented in the open-source IBDNe software package. PMID:26299365

  3. LSimpute: accurate estimation of missing values in microarray data with least squares methods.

    PubMed

    Bø, Trond Hellem; Dysvik, Bjarte; Jonassen, Inge

    2004-02-20

    Microarray experiments generate data sets with information on the expression levels of thousands of genes in a set of biological samples. Unfortunately, such experiments often produce multiple missing expression values, normally due to various experimental problems. As many algorithms for gene expression analysis require a complete data matrix as input, the missing values have to be estimated in order to analyze the available data. Alternatively, genes and arrays can be removed until no missing values remain. However, for genes or arrays with only a small number of missing values, it is desirable to impute those values. For the subsequent analysis to be as informative as possible, it is essential that the estimates for the missing gene expression values are accurate. A small amount of badly estimated missing values in the data might be enough for clustering methods, such as hierarchical clustering or K-means clustering, to produce misleading results. Thus, accurate methods for missing value estimation are needed. We present novel methods for estimation of missing values in microarray data sets that are based on the least squares principle, and that utilize correlations between both genes and arrays. For this set of methods, we use the common reference name LSimpute. We compare the estimation accuracy of our methods with the widely used KNNimpute on three complete data matrices from public data sets by randomly knocking out data (labeling as missing). From these tests, we conclude that our LSimpute methods produce estimates that consistently are more accurate than those obtained using KNNimpute. Additionally, we examine a more classic approach to missing value estimation based on expectation maximization (EM). We refer to our EM implementations as EMimpute, and the estimate errors using the EMimpute methods are compared with those produced by our novel methods. The results indicate that on average, the estimates from our best performing LSimpute method are at least as

  4. Point counts of birds: what are we estimating?

    USGS Publications Warehouse

    Johnson, D.H.

    1995-01-01

    Point counts of birds are made for many reasons, including estimating local densities, determining population trends, assessing habitat preferences, and exploiting the activities of recreational birdwatchers. Problems arise unless there is a clear understanding of what point counts mean in terms of actual populations of birds. Criteria for conducting point counts depend strongly on the purposes to which they will be put. This paper provides a simple mathematical conceptualization of point counts and illustrates graphically some of the influences on them.

  5. Point-of-care cardiac troponin test accurately predicts heat stroke severity in rats.

    PubMed

    Audet, Gerald N; Quinn, Carrie M; Leon, Lisa R

    2015-11-15

    Heat stroke (HS) remains a significant public health concern. Despite the substantial threat posed by HS, there is still no field or clinical test of HS severity. We suggested previously that circulating cardiac troponin (cTnI) could serve as a robust biomarker of HS severity after heating. In the present study, we hypothesized that a cTnI point-of-care (ctPOC) test could be used to predict severity and organ damage at the onset of HS. Conscious male Fischer 344 rats (n = 16), continuously monitored for heart rate (HR), blood pressure (BP), and core temperature (Tc) by radiotelemetry, were heated to a maximum Tc (Tc,Max) of 41.9 ± 0.1°C and recovered undisturbed for 24 h at an ambient temperature of 20°C. Blood samples were taken at Tc,Max and 24 h after heat via submandibular bleed and analyzed with the ctPOC test. POC cTnI band intensity was ranked on a simple four-point scale by two blinded observers and compared with cTnI levels measured by a clinical blood analyzer. Blood was also analyzed for biomarkers of systemic organ damage. HS severity, as previously defined using HR, BP, and the recovery Tc profile during heat exposure, correlated strongly with cTnI (R² = 0.69) at Tc,Max. POC cTnI band intensity ranking accurately predicted cTnI levels (R² = 0.64) and HS severity (R² = 0.83). Five markers of systemic organ damage also correlated with ctPOC score (albumin, alanine aminotransferase, blood urea nitrogen, cholesterol, and total bilirubin; R² > 0.4). This suggests that cTnI POC tests can accurately determine HS severity and could serve as simple, portable, cost-effective HS field tests.

  6. A fast and accurate frequency estimation algorithm for sinusoidal signal with harmonic components

    NASA Astrophysics Data System (ADS)

    Hu, Jinghua; Pan, Mengchun; Zeng, Zhidun; Hu, Jiafei; Chen, Dixiang; Tian, Wugang; Zhao, Jianqiang; Du, Qingfa

    2016-10-01

    Frequency estimation is a fundamental problem in many applications, such as traditional vibration measurement, power system supervision, and microelectromechanical system sensor control. In this paper, a fast and accurate frequency estimation algorithm is proposed to deal with the low-efficiency problem of traditional methods. The proposed algorithm consists of coarse and fine frequency estimation steps, and we demonstrate that applying a modified zero-crossing technique for the coarse estimation (locating the peak of the FFT amplitude spectrum) is more efficient than conventional searching methods. The proposed estimation algorithm thus requires fewer hardware and software resources and achieves even higher efficiency as the amount of experimental data increases. Experimental results with a modulated magnetic signal show that the root mean square error of frequency estimation is below 0.032 Hz with the proposed algorithm, which has lower computational complexity and better global performance than conventional frequency estimation methods.
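
    A minimal sketch of the coarse-to-fine idea follows: an FFT peak gives the coarse estimate, and the spacing of zero crossings refines it. The exact refinement and fallback logic here are assumptions; the paper's modified zero-crossing technique additionally copes with harmonic components.

```python
import numpy as np

def estimate_frequency(x, fs):
    """Two-stage frequency estimate of a sampled sinusoid.

    Coarse: FFT magnitude peak. Fine: zero crossings occur twice per
    period, so their mean spacing gives the period directly.
    """
    x = x - x.mean()
    spec = np.abs(np.fft.rfft(x))
    f_coarse = (np.argmax(spec[1:]) + 1) * fs / len(x)  # skip DC bin
    s = np.signbit(x)
    crossings = np.nonzero(s[1:] != s[:-1])[0]
    if len(crossings) >= 2:
        period = 2.0 * np.diff(crossings).mean() / fs
        return 1.0 / period
    return f_coarse  # fall back to the coarse estimate
```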

  7. Charged Point Defects in the Flatland: Accurate Formation Energy Calculations in Two-Dimensional Materials

    NASA Astrophysics Data System (ADS)

    Komsa, Hannu-Pekka; Berseneva, Natalia; Krasheninnikov, Arkady V.; Nieminen, Risto M.

    2014-07-01

    Impurities and defects frequently govern materials properties, with the most prominent example being the doping of bulk semiconductors where a minute amount of foreign atoms can be responsible for the operation of the electronic devices. Several computational schemes based on a supercell approach have been developed to get insights into types and equilibrium concentrations of point defects, which successfully work in bulk materials. Here, we show that many of these schemes cannot directly be applied to two-dimensional (2D) systems, as formation energies of charged point defects are dominated by large spurious electrostatic interactions between defects in inhomogeneous environments. We suggest two approaches that solve this problem and give accurate formation energies of charged defects in 2D systems in the dilute limit. Our methods, which are applicable to all kinds of charged defects in any 2D system, are benchmarked for impurities in technologically important h-BN and MoS2 2D materials, and they are found to perform equally well for substitutional and adatom impurities.

  8. Accurate Astrometry and Photometry of Saturated and Coronagraphic Point Spread Functions

    SciTech Connect

    Marois, C; Lafreniere, D; Macintosh, B; Doyon, R

    2006-02-07

    For ground-based adaptive optics point source imaging, differential atmospheric refraction and flexure introduce a small drift of the point spread function (PSF) with time, and seeing and sky transmission variations modify the PSF flux. These effects need to be corrected to properly combine the images and obtain optimal signal-to-noise ratios, accurate relative astrometry and photometry of detected companions as well as precise detection limits. Usually, one can easily correct for these effects by using the PSF core, but this is impossible when high dynamic range observing techniques are used, like coronagraphy with a non-transmissive occulting mask, or if the stellar PSF core is saturated. We present a new technique that can solve these issues by using off-axis satellite PSFs produced by a periodic amplitude or phase mask conjugated to a pupil plane. It will be shown that these satellite PSFs track precisely the PSF position, its Strehl ratio and its intensity and can thus be used to register and to flux normalize the PSF. This approach can be easily implemented in existing adaptive optics instruments and should be considered for future extreme adaptive optics coronagraph instruments and in high-contrast imaging space observatories.

  9. Sample Size Requirements for Accurate Estimation of Squared Semi-Partial Correlation Coefficients.

    ERIC Educational Resources Information Center

    Algina, James; Moulder, Bradley C.; Moser, Barry K.

    2002-01-01

    Studied the sample size requirements for accurate estimation of squared semi-partial correlation coefficients through simulation studies. Results show that the sample size necessary for adequate accuracy depends on: (1) the population squared multiple correlation coefficient (p squared); (2) the population increase in p squared; and (3) the…

  10. Anharmonic zero point vibrational energies: Tipping the scales in accurate thermochemistry calculations?

    NASA Astrophysics Data System (ADS)

    Pfeiffer, Florian; Rauhut, Guntram; Feller, David; Peterson, Kirk A.

    2013-01-01

    Anharmonic zero point vibrational energies (ZPVEs) calculated using both conventional CCSD(T) and MP2 in combination with vibrational second-order perturbation theory (VPT2) are compared to explicitly correlated CCSD(T)-F12 and MP2-F12 results that utilize vibrational configuration interaction (VCI) theory for 26 molecules of varying size. Sequences of correlation consistent basis sets are used throughout. It is found that the explicitly correlated methods yield results close to the basis set limit even with double-zeta quality basis sets. In particular, the anharmonic contributions to the ZPVE are accurately recovered at just the MP2 (or MP2-F12) level of theory. Somewhat surprisingly, the best vibrational CI results agreed with the VPT2 values with a mean unsigned deviation of just 0.09 kJ/mol and a standard deviation of just 0.11 kJ/mol. The largest difference was observed for C4H4O (0.34 kJ/mol). A simplified version of the vibrational CI procedure that limited the modal expansion to at most 2-mode coupling yielded anharmonic corrections generally within about 0.1 kJ/mol of the full 3- or 4-mode results, except in the cases of C3H8 and C4H4O where the contributions were underestimated by 1.3 and 0.8 kJ/mol, respectively (34% and 40%, respectively). For the molecules considered in this work, accurate anharmonic ZPVEs are most economically obtained by combining CCSD(T)-F12a/cc-pVDZ-F12 harmonic frequencies with either MP2/aug-cc-pVTZ/VPT2 or MP2-F12/cc-pVDZ-F12/VCI anharmonic corrections.

  11. Leidenfrost Point and Estimate of the Vapour Layer Thickness

    ERIC Educational Resources Information Center

    Gianino, Concetto

    2008-01-01

    In this article I describe an experiment involving the Leidenfrost phenomenon, which is the long lifetime of a water drop when it is deposited on a metal that is much hotter than the boiling point of water. The experiment was carried out with high-school students. The Leidenfrost point is measured and the heat laws are used to estimate the…

  12. Estimation of viable airborne microbes downwind from a point source.

    PubMed Central

    Lighthart, B; Frisch, A S

    1976-01-01

    Modification of the Pasquill atmospheric diffusion equations for estimating viable airborne microbial cell concentrations downwind from a continuous point source is presented. A graphical method is given to estimate the ground-level cell concentration given (i) microbial death rate, (ii) mean wind speed, (iii) atmospheric stability class, (iv) downwind sample distance from the source, and (v) source height. PMID:1275491
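
    One standard way to write such a modification: the ground-level centerline concentration of the Gaussian plume acquires a first-order viability-decay factor with death rate k, because the travel time to downwind distance x is x/ū. The notation below follows the usual Pasquill-Gifford convention and is introduced here for illustration.

```latex
C(x,0,0) = \frac{Q}{\pi\,\sigma_y(x)\,\sigma_z(x)\,\bar{u}}
\exp\!\left(-\frac{h^{2}}{2\,\sigma_z^{2}(x)}\right)
\exp\!\left(-\frac{k\,x}{\bar{u}}\right)
```

    Here Q is the source strength (viable cells per second), ū the mean wind speed, h the source height, k the microbial death rate, and σ_y, σ_z the dispersion coefficients for the given stability class.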

  13. Children's Use of the Reference Point Strategy for Measurement Estimation

    ERIC Educational Resources Information Center

    Joram, Elana; Gabriele, Anthony J.; Bertheau, Myrna; Gelman, Rochel; Subrahmanyam, Kaveri

    2005-01-01

    Mathematics educators frequently recommend that students use strategies for measurement estimation, such as the reference point or benchmark strategy; however, little is known about the effects of using this strategy on estimation accuracy or representations of standard measurement units. One reason for the paucity of research in this area is that…

  14. Do We Know Whether Researchers and Reviewers are Estimating Risk and Benefit Accurately?

    PubMed

    Hey, Spencer Phillips; Kimmelman, Jonathan

    2016-10-01

    Accurate estimation of risk and benefit is integral to good clinical research planning, ethical review, and study implementation. Some commentators have argued that various actors in clinical research systems are prone to biased or arbitrary risk/benefit estimation. In this commentary, we suggest the evidence supporting such claims is very limited. Most prior work has imputed risk/benefit beliefs based on past behavior or goals, rather than directly measuring them. We describe an approach - forecast analysis - that would enable direct and effective measurement of the quality of risk/benefit estimation. We then consider some objections to, and limitations of, the forecasting approach.

  15. Towards an accurate estimation of the isosteric heat of adsorption - A correlation with the potential theory.

    PubMed

    Askalany, Ahmed A; Saha, Bidyut B

    2017-03-15

    Accurate estimation of the isosteric heat of adsorption is mandatory for good modeling of adsorption processes. In this paper, a thermodynamic formalism based on the adsorbed phase volume, which is a function of adsorption pressure and temperature, is proposed for precise estimation of the isosteric heat of adsorption. The isosteric heat of adsorption estimated using the new correlation has been compared with measured values for several carefully selected adsorbent-refrigerant pairs from the open literature. The results show that the proposed isosteric heat of adsorption correlation fits the experimentally measured values better than the Clausius-Clapeyron equation.
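
    For reference, the exact Clapeyron-type relation at constant uptake w is shown below; the familiar Clausius-Clapeyron form follows when the adsorbed phase volume v_a is neglected and the gas phase is treated as ideal, which is precisely the approximation the proposed correlation avoids. The notation is introduced here for illustration.

```latex
q_{st} = T\,\bigl(v_g - v_a\bigr)\left(\frac{\partial P}{\partial T}\right)_{\!w}
\;\approx\;
R\,T^{2}\left(\frac{\partial \ln P}{\partial T}\right)_{\!w}
\quad \text{(for } v_a \approx 0,\; v_g = RT/P\text{)}
```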

  16. On the accurate estimation of gap fraction during daytime with digital cover photography

    NASA Astrophysics Data System (ADS)

    Hwang, Y. R.; Ryu, Y.; Kimm, H.; Macfarlane, C.; Lang, M.; Sonnentag, O.

    2015-12-01

    Digital cover photography (DCP) has emerged as an indirect method to obtain gap fraction accurately. Thus far, however, the intervention of subjectivity, such as determining the camera relative exposure value (REV) and the threshold in the histogram, has hindered the computation of accurate gap fraction. Here we propose a novel method that enables us to measure gap fraction accurately during daytime under various sky conditions by DCP. The method computes gap fraction from a single unsaturated raw DCP image, corrected for scattering effects by canopies, together with a sky image reconstructed from the raw-format image. To test the sensitivity of the derived gap fraction to diverse REVs, solar zenith angles and canopy structures, we took photos at one-hour intervals between sunrise and midday under dense and sparse canopies with REVs from 0 to -5. The novel method showed little variation in gap fraction across different REVs in both dense and sparse canopies over a diverse range of solar zenith angles. A perforated-panel experiment, used to test the accuracy of the estimated gap fraction, confirmed that the method yields accurate and consistent gap fractions across different hole sizes, gap fractions and solar zenith angles. These findings highlight that the novel method opens new opportunities to estimate gap fraction accurately during daytime from sparse to dense canopies, which will be useful for monitoring LAI precisely and validating satellite remote sensing LAI products efficiently.

  17. Accurate Estimation of the Entropy of Rotation-Translation Probability Distributions.

    PubMed

    Fogolari, Federico; Dongmo Foumthuim, Cedrix Jurgal; Fortuna, Sara; Soler, Miguel Angel; Corazza, Alessandra; Esposito, Gennaro

    2016-01-12

    The estimation of rotational and translational entropies in the context of ligand binding has been the subject of long-standing investigation. The high dimensionality (six) of the problem and the limited amount of sampling often prevent the resolution required to provide accurate estimates by the histogram method. Recently, the nearest-neighbor distance method has been applied to the problem, but the solutions provided either address rotation and translation separately, thereby lacking correlations, or use a heuristic approach. Here we address rotational-translational entropy estimation in the context of nearest-neighbor-based entropy estimation, solve the problem numerically, and provide an exact and an approximate method to estimate the full rotational-translational entropy.
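
    For orientation, the standard Kozachenko-Leonenko nearest-neighbor entropy estimator in d dimensions, on which such methods build, is given below; extending it to the six-dimensional rotation-translation space requires the appropriate metric and volume element, which is the problem this paper addresses.

```latex
\hat{H} = \frac{d}{N}\sum_{i=1}^{N}\ln \rho_i
        + \ln V_d + \ln(N-1) + \gamma,
\qquad
V_d = \frac{\pi^{d/2}}{\Gamma\!\left(\tfrac{d}{2}+1\right)}
```

    where ρ_i is the distance from sample i to its nearest neighbor, V_d the volume of the unit d-ball, and γ the Euler-Mascheroni constant.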

  18. [Guidelines for Accurate and Transparent Health Estimates Reporting: the GATHER Statement].

    PubMed

    Stevens, Gretchen A; Alkema, Leontine; Black, Robert E; Boerma, J Ties; Collins, Gary S; Ezzati, Majid; Grove, John T; Hogan, Daniel R; Hogan, Margaret C; Horton, Richard; Lawn, Joy E; Marušic, Ana; Mathers, Colin D; Murray, Christopher J L; Rudan, Igor; Salomon, Joshua A; Simpson, Paul J; Vos, Theo; Welch, Vivian

    2017-01-01

    Measurements of health indicators are rarely available for every population and period of interest, and available data may not be comparable. The Guidelines for Accurate and Transparent Health Estimates Reporting (GATHER) define best reporting practices for studies that calculate health estimates for multiple populations (in time or space) using multiple information sources. Health estimates that fall within the scope of GATHER include all quantitative population-level estimates (including global, regional, national, or subnational estimates) of health indicators, including indicators of health status, incidence and prevalence of diseases, injuries, and disability and functioning; and indicators of health determinants, including health behaviours and health exposures. GATHER comprises a checklist of 18 items that are essential for best reporting practice. A more detailed explanation and elaboration document, describing the interpretation and rationale of each reporting item along with examples of good reporting, is available on the GATHER website (http://gather-statement.org).

  19. Highly effective and accurate weak point monitoring method for advanced design rule (1x nm) devices

    NASA Astrophysics Data System (ADS)

    Ahn, Jeongho; Seong, ShiJin; Yoon, Minjung; Park, Il-Suk; Kim, HyungSeop; Ihm, Dongchul; Chin, Soobok; Sivaraman, Gangadharan; Li, Mingwei; Babulnath, Raghav; Lee, Chang Ho; Kurada, Satya; Brown, Christine; Galani, Rajiv; Kim, JaeHyun

    2014-04-01

    Historically, when manufacturing semiconductor devices at 45 nm or larger design rules, IC manufacturing yield was mainly determined by global random variations, and therefore the chip manufacturers / manufacturing teams were mainly responsible for yield improvement. With the introduction of sub-45 nm semiconductor technologies, yield started to be dominated by systematic variations, primarily centered on resolution problems, copper/low-k interconnects and CMP. These local systematic variations, which have become decisively greater than global random variations, are design-dependent [1, 2], and therefore designers now share the responsibility of increasing yield with manufacturers / manufacturing teams. A widening manufacturing gap has led to a dramatic increase in design rules that are either too restrictive or do not guarantee a litho/etch hotspot-free design. The semiconductor industry is currently limited to 193 nm scanners, and no relief is expected from the equipment side to prevent or eliminate these systematic hotspots. Hence many design houses have come up with innovative design products to check hotspots based on model-based lithography checks to validate design manufacturability, which also account for the complex two-dimensional effects that stem from aggressive scaling of 193 nm lithography. Most of these hotspots (a.k.a. weak points) are especially seen on Back End of the Line (BEOL) process levels such as Mx ADI, Mx Etch and Mx CMP. Inspecting some of these BEOL levels can be extremely challenging, as there is a lot of wafer noise that can hinder an inspector's ability to detect and monitor the defects or weak points of interest. In this work we have attempted to accurately inspect the weak points using a novel broadband plasma optical inspection approach that enhances the defect signal from patterns of interest (POI) and precisely suppresses surrounding wafer noise. This new approach is a paradigm shift in wafer inspection

  20. Effects of LiDAR point density and landscape context on estimates of urban forest biomass

    NASA Astrophysics Data System (ADS)

    Singh, Kunwar K.; Chen, Gang; McCarter, James B.; Meentemeyer, Ross K.

    2015-03-01

    Light Detection and Ranging (LiDAR) data is being increasingly used as an effective alternative to conventional optical remote sensing to accurately estimate aboveground forest biomass ranging from individual tree to stand levels. Recent advancements in LiDAR technology have resulted in higher point densities and improved data accuracies accompanied by challenges for procuring and processing voluminous LiDAR data for large-area assessments. Reducing point density lowers data acquisition costs and overcomes computational challenges for large-area forest assessments. However, how does lower point density impact the accuracy of biomass estimation in forests containing a great level of anthropogenic disturbance? We evaluate the effects of LiDAR point density on the biomass estimation of remnant forests in the rapidly urbanizing region of Charlotte, North Carolina, USA. We used multiple linear regression to establish a statistical relationship between field-measured biomass and predictor variables derived from LiDAR data with varying densities. We compared the estimation accuracies between a general Urban Forest type and three Forest Type models (evergreen, deciduous, and mixed) and quantified the degree to which landscape context influenced biomass estimation. The explained biomass variance of the Urban Forest model, using adjusted R2, was consistent across the reduced point densities, with the highest difference of 11.5% between the 100% and 1% point densities. The combined estimates of Forest Type biomass models outperformed the Urban Forest models at the representative point densities (100% and 40%). The Urban Forest biomass model with development density of 125 m radius produced the highest adjusted R2 (0.83 and 0.82 at 100% and 40% LiDAR point densities, respectively) and the lowest RMSE values, highlighting a distance impact of development on biomass estimation. Our evaluation suggests that reducing LiDAR point density is a viable solution to regional

  1. Polynomial fitting of DT-MRI fiber tracts allows accurate estimation of muscle architectural parameters.

    PubMed

    Damon, Bruce M; Heemskerk, Anneriet M; Ding, Zhaohua

    2012-06-01

    Fiber curvature is a functionally significant muscle structural property, but its estimation from diffusion-tensor magnetic resonance imaging fiber tracking data may be confounded by noise. The purpose of this study was to investigate the use of polynomial fitting of fiber tracts for improving the accuracy and precision of fiber curvature (κ) measurements. Simulated image data sets were created in order to provide data with known values for κ and pennation angle (θ). Simulations were designed to test the effects of increasing inherent fiber curvature (3.8, 7.9, 11.8 and 15.3 m(-1)), signal-to-noise ratio (50, 75, 100 and 150) and voxel geometry (13.8- and 27.0-mm(3) voxel volume with isotropic resolution; 13.5-mm(3) volume with an aspect ratio of 4.0) on κ and θ measurements. In the originally reconstructed tracts, θ was estimated accurately under most curvature and all imaging conditions studied; however, the estimates of κ were imprecise and inaccurate. Fitting the tracts to second-order polynomial functions provided accurate and precise estimates of κ for all conditions except very high curvature (κ=15.3 m(-1)), while preserving the accuracy of the θ estimates. Similarly, polynomial fitting of in vivo fiber tracking data reduced the κ values of fitted tracts from those of unfitted tracts and did not change the θ values. Polynomial fitting of fiber tracts allows accurate estimation of physiologically reasonable values of κ, while preserving the accuracy of θ estimation.

  2. Polynomial Fitting of DT-MRI Fiber Tracts Allows Accurate Estimation of Muscle Architectural Parameters

    PubMed Central

    Damon, Bruce M.; Heemskerk, Anneriet M.; Ding, Zhaohua

    2012-01-01

    Fiber curvature is a functionally significant muscle structural property, but its estimation from diffusion-tensor MRI fiber tracking data may be confounded by noise. The purpose of this study was to investigate the use of polynomial fitting of fiber tracts for improving the accuracy and precision of fiber curvature (κ) measurements. Simulated image datasets were created in order to provide data with known values for κ and pennation angle (θ). Simulations were designed to test the effects of increasing inherent fiber curvature (3.8, 7.9, 11.8, and 15.3 m−1), signal-to-noise ratio (50, 75, 100, and 150), and voxel geometry (13.8 and 27.0 mm3 voxel volume with isotropic resolution; 13.5 mm3 volume with an aspect ratio of 4.0) on κ and θ measurements. In the originally reconstructed tracts, θ was estimated accurately under most curvature and all imaging conditions studied; however, the estimates of κ were imprecise and inaccurate. Fitting the tracts to 2nd order polynomial functions provided accurate and precise estimates of κ for all conditions except very high curvature (κ=15.3 m−1), while preserving the accuracy of the θ estimates. Similarly, polynomial fitting of in vivo fiber tracking data reduced the κ values of fitted tracts from those of unfitted tracts and did not change the θ values. Polynomial fitting of fiber tracts allows accurate estimation of physiologically reasonable values of κ, while preserving the accuracy of θ estimation. PMID:22503094
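
    The fit-then-differentiate idea in both versions of this abstract can be sketched briefly: fit each coordinate as a quadratic in arc length, then evaluate the standard space-curve curvature κ = |r' × r''| / |r'|³. The function name and the choice to evaluate at mid-tract are assumptions, not the authors' code.

```python
import numpy as np

def tract_curvature(points):
    """Curvature of a fiber tract after 2nd-order polynomial fitting.

    points: (N, 3) tract vertices. Each coordinate is fitted as a
    quadratic in arc length s, and kappa = |r' x r''| / |r'|^3 is
    evaluated at mid-tract.
    """
    seg = np.diff(points, axis=0)
    s = np.concatenate([[0.0], np.cumsum(np.linalg.norm(seg, axis=1))])
    coefs = [np.polyfit(s, points[:, k], 2) for k in range(3)]
    s0 = s[-1] / 2.0
    d1 = np.array([np.polyval(np.polyder(c, 1), s0) for c in coefs])
    d2 = np.array([np.polyval(np.polyder(c, 2), s0) for c in coefs])
    return np.linalg.norm(np.cross(d1, d2)) / np.linalg.norm(d1) ** 3
```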

  3. Analytical formula for three points sinusoidal signals amplitude estimation errors

    NASA Astrophysics Data System (ADS)

    Nicolae Vizireanu, Dragos; Viorica Halunga, Simona

    2012-01-01

    In this note, we show that the amplitude estimation of sinusoidal signals proposed in Wu and Hong [Wu, S.T., and Hong, J.L. (2010), 'Five-point Amplitude Estimation of Sinusoidal Signals: With Application to LVDT Signal Conditioning', IEEE Transactions on Instrumentation and Measurement, 59, 623-630] is a particular case of Vizireanu and Halunga [Vizireanu, D.N, and Halunga, S.V. (2011), 'Single Sine Wave Parameters Estimation Method Based on Four Equally Spaced Samples', International Journal of Electronics, 98(7), pp. 941-948]. An analytical formula for the amplitude estimation errors caused by sampling period deviation is obtained.

  4. Robust and accurate fundamental frequency estimation based on dominant harmonic components.

    PubMed

    Nakatani, Tomohiro; Irino, Toshio

    2004-12-01

    This paper presents a new method for robust and accurate fundamental frequency (F0) estimation in the presence of background noise and spectral distortion. Degree of dominance and dominance spectrum are defined based on instantaneous frequencies. The degree of dominance allows one to evaluate the magnitude of individual harmonic components of the speech signals relative to background noise while reducing the influence of spectral distortion. The fundamental frequency is more accurately estimated from reliable harmonic components which are easy to select given the dominance spectra. Experiments are performed using white and babble background noise with and without spectral distortion as produced by a SRAEN filter. The results show that the present method is better than previously reported methods in terms of both gross and fine F0 errors.

  5. Raoult’s law revisited: accurately predicting equilibrium relative humidity points for humidity control experiments

    PubMed Central

    Bowler, Michael G.

    2017-01-01

    The humidity surrounding a sample is an important variable in scientific experiments. Biological samples in particular require not just a humid atmosphere but often a relative humidity (RH) that is in equilibrium with a stabilizing solution required to maintain the sample in the same state during measurements. The controlled dehydration of macromolecular crystals can lead to significant increases in crystal order, leading to higher diffraction quality. Devices that can accurately control the humidity surrounding crystals while monitoring diffraction have led to this technique being increasingly adopted, as the experiments become easier and more reproducible. Matching the RH to the mother liquor is the first step in allowing the stable mounting of a crystal. In previous work [Wheeler, Russi, Bowler & Bowler (2012). Acta Cryst. F68, 111–114], the equilibrium RHs were measured for a range of concentrations of the most commonly used precipitants in macromolecular crystallography and it was shown how these related to Raoult’s law for the equilibrium vapour pressure of water above a solution. However, a discrepancy between the measured values and those predicted by theory could not be explained. Here, a more precise humidity control device has been used to determine equilibrium RH points. The new results are in agreement with Raoult’s law. A simple argument in statistical mechanics is also presented, demonstrating that the equilibrium vapour pressure of a solvent is proportional to its mole fraction in an ideal solution: Raoult’s law. The same argument can be extended to the case where the solvent and solute molecules are of different sizes, as is the case with polymers. The results provide a framework for the correct maintenance of the RH surrounding a sample. PMID:28381983

  6. Raoult's law revisited: accurately predicting equilibrium relative humidity points for humidity control experiments.

    PubMed

    Bowler, Michael G; Bowler, David R; Bowler, Matthew W

    2017-04-01

    The humidity surrounding a sample is an important variable in scientific experiments. Biological samples in particular require not just a humid atmosphere but often a relative humidity (RH) that is in equilibrium with a stabilizing solution required to maintain the sample in the same state during measurements. The controlled dehydration of macromolecular crystals can lead to significant increases in crystal order, leading to higher diffraction quality. Devices that can accurately control the humidity surrounding crystals while monitoring diffraction have led to this technique being increasingly adopted, as the experiments become easier and more reproducible. Matching the RH to the mother liquor is the first step in allowing the stable mounting of a crystal. In previous work [Wheeler, Russi, Bowler & Bowler (2012). Acta Cryst. F68, 111-114], the equilibrium RHs were measured for a range of concentrations of the most commonly used precipitants in macromolecular crystallography and it was shown how these related to Raoult's law for the equilibrium vapour pressure of water above a solution. However, a discrepancy between the measured values and those predicted by theory could not be explained. Here, a more precise humidity control device has been used to determine equilibrium RH points. The new results are in agreement with Raoult's law. A simple argument in statistical mechanics is also presented, demonstrating that the equilibrium vapour pressure of a solvent is proportional to its mole fraction in an ideal solution: Raoult's law. The same argument can be extended to the case where the solvent and solute molecules are of different sizes, as is the case with polymers. The results provide a framework for the correct maintenance of the RH surrounding a sample.
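
    Both records above describe the same result, so a single numerical sketch suffices: for an ideal solution, Raoult's law gives the equilibrium RH directly as the mole fraction of the water. The helper below is our illustration (the paper's extension to solutes whose molecules differ in size from water, relevant for polymer precipitants such as PEG, is not implemented):

        def equilibrium_rh_ideal(m_solute_g, M_solute, n_ions=1,
                                 m_water_g=1000.0, M_water=18.015):
            """Equilibrium %RH above an ideal solution via Raoult's law:
            p/p0 = x_water, the mole fraction of water. n_ions crudely accounts
            for dissociating salts (e.g., 2 for NaCl). Illustrative sketch only."""
            n_water = m_water_g / M_water
            n_solute = n_ions * m_solute_g / M_solute
            return 100.0 * n_water / (n_water + n_solute)

        # ~1 molal NaCl (58.44 g in 1 kg of water, 2 ions per formula unit):
        print(round(equilibrium_rh_ideal(58.44, 58.44, n_ions=2), 1))  # ~96.5 %RH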

  7. Data Anonymization that Leads to the Most Accurate Estimates of Statistical Characteristics: Fuzzy-Motivated Approach

    PubMed Central

    Xiang, G.; Ferson, S.; Ginzburg, L.; Longpré, L.; Mayorga, E.; Kosheleva, O.

    2013-01-01

    To preserve privacy, the original data points (with exact values) are replaced by boxes containing each (inaccessible) data point. This privacy-motivated uncertainty leads to uncertainty in the statistical characteristics computed based on this data. In a previous paper, we described how to minimize this uncertainty under the assumption that we use the same standard statistical estimates for the desired characteristics. In this paper, we show that we can further decrease the resulting uncertainty if we allow fuzzy-motivated weighted estimates, and we explain how to optimally select the corresponding weights. PMID:25187183

  8. Development of Star Tracker System for Accurate Estimation of Spacecraft Attitude

    DTIC Science & Technology

    2009-12-01

    Tappe, Jack A. Thesis, December 2009; thesis co-advisors: Jae Jun Kim and Brij N. Agrawal; Knox T. Millsaps, Chairman, Department of Mechanical and Astronautical Engineering. [The indexed snippet contains only title-page and acknowledgment text; no abstract is available.]

  9. A method to accurately estimate the muscular torques of human wearing exoskeletons by torque sensors.

    PubMed

    Hwang, Beomsoo; Jeon, Doyoung

    2015-04-09

    In exoskeletal robots, quantifying the user's muscular effort is important for recognizing the user's motion intentions and evaluating motor abilities. In this paper, we estimate users' muscular effort using joint torque sensors, whose measurements contain the dynamic effects of the human body (the inertial, Coriolis, and gravitational torques) in addition to the torque produced by active muscular effort. It is therefore important to accurately extract the dynamic effects of the user's limb from the measured torque. The user's limb dynamics are formulated, and a convenient method of identifying user-specific parameters is suggested for estimating the user's muscular torque in robotic exoskeletons. Experiments were carried out on a wheelchair-integrated lower-limb exoskeleton, EXOwheel, equipped with torque sensors in the hip and knee joints. The proposed methods were evaluated on 10 healthy participants during body-weight-supported gait training. The experimental results show that the torque sensors can estimate the muscular torque accurately under both relaxed and activated muscle conditions.
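
    The subtraction at the heart of this approach is easy to sketch. Below, a single-link pendulum stands in for the user's limb; the inertia, friction and gravity parameters are illustrative placeholders for the user-specific values the paper identifies, and the function name is ours:

        import numpy as np

        def muscular_torque(tau_measured, q, dq, ddq,
                            I=0.35, b=0.05, m=9.0, lc=0.25, g=9.81):
            """Active muscular torque at one joint, obtained by removing the
            modeled limb dynamics from the joint-torque-sensor reading:

                tau_muscle = tau_measured - (I*ddq + b*dq + m*g*lc*sin(q))

            Single-link sketch; I, b, m, lc are user-specific parameters that
            would be identified beforehand (values here are placeholders)."""
            tau_dynamics = I * ddq + b * dq + m * g * lc * np.sin(q)
            return tau_measured - tau_dynamics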

  10. Accurate Attitude Estimation Using ARS under Conditions of Vehicle Movement Based on Disturbance Acceleration Adaptive Estimation and Correction

    PubMed Central

    Xing, Li; Hang, Yijun; Xiong, Zhi; Liu, Jianye; Wan, Zhong

    2016-01-01

    This paper describes a disturbance acceleration adaptive estimate and correction approach for an attitude reference system (ARS) so as to improve the attitude estimate precision under vehicle movement conditions. The proposed approach depends on a Kalman filter, where the attitude error, the gyroscope zero offset error and the disturbance acceleration error are estimated. By switching the filter decay coefficient of the disturbance acceleration model in different acceleration modes, the disturbance acceleration is adaptively estimated and corrected, and then the attitude estimate precision is improved. The filter was tested in three different disturbance acceleration modes (non-acceleration, vibration-acceleration and sustained-acceleration mode, respectively) by digital simulation. Moreover, the proposed approach was tested in a kinematic vehicle experiment as well. Using the designed simulations and kinematic vehicle experiments, it has been shown that the disturbance acceleration of each mode can be accurately estimated and corrected. Moreover, compared with the complementary filter, the experimental results have explicitly demonstrated the proposed approach further improves the attitude estimate precision under vehicle movement conditions. PMID:27754469

  11. Thermal Imaging of Earth for Accurate Pointing of Deep-Space Antennas

    NASA Technical Reports Server (NTRS)

    Ortiz, Gerardo; Lee, Shinhak

    2005-01-01

    A report discusses a proposal to use thermal (long-wavelength infrared) images of the Earth, as seen from spacecraft at interplanetary distances, for pointing antennas and telescopes toward the Earth for Ka-band and optical communications. The purpose is to overcome two limitations of using visible images: (1) at large Earth phase angles, the light from the Earth is too faint; and (2) performance is degraded by large albedo variations associated with weather changes. In particular, it is proposed to use images in the wavelength band of 8 to 13 μm, wherein the appearance of the Earth is substantially independent of the Earth phase angle and emissivity variations are small. The report addresses tracking requirements for optical and Ka-band communications, selection of the wavelength band, available signal level versus phase angle, background noise, and signal-to-noise ratio. Tracking errors are estimated for several conceptual systems employing currently available infrared image sensors. It is found that at Mars range, it should be possible to locate the centroid of the Earth image to within a noise equivalent angle (a random angular error) of between 10 and 150 nanoradians, with a bias error of no more than 80 nanoradians.
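
    The centroiding step that sets the noise equivalent angle is straightforward; below is a sketch of intensity-weighted centroiding (our illustration, not the report's algorithm). Converting centroid jitter to an angle only requires the per-pixel instantaneous field of view.

        import numpy as np

        def image_centroid(img):
            """Intensity-weighted centroid (row, col) of a background-subtracted
            thermal image; illustrative sketch of the centroiding step."""
            img = np.clip(np.asarray(img, dtype=float), 0.0, None)
            rows, cols = np.indices(img.shape)
            total = img.sum()
            return (rows * img).sum() / total, (cols * img).sum() / total

        # Noise equivalent angle = centroid jitter (pixels) * IFOV (rad/pixel);
        # e.g. 0.01-pixel jitter at a 10-microradian IFOV gives 100 nanoradians.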

  12. Single-point position estimation in interplanetary trajectories using star trackers

    NASA Astrophysics Data System (ADS)

    Mortari, Daniele; Conway, Dylan

    2016-11-01

    This study provides a single-point position estimation technique for interplanetary missions, based on observing visible planets with star trackers. A closed-form least-squares solution is obtained by minimizing the sum of the expected object-space squared distance errors. A weighted least-squares solution is then provided by an iterative procedure, with weights evaluated from the planet distances estimated by the least-squares solution. It is shown that the weighted approach requires only one iteration to converge and yields significant accuracy gains over the simple least-squares approach. The light-time correction is taken into account, whereas the starlight aberration correction cannot be implemented in single-point estimation, as it requires knowledge of the observer velocity. The proposed method is numerically validated through a statistical scenario as follows. A three-dimensional grid of test cases is generated: two dimensions sweep through the ecliptic plane and the third sweeps through time from January 1, 2018 to January 1, 2043 in 5-year increments. The observer position is estimated for each test case and the estimation error is recorded. The results show that a large majority of positions are well suited to position estimation using star trackers pointed at visible planets, and that reliable and accurate single-point position estimates can be provided for interplanetary missions. The proposed approach is suitable for initializing a filtering technique to increase the estimation accuracy.
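
    The object-space least-squares step is linear in the observer position and can be sketched compactly. Each sighting constrains the observer to the line through the planet's ephemeris position p_i along the measured unit direction u_i; minimizing the summed squared perpendicular distances gives a closed form (names are ours; the paper then re-weights by estimated planet distances and iterates once):

        import numpy as np

        def position_from_sightings(planet_pos, directions, weights=None):
            """Least-squares observer position r minimizing
            sum_i w_i * ||(I - u_i u_i^T)(p_i - r)||^2, i.e. the squared
            object-space distances to the sight lines. Needs at least two
            non-parallel sightings. Illustrative sketch."""
            planet_pos = np.asarray(planet_pos, dtype=float)
            directions = np.asarray(directions, dtype=float)
            if weights is None:
                weights = np.ones(len(planet_pos))
            A = np.zeros((3, 3))
            b = np.zeros(3)
            for p, u, w in zip(planet_pos, directions, weights):
                P = np.eye(3) - np.outer(u, u)  # projector normal to the sight line
                A += w * P
                b += w * P @ p
            return np.linalg.solve(A, b)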

  13. Novel serologic biomarkers provide accurate estimates of recent Plasmodium falciparum exposure for individuals and communities.

    PubMed

    Helb, Danica A; Tetteh, Kevin K A; Felgner, Philip L; Skinner, Jeff; Hubbard, Alan; Arinaitwe, Emmanuel; Mayanja-Kizza, Harriet; Ssewanyana, Isaac; Kamya, Moses R; Beeson, James G; Tappero, Jordan; Smith, David L; Crompton, Peter D; Rosenthal, Philip J; Dorsey, Grant; Drakeley, Christopher J; Greenhouse, Bryan

    2015-08-11

    Tools to reliably measure Plasmodium falciparum (Pf) exposure in individuals and communities are needed to guide and evaluate malaria control interventions. Serologic assays can potentially produce precise exposure estimates at low cost; however, current approaches based on responses to a few characterized antigens are not designed to estimate exposure in individuals. Pf-specific antibody responses differ by antigen, suggesting that selection of antigens with defined kinetic profiles will improve estimates of Pf exposure. To identify novel serologic biomarkers of malaria exposure, we evaluated responses to 856 Pf antigens by protein microarray in 186 Ugandan children, for whom detailed Pf exposure data were available. Using data-adaptive statistical methods, we identified combinations of antibody responses that maximized information on an individual's recent exposure. Responses to three novel Pf antigens accurately classified whether an individual had been infected within the last 30, 90, or 365 d (cross-validated area under the curve = 0.86-0.93), whereas responses to six antigens accurately estimated an individual's malaria incidence in the prior year. Cross-validated incidence predictions for individuals in different communities provided accurate stratification of exposure between populations and suggest that precise estimates of community exposure can be obtained from sampling a small subset of that community. In addition, serologic incidence predictions from cross-sectional samples characterized heterogeneity within a community similarly to 1 y of continuous passive surveillance. Development of simple ELISA-based assays derived from the successful selection strategy outlined here offers the potential to generate rich epidemiologic surveillance data that will be widely accessible to malaria control programs.

  14. Point and interval estimation in the combination of bioassay results.

    PubMed Central

    Armitage, P.; Bennett, B. M.; Finney, D. J.

    1976-01-01

    A procedure for combining evidence from different biological assays is shown to be equivalent both to generalized least-squares and to maximum-likelihood estimation. By appropriate nesting of hypotheses, the likelihood function can be used to test the agreement between the assays and to obtain probability limits for the combined estimate of potency. The properties of these limits are examined, with particular reference to the situation, unusual but not impossible in practice, in which the values of relative potency that they define consist of several disjoint segments instead of a single interval. The connection with general theory of estimating linear functional relations is pointed out. PMID:1060692

  15. Voxel-based registration of simulated and real patient CBCT data for accurate dental implant pose estimation

    NASA Astrophysics Data System (ADS)

    Moreira, António H. J.; Queirós, Sandro; Morais, Pedro; Rodrigues, Nuno F.; Correia, André Ricardo; Fernandes, Valter; Pinho, A. C. M.; Fonseca, Jaime C.; Vilaça, João. L.

    2015-03-01

    The success of a dental implant-supported prosthesis is directly linked to the accuracy obtained during the implant's pose estimation (position and orientation). Although traditional impression techniques and recent digital acquisition methods are acceptably accurate, a simultaneously fast, accurate and operator-independent methodology is still lacking. To this end, an image-based framework is proposed to estimate the patient-specific implant pose using cone-beam computed tomography (CBCT) and prior knowledge of the implanted model. The pose estimation is accomplished in a three-step approach: (1) a region of interest is extracted from the CBCT data using two operator-defined points at the implant's main axis; (2) a simulated CBCT volume of the known implanted model is generated through Feldkamp-Davis-Kress reconstruction and coarsely aligned to the defined axis; and (3) a voxel-based rigid registration is performed to optimally align the patient and simulated CBCT data, extracting the implant's pose from the optimal transformation. Three experiments were performed to evaluate the framework: (1) an in silico study using 48 implants distributed through 12 three-dimensional synthetic mandibular models; (2) an in vitro study using an artificial mandible with 2 dental implants acquired with an i-CAT system; and (3) two clinical case studies. The results showed positional errors of 67+/-34 μm and 108 μm, and angular misfits of 0.15+/-0.08° and 1.4°, for experiments 1 and 2, respectively. Moreover, in experiment 3, visual assessment of the clinical data showed a coherent alignment of the reference implant. Overall, a novel image-based framework for implant pose estimation from CBCT data was proposed, showing accurate results in agreement with dental prosthesis modelling requirements.

  16. Accurate estimation of object location in an image sequence using helicopter flight data

    NASA Technical Reports Server (NTRS)

    Tang, Yuan-Liang; Kasturi, Rangachar

    1994-01-01

    In autonomous navigation, it is essential to obtain a three-dimensional (3D) description of the static environment in which the vehicle is traveling. For a rotorcraft conducting low-altitude flight, this description is particularly useful for obstacle detection and avoidance. In this paper, we address the problem of 3D position estimation for static objects from a monocular sequence of images captured from a low-altitude flying helicopter. Since the environment is static, it is well known that the optical flow in the image will produce a radiating pattern from the focus of expansion. We propose a motion analysis system which utilizes the epipolar constraint to accurately estimate the 3D positions of scene objects in a real-world image sequence taken from a low-altitude flying helicopter. Results show that this approach gives good estimates of object positions near the rotorcraft's intended flight path.

  17. Estimating the Effective Permittivity for Reconstructing Accurate Microwave-Radar Images

    PubMed Central

    Lavoie, Benjamin R.; Okoniewski, Michal; Fear, Elise C.

    2016-01-01

    We present preliminary results from a method for estimating the optimal effective permittivity for reconstructing microwave-radar images. Using knowledge of how microwave-radar images are formed, we identify characteristics that are typical of good images, and define a fitness function to measure the relative image quality. We build a polynomial interpolant of the fitness function in order to identify the most likely permittivity values of the tissue. To make the estimation process more efficient, the polynomial interpolant is constructed using a locally and dimensionally adaptive sampling method that is a novel combination of stochastic collocation and polynomial chaos. Examples, using a series of simulated, experimental and patient data collected using the Tissue Sensing Adaptive Radar system, which is under development at the University of Calgary, are presented. These examples show how, using our method, accurate images can be reconstructed starting with only a broad estimate of the permittivity range. PMID:27611785

  18. Evaluating lidar point densities for effective estimation of aboveground biomass

    USGS Publications Warehouse

    Wu, Zhuoting; Dye, Dennis G.; Stoker, Jason M.; Vogel, John M.; Velasco, Miguel G.; Middleton, Barry R.

    2016-01-01

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) was recently established to provide airborne lidar data coverage on a national scale. As part of a broader research effort of the USGS to develop an effective remote sensing-based methodology for the creation of an operational biomass Essential Climate Variable (Biomass ECV) data product, we evaluated the performance of airborne lidar data at various pulse densities against Landsat 8 satellite imagery in estimating aboveground biomass for forests and woodlands in a study area in east-central Arizona, U.S. High-point-density airborne lidar data were randomly sampled to produce five lidar datasets with reduced densities ranging from 0.5 to 8 point(s)/m2, corresponding to the point-density range of 3DEP for national lidar coverage over time. Lidar-derived aboveground biomass estimate errors showed an overall decreasing trend as lidar point density increased from 0.5 to 8 points/m2. Landsat 8-based aboveground biomass estimates produced errors larger than those from even the lowest lidar point density of 0.5 point/m2, and therefore Landsat 8 observations alone were ineffective relative to airborne lidar for generating a Biomass ECV product, at least for the forest and woodland vegetation types of the Southwestern U.S. While a national Biomass ECV product with optimal accuracy could potentially be achieved with 3DEP data at 8 points/m2, our results indicate that even lower-density lidar data could be sufficient to provide a national Biomass ECV product with accuracies significantly higher than those from Landsat observations alone.
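
    The density-reduction step used to emulate lower-density collections is simple random thinning; a minimal sketch (function and parameter names are ours):

        import numpy as np

        def thin_to_density(points, area_m2, target_density, seed=None):
            """Randomly thin a lidar point cloud (N x k array) to roughly
            target_density points/m^2 over a tile of known area; a sketch of
            the random-sampling step described above."""
            points = np.asarray(points)
            rng = np.random.default_rng(seed)
            n_keep = int(target_density * area_m2)
            if n_keep >= len(points):
                return points
            idx = rng.choice(len(points), size=n_keep, replace=False)
            return points[idx]

        # Emulating the five evaluated densities on a 1 km^2 tile:
        # for d in (0.5, 1, 2, 4, 8): subset = thin_to_density(cloud, 1e6, d)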

  19. Intraocular lens power estimation by accurate ray tracing for eyes underwent previous refractive surgeries

    NASA Astrophysics Data System (ADS)

    Yang, Que; Wang, Shanshan; Wang, Kai; Zhang, Chunyu; Zhang, Lu; Meng, Qingyu; Zhu, Qiudong

    2015-08-01

    For normal eyes without a history of ocular surgery, traditional equations for calculating intraocular lens (IOL) power, such as SRK-T, Holladay, Haigis and SRK-II, are all relatively accurate. However, for eyes that have undergone refractive surgeries such as LASIK, or eyes diagnosed with keratoconus, these equations may cause significant postoperative refractive error, which may cause poor satisfaction after cataract surgery. Although some methods have been proposed to address this problem, such as the Haigis-L equation[1], or using preoperative data (data before LASIK) to estimate the K value[2], no precise equations are available for these eyes. Here, we introduce a novel intraocular lens power estimation method based on accurate ray tracing with the optical design software ZEMAX. Instead of using a traditional regression formula, we adopt the exact measured corneal elevation distribution, central corneal thickness, anterior chamber depth, axial length, and estimated effective lens plane as the input parameters. The calculated intraocular lens power for a patient with keratoconus and for another post-LASIK patient agreed very well with their visual capacity after cataract surgery.

  20. Accurate and quantitative polarization-sensitive OCT by unbiased birefringence estimator with noise-stochastic correction

    NASA Astrophysics Data System (ADS)

    Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki

    2016-03-01

    Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, etc. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal-to-noise ratio (SNR) and the contrast of the phase retardation (or birefringence) images introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for quantitative studies. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved the image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on the SNR was not overcome, so the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurement was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we developed a maximum a-posteriori (MAP) estimator, and quantitative birefringence imaging was demonstrated [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes into account the stochastic property of the SNR. This estimator uses a probability distribution function (PDF) of the true local retardation, which is proportional to birefringence, under a specific set of measurements of the birefringence and SNR. The PDF was pre-computed by a Monte-Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and in vivo measurements of anterior and

  1. Estimating the melting point, entropy of fusion, and enthalpy of ...

    EPA Pesticide Factsheets

    The entropies of fusion, enthalpies of fusion, and melting points of organic compounds can be estimated through three models developed using the SPARC (SPARC Performs Automated Reasoning in Chemistry) platform. The entropy of fusion is modeled through a combination of interaction terms and physical descriptors. The enthalpy of fusion is modeled as a function of the entropy of fusion, boiling point, and flexibility of the molecule. The melting point model is the enthalpy of fusion divided by the entropy of fusion. These models were developed in part to improve SPARC's vapor pressure and solubility models. The models have been tested on 904 unique compounds. The entropy model has an RMS error of 12.5 J mol-1 K-1, the enthalpy model an RMS error of 4.87 kJ mol-1, and the melting point model an RMS error of 54.4°C. Published in the journal SAR and QSAR in Environmental Research.
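
    The melting point model stated above is simply the ratio of the two fitted quantities, T_m = ΔH_fus / ΔS_fus, in kelvin. A worked example (the naphthalene values are approximate literature numbers, not SPARC outputs):

        def melting_point_celsius(dH_fus_kJ_mol, dS_fus_J_mol_K):
            """Melting point from T_m = dH_fus / dS_fus, converted to Celsius."""
            return 1000.0 * dH_fus_kJ_mol / dS_fus_J_mol_K - 273.15

        # Naphthalene: dH_fus ~ 19.0 kJ/mol, dS_fus ~ 53.7 J/(mol K)
        print(round(melting_point_celsius(19.0, 53.7), 1))  # ~80.6 C (obs. ~80.3 C)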

  2. Accurate estimation of cardinal growth temperatures of Escherichia coli from optimal dynamic experiments.

    PubMed

    Van Derlinden, E; Bernaerts, K; Van Impe, J F

    2008-11-30

    Prediction of the microbial growth rate as a response to changing temperatures is an important aspect of controlling food safety and food spoilage. Accurate model predictions of the microbial evolution call for correct model structures and reliable parameter values of good statistical quality. Given the widely accepted validity of the Cardinal Temperature Model with Inflection (CTMI) [Rosso, L., Lobry, J. R., Bajard, S. and Flandrois, J. P., 1995. Convenient model to describe the combined effects of temperature and pH on microbial growth, Applied and Environmental Microbiology, 61: 610-616], this paper focuses on the accurate estimation of its four parameters (T(min), T(opt), T(max) and μ(opt)) by applying the technique of optimal experiment design for parameter estimation (OED/PE). This secondary model describes the influence of temperature on the microbial specific growth rate from the minimum to the maximum temperature for growth. Dynamic temperature profiles are optimized within two temperature regions ([15 °C, 43 °C] and [15 °C, 45 °C]), focusing on the minimization of the parameter estimation (co)variance (D-optimal design). The optimal temperature profiles are implemented in a computer-controlled bioreactor, and the CTMI parameters are identified from the resulting experimental data. Approximately equal CTMI parameter values were derived irrespective of the temperature region, except for T(max), which could only be estimated accurately from the optimal experiments within [15 °C, 45 °C]. This observation underlines the importance of selecting the upper temperature constraint for OED/PE as close as possible to the true T(max). Cardinal temperature estimates resulting from designs within [15 °C, 45 °C] correspond with values found in the literature, are characterized by a small uncertainty error and yield a good result during validation. As compared to estimates from non-optimized dynamic
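
    The CTMI cited above has a compact closed form (Rosso et al., 1995), reproduced below; the cardinal values in the example are illustrative, not the paper's estimates:

        import numpy as np

        def ctmi_growth_rate(T, T_min, T_opt, T_max, mu_opt):
            """Cardinal Temperature Model with Inflection: specific growth rate
            versus temperature, zero outside (T_min, T_max)."""
            T = np.asarray(T, dtype=float)
            num = (T - T_max) * (T - T_min) ** 2
            den = (T_opt - T_min) * ((T_opt - T_min) * (T - T_opt)
                                     - (T_opt - T_max) * (T_opt + T_min - 2.0 * T))
            return np.where((T > T_min) & (T < T_max), mu_opt * num / den, 0.0)

        # Illustrative E. coli-like cardinal values (not the paper's estimates):
        print(ctmi_growth_rate(37.0, 5.0, 41.0, 47.0, 2.0))  # ~1.8 1/h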

  3. A Machine-Independent ALGOL Procedure for Accurate Floating-Point Summation

    DTIC Science & Technology

    The paper describes an ALGOL 60 procedure which is an implementation of the floating-point summation technique described in Malcolm (1971). This...implementation is machine-independent in the sense that it will work on any computer having a floating-point number system F characterized as follows...number 0 is contained in F, but no assumption is made about its representation. All floating-point operations (e.g., addition and multiplication) are
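
    Roughly speaking, Malcolm's technique accumulates partial sums in bins indexed by exponent so that each addition is exact. As a simpler, related illustration of accurate summation (a different technique, named plainly), here is Kahan compensated summation:

        def kahan_sum(xs):
            """Kahan compensated summation: a running compensation term captures
            the low-order bits lost when each addend is folded into the sum."""
            s = 0.0
            c = 0.0              # running compensation for lost low-order bits
            for x in xs:
                y = x - c        # corrected addend
                t = s + y        # low-order bits of y may be lost here...
                c = (t - s) - y  # ...and are recovered algebraically into c
                s = t
            return s

        vals = [0.1] * 10
        print(sum(vals))        # 0.9999999999999999 (accumulated rounding)
        print(kahan_sum(vals))  # 1.0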

  4. Software cost estimation using class point metrics (CPM)

    NASA Astrophysics Data System (ADS)

    Ghode, Aditi; Periyasamy, Kasilingam

    2011-12-01

    Estimating the cost of a software project is one of the most important and crucial tasks in maintaining software reliability. Many cost estimation models have been reported to date, but most have significant drawbacks due to rapid changes in technology. For example, Source Lines Of Code (SLOC) can only be counted once the software construction is complete. The Function Point (FP) metric is deficient in handling Object Oriented Technology, as it was designed for procedural languages such as COBOL. Since Object-Oriented Programming became a popular development practice, most software companies have adopted the Unified Modeling Language (UML). The objective of this research is to develop a new cost estimation model that applies class diagrams to software cost estimation.

  5. READSCAN: a fast and scalable pathogen discovery program with accurate genome relative abundance estimation

    PubMed Central

    Rashid, Mamoon; Pain, Arnab

    2013-01-01

    Summary: READSCAN is a highly scalable parallel program to identify non-host sequences (of potential pathogen origin) and estimate their genome relative abundance in high-throughput sequence datasets. READSCAN accurately classified human and viral sequences on a 20.1 million reads simulated dataset in <27 min using a small Beowulf compute cluster with 16 nodes (Supplementary Material). Availability: http://cbrc.kaust.edu.sa/readscan Contact: arnab.pain@kaust.edu.sa or raeece.naeem@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23193222

  6. Navigable points estimation for mobile robots using binary image skeletonization

    NASA Astrophysics Data System (ADS)

    Martinez S., Fernando; Jacinto G., Edwar; Montiel A., Holman

    2017-02-01

    This paper describes the use of image skeletonization for estimating all the navigable points inside a mobile robot navigation scene. Those points are used for computing a valid navigation path with standard methods. The main idea is to find the middle and extreme points of the obstacles in the scene, taking into account the robot size, and to create a map of navigable points, in order to reduce the amount of information passed to the planning algorithm. Those points are located by means of the skeletonization of a binary image of the obstacles and the scene background, along with other digital image processing algorithms. The proposed algorithm automatically yields a variable number of navigable points per obstacle, depending on the complexity of its shape, and we show how its parameters can be adjusted to change the final number of resulting key points. The results shown here were obtained by applying different kinds of digital image processing algorithms to static scenes.
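
    A minimal sketch of the skeleton-to-waypoints idea using scikit-image; dilating obstacles by the robot radius beforehand and subsampling the skeleton are our assumptions, not necessarily the authors' exact pipeline:

        import numpy as np
        from skimage.morphology import skeletonize

        def navigable_points(free_space, step=10):
            """Candidate navigable points from a binary free-space image (True
            where the robot fits, i.e. obstacles already dilated by the robot
            radius). The skeleton traces the middle of free corridors; taking
            every `step`-th skeleton pixel yields a reduced waypoint set for
            the path planner."""
            skel = skeletonize(free_space.astype(bool))
            pts = np.argwhere(skel)   # (row, col) pixels on the skeleton
            return pts[::step]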

  7. The estimation of tumor cell percentage for molecular testing by pathologists is not accurate.

    PubMed

    Smits, Alexander J J; Kummer, J Alain; de Bruin, Peter C; Bol, Mijke; van den Tweel, Jan G; Seldenrijk, Kees A; Willems, Stefan M; Offerhaus, G Johan A; de Weger, Roel A; van Diest, Paul J; Vink, Aryan

    2014-02-01

    Molecular pathology is becoming more and more important in present day pathology. A major challenge for any molecular test is its ability to reliably detect mutations in samples consisting of mixtures of tumor cells and normal cells, especially when the tumor content is low. The minimum percentage of tumor cells required to detect genetic abnormalities is a major variable. Information on tumor cell percentage is essential for a correct interpretation of the result. In daily practice, the percentage of tumor cells is estimated by pathologists on hematoxylin and eosin (H&E)-stained slides, the reliability of which has been questioned. This study aimed to determine the reliability of estimated tumor cell percentages in tissue samples by pathologists. On 47 H&E-stained slides of lung tumors a tumor area was marked. The percentage of tumor cells within this area was estimated independently by nine pathologists, using categories of 0-5%, 6-10%, 11-20%, 21-30%, and so on, until 91-100%. As gold standard, the percentage of tumor cells was counted manually. On average, the range between the lowest and the highest estimate per sample was 6.3 categories. In 33% of estimates, the deviation from the gold standard was at least three categories. The mean absolute deviation was 2.0 categories (range between observers 1.5-3.1 categories). There was a significant difference between the observers (P<0.001). If 20% of tumor cells were considered the lower limit to detect a mutation, samples with an insufficient tumor cell percentage (<20%) would have been estimated to contain enough tumor cells in 27/72 (38%) observations, possibly causing false negative results. In conclusion, estimates of tumor cell percentages on H&E-stained slides are not accurate, which could result in misinterpretation of test results. Reliability could possibly be improved by using a training set with feedback.

  8. Toward an Accurate Estimate of the Exfoliation Energy of Black Phosphorus: A Periodic Quantum Chemical Approach.

    PubMed

    Sansone, Giuseppe; Maschio, Lorenzo; Usvyat, Denis; Schütz, Martin; Karttunen, Antti

    2016-01-07

    The black phosphorus (black-P) crystal is formed of covalently bound layers of phosphorene stacked together by weak van der Waals interactions. An experimental measurement of the exfoliation energy of black-P is not available presently, making theoretical studies the most important source of information for the optimization of phosphorene production. Here, we provide an accurate estimate of the exfoliation energy of black-P on the basis of multilevel quantum chemical calculations, which include the periodic local Møller-Plesset perturbation theory of second order, augmented by higher-order corrections, which are evaluated with finite clusters mimicking the crystal. Very similar results are also obtained by density functional theory with the D3-version of Grimme's empirical dispersion correction. Our estimate of the exfoliation energy for black-P of -151 meV/atom is substantially larger than that of graphite, suggesting the need for different strategies to generate isolated layers for these two systems.

  9. Effects of LiDAR point density, sampling size and height threshold on estimation accuracy of crop biophysical parameters.

    PubMed

    Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong

    2016-05-30

    Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R2 = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m2), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters, however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and the height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and height threshold were required to obtain accurate corn LAI estimation when compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data.

  10. Measuring the Six dof Driving Point Impedance Function and an Application to RB Inertia Property Estimation

    NASA Astrophysics Data System (ADS)

    Witter, M. C.; Brown, D. L.; Blough, J. R.

    2000-01-01

    An accurate driving point measurement is imperative in structural dynamic testing. For example, it is used to derive modal scaling, for experimental correlation of finite element models, impedance modelling and extracting the rigid body (RB) inertia properties of an object. A typical driving point measurement gives the linear force/displacement relationship at a single degree of freedom (dof), but any point on an object actually has rotational dofs as well. For example these rotational dofs must be measured in an impedance model where moments are transmitted at the connection point of two substructures. By ignoring the rotations, an inaccurate model will result. In the past, dynamic sensing technology has been limited to the accurate measurement of translational dofs. While rotational sensors do exist, their accuracy is called into question for certain applications. Rotational dofs have tended to be ignored in the measurement process. Applications, which require their use, such as impedance modelling and RB inertia property estimation, have suffered as a result. A process/sensor is being developed to accurately measure the driving point impedance function in all six dofs. The sensor as well as a calibration procedure will be presented here. In order to verify the validity of the calibration and measurement procedure, a new method for measuring the RB inertia properties of an object will be presented. This new method requires an accurate six dof driving point impedance measurement to provide accurate results. The inertia properties of an automotive brake rotor will be measured and compared with the results of a traditional pendulous swing test.

  11. A Fast Algorithm to Estimate the Deepest Points of Lakes for Regional Lake Registration.

    PubMed

    Shen, Zhanfeng; Yu, Xinju; Sheng, Yongwei; Li, Junli; Luo, Jiancheng

    2015-01-01

    When conducting image registration in the U.S. state of Alaska, it is very difficult to locate satisfactory ground control points because ice, snow, and lakes cover much of the ground. However, GCPs can be located by seeking stable points in the extracted lake data. This paper defines a process to estimate the deepest points of lakes, which serve as highly stable ground control points for registration. We estimate the deepest point of a lake by computing the center point of the largest inner circle (LIC) of the polygon representing the lake. An LIC-seeking method based on Voronoi diagrams is proposed, and an algorithm based on medial axis simplification (MAS) is introduced. The proposed design also incorporates parallel data computing. The key issue of selecting a policy for partitioning the vector data is carefully studied, and the policy that equalizes algorithm complexity across partitions is shown to be optimal for parallel vector processing. Through several experimental applications, we conclude that the presented approach accurately estimates the deepest points of Alaskan lakes and achieves high efficiency using MAS with the complexity-equalization partitioning policy.
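
    On a rasterized lake mask, the LIC center can also be approximated with a Euclidean distance transform; this compact stand-in is not the paper's vector Voronoi/MAS algorithm, but it finds the same point up to pixel resolution:

        import numpy as np
        from scipy.ndimage import distance_transform_edt

        def deepest_point_raster(lake_mask):
            """Approximate 'deepest point' (largest-inner-circle center) of a
            rasterized lake: the pixel farthest from the nearest shoreline.
            Returns the (row, col) center and the LIC radius in pixels."""
            dist = distance_transform_edt(lake_mask)  # distance to nearest zero pixel
            row, col = np.unravel_index(np.argmax(dist), dist.shape)
            return (row, col), dist[row, col]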

  12. Applications of operational calculus: equations for the five-point rectangle and robust center point estimators

    SciTech Connect

    Silver, Gary L

    2009-01-01

    Equations for interpolating five data points in a rectangular array are seldom encountered in textbooks. This paper describes a new method that renders polynomial and exponential equations for the design. Operational center point estimators are often more resistant to the effects of an outlying datum than the mean.

  13. Lamb mode selection for accurate wall loss estimation via guided wave tomography

    SciTech Connect

    Huthwaite, P.; Ribichini, R.; Lowe, M. J. S.; Cawley, P.

    2014-02-18

    Guided wave tomography offers a method to accurately quantify wall thickness losses in pipes and vessels caused by corrosion. This is achieved using ultrasonic waves transmitted over distances of approximately 1-2 m, which are measured by an array of transducers and then used to reconstruct a map of wall thickness throughout the inspected region. To achieve accurate estimations of remnant wall thickness, it is vital that a suitable Lamb mode is chosen. This paper presents a detailed evaluation of the fundamental modes, S₀ and A₀, which are of primary interest in guided wave tomography thickness estimates since the higher-order modes do not exist at all thicknesses; their performance is compared using both numerical and experimental data while considering a range of challenging phenomena. The sensitivity of A₀ to thickness variations was shown to be superior to that of S₀; however, the attenuation of A₀ in the presence of liquid loading was much higher than that of S₀. A₀ was also less sensitive than S₀ to the presence of coatings on the surface.

  14. Magnetic gaps in organic tri-radicals: From a simple model to accurate estimates

    NASA Astrophysics Data System (ADS)

    Barone, Vincenzo; Cacelli, Ivo; Ferretti, Alessandro; Prampolini, Giacomo

    2017-03-01

    The calculation of the energy gap between the magnetic states of organic poly-radicals still represents a challenging playground for quantum chemistry, and high-level techniques are required to obtain accurate estimates. On these grounds, the aim of the present study is twofold. On the one hand, it shows that, thanks to recent algorithmic and technical improvements, we are able to compute reliable quantum mechanical results for systems of current fundamental and technological interest. On the other hand, proper parameterization of a simple Hubbard Hamiltonian allows for a sound rationalization of magnetic gaps in terms of basic physical effects, unraveling the role played by electron delocalization, Coulomb repulsion, and effective exchange in tuning the magnetic character of the ground state. As case studies, we have chosen three prototypical organic tri-radicals, namely, 1,3,5-trimethylenebenzene, 1,3,5-tridehydrobenzene, and 1,2,3-tridehydrobenzene, which differ in either geometric or electronic structure. After discussing the differences among the three species and their consequences on the magnetic properties in terms of the simple model mentioned above, accurate and reliable values for the energy gap between the lowest quartet and doublet states are computed by means of the so-called difference dedicated configuration interaction (DDCI) technique, and the final results are discussed and compared to both available experimental and computational estimates.

  15. Accurate State Estimation and Tracking of a Non-Cooperative Target Vehicle

    NASA Technical Reports Server (NTRS)

    Thienel, Julie K.; Sanner, Robert M.

    2006-01-01

    Autonomous space rendezvous scenarios require knowledge of the target vehicle state in order to safely dock with the chaser vehicle. Ideally, the target vehicle state information is derived from telemetered data, or with the use of known tracking points on the target vehicle. However, if the target vehicle is non-cooperative and does not have the ability to maintain attitude control, or transmit attitude knowledge, the docking becomes more challenging. This work presents a nonlinear approach for estimating the body rates of a non-cooperative target vehicle, and coupling this estimation to a tracking control scheme. The approach is tested with the robotic servicing mission concept for the Hubble Space Telescope (HST). Such a mission would not only require estimates of the HST attitude and rates, but also precision control to achieve the desired rate and maintain the orientation to successfully dock with HST.

  16. Do modelled or satellite-based estimates of surface solar irradiance accurately describe its temporal variability?

    NASA Astrophysics Data System (ADS)

    Bengulescu, Marc; Blanc, Philippe; Boilley, Alexandre; Wald, Lucien

    2017-02-01

    This study investigates the characteristic time-scales of variability found in long-term time-series of daily means of estimates of surface solar irradiance (SSI). The study is performed at various levels to better understand the causes of variability in the SSI. First, the variability of the solar irradiance at the top of the atmosphere is scrutinized. Then, estimates of the SSI in cloud-free conditions as provided by the McClear model are dealt with, in order to reveal the influence of the clear atmosphere (aerosols, water vapour, etc.). Lastly, the role of clouds in variability is inferred from the analysis of in-situ measurements. A description of how the atmosphere affects SSI variability is thus obtained on a time-scale basis. The analysis is also performed with estimates of the SSI provided by the satellite-derived HelioClim-3 database and by two numerical weather re-analyses: ERA-Interim and MERRA2. It is found that HelioClim-3 estimates render an accurate picture of the variability found in ground measurements, not only globally, but also with respect to individual characteristic time-scales. On the contrary, the variability found in the re-analyses correlates poorly with that of ground measurements at all time-scales.

  17. Removing the thermal component from heart rate provides an accurate VO2 estimation in forest work.

    PubMed

    Dubé, Philippe-Antoine; Imbeau, Daniel; Dubeau, Denise; Lebel, Luc; Kolus, Ahmet

    2016-05-01

    Heart rate (HR) was monitored continuously in 41 forest workers performing brushcutting or tree planting work. 10-min seated rest periods were imposed during the workday to estimate the HR thermal component (ΔHRT) per Vogt et al. (1970, 1973). VO2 was measured using a portable gas analyzer during a morning submaximal step-test conducted at the work site, during a work bout over the course of the day (range: 9-74 min), and during an ensuing 10-min rest pause taken at the worksite. The VO2 estimates from measured HR and from corrected HR (thermal component removed) were compared to the VO2 measured during work and rest. Varied levels of the HR thermal component (ΔHRTavg range: 0-38 bpm) were observed, originating from a wide range of ambient thermal conditions, thermal clothing insulation worn, and physical load exerted during work. Using raw HR significantly overestimated measured work VO2 by 30% on average (range: 1%-64%), and 74% of the VO2 prediction error variance was explained by the HR thermal component. VO2 estimated from corrected HR was not statistically different from measured VO2. Work VO2 can thus be estimated accurately in the presence of thermal stress using Vogt et al.'s method, which can be implemented easily by the practitioner with inexpensive instruments.
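
    The correction logic reduces to subtracting the thermal component from the measured HR before applying the worker's step-test calibration. A sketch assuming a linear individual HR-VO2 relation (function and parameter names are ours):

        import numpy as np

        def calibrate_hr_vo2(hr_steptest, vo2_steptest):
            """Fit the individual linear HR -> VO2 relation from the morning
            submaximal step test (illustrative; one line per worker)."""
            slope, intercept = np.polyfit(hr_steptest, vo2_steptest, 1)
            return slope, intercept

        def estimate_work_vo2(hr_work, delta_hr_thermal, slope, intercept):
            """Estimate work VO2 from corrected HR = measured HR - dHR_T,
            following the correction described above."""
            hr_corrected = np.asarray(hr_work, dtype=float) - delta_hr_thermal
            return slope * hr_corrected + intercept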

  18. Accurate Estimation of the Intrinsic Dimension Using Graph Distances: Unraveling the Geometric Complexity of Datasets

    PubMed Central

    Granata, Daniele; Carnevale, Vincenzo

    2016-01-01

    The collective behavior of a large number of degrees of freedom can be often described by a handful of variables. This observation justifies the use of dimensionality reduction approaches to model complex systems and motivates the search for a small set of relevant “collective” variables. Here, we analyze this issue by focusing on the optimal number of variables needed to capture the salient features of a generic dataset and develop a novel estimator for the intrinsic dimension (ID). By approximating geodesics with minimum distance paths on a graph, we analyze the distribution of pairwise distances around the maximum and exploit its dependency on the dimensionality to obtain an ID estimate. We show that the estimator does not depend on the shape of the intrinsic manifold and is highly accurate, even for exceedingly small sample sizes. We apply the method to several relevant datasets from image recognition databases and protein multiple sequence alignments and discuss possible interpretations for the estimated dimension in light of the correlations among input variables and of the information content of the dataset. PMID:27510265

  19. Accurate Estimation of the Intrinsic Dimension Using Graph Distances: Unraveling the Geometric Complexity of Datasets

    NASA Astrophysics Data System (ADS)

    Granata, Daniele; Carnevale, Vincenzo

    2016-08-01

    The collective behavior of a large number of degrees of freedom can be often described by a handful of variables. This observation justifies the use of dimensionality reduction approaches to model complex systems and motivates the search for a small set of relevant “collective” variables. Here, we analyze this issue by focusing on the optimal number of variables needed to capture the salient features of a generic dataset and develop a novel estimator for the intrinsic dimension (ID). By approximating geodesics with minimum distance paths on a graph, we analyze the distribution of pairwise distances around the maximum and exploit its dependency on the dimensionality to obtain an ID estimate. We show that the estimator does not depend on the shape of the intrinsic manifold and is highly accurate, even for exceedingly small sample sizes. We apply the method to several relevant datasets from image recognition databases and protein multiple sequence alignments and discuss possible interpretations for the estimated dimension in light of the correlations among input variables and of the information content of the dataset.

  20. Grid-point requirements for large eddy simulation: Chapman's estimates revisited

    NASA Astrophysics Data System (ADS)

    Choi, Haecheon; Moin, Parviz

    2012-01-01

    Resolution requirements for large eddy simulation (LES), estimated by Chapman [AIAA J. 17, 1293 (1979)], are modified using accurate formulae for high Reynolds number boundary layer flow. The new estimates indicate that the number of grid points N required for wall-modeled LES scales as N ~ Re_Lx, whereas a wall-resolving LES requires N ~ Re_Lx^(13/7), where Lx is the flat-plate length in the streamwise direction. Direct numerical simulation, resolving the Kolmogorov length scale, requires N ~ Re_Lx^(37/14).
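
    These scalings translate directly into relative grid budgets; below is a small calculator under the quoted exponents (prefactors are flow-dependent and omitted here, so only ratios are meaningful):

        def les_grid_points(re_lx, mode="wall-modeled"):
            """Scaling of required grid-point count N with Re_Lx, per the revised
            estimates quoted above (relative values only; prefactors omitted)."""
            exponents = {"wall-modeled": 1.0,         # N ~ Re_Lx
                         "wall-resolving": 13.0 / 7,  # N ~ Re_Lx^(13/7)
                         "dns": 37.0 / 14}            # N ~ Re_Lx^(37/14)
            return re_lx ** exponents[mode]

        # At Re_Lx = 1e7, wall-resolving LES needs ~1e6 times more points
        # than wall-modeled LES under these scalings:
        print(les_grid_points(1e7, "wall-resolving") / les_grid_points(1e7))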

  1. MIDAS robust trend estimator for accurate GPS station velocities without step detection.

    PubMed

    Blewitt, Geoffrey; Kreemer, Corné; Hammond, William C; Gazeaux, Julien

    2016-03-01

    Automatic estimation of velocities from GPS coordinate time series is becoming required to cope with the exponentially increasing flood of available data, but problems detectable to the human eye are often overlooked. This motivates us to find an automatic and accurate estimator of trend that is resistant to common problems such as step discontinuities, outliers, seasonality, skewness, and heteroscedasticity. Developed here, Median Interannual Difference Adjusted for Skewness (MIDAS) is a variant of the Theil-Sen median trend estimator, for which the ordinary version is the median of slopes vij = (xj-xi)/(tj-ti) computed between all data pairs i > j. For normally distributed data, Theil-Sen and least squares trend estimates are statistically identical, but unlike least squares, Theil-Sen is resistant to undetected data problems. To mitigate both seasonality and step discontinuities, MIDAS selects data pairs separated by 1 year. This condition is relaxed for time series with gaps so that all data are used. Slopes from data pairs spanning a step function produce one-sided outliers that can bias the median. To reduce bias, MIDAS removes outliers and recomputes the median. MIDAS also computes a robust and realistic estimate of trend uncertainty. Statistical tests using GPS data in the rigid North American plate interior show ±0.23 mm/yr root-mean-square (RMS) accuracy in horizontal velocity. In blind tests using synthetic data, MIDAS velocities have an RMS accuracy of ±0.33 mm/yr horizontal, ±1.1 mm/yr up, with a 5th percentile range smaller than all 20 automatic estimators tested. Considering its general nature, MIDAS has the potential for broader application in the geosciences.
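
    The recipe is simple enough to sketch. Below is a simplified, illustrative implementation of the one-year-pair Theil-Sen variant described above; the pair-matching tolerance and the two-sigma (MAD-based) trimming rule are our simplifications, and the published MIDAS additionally handles gapped series and derives a robust trend uncertainty:

        import numpy as np

        def midas_like_trend(t, x, pair_sep=1.0, tol=0.02):
            """Simplified MIDAS-style trend (units of x per year; t in years).
            Slopes are formed only from data pairs separated by ~pair_sep years,
            which cancels seasonal cycles and avoids spanning most steps; the
            median slope is trimmed of outliers and re-taken."""
            t, x = np.asarray(t, dtype=float), np.asarray(x, dtype=float)
            slopes = []
            for i in range(len(t)):
                j = int(np.argmin(np.abs(t - (t[i] + pair_sep))))
                if j != i and abs(t[j] - t[i] - pair_sep) < tol:
                    slopes.append((x[j] - x[i]) / (t[j] - t[i]))
            slopes = np.asarray(slopes)
            med = np.median(slopes)
            sigma = 1.4826 * np.median(np.abs(slopes - med))  # robust std via MAD
            if sigma == 0.0:
                return float(med)
            keep = np.abs(slopes - med) < 2.0 * sigma         # trim outlier slopes
            return float(np.median(slopes[keep]))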

  2. Methods for accurate estimation of net discharge in a tidal channel

    USGS Publications Warehouse

    Simpson, M.R.; Bland, R.

    2000-01-01

    Accurate estimates of net residual discharge in tidally affected rivers and estuaries are possible because of recently developed ultrasonic discharge measurement techniques. Previous discharge estimates using conventional mechanical current meters and methods based on stage/discharge relations or water slope measurements often yielded errors that were as great as or greater than the computed residual discharge. Ultrasonic measurement methods consist of: 1) the use of ultrasonic instruments for the measurement of a representative 'index' velocity used for in situ estimation of mean water velocity and 2) the use of the acoustic Doppler current discharge measurement system to calibrate the index velocity measurement data. Methods used to calibrate (rate) the index velocity to the channel velocity measured using the Acoustic Doppler Current Profiler are the most critical factors affecting the accuracy of net discharge estimation. The index velocity first must be related to mean channel velocity and then used to calculate instantaneous channel discharge. Finally, discharge is low-pass filtered to remove the effects of the tides. An ultrasonic velocity meter discharge-measurement site in a tidally affected region of the Sacramento-San Joaquin Rivers was used to study the accuracy of the index velocity calibration procedure. Calibration data consisting of ultrasonic velocity meter index velocity and concurrent acoustic Doppler discharge measurement data were collected during three time periods. Two sets of data were collected during a spring tide (monthly maximum tidal current) and one set was collected during a neap tide (monthly minimum tidal current). The relative magnitudes of instrumental errors, acoustic Doppler discharge measurement errors, and calibration errors were evaluated. Calibration error was found to be the most significant source of error in estimating net discharge. Using a comprehensive calibration method, net discharge estimates developed from the three

  3. MIDAS robust trend estimator for accurate GPS station velocities without step detection

    NASA Astrophysics Data System (ADS)

    Blewitt, Geoffrey; Kreemer, Corné; Hammond, William C.; Gazeaux, Julien

    2016-03-01

    Automatic estimation of velocities from GPS coordinate time series is becoming required to cope with the exponentially increasing flood of available data, but problems detectable to the human eye are often overlooked. This motivates us to find an automatic and accurate estimator of trend that is resistant to common problems such as step discontinuities, outliers, seasonality, skewness, and heteroscedasticity. Developed here, Median Interannual Difference Adjusted for Skewness (MIDAS) is a variant of the Theil-Sen median trend estimator, for which the ordinary version is the median of slopes vij = (xj-xi)/(tj-ti) computed between all data pairs i > j. For normally distributed data, Theil-Sen and least squares trend estimates are statistically identical, but unlike least squares, Theil-Sen is resistant to undetected data problems. To mitigate both seasonality and step discontinuities, MIDAS selects data pairs separated by 1 year. This condition is relaxed for time series with gaps so that all data are used. Slopes from data pairs spanning a step function produce one-sided outliers that can bias the median. To reduce bias, MIDAS removes outliers and recomputes the median. MIDAS also computes a robust and realistic estimate of trend uncertainty. Statistical tests using GPS data in the rigid North American plate interior show ±0.23 mm/yr root-mean-square (RMS) accuracy in horizontal velocity. In blind tests using synthetic data, MIDAS velocities have an RMS accuracy of ±0.33 mm/yr horizontal, ±1.1 mm/yr up, with a 5th percentile range smaller than all 20 automatic estimators tested. Considering its general nature, MIDAS has the potential for broader application in the geosciences.

  4. MIDAS robust trend estimator for accurate GPS station velocities without step detection

    PubMed Central

    Kreemer, Corné; Hammond, William C.; Gazeaux, Julien

    2016-01-01

    Automatic estimation of velocities from GPS coordinate time series is becoming required to cope with the exponentially increasing flood of available data, but problems detectable to the human eye are often overlooked. This motivates us to find an automatic and accurate estimator of trend that is resistant to common problems such as step discontinuities, outliers, seasonality, skewness, and heteroscedasticity. Developed here, Median Interannual Difference Adjusted for Skewness (MIDAS) is a variant of the Theil-Sen median trend estimator, for which the ordinary version is the median of slopes vij = (xj-xi)/(tj-ti) computed between all data pairs i > j. For normally distributed data, Theil-Sen and least squares trend estimates are statistically identical, but unlike least squares, Theil-Sen is resistant to undetected data problems. To mitigate both seasonality and step discontinuities, MIDAS selects data pairs separated by 1 year. This condition is relaxed for time series with gaps so that all data are used. Slopes from data pairs spanning a step function produce one-sided outliers that can bias the median. To reduce bias, MIDAS removes outliers and recomputes the median. MIDAS also computes a robust and realistic estimate of trend uncertainty. Statistical tests using GPS data in the rigid North American plate interior show ±0.23 mm/yr root-mean-square (RMS) accuracy in horizontal velocity. In blind tests using synthetic data, MIDAS velocities have an RMS accuracy of ±0.33 mm/yr horizontal, ±1.1 mm/yr up, with a 5th percentile range smaller than all 20 automatic estimators tested. Considering its general nature, MIDAS has the potential for broader application in the geosciences. PMID:27668140

  5. Ratio-based estimators for a change point in persistence.

    PubMed

    Halunga, Andreea G; Osborn, Denise R

    2012-11-01

    We study estimation of the date of change in persistence, from I(0) to I(1) or vice versa. Contrary to statements in the original papers, our analytical results establish that the ratio-based break point estimators of Kim [Kim, J.Y., 2000. Detection of change in persistence of a linear time series. Journal of Econometrics 95, 97-116], Kim et al. [Kim, J.Y., Belaire-Franch, J., Badillo Amador, R., 2002. Corrigendum to "Detection of change in persistence of a linear time series". Journal of Econometrics 109, 389-392] and Busetti and Taylor [Busetti, F., Taylor, A.M.R., 2004. Tests of stationarity against a change in persistence. Journal of Econometrics 123, 33-66] are inconsistent when a mean (or other deterministic component) is estimated for the process. In such cases, the estimators converge to random variables with upper bound given by the true break date when persistence changes from I(0) to I(1). A Monte Carlo study confirms the large sample downward bias and also finds substantial biases in moderate sized samples, partly due to properties at the end points of the search interval.

  6. Optimal point process filtering and estimation of the coalescent process.

    PubMed

    Parag, Kris V; Pybus, Oliver G

    2017-04-03

    The coalescent process is a widely used approach for inferring the demographic history of a population from samples of its genetic diversity. Several parametric and non-parametric coalescent inference methods, involving Markov chain Monte Carlo, Gaussian processes, and other algorithms, already exist. However, these techniques are not always easy to adapt and apply, thus creating a need for alternative methodologies. We introduce the Bayesian Snyder filter as an easily implementable and flexible minimum mean square error estimator for parametric demographic functions on fixed genealogies. By reinterpreting the coalescent as a self-exciting Markov process, we show that the Snyder filter can be applied to both isochronously and heterochronously sampled datasets. We analytically solve the filter equations for the constant population size Kingman coalescent, derive expressions for its mean squared estimation error, and estimate its robustness to prior distribution specification. For populations with deterministically time-varying size we numerically solve the Snyder equations, and test this solution on common demographic models. We find that the Snyder filter accurately recovers the true demographic history for these models. We also apply the filter to a well-studied dataset of hepatitis C virus sequences and show that the filter compares well to a popular phylodynamic inference method. The Snyder filter is an exact (given discretised priors, it does not approximate the posterior) and direct Bayesian estimation method that has the potential to become a useful alternative tool for coalescent inference.
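
    As background, the constant-size Kingman coalescent that the authors solve analytically is easy to simulate: while k lineages remain, the waiting time to the next coalescence is exponential with rate k(k-1)/(2N). A minimal sketch (the process the filter operates on, not the Snyder filter itself):

        import numpy as np

        def kingman_times(n_samples, N, seed=0):
            # While k lineages remain, the time to the next coalescence
            # is Exp(rate = k*(k-1)/(2N)); constant population size N.
            rng = np.random.default_rng(seed)
            t, times = 0.0, []
            for k in range(n_samples, 1, -1):
                t += rng.exponential(2.0 * N / (k * (k - 1)))
                times.append(t)
            return np.array(times)

        events = kingman_times(10, 1000.0)   # genealogy of 10 samples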

  7. Rapid Bayesian point source inversion using pattern recognition --- bridging the gap between regional scaling relations and accurate physical modelling

    NASA Astrophysics Data System (ADS)

    Valentine, A. P.; Kaeufl, P.; De Wit, R. W. L.; Trampert, J.

    2014-12-01

    Obtaining knowledge about source parameters in (near) real-time during or shortly after an earthquake is essential for mitigating damage and directing resources in the aftermath of the event. Therefore, a variety of real-time source-inversion algorithms have been developed over recent decades. This has been driven by the ever-growing availability of dense seismograph networks in many seismogenic areas of the world and the significant advances in real-time telemetry. By definition, these algorithms rely on short time-windows of sparse, local and regional observations, resulting in source estimates that are highly sensitive to observational errors, noise and missing data. In order to obtain estimates more rapidly, many algorithms are either entirely based on empirical scaling relations or make simplifying assumptions about the Earth's structure, which can in turn lead to biased results. It is therefore essential that realistic uncertainty bounds are estimated along with the parameters. A natural means of propagating probabilistic information on source parameters through the entire processing chain from first observations to potential end users and decision makers is provided by the Bayesian formalism.We present a novel method based on pattern recognition allowing us to incorporate highly accurate physical modelling into an uncertainty-aware real-time inversion algorithm. The algorithm is based on a pre-computed Green's functions database, containing a large set of source-receiver paths in a highly heterogeneous crustal model. Unlike similar methods, which often employ a grid search, we use a supervised learning algorithm to relate synthetic waveforms to point source parameters. This training procedure has to be performed only once and leads to a representation of the posterior probability density function p(m|d) --- the distribution of source parameters m given observations d --- which can be evaluated quickly for new data.Owing to the flexibility of the pattern
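
    Stripped to essentials, the training step described above fits a supervised regressor from synthetic waveforms to source parameters. The sketch below uses scikit-learn and an invented stand-in for the Green's functions database; note that the authors learn a representation of the full posterior p(m|d), not the point estimates produced here:

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        # Placeholder database: source parameters m and the synthetic
        # "waveform" features they generate (shapes illustrative only).
        m_train = rng.uniform(-1.0, 1.0, size=(5000, 6))
        X_train = np.tanh(m_train @ rng.normal(size=(6, 50)))

        model = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=300)
        model.fit(X_train, m_train)               # one-off training step
        m_hat = model.predict(X_train[:1])        # near-instant at event time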

  8. Accurate relative location estimates for the North Korean nuclear tests using empirical slowness corrections

    NASA Astrophysics Data System (ADS)

    Gibbons, S. J.; Pabian, F.; Näsholm, S. P.; Kværna, T.; Mykkeltveit, S.

    2017-01-01

    velocity gradients reduce the residuals, the relative location uncertainties and the sensitivity to the combination of stations used. The traveltime gradients appear to be overestimated for the regional phases, and teleseismic relative location estimates are likely to be more accurate despite an apparent lower precision. Calibrations for regional phases are essential given that smaller magnitude events are likely not to be recorded teleseismically. We discuss the implications for the absolute event locations. Placing the 2006 event under a local maximum of overburden at 41.293°N, 129.105°E would imply a location of 41.299°N, 129.075°E for the January 2016 event, providing almost optimal overburden for the later four events.

  9. Accurate Relative Location Estimates for the North Korean Nuclear Tests Using Empirical Slowness Corrections

    NASA Astrophysics Data System (ADS)

    Gibbons, S. J.; Pabian, F.; Näsholm, S. P.; Kværna, T.; Mykkeltveit, S.

    2016-10-01

    modified velocity gradients reduce the residuals, the relative location uncertainties, and the sensitivity to the combination of stations used. The traveltime gradients appear to be overestimated for the regional phases, and teleseismic relative location estimates are likely to be more accurate despite an apparent lower precision. Calibrations for regional phases are essential given that smaller magnitude events are likely not to be recorded teleseismically. We discuss the implications for the absolute event locations. Placing the 2006 event under a local maximum of overburden at 41.293°N, 129.105°E would imply a location of 41.299°N, 129.075°E for the January 2016 event, providing almost optimal overburden for the later four events.

  10. Motion estimation using point cluster method and Kalman filter.

    PubMed

    Senesh, M; Wolf, A

    2009-05-01

    The most frequently used method in three-dimensional human gait analysis involves placing markers on the skin of the analyzed segment. This introduces a significant artifact, which strongly influences estimates of bone position and orientation and of joint kinematics. In this study, we tested and evaluated the effect of adding a Kalman filter procedure to the previously reported point cluster technique (PCT) in the estimation of rigid body motion. We demonstrated the procedures by motion analysis of a compound planar pendulum from indirect opto-electronic measurements of markers attached to an elastic appendage that is restrained to slide along the rigid body long axis. The elastic frequency is close to the pendulum frequency, as in the biomechanical problem, where the soft tissue frequency content is similar to the actual movement of the bones. Comparison of the real pendulum angle to that obtained by several estimation procedures--PCT, Kalman filter followed by PCT, and low pass filter followed by PCT--enables evaluation of the accuracy of the procedures. When comparing the maximal amplitude, no effect was noted by adding the Kalman filter; however, a closer look at the signal revealed that the estimated angle based only on the PCT method was very noisy, with fluctuations, while the estimated angle based on the Kalman filter followed by the PCT was a smooth signal. It was also noted that the instantaneous frequencies obtained from the estimated angle based on the PCT method are more dispersed than those obtained from the estimated angle based on the Kalman filter followed by the PCT method. Addition of a Kalman filter to the PCT method in the estimation procedure of rigid body motion results in a smoother signal that better represents the real motion, with less signal distortion than when using a digital low pass filter. Furthermore, it can be concluded that adding a Kalman filter to the PCT procedure substantially reduces the dispersion of the maximal and minimal
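
    For readers unfamiliar with the filtering step, a minimal constant-velocity Kalman filter over a single marker coordinate looks like the sketch below; the study's actual filter design and its coupling to the PCT are more involved:

        import numpy as np

        def kalman_cv(z, dt, q=1e-3, r=1e-2):
            # Constant-velocity model; z: noisy positions of one marker.
            F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
            H = np.array([[1.0, 0.0]])              # observe position only
            Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                              [dt**2 / 2, dt]])     # process noise
            R = np.array([[r]])                     # measurement noise
            x, P, out = np.array([z[0], 0.0]), np.eye(2), []
            for zk in z:
                x = F @ x                           # predict
                P = F @ P @ F.T + Q
                K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
                x = x + K @ (np.array([zk]) - H @ x)  # update
                P = (np.eye(2) - K @ H) @ P
                out.append(x[0])
            return np.array(out)                    # filtered positions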

  11. Painfree and accurate Bayesian estimation of psychometric functions for (potentially) overdispersed data.

    PubMed

    Schütt, Heiko H; Harmeling, Stefan; Macke, Jakob H; Wichmann, Felix A

    2016-05-01

    The psychometric function describes how an experimental variable, such as stimulus strength, influences the behaviour of an observer. Estimation of psychometric functions from experimental data plays a central role in fields such as psychophysics, experimental psychology and in the behavioural neurosciences. Experimental data may exhibit substantial overdispersion, which may result from non-stationarity in the behaviour of observers. Here we extend the standard binomial model which is typically used for psychometric function estimation to a beta-binomial model. We show that the use of the beta-binomial model makes it possible to determine accurate credible intervals even in data which exhibit substantial overdispersion. This goes beyond classical measures for overdispersion (goodness-of-fit), which can detect overdispersion but provide no method for correct inference on overdispersed data. We use Bayesian inference methods for estimating the posterior distribution of the parameters of the psychometric function. Unlike previous Bayesian psychometric inference methods, our software implementation, psignifit 4, performs numerical integration of the posterior within automatically determined bounds. This avoids the use of Markov chain Monte Carlo (MCMC) methods typically requiring expert knowledge. Extensive numerical tests show the validity of the approach and we discuss implications of overdispersion for experimental design. A comprehensive MATLAB toolbox implementing the method is freely available; a python implementation providing the basic capabilities is also available.
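
    The key modelling move, from a binomial to a beta-binomial likelihood, can be sketched as below. The sigmoid parameterization and the mapping from an overdispersion parameter eta to the beta shape parameters are common choices, assumed here, and are not necessarily the exact psignifit 4 conventions:

        import numpy as np
        from scipy.stats import betabinom

        def psychometric(x, m, w, gamma=0.5, lam=0.01):
            # Logistic sigmoid scaled so that w is the 5%-95% width,
            # with guess rate gamma and lapse rate lam.
            z = 2.0 * np.log(1.0 / 0.05 - 1.0) * (x - m) / w
            return gamma + (1.0 - gamma - lam) / (1.0 + np.exp(-z))

        def neg_loglik(params, x, k, n):
            # Beta-binomial likelihood: psi(x) is the mean success
            # probability; eta in (0, 1) controls overdispersion via
            # nu = 1/eta**2 - 1 (one common mapping, assumed here).
            m, w, eta = params
            psi = psychometric(x, m, w)
            nu = 1.0 / eta**2 - 1.0
            return -betabinom.logpmf(k, n, psi * nu, (1.0 - psi) * nu).sum()

    Minimizing neg_loglik over (m, w, eta) gives a maximum-likelihood fit; the paper instead integrates the full posterior numerically.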

  12. Accurate estimation of the RMS emittance from single current amplifier data

    SciTech Connect

    Stockli, Martin P.; Welton, R.F.; Keller, R.; Letchford, A.P.; Thomae, R.W.; Thomason, J.W.G.

    2002-05-31

    This paper presents the SCUBEEx rms emittance analysis, a self-consistent, unbiased elliptical exclusion method, which combines traditional data-reduction methods with statistical methods to obtain accurate estimates for the rms emittance. Rather than considering individual data, the method tracks the average current density outside a well-selected, variable boundary to separate the measured beam halo from the background. The average outside current density is assumed to be part of a uniform background and not part of the particle beam. Therefore the average outside current is subtracted from the data before evaluating the rms emittance within the boundary. As the boundary area is increased, the average outside current and the inside rms emittance form plateaus when all data containing part of the particle beam are inside the boundary. These plateaus mark the smallest acceptable exclusion boundary and provide unbiased estimates for the average background and the rms emittance. Small, trendless variations within the plateaus allow for determining the uncertainties of the estimates caused by variations of the measured background outside the smallest acceptable exclusion boundary. The robustness of the method is established with complementary variations of the exclusion boundary. This paper presents a detailed comparison between traditional data reduction methods and SCUBEEx by analyzing two complementary sets of emittance data obtained with a Lawrence Berkeley National Laboratory H⁻ ion source and an ISIS H⁻ ion source.
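
    A single SCUBEEx-style evaluation for one fixed exclusion ellipse might look like the sketch below; the full method repeats this while growing the boundary and looks for plateaus in both returned quantities. Array names and grid layout are assumptions:

        import numpy as np

        def scubeex_step(density, X, XP, inside):
            # density: measured current density on an (x, x') grid;
            # inside: boolean mask of the trial exclusion ellipse.
            bg = density[~inside].mean()          # assumed uniform background
            d = np.where(inside, density - bg, 0.0)
            w = d.sum()
            xm, xpm = (d * X).sum() / w, (d * XP).sum() / w
            x2 = (d * (X - xm) ** 2).sum() / w
            xp2 = (d * (XP - xpm) ** 2).sum() / w
            xxp = (d * (X - xm) * (XP - xpm)).sum() / w
            return bg, np.sqrt(x2 * xp2 - xxp ** 2)   # background, rms emittance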

  13. Accurate estimation of human body orientation from RGB-D sensors.

    PubMed

    Liu, Wu; Zhang, Yongdong; Tang, Sheng; Tang, Jinhui; Hong, Richang; Li, Jintao

    2013-10-01

    Accurate estimation of human body orientation can significantly enhance the analysis of human behavior, which is a fundamental task in the field of computer vision. However, existing orientation estimation methods cannot handle the various body poses and appearances. In this paper, we propose an innovative RGB-D-based orientation estimation method to address these challenges. By utilizing the RGB-D information, which can be acquired in real time by RGB-D sensors, our method is robust to cluttered environments, illumination changes and partial occlusions. Specifically, efficient static and motion cue extraction methods are proposed based on the RGB-D superpixels to reduce the noise of depth data. Since it is hard to discriminate all 360° of orientation using static cues or motion cues independently, we propose to utilize a dynamic Bayesian network system (DBNS) to effectively employ the complementary nature of both static and motion cues. In order to verify our proposed method, we build an RGB-D-based human body orientation dataset that covers a wide diversity of poses and appearances. Our intensive experimental evaluations on this dataset demonstrate the effectiveness and efficiency of the proposed method.

  14. Multiple candidates and multiple constraints based accurate depth estimation for multi-view stereo

    NASA Astrophysics Data System (ADS)

    Zhang, Chao; Zhou, Fugen; Xue, Bindang

    2017-02-01

    In this paper, we propose a depth estimation method for multi-view image sequences. To enhance the accuracy of dense matching and reduce the inaccurate matches produced by imprecise feature descriptions, we select multiple matching points to build candidate matching sets. We then compute an optimal depth from a candidate matching set which satisfies multiple constraints (epipolar constraint, similarity constraint and depth consistency constraint). To further increase the accuracy of depth estimation, the depth consistency constraint of neighboring pixels is used to filter out inaccurate matches. On this basis, in order to obtain a more complete depth map, depth diffusion is performed using the depth consistency constraint of neighboring pixels. Through experiments on the benchmark datasets for multiple view stereo, we demonstrate the superiority of the proposed method over state-of-the-art methods in terms of accuracy.

  15. Two-wavelength interferometry: extended range and accurate optical path difference analytical estimator.

    PubMed

    Houairi, Kamel; Cassaing, Frédéric

    2009-12-01

    Two-wavelength interferometry combines measurement at two wavelengths λ1 and λ2 in order to increase the unambiguous range (UR) for the measurement of an optical path difference. With the usual algorithm, the UR is equal to the synthetic wavelength Λ = λ1λ2/|λ1 - λ2|, and the accuracy is a fraction of Λ. We propose here a new analytical algorithm based on arithmetic properties, allowing estimation of the absolute fringe order of interference in a noniterative way. This algorithm has nice properties compared with the usual algorithm: it is at least as accurate as the most accurate measurement at one wavelength, whereas the UR is extended to several times the synthetic wavelength. The analysis presented shows how the actual UR depends on the wavelengths and different sources of error. The simulations presented are confirmed by experimental results, showing that the new algorithm has enabled us to reach a UR of 17.3 μm, much larger than the synthetic wavelength, which is only Λ = 2.2 μm. Applications to metrology and fringe tracking are discussed.
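
    The trade-off the new algorithm improves on is visible in the synthetic-wavelength formula itself: Λ grows as the two wavelengths approach each other, which is what limits the usual algorithm's range. A two-line illustration in arbitrary units (not the authors' wavelengths):

        def synthetic_wavelength(lam1, lam2):
            # Lambda = lam1*lam2/|lam1 - lam2|; units follow the inputs.
            return lam1 * lam2 / abs(lam1 - lam2)

        print(synthetic_wavelength(1.5, 1.6))   # 24.0: close pair, long UR
        print(synthetic_wavelength(1.0, 2.0))   # 2.0: distant pair, short UR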

  16. Incentives Increase Participation in Mass Dog Rabies Vaccination Clinics and Methods of Coverage Estimation Are Assessed to Be Accurate

    PubMed Central

    Steinmetz, Melissa; Czupryna, Anna; Bigambo, Machunde; Mzimbiri, Imam; Powell, George; Gwakisa, Paul

    2015-01-01

    In this study we show that incentives (dog collars and owner wristbands) are effective at increasing owner participation in mass dog rabies vaccination clinics and we conclude that household questionnaire surveys and the mark-re-sight (transect survey) method for estimating post-vaccination coverage are accurate when all dogs, including puppies, are included. Incentives were distributed during central-point rabies vaccination clinics in northern Tanzania to quantify their effect on owner participation. In villages where incentives were handed out participation increased, with an average of 34 more dogs being vaccinated. Through economies of scale, this represents a reduction in the cost-per-dog of $0.47. This represents the price-threshold under which the cost of the incentive used must fall to be economically viable. Additionally, vaccination coverage levels were determined in ten villages through the gold-standard village-wide census technique, as well as through two cheaper and quicker methods (randomized household questionnaire and the transect survey). Cost data were also collected. Both non-gold standard methods were found to be accurate when puppies were included in the calculations, although the transect survey and the household questionnaire survey over- and under-estimated the coverage respectively. Given that additional demographic data can be collected through the household questionnaire survey, and that its estimate of coverage is more conservative, we recommend this method. Despite the use of incentives the average vaccination coverage was below the 70% threshold for eliminating rabies. We discuss the reasons and suggest solutions to improve coverage. Given recent international targets to eliminate rabies, this study provides valuable and timely data to help improve mass dog vaccination programs in Africa and elsewhere. PMID:26633821

  17. Incentives Increase Participation in Mass Dog Rabies Vaccination Clinics and Methods of Coverage Estimation Are Assessed to Be Accurate.

    PubMed

    Minyoo, Abel B; Steinmetz, Melissa; Czupryna, Anna; Bigambo, Machunde; Mzimbiri, Imam; Powell, George; Gwakisa, Paul; Lankester, Felix

    2015-12-01

    In this study we show that incentives (dog collars and owner wristbands) are effective at increasing owner participation in mass dog rabies vaccination clinics and we conclude that household questionnaire surveys and the mark-re-sight (transect survey) method for estimating post-vaccination coverage are accurate when all dogs, including puppies, are included. Incentives were distributed during central-point rabies vaccination clinics in northern Tanzania to quantify their effect on owner participation. In villages where incentives were handed out participation increased, with an average of 34 more dogs being vaccinated. Through economies of scale, this represents a reduction in the cost-per-dog of $0.47. This represents the price-threshold under which the cost of the incentive used must fall to be economically viable. Additionally, vaccination coverage levels were determined in ten villages through the gold-standard village-wide census technique, as well as through two cheaper and quicker methods (randomized household questionnaire and the transect survey). Cost data were also collected. Both non-gold standard methods were found to be accurate when puppies were included in the calculations, although the transect survey and the household questionnaire survey over- and under-estimated the coverage respectively. Given that additional demographic data can be collected through the household questionnaire survey, and that its estimate of coverage is more conservative, we recommend this method. Despite the use of incentives the average vaccination coverage was below the 70% threshold for eliminating rabies. We discuss the reasons and suggest solutions to improve coverage. Given recent international targets to eliminate rabies, this study provides valuable and timely data to help improve mass dog vaccination programs in Africa and elsewhere.

  18. A Simple yet Accurate Method for the Estimation of the Biovolume of Planktonic Microorganisms

    PubMed Central

    2016-01-01

    Determining the biomass of microbial plankton is central to the study of fluxes of energy and materials in aquatic ecosystems. This is typically accomplished by applying proper volume-to-carbon conversion factors to group-specific abundances and biovolumes. A critical step in this approach is the accurate estimation of biovolume from two-dimensional (2D) data such as those available through conventional microscopy techniques or flow-through imaging systems. This paper describes a simple yet accurate method for the assessment of the biovolume of planktonic microorganisms, which works with any image analysis system allowing for the measurement of linear distances and the estimation of the cross sectional area of an object from a 2D digital image. The proposed method is based on Archimedes’ principle about the relationship between the volume of a sphere and that of a cylinder in which the sphere is inscribed, plus a coefficient of ‘unellipticity’ introduced here. Validation and careful evaluation of the method are provided using a variety of approaches. The new method proved to be highly precise with all convex shapes characterised by approximate rotational symmetry, and combining it with an existing method specific for highly concave or branched shapes allows covering the great majority of cases with good reliability. Thanks to its accuracy, consistency, and low resources demand, the new method can conveniently be used in substitution of any extant method designed for convex shapes, and can readily be coupled with automated cell imaging technologies, including state-of-the-art flow-through imaging devices. PMID:27195667
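
    Under the stated Archimedes relationship, a convex cell's volume follows from its projected cross-sectional area and transverse diameter, since a sphere fills 2/3 of its circumscribing cylinder. The sketch below reproduces the exact result for a sphere; the 'unellipticity' argument is a placeholder for the paper's correction coefficient, whose exact definition is in the original article:

        import numpy as np

        def biovolume(area, diameter, unellipticity=1.0):
            # V ~ (2/3) * A * d for a convex, rotationally
            # symmetric shape, times a shape-correction coefficient.
            return (2.0 / 3.0) * area * diameter * unellipticity

        r = 5.0                                   # sanity check: a sphere
        assert np.isclose(biovolume(np.pi * r**2, 2 * r),
                          4.0 / 3.0 * np.pi * r**3)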

  19. Accurate and unbiased estimation of power-law exponents from single-emitter blinking data.

    PubMed

    Hoogenboom, Jacob P; den Otter, Wouter K; Offerhaus, Herman L

    2006-11-28

    Single emitter blinking with a power-law distribution for the on and off times has been observed on a variety of systems including semiconductor nanocrystals, conjugated polymers, fluorescent proteins, and organic fluorophores. The origin of this behavior is still under debate. Reliable estimation of power exponents from experimental data is crucial in validating the various models under consideration. We derive a maximum likelihood estimator for power-law distributed data and analyze its accuracy as a function of data set size and power exponent both analytically and numerically. Results are compared to least-squares fitting of the double logarithmically transformed probability density. We demonstrate that least-squares fitting introduces a severe bias in the estimation result and that the maximum likelihood procedure is superior in retrieving the correct exponent and reducing the statistical error. For a data set as small as 50 data points, the error margins of the maximum likelihood estimator are already below 7%, giving the possibility to quantify blinking behavior when data set size is limited, e.g., due to photobleaching.
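
    The continuous power-law MLE has a closed form, which is what makes it usable for the small data sets mentioned above. A sketch with a quick synthetic check (alpha = 1.8, 50 points, inverse-CDF sampling):

        import numpy as np

        def powerlaw_mle(x, xmin):
            # alpha_hat = 1 + n / sum(ln(x_i / xmin)) for p(x) ~ x**-alpha
            x = np.asarray(x)
            x = x[x >= xmin]
            return 1.0 + len(x) / np.sum(np.log(x / xmin))

        rng = np.random.default_rng(1)
        u = rng.random(50)                        # 50 points, as in the abstract
        samples = (1.0 - u) ** (-1.0 / 0.8)       # power law, alpha = 1.8, xmin = 1
        print(powerlaw_mle(samples, 1.0))         # close to 1.8

    A least-squares line through the log-log histogram of the same data would, as the abstract notes, return a biased exponent.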

  20. How Accurate and Robust Are the Phylogenetic Estimates of Austronesian Language Relationships?

    PubMed Central

    Greenhill, Simon J.; Drummond, Alexei J.; Gray, Russell D.

    2010-01-01

    We recently used computational phylogenetic methods on lexical data to test between two scenarios for the peopling of the Pacific. Our analyses of lexical data supported a pulse-pause scenario of Pacific settlement in which the Austronesian speakers originated in Taiwan around 5,200 years ago and rapidly spread through the Pacific in a series of expansion pulses and settlement pauses. We claimed that there was high congruence between traditional language subgroups and those observed in the language phylogenies, and that the estimated age of the Austronesian expansion at 5,200 years ago was consistent with the archaeological evidence. However, the congruence between the language phylogenies and the evidence from historical linguistics was not quantitatively assessed using tree comparison metrics. The robustness of the divergence time estimates to different calibration points was also not investigated exhaustively. Here we address these limitations by using a systematic tree comparison metric to calculate the similarity between the Bayesian phylogenetic trees and the subgroups proposed by historical linguistics, and by re-estimating the age of the Austronesian expansion using only the most robust calibrations. The results show that the Austronesian language phylogenies are highly congruent with the traditional subgroupings, and the date estimates are robust even when calculated using a restricted set of historical calibrations. PMID:20224774

  1. Reconstruction of the activity of point sources for the accurate characterization of nuclear waste drums by segmented gamma scanning.

    PubMed

    Krings, Thomas; Mauerhofer, Eric

    2011-06-01

    This work improves the reliability and accuracy in the reconstruction of the total isotope activity content in heterogeneous nuclear waste drums containing point sources. The method is based on χ²-fits of the angular dependent count rate distribution measured during a drum rotation in segmented gamma scanning. A new description of the analytical calculation of the angular count rate distribution is introduced based on a more precise model of the collimated detector. The new description is validated and compared to the old description using MCNP5 simulations of angular dependent count rate distributions of Co-60 and Cs-137 point sources. It is shown that the new model describes the angular dependent count rate distribution significantly more accurately than the old model. Hence, the reconstruction of the activity is more accurate and the errors are considerably reduced, leading to more reliable results. Furthermore, the results are compared to the conventional reconstruction method assuming a homogeneous matrix and activity distribution.

  2. The potential of more accurate InSAR covariance matrix estimation for land cover mapping

    NASA Astrophysics Data System (ADS)

    Jiang, Mi; Yong, Bin; Tian, Xin; Malhotra, Rakesh; Hu, Rui; Li, Zhiwei; Yu, Zhongbo; Zhang, Xinxin

    2017-04-01

    Synthetic aperture radar (SAR) and interferometric SAR (InSAR) provide both structural and electromagnetic information for the ground surface and therefore have been widely used for land cover classification. However, relatively few studies have developed analyses that investigate SAR datasets over richly textured areas where heterogeneous land covers exist and intermingle over short distances. One of the main difficulties is that the shapes of the structures in a SAR image cannot be represented in detail, as mixed pixels are likely to occur when conventional InSAR parameter estimation methods are used. To solve this problem and further extend previous research into remote monitoring of urban environments, we address the use of accurate InSAR covariance matrix estimation to improve the accuracy of land cover mapping. The standard and updated methods were tested using the HH-polarization TerraSAR-X dataset and compared with each other using the random forest classifier. A detailed accuracy assessment compiled for six types of surfaces shows that the updated method outperforms the standard approach by around 9%, with an overall accuracy of 82.46% over areas with rich texture in Zhuhai, China. This paper demonstrates that the accuracy of land cover mapping can benefit from the enhancement of the quality of the observations, in addition to the classifier selection and multi-source data integration reported in previous studies.

  3. Can student health professionals accurately estimate alcohol content in commonly occurring drinks?

    PubMed Central

    Sinclair, Julia; Searle, Emma

    2016-01-01

    Objectives: Correct identification of alcohol as a contributor to, or comorbidity of, many psychiatric diseases requires health professionals to be competent and confident to take an accurate alcohol history. Being able to estimate (or calculate) the alcohol content in commonly consumed drinks is a prerequisite for quantifying levels of alcohol consumption. The aim of this study was to assess this ability in medical and nursing students. Methods: A cross-sectional survey of 891 medical and nursing students across different years of training was conducted. Students were asked the alcohol content of 10 different alcoholic drinks by seeing a slide of the drink (with picture, volume and percentage of alcohol by volume) for 30 s. Results: Overall, the mean number of correctly estimated drinks (out of the 10 tested) was 2.4, increasing to just over 3 if a 10% margin of error was used. Wine and premium strength beers were underestimated by over 50% of students. Those who drank alcohol themselves, or who were further on in their clinical training, did better on the task, but overall the levels remained low. Conclusions: Knowledge of, or the ability to work out, the alcohol content of commonly consumed drinks is poor, and further research is needed to understand the reasons for this and the impact this may have on the likelihood to undertake screening or initiate treatment. PMID:27536344
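
    The arithmetic the students were tested on is simple; in UK units (one unit = 10 ml of pure ethanol), for example:

        def uk_units(volume_ml, abv_percent):
            # One UK unit = 10 ml of pure ethanol, so
            # units = volume_ml * ABV% / 1000.
            return volume_ml * abv_percent / 1000.0

        print(uk_units(250, 13))   # a large glass of 13% wine: 3.25 units

    The study's finding that wine was underestimated by over half the students is consistent with how quickly such totals exceed intuition.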

  4. Greater contrast in Martian hydrological history from more accurate estimates of paleodischarge

    NASA Astrophysics Data System (ADS)

    Jacobsen, R. E.; Burr, D. M.

    2016-09-01

    Correlative width-discharge relationships from the Missouri River Basin are commonly used to estimate fluvial paleodischarge on Mars. However, hydraulic geometry provides alternative, and causal, width-discharge relationships derived from broader samples of channels, including those in reduced-gravity (submarine) environments. Comparison of these relationships implies that causal relationships from hydraulic geometry should yield more accurate and more precise discharge estimates. Our remote analysis of a Martian-terrestrial analog channel, combined with in situ discharge data, substantiates this implication. Applied to Martian features, these results imply that paleodischarges of interior channels of Noachian-Hesperian (~3.7 Ga) valley networks have been underestimated by a factor of several, whereas paleodischarges for smaller fluvial deposits of the Late Hesperian-Early Amazonian (~3.0 Ga) have been overestimated. Thus, these new paleodischarges significantly magnify the contrast between early and late Martian hydrologic activity. Width-discharge relationships from hydraulic geometry represent validated tools for quantifying fluvial input near candidate landing sites of upcoming missions.

  5. Ocean Lidar Measurements of Beam Attenuation and a Roadmap to Accurate Phytoplankton Biomass Estimates

    NASA Astrophysics Data System (ADS)

    Hu, Yongxiang; Behrenfeld, Mike; Hostetler, Chris; Pelon, Jacques; Trepte, Charles; Hair, John; Slade, Wayne; Cetinic, Ivona; Vaughan, Mark; Lu, Xiaomei; Zhai, Pengwang; Weimer, Carl; Winker, David; Verhappen, Carolus C.; Butler, Carolyn; Liu, Zhaoyan; Hunt, Bill; Omar, Ali; Rodier, Sharon; Lifermann, Anne; Josset, Damien; Hou, Weilin; MacDonnell, David; Rhew, Ray

    2016-06-01

    Beam attenuation coefficient, c, provides an important optical index of plankton standing stocks, such as phytoplankton biomass and total particulate carbon concentration. Unfortunately, c has proven difficult to quantify through remote sensing. Here, we introduce an innovative approach for estimating c using lidar depolarization measurements and diffuse attenuation coefficients from ocean color products or lidar measurements of Brillouin scattering. The new approach is based on a theoretical formula established from Monte Carlo simulations that links the depolarization ratio of sea water to the ratio of diffuse attenuation Kd and beam attenuation c (i.e., a multiple scattering factor). On July 17, 2014, the CALIPSO satellite was tilted 30° off-nadir for one nighttime orbit in order to minimize ocean surface backscatter and demonstrate the lidar ocean subsurface measurement concept from space. Depolarization ratios of ocean subsurface backscatter were measured accurately. Beam attenuation coefficients computed from the depolarization ratio measurements compare well with empirical estimates from ocean color measurements. We further verify the beam attenuation coefficient retrievals using aircraft-based high spectral resolution lidar (HSRL) data that are collocated with in-water optical measurements.

  6. Discrete state model and accurate estimation of loop entropy of RNA secondary structures.

    PubMed

    Zhang, Jian; Lin, Ming; Chen, Rong; Wang, Wei; Liang, Jie

    2008-03-28

    Conformational entropy makes an important contribution to the stability and folding of RNA molecules, but it is challenging to either measure or compute the conformational entropy associated with long loops. We develop optimized discrete k-state models of the RNA backbone based on known RNA structures for computing the entropy of loops, which are modeled as self-avoiding walks. To estimate the entropy of hairpin, bulge, internal, and multibranch loops of long length (up to 50), we develop an efficient sampling method based on the sequential Monte Carlo principle. Our method considers the excluded volume effect. It is general and can be applied to calculating the entropy of loops with longer length and arbitrary complexity. For loops of short length, our results are in good agreement with a recent theoretical model and experimental measurements. For long loops, our estimated entropy of hairpin loops is in excellent agreement with the Jacobson-Stockmayer extrapolation model. However, for bulge loops and more complex secondary structures such as internal and multibranch loops, we find that the Jacobson-Stockmayer extrapolation model has large errors. Based on the estimated entropy, we have developed empirical formulae for accurate calculation of the entropy of long loops in different secondary structures. Our study on the effect of asymmetric loop sizes suggests that the loop entropy of internal loops is largely determined by the total loop length, and is only marginally affected by the asymmetric size of the two loops. Our finding suggests that the significant asymmetric effects of loop length in internal loops measured by experiments are likely to be partially enthalpic. Our method can be applied to develop improved energy parameters important for studying RNA stability and folding, and for predicting RNA secondary and tertiary structures. The discrete model and the program used to calculate loop entropy can be downloaded at http://gila.bioengr.uic.edu/resources/RNA.html.
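
    The Jacobson-Stockmayer extrapolation referred to above is logarithmic in loop length. A sketch using the coefficient value commonly quoted for nearest-neighbor RNA models (1.75; 1.5 for an ideal chain); the paper fits its own empirical formulae, which are not reproduced here:

        import numpy as np

        R_GAS = 1.987e-3   # kcal/(mol K)

        def js_loop_dg(dg_ref, n_ref, n, T=310.15, coeff=1.75):
            # dG(n) = dG(n_ref) + coeff * R * T * ln(n / n_ref),
            # with dg_ref in kcal/mol for a reference loop of length n_ref.
            return dg_ref + coeff * R_GAS * T * np.log(n / n_ref)

        print(js_loop_dg(5.0, 9, 30))   # extrapolate a 9-nt penalty to 30 nt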

  7. Simulation estimates of cloud points of polydisperse fluids.

    PubMed

    Buzzacchi, Matteo; Sollich, Peter; Wilding, Nigel B; Müller, Marcus

    2006-04-01

    We describe two distinct approaches to obtaining the cloud-point densities and coexistence properties of polydisperse fluid mixtures by Monte Carlo simulation within the grand-canonical ensemble. The first method determines the chemical potential distribution μ(σ) (with σ the polydisperse attribute) under the constraint that the ensemble average of the particle density distribution ρ(σ) match a prescribed parent form. Within the region of phase coexistence (delineated by the cloud curve) this leads to a distribution of the fluctuating overall particle density n, p(n), that necessarily has unequal peak weights in order to satisfy a generalized lever rule. A theoretical analysis shows that as a consequence, finite-size corrections to estimates of coexistence properties are power laws in the system size. The second method assigns μ(σ) such that an equal-peak-weight criterion is satisfied for p(n) for all points within the coexistence region. However, since equal volumes of the coexisting phases cannot satisfy the lever rule for the prescribed parent, their relative contributions must be weighted appropriately when determining μ(σ). We show how to ascertain the requisite weight factor operationally. A theoretical analysis of the second method suggests that it leads to finite-size corrections to estimates of coexistence properties which are exponentially small in the system size. The scaling predictions for both methods are tested via Monte Carlo simulations of a polydisperse lattice-gas model near its cloud curve, the results showing excellent quantitative agreement with the theory.

  8. Closed-form solutions for estimating a rigid motion from plane correspondences extracted from point clouds

    NASA Astrophysics Data System (ADS)

    Khoshelham, Kourosh

    2016-04-01

    Registration is often a prerequisite step in processing point clouds. While planar surfaces are suitable features for registration, most of the existing plane-based registration methods rely on iterative solutions for the estimation of transformation parameters from plane correspondences. This paper presents a new closed-form solution for the estimation of a rigid motion from a set of point-plane correspondences. The role of normalization is investigated and its importance for accurate plane fitting and plane-based registration is shown. The paper also presents a thorough evaluation of the closed-form solutions and compares their performance with the iterative solution in terms of accuracy, robustness, stability and efficiency. The results suggest that the closed-form solution based on point-plane correspondences should be the method of choice in point cloud registration as it is significantly faster than the iterative solution, and performs as well as or better than the iterative solution in most situations. The normalization of the point coordinates is also recommended as an essential preprocessing step for point cloud registration. An implementation of the closed-form solutions in MATLAB is available at: http://people.eng.unimelb.edu.au/kkhoshelham/research.html#directmotion.
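
    A closed-form solution of this flavor can be written in a few lines: the rotation from the plane normals via an SVD (Kabsch-style), then the translation from the change in plane offsets by linear least squares. This is a generic sketch, not necessarily the paper's exact formulation:

        import numpy as np

        def motion_from_planes(n_a, d_a, n_b, d_b):
            # Planes n.x = d; under x' = R x + t the normal maps to R n
            # and the offset to d + (R n).t. Needs >= 3 planes with
            # linearly independent normals; n_a, n_b are (N, 3) arrays.
            H = n_a.T @ n_b                       # sum of outer products
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R = Vt.T @ D @ U.T                    # best rotation n_b ~ R n_a
            t, *_ = np.linalg.lstsq(n_b, d_b - d_a, rcond=None)
            return R, t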

  9. Smartphone-Based Accurate Analysis of Retinal Vasculature towards Point-of-Care Diagnostics

    PubMed Central

    Xu, Xiayu; Ding, Wenxiang; Wang, Xuemin; Cao, Ruofan; Zhang, Maiye; Lv, Peilin; Xu, Feng

    2016-01-01

    Retinal vasculature analysis is important for the early diagnostics of various eye and systemic diseases, making it a potentially useful biomarker, especially for resource-limited regions and countries. Here we developed a smartphone-based retinal image analysis system for point-of-care diagnostics that is able to load a fundus image, segment retinal vessels, analyze individual vessel width, and store or uplink results. The proposed system was not only evaluated on widely used public databases and compared with the state-of-the-art methods, but also validated on clinical images directly acquired with a smartphone. An Android app is also developed to facilitate on-site application of the proposed methods. Both visual assessment and quantitative assessment showed that the proposed methods achieved comparable results to the state-of-the-art methods that require high-standard workstations. The proposed system holds great potential for the early diagnostics of various diseases, such as diabetic retinopathy, for resource-limited regions and countries. PMID:27698369

  10. Sliding control of pointing and tracking with operator spline estimation

    NASA Technical Reports Server (NTRS)

    Dwyer, Thomas A. W., III; Fakhreddine, Karray; Kim, Jinho

    1989-01-01

    It is shown how a variable structure control technique could be implemented to achieve precise pointing and good tracking of a deformable structure subject to fast slewing maneuvers. The correction torque that has to be applied to the structure is based on estimates of upper bounds on the model errors. For a rapid rotation of the deformable structure, the elastic response can be modeled by oscillators driven by angular acceleration, where the stiffness and damping coefficients are also angular velocity and acceleration dependent. By transforming this slew-driven elastic dynamics into bilinear form (by regarding the vector made up of the angular velocity, squared angular velocity and angular acceleration components, which appear in the coefficients, as the input to the deformation dynamics), an operator spline can be constructed that gives a low order estimate of the induced disturbance. Moreover, a worst case error bound between the estimated deformation and the unknown exact deformation is also generated, which can be used where required in the sliding control correction.

  11. Estimating the Effects of Detection Heterogeneity and Overdispersion on Trends Estimated from Avian Point Counts

    EPA Science Inventory

    Point counts are a common method for sampling avian distribution and abundance. Though methods for estimating detection probabilities are available, many analyses use raw counts and do not correct for detectability. We use a removal model of detection within an N-mixture approach...

  12. Can endocranial volume be estimated accurately from external skull measurements in great-tailed grackles (Quiscalus mexicanus)?

    PubMed Central

    Palmstrom, Christin R.

    2015-01-01

    There is an increasing need to validate and collect data approximating brain size on individuals in the field to understand what evolutionary factors drive brain size variation within and across species. We investigated whether we could accurately estimate endocranial volume (a proxy for brain size), as measured by computerized tomography (CT) scans, using external skull measurements and/or by filling skulls with beads and pouring them out into a graduated cylinder for male and female great-tailed grackles. We found that while females had higher correlations than males, estimations of endocranial volume from external skull measurements or beads did not tightly correlate with CT volumes. We found no accuracy in the ability of external skull measures to predict CT volumes because the prediction intervals for most data points overlapped extensively. We conclude that we are unable to detect individual differences in endocranial volume using external skull measurements. These results emphasize the importance of validating and explicitly quantifying the predictive accuracy of brain size proxies for each species and each sex. PMID:26082858

  13. Accurate optical flow field estimation using mechanical properties of soft tissues

    NASA Astrophysics Data System (ADS)

    Mehrabian, Hatef; Karimi, Hirad; Samani, Abbas

    2009-02-01

    A novel optical flow based technique is presented in this paper to measure the nodal displacements of soft tissue undergoing large deformations. In hyperelasticity imaging, soft tissues may be compressed extensively [1] and the deformation may exceed the number of pixels ordinary optical flow approaches can detect. Furthermore, in most biomedical applications there is a large amount of image information that represents the geometry of the tissue and the number of tissue types present in the organ of interest. Such information is often ignored in applications such as image registration. In this work we incorporate the information pertaining to soft tissue mechanical behavior (a Neo-Hookean hyperelastic model is used here), in addition to the tissue geometry before compression, into a hierarchical Horn-Schunck optical flow method to overcome this weakness in detecting large deformations. Applying the proposed method to a phantom at several compression levels showed that it yields reasonably accurate displacement fields. Estimated displacement results of this phantom study, obtained for displacement fields of 85 pixels/frame and 127 pixels/frame, are reported and discussed in this paper.
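
    The baseline the authors extend is the classic Horn-Schunck iteration. A plain (non-hierarchical, mechanics-free) version is sketched below; on its own it only handles small displacements, which is exactly the weakness the paper addresses:

        import numpy as np
        from scipy.ndimage import convolve

        def horn_schunck(I1, I2, alpha=10.0, n_iter=200):
            # Minimize the optical-flow constraint plus a smoothness
            # penalty weighted by alpha**2 (Horn & Schunck, 1981).
            I1 = I1.astype(float); I2 = I2.astype(float)
            Iy, Ix = np.gradient((I1 + I2) / 2.0)   # spatial derivatives
            It = I2 - I1                            # temporal derivative
            avg = np.array([[1, 2, 1], [2, 0, 2], [1, 2, 1]]) / 12.0
            u = np.zeros_like(I1); v = np.zeros_like(I1)
            for _ in range(n_iter):
                ub, vb = convolve(u, avg), convolve(v, avg)
                c = (Ix * ub + Iy * vb + It) / (alpha**2 + Ix**2 + Iy**2)
                u, v = ub - Ix * c, vb - Iy * c
            return u, v                             # per-pixel flow field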

  14. Estimation of aim point for endgame based on IR image sequence

    NASA Astrophysics Data System (ADS)

    Wang, Hongbo; Zhuang, Zhihong; Zheng, Huali; Zhang, Qingtai

    2003-09-01

    Because of the finite response time of the missile and the limited field of view (FOV) of the imaging infrared (IR) seeker during the endgame of an intercept, the image of the fighter gradually grows larger and finally overflows the FOV as the missile approaches the fighter. This results in loss of seeker control and degrades the precision of burst control of an imaging IR fuze based upon guidance integrated fuzing (GIF) technology. The aim of the research presented in this paper is to decrease the blind range of the imaging IR seeker and improve the precision of aim-point parameters through pose recognition. On the basis of the motion characteristics of the missile and fighter during a high-speed encounter, and the high frame-to-frame correlation in the image sequence obtained by the imaging IR seeker, a novel method of fighter axis pose recognition and aim-point estimation is proposed. Within this methodology, the spatial pose of the fighter axis is recognized before the image overflows the FOV, and the tracking mode of the seeker is then switched from general tracking to partial image tracking at the right time. During partial image tracking, the seeker is controlled to keep the partial image track point in the FOV, and the aim-point parameters can then be calculated accurately by utilizing the fighter axis pose, the track point parameters and the relative distance between the track point and the aim-point.

  15. How accurately can we estimate energetic costs in a marine top predator, the king penguin?

    PubMed

    Halsey, Lewis G; Fahlman, Andreas; Handrich, Yves; Schmidt, Alexander; Woakes, Anthony J; Butler, Patrick J

    2007-01-01

    King penguins (Aptenodytes patagonicus) are one of the greatest consumers of marine resources. However, while their influence on the marine ecosystem is likely to be significant, only an accurate knowledge of their energy demands will indicate their true food requirements. Energy consumption has been estimated for many marine species using the heart rate - rate of oxygen consumption (fH - VO2) technique, and the technique has been applied successfully to answer eco-physiological questions. However, previous studies on the energetics of king penguins, based on developing or applying this technique, have raised a number of issues about the degree of validity of the technique for this species. These include the predictive validity of the present fH - VO2 equations across different seasons and individuals and during different modes of locomotion. In many cases, these issues also apply to other species for which the fH - VO2 technique has been applied. In the present study, the accuracy of three prediction equations for king penguins was investigated based on validity studies and on estimates of VO2 from published, field fH data. The major conclusions from the present study are: (1) in contrast to that for walking, the fH - VO2 relationship for swimming king penguins is not affected by body mass; (2) prediction equation (1), log(VO2) = -0.279 + 1.24 log(fH) + 0.0237t - 0.0157 log(fH)t, derived in a previous study, is the most suitable equation presently available for estimating VO2 in king penguins for all locomotory and nutritional states. A number of possible problems associated with producing an fH - VO2 relationship are discussed in the present study. Finally, a statistical method to include easy-to-measure morphometric characteristics, which may improve the accuracy of fH - VO2 prediction equations, is explained.
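
    Prediction equation (1) can be transcribed directly; the logarithms are taken here as base 10, and the units of fH and the definition of t are assumed to follow the original study:

        import numpy as np

        def vo2_from_fh(fh, t):
            # Equation (1) from the abstract, solved for VO2.
            lf = np.log10(fh)
            return 10.0 ** (-0.279 + 1.24 * lf + 0.0237 * t - 0.0157 * lf * t)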

  16. Estimation of the Hopf Bifurcation Point for Aeroelastic Systems

    NASA Astrophysics Data System (ADS)

    SEDAGHAT, A.; COOPER, J. E.; LEUNG, A. Y. T.; WRIGHT, J. R.

    2001-11-01

    The estimation of the Hopf bifurcation point is an important prerequisite for the non-linear analysis of non-linear instabilities in aircraft using the classical normal form theory. For unsteady transonic aerodynamics, the aeroelastic response is frequency-dependent and therefore a very costly trial-and-error and iterative scheme, frequency-matching, is used to determine flutter conditions. Furthermore, the standard algebraic methods have usually been used for systems no bigger than two degrees of freedom and do not appear to have been applied for frequency-dependent aerodynamics. In this study, a procedure is developed to produce and solve algebraic equations for any order aeroelastic systems, with and without frequency-dependent aerodynamics, to predict the Hopf bifurcation point. The approach performs the computation in a single step using symbolic programming and does not require trial and error and repeated calculations at various speeds required when using classical iterative methods. To investigate the validity of the approach, a Hancock two-degrees-of-freedom aeroelastic wing model and a multi-degree-of-freedom cantilever wing model were studied in depth. Hancock experimental data was used for curve fitting the unsteady aerodynamic damping term as a function of frequency. Fairly close agreement was obtained between the analytical and simulated aeroelastic solutions with and without frequency-dependent aerodynamics.
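
    The iterative scheme the paper replaces amounts to sweeping speed and watching for a pair of eigenvalues to cross the imaginary axis. A generic sketch of that sweep (the paper's symbolic one-step solution avoids this loop; 'jacobian' is a user-supplied callable):

        import numpy as np

        def first_hopf(jacobian, speeds):
            # jacobian(V): system matrix at airspeed V; returns the first
            # speed at which a complex eigenvalue pair crosses into the
            # right half-plane (the Hopf condition), else None.
            prev = None
            for V in speeds:
                lam = np.linalg.eigvals(jacobian(V))
                k = np.argmax(lam.real)
                if (prev is not None and prev < 0.0
                        and lam[k].real >= 0.0 and lam[k].imag != 0.0):
                    return V
                prev = lam[k].real
            return None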

  17. Curie point depth estimation of the Eastern Caribbean

    NASA Astrophysics Data System (ADS)

    Garcia, Andreina; Orihuela Guevara, Nuris

    2013-04-01

    In this paper we present an estimate of the Curie point depth (CPD) for the Eastern Caribbean. The CPD was estimated from satellite magnetic anomalies by applying the centroid method over the studied area. To calculate the CPD, the area was subdivided into square windows of side 2°, overlapping each other by 1°. From these results, a Curie isotherm grid was obtained using kriging interpolation. Despite the oceanic nature of the Eastern Caribbean plate, this map reveals important lateral variations in the interior of the plate and at its boundaries. The lateral variations observed in the CPD are related to the complexity of thermal processes in the subsurface of the region. From a global perspective, the Earth's oceanic provinces show smooth CPD behavior, except at plate boundaries. In this case, the Eastern Caribbean plate's CPD variations are related to both the plate's boundaries and its interior. The largest CPD variations are observed along the southern boundary of the Caribbean plate (9 to 35 km) and over the Lesser Antilles and the Barbados prism (16 to 30 km). This behavior reflects the complex geologic history of the studied area, in which the presence of extensive basalt and dolerite sills has been documented. These sills originated in various cycles of Cretaceous mantle activity and have been the main cause of the thickening of the oceanic crust in the interior of the Caribbean plate. This thickening of the oceanic plate in turn explains a Mohorovičić discontinuity with an average depth greater than in other regions of the planet, with slight irregularities related to highs of the ocean floor (Nicaragua and Beata crests, Aves High) that are not comparable in magnitude to the lateral variations revealed by the Curie isotherm map.
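
    A common implementation of the centroid method, sketched under assumed wavenumber bands: estimate the top depth Zt from the high-wavenumber slope of the log amplitude spectrum, the centroid depth Z0 from the low-wavenumber slope of the amplitude spectrum divided by wavenumber, and take Zb = 2*Z0 - Zt as the depth to the magnetic bottom (the CPD). The band limits below are illustrative and must be tuned to the window size:

        import numpy as np

        def curie_point_depth(k, P):
            # k: radial wavenumber (rad/km); P: radially averaged power
            # spectrum of a magnetic-anomaly window.
            hi = k > 0.05                            # assumed band (rad/km)
            lo = (k > 0.005) & (k < 0.05)            # assumed band (rad/km)
            Zt = -np.polyfit(k[hi], np.log(np.sqrt(P[hi])), 1)[0]
            Z0 = -np.polyfit(k[lo], np.log(np.sqrt(P[lo]) / k[lo]), 1)[0]
            return 2.0 * Z0 - Zt                     # magnetic bottom (km)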

  18. Evaluation of pedotransfer functions for estimating the soil water retention points

    NASA Astrophysics Data System (ADS)

    Bahmani, Omid; Palangi, Sahar

    2016-06-01

    Direct measurement of soil moisture is often expensive and time-consuming. The aim of this study was to determine the best method for estimating soil moisture using the pedotransfer functions in the SOILPAR 2 model. Soil samples were selected from the UNSODA database for three textures: sandy loam, silty loam and clay. For the clay soil, the Campbell model gave the best results at field capacity (FC) and wilting point (WP), with RMSE = (0.06, 0.09) and d = (0.65, 0.55), respectively. For the silty loam soil, the EPIC model was most accurate at FC, with MBE = 0.00, and the Campbell model gave acceptable results at WP, with RMSE = 0.03 and d = 0.77. For the sandy loam, the Hutson and Campbell models estimated FC and WP better than the others. The Hutson model also gave acceptable estimates of TAW (total available water), with RMSE = (0.03, 0.04, 0.04) and MBE = (0.02, 0.01, 0.01) for clay, sandy loam and silty loam, respectively. These results show that the moisture points are closely linked to soil texture, and that the pedotransfer function models agree well with the experimental observations.

  19. A new reference point for patient dose estimation in neurovascular interventional radiology.

    PubMed

    Kawasaki, Kohei; Imazeki, Masaharu; Hasegawa, Ryota; Shiba, Shinichi; Takahashi, Hiroyuki; Sato, Kazuhiko; Ota, Jyoji; Suzuki, Hiroaki; Awai, Kazuo; Sakamoto, Hajime; Tajima, Osamu; Tsukamoto, Atsuko; Kikuchi, Tatsuya; Kageyama, Takahiro; Kato, Kyoichi

    2013-07-01

    In interventional radiology, dose estimation using the interventional reference point (IRP) is a practical method for obtaining the real-time skin dose of a patient. However, the IRP is defined in terms of adult cardiovascular radiology and is not suitable for dosimetry of the head. In the present study, we defined a new reference point (neuro-IRP) for neuro-interventional procedures. The neuro-IRP was located on the central ray of the X-ray beam, 9 cm from the isocenter, toward the focal spot. To verify whether the neuro-IRP was accurate in dose estimation, we compared calculated doses at the neuro-IRP and actual measured doses at the surface of the head phantom for various directions of the X-ray projection. The resulting calculated doses were fairly consistent with actual measured doses, with the error in this estimation within approximately 15%. These data suggest that dose estimation using the neuro-IRP for the head is valid.

  20. Crop area estimation based on remotely-sensed data with an accurate but costly subsample

    NASA Technical Reports Server (NTRS)

    Gunst, R. F.

    1983-01-01

    Alternatives to sampling-theory stratified and regression estimators of crop production and timber biomass were examined. An alternative estimator which is viewed as especially promising is the errors-in-variables regression estimator. Investigations established the need for caution with this estimator when the ratio of the two error variances is not precisely known.
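
    The attraction of an errors-in-variables estimator in this setting is that it undoes the attenuation ordinary least squares suffers when the remotely-sensed regressor is noisy, using an error variance measured on the accurate but costly subsample. A sketch of the simplest version (not necessarily the specific estimator studied in the report):

        import numpy as np

        def eiv_slope(x_obs, y, sigma_u2):
            # OLS slope is attenuated by the reliability ratio
            # lambda = (var(x_obs) - sigma_u2) / var(x_obs); dividing by
            # lambda gives a consistent slope when the measurement-error
            # variance sigma_u2 is known from the accurate subsample.
            vx = np.var(x_obs, ddof=1)
            b_ols = np.cov(x_obs, y)[0, 1] / vx
            return b_ols / (1.0 - sigma_u2 / vx)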

  1. Improved image quality in pinhole SPECT by accurate modeling of the point spread function in low magnification systems

    SciTech Connect

    Pino, Francisco; Roé, Nuria; Aguiar, Pablo; Falcon, Carles; Ros, Domènec; Pavía, Javier

    2015-02-15

    Purpose: Single photon emission computed tomography (SPECT) has become an important noninvasive imaging technique in small-animal research. Due to the high resolution required in small-animal SPECT systems, the spatially variant system response needs to be included in the reconstruction algorithm. Accurate modeling of the system response should result in a major improvement in the quality of reconstructed images. The aim of this study was to quantitatively assess the impact that an accurate modeling of spatially variant collimator/detector response has on image-quality parameters, using a low magnification SPECT system equipped with a pinhole collimator and a small gamma camera. Methods: Three methods were used to model the point spread function (PSF). For the first, only the geometrical pinhole aperture was included in the PSF. For the second, the septal penetration through the pinhole collimator was added. In the third method, the measured intrinsic detector response was incorporated. Tomographic spatial resolution was evaluated and contrast, recovery coefficients, contrast-to-noise ratio, and noise were quantified using a custom-built NEMA NU 4–2008 image-quality phantom. Results: A high correlation was found between the experimental data corresponding to intrinsic detector response and the fitted values obtained by means of an asymmetric Gaussian distribution. For all PSF models, resolution improved as the distance from the point source to the center of the field of view increased and when the acquisition radius diminished. An improvement of resolution was observed after a minimum of five iterations when the PSF modeling included more corrections. Contrast, recovery coefficients, and contrast-to-noise ratio were better for the same level of noise in the image when more accurate models were included. Ring-type artifacts were observed when the number of iterations exceeded 12. Conclusions: Accurate modeling of the PSF improves resolution, contrast, and recovery

  2. Geometric constraints in semiclassical initial value representation calculations in Cartesian coordinates: accurate reduction in zero-point energy.

    PubMed

    Issack, Bilkiss B; Roy, Pierre-Nicholas

    2005-08-22

    An approach for the inclusion of geometric constraints in semiclassical initial value representation calculations is introduced. An important aspect of the approach is that Cartesian coordinates are used throughout. We devised an algorithm for the constrained sampling of initial conditions through the use of a multivariate Gaussian distribution based on a projected Hessian. We also propose an approach for the constrained evaluation of the so-called Herman-Kluk prefactor in its exact log-derivative form. Sample calculations are performed for free and constrained rare-gas trimers. The results show that the proposed approach provides an accurate evaluation of the reduction in zero-point energy. Exact basis set calculations are used to assess the accuracy of the semiclassical results. Since Cartesian coordinates are used, the approach is general and applicable to a variety of molecular and atomic systems.

  3. Insights on the role of accurate state estimation in coupled model parameter estimation by a conceptual climate model study

    NASA Astrophysics Data System (ADS)

    Yu, Xiaolin; Zhang, Shaoqing; Lin, Xiaopei; Li, Mingkui

    2017-03-01

    The uncertainties in values of coupled model parameters are an important source of model bias that causes model climate drift. The values can be calibrated by a parameter estimation procedure that projects observational information onto model parameters. The signal-to-noise ratio of the error covariance between the model state and the parameter being estimated directly determines whether the parameter estimation succeeds or not. With a conceptual climate model that couples a stochastic atmosphere and a slowly varying ocean, this study examines the sensitivity of the state-parameter covariance to the accuracy of estimated model states in different components of a coupled system. Due to the interaction of multiple timescales, the fast-varying atmosphere with a chaotic nature is the major source of inaccuracy in the estimated state-parameter covariance. Thus, enhancing the estimation accuracy of atmospheric states is very important for the success of coupled model parameter estimation, especially for the parameters in the air-sea interaction processes. The impact of the chaotic-to-periodic ratio in state variability on parameter estimation is also discussed. This simple model study provides a guideline for when real observations are used to optimize model parameters in a coupled general circulation model for improving climate analysis and predictions.

  4. Estimating the physicochemical properties of polyhalogenated aromatic and aliphatic compounds using UPPER: part 1. Boiling point and melting point.

    PubMed

    Admire, Brittany; Lian, Bo; Yalkowsky, Samuel H

    2015-01-01

    The UPPER (Unified Physicochemical Property Estimation Relationships) model uses enthalpic and entropic parameters to estimate 20 biologically relevant properties of organic compounds. The model has been validated by Lian and Yalkowsky on a data set of 700 hydrocarbons. The aim of this work is to expand the UPPER model to estimate the boiling and melting points of polyhalogenated compounds. In this work, 19 new group descriptors are defined and used to predict the transition temperatures of an additional 1288 compounds. The boiling points of 808 and the melting points of 742 polyhalogenated compounds are predicted with average absolute errors of 13.56 K and 25.85 K, respectively.

  5. Accurate Point-of-Care Detection of Ruptured Fetal Membranes: Improved Diagnostic Performance Characteristics with a Monoclonal/Polyclonal Immunoassay

    PubMed Central

    Rogers, Linda C.; Scott, Laurie; Block, Jon E.

    2016-01-01

    OBJECTIVE Accurate and timely diagnosis of rupture of membranes (ROM) is imperative to allow for gestational age-specific interventions. This study compared the diagnostic performance characteristics between two methods used for the detection of ROM as measured in the same patient. METHODS Vaginal secretions were evaluated using the conventional fern test as well as a point-of-care monoclonal/polyclonal immunoassay test (ROM Plus®) in 75 pregnant patients who presented to labor and delivery with complaints of leaking amniotic fluid. Both tests were compared to analytical confirmation of ROM using three external laboratory tests. Diagnostic performance characteristics were calculated including sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy. RESULTS Diagnostic performance characteristics uniformly favored ROM detection using the immunoassay test compared to the fern test: sensitivity (100% vs. 77.8%), specificity (94.8% vs. 79.3%), PPV (75% vs. 36.8%), NPV (100% vs. 95.8%), and accuracy (95.5% vs. 79.1%). CONCLUSIONS The point-of-care immunoassay test provides improved diagnostic accuracy for the detection of ROM compared to fern testing. It has the potential of improving patient management decisions, thereby minimizing serious complications and perinatal morbidity. PMID:27199579
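    These performance characteristics follow directly from a 2x2 contingency table. A minimal Python sketch, using hypothetical counts rather than the study's raw table (which the abstract does not give):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic performance characteristics from a 2x2 table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv":         tp / (tp + fp),
        "npv":         tn / (tn + fn),
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
    }

# Illustrative counts only (the abstract reports percentages, not the table):
print(diagnostic_metrics(tp=18, fp=6, tn=55, fn=0))
```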

  6. Children Can Accurately Monitor and Control Their Number-Line Estimation Performance

    ERIC Educational Resources Information Center

    Wall, Jenna L.; Thompson, Clarissa A.; Dunlosky, John; Merriman, William E.

    2016-01-01

    Accurate monitoring and control are essential for effective self-regulated learning. These metacognitive abilities may be particularly important for developing math skills, such as when children are deciding whether a math task is difficult or whether they made a mistake on a particular item. The present experiments investigate children's ability…

  7. Bi-fluorescence imaging for estimating accurately the nuclear condition of Rhizoctonia spp.

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Aims: To simplify the determination of the nuclear condition of pathogenic Rhizoctonia, which currently must be performed either with two fluorescent dyes, which is more costly and time-consuming, or with only one fluorescent dye, which is less accurate. Methods and Results: A red primary ...

  8. Identifiability and Estimation in Random Translations of Marked Point Processes.

    DTIC Science & Technology

    1982-10-01

    Keywords: inversion of the Laplace transform; Hermite distribution; parametric and non-parametric estimation. Abstract (fragment): Parametric estimation of h(·) and P(·): if h(·) and P(·) belong to a certain known family of functions with some unknown parameters, the expression (3... complex type of data is required and nothing can be learned about the arrival process. Non-parametric estimation of P(·): let h(·) be a completely

  9. Bayesian parameter estimation of a k-ε model for accurate jet-in-crossflow simulations

    SciTech Connect

    Ray, Jaideep; Lefantzi, Sophia; Arunajatesan, Srinivasan; Dechant, Lawrence

    2016-05-31

    Reynolds-averaged Navier–Stokes models are not very accurate for high-Reynolds-number compressible jet-in-crossflow interactions. The inaccuracy arises from the use of inappropriate model parameters and model-form errors in the Reynolds-averaged Navier–Stokes model. In this study, the hypothesis is pursued that Reynolds-averaged Navier–Stokes predictions can be significantly improved by using parameters inferred from experimental measurements of a supersonic jet interacting with a transonic crossflow.

  10. Accurate state estimation for a hydraulic actuator via a SDRE nonlinear filter

    NASA Astrophysics Data System (ADS)

    Strano, Salvatore; Terzo, Mario

    2016-06-01

    The state estimation in hydraulic actuators is a fundamental tool for the detection of faults or a valid alternative to the installation of sensors. Due to the hard nonlinearities that characterize hydraulic actuators, the performance of linear or linearization-based state estimation techniques is strongly limited. In order to overcome these limits, this paper focuses on an alternative nonlinear estimation method based on the State-Dependent Riccati Equation (SDRE). The technique is able to fully take into account the system nonlinearities and the measurement noise. A fifth-order nonlinear model is derived and employed for the synthesis of the estimator. Simulations and experimental tests have been conducted, and comparisons with the widely used Extended Kalman Filter (EKF) are illustrated. The results show the effectiveness of the SDRE-based technique for applications characterized by non-negligible nonlinearities such as dead zones and friction.
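    As a rough illustration of the SDRE filtering idea described above, the sketch below runs one explicit-Euler step of an SDRE observer on a toy second-order system; the model, matrices, and tuning are all illustrative, not the paper's fifth-order hydraulic model.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def sdre_filter_step(x_hat, u, y, A_of_x, B, C, Q, R, dt):
    """One explicit-Euler step of an SDRE observer for x_dot ~ A(x)x + Bu."""
    A = A_of_x(x_hat)
    # Filter Riccati equation: A P + P A' - P C' R^-1 C P + Q = 0
    P = solve_continuous_are(A.T, C.T, Q, R)
    L = P @ C.T @ np.linalg.inv(R)                  # state-dependent gain
    x_dot = A @ x_hat + B @ u + L @ (y - C @ x_hat)
    return x_hat + dt * x_dot

# Toy second-order system with a state-dependent stiffness term:
A_of_x = lambda x: np.array([[0.0, 1.0], [-(1.0 + x[0] ** 2), -0.5]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Q, R = 0.01 * np.eye(2), np.array([[0.1]])

x_hat = np.zeros(2)
x_hat = sdre_filter_step(x_hat, np.array([1.0]), np.array([0.2]),
                         A_of_x, B, C, Q, R, dt=0.01)
print(x_hat)
```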

  11. Accurate liability estimation improves power in ascertained case-control studies.

    PubMed

    Weissbrod, Omer; Lippert, Christoph; Geiger, Dan; Heckerman, David

    2015-04-01

    Linear mixed models (LMMs) have emerged as the method of choice for confounded genome-wide association studies. However, the performance of LMMs in nonrandomly ascertained case-control studies deteriorates with increasing sample size. We propose a framework called LEAP (liability estimator as a phenotype; https://github.com/omerwe/LEAP) that tests for association with estimated latent values corresponding to severity of phenotype, and we demonstrate that this can lead to a substantial power increase.

  12. Accurate and efficient velocity estimation using Transmission matrix formalism based on the domain decomposition method

    NASA Astrophysics Data System (ADS)

    Wang, Benfeng; Jakobsen, Morten; Wu, Ru-Shan; Lu, Wenkai; Chen, Xiaohong

    2017-03-01

    Full waveform inversion (FWI) has been regarded as an effective tool to build the velocity model for subsequent pre-stack depth migration. Traditional inversion methods are built on the Born approximation and are initial-model dependent; this problem can be avoided by introducing the transmission matrix (T-matrix), because the T-matrix includes all orders of scattering effects. The T-matrix can be estimated from spatial-aperture- and frequency-bandwidth-limited seismic data using linear optimization methods. However, the full T-matrix inversion method (FTIM) is always required in order to estimate velocity perturbations, which is very time consuming. The efficiency can be improved using the previously proposed inverse thin-slab propagator (ITSP) method, especially for large-scale models. However, the ITSP method is currently designed for smooth media, so the estimation results are unsatisfactory when the velocity perturbation is relatively large. In this paper, we propose a domain decomposition method (DDM) to improve the efficiency of the velocity estimation for models with large perturbations, as well as to guarantee the estimation accuracy. Numerical examples for smooth Gaussian ball models and a reservoir model with sharp boundaries are performed using the ITSP method, the proposed DDM and the FTIM. The estimated velocity distributions, the relative errors and the elapsed times all demonstrate the validity of the proposed DDM.

  13. Comparing the standards of one metabolic equivalent of task in accurately estimating physical activity energy expenditure based on acceleration.

    PubMed

    Kim, Dohyun; Lee, Jongshill; Park, Hoon Ki; Jang, Dong Pyo; Song, Soohwa; Cho, Baek Hwan; Jung, Yoo-Suk; Park, Rae-Woong; Joo, Nam-Seok; Kim, In Young

    2016-08-24

    The purpose of the study is to analyse how the standard of resting metabolic rate (RMR) affects estimation of the metabolic equivalent of task (MET) using an accelerometer. In order to investigate the effect on estimation according to intensity of activity, comparisons were conducted between 3.5 ml O2 · kg(-1) · min(-1) and individually measured resting VO2 as the standard for 1 MET. MET was estimated by linear regression equations that were derived through five-fold cross-validation using the two types of MET values and accelerations; the accuracy of estimation was analysed through cross-validation, Bland-Altman plots, and a one-way ANOVA test. There were no significant differences in the RMS error after cross-validation. However, in modified Bland-Altman plots, the individual RMR-based estimations showed mean differences of as much as 0.5 METs from those based on an RMR of 3.5 ml O2 · kg(-1) · min(-1). Finally, the results of the ANOVA test indicated that the individual RMR-based estimations had fewer significant differences between the reference and estimated values at each intensity of activity. In conclusion, the RMR standard is a factor that affects accurate estimation of METs by acceleration; therefore, RMR should be individually specified when it is used for estimation of METs using an accelerometer.
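    The arithmetic at issue is simply the ratio of activity VO2 to the VO2 assigned to 1 MET, so the choice of standard rescales every estimate. A minimal sketch with hypothetical values:

```python
def mets(vo2_ml_kg_min, one_met_vo2=3.5):
    """METs as activity VO2 divided by the VO2 taken to represent 1 MET."""
    return vo2_ml_kg_min / one_met_vo2

activity_vo2 = 14.0                         # hypothetical walking VO2
print(mets(activity_vo2))                   # conventional 3.5 standard -> 4.0 METs
print(mets(activity_vo2, one_met_vo2=3.0))  # individually measured RMR -> ~4.67 METs
```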

  14. Accurate kinetic parameter estimation during progress curve analysis of systems with endogenous substrate production.

    PubMed

    Goudar, Chetan T

    2011-10-01

    We have identified an error in the published integral form of the modified Michaelis-Menten equation that accounts for endogenous substrate production. The correct solution is presented, and the errors in the substrate concentration, S, and in the kinetic parameters Vm, Km, and R resulting from the incorrect solution were characterized. The incorrect integral form resulted in substrate concentration errors as high as 50%, leading to 7-50% error in kinetic parameter estimates. To better reflect experimental scenarios, noise-containing substrate depletion data were analyzed with both the incorrect and correct integral equations. While both equations resulted in identical fits to the substrate depletion data, the final estimates of Vm, Km, and R were different, and the Km and R estimates from the incorrect integral equation deviated substantially from the actual values. Another observation was that at R = 0, the incorrect integral equation reduced to the correct form of the Michaelis-Menten equation. We believe this combination of excellent fits to experimental data, albeit with incorrect kinetic parameter estimates, and the reduction to the Michaelis-Menten equation at R = 0 is primarily responsible for the error going unnoticed. However, the resulting error in kinetic parameter estimates will lead to incorrect biological interpretation, and we urge the use of the correct integral form presented in this study.
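    One hedged way to avoid the integral-form pitfall altogether is to fit the governing differential equation dS/dt = R - Vm·S/(Km + S) numerically; the sketch below (hypothetical parameter values, SciPy-based) illustrates this alternative rather than reproducing the paper's corrected integral equation.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import curve_fit

def simulate_depletion(t, Vm, Km, R, S0=10.0):
    """Integrate dS/dt = R - Vm*S/(Km + S) numerically (values hypothetical)."""
    rhs = lambda _t, S: [R - Vm * S[0] / (Km + S[0])]
    sol = solve_ivp(rhs, (t[0], t[-1]), [S0], t_eval=t, rtol=1e-8)
    return sol.y[0]

t = np.linspace(0.0, 10.0, 25)
data = simulate_depletion(t, Vm=2.0, Km=1.5, R=0.3)
data = data + np.random.default_rng(0).normal(0.0, 0.05, t.size)  # add noise

# Fit Vm, Km, R directly; no closed-form integral solution is needed.
popt, _ = curve_fit(simulate_depletion, t, data, p0=[1.0, 1.0, 0.1],
                    bounds=(0.0, np.inf))
print("Vm, Km, R estimates:", popt)
```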

  15. Evaluation of a pan-serotype point-of-care rapid diagnostic assay for accurate detection of acute dengue infection.

    PubMed

    Vivek, Rosario; Ahamed, Syed Fazil; Kotabagi, Shalini; Chandele, Anmol; Khanna, Ira; Khanna, Navin; Nayak, Kaustuv; Dias, Mary; Kaja, Murali-Krishna; Shet, Anita

    2017-03-01

    The catastrophic rise in dengue infections in India and globally has created a need for an accurate, validated low-cost rapid diagnostic test (RDT) for dengue. We prospectively evaluated the diagnostic performance of NS1/IgM RDT (dengue day 1) using 211 samples from a pediatric dengue cohort representing all 4 serotypes in southern India. The dengue-positive panel consisted of 179 dengue real-time polymerase chain reaction (RT-PCR) positive samples from symptomatic children. The dengue-negative panel consisted of 32 samples from dengue-negative febrile children and asymptomatic individuals that were negative for dengue RT-PCR/NS1 enzyme-linked immunosorbent assay/IgM/IgG. NS1/IgM RDT sensitivity was 89.4% and specificity was 93.8%. The NS1/IgM RDT showed high sensitivity throughout the acute phase of illness, in primary and secondary infections, in different severity groups, and detected all 4 dengue serotypes, including coinfections. This NS1/IgM RDT is a useful point-of-care assay for rapid and reliable diagnosis of acute dengue and an excellent surveillance tool in our battle against dengue.

  16. Alpha's standard error (ASE): an accurate and precise confidence interval estimate.

    PubMed

    Duhachek, Adam; Iacobucci, Dawn

    2004-10-01

    This research presents the inferential statistics for Cronbach's coefficient alpha on the basis of the standard statistical assumption of multivariate normality. The estimation of alpha's standard error (ASE) and confidence intervals are described, and the authors analytically and empirically investigate the effects of the components of these equations. The authors then demonstrate the superiority of this estimate compared with previous derivations of ASE in a separate Monte Carlo simulation. The authors also present a sampling error and test statistic for a test of independent sample alphas. They conclude with a recommendation that all alpha coefficients be reported in conjunction with standard error or confidence interval estimates and offer SAS and SPSS programming codes for easy implementation.
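    For reference, coefficient alpha itself is alpha = k/(k-1) · (1 - sum of item variances / variance of the total score). The sketch below computes only this point estimate from a simulated score matrix; the paper's analytic standard error is not reproduced here, and a bootstrap over respondents would be one hedged alternative for interval estimates.

```python
import numpy as np

def cronbach_alpha(items):
    """Coefficient alpha for an (n_respondents x k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1.0)) * (1.0 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=(200, 1))
scores = latent + rng.normal(scale=0.8, size=(200, 5))   # 5 noisy items
print(round(cronbach_alpha(scores), 3))
```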

  17. A microbial clock provides an accurate estimate of the postmortem interval in a mouse model system

    PubMed Central

    Metcalf, Jessica L; Wegener Parfrey, Laura; Gonzalez, Antonio; Lauber, Christian L; Knights, Dan; Ackermann, Gail; Humphrey, Gregory C; Gebert, Matthew J; Van Treuren, Will; Berg-Lyons, Donna; Keepers, Kyle; Guo, Yan; Bullard, James; Fierer, Noah; Carter, David O; Knight, Rob

    2013-01-01

    Establishing the time since death is critical in every death investigation, yet existing techniques are susceptible to a range of errors and biases. For example, forensic entomology is widely used to assess the postmortem interval (PMI), but errors can range from days to months. Microbes may provide a novel method for estimating PMI that avoids many of these limitations. Here we show that postmortem microbial community changes are dramatic, measurable, and repeatable in a mouse model system, allowing PMI to be estimated within approximately 3 days over 48 days. Our results provide a detailed understanding of bacterial and microbial eukaryotic ecology within a decomposing corpse system and suggest that microbial community data can be developed into a forensic tool for estimating PMI. DOI: http://dx.doi.org/10.7554/eLife.01104.001 PMID:24137541

  18. Fast and accurate probability density estimation in large high dimensional astronomical datasets

    NASA Astrophysics Data System (ADS)

    Gupta, Pramod; Connolly, Andrew J.; Gardner, Jeffrey P.

    2015-01-01

    Astronomical surveys will generate measurements of hundreds of attributes (e.g. color, size, shape) on hundreds of millions of sources. Analyzing these large, high dimensional data sets will require efficient algorithms for data analysis. An example of this is probability density estimation, which is at the heart of many classification problems such as the separation of stars and quasars based on their colors. Popular density estimation techniques use binning or kernel density estimation. Kernel density estimation has a small memory footprint but often requires large computational resources. Binning has small computational requirements, but it is usually implemented with multi-dimensional arrays, which leads to memory requirements that scale exponentially with the number of dimensions. Hence neither technique scales well to large data sets in high dimensions. We present an alternative approach of binning implemented with hash tables (BASH tables). This approach uses the sparseness of data in the high dimensional space to ensure that the memory requirements are small. However, hashing requires some extra computation, so a priori it is not clear whether the reduction in memory requirements will lead to increased computational requirements. Through an implementation of BASH tables in C++ we show that the additional computational requirements of hashing are negligible. Hence this approach has small memory and computational requirements. We apply our density estimation technique to photometric selection of quasars using non-parametric Bayesian classification and show that the accuracy of the classification is the same as the accuracy of earlier approaches. Since the BASH table approach is one to three orders of magnitude faster than the earlier approaches, it may be useful in various other applications of density estimation in astrostatistics.
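    The core idea, binning keyed by a hash of the bin indices so that only occupied bins consume memory, can be sketched in a few lines of Python; this is a schematic stand-in for the authors' C++ BASH-table implementation, with all names illustrative.

```python
from collections import defaultdict
import numpy as np

def bash_histogram(points, bin_width):
    """Sparse n-D histogram: only occupied bins are stored, so memory scales
    with the data rather than exponentially with dimension."""
    counts = defaultdict(int)
    for p in np.asarray(points):
        counts[tuple((p // bin_width).astype(int))] += 1
    return counts

def density_at(counts, query, bin_width, n_total):
    """Histogram density estimate at a query point."""
    key = tuple((np.asarray(query) // bin_width).astype(int))
    return counts.get(key, 0) / (n_total * bin_width ** len(key))

pts = np.random.default_rng(2).normal(size=(100_000, 6))   # 6-D data
h = bash_histogram(pts, bin_width=0.5)
print(len(h), "occupied bins;", density_at(h, np.zeros(6), 0.5, len(pts)))
```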

  19. Spectral estimation from laser scanner data for accurate color rendering of objects

    NASA Astrophysics Data System (ADS)

    Baribeau, Rejean

    2002-06-01

    Estimation methods are studied for the recovery of the spectral reflectance across the visible range from sensing at just three discrete laser wavelengths. Methods based on principal component analysis and on spline interpolation are judged by the CIE94 color differences for some reference data sets. These include the Macbeth color checker, the OSA-UCS color charts, some artist pigments, and a collection of miscellaneous surface colors. The optimal three sampling wavelengths are also investigated. It is found that color can be estimated with an average accuracy of ΔE94 = 2.3 when the optimal wavelengths 455 nm, 540 nm, and 610 nm are used.

  20. Crop area estimation based on remotely-sensed data with an accurate but costly subsample

    NASA Technical Reports Server (NTRS)

    Gunst, R. F.

    1985-01-01

    Research activities conducted under the auspices of National Aeronautics and Space Administration Cooperative Agreement NCC 9-9 are discussed. During this contract period, research efforts were concentrated in two primary areas. The first area is an investigation of the use of measurement error models as alternatives to least squares regression estimators of crop production or timber biomass. The second primary area of investigation is the estimation of the mixing proportion of two-component mixture models. This report lists publications, technical reports, submitted manuscripts, and oral presentations generated by these research efforts. Possible areas of future research are mentioned.

  1. SU-F-BRF-09: A Non-Rigid Point Matching Method for Accurate Bladder Dose Summation in Cervical Cancer HDR Brachytherapy

    SciTech Connect

    Chen, H; Zhen, X; Zhou, L; Zhong, Z; Pompos, A; Yan, H; Jiang, S; Gu, X

    2014-06-15

    Purpose: To propose and validate a deformable point matching scheme for surface deformation to facilitate accurate bladder dose summation for fractionated HDR cervical cancer treatment. Methods: A deformable point matching scheme based on the thin-plate-spline robust point matching (TPS-RPM) algorithm is proposed for bladder surface registration. The surfaces of bladders segmented from fractional CT images are extracted and discretized with a triangular surface mesh. The deformation between two bladder surfaces is obtained by matching the two meshes' vertices via the TPS-RPM algorithm, and the deformation vector field (DVF) characteristic of this deformation is estimated by B-spline approximation. Numerically, the algorithm is quantitatively compared with the Demons algorithm using five clinical cervical cancer cases by several metrics: vertex-to-vertex distance (VVD), Hausdorff distance (HD), percent error (PE), and conformity index (CI). Experimentally, the algorithm is validated on a balloon phantom with 12 surface fiducial markers. The balloon is inflated with different amounts of water, and the displacement of the fiducial markers is benchmarked as ground truth to study the accuracy of the TPS-RPM-calculated DVFs. Results: In the numerical evaluation, the mean VVD is 3.7 (±2.0) mm after Demons and 1.3 (±0.9) mm after TPS-RPM. The mean HD is 14.4 mm after Demons and 5.3 mm after TPS-RPM. The mean PE is 101.7% after Demons and decreases to 18.7% after TPS-RPM. The mean CI is 0.63 after Demons and increases to 0.90 after TPS-RPM. In the phantom study, the mean Euclidean distance of the fiducials is 7.4 ± 3.0 mm after Demons and 4.2 ± 1.8 mm after TPS-RPM. Conclusions: The bladder wall deformation is more accurate using the feature-based TPS-RPM algorithm than the intensity-based Demons algorithm, indicating that TPS-RPM has the potential for accurate bladder dose deformation and dose summation for multi-fractional cervical HDR brachytherapy. This work is supported in part by

  2. Deep Wideband Single Pointings and Mosaics in Radio Interferometry: How Accurately Do We Reconstruct Intensities and Spectral Indices of Faint Sources?

    NASA Astrophysics Data System (ADS)

    Rau, U.; Bhatnagar, S.; Owen, F. N.

    2016-11-01

    Many deep wideband wide-field radio interferometric surveys are being designed to accurately measure intensities, spectral indices, and polarization properties of faint source populations. In this paper, we compare various wideband imaging methods to evaluate the accuracy to which intensities and spectral indices of sources close to the confusion limit can be reconstructed. We simulated a wideband single-pointing (C-array, L-Band (1-2 GHz)) and 46-pointing mosaic (D-array, C-Band (4-8 GHz)) JVLA observation using a realistic brightness distribution ranging from 1 μJy to 100 mJy and time-, frequency-, polarization-, and direction-dependent instrumental effects. The main results from these comparisons are (a) errors in the reconstructed intensities and spectral indices are larger for weaker sources even in the absence of simulated noise, (b) errors are systematically lower for joint reconstruction methods (such as Multi-Term Multi-Frequency-Synthesis (MT-MFS)) along with A-Projection for accurate primary beam correction, and (c) use of MT-MFS for image reconstruction eliminates Clean-bias (which is present otherwise). Auxiliary tests include solutions for deficiencies of data partitioning methods (e.g., the use of masks to remove clean bias and hybrid methods to remove sidelobes from sources left un-deconvolved), the effect of sources not at pixel centers, and the consequences of various other numerical approximations within software implementations. This paper also demonstrates the level of detail at which such simulations must be done in order to reflect reality, enable one to systematically identify specific reasons for every trend that is observed, and to estimate scientifically defensible imaging performance metrics and the associated computational complexity of the algorithms/analysis procedures. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.

  3. Voronoi-Based Curvature and Feature Estimation from Point Clouds.

    PubMed

    Mérigot, Quentin; Ovsjanikov, Maks; Guibas, Leonidas

    2011-06-01

    We present an efficient and robust method for extracting curvature information, sharp features, and normal directions of a piecewise smooth surface from its point cloud sampling in a unified framework. Our method is integral in nature and uses convolved covariance matrices of Voronoi cells of the point cloud which makes it provably robust in the presence of noise. We show that these matrices contain information related to curvature in the smooth parts of the surface, and information about the directions and angles of sharp edges around the features of a piecewise-smooth surface. Our method is applicable in both two and three dimensions, and can be easily parallelized, making it possible to process arbitrarily large point clouds, which was a challenge for Voronoi-based methods. In addition, we describe a Monte-Carlo version of our method, which is applicable in any dimension. We illustrate the correctness of both principal curvature information and feature extraction in the presence of varying levels of noise and sampling density on a variety of models. As a sample application, we use our feature detection method to segment point cloud samplings of piecewise-smooth surfaces.

  4. Accurate estimation of influenza epidemics using Google search data via ARGO

    PubMed Central

    Yang, Shihao; Santillana, Mauricio; Kou, S. C.

    2015-01-01

    Accurate real-time tracking of influenza outbreaks helps public health officials make timely and meaningful decisions that could save lives. We propose an influenza tracking model, ARGO (AutoRegression with GOogle search data), that uses publicly available online search data. In addition to having a rigorous statistical foundation, ARGO outperforms all previously available Google-search–based tracking models, including the latest version of Google Flu Trends, even though it uses only low-quality search data as input from publicly available Google Trends and Google Correlate websites. ARGO not only incorporates the seasonality in influenza epidemics but also captures changes in people’s online search behavior over time. ARGO is also flexible, self-correcting, robust, and scalable, making it a potentially powerful tool that can be used for real-time tracking of other social events at multiple temporal and spatial resolutions. PMID:26553980

  5. Do hand-held calorimeters provide reliable and accurate estimates of resting metabolic rate?

    PubMed

    Van Loan, Marta D

    2007-12-01

    This paper provides an overview of a new technique for indirect calorimetry and the assessment of resting metabolic rate. Information from the research literature includes findings on the reliability and validity of a new hand-held indirect calorimeter, as well as its use in clinical and field settings. Research findings to date are mixed. The MedGem instrument has provided more consistent results when compared to the Douglas bag method of measuring metabolic rate. The BodyGem instrument has been shown to be less accurate when compared to standard metabolic carts. Furthermore, when the BodyGem has been used with clinical patients or with undernourished individuals, the results have not been acceptable. Overall, there is not a large enough body of evidence to definitively support the use of these hand-held devices for assessment of metabolic rate in a wide variety of clinical or research environments.

  6. Accurate estimation of influenza epidemics using Google search data via ARGO.

    PubMed

    Yang, Shihao; Santillana, Mauricio; Kou, S C

    2015-11-24

    Accurate real-time tracking of influenza outbreaks helps public health officials make timely and meaningful decisions that could save lives. We propose an influenza tracking model, ARGO (AutoRegression with GOogle search data), that uses publicly available online search data. In addition to having a rigorous statistical foundation, ARGO outperforms all previously available Google-search-based tracking models, including the latest version of Google Flu Trends, even though it uses only low-quality search data as input from publicly available Google Trends and Google Correlate websites. ARGO not only incorporates the seasonality in influenza epidemics but also captures changes in people's online search behavior over time. ARGO is also flexible, self-correcting, robust, and scalable, making it a potentially powerful tool that can be used for real-time tracking of other social events at multiple temporal and spatial resolutions.

  7. Raman spectroscopy for highly accurate estimation of the age of refrigerated porcine muscle

    NASA Astrophysics Data System (ADS)

    Timinis, Constantinos; Pitris, Costas

    2016-03-01

    The high water content of meat, combined with all the nutrients it contains, makes it vulnerable to spoilage at all stages of production and storage, even when refrigerated at 5 °C. A non-destructive and in situ tool for meat sample testing, which could provide an accurate indication of the storage time of meat, would be very useful for the control of meat quality as well as for consumer safety. The proposed solution is based on Raman spectroscopy, which is non-invasive and can be applied in situ. For the purposes of this project, 42 meat samples from 14 animals were obtained, and three Raman spectra per sample were collected every two days for two weeks. The spectra were subsequently processed, and the sample age was calculated using a set of linear differential equations. In addition, the samples were classified into categories corresponding to age in 2-day steps (i.e., 0, 2, 4, 6, 8, 10, 12 or 14 days old), using linear discriminant analysis and cross-validation. Contrary to other studies, where samples were simply grouped into two categories (higher or lower quality, suitable or unsuitable for human consumption, etc.), in this study the age was predicted with a mean error of ~1 day (20%) or classified, in 2-day steps, with 100% accuracy. Although Raman spectroscopy has been used in the past for the analysis of meat samples, the proposed methodology predicts the sample age far more accurately than any previous report in the literature.

  8. Lower bound on reliability for Weibull distribution when shape parameter is not estimated accurately

    NASA Technical Reports Server (NTRS)

    Huang, Zhaofeng; Porter, Albert A.

    1990-01-01

    The mathematical relationships between the shape parameter Beta and estimates of reliability and a life limit lower bound for the two parameter Weibull distribution are investigated. It is shown that under rather general conditions, both the reliability lower bound and the allowable life limit lower bound (often called a tolerance limit) have unique global minimums over a range of Beta. Hence lower bound solutions can be obtained without assuming or estimating Beta. The existence and uniqueness of these lower bounds are proven. Some real data examples are given to show how these lower bounds can be easily established and to demonstrate their practicality. The method developed here has proven to be extremely useful when using the Weibull distribution in analysis of no-failure or few-failures data. The results are applicable not only in the aerospace industry but anywhere that system reliabilities are high.
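    A minimal sketch of the idea, assuming a standard zero-failure (Weibayes-style) lower-bound construction rather than the paper's exact derivation: compute the bound for a range of shape parameters and take the global minimum, so that no estimate of Beta is needed.

```python
import numpy as np

def reliability_lower_bound(times, t_mission, beta, conf=0.90):
    """Zero-failure (Weibayes-style) lower bound on reliability at t_mission
    for a fixed shape parameter beta; one common construction, not
    necessarily the exact bound derived in the paper."""
    eta_low = (np.sum(times ** beta) / -np.log(1.0 - conf)) ** (1.0 / beta)
    return np.exp(-(t_mission / eta_low) ** beta)

times = np.array([1200.0, 1500.0, 1500.0, 1800.0])   # hypothetical test hours
betas = np.linspace(0.5, 5.0, 200)
bounds = [reliability_lower_bound(times, 1000.0, b) for b in betas]
i = int(np.argmin(bounds))
print(f"global minimum bound {bounds[i]:.4f} at beta = {betas[i]:.2f}")
```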

  9. Accurate dynamic power estimation for CMOS combinational logic circuits with real gate delay model.

    PubMed

    Fadl, Omnia S; Abu-Elyazeed, Mohamed F; Abdelhalim, Mohamed B; Amer, Hassanein H; Madian, Ahmed H

    2016-01-01

    Dynamic power estimation is essential in designing VLSI circuits where many parameters are involved but the only circuit parameter that is related to the circuit operation is the nodes' toggle rate. This paper discusses a deterministic and fast method to estimate the dynamic power consumption for CMOS combinational logic circuits using gate-level descriptions based on the Logic Pictures concept to obtain the circuit nodes' toggle rate. The delay model for the logic gates is the real-delay model. To validate the results, the method is applied to several circuits and compared against exhaustive, as well as Monte Carlo, simulations. The proposed technique was shown to save up to 96% processing time compared to exhaustive simulation.
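    The toggle rate enters through the textbook dynamic-power relation P_dyn = sum_i alpha_i·C_i·Vdd²·f, with alpha_i the toggle rate of node i (some texts include a factor of 1/2); a minimal sketch with made-up node data:

```python
def dynamic_power(toggle_rates, node_caps_farad, vdd, freq_hz):
    """P_dyn = sum_i alpha_i * C_i * Vdd^2 * f (no 1/2 factor included)."""
    total_switched_cap = sum(a * c for a, c in zip(toggle_rates, node_caps_farad))
    return total_switched_cap * vdd ** 2 * freq_hz

toggle_rates = [0.10, 0.25, 0.05]       # transitions per clock cycle (made up)
node_caps = [12e-15, 8e-15, 20e-15]     # node capacitances in farads (made up)
print(dynamic_power(toggle_rates, node_caps, vdd=1.2, freq_hz=500e6), "W")
```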

  10. Accurate group velocity estimation for unmanned aerial vehicle-based acoustic atmospheric tomography.

    PubMed

    Rogers, Kevin J; Finn, Anthony

    2017-02-01

    Acoustic atmospheric tomography calculates temperature and wind velocity fields in a slice or volume of atmosphere based on travel time estimates between strategically located sources and receivers. The technique discussed in this paper uses the natural acoustic signature of an unmanned aerial vehicle as it overflies an array of microphones on the ground. The sound emitted by the aircraft is recorded on-board and by the ground microphones. The group velocities of the intersecting sound rays are then derived by comparing these measurements. Tomographic inversion is used to estimate the temperature and wind fields from the group velocity measurements. This paper describes a technique for deriving travel time (and hence group velocity) with an accuracy of 0.1% using these assets. This is shown to be sufficient to obtain highly plausible tomographic inversion results that correlate well with independent SODAR measurements.

  11. Techniques for accurate estimation of net discharge in a tidal channel

    USGS Publications Warehouse

    Simpson, Michael R.; Bland, Roger

    1999-01-01

    An ultrasonic velocity meter discharge-measurement site in a tidally affected region of the Sacramento-San Joaquin rivers was used to study the accuracy of the index velocity calibration procedure. Calibration data consisting of ultrasonic velocity meter index velocity and concurrent acoustic Doppler discharge measurement data were collected during three time periods. The relative magnitude of equipment errors, acoustic Doppler discharge measurement errors, and calibration errors were evaluated. Calibration error was the most significant source of error in estimating net discharge. Using a comprehensive calibration method, net discharge estimates developed from the three sets of calibration data differed by less than an average of 4 cubic meters per second. Typical maximum flow rates during the data-collection period averaged 750 cubic meters per second.

  12. Lower bound on reliability for Weibull distribution when shape parameter is not estimated accurately

    NASA Technical Reports Server (NTRS)

    Huang, Zhaofeng; Porter, Albert A.

    1991-01-01

    The mathematical relationships between the shape parameter Beta and estimates of reliability and a life limit lower bound for the two parameter Weibull distribution are investigated. It is shown that under rather general conditions, both the reliability lower bound and the allowable life limit lower bound (often called a tolerance limit) have unique global minimums over a range of Beta. Hence lower bound solutions can be obtained without assuming or estimating Beta. The existence and uniqueness of these lower bounds are proven. Some real data examples are given to show how these lower bounds can be easily established and to demonstrate their practicality. The method developed here has proven to be extremely useful when using the Weibull distribution in analysis of no-failure or few-failures data. The results are applicable not only in the aerospace industry but anywhere that system reliabilities are high.

  13. Lung motion estimation using dynamic point shifting: An innovative model based on a robust point matching algorithm

    SciTech Connect

    Yi, Jianbing; Yang, Xuan; Li, Yan-Ran; Chen, Guoliang

    2015-10-15

    Purpose: Image-guided radiotherapy is an advanced 4D radiotherapy technique that has been developed in recent years. However, respiratory motion causes significant uncertainties in image-guided radiotherapy procedures. To address these issues, an innovative lung motion estimation model based on robust point matching is proposed in this paper. Methods: An innovative robust point matching algorithm using dynamic point shifting is proposed to estimate patient-specific lung motion during free breathing from 4D computed tomography data. The correspondence of the landmark points is determined from the Euclidean distance between the landmark points and the similarity between the local images that are centered at the points at the same time. To ensure that the points in the source image correspond to the points in the target image during other phases, virtual target points are first created and shifted based on the similarity between the local image centered at the source point and the local image centered at the virtual target point. Second, the target points are shifted by the constrained inverse function mapping the target points to the virtual target points. The source point set and the shifted target point set are used to estimate the transformation function between the source image and the target image. Results: The performance of the authors' method is evaluated on the two publicly available DIR-lab and POPI-model lung datasets. For target registration errors computed on 750 landmark points in six phases of the DIR-lab dataset and 37 landmark points in ten phases of the POPI-model dataset, the mean and standard deviation with the authors' method are 1.11 and 1.11 mm, but they are 2.33 and 2.32 mm without considering image intensity, and 1.17 and 1.19 mm with sliding conditions. For the two phases of maximum inhalation and maximum exhalation in the DIR-lab dataset with 300 landmark points in each case, the mean and standard deviation of the target registration errors on the

  14. Towards accurate estimates of the spin-state energetics of spin-crossover complexes within density functional theory: a comparative case study of cobalt(II) complexes.

    PubMed

    Vargas, Alfredo; Krivokapic, Itana; Hauser, Andreas; Lawson Daku, Latévi Max

    2013-03-21

    We report a detailed DFT study of the energetic and structural properties of the spin-crossover Co(ii) complex [Co(tpy)(2)](2+) (tpy = 2,2':6',2''-terpyridine) in the low-spin (LS) and the high-spin (HS) states, using several generalized gradient approximation and hybrid functionals. In either spin-state, the results obtained with the functionals are consistent with one another and in good agreement with available experimental data. Although the different functionals correctly predict the LS state as the electronic ground state of [Co(tpy)(2)](2+), they give estimates of the HS-LS zero-point energy difference which strongly depend on the functional used. This dependency on the functional was also reported for the DFT estimates of the zero-point energy difference in the HS complex [Co(bpy)(3)](2+) (bpy = 2,2'-bipyridine) [A. Vargas, A. Hauser and L. M. Lawson Daku, J. Chem. Theory Comput., 2009, 5, 97]. The comparison of the and estimates showed that all functionals correctly predict an increase of the zero-point energy difference upon the bpy → tpy ligand substitution, which furthermore weakly depends on the functionals, amounting to . From these results and basic thermodynamic considerations, we establish that, despite their limitations, current DFT methods can be applied to the accurate determination of the spin-state energetics of complexes of a transition metal ion, or of these complexes in different environments, provided that the spin-state energetics is accurately known in one case. Thus, making use of the availability of a highly accurate ab initio estimate of the HS-LS energy difference in the complex [Co(NCH)(6)](2+) [L. M. Lawson Daku, F. Aquilante, T. W. Robinson and A. Hauser, J. Chem. Theory Comput., 2012, 8, 4216], we obtain for [Co(tpy)(2)](2+) and [Co(bpy)(3)](2+) best estimates of and , in good agreement with the known magnetic behaviour of the two complexes.

  15. Estimation of Watershed Scale Soil Moisture from Point Measurements in SMEX02

    NASA Astrophysics Data System (ADS)

    Cosh, M. H.; Jackson, T. J.; Bindlish, R.; Prueger, J.

    2002-12-01

    Understanding watershed-scale soil moisture distributions is necessary to validate current remote sensing, such as the Advanced Microwave Scanning Radiometer (AMSR). Unfortunately, remote sensing technology does not currently resolve the land surface at a scale that can be easily validated with ground observations. One method of validation uses existing soil moisture measurement networks and scales up to the resolution of the remote sensing footprints. Soil Moisture Experiment 2002 (SMEX02) was an excellent opportunity to implement one such soil moisture gaging system which, when calibrated, provided robust estimates of watershed-scale soil moisture throughout the summer of 2002. Twelve fields distributed across the Walnut Creek watershed were instrumented with in situ soil moisture probes and were intensively sampled during the experiment, between June 25 and July 12, 2002. The sampling sites were analyzed for temporal stability, and scaling relationships were developed. These point measurements were scaled up to the field scale (~800 m) and then to the watershed scale (~25 km) for the field experiment period and were shown to be accurate indicators of the large-scale soil moisture distribution. Point measurements were then used as a basis for a watershed estimate for several months beyond SMEX02, thereby providing a long record of watershed-scale soil moisture which can be used for validation. The accuracy of the soil moisture estimates was assessed by a variety of techniques, including split-sample verification. This analysis is a first step in the implementation of large-scale soil moisture validation utilizing networks such as the Soil Climate Analysis Network (SCAN) as a basis for calibrating soil moisture satellite products.

  16. A Simple and Accurate Equation for Peak Capacity Estimation in Two Dimensional Liquid Chromatography

    PubMed Central

    Li, Xiaoping; Stoll, Dwight R.; Carr, Peter W.

    2009-01-01

    Two dimensional liquid chromatography (2DLC) is a very powerful way to greatly increase the resolving power and overall peak capacity of liquid chromatography. The traditional “product rule” for peak capacity usually overestimates the true resolving power due to neglect of the often quite severe under-sampling effect and thus provides poor guidance for optimizing the separation and biases comparisons to optimized one dimensional gradient liquid chromatography. Here we derive a simple yet accurate equation for the effective two dimensional peak capacity that incorporates a correction for under-sampling of the first dimension. The results show that not only is the speed of the second dimension separation important for reducing the overall analysis time, but it plays a vital role in determining the overall peak capacity when the first dimension is under-sampled. A surprising subsidiary finding is that for relatively short 2DLC separations (much less than a couple of hours), the first dimension peak capacity is far less important than is commonly believed and need not be highly optimized, for example through use of long columns or very small particles. PMID:19053226
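    Assuming the correction referred to is the Davis-Stoll-Carr first-dimension broadening factor <beta> = sqrt(1 + 3.35·(ts·n1/tg)²), the effective peak capacity can be computed as below; the numbers are illustrative.

```python
import math

def effective_2d_peak_capacity(n1, n2, t_s, t_g):
    """Product rule n1*n2 divided by the average first-dimension broadening
    factor <beta> = sqrt(1 + 3.35*(t_s*n1/t_g)**2); t_s is the sampling
    time and t_g the first-dimension gradient time."""
    beta = math.sqrt(1.0 + 3.35 * (t_s * n1 / t_g) ** 2)
    return n1 * n2 / beta

# 1D capacity 100 sampled every 20 s over a 3600 s gradient; 2D capacity 30:
print(round(effective_2d_peak_capacity(100, 30, 20.0, 3600.0), 1))  # ~2103.7
```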

  17. Accurate Estimation of Expression Levels of Homologous Genes in RNA-seq Experiments

    NASA Astrophysics Data System (ADS)

    Paşaniuc, Bogdan; Zaitlen, Noah; Halperin, Eran

    Next generation high throughput sequencing (NGS) is poised to replace array based technologies as the experiment of choice for measuring RNA expression levels. Several groups have demonstrated the power of this new approach (RNA-seq), making significant and novel contributions and simultaneously proposing methodologies for the analysis of RNA-seq data. In a typical experiment, millions of short sequences (reads) are sampled from RNA extracts and mapped back to a reference genome. The number of reads mapping to each gene is used as proxy for its corresponding RNA concentration. A significant challenge in analyzing RNA expression of homologous genes is the large fraction of the reads that map to multiple locations in the reference genome. Currently, these reads are either dropped from the analysis, or a naïve algorithm is used to estimate their underlying distribution. In this work, we present a rigorous alternative for handling the reads generated in an RNA-seq experiment within a probabilistic model for RNA-seq data; we develop maximum likelihood based methods for estimating the model parameters. In contrast to previous methods, our model takes into account the fact that the DNA of the sequenced individual is not a perfect copy of the reference sequence. We show with both simulated and real RNA-seq data that our new method improves the accuracy and power of RNA-seq experiments.
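    A common way to resolve multi-mapped reads, and a minimal stand-in for the maximum-likelihood machinery described above, is an expectation-maximization loop over read-to-gene assignments; note that the authors' full model also accounts for differences between the sequenced individual and the reference, which this sketch omits.

```python
import numpy as np

def em_abundances(compat, n_iter=200):
    """Minimal EM for multi-mapping reads. compat[r, g] = 1 if read r is
    compatible with gene g. A standard scheme, not the authors' full model
    (which also allows mismatches against the reference genome)."""
    compat = np.asarray(compat, dtype=float)
    theta = np.full(compat.shape[1], 1.0 / compat.shape[1])  # gene abundances
    for _ in range(n_iter):
        w = compat * theta                   # E-step: soft read assignments
        w /= w.sum(axis=1, keepdims=True)
        theta = w.sum(axis=0)                # M-step: re-estimate abundances
        theta /= theta.sum()
    return theta

# Two reads unique to gene 0, one unique to gene 1, one ambiguous:
print(em_abundances([[1, 0], [1, 0], [0, 1], [1, 1]]))  # ~[0.667, 0.333]
```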

  18. Accurate estimation of expression levels of homologous genes in RNA-seq experiments.

    PubMed

    Paşaniuc, Bogdan; Zaitlen, Noah; Halperin, Eran

    2011-03-01

    Next generation high-throughput sequencing (NGS) is poised to replace array-based technologies as the experiment of choice for measuring RNA expression levels. Several groups have demonstrated the power of this new approach (RNA-seq), making significant and novel contributions and simultaneously proposing methodologies for the analysis of RNA-seq data. In a typical experiment, millions of short sequences (reads) are sampled from RNA extracts and mapped back to a reference genome. The number of reads mapping to each gene is used as proxy for its corresponding RNA concentration. A significant challenge in analyzing RNA expression of homologous genes is the large fraction of the reads that map to multiple locations in the reference genome. Currently, these reads are either dropped from the analysis, or a naive algorithm is used to estimate their underlying distribution. In this work, we present a rigorous alternative for handling the reads generated in an RNA-seq experiment within a probabilistic model for RNA-seq data; we develop maximum likelihood-based methods for estimating the model parameters. In contrast to previous methods, our model takes into account the fact that the DNA of the sequenced individual is not a perfect copy of the reference sequence. We show with both simulated and real RNA-seq data that our new method improves the accuracy and power of RNA-seq experiments.

  19. ACCURATE ESTIMATIONS OF STELLAR AND INTERSTELLAR TRANSITION LINES OF TRIPLY IONIZED GERMANIUM

    SciTech Connect

    Dutta, Narendra Nath; Majumder, Sonjoy

    2011-08-10

    In this paper, we report weighted oscillator strengths of E1 transitions and transition probabilities of E2 transitions among different low-lying states of triply ionized germanium, using the highly correlated relativistic coupled cluster (RCC) method. Due to the abundance of Ge IV in the solar system, planetary nebulae, white dwarf stars, etc., the study of such transitions is important from an astrophysical point of view. The weighted oscillator strengths of E1 transitions are presented in length and velocity gauge forms to check the accuracy of the calculations. We find excellent agreement between calculated and experimental excitation energies. Oscillator strengths of a few transitions, wherever studied in the literature via other theoretical and experimental approaches, are compared with our RCC calculations.

  20. Developing accurate survey methods for estimating population sizes and trends of the critically endangered Nihoa Millerbird and Nihoa Finch.

    USGS Publications Warehouse

    Gorresen, P. Marcos; Camp, Richard J.; Brinck, Kevin W.; Farmer, Chris

    2012-01-01

    Point-transect surveys indicated that millerbirds were more abundant than shown by the strip-transect method, and were estimated at 802 birds in 2010 (95% CI = 652-964) and 704 birds in 2011 (95% CI = 579-837). Point-transect surveys yielded population estimates with improved precision, which will permit trends to be detected in shorter time periods and with greater statistical power than is available from strip-transect survey methods. Mean finch population estimates and associated uncertainty were not markedly different among the three survey methods, but the performance of the models used to estimate density and population size is expected to improve as data from additional surveys are incorporated. Using the point-transect survey, the mean finch population size was estimated at 2,917 birds in 2010 (95% CI = 2,037-3,965) and 2,461 birds in 2011 (95% CI = 1,682-3,348). Preliminary testing of the line-transect method in 2011 showed that it would not generate sufficient detections to effectively model bird density and, consequently, to produce relatively precise population size estimates. Both species were fairly evenly distributed across Nihoa and appear to occur in all or nearly all available habitat. The time expended and area traversed by observers were similar among survey methods; however, point-transect surveys do not require that observers walk a straight transect line, thereby allowing them to avoid culturally or biologically sensitive areas and minimize the adverse effects of recurrent travel to any particular area. In general, point-transect surveys detect more birds than strip-survey methods, thereby improving precision and the resulting population size and trend estimation. The method is also better suited for the steep and uneven terrain of Nihoa

  1. [Research on maize multispectral image accurate segmentation and chlorophyll index estimation].

    PubMed

    Wu, Qian; Sun, Hong; Li, Min-zan; Song, Yuan-yuan; Zhang, Yan-e

    2015-01-01

    In order to rapidly acquire maize growing information in the field, a non-destructive method of measuring the maize chlorophyll content index was developed based on multi-spectral imaging and image processing technology. The experiment was conducted at Yangling in Shaanxi province of China, and the crop was Zheng-dan 958 planted in an approximately 1 000 m × 600 m experimental field. Firstly, a 2-CCD multi-spectral image monitoring system was used to acquire the canopy images. The system was based on a dichroic prism, allowing precise separation of the visible (Blue (B), Green (G), Red (R): 400-700 nm) and near-infrared (NIR, 760-1 000 nm) bands. The multispectral images were output as RGB and NIR images via the system, which was fixed vertically 2 m above the ground with an angular field of 50°. The SPAD index of each sample was measured synchronously as the chlorophyll content index. Secondly, after image smoothing using an adaptive smoothing filter algorithm, the NIR maize image was selected for segmenting the maize leaves from the background, because the gray histogram showed a large difference between plant and soil background. The NIR image segmentation algorithm proceeded in preliminary and accurate segmentation steps: (1) The results of the OTSU image segmentation method and the variable threshold algorithm were compared, and the latter proved better for corn plant and weed segmentation. As a result, the variable threshold algorithm based on local statistics was selected for the preliminary image segmentation. Expansion and corrosion (dilation and erosion) were used to optimize the segmented image. (2) The region labeling algorithm was used to segment corn plants from the soil and weed background with an accuracy of 95.59%. Then, the multi-spectral image of the maize canopy was accurately segmented in the R, G and B bands separately. Thirdly, image parameters were extracted based on the segmented visible and NIR images. The average gray
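    The global OTSU threshold used as the baseline in step (1) can be sketched as follows; this is the standard between-class-variance criterion, not the study's preferred local variable-threshold algorithm, and the synthetic data are illustrative.

```python
import numpy as np

def otsu_threshold(gray, n_bins=256):
    """Global Otsu threshold: maximize the between-class variance of the
    gray-level histogram."""
    hist, edges = np.histogram(gray, bins=n_bins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                      # class-0 probability
    mu = np.cumsum(p * centers)            # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    return centers[np.nanargmax(between)]

# Synthetic bimodal "image": dark soil pixels vs. bright plant pixels
rng = np.random.default_rng(3)
img = np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 15, 5000)])
t = otsu_threshold(img)
mask = img > t                             # plant vs. soil background
print(round(t, 1), mask.mean())
```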

  2. A process-based approach to estimate point snow instability

    NASA Astrophysics Data System (ADS)

    Reuter, B.; Schweizer, J.; van Herwijnen, A.

    2015-05-01

    Snow instability data provide information about the mechanical state of the snow cover and are essential for forecasting snow avalanches. So far, direct observations of instability (recent avalanches, shooting cracks or whumpf sounds) are complemented with field tests such as the rutschblock test, since no measurement method for instability exists. We propose a new approach based on snow mechanical properties derived from the snow micro-penetrometer that takes into account the two essential processes during dry-snow avalanche release: failure initiation and crack propagation. To estimate the propensity of failure initiation we define a stress-based failure criterion, whereas the propensity of crack propagation is described by the critical cut length as obtained with a propagation saw test. The input parameters include layer thickness, snow density, effective elastic modulus, strength and specific fracture energy of the weak layer - all derived from the penetration-force signal acquired with the snow micro-penetrometer. Both instability measures were validated with independent field data and correlated well with results from field tests. Comparisons with observed signs of instability clearly indicated that a snowpack is only prone to avalanche if the two separate conditions for failure initiation and crack propagation are fulfilled. To our knowledge, this is the first time that an objective method for estimating snow instability has been proposed. The approach can either be used directly based on field measurements with the snow micro-penetrometer, or be implemented in numerical snow cover models. With an objective measure of instability at hand, the problem of spatial variations of instability and its causes can now be tackled.

  3. The challenges of accurately estimating time of long bone injury in children.

    PubMed

    Pickett, Tracy A

    2015-07-01

    The ability to determine the time an injury occurred can be of crucial significance in forensic medicine and holds special relevance to the investigation of child abuse. However, dating paediatric long bone injury, including fractures, is nuanced by complexities specific to the paediatric population. These challenges include the ability to identify bone injury in a growing or only partially-calcified skeleton, different injury patterns seen within the spectrum of the paediatric population, the effects of bone growth on healing as a separate entity from injury, differential healing rates seen at different ages, and the relative scarcity of information regarding healing rates in children, especially the very young. The challenges posed by these factors are compounded by a lack of consistency in defining and categorizing healing parameters. This paper sets out the primary limitations of existing knowledge regarding estimating timing of paediatric bone injury. Consideration and understanding of the multitude of factors affecting bone injury and healing in children will assist those providing opinion in the medical-legal forum.

  4. Error Estimation And Accurate Mapping Based ALE Formulation For 3D Simulation Of Friction Stir Welding

    NASA Astrophysics Data System (ADS)

    Guerdoux, Simon; Fourment, Lionel

    2007-05-01

    An Arbitrary Lagrangian Eulerian (ALE) formulation is developed to simulate the different stages of the Friction Stir Welding (FSW) process with the FORGE3® F.E. software. A splitting method is utilized: a) the material velocity/pressure and temperature fields are calculated, b) the mesh velocity is derived from the domain boundary evolution and an adaptive refinement criterion provided by error estimation, c) P1 and P0 variables are remapped. Different velocity computation and remap techniques have been investigated, providing significant improvement with respect to more standard approaches. The proposed ALE formulation is applied to FSW simulation. Steady state welding, but also transient phases are simulated, showing good robustness and accuracy of the developed formulation. Friction parameters are identified for an Eulerian steady state simulation by comparison with experimental results. Void formation can be simulated. Simulations of the transient plunge and welding phases help to better understand the deposition process that occurs at the trailing edge of the probe. Flexibility and robustness of the model finally allows investigating the influence of new tooling designs on the deposition process.

  5. A new method based on the subpixel Gaussian model for accurate estimation of asteroid coordinates

    NASA Astrophysics Data System (ADS)

    Savanevych, V. E.; Briukhovetskyi, O. B.; Sokovikova, N. S.; Bezkrovny, M. M.; Vavilova, I. B.; Ivashchenko, Yu. M.; Elenin, L. V.; Khlamov, S. V.; Movsesian, Ia. S.; Dashkova, A. M.; Pogorelov, A. V.

    2015-08-01

    We describe a new iterative method to estimate asteroid coordinates, based on a subpixel Gaussian model of the discrete object image. The method operates on continuous parameters (asteroid coordinates) in a discrete observational space (the set of pixel potentials) of the CCD frame. In this model, the form of the coordinate distribution of the photons hitting a pixel of the CCD frame is known a priori, while the associated parameters are determined from a real digital object image. The method, which flexibly adapts to any form of object image, has a high measurement accuracy along with a low computational complexity, due to a maximum-likelihood procedure implemented to obtain the best fit, instead of a least-squares method with the Levenberg-Marquardt algorithm for minimization of the quadratic form. Since 2010, the method has been tested as the basis of our Collection Light Technology (COLITEC) software, which has been installed at several observatories across the world with the aim of the automatic discovery of asteroids and comets in sets of CCD frames. As a result, four comets (C/2010 X1 (Elenin), P/2011 NO1 (Elenin), C/2012 S1 (ISON) and P/2013 V3 (Nevski)) as well as more than 1500 small Solar system bodies (including five near-Earth objects (NEOs), 21 Trojan asteroids of Jupiter and one Centaur object) have been discovered. We discuss these results, which allowed us to compare the accuracy parameters of the new method and confirm its efficiency. In 2014, the COLITEC software was recommended to all members of the Gaia-FUN-SSO network for analysing observations as a tool to detect faint moving objects in frames.
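
    To make the "subpixel Gaussian model" concrete, here is a minimal Python sketch of fitting a Gaussian image model to a CCD cutout. Note the hedge: this sketch uses ordinary least squares with scipy, which is precisely the approach the paper improves upon with a maximum-likelihood fit; function names and the initial guess are ours.

      import numpy as np
      from scipy.optimize import least_squares

      def fit_gaussian_centroid(img):
          """Fit a 2D Gaussian to a small CCD cutout; return subpixel (x0, y0)."""
          ny, nx = img.shape
          y, x = np.mgrid[0:ny, 0:nx]

          def residuals(p):
              amp, x0, y0, sigma, bg = p
              model = bg + amp * np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * sigma**2))
              return (model - img).ravel()

          # Initial guess from the brightest pixel
          y0, x0 = np.unravel_index(np.argmax(img), img.shape)
          p0 = [img.max() - np.median(img), x0, y0, 1.5, np.median(img)]
          fit = least_squares(residuals, p0)
          return fit.x[1], fit.x[2]  # subpixel centroid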

  6. Investigation of Alternative Methods Including Jackknifing for Estimating Point Availability of a System.

    DTIC Science & Technology

    1981-09-01

    Properties of two alternative procedures to the Jackknife Point and Confidence Interval Estimation Procedure of Gaver and Chu have been studied and compared with the Jackknife Point and Confidence Interval Availability Estimation Procedure. Numerical results from simulations are presented in this report. (Author)

  7. Aggregate versus individual-level sexual behavior assessment: how much detail is needed to accurately estimate HIV/STI risk?

    PubMed

    Pinkerton, Steven D; Galletly, Carol L; McAuliffe, Timothy L; DiFranceisco, Wayne; Raymond, H Fisher; Chesson, Harrell W

    2010-02-01

    The sexual behaviors of HIV/sexually transmitted infection (STI) prevention intervention participants can be assessed on a partner-by-partner basis; in aggregate (i.e., total numbers of sex acts, collapsed across partners); or using a combination of these two methods (e.g., assessing five partners in detail and any remaining partners in aggregate). There is a natural trade-off between the level of sexual behavior detail and the precision of HIV/STI acquisition risk estimates. The results of this study indicate that relatively simple aggregate data collection techniques suffice to adequately estimate HIV risk. For highly infectious STIs, in contrast, accurate STI risk assessment requires more intensive partner-by-partner methods.
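
    A hedged sketch of the trade-off discussed above, using a generic Bernoulli-process transmission model (a simplified relative of the models common in this literature, not the authors' exact calculation); the per-act transmission probability and prevalence values are hypothetical.

      def risk_per_partner(partners, alpha=0.001):
          """Per-partner model: partners is a list of (n_acts, prevalence)
          tuples; alpha is a hypothetical per-act transmission probability."""
          p_no_infection = 1.0
          for n_acts, prevalence in partners:
              p_no_infection *= 1.0 - prevalence * (1.0 - (1.0 - alpha) ** n_acts)
          return 1.0 - p_no_infection

      def risk_aggregate(total_acts, n_partners, mean_prevalence, alpha=0.001):
          """Aggregate model: acts collapsed across partners, spread evenly."""
          acts_each = total_acts / n_partners
          p_escape_one = 1.0 - mean_prevalence * (1.0 - (1.0 - alpha) ** acts_each)
          return 1.0 - p_escape_one ** n_partners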

  8. Linear-In-The-Parameters Oblique Least Squares (LOLS) Provides More Accurate Estimates of Density-Dependent Survival

    PubMed Central

    Vieira, Vasco M. N. C. S.; Engelen, Aschwin H.; Huanel, Oscar R.; Guillemin, Marie-Laure

    2016-01-01

    Survival is a fundamental demographic component and the importance of its accurate estimation goes beyond the traditional estimation of life expectancy. The evolutionary stability of isomorphic biphasic life-cycles and the occurrence of their different ploidy phases at uneven abundances are hypothesized to be driven by differences in survival rates between haploids and diploids. We monitored Gracilaria chilensis, a commercially exploited red alga with an isomorphic biphasic life-cycle, and found density-dependent survival with competition and Allee effects. While estimating the linear-in-the-parameters survival function, all model I regression methods (i.e., vertical least squares) provided biased line-fits, rendering them inappropriate for studies of ecology, evolution or population management. Hence, we developed an iterative two-step non-linear model II regression (i.e., oblique least squares), which provided improved line-fits and estimates of the survival function parameters, while remaining robust to the data aspects that usually render regression methods numerically unstable. PMID:27936048
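
    For contrast with vertical least squares, a minimal model II line fit is sketched below. This is plain major-axis regression via the leading eigenvector of the covariance matrix, not the authors' iterative two-step oblique least squares, but it illustrates how model II methods treat both variables symmetrically.

      import numpy as np

      def major_axis_fit(x, y):
          """Model II (major-axis) line fit: the slope follows the leading
          eigenvector of the covariance matrix, minimizing perpendicular
          rather than vertical distances."""
          X = np.column_stack([x, y])
          mean = X.mean(axis=0)
          cov = np.cov(X, rowvar=False)
          eigvals, eigvecs = np.linalg.eigh(cov)
          v = eigvecs[:, np.argmax(eigvals)]   # direction of maximum variance
          slope = v[1] / v[0]
          intercept = mean[1] - slope * mean[0]
          return slope, intercept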

  9. Accurate Estimation of Fungal Diversity and Abundance through Improved Lineage-Specific Primers Optimized for Illumina Amplicon Sequencing

    PubMed Central

    Walters, William A.; Lennon, Niall J.; Bochicchio, James; Krohn, Andrew; Pennanen, Taina

    2016-01-01

    While high-throughput sequencing methods are revolutionizing fungal ecology, recovering accurate estimates of species richness and abundance has proven elusive. We sought to design internal transcribed spacer (ITS) primers and an Illumina protocol that would maximize coverage of the kingdom Fungi while minimizing nontarget eukaryotes. We inspected alignments of the 5.8S and large subunit (LSU) ribosomal genes and evaluated potential primers using PrimerProspector. We tested the resulting primers using tiered-abundance mock communities and five previously characterized soil samples. We recovered operational taxonomic units (OTUs) belonging to all 8 members in both mock communities, despite DNA abundances spanning 3 orders of magnitude. The expected and observed read counts were strongly correlated (r = 0.94 to 0.97). However, several taxa were consistently over- or underrepresented, likely due to variation in rRNA gene copy numbers. The Illumina data resulted in clustering of soil samples identical to that obtained with Sanger sequence clone library data using different primers. Furthermore, the two methods produced distance matrices with a Mantel correlation of 0.92. Nonfungal sequences comprised less than 0.5% of the soil data set, with most attributable to vascular plants. Our results suggest that high-throughput methods can produce fairly accurate estimates of fungal abundances in complex communities. Further improvements might be achieved through corrections for rRNA copy number and utilization of standardized mock communities. IMPORTANCE Fungi play numerous important roles in the environment. Improvements in sequencing methods are providing revolutionary insights into fungal biodiversity, yet accurate estimates of the number of fungal species (i.e., richness) and their relative abundances in an environmental sample (e.g., soil, roots, water, etc.) remain difficult to obtain. We present improved methods for high-throughput Illumina sequencing of the

  10. An Efficient Operator for the Change Point Estimation in Partial Spline Model.

    PubMed

    Han, Sung Won; Zhong, Hua; Putt, Mary

    2015-05-01

    In bioinformatics applications, estimating the starting and ending points of a drop-down in longitudinal data is important. One possible approach to estimating such change times is to use the partial spline model with change points. To estimate the change time, the minimum operator in terms of a smoothing parameter has been widely used, but we showed that the minimum operator causes a large MSE of the change point estimates. In this paper, we proposed the summation operator in terms of a smoothing parameter, and our simulation study showed that the summation operator gives a smaller MSE for estimated change points than the minimum one. We also applied the proposed approach to experimental data on blood flow during photodynamic cancer therapy.

  11. A simple method for accurate liver volume estimation by use of curve-fitting: a pilot study.

    PubMed

    Aoyama, Masahito; Nakayama, Yoshiharu; Awai, Kazuo; Inomata, Yukihiro; Yamashita, Yasuyuki

    2013-01-01

    In this paper, we describe the effectiveness of our curve-fitting method by comparing liver volumes estimated with our new technique to volumes obtained with the standard manual contour-tracing method. Hepatic parenchymal-phase images of 13 patients were obtained with multi-detector CT scanners after intravenous bolus administration of 120-150 mL of contrast material (300 mgI/mL). The liver contours of all sections were traced manually by an abdominal radiologist, and the liver volume was computed by summing the volumes inside the contours. The interval between the first and last sections was then divided into 100 equal parts, and the volumes were re-sampled by linear interpolation. We generated 13 model profile curves, each by averaging 12 cases and leaving out one case, and we estimated the profile curve for each patient by fitting the volume values at 4 points using a scale and translation transform. Finally, we determined the liver volume by integrating the sampling points of the profile curve. We used Bland-Altman analysis to evaluate the agreement between the volumes estimated with our curve-fitting method and those measured with the manual contour-tracing method. The correlation between the volume measured by manual tracing and that estimated with our curve-fitting method was relatively high (r = 0.98; slope 0.97; p < 0.001). The mean difference between manual tracing and our method was -22.9 cm(3) (SD of the difference, 46.2 cm(3)). Our volume-estimating technique, which requires the tracing of only 4 images, exhibited a relatively high linear correlation with the manual tracing technique.
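
    A minimal Python sketch of the profile-curve idea, under our reading of the abstract: a mean per-section volume curve is fitted to a few traced sections with a scale-and-translation transform and then integrated. The function names and the exact form of the transform are our assumptions, not the authors' implementation.

      import numpy as np
      from scipy.optimize import least_squares

      def estimate_volume(model_profile, sample_positions, sample_values):
          """model_profile: mean per-section volume curve on a 0..99 grid
          (from the leave-one-out averaging described above); the curve is
          fitted to a handful of traced sections, then summed."""
          grid = np.arange(len(model_profile))

          def residuals(p):
              scale, shift = p
              fitted = scale * np.interp(sample_positions - shift, grid, model_profile)
              return fitted - sample_values

          scale, shift = least_squares(residuals, x0=[1.0, 0.0]).x
          fitted_curve = scale * np.interp(grid - shift, grid, model_profile)
          return fitted_curve.sum()   # integrate over the resampled sections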

  12. Kidney Stone Volume Estimation from Computerized Tomography Images Using a Model Based Method of Correcting for the Point Spread Function

    PubMed Central

    Duan, Xinhui; Wang, Jia; Qu, Mingliang; Leng, Shuai; Liu, Yu; Krambeck, Amy; McCollough, Cynthia

    2014-01-01

    Purpose We propose a method to improve the accuracy of volume estimation of kidney stones from computerized tomography images. Materials and Methods The proposed method consisted of 2 steps. A threshold equal to the average of the computerized tomography number of the object and the background was first applied to determine full width at half maximum volume. Correction factors were then applied, which were precalculated based on a model of a sphere and a 3-dimensional Gaussian point spread function. The point spread function was measured in a computerized tomography scanner to represent the response of the scanner to a point-like object. Method accuracy was validated using 6 small cylindrical phantoms with 2 volumes of 21.87 and 99.9 mm3, and 3 attenuations, respectively, and 76 kidney stones with a volume range of 6.3 to 317.4 mm3. Volumes estimated by the proposed method were compared with full width at half maximum volumes. Results The proposed method was significantly more accurate than full width at half maximum volume (p <0.0001). The magnitude of improvement depended on stone volume with smaller stones benefiting more from the method. For kidney stones 10 to 20 mm3 in volume the average improvement in accuracy was the greatest at 19.6%. Conclusions The proposed method achieved significantly improved accuracy compared with threshold methods. This may lead to more accurate stone management. PMID:22819107
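
    A minimal sketch of the two steps, assuming a 3D CT array in Hounsfield units; the peak value stands in for the object CT number, and the correction factor would come from the sphere-plus-Gaussian-PSF model described above (left as a placeholder here).

      import numpy as np

      def fwhm_volume(ct, background_hu, voxel_volume_mm3, correction=1.0):
          """Step 1: threshold at the mean of object and background CT numbers
          (full width at half maximum). Step 2: apply a precalculated
          correction factor (placeholder value; the paper derives it from a
          sphere convolved with the measured 3D Gaussian PSF)."""
          threshold = 0.5 * (ct.max() + background_hu)
          n_voxels = np.count_nonzero(ct >= threshold)
          return correction * n_voxels * voxel_volume_mm3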

  13. Bayesian deconvolution of scanning electron microscopy images using point-spread function estimation and non-local regularization.

    PubMed

    Roels, Joris; Aelterman, Jan; De Vylder, Jonas; Hiep Luong; Saeys, Yvan; Philips, Wilfried

    2016-08-01

    Microscopy is one of the most essential imaging techniques in life sciences. High-quality images are required in order to solve (potentially life-saving) biomedical research problems. Many microscopy techniques do not achieve sufficient resolution for these purposes, being limited by physical diffraction and hardware deficiencies. Electron microscopy addresses optical diffraction by measuring emitted or transmitted electrons instead of photons, yielding nanometer resolution. Despite pushing back the diffraction limit, blur should still be taken into account because of practical hardware imperfections and remaining electron diffraction. Deconvolution algorithms can remove some of the blur in post-processing but they depend on knowledge of the point-spread function (PSF) and should accurately regularize noise. Any errors in the estimated PSF or noise model will reduce their effectiveness. This paper proposes a new procedure to estimate the lateral component of the point spread function of a 3D scanning electron microscope more accurately. We also propose a Bayesian maximum a posteriori deconvolution algorithm with a non-local image prior which employs this PSF estimate and previously developed noise statistics. We demonstrate visual quality improvements and show that applying our method improves the quality of subsequent segmentation steps.
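
    To make the deconvolution step concrete, the sketch below runs classical Richardson-Lucy from scikit-image on a synthetically blurred image with an assumed Gaussian PSF. This is a stand-in illustration only: the paper's contribution is a measured lateral PSF and a Bayesian MAP deconvolution with a non-local prior, which is considerably more involved.

      import numpy as np
      from scipy.signal import convolve2d
      from skimage import data, restoration

      image = data.camera() / 255.0   # any grayscale image as a stand-in

      # Hypothetical Gaussian lateral PSF (the paper estimates this from data)
      x = np.arange(-7, 8)
      xx, yy = np.meshgrid(x, x)
      psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))
      psf /= psf.sum()

      blurred = convolve2d(image, psf, mode="same", boundary="symm")
      deblurred = restoration.richardson_lucy(blurred, psf, num_iter=30)  # `iterations=` in older skimage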

  14. Accurate estimation of entropy in very short physiological time series: the problem of atrial fibrillation detection in implanted ventricular devices.

    PubMed

    Lake, Douglas E; Moorman, J Randall

    2011-01-01

    Entropy estimation is useful but difficult in short time series. For example, automated detection of atrial fibrillation (AF) in very short heart beat interval time series would be useful in patients with cardiac implantable electronic devices that record only from the ventricle. Such devices require efficient algorithms, and the clinical situation demands accuracy. Toward these ends, we optimized the sample entropy measure, which reports the probability that short templates will match with others within the series. We developed general methods for the rational selection of the template length m and the matching tolerance r. The major innovation was to allow r to vary so that sufficient matches are found for confident entropy estimation, with conversion of the final probability to a density by dividing by the matching region volume, (2r)^m. The optimized sample entropy estimate and the mean heart beat interval each contributed to accurate detection of AF in as few as 12 heartbeats. The final algorithm, called the coefficient of sample entropy (COSEn), was developed using the canonical MIT-BIH database and validated in a new and much larger set of consecutive Holter monitor recordings from the University of Virginia. In patients over 40 years of age, COSEn has a high degree of accuracy in distinguishing AF from normal sinus rhythm in 12-beat calculations performed hourly. The most common errors are atrial or ventricular ectopy, which increase entropy despite sinus rhythm, and atrial flutter, which can have low or high entropy states depending on the dynamics of atrioventricular conduction.
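
    A hedged Python sketch of the quantities involved: a plain (quadratic-time) sample entropy and a COSEn-style correction with the density-conversion term ln(2r) and mean-RR normalization. The exact constants follow our reading of the abstract, not the authors' published code; m, r and the RR series are placeholders.

      import numpy as np

      def sample_entropy(rr, m=1, r=30.0):
          """Negative log of the conditional probability that templates
          matching at length m also match at length m+1 (tolerance r)."""
          rr = np.asarray(rr, dtype=float)
          n = len(rr)

          def count_matches(length):
              count = 0
              for i in range(n - length):
                  for j in range(i + 1, n - length + 1):
                      if np.max(np.abs(rr[i:i+length] - rr[j:j+length])) <= r:
                          count += 1
              return count

          a, b = count_matches(m + 1), count_matches(m)
          return np.inf if a == 0 or b == 0 else -np.log(a / b)

      def cosen(rr, m=1, r=30.0):
          """COSEn-style estimate: convert the match probability to a density
          (the ln(2r) term) and normalize by the mean RR interval."""
          return sample_entropy(rr, m, r) + np.log(2 * r) - np.log(np.mean(rr))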

  15. Parameter estimation by fixed point of function of information processing intensity

    NASA Astrophysics Data System (ADS)

    Jankowski, Robert; Makowski, Marcin; Piotrowski, Edward W.

    2014-12-01

    We present a new method of estimating the dispersion of a distribution which is based on the surprising property of a function that measures information processing intensity. It turns out that this function has a maximum at its fixed point. Fixed-point equation is used to estimate the parameter of the distribution that is of interest to us. The main result consists in showing that only part of available experimental data is relevant for the parameters estimation process. We illustrate the estimation method by using the example of an exponential distribution.

  16. Impact of interfacial high-density water layer on accurate estimation of adsorption free energy by Jarzynski's equality

    NASA Astrophysics Data System (ADS)

    Zhang, Zhisen; Wu, Tao; Wang, Qi; Pan, Haihua; Tang, Ruikang

    2014-01-01

    The interactions between proteins/peptides and materials are crucial to research and development in many biomedical engineering fields. The energetics of such interactions are key in the evaluation of new proteins/peptides and materials. Much research has recently focused on the quality of free energy profiles by Jarzynski's equality, a widely used equation in biosystems. In the present work, considerable discrepancies were observed between the results obtained by Jarzynski's equality and those derived by umbrella sampling in biomaterial-water model systems. Detailed analyses confirm that such discrepancies turn up only when the target molecule moves in the high-density water layer on a material surface. Then a hybrid scheme was adopted based on this observation. The agreement between the results of the hybrid scheme and umbrella sampling confirms the former observation, which indicates an approach to a fast and accurate estimation of adsorption free energy for large biomaterial interfacial systems.
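
    The estimator at issue is easy to state in code. The sketch below computes the Jarzynski free-energy estimate ΔF = -kT ln⟨exp(-W/kT)⟩ from a set of nonequilibrium work samples, with a standard shift for numerical stability; it does not reproduce the paper's umbrella-sampling comparison or the hybrid scheme.

      import numpy as np

      def jarzynski_free_energy(work_samples, kT=0.593):  # kT in kcal/mol near 298 K
          """Jarzynski estimator over nonequilibrium work realizations, e.g.
          from steered pulling simulations."""
          w = np.asarray(work_samples, dtype=float)
          w0 = w.min()   # subtract the minimum before exponentiating
          return w0 - kT * np.log(np.mean(np.exp(-(w - w0) / kT)))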

  17. A Unique Equation to Estimate Flash Points of Selected Pure Liquids: Application to the Correction of Probably Erroneous Flash Point Values

    NASA Astrophysics Data System (ADS)

    Catoire, Laurent; Naudet, Valérie

    2004-12-01

    A simple empirical equation is presented for the estimation of closed-cup flash points for pure organic liquids. Data needed for the estimation of a flash point (FP) are the normal boiling point (Teb), the standard enthalpy of vaporization at 298.15 K [ΔvapH°(298.15 K)] of the compound, and the number of carbon atoms (n) in the molecule. The bounds for this equation are: -100⩽FP(°C)⩽+200; 250⩽Teb(K)⩽650; 20⩽ΔvapH°(298.15 K)/(kJ mol-1)⩽110; 1⩽n⩽21. Compared to other methods (empirical equations, structural group contribution methods, and neural network quantitative structure-property relationships), this simple equation is shown to accurately predict the flash points for a variety of compounds, whatever their chemical groups (monofunctional and polyfunctional compounds) and whatever their structure (linear, branched, cyclic). The same equation is shown to be valid for hydrocarbons, organic nitrogen compounds, organic oxygen compounds, organic sulfur compounds, organic halogen compounds, and organic silicon compounds. It seems that the flash points of organic deuterium compounds, organic tin compounds, organic nickel compounds, organic phosphorus compounds, organic boron compounds, and organic germanium compounds can also be predicted accurately by this equation. A mean absolute deviation of about 3 °C, a standard deviation of about 2 °C, and a maximum absolute deviation of 10 °C are obtained when predictions are compared to experimental data for more than 600 compounds. For all these compounds, the absolute deviation is equal to or lower than the reproducibility expected at a 95% confidence level for closed-cup flash point measurement. This estimation technique has its limitations concerning polyhalogenated compounds, for which the equation should be used with caution. The mean absolute deviation and maximum absolute deviation observed, and the fact that the equation provides unbiased predictions, lead to the conclusion that
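
    The abstract gives the equation's applicability bounds but not its coefficients, so only the domain check can be reproduced faithfully; the sketch below encodes those stated bounds.

      def in_applicability_domain(fp_c=None, teb_k=None, dvap_h=None, n_carbons=None):
          """Check the stated bounds of the flash-point correlation:
          -100 <= FP(°C) <= 200, 250 <= Teb(K) <= 650,
          20 <= ΔvapH°(298.15 K)/(kJ/mol) <= 110, 1 <= n <= 21.
          (Coefficients of the correlation itself are not reproduced here.)"""
          checks = [
              fp_c is None or -100 <= fp_c <= 200,
              teb_k is None or 250 <= teb_k <= 650,
              dvap_h is None or 20 <= dvap_h <= 110,
              n_carbons is None or 1 <= n_carbons <= 21,
          ]
          return all(checks)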

  18. Accurate state estimation from uncertain data and models: an application of data assimilation to mathematical models of human brain tumors

    PubMed Central

    2011-01-01

    Background Data assimilation refers to methods for updating the state vector (initial condition) of a complex spatiotemporal model (such as a numerical weather model) by combining new observations with one or more prior forecasts. We consider the potential feasibility of this approach for making short-term (60-day) forecasts of the growth and spread of a malignant brain cancer (glioblastoma multiforme) in individual patient cases, where the observations are synthetic magnetic resonance images of a hypothetical tumor. Results We apply a modern state estimation algorithm (the Local Ensemble Transform Kalman Filter), previously developed for numerical weather prediction, to two different mathematical models of glioblastoma, taking into account likely errors in model parameters and measurement uncertainties in magnetic resonance imaging. The filter can accurately shadow the growth of a representative synthetic tumor for 360 days (six 60-day forecast/update cycles) in the presence of a moderate degree of systematic model error and measurement noise. Conclusions The mathematical methodology described here may prove useful for other modeling efforts in biology and oncology. An accurate forecast system for glioblastoma may prove useful in clinical settings for treatment planning and patient counseling. Reviewers This article was reviewed by Anthony Almudevar, Tomas Radivoyevitch, and Kristin Swanson (nominated by Georg Luebeck). PMID:22185645
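
    As a concrete, much-simplified relative of the Local Ensemble Transform Kalman Filter used in the paper, the following sketch implements one stochastic ensemble Kalman filter analysis step with perturbed observations; the shapes and the linear observation operator H are our assumptions.

      import numpy as np

      def enkf_update(ensemble, obs, H, obs_cov, rng=np.random.default_rng(0)):
          """One stochastic EnKF analysis step.
          ensemble: (n_members, n_state); H: (n_obs, n_state); obs_cov: (n_obs, n_obs)."""
          n_members = ensemble.shape[0]
          X = ensemble - ensemble.mean(axis=0)        # state anomalies
          Y = X @ H.T                                 # observation-space anomalies
          P_yy = Y.T @ Y / (n_members - 1) + obs_cov
          P_xy = X.T @ Y / (n_members - 1)
          K = P_xy @ np.linalg.inv(P_yy)              # Kalman gain
          # Perturbed observations keep the analysis spread statistically correct
          obs_pert = obs + rng.multivariate_normal(np.zeros(len(obs)), obs_cov, n_members)
          innovations = obs_pert - ensemble @ H.T
          return ensemble + innovations @ K.T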

  19. Reservoir evaluation of thin-bedded turbidites and hydrocarbon pore thickness estimation for an accurate quantification of resource

    NASA Astrophysics Data System (ADS)

    Omoniyi, Bayonle; Stow, Dorrik

    2016-04-01

    One of the major challenges in the assessment of and production from turbidite reservoirs is to take full account of thin and medium-bedded turbidites (<10 cm and <30 cm, respectively). Although such thinner, low-pay sands may comprise a significant proportion of the reservoir succession, they can go unnoticed by conventional analysis and so negatively impact on reserve estimation, particularly in fields producing from prolific thick-bedded turbidite reservoirs. Field development plans often take little note of such thin beds, which are therefore bypassed by mainstream production. In fact, the trapped and bypassed fluids can be vital where maximising field value and optimising production are key business drivers. We have studied in detail a succession of thin-bedded turbidites associated with thicker-bedded reservoir facies in the North Brae Field, UKCS, using a combination of conventional logs and cores to assess the significance of thin-bedded turbidites in computing hydrocarbon pore thickness (HPT). This quantity, being an indirect measure of thickness, is critical for an accurate estimation of original-oil-in-place (OOIP). By using a combination of conventional and unconventional logging analysis techniques, we obtain three different results for the reservoir intervals studied. These results include estimated net sand thickness, average sand thickness, and their distribution trend within a 3D structural grid. The net sand thickness varies from 205 to 380 ft, and HPT ranges from 21.53 to 39.90 ft. We observe that an integrated approach (neutron-density cross plots conditioned to cores) to HPT quantification reduces the associated uncertainties significantly, resulting in estimation of 96% of actual HPT. Further work will focus on assessing the 3D dynamic connectivity of the low-pay sands with the surrounding thick-bedded turbidite facies.
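
    For readers unfamiliar with the quantity, hydrocarbon pore thickness is conventionally the interval sum of thickness × porosity × hydrocarbon saturation; a generic sketch follows (the authors' specific log-conditioning workflow is not reproduced).

      import numpy as np

      def hydrocarbon_pore_thickness(thickness_ft, porosity, hydrocarbon_saturation):
          """Standard petrophysical summation HPT = sum(h_i * phi_i * Sh_i)
          over the net sand intervals."""
          h = np.asarray(thickness_ft)
          phi = np.asarray(porosity)
          sh = np.asarray(hydrocarbon_saturation)
          return float(np.sum(h * phi * sh))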

  20. Accurate Treatment of Electrostatics during Molecular Adsorption in Nanoporous Crystals without Assigning Point Charges to Framework Atoms

    SciTech Connect

    Watanabe, Taku; Manz, Thomas A.; Sholl, David S.

    2011-02-28

    Molecular simulations have become an important complement to experiments for studying gas adsorption and separation in crystalline nanoporous materials. Conventionally, these simulations use force fields that model adsorbate-pore interactions by assigning point charges to the atoms of the adsorbent. The assignment of framework charges always introduces ambiguity because there are many different choices for defining point charges, even when the true electron density of a material is known. We show how to completely avoid such ambiguity by using the electrostatic potential energy surface (EPES) calculated from plane wave density functional theory (DFT). We illustrate this approach by simulating CO2 adsorption in four metal-organic frameworks (MOFs): IRMOF-1, ZIF-8, ZIF-90, and Zn(nicotinate)2. The resulting CO2 adsorption isotherms are insensitive to the exchange-correlation functional used in the DFT calculation of the EPES but are sensitive to changes in the crystal structure and lattice parameters. Isotherms computed from the DFT EPES are compared to those computed from several point charge models. This comparison makes possible, for the first time, an unbiased assessment of the accuracy of these point charge models for describing adsorption in MOFs. We find an unusually high Henry’s constant (109 mmol/g·bar) and intermediate isosteric heat of adsorption (34.9 kJ/mol) for Zn(nicotinate)2, which makes it a potentially attractive material for CO2 adsorption applications.
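
    A minimal sketch of the core idea, under our assumptions: instead of summing Coulomb terms over framework point charges, interpolate a precomputed DFT electrostatic potential grid at each adsorbate charge site. Real GCMC implementations handle units, symmetry and long-range terms far more carefully.

      import numpy as np
      from scipy.interpolate import RegularGridInterpolator

      def electrostatic_energy_from_epes(potential_grid, cell_lengths, sites, charges):
          """Framework-adsorbate electrostatic energy from a DFT potential grid
          over one (orthorhombic, for simplicity) unit cell; no framework
          point charges are ever assigned."""
          cell_lengths = np.asarray(cell_lengths, dtype=float)
          # Pad periodically so the interpolation domain closes at the cell edge
          padded = np.pad(potential_grid, ((0, 1), (0, 1), (0, 1)), mode="wrap")
          axes = [np.linspace(0.0, L, n + 1)
                  for L, n in zip(cell_lengths, potential_grid.shape)]
          phi = RegularGridInterpolator(axes, padded)
          sites = np.mod(sites, cell_lengths)   # wrap sites into the cell
          return float(np.sum(np.asarray(charges) * phi(sites)))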

  2. Estimating the state of a geophysical system with sparse observations: time delay methods to achieve accurate initial states for prediction

    NASA Astrophysics Data System (ADS)

    An, Zhe; Rey, Daniel; Ye, Jingxin; Abarbanel, Henry D. I.

    2017-01-01

    The problem of forecasting the behavior of a complex dynamical system through analysis of observational time-series data becomes difficult when the system expresses chaotic behavior and the measurements are sparse, in both space and/or time. Despite the fact that this situation is quite typical across many fields, including numerical weather prediction, the issue of whether the available observations are "sufficient" for generating successful forecasts is still not well understood. An analysis by Whartenby et al. (2013) found that in the context of the nonlinear shallow water equations on a β plane, standard nudging techniques require observing approximately 70 % of the full set of state variables. Here we examine the same system using a method introduced by Rey et al. (2014a), which generalizes standard nudging methods to utilize time delayed measurements. We show that in certain circumstances, it provides a sizable reduction in the number of observations required to construct accurate estimates and high-quality predictions. In particular, we find that this estimate of 70 % can be reduced to about 33 % using time delays, and even further if Lagrangian drifter locations are also used as measurements.
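
    Standard nudging, the baseline that the time-delay method generalizes, is compact enough to sketch. Below, a Lorenz-63 system is relaxed toward an observation of its x component; the coupling constant and time step are arbitrary illustrative choices, and the time-delay generalization of Rey et al. is not implemented here.

      import numpy as np

      def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0/3.0):
          x, y, z = state
          return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

      def nudged_step(state, obs_x, k=5.0, dt=0.01):
          """One Euler step of standard nudging: a relaxation term pulls the
          observed component toward the measurement; unobserved components
          are corrected only through the model dynamics."""
          coupling = np.array([k * (obs_x - state[0]), 0.0, 0.0])  # only x observed
          return state + dt * (lorenz63(state) + coupling)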

  3. Central blood pressure estimation by using N-point moving average method in the brachial pulse wave.

    PubMed

    Sugawara, Rie; Horinaka, Shigeo; Yagi, Hiroshi; Ishimura, Kimihiko; Honda, Takeharu

    2015-05-01

    Recently, a method of estimating the central systolic blood pressure (C-SBP) by applying an N-point moving average to the radial or brachial artery waveform has been reported. We therefore investigated the relationship between the C-SBP estimated from the brachial artery pressure waveform using the N-point moving average method and the C-SBP measured invasively with a catheter. C-SBP was calculated from the scaled right brachial artery pressure waveforms recorded with a VaSera VS-1500, using an N/6 moving average. This estimated C-SBP was compared with the invasively measured C-SBP within a few minutes. In 41 patients who underwent cardiac catheterization (mean age: 65 years), the invasively measured C-SBP was significantly lower than the right cuff-based brachial BP (138.2 ± 26.3 vs 141.0 ± 24.9 mm Hg, difference -2.78 ± 1.36 mm Hg, P = 0.048). The cuff-based SBP was significantly higher than the invasively measured C-SBP in subjects younger than 60 years. However, the C-SBP estimated with the N/6 moving average from the scaled right brachial artery pressure waveforms and the invasively measured C-SBP did not differ significantly (137.8 ± 24.2 vs 138.2 ± 26.3 mm Hg, difference -0.49 ± 1.39, P = 0.73). The N/6-point moving average method, applied to the non-invasively acquired brachial artery waveform calibrated by the cuff-based brachial SBP, was an accurate, convenient and useful method for estimating C-SBP. Thus, C-SBP can be estimated simply by applying a regular arm cuff, which makes the method highly feasible in routine practice.
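
    A hedged sketch of the published idea (device implementations differ in detail): the brachial waveform of one cardiac cycle is calibrated to the cuff SBP/DBP, smoothed with a moving-average window of one-sixth of its samples, and the peak of the smoothed wave is taken as the C-SBP estimate.

      import numpy as np

      def estimate_csbp(brachial_wave, cuff_sbp, cuff_dbp):
          """N/6 moving-average estimate of central SBP; brachial_wave is
          assumed to hold one cardiac cycle of samples."""
          w = np.asarray(brachial_wave, dtype=float)
          # Scale the raw waveform so its extrema match the cuff pressures
          scaled = cuff_dbp + (w - w.min()) * (cuff_sbp - cuff_dbp) / (w.max() - w.min())
          n = max(1, len(scaled) // 6)                        # N/6 window
          smoothed = np.convolve(scaled, np.ones(n) / n, mode="valid")
          return smoothed.max()                               # estimated C-SBP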

  4. On a fourth order accurate implicit finite difference scheme for hyperbolic conservation laws. II - Five-point schemes

    NASA Technical Reports Server (NTRS)

    Harten, A.; Tal-Ezer, H.

    1981-01-01

    This paper presents a family of two-level five-point implicit schemes for the solution of one-dimensional systems of hyperbolic conservation laws, which generalized the Crank-Nicholson scheme to fourth order accuracy (4-4) in both time and space. These 4-4 schemes are nondissipative and unconditionally stable. Special attention is given to the system of linear equations associated with these 4-4 implicit schemes. The regularity of this system is analyzed and efficiency of solution-algorithms is examined. A two-datum representation of these 4-4 implicit schemes brings about a compactification of the stencil to three mesh points at each time-level. This compact two-datum representation is particularly useful in deriving boundary treatments. Numerical results are presented to illustrate some properties of the proposed scheme.

  5. Atypical late-time singular regimes accurately diagnosed in stagnation-point-type solutions of 3D Euler flows

    NASA Astrophysics Data System (ADS)

    Mulungye, Rachel M.; Lucas, Dan; Bustamante, Miguel D.

    2016-02-01

    We revisit, both numerically and analytically, the finite-time blowup of the infinite-energy solution of the 3D Euler equations of stagnation-point type introduced by Gibbon et al. (1999). By employing the method of mapping to regular systems, presented in Bustamante (2011) and extended to the symmetry-plane case by Mulungye et al. (2015), we establish a curious property of this solution that was not observed in early studies: before but near singularity time, the blowup goes from a fast transient to a slower regime that is well resolved spectrally, even at mid-resolutions of 512^2. This late-time regime has an atypical spectrum: it is Gaussian rather than exponential in the wavenumbers. The analyticity-strip width decays to zero in a finite time, albeit so slowly that it remains well above the collocation-point scale for all simulation times t < T* - 10^-9000, where T* is the singularity time. Reaching such a proximity to singularity time is not possible in the original temporal variable, because floating-point double precision (≈ 10^-16) creates a 'machine-epsilon' barrier. Due to this limitation on the original independent variable, the mapped variables now provide an improved assessment of the relevant blowup quantities, crucially with acceptable accuracy at an unprecedented closeness to the singularity time: T* - t ≈ 10^-140.

  6. Parameter estimation and model selection for Neyman-Scott point processes.

    PubMed

    Tanaka, Ushio; Ogata, Yosihiko; Stoyan, Dietrich

    2008-02-01

    This paper proposes an approximative method for maximum likelihood estimation of parameters of Neyman-Scott and similar point processes. It is based on the point pattern resulting from forming all difference points of pairs of points in the window of observation. The intensity function of this constructed point process can be expressed in terms of second-order characteristics of the original process. This opens the way to parameter estimation, if the difference pattern is treated as a non-homogeneous Poisson process. The computational feasibility and accuracy of this approach is examined by means of simulated data. Furthermore, the method is applied to two biological data sets. For these data, various cluster process models are considered and compared with respect to their goodness-of-fit.
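
    To make the construction concrete, the sketch below simulates a Thomas process (a standard Neyman-Scott model) and forms the pattern of all pairwise difference points on which the approximate Poisson likelihood is based; parameter values are arbitrary and edge effects are ignored.

      import numpy as np

      rng = np.random.default_rng(42)

      def thomas_process(kappa, mu, sigma, window=1.0):
          """Thomas (Neyman-Scott) cluster process on [0, window]^2:
          Poisson parents, Poisson(mu) offspring, Gaussian displacements."""
          n_parents = rng.poisson(kappa * window**2)
          parents = rng.uniform(0, window, size=(n_parents, 2))
          points = [p + rng.normal(0, sigma, size=(rng.poisson(mu), 2)) for p in parents]
          return np.vstack(points)

      def difference_pattern(points):
          """All pairwise difference points of the observed pattern."""
          d = points[:, None, :] - points[None, :, :]
          mask = ~np.eye(len(points), dtype=bool)
          return d[mask]

      pts = thomas_process(kappa=20, mu=10, sigma=0.02)
      diffs = difference_pattern(pts)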

  7. Estimating the gas transfer velocity: a prerequisite for more accurate and higher resolution GHG fluxes (lower Aare River, Switzerland)

    NASA Astrophysics Data System (ADS)

    Sollberger, S.; Perez, K.; Schubert, C. J.; Eugster, W.; Wehrli, B.; Del Sontro, T.

    2013-12-01

    Currently, carbon dioxide (CO2) and methane (CH4) emissions from lakes, reservoirs and rivers are readily investigated due to the global warming potential of those gases and the role these inland waters play in the carbon cycle. However, there is a lack of high spatiotemporally-resolved emission estimates, and how to accurately assess the gas transfer velocity (K) remains controversial. In anthropogenically-impacted systems where run-of-river reservoirs disrupt the flow of sediments by increasing the erosion and load accumulation patterns, the resulting production of carbonic greenhouse gases (GH-C) is likely to be enhanced. The GH-C flux is thus counteracting the terrestrial carbon sink in these environments that act as net carbon emitters. The aim of this project was to determine the GH-C emissions from a medium-sized river heavily impacted by several impoundments and channelization through a densely-populated region of Switzerland. Estimating gas emission from rivers is not trivial and recently several models have been put forth to do so; therefore a second goal of this project was to compare the river emission models available with direct measurements. Finally, we further validated the modeled fluxes by using a combined approach with water sampling, chamber measurements, and highly temporal GH-C monitoring using an equilibrator. We conducted monthly surveys along the 120 km of the lower Aare River where we sampled for dissolved CH4 ('manual' sampling) at a 5-km sampling resolution, and measured gas emissions directly with chambers over a 35 km section. We calculated fluxes (F) via the boundary layer equation (F=K×(Cw-Ceq)) that uses the water-air GH-C concentration (C) gradient (Cw-Ceq) and K, which is the most sensitive parameter. K was estimated using 11 different models found in the literature with varying dependencies on: river hydrology (n=7), wind (2), heat exchange (1), and river width (1). We found that chamber fluxes were always higher than boundary
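
    The boundary layer flux equation quoted above is a one-liner in code; the sketch below only emphasizes that K carries the units (and hence the sensitivity), and the example values are hypothetical.

      def gas_flux(k, c_water, c_equilibrium):
          """F = K * (Cw - Ceq); with K in m/day and concentrations in
          mmol/m^3, F comes out in mmol m^-2 day^-1."""
          return k * (c_water - c_equilibrium)

      # Hypothetical CH4 example: K = 1.2 m/day, Cw = 250, Ceq = 3 mmol/m^3
      f_ch4 = gas_flux(1.2, 250.0, 3.0)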

  8. Inverse estimation of near-field temperature and surface heat flux via single point temperature measurement

    NASA Astrophysics Data System (ADS)

    Wu, Chen-Wu; Shu, Yong-Hua; Xie, Ji-Jia; Jiang, Jian-Zheng; Fan, Jing

    2017-02-01

    A concept was developed to inversely estimate the near-field temperature as well as the surface heat flux for the transient heat conduction problem with an unknown heat-flux boundary condition. A mathematical formula was derived for the inverse estimation of the near-field temperature and surface heat flux from a single point temperature measurement. The experiments were carried out in a vacuum chamber, and the theoretically predicted temperatures were verified at specific positions. The inverse estimation principle was validated and the estimation deviation was evaluated for the present configuration.

  9. An interior point method for state estimation with current magnitude measurements and inequality constraints

    SciTech Connect

    Handschin, E.; Langer, M.; Kliokys, E.

    1995-12-31

    The possibility of power system state estimation with non-traditional measurement configuration is investigated. It is assumed that some substations are equipped with current magnitude measurements. Unique state estimation is possible, in such a situation, if currents are combined with voltage or power measurements and inequality constraints on node power injections are taken into account. The state estimation algorithm facilitating the efficient incorporation of inequality constraints is developed using an interior point optimization method. Simulation results showing the performance of the algorithm are presented. The method can be used for state estimation in medium voltage subtransmission and distribution networks.

  10. Robust normal estimation of point cloud with sharp features via subspace clustering

    NASA Astrophysics Data System (ADS)

    Luo, Pei; Wu, Zhuangzhi; Xia, Chunhe; Feng, Lu; Jia, Bo

    2014-01-01

    Normal estimation is an essential step in point cloud based geometric processing, such as high quality point based rendering and surface reconstruction. In this paper, we present a clustering based method for normal estimation which preserves sharp features. For a piecewise smooth point cloud, the k-nearest neighbors of one point lie on a union of multiple subspaces. Given the PCA normals as input, we perform a subspace clustering algorithm to segment these subspaces. Normals are estimated by the points lying in the same subspace as the center point. In contrast to the previous method, we exploit the low-rankness of the input data, by seeking the lowest rank representation among all the candidates that can represent one normal as linear combinations of the others. Integration of Low-Rank Representation (LRR) makes our method robust to noise. Moreover, our method can simultaneously produce the estimated normals and the local structures, which are especially useful for denoising and segmentation applications. The experimental results show that our approach successfully recovers sharp features and generates more reliable results compared with the state-of-the-art.
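
    The PCA normals that serve as the method's input are easy to sketch: the normal at each point is the least-variance direction of its k-nearest-neighbor covariance. Near sharp features this neighborhood straddles several surfaces, which is exactly the failure mode the subspace clustering step addresses. (The value of k and the use of SVD here are our choices.)

      import numpy as np
      from scipy.spatial import cKDTree

      def pca_normals(points, k=16):
          """Baseline PCA normal estimation for an (n, 3) point cloud."""
          tree = cKDTree(points)
          _, idx = tree.query(points, k=k)
          normals = np.empty_like(points)
          for i, nb in enumerate(idx):
              q = points[nb] - points[nb].mean(axis=0)
              _, _, vt = np.linalg.svd(q, full_matrices=False)
              normals[i] = vt[-1]   # direction of least variance
          return normals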

  11. A method for simple and accurate estimation of fog deposition in a mountain forest using a meteorological model

    NASA Astrophysics Data System (ADS)

    Katata, Genki; Kajino, Mizuo; Hiraki, Takatoshi; Aikawa, Masahide; Kobayashi, Tomiki; Nagai, Haruyasu

    2011-10-01

    To apply a meteorological model to investigate fog occurrence, acidification and deposition in mountain forests, the meteorological model WRF was modified to calculate fog deposition accurately by the simple linear function of fog deposition onto vegetation derived from numerical experiments using the detailed multilayer atmosphere-vegetation-soil model (SOLVEG). The modified version of WRF that includes fog deposition (fog-WRF) was tested in a mountain forest on Mt. Rokko in Japan. fog-WRF provided a distinctly better prediction of liquid water content of fog (LWC) than the original version of WRF. It also successfully simulated throughfall observations due to fog deposition inside the forest during the summer season that excluded the effect of forest edges. Using the linear relationship between fog deposition and altitude given by the fog-WRF calculations and the data from throughfall observations at a given altitude, the vertical distribution of fog deposition can be roughly estimated in mountain forests. A meteorological model that includes fog deposition will be useful in mapping fog deposition in mountain cloud forests.

  12. Development of a new, robust and accurate, spectroscopic metric for scatterer size estimation in optical coherence tomography (OCT) images

    NASA Astrophysics Data System (ADS)

    Kassinopoulos, Michalis; Pitris, Costas

    2016-03-01

    The modulations appearing on the backscattering spectrum originating from a scatterer are related to its diameter as described by Mie theory for spherical particles. Many metrics for Spectroscopic Optical Coherence Tomography (SOCT) take advantage of this observation in order to enhance the contrast of Optical Coherence Tomography (OCT) images. However, none of these metrics has achieved high accuracy when calculating the scatterer size. In this work, Mie theory was used to further investigate the relationship between the degree of modulation in the spectrum and the scatterer size. From this study, a new spectroscopic metric, the bandwidth of the Correlation of the Derivative (COD) was developed which is more robust and accurate, compared to previously reported techniques, in the estimation of scatterer size. The self-normalizing nature of the derivative and the robustness of the first minimum of the correlation as a measure of its width, offer significant advantages over other spectral analysis approaches especially for scatterer sizes above 3 μm. The feasibility of this technique was demonstrated using phantom samples containing 6, 10 and 16 μm diameter microspheres as well as images of normal and cancerous human colon. The results are very promising, suggesting that the proposed metric could be implemented in OCT spectral analysis for measuring nuclear size distribution in biological tissues. A technique providing such information would be of great clinical significance since it would allow the detection of nuclear enlargement at the earliest stages of precancerous development.

  13. Measurement of pelvic motion is a prerequisite for accurate estimation of hip joint work in maximum height squat jumping.

    PubMed

    Blache, Yoann; Bobbert, Maarten; Argaud, Sebastien; Pairot de Fontenay, Benoit; Monteil, Karine M

    2013-08-01

    In experiments investigating vertical squat jumping, the HAT segment is typically defined as a line drawn from the hip to some point proximally on the upper body (e.g., the neck, the acromion), and the hip joint as the angle between this line and the upper legs (θUL-HAT). In reality, the hip joint is the angle between the pelvis and the upper legs (θUL-pelvis). This study aimed to estimate to what extent hip joint definition affects hip joint work in maximal squat jumping. Moreover, the initial pelvic tilt was manipulated to maximize the difference in hip joint work as a function of hip joint definition. Twenty-two male athletes performed maximum effort squat jumps in three different initial pelvic tilt conditions: backward (pelvisB), neutral (pelvisN), and forward (pelvisF). Hip joint work was calculated by integrating the hip net joint torque with respect to θUL-HAT (WUL-HAT) or with respect to θUL-pelvis (WUL-pelvis). θUL-HAT was greater than θUL-pelvis in all conditions. WUL-HAT overestimated WUL-pelvis by 33%, 39%, and 49% in conditions pelvisF, pelvisN, and pelvisB, respectively. It was concluded that θUL-pelvis should be measured when the mechanical output of the hip extensor muscles is estimated.
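
    The computation at stake is just the work integral W = ∫ τ dθ; the sketch below evaluates it numerically, and the paper's point is that θ should be the upper-leg-to-pelvis angle rather than an upper-leg-to-trunk-line proxy.

      import numpy as np

      def joint_work(torque_nm, angle_rad):
          """Joint work as the integral of net joint torque over joint angle."""
          return float(np.trapz(torque_nm, angle_rad))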

  14. Point Cloud Based Relative Pose Estimation of a Satellite in Close Range

    PubMed Central

    Liu, Lujiang; Zhao, Gaopeng; Bo, Yuming

    2016-01-01

    Determination of the relative pose of satellites is essential in space rendezvous operations and on-orbit servicing missions. The key problems are the adoption of suitable sensor on board of a chaser and efficient techniques for pose estimation. This paper aims to estimate the pose of a target satellite in close range on the basis of its known model by using point cloud data generated by a flash LIDAR sensor. A novel model based pose estimation method is proposed; it includes a fast and reliable pose initial acquisition method based on global optimal searching by processing the dense point cloud data directly, and a pose tracking method based on Iterative Closest Point algorithm. Also, a simulation system is presented in this paper in order to evaluate the performance of the sensor and generate simulated sensor point cloud data. It also provides truth pose of the test target so that the pose estimation error can be quantified. To investigate the effectiveness of the proposed approach and achievable pose accuracy, numerical simulation experiments are performed; results demonstrate algorithm capability of operating with point cloud directly and large pose variations. Also, a field testing experiment is conducted and results show that the proposed method is effective. PMID:27271633
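
    The tracking core of such pipelines, one Iterative Closest Point iteration with a closed-form rigid alignment (Kabsch/SVD), is sketched below; the paper's global-search initial acquisition stage is not reproduced, and the nearest-neighbor matching via a k-d tree is our implementation choice.

      import numpy as np
      from scipy.spatial import cKDTree

      def icp_step(source, target_tree, target):
          """Match each source point to its nearest target point, then solve
          the best rigid transform with the Kabsch/SVD method."""
          _, idx = target_tree.query(source)
          matched = target[idx]
          src_c, tgt_c = source.mean(axis=0), matched.mean(axis=0)
          H = (source - src_c).T @ (matched - tgt_c)
          U, _, Vt = np.linalg.svd(H)
          D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # avoid reflections
          R = Vt.T @ D @ U.T
          t = tgt_c - R @ src_c
          return source @ R.T + t, R, t

      def icp(source, target, n_iter=30):
          tree = cKDTree(target)
          for _ in range(n_iter):
              source, R, t = icp_step(source, tree, target)
          return source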

  16. A double-observer approach for estimating detection probability and abundance from point counts

    USGS Publications Warehouse

    Nichols, J.D.; Hines, J.E.; Sauer, J.R.; Fallon, F.W.; Fallon, J.E.; Heglund, P.J.

    2000-01-01

    Although point counts are frequently used in ornithological studies, basic assumptions about detection probabilities often are untested. We apply a double-observer approach developed to estimate detection probabilities for aerial surveys (Cook and Jacobson 1979) to avian point counts. At each point count, a designated 'primary' observer indicates to another ('secondary') observer all birds detected. The secondary observer records all detections of the primary observer as well as any birds not detected by the primary observer. Observers alternate primary and secondary roles during the course of the survey. The approach permits estimation of observer-specific detection probabilities and bird abundance. We developed a set of models that incorporate different assumptions about sources of variation (e.g. observer, bird species) in detection probability. Seventeen field trials were conducted, and models were fit to the resulting data using program SURVIV. Single-observer point counts generally miss varying proportions of the birds actually present, and observer and bird species were found to be relevant sources of variation in detection probabilities. Overall detection probabilities (probability of being detected by at least one of the two observers) estimated using the double-observer approach were very high (>0.95), yielding precise estimates of avian abundance. We consider problems with the approach and recommend possible solutions, including restriction of the approach to fixed-radius counts to reduce the effect of variation in the effective radius of detection among various observers and to provide a basis for using spatial sampling to estimate bird abundance on large areas of interest. We believe that most questions meriting the effort required to carry out point counts also merit serious attempts to estimate detection probabilities associated with the counts. The double-observer approach is a method that can be used for this purpose.
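
    A hedged sketch of the estimation logic, without the observer/species covariate structure fitted in program SURVIV: conditional on a bird being detected, the probability that the primary observer made the detection is p_i / (p_i + (1 - p_i) p_j), which gives a simple binomial likelihood and an abundance estimate from the overall detection probability.

      import numpy as np
      from scipy.optimize import minimize

      # surveys: list of (primary_id, n_primary, n_secondary_only); observers 0/1
      def neg_log_lik(logit_p, surveys):
          p = 1.0 / (1.0 + np.exp(-np.asarray(logit_p)))
          ll = 0.0
          for primary, n1, n2 in surveys:
              pi, pj = p[primary], p[1 - primary]
              c = pi / (pi + (1 - pi) * pj)   # P(primary detected | detected)
              ll += n1 * np.log(c) + n2 * np.log(1 - c)
          return -ll

      def fit_double_observer(surveys):
          res = minimize(neg_log_lik, x0=[0.0, 0.0], args=(surveys,))
          p = 1.0 / (1.0 + np.exp(-res.x))
          total = sum(n1 + n2 for _, n1, n2 in surveys)
          p_any = 1 - (1 - p[0]) * (1 - p[1])   # detected by at least one observer
          return p, total / p_any               # detection probs, abundance estimate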

  17. Estimation of the global average temperature with optimally weighted point gauges

    NASA Technical Reports Server (NTRS)

    Hardin, James W.; Upson, Robert B.

    1993-01-01

    This paper considers the minimum mean squared error (MSE) incurred in estimating an idealized Earth's global average temperature with a finite network of point gauges located over the globe. We follow the spectral MSE formalism given by North et al. (1992) and derive the optimal weights for N gauges in the problem of estimating the Earth's global average temperature. Our results suggest that for commonly used configurations the variance of the estimate due to sampling error can be reduced by as much as 50%.
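
    Under the usual formulation, the optimal gauge weights solve a constrained quadratic program; a minimal numpy sketch via a Lagrange multiplier follows, where C and c (gauge-gauge and gauge-to-global-mean covariances) would come from the spectral formalism cited above.

      import numpy as np

      def optimal_gauge_weights(C, c):
          """Minimize w'Cw - 2w'c subject to sum(w) = 1, where C[i, j] is the
          covariance between gauges i and j and c[i] the covariance of gauge i
          with the true global average temperature."""
          n = len(c)
          Cinv = np.linalg.solve(C, np.eye(n))
          ones = np.ones(n)
          lam = (1.0 - ones @ Cinv @ c) / (ones @ Cinv @ ones)
          return Cinv @ (c + lam * ones)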

  18. Estimated results analysis and application of the precise point positioning based high-accuracy ionosphere delay

    NASA Astrophysics Data System (ADS)

    Wang, Shi-tai; Peng, Jun-huan

    2015-12-01

    The characterization of the ionosphere delay estimated with precise point positioning is analyzed in this paper. The estimation, interpolation and application of the ionosphere delay are studied based on the processing of 24 h of data from 5 observation stations. The results show that the estimated ionosphere delay is affected by the hardware delay bias from the receiver, so that there is a difference between the estimated and interpolated results. The results also show that the RMSs (root mean squares) are larger, while the STDs (standard deviations) are better than 0.11 m. When the satellite difference is used, the hardware delay bias can be canceled, and the interpolated satellite-differenced ionosphere delay is better than 0.11 m. Although there is a difference between the estimated and interpolated ionosphere delay results, it does not affect their application in single-frequency positioning, and the positioning accuracy can reach the cm level.

  19. A Direct Latent Variable Modeling Based Method for Point and Interval Estimation of Coefficient Alpha

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.

    2015-01-01

    A direct approach to point and interval estimation of Cronbach's coefficient alpha for multiple component measuring instruments is outlined. The procedure is based on a latent variable modeling application with widely circulated software. As a by-product, using sample data the method permits ascertaining whether the population discrepancy…
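
    For reference, the classical sample point estimate of coefficient alpha is sketched below; the paper's latent variable modeling approach additionally delivers an interval estimate, which this simple formula does not.

      import numpy as np

      def cronbach_alpha(scores):
          """Classical point estimate of coefficient alpha; scores is an
          (n_subjects, k_items) array of item scores."""
          scores = np.asarray(scores, dtype=float)
          k = scores.shape[1]
          item_vars = scores.var(axis=0, ddof=1).sum()
          total_var = scores.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1 - item_vars / total_var)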

  20. Estimating the melting point, entropy of fusion, and enthalpy of fusion of organic compounds via SPARC

    EPA Science Inventory

    The entropies of fusion, enthalpies of fusion, and melting points of organic compounds can be estimated through three models developed using the SPARC (SPARC Performs Automated Reasoning in Chemistry) platform. The entropy of fusion is modeled through a combination of interaction ...

  1. Human body 3D posture estimation using significant points and two cameras.

    PubMed

    Juang, Chia-Feng; Chen, Teng-Chang; Du, Wei-Chin

    2014-01-01

    This paper proposes a three-dimensional (3D) human posture estimation system that locates 3D significant body points based on 2D body contours extracted from two cameras without using any depth sensors. The 3D significant body points that are located by this system include the head, the center of the body, the tips of the feet, the tips of the hands, the elbows, and the knees. First, a linear support vector machine- (SVM-) based segmentation method is proposed to distinguish the human body from the background in red, green, and blue (RGB) color space. The SVM-based segmentation method uses not only normalized color differences but also included angle between pixels in the current frame and the background in order to reduce shadow influence. After segmentation, 2D significant points in each of the two extracted images are located. A significant point volume matching (SPVM) method is then proposed to reconstruct the 3D significant body point locations by using 2D posture estimation results. Experimental results show that the proposed SVM-based segmentation method shows better performance than other gray level- and RGB-based segmentation approaches. This paper also shows the effectiveness of the 3D posture estimation results in different postures.
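
    A hedged sketch of the segmentation stage using scikit-learn's linear SVM: per-pixel features are the normalized color differences plus the included angle between current-frame and background color vectors (the angle term being what reduces shadow influence). Variable names and the training loop are our assumptions.

      import numpy as np
      from sklearn.svm import LinearSVC

      def color_features(frame, background):
          """Per-pixel features: normalized RGB differences plus the cosine of
          the included angle between frame and background color vectors."""
          f = frame.reshape(-1, 3).astype(float)
          b = background.reshape(-1, 3).astype(float)
          diff = (f - b) / 255.0
          cos_angle = np.sum(f * b, axis=1) / (
              np.linalg.norm(f, axis=1) * np.linalg.norm(b, axis=1) + 1e-9)
          return np.column_stack([diff, cos_angle])

      # Training on hand-labeled pixels (labels: 1 = human, 0 = background):
      # X = color_features(frame, background); clf = LinearSVC().fit(X, labels)
      # mask = clf.predict(color_features(new_frame, background)).reshape(frame.shape[:2])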

  2. Accurate Estimate of Some Propagation Characteristics for the First Higher Order Mode in Graded Index Fiber with Simple Analytic Chebyshev Method

    NASA Astrophysics Data System (ADS)

    Dutta, Ivy; Chowdhury, Anirban Roy; Kumbhakar, Dharmadas

    2013-03-01

    Using a Chebyshev power series approach, accurate descriptions of the first higher order (LP11) mode of graded index fibers with three different profile shape functions are presented in this paper and applied to predict their propagation characteristics. These characteristics include the fractional power guided through the core, the excitation efficiency, and the Petermann I and II spot sizes, with their approximate analytic formulations. We show that, while approximations of the LP11 mode using two and three Chebyshev points give fairly accurate results, values calculated with four Chebyshev points match the available exact numerical results excellently.

  3. Observing Volcanic Thermal Anomalies from Space: How Accurate is the Estimation of the Hotspot's Size and Temperature?

    NASA Astrophysics Data System (ADS)

    Zaksek, K.; Pick, L.; Lombardo, V.; Hort, M. K.

    2015-12-01

    Measuring the heat emission from active volcanic features on the basis of infrared satellite images contributes to the volcano's hazard assessment. Because these thermal anomalies only occupy a small fraction (< 1 %) of a typically resolved target pixel (e.g. from Landsat 7, MODIS) the accurate determination of the hotspot's size and temperature is however problematic. Conventionally this is overcome by comparing observations in at least two separate infrared spectral wavebands (Dual-Band method). We investigate the resolution limits of this thermal un-mixing technique by means of a uniquely designed indoor analog experiment. Therein the volcanic feature is simulated by an electrical heating alloy of 0.5 mm diameter installed on a plywood panel of high emissivity. Two thermographic cameras (VarioCam high resolution and ImageIR 8300 by Infratec) record images of the artificial heat source in wavebands comparable to those available from satellite data. These range from the short-wave infrared (1.4-3 µm) over the mid-wave infrared (3-8 µm) to the thermal infrared (8-15 µm). In the conducted experiment the pixel fraction of the hotspot was successively reduced by increasing the camera-to-target distance from 3 m to 35 m. On the basis of an individual target pixel the expected decrease of the hotspot pixel area with distance at a relatively constant wire temperature of around 600 °C was confirmed. The deviation of the hotspot's pixel fraction yielded by the Dual-Band method from the theoretically calculated one was found to be within 20 % up until a target distance of 25 m. This means that a reliable estimation of the hotspot size is only possible if the hotspot is larger than about 3 % of the pixel area, a resolution boundary most remotely sensed volcanic hotspots fall below. Future efforts will focus on the investigation of a resolution limit for the hotspot's temperature by varying the alloy's amperage. Moreover, the un-mixing results for more realistic multi
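
    The Dual-Band method referred to above can be sketched compactly: assuming each pixel mixes a hot fraction f at temperature Th with background at Tbg, the two band radiances give two Planck-function equations in the two unknowns. Band wavelengths, radiances and the initial guess below are hypothetical.

      import numpy as np
      from scipy.optimize import fsolve

      H, C, KB = 6.626e-34, 2.998e8, 1.381e-23

      def planck(wavelength_m, temp_k):
          """Black-body spectral radiance B(lambda, T) in W m^-3 sr^-1."""
          a = 2 * H * C**2 / wavelength_m**5
          return a / np.expm1(H * C / (wavelength_m * KB * temp_k))

      def dual_band(l_mir, l_tir, lam_mir, lam_tir, t_background):
          """Solve L = f*B(lambda, Th) + (1 - f)*B(lambda, Tbg) in two bands
          for the hotspot fraction f and temperature Th."""
          def equations(v):
              f, th = v
              return [f * planck(lam_mir, th) + (1 - f) * planck(lam_mir, t_background) - l_mir,
                      f * planck(lam_tir, th) + (1 - f) * planck(lam_tir, t_background) - l_tir]
          f, th = fsolve(equations, x0=[0.01, 800.0])
          return f, th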

  4. Estimability of thrusting trajectories in 3-D from a single passive sensor with unknown launch point

    NASA Astrophysics Data System (ADS)

    Yuan, Ting; Bar-Shalom, Yaakov; Willett, Peter; Ben-Dov, R.; Pollak, S.

    2013-09-01

    The problem of estimating the state of thrusting/ballistic endoatmospheric projectiles moving in 3-dimensional (3-D) space using 2-dimensional (2-D) measurements from a single passive sensor is investigated. The location of the projectile's launch point (LP) is unavailable, and this can significantly affect the performance of the estimation and the impact point prediction (IPP). The LP altitude is therefore treated as an unknown target parameter. Estimability is analyzed based on the Fisher Information Matrix (FIM) of the target parameter vector, comprising the initial launch (azimuth and elevation) angles, drag coefficient, thrust, and the LP altitude, which determine the trajectory according to a nonlinear motion equation. A full-rank FIM ensures that the target parameter vector is estimable. The corresponding Cramér-Rao lower bound (CRLB) quantifies the achievable estimation performance; an estimator that attains it is statistically efficient and can be used for IPP. In view of the inherent nonlinearity of the problem, the maximum likelihood (ML) estimate of the target parameter vector is found using a mixed (partially grid-based) search approach. For a selected grid in the drag-coefficient-thrust-altitude subspace, the proposed parallelizable approach is shown to have reliable estimation performance and leads to a final IPP of high accuracy.
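
    The estimability argument above can be reproduced generically: build the Fisher Information Matrix from measurement Jacobians, test its rank, and invert it for the CRLB. The toy measurement model h() below is a stand-in for illustration, not the paper's projectile dynamics.

      # Generic numerical FIM rank test and CRLB, under an assumed toy model.
      import numpy as np

      def h(theta, t):
          """Toy 2-D measurement as a function of a 3-parameter vector."""
          a, b, c = theta
          return np.array([a + b * t, c * np.sin(t)])

      def fim(theta, times, meas_var=1e-4):
          """FIM = sum_k J_k^T R^-1 J_k with central-difference Jacobians J_k."""
          n = len(theta)
          info = np.zeros((n, n))
          for t in times:
              J = np.zeros((2, n))
              for j in range(n):
                  d = np.zeros(n); d[j] = 1e-6
                  J[:, j] = (h(theta + d, t) - h(theta - d, t)) / 2e-6
              info += J.T @ J / meas_var
          return info

      theta0 = np.array([1.0, 0.5, 2.0])
      I = fim(theta0, np.linspace(0.1, 5.0, 50))
      print(np.linalg.matrix_rank(I))   # full rank (3) => parameters estimable
      print(np.diag(np.linalg.inv(I)))  # CRLB variances for each parameter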

  5. Monte Carlo point process estimation of electromyographic envelopes from motor cortical spikes for brain-machine interfaces

    NASA Astrophysics Data System (ADS)

    Liao, Yuxi; She, Xiwei; Wang, Yiwen; Zhang, Shaomin; Zhang, Qiaosheng; Zheng, Xiaoxiang; Principe, Jose C.

    2015-12-01

    Objective. Representation of movement in the motor cortex (M1) has been widely studied in brain-machine interfaces (BMIs). The electromyogram (EMG) has greater bandwidth than conventional kinematic variables (such as position and velocity), and is functionally related to the discharge of cortical neurons. Because the stochastic information of EMG is derived from the explicit spike time structure, point process (PP) methods are a natural choice for decoding EMG directly from neural spike trains. Previous studies usually assume linear or exponential tuning curves between neural firing and EMG, which may not hold in practice. Approach. In our analysis, we estimate the tuning curves in a data-driven way and find both the traditional functional-excitatory and functional-inhibitory neurons, which are widely found across a rat’s motor cortex. To accurately decode EMG envelopes from M1 neural spike trains, the Monte Carlo point process (MCPP) method is implemented based on such nonlinear tuning properties. Main results. Better reconstruction of EMG signals is shown at baseline and at extreme high peaks, as our method better preserves the nonlinearity of the neural tuning during decoding. The MCPP improves the prediction accuracy (the normalized mean squared error) by 57% and 66% on average compared with the adaptive point process filter using linear and exponential tuning curves, respectively, for all 112 data segments across six rats. Compared to a Wiener filter using spike rates with an optimal window size of 50 ms, decoding EMG from a point process with MCPP improves the normalized mean square error (NMSE) by 59% on average. Significance. These results suggest that neural tuning is constantly changing during task execution and, therefore, spike timing methodologies and estimation of appropriate tuning curves are needed for better EMG decoding in motor BMIs.

  6. A method of rapidly estimating the position of the laminar separation point

    NASA Technical Reports Server (NTRS)

    Von Doenhoff, Albert E

    1938-01-01

    A method is described of rapidly estimating the position of the laminar separation point from the given pressure distribution along a body; the method is applicable to a fairly wide variety of cases. The laminar separation point is found by the von Karman-Millikan method for a series of velocity distributions along a flat plate, which consist of a region of uniform velocity followed by a region of uniform decreased velocity. It is shown that such a velocity distribution can frequently replace the actual velocity distribution along a body insofar as the effects on laminar separation are concerned. An example of the application of the method is given by using it to calculate the position of the laminar separation point on the NACA 0012 airfoil section at zero lift. The agreement between the position of the separation point calculated according to the present method and that found from more elaborate computations is very good.

  7. Estimating the melting point, entropy of fusion, and enthalpy of fusion of organic compounds via SPARC.

    PubMed

    Whiteside, T S; Hilal, S H; Brenner, A; Carreira, L A

    2016-08-01

    The entropy of fusion, enthalpy of fusion, and melting point of organic compounds can be estimated through three models developed using the SPARC (SPARC Performs Automated Reasoning in Chemistry) platform. The entropy of fusion is modelled through a combination of interaction terms and physical descriptors. The enthalpy of fusion is modelled as a function of the entropy of fusion, boiling point, and flexibility of the molecule. The melting point model is the enthalpy of fusion divided by the entropy of fusion. These models were developed in part to improve SPARC's vapour pressure and solubility models, and have been tested on 904 unique compounds. The entropy model has an RMS error of 12.5 J mol⁻¹ K⁻¹, the enthalpy model an RMS error of 4.87 kJ mol⁻¹, and the melting point model an RMS error of 54.4 °C.
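
    The melting point model stated above is a one-line computation once the other two models have produced their outputs; a minimal numeric sketch, with arbitrary example values, is:

      # T_m = dH_fus / dS_fus; the input numbers below are arbitrary examples.
      dH_fus = 20.0e3      # enthalpy of fusion, J/mol
      dS_fus = 55.0        # entropy of fusion, J/(mol K)
      T_m_kelvin = dH_fus / dS_fus
      print(T_m_kelvin - 273.15)   # melting point in °C (~90.5 °C here)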

  8. Sensitivity analysis of point and parametric pedotransfer functions for estimating water retention of soils in Algeria

    NASA Astrophysics Data System (ADS)

    Touil, Sami; Degre, Aurore; Nacer Chabaca, Mohamed

    2016-12-01

    Improving the accuracy of pedotransfer functions (PTFs) requires studying how prediction uncertainty can be apportioned to different sources of uncertainty in inputs. In this study, the question addressed was as follows: which input variable is the main or best complementary predictor of water retention, and at which water potential? Two approaches were adopted to generate PTFs: multiple linear regressions (MLRs) for point PTFs and multiple nonlinear regressions (MNLRs) for parametric PTFs. Reliability tests showed that point PTFs provided better estimates than parametric PTFs (root mean square error, RMSE: 0.0414 and 0.0444 cm³ cm⁻³ for point PTFs versus 0.0613 and 0.0605 cm³ cm⁻³ for parametric PTFs, at -33 and -1500 kPa, respectively). The local parametric PTFs provided better estimates than Rosetta PTFs at -33 kPa. No significant difference in accuracy, however, was found between the parametric PTFs and Rosetta H2 at -1500 kPa, with RMSE values of 0.0605 cm³ cm⁻³ and 0.0636 cm³ cm⁻³, respectively. The results of global sensitivity analyses (GSAs) showed that the mathematical formalism of the PTFs and their input variables responded differently depending on the pressure point and the textural class. The point and parametric PTFs were sensitive mainly to the sand fraction in the fine- and medium-textural classes. Using clay percentage (C %) and bulk density (BD) as inputs in the medium-textural class improved the PTF estimates at -33 kPa.
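
    A minimal sketch of a "point" PTF of the kind described, i.e., a multiple linear regression predicting water content at -33 kPa from sand %, clay %, and bulk density, follows. The data are synthetic, so the fitted coefficients and RMSE bear no relation to the Algerian dataset.

      # Hedged sketch of a point PTF fitted by multiple linear regression.
      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.metrics import mean_squared_error

      rng = np.random.default_rng(1)
      n = 200
      sand = rng.uniform(10, 80, n)             # %
      clay = rng.uniform(5, 50, n)              # %
      bd = rng.uniform(1.1, 1.7, n)             # g/cm^3
      theta33 = 0.40 - 0.003 * sand + 0.002 * clay - 0.05 * bd \
                + rng.normal(0, 0.02, n)        # cm^3/cm^3, synthetic "truth"

      X = np.column_stack([sand, clay, bd])
      ptf = LinearRegression().fit(X, theta33)
      rmse = mean_squared_error(theta33, ptf.predict(X)) ** 0.5
      print(ptf.coef_, rmse)                    # point PTF coefficients and RMSE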

  9. Estimation of the auto frequency response function at unexcited points using dummy masses

    NASA Astrophysics Data System (ADS)

    Hosoya, Naoki; Yaginuma, Shinji; Onodera, Hiroshi; Yoshimura, Takuya

    2015-02-01

    For structures with complex shapes and space limitations, vibration tests that use an exciter or impact hammer for the excitation are difficult. Although measuring the auto frequency response function at an unexcited point via a vibration test may not be practical, it can be obtained by treating the inertia force acting on a dummy mass as an external force on the target structure while exciting a different point. We propose a method to estimate the auto frequency response functions at unexcited points by attaching a small mass (dummy mass) comparable to the accelerometer mass. The validity of the proposed method is demonstrated by comparing the auto frequency response functions estimated at unexcited points in a beam structure with those obtained from numerical simulations. We also consider random measurement errors, by finite element analysis and vibration tests, but not bias errors. Additionally, the applicability of the proposed method is demonstrated by using it to estimate the auto frequency response function of the lower arm of a car suspension.

  10. Estimates of Emissions and Chemical Lifetimes of NOx from Point Sources using OMI Retrievals

    NASA Astrophysics Data System (ADS)

    de Foy, B.

    2014-12-01

    We use three different methods to estimate emissions of NOx from large point sources based on OMI retrievals, and evaluate the results against data from the Continuous Emission Monitoring System (CEMS). The methods tested are (1) a simple box model, (2) a two-dimensional Gaussian fit, and (3) an exponentially modified Gaussian fit. The sensitivity of the results to the plume speed and wind direction was explored by considering different ways of estimating these from wind measurements. The accuracy of the emission estimates compared with the CEMS data varied from site to site. Furthermore, lifetimes obtained from some of the methods were found to be very short and are thought to be more representative of plume transport than of chemical transformation. We explore the strengths and weaknesses of the methods and consider avenues for improved estimates.
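
    As an illustration of method (3), the sketch below fits an exponentially modified Gaussian to a synthetic along-wind line density using scipy.stats.exponnorm, then converts the fitted decay length into a lifetime with an assumed plume speed. All numbers are invented for illustration.

      # Hedged sketch of an exponentially modified Gaussian fit to a line density.
      import numpy as np
      from scipy.optimize import curve_fit
      from scipy.stats import exponnorm

      def emg_line_density(x, amp, K, loc, scale):
          # exponnorm is an EMG with shape K = tau / sigma (decay length = K*scale)
          return amp * exponnorm.pdf(x, K, loc=loc, scale=scale)

      x = np.linspace(-50, 150, 200)                       # km downwind
      true = emg_line_density(x, 80.0, 4.0, 0.0, 8.0)
      obs = true + np.random.default_rng(2).normal(0, 0.05, x.size)

      popt, _ = curve_fit(emg_line_density, x, obs,
                          p0=[50, 2, 5, 10], bounds=(0, np.inf))
      amp, K, loc, scale = popt
      decay_length = K * scale                             # km
      wind_speed = 5.0                                     # m/s, assumed plume speed
      lifetime_h = decay_length * 1e3 / wind_speed / 3600  # effective lifetime, h
      print(amp, decay_length, lifetime_h)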

  11. A removal model for estimating detection probabilities from point-count surveys

    USGS Publications Warehouse

    Farnsworth, G.L.; Pollock, K.H.; Nichols, J.D.; Simons, T.R.; Hines, J.E.; Sauer, J.R.

    2002-01-01

    Use of point-count surveys is a popular method for collecting data on abundance and distribution of birds. However, analyses of such data often ignore potential differences in detection probability. We adapted a removal model to directly estimate detection probability during point-count surveys. The model assumes that singing frequency is a major factor influencing probability of detection when birds are surveyed using point counts. This may be appropriate for surveys in which most detections are by sound. The model requires counts to be divided into several time intervals. Point counts are often conducted for 10 min, where the number of birds recorded is divided into those first observed in the first 3 min, the subsequent 2 min, and the last 5 min. We developed a maximum-likelihood estimator for the detectability of birds recorded during counts divided into those intervals. This technique can easily be adapted to point counts divided into intervals of any length. We applied this method to unlimited-radius counts conducted in Great Smoky Mountains National Park. We used model selection criteria to identify whether detection probabilities varied among species, throughout the morning, throughout the season, and among different observers. We found differences in detection probability among species. Species that sing frequently such as Winter Wren (Troglodytes troglodytes) and Acadian Flycatcher (Empidonax virescens) had high detection probabilities (~90%) and species that call infrequently such as Pileated Woodpecker (Dryocopus pileatus) had low detection probability (36%). We also found detection probabilities varied with the time of day for some species (e.g. thrushes) and between observers for other species. We used the same approach to estimate detection probability and density for a subset of the observations with limited-radius point counts.
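
    The removal likelihood sketched below follows the interval structure described above (3, 2, and 5 min): with a constant per-minute detection rate r, first-detection probabilities follow from the exponential waiting time, and r is found by maximizing a conditional multinomial likelihood. The counts are made up for illustration.

      # Hedged sketch of the removal-model maximum-likelihood estimator.
      import numpy as np
      from scipy.optimize import minimize_scalar

      interval_min = np.array([3.0, 2.0, 5.0])       # interval lengths, minutes
      counts = np.array([40, 12, 14])                # first detections per interval

      def neg_log_lik(r):
          edges = np.concatenate([[0.0], np.cumsum(interval_min)])
          cell = np.exp(-r * edges[:-1]) - np.exp(-r * edges[1:])
          p_det = 1.0 - np.exp(-r * edges[-1])       # overall detection probability
          return -np.sum(counts * np.log(cell / p_det))

      res = minimize_scalar(neg_log_lik, bounds=(1e-4, 5.0), method="bounded")
      r_hat = res.x
      p_hat = 1.0 - np.exp(-r_hat * interval_min.sum())  # detectability over 10 min
      print(r_hat, p_hat)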

  12. Shear wavelength estimation based on inverse filtering and multiple-point shear wave generation

    NASA Astrophysics Data System (ADS)

    Kitazaki, Tomoaki; Kondo, Kengo; Yamakawa, Makoto; Shiina, Tsuyoshi

    2016-07-01

    Elastography provides important diagnostic information because tissue elasticity is related to pathological conditions. For example, in a mammary gland, higher grade malignancies yield harder tumors. Estimating shear wave speed enables quantitative tissue elasticity imaging using time-of-flight. However, time-of-flight measurement relies on an assumption about the propagation direction of the shear wave, which is strongly affected by reflection and refraction and thus can cause artifacts. An alternative elasticity estimation approach based on shear wavelength was previously proposed and applied to passive configurations. To determine the elasticity of tissue more quickly and more accurately, we propose a new method for shear wave elasticity imaging that combines the shear wavelength approach and inverse filtering, with multiple shear wave sources induced by acoustic radiation force (ARF). The feasibility of the proposed method was verified using an elasticity phantom with a hard inclusion.

  13. Comparison of single-point and continuous sampling methods for estimating residential indoor temperature and humidity

    PubMed Central

    Johnston, James D.; Magnusson, Brianna M.; Eggett, Dennis; Collingwood, Scott C.; Bernhardt, Scott A.

    2016-01-01

    Residential temperature and humidity are associated with multiple health effects. Studies commonly use single-point measures to estimate indoor temperature and humidity exposures, but there is little evidence to support this sampling strategy. This study evaluated the relationship between single-point and continuous monitoring of air temperature, apparent temperature, relative humidity, and absolute humidity over four exposure intervals (5-min, 30-min, 24-hr, and 12-days) in 9 northern Utah homes, from March to June 2012. Three homes were sampled twice, for a total of 12 observation periods. Continuous data-logged sampling was conducted in homes for 2-3 wks, and simultaneous single-point measures (n = 114) were collected using handheld thermo-hygrometers. Time-centered single-point measures were moderately correlated with short-term (30-min) data logger mean air temperature (r = 0.76, β = 0.74), apparent temperature (r = 0.79, β = 0.79), relative humidity (r = 0.70, β = 0.63), and absolute humidity (r = 0.80, β = 0.80). Data logger 12-day means were also moderately correlated with single-point air temperature (r = 0.64, β = 0.43) and apparent temperature (r = 0.64, β = 0.44), but were weakly correlated with single-point relative humidity (r = 0.53, β = 0.35) and absolute humidity (r = 0.52, β = 0.39). Of the single-point RH measures, 59 (51.8%) deviated more than ±5%, 21 (18.4%) deviated more than ±10%, and 6 (5.3%) deviated more than ±15% from data logger 12-day means. Where continuous indoor monitoring is not feasible, single-point sampling strategies should include multiple measures collected at prescribed time points based on local conditions. PMID:26030088

  14. Comparison of Single-Point and Continuous Sampling Methods for Estimating Residential Indoor Temperature and Humidity.

    PubMed

    Johnston, James D; Magnusson, Brianna M; Eggett, Dennis; Collingwood, Scott C; Bernhardt, Scott A

    2015-01-01

    Residential temperature and humidity are associated with multiple health effects. Studies commonly use single-point measures to estimate indoor temperature and humidity exposures, but there is little evidence to support this sampling strategy. This study evaluated the relationship between single-point and continuous monitoring of air temperature, apparent temperature, relative humidity, and absolute humidity over four exposure intervals (5-min, 30-min, 24-hr, and 12-days) in 9 northern Utah homes, from March-June 2012. Three homes were sampled twice, for a total of 12 observation periods. Continuous data-logged sampling was conducted in homes for 2-3 wks, and simultaneous single-point measures (n = 114) were collected using handheld thermo-hygrometers. Time-centered single-point measures were moderately correlated with short-term (30-min) data logger mean air temperature (r = 0.76, β = 0.74), apparent temperature (r = 0.79, β = 0.79), relative humidity (r = 0.70, β = 0.63), and absolute humidity (r = 0.80, β = 0.80). Data logger 12-day means were also moderately correlated with single-point air temperature (r = 0.64, β = 0.43) and apparent temperature (r = 0.64, β = 0.44), but were weakly correlated with single-point relative humidity (r = 0.53, β = 0.35) and absolute humidity (r = 0.52, β = 0.39). Of the single-point RH measures, 59 (51.8%) deviated more than ±5%, 21 (18.4%) deviated more than ±10%, and 6 (5.3%) deviated more than ±15% from data logger 12-day means. Where continuous indoor monitoring is not feasible, single-point sampling strategies should include multiple measures collected at prescribed time points based on local conditions.

  15. Position Estimation of Access Points in 802.11 Wireless Networks

    SciTech Connect

    Kent, C A; Dowla, F U; Atwal, P K; Lennon, W J

    2003-12-05

    We developed a technique to locate wireless network nodes by combining multiple time-of-flight range measurements into a position estimate. When used with communication methods whose signals propagate through walls, such as Ultra-Wideband and 802.11, we can locate network nodes in buildings and in caves where GPS is unavailable. This paper details the implementation on an 802.11a network, where we demonstrated the ability to locate a network access point to within 20 feet.
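
    The position estimate described here is, at its core, a nonlinear least-squares multilateration over the unknown node location; a minimal sketch with fabricated anchor coordinates and ranges is:

      # Hedged sketch of multilateration from time-of-flight range measurements.
      import numpy as np
      from scipy.optimize import least_squares

      anchors = np.array([[0.0, 0.0], [30.0, 0.0], [0.0, 30.0], [30.0, 30.0]])
      true_pos = np.array([12.0, 7.0])
      rng = np.random.default_rng(3)
      ranges = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, 0.5, 4)

      def residuals(p):
          # difference between predicted and measured ranges to each anchor
          return np.linalg.norm(anchors - p, axis=1) - ranges

      est = least_squares(residuals, x0=np.array([15.0, 15.0])).x
      print(est)   # close to (12, 7) given ~0.5 m range noise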

  16. Reappraisal of disparities between osmolality estimates by freezing point depression and vapor pressure deficit methods.

    PubMed

    Winzor, Donald J

    2004-02-15

    In response to a recently expressed concern about the possible unreliability of vapor pressure deficit measurements (K. Kiyosawa, Biophys. Chem. 104 (2003) 171-188), the results of published studies on the temperature dependence of the osmotic pressure of aqueous polyethylene glycol solutions are shown to account for the observed discrepancies between osmolality estimates obtained by freezing point depression and by vapor pressure deficit osmometry, the cause of that concern.

  17. Capacity Estimation Model for Signalized Intersections under the Impact of Access Point.

    PubMed

    Zhao, Jing; Li, Peng; Zhou, Xizhao

    2016-01-01

    The Highway Capacity Manual 2010 provides various factors to adjust the base saturation flow rate for the capacity analysis of signalized intersections. No factor, however, accounts for the potential change in signalized intersection capacity caused by an access point close to the intersection. This paper presents a theoretical model to estimate lane group capacity at signalized intersections that considers the effects of access points. Two scenarios of access point locations, upstream or downstream of the signalized intersection, and the impacts of six types of access traffic flow are taken into account. The proposed capacity model was validated against VISSIM simulations. Results of extensive numerical analysis reveal the substantial impact of access points on capacity, which correlates inversely with both the number of major street lanes and the distance between the intersection and the access point. Moreover, among the six types of access traffic flow, flow 1 (right-turning traffic from the major street), flow 4 (left-turning traffic from the access point), and flow 5 (left-turning traffic from the major street) have a more significant effect on lane group capacity than the others. Some guidance on mitigating these negative effects is provided for practitioners.

  18. Capacity Estimation Model for Signalized Intersections under the Impact of Access Point

    PubMed Central

    Zhao, Jing; Li, Peng; Zhou, Xizhao

    2016-01-01

    The Highway Capacity Manual 2010 provides various factors to adjust the base saturation flow rate for the capacity analysis of signalized intersections. No factor, however, accounts for the potential change in signalized intersection capacity caused by an access point close to the intersection. This paper presents a theoretical model to estimate lane group capacity at signalized intersections that considers the effects of access points. Two scenarios of access point locations, upstream or downstream of the signalized intersection, and the impacts of six types of access traffic flow are taken into account. The proposed capacity model was validated against VISSIM simulations. Results of extensive numerical analysis reveal the substantial impact of access points on capacity, which correlates inversely with both the number of major street lanes and the distance between the intersection and the access point. Moreover, among the six types of access traffic flow, flow 1 (right-turning traffic from the major street), flow 4 (left-turning traffic from the access point), and flow 5 (left-turning traffic from the major street) have a more significant effect on lane group capacity than the others. Some guidance on mitigating these negative effects is provided for practitioners. PMID:26726998

  19. A removal model for estimating detection probabilities from point-count surveys

    USGS Publications Warehouse

    Farnsworth, G.L.; Pollock, K.H.; Nichols, J.D.; Simons, T.R.; Hines, J.E.; Sauer, J.R.

    2000-01-01

    We adapted a removal model to estimate detection probability during point count surveys. The model assumes that one factor influencing detection during point counts is the singing frequency of birds. This may be true for surveys recording forest songbirds, when most detections are by sound. The model requires counts to be divided into several time intervals. We used time intervals of 2, 5, and 10 min to develop a maximum-likelihood estimator for the detectability of birds during such surveys. We applied this technique to data from bird surveys conducted in Great Smoky Mountains National Park. We used model selection criteria to identify whether detection probabilities varied among species, throughout the morning, throughout the season, and among different observers. The overall detection probability for all birds was 75%. We found differences in detection probability among species. Species that sing frequently, such as Winter Wren and Acadian Flycatcher, had high detection probabilities (about 90%), and species that call infrequently, such as Pileated Woodpecker, had low detection probability (36%). We also found that detection probabilities varied with the time of day for some species (e.g. thrushes) and between observers for other species. This method of estimating detectability during point count surveys offers a promising new approach to using count data to address questions of bird abundance, density, and population trends.

  20. Aberration estimation from single point image in a simulated adaptive optics system.

    PubMed

    Grisan, Enrico; Frassetto, Fabio; Da Deppo, Vania; Naletto, Giampiero; Ruggeri, Alfredo

    2005-01-01

    Adaptive optics has recently been applied to the development of ophthalmic devices, with the main objectives of obtaining higher resolution images for diagnostic purposes and, ideally, correcting high-order eye aberrations. The core of every adaptive optics system is an optical device able to modify the wavefront shape of the light entering the system: once the shape of the incoming wavefront has been estimated, this device makes it possible to correct the aberrations introduced along the optical path. The aim of this paper is to demonstrate the feasibility, albeit in a simulated system, of estimating and correcting the wavefront shape simply by means of an iterative software analysis of a single point source image, thus avoiding expensive wavefront sensors or the burdensome computation of the PSF of the optical system. To test the proposed algorithm, a simple optical system was simulated with ray-tracing software, and a program was developed to estimate the Zernike coefficients of the simulated aberration from the analysis of the source image. Numerical indexes were used to evaluate the capability of the software to correctly estimate the Zernike coefficients. Even though only defocus, astigmatism, and coma were considered, the very satisfactory results confirm the soundness of this new approach and encourage further work toward a system that can also estimate spherical aberration, tilt, and field curvature. An implementation of this aberration estimation in a real AO system is also currently in progress.

  1. An homomorphic filtering and expectation maximization approach for the point spread function estimation in ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Benameur, S.; Mignotte, M.; Lavoie, F.

    2012-03-01

    In modern ultrasound imaging systems, the spatial resolution is severely limited by the effects of both the finite aperture and overall bandwidth of ultrasound transducers and the non-negligible width of the transmitted ultrasound beams. This low spatial resolution remains the major limiting factor in the clinical usefulness of medical ultrasound images. In order to recover clinically important image details, which are often masked by this resolution limitation, an image restoration procedure should be applied. To this end, an estimate of the Point Spread Function (PSF) of the ultrasound imaging system is required. This paper introduces a novel, reliable, and fast Maximum Likelihood (ML) approach for recovering the PSF of an ultrasound imaging system. The new PSF estimation method assumes as a constraint that the PSF is of known parametric form. Under this constraint, the parameter values of the associated Modulation Transfer Function (MTF) are efficiently estimated using a homomorphic filter, a denoising step, and an expectation-maximization (EM) based clustering algorithm. Given this PSF estimate, deconvolution can then be used to improve the spatial resolution of an ultrasound image and to obtain an estimate (independent of the properties of the imaging system) of the true tissue reflectivity function. The experiments reported in this paper demonstrate the efficiency and potential of this new estimation and blind deconvolution approach.
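
    The homomorphic step can be illustrated compactly: in the log-magnitude spectrum, the blurred image separates into a smooth system term plus a rough tissue term, and low-pass filtering isolates the former. The sketch below uses an assumed Gaussian PSF and omits the paper's denoising and EM clustering stages.

      # Hedged sketch of homomorphic (log-spectrum) separation of an MTF estimate.
      import numpy as np
      from scipy.ndimage import gaussian_filter

      rng = np.random.default_rng(4)
      rf = rng.normal(size=(128, 128))                # stand-in tissue reflectivity
      x, y = np.meshgrid(*(np.arange(128) - 64,) * 2)
      psf = np.exp(-(x**2 / 50 + y**2 / 10))          # assumed Gaussian PSF
      img = np.real(np.fft.ifft2(
          np.fft.fft2(rf) * np.fft.fft2(np.fft.ifftshift(psf))))

      log_mag = np.log(np.abs(np.fft.fft2(img)) + 1e-9)
      mtf_log = gaussian_filter(log_mag, sigma=6)     # smooth part ~ log|MTF| + const
      mtf_est = np.exp(mtf_log - mtf_log.max())       # normalized MTF estimate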

  2. Estimation of melting points of large set of persistent organic pollutants utilizing QSPR approach.

    PubMed

    Watkins, Marquita; Sizochenko, Natalia; Rasulev, Bakhtiyor; Leszczynski, Jerzy

    2016-03-01

    The presence of polyhalogenated persistent organic pollutants (POPs), such as Cl/Br-substituted benzenes, biphenyls, diphenyl ethers, and naphthalenes, has been identified in all environmental compartments. Exposure to these compounds can pose potential risks not only to ecological systems but also to human health. Therefore, efficient tools for comprehensive environmental risk assessment of POPs are required. Among the factors vital to environmental transport and fate processes is the melting point of a compound. In this study, we estimated the melting points of a large group (1419 compounds) of chloro- and bromo- derivatives of dibenzo-p-dioxins, dibenzofurans, biphenyls, naphthalenes, diphenylethers, and benzenes by utilizing quantitative structure-property relationship (QSPR) techniques. The compounds were classified by applying structure-based clustering methods, followed by GA-PLS modeling. In addition, the random forest method was applied to develop more general models. Factors responsible for melting point behavior and the predictive ability of each method are discussed.

  3. Comparison of Optimization and Two-point Methods in Estimation of Soil Water Retention Curve

    NASA Astrophysics Data System (ADS)

    Ghanbarian-Alavijeh, B.; Liaghat, A. M.; Huang, G.

    2009-04-01

    The soil water retention curve (SWRC) is a soil hydraulic property whose direct measurement is time consuming and expensive. Since its measurement is unavoidable in environmental studies, e.g., investigations of unsaturated hydraulic conductivity and solute transport, this study attempts to predict the SWRC from two measured points. Using the Cresswell and Paydar (1996) method (two-point method) and an optimization method developed in this study on the basis of two points of the SWRC, the parameters of the Tyler and Wheatcraft (1990) model (fractal dimension and air entry value) were estimated; water contents at different matric potentials were then estimated and compared with their measured values (n=180). For each method, we used both 3 and 1500 kPa (case 1) and 33 and 1500 kPa (case 2) as the two points of the SWRC. The calculated RMSE values showed that for the Cresswell and Paydar (1996) method there is no significant difference between case 1 and case 2, although the RMSE value in case 2 (2.35) was slightly lower than in case 1 (2.37). The results also showed that the optimization method developed in this study had significantly lower RMSE values for cases 1 (1.63) and 2 (1.33) than the Cresswell and Paydar (1996) method.
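
    For the Tyler and Wheatcraft (1990) model theta(psi) = theta_s (psi_a / psi)^(3-D), two measured points determine the fractal dimension D and air-entry value psi_a in closed form, since the model is log-linear. The saturated water content and the two data points below are assumptions for illustration.

      # Hedged two-point solve of the Tyler-Wheatcraft fractal retention model.
      import numpy as np

      theta_s = 0.45                        # assumed saturated water content
      psi1, theta1 = 33.0, 0.28             # kPa, cm^3/cm^3 (made-up point 1)
      psi2, theta2 = 1500.0, 0.12           # kPa, cm^3/cm^3 (made-up point 2)

      # log(theta/theta_s) = (3 - D) * (log psi_a - log psi)
      slope = (np.log(theta2 / theta_s) - np.log(theta1 / theta_s)) \
              / (np.log(psi2) - np.log(psi1))          # equals -(3 - D)
      D = 3.0 + slope
      log_psi_a = np.log(psi1) + np.log(theta1 / theta_s) / (3.0 - D)
      print(D, np.exp(log_psi_a))           # fractal dimension, air-entry value

      def theta(psi):
          """Predicted water content at matric potential psi (psi >= psi_a)."""
          return theta_s * (np.exp(log_psi_a) / psi) ** (3.0 - D)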

  4. Articulated Non-Rigid Point Set Registration for Human Pose Estimation from 3D Sensors

    PubMed Central

    Ge, Song; Fan, Guoliang

    2015-01-01

    We propose a generative framework for 3D human pose estimation that is able to operate on both individual point sets and sequential depth data. We formulate human pose estimation as a point set registration problem, where we propose three new approaches to address several major technical challenges in this research. First, we integrate two registration techniques that have a complementary nature to cope with non-rigid and articulated deformations of the human body under a variety of poses. This unique combination allows us to handle point sets of complex body motion and large pose variation without any initial conditions, as required by most existing approaches. Second, we introduce an efficient pose tracking strategy to deal with sequential depth data, where the major challenge is the incomplete data due to self-occlusions and view changes. We introduce a visible point extraction method to initialize a new template for the current frame from the previous frame, which effectively reduces the ambiguity and uncertainty during registration. Third, to support robust and stable pose tracking, we develop a segment volume validation technique to detect tracking failures and to re-initialize pose registration if needed. The experimental results on both benchmark 3D laser scan and depth datasets demonstrate the effectiveness of the proposed framework when compared with state-of-the-art algorithms. PMID:26131673

  5. A non-rigid point matching method with local topology preservation for accurate bladder dose summation in high dose rate cervical brachytherapy.

    PubMed

    Chen, Haibin; Zhong, Zichun; Liao, Yuliang; Pompoš, Arnold; Hrycushko, Brian; Albuquerque, Kevin; Zhen, Xin; Zhou, Linghong; Gu, Xuejun

    2016-02-07

    GEC-ESTRO guidelines for high dose rate cervical brachytherapy advocate the reporting of the D2cc (the minimum dose received by the maximally exposed 2 cc volume) to organs at risk. Due to large interfractional organ motion, reporting of accurate cumulative D2cc over a multifractional course is a non-trivial task requiring deformable image registration and deformable dose summation. To efficiently and accurately describe the point-to-point correspondence of the bladder wall over all treatment fractions while preserving local topologies, we propose a novel graphic processing unit (GPU)-based non-rigid point matching algorithm. This is achieved by introducing local anatomic information into the iterative update of correspondence matrix computation in the 'thin plate splines-robust point matching' (TPS-RPM) scheme. The performance of the GPU-based TPS-RPM with local topology preservation algorithm (TPS-RPM-LTP) was evaluated using four numerically simulated synthetic bladders having known deformations, a custom-made porcine bladder phantom embedded with twenty-one fiducial markers, and 29 fractional computed tomography (CT) images from seven cervical cancer patients. Results show that TPS-RPM-LTP achieved excellent geometric accuracy with landmark residual distance error (RDE) of 0.7 ± 0.3 mm for the numerical synthetic data with different scales of bladder deformation and structure complexity, and 3.7 ± 1.8 mm and 1.6 ± 0.8 mm for the porcine bladder phantom with large and small deformation, respectively. The RDE accuracy of the urethral orifice landmarks in patient bladders was 3.7 ± 2.1 mm. When compared to the original TPS-RPM, the TPS-RPM-LTP improved landmark matching by reducing landmark RDE by 50 ± 19%, 37 ± 11% and 28 ± 11% for the synthetic, porcine phantom and the patient bladders, respectively. This was achieved with a computational time of less than 15 s in all cases.

  6. A non-rigid point matching method with local topology preservation for accurate bladder dose summation in high dose rate cervical brachytherapy

    NASA Astrophysics Data System (ADS)

    Chen, Haibin; Zhong, Zichun; Liao, Yuliang; Pompoš, Arnold; Hrycushko, Brian; Albuquerque, Kevin; Zhen, Xin; Zhou, Linghong; Gu, Xuejun

    2016-02-01

    GEC-ESTRO guidelines for high dose rate cervical brachytherapy advocate the reporting of the D2cc (the minimum dose received by the maximally exposed 2 cc volume) to organs at risk. Due to large interfractional organ motion, reporting of accurate cumulative D2cc over a multifractional course is a non-trivial task requiring deformable image registration and deformable dose summation. To efficiently and accurately describe the point-to-point correspondence of the bladder wall over all treatment fractions while preserving local topologies, we propose a novel graphic processing unit (GPU)-based non-rigid point matching algorithm. This is achieved by introducing local anatomic information into the iterative update of correspondence matrix computation in the 'thin plate splines-robust point matching' (TPS-RPM) scheme. The performance of the GPU-based TPS-RPM with local topology preservation algorithm (TPS-RPM-LTP) was evaluated using four numerically simulated synthetic bladders having known deformations, a custom-made porcine bladder phantom embedded with twenty-one fiducial markers, and 29 fractional computed tomography (CT) images from seven cervical cancer patients. Results show that TPS-RPM-LTP achieved excellent geometric accuracy with landmark residual distance error (RDE) of 0.7 ± 0.3 mm for the numerical synthetic data with different scales of bladder deformation and structure complexity, and 3.7 ± 1.8 mm and 1.6 ± 0.8 mm for the porcine bladder phantom with large and small deformation, respectively. The RDE accuracy of the urethral orifice landmarks in patient bladders was 3.7 ± 2.1 mm. When compared to the original TPS-RPM, the TPS-RPM-LTP improved landmark matching by reducing landmark RDE by 50 ± 19%, 37 ± 11% and 28 ± 11% for the synthetic, porcine phantom and the patient bladders, respectively. This was achieved with a computational time of less than 15 s in all cases.

  7. Point and Fixed Plot Sampling Inventory Estimates at the Savannah River Site, South Carolina.

    SciTech Connect

    Parresol, Bernard, R.

    2004-02-01

    This report provides calculations of systematic point sampling volume estimates for trees greater than or equal to 5 inches diameter at breast height (dbh) and fixed-radius plot volume estimates for trees < 5 inches dbh at the Savannah River Site (SRS), Aiken County, South Carolina. The inventory of 622 plots was started in March 1999 and completed in January 2002 (Figure 1). Estimates are given in cubic-foot volume. The analyses are presented in a series of tables and figures. In addition, a preliminary analysis of fuel levels on the SRS is given, based on depth measurements of the duff and litter layers on the 622 inventory plots plus line-transect samples of down coarse woody material. Potential standing live fuels are also included. The fuels analyses are presented in a series of tables.
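
    For context on how such point (prism) sampling estimates are computed, each tallied tree represents TF = BAF / (its basal area) trees per acre, and summing volume times TF over the tally gives volume per acre. The tree list and basal area factor below are hypothetical, not SRS data.

      # Hypothetical tally illustrating standard point-sampling expansion.
      BAF = 10.0                                  # basal area factor, ft^2/acre
      trees = [(12.0, 18.0), (9.5, 11.0)]         # (dbh in inches, volume in ft^3)

      vpa = 0.0
      for dbh, vol in trees:
          ba = 0.005454 * dbh ** 2                # tree basal area, ft^2
          vpa += vol * BAF / ba                   # per-acre volume from this tree
      print(vpa)                                  # cubic feet per acre at this point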

  8. Vein visualization using a smart phone with multispectral Wiener estimation for point-of-care applications.

    PubMed

    Song, Jae Hee; Kim, Choye; Yoo, Yangmo

    2015-03-01

    Effective vein visualization is clinically important for various point-of-care applications, such as needle insertion. It can be achieved by utilizing ultrasound imaging or by applying infrared laser excitation and monitoring its absorption. However, while these approaches can be used for vein visualization, they are not suitable for point-of-care applications because of their cost, time, and accessibility. In this paper, a new vein visualization method based on multispectral Wiener estimation is proposed and its real-time implementation on a smart phone is presented. In the proposed method, a conventional RGB camera on a commercial smart phone (i.e., Galaxy Note 2, Samsung Electronics Inc., Suwon, Korea) is used to acquire reflectance information from veins. Wiener estimation is then applied to extract the multispectral information from the veins. To evaluate the performance of the proposed method, an experiment was conducted using a color calibration chart (ColorChecker Classic, X-rite, Grand Rapids, MI, USA) and an average root-mean-square error of 12.0% was obtained. In addition, an in vivo subcutaneous vein imaging experiment was performed to explore the clinical performance of the smart phone-based Wiener estimation. From the in vivo experiment, the veins at various sites were successfully localized using the reconstructed multispectral images and these results were confirmed by ultrasound B-mode and color Doppler images. These results indicate that the presented multispectral Wiener estimation method can be used for visualizing veins using a commercial smart phone for point-of-care applications (e.g., vein puncture guidance).
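
    Training-based multispectral Wiener estimation, as commonly formulated, learns a matrix W mapping camera responses to spectra from pairs of known spectra and their responses. The sketch below uses synthetic spectra and made-up camera sensitivities, with a small ridge term for conditioning; the paper's training set and camera model will differ.

      # Hedged sketch of training-based multispectral Wiener estimation.
      import numpy as np

      rng = np.random.default_rng(5)
      n_bands, n_train = 31, 24                   # e.g., 400-700 nm in 10 nm steps
      S = rng.uniform(0, 1, (n_bands, n_train))   # training reflectance spectra
      H = rng.uniform(0, 1, (3, n_bands))         # assumed camera sensitivities
      R = H @ S                                   # corresponding RGB responses

      # W = S R^T (R R^T)^-1, lightly regularized for numerical stability.
      W = S @ R.T @ np.linalg.inv(R @ R.T + 1e-6 * np.eye(3))
      rgb_new = H @ rng.uniform(0, 1, n_bands)    # response of an unseen sample
      spectrum_est = W @ rgb_new                  # estimated 31-band spectrum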

  9. Star Tracker Based ATP System Conceptual Design and Pointing Accuracy Estimation

    NASA Technical Reports Server (NTRS)

    Orfiz, Gerardo G.; Lee, Shinhak

    2006-01-01

    A star tracker based beaconless (a.k.a. non-cooperative beacon) acquisition, tracking and pointing (ATP) concept for precisely pointing an optical communication beam is presented as an innovative approach to extend the range of high-bandwidth (> 100 Mbps) deep space optical communication links throughout the solar system and to remove the need for a ground-based high-power laser as a beacon source. The basic approach for executing the ATP functions involves using stars as the reference sources from which attitude knowledge is obtained, combined with high-bandwidth gyroscopes for propagating the pointing knowledge to the beam pointing mechanism. Details of the conceptual design are presented, including the selection of an orthogonal telescope configuration and the introduction of an optical metering scheme to reduce misalignment error. Estimates are also presented demonstrating that aiming of the communications beam at the Earth-based receive terminal can be achieved with a total system pointing accuracy of better than 850 nanoradians (3 sigma) from anywhere in the solar system.

  10. Estimation of boiling points using density functional theory with polarized continuum model solvent corrections.

    PubMed

    Chan, Poh Yin; Tong, Chi Ming; Durrant, Marcus C

    2011-09-01

    An empirical method for estimation of the boiling points of organic molecules based on density functional theory (DFT) calculations with polarized continuum model (PCM) solvent corrections has been developed. The boiling points are calculated as the sum of three contributions. The first term is calculated directly from the structural formula of the molecule, and is related to its effective surface area. The second is a measure of the electronic interactions between molecules, based on the DFT-PCM solvation energy, and the third is employed only for planar aromatic molecules. The method is applicable to a very diverse range of organic molecules, with normal boiling points in the range of -50 to 500 °C, and includes ten different elements (C, H, Br, Cl, F, N, O, P, S and Si). Plots of observed versus calculated boiling points gave R²=0.980 for a training set of 317 molecules, and R²=0.979 for a test set of 74 molecules. The role of intramolecular hydrogen bonding in lowering the boiling points of certain molecules is quantitatively discussed.

  11. Quaternion-Based Unscented Kalman Filter for Accurate Indoor Heading Estimation Using Wearable Multi-Sensor System

    PubMed Central

    Yuan, Xuebing; Yu, Shuai; Zhang, Shengzhi; Wang, Guoping; Liu, Sheng

    2015-01-01

    Inertial navigation based on micro-electromechanical system (MEMS) inertial measurement units (IMUs) has attracted numerous researchers due to its high reliability and independence. Heading estimation, as one of the most important parts of inertial navigation, has been a research focus in this field. Heading estimation using magnetometers is perturbed by magnetic disturbances, such as indoor concrete structures and electronic equipment. The MEMS gyroscope is also used for heading estimation; however, gyroscope accuracy degrades over time. In this paper, a wearable multi-sensor system has been designed to obtain high-accuracy indoor heading estimation, based on a quaternion-based unscented Kalman filter (UKF) algorithm. The proposed multi-sensor system, comprising one three-axis accelerometer, three single-axis gyroscopes, one three-axis magnetometer, and one microprocessor, minimizes size and cost. The wearable multi-sensor system was fixed on the waist of a pedestrian and on a quadrotor unmanned aerial vehicle (UAV) for heading estimation experiments in our college building. The results show that, compared to the reference path, the mean heading estimation errors are less than 10° and 5° for the multi-sensor system fixed on the waist of the pedestrian and on the quadrotor UAV, respectively. PMID:25961384
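
    One building block of such a filter is quaternion attitude propagation from a gyroscope sample; the sketch below shows only that prediction step, omitting the UKF sigma points and the magnetometer/accelerometer updates described in the paper.

      # Hedged sketch of the quaternion prediction step of a heading filter.
      import numpy as np

      def quat_mul(q, r):
          """Hamilton product of quaternions [w, x, y, z]."""
          w1, x1, y1, z1 = q; w2, x2, y2, z2 = r
          return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                           w1*x2 + x1*w2 + y1*z2 - z1*y2,
                           w1*y2 - x1*z2 + y1*w2 + z1*x2,
                           w1*z2 + x1*y2 - y1*x2 + z1*w2])

      def propagate(q, omega, dt):
          """Rotate q by gyro rate omega (rad/s, body frame) over dt seconds."""
          angle = np.linalg.norm(omega) * dt
          if angle < 1e-12:
              return q
          axis = omega / np.linalg.norm(omega)
          dq = np.concatenate([[np.cos(angle / 2)], np.sin(angle / 2) * axis])
          out = quat_mul(q, dq)
          return out / np.linalg.norm(out)        # keep unit norm

      q = np.array([1.0, 0.0, 0.0, 0.0])          # identity attitude
      q = propagate(q, omega=np.array([0.0, 0.0, 0.1]), dt=0.01)  # slow yaw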

  12. Markov chain Monte Carlo estimation of a multiparameter decision model: consistency of evidence and the accurate assessment of uncertainty.

    PubMed

    Ades, A E; Cliffe, S

    2002-01-01

    Decision models are usually populated 1 parameter at a time, with 1 item of information informing each parameter. Often, however, data may not be available on the parameters themselves but on several functions of parameters, and there may be more items of information than there are parameters to be estimated. The authors show how in these circumstances all the model parameters can be estimated simultaneously using Bayesian Markov chain Monte Carlo methods. Consistency of the information and/or the adequacy of the model can also be assessed within this framework. Statistical evidence synthesis using all available data should result in more precise estimates of parameters and functions of parameters, and is compatible with the emphasis currently placed on systematic use of evidence. To illustrate this, WinBUGS software is used to estimate a simple 9-parameter model of the epidemiology of HIV in women attending prenatal clinics, using information on 12 functions of parameters, and to thereby compute the expected net benefit of 2 alternative prenatal testing strategies, universal testing and targeted testing of high-risk groups. The authors demonstrate improved precision of estimates, and lower estimates of the expected value of perfect information, resulting from the use of all available data.
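
    The core idea, several datasets informing functions of a smaller parameter set and estimated jointly, can be shown with a toy random-walk Metropolis sampler: two basic probabilities, three binomial datasets, one of which informs only their product. The model and data below are fabricated and far simpler than the 9-parameter HIV example.

      # Hedged toy MCMC: joint estimation from data on functions of parameters.
      import numpy as np
      from scipy.stats import binom

      data = [(120, 1000, lambda p: p[0]),          # informs p1
              (300, 1000, lambda p: p[1]),          # informs p2
              (40, 1000, lambda p: p[0] * p[1])]    # informs the product p1*p2

      def log_post(p):
          if not (0 < p[0] < 1 and 0 < p[1] < 1):
              return -np.inf                        # flat prior on (0, 1)^2
          return sum(binom.logpmf(k, n, f(p)) for k, n, f in data)

      rng = np.random.default_rng(6)
      p, chain = np.array([0.5, 0.5]), []
      lp = log_post(p)
      for _ in range(20000):
          prop = p + rng.normal(0, 0.02, 2)         # random-walk proposal
          lp_prop = log_post(prop)
          if np.log(rng.uniform()) < lp_prop - lp:
              p, lp = prop, lp_prop
          chain.append(p.copy())
      print(np.mean(chain[5000:], axis=0))          # posterior means of p1, p2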

  13. Quaternion-based unscented Kalman filter for accurate indoor heading estimation using wearable multi-sensor system.

    PubMed

    Yuan, Xuebing; Yu, Shuai; Zhang, Shengzhi; Wang, Guoping; Liu, Sheng

    2015-05-07

    Inertial navigation based on micro-electromechanical system (MEMS) inertial measurement units (IMUs) has attracted numerous researchers due to its high reliability and independence. Heading estimation, as one of the most important parts of inertial navigation, has been a research focus in this field. Heading estimation using magnetometers is perturbed by magnetic disturbances, such as indoor concrete structures and electronic equipment. The MEMS gyroscope is also used for heading estimation; however, gyroscope accuracy degrades over time. In this paper, a wearable multi-sensor system has been designed to obtain high-accuracy indoor heading estimation, based on a quaternion-based unscented Kalman filter (UKF) algorithm. The proposed multi-sensor system, comprising one three-axis accelerometer, three single-axis gyroscopes, one three-axis magnetometer, and one microprocessor, minimizes size and cost. The wearable multi-sensor system was fixed on the waist of a pedestrian and on a quadrotor unmanned aerial vehicle (UAV) for heading estimation experiments in our college building. The results show that, compared to the reference path, the mean heading estimation errors are less than 10° and 5° for the multi-sensor system fixed on the waist of the pedestrian and on the quadrotor UAV, respectively.

  14. The Point Count Transect Method for Estimates of Biodiversity on Coral Reefs: Improving the Sampling of Rare Species.

    PubMed

    Roberts, T Edward; Bridge, Thomas C; Caley, M Julian; Baird, Andrew H

    2016-01-01

    Understanding patterns in species richness and diversity over environmental gradients (such as altitude and depth) is an enduring component of ecology. As most biological communities feature few common and many rare species, quantifying the presence and abundance of rare species is a crucial requirement for analysis of these patterns. Coral reefs present specific challenges for data collection, with limitations on time and site accessibility making efficiency crucial. Many commonly used methods, such as line intercept transects (LIT), are poorly suited to questions requiring the detection of rare events or species. Here, an alternative method for surveying reef-building corals is presented: the point count transect (PCT). The PCT consists of a count of coral colonies at a series of sample stations located at regular intervals along a transect, whereas the LIT records the proportion of each species occurring under a transect tape of a given length. The same site was surveyed using PCT and LIT to compare species richness estimates between the methods. The total number of species increased faster per individual sampled and per unit of time invested using the PCT. Furthermore, 41 of the 44 additional species recorded by the PCT occurred ≤ 3 times, demonstrating the increased capacity of the PCT to detect rare species. The PCT provides a more accurate estimate of local-scale species richness than the LIT, and is an efficient alternative method for surveying reef corals to address questions associated with alpha-diversity and rare or incidental events.

  15. Improving radar rainfall estimation by merging point rainfall measurements within a model combination framework

    NASA Astrophysics Data System (ADS)

    Hasan, Mohammad Mahadi; Sharma, Ashish; Mariethoz, Gregoire; Johnson, Fiona; Seed, Alan

    2016-11-01

    While the value of correcting raw radar rainfall estimates using simultaneous ground rainfall observations is well known, approaches that use the complete record of both gauge and radar measurements to provide improved rainfall estimates are much less common. We present here two new approaches for estimating radar rainfall that are designed to address known limitations in radar rainfall products by using a relatively long history of radar reflectivity and ground rainfall observations. The first of these two approaches is a radar rainfall estimation algorithm that is nonparametric by construction. Compared to the traditional gauge adjusted parametric relationship between reflectivity (Z) and ground rainfall (R), the suggested new approach is based on a nonparametric radar rainfall estimation method (NPR) derived using the conditional probability distribution of reflectivity and gauge rainfall. The NPR method is applied to the densely gauged Sydney Terrey Hills radar network, where it reduces the RMSE in rainfall estimates by 10%, with improvements observed at 90% of the gauges. The second of the two approaches is a method to merge radar and spatially interpolated gauge measurements. The two sources of information are combined using a dynamic combinatorial algorithm with weights that vary in both space and time. The weight for any specific period is calculated based on the error covariance matrix that is formulated from the radar and spatially interpolated rainfall errors of similar reflectivity periods in a cross-validation setting. The combination method reduces the RMSE by about 20% compared to the traditional Z-R relationship method, and improves estimates compared to spatially interpolated point measurements in sparsely gauged areas.
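
    The combination step can be summarized as minimum-variance weighting: given an error covariance matrix C between the radar-based and gauge-interpolated estimates, the weights are w = C^-1 1 / (1' C^-1 1). The covariance and rainfall values below are illustrative only.

      # Hedged sketch of minimum-variance weighting of two rainfall estimates.
      import numpy as np

      C = np.array([[4.0, 1.2],        # radar error variance, cross-covariance
                    [1.2, 2.5]])       # gauge-interpolation error variance
      ones = np.ones(2)
      w = np.linalg.solve(C, ones)
      w /= ones @ w                    # weights sum to 1
      radar_mm, gauge_mm = 6.1, 4.8    # the two rainfall estimates (mm)
      merged = w @ np.array([radar_mm, gauge_mm])
      print(w, merged)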

  16. Two-point correlation functions to characterize microgeometry and estimate permeabilities of synthetic and natural sandstones

    SciTech Connect

    Blair, S.C.; Berge, P.A.; Berryman, J.G.

    1993-08-01

    We have developed an image-processing method for characterizing the microstructure of rock and other porous materials, and for providing a quantitative means for understanding the dependence of physical properties on the pore structure. This method is based upon the statistical properties of the microgeometry as observed in scanning electron micrograph (SEM) images of cross sections of porous materials. The method utilizes a simple statistical function, called the spatial correlation function, which can be used to predict bounds on permeability and other physical properties. We obtain estimates of the porosity and specific surface area of the material from the two-point correlation function. The specific surface area can be related to the permeability of porous materials using a Kozeny-Carman relation, and we show that the specific surface area measured on images of sandstones is consistent with the specific surface area used in a simple flow model for computation of permeability. In this paper, we discuss the two-point spatial correlation function and its use in characterizing microstructure features such as pore and grain sizes. We present estimates of permeabilities found using SEM images of several different synthetic and natural sandstones. Comparison of the estimates to laboratory measurements shows good agreement. Finally, we briefly discuss extension of this technique to two-phase flow.
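
    A two-point correlation function of a binary (pore/solid) image can be computed efficiently via FFT autocorrelation, with porosity read off as S2(0); the sketch below uses a random synthetic medium rather than an SEM micrograph and omits the specific-surface-area step based on the slope at zero lag.

      # Hedged sketch: two-point correlation of a binary image via FFT.
      import numpy as np

      rng = np.random.default_rng(7)
      img = (rng.uniform(size=(256, 256)) < 0.3).astype(float)   # pore = 1

      F = np.fft.fft2(img)
      auto = np.real(np.fft.ifft2(F * np.conj(F))) / img.size    # periodic S2
      phi = auto[0, 0]                                           # porosity = S2(0)
      print(phi, auto[0, 1], auto[0, 5])   # S2 decays toward phi^2 with lag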

  17. Minimum Number of Observation Points for LEO Satellite Orbit Estimation by OWL Network

    NASA Astrophysics Data System (ADS)

    Park, Maru; Jo, Jung Hyun; Cho, Sungki; Choi, Jin; Kim, Chun-Hwey; Park, Jang-Hyun; Yim, Hong-Suh; Choi, Young-Jun; Moon, Hong-Kyu; Bae, Young-Ho; Park, Sun-Youp; Kim, Ji-Hye; Roh, Dong-Goo; Jang, Hyun-Jung; Park, Young-Sik; Jeong, Min-Ji

    2015-12-01

    Using the Optical Wide-field Patrol (OWL) network developed by the Korea Astronomy and Space Science Institute (KASI), we generated right ascension and declination angle data from optical observations of Low Earth Orbit (LEO) satellites. We performed an analysis to verify the optimum number of observation points needed per arc for successful orbit estimation. The currently functioning OWL observatories are located in Daejeon (South Korea), Songino (Mongolia), and Oukaïmeden (Morocco); the Daejeon Observatory functions as a test bed. In this study, the observed targets were Gravity Probe B, COSMOS 1455, COSMOS 1726, COSMOS 2428, SEASAT 1, ATV-5, and CryoSat-2 (all in LEO). These satellites were observed from the test bed and the Songino Observatory of the OWL network during 21 nights in 2014 and 2015. After estimating the orbit from systematically selected sets of observation points (20, 50, 100, and 150) for each pass, we compared the orbit estimates for each case with the Two Line Element set (TLE) from the Joint Space Operations Center (JSpOC), then averaged the differences and selected the optimal number of observation points by comparing these averages.

  18. Analysis of open-loop conical scan pointing error and variance estimators

    NASA Technical Reports Server (NTRS)

    Alvarez, L. S.

    1993-01-01

    General pointing error and variance estimators for an open-loop conical scan (conscan) system are derived and analyzed. The conscan algorithm is modeled as a weighted least-squares estimator whose inputs are samples of receiver carrier power and its associated measurement uncertainty. When the assumptions of constant measurement noise and zero pointing error estimation are applied, the variance equation is then strictly a function of the carrier power to uncertainty ratio and the operator selectable radius and period input to the algorithm. The performance equation is applied to a 34-m mirror-based beam-waveguide conscan system interfaced with the Block V Receiver Subsystem tracking a Ka-band (32-GHz) downlink. It is shown that for a carrier-to-noise power ratio greater than or equal to 30 dB-Hz, the conscan period for Ka-band operation may be chosen well below the current DSN minimum of 32 sec. The analysis presented forms the basis of future conscan work in both research and development as well as for the upcoming DSN antenna controller upgrade for the new DSS-24 34-m beam-waveguide antenna.
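
    The essence of such a conscan estimator can be sketched as a least-squares fit of in-phase and quadrature amplitudes to the power modulation at the scan frequency; the pattern-slope constant k below is an assumed stand-in for the linearized antenna gain model used in the real system.

      # Hedged sketch of a conscan pointing-error estimator via least squares.
      import numpy as np

      rng = np.random.default_rng(8)
      t = np.linspace(0, 32.0, 128)                 # one 32 s scan period
      w = 2 * np.pi / 32.0
      offset = np.array([0.3, -0.1])                # true pointing error (beamwidths)
      k = 2.0                                       # assumed pattern slope constant
      power = 10.0 + k * (offset[0] * np.cos(w * t) + offset[1] * np.sin(w * t)) \
              + rng.normal(0, 0.05, t.size)         # noisy received carrier power

      A = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
      coef, *_ = np.linalg.lstsq(A, power, rcond=None)
      est_offset = coef[1:] / k
      print(est_offset)                             # ~ (0.3, -0.1)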

  19. Probability distributions of the logarithm of inter-spike intervals yield accurate entropy estimates from small datasets.

    PubMed

    Dorval, Alan D

    2008-08-15

    The maximal information that the spike train of any neuron can pass on to subsequent neurons can be quantified as the neuronal firing pattern entropy. Difficulties associated with estimating entropy from small datasets have proven an obstacle to the widespread reporting of firing pattern entropies and more generally, the use of information theory within the neuroscience community. In the most accessible class of entropy estimation techniques, spike trains are partitioned linearly in time and entropy is estimated from the probability distribution of firing patterns within a partition. Ample previous work has focused on various techniques to minimize the finite dataset bias and standard deviation of entropy estimates from under-sampled probability distributions on spike timing events partitioned linearly in time. In this manuscript we present evidence that all distribution-based techniques would benefit from inter-spike intervals being partitioned in logarithmic time. We show that with logarithmic partitioning, firing rate changes become independent of firing pattern entropy. We delineate the entire entropy estimation process with two example neuronal models, demonstrating the robust improvements in bias and standard deviation that the logarithmic time method yields over two widely used linearly partitioned time approaches.
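
    The logarithmic partitioning advocated above amounts to histogramming log(ISI) with uniform bins and computing the plug-in entropy; the sketch below does this for a simulated Poisson neuron with an arbitrary bin count.

      # Hedged sketch: plug-in entropy of log-partitioned inter-spike intervals.
      import numpy as np

      rng = np.random.default_rng(9)
      isi = rng.exponential(scale=0.1, size=5000)    # seconds, Poisson-like neuron

      edges = np.linspace(np.log(isi.min()), np.log(isi.max()), 33)  # 32 log bins
      counts, _ = np.histogram(np.log(isi), bins=edges)
      p = counts[counts > 0] / counts.sum()
      entropy_bits = -np.sum(p * np.log2(p))
      print(entropy_bits)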

  20. How accurately can students estimate their performance on an exam and how does this relate to their actual performance on the exam?

    NASA Astrophysics Data System (ADS)

    Rebello, N. Sanjay

    2012-02-01

    Research has shown that students' beliefs regarding their own abilities in math and science can influence their performance in these disciplines. I investigated the relationship between students' estimated performance and actual performance on five exams in a second-semester calculus-based physics class. Within about 72 hours after the completion of each exam, students were asked to estimate their individual score and the class mean score. Students were given extra credit worth 1% of the exam points for estimating their score within 2% of the actual score, and another 1% extra credit for estimating the class mean score within 2% of the correct value. I compared students' individual and mean score estimates with the actual scores to investigate the relationship between estimation accuracy and exam performance, as well as trends over the semester.

  1. Effect of distance-related heterogeneity on population size estimates from point counts

    USGS Publications Warehouse

    Efford, Murray G.; Dawson, Deanna K.

    2009-01-01

    Point counts are used widely to index bird populations. Variation in the proportion of birds counted is a known source of error, and for robust inference it has been advocated that counts be converted to estimates of absolute population size. We used simulation to assess nine methods for the conduct and analysis of point counts when the data included distance-related heterogeneity of individual detection probability. Distance from the observer is a ubiquitous source of heterogeneity, because nearby birds are more easily detected than distant ones. Several recent methods (dependent double-observer, time of first detection, time of detection, independent multiple-observer, and repeated counts) do not account for distance-related heterogeneity, at least in their simpler forms. We assessed bias in estimates of population size by simulating counts with fixed radius w over four time intervals (occasions). Detection probability per occasion was modeled as a half-normal function of distance with scale parameter sigma and intercept g(0) = 1.0. Bias varied with sigma/w; for values of sigma inferred from published studies, bias often reached 50% for a 100-m fixed-radius count. More critically, the bias of adjusted counts sometimes varied more than that of unadjusted counts, and inference from adjusted counts would be less robust. The problem was not solved by using mixture models or including distance as a covariate. Conventional distance sampling performed well in simulations, but its assumptions are difficult to meet in the field. We conclude that no existing method allows effective estimation of population size from point counts.
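
    To make the half-normal mechanism above concrete, here is a minimal simulation, under the stated model g(r) = exp(-r^2/(2 sigma^2)) with g(0) = 1, of how the mean single-occasion detection probability, and hence the bias of an unadjusted count, varies with sigma/w; the sigma values are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def mean_detection_prob(sigma, w, n=200_000):
        """Mean per-occasion detection probability for birds distributed
        uniformly over a disc of radius w, under half-normal detection
        g(r) = exp(-r**2 / (2 * sigma**2)) with g(0) = 1."""
        r = w * np.sqrt(rng.uniform(size=n))   # uniform over a disc: r = w*sqrt(U)
        return np.exp(-r**2 / (2 * sigma**2)).mean()

    w = 100.0                                  # fixed-radius count, metres
    for sigma in (25.0, 50.0, 100.0):
        p = mean_detection_prob(sigma, w)
        print(f"sigma/w = {sigma/w:.2f}: mean p = {p:.3f}; "
              f"an unadjusted count underestimates N by {100*(1-p):.0f}%")
    ```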

  2. Unbalanced and Minimal Point Equivalent Estimation Second-Order Split-Plot Designs

    NASA Technical Reports Server (NTRS)

    Parker, Peter A.; Kowalski, Scott M.; Vining, G. Geoffrey

    2007-01-01

    Restricting the randomization of hard-to-change factors in industrial experiments is often performed by employing a split-plot design structure. From an economic perspective, these designs minimize the experimental cost by reducing the number of resets of the hard-to-change factors. In this paper, unbalanced designs are considered for cases where the subplots are relatively expensive and the experimental apparatus accommodates an unequal number of runs per whole-plot. We provide construction methods for unbalanced second-order split-plot designs that possess the equivalence estimation optimality property, providing best linear unbiased estimates of the parameters independent of the variance components. Unbalanced versions of the central composite and Box-Behnken designs are developed. For cases where the subplot cost approaches the whole-plot cost, minimal point designs are proposed and illustrated with a split-plot Notz design.

  3. How Accurate Are German Work-Time Data? A Comparison of Time-Diary Reports and Stylized Estimates

    ERIC Educational Resources Information Center

    Otterbach, Steffen; Sousa-Poza, Alfonso

    2010-01-01

    This study compares work time data collected by the German Time Use Survey (GTUS) using the diary method with stylized work time estimates from the GTUS, the German Socio-Economic Panel, and the German Microcensus. Although on average the differences between the time-diary data and the interview data are not large, our results show that significant…

  4. Estimate of Shock Standoff Distance Ahead of a General Stagnation Point

    NASA Technical Reports Server (NTRS)

    Reshotko, Eli

    1961-01-01

    The shock standoff distance ahead of a general rounded stagnation point has been estimated under the assumption of a constant-density shock layer. It is found that, with the exception of almost-two-dimensional bodies with very strong shock waves, the present theoretical calculations and the experimental data of Zakkay and Visich for toroids are well represented by the relation Delta_3D/R_s = (Delta_axisym/R_s) * (2/(K + 1)), where Delta is the shock standoff distance, R_s is the smaller principal shock radius, and K is the ratio of the smaller to the larger of the principal shock radii.

  5. Travel Time Estimation Using Freeway Point Detector Data Based on Evolving Fuzzy Neural Inference System.

    PubMed

    Tang, Jinjun; Zou, Yajie; Ash, John; Zhang, Shen; Liu, Fang; Wang, Yinhai

    2016-01-01

    Travel time is an important measurement used to evaluate the extent of congestion within road networks. This paper presents a new method to estimate the travel time based on an evolving fuzzy neural inference system. The input variables in the system are traffic flow data (volume, occupancy, and speed) collected from loop detectors located at points both upstream and downstream of a given link, and the output variable is the link travel time. A first order Takagi-Sugeno fuzzy rule set is used to complete the inference. For training the evolving fuzzy neural network (EFNN), two learning processes are proposed: (1) a K-means method is employed to partition input samples into different clusters, and a Gaussian fuzzy membership function is designed for each cluster to measure the membership degree of samples to the cluster centers. As the number of input samples increases, the cluster centers are modified and membership functions are also updated; (2) a weighted recursive least squares estimator is used to optimize the parameters of the linear functions in the Takagi-Sugeno type fuzzy rules. Testing datasets consisting of actual and simulated data are used to test the proposed method. Three common criteria including mean absolute error (MAE), root mean square error (RMSE), and mean absolute relative error (MARE) are utilized to evaluate the estimation performance. Estimation results demonstrate the accuracy and effectiveness of the EFNN method through comparison with existing methods including: multiple linear regression (MLR), instantaneous model (IM), linear model (LM), neural network (NN), and cumulative plots (CP).
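
    As a structural sketch of the inference step described above (not the full EFNN with its evolving clusters and recursive least-squares training), the following shows first-order Takagi-Sugeno prediction with Gaussian memberships; all centers, widths, and coefficients are made-up illustrative values:

    ```python
    import numpy as np

    def ts_predict(x, centers, widths, coefs, intercepts):
        """First-order Takagi-Sugeno inference: Gaussian rule memberships weight
        per-rule linear models. In the EFNN, centers/widths would come from the
        evolving K-means step and coefs/intercepts from weighted recursive least
        squares; here they are fixed for illustration."""
        d2 = ((x - centers) ** 2).sum(axis=1)         # squared distance to each center
        mu = np.exp(-d2 / (2 * widths ** 2))          # Gaussian membership per rule
        y_rule = coefs @ x + intercepts               # linear consequent per rule
        return float((mu * y_rule).sum() / mu.sum())  # normalized weighted output

    # Toy example: 2 rules over 3 inputs (volume, occupancy, speed)
    centers = np.array([[500.0, 0.10, 90.0], [1500.0, 0.40, 40.0]])
    widths = np.array([300.0, 300.0])
    coefs = np.array([[0.001, 5.0, -0.02], [0.002, 8.0, -0.05]])
    intercepts = np.array([1.0, 3.0])
    x = np.array([900.0, 0.25, 60.0])                 # a loop-detector reading
    print(ts_predict(x, centers, widths, coefs, intercepts))  # estimated travel time
    ```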

  6. Estimation of ground reaction force and zero moment point on a powered ankle-foot prosthesis.

    PubMed

    Martinez-Villalpando, Ernesto C; Herr, Hugh; Farrell, Matthew

    2007-01-01

    The ground reaction force (GRF) and the zero moment point (ZMP) are important parameters for the advancement of biomimetic control of robotic lower-limb prosthetic devices. In this document a method to estimate GRF and ZMP on a motorized ankle-foot prosthesis (MIT Powered Ankle-Foot Prosthesis) is presented. The method proposed is based on the analysis of data collected from a sensory system embedded in the prosthetic device using a custom designed wearable computing unit. In order to evaluate the performance of the estimation methods described, standing and walking clinical studies were conducted on a transtibial amputee. The results were statistically compared to standard analysis methodologies employed in a gait laboratory. The average RMS error and correlation factor were calculated for all experimental sessions. By using a static analysis procedure, the estimation of the vertical component of GRF had an averaged correlation coefficient higher than 0.94. The estimated ZMP location had a distance error of less than 1 cm, equal to 4% of the anterior-posterior foot length or 12% of the medio-lateral foot width.
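
    For context, on flat ground the ZMP coincides with the center of pressure and follows directly from a measured ground-reaction wrench. A minimal sketch, assuming moments are expressed about a ground-level origin; sign conventions vary by sensor setup, and this is not the prosthesis' embedded estimation pipeline:

    ```python
    import numpy as np

    def zmp_flat_ground(F, M):
        """ZMP (= center of pressure) on flat ground from a ground-reaction
        wrench: force F = (Fx, Fy, Fz) and moment M = (Mx, My, Mz) about a
        ground-level origin. Standard convention: x = -My/Fz, y = Mx/Fz."""
        Fx, Fy, Fz = F
        Mx, My, Mz = M
        if Fz <= 0:
            raise ValueError("no supporting vertical load")
        return np.array([-My / Fz, Mx / Fz])

    # Example: mostly vertical load with small sagittal/frontal moments
    print(zmp_flat_ground(F=(10.0, 5.0, 700.0), M=(3.5, -14.0, 0.2)))  # [0.02 0.005]
    ```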

  7. The number of alleles at a microsatellite defines the allele frequency spectrum and facilitates fast accurate estimation of theta.

    PubMed

    Haasl, Ryan J; Payseur, Bret A

    2010-12-01

    Theoretical work focused on microsatellite variation has produced a number of important results, including the expected distribution of repeat sizes and the expected squared difference in repeat size between two randomly selected samples. However, closed-form expressions for the sampling distribution and frequency spectrum of microsatellite variation have not been identified. Here, we use coalescent simulations of the stepwise mutation model to develop gamma and exponential approximations of the microsatellite allele frequency spectrum, a distribution central to the description of microsatellite variation across the genome. For both approximations, the parameter of biological relevance is the number of alleles at a locus, which we express as a function of θ, the population-scaled mutation rate, based on simulated data. Discovered relationships between θ, the number of alleles, and the frequency spectrum support the development of three new estimators of microsatellite θ. The three estimators exhibit roughly similar mean squared errors (MSEs) and all are biased. However, across a broad range of sample sizes and θ values, the MSEs of these estimators are frequently lower than all other estimators tested. The new estimators are also reasonably robust to mutation that includes step sizes greater than one. Finally, our approximation to the microsatellite allele frequency spectrum provides a null distribution of microsatellite variation. In this context, a preliminary analysis of the effects of demographic change on the frequency spectrum is performed. We suggest that simulations of the microsatellite frequency spectrum under evolutionary scenarios of interest may guide investigators to the use of relevant and sometimes novel summary statistics.
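
    For orientation, the link between theta and observable microsatellite variation under the stepwise mutation model is classically summarized by the Ohta-Kimura homozygosity result, E[F] = 1/sqrt(1 + 2*theta); the sketch below simply inverts it as a moment estimator. This is not one of the paper's three new estimators, just the standard point of comparison:

    ```python
    import numpy as np

    def theta_smm_from_homozygosity(allele_counts):
        """Classical moment estimator of theta under the stepwise mutation model:
        E[homozygosity] F = 1 / sqrt(1 + 2*theta)  (Ohta & Kimura), hence
        theta_hat = (1/F**2 - 1) / 2. Uses the plug-in (biased) homozygosity."""
        counts = np.asarray(allele_counts, dtype=float)
        p = counts / counts.sum()          # allele frequencies
        F = np.sum(p ** 2)                 # plug-in homozygosity
        return (1.0 / F ** 2 - 1.0) / 2.0

    # Counts per repeat-size allele at one microsatellite locus (illustrative)
    print(theta_smm_from_homozygosity([12, 30, 41, 10, 7]))
    ```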

  8. Assessment of the point-source method for estimating dose rates to members of the public from exposure to patients with 131I thyroid treatment

    SciTech Connect

    Dewji, Shaheen Azim; Bellamy, Michael B.; Hertel, Nolan E.; Leggett, Richard Wayne; Sherbini, Sami; Saba, Mohammad S.; Eckerman, Keith F.

    2015-09-01

    The U.S. Nuclear Regulatory Commission (USNRC) initiated a contract with Oak Ridge National Laboratory (ORNL) to calculate radiation dose rates to members of the public that may result from exposure to patients recently administered iodine-131 (131I) as part of medical therapy. The main purpose was to compare dose rate estimates based on a point source and target with values derived from more realistic simulations that considered the time-dependent distribution of 131I in the patient and attenuation of emitted photons by the patient’s tissues. The external dose rate estimates were derived using Monte Carlo methods and two representations of the Phantom with Movable Arms and Legs, previously developed by ORNL and the USNRC, to model the patient and a nearby member of the public. Dose rates to tissues and effective dose rates were calculated for distances ranging from 10 to 300 cm between the phantoms and compared to estimates based on the point-source method, as well as to results of previous studies that estimated exposure from 131I patients. The point-source method overestimates dose rates to members of the public in very close proximity to an 131I patient but is a broadly accurate method of dose rate estimation at separation distances of 300 cm or more at times closer to administration.

  9. Assessment of the point-source method for estimating dose rates to members of the public from exposure to patients with 131I thyroid treatment

    DOE PAGES

    Dewji, Shaheen Azim; Bellamy, Michael B.; Hertel, Nolan E.; ...

    2015-09-01

    The U.S. Nuclear Regulatory Commission (USNRC) initiated a contract with Oak Ridge National Laboratory (ORNL) to calculate radiation dose rates to members of the public that may result from exposure to patients recently administered iodine-131 (131I) as part of medical therapy. The main purpose was to compare dose rate estimates based on a point source and target with values derived from more realistic simulations that considered the time-dependent distribution of 131I in the patient and attenuation of emitted photons by the patient’s tissues. The external dose rate estimates were derived using Monte Carlo methods and two representations of the Phantom with Movable Arms and Legs, previously developed by ORNL and the USNRC, to model the patient and a nearby member of the public. Dose rates to tissues and effective dose rates were calculated for distances ranging from 10 to 300 cm between the phantoms and compared to estimates based on the point-source method, as well as to results of previous studies that estimated exposure from 131I patients. The point-source method overestimates dose rates to members of the public in very close proximity to an 131I patient but is a broadly accurate method of dose rate estimation at separation distances of 300 cm or more at times closer to administration.

  10. Estimation of the temperature dependent interaction between uncharged point defects in Si

    SciTech Connect

    Kamiyama, Eiji; Vanhellemont, Jan; Sueoka, Koji

    2015-01-15

    A method is described to estimate the temperature-dependent interaction between two uncharged point defects in Si based on DFT calculations. As an illustration, the formation of the uncharged di-vacancy V{sub 2} is discussed, based on the temperature-dependent attractive field between both vacancies. For that purpose, all irreducible configurations of two uncharged vacancies are determined, each with their weight given by the number of equivalent configurations. Using a standard 216-atom supercell, nineteen irreducible configurations of two vacancies are obtained. The binding energies of all these configurations are calculated. Each vacancy is surrounded by several attractive sites for another vacancy. The temperature-dependent total volume of these attractive sites corresponds to a radius that is closely related to the capture radius for di-vacancy formation used in continuum theory. The presented methodology can in principle also be applied to estimate the capture radius for pair formation of any type of point defects.

  11. Use of the point load index in estimation of the strength rating for the RMR system

    NASA Astrophysics Data System (ADS)

    Karaman, Kadir; Kaya, Ayberk; Kesimal, Ayhan

    2015-06-01

    The Rock Mass Rating (RMR) system is a worldwide reference for design applications involving estimation of rock mass properties and tunnel support. In the RMR system, Uniaxial Compressive Strength (UCS) is an important input parameter to determine the strength rating of intact rock. In practice, there are difficulties in determining the UCS of rocks from problematic ground conditions, particularly when data are required rapidly. In this study, a combined strength rating chart was developed to overcome this problem, based on the experience gained in recent decades with the point load test. For this purpose, a total of 490 UCS and Point Load Index (PLI) data pairs collected from the accessible world literature and obtained from the Eastern Black Sea Region (EBSR) in Turkey were evaluated together. The UCS and PLI data pairs were classified for the cases of PLI < 1 and PLI > 1 MPa, and two different strength rating charts were suggested by using regression analyses. The Variance Account For (VAF), Root Mean Square Error (RMSE) and Mean Absolute Percentage Error (MAPE) indices were calculated to compare the prediction performance of the suggested strength rating charts. Further, one-way analysis of variance (ANOVA) was performed to test whether the means of the calculated and predicted ratings are similar to each other. The analyses demonstrated that the combined strength rating chart for the cases of PLI < 1 and PLI > 1 MPa can be reliably used to estimate strength ratings for the RMR system.
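
    For contrast with the direct PLI-based rating charts proposed above, the conventional two-step route converts PLI to UCS and then maps UCS to the Bieniawski (1989) strength rating. The conversion factor k below is rock-type dependent (roughly 10-30) and the default is a common textbook value, not the correlation fitted in this study:

    ```python
    def rmr_strength_rating_from_pli(pli_mpa, k=24.0):
        """Conventional two-step lookup: UCS ~= k * PLI (k is rock-type
        dependent; 24 is a common default, NOT this study's correlation),
        then map UCS (MPa) to the Bieniawski (1989) RMR strength rating."""
        ucs = k * pli_mpa
        for upper, rating in [(1, 0), (5, 1), (25, 2), (50, 4), (100, 7), (250, 12)]:
            if ucs < upper:
                return rating
        return 15   # UCS > 250 MPa

    for pli in (0.3, 1.0, 2.5, 6.0):
        print(f"PLI = {pli} MPa -> strength rating {rmr_strength_rating_from_pli(pli)}")
    ```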

  12. Estimating dispersed and point source emissions of methane in East Anglia: results and implications

    NASA Astrophysics Data System (ADS)

    Harris, Neil; Connors, Sarah; Hancock, Ben; Jones, Pip; Murphy, Jonathan; Riddick, Stuart; Robinson, Andrew; Skelton, Robert; Manning, Alistair; Forster, Grant; Oram, David; O'Doherty, Simon; Young, Dickon; Stavert, Ann; Fisher, Rebecca; Lowry, David; Nisbet, Euan; Zazzeri, Guilia; Allen, Grant; Pitt, Joseph

    2016-04-01

    We have been investigating ways to estimate dispersed and point source emissions of methane. To do so we have used continuous measurements from a small network of instruments at 4 sites across East Anglia since 2012. These long-term series have been supplemented by measurements taken in focussed studies at landfills, which are important point sources of methane, and by measurements of the 13C:12C ratio in methane to provide additional information about its sources. These measurements have been analysed using the NAME InTEM inversion model to provide county-level emissions (~30 km x ~30 km) in East Anglia. A case study near a landfill just north of Cambridge was also analysed using a Gaussian plume model and the Windtrax dispersion model. The resulting emission estimates from the three techniques are consistent within the uncertainties, despite the different spatial scales being considered. A seasonal cycle in emissions from the landfill (identified by the isotopic measurements) is observed with higher emissions in winter than summer. This would be expected from consideration of the likely activity of methanogenic bacteria in the landfill, but is not currently represented in emission inventories such as the UK National Atmospheric Emissions Inventory. The possibility of assessing North Sea gas field emissions using ground-based measurements will also be discussed.
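
    Since the landfill case study mentions a Gaussian plume model, a minimal ground-reflecting plume formula is sketched below; the dispersion lengths sigma_y and sigma_z are passed in directly (in practice they come from stability-class parameterizations), and all numbers are illustrative:

    ```python
    import numpy as np

    def gaussian_plume(Q, u, y, z, sigma_y, sigma_z, H=0.0):
        """Concentration (kg m^-3) from a continuous point source of strength Q
        (kg s^-1) in mean wind u (m s^-1), at crosswind offset y and height z,
        for source height H. The image-source term reflects the plume at the
        ground."""
        lateral = np.exp(-y**2 / (2 * sigma_y**2))
        vertical = (np.exp(-(z - H)**2 / (2 * sigma_z**2))
                    + np.exp(-(z + H)**2 / (2 * sigma_z**2)))
        return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

    # 1 g s^-1 methane source; ground-level receptor 30 m off-axis
    print(gaussian_plume(Q=1e-3, u=5.0, y=30.0, z=0.0, sigma_y=36.0, sigma_z=18.0))
    ```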

  13. A Roving Dual-Presentation Simultaneity-Judgment Task to Estimate the Point of Subjective Simultaneity.

    PubMed

    Yarrow, Kielan; Martin, Sian E; Di Costa, Steven; Solomon, Joshua A; Arnold, Derek H

    2016-01-01

    The most popular tasks with which to investigate the perception of subjective synchrony are the temporal order judgment (TOJ) and the simultaneity judgment (SJ). Here, we discuss a complementary approach, a dual-presentation (2x) SJ task, and focus on appropriate analysis methods for a theoretically desirable "roving" design. Two stimulus pairs are presented on each trial and the observer must select the most synchronous. To demonstrate this approach, in Experiment 1 we tested the 2xSJ task alongside TOJ, SJ, and simple reaction-time (RT) tasks using audiovisual stimuli. We interpret responses from each task using detection-theoretic models, which assume variable arrival times for sensory signals at critical brain structures for timing perception. All tasks provide similar estimates of the point of subjective simultaneity (PSS) on average, and PSS estimates from some tasks were correlated on an individual basis. The 2xSJ task produced lower and more stable estimates of model-based (and thus comparable) sensory/decision noise than the TOJ. In Experiment 2 we obtained similar results using RT, TOJ, ternary, and 2xSJ tasks for all combinations of auditory, visual, and tactile stimuli. In Experiment 3 we investigated attentional prior entry, using both TOJs and 2xSJs. We found that estimates of prior-entry magnitude correlated across these tasks. Overall, our study establishes the practicality of the roving dual-presentation SJ task, but also illustrates the additional complexity of the procedure. We consider ways in which this task might complement more traditional procedures, particularly when it is important to estimate both PSS and sensory/decisional noise.

  14. A Roving Dual-Presentation Simultaneity-Judgment Task to Estimate the Point of Subjective Simultaneity

    PubMed Central

    Yarrow, Kielan; Martin, Sian E.; Di Costa, Steven; Solomon, Joshua A.; Arnold, Derek H.

    2016-01-01

    The most popular tasks with which to investigate the perception of subjective synchrony are the temporal order judgment (TOJ) and the simultaneity judgment (SJ). Here, we discuss a complementary approach—a dual-presentation (2x) SJ task—and focus on appropriate analysis methods for a theoretically desirable “roving” design. Two stimulus pairs are presented on each trial and the observer must select the most synchronous. To demonstrate this approach, in Experiment 1 we tested the 2xSJ task alongside TOJ, SJ, and simple reaction-time (RT) tasks using audiovisual stimuli. We interpret responses from each task using detection-theoretic models, which assume variable arrival times for sensory signals at critical brain structures for timing perception. All tasks provide similar estimates of the point of subjective simultaneity (PSS) on average, and PSS estimates from some tasks were correlated on an individual basis. The 2xSJ task produced lower and more stable estimates of model-based (and thus comparable) sensory/decision noise than the TOJ. In Experiment 2 we obtained similar results using RT, TOJ, ternary, and 2xSJ tasks for all combinations of auditory, visual, and tactile stimuli. In Experiment 3 we investigated attentional prior entry, using both TOJs and 2xSJs. We found that estimates of prior-entry magnitude correlated across these tasks. Overall, our study establishes the practicality of the roving dual-presentation SJ task, but also illustrates the additional complexity of the procedure. We consider ways in which this task might complement more traditional procedures, particularly when it is important to estimate both PSS and sensory/decisional noise. PMID:27047434

  15. Improved age modelling and high-precision age estimates of late Quaternary tephras, for accurate palaeoclimate reconstruction

    NASA Astrophysics Data System (ADS)

    Blockley, Simon P. E.; Bronk Ramsey, C.; Pyle, D. M.

    2008-10-01

    The role of tephrochronology, as a dating and stratigraphic tool, in precise palaeoclimate and environmental reconstruction, has expanded significantly in recent years. The power of tephrochronology rests on the fact that a tephra layer can stratigraphically link records at the resolution of as little as a few years, and that the most precise age for a particular tephra can be imported into any site where it is found. In order to maximise the potential of tephras for this purpose it is necessary to have the most precise and robustly tested age estimate possible available for key tephras. Given the varying number and quality of dates associated with different tephras it is important to be able to build age models to test competing tephra dates. Recent advances in Bayesian age modelling of dates in sequence have radically extended our ability to build such stratigraphic age models. As an example of the potential here we use Bayesian methods, now widely applied, to examine the dating of some key Late Quaternary tephras from Italy. These are: the Agnano Monte Spina Tephra (AMST), the Neapolitan Yellow Tuff (NYT) and the Agnano Pomici Principali (APP), and all of them have multiple estimates of their true age. Further, we use the Bayesian approaches to generate a revised mixed radiocarbon/varve chronology for the important Lateglacial section of the Lago Grande Monticchio record, as a further illustration of what can be achieved by a Bayesian approach. With all three tephras we were able to produce viable model ages for the tephra, validate the proposed 40Ar/39Ar age ranges for these tephras, and provide relatively high-precision age models. The results of the Bayesian integration of dating and stratigraphic information suggest that the current best 95% confidence calendar age estimates for the AMST are 4690-4300 cal BP, the NYT 14320-13900 cal BP, and the APP 12380-12140 cal BP.

  16. Is the SenseWear Armband accurate enough to quantify and estimate energy expenditure in healthy adults?

    PubMed Central

    Hernández-Vicente, Adrián; Pérez-Isaac, Raúl; Santín-Medeiros, Fernanda; Cristi-Montero, Carlos; Casajús, Jose Antonio; Garatachea, Nuria

    2017-01-01

    Background The SenseWear Armband (SWA) is a monitor that can be used to estimate energy expenditure (EE); however, it has not been validated in healthy adults. The objective of this paper was to study the validity of the SWA for quantifying EE levels. Methods Twenty-three healthy adults (age 40–55 years, mean: 48±3.42 years) performed different types of standardized physical activity (PA) for 10 minutes (rest, walking at 3 and 5 km·h-1, running at 7 and 9 km·h-1, and sitting/standing at a rate of 30 cycles·min-1). Participants wore the SWA on their right arm, and their EE was measured by indirect calorimetry (IC), the gold standard. Results There were significant differences between the SWA and IC, except in the group that ran at 9 km·h-1 (>9 METs). Bland-Altman analysis showed a bias of 1.56 METs (±1.83 METs) and limits of agreement (LOA) at 95% of −2.03 to 5.16 METs. There were indications of heteroscedasticity (R2 = 0.03; P<0.05). Analysis of the receiver operating characteristic (ROC) curves showed that the SWA does not seem to be sensitive enough to estimate the level of EE at the highest intensities. Conclusions The SWA is not as precise in estimating EE as IC, but it could be a useful tool to determine levels of EE at low intensities. PMID:28361062

  17. Exact confidence interval estimation for the Youden index and its corresponding optimal cut-point.

    PubMed

    Lai, Chin-Ying; Tian, Lili; Schisterman, Enrique F

    2012-05-01

    In diagnostic studies, the receiver operating characteristic (ROC) curve and the area under the ROC curve are important tools in assessing the utility of biomarkers in discriminating between non-diseased and diseased populations. For classifying a patient into the non-diseased or diseased group, an optimal cut-point of a continuous biomarker is desirable. Youden's index (J), defined as the maximum vertical distance between the ROC curve and the diagonal line, serves as another global measure of overall diagnostic accuracy and can be used in choosing an optimal cut-point. The proposed approach is to make use of a generalized approach to estimate the confidence intervals of the Youden index and its corresponding optimal cut-point. Simulation results are provided for comparing the coverage probabilities of the confidence intervals based on the proposed method with those based on the large sample method and the parametric bootstrap method. Finally, the proposed method is illustrated via an application to a data set from a study on Duchenne muscular dystrophy (DMD).
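
    As a concrete reference for the quantity being interval-estimated above, the empirical Youden index and its optimal cut-point can be computed directly from two samples of biomarker values; a minimal sketch with synthetic data:

    ```python
    import numpy as np

    def youden_optimal_cutpoint(scores_healthy, scores_diseased):
        """Empirical Youden index J = max_c [sens(c) + spec(c) - 1] and the
        cut-point attaining it, scanning candidate cut-points at the observed
        values (diseased assumed to score higher)."""
        cuts = np.unique(np.concatenate([scores_healthy, scores_diseased]))
        best_j, best_c = -1.0, None
        for c in cuts:
            sens = np.mean(scores_diseased > c)    # true-positive rate at c
            spec = np.mean(scores_healthy <= c)    # true-negative rate at c
            j = sens + spec - 1.0
            if j > best_j:
                best_j, best_c = j, c
        return best_j, best_c

    rng = np.random.default_rng(2)
    healthy, diseased = rng.normal(0, 1, 200), rng.normal(1.5, 1, 200)
    print(youden_optimal_cutpoint(healthy, diseased))
    ```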

  18. Contaminant point source localization error estimates as functions of data quantity and model quality

    NASA Astrophysics Data System (ADS)

    Hansen, Scott K.; Vesselinov, Velimir V.

    2016-10-01

    We develop empirically-grounded error envelopes for localization of a point contamination release event in the saturated zone of a previously uncharacterized heterogeneous aquifer into which a number of plume-intercepting wells have been drilled. We assume that flow direction in the aquifer is known exactly and velocity is known to within a factor of two of our best guess from well observations prior to source identification. Other aquifer and source parameters must be estimated by interpretation of well breakthrough data via the advection-dispersion equation. We employ high performance computing to generate numerous random realizations of aquifer parameters and well locations, simulate well breakthrough data, and then employ unsupervised machine optimization techniques to estimate the most likely spatial (or space-time) location of the source. Tabulating the accuracy of these estimates from the multiple realizations, we relate the size of 90% and 95% confidence envelopes to the data quantity (number of wells) and model quality (fidelity of ADE interpretation model to actual concentrations in a heterogeneous aquifer with channelized flow). We find that for purely spatial localization of the contaminant source, increased data quantities can make up for reduced model quality. For space-time localization, we find similar qualitative behavior, but significantly degraded spatial localization reliability and less improvement from extra data collection. Since the space-time source localization problem is much more challenging, we also tried a multiple-initial-guess optimization strategy. This greatly enhanced performance, but gains from additional data collection remained limited.

  19. Contaminant point source localization error estimates as functions of data quantity and model quality

    SciTech Connect

    Hansen, Scott K.; Vesselinov, Velimir Valentinov

    2016-10-01

    We develop empirically-grounded error envelopes for localization of a point contamination release event in the saturated zone of a previously uncharacterized heterogeneous aquifer into which a number of plume-intercepting wells have been drilled. We assume that flow direction in the aquifer is known exactly and velocity is known to within a factor of two of our best guess from well observations prior to source identification. Other aquifer and source parameters must be estimated by interpretation of well breakthrough data via the advection-dispersion equation. We employ high performance computing to generate numerous random realizations of aquifer parameters and well locations, simulate well breakthrough data, and then employ unsupervised machine optimization techniques to estimate the most likely spatial (or space-time) location of the source. Tabulating the accuracy of these estimates from the multiple realizations, we relate the size of 90% and 95% confidence envelopes to the data quantity (number of wells) and model quality (fidelity of ADE interpretation model to actual concentrations in a heterogeneous aquifer with channelized flow). We find that for purely spatial localization of the contaminant source, increased data quantities can make up for reduced model quality. For space-time localization, we find similar qualitative behavior, but significantly degraded spatial localization reliability and less improvement from extra data collection. Since the space-time source localization problem is much more challenging, we also tried a multiple-initial-guess optimization strategy. This greatly enhanced performance, but gains from additional data collection remained limited.

  20. Curie Point Depth Estimates Beneath the Incipient Okavango Rift Zone, Northwest Botswana

    NASA Astrophysics Data System (ADS)

    Leseane, K.; Atekwana, E. A.; Mickus, K. L.; Mohamed, A.; Atekwana, E. A.

    2013-12-01

    We investigated the regional thermal structure of the crust beneath the Okavango Rift Zone (ORZ), surrounding cratons and orogenic mobile belts using Curie Point Depth (CPD) estimates. Estimating the depth to the base of magnetic sources is important in understanding and constraining the thermal structure of the crust in zones of incipient continental rifting where no other data are available to image the crustal thermal structure. Our objective was to determine if there are any thermal perturbations within the lithosphere during rift initiation. The top and bottom of the magnetized crust were calculated using two-dimensional (2D) power-density spectral analysis and three-dimensional (3D) inversions of the total-field magnetic data of Botswana in overlapping square windows of 1 degree x 1 degree. The calculated CPD estimates varied between ~8 km and ~24 km. The deepest CPD values (16-24 km) occur under the surrounding cratons and orogenic mobile belts, whereas the shallowest CPD values were found within the ORZ. CPD values of 8 to 10 km occur in the northeastern part of the ORZ, a site of more developed rift structures where hot springs are known to occur. CPD values of 12 to 16 km were obtained in the southwestern part of the ORZ, where rift structures are progressively less developed and where the rift terminates. The results suggest a possible thermal anomaly beneath the incipient ORZ. Further geophysical studies as part of the PRIDE (Project for Rift Initiation Development and Evolution) project are needed to confirm this proposition.
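
    The spectral workflow behind CPD estimates of this kind is commonly the centroid method (e.g., Tanaka et al. style); a sketch follows, assuming a radially averaged power spectrum P(k) is already in hand. The wavenumber bands are user-chosen and strongly affect the result, and this is not necessarily the exact procedure used in the study:

    ```python
    import numpy as np

    def curie_point_depth(k, power, top_band, centroid_band):
        """Centroid-method CPD from a radially averaged magnetic power spectrum
        P(k), k in rad/km. Top depth Z_t: slope of ln(sqrt(P)) vs k at higher k;
        centroid depth Z_0: slope of ln(sqrt(P)/k) vs k at low k; the basal
        (Curie) depth is then Z_b = 2*Z_0 - Z_t."""
        def neg_slope(kk, yy):
            return -np.polyfit(kk, yy, 1)[0]
        in_top = (top_band[0] <= k) & (k <= top_band[1])
        in_cen = (centroid_band[0] <= k) & (k <= centroid_band[1])
        z_top = neg_slope(k[in_top], np.log(np.sqrt(power[in_top])))
        z_cen = neg_slope(k[in_cen], np.log(np.sqrt(power[in_cen]) / k[in_cen]))
        return 2 * z_cen - z_top

    # usage, assuming k and power come from one 1 degree x 1 degree window:
    # zb = curie_point_depth(k, power, top_band=(0.5, 2.0), centroid_band=(0.05, 0.3))
    ```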

  1. Using change-point models to estimate empirical critical loads for nitrogen in mountain ecosystems.

    PubMed

    Roth, Tobias; Kohli, Lukas; Rihm, Beat; Meier, Reto; Achermann, Beat

    2017-01-01

    To protect ecosystems and their services, the critical load concept has been implemented under the framework of the Convention on Long-range Transboundary Air Pollution (UNECE) to develop effects-oriented air pollution abatement strategies. Critical loads are thresholds below which damaging effects on sensitive habitats do not occur according to current knowledge. Here we use change-point models applied in a Bayesian context to overcome some of the difficulties when estimating empirical critical loads for nitrogen (N) from empirical data. We tested the method using simulated data with varying sample sizes, varying effects of confounding variables, and with varying negative effects of N deposition on species richness. The method was applied to the national-scale plant species richness data from mountain hay meadows and (sub)alpine scrubs sites in Switzerland. Seven confounding factors (elevation, inclination, precipitation, calcareous content, aspect as well as indicator values for humidity and light) were selected based on earlier studies examining numerous environmental factors to explain Swiss vascular plant diversity. The estimated critical load confirmed the existing empirical critical load of 5-15 kg N ha(-1) yr(-1) for (sub)alpine scrubs, while for mountain hay meadows the estimated critical load was at the lower end of the current empirical critical load range. Based on these results, we suggest to narrow down the critical load range for mountain hay meadows to 10-15 kg N ha(-1) yr(-1).
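
    A stripped-down, non-Bayesian version of the change-point idea helps fix intuition: richness is flat below the critical load and declines linearly above it, and the change point is profiled by least squares. The paper's actual model is Bayesian and adjusts for the seven confounders; everything below is illustrative:

    ```python
    import numpy as np

    def fit_changepoint(ndep, richness, candidates):
        """Piecewise regression richness = a + b * max(N - cp, 0), with b < 0
        expected above the critical load cp; grid search over candidate cp."""
        best = (np.inf, None, None, None)
        for cp in candidates:
            X = np.column_stack([np.ones_like(ndep),
                                 np.clip(ndep - cp, 0, None)])  # hinge at cp
            coef, *_ = np.linalg.lstsq(X, richness, rcond=None)
            sse = ((richness - X @ coef) ** 2).sum()
            if sse < best[0]:
                best = (sse, cp, coef[0], coef[1])
        return best  # (sse, cp_hat, baseline richness, slope above cp)

    rng = np.random.default_rng(3)
    N = rng.uniform(2, 30, 300)                          # kg N ha-1 yr-1
    rich = 25 - 0.8 * np.clip(N - 10, 0, None) + rng.normal(0, 2, 300)
    print(fit_changepoint(N, rich, candidates=np.linspace(3, 25, 45)))
    ```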

  2. Estimation of the skull insertion loss using an optoacoustic point source

    NASA Astrophysics Data System (ADS)

    Estrada, Héctor; Rebling, Johannes; Turner, Jake; Kneipp, Moritz; Shoham, Shy; Razansky, Daniel

    2016-03-01

    The acoustically-mismatched skull bone poses significant challenges for the application of ultrasonic and optical techniques in neuroimaging, still typically requiring invasive approaches using craniotomy or skull thinning. Optoacoustic imaging partially circumvents the acoustic distortions due to the skull because the induced wave is transmitted only once, as opposed to the round trip in pulse-echo ultrasonography. To this end, the mouse brain has been successfully imaged transcranially by optoacoustic scanning microscopy. Yet, the skull may adversely affect the lateral and axial resolution of transcranial brain images. In order to accurately characterize the complex behavior of the optoacoustic signal as it traverses the skull, one needs to consider the ultrawideband nature of optoacoustic signals. Here the insertion loss of murine skull has been measured by means of a hybrid optoacoustic-ultrasound scanning microscope having a spherically focused PVDF transducer and pulsed laser excitation at 532 nm of a 20 μm diameter absorbing microsphere acting as an optoacoustic point source. Accurate modeling of the acoustic transmission through the skull is further performed using a Fourier-domain expansion of a solid-plate model, based on the simultaneously acquired pulse-echo ultrasound image providing precise information about the skull's position and its orientation relative to the optoacoustic source. Good qualitative agreement has been found between the solid-plate model and experimental measurements. The presented strategy might pave the way for modeling skull effects and deriving efficient correction schemes to account for acoustic distortions introduced by an adult murine skull, thus improving the spatial resolution, effective penetration depth and overall image quality of transcranial optoacoustic brain microscopy.

  3. Estimating the contribution of point sources to atmospheric metals using single-particle mass spectrometry

    NASA Astrophysics Data System (ADS)

    Snyder, David C.; Schauer, James J.; Gross, Deborah S.; Turner, Jay R.

    Single-particle mass spectra were collected using an Aerosol Time-of-Flight Mass Spectrometer (ATOFMS) during December of 2003 and February of 2004 at an industrially impacted location in East St. Louis, IL. Hourly integrated peak areas for twenty ions were evaluated for their suitability in representing metals/metalloids, particularly those reported in the US EPA Toxic Release Inventory (TRI). Of the initial twenty ions examined, six (Al, As, Cu, Hg, Ti, and V) were found to be unsuitable due to strong isobaric interferences with commonly observed organic fragments, and one (Be) was found to have no significant signal. The usability of three ions (Co, Cr, and Mn) was limited due to suspected isobaric interferences based on temporal comparisons with commonly observed organic fragments. The identity of the remaining ions (Sb, Ba, Cd, Ca, Fe, Ni, Pb, K, Se, and Zn) was substantiated by comparing their signals with the integrated hourly signals of one or more isotope ions. When compared with one-in-six day integrated elemental data as determined by X-ray fluorescence spectroscopy (XRF), the daily integrated ATOFMS signal for several metal ions revealed a semi-quantitative relationship between ATOFMS peak area and XRF concentrations, although in some cases comparisons of these measurements were poor at low elemental concentrations/ion signals due to isobaric interferences. A method of estimating the impact of local point sources was developed using hourly integrated ATOFMS peak areas, and this method attributed as much as 85% of the concentration of individual metals observed at the study site to local point sources. Hourly surface wind data were used in conjunction with TRI facility emissions data to reveal likely point sources impacting metal concentrations at the study site and to illustrate the utility of using single-particle mass spectral data to characterize atmospheric metals and identify point sources.

  4. How accurate is the estimation of anthropogenic carbon in the ocean? An evaluation of the ΔC* method

    NASA Astrophysics Data System (ADS)

    Matsumoto, Katsumi; Gruber, Nicolas

    2005-09-01

    The ΔC* method of Gruber et al. (1996) is widely used to estimate the distribution of anthropogenic carbon in the ocean; however, as yet, no thorough assessment of its accuracy has been made. Here we provide a critical re-assessment of the method and determine its accuracy by applying it to synthetic data from a global ocean biogeochemistry model, for which we know the "true" anthropogenic CO2 distribution. Our results indicate that the ΔC* method tends to overestimate anthropogenic carbon in relatively young waters but underestimate it in older waters. Main sources of these biases are (1) the time evolution of the air-sea CO2 disequilibrium, which is not properly accounted for in the ΔC* method, (2) a pCFC ventilation age bias that arises from mixing, and (3) errors in identifying the different end-member water types. We largely support the findings of Hall et al. (2004), who have also identified the first two bias sources. An extrapolation of the errors that we quantified on a number of representative isopycnals to the global ocean suggests a positive bias of about 7% in the ΔC*-derived global anthropogenic CO2 inventory. The magnitude of this bias is within the previously estimated 20% uncertainty of the method, but regional biases can be larger. Finally, we propose two improvements to the ΔC* method in order to account for the evolution of air-sea CO2 disequilibrium and the ventilation age mixing bias.

  5. Novel point estimation from a semiparametric ratio estimator (SPRE): long-term health outcomes from short-term linear data, with application to weight loss in obesity.

    PubMed

    Weissman-Miller, Deborah

    2013-11-02

    Point estimation is particularly important in predicting weight loss in individuals or small groups. In this analysis, a new health response function is based on a model of human response over time to estimate long-term health outcomes from a change point in short-term linear regression. This estimation capability is addressed for small groups and single-subject designs in pilot studies for clinical trials and in medical and therapeutic clinical practice. These estimations are based on a change point given by parameters derived from short-term participant data in ordinary least squares (OLS) regression. The development of the change point in initial OLS data and the point estimations are given in a new semiparametric ratio estimator (SPRE) model. The new response function is taken as a ratio of two-parameter Weibull distributions times a prior outcome value that steps estimated outcomes forward in time, where the shape and scale parameters are estimated at the change point. The Weibull distributions used in this ratio are derived from a Kelvin model in mechanics, taken here to represent human beings. A distinct feature of the SPRE model in this article is that initial treatment response for a small group or a single subject is reflected in long-term response to treatment. This model is applied to weight loss in obesity in a secondary analysis of data from a classic weight loss study, which was selected due to the dramatic increase in obesity in the United States over the past 20 years. A very small relative error between estimated and test data is shown for obesity treatment with the weight loss medication phentermine or placebo. An application of SPRE in clinical medicine or occupational therapy is to estimate long-term weight loss for a single subject or a small group near the beginning of treatment.

  6. Travel Time Estimation Using Freeway Point Detector Data Based on Evolving Fuzzy Neural Inference System

    PubMed Central

    Tang, Jinjun; Zou, Yajie; Ash, John; Zhang, Shen; Liu, Fang; Wang, Yinhai

    2016-01-01

    Travel time is an important measurement used to evaluate the extent of congestion within road networks. This paper presents a new method to estimate the travel time based on an evolving fuzzy neural inference system. The input variables in the system are traffic flow data (volume, occupancy, and speed) collected from loop detectors located at points both upstream and downstream of a given link, and the output variable is the link travel time. A first order Takagi-Sugeno fuzzy rule set is used to complete the inference. For training the evolving fuzzy neural network (EFNN), two learning processes are proposed: (1) a K-means method is employed to partition input samples into different clusters, and a Gaussian fuzzy membership function is designed for each cluster to measure the membership degree of samples to the cluster centers. As the number of input samples increases, the cluster centers are modified and membership functions are also updated; (2) a weighted recursive least squares estimator is used to optimize the parameters of the linear functions in the Takagi-Sugeno type fuzzy rules. Testing datasets consisting of actual and simulated data are used to test the proposed method. Three common criteria including mean absolute error (MAE), root mean square error (RMSE), and mean absolute relative error (MARE) are utilized to evaluate the estimation performance. Estimation results demonstrate the accuracy and effectiveness of the EFNN method through comparison with existing methods including: multiple linear regression (MLR), instantaneous model (IM), linear model (LM), neural network (NN), and cumulative plots (CP). PMID:26829639

  7. Combination of radar and daily precipitation data to estimate meaningful sub-daily point precipitation extremes

    NASA Astrophysics Data System (ADS)

    Bárdossy, András; Pegram, Geoffrey

    2017-01-01

    The use of radar measurements for the space time estimation of precipitation has for many decades been a central topic in hydro-meteorology. In this paper we are interested specifically in daily and sub-daily extreme values of precipitation at gauged or ungauged locations which are important for design. The purpose of the paper is to develop a methodology to combine daily precipitation observations and radar measurements to estimate sub-daily extremes at point locations. Radar data corrected using precipitation-reflectivity relationships lead to biased estimations of extremes. Different possibilities of correcting systematic errors using the daily observations are investigated. Observed gauged daily amounts are interpolated to unsampled points and subsequently disaggregated using the sub-daily values obtained by the radar. Different corrections based on the spatial variability and the sub-daily entropy of scaled rainfall distributions are used to provide unbiased corrections of short-duration extremes. Additionally a statistical procedure not based on a matching day-by-day correction is tested. In this last procedure, as we are only interested in rare extremes, low to medium values of rainfall depth were neglected, leaving a small number of L days of ranked daily maxima in each set per year, whose sum typically comprises about 50% of each annual rainfall total. The sum of these L day maxima is first interpolated using a Kriging procedure. Subsequently this sum is disaggregated to daily values using a nearest neighbour procedure. The daily sums are then disaggregated by using the relative values of the biggest L radar based days. Of course, the timings of radar and gauge maxima can be different, so the method presented here uses radar for disaggregating daily gauge totals down to 15 min intervals in order to extract the maxima of sub-hourly through to daily rainfall. The methodologies were tested in South Africa, where an S-band radar operated relatively continuously at

  8. How many measurements are needed to estimate accurate daily and annual soil respiration fluxes? Analysis using data from a temperate rainforest

    NASA Astrophysics Data System (ADS)

    Perez-Quezada, Jorge F.; Brito, Carla E.; Cabezas, Julián; Galleguillos, Mauricio; Fuentes, Juan P.; Bown, Horacio E.; Franck, Nicolás

    2016-12-01

    Making accurate estimations of daily and annual Rs fluxes is key for understanding the carbon cycle process and projecting effects of climate change. In this study we used high-frequency sampling (24 measurements per day) of Rs in a temperate rainforest during 1 year, with the objective of answering the questions of when and how often measurements should be made to obtain accurate estimations of daily and annual Rs. We randomly selected data to simulate samplings of 1, 2, 4 or 6 measurements per day (distributed either during the whole day or only during daytime), combined with 4, 6, 12, 26 or 52 measurements per year. Based on the comparison of partial-data series with the full-data series, we estimated the performance of different partial sampling strategies based on bias, precision and accuracy. In the case of annual Rs estimation, we compared the performance of interpolation vs. using non-linear modelling based on soil temperature. The results show that, under our study conditions, sampling twice a day was enough to accurately estimate daily Rs (RMSE < 10 % of average daily flux), even if both measurements were done during daytime. The highest reduction in RMSE for the estimation of annual Rs was achieved when increasing from four to six measurements per year, but reductions were still relevant when further increasing the frequency of sampling. We found that increasing the number of field campaigns was more effective than increasing the number of measurements per day, provided a minimum of two measurements per day was used. Including night-time measurements significantly reduced the bias and was relevant in reducing the number of field campaigns when a lower level of acceptable error (RMSE < 5 %) was established. Using non-linear modelling instead of linear interpolation did improve the estimation of annual Rs, but not as expected. In conclusion, given that most of the studies of Rs use manual sampling techniques and apply only one measurement per day, we

  9. Parameter Estimation of Fossil Oysters from High Resolution 3D Point Cloud and Image Data

    NASA Astrophysics Data System (ADS)

    Djuricic, Ana; Harzhauser, Mathias; Dorninger, Peter; Nothegger, Clemens; Mandic, Oleg; Székely, Balázs; Molnár, Gábor; Pfeifer, Norbert

    2014-05-01

    A unique fossil oyster reef was excavated at Stetten in Lower Austria, which is also the highlight of the geo-edutainment park 'Fossilienwelt Weinviertel'. It provides the rare opportunity to study the Early Miocene flora and fauna of the Central Paratethys Sea. The site presents the world's largest fossil oyster biostrome, formed about 16.5 million years ago in a tropical estuary of the Korneuburg Basin. About 15,000 shells of Crassostrea gryphoides, up to 80 cm long, cover a 400 m2 area. Our project 'Smart-Geology for the World's largest fossil oyster reef' combines methods of photogrammetry, geology and paleontology to document, evaluate and quantify the shell bed. This interdisciplinary approach will be applied to test hypotheses on the genesis of the taphocenosis (e.g.: tsunami versus major storm) and to reconstruct pre- and post-event processes. Hence, we are focusing on using visualization technologies from photogrammetry in geology and paleontology in order to develop new methods for automatic and objective evaluation of 3D point clouds. These will be studied on the basis of a very dense surface reconstruction of the oyster reef. 'Smart Geology', as an extension of the classic discipline, exploits massive data, automatic interpretation, and visualization. Photogrammetry provides the tools for surface acquisition and objective, automated interpretation. We also want to stress the economic aspect of using automatic shape detection in paleontology, which saves manpower and increases efficiency during the monitoring and evaluation process. Currently, there are many well known algorithms for 3D shape detection of certain objects. We are using dense 3D laser scanning data from an instrument utilizing the phase shift measuring principle, which provides an accurate geometrical basis (< 3 mm). However, the situation is difficult in this multiple-object scenario, where more than 15,000 complete or fragmentary parts of an object with random orientation are found. The goal

  10. Accurate spike estimation from noisy calcium signals for ultrafast three-dimensional imaging of large neuronal populations in vivo

    PubMed Central

    Deneux, Thomas; Kaszas, Attila; Szalay, Gergely; Katona, Gergely; Lakner, Tamás; Grinvald, Amiram; Rózsa, Balázs; Vanzetta, Ivo

    2016-01-01

    Extracting neuronal spiking activity from large-scale two-photon recordings remains challenging, especially in mammals in vivo, where large noises often contaminate the signals. We propose a method, MLspike, which returns the most likely spike train underlying the measured calcium fluorescence. It relies on a physiological model including baseline fluctuations and distinct nonlinearities for synthetic and genetically encoded indicators. Model parameters can be either provided by the user or estimated from the data themselves. MLspike is computationally efficient thanks to its original discretization of probability representations; moreover, it can also return spike probabilities or samples. Benchmarked on extensive simulations and real data from seven different preparations, it outperformed state-of-the-art algorithms. Combined with the finding obtained from systematic data investigation (noise level, spiking rate and so on) that photonic noise is not necessarily the main limiting factor, our method allows spike extraction from large-scale recordings, as demonstrated on acousto-optical three-dimensional recordings of over 1,000 neurons in vivo. PMID:27432255

  11. Adaptive robust maximum power point tracking control for perturbed photovoltaic systems with output voltage estimation.

    PubMed

    Koofigar, Hamid Reza

    2016-01-01

    The problem of maximum power point tracking (MPPT) in photovoltaic (PV) systems, despite the model uncertainties and the variations in environmental circumstances, is addressed. Introducing a mathematical description, an adaptive sliding mode control (ASMC) algorithm is first developed. Unlike many previous investigations, the output voltage is not required to be sensed, and neither the upper bound of system uncertainties nor the variations of irradiance and temperature are required to be known. Estimating the output voltage by an update law, an adaptive H∞ tracking algorithm is then developed for the case in which the perturbations are energy-bounded. The stability analysis is presented for the proposed tracking control schemes, based on the Lyapunov stability theorem. From a comparison viewpoint, some numerical and experimental studies are also presented and discussed.
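
    For readers unfamiliar with MPPT, the simplest baseline is the perturb-and-observe loop sketched below. It is emphatically not the paper's adaptive sliding-mode or H∞ scheme (which avoids output-voltage sensing), and the two callables are hypothetical hooks for illustration:

    ```python
    def perturb_and_observe(measure_pv, set_duty, d0=0.5, step=0.005, iters=100):
        """Baseline perturb-and-observe MPPT: nudge the converter duty cycle,
        keep the direction if output power rose, reverse it otherwise.
        measure_pv() -> (voltage, current); set_duty(d) applies duty cycle d.
        Both are hypothetical interfaces to the converter hardware."""
        d, direction = d0, +1
        v, i = measure_pv()
        p_prev = v * i
        for _ in range(iters):
            d = min(max(d + direction * step, 0.0), 1.0)
            set_duty(d)
            v, i = measure_pv()
            p = v * i
            if p < p_prev:          # power fell: reverse the perturbation
                direction = -direction
            p_prev = p
        return d
    ```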

  12. Estimation of normalized point-source sensitivity of segment surface specifications for extremely large telescopes.

    PubMed

    Seo, Byoung-Joon; Nissly, Carl; Troy, Mitchell; Angeli, George; Bernier, Robert; Stepp, Larry; Williams, Eric

    2013-06-20

    We present a method which estimates the normalized point-source sensitivity (PSSN) of a segmented telescope when only information from a single segment surface is known. The estimation principle is based on a statistical approach with an assumption that all segment surfaces have the same power spectral density (PSD) as the given segment surface. As presented in this paper, the PSSN based on this statistical approach represents a worst-case scenario among statistical random realizations of telescopes when all segment surfaces have the same PSD. Therefore, this method, which we call the vendor table, is expected to be useful for individual segment specification such as the segment polishing specification. The specification based on the vendor table can be directly related to a science metric such as PSSN and provides the mirror vendors significant flexibility by specifying a single overall PSSN value for them to meet. We build a vendor table for the Thirty Meter Telescope (TMT) and test it using multiple mirror samples from various mirror vendors to prove its practical utility. Accordingly, TMT has a plan to adopt this vendor table for its M1 segment final mirror polishing requirement.

  13. Contaminant point source localization error estimates as functions of data quantity and model quality

    DOE PAGES

    Hansen, Scott K.; Vesselinov, Velimir Valentinov

    2016-10-01

    We develop empirically-grounded error envelopes for localization of a point contamination release event in the saturated zone of a previously uncharacterized heterogeneous aquifer into which a number of plume-intercepting wells have been drilled. We assume that flow direction in the aquifer is known exactly and velocity is known to within a factor of two of our best guess from well observations prior to source identification. Other aquifer and source parameters must be estimated by interpretation of well breakthrough data via the advection-dispersion equation. We employ high performance computing to generate numerous random realizations of aquifer parameters and well locations, simulate well breakthrough data, and then employ unsupervised machine optimization techniques to estimate the most likely spatial (or space-time) location of the source. Tabulating the accuracy of these estimates from the multiple realizations, we relate the size of 90% and 95% confidence envelopes to the data quantity (number of wells) and model quality (fidelity of ADE interpretation model to actual concentrations in a heterogeneous aquifer with channelized flow). We find that for purely spatial localization of the contaminant source, increased data quantities can make up for reduced model quality. For space-time localization, we find similar qualitative behavior, but significantly degraded spatial localization reliability and less improvement from extra data collection. Since the space-time source localization problem is much more challenging, we also tried a multiple-initial-guess optimization strategy. This greatly enhanced performance, but gains from additional data collection remained limited.

  14. The robustness of single-point Tenax extractions of pyrethroids: Effects of the Tenax to organic carbon mass ratio on exposure estimates.

    PubMed

    Nutile, Samuel A; Harwood, Amanda D; Sinche, Federico L; Huff Hartz, Kara E; Landrum, Peter F; Lydy, Michael J

    2017-03-01

    Use of Tenax extractable concentrations to estimate biological exposure to hydrophobic organic contaminants is well documented, yet method variation exists between studies, specifically in the ratio of Tenax mass to organic carbon mass in the sediment (Tenax:OC ratio) being extracted. The effects of this variation on exposure estimates are not well understood. As Tenax is theoretically in direct competition with organic carbon for freely dissolved chemical in sediment interstitial water, varying the Tenax:OC ratio could impact single-point Tenax extraction (SPTE) exposure estimates. Therefore, the effects of varying Tenax:OC ratios on SPTE pyrethroid concentrations from field-contaminated and laboratory-spiked sediments were compared to bioaccumulation by Lumbriculus variegatus. The Tenax:OC ratio had minimal effect on SPTE pyrethroid concentrations. The SPTE pyrethroid concentrations obtained using the highest and lowest Tenax:OC ratios ranged from 0.85- to 3.91-fold different, which is unlikely to contribute substantial error to bioaccessibility estimates. Comparisons to Tenax exposure endpoints from previous research reveal the variation in these endpoints is likely due to toxicokinetic and toxicodynamic differences; processes common to exposure estimates provided by any chemical extraction technique. As the pyrethroid concentrations in the experimental sediments caused toxicity to L. variegatus, thus affecting bioaccumulation, the SPTE concentrations overestimated bioaccumulation. However, SPTE concentrations strongly correlated with growth inhibition regardless of the Tenax:OC ratio, providing accurate estimates of the correct exposure endpoint. Tenax masses of 0.500-0.800 g should provide sufficient Tenax to achieve Tenax:OC ratios of at least 5:1, which will provide accurate exposure estimates while retaining the ease of conducting SPTEs.

  15. Empirical Bayes Point Estimates of True Score Using a Compound Binomial Error Model. Research Memorandum 74-11.

    ERIC Educational Resources Information Center

    Kearns, Jack

    Empirical Bayes point estimates of true score may be obtained if the distribution of observed score for a fixed examinee is approximated in one of several ways by a well-known compound binomial model. The Bayes estimates of true score may be expressed in terms of the observed score distribution and the distribution of a hypothetical binomial test.…

  16. Robust dynamic myocardial perfusion CT deconvolution for accurate residue function estimation via adaptive-weighted tensor total variation regularization: a preclinical study

    NASA Astrophysics Data System (ADS)

    Zeng, Dong; Gong, Changfei; Bian, Zhaoying; Huang, Jing; Zhang, Xinyu; Zhang, Hua; Lu, Lijun; Niu, Shanzhou; Zhang, Zhang; Liang, Zhengrong; Feng, Qianjin; Chen, Wufan; Ma, Jianhua

    2016-11-01

    Dynamic myocardial perfusion computed tomography (MPCT) is a promising technique for quick diagnosis and risk stratification of coronary artery disease. However, one major drawback of dynamic MPCT imaging is the heavy radiation dose to patients due to its dynamic image acquisition protocol. In this work, to address this issue, we present a robust dynamic MPCT deconvolution algorithm via adaptive-weighted tensor total variation (AwTTV) regularization for accurate residue function estimation with low-mAs data acquisitions. For simplicity, the presented method is termed ‘MPD-AwTTV’. More specifically, the gains of the AwTTV regularization over the original tensor total variation regularization stem from the anisotropic edge property of the sequential MPCT images. To minimize the associated objective function we propose an efficient iterative optimization strategy with fast convergence rate in the framework of an iterative shrinkage/thresholding algorithm. We validate and evaluate the presented algorithm using both digital XCAT phantom and preclinical porcine data. The preliminary experimental results have demonstrated that the presented MPD-AwTTV deconvolution algorithm can achieve remarkable gains in noise-induced artifact suppression, edge detail preservation, and accurate flow-scaled residue function and MPHM estimation as compared with the other existing deconvolution algorithms in digital phantom studies, and similar gains can be obtained in the porcine data experiment.

  17. TU-EF-204-01: Accurate Prediction of CT Tube Current Modulation: Estimating Tube Current Modulation Schemes for Voxelized Patient Models Used in Monte Carlo Simulations

    SciTech Connect

    McMillan, K; Bostani, M; McNitt-Gray, M; McCollough, C

    2015-06-15

    Purpose: Most patient models used in Monte Carlo-based estimates of CT dose, including computational phantoms, do not have tube current modulation (TCM) data associated with them. While not a problem for fixed tube current simulations, this is a limitation when modeling the effects of TCM. Therefore, the purpose of this work was to develop and validate methods to estimate TCM schemes for any voxelized patient model. Methods: For 10 patients who received clinically-indicated chest (n=5) and abdomen/pelvis (n=5) scans on a Siemens CT scanner, both CT localizer radiograph (“topogram”) and image data were collected. Methods were devised to estimate the complete x-y-z TCM scheme using patient attenuation data: (a) available in the Siemens CT localizer radiograph/topogram itself (“actual-topo”) and (b) from a simulated topogram (“sim-topo”) derived from a projection of the image data. For comparison, the actual TCM scheme was extracted from the projection data of each patient. For validation, Monte Carlo simulations were performed using each TCM scheme to estimate dose to the lungs (chest scans) and liver (abdomen/pelvis scans). Organ doses from simulations using the actual TCM were compared to those using each of the estimated TCM methods (“actual-topo” and “sim-topo”). Results: For chest scans, the average differences between doses estimated using actual TCM schemes and estimated TCM schemes (“actual-topo” and “sim-topo”) were 3.70% and 4.98%, respectively. For abdomen/pelvis scans, the average differences were 5.55% and 6.97%, respectively. Conclusion: Strong agreement between doses estimated using actual and estimated TCM schemes validates the methods for simulating Siemens topograms and converting attenuation data into TCM schemes. This indicates that the methods developed in this work can be used to accurately estimate TCM schemes for any patient model or computational phantom, whether a CT localizer radiograph is available or not.

  18. Estimate of beryllium critical point on the basis of correspondence between the critical and the Zeno-line parameters.

    PubMed

    Apfelbaum, E M

    2012-12-20

    The critical-point coordinates of beryllium have been calculated by means of recently found similarity relations between the Zeno-line and the critical-point parameters. We have used NVT MC simulations and pseudopotential theory to calculate the Zeno-line parameters, together with the data of isobaric measurements, to construct the liquid branch of the beryllium binodal. The critical-point coordinates determined this way are lower than earlier estimates. We have shown that these previous estimates are in evident contradiction with the available measurement data. The present investigation can resolve this contradiction, provided the measurement data are reliable.

  19. Further Estimates of (T-T_{90}) Close to the Triple Point of Water

    NASA Astrophysics Data System (ADS)

    Underwood, R.; de Podesta, M.; Sutton, G.; Stanger, L.; Rusby, R.; Harris, P.; Morantz, P.; Machin, G.

    2017-03-01

    Recent advances in primary acoustic gas thermometry (AGT) have revealed significant differences between temperature measurements using the International Temperature Scale of 1990, T_{90}, and thermodynamic temperature, T. In 2015, we published estimates of the differences (T-T_{90}) from 118 K to 303 K, which showed interesting behavior in the region around the triple point of water, T_{TPW}=273.16 K. In that work, the T_{90} measurements below T_{TPW} used a different ensemble of capsule standard platinum resistance thermometers (SPRTs) than the T_{90} measurements above T_{TPW}. In this work, we extend our earlier measurements using the same ensemble of SPRTs above and below T_{TPW}, enabling a deeper analysis of the slope d(T-T_{90})/dT around T_{TPW}. In this article, we present the results of seven AGT isotherms in the temperature range 258 K to 323 K. The derived values of (T-T_{90}) have exceptionally low uncertainties and are in good agreement with our previous data and other AGT results. We present the values of (T-T_{90}) alongside our previous estimates, together with the resistance ratios W(T) from two SPRTs which have been used across the full range 118 K to 323 K. Additionally, our measurements show discontinuities in d(T-T_{90})/dT at T_{TPW} which are consistent with the slope discontinuity in the SPRT deviation functions. Since this discontinuity is by definition non-unique, and can take a range of values including zero, we suggest that mathematical representations of (T-T_{90}), such as those in the mise en pratique for the kelvin (Fellmuth et al. in Philos Trans R Soc A 374:20150037, 2016. doi: 10.1098/rsta.2015.0037), should have continuity of d(T-T_{90})/dT at T_{TPW}.

  20. A novel method of estimating effective dose from the point dose method: a case study—parathyroid CT scans

    NASA Astrophysics Data System (ADS)

    Januzis, Natalie; Nguyen, Giao; Hoang, Jenny K.; Lowry, Carolyn; Yoshizumi, Terry T.

    2015-02-01

    The purpose of this study was to validate a novel approach of applying a partial volume correction factor (PVCF) using a limited number of MOSFET detectors in the effective dose (E) calculation. The results of the proposed PVCF method were compared to the results from both the point dose (PD) method and a commercial CT dose estimation software (CT-Expo). To measure organ doses, an adult female anthropomorphic phantom was loaded with 20 MOSFET detectors and was scanned using the non-contrast and 2 phase contrast-enhanced parathyroid imaging protocols on a 64-slice multi-detector computed tomography scanner. E was computed by three methods: the PD method, the PVCF method, and the CT-Expo method. The E (in mSv) for the PD method, the PVCF method, and the CT-Expo method was 2.6 ± 0.2, 1.3 ± 0.1, and 1.1 for the non-contrast scan, 21.9 ± 0.4, 13.9 ± 0.2, and 14.6 for the 1st phase of the contrast-enhanced scan, and 15.5 ± 0.3, 9.8 ± 0.1, and 10.4 for the 2nd phase of the contrast-enhanced scan, respectively. The E with the PD method differed from the PVCF method by 66.7% for the non-contrast scan, and by 44.9% and 45.5%, respectively, for the 1st and 2nd phases of the contrast-enhanced scan. The E with PVCF was comparable to the results from the CT-Expo method, with percent differences of 15.8%, 5.0%, and 6.3% for the non-contrast scan and the 1st and 2nd phases of the contrast-enhanced scan, respectively. To conclude, the PVCF method estimated E to within a 16% difference, compared with 50-70% for the PD method. In addition, the results demonstrate that E can be estimated accurately from a limited number of detectors.

  1. Subcutaneous nerve activity is more accurate than the heart rate variability in estimating cardiac sympathetic tone in ambulatory dogs with myocardial infarction

    PubMed Central

    Chan, Yi-Hsin; Tsai, Wei-Chung; Shen, Changyu; Han, Seongwook; Chen, Lan S.; Lin, Shien-Fong; Chen, Peng-Sheng

    2015-01-01

    Background We recently reported that subcutaneous nerve activity (SCNA) can be used to estimate sympathetic tone. Objectives To test the hypothesis that left thoracic SCNA is more accurate than heart rate variability (HRV) in estimating cardiac sympathetic tone in ambulatory dogs with myocardial infarction (MI). Methods We used an implanted radiotransmitter to study left stellate ganglion nerve activity (SGNA), vagal nerve activity (VNA), and thoracic SCNA in 9 dogs at baseline and up to 8 weeks after MI. HRV was determined by time-domain, frequency-domain and non-linear analyses. Results The correlation coefficients between integrated SGNA and SCNA averaged 0.74 (95% confidence interval (CI), 0.41–1.06) at baseline and 0.82 (95% CI, 0.63–1.01) after MI (P<.05 for both). The absolute values of the correlation coefficients were significantly larger than those between SGNA and HRV based on time-domain, frequency-domain and non-linear analyses, respectively, at baseline (P<.05 for all) and after MI (P<.05 for all). There was a clear increase in SGNA and SCNA at 2, 4, 6 and 8 weeks after MI, while HRV parameters showed no significant changes. Significant circadian variations were noted in SCNA, SGNA and all HRV parameters at baseline and after MI, respectively. Atrial tachycardia (AT) episodes were invariably preceded by increases in SCNA and SGNA, which rose progressively over the 120, 90, 60 and 30 s preceding AT onset. No such changes in HRV parameters were observed before AT onset. Conclusion SCNA is more accurate than HRV in estimating cardiac sympathetic tone in ambulatory dogs with MI. PMID:25778433

  2. Estimating the operating point of the cochlear transducer using low-frequency biased distortion products

    PubMed Central

    Brown, Daniel J.; Hartsock, Jared J.; Gill, Ruth M.; Fitzgerald, Hillary E.; Salt, Alec N.

    2009-01-01

    Distortion products in the cochlear microphonic (CM) and in the ear canal in the form of distortion product otoacoustic emissions (DPOAEs) are generated by nonlinear transduction in the cochlea and are related to the resting position of the organ of Corti (OC). A 4.8 Hz acoustic bias tone was used to displace the OC, while the relative amplitude and phase of distortion products evoked by a single tone [most often 500 Hz, 90 dB SPL (sound pressure level)] or two simultaneously presented tones (most often 4 kHz and 4.8 kHz, 80 dB SPL) were monitored. Electrical responses recorded from the round window, scala tympani and scala media of the basal turn, and acoustic emissions in the ear canal were simultaneously measured and compared during the bias. Bias-induced changes in the distortion products were similar to those predicted from computer models of a saturating transducer with a first-order Boltzmann distribution. Our results suggest that biased DPOAEs can be used to non-invasively estimate the OC displacement, producing a measurement equivalent to the transducer operating point obtained via Boltzmann analysis of the basal turn CM. Low-frequency biased DPOAEs might provide a diagnostic tool to objectively diagnose abnormal displacements of the OC, as might occur with endolymphatic hydrops. PMID:19354389
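
    The Boltzmann analysis mentioned above can be illustrated with a short fitting sketch. This is our construction on synthetic data, not the authors' procedure: a first-order (two-state) Boltzmann curve is fitted to a saturating transfer function, and the recovered midpoint offset serves as the operating-point estimate.

    import numpy as np
    from scipy.optimize import curve_fit

    def boltzmann(x, sat, x0, s):
        # First-order Boltzmann: saturating transducer nonlinearity.
        return sat / (1.0 + np.exp(-(x - x0) / s))

    rng = np.random.default_rng(0)
    x = np.linspace(-1.0, 1.0, 200)          # OC displacement (arbitrary units)
    y = boltzmann(x, 1.0, 0.15, 0.25) + 0.02 * rng.standard_normal(x.size)

    (sat_fit, x0_fit, s_fit), _ = curve_fit(boltzmann, x, y, p0=[1.0, 0.0, 0.3])
    # The resting position relative to the curve midpoint is the operating point.
    print(f"estimated operating point: {x0_fit:.3f}")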

  3. Estimation of the break-even point for smoking cessation programs in pregnancy.

    PubMed Central

    Shipp, M; Croughan-Minihane, M S; Petitti, D B; Washington, A E

    1992-01-01

    BACKGROUND. Successful programs to help pregnant women quit smoking have been developed and evaluated, but formal smoking cessation programs are not a part of care at most prenatal sites. The cost of such programs may be an issue. Considering the costs of adverse maternal and infant outcomes resulting from smoking, we estimated the amount of money a prenatal program could invest in smoking cessation and still "break even" economically. METHODS. A model was developed and published data, along with 1989 hospital charge data, were used to arrive at a break-even point for smoking cessation programs in pregnancy. RESULTS. Using overall United States data, we arrived at a break-even cost of $32 per pregnant woman. When these data were varied to fit specific US populations, the break-even costs varied from $10 to $237, with the incidence of preterm low birth weight having the most impact on the cost. CONCLUSIONS. It may be advisable to invest greater amounts of money in a prenatal smoking cessation program for some populations. However, for every population there is an amount that can be invested while still breaking even. PMID:1536354
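
    The break-even logic lends itself to a toy calculation. All numbers below are hypothetical and are not those of the study; the point is only that the program can spend, per pregnant woman, up to the expected savings from averted outcomes.

    smoking_prevalence = 0.20   # fraction of pregnant women who smoke (hypothetical)
    extra_quit_rate = 0.10      # extra quits attributable to the program (hypothetical)
    risk_reduction = 0.05       # absolute drop in preterm-LBW risk per quitter (hypothetical)
    cost_per_case = 30000.0     # hospital cost of one preterm-LBW infant, $ (hypothetical)

    savings_per_quitter = risk_reduction * cost_per_case
    break_even = smoking_prevalence * extra_quit_rate * savings_per_quitter
    print(f"break-even investment per pregnant woman: ${break_even:.2f}")  # $30.00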

  4. Step change point estimation in the multivariate-attribute process variability using artificial neural networks and maximum likelihood estimation

    NASA Astrophysics Data System (ADS)

    Maleki, Mohammad Reza; Amiri, Amirhossein; Mousavi, Seyed Meysam

    2015-07-01

    In some statistical process control applications, the combination of both variable and attribute quality characteristics which are correlated represents the quality of the product or the process. In such processes, identifying the time at which the out-of-control state manifests can help quality engineers to eliminate the assignable causes through proper corrective actions. In this paper, we first use an artificial neural network (ANN)-based method from the literature for detecting variance shifts as well as diagnosing the sources of variation in multivariate-attribute processes. Then, based on the quality characteristics responsible for the out-of-control state, we propose a modular ANN-based model for estimating the time of a step change in multivariate-attribute process variability. We also compare the performance of the ANN-based estimator with an estimator based on maximum likelihood estimation (MLE). A numerical example based on a simulation study is used to evaluate the performance of the estimators in terms of accuracy and precision criteria. The results of the simulation study show that the proposed ANN-based estimator outperforms the MLE estimator under different out-of-control scenarios where different shift magnitudes in the covariance matrix of the multivariate-attribute quality characteristics are manifested.

  5. Point estimation and p-values in phase II adaptive two-stage designs with a binary endpoint.

    PubMed

    Kunzmann, K; Kieser, M

    2017-03-15

    Clinical trials in phase II of drug development are frequently conducted as single-arm two-stage studies with a binary endpoint. Recently, adaptive designs have been proposed for this setting that enable a midcourse modification of the sample size. While these designs are elaborated with respect to hypothesis testing by assuring control of the type I error rate, the topic of point estimation has up to now not been addressed. For adaptive designs with a prespecified sample size recalculation rule, we propose a new point estimator that both assures compatibility of estimation and test decision and minimizes average mean squared error. This estimator can be interpreted as a constrained posterior mean estimate based on the non-informative Jeffreys prior. A comparative investigation of the operating characteristics demonstrates the favorable properties of the proposed approach. Copyright © 2016 John Wiley & Sons, Ltd.
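
    As a building block only (the paper's estimator additionally constrains the estimate to be compatible with the test decision and minimizes average mean squared error), the unconstrained posterior mean of a binomial response rate under the non-informative Jeffreys prior is a one-liner:

    def jeffreys_posterior_mean(successes, n):
        # Posterior is Beta(x + 1/2, n - x + 1/2), whose mean is (x + 1/2) / (n + 1).
        return (successes + 0.5) / (n + 1.0)

    # Example: 12 responders among 34 patients pooled over both stages.
    print(round(jeffreys_posterior_mean(12, 34), 4))  # 0.3571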

  6. Embedded fiber-optic sensing for accurate internal monitoring of cell state in advanced battery management systems part 2: Internal cell signals and utility for state estimation

    NASA Astrophysics Data System (ADS)

    Ganguli, Anurag; Saha, Bhaskar; Raghavan, Ajay; Kiesel, Peter; Arakaki, Kyle; Schuh, Andreas; Schwartz, Julian; Hegyi, Alex; Sommer, Lars Wilko; Lochbaum, Alexander; Sahu, Saroj; Alamgir, Mohamed

    2017-02-01

    A key challenge hindering the mass adoption of lithium-ion and other next-gen chemistries in advanced battery applications such as hybrid/electric vehicles (xEVs) has been management of their functional performance for more effective battery utilization and control over their life. Contemporary battery management systems (BMS), which rely on monitoring external parameters such as voltage and current to ensure safe battery operation with the required performance, usually result in overdesign and inefficient use of capacity. More informative embedded sensors are desirable for internal cell state monitoring, which could provide accurate state-of-charge (SOC) and state-of-health (SOH) estimates and early failure indicators. Here we present a promising new embedded sensing option developed by our team for cell monitoring, fiber-optic (FO) sensors. High-performance large-format pouch cells with embedded FO sensors were fabricated. This second part of the paper focuses on the internal signals obtained from these FO sensors. The details of the method to isolate intercalation strain and temperature signals are discussed. Data collected under various xEV operational conditions are presented. An algorithm employing dynamic time warping and Kalman filtering was used to estimate state-of-charge with high accuracy from these internal FO signals. Their utility for high-accuracy, predictive state-of-health estimation is also explored.
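
    The state-estimation step can be illustrated with a minimal scalar Kalman filter: a coulomb-counting process model corrected by a noisy SOC pseudo-measurement derived from the sensor signal. This is our simplified illustration, not the authors' DTW-based algorithm, and all constants are hypothetical.

    import numpy as np

    dt, capacity_ah = 1.0, 20.0      # time step (s) and cell capacity (Ah), hypothetical
    q_proc, r_meas = 1e-8, 1e-4      # process / measurement noise variances, hypothetical

    soc, p = 0.8, 1e-2               # initial SOC estimate and its variance
    true_soc = 0.8
    rng = np.random.default_rng(1)
    for _ in range(600):
        current_a = 10.0             # constant discharge current (A), hypothetical
        true_soc -= current_a * dt / (3600.0 * capacity_ah)
        soc -= current_a * dt / (3600.0 * capacity_ah)   # predict: coulomb counting
        p += q_proc
        z = true_soc + 0.01 * rng.standard_normal()      # strain-derived pseudo-measurement
        k = p / (p + r_meas)                             # Kalman gain
        soc += k * (z - soc)                             # update
        p *= (1.0 - k)
    print(f"true SOC {true_soc:.4f}, filtered estimate {soc:.4f}")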

  7. A systematic approach for the accurate non-invasive estimation of blood glucose utilizing a novel light-tissue interaction adaptive modelling scheme

    NASA Astrophysics Data System (ADS)

    Rybynok, V. O.; Kyriacou, P. A.

    2007-10-01

    Diabetes is one of the biggest health challenges of the 21st century. The obesity epidemic, sedentary lifestyles and an ageing population mean prevalence of the condition is currently doubling every generation. Diabetes is associated with serious chronic ill health, disability and premature mortality. Long-term complications, including heart disease, stroke, blindness, kidney disease and amputations, make the greatest contribution to the costs of diabetes care. Many of these long-term effects could be avoided with earlier, more effective monitoring and treatment. Currently, blood glucose can only be monitored through the use of invasive techniques. To date there is no widely accepted and readily available non-invasive monitoring technique to measure blood glucose despite the many attempts. This paper addresses one of the most difficult non-invasive monitoring problems, that of blood glucose, and proposes a novel approach that will enable the accurate, calibration-free estimation of glucose concentration in blood. This approach is based on spectroscopic techniques and a new adaptive modelling scheme. The theoretical implementation and the effectiveness of the adaptive modelling scheme for this application have been described, and a detailed mathematical evaluation has been employed to prove that such a scheme has the capability of accurately extracting the concentration of glucose from a complex biological medium.

  8. A technique for estimating spatial sampling errors in coarse-scale soil moisture estimates derived from point-scale observations

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The validation of satellite surface soil moisture retrievals requires the spatial aggregation of point-scale ground soil moisture measurements up to coarse resolution satellite footprint scales (>10 km). In regions containing a limited number of ground measurements per satellite footprint, a large c...

  9. GIS based probabilistic analysis for shallow landslide susceptibility using Point Estimate Method

    NASA Astrophysics Data System (ADS)

    Park, Hyuck-Jin; Lee, Jung-Hyun

    2016-04-01

    The mechanical properties of soil materials (such as cohesion and friction angle) used in physically based models for landslide susceptibility analyses have been identified as a major source of uncertainty caused by complex geological conditions and spatial variability. In addition, limited sampling is another source of uncertainty, since the input parameters are obtained from broad areas. Therefore, in order to properly account for the uncertainty in mechanical parameters, the parameters were considered as random variables and a probabilistic analysis method was used. In much previous research, Monte Carlo simulation has been widely used for the probabilistic analysis. However, since the Monte Carlo method requires a large number of repeated calculations and a great deal of computation time to evaluate the probability of failure, it is not easy to apply this approach to an extensive study area. Therefore, this study proposes an alternative probabilistic analysis approach using the Point Estimate Method (PEM), which overcomes the shortcomings of Monte Carlo simulation: PEM requires only the mean and standard deviation of the random variables and can obtain the probability of failure with a simple calculation. The proposed approach was implemented in a GIS-based environment and applied to a study area that has experienced a large number of landslides. The spatial database for input parameters and the landslide inventory map were constructed in a grid-based GIS environment. To evaluate the performance of the model, the results of the landslide susceptibility assessment were compared with the landslide inventories using a ROC graph.
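
    To make the PEM recipe concrete, the sketch below applies Rosenblueth's two-point estimates to an infinite-slope factor of safety with two uncorrelated random variables (cohesion and friction angle). The slope geometry and parameter statistics are hypothetical, and a normal distribution is assumed for the factor of safety.

    import itertools, math
    from scipy.stats import norm

    gamma, depth, theta = 19.0, 3.0, math.radians(35)  # unit weight (kN/m3), depth (m), slope angle

    def factor_of_safety(c_kpa, phi_deg):
        phi = math.radians(phi_deg)
        return (c_kpa / (gamma * depth * math.sin(theta) * math.cos(theta))
                + math.tan(phi) / math.tan(theta))

    mu_c, sd_c = 10.0, 3.0        # cohesion mean / std (kPa), hypothetical
    mu_phi, sd_phi = 30.0, 4.0    # friction angle mean / std (deg), hypothetical

    # Evaluate FS at the 2^2 = 4 points (mu +/- sigma), each weighted 1/4.
    fs = [factor_of_safety(mu_c + sc * sd_c, mu_phi + sp * sd_phi)
          for sc, sp in itertools.product((-1, 1), repeat=2)]
    m1 = sum(fs) / 4.0
    sd_fs = math.sqrt(sum(v * v for v in fs) / 4.0 - m1 * m1)

    beta = (m1 - 1.0) / sd_fs     # reliability index
    print(f"P(FS < 1) ~ {norm.cdf(-beta):.4f}")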

  10. Group vector space method for estimating enthalpy of vaporization of organic compounds at the normal boiling point.

    PubMed

    Wenying, Wei; Jinyu, Han; Wen, Xu

    2004-01-01

    The specific position of a group in the molecule has been considered, and a group vector space method for estimating the enthalpy of vaporization at the normal boiling point of organic compounds has been developed. An expression for the enthalpy of vaporization Delta(vap)H(T(b)) has been established and numerical values of the relative group parameters obtained. The average percent deviation of the estimation of Delta(vap)H(T(b)) is 1.16, which shows that the present method demonstrates significant improvement in applicability to predict the enthalpy of vaporization at the normal boiling point, compared with conventional group methods.

  11. Depth Estimation and Specular Removal for Glossy Surfaces Using Point and Line Consistency with Light-Field Cameras.

    PubMed

    Tao, Michael W; Su, Jong-Chyi; Wang, Ting-Chun; Malik, Jitendra; Ramamoorthi, Ravi

    2016-06-01

    Light-field cameras have now become available in both consumer and industrial applications, and recent papers have demonstrated practical algorithms for depth recovery from a passive single-shot capture. However, current light-field depth estimation methods are designed for Lambertian objects and fail or degrade for glossy or specular surfaces. The standard Lambertian photoconsistency measure considers the variance of different views, effectively enforcing point-consistency, i.e., that all views map to the same point in RGB space. This variance or point-consistency condition is a poor metric for glossy surfaces. In this paper, we present a novel theory of the relationship between light-field data and reflectance from the dichromatic model. We present a physically-based and practical method to estimate the light source color and separate specularity. We present a new photo consistency metric, line-consistency, which represents how viewpoint changes affect specular points. We then show how the new metric can be used in combination with the standard Lambertian variance or point-consistency measure to give us results that are robust against scenes with glossy surfaces. With our analysis, we can also robustly estimate multiple light source colors and remove the specular component from glossy objects. We show that our method outperforms current state-of-the-art specular removal and depth estimation algorithms in multiple real world scenarios using the consumer Lytro and Lytro Illum light field cameras.

  12. Applying the quarter-hour rule: can people with insomnia accurately estimate 15-min periods during the sleep-onset phase?

    PubMed

    Harrow, Lisa; Espie, Colin

    2010-03-01

    The 'quarter-hour rule' (QHR) instructs the person with insomnia to get out of bed after 15 min of wakefulness and return to bed only when sleep feels imminent. Recent research has identified that sleep can be significantly improved using this simple intervention (Malaffo and Espie, Sleep, 27(s), 2004, 280; Sleep, 29(s), 2006, 257), but successful implementation depends on estimating time without clock monitoring, and the insomnia literature indicates poor time perception is a maintaining factor in primary insomnia (Harvey, Behav. Res. Ther., 40, 2002, 869). This study expands upon previous research with the aim of identifying whether people with insomnia can accurately perceive a 15-min interval during the sleep-onset period, and can therefore successfully implement the QHR. A mixed-model ANOVA design was applied with a between-participants factor of group (insomnia versus good sleepers) and a within-participants factor of context (night versus day). Results indicated no differences between groups and contexts on time estimation tasks. This was despite an increase in arousal in the night context for both groups, and tentative support for the impact of arousal in inducing underestimations of time. These results provide promising support for the successful application of the QHR in people with insomnia. The results are discussed in terms of whether the design employed successfully accessed the processes that are involved in distorting time perception in insomnia. Suggestions for future research are provided and limitations of the current study discussed.

  13. Accurate Equilibrium Structures for trans-HEXATRIENE by the Mixed Estimation Method and for the Three Isomers of Octatetraene from Theory; Structural Consequences of Electron Delocalization

    NASA Astrophysics Data System (ADS)

    Craig, Norman C.; Demaison, Jean; Groner, Peter; Rudolph, Heinz Dieter; Vogt, Natalja

    2015-06-01

    An accurate equilibrium structure of trans-hexatriene has been determined by the mixed estimation method with rotational constants from 8 deuterium and carbon isotopologues and high-level quantum chemical calculations. In the mixed estimation method bond parameters are fit concurrently to moments of inertia of various isotopologues and to theoretical bond parameters, each data set carrying appropriate uncertainties. The accuracy of this structure is 0.001 Å and 0.1°. Structures of similar accuracy have been computed for the cis,cis, trans,trans, and cis,trans isomers of octatetraene at the CCSD(T) level with a basis set of wCVQZ(ae) quality, adjusted in accord with the experience gained with trans-hexatriene. The structures are compared with butadiene and with cis-hexatriene to show how increasing the length of the chain in polyenes leads to increased blurring of the difference between single and double bonds in the carbon chain. In trans-hexatriene r("C_1=C_2") = 1.339 Å and r("C_3=C_4") = 1.346 Å, compared to 1.338 Å for the "double" bond in butadiene; r("C_2-C_3") = 1.449 Å, compared to 1.454 Å for the "single" bond in butadiene. "Double" bonds increase in length; "single" bonds decrease in length.

  14. Genetic diversity estimates point to immediate efforts for conserving the endangered Tibetan sheep of India

    PubMed Central

    Sharma, Rekha; Kumar, Brijesh; Arora, Reena; Ahlawat, Sonika; Mishra, A.K.; Tantia, M.S.

    2016-01-01

    Tibetan is a valuable Himalayan sheep breed classified as endangered. Knowledge of the level and distribution of genetic diversity in Tibetan sheep is important for designing conservation strategies for their sustainable survival and to preserve their evolutionary potential. Thus, for the first time, genetic variability in the Tibetan population was assessed with twenty-five inter-simple sequence repeat markers. All the microsatellites were polymorphic and a total of 148 alleles were detected across these loci. The observed number of alleles across all the loci was more than the effective number of alleles and ranged from 3 (BM6506) to 11 (BM6526) with 5.920 ± 0.387 mean number of alleles per locus. The average observed heterozygosity was less than the expected heterozygosity. The observed and expected heterozygosity values ranged from 0.150 (BM1314) to 0.9 (OarCP20) with an overall mean of 0.473 ± 0.044 and from 0.329 (BM8125) to 0.885 (BM6526) with an overall mean 0.672 ± 0.030, respectively. The lower heterozygosity pointed towards diminished genetic diversity in the population. Thirteen microsatellite loci exhibited significant (P < 0.05) departures from the Hardy–Weinberg proportions in the population. The estimate of heterozygote deficiency varied from −0.443 (OarCP20) to 0.668 (OarFCB128) with a mean positive value of 0.302 ± 0.057. A normal ‘L’-shaped distribution in the mode-shift test and non-significant heterozygote excess on the basis of different models suggested absence of a recent bottleneck in the existing Tibetan population. In view of the declining population of Tibetan sheep (less than 250) in the breeding tract, the need of the hour is immediate scientific management of the population so as to increase its numbers while retaining the founder alleles to the maximum possible extent. PMID:27014586

  15. Incorporating variability in point estimates in risk assessment: bridging the gap between LC50 and population endpoints

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Historically, the use of point estimates such as the LC50 has been instrumental in assessing the risk associated with toxicants to rare or economically important species. In recent years, growing awareness of the shortcomings of this approach has led to an increased focus on analyses using populatio...

  16. Bayesian Estimation of Fugitive Methane Point Source Emission Rates from a Single Downwind High-Frequency Gas Sensor

    EPA Science Inventory

    With the tremendous advances in onshore oil and gas exploration and production (E&P) capability comes the realization that new tools are needed to support env...

  17. Weakly Informative Prior for Point Estimation of Covariance Matrices in Hierarchical Models

    ERIC Educational Resources Information Center

    Chung, Yeojin; Gelman, Andrew; Rabe-Hesketh, Sophia; Liu, Jingchen; Dorie, Vincent

    2015-01-01

    When fitting hierarchical regression models, maximum likelihood (ML) estimation has computational (and, for some users, philosophical) advantages compared to full Bayesian inference, but when the number of groups is small, estimates of the covariance matrix (S) of group-level varying coefficients are often degenerate. One can do better, even from…

  18. Block bootstrap methods for the estimation of the intensity of a spatial point process with confidence bounds.

    PubMed

    Mattfeldt, T; Häbel, H; Fleischer, F

    2013-07-01

    This paper deals with the estimation of the intensity of a planar point process on the basis of a single point pattern, observed in a rectangular window. If the model assumptions of stationarity and isotropy hold, the method of block bootstrapping can be used to estimate the intensity of the process with confidence bounds. The results of two variants of block bootstrapping are compared with a parametric approximation based on the assumption of a Gaussian distribution of the numbers of points in deterministic subwindows of the original pattern. The studies were performed on patterns obtained by simulation of well-known point process models (Poisson process, two Matérn cluster processes, Matérn hardcore process, Strauss hardcore process). They were also performed on real histopathological data (point patterns of capillary profiles of 12 cases of prostatic cancer). The methods are presented as worked examples on two cases, where we illustrate their use as a check on stationarity (homogeneity) of a point process with respect to different fields of vision. The paper concludes with various methodological discussions and suggests possible extensions of the block bootstrap approach to other fields of spatial statistics.
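
    A simplified variant of the idea (ours, not the paper's exact scheme) resamples counts in non-overlapping subwindows to put a confidence interval on the intensity:

    import numpy as np

    rng = np.random.default_rng(2)
    pts = rng.uniform(0.0, 1.0, size=(rng.poisson(500), 2))  # Poisson pattern, unit square

    k = 5                                                    # k x k grid of blocks
    idx = np.floor(pts * k).astype(int).clip(0, k - 1)
    counts = np.zeros((k, k))
    np.add.at(counts, (idx[:, 0], idx[:, 1]), 1)             # points per block
    counts = counts.ravel()

    boot = np.array([rng.choice(counts, size=counts.size, replace=True).sum()
                     for _ in range(2000)])                  # resample blocks
    lo, hi = np.percentile(boot, [2.5, 97.5])                # window area = 1
    print(f"intensity estimate {counts.sum():.0f}, 95% CI ({lo:.0f}, {hi:.0f})")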

  19. Global shape estimates and GIS cartography of Io and Enceladus using new control point network

    NASA Astrophysics Data System (ADS)

    Nadezhdina, I.; Patraty, V.; Shishkina, L.; Zhukov, D.; Zubarev, A.; Karachevtseva, I.; Oberst, J.

    2012-04-01

    We have analyzed a total of 53 Galileo and Voyager images of Io and 54 Cassini images of Enceladus to derive new geodetic control point networks for the two satellites. In order to derive the network for Io we used a subset of 66 images from those used in previous control point network studies [1, 2]. Additionally, we have carried out new point measurements. We used recently reconstructed Galileo spacecraft trajectory data, supplied by the spacecraft navigation team of JPL. A total of 1956 tie point measurements for Io and 4392 for Enceladus have been carried out, which were processed by performing photogrammetric bundle block adjustments. Measurements and block adjustments were performed by means of the "PHOTOMOD" software [3], which was especially adapted for this study to accommodate global networks of small bodies such as Io and Enceladus. As a result, two catalogs with the Cartesian three-dimensional coordinates of 197 and 351 control points were obtained for Io and Enceladus, respectively. The control points for Io have a mean overall accuracy of 4985.7 m (RMS). The individual accuracies of the control points for Enceladus differ substantially over the surface (the range is from 0.1 to 36.0 km) because of gaps in image coverage and differences in resolution. We also determined best-fit spheres, spheroids, and tri-axial ellipsoids. The centers of the models were found to be shifted from the coordinate system origin, attesting to possible errors in the ephemeris of Io. Conclusion and Future work: A comparison of our results for Io with the most recent control point network analysis [2] has revealed that we managed to derive the same accuracy of the control points using a smaller number of images and measurements (this study: 1956 measurements; DLR study: 4392). This probably attests to the fact that the now available new navigation data are internally more consistent. At present an analysis of the data is in progress. We report that control point measurements and global network

  20. Accurate estimation of global and regional cardiac function by retrospectively gated multidetector row computed tomography: comparison with cine magnetic resonance imaging.

    PubMed

    Belge, Bénédicte; Coche, Emmanuel; Pasquet, Agnès; Vanoverschelde, Jean-Louis J; Gerber, Bernhard L

    2006-07-01

    Retrospective reconstruction of ECG-gated images at different parts of the cardiac cycle allows the assessment of cardiac function by multi-detector row CT (MDCT) at the time of non-invasive coronary imaging. We compared the accuracy of such measurements by MDCT to cine magnetic resonance (MR). Forty patients underwent the assessment of global and regional cardiac function by 16-slice MDCT and cine MR. Left ventricular (LV) end-diastolic and end-systolic volumes estimated by MDCT (134±51 and 67±56 ml) were similar to those by MR (137±57 and 70±60 ml, respectively; both P=NS) and strongly correlated (r=0.92 and r=0.95, respectively; both P<0.001). Consequently, LV ejection fractions by MDCT and MR were also similar (55±21 vs. 56±21%; P=NS) and highly correlated (r=0.95; P<0.001). Regional end-diastolic and end-systolic wall thicknesses by MDCT were highly correlated (r=0.84 and r=0.92, respectively; both P<0.001), but significantly lower than by MR (8.3±1.8 vs. 8.8±1.9 mm and 12.7±3.4 vs. 13.3±3.5 mm, respectively; both P<0.001). Values of regional wall thickening by MDCT and MR were similar (54±30 vs. 51±31%; P=NS) and also correlated well (r=0.91; P<0.001). Retrospectively gated MDCT can accurately estimate LV volumes, EF and regional LV wall thickening compared to cine MR.

  1. Automatic NMO Correction and Full Common Depth Point NMO Velocity Field Estimation in Anisotropic Media

    NASA Astrophysics Data System (ADS)

    Sedek, Mohamed; Gross, Lutz; Tyson, Stephen

    2017-01-01

    We present a new computational method of automatic normal moveout (NMO) correction that not only accurately flattens and corrects the far offset data, but simultaneously provides the NMO velocity (v_nmo) for each individual seismic trace. The method is based on a predefined number of NMO velocity sweeps using linear vertical interpolation of different NMO velocities at each seismic trace. At each sweep, we measure the semblance between the zero offset trace (pilot trace) and the next seismic trace using a trace-by-trace rather than sample-by-sample semblance measure; after all the sweeps are done, the one with the maximum semblance value is chosen, which is assumed to be the most suitable NMO velocity trace that accurately flattens seismic reflection events. Other traces follow the same process, and a final velocity field is then extracted. Isotropic, anisotropic and laterally heterogeneous synthetic geological models were built to test the method. A range of synthetic background noise, from 10 to 30%, was applied to the models. In addition, the method was tested on Hess's VTI (vertical transverse isotropy) model. Furthermore, we tested our method on a real pre-stack seismic CDP gather from a gas field in Alaska. The results from the presented examples show excellent NMO correction and a reasonably accurate extracted NMO velocity field.
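
    The sweep idea can be sketched for a single trace: NMO-correct it with a series of trial velocities and keep the velocity whose corrected trace best matches the zero-offset pilot trace. The example below is synthetic (one reflection, one offset), and a normalized cross-correlation stands in for the paper's trace-by-trace semblance measure.

    import numpy as np

    dt, nt, offset = 0.004, 500, 1200.0          # sample interval (s), samples, offset (m)
    t = np.arange(nt) * dt

    def ricker(t, t0, f=25.0):
        a = (np.pi * f * (t - t0)) ** 2
        return (1 - 2 * a) * np.exp(-a)

    v_true, t0_ref = 2500.0, 0.8                 # true NMO velocity, zero-offset time
    pilot = ricker(t, t0_ref)                    # zero-offset (pilot) trace
    trace = ricker(t, np.sqrt(t0_ref**2 + (offset / v_true) ** 2))  # trace with moveout

    best_v, best_score = None, -np.inf
    for v in np.arange(1500.0, 4001.0, 25.0):    # velocity sweep
        t_src = np.sqrt(t**2 + (offset / v) ** 2)          # time mapped by NMO
        corrected = np.interp(t_src, t, trace, left=0.0, right=0.0)
        score = np.dot(corrected, pilot) / (np.linalg.norm(corrected)
                                            * np.linalg.norm(pilot) + 1e-12)
        if score > best_score:
            best_v, best_score = v, score
    print(f"picked v_nmo = {best_v:.0f} m/s (true {v_true:.0f} m/s)")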

  2. MISSING VALUES IN MULTIVARIATE STATISTICS. II. POINT ESTIMATION IN SIMPLE LINEAR REGRESSION.

    DTIC Science & Technology

    They derive the mean square error of prediction for each method of estimation. Tables are given to characterize, in terms of the correlation coefficient, those situations where a given method has smaller mean square error than its competitors.

  3. Impact of Footprint Diameter and Off-Nadir Pointing on the Precision of Canopy Height Estimates from Spaceborne Lidar

    NASA Technical Reports Server (NTRS)

    Pang, Yong; Lefskky, Michael; Sun, Guoqing; Ranson, Jon

    2011-01-01

    A spaceborne lidar mission could serve multiple scientific purposes including remote sensing of ecosystem structure, carbon storage, terrestrial topography and ice sheet monitoring. The measurement requirements of these different goals will require compromises in sensor design. Footprint diameters that would be larger than optimal for vegetation studies have been proposed. Some spaceborne lidar mission designs include the possibility that a lidar sensor would share a platform with another sensor, which might require off-nadir pointing at angles of up to 16°. To resolve multiple mission goals and sensor requirements, detailed knowledge of the sensitivity of sensor performance to these aspects of mission design is required. This research used a radiative transfer model to investigate the sensitivity of forest height estimates to footprint diameter, off-nadir pointing and their interaction over a range of forest canopy properties. An individual-based forest model was used to simulate stands of mixed conifer forest in the Tahoe National Forest (Northern California, USA) and stands of deciduous forests in the Bartlett Experimental Forest (New Hampshire, USA). Waveforms were simulated for stands generated by a forest succession model using footprint diameters of 20 m to 70 m. Off-nadir angles of 0° to 16° were considered for a 25 m footprint diameter. Footprint diameters in the range of 25 m to 30 m were optimal for estimates of maximum forest height (R^2 of 0.95 and RMSE of 3 m). As expected, the contribution of vegetation height to the vertical extent of the waveform decreased with larger footprints, while the contribution of terrain slope increased. Precision of estimates decreased with an increasing off-nadir pointing angle, but off-nadir pointing had less impact on height estimates in deciduous forests than in coniferous forests. When pointing off-nadir, the decrease in precision was dependent on local incidence angle (the angle between the off

  4. Application of portable gas detector in point and scanning method to estimate spatial distribution of methane emission in landfill.

    PubMed

    Lando, Asiyanthi Tabran; Nakayama, Hirofumi; Shimaoka, Takayuki

    2017-01-01

    Methane from landfills contributes to global warming and can pose an explosion hazard. To minimize these effects emissions must be monitored. This study proposed the application of a portable gas detector (PGD) in point and scanning measurements to estimate the spatial distribution of methane emissions in landfills. The aims of this study were to discover the advantages and disadvantages of the point and scanning methods in measuring methane concentrations, map the spatial distribution of methane emissions, determine the correlation between ambient methane concentration and methane flux, and estimate methane flux and emissions in landfills. This study was carried out in the Tamangapa landfill, Makassar City, Indonesia. Measurement areas were divided into a basic and an expanded area. In the point method, the PGD was held one meter above the landfill surface, whereas the scanning method used a PGD with a data logger mounted on a wire drawn between two poles. The point method was time efficient, needing only one person and eight minutes to measure a 400 m² area, whereas the scanning method could capture many hot spot locations and needed 20 min. The results from the basic area showed that ambient methane concentration and flux had a significant (p<0.01) positive correlation with R² = 0.7109 and y = 0.1544x. This correlation equation was used to describe the spatial distribution of methane emissions in the expanded area using the Kriging method. The average estimated flux from the scanning method, 71.2 g m⁻² d⁻¹, was higher than the 38.3 g m⁻² d⁻¹ from the point method. Further, the scanning method could capture lower and higher values, which could be useful to evaluate and estimate the possible effects of uncontrolled emissions in a landfill.
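
    Applying the reported regression is straightforward; the sketch below pushes a hypothetical grid of ambient readings through the fitted line (flux = 0.1544 x concentration) to get a crude site total. Grid values and cell size are invented for illustration.

    import numpy as np

    conc = np.array([[120.0,  340.0,  80.0],
                     [510.0, 1500.0, 260.0],
                     [ 60.0,  220.0,  95.0]])   # ambient CH4 readings, hypothetical grid

    flux = 0.1544 * conc                        # g m^-2 d^-1, per the fitted line
    cell_area_m2 = 400.0                        # hypothetical 20 m x 20 m cells
    total_kg_per_day = flux.sum() * cell_area_m2 / 1000.0
    print(f"mean flux {flux.mean():.1f} g m^-2 d^-1, site total {total_kg_per_day:.1f} kg/d")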

  5. On the Estimation of Forest Resources Using 3D Remote Sensing Techniques and Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Karjalainen, Mika; Karila, Kirsi; Liang, Xinlian; Yu, Xiaowei; Huang, Guoman; Lu, Lijun

    2016-08-01

    In recent years, 3D-capable remote sensing techniques have shown great potential in forest biomass estimation because of their ability to measure the forest canopy structure, tree height and density. The objective of the Dragon3 forest resources research project (ID 10667) and the supporting ESA young scientist project (ESA contract NO. 4000109483/13/I-BG) was to study the use of satellite-based 3D techniques in forest tree height estimation, and consequently in forest biomass and biomass change estimation, by combining satellite data with terrestrial measurements. Results from airborne 3D techniques were also used in the project. Even though forest tree height can be estimated from 3D satellite SAR data to some extent, there is a need for field reference plots. For this reason, we have also been developing automated field plot measurement techniques based on Terrestrial Laser Scanning (TLS) data, which can be used to train and calibrate satellite-based estimation models. In this paper, results of canopy height models created from TerraSAR-X stereo and TanDEM-X INSAR data are shown, as well as preliminary results from the TLS field plot measurement system. Also, results from the airborne CASMSAR system measuring forest canopy height from P- and X-band INSAR are presented.

  6. A Robust and Accurate Two-Step Auto-Labeling Conditional Iterative Closest Points (TACICP) Algorithm for Three-Dimensional Multi-Modal Carotid Image Registration

    PubMed Central

    Guo, Hengkai; Wang, Guijin; Huang, Lingyun; Hu, Yuxin; Yuan, Chun; Li, Rui; Zhao, Xihai

    2016-01-01

    Atherosclerosis is among the leading causes of death and disability. Combining information from multi-modal vascular images is an effective and efficient way to diagnose and monitor atherosclerosis, in which image registration is a key technique. In this paper a feature-based registration algorithm, Two-step Auto-labeling Conditional Iterative Closed Points (TACICP) algorithm, is proposed to align three-dimensional carotid image datasets from ultrasound (US) and magnetic resonance (MR). Based on 2D segmented contours, a coarse-to-fine strategy is employed with two steps: rigid initialization step and non-rigid refinement step. Conditional Iterative Closest Points (CICP) algorithm is given in rigid initialization step to obtain the robust rigid transformation and label configurations. Then the labels and CICP algorithm with non-rigid thin-plate-spline (TPS) transformation model is introduced to solve non-rigid carotid deformation between different body positions. The results demonstrate that proposed TACICP algorithm has achieved an average registration error of less than 0.2mm with no failure case, which is superior to the state-of-the-art feature-based methods. PMID:26881433

  8. A hierarchical model combining distance sampling and time removal to estimate detection probability during avian point counts

    USGS Publications Warehouse

    Amundson, Courtney L.; Royle, J. Andrew; Handel, Colleen M.

    2014-01-01

    Imperfect detection during animal surveys biases estimates of abundance and can lead to improper conclusions regarding distribution and population trends. Farnsworth et al. (2005) developed a combined distance-sampling and time-removal model for point-transect surveys that addresses both availability (the probability that an animal is available for detection; e.g., that a bird sings) and perceptibility (the probability that an observer detects an animal, given that it is available for detection). We developed a hierarchical extension of the combined model that provides an integrated analysis framework for a collection of survey points at which both distance from the observer and time of initial detection are recorded. Implemented in a Bayesian framework, this extension facilitates evaluating covariates on abundance and detection probability, incorporating excess zero counts (i.e. zero-inflation), accounting for spatial autocorrelation, and estimating population density. Species-specific characteristics, such as behavioral displays and territorial dispersion, may lead to different patterns of availability and perceptibility, which may, in turn, influence the performance of such hierarchical models. Therefore, we first test our proposed model using simulated data under different scenarios of availability and perceptibility. We then illustrate its performance with empirical point-transect data for a songbird that consistently produces loud, frequent, primarily auditory signals, the Golden-crowned Sparrow (Zonotrichia atricapilla); and for 2 ptarmigan species (Lagopus spp.) that produce more intermittent, subtle, and primarily visual cues. Data were collected by multiple observers along point transects across a broad landscape in southwest Alaska, so we evaluated point-level covariates on perceptibility (observer and habitat), availability (date within season and time of day), and abundance (habitat, elevation, and slope), and included a nested point
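
    The core accounting in such models is that overall detection probability factors into availability times perceptibility, and the observed count is corrected by dividing by both. The illustrative numbers below are ours (half-normal perceptibility averaged over a circular plot, a Poisson cue model for availability), not estimates from the paper.

    import math

    sigma, w = 80.0, 150.0        # half-normal scale (m) and count radius (m), hypothetical
    cue_rate, minutes = 0.4, 10   # cues per minute and survey duration, hypothetical

    # Perceptibility: half-normal detection function averaged over a circular plot.
    p_perc = (2 * sigma**2 / w**2) * (1 - math.exp(-w**2 / (2 * sigma**2)))

    # Availability: probability of producing at least one cue during the count.
    p_avail = 1 - math.exp(-cue_rate * minutes)

    count = 14                    # birds detected at the point, hypothetical
    n_hat = count / (p_avail * p_perc)
    print(f"p_avail={p_avail:.3f}, p_perc={p_perc:.3f}, corrected N={n_hat:.1f}")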

  9. A method for estimating spikelet number per panicle: Integrating image analysis and a 5-point calibration model.

    PubMed

    Zhao, Sanqin; Gu, Jiabing; Zhao, Youyong; Hassan, Muhammad; Li, Yinian; Ding, Weimin

    2015-11-06

    Spikelet number per panicle (SNPP) is one of the most important yield components used to estimate rice yields. The use of high-throughput quantitative image analysis methods for understanding the diversity of the panicle has increased rapidly. However, it is difficult to simultaneously extract panicle branch and spikelet/grain information from images at the same resolution due to the different scales of these traits. To use a lower resolution and meet the accuracy requirement, we proposed an interdisciplinary method that integrates image analysis and a 5-point calibration model to rapidly estimate SNPP. First, a linear relationship model between the total length of the primary branches (TLPB) and the SNPP was established based on the physiological characteristics of the panicle. Second, the TLPB and area (the primary branch region) traits were rapidly extracted by a newly developed image analysis algorithm. Finally, a 5-point calibration method was adopted to improve the universality of the model. With the proposed method, the SNPP estimation error was less than 10% for more than 90% of the panicle samples. The estimation accuracy was consistent with the accuracy determined using manual measurements. The proposed method uses available concepts and techniques for automated estimation of rice yield information.
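
    One reading of the calibration step (ours; the paper's exact 5-point procedure may differ) is a least-squares line fitted to five manually counted panicles and then applied to image-measured TLPB values. All lengths and counts below are invented.

    import numpy as np

    # Five calibration panicles: TLPB (cm) and manually counted SNPP, hypothetical.
    tlpb_cal = np.array([38.0, 52.0, 61.0, 74.0, 90.0])
    snpp_cal = np.array([82.0, 110.0, 131.0, 158.0, 190.0])

    slope, intercept = np.polyfit(tlpb_cal, snpp_cal, 1)   # least-squares fit

    tlpb_new = np.array([45.5, 68.2, 81.0])                # image-derived TLPB values
    print(np.round(slope * tlpb_new + intercept, 1))       # predicted SNPP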

  10. Estimating a Meaningful Point of Change: A Comparison of Exploratory Techniques Based on Nonparametric Regression

    ERIC Educational Resources Information Center

    Klotsche, Jens; Gloster, Andrew T.

    2012-01-01

    Longitudinal studies are increasingly common in psychological research. Characterized by repeated measurements, longitudinal designs aim to observe phenomena that change over time. One important question involves identification of the exact point in time when the observed phenomena begin to meaningfully change above and beyond baseline…

  11. Size matters: point pattern analysis biases the estimation of spatial properties of stomata distribution.

    PubMed

    Naulin, Paulette I; Valenzuela, Gerardo; Estay, Sergio A

    2017-03-01

    Stomata distribution is an example of biological patterning. Formal methods used to study stomata patterning are generally based on point-pattern analysis, which assumes that stomata are points and ignores the constraints imposed by size on the placement of neighbors. The inclusion of size in the analysis requires the use of a null model based on finite-size object geometry. In this study, we compare the results obtained by analyzing samples from several species using point and disc null models. The results show that depending on the null model used, there was a 20% reduction in the number of samples classified as uniform; these results suggest that stomata patterning is not as general as currently reported. Some samples changed drastically from being classified as uniform to being classified as clustered. In samples of Arabidopsis thaliana, only the disc model identified clustering at high densities of stomata. This reinforces the importance of selecting an appropriate null model to avoid incorrect inferences about underlying biological mechanisms. Based on the results gathered here, we encourage researchers to abandon point-pattern analysis when studying stomata patterning; more realistic conclusions can be drawn from finite-size object analysis.

  12. Using a focal-plane array to estimate antenna pointing errors

    NASA Technical Reports Server (NTRS)

    Zohar, S.; Vilnrotter, V. A.

    1991-01-01

    The use of extra collecting horns in the focal plane of an antenna as a means of determining the Direction of Arrival (DOA) of the signal impinging on it, provided it is within the antenna beam, is considered. Our analysis yields a relatively simple algorithm to extract the DOA from the horns' outputs. An algorithm which, in effect, measures the thermal noise of the horns' signals and determines its effect on the uncertainty of the extracted DOA parameters is also developed. Both algorithms were implemented in software and tested on simulated data. Based on these tests, it is concluded that this is a viable approach to DOA determination. Though the results obtained are of general applicability, the particular motivation for the present work is their application to the pointing of a mechanically deformed antenna. It is anticipated that the pointing algorithm developed for a deformed antenna could be obtained as a small perturbation of the algorithm developed for an undeformed antenna. In this context, it should be pointed out that, with a deformed antenna, the array of horns and its associated circuitry constitute the main part of the deformation-compensation system. In this case, the pointing system proposed may be viewed as an additional task carried out by the deformation-compensation hardware.

  13. Screening-level estimates of mass discharge uncertainty from point measurement methods

    EPA Science Inventory

    The uncertainty of mass discharge measurements associated with point-scale measurement techniques was investigated by deriving analytical solutions for the mass discharge coefficient of variation for two simplified, conceptual models. In the first case, a depth-averaged domain w...

  14. Using a genetic algorithm to estimate the details of earthquake slip distributions from point surface displacements

    NASA Astrophysics Data System (ADS)

    Lindsay, A.; McCloskey, J.; Nic Bhloscaidh, M.

    2016-03-01

    Examining fault activity over several earthquake cycles is necessary for long-term modeling of the fault strain budget and stress state. While this requires knowledge of coseismic slip distributions for successive earthquakes along the fault, these exist only for the most recent events. However, overlying the Sunda Trench, sparsely distributed coral microatolls are sensitive to tectonically induced changes in relative sea levels and provide a century-spanning paleogeodetic and paleoseismic record. Here we present a new technique called the Genetic Algorithm Slip Estimator to constrain slip distributions from observed surface deformations of corals. We identify a suite of models consistent with the observations, and from them we compute an ensemble estimate of the causative slip. We systematically test our technique using synthetic data. Applying the technique to observed coral displacements for the 2005 Nias-Simeulue earthquake and 2007 Mentawai sequence, we reproduce key features of slip present in previously published inversions, such as the magnitude and location of slip asperities. From the displacement data available for the 1797 and 1833 Mentawai earthquakes, we present slip estimates reproducing observed displacements. The areas of highest modeled slip in the paleoearthquakes are nonoverlapping, and our solutions appear to tile the plate interface, complementing one another. This observation is supported by the complex rupture pattern of the 2007 Mentawai sequence, underlining the need to examine earthquake occurrence through long-term strain budget and stress modeling. Although developed to estimate earthquake slip, the technique is readily adaptable for a wider range of applications.
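
    A toy version of the genetic-algorithm idea (ours; real applications would build the Green's matrix from elastic dislocation solutions rather than random numbers) evolves candidate slip vectors so that predicted surface displacements d = G s match observations:

    import numpy as np

    rng = np.random.default_rng(3)
    n_patch, n_obs = 12, 30
    G = rng.normal(size=(n_obs, n_patch))           # stand-in Green's functions
    s_true = np.abs(rng.normal(2.0, 1.0, n_patch))  # "true" slip (m)
    d_obs = G @ s_true + 0.05 * rng.normal(size=n_obs)

    def fitness(pop):                               # negative RMS misfit
        resid = pop @ G.T - d_obs                   # shape (n_pop, n_obs)
        return -np.sqrt((resid**2).mean(axis=1))

    pop = np.abs(rng.normal(2.0, 1.0, size=(200, n_patch)))
    for _ in range(300):
        parents = pop[np.argsort(fitness(pop))[-100:]]     # truncation selection
        i = rng.integers(0, 100, size=200)
        j = rng.integers(0, 100, size=200)
        mask = rng.random((200, n_patch)) < 0.5            # uniform crossover
        pop = np.where(mask, parents[i], parents[j])
        pop += 0.05 * rng.normal(size=pop.shape)           # mutation
        pop = np.clip(pop, 0.0, None)                      # slip is non-negative
    best = pop[np.argmax(fitness(pop))]
    rms = float(np.sqrt(((best - s_true) ** 2).mean()))
    print(f"RMS slip error: {rms:.3f} m")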

  15. Stereo-synthetic aperture radar technique without using control points to estimate terrain height

    NASA Astrophysics Data System (ADS)

    Chou, Hsi-Tseng; Lu, Kung-Yu; Liu, Chung-Chih

    2015-01-01

    A stereo-synthetic aperture radar (stereo-SAR)-based technique is proposed to estimate the unknown terrain profile of a target area. This technique first mathematically builds up a virtual reference profile. An algorithm is then developed to estimate the relative height difference between the desired and reference profiles by using the trigonometric relationship between their relative SAR range distances, which allows the height of the desired profile to be built up from the reference profile. This technique is advantageous and simple to implement because the virtual reference profile is constructed using the same SAR range information as that used for the terrain profile under estimation, which is established by considering the measurement difference between two SAR receivers. It does not require the use of an existing known profile as the reference. Furthermore, we present a technique for calibrating the measured SAR range information, which significantly improves the estimation accuracy. Three practical examples are presented to demonstrate the feasibility of the developed technique.

  16. ESTIMATING THE EXPOSURE POINT CONCENTRATION TERM USING PROUCL, VERSION 3.0

    EPA Science Inventory

    In superfund and RCRA Projects of the U.S. EPA, cleanup, exposure, and risk assessment decisions are often made based upon the mean concentrations of the contaminants of potential concern (COPC). A 95% upper confidence limit (UCL) of the population mean is used to estimate the e...
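
    For a quick illustration of the 95% UCL concept (not of ProUCL itself, which selects among several estimators depending on the data's distribution and skewness), a minimal Student's-t UCL can be sketched as follows; the concentration values are hypothetical.

```python
import numpy as np
from scipy import stats

def t_ucl95(x):
    """One-sided 95% upper confidence limit of the mean (Student's t),
    one of several UCL estimators offered by tools such as ProUCL."""
    x = np.asarray(x, dtype=float)
    n = x.size
    return x.mean() + stats.t.ppf(0.95, n - 1) * x.std(ddof=1) / np.sqrt(n)

# Hypothetical contaminant concentrations (mg/kg) at a site
conc = np.array([3.1, 4.7, 2.9, 8.4, 5.2, 3.8, 6.1, 4.4])
print(f"mean = {conc.mean():.2f}, 95% UCL = {t_ucl95(conc):.2f}")
```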

  17. Variance-reduced estimator of the connected two-point function in the presence of a broken Z(2)-symmetry.

    PubMed

    Hasenbusch, Martin

    2016-03-01

    The exchange or geometric cluster algorithm allows us to define a variance-reduced estimator of the connected two-point function in the presence of a broken Z(2)-symmetry. We present numerical tests for the improved Blume-Capel model on the simple-cubic lattice. We perform simulations for the critical isotherm, the low-temperature phase at vanishing external field, and, for comparison, also the high-temperature phase. For the connected two-point function, a substantial reduction of the variance can be obtained, allowing us to compute the correlation length ξ with high precision. Based on these results, estimates for various universal amplitude ratios that characterize the universality class of the three-dimensional Ising model are computed.

  18. Summary Report on the Graded Prognostic Assessment: An Accurate and Facile Diagnosis-Specific Tool to Estimate Survival for Patients With Brain Metastases

    PubMed Central

    Sperduto, Paul W.; Kased, Norbert; Roberge, David; Xu, Zhiyuan; Shanley, Ryan; Luo, Xianghua; Sneed, Penny K.; Chao, Samuel T.; Weil, Robert J.; Suh, John; Bhatt, Amit; Jensen, Ashley W.; Brown, Paul D.; Shih, Helen A.; Kirkpatrick, John; Gaspar, Laurie E.; Fiveash, John B.; Chiang, Veronica; Knisely, Jonathan P.S.; Sperduto, Christina Maria; Lin, Nancy; Mehta, Minesh

    2012-01-01

    Purpose Our group has previously published the Graded Prognostic Assessment (GPA), a prognostic index for patients with brain metastases. Updates have been published with refinements to create diagnosis-specific Graded Prognostic Assessment indices. The purpose of this report is to present the updated diagnosis-specific GPA indices in a single, unified, user-friendly report to allow ease of access and use by treating physicians. Methods A multi-institutional retrospective (1985 to 2007) database of 3,940 patients with newly diagnosed brain metastases underwent univariate and multivariate analyses of prognostic factors associated with outcomes by primary site and treatment. Significant prognostic factors were used to define the diagnosis-specific GPA prognostic indices. A GPA of 4.0 correlates with the best prognosis, whereas a GPA of 0.0 corresponds with the worst prognosis. Results Significant prognostic factors varied by diagnosis. For lung cancer, prognostic factors were Karnofsky performance score, age, presence of extracranial metastases, and number of brain metastases, confirming the original Lung-GPA. For melanoma and renal cell cancer, prognostic factors were Karnofsky performance score and the number of brain metastases. For breast cancer, prognostic factors were tumor subtype, Karnofsky performance score, and age. For GI cancer, the only prognostic factor was the Karnofsky performance score. The median survival times by GPA score and diagnosis were determined. Conclusion Prognostic factors for patients with brain metastases vary by diagnosis, and for each diagnosis, a robust separation into different GPA scores was discerned, implying considerable heterogeneity in outcome, even within a single tumor type. In summary, these indices and related worksheet provide an accurate and facile diagnosis-specific tool to estimate survival, potentially select appropriate treatment, and stratify clinical trials for patients with brain metastases. PMID:22203767

  19. Enhancing efficiency and quality of statistical estimation of immunogenicity assay cut points through standardization and automation.

    PubMed

    Su, Cheng; Zhou, Lei; Hu, Zheng; Weng, Winnie; Subramani, Jayanthi; Tadkod, Vineet; Hamilton, Kortney; Bautista, Ami; Wu, Yu; Chirmule, Narendra; Zhong, Zhandong Don

    2015-10-01

    Biotherapeutics can elicit immune responses, which can alter the exposure, safety, and efficacy of the therapeutics. A well-designed and robust bioanalytical method is critical for the detection and characterization of relevant anti-drug antibodies (ADA) and the success of an immunogenicity study. As a fundamental criterion in immunogenicity testing, assay cut points need to be statistically established with a risk-based approach to reduce subjectivity. This manuscript describes the development of a validated, web-based, multi-tier customized assay statistical tool (CAST) for assessing cut points of ADA assays. The tool provides an intuitive web interface that allows users to import experimental data generated from a standardized experimental design, select the assay factors, run the standardized analysis algorithms, and generate tables, figures, and listings (TFL). It allows bioanalytical scientists to perform complex statistical analyses at the click of a button to produce reliable assay parameters in support of immunogenicity studies.
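
    CAST's exact algorithms are not given in the abstract. As a generic illustration of a screening cut point targeting a 5% false-positive rate, common practice uses either a parametric mean-plus-1.645-SD limit or a non-parametric 95th percentile of drug-naive sample responses; the sketch below assumes such an approach and synthetic data.

```python
import numpy as np

def screening_cut_point(signals, parametric=True):
    """Screening cut point targeting a 5% false-positive rate, in the
    spirit of common ADA assay practice (parametric mean + 1.645*SD, or
    the non-parametric 95th percentile). A generic sketch, not CAST's
    actual algorithm."""
    s = np.asarray(signals, dtype=float)
    if parametric:
        return s.mean() + 1.645 * s.std(ddof=1)
    return np.percentile(s, 95)

# Hypothetical drug-naive donor responses (assay signal units)
naive = np.random.default_rng(1).lognormal(mean=0.0, sigma=0.25, size=50)
print("parametric cut point:    ", screening_cut_point(naive))
print("non-parametric cut point:", screening_cut_point(naive, parametric=False))
```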

  20. Screening-level estimates of mass discharge uncertainty from point measurement methods.

    PubMed

    Brooks, Michael C; Cha, Ki Young; Wood, A Lynn; Annable, Michael D

    2015-01-01

    The uncertainty of mass discharge measurements associated with point-scale measurement techniques was investigated by deriving analytical solutions for the mass discharge coefficient of variation for two simplified, conceptual models. In the first case, a depth-averaged domain was assumed, consisting of one-dimensional groundwater flow perpendicular to a one-dimensional control plane of uniformly spaced sampling points. The contaminant flux along the control plane was assumed to be normally distributed. The second case consisted of one-dimensional groundwater flow perpendicular to a two-dimensional control plane of uniformly spaced sampling points. The contaminant flux in this case was assumed to be distributed according to a bivariate normal distribution. The center point for the flux distributions in both cases was allowed to vary in the domain of the control plane as a uniform random variable. Simplified equations for the uncertainty were investigated to facilitate screening-level evaluations of uncertainty as a function of sampling network design. Results were used to express uncertainty as a function of the length of the control plane and number of wells, or alternatively as a function of the sample spacing. Uncertainty was also expressed as a function of a new dimensionless parameter, Ω, defined as the ratio of the maximum local flux to the product of mass discharge and sample density. Expressing uncertainty as a function of Ω provided a convenient means to demonstrate the relationship between uncertainty, the magnitude of a local hot spot, magnitude of mass discharge, distribution of the contaminant across the control plane, and the sampling density.
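
    The dimensionless parameter Ω can be computed directly from its definition in the abstract; the sketch below assumes a two-dimensional control plane where sampling density is points per unit area, with hypothetical values.

```python
def omega(max_local_flux, mass_discharge, n_samples, plane_area):
    """Dimensionless parameter Omega from the abstract: the ratio of the
    maximum local flux to the product of mass discharge and sampling
    density (points per unit control-plane area)."""
    sample_density = n_samples / plane_area
    return max_local_flux / (mass_discharge * sample_density)

# Hypothetical example: flux in g/m2/d, discharge in g/d, area in m2
print(omega(max_local_flux=5.0, mass_discharge=20.0,
            n_samples=16, plane_area=200.0))
```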

  1. Estimating animal resource selection from telemetry data using point process models

    USGS Publications Warehouse

    Johnson, Devin S.; Hooten, Mevin B.; Kuhn, Carey E.

    2013-01-01

    To demonstrate the analysis of telemetry data with the point process approach, we analysed a data set of telemetry locations from northern fur seals (Callorhinus ursinus) in the Pribilof Islands, Alaska. Both a space–time and an aggregated space-only model were fitted. At the individual level, the space–time analysis showed little selection relative to the habitat covariates. However, at the study area level, the space-only model showed strong selection relative to the covariates.

  2. Towards an Accurate Measurement of Thermal Contact Resistance at Chemical Vapor Deposition-Grown Graphene/SiO2 Interface Through Null Point Scanning Thermal Microscopy.

    PubMed

    Chung, Jaehun; Hwang, Gwangseok; Kim, Hyeongkeun; Yang, Wooseok; Kwon, Ohmyoung

    2015-11-01

    In the development of graphene-based electronic devices, it is crucial to characterize the thermal contact resistance between the graphene and the substrate precisely. In this study, we demonstrate that the thermal contact resistance between CVD-grown graphene and SiO2 substrate can be obtained by measuring the temperature drop occurring at the graphene/SiO2 interface with null point scanning thermal microscopy (NP SThM), which profiles the temperature distribution quantitatively with nanoscale spatial resolution (~50 nm) without the shortcomings of the conventional SThM. The thermal contact resistance between the CVD-grown graphene and SiO2 substrate is measured as (1.7 ± 0.27) × 10⁻⁶ m²·K/W. This abnormally large thermal contact resistance seems to be caused by extrinsic factors such as ripples and metal-based contamination, which inevitably form in CVD-grown graphene during the production and transfer processes.

  3. Is point-of-care ultrasound accurate and useful in the hands of military medical technicians? A review of the literature.

    PubMed

    Hile, David C; Morgan, Andrew R; Laselle, Brooks T; Bothwell, Jason D

    2012-08-01

    Over the past decade, point-of-care ultrasound (US) use by nonphysician providers has grown substantially. The purpose of this article is to (1) summarize the literature evaluating military medics' facility at US, (2) more clearly define the potential utility of military prehospital US technology, and (3) lay a pathway for future research of military prehospital US. The authors performed a keyword search using multiple search engines. Each author independently reviewed the search results and evaluated the literature for inclusion. Of 30 studies identified, five studies met inclusion criteria. The applications included evaluation of cardiac activity, pneumothorax evaluation, and fracture evaluation. Additionally, a descriptive study demonstrated the distribution of US exam types during practical use by Army Special Forces Medical Sergeants. No studies evaluated retention of skills over prolonged periods. Multiple studies demonstrate the feasibility of training military medics in US. Even under austere conditions, the majority of studies conclude that medics can perform US with a high degree of accuracy. Lessons learned from these studies tend to support continued use of US in out-of-hospital settings and exploration of the optimal curriculum to introduce this skill.

  4. Eigenspace perturbations for uncertainty estimation of single-point turbulence closures

    NASA Astrophysics Data System (ADS)

    Iaccarino, Gianluca; Mishra, Aashwin Ananda; Ghili, Saman

    2017-02-01

    Reynolds-averaged Navier-Stokes (RANS) models represent the workhorse for predicting turbulent flows in complex industrial applications. However, RANS closures introduce a significant degree of epistemic uncertainty in predictions due to the potential lack of validity of the assumptions utilized in model formulation. Estimating this uncertainty is a fundamental requirement for building confidence in such predictions. We outline a methodology to estimate this structural uncertainty, incorporating perturbations to the eigenvalues and the eigenvectors of the modeled Reynolds stress tensor. The mathematical foundations of this framework are derived and explicated. The framework is then applied to a set of separated turbulent flows, compared to numerical and experimental data, and contrasted against the predictions of the eigenvalue-only perturbation methodology. It is shown that for separated flows, this framework yields a significant enhancement over the established eigenvalue perturbation methodology in explaining the discrepancy against experimental observations and high-fidelity simulations. Furthermore, uncertainty bounds of potential engineering utility can be estimated by performing five specific RANS simulations, reducing the computational expenditure of such an exercise.
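
    A minimal sketch of the eigenvalue part of such a perturbation is shown below: the modeled Reynolds stress is mapped to its anisotropy tensor, the eigenvalues are shifted a fraction of the way toward a limiting state of turbulence (here the one-component corner), and the stress is reassembled. The eigenvector perturbations that distinguish this framework, and the specific limiting states it uses, are simplified away.

```python
import numpy as np

def perturb_reynolds_stress(R, k, delta,
                            corner=np.array([2/3, -1/3, -1/3])):
    """Eigenvalue perturbation of a modeled Reynolds stress tensor: move
    the anisotropy eigenvalues a fraction `delta` toward a limiting state
    (default: the one-component corner of the barycentric triangle).
    Sketch of the eigenvalue-only step; eigenvectors are left unperturbed."""
    I = np.eye(3)
    b = R / (2.0 * k) - I / 3.0          # anisotropy tensor
    lam, V = np.linalg.eigh(b)           # eigenvalues ascending
    lam_t = np.sort(corner)              # target eigenvalues, same ordering
    lam_p = (1 - delta) * lam + delta * lam_t
    b_p = V @ np.diag(lam_p) @ V.T
    return 2.0 * k * (b_p + I / 3.0)     # perturbed Reynolds stress

# Hypothetical modeled Reynolds stress (symmetric positive semidefinite)
R = np.array([[0.8, 0.1, 0.0],
              [0.1, 0.6, 0.0],
              [0.0, 0.0, 0.4]])
k = 0.5 * np.trace(R)
print(perturb_reynolds_stress(R, k, delta=0.3))
```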

  5. Estimation of precipitable water vapour using kinematic GNSS precise point positioning over an altitude range of 1 km

    NASA Astrophysics Data System (ADS)

    Webb, S. R.; Penna, N. T.; Clarke, P. J.; Webster, S.; Martin, I.

    2013-12-01

    The estimation of total precipitable water vapour (PWV) using kinematic GNSS has been investigated since around 2001, aiming to extend the use of static ground-based GNSS, from which PWV estimates are now operationally assimilated into numerical weather prediction models. To date, kinematic GNSS PWV studies suggest a PWV measurement agreement with radiosondes of 2-3 mm, almost commensurate with static GNSS measurement accuracy, but only shipborne experiments have so far been carried out. As a first step towards extending such sea-level-based studies to platforms that operate at a range of altitudes, such as airplanes or land-based vehicles, the kinematic GNSS estimation of PWV over an exactly repeated trajectory is considered. A data set was collected from a GNSS receiver and antenna mounted on a carriage of the Snowdon Mountain Railway, UK, which continually ascends and descends through 950 m of vertical relief. Static GNSS reference receivers were installed at the top and bottom of the altitude profile, and derived zenith wet delay (ZWD) was interpolated to the altitude of the train to provide reference values, together with profile estimates from the 100 m resolution runs of the Met Office's Unified Model. We demonstrate similar GNSS accuracies as obtained from previous shipborne studies, namely a double-difference relative kinematic GNSS ZWD accuracy within 14 mm, and a kinematic GNSS precise point positioning ZWD accuracy within 15 mm. The latter is a more typical airborne PWV estimation scenario, i.e., without the reliance on ground-based GNSS reference stations. We show that the kinematic GPS-only precise point positioning ZWD estimation is enhanced by also incorporating GLONASS observations.
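
    The final step from ZWD to PWV is a standard conversion (e.g., Bevis et al., 1992) through a dimensionless factor Π ≈ 0.15 that depends on the weighted mean temperature Tm of the atmosphere. The constants below are commonly used values, not necessarily those of this study.

```python
def zwd_to_pwv(zwd_m, tm_kelvin):
    """Convert zenith wet delay (m) to precipitable water vapour (m)
    via the dimensionless factor Pi ~ 0.15. Tm is the weighted mean
    temperature of the atmosphere in kelvin."""
    rho_w = 1000.0   # density of liquid water, kg/m^3
    R_v = 461.5      # specific gas constant of water vapour, J/(kg K)
    k2p = 0.221      # k2' = 22.1 K/hPa, expressed in K/Pa
    k3 = 3776.0      # k3 = 3.776e5 K^2/hPa, expressed in K^2/Pa
    pi_factor = 1.0e6 / (rho_w * R_v * (k3 / tm_kelvin + k2p))
    return pi_factor * zwd_m

print(zwd_to_pwv(0.100, 270.0))  # ~0.015 m of PWV for 100 mm of ZWD
```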

  6. THEORETICAL ESTIMATES OF TWO-POINT SHEAR CORRELATION FUNCTIONS USING TANGLED MAGNETIC FIELDS

    SciTech Connect

    Pandey, Kanhaiya L.; Sethi, Shiv K.

    2012-03-20

    The existence of primordial magnetic fields can induce matter perturbations with additional power at small scales as compared to the usual ΛCDM model. We study its implication within the context of a two-point shear correlation function from gravitational lensing. We show that a primordial magnetic field can leave its imprints on the shear correlation function at angular scales ≲ a few arcminutes. The results are compared with CFHTLS data, which yield some of the strongest known constraints on the parameters (strength and spectral index) of the primordial magnetic field. We also discuss the possibility of detecting sub-nanogauss fields using future missions such as SNAP.

  7. Point-Process Models of Social Network Interactions: Parameter Estimation and Missing Data Recovery

    DTIC Science & Technology

    2014-08-01

    choice has precedent in seismology [21]. Figure 2 shows Hawkes process realisations with μ = 0.15 and g(t) = 0.5e^(−0.6t). The intensity and event times are... resembling Figure 1(a). The Hawkes process appears in the seismology literature as a model for the timing of earthquakes and their aftershocks [25]. As... 036127. [35] Veen, A. & Schoenberg, F. P. 2008 Estimation of space-time branching process models in seismology using an EM-type algorithm. Journal of
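
    The fragment's self-exciting model with background rate μ = 0.15 and triggering kernel g(t) = 0.5e^(−0.6t) can be simulated with Ogata's thinning algorithm; a minimal sketch follows (the report's own implementation details are not shown in the snippet above).

```python
import numpy as np

def simulate_hawkes(mu=0.15, alpha=0.5, beta=0.6, t_end=200.0, seed=0):
    """Simulate a Hawkes process with conditional intensity
    lambda(t) = mu + sum_i alpha * exp(-beta * (t - t_i))
    by Ogata's thinning algorithm (parameters match the report's
    mu = 0.15 and g(t) = 0.5 * exp(-0.6 t))."""
    rng = np.random.default_rng(seed)
    events, t = [], 0.0
    while t < t_end:
        # Between events the intensity is non-increasing, so its current
        # value is a valid upper bound for thinning.
        lam_bar = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events)
        t += rng.exponential(1.0 / lam_bar)
        if t >= t_end:
            break
        lam_t = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events)
        if rng.uniform() <= lam_t / lam_bar:
            events.append(t)
    return np.array(events)

ts = simulate_hawkes()
print(len(ts), "events; branching ratio alpha/beta =", 0.5 / 0.6)
```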

  8. A quantum mechanical/neural net model for boiling points with error estimation.

    PubMed

    Chalk, A J; Beck, B; Clark, T

    2001-01-01

    We present QSPR models for normal boiling points employing a neural network approach and descriptors calculated using semiempirical MO theory (AM1 and PM3). These models are based on a data set of 6000 compounds with widely varying functionality and should therefore be applicable to a diverse range of systems. We include cross-validation by simultaneously training 10 different networks, each with different training and test sets. The predicted boiling point is given by the mean of the 10 results, and the individual error of each compound is related to the standard deviation of these predictions. For our best model we find that the standard deviation of the training error is 16.5 K for 6000 compounds and the correlation coefficient (R²) between our prediction and experiment is 0.96. We also examine the effect of different conformations and tautomerism on our calculated results. Large deviations between our predictions and experiment can generally be explained by experimental errors or problems with the semiempirical methods.
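
    The committee scheme described (10 networks on different splits, mean as prediction, spread as per-compound error bar) is easy to sketch. The code below uses synthetic descriptors and scikit-learn's MLPRegressor purely as a stand-in for the paper's semiempirical-descriptor networks.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Hypothetical data: 500 compounds x 8 descriptors -> boiling point (K)
X = rng.normal(size=(500, 8))
y = 350 + 25 * X[:, 0] - 10 * X[:, 1] + rng.normal(0, 5, 500)

# Committee: each net is trained on a different random subset
nets = []
for i in range(10):
    idx = rng.permutation(len(X))[:450]
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                       random_state=i).fit(X[idx], y[idx])
    nets.append(net)

preds = np.stack([net.predict(X) for net in nets])  # (10, n_compounds)
bp_mean = preds.mean(axis=0)   # predicted boiling point
bp_err = preds.std(axis=0)     # per-compound error estimate
print(f"first compound: {bp_mean[0]:.1f} +/- {bp_err[0]:.1f} K")
```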

  9. Comparison of dose at an interventional reference point between the displayed estimated value and measured value.

    PubMed

    Chida, Koichi; Inaba, Yohei; Morishima, Yoshiaki; Taura, Masaaki; Ebata, Ayako; Yanagawa, Isao; Takeda, Ken; Zuguchi, Masayuki

    2011-07-01

    Today, interventional radiology (IR) X-ray units are required to display the dose at an interventional reference point (IRP) for the operator (IR physician). The dose displayed at the IRP (the reference dose) of an X-ray unit has been reported to be helpful for characterizing patient exposure in real time. However, no detailed report has evaluated the accuracy of the reference doses displayed on X-ray equipment. Thus, in this study, we compared the displayed reference dose to the actual measured value in many IR X-ray systems. Although the displayed reference doses of many IR X-ray systems agreed with the actual measured values to within approximately 15%, a few IR units deviated considerably. Furthermore, some X-ray units made in Japan displayed reference doses quite different from the actual measured value, probably because the reference point of these units differs from the International Electrotechnical Commission standard. Thus, IR physicians should pay attention to the location of the IRP underlying the displayed reference dose in Japan. Furthermore, physicians should be aware of the accuracy of the displayed reference dose of the X-ray system that they use for IR. Regular checks of the displayed reference dose of the X-ray system are therefore important.

  10. Estimation of lactose hydrolysis by freezing point measurements in milk and whey substrates treated with lactases from various microorganisms.

    PubMed

    Chen, S L; Frank, J F; Loewenstein, M

    1981-11-01

    beta-Galactosidase concentrates obtained from several microorganisms were used to hydrolyze skim milk, low fat (2%) milk, sweet whey, acid whey, acid whey permeate, and acid whey concentrate. Among acid substrates, the freezing point depression for each 1% lactose hydrolyzed was greatest with the lactase from Aspergillus niger (0.0501 degrees H); among neutral substrates, the depression was greater in sweet whey (0.0495 degrees H) and lesser in low fat milk (0.0445 degrees H). All data were statistically significant. The average freezing point depression for each 1% lactose hydrolyzed was 0.0468 degrees H (range 0.0436-0.0501 degrees H). Oligosaccharides formed during lactose hydrolysis caused inconsistent freezing point readings of the cryoscope at the low freezing points measured, and protease contamination in some lactases may affect the precision of freezing point determination. Hydration and volume of non-protein components in commercial enzymes, the unstable color complex formed by lactose and methylamine solution, and difficulty in the use of methylamine solution might cause variations in determination of lactose by the analytical procedure. These factors can be eliminated or minimized. This method is the simplest and quickest estimation of lactose hydrolysis, and it offers great accuracy and consistency.
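
    With the average factor reported above, the estimate reduces to a single division; a minimal sketch with a hypothetical cryoscope reading:

```python
def percent_lactose_hydrolyzed(fp_depression_H, factor=0.0468):
    """Estimate % lactose hydrolysis from the observed freezing point
    depression (degrees Hortvet), using the paper's average factor of
    0.0468 degrees H per 1% hydrolysis (substrate- and enzyme-specific
    factors range from 0.0436 to 0.0501)."""
    return fp_depression_H / factor

print(percent_lactose_hydrolyzed(2.34))  # -> ~50 (% hydrolysis)
```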

  11. Point process modeling and estimation: Advances in the analysis of dynamic neural spiking data

    NASA Astrophysics Data System (ADS)

    Deng, Xinyi

    A common interest of scientists in many fields is to understand the relationship between the dynamics of a physical system and the occurrences of discrete events within such physical system. Seismologists study the connection between mechanical vibrations of the Earth and the occurrences of earthquakes so that future earthquakes can be better predicted. Astrophysicists study the association between the oscillating energy of celestial regions and the emission of photons to learn the Universe's various objects and their interactions. Neuroscientists study the link between behavior and the millisecond-timescale spike patterns of neurons to understand higher brain functions. Such relationships can often be formulated within the framework of state-space models with point process observations. The basic idea is that the dynamics of the physical systems are driven by the dynamics of some stochastic state variables and the discrete events we observe in an interval are noisy observations with distributions determined by the state variables. This thesis proposes several new methodological developments that advance the framework of state-space models with point process observations at the intersection of statistics and neuroscience. In particular, we develop new methods 1) to characterize the rhythmic spiking activity using history-dependent structure, 2) to model population spike activity using marked point process models, 3) to allow for real-time decision making, and 4) to take into account the need for dimensionality reduction for high-dimensional state and observation processes. We applied these methods to a novel problem of tracking rhythmic dynamics in the spiking of neurons in the subthalamic nucleus of Parkinson's patients with the goal of optimizing placement of deep brain stimulation electrodes. We developed a decoding algorithm that can make decision in real-time (for example, to stimulate the neurons or not) based on various sources of information present in

  12. Estimating Limit Reference Points for Western Pacific Leatherback Turtles (Dermochelys coriacea) in the U.S. West Coast EEZ

    PubMed Central

    Curtis, K. Alexandra; Moore, Jeffrey E.; Benson, Scott R.

    2015-01-01

    Biological limit reference points (LRPs) for fisheries catch represent upper bounds that avoid undesirable population states. LRPs can support consistent management evaluation among species and regions, and can advance ecosystem-based fisheries management. For transboundary species, LRPs prorated by local abundance can inform local management decisions when international coordination is lacking. We estimated LRPs for western Pacific leatherbacks in the U.S. West Coast Exclusive Economic Zone (WCEEZ) using three approaches with different types of information on local abundance. For the current application, the best-informed LRP used a local abundance estimate derived from nest counts, vital rate information, satellite tag data, and fishery observer data, and was calculated with a Potential Biological Removal estimator. Management strategy evaluation was used to set tuning parameters of the LRP estimators to satisfy risk tolerances for falling below population thresholds, and to evaluate sensitivity of population outcomes to bias in key inputs. We estimated local LRPs consistent with three hypothetical management objectives: allowing the population to rebuild to its maximum net productivity level (4.7 turtles per five years), limiting delay of population rebuilding (0.8 turtles per five years), or only preventing further decline (7.7 turtles per five years). These LRPs pertain to all human-caused removals and represent the WCEEZ contribution to meeting population management objectives within a broader international cooperative framework. We present multi-year estimates, because at low LRP values, annual assessments are prone to substantial error that can lead to volatile and costly management without providing further conservation benefit. The novel approach and the performance criteria used here are not a direct expression of the “jeopardy” standard of the U.S. Endangered Species Act, but they provide useful assessment information and could help guide

  13. [Establishment and application of the estimation model for agricultural non-point source pollution in the field].

    PubMed

    Li, Qiang-kun; Li, Huai-en; Hu, Ya-wei; Chen, Wei-wei; Sun, Juan

    2009-12-01

    The quantitative study of pollution loads is the basis of control, evaluation, and management of non-point source pollution. The estimation of agricultural non-point source pollution loads includes two steps: evaluation of water discharge and prediction of pollutant concentration in agricultural drainage. Water discharge was calculated with the DRAINMOD model based on the principle of water balance on farmland. Meanwhile, the combined application of fertilizer and irrigation water is treated as an impulse input to the farmland, the resulting pollutant concentration in the agricultural drainage is regarded as the response to that impulse, and the complex migration and transformation of pollutants in soil are represented implicitly by an inverse Gaussian probability density function. On this basis, an estimation model for agricultural non-point source pollution loads at the field scale was constructed. Taking the typical experimental area of the Qingtongxia Irrigation District in Ningxia as an example, the loads of nitrate nitrogen and total phosphorus in paddy-field drainage were simulated with this model. The results show that the simulated values accord approximately with the measured data, with Nash-Sutcliffe coefficients of 0.963 and 0.945, respectively.

  14. Point set registration: coherent point drift.

    PubMed

    Myronenko, Andriy; Song, Xubo

    2010-12-01

    Point set registration is a key component in many computer vision tasks. The goal of point set registration is to assign correspondences between two sets of points and to recover the transformation that maps one point set to the other. Multiple factors, including an unknown nonrigid spatial transformation, large dimensionality of point set, noise, and outliers, make the point set registration a challenging problem. We introduce a probabilistic method, called the Coherent Point Drift (CPD) algorithm, for both rigid and nonrigid point set registration. We consider the alignment of two point sets as a probability density estimation problem. We fit the Gaussian mixture model (GMM) centroids (representing the first point set) to the data (the second point set) by maximizing the likelihood. We force the GMM centroids to move coherently as a group to preserve the topological structure of the point sets. In the rigid case, we impose the coherence constraint by reparameterization of GMM centroid locations with rigid parameters and derive a closed form solution of the maximization step of the EM algorithm in arbitrary dimensions. In the nonrigid case, we impose the coherence constraint by regularizing the displacement field and using the variational calculus to derive the optimal transformation. We also introduce a fast algorithm that reduces the method computation complexity to linear. We test the CPD algorithm for both rigid and nonrigid transformations in the presence of noise, outliers, and missing points, where CPD shows accurate results and outperforms current state-of-the-art methods.
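
    A compact illustration of the rigid case is given below: soft correspondences from a Gaussian mixture with a uniform outlier term (E-step) alternate with a closed-form rotation from an SVD of the weighted cross-covariance (M-step). This is a bare-bones sketch of the GMM formulation, not the authors' full CPD implementation (which also covers nonrigid registration and a fast linear-complexity variant).

```python
import numpy as np

def rigid_cpd(X, Y, n_iter=50, w=0.1):
    """Minimal rigid CPD-style EM registration (no scaling). X (N,D) is
    the data point set, Y (M,D) the GMM centroids; w is the assumed
    outlier proportion. Returns rotation R and translation t."""
    N, D = X.shape
    M = Y.shape[0]
    R, t = np.eye(D), np.zeros(D)
    sigma2 = np.sum((X[None] - Y[:, None]) ** 2) / (D * M * N)
    for _ in range(n_iter):
        TY = Y @ R.T + t
        d2 = np.sum((X[None, :, :] - TY[:, None, :]) ** 2, axis=2)  # (M,N)
        num = np.exp(-d2 / (2 * sigma2))
        c = (2 * np.pi * sigma2) ** (D / 2) * w / (1 - w) * M / N
        P = num / (num.sum(axis=0, keepdims=True) + c)   # E-step posteriors
        Np = P.sum()
        mu_x = (P.sum(axis=0) @ X) / Np
        mu_y = (P.sum(axis=1) @ Y) / Np
        A = (X - mu_x).T @ P.T @ (Y - mu_y)              # weighted covariance
        U, _, Vt = np.linalg.svd(A)
        C = np.eye(D); C[-1, -1] = np.linalg.det(U @ Vt) # proper rotation
        R = U @ C @ Vt                                   # M-step
        t = mu_x - R @ mu_y
        resid = np.sum(P * np.sum((X[None] - (Y @ R.T + t)[:, None]) ** 2,
                                  axis=2))
        sigma2 = max(resid / (Np * D), 1e-10)
    return R, t

# Toy check: recover a known 2D rotation and translation
rng = np.random.default_rng(0)
Y = rng.normal(size=(60, 2))
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
X = Y @ R_true.T + np.array([1.0, -0.5]) + rng.normal(0, 0.01, (60, 2))
R_est, t_est = rigid_cpd(X, Y)
print(np.round(R_est, 3), np.round(t_est, 3))
```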

  15. Estimating SO2 emissions from a large point source using 10 year OMI SO2 observations: Afsin Elbistan Power Plant

    NASA Astrophysics Data System (ADS)

    Kaynak Tezel, Burcak; Firatli, Ertug

    2016-04-01

    SO2 pollution remains a problem for parts of Turkey, especially regions with large-scale coal power plants. In this study, 10 years of Ozone Monitoring Instrument (OMI) SO2 observations are used to estimate SO2 emissions from large point sources in Turkey. We aim to estimate SO2 emissions from coal power plants where no online monitoring is available and to improve the emissions given in current emission inventories with these top-down estimates. High-resolution yearly averaged maps are created on a domain over large point sources by oversampling SO2 columns for each grid cell for the years 2005-2014. This method reduces the noise and yields a better signal from large point sources; it was previously used for coal power plants in the U.S. and India. The SO2 signal over selected power plants is observed with this method, and the spatiotemporal changes of the SO2 signal are analyzed. Under the assumption that OMI SO2 observations correlate with emissions, long-term OMI SO2 observation averages can be used to estimate emission levels of significant point sources. A two-dimensional Gaussian function is used to describe the relationship between OMI SO2 observations and emissions. Afsin Elbistan Power Plant, the largest-capacity coal power plant in Turkey, is investigated in detail as a case study. The satellite scans within 50 km of the power plant are selected and averaged over a 2 x 2 km2 gridded domain by a smoothing method for 2005-2014. The yearly averages of OMI SO2 are calculated to investigate the magnitude and the impact area of the SO2 emissions of the power plant. A significant increase (more than a factor of 2) in OMI SO2 observations over Afsin Elbistan from 2005 to 2009 was observed, possibly due to the capacity increase from 1715 to 2795 MW in 2006. Comparison between the yearly gross electricity production of the plant and OMI SO2 observations indicated consistency until 2009, but OMI SO2 observations indicated a rapid increase while gross electricity
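
    A minimal sketch of fitting a two-dimensional Gaussian to an oversampled, long-term-averaged SO2 map is given below, with synthetic data on a 2 km grid; converting the fitted burden to an emission rate additionally requires an effective SO2 lifetime, which the map alone does not provide.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(xy, amp, x0, y0, sx, sy, bg):
    """Two-dimensional Gaussian plus constant background, the functional
    form used to relate averaged SO2 columns to a point source."""
    x, y = xy
    return (amp * np.exp(-((x - x0) ** 2 / (2 * sx ** 2)
                           + (y - y0) ** 2 / (2 * sy ** 2))) + bg).ravel()

# Hypothetical 2 km gridded, oversampled yearly-mean SO2 map (DU)
x, y = np.meshgrid(np.arange(-50, 50, 2.0), np.arange(-50, 50, 2.0))
truth = gauss2d((x, y), 1.2, 0.0, 0.0, 8.0, 6.0, 0.1)
obs = truth + np.random.default_rng(0).normal(0, 0.05, truth.size)

p0 = [1.0, 0.0, 0.0, 10.0, 10.0, 0.0]   # initial guess
popt, _ = curve_fit(gauss2d, (x, y), obs, p0=p0)
amp, sx, sy = popt[0], popt[3], popt[4]
# The total SO2 burden scales with the fitted Gaussian volume
print("fitted volume ~", amp * 2 * np.pi * sx * sy, "DU km^2")
```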

  16. Detection/estimation of the modulus of a vector. Application to point-source detection in polarization data

    NASA Astrophysics Data System (ADS)

    Argüeso, F.; Sanz, J. L.; Herranz, D.; López-Caniego, M.; González-Nuevo, J.

    2009-05-01

    Given a set of images, whose pixel values can be considered as the components of a vector, it is interesting to estimate the modulus of such a vector in some localized areas corresponding to a compact signal. For instance, the detection/estimation of a polarized signal in compact sources immersed in a background is relevant in some fields like astrophysics. We develop two different techniques, one based on the Neyman-Pearson lemma, the Neyman-Pearson filter (NPF), and another based on pre-filtering before fusion, the filtered fusion (FF), to deal with the problem of detection of the source and estimation of the polarization given two or three images corresponding to the different components of polarization (two for linear polarization, three including circular polarization). For the case of linear polarization, we have performed numerical simulations on two-dimensional patches to test these filters following two different approaches (a blind and a non-blind detection), considering extragalactic point sources immersed in cosmic microwave background (CMB) and non-stationary noise with the conditions of the 70 GHz Planck channel. The FF outperforms the NPF, especially for low fluxes. We can detect with the FF extragalactic sources in a high noise zone with fluxes Jy for (blind/non-blind) detection and in a low noise zone with fluxes Jy for (blind/non-blind) detection with low errors in the estimated flux and position.

  17. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.

  18. Using ToxCast™ Data to Reconstruct Dynamic Cell State Trajectories and Estimate Toxicological Points of Departure

    PubMed Central

    Shah, Imran; Setzer, R. Woodrow; Jack, John; Houck, Keith A.; Judson, Richard S.; Knudsen, Thomas B.; Liu, Jie; Martin, Matthew T.; Reif, David M.; Richard, Ann M.; Thomas, Russell S.; Crofton, Kevin M.; Dix, David J.; Kavlock, Robert J.

    2015-01-01

    Environ Health Perspect 124:910–919; http://dx.doi.org/10.1289/ehp.1409029 PMID:26473631

  19. BeiDou satellite's differential code biases estimation based on uncombined precise point positioning with triple-frequency observable

    NASA Astrophysics Data System (ADS)

    Fan, Lei; Li, Min; Wang, Cheng; Shi, Chuang

    2017-02-01

    The differential code bias (DCB) of BeiDou satellites is an important topic for making better use of the BeiDou system (BDS) in many practical applications. This paper proposes a new method to estimate BDS satellite DCBs based on triple-frequency uncombined precise point positioning (UPPP). A general model of both triple-frequency UPPP and the Geometry-Free linear combination of Phase-Smoothed Range (GFPSR) is presented, from which the ionospheric observable and the combination of triple-frequency satellite and receiver DCBs (TF-SRDCBs) are derived. The satellite and receiver DCBs (SRDCBs) are then estimated together with the ionospheric delay, which is modeled at each individual station in a weighted least-squares estimator, and the satellite DCBs are determined by introducing the zero-mean condition over all available BDS satellites. To validate the new method, 90 days of real GNSS tracking data (January to March 2014) collected from 9 Multi-GNSS Experiment (MGEX) stations (equipped with Trimble NETR9 receivers) are used, and the BDS satellite DCB products from the German Aerospace Center (DLR) are taken as reference values for comparison. Results show that the proposed method is able to precisely estimate BDS satellite DCBs: (1) the mean day-to-day scatter for all available BDS satellites is about 0.24 ns, which is reduced on average by 23% compared with the results derived by GFPSR alone; moreover, the mean day-to-day scatter of the IGSO satellites is lower than that of the GEO and MEO satellites; (2) the mean RMS of the difference with respect to the DLR DCB products is about 0.39 ns, an average improvement of 11% compared with the results derived by GFPSR alone. In addition, the RMS values of the IGSO and MEO satellites are at the same level, which is better than that of the GEO satellites.

  20. [Estimation of urban non-point source pollution loading and its factor analysis in the Pearl River Delta].

    PubMed

    Liao, Yi-Shan; Zhuo, Mu-Ning; Li, Ding-Qiang; Guo, Tai-Long

    2013-08-01

    In the Pearl River Delta region, urban rivers have been seriously polluted, and the input of non-point source pollution materials, such as chemical oxygen demand (COD), into rivers cannot be neglected. During 2009-2010, the water quality at eight different catchments in the Fenjiang River of Foshan city was monitored, and the COD loads of eight rivulet sewages were calculated for different rainfall conditions. Rainfall and land-use type both played important roles in the COD loading, with rainfall having the greater influence. Consequently, a COD loading formula was constructed, defined as a function of runoff and land-use type derived from the SCS model and a land-use map. COD loading could be evaluated and predicted with the constructed formula. The mean simulation accuracy for single rainfall events was 75.51%; long-term simulation accuracy was better than that for single rainfall events. In 2009, the estimated COD loading and its loading intensity were 8053 t and 339 kg·(hm²·a)⁻¹, respectively, and industrial land was regarded as the main source area of COD pollution. Severe non-point source pollution such as COD in the Fenjiang River deserves more attention in the future.

  1. Position-dependent velocity of an effective temperature point for the estimation of the thermal diffusivity of solids

    NASA Astrophysics Data System (ADS)

    Balachandar, Settu; Shivaprakash, N. C.; Kameswara Rao, L.

    2016-01-01

    A new approach is proposed to estimate the thermal diffusivity of optically transparent solids at ambient temperature based on the velocity of an effective temperature point (ETP); the concept is corroborated using a two-beam interferometer. One-dimensional unsteady heat flow under step-temperature excitation is interpreted as a 'micro-scale rectilinear translatory motion' of an ETP. The velocity-dependent function is extracted by revisiting the Fourier heat diffusion equation. The relationship between the velocity of the ETP and thermal diffusivity is modeled using a standard solution. Under optimized thermal excitation, the product of the velocity of the ETP and the distance travelled yields a new constitutive equation for the thermal diffusivity of the solid. The experimental approach involves the establishment of 1D unsteady heat flow inside the sample through step-temperature excitation. Among the moving isothermal surfaces, the ETP is identified using a two-beam interferometer. The arrival time of the ETP at a fixed distance from the heat source is measured, and its velocity is calculated. The velocity of the ETP and a given distance are sufficient to estimate the thermal diffusivity of a solid. The proposed method is experimentally verified for BK7 glass samples, and the measured results are found to match closely with the reported value.
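
    The constitutive relation can be motivated from the classical semi-infinite-solid solution for step heating (an assumption for illustration; the paper's own derivation may differ in detail):

```latex
% Step-temperature solution for a semi-infinite solid:
\[
\frac{T(x,t)-T_0}{T_s-T_0}
  =\operatorname{erfc}\!\left(\frac{x}{2\sqrt{\alpha t}}\right)
\]
% A fixed "effective temperature point" satisfies
% x/(2*sqrt(alpha*t)) = c for a constant c, so
\[
x(t)=2c\sqrt{\alpha t},\qquad
v(t)=\frac{dx}{dt}=c\sqrt{\frac{\alpha}{t}}=\frac{2c^{2}\alpha}{x}
\quad\Longrightarrow\quad
\alpha=\frac{v\,x}{2c^{2}} .
\]
```

    That is, the product of ETP velocity and distance is proportional to the thermal diffusivity, consistent with the abstract's claim.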

  2. An Evaluation of Vegetation Filtering Algorithms for Improved Snow Depth Estimation from Point Cloud Observations in Mountain Environments

    NASA Astrophysics Data System (ADS)

    Vanderjagt, B. J.; Durand, M. T.; Lucieer, A.; Wallace, L.

    2014-12-01

    High-resolution snow depth measurements are possible through bare-earth (BE) differencing of point cloud datasets obtained using LiDAR and photogrammetry during snow-free and snow-covered conditions. The accuracy and resolution of these snow depth measurements are desirable in mountain environments, in which ground measurements are dangerous and difficult to perform and other remote sensing techniques are often characterized by large errors and uncertainties due to variable topography, vegetation, and snow properties. BE ground filtering algorithms make different assumptions about ground characteristics to differentiate between ground and non-ground features. Because of this, ground surfaces may have unique characteristics that confound ground filters depending on the location and terrain conditions. These include low-lying shrubs (<1 m), areas with high topographic relief, and areas with high surface roughness. We evaluate several different algorithms, including lowest point, kriging, and more sophisticated splining techniques such as the Multiscale Curvature Classification (MCC), to resolve snow depths. Understanding how these factors affect BE surface models, and thus snow depth measurements, is a valuable contribution towards improving the processing protocols associated with these relatively new snow observation techniques. We test the different BE filtering algorithms using LiDAR and photogrammetric measurements taken from an Unmanned Aerial Vehicle (UAV) in Southwest Tasmania, Australia, during the winter and spring of 2013. The study area is characterized by sloping, uneven terrain and different types of vegetation, including eucalyptus and conifer trees as well as dense shrubs varying in height from 0.3-1.5 meters. Initial snow depth measurements using the unfiltered point cloud measurements are characterized by large errors (~20-90 cm) due to the dense vegetation. Using filtering techniques instead of raw differencing improves the estimation of snow depth in

  3. A predictive model for estimating regional skeletal muscle size by two-point dixon magnetic resonance imaging in healthy Koreans.

    PubMed

    Kim, Cheol-Min; Lee, Chang-Hyung; Choi, Young A; Kim, Byoung-Chul; Jung, Duk-Young; Shin, Myung-Jun

    This study was undertaken to develop and cross-validate reference and individual predictive models for estimating functional thigh muscle cross-sectional area (TCSA) by 2-point Dixon magnetic resonance imaging (MRI). TCSAs of the dominant side at the mid-thigh level were measured by 2-point Dixon MRI (MRI-TCSA). Functional MRI-TCSA was compared with the predictive models in a sample of 92 younger (20-40 years; 28.55 ± 4.87; n = 50) and older (>65 years; 71.22 ± 4.82; n = 42) Koreans. Lean body mass was measured by dual-energy X-ray absorptiometry (DXA-LBM), and thigh isokinetic muscle strength, extension peak torque at 60°/sec, was measured using a Biodex® dynamometer (Biodex-EPT). Multiple regression analysis generated the reference model (R² = 0.75, SEE = 1472.63 mm² (8%)) as follows: functional TCSA (mm²) = −1230.49 + 62.81 × height + 3061.78 × gender − 2692.57 × age + 58.91 × weight. The individual model (R² = 0.80, SEE = 1158.34 mm² (7%)) was as follows: functional TCSA (mm²) = 1631.62 + 1.76 × DXA-LBM + 9.51 × Biodex-EPT, where height is in centimeters; weight is in kilograms; for gender, female = 0 and male = 1; and for age, age under 40 = 1 and age over 65 = 2. PRESS statistics of R² and SEE were 0.78 and 1382.98 mm² for the reference model, and 0.88 and 979.02 mm² for the individual model. The 2-point Dixon MRI appears to be valid for measuring functional muscle size. Our results suggest that the reference and individual models provide acceptable estimates of functional thigh muscle CSA in healthy Korean adults. Therefore, the models developed in the study could be useful as a research tool to establish indexes for functional muscle composition in healthy Koreans.

  4. Publication Bias Currently Makes an Accurate Estimate of the Benefits of Enrichment Programs Difficult: A Postmortem of Two Meta-Analyses Using Statistical Power Analysis

    ERIC Educational Resources Information Center

    Warne, Russell T.

    2016-01-01

    Recently Kim (2016) published a meta-analysis on the effects of enrichment programs for gifted students. She found that these programs produced substantial effects for academic achievement (g = 0.96) and socioemotional outcomes (g = 0.55). However, given current theory and empirical research these estimates of the benefits of enrichment programs…

  5. Identification of an accurate soil suspension/dispersion modeling method for use in estimating health-based soil cleanup levels of hexavalent chromium in chromite ore processing residues.

    PubMed

    Scott, P K; Finley, B L; Sung, H M; Schulze, R H; Turner, D B

    1997-07-01

    The primary health concern associated with chromite ore processing residues (COPR) at sites in Hudson County, NJ, is the inhalation of Cr(VI) suspended from surface soils. Since health-based soil standards for Cr(VI) will be derived using the inhalation pathway, soil suspension modeling will be necessary to estimate site-specific, health-based soil cleanup levels (HBSCLs). The purpose of this study was to identify the most appropriate particulate emission and air dispersion models for estimating soil suspension at these sites based on their theoretical underpinnings, scientific acceptability, and past performance. The identified modeling approach, the AP-42 particulate emission model and the fugitive dust model (FDM), was used to calculate concentrations of airborne Cr(VI) and TSP at two COPR sites. These estimated concentrations were then compared to concentrations measured at each site. The TSP concentrations calculated using the AP-42/FDM soil suspension modeling approach were all within a factor of 3 of the measured concentrations. The majority of the estimated air concentrations were greater than the measured, indicating that the AP-42/FDM approach tends to overestimate on-site concentrations. The site-specific Cr(VI) HBSCLs for these two sites calculated using this conservative soil suspension modeling approach ranged from 190 to 420 mg/kg.

  6. Novel applications using maximum-likelihood estimation in optical metrology and nuclear medical imaging: Point-diffraction interferometry and BazookaPET

    NASA Astrophysics Data System (ADS)

    Park, Ryeojin

    This dissertation aims to investigate two different applications in optics using maximum-likelihood (ML) estimation. The first application of ML estimation is in optical metrology. For this application, an innovative iterative search method called the synthetic phase-shifting (SPS) algorithm is proposed. This search algorithm is used for estimation of a wavefront that is described by a finite set of Zernike Fringe (ZF) polynomials. In this work, we estimate the ZF coefficients, or parameter values, of the wavefront using a single interferogram obtained from a point-diffraction interferometer (PDI). In order to find the estimates, we first calculate the squared difference between the measured and simulated interferograms. Under certain assumptions, this squared-difference image can be treated as an interferogram showing the phase difference between the true wavefront deviation and the simulated wavefront deviation. The wavefront deviation is defined as the difference between the reference and the test wavefronts. We calculate the phase difference using a traditional phase-shifting technique without physical phase-shifters. We present a detailed forward model for the PDI interferogram, including the effect of the finite size of a detector pixel. The algorithm was validated with computational studies and its performance and constraints are discussed. A prototype PDI was built and the algorithm was also experimentally validated. A large wavefront deviation was successfully estimated without using null optics or physical phase-shifters. The experimental result shows that the proposed algorithm has great potential to provide an accurate tool for non-null testing. The second application of ML estimation is in nuclear medical imaging. A high-resolution positron tomography scanner called BazookaPET is proposed. We have designed and developed a novel proof-of-concept detector element for a PET system called BazookaPET. In order to complete the PET configuration, at least

  7. Eclipsing Binaries as Astrophysical Laboratories: CM Draconis - Accurate Absolute Physical Properties of Low Mass Stars and an Independent Estimate of the Primordial Helium Abundance

    NASA Astrophysics Data System (ADS)

    McCook, G. P.; Guinan, E. F.; Saumon, D.; Kang, Y. W.

    1997-05-01

    CM Draconis (Gl 630.1; Vmax = +12.93) is an important eclipsing binary consisting of two dM4.5e stars with an orbital period of 1.2684 days. This binary is a high-velocity system (s = 164 km/s) and the brighter member of a common proper motion pair with a cool, faint white dwarf companion (LP 101-16). CM Dra and its white dwarf companion were once considered by Zwicky to belong to a class of "pygmy stars", but they turned out to be ordinary old, cool white dwarfs or faint red dwarfs. Lacy (ApJ 218, 444L) determined the first orbital and physical properties of CM Dra from the analysis of his light and radial velocity curves. In addition to providing directly measured masses, radii, and luminosities for low mass stars, CM Dra was also recognized by Lacy and later by Paczynski and Sienkiewicz (ApJ 286, 332) as an important laboratory for cosmology, as a possible old Pop II object for which it may be possible to determine the primordial helium abundance. Recently, Metcalfe et al. (ApJ 456, 356) obtained accurate RV measures for CM Dra and recomputed refined elements along with its helium abundance. Starting in 1995, we have been carrying out intensive RI photoelectric photometry of CM Dra to obtain well defined, accurate light curves so that its fundamental properties can be improved, and at the same time, to search for evidence of planets around the binary from planetary transit eclipses. During 1996 and 1997 well defined light curves were secured and these were combined with the RV measures of Metcalfe et al. (1996) to determine the orbital and physical parameters of the system, including a refined orbital period. A recent version of the Wilson-Devinney program was used to analyze the data. New radii, masses, mean densities, Teff, and luminosities were found, as well as a re-determination of the helium abundance (Y). The results of the recent analyses of the light and RV curves will be presented and modelling results discussed. This research is supported by NSF grants AST-9315365

  8. Relationship of estimated SHIV acquisition time points during the menstrual cycle and thinning of vaginal epithelial layers in pigtail macaques

    PubMed Central

    Kersh, Ellen N.; Ritter, Jana; Butler, Katherine; Ostergaard, Sharon Dietz; Hanson, Debra; Ellis, Shanon; Zaki, Sherif; McNicholl, Janet M.

    2015-01-01

    Background HIV acquisition in the female genital tract remains incompletely understood. Quantitative data on biological HIV risk factors, the influence of reproductive hormones, and infection risk are lacking. We evaluated vaginal epithelial thickness during the menstrual cycle in pigtail macaques (Macaca nemestrina). This model previously revealed increased susceptibility to vaginal infection during and following progesterone-dominated periods in the menstrual cycle. Methods Nucleated and non-nucleated (superficial) epithelial layers were quantitated throughout the menstrual cycle of 16 macaques. We examined the relationship with previously estimated vaginal SHIVSF162P3 acquisition time points in the cycle of 43 different animals repeatedly exposed to low virus doses. Results In the luteal phase (days 17 to cycle end), the mean vaginal epithelium thinned to 66% of mean follicular thickness (days 1-16; p=0.007, Mann-Whitney test). Analyzing four-day segments, the epithelium was thickest on days 9-12, and thinned to 31% thereof on days 29-32, with reductions of nucleated and non-nucleated layers to 36 and 15% of their previous thickness, respectively. The proportion of animals with estimated SHIV acquisition in each cycle segment correlated with non-nucleated layer thinning (Pearson’s r = 0.7, p<0.05, linear regression analysis), but not nucleated layer thinning (Pearson’s r = 0.6, p=0.15). Conclusions These data provide a detailed picture of dynamic cycle-related changes in the vaginal epithelium of pigtail macaques. Substantial thinning occurred in the superficial, non-nucleated layer, which maintains the vaginal microbiome. The findings support vaginal tissue architecture as susceptibility factor for infection and contribute to our understanding of innate resistance to SHIV infection. PMID:26562699

  9. Polydimethylsiloxane-air partition ratios for semi-volatile organic compounds by GC-based measurement and COSMO-RS estimation: Rapid measurements and accurate modelling.

    PubMed

    Okeme, Joseph O; Parnis, J Mark; Poole, Justen; Diamond, Miriam L; Jantunen, Liisa M

    2016-08-01

    Polydimethylsiloxane (PDMS) shows promise for use as a passive air sampler (PAS) for semi-volatile organic compounds (SVOCs). To use PDMS as a PAS, knowledge of its chemical-specific partitioning behaviour and time to equilibrium is needed. Here we report on the effectiveness of two approaches for estimating the partitioning properties of PDMS, values of PDMS-to-air partition ratios or coefficients (K_PDMS-Air), and time to equilibrium of a range of SVOCs. Measured values of K_PDMS-Air at 25 °C obtained using the gas chromatography retention method (GC-RT) were compared with estimates from a poly-parameter linear free energy relationship (pp-LFER) and a COSMO-RS oligomer-based model. Target SVOCs included novel flame retardants (NFRs), polybrominated diphenyl ethers (PBDEs), polycyclic aromatic hydrocarbons (PAHs), organophosphate flame retardants (OPFRs), polychlorinated biphenyls (PCBs) and organochlorine pesticides (OCPs). Significant positive relationships were found between the measured log K_PDMS-Air values and estimates made using the pp-LFER model and the COSMOtherm program. The discrepancy and bias between measured and predicted values were much higher for COSMO-RS than for the pp-LFER model, indicating the anticipated better performance of the pp-LFER model. Calculations made using the measured K_PDMS-Air values show that a PDMS PAS of 0.1 cm thickness will reach 25% of its equilibrium capacity in ~1 day for alpha-hexachlorocyclohexane (α-HCH) to ~500 years for tris(4-tert-butylphenyl) phosphate (TTBPP), which brackets the volatility range of all compounds tested. The results presented show the utility of the GC-RT method for rapid and precise measurements of K_PDMS-Air.

  10. Temperature mapping in bread dough using SE and GE two-point MRI methods: experimental and theoretical estimation of uncertainty.

    PubMed

    Lucas, Tiphaine; Musse, Maja; Bornert, Mélanie; Davenel, Armel; Quellec, Stéphane

    2012-04-01

    Two-dimensional (2D) SE, 2D GE, and three-dimensional (3D) GE two-point T1-weighted MRI methods were evaluated in this study in order to maximize the accuracy of temperature mapping of bread dough during thermal processing. Uncertainties were propagated throughout each measurement protocol, and comparisons demonstrated that all the methods with comparable acquisition times minimized the temperature uncertainty to a similar extent. The experimental uncertainties obtained with low-field MRI were also compared to theoretical estimates. Some discrepancies were found between the experimental and theoretical values of the temperature uncertainty; however, experimental and theoretical trends with varying parameters agreed to a large extent for both SE and GE methods. The 2D SE method was chosen for further applications on prefermented dough because of its lower sensitivity to susceptibility differences in porous media. It was applied for temperature mapping in prefermented dough during chilling prior to freezing and compared locally to optical fiber measurements.

  11. Arm span and ulnar length are reliable and accurate estimates of recumbent length and height in a multiethnic population of infants and children under 6 years of age.

    PubMed

    Forman, Michele R; Zhu, Yeyi; Hernandez, Ladia M; Himes, John H; Dong, Yongquan; Danish, Robert K; James, Kyla E; Caulfield, Laura E; Kerver, Jean M; Arab, Lenore; Voss, Paula; Hale, Daniel E; Kanafani, Nadim; Hirschfeld, Steven

    2014-09-01

    Surrogate measures are needed when recumbent length or height is unobtainable or unreliable. Arm span has been used as a surrogate but is not feasible in children with shoulder or arm contractures. Ulnar length is not usually impaired by joint deformities, yet its utility as a surrogate has not been adequately studied. In this cross-sectional study, we aimed to examine the accuracy and reliability of ulnar length measured by different tools as a surrogate measure of recumbent length and height. Anthropometrics [recumbent length, height, arm span, and ulnar length by caliper (ULC), ruler (ULR), and grid (ULG)] were measured in 1479 healthy infants and children aged <6 y across 8 study centers in the United States. Multivariate mixed-effects linear regression models for recumbent length and height were developed by using ulnar length and arm span as surrogate measures. The agreement between the measured length or height and the values predicted by ULC, ULR, ULG, and arm span was examined by Bland-Altman plots. All 3 measures of ulnar length and arm span were highly correlated with length and height. The degree of precision of prediction equations for length by ULC, ULR, and ULG (R² = 0.95, 0.95, and 0.92, respectively) was comparable with that by arm span (R² = 0.97) using age, sex, and ethnicity as covariates; however, height prediction by ULC (R² = 0.87), ULR (R² = 0.85), and ULG (R² = 0.88) was less comparable with arm span (R² = 0.94). Our study demonstrates that arm span and ULC, ULR, or ULG can serve as accurate and reliable surrogate measures of recumbent length and height in healthy children; however, ULC, ULR, and ULG tend to slightly overestimate length and height in young infants and children. Further testing of ulnar length as a surrogate is warranted in physically impaired or nonambulatory children.

  12. Challenging the distributed temperature sensing technique for estimating groundwater discharge to streams through controlled artificial point source experiment

    NASA Astrophysics Data System (ADS)

    Lauer, F.; Frede, H.-G.; Breuer, L.

    2012-04-01

    Spatially confined groundwater discharge can contribute significantly to stream discharge. Distributed fibre optic temperature sensing (DTS) of stream water has been successfully used to localize and quantify groundwater discharge from this type of "point source" (PS) in small first-order streams. During periods when stream and groundwater temperatures differ, a PS appears as an abrupt step in the longitudinal stream water temperature distribution. Based on stream temperature observations up- and downstream of a point source and estimated or measured groundwater temperature, the proportion of groundwater inflow to stream discharge can be quantified using simple mixing models. However, so far this method has not been quantitatively verified, nor has a detailed uncertainty analysis of the method been conducted. The relative accuracy of this method is expected to decrease nonlinearly with decreasing proportions of lateral inflow. Furthermore, it depends on the temperature difference (ΔT) between groundwater and surface water and on the accuracy of the temperature measurement itself. The latter can be affected by different sources of error. For example, it has been shown that a direct impact of solar radiation on fibre optic cables can lead to errors in temperature measurements in small streams due to low water depth. Considerable uncertainty might also be related to the determination of groundwater temperature through direct measurements or derived from the DTS signal. In order to directly validate the method and assess its uncertainty, we performed a set of artificial point source experiments with controlled lateral inflow rates to a natural stream. The experiments were carried out at the Vollnkirchener Bach, a small headwater stream in Hessen, Germany, in November and December 2011 during a low flow period. A DTS system was installed along a 1.2 km subreach of the stream. Stream discharge was measured using a gauging flume installed directly upstream of the artificial PS. Lateral
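
    The simple mixing model referred to above follows from an energy balance at the point source: assuming complete mixing and a known groundwater temperature, the groundwater fraction of downstream discharge is (T_down - T_up)/(T_gw - T_up). A minimal sketch (temperatures are illustrative), which also shows why accuracy degrades as ΔT shrinks:

        def groundwater_fraction(T_up, T_down, T_gw):
            """Fraction of downstream discharge contributed by a point source,
            from a two-component mixing model (complete mixing, no other
            gains or losses between the two observation points)."""
            if abs(T_gw - T_up) < 1e-9:
                raise ValueError("method degenerates when stream and "
                                 "groundwater temperatures are equal (dT ~ 0)")
            return (T_down - T_up) / (T_gw - T_up)

        # Example: 12.0 degC upstream, 11.4 degC downstream, 8.0 degC groundwater.
        f = groundwater_fraction(12.0, 11.4, 8.0)
        print("groundwater fraction of downstream flow: %.1f%%" % (100 * f))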

  13. Three calibration factors, applied to a rapid sweeping method, can accurately estimate Aedes aegypti (Diptera: Culicidae) pupal numbers in large water-storage containers at all temperatures at which dengue virus transmission occurs.

    PubMed

    Romero-Vivas, C M E; Llinás, H; Falconar, A K I

    2007-11-01

    The ability of a simple sweeping method, coupled to calibration factors, to accurately estimate the total numbers of Aedes aegypti (L.) (Diptera: Culicidae) pupae in water-storage containers (20-6412-liter capacities at different water levels) throughout their main dengue virus transmission temperature range was evaluated. Using this method, one set of three calibration factors was derived that could accurately estimate the total Ae. aegypti pupae in their principal breeding sites, large water-storage containers, found throughout the world. No significant differences were obtained using the method at different altitudes (14-1630 m above sea level) that included the range of temperatures (20-30 degrees C) at which dengue virus transmission occurs in the world. In addition, no significant differences were found in the results obtained between and within the 10 different teams that applied this method; therefore, this method was extremely robust. One person could estimate the Ae. aegypti pupae in each of the large water-storage containers in only 5 min by using this method, compared with two people requiring between 45 and 90 min to collect and count the total pupae population in each of them. Because the method was both rapid to perform and did not disturb the sediment layers in these domestic water-storage containers, it was more acceptable to the residents and, therefore, ideally suited for routine surveillance purposes and for assessing the efficacy of Ae. aegypti control programs in dengue virus-endemic areas throughout the world.

  14. Dual time-point imaging for post-dose binding potential estimation applied to a [(11)C]raclopride PET dose occupancy study.

    PubMed

    Alves, Isadora L; Willemsen, Antoon Tm; Dierckx, Rudi A; da Silva, Ana Maria M; Koole, Michel

    2017-03-01

    Receptor occupancy studies performed with PET often require time-consuming dynamic imaging for baseline and post-dose scans. Shorter protocol approximations based on standard uptake value ratios have been proposed. However, such methods depend on the time-point chosen for the quantification and often lead to overestimation and bias. The aim of this study was to develop a shorter protocol for the quantification of post-dose scans using a dual time-point approximation, which employs kinetic parameters from the baseline scan. Dual time-point was evaluated for a [(11)C]raclopride PET dose occupancy study with the D2 antagonist JNJ-37822681, obtaining estimates for binding potential and receptor occupancy. Results were compared to estimates based on the standard simplified reference tissue model and on standard uptake value ratios. Linear regression and Bland-Altman analysis demonstrated excellent correlation and agreement between dual time-point and the standard simplified reference tissue model approach. Moreover, the stability of dual time-point-based estimates is shown to be independent of the time-point chosen for quantification. Therefore, a dual time-point imaging protocol can be applied to post-dose [(11)C]raclopride PET scans, resulting in a significant reduction in total acquisition time while maintaining accuracy in the quantification of both the binding potential and the receptor occupancy.

  15. Accurate Evaluation of Quantum Integrals

    NASA Technical Reports Server (NTRS)

    Galant, D. C.; Goorvitch, D.; Witteborn, Fred C. (Technical Monitor)

    1995-01-01

    Combining an appropriate finite difference method with Richardson's extrapolation results in a simple, highly accurate numerical method for solving the Schrödinger equation. Important results are that error estimates are provided and that one can extrapolate expectation values, rather than the wavefunctions, to obtain highly accurate values. We discuss the eigenvalues and the error growth in repeated Richardson's extrapolation, and show that expectation values calculated on a crude mesh can be extrapolated to obtain expectation values of high accuracy.
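
    Repeated Richardson extrapolation itself is easy to demonstrate outside the Schrödinger setting; the sketch below applies it to a second-order central difference, where each extrapolation level cancels the next even power of the step size (a generic illustration, not the paper's solver):

        import numpy as np

        def central_diff(f, x, h):
            """Second-order central difference approximation to f'(x)."""
            return (f(x + h) - f(x - h)) / (2 * h)

        def richardson(f, x, h, levels=4):
            """Repeated Richardson extrapolation of an O(h^2) approximation."""
            T = np.zeros((levels, levels))
            for i in range(levels):
                T[i, 0] = central_diff(f, x, h / 2**i)
            for j in range(1, levels):
                for i in range(j, levels):
                    T[i, j] = T[i, j-1] + (T[i, j-1] - T[i-1, j-1]) / (4**j - 1)
            return T[levels-1, levels-1]

        # f'(1) for f = exp; the exact value is e. The extrapolated error is
        # orders of magnitude below the raw h = 0.4 central difference.
        print(abs(richardson(np.exp, 1.0, 0.4) - np.e))
        print(abs(central_diff(np.exp, 1.0, 0.4) - np.e))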

  16. Normal Tissue Complication Probability Estimation by the Lyman-Kutcher-Burman Method Does Not Accurately Predict Spinal Cord Tolerance to Stereotactic Radiosurgery

    SciTech Connect

    Daly, Megan E.; Luxton, Gary; Choi, Clara Y.H.; Gibbs, Iris C.; Chang, Steven D.; Adler, John R.; Soltys, Scott G.

    2012-04-01

    traditionally used to estimate spinal cord NTCP may not apply to the dosimetry of SRS. Further research with additional NTCP models is needed.

  17. An SVM-Based Classifier for Estimating the State of Various Rotating Components in Agro-Industrial Machinery with a Vibration Signal Acquired from a Single Point on the Machine Chassis

    PubMed Central

    Ruiz-Gonzalez, Ruben; Gomez-Gil, Jaime; Gomez-Gil, Francisco Javier; Martínez-Martínez, Víctor

    2014-01-01

    The goal of this article is to assess the feasibility of estimating the state of various rotating components in agro-industrial machinery by employing just one vibration signal acquired from a single point on the machine chassis. To do so, a Support Vector Machine (SVM)-based system is employed. Experimental tests evaluated this system by acquiring vibration data from a single point of an agricultural harvester, while varying several of its working conditions. The whole process included two major steps. Initially, the vibration data were preprocessed through twelve feature extraction algorithms, after which the Exhaustive Search method selected the most suitable features. Secondly, the SVM-based system accuracy was evaluated by using Leave-One-Out cross-validation, with the selected features as the input data. The results of this study provide evidence that (i) accurate estimation of the status of various rotating components in agro-industrial machinery is possible by processing the vibration signal acquired from a single point on the machine structure; (ii) the vibration signal can be acquired with a uniaxial accelerometer, the orientation of which does not significantly affect the classification accuracy; and, (iii) when using an SVM classifier, an 85% mean cross-validation accuracy can be reached, which only requires a maximum of seven features as its input, and no significant improvements are noted between the use of either nonlinear or linear kernels. PMID:25372618
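
    The classification stage can be sketched with scikit-learn: a linear-kernel SVM evaluated by leave-one-out cross-validation, with a synthetic seven-feature dataset standing in for the selected vibration features (all data and parameters below are placeholders):

        from sklearn.datasets import make_classification
        from sklearn.model_selection import LeaveOneOut, cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # Synthetic stand-in: 7 selected features, 4 machine states (classes).
        X, y = make_classification(n_samples=120, n_features=7, n_informative=5,
                                   n_classes=4, n_clusters_per_class=1,
                                   random_state=0)

        # Linear kernel; the study reports no significant gain from nonlinear kernels.
        clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
        scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
        print("leave-one-out accuracy: %.3f" % scores.mean())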

  18. A test of the 'one-point method' for estimating maximum carboxylation capacity from field-measured, light-saturated photosynthesis.

    PubMed

    De Kauwe, Martin G; Lin, Yan-Shih; Wright, Ian J; Medlyn, Belinda E; Crous, Kristine Y; Ellsworth, David S; Maire, Vincent; Prentice, I Colin; Atkin, Owen K; Rogers, Alistair; Niinemets, Ülo; Serbin, Shawn P; Meir, Patrick; Uddling, Johan; Togashi, Henrique F; Tarvainen, Lasse; Weerasinghe, Lasantha K; Evans, Bradley J; Ishida, F Yoko; Domingues, Tomas F

    2016-05-01

    Simulations of photosynthesis by terrestrial biosphere models typically need a specification of the maximum carboxylation rate (Vcmax ). Estimating this parameter using A-Ci curves (net photosynthesis, A, vs intercellular CO2 concentration, Ci ) is laborious, which limits availability of Vcmax data. However, many multispecies field datasets include net photosynthetic rate at saturating irradiance and at ambient atmospheric CO2 concentration (Asat ) measurements, from which Vcmax can be extracted using a 'one-point method'. We used a global dataset of A-Ci curves (564 species from 46 field sites, covering a range of plant functional types) to test the validity of an alternative approach to estimate Vcmax from Asat via this 'one-point method'. If leaf respiration during the day (Rday ) is known exactly, Vcmax can be estimated with an r(2) value of 0.98 and a root-mean-squared error (RMSE) of 8.19 μmol m(-2) s(-1) . However, Rday typically must be estimated. Estimating Rday as 1.5% of Vcmax, we found that Vcmax could be estimated with an r(2) of 0.95 and an RMSE of 17.1 μmol m(-2) s(-1) . The one-point method provides a robust means to expand current databases of field-measured Vcmax , giving new potential to improve vegetation models and quantify the environmental drivers of Vcmax variation.
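
    Concretely, the one-point method inverts the Rubisco-limited Farquhar expression at the measured Ci, with Rday taken as 1.5% of Vcmax, giving Vcmax = Asat / ((Ci - Γ*)/(Ci + Km) - 0.015). A minimal sketch, with Γ* and Km set to commonly used 25 °C constants as an assumption:

        def vcmax_one_point(asat, ci, gamma_star=42.75, km=710.0):
            """One-point estimate of Vcmax (umol m-2 s-1).

            Assumes Rubisco-limited photosynthesis at Asat and
            Rday = 1.5% of Vcmax. gamma_star and km (umol mol-1) are
            illustrative 25 degC constants, not values from the paper."""
            return asat / ((ci - gamma_star) / (ci + km) - 0.015)

        # Example: Asat = 20 umol m-2 s-1 measured at Ci = 275 umol mol-1.
        print("Vcmax ~ %.1f umol m-2 s-1" % vcmax_one_point(20.0, 275.0))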

  19. A HETEROSCEDASTIC METHOD FOR COMPARING REGRESSION LINES AT SPECIFIED DESIGN POINTS WHEN USING A ROBUST REGRESSION ESTIMATOR.

    PubMed

    Wilcox, Rand R

    2013-04-01

    It is well known that the ordinary least squares (OLS) regression estimator is not robust. Many robust regression estimators have been proposed and inferential methods based on these estimators have been derived. However, for two independent groups, let θj (X) be some conditional measure of location for the jth group, given X, based on some robust regression estimator. An issue that has not been addressed is computing a 1 - α confidence interval for θ1(X) - θ2(X) in a manner that allows both within group and between group hetereoscedasticity. The paper reports the finite sample properties of a simple method for accomplishing this goal. Simulations indicate that, in terms of controlling the probability of a Type I error, the method performs very well for a wide range of situations, even with a relatively small sample size. In principle, any robust regression estimator can be used. The simulations are focused primarily on the Theil-Sen estimator, but some results using Yohai's MM-estimator, as well as the Koenker and Bassett quantile regression estimator, are noted. Data from the Well Elderly II study, dealing with measures of meaningful activity using the cortisol awakening response as a covariate, are used to illustrate that the choice between an extant method based on a nonparametric regression estimator, and the method suggested here, can make a practical difference.

  20. Software Estimation: Developing an Accurate, Reliable Method

    DTIC Science & Technology

    2011-08-01

    level 5 organizations. Defects identified here for CMM level 1 and level 5 are captured from Capers Jones who has identified software delivered... Capers, “Software Assessments, Benchmarks, and Best Practices”, Addison-Wesley Professional, April 2000. 1. At the AV-8B Joint System Support

  1. Analytical estimates of the locations of phase transition points in the ground state for the bimodal Ising spin glass model in two dimensions

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Chiaki

    2014-05-01

    We analytically estimate the locations of phase transition points in the ground state for the ±J random bond Ising model with asymmetric bond distributions on the square lattice. We propose and study the percolation transitions for two types of bond shared by two non-frustrated plaquettes. The present method indirectly treats the sizes of clusters of correlated spins for the ferromagnetic and spin glass orders. We find two transition points. The first transition point is the phase transition point for the ferromagnetic order, and its location is obtained as p_c^(1) ≈ 0.89539954, the solution of [p^2 + 3(1-p)^2]^2 p^3 - 1/2 = 0. The second transition point is the phase transition point for the spin glass order, and its location is obtained as p_c^(2) = (1/4)[2 + √(2(√5 - 1))] ≈ 0.89307569. Here, p is the ferromagnetic bond concentration, and 1 - p is the antiferromagnetic bond concentration. The obtained locations are reasonably close to the previously estimated locations. This study suggests the presence of an intermediate phase between p_c^(1) and p_c^(2); however, since the present method produces remarkable values but has no mathematical proof of accuracy yet, no conclusions are drawn in this article about the presence of the intermediate phase.
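
    The two quoted locations are easy to check numerically from the stated expressions alone (a verification sketch; nothing beyond the abstract is assumed):

        from math import sqrt
        from scipy.optimize import brentq

        # p_c^(1): root of [p^2 + 3(1-p)^2]^2 p^3 - 1/2 = 0 near p ~ 0.9.
        f = lambda p: (p**2 + 3 * (1 - p)**2)**2 * p**3 - 0.5
        p1 = brentq(f, 0.8, 0.99)

        # p_c^(2): closed form (1/4)[2 + sqrt(2(sqrt(5) - 1))].
        p2 = 0.25 * (2 + sqrt(2 * (sqrt(5) - 1)))

        print("%.8f %.8f" % (p1, p2))   # ~0.89539954 and ~0.89307569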

  2. A point-infiltration model for estimating runoff from rainfall on small basins in semiarid areas of Wyoming

    USGS Publications Warehouse

    Rankl, James G.

    1990-01-01

    A physically based point-infiltration model was developed for computing infiltration of rainfall into soils and the resulting runoff from small basins in Wyoming. The user describes a 'design storm' in terms of average rainfall intensity and storm duration. Information required to compute runoff for the design storm by using the model includes (1) soil type and description, and (2) two infiltration parameters and a surface-retention storage parameter. Parameter values are tabulated in the report. Rainfall and runoff data for three ephemeral-stream basins that contain only one type of soil were used to develop the model. Two assumptions were necessary: antecedent soil moisture is some long-term average, and storm rainfall is uniform in both time and space. The infiltration and surface-retention storage parameters were determined for the soil of each basin. Observed rainstorm and runoff data were used to develop a separation curve, or incipient-runoff curve, which distinguishes between runoff and nonrunoff rainfall data. The position of this curve defines the infiltration and surface-retention storage parameters. A procedure for applying the model to basins that contain more than one type of soil was developed using data from 7 of the 10 study basins. For these multiple-soil basins, the incipient-runoff curve defines the infiltration and retention-storage parameters for the soil having the highest runoff potential. Parameters were defined by ranking the soils according to their relative permeabilities and optimizing the position of the incipient-runoff curve by using measured runoff as a control for the fit. Analyses of runoff from multiple-soil basins indicate that the effective contributing area of runoff is less than the drainage area of the basin. In this study, the effective drainage area ranged from 41.6 to 71.1 percent of the total drainage area. Information on effective drainage area is useful in evaluating drainage area as an independent variable in

  3. Digital signal processing and control and estimation theory -- Points of tangency, area of intersection, and parallel directions

    NASA Technical Reports Server (NTRS)

    Willsky, A. S.

    1976-01-01

    A number of current research directions in the fields of digital signal processing and modern control and estimation theory were studied. Topics such as stability theory, linear prediction and parameter identification, system analysis and implementation, two-dimensional filtering, decentralized control and estimation, image processing, and nonlinear system theory were examined in order to uncover some of the basic similarities and differences in the goals, techniques, and philosophy of the two disciplines. An extensive bibliography is included.

  4. Point: clarifying policy evidence with potential-outcomes thinking--beyond exposure-response estimation in air pollution epidemiology.

    PubMed

    Zigler, Corwin Matthew; Dominici, Francesca

    2014-12-15

    The regulatory environment surrounding policies to control air pollution warrants a new type of epidemiologic evidence. Whereas air pollution epidemiology has typically informed policies with estimates of exposure-response relationships between pollution and health outcomes, these estimates alone cannot support current debates surrounding the actual health effects of air quality regulations. We argue that directly evaluating specific control strategies is distinct from estimating exposure-response relationships and that increased emphasis on estimating effects of well-defined regulatory interventions would enhance the evidence that supports policy decisions. Appealing to similar calls for accountability assessment of whether regulatory actions impact health outcomes, we aim to sharpen the analytic distinctions between studies that directly evaluate policies and those that estimate exposure-response relationships, with particular focus on perspectives for causal inference. Our goal is not to review specific methodologies or studies, nor is it to extoll the advantages of "causal" versus "associational" evidence. Rather, we argue that potential-outcomes perspectives can elevate current policy debates with more direct evidence of the extent to which complex regulatory interventions affect health. Augmenting the existing body of exposure-response estimates with rigorous evidence of the causal effects of well-defined actions will ensure that the highest-level epidemiologic evidence continues to support regulatory policies.

  5. Estimation of point source fugitive emission rates from a single sensor time series: a conditionally-sampled Gaussian plume reconstruction

    EPA Science Inventory

    This paper presents a technique for determining the trace gas emission rate from a point source. The technique was tested using data from controlled methane release experiments and from measurement downwind of a natural gas production facility in Wyoming. Concentration measuremen...

  6. A test of the 'one-point method' for estimating maximum carboxylation capacity from field-measured, light-saturated photosynthesis

    DOE PAGES

    Martin G. De Kauwe; Serbin, Shawn P.; Lin, Yan-Shih; ...

    2015-12-31

    Here, simulations of photosynthesis by terrestrial biosphere models typically need a specification of the maximum carboxylation rate (Vcmax). Estimating this parameter using A–Ci curves (net photosynthesis, A, vs intercellular CO2 concentration, Ci) is laborious, which limits availability of Vcmax data. However, many multispecies field datasets include net photosynthetic rate at saturating irradiance and at ambient atmospheric CO2 concentration (Asat) measurements, from which Vcmax can be extracted using a ‘one-point method’.

  7. Future PMPs Estimation in Korea under AR5 RCP 8.5 Climate Change Scenario: Focus on Dew Point Temperature Change

    NASA Astrophysics Data System (ADS)

    Okjeong, Lee; Sangdan, Kim

    2016-04-01

    According to future climate change scenarios, future temperature is expected to increase gradually. Therefore, it is necessary to reflect the effects of these climate changes when predicting Probable Maximum Precipitations (PMPs). In this presentation, PMPs will be estimated with future dew point temperature change. After selecting 174 major storm events from 1981 to 2005, new PMPs will be proposed with respect to storm areas (25, 100, 225, 400, 900, 2,025, 4,900, 10,000 and 19,600 km2) and storm durations (1, 2, 4, 6, 8, 12, 18, 24, 48 and 72 hours) using the Korea hydro-meteorological method. Also, an orographic transposition factor will be applied in place of the conventional terrain impact factor which has been used in previous Korean PMP estimation reports. After estimating dew point temperature using future temperature and representative humidity information under the Korea Meteorological Administration AR5 RCP 8.5 scenario, changes in the PMPs under dew point temperature change will be investigated by comparing present and future PMPs. This research was supported by a grant (14AWMP-B082564-01) from the Advanced Water Management Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.

  8. Using ToxCast data to reconstruct dynamic cell state trajectories and estimate toxicological points of departure.

    EPA Pesticide Factsheets

    Background: High-content imaging (HCI) allows simultaneous measurement of multiple cellular phenotypic changes and is an important tool for evaluating the biological activity of chemicals. Objectives: Our goal was to analyze dynamic cellular changes using HCI to identify the "tipping point" at which the cells did not show recovery towards a normal phenotypic state. Methods: HCI was used to evaluate the effects of 967 chemicals (in concentrations ranging from 0.4 to 200 μM) on HepG2 cells over a 72-hr exposure period. The HCI end points included p53, c-Jun, histone H2A.x, α-tubulin, histone H3, mitochondrial membrane potential, mitochondrial mass, cell cycle arrest, nuclear size, and cell number. A computational model was developed to interpret HCI responses as cell-state trajectories. Results: Analysis of cell-state trajectories showed that 336 chemicals produced tipping points and that HepG2 cells were resilient to the effects of 334 chemicals up to the highest concentration (200 μM) and duration (72 hr) tested. Tipping points were identified as concentration-dependent transitions in system recovery, and the corresponding critical concentrations were generally between 5 and 15 times (25th and 75th percentiles, respectively) lower than the concentration that produced any significant effect on HepG2 cells. The remaining 297 chemicals require more data before they can be placed in either of these categories. Conclusions: These findings show t

  9. Sci—Thur AM: YIS - 11: Estimation of Bladder-Wall Cumulative Dose in Multi-Fraction Image-Based Gynaecological Brachytherapy Using Deformable Point Set Registration

    SciTech Connect

    Zakariaee, R; Brown, C J; Hamarneh, G; Parsons, C A; Spadinger, I

    2014-08-15

    Dosimetric parameters based on dose-volume histograms (DVH) of contoured structures are routinely used to evaluate dose delivered to target structures and organs at risk. However, the DVH provides no information on the spatial distribution of the dose in situations of repeated fractions with changes in organ shape or size. The aim of this research was to develop methods to more accurately determine geometrically localized, cumulative dose to the bladder wall in intracavitary brachytherapy for cervical cancer. The CT scans and treatment plans of 20 cervical cancer patients were used. Each patient was treated with five high-dose-rate (HDR) brachytherapy fractions of 600cGy prescribed dose. The bladder inner and outer surfaces were delineated using MIM Maestro software (MIM Software Inc.) and were imported into MATLAB (MathWorks) as 3-dimensional point clouds constituting the “bladder wall”. A point-set registration toolbox for MATLAB, Coherent Point Drift (CPD), was used to non-rigidly transform the bladder-wall points from four of the fractions to the coordinate system of the remaining (reference) fraction, which was chosen to be the emptiest bladder for each patient. The doses were accumulated on the reference fraction and new cumulative dosimetric parameters were calculated. The LENT-SOMA toxicity scores of these patients were studied against the cumulative dose parameters. Based on this study, there was no significant correlation between the toxicity scores and the determined cumulative dose parameters.
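
    Once each fraction's wall points have been transformed into the reference frame, dose accumulation reduces to matching reference points to their nearest transformed counterparts and summing the per-point doses. A minimal sketch with scipy's cKDTree; the array names, shapes, and synthetic data are assumptions, and the non-rigid registration step itself (e.g. coherent point drift) is taken as already done:

        import numpy as np
        from scipy.spatial import cKDTree

        def accumulate_dose(ref_pts, frac_pts_list, frac_dose_list):
            """Sum per-point doses from registered fractions onto reference points.

            ref_pts: (N, 3) reference bladder-wall points; each entry of
            frac_pts_list is the same wall already transformed into the
            reference frame, with one dose value per point."""
            total = np.zeros(len(ref_pts))
            for pts, dose in zip(frac_pts_list, frac_dose_list):
                tree = cKDTree(pts)
                _, idx = tree.query(ref_pts)    # nearest transformed point
                total += dose[idx]
            return total

        # Synthetic demo: two fractions accumulated over 1000 wall points.
        rng = np.random.default_rng(2)
        ref = rng.normal(size=(1000, 3))
        fracs = [ref + 0.01 * rng.normal(size=ref.shape) for _ in range(2)]
        doses = [rng.uniform(100, 600, 1000) for _ in range(2)]
        print("max cumulative point dose: %.0f cGy"
              % accumulate_dose(ref, fracs, doses).max())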

  10. Single point estimation of glucocorticoid receptors in lymphocytes of normal subjects and of children under long term glucocorticoid treatment.

    PubMed

    Lapcík, P; Hampl, R; Bicíková, M

    1992-03-01

    A single point assay of glucocorticoid receptors (GR) in human lymphocytes, based on the measurement of specific dexamethasone binding, has been developed and compared with a common multi-point Scatchard analysis. The assay conditions (ligand concentration 20 nmol/l, incubation time 2 h, and cell count 2-6 million cells/tube in an assay volume of 0.25 ml) were found to be optimal. An attempt was also made to use a cell harvester for the separation of cells from unbound ligand. Though specifically bound dexamethasone measured by the whole-cell assay and that measured using the cell harvester correlated well, the values obtained with the latter method were almost one order of magnitude lower, rendering it non-applicable for receptor quantitation. The results from 9 healthy volunteers (average GR concentration 7131 +/- 1256 sites/cell) correlated excellently with those obtained by Scatchard analysis. The single point assay was also applied for determination of GR in 10 children treated with large doses of prednisone. The average values from healthy volunteers did not differ significantly from those found in these children, though a much broader range was found in the patients.

  11. Barriers against required nurse estimation models applying in Iran hospitals from health system experts’ point of view

    PubMed Central

    Tabatabaee, Seyed Saeed; Nekoie-Moghadam, Mahmood; Vafaee-Najar, Ali; Amiresmaili, Mohammad Reza

    2016-01-01

    Introduction: One of the strategies for accessing effective nursing care is to design and implement a nursing staff estimation model. The purpose of this research was to determine barriers in applying models or norms for estimating the size of a hospital's nursing team. Methods: This study was conducted from November 2015 to March 2016 among three levels of managers at the Ministry of Health, medical universities, and hospitals in Iran. We carried out a qualitative study using the Colaizzi method. We used semistructured and in-depth interviews with purposive, quota, and snowball sampling of 32 participants (10 informed experts in the area of human resources policymaking in the Ministry of Health, 10 decision makers in employment and distribution of human resources in the treatment and administrative chancellors of medical universities, and 12 nursing managers in hospitals). The data were analyzed with Atlas.ti software version 6.0.15. Results: The following 14 subthemes emerged from the data analysis: lack of a specific steward, weakness in attracting stakeholder contributions, lack of authorities' trust in the models, lack of mutual interests between stakeholders, shortage of nurses, financial deficit, non-native models, design of models by people unfamiliar with the nursing process, lack of attention to the nature of work in each ward, lack of attention to hospital classification, lack of transparency in defining models, reduced nurses' available time, increased indirect activity of nurses, and outdated norms. The main themes were inappropriate planning and policymaking at high levels, resource constraints, and poor design and lack of updating of the models. Conclusion: The results of the present study indicate that many barriers exist in applying models for estimating the size of a hospital's nursing team. Therefore, for designing an appropriate nursing staff estimation model and implementing it, in addition to considering the present barriers, identifying the norm required features

  12. Modification and fixed-point analysis of a Kalman filter for orientation estimation based on 9D inertial measurement unit data.

    PubMed

    Brückner, Hans-Peter; Spindeldreier, Christian; Blume, Holger

    2013-01-01

    A common approach to high-accuracy sensor fusion based on 9D inertial measurement unit data is Kalman filtering. State-of-the-art floating-point filter algorithms differ in their computational complexity; nevertheless, real-time operation on a low-power microcontroller at high sampling rates is not possible. This work presents algorithmic modifications to reduce the computational demands of a two-step minimum order Kalman filter. Furthermore, the required bit-width of a fixed-point filter version is explored. For evaluation, real-world data captured using an Xsens MTx inertial sensor are used. Changes in computational latency and orientation estimation accuracy due to the proposed algorithmic modifications and fixed-point number representation are evaluated in detail on a variety of processing platforms enabling on-board processing on wearable sensor platforms.
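
    The bit-width exploration can be prototyped by quantizing filter quantities to a signed Q-format and measuring the representation error. A generic sketch (word length, fraction bits, and the test value are illustrative, not the authors' filter):

        def to_fixed(x, frac_bits, word_bits=16):
            """Quantize x to signed two's-complement Q format with the given
            number of fraction bits, saturating on overflow."""
            scale = 1 << frac_bits
            q = int(round(x * scale))
            lo, hi = -(1 << (word_bits - 1)), (1 << (word_bits - 1)) - 1
            return max(lo, min(hi, q))

        def to_float(q, frac_bits):
            return q / (1 << frac_bits)

        # Quantization error of a quaternion component for several bit-widths.
        x = 0.70710678   # e.g. sqrt(2)/2
        for frac_bits in (7, 11, 15):
            err = abs(to_float(to_fixed(x, frac_bits), frac_bits) - x)
            print("Q%d.%d error: %.2e" % (15 - frac_bits, frac_bits, err))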

  13. Comparison between CT-based volumetric calculations and ICRU reference-point estimates of radiation doses delivered to bladder and rectum during intracavitary radiotherapy for cervical cancer

    SciTech Connect

    Pelloski, Christopher E.; Palmer, Matthew B.S.; Chronowski, Gregory M.; Jhingran, Anuja; Horton, John; Eifel, Patricia J. . E-mail: peifel@mdanderson.org

    2005-05-01

    Purpose: To compare CT-based volumetric calculations and International Commission on Radiation Units and Measurements (ICRU) reference-point estimates of radiation doses to the bladder and rectum in patients with carcinoma of the uterine cervix treated with definitive low-dose-rate intracavitary radiotherapy (ICRT). Methods and Materials: Between November 2001 and March 2003, 60 patients were prospectively enrolled in a pilot study of ICRT with CT-based dosimetry. Most patients underwent two ICRT insertions. After insertion of an afterloading ICRT applicator, intraoperative orthogonal films were obtained to ensure proper positioning of the system and to facilitate subsequent planning. Treatments were prescribed using standard two-dimensional dosimetry and planning. Patients also underwent helical CT of the pelvis for three-dimensional reconstruction of the radiation dose distributions. The systems were loaded with 137Cs sources using the Selectron remote afterloading system according to institutional practice for low-dose-rate brachytherapy. Three-dimensional dose distributions were generated using the Varian BrachyVision treatment planning system. The rectum was contoured from the bottom of the ischial tuberosities to the sigmoid flexure. The entire bladder was contoured. The minimal doses delivered to the 2 cm^3 of bladder and rectum receiving the highest dose (D_BV2 and D_RV2, respectively) were determined from dose-volume histograms, and these estimates were compared with two-dimensionally derived estimates of the doses to the corresponding ICRU reference points. Results: A total of 118 unique intracavitary insertions were performed, and 93 were evaluated and the subject of this analysis. For the rectum, the estimated doses to the ICRU reference point did not differ significantly from the D_RV2 (p = 0.561); the mean (± standard deviation) difference was 21 cGy (± 344 cGy). The median volume of the rectum that received at least
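
    The D_BV2/D_RV2-style metric, the minimal dose to the hottest 2 cm^3 of an organ, can be read directly off per-voxel doses. A minimal sketch assuming a flat dose array and a uniform voxel volume (all numbers synthetic):

        import numpy as np

        def d_hottest(dose_cgy, voxel_cm3, volume_cm3=2.0):
            """Minimal dose (cGy) delivered to the hottest `volume_cm3` of an
            organ, given per-voxel doses and a uniform voxel volume."""
            n = max(1, int(np.ceil(volume_cm3 / voxel_cm3)))
            dose_sorted = np.sort(np.asarray(dose_cgy))[::-1]   # hottest first
            return dose_sorted[:n].min()

        # Synthetic bladder dose grid with 0.1 cm^3 voxels.
        rng = np.random.default_rng(3)
        dose = rng.gamma(shape=4.0, scale=150.0, size=5000)
        print("D_2cc ~ %.0f cGy" % d_hottest(dose, voxel_cm3=0.1))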

  14. Impact of single-point GPS integrated water vapor estimates on short-range WRF model forecasts over southern India

    NASA Astrophysics Data System (ADS)

    Kumar, Prashant; Gopalan, Kaushik; Shukla, Bipasha Paul; Shyam, Abhineet

    2016-09-01

    Specifying physically consistent and accurate initial conditions is one of the major challenges of numerical weather prediction (NWP) models. In this study, ground-based global positioning system (GPS) integrated water vapor (IWV) measurements available from the International Global Navigation Satellite Systems (GNSS) Service (IGS) station in Bangalore, India, are used to assess the impact of GPS data on NWP model forecasts over southern India. Two experiments are performed with and without assimilation of GPS-retrieved IWV observations during the Indian winter monsoon period (November-December, 2012) using a four-dimensional variational (4D-Var) data assimilation method. Assimilation of GPS data improved the model IWV analysis as well as the subsequent forecasts. There is a positive impact of ~10 % over Bangalore and nearby regions. The Weather Research and Forecasting (WRF) model-predicted 24-h surface temperature forecasts also improved when compared with observations. Small but significant improvements were found in the rainfall forecasts compared to control experiments.

  15. Estimation of critical frequency and height maximum for path middle point on evidence derived from experimental oblique sounding data: comparison of calculated values with experimental and IRI values

    NASA Astrophysics Data System (ADS)

    Kim, Anton G.; Kotovich, Galina V.

    2006-11-01

    This work is devoted to experimental verification of a technique for estimating f0F2 and hmF2 values at the path midpoint from oblique sounding (OS) data. Data obtained by the Irkutsk chirp sounder on the Norilsk-Irkutsk path were used, together with data from the Podkamennaya Tunguska ionospheric station, which is located near the midpoint of the path under study. In the calculation, the experimental distance-frequency characteristics (DFC) of the path are recalculated into height-frequency characteristics (HFC) at the path midpoint by means of the Smith method, which allows the f0F2 value at the path midpoint to be determined. For the hmF2 determination, an N(h) profile is used, obtained by recalculating the HFC by means of the Guliaeva technique. A fast recalculation method using two DFC points was also tested. The calculated f0F2 values were compared with experimental f0F2 values obtained at the Podkamennaya Tunguska ionospheric station. The estimated hmF2 values were compared with values calculated by the Dudeney method from experimental f0E, f0F2, and M(3000)F2 values at Podkamennaya Tunguska. In addition, the estimated values were compared with values given by the IRI model, and the capability of adapting the IRI model by f0F2 and hmF2 values was investigated. This will help in diagnostics, in working out regional models of the ionosphere, and in adapting various ionosphere models to real conditions.

  16. Extended Kalman filtering of point process observation.

    PubMed

    Salimpour, Yousef; Soltanian-Zadeh, Hamid; Abolhassani, Mohammad D

    2010-01-01

    A temporal point process is a stochastic time series of binary events that occurs in continuous time. In computational neuroscience, the point process is used to model neuronal spiking activity; however, estimating the model parameters from a spike train is a challenging problem. The state space point process filtering theory is a new technique for the estimation of the states and parameters. In order to use stochastic filtering theory for the states of a neuronal system under the Gaussian assumption, we apply the extended Kalman filter. In this regard, the extended Kalman filtering equations are derived for the point process observation. We illustrate the new filtering algorithm by estimating the effect of a visual stimulus on the spiking activity of object-selective neurons from the inferior temporal cortex of a macaque monkey. Based on the goodness-of-fit assessment, the extended Kalman filter provides more accurate state estimates than the conventional methods.
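
    The flavour of such a filter can be sketched for a scalar state with a log-linear conditional intensity λ = exp(μ + βx): predict as in a Kalman filter, then update with the count innovation dN - λΔ. This is a generic state-space point-process filter sketch, not the paper's derivation; the constants are illustrative and the spike train is simulated with the usual Bernoulli approximation:

        import numpy as np

        rng = np.random.default_rng(4)
        dt, steps = 0.001, 5000
        F, Q = 0.999, 1e-4             # assumed random-walk state model
        mu, beta = np.log(20.0), 1.0   # log-linear conditional intensity

        # Simulate a latent state and a Bernoulli-approximated spike train.
        x = np.zeros(steps)
        for k in range(1, steps):
            x[k] = F * x[k-1] + np.sqrt(Q) * rng.standard_normal()
        lam = np.exp(mu + beta * x)
        dN = rng.random(steps) < lam * dt

        # Gaussian-approximation point-process filter (scalar form).
        xh, P = 0.0, 1.0
        est = np.zeros(steps)
        for k in range(steps):
            xp, Pp = F * xh, F * P * F + Q              # predict
            lk = np.exp(mu + beta * xp)                 # predicted intensity
            P = 1.0 / (1.0 / Pp + beta**2 * lk * dt)    # information-form update
            xh = xp + P * beta * (dN[k] - lk * dt)      # innovation on counts
            est[k] = xh
        print("state RMSE: %.3f" % np.sqrt(np.mean((est - x)**2)))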

  17. Projections onto Convex Sets Super-Resolution Reconstruction Based on Point Spread Function Estimation of Low-Resolution Remote Sensing Images

    PubMed Central

    Fan, Chong; Wu, Chaoyun; Li, Grand; Ma, Jun

    2017-01-01

    To address the inaccuracy of estimating the point spread function (PSF) of the ideal original image in traditional projections onto convex sets (POCS) super-resolution (SR) reconstruction, this paper presents an improved POCS SR algorithm based on PSF estimation of low-resolution (LR) remote sensing images. The proposed algorithm can improve the spatial resolution of the image and benefit visual interpretation of agricultural crops. The PSF of the high-resolution (HR) image is unknown in reality. Therefore, analysis of the relationship between the PSF of the HR image and the PSF of the LR image is important for estimating the PSF of the HR image by using multiple LR images. In this study, the linear relationship between the PSFs of the HR and LR images is proven. In addition, a novel slant knife-edge method is employed, which can improve the accuracy of the PSF estimation of LR images. Finally, the proposed method is applied to reconstruct airborne digital sensor 40 (ADS40) three-line array images and the overlapped areas of two adjacent GF-2 images by embedding the estimated PSF of the HR image into the original POCS SR algorithm. Experimental results show that the proposed method yields higher quality of reconstructed images than that produced by the blind SR method and the bicubic interpolation method. PMID:28208837

  18. Projections onto Convex Sets Super-Resolution Reconstruction Based on Point Spread Function Estimation of Low-Resolution Remote Sensing Images.

    PubMed

    Fan, Chong; Wu, Chaoyun; Li, Grand; Ma, Jun

    2017-02-13

    To address the inaccuracy of estimating the point spread function (PSF) of the ideal original image in traditional projections onto convex sets (POCS) super-resolution (SR) reconstruction, this paper presents an improved POCS SR algorithm based on PSF estimation of low-resolution (LR) remote sensing images. The proposed algorithm can improve the spatial resolution of the image and benefit visual interpretation of agricultural crops. The PSF of the high-resolution (HR) image is unknown in reality. Therefore, analysis of the relationship between the PSF of the HR image and the PSF of the LR image is important for estimating the PSF of the HR image by using multiple LR images. In this study, the linear relationship between the PSFs of the HR and LR images is proven. In addition, a novel slant knife-edge method is employed, which can improve the accuracy of the PSF estimation of LR images. Finally, the proposed method is applied to reconstruct airborne digital sensor 40 (ADS40) three-line array images and the overlapped areas of two adjacent GF-2 images by embedding the estimated PSF of the HR image into the original POCS SR algorithm. Experimental results show that the proposed method yields higher quality of reconstructed images than that produced by the blind SR method and the bicubic interpolation method.

  19. A Method to Estimate the Probability That Any Individual Lightning Stroke Contacted the Surface Within Any Radius of Any Point

    NASA Technical Reports Server (NTRS)

    Huddleston, Lisa L.; Roeder, William; Merceret, Francis J.

    2010-01-01

    A technique has been developed to calculate the probability that any nearby lightning stroke is within any radius of any point of interest. In practice, this provides the probability that a nearby lightning stroke was within a key distance of a facility, rather than the error ellipses centered on the stroke. This process takes the current bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to get the probability that the stroke is inside any specified radius. This new facility-centric technique will be much more useful to the space launch customers and may supersede the lightning error ellipse approach discussed in [5], [6].
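
    Numerically, the technique amounts to integrating the bivariate Gaussian error density over a disk centred on the facility, which is convenient in polar coordinates. A sketch with scipy quadrature (the stroke location, covariance, and radius below are illustrative):

        import numpy as np
        from scipy.stats import multivariate_normal
        from scipy.integrate import dblquad

        def prob_within_radius(stroke_xy, cov, facility_xy, radius):
            """P(stroke within `radius` of the facility): integrate the
            bivariate Gaussian location-error density over a disk centred
            on the facility, in polar coordinates (r, theta)."""
            pdf = multivariate_normal(mean=stroke_xy, cov=cov).pdf
            fx, fy = facility_xy
            integrand = lambda r, th: pdf([fx + r * np.cos(th),
                                           fy + r * np.sin(th)]) * r
            p, _ = dblquad(integrand, 0.0, 2 * np.pi,
                           lambda th: 0.0, lambda th: radius)
            return p

        # Illustrative numbers: error ellipse in km^2, facility ~1 km away.
        cov = np.array([[0.25, 0.05], [0.05, 0.16]])
        print("P = %.3f" % prob_within_radius((0.0, 0.0), cov, (1.0, 0.3), 1.0))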

  20. EM Sounding Characterization of Soil Environment toward Estimation of Potential Pollutant Load from Non-point Sources

    NASA Astrophysics Data System (ADS)

    Mori, Y.; Ide, J.; Somura, H.; Morisawa, T.

    2010-12-01

    A multi-frequency electro-magnetic (EM) sounding method was applied to agricultural fields to investigate the characteristics of non-point pollution load. Soil environmental properties, such as differences in land management, were analyzed with electrical conductivity (EC) maps. In addition, vertical EC profiles obtained from EM soundings were compared with the EC of drainage ditch or river water. As a result, surface soil EC maps successfully captured the differences in land management related to fertilizer application. Moreover, surface EC in the vertical profiles was strongly related to drainage ditch or river EC, showing that most of the EC in the water was explained by the surface EC maps from the EM sounding data. A strength of the proposed method is that it yields EC data without sampling river water, which was not always possible during the field survey.

  1. Epidemiologic Behavior and Estimation of an Optimal Cut-Off Point for Homeostasis Model Assessment-2 Insulin Resistance: A Report from a Venezuelan Population

    PubMed Central

    Bermúdez, Valmore; Martínez, María Sofía; Apruzzese, Vanessa; Chávez-Castillo, Mervin; Gonzalez, Robys; Torres, Yaquelín; Bello, Luis; Añez, Roberto; Chacín, Maricarmen; Toledo, Alexandra; Cabrera, Mayela; Mengual, Edgardo; Ávila, Raquel; López-Miranda, José

    2014-01-01

    Background. Mathematical models such as the Homeostasis Model Assessment have gained popularity in the evaluation of insulin resistance (IR). The purpose of this study was to estimate the optimal cut-off point for Homeostasis Model Assessment-2 Insulin Resistance (HOMA2-IR) in an adult population of Maracaibo, Venezuela. Methods. A descriptive, cross-sectional study with randomized, multistage sampling included 2,026 adult individuals. IR was evaluated through HOMA2-IR calculation in 602 metabolically healthy individuals. For cut-off point estimation, two approaches were applied: HOMA2-IR percentile distribution and construction of ROC curves using sensitivity and specificity for selection. Results. The HOMA2-IR arithmetic mean for the general population was 2.21 ± 1.42, with 2.18 ± 1.37 for women and 2.23 ± 1.47 for men (P = 0.466). When calculating HOMA2-IR for the healthy reference population, the resulting p75 was 2.00. Using ROC curves, the selected cut-off point was 1.95, with an area under the curve of 0.801, sensitivity of 75.3%, and specificity of 72.8%. Conclusions. We propose an optimal cut-off point of 2.00 for HOMA2-IR, offering high sensitivity and specificity, sufficient for proper assessment of IR in the adult population of our city, Maracaibo. The determination of population-specific cut-off points is needed to evaluate risk for public health problems, such as obesity and metabolic syndrome. PMID:27379332
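
    The ROC-based selection can be sketched with scikit-learn; here the cut-off is picked by Youden's J (sensitivity + specificity - 1), a common criterion and an assumption on our part, applied to synthetic HOMA2-IR-like values:

        import numpy as np
        from sklearn.metrics import roc_curve, roc_auc_score

        rng = np.random.default_rng(5)
        # Synthetic HOMA2-IR values: non-IR vs IR groups (illustrative lognormals).
        homa = np.concatenate([rng.lognormal(0.45, 0.35, 400),
                               rng.lognormal(0.95, 0.35, 200)])
        label = np.concatenate([np.zeros(400), np.ones(200)])

        fpr, tpr, thr = roc_curve(label, homa)
        j = tpr - fpr                      # Youden's J at each threshold
        best = np.argmax(j)
        print("AUC %.3f, cut-off %.2f, sens %.1f%%, spec %.1f%%"
              % (roc_auc_score(label, homa), thr[best],
                 100 * tpr[best], 100 * (1 - fpr[best])))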

  2. Estimation of contribution from non-point sources to perfluorinated surfactants in a river by using boron as a wastewater tracer.

    PubMed

    Nishikoori, Hiroshi; Murakami, Michio; Sakai, Hiroshi; Oguma, Kumiko; Takada, Hideshige; Takizawa, Satoshi

    2011-08-01

    The contribution of non-point sources to perfluorinated surfactants (PFSs) in a river was evaluated by estimating their fluxes and by using boron (B) as a tracer. The utility of PFSs/B as an indicator for evaluating the impact of non-point sources was demonstrated. River water samples were collected from the Iruma River, upstream of the intake of drinking water treatment plants in Tokyo, during dry weather and wet weather, and 13 PFSs, dissolved organic carbon (DOC), total nitrogen (TN), and B were analyzed. Perfluorohexane sulfonate (PFHxS), perfluorooctane sulfonate (PFOS), perfluoroheptanoate (PFHpA), perfluorooctanoate (PFOA), perfluorononanoate (PFNA), perfluorodecanoate (PFDA), perfluoroundecanoate (PFUA), and perfluorododecanoate (PFDoDA) were detected on all sampling dates. The concentrations and fluxes of perfluorocarboxylates (PFCAs, e.g. PFOA and PFNA) were higher during wet weather, but those of perfluoroalkyl sulfonates (PFASs, e.g. PFHxS and PFOS) were not. The wet/dry ratios of PFSs/B (ratios of PFSs/B during wet weather to those during dry weather) agreed well with those of PFS fluxes (ratios of PFS fluxes during wet weather to those during dry weather), indicating that PFSs/B is useful for evaluating the contribution from non-point sources to PFSs in rivers. The wet/dry ratios of PFOA and PFNA were higher than those of other PFSs, DOC, and TN, showing that non-point sources contributed greatly to PFOA and PFNA in the water. This is the first study to use B as a wastewater tracer to estimate the contribution of non-point sources to PFSs in a river.

  3. A-line, bispectral index, and estimated effect-site concentrations: a prediction of clinical end-points of anesthesia.

    PubMed

    Kreuer, Sascha; Bruhn, Jörgen; Larsen, Reinhard; Buchinger, Heiko; Wilhelm, Wolfram

    2006-04-01

    Autoregressive modeling with exogenous input of middle-latency auditory evoked potentials (A-Line AEP index, AAI) has been developed for monitoring depth of anesthesia. We investigated the prediction of recovery and dose-response relationship of desflurane and AAI or bispectral index (BIS) values. Twenty adult men scheduled for radical prostatectomy were recruited. To minimize opioid effects, analgesia was provided by a concurrent epidural in addition to the general anesthetic. Electrodes for AAI and BIS monitoring and a headphone for auditory stimuli were applied. Propofol and remifentanil were used for anesthetic induction. Maintenance of anesthesia was with desflurane only. For comparison to AAI and BIS monitor parameters, pharmacokinetic models for desflurane and propofol distribution and effect-site concentrations were used to predict clinical end-points (Prediction probability P(K)). Patients opened their eyes at an AAI value of 47 +/- 20 and a BIS value of 77 +/- 14 (mean +/- sd), and the prediction probability for eye opening was P(K) = 0.81 for AAI, P(K) = 0.89 for BIS, and P(K) = 0.91 for desflurane effect-site concentration. The opening of eyes was best predicted by the calculated desflurane effect-site concentration. The relationship between predicted desflurane effect-site concentration versus AAI and BIS was calculated by nonlinear regression analysis (r = 0.75 for AAI and r = 0.80 for BIS). The correlation between BIS and clinical end-points of anesthesia or the desflurane effect-compartment concentration is better than for the AAI.

  4. Structural Constraints and Earthquake Recurrence Estimates for the West Tahoe-Dollar Point Fault, Lake Tahoe Basin, California

    NASA Astrophysics Data System (ADS)

    Maloney, J. M.; Driscoll, N. W.; Kent, G.; Brothers, D. S.; Baskin, R. L.; Babcock, J. M.; Noble, P. J.; Karlin, R. E.

    2011-12-01

    Previous work in the Lake Tahoe Basin (LTB), California, identified the West Tahoe-Dollar Point Fault (WTDPF) as the most hazardous fault in the region. Onshore and offshore geophysical mapping delineated three segments of the WTDPF extending along the western margin of the LTB. The rupture patterns between the three WTDPF segments remain poorly understood. Fallen Leaf Lake (FLL), Cascade Lake, and Emerald Bay are three sub-basins of the LTB, located south of Lake Tahoe, that provide an opportunity to image primary earthquake deformation along the WTDPF and associated landslide deposits. We present results from recent (June 2011) high-resolution seismic CHIRP surveys in FLL and Cascade Lake, as well as complete multibeam swath bathymetry coverage of FLL. Radiocarbon dates obtained from the new piston cores acquired in FLL provide age constraints on the older FLL slide deposits and build on and complement previous work that dated the most recent event (MRE) in Fallen Leaf Lake at ~4.1-4.5 k.y. BP. The CHIRP data beneath FLL image slide deposits that appear to correlate with contemporaneous slide deposits in Emerald Bay and Lake Tahoe. A major slide imaged in FLL CHIRP data is slightly younger than the Tsoyowata ash (7950-7730 cal yrs BP) identified in sediment cores and appears synchronous with a major Lake Tahoe slide deposit (7890-7190 cal yrs BP). The equivalent age of these slides suggests the penultimate earthquake on the WTDPF may have triggered them. If correct, we postulate a recurrence interval of ~3-4 k.y. These results suggest the FLL segment of the WTDPF is near its seismic recurrence cycle. Additionally, CHIRP profiles acquired in Cascade Lake image the WTDPF for the first time in this sub-basin, which is located near the transition zone between the FLL and Rubicon Point Sections of the WTDPF. We observe two fault-strands trending N45°W across southern Cascade Lake for ~450 m. The strands produce scarps of ~5 m and ~2.7 m, respectively, on the lake

  5. On the Choice of Access Point Selection Criterion and Other Position Estimation Characteristics for WLAN-Based Indoor Positioning

    PubMed Central

    Laitinen, Elina; Lohan, Elena Simona

    2016-01-01

    The positioning based on Wireless Local Area Networks (WLAN) is one of the most promising technologies for indoor location-based services, generally using the information carried by Received Signal Strengths (RSS). One challenge, however, is the huge amount of data in the radiomap database due to the enormous number of hearable Access Points (AP) that could make the positioning system very complex. This paper concentrates on WLAN-based indoor location by comparing fingerprinting, path loss and weighted centroid based positioning approaches in terms of complexity and performance and studying the effects of grid size and AP reduction with several choices for appropriate selection criterion. All results are based on real field measurements in three multi-floor buildings. We validate our earlier findings concerning several different AP selection criteria and conclude that the best results are obtained with a maximum RSS-based criterion, which also proved to be the most consistent among the different investigated approaches. We show that the weighted centroid based low-complexity method is very sensitive to AP reduction, while the path loss-based method is also very robust to high percentage removals. Indeed, for fingerprinting, 50% of the APs can be removed safely with a properly chosen removal criterion without increasing the positioning error much. PMID:27213395
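
    Of the three approaches compared, the weighted centroid is the simplest to state: the position estimate is a weighted average of the known coordinates of the heard APs. A minimal sketch; the linear-power weighting is one common choice and an assumption here:

        import numpy as np

        def weighted_centroid(ap_xy, rss_dbm, eps=1e-9):
            """Estimate position as the RSS-weighted centroid of heard APs.

            Weights are linear-scale powers, 10^(RSS/10); other weighting
            functions are used in the literature."""
            w = 10.0 ** (np.asarray(rss_dbm) / 10.0)
            w = w / (w.sum() + eps)
            return w @ np.asarray(ap_xy)

        aps = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]   # known AP coordinates, m
        rss = [-50.0, -70.0, -65.0]    # strongest AP dominates the estimate
        print(weighted_centroid(aps, rss))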

  6. Contaminant dispersion prediction and source estimation with integrated Gaussian-machine learning network model for point source emission in atmosphere.

    PubMed

    Ma, Denglong; Zhang, Zaoxiao

    2016-07-05

    Gas dispersion models are important for predicting gas concentrations when contaminant gas leakage occurs. Intelligent network models such as radial basis function (RBF), back propagation (BP) neural network and support vector machine (SVM) models can be used for gas dispersion prediction. However, the prediction results from these network models, which take many inputs based on the original monitoring parameters, are not in good agreement with the experimental data. Therefore, a new series of machine learning algorithm (MLA) models, combining the classic Gaussian model with MLA algorithms, is presented. The prediction results from the new models are greatly improved. Among these models, the Gaussian-SVM model performs best, and its computation time is close to that of the classic Gaussian dispersion model. Finally, the Gaussian-MLA models were applied to identifying the emission source parameters with the particle swarm optimization (PSO) method. The estimation performance of PSO with Gaussian-MLA is better than that with the Gaussian model, the Lagrangian stochastic (LS) dispersion model, and network models based on the original monitoring parameters. Hence, the new prediction model based on Gaussian-MLA is potentially a good method for predicting contaminant gas dispersion, as well as a good forward model for the emission source parameter identification problem.
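
    The classic Gaussian plume model embedded in these hybrids gives, for a ground-reflected plume, C = q/(2π u σy σz) · exp(-y²/2σy²) · [exp(-(z-h)²/2σz²) + exp(-(z+h)²/2σz²)]. A minimal sketch; the dispersion coefficients σy and σz must come from some stability-class parameterization, which is deliberately not modelled here:

        import numpy as np

        def gaussian_plume(q, u, receptor, h, sigma_y, sigma_z):
            """Ground-reflected Gaussian plume concentration (g/m^3).

            q: emission rate (g/s); u: wind speed (m/s); h: effective
            release height (m). receptor = (x, y, z); sigma_y, sigma_z (m)
            must already correspond to the downwind distance x."""
            x, y, z = receptor
            lateral = np.exp(-y**2 / (2 * sigma_y**2))
            vertical = (np.exp(-(z - h)**2 / (2 * sigma_z**2)) +
                        np.exp(-(z + h)**2 / (2 * sigma_z**2)))
            return q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

        # 1 g/s source, 3 m/s wind, receptor 500 m downwind on the centreline.
        print("%.2e g/m^3"
              % gaussian_plume(1.0, 3.0, (500.0, 0.0, 1.5), 20.0, 36.0, 19.0))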

  7. Estimation of glacier surface motion by robust phase correlation and point like features of SAR intensity images

    NASA Astrophysics Data System (ADS)

    Fang, Li; Xu, Yusheng; Yao, Wei; Stilla, Uwe

    2016-11-01

    For monitoring of glacier surface motion in polar and alpine areas, radar remote sensing is becoming a popular technology owing to its specific advantages of being independent of weather conditions and sunlight. In this paper we propose a method for glacier surface motion monitoring using phase correlation (PC) based on point-like features (PLF). We carry out experiments using repeat-pass TerraSAR X-band (TSX) and Sentinel-1 C-band (S1C) intensity images of the Taku glacier in the Juneau icefield, located in southeast Alaska. The intensity imagery is first filtered by an improved adaptive refined Lee filter, while the effect of topographic relief is removed via the SRTM-X DEM. Then, a robust phase correlation algorithm based on singular value decomposition (SVD) and an improved random sample consensus (RANSAC) algorithm is applied to sequential PLF pairs generated by correlation using a 2D sinc function template. The approaches for glacier monitoring are validated with both simulated SAR data and real SAR data from two satellites. The results obtained from these three test datasets confirm the superiority of the proposed approach over standard correlation-like methods. By the use of the proposed adaptive refined Lee filter, we achieve a good balance between the suppression of noise and the preservation of local image textures. The presented phase correlation algorithm shows an accuracy of better than 0.25 pixels when conducting matching tests using simulated SAR intensity images with strong noise. Quantitative 3D motions and velocities of the investigated Taku glacier during a repeat-pass period are obtained, which allows a comprehensive and reliable analysis for the investigation of large-scale glacier surface dynamics.
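
    The core of phase correlation is the normalized cross-power spectrum, whose inverse FFT peaks at the image shift; the SVD-based robustification, subpixel refinement, and RANSAC screening of the paper are omitted in this minimal sketch:

        import numpy as np

        def phase_correlation_shift(a, b):
            """Integer (dy, dx) shift of image a relative to b via phase correlation."""
            Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
            cross = Fa * np.conj(Fb)
            cross /= np.abs(cross) + 1e-12        # normalized cross-power spectrum
            corr = np.real(np.fft.ifft2(cross))
            dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
            # Map peak locations past the midpoint to negative shifts.
            if dy > a.shape[0] // 2: dy -= a.shape[0]
            if dx > a.shape[1] // 2: dx -= a.shape[1]
            return dy, dx

        rng = np.random.default_rng(6)
        img = rng.random((128, 128))
        shifted = np.roll(img, (5, -9), axis=(0, 1))
        print(phase_correlation_shift(shifted, img))   # -> (5, -9)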

  8. Estimation of local anisotropy of plexiform bone: Comparison between depth sensing micro-indentation and Reference Point Indentation.

    PubMed

    Dall'Ara, E; Grabowski, P; Zioupos, P; Viceconti, M

    2015-11-26

    The recently developed Reference Point Indentation (RPI) allows measurement of bone properties at the tissue level in vivo. The goal of this study was to compare the local anisotropic behaviour of bovine plexiform bone measured with depth-sensing micro-indentation tests and with RPI. Fifteen plexiform bone specimens were extracted from a bovine femur and polished down to 0.05µm alumina paste for indentations along the axial, radial and circumferential directions (N=5 per group). Twenty-four micro-indentations (2.5µm in depth; 10% of them were excluded for testing problems) and four RPI-indentations (~50µm in depth) were performed on each sample. The local indentation modulus Eind was found to be highest for the axial direction (24.3±2.5GPa), compared to the circumferential direction (19% less stiff) and the radial direction (30% less stiff). RPI measurements were also found to be dependent on indentation direction (p<0.001), with the exception of the Indentation Distance Increase (IDI) (p=0.173). In particular, the unloading slope US1 followed trends similar to Eind: 0.47±0.03N/µm for axial, 11% lower for circumferential and 17% lower for radial. Significant correlations were found between US1 and Eind (p=0.001; R(2)=0.58), while no significant relationship was found between IDI and any of the micro-indentation measurements (p>0.157). In conclusion, some of the RPI measurements can provide information about local anisotropy, but IDI cannot. Moreover, there is a linear relationship between most local mechanical properties measured with RPI and with micro-indentations, but IDI does not correlate with any micro-indentation measurements.

  9. Estimation of influential points in any data set from coefficient of determination and its leave-one-out cross-validated counterpart.

    PubMed

    Tóth, Gergely; Bodai, Zsolt; Héberger, Károly

    2013-10-01

    The coefficient of determination (R²) and its leave-one-out cross-validated analogue (denoted by Q² or R²cv) are the most frequently published values used to characterize the predictive performance of models. In this article we use R² and Q² in a reversed aspect to detect uncommon points, i.e. influential points, in any data set. The term (1 - Q²)/(1 - R²) corresponds to the ratio of the predictive residual sum of squares and the residual sum of squares. The ratio correlates with the number of influential points in experimental and random data sets. We propose an (approximate) F-test on the (1 - Q²)/(1 - R²) term to quickly pre-estimate the presence of influential points in the training sets of models. The test is founded upon the routinely calculated Q² and R² values and warns model builders to verify the training set, to perform influence analysis, or even to change to robust modeling.
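
    Since (1 - Q²)/(1 - R²) reduces to PRESS/RSS, the ratio can be computed cheaply for an ordinary least-squares model using the closed-form leave-one-out residuals e_i/(1 - h_ii). A minimal sketch, assuming NumPy and a linear model with intercept (the authors' F-test approximation is not reproduced here):

    ```python
    import numpy as np

    def press_rss_ratio(X, y):
        """Compute (1 - Q^2)/(1 - R^2) = PRESS/RSS for an OLS model y ~ X.
        Leave-one-out residuals use the closed form e_i / (1 - h_ii)."""
        X1 = np.column_stack([np.ones(len(y)), X])   # add intercept
        H = X1 @ np.linalg.pinv(X1)                  # hat matrix
        e = y - H @ y                                # ordinary residuals
        h = np.diag(H)
        press = np.sum((e / (1.0 - h)) ** 2)         # predictive residuals
        return press / np.sum(e ** 2)

    # toy data: clean fit, then a high-leverage, off-trend point
    rng = np.random.default_rng(1)
    x = rng.normal(size=50)
    y = 2.0 * x + rng.normal(scale=0.3, size=50)
    print(press_rss_ratio(x[:, None], y))      # ~1: no influential points
    x[0], y[0] = 4.0, 0.0                      # inject an influential point
    print(press_rss_ratio(x[:, None], y))      # ratio clearly above 1
    ```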

  10. A test of the 'one-point method' for estimating maximum carboxylation capacity from field-measured, light-saturated photosynthesis

    SciTech Connect

    Martin G. De Kauwe; Serbin, Shawn P.; Lin, Yan -Shih; Wright, Ian J.; Medlyn, Belinda E.; Crous, Kristine Y.; Ellsworth, David S.; Maire, Vincent; Prentice, I. Colin; Atkin, Owen K.; Rogers, Alistair; Niinemets, Ulo; Meir, Patrick; Uddling, Johan; Togashi, Henrique F.; Tarvainen, Lasse; Weerasinghe, Lasantha K.; Evans, Bradley J.; Ishida, F. Yoko; Domingues, Tomas F.

    2015-12-31

    Here, simulations of photosynthesis by terrestrial biosphere models typically need a specification of the maximum carboxylation rate (Vcmax). Estimating this parameter using A–Ci curves (net photosynthesis, A, vs intercellular CO2 concentration, Ci) is laborious, which limits the availability of Vcmax data. However, many multispecies field datasets include measurements of the net photosynthetic rate at saturating irradiance and at ambient atmospheric CO2 concentration (Asat), from which Vcmax can be extracted using a ‘one-point method’.
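
    A hedged sketch of the idea behind such a one-point method, assuming the Rubisco-limited Farquhar model A = Vcmax·(Ci − Γ*)/(Ci + Km) − Rd with the common assumption Rd ≈ 0.015·Vcmax; the kinetic constants below are textbook 25 °C values, not taken from this paper:

    ```python
    # Assumed Bernacchi-type constants at 25 degC (umol mol-1 unless noted).
    GAMMA_STAR = 42.75   # CO2 compensation point without dark respiration
    KC = 404.9           # Michaelis constant for CO2
    KO = 278.4           # Michaelis constant for O2, mmol mol-1
    O2 = 210.0           # ambient O2, mmol mol-1

    def vcmax_one_point(asat, ci):
        """Invert Asat = Vcmax*(Ci - G*)/(Ci + Km) - 0.015*Vcmax for Vcmax."""
        km = KC * (1.0 + O2 / KO)
        return asat / ((ci - GAMMA_STAR) / (ci + km) - 0.015)

    # e.g. Asat = 20 umol m-2 s-1 measured at Ci = 270 umol mol-1
    print(round(vcmax_one_point(20.0, 270.0), 1))   # ~92 umol m-2 s-1
    ```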

  11. BeiDou phase bias estimation and its application in precise point positioning with triple-frequency observable

    NASA Astrophysics Data System (ADS)

    Gu, Shengfeng; Lou, Yidong; Shi, Chuang; Liu, Jingnan

    2015-10-01

    At present, the BeiDou system (BDS) enables the practical application of triple-frequency observables in the Asia-Pacific region. Of the many possible benefits from the additional signal, this study focuses on exploiting the contribution of zero-difference (ZD) ambiguity resolution (AR) to precise point positioning (PPP). A general modeling strategy for multi-frequency PPP AR is presented, in which the least-squares ambiguity decorrelation adjustment (LAMBDA) method is employed for ambiguity fixing, based on the full ambiguity variance-covariance matrix generated from the raw data processing model. Because reliable fixing of the BDS L1 ambiguity faces more difficulty, the LAMBDA method with partial ambiguity fixing is proposed to enable the independent and instantaneous resolution of the extra-wide-lane (EWL) and wide-lane (WL) ambiguities. This mechanism of sequential ambiguity fixing is demonstrated by resolving ZD satellite phase biases and performing triple-frequency PPP AR with two reference station networks, with typical baselines of up to 400 and 800 km, respectively. Tests show that about of the EWL and WL phase biases of BDS have a consistency of better than 0.1 cycle, and this value decreases to 80 % for the L1 phase bias in Experiment I, while all the solutions of Experiment II have a similar RMS of about 0.12 cycles. In addition, the repeatability of the daily mean phase biases agrees to 0.093 cycles and 0.095 cycles for EWL and WL on average, which is much smaller than the 0.20 cycles of L1. To assess the improvement of fixed PPP brought by applying the third-frequency signal as well as the above phase biases, various ambiguity fixing strategies are considered in the numerical demonstration. It is shown that the impact of the additional signal is almost negligible when only the float solution is involved. It is also shown that fixing EWL and WL together, as opposed to single ambiguity fixing, leads to an improvement in PPP accuracy of about on average. Attributed to the efficient
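
    For context, the wide-lane ambiguities that fix first in such a cascade are classically obtained from the geometry- and ionosphere-free Melbourne–Wübbena combination. A generic sketch (BDS-2 B1I/B2I frequencies assumed; this is not the authors' ZD phase-bias estimation itself):

    ```python
    C = 299_792_458.0
    F1, F2 = 1561.098e6, 1207.140e6      # assumed BDS-2 B1I, B2I (Hz)
    LAM_WL = C / (F1 - F2)               # wide-lane wavelength, ~0.847 m

    def widelane_ambiguity(L1, L2, P1, P2):
        """Float wide-lane ambiguity (cycles) for one epoch from the
        Melbourne-Wubbena combination; L1, L2 are carrier phases (m),
        P1, P2 are code ranges (m)."""
        phase_wl = (F1 * L1 - F2 * L2) / (F1 - F2)   # wide-lane phase (m)
        code_nl = (F1 * P1 + F2 * P2) / (F1 + F2)    # narrow-lane code (m)
        return (phase_wl - code_nl) / LAM_WL
    ```

    Averaging this float value over many epochs and rounding to the nearest integer gives the fixed WL ambiguity; the EWL case is analogous with the second and third frequencies.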

  12. Development of a study design and implementation plan to estimate juvenile salmon survival in Lookout Point Reservoir and other reservoirs of the Willamette Project, western Oregon

    USGS Publications Warehouse

    Kock, Tobias J.; Perry, Russell W.; Monzyk, Fred R.; Pope, Adam C.; Plumb, John M.

    2016-12-23

    Survival estimates for juvenile salmon and steelhead fry in reservoirs impounded by high-head dams are coveted by resource managers. However, this information is difficult to obtain because these fish are too small for tagging using conventional methods such as passive integrated transponders or radio or acoustic transmitters. We developed a study design and implementation plan for a pilot evaluation that would assess the performance of two models for estimating fry survival in a field setting. The first model is a staggered-release recovery model described by Skalski and others (2009) and Skalski (2016). The second model is a parentage-based tagging N-mixture model that was developed and described in this document. Both models are conceptually and statistically sound, but neither has been evaluated in the field. In this document we provide an overview of a proposed study for 2017 in Lookout Point Reservoir, Oregon, that will evaluate survival of Chinook salmon fry using both models. This approach will allow us to test each model and compare survival estimates, to determine model performance and better understand these study designs using field-collected data.

  13. The application of iterative closest point (ICP) registration to improve 3D terrain mapping estimates using the flash 3D ladar system

    NASA Astrophysics Data System (ADS)

    Woods, Jack; Armstrong, Ernest E.; Armbruster, Walter; Richmond, Richard

    2010-04-01

    The primary purpose of this research was to develop an effective means of creating a 3D terrain map image (point cloud) in GPS-denied regions from a sequence of co-boresighted visible and 3D LADAR images. Both the visible and 3D LADAR cameras were hard-mounted to a vehicle. The vehicle was then driven around the streets of an abandoned village used as a training facility by the German Army, and imagery was collected. The visible and 3D LADAR images were then fused and 3D registration performed using a variation of the Iterative Closest Point (ICP) algorithm. The ICP algorithm is widely used for spatial and geometric alignment of 3D imagery, producing a set of rotation and translation transformations between two 3D images. The ICP rotation and translation information obtained from registering the fused visible and 3D LADAR imagery was then used to calculate the x-y plane, range and intensity (xyzi) coordinates of various structures (buildings, vehicles, trees, etc.) along the driven path. The xyzi coordinate information was then combined to create a 3D terrain map (point cloud). In this paper, we describe the development and application of 3D imaging techniques (most specifically the ICP algorithm) used to improve spatial, range and intensity estimates of imagery collected during urban terrain mapping using a co-boresighted, commercially available digital video camera with a focal plane of 640×480 pixels and a 3D FLASH LADAR. Various representations of the reconstructed point clouds for the drive-through data are also presented.
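
    A minimal sketch of a point-to-point ICP iteration of the kind referred to above, assuming NumPy and SciPy: nearest-neighbour correspondences followed by the closed-form SVD (Kabsch) solution for the rigid rotation and translation:

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def best_rigid_transform(src, dst):
        """Least-squares rotation R and translation t mapping paired
        points src onto dst (Kabsch/SVD solution)."""
        cs, cd = src.mean(0), dst.mean(0)
        H = (src - cs).T @ (dst - cd)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                  # D guards against reflections
        return R, cd - R @ cs

    def icp(src, dst, iters=30):
        """Basic point-to-point ICP: match each source point to its nearest
        destination point, solve for R and t, apply, and repeat."""
        tree = cKDTree(dst)
        cur = src.copy()
        for _ in range(iters):
            _, idx = tree.query(cur)
            R, t = best_rigid_transform(cur, dst[idx])
            cur = cur @ R.T + t
        return cur

    # toy check: perturb a cloud by a small rigid motion and re-align it
    rng = np.random.default_rng(0)
    dst = rng.random((500, 3))
    th = 0.1
    R0 = np.array([[np.cos(th), -np.sin(th), 0],
                   [np.sin(th),  np.cos(th), 0],
                   [0, 0, 1]])
    src = dst @ R0.T + np.array([0.05, -0.02, 0.01])
    print(np.abs(icp(src, dst) - dst).max())   # small residual after alignment
    ```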

  14. Point estimates in phylogenetic reconstructions

    PubMed Central

    Benner, Philipp; Bačák, Miroslav; Bourguignon, Pierre-Yves

    2014-01-01

    Motivation: The construction of statistics for summarizing posterior samples returned by a Bayesian phylogenetic study has so far been hindered by the poor geometric insights available into the space of phylogenetic trees, and ad hoc methods such as the derivation of a consensus tree make up for the ill-definition of the usual concept of posterior mean, while bootstrap methods mitigate the absence of a sound concept of variance. While yielding satisfactory results with sufficiently concentrated posterior distributions, such methods fall short of providing a faithful summary of posterior distributions if the data do not offer compelling evidence for a single topology. Results: Building upon previous work of Billera et al., summary statistics such as the sample mean, median and variance are defined as the Fréchet mean, geometric median and Fréchet variance, respectively. Their computation is enabled by recently published works and embeds an algorithm for computing shortest paths in the space of trees. Studying the phylogeny of a set of plants, where several tree topologies occur in the posterior sample, the posterior mean correctly balances the contributions from the different topologies, where a consensus tree would be biased. Comparisons of the posterior mean, median and consensus trees with the ground truth using simulated data also reveal the benefits of a sound averaging method when reconstructing phylogenetic trees. Availability and implementation: We provide two independent implementations of the algorithm for computing Fréchet means, geometric medians and variances in the space of phylogenetic trees. TFBayes: https://github.com/pbenner/tfbayes, TrAP: https://github.com/bacak/TrAP. Contact: philipp.benner@mis.mpg.de PMID:25161244

  15. A Method to Estimate the Probability That Any Individual Cloud-to-Ground Lightning Stroke Was Within Any Radius of Any Point

    NASA Technical Reports Server (NTRS)

    Huddleston, Lisa L.; Roeder, William P.; Merceret, Francis J.

    2010-01-01

    A new technique has been developed to estimate the probability that a nearby cloud-to-ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force Station.
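
    A hedged sketch of the computation this describes, by Monte Carlo rather than the deterministic integration the authors use: integrate the bivariate Gaussian stroke-location density over a disk of radius r around an arbitrary point (NumPy assumed; the numbers are illustrative):

    ```python
    import numpy as np

    def prob_within_radius(mu, cov, p, r, n=200_000, seed=0):
        """Probability that a stroke with bivariate Gaussian location error
        (mean mu, covariance cov) fell within radius r of point p."""
        rng = np.random.default_rng(seed)
        samples = rng.multivariate_normal(mu, cov, size=n)
        return np.mean(np.sum((samples - p) ** 2, axis=1) <= r ** 2)

    mu = np.array([0.0, 0.0])                     # most likely stroke location (km)
    cov = np.array([[0.25, 0.10],                 # error-ellipse covariance (km^2)
                    [0.10, 0.49]])
    print(prob_within_radius(mu, cov, p=np.array([1.0, 0.5]), r=1.0))
    ```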

  16. A Method to Estimate the Probability that any Individual Cloud-to-Ground Lightning Stroke was Within any Radius of any Point

    NASA Technical Reports Server (NTRS)

    Huddleston, Lisa L.; Roeder, William P.; Merceret, Francis J.

    2011-01-01

    A new technique has been developed to estimate the probability that a nearby cloud-to-ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force Station. Future applications could include forensic meteorology.

  17. High resolution measurements supported by electronic structure calculations of two naphthalene derivatives: [1,5]- and [1,6]-naphthyridine—Estimation of the zero point inertial defect for planar polycyclic aromatic compounds

    SciTech Connect

    Gruet, S. E-mail: manuel.goubet@univ-lille1.fr; Pirali, O.; Goubet, M. E-mail: manuel.goubet@univ-lille1.fr

    2014-06-21

    We adjusted the semi-empirical relations to estimate the zero-point inertial defect (Δ0) of polycyclic aromatic molecules and confirmed the contribution of low-frequency out-of-plane vibrational modes to the GS inertial defects of PAHs, which is indeed a key parameter to validate the analysis of such large molecules.

  18. High resolution measurements supported by electronic structure calculations of two naphthalene derivatives: [1,5]- and [1,6]-naphthyridine--estimation of the zero point inertial defect for planar polycyclic aromatic compounds.

    PubMed

    Gruet, S; Goubet, M; Pirali, O

    2014-06-21

    Polycyclic aromatic hydrocarbon (PAH) molecules are suspected to be present in the interstellar medium and to contribute to the broad and unresolved emission features, the so-called unidentified infrared bands. In the laboratory, very few studies report the rotationally resolved structure of this important class of molecules. In the present work, both experimental and theoretical approaches provide the first accurate determination of the rotational energy levels of two diazanaphthalenes: [1,5]- and [1,6]-naphthyridine. [1,6]-naphthyridine has been studied at high resolution, in the microwave (MW) region using a Fourier transform microwave spectrometer and in the far-infrared (FIR) region using synchrotron-based Fourier transform spectroscopy. The very accurate set of ground state (GS) constants deduced from the analysis of the MW spectrum allowed the analysis of the most intense modes in the FIR (ν38-GS centered at about 483 cm-1 and ν34-GS centered at about 842 cm-1). In contrast with [1,6]-naphthyridine, pure rotation spectroscopy of [1,5]-naphthyridine cannot be performed for symmetry reasons, so the combined study of the two intense FIR modes (ν22-GS centered at about 166 cm-1 and ν18-GS centered at about 818 cm-1) provided the GS and excited state constants. Although the analysis of the very dense rotational patterns of such large molecules remains very challenging, relatively accurate anharmonic density functional theory calculations proved a highly relevant supporting tool for the analysis of both molecules. In addition, the good agreement between the experimental and calculated infrared spectra shows that the present theoretical approach should provide useful data for astrophysical models. Moreover, the inertial defects calculated in the GS (ΔGS) of both molecules exhibit slightly negative values, as previously observed for planar species of this molecular family. We adjusted the semi-empirical relations to estimate the zero-point inertial defect (Δ0) of polycyclic aromatic molecules.

  19. Simple and Fast Continuous Estimation Method of Respiratory Frequency During Sleep using the Number of Extreme Points of Heart Rate Time Series

    NASA Astrophysics Data System (ADS)

    Yoshida, Yutaka; Yokoyama, Kiyoko; Ishii, Naohiro

    It has been reported that the frequency component of the heart rate time series at approximately 0.25 Hz (respiratory sinus arrhythmia, RSA) corresponds to the respiratory frequency. In this paper, we propose a simple method for continuously estimating the respiratory frequency during sleep in real time, using the number of extreme points of the heart rate time series. The equation underlying the method is very simple, and the method can continuously estimate the frequency from a window of about 18 beats. To evaluate the accuracy of the proposed method, the RSA frequency was calculated with it from heart rate time series recorded during supine rest. The minimum error rate, about 13.8%, was observed when the RSA estimate was given a time lag of about 11 s. When the RSA frequency time series was estimated during sleep, it varied regularly during non-REM sleep and irregularly during REM sleep. This agrees with previous reports on respiratory variability during sleep. We therefore consider that the proposed method can be applied to respiratory monitoring systems during sleep.
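
    A minimal sketch of the extreme-point idea, assuming NumPy: each respiratory cycle modulates the heart rate with roughly one local maximum and one local minimum, so the respiratory frequency can be approximated as the number of extrema divided by twice the window duration:

    ```python
    import numpy as np

    def resp_freq_from_extrema(hr, t):
        """hr: beat-to-beat heart rate series; t: beat times (s).
        Counts slope sign changes (extreme points) in the window."""
        d = np.diff(hr)
        n_ext = np.sum(d[:-1] * d[1:] < 0)
        return n_ext / (2.0 * (t[-1] - t[0]))

    # simulated RSA: 70 bpm carrier modulated at 0.25 Hz over ~18 beats
    t = np.cumsum(np.full(18, 60.0 / 70.0))
    hr = 70 + 3 * np.sin(2 * np.pi * 0.25 * t)
    print(resp_freq_from_extrema(hr, t))   # ~0.25 Hz
    ```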

  20. Accurate monotone cubic interpolation

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1991-01-01

    Monotone piecewise cubic interpolants are simple and effective. They are generally third-order accurate, except near strict local extrema, where accuracy degenerates to second order due to the monotonicity constraint. Algorithms for piecewise cubic interpolants that preserve monotonicity as well as uniform third- and fourth-order accuracy are presented. The gain in accuracy is obtained by relaxing the monotonicity constraint in a geometric framework in which the median function plays a crucial role.
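
    For contrast with Huynh's accuracy-preserving algorithms, the following sketch shows the classical monotone cubic Hermite construction whose slope limiter causes the second-order degeneracy at extrema (NumPy assumed; Huynh's methods relax this limiter in a geometric, median-based framework):

    ```python
    import numpy as np

    def minmod(*args):
        """Zero if the arguments disagree in sign, else the smallest in magnitude."""
        s = np.sign(args[0])
        if any(np.sign(a) != s for a in args):
            return 0.0
        return s * min(abs(a) for a in args)

    def monotone_slopes(x, y):
        """Limited node slopes for a monotonicity-preserving cubic Hermite
        interpolant (the classical construction)."""
        d = np.diff(y) / np.diff(x)            # secant slopes
        m = np.empty_like(y)
        m[0], m[-1] = d[0], d[-1]
        for i in range(1, len(y) - 1):
            m[i] = minmod(0.5 * (d[i - 1] + d[i]), 2.0 * d[i - 1], 2.0 * d[i])
        return m

    def hermite_eval(x, y, m, xq):
        """Evaluate the piecewise cubic Hermite interpolant at points xq."""
        i = np.clip(np.searchsorted(x, xq) - 1, 0, len(x) - 2)
        h = x[i + 1] - x[i]
        t = (xq - x[i]) / h
        return ((1 + 2 * t) * (1 - t) ** 2 * y[i] + t * (1 - t) ** 2 * h * m[i]
                + t ** 2 * (3 - 2 * t) * y[i + 1] + t ** 2 * (t - 1) * h * m[i + 1])

    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([0.0, 0.0, 1.0, 2.0, 2.0])
    print(hermite_eval(x, y, monotone_slopes(x, y), np.linspace(0, 4, 9)))
    # values stay within [0, 2]: no overshoot at the flat ends
    ```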

  1. Consistent and powerful graph-based change-point test for high-dimensional data.

    PubMed

    Shi, Xiaoping; Wu, Yuehua; Rao, Calyampudi Radhakrishna

    2017-04-11

    A change-point detection method is proposed that uses a Bayesian-type statistic based on the shortest Hamiltonian path; the change-point is then estimated by ratio cut. A permutation procedure is applied to approximate the significance of the Bayesian-type statistic. The change-point test is proven to be consistent, and an error probability for the change-point estimate is provided. The test is very powerful against alternatives with a shift in variance and is accurate in change-point estimation, as shown in simulation studies. Its applicability to tracking cell division is illustrated.
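
    The permutation step can be illustrated with a generic change-point statistic; the sketch below uses a simple maximal two-sample z statistic for a mean shift rather than the authors' Hamiltonian-path-based statistic, but calibrates significance the same way (NumPy assumed):

    ```python
    import numpy as np

    def changepoint_pvalue(x, n_perm=199, seed=0):
        """Permutation p-value for a single change-point in a 1-D series,
        using max over t of the absolute two-sample z statistic."""
        rng = np.random.default_rng(seed)

        def stat(z):
            n, best = len(z), 0.0
            for t in range(2, n - 2):
                a, b = z[:t], z[t:]
                s = abs(a.mean() - b.mean()) / np.sqrt(
                    a.var() / t + b.var() / (n - t) + 1e-12)
                best = max(best, s)
            return best

        obs = stat(x)
        perms = [stat(rng.permutation(x)) for _ in range(n_perm)]
        return (1 + sum(p >= obs for p in perms)) / (n_perm + 1)

    x = np.r_[np.random.default_rng(1).normal(0.0, 1, 60),
              np.random.default_rng(2).normal(1.2, 1, 60)]
    print(changepoint_pvalue(x))   # small p-value: a change is detected
    ```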

  2. Application of the N-point moving average method for brachial pressure waveform-derived estimation of central aortic systolic pressure.

    PubMed

    Shih, Yuan-Ta; Cheng, Hao-Min; Sung, Shih-Hsien; Hu, Wei-Chih; Chen, Chen-Huan

    2014-04-01

    The N-point moving average (NPMA) is a mathematical low-pass filter that can smooth peaked noninvasively acquired radial pressure waveforms to estimate central aortic systolic pressure using a common denominator of N/4 (where N=the acquisition sampling frequency). The present study investigated whether the NPMA method can be applied to brachial pressure waveforms. In the derivation group, simultaneously recorded invasive high-fidelity brachial and central aortic pressure waveforms from 40 subjects were analyzed to identify the best common denominator. In the validation group, the NPMA method with the obtained common denominator was applied on noninvasive brachial pressure waveforms of 100 subjects. Validity was tested by comparing the noninvasive with the simultaneously recorded invasive central aortic systolic pressure. Noninvasive brachial pressure waveforms were calibrated to the cuff systolic and diastolic blood pressures. In the derivation study, an optimal denominator of N/6 was identified for NPMA to derive central aortic systolic pressure. The mean difference between the invasively/noninvasively estimated (N/6) and invasively measured central aortic systolic pressure was 0.1±3.5 and -0.6±7.6 mm Hg in the derivation and validation study, respectively. It satisfied the Association for the Advancement of Medical Instrumentation standard of 5±8 mm Hg. In conclusion, this method for estimating central aortic systolic pressure using either invasive or noninvasive brachial pressure waves requires a common denominator of N/6. By integrating the NPMA method into the ordinary oscillometric blood pressure determining process, convenient noninvasive central aortic systolic pressure values could be obtained with acceptable accuracy.
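
    A minimal sketch of the NPMA smoothing itself, assuming NumPy and a synthetic waveform: with an acquisition sampling frequency of N Hz, the waveform is smoothed with an N/6-point moving average, and the peak of the smoothed, cuff-calibrated brachial wave is read off as the central aortic systolic pressure estimate:

    ```python
    import numpy as np

    def npma_central_sbp(waveform, fs):
        """Apply the N/6 moving-average filter (N = sampling frequency fs)
        and return the peak of the smoothed waveform."""
        n = max(1, round(fs / 6))
        smoothed = np.convolve(waveform, np.ones(n) / n, mode="same")
        return smoothed.max()

    # synthetic, cuff-calibrated brachial-like wave (illustrative only)
    fs = 512                                  # Hz
    t = np.arange(0, 2, 1 / fs)
    wave = 90 + 30 * np.sin(2 * np.pi * 1.2 * t) + 8 * np.sin(2 * np.pi * 6 * t)
    print(npma_central_sbp(wave, fs))         # central SBP estimate (mm Hg)
    ```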

  3. Performance evaluation of GNSS-TEC estimation techniques at the grid point in middle and low latitudes during different geomagnetic conditions

    NASA Astrophysics Data System (ADS)

    Abe, O. E.; Otero Villamide, X.; Paparini, C.; Radicella, S. M.; Nava, B.; Rodríguez-Bouza, M.

    2016-11-01

    Global Navigation Satellite Systems (GNSS) have become a powerful tool used in surveying and mapping, air and maritime navigation, ionospheric/space weather research and other applications. However, in some cases their maximum efficiency cannot be attained because of errors associated with the system measurements, caused mainly by the dispersive nature of the ionosphere. The ionosphere is commonly represented by the total number of electrons along the signal path at a particular height, known as the Total Electron Content (TEC). There are many methods to estimate TEC, but their outputs are not uniform, which could be due to peculiarities in how the biases in the observables (measurements) are characterized, and sometimes to the influence of the mapping function. Errors in TEC estimation could lead to wrong conclusions, which could be critical in safety-of-life applications. This work investigated the performance of Ciraolo's and Gopi's GNSS-TEC calibration techniques during 5 geomagnetically quiet and disturbed conditions in the month of October 2013, at grid points located in low and middle latitudes. The data used were obtained from GNSS ground-based receivers located at Borriana in Spain (40° N, 0° E; mid latitude) and Accra in Ghana (5.50° N, -0.20° E; low latitude). The results of the calibrated TEC are compared with the TEC obtained from the European Geostationary Navigation Overlay System Processing Set (EGNOS PS) TEC algorithm, which is considered as reference data. The TEC derived from Global Ionospheric Maps (GIM) through the International GNSS Service (IGS) was also examined at the same grid points. The results obtained in this work show that Ciraolo's calibration technique (a calibration technique based on carrier-phase measurements only) estimates TEC better at middle latitude than Gopi's technique (a calibration technique based on code and carrier-phase measurements). At the same
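
    Both techniques start from the standard dual-frequency geometry-free observable; as a reference, slant TEC follows from the interfrequency code delay as below (GPS L1/L2 frequencies assumed; removing the satellite and receiver biases is precisely where the calibration techniques differ):

    ```python
    F1, F2 = 1575.42e6, 1227.60e6          # GPS L1, L2 (Hz)

    def slant_tec_tecu(p1, p2):
        """p1, p2: pseudoranges (m). Returns slant TEC in TECU
        (1 TECU = 1e16 electrons/m^2), biases not yet removed."""
        k = (F1**2 * F2**2) / (40.3 * (F1**2 - F2**2))
        return k * (p2 - p1) / 1e16

    print(slant_tec_tecu(22_000_000.0, 22_000_005.2))  # ~5.2 m delay -> ~50 TECU
    ```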

  4. Performance evaluation of GNSS-TEC estimation techniques at the grid point in middle and low latitudes during different geomagnetic conditions

    NASA Astrophysics Data System (ADS)

    Abe, O. E.; Otero Villamide, X.; Paparini, C.; Radicella, S. M.; Nava, B.; Rodríguez-Bouza, M.

    2017-04-01

    Global Navigation Satellite Systems (GNSS) have become a powerful tool used in surveying and mapping, air and maritime navigation, ionospheric/space weather research and other applications. However, in some cases their maximum efficiency cannot be attained because of errors associated with the system measurements, caused mainly by the dispersive nature of the ionosphere. The ionosphere is commonly represented by the total number of electrons along the signal path at a particular height, known as the Total Electron Content (TEC). There are many methods to estimate TEC, but their outputs are not uniform, which could be due to peculiarities in how the biases in the observables (measurements) are characterized, and sometimes to the influence of the mapping function. Errors in TEC estimation could lead to wrong conclusions, which could be critical in safety-of-life applications. This work investigated the performance of Ciraolo's and Gopi's GNSS-TEC calibration techniques during 5 geomagnetically quiet and disturbed conditions in the month of October 2013, at grid points located in low and middle latitudes. The data used were obtained from GNSS ground-based receivers located at Borriana in Spain (40°N, 0°E; mid latitude) and Accra in Ghana (5.50°N, 0.20°E; low latitude). The results of the calibrated TEC are compared with the TEC obtained from the European Geostationary Navigation Overlay System Processing Set (EGNOS PS) TEC algorithm, which is considered as reference data. The TEC derived from Global Ionospheric Maps (GIM) through the International GNSS Service (IGS) was also examined at the same grid points. The results obtained in this work show that Ciraolo's calibration technique (a calibration technique based on carrier-phase measurements only) estimates TEC better at middle latitude than Gopi's technique (a calibration technique based on code and carrier-phase measurements). At the same time

  5. Accurate quantum chemical calculations

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics are discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  6. Ionospheric corrections estimation in a local GNSS permanent stations network: improvement of Code Point Positioning at sub-metric accuracy level

    NASA Astrophysics Data System (ADS)

    Brunini, C.; Crespi, M.; Mazzoni, A.

    2008-12-01

    It is well known that GNSS permanent networks for real-time positioning were mainly designed to generate and transmit products for RTK (or Network-RTK) positioning. In this context, RTK products are restricted to users equipped with geodetic-class receivers. This work is a first step toward using a local network of permanent GNSS stations to generate and transmit real-time products that could remarkably improve positioning accuracy for C/A receiver users. A simple experiment was carried out based on 3 consecutive days of data from 3 permanent stations that belong to the RESNAP-GPS network (w3.uniroma1.it/resnap-gps), located in the Lazio Region (Central Italy) and managed by DITS-Area di Geodesia e Geomatica, Sapienza University of Rome. In the first step, the RINEX files were corrected for the differential code biases according to IGS recommendations and then processed with the Bernese 5.0 CODSPP module (single point positioning using code measurements), using IGS precise ephemerides and clocks. One position per epoch (every 30 seconds) was estimated for P1 and for the ionosphere-free combination (P3). The accuracy obtained with the P3 combination for the vertical component, which ranged from -1 to +1 m, was taken as the reference for the following discussion. For P1 observations, the vertical coordinate errors showed a typical signature due to ionospheric activity: higher errors during day-time (up to 5 m) and smaller ones during night-time (around 1.5 m). In order to improve the accuracy of the P1 solution, ionospheric corrections were estimated using the La Plata Ionospheric Model, based on the dual-frequency observations from the RESNAP-GPS network. These corrections were applied to the RINEX files of a probing station located within the reference network. With this procedure, the vertical coordinate errors were reduced to the range from -0.8 to 0.8 m. This methodological approach shows the possibility to remarkably improve the real time positioning based on Code
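
    For reference, the ionosphere-free combination P3 used above exploits the 1/f² scaling of the first-order ionospheric delay; a minimal sketch (GPS L1/L2 frequencies assumed):

    ```python
    F1, F2 = 1575.42e6, 1227.60e6          # GPS L1, L2 (Hz)

    def iono_free(p1, p2):
        """P3 = (f1^2*P1 - f2^2*P2) / (f1^2 - f2^2). Since P1 = rho + q/f1^2
        and P2 = rho + q/f2^2 to first order, the combination returns rho,
        the ionosphere-free range (same units as P1, P2)."""
        return (F1**2 * p1 - F2**2 * p2) / (F1**2 - F2**2)
    ```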

  7. Ionospheric corrections estimation in a local GNSS permanent stations network: improvement of Code Point Positioning at sub-metric accuracy level

    NASA Astrophysics Data System (ADS)

    Brunini, C.; Crespi, M.; Mazzoni, A.

    2009-04-01

    It is well known that GNSS permanent networks for real-time positioning were mainly designed to generate and transmit products for RTK (or Network-RTK) positioning. In this context, RTK products are restricted to users equipped with geodetic-class receivers. This work is a first step toward using a local network of permanent GNSS stations to generate and transmit real-time products that could remarkably improve positioning accuracy for C/A receiver users. A simple experiment was carried out based on 3 consecutive days of data from 3 permanent stations that belong to the RESNAP-GPS network (w3.uniroma1.it/resnap-gps), located in the Lazio Region (Central Italy) and managed by DITS-Area di Geodesia e Geomatica, Sapienza University of Rome. In the first step, the RINEX files were corrected for the differential code biases according to IGS recommendations and then processed with the Bernese 5.0 CODSPP module (single point positioning using code measurements), using IGS precise ephemerides and clocks. One position per epoch (every 30 seconds) was estimated for P1 and for the ionosphere-free combination (P3). The accuracy obtained with the P3 combination for the vertical component, which ranged from -1 to +1 m, was taken as the reference for the following discussion. For P1 observations, the vertical coordinate errors showed a typical signature due to ionospheric activity: higher errors during day-time (up to 5 m) and smaller ones during night-time (around 1.5 m). In order to improve the accuracy of the P1 solution, ionospheric corrections were estimated using the La Plata Ionospheric Model, based on the dual-frequency observations from the RESNAP-GPS network. These corrections were applied to the RINEX files of a probing station located within the reference network. With this procedure, the vertical coordinate errors were reduced to the range from -0.8 to 0.8 m. This methodological approach shows the possibility to remarkably improve the real time positioning based on Code

  8. A Method to Estimate the Probability that Any Individual Cloud-to-Ground Lightning Stroke was Within Any Radius of Any Point

    NASA Technical Reports Server (NTRS)

    Huddleston, Lisa; Roeder, WIlliam P.; Merceret, Francis J.

    2011-01-01

    A new technique has been developed to estimate the probability that a nearby cloud-to-ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force station. Future applications could include forensic meteorology.

  9. Volume 2 - Point Sources

    EPA Pesticide Factsheets

    Point source emission reference materials from the Emissions Inventory Improvement Program (EIIP). Provides point source guidance on planning, emissions estimation, data collection, inventory documentation and reporting, and quality assurance/quality control.

  10. Application of modified export coefficient method on the load estimation of non-point source nitrogen and phosphorus pollution of soil and water loss in semiarid regions.

    PubMed

    Wu, Lei; Gao, Jian-en; Ma, Xiao-yi; Li, Dan

    2015-07-01

    The Chinese Loess Plateau is considered one of the most serious soil-loss regions in the world; its annual sediment output accounts for 90% of the total sediment load of the Yellow River. Most of the Loess Plateau shows the typical characteristic of "soil and water flowing together", and water flow in this area carries a high sand content. Serious soil loss also results in nitrogen and phosphorus losses from the soil. The special water and soil processes of the Loess Plateau mean that the loss mechanisms of water, sediment, nitrogen, and phosphorus differ from one another and differ greatly from those of other areas of China. In this study, a modified export coefficient method incorporating a rainfall erosivity factor was proposed to simulate and evaluate the non-point source (NPS) nitrogen and phosphorus loss load caused by soil and water loss in the Yanhe River basin of the hilly and gully area of the Loess Plateau. The results indicate that (1) compared with the traditional export coefficient method, the annual differences in NPS total nitrogen (TN) and total phosphorus (TP) load after considering the rainfall erosivity factor are obvious; the modified method is more in line with the general law of NPS pollution formation in a watershed and reflects the annual variability of NPS pollution more accurately. (2) Under both the traditional and modified methods, the annual changes of NPS TN and TP load in the four counties (districts) showed similar trends from 1999 to 2008; the load emission intensity is closely related not only to rainfall intensity but also to the regional distribution of land use and other pollution sources. (3) The output structure, source composition, and contribution rate of NPS pollution load under the modified method are basically the same as under the traditional method. The average output structure of TN from land use and rural life is about 66.5 and 17.1%, and that of TP about 53.8 and 32.7%; the maximum source composition of TN (59%) is farmland; the maximum source
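
    The general shape of such a modified export coefficient model can be sketched as below; the coefficients and areas are purely illustrative, not the paper's values. The rainfall erosivity factor enters as a yearly weight on the classical load sum L = Σ E_i·A_i:

    ```python
    # Hedged sketch of a rainfall-erosivity-modified export coefficient model:
    # annual NPS load L_j = alpha_j * sum_i(E_i * A_i), where E_i is the export
    # coefficient of source i, A_i its area, and alpha_j weights year j by its
    # rainfall erosivity relative to the long-term mean.
    def nps_load(sources, erosivity_year, erosivity_mean):
        """sources: list of (export coefficient kg/ha/yr, area ha)."""
        alpha = erosivity_year / erosivity_mean
        return alpha * sum(e * a for e, a in sources)

    farmland = (29.0, 12_000.0)     # hypothetical TN coefficient and area
    grassland = (6.0, 30_000.0)     # hypothetical
    print(nps_load([farmland, grassland],
                   erosivity_year=1800.0, erosivity_mean=1500.0))
    # kg TN exported in a wetter-than-average year
    ```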

  11. Real-time estimation of prostate tumor rotation and translation with a kV imaging system based on an iterative closest point algorithm

    NASA Astrophysics Data System (ADS)

    Nasehi Tehrani, Joubin; O'Brien, Ricky T.; Rugaard Poulsen, Per; Keall, Paul

    2013-12-01

    Previous studies have shown that during cancer radiotherapy a small translation or rotation of the tumor can lead to errors in dose delivery. Current best practice in radiotherapy accounts for tumor translations, but is unable to address rotation due to the lack of a reliable real-time estimate. We have developed a method based on the iterative closest point (ICP) algorithm that can compute rotation from kilovoltage x-ray images acquired during radiation treatment delivery. A total of 11 748 kilovoltage (kV) images acquired from ten patients (one fraction for each patient) were used to evaluate our tumor rotation algorithm. For each kV image, the three-dimensional coordinates of three fiducial markers inside the prostate were calculated. The three-dimensional coordinates were used as input to the ICP algorithm to calculate the real-time tumor rotation and translation around three axes. The results show that the root mean square error for real-time calculation of tumor displacement improved from a mean of 0.97 mm with stand-alone translation to a mean of 0.16 mm when real-time rotation and translation were both estimated with the ICP algorithm. The standard deviation (SD) of rotation for the ten patients was 2.3°, 0.89° and 0.72° around the right-left (RL), anterior-posterior (AP) and superior-inferior (SI) directions, respectively. The correlation between all six degrees of freedom showed that the highest correlation belonged to AP and SI translation, with a correlation of 0.67. The second highest correlation was between rotation around RL and rotation around AP, with a correlation of -0.33. Our real-time algorithm for calculation of rotation also confirms previous studies showing that the maximum SD belongs to AP translation and rotation around RL. ICP is a reliable and fast algorithm for estimating real-time tumor rotation, which could create a pathway to investigational clinical treatment studies requiring real

  12. Accurate spectral color measurements

    NASA Astrophysics Data System (ADS)

    Hiltunen, Jouni; Jaeaeskelaeinen, Timo; Parkkinen, Jussi P. S.

    1999-08-01

    Surface color measurement is of importance in a very wide range of industrial applications including paint, paper, printing, photography, textiles, plastics and so on. For demanding color measurements a spectral approach is often needed. One can measure a color spectrum with a spectrophotometer using calibrated standard samples as a reference. Because it is impossible to define absolute color values of a sample, we always work with approximations. The human eye can perceive color differences as small as 0.5 CIELAB units and thus distinguish millions of colors. This 0.5-unit difference should be the goal for precise color measurements. This limit is not a problem if we only want to measure the color difference between two samples, but if we also want exact color coordinate values, accuracy problems arise: the values from two instruments can be astonishingly different. The accuracy of an instrument used in color measurement may depend on various errors such as photometric non-linearity, wavelength error, integrating sphere dark-level error, and integrating sphere error in both specular-included and specular-excluded modes. Correction formulas should therefore be used to get more accurate results. Another question is how many channels, i.e. wavelengths, we use to measure a spectrum. It is obvious that the sampling interval should be short to get more precise results. Furthermore, the result we get is always a compromise between measuring time, conditions and cost. Sometimes we have to use a portable system, or the shape and size of the samples make it impossible to use sensitive equipment. In this study a small set of calibrated color tiles measured with the Perkin Elmer Lambda 18 and the Minolta CM-2002 spectrophotometers are compared. In the paper we explain the typical error sources of spectral color measurements and show the accuracy demands a good colorimeter should meet.
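
    The CIELAB difference quoted above is the Euclidean distance in L*a*b* space; a minimal example (values illustrative):

    ```python
    import math

    def delta_e_ab(lab1, lab2):
        """Delta E*ab: Euclidean distance between two (L*, a*, b*) triples;
        ~0.5 is the perceptibility threshold cited above."""
        return math.dist(lab1, lab2)

    print(delta_e_ab((52.0, 4.2, -3.1), (52.3, 4.0, -2.8)))  # ~0.47: barely visible
    ```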

  13. Curie Point Depth, Geothermal Gradient and Heat-Flow Estimation and Geothermal Anomaly Exploration from Integrated Analysis of Aeromagnetic and Gravity Data on the Sabalan Area, NW Iran

    NASA Astrophysics Data System (ADS)

    Afshar, A.; Norouzi, G. H.; Moradzadeh, A.; Riahi, M. A.; Porkhial, S.

    2017-03-01

    Prospecting of the geothermal resources in the northwest of Iran, conducted in 1975, revealed several promising areas and introduced the Sabalan geothermal field as a priority for further studies. Sabalan Mt., representing the Sabalan geothermal field, is a large stratovolcano consisting of an extensive central edifice built on a probable tectonic horst of underlying intrusive and effusive volcanic rocks. In this study, Curie point depth (CPD), geothermal gradient and heat-flow maps were constructed from spectral analysis of the aeromagnetic data for the NW of Iran. The top of the geothermal resource (i.e., the thickness of the overburden) was evaluated by applying the Euler deconvolution method to the residual gravity data. The thickness of the geothermal resource was calculated by subtracting the Euler depths from the CPDs in the geothermal anomalous region. The geothermal anomalous region was defined by heat-flow values greater than 150 mW/m2. CPDs in the investigated area are found between 8.8 km in the Sabalan geothermal field and 14.1 km in the northeast. The results showed that the geothermal gradient is higher than 62 °C/km and the heat flow higher than 152 mW/m2 in the geothermal manifestation region; the thickness of the geothermal resource was estimated to vary between 5.4 and 9.1 km. These results are consistent with drilling and other geological information. The findings indicate that the CPDs agree with the earthquake distribution and that the type of thermal spring is related to the depth of the top of the geothermal resource.
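
    A hedged sketch of the spectral centroid method (Tanaka et al., 1999) commonly used for such CPD estimates, assuming NumPy: the top depth Zt comes from the high-wavenumber slope of ln √P, the centroid depth Z0 from the low-wavenumber slope of ln(√P/k), and the basal (Curie) depth is Zb = 2·Z0 − Zt:

    ```python
    import numpy as np

    def curie_depth(k, power, lo, hi):
        """k: radial wavenumbers (rad/km); power: radially averaged power
        spectrum; lo, hi: boolean masks selecting the low-k and high-k fit
        ranges. Returns the basal (Curie) depth in km."""
        amp = np.sqrt(power)
        z0 = -np.polyfit(k[lo], np.log(amp[lo] / k[lo]), 1)[0]   # centroid depth
        zt = -np.polyfit(k[hi], np.log(amp[hi]), 1)[0]           # top depth
        return 2.0 * z0 - zt

    # synthetic spectrum for a slab with Zt = 4 km, Zb = 14 km
    k = np.linspace(0.01, 1.0, 200)
    power = np.exp(-2 * k * 4.0) * (1 - np.exp(-k * 10.0)) ** 2
    print(curie_depth(k, power, lo=k < 0.06, hi=k > 0.5))
    # ~13.4 km: close to the true 14 km; the low-k approximation biases it low
    ```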

  14. Estimating contaminant mass discharge: A field comparison of the multilevel point measurement and the integral pumping investigation approaches and their uncertainties

    NASA Astrophysics Data System (ADS)

    Béland-Pelletier, Caroline; Fraser, Michelle; Barker, Jim; Ptak, Thomas

    2011-03-01

    In this field study, two approaches to assessing contaminant mass discharge were compared: the sampling of multilevel wells (MLS) and the integral groundwater investigation (or integral pumping test, IPT), which makes use of the concentration-time series obtained from pumping wells. The MLS approach used concentrations, hydraulic conductivity and gradient rather than direct chemical flux measurements, while the IPT made use of a simplified analytical inversion. The two approaches were applied at a control plane located approximately 40 m downgradient of a gasoline source at Canadian Forces Base Borden, Ontario, Canada. The methods yielded similar estimates of the mass discharging across the control plane. The sources of uncertainty in the mass discharge of each approach were evaluated, including the uncertainties inherent in the underlying assumptions and procedures. The maximum uncertainty of the MLS method was about 67%, and about 28% for the IPT method in this specific field situation. For the MLS method, the largest relative uncertainty (62%) was attributed to the limited sampling density (0.63 points/m2), through a novel comparison with a denser sampling grid nearby. A five-fold increase of the sampling grid density would have been required to reduce the overall relative uncertainty of the MLS method to about the same level as that of the IPT method. Uncertainty in the complete coverage of the control plane provided the largest relative uncertainty (37%) in the IPT method. While the MLS and IPT methods for assessing contaminant mass discharge are attractive assessment tools, the large relative uncertainty of either method found for this reasonably well-monitored and simple aquifer suggests that results in more complex plumes in more heterogeneous aquifers should be viewed with caution.
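
    A minimal sketch of the MLS-side computation described above, with hypothetical numbers: each multilevel sampling point contributes its concentration times the local Darcy flux q = K·i over its panel of the control plane:

    ```python
    def mass_discharge(points, gradient):
        """points: list of (concentration g/m^3, hydraulic conductivity m/d,
        panel area m^2); gradient: dimensionless hydraulic gradient i.
        Returns mass discharge across the control plane in g/d."""
        return sum(c * (k * gradient) * a for c, k, a in points)

    pts = [(12.0, 8.6, 1.6), (3.5, 6.1, 1.6), (0.4, 7.9, 1.6)]  # hypothetical
    print(mass_discharge(pts, gradient=0.0043))
    ```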

  15. Curie Point Depth, Geothermal Gradient and Heat-Flow Estimation and Geothermal Anomaly Exploration from Integrated Analysis of Aeromagnetic and Gravity Data on the Sabalan Area, NW Iran

    NASA Astrophysics Data System (ADS)

    Afshar, A.; Norouzi, G. H.; Moradzadeh, A.; Riahi, M. A.; Porkhial, S.

    2016-12-01

    Prospecting of the geothermal resources in the northwest of Iran, conducted in 1975, revealed several promising areas and introduced the Sabalan geothermal field as a priority for further studies. Sabalan Mt., representing the Sabalan geothermal field, is a large stratovolcano consisting of an extensive central edifice built on a probable tectonic horst of underlying intrusive and effusive volcanic rocks. In this study, Curie point depth (CPD), geothermal gradient and heat-flow maps were constructed from spectral analysis of the aeromagnetic data for the NW of Iran. The top of the geothermal resource (i.e., the thickness of the overburden) was evaluated by applying the Euler deconvolution method to the residual gravity data. The thickness of the geothermal resource was calculated by subtracting the Euler depths from the CPDs in the geothermal anomalous region. The geothermal anomalous region was defined by heat-flow values greater than 150 mW/m2. CPDs in the investigated area are found between 8.8 km in the Sabalan geothermal field and 14.1 km in the northeast. The results showed that the geothermal gradient is higher than 62 °C/km and the heat flow higher than 152 mW/m2 in the geothermal manifestation region; the thickness of the geothermal resource was estimated to vary between 5.4 and 9.1 km. These results are consistent with drilling and other geological information. The findings indicate that the CPDs agree with the earthquake distribution and that the type of thermal spring is related to the depth of the top of the geothermal resource.

  16. Estimating contaminant mass discharge: a field comparison of the multilevel point measurement and the integral pumping investigation approaches and their uncertainties.

    PubMed

    Béland-Pelletier, Caroline; Fraser, Michelle; Barker, Jim; Ptak, Thomas

    2011-03-25

    In this field study, two approaches to assessing contaminant mass discharge were compared: the sampling of multilevel wells (MLS) and the integral groundwater investigation (or integral pumping test, IPT), which makes use of the concentration-time series obtained from pumping wells. The MLS approach used concentrations, hydraulic conductivity and gradient rather than direct chemical flux measurements, while the IPT made use of a simplified analytical inversion. The two approaches were applied at a control plane located approximately 40 m downgradient of a gasoline source at Canadian Forces Base Borden, Ontario, Canada. The methods yielded similar estimates of the mass discharging across the control plane. The sources of uncertainty in the mass discharge of each approach were evaluated, including the uncertainties inherent in the underlying assumptions and procedures. The maximum uncertainty of the MLS method was about 67%, and about 28% for the IPT method in this specific field situation. For the MLS method, the largest relative uncertainty (62%) was attributed to the limited sampling density (0.63 points/m2), through a novel comparison with a denser sampling grid nearby. A five-fold increase of the sampling grid density would have been required to reduce the overall relative uncertainty of the MLS method to about the same level as that of the IPT method. Uncertainty in the complete coverage of the control plane provided the largest relative uncertainty (37%) in the IPT method. While the MLS and IPT methods for assessing contaminant mass discharge are attractive assessment tools, the large relative uncertainty of either method found for this reasonably well-monitored and simple aquifer suggests that results in more complex plumes in more heterogeneous aquifers should be viewed with caution.

  17. Curie-point depths estimated from fractal spectral analyses of magnetic anomalies in the western United States and northeast Pacific Ocean

    NASA Astrophysics Data System (ADS)

    Wang, J.; Li, C.

    2011-12-01

    We estimate Curie-point depths (Zb) of the western United States and northeast Pacific Ocean by analyzing radially averaged amplitude spectra of magnetic anomalies based on a fractal magnetization model. The amplitude spectrum of source magnetization is proportional to the wavenumber (k) raised to a fractal exponent (-β). We first test whether long-wavelength components are captured appropriately by using variable overlapping windows ranging in size from 75 × 75 km2 to 200 × 200 km2. For each sliding window, the amplitude spectrum is pre-multiplied by the factor k^(-β) prior to computation. We then use the centroid method (Tanaka et al., 1999) to calculate Zb. We find that when the window size approaches 200 × 200 km2, the resolution of the estimated Zb is too low to reveal important geological features. For our study area, fractal exponents larger than 0.6 result in overcorrection. Considering the difficulty of simultaneously inverting for the depths to the top and centroid of the magnetic sources (Zt and Z0, respectively) and β, we fix β = 0.5 for the whole study area. Note that β here is defined for the amplitude spectrum, which is equivalent to 1 for the power spectrum of 2D magnetic sources. Our results show that the estimated Curie depths range from 4 km to 40 km. The average Zb in the northern part of the northeast Pacific Ocean is about 14 km below sea level, and almost the same depths are found at the junction of the active and ancient Cascade arcs and the remnant track of the Yellowstone hotspot. Subduction beneath the North American plate and consequent magmatism can account for the small Zb in the above-mentioned volcanic arc regions. The Mendocino Triple Junction separates the northeast Pacific into northern (mainly consisting of the Explorer, Juan de Fuca and Gorda plates) and southern parts. Both the Zb and the thickness of the magnetic layer in the southern part are larger than those in the northern part. This contrast is due to the fact that the Pacific plate to the south

  18. A rapid and accurate method for the quantitative estimation of natural polysaccharides and their fractions using high performance size exclusion chromatography coupled with multi-angle laser light scattering and refractive index detector.

    PubMed

    Cheong, Kit-Leong; Wu, Ding-Tao; Zhao, Jing; Li, Shao-Ping

    2015-06-26

    In this study, a rapid and accurate method for the quantitative analysis of natural polysaccharides and their different fractions was developed. First, high performance size exclusion chromatography (HPSEC) was utilized to separate the natural polysaccharides. Then the molecular masses of their fractions were determined by multi-angle laser light scattering (MALLS). Finally, quantification of the polysaccharides or their fractions was performed based on their response to a refractive index detector (RID) and their universal refractive index increment (dn/dc). The accuracy of the developed method was determined for the quantification of individual and mixed polysaccharide standards, including konjac glucomannan, CM-arabinan, xyloglucan, larch arabinogalactan, oat β-glucan, dextran (410, 270, and 25 kDa), mixed xyloglucan and CM-arabinan, and mixed dextran 270 K and CM-arabinan; their average recoveries were between 90.6% and 98.3%. The limits of detection (LOD) and quantification (LOQ) ranged from 10.68 to 20.25 μg/mL and 42.70 to 68.85 μg/mL, respectively. Compared to the conventional phenol-sulfuric acid assay and HPSEC coupled with evaporative light scattering detection (HPSEC-ELSD), the developed HPSEC-MALLS-RID method based on a universal dn/dc for the quantification of polysaccharides and their fractions is much more simple, rapid, and accurate, with no need for individual polysaccharide standards or calibration curves. The developed method was also successfully utilized for quantitative analysis of polysaccharides and their different fractions from three medicinal plants of the Panax genus: Panax ginseng, Panax notoginseng and Panax quinquefolius. The results suggest that the HPSEC-MALLS-RID method based on a universal dn/dc could be used as a routine technique for the quantification of polysaccharides and their fractions in natural resources.

  19. Estimating Implementation and Operational Costs of an Integrated Tiered CD4 Service including Laboratory and Point of Care Testing in a Remote Health District in South Africa

    PubMed Central

    Cassim, Naseem; Coetzee, Lindi M.; Schnippel, Kathryn; Glencross, Deborah K.

    2014-01-01

    Background An integrated tiered service delivery model (ITSDM) has been proposed to provide ‘full-coverage’ of CD4 services throughout South Africa. Five tiers are described, defined by testing volumes and the number of referring health-facilities. These include: (1) Tier-1/decentralized point-of-care service (POC) in a single site; (2) Tier-2/POC-hub, processing ~30–40 samples from 8–10 health-clinics; (3) Tier-3/community laboratories servicing ~50 health-clinics and processing <150 samples/day; and high-volume centralized laboratories (Tier-4 and Tier-5) processing <300 or >600 samples/day and serving >100 or >200 health-clinics, respectively. The objective of this study was to establish the costs of the existing service and of ITSDM Tiers 1, 2 and 3 in a remote, under-serviced district in South Africa. Methods Historical health-facility workload volumes from the Pixley-ka-Seme district, and the total volumes of CD4 tests performed by the adjacent district referral CD4 laboratories, linked to the locations of all referring clinics and related laboratory-to-result turn-around time (LTR-TAT) data, were extracted from the NHLS Corporate Data Warehouse for the period April 2012 to March 2013. Tiers were costed separately (as a cost-per-result), including equipment, staffing, reagent and test consumable costs. A one-way sensitivity analysis provided for changes in reagent price, test volumes and personnel time. Results The lowest cost-per-result was noted for the existing laboratory-based Tiers 4 and 5 ($6.24 and $5.37, respectively), but with a related increased LTR-TAT of >24–48 hours. Full service coverage with a TAT <6 hours could be achieved with the placement of twenty-seven Tier-1/POC sites or eight Tier-2/POC-hubs, at a cost-per-result of $32.32 and $15.88, respectively. A single district Tier-3 laboratory also ensured ‘full service coverage’ and <24-hour LTR-TAT for the district at $7.42 per test. Conclusion Implementing a single Tier-3/community laboratory to extend and improve delivery

  20. Fatty acid ethyl esters in hair as alcohol markers: estimating a reliable cut-off point by evaluation of 1,057 autopsy cases.

    PubMed

    Hastedt, Martin; Bossers, Lydia; Krumbiegel, Franziska; Herre, Sieglinde; Hartwig, Sven

    2013-06-01

    Alcohol abuse is a widespread problem, especially in Western countries. It is therefore important to have markers of alcohol consumption with validated cut-off points. For many years research has focused on the analysis of hair for alcohol markers, but data on the performance and reliability of cut-off values are still lacking. The evaluation of 1,057 cases from 2005 to 2011 provided a large sample group for the estimation of an applicable cut-off value compared with earlier studies on fatty acid ethyl esters (FAEEs) in hair. The FAEE concentrations in hair, police investigation reports, medical history, and the macroscopic and microscopic alcohol-typical results from autopsy, such as liver, pancreas, and cardiac findings, were taken into account in this study. In 80.2% of all 1,057 cases, pathologic findings that may be related to alcohol abuse were reported. The cases were divided into social drinkers (n = 168), alcohol abusers (n = 502), and cases without information on alcohol use. The median FAEE concentration in the group of social drinkers was 0.302 ng/mg (range 0.008-14.3 ng/mg). In the group of alcohol abusers a median of 1.346 ng/mg (range 0.010-83.7 ng/mg) was found. Before June 2009 the hair FAEE test was routinely applied to a proximal hair segment of 0-6 cm, changing to a routinely investigated hair length of 3 cm after 2009, as proposed by the Society of Hair Testing (SoHT). The method showed significant differences between the groups of social drinkers and alcoholics, leading to an improvement in the postmortem detection of alcohol abuse. Nevertheless, the performance of the method was rather poor, with an area under the curve calculated from the receiver operating characteristic (ROC curve AUC) of 0.745. The optimum cut-off value for differentiation between social and chronic excessive drinking calculated for hair FAEEs was 1.08 ng/mg, with a sensitivity of 56% and a specificity of 80%. In relation to the "Consensus on Alcohol Markers 2012
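
    A hedged sketch of one common way such a cut-off is derived from grouped data: sweep candidate thresholds and keep the one maximizing Youden's J = sensitivity + specificity - 1 (NumPy assumed; the data below are synthetic, not the study's):

    ```python
    import numpy as np

    def optimal_cutoff(values_pos, values_neg):
        """values_pos: marker levels of known abusers; values_neg: social
        drinkers. Returns the threshold maximizing Youden's J."""
        cand = np.unique(np.concatenate([values_pos, values_neg]))
        best_c, best_j = None, -1.0
        for c in cand:
            sens = np.mean(values_pos >= c)
            spec = np.mean(values_neg < c)
            if sens + spec - 1.0 > best_j:
                best_c, best_j = c, sens + spec - 1.0
        return best_c, best_j

    rng = np.random.default_rng(0)
    abusers = rng.lognormal(mean=0.3, sigma=1.0, size=500)   # hypothetical ng/mg
    social = rng.lognormal(mean=-1.2, sigma=1.0, size=170)
    print(optimal_cutoff(abusers, social))
    ```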

  1. Thunderstorm activity on the early Earth: some estimations from the point of view of the role of electric discharges in the formation of prebiotic conditions

    NASA Astrophysics Data System (ADS)

    Serozhkin, Yu.

    2008-09-01

    increase the quantity of lightning by 50% [7]. Examinations of the processes of charge separation in clouds yield a very narrow range of atmospheric temperature and pressure within which charge separation is possible. It must be said that the electrostatic charging of thunderstorm clouds has not received a satisfactory explanation. One unexplained property is the formation, at an altitude of 6-8 km and a temperature of about -15°, of a negatively charged layer some hundreds of meters thick. At this altitude and pressure, water can exist in three phases. In this layer, charge is separated through the interaction of ice crystals with snow pellets. Above this layer there is a so-called charge reversal, an unexplained phenomenon whereby ice crystals below this layer are charged positively and above it negatively, while snow pellets above this layer are charged positively and below it negatively. Thus the negatively charged layer consists of negatively charged ice crystals and snow pellets. Positively charged snow pellets form a charge at the top of a cloud, and positively charged ice crystals form a positive charge at the bottom of a cloud. It follows that the dependence of the electrostatic charging of thunderstorm clouds on the parameters of the atmosphere is extremely difficult to estimate. About the influence of pressure only general statements are possible: at a pressure corresponding to the point of charge reversal (about 250 Torr at an altitude of 8 km), usual thunderstorm activity will decrease. This means that if the atmospheric pressure during the formation of prebiotic conditions was less than 100 Torr, it is necessary to discuss the role of electrical discharges connected with the accumulation of charges on particles (sand storms, tornadoes) or ash during volcanic eruptions. What traces of thunderstorm activity might be searched for in the past? It is known that cloud-to-ground lightning

  2. Estimating potential evapotranspiration with improved radiation estimation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Potential evapotranspiration (PET) is of great importance to the estimation of the surface energy budget and to water balance calculations. Accurate estimation of PET facilitates efficient irrigation scheduling, drainage design, and other agricultural and meteorological applications. However, accuracy o...

  3. Arm Span and Ulnar Length Are Reliable and Accurate Estimates of Recumbent Length and Height in a Multiethnic Population of Infants and Children under 6 Years of Age

    PubMed Central

    Forman, Michele R.; Zhu, Yeyi; Hernandez, Ladia M.; Himes, John H.; Dong, Yongquan; Danish, Robert K.; James, Kyla E.; Caulfield, Laura E.; Kerver, Jean M.; Arab, Lenore; Voss, Paula; Hale, Daniel E.; Kanafani, Nadim; Hirschfeld, Steven

    2014-01-01

    Surrogate measures are needed when recumbent length or height is unobtainable or unreliable. Arm span has been used as a surrogate but is not feasible in children with shoulder or arm contractures. Ulnar length is not usually impaired by joint deformities, yet its utility as a surrogate has not been adequately studied. In this cross-sectional study, we aimed to examine the accuracy and reliability of ulnar length measured by different tools as a surrogate measure of recumbent length and height. Anthropometrics [recumbent length, height, arm span, and ulnar length by caliper (ULC), ruler (ULR), and grid (ULG)] were measured in 1479 healthy infants and children aged <6 y across 8 study centers in the United States. Multivariate mixed-effects linear regression models for recumbent length and height were developed by using ulnar length and arm span as surrogate measures. The agreement between the measured length or height and the predicted values by ULC, ULR, ULG, and arm span were examined by Bland-Altman plots. All 3 measures of ulnar length and arm span were highly correlated with length and height. The degree of precision of prediction equations for length by ULC, ULR, and ULG (R² = 0.95, 0.95, and 0.92, respectively) was comparable with that by arm span (R² = 0.97) using age, sex, and ethnicity as covariates; however, height prediction by ULC (R² = 0.87), ULR (R² = 0.85), and ULG (R² = 0.88) was less comparable with arm span (R² = 0.94). Our study demonstrates that arm span and ULC, ULR, or ULG can serve as accurate and reliable surrogate measures of recumbent length and height in healthy children; however, ULC, ULR, and ULG tend to slightly overestimate length and height in young infants and children. Further testing of ulnar length as a surrogate is warranted in physically impaired or nonambulatory children. PMID:25031329
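
    The agreement analysis named above follows the standard Bland-Altman recipe: the bias is the mean of the paired differences and the 95% limits of agreement are bias ± 1.96 SD. A minimal sketch of that computation, using hypothetical measured and predicted heights rather than the study's data:

```python
# Minimal Bland-Altman sketch (illustrative, not the study's code):
# bias and 95% limits of agreement between measured height and height
# predicted from ulnar length. All numbers below are hypothetical.
import numpy as np

measured  = np.array([84.2, 91.0, 97.5, 103.1, 110.4, 76.8])  # cm
predicted = np.array([85.0, 90.2, 98.4, 104.0, 109.1, 77.9])  # cm

diff = predicted - measured
bias = diff.mean()
sd = diff.std(ddof=1)
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd
print(f"bias = {bias:.2f} cm, 95% limits of agreement "
      f"[{loa_low:.2f}, {loa_high:.2f}] cm")
```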

  4. Numerical estimation of densities

    NASA Astrophysics Data System (ADS)

    Ascasibar, Y.; Binney, J.

    2005-01-01

    We present a novel technique, dubbed FIESTAS, to estimate the underlying density field from a discrete set of sample points in an arbitrary multidimensional space. FIESTAS assigns a volume to each point by means of a binary tree. Density is then computed by integrating over an adaptive kernel. As a first test, we construct several Monte Carlo realizations of a Hernquist profile and recover the particle density in both real and phase space. At a given point, Poisson noise causes the unsmoothed estimates to fluctuate by a factor of ~2 regardless of the number of particles. This spread can be reduced to about 0.1 dex (~26 per cent) by our smoothing procedure. The density range over which the estimates are unbiased widens as the particle number increases. Our tests show that real-space densities obtained with an SPH kernel are significantly more biased than those yielded by FIESTAS. In phase space, about 10 times more particles are required in order to achieve a similar accuracy. As a second application we have estimated phase-space densities in a dark matter halo from a cosmological simulation. We confirm the results of Arad, Dekel & Klypin that the highest values of f are all associated with substructure rather than the main halo, and that the volume function v(f) ~ f^-2.5 over about four orders of magnitude in f. We show that a modified version of the toy model proposed by Arad et al. explains this result and suggests that the departures of v(f) from power-law form are not mere numerical artefacts. We conclude that our algorithm accurately measures the phase-space density up to the limit where discreteness effects render the simulation itself unreliable. Computationally, FIESTAS is orders of magnitude faster than the method based on Delaunay tessellation that Arad et al. employed, making it practicable to recover smoothed density estimates for sets of 10^9 points in six dimensions.
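
    FIESTAS itself builds a binary tree and integrates over an adaptive kernel; as a rough illustration of the same idea (each point's density inferred from the volume of its local neighbourhood), the sketch below uses a simpler k-nearest-neighbour estimator with a kd-tree. This is a generic stand-in, not the FIESTAS algorithm.

```python
# Not FIESTAS: a simple k-nearest-neighbour density estimate with a
# kd-tree, illustrating how a local volume around each sample point
# yields a density in d dimensions.
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import gamma

def knn_density(points, k=10):
    """Density at each sample from the distance to its k-th nearest
    neighbour: rho ~ k / (N * V_d(r_k))."""
    n, d = points.shape
    tree = cKDTree(points)
    # query k+1 neighbours: the nearest neighbour is the point itself
    dist, _ = tree.query(points, k=k + 1)
    r_k = dist[:, -1]
    unit_ball = np.pi ** (d / 2) / gamma(d / 2 + 1)  # volume of unit d-ball
    return k / (n * unit_ball * r_k ** d)

rng = np.random.default_rng(0)
sample = rng.normal(size=(10_000, 3))   # 3-D Gaussian point cloud
rho = knn_density(sample, k=16)
print(rho[:5])
```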

  5. Specific binding of ¹²⁵I-labeled human chorionic gonadotropin to gonadal tissue: comparison of limited-point saturation analyses to Scatchard analyses for determining binding capacities and factors affecting estimates of binding capacity

    SciTech Connect

    Spicer, L.J.; Ireland, J.J.

    1986-07-01

    Experiments were conducted to compare gonadotropin binding capacities calculated from limited-point saturation analyses with those obtained from Scatchard analyses, and to test the effects of membrane purity and source of gonadotropin receptors on determining the maximum percentage of radioiodinated hormone bound to receptors (maximum bindability). One- to four-point saturation analyses gave results comparable to those obtained by Scatchard analyses when examining relative binding capacities of receptors. Crude testicular homogenates gave lower estimates of the maximum bindability of ¹²⁵I-labeled human chorionic gonadotropin than more purified gonadotropin receptor preparations. Under similar preparation techniques, some gonadotropin receptor sources exhibited low maximum bindability.
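
    For readers unfamiliar with Scatchard analysis: plotting bound/free against bound gives a line whose slope is −1/Kd and whose x-intercept is the binding capacity Bmax. A minimal sketch with hypothetical numbers, not the study's data:

```python
# Illustrative Scatchard analysis (hypothetical binding data):
# regress bound/free against bound; slope = -1/Kd, x-intercept = Bmax.
import numpy as np

bound = np.array([0.8, 1.5, 2.1, 2.6, 3.0])   # fmol/mg, hypothetical
free  = np.array([0.1, 0.25, 0.5, 1.0, 2.0])  # nM, hypothetical

y = bound / free
slope, intercept = np.polyfit(bound, y, 1)
kd = -1.0 / slope            # dissociation constant
bmax = -intercept / slope    # binding capacity (x-intercept)
print(f"Kd ≈ {kd:.2f} nM, Bmax ≈ {bmax:.2f} fmol/mg")
```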

  6. The Relationship of Actigraph Accelerometer Cut-Points for Estimating Physical Activity with Selected Health Outcomes: Results from NHANES 2003-06

    ERIC Educational Resources Information Center

    Loprinzi, Paul D.; Lee, Hyo; Cardinal, Bradley J.; Crespo, Carlos J.; Andersen, Ross E.; Smit, Ellen

    2012-01-01

    The purpose of this study was to examine the influence of child and adult cut-points on physical activity (PA) intensity, the prevalence of meeting PA guidelines, and association with selected health outcomes. Participants (6,578 adults ≥18 years, and 3,174 children and adolescents ≤17 years) from the…
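
    The mechanics of applying cut-points are simple: each epoch's activity counts are binned into an intensity category by the chosen thresholds, which is why the choice of child versus adult cut-points changes the resulting intensity distribution. A sketch with placeholder thresholds, not necessarily those used in the study:

```python
# Illustrative sketch: classifying minute-by-minute accelerometer
# counts into intensity categories with cut-points. Thresholds are
# placeholders; the study's point is that results depend on which
# child/adult cut-points are chosen.
import numpy as np

counts_per_min = np.array([50, 180, 900, 2500, 4100, 7200])

# hypothetical cut-points (counts/min), lowest to highest category
cut_points = [100, 2020, 5999]
labels = np.array(["sedentary", "light", "moderate", "vigorous"])

intensity = labels[np.digitize(counts_per_min, cut_points)]
print(dict(zip(counts_per_min.tolist(), intensity.tolist())))
```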

  7. A fast and accurate decoder for underwater acoustic telemetry.

    PubMed

    Ingraham, J M; Deng, Z D; Li, X; Fu, T; McMichael, G A; Trumbo, B A

    2014-07-01

    The Juvenile Salmon Acoustic Telemetry System, developed by the U.S. Army Corps of Engineers, Portland District, has been used to monitor the survival of juvenile salmonids passing through hydroelectric facilities in the Federal Columbia River Power System. Cabled hydrophone arrays deployed at dams receive coded transmissions sent from acoustic transmitters implanted in fish. The signals' times of arrival on different hydrophones are used to track fish in 3D. In this article, a new algorithm for decoding the received transmissions is described and its results are compared with those of the previous decoding algorithm. In a laboratory environment, the new decoder was able to decode signals with lower signal strength than the previous decoder, effectively increasing decoding efficiency and range. In field testing, the new algorithm decoded significantly more signals than the previous decoder, and three-dimensional tracking experiments showed that the new decoder's time-of-arrival estimates were accurate. At multiple distances from hydrophones, the new algorithm tracked more points more accurately than the previous decoder. The new algorithm was also more than 10 times faster, which is critical for real-time applications on an embedded system.
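
    The decoder itself is not described in this abstract, but time-of-arrival estimation for a known transmit code is commonly done by matched-filter cross-correlation: the lag of the correlation peak is the arrival-time estimate. A generic sketch (not the JSATS decoder), with hypothetical sample rate, code, and noise level:

```python
# Generic matched-filter sketch: estimate the time of arrival of a
# known code by cross-correlating received samples with the template.
import numpy as np

fs = 100_000                                 # sample rate (Hz), hypothetical
rng = np.random.default_rng(1)
code = rng.choice([-1.0, 1.0], size=256)     # known transmit code

true_delay = 1234                            # samples
received = np.zeros(10_000)
received[true_delay:true_delay + code.size] = 0.5 * code
received += 0.2 * rng.normal(size=received.size)   # additive noise

# cross-correlate; the lag of the peak is the arrival-time estimate
corr = np.correlate(received, code, mode="valid")
est_delay = int(np.argmax(corr))
print(f"estimated TOA = {est_delay / fs * 1e3:.3f} ms "
      f"(true {true_delay / fs * 1e3:.3f} ms)")
```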

  8. The Most Accurate Path from Point A to Point B is Not Necessarily a Straight Line

    DTIC Science & Technology

    2012-08-20

    [Garbled PDF extraction; only fragments are recoverable: an approximation relating sin(θ)cos(θ) and sin(θ̃)cos(θ̃) through the pointing-error variance σθ² (Eq. 13), and the headings of Section 2.4 "Guidance" / 2.4.1 "Midcourse". Air Force Research Laboratory, Munitions Directorate, Eglin AFB, FL; Final Report, 20 August 2012.]

  9. An Estimation of the Likelihood of Significant Eruptions During 2000-2009 Using Poisson Statistics on Two-Point Moving Averages of the Volcanic Time Series

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.

    2001-01-01

    Since 1750, the number of cataclysmic volcanic eruptions (volcanic explosivity index (VEI) ≥ 4) per decade spans 2-11, with 96 percent located in the tropics and extra-tropical Northern Hemisphere. A two-point moving average of the volcanic time series has higher values since the 1860's than before, being 8.00 in the 1910's (the highest value) and 6.50 in the 1980's, the highest since the 1910's peak. Because of the usual behavior of the first difference of the two-point moving averages, one infers that its value for the 1990's will measure approximately 6.50 ± 1, implying that approximately 7 ± 4 cataclysmic volcanic eruptions should be expected during the present decade (2000-2009). Because cataclysmic volcanic eruptions (especially those having VEI ≥ 5) nearly always have been associated with short-term episodes of global cooling, the occurrence of even one might confuse our ability to assess the effects of global warming. Poisson probability distributions reveal that the probability of one or more events with a VEI ≥ 4 within the next ten years is >99 percent. It is approximately 49 percent for an event with a VEI ≥ 5, and 18 percent for an event with a VEI ≥ 6. Hence, the likelihood that a climatically significant volcanic eruption will occur within the next ten years appears reasonably high.
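
    The quoted probabilities follow from the Poisson relation P(N ≥ 1) = 1 − e^(−λ) for a decade rate λ. The sketch below inverts the abstract's probabilities to show the decade rates they imply; the rates are derived here for illustration, not taken from the paper:

```python
# Poisson arithmetic behind the quoted probabilities:
# P(at least one event in a decade) = 1 - exp(-lambda).
import math

for vei, p_at_least_one in [(">=4", 0.99), (">=5", 0.49), (">=6", 0.18)]:
    lam = -math.log(1.0 - p_at_least_one)   # implied events per decade
    print(f"VEI {vei}: P(N>=1) = {p_at_least_one:.0%} "
          f"-> implied rate ~ {lam:.2f} eruptions/decade")
```

    Consistently, the expected ~7 eruptions per decade with VEI ≥ 4 give P(N ≥ 1) = 1 − e^(−7) ≈ 99.9 percent, matching the quoted >99 percent.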

  10. Fast and accurate registration techniques for affine and nonrigid alignment of MR brain images.

    PubMed

    Liu, Jia-Xiu; Chen, Yong-Sheng; Chen, Li-Fen

    2010-01-01

    Registration of magnetic resonance brain images is a geometric operation that determines point-wise correspondences between two brains. It remains a difficult task due to the highly convoluted structure of the brain. This paper presents novel methods, Brain Image Registration Tools (BIRT), that can rapidly and accurately register brain images by utilizing the brain structure information estimated from image derivatives. Source and target image spaces are related by affine transformation and non-rigid deformation. The deformation field is modeled by a set of Wendland's radial basis functions hierarchically deployed near the salient brain structures. In general, nonlinear optimization is heavily engaged in the parameter estimation for affine/non-rigid transformation and good initial estimates are thus essential to registration performance. In this work, the affine registration is initialized by a rigid transformation, which can robustly estimate the orientation and position differences of brain images. The parameters of the affine/non-rigid transformation are then hierarchically estimated in a coarse-to-fine manner by maximizing an image similarity measure, the correlation ratio, between the involved images. T1-weighted brain magnetic resonance images were utilized for performance evaluation. Our experimental results using four 3-D image sets demonstrated that BIRT can efficiently align images with high accuracy compared to several other algorithms, and is thus well suited to applications that use registration intensively. Moreover, a voxel-based morphometric study quantitatively indicated that accurate registration can improve both the sensitivity and specificity of the statistical inference results.
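
    The similarity measure named above, the correlation ratio, measures how well the intensities of one image are predicted by binned intensities of the other. A generic implementation sketch (not the BIRT code; the bin count and test images are arbitrary choices):

```python
# Generic correlation ratio between a reference and a floating image:
# 1 - (conditional variance of floating given binned reference) /
#     (total variance of floating). Values near 1 mean dependence.
import numpy as np

def correlation_ratio(reference, floating, n_bins=32):
    ref = reference.ravel()
    flo = floating.ravel()
    edges = np.histogram_bin_edges(ref, bins=n_bins)
    bins = np.digitize(ref, edges[1:-1])
    total_var = flo.var()
    if total_var == 0:
        return 1.0
    cond_var = 0.0
    for b in np.unique(bins):
        grp = flo[bins == b]
        cond_var += grp.size * grp.var()
    return 1.0 - cond_var / (flo.size * total_var)

rng = np.random.default_rng(2)
a = rng.random((64, 64))
print(correlation_ratio(a, 2 * a + 0.1))           # ~1: dependent
print(correlation_ratio(a, rng.random((64, 64))))  # ~0: independent
```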

  11. Accurate estimation of the elastic properties of porous fibers

    SciTech Connect

    Thissell, W.R.; Zurek, A.K.; Addessio, F.

    1997-05-01

    A procedure is described to calculate polycrystalline anisotropic fiber elastic properties with cylindrical symmetry and porosity. It uses a preferred orientation model (Tomé ellipsoidal self-consistent model) for the determination of anisotropic elastic properties for the case of highly oriented carbon fibers. The model predictions, corrected for porosity, are compared to back-calculated fiber elastic properties of an IM6/3501-6 unidirectional composite whose elastic properties have been determined via resonant ultrasound spectroscopy. The Halpin-Tsai equations used to back-calculate fiber elastic properties are found to be inappropriate for anisotropic composite constituents. Modifications are proposed to the Halpin-Tsai equations to expand their applicability to anisotropic reinforcement materials.
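
    For context, the standard Halpin-Tsai form being critiqued is Pc/Pm = (1 + ξηVf)/(1 − ηVf) with η = (Pf/Pm − 1)/(Pf/Pm + ξ). The sketch below shows the forward relation and the back-calculation the abstract refers to; the numbers and ξ are illustrative, and the paper's point is precisely that this form is unreliable for anisotropic constituents:

```python
# Standard Halpin-Tsai relation (illustrative values, not the paper's):
# Pc/Pm = (1 + xi*eta*Vf)/(1 - eta*Vf), eta = (Pf/Pm - 1)/(Pf/Pm + xi).
def halpin_tsai(p_fiber, p_matrix, v_fiber, xi=2.0):
    """Forward Halpin-Tsai: composite property from constituents."""
    eta = (p_fiber / p_matrix - 1.0) / (p_fiber / p_matrix + xi)
    return p_matrix * (1.0 + xi * eta * v_fiber) / (1.0 - eta * v_fiber)

def back_calculate_fiber(p_composite, p_matrix, v_fiber, xi=2.0):
    """Invert Halpin-Tsai for the fiber property given the composite."""
    ratio = p_composite / p_matrix
    # solve ratio = (1 + xi*eta*Vf)/(1 - eta*Vf) for eta, then for Pf
    eta = (ratio - 1.0) / (v_fiber * (xi + ratio))
    return p_matrix * (1.0 + xi * eta) / (1.0 - eta)

e_c = halpin_tsai(p_fiber=14.0, p_matrix=4.6, v_fiber=0.6)       # GPa
print(back_calculate_fiber(e_c, p_matrix=4.6, v_fiber=0.6))      # ~14.0
```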

  12. Active point out-of-plane ultrasound calibration

    NASA Astrophysics Data System (ADS)