Science.gov

Sample records for clusters improved estimates

  1. An Improved Cluster Richness Estimator

    SciTech Connect

    Rozo, Eduardo; Rykoff, Eli S.; Koester, Benjamin P.; McKay, Timothy; Hao, Jiangang; Evrard, August; Wechsler, Risa H.; Hansen, Sarah; Sheldon, Erin; Johnston, David; Becker, Matthew R.; Annis, James T.; Bleem, Lindsey; Scranton, Ryan

    2009-08-03

    Minimizing the scatter between cluster mass and accessible observables is an important goal for cluster cosmology. In this work, we introduce a new matched filter richness estimator, and test its performance using the maxBCG cluster catalog. Our new estimator significantly reduces the variance in the L_X-richness relation, from σ²(ln L_X) = (0.86 ± 0.02)² to σ²(ln L_X) = (0.69 ± 0.02)². Relative to the maxBCG richness estimate, it also removes the strong redshift dependence of the richness scaling relations, and is significantly more robust to photometric and redshift errors. These improvements are largely due to our more sophisticated treatment of galaxy color data. We also demonstrate that the scatter in the L_X-richness relation depends on the aperture used to estimate cluster richness, and introduce a novel approach for optimizing this aperture which can be easily generalized to other mass tracers.
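
    For orientation, a matched-filter richness of this kind can be written in the generic self-consistent form below (the notation is ours, patterned after the paper's approach: u(x) is the cluster filter and b(x) the background density over galaxy observables x, and the membership probabilities p_i are a sketch rather than the paper's exact definitions):

    ```latex
    \lambda = \sum_i p_i , \qquad
    p_i = \frac{\lambda\, u(x_i)}{\lambda\, u(x_i) + b(x_i)} ,
    ```

    where the richness \lambda is obtained by iterating the two relations to a fixed point.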

  2. The cluster graphical lasso for improved estimation of Gaussian graphical models

    PubMed Central

    Tan, Kean Ming; Witten, Daniela; Shojaie, Ali

    2015-01-01

    The task of estimating a Gaussian graphical model in the high-dimensional setting is considered. The graphical lasso, which involves maximizing the Gaussian log likelihood subject to a lasso penalty, is a well-studied approach for this task. A surprising connection between the graphical lasso and hierarchical clustering is introduced: the graphical lasso in effect performs a two-step procedure, in which (1) single linkage hierarchical clustering is performed on the variables in order to identify connected components, and then (2) a penalized log likelihood is maximized on the subset of variables within each connected component. Thus, the graphical lasso determines the connected components of the estimated network via single linkage clustering. Single linkage clustering is known to perform poorly in certain finite-sample settings. Therefore, the cluster graphical lasso, which involves clustering the features using an alternative to single linkage clustering, and then performing the graphical lasso on the subset of variables within each cluster, is proposed. Model selection consistency for this technique is established, and its improved performance relative to the graphical lasso is demonstrated in a simulation study, as well as in applications to university webpage and gene expression data sets. PMID:25642008
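
    A minimal sketch of the two-stage idea in Python, assuming scikit-learn and SciPy (the cluster count, penalty strength, and choice of average linkage are illustrative, not the paper's exact specification):

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from sklearn.covariance import GraphicalLasso

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 12))  # n samples x p features (synthetic)

    # Step 1: cluster the features with an alternative to single linkage
    # (here, average linkage on a |correlation| distance).
    corr = np.corrcoef(X, rowvar=False)
    dist = 1.0 - np.abs(corr)
    Z = linkage(dist[np.triu_indices_from(dist, k=1)], method="average")
    labels = fcluster(Z, t=3, criterion="maxclust")

    # Step 2: fit the graphical lasso separately within each feature cluster.
    precisions = {}
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        if idx.size >= 2:
            precisions[c] = GraphicalLasso(alpha=0.1).fit(X[:, idx]).precision_
    ```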

  3. A nonparametric clustering technique which estimates the number of clusters

    NASA Technical Reports Server (NTRS)

    Ramey, D. B.

    1983-01-01

    In applications of cluster analysis, one usually needs to determine the number of clusters, K, and the assignment of observations to each cluster. A clustering technique based on recursive application of a multivariate test of bimodality which automatically estimates both K and the cluster assignments is presented.

  4. Cluster Sampling with Referral to Improve the Efficiency of Estimating Unmet Needs among Pregnant and Postpartum Women after Disasters

    PubMed Central

    Horney, Jennifer; Zotti, Marianne E.; Williams, Amy; Hsia, Jason

    2015-01-01

    Introduction and Background: Women of reproductive age, in particular women who are pregnant or fewer than 6 months postpartum, are uniquely vulnerable to the effects of natural disasters, which may create stressors for caregivers, limit access to prenatal/postpartum care, or interrupt contraception. Traditional approaches (e.g., newborn records, community surveys) to survey women of reproductive age about unmet needs may not be practical after disasters. Finding pregnant or postpartum women is especially challenging because fewer than 5% of women of reproductive age are pregnant or postpartum at any time. Methods: From 2009 to 2011, we conducted three pilots of a sampling strategy that aimed to increase the proportion of pregnant and postpartum women of reproductive age who were included in postdisaster reproductive health assessments in Johnston County, North Carolina, after tornadoes; Cobb/Douglas Counties, Georgia, after flooding; and Bertie County, North Carolina, after hurricane-related flooding. Results: Using this method, the percentage of pregnant and postpartum women interviewed in each pilot increased from 0.06% to 21%, 8% to 19%, and 9% to 17%, respectively. Conclusion and Discussion: Two-stage cluster sampling with referral can be used to increase the proportion of pregnant and postpartum women included in a postdisaster assessment. This strategy may be a promising way to assess unmet needs of pregnant and postpartum women in disaster-affected communities. PMID:22365134

  5. Attitude Estimation in Fractionated Spacecraft Cluster Systems

    NASA Technical Reports Server (NTRS)

    Hadaegh, Fred Y.; Blackmore, James C.

    2011-01-01

    Attitude estimation was examined for fractionated free-flying spacecraft. Instead of a single, monolithic spacecraft, a fractionated free-flying spacecraft uses multiple spacecraft modules. These modules are connected only through wireless communication links and, potentially, wireless power links. The key advantage of this concept is the ability to respond to uncertainty. For example, if a single spacecraft module in the cluster fails, a new one can be launched at a lower cost and risk than would be incurred with on-orbit servicing or replacement of the monolithic spacecraft. In order to create such a system, however, it is essential to know what the navigation capabilities of the fractionated system are as a function of the capabilities of the individual modules, and to have an algorithm that can estimate the attitudes and relative positions of the modules with fractionated sensing capabilities. Looking specifically at fractionated attitude estimation with star trackers and optical relative attitude sensors, a set of mathematical tools has been developed that specifies the set of sensors necessary to ensure that the attitude of the entire cluster (the "cluster attitude") can be observed. Also developed was a navigation filter that can estimate the cluster attitude if these conditions are satisfied. Each module in the cluster may have either a star tracker, a relative attitude sensor, or both. An extended Kalman filter can be used to estimate the attitude of all modules. A range of estimation performances can be achieved depending on the sensors used and the topology of the sensing network.

  6. Tidal radius estimates for three open clusters

    NASA Astrophysics Data System (ADS)

    Danilov, V. M.; Loktin, A. V.

    2015-10-01

    A new method is developed for estimating tidal radii and masses of open star clusters (OCL) based on the sky-plane coordinates and proper motions and/or radial velocities of cluster member stars. To this end, we perform the correlation and spectral analysis of oscillations of absolute values of stellar velocity components relative to the cluster mass center along three coordinate planes and along each coordinate axis in five OCL models. Mutual correlation functions for fluctuations of absolute values of velocity field components are computed. The spatial Fourier transform of the mutual correlation functions in the case of zero time offset is used to compute wavenumber spectra of oscillations of absolute values of stellar velocity components. The oscillation spectra of these quantities contain series of local maxima at equidistant wavenumber k values. The ratio of the tidal radius of the cluster to the wavenumber difference Δk of adjacent local maxima in the oscillation spectra of absolute values of velocity field components is found to be the same for all five OCL models. This ratio is used to estimate the tidal radii and masses of the Pleiades, Praesepe, and M67 based on the proper motions and sky-plane coordinates of the member stars of these clusters. The radial dependences of the absolute values of the tangential and radial projections of cluster star velocities computed using the proper motions relative to the cluster center are determined, along with the corresponding autocorrelation functions and wavenumber spectra of oscillations of absolute values of velocity field components. The Pleiades virial mass is estimated assuming that the cluster is either isolated or non-isolated. Also derived are the estimates of the Pleiades dynamical mass assuming that it is non-stationary and non-isolated. The inferred Pleiades tidal radii corresponding to these masses are reported.

  7. Optimizing weak lensing mass estimates for cluster profile uncertainty

    SciTech Connect

    Gruen, D.; Bernstein, G. M.; Lam, T. Y.; Seitz, S.

    2011-09-11

    Weak lensing measurements of cluster masses are necessary for calibrating mass-observable relations (MORs) to investigate the growth of structure and the properties of dark energy. However, the measured cluster shear signal varies at fixed mass M_200m due to inherent ellipticity of background galaxies, intervening structures along the line of sight, and variations in the cluster structure due to scatter in concentrations, asphericity and substructure. We use N-body simulated halos to derive and evaluate a weak lensing circular aperture mass measurement M_ap that minimizes the mass estimate variance ⟨(M_ap − M_200m)²⟩ in the presence of all these forms of variability. Depending on halo mass and observational conditions, the resulting mass estimator improves on M_ap filters optimized for circular NFW-profile clusters in the presence of uncorrelated large scale structure (LSS) about as much as the latter improve on an estimator that only minimizes the influence of shape noise. Optimizing for uncorrelated LSS while ignoring the variation of internal cluster structure puts too much weight on the profile near the cores of halos, and under some circumstances can even be worse than not accounting for LSS at all. As a result, we discuss the impact of variability in cluster structure and correlated structures on the design and performance of weak lensing surveys intended to calibrate cluster MORs.

  8. Estimating the concrete compressive strength using hard clustering and fuzzy clustering based regression techniques.

    PubMed

    Nagwani, Naresh Kumar; Deo, Shirish V

    2014-01-01

    Understanding the compressive strength of concrete is important for activities like construction arrangement, prestressing operations, proportioning new mixtures, and quality assurance. Regression techniques are most widely used for prediction tasks where the relationship between the independent variables and the dependent (prediction) variable is identified. The accuracy of regression techniques for prediction can be improved if clustering is used along with regression, since clustering ensures a more accurate curve fit between the dependent and independent variables. In this work, a cluster regression technique is applied for estimating the compressive strength of concrete, and a novel state-of-the-art method is proposed for predicting concrete compressive strength. The objective of this work is to demonstrate that clustering along with regression yields smaller prediction errors for estimating the concrete compressive strength. The proposed technique consists of two major stages: in the first stage, clustering is used to group concrete data with similar characteristics; in the second stage, regression techniques are applied over these clusters (groups) to predict the compressive strength from individual clusters. Experiments show that clustering along with regression gives minimum errors for predicting the compressive strength of concrete, and that the fuzzy C-means clustering algorithm performs better than the K-means algorithm. PMID:25374939
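
    The two-stage scheme reduces to a few lines in Python; the sketch below uses scikit-learn with K-means and linear regression on synthetic data (the paper's concrete features, cluster counts, and fuzzy C-means variant are not reproduced here):

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(1)
    X = rng.uniform(size=(300, 5))  # stand-ins for mix proportions, age, etc.
    y = X @ rng.uniform(size=5) + 0.1 * rng.standard_normal(300)

    # Stage 1: group records with similar characteristics.
    km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

    # Stage 2: fit one regression model per cluster.
    models = {c: LinearRegression().fit(X[km.labels_ == c], y[km.labels_ == c])
              for c in range(km.n_clusters)}

    def predict(x_new):
        """Route a new record to its cluster's model."""
        c = int(km.predict(x_new.reshape(1, -1))[0])
        return float(models[c].predict(x_new.reshape(1, -1))[0])
    ```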

  9. An Improved Fst Estimator

    PubMed Central

    Chen, Guanjie; Yuan, Ao; Shriner, Daniel; Tekola-Ayele, Fasil; Zhou, Jie; Bentley, Amy R.; Zhou, Yanxun; Wang, Chuntao; Newport, Melanie J.; Adeyemo, Adebowale; Rotimi, Charles N.

    2015-01-01

    The fixation index Fst plays a central role in ecological and evolutionary genetic studies. The estimators of Wright (F^st1), Weir and Cockerham (F^st2), and Hudson et al. (F^st3) are widely used to measure genetic differences among different populations, but all have limitations. We propose a minimum variance estimator F^stm using F^st1 and F^st2. We tested F^stm in simulations and applied it to 120 unrelated East African individuals from Ethiopia and 11 subpopulations in HapMap 3 with 464,642 SNPs. Our simulation study showed that F^stm has smaller bias than F^st2 for small sample sizes and smaller bias than F^st1 for large sample sizes. Also, F^stm has smaller variance than F^st2 for small Fst values and smaller variance than F^st1 for large Fst values. We demonstrated that approximately 30 subpopulations and 30 individuals per subpopulation are required in order to accurately estimate Fst. PMID:26317214
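
    For reference, the minimum-variance linear combination of two estimators has the standard closed form below (a textbook identity stated in our notation; the paper combines F^st1 and F^st2 in this spirit, but its exact weighting is not reproduced here):

    ```latex
    \hat{F}_{st}^{m} = w\,\hat{F}_{st}^{1} + (1 - w)\,\hat{F}_{st}^{2}, \qquad
    w^{*} = \frac{\operatorname{Var}(\hat{F}_{st}^{2}) - \operatorname{Cov}(\hat{F}_{st}^{1}, \hat{F}_{st}^{2})}
                 {\operatorname{Var}(\hat{F}_{st}^{1}) + \operatorname{Var}(\hat{F}_{st}^{2}) - 2\operatorname{Cov}(\hat{F}_{st}^{1}, \hat{F}_{st}^{2})}
    ```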

  10. Using second-order generalized estimating equations to model heterogeneous intraclass correlation in cluster randomized trials

    PubMed Central

    Crespi, Catherine M.; Wong, Weng Kee; Mishra, Shiraz I.

    2009-01-01

    In cluster randomized trials, it is commonly assumed that the magnitude of the correlation among subjects within a cluster is constant across clusters. However, the correlation may in fact be heterogeneous and depend on cluster characteristics. Accurate modeling of the correlation has the potential to improve inference. We use second-order generalized estimating equations to model heterogeneous correlation in cluster randomized trials. Using simulation studies we show that accurate modeling of heterogeneous correlation can improve inference when the correlation is high or varies by cluster size. We apply the methods to a cluster randomized trial of an intervention to promote breast cancer screening. PMID:19109804

  11. Efficient Pairwise Composite Likelihood Estimation for Spatial-Clustered Data

    PubMed Central

    Bai, Yun; Kang, Jian; Song, Peter X.-K.

    2015-01-01

    Spatial-clustered data refer to high-dimensional correlated measurements collected from units or subjects that are spatially clustered. Such data arise frequently from studies in social and health sciences. We propose a unified modeling framework, termed GeoCopula, to characterize both large-scale variation and small-scale variation for various data types, including continuous data, binary data, and count data as special cases. To overcome challenges in the estimation and inference for the model parameters, we propose an efficient composite likelihood approach in which the estimation efficiency results from a construction of over-identified joint composite estimating equations. Consequently, the statistical theory for the proposed estimation is developed by extending the classical theory of the generalized method of moments. A clear advantage of the proposed estimation method is its computational feasibility. We conduct several simulation studies to assess the performance of the proposed models and estimation methods for both Gaussian and binary spatial-clustered data. Results show a clear improvement in estimation efficiency over the conventional composite likelihood method. An illustrative data example is included to motivate and demonstrate the proposed method. PMID:24945876

  12. Cross-Clustering: A Partial Clustering Algorithm with Automatic Estimation of the Number of Clusters

    PubMed Central

    Tellaroli, Paola; Bazzi, Marco; Donato, Michele; Brazzale, Alessandra R.; Drăghici, Sorin

    2016-01-01

    Four of the most common limitations of the many available clustering methods are: i) the lack of a proper strategy to deal with outliers; ii) the need for a good a priori estimate of the number of clusters to obtain reasonable results; iii) the lack of a method able to detect when partitioning of a specific data set is not appropriate; and iv) the dependence of the result on the initialization. Here we propose Cross-clustering (CC), a partial clustering algorithm that overcomes these four limitations by combining the principles of two well-established hierarchical clustering algorithms: Ward's minimum variance and Complete-linkage. We validated CC by comparing it with a number of existing clustering methods, including Ward's and Complete-linkage. We show, on both simulated and real datasets, that CC performs better than the other methods in terms of: the identification of the correct number of clusters, the identification of outliers, and the determination of real cluster memberships. We used CC to cluster samples in order to identify disease subtypes, and on gene profiles, in order to determine groups of genes with the same behavior. Results obtained on a non-biological dataset show that the method is general enough to be successfully used in such diverse applications. The algorithm has been implemented in the statistical language R and is freely available from the CRAN contributed packages repository. PMID:27015427

  13. Estimating potential evapotranspiration with improved radiation estimation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Potential evapotranspiration (PET) is of great importance to estimation of surface energy budget and water balance calculation. The accurate estimation of PET will facilitate efficient irrigation scheduling, drainage design, and other agricultural and meteorological applications. However, accuracy o...

  14. Memory color assisted illuminant estimation through pixel clustering

    NASA Astrophysics Data System (ADS)

    Zhang, Heng; Quan, Shuxue

    2010-01-01

    The under-constrained nature of illuminant estimation means that certain assumptions are needed to resolve the problem, such as the gray world theory. Including more constraints in this process may help exploit the useful information in an image and improve the accuracy of the estimated illuminant, provided that the constraints hold. Based on the observation that most personal images contain one or more of the following categories: neutral objects, human beings, sky, and plants, we propose a method for illuminant estimation through the clustering of pixels of gray and three dominant memory colors: skin tone, sky blue, and foliage green. Analysis shows that samples of the above colors cluster around small areas under different illuminants and their characteristics can be used to effectively detect pixels falling into each of the categories. The algorithm requires knowledge of the spectral sensitivity response of the camera, and a spectral database consisting of the CIE standard illuminants and reflectance or radiance databases of samples of the above colors.
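
    For reference, the gray-world baseline mentioned above reduces to a few lines (a standard estimator, not the authors' memory-color method):

    ```python
    import numpy as np

    def gray_world_gains(img):
        """Per-channel white-balance gains under the gray-world assumption.

        img: H x W x 3 RGB array; the scene average is assumed achromatic.
        """
        means = img.reshape(-1, 3).mean(axis=0)
        return means.mean() / means
    ```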

  15. Cluster Stability Estimation Based on a Minimal Spanning Trees Approach

    NASA Astrophysics Data System (ADS)

    Volkovich, Zeev (Vladimir); Barzily, Zeev; Weber, Gerhard-Wilhelm; Toledano-Kitai, Dvora

    2009-08-01

    Among the areas of data and text mining employed today in science, economy and technology, clustering theory serves as a preprocessing step in data analysis. However, many open questions still await theoretical and practical treatment; e.g., the problem of determining the true number of clusters has not been satisfactorily solved. In the current paper, this problem is addressed by the cluster stability approach. For several possible numbers of clusters, we estimate the stability of partitions obtained from clustering of samples. Partitions are considered consistent if their clusters are stable. Cluster validity is measured as the total number of edges, in the clusters' minimal spanning trees, connecting points from different samples; this is the Friedman and Rafsky two-sample test statistic. The homogeneity hypothesis, of well-mingled samples within the clusters, leads to an asymptotically normal distribution of this statistic. Resting upon this fact, the standard score of the mentioned edge count is computed, and the partition quality is represented by the worst cluster, corresponding to the minimal standard score value. It is natural to expect that the true number of clusters can be characterized by the empirical distribution having the shortest left tail. The proposed methodology sequentially creates the described value distribution and estimates its left-asymmetry. Numerical experiments presented in the paper demonstrate the ability of the approach to detect the true number of clusters.
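
    The edge-count statistic at the heart of the method can be sketched as follows, assuming SciPy (a Friedman-Rafsky-style count of minimal-spanning-tree edges joining the two samples; the standardization step described above is omitted):

    ```python
    import numpy as np
    from scipy.sparse.csgraph import minimum_spanning_tree
    from scipy.spatial.distance import pdist, squareform

    def cross_edge_count(a, b):
        """Number of MST edges connecting points of sample `a` to sample `b`."""
        pts = np.vstack([a, b])
        labels = np.r_[np.zeros(len(a)), np.ones(len(b))]
        mst = minimum_spanning_tree(squareform(pdist(pts))).tocoo()
        return int(np.sum(labels[mst.row] != labels[mst.col]))
    ```

    Under the homogeneity hypothesis this count is asymptotically normal, so its standard score can be used exactly as described above.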

  16. Learning Markov Random Walks for robust subspace clustering and estimation.

    PubMed

    Liu, Risheng; Lin, Zhouchen; Su, Zhixun

    2014-11-01

    Markov Random Walks (MRW) have proven to be an effective way to understand spectral clustering and embedding. However, lacking a global structural measure, conventional MRW (e.g., the Gaussian kernel MRW) cannot handle data points drawn from a mixture of subspaces. In this paper, we introduce a regularized MRW learning model, using a low-rank penalty to constrain the global subspace structure, for subspace clustering and estimation. In our framework, both the local pairwise similarity and the global subspace structure can be learnt from the transition probabilities of the MRW. We prove that under some suitable conditions, our proposed local/global criteria can exactly capture the multiple subspace structure and learn a low-dimensional embedding for the data, in which the true segmentation of subspaces is given. To improve robustness in real situations, we also propose an extension of the MRW learning model that integrates transition matrix learning and error correction in a unified framework. Experimental results on both synthetic data and real applications demonstrate that our proposed MRW learning model and its robust extension outperform state-of-the-art subspace clustering methods. PMID:25005156

  17. Thermochemical property estimation of hydrogenated silicon clusters.

    PubMed

    Adamczyk, Andrew J; Broadbelt, Linda J

    2011-08-18

    The thermochemical properties for selected hydrogenated silicon clusters (Si(x)H(y), x = 3-13, y = 0-18) were calculated using quantum chemical calculations and statistical thermodynamics. Standard enthalpy of formation at 298 K and standard entropy and constant pressure heat capacity at various temperatures, i.e., 298-6000 K, were calculated for 162 hydrogenated silicon clusters using G3//B3LYP. The hydrogenated silicon clusters contained ten to twenty fused Si-Si bonds, i.e., bonds participating in more than one three- to six-membered ring. The hydrogenated silicon clusters in this study involved different degrees of hydrogenation, i.e., the ratio of hydrogen to silicon atoms varied widely depending on the size of the cluster and/or degree of multifunctionality. A group additivity database composed of atom-centered groups and ring corrections, as well as bond-centered groups, was created to predict thermochemical properties most accurately. For the training set molecules, the average absolute deviation (AAD) comparing the G3//B3LYP values to the values obtained from the revised group additivity database for standard enthalpy of formation and entropy at 298 K and constant pressure heat capacity at 500, 1000, and 1500 K were 3.2%, 1.9%, 0.40%, 0.43%, and 0.53%, respectively. Sensitivity analysis of the revised group additivity parameter database revealed that the group parameters were able to predict the thermochemical properties of molecules that were not used in the training set within an AAD of 3.8% for standard enthalpy of formation at 298 K. PMID:21728331

  18. Proportion estimation using prior cluster purities

    NASA Technical Reports Server (NTRS)

    Terrell, G. R. (Principal Investigator)

    1980-01-01

    The prior distribution of CLASSY component purities is studied, and this information incorporated into maximum likelihood crop proportion estimators. The method is tested on Transition Year spring small grain segments.

  1. Estimating the abundance of clustered animal population by using adaptive cluster sampling and negative binomial distribution

    NASA Astrophysics Data System (ADS)

    Bo, Yizhou; Shifa, Naima

    2013-09-01

    An estimator for finding the abundance of a rare, clustered, and mobile population is introduced. The model is based on adaptive cluster sampling (ACS) to identify the location of the population and on the negative binomial distribution to estimate the total at each site. To identify the location of the population we consider both sampling with replacement (WR) and sampling without replacement (WOR). Some mathematical properties of the model are also developed.

  2. A clustering routing algorithm based on improved ant colony clustering for wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Xiao, Xiaoli; Li, Yang

    Because node distribution in real wireless sensor networks is not uniform, this paper presents a clustering strategy based on the ant colony clustering algorithm (ACC-C). To reduce the energy consumption of cluster heads near the base station and of the whole network, the algorithm applies ant colony clustering to form non-uniform clusters. An improved route-optimality degree is presented to evaluate the performance of the chosen route. Simulation results show that, compared with other algorithms, such as the LEACH algorithm and the improved particle swarm clustering algorithm (PSC-C), the proposed approach is able to keep away from nodes with less residual energy, which can improve the lifetime of the network.

  3. IMPROVING BIOGENIC EMISSION ESTIMATES WITH SATELLITE IMAGERY

    EPA Science Inventory

    This presentation will review how existing and future applications of satellite imagery can improve the accuracy of biogenic emission estimates. Existing applications of satellite imagery to biogenic emission estimates have focused on characterizing land cover. Vegetation dat...

  4. Spatial dependence clusters in the estimation of forest structural parameters

    NASA Astrophysics Data System (ADS)

    Wulder, Michael Albert

    1999-12-01

    In this thesis we provide a summary of the methods by which remote sensing may be applied in forestry, while also acknowledging the various limitations which are faced. The application of spatial statistics to high spatial resolution imagery is explored as a means of increasing the information which may be extracted from digital images. A number of high spatial resolution optical remote sensing satellites that are soon to be launched will increase the availability of imagery for the monitoring of forest structure. This technological advancement is timely as current forest management practices have been altered to reflect the need for sustainable ecosystem level management. The low accuracy level at which forest structural parameters have been estimated in the past is partly due to low image spatial resolution. A large pixel is often composed of a number of surface features, resulting in a spectral value which is due to the reflectance characteristics of all surface features within that pixel. In the case of small pixels, a portion of a surface feature may be represented by a single pixel. When a single pixel represents a portion of a surface object, the potential to isolate distinct surface features exists. Spatial statistics, such as the Getis statistic, provide an image processing method to isolate distinct surface features. In this thesis, high spatial resolution imagery sensed over a forested landscape is processed with spatial statistics to combine distinct image objects into clusters, representing individual or groups of trees. Tree clusters are a means to deal with the inevitable foliage overlap which occurs within complex mixed and deciduous forest stands. The generation of image objects, that is, clusters, is necessary to deal with the presence of spectrally mixed pixels. The ability to estimate forest inventory and biophysical parameters from image clusters generated from spatially dependent image features is tested in this thesis. The inventory

  5. Rod cluster having improved vane configuration

    SciTech Connect

    Shockling, L.A.; Francis, T.A.

    1989-09-05

    This patent describes a pressurized water reactor vessel, the vessel defining a predetermined axial direction of coolant flow therewithin and having plural spider assemblies supporting, for vertical movement within the vessel, respective clusters of rods in spaced, parallel axial relationship, parallel to the predetermined axial direction of coolant flow, and a rod guide for each spider assembly and respective cluster of rods. Each rod guide has horizontally oriented support plates therewithin, each plate having an interior opening for accommodating axial movement therethrough of the spider assembly and respective cluster of rods, the opening defining plural radially extending channels and corresponding parallel interior wall surfaces of the support plate.

  6. Identifying sampling locations for field-scale soil moisture estimation using K-means clustering

    NASA Astrophysics Data System (ADS)

    Van Arkel, Zach; Kaleita, Amy L.

    2014-08-01

    Identifying and understanding the impact of field-scale soil moisture patterns is currently limited by the time and resources required to do sufficient monitoring. This study uses K-means clustering to find critical sampling points to estimate field-scale near-surface soil moisture. Points within the field are clustered based upon topographic and soils data and the points representing the center of those clusters are identified as the critical sampling points. Soil moisture observations at 42 sites across the growing seasons of 4 years were collected several times per week. Using soil moisture observations at the critical sampling points and the number of points within each cluster, a weighted average is found and used as the estimated mean field-scale soil moisture. Field-scale soil moisture estimates from this method are compared to the rank stability approach (RSA), which finds optimal sampling locations based upon temporal soil moisture data. The clustering approach on soil and topography data resulted in field-scale average moisture estimates that were as good as or better than RSA, but without the need for exhaustive presampling of soil moisture. Using an electromagnetic inductance map as a proxy for soils data significantly improved the estimates over those obtained based on topography alone.
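
    A minimal sketch of the sampling-point selection, assuming scikit-learn (the feature columns stand in for the topographic and soils attributes, and the cluster count is illustrative):

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    # Synthetic stand-ins for, e.g., slope, curvature, electromagnetic inductance.
    features = np.random.default_rng(2).standard_normal((500, 3))
    km = KMeans(n_clusters=6, n_init=10, random_state=0).fit(features)

    # Critical sampling points: the field locations closest to each cluster center.
    critical = [int(np.argmin(np.linalg.norm(features - c, axis=1)))
                for c in km.cluster_centers_]
    weights = np.bincount(km.labels_) / len(features)

    def field_mean_moisture(moisture_at_critical_points):
        """Weighted field-scale mean; weights are cluster membership shares."""
        return float(np.dot(weights, moisture_at_critical_points))
    ```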

  7. Improved Ant Colony Clustering Algorithm and Its Performance Study.

    PubMed

    Gao, Wei

    2016-01-01

    Clustering analysis is used in many disciplines and applications; it is an important tool that descriptively identifies homogeneous groups of objects based on attribute values. The ant colony clustering algorithm is a swarm-intelligent method used for clustering problems that is inspired by the behavior of ant colonies that cluster their corpses and sort their larvae. A new abstraction ant colony clustering algorithm using a data combination mechanism is proposed to improve the computational efficiency and accuracy of the ant colony clustering algorithm. The abstraction ant colony clustering algorithm is used to cluster benchmark problems, and its performance is compared with the ant colony clustering algorithm and other methods used in existing literature. Based on similar computational difficulties and complexities, the results show that the abstraction ant colony clustering algorithm produces results that are not only more accurate but also more efficiently determined than the ant colony clustering algorithm and the other methods. Thus, the abstraction ant colony clustering algorithm can be used for efficient multivariate data clustering. PMID:26839533

  8. Improving clustering by imposing network information

    PubMed Central

    Gerber, Susanne; Horenko, Illia

    2015-01-01

    Cluster analysis is one of the most popular data analysis tools in a wide range of applied disciplines. We propose and justify a computationally efficient and straightforward-to-implement way of imposing the available information from networks/graphs (a priori available in many application areas) on a broad family of clustering methods. The introduced approach is illustrated on the problem of a noninvasive unsupervised brain signal classification. This task is faced with several challenging difficulties such as nonstationary noisy signals and a small sample size, combined with a high-dimensional feature space and huge noise-to-signal ratios. Applying this approach results in an exact unsupervised classification of very short signals, opening new possibilities for clustering methods in the area of a noninvasive brain-computer interface. PMID:26601225

  9. Improving performance through concept formation and conceptual clustering

    NASA Technical Reports Server (NTRS)

    Fisher, Douglas H.

    1992-01-01

    Research from June 1989 through October 1992 focused on concept formation, clustering, and supervised learning for purposes of improving the efficiency of problem-solving, planning, and diagnosis. These projects resulted in two dissertations on clustering, explanation-based learning, and means-ends planning, as well as publications in conferences, workshops, and journals and several book chapters; a complete bibliography of NASA Ames-supported publications is included. The following topics are studied: clustering of explanations and problem-solving experiences; clustering and means-ends planning; and diagnosis of space shuttle and space station operating modes.

  10. Galaxy cluster mass estimation from stacked spectroscopic analysis

    NASA Astrophysics Data System (ADS)

    Farahi, Arya; Evrard, August E.; Rozo, Eduardo; Rykoff, Eli S.; Wechsler, Risa H.

    2016-08-01

    We use simulated galaxy surveys to study: (i) how galaxy membership in redMaPPer clusters maps to the underlying halo population, and (ii) the accuracy of a mean dynamical cluster mass, Mσ(λ), derived from stacked pairwise spectroscopy of clusters with richness λ. Using ~130 000 galaxy pairs patterned after the Sloan Digital Sky Survey (SDSS) redMaPPer cluster sample study of Rozo et al., we show that the pairwise velocity probability density function of central-satellite pairs with m_i < 19 in the simulation matches the form seen in Rozo et al. Through joint membership matching, we deconstruct the main Gaussian velocity component into its halo contributions, finding that the top-ranked halo contributes ~60 per cent of the stacked signal. The halo mass scale inferred by applying the virial scaling of Evrard et al. to the velocity normalization matches, to within a few per cent, the log-mean halo mass derived through galaxy membership matching. We apply this approach, along with miscentring and galaxy velocity bias corrections, to estimate the log-mean matched halo mass at z = 0.2 of SDSS redMaPPer clusters. Employing the velocity bias constraints of Guo et al., we find ⟨ln M | λ⟩ = ln(M30) + αm ln(λ/30), with M30 = 1.56 ± 0.35 × 10^14 M⊙ and αm = 1.31 ± 0.06 (stat) ± 0.13 (sys). Systematic uncertainty in the velocity bias of satellite galaxies overwhelmingly dominates the error budget.
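
    For context, a virial scaling of the Evrard et al. (2008) form can be inverted into a rough dynamical mass proxy as sketched below (the normalization sigma15 ≈ 1082.9 km/s and slope alpha ≈ 0.3361 are the published dark-matter values; the h(z) handling is simplified here, and the miscentring and velocity bias corrections used in the paper are omitted):

    ```python
    import numpy as np

    C_KMS = 299792.458  # speed of light, km/s

    def sigma_los(z_gal, z_cl):
        """Rest-frame line-of-sight velocity dispersion from member redshifts."""
        v = C_KMS * (np.asarray(z_gal) - z_cl) / (1.0 + z_cl)
        return v.std(ddof=1)

    def virial_mass(sigma, hz=0.7, sigma15=1082.9, alpha=0.3361):
        """Invert sigma_DM(M) = sigma15 * (h(z) * M / 1e15 Msun)^alpha for M [Msun]."""
        return (sigma / sigma15) ** (1.0 / alpha) * 1e15 / hz
    ```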

  11. Nanostar Clustering Improves the Sensitivity of Plasmonic Assays.

    PubMed

    Park, Yong Il; Im, Hyungsoon; Weissleder, Ralph; Lee, Hakho

    2015-08-19

    Star-shaped Au nanoparticles (Au nanostars, AuNS) have been developed to improve plasmonic sensitivity, but their application has largely been limited to single-particle probes. We herein describe an AuNS clustering assay, based on nanoscale self-assembly of multiple AuNS, which further increases detection sensitivity. We show that each cluster contains multiple nanogaps to concentrate electric fields, thereby amplifying the signal via plasmon coupling. Numerical simulation indicated that AuNS clusters assume up to 460-fold higher field density than Au nanosphere clusters of similar mass. The results were validated in model assays of protein biomarker detection. The AuNS clustering assay showed higher sensitivity than Au nanospheres. Minimizing the size of the affinity ligand was found important to tightly confine electric fields and improve sensitivity. The resulting assay is simple and fast and can be readily applied to point-of-care molecular detection schemes. PMID:26102604

  12. Fuzzy C-means clustering on kinetic parameter estimation with generalized linear least square algorithm in SPECT

    NASA Astrophysics Data System (ADS)

    Choi, Hon-Chit; Wen, Lingfeng; Eberl, Stefan; Feng, Dagan

    2006-03-01

    Dynamic Single Photon Emission Computed Tomography (SPECT) has the potential to quantitatively estimate physiological parameters by fitting compartment models to the tracer kinetics. The generalized linear least square method (GLLS) is an efficient method to estimate unbiased kinetic parameters and parametric images. However, due to the low sensitivity of SPECT, noisy data can cause voxel-wise parameter estimation by GLLS to fail. Fuzzy C-means (FCM) clustering and a modified FCM, which also utilizes information from the immediate neighboring voxels, are proposed to improve the voxel-wise parameter estimation of GLLS. Monte Carlo simulations were performed to generate dynamic SPECT data with different noise levels, which were processed by general and modified FCM clustering. Parametric images were estimated by Logan and Yokoi graphical analysis and by GLLS. The influx rate (K_I) and volume of distribution (V_d) were estimated for the cerebellum, thalamus, and frontal cortex. Our results show that (1) FCM reduces the bias and improves the reliability of parameter estimates for noisy data, (2) GLLS provides estimates of micro parameters (K_I through k_4) as well as macro parameters, such as the volume of distribution (V_d) and binding potential (BP_I and BP_II), and (3) FCM clustering incorporating neighboring voxel information does not improve the parameter estimates, but reduces noise in the parametric images. These findings indicate that pre-segmentation with traditional FCM clustering is desirable for generating voxel-wise parametric images with GLLS from dynamic SPECT data.
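
    A compact, from-scratch fuzzy C-means illustrating the clustering step used above (generic FCM with fuzzifier m; not the paper's SPECT pipeline or its neighborhood-modified variant):

    ```python
    import numpy as np

    def fuzzy_cmeans(X, c=3, m=2.0, iters=100, seed=0):
        """Generic FCM: returns cluster centers and an N x c membership matrix."""
        rng = np.random.default_rng(seed)
        U = rng.dirichlet(np.ones(c), size=len(X))   # memberships; rows sum to 1
        for _ in range(iters):
            W = U ** m
            centers = (W.T @ X) / W.sum(axis=0)[:, None]
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
            inv = d ** (-2.0 / (m - 1.0))
            U = inv / inv.sum(axis=1, keepdims=True)  # standard FCM update
        return centers, U
    ```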

  13. Clustering of Casablanca stock market based on Hurst exponent estimates

    NASA Astrophysics Data System (ADS)

    Lahmiri, Salim

    2016-08-01

    This paper deals with the problem of modeling the Casablanca Stock Exchange (CSE) topology as a complex network during three different market regimes: a general trend characterized by ups and downs, an increasing trend, and a decreasing trend. In particular, a set of seven different Hurst exponent estimates are used to characterize long-range dependence in each industrial sector's generating process. They are employed in conjunction with a hierarchical clustering approach to examine the co-movements of the Casablanca Stock Exchange industrial sectors. The purpose is to investigate whether cluster structures are similar across variable, increasing, and decreasing regimes. It is observed that the general structure of the CSE topology changed considerably over the 2009 (variable regime), 2010 (increasing regime), and 2011 (decreasing regime) time periods. The most important findings follow. First, in general a high value of the Hurst exponent is associated with a variable regime and a small one with a decreasing regime; in addition, Hurst estimates during an increasing regime are higher than those of a decreasing regime. Second, correlations between the estimated Hurst exponent vectors of industrial sectors increase when the Casablanca Stock Exchange follows an upward regime, whilst they decrease when the overall market follows a downward regime.
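
    One of the several Hurst estimators drawn on above can be sketched as a basic rescaled-range (R/S) fit; sectors would then be clustered (e.g., with scipy.cluster.hierarchy.linkage) on their vectors of such estimates. The chunk sizes below are illustrative:

    ```python
    import numpy as np

    def hurst_rs(x, min_chunk=8):
        """Rescaled-range (R/S) estimate of the Hurst exponent of a 1-D series."""
        x = np.asarray(x, dtype=float)
        n = len(x)
        sizes = np.unique(np.logspace(np.log10(min_chunk),
                                      np.log10(n // 2), 10).astype(int))
        rs = []
        for s in sizes:
            chunks = x[: n // s * s].reshape(-1, s)
            dev = np.cumsum(chunks - chunks.mean(axis=1, keepdims=True), axis=1)
            r = dev.max(axis=1) - dev.min(axis=1)   # range of cumulative deviations
            sd = chunks.std(axis=1)
            ok = sd > 0
            rs.append((r[ok] / sd[ok]).mean())
        slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
        return slope  # ~0.5: no long-range dependence; >0.5: persistence
    ```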

  14. Estimating adjusted prevalence ratio in clustered cross-sectional epidemiological data

    PubMed Central

    Santos, Carlos Antônio ST; Fiaccone, Rosemeire L; Oliveira, Nelson F; Cunha, Sérgio; Barreto, Maurício L; do Carmo, Maria Beatriz B; Moncayo, Ana-Lucia; Rodrigues, Laura C; Cooper, Philip J; Amorim, Leila D

    2008-01-01

    Background: Many epidemiologic studies report the odds ratio as a measure of association for cross-sectional studies with common outcomes. In such cases, the prevalence ratios may not be inferred from the estimated odds ratios. This paper overviews the most commonly used procedures to obtain adjusted prevalence ratios and extends the discussion to the analysis of clustered cross-sectional studies. Methods: Prevalence ratios (PR) were estimated using logistic models with random effects. Their 95% confidence intervals were obtained using the delta method and the clustered bootstrap. The performance of these approaches was evaluated through simulation studies. Using data from two studies with health-related outcomes in children, we discuss the interpretation of the measures of association and their implications. Results: The results from data analysis highlighted major differences between estimated OR and PR. Results from simulation studies indicate an improved performance of the delta method compared to the bootstrap when there is a small number of clusters. Conclusion: We recommend the use of logistic models with random effects for analysis of clustered data. The choice of method to estimate confidence intervals for PR (delta or bootstrap method) should be based on study design. PMID:19087281
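
    A minimal sketch of obtaining a prevalence ratio by marginal standardization from a logistic fit, assuming statsmodels (plain logistic regression on synthetic data; the paper's random-effects models and delta-method/bootstrap intervals are not reproduced):

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    x = rng.integers(0, 2, 500)              # binary exposure
    z = rng.standard_normal(500)             # covariate
    y = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 0.8 * x + 0.3 * z))))

    X = sm.add_constant(np.column_stack([x, z]))
    fit = sm.Logit(y, X).fit(disp=0)

    # PR = mean predicted prevalence with exposure set to 1 vs. set to 0.
    X1, X0 = X.copy(), X.copy()
    X1[:, 1], X0[:, 1] = 1, 0
    pr = fit.predict(X1).mean() / fit.predict(X0).mean()
    ```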

  15. Estimating interstellar extinction towards elliptical galaxies and star clusters.

    NASA Astrophysics Data System (ADS)

    de Amôres, E. B.; Lépine, J. R. D.

    The ability to estimate interstellar extinction is essential for color corrections and distance calculations of all sorts of astronomical objects, and is fundamental for galactic structure studies. We performed comparisons of the interstellar extinction models of Amores & Lépine (2005), which are available at http://www.astro.iag.usp.br/~amores. These models are based on the hypothesis that gas and dust are homogeneously mixed, and make use of the dust-to-gas ratio. The gas density distribution used in the models is obtained from large-scale gas surveys: the Berkeley and Parkes HI surveys and the Columbia University CO survey. In the present work, we compared these models with extinction predictions for elliptical galaxies (gE) and star clusters. We used the sample of gE galaxies proposed by Burstein for the comparison between the extinction calculation methods of Burstein & Heiles (1978, 1982) and of Schlegel et al. (1998), extending the comparison to our models. We found rms differences equal to 0.0179 and 0.0189 mag, respectively, in the comparison of the predictions of our "model A" with the two methods mentioned. The comparison takes into account the "zero points" introduced by Burstein. The correlation coefficient obtained in the comparison is around 0.85. These results show that our models can be safely used for the estimation of extinction in our Galaxy for extragalactic work, as an alternative method to the BH and SFD predictions. In the comparison with the globular clusters we found rms differences equal to 0.32 and 0.30 for our models A and S, respectively. For the open clusters we made comparisons using different samples and the rms differences were around 0.25.

  16. Improving Osteoporosis Screening: Results from a Randomized Cluster Trial

    PubMed Central

    Kolk, Deneil; Peterson, Edward L.; McCarthy, Bruce D.; Weiss, Thomas W.; Chen, Ya-Ting; Muma, Bruce K.

    2007-01-01

    Background: Despite recommendations, osteoporosis screening rates among women aged 65 years and older remain low. We present results from a clustered, randomized trial evaluating patient mailed reminders, alone and in combination with physician prompts, to improve osteoporosis screening and treatment. Methods: Primary care clinics (n = 15) were randomized to usual care, mailed reminders alone, or mailed reminders with physician prompts. Study patients were females aged 65–89 years (N = 10,354). Using automated clinical and pharmacy data, information was collected on bone mineral density testing, pharmacy dispensings, and other patient characteristics. Unadjusted/adjusted differences in testing and treatment were assessed using generalized estimating equation approaches. Results: Osteoporosis screening rates were 10.8% in usual care, 24.1% with mailed reminders, and 28.9% with mailed reminders plus physician prompts. Results adjusted for differences at baseline indicated that mailed reminders significantly improved testing rates compared to usual care, and that the addition of prompts further improved testing. This effect increased with patient age. Treatment rates were 5.2% in usual care, 8.4% with mailed reminders, and 9.1% with mailed reminders plus prompts. No significant differences were found in treatment rates between those receiving mailed reminders alone or in combination with physician prompts. However, women receiving usual care were significantly less likely to be treated. Conclusions: The use of mailed reminders, either alone or with physician prompts, can significantly improve osteoporosis screening and treatment rates among insured primary care patients (ClinicalTrials.gov number NCT00139425). PMID:17356966

  17. Unsupervised, Robust Estimation-based Clustering for Multispectral Images

    NASA Technical Reports Server (NTRS)

    Netanyahu, Nathan S.

    1997-01-01

    To prepare for the challenge of handling the archiving and querying of terabyte-sized scientific spatial databases, the NASA Goddard Space Flight Center's Applied Information Sciences Branch (AISB, Code 935) developed a number of characterization algorithms that rely on supervised clustering techniques. The research reported upon here has been aimed at continuing the evolution of some of these supervised techniques, namely the neural network and decision tree-based classifiers, and at extending the approach to incorporate unsupervised clustering algorithms, such as those based on robust estimation (RE) techniques. The algorithms developed under this task should be suited for use by the Intelligent Information Fusion System (IIFS) metadata extraction modules, and as such these algorithms must be fast, robust, and anytime in nature. Finally, so that the planner/scheduler module of the IIFS can oversee the use and execution of these algorithms, all information required by the planner/scheduler must be provided to the IIFS development team to ensure the timely integration of these algorithms into the overall system.

  18. A sparse-sampling strategy for the estimation of large-scale clustering from redshift surveys

    NASA Astrophysics Data System (ADS)

    Kaiser, N.

    1986-04-01

    It is shown that, in the estimation of large-scale clustering, a sparsely sampled faint-magnitude-limited redshift survey can significantly reduce the uncertainty in the two-point function for a given investment of telescope time. The signal-to-noise ratio for a 1-in-20 bright-galaxy sample is roughly twice that provided by a complete survey of the same cost, and this performance matches that of a larger complete survey of about seven times the cost. A similar performance increase is achieved when a wide-field telescope collects multiple redshifts over a survey with close to full sky coverage. Little performance improvement is seen for smaller multiply-observed surveys ideally sampled at a 1-in-10 bright-galaxy rate. The optimum sampling fraction for Abell's rich clusters is found to be close to unity, with little performance improvement from sparse sampling.

  19. Time-calibrated estimates of oceanographic profiles using empirical orthogonal functions and clustering

    NASA Astrophysics Data System (ADS)

    Hjelmervik, Karina; Hjelmervik, Karl Thomas

    2014-05-01

    Oceanographic climatology is widely used in different applications, such as climate studies, ocean model validation, and the planning of naval operations. Conventional climatological estimates are based on historic measurements, typically by averaging the measurements and thereby smoothing local phenomena. Such phenomena are often local in time and space, but crucial to some applications. Here, we propose a new method to estimate time-calibrated oceanographic profiles based on combined historic and real-time measurements. The real-time measurements may, for instance, come from SAR images or autonomous underwater vehicles providing temperature values at a limited set of depths. The method employs empirical orthogonal functions and clustering on a training data set in order to divide the ocean into climatological regions. The real-time measurements are first used to determine which climatological region is most representative. Second, an improved estimate is determined using an optimisation approach that minimises the difference between the real-time measurements and the final estimate.
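
    A minimal sketch of the pipeline, assuming scikit-learn, with PCA standing in for the EOF decomposition (profile sizes, component and cluster counts, and the matching rule are illustrative):

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    # Synthetic training profiles: temperature as a function of depth.
    profiles = np.random.default_rng(3).standard_normal((1000, 50))
    eof = PCA(n_components=5).fit(profiles)       # EOF decomposition via PCA
    km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(eof.transform(profiles))

    # One representative (mean) profile per climatological region.
    rep = np.vstack([profiles[km.labels_ == c].mean(axis=0) for c in range(8)])

    def best_region(depth_idx, measured):
        """Pick the region whose representative profile best matches sparse
        real-time measurements taken at the given depth indices."""
        return int(np.argmin(((rep[:, depth_idx] - measured) ** 2).sum(axis=1)))
    ```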

  1. Estimating cougar predation rates from GPS location clusters

    USGS Publications Warehouse

    Anderson, C.R., Jr.; Lindzey, F.G.

    2003-01-01

    We examined cougar (Puma concolor) predation from Global Positioning System (GPS) location clusters (≥2 locations within 200 m on the same or consecutive nights) of 11 cougars during September-May, 1999-2001. Location success of GPS averaged 2.4-5.0 of 6 location attempts/night/cougar. We surveyed potential predation sites during summer-fall 2000 and summer 2001 to identify prey composition (n = 74; 3-388 days post predation) and record predation-site variables (n = 97; 3-270 days post predation). We developed a model to estimate the probability that a cougar killed a large mammal from data collected at GPS location clusters, where the probability of predation increased with the number of nights (defined as locations at 2200, 0200, or 0500 hr) of cougar presence within a 200-m radius (P < 0.001). Mean estimated cougar predation rates for large mammals were 7.3 days/kill for subadult females (1-2.5 yr; n = 3, 90% CI: 6.3 to 9.9), 7.0 days/kill for adult females (n = 2, 90% CI: 5.8 to 10.8), 5.4 days/kill for family groups (females with young; n = 3, 90% CI: 4.5 to 8.4), 9.5 days/kill for a subadult male (1-2.5 yr; n = 1, 90% CI: 6.9 to 16.4), and 7.8 days/kill for adult males (n = 2, 90% CI: 6.8 to 10.7). We may have slightly overestimated cougar predation rates due to our inability to separate scavenging from predation. We detected 45 deer (Odocoileus spp.), 15 elk (Cervus elaphus), 6 pronghorn (Antilocapra americana), 2 livestock, 1 moose (Alces alces), and 6 small mammals at cougar predation sites. Comparisons between cougar sexes suggested that females selected mule deer and males selected elk (P < 0.001). Cougars averaged 3.0 nights on pronghorn carcasses, 3.4 nights on deer carcasses, and 6.0 nights on elk carcasses. Most cougar predation (81.7%) occurred between 1901-0500 hr and peaked from 2201-0200 hr (31.7%). Applying GPS technology to identify predation rates and prey selection will allow managers to efficiently estimate the ability of an area's prey base to

  2. A Hierarchical Clustering Methodology for the Estimation of Toxicity

    EPA Science Inventory

    A Quantitative Structure Activity Relationship (QSAR) methodology based on hierarchical clustering was developed to predict toxicological endpoints. This methodology utilizes Ward's method to divide a training set into a series of structurally similar clusters. The structural sim...

  3. Improved dose estimates for nuclear criticality accidents

    SciTech Connect

    Wilkinson, A.D.; Basoglu, B.; Bentley, C.L.; Dunn, M.E.; Plaster, M.J.; Dodds, H.L.; Haught, C.F.; Yamamoto, T.; Hopper, C.M.

    1995-08-01

    Slide rules are improved for estimating doses and dose rates resulting from nuclear criticality accidents. The original slide rules were created for highly enriched uranium solutions and metals using hand calculations along with the decades-old Way-Wigner radioactive decay relationship and the inverse square law. This work uses state-of-the-art methods and better data to improve the original slide rules and also to extend the slide rule concept to three additional systems: highly enriched (93.2 wt%) uranium damp (H/²³⁵U = 10) powder (U₃O₈) and low-enriched (5 wt%) uranium mixtures (UO₂F₂) with H/²³⁵U ratios of 200 and 500. Although the improved slide rules differ only slightly from the original slide rules, both the improved and the new slide rules can be used with greater confidence since they are based on more rigorous methods and better nuclear data.

  4. Process control improvements realized in a vertical reactor cluster tool

    NASA Astrophysics Data System (ADS)

    Werkhoven, Chris J.; Granneman, E. H.; Lindow, E.

    1993-04-01

    Advanced cell structures present in high-density memories and logic devices require high-quality, ultra-thin dielectric and conductor films. By controlling the interface properties of such films, remarkable process control enhancements of manufacturing-proven vertical LPCVD and oxidation processes are realized. To this end, an HF/H2O vapor etch reactor is integrated in a vacuum cluster tool comprising vertical reactors for the various LPCVD and oxidation processes. Data on process control improvements are provided for polysilicon emitters, polysilicon contacts, polysilicon gates, and NO capacitors. Finally, the cost of ownership of cluster tool use is compared with that of stand-alone equipment.

  5. The Effect of Mergers on Galaxy Cluster Mass Estimates

    NASA Astrophysics Data System (ADS)

    Johnson, Ryan E.; Zuhone, John A.; Thorsen, Tessa; Hinds, Andre

    2015-08-01

    At vertices within the filamentary structure that describes the universal matter distribution, clusters of galaxies grow hierarchically through merging with other clusters. As such, the most massive galaxy clusters should have experienced many such mergers in their histories. Though we cannot see them evolve over time, these mergers leave lasting, measurable effects in the cluster galaxies' phase space. By simulating several different galaxy cluster mergers, we examine how the cluster galaxies' kinematics are altered as a result of these mergers. Further, we also examine the effect of our line-of-sight viewing angle with respect to the merger axis. In projecting the 6-dimensional galaxy phase space onto a 3-dimensional plane, we are able to simulate how these clusters might actually appear to optical redshift surveys. We find that for those optical cluster statistics most often used as a proxy for cluster mass (variants of σv), the uncertainty due to an imprecise or unknown line of sight may alter the derived cluster masses more than the kinematic disturbance of the merger itself. Finally, by examining these and several other clustering statistics, we find that significant events (such as pericentric crossings) are identifiable over a range of merger initial conditions and from many different lines of sight.

  6. Accounting for One-Group Clustering in Effect-Size Estimation

    ERIC Educational Resources Information Center

    Citkowicz, Martyna; Hedges, Larry V.

    2013-01-01

    In some instances, intentionally or not, study designs are such that there is clustering in one group but not in the other. This paper describes methods for computing effect size estimates and their variances when there is clustering in only one group and the analysis has not taken that clustering into account. The authors provide the effect size…

  7. An improvement to the cluster recognition model for peripheral collisions

    SciTech Connect

    Garcia-Solis, E.J.; Mignerey, A.C.

    1996-02-01

    Among the microscopic dynamical simulations used for the study of the evolution of nuclear collisions at energies around 100 MeV, it has been found that BUU-type calculations adequately describe the general features of nuclear collisions in that energy regime. The BUU method consists of the numerical solution of the modified Vlasov equation for a generated phase-space distribution of nucleons. It generally describes the first stages of a nuclear reaction satisfactorily; however, it is not able to separate the fragments formed during the projectile-target interaction. It is therefore necessary to insert a clusterization procedure to obtain the primary fragments of the reaction. The general description of the clustering model proposed by the authors can be found elsewhere. The current paper deals with improvements that have been made to the clustering procedure.

  8. Comparative analysis of missing value imputation methods to improve clustering and interpretation of microarray experiments

    PubMed Central

    2010-01-01

    Background Microarray technologies produce large amounts of data. In a previous study, we showed the value of the k-Nearest Neighbour approach for restoring missing gene expression values, and its positive impact on hierarchical gene clustering. Since then, numerous replacement methods have been proposed to impute missing values (MVs) in microarray data. In this study, we evaluated twelve usable methods and their influence on the quality of gene clustering. We used several datasets, from both kinetic and non-kinetic experiments, from yeast and human. Results We underline the excellent efficiency of the approaches proposed and implemented by Bo and co-workers, especially the one based on expectation maximization (EM_array). These improvements were also observed for the imputation of extreme values, the values that are hardest to predict. We showed that imputed MVs still have important effects on the stability of the gene clusters. The improvement in hierarchical clustering remains limited and is not sufficient to restore completely the correct gene associations. However, a common tendency can be found between the quality of the imputation method and gene cluster stability. Even if the comparison between clustering algorithms is a complex task, we observed that the k-means approach is more efficient at conserving gene associations. Conclusions More than 6,000,000 independent simulations have assessed the quality of 12 imputation methods on five very different biological datasets. Important improvements have thus been made since our last study. The EM_array approach constitutes one efficient method for restoring missing expression values, with a lower estimation error level. Nonetheless, the presence of MVs, even at a low rate, is a major factor of gene cluster instability. Our study highlights the need for a systematic assessment of imputation methods and thus of dedicated benchmarks. A
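    The kind of comparison described here can be prototyped with scikit-learn's imputers; a small sketch on a synthetic expression matrix (EM_array itself is not in scikit-learn, so IterativeImputer stands in as an EM-like assumption):

        import numpy as np
        from sklearn.experimental import enable_iterative_imputer  # noqa: F401
        from sklearn.impute import KNNImputer, IterativeImputer

        rng = np.random.default_rng(1)
        X = rng.normal(size=(200, 30))              # genes x conditions
        mask = rng.random(X.shape) < 0.05           # 5% missing values
        X_miss = np.where(mask, np.nan, X)

        for name, imp in [("kNN", KNNImputer(n_neighbors=10)),
                          ("iterative (EM-like)", IterativeImputer(random_state=0))]:
            X_hat = imp.fit_transform(X_miss)
            rmse = np.sqrt(np.mean((X_hat[mask] - X[mask]) ** 2))
            print(f"{name}: RMSE on imputed entries = {rmse:.3f}")

    The downstream effect on clustering can then be judged by comparing cluster labels obtained from X and X_hat.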

  9. IMPROVED RISK ESTIMATES FOR CARBON TETRACHLORIDE

    SciTech Connect

    Benson, Janet M.; Springer, David L.

    1999-12-31

    Carbon tetrachloride has been used extensively within the DOE nuclear weapons facilities. Rocky Flats was formerly the largest volume consumer of CCl4 in the United States using 5000 gallons in 1977 alone (Ripple, 1992). At the Hanford site, several hundred thousand gallons of CCl4 were discharged between 1955 and 1973 into underground cribs for storage. Levels of CCl4 in groundwater at highly contaminated sites at the Hanford facility have exceeded the drinking water standard of 5 ppb by several orders of magnitude (Illman, 1993). High levels of CCl4 at these facilities represent a potential health hazard for workers conducting cleanup operations and for surrounding communities. The level of CCl4 cleanup required at these sites and associated costs are driven by current human health risk estimates, which assume that CCl4 is a genotoxic carcinogen. The overall purpose of these studies was to improve the scientific basis for assessing the health risk associated with human exposure to CCl4. Specific research objectives of this project were to: (1) compare the rates of CCl4 metabolism by rats, mice and hamsters in vivo and extrapolate those rates to man based on parallel studies on the metabolism of CCl4 by rat, mouse, hamster and human hepatic microsomes in vitro; (2) using hepatic microsome preparations, determine the role of specific cytochrome P450 isoforms in CCl4-mediated toxicity and the effects of repeated inhalation and ingestion of CCl4 on these isoforms; and (3) evaluate the toxicokinetics of inhaled CCl4 in rats, mice and hamsters. This information has been used to improve the physiologically based pharmacokinetic (PBPK) model for CCl4 originally developed by Paustenbach et al. (1988) and more recently revised by Thrall and Kenny (1996). Another major objective of the project was to provide scientific evidence that CCl4, like chloroform, is a hepatocarcinogen only when exposure results in cell damage, cell killing and regenerative proliferation. In

  10. A Multicriteria Decision Making Approach for Estimating the Number of Clusters in a Data Set

    PubMed Central

    Peng, Yi; Zhang, Yong; Kou, Gang; Shi, Yong

    2012-01-01

    Determining the number of clusters in a data set is an essential yet difficult step in cluster analysis. Since this task involves more than one criterion, it can be modeled as a multiple criteria decision making (MCDM) problem. This paper proposes an MCDM-based approach to estimate the number of clusters for a given data set. In this approach, MCDM methods consider different numbers of clusters as alternatives and the outputs of any clustering algorithm on validity measures as criteria. The proposed method is examined by an experimental study using three MCDM methods, the well-known k-means clustering algorithm, ten relative measures, and fifteen public-domain UCI machine learning data sets. The results show that MCDM methods work fairly well in estimating the number of clusters in the data and outperform the ten relative measures considered in the study. PMID:22870181
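    A toy version of the idea, assuming a simple Borda-count aggregation in place of the paper's MCDM methods: each candidate number of clusters is an alternative, several validity indices are the criteria, and per-criterion ranks are summed.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.datasets import make_blobs
        from sklearn.metrics import (silhouette_score, calinski_harabasz_score,
                                     davies_bouldin_score)

        X, _ = make_blobs(n_samples=500, centers=4, random_state=0)
        ks = list(range(2, 9))
        scores = []
        for k in ks:
            labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
            scores.append([silhouette_score(X, labels),
                           calinski_harabasz_score(X, labels),
                           -davies_bouldin_score(X, labels)])  # negated: lower is better

        S = np.array(scores)
        ranks = S.argsort(axis=0).argsort(axis=0)  # per-criterion ranks, higher = better
        print("estimated number of clusters:", ks[int(ranks.sum(axis=1).argmax())])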

  11. Research opportunities to improve DSM impact estimates

    SciTech Connect

    Misuriello, H.; Hopkins, M.E.F.

    1992-03-01

    This report was commissioned by the California Institute for Energy Efficiency (CIEE) as part of its research mission to advance the energy efficiency and productivity of all end-use sectors in California. Our specific goal in this effort has been to identify viable research and development (R&D) opportunities that can improve capabilities to determine the energy-use and demand reductions achieved through demand-side management (DSM) programs and measures. We surveyed numerous practitioners in California and elsewhere to identify the major obstacles to effective impact evaluation, drawing on their collective experience. As a separate effort, we have also profiled the status of regulatory practices in leading states with respect to DSM impact evaluation. We have synthesized this information, adding our own perspective and experience to those of our survey-respondent colleagues, to characterize today's state of the art in impact-evaluation practices. This scoping study takes a comprehensive look at the problems and issues involved in DSM impact estimates at the customer-facility or site level. The major portion of our study investigates three broad topic areas of interest to CIEE: Data analysis issues, field-monitoring issues, issues in evaluating DSM measures. Across these three topic areas, we have identified 22 potential R&D opportunities, to which we have assigned priority levels. These R&D opportunities are listed by topic area and priority.

  13. Estimation of Carcinogenicity using Hierarchical Clustering and Nearest Neighbor Methodologies

    EPA Science Inventory

    Previously a hierarchical clustering (HC) approach and a nearest neighbor (NN) approach were developed to model acute aquatic toxicity end points. These approaches were developed to correlate the toxicity for large, noncongeneric data sets. In this study these approaches applie...

  14. Towards Improved Estimates of Ocean Heat Flux

    NASA Astrophysics Data System (ADS)

    Bentamy, Abderrahim; Hollman, Rainer; Kent, Elisabeth; Haines, Keith

    2014-05-01

    Recommendations and priorities for ocean heat flux research are outlined in recent CLIVAR and WCRP reports, e.g. Yu et al. (2013). Among these is the need to improve the accuracy, the consistency, and the spatial and temporal resolution of air-sea fluxes at global as well as regional scales. To meet the main air-sea flux requirements, this study is aimed at obtaining and analyzing all the heat flux components (latent, sensible and radiative) at the ocean surface over the global oceans using multiple satellite sensor observations in combination with in-situ measurements and numerical model analyses. The fluxes will be generated daily and monthly for the 20-year (1992-2011) period, between 80N and 80S and at 0.25deg resolution. Simultaneous estimates of all surface heat flux terms have not yet been calculated at such a large scale and over such a long time period. Such an effort requires a wide range of expertise and data sources that only recently have become available. Needed are methods for integrating many data sources to calculate energy fluxes (short-wave, long-wave, sensible and latent heat) across the air-sea interface. We have access to all the relevant, recently available satellite data to perform such computations. Yu, L., K. Haines, M. Bourassa, M. Cronin, S. Gulev, S. Josey, S. Kato, A. Kumar, T. Lee, D. Roemmich: Towards achieving global closure of ocean heat and freshwater budgets: Recommendations for advancing research in air-sea fluxes through collaborative activities. INTERNATIONAL CLIVAR PROJECT OFFICE, 2013: International CLIVAR Publication Series No 189. http://www.clivar.org/sites/default/files/ICPO189_WHOI_fluxes_workshop.pdf

  15. A comparison of acromion marker cluster calibration methods for estimating scapular kinematics during upper extremity ergometry.

    PubMed

    Richardson, R Tyler; Nicholson, Kristen F; Rapp, Elizabeth A; Johnston, Therese E; Richards, James G

    2016-05-01

    Accurate measurement of joint kinematics is required to understand the musculoskeletal effects of a therapeutic intervention such as upper extremity (UE) ergometry. Traditional surface-based motion capture is effective for quantifying humerothoracic motion, but scapular kinematics are challenging to obtain. Methods for estimating scapular kinematics include the widely reported acromion marker cluster (AMC), which utilizes a static calibration between the scapula and the AMC to estimate the orientation of the scapula during motion. Previous literature demonstrates that including additional calibration positions throughout the motion improves AMC accuracy for single-plane motions; however, this approach has not been assessed for the non-planar shoulder complex motion occurring during UE ergometry. The purpose of this study was to evaluate the accuracy of single, dual, and multiple AMC calibration methods during UE ergometry. The orientations of the UE segments of 13 healthy subjects were recorded with motion capture. Scapular landmarks were palpated at eight evenly spaced static positions around the 360° cycle. The single AMC method utilized one static calibration position to estimate scapular kinematics for the entire cycle, while the dual and multiple AMC methods used two and four static calibration positions, respectively. Scapulothoracic angles estimated by the three AMC methods were compared with scapulothoracic angles determined by palpation. The multiple AMC method produced the smallest RMS errors and was not significantly different from palpation about any axis. We recommend the multiple AMC method as a practical and accurate way to estimate scapular kinematics during UE ergometry. PMID:26976228

  16. An improved distance matrix computation algorithm for multicore clusters.

    PubMed

    Al-Neama, Mohammed W; Reda, Naglaa M; Ghaleb, Fayed F M

    2014-01-01

    Distance matrix computation has diverse uses in different research areas. It is typically an essential task in most bioinformatics applications, especially in multiple sequence alignment. The gigantic explosion of biological sequence databases leads to an urgent need to accelerate these computations. The DistVect algorithm was introduced in the paper of Al-Neama et al. (in press) as a recent approach to vectorizing distance matrix computation. It showed efficient performance in both sequential and parallel computing. However, multicore cluster systems, which are now widely available, offer the scalability and performance/cost ratio needed for still more powerful and efficient computation. This paper proposes DistVect1, a highly efficient parallel vectorized algorithm for computing distance matrices on multicore clusters. It reformulates the DistVect vectorized algorithm in terms of cluster primitives and derives an approach to partitioning and scheduling the computations that is well suited to this type of architecture. Implementations employ both the MPI and OpenMP libraries. Experimental results show that the proposed method achieves around a 3-fold speedup over SSE2. Further, it also achieves speedups of more than 9 orders of magnitude compared to the publicly available parallel implementation utilized in ClustalW-MPI. PMID:25013779
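    Not DistVect1 itself, but the row-block partitioning idea can be illustrated on a single multicore node with Python's multiprocessing (all names hypothetical):

        import numpy as np
        from multiprocessing import Pool

        def block_distances(args):
            # Distances from one block of rows to the full data set.
            block, data = args
            return np.sqrt(((block[:, None, :] - data[None, :, :]) ** 2).sum(-1))

        if __name__ == "__main__":
            data = np.random.rand(2000, 16)
            blocks = np.array_split(data, 8)       # one row block per worker
            with Pool(8) as pool:
                parts = pool.map(block_distances, [(b, data) for b in blocks])
            D = np.vstack(parts)                   # full 2000 x 2000 distance matrix

    On a real cluster the same decomposition would be distributed across nodes with MPI, with OpenMP threads working within each node.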

  17. Improved Yield Estimation by Trellis Tension Monitoring

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Most yield estimation practices for commercial vineyards rely on hand-sampling fruit on one or a small number of dates during the growing season. Limitations associated with the static yield estimates may be overcome with Trellis Tension Monitors (TTMs), systems that measure dynamically changes in t...

  18. Improved Estimation by Trellis Tension Monitoring

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Most yield estimation practices for commercial vineyards are based on longstanding but individually variable industry protocols that rely on hand sampling fruit on one or a small number of dates during the growing season. Limitations associated with the static nature of yield estimation may be overc...

  19. Communication: Improved pair approximations in local coupled-cluster methods

    SciTech Connect

    Schwilk, Max; Werner, Hans-Joachim; Usvyat, Denis

    2015-03-28

    In local coupled cluster treatments the electron pairs can be classified according to the magnitude of their energy contributions or distances into strong, close, weak, and distant pairs. Different approximations are introduced for the latter three classes. In this communication, an improved simplified treatment of close and weak pairs is proposed, which is based on long-range cancellations of individually slowly decaying contributions in the amplitude equations. Benchmark calculations for correlation, reaction, and activation energies demonstrate that these approximations work extremely well, while pair approximations based on local second-order Møller-Plesset theory can lead to errors that are 1-2 orders of magnitude larger.

  20. A novel method to estimate the impact parameter on a drift cell by using the information of single ionization clusters

    NASA Astrophysics Data System (ADS)

    Signorelli, G.; D'Onofrio, A.; Venturini, M.

    2016-07-01

    Measuring the time of each ionization cluster in drift chambers has been proposed as a way to improve the single-hit resolution, especially for very low mass tracking systems. Ad hoc formulae have been developed to combine the information from the single clusters. We show that the problem falls into a wide category of problems that can be solved with an algorithm called Maximum Possible Spacing (MPS), which has been demonstrated to find the optimal estimator. We show that the MPS approach is applicable and gives the expected results. Its application in a real tracking device, namely the MEG II cylindrical drift chamber, is discussed.

  1. Infant immunization coverage in Italy: estimates by simultaneous EPI cluster surveys of regions. ICONA Study Group.

    PubMed Central

    Salmaso, S.; Rota, M. C.; Ciofi Degli Atti, M. L.; Tozzi, A. E.; Kreidl, P.

    1999-01-01

    In 1998, a series of regional cluster surveys (the ICONA Study) was conducted simultaneously in 19 out of the 20 regions in Italy to estimate the mandatory immunization coverage of children aged 12-24 months with oral poliovirus (OPV), diphtheria-tetanus (DT) and viral hepatitis B (HBV) vaccines, as well as optional immunization coverage with pertussis, measles and Haemophilus influenzae b (Hib) vaccines. The study children were born in 1996 and selected from birth registries using the Expanded Programme of Immunization (EPI) cluster sampling technique. Interviews with parents were conducted to determine each child's immunization status and the reasons for any missed or delayed vaccinations. The study population comprised 4310 children aged 12-24 months. Coverage for both mandatory and optional vaccinations differed by region. The overall coverage for mandatory vaccines (OPV, DT and HBV) exceeded 94%, but only 79% had been vaccinated in accord with the recommended schedule (i.e. during the first year of life). Immunization coverage for pertussis increased from 40% (1993 survey) to 88%, but measles coverage (56%) remained inadequate for controlling the disease; Hib coverage was 20%. These results confirm that in Italy the coverage of only mandatory immunizations is satisfactory. Pertussis immunization coverage has improved dramatically since the introduction of acellular vaccines. A greater effort to educate parents and physicians is still needed to improve the coverage of optional vaccinations in all regions. PMID:10593033

  2. Estimated number of field stars toward Galactic globular clusters and Local Group Galaxies

    NASA Technical Reports Server (NTRS)

    Ratnatunga, K. U.; Bahcall, J. N.

    1985-01-01

    Field star densities are estimated for 89 fields with |b| greater than 10 degrees based on the Galaxy model of Bahcall and Soneira (1980, 1984; Bahcall et al. 1985). Calculated tables are presented for 76 of the fields toward Galactic globular clusters and for 16 Local Group galaxies in 13 fields. The estimates can be used as an initial guide for planning both ground-based and Space Telescope observations of globular clusters at intermediate-to-high Galactic latitudes.

  3. Performance of small cluster surveys and the clustered LQAS design to estimate local-level vaccination coverage in Mali

    PubMed Central

    2012-01-01

    Background Estimation of vaccination coverage (VC) at the local level is essential to identify communities that may require additional support. Cluster surveys can be used in resource-poor settings when population figures are inaccurate. To be feasible, cluster samples need to be small without losing robustness of results. The clustered LQAS (CLQAS) approach has been proposed as an alternative, as smaller sample sizes are required. Methods We explored (i) the efficiency of cluster surveys of decreasing sample size through bootstrapping analysis and (ii) the performance of CLQAS under three alternative sampling plans to classify local VC, using data from a survey carried out in Mali after mass vaccination against meningococcal meningitis group A. Results VC estimates provided by a 10 × 15 cluster survey design were reasonably robust. We used them to classify health areas in three categories and guide mop-up activities: (i) health areas not requiring supplemental activities; (ii) health areas requiring additional vaccination; (iii) health areas requiring further evaluation. As sample size decreased (from 10 × 15 to 10 × 3), the standard errors of the VC and ICC estimates became increasingly unstable. Results of CLQAS simulations were not accurate for most health areas, with an overall risk of misclassification greater than 0.25 in one health area out of three. It was greater than 0.50 in one health area out of two under two of the three sampling plans. Conclusions Small sample cluster surveys (10 × 15) are acceptably robust for classification of VC at local level. We do not recommend the CLQAS method as currently formulated for evaluating vaccination programmes. PMID:23057445
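    The bootstrapping step (resampling whole clusters to see how the coverage estimate degrades as clusters shrink) can be sketched as follows, with synthetic data standing in for the Mali survey:

        import numpy as np

        rng = np.random.default_rng(2)
        # 10 clusters x 15 children; 1 = vaccinated (synthetic stand-in data)
        clusters = [rng.binomial(1, 0.8, size=15) for _ in range(10)]

        def bootstrap_vc(clusters, n_boot=1000):
            est = []
            for _ in range(n_boot):
                pick = rng.integers(0, len(clusters), size=len(clusters))
                est.append(np.concatenate([clusters[i] for i in pick]).mean())
            return np.mean(est), np.std(est)

        for m in (15, 9, 6, 3):                    # 10 x 15 down to 10 x 3 designs
            vc, se = bootstrap_vc([c[:m] for c in clusters])
            print(f"10 x {m:2d}: VC = {vc:.3f}, bootstrap SE = {se:.3f}")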

  4. IMPROVED RISK ESTIMATES FOR CARBON TETRACHLORIDE

    EPA Science Inventory

    Carbon tetrachloride (CCl4) has been used extensively within the Department of Energy (DOE) nuclear weapons facilities. Costs associated with cleanup of CCl4 at DOE facilities are driven by current cancer risk estimates which assume CCl4 is a genotoxic carcinogen. However, a grow...

  5. Improving Reliability of Subject-Level Resting-State fMRI Parcellation with Shrinkage Estimators

    PubMed Central

    Mejia, Amanda F.; Nebel, Mary Beth; Shou, Haochang; Crainiceanu, Ciprian M.; Pekar, James J.; Mostofsky, Stewart; Caffo, Brian; Lindquist, Martin A.

    2015-01-01

    A recent interest in resting state functional magnetic resonance imaging (rsfMRI) lies in subdividing the human brain into anatomically and functionally distinct regions of interest. For example, brain parcellation is often a necessary step for defining the network nodes used in connectivity studies. While inference has traditionally been performed on group-level data, there is a growing interest in parcellating single subject data. However, this is difficult due to the inherent low signal-to-noise ratio of rsfMRI data, combined with typically short scan lengths. A large number of brain parcellation approaches employ clustering, which begins with a measure of similarity or distance between voxels. The goal of this work is to improve the reproducibility of single-subject parcellation using shrinkage-based estimators of such measures, allowing the noisy subject-specific estimator to “borrow strength” in a principled manner from a larger population of subjects. We present several empirical Bayes shrinkage estimators and outline methods for shrinkage when multiple scans are not available for each subject. We perform shrinkage on raw inter-voxel correlation estimates and use both raw and shrinkage estimates to produce parcellations by performing clustering on the voxels. While we employ a standard spectral clustering approach, our proposed method is agnostic to the choice of clustering method and can be used as a pre-processing step for any clustering algorithm. Using two datasets – a simulated dataset where the true parcellation is known and is subject-specific and a test-retest dataset consisting of two 7-minute resting-state fMRI scans from 20 subjects – we show that parcellations produced from shrinkage correlation estimates have higher reliability and validity than those produced from raw correlation estimates. Application to test-retest data shows that using shrinkage estimators increases the reproducibility of subject-specific parcellations of the motor
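    A minimal sketch of linear, empirical-Bayes-style shrinkage of one subject's inter-voxel correlations toward the group mean, assuming two scans per subject; the weights follow the usual noise-to-total-variance form and are an illustration, not necessarily the authors' exact estimator:

        import numpy as np

        def shrink_correlations(subj_scans, group_mean):
            """subj_scans: (2, p, p) correlation matrices from two scans of one subject.
            group_mean: (p, p) mean correlation matrix over all subjects."""
            subj_mean = subj_scans.mean(axis=0)
            noise_var = (subj_scans[0] - subj_scans[1]) ** 2 / 4.0  # var of the 2-scan mean
            total_var = (subj_mean - group_mean) ** 2 + noise_var
            lam = np.where(total_var > 0, noise_var / total_var, 1.0)
            return lam * group_mean + (1.0 - lam) * subj_mean       # shrink noisier entries more

    The shrunk matrix can then feed any clustering step (e.g., spectral clustering), exactly as the raw correlations would.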

  7. Clustering-based urbanisation to improve enterprise information systems agility

    NASA Astrophysics Data System (ADS)

    Imache, Rabah; Izza, Said; Ahmed-Nacer, Mohamed

    2015-11-01

    Enterprises face daily pressure to demonstrate their ability to adapt quickly to unpredictable changes in technology, society, legislation, competitiveness and globalisation. Thus, to secure its place in this demanding context, an enterprise must always be agile and must ensure its sustainability by continuous improvement of its information system (IS). Therefore, the agility of enterprise information systems (EISs) can be considered today as a primary objective of any enterprise. One way of achieving this objective is the urbanisation of the EIS in the context of continuous improvement, to make it a real asset serving enterprise strategy. This paper investigates the benefits of EIS urbanisation based on clustering techniques as a driver for producing and/or improving agility, to help managers and IT departments continuously improve the performance of the enterprise and make appropriate decisions within the scope of the enterprise objectives and strategy. This approach is applied to the urbanisation of a tour operator's EIS.

  8. Cluster Structure in Cosmological Simulations. I. Correlation to Observables, Mass Estimates, and Evolution

    NASA Astrophysics Data System (ADS)

    Jeltema, Tesla E.; Hallman, Eric J.; Burns, Jack O.; Motl, Patrick M.

    2008-07-01

    We use Enzo, a hybrid Eulerian adaptive mesh refinement/N-body code including nongravitational heating and cooling, to explore the morphology of the X-ray gas in clusters of galaxies and its evolution in current-generation cosmological simulations. We employ and compare two observationally motivated structure measures: power ratios and centroid shift. Overall, the structure of our simulated clusters compares remarkably well to low-redshift observations, although some differences remain that may point to incomplete gas physics. We find no dependence on cluster structure in the mass-observable scaling relations, TX-M and YX-M, when using the true cluster masses. However, estimates of the total mass based on the assumption of hydrostatic equilibrium, as assumed in observational studies, are systematically low. We show that the hydrostatic mass bias strongly correlates with cluster structure and, more weakly, with cluster mass. When the hydrostatic masses are used, the mass-observable scaling relations and gas mass fractions depend significantly on cluster morphology, and the true relations are not recovered even if the most relaxed clusters are used. We show that cluster structure, via the power ratios, can be used to effectively correct the hydrostatic mass estimates and mass scaling relations, suggesting that we can calibrate for this systematic effect in cosmological studies. Similar to observational studies, we find that cluster structure, particularly centroid shift, evolves with redshift. This evolution is mild but will lead to additional errors at high redshift. Projection along the line of sight leads to significant uncertainty in the structure of individual clusters: less than 50% of clusters which appear relaxed in projection based on our structure measures are truly relaxed.

  9. Extending Zelterman's approach for robust estimation of population size to zero-truncated clustered Data.

    PubMed

    Navaratna, W C W; Del Rio Vilas, Victor J; Böhning, Dankmar

    2008-08-01

    Estimation of population size with a missing zero-class is an important problem encountered in epidemiological assessment studies. Fitting a Poisson model to the observed data by maximum likelihood and estimating the population size from this fit is an approach that has been widely used for this purpose. In practice, however, the Poisson assumption is seldom satisfied. Zelterman (1988) proposed a robust estimator for unclustered data that works well in a wide class of distributions applicable to count data. In the work presented here, we extend this estimator to clustered data. The estimator requires fitting a zero-truncated homogeneous Poisson model by maximum likelihood and then using a Horvitz-Thompson estimator of population size. This was found to work well when the data follow the hypothesized homogeneous Poisson model. However, when the true distribution deviates from the hypothesized model, the population size was found to be underestimated. In the search for a more robust estimator, we focused on three models that use all clusters with exactly one case, those clusters with exactly two cases, and those with exactly three cases to estimate the probability of the zero-class, and thereby use data collected on all the clusters in the Horvitz-Thompson estimator of population size. The loss in efficiency associated with the gain in robustness was examined in a simulation study. As a trade-off between gain in robustness and loss in efficiency, the model that uses data collected on clusters with at most three cases to estimate the probability of the zero-class was found to be preferable in general. In applications, we recommend obtaining estimates from all three models and making a choice in light of the estimates from the three models, robustness and the loss in efficiency. PMID:18663764
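    For the unclustered case Zelterman's estimator has a closed form; a small sketch of it (the clustered extension described here adds the Horvitz-Thompson step over clusters, omitted for brevity):

        import numpy as np

        def zelterman_population_size(counts):
            """counts: zero-truncated case counts, one entry per observed unit.
            Zelterman (1988): lambda_hat = 2*f2/f1 from the frequencies of
            ones and twos, then N_hat = n / (1 - exp(-lambda_hat))."""
            counts = np.asarray(counts)
            n = len(counts)
            f1 = np.sum(counts == 1)
            f2 = np.sum(counts == 2)
            lam = 2.0 * f2 / f1
            p0 = np.exp(-lam)            # estimated probability of the zero class
            return n / (1.0 - p0)

        # Example: 60 units observed once, 25 twice, 10 three times
        print(round(zelterman_population_size([1] * 60 + [2] * 25 + [3] * 10)))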

  10. The estimation of masses of individual galaxies in clusters of galaxies.

    NASA Technical Reports Server (NTRS)

    Wolf, R. A.; Bahcall, J. N.

    1972-01-01

    Three different methods of estimating masses are discussed. The 'density method' is based on the analysis of the density distribution of galaxies around the object whose mass is to be found. The 'bound-galaxy method' gives estimates of the mass of a double, triple, or quadruple system from analysis of the orbital motion of the components. The 'virial method' utilizes the formulas derived for the second method to obtain estimates of the virial-theorem masses of whole clusters, and thus to obtain upper limits on the mass of an individual galaxy in a cluster. The analytic formulas are developed and compared with computer experiments, and some applications are given.
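    For orientation, the virial estimator underlying the third method has the familiar order-of-magnitude form (the numerical coefficient varies with the author's conventions; this is a generic statement, not the paper's exact formula):

    \[
      M_{\mathrm{vir}} \;\sim\; \frac{5\,\sigma_v^{2}\,R}{G},
    \]

    where \(\sigma_v\) is the line-of-sight velocity dispersion and \(R\) a characteristic cluster radius.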

  11. A Clustering Classification of Spare Parts for Improving Inventory Policies

    NASA Astrophysics Data System (ADS)

    Meri Lumban Raja, Anton; Ai, The Jin; Diar Astanti, Ririn

    2016-02-01

    Inventory policies in a company may consist of storage, control, and replenishment policies. Since the result of a common ABC inventory classification can only affect the replenishment policy, we propose a clustering-based classification technique as a basis for developing inventory policy, especially storage and control policy. A hierarchical clustering procedure is used after the clustering variables are defined. Since the hierarchical clustering procedure requires metric variables only, a step to convert non-metric variables to metric variables is performed first. The clusters resulting from the clustering technique are analyzed in order to define each cluster's characteristics. Then, the inventory policies are determined for each group according to its characteristics. Real data, consisting of 612 items from a local manufacturer's spare parts warehouse, are used to show the applicability of the proposed methodology.

  12. Structural Nested Mean Models to Estimate the Effects of Time-Varying Treatments on Clustered Outcomes.

    PubMed

    He, Jiwei; Stephens-Shields, Alisa; Joffe, Marshall

    2015-11-01

    In assessing the efficacy of a time-varying treatment, structural nested mean models (SNMMs) are useful in dealing with confounding by variables affected by earlier treatments. These models often consider treatment allocation and repeated measures at the individual level. We extend SNMMs to clustered observations with time-varying confounding and treatments. We demonstrate how to formulate models with both cluster- and unit-level treatments and show how to derive semiparametric estimators of parameters in such models. For unit-level treatments, we consider interference, namely the effect of treatment on outcomes in other units of the same cluster. The properties of the estimators are evaluated through simulations and compared with the conventional GEE regression method for clustered outcomes. To illustrate our method, we use data from the treatment arm of a glaucoma clinical trial to compare the effectiveness of two commonly used ocular hypertension medications. PMID:26115504

  13. Recent improvements in ocean heat content estimation

    NASA Astrophysics Data System (ADS)

    Abraham, J. P.

    2015-12-01

    Increase of ocean heat content is an outcome of a persistent and ongoing energy imbalance in the Earth's climate system. This imbalance, largely caused by human emissions of greenhouse gases, has engendered a multi-decade increase in stored thermal energy within the Earth system, manifest principally as an increase in ocean heat content. Consequently, in order to quantify the rate of global warming, it is necessary to measure the rate of increase of ocean heat content. The historical record of ocean heat content is assembled from a variety of devices with varying spatial and temporal coverage across the globe. One of the most important historical devices is the eXpendable BathyThermograph (XBT), which has been used for decades to measure ocean temperatures to depths of 700 m and deeper. Here, recent progress in improving the XBT record of upper ocean heat content is described, including corrections to systematic biases, the filling of spatial gaps where data do not exist, and the selection of a proper climatology. In addition, comparisons of the revised historical record and CMIP5 climate models are made. There is very good agreement between the models and measurements, with the models slightly under-predicting the increase of ocean heat content in the upper water layers over the past 45 years.

  14. Improving the performance of molecular dynamics simulations on parallel clusters.

    PubMed

    Borstnik, Urban; Hodoscek, Milan; Janezic, Dusanka

    2004-01-01

    In this article a procedure is derived to obtain a performance gain for molecular dynamics (MD) simulations on existing parallel clusters. Parallel clusters use a wide array of interconnection technologies to connect multiple processors together, often at different speeds, such as multiple-processor computers and networking. It is demonstrated how to configure existing MD simulation programs to efficiently handle collective communication on parallel clusters whose processor interconnections have different speeds. PMID:15032512

  15. Improving Collective Estimations Using Resistance to Social Influence

    PubMed Central

    Madirolas, Gabriel; de Polavieja, Gonzalo G.

    2015-01-01

    Groups can make precise collective estimations in cases like the weight of an object or the number of items in a volume. However, in other tasks, for example those requiring memory or mental calculation, subjects often give estimations that deviate widely from factual values. Allowing members of the group to communicate their estimations has the additional perverse effect of shifting individual estimations even closer to the biased collective estimation. Here we show that this negative effect of social interactions can be turned into a method to improve collective estimations. We first obtained a statistical model of how humans change their estimation when receiving the estimates made by other individuals. Using existing experimental data, we confirmed its prediction that individuals use the weighted geometric mean of private and social estimations. We then used this result, and the fact that each individual uses a different value of the social weight, to devise a method that extracts the subgroups resisting social influence. We found that these subgroups of individuals resisting social influence can produce very large improvements in group estimations. This is in contrast to methods using the confidence that each individual declares, for which we find no improvement in group estimations. Moreover, our proposed method does not need historical data to weight individuals by performance. These results show the benefits of using the individual characteristics of the members of a group to better extract collective wisdom. PMID:26565619
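    The update rule is compact enough to state directly; a sketch with the social weight w per individual treated as a fitted parameter (function names hypothetical):

        import numpy as np

        def updated_estimate(private, social, w):
            """Weighted geometric mean: w = 0 ignores others, w = 1 copies them."""
            return private ** (1 - w) * social ** w

        def fit_social_weight(private, social, revised):
            # In log space the rule is linear, so w has a closed form.
            num = np.log(revised) - np.log(private)
            den = np.log(social) - np.log(private)
            return float(np.clip(np.mean(num / den), 0.0, 1.0))

        # An individual privately guesses 120, hears 200, and revises to 150:
        w = fit_social_weight(np.array([120.0]), np.array([200.0]), np.array([150.0]))
        print(f"social weight ~ {w:.2f}")  # small w flags resistance to social influence

    Individuals with small fitted w form the influence-resistant subgroup whose average improves the collective estimate.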

  16. First Estimates of the Fundamental Parameters of Three Large Magellanic Cloud Clusters

    NASA Astrophysics Data System (ADS)

    Piatti, Andrés E.; Clariá, Juan J.; Parisi, María Celeste; Ahumada, Andrea V.

    2011-05-01

    As part of an ongoing project to investigate the cluster formation and chemical evolution history in the Large Magellanic Cloud (LMC), we have used the CTIO 0.9 m telescope to obtain CCD imaging in the Washington system of NGC 2161, SL 874, and KMHK 1719, three unstudied star clusters located in the outer region of the LMC. We measured T1 magnitudes and C - T1 colors for a total of 9611 stars distributed throughout cluster areas of 13.6 × 13.6 arcmin². Cluster radii were estimated from star counts distributed throughout the entire observed fields. Careful attention was paid to setting apart the cluster and field star distributions so that statistically cleaned color-magnitude diagrams (CMDs) were obtained. Based on the best fits of isochrones computed by the Padova group to the (T1, C - T1) CMDs, the δT1 index, and the standard giant branch procedure, ages and metallicities were derived for the three clusters. The different methods for both age and metallicity determination are in good agreement. The three clusters were found to be of intermediate age (~1 Gyr) and relatively metal-poor ([Fe/H] ~ -0.7 dex). By combining the current results with others available in the literature, a total sample of 45 well-known LMC clusters older than 1 Gyr was compiled. By adopting an age interval varying in terms of age according to a logarithmic law, we built the cluster age histogram, which statistically represents the intermediate-age and old stellar populations in the LMC. Two main cluster formation episodes that peaked at t ~ 2 and ~14 Gyr were detected. The present cluster age distribution was compared with star formation rates that were analytically derived in previous studies.

  17. How to Estimate the Value of Service Reliability Improvements

    SciTech Connect

    Sullivan, Michael J.; Mercurio, Matthew G.; Schellenberg, Josh A.; Eto, Joseph H.

    2010-06-08

    A robust methodology for estimating the value of service reliability improvements is presented. Although econometric models for estimating value of service (interruption costs) have been established and widely accepted, analysts often resort to applying relatively crude interruption cost estimation techniques in assessing the economic impacts of transmission and distribution investments. This paper first shows how the use of these techniques can substantially impact the estimated value of service improvements. A simple yet robust methodology that does not rely heavily on simplifying assumptions is presented. When a smart grid investment is proposed, reliability improvement is one of the most frequently cited benefits. Using the best methodology for estimating the value of this benefit is imperative. By providing directions on how to implement this methodology, this paper sends a practical, usable message to the industry.

  18. Improving the Discipline of Cost Estimation and Analysis

    NASA Technical Reports Server (NTRS)

    Piland, William M.; Pine, David J.; Wilson, Delano M.

    2000-01-01

    The need to improve the quality and accuracy of cost estimates of proposed new aerospace systems has been widely recognized. The industry has done the best job of maintaining related capability with improvements in estimation methods and giving appropriate priority to the hiring and training of qualified analysts. Some parts of Government, and the National Aeronautics and Space Administration (NASA) in particular, continue to need major improvements in this area. Recently, NASA recognized that its cost estimation and analysis capabilities had eroded to the point that the ability to provide timely, reliable estimates was impacting the confidence in planning many program activities. As a result, this year the Agency established a lead role for cost estimation and analysis. The Independent Program Assessment Office located at the Langley Research Center was given this responsibility.

  19. High-Resolution Spatial Distribution and Estimation of Access to Improved Sanitation in Kenya

    PubMed Central

    Jia, Peng; Anderson, John D.; Leitner, Michael; Rheingans, Richard

    2016-01-01

    Background Access to sanitation facilities is imperative in reducing the risk of multiple adverse health outcomes. A distinct disparity in sanitation exists among different wealth levels in many low-income countries, which may hinder progress toward each of the Millennium Development Goals. Methods The surveyed households in 397 clusters from the 2008-2009 Kenya Demographic and Health Surveys were divided into five wealth quintiles based on their national asset scores. A series of spatial analysis methods, including excess risk, local spatial autocorrelation, and spatial interpolation, were applied to observe disparities in coverage of improved sanitation among different wealth categories. The total number of people with improved sanitation was estimated by interpolating, time-adjusting, and multiplying the surveyed coverage rates by high-resolution population grids. A comparison was then made with the annual estimates from the United Nations Population Division and the World Health Organization/United Nations Children's Fund Joint Monitoring Program for Water Supply and Sanitation. Results The Empirical Bayesian Kriging interpolation produced minimal root mean squared error for all clusters and five quintiles while predicting the raw and spatial coverage rates of improved sanitation. The coverage in southern regions was generally higher than in the north and east, and the coverage in the south decreased from Nairobi in all directions, while Nyanza and North Eastern Province had relatively poor coverage. The general clustering trend of high and low sanitation improvement among surveyed clusters was confirmed after spatial smoothing. Conclusions There exists an apparent disparity in sanitation among different wealth categories across Kenya, and spatially smoothed coverage rates resulted in a closer estimation of the available statistics than raw coverage rates. Future intervention activities need to be tailored for both different wealth categories and nationally
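    Empirical Bayesian Kriging is tied to proprietary GIS tooling, but the interpolation step can be approximated with ordinary kriging from the open-source pykrige package; a sketch on synthetic cluster coordinates and coverage rates (all values made up):

        import numpy as np
        from pykrige.ok import OrdinaryKriging

        rng = np.random.default_rng(3)
        lon = rng.uniform(34.0, 41.0, 100)     # synthetic survey-cluster longitudes
        lat = rng.uniform(-4.0, 4.0, 100)      # synthetic survey-cluster latitudes
        coverage = np.clip(0.5 + 0.05 * lat + rng.normal(0, 0.1, 100), 0.0, 1.0)

        ok = OrdinaryKriging(lon, lat, coverage, variogram_model="spherical")
        grid_lon = np.linspace(34.0, 41.0, 50)
        grid_lat = np.linspace(-4.0, 4.0, 50)
        z, ss = ok.execute("grid", grid_lon, grid_lat)  # smoothed coverage + variance

    Multiplying a smoothed coverage surface by a gridded population raster then yields the estimated number of people with improved sanitation.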

  20. Improving visual estimates of cervical spine range of motion.

    PubMed

    Hirsch, Brandon P; Webb, Matthew L; Bohl, Daniel D; Fu, Michael; Buerba, Rafael A; Gruskay, Jordan A; Grauer, Jonathan N

    2014-11-01

    Cervical spine range of motion (ROM) is a common measure of cervical conditions, surgical outcomes, and functional impairment. Although ROM is routinely assessed by visual estimation in clinical practice, visual estimates have been shown to be unreliable and inaccurate. Reliable goniometers can be used for assessments, but the associated costs and logistics generally limit their clinical acceptance. To investigate whether training can improve visual estimates of cervical spine ROM, we asked attending surgeons, residents, and medical students at our institution to visually estimate the cervical spine ROM of healthy subjects before and after a training session. This training session included review of normal cervical spine ROM in 3 planes and demonstration of partial and full motion in 3 planes by multiple subjects. Estimates before, immediately after, and 1 month after this training session were compared to assess reliability and accuracy. Immediately after training, errors decreased by 11.9° (flexion-extension), 3.8° (lateral bending), and 2.9° (axial rotation). These improvements were statistically significant. One month after training, visual estimates remained improved, by 9.5°, 1.6°, and 3.1°, respectively, but were statistically significant only in flexion-extension. Although the accuracy of visual estimates can be improved, clinicians should be aware of the limitations of visual estimates of cervical spine ROM. Our study results support scrutiny of visual assessment of ROM as a criterion for diagnosing permanent impairment or disability. PMID:25379754

  1. Age and Mass Estimates for 41 Star Clusters in M33

    NASA Astrophysics Data System (ADS)

    Ma, Jun; Zhou, Xu; Chen, Jian-Sheng

    2004-04-01

    In this second paper of our series, we estimate the ages of 41 star clusters detected by Melnick & D'Odorico in the nearby spiral galaxy M33, by comparing integrated photometric measurements with the theoretical stellar population synthesis models of Bruzual & Charlot. We also calculate the masses of these star clusters using the theoretical M/L_V ratio. The results show that these star clusters formed continuously in M33, with ages from ~7 × 10⁶ to 10¹⁰ years and masses between ~10³ and 2 × 10⁶ M⊙. The M33 frames were observed as part of the BATC Multicolor Survey of the sky in 13 intermediate-band filters from 3800 to 10000 Å. The relation between age and mass confirms that the sample star cluster masses systematically decrease from the oldest to the youngest.

  2. Statistical uncertainties and systematic errors in weak lensing mass estimates of galaxy clusters

    NASA Astrophysics Data System (ADS)

    Köhlinger, F.; Hoekstra, H.; Eriksen, M.

    2015-11-01

    Upcoming and ongoing large area weak lensing surveys will also discover large samples of galaxy clusters. Accurate and precise masses of galaxy clusters are of major importance for cosmology, for example, in establishing well-calibrated observational halo mass functions for comparison with cosmological predictions. We investigate the level of statistical uncertainties and sources of systematic errors expected for weak lensing mass estimates. Future surveys that will cover large areas on the sky, such as Euclid or LSST and, to a lesser extent, DES, will provide the largest weak lensing cluster samples with the lowest level of statistical noise regarding ensembles of galaxy clusters. However, the expected low level of statistical uncertainties requires us to scrutinize various sources of systematic errors. In particular, we investigate the bias due to cluster member galaxies which are erroneously treated as background source galaxies due to wrongly assigned photometric redshifts. We find that this effect is significant when referring to stacks of galaxy clusters. Finally, we study the bias due to miscentring, i.e. the displacement between any observationally defined cluster centre and the true minimum of its gravitational potential. The impact of this bias might be significant with respect to the statistical uncertainties. However, complementary future missions such as eROSITA will allow us to define stringent priors on miscentring parameters which will mitigate this bias significantly.

  3. An Improved Fuzzy c-Means Clustering Algorithm Based on Shadowed Sets and PSO

    PubMed Central

    Zhang, Jian; Shen, Ling

    2014-01-01

    To organize a wide variety of data sets automatically and acquire accurate classification, this paper presents a modified fuzzy c-means algorithm (SP-FCM) based on particle swarm optimization (PSO) and shadowed sets to perform feature clustering. SP-FCM introduces the global search property of PSO to deal with the premature convergence of conventional fuzzy clustering, utilizes the vagueness balance property of shadowed sets to handle overlapping among clusters, and models uncertainty in class boundaries. The new method uses the Xie-Beni index as a cluster validity measure and automatically finds the optimal cluster number within a specific range, with cluster partitions that provide compact and well-separated clusters. Experiments show that the proposed approach significantly improves the clustering effect. PMID:25477953
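    SP-FCM layers PSO and shadowed sets on top of the standard fuzzy c-means core; the core updates alone, with the Xie-Beni index used to score a candidate cluster number, look roughly like this (plain FCM, not SP-FCM):

        import numpy as np

        def fcm(X, c, m=2.0, n_iter=100, eps=1e-9, seed=0):
            rng = np.random.default_rng(seed)
            U = rng.dirichlet(np.ones(c), size=len(X))         # fuzzy memberships
            for _ in range(n_iter):
                Um = U ** m
                V = (Um.T @ X) / Um.sum(axis=0)[:, None]       # cluster centres
                d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1) + eps
                a = d2 ** (-1.0 / (m - 1.0))
                U = a / a.sum(axis=1, keepdims=True)           # membership update
            return U, V, d2

        def xie_beni(U, V, d2, m=2.0):
            sep = ((V[:, None, :] - V[None, :, :]) ** 2).sum(-1)
            np.fill_diagonal(sep, np.inf)
            return (U ** m * d2).sum() / (len(U) * sep.min())  # smaller is better

    Scanning c over a range and keeping the minimum Xie-Beni value reproduces the validity-driven choice of cluster number; SP-FCM additionally lets PSO search the partition space and shadowed sets handle boundary points.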

  4. Estimators for Clustered Education RCTs Using the Neyman Model for Causal Inference

    ERIC Educational Resources Information Center

    Schochet, Peter Z.

    2013-01-01

    This article examines the estimation of two-stage clustered designs for education randomized control trials (RCTs) using the nonparametric Neyman causal inference framework that underlies experiments. The key distinction between the considered causal models is whether potential treatment and control group outcomes are considered to be fixed for…

  5. Improvements in estimating proportions of objects from multispectral data

    NASA Technical Reports Server (NTRS)

    Horwitz, H. M.; Hyde, P. D.; Richardson, W.

    1974-01-01

    Methods for estimating the proportions of objects and materials imaged within the instantaneous field of view of a multispectral sensor were developed further. Improvements in the basic proportion estimation algorithm were devised, as well as improved alien object detection procedures. Also, a simplified signature set analysis scheme was introduced for determining the adequacy of signature set geometry for satisfactory proportion estimation. Averaging procedures used in conjunction with the mixtures algorithm were examined theoretically and applied to artificially generated multispectral data. A computationally simpler estimator was considered and found unsatisfactory. Experiments conducted to find a suitable procedure for setting the alien object threshold yielded few definitive results. Mixtures procedures were used on a limited amount of ERTS data to estimate the wheat proportion in selected areas. Results were unsatisfactory, partly because of the ill-conditioned nature of the pure signature set.

  6. An Example of an Improvable Rao–Blackwell Improvement, Inefficient Maximum Likelihood Estimator, and Unbiased Generalized Bayes Estimator

    PubMed Central

    Galili, Tal; Meilijson, Isaac

    2016-01-01

    The Rao–Blackwell theorem offers a procedure for converting a crude unbiased estimator of a parameter θ into a “better” one, in fact unique and optimal if the improvement is based on a minimal sufficient statistic that is complete. In contrast, behind every minimal sufficient statistic that is not complete, there is an improvable Rao–Blackwell improvement. This is illustrated via a simple example based on the uniform distribution, in which a rather natural Rao–Blackwell improvement is uniformly improvable. Furthermore, in this example the maximum likelihood estimator is inefficient, and an unbiased generalized Bayes estimator performs exceptionally well. Counterexamples of this sort can be useful didactic tools for explaining the true nature of a methodology and possible consequences when some of the assumptions are violated.

  7. Improving three-dimensional mass mapping with weak gravitational lensing using galaxy clustering

    NASA Astrophysics Data System (ADS)

    Simon, Patrick

    2013-12-01

    Context. The weak gravitational lensing distortion of distant galaxy images (defined as sources) probes the projected large-scale matter distribution in the Universe. The availability of redshift information in galaxy surveys also allows us to recover the radial matter distribution to a certain degree. Aims: To improve quality in the mass mapping, we combine the lensing information with the spatial clustering of a population of galaxies (defined as tracers) that trace the matter density with a known galaxy bias. Methods: We construct a minimum-variance estimator for the 3D matter density that incorporates the angular distribution of galaxy tracers, which are coarsely binned in redshift. Merely the second-order bias of the tracers has to be known, which can in principle be self-consistently constrained in the data by lensing techniques. This synergy introduces a new noise component because of the stochasticity in the matter-tracer density relation. We give a description of the stochasticity noise in the Gaussian regime, and we investigate the estimator characteristics analytically. We apply the estimator to a mock survey based on the Millennium Simulation. Results: The estimator linearly mixes the individual lensing mass and tracer number density maps into a combined smoothed mass map. The weighting in the mix depends on the signal-to-noise ratio (S/N) of the individual maps and the correlation, R, between the matter and galaxy density. The weight of the tracers can be reduced by hand. For moderate mixing, the S/N in the mass map improves by a factor ~2-3 for R ≳ 0.4. Importantly, the systematic offset between a true and apparent mass peak distance (defined as z-shift bias) in a lensing-only map is eliminated, even for weak correlations of R ~ 0.4. Conclusions: If the second-order bias of tracer galaxies can be determined, the synergy technique potentially provides an option to improve redshift accuracy and completeness of the lensing 3D mass map. Herein, the aim

  8. Estimate of the Total Mechanical Feedback Energy from Galaxy Cluster-centered Black Holes: Implications for Black Hole Evolution, Cluster Gas Fraction, and Entropy

    NASA Astrophysics Data System (ADS)

    Mathews, William G.; Guo, Fulai

    2011-09-01

    The total feedback energy injected into hot gas in galaxy clusters by central black holes can be estimated by comparing the potential energy of observed cluster gas profiles with the potential energy of non-radiating, feedback-free hot gas atmospheres resulting from gravitational collapse in clusters of the same total mass. Feedback energy from cluster-centered black holes expands the cluster gas, lowering the gas-to-dark-matter mass ratio below the cosmic value. Feedback energy is unnecessarily delivered by radio-emitting jets to distant gas far beyond the cooling radius where the cooling time equals the cluster lifetime. For clusters of mass (4-11) × 10^14 M_sun, estimates of the total feedback energy, (1-3) × 10^63 erg, far exceed feedback energies estimated from observations of X-ray cavities and shocks in the cluster gas, energies gained from supernovae, and energies lost from cluster gas by radiation. The time-averaged mean feedback luminosity is comparable to those of powerful quasars, implying that some significant fraction of this energy may arise from the spin of the black hole. The universal entropy profile in feedback-free gaseous atmospheres in Navarro-Frenk-White cluster halos can be recovered by multiplying the observed gas entropy profile of any relaxed cluster by a factor involving the gas fraction profile. While the feedback energy and associated mass outflow in the clusters we consider far exceed that necessary to stop cooling inflow, the time-averaged mass outflow at the cooling radius almost exactly balances the mass that cools within this radius, an essential condition to shut down cluster cooling flows.

  9. Estimating regression coefficients from clustered samples: Sampling errors and optimum sample allocation

    NASA Astrophysics Data System (ADS)

    Kalton, G.

    1983-05-01

    A number of surveys were conducted to study the relationship between the level of aircraft or traffic noise exposure experienced by people living in a particular area and their annoyance with it. These surveys generally employ a clustered sample design, which affects the precision of the survey estimates. Regression analysis of annoyance on noise measures and other variables is often an important component of the survey analysis. Formulae are presented for estimating the standard errors of regression coefficients and ratios of regression coefficients that are applicable with a two- or three-stage clustered sample design. Using a simple cost function, the optimum allocation of the sample across the stages of the design is also determined for the estimation of a regression coefficient.
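
    As a rough illustration of why clustering inflates the standard errors discussed here, the usual first-order design-effect approximation can be sketched as follows; the cluster size, intraclass correlation, and SE values are invented for illustration, not taken from the surveys.

    ```python
    import math

    def design_effect(avg_cluster_size: float, icc: float) -> float:
        """First-order design effect for a two-stage clustered sample:
        deff = 1 + (b - 1) * rho, where b is the average cluster size
        and rho the intraclass correlation of the analysis variable."""
        return 1.0 + (avg_cluster_size - 1.0) * icc

    # Inflate a simple-random-sampling standard error of a regression
    # coefficient to account for clustering (illustrative values).
    se_srs = 0.05
    se_clustered = se_srs * math.sqrt(design_effect(avg_cluster_size=20, icc=0.03))
    print(f"clustered SE ~ {se_clustered:.4f}")
    ```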

  11. Scanning linear estimation: improvements over region of interest (ROI) methods

    NASA Astrophysics Data System (ADS)

    Kupinski, Meredith K.; Clarkson, Eric W.; Barrett, Harrison H.

    2013-03-01

    In tomographic medical imaging, signal activity is typically estimated by summing voxels from a reconstructed image. We introduce an alternative estimation scheme that operates on the raw projection data and offers a substantial improvement, as measured by the ensemble mean-square error (EMSE), when compared to using voxel values from a maximum-likelihood expectation-maximization (MLEM) reconstruction. The scanning-linear (SL) estimator operates on the raw projection data and is derived as a special case of maximum-likelihood estimation with a series of approximations to make the calculation tractable. The approximated likelihood accounts for background randomness, measurement noise and variability in the parameters to be estimated. When signal size and location are known, the SL estimate of signal activity is unbiased, i.e. the average estimate equals the true value. By contrast, unpredictable bias arising from the null functions of the imaging system affects standard algorithms that operate on reconstructed data. The SL method is demonstrated for two different tasks: (1) simultaneously estimating a signal's size, location and activity; (2) for a fixed signal size and location, estimating activity. Noisy projection data are realistically simulated using measured calibration data from the multi-module multi-resolution small-animal SPECT imaging system. For both tasks, the same set of images is reconstructed using the MLEM algorithm (80 iterations), and the average and maximum values within the region of interest (ROI) are calculated for comparison. This comparison shows dramatic improvements in EMSE for the SL estimates. To show that the bias in ROI estimates affects not only absolute values but also relative differences, such as those used to monitor the response to therapy, the activity estimation task is repeated for three different signal sizes.

  12. Improving The Discipline of Cost Estimation and Analysis

    NASA Technical Reports Server (NTRS)

    Piland, William M.; Pine, David J.; Wilson, Delano M.

    2000-01-01

    The need to improve the quality and accuracy of cost estimates of proposed new aerospace systems has been widely recognized. The industry has done the best job of maintaining related capability with improvements in estimation methods and giving appropriate priority to the hiring and training of qualified analysts. Some parts of Government, and National Aeronautics and Space Administration (NASA) in particular, continue to need major improvements in this area. Recently, NASA recognized that its cost estimation and analysis capabilities had eroded to the point that the ability to provide timely, reliable estimates was impacting the confidence in planning many program activities. As a result, this year the Agency established a lead role for cost estimation and analysis. The Independent Program Assessment Office located at the Langley Research Center was given this responsibility. This paper presents the plans for the newly established role. Described is how the Independent Program Assessment Office, working with all NASA Centers, NASA Headquarters, other Government agencies, and industry, is focused on creating cost estimation and analysis as a professional discipline that will be recognized equally with the technical disciplines needed to design new space and aeronautics activities. Investments in selected, new analysis tools, creating advanced training opportunities for analysts, and developing career paths for future analysts engaged in the discipline are all elements of the plan. Plans also include increasing the human resources available to conduct independent cost analysis of Agency programs during their formulation, to improve near-term capability to conduct economic cost-benefit assessments, to support NASA management's decision process, and to provide cost analysis results emphasizing "full-cost" and "full-life cycle" considerations. The Agency cost analysis improvement plan has been approved for implementation starting this calendar year. Adequate financial

  13. Spectral clustering for optical confirmation and redshift estimation of X-ray selected galaxy cluster candidates in the SDSS Stripe 82

    NASA Astrophysics Data System (ADS)

    Mahmoud, E.; Takey, A.; Shoukry, A.

    2016-07-01

    We develop a galaxy cluster finding algorithm based on a spectral clustering technique to identify optical counterparts and estimate optical redshifts for X-ray selected cluster candidates. As an application, we run our algorithm on a sample of X-ray cluster candidates selected from the third XMM-Newton serendipitous source catalog (3XMM-DR5) that are located in Stripe 82 of the Sloan Digital Sky Survey (SDSS). Our method works on galaxies described in the color-magnitude feature space. We begin by examining 45 galaxy clusters with published spectroscopic redshifts in the range 0.1-0.8, with a median of 0.36. As a result, we are able to identify their optical counterparts and estimate their photometric redshifts, which have a typical accuracy of 0.025 and agree with the published ones. We then investigate another 40 X-ray cluster candidates (from the same cluster survey) with no redshift information in the literature and find that 12 candidates are galaxy clusters in the redshift range 0.29-0.76, with a median of 0.57. These systems are newly discovered clusters in both X-ray and optical data. Among them, 7 clusters have spectroscopic redshifts for at least one member galaxy.
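
    A minimal sketch of the general idea, clustering galaxies in a color-magnitude feature space to isolate a tight candidate-member clump, might look as follows; the synthetic data, features, and scikit-learn parameters are assumptions for illustration, not the paper's implementation.

    ```python
    import numpy as np
    from sklearn.cluster import SpectralClustering

    # Hypothetical galaxy table: r-band magnitude and two colors
    # (the paper's actual feature space and parameters are not specified here).
    rng = np.random.default_rng(0)
    field = rng.uniform([16.0, 0.0, 0.0], [22.0, 1.8, 0.8], size=(400, 3))
    red_seq = rng.normal([19.5, 1.4, 0.55], [0.8, 0.05, 0.04], size=(60, 3))
    X = np.vstack([field, red_seq])

    # Spectral clustering separates the tight red-sequence clump from the field.
    labels = SpectralClustering(n_clusters=2, affinity="nearest_neighbors",
                                n_neighbors=15, random_state=0).fit_predict(X)

    # The group with the smaller color scatter is the candidate optical
    # counterpart; its mean color could then be mapped to a photometric redshift.
    scatter = [X[labels == k][:, 1].std() for k in range(2)]
    members = X[labels == int(np.argmin(scatter))]
    print(len(members), "candidate member galaxies")
    ```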

  14. Comparative assessment of bone pose estimation using Point Cluster Technique and OpenSim.

    PubMed

    Lathrop, Rebecca L; Chaudhari, Ajit M W; Siston, Robert A

    2011-11-01

    Estimating the position of the bones from optical motion capture data is a challenge associated with human movement analysis. Bone pose estimation techniques such as the Point Cluster Technique (PCT) and simulations of movement through software packages such as OpenSim are used to minimize soft tissue artifact and estimate skeletal position; however, using different methods for analysis may produce differing kinematic results which could lead to differences in clinical interpretation such as a misclassification of normal or pathological gait. This study evaluated the differences present in knee joint kinematics as a result of calculating joint angles using various techniques. We calculated knee joint kinematics from experimental gait data using the standard PCT, the least squares approach in OpenSim applied to experimental marker data, and the least squares approach in OpenSim applied to the results of the PCT algorithm. Maximum and resultant RMS differences in knee angles were calculated between all techniques. We observed differences in flexion/extension, varus/valgus, and internal/external rotation angles between all approaches. The largest differences were between the PCT results and all results calculated using OpenSim. The RMS differences averaged nearly 5° for flexion/extension angles with maximum differences exceeding 15°. Average RMS differences were relatively small (< 1.08°) between results calculated within OpenSim, suggesting that the choice of marker weighting is not critical to the results of the least squares inverse kinematics calculations. The largest difference between techniques appeared to be a constant offset between the PCT and all OpenSim results, which may be due to differences in the definition of anatomical reference frames, scaling of musculoskeletal models, and/or placement of virtual markers within OpenSim. Different methods for data analysis can produce largely different kinematic results, which could lead to the misclassification

  15. Improved Versions of Common Estimators of the Recombination Rate.

    PubMed

    Gärtner, Kerstin; Futschik, Andreas

    2016-09-01

    The scaled recombination parameter ρ is one of the key parameters turning up frequently in population genetic models. Accurate estimates of ρ are difficult to obtain, as recombination events do not always leave traces in the data. One of the most widely used approaches is composite likelihood. Here, we show that popular implementations of composite likelihood estimators can often be uniformly improved by optimizing the trade-off between bias and variance. The amount of possible improvement depends on parameters such as the sequence length, the sample size, and the mutation rate, and it can be considerable in some cases. It turns out that approximate Bayesian computation, with composite likelihood as a summary statistic, also leads to improved estimates, but now in terms of the posterior risk. Finally, we demonstrate a practical application on real data from Drosophila. PMID:27409412

  16. Improving terrain height estimates from RADARSAT interferometric measurements

    SciTech Connect

    Thompson, P.A.; Eichel, P.H.; Calloway, T.M.

    1998-03-01

    The authors describe two methods of combining two-pass RADARSAT interferometric phase maps with existing DTED (digital terrain elevation data) to produce improved terrain height estimates. The first is a least-squares estimation procedure that fits the unwrapped phase data to a phase map computed from the DTED. The second is a filtering technique that combines the interferometric height map with the DTED map based on spatial frequency content. Both methods preserve the high fidelity of the interferometric data.
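
    The second, filtering-based method can be sketched roughly as a frequency-band blend of the two co-registered height maps. One plausible split, keeping the large-scale content of the DTED and adding the fine-scale detail of the interferometric map, is shown below; the crossover scale and the Gaussian filter are illustrative choices, not the authors'.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def blend_heights(dted: np.ndarray, insar: np.ndarray, sigma_px: float = 8.0):
        """Combine two co-registered height maps by spatial frequency content:
        low frequencies from the DTED plus high-frequency detail from the
        interferometric map. sigma_px sets the crossover scale (illustrative)."""
        low = gaussian_filter(dted, sigma_px)             # DTED -> low frequencies
        high = insar - gaussian_filter(insar, sigma_px)   # InSAR residual -> detail
        return low + high
    ```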

  17. Distributing Power Grid State Estimation on HPC Clusters A System Architecture Prototype

    SciTech Connect

    Liu, Yan; Jiang, Wei; Jin, Shuangshuang; Rice, Mark J.; Chen, Yousu

    2012-08-20

    The future power grid is expected to further expand with highly distributed energy sources and smart loads. The increased size and complexity lead to an increased burden on existing computational resources in energy control centers. Thus the need to perform real-time assessment of such systems entails efficient means to distribute centralized functions such as state estimation in the power system. In this paper, we present an early prototype of a system architecture that connects distributed state estimators, each running parallel programs to solve the non-linear estimation procedure. The prototype consists of a middleware and data processing toolkits that allow data exchange in the distributed state estimation. We build a test case based on the IEEE 118-bus system and partition the state estimation of the whole system model across available HPC clusters. Measurements from the testbed demonstrate the low overhead of our solution.
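
    Concretely, the non-linear estimation each partition solves is conventionally an iterated weighted-least-squares problem; the sketch below shows one generic linearized solve (a textbook step under that assumption, not the paper's parallel implementation; H, z, and R are assumed inputs).

    ```python
    import numpy as np

    def wls_state_update(H, z, R):
        """One linearized weighted-least-squares step of power-system state
        estimation: minimize (z - H x)' R^-1 (z - H x) over the state x.
        H: measurement Jacobian, z: measurement (residual) vector,
        R: measurement-error covariance."""
        W = np.linalg.inv(R)
        G = H.T @ W @ H                    # gain matrix, factorized in practice
        return np.linalg.solve(G, H.T @ W @ z)
    ```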

  18. Under What Circumstances Does External Knowledge about the Correlation Structure Improve Power in Cluster Randomized Designs?

    ERIC Educational Resources Information Center

    Rhoads, Christopher

    2014-01-01

    Recent publications have drawn attention to the idea of utilizing prior information about the correlation structure to improve statistical power in cluster randomized experiments. Because power in cluster randomized designs is a function of many different parameters, it has been difficult for applied researchers to discern a simple rule explaining…

  19. Improving warm rain estimation in the PERSIANN-CCS satellite-based retrieval algorithm

    NASA Astrophysics Data System (ADS)

    Karbalaee, N.; Hsu, K. L.; Sorooshian, S.

    2015-12-01

    The Precipitation Estimation from remotely Sensed Information using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS) is one of the algorithms being integrated in the IMERG (Integrated Multi-Satellite Retrievals for the Global Precipitation Mission, GPM) to estimate precipitation at 0.04° lat-long scale every 30 minutes. PERSIANN-CCS extracts features from infrared cloud image segmentation at three brightness temperature thresholds (220K, 235K, and 253K). Warm raining clouds with brightness temperatures higher than 253K are not covered by the current algorithm. To improve detection of warm rain, in this study the cloud image segmentation threshold is extended from 253K to 300K to cover warmer clouds. Several other temperature thresholds between 253K and 300K were also examined. A K-means clustering algorithm was used to classify the extracted image features into 400 groups. Rainfall rates for each cluster were retrained using radar rainfall measurements. Case studies were carried out over CONUS to investigate the ability to improve detection of warm rainfall from segmentation and image classification using warmer temperature thresholds. Satellite imagery and radar rainfall data from both summer and winter seasons of 2012 were used as training data. Overall, the results show that rain detection from warm clouds is significantly improved. However, they also show that false rain detection increases as the segmentation temperature threshold is raised.
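
    A skeletal version of the classification step, grouping segmented cloud-patch features into 400 classes with K-means, is sketched below; the synthetic feature table and parameter choices are placeholders, not the study's data.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    # Hypothetical feature table: one row per cloud patch segmented at a
    # brightness-temperature threshold (e.g. coldest Tb, mean Tb, texture).
    rng = np.random.default_rng(1)
    features = rng.normal(size=(5000, 3))

    # PERSIANN-CCS-style step: group patches into 400 classes; a Tb-rain-rate
    # relation per class would then be calibrated against radar data (not shown).
    class_id = KMeans(n_clusters=400, n_init=4, random_state=0).fit_predict(features)
    ```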

  20. Improving PERSIANN-CCS Rainfall Estimation using Passive Microwave Rainfall Estimation

    NASA Astrophysics Data System (ADS)

    Karbalaee, N.; Hsu, K. L.; Sorooshian, S.

    2014-12-01

    This presentation discusses recent improvements to PERSIANN-CCS (Precipitation Estimation from remotely Sensed Information using Artificial Neural Networks-Cloud Classification System). PERSIANN-CCS is one of the algorithms being integrated in the IMERG (Integrated Multi-Satellite Retrievals for the Global Precipitation Mission, GPM) to estimate precipitation at 0.04° lat-long scale at every 30-minute interval. While PERSIANN-CCS has a relatively fine temporal and spatial resolution for generating rainfall estimates over the globe, it sometimes underestimates or overestimates over some regions, depending on conditions. In this study, improving the PERSIANN-CCS precipitation estimates using long-term passive microwave (PMW) rainfall estimates is explored. The adjustment proceeds by matching the probability distribution of PERSIANN-CCS estimates to the PMW rainfall estimates. Four years of concurrent samples from 2008 to 2011 were used for calibration, while one year (2012) of data was used for validation of the PMW-adjusted PERSIANN-CCS estimates. Samples over 5°x5° lat-long boxes were collected, and an adjustment look-up table for each month covering 60°S-60°N was generated. The validation of PERSIANN-CCS estimates before and after PMW adjustment over CONUS using radar data was investigated. The results show that the adjustment has a different impact on the PERSIANN-CCS rain estimates depending on location and time of year. PERSIANN-CCS adjustments were found to be more significant at high latitudes and in winter, and less significant at low latitudes and in summer.
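
    The distribution-matching adjustment described here is essentially quantile (CDF) mapping. A minimal sketch, assuming simple 1-D rain-rate samples rather than the study's per-month, per-grid-box lookup tables:

    ```python
    import numpy as np

    def quantile_match(ccs, ccs_train, pmw_train):
        """Map PERSIANN-CCS rain rates onto the PMW rain-rate distribution by
        matching empirical CDFs. In practice one table is built per month and
        grid box; ties at zero rain rates would also need special handling."""
        probs = np.linspace(0.0, 1.0, 201)
        ccs_q = np.quantile(ccs_train, probs)   # CCS quantiles, training years
        pmw_q = np.quantile(pmw_train, probs)   # concurrent PMW quantiles
        return np.interp(ccs, ccs_q, pmw_q)     # adjusted estimates
    ```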

  1. Optimal Cluster-based Models for Estimation of Missing Precipitation Records

    NASA Astrophysics Data System (ADS)

    Teegavarapu, R. S.

    2008-05-01

    Deterministic and stochastic weighting methods are the most frequently used methods for estimating missing rainfall values at a gage based on values recorded at all other available recording gages. Distance-based weighting methods suffer from one major conceptual limitation: Euclidean distance is not always a definitive measure of the correlation among spatial point measurements. Another point of contention is the number of control points used in the estimation process. Several spatial weighting methods and optimal cluster-based models are proposed, developed and investigated for estimation of missing precipitation records. These methods use mathematical programming formulations and evolutionary algorithms. Historical daily precipitation data obtained from 15 rain gauging stations in a temperate climatic region are used to test the methods and derive conclusions about their efficacy. Results suggest that the weights and cluster-based models derived from mathematical programming formulations and surrogate parameters for correlation are superior to traditional distance-based weights used in spatial interpolation for estimation of missing rainfall data at points of interest.
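
    To make the contrast concrete, a toy comparison of traditional inverse-distance weights with correlation-based weights for filling a missing gage value is sketched below; all numbers are invented for illustration.

    ```python
    import numpy as np

    def estimate_missing(values, weights):
        """Weighted estimate of a missing gage value from surrounding gages."""
        w = np.asarray(weights, dtype=float)
        return float(np.dot(w, values) / w.sum())

    # Distance in km and correlation need not track each other, which is the
    # conceptual limitation of distance-based weighting noted above.
    dist = np.array([5.0, 12.0, 30.0])
    corr = np.array([0.9, 0.6, 0.8])
    obs = np.array([10.2, 7.5, 9.8])        # rainfall at neighboring gages, mm

    print(estimate_missing(obs, 1.0 / dist**2))  # inverse-distance weights
    print(estimate_missing(obs, corr**2))        # correlation-based weights
    ```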

  2. Comparison of methods for estimating the intraclass correlation coefficient for binary responses in cancer prevention cluster randomized trials.

    PubMed

    Wu, Sheng; Crespi, Catherine M; Wong, Weng Kee

    2012-09-01

    The intraclass correlation coefficient (ICC) is a fundamental parameter of interest in cluster randomized trials as it can greatly affect statistical power. We compare common methods of estimating the ICC in cluster randomized trials with binary outcomes, with a specific focus on their application to community-based cancer prevention trials with primary outcome of self-reported cancer screening. Using three real data sets from cancer screening intervention trials with different numbers and types of clusters and cluster sizes, we obtained point estimates and 95% confidence intervals for the ICC using five methods: the analysis of variance estimator, the Fleiss-Cuzick estimator, the Pearson estimator, an estimator based on generalized estimating equations and an estimator from a random intercept logistic regression model. We compared estimates of the ICC for the overall sample and by study condition. Our results show that ICC estimates from different methods can be quite different, although confidence intervals generally overlap. The ICC varied substantially by study condition in two studies, suggesting that the common practice of assuming a common ICC across all clusters in the trial is questionable. A simulation study confirmed pitfalls of erroneously assuming a common ICC. Investigators should consider using sample size and analysis methods that allow the ICC to vary by study condition. PMID:22627076
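
    Of the five methods compared, the analysis-of-variance estimator is the most common; a minimal sketch (standard one-way ANOVA formulas, not the authors' code) follows.

    ```python
    import numpy as np

    def anova_icc(clusters):
        """One-way ANOVA estimator of the ICC; works for binary (0/1) outcomes.
        `clusters` is a list of 1-D arrays, one array per cluster."""
        k = len(clusters)
        sizes = np.array([len(c) for c in clusters], dtype=float)
        n = sizes.sum()
        grand = np.concatenate(clusters).mean()
        means = np.array([c.mean() for c in clusters])
        msb = (sizes * (means - grand) ** 2).sum() / (k - 1)
        msw = sum(((c - m) ** 2).sum() for c, m in zip(clusters, means)) / (n - k)
        n0 = (n - (sizes ** 2).sum() / n) / (k - 1)   # adjusted mean cluster size
        return (msb - msw) / (msb + (n0 - 1) * msw)
    ```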

  3. A simple recipe for estimating masses of elliptical galaxies and clusters of galaxies

    NASA Astrophysics Data System (ADS)

    Lyskova, N.

    2013-04-01

    We discuss a simple and robust procedure to evaluate the mass/circular velocity of massive elliptical galaxies and clusters of galaxies. It relies only on the surface density and the projected velocity dispersion profiles of tracer particles and therefore can be applied even in the case of poor or noisy observational data. Stars, globular clusters or planetary nebulae can be used as tracers for mass determination of elliptical galaxies. For clusters, the galaxies themselves can be used as tracer particles. The key element of the proposed procedure is the selection of a "sweet" radius R_sweet, where the sensitivity to the unknown anisotropy of the tracers' orbits is minimal. At this radius the surface density of tracers declines approximately as I(R) ∝ R^-2, thus placing R_sweet not far from the half-light radius of the tracers, R_eff. The procedure was tested on a sample of cosmological simulations of individual galaxies and galaxy clusters and then applied to real observational data. Independently, the total mass profile was derived from the hydrostatic equilibrium equation for the gaseous atmosphere. The mismatch in mass profiles obtained from optical and X-ray data is used to estimate the non-thermal contribution to the gas pressure and/or to constrain the distribution of tracers' orbits.
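
    A rough sketch of locating the "sweet" radius from a tracer surface-density profile is given below; the de Vaucouleurs profile is only a convenient example, and real profiles would need smoothing before differentiation.

    ```python
    import numpy as np

    def sweet_radius(R, I):
        """Locate the radius where the tracer surface density falls as
        I(R) ~ R^-2, i.e. where dlnI/dlnR = -2 (minimal-anisotropy-sensitivity
        point per the abstract)."""
        lnR, lnI = np.log(R), np.log(I)
        slope = np.gradient(lnI, lnR)
        return R[int(np.argmin(np.abs(slope + 2.0)))]

    # Illustrative de Vaucouleurs (n=4) profile in units of R_eff; the slope
    # passes through -2 close to R_eff, as the abstract notes.
    R = np.logspace(-1, 1, 200)
    I = np.exp(-7.67 * (R ** 0.25 - 1.0))
    print(sweet_radius(R, I))
    ```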

  4. Distance Estimates for High Redshift Clusters SZ and X-Ray Measurements

    NASA Technical Reports Server (NTRS)

    Joy, Marshall K.

    1999-01-01

    I present interferometric images of the Sunyaev-Zel'dovich effect for the high redshift (z > 0.5) galaxy clusters in the Einstein Medium Sensitivity Survey: MS0451.5-0305 (z = 0.54), MS0015.9+1609 (z = 0.55), MS2053.7-0449 (z = 0.58), MS1137.5+6625 (z = 0.78), and MS1054.5-0321 (z = 0.83). Isothermal β models are applied to the data to determine the magnitude of the Sunyaev-Zel'dovich (S-Z) decrement in each cluster. Complementary ROSAT PSPC and HRI X-ray data are also analyzed, and are combined with the S-Z data to generate an independent estimate of the cluster distance. Since the Sunyaev-Zel'dovich effect is invariant with redshift, sensitive S-Z imaging can provide an independent determination of the size, shape, density, and distance of high redshift galaxy clusters; we will discuss current systematic uncertainties with this approach, as well as future observations which will yield stronger constraints.

  5. Performance Analysis of an Improved MUSIC DoA Estimator

    NASA Astrophysics Data System (ADS)

    Vallet, Pascal; Mestre, Xavier; Loubaton, Philippe

    2015-12-01

    This paper addresses the statistical performance of subspace DoA estimation using a sensor array, in the asymptotic regime where the number of samples and sensors both converge to infinity at the same rate. Improved subspace DoA estimators (termed G-MUSIC) were derived in previous works and shown to be consistent and asymptotically Gaussian distributed in the case where the number of sources and their DoA remain fixed. In this case, which models widely spaced DoA scenarios, it is proved in the present paper that the traditional MUSIC method also provides consistent DoA estimates having the same asymptotic variances as the G-MUSIC estimates. The case of DoA spaced on the order of a beamwidth, which models closely spaced sources, is also considered. It is shown that G-MUSIC estimates are still able to consistently separate the sources, while this is no longer the case for the MUSIC estimates. The asymptotic variances of G-MUSIC estimates are also evaluated.

  6. Estimating the incubation period of raccoon rabies: a time-space clustering approach.

    PubMed

    Tinline, Rowland; Rosatte, Rick; MacInnes, Charles

    2002-11-29

    We used a time-space clustering approach to estimate the incubation period of raccoon rabies in the wild, using data from the 1999-2001 invasion of raccoon rabies into eastern Ontario from northern New York State. The time differences and geographical distances between all possible pairs of rabies cases were computed, classified and assembled into a time-space matrix. The rows of the matrix represent differences between cases in weeks, the columns represent distances between cases in kilometers, and the cells hold the counts of case pairs at specific time and distance intervals. There was a significant cluster of pairs 5 weeks apart, with apparent harmonics at additional 5-week intervals. These results are explained by assuming the incubation period of raccoon rabies had a mode of 5 weeks. The time clusters appeared consistently at distance intervals of 5 km. We discuss the possibility that the spatial intervals were influenced by the 5 km radius of the point infection control depopulation process used in 1999 and the 10-15 km radial areas used in 2000. Within the practical limits of those radii, there was an intensive effort to eliminate raccoons. Our procedure is easy to implement and provides an estimate of the shape of the distribution of incubation periods for raccoon rabies. PMID:12419602
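
    A minimal sketch of assembling the time-space matrix of case pairs described above; the bin widths and ranges are illustrative.

    ```python
    import numpy as np

    def time_space_matrix(weeks, x_km, y_km, t_bin=1, d_bin=1, t_max=30, d_max=30):
        """Counts of case pairs by time lag (weeks) and distance (km).
        A modal ridge at ~5-week lags suggests a ~5-week incubation period."""
        t = np.abs(weeks[:, None] - weeks[None, :])
        d = np.hypot(x_km[:, None] - x_km[None, :], y_km[:, None] - y_km[None, :])
        iu = np.triu_indices(len(weeks), k=1)        # each unordered pair once
        H, _, _ = np.histogram2d(t[iu], d[iu],
                                 bins=[np.arange(0, t_max + t_bin, t_bin),
                                       np.arange(0, d_max + d_bin, d_bin)])
        return H  # rows: time-lag bins, columns: distance bins
    ```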

  7. Comparison of Three Plot Selection Methods for Estimating Change in Temporally Variable, Spatially Clustered Populations.

    SciTech Connect

    Thompson, William L.

    2001-07-01

    Monitoring population numbers is important for assessing trends and meeting various legislative mandates. However, sampling across time introduces a temporal aspect to survey design in addition to the spatial one. For instance, a sample that is initially representative may lose this attribute if there is a shift in numbers and/or spatial distribution in the underlying population that is not reflected in later sampled plots. Plot selection methods that account for this temporal variability will produce the best trend estimates. Consequently, I used simulation to compare bias and relative precision of estimates of population change among stratified and unstratified sampling designs based on permanent, temporary, and partial replacement plots under varying levels of spatial clustering, density, and temporal shifting of populations. Permanent plots produced more precise estimates of change than temporary plots across all factors. Further, permanent plots performed better than partial replacement plots except for high density (5 and 10 individuals per plot) and 25% - 50% shifts in the population. Stratified designs always produced less precise estimates of population change for all three plot selection methods, and often produced biased change estimates and greatly inflated variance estimates under sampling with partial replacement. Hence, stratification that remains fixed across time should be avoided when monitoring populations that are likely to exhibit large changes in numbers and/or spatial distribution during the study period. Key words: bias; change estimation; monitoring; permanent plots; relative precision; sampling with partial replacement; temporary plots.

  8. An improved approximate-Bayesian model-choice method for estimating shared evolutionary history

    PubMed Central

    2014-01-01

    Background To understand biological diversification, it is important to account for large-scale processes that affect the evolutionary history of groups of co-distributed populations of organisms. Such events predict temporally clustered divergences times, a pattern that can be estimated using genetic data from co-distributed species. I introduce a new approximate-Bayesian method for comparative phylogeographical model-choice that estimates the temporal distribution of divergences across taxa from multi-locus DNA sequence data. The model is an extension of that implemented in msBayes. Results By reparameterizing the model, introducing more flexible priors on demographic and divergence-time parameters, and implementing a non-parametric Dirichlet-process prior over divergence models, I improved the robustness, accuracy, and power of the method for estimating shared evolutionary history across taxa. Conclusions The results demonstrate the improved performance of the new method is due to (1) more appropriate priors on divergence-time and demographic parameters that avoid prohibitively small marginal likelihoods for models with more divergence events, and (2) the Dirichlet-process providing a flexible prior on divergence histories that does not strongly disfavor models with intermediate numbers of divergence events. The new method yields more robust estimates of posterior uncertainty, and thus greatly reduces the tendency to incorrectly estimate models of shared evolutionary history with strong support. PMID:24992937

  9. Motion estimation in the frequency domain using fuzzy c-planes clustering.

    PubMed

    Erdem, C E; Karabulut, G Z; Yanmaz, E; Anarim, E

    2001-01-01

    A recent work explicitly models the discontinuous motion estimation problem in the frequency domain, where the motion parameters are estimated using a harmonic retrieval approach. The vertical and horizontal components of the motion are independently estimated from the locations of the peaks of the respective periodogram analyses and are paired to obtain the motion vectors using a previously proposed procedure. In this paper, we present a more efficient method that replaces the motion component pairing task and hence eliminates the problems of that pairing method. The method described in this paper uses the fuzzy c-planes (FCP) clustering approach to fit planes to three-dimensional (3-D) frequency domain data obtained from the peaks of the periodograms. Experimental results are provided to demonstrate the effectiveness of the proposed method. PMID:18255527

  10. Improved estimation of reflectance spectra by utilizing prior knowledge.

    PubMed

    Dierl, Marcel; Eckhard, Timo; Frei, Bernhard; Klammer, Maximilian; Eichstädt, Sascha; Elster, Clemens

    2016-07-01

    Estimating spectral reflectance has attracted extensive research efforts in color science and machine learning, motivated through a wide range of applications. In many practical situations, prior knowledge is available that ought to be used. Here, we have developed a general Bayesian method that allows the incorporation of prior knowledge from previous monochromator and spectrophotometer measurements. The approach yields analytical expressions for fast and efficient estimation of spectral reflectance. In addition to point estimates, probability distributions are also obtained, which completely characterize the uncertainty associated with the reconstructed spectrum. We demonstrate that, through the incorporation of prior knowledge, our approach yields improved reconstruction results compared with methods that resort to training data only. Our method is particularly useful when the spectral reflectance to be recovered resides beyond the scope of the training data. PMID:27409695
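
    For a linear sensor model with Gaussian noise and a Gaussian prior, the kind of analytical posterior the abstract alludes to takes the standard conjugate form; the sketch below is that textbook result, not necessarily the authors' exact likelihood or priors.

    ```python
    import numpy as np

    def posterior_reflectance(A, y, prior_mean, prior_cov, noise_cov):
        """Gaussian-linear Bayesian reconstruction of a reflectance spectrum s
        from sensor responses y = A s + e. Returns the posterior mean (point
        estimate) and covariance (full uncertainty characterization)."""
        Sp_inv = np.linalg.inv(prior_cov)
        Sn_inv = np.linalg.inv(noise_cov)
        post_cov = np.linalg.inv(Sp_inv + A.T @ Sn_inv @ A)
        post_mean = post_cov @ (Sp_inv @ prior_mean + A.T @ Sn_inv @ y)
        return post_mean, post_cov
    ```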

  11. An Accurate Link Correlation Estimator for Improving Wireless Protocol Performance

    PubMed Central

    Zhao, Zhiwei; Xu, Xianghua; Dong, Wei; Bu, Jiajun

    2015-01-01

    Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation. PMID:25686314

  13. Application of the Direct Distance Estimation procedure to eclipsing binaries in star clusters

    NASA Astrophysics Data System (ADS)

    Milone, E. F.; Schiller, S. J.

    2013-02-01

    We alert the community to a paradigm method to calibrate a range of standard candles by means of well-calibrated photometry of eclipsing binaries in star clusters. In particular, we re-examine systems studied as part of our Binaries-in-Clusters program, and previously analyzed with earlier versions of the Wilson-Devinney light-curve modeling program. We make use of the 2010 version of this program, which incorporates a procedure to estimate the distance to an eclipsing system directly, as a system parameter, and is thus dependent on the data and analysis model alone. As such, the derived distance is accorded a standard error, independent of any additional assumptions or approximations that such analyses conventionally require.

  14. Age estimates of globular clusters in the Milky Way: constraints on cosmology.

    PubMed

    Krauss, Lawrence M; Chaboyer, Brian

    2003-01-01

    Recent observations of stellar globular clusters in the Milky Way Galaxy, combined with revised ranges of parameters in stellar evolution codes and new estimates of the earliest epoch of globular cluster formation, result in a 95% confidence level lower limit on the age of the Universe of 11.2 billion years. This age is inconsistent with the expansion age for a flat Universe for the currently allowed range of the Hubble constant, unless the cosmic equation of state is dominated by a component that violates the strong energy condition. This means that the three fundamental observables in cosmology-the age of the Universe, the distance-redshift relation, and the geometry of the Universe-now independently support the case for a dark energy-dominated Universe. PMID:12511641

  15. A Novel Tool Improves Existing Estimates of Recent Tuberculosis Transmission in Settings of Sparse Data Collection.

    PubMed

    Kasaie, Parastu; Mathema, Barun; Kelton, W David; Azman, Andrew S; Pennington, Jeff; Dowdy, David W

    2015-01-01

    In any setting, a proportion of incident active tuberculosis (TB) reflects recent transmission ("recent transmission proportion"), whereas the remainder represents reactivation. Appropriately estimating the recent transmission proportion has important implications for local TB control, but existing approaches have known biases, especially where data are incomplete. We constructed a stochastic individual-based model of a TB epidemic and designed a set of simulations (derivation set) to develop two regression-based tools for estimating the recent transmission proportion from five inputs: underlying TB incidence, sampling coverage, study duration, clustered proportion of observed cases, and proportion of observed clusters in the sample. We tested these tools on a set of unrelated simulations (validation set), and compared their performance against that of the traditional 'n-1' approach. In the validation set, the regression tools reduced the absolute estimation bias (difference between estimated and true recent transmission proportion) in the 'n-1' technique by a median [interquartile range] of 60% [9%, 82%] and 69% [30%, 87%]. The bias in the 'n-1' model was highly sensitive to underlying levels of study coverage and duration, and substantially underestimated the recent transmission proportion in settings of incomplete data coverage. By contrast, the regression models' performance was more consistent across different epidemiological settings and study characteristics. We provide one of these regression models as a user-friendly, web-based tool. Novel tools can improve our ability to estimate the recent TB transmission proportion from data that are observable (or estimable) by public health practitioners with limited available molecular data. PMID:26679499
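
    For reference, the traditional 'n-1' estimate that the regression tools are benchmarked against can be sketched in a few lines; the cluster sizes are invented for illustration.

    ```python
    def n_minus_1_proportion(cluster_sizes):
        """Traditional 'n-1' estimate of the recent-transmission proportion:
        within each genotype cluster of size n, n-1 cases are attributed to
        recent transmission (biased when sampling is incomplete)."""
        total = sum(cluster_sizes)
        recent = sum(n - 1 for n in cluster_sizes)
        return recent / total

    # e.g. clusters of sizes 4, 2, 2 plus 12 unclustered isolates
    print(n_minus_1_proportion([4, 2, 2] + [1] * 12))  # = 5/20 = 0.25
    ```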

  17. Applying clustering approach in predictive uncertainty estimation: a case study with the UNEEC method

    NASA Astrophysics Data System (ADS)

    Dogulu, Nilay; Solomatine, Dimitri; Lal Shrestha, Durga

    2014-05-01

    Within the context of flood forecasting, assessment of predictive uncertainty has become a necessity for most modelling studies in operational hydrology. There are several uncertainty analysis and/or prediction methods available in the literature; however, most of them rely on normality and homoscedasticity assumptions for the model residuals occurring in reproducing the observed data. This study focuses on a statistical method that analyzes model residuals without any such assumptions, based on a clustering approach: Uncertainty Estimation based on local Errors and Clustering (UNEEC). The aim of this work is to provide a comprehensive evaluation of the UNEEC method's performance in view of the clustering approach employed within its methodology. This is done by analyzing the normality of model residuals and comparing uncertainty analysis results (for 50% and 90% confidence levels) with those obtained from uniform interval and quantile regression methods. An important part of the basis by which the methods are compared is the analysis of data clusters representing different hydrometeorological conditions. The validation measures used are PICP, MPI, ARIL and NUE where necessary. A new validation measure linking the prediction interval to the (hydrological) model quality - the weighted mean prediction interval (WMPI) - is also proposed for comparing the methods more effectively. The case study is the Brue catchment, located in the South West of England. A different parametrization of the method than in its previous application in Shrestha and Solomatine (2008) is used, i.e. past error values are considered in addition to discharge and effective rainfall. The results show that UNEEC's notable methodological characteristic, i.e. applying clustering to predictor data in which catchment behaviour information is encapsulated, contributes to the increased accuracy of the method's results for varying flow conditions. Besides, classifying data so that extreme flow events are individually
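
    Two of the validation measures named above, PICP and MPI, have simple standard definitions; a minimal sketch:

    ```python
    import numpy as np

    def picp(obs, lower, upper):
        """Prediction Interval Coverage Probability: fraction of observations
        inside the interval (target ~0.90 for a 90% interval)."""
        return float(np.mean((obs >= lower) & (obs <= upper)))

    def mpi(lower, upper):
        """Mean Prediction Interval width; narrower is better at equal PICP."""
        return float(np.mean(upper - lower))
    ```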

  18. Can modeling improve estimation of desert tortoise population densities?

    USGS Publications Warehouse

    Nussear, K.E.; Tracy, C.R.

    2007-01-01

    The federally listed desert tortoise (Gopherus agassizii) is currently monitored using distance sampling to estimate population densities. Distance sampling, as with many other techniques for estimating population density, assumes that it is possible to quantify the proportion of animals available to be counted in any census. Because desert tortoises spend much of their life in burrows, and the proportion of tortoises in burrows at any time can be extremely variable, this assumption is difficult to meet. This proportion of animals available to be counted is used as a correction factor (g0) in distance sampling and has been estimated from daily censuses of small populations of tortoises (6-12 individuals). These censuses are costly and produce imprecise estimates of g0 due to small sample sizes. We used data on tortoise activity from a large (N = 150) experimental population to model activity as a function of the biophysical attributes of the environment, but these models did not improve the precision of estimates from the focal populations. Thus, to evaluate how much of the variance in tortoise activity is apparently not predictable, we assessed whether activity on any particular day can predict activity on subsequent days with essentially identical environmental conditions. Tortoise activity was only weakly correlated on consecutive days, indicating that behavior was not repeatable or consistent among days with similar physical environments. ?? 2007 by the Ecological Society of America.
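
    For context on why g0 matters so much, a line-transect density estimate with an availability correction has the standard form D = n / (2 L · ESW · g0), with ESW the effective strip half-width; the sketch below uses invented numbers to show how strongly the estimate depends on g0.

    ```python
    def tortoise_density(n_detected, line_length_km, esw_km, g0):
        """Line-transect density estimate with availability correction g0,
        the proportion of animals above ground and available to be counted."""
        return n_detected / (2.0 * line_length_km * esw_km * g0)

    # Halving g0 doubles the estimated density, hence the interest in
    # modeling g0 well (all numbers illustrative).
    print(tortoise_density(25, 100.0, 0.02, g0=0.6))
    print(tortoise_density(25, 100.0, 0.02, g0=0.3))
    ```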

  19. Improving Estimated Optical Constants With MSTM and DDSCAT Modeling

    NASA Astrophysics Data System (ADS)

    Pitman, K. M.; Wolff, M. J.

    2015-12-01

    We present numerical experiments to determine quantitatively the effects of mineral particle clustering on Mars spacecraft spectral signatures and to improve upon the values of refractive indices (optical constants n, k) derived from Mars dust laboratory analog spectra such as those from RELAB and MRO CRISM libraries. Whereas spectral properties for Mars analog minerals and actual Mars soil are dominated by aggregates of particles smaller than the size of martian atmospheric dust, the analytic radiative transfer (RT) solutions used to interpret planetary surfaces assume that individual, well-separated particles dominate the spectral signature. Both in RT models and in the refractive index derivation methods that include analytic RT approximations, spheres are also over-used to represent nonspherical particles. Part of the motivation is that the integrated effect over randomly oriented particles on quantities such as single scattering albedo and phase function are relatively less than for single particles. However, we have seen in previous numerical experiments that when varying the shape and size of individual grains within a cluster, the phase function changes in both magnitude and slope, thus the "relatively less" effect is more significant than one might think. Here we examine the wavelength dependence of the forward scattering parameter with multisphere T-matrix (MSTM) and discrete dipole approximation (DDSCAT) codes that compute light scattering by layers of particles on planetary surfaces to see how albedo is affected and integrate our model results into refractive index calculations to remove uncertainties in approximations and parameters that can lower the accuracy of optical constants. By correcting the single scattering albedo and phase function terms in the refractive index determinations, our data will help to improve the understanding of Mars in identifying, mapping the distributions, and quantifying abundances for these minerals and will address long

  20. IPEG- IMPROVED PRICE ESTIMATION GUIDELINES (IBM PC VERSION)

    NASA Technical Reports Server (NTRS)

    Aster, R. W.

    1994-01-01

    The Improved Price Estimation Guidelines, IPEG, program provides a simple yet accurate estimate of the price of a manufactured product. IPEG facilitates sensitivity studies of price estimates at considerably less expense than would be incurred by using the Standard Assembly-line Manufacturing Industry Simulation, SAMIS, program (COSMIC program NPO-16032). A difference of less than one percent between the IPEG and SAMIS price estimates has been observed with realistic test cases. However, the IPEG simplification of SAMIS allows the analyst with limited time and computing resources to perform a greater number of sensitivity studies than with SAMIS. Although IPEG was developed for the photovoltaics industry, it is readily adaptable to any standard assembly line type of manufacturing industry. IPEG estimates the annual production price per unit. The input data includes cost of equipment, space, labor, materials, supplies, and utilities. Production on an industry wide basis or a process wide basis can be simulated. Once the IPEG input file is prepared, the original price is estimated and sensitivity studies may be performed. The IPEG user selects a sensitivity variable and a set of values. IPEG will compute a price estimate and a variety of other cost parameters for every specified value of the sensitivity variable. IPEG is designed as an interactive system and prompts the user for all required information and offers a variety of output options. The IPEG/PC program is written in TURBO PASCAL for interactive execution on an IBM PC computer under DOS 2.0 or above with at least 64K of memory. The IBM PC color display and color graphics adapter are needed to use the plotting capabilities in IPEG/PC. IPEG/PC was developed in 1984. The original IPEG program is written in SIMSCRIPT II.5 for interactive execution and has been implemented on an IBM 370 series computer with a central memory requirement of approximately 300K of 8 bit bytes. The original IPEG was developed in 1980.

  1. IPEG- IMPROVED PRICE ESTIMATION GUIDELINES (IBM 370 VERSION)

    NASA Technical Reports Server (NTRS)

    Chamberlain, R. G.

    1994-01-01

    The Improved Price Estimation Guidelines, IPEG, program provides a simple yet accurate estimate of the price of a manufactured product. IPEG facilitates sensitivity studies of price estimates at considerably less expense than would be incurred by using the Standard Assembly-line Manufacturing Industry Simulation, SAMIS, program (COSMIC program NPO-16032). A difference of less than one percent between the IPEG and SAMIS price estimates has been observed with realistic test cases. However, the IPEG simplification of SAMIS allows the analyst with limited time and computing resources to perform a greater number of sensitivity studies than with SAMIS. Although IPEG was developed for the photovoltaics industry, it is readily adaptable to any standard assembly line type of manufacturing industry. IPEG estimates the annual production price per unit. The input data includes cost of equipment, space, labor, materials, supplies, and utilities. Production on an industry wide basis or a process wide basis can be simulated. Once the IPEG input file is prepared, the original price is estimated and sensitivity studies may be performed. The IPEG user selects a sensitivity variable and a set of values. IPEG will compute a price estimate and a variety of other cost parameters for every specified value of the sensitivity variable. IPEG is designed as an interactive system and prompts the user for all required information and offers a variety of output options. The IPEG/PC program is written in TURBO PASCAL for interactive execution on an IBM PC computer under DOS 2.0 or above with at least 64K of memory. The IBM PC color display and color graphics adapter are needed to use the plotting capabilities in IPEG/PC. IPEG/PC was developed in 1984. The original IPEG program is written in SIMSCRIPT II.5 for interactive execution and has been implemented on an IBM 370 series computer with a central memory requirement of approximately 300K of 8 bit bytes. The original IPEG was developed in 1980.

  2. A clustering approach for estimating parameters of a profile hidden Markov model.

    PubMed

    Aghdam, Rosa; Pezeshk, Hamid; Malekpour, Seyed Amir; Shemehsavar, Soudabeh; Eslahchi, Changiz

    2013-01-01

    A Profile Hidden Markov Model (PHMM) is a standard form of Hidden Markov Model used for modeling protein and DNA sequence families based on a multiple alignment. In this paper, we implement the Baum-Welch algorithm and the Bayesian Markov Chain Monte Carlo (BMCMC) method for estimating the parameters of a small artificial PHMM. To improve the accuracy of the parameter estimates, we classify the training data using the weighted values of sequences in the PHMM and then apply an algorithm for estimating the parameters of the PHMM. The results show that the BMCMC method performs better than Maximum Likelihood estimation. PMID:23865165

  3. Can streaming potential data improve permeability estimates in EGS reservoirs?

    NASA Astrophysics Data System (ADS)

    Vogt, Christian; Klitzsch, Norbert

    2013-04-01

    We study the capability of streaming potential data to improve the estimation of permeability in fractured geothermal systems. To this end, we numerically simulate a tracer experiment carried out at the Enhanced Geothermal System (EGS) at Soultz-sous-Forêts, France, in 2005. The EGS is located in the Lower Rhine Graben. Here, at approximately 5000 m depth, an engineered reservoir was established. The tracer circulation test provides information on hydraulic connectivity between the injection borehole GPK3 and the two production boreholes GPK2 and GPK4. Vogt et al. (2011) performed stochastic inversion approaches to estimate heterogeneous permeability at Soultz in an equivalent porous medium approach and studied the non-uniqueness of the possible pathways in the reservoir. They identified three different possible groups of pathway configurations between GPK2 and GPK3 and the corresponding hydraulic properties. Using the Ensemble Kalman Filter, Vogt et al. (2012) estimated permeability by sequentially updating an ensemble of heterogeneous Monte Carlo reservoir models. Additionally, this approach quantifies the heterogeneously distributed uncertainty. Here, we study whether considering hypothetical streaming potential (SP) data during the stochastic inversion can improve the determination of the hydraulic reservoir properties. In particular, we study whether the three groups are characterized uniquely by their corresponding SP signals along the boreholes and whether the Ensemble Kalman Filter fit could be improved by joint inversion of SP and tracer data. During the actual tracer test, no SP data were recorded; therefore, this study is based on synthetic data. We find that SP data predominantly yield information on the near field of permeability around the wells. Therefore, SP observations along wells will not help to characterize large-scale reservoir flow paths. However, we investigate whether additional passive SP monitoring from deviated wells around the injection

  4. A new estimate of the Hubble constant using the Virgo cluster distance

    NASA Astrophysics Data System (ADS)

    Visvanathan, N.

    The Hubble constant, which defines the size and age of the universe, remains substantially uncertain. Attention is presently given to an improved distance to the Virgo Cluster obtained by means of the 1.05-micron luminosity-H I width relation of spirals. In order to improve the absolute calibration of the relation, accurate distances to the nearby SMC, LMC, N6822, SEX A and N300 galaxies have also been obtained, on the basis of the near-IR P-L relation of the Cepheids. A value for the global Hubble constant of 67 ± 4 km/sec per Mpc is obtained.

  5. A New X-ray/Infrared Age Estimator For Young Stellar Clusters

    NASA Astrophysics Data System (ADS)

    Getman, Konstantin; Feigelson, Eric; Kuhn, Michael; Broos, Patrick; Townsley, Leisa; Naylor, Tim; Povich, Matthew; Luhman, Kevin; Garmire, Gordon

    2013-07-01

    The MYStIX (Massive Young Star-Forming Complex Study in Infrared and X-ray; Feigelson et al. 2013) project seeks to characterize 20 OB-dominated young star forming regions (SFRs) at distances <4 kpc using photometric catalogs from the Chandra X-ray Observatory, Spitzer Space Telescope, and UKIRT and 2MASS NIR telescopes. A major impediment to understanding star formation in massive SFRs is the absence of a reliable stellar chronometer to unravel their complex star formation histories. We present estimates of stellar ages using a new method that employs NIR and X-ray photometry, t(JX). Stellar masses are directly derived from absorption-corrected X-ray luminosities using the Lx-Mass relation from the Taurus cloud. J-band magnitudes corrected for absorption and distance are compared to the mass-dependent pre-main-sequence evolutionary models of Siess et al. (2000) to estimate ages. Unlike some other age estimators, t(JX) is sensitive to all stages of evolution, from deeply embedded disky objects to widely dispersed older pre-main-sequence stars. The method has been applied to >5500 out of >30000 MYStIX stars in 20 SFRs. As individual t(JX) values can be highly uncertain, we report median ages of samples within (sub)clusters defined by the companion study of Kuhn et al. (2013). Here a maximum likelihood model of the spatial distribution produces an objective assignment of each star to an isothermal ellipsoid or a distributed population. The MYStIX (sub)clusters show 0.5 < t(JX) < 5 Myr. The important science result of our study is the discovery of previously unknown age gradients across many different MYStIX regions and clusters. The t(JX) ages are often correlated with (sub)cluster extinction and location with respect to molecular cores and ionized pillars on the peripheries of HII regions. The NIR color J-H, a surrogate measure of extinction, can serve as an approximate age predictor for young embedded clusters.

  6. Improving Evapotranspiration Estimates Using Multi-Platform Remote Sensing

    NASA Astrophysics Data System (ADS)

    Knipper, Kyle; Hogue, Terri; Franz, Kristie; Scott, Russell

    2016-04-01

    Understanding the linkages between energy and water cycles through evapotranspiration (ET) is uniquely challenging given its dependence on a range of climatological parameters and surface/atmospheric heterogeneity. A number of methods have been developed to estimate ET from primarily remote-sensing observations, in-situ measurements, or a combination of the two. However, the scale of many of these methods may be too large to provide needed information about the spatial and temporal variability of ET that can occur over regions with acute or chronic land cover change and precipitation-driven fluxes. The current study aims to better resolve the spatial and temporal variability of ET utilizing only satellite-based observations, by combining a potential evapotranspiration (PET) methodology with satellite-based downscaled soil moisture estimates in southern Arizona, USA. Initially, soil moisture estimates from AMSR2 and SMOS are downscaled to 1 km through a triangular relationship between MODIS land surface temperature (MYD11A1), vegetation indices (MOD13Q1/MYD13Q1), and brightness temperature. Downscaled soil moisture values are then used to scale PET to actual ET (AET) at a daily, 1 km resolution. Derived AET estimates are compared to observed flux tower estimates, the North American Land Data Assimilation System (NLDAS) model output (i.e. Variable Infiltration Capacity (VIC) Macroscale Hydrologic Model, Mosaic Model, and Noah Model simulations), the Operational Simplified Surface Energy Balance Model (SSEBop), and a calibrated empirical ET model created specifically for the region. Preliminary results indicate a strong increase in correlation when incorporating the downscaling technique to the original AMSR2 and SMOS soil moisture values, with the added benefit of being able to decipher small-scale heterogeneity in soil moisture (riparian versus desert grassland). AET results show strong correlations with relatively low error and bias when compared to flux tower

  7. Estimation of Missing Precipitation Records using Classifier, Cluster and Proximity Metric-Based Interpolation Schemes

    NASA Astrophysics Data System (ADS)

    Teegavarapu, R. S.

    2012-12-01

    New optimal proximity-based imputation, k-nn (k-nearest neighbor) classification, and k-means clustering methods are proposed and developed for estimating missing precipitation records in this study. Variants of these methods are embedded in optimization formulations to optimize the weighting schemes involving proximity measures. Ten different binary and real-valued distance metrics are used as proximity measures. Two climatic regions in the United States with different gauge densities and gauge network structures, Kentucky (temperate) and Florida (tropical), are used as case studies to evaluate the efficacy of these methods for estimating missing precipitation data. A comprehensive exercise is undertaken in this study to compare the performance of the newly developed methods and their variants with that of methods already available in the literature. Several deterministic and stochastic spatial interpolation methods and their improvised variants using optimization formulations are used for comparison. Results from these comparisons indicate that the optimal proximity-based imputation, k-means cluster-based, and k-nn classification methods are competitive when combined with mathematical programming formulations and provide better estimates of missing precipitation data than the available deterministic and stochastic interpolation methods.
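
    A minimal sketch of the distance-weighted k-nn idea underlying these imputation schemes, using fixed inverse-squared-distance weights; the study instead optimizes the weighting schemes within mathematical programming formulations and also considers binary distance metrics.

      # k-nn, distance-weighted imputation of a missing gauge value.
      import numpy as np

      def impute_knn(target_xy, gauges_xy, gauges_rain, k=3):
          """Estimate missing rainfall at target_xy from the k nearest
          gauges, weighted by inverse squared distance."""
          d = np.linalg.norm(gauges_xy - target_xy, axis=1)
          nearest = np.argsort(d)[:k]
          w = 1.0 / (d[nearest] ** 2 + 1e-12)
          return np.sum(w * gauges_rain[nearest]) / np.sum(w)

      gauges_xy = np.array([[0.0, 1.0], [2.0, 0.5], [1.0, 3.0], [4.0, 4.0]])
      gauges_rain = np.array([12.0, 8.0, 15.0, 3.0])        # mm, same hour
      print(impute_knn(np.array([1.0, 1.0]), gauges_xy, gauges_rain))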

  8. Clustering and training set selection methods for improving the accuracy of quantitative laser induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Anderson, Ryan B.; Bell, James F., III; Wiens, Roger C.; Morris, Richard V.; Clegg, Samuel M.

    2012-04-01

    We investigated five clustering and training set selection methods to improve the accuracy of quantitative chemical analysis of geologic samples by laser induced breakdown spectroscopy (LIBS) using partial least squares (PLS) regression. The LIBS spectra were previously acquired for 195 rock slabs and 31 pressed powder geostandards under 7 Torr CO2 at a stand-off distance of 7 m at 17 mJ per pulse to simulate the operational conditions of the ChemCam LIBS instrument on the Mars Science Laboratory Curiosity rover. The clustering and training set selection methods, which do not require prior knowledge of the chemical composition of the test-set samples, are based on grouping similar spectra and selecting appropriate training spectra for the partial least squares (PLS2) model. These methods were: (1) hierarchical clustering of the full set of training spectra and selection of a subset for use in training; (2) k-means clustering of all spectra and generation of PLS2 models based on the training samples within each cluster; (3) iterative use of PLS2 to predict sample composition and k-means clustering of the predicted compositions to subdivide the groups of spectra; (4) soft independent modeling of class analogy (SIMCA) classification of spectra, and generation of PLS2 models based on the training samples within each class; (5) use of Bayesian information criteria (BIC) to determine an optimal number of clusters and generation of PLS2 models based on the training samples within each cluster. The iterative method and the k-means method using 5 clusters showed the best performance, improving the absolute quadrature root mean squared error (RMSE) by ~ 3 wt.%. The statistical significance of these improvements was ~ 85%. Our results show that although clustering methods can modestly improve results, a large and diverse training set is the most reliable way to improve the accuracy of quantitative LIBS. In particular, additional sulfate standards and specifically fabricated
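
    A minimal sketch of method (2), clustering all spectra and training one regression model per cluster; scikit-learn's PLSRegression stands in for the PLS2 models, and random arrays stand in for spectra and compositions.

      # Cluster spectra with k-means, then fit one PLS model per cluster.
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(0)
      X_train = rng.normal(size=(120, 50))    # training spectra
      Y_train = rng.normal(size=(120, 3))     # known compositions
      X_test = rng.normal(size=(10, 50))      # unknown spectra

      km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X_train)
      models = {c: PLSRegression(n_components=5)
                   .fit(X_train[km.labels_ == c], Y_train[km.labels_ == c])
                for c in range(5)}

      # Each test spectrum is predicted by the model of its nearest cluster.
      preds = [models[c].predict(x.reshape(1, -1))[0]
               for x, c in zip(X_test, km.predict(X_test))]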

  9. Snowpack Estimates Improve Water Resources Climate-Change Adaptation Strategies

    NASA Astrophysics Data System (ADS)

    Lestak, L.; Molotch, N. P.; Guan, B.; Granger, S. L.; Nemeth, S.; Rizzardo, D.; Gehrke, F.; Franz, K. J.; Karsten, L. R.; Margulis, S. A.; Case, K.; Anderson, M.; Painter, T. H.; Dozier, J.

    2010-12-01

    Observed climate trends over the past 50 years indicate a reduction in snowpack water storage across the Western U.S. As the primary water source for the region, the loss in snowpack water storage presents significant challenges for managing water deliveries to meet agricultural, municipal, and hydropower demands. Improved snowpack information via remote sensing shows promise for improving seasonal water supply forecasts and for informing decadal scale infrastructure planning. An ongoing project in the California Sierra Nevada and examples from the Rocky Mountains indicate the tractability of estimating snowpack water storage on daily time steps using a distributed snowpack reconstruction model. Fractional snow covered area (FSCA) derived from Moderate Resolution Imaging Spectroradiometer (MODIS) satellite data were used with modeled snowmelt from the snowpack model to estimate snow water equivalent (SWE) in the Sierra Nevada (64,515 km2). Spatially distributed daily SWE estimates were calculated for 10 years, 2000-2009, with detailed analysis for two anomalous years: 2006, a wet year, and 2009, an over-forecasted year. Sierra-wide mean SWE was 0.8 cm for 01 April 2006 versus 0.4 cm for 01 April 2009, comparing favorably with known outflow. Modeled SWE was compared to in-situ (observed) SWE for 01 April 2006 for the Feather (northern Sierra, lower-elevation) and Merced (central Sierra, higher-elevation) basins, with mean modeled SWE equal to 80% of observed SWE. Integration of spatial SWE estimates into forecasting operations will allow for better visualization and analysis of high-altitude late-season snow missed by in-situ snow sensors, and of inter-annual anomalies associated with extreme precipitation events/atmospheric rivers. Collaborations with state and local entities establish protocols on how to meet current and future information needs and improve climate-change adaptation strategies.

  10. Improving the quality of parameter estimates obtained from slug tests

    USGS Publications Warehouse

    Butler, J.J., Jr.; McElwee, C.D.; Liu, W.

    1996-01-01

    The slug test is one of the most commonly used field methods for obtaining in situ estimates of hydraulic conductivity. Despite its prevalence, this method has received criticism from many quarters in the ground-water community. This criticism emphasizes the poor quality of the estimated parameters, a condition that is primarily a product of the somewhat casual approach that is often employed in slug tests. Recently, the Kansas Geological Survey (KGS) has pursued research directed at improving methods for the performance and analysis of slug tests. Based on extensive theoretical and field research, a series of guidelines have been proposed that should enable the quality of parameter estimates to be improved. The most significant of these guidelines are: (1) three or more slug tests should be performed at each well during a given test period; (2) two or more different initial displacements (Ho) should be used at each well during a test period; (3) the method used to initiate a test should enable the slug to be introduced in a near-instantaneous manner and should allow a good estimate of Ho to be obtained; (4) data-acquisition equipment that enables a large quantity of high-quality data to be collected should be employed; (5) if an estimate of the storage parameter is needed, an observation well other than the test well should be employed; (6) the method chosen for analysis of the slug-test data should be appropriate for site conditions; (7) use of pre- and post-analysis plots should be an integral component of the analysis procedure; and (8) appropriate well construction parameters should be employed. Data from slug tests performed at a number of KGS field sites demonstrate the importance of these guidelines.

  11. Improving stochastic estimates with inference methods: calculating matrix diagonals.

    PubMed

    Selig, Marco; Oppermann, Niels; Ensslin, Torsten A

    2012-02-01

    Estimating the diagonal entries of a matrix that is not directly accessible but only available as a linear operator in the form of a computer routine is a common necessity in many computational applications, especially in image reconstruction and statistical inference. Here, methods of statistical inference are used to improve the accuracy or reduce the computational cost of matrix probing methods for estimating matrix diagonals. In particular, the generalized Wiener filter methodology, as developed within information field theory, is shown to significantly improve estimates based on only a few sampling probes, in cases in which some form of continuity of the solution can be assumed. The strength, length scale, and precise functional form of the exploited autocorrelation function of the matrix diagonal are determined from the probes themselves. The developed algorithm is successfully applied to mock and real-world problems. These performance tests show that, in situations where a matrix diagonal has to be calculated from only a small number of computationally expensive probes, a speedup by a factor of 2 to 10 is possible with the proposed method. PMID:22463179
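
    The baseline that these inference methods improve upon is plain stochastic probing; a minimal sketch with Rademacher probes, for which the expectation of z * (A z) is exactly diag(A), shown with an explicit matrix standing in for the operator routine.

      # Stochastic diagonal probing with Rademacher vectors.
      import numpy as np

      def probe_diagonal(apply_A, n, n_probes=30, seed=0):
          """Estimate diag(A) from matrix-vector products only:
          diag(A) ~ mean_k z_k * (A z_k) for random +/-1 probes z_k."""
          rng = np.random.default_rng(seed)
          acc = np.zeros(n)
          for _ in range(n_probes):
              z = rng.choice([-1.0, 1.0], size=n)
              acc += z * apply_A(z)
          return acc / n_probes

      A = np.diag(np.linspace(1.0, 2.0, 100))
      A += 0.01 * np.random.default_rng(1).normal(size=(100, 100))
      est = probe_diagonal(lambda v: A @ v, 100)
      print(np.max(np.abs(est - np.diag(A))))  # raw probing error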

  12. Tuning target selection algorithms to improve galaxy redshift estimates

    NASA Astrophysics Data System (ADS)

    Hoyle, Ben; Paech, Kerstin; Rau, Markus Michael; Seitz, Stella; Weller, Jochen

    2016-06-01

    We showcase machine learning (ML) inspired target selection algorithms to determine which of all potential targets should be selected first for spectroscopic follow-up. Efficient target selection can improve the ML redshift uncertainties as calculated on an independent sample, while requiring fewer targets to be observed. We compare seven different ML targeting algorithms with the Sloan Digital Sky Survey (SDSS) target order, and with a random targeting algorithm. The ML inspired algorithms are constructed iteratively by estimating which of the remaining target galaxies will be most difficult for the ML methods to accurately estimate redshifts for, using the previously observed data. This is performed by predicting the expected redshift error and redshift offset (or bias) of all of the remaining target galaxies. We find that the predicted values of bias and error are accurate to better than 10-30 per cent of the true values, even with only limited training sample sizes. We construct a hypothetical follow-up survey and find that some of the ML targeting algorithms are able to obtain the same redshift predictive power with 2-3 times less observing time, as compared to that of the SDSS, or random, target selection algorithms. The reduction in the required follow-up resources could allow for a change to the follow-up strategy, for example by obtaining deeper spectroscopy, which could improve ML redshift estimates for deeper test data.
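
    The iterative idea can be sketched as follows, with a random forest standing in for the paper's ML methods and random arrays standing in for photometric features; illustrative only.

      # Rank remaining targets by predicted redshift error; observe the
      # hardest ones first, then retrain as new spectra arrive.
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(0)
      X_obs = rng.normal(size=(200, 5))      # already-observed galaxies
      err_obs = rng.uniform(size=200)        # their known redshift errors
      X_pool = rng.normal(size=(1000, 5))    # not-yet-observed targets

      model = RandomForestRegressor(n_estimators=100, random_state=0)
      model.fit(X_obs, err_obs)
      priority = np.argsort(-model.predict(X_pool))  # largest error first
      next_targets = priority[:50]                   # next follow-up block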

  13. A PARAMETERIZED GALAXY CATALOG SIMULATOR FOR TESTING CLUSTER FINDING, MASS ESTIMATION, AND PHOTOMETRIC REDSHIFT ESTIMATION IN OPTICAL AND NEAR-INFRARED SURVEYS

    SciTech Connect

    Song, Jeeseon; Mohr, Joseph J.; Barkhouse, Wayne A.; Rude, Cody; Warren, Michael S.; Dolag, Klaus

    2012-03-01

    We present a galaxy catalog simulator that converts N-body simulations with halo and subhalo catalogs into mock, multiband photometric catalogs. The simulator assigns galaxy properties to each subhalo in a way that reproduces the observed cluster galaxy halo occupation distribution, the radial and mass-dependent variation in fractions of blue galaxies, the luminosity functions in the cluster and the field, and the color-magnitude relation in clusters. Moreover, the evolution of these parameters is tuned to match existing observational constraints. Parameterizing an ensemble of cluster galaxy properties enables us to create mock catalogs with variations in those properties, which in turn allows us to quantify the sensitivity of cluster finding to current observational uncertainties in these properties. Field galaxies are sampled from existing multiband photometric surveys of similar depth. We present an application of the catalog simulator to characterize the selection function and contamination of a galaxy cluster finder that utilizes the cluster red sequence together with galaxy clustering on the sky. We estimate systematic uncertainties in the selection to be at the ≤15% level with current observational constraints on cluster galaxy populations and their evolution. We find the contamination in this cluster finder to be ~35% to redshift z ~ 0.6. In addition, we use the mock galaxy catalogs to test the optical mass indicator B_gc and a red-sequence redshift estimator. We measure the intrinsic scatter of the B_gc-mass relation to be approximately log normal with σ_log10M ~ 0.25 and we demonstrate photometric redshift accuracies for massive clusters at the ~3% level out to z ~ 0.7.

  14. Estimating accuracy of land-cover composition from two-stage cluster sampling

    USGS Publications Warehouse

    Stehman, S.V.; Wickham, J.D.; Fattorini, L.; Wade, T.D.; Baffetta, F.; Smith, J.H.

    2009-01-01

    Land-cover maps are often used to compute land-cover composition (i.e., the proportion or percent of area covered by each class), for each unit in a spatial partition of the region mapped. We derive design-based estimators of mean deviation (MD), mean absolute deviation (MAD), root mean square error (RMSE), and correlation (CORR) to quantify accuracy of land-cover composition for a general two-stage cluster sampling design, and for the special case of simple random sampling without replacement (SRSWOR) at each stage. The bias of the estimators for the two-stage SRSWOR design is evaluated via a simulation study. The estimators of RMSE and CORR have small bias except when sample size is small and the land-cover class is rare. The estimator of MAD is biased for both rare and common land-cover classes except when sample size is large. A general recommendation is that rare land-cover classes require large sample sizes to ensure that the accuracy estimators have small bias. © 2009 Elsevier Inc.
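
    The four accuracy measures named above are straightforward to compute once mapped and reference compositions are in hand; the sketch below uses plain unweighted forms for a single land-cover class, ignoring the design-based survey weights that the paper's estimators incorporate.

      # MD, MAD, RMSE and CORR for mapped vs. reference proportions.
      import numpy as np

      map_p = np.array([0.10, 0.35, 0.22, 0.05, 0.48])  # mapped proportion
      ref_p = np.array([0.12, 0.30, 0.25, 0.02, 0.50])  # reference proportion
      d = map_p - ref_p

      md = d.mean()                     # mean deviation (bias)
      mad = np.abs(d).mean()            # mean absolute deviation
      rmse = np.sqrt((d ** 2).mean())   # root mean square error
      corr = np.corrcoef(map_p, ref_p)[0, 1]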

  15. Cosmological parameter estimation from CMB and X-ray cluster after Planck

    NASA Astrophysics Data System (ADS)

    Hu, Jian-Wei; Cai, Rong-Gen; Guo, Zong-Kuan; Hu, Bin

    2014-05-01

    We investigate constraints on cosmological parameters in three 8-parameter models with the summed neutrino mass as a free parameter, by a joint analysis of CCCP X-ray cluster data, the newly released Planck CMB data as well as some external data sets including baryon acoustic oscillation measurements from the 6dFGS, SDSS DR7 and BOSS DR9 surveys, and the Hubble Space Telescope H0 measurement. We find that the combined data strongly favor non-zero neutrino masses at more than 3σ confidence level in these non-vanilla models. Allowing the CMB lensing amplitude AL to vary, we find AL > 1 at 3σ confidence level. For dark energy with a constant equation of state w, we obtain w < -1 at 3σ confidence level. The estimate of the matter power spectrum amplitude σ8 is discrepant with the Planck value at 2σ confidence level, which reflects some tension between X-ray cluster data and Planck data in these non-vanilla models. The tension can be alleviated by adding a 9% systematic shift in the cluster mass function.

  16. Cosmological parameter estimation from CMB and X-ray cluster after Planck

    SciTech Connect

    Hu, Jian-Wei; Cai, Rong-Gen; Guo, Zong-Kuan; Hu, Bin

    2014-05-01

    We investigate constraints on cosmological parameters in three 8-parameter models with the summed neutrino mass as a free parameter, by a joint analysis of CCCP X-ray cluster data, the newly released Planck CMB data as well as some external data sets including baryon acoustic oscillation measurements from the 6dFGS, SDSS DR7 and BOSS DR9 surveys, and the Hubble Space Telescope H0 measurement. We find that the combined data strongly favor non-zero neutrino masses at more than 3σ confidence level in these non-vanilla models. Allowing the CMB lensing amplitude AL to vary, we find AL > 1 at 3σ confidence level. For dark energy with a constant equation of state w, we obtain w < −1 at 3σ confidence level. The estimate of the matter power spectrum amplitude σ8 is discrepant with the Planck value at 2σ confidence level, which reflects some tension between X-ray cluster data and Planck data in these non-vanilla models. The tension can be alleviated by adding a 9% systematic shift in the cluster mass function.

  17. Speed Profiles for Improvement of Maritime Emission Estimation

    PubMed Central

    Yau, Pui Shan; Lee, Shun-Cheng; Ho, Kin Fai

    2012-01-01

    Maritime emissions play an important role in anthropogenic emissions, particularly for cities with busy ports such as Hong Kong. Ship emissions are strongly dependent on vessel speed, and thus accurate vessel speed is essential for maritime emission studies. In this study, we determined minute-by-minute high-resolution speed profiles of container ships on four major routes in Hong Kong waters using the Automatic Identification System (AIS). The activity-based ship emissions of NOx, CO, HC, CO2, SO2, and PM10 were estimated using the derived vessel speed profiles, and the results were compared with those obtained using the speed limits of control zones. Estimation using speed limits resulted in up to twofold overestimation of ship emissions; compared with emissions estimated using the speed limits of control zones, emissions estimated using vessel speed profiles could provide results with up to 88% higher accuracy. Uncertainty analysis and sensitivity analysis of the model demonstrated the significance of improving vessel speed resolution. Spatial analysis revealed that SO2 and PM10 emissions during maneuvering within 1 nautical mile of port were the highest, contributing 7%-22% of SO2 emissions and 8%-17% of PM10 emissions of the entire voyage in Hong Kong. PMID:23236250
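
    A minimal sketch of the activity-based calculation, assuming a propeller-law load approximation in which engine load scales with the cube of relative speed; the installed power and emission factor are illustrative values, not the study's inventory data.

      # Emissions accumulated over a minute-by-minute AIS speed profile.
      import numpy as np

      speeds_kn = np.array([18.0, 15.2, 12.1, 8.4, 5.0, 2.1])  # 1-min steps
      v_max_kn, mcr_kw = 22.0, 30000.0   # service speed, installed power
      ef_nox_g_per_kwh = 14.0            # assumed NOx emission factor

      load = (speeds_kn / v_max_kn) ** 3          # propeller-law load
      energy_kwh = load * mcr_kw * (1.0 / 60.0)   # kWh per 1-min step
      nox_kg = (energy_kwh * ef_nox_g_per_kwh).sum() / 1000.0
      print(round(nox_kg, 2), "kg NOx over the profile")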

  18. Adaptive noise estimation and suppression for improving microseismic event detection

    NASA Astrophysics Data System (ADS)

    Mousavi, S. Mostafa; Langston, Charles A.

    2016-09-01

    Microseismic data recorded by surface arrays are often strongly contaminated by unwanted noise. This background noise makes the detection of small magnitude events difficult. A noise level estimation and noise reduction algorithm is presented for microseismic data analysis, based upon minimally controlled recursive averaging and neighborhood shrinkage estimators. The method may not match more sophisticated and computationally expensive denoising algorithms in terms of preserving detailed features of the seismic signal. However, it is fast and data-driven and can be applied in real-time processing of continuous data for event detection purposes. Results from applying this algorithm to synthetic and real seismic data show that it holds great promise for improving microseismic event detection.
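
    A generic sketch of the two ingredients named above, a recursive-averaging noise tracker and a shrinkage gain applied to spectral power, is given below; this is the common shape of such algorithms, not the paper's exact estimators.

      # Track the noise floor recursively; shrink bins toward zero where
      # the signal does not rise above the estimated noise.
      import numpy as np

      def denoise_gains(power_frames, alpha=0.95, update_factor=2.0):
          """power_frames: (n_frames, n_freqs) spectrogram power.
          Returns a gain in [0, 1] for every time-frequency bin."""
          noise = power_frames[0].copy()
          gains = np.zeros_like(power_frames)
          for t, p in enumerate(power_frames):
              quiet = p < update_factor * noise     # noise-like bins only
              noise[quiet] = alpha * noise[quiet] + (1 - alpha) * p[quiet]
              gains[t] = np.clip(1.0 - noise / np.maximum(p, 1e-12), 0.0, 1.0)
          return gains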

  19. An improved sparse LS-SVR for estimating illumination

    NASA Astrophysics Data System (ADS)

    Zhu, Zhenmin; Lv, Zhaokang; Liu, Baifen

    2015-07-01

    Support Vector Regression performs well at estimating illumination chromaticity in a scene, and Least Squares Support Vector Regression (LS-SVR) has subsequently been put forward as an effective statistical learning prediction model. Although it successfully solves some estimation problems, it also has obvious defects: because a large number of support vectors are retained in training LS-SVR, the calculation becomes very complex and the sparsity of SVR is lost. In this paper, we take inspiration from WLS-SVM (Weighted Least Squares Support Vector Machines) and propose a new sparse model. A Density Weighted Pruning algorithm is used to improve the sparsity of LS-SVR; the result is named SLS-SVR (Sparse Least Squares Support Vector Regression). Simulations indicate that selecting only 30 percent of the support vectors is sufficient for the prediction to reach 75 percent of that of the original model.

  20. Improved estimates of coordinate error for molecular replacement

    SciTech Connect

    Oeffner, Robert D.; Bunkóczi, Gábor; McCoy, Airlie J.; Read, Randy J.

    2013-11-01

    A function for estimating the effective root-mean-square deviation in coordinates between two proteins has been developed that depends on both the sequence identity and the size of the protein and is optimized for use with molecular replacement in Phaser. A top peak translation-function Z-score of over 8 is found to be a reliable metric of when molecular replacement has succeeded. The estimate of the root-mean-square deviation (r.m.s.d.) in coordinates between the model and the target is an essential parameter for calibrating likelihood functions for molecular replacement (MR). Good estimates of the r.m.s.d. lead to good estimates of the variance term in the likelihood functions, which increases signal to noise and hence success rates in the MR search. Phaser has hitherto used an estimate of the r.m.s.d. that only depends on the sequence identity between the model and target and which was not optimized for the MR likelihood functions. Variance-refinement functionality was added to Phaser to enable determination of the effective r.m.s.d. that optimized the log-likelihood gain (LLG) for a correct MR solution. Variance refinement was subsequently performed on a database of over 21 000 MR problems that sampled a range of sequence identities, protein sizes and protein fold classes. Success was monitored using the translation-function Z-score (TFZ), where a TFZ of 8 or over for the top peak was found to be a reliable indicator that MR had succeeded for these cases with one molecule in the asymmetric unit. Good estimates of the r.m.s.d. are correlated with the sequence identity and the protein size. A new estimate of the r.m.s.d. that uses these two parameters in a function optimized to fit the mean of the refined variance is implemented in Phaser and improves MR outcomes. Perturbing the initial estimate of the r.m.s.d. from the mean of the distribution in steps of standard deviations of the distribution further increases MR success rates.

  1. Estimating {Omega} from galaxy redshifts: Linear flow distortions and nonlinear clustering

    SciTech Connect

    Bromley, B.C.; Warren, M.S.; Zurek, W.H.

    1997-02-01

    We propose a method to determine the cosmic mass density Ω from redshift-space distortions induced by large-scale flows in the presence of nonlinear clustering. Nonlinear structures in redshift space, such as fingers of God, can contaminate distortions from linear flows on scales as large as several times the small-scale pairwise velocity dispersion σ_v. Following Peacock & Dodds, we work in the Fourier domain and propose a model to describe the anisotropy in the redshift-space power spectrum; tests with high-resolution numerical data demonstrate that the model is robust for both mass and biased galaxy halos on translinear scales and above. On the basis of this model, we propose an estimator of the linear growth parameter β = Ω^0.6/b, where b measures bias, derived from sampling functions that are tuned to eliminate distortions from nonlinear clustering. The measure is tested on the numerical data and found to recover the true value of β to within ~10%. An analysis of IRAS 1.2 Jy galaxies yields β = 0.8 (+0.4, −0.3) at a scale of 1000 km s^−1, which is close to optimal given the shot noise and finite size of the survey. This measurement is consistent with dynamical estimates of β derived from both real-space and redshift-space information. The importance of the method presented here is that nonlinear clustering effects are removed to enable linear correlation anisotropy measurements on scales approaching the translinear regime. We discuss implications for analyses of forthcoming optical redshift surveys in which the dispersion is more than a factor of 2 greater than in the IRAS data. © 1997 The American Astronomical Society
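
    For reference, the linear-theory ("Kaiser") anisotropy that such estimators exploit, damped by a factor D modeling the small-scale velocity dispersion, is commonly written as follows; this is the generic textbook form, not necessarily the paper's exact parameterization:

      P_s(k,\mu) = P_r(k)\,\left(1 + \beta\mu^2\right)^2 D(k\mu\sigma_v),
      \qquad \beta = \frac{\Omega^{0.6}}{b}

    where \mu is the cosine of the angle between the wavevector and the line of sight, and P_r is the real-space power spectrum.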

  2. A stochastic movement simulator improves estimates of landscape connectivity.

    PubMed

    Coulon, A; Aben, J; Palmer, S C F; Stevens, V M; Callens, T; Strubbe, D; Lens, L; Matthysen, E; Baguette, M; Travis, J M J

    2015-08-01

    Conservation actions often focus on restoration or creation of natural areas designed to facilitate the movements of organisms among populations. To be efficient, these actions need to be based on reliable estimates or predictions of landscape connectivity. While circuit theory and least-cost paths (LCPs) are increasingly being used to estimate connectivity, these methods also have proven limitations. We compared their performance in predicting genetic connectivity with that of an alternative approach based on a simple, individual-based "stochastic movement simulator" (SMS). SMS predicts dispersal of organisms using the same landscape representation as LCPs and circuit theory-based estimates (i.e., a cost surface), while relaxing key LCP assumptions, namely individual omniscience of the landscape (by incorporating perceptual range) and the optimality of individual movements (by including stochasticity in simulated movements). The performance of the three estimators was assessed by the degree to which they correlated with genetic estimates of connectivity in two species with contrasting movement abilities (Cabanis's Greenbul, an Afrotropical forest bird species, and natterjack toad, an amphibian restricted to European sandy and heathland areas). For both species, the correlation between dispersal model and genetic data was substantially higher when SMS was used. Importantly, the results also demonstrate that the improvement gained by using SMS is robust both to variation in spatial resolution of the landscape and to uncertainty in the perceptual range model parameter. Integration of this individual-based approach with other developing methods in the field of connectivity research, such as graph theory, can yield rapid progress towards more robust connectivity indices and more effective recommendations for land management. PMID:26405745

  3. Improved soil moisture balance methodology for recharge estimation

    NASA Astrophysics Data System (ADS)

    Rushton, K. R.; Eilers, V. H. M.; Carter, R. C.

    2006-03-01

    Estimation of recharge in a variety of climatic conditions is possible using a daily soil moisture balance based on a single soil store. Both transpiration from crops and evaporation from bare soil are included in the conceptual and computational models. The actual evapotranspiration is less than the potential value when the soil is under stress; the stress factor is estimated in terms of the readily and total available water, parameters which depend on soil properties and the effective depth of the roots. Runoff is estimated as a function of the daily rainfall intensity and the current soil moisture deficit. A new concept, near surface soil storage, is introduced to account for continuing evapotranspiration on days following heavy rainfall even though a large soil moisture deficit exists. Algorithms for the computational model are provided. The data required for the soil moisture balance calculations are widely available or they can be deduced from published data. This methodology for recharge estimation using a soil moisture balance is applied to two contrasting case studies. The first case study refers to a rainfed crop in semi-arid northeast Nigeria; recharge occurs during the period of main crop growth. For the second case study in England, a location is selected where the long-term average rainfall and potential evapotranspiration are of similar magnitudes. For each case study, detailed information is presented about the selection of soil, crop and other parameters. The plausibility of the model outputs is examined using a variety of independent information and data. Uncertainties and variations in parameter values are explored using sensitivity analyses. These two case studies indicate that the improved single-store soil moisture balance model is a reliable approach for potential recharge estimation in a wide variety of situations.
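
    The single-store bookkeeping described above can be sketched in a few lines; this toy version implements only the stress-factor taper and the recharge overflow (near-surface storage and runoff are omitted), and the RAW/TAW values are illustrative.

      # Daily soil-moisture-balance bucket with a linear stress taper.
      def daily_recharge(rain, pet, raw=40.0, taw=120.0):
          """rain, pet: daily series (mm). Returns total recharge (mm)."""
          deficit, recharge = 0.0, 0.0
          for p, e in zip(rain, pet):
              if deficit <= raw:                     # no stress yet
                  stress = 1.0
              else:                                  # taper to 0 at TAW
                  stress = max(0.0, (taw - deficit) / (taw - raw))
              deficit += stress * e - p              # ET deepens, rain fills
              if deficit < 0.0:                      # store full: drainage
                  recharge += -deficit
                  deficit = 0.0
              deficit = min(deficit, taw)
          return recharge

      print(daily_recharge([0.0, 12.0, 0.0, 0.0, 25.0, 2.0],
                           [4.0, 3.5, 4.2, 4.0, 3.0, 3.8]))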

  4. Improved risk estimates for carbon tetrachloride. 1998 annual progress report

    SciTech Connect

    Benson, J.M.; Springer, D.L.; Thrall, K.D.

    1998-06-01

    The overall purpose of these studies is to improve the scientific basis for assessing the cancer risk associated with human exposure to carbon tetrachloride. Specifically, the toxicokinetics of inhaled carbon tetrachloride are being determined in rats, mice, and hamsters. Species differences in the metabolism of carbon tetrachloride by rats, mice, and hamsters are being determined in vivo and in vitro using tissues and microsomes from these rodent species and man. Dose-response relationships will be determined in all studies. The information will be used to improve the current physiologically based pharmacokinetic model for carbon tetrachloride. The authors will also determine whether carbon tetrachloride is a hepatocarcinogen only when exposure results in cell damage, cell killing, and regenerative cell proliferation. In combination, the results of these studies will provide the types of information needed to enable a refined risk estimate for carbon tetrachloride under EPA's new guidelines for cancer risk assessment.

  5. Improving transportation data for mobile source emission estimates. Final report

    SciTech Connect

    Chatterjee, A.; Miller, T.L.; Philpot, J.W.; Wholley, T.F.; Guensler, R.

    1997-12-31

    The report provides an overview of federal statutes and policies which form the foundation for air quality planning related to transportation systems development. It also provides a detailed presentation regarding the use of federally mandated air quality models in estimating mobile source emissions resulting from transportation development and operations. The authors suggest ways in which current practice and analysis tools can be improved to increase the accuracy of their results. They also suggest some priorities for additional related research. Finally, the report should assist federal agency practitioners in their efforts to improve analytical methods and tools for determining conformity. The report also serves as a basic educational resource for current and future transportation and air quality modeling.

  6. Improving the Accuracy of Estimation of Climate Extremes

    NASA Astrophysics Data System (ADS)

    Zolina, Olga; Detemmerman, Valery; Trenberth, Kevin E.

    2010-12-01

    Workshop on Metrics and Methodologies of Estimation of Extreme Climate Events; Paris, France, 27-29 September 2010; Climate projections point toward more frequent and intense weather and climate extremes such as heat waves, droughts, and floods, in a warmer climate. These projections, together with recent extreme climate events, including flooding in Pakistan and the heat wave and wildfires in Russia, highlight the need for improved risk assessments to help decision makers and the public. But accurate analysis and prediction of risk of extreme climate events require new methodologies and information from diverse disciplines. A recent workshop sponsored by the World Climate Research Programme (WCRP) and hosted at United Nations Educational, Scientific and Cultural Organization (UNESCO) headquarters in France brought together, for the first time, a unique mix of climatologists, statisticians, meteorologists, oceanographers, social scientists, and risk managers (such as those from insurance companies) who sought ways to improve scientists' ability to characterize and predict climate extremes in a changing climate.

  7. Reducing measurement scale mismatch to improve surface energy flux estimation

    NASA Astrophysics Data System (ADS)

    Iwema, Joost; Rosolem, Rafael; Rahman, Mostaquimur; Blyth, Eleanor; Wagener, Thorsten

    2016-04-01

    Soil moisture exerts an important control on land surface processes such as energy and water partitioning. A good understanding of these controls is needed especially when recognizing the challenges in providing accurate hyper-resolution hydrometeorological simulations at sub-kilometre scales. Soil moisture controlling factors can, however, differ at distinct scales. In addition, some parameters in land surface models are still often prescribed based on observations obtained at another scale not necessarily employed by such models (e.g., soil properties obtained from lab samples used in regional simulations). To minimize such effects, parameters can be constrained with local data from Eddy-Covariance (EC) towers (i.e., latent and sensible heat fluxes) and Point Scale (PS) soil moisture observations (e.g., TDR). However, the measurement scales represented by EC and PS still differ substantially. Here we use the fact that Cosmic-Ray Neutron Sensors (CRNS) estimate soil moisture over a horizontal footprint similar to that of EC fluxes to help answer the following question: does reduced observation scale mismatch yield a better soil moisture - surface flux representation in land surface models? To answer this question we analysed soil moisture and surface flux measurements from twelve COSMOS-Ameriflux sites in the USA characterized by distinct climate, soils and vegetation types. We calibrated model parameters of the Joint UK Land Environment Simulator (JULES) against PS and CRNS soil moisture data, respectively. We analysed the improvement in soil moisture estimation compared to uncalibrated model simulations and then evaluated the degree of improvement in surface fluxes before and after calibration experiments. Preliminary results suggest that a more accurate representation of soil moisture dynamics is achieved when calibrating against observed soil moisture, with further improvement obtained with CRNS relative to PS. However, our results also suggest that a more accurate

  8. An Improved Clustering Algorithm of Tunnel Monitoring Data for Cloud Computing

    PubMed Central

    Zhong, Luo; Tang, KunHao; Li, Lin; Yang, Guang; Ye, JingJing

    2014-01-01

    With the rapid development of urban construction, the number of urban tunnels is increasing and the data they produce become more and more complex. As a result, traditional clustering algorithms cannot handle such masses of tunnel monitoring data. To solve this problem, an improved parallel clustering algorithm based on k-means has been proposed. It is a clustering algorithm that uses MapReduce within cloud computing to process the data. It not only has the advantage of handling mass data but is also more efficient. Moreover, it is able to compute the average dissimilarity degree of each cluster in order to clean the abnormal data. PMID:24982971
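
    The map/reduce split of one k-means iteration can be illustrated in pure Python (single process; a real deployment distributes both phases across the cloud cluster).

      # One MapReduce-style k-means step: map = assign, reduce = average.
      import numpy as np

      def kmeans_step(points, centroids):
          # map phase: emit (cluster_id, point) for the nearest centroid
          pairs = [(int(np.argmin([np.linalg.norm(p - c) for c in centroids])), p)
                   for p in points]
          # reduce phase: new centroid = mean of the points in each cluster
          new = []
          for cid in range(len(centroids)):
              members = [p for k, p in pairs if k == cid]
              new.append(np.mean(members, axis=0) if members else centroids[cid])
          return np.array(new)

      pts = np.random.default_rng(0).normal(size=(100, 2))
      cents = pts[:3].copy()
      for _ in range(10):                 # iterate toward convergence
          cents = kmeans_step(pts, cents)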

  9. Improving Estimates of Cloud Radiative Forcing over Greenland

    NASA Astrophysics Data System (ADS)

    Wang, W.; Zender, C. S.

    2014-12-01

    Multiple driving mechanisms conspire to increase melt extent and the frequency of extreme melt events in the Arctic: changing heat transport, shortwave radiation (SW), and longwave radiation (LW). Cloud Radiative Forcing (CRF) of Greenland's surface is amplified by a dry atmosphere and by albedo feedback, making its contribution to surface melt even more variable in time and space. Unfortunately, accurate cloud observations and thus CRF estimates are hindered by Greenland's remoteness, harsh conditions, and low contrast between surface and cloud reflectance. In this study, cloud observations from satellites and reanalyses are ingested into and evaluated within a column radiative transfer model. An improved CRF dataset is obtained by correcting systematic discrepancies identified in sensitivity experiments. First, we compare the surface radiation budgets from the Column Radiation Model (CRM) driven by different cloud datasets with surface observations from the Greenland Climate Network (GC-Net). In clear skies, CRM-estimated surface radiation driven by water vapor profiles from both AIRS and MODIS during May-Sept 2010-2012 is similar, stable, and reliable. For example, although the AIRS water vapor path exceeds MODIS by 1.4 kg/m2 on a daily average, the overall absolute difference in downwelling SW is < 4 W/m2. CRM estimates are within 20 W/m2 of GC-Net downwelling SW. After calibrating the CRM in clear skies, the remaining differences between CRM and observed surface radiation are primarily attributable to differences in cloud observations. We estimate CRF using cloud products from MODIS and from MERRA. The SW radiative forcing of thin clouds is mainly controlled by cloud water path (CWP). As CWP increases from near 0 to 200 g/m2, the net surface SW drops from over 100 W/m2 to 30 W/m2 almost linearly, beyond which it becomes relatively insensitive to CWP. The LW is dominated by cloud height. For clouds at all altitudes, the lower the clouds, the greater the LW forcing. By

  10. Which Elements of Improvement Collaboratives Are Most Effective? A Cluster-Randomized Trial

    PubMed Central

    Gustafson, D. H.; Quanbeck, A. R.; Robinson, J. M.; Ford, J. H.; Pulvermacher, A.; French, M. T.; McConnell, K. J.; Batalden, P. B.; Hoffman, K. A.; McCarty, D.

    2013-01-01

    Aims Improvement collaboratives consisting of various components are used throughout healthcare to improve quality, but no study has identified which components work best. This study tested the effectiveness of different components in addiction treatment services, hypothesizing that a combination of all components would be most effective. Design An unblinded cluster-randomized trial assigned clinics to one of four groups: interest circle calls (group teleconferences), clinic-level coaching, learning sessions (large face-to-face meetings), and a combination of all three. Interest circle calls functioned as a minimal intervention comparison group. Setting Outpatient addiction treatment clinics in the U.S. Participants 201 clinics in 5 states. Measurements Clinic data managers submitted data on three primary outcomes: waiting time (mean days between first contact and first treatment), retention (percent of patients retained from first to fourth treatment session), and annual number of new patients. State and group costs were collected for a cost-effectiveness analysis. Findings Waiting time declined significantly for 3 groups: coaching (an average of −4.6 days/clinic, P=0.001), learning sessions (−3.5 days/clinic, P=0.012), and the combination (−4.7 days/clinic, P=0.001). The coaching and combination groups significantly increased the number of new patients (19.5%, P=0.028; 8.9%, P=0.029; respectively). Interest circle calls showed no significant effects on outcomes. None of the groups significantly improved retention. The estimated cost/clinic was $2,878 for coaching versus $7,930 for the combination. Coaching and the combination of collaborative components were about equally effective in achieving study aims, but coaching was substantially more cost effective. Conclusions When trying to improve the effectiveness of addiction treatment services, clinic-level coaching appears to help improve waiting time and number of new patients while other components of

  11. Laser photogrammetry improves size and demographic estimates for whale sharks

    PubMed Central

    Richardson, Anthony J.; Prebble, Clare E.M.; Marshall, Andrea D.; Bennett, Michael B.; Weeks, Scarla J.; Cliff, Geremy; Wintner, Sabine P.; Pierce, Simon J.

    2015-01-01

    Whale sharks Rhincodon typus are globally threatened, but a lack of biological and demographic information hampers an accurate assessment of their vulnerability to further decline or capacity to recover. We used laser photogrammetry at two aggregation sites to obtain more accurate size estimates of free-swimming whale sharks compared to visual estimates, allowing improved estimates of biological parameters. Individual whale sharks ranged from 432–917 cm total length (TL) (mean ± SD = 673 ± 118.8 cm, N = 122) in southern Mozambique and from 420–990 cm TL (mean ± SD = 641 ± 133 cm, N = 46) in Tanzania. By combining measurements of stranded individuals with photogrammetry measurements of free-swimming sharks, we calculated length at 50% maturity for males in Mozambique at 916 cm TL. Repeat measurements of individual whale sharks measured over periods from 347–1,068 days yielded implausible growth rates, suggesting that the growth increment over this period was not large enough to be detected using laser photogrammetry, and that the method is best applied to estimating growth rates over longer (decadal) time periods. The sex ratio of both populations was biased towards males (74% in Mozambique, 89% in Tanzania), the majority of which were immature (98% in Mozambique, 94% in Tanzania). The population structure for these two aggregations was similar to most other documented whale shark aggregations around the world. Information on small (<400 cm) whale sharks, mature individuals, and females in this region is lacking, but necessary to inform conservation initiatives for this globally threatened species. PMID:25870776

  12. Laser photogrammetry improves size and demographic estimates for whale sharks.

    PubMed

    Rohner, Christoph A; Richardson, Anthony J; Prebble, Clare E M; Marshall, Andrea D; Bennett, Michael B; Weeks, Scarla J; Cliff, Geremy; Wintner, Sabine P; Pierce, Simon J

    2015-01-01

    Whale sharks Rhincodon typus are globally threatened, but a lack of biological and demographic information hampers an accurate assessment of their vulnerability to further decline or capacity to recover. We used laser photogrammetry at two aggregation sites to obtain more accurate size estimates of free-swimming whale sharks compared to visual estimates, allowing improved estimates of biological parameters. Individual whale sharks ranged from 432-917 cm total length (TL) (mean ± SD = 673 ± 118.8 cm, N = 122) in southern Mozambique and from 420-990 cm TL (mean ± SD = 641 ± 133 cm, N = 46) in Tanzania. By combining measurements of stranded individuals with photogrammetry measurements of free-swimming sharks, we calculated length at 50% maturity for males in Mozambique at 916 cm TL. Repeat measurements of individual whale sharks measured over periods from 347-1,068 days yielded implausible growth rates, suggesting that the growth increment over this period was not large enough to be detected using laser photogrammetry, and that the method is best applied to estimating growth rates over longer (decadal) time periods. The sex ratio of both populations was biased towards males (74% in Mozambique, 89% in Tanzania), the majority of which were immature (98% in Mozambique, 94% in Tanzania). The population structure for these two aggregations was similar to most other documented whale shark aggregations around the world. Information on small (<400 cm) whale sharks, mature individuals, and females in this region is lacking, but necessary to inform conservation initiatives for this globally threatened species. PMID:25870776

  13. Towards Improved Snow Water Equivalent Estimation via GRACE Assimilation

    NASA Technical Reports Server (NTRS)

    Forman, Bart; Reichle, Rolf; Rodell, Matt

    2011-01-01

    Passive microwave (e.g. AMSR-E) and visible spectrum (e.g. MODIS) measurements of snow states have been used in conjunction with land surface models to better characterize snow pack states, most notably snow water equivalent (SWE). However, both types of measurements have limitations. AMSR-E, for example, suffers a loss of information in deep/wet snow packs. Similarly, MODIS suffers a loss of temporal correlation information beyond the initial accumulation and final ablation phases of the snow season. Gravimetric measurements, on the other hand, do not suffer from these limitations. In this study, gravimetric measurements from the Gravity Recovery and Climate Experiment (GRACE) mission are used in a land surface model data assimilation (DA) framework to better characterize SWE in the Mackenzie River basin located in northern Canada. Comparisons are made against independent, ground-based SWE observations, state-of-the-art modeled SWE estimates, and independent, ground-based river discharge observations. Preliminary results suggest improved SWE estimates, including improved timing of the subsequent ablation and runoff of the snow pack. Additionally, use of the DA procedure can add vertical and horizontal resolution to the coarse-scale GRACE measurements as well as effectively downscale the measurements in time. Such findings offer the potential for better understanding of the hydrologic cycle in snow-dominated basins located in remote regions of the globe where ground-based observation collection is difficult, if not impossible. This information could ultimately lead to improved freshwater resource management in communities dependent on snow melt, as well as a reduction in the uncertainty of river discharge into the Arctic Ocean.

  14. Improved PPP ambiguity resolution by COES FCB estimation

    NASA Astrophysics Data System (ADS)

    Li, Yihe; Gao, Yang; Shi, Junbo

    2016-05-01

    Precise point positioning (PPP) integer ambiguity resolution is able to significantly improve positioning accuracy with the correction of fractional cycle biases (FCBs), by shortening the time to first fix (TTFF) of ambiguities. When satellite orbit products are adopted to estimate the satellite FCB corrections, the narrow-lane (NL) FCB corrections will be contaminated by the orbit's line-of-sight (LOS) errors, which subsequently affect ambiguity resolution (AR) performance as well as positioning accuracy. To effectively separate orbit errors from satellite FCBs, we propose a cascaded orbit error separation (COES) method for the PPP implementation. Instead of using only one direction-independent component as in previous studies, the improved satellite NL FCB corrections are modeled by one direction-independent component and three direction-dependent components per satellite in this study. More specifically, the direction-independent component assimilates actual FCBs, whereas the direction-dependent components are used to assimilate the orbit errors. To evaluate the performance of the proposed method, GPS measurements from a regional and a global network are processed with the IGS Real-time Service (RTS), IGS Rapid (IGR) products and predicted orbits with >10 cm 3D root mean square (RMS) error. The improvements by the proposed FCB estimation method are validated in terms of ambiguity fractions after applying FCB corrections and positioning accuracy. The numerical results confirm that the FCBs obtained using the proposed method outperform those of the conventional method. The RMS of ambiguity fractions after applying FCB corrections is reduced by 13.2%. The position RMSs in north, east and up directions are reduced by 30.0, 32.0 and 22.0% on average.

  15. Evaluation of Incremental Improvement in the NWS MPE Precipitation Estimates

    NASA Astrophysics Data System (ADS)

    Qin, L.; Habib, E. H.

    2009-12-01

    This study focuses on assessment of the incremental improvement in the multi-sensor precipitation estimates (MPE) developed by the National Weather Service (NWS) River Forecast Centers (RFC). The MPE product is based upon merging of data from WSR-88D radar, surface rain gauges, and occasionally geo-stationary satellite data. The MPE algorithm produces 5 intermediate sets of products known as RMOSAIC, BMOSAIC, MMOSAIC, LMOSAIC, and MLMOSAIC. These products have different bias-removal and optimal gauge-merging mechanisms. The final product used in operational applications is selected by the RFC forecasters. All the MPE products are provided at hourly temporal resolution and over a national Hydrologic Rainfall Analysis Project (HRAP) grid of a nominal size of 4 square kilometers. To help determine the incremental improvement of MPE estimates, an evaluation analysis was performed over a two-year period (2005-2006) using 13 independently operated rain gauges located within an area of ~30 km2 in south Louisiana. The close proximity of the gauge sites to each other allows multiple gauges to be located within the same HRAP pixel and thus provides reliable estimates of true surface rainfall to be used as a reference dataset. The evaluation analysis is performed over two temporal scales: hourly and event duration. Besides graphical comparisons using scatter and histogram plots, several statistical measures are also applied, such as multiplicative bias, additive bias, correlation, and error standard deviation. The results indicated a mixed performance of the different products over the study site depending on which statistical metric is used. The products based on local bias adjustment have the lowest error standard deviation but the worst multiplicative bias. The opposite is true for products that are based on mean-field bias adjustment. Optimal merging with gauge fields leads to a reduction in the error quantiles of the products. The results of the current study will provide insight into
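
    The four verification statistics listed above are easy to reproduce; in the sketch below, multiplicative bias is taken as the ratio of totals, and the arrays are illustrative hourly values rather than the study's data.

      # Verification statistics for an MPE product against gauge truth.
      import numpy as np

      est = np.array([1.2, 0.0, 3.4, 0.8, 2.2])  # product, mm/h
      ref = np.array([1.0, 0.1, 3.0, 1.1, 2.0])  # gauge reference, mm/h

      mult_bias = est.sum() / ref.sum()          # multiplicative bias
      add_bias = (est - ref).mean()              # additive bias
      corr = np.corrcoef(est, ref)[0, 1]         # correlation
      err_std = (est - ref).std(ddof=1)          # error standard deviation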

  16. Identifying victims of workplace bullying by integrating traditional estimation approaches into a latent class cluster model.

    PubMed

    Leon-Perez, Jose M; Notelaers, Guy; Arenas, Alicia; Munduate, Lourdes; Medina, Francisco J

    2014-05-01

    Research findings underline the negative effects of exposure to bullying behaviors and document the detrimental health effects of being a victim of workplace bullying. While no one disputes its negative consequences, debate continues about the magnitude of this phenomenon since very different prevalence rates of workplace bullying have been reported. Methodological aspects may explain these findings. Our contribution to this debate integrates behavioral and self-labeling estimation methods of workplace bullying into a measurement model that constitutes a bullying typology. Results in the present sample (n = 1,619) revealed that six different groups can be distinguished according to the nature and intensity of reported bullying behaviors. These clusters portray different paths for the workplace bullying process, where negative work-related and person-degrading behaviors are strongly intertwined. The analysis of the external validity showed that integrating previous estimation methods into a single measurement latent class model provides a reliable estimation method of workplace bullying, which may overcome previous flaws. PMID:24257593

  17. [Division of winter wheat yield estimation by remote sensing based on MODIS EVI time series data and spectral angle clustering].

    PubMed

    Zhu, Zai-Chun; Chen, Lian-Qun; Zhang, Jin-Shui; Pan, Yao-Zhong; Zhu, Wen-Quan; Hu, Tan-Gao

    2012-07-01

    Crop yield estimation division is the basis of crop yield estimation by remote sensing; it provides an important scientific basis for estimation research and practice. In this paper, MODIS EVI time-series data covering the winter wheat growth period are selected as the division data, with Jiangsu province as the study area. A division method combining advanced spectral angle mapping (SAM) and k-means clustering is presented and tested for winter wheat yield estimation by remote sensing. The results show that the spectral angle clustering division method can take full advantage of the crop growth process reflected in the MODIS time series and can fully capture the regional differences in winter wheat brought about by climate differences. Compared with the traditional division method, yield estimation based on the spectral angle clustering division has a higher R2 (0.7026 vs. 0.6248) and a lower RMSE (343.34 vs. 381.34 kg x hm(-2)), reflecting the advantages of the new division method for winter wheat yield estimation. The division method uses only conveniently obtained low-resolution time-series remote sensing data, divides winter wheat into similar, well-characterized regions, and yields an accurate and stable estimation model, providing an efficient approach to winter wheat yield estimation by remote sensing. PMID:23016349
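
    The spectral-angle measure at the core of the division method is the arccosine of the normalized dot product of two time-series vectors, insensitive to overall magnitude; a minimal sketch with illustrative EVI curves.

      # Spectral angle between two EVI time series; k-means on such
      # distances groups pixels with similar growth-curve shapes.
      import numpy as np

      def spectral_angle(x, y):
          """Angle (radians) between two time-series vectors."""
          cos = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
          return np.arccos(np.clip(cos, -1.0, 1.0))

      evi_a = np.array([0.18, 0.25, 0.42, 0.61, 0.55, 0.33])  # pixel A
      evi_b = np.array([0.20, 0.28, 0.45, 0.66, 0.58, 0.35])  # pixel B
      print(spectral_angle(evi_a, evi_b))   # small angle: similar shape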

  18. The Effect of Clustering on Estimations of the UV Ionizing Background from the Proximity Effect

    NASA Astrophysics Data System (ADS)

    Pascarelle, S. M.; Lanzetta, K. M.; Chen, H. W.

    1999-09-01

    There have been several determinations of the ionizing background using the proximity effect observed in the distribution of Lyman-alpha absorption lines in the spectra of QSOs at high redshift. It is usually assumed that the distribution of lines should be the same at very small impact parameters to the QSO as it is at large impact parameters, and that any decrease in line density at small impact parameters is due to ionizing radiation from the QSO. However, if these Lyman-alpha absorption lines arise in galaxies (Lanzetta et al. 1995, Chen et al. 1998), then the strength of the proximity effect may have been underestimated in previous work, since galaxies are known to cluster around QSOs. Therefore, the UV background has likely been overestimated by the same factor.

  19. Improved Estimates of Air Pollutant Emissions from Biorefinery

    SciTech Connect

    Tan, Eric C. D.

    2015-11-13

    We have attempted to use a detailed kinetic modeling approach for improved estimation of combustion air pollutant emissions from a biorefinery. We have developed a preliminary detailed reaction mechanism for biomass combustion. Lignin is the only biomass component included in the current mechanism, and methane is used as the biogas surrogate. The model is capable of predicting the combustion emissions of greenhouse gases (CO2, N2O, CH4) and criteria air pollutants (NO, NO2, CO). The results have yet to be compared with experimental data. The current model is still in its early stages of development. Given the acknowledged complexity of biomass oxidation, as well as of the components in the feed to the combustor, the modeling approach and the chemistry set discussed here may undergo revision, extension, and further validation in the future.

  20. Adaptive whitening of the electromyogram to improve amplitude estimation.

    PubMed

    Clancy, E A; Farry, K A

    2000-06-01

    Previous research showed that whitening the surface electromyogram (EMG) can improve EMG amplitude estimation (where EMG amplitude is defined as the time-varying standard deviation of the EMG). However, conventional whitening via a linear filter seems to fail at low EMG amplitude levels, perhaps due to additive background noise in the measured EMG. This paper describes an adaptive whitening technique that overcomes this problem by cascading a nonadaptive whitening filter, an adaptive Wiener filter, and an adaptive gain correction. These stages can be calibrated from two five-second, constant-angle, constant-force contractions, one at a reference level [e.g., 50% maximum voluntary contraction (MVC)] and one at 0% MVC. In experimental studies, subjects used real-time EMG amplitude estimates to track a uniform-density, band-limited random target. With a 0.25-Hz bandwidth target, either adaptive whitening or multiple-channel processing reduced the tracking error roughly halfway to the error achieved using the dynamometer signal as the feedback. At the 1.00-Hz bandwidth, all of the EMG processors had errors equivalent to that of the dynamometer signal, reflecting that errors in this task were dominated by subjects' inability to track targets at this bandwidth. Increases in the additive noise level, smoothing window length, and tracking bandwidth diminish the advantages of whitening. PMID:10833845
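
A rough sketch of how the two calibration records could define the whitening and Wiener stages (a static frequency-domain version; the paper's Wiener filter adapts with the running amplitude estimate, and the gain-correction stage is omitted here):

```python
import numpy as np
from scipy.signal import welch

def design_whitener(emg_ref, emg_rest, fs, nperseg=256):
    """Calibrate whitening and Wiener gains from two calibration contractions.

    emg_ref  : EMG recorded at a reference level (e.g., 50% MVC).
    emg_rest : EMG recorded at 0% MVC (additive background noise only).
    Returns the frequency grid, a whitening gain that flattens the noise-free
    EMG spectrum, and a Wiener gain that attenuates noise-dominated bands.
    """
    f, p_ref = welch(emg_ref, fs, nperseg=nperseg)
    _, p_noise = welch(emg_rest, fs, nperseg=nperseg)
    p_signal = np.maximum(p_ref - p_noise, 1e-12)  # noise-free EMG PSD estimate
    whiten = 1.0 / np.sqrt(p_signal)
    wiener = p_signal / (p_signal + p_noise)       # SNR-dependent attenuation
    return f, whiten, wiener
```

The combined gain whiten * wiener would be applied to the measured EMG spectrum before taking the running standard deviation as the amplitude estimate.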

  1. Improving estimates of air pollution exposure through ubiquitous sensing technologies.

    PubMed

    de Nazelle, Audrey; Seto, Edmund; Donaire-Gonzalez, David; Mendez, Michelle; Matamala, Jaume; Nieuwenhuijsen, Mark J; Jerrett, Michael

    2013-05-01

    Traditional methods of exposure assessment in epidemiological studies often fail to integrate important information on activity patterns, which may lead to bias, loss of statistical power, or both in health effects estimates. Novel sensing technologies integrated with mobile phones offer potential to reduce exposure measurement error. We sought to demonstrate the usability and relevance of the CalFit smartphone technology to track person-level time, geographic location, and physical activity patterns for improved air pollution exposure assessment. We deployed CalFit-equipped smartphones in a free-living population of 36 subjects in Barcelona, Spain. Information obtained on physical activity and geographic location was linked to space-time air pollution mapping. We found that information from CalFit could substantially alter exposure estimates. For instance, on average travel activities accounted for 6% of people's time and 24% of their daily inhaled NO2. Due to the large number of mobile phone users, this technology potentially provides an unobtrusive means of enhancing epidemiologic exposure data at low cost. PMID:23416743
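
The contrast between 6% of time and 24% of inhaled NO2 follows from time-weighted integration of concentration times ventilation rate; a toy illustration with made-up numbers:

```python
# Each episode: (minutes spent, NO2 concentration in ug/m3, ventilation in m3/h)
episodes = [
    (480, 25.0, 0.5),   # sleep: low concentration, low breathing rate
    (60, 60.0, 1.5),    # bicycle commute: high concentration and ventilation
    (480, 30.0, 0.6),   # office
]
inhaled_ug = sum(minutes / 60.0 * conc * vent for minutes, conc, vent in episodes)
print(f"Daily inhaled NO2: {inhaled_ug:.0f} ug")
```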

  2. Using SVD on Clusters to Improve Precision of Interdocument Similarity Measure.

    PubMed

    Zhang, Wen; Xiao, Fan; Li, Bin; Zhang, Siguang

    2016-01-01

    Recently, LSI (Latent Semantic Indexing) based on SVD (Singular Value Decomposition) has been proposed to overcome the problems of polysemy and homonymy in traditional lexical matching. However, it is usually criticized for low discriminative power in representing documents, although it has been validated as having good representative quality. In this paper, SVD on clusters is proposed to improve the discriminative power of LSI. The contribution of this paper is threefold. Firstly, we survey existing linear algebra methods for LSI, including both SVD-based and non-SVD-based methods. Secondly, we propose SVD on clusters for LSI and theoretically explain that dimension expansion of document vectors and dimension projection using SVD are the two manipulations involved in SVD on clusters. Moreover, we develop updating processes to fold new documents and terms into a matrix decomposed by SVD on clusters. Thirdly, two corpora, a Chinese corpus and an English corpus, are used to evaluate the performance of the proposed methods. Experiments demonstrate that, to some extent, SVD on clusters can improve the precision of the interdocument similarity measure in comparison with other SVD-based LSI methods. PMID:27579031
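
A minimal sketch of the SVD-on-clusters idea (cluster the documents first, then run a separate truncated SVD within each cluster); the clustering algorithm and dimensions here are illustrative, not the paper's exact pipeline:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import TruncatedSVD

def svd_on_clusters(X, n_clusters=5, n_components=50, seed=0):
    """Cluster a document-term matrix, then compute cluster-local LSI spaces.

    X : (n_docs, n_terms) array. Returns labels and per-cluster embeddings,
    in which interdocument similarity can be measured within each cluster.
    """
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(X)
    embeddings = {}
    for j in range(n_clusters):
        docs = X[labels == j]
        k = max(1, min(n_components, docs.shape[0] - 1, docs.shape[1] - 1))
        embeddings[j] = TruncatedSVD(n_components=k).fit_transform(docs)
    return labels, embeddings
```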

  4. THE NEXT GENERATION VIRGO CLUSTER SURVEY. XV. THE PHOTOMETRIC REDSHIFT ESTIMATION FOR BACKGROUND SOURCES

    SciTech Connect

    Raichoor, A.; Mei, S.; Huertas-Company, M.; Licitra, R.; Erben, T.; Hildebrandt, H.; Ilbert, O.; Boissier, S.; Boselli, A.; Ball, N. M.; Côté, P.; Ferrarese, L.; Gwyn, S. D. J.; Kavelaars, J. J.; Chen, Y.-T.; Cuillandre, J.-C.; Duc, P. A.; Guhathakurta, P.; and others

    2014-12-20

    The Next Generation Virgo Cluster Survey (NGVS) is an optical imaging survey covering 104 deg^2 centered on the Virgo cluster. Currently, the complete survey area has been observed in the u*giz bands and one third in the r band. We present the photometric redshift estimation for the NGVS background sources. After a dedicated data reduction, we perform accurate photometry, with special attention to precise color measurements through point-spread function homogenization. We then estimate the photometric redshifts with the Le Phare and BPZ codes. We add a new prior that extends to i_AB = 12.5 mag. When using the u*griz bands, our photometric redshifts for 15.5 mag ≤ i ≲ 23 mag or z_phot ≲ 1 galaxies have a bias |Δz| < 0.02, less than 5% outliers, and a scatter σ_outl.rej. and an individual error on z_phot that increase with magnitude (from 0.02 to 0.05 and from 0.03 to 0.10, respectively). When using the u*giz bands over the same magnitude and redshift range, the lack of the r band increases the uncertainties in the 0.3 ≲ z_phot ≲ 0.8 range (-0.05 < Δz < -0.02, σ_outl.rej. ~ 0.06, 10%-15% outliers, and z_phot.err. ~ 0.15). We also present a joint analysis of the photometric redshift accuracy as a function of redshift and magnitude. We assess the quality of our photometric redshifts by comparison to spectroscopic samples and by verifying that the angular auto- and cross-correlation functions w(θ) of the entire NGVS photometric redshift sample across redshift bins are in agreement with the expectations.

  5. An HST/WFPC2 survey of bright young clusters in M 31. IV. Age and mass estimates

    NASA Astrophysics Data System (ADS)

    Perina, S.; Cohen, J. G.; Barmby, P.; Beasley, M. A.; Bellazzini, M.; Brodie, J. P.; Federici, L.; Fusi Pecci, F.; Galleti, S.; Hodge, P. W.; Huchra, J. P.; Kissler-Patig, M.; Puzia, T. H.; Strader, J.

    2010-02-01

    Aims: We present the main results of an imaging survey of possible young massive clusters (YMC) in M 31 performed with the Wide Field and Planetary Camera 2 (WFPC2) on the Hubble Space Telescope (HST), with the aim of estimating their ages and masses. We obtained shallow (to B ~ 25) photometry of individual stars in 19 clusters (of the 20 targets of the survey). We present the images and color magnitude diagrams (CMDs) of all of our targets. Methods: Point spread function fitting photometry of individual stars was obtained for all the WFPC2 images of the target clusters, and the completeness of the final samples was estimated using extensive sets of artificial-star experiments. The reddening, age, and metallicity of the clusters were estimated by comparing the observed CMDs and luminosity functions (LFs) with theoretical models. Stellar masses were estimated by comparison with theoretical models in the log(Age) vs. absolute integrated magnitude plane, using ages estimated from our CMDs and integrated J, H, K magnitudes from 2MASS-6X. Results: Nineteen of the twenty surveyed candidates were confirmed to be real star clusters, while one turned out to be a bright star. Three of the clusters were found not to be good YMC candidates from newly available integrated spectroscopy and were in fact found to be old from their CMDs. Of the remaining sixteen clusters, fourteen have ages between 25 Myr and 280 Myr, and two have ages older than 500 Myr (lower limits). By including ten other YMCs with HST photometry from the literature, we assembled a sample of 25 clusters younger than 1 Gyr, with masses ranging from 0.6×10^4 Msun to 6×10^4 Msun and an average of ~3×10^4 Msun. Our estimates of ages and masses agree well with recent independent studies based on integrated spectra. Conclusions: The clusters considered here are confirmed to have masses significantly higher than Galactic open clusters (OC) in the same age range. Our analysis indicates that YMCs are relatively

  6. Propensity score methods for estimating relative risks in cluster randomized trials with low-incidence binary outcomes and selection bias.

    PubMed

    Leyrat, Clémence; Caille, Agnès; Donner, Allan; Giraudeau, Bruno

    2014-09-10

    Despite randomization, selection bias may occur in cluster randomized trials. Classical multivariable regression usually allows for adjusting treatment effect estimates for unbalanced covariates. However, for binary outcomes with low incidence, such a method may fail because of separation problems. This simulation study focused on the performance of propensity score (PS)-based methods to estimate relative risks from cluster randomized trials with binary outcomes of low incidence. The results suggested that among the different approaches used (multivariable regression, direct adjustment on the PS, inverse weighting on the PS, and stratification on the PS), only direct adjustment on the PS fully corrected the bias and, moreover, had the best statistical properties. PMID:24771662
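
Direct adjustment on the PS, the best performer here, can be sketched as follows; the modified-Poisson outcome model with robust errors is one common way to obtain a relative risk for a binary outcome, used as a stand-in rather than the authors' exact estimator:

```python
import numpy as np
import statsmodels.api as sm

def rr_direct_ps_adjustment(y, treat, X):
    """Relative risk with direct adjustment on the propensity score.

    y, treat : binary outcome and treatment indicator arrays.
    X        : matrix of baseline (e.g., cluster-level) covariates.
    """
    # 1. Propensity score: probability of treatment given covariates.
    ps = sm.Logit(treat, sm.add_constant(X)).fit(disp=0).predict()
    # 2. Log-link outcome model including the PS; exp(beta) is the RR.
    design = sm.add_constant(np.column_stack([treat, ps]))
    fit = sm.GLM(y, design, family=sm.families.Poisson()).fit(cov_type="HC1")
    return np.exp(fit.params[1])
```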

  7. Small sample performance of bias-corrected sandwich estimators for cluster-randomized trials with binary outcomes.

    PubMed

    Li, Peng; Redden, David T

    2015-01-30

    The sandwich estimator in the generalized estimating equations (GEE) approach underestimates the true variance in small samples and consequently results in inflated type I error rates in hypothesis testing. This fact limits the application of the GEE approach in cluster-randomized trials (CRTs) with few clusters. Under various CRT scenarios with correlated binary outcomes, we evaluate the small-sample properties of the GEE Wald tests using bias-corrected sandwich estimators. Our results suggest that the GEE Wald z-test should be avoided in the analyses of CRTs with few clusters, even when bias-corrected sandwich estimators are used. With a t-distribution approximation, the Kauermann and Carroll (KC) correction can keep the test size at nominal levels even when the number of clusters is as low as 10, and it is robust to moderate variation of the cluster sizes. However, in cases with large variations in cluster sizes, the Fay and Graubard (FG) correction should be used instead. Furthermore, we derive a formula to calculate the power and minimum total number of clusters needed using the t-test and KC correction for CRTs with binary outcomes. The power levels predicted by the proposed formula agree well with the empirical powers from the simulations. The proposed methods are illustrated using real CRT data. We conclude that, with appropriate control of type I error rates under small sample sizes, the GEE approach is recommended for CRTs with binary outcomes because of its fewer assumptions and robustness to misspecification of the covariance structure. PMID:25345738
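
The paper derives its own formula for the minimum number of clusters; as a rough stand-in conveying the same ingredients (event probabilities, cluster size m, and intracluster correlation), here is the textbook design-effect calculation with a t approximation on 2(k-1) degrees of freedom:

```python
import numpy as np
from scipy import stats

def clusters_per_arm(p1, p2, m, icc, alpha=0.05, power=0.8):
    """Smallest number of clusters per arm for a two-arm CRT with a binary
    outcome, using design-effect variance inflation (a generic textbook
    approximation, not the paper's exact formula)."""
    var = p1 * (1 - p1) + p2 * (1 - p2)
    deff = 1 + (m - 1) * icc                # variance inflation per cluster
    for k in range(2, 10000):
        df = 2 * (k - 1)
        se = np.sqrt(var * deff / (k * m))
        t_crit = stats.t.ppf(1 - alpha / 2, df)
        achieved = 1 - stats.t.cdf(t_crit - abs(p1 - p2) / se, df)
        if achieved >= power:
            return k
    raise ValueError("no feasible cluster count found")

print(clusters_per_arm(p1=0.15, p2=0.10, m=50, icc=0.02))  # about 29 per arm
```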

  8. Improved fuzzy clustering algorithms in segmentation of DC-enhanced breast MRI.

    PubMed

    Kannan, S R; Ramathilagam, S; Devi, Pandiyarajan; Sathya, A

    2012-02-01

    Segmentation of medical images is a difficult and challenging problem due to poor image contrast and artifacts that result in missing or diffuse organ/tissue boundaries. Many researchers have applied various techniques; however, fuzzy c-means (FCM)-based algorithms are more effective than other methods. The objective of this work is to develop robust fuzzy clustering segmentation systems for effective segmentation of DCE breast MRI. This paper obtains robust fuzzy clustering algorithms by incorporating kernel methods, penalty terms, tolerance of the neighborhood attraction, an additional entropy term, and fuzzy parameters. The initial centers are obtained using an initialization algorithm to reduce the computational complexity and running time of the proposed algorithms. Experimental work on breast images shows that the proposed algorithms are effective in improving the similarity measurement, handling large amounts of noise, and producing better results for data corrupted by noise and other artifacts. The clustering results of the proposed methods are validated using the Silhouette method. PMID:20703716
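
For reference, the baseline fuzzy c-means iteration that the proposed kernel and penalty variants build on (a standard textbook implementation, not the paper's code):

```python
import numpy as np

def fcm(X, c, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Plain fuzzy c-means. X : (n_samples, n_features); c clusters;
    m > 1 is the fuzzifier. Returns (centers, membership matrix)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)       # memberships sum to one per sample
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))       # standard FCM membership update
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            return centers, U_new
        U = U_new
    return centers, U
```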

  9. Improved Soundings and Error Estimates using AIRS/AMSU Data

    NASA Technical Reports Server (NTRS)

    Susskind, Joel

    2006-01-01

    AIRS was launched on EOS Aqua on May 4, 2002, together with AMSU-A and HSB, to form a next-generation polar-orbiting infrared and microwave atmospheric sounding system. The primary products of AIRS/AMSU are twice-daily global fields of atmospheric temperature-humidity profiles, ozone profiles, sea/land surface skin temperature, and cloud-related parameters including OLR. The sounding goals of AIRS are to produce 1-km tropospheric layer mean temperatures with an rms error of 1 K, and layer precipitable water with an rms error of 20 percent, in cases with up to 80 percent effective cloud cover. The basic theory used to analyze AIRS/AMSU/HSB data in the presence of clouds, called the at-launch algorithm, and a post-launch algorithm, which differed only in minor details from the at-launch algorithm, have been described previously. The post-launch algorithm, referred to as AIRS Version 4.0, has been used by the Goddard DAAC to analyze and distribute AIRS retrieval products. In this paper we show progress made toward the AIRS Version 5.0 algorithm, which will be used by the Goddard DAAC starting late in 2006. A new methodology has been developed to provide accurate case-by-case error estimates for retrieved geophysical parameters and for the channel-by-channel cloud-cleared radiances used to derive the geophysical parameters from the AIRS/AMSU observations. These error estimates are in turn used for quality control of the derived geophysical parameters and clear-column radiances. Improvements made to the retrieval algorithm since Version 4.0 are described, as well as results comparing Version 5.0 retrieval accuracy and spatial coverage with those obtained using Version 4.0.

  10. Improved Image Registration by Sparse Patch-Based Deformation Estimation

    PubMed Central

    Kim, Minjeong; Wu, Guorong; Wang, Qian; Shen, Dinggang

    2014-01-01

    Despite intensive efforts for decades, deformable image registration is still a challenging problem due to the potentially large anatomical differences across individual images, which limit registration performance. Fortunately, this issue can be alleviated if a good initial deformation is provided for the two images under registration, often termed the moving subject and the fixed template, respectively. In this work, we present a novel patch-based initial deformation prediction framework for improving the performance of existing registration algorithms. Our main idea is to estimate the initial deformation between subject and template in a patch-wise fashion using the sparse representation technique. We argue that two image patches should follow the same deformation towards the template image if their patch-wise appearance patterns are similar. To this end, our framework consists of two stages, i.e., the training stage and the application stage. In the training stage, we register all training images to the pre-selected template, such that the deformation of each training image with respect to the template is known. In the application stage, we apply the following four steps to efficiently calculate the initial deformation field for the new test subject: (1) We pick a small number of key points in the distinctive regions of the test subject; (2) For each key point, we extract a local patch and form a coupled appearance-deformation dictionary from training images, where each dictionary atom consists of the image intensity patch as well as its respective local deformations; (3) A small set of training image patches in the coupled dictionary is selected to represent the image patch of each subject key point by sparse representation. Then, we can predict the initial deformation for each subject key point by propagating the pre-estimated deformations on the selected training patches with the same sparse representation coefficients. (4) We
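
Step (3) is plain sparse coding against the coupled dictionary; a sketch using orthogonal matching pursuit (the solver choice and the convex-weight normalization are assumptions, not stated in the abstract):

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def predict_initial_deformation(subject_patch, dict_patches, dict_deforms, k=5):
    """Predict the initial deformation at one key point.

    subject_patch : flattened intensity patch at the key point.
    dict_patches  : (n_atoms, patch_len) intensity part of the coupled
                    appearance-deformation dictionary.
    dict_deforms  : (n_atoms, 3) local deformations of the same atoms.
    """
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k)
    omp.fit(dict_patches.T, subject_patch)      # columns = dictionary atoms
    w = np.abs(omp.coef_)
    w /= w.sum() + 1e-12                        # normalize the sparse weights
    return w @ dict_deforms                     # propagate atom deformations
```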

  11. Dendrimer mediated clustering of bacteria: improved aggregation and evaluation of bacterial response and viability.

    PubMed

    Leire, Emma; Amaral, Sandra P; Louzao, Iria; Winzer, Klaus; Alexander, Cameron; Fernandez-Megia, Eduardo; Fernandez-Trillo, Francisco

    2016-06-24

    Here, we evaluate how cationic gallic acid-triethylene glycol (GATG) dendrimers interact with bacteria and their potential to develop new antimicrobials. We demonstrate that GATG dendrimers functionalised with primary amines in their periphery can induce the formation of clusters in Vibrio harveyi, an opportunistic marine pathogen, in a generation dependent manner. Moreover, these cationic GATG dendrimers demonstrate an improved ability to induce cluster formation when compared to poly(N-[3-(dimethylamino)propyl]methacrylamide) [p(DMAPMAm)], a cationic linear polymer previously shown to cluster bacteria. Viability of the bacteria within the formed clusters and evaluation of quorum sensing controlled phenotypes (i.e. light production in V. harveyi) suggest that GATG dendrimers may be activating microbial responses by maintaining a high concentration of quorum sensing signals inside the clusters while increasing permeability of the microbial outer membranes. Thus, the reported GATG dendrimers constitute a valuable platform for the development of novel antimicrobial materials that can target microbial viability and/or virulence. PMID:27127812

  12. Disseminating quality improvement: study protocol for a large cluster-randomized trial

    PubMed Central

    2011-01-01

    Background Dissemination is a critical facet of implementing quality improvement in organizations. As a field, addiction treatment has produced effective interventions but disseminated them slowly and reached only a fraction of people needing treatment. This study investigates four methods of disseminating quality improvement (QI) to addiction treatment programs in the U.S. It is, to our knowledge, the largest study of organizational change ever conducted in healthcare. The trial seeks to determine the most cost-effective method of disseminating quality improvement in addiction treatment. Methods The study is evaluating the costs and effectiveness of different QI approaches by randomizing 201 addiction-treatment programs to four interventions. Each intervention used a web-based learning kit plus monthly phone calls, coaching, face-to-face meetings, or the combination of all three. Effectiveness is defined as reducing waiting time (days between first contact and treatment), increasing program admissions, and increasing continuation in treatment. Opportunity costs will be estimated for the resources associated with providing the services. Outcomes The study has three primary outcomes: waiting time, annual program admissions, and continuation in treatment. Secondary outcomes include: voluntary employee turnover, treatment completion, and operating margin. We are also seeking to understand the role of mediators, moderators, and other factors related to an organization's success in making changes. Analysis We are fitting a mixed-effect regression model to each program's average monthly waiting time and continuation rates (based on aggregated client records), including terms to isolate state and intervention effects. Admissions to treatment are aggregated to a yearly level to compensate for seasonality. We will order the interventions by cost to compare them pair-wise to the lowest cost intervention (monthly phone calls). All randomized sites with outcome data will be

  13. Using Satellite Rainfall Estimates to Improve Climate Services in Africa

    NASA Astrophysics Data System (ADS)

    Dinku, T.

    2012-12-01

    Climate variability and change pose serious challenges to sustainable development in Africa. The recent famine crisis in the Horn of Africa is yet more evidence of how fluctuations in the climate can destroy lives and livelihoods. Building resilience against the negative impacts of climate and maximizing the benefits from favorable conditions will require mainstreaming climate issues into development policy, planning, and practice at different levels. The availability of decision-relevant climate information at different levels is critical. The number and quality of weather stations in many parts of Africa, however, have been declining. The available stations are unevenly distributed, with most located along the main roads. This imposes severe limitations on the availability of climate information and services to rural communities where these services are needed most. Where observations are taken, they suffer from gaps and poor quality and are often unavailable beyond the respective national meteorological services. Combining available local observations with satellite products, making data and products available through the Internet, and training the user community to understand and use climate information will help to alleviate these problems. Improving data availability involves organizing and cleaning all available national station observations and combining them with satellite rainfall estimates. The main advantage of the satellite products is their excellent spatial coverage at increasingly improved spatial and temporal resolutions. This approach has been implemented in Ethiopia and Tanzania, and it is in the process of being implemented in West Africa. The main outputs include: 1. Thirty-year time series of combined satellite-gauge rainfall at a 10-day time scale and 10-km spatial resolution; 2. An array of user-specific products for climate analysis and monitoring; 3. An online facility providing user-friendly tools for

  14. Improved Estimate of Phobos Secular Acceleration from MOLA Observations

    NASA Technical Reports Server (NTRS)

    Bills, Bruce; Neumann, Gregory; Smith, David; Zuber, Maria

    2004-01-01

    We report on new observations of the orbital position of Phobos, and use them to obtain a new and improved estimate of the rate of secular acceleration in longitude due to tidal dissipation within Mars. Phobos is the innermost natural satellite of Mars, and one of the few natural satellites in the solar system with an orbital period shorter than the rotation period of its primary. As a result, any departure from a perfectly elastic response by Mars to the tides raised on it by Phobos will cause a transfer of angular momentum from the orbit of Phobos to the spin of Mars. Since its discovery in 1877, Phobos has completed over 145,500 orbits, and has one of the best studied orbits in the solar system, with over 6000 earth-based astrometric observations, and over 300 spacecraft observations. As early as 1945, Sharpless noted that there is a secular acceleration in mean longitude, with rate (1.88 ± 0.25) × 10^-3 degrees per square year. In preparation for the 1989 Russian spacecraft mission to Phobos, considerable work was done compiling past observations and refining the orbital model. All of the published estimates from that era are in good agreement. A typical solution (Jacobson et al., 1989) yields (1.249 ± 0.018) × 10^-3 degrees per square year. The MOLA instrument on MGS is a laser altimeter, and was designed to measure the topography of Mars. However, it has also been used to make observations of the position of Phobos. In 1998, a direct range measurement was made, which indicated that Phobos was slightly ahead of the predicted position. The MOLA detector views the surface of Mars in a narrow field of view, at 1064 nanometer wavelength, and can detect shadows cast by Phobos on the surface of Mars. We have found 15 such serendipitous shadow transit events over the interval from xx to xx, and all of them show Phobos to be ahead of schedule, and getting progressively farther ahead of the predicted position. In contrast, the cross-track positions are quite close

  15. Technical Methods Report: The Estimation of Average Treatment Effects for Clustered RCTs of Education Interventions. NCEE 2009-0061 rev.

    ERIC Educational Resources Information Center

    Schochet, Peter Z.

    2009-01-01

    This paper examines the estimation of two-stage clustered RCT designs in education research using the Neyman causal inference framework that underlies experiments. The key distinction between the considered causal models is whether potential treatment and control group outcomes are considered to be fixed for the study population (the…

  16. Distributed sensing for atmospheric probing: an improved concept of laser firefly clustering

    NASA Astrophysics Data System (ADS)

    Kedar, Debbie; Arnon, Shlomi

    2004-10-01

    In this paper, we present an improved concept of "Laser Firefly Clustering" for atmospheric probing, elaborating upon previously published work. The laser firefly cluster is a mobile, flexible, and versatile distributed sensing system whose purpose is to profile the chemical and particulate composition of the atmosphere for pollution monitoring, meteorology, detection of contamination, and other aims. The fireflies are deployed in situ at the altitude of interest, and evoke a backscatter response from aerosols and molecules in the immediate vicinity using a coded laser signal. In the improved system, a laser transmitter and one imaging receiver telescope are placed at a base station, while sophisticated miniature distributed sensors (fireflies) are deployed in the atmosphere. The fireflies are interrogated by the base-station laser, and emit non-coded probing signals in response. The backscatter signal is processed on the firefly and the transduced data is transmitted to the imaging receiver on the ground. These improvements lead to better performance at lower energy cost and expand the scope of application of the innovative concept of laser firefly clustering. A numerical example demonstrates the potential of the novel system.

  17. Improved Rosetta Pedotransfer Estimation of Hydraulic Properties and Their Covariance

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Schaap, M. G.

    2014-12-01

    Quantitative knowledge of soil hydraulic properties is necessary for most studies involving water flow and solute transport in the vadose zone. However, it is expensive, difficult, and time consuming to measure hydraulic properties directly. Pedotransfer functions (PTFs) have been widely used to forecast soil hydraulic parameters. Rosetta is one of many PTFs and is based on artificial neural network analysis coupled with the bootstrap sampling method. The model provides hierarchical PTFs for different levels of input data (H1-H5 models, with higher-order models requiring more input variables). The original Rosetta model consists of separate PTFs for the four "van Genuchten" (VG) water retention parameters and saturated hydraulic conductivity (Ks) because different numbers of samples were available for these characteristics. In this study, we present an improved Rosetta pedotransfer function that uses a single model for all five parameters combined; these parameters are weighted for each sample individually using the covariance matrix obtained from the curve fit of the VG parameters to the primary data. The optimal number of hidden nodes, the weights for saturated hydraulic conductivity and water retention parameters in the neural network, and the number of bootstrap realizations were selected. Results show that the root mean square error (RMSE) for water retention decreased from 0.076 to 0.072 cm3/cm3 for the H2 model and from 0.044 to 0.039 cm3/cm3 for the H5 model. Mean errors, which indicate matric potential-dependent bias, were also reduced significantly in the new model. The RMSE for Ks increased slightly (H2: 0.717 to 0.722; H5: 0.581 to 0.594); this increase is minimal and a result of using a single model for water retention and Ks. Despite this small increase, the new model is recommended because of its improved estimation of water retention, and because it is now possible to calculate the full covariance matrix of soil water retention

  18. Improved Critical Eigenfunction Restriction Estimates on Riemannian Surfaces with Nonpositive Curvature

    NASA Astrophysics Data System (ADS)

    Xi, Yakun; Zhang, Cheng

    2016-07-01

    We show that one can obtain improved L^4 geodesic restriction estimates for eigenfunctions on compact Riemannian surfaces with nonpositive curvature. We achieve this by adapting Sogge's strategy in (Improved critical eigenfunction estimates on manifolds of nonpositive curvature, Preprint). We first combine the improved L^2 restriction estimate of Blair and Sogge (Concerning Toponogov's Theorem and logarithmic improvement of estimates of eigenfunctions, Preprint) and the classical improved L^∞ estimate of Bérard to obtain an improved weak-type L^4 restriction estimate. We then upgrade this weak estimate to a strong one by using the improved Lorentz space estimate of Bak and Seeger (Math Res Lett 18(4):767-781, 2011). This estimate improves the L^4 restriction estimate of Burq et al. (Duke Math J 138:445-486, 2007) and Hu (Forum Math 6:1021-1052, 2009) by a power of (log log λ)^{-1}. Moreover, in the case of compact hyperbolic surfaces, we obtain further improvements in terms of (log λ)^{-1} by applying the ideas from (Chen and Sogge, Commun Math Phys 329(3):435-459, 2014) and (Blair and Sogge, Concerning Toponogov's Theorem and logarithmic improvement of estimates of eigenfunctions, Preprint). We are able to compute various constants that appeared in (Chen and Sogge, Commun Math Phys 329(3):435-459, 2014) explicitly, by proving detailed oscillatory integral estimates and lifting calculations to the universal cover H^2.

  19. Dynamical evolution of stellar mass black holes in dense stellar clusters: estimate for merger rate of binary black holes originating from globular clusters

    NASA Astrophysics Data System (ADS)

    Tanikawa, A.

    2013-10-01

    We have performed N-body simulations of globular clusters (GCs) in order to estimate a detection rate of mergers of binary stellar-mass black holes (BBHs) by means of gravitational wave (GW) observatories. For our estimate, we have only considered mergers of BBHs which escape from GCs (BBH escapers). BBH escapers merge more quickly than BBHs inside GCs because of their small semimajor axes. N-body simulation cannot deal with a GC with the number of stars N ~ 10^6 due to its high calculation cost. We have simulated the dynamical evolution of small-N clusters (10^4 ≲ N ≲ 10^5), and have extrapolated our simulation results to large-N clusters. From our simulation results, we have found the following dependence of BBH properties on N. BBHs escape from a cluster at each two-body relaxation time at a rate proportional to N. Semimajor axes of BBH escapers are inversely proportional to N, if initial mass densities of clusters are fixed. Eccentricities, primary masses, and mass ratios of BBH escapers are independent of N. Using this dependence of BBH properties, we have artificially generated a population of BBH escapers from a GC with N ~ 10^6, and have estimated a detection rate of mergers of BBH escapers by next-generation GW observatories. We have assumed that all the GCs are formed 10 or 12 Gyr ago with their initial numbers of stars N_i = 5 × 10^5 - 2 × 10^6 and their initial stellar mass densities inside their half-mass radii ρ_h,i = 6 × 10^3 - 10^6 M⊙ pc^-3. Then, the detection rate of BBH escapers is 0.5-20 yr^-1 for a BH retention fraction R_BH = 0.5. A few BBH escapers are components of hierarchical triple systems, although we do not consider secular perturbation on such BBH escapers for our estimate. Our simulations have shown that BHs are still inside some of the GCs at the present day. These BHs may marginally contribute to BBH detection.

  20. An Effective Intrusion Detection Algorithm Based on Improved Semi-supervised Fuzzy Clustering

    NASA Astrophysics Data System (ADS)

    Li, Xueyong; Zhang, Baojian; Sun, Jiaxia; Yan, Shitao

    An algorithm for intrusion detection based on improved evolutionary semi-supervised fuzzy clustering is proposed, suited to situations in which labeled data are harder to obtain than unlabeled data in intrusion detection systems. The algorithm requires only a small amount of labeled data and a large amount of unlabeled data; the class-label information provided by the labeled data is used to guide the evolution of each fuzzy partition of the unlabeled data, which plays the role of a chromosome. The algorithm can deal with fuzzy labels, does not easily fall into local optima, and is suited to implementation on parallel architectures. Experiments show that the algorithm can improve classification accuracy and has high detection efficiency.

  1. Propensity score matching with clustered data. An application to the estimation of the impact of caesarean section on the Apgar score.

    PubMed

    Arpino, Bruno; Cannas, Massimo

    2016-05-30

    This article focuses on the implementation of propensity score matching for clustered data. Different approaches to reduce bias due to cluster-level confounders are considered and compared using Monte Carlo simulations. We investigated methods that exploit the clustered structure of the data in two ways: in the estimation of the propensity score model (through the inclusion of fixed or random effects) or in the implementation of the matching algorithm. In addition to a pure within-cluster matching, we also assessed the performance of a new approach, 'preferential' within-cluster matching. This approach first searches for control units to be matched to treated units within the same cluster. If matching is not possible within the cluster, the algorithm then searches in other clusters. All considered approaches successfully reduced the bias due to the omission of a cluster-level confounder. The preferential within-cluster matching approach, combining the advantages of within-cluster and between-cluster matching, showed relatively good performance with both large and small clusters, and was often the best method. An important advantage of this approach is that it reduces the number of unmatched units as compared with pure within-cluster matching. We applied these methods to the estimation of the effect of caesarean section on the Apgar score using birth register data. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26833893
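
A minimal greedy sketch of the 'preferential' within-cluster idea (the caliper, 1:1 matching without replacement, and the greedy ordering are illustrative choices):

```python
import numpy as np

def preferential_within_cluster_match(ps, treat, cluster, caliper=0.1):
    """Match each treated unit to the nearest-PS control, preferring controls
    from the same cluster and falling back to other clusters only when the
    treated unit's own cluster has no controls left.

    ps, treat, cluster : 1-D arrays (score, 0/1 indicator, cluster id).
    Returns a list of (treated_index, control_index) pairs.
    """
    ps, treat, cluster = map(np.asarray, (ps, treat, cluster))
    controls = {int(i) for i in np.where(treat == 0)[0]}
    pairs = []
    for t in np.where(treat == 1)[0]:
        same = [c for c in controls if cluster[c] == cluster[t]]
        pool = same if same else list(controls)   # the preferential fallback
        if not pool:
            break
        c = min(pool, key=lambda j: abs(ps[j] - ps[t]))
        if abs(ps[c] - ps[t]) <= caliper:
            pairs.append((int(t), c))
            controls.remove(c)                    # match without replacement
    return pairs
```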

  2. REGIONAL APPROACH TO ESTIMATING RECREATION BENEFITS OF IMPROVED WATER QUALITY

    EPA Science Inventory

    Recreation demand and value are estimated with the travel-cost method for fishing, camping, boating, and swimming on a site-specific regional basis. The model is regional in that 197 sites are defined for the Pacific Northwest. A gravity model is employed to estimate the number o...

  3. Cluster Observations for Combined X-Ray and Sunyaev-Zel'dovich Estimates of Peculiar Velocities and Distances

    NASA Technical Reports Server (NTRS)

    Lange, A. E.

    1997-01-01

    Measurements of the peculiar velocities of galaxy clusters with respect to the Hubble flow allow the determination of the gravitational field from all matter in the universe, not just the visible component. The Sunyaev-Zel'dovich (SZ) effect (the inverse-Compton scattering of cosmic microwave background photons by the hot gas in clusters of galaxies) allows these velocities to be measured without the use of empirical distance indicators. Additionally, because the magnitude of the SZ effect is independent of redshift, the technique can be used to measure velocities out to the epoch of cluster formation. The SZ technique requires a determination of the temperature of the hot cluster gas from X-ray observations, and measurements of the SZ effect at millimeter wavelengths to separate the contribution from the thermal motions within the gas from that of the cluster peculiax velocity. We have constructed a bolometric receiver, the Sunyaev-Zel'dovich Infrared Experiment, specifically to make measurements of the SZ effect at millimeter wavelengths in order to apply the SZ technique to peculiar velocity measurements. This receiver has already been used to set limits to the peculiar velocities of two galaxy clusters at z approx. 0.2. As a test of the SZ technique, the double cluster pair Abell 222 and 223 was selected for observation. Measurements of the redshifts of the two components suggest that, if the clusters are gravitationally bound, they should exhibit a relative velocity of 10OO km/ s, well above the expected precision of 200 km/ s (set by astrophysical confusion) that is expected from the SZ method. The temperature can be measured from ASCA data which we obtained for this cluster pair. However, in order to ensure that the temperature estimate from the ASCA data was not dominated by cooling flows within the cluster, we requested ROSAT HRI observations of this cluster pair. Analysis of the X-ray properties of the cluster pair is continuing by combining the ROSAT

  4. Using Smartphone Sensors for Improving Energy Expenditure Estimation.

    PubMed

    Pande, Amit; Zhu, Jindan; Das, Aveek K; Zeng, Yunze; Mohapatra, Prasant; Han, Jay J

    2015-01-01

    Energy expenditure (EE) estimation is an important factor in tracking personal activity and preventing chronic diseases, such as obesity and diabetes. Accurate and real-time EE estimation utilizing small wearable sensors is a difficult task, primarily because most existing schemes work offline or use heuristics. In this paper, we focus on accurate EE estimation for tracking ambulatory activities (walking, standing, climbing upstairs, or downstairs) of a typical smartphone user. We used built-in smartphone sensors (accelerometer and barometer), sampled at low frequency, to accurately estimate EE. Using a barometer sensor, in addition to an accelerometer, greatly increases the accuracy of EE estimation. Using bagged regression trees, a machine learning technique, we developed a generic regression model for EE estimation that yields up to 96% correlation with actual EE. We compare our results against state-of-the-art calorimetry equations and consumer electronics devices (Fitbit and Nike+ FuelBand). The newly developed EE estimation algorithm demonstrated superior accuracy compared with currently available methods. The results were calibrated against COSMED K4b2 calorimeter readings. PMID:27170901
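
The regression stage maps directly onto scikit-learn (assuming version 1.2+ for the estimator keyword); the features and data below are synthetic placeholders, not the study's:

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

# Illustrative feature windows: [mean |accel|, accel variance, pressure delta]
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = 2.0 + 1.5 * X[:, 0] + 0.8 * X[:, 2] + rng.normal(scale=0.1, size=500)

model = BaggingRegressor(estimator=DecisionTreeRegressor(),
                         n_estimators=100, random_state=0)
model.fit(X[:400], y[:400])            # y: reference EE per time window
print(model.score(X[400:], y[400:]))   # held-out R^2
```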

  6. Improving Estimation Accuracy of Aggregate Queries on Data Cubes

    SciTech Connect

    Pourabbas, Elaheh; Shoshani, Arie

    2008-08-15

    In this paper, we investigate the problem of estimation of a target database from summary databases derived from a base data cube. We show that such estimates can be derived by choosing a primary database which uses a proxy database to estimate the results. This technique is common in statistics, but an important issue we are addressing is the accuracy of these estimates. Specifically, given multiple primary and multiple proxy databases, that share the same summary measure, the problem is how to select the primary and proxy databases that will generate the most accurate target database estimation possible. We propose an algorithmic approach for determining the steps to select or compute the source databases from multiple summary databases, which makes use of the principles of information entropy. We show that the source databases with the largest number of cells in common provide the more accurate estimates. We prove that this is consistent with maximizing the entropy. We provide some experimental results on the accuracy of the target database estimation in order to verify our results.

  7. Reference Cluster Normalization Improves Detection of Frontotemporal Lobar Degeneration by Means of FDG-PET

    PubMed Central

    Dukart, Juergen; Perneczky, Robert; Förster, Stefan; Barthel, Henryk; Diehl-Schmid, Janine; Draganski, Bogdan; Obrig, Hellmuth; Santarnecchi, Emiliano; Drzezga, Alexander; Fellgiebel, Andreas; Frackowiak, Richard; Kurz, Alexander; Müller, Karsten; Sabri, Osama; Schroeter, Matthias L.; Yakushev, Igor

    2013-01-01

    Positron emission tomography with [18F] fluorodeoxyglucose (FDG-PET) plays a well-established role in assisting early detection of frontotemporal lobar degeneration (FTLD). Here, we examined the impact of intensity normalization to different reference areas on accuracy of FDG-PET to discriminate between patients with mild FTLD and healthy elderly subjects. FDG-PET was conducted at two centers using different acquisition protocols: 41 FTLD patients and 42 controls were studied at center 1, 11 FTLD patients and 13 controls were studied at center 2. All PET images were intensity normalized to the cerebellum, primary sensorimotor cortex (SMC), cerebral global mean (CGM), and a reference cluster with most preserved FDG uptake in the aforementioned patients group of center 1. Metabolic deficits in the patient group at center 1 appeared 1.5, 3.6, and 4.6 times greater in spatial extent, when tracer uptake was normalized to the reference cluster rather than to the cerebellum, SMC, and CGM, respectively. Logistic regression analyses based on normalized values from FTLD-typical regions showed that at center 1, cerebellar, SMC, CGM, and cluster normalizations differentiated patients from controls with accuracies of 86%, 76%, 75% and 90%, respectively. A similar order of effects was found at center 2. Cluster normalization leads to a significant increase of statistical power in detecting early FTLD-associated metabolic deficits. The established FTLD-specific cluster can be used to improve detection of FTLD on a single case basis at independent centers – a decisive step towards early diagnosis and prediction of FTLD syndromes enabling specific therapies in the future. PMID:23451025
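
The normalization step itself is a single division by the mean uptake in the chosen reference region; a sketch:

```python
import numpy as np

def normalize_to_reference(pet, reference_mask):
    """Scale a PET volume by mean uptake inside a reference region.

    pet            : 3-D array of FDG uptake values.
    reference_mask : boolean array of the same shape marking the reference
                     region (the FTLD-preserved cluster here; a cerebellum or
                     global-mean mask would be used in exactly the same way).
    """
    return pet / pet[reference_mask].mean()
```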

  9. Improved estimation of random vibration loads in launch vehicles

    NASA Technical Reports Server (NTRS)

    Mehta, R.; Erwin, E.; Suryanarayan, S.; Krishna, Murali M. R.

    1993-01-01

    Random vibration induced load is an important component of the total design load environment for payload and launch vehicle components and their support structures. The current approach to random vibration load estimation is based, particularly at the preliminary design stage, on the use of Miles' equation, which assumes a single degree-of-freedom (DOF) system and white-noise excitation. This paper examines the implications of using multi-DOF system models and response calculations based on numerical integration with the actual excitation spectra for random vibration load estimation. The analytical study presented considers a two-DOF system and brings out the effects of modal mass, damping, and frequency ratios on the random vibration load factor. The results indicate that load estimates based on Miles' equation can differ significantly from the more accurate estimates based on multi-DOF models.
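
For context, Miles' equation gives the single-DOF RMS acceleration response from the natural frequency f_n, the amplification factor Q, and the input acceleration spectral density W(f_n) at that frequency; a small numeric illustration with made-up values:

```python
import math

def miles_grms(f_n, Q, W):
    """Miles' equation: G_rms = sqrt((pi / 2) * f_n * Q * W(f_n))
    for a single-DOF system under white-noise base excitation."""
    return math.sqrt(math.pi / 2.0 * f_n * Q * W)

# e.g., f_n = 100 Hz, Q = 10, input ASD of 0.04 g^2/Hz at f_n
g_rms = miles_grms(100.0, 10.0, 0.04)
print(g_rms, 3.0 * g_rms)   # ~7.9 g RMS; ~23.8 g as a 3-sigma design load
```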

  10. Improving estimates of exposures for epidemiologic studies of plutonium workers.

    PubMed

    Ruttenber, A J; Schonbeck, M; McCrea, J; McClure, D; Martyny, J

    2001-01-01

    Epidemiologic studies of nuclear facilities usually focus on relations between cancer and doses from external penetrating radiation, and describe these exposures with little detail on measurement error and missing data. We demonstrate ways to document complex exposures to nuclear workers with data on external and internal exposures to ionizing radiation and toxic chemicals. We describe methods for assessing internal exposures to plutonium and external doses from neutrons; the use of a job exposure matrix for estimating chemical exposures; and methods for imputing missing data for exposures and doses. For plutonium workers at Rocky Flats, errors in estimating neutron doses resulted in underestimating the total external dose for production workers by about 16%. Estimates of systemic deposition do not correlate well with estimates of organ doses. Only a small percentage of workers had exposures to toxic chemicals, making epidemiologic assessments of risk difficult. PMID:11319050

  11. Estimating Treatment Effects via Multilevel Matching within Homogenous Groups of Clusters

    ERIC Educational Resources Information Center

    Steiner, Peter M.; Kim, Jee-Seon

    2015-01-01

    Despite the popularity of propensity score (PS) techniques they are not yet well studied for matching multilevel data where selection into treatment takes place among level-one units within clusters. This paper suggests a PS matching strategy that tries to avoid the disadvantages of within- and across-cluster matching. The idea is to first…

  12. Rigid and non-rigid geometrical transformations of a marker-cluster and their impact on bone-pose estimation.

    PubMed

    Bonci, T; Camomilla, V; Dumas, R; Chèze, L; Cappozzo, A

    2015-11-26

    When stereophotogrammetry and skin-markers are used, bone-pose estimation is jeopardised by the soft tissue artefact (STA). At marker-cluster level, this can be represented using a modal series of rigid (RT; translation and rotation) and non-rigid (NRT; homothety and scaling) geometrical transformations. The NRT has been found to be smaller than the RT and claimed to have a limited impact on bone-pose estimation. This study aims to investigate this matter and to comparatively assess the propagation of both STA components to the bone-pose estimate, using different numbers of markers. Twelve skin-markers distributed over the anterior aspect of a thigh were considered, and STA time functions were generated for each of them, as plausibly occur during walking, using an ad hoc model, and represented through the geometrical transformations. Using marker-clusters made of four to 12 markers affected by these STAs, and a Procrustes superimposition approach, the bone pose and the relevant accuracy were estimated. This was also done for a selected four-marker cluster affected by STAs randomly simulated by modifying the original STA NRT component, so that its energy fell in the range 30-90% of the total STA energy. The pose error, which decreased slightly as the number of markers in the marker-cluster increased, was independent of the NRT amplitude, and was always null when the RT component was removed. It was thus demonstrated that only the RT component impacts pose estimation accuracy and should thus be accounted for when designing algorithms aimed at compensating for STA. PMID:26555716
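
The Procrustes superimposition step amounts to a least-squares rigid fit of the measured marker-cluster to its reference geometry; a standard SVD-based (Kabsch) sketch, not the authors' code:

```python
import numpy as np

def procrustes_pose(markers, template):
    """Least-squares rigid pose (R, t) such that markers ~ (R @ template.T).T + t.

    markers, template : (n_markers, 3) corresponding point sets.
    """
    mc, tc = markers.mean(axis=0), template.mean(axis=0)
    A, B = template - tc, markers - mc           # centered point sets
    U, _, Vt = np.linalg.svd(A.T @ B)            # cross-covariance SVD
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, mc - R @ tc
```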

  13. Improved Recharge Estimation from Portable, Low-Cost Weather Stations.

    PubMed

    Holländer, Hartmut M; Wang, Zijian; Assefa, Kibreab A; Woodbury, Allan D

    2016-03-01

    Groundwater recharge estimation is a critical quantity for sustainable groundwater management. The feasibility and robustness of recharge estimation were evaluated using physically based modeling procedures and data from a low-cost weather station with remote sensor techniques in southern Abbotsford, British Columbia, Canada. Recharge was determined using the Richards-based vadose zone hydrological model HYDRUS-1D. The required meteorological data were recorded with a HOBO(TM) weather station for a short observation period (about 1 year) and an existing weather station (Abbotsford A) for long-term study purposes (27 years). Undisturbed soil cores were taken at two locations in the vicinity of the HOBO(TM) weather station. The derived soil hydraulic parameters were used to characterize the soil in the numerical model. Model performance was evaluated using observed soil moisture and soil temperature data obtained from subsurface remote sensors. A rigorous sensitivity analysis was used to test the robustness of the model. Recharge during the short observation period was estimated at 863 and 816 mm. The mean annual recharge was estimated at 848 and 859 mm/year based on a time series of 27 years. The ratio of annual recharge to precipitation varied from 43% to 69%. From a monthly recharge perspective, the majority (80%) of recharge due to precipitation occurred during the hydrologic winter period. The comparison of the recharge estimates with other studies indicates good agreement. Furthermore, this method is able to produce transient recharge estimates, and can provide a reasonable tool for estimating nutrient leaching, which is often controlled by strong precipitation events and rapid infiltration of water and nitrate into the soil. PMID:26011672

  14. Analyzing indirect effects in cluster randomized trials. The effect of estimation method, number of groups and group sizes on accuracy and power

    PubMed Central

    Hox, Joop J.; Moerbeek, Mirjam; Kluytmans, Anouck; van de Schoot, Rens

    2013-01-01

    Cluster randomized trials assess the effect of an intervention that is carried out at the group or cluster level. Ajzen's theory of planned behavior is often used to model the effect of the intervention as an indirect effect mediated in turn by attitude, norms and behavioral intention. Structural equation modeling (SEM) is the technique of choice to estimate indirect effects and their significance. However, this is a large-sample technique, and its application in a cluster randomized trial assumes a relatively large number of clusters. In practice, the number of clusters in these studies tends to be relatively small, e.g., much less than fifty. This study uses simulation methods to find the lowest number of clusters needed when multilevel SEM is used to estimate the indirect effect. Maximum likelihood estimation is compared to Bayesian analysis, with the central quality criteria being accuracy of the point estimate and the confidence interval. We also investigate the power of the test for the indirect effect. We conclude that Bayes estimation works well with much smaller cluster-level sample sizes, such as 20 cases, than maximum likelihood estimation; although the bias is larger, the coverage is much better. When only 5-10 clusters are available per treatment condition, problems occur even with Bayesian estimation. PMID:24550881

  15. Analyzing indirect effects in cluster randomized trials. The effect of estimation method, number of groups and group sizes on accuracy and power.

    PubMed

    Hox, Joop J; Moerbeek, Mirjam; Kluytmans, Anouck; van de Schoot, Rens

    2014-01-01

    Cluster randomized trials assess the effect of an intervention that is carried out at the group or cluster level. Ajzen's theory of planned behavior is often used to model the effect of the intervention as an indirect effect mediated in turn by attitude, norms and behavioral intention. Structural equation modeling (SEM) is the technique of choice to estimate indirect effects and their significance. However, this is a large-sample technique, and its application in a cluster randomized trial assumes a relatively large number of clusters. In practice, the number of clusters in these studies tends to be relatively small, e.g., much less than fifty. This study uses simulation methods to find the lowest number of clusters needed when multilevel SEM is used to estimate the indirect effect. Maximum likelihood estimation is compared to Bayesian analysis, with the central quality criteria being accuracy of the point estimate and the confidence interval. We also investigate the power of the test for the indirect effect. We conclude that Bayesian estimation works well with much smaller cluster-level sample sizes, such as 20 clusters, than maximum likelihood estimation; although the bias is larger, the coverage is much better. When only 5-10 clusters are available per treatment condition, problems occur even with Bayesian estimation. PMID:24550881
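
    For readers outside SEM, the quantity at stake is the product-of-paths indirect effect; in the simplest single-mediator case it is estimated as the product of two regression coefficients, with a delta-method (Sobel) standard error. This is shown here as general background, not as the multilevel estimator used in the study:

        \[ \widehat{ab} = \hat{a}\,\hat{b}, \qquad SE_{\widehat{ab}} = \sqrt{\hat{b}^{2}\, SE_{\hat{a}}^{2} + \hat{a}^{2}\, SE_{\hat{b}}^{2}} \]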

  16. Estimating the local geometry of magnetic field lines with Cluster: a theoretical discussion of physical and geometrical errors

    NASA Astrophysics Data System (ADS)

    Chanteur, Gerard

    A multi-spacecraft mission with at least four spacecraft, like CLUSTER, MMS, or Cross-Scale, can determine the local geometry of the magnetic field lines when the size of the cluster of spacecraft is small enough compared to the gradient scale lengths of the magnetic field. Shen et al. (2003) and Runov et al. (2003 and 2005) used CLUSTER data to estimate the normal and the curvature of magnetic field lines in the terrestrial current sheet; the two groups used different approaches. Reciprocal vectors of the tetrahedron formed by four spacecraft are a powerful tool for estimating gradients of fields (Chanteur, 1998 and 2000). Considering a thick, planar current sheet model and making use of the statistical properties of the reciprocal vectors allows one to discuss theoretically how physical and geometrical errors affect these estimations. References: Chanteur, G., Spatial Interpolation for Four Spacecraft: Theory, in Analysis Methods for Multi-Spacecraft Data, ISSI SR-001, pp. 349-369, ESA Publications Division, 1998. Chanteur, G., Accuracy of field gradient estimations by Cluster: Explanation of its dependency upon elongation and planarity of the tetrahedron, pp. 265-268, ESA SP-449, 2000. Runov, A., Nakamura, R., Baumjohann, W., Treumann, R. A., Zhang, T. L., Volwerk, M., Vörös, Z., Balogh, A., Glaßmeier, K.-H., Klecker, B., Rème, H., and Kistler, L., Current sheet structure near magnetic X-line observed by Cluster, Geophys. Res. Lett., 30, 33-1, 2003. Runov, A., Sergeev, V. A., Nakamura, R., Baumjohann, W., Apatenkov, S., Asano, Y., Takada, T., Volwerk, M., Vörös, Z., Zhang, T. L., Sauvaud, J.-A., Rème, H., and Balogh, A., Local structure of the magnetotail current sheet: 2001 Cluster observations, Ann. Geophys., 24, 247-262, 2006. Shen, C., Li, X., Dunlop, M., Liu, Z. X., Balogh, A., Baker, D. N., Hapgood, M., and Wang, X., Analyses on the geometrical structure of magnetic field in the current sheet based on cluster measurements, J. Geophys. Res.
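
    As background to the reciprocal-vector technique cited above, the linear (barycentric) gradient estimator of Chanteur (1998) for a field b sampled at the four spacecraft has the form

        \[ \nabla b \simeq \sum_{\alpha=1}^{4} \mathbf{k}_\alpha\, b_\alpha, \qquad \mathbf{k}_\alpha = \frac{\mathbf{r}_{\beta\gamma} \times \mathbf{r}_{\beta\delta}}{\mathbf{r}_{\beta\alpha} \cdot (\mathbf{r}_{\beta\gamma} \times \mathbf{r}_{\beta\delta})}, \]

    where r_{βγ} = r_γ − r_β are inter-spacecraft separation vectors and (α, β, γ, δ) runs over cyclic permutations of the four spacecraft indices; the error analysis discussed in the abstract concerns how noise and tetrahedron geometry propagate through these k_α.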

  17. Validation of an Improved Pediatric Weight Estimation Strategy

    PubMed Central

    Abdel-Rahman, Susan M.; Ahlers, Nichole; Holmes, Anne; Wright, Krista; Harris, Ann; Weigel, Jaylene; Hill, Talita; Baird, Kim; Michaels, Marla; Kearns, Gregory L.

    2013-01-01

    OBJECTIVES To validate the recently described Mercy method for weight estimation in an independent cohort of children living in the United States. METHODS Anthropometric data including weight, height, humeral length, and mid-upper arm circumference were collected from 976 otherwise healthy children (2 months to 14 years old). The data were used to examine the predictive performance of the Mercy method and four other weight estimation strategies (the Advanced Pediatric Life Support [APLS] method, the Broselow tape, the Luscombe and Owens method, and the Nelson method). RESULTS The Mercy method demonstrated accuracy comparable to that observed in the original study (mean error: −0.3 kg; mean percentage error: −0.3%; root mean square error: 2.62 kg; 95% limits of agreement: 0.83–1.19). This method estimated weight within 20% of actual weight for 95% of children, compared with 58.7% for APLS, 78% for Broselow, 54.4% for Luscombe and Owens, and 70.4% for Nelson. Furthermore, the Mercy method was the only weight estimation strategy which enabled prediction of weight in all of the children enrolled. CONCLUSIONS The Mercy method proved to be highly accurate and more robust than existing weight estimation strategies across a wider range of age and body mass index values, thereby making it superior to other existing approaches. PMID:23798905

  18. IMPROVING EMISSIONS ESTIMATES WITH COMPUTATIONAL INTELLIGENCE, DATABASE EXPANSION, AND COMPREHENSIVE VALIDATION

    EPA Science Inventory

    The report discusses an EPA investigation of techniques to improve methods for estimating volatile organic compound (VOC) emissions from area sources. Using the automobile refinishing industry for a detailed area source case study, an emission estimation method is being developed...

  19. Maximum-Likelihood Fits to Histograms for Improved Parameter Estimation

    NASA Astrophysics Data System (ADS)

    Fowler, J. W.

    2014-08-01

    Straightforward methods for adapting the familiar χ² statistic to histograms of discrete events and other Poisson-distributed data generally yield biased estimates of the parameters of a model. The bias can be important even when the total number of events is large. For the case of estimating a microcalorimeter's energy resolution at 6 keV from the observed shape of the Mn Kα fluorescence spectrum, a poor choice of χ² can lead to biases of at least 10% in the estimated resolution when up to thousands of photons are observed. The best remedy is a Poisson maximum-likelihood fit, through a simple modification of the standard Levenberg-Marquardt algorithm for χ² minimization. Where the modification is not possible, another approach allows iterative approximation of the maximum-likelihood fit.
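
    A minimal, self-contained sketch of the recommended approach (illustrative Gaussian-peak model and synthetic data, not the paper's microcalorimeter code): fit histogram counts by minimizing the Poisson maximum-likelihood (Cash) statistic C = 2 Σ [m − n + n ln(n/m)] instead of χ².

```python
# Sketch: Poisson maximum-likelihood fit to a histogram via the Cash statistic.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
edges = np.linspace(-5.0, 5.0, 41)
centers = 0.5 * (edges[:-1] + edges[1:])
counts, _ = np.histogram(rng.normal(0.3, 1.2, size=500), bins=edges)

def model(params, x):
    amp, mu, sigma = params
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + 1e-12  # keep m > 0

def cash(params):
    m = model(params, centers)
    n = counts
    with np.errstate(divide="ignore", invalid="ignore"):
        nlog = np.where(n > 0, n * np.log(n / m), 0.0)  # n=0 bins contribute only m
    return 2.0 * np.sum(m - n + nlog)

fit = minimize(cash, x0=[40.0, 0.0, 1.0], method="Nelder-Mead")
print(fit.x)  # amplitude, mean, sigma -- without the small-count bias of chi^2
```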

  20. CFD modelling of most probable bubble nucleation rate from binary mixture with estimation of components' mole fraction in critical cluster

    NASA Astrophysics Data System (ADS)

    Hong, Ban Zhen; Keong, Lau Kok; Shariff, Azmi Mohd

    2016-05-01

    Different mathematical models are needed to address the bubble nucleation rates of water vapour and of dissolved air molecules, as the physics by which they form bubble nuclei differs. The available methods to calculate the bubble nucleation rate in a binary mixture, such as density functional theory, are complicated to couple with a computational fluid dynamics (CFD) approach. In addition, the effect of dissolved gas concentration was neglected in most studies predicting bubble nucleation rates. The most probable bubble nucleation rate for the water vapour and dissolved air mixture in a 2D quasi-stable flow across a cavitating nozzle was estimated in the current work via the statistical mean of all possible bubble nucleation rates of the mixture (different mole fractions of water vapour and dissolved air) and the corresponding number of molecules in the critical cluster. Theoretically, the bubble nucleation rate depends strongly on the components' mole fractions in a critical cluster. Hence, the dissolved gas concentration effect was included in the current work. The possible bubble nucleation rates were predicted based on the calculated number of molecules required to form a critical cluster. The estimation of the components' mole fractions in the critical cluster for the water vapour and dissolved air mixture was obtained by coupling the enhanced classical nucleation theory with the CFD approach. In addition, the distribution of bubble nuclei of the water vapour and dissolved air mixture could be predicted via a population balance model.

  1. Estimating f{sub NL} and g{sub NL} from massive high-redshift galaxy clusters

    SciTech Connect

    Enqvist, Kari; Hotchkiss, Shaun; Taanila, Olli

    2011-04-01

    There are observations of at least 14 high-redshift massive galaxy clusters, which would be extremely improbable given a purely Gaussian initial curvature perturbation. Here we revisit the estimation of the contribution of non-Gaussianities to the cluster mass function and point out serious problems that have resulted from the application of the mass function out of the range of its validity. We remedy the situation and show that the values of f{sub NL} previously claimed to completely reconcile (i.e. at ∼ 100% confidence) the existence of the clusters with ΛCDM are unphysically small. However, for WMAP cosmology and at 95% confidence, we arrive at the limit f{sub NL} ≳ 411, which is similar to previous estimates. We also explore the possibility of a large g{sub NL} as the reason for the observed excess of the massive galaxy clusters. This scenario, g{sub NL} > 2 × 10{sup 6}, appears to be in better agreement with CMB and LSS limits for the non-Gaussianity parameters and could also provide an explanation for the overabundance of large voids in the early universe.

  2. An improved border detection in dermoscopy images for density based clustering

    PubMed Central

    2011-01-01

    Background Dermoscopy is one of the major imaging modalities used in the diagnosis of melanoma and other pigmented skin lesions. In current practice, dermatologists determine lesion area by manually drawing lesion borders. Therefore, automated assessment tools for dermoscopy images have become an important research field, mainly because of inter- and intra-observer variations in human interpretation. One of the most important steps in dermoscopy image analysis is automated detection of lesion borders. To our knowledge, our 2010 study achieved one of the highest accuracy rates in automated lesion border detection by using a modified density-based clustering algorithm. In that study, we proposed a novel method which removes redundant computations in the well-known spatial density-based clustering algorithm DBSCAN, thus speeding up the clustering process considerably. Findings Our previous study was heavily dependent on a pre-processing step which creates a binary image from the original image. In this study, we embed a new distance measure into the existing algorithm. This provides two benefits. First, since the new approach removes the pre-processing step, it works directly on color images instead of binary ones; thus, very important color information is not lost. Second, the accuracy of the delineated lesion borders is improved on 75% of the 100-image dermoscopy dataset. Conclusion The previous and improved methods were tested on the same dermoscopy dataset along with the same set of dermatologist-drawn ground truth images. Results revealed that the improved method works directly on color images without any pre-processing and generates more accurate results than the existing method. PMID:22166058
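
    A hedged sketch of the general idea, not the authors' implementation: running DBSCAN on joint (position, color) pixel features removes the need for a binarization pre-processing step. The function name and feature weights below are illustrative assumptions.

```python
# Sketch: DBSCAN directly on the color image via (position, color) features.
import numpy as np
from sklearn.cluster import DBSCAN

def segment_lesion(image, spatial_w=1.0, color_w=0.5, eps=3.0, min_samples=20):
    """image: (H, W, 3) float array; downsample large images before running."""
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    feats = np.column_stack([
        spatial_w * xs.ravel(),
        spatial_w * ys.ravel(),
        color_w * image.reshape(-1, 3),
    ])
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(feats)
    return labels.reshape(h, w)  # -1 marks noise; other labels are regions
```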

  3. X-SRQ - Improving Scalability and Performance of Multi-Core InfiniBand Clusters

    SciTech Connect

    Shipman, Galen M; Poole, Stephen W

    2008-01-01

    To improve the scalability of InfiniBand on large scale clusters, Open MPI introduced a protocol known as B-SRQ [2]. This protocol was shown to provide much better memory utilization of send and receive buffers for a wide variety of benchmarks and real-world applications. Unfortunately, B-SRQ increases the number of connections between communicating peers. While addressing one scalability problem of InfiniBand, the protocol introduced another. To alleviate the connection scalability problem of the B-SRQ protocol, a small enhancement to the reliable connection transport was requested which would allow multiple shared receive queues to be attached to a single reliable connection. This modified reliable connection transport is now known as the extended reliable connection (XRC) transport. X-SRQ is a new transport protocol in Open MPI based on B-SRQ which takes advantage of this improvement in connection scalability. This paper introduces the X-SRQ protocol and details the significantly improved scalability of the protocol over B-SRQ and its reduction of the memory footprint of connection state by as much as two orders of magnitude on large scale multi-core systems. In addition to improving scalability, the performance of latency-sensitive collective operations is improved by up to 38% while significantly decreasing the variability of results. A detailed analysis of the improved memory scalability as well as the improved performance is discussed.
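
    A back-of-envelope illustration of the connection-scaling argument (the formulas are assumptions for illustration, not taken from the paper): with one reliable connection per peer process per receive queue, connection state grows with the total process count, while an XRC-style design needs only one connection per peer node.

```python
# Back-of-envelope connection-state scaling (assumed formulas, illustration only).
nodes, cores, n_queues = 1024, 16, 8
procs = nodes * cores

bsrq_conns = (procs - 1) * n_queues  # one RC connection per peer process per queue
xsrq_conns = nodes - 1               # XRC-style: one per peer node, queues shared

print(bsrq_conns, xsrq_conns)  # ~131k vs ~1k per process: ~2 orders of magnitude
```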

  4. A study of area clustering using factor analysis in small area estimation (An analysis of per capita expenditures of subdistricts level in regency and municipality of Bogor)

    NASA Astrophysics Data System (ADS)

    Wahyudi; Notodiputro, Khairil Anwar; Kurnia, Anang; Anisa, Rahma

    2016-02-01

    Empirical Best Linear Unbiased Prediction (EBLUP) is an indirect estimation method used to estimate parameters of small areas. EBLUP works by using area-level auxiliary variables while adding area random effects. For non-sampled areas, the standard EBLUP can no longer be used because there is no information on the area random effects. To obtain more appropriate estimates for non-sampled areas, the standard EBLUP model has to be modified by adding cluster information. The aim of this research was to study, by means of simulation, clustering methods that use factor analysis to provide better cluster information. The criterion used to evaluate the goodness of fit of the methods in the simulation study was the mean percentage of clustering accuracy. The results of the simulation study showed that the use of factor analysis in clustering increased the average percentage of accuracy, particularly when using Ward's method. This method was then used to estimate per capita expenditures with Small Area Estimation (SAE) techniques, applied to SUSENAS data, with the quality of the estimates measured by RMSE. This research has shown that the standard-modified EBLUP model with factor analysis provided better estimates than the standard EBLUP model and the standard-modified EBLUP without factor analysis. Moreover, it was also shown that clustering information is important in estimating non-sampled areas.
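
    As a concrete illustration of the clustering step described above, the following sketch (assumed details: sklearn's FactorAnalysis, Ward linkage via SciPy, and synthetic data standing in for area-level auxiliary variables) reduces the auxiliary variables to factor scores and then groups areas hierarchically; the resulting labels could serve as cluster information in an EBLUP model.

```python
# Sketch: factor analysis on area-level covariates, then Ward clustering.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from scipy.cluster.hierarchy import linkage, fcluster

X = np.random.default_rng(1).normal(size=(40, 6))         # 40 areas, 6 covariates
scores = FactorAnalysis(n_components=2).fit_transform(X)  # factor scores
Z = linkage(scores, method="ward")
cluster_id = fcluster(Z, t=4, criterion="maxclust")       # 4 area clusters
print(cluster_id)  # labels usable as cluster information for non-sampled areas
```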

  5. An Improved Hybrid Recommender System Using Multi-Based Clustering Method

    NASA Astrophysics Data System (ADS)

    Puntheeranurak, Sutheera; Tsuji, Hidekazu

    Recommender systems have become an important research area as they provide intelligent web techniques for searching through the enormous volume of information available on the internet. Content-based filtering and collaborative filtering are the most widely adopted recommendation techniques to date. Each has advantages and disadvantages in providing high quality recommendations, so a hybrid recommendation mechanism incorporating components from both methods can yield satisfactory results in many situations. In this paper, we present an elegant and effective framework for combining content-based filtering and collaborative filtering. Our approach clusters on user information and item information for content-based filtering to enhance existing user data and item data. Based on the result of the first step, we calculate the predicted rating data for collaborative filtering. We then cluster the predicted rating data in the last step to enhance the scalability of the proposed system. We call our proposal the multi-based clustering method. We show that the proposed system can alleviate the cold-start and sparsity problems and is suitable for various situations in real-life applications. It thus contributes to the improvement of the prediction quality of a hybrid recommender system, as shown in the experimental results.
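
    An assumed, minimal illustration of one cluster-based hybrid step (not the paper's full pipeline): cluster users on content-side profile features, then predict a missing rating from the user's cluster.

```python
# Sketch: content-side user clustering feeding a simple rating prediction
# (synthetic profiles and ratings; all sizes and names are illustrative).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
user_features = rng.normal(size=(100, 5))        # content-based user profiles
ratings = np.where(rng.random((100, 20)) < 0.2,  # sparse matrix, NaN = unrated
                   rng.integers(1, 6, (100, 20)), np.nan)

labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(user_features)

def predict(user, item):
    peers = ratings[labels == labels[user], item]  # ratings from the user's cluster
    peers = peers[~np.isnan(peers)]
    return peers.mean() if peers.size else np.nanmean(ratings[:, item])

print(predict(0, 3))
```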

  6. Trellis Tension Monitoring Improves Yield Estimation in Vineyards

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The preponderance of yield estimation practices for commercial vineyards is based on longstanding but individually variable industry protocols that rely on hand sampling fruit on one or a small number of dates during the growing season. Limitations associated with the static nature of yield estimati...

  7. USING COLORS TO IMPROVE PHOTOMETRIC METALLICITY ESTIMATES FOR GALAXIES

    SciTech Connect

    Sanders, N. E.; Soderberg, A. M.; Levesque, E. M.

    2013-10-01

    There is a well known correlation between the mass and metallicity of star-forming galaxies. Because mass is correlated with luminosity, this relation is often exploited, when spectroscopy is not available, to estimate galaxy metallicities based on single band photometry. However, we show that galaxy color is typically more effective than luminosity as a predictor of metallicity. This is a consequence of the correlation between color and the galaxy mass-to-light ratio and the recently discovered correlation between star formation rate (SFR) and residuals from the mass-metallicity relation. Using Sloan Digital Sky Survey spectroscopy of ∼180,000 nearby galaxies, we derive 'LZC relations', empirical relations between metallicity (in seven common strong line diagnostics), luminosity, and color (in 10 filter pairs and four methods of photometry). We show that these relations allow photometric metallicity estimates, based on luminosity and a single optical color, that are ∼50% more precise than those made based on luminosity alone; galaxy metallicity can be estimated to within ∼0.05-0.1 dex of the spectroscopically derived value depending on the diagnostic used. Including color information in photometric metallicity estimates also reduces systematic biases for populations skewed toward high or low SFR environments, as we illustrate using the host galaxy of the supernova SN 2010ay. This new tool will lend more statistical power to studies of galaxy populations, such as supernova and gamma-ray burst host environments, in ongoing and future wide-field imaging surveys.

  8. Improved surface volume estimates for surface irrigation balance calculations

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Volume balance calculations used in surface irrigation engineering analysis require estimates of surface storage. Typically, these calculations use the Manning formula and normal depth assumption to calculate upstream flow depth (and thus flow area), and a constant shape factor to describe the rela...

  9. A novel ULA-based geometry for improving AOA estimation

    NASA Astrophysics Data System (ADS)

    Shirvani-Moghaddam, Shahriar; Akbari, Farida

    2011-12-01

    Due to its relatively simple implementation, the Uniform Linear Array (ULA) is a popular geometry for array signal processing. Despite this advantage, it does not have uniform performance in all directions, and Angle of Arrival (AOA) estimation performance degrades considerably at angles close to endfire. In this article, a new configuration is proposed which can solve this problem. The Proposed Array (PA) configuration adds two elements to the ULA, above and below the array axis. By extending the signal model of the ULA to the new ULA-based array, AOA estimation performance is compared in terms of angular accuracy and resolution threshold using two well-known AOA estimation algorithms, MUSIC and MVDR. In both algorithms, the Root Mean Square Error (RMSE) of the detected angles decreases as the input Signal to Noise Ratio (SNR) increases. Simulation results show that the proposed array geometry provides uniformly accurate performance and higher resolution at middle angles as well as border ones. The PA also exhibits lower RMSE than the ULA in endfire directions. Therefore, the proposed array offers better performance for border angles with almost the same array size and simplicity in both MUSIC and MVDR algorithms with respect to the conventional ULA. In addition, the AOA estimation performance of the PA geometry is compared with two well-known 2D array geometries, L-shape and V-shape, and acceptable results are obtained with equivalent or lower complexity.
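
    For reference, a textbook MUSIC implementation for a conventional ULA (not the proposed PA geometry); peaks of the returned pseudospectrum give the AOA estimates.

```python
# Textbook MUSIC pseudospectrum for a uniform linear array.
import numpy as np

def music_spectrum(R, n_sources, n_elems, d=0.5):
    """R: (n_elems x n_elems) snapshot covariance, e.g. X @ X.conj().T / n_snap;
    d: element spacing in wavelengths. Returns (angles_deg, pseudospectrum)."""
    _, vecs = np.linalg.eigh(R)              # eigenvalues in ascending order
    En = vecs[:, : n_elems - n_sources]      # noise subspace
    k = np.arange(n_elems)
    angles = np.linspace(-90.0, 90.0, 361)
    spec = np.empty_like(angles)
    for i, th in enumerate(np.deg2rad(angles)):
        a = np.exp(-2j * np.pi * d * k * np.sin(th))  # ULA steering vector
        spec[i] = 1.0 / np.linalg.norm(En.conj().T @ a) ** 2
    return angles, spec  # peaks of spec give the AOA estimates
```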

  10. Improved alternatives for estimating in-use material stocks.

    PubMed

    Chen, Wei-Qiang; Graedel, T E

    2015-03-01

    Determinations of in-use material stocks are useful for exploring past patterns and future scenarios of materials use, for estimating end-of-life flows of materials, and thereby for guiding policies on recycling and sustainable management of materials. This is especially true when those determinations are conducted for individual products or product groups such as "automobiles" rather than general (and sometimes nebulous) sectors such as "transportation". We propose four alternatives to the existing top-down and bottom-up methods for estimating in-use material stocks, with the choice depending on the focus of the study and on the available data. We illustrate with aluminum use in automobiles the robustness of and consistencies and differences among these four alternatives and demonstrate that a suitable combination of the four methods permits estimation of the in-use stock of a material contained in all products employing that material, or in-use stocks of different materials contained in a particular product. Therefore, we anticipate the estimation in the future of in-use stocks for many materials in many products or product groups, for many regions, and for longer time periods, by taking advantage of methodologies that fully employ the detailed data sets now becoming available. PMID:25636045

  11. Improved Uncertainty Quantification in Groundwater Flux Estimation Using GRACE

    NASA Astrophysics Data System (ADS)

    Reager, J. T., II; Rao, P.; Famiglietti, J. S.; Turmon, M.

    2015-12-01

    Groundwater change is difficult to monitor over large scales. One of the most successful approaches is the remote sensing of time-variable gravity using NASA Gravity Recovery and Climate Experiment (GRACE) mission data, and successful case studies have created the opportunity to move towards a global groundwater monitoring framework for the world's largest aquifers. To achieve these estimates, several approximations are applied, including those in GRACE processing corrections, the formulation of the formal GRACE errors, destriping and signal recovery, and the numerical model estimation of the snow water, surface water and soil moisture storage states used to isolate a groundwater component. A major weakness of these approaches is inconsistency: different studies have used different sources of primary and ancillary data and may achieve different results based on alternative choices among these approximations. In this study, we present two cases of groundwater change estimation, in California and the Colorado River basin, selected for their good data availability and varied climates. We achieve a robust numerical estimate of post-processing uncertainties resulting from land-surface model structural shortcomings and model resolution errors. Groundwater variations should demonstrate less variability than the overlying soil moisture state, as groundwater has a longer memory of past events due to buffering by infiltration and drainage rate limits. We apply a model ensemble approach in a Bayesian framework constrained by the assumption of decreasing signal variability with depth in the soil column. We also discuss time-variable vs. time-constant errors, across-scale vs. across-model errors, and error spectral content (across scales and across models). More robust uncertainty quantification for GRACE-based groundwater estimates would take all of these issues into account, allowing for fairer use in management applications and for better integration of GRACE
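
    The groundwater-isolation step mentioned above is conventionally written as a storage budget (standard in the GRACE literature; component notation here is generic): the groundwater anomaly is the GRACE total water storage anomaly minus the modeled soil moisture, snow water equivalent, and surface water components.

        \[ \Delta GW = \Delta TWS_{\mathrm{GRACE}} - \Delta SM - \Delta SWE - \Delta SW \]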

  12. A new cluster-based oversampling method for improving survival prediction of hepatocellular carcinoma patients.

    PubMed

    Santos, Miriam Seoane; Abreu, Pedro Henriques; García-Laencina, Pedro J; Simão, Adélia; Carvalho, Armando

    2015-12-01

    Liver cancer is the sixth most frequently diagnosed cancer, and Hepatocellular Carcinoma (HCC) in particular represents more than 90% of primary liver cancers. Clinicians assess each patient's treatment on the basis of evidence-based medicine, which may not always apply to a specific patient given the biological variability among individuals. Over the years, and for the particular case of Hepatocellular Carcinoma, some research studies have developed strategies for assisting clinicians in decision making, using computational methods (e.g. machine learning techniques) to extract knowledge from clinical data. However, these studies have some limitations that have not yet been addressed: some do not focus entirely on Hepatocellular Carcinoma patients, others have strict application boundaries, and none considers the heterogeneity between patients or the presence of missing data, a common drawback in healthcare contexts. In this work, a real, complex Hepatocellular Carcinoma database composed of heterogeneous clinical features is studied. We propose a new cluster-based oversampling approach, robust to small and imbalanced datasets, which accounts for the heterogeneity of patients with Hepatocellular Carcinoma. The preprocessing procedures of this work are based on data imputation using distance metrics appropriate for both heterogeneous and missing data (HEOM) and on clustering studies to assess the underlying patient groups in the studied dataset (K-means). The final approach is applied in order to diminish the impact of underlying patient profiles with reduced sizes on survival prediction. It is based on K-means clustering and the SMOTE algorithm to build a representative dataset and use it as a training set for different machine learning procedures (logistic regression and neural networks). The results are evaluated in terms of survival prediction and compared across baseline approaches that do not consider clustering and/or oversampling using the

  13. Improved source term estimation using blind outlier detection

    NASA Astrophysics Data System (ADS)

    Martinez-Camara, Marta; Bejar Haro, Benjamin; Vetterli, Martin; Stohl, Andreas

    2014-05-01

    Emissions of substances into the atmosphere are produced in situations such as volcano eruptions, nuclear accidents or pollutant releases. It is necessary to know the source term - how the magnitude of these emissions changes with time - in order to predict the consequences of the emissions, such as high radioactivity levels in a populated area or high concentration of volcanic ash in an aircraft flight corridor. However, in general, we know neither how much material was released in total, nor the relative variation of emission strength with time. Hence, estimating the source term is a crucial task. Estimating the source term generally involves solving an ill-posed linear inverse problem using datasets of sensor measurements. Several so-called inversion methods have been developed for this task. Unfortunately, objective quantitative evaluation of the performance of inversion methods is difficult due to the fact that the ground truth is unknown for practically all the available measurement datasets. In this work we use the European Tracer Experiment (ETEX) - a rare example of an experiment where the ground truth is available - to develop and to test new source estimation algorithms. Knowledge of the ground truth grants us access to the additive error term. We show that the distribution of this error is heavy-tailed, which means that some measurements are outliers. We also show that precisely these outliers severely degrade the performance of traditional inversion methods. Therefore, we develop blind outlier detection algorithms specifically suited to the source estimation problem. Then, we propose new inversion methods that combine traditional regularization techniques with blind outlier detection. Such hybrid methods reduce the error of reconstruction of the source term up to 45% with respect to previously proposed methods.
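
    A sketch of the general recipe (regularized inversion made robust to outlying sensors via iteratively reweighted least squares with Huber weights); the ETEX-based algorithms in the abstract differ in detail, so treat this as an assumed illustration.

```python
# Sketch: Tikhonov-regularized inversion with Huber reweighting, so outlying
# measurements are automatically downweighted.
import numpy as np

def robust_inversion(A, y, lam=1e-2, k=1.345, n_iter=10):
    """A: sensitivity matrix (m x n), y: measurements (m,). Returns x and weights."""
    n = A.shape[1]
    w = np.ones(len(y))
    x = np.zeros(n)
    for _ in range(n_iter):
        W = np.diag(w)
        x = np.linalg.solve(A.T @ W @ A + lam * np.eye(n), A.T @ W @ y)
        r = y - A @ x
        s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12  # robust scale (MADn)
        w = np.where(np.abs(r) <= k * s, 1.0, k * s / np.abs(r))  # Huber weights
    return x, w  # weights << 1 flag probable outlier sensors
```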

  14. Estimating Time of Infection Using Prior Serological and Individual Information Can Greatly Improve Incidence Estimation of Human and Wildlife Infections

    PubMed Central

    Borremans, Benny; Hens, Niel; Beutels, Philippe; Leirs, Herwig; Reijniers, Jonas

    2016-01-01

    Diseases of humans and wildlife are typically tracked and studied through incidence, the number of new infections per time unit. Estimating incidence is not without difficulties, as asymptomatic infections, low sampling intervals and low sample sizes can introduce large estimation errors. After infection, biomarkers such as antibodies or pathogens often change predictably over time, and this temporal pattern can contain information about the time since infection that could improve incidence estimation. Antibody level and avidity have been used to estimate time since infection and to recreate incidence, but the errors on these estimates using currently existing methods are generally large. Using a semi-parametric model in a Bayesian framework, we introduce a method that allows the use of multiple sources of information (such as antibody level, pathogen presence in different organs, individual age, season) for estimating individual time since infection. When sufficient background data are available, this method can greatly improve incidence estimation, which we show using arenavirus infection in multimammate mice as a test case. The method performs well, especially compared to the situation in which seroconversion events between sampling sessions are the main data source. The possibility to implement several sources of information allows the use of data that are in many cases already available, which means that existing incidence data can be improved without the need for additional sampling efforts or laboratory assays. PMID:27177244

  15. Estimating Time of Infection Using Prior Serological and Individual Information Can Greatly Improve Incidence Estimation of Human and Wildlife Infections.

    PubMed

    Borremans, Benny; Hens, Niel; Beutels, Philippe; Leirs, Herwig; Reijniers, Jonas

    2016-05-01

    Diseases of humans and wildlife are typically tracked and studied through incidence, the number of new infections per time unit. Estimating incidence is not without difficulties, as asymptomatic infections, low sampling intervals and low sample sizes can introduce large estimation errors. After infection, biomarkers such as antibodies or pathogens often change predictably over time, and this temporal pattern can contain information about the time since infection that could improve incidence estimation. Antibody level and avidity have been used to estimate time since infection and to recreate incidence, but the errors on these estimates using currently existing methods are generally large. Using a semi-parametric model in a Bayesian framework, we introduce a method that allows the use of multiple sources of information (such as antibody level, pathogen presence in different organs, individual age, season) for estimating individual time since infection. When sufficient background data are available, this method can greatly improve incidence estimation, which we show using arenavirus infection in multimammate mice as a test case. The method performs well, especially compared to the situation in which seroconversion events between sampling sessions are the main data source. The possibility to implement several sources of information allows the use of data that are in many cases already available, which means that existing incidence data can be improved without the need for additional sampling efforts or laboratory assays. PMID:27177244

  16. Uncertainty Estimation Improves Energy Measurement and Verification Procedures

    SciTech Connect

    Walter, Travis; Price, Phillip N.; Sohn, Michael D.

    2014-05-14

    Implementing energy conservation measures in buildings can reduce energy costs and environmental impacts, but such measures cost money to implement, so intelligent investment strategies require the ability to quantify the energy savings by comparing actual energy use to how much energy would have been used in the absence of the conservation measures (known as the baseline energy use). Methods exist for predicting baseline energy use, but a limitation of most statistical methods reported in the literature is inadequate quantification of the uncertainty in baseline energy use predictions. However, estimation of uncertainty is essential for weighing the risks of investing in retrofits. Most commercial buildings have, or soon will have, electricity meters capable of providing data at short time intervals. These data provide new opportunities to quantify uncertainty in baseline predictions, and to do so after shorter measurement durations than are traditionally used. In this paper, we show that uncertainty estimation provides greater measurement and verification (M&V) information and helps to overcome some of the difficulties in deciding how much data is needed to develop baseline models and to confirm energy savings. We also show that cross-validation is an effective method for computing uncertainty. In so doing, we extend a simple regression-based method of predicting energy use from short-interval meter data. We demonstrate the methods by predicting energy use in 17 real commercial buildings. We discuss the benefits of uncertainty estimates, which can provide actionable decision-making information for investing in energy conservation measures.
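
    A minimal sketch of the cross-validation idea (assumed linear baseline model and generic features): fit the baseline on k−1 folds of pre-retrofit meter data and use the spread of out-of-fold residuals as the prediction uncertainty.

```python
# Sketch: k-fold cross-validation to quantify baseline-model uncertainty.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

def baseline_uncertainty(X, y, n_splits=5):
    """X: (n, p) array of weather/time features; y: (n,) metered energy use."""
    residuals = []
    for train, test in KFold(n_splits, shuffle=True, random_state=0).split(X):
        model = LinearRegression().fit(X[train], y[train])
        residuals.extend(y[test] - model.predict(X[test]))
    return np.std(residuals)  # out-of-sample uncertainty per metering interval
```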

  17. Improved plausibility bounds about the 2005 HIV and AIDS estimates

    PubMed Central

    Morgan, M; Walker, N; Gouws, E; Stanecki, K A; Stover, J

    2006-01-01

    Background Since 1998 the Joint United Nations Programme on HIV/AIDS and the World Health Organization have provided estimates of the magnitude of the HIV epidemic for individual countries. Starting with the 2003 estimates, plausibility bounds about the estimates were also reported. The bounds are intended to serve as a guide to what reasonable or plausible ranges are for the uncertainty in HIV incidence, prevalence, and mortality. Methods Plausibility bounds were developed for three situations: for countries with generalised epidemics, for countries with low level or concentrated (LLC) epidemics, and for regions. The techniques used build on those developed for the previous reporting round; however, the current bounds are based on the available surveillance and survey data from each individual country rather than on data from a few prototypical countries. Results The uncertainty around the HIV estimates depends on the quality of the surveillance system in the country. Countries with population-based HIV seroprevalence surveys have the tightest plausibility bounds (average relative range about the adult HIV prevalence (ARR) of −18% to +19%). Generalised epidemic countries without a survey have the next tightest ranges (average ARR of −46% to +59%). Those LLC countries which have conducted multiple surveys over time for HIV among the populations most at risk have bounds similar to those in generalised epidemic countries (ARR −40% to +67%). As the number and quality of the studies in LLC countries go down, the plausibility bounds widen (ARR of −38% to +102% for countries with medium quality data and ARR of −53% to +183% for countries with poor quality data). The plausibility bounds for regions directly reflect the bounds for the countries in those regions. Conclusions Although scientific, the plausibility bounds do not represent and should not be interpreted as formal statistical confidence intervals. However in order to make the bounds as

  18. RCWIM - an improved global water isotope pattern prediction model using fuzzy climatic clustering regionalization

    NASA Astrophysics Data System (ADS)

    Terzer, Stefan; Araguás-Araguás, Luis; Wassenaar, Leonard I.; Aggarwal, Pradeep K.

    2013-04-01

    Prediction of geospatial H and O isotopic patterns in precipitation has become increasingly important to diverse disciplines beyond hydrology, such as climatology, ecology, food authenticity, and criminal forensics, because these two isotopes of rainwater often control the terrestrial isotopic spatial patterns that facilitate the linkage of products (food, wildlife, water) to origin or movement (food, criminalistics). Currently, spatial water isotopic pattern prediction relies on combined regression and interpolation techniques to create gridded datasets by using data obtained from the Global Network of Isotopes In Precipitation (GNIP). However, current models suffer from two shortcomings: (a) models may have limited covariates and/or parameterization fitted to a global domain, which results in poor predictive outcomes at regional scales, or (b) the spatial domain is intentionally restricted to regional settings, and thereby of little use in providing information at global geospatial scales. Here we present a new global climatically regionalized isotope prediction model which overcomes these limitations through the use of fuzzy clustering of climatic data subsets, allowing us to better identify and customize appropriate covariates and their multiple regression coefficients instead of aiming for a one-size-fits-all global fit (RCWIM - Regionalized Climate Cluster Water Isotope Model). The new model significantly reduces the point-based regression residuals and results in much lower overall isotopic prediction uncertainty, since residuals are interpolated onto the regression surface. The new precipitation δ2H and δ18O isoscape model is available on a global scale at 10 arc-minute spatial resolution and at monthly, seasonal and annual temporal resolution, and will provide improved predicted stable isotope values used for a growing number of applications. The model further provides a flexible framework for future improvements using regional climatic clustering.

  19. The Impact of Galaxy Cluster Mergers on Cosmological Parameter Estimation from Surveys of the Sunyaev-Zel'dovich Effect

    NASA Astrophysics Data System (ADS)

    Wik, Daniel R.; Sarazin, Craig L.; Ricker, Paul M.; Randall, Scott W.

    2008-06-01

    Sensitive surveys of the cosmic microwave background will detect thousands of galaxy clusters via the Sunyaev-Zel'dovich (SZ) effect. Two SZ observables, the central or maximum and integrated Comptonization parameters ymax and Y, relate in a simple way to the total cluster mass, which allows the construction of mass functions (MFs) that can be used to estimate cosmological parameters such as ΩM, σ8, and the dark energy parameter w. However, clusters form from the mergers of smaller structures, events that can disrupt the equilibrium of intracluster gas on which SZ-M relations rely. From a set of N-body/hydrodynamical simulations of binary cluster mergers, we calculate the evolution of Y and ymax over the course of merger events and find that both parameters are transiently "boosted," primarily during the first core passage. We then use a semianalytic technique developed by Randall et al. to estimate the effect of merger boosts on the distribution functions YF and yF of Y and ymax, respectively, via cluster merger histories determined from extended Press-Schechter (PS) merger trees. We find that boosts do not induce an overall systematic effect on YFs, and the values of ΩM, σ8, and w were returned to within 2% of values expected from the nonboosted YFs. The boosted yFs are significantly biased, however, causing ΩM to be underestimated by 15%-45%, σ8 to be overestimated by 10%-25%, and w to be pushed to more negative values by 25%-45%. We confirm that the integrated SZ effect, Y, is far more robust to mergers than ymax, as previously reported by Motl et al. and similarly found for the X-ray equivalent YX, and we conclude that Y is the superior choice for constraining cosmological parameters.

  20. Robust estimation of the arterial input function for Logan plots using an intersectional searching algorithm and clustering in positron emission tomography for neuroreceptor imaging.

    PubMed

    Naganawa, Mika; Kimura, Yuichi; Yano, Junichi; Mishina, Masahiro; Yanagisawa, Masao; Ishii, Kenji; Oda, Keiichi; Ishiwata, Kiichi

    2008-03-01

    The Logan plot is a powerful algorithm used to generate binding-potential images from dynamic positron emission tomography (PET) images in neuroreceptor studies. However, it requires arterial blood sampling and metabolite correction to provide an input function, and clinically it is preferable that this need for arterial blood sampling be obviated. Estimation of the input function with metabolite correction using an intersectional searching algorithm (ISA) has been proposed. The ISA seeks the input function from the intersection between the planes spanned by measured radioactivity curves in tissue and their cumulative integrals in data space. However, the ISA is sensitive to noise included in measured curves, and it often fails to estimate the input function. In this paper, we propose a robust estimation of the cumulative integral of the plasma time-activity curve (pTAC) using ISA (robust EPISA) to overcome noise issues. The EPISA reduces noise in the measured PET data using averaging and clustering that gathers radioactivity curves with similar kinetic parameters. We confirmed that a little noise made the estimation of the input function extremely difficult in the simulation. The robust EPISA was validated by application to eight real dynamic [(11)C]TMSX PET data sets used to visualize adenosine A(2A) receptors and four real dynamic [(11)C]PIB PET data sets used to visualize amyloid-beta plaque. Peripherally, the latter showed faster metabolism than the former. The clustering operation improved the signal-to-noise ratio for the PET data sufficiently to estimate the input function, and the calculated neuroreceptor images had a quality equivalent to that using measured pTACs after metabolite correction. Our proposed method noninvasively yields an alternative input function for Logan plots, allowing the Logan plot to be more useful in neuroreceptor studies. PMID:18187345
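
    For context, the Logan plot is the graphical analysis in which, for times T beyond some t*, the transformed tissue data become linear with slope equal to the total distribution volume DV (standard form; C_T is the tissue time-activity curve and C_p the metabolite-corrected plasma input whose cumulative integral EPISA estimates):

        \[ \frac{\int_0^{T} C_T(t)\,dt}{C_T(T)} = DV\, \frac{\int_0^{T} C_p(t)\,dt}{C_T(T)} + b, \qquad T > t^{*} \]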

  1. Modified distance in average linkage based on M-estimator and MADn criteria in hierarchical cluster analysis

    NASA Astrophysics Data System (ADS)

    Muda, Nora; Othman, Abdul Rahman

    2015-10-01

    The process of grouping a set of objects into classes of similar objects is called clustering. It divides a large group of observations into smaller groups so that the observations within each group are relatively similar and the observations in different groups are relatively dissimilar. In this study, an agglomerative method in hierarchical cluster analysis is chosen, and clusters are constructed using the average linkage technique. Average linkage requires a distance between clusters, calculated as the average distance between all pairs of points, one from each group. This average distance is not robust when there is an outlier. Therefore, the average distance in average linkage needs to be modified to overcome the outlier problem. An outlier detection criterion based on MADn is used, and the average distance is recalculated without the outliers. Next, the distance in average linkage is calculated based on a modified one-step M-estimator (MOM). The resulting clusters are presented in a dendrogram. To evaluate the goodness of the modified distance in average linkage clustering, a bootstrap analysis is conducted on the dendrogram, and the bootstrap value (BP) is assessed for each branch that forms a group, to ensure the reliability of the constructed branches. This study found that the average linkage technique with the modified distance is significantly superior to the usual average linkage technique when there is an outlier; the two techniques perform similarly when there is no outlier.
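
    A sketch of the modified between-cluster distance described above (details assumed, e.g. the MADn cutoff constant): compute all pairwise distances between two groups, discard distances flagged as outliers by a MADn rule, and average the remainder.

```python
# Sketch: MADn-screened average linkage distance between two clusters.
import numpy as np
from scipy.spatial.distance import cdist

def robust_average_linkage(A, B, c=2.24):
    d = cdist(A, B).ravel()                     # all between-group distances
    med = np.median(d)
    madn = 1.4826 * np.median(np.abs(d - med))  # normalized MAD
    keep = np.abs(d - med) <= c * madn          # drop outlying distances
    return d[keep].mean() if keep.any() else med
```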

  2. Global Water Resources Under Future Changes: Toward an Improved Estimation

    NASA Astrophysics Data System (ADS)

    Islam, M.; Agata, Y.; Hanasaki, N.; Kanae, S.; Oki, T.

    2005-05-01

    Global water resources availability in the 21st century is going to be an important concern. Despite international recognition of the issue, however, there have so far been very few global estimates of water resources that consider the geographical linkage between water supply and demand as defined by runoff and its passage through the river network. The available studies are also limited by, for example, differing approaches to defining water scarcity, reliance on annual average figures that ignore inter-annual or seasonal variability, and the absence of virtual water trading. In this study, global water resources under future climate change, in association with several socio-economic factors, were estimated over both temporal and spatial scales. Global runoff data were derived from several land surface models under the GSWP2 (Global Soil Wetness Project) project and further processed through the TRIP (Total Runoff Integrating Pathways) river routing model to produce estimates on a 0.5 x 0.5 degree grid. Water abstraction was estimated at the same spatial resolution for three sectors: domestic, industrial, and agricultural. GCM outputs from CCSR and MRI were used to predict runoff changes. Socio-economic factors such as population and GDP growth mostly affected the demand side. Instead of looking only at annual figures, monthly figures for both supply and demand were considered. In an average year, seasonal variability can affect crop yield significantly; in other cases, inter-annual variability of runoff can cause absolute drought conditions. To account for the vulnerability of a region to future changes, both inter-annual and seasonal effects were thus considered. At present, the study assumes future agricultural water use to be unchanged under climatic changes. In this connection, work with the EPIC model is underway to estimate future agricultural water demand under climatic changes on a monthly basis. From

  3. Improved fire radiative energy estimation in high latitude ecosystems

    NASA Astrophysics Data System (ADS)

    Melchiorre, A.; Boschetti, L.

    2014-12-01

    Scientists, land managers, and policy makers are facing new challenges as fire regimes evolve as a result of climate change (Westerling et al. 2006). In high latitudes, fires are increasing in number and size as temperatures increase and precipitation decreases (Kasischke and Turetsky 2006). Peatlands, like the large complexes in the Alaskan tundra, are burning more frequently and severely as a result of these changes, releasing large amounts of greenhouse gases. Remotely sensed data are routinely used to monitor the location of active fires and the extent of burned areas, but they are not sensitive to the depth of the organic soil layer combusted, resulting in underestimation of peatland greenhouse gas emissions under the conventional 'bottom up' approach (Seiler and Crutzen 1980). An alternative approach is the direct estimation of the biomass burned from the energy released by the fire (Fire Radiative Energy, FRE) (Wooster et al. 2003). Previous work (Boschetti and Roy 2009; Kumar et al. 2011) showed that the sampling interval of polar-orbiting satellite systems severely limits the accuracy of FRE in tropical ecosystems (up to four overpasses a day with MODIS), but because of the convergence of the orbits, more observations are available at higher latitudes. In this work, we used a combination of MODIS thermal data and Landsat optical data to estimate biomass burned in peatland ecosystems. First, the global MODIS active fire detection algorithm (Giglio et al. 2003) was modified, adapting the temperature thresholds to maximize the number of detections in boreal regions. Then, following the approach proposed by Boschetti and Roy (2009), the FRP point estimates were interpolated in time and space to cover the full temporal and spatial extent of the burned area, mapped with Landsat 5 TM data. The methodology was tested on a large burned area in Alaska, and the results compared to published field measurements (Turetsky et al. 2011).
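
    The FRE approach referred to above rests on the near-linear relation between radiative energy and biomass combusted; in the FRE literature a combustion coefficient of roughly 0.37 kg per MJ (from Wooster and co-workers) is commonly used, so a hedged generic form is

        \[ FRE = \int_{t_0}^{t_1} FRP(t)\,dt, \qquad M_{\mathrm{burned}} \approx k \cdot FRE, \quad k \approx 0.37\ \mathrm{kg\ MJ^{-1}}. \]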

  4. Improved Speech Coding Based on Open-Loop Parameter Estimation

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Chen, Ya-Chin; Longman, Richard W.

    2000-01-01

    A nonlinear optimization algorithm for linear predictive speech coding was developed earlier that not only optimizes the linear model coefficients for the open-loop predictor, but performs the optimization including the effects of quantization of the transmitted residual. It also simultaneously optimizes the quantization levels used for each speech segment. In this paper, we present an improved method for initialization of this nonlinear algorithm, and demonstrate substantial improvements in performance. In addition, the new procedure produces monotonically improving speech quality with increasing numbers of bits used in the transmitted error residual. Examples of speech encoding and decoding are given for 8 speech segments, and signal-to-noise levels as high as 47 dB are produced. As in typical linear predictive coding, the optimization is done on the open-loop speech analysis model. Here we demonstrate that minimizing the error of the closed-loop speech reconstruction, instead of the simpler open-loop optimization, is likely to produce negligible improvement in speech quality. The examples suggest that the algorithm here is close to giving the best performance obtainable from a linear model, for the chosen order and the chosen number of bits for the codebook.
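
    As background, the open-loop linear predictor in LPC estimates each sample from the previous p samples, and it is the quantized residual e[n] that is transmitted; the algorithm above jointly optimizes the coefficients a_k and the quantization levels:

        \[ \hat{s}[n] = \sum_{k=1}^{p} a_k\, s[n-k], \qquad e[n] = s[n] - \hat{s}[n] \]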

  5. Improving Mantel-Haenszel DIF Estimation through Bayesian Updating

    ERIC Educational Resources Information Center

    Zwick, Rebecca; Ye, Lei; Isham, Steven

    2012-01-01

    This study demonstrates how the stability of Mantel-Haenszel (MH) DIF (differential item functioning) methods can be improved by integrating information across multiple test administrations using Bayesian updating (BU). The authors conducted a simulation that showed that this approach, which is based on earlier work by Zwick, Thayer, and Lewis,…

  6. Estimation of feasible solution space using Cluster Newton Method: application to pharmacokinetic analysis of irinotecan with physiologically-based pharmacokinetic models

    PubMed Central

    2013-01-01

    Background To facilitate new drug development, physiologically-based pharmacokinetic (PBPK) modeling methods receive growing attention as a tool to fully understand and predict complex pharmacokinetic phenomena. As the number of parameters needed to reproduce physiological functions tends to be large in PBPK models, efficient parameter estimation methods are essential. We have successfully applied a recently developed algorithm for estimating a feasible solution space, called the Cluster Newton Method (CNM), to reveal the cause of irinotecan pharmacokinetic alterations in two cancer patient groups. Results After improvements to the original CNM algorithm to maintain parameter diversity, a feasible solution space was successfully estimated for 55 or 56 parameters in the irinotecan PBPK model, within ten iterations, 3000 virtual samples, and 15 minutes (Intel Xeon E5-1620 3.60GHz × 1 or Intel Core i7-870 2.93GHz × 1). Control parameters and parameter correlations were clarified after the parameter estimation processes. Possible causes of the irinotecan pharmacokinetic alterations were suggested, but they were not conclusive. Conclusions Application of CNM yielded a feasible solution space by solving inverse problems for a system of ordinary differential equations (ODEs). This method may give reliable insights into other complicated phenomena with a large number of parameters to estimate, under limited information. It is also helpful for designing prospective studies for further investigation of phenomena of interest. PMID:24555857

  7. Theoretical estimation of solvation parameters and interfacial tension of clusters of potassium halides in aqueous solutions

    NASA Astrophysics Data System (ADS)

    Polak, W.; Sangwal, K.

    1996-03-01

    Using a model of the formation of ionic clusters, an analytical equation for the equilibrium concentration of solute in the solution is derived. Employing Boltzmann statistics in conjunction with the experimental values of the equilibrium concentrations of KF, KCl, KBr and KI electrolytes in aqueous solution at 25°C, the analytical equation is used to compute best values of the dielectric permittivity of the solvation shell for the K+ ion and the four anions separately. These values of the dielectric permittivity of the solvation shells are then used to compute the adsorption energy of water molecules on the {100} surface of regular clusters and their surface tension in the solution as functions of the type of salt, its concentration and cluster size. It is found that both the average adsorption energy and the interfacial tension of regular clusters composed of i ions can be approximated by a linear function of i^{-1/2} for different concentrations of all the investigated potassium halides, and that, depending on the concentration of the solutions, the surface tension of regular clusters in solutions can increase or decrease with their size.
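
    In the notation of the abstract, the reported linear dependence on i^{-1/2} can be written as (with γ∞ and a fitted, concentration-dependent constants; the same form holds for the average adsorption energy):

        \[ \gamma(i) \approx \gamma_{\infty} + a\, i^{-1/2} \]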

  8. An Improved Bandstrength Index for the CH G Band of Globular Cluster Giants

    NASA Astrophysics Data System (ADS)

    Martell, Sarah L.; Smith, Graeme H.; Briley, Michael M.

    2008-08-01

    Spectral indices are useful tools for quantifying the strengths of features in moderate-resolution spectra and relating them to intrinsic stellar parameters. This paper focuses on the 4300 Å CH G-band, a classic example of a feature interpreted through use of spectral indices. G-band index definitions, as applied to globular clusters of different metallicity, abound in the literature, and transformations between the various systems, or comparisons between different authors' work, are difficult and not always useful. We present a method for formulating an optimized G-band index, using a large grid of synthetic spectra. To make our new index a reliable measure of carbon abundance, we minimize its dependence on [N/Fe] and simultaneously maximize its sensitivity to [C/Fe]. We present a definition for the new index S2(CH), along with estimates of the errors inherent in using it for [C/Fe] determination, and conclude that it is valid for use with spectra of bright globular cluster red giants over a large range in [Fe/H], [C/Fe], and [N/Fe].

  9. Estimating Daytime Ecosystem Respiration to Improve Estimates of Gross Primary Production of a Temperate Forest

    PubMed Central

    Sun, Jinwei; Wu, Jiabing; Guan, Dexin; Yao, Fuqi; Yuan, Fenghui; Wang, Anzhi; Jin, Changjie

    2014-01-01

    Leaf respiration is an important component of carbon exchange in terrestrial ecosystems, and estimates of leaf respiration directly affect the accuracy of ecosystem carbon budgets. Leaf respiration is inhibited by light; therefore, gross primary production (GPP) will be overestimated if the reduction in leaf respiration by light is ignored. However, few studies have quantified GPP overestimation with respect to the degree of light inhibition in forest ecosystems. To determine the effect of light inhibition of leaf respiration on GPP estimation, we assessed the variation in leaf respiration of seedlings of the dominant tree species in an old mixed temperate forest with different photosynthetically active radiation levels using the Laisk method. Canopy respiration was estimated by combining the effect of light inhibition on leaf respiration of these species with within-canopy radiation. Leaf respiration decreased exponentially with an increase in light intensity. Canopy respiration and GPP were overestimated by approximately 20.4% and 4.6%, respectively, when leaf respiration reduction in light was ignored compared with the values obtained when light inhibition of leaf respiration was considered. This study indicates that accurate estimates of daytime ecosystem respiration are needed for the accurate evaluation of carbon budgets in temperate forests. In addition, this study provides a valuable approach to accurately estimate GPP by considering leaf respiration reduction in light in other ecosystems. PMID:25419844
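
    The propagation the abstract describes follows from the standard flux-partitioning identity (NEE negative for net uptake),

        \[ GPP = R_{eco} - NEE, \]

    so any overestimate of daytime ecosystem respiration R_eco carries one-to-one into the GPP estimate.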

  10. RSQRT: AN HEURISTIC FOR ESTIMATING THE NUMBER OF CLUSTERS TO REPORT.

    PubMed

    Carlis, John; Bruso, Kelsey

    2012-03-01

    Clustering can be a valuable tool for analyzing large datasets, such as in e-commerce applications. Anyone who clusters must choose how many item clusters, K, to report. Unfortunately, one must guess at K or some related parameter. Elsewhere we introduced a strongly-supported heuristic, RSQRT, which predicts K as a function of the attribute or item count, depending on attribute scales. We conducted a second analysis where we sought confirmation of the heuristic, analyzing data sets from the UCI machine learning benchmark repository. For the 25 studies where sufficient detail was available, we again found strong support. Also, in a side-by-side comparison of 28 studies, the RSQRT-predicted K and the Bayesian information criterion (BIC)-predicted K are the same. RSQRT has a lower cost of O(log log n), versus O(n²) for BIC, and is more widely applicable. Using RSQRT prospectively could be much better than merely guessing. PMID:22773923
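
    The exact RSQRT definition is not reproduced in this record; as a stand-in, the sketch below uses the classic square-root rule of thumb K ≈ sqrt(n/2) and contrasts it with a crude BIC-style search over K, mirroring the comparison the abstract reports. The data and the BIC approximation are both illustrative.

        # Square-root rule of thumb vs. a crude BIC search (synthetic data).
        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(c, 0.3, size=(50, 2)) for c in (0, 3, 6)])

        k_rule = int(round(np.sqrt(len(X) / 2)))   # trivial once n is known

        def bic_for_k(k):
            # Spherical-Gaussian BIC: log-likelihood minus a parameter penalty.
            km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
            n, d = X.shape
            var = km.inertia_ / (n - k)
            loglik = -0.5 * n * d * np.log(2 * np.pi * var) - 0.5 * km.inertia_ / var
            return loglik - 0.5 * np.log(n) * (k * d + k)

        k_bic = max(range(1, 10), key=bic_for_k)
        print(f"rule-of-thumb K = {k_rule}, BIC K = {k_bic}")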

  11. Oxidative dehydrogenation of cyclohexene on size selected subnanometer cobalt clusters: improved catalytic performance via evolution of cluster-assembled nanostructures.

    PubMed

    Lee, Sungsik; Di Vece, Marcel; Lee, Byeongdu; Seifert, Sönke; Winans, Randall E; Vajda, Stefan

    2012-07-14

    The catalytic activity of oxide-supported metal nanoclusters strongly depends on their size and support. In this study, the origin of morphology transformation and chemical state changes during the oxidative dehydrogenation of cyclohexene was investigated in terms of metal-support interactions. Model catalyst systems were prepared by depositing size-selected subnanometer Co27±4 clusters on various metal oxide supports (Al2O3, ZnO, TiO2 and MgO). The oxidation state and reactivity of the supported cobalt clusters were investigated by temperature-programmed reaction (TPRx) and in situ grazing-incidence X-ray absorption (GIXAS) during oxidative dehydrogenation of cyclohexene, while the sintering resistance was monitored with grazing-incidence small-angle X-ray scattering (GISAXS). The activity and selectivity of the cobalt clusters show a strong dependence on the support. GIXAS reveals that the metal-support interaction plays a key role in the reaction. The most pronounced support effect is observed for MgO, where a nanoassembly that dynamically evolves in activity, composition and size is formed from the subnanometer cobalt clusters during the course of the reaction. PMID:22419008

  12. [An improved motion estimation of medical image series via wavelet transform].

    PubMed

    Zhang, Ying; Rao, Nini; Wang, Gang

    2006-10-01

    The compression of medical image series is very important in telemedicine, and motion estimation plays a key role in video sequence compression. In this paper, an improved square-diamond search (SDS) algorithm is proposed for the motion estimation of medical image series. The improved SDS algorithm reduces the number of searched points, and is applied in the wavelet transform domain to estimate the motion of medical image series. A simulation experiment on digital subtraction angiography (DSA) was performed. The experimental results show that the accuracy of the algorithm is higher than that of other algorithms for the motion estimation of medical image series. PMID:17121333

  13. Covariance specification and estimation to improve top-down Green House Gas emission estimates

    NASA Astrophysics Data System (ADS)

    Ghosh, S.; Lopez-Coto, I.; Prasad, K.; Whetstone, J. R.

    2015-12-01

    The National Institute of Standards and Technology (NIST) operates the North-East Corridor (NEC) project and the Indianapolis Flux Experiment (INFLUX) in order to develop measurement methods to quantify sources of Greenhouse Gas (GHG) emissions, as well as their uncertainties, in urban domains using a top-down inversion method. Top-down inversion updates prior knowledge using observations in a Bayesian way. One primary consideration in a Bayesian inversion framework is the covariance structure of (1) the emission prior residuals and (2) the observation residuals (i.e. the difference between observations and model-predicted observations). These covariance matrices are respectively referred to as the prior covariance matrix and the model-data mismatch covariance matrix. It is known that the choice of these covariances can have a large effect on estimates. The main objective of this work is to determine the impact of different covariance models on inversion estimates and their associated uncertainties in urban domains. We use a pseudo-data Bayesian inversion framework using footprints (i.e. sensitivities of tower measurements of GHGs to surface emissions) and emission priors (based on the Hestia project to quantify fossil-fuel emissions) to estimate posterior emissions using different covariance schemes. The posterior emission estimates and uncertainties are compared to the hypothetical truth. We find that, if we correctly specify spatial variability and spatio-temporal variability in the prior and model-data mismatch covariances respectively, then we can compute more accurate posterior estimates. We discuss a few covariance models to introduce space-time interacting mismatches, along with estimation of the involved parameters. We then compare several candidate prior spatial covariance models from the Matérn covariance class and estimate their parameters with specified mismatches. We find that best-fitted prior covariances are not always best in recovering the truth. To achieve
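
    A minimal sketch of assembling one candidate prior covariance of the kind compared above, using a Matérn correlation over emission grid cells; the grid, variance, length scale, and smoothness value are assumptions, with sklearn's Matern kernel supplying the correlation function.

        # Matern-class prior covariance over a small emission grid (values assumed).
        import numpy as np
        from sklearn.gaussian_process.kernels import Matern

        xx, yy = np.meshgrid(np.arange(10.0), np.arange(10.0))
        cells = np.column_stack([xx.ravel(), yy.ravel()])   # 100 grid-cell centers

        sigma2 = 1.0                                 # prior residual variance
        kernel = Matern(length_scale=3.0, nu=1.5)    # nu sets smoothness
        S_prior = sigma2 * kernel(cells)             # 100 x 100 covariance

        # Sanity checks: symmetric and (numerically) positive semi-definite.
        assert np.allclose(S_prior, S_prior.T)
        print(f"smallest eigenvalue: {np.linalg.eigvalsh(S_prior).min():.2e}")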

  14. Estimating Missing Features to Improve Multimedia Information Retrieval

    SciTech Connect

    Bagherjeiran, A; Love, N S; Kamath, C

    2006-09-28

    Retrieval in a multimedia database usually involves combining information from different modalities of data, such as text and images. However, all modalities of the data may not be available to form the query. The retrieval results from such a partial query are often less than satisfactory. In this paper, we present an approach to complete a partial query by estimating the missing features in the query. Our experiments with a database of images and their associated captions show that, with an initial text-only query, our completion method has similar performance to a full query with both image and text features. In addition, when we use relevance feedback, our approach outperforms the results obtained using a full query.

  15. Does Integrating Family Planning into HIV Services Improve Gender Equitable Attitudes? Results from a Cluster Randomized Trial in Nyanza, Kenya.

    PubMed

    Newmann, Sara J; Rocca, Corinne H; Zakaras, Jennifer M; Onono, Maricianah; Bukusi, Elizabeth A; Grossman, Daniel; Cohen, Craig R

    2016-09-01

    This study investigated whether integrating family planning (FP) services into HIV care was associated with gender equitable attitudes among HIV-positive adults in western Kenya. Surveys were conducted with 480 women and 480 men obtaining HIV services from 18 clinics 1 year after the sites were randomized to integrated FP/HIV services (N = 12) or standard referral for FP (N = 6). We used multivariable regression, with generalized estimating equations to account for clustering, to assess whether gender attitudes (range 0-12) were associated with integrated care and with contraceptive use. Men at intervention sites had stronger gender equitable attitudes than those at control sites (adjusted mean difference in scores = 0.89, 95 % CI 0.03-1.74). Among women, attitudes did not differ by study arm. Gender equitable attitudes were not associated with contraceptive use among men (AOR = 1.06, 95 % CI 0.93-1.21) or women (AOR = 1.03, 95 % CI 0.94-1.13). Further work is needed to understand how integrating FP into HIV care affects gender relations, and how improved gender equity among men might be leveraged to improve contraceptive use and other reproductive health outcomes. PMID:26837632
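
    A minimal sketch of the analysis style described: a binary outcome regressed on a predictor with generalized estimating equations and an exchangeable working correlation to absorb within-clinic clustering. The data frame and column names are hypothetical.

        # GEE with exchangeable correlation for clinic-clustered binary data.
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        df = pd.DataFrame({
            "contraceptive_use": [1, 0, 1, 1, 0, 1, 0, 0],
            "gender_score":      [8, 6, 9, 5, 4, 7, 8, 3],
            "clinic":            [1, 1, 2, 2, 3, 3, 4, 4],
        })

        model = smf.gee("contraceptive_use ~ gender_score", groups="clinic",
                        data=df, family=sm.families.Binomial(),
                        cov_struct=sm.cov_struct.Exchangeable())
        print(model.fit().summary())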

  16. Strategies for Improved CALIPSO Aerosol Optical Depth Estimates

    NASA Technical Reports Server (NTRS)

    Vaughan, Mark A.; Kuehn, Ralph E.; Tackett, Jason L.; Rogers, Raymond R.; Liu, Zhaoyan; Omar, A.; Getzewich, Brian J.; Powell, Kathleen A.; Hu, Yongxiang; Young, Stuart A.; Avery, Melody A.; Winker, David M.; Trepte, Charles R.

    2010-01-01

    In the spring of 2010, the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) project will be releasing version 3 of its level 2 data products. In this paper we describe several changes to the algorithms and code that yield substantial improvements in CALIPSO's retrieval of aerosol optical depths (AOD). Among these are a retooled cloud-clearing procedure and a new approach to determining the base altitudes of aerosol layers in the planetary boundary layer (PBL). The results derived from these modifications are illustrated using case studies prepared using a late beta version of the level 2 version 3 processing code.

  17. Effectiveness of Improvement Plans in Primary Care Practice Accreditation: A Clustered Randomized Trial

    PubMed Central

    Nouwens, Elvira; van Lieshout, Jan; Bouma, Margriet; Braspenning, Jozé; Wensing, Michel

    2014-01-01

    Background: Accreditation of healthcare organizations is a widely used method to assess and improve quality of healthcare. Our aim was to determine the effectiveness of improvement plans in practice accreditation of primary care practices, focusing on cardiovascular risk management (CVRM). Method: A two-arm cluster randomized controlled trial with a block design was conducted with measurements at baseline and follow-up. Primary care practices allocated to the intervention group (n = 22) were instructed to focus improvement plans during the intervention period on CVRM, while practices in the control group (n = 23) could focus on any domain except CVRM and diabetes mellitus. Primary outcomes were systolic blood pressure <140 mmHg, LDL cholesterol <2.5 mmol/l and prescription of antiplatelet drugs. Secondary outcomes were 17 indicators of CVRM and physicians' perceived goal attainment for the chosen improvement project. Results: No effect was found on the primary outcomes. Blood pressure targets were reached in 39.8% of patients in the intervention and 38.7% of patients in the control group; cholesterol target levels were reached in 44.5% and 49.0%, respectively; antiplatelet drugs were prescribed in 82.7% in both groups. Six secondary outcomes improved: smoking status, exercise control, diet control, registration of alcohol intake, measurement of waist circumference, and fasting glucose. Participants' perceived goal attainment was high in both arms: mean scores of 7.9 and 8.2 on the 10-point scale. Conclusions: The focus of improvement plans on CVRM in the practice accreditation program led to some improvements of CVRM, but not on the primary outcomes. ClinicalTrials.gov NCT00791362 PMID:25463149

  18. Application of the 2013 Wilson-Devinney Program’s Direct Distance Estimation procedure and enhanced spot modeling capability to eclipsing binaries in star clusters

    NASA Astrophysics Data System (ADS)

    Milone, Eugene F.; Schiller, Stephen J.

    2014-06-01

    A paradigm method to calibrate a range of standard candles by means of well-calibrated photometry of eclipsing binaries in star clusters is the Direct Distance Estimation (DDE) procedure, contained in the 2010 and 2013 versions of the Wilson-Devinney light-curve modeling program. In particular, we are re-examining systems previously studied in our Binaries-in-Clusters program and analyzed with earlier versions of the Wilson-Devinney program. Earlier we reported on our use of the 2010 version of this program, which incorporates the DDE procedure to estimate the distance to an eclipsing system directly, as a system parameter, and is thus dependent on the data and analysis model alone. As such, the derived distance is accorded a standard error, independent of any additional assumptions or approximations that such analyses conventionally require. Additionally we have now made use of the 2013 version, which introduces temporal evolution of spots, an important improvement for systems containing variable active regions, as is the case for the systems we are studying currently, namely HD 27130 in the Hyades and DS And in NGC 752. Our work provides some constraints on the effects of spot treatment on distance determination of active systems.

  19. Improving a regional model using reduced complexity and parameter estimation

    USGS Publications Warehouse

    Kelson, Victor A.; Hunt, Randall J.; Haitjema, Henk M.

    2002-01-01

    The availability of powerful desktop computers and graphical user interfaces for ground water flow models makes possible the construction of ever more complex models. A proposed copper-zinc sulfide mine in northern Wisconsin offers a unique case in which the same hydrologic system has been modeled using a variety of techniques covering a wide range of sophistication and complexity. Early in the permitting process, simple numerical models were used to evaluate the necessary amount of water to be pumped from the mine, reductions in streamflow, and the drawdowns in the regional aquifer. More complex models have subsequently been used in an attempt to refine the predictions. Even after so much modeling effort, questions regarding the accuracy and reliability of the predictions remain. We have performed a new analysis of the proposed mine using the two-dimensional analytic element code GFLOW coupled with the nonlinear parameter estimation code UCODE. The new model is parsimonious, containing fewer than 10 parameters, and covers a region several times larger in areal extent than any of the previous models. The model demonstrates the suitability of analytic element codes for use with parameter estimation codes. The simplified model results are similar to the more complex models; predicted mine inflows and UCODE-derived 95% confidence intervals are consistent with the previous predictions. More important, the large areal extent of the model allowed us to examine hydrological features not included in the previous models, resulting in new insights about the effects that far-field boundary conditions can have on near-field model calibration and parameterization. In this case, the addition of surface water runoff into a lake in the headwaters of a stream while holding recharge constant moved a regional ground watershed divide and resulted in some of the added water being captured by the adjoining basin. Finally, a simple analytical solution was used to clarify the GFLOW model

  20. An Adaptive Displacement Estimation Algorithm for Improved Reconstruction of Thermal Strain

    PubMed Central

    Ding, Xuan; Dutta, Debaditya; Mahmoud, Ahmed M.; Tillman, Bryan; Leers, Steven A.; Kim, Kang

    2014-01-01

    Thermal strain imaging (TSI) can be used to differentiate between lipid and water-based tissues in atherosclerotic arteries. However, detecting small lipid pools in vivo requires accurate and robust displacement estimation over a wide range of displacement magnitudes. Phase-shift estimators such as Loupas’ estimator and time-shift estimators like normalized cross-correlation (NXcorr) are commonly used to track tissue displacements. However, Loupas’ estimator is limited by phase-wrapping and NXcorr performs poorly when the signal-to-noise ratio (SNR) is low. In this paper, we present an adaptive displacement estimation algorithm that combines both Loupas’ estimator and NXcorr. We evaluated this algorithm using computer simulations and an ex-vivo human tissue sample. Using 1-D simulation studies, we showed that when the displacement magnitude induced by thermal strain was >λ/8 and the electronic system SNR was >25.5 dB, the NXcorr displacement estimate was less biased than the estimate found using Loupas’ estimator. On the other hand, when the displacement magnitude was ≤λ/4 and the electronic system SNR was ≤25.5 dB, Loupas’ estimator had less variance than NXcorr. We used these findings to design an adaptive displacement estimation algorithm. Computer simulations of TSI using Field II showed that the adaptive displacement estimator was less biased than either Loupas’ estimator or NXcorr. Strain reconstructed from the adaptive displacement estimates improved the strain SNR by 43.7–350% and the spatial accuracy by 1.2–23.0% (p < 0.001). An ex-vivo human tissue study provided results that were comparable to computer simulations. The results of this study showed that a novel displacement estimation algorithm, which combines two different displacement estimators, yielded improved displacement estimation and results in improved strain reconstruction. PMID:25585398
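
    A minimal sketch of the switching logic described above, with simplified stand-ins for the two estimators: the lag of the peak of a normalized cross-correlation for the time-shift estimate, and the phase of the zero-lag complex correlation for the phase-shift estimate. The λ/8 and 25.5 dB thresholds are the values quoted; everything else is illustrative.

        # Adaptive choice between a time-shift and a phase-shift delay estimator.
        import numpy as np
        from scipy.signal import hilbert

        def nxcorr_estimate(a, b, fs):
            # Lag (in seconds) of the normalized cross-correlation peak.
            a = (a - a.mean()) / a.std()
            b = (b - b.mean()) / b.std()
            xc = np.correlate(b, a, mode="full")
            return (np.argmax(xc) - (len(a) - 1)) / fs

        def phase_shift_estimate(a, b, fc):
            # Simplified Loupas-style estimate: zero-lag correlation phase,
            # converted to a delay; wraps beyond +/- half a period.
            phase = np.angle(np.vdot(hilbert(b), hilbert(a)))
            return phase / (2 * np.pi * fc)

        def adaptive_estimate(a, b, fs, fc, expected_disp, wavelength, snr_db):
            if expected_disp > wavelength / 8 and snr_db > 25.5:
                return nxcorr_estimate(a, b, fs)     # less biased regime
            return phase_shift_estimate(a, b, fc)    # lower-variance regime

        fs, fc = 40e6, 5e6
        t = np.arange(0, 2e-6, 1 / fs)
        a = np.sin(2 * np.pi * fc * t)
        b = np.roll(a, 3)                            # 3-sample shift = 75 ns
        print(adaptive_estimate(a, b, fs, fc, 7.5e-8, 1 / fc, snr_db=30.0))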

  1. Sample Size Estimation in Cluster Randomized Educational Trials: An Empirical Bayes Approach

    ERIC Educational Resources Information Center

    Rotondi, Michael A.; Donner, Allan

    2009-01-01

    The educational field has now accumulated an extensive literature reporting on values of the intraclass correlation coefficient, a parameter essential to determining the required size of a planned cluster randomized trial. We propose here a simple simulation-based approach including all relevant information that can facilitate this task. An…
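
    The intraclass correlation enters the planning calculation through the standard design effect, 1 + (m - 1)ρ for clusters of size m; a minimal sketch with illustrative numbers:

        # Inflate an individually randomized sample size by the design effect.
        def clusters_needed(n_individual, cluster_size, icc):
            deff = 1 + (cluster_size - 1) * icc        # design effect
            n_total = n_individual * deff              # clustered sample size
            return n_total, n_total / cluster_size     # pupils, clusters per arm

        total, k = clusters_needed(n_individual=400, cluster_size=25, icc=0.2)
        print(f"about {total:.0f} pupils in {k:.0f} classrooms per arm")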

  2. An Improved Source-Scanning Algorithm for Locating Earthquake Clusters or Aftershock Sequences

    NASA Astrophysics Data System (ADS)

    Liao, Y.; Kao, H.; Hsu, S.

    2010-12-01

    The Source-scanning Algorithm (SSA) was originally introduced in 2004 to locate non-volcanic tremors. Its application was later expanded to the identification of earthquake rupture planes and the near-real-time detection and monitoring of landslides and mud/debris flows. In this study, we further improve SSA for the purpose of locating earthquake clusters or aftershock sequences when only a limited number of waveform observations are available. The main improvements include the application of a ground motion analyzer to separate P and S waves, the automatic determination of resolution based on the grid size and time step of the scanning process, and a modified brightness function to utilize constraints from multiple phases. Specifically, the improved SSA (named ISSA) addresses two major issues related to locating earthquake clusters/aftershocks: the massive amount of both time and labour needed to locate a large number of seismic events manually, and the difficulty of efficiently and correctly identifying the same phase across the entire recording array when multiple events occur closely in time and space. To test the robustness of ISSA, we generate synthetic waveforms consisting of three separated events such that individual P and S phases arrive at different stations in different order, thus making correct phase picking nearly impossible. Using these very complicated waveforms as the input, the ISSA scans all model space for possible combinations of time and location for the existence of seismic sources. The scanning results successfully associate the various phases from each event at all stations, and correctly recover the input. To further demonstrate the advantage of ISSA, we apply it to the waveform data collected by a temporary OBS array for the aftershock sequence of an offshore earthquake southwest of Taiwan. The overall signal-to-noise ratio is inadequate for locating small events; and the precise arrival times of P and S phases are difficult to

  3. Ionospheric perturbation degree estimates for improving GNSS applications

    NASA Astrophysics Data System (ADS)

    Jakowski, Norbert; Mainul Hoque, M.; Wilken, Volker; Berdermann, Jens; Hlubek, Nikolai

    The ionosphere can adversely affect the accuracy, continuity, availability, and integrity of modern Global Navigation Satellite Systems (GNSS) in different ways. Hence, reliable information on key parameters describing the perturbation degree of the ionosphere is helpful for estimating the potential degradation of the performance of these systems. So, to guarantee the required safety level in aviation, Ground Based Augmentation Systems (GBAS) and Satellite Based Augmentation Systems (SBAS) have been established for detecting and mitigating ionospheric threats, in particular those due to ionospheric gradients. The paper reviews various attempts and capabilities to characterize the perturbation degree of the ionosphere currently being used in precise positioning and safety-of-life applications. Continuity and availability of signals are mainly impacted by amplitude and phase scintillations, characterized by indices such as S4 or phase noise. To characterize medium- and large-scale ionospheric perturbations that may seriously affect the accuracy and integrity of GNSS, the use of an internationally standardized Disturbance Ionosphere Index (DIX) is recommended. The definition of such a DIX must take into account practical needs, and it should be an objective measure of ionospheric conditions that is easy and reproducible to compute. A preliminary DIX approach is presented and discussed. Such a robust and easily adaptable index has great potential for use in operational ionospheric weather services and GNSS augmentation systems.

  4. Improving discharge estimates from routine river flow monitoring in Sweden

    NASA Astrophysics Data System (ADS)

    Capell, Rene; Arheimer, Berit

    2016-04-01

    The Swedish Meteorological and Hydrological Institute (SMHI) maintains a permanent river gauging network for national hydrological monitoring which includes 263 gauging stations in Sweden. At all these stations, water levels are measured continuously, and discharges are computed through rating curves. The network represents a wide range of environmental settings, gauging measurement types, and gauging frequencies. Gauging frequencies are typically low compared with river gauges in more research-oriented settings, and thus uncertainties in discharges, particularly extremes, can be large. On the other hand, the gauging stations have often been in use for a very long time, with the oldest measurements dating back to 1900, and at least partly exhibit very stable conditions. Here, we show the variation in gauging stability in SMHI's gauging network in order to identify more error-prone conditions. We investigate how the current, largely subjective, way of updating rating curves influences discharge estimates, and discuss ways forward towards a more objective evaluation of both discharge uncertainty and rating-curve updating procedures.
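
    A minimal sketch of the rating-curve step mentioned above, assuming the common power-law form Q = a(h - h0)^b fitted to occasional gaugings; the stage-discharge pairs below are hypothetical.

        # Fit a power-law rating curve Q = a * (h - h0)**b (hypothetical gaugings).
        import numpy as np
        from scipy.optimize import curve_fit

        def rating(h, a, h0, b):
            return a * np.clip(h - h0, 1e-6, None) ** b

        h_obs = np.array([0.42, 0.55, 0.71, 0.93, 1.20, 1.55])  # stage, m
        q_obs = np.array([1.1, 2.0, 3.6, 6.8, 12.5, 22.0])      # discharge, m3/s

        (a, h0, b), _ = curve_fit(rating, h_obs, q_obs, p0=(5.0, 0.2, 2.0))
        print(f"Q = {a:.2f} * (h - {h0:.2f})**{b:.2f}")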

  5. An improved method for nonlinear parameter estimation: a case study of the Rössler model

    NASA Astrophysics Data System (ADS)

    He, Wen-Ping; Wang, Liu; Jiang, Yun-Di; Wan, Shi-Quan

    2015-06-01

    Parameter estimation is an important research topic in nonlinear dynamics. Based on the evolutionary algorithm (EA), Wang et al. (2014) present a new scheme for nonlinear parameter estimation and numerical tests indicate that the estimation precision is satisfactory. However, the convergence rate of the EA is relatively slow when multiple unknown parameters in a multidimensional dynamical system are estimated simultaneously. To solve this problem, an improved method for parameter estimation of nonlinear dynamical equations is provided in the present paper. The main idea of the improved scheme is to use all of the known time series for all of the components in some dynamical equations to estimate the parameters in single component one by one, instead of estimating all of the parameters in all of the components simultaneously. Thus, we can estimate all of the parameters stage by stage. The performance of the improved method was tested using a classic chaotic system—Rössler model. The numerical tests show that the amended parameter estimation scheme can greatly improve the searching efficiency and that there is a significant increase in the convergence rate of the EA, particularly for multiparameter estimation in multidimensional dynamical equations. Moreover, the results indicate that the accuracy of parameter estimation and the CPU time consumed by the presented method have no obvious dependence on the sample size.

  7. Estimating the Power Characteristics of Clusters of Large Offshore Wind Farms

    NASA Astrophysics Data System (ADS)

    Drew, D.; Barlow, J. F.; Coceal, O.; Coker, P.; Brayshaw, D.; Lenaghan, D.

    2014-12-01

    The next phase of offshore wind projects in the UK focuses on the development of very large wind farms clustered within several allocated zones. However, this change in the distribution of wind capacity brings uncertainty for the operational planning of the power system. Firstly, there are concerns that concentrating large amounts of capacity in one area could reduce some of the benefits seen by spatially dispersing the turbines, such as the smoothing of the power generation variability. Secondly, wind farms of the scale planned are likely to influence the boundary layer sufficiently to impact the performance of adjacent farms, therefore the power generation characteristics of the clusters are largely unknown. The aim of this study is to use the Weather Research and Forecasting (WRF) model to investigate the power output of a cluster of offshore wind farms for a range of extreme events, taking into account the wake effects of the individual turbines and the neighbouring farms. Each wind farm in the cluster is represented as an elevated momentum sink and a source of turbulent kinetic energy using the WRF Wind Farm Parameterization. The research focuses on the Dogger Bank zone (located in the North Sea approximately 125 km off the East coast of the UK), which could have 7.2 GW of installed capacity across six separate wind farms. For this site, a 33 year reanalysis data set (MERRA, from NASA-GMAO) has been used to identify a series of extreme event case studies. These are characterised by either periods of persistent low (or high) wind speeds, or by rapid changes in power output. The latter could be caused by small changes in the wind speed inducing large changes in power output, very high winds prompting turbine shut down, or a change in the wind direction which shifts the wake effects of the neighbouring farms in the cluster and therefore changes the wind resource available.

  8. An improved Pearson's correlation proximity-based hierarchical clustering for mining biological association between genes.

    PubMed

    Booma, P M; Prabhakaran, S; Dhanalakshmi, R

    2014-01-01

    Microarray gene expression datasets have attracted great interest among molecular biologists, statisticians, and computer scientists. Data mining that extracts the hidden and useful information from datasets fails to identify the most significant biological associations between genes. A heuristic search for standard biological processes measures only the gene expression level, threshold, and response time. Heuristic search identifies and mines the best biological solution, but the association process is not efficiently addressed. To monitor higher rates of expression levels between genes, a hierarchical clustering model is proposed, in which the biological association between genes is measured simultaneously using a proximity measure of improved Pearson's correlation (PCPHC). Additionally, the Seed Augment algorithm adopts average linkage methods on rows and columns in order to expand a seed PCPHC model into a maximal global PCPHC (GL-PCPHC) model and to identify associations between the clusters. Moreover, GL-PCPHC applies a pattern-growing method to mine the PCPHC patterns. Compared to existing gene expression analysis, the PCPHC model achieves better performance. Experimental evaluations are conducted for the GL-PCPHC model with standard benchmark gene expression datasets extracted from the UCI repository and the GenBank database in terms of execution time, size of pattern, significance level, biological association efficiency, and pattern quality. PMID:25136661
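
    A minimal sketch of the core operations such a pipeline builds on: hierarchical clustering with distance = 1 - Pearson correlation and average linkage. The expression matrix is synthetic, and the paper's specific improvements to the correlation measure are not reproduced.

        # Average-linkage clustering on correlation dissimilarities (synthetic data).
        import numpy as np
        from scipy.spatial.distance import pdist
        from scipy.cluster.hierarchy import linkage, fcluster

        rng = np.random.default_rng(1)
        genes = rng.normal(size=(30, 12))        # 30 genes x 12 conditions

        d = pdist(genes, metric="correlation")   # 1 - Pearson r between profiles
        tree = linkage(d, method="average")      # average linkage, as in the paper
        print(fcluster(tree, t=4, criterion="maxclust"))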

  9. Improving modeled snow albedo estimates during the spring melt season

    NASA Astrophysics Data System (ADS)

    Malik, M. Jahanzeb; Velde, Rogier; Vekerdy, Zoltan; Su, Zhongbo

    2014-06-01

    Snow albedo influences snow-covered land energy and water budgets and is thus an important variable for energy and water flux calculations. Here, we quantify the performance of three existing snow albedo parameterizations under alpine, tundra, and prairie snow conditions when implemented in the Noah land surface model (LSM): Noah's default parameterization and the ones from the Biosphere-Atmosphere Transfer Scheme (BATS) and the Canadian Land Surface Scheme (CLASS) LSMs. The Noah LSM is forced with, and its output is evaluated using, in situ measurements from seven sites in the U.S. and France. Comparison of the snow albedo simulations with the in situ measurements reveals that the three parameterizations overestimate snow albedo during springtime. An alternative snow albedo parameterization is introduced that adopts the shape of the variogram for optically thick snowpacks and decreases the albedo further for optically thin conditions by mixing the snow with the land surface (background) albedo as a function of snow depth. In comparison with the in situ measurements, the new parameterization improves albedo simulation of the alpine and tundra snowpacks and positively impacts the simulation of snow depth, snowmelt rate, and upward shortwave radiation. An improved model performance with the variogram-shaped parameterization cannot, however, be unambiguously detected for prairie snowpacks, which may be attributed to uncertainties associated with the simulation of snow density. An assessment of the model performance for the Upper Colorado River Basin highlights that with the variogram-shaped parameterization Noah simulates more evapotranspiration and larger runoff peaks in spring, whereas the summer runoff is lower.

  10. Community Mobilization in Mumbai Slums to Improve Perinatal Care and Outcomes: A Cluster Randomized Controlled Trial

    PubMed Central

    More, Neena Shah; Bapat, Ujwala; Das, Sushmita; Alcock, Glyn; Patil, Sarita; Porel, Maya; Vaidya, Leena; Fernandez, Armida; Joshi, Wasundhara; Osrin, David

    2012-01-01

    Introduction: Improving maternal and newborn health in low-income settings requires both health service and community action. Previous community initiatives have been predominantly rural, but India is urbanizing. While working to improve health service quality, we tested an intervention in which urban slum-dweller women's groups worked to improve local perinatal health. Methods and Findings: A cluster randomized controlled trial in 24 intervention and 24 control settlements covered a population of 283,000. In each intervention cluster, a facilitator supported women's groups through an action learning cycle in which they discussed perinatal experiences, improved their knowledge, and took local action. We monitored births, stillbirths, and neonatal deaths, and interviewed mothers at 6 weeks postpartum. The primary outcomes described perinatal care, maternal morbidity, and extended perinatal mortality. The analysis included 18,197 births over 3 years from 2006 to 2009. We found no differences between trial arms in uptake of antenatal care, reported work, rest, and diet in later pregnancy, institutional delivery, early and exclusive breastfeeding, or care-seeking. The stillbirth rate was non-significantly lower in the intervention arm (odds ratio 0.86, 95% CI 0.60–1.22), and the neonatal mortality rate higher (1.48, 1.06–2.08). The extended perinatal mortality rate did not differ between arms (1.19, 0.90–1.57). We have no evidence that these differences could be explained by the intervention. Conclusions: Facilitating urban community groups was feasible, and there was evidence of behaviour change, but we did not see population-level effects on health care or mortality. In cities with multiple sources of health care, but inequitable access to services, community mobilization should be integrated with attempts to deliver services for the poorest and most vulnerable, and with initiatives to improve quality of care in both public and private sectors. Trial registration

  11. Improved Facial-Feature Detection for AVSP via Unsupervised Clustering and Discriminant Analysis

    NASA Astrophysics Data System (ADS)

    Lucey, Simon; Sridharan, Sridha; Chandran, Vinod

    2003-12-01

    An integral part of any audio-visual speech processing (AVSP) system is the front-end visual system that detects facial-features (e.g., eyes and mouth) pertinent to the task of visual speech processing. The ability of this front-end system to not only locate, but also give a confidence measure that the facial-feature is present in the image, directly affects the ability of any subsequent post-processing task such as speech or speaker recognition. With these issues in mind, this paper presents a framework for a facial-feature detection system suitable for use in an AVSP system, but whose basic framework is useful for any application requiring frontal facial-feature detection. A novel approach for facial-feature detection is presented, based on an appearance paradigm. This approach, based on intraclass unsupervised clustering and discriminant analysis, displays improved detection performance over conventional techniques.

  12. Improving the estimation of historical marine surface temperature changes

    NASA Astrophysics Data System (ADS)

    Carella, Giulia; Kent, Elizabeth C.; Berry, David I.

    2015-04-01

    Global Surface Temperature (GST) is one of the main indicators of climate change, and Sea Surface Temperature (SST) forms its marine component. Historical SST observations extend back more than 150 years and are used for monitoring climate change and variability over the oceans, for validation of climate models, and to provide boundary conditions for atmospheric models. SST observations from ships form our longest instrumental record of surface marine temperature change, but over the years different methods of measuring SST have been used, each of which potentially has different biases. Changes in technology and observational practice can be rapid and undocumented: generally, it is assumed that almost all SST data collected before the 1940s were derived from bucket samples, although the measurement practice is almost never known in detail. Especially prior to the 1940s, when bucket measurements prevailed, SST biases are expected to be large, comparable to the climatic increase in GST over the past two centuries. Currently, SST datasets use bias models representing only large-scale effects, based on 5° area-average monthly climatological environmental conditions or on large-scale variations in the air-sea temperature difference, which is also uncertain. There are major differences between the bias adjustment fields used to date, which limits our confidence in global and regional estimates of historical SST as well as in long-term trends, which are expected to be controlled by uncertainty in systematic biases. The main barrier to finer-scale adjustments of SST is that information about measurement methods and ambient environmental conditions is usually insufficient. As a result, many reports cannot be confidently assigned to a particular vessel and hence, cautiously, to the same measurement methodology. Here we present a new approach to the quantification of SST biases that can be applied on a ship-by-ship basis. These ship-dependent adjustments are expected to

  13. Novel angle estimation for bistatic MIMO radar using an improved MUSIC

    NASA Astrophysics Data System (ADS)

    Li, Jianfeng; Zhang, Xiaofei; Chen, Han

    2014-09-01

    In this article, we study the problem of angle estimation for bistatic multiple-input multiple-output (MIMO) radar and propose an improved multiple signal classification (MUSIC) algorithm for joint direction of departure (DOD) and direction of arrival (DOA) estimation. The proposed algorithm obtains initial estimations of angles obtained from the signal subspace and uses the local one-dimensional peak searches to achieve the joint estimations of DOD and DOA. The angle estimation performance of the proposed algorithm is better than that of estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithm, and is almost the same as that of two-dimensional MUSIC. Furthermore, the proposed algorithm can be suitable for irregular array geometry, obtain automatically paired DOD and DOA estimations, and avoid two-dimensional peak searching. The simulation results verify the effectiveness and improvement of the algorithm.
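
    For orientation, a minimal sketch of the standard 1-D MUSIC pseudospectrum that such algorithms refine; here the signal-subspace initialization and local searches are replaced by a plain grid scan, one source and a uniform linear array are simulated, and the bistatic DOD/DOA pairing is not reproduced.

        # Standard 1-D MUSIC on a simulated 8-element uniform linear array.
        import numpy as np

        M, d, wavelen, snap = 8, 0.5, 1.0, 200   # elements, half-wavelength spacing, snapshots
        theta_true = 20.0                        # source direction, degrees

        def steering(theta_deg):
            k = 2 * np.pi / wavelen
            return np.exp(1j * k * d * np.arange(M) * np.sin(np.radians(theta_deg)))

        rng = np.random.default_rng(0)
        s = rng.normal(size=snap) + 1j * rng.normal(size=snap)
        X = np.outer(steering(theta_true), s)
        X += 0.1 * (rng.normal(size=X.shape) + 1j * rng.normal(size=X.shape))

        R = X @ X.conj().T / snap                # sample covariance
        _, vecs = np.linalg.eigh(R)
        En = vecs[:, :-1]                        # noise subspace for one source

        grid = np.arange(-90.0, 90.0, 0.1)
        p = [1 / np.linalg.norm(En.conj().T @ steering(t)) ** 2 for t in grid]
        print(f"MUSIC peak at {grid[int(np.argmax(p))]:.1f} degrees")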

  14. Estimating ages of open star clusters using stellar luminosity and colour

    NASA Astrophysics Data System (ADS)

    Williams, Chris

    2004-12-01

    This paper was designed for the 'armchair' astronomer who is interested in 'amateur research' utilising the vast number of images placed on the Internet from various places. Open star clusters are groups of stars that are physically related, bound by mutual gravitational attraction, populate a limited region of space, and are all roughly at the same distance from us. We believe they originate from large cosmic gas and dust clouds within the Milky Way, and the process of formation takes only a short time, so all members of a cluster are of similar age. Also, as all the stars in a cluster formed from the same cloud, they are all of similar (initial) chemical composition. This 'family' of stars may be of similar birth age, but their evolutionary ages differ owing to the variation in their masses. High-mass stars evolve much more quickly than low-mass stars: they consume their fuel faster, have higher luminosities, and die in a very short time (astronomically speaking) compared to a fractional-solar-mass star.

  15. Annealing a Follow-up Program: Improvement of the Dark Energy Figure of Merit for Optical Galaxy Cluster Surveys

    SciTech Connect

    Wu, Hao-Yi; Rozo, Eduardo; Wechsler, Risa H.; /KIPAC, Menlo Park /SLAC /CCAPP, Columbus /KICP, Chicago /KIPAC, Menlo Park /SLAC

    2010-06-02

    The precision of cosmological parameters derived from galaxy cluster surveys is limited by uncertainty in relating observable signals to cluster mass. We demonstrate that a small mass-calibration follow-up program can significantly reduce this uncertainty and improve parameter constraints, particularly when the follow-up targets are judiciously chosen. To this end, we apply a simulated annealing algorithm to maximize the dark energy information at fixed observational cost, and find that optimal follow-up strategies can reduce the observational cost required to achieve a specified precision by up to an order of magnitude. Considering clusters selected from optical imaging in the Dark Energy Survey, we find that approximately 200 low-redshift X-ray clusters or massive Sunyaev-Zel'dovich clusters can improve the dark energy figure of merit by 50%, provided that the follow-up mass measurements involve no systematic error. In practice, the actual improvement depends on (1) the uncertainty in the systematic error in follow-up mass measurements, which needs to be controlled at the 5% level to avoid severe degradation of the results; and (2) the scatter in the optical richness-mass distribution, which needs to be made as tight as possible to improve the efficacy of follow-up observations.

  16. Precision Photometric Redshifts Of Clusters

    NASA Astrophysics Data System (ADS)

    Holden, L.; Annis, J.

    2006-06-01

    Clusters of galaxies provide a means to achieve more precise photometric redshifts than are achievable using individual galaxies, simply because of the number of galaxies available in clusters. Here we examine the expectation that one can achieve a root-N improvement using the N galaxies in a cluster. We extracted 28,000 clusters from a maxBCG SDSS cluster catalog and used SDSS DR4 spectra to find spectroscopic redshifts for the clusters. We examined both using the brightest cluster galaxy (BCG) redshift as the proxy for the cluster and using the mean of a collection of galaxies within a given angular diameter and redshift (about the cluster photo-z) range. We find that the BCG provides a better estimate of the cluster redshift, to be understood in the context of a handful of spectra in the neighborhood of the cluster. We find that the cluster photo-z has an approximate root-N scaling behavior, with the normalization for maxBCG techniques being 0.07. We predict what "afterburner photo-z" techniques, which use individual galaxy photo-z's good to 0.03-0.05, can achieve for cluster catalogs and for cluster cosmology.
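
    A minimal sketch of the root-N expectation tested above: averaging N member photo-z's with per-galaxy scatter σ shrinks the cluster photo-z error toward σ/√N. The 0.07 value echoes the normalization quoted; the member count is illustrative.

        # Monte Carlo check of sigma / sqrt(N) scaling for cluster photo-z's.
        import numpy as np

        rng = np.random.default_rng(2)
        z_true, sigma_gal, n_members = 0.25, 0.07, 40

        means = rng.normal(z_true, sigma_gal, size=(10000, n_members)).mean(axis=1)
        print(f"empirical scatter: {means.std():.4f}")
        print(f"root-N prediction: {sigma_gal / np.sqrt(n_members):.4f}")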

  17. Spatial and temporal estimation of soil loss for the sustainable management of a wet semi-arid watershed cluster.

    PubMed

    Rejani, R; Rao, K V; Osman, M; Srinivasa Rao, Ch; Reddy, K Sammi; Chary, G R; Pushpanjali; Samuel, Josily

    2016-03-01

    The ungauged wet semi-arid watershed cluster, Seethagondi, lies in the Adilabad district of Telangana in India and is prone to severe erosion and water scarcity. Runoff and soil loss data at watershed, catchment, and field level are necessary for planning soil and water conservation interventions. In this study, an attempt was made to develop a spatial soil loss estimation model for the Seethagondi cluster using RUSLE coupled with ArcGIS, and the model was used to estimate the soil loss spatially and temporally. The daily rainfall data of Aphrodite for the period from 1951 to 2007 were used, and the annual rainfall varied from 508 to 1351 mm with a mean annual rainfall of 950 mm and a mean erosivity of 6789 MJ mm ha⁻¹ h⁻¹ year⁻¹. Considerable variation in land use and land cover, especially in crop land and fallow land, was observed during normal and drought years, and corresponding variation in the erosivity, C factor, and soil loss was also noted. The mean value of the C factor derived from NDVI for crop land was 0.42 and 0.22 in normal and drought years, respectively. The topography is undulating and a major portion of the cluster has slope less than 10°, and 85.3% of the cluster has soil loss below 20 t ha⁻¹ year⁻¹. The soil loss from crop land varied from 2.9 to 3.6 t ha⁻¹ year⁻¹ in low rainfall years to 31.8 to 34.7 t ha⁻¹ year⁻¹ in high rainfall years, with a mean annual soil loss of 12.2 t ha⁻¹ year⁻¹. The soil loss from crop land was highest in the month of August, with a soil loss of 13.1 and 2.9 t ha⁻¹ year⁻¹ in normal and drought years, respectively. Based on the soil loss in a normal year, the interventions recommended for 85.3% of the area of the watershed include agronomic measures such as contour cultivation, graded bunds, strip cropping, mixed cropping, crop rotations, mulching, summer plowing, vegetative bunds, and agri-horticultural systems, and management practices such as broad bed furrows, raised sunken beds, and harvesting available water
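
    A minimal sketch of the cell-by-cell RUSLE computation underlying the model, A = R * K * LS * C * P; tiny synthetic arrays stand in for the raster layers handled in ArcGIS, with R and the crop-land C factor echoing the abstract and the other factors assumed.

        # RUSLE soil loss per cell: A = R * K * LS * C * P (t/ha/yr).
        import numpy as np

        R = np.full((3, 3), 6789.0)       # erosivity, MJ mm ha-1 h-1 yr-1 (abstract)
        K = np.full((3, 3), 0.003)        # soil erodibility (assumed)
        LS = np.array([[0.4, 0.8, 1.2],   # slope length-steepness factor (assumed)
                       [0.5, 1.0, 1.6],
                       [0.6, 1.3, 2.1]])
        C = np.full((3, 3), 0.42)         # crop-land C factor, normal year (abstract)
        P = np.full((3, 3), 1.0)          # support practices: none assumed

        A = R * K * LS * C * P
        print(A.round(1))                 # annual soil loss map, t/ha/yr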

  18. Direct Distance Estimation applied to Eclipsing Binaries in Star Clusters:Case Study of DS Andromedae in NGC 752

    NASA Astrophysics Data System (ADS)

    Milone, Eugene F.; Schiller, Stephen Joseph

    2015-08-01

    Eclipsing binaries (EB) with well-calibrated photometry and precisely measured double-lined radial velocities are candidate standard candles when analyzed with a version of the Wilson-Devinney (WD) light curve modeling program that includes the direct distance estimation (DDE) algorithm. In the DDE procedure, distance is determined as a system parameter, thus avoiding the assumption of stellar sphericity and yielding a well-determined standard error for distance. The method therefore provides a powerful way to calibrate the distances of other objects in any aggregate that contains suitable EB's. DDE has been successfully applied to nearby systems and to a small number of EB's in open clusters. Previously we reported on one of the systems in our Binaries-in-Clusters program, HD27130 = V818 Tau, that had been analyzed with earlier versions of the WD program (see 1987 AJ 93, 1471; 1988 AJ 95, 1466; and 1995 AJ 109, 359 for examples). Results from those early solutions were entered as starting parameters in the current work with the WD 2013 version.Here we report several series of ongoing modeling experiments on a 1.01-d period, early type EB in the intermediate age cluster NGC 752. In one series, ranges of interstellar extinction and hotter star temperature were assumed, and in another series both component temperatures were adjusted. Consistent parameter sets, including distance, confirm DDE's advantages, essentially limited only by knowledge of interstellar extinction, which is small for DS And. Uncertainties in the bandpass calibration constants (flux in standard units from a zero magnitude star) are much less important because derived distance scales (inversely) only with the calibration's square root. This work was enabled by the unstinting help of Bob Wilson. We acknowledge earlier support for the Binaries-in-Clusters program from NSERC of Canada, and the Research Grants Committee and Department of Physics & Astronomy of the University of Calgary.

  19. Toward an Accurate and Inexpensive Estimation of CCSD(T)/CBS Binding Energies of Large Water Clusters.

    PubMed

    Sahu, Nityananda; Singh, Gurmeet; Nandi, Apurba; Gadre, Shridhar R

    2016-07-21

    Owing to their steep scaling behavior, highly accurate CCSD(T) calculations, the contemporary gold standard of quantum chemistry, are prohibitively difficult for moderate- and large-sized water clusters even with high-end hardware. The molecular tailoring approach (MTA), a fragmentation-based technique, is found to be useful for enabling such high-level ab initio calculations. The present work reports the CCSD(T)-level binding energies of many low-lying isomers of large (H2O)n (n = 16, 17, and 25) clusters employing aug-cc-pVDZ and aug-cc-pVTZ basis sets within the MTA framework. Accurate estimation of the CCSD(T)-level binding energies [within 0.3 kcal/mol of the respective full calculation (FC) results] is achieved after effecting the grafting procedure, a protocol for minimizing the errors in the MTA-derived energies arising due to the approximate nature of MTA. The CCSD(T)-level grafting procedure presented here hinges upon the well-known fact that the MP2 method, which scales as O(N⁵), can be a suitable starting point for approximating the highly accurate CCSD(T) energies, which scale as O(N⁷). On account of requiring only an MP2-level FC on the entire cluster, the current methodology ultimately leads to a cost-effective solution for CCSD(T)-level accurate binding energies of large water clusters, even at the complete basis set limit, utilizing off-the-shelf hardware. PMID:27351269
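
    A minimal sketch of the grafting arithmetic as this abstract suggests it: the MTA error measured at the affordable MP2 level is used to shift the MTA CCSD(T) energy toward the full-calculation value. The energy values below are hypothetical, and the exact protocol is given in the paper.

        # Grafting: correct the MTA CCSD(T) energy with the MP2-level MTA error.
        E_mta_ccsdt = -1219.4821   # MTA CCSD(T) energy, hartree (hypothetical)
        E_mta_mp2 = -1218.9034     # MTA MP2 energy, same fragmentation (hypothetical)
        E_fc_mp2 = -1218.9101      # full-calculation MP2 energy (hypothetical)

        E_grafted = E_mta_ccsdt + (E_fc_mp2 - E_mta_mp2)
        print(f"grafted CCSD(T) estimate: {E_grafted:.4f} Eh")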

  20. Clustering methods for removing outliers from vision-based range estimates

    NASA Technical Reports Server (NTRS)

    Hussien, B.; Suorsa, R.

    1992-01-01

    The present approach to the automation of helicopter low-altitude flight uses one or more passive imaging sensors to extract environmental obstacle information; this is then processed via computer-vision techniques to yield a time-varying map of range to obstacles in the sensor's field of view along the vehicle's flight path. Attention is given to two related techniques that can eliminate outliers from a sparse range map by clustering sparse range-map information into different spatial classes; these rely on a segmented and labeled image to aid spatial classification within the image plane.

  1. Joint estimation over multiple individuals improves behavioural state inference from animal movement data

    PubMed Central

    Jonsen, Ian

    2016-01-01

    State-space models provide a powerful way to scale up inference of movement behaviours from individuals to populations when the inference is made across multiple individuals. Here, I show how a joint estimation approach that assumes individuals share identical movement parameters can lead to improved inference of behavioural states associated with different movement processes. I use simulated movement paths with known behavioural states to compare estimation error between nonhierarchical and joint estimation formulations of an otherwise identical state-space model. Behavioural state estimation error was strongly affected by the degree of similarity between movement patterns characterising the behavioural states, with less error when movements were strongly dissimilar between states. The joint estimation model improved behavioural state estimation relative to the nonhierarchical model for simulated data with heavy-tailed Argos location errors. When applied to Argos telemetry datasets from 10 Weddell seals, the nonhierarchical model estimated highly uncertain behavioural state switching probabilities for most individuals whereas the joint estimation model yielded substantially less uncertainty. The joint estimation model better resolved the behavioural state sequences across all seals. Hierarchical or joint estimation models should be the preferred choice for estimating behavioural states from animal movement data, especially when location data are error-prone. PMID:26853261

  2. Using Local Matching to Improve Estimates of Program Impact: Evidence from Project STAR

    ERIC Educational Resources Information Center

    Jones, Nathan; Steiner, Peter; Cook, Tom

    2011-01-01

    In this study the authors test whether matching using intact local groups improves causal estimates over those produced using propensity score matching at the student level. Like the recent analysis of Wilde and Hollister (2007), they draw on data from Project STAR to estimate the effect of small class sizes on student achievement. They propose a…

  3. An Investigation of Methods for Improving Estimation of Test Score Distributions.

    ERIC Educational Resources Information Center

    Hanson, Bradley A.

    Three methods of estimating test score distributions that may improve on using the observed frequencies (OBFs) as estimates of a population test score distribution are considered: the kernel method (KM); the polynomial method (PM); and the four-parameter beta binomial method (FPBBM). The assumption each method makes about the smoothness of the…

  4. "Battleship Numberline": A Digital Game for Improving Estimation Accuracy on Fraction Number Lines

    ERIC Educational Resources Information Center

    Lomas, Derek; Ching, Dixie; Stampfer, Eliane; Sandoval, Melanie; Koedinger, Ken

    2011-01-01

    Given the strong relationship between number line estimation accuracy and math achievement, might a computer-based number line game help improve math achievement? In one study by Rittle-Johnson, Siegler and Alibali (2001), a simple digital game called "Catch the Monster" provided practice in estimating the location of decimals on a number line.…

  5. Intervention to improve the quality of antimicrobial prescribing for urinary tract infection: a cluster randomized trial

    PubMed Central

    Vellinga, Akke; Galvin, Sandra; Duane, Sinead; Callan, Aoife; Bennett, Kathleen; Cormican, Martin; Domegan, Christine; Murphy, Andrew W.

    2016-01-01

    Background: Overuse of antimicrobial therapy in the community adds to the global spread of antimicrobial resistance, which is jeopardizing the treatment of common infections. Methods: We designed a cluster randomized complex intervention to improve antimicrobial prescribing for urinary tract infection in Irish general practice. During a 3-month baseline period, all practices received a workshop to promote consultation coding for urinary tract infections. Practices in intervention arms A and B received a second workshop with information on antimicrobial prescribing guidelines and a practice audit report (baseline data). Practices in intervention arm B received additional evidence on delayed prescribing of antimicrobials for suspected urinary tract infection. A reminder integrated into the patient management software suggested first-line treatment and, for practices in arm B, delayed prescribing. Over the 6-month intervention, practices in arms A and B received monthly audit reports of antimicrobial prescribing. Results: The proportion of antimicrobial prescribing according to guidelines for urinary tract infection increased in arms A and B relative to control (adjusted overall odds ratio [OR] 2.3, 95% confidence interval [CI] 1.7 to 3.2; arm A adjusted OR 2.7, 95% CI 1.8 to 4.1; arm B adjusted OR 2.0, 95% CI 1.3 to 3.0). An unintended increase in antimicrobial prescribing was observed in the intervention arms relative to control (arm A adjusted OR 2.2, 95% CI 1.2 to 4.0; arm B adjusted OR 1.4, 95% CI 0.9 to 2.1). Improvements in guideline-based prescribing were sustained at 5 months after the intervention. Interpretation: A complex intervention, including audit reports and reminders, improved the quality of prescribing for urinary tract infection in Irish general practice. Trial registration: ClinicalTrials.gov, no. NCT01913860 PMID:26573754

  6. Estimation of Missing Daily Temperatures: Can a Weather Categorization Improve Its Accuracy?.

    NASA Astrophysics Data System (ADS)

    Huth, Radan; Nemeová, Ivana

    1995-07-01

    A method of estimating missing daily temperatures is proposed. The procedure is based on a weather classification consisting of two steps: principal component analysis and cluster analysis. At each time of observation (0700, 1400, and 2100 local time) the weather is characterized by temperature, relative humidity, wind speed, and cloudiness. The coefficients of regression equations, enabling the missing temperatures to be determined from the known temperatures at nearby stations, are computed within each weather class. The influence of various parameters (input variables, number of weather classes, number of principal components, their rotation, type of regression equation) on the accuracy of estimated temperatures is discussed. The method yields better results than ordinary regression methods that do not utilize a weather classification. An examination of the statistical properties of the estimated temperatures confirms the applicability of the completed temperature series in climate studies.
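
    A minimal sketch of the two-step scheme, assuming k-means for the clustering step and scikit-learn throughout; the data, the number of neighbour stations, and the class count are illustrative.

        # PCA + clustering into weather classes, then per-class regression.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.cluster import KMeans
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(3)
        n_days = 500
        weather = rng.normal(size=(n_days, 12))     # 4 variables x 3 obs times
        neighbours = rng.normal(size=(n_days, 4))   # temperatures at 4 stations
        target = neighbours @ np.array([0.3, 0.3, 0.2, 0.2]) + rng.normal(0, 0.5, n_days)

        pcs = PCA(n_components=4).fit_transform(weather)
        classes = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(pcs)

        models = {c: LinearRegression().fit(neighbours[classes == c],
                                            target[classes == c])
                  for c in np.unique(classes)}

        day = 42                                    # estimate a "missing" value
        print(models[classes[day]].predict(neighbours[day:day + 1]))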

  7. Stream gradient Hotspot and Cluster Analysis (SL-HCA) for improving the longitudinal profiles metrics

    NASA Astrophysics Data System (ADS)

    Troiani, Francesco; Piacentini, Daniela; Della Seta, Marta

    2016-04-01

    analysis conducted on 52 clusters of high and very high Gi* values indicates that mass movement of slope material represents the dominant process producing over-steepened long-profiles along connected streams, whereas litho-structure accounts for the main anomalies along disconnected streams. Tectonic structures generally give rise to the largest clusters. Our results demonstrate that SL-HCA maps have the same potential as lithologically-filtered SL maps for detecting knickzones due to hillslope processes and/or tectonic structures. The reduced-complexity model derived from the SL-HCA approach greatly improves the readability of the morphometric outcomes, and thus the interpretation at a regional scale of the geological-geomorphological meaning of over-steepened segments of long-profiles. SL-HCA maps are useful for investigating and better interpreting knickzones within regions poorly covered by geological data and where field surveys are difficult to perform.

  8. Optical Redshift and Richness Estimates for Galaxy Clusters Selected with the Sunyaev-Zel'dovich Effect from 2008 South Pole Telescope Observations

    SciTech Connect

    High, F. W.; Stalder, B.; Song, J.; Ade, P. A. R.; Aird, K. A.; Allam, S. S.; Buckley-Geer, E. J.; Armstrong, R.; Barkhouse, W. A.; Benson, B. A.; Bertin, E.; Bhattacharya, S.; Bleem, L. E.; Carlstrom, J. E.; Chang, C. L.; Crawford, T. M.; Crites, A. T.; Brodwin, M.; Challis, P.; De Haan, T.

    2010-11-10

    We present redshifts and optical richness properties of 21 galaxy clusters uniformly selected by their Sunyaev-Zel'dovich (SZ) signature. These clusters, plus an additional, unconfirmed candidate, were detected in a 178 deg{sup 2} area surveyed by the South Pole Telescope (SPT) in 2008. Using griz imaging from the Blanco Cosmology Survey and from pointed Magellan telescope observations, as well as spectroscopy using Magellan facilities, we confirm the existence of clustered red-sequence galaxies, report red-sequence photometric redshifts, present spectroscopic redshifts for a subsample, and derive R{sub 200} radii and M{sub 200} masses from optical richness. The clusters span redshifts from 0.15 to greater than 1, with a median redshift of 0.74; three clusters are estimated to be at z>1. Redshifts inferred from mean red-sequence colors exhibit 2% rms scatter in {sigma}{sub z}/(1 + z) with respect to the spectroscopic subsample for z < 1. We show that the M{sub 200} cluster masses derived from optical richness correlate with masses derived from SPT data and agree with previously derived scaling relations to within the uncertainties. Optical and infrared imaging is an efficient means of cluster identification and redshift estimation in large SZ surveys, and exploiting the same data for richness measurements, as we have done, will be useful for constraining cluster masses and radii for large samples in cosmological analysis.

  9. Systems analysis and improvement to optimize pMTCT (SAIA): a cluster randomized trial

    PubMed Central

    2014-01-01

    Background Despite significant increases in global health investment and the availability of low-cost, efficacious interventions to prevent mother-to-child HIV transmission (pMTCT) in low- and middle-income countries with high HIV burden, the translation of scientific advances into effective delivery strategies has been slow, uneven and incomplete. As a result, pediatric HIV infection remains largely uncontrolled. A five-step, facility-level systems analysis and improvement intervention (SAIA) was designed to maximize effectiveness of pMTCT service provision by improving understanding of inefficiencies (step one: cascade analysis), guiding identification and prioritization of low-cost workflow modifications (step two: value stream mapping), and iteratively testing and redesigning these modifications (steps three through five). This protocol describes the SAIA intervention and methods to evaluate the intervention’s impact on reducing drop-offs along the pMTCT cascade. Methods This study employs a two-arm, longitudinal cluster randomized trial design. The unit of randomization is the health facility. A total of 90 facilities were identified in Côte d’Ivoire, Kenya and Mozambique (30 per country). A subset was randomly selected and assigned to intervention and comparison arms, stratified by country and service volume, resulting in 18 intervention and 18 comparison facilities across all three countries, with six intervention and six comparison facilities per country. The SAIA intervention will be implemented for six months in the 18 intervention facilities. Primary trial outcomes are designed to assess improvements in the pMTCT service cascade, and include the percentage of pregnant women being tested for HIV at the first antenatal care visit, the percentage of HIV-infected pregnant women receiving adequate prophylaxis or combination antiretroviral therapy in pregnancy, and the percentage of newborns exposed to HIV in pregnancy receiving an HIV diagnosis eight

  10. Using Targeted Active-Learning Exercises and Diagnostic Question Clusters to Improve Students' Understanding of Carbon Cycling in Ecosystems

    ERIC Educational Resources Information Center

    Maskiewicz, April Cordero; Griscom, Heather Peckham; Welch, Nicole Turrill

    2012-01-01

    In this study, we used targeted active-learning activities to help students improve their ways of reasoning about carbon flow in ecosystems. The results of a validated ecology conceptual inventory (diagnostic question clusters [DQCs]) provided us with information about students' understanding of and reasoning about transformation of inorganic and…

  11. Estimating Accuracy of Land-Cover Composition From Two-Stage Clustering Sampling

    EPA Science Inventory

    Land-cover maps are often used to compute land-cover composition (i.e., the proportion or percent of area covered by each class), for each unit in a spatial partition of the region mapped. We derive design-based estimators of mean deviation (MD), mean absolute deviation (MAD), ...

  12. Accurate reconstruction of viral quasispecies spectra through improved estimation of strain richness

    PubMed Central

    2015-01-01

    Background Estimating the number of different species (richness) in a mixed microbial population has been a main focus in metagenomic research. Existing methods of species richness estimation rely on the assumption that the reads in each assembled contig correspond to only one of the microbial genomes in the population. This assumption and the underlying probabilistic formulations of existing methods are not useful for quasispecies populations where the strains are highly genetically related. The lack of knowledge on the number of different strains in a quasispecies population is observed to hinder the precision of existing Viral Quasispecies Spectrum Reconstruction (QSR) methods due to the uncontrolled reconstruction of a large number of in silico false positives. In this work, we formulated a novel probabilistic method for strain richness estimation specifically targeting viral quasispecies. By using this approach we improved our recently proposed spectrum reconstruction pipeline ViQuaS to achieve higher levels of precision in reconstructed quasispecies spectra without compromising the recall rates. We also discuss how ShoRAH, another popular existing QSR method, can be improved using this new approach. Results On benchmark data sets, our estimation method provided accurate richness estimates (< 0.2 median estimation error) and improved the precision of ViQuaS by 2%-13% and F-score by 1%-9% without compromising the recall rates. We also demonstrate that our estimation method can be used to improve the precision and F-score of ShoRAH by 0%-7% and 0%-5% respectively. Conclusions The proposed probabilistic estimation method can be used to estimate the richness of viral populations with a quasispecies behavior and to improve the accuracy of the quasispecies spectra reconstructed by the existing methods ViQuaS and ShoRAH in the presence of a moderate level of technical sequencing errors. Availability http://sourceforge.net/projects/viquas/ PMID:26678073

  13. Combined gene cluster engineering and precursor feeding to improve gougerotin production in Streptomyces graminearus.

    PubMed

    Jiang, Lingjuan; Wei, Junhong; Li, Lei; Niu, Guoqing; Tan, Huarong

    2013-12-01

    Gougerotin is a peptidyl nucleoside antibiotic produced by Streptomyces graminearus. It is a specific inhibitor of protein synthesis and exhibits a broad spectrum of biological activities. Generation of an overproducing strain is crucial for the scale-up production of gougerotin. In this study, the natural and engineered gougerotin gene clusters were reassembled into an integrative plasmid by λ-red-mediated recombination technology combined with classic cloning methods. The resulting plasmids pGOU and pGOUe were introduced into S. graminearus to obtain recombinant strains Sgr-GOU and Sgr-GOUe, respectively. Compared with the wild-type strain, Sgr-GOU led to a maximum 1.3-fold increase in gougerotin production, while Sgr-GOUe resulted in a maximum 2.1-fold increase in gougerotin production. To further increase the yield of gougerotin, the effect of different precursors on its production was investigated. All precursors, including cytosine, serine, and glycine, had a stimulatory effect on gougerotin production. The maximum gougerotin yield was achieved with Sgr-GOUe in the presence of glycine, and it was approximately 2.5-fold higher than that of the wild-type strain. The strategies used in this study can be extended to other Streptomyces for improving the production of industrially important antibiotics. PMID:24121866

  14. A Clustering Method for Improving Performance of Anomaly-Based Intrusion Detection System

    NASA Astrophysics Data System (ADS)

    Song, Jungsuk; Ohira, Kenji; Takakura, Hiroki; Okabe, Yasuo; Kwon, Yongjin

    Intrusion detection system (IDS) has played a central role as an appliance to effectively defend our crucial computer systems or networks against attackers on the Internet. The most widely deployed and commercially available methods for intrusion detection employ signature-based detection. However, they intrinsically cannot detect unknown intrusions that do not match the signatures, and acquiring the signatures consumes a great deal of cost and time. In order to cope with these problems, many researchers have proposed various kinds of methods that are based on unsupervised learning techniques. Although they enable one to construct an intrusion detection model with low cost and effort, and have the capability to detect unforeseen attacks, they still have mainly two problems in intrusion detection: a low detection rate and a high false positive rate. In this paper, we present a new clustering method to improve the detection rate while maintaining a low false positive rate. We evaluated our method using the KDD Cup 1999 data set. Evaluation results show the superiority of our approach over other existing algorithms reported in the literature.
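
    As a rough illustration of the unsupervised idea (not the paper's specific algorithm), the sketch below clusters synthetic traffic feature vectors with k-means and flags members of unusually small clusters as suspected intrusions; the 5% size threshold is an arbitrary choice:

```python
# Sketch: k-means over traffic feature vectors; members of unusually small
# clusters are flagged as suspected intrusions. Data are synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
normal = rng.normal(0.0, 1.0, size=(980, 8))    # bulk of benign traffic
attacks = rng.normal(6.0, 1.0, size=(20, 8))    # rare anomalous traffic
X = np.vstack([normal, attacks])

km = KMeans(n_clusters=5, n_init=10, random_state=1).fit(X)
sizes = np.bincount(km.labels_)

# Flag clusters holding under 5% of the data as anomalous (tunable threshold)
anomalous = np.where(sizes < 0.05 * len(X))[0]
alerts = np.where(np.isin(km.labels_, anomalous))[0]
print(f"{len(alerts)} alerts; indices >= 980 are the injected attacks: {alerts}")
```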

  15. Improving the implementation of tailored expectant management in subfertile couples: protocol for a cluster randomized trial

    PubMed Central

    2013-01-01

    Background Prognostic models in reproductive medicine can help to identify subfertile couples who would benefit from fertility treatment. Expectant management in couples with a good chance of natural conception, i.e., tailored expectant management (TEM), prevents unnecessary treatment and is therefore recommended in international fertility guidelines. However, current implementation is not optimal, leaving room for improvement. Based on barriers and facilitators for TEM that were recently identified among professionals and subfertile couples, we have developed a multifaceted implementation strategy. The goal of this study is to assess the effects of this implementation strategy on guideline adherence on TEM. Methods/design In a cluster randomized trial, 25 clinics and their allied practitioner units will be randomized between the multifaceted implementation strategy and care as usual. Randomization will be stratified for in vitro fertilization (IVF) facilities (fully licensed, intermediate/no IVF facilities). The effect of the implementation strategy, i.e., the percentage guideline adherence on TEM, will be evaluated by pre- and post-randomization data collection. Furthermore, there will be a process and cost evaluation of the strategy. The implementation strategy will focus on subfertile couples and their care providers, i.e., general practitioners (GPs), fertility doctors, and gynecologists. The implementation strategy addresses three levels: patient level: education materials in the form of a patient information leaflet and a website; professional level: audit and feedback, educational outreach visit, communication training, and access to a digital version of the prognostic model of Hunault on a website; organizational level: providing a protocol based on the guideline. The primary outcome will be the percentage guideline adherence on TEM. Additional outcome measures will be treatment-, patient-, and process-related outcome measures. Discussion This study

  16. A novel school-based intervention to improve nutrition knowledge in children: cluster randomised controlled trial

    PubMed Central

    2010-01-01

    Background Improving nutrition knowledge among children may help them to make healthier food choices. The aim of this study was to assess the effectiveness and acceptability of a novel educational intervention to increase nutrition knowledge among primary school children. Methods We developed a card game 'Top Grub' and a 'healthy eating' curriculum for use in primary schools. Thirty-eight state primary schools comprising 2519 children in years 5 and 6 (aged 9-11 years) were recruited in a pragmatic cluster randomised controlled trial. The main outcome measures were change in nutrition knowledge scores, attitudes to healthy eating and acceptability of the intervention by children and teachers. Results Twelve intervention and 13 control schools (comprising 1133 children) completed the trial. The main reason for non-completion was time pressure of the school curriculum. Mean total nutrition knowledge score increased by 1.1 in intervention (baseline to follow-up: 28.3 to 29.2) and 0.3 in control schools (27.3 to 27.6). Total nutrition knowledge score at follow-up, adjusted for baseline score, deprivation, and school size, was higher in intervention than in control schools (mean difference = 1.1; 95% CI: 0.05 to 2.16; p = 0.042). At follow-up, more children in the intervention schools said they 'are currently eating a healthy diet' (39.6%) or 'would try to eat a healthy diet' (35.7%) than in control schools (34.4% and 31.7% respectively; chi-square test p < 0.001). Most children (75.5%) enjoyed playing the game and teachers considered it a useful resource. Conclusions The 'Top Grub' card game facilitated the enjoyable delivery of nutrition education in a sample of UK primary school age children. Further studies should determine whether improvements in nutrition knowledge are sustained and lead to changes in dietary behaviour. PMID:20219104

  17. Effect of endmember clustering on proportion estimation: results on the SHARE 2012 dataset

    NASA Astrophysics Data System (ADS)

    Gunes, Erdinc; Yuksel, Seniha E.

    2015-05-01

    Estimating the number of endmembers and their spectra is a challenging task. For one, endmember detection algorithms may over- or underestimate the number of endmembers in a given scene. Further, even if the number of endmembers is known beforehand, the results of the endmember detection algorithms may not be accurate. They may find multiple endmembers representing the same class, while completely missing the endmembers representing other classes. This hinders the performance of unmixing, resulting in incorrect endmember proportion estimates. In this study, the SHARE 2012 AVON data pertaining to the unmixing experiment were considered. The data were cropped to include only the eight pieces of cloth and a portion of the surrounding asphalt and grass. These data were used to evaluate the performance of five endmember detection algorithms, namely PPI, VCA, N-FINDR, ICE, and SPICE; none of which found the endmember spectra correctly. All of these algorithms generated multiple endmembers corresponding to the same class or completely missed some of the endmembers. Hence, a peak-aware N-FINDR algorithm was devised to group the endmembers of the same class so as not to over- or underestimate the true endmembers. Comparisons with and without this refinement for the N-FINDR algorithm are demonstrated.

  18. Improving the S-Shape Solar Radiation Estimation Method for Supporting Crop Models

    PubMed Central

    Fodor, Nándor

    2012-01-01

    In line with the critical comments formulated in relation to the S-shape global solar radiation estimation method, the original formula was improved via a 5-step procedure. The improved method was compared to four reference methods on a large North American database. According to the investigated error indicators, the final 7-parameter S-shape method has the same or even better estimation efficiency than the original formula. The improved formula is able to provide radiation estimates with a particularly low error pattern index (PIdoy), which is especially important concerning the usability of the estimated radiation values in crop models. Using site-specific calibration, the radiation estimates of the improved S-shape method caused an average relative error of 2.72 ± 1.02% (α = 0.05) in the calculated biomass. Using only readily available site-specific metadata, the radiation estimates caused less than 5% relative error in the crop model calculations when they were used for locations in the middle, plain territories of the USA. PMID:22645451

  19. N-dimensional B-spline surface estimated by lofting for locally improving IRI

    NASA Astrophysics Data System (ADS)

    Koch, K.; Schmidt, M.

    2011-03-01

    N-dimensional surfaces are defined by the tensor product of B-spline basis functions. To estimate the unknown control points of these B-spline surfaces, the lofting method, also called the skinning method by cross-sectional curve fits, is applied. It is shown by an analytical proof and numerically confirmed by the example of a four-dimensional surface that the results of the lofting method agree with those of the simultaneous estimation of the unknown control points. The numerical complexity of estimating v^n control points by the lofting method is O(v^(n+1)), while it is O(v^(3n)) for the simultaneous estimation. It is also shown that a B-spline surface estimated by a simultaneous estimation can be extended to higher dimensions by the lofting method, thus saving computer time. An application of this method is the local improvement of the International Reference Ionosphere (IRI), e.g. by the slant total electron content (STEC) obtained by dual-frequency observations of the Global Navigation Satellite System (GNSS). Three-dimensional B-spline surfaces at different time epochs have to be determined by the simultaneous estimation of the control points for this improvement. A four-dimensional representation in space and time of the electron density of the ionosphere is desirable. It can be obtained by the lofting method. This takes less computer time than determining the four-dimensional surface solely by a simultaneous estimation.
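
    A toy two-dimensional stand-in for the lofting idea, using successive one-dimensional B-spline fits from SciPy (the paper works with control points in n dimensions, which this sketch does not reproduce):

```python
# Toy lofting: evaluate a surface z(x, y) by fitting cross-sectional B-spline
# curves in x (one per y-section), then a single curve in y through their
# values. Test function and grids are arbitrary.
import numpy as np
from scipy.interpolate import splev, splrep

x = np.linspace(0, 1, 20)
y = np.linspace(0, 1, 15)
Z = np.cos(np.pi * y)[:, None] * np.sin(2 * np.pi * x)[None, :]  # Z[j, i] = z(y_j, x_i)

def loft_eval(x0, y0):
    # Stage 1: cross-sectional curve fits along x, evaluated at x0
    sections = np.array([splev(x0, splrep(x, Z[j])) for j in range(len(y))])
    # Stage 2: one curve fit along y through the section values
    return splev(y0, splrep(y, sections))

print(float(loft_eval(0.3, 0.6)), np.sin(2 * np.pi * 0.3) * np.cos(np.pi * 0.6))
```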

  20. Ruling the Universe: An Improved Method for Measuring the Hubble Constant with Galaxy Clusters

    NASA Astrophysics Data System (ADS)

    Hallman, E. J.; Burns, J. O.; Motl, P. M.; Norman, M. L.

    2005-12-01

    We present a new method of calculating the value of the Hubble constant (H0) from X-ray/SZE observations of clusters of galaxies. Values of H0 reported from cluster observations are systematically low compared to other methods. Using a large sample of numerically simulated clusters placed at a variety of redshifts, we show that the typically used method of calculating H0, which assumes the cluster gas to be isothermal, results in a 20-30% underestimate of the mean value. This new method, which assumes the cluster gas temperature has a radial dependence described by a universal temperature profile (UTP), results in a value much closer to the true value of H0: the mean is a 3-8% overestimate. The new method also has greatly reduced scatter about the mean for all the clusters in the simulated catalog compared to the isothermal method. Our new method requires no additional observational effort compared to the traditional technique. This simple change in the analysis of the cluster data results in values of H0 which are consistent with other observations.

  1. Cluster-based differential features to improve detection accuracy of focal cortical dysplasia

    NASA Astrophysics Data System (ADS)

    Yang, Chin-Ann; Kaveh, Mostafa; Erickson, Bradley

    2012-03-01

    In this paper, a computer aided diagnosis (CAD) system for automatic detection of focal cortical dysplasia (FCD) on T1-weighted MRI is proposed. We introduce a new set of differential cluster-wise features comparing local differences of the candidate lesional area with its surroundings and other GM/WM boundaries. The local differences are measured in a distributional sense using χ2 distances. Finally, a Support Vector Machine (SVM) classifier is used to classify the clusters. Experimental results show an 88% lesion detection rate with only 1.67 false positive clusters per subject. Also, the results show that using additional differential features clearly outperforms the result using only absolute features.
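
    A minimal sketch of the distributional comparison, assuming synthetic intensity histograms: the chi-squared distance between a candidate region's histogram and that of its surroundings is used as a feature for an SVM classifier:

```python
# Sketch: chi-squared distance between the intensity histogram of a candidate
# cluster and that of its surroundings, used as an SVM feature. Synthetic data.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)

def chi2_distance(h, g, eps=1e-10):
    """Chi-squared distance between two histograms."""
    return 0.5 * np.sum((h - g) ** 2 / (h + g + eps))

def region_feature(shift):
    # Candidate region vs. surrounding tissue; lesions shift the distribution
    cand = np.histogram(rng.normal(shift, 1.0, 500), bins=16, range=(-4, 6))[0]
    surr = np.histogram(rng.normal(0.0, 1.0, 500), bins=16, range=(-4, 6))[0]
    return [chi2_distance(cand / cand.sum(), surr / surr.sum())]

X = np.array([region_feature(1.5) for _ in range(50)] +   # lesional candidates
             [region_feature(0.0) for _ in range(50)])    # healthy candidates
labels = np.array([1] * 50 + [0] * 50)
clf = SVC(kernel="rbf").fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```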

  2. Improvement of color reproduction in color digital holography by using spectral estimation technique.

    PubMed

    Xia, Peng; Shimozato, Yuki; Ito, Yasunori; Tahara, Tatsuki; Kakue, Takashi; Awatsuji, Yasuhiro; Nishio, Kenzo; Ura, Shogo; Kubota, Toshihiro; Matoba, Osamu

    2011-12-01

    We propose color digital holography using a spectral estimation technique to improve the color reproduction of objects. In conventional color digital holography, there is insufficient spectral information in the holograms, and the color of the reconstructed images depends only on the reflectances at the three discrete wavelengths used in recording the holograms. The color-composite image of the three reconstructed images is therefore not accurate in color reproduction. In our proposed method, however, we applied a spectral estimation technique that has been reported in multispectral imaging. With this technique, the continuous spectrum of the object can be estimated and the color reproduction improved. The effectiveness of the proposed method was confirmed by a numerical simulation and an experiment; in the results, the average color differences decreased from 35.81 to 7.88 and from 43.60 to 25.28, respectively. PMID:22193005

  3. Improving power in small-sample longitudinal studies when using generalized estimating equations.

    PubMed

    Westgate, Philip M; Burchett, Woodrow W

    2016-09-20

    Generalized estimating equations (GEE) are often used for the marginal analysis of longitudinal data. Although much work has been performed to improve the validity of GEE for the analysis of data arising from small-sample studies, little attention has been given to power in such settings. Therefore, we propose a valid GEE approach to improve power in small-sample longitudinal study settings in which the temporal spacing of outcomes is the same for each subject. Specifically, we use a modified empirical sandwich covariance matrix estimator within correlation structure selection criteria and test statistics. Use of this estimator can improve the accuracy of selection criteria and increase the degrees of freedom to be used for inference. The resulting impacts on power are demonstrated via a simulation study and application example. Copyright © 2016 John Wiley & Sons, Ltd. PMID:27090375

  4. Pollutant discharges to coastal areas: Improving upstream source estimates. Final report

    SciTech Connect

    Rohmann, S.O.

    1989-10-01

    The report describes a project NOAA's Strategic Environmental Assessments Division began to improve the estimates of pollutant discharges carried into coastal areas by rivers and streams. These estimates, termed discharges from upstream sources, take into account all pollution discharged by industries, sewage treatment plants, farms, cities, and other pollution-generating operations, as well as natural phenomena such as erosion and weathering which occur inland or upstream of the coastal US.

  5. Mobile Location Using Improved Covariance Shaping Least-Squares Estimation in Cellular Systems

    NASA Astrophysics Data System (ADS)

    Chang, Ann-Chen; Lee, Yu-Hong

    This Letter deals with the problem of non-line-of-sight (NLOS) propagation in cellular systems devoted to location purposes. In conjunction with a variable loading technique, we present an efficient technique that gives the covariance shaping least-squares estimator robust capabilities against NLOS effects. Compared with other methods, the proposed improved estimator has high accuracy under white Gaussian measurement noise and NLOS effects.

  6. Improving quality of sample entropy estimation for continuous distribution probability functions

    NASA Astrophysics Data System (ADS)

    Miśkiewicz, Janusz

    2016-05-01

    Entropy is one of the key parameters characterizing the state of a system in statistical physics. Although entropy is defined for systems described by discrete and continuous probability distribution functions (PDFs), in numerous applications the sample entropy is estimated from a histogram, which in effect means that the continuous PDF is represented by a set of probabilities. Such a procedure may lead to ambiguities and even misinterpretation of the results. Within this paper, two possible general algorithms based on continuous PDF estimation are discussed in application to the Shannon and Tsallis entropies. It is shown that the proposed algorithms may improve entropy estimation, particularly in the case of small data sets.
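
    The difference between the two estimation routes can be seen in a few lines. The sketch below, for a standard normal sample, compares a histogram-based entropy estimate (bin probabilities corrected by the bin width) with a continuous estimate based on a Gaussian kernel density, against the analytic value; the bin count and sample size are arbitrary:

```python
# Histogram vs. continuous-PDF (Gaussian KDE) estimates of differential
# entropy for a standard normal sample; analytic value is 0.5*ln(2*pi*e).
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
x = rng.normal(size=200)            # deliberately small sample

# Histogram estimator: discrete entropy of bin probabilities + log(bin width)
counts, edges = np.histogram(x, bins=20)
p = counts / counts.sum()
h_hist = -np.sum(p[p > 0] * np.log(p[p > 0])) + np.log(edges[1] - edges[0])

# Continuous estimator: resubstitution average of -log f_hat under the KDE
kde = gaussian_kde(x)
h_kde = -np.mean(np.log(kde(x)))

print(f"analytic {0.5 * np.log(2 * np.pi * np.e):.4f}, "
      f"histogram {h_hist:.4f}, KDE {h_kde:.4f}")
```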

  7. Simple and Efficient Algorithm for Improving the MDL Estimator of the Number of Sources

    PubMed Central

    Guimarães, Dayan A.; de Souza, Rausley A. A.

    2014-01-01

    We propose a simple algorithm for improving the MDL (minimum description length) estimator of the number of sources of signals impinging on multiple sensors. The algorithm is based on the norms of vectors whose elements are the normalized and nonlinearly scaled eigenvalues of the received signal covariance matrix and the corresponding normalized indexes. Such norms are used to discriminate the largest eigenvalues from the remaining ones, thus allowing for the estimation of the number of sources. The MDL estimate is used as the input data of the algorithm. Numerical results reveal that the norm-based improved MDL (iMDL) algorithm can achieve better performance than the MDL estimator alone. Comparisons are also made with the well-known AIC (Akaike information criterion) estimator and with a recently proposed estimator based on random matrix theory (RMT). It is shown that our algorithm can also outperform the AIC and the RMT-based estimators in some situations. PMID:25330050
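
    For reference, the baseline Wax-Kailath MDL criterion that the iMDL algorithm post-processes can be sketched as follows (synthetic scenario; the improved norm-based step of the paper is not reproduced here):

```python
# Baseline MDL estimate of the number of sources from the eigenvalues of the
# sample covariance matrix (Wax-Kailath form). Array and sources are synthetic.
import numpy as np

def mdl_num_sources(eigvals, n_snapshots):
    lam = np.sort(eigvals)[::-1]
    p = len(lam)
    mdl = np.empty(p)
    for k in range(p):
        tail = lam[k:]                              # candidate noise eigenvalues
        gm = np.exp(np.mean(np.log(tail)))          # geometric mean
        am = np.mean(tail)                          # arithmetic mean
        mdl[k] = (-n_snapshots * (p - k) * np.log(gm / am)
                  + 0.5 * k * (2 * p - k) * np.log(n_snapshots))
    return int(np.argmin(mdl))

rng = np.random.default_rng(4)
p, N, q = 8, 200, 2                                 # sensors, snapshots, sources
X = rng.normal(size=(p, q)) @ rng.normal(size=(q, N)) + 0.1 * rng.normal(size=(p, N))
R = X @ X.T / N
print("estimated number of sources:", mdl_num_sources(np.linalg.eigvalsh(R), N))
```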

  8. Estimation of root zone storage capacity at the catchment scale using improved Mass Curve Technique

    NASA Astrophysics Data System (ADS)

    Zhao, Jie; Xu, Zongxue; Singh, Vijay P.

    2016-09-01

    The root zone storage capacity (Sr) greatly influences runoff generation, soil water movement, and vegetation growth and is hence an important variable for ecological and hydrological modelling. However, due to the great heterogeneity in soil texture and structure, there is presently no effective approach to monitor or estimate Sr at the catchment scale. To fill the gap, in this study the Mass Curve Technique (MCT) was improved by incorporating a snowmelt module for the estimation of Sr at the catchment scale in different climatic regions. The "range of perturbation" method was also used to generate different scenarios for determining the sensitivity of the improved MCT-derived Sr to its influencing factors, after evaluating the plausibility of Sr derived from the improved MCT. The results can be summarized as follows: (i) Sr estimates of different catchments varied greatly, from ∼10 mm to ∼200 mm, with changes in climatic conditions and underlying surface characteristics. (ii) The improved MCT is a simple but powerful tool for Sr estimation in different climatic regions of China, and incorporating more catchments into Sr comparisons can further improve our knowledge of the variability of Sr. (iii) Variation of Sr values is an integrated consequence of variations in rainfall, snowmelt water, and evapotranspiration. Sr values are most sensitive to variations in the evapotranspiration of ecosystems. Moreover, Sr values with a longer return period are more stable than those with a shorter return period when affected by fluctuations in their influencing factors.
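
    A minimal reading of the mass-curve idea, with synthetic daily series and the snowmelt module reduced to a given input: Sr is taken as the largest cumulative deficit vegetation must bridge when evaporative demand exceeds water supply:

```python
# Mass-curve sketch: Sr as the largest cumulative deficit between evaporative
# demand and water supply (rain + snowmelt). Daily series are synthetic and
# the snowmelt module is reduced to a given input series.
import numpy as np

rng = np.random.default_rng(5)
days = 3 * 365
rain = rng.gamma(shape=0.4, scale=6.0, size=days)                   # mm/day
snowmelt = np.zeros(days)                                           # placeholder
supply = rain + snowmelt
demand = 2.5 + 1.5 * np.sin(2 * np.pi * np.arange(days) / 365)      # mm/day

deficit, max_deficit = 0.0, 0.0
for s, d in zip(supply, demand):
    deficit = max(0.0, deficit + d - s)   # storage drawn down, refilled by supply
    max_deficit = max(max_deficit, deficit)

print(f"estimated root zone storage capacity Sr ~ {max_deficit:.0f} mm")
```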

  9. Use of spot measurements to improve the estimation of low streamflow statistics.

    NASA Astrophysics Data System (ADS)

    Kroll, C. N.; Stagnitta, T. J.; Vogel, R. M.

    2015-12-01

    Despite substantial efforts to improve the modeling and prediction of low streamflows at ungauged river sites, most models of low streamflow statistics produce estimators with large errors. Often this is because the hydrogeologic characteristics of a watershed, which can strongly impact low streamflows, are difficult to characterize. One solution is to take a nominal number of streamflow measurements at an ungauged site and either estimate improved hydrogeologic indices or correlate with concurrent streamflow measurements at a nearby gauged river site. Past results have indicated that baseflow correlation performs better than regional regression when 4 or more streamflow measurements are available, even when the regional regression models are augmented by improved hydrogeologic indices. Here we revisit this issue within the 19,800 square mile Apalachicola-Chattahoochee-Flint watershed, a USGS WaterSMART region spanning Georgia, southeastern Alabama, and northwestern Florida. This study area is of particular interest because numerous watershed modeling analyses have previously been performed using gauged river sites within this basin. Initial results indicate that baseflow correlation can produce improved estimators when spot measurements are available, but selection of an appropriate donor site is problematic, especially in regions with a small number of gauged river sites. Estimation of hydrogeologic indices does improve regional regression models, but these models are generally outperformed by baseflow correlation.

  10. The Role of Satellite Imagery to Improve Pastureland Estimates in South America

    NASA Astrophysics Data System (ADS)

    Graesser, J.

    2015-12-01

    Agriculture has changed substantially across the globe over the past half century. While much work has been done to improve spatial-temporal estimates of agricultural changes, we still know more about the extent of row-crop agriculture than livestock-grazed land. The gap between cropland and pastureland estimates exists largely because it is challenging to characterize natural versus grazed grasslands from a remote sensing perspective. However, the impasse of pastureland estimates is set to break, with an increasing number of spaceborne sensors and freely available satellite data. The Landsat satellite archive in particular provides researchers with immense amounts of data to improve pastureland information. Here we focus on South America, where pastureland expansion has been scrutinized for the past few decades. We explore the challenges of estimating pastureland using temporal Landsat imagery and focus on key agricultural countries, regions, and ecosystems. We focus on the suggested shift of pastureland from the Argentine Pampas to northern Argentina, and the mixing of small-scale and large-scale ranching in eastern Paraguay and how it could impact the Chaco forest to the west. Further, the Beni Savannahs of northern Bolivia and the Colombian Llanos—both grassland and savannah regions historically used for livestock grazing—have been hinted at as future areas for cropland expansion. There are certainly environmental concerns with pastureland expansion into forests; but what are the environmental implications when well-managed pasture systems are converted to intensive soybean or palm oil plantation? Tropical, grazed grasslands are important habitats for biodiversity, and pasturelands can mitigate soil erosion when well managed. Thus, we must improve estimates of grazed land before we can make informed policy and conservation decisions. This talk presents insights into pastureland estimates in South America and discusses the feasibility to improve current

  11. Integrating K-means Clustering with Kernel Density Estimation for the Development of a Conditional Weather Generation Downscaling Model

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Ho, C.; Chang, L.

    2011-12-01

    In recent decades, climate change caused by global warming has increased the frequency of extreme hydrological events. Water supply shortages caused by extreme events create great challenges for water resource management. To evaluate future climate variations, general circulation models (GCMs) are the most widely used tools; they show possible weather conditions under pre-defined CO2 emission scenarios announced by the IPCC. Because the study area of GCMs is the entire earth, the grid sizes of GCMs are much larger than the basin scale. To overcome the gap, a statistical downscaling technique can transform the regional-scale weather factors into basin-scale precipitation. Statistical downscaling techniques can be divided into three categories: transfer functions, weather generators, and weather typing. The first two categories describe the relationships between the weather factors and precipitation based on deterministic algorithms, such as linear or nonlinear regression and ANNs, and stochastic approaches, such as Markov chain theory and statistical distributions, respectively. Weather typing methods cluster the weather factors, which are high-dimensional continuous variables, into weather types, which are a limited number of discrete states. In this study, the proposed downscaling model integrates weather typing, using the K-means clustering algorithm, and a weather generator, using kernel density estimation. The study area is the Shihmen basin in northern Taiwan. The research process contains two steps, a calibration step and a synthesis step. Three sub-steps were used in the calibration step. First, weather factors, such as pressures, humidities, and wind speeds, obtained from NCEP, and the precipitation observed at rainfall stations were collected for downscaling. Second, K-means clustering grouped the weather factors into four weather types. Third, the Markov chain transition matrices and the
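
    A compact sketch of this coupling, with synthetic stand-ins for the NCEP factors and gauge rainfall: k-means assigns each day to a weather type, a kernel density estimate of precipitation is built per type, and synthesis samples from the KDE of the prevailing type:

```python
# Sketch: k-means weather typing of large-scale factors plus a per-type kernel
# density estimate of precipitation; synthesis samples from the KDE of the
# prevailing type. All series are synthetic placeholders.
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.cluster import KMeans

rng = np.random.default_rng(6)
n = 2000
factors = rng.normal(size=(n, 5))                        # pressure, humidity, ...
rain = np.exp(factors[:, 0] + 0.5 * rng.normal(size=n))  # wetter in one regime

km = KMeans(n_clusters=4, n_init=10, random_state=6).fit(factors)
kdes = {c: gaussian_kde(rain[km.labels_ == c]) for c in range(4)}

# Synthesis: classify a new day's factors, then draw basin-scale precipitation
new_day = rng.normal(size=(1, 5))
wtype = km.predict(new_day)[0]
sample = max(kdes[wtype].resample(1, seed=7)[0, 0], 0.0)
print(f"weather type {wtype}, generated precipitation {sample:.2f} mm")
```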

  12. Improving Propensity Score Estimators' Robustness to Model Misspecification Using Super Learner

    PubMed Central

    Pirracchio, Romain; Petersen, Maya L.; van der Laan, Mark

    2015-01-01

    The consistency of propensity score (PS) estimators relies on correct specification of the PS model. The PS is frequently estimated using main-effects logistic regression. However, the underlying model assumptions may not hold. Machine learning methods provide an alternative nonparametric approach to PS estimation. In this simulation study, we evaluated the benefit of using Super Learner (SL) for PS estimation. We created 1,000 simulated data sets (n = 500) under 4 different scenarios characterized by various degrees of deviance from the usual main-term logistic regression model for the true PS. We estimated the average treatment effect using PS matching and inverse probability of treatment weighting. The estimators' performance was evaluated in terms of PS prediction accuracy, covariate balance achieved, bias, standard error, coverage, and mean squared error. All methods exhibited adequate overall balancing properties, but in the case of model misspecification, SL performed better for highly unbalanced variables. The SL-based estimators were associated with the smallest bias in cases of severe model misspecification. Our results suggest that use of SL to estimate the PS can improve covariate balance and reduce bias in a meaningful manner in cases of serious model misspecification for treatment assignment. PMID:25515168
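
    The ensemble idea can be illustrated with scikit-learn's StackingClassifier as a stand-in for Super Learner (the cross-validated loss-based weighting differs in detail). The sketch simulates a nonlinear treatment model, estimates the propensity score with the stacked ensemble, and applies inverse-probability-of-treatment weighting:

```python
# Sketch: stacked-ensemble propensity scores (a stand-in for Super Learner)
# plus IPTW. The treatment model is deliberately nonlinear, so a main-terms
# logistic regression alone would be misspecified. True effect is 2.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 500
X = rng.normal(size=(n, 3))
ps_true = 1.0 / (1.0 + np.exp(-(X[:, 0] ** 2 - X[:, 1] * X[:, 2] - 0.5)))
T = rng.binomial(1, ps_true)
Y = 2.0 * T + X[:, 0] + rng.normal(size=n)

ensemble = StackingClassifier(
    estimators=[("lr", LogisticRegression()),
                ("rf", RandomForestClassifier(n_estimators=200, random_state=7))],
    final_estimator=LogisticRegression())
ps_hat = ensemble.fit(X, T).predict_proba(X)[:, 1].clip(0.01, 0.99)

w = T / ps_hat + (1 - T) / (1 - ps_hat)               # IPTW weights
ate = (np.average(Y[T == 1], weights=w[T == 1])
       - np.average(Y[T == 0], weights=w[T == 0]))
print(f"IPTW estimate of the average treatment effect: {ate:.2f}")
```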

  13. Improved Cluster Identification and Visualization in High-Dimensional Data Using Self-Organizing Maps

    NASA Astrophysics Data System (ADS)

    Manukyan, N.; Eppstein, M. J.; Rizzo, D. M.

    2011-12-01

    A Kohonen self-organizing map (SOM) is a type of unsupervised artificial neural network that results in a self-organized projection of high-dimensional data onto a low-dimensional feature map, wherein vector similarity is implicitly translated into topological closeness, enabling clusters to be identified. In recently published work [1], 209 microbial variables from 22 monitoring wells around the leaking Schuyler Falls Landfill in Clinton, NY [2] were analyzed using a multi-stage non-parametric process to explore how microbial communities may act as indicators for the gradient of contamination in groundwater. The final stage of their analysis used a weighted SOM to identify microbial signatures in this high dimensionality data set that correspond to clean, fringe, and contaminated soils. Resulting clusters were visualized with the standard unified distance matrix (U-matrix). However, while the results of this analysis were very promising, visualized boundaries between clusters in the SOM were indistinct and required manual and somewhat arbitrary identification. In this contribution, we introduce (i) a new cluster reinforcement (CR) phase to be run subsequent to traditional SOM training for automatic sharpening of cluster boundaries, and (ii) a new boundary matrix (B-matrix) approach for visualization of the resulting cluster boundaries. The CR-phase differs from standard SOM training in several ways, most notably by using a feature-based neighborhood function rather than a topologically-based neighborhood function. In contrast to the U-matrix, the B-matrix can be directly superimposed on heat maps of the individual features (as output by the SOM) using grid lines whose thickness corresponds to inter-cluster distances. By thresholding the displayed lines, one obtains hierarchical control of the visual level of cluster resolution. We first illustrate the advantages of these methods on a small synthetic test case, and then apply them to the Schuyler Falls landfill
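
    The standard U-matrix against which the proposed B-matrix is contrasted is straightforward to compute from a trained weight grid; the sketch below uses a random grid as a stand-in for a trained SOM:

```python
# U-matrix from a SOM weight grid: each node's value is the mean distance to
# its grid neighbors; large values mark cluster boundaries. A random grid
# stands in for a trained map here.
import numpy as np

def u_matrix(weights):
    """weights: (rows, cols, dim) array of SOM codebook vectors."""
    rows, cols, _ = weights.shape
    U = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            neighbors = [(i + di, j + dj) for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                         if 0 <= i + di < rows and 0 <= j + dj < cols]
            U[i, j] = np.mean([np.linalg.norm(weights[i, j] - weights[a, b])
                               for a, b in neighbors])
    return U

rng = np.random.default_rng(8)
U = u_matrix(rng.normal(size=(10, 10, 5)))
print("boundary-like nodes:", np.argwhere(U > np.percentile(U, 90))[:5].tolist())
```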

  14. Globular Cluster Variable Stars—Atlas and Coordinate Improvement using AAVSOnet Telescopes (Abstract)

    NASA Astrophysics Data System (ADS)

    Welch, D.; Henden, A.; Bell, T.; Suen, C.; Fare, I.; Sills, A.

    2015-12-01

    (Abstract only) The variable stars of globular clusters have played and continue to play a significant role in our understanding of certain classes of variable stars. Since all stars associated with a cluster have the same age, metallicity, and distance, and usually very similar (if not identical) reddenings, such variables can produce uniquely powerful constraints on where certain types of pulsation behaviors are excited. Advanced amateur astronomers are increasingly well-positioned to provide long-term CCD monitoring of globular cluster variable stars but are hampered by a long history of poor or inaccessible finder charts and coordinates. Many of the variable-rich clusters have published photographic finder charts taken in relatively poor seeing with blue-sensitive photographic plates. While useful signal-to-noise ratios are relatively straightforward to achieve for RR Lyrae, Type 2 Cepheids, and red giant variables, correct identification remains a difficult issue, particularly when images are taken at V or longer wavelengths. We describe the project and report its progress using the OC61, TMO61, and SRO telescopes of AAVSOnet after the first year of image acquisition, and demonstrate several of the data products being developed for globular cluster variables.

  15. Stimuli-responsive clustered nanoparticles for improved tumor penetration and therapeutic efficacy.

    PubMed

    Li, Hong-Jun; Du, Jin-Zhi; Du, Xiao-Jiao; Xu, Cong-Fei; Sun, Chun-Yang; Wang, Hong-Xia; Cao, Zhi-Ting; Yang, Xian-Zhu; Zhu, Yan-Hua; Nie, Shuming; Wang, Jun

    2016-04-12

    A principal goal of cancer nanomedicine is to deliver therapeutics effectively to cancer cells within solid tumors. However, there are a series of biological barriers that impede nanomedicine from reaching target cells. Here, we report a stimuli-responsive clustered nanoparticle to systematically overcome these multiple barriers by sequentially responding to the endogenous attributes of the tumor microenvironment. The smart polymeric clustered nanoparticle (iCluster) has an initial size of ∼100 nm, which is favorable for long blood circulation and high propensity of extravasation through tumor vascular fenestrations. Once iCluster accumulates at tumor sites, the intrinsic tumor extracellular acidity would trigger the discharge of platinum prodrug-conjugated poly(amidoamine) dendrimers (diameter ∼5 nm). Such a structural alteration greatly facilitates tumor penetration and cell internalization of the therapeutics. The internalized dendrimer prodrugs are further reduced intracellularly to release cisplatin to kill cancer cells. The superior in vivo antitumor activities of iCluster are validated in varying intractable tumor models including poorly permeable pancreatic cancer, drug-resistant cancer, and metastatic cancer, demonstrating its versatility and broad applicability. PMID:27035960

  16. Subspace Leakage Analysis and Improved DOA Estimation With Small Sample Size

    NASA Astrophysics Data System (ADS)

    Shaghaghi, Mahdi; Vorobyov, Sergiy A.

    2015-06-01

    Classical methods of DOA estimation such as the MUSIC algorithm are based on estimating the signal and noise subspaces from the sample covariance matrix. For a small number of samples, such methods are exposed to performance breakdown, as the sample covariance matrix can largely deviate from the true covariance matrix. In this paper, the problem of DOA estimation performance breakdown is investigated. We consider the structure of the sample covariance matrix and the dynamics of the root-MUSIC algorithm. The performance breakdown in the threshold region is associated with subspace leakage, where some portion of the true signal subspace resides in the estimated noise subspace. In this paper, the subspace leakage is theoretically derived. We also propose a two-step method which improves the performance by modifying the sample covariance matrix such that the amount of the subspace leakage is reduced. Furthermore, we introduce a phenomenon named root-swap, which occurs in the root-MUSIC algorithm in the low-sample-size region and degrades the performance of the DOA estimation. A new method is then proposed to alleviate this problem. Numerical examples and simulation results are given for uncorrelated and correlated sources to illustrate the improvement achieved by the proposed methods. Moreover, the proposed algorithms are combined with the pseudo-noise resampling method to further improve the performance.
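
    The small-sample setting analyzed here is easy to reproduce with a baseline (spectral) MUSIC estimator; the sketch below simulates a half-wavelength uniform linear array with two sources and deliberately few snapshots:

```python
# Spectral MUSIC on a half-wavelength uniform linear array with two sources
# and few snapshots, the regime where subspace leakage degrades the estimate.
import numpy as np

p, snapshots, doas_deg = 10, 20, (-10.0, 15.0)
rng = np.random.default_rng(9)

def steering(theta_deg):
    return np.exp(1j * np.pi * np.arange(p) * np.sin(np.deg2rad(theta_deg)))

A = np.column_stack([steering(t) for t in doas_deg])
S = rng.normal(size=(2, snapshots)) + 1j * rng.normal(size=(2, snapshots))
noise = rng.normal(size=(p, snapshots)) + 1j * rng.normal(size=(p, snapshots))
X = A @ S + 0.3 * noise
R = X @ X.conj().T / snapshots                      # sample covariance matrix

_, eigvec = np.linalg.eigh(R)                       # eigenvalues ascending
En = eigvec[:, :p - 2]                              # noise subspace (2 sources known)
grid = np.linspace(-90, 90, 721)
spec = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2 for t in grid])

peaks = [i for i in range(1, len(grid) - 1) if spec[i - 1] < spec[i] > spec[i + 1]]
best = sorted(peaks, key=lambda i: spec[i])[-2:]
print("estimated DOAs (deg):", np.sort(grid[best]))
```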

  17. Poly L-lysine (PLL)-mediated porous hematite clusters as anode materials for improved Li-ion batteries

    NASA Astrophysics Data System (ADS)

    Kim, Kun-Woo; Lee, Sang-Wha

    2015-09-01

    Porous hematite clusters were prepared as anode materials for improved Li-ion batteries. First, poly-L-lysine (PLL)-linked Fe3O4 was facilely prepared via cross-linking between the positive amine groups of PLL and carboxylate-bound Fe3O4. Subsequent calcination transformed the PLL-linked Fe3O4 into porous hematite clusters (Fe2O3@PLL) consisting of spherical α-Fe2O3 particles. Compared with standard Fe2O3, Fe2O3@PLL exhibited improved electrochemical performance as an anode material. The discharge capacity of Fe2O3@PLL was retained at 814.7 mAh g-1 after 30 cycles, which is equivalent to 80.4% of the second discharge capacity, whereas standard Fe2O3 exhibited a retention capacity of 352.3 mAh g-1. The improved electrochemical performance of Fe2O3@PLL was mainly attributed to the porous hematite clusters with mesoporosity (20-40 nm), which facilitated ion transport, suggesting a useful guideline for the design of porous architectures with higher retention capacity.

  18. IMPROVED METHOD FOR ESTIMATING MOLECULAR WEIGHTS OF VOLATILE ORGANIC COMPOUNDS FROM LOW RESOLUTION MASS SPECTRA

    EPA Science Inventory

    An improved method of estimating the molecular weights of volatile organic compounds from their mass spectra has been developed and implemented with an expert system. The method is based on the strong correlation of MAXMASS, the highest mass with an intensity of 5% of the base peak in ...

  19. Assimilation of active and passive microwave observations for improved estimates of soil moisture and crop growth

    Technology Transfer Automated Retrieval System (TEKTRAN)

    An Ensemble Kalman Filter-based data assimilation framework that links a crop growth model with active and passive (AP) microwave models was developed to improve estimates of soil moisture (SM) and vegetation biomass over a growing season of soybean. Complementarities in AP observations were incorpo...

  20. An Overdetermined System for Improved Autocorrelation Based Spectral Moment Estimator Performance

    NASA Technical Reports Server (NTRS)

    Keel, Byron M.

    1996-01-01

    from a closed system is shown to improve through the application of additional autocorrelation lags in an overdetermined system. This improvement is greater in the narrowband spectrum region where the information is spread over more lags of the autocorrelation function. The number of lags needed in the overdetermined system is a function of the spectral width, the number of terms in the series expansion, the number of samples used in estimating the autocorrelation function, and the signal-to-noise ratio. The overdetermined system provides a robustness to the chosen variance estimator by expanding the region of spectral widths and signal-to-noise ratios over which the estimator can perform as compared to the closed system.

  1. An improved proportionate normalized least-mean-square algorithm for broadband multipath channel estimation.

    PubMed

    Li, Yingsong; Hamamura, Masanori

    2014-01-01

    To make use of the sparsity property of broadband multipath wireless communication channels, we mathematically propose an lp-norm-constrained proportionate normalized least-mean-square (LP-PNLMS) sparse channel estimation algorithm. A general lp-norm is weighted by the gain matrix and is incorporated into the cost function of the proportionate normalized least-mean-square (PNLMS) algorithm. This integration is equivalent to adding a zero attractor to the iterations, by which the convergence speed and steady-state performance of the inactive taps are significantly improved. Our simulation results demonstrate that the proposed algorithm can effectively improve the estimation performance of the PNLMS-based algorithm for sparse channel estimation applications. PMID:24782663
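
    A simplified sketch of the update the abstract describes, with illustrative (not the paper's) step sizes and attractor weight: a proportionate gain matrix speeds adaptation of the large taps, and an lp-norm-derived zero attractor pulls inactive taps toward zero:

```python
# Simplified PNLMS with an lp-norm zero attractor (illustrative constants, not
# the paper's): proportionate gains speed up active taps; the attractor pulls
# inactive taps toward zero. Identifies a synthetic sparse channel.
import numpy as np

rng = np.random.default_rng(10)
L, n_iter, p_norm = 64, 4000, 0.5
h = np.zeros(L); h[[3, 17, 40]] = [0.8, -0.5, 0.3]    # sparse multipath channel
w = np.zeros(L)
x = rng.normal(size=n_iter + L)
mu, rho, delta, gamma, eps = 0.5, 0.01, 1e-3, 1e-5, 1e-3

for n in range(n_iter):
    xv = x[n:n + L][::-1]
    e = xv @ h + 0.01 * rng.normal() - xv @ w          # a priori error
    g = np.maximum(rho * max(np.max(np.abs(w)), 1e-2), np.abs(w))
    G = g / g.sum()                                     # proportionate gain matrix
    w += mu * e * G * xv / (xv @ (G * xv) + delta)      # PNLMS update
    w -= gamma * np.sign(w) / (eps + np.abs(w)) ** (1 - p_norm)  # lp zero attractor

print("misalignment (dB):", 10 * np.log10(np.sum((w - h) ** 2) / np.sum(h ** 2)))
```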

  2. An improved technique for global solar radiation estimation using numerical weather prediction

    NASA Astrophysics Data System (ADS)

    Shamim, M. A.; Remesan, R.; Bray, M.; Han, D.

    2015-07-01

    Global solar radiation is the driving force in the hydrological cycle, especially for evapotranspiration (ET), and is quite infrequently measured. This has led to a reliance on indirect estimation techniques in data-scarce regions. This study presents an improved technique that uses information from a numerical weather prediction (NWP) model (the National Center for Atmospheric Research's Mesoscale Meteorological model version 5, MM5) to determine a cloud cover index (CI), a major factor in the attenuation of incident solar radiation. The cloud cover index (CI), together with the atmospheric transmission factor (KT) and the output of a global clear-sky solar radiation model, was then used to estimate global solar radiation for the Brue catchment in the southwest of England. The results clearly show an improvement in the estimated global solar radiation compared with prevailing approaches.

  3. An Improved Proportionate Normalized Least-Mean-Square Algorithm for Broadband Multipath Channel Estimation

    PubMed Central

    2014-01-01

    To make use of the sparsity property of broadband multipath wireless communication channels, we mathematically propose an lp-norm-constrained proportionate normalized least-mean-square (LP-PNLMS) sparse channel estimation algorithm. A general lp-norm is weighted by the gain matrix and is incorporated into the cost function of the proportionate normalized least-mean-square (PNLMS) algorithm. This integration is equivalent to adding a zero attractor to the iterations, by which the convergence speed and steady-state performance of the inactive taps are significantly improved. Our simulation results demonstrate that the proposed algorithm can effectively improve the estimation performance of the PNLMS-based algorithm for sparse channel estimation applications. PMID:24782663

  4. Improved proper motion determinations for 15 open clusters based on the UCAC4 catalog

    NASA Astrophysics Data System (ADS)

    Kurtenkov, Alexander; Dimitrova, Nadezhda; Atanasov, Alexander; Aleksiev, Teodor D.

    2016-07-01

    The proper motions of 15 nearby (d < 1 kpc) open clusters (OCs) were recalculated using data from the UCAC4 catalog. Only evolved or main-sequence stars inside a certain radius from the center of each cluster were used. The results significantly differ from the ones presented by Dias et al. (2014). This could be explained by a different approach in which we take field-star contamination into account. The present work aims to emphasize the importance of applying photometric criteria in the calculation of OC proper motions.

  5. Improving the blind restoration of retinal images by means of point-spread-function estimation assessment

    NASA Astrophysics Data System (ADS)

    Marrugo, Andrés G.; Millán, María S.; Šorel, Michal; Kotera, Jan; Šroubek, Filip

    2015-01-01

    Retinal images often suffer from blurring, which hinders disease diagnosis and progression assessment. The restoration of the images is carried out by means of blind deconvolution, but the success of the restoration depends on the correct estimation of the point-spread function (PSF) that blurred the image. The restoration can be space-invariant or space-variant. Because a retinal image has regions without texture or sharp edges, the blind PSF estimation may fail. In this paper we propose a strategy for the correct assessment of PSF estimation in retinal images for restoration by means of space-invariant or space-variant blind deconvolution. Our method is based on a decomposition of the estimated PSFs into Zernike coefficients to identify valid PSFs. This significantly improves the quality of the image restoration, revealed by the increased visibility of small details like small blood vessels and by the lack of restoration artifacts.

  6. Improving Ocean Angular Momentum Estimates Using a Model Constrained by Data

    NASA Technical Reports Server (NTRS)

    Ponte, Rui M.; Stammer, Detlef; Wunsch, Carl

    2001-01-01

    Ocean angular momentum (OAM) calculations using forward model runs without any data constraints have recently revealed the effects of OAM variability on the Earth's rotation. Here we use an ocean model and its adjoint to estimate OAM values by constraining the model to available oceanic data. The optimization procedure yields substantial changes in OAM, related to adjustments in both motion and mass fields, as well as in the wind stress torques acting on the ocean. Constrained and unconstrained OAM values are discussed in the context of closing the planet's angular momentum budget. The estimation procedure yields noticeable improvements in the agreement with the observed Earth rotation parameters, particularly at the seasonal timescale. The comparison with Earth rotation measurements provides an independent consistency check on the estimated ocean state and underlines the importance of ocean state estimation for quantitative studies of the variable large-scale oceanic mass and circulation fields, including studies of OAM.

  7. Improving Estimates of m sin i by Expanding RV Data Sets

    NASA Astrophysics Data System (ADS)

    Brown, Robert A.

    2016-07-01

    We develop new techniques for estimating the fractional uncertainty (F) in the projected planetary mass (m sin i) resulting from Keplerian fits to radial-velocity (RV) data sets of known Jupiter-class exoplanets. The techniques include (1) estimating the distribution of m sin i using projection, (2) detecting and mitigating chimeras, a source of systematic error, and (3) estimating the reduction in the uncertainty in m sin i if hypothetical observations were made in the future. We demonstrate the techniques on a representative set of RV exoplanets, known as the Sample of 27, which are candidates for detection and characterization by a future astrometric direct imaging mission. We estimate the improvements (reductions) in F due to additional, hypothetical RV measurements obtained in the future. We encounter and address a source of systematic error, “chimeras,” which can appear when multiple types of Keplerian solutions are compatible with a single data set.

  8. An improved time series approach for estimating groundwater recharge from groundwater level fluctuations

    NASA Astrophysics Data System (ADS)

    Cuthbert, M. O.

    2010-09-01

    An analytical solution to a linearized Boussinesq equation is extended to develop an expression for groundwater drainage using estimates of aquifer parameters. This is then used to develop an improved water table fluctuation (WTF) technique for estimating groundwater recharge. The resulting method extends the standard WTF technique by making it applicable in areas with smoothly varying water tables, provided the aquifer properties of the area are relatively well known, and it does not rely on precipitation data. The method is validated against numerical simulations and a case study from a catchment where recharge is "known" a priori using other means. The approach may also be inverted to provide initial estimates of aquifer parameters in areas where recharge can be reliably estimated by other methods.
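
    In its simplest form, the resulting estimator scales the water-table rise rate, augmented by a drainage term from the linearized Boussinesq solution, by the specific yield. The sketch below uses a synthetic head series and an assumed linear drainage law:

```python
# Water-table fluctuation sketch with a drainage term: recharge = Sy * (dh/dt
# + drainage rate), from the linearized Boussinesq solution. Head series,
# specific yield, and the linear drainage law are all assumed values.
import numpy as np

Sy = 0.02                                             # specific yield
t = np.arange(100.0)                                  # days
rise = np.where((t > 40) & (t < 50), 0.3, 0.0)        # recharge-driven rise, m
head = 10.0 + 0.5 * np.exp(-t / 30.0) + rise          # water-table elevation, m

dh_dt = np.gradient(head, t)                          # m/day
drainage = (head - 10.0) / 60.0                       # assumed linear drainage, m/day
recharge = np.clip(Sy * (dh_dt + drainage), 0.0, None)

print(f"total recharge over the period: {1000.0 * recharge.sum():.1f} mm")
```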

  9. The Use of Radar to Improve Rainfall Estimation over the Tennessee and San Joaquin River Valleys

    NASA Technical Reports Server (NTRS)

    Petersen, Walter A.; Gatlin, Patrick N.; Felix, Mariana; Carey, Lawrence D.

    2010-01-01

    This slide presentation provides an overview of the collaborative radar rainfall project between the Tennessee Valley Authority (TVA), the Von Braun Center for Science & Innovation (VCSI), NASA MSFC, and UAHuntsville. Two systems were used in this project: the Advanced Radar for Meteorological & Operational Research (ARMOR) Rainfall Estimation Processing System (AREPS), a demonstration project for real-time radar rainfall using a research radar, and the NEXRAD Rainfall Estimation Processing System (NREPS). The objectives, methodology, some results and validation, operational experience, and lessons learned are reviewed. Another project using radar to improve rainfall estimates is underway in California, specifically the San Joaquin River Valley, as part of an overall project to develop an integrated tool to assist water management within the San Joaquin River Valley. This involves integrating several components: (1) radar precipitation estimates, (2) a distributed hydrologic model, and (3) snowfall and surface temperature/moisture measurements. NREPS was selected to provide the precipitation component.

  10. Multiple data sources improve DNA-based mark-recapture population estimates of grizzly bears.

    PubMed

    Boulanger, John; Kendall, Katherine C; Stetz, Jeffrey B; Roon, David A; Waits, Lisette P; Paetkau, David

    2008-04-01

    A fundamental challenge to estimating population size with mark-recapture methods is heterogeneous capture probabilities and subsequent bias of population estimates. Confronting this problem usually requires substantial sampling effort that can be difficult to achieve for some species, such as carnivores. We developed a methodology that uses two data sources to deal with heterogeneity and applied this to DNA mark-recapture data from grizzly bears (Ursus arctos). We improved population estimates by incorporating additional DNA "captures" of grizzly bears obtained by collecting hair from unbaited bear rub trees concurrently with baited, grid-based, hair snag sampling. We consider a Lincoln-Petersen estimator with hair snag captures as the initial session and rub tree captures as the recapture session and develop an estimator in program MARK that treats hair snag and rub tree samples as successive sessions. Using empirical data from a large-scale project in the greater Glacier National Park, Montana, USA, area and simulation modeling we evaluate these methods and compare the results to hair-snag-only estimates. Empirical results indicate that, compared with hair-snag-only data, the joint hair-snag-rub-tree methods produce similar but more precise estimates if capture and recapture rates are reasonably high for both methods. Simulation results suggest that estimators are potentially affected by correlation of capture probabilities between sample types in the presence of heterogeneity. Overall, closed population Huggins-Pledger estimators showed the highest precision and were most robust to sparse data, heterogeneity, and capture probability correlation among sampling types. Results also indicate that these estimators can be used when a segment of the population has zero capture probability for one of the methods. We propose that this general methodology may be useful for other species in which mark-recapture data are available from multiple sources. PMID:18488618
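
    The two-source idea reduces, in its simplest closed-population form, to a Lincoln-Petersen calculation with hair-snag captures as the first session and rub-tree captures as the second; the counts below are illustrative, not the study's data (the bias-corrected Chapman variant is also shown):

```python
# Two-session mark-recapture: hair snags as session 1, rub trees as session 2.
# Counts are illustrative. Chapman's version corrects the small-sample bias
# of the Lincoln-Petersen estimator.
n1 = 180   # bears identified at baited hair snags
n2 = 120   # bears identified at rub trees
m = 45     # bears identified by both methods

lincoln_petersen = n1 * n2 / m
chapman = (n1 + 1) * (n2 + 1) / (m + 1) - 1
var_chapman = ((n1 + 1) * (n2 + 1) * (n1 - m) * (n2 - m)) / ((m + 1) ** 2 * (m + 2))

print(f"Lincoln-Petersen: {lincoln_petersen:.0f} bears")
print(f"Chapman: {chapman:.0f} +/- {var_chapman ** 0.5:.0f} (SE)")
```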

  11. Improving Google Flu Trends Estimates for the United States through Transformation

    PubMed Central

    Martin, Leah J.; Xu, Biying; Yasui, Yutaka

    2014-01-01

    Google Flu Trends (GFT) uses Internet search queries in an effort to provide early warning of increases in influenza-like illness (ILI). In the United States, GFT estimates the percentage of physician visits related to ILI (%ILINet) reported by the Centers for Disease Control and Prevention (CDC). However, during the 2012–13 influenza season, GFT overestimated %ILINet by an appreciable amount and estimated the peak in incidence three weeks late. Using data from 2010–14, we investigated the relationship between GFT estimates (%GFT) and %ILINet. Based on the relationship between the relative change in %GFT and the relative change in %ILINet, we transformed %GFT estimates to better correspond with %ILINet values. In 2010–13, our transformed %GFT estimates were within ±10% of %ILINet values for 17 of the 29 weeks that %ILINet was above the seasonal baseline value determined by the CDC; in contrast, the original %GFT estimates were within ±10% of %ILINet values for only two of these 29 weeks. Relative to the %ILINet peak in 2012–13, the peak in our transformed %GFT estimates was 2% lower and one week later, whereas the peak in the original %GFT estimates was 74% higher and three weeks later. The same transformation improved %GFT estimates using the recalibrated 2013 GFT model in early 2013–14. Our transformed %GFT estimates can be calculated approximately one week before %ILINet values are reported by the CDC and the transformation equation was stable over the time period investigated (2010–13). We anticipate our results will facilitate future use of GFT. PMID:25551391
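
    The abstract does not give the transformation's closed form; one reading consistent with its description (based on relative changes, and computable about a week before %ILINet is reported) is to carry the last reported %ILINet value forward by the relative change in %GFT. A hypothetical sketch, with all names and values illustrative:

      # Hypothetical sketch of a relative-change transformation: carry the
      # most recent reported %ILINet value forward by the week-over-week
      # relative change in %GFT. This is one illustrative reading of the
      # abstract, not the authors' published equation.

      def transform_gft(ilinet_last_week, gft_this_week, gft_last_week):
          return ilinet_last_week * (gft_this_week / gft_last_week)

      print(transform_gft(ilinet_last_week=2.1, gft_this_week=3.0,
                          gft_last_week=2.5))  # -> 2.52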

  12. Enhancing e-waste estimates: Improving data quality by multivariate Input–Output Analysis

    SciTech Connect

    Wang, Feng; Huisman, Jaco; Stevels, Ab; Baldé, Cornelis Peter

    2013-11-15

    Highlights: • A multivariate Input–Output Analysis method for e-waste estimates is proposed. • Applying multivariate analysis to consolidate data can enhance e-waste estimates. • We examine the influence of model selection and data quality on e-waste estimates. • Datasets of all e-waste related variables in a Dutch case study have been provided. • Accurate modeling of time-variant lifespan distributions is critical for estimates. - Abstract: Waste electrical and electronic equipment (or e-waste) is one of the fastest growing waste streams, which encompasses a wide and increasing spectrum of products. Accurate estimation of e-waste generation is difficult, mainly due to lack of high quality data on market and socio-economic dynamics. This paper addresses how to enhance e-waste estimates by providing techniques to increase data quality. An advanced, flexible and multivariate Input–Output Analysis (IOA) method is proposed. It links all three pillars in IOA (product sales, stock and lifespan profiles) to construct mathematical relationships between various data points. By applying this method, the data consolidation steps can generate more accurate time-series datasets from the available data pool. This can consequently increase the reliability of e-waste estimates compared to the approach without data processing. A case study in the Netherlands is used to apply the advanced IOA model. As a result, for the first time ever, complete datasets of all three variables for estimating all types of e-waste have been obtained. The result of this study also demonstrates significant disparity between various estimation models, arising from the use of data under different conditions. It shows the importance of applying a multivariate approach and multiple sources to improve data quality for modelling, specifically using appropriate time-varying lifespan parameters. Following the case study, a roadmap with a procedural guideline is provided to enhance e-waste estimation studies.
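
    The sales/stock/lifespan linkage at the core of IOA can be sketched as a convolution of past sales with a lifespan profile; the Weibull profile and all parameters below are illustrative assumptions, not the paper's fitted values:

      # Illustrative sketch of the sales/lifespan pillar of Input-Output
      # Analysis: e-waste generated in year t is the sum of past sales
      # weighted by the probability of discard after a lifespan of tau
      # years. A Weibull lifespan profile with hypothetical parameters.
      import math

      def weibull_pmf(tau, shape=2.0, scale=8.0):
          # Probability a unit is discarded in year tau (discretized Weibull).
          cdf = lambda x: 1.0 - math.exp(-((x / scale) ** shape))
          return cdf(tau + 1) - cdf(tau)

      def waste_generated(sales, year_index):
          # sales[k] = units put on the market in year k (k <= year_index).
          return sum(sales[k] * weibull_pmf(year_index - k)
                     for k in range(year_index + 1))

      sales = [100, 120, 130, 150, 160, 170, 180, 190]  # units per year
      print(round(waste_generated(sales, year_index=7), 1))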

  13. Clustering of heterogeneous precipitation fields for the assessment and possible improvement of lumped neural network models for streamflow forecasts

    NASA Astrophysics Data System (ADS)

    Lauzon, N.; Anctil, F.; Baxter, C. W.

    2006-07-01

    This work addresses the issue of better considering the heterogeneity of precipitation fields within lumped rainfall-runoff models where only areal mean precipitation is usually used as an input. A method using a Kohonen neural network is proposed for the clustering of precipitation fields. The evaluation and improvement of the performance of a lumped rainfall-runoff model for one-day ahead predictions is then established based on this clustering. Multilayer perceptron neural networks are employed as lumped rainfall-runoff models. The Bas-en-Basset watershed in France, which is equipped with 23 rain gauges with data for a 21-year period, is employed as the application case. The results demonstrate the relevance of the proposed clustering method, which produces groups of precipitation fields that are in agreement with the global climatological features affecting the region, as well as with the topographic constraints of the watershed (i.e., orography). The strengths and weaknesses of the rainfall-runoff models are highlighted by the analysis of their performance vis-à-vis the clustering of precipitation fields. The results also show the capability of multilayer perceptron neural networks to account for the heterogeneity of precipitation, even when built as lumped rainfall-runoff models.
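
    A minimal Kohonen network of the kind used for this clustering, sketched on synthetic gauge vectors (the network size, decay schedules, and data are illustrative, not the study's configuration):

      # Minimal 1-D Kohonen self-organizing map, sketching how precipitation
      # fields (one value per rain gauge) can be grouped into a small number
      # of prototype patterns. Data and network size are illustrative.
      import numpy as np

      rng = np.random.default_rng(0)
      fields = rng.gamma(2.0, 3.0, size=(500, 23))  # 500 fields, 23 gauges

      n_nodes = 6
      weights = fields[rng.choice(len(fields), n_nodes, replace=False)].copy()

      for epoch in range(20):
          lr = 0.5 * (1 - epoch / 20)                        # learning rate
          radius = max(1.0, n_nodes / 2 * (1 - epoch / 20))  # neighborhood
          for x in fields[rng.permutation(len(fields))]:
              bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best unit
              dist = np.abs(np.arange(n_nodes) - bmu)
              h = np.exp(-(dist ** 2) / (2 * radius ** 2))   # kernel
              weights += lr * h[:, None] * (x - weights)

      labels = np.argmin(((fields[:, None, :] - weights[None]) ** 2).sum(-1),
                         axis=1)
      print(np.bincount(labels, minlength=n_nodes))  # fields per cluster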

  14. Simultaneous Estimation of Photometric Redshifts and SED Parameters: Improved Techniques and a Realistic Error Budget

    NASA Astrophysics Data System (ADS)

    Acquaviva, Viviana; Raichoor, Anand; Gawiser, Eric

    2015-05-01

    We seek to improve the accuracy of joint galaxy photometric redshift estimation and spectral energy distribution (SED) fitting. By simulating different sources of uncorrected systematic errors, we demonstrate that if the uncertainties in the photometric redshifts are estimated correctly, so are those on the other SED fitting parameters, such as stellar mass, stellar age, and dust reddening. Furthermore, we find that if the redshift uncertainties are over(under)-estimated, the uncertainties in SED parameters tend to be over(under)-estimated by similar amounts. These results hold even in the presence of severe systematics and provide, for the first time, a mechanism to validate the uncertainties on these parameters via comparison with spectroscopic redshifts. We propose a new technique (annealing) to re-calibrate the joint uncertainties in the photo-z and SED fitting parameters without compromising the performance of the SED fitting + photo-z estimation. This procedure provides a consistent estimation of the multi-dimensional probability distribution function in SED fitting + z parameter space, including all correlations. While the performance of joint SED fitting and photo-z estimation might be hindered by template incompleteness, we demonstrate that the latter is “flagged” by a large fraction of outliers in redshift, and that significant improvements can be achieved by using flexible stellar population synthesis models and more realistic star formation histories. In all cases, we find that the median stellar age is better recovered than the time elapsed from the onset of star formation. Finally, we show that using a photometric redshift code such as EAZY to obtain redshift probability distributions that are then used as priors for SED fitting codes leads to only a modest bias in the SED fitting parameters and is thus a viable alternative to the simultaneous estimation of SED parameters and photometric redshifts.

  15. Combining SIP and NMR Measurements to Develop Improved Estimates of Permeability in Sandstone Cores

    NASA Astrophysics Data System (ADS)

    Keating, K.; Binley, A. M.

    2013-12-01

    Permeability is traditionally measured in-situ by inducing groundwater flow using pumping, slug, or packer tests; however, these methods require the existence of wells, can be labor intensive, and can be constrained by measurement support volumes. Indirect estimates of permeability based on geophysical techniques benefit from relatively short measurement times, do not require fluid extraction, and are non-invasive when made from the surface (or minimally invasive when made in a borehole). However, estimates of permeability based on a single geophysical method often require calibration for rock type, and cannot be used to uniquely determine all of the physical properties required to accurately determine permeability. In this laboratory study we present the first critical step towards developing a method for estimating permeability based on the synergistic coupling of two complementary geophysical methods: spectral induced polarization (SIP) and nuclear magnetic resonance (NMR). To develop an improved model for estimating permeability, laboratory SIP and NMR measurements were collected on a series of sandstone cores covering a wide range of permeabilities. Current models for estimating permeability from each individual geophysical measurement were compared to independently obtained estimates of permeability. The comparison confirmed previous research showing that estimates from SIP or NMR alone only yield the permeability within order-of-magnitude accuracy and must be calibrated for rock type. Next, the geophysical parameters determined from SIP and NMR were compared to independent measurements of the physical properties of the sandstone cores, including gravimetric porosity and pore-size distributions (obtained from mercury injection porosimetry); this comparison was used to evaluate which geophysical parameter more consistently and accurately predicted each physical property. Finally, we present an improved method for estimating permeability in sandstone cores based
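
    For context, a widely used NMR-based permeability model of the kind that must be calibrated per rock type is the Schlumberger-Doll (SDR) relation, k = C * phi^4 * T2lm^2; the coefficient and inputs below are hypothetical:

      # Illustrative SDR (Schlumberger-Doll) NMR permeability relation of
      # the kind calibrated per rock type: k = C * phi**4 * t2lm**2.
      # Coefficient and inputs are hypothetical.

      def sdr_permeability(phi, t2lm_ms, c=4.0):
          """Permeability in mD from porosity (fraction), log-mean T2 (ms)."""
          return c * phi ** 4 * t2lm_ms ** 2

      print(sdr_permeability(phi=0.20, t2lm_ms=80.0))  # ~41 mD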

  16. Spatially Explicit Estimation of Optimal Light Use Efficiency for Improved Satellite Data Driven Ecosystem Productivity Modeling

    NASA Astrophysics Data System (ADS)

    Madani, N.; Kimball, J. S.; Running, S. W.

    2014-12-01

    Remote sensing based light use efficiency (LUE) models, including the MODIS (MODerate resolution Imaging Spectroradiometer) MOD17 algorithm, are commonly used for regional estimation and monitoring of vegetation gross primary production (GPP) and photosynthetic carbon (CO2) uptake. A common model assumption is that plants in a biome matrix operate at their photosynthetic capacity under optimal climatic conditions. A prescribed biome maximum light use efficiency parameter defines the maximum photosynthetic carbon conversion rate under prevailing climate conditions and is a large source of model uncertainty. Here, we used tower (FLUXNET) eddy covariance measurement based carbon flux data for estimating optimal LUE (LUEopt) over a North American domain. LUEopt was first estimated using tower observed daily carbon fluxes, meteorology and satellite (MODIS) observed fraction of photosynthetically active radiation (FPAR). LUEopt was then spatially interpolated over the domain using empirical models derived from independent geospatial data including global plant traits, surface soil moisture, terrain aspect, land cover type and percent tree cover. The derived LUEopt maps were then used as primary inputs to the MOD17 LUE algorithm for regional GPP estimation; these results were evaluated against tower observations and alternate MOD17 GPP estimates determined using biome-specific LUEopt constants. Estimated LUEopt shows large spatial variability within and among different land cover classes, as indicated by a sparse North American tower network. Leaf nitrogen content and soil moisture are two important factors explaining LUEopt spatial variability. GPP estimated from spatially explicit LUEopt inputs shows significantly improved model accuracy against independent tower observations (R2 = 0.76; mean RMSE < 257 g C m-2 yr-1) relative to GPP modeled using biome-specific LUEopt constants (R2 = 0.34; RMSE = 439 g C m-2 yr-1). We show that general landscape and plant trait information
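
    The LUE logic being modified here can be sketched as maximum LUE downscaled by temperature and dryness scalars and multiplied by absorbed PAR; the ramp endpoints below stand in for MOD17's biome property lookup table and are illustrative, not its actual constants:

      # Sketch of the light use efficiency (LUE) logic of a MOD17-style GPP
      # model: GPP = LUEopt * f(Tmin) * f(VPD) * FPAR * PAR. Ramp endpoints
      # are illustrative, not the operational lookup table values.

      def ramp(x, x0, x1):
          # Linear 0-1 scalar between x0 and x1.
          return min(1.0, max(0.0, (x - x0) / (x1 - x0)))

      def gpp(lue_opt, tmin_c, vpd_pa, fpar, par_mj):
          f_tmin = ramp(tmin_c, -8.0, 12.0)          # cold-temperature limit
          f_vpd = 1.0 - ramp(vpd_pa, 650.0, 4000.0)  # dryness limit
          return lue_opt * f_tmin * f_vpd * fpar * par_mj  # g C m-2 d-1

      print(gpp(lue_opt=1.2, tmin_c=5.0, vpd_pa=1200.0, fpar=0.7, par_mj=8.0))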

  17. Improved factor analysis of dynamic PET images to estimate arterial input function and tissue curves

    NASA Astrophysics Data System (ADS)

    Boutchko, Rostyslav; Mitra, Debasis; Pan, Hui; Jagust, William; Gullberg, Grant T.

    2015-03-01

    Factor analysis of dynamic structures (FADS) is a methodology for extracting time-activity curves (TACs) corresponding to different tissue types from noisy dynamic images. The challenges of FADS include long computation time and sensitivity to the initial guess, resulting in convergence to local minima far from the true solution. We propose a method of accelerating and stabilizing FADS application to sequences of dynamic PET images by adding preliminary cluster analysis of the time-activity curves for individual voxels. We treat the temporal variation of individual voxel concentrations as a set of time series and use a partial clustering analysis to identify the types of voxel TACs that are most functionally distinct from each other. These TACs provide a good initial guess for the temporal factors for subsequent FADS processing. Applying this approach to a set of single slices of dynamic 11C-PIB images of the brain allows identification of the arterial input function and two different tissue TACs that are likely to correspond to the specific and non-specific tracer binding-tissue types. These results enable us to perform direct classification of tissues based on their pharmacokinetic properties in dynamic PET without relying on a compartment-based kinetic model, identifying a reference region, or using any external method of estimating the arterial input function, as needed in some techniques.
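
    The initialization idea can be sketched with generic tools: cluster the voxel TACs, then warm-start a nonnegative factorization from the cluster centroids. K-means and NMF below are stand-ins for the authors' partial clustering and FADS steps, and the data are synthetic:

      # Sketch of cluster-initialized factor analysis: k-means on voxel
      # time-activity curves (TACs), with centroids used as initial temporal
      # factors for a nonnegative factorization. Generic stand-in for FADS.
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.decomposition import NMF

      rng = np.random.default_rng(1)
      tacs = rng.random((2000, 30))  # 2000 voxels x 30 frames (synthetic)

      k = 3
      km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(tacs)
      init_factors = np.maximum(km.cluster_centers_, 1e-6)  # temporal init

      # Warm-start the factorization with the cluster centroids.
      nmf = NMF(n_components=k, init="custom", max_iter=500)
      w0 = np.maximum(rng.random((tacs.shape[0], k)), 1e-6)
      coeffs = nmf.fit_transform(tacs, W=w0, H=init_factors)
      factors = nmf.components_  # refined temporal factors (k x 30)
      print(factors.shape)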

  18. An improved radiative transfer model for estimating mineral abundance of immature and mature lunar soils

    NASA Astrophysics Data System (ADS)

    Liu, Dawei; Li, Lin; Sun, Ying

    2015-06-01

    An improved Hapke's radiative transfer model (RTM) is presented to estimate mineral abundance for both immature and mature lunar soils from the Lunar Soil Characterization Consortium (LSCC) dataset. Fundamental to this improved Hapke's model is the application of an alternative equation to describe the effects of larger size submicroscopic metallic iron (SMFe) (>50 nm) in the interior of agglutinitic glass, which mainly darkens the host material, in contrast to the darkening and reddening effects of smaller size SMFe (<50 nm) residing in the rims of mineral grains. Results from applying a nonlinear inversion procedure to the improved Hapke's RTM show that the average mass fractions of smaller and larger size SMFe in lunar soils were estimated to be 0.30% and 0.31%, respectively, and the derived particle sizes of the soil samples are all within their measured ranges. Based on the derived mass fraction of SMFe and particle size of the soil samples, abundances of end-member components composing the lunar soil samples were derived by minimizing the difference between measured and calculated spectra. The root mean square error (RMSE) between the fitted and measured spectra is lower than 0.01 for highland samples and 0.005 for mare samples. This improved Hapke's model accurately estimates abundances of agglutinitic glass (R-squared = 0.88), pyroxene (R-squared = 0.69) and plagioclase (R-squared = 0.95) for all 57 samples used in this study, including both immature and mature lunar soils. However, the improved Hapke's RTM shows poor performance for quantifying abundances of olivine, ilmenite and volcanic glass. Improving the model performance for estimation of these three end-member components is the central focus of our future work.

  19. Improved weighting methods, deterministic and stochastic data-driven models for estimation of missing precipitation records

    NASA Astrophysics Data System (ADS)

    Teegavarapu, Ramesh S. V.; Chandramouli, V.

    2005-10-01

    Distance-weighted and data-driven methods are extensively used for estimation of missing rainfall data. The inverse distance weighting method (IDWM) is one of the most frequently used methods for estimating missing rainfall values at a gage based on values recorded at all other available recording gages. In spite of the method's wide success and acceptability, it suffers from major conceptual limitations. Conceptual improvements are incorporated into IDWM, leading to several modified distance-based methods. A data-driven model that uses artificial neural network concepts and a stochastic interpolation technique, kriging, are also developed and tested in the current study. These methods are tested for estimation of missing precipitation data. Historical precipitation data from 20 rain-gauging stations in the state of Kentucky, USA, are used to test the improved methods and derive conclusions about the efficacy of the incorporated improvements. Results suggest that the conceptual revisions can improve estimation of missing precipitation records by defining better weighting parameters and surrogate measures for the distances used in the IDWM.
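
    For reference, the baseline IDWM that the modified distance-based methods build on, with the usual inverse-distance weights (the exponent and coordinates are illustrative):

      # Minimal inverse distance weighting method (IDWM): estimate a missing
      # rainfall value at a target gage from surrounding gage values.
      import numpy as np

      def idwm(target_xy, gage_xy, gage_values, k=2.0):
          d = np.linalg.norm(gage_xy - target_xy, axis=1)
          if np.any(d == 0):               # coincident gage: use it directly
              return gage_values[np.argmin(d)]
          w = d ** -k                      # inverse-distance weights
          return float(np.sum(w * gage_values) / np.sum(w))

      gage_xy = np.array([[0.0, 1.0], [2.0, 0.0], [1.0, 3.0]])
      gage_values = np.array([12.0, 7.5, 9.0])
      print(idwm(np.array([1.0, 1.0]), gage_xy, gage_values))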

  20. Estimation of local fleet characteristics data for improved emission inventory development

    SciTech Connect

    Heiken, J.; Pollack, A.; Austin, B.

    1996-12-31

    Considerable effort in recent years has been focused on the improvement of on-road mobile source emission factors with much less attention paid to the refinement of activity and fleet characteristics estimates. Current emissions modeling practices commonly use emission factor model defaults or statewide averages for fleet and activity data. As part of the US EPA's Emission Inventory Improvement Program (EIIP), ENVIRON developed methodologies to derive locality-specific fleet characteristics data from existing data sources in order to improve local emission inventory estimates. Data sources examined included remote sensing studies and inspection and maintenance (I/M) program data. In this paper, we focus on two specific examples: (1) the calculation of mileage accumulation rates from Arizona I/M program data, and (2) the calculation of registration distribution from a Sacramento remote sensing database. In both examples, differences exist between the calculated distributions and those currently used for air quality modeling, resulting in significant impacts on the estimated mobile source emissions inventory. For example, use of the automobile registration distribution data derived from the Sacramento Pilot I/M Program remote sensing database results in an increase in estimated automobile TOG, CO and NO{sub x} emissions of 15, 24 and 17 percent, respectively, when used in place of the default registration distribution in the current California Air Resources Board MVEI7G emissions model.

  1. Improving Remotely-sensed Precipitation Estimates Over Mountainous Regions For Use In Hydrological Models

    NASA Astrophysics Data System (ADS)

    Yucel, I.; Akcelik, M.; Kuligowski, R. J.

    2014-12-01

    In support of the National Oceanic and Atmospheric Administration (NOAA) National Weather Service's (NWS) flash flood warning and heavy precipitation forecast efforts, the NOAA National Environmental Satellite Data and Information Service (NESDIS) Center for Satellite Applications and Research (STAR) has been providing satellite-based precipitation estimates operationally since 1978. Two of the satellite-based rainfall algorithms are the Hydro-Estimator (HE) and the Self-Calibrating Multivariate Precipitation Retrieval (SCaMPR). However, unlike the HE algorithm, SCaMPR does not currently make any adjustments for the effects of complex topography on rainfall. This study investigates the potential for improving the SCaMPR algorithm by incorporating orographic and humidity corrections, calibrating SCaMPR against rain gauge transects in northwestern Mexico to identify correctable biases related to elevation, slope, wind direction and humidity. The elevation-dependent bias structure of the SCaMPR algorithm suggests that the rainfall algorithm underestimates precipitation where air moves upward along mountainous terrain and overestimates it where air moves downward. A regionally dependent empirical elevation-based bias correction technique may help improve the quality of satellite-derived precipitation products. In addition to orography, the effect of atmospheric indices on precipitation estimates is analyzed. The findings suggest that continued improvement of the developed orographic correction scheme is warranted in order to advance quantitative precipitation estimation in complex terrain regions for use in weather forecasting and hydrologic applications.
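
    An empirical elevation-dependent bias correction of the kind proposed can be sketched as fitting the satellite/gauge ratio against elevation and dividing new estimates by the predicted bias; the linear form and all values below are illustrative assumptions:

      # Sketch of an empirical elevation-dependent bias correction: fit the
      # satellite/gauge ratio as a linear function of elevation, then divide
      # new satellite estimates by the predicted bias. Linear form and data
      # are illustrative assumptions.
      import numpy as np

      elev_m = np.array([200.0, 600.0, 1100.0, 1700.0, 2400.0])
      gauge_mm = np.array([5.0, 6.2, 8.1, 10.4, 13.0])
      sat_mm = np.array([5.5, 6.0, 6.9, 7.8, 9.1])

      bias = sat_mm / gauge_mm                 # <1 means underestimation
      slope, intercept = np.polyfit(elev_m, bias, 1)

      def corrected(sat_estimate, elevation):
          predicted_bias = slope * elevation + intercept
          return sat_estimate / predicted_bias

      print(round(corrected(7.0, 2000.0), 2))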

  2. Improving PAGER's real-time earthquake casualty and loss estimation toolkit: a challenge

    USGS Publications Warehouse

    Jaiswal, K.S.; Wald, D.J.

    2012-01-01

    We describe the on-going developments of PAGER’s loss estimation models, and discuss value-added web content that can be generated related to exposure, damage and loss outputs for a variety of PAGER users. These developments include identifying vulnerable building types in any given area, estimating earthquake-induced damage and loss statistics by building type, and developing visualization aids that help locate areas of concern for improving post-earthquake response efforts. While detailed exposure and damage information is highly useful and desirable, significant improvements are still necessary in order to improve underlying building stock and vulnerability data at a global scale. Existing efforts with the GEM’s GED4GEM and GVC consortia will help achieve some of these objectives. This will benefit PAGER especially in regions where PAGER’s empirical model is less-well constrained; there, the semi-empirical and analytical models will provide robust estimates of damage and losses. Finally, we outline some of the challenges associated with rapid casualty and loss estimation that we experienced while responding to recent large earthquakes worldwide.

  3. Uncertainty in Population Estimates for Endangered Animals and Improving the Recovery Process.

    PubMed

    Haines, Aaron M; Zak, Matthew; Hammond, Katie; Scott, J Michael; Goble, Dale D; Rachlow, Janet L

    2013-01-01

    United States recovery plans contain biological information for a species listed under the Endangered Species Act and specify recovery criteria to provide a basis for species recovery. The objective of our study was to evaluate whether recovery plans provide uncertainty (e.g., variance) with estimates of population size. We reviewed all finalized recovery plans for listed terrestrial vertebrate species to record the following data: (1) whether a current population size was given, (2) whether a measure of uncertainty or variance was associated with current estimates of population size, and (3) whether population size was stipulated for recovery. We found that 59% of completed recovery plans specified a current population size, 14.5% specified a variance for the current population size estimate, and 43% specified population size as a recovery criterion. More recent recovery plans more often reported estimates of current population size, its uncertainty, and population size as a recovery criterion. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty than those for reptiles and amphibians. We suggest calculating minimum detectable differences to improve confidence when delisting endangered animals, and we identify incentives for individuals to get involved in recovery planning to improve access to quantitative data. PMID:26479531

  4. Improving the Accuracy of Laplacian Estimation with Novel Variable Inter-Ring Distances Concentric Ring Electrodes.

    PubMed

    Makeyev, Oleksandr; Besio, Walter G

    2016-01-01

    Noninvasive concentric ring electrodes are a promising alternative to conventional disc electrodes. Currently, the superiority of tripolar concentric ring electrodes over disc electrodes, in particular, in accuracy of Laplacian estimation, has been demonstrated in a range of applications. In our recent work, we have shown that accuracy of Laplacian estimation can be improved with multipolar concentric ring electrodes using a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings using the (4n + 1)-point method for n ≥ 2. This paper takes the next step toward further improving the Laplacian estimate by proposing novel variable inter-ring distances concentric ring electrodes. Derived using a modified (4n + 1)-point method, linearly increasing and decreasing inter-ring distances tripolar (n = 2) and quadripolar (n = 3) electrode configurations are compared to their constant inter-ring distances counterparts. Finite element method modeling and analytic results are consistent and suggest that increasing inter-ring distances electrode configurations may decrease the truncation error resulting in more accurate Laplacian estimates compared to respective constant inter-ring distances configurations. For currently used tripolar electrode configuration, the truncation error may be decreased more than two-fold, while for the quadripolar configuration more than a six-fold decrease is expected. PMID:27294933

  5. Improving the Accuracy of Laplacian Estimation with Novel Variable Inter-Ring Distances Concentric Ring Electrodes

    PubMed Central

    Makeyev, Oleksandr; Besio, Walter G.

    2016-01-01

    Noninvasive concentric ring electrodes are a promising alternative to conventional disc electrodes. Currently, the superiority of tripolar concentric ring electrodes over disc electrodes, in particular, in accuracy of Laplacian estimation, has been demonstrated in a range of applications. In our recent work, we have shown that accuracy of Laplacian estimation can be improved with multipolar concentric ring electrodes using a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings using the (4n + 1)-point method for n ≥ 2. This paper takes the next step toward further improving the Laplacian estimate by proposing novel variable inter-ring distances concentric ring electrodes. Derived using a modified (4n + 1)-point method, linearly increasing and decreasing inter-ring distances tripolar (n = 2) and quadripolar (n = 3) electrode configurations are compared to their constant inter-ring distances counterparts. Finite element method modeling and analytic results are consistent and suggest that increasing inter-ring distances electrode configurations may decrease the truncation error resulting in more accurate Laplacian estimates compared to respective constant inter-ring distances configurations. For currently used tripolar electrode configuration, the truncation error may be decreased more than two-fold, while for the quadripolar configuration more than a six-fold decrease is expected. PMID:27294933

  6. Improving High-resolution Spatial Estimates of Precipitation in the Equatorial Americas

    NASA Astrophysics Data System (ADS)

    Verdin, A.; Rajagopalan, B.; Funk, C. C.

    2013-12-01

    Drought and flood management practices require accurate estimates of precipitation in space and time. However, data are sparse and often of poor quality in regions with complicated terrain (such as the Equatorial Americas), with gauges frequently limited to valleys (where people farm). Consequently, extreme precipitation events are poorly represented. Satellite-derived rainfall data are an attractive alternative in such regions and are being widely used, though they too suffer from problems such as underestimation of extreme events (due to the dependency on retrieval algorithms) and the indirect relationship between satellite radiation observations and precipitation intensities. Thus, it seems appropriate to blend satellite-derived rainfall data of extensive spatial coverage with rain gauge data in order to provide a more robust estimate of precipitation. To this end, in this research we offer three techniques to blend rain gauge data and the Climate Hazards group InfraRed Precipitation (CHIRP) satellite-derived precipitation estimate for Central America and Colombia. In the first two methods, the gauge data are assigned to the closest CHIRP grid point, where the error is defined as r = Yobs - Ysat. The spatial structure of r is then modeled using physiographic information (Easting, Northing, and Elevation) by two methods: (i) a traditional Cokriging approach whose variogram is calculated in Euclidean space and (ii) a nonparametric method based on local polynomial functional estimation. The models are used to estimate r at all grid points, which is then added to the CHIRP, thus creating an improved satellite estimate. We demonstrate these methods by applying them to pentadal and monthly total precipitation fields during 2009. The models' predictive abilities and their ability to capture extremes are investigated. These blending methods significantly improve upon the satellite-derived estimates and are also competitive in their ability to capture extreme precipitation. The above methods assume
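
    The first two blending methods share a simple skeleton: compute residuals r = Yobs - Ysat at the gauges, interpolate r over the grid, and add it back to the satellite field. The sketch below uses inverse-distance weighting as a simple stand-in for the Cokriging and local polynomial interpolators, on synthetic data:

      # Sketch of residual blending: gauge-minus-satellite residuals are
      # interpolated over the grid (inverse-distance here as a stand-in for
      # Cokriging) and added to the satellite estimate. Data are synthetic.
      import numpy as np

      def idw(points, values, grid_xy, k=2.0):
          d = np.linalg.norm(grid_xy[:, None, :] - points[None, :, :], axis=2)
          w = 1.0 / np.maximum(d, 1e-9) ** k
          return (w * values).sum(1) / w.sum(1)

      gauge_xy = np.array([[0.2, 0.4], [0.7, 0.1], [0.5, 0.9]])
      y_obs = np.array([14.0, 6.0, 11.0])      # gauge totals
      y_sat = np.array([10.0, 7.5, 8.0])       # CHIRP-like values at gauges
      residuals = y_obs - y_sat                # r = Yobs - Ysat

      gx, gy = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
      grid_xy = np.column_stack([gx.ravel(), gy.ravel()])
      sat_field = np.full(len(grid_xy), 9.0)   # satellite estimate on grid
      blended = sat_field + idw(gauge_xy, residuals, grid_xy)
      print(blended.mean().round(2))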

  7. Using flow cytometry to estimate pollen DNA content: improved methodology and applications

    PubMed Central

    Kron, Paul; Husband, Brian C.

    2012-01-01

    Background and Aims Flow cytometry has been used to measure nuclear DNA content in pollen, mostly to understand pollen development and detect unreduced gametes. Published data have not always met the high-quality standards required for some applications, in part due to difficulties inherent in the extraction of nuclei. Here we describe a simple and relatively novel method for extracting pollen nuclei, involving the bursting of pollen through a nylon mesh, compare it with other methods and demonstrate its broad applicability and utility. Methods The method was tested across 80 species, 64 genera and 33 families, and the data were evaluated using established criteria for estimating genome size and analysing cell cycle. Filter bursting was directly compared with chopping in five species, yields were compared with published values for sonicated samples, and the method was applied by comparing genome size estimates for leaf and pollen nuclei in six species. Key Results Data quality met generally applied standards for estimating genome size in 81 % of species and the higher best practice standards for cell cycle analysis in 51 %. In 41 % of species we met the most stringent criterion of screening 10 000 pollen grains per sample. In direct comparison with two chopping techniques, our method produced better quality histograms with consistently higher nuclei yields, and yields were higher than previously published results for sonication. In three binucleate and three trinucleate species we found that pollen-based genome size estimates differed from leaf tissue estimates by 1·5 % or less when 1C pollen nuclei were used, while estimates from 2C generative nuclei differed from leaf estimates by up to 2·5 %. Conclusions The high success rate, ease of use and wide applicability of the filter bursting method show that this method can facilitate the use of pollen for estimating genome size and dramatically improve unreduced pollen production estimation with flow cytometry. PMID

  8. Fractional Vegetation Cover Estimation Based on an Improved Selective Endmember Spectral Mixture Model

    PubMed Central

    Li, Ying; Wang, Hong; Li, Xiao Bing

    2015-01-01

    Vegetation is an important part of an ecosystem, and estimation of fractional vegetation cover is of great significance for monitoring vegetation growth in a region. With Landsat TM images and HJ-1B images as data sources, an improved selective endmember linear spectral mixture model (SELSMM) was put forward in this research to estimate the fractional vegetation cover in the Huangfuchuan watershed in China. We compared the result with the vegetation coverage estimated with the linear spectral mixture model (LSMM) and conducted accuracy tests on the two results with field survey data to study the effectiveness of the different models in estimation of vegetation coverage. Results indicated that: (1) the RMSE of the estimation result of SELSMM based on TM images is the lowest, at 0.044. The RMSEs of the estimation results of LSMM based on TM images, SELSMM based on HJ-1B images and LSMM based on HJ-1B images are respectively 0.052, 0.077 and 0.082, all higher than that of SELSMM based on TM images; (2) the R2 values of SELSMM based on TM images, LSMM based on TM images, SELSMM based on HJ-1B images and LSMM based on HJ-1B images are respectively 0.668, 0.531, 0.342 and 0.336. Among these models, SELSMM based on TM images has the highest estimation accuracy and also the highest correlation with measured vegetation coverage. Of the two methods tested, SELSMM is superior to LSMM in estimation of vegetation coverage, and it is also better at unmixing mixed pixels of TM images than pixels of HJ-1B images. So, the SELSMM based on TM images is comparatively accurate and reliable in the research of regional fractional vegetation cover estimation. PMID:25905772
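
    The linear spectral mixture model underlying both LSMM and SELSMM writes each pixel spectrum as a nonnegative, sum-to-one combination of endmember spectra; a fully constrained least-squares sketch with hypothetical spectra:

      # Fully constrained linear spectral mixture model (LSMM): solve for
      # endmember fractions that are nonnegative and sum to one. Endmember
      # and pixel spectra are hypothetical.
      import numpy as np
      from scipy.optimize import nnls

      endmembers = np.array([[0.05, 0.08, 0.40, 0.45],     # vegetation
                             [0.12, 0.18, 0.22, 0.30]]).T  # soil; bands x EMs
      pixel = np.array([0.08, 0.12, 0.33, 0.39])

      # Enforce sum-to-one by appending a heavily weighted constraint row.
      w = 100.0
      A = np.vstack([endmembers, w * np.ones(endmembers.shape[1])])
      b = np.append(pixel, w)
      fractions, _ = nnls(A, b)
      print(fractions)  # [vegetation fraction, soil fraction]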

  9. Improving the estimation of flavonoid intake for study of health outcomes

    PubMed Central

    Dwyer, Johanna T.; Jacques, Paul F.; McCullough, Marjorie L.

    2015-01-01

    Imprecision in estimating intakes of non-nutrient bioactive compounds such as flavonoids is a challenge in epidemiologic studies of health outcomes. The sources of this imprecision, using flavonoids as an example, include the variability of bioactive compounds in foods due to differences in growing conditions and processing, the challenges in laboratory quantification of flavonoids in foods, the incompleteness of flavonoid food composition tables, and the lack of adequate dietary assessment instruments. Steps to improve databases of bioactive compounds and to increase the accuracy and precision of the estimation of bioactive compound intakes in studies of health benefits and outcomes are suggested. PMID:26084477

  10. Does Ocean Color Data Assimilation Improve Estimates of Global Ocean Inorganic Carbon?

    NASA Technical Reports Server (NTRS)

    Gregg, Watson

    2012-01-01

    Ocean color data assimilation has been shown to dramatically improve chlorophyll abundances and distributions globally and regionally in the oceans. Chlorophyll is a proxy for phytoplankton biomass (which is explicitly defined in a model), and is related to the inorganic carbon cycle through the interactions of the organic carbon (particulate and dissolved) and through primary production, where inorganic carbon is directly taken out of the system. Does ocean color data assimilation, whose effects on estimates of chlorophyll are demonstrable, trickle through the simulated ocean carbon system to produce improved estimates of inorganic carbon? Our emphasis here is dissolved inorganic carbon, pCO2, and the air-sea flux. We use a sequential data assimilation method that assimilates chlorophyll directly and indirectly changes nutrient concentrations in a multi-variate approach. The results are decidedly mixed. Dissolved organic carbon estimates from the assimilation model are not meaningfully different from free-run, or unassimilated, results, and comparisons with in situ data are similar. pCO2 estimates are generally worse after data assimilation, with global estimates diverging 6.4% from in situ data, while free-run estimates are only 4.7% higher. Basin correlations are, however, slightly improved: r increases from 0.78 to 0.79, and the slope is closer to unity at 0.94 compared to 0.86. In contrast, the air-sea flux of CO2 is noticeably improved after data assimilation. Global differences decline from -0.635 mol/m2/y (stronger model sink from the atmosphere) to -0.202 mol/m2/y. Basin correlations are slightly improved from r = 0.77 to r = 0.78, with slope closer to unity (from 0.93 to 0.99). The Equatorial Atlantic appears as a slight sink in the free-run, but is correctly represented as a moderate source in the assimilation model. However, the assimilation model shows the Antarctic to be a source, rather than a modest sink, and the North Indian basin is represented incorrectly as a sink.

  11. Improved tilt-depth method for fast estimation of top and bottom depths of magnetic bodies

    NASA Astrophysics Data System (ADS)

    Wang, Yan-Guo; Zhang, Jin; Ge, Kun-Peng; Chen, Xiao; Nie, Feng-Jun

    2016-06-01

    The tilt-depth method can be used to make fast estimates of the top depth of magnetic bodies. However, it is unable to estimate bottom depths, and each inversion point yields only a single solution. To resolve these weaknesses, this paper presents an improved tilt-depth method based on the magnetic anomaly expression of a vertical contact with finite depth extent, which can simultaneously estimate the top and bottom depths of magnetic bodies. In addition, multiple characteristic points are selected on the tilt angle map for joint computation to improve the reliability of the inversion solutions. Two- and three-dimensional model tests show that this improved tilt-depth method is effective in inverting the buried depths of body tops and bottoms, and has a higher inversion precision for top depths than the conventional method. The improved method is then used to process aeromagnetic data over the Changling Fault Depression in the Songliao Basin; the inverted top depths agree more closely with the actual top depths of volcanic rocks in two nearby drilled wells than those from the conventional tilt-depth method.
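
    The tilt angle on which the method rests is the arctangent of the vertical derivative over the total horizontal gradient of the anomaly; a sketch on a synthetic grid, with the vertical derivative computed spectrally:

      # Tilt angle of a gridded magnetic anomaly: arctan of the vertical
      # derivative over the total horizontal gradient. Vertical derivative
      # computed spectrally (multiplication by |k| in the Fourier domain);
      # the input grid is synthetic.
      import numpy as np

      n, dx = 128, 100.0                     # grid size, spacing in meters
      x = np.arange(n) * dx
      X, Y = np.meshgrid(x, x)
      T = np.exp(-((X - 6e3) ** 2 + (Y - 6e3) ** 2) / (2 * 1.5e3 ** 2))

      kx = np.fft.fftfreq(n, dx) * 2 * np.pi
      KX, KY = np.meshgrid(kx, kx)
      K = np.sqrt(KX ** 2 + KY ** 2)
      dTdz = np.real(np.fft.ifft2(np.fft.fft2(T) * K))  # vertical derivative

      dTdy, dTdx = np.gradient(T, dx, dx)
      thdr = np.hypot(dTdx, dTdy)            # total horizontal gradient
      tilt = np.arctan2(dTdz, thdr)          # tilt angle in radians
      print(tilt.min(), tilt.max())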

  12. Improvement of PPP-inferred tropospheric estimates by integer ambiguity resolution

    NASA Astrophysics Data System (ADS)

    Shi, J.; Gao, Y.

    2012-11-01

    Integer ambiguity resolution in Precise Point Positioning (PPP) can improve positioning accuracy and reduce convergence time. The decoupled clock model proposed by Collins (2008) has been used to facilitate integer ambiguity resolution in PPP, and research has been conducted to assess the model's potential to improve positioning accuracy and reduce positioning convergence time. In particular, the biggest benefits have been identified for the positioning solutions within short observation periods such as one hour. However, there is little work reported about the model's potential to improve the estimation of the tropospheric parameter within short observation periods. This paper investigates the effect of PPP ambiguity resolution on the accuracy of the tropospheric estimates within one hour. The tropospheric estimates with float and fixed ambiguities within one hour are compared to two external references. The first reference is the International GNSS Service (IGS) final troposphere product based on the PPP technique. The second reference is the Constellation Observing System for Meteorology Ionosphere and Climate (COSMIC) radio occultation (RO) event based on the atmospheric profiles along the signal travel path. A comparison among ten co-located ground-based GPS and space-based RO troposphere zenith path delays shows that the mean bias of the troposphere estimates with float ambiguities can be significantly reduced from 30.1 to 17.0 mm when compared to the IGS troposphere product and from 36.3 to 19.7 mm when compared to the COSMIC RO. The root mean square (RMS) accuracy improvement of the tropospheric parameters by the ambiguity resolution is 33.3% when compared to the IGS products and 44.3% when compared to the COSMIC RO. All these improvements are achieved within one hour, which indicates the promising prospect of adopting PPP integer ambiguity resolution for time-critical applications such as typhoon prediction.

  13. Estimating the Effect of School Water, Sanitation, and Hygiene Improvements on Pupil Health Outcomes

    PubMed Central

    Garn, Joshua V.; Brumback, Babette A.; Drews-Botsch, Carolyn D.; Lash, Timothy L.; Kramer, Michael R.

    2016-01-01

    Background: We conducted a cluster-randomized water, sanitation, and hygiene trial in 185 schools in Nyanza province, Kenya. The trial, however, had imperfect school-level adherence at many schools. The primary goal of this study was to estimate the causal effects of school-level adherence to interventions on pupil diarrhea and soil-transmitted helminth infection. Methods: Schools were divided into water availability groups, which were then randomized separately into either water, sanitation, and hygiene intervention arms or a control arm. School-level adherence to the intervention was defined by the number of intervention components—water, latrines, soap—that had been adequately implemented. The outcomes of interest were pupil diarrhea and soil-transmitted helminth infection. We used a weighted generalized structural nested model to calculate prevalence ratios. Results: In the water-scarce group, there was evidence of a reduced prevalence of diarrhea among pupils attending schools that adhered to two or to three intervention components (prevalence ratio = 0.28, 95% confidence interval: 0.10, 0.75), compared with what the prevalence would have been had the same schools instead adhered to zero components or one. In the water-available group, there was no evidence of reduced diarrhea with better adherence. For the soil-transmitted helminth infection and intensity outcomes, we often observed point estimates in the preventive direction with increasing intervention adherence, but primarily among girls, and the confidence intervals were often very wide. Conclusions: Our instrumental variable point estimates sometimes suggested protective effects with increased water, sanitation, and hygiene intervention adherence, although many of the estimates were imprecise. PMID:27276028

  14. Improved target detection and bearing estimation utilizing fast orthogonal search for real-time spectral analysis

    NASA Astrophysics Data System (ADS)

    Osman, Abdalla; Nourledin, Aboelamgd; El-Sheimy, Naser; Theriault, Jim; Campbell, Scott

    2009-06-01

    The problem of target detection and tracking in the ocean environment has attracted considerable attention due to its importance in military and civilian applications. Sonobuoys are one of the capable passive sonar systems used in underwater target detection. Target detection and bearing estimation are mainly obtained through spectral analysis of received signals. The frequency resolution introduced by current techniques is limited, which affects the accuracy of target detection and bearing estimation at relatively low signal-to-noise ratio (SNR). This research investigates the development of a bearing estimation method using fast orthogonal search (FOS) for enhanced spectral estimation. FOS is employed in this research in order to improve both target detection and bearing estimation in the case of low-SNR inputs. The proposed methods were tested using simulated data developed for two different scenarios under different underwater environmental conditions. The results show that the proposed method is capable of enhancing the accuracy of target detection as well as bearing estimation, especially in cases of very low SNR.
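
    A simplified greedy search in the spirit of FOS: at each step, select the candidate frequency whose sine/cosine pair most reduces the residual mean-squared error, then subtract its fitted contribution. This is an illustrative variant, not the authors' implementation, and the signal parameters are made up:

      # Simplified greedy spectral search in the spirit of fast orthogonal
      # search (FOS): repeatedly pick the candidate frequency whose
      # sine/cosine pair most reduces residual MSE, then remove its fit.
      import numpy as np

      rng = np.random.default_rng(2)
      fs, n = 1000.0, 1024
      t = np.arange(n) / fs
      signal = (np.sin(2 * np.pi * 83.0 * t)
                + 0.5 * np.sin(2 * np.pi * 241.0 * t))
      x = signal + rng.normal(0, 1.0, n)       # low-SNR input

      candidates = np.arange(1.0, 500.0, 0.5)  # candidate frequencies (Hz)
      residual = x.copy()
      for _ in range(2):                       # extract two spectral terms
          best = None
          for f in candidates:
              basis = np.column_stack([np.sin(2 * np.pi * f * t),
                                       np.cos(2 * np.pi * f * t)])
              coef, _, _, _ = np.linalg.lstsq(basis, residual, rcond=None)
              mse = np.mean((residual - basis @ coef) ** 2)
              if best is None or mse < best[0]:
                  best = (mse, f, basis @ coef)
          _, f_hat, fitted = best
          residual = residual - fitted
          print(f"selected frequency: {f_hat:.1f} Hz")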

  15. Improved rapid magnitude estimation for a community-based, low-cost MEMS accelerometer network

    USGS Publications Warehouse

    Chung, Angela I.; Cochran, Elizabeth S.; Kaiser, Anna E.; Christensen, Carl M.; Yildirim, Battalgazi; Lawrence, Jesse F.

    2015-01-01

    Immediately following the Mw 7.2 Darfield, New Zealand, earthquake, over 180 Quake‐Catcher Network (QCN) low‐cost micro‐electro‐mechanical systems accelerometers were deployed in the Canterbury region. Using data recorded by this dense network from 2010 to 2013, we significantly improved the QCN rapid magnitude estimation relationship. The previous scaling relationship (Lawrence et al., 2014) did not accurately estimate the magnitudes of nearby (<35  km) events. The new scaling relationship estimates earthquake magnitudes within 1 magnitude unit of the GNS Science GeoNet earthquake catalog magnitudes for 99% of the events tested, within 0.5 magnitude units for 90% of the events, and within 0.25 magnitude units for 57% of the events. These magnitudes are reliably estimated within 3 s of the initial trigger recorded on at least seven stations. In this report, we present the methods used to calculate a new scaling relationship and demonstrate the accuracy of the revised magnitude estimates using a program that is able to retrospectively estimate event magnitudes using archived data.
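
    A rapid scaling relationship of this general kind can be sketched as a regression of catalog magnitude on logged peak amplitude and distance; the functional form, coefficients, and data below are hypothetical, not the QCN relationship:

      # Hypothetical sketch of fitting a rapid magnitude scaling relation of
      # the generic form M = a*log10(peak_accel) + b*log10(distance) + c by
      # least squares on catalog events. Form and data are illustrative.
      import numpy as np

      rng = np.random.default_rng(7)
      n = 300
      mag = rng.uniform(3.0, 7.0, n)           # catalog magnitudes
      dist = rng.uniform(5.0, 200.0, n)        # epicentral distance (km)
      log_pga = 0.9 * mag - 1.3 * np.log10(dist) + rng.normal(0, 0.2, n)

      A = np.column_stack([log_pga, np.log10(dist), np.ones(n)])
      a, b, c = np.linalg.lstsq(A, mag, rcond=None)[0]
      m_est = A @ [a, b, c]
      print(f"a={a:.2f} b={b:.2f} c={c:.2f}; within 0.5 units: "
            f"{(np.abs(m_est - mag) < 0.5).mean():.0%}")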

  16. Improved estimates of Belgian private health expenditure can give important lessons to other OECD countries.

    PubMed

    Calcoen, Piet; Moens, Dirk; Verlinden, Pieter; van de Ven, Wynand P M M; Pacolet, Jozef

    2015-03-01

    OECD Health Data are a well-known source for detailed information about health expenditure. These data enable us to analyze health policy issues over time and in comparison with other countries. However, current official Belgian estimates of private expenditure (as published in the OECD Health Data) have proven not to be reliable. We distinguish four potential major sources of problems with estimating private health spending: interpretation of definitions, formulation of assumptions, missing or incomplete data and incorrect data. Using alternative sources of billing information, we have reached more accurate estimates of private and out-of-pocket expenditure. For Belgium we found differences of more than 100% between our estimates and the official Belgian estimates of private health expenditure (as published in the OECD Health Data). For instance, according to OECD Health Data private expenditure on hospitals in Belgium amounts to €3.1 billion, while according to our alternative calculations these expenses represent only €1.1 billion. Total private expenditure differs only 1%, but this is a mere coincidence. This exercise may be of interest to other OECD countries looking to improve their estimates of private expenditure on health. PMID:25108312

  17. Improved dichotomous search frequency offset estimator for burst-mode continuous phase modulation

    NASA Astrophysics Data System (ADS)

    Zhai, Wen-Chao; Li, Zan; Si, Jiang-Bo; Bai, Jun

    2015-11-01

    A data-aided technique for carrier frequency offset estimation with continuous phase modulation (CPM) in burst-mode transmission is presented. The proposed technique first exploits a special pilot sequence, or training sequence, to form a sinusoidal waveform. Then, an improved dichotomous search frequency offset estimator is introduced to determine the frequency offset using the sinusoid. Theoretical analysis and simulation results indicate that our estimator is noteworthy in the following aspects. First, the estimator can operate independently of timing recovery. Second, it has a relatively low outlier threshold, i.e., the minimum signal-to-noise ratio (SNR) required to guarantee estimation accuracy. Finally, the most important property is that our estimator has reduced complexity compared to the existing dichotomous search methods: it eliminates the need for the fast Fourier transform (FFT) and modulation removal, and exhibits a faster convergence rate without accuracy degradation. Project supported by the National Natural Science Foundation of China (Grant No. 61301179), the Doctorial Programs Foundation of the Ministry of Education, China (Grant No. 20110203110011), and the Programme of Introducing Talents of Discipline to Universities, China (Grant No. B08038).
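
    For background, dichotomous refinement itself can be sketched as interval halving over the main lobe of the spectrum, where the magnitude is unimodal. The sketch below seeds the search with a coarse FFT peak purely for illustration (the paper's estimator notably avoids the FFT stage), and the test sinusoid stands in for the pilot-derived waveform:

      # Dichotomous (interval-halving) search refining a coarse frequency
      # estimate: |DTFT| is unimodal over the main lobe, so the bracket is
      # repeatedly halved toward the larger of two probes. Test signal and
      # parameters are illustrative.
      import numpy as np

      rng = np.random.default_rng(3)
      fs, n, f_true = 8000.0, 512, 1234.6
      t = np.arange(n) / fs
      x = (np.exp(2j * np.pi * f_true * t)
           + 0.1 * (rng.normal(size=n) + 1j * rng.normal(size=n)))

      def amp(f):                              # |DTFT| of x at frequency f
          return abs(np.sum(x * np.exp(-2j * np.pi * f * t)))

      k = np.argmax(np.abs(np.fft.fft(x)))     # coarse peak (illustration)
      lo, hi = (k - 1) * fs / n, (k + 1) * fs / n
      for _ in range(30):                      # dichotomous refinement
          mid = 0.5 * (lo + hi)
          delta = 0.25 * (hi - lo)
          if amp(mid - delta) > amp(mid + delta):
              hi = mid
          else:
              lo = mid
      print(f"estimate: {0.5 * (lo + hi):.2f} Hz (true {f_true} Hz)")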

  18. Enhancing e-waste estimates: improving data quality by multivariate Input-Output Analysis.

    PubMed

    Wang, Feng; Huisman, Jaco; Stevels, Ab; Baldé, Cornelis Peter

    2013-11-01

    Waste electrical and electronic equipment (or e-waste) is one of the fastest growing waste streams, which encompasses a wide and increasing spectrum of products. Accurate estimation of e-waste generation is difficult, mainly due to lack of high quality data on market and socio-economic dynamics. This paper addresses how to enhance e-waste estimates by providing techniques to increase data quality. An advanced, flexible and multivariate Input-Output Analysis (IOA) method is proposed. It links all three pillars in IOA (product sales, stock and lifespan profiles) to construct mathematical relationships between various data points. By applying this method, the data consolidation steps can generate more accurate time-series datasets from the available data pool. This can consequently increase the reliability of e-waste estimates compared to the approach without data processing. A case study in the Netherlands is used to apply the advanced IOA model. As a result, for the first time ever, complete datasets of all three variables for estimating all types of e-waste have been obtained. The result of this study also demonstrates significant disparity between various estimation models, arising from the use of data under different conditions. It shows the importance of applying a multivariate approach and multiple sources to improve data quality for modelling, specifically using appropriate time-varying lifespan parameters. Following the case study, a roadmap with a procedural guideline is provided to enhance e-waste estimation studies. PMID:23899476

  19. Development of a mixed pixel filter for improved dimension estimation using AMCW laser scanner

    NASA Astrophysics Data System (ADS)

    Wang, Qian; Sohn, Hoon; Cheng, Jack C. P.

    2016-09-01

    Accurate dimension estimation is desired in many fields, but the traditional dimension estimation methods are time-consuming and labor-intensive. In the recent decades, 3D laser scanners have become popular for dimension estimation due to their high measurement speed and accuracy. Nonetheless, scan data obtained by amplitude-modulated continuous-wave (AMCW) laser scanners suffer from erroneous data called mixed pixels, which can influence the accuracy of dimension estimation. This study develops a mixed pixel filter for improved dimension estimation using AMCW laser scanners. The distance measurement of mixed pixels is firstly formulated based on the working principle of laser scanners. Then, a mixed pixel filter that can minimize the classification errors between valid points and mixed pixels is developed. Validation experiments were conducted to verify the formulation of the distance measurement of mixed pixels and to examine the performance of the proposed mixed pixel filter. Experimental results show that, for a specimen with dimensions of 840 mm × 300 mm, the overall errors of the dimensions estimated after applying the proposed filter are 1.9 mm and 1.0 mm for two different scanning resolutions, respectively. These errors are much smaller than the errors (4.8 mm and 3.5 mm) obtained by the scanner's built-in filter.

  20. A systematic review of cluster randomised trials in residential facilities for older people suggests how to improve quality

    PubMed Central

    2013-01-01

    Background Previous reviews of cluster randomised trials have been critical of the quality of the trials reviewed, but none has explored determinants of the quality of these trials in a specific field over an extended period of time. Recent work suggests that correct conduct and reporting of these trials may require more than published guidelines. In this review, our aim was to assess the quality of cluster randomised trials conducted in residential facilities for older people, and to determine whether (1) statistician involvement in the trial and (2) strength of journal endorsement of the Consolidated Standards of Reporting Trials (CONSORT) statement influence quality. Methods We systematically identified trials randomising residential facilities for older people, or parts thereof, without language restrictions, up to the end of 2010, using National Library of Medicine (Medline) via PubMed and hand-searching. We based quality assessment criteria largely on the extended CONSORT statement for cluster randomised trials. We assessed statistician involvement based on statistician co-authorship, and strength of journal endorsement of the CONSORT statement from journal websites. Results 73 trials met our inclusion criteria. Of these, 20 (27%) reported accounting for clustering in sample size calculations and 54 (74%) in the analyses. In 29 trials (40%), methods used to identify/recruit participants were judged by us to have potentially caused bias or reporting was unclear to reach a conclusion. Some elements of quality improved over time but this appeared not to be related to the publication of the extended CONSORT statement for these trials. Trials with statistician/epidemiologist co-authors were more likely to account for clustering in sample size calculations (unadjusted odds ratio 5.4, 95% confidence interval 1.1 to 26.0) and analyses (unadjusted OR 3.2, 1.2 to 8.5). Journal endorsement of the CONSORT statement was not associated with trial quality. Conclusions
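
    Accounting for clustering in a sample size calculation, which only 27% of the reviewed trials reported doing, usually means inflating the individually randomised sample size by the design effect DEFF = 1 + (m − 1) × ICC; a small worked example with illustrative values:

      # Design effect for cluster randomisation: inflate the individually
      # randomised sample size by DEFF = 1 + (m - 1) * ICC, where m is the
      # average cluster size and ICC the intracluster correlation. Values
      # are illustrative.
      import math

      def clustered_sample_size(n_individual, cluster_size, icc):
          deff = 1 + (cluster_size - 1) * icc
          n_total = math.ceil(n_individual * deff)
          n_clusters = math.ceil(n_total / cluster_size)
          return deff, n_total, n_clusters

      print(clustered_sample_size(n_individual=300, cluster_size=20,
                                  icc=0.05))
      # DEFF = 1.95 -> 585 residents across 30 facilities instead of 300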

  1. Cosmology with galaxy clusters

    NASA Astrophysics Data System (ADS)

    Sartoris, Barbara

    2015-08-01

    Clusters of galaxies are powerful probes to constrain parameters that describe cosmological models and to distinguish among different models. Since the evolution of the cluster mass function and of large-scale clustering contains information about the linear growth rate of perturbations and the expansion history of the Universe, clusters have played an important role in establishing the current cosmological paradigm. It is crucial to know how to determine the cluster mass from observational quantities when using clusters as cosmological tools. For this, numerical simulations are helpful to define and study robust cluster mass proxies that have minimal and well understood scatter across the mass and redshift ranges of interest. Additionally, the bias in cluster mass determination can be constrained via observations of the strong and weak lensing effects, X-ray emission, the Sunyaev-Zel’dovich effect, and the dynamics of galaxies. A major advantage of X-ray surveys is that the observable-mass relation is tight. Moreover, clusters can be easily identified in X-ray as continuous, extended sources. As of today, interesting cosmological constraints have been obtained from relatively small cluster samples (~10^2), X-ray selected by the ROSAT satellite over a wide redshift range, with mass estimates obtained with Chandra and XMM follow-up. These constraints complement those from CMB and SNIa observations. Moreover, the large-scale power spectrum has been constructed using a low redshift (z < 0.2) sample of ~10^3 nearby clusters, the ROSAT All-Sky Survey. The next generation of X-ray telescopes will enhance the statistics of detected clusters and enlarge their redshift coverage. In particular, eROSITA will produce a catalog of >10^5 clusters with photometric redshifts from multi-band optical surveys (e.g. PanSTARRS, DES, and LSST). This will vastly improve upon current cosmological constraints, especially by the synergy with other cluster surveys that

  2. Classification of Arabidopsis thaliana gene sequences: clustering of coding sequences into two groups according to codon usage improves gene prediction.

    PubMed

    Mathé, C; Peresetsky, A; Déhais, P; Van Montagu, M; Rouzé, P

    1999-02-01

    While genomic sequences are accumulating, finding the location of genes remains a major issue that can be solved by homology searches for only about half of them. Prediction methods are thus required, but unfortunately they are not fully satisfactory. Most prediction methods implicitly assume a unique model for genes. This is an oversimplification, as demonstrated by the possibility of grouping coding sequences into several classes in Escherichia coli and other genomes. As no classification existed for Arabidopsis thaliana, we classified genes according to the statistical features of their coding sequences. A clustering algorithm using a codon usage model was developed and applied to coding sequences from A. thaliana, E. coli, and a mixture of both. Using this algorithm, the Arabidopsis sequences were clustered into two classes. The CU1 and CU2 classes differed essentially by the choice of pyrimidine bases at the codon silent sites: CU2 genes often use C whereas CU1 genes prefer T. This classification discriminated the Arabidopsis genes according to their expressiveness, highly expressed genes being clustered in CU2 and genes expected to have a lower expression, such as the regulatory genes, in CU1. The algorithm separated the sequences of the Escherichia-Arabidopsis mixed data set into five classes according to the species, except for one class. This mixed class contained 89 % Arabidopsis genes from CU1 and 11 % E. coli genes, mostly horizontally transferred. Interestingly, most genes encoding organelle-targeted proteins, except the photosynthetic and photoassimilatory ones, were clustered in CU1. By tailoring the GeneMark CDS prediction algorithm to the observed coding sequence classes, its quality of prediction was greatly improved. Similar improvement can be expected with other prediction systems. PMID:9925779
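
    The clustering step can be approximated with generic tools: represent each coding sequence as a codon frequency vector and cluster into two groups. K-means below merely stands in for the authors' codon usage model, and the sequences are synthetic:

      # Sketch of clustering coding sequences by codon usage: each CDS is a
      # 64-dimensional codon frequency vector, clustered with 2-means.
      # Sequences are synthetic; k-means approximates, not reproduces, the
      # authors' codon usage model.
      import itertools
      import numpy as np
      from sklearn.cluster import KMeans

      codons = ["".join(c) for c in itertools.product("ACGT", repeat=3)]
      index = {c: i for i, c in enumerate(codons)}

      def codon_frequencies(cds):
          counts = np.zeros(64)
          for i in range(0, len(cds) - len(cds) % 3, 3):
              counts[index[cds[i:i + 3]]] += 1
          return counts / counts.sum()

      rng = np.random.default_rng(4)
      t_rich = lambda: "".join(rng.choice(list("ACGT"),
                               p=[0.2, 0.15, 0.2, 0.45], size=300))
      c_rich = lambda: "".join(rng.choice(list("ACGT"),
                               p=[0.2, 0.45, 0.2, 0.15], size=300))
      genes = [t_rich() for _ in range(50)] + [c_rich() for _ in range(50)]

      X = np.array([codon_frequencies(g) for g in genes])
      labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
      print(np.bincount(labels[:50], minlength=2),
            np.bincount(labels[50:], minlength=2))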

  3. Depth resolution improvement in secondary ion mass spectrometry analysis using metal cluster complex ion bombardment

    SciTech Connect

    Tomita, M.; Kinno, T.; Koike, M.; Tanaka, H.; Takeno, S.; Fujiwara, Y.; Kondou, K.; Teranishi, Y.; Nonaka, H.; Fujimoto, T.; Kurokawa, A.; Ichimura, S.

    2006-07-31

    Secondary ion mass spectrometry analyses were carried out using a metal cluster complex ion of Ir{sub 4}(CO){sub 7}{sup +} as the primary ion beam. Depth resolution was evaluated as a function of primary ion species, energy, and incident angle. The depth resolution obtained using cluster ion bombardment was considerably better than that obtained by oxygen ion bombardment under the same experimental conditions, owing to reduced atomic mixing in depth. The authors obtained a depth resolution of {approx}1 nm under the 5 keV, 45 deg. condition. Depth resolution was degraded by ion-bombardment-induced surface roughness at 5 keV with higher incident angles.

  4. A study to improve the van der Waals component of the interaction in water clusters

    NASA Astrophysics Data System (ADS)

    Albertí, M.; Aguilar, A.; Bartolomei, M.; Cappelletti, D.; Laganà, A.; Lucas, J. M.; Pirani, F.

    2008-11-01

    A portable model potential, representing the intermolecular interaction of water as a combination of a few effective components given in terms of the polarizability and dipole moment values of the molecular partners, is here proposed as a building block of the force field of water clusters in molecular dynamics simulations. In this spirit, here, we discuss the key properties of the model potential and its application to water dimers, trimers and tetramers with the purpose of extrapolating the results to very large clusters mimicking the liquid phase. The suitability of the model potential for dynamics investigations is checked by comparing on one hand the value of the second virial coefficient calculated for the gaseous dimer with experimental data measured over a wide range of temperature (273-3000 K) and, on the other hand, the calculated radial distribution functions and density with those obtained from experiments performed using liquid water.
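
    The second-virial check described above is straightforward to reproduce for any pair potential: classically, B2(T) = -2πN_A ∫ (e^(-U(r)/kT) - 1) r² dr. The sketch below uses a Lennard-Jones stand-in for the authors' polarizability-based potential; the parameters are rough assumed values for water.

```python
# Classical second virial coefficient for a model pair potential.
import numpy as np
from scipy.integrate import quad
from scipy.constants import k as kB, N_A

def b2_classical(u_pair, T, r_min=1e-11, r_max=5e-9):
    integrand = lambda r: (np.exp(-u_pair(r) / (kB * T)) - 1.0) * r ** 2
    val, _ = quad(integrand, r_min, r_max, limit=200)
    return -2.0 * np.pi * N_A * val          # m^3/mol

eps, sigma = 1.08e-21, 3.16e-10              # assumed LJ parameters (J, m)
u_lj = lambda r: 4 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)
print(b2_classical(u_lj, 300.0) * 1e6, "cm^3/mol")
```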

  5. Semi-Supervised Data Summarization: Using Spectral Libraries to Improve Hyperspectral Clustering

    NASA Technical Reports Server (NTRS)

    Wagstaff, K. L.; Shu, H. P.; Mazzoni, D.; Castano, R.

    2005-01-01

    Hyperspectral imagers produce very large images, with each pixel recorded at hundreds or thousands of different wavelengths. The ability to automatically generate summaries of these data sets enables several important applications, such as quickly browsing through a large image repository or determining the best use of a limited bandwidth link (e.g., determining which images are most critical for full transmission). Clustering algorithms can be used to generate these summaries, but traditional clustering methods make decisions based only on the information contained in the data set. In contrast, we present a new method that additionally leverages existing spectral libraries to identify materials that are likely to be present in the image target area. We find that this approach simultaneously reduces runtime and produces summaries that are more relevant to science goals.
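
    The key departure from unsupervised clustering can be sketched in a few lines: known library spectra seed some of the cluster centres, so the summary favours materials expected in the target area. The names (`pixels`, `library_spectra`) and the use of seeded k-means are illustrative assumptions, not the authors' exact algorithm.

```python
# Semi-supervised summarisation sketch: seed k-means with library spectra.
import numpy as np
from sklearn.cluster import KMeans

def summarize(pixels, library_spectra, extra_clusters=2):
    """pixels: (n_pixels, n_bands); library_spectra: (n_materials, n_bands)."""
    rng = np.random.default_rng(0)
    random_seeds = pixels[rng.choice(len(pixels), extra_clusters, replace=False)]
    init = np.vstack([library_spectra, random_seeds])
    km = KMeans(n_clusters=len(init), init=init, n_init=1).fit(pixels)
    return km.cluster_centers_, km.labels_
```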

  6. Improved ancestry estimation for both genotyping and sequencing data using projection procrustes analysis and genotype imputation.

    PubMed

    Wang, Chaolong; Zhan, Xiaowei; Liang, Liming; Abecasis, Gonçalo R; Lin, Xihong

    2015-06-01

    Accurate estimation of individual ancestry is important in genetic association studies, especially when a large number of samples are collected from multiple sources. However, existing approaches developed for genome-wide SNP data do not work well with modest amounts of genetic data, such as in targeted sequencing or exome chip genotyping experiments. We propose a statistical framework to estimate individual ancestry in a principal component ancestry map generated by a reference set of individuals. This framework extends and improves upon our previous method for estimating ancestry using low-coverage sequence reads (LASER 1.0) to analyze either genotyping or sequencing data. In particular, we introduce a projection Procrustes analysis approach that uses high-dimensional principal components to estimate ancestry in a low-dimensional reference space. Using extensive simulations and empirical data examples, we show that our new method (LASER 2.0), combined with genotype imputation on the reference individuals, can substantially outperform LASER 1.0 in estimating fine-scale genetic ancestry. Specifically, LASER 2.0 can accurately estimate fine-scale ancestry within Europe using either exome chip genotypes or targeted sequencing data with off-target coverage as low as 0.05×. Under the framework of LASER 2.0, we can estimate individual ancestry in a shared reference space for samples assayed at different loci or by different techniques. Therefore, our ancestry estimation method will accelerate discovery in disease association studies not only by helping model ancestry within individual studies but also by facilitating combined analysis of genetic data from multiple sources. PMID:26027497
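
    The Procrustes step can be illustrated with SciPy for the standard orthogonal case: coordinates of the same individuals in two PCA spaces are aligned by the best-fitting rotation and scaling. LASER 2.0's projection Procrustes generalises this to map high-dimensional PCs into a low-dimensional reference space, which is not shown here.

```python
# Sketch: Procrustes alignment of two PCA coordinate sets for the same
# individuals (rows matched, equal dimensionality assumed).
import numpy as np
from scipy.linalg import orthogonal_procrustes

def map_to_reference(sample_pcs, reference_pcs):
    A = sample_pcs - sample_pcs.mean(axis=0)
    B = reference_pcs - reference_pcs.mean(axis=0)
    R, s = orthogonal_procrustes(A, B)   # rotation R; s = sum of singular values
    scale = s / (A ** 2).sum()           # least-squares scaling factor
    return scale * A @ R + reference_pcs.mean(axis=0)
```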

  7. Probe Region Expression Estimation for RNA-Seq Data for Improved Microarray Comparability

    PubMed Central

    Uziela, Karolis; Honkela, Antti

    2015-01-01

    Rapidly growing public gene expression databases contain a wealth of data for building an unprecedentedly detailed picture of human biology and disease. These data come from many diverse measurement platforms, which makes integrating them difficult. Although RNA-sequencing (RNA-seq) is attracting the most attention, at present, the rate of new microarray studies submitted to public databases far exceeds the rate of new RNA-seq studies. There is clearly a need for methods that make it easier to combine data from different technologies. In this paper, we propose a new method for processing RNA-seq data that yields gene expression estimates that are much more similar to corresponding estimates from microarray data, hence greatly improving cross-platform comparability. The method we call PREBS is based on estimating the expression from RNA-seq reads overlapping the microarray probe regions, and processing these estimates with standard microarray summarisation algorithms. Using paired microarray and RNA-seq samples from the TCGA LAML data set, we show that PREBS expression estimates derived from RNA-seq are more similar to microarray-based expression estimates than those from other RNA-seq processing methods. In an experiment to retrieve paired microarray samples from a database using an RNA-seq query sample, gene signatures defined based on PREBS expression estimates were found to be much more accurate than those from other methods. PREBS also allows new ways of using RNA-seq data, such as expression estimation for microarray probe sets. An implementation of the proposed method is available in the Bioconductor package “prebs.” PMID:25966034
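
    As a toy illustration of the central step, the sketch below scores expression from reads overlapping probe regions using plain interval arithmetic; real pipelines work on genomic alignments (BAM files) and pass these values to standard microarray summarisation. All names are illustrative.

```python
def probe_region_counts(reads, probes):
    """reads, probes: lists of half-open (start, end) intervals on one chromosome."""
    counts = [0] * len(probes)
    for r_start, r_end in reads:
        for i, (p_start, p_end) in enumerate(probes):
            if r_start < p_end and r_end > p_start:   # intervals overlap
                counts[i] += 1
    return counts
```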

  8. Improved Ancestry Estimation for both Genotyping and Sequencing Data using Projection Procrustes Analysis and Genotype Imputation

    PubMed Central

    Wang, Chaolong; Zhan, Xiaowei; Liang, Liming; Abecasis, Gonçalo R.; Lin, Xihong

    2015-01-01

    Accurate estimation of individual ancestry is important in genetic association studies, especially when a large number of samples are collected from multiple sources. However, existing approaches developed for genome-wide SNP data do not work well with modest amounts of genetic data, such as in targeted sequencing or exome chip genotyping experiments. We propose a statistical framework to estimate individual ancestry in a principal component ancestry map generated by a reference set of individuals. This framework extends and improves upon our previous method for estimating ancestry using low-coverage sequence reads (LASER 1.0) to analyze either genotyping or sequencing data. In particular, we introduce a projection Procrustes analysis approach that uses high-dimensional principal components to estimate ancestry in a low-dimensional reference space. Using extensive simulations and empirical data examples, we show that our new method (LASER 2.0), combined with genotype imputation on the reference individuals, can substantially outperform LASER 1.0 in estimating fine-scale genetic ancestry. Specifically, LASER 2.0 can accurately estimate fine-scale ancestry within Europe using either exome chip genotypes or targeted sequencing data with off-target coverage as low as 0.05×. Under the framework of LASER 2.0, we can estimate individual ancestry in a shared reference space for samples assayed at different loci or by different techniques. Therefore, our ancestry estimation method will accelerate discovery in disease association studies not only by helping model ancestry within individual studies but also by facilitating combined analysis of genetic data from multiple sources. PMID:26027497

  9. Optimal exploitation of AMSR-E signals for improving soil moisture estimation through land data assimilation

    NASA Astrophysics Data System (ADS)

    Zhao, L.; Yang, K.; Qin, J.; Chen, Y.

    2012-04-01

    Regional soil moisture can be estimated by assimilating satellite microwave brightness temperature into a land surface model (LSM). This study explores how to improve soil moisture estimation based on sensitivity analyses when assimilating AMSR-E (Advanced Microwave Scanning Radiometer for Earth Observing System) brightness temperatures. By assimilating a lower and higher frequency-combination, the land data assimilation system (LDAS) used in this study first estimates model parameters in a calibration pass, and then estimates soil moisture in an assimilation pass. The ground truth of soil moisture was collected at a soil moisture network deployed in a Mongolian semiarid area. Analyzed are the effects of different polarizations (horizontal and vertical), satellite overpass times (nighttime and daytime), and different frequency (from 6.9 GHz to 36.5 GHz) combinations on the accuracy of soil moisture estimation by the LDAS. The analyses indicate that assimilating the horizontal polarization underestimates soil moisture, while assimilating the daytime signals clearly overestimates it. The former is perhaps due to the high sensitivity of the horizontal polarization to land surface heterogeneity; the latter is due to the effective soil temperature for microwave emission in the daytime being close to the temperature at several centimeters of soil depth rather than to the surface skin temperature. Therefore, assimilating the nighttime vertical polarizations in the LDAS is recommended. A further analysis shows that assimilating different frequency-combinations produces different soil moisture estimates and none is always superior to the others, because different frequency signals may be contaminated by varying clouds and/or water vapor to different degrees. Thus, an ensemble estimation based on frequency-combinations was proposed to filter out, to some extent, the stochastic frequency-dependent biases. The ensemble estimation performs more robustly when driven by

  10. Approaches for Improved Doppler Estimation in Lidar Remote Sensing of Atmospheric Dynamics

    NASA Astrophysics Data System (ADS)

    Bhaskaran, Sreevatsan; Calhoun, Ronald

    2016-06-01

    Laser radar (Lidar) has been used extensively for remote sensing of wind patterns, turbulence in the atmospheric boundary layer, and other important atmospheric transport phenomena. As in most narrowband radar applications, the radial velocity of remote objects is encoded in the Doppler shift of the backscattered signal relative to the transmitted signal. In contrast to many applications, however, the backscattered signal in atmospheric Lidar sensing arises from a multitude of moving particles in the spatial cell under examination rather than from a few prominent "target" scattering features. This complicates the process of extracting a single Doppler value and corresponding radial velocity figure to associate with the cell. This paper summarizes the prevalent methods for Doppler estimation in atmospheric Lidar applications and proposes a computationally efficient scheme for improving Doppler estimation by exploiting the local structure of spectral density estimates near spectral peaks.
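
    A common, computationally cheap instance of "exploiting the local structure near spectral peaks" is parabolic interpolation of the periodogram around its maximum; the sketch below shows that generic refinement, not necessarily the scheme proposed in the paper.

```python
# Refine a Doppler frequency estimate by fitting a parabola through the
# log-periodogram at the peak bin and its two neighbours.
import numpy as np

def doppler_frequency(x, fs):
    spec = np.abs(np.fft.fft(x)) ** 2
    k = int(np.argmax(spec))
    n = len(spec)
    a, b, c = (np.log(spec[(k - 1) % n]), np.log(spec[k]),
               np.log(spec[(k + 1) % n]))
    delta = 0.5 * (a - c) / (a - 2 * b + c)   # sub-bin offset in [-0.5, 0.5]
    return (k + delta) * fs / n               # Hz (aliases above Nyquist wrap)
```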

  11. Improving streamflow estimates through the use of LANDSAT. [Wisconsin and Pecatonica-Sugar River basins

    NASA Technical Reports Server (NTRS)

    Allord, G. J. (Principal Investigator); Scarpace, F. L.

    1981-01-01

    Estimates of low flow and flood frequency in several southwestern Wisconsin basins were improved by determining land cover from LANDSAT imagery. With the use of estimates of land cover in multiple-regression techniques, the standard error of estimate (SE) for the least annual 7-day low flow at 2- and 10-year recurrence intervals for ungaged sites was lowered by 9% each. The SE of flood frequency in the 'Driftless Area' of Wisconsin for 10-, 50-, and 100-year recurrence intervals was lowered by 14%. Four of nine basin characteristics determined from satellite imagery were significant variables in the multiple-regression techniques, whereas only 1 of the 12 characteristics determined from topographic maps was significant. The percentages of land cover categories in each basin were determined by merging basin boundaries, digitized from quadrangles, with a classified LANDSAT scene. Both the basin boundary X-Y polygon coordinates and the satellite coordinates were converted to latitude-longitude for merging compatibility.

  12. Improved method for estimation of multiple parameters in self-mixing interferometry.

    PubMed

    Gao, Yan; Yu, Yanguang; Xi, Jiangtao; Guo, Qinghua; Tong, Jun; Tong, Sheng

    2015-04-01

    There are two categories of applications for self-mixing interference (SMI)-based sensing: (1) estimation of parameters associated with a semiconductor laser (SL) and (2) measurement of the metrological quantities of the external target. To achieve high resolution sensing, each category of applications requires knowledge from the other. This paper proposes an improved method that can simultaneously measure the parameters of an SL and the target movement in arbitrary form. Starting with the existing SMI model, we derive a new matrix equation for the measurement. The measurement matrix is built by employing all the available data samples obtained from an SMI signal. The total least squares estimation approach is used to estimate the parameters. The proposed method is verified by both simulations and experiments. PMID:25967179
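
    The estimation step named in the abstract, total least squares, has a compact classical solution via the SVD; the sketch below is that generic solver, with the SMI-specific construction of the measurement matrix omitted.

```python
# Classical total least squares: solve A x ~ y allowing errors in both
# A and y, via the right singular vector of the augmented matrix [A | y].
import numpy as np

def total_least_squares(A, y):
    Z = np.hstack([A, y.reshape(-1, 1)])
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    v = Vt[-1]               # singular vector of the smallest singular value
    return -v[:-1] / v[-1]
```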

  13. The electronic image stabilization technology research based on improved optical-flow motion vector estimation

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Ji, Ming; Zhang, Ying; Jiang, Wentao; Lu, Xiaoyan; Wang, Jiaoying; Yang, Heng

    2016-01-01

    Electronic image stabilization based on an improved optical-flow motion vector estimation technique can effectively correct abnormal image motion such as jitter and rotation. Firstly, ORB features are extracted from the image and a set of regions is built on these features; secondly, the optical-flow vector is computed in the feature regions; to reduce the computational complexity, a multi-resolution pyramid strategy is used to calculate the motion vector of the frame; finally, qualitative and quantitative analyses of the algorithm's effect are carried out. The results show that the proposed algorithm has better stability than image stabilization based on traditional optical-flow motion vector estimation.
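
    The pipeline in this abstract can be sketched with OpenCV primitives: ORB keypoints, pyramidal Lucas-Kanade optical flow, and a global frame motion vector taken as the median displacement. The window size, pyramid depth, and median aggregation are illustrative simplifications, not the paper's exact settings.

```python
import cv2
import numpy as np

def frame_motion(prev_gray, curr_gray, max_pts=200):
    """Estimate a global (dx, dy) motion vector between two grayscale frames."""
    orb = cv2.ORB_create(nfeatures=max_pts)
    kps = orb.detect(prev_gray, None)
    if not kps:
        return np.zeros(2, dtype=np.float32)
    p0 = np.float32([kp.pt for kp in kps]).reshape(-1, 1, 2)
    # Pyramidal Lucas-Kanade flow (maxLevel sets the pyramid depth)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None,
                                             winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1
    if not good.any():
        return np.zeros(2, dtype=np.float32)
    return np.median((p1 - p0).reshape(-1, 2)[good], axis=0)
```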

  14. An improved approach for rainfall estimation over Indian summer monsoon region using Kalpana-1 data

    NASA Astrophysics Data System (ADS)

    Mahesh, C.; Prakash, Satya; Sathiyamoorthy, V.; Gairola, R. M.

    2014-08-01

    In this paper, an improved Kalpana-1 infrared (IR) based rainfall estimation algorithm, specific to the Indian summer monsoon region, is presented. The algorithm comprises two parts: (i) a Kalpana-1 IR based rainfall estimation algorithm that corrects the underestimation of orographic warm rain generally suffered by IR based methods and (ii) a cooling index that accounts for the growth and decay of clouds, thereby improving the precipitation estimation. In the first part, a power-law based regression relationship between cloud top temperature from the Kalpana-1 IR channel and rainfall from the Tropical Rainfall Measuring Mission (TRMM) precipitation radar, specific to the Indian region, is developed. This algorithm tries to overcome the inherent orographic issues of IR based rainfall estimation techniques. Over the windward sides of the Western Ghats, Himalayas, and Arakan Yoma mountain chains, separate regression coefficients are generated to capture the orographically produced warm rainfall. Global rainfall retrieval methods generally fail to detect warm rainfall over these regions. Rain estimated over the orographic region is suitably blended with the rain retrieved over the entire domain, comprising the Indian monsoon region and parts of the Indian Ocean, using another regression relationship. While blending, a smoothening function is applied to avoid rainfall artefacts and an elliptical weighting function is introduced for the purpose. In the second part, a cooling index to distinguish rain/no-rain conditions is developed using Kalpana-1 IR data. The cooling index identifies cloud growing/decaying regions using two consecutive half-hourly IR images of Kalpana-1 by assigning appropriate weights to growing and non-growing clouds. The estimated rainfall from the present algorithm is intercompared with TRMM-3B42/3B43 precipitation products and Indian Meteorological Department (IMD) gridded rain gauge data and is found to be
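
    The first ingredient, a power-law regression between an IR-derived predictor and radar rain rate, reduces to a linear fit in log-log space. The predictor transform suggested in the comment below (a cloud-top temperature deficit relative to a threshold) is an assumption for illustration.

```python
# Fit rain = a * x**b in log-log space; x is an assumed IR predictor such
# as max(0, T_threshold - cloud_top_temperature).
import numpy as np

def fit_power_law(x, rain):
    mask = (x > 0) & (rain > 0)
    b, log_a = np.polyfit(np.log(x[mask]), np.log(rain[mask]), 1)
    return np.exp(log_a), b     # coefficients a, b
```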

  15. Improving Estimates of Coseismic Subsidence from southern Cascadia Subduction Zone Earthquakes at northern Humboldt Bay, California

    NASA Astrophysics Data System (ADS)

    Padgett, J. S.; Engelhart, S. E.; Hemphill-Haley, E.; Kelsey, H. M.; Witter, R. C.

    2015-12-01

    Geological estimates of subsidence from past earthquakes help to constrain Cascadia subduction zone (CSZ) earthquake rupture models. To improve subsidence estimates for past earthquakes along the southern CSZ, we apply transfer function analysis to microfossils from three intertidal marshes in northern Humboldt Bay, California, ~60 km north of the Mendocino Triple Junction. The transfer function method uses elevation-dependent intertidal foraminiferal and diatom assemblages to reconstruct relative sea-level (RSL) change indicated by shifts in microfossil assemblages. We interpret stratigraphic evidence associated with sudden shifts in microfossils to reflect sudden RSL rise due to subsidence during past CSZ earthquakes. Laterally extensive (>5 km) and sharp mud-over-peat contacts beneath marshes at Jacoby Creek, Mad River Slough, and McDaniel Slough demonstrate widespread earthquake subsidence in northern Humboldt Bay. C-14 ages of plant macrofossils taken from above and below three contacts that correlate across all three sites provide estimates of the times of subsidence at ~250 yr BP, ~1300 yr BP, and ~1700 yr BP. Two further contacts observed at only two sites provide evidence for subsidence during possible CSZ earthquakes at ~900 yr BP and ~1100 yr BP. Our study contributes 20 AMS radiocarbon ages of identifiable plant macrofossils that improve estimates of the timing of past earthquakes along the southern CSZ. We anticipate that our results will provide more accurate and precise reconstructions of RSL change induced by southern CSZ earthquakes. Prior to our work, studies in northern Humboldt Bay provided subsidence estimates with vertical uncertainties >±0.5 m, too imprecise to adequately constrain earthquake rupture models. Our method, applied recently in coastal Oregon, has shown that subsidence during past CSZ earthquakes can be reconstructed with a precision of ±0.3 m and substantially improves constraints on rupture models used for seismic hazard

  16. Experimental verification of an interpolation algorithm for improved estimates of animal position.

    PubMed

    Schell, Chad; Jaffe, Jules S

    2004-07-01

    This article presents experimental verification of an interpolation algorithm that was previously proposed in Jaffe [J. Acoust. Soc. Am. 105, 3168-3175 (1999)]. The goal of the algorithm is to improve estimates of both target position and target strength by minimizing a least-squares residual between noise-corrupted target measurement data and the output of a model of the sonar's amplitude response to a target at a set of known locations. Although this positional estimator was shown to be a maximum likelihood estimator, in principle, experimental verification was desired because of interest in understanding its true performance. Here, the accuracy of the algorithm is investigated by analyzing the correspondence between a target's true position and the algorithm's estimate. True target position was measured by precise translation of a small test target (bead) or from the analysis of images of fish from a coregistered optical imaging system. Results with the stationary spherical test bead in a high signal-to-noise environment indicate that a large increase in resolution is possible, while results with commercial aquarium fish indicate a smaller increase is obtainable. However, in both experiments the algorithm provides improved estimates of target position over those obtained by simply accepting the angular positions of the sonar beam with maximum output as target position. In addition, increased accuracy in target strength estimation is possible by considering the effects of the sonar beam patterns relative to the interpolated position. A benefit of the algorithm is that it can be applied "ex post facto" to existing data sets from commercial multibeam sonar systems when only the beam intensities have been stored after suitable calibration. PMID:15295985
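
    The estimator being verified can be sketched as a grid search: for each candidate position, the target strength has a closed-form least-squares fit given the modelled beam response, and the position minimising the residual is kept. `beam_model` is an assumed user-supplied function returning the sonar's amplitude response to a target at a given position.

```python
import numpy as np

def estimate_position(measured, candidates, beam_model):
    """measured: (n_beams,) amplitudes; candidates: iterable of positions."""
    best, best_cost = None, np.inf
    for pos in candidates:
        g = beam_model(pos)                  # modelled response, (n_beams,)
        ts = (g @ measured) / (g @ g)        # closed-form target-strength fit
        cost = np.sum((measured - ts * g) ** 2)
        if cost < best_cost:
            best, best_cost = (pos, ts), cost
    return best                              # (position, target strength)
```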

  17. Experimental verification of an interpolation algorithm for improved estimates of animal position

    NASA Astrophysics Data System (ADS)

    Schell, Chad; Jaffe, Jules S.

    2004-07-01

    This article presents experimental verification of an interpolation algorithm that was previously proposed in Jaffe [J. Acoust. Soc. Am. 105, 3168-3175 (1999)]. The goal of the algorithm is to improve estimates of both target position and target strength by minimizing a least-squares residual between noise-corrupted target measurement data and the output of a model of the sonar's amplitude response to a target at a set of known locations. Although this positional estimator was shown to be a maximum likelihood estimator, in principle, experimental verification was desired because of interest in understanding its true performance. Here, the accuracy of the algorithm is investigated by analyzing the correspondence between a target's true position and the algorithm's estimate. True target position was measured by precise translation of a small test target (bead) or from the analysis of images of fish from a coregistered optical imaging system. Results with the stationary spherical test bead in a high signal-to-noise environment indicate that a large increase in resolution is possible, while results with commercial aquarium fish indicate a smaller increase is obtainable. However, in both experiments the algorithm provides improved estimates of target position over those obtained by simply accepting the angular positions of the sonar beam with maximum output as target position. In addition, increased accuracy in target strength estimation is possible by considering the effects of the sonar beam patterns relative to the interpolated position. A benefit of the algorithm is that it can be applied ``ex post facto'' to existing data sets from commercial multibeam sonar systems when only the beam intensities have been stored after suitable calibration.

  18. Combining GOSAT XCO2 observations over land and ocean to improve regional CO2 flux estimates

    NASA Astrophysics Data System (ADS)

    Deng, Feng; Jones, Dylan B. A.; O'Dell, Christopher W.; Nassar, Ray; Parazoo, Nicholas C.

    2016-02-01

    We used the GEOS-Chem data assimilation system to examine the impact of combining Greenhouse Gases Observing Satellite (GOSAT) XCO2 data over land and ocean on regional CO2 flux estimates for 2010-2012. We found that compared to assimilating only land data, combining land and ocean data produced an a posteriori CO2 distribution that is in better agreement with independent data and fluxes that are in closer agreement with existing top-down and bottom-up estimates. Adding XCO2 data over oceans changed the tropical land regions from a source of 0.64 Pg C/yr to a sink of -0.60 Pg C/yr and produced a corresponding reduction in the estimated sink in northern and southern land regions by 0.49 Pg C/yr and 0.80 Pg C/yr, respectively. This highlights the importance of improved observational coverage in the tropics to better quantify the latitudinal distribution of the terrestrial fluxes. Based only on land XCO2 data, we estimated a strong source in northern tropical South America, which experienced wet conditions in 2010-2012. In contrast, with the land and ocean data, we estimated a sink for this wet region in the north, and a source for the seasonally dry regions in the south and east, which is consistent with our understanding of the impact of moisture availability on the carbon balance of the region. Our results suggest that using satellite data with a more zonally balanced observational coverage could help mitigate discrepancies in CO2 flux estimates; further improvement could be expected with the greater observational coverage provided by the Orbiting Carbon Observatory-2.

  19. Improving Spectral Crop Coefficient Approach with Raw Image Digital Count Data to Estimate Crop Water Use

    NASA Astrophysics Data System (ADS)

    Shafian, S.; Maas, S. J.; Rajan, N.

    2014-12-01

    Water resources and agricultural applications require knowledge of crop water use (CWU) over a range of spatial and temporal scales. Due to the spatial density of meteorological stations, the resolution of CWU estimates based on these data is fairly coarse and not particularly suitable or reliable for water resources planning, irrigation scheduling, and decision making. Various methods have been developed for quantifying the CWU of agricultural crops. In this study, an improved version of the spectral crop coefficient which includes the effects of stomatal closure is applied. Raw digital count (DC) data in the red, near-infrared, and thermal infrared (TIR) spectral bands of the Landsat-7 and Landsat-8 imaging sensors are used to construct the TIR-ground cover (GC) pixel data distribution and estimate the effects of stomatal closure. CWU is then estimated by combining results of the spectral crop coefficient approach and the stomatal closure effect. To test this approach, evapotranspiration was measured in 5 agricultural fields in the semi-arid Texas High Plains during the 2013 and 2014 growing seasons and compared to corresponding estimated values of CWU determined using this approach. The results showed that the estimated CWU from this approach was strongly correlated (R2 = 0.79) with observed evapotranspiration. In addition, the results showed that considering the stomatal closure effect in the proposed approach can improve the accuracy of the spectral crop coefficient method. These results suggest that the proposed approach is suitable for operational estimation of evapotranspiration and irrigation scheduling where irrigation is used to replace the daily CWU of a crop.

  20. A cluster randomized trial of an organizational process improvement intervention for improving the assessment and case planning of offenders: a Study Protocol

    PubMed Central

    Shafer, Michael S; Prendergast, Michael; Melnick, Gerald; Stein, Lynda A; Welsh, Wayne N

    2014-01-01

    Background The Organizational Process Improvement Intervention (OPII), conducted by the NIDA-funded Criminal Justice Drug Abuse Treatment Studies consortium of nine research centers, examined an organizational intervention to improve the processes used in correctional settings to assess substance abusing offenders, develop case plans, transfer this information to community-based treatment agencies, and monitor the services provided by these community based treatment agencies. Methods/Design A multi-site cluster randomized design was used to evaluate an inter-agency organizational process improvement intervention among dyads of correctional agencies and community based treatment agencies. Linked correctional and community based agencies were clustered among nine (9) research centers and randomly assigned to an early or delayed intervention condition. Participants included administrators, managers, and line staff from the participating agencies; some participants served on interagency change teams while other participants performed agency tasks related to offender services. A manualized organizational intervention that includes the use of external organizational coaches was applied to create and support interagency change teams that proceeded through a four-step process over a planned intervention period of 12 months. The primary outcome of the process improvement intervention was to improve processes associated with the assessment, case planning, service referral and service provision processes within the linked organizations. Discussion Providing substance abuse offenders with coordinated treatment and access to community-based services is critical to reducing offender recidivism. Results from this study protocol will provide new and critical information on strategies and processes that improve the assessment and case planning for such offenders as they transition between correctional and community based systems and settings. Further, this study extends current

  1. Improved Estimates of Capital Formation in the National Health Expenditure Accounts

    PubMed Central

    Sensenig, Arthur L.; Donahoe, Gerald F.

    2006-01-01

    The National Health Expenditure Accounts (NHEA) were revised with the release of the 2004 estimates. The largest revision was the incorporation of a more comprehensive measure of investment in medical sector capital. The revision raised total health expenditures' share of gross domestic product (GDP) from 15.4 to 15.8 percent in 2003. The improved measure encompasses investment in moveable equipment and software, as well as expenditures for the construction of structures used by the medical sector. PMID:17290665

  2. Improved estimates of capital formation in the National Health Expenditure Accounts.

    PubMed

    Sensenig, Arthur L; Donahoe, Gerald F

    2006-01-01

    The National Health Expenditure Accounts (NHEA) were revised with the release of the 2004 estimates. The largest revision was the incorporation of a more comprehensive measure of investment in medical sector capital. The revision raised total health expenditures' share of gross domestic product (GDP) from 15.4 to 15.8 percent in 2003. The improved measure encompasses investment in moveable equipment and software, as well as expenditures for the construction of structures used by the medical sector. PMID:17290665

  3. Estimating Typhoon Rainfall over Sea from SSM/I Satellite Data Using an Improved Genetic Programming

    NASA Astrophysics Data System (ADS)

    Yeh, K.; Wei, H.; Chen, L.; Liu, G.

    2010-12-01

    This paper proposes an improved multi-run genetic programming (GP) approach and applies it to predict rainfall using meteorological satellite data. GP is a well-known evolutionary programming and data mining method, used to automatically discover the complex relationships among nonlinear systems. The main advantage of GP is that it optimizes appropriate types of functions and their associated coefficients simultaneously. This study makes an improvement to enhance the ability to escape from local optimums during the optimization procedure. The GP is run several times in succession, replacing the terminal nodes at the next run with the best solution of the current run. The improved GP obtains a highly nonlinear mathematical equation to estimate the rainfall. In the case study, this improved GP, combined with SSM/I satellite data, is employed to establish a suitable method for estimating rainfall at the sea surface during typhoon periods. These estimated rainfalls are then verified against data from four rainfall stations located at Peng-Jia-Yu, Don-Gji-Dao, Lan-Yu, and Green Island, four small islands around Taiwan. From the results, the improved GP can generate a sophisticated and accurate nonlinear mathematical equation through two-run learning procedures which outperforms the traditional multiple linear regression, empirical equations and back-propagated network

  4. An improved method for estimating the neutron background in measurements of neutron capture reactions

    NASA Astrophysics Data System (ADS)

    Žugec, P.; Bosnar, D.; Colonna, N.; Gunsing, F.

    2016-08-01

    The relation between the neutron background in neutron capture measurements and the neutron sensitivity related to the experimental setup is examined. It is pointed out that a proper estimate of the neutron background may only be obtained by means of dedicated simulations taking into account the full framework of the neutron-induced reactions and their complete temporal evolution. No other presently available method seems to provide reliable results, in particular under the capture resonances. An improved neutron background estimation technique is proposed, the main improvement regarding the treatment of the neutron sensitivity, taking into account the temporal evolution of the neutron-induced reactions. The technique is complemented by an advanced data analysis procedure based on relativistic kinematics of neutron scattering. The analysis procedure allows for the calculation of the neutron background in capture measurements, without requiring the time-consuming simulations to be adapted to each particular sample. A suggestion is made on how to improve the neutron background estimates if neutron background simulations are not available.

  5. Improving winter leaf area index estimation in coniferous forests and its significance in estimating the land surface albedo

    NASA Astrophysics Data System (ADS)

    Wang, Rong; Chen, Jing M.; Pavlic, Goran; Arain, Altaf

    2016-09-01

    Winter leaf area index (LAI) of evergreen coniferous forests exerts strong control on the interception of snow, snowmelt and energy balance. Simulation of winter LAI and associated winter processes in land surface models is challenging. Retrieving winter LAI from remote sensing data is difficult due to cloud contamination, poor illumination, lower solar elevation and higher radiation reflection by snow background. Underestimated winter LAI in evergreen coniferous forests is one of the major issues limiting the application of current remote sensing LAI products. It has not been fully addressed in past studies in the literature. In this study, we used needle lifespan to correct winter LAI in a remote sensing product developed by the University of Toronto. For the validation purpose, the corrected winter LAI was then used to calculate land surface albedo at five FLUXNET coniferous forests in Canada. The RMSE and bias values for estimated albedo were 0.05 and 0.011, respectively, for all sites. The albedo map over coniferous forests across Canada produced with corrected winter LAI showed much better agreement with the GLASS (Global LAnd Surface Satellites) albedo product than the one produced with uncorrected winter LAI. The results revealed that the corrected winter LAI yielded much greater accuracy in simulating land surface albedo, making the new LAI product an improvement over the original one. Our study will help to increase the usability of remote sensing LAI products in land surface energy budget modeling.

  6. Estimates of achievable potential for electricity efficiency improvements in U.S. residences

    SciTech Connect

    Brown, Richard

    1993-05-01

    This paper investigates the potential for public policies to achieve electricity efficiency improvements in US residences. This estimate of achievable potential builds upon a database of energy-efficient technologies developed for a previous study estimating the technical potential for electricity savings. The savings potential and cost for each efficiency measure in the database are modified to reflect the expected results of policies implemented between 1990 and 2010. Factors included in these modifications are: the market penetration of efficiency measures, the costs of administering policies, and adjustments to the technical potential measures to reflect the actual energy savings and cost experienced in the past. When all adjustment factors are considered, this study estimates that policies can achieve approximately 45% of the technical potential savings during the period from 1990 to 2010. Thus, policies can potentially avoid 18% of the annual frozen-efficiency baseline electricity consumption forecast for the year 2010. This study also investigates the uncertainty in the best estimate of achievable potential by estimating two alternative scenarios -- a

  7. A 10-Week Multimodal Nutrition Education Intervention Improves Dietary Intake among University Students: Cluster Randomised Controlled Trial.

    PubMed

    Shahril, Mohd Razif; Wan Dali, Wan Putri Elena; Lua, Pei Lin

    2013-01-01

    The aim of the study was to evaluate the effectiveness of implementing a multimodal nutrition education intervention (NEI) to improve dietary intake among university students. The study used a cluster randomised controlled design at four public universities on the East Coast of Malaysia. A total of 417 university students participated in the study. They were randomly selected and assigned into two arms, that is, an intervention group (IG) or a control group (CG), according to their cluster. The IG received a 10-week multimodal intervention using three modes (conventional lecture, brochures, and text messages) while the CG did not receive any intervention. Dietary intake was assessed before and after intervention and outcomes reported as nutrient intakes as well as average daily servings of food intake. Analysis of covariance (ANCOVA) and adjusted effect size were used to determine differences in dietary changes between groups and over time. Results showed that, compared to the CG, participants in the IG significantly improved their dietary intake by increasing their intake of energy, carbohydrate, calcium, vitamin C and thiamine, fruits and 100% fruit juice, fish, egg, milk, and dairy products, while at the same time significantly decreasing their processed food intake. In conclusion, a multimodal NEI focusing on healthy eating promotion is an effective approach to improve dietary intakes among university students. PMID:24069535

  8. Cloning, reassembling and integration of the entire nikkomycin biosynthetic gene cluster into Streptomyces ansochromogenes lead to an improved nikkomycin production

    PubMed Central

    2010-01-01

    Background Nikkomycins are a group of peptidyl nucleoside antibiotics produced by Streptomyces ansochromogenes. They are competitive inhibitors of chitin synthase and show potent fungicidal, insecticidal, and acaricidal activities. Nikkomycin X and Z are the main components produced by S. ansochromogenes. Generation of a high-producing strain is crucial to scale up nikkomycin production for further clinical trials. Results To increase the yields of nikkomycins, an additional copy of the nikkomycin biosynthetic gene cluster (35 kb) was introduced into the nikkomycin-producing strain S. ansochromogenes 7100. The gene cluster was first reassembled into an integrative plasmid by Red/ET technology combined with classic cloning methods, and the resulting plasmid (pNIK) was then introduced into S. ansochromogenes by conjugal transfer. Introduction of pNIK led to enhanced production of nikkomycins (880 mg L-1, 4-fold nikkomycin X and 210 mg L-1, 1.8-fold nikkomycin Z) in the resulting exconjugants compared with the parent strain (220 mg L-1 nikkomycin X and 120 mg L-1 nikkomycin Z). The exconjugants are genetically stable in the absence of antibiotic resistance selection pressure. Conclusion A high nikkomycin-producing strain (1100 mg L-1 nikkomycins) was obtained by introducing an extra nikkomycin biosynthetic gene cluster into the genome of S. ansochromogenes. The strategies presented here could be applicable to other bacteria to improve the yields of secondary metabolites. PMID:20096125

  9. Improving Local and Regional Flood Quantile Estimates Using a Hierarchical Bayesian GEV Model

    NASA Astrophysics Data System (ADS)

    Ribeiro Lima, C. H.; Lall, U.; Devineni, N.; Troy, T.

    2013-12-01

    Flood risk management usually relies on local and regional flood frequency analysis, which tends to suffer from lack of data and parameter uncertainties. Here we estimate local and regional Generalized Extreme Value (GEV) distribution parameters in a hierarchical Bayesian framework, which helps reduce uncertainties by pooling more information in the estimation process and provides a simple topology to propagate model and parameter uncertainties to flood quantile estimates. As prior information for the Bayesian model, it is assumed for each site that the GEV location and scale parameters come from independent log-normal distributions, whose mean parameter follows the well-known log-log scaling law with the drainage area. The shape parameter for each site is shrunk towards a common mean. Non-informative prior distributions are assumed for the hyperparameters and the MCMC method is used to sample from the posterior distributions. The model is tested using annual maximum series from 20 streamflow gauges located in an 83,000 km² basin in southeastern Brazil. The results show a significant improvement of flood quantile estimates over the traditional GEV model, particularly for sites with few data. For return periods within the range of the data (around 50 years), the Bayesian credible intervals for the flood quantiles are narrower than the classical confidence limits based on the delta method. As the return period increases beyond the range of the data, the confidence limits from the delta method become unreliable and the Bayesian credible intervals provide a way to estimate satisfactory confidence bands for the flood quantiles considering the parameter uncertainties. In order to evaluate the applicability of the proposed hierarchical Bayesian model for flood frequency regional analysis, we estimate flood quantiles for three randomly chosen out-of-sample sites and compare with classical estimates using the index flood method. The posterior distributions of the scaling
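
    The paper fits the full hierarchical model by MCMC; as a simplified empirical analogue under stated assumptions, the sketch below fits a GEV per site with SciPy and shrinks the site shape parameters toward their common mean before computing flood quantiles, illustrating only the pooling idea.

```python
# Per-site GEV fits with shape parameters shrunk toward a common mean;
# a crude stand-in for the paper's hierarchical Bayesian pooling.
import numpy as np
from scipy.stats import genextreme

def pooled_flood_quantiles(annual_maxima_by_site, return_period=50, w=0.5):
    fits = [genextreme.fit(x) for x in annual_maxima_by_site]  # (c, loc, scale)
    mean_c = np.mean([c for c, _, _ in fits])
    p = 1.0 - 1.0 / return_period
    return [genextreme.ppf(p, w * c + (1 - w) * mean_c, loc, scale)
            for c, loc, scale in fits]
```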

  10. Improvement of biological time-of-flight-secondary ion mass spectrometry imaging with a bismuth cluster ion source.

    PubMed

    Touboul, David; Kollmer, Felix; Niehuis, Ewald; Brunelle, Alain; Laprévote, Olivier

    2005-10-01

    A new liquid metal ion gun (LMIG) filled with bismuth has been fitted to a time-of-flight-secondary ion mass spectrometer (TOF-SIMS). This source provides beams of Bi(n)q+ clusters with n = 1-7 and q = 1 and 2. The appropriate clusters have much better intensities and efficiencies than the Au3+ gold clusters recently used in TOF-SIMS imaging, and allow better lateral and mass resolution. The different beams delivered by this ion source have been tested for biological imaging of rat brain sections. The results show a great improvement of the imaging capabilities in terms of accessible mass range and useful lateral resolution. Secondary ion yields Y, disappearance cross sections sigma, efficiencies E = Y/sigma, and useful lateral resolutions deltaL have been compared using the different bismuth clusters, directly onto the surface of rat brain sections and for several positive and negative secondary ions with m/z ranging from 23 up to more than 750. The efficiency and the imaging capabilities of the different primary ions are compared by taking into account the primary ion current for reasonable acquisition times. The two best primary ions are Bi3+ and Bi5(2+). The Bi3+ ion beam has a current at least five times larger than Au3+ and therefore is an excellent beam for large-area imaging. Bi5(2+) ions exhibit large secondary ions yields and a reasonable intensity making them suitable for small-area images with an excellent sensitivity and a possible useful lateral resolution <400 nm. PMID:16112869

  11. Improving radar estimates of rainfall using an input subset of artificial neural networks

    NASA Astrophysics Data System (ADS)

    Yang, Tsun-Hua; Feng, Lei; Chang, Lung-Yao

    2016-04-01

    An input subset including average radar reflectivity (Zave) and its standard deviation (SD) is proposed to improve radar estimates of rainfall based on a radial basis function (RBF) neural network. The RBF derives a relationship from a historical input subset, called a training dataset, consisting of radar measurements such as reflectivity (Z) aloft and associated rainfall observation (R) on the ground. The unknown rainfall rate can then be predicted over the derived relationship with known radar measurements. The selection of the input subset has a significant impact on the prediction performance. This study simplified the selection of input subsets and studied its improvement in rainfall estimation. The proposed subset includes: (1) the Zave of the observed Z within a given distance from the ground observation to represent the intensity of a storm system and (2) the SD of the observed Z to describe the spatial variability. Using three historical rainfall events in 1999 near Darwin, Australia, the performance evaluation is conducted using three approaches: an empirical Z-R relation, RBF with Z, and RBF with Zave and SD. The results showed that the RBF with both Zave and SD achieved better rainfall estimations than the RBF using only Z. Two performance measures were used: (1) the Pearson correlation coefficient improved from 0.15 to 0.58 and (2) the average root-mean-square error decreased from 14.14 mm to 11.43 mm. The proposed model and findings can be used for further applications involving the use of neural networks for radar estimates of rainfall.
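
    A minimal stand-in for the RBF network described above: a kernel ridge regression with an RBF kernel mapping the two-feature input subset (Zave, SD) to gauge rainfall. The hyperparameters and variable names are illustrative assumptions.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def train_rbf_estimator(z_ave, z_sd, rain_obs, gamma=0.1, alpha=1.0):
    X = np.column_stack([z_ave, z_sd])       # input subset: (Zave, SD)
    return KernelRidge(kernel="rbf", gamma=gamma, alpha=alpha).fit(X, rain_obs)

# usage: model = train_rbf_estimator(...); rain_hat = model.predict(X_new)
```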

  12. Improving North American gross primary production (GPP) estimates using atmospheric measurements of carbonyl sulfide (COS)

    NASA Astrophysics Data System (ADS)

    Chen, Huilin; Montzka, Steve; Andrews, Arlyn; Sweeney, Colm; Jacobson, Andy; Miller, Ben; Masarie, Ken; Jung, Martin; Gerbig, Christoph; Campbell, Elliott; Abu-Naser, Mohammad; Berry, Joe; Baker, Ian; Tans, Pieter

    2013-04-01

    Understanding the responses of gross primary production (GPP) to climate change is essential for improving our prediction of climate change. To this end, it is important to accurately partition net ecosystem exchange of carbon into GPP and respiration. Recent studies suggest that carbonyl sulfide is a useful tracer to provide a constraint on GPP, based on the fact that both COS and CO2 are simultaneously taken up by plants and the quantitative correlation between GPP and COS plant uptake. We will present an assessment of North American GPP estimates from the Simple Biosphere (SiB) model, the Carnegie-Ames-Stanford Approach (CASA) model, and the MPI-BGC model through atmospheric transport simulations of COS in a receptor oriented framework. The newly upgraded Hybrid Single Particle Lagrangian Integrated Trajectory Model (HYSPLIT) will be employed to compute the influence functions, i.e. footprints, to link the surface fluxes to the concentration changes at the receptor observations. The HYSPLIT is driven by the 3-hourly archived NAM 12km meteorological data from NOAA NCEP. The background concentrations are calculated using empirical curtains along the west coast of North America that have been created by interpolating in time and space the observations at the NOAA/ESRL marine boundary layer stations and from aircraft vertical profiles. The plant uptake of COS is derived from GPP estimates of biospheric models. The soil uptake and anthropogenic emissions are from Kettle et al. 2002. In addition, we have developed a new soil flux map of COS based on observations of molecular hydrogen (H2), which shares a common soil uptake term but lacks a vegetative sink. We will also improve the GPP estimates by assimilating atmospheric observations of COS in the receptor oriented framework, and then present the assessment of the improved GPP estimates against variations of climate variables such as temperature and precipitation.

  13. Improved methods to estimate the effective impervious area in urban catchments using rainfall-runoff data

    NASA Astrophysics Data System (ADS)

    Ebrahimian, Ali; Wilson, Bruce N.; Gulliver, John S.

    2016-05-01

    Impervious surfaces are useful indicators of the urbanization impacts on water resources. Effective impervious area (EIA), which is the portion of total impervious area (TIA) that is hydraulically connected to the drainage system, is a better catchment parameter in the determination of actual urban runoff. Development of reliable methods for quantifying EIA rather than TIA is currently one of the knowledge gaps in the rainfall-runoff modeling context. The objective of this study is to improve the rainfall-runoff data analysis method for estimating EIA fraction in urban catchments by eliminating the subjective part of the existing method and by reducing the uncertainty of EIA estimates. First, the theoretical framework is generalized using a general linear least square model and using a general criterion for categorizing runoff events. Issues with the existing method that reduce the precision of the EIA fraction estimates are then identified and discussed. Two improved methods, based on ordinary least square (OLS) and weighted least square (WLS) estimates, are proposed to address these issues. The proposed weighted least squares method is then applied to eleven urban catchments in Europe, Canada, and Australia. The results are compared to map measured directly connected impervious area (DCIA) and are shown to be consistent with DCIA values. In addition, both of the improved methods are applied to nine urban catchments in Minnesota, USA. Both methods were successful in removing the subjective component inherent in the analysis of rainfall-runoff data of the current method. The WLS method is more robust than the OLS method and generates results that are different and more precise than the OLS method in the presence of heteroscedastic residuals in our rainfall-runoff data.
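
    The regression at the heart of the improved method can be sketched with statsmodels: over selected storm events, runoff depth is regressed on rainfall depth by weighted least squares, and the slope estimates the EIA fraction. The event categorisation and the paper's specific weighting scheme are not reproduced here.

```python
import numpy as np
import statsmodels.api as sm

def eia_fraction(rainfall_depth, runoff_depth, weights):
    X = sm.add_constant(rainfall_depth)   # intercept absorbs initial losses
    res = sm.WLS(runoff_depth, X, weights=weights).fit()
    return res.params[1], res.bse[1]      # slope ~ EIA fraction, its std. error
```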

  14. An estimation method of MR signal parameters for improved image reconstruction in unilateral scanner

    NASA Astrophysics Data System (ADS)

    Bergman, Elad; Yeredor, Arie; Nevo, Uri

    2013-12-01

    Unilateral NMR devices are used in various applications including non-destructive testing and well logging, but are not used routinely for imaging. This is mainly due to the inhomogeneous magnetic field (B0) in these scanners. This inhomogeneity results in low sensitivity and further forces the use of the slow single point imaging scan scheme. Improving the measurement sensitivity is therefore an important factor as it can improve image quality and reduce imaging times. Short imaging times can facilitate the use of this affordable and portable technology for various imaging applications. This work presents a statistical signal-processing method, designed to fit the unique characteristics of imaging with a unilateral device. The method improves the imaging capabilities by improving the extraction of image information from the noisy data. This is done by the use of redundancy in the acquired MR signal and by the use of the noise characteristics. Both types of data were incorporated into a Weighted Least Squares estimation approach. The method performance was evaluated with a series of imaging acquisitions applied on phantoms. Images were extracted from each measurement with the proposed method and were compared to the conventional image reconstruction. All measurements showed a significant improvement in image quality based on the MSE criterion - with respect to gold standard reference images. An integration of this method with further improvements may lead to a prominent reduction in imaging times aiding the use of such scanners in imaging application.

  15. How does our choice of observable influence our estimation of the centre of a galaxy cluster? Insights from cosmological simulations

    NASA Astrophysics Data System (ADS)

    Cui, Weiguang; Power, Chris; Biffi, Veronica; Borgani, Stefano; Murante, Giuseppe; Fabjan, Dunja; Knebe, Alexander; Lewis, Geraint F.; Poole, Greg B.

    2016-03-01

    Galaxy clusters are an established and powerful test-bed for theories of both galaxy evolution and cosmology. Accurate interpretation of cluster observations often requires robust identification of the location of the centre. Using a statistical sample of clusters drawn from a suite of cosmological simulations in which we have explored a range of galaxy formation models, we investigate how the location of this centre is affected by the choice of observable - stars, hot gas, or the full mass distribution as probed by the gravitational potential. We explore several measures of the cluster centre: the minimum of the gravitational potential, which would be expected to define the centre if the cluster is in dynamical equilibrium; the peak of the density; the centre of the brightest cluster galaxy (BCG); and the peak and centroid of X-ray luminosity. We find that the BCG centre correlates more strongly with the minimum of the gravitational potential than the X-ray defined centres do, while active galactic nuclei feedback acts to significantly enhance the offset between the peak X-ray luminosity and the minimum of the gravitational potential. These results highlight the importance of centre identification when interpreting cluster observations, in particular when comparing theoretical predictions and observational data.

  16. Breaking the bottleneck: Use of molecular tailoring approach for the estimation of binding energies at MP2/CBS limit for large water clusters

    NASA Astrophysics Data System (ADS)

    Singh, Gurmeet; Nandi, Apurba; Gadre, Shridhar R.

    2016-03-01

    A pragmatic method based on the molecular tailoring approach (MTA) for accurately estimating the complete basis set (CBS) limit at second-order Møller-Plesset perturbation (MP2) theory for large molecular clusters with limited computational resources is developed. It is applied to water clusters, (H2O)n (n = 7, 8, 10, 16, 17, and 25), optimized employing the aug-cc-pVDZ (aVDZ) basis set. Binding energies (BEs) of these clusters are estimated at the MP2/aug-cc-pVNZ (aVNZ) [N = T, Q, and 5 (whenever possible)] levels of theory employing the grafted MTA (GMTA) methodology and are found to lie within 0.2 kcal/mol of the corresponding full-calculation MP2 BE, wherever available. The results are extrapolated to the CBS limit using a three point formula. The GMTA-MP2 calculations are feasible on off-the-shelf hardware and show around 50%-65% savings of computational time. The methodology has the potential for application to molecular clusters containing ~100 atoms.
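
    The abstract does not spell out its "three point formula"; assuming the standard exponential form E(X) = E_CBS + B·exp(-aX) at consecutive cardinal numbers X (T, Q, 5 → 3, 4, 5), the CBS limit follows in closed form:

```python
def cbs_three_point(e3, e4, e5):
    """CBS limit assuming E(X) = E_CBS + B*exp(-a*X) at X = 3, 4, 5."""
    d1, d2 = e3 - e4, e4 - e5
    return e5 - d2 ** 2 / (d1 - d2)   # == (e3*e5 - e4**2) / (e3 + e5 - 2*e4)
```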

  17. Combining the Estimated Date of HIV Infection with a Phylogenetic Cluster Study to Better Understand HIV Spread: Application in a Paris Neighbourhood

    PubMed Central

    Robineau, Olivier; Frange, Pierre; Barin, Francis; Cazein, Françoise; Girard, Pierre-Marie; Chaix, Marie-Laure; Kreplak, Georges; Boelle, Pierre-Yves; Morand-Joubert, Laurence

    2015-01-01

    Objectives To relate socio-demographic and virological information to phylogenetic clustering in HIV-infected patients in a limited geographical area and to evaluate the role of recently infected individuals in the spread of HIV. Methods HIV-1 pol sequences from newly diagnosed and treatment-naive patients receiving follow-up between 2008 and 2011 by physicians belonging to a health network in Paris were used to build a phylogenetic tree using neighbour-joining analysis. Time since infection was estimated by immunoassay to define recently infected patients (very early infected presenters, VEP). Data on socio-demographic, clinical and biological features in clustered and non-clustered patients were compared. The structure of the chains of infection was also analysed. Results 547 patients were included; 49 chains of infection containing 108 (20%) patients were identified by phylogenetic analysis. Eighty individuals formed pairs and 28 individuals belonged to larger clusters. The median time between two successive HIV diagnoses in the same chain of infection was 248 days [CI = 176–320]. 34.7% of individuals were considered VEP, and 27% of them were included in chains of infection. Multivariable analysis showed that belonging to a cluster was more frequent in VEP and in those under 30 years old (OR: 3.65, 95% CI 1.49–8.95, p = 0.005 and OR: 2.42, 95% CI 1.05–5.85, p = 0.04, respectively). The prevalence of drug resistance was not associated with belonging to a pair or a cluster. Within chains, VEP were not grouped together more than chance predicted (p = 0.97). Conclusions Most newly diagnosed patients did not belong to a chain of infection, confirming the importance of undiagnosed or untreated HIV-infected individuals in transmission. Furthermore, clusters involving both recently infected individuals and longstanding infected individuals support a substantial transmission role for the latter before diagnosis. PMID:26267615
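
    As a rough illustration of the tree-building step, the neighbour-joining construction can be reproduced with Biopython. The alignment file name and the simple identity distance model below are assumptions for the sketch, not details taken from the study.

```python
from Bio import AlignIO
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Sketch of the neighbour-joining step, assuming 'pol.fasta' holds an
# aligned set of HIV-1 pol sequences (file name is hypothetical).
alignment = AlignIO.read("pol.fasta", "fasta")

calculator = DistanceCalculator("identity")   # simple p-distance model
distance_matrix = calculator.get_distance(alignment)

constructor = DistanceTreeConstructor()
tree = constructor.nj(distance_matrix)        # neighbour-joining tree

# Transmission chains would then be read off the tree, e.g. clades whose
# pairwise genetic distances fall below a chosen threshold.
print(tree)
```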

  18. Improved tilt sensing in an LGS-based tomographic AO system based on instantaneous PSF estimation

    NASA Astrophysics Data System (ADS)

    Veran, Jean-Pierre

    2013-12-01

    Laser guide star (LGS)-based tomographic AO systems, such as Multi-Conjugate AO (MCAO), Multi-Object AO (MOAO) and Laser Tomography AO (LTAO), require natural guide stars (NGSs) to sense tip-tilt (TT) and possibly other low-order modes, to overcome the LGS tilt-indetermination problem. For example, NFIRAOS, the first-light facility MCAO system for the Thirty Meter Telescope, requires three NGSs in addition to six LGSs: two to measure TT and one to measure TT and defocus. In order to improve sky coverage, these NGSs are selected in a so-called technical field (2 arcmin in diameter for NFIRAOS), which is much larger than the on-axis science field (17x17 arcsec for NFIRAOS) on which the AO correction is optimized. Most of the time, the NGSs are far off-axis and thus poorly corrected by the high-order AO loop, resulting in spots with low contrast and high speckle noise. Accurately finding the position of such spots is difficult, even with advanced methods such as matched filtering or correlation, because these methods rely on the knowledge of an average spot image, which is quite different from the instantaneous spot image, especially in the case of poor correction. This results in poor tilt estimation, which ultimately impacts sky coverage. We propose to improve the estimation of the position of the NGS spots by using, for each frame, a current estimate of the instantaneous spot profile instead of an average profile. This estimate can be readily obtained by tracing wavefront errors in the direction of the NGS through the turbulence volume. The latter is already computed by the tomographic process from the LGS measurements as part of the high-order AO loop. Computing such a wavefront estimate has actually already been proposed for the purpose of driving a deformable mirror (DM) in each NGS WFS, to optically correct the NGS spot, which does lead to improved centroiding accuracy. Our approach, however, is much simpler, because it does not require the complication of extra DMs.

  19. Improving regression-model-based streamwater constituent load estimates derived from serially correlated data

    USGS Publications Warehouse

    Aulenbach, Brent T.

    2013-01-01

    A regression-model-based approach is a commonly used, efficient method for estimating streamwater constituent load when there is a relationship between streamwater constituent concentration and continuous variables such as streamwater discharge, season and time. A subsetting experiment using a 30-year dataset of daily suspended sediment observations from the Mississippi River at Thebes, Illinois, was performed to determine optimal sampling frequency, model calibration period length, and regression model methodology, as well as to determine the effect of serial correlation of model residuals on load estimate precision. Two regression-based methods were used to estimate streamwater loads: the Adjusted Maximum Likelihood Estimator (AMLE) and the composite method, a hybrid load estimation approach. While both methods accurately and precisely estimated loads at the model's calibration-period time scale, precision was progressively worse at shorter reporting periods, from annual to monthly. Serial correlation in model residuals caused observed AMLE precision to be significantly worse than the model-calculated standard errors of prediction. The composite method effectively improved upon AMLE loads for shorter reporting periods, but required a sampling interval of 15 days or shorter when the serial correlations in the observed load residuals were greater than 0.15. AMLE precision was better at shorter sampling intervals and when using the shortest model calibration periods, such that the regression models better fit the temporal changes in the concentration–discharge relationship. The models with the largest errors typically had poor high-flow sampling coverage, resulting in unrepresentative models. Increasing sampling frequency and/or targeted high-flow sampling are more efficient approaches to ensure sufficient sampling, and to avoid poorly performing models, than increasing calibration period length.
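
    A minimal sketch of a rating-curve load regression in this spirit follows, using ordinary least squares with a Duan smearing retransformation in place of the censored-data AMLE estimator and the composite method; the discharge and concentration samples are synthetic stand-ins.

```python
import numpy as np

# Rating-curve load model in the spirit of LOADEST's seven-parameter
# form: ln(load) regressed on centred ln(discharge), its square,
# seasonal sine/cosine terms, and decred decimal time.
def fit_load_model(q, conc, dectime):
    lnq = np.log(q) - np.log(q).mean()          # centred ln discharge
    t = dectime - dectime.mean()                # centred decimal time
    X = np.column_stack([
        np.ones_like(lnq), lnq, lnq**2,
        np.sin(2 * np.pi * dectime), np.cos(2 * np.pi * dectime), t,
    ])
    lnload = np.log(q * conc)
    beta, *_ = np.linalg.lstsq(X, lnload, rcond=None)
    smear = np.mean(np.exp(lnload - X @ beta))  # retransformation bias factor
    return beta, smear

# Hypothetical calibration samples: daily discharge (m3/s), suspended
# sediment concentration (mg/L), and decimal time of each sample.
rng = np.random.default_rng(0)
q = np.exp(rng.normal(8.0, 0.5, 120))
dectime = np.sort(rng.uniform(0.0, 3.0, 120))
conc = 0.02 * q**0.8 * np.exp(rng.normal(0.0, 0.3, 120))

beta, smear = fit_load_model(q, conc, dectime)
print(beta.round(3), round(smear, 3))
```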

  20. Improving the Carbon Dioxide Emission Estimates from the Combustion of Fossil Fuels in California

    SciTech Connect

    de la Rue du Can, Stephane; Wenzel, Tom; Price, Lynn

    2008-08-13

    Central to any study of climate change is the development of an emission inventory that identifies and quantifies the State's primary anthropogenic sources and sinks of greenhouse gas (GHG) emissions. CO2 emissions from fossil fuel combustion accounted for 80 percent of California GHG emissions (CARB, 2007a). Even though these CO2 emissions are well characterized in the existing state inventory, significant sources of uncertainty remain regarding their accuracy. This report evaluates the CO2 emissions accounting based on the California Energy Balance database (CALEB) developed by Lawrence Berkeley National Laboratory (LBNL), in terms of what improvements are needed and where uncertainties lie. The estimated uncertainty for total CO2 emissions ranges between -21 and +37 million metric tons (Mt), or -6 percent and +11 percent of total CO2 emissions. The report also identifies where improvements are needed for the upcoming updates of CALEB. However, it is worth noting that the California Air Resources Board (CARB) GHG inventory did not use CALEB data for all combustion estimates. Therefore the range in uncertainty estimated in this report does not apply to CARB's GHG inventory. As much as possible, additional data sources used by CARB in the development of its GHG inventory are summarized in this report for consideration in future updates to CALEB.

  1. Integrating SAS and GIS software to improve habitat-use estimates from radiotelemetry data

    USGS Publications Warehouse

    Kenow, K.P.; Wright, R.G.; Samuel, M.D.; Rasmussen, P.W.

    2001-01-01

    Radiotelemetry has been used commonly to remotely determine habitat use by a variety of wildlife species. However, habitat misclassification can occur because the true location of a radiomarked animal can only be estimated. Analytical methods that provide improved estimates of habitat use from radiotelemetry location data using a subsampling approach have been proposed previously. We developed software, based on these methods, to conduct improved habitat-use analyses. A Statistical Analysis System (SAS)-executable file generates a random subsample of points from the error distribution of an estimated animal location and formats the output into ARC/INFO-compatible coordinate and attribute files. An associated ARC/INFO Arc Macro Language (AML) creates a coverage of the random points, determines the habitat type at each random point from an existing habitat coverage, sums the number of subsample points by habitat type for each location, and outputs the results in ASCII format. The proportion and precision of habitat types used is calculated from the subsample of points generated for each radiotelemetry location. We illustrate the method and software by analysis of radiotelemetry data for a female wild turkey (Meleagris gallopavo).
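
    The same subsampling logic can be sketched outside SAS and ARC/INFO: draw random points from each location's error distribution, classify them against a habitat grid, and tally proportions. The habitat grid, cell size, and error covariance below are hypothetical.

```python
import numpy as np

# Subsampling sketch: sample points from the telemetry error
# distribution around each estimated location, look up the habitat
# class of each point, and tally habitat-use proportions.
rng = np.random.default_rng(1)

habitat_grid = rng.integers(0, 3, size=(100, 100))   # 3 habitat classes
cell_size = 30.0                                     # metres per cell

def habitat_use(locations, error_cov, n_sub=500):
    counts = np.zeros(3)
    for loc in locations:
        pts = rng.multivariate_normal(loc, error_cov, size=n_sub)
        rows = np.clip((pts[:, 1] // cell_size).astype(int), 0, 99)
        cols = np.clip((pts[:, 0] // cell_size).astype(int), 0, 99)
        counts += np.bincount(habitat_grid[rows, cols], minlength=3)
    return counts / counts.sum()                     # proportions by type

locs = np.array([[1500.0, 1500.0], [900.0, 2100.0]])
print(habitat_use(locs, error_cov=np.diag([120.0**2, 120.0**2])))
```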

  2. Improved global high resolution precipitation estimation using multi-satellite multi-spectral information

    NASA Astrophysics Data System (ADS)

    Behrangi, Ali

    In response to community demands, combining microwave (MW) and infrared (IR) estimates of precipitation has been an active area of research over the past two decades. The anticipated launch of NASA's Global Precipitation Measurement (GPM) mission and the increasing number of spectral bands in recently launched geostationary platforms will provide greater opportunities for investigating new approaches to combine multi-source information towards improved global high-resolution precipitation retrievals. After years of community effort, the limitations of the existing techniques are: (1) drawbacks of IR-only techniques in capturing warm rainfall and screening out no-rain thin cirrus clouds; (2) grid-box-only dependency of many algorithms, with little effort to capture cloud texture at either the local or the cloud-patch scale; (3) the assumption of an indirect relationship between rain rate and cloud-top temperature that forces high-intensity precipitation onto any cold cloud; (4) neglect of the dynamics and evolution of clouds in time; (5) inconsistent combination of MW- and IR-based precipitation estimates, due to the combination strategies and as a result of the shortcomings described above. This PhD dissertation attempts to improve the combination of data from Geostationary Earth Orbit (GEO) and Low-Earth Orbit (LEO) satellites in manners that will allow consistent high-resolution integration of the more accurate precipitation estimates, directly observed through LEO's PMW sensors, into the short-term cloud evolution process, which can be inferred from GEO images. A set of novel approaches is introduced to cope with the listed limitations, and consists of the following four consecutive components: (1) starting with the GEO part, an artificial-neural-network-based method demonstrates that the inclusion of multi-spectral data can ameliorate existing problems associated with IR-only precipitation retrievals; (2) through development of Precipitation Estimation

  3. Improving the Network Scale-Up Estimator: Incorporating Means of Sums, Recursive Back Estimation, and Sampling Weights

    PubMed Central

    Habecker, Patrick; Dombrowski, Kirk; Khan, Bilal

    2015-01-01

    Researchers interested in studying populations that are difficult to reach through traditional survey methods can now draw on a range of methods to access these populations. Yet many of these methods are more expensive and difficult to implement than studies using conventional sampling frames and trusted sampling methods. The network scale-up method (NSUM) provides a middle ground for researchers who wish to estimate the size of a hidden population, but lack the resources to conduct a more specialized hidden population study. Through this method it is possible to generate population estimates for a wide variety of groups that are perhaps unwilling to self-identify as such (for example, users of illegal drugs or other stigmatized populations) via traditional survey tools such as telephone or mail surveys—by asking a representative sample to estimate the number of people they know who are members of such a “hidden” subpopulation. The original estimator is formulated to minimize the weight a single scaling variable can exert upon the estimates. We argue that this introduces hidden and difficult-to-predict biases, and instead propose a series of methodological advances on the traditional scale-up estimation procedure, including a new estimator. Additionally, we formalize the incorporation of sample weights into the network scale-up estimation process, and propose a recursive process of back estimation “trimming” to identify and remove poorly performing predictors from the estimation process. To demonstrate these suggestions we use data from a network scale-up mail survey conducted in Nebraska during 2014. We find that using the new estimator and recursive trimming process provides more accurate estimates, especially when used in conjunction with sampling weights. PMID:26630261
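
    For orientation, the classical (Killworth-style) scale-up estimator and a mean-of-ratios alternative that limits the leverage of any single respondent can each be written in a few lines. The data and weights below are invented, and the alternative is in the spirit of, not identical to, the paper's proposed estimator.

```python
import numpy as np

# y[i] = hidden-population members respondent i knows, d[i] = respondent
# i's estimated network size, w[i] = sampling weight, N = total population.
y = np.array([0, 1, 2, 0, 3, 1], dtype=float)
d = np.array([150, 300, 410, 95, 600, 220], dtype=float)
w = np.array([1.0, 1.2, 0.8, 1.0, 1.1, 0.9])
N = 1_900_000

# Classical (Killworth) estimator: ratio of weighted sums.
nsum_classical = N * np.sum(w * y) / np.sum(w * d)

# Mean-of-ratios alternative: weighted mean of individual ratios, which
# limits the weight any single large network size can exert.
nsum_mean_ratio = N * np.average(y / d, weights=w)

print(f"classical: {nsum_classical:.0f}, mean-of-ratios: {nsum_mean_ratio:.0f}")
```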

  4. Analysis of Scattering Components from Fully Polarimetric SAR Images for Improving Accuracies of Urban Density Estimation

    NASA Astrophysics Data System (ADS)

    Susaki, J.

    2016-06-01

    In this paper, we analyze probability density functions (PDFs) of scatterings derived from fully polarimetric synthetic aperture radar (SAR) images for improving the accuracies of estimated urban density. We have reported a method for estimating urban density that uses an index Tv+c obtained by normalizing the sum of volume and helix scatterings Pv+c. Validation results showed that estimated urban densities have a high correlation with building-to-land ratios (Kajimoto and Susaki, 2013b; Susaki et al., 2014). While the method is found to be effective for estimating urban density, it is not clear why Tv+c is more effective than indices derived from other scatterings, such as surface or double-bounce scatterings, observed in urban areas. In this research, we focus on PDFs of scatterings derived from fully polarimetric SAR images in terms of scattering normalization. First, we introduce a theoretical PDF that assumes that image pixels have scatterers showing random backscattering. We then generate PDFs of scatterings derived from observations of concrete blocks with different orientation angles, and from a satellite-based fully polarimetric SAR image. The analysis of the PDFs and the derived statistics reveals that the curves of the PDFs of Pv+c are the most similar to the normal distribution among all the scatterings derived from fully polarimetric SAR images. It was found that Tv+c works most effectively because of its similarity to the normal distribution.

  5. Improvement of sub-pixel global motion estimation in UAV image stabilization

    NASA Astrophysics Data System (ADS)

    Li, Yingjuan; Ji, Ming; He, Junfeng; Zhen, Kang; Yang, Yizhou; Chen, Ying

    2016-01-01

    Global motion estimation between frames is very important in UAV (unmanned aerial vehicle) image stabilization systems. A fast sub-pixel algorithm based on phase correlation and image down-sampling is proposed. First, the two frames are down-sampled to reduce the amount of data to be processed. Then, phase correlation is used to estimate the global motion at integer-pixel precision. Once this is computed, the overlapping area of the two frames is selected and zero-padded, and phase correlation is applied again to estimate the global motion at sub-pixel precision. Finally, the integer-pixel and sub-pixel results are combined in a weighted sum to give the sub-pixel global motion displacement between the two frames. Experimental results show that the proposed algorithm is not only robust to noise, illumination changes and partial occlusion, but also significantly improves the accuracy and efficiency of motion estimation.
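
    The integer-pixel stage rests on a standard identity: the inverse FFT of the normalized cross-power spectrum of two frames peaks at their relative translation. A minimal sketch follows; the down-sampling, sub-pixel refinement, and weighted-combination stages described above are omitted.

```python
import numpy as np

# Integer-pixel global motion by phase correlation: the correlation
# surface peaks at the shift of frame_b relative to frame_a.
def phase_correlation(frame_a, frame_b):
    fa, fb = np.fft.fft2(frame_a), np.fft.fft2(frame_b)
    cross = np.conj(fa) * fb
    cross /= np.abs(cross) + 1e-12            # keep phase only
    corr = np.abs(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > frame_a.shape[0] // 2:            # map wrap-around peaks
        dy -= frame_a.shape[0]                # to negative shifts
    if dx > frame_a.shape[1] // 2:
        dx -= frame_a.shape[1]
    return dy, dx

rng = np.random.default_rng(2)
a = rng.normal(size=(128, 128))
b = np.roll(a, shift=(5, -9), axis=(0, 1))    # known displacement
print(phase_correlation(a, b))                # -> (5, -9)
```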

  6. Improving the precision of lake ecosystem metabolism estimates by identifying predictors of model uncertainty

    USGS Publications Warehouse

    Rose, Kevin C.; Winslow, Luke A.; Read, Jordan S.; Read, Emily K.; Solomon, Christopher T.; Adrian, Rita; Hanson, Paul C.

    2014-01-01

    Diel changes in dissolved oxygen are often used to estimate gross primary production (GPP) and ecosystem respiration (ER) in aquatic ecosystems. Despite the widespread use of this approach to understand ecosystem metabolism, we are only beginning to understand the degree and underlying causes of uncertainty for metabolism model parameter estimates. Here, we present a novel approach to improve the precision and accuracy of ecosystem metabolism estimates by identifying physical metrics that indicate when metabolism estimates are highly uncertain. Using datasets from seventeen instrumented GLEON (Global Lake Ecological Observatory Network) lakes, we discovered that many physical characteristics correlated with uncertainty, including PAR (photosynthetically active radiation, 400-700 nm), daily variance in Schmidt stability, and wind speed. Low PAR was a consistent predictor of high variance in GPP model parameters, but also corresponded with low ER model parameter variance. We identified a threshold (30% of clear sky PAR) below which GPP parameter variance increased rapidly and was significantly greater in nearly all lakes compared with variance on days with PAR levels above this threshold. The relationship between daily variance in Schmidt stability and GPP model parameter variance depended on trophic status, whereas daily variance in Schmidt stability was consistently positively related to ER model parameter variance. Wind speeds in the range of ~0.8-3 m s–1 were consistent predictors of high variance for both GPP and ER model parameters, with greater uncertainty in eutrophic lakes. Our findings can be used to reduce ecosystem metabolism model parameter uncertainty and identify potential sources of that uncertainty.

  7. Improvements in near-surface geophysical applications for hydrogeological parameter estimation

    NASA Astrophysics Data System (ADS)

    Addison, Adrian Demond

    One application of near-surface geophysical techniques is hydrogeological parameter estimation. Estimated hydrogeological parameters such as volumetric water content, porosity, and hydraulic conductivity are useful in predicting groundwater flow. Therefore, any improvements in the field acquisition and data processing of the geophysical data will provide better results in estimating these parameters. This research examines the difficulties associated with processing and attribute analyses of shallow seismic P-wave reflection data, the application of empirical mode decomposition (EMD) as a processing tool for ground-penetrating radar (GPR), and the use of GPR as a tool in the assessment of bank filtration. Near-surface seismic reflection data are difficult to process because of the lack of reflections in the shot gathers; however, this research demonstrated that the application of certain steps, such as F-k filtering and velocity analysis, can achieve the desired result: a more robust geologic model. The EMD technique was applied to remove the low-frequency 'wow' noise from GPR data, providing significant stability in the calculation of the dielectric constants used to estimate hydrogeological parameters. GPR techniques are widely known and diverse, but one rather different application of GPR was to assess the suitability of bank filtration at a site in South Carolina. Finally, a multi-attribute analysis approach, a rather new application for near-surface seismic data, was used in predicting porosity from well logs and seismic data.

  8. Improved estimates of the range of errors on photomasks using measured values of skewness and kurtosis

    NASA Astrophysics Data System (ADS)

    Hamaker, Henry Chris

    1995-12-01

    Statistical process control (SPC) techniques often use six times the standard deviation sigma to estimate the range of errors within a process. Two assumptions are inherent in this choice of metric for the range: (1) the normal distribution adequately describes the errors, and (2) the fraction of errors falling within plus or minus 3 sigma, about 99.73%, is sufficiently large that we may consider the fraction occurring outside this range to be negligible. In state-of-the-art photomasks, however, the assumption of normality frequently breaks down, and consequently plus or minus 3 sigma is not a good estimate of the range of errors. In this study, we show that improved estimates of the effective maximum error Em, which is defined as the value for which 99.73% of all errors fall within plus or minus Em of the mean mu, may be obtained by quantifying the deviation from normality of the error distributions using the skewness and kurtosis of the error sample. Data are presented indicating that in laser reticle-writing tools, Em is less than or equal to 3 sigma. We also extend this technique for estimating the range of errors to specifications that are usually described by mu plus 3 sigma. The implications for SPC are examined.
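
    The abstract does not give the correction formula. One standard way to fold skewness and kurtosis into a quantile estimate is a Cornish-Fisher expansion of the 0.135%/99.865% quantiles, sketched below on simulated skewed errors; the paper's exact correction may differ.

```python
import numpy as np
from scipy import stats

# Adjust the +/-3-sigma range for non-normality using the sample
# skewness (g1) and excess kurtosis (g2) via a Cornish-Fisher expansion.
def effective_max_error(errors):
    g1 = stats.skew(errors)
    g2 = stats.kurtosis(errors)          # excess kurtosis (normal -> 0)
    sigma = np.std(errors, ddof=1)

    def cornish_fisher(z):
        return (z + (z**2 - 1) * g1 / 6 + (z**3 - 3*z) * g2 / 24
                - (2*z**3 - 5*z) * g1**2 / 36)

    z = stats.norm.ppf(0.99865)          # ~3 for a normal distribution
    w_hi, w_lo = cornish_fisher(z), cornish_fisher(-z)
    return sigma * max(abs(w_hi), abs(w_lo))

# Simulated skewed error population (illustrative, not photomask data)
errors = stats.skewnorm.rvs(a=4, size=5000, random_state=0)
print(effective_max_error(errors), 3 * np.std(errors, ddof=1))
```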

  9. Improving waterfowl production estimates: results of a test in the prairie pothole region

    USGS Publications Warehouse

    Arnold, P.M.; Cowardin, L.M.

    1985-01-01

    The U.S. Fish and Wildlife Service, in an effort to improve and standardize methods for estimating waterfowl production, tested a new technique in the four-county Arrowwood Wetland Management District (WMD) for three years (1982-1984). On 14 randomly selected 10.36-km2 plots, upland and wetland habitat was mapped, classified, and digitized. Waterfowl breeding pairs were counted twice each year and the proportion of wetland basins containing water was determined. Pair numbers and habitat conditions were entered into a computer model developed by Northern Prairie Wildlife Research Center. That model estimates production on small federally owned wildlife tracts, federal wetland easements, and private land. Results indicate that production estimates were most accurate for mallards (Anas platyrhynchos), the species for which the computer model and data base were originally designed. Predictions for the pintail (Anas acuta), gadwall (A. strepera), blue-winged teal (A. discors), and northern shoveler (A. clypeata) were believed to be less accurate. Modeling breeding-period dynamics of a waterfowl species and making credible production estimates for a geographic area are possible if the data used in the model are adequate. The process of modeling the breeding period of a species aids in locating areas of insufficient biological knowledge. This process will help direct future research efforts and permit more efficient gathering of field data.

  10. Ensuring and Improving Information Quality for Earth Science Data and Products: Role of the ESIP Information Quality Cluster

    NASA Technical Reports Server (NTRS)

    Ramapriyan, Hampapuram; Peng, Ge; Moroni, David; Shie, Chung-Lin

    2016-01-01

    Quality of products is always of concern to users regardless of the type of product. The focus of this paper is on the quality of Earth science data products. There are four different aspects of quality - scientific, product, stewardship and service. All these aspects taken together constitute Information Quality. With increasing requirements for ensuring and improving information quality, there has been considerable work related to information quality during the last several years. Given this rich background of prior work, the Information Quality Cluster (IQC), established within the Federation of Earth Science Information Partners (ESIP), has been active with membership from multiple organizations. Its objectives and activities, aimed at ensuring and improving information quality for Earth science data and products, are discussed briefly.

  11. Ensuring and Improving Information Quality for Earth Science Data and Products: Role of the ESIP Information Quality Cluster

    NASA Technical Reports Server (NTRS)

    Ramapriyan, H. K. (Rama); Peng, Ge; Moroni, David; Shie, Chung-Lin

    2016-01-01

    Quality of products is always of concern to users regardless of the type of product. The focus of this paper is on the quality of Earth science data products. There are four different aspects of quality - scientific, product, stewardship and service. All these aspects taken together constitute Information Quality. With increasing requirements for ensuring and improving information quality, there has been considerable work related to information quality during the last several years. Given this rich background of prior work, the Information Quality Cluster (IQC), established within the Federation of Earth Science Information Partners (ESIP), has been active with membership from multiple organizations. Its objectives and activities, aimed at ensuring and improving information quality for Earth science data and products, are discussed briefly.

  12. Improved Estimation of Ultrasound Thermal Strain Using Pulse Inversion Harmonic Imaging.

    PubMed

    Ding, Xuan; Nguyen, Man M; James, Isaac B; Marra, Kacey G; Rubin, J Peter; Leers, Steven A; Kim, Kang

    2016-05-01

    Thermal (temporal) strain imaging (TSI) is being developed to detect the lipid-rich core of atherosclerotic plaques and the presence of fatty liver disease. However, the effects of ultrasonic clutter on TSI have not been considered. In this study, we evaluated whether pulse inversion harmonic imaging (PIHI) could be used to improve estimates of thermal (temporal) strain. Using mixed castor oil-gelatin phantoms of different concentrations and artificially introduced clutter, we found that PIHI improved the signal-to-noise ratio of TSI by an average of 213% or 52.1% relative to 3.3- and 6.6-MHz imaging, respectively. In a phantom constructed using human liposuction fat in the presence of clutter, the contrast-to-noise ratio was degraded by 35.1% for PIHI compared with 62.4% and 43.7% for 3.3- and 6.6-MHz imaging, respectively. These findings were further validated using an ex vivo carotid endarterectomy sample. PIHI can be used to improve estimates of thermal (temporal) strain in the presence of clutter. PMID:26948260

  13. Do multiple temperature measurements improve temperature-based death time estimation? The information degradation inequality.

    PubMed

    Hubig, M; Muggenthaler, H; Schenkl, S; Mall, G

    2016-09-01

    The accuracy of the input parameter values limits the accuracy of the output values in forensic temperature-based death time estimation (TDE), as in many scientific methods. A standard strategy to overcome this problem is to perform multiple measurements of the input parameter values, but such approaches are subject to noise accumulation and stochastic dependencies. A quantitative mathematical analysis of the advantages as well as the disadvantages of multiple measurements approaches (MMAs) was performed. The results are: a general stochastic model of MMAs; the information degradation inequality, quantifying the gains and losses of MMAs; and example calculations of the information degradation inequality for two MMAs relevant to TDE, namely multiple successive rectal temperature measurements and multiple synchronous body-layer temperature measurements. Neither multiple successive rectal temperature measurements nor multiple synchronous body-layer temperature measurements seem to significantly improve death time estimation. MMAs are superior to the single-measurement approach only in the very early body cooling phase. PMID:26872468

  14. Approaches to radar reflectivity bias correction to improve rainfall estimation in Korea

    NASA Astrophysics Data System (ADS)

    You, Cheol-Hwan; Kang, Mi-Young; Lee, Dong-In; Lee, Jung-Tae

    2016-05-01

    Three methods for determining the reflectivity bias of single-polarization radar using dual-polarization radar reflectivity and disdrometer data (i.e., the equidistance line, overlapping area, and disdrometer methods) are proposed and evaluated for two low-pressure rainfall events that occurred over the Korean Peninsula on 25 August 2014 and 8 September 2012. Single-polarization radar reflectivity was underestimated by more than 12 and 7 dB in the two rain events, respectively. All methods improved the accuracy of rainfall estimation, except for one case where drop size distributions were not observed because the precipitation system did not pass through the disdrometer location. The use of these bias correction methods reduced the RMSE by as much as 50%. Overall, the most accurate rainfall estimates were obtained using the overlapping area method to correct radar reflectivity.
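
    The disdrometer method can be sketched from first principles: reflectivity follows from the sixth moment of the measured drop size distribution, and the mean difference from the collocated single-polarization radar, in dB, gives the bias to add back. The spectra and radar values below are illustrative only.

```python
import numpy as np

# Reflectivity from a drop size distribution: Z = sum N(D) D^6 dD
# (mm^6 m^-3), converted to dBZ.
def dsd_reflectivity_dbz(n_d, diam_mm, bin_width_mm):
    z_lin = np.sum(n_d * diam_mm**6 * bin_width_mm)
    return 10.0 * np.log10(z_lin)

diam = np.arange(0.25, 6.0, 0.25)                 # drop diameters (mm)
spectra = [8000 * np.exp(-1.8 * diam),            # hypothetical DSDs
           6000 * np.exp(-2.4 * diam)]
radar_dbz = [38.5, 31.0]                          # collocated single-pol Z

bias_samples = [dsd_reflectivity_dbz(nd, diam, 0.25) - z
                for nd, z in zip(spectra, radar_dbz)]
bias_db = np.mean(bias_samples)                   # add to radar field
print(f"radar underestimates by {bias_db:.1f} dB")
```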

  15. MIXED MODEL AND ESTIMATING EQUATION APPROACHES FOR ZERO INFLATION IN CLUSTERED BINARY RESPONSE DATA WITH APPLICATION TO A DATING VIOLENCE STUDY1

    PubMed Central

    Fulton, Kara A.; Liu, Danping; Haynie, Denise L.; Albert, Paul S.

    2016-01-01

    The NEXT Generation Health study investigates dating violence among adolescents using a survey questionnaire. Each student is asked to affirm or deny multiple instances of violence in his/her dating relationship. There is, however, evidence suggesting that students not in a relationship responded to the survey, resulting in excessive zeros in the responses. This paper proposes likelihood-based and estimating equation approaches to analyze the zero-inflated clustered binary response data. We adopt a mixed model method to account for the cluster effect, and the model parameters are estimated using a maximum-likelihood (ML) approach that requires a Gauss–Hermite quadrature (GHQ) approximation for implementation. Since an incorrect assumption on the random effects distribution may bias the results, we construct generalized estimating equations (GEE) that do not require the correct specification of within-cluster correlation. In a series of simulation studies, we examine the performance of the ML and GEE methods in terms of their bias, efficiency and robustness. We illustrate the importance of properly accounting for this zero inflation by reanalyzing the NEXT data, where this issue has previously been ignored. PMID:26937263
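
    The GEE building block of such an analysis is available in statsmodels. The sketch below fits clustered binary responses with an exchangeable working correlation on simulated stand-in data; it does not implement the paper's zero-inflation machinery, only the standard GEE step it builds on.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated clustered binary data with a zero-inflating indicator
# (students not in a relationship always answer "no").
rng = np.random.default_rng(3)
n_clusters, m = 60, 8
cluster = np.repeat(np.arange(n_clusters), m)
x = rng.normal(size=n_clusters * m)
u = np.repeat(rng.normal(scale=0.7, size=n_clusters), m)   # cluster effect
p = 1.0 / (1.0 + np.exp(-(-0.5 + 0.8 * x + u)))
in_relationship = rng.random(n_clusters * m) < 0.7         # zero inflation
y = np.where(in_relationship, rng.random(n_clusters * m) < p, False)

df = pd.DataFrame({"y": y.astype(int), "x": x, "cluster": cluster})
model = sm.GEE.from_formula(
    "y ~ x", groups="cluster", data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),   # no random-effects assumption
)
print(model.fit().summary())
```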

  16. Estimating the impacts of federal efforts to improve energy efficiency: The case of buildings

    SciTech Connect

    LaMontagne, J; Jones, R; Nicholls, A; Shankle, S

    1994-09-01

    The US Department of Energy's Office of Energy Efficiency and Renewable Energy (EE) has for more than a decade focused its efforts on research to develop new technologies for improving the efficiency of energy use and increasing the role of renewable energy; success has usually been measured in terms of energy saved or displaced. Estimates of future energy savings remain an important factor in program planning and prioritization. A variety of internal and external factors are now radically changing the planning process, and in turn the composition and thrust of the EE program. The Energy Policy Act of 1992, the Framework Convention on Climate Change (and the Administration's Climate Change Action Plan), and concerns for the future of the economy (especially employment and international competitiveness) are increasing emphasis on technology deployment and near-term results. The Reinventing Government Initiative, the Government Performance and Results Act, and the Executive Order on Environmental Justice are all forcing Federal programs to demonstrate that they are producing desired results in a cost-effective manner. The application of Total Quality Management principles has increased the scope and importance of producing quantified measures of benefit. EE has established a process for estimating the benefits of DOE's energy efficiency and renewable energy programs called 'Quality Metrics' (QM). The 'metrics' are: energy, employment, equity, environment, risk, economics. This paper describes the approach taken by EE's Office of Building Technologies to prepare estimates of program benefits in terms of these metrics, presents the estimates, discusses their implications, and explores possible improvements to the QM process as it is currently configured.

  17. Estimating the impacts of federal efforts to improve energy efficiency: The case of buildings

    SciTech Connect

    Nicolls, A.K.; Shankle, S.A.; LaMontagne, J.; Jones, R.E.

    1994-11-01

    The US Department of Energy's Office of Energy Efficiency and Renewable Energy [EE] has for more than a decade focused its efforts on research to develop new technologies for improving the efficiency of energy use and increasing the role of renewable energy; success has usually been measured in terms of energy saved or displaced. Estimates of future energy savings remain an important factor in program planning and prioritization. A variety of internal and external factors are now radically changing the planning process, and in turn the composition and thrust of the EE program. The Energy Policy Act of 1992, the Framework Convention on Climate Change (and the Administration's Climate Change Action Plan), and concerns for the future of the economy (especially employment and international competitiveness) are increasing emphasis on technology deployment and near-term results. The Reinventing Government Initiative, the Government Performance and Results Act, and the Executive Order on Environmental Justice are all forcing Federal programs to demonstrate that they are producing desired results in a cost-effective manner. The application of Total Quality Management principles has increased the scope and importance of producing quantified measures of benefit. EE has established a process for estimating the benefits of DOE's energy efficiency and renewable energy programs called 'Quality Metrics' (QM). The 'metrics' are: Energy; Environment; Employment; Risk; Equity; Economics. This paper describes the approach taken by EE's Office of Building Technologies to prepare estimates of program benefits in terms of these metrics, presents the estimates, discusses their implications, and explores possible improvements to the QM process as it is currently configured.

  18. Empirical Methods for Detecting Regional Trends and Other Spatial Expressions in Antrim Shale Gas Productivity, with Implications for Improving Resource Projections Using Local Nonparametric Estimation Techniques

    USGS Publications Warehouse

    Coburn, T.C.; Freeman, P.A.; Attanasi, E.D.

    2012-01-01

    The primary objectives of this research were to (1) investigate empirical methods for establishing regional trends in unconventional gas resources as exhibited by historical production data and (2) determine whether or not incorporating additional knowledge of a regional trend in a suite of previously established local nonparametric resource prediction algorithms influences assessment results. Three different trend detection methods were applied to publicly available production data (well EUR aggregated to 80-acre cells) from the Devonian Antrim Shale gas play in the Michigan Basin. This effort led to the identification of a southeast-northwest trend in cell EUR values across the play that, in a very general sense, conforms to the primary fracture and structural orientations of the province. However, including this trend in the resource prediction algorithms did not lead to improved results. Further analysis indicated the existence of clustering among cell EUR values that likely dampens the contribution of the regional trend. The reason for the clustering, a somewhat unexpected result, is not completely understood, although the geological literature provides some possible explanations. With appropriate data, a better understanding of this clustering phenomenon may lead to important information about the factors and their interactions that control Antrim Shale gas production, which may, in turn, help establish a more general protocol for better estimating resources in this and other shale gas plays. © 2011 International Association for Mathematical Geology (outside the USA).

  19. Toward Understanding Galaxy Clusters and Their Constituents: Projection Effects on Velocity Dispersion, X-Ray Emission, Mass Estimates, Gas Fraction, and Substructure

    NASA Astrophysics Data System (ADS)

    Cen, Renyue

    1997-08-01

    We study the projection effects on various observables of clusters of galaxies at redshift near zero, including cluster richness, velocity dispersion, X-ray luminosity, three total mass estimates (velocity-based, temperature-based, and gravitational-lensing derived), gas fraction and substructure, utilizing a large simulation of a realistic cosmological model (a cold dark matter model with the following parameters: H0 = 65 km s^-1 Mpc^-1, Ω0 = 0.4, Λ0 = 0.6, σ8 = 0.79). Unlike previous studies focusing on the Abell clusters, we conservatively assume that both optical and X-ray observations can determine the source (galaxy or hot X-ray gas) positions along the line of sight as well as in the sky plane accurately; hence, we only include sources inside the velocity space defined by the cluster galaxies (filtered through the pessimistic 3σ clipping algorithm) as possible contamination sources. Projection effects are found to be important for some quantities but insignificant for others. We show that, on average, the gas to total mass ratio in clusters appears to be 30%-40% higher than its corresponding global ratio. Independent of its mean value, the broadness of the observed distribution of the gas to total mass ratio is adequately accounted for by projection effects, alleviating (though not eliminating) the need to invoke other nongravitational physical processes. While the moderate boost in the ratio narrows the gap, it is still not quite sufficient to reconcile the standard nucleosynthesis value of Ωb = 0.0125(H0/100)^-2 (Walker et al. 1991) and Ω0 = 1 with the observed gas to mass ratio value in clusters of galaxies, 0.05(H0/100)^-3/2, for any plausible value of H0. However, it is worth noting that real observations of X-ray clusters, especially X-ray imaging observations, may be subject to more projection contaminations than we allow for in our analysis. In contrast, the X-ray luminosity of a cluster within a radius <=1.0 h^-1 Mpc is hardly altered by projection.

  20. Improving satellite quantitative precipitation estimates by incorporating deep convective cloud optical depth

    NASA Astrophysics Data System (ADS)

    Stenz, Ronald D.

    As Deep Convective Systems (DCSs) are responsible for most severe weather events, increased understanding of these systems along with more accurate satellite precipitation estimates will improve NWS (National Weather Service) warnings and monitoring of hazardous weather conditions. A DCS can be classified into convective core (CC) regions (heavy rain), stratiform (SR) regions (moderate-light rain), and anvil (AC) regions (no rain). These regions share similar infrared (IR) brightness temperatures (BT), which can create large errors for many existing rain detection algorithms. This study assesses the performance of the National Mosaic and Multi-sensor Quantitative Precipitation Estimation System (NMQ) Q2, and a simplified version of the GOES-R Rainfall Rate algorithm (also known as the Self-Calibrating Multivariate Precipitation Retrieval, or SCaMPR), over the state of Oklahoma (OK), using OK MESONET observations as ground truth. While the average annual Q2 precipitation estimates were about 35% higher than MESONET observations, there were very strong correlations between these two data sets at multiple temporal and spatial scales. Additionally, the Q2 estimated precipitation distributions over the CC, SR, and AC regions of DCSs strongly resembled the MESONET observed ones, indicating that Q2 can accurately capture the precipitation characteristics of DCSs although it has a wet bias. SCaMPR retrievals were typically three to four times higher than the collocated MESONET observations, with relatively weak correlations during a year of comparisons in 2012. Overestimates from SCaMPR retrievals that produced a high false alarm rate were primarily caused by precipitation retrievals from the anvil regions of DCSs when collocated MESONET stations recorded no precipitation. A modified SCaMPR retrieval algorithm, employing both cloud optical depth and IR temperature, has the potential to make significant improvements to reduce the SCaMPR false alarm rate of retrieved

  1. Estimation of Crop Gross Primary Production (GPP): II. Do Scaled (MODIS) Vegetation Indices Improve Performance?

    NASA Technical Reports Server (NTRS)

    Zhang, Qingyuan; Cheng, Yen-Ben; Lyapustin, Alexei I.; Wang, Yujie; Zhang, Xiaoyang; Suyker, Andrew; Verma, Shashi; Shuai, Yanmin; Middleton, Elizabeth M.

    2015-01-01

    Satellite remote sensing estimates of Gross Primary Production (GPP) have routinely been made using spectral Vegetation Indices (VIs) over the past two decades. The Normalized Difference Vegetation Index (NDVI), the Enhanced Vegetation Index (EVI), the green band Wide Dynamic Range Vegetation Index (WDRVIgreen), and the green band Chlorophyll Index (CIgreen) have been employed to estimate GPP under the assumption that GPP is proportional to the product of VI and photosynthetically active radiation (PAR) (where VI is one of the four VIs: NDVI, EVI, WDRVIgreen, or CIgreen). However, the empirical regressions between VI*PAR and GPP measured locally at flux towers do not pass through the origin (i.e., the zero X-Y point of the regressions), and are therefore somewhat difficult to interpret and apply. This study investigates (1) the scaling factors and offsets (i.e., regression slopes and intercepts) between the fraction of PAR absorbed by chlorophyll of a canopy (fAPARchl) and the VIs, and (2) whether the scaled VIs developed in (1) can eliminate this deficiency and improve the accuracy of GPP estimates. Three AmeriFlux maize and soybean fields were selected for this study, two of which are irrigated and one is rainfed. The four VIs and fAPARchl of the fields were computed with MODerate resolution Imaging Spectroradiometer (MODIS) satellite images. The GPP estimation performance for the scaled VIs was compared to results obtained with the original VIs and evaluated with standard statistics: the coefficient of determination (R2), the root mean square error (RMSE), and the coefficient of variation (CV). Overall, the scaled EVI obtained the best performance. The performance of the scaled NDVI, EVI and WDRVIgreen was improved across sites, crop types and soil/background wetness conditions. The scaled CIgreen did not improve results, compared to the original CIgreen. The scaled green band indices (WDRVIgreen, CIgreen) did not exhibit superior performance to either the
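
    The scaling step amounts to a linear regression of fAPARchl on a VI, after which GPP can be estimated as the product of light-use efficiency, the scaled VI, and PAR. All values in the sketch below, including the light-use efficiency, are invented for illustration.

```python
import numpy as np

# Regress fAPARchl on a vegetation index to obtain the scaling factor
# and offset, then estimate GPP = LUE * fAPARchl_scaled * PAR.
vi = np.array([0.35, 0.52, 0.61, 0.70, 0.74])        # e.g. EVI samples
fapar_chl = np.array([0.18, 0.38, 0.49, 0.60, 0.65]) # matched retrievals
par = np.array([48.0, 55.0, 60.0, 58.0, 52.0])       # mol m-2 d-1

slope, offset = np.polyfit(vi, fapar_chl, 1)         # scaling factor, offset
vi_scaled = slope * vi + offset                      # approximates fAPARchl

lue = 0.95                                           # gC/mol APAR (assumed)
gpp = lue * vi_scaled * par                          # gC m-2 d-1
print(np.round(gpp, 1))
```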

  2. Improvement of the quality of effective dose estimation by interlaboratory comparisons

    NASA Astrophysics Data System (ADS)

    Katarzyna, Ciszewska; Malgorzata, Dymecka; Tomasz, Pliszczynski; Jakub, Osko

    2010-01-01

    The Radiation Protection Measurements Laboratory (RPLM) of the Institute of Atomic Energy POLATOM determines radionuclides in human urine to estimate the effective dose. Being an accredited laboratory, RPLM participated in interlaboratory comparisons in order to assure the quality of its services concerning the monitoring of internal contamination. The purpose of the study was to examine the effect of interlaboratory comparisons on the accuracy of the provided measurements. The results for tritium (3H) and strontium (90Sr) determination obtained within the radiotoxicological intercomparison exercises organized by PROCORAD in 2005-2010 were analyzed, and the methods used by the laboratory were verified and improved.

  3. An Improved Performance Frequency Estimation Algorithm for Passive Wireless SAW Resonant Sensors

    PubMed Central

    Liu, Boquan; Zhang, Chenrui; Ji, Xiaojun; Chen, Jing; Han, Tao

    2014-01-01

    Passive wireless surface acoustic wave (SAW) resonant sensors are suitable for applications in harsh environments. The traditional SAW resonant sensor system, however, requires a Fourier transform (FT), whose limited resolution restricts measurement accuracy. In order to improve the accuracy and resolution of the measurement, a singular value decomposition (SVD)-based frequency estimation algorithm is applied to the wireless SAW resonant sensor response, which is a combination of an undamped and a damped single-tone sinusoid at the same frequency. Compared with the FT algorithm, the improved accuracy and resolution of the method are validated in a self-developed wireless SAW resonant sensor system. PMID:25429410
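
    The abstract does not spell out the algorithm. A standard SVD-subspace route (ESPRIT-style shift invariance on a Hankel matrix) also handles damped single tones and is not tied to the DFT bin spacing; a sketch on a simulated damped tone follows, with an illustrative sampling rate and resonance frequency.

```python
import numpy as np

# SVD-subspace frequency estimation on a Hankel data matrix.  This is a
# generic ESPRIT-style sketch, not the authors' exact algorithm.
def estimate_frequency(x, fs, model_order=2):
    rows = len(x) // 2
    hankel = np.lib.stride_tricks.sliding_window_view(x, rows).T
    u, _, _ = np.linalg.svd(hankel, full_matrices=False)
    s = u[:, :model_order]                    # signal subspace
    # shift invariance: s[1:] ~ s[:-1] @ psi; the poles are eig(psi)
    psi, *_ = np.linalg.lstsq(s[:-1], s[1:], rcond=None)
    poles = np.linalg.eigvals(psi)
    freqs = np.angle(poles) * fs / (2 * np.pi)
    return freqs.max()                        # positive-frequency pole

fs = 1.0e6                                    # 1 MHz sampling (illustrative)
t = np.arange(512) / fs
x = np.exp(-4.0e3 * t) * np.sin(2 * np.pi * 433.92e3 * t)   # damped tone
x += 0.01 * np.random.default_rng(4).normal(size=t.size)
print(estimate_frequency(x, fs))              # ~ 433920 Hz
```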

  4. Impact of regression methods on improved effects of soil structure on soil water retention estimates

    NASA Astrophysics Data System (ADS)

    Nguyen, Phuong Minh; De Pue, Jan; Le, Khoa Van; Cornelis, Wim

    2015-06-01

    Increasing the accuracy of pedotransfer functions (PTFs), an indirect method for predicting soil features that are not readily available, such as soil water retention characteristics (SWRC), is of crucial importance for large-scale agro-hydrological modeling. Adding significant predictors (i.e., soil structure) and implementing more flexible regression algorithms are among the main strategies for PTF improvement. The aim of this study was to investigate whether the improved effect of categorical soil structure information on estimating soil-water content at various matric potentials, which has been reported in the literature, could be consistently captured by regression techniques other than the usually applied linear regression. Two data mining techniques, i.e., Support Vector Machines (SVM) and k-Nearest Neighbors (kNN), which have recently been introduced as promising tools for PTF development, were utilized to test whether the incorporation of soil structure would improve PTF accuracy in a context of rather limited training data. The results show that incorporating descriptive soil structure information, i.e., massive, structured and structureless, as a grouping criterion can improve the accuracy of PTFs derived by the SVM approach in the matric potential range of -6 to -33 kPa (average RMSE decreased by up to 0.005 m3 m-3 after grouping, depending on matric potential). The improvement was primarily attributed to the outperformance of SVM-PTFs calibrated on structureless soils. No improvement was obtained with the kNN technique, at least not in our study, in which the data set became limited in size after grouping. Since the improved effect of incorporating qualitative soil structure information depends on the regression technique, selecting a proper technique will help to maximize the combined influence of flexible regression algorithms and soil structure information on PTF accuracy.
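
    The grouping strategy can be sketched with scikit-learn by fitting one support vector regression per soil-structure class. The predictor columns, target, and all data below are fabricated stand-ins, not the study's dataset.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# One SVR pedotransfer function per structure class (massive, structured,
# structureless), each predicting water content at -33 kPa from texture,
# bulk density, and organic carbon.
rng = np.random.default_rng(5)
X = rng.uniform([5, 5, 5, 1.1, 0.2], [80, 70, 60, 1.7, 4.0], size=(120, 5))
theta_33 = 0.45 - 0.003 * X[:, 0] + 0.004 * X[:, 2] + rng.normal(0, 0.02, 120)
structure = rng.integers(0, 3, size=120)      # structure class per sample

models = {}
for grp in np.unique(structure):
    mask = structure == grp
    models[grp] = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.01))
    models[grp].fit(X[mask], theta_33[mask])

new_soil = np.array([[35, 40, 25, 1.45, 1.2]])   # sand, silt, clay, BD, OC
print(models[1].predict(new_soil))               # theta at -33 kPa, m3 m-3
```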

  5. Improving high-resolution quantitative precipitation estimation via fusion of multiple radar-based precipitation products

    NASA Astrophysics Data System (ADS)

    Rafieeinasab, Arezoo; Norouzi, Amir; Seo, Dong-Jun; Nelson, Brian

    2015-12-01

    For monitoring and prediction of water-related hazards in urban areas, such as flash flooding, high-resolution hydrologic and hydraulic modeling is necessary. Because of the large sensitivity and scale dependence of rainfall-runoff models to errors in quantitative precipitation estimates (QPE), it is very important that the accuracy of QPE be improved in high-resolution hydrologic modeling to the greatest extent possible. With the availability of multiple radar-based precipitation products in many areas, one may now consider fusing them to produce more accurate high-resolution QPE for a wide spectrum of applications. In this work, we formulate and comparatively evaluate four relatively simple procedures for such fusion based on Fisher estimation and its conditional bias-penalized variant: Direct Estimation (DE), Bias Correction (BC), Reduced-Dimension Bias Correction (RBC) and Simple Estimation (SE). They are applied to fuse the Multisensor Precipitation Estimator (MPE) and radar-only Next Generation QPE (Q2) products at the 15-min 1-km resolution (Experiment 1), and the MPE and Collaborative Adaptive Sensing of the Atmosphere (CASA) QPE products at the 15-min 500-m resolution (Experiment 2). The resulting fused estimates are evaluated using the 15-min rain gauge observations from the City of Grand Prairie in the Dallas-Fort Worth Metroplex (DFW) in north Texas. The main criterion used for evaluation is that the fused QPE improves over the ingredient QPEs at their native spatial resolutions, and that, at the higher resolution, the fused QPE improves not only over the ingredient higher-resolution QPE but also over the ingredient lower-resolution QPE trivially disaggregated using the ingredient high-resolution QPE. All four procedures assume that the ingredient QPEs are unbiased, which is not likely to hold true in reality even if real-time bias correction is in operation. To test robustness under more realistic conditions, the fusion procedures were evaluated with and
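
    At its simplest, Fisher estimation of a common truth from two unbiased, independent QPEs reduces to inverse-variance weighting, the idea behind the Direct Estimation variant. A toy sketch on synthetic fields (the error variances and the rain-field model are invented):

```python
import numpy as np

# Inverse-variance fusion of two collocated QPE fields assumed unbiased
# with known, spatially constant error variances -- a greatly simplified
# stand-in for the paper's fusion procedures.
def fuse_qpe(qpe_a, var_a, qpe_b, var_b):
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)
    return w_a * qpe_a + (1.0 - w_a) * qpe_b

rng = np.random.default_rng(6)
truth = rng.gamma(2.0, 2.0, size=(100, 100))          # synthetic rain field
mpe_like = truth + rng.normal(0, 1.0, truth.shape)    # coarser product
q2_like = truth + rng.normal(0, 0.6, truth.shape)     # finer product

fused = fuse_qpe(mpe_like, 1.0**2, q2_like, 0.6**2)
for name, field in [("MPE", mpe_like), ("Q2", q2_like), ("fused", fused)]:
    print(name, np.sqrt(np.mean((field - truth) ** 2)).round(3))
```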

  6. Did a quality improvement collaborative make stroke care better? A cluster randomized trial

    PubMed Central

    2014-01-01

    Background Stroke can result in death and long-term disability. Fast and high-quality care can reduce the impact of stroke, but UK national audit data have demonstrated variability in compliance with recommended processes of care. Though quality improvement collaboratives (QICs) are widely used, whether a QIC could improve the reliability of stroke care was unknown. Methods Twenty-four NHS hospitals in the Northwest of England were randomly allocated either to participate in Stroke 90:10, a QIC based on the Breakthrough Series (BTS) model, or to a control group giving normal care. The QIC focused on nine processes of quality care for stroke already used in the national stroke audit. The nine processes were grouped into two distinct care bundles: one relating to early-hours care and one relating to rehabilitation following stroke. Using an interrupted time series design and difference-in-difference analysis, we aimed to determine whether hospitals participating in the QIC improved more than the control group on bundle compliance. Results Data were available from nine intervention hospitals (3,533 patients) and nine control hospitals (3,059 patients). Hospitals in the QIC showed a modest improvement from baseline in the odds of average compliance, equivalent to a relative improvement of 10.9% (95% CI 1.3%, 20.6%) in the Early Hours Bundle and 11.2% (95% CI 1.4%, 21.5%) in the Rehabilitation Bundle. Secondary analysis suggested that some specific processes were more sensitive to an intervention effect. Conclusions Some aspects of stroke care improved during the QIC, but the effects of the QIC were modest and further improvement is needed. The extent to which a BTS QIC can improve the quality of stroke care remains uncertain. Some aspects of care may respond better to collaboratives than others. Trial registration ISRCTN13893902. PMID:24690267

  7. An Improved Cuckoo Search Optimization Algorithm for the Problem of Chaotic Systems Parameter Estimation

    PubMed Central

    Wang, Jun; Zhou, Bihua; Zhou, Shudao

    2016-01-01

    This paper proposes an improved cuckoo search (ICS) algorithm to estimate the parameters of chaotic systems. In order to improve the optimization capability of the basic cuckoo search (CS) algorithm, orthogonal design and a simulated annealing operation are incorporated into the CS algorithm to enhance its exploitation search ability. The proposed algorithm is then used to estimate the parameters of the Lorenz and Chen chaotic systems under noiseless and noisy conditions, respectively. The numerical results demonstrate that the algorithm can estimate parameters with high accuracy and reliability. Finally, the results are compared with the CS algorithm, a genetic algorithm, and a particle swarm optimization algorithm, and the comparison demonstrates that the method is efficient and superior. PMID:26880874

  8. An Improved Cuckoo Search Optimization Algorithm for the Problem of Chaotic Systems Parameter Estimation.

    PubMed

    Wang, Jun; Zhou, Bihua; Zhou, Shudao

    2016-01-01

    This paper proposes an improved cuckoo search (ICS) algorithm to estimate the parameters of chaotic systems. In order to improve the optimization capability of the basic cuckoo search (CS) algorithm, orthogonal design and a simulated annealing operation are incorporated into the CS algorithm to enhance its exploitation search ability. The proposed algorithm is then used to estimate the parameters of the Lorenz and Chen chaotic systems under noiseless and noisy conditions, respectively. The numerical results demonstrate that the algorithm can estimate parameters with high accuracy and reliability. Finally, the results are compared with the CS algorithm, a genetic algorithm, and a particle swarm optimization algorithm, and the comparison demonstrates that the method is efficient and superior. PMID:26880874
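
    A basic cuckoo search (without the paper's orthogonal-design and simulated-annealing refinements) is short enough to sketch. Here it estimates the parameter of a chaotic logistic map from one-step prediction errors, as a lightweight stand-in for the Lorenz and Chen systems.

```python
import numpy as np
from scipy.special import gamma

# Basic cuckoo search estimating r in the chaotic logistic map
# x[n+1] = r * x[n] * (1 - x[n]) by minimizing one-step prediction error.
rng = np.random.default_rng(7)

r_true = 3.87
x = np.empty(200); x[0] = 0.3
for n in range(199):
    x[n + 1] = r_true * x[n] * (1 - x[n])

def cost(r):
    return np.mean((x[1:] - r * x[:-1] * (1 - x[:-1])) ** 2)

def levy_step(beta=1.5):
    # Mantegna's algorithm for a symmetric Levy-stable step
    num = gamma(1 + beta) * np.sin(np.pi * beta / 2)
    den = gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    return rng.normal(0, sigma) / abs(rng.normal()) ** (1 / beta)

n_nests, pa, lo, hi = 15, 0.25, 2.5, 4.0
nests = rng.uniform(lo, hi, n_nests)
fit = np.array([cost(r) for r in nests])
for _ in range(200):
    best = nests[np.argmin(fit)]
    for i in range(n_nests):
        trial = np.clip(nests[i] + 0.01 * levy_step() * (nests[i] - best),
                        lo, hi)
        if cost(trial) < fit[i]:
            nests[i], fit[i] = trial, cost(trial)
    worst = np.argsort(fit)[-int(pa * n_nests):]   # abandon worst nests
    nests[worst] = rng.uniform(lo, hi, worst.size)
    fit[worst] = [cost(r) for r in nests[worst]]

print(nests[np.argmin(fit)])   # ~ 3.87
```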

  9. The Use of Innovative Two-Component Cluster Analysis and Serodiagnostic Cut-Off Methods to Estimate Prevalence of Pertussis Reinfections.

    PubMed

    van Twillert, Inonge; Bonačić Marinović, Axel A; van Gaans-van den Brink, Jacqueline A M; Kuipers, Betsy; Berbers, Guy A M; van der Maas, Nicoline A T; Verheij, Theo J M; Versteegh, Florens G A; Teunis, Peter F M; van Els, Cécile A C M

    2016-01-01

    Bordetella pertussis circulates even in highly vaccinated countries, affecting all age groups. Insight into the scale of concealed reinfections is important, as they may contribute to transmission. We therefore investigated whether current single-point serodiagnostic methods are suitable to estimate the prevalence of pertussis reinfection. Two methods based on IgG-Ptx plasma levels alone were used to evaluate the proportion of renewed seroconversions in the past year in a cohort of retrospective pertussis cases ≥ 24 months after a proven earlier symptomatic infection. A Dutch population database was used as a baseline. Applying a classical 62.5 IU/ml IgG-Ptx cut-off, we calculated a seroprevalence of 15% in retrospective cases, higher than the 10% observed in the population baseline. However, this method could not discriminate between renewed seroconversion and waning of previously infection-enhanced IgG-Ptx levels. Two-component cluster analysis of the IgG-Ptx datasets of both pertussis cases and the general population revealed a continuum of intermediate IgG-Ptx levels, preventing the establishment of a positive population and the comparison of prevalence by this alternative method. Next, we investigated the complementary serodiagnostic value of IgA-Ptx levels. When modelling datasets including both convalescent and retrospective cases, we obtained new cut-offs for both IgG-Ptx and IgA-Ptx that were optimized to evaluate renewed seroconversions in the ex-cases target population. Combining these cut-offs two-dimensionally, we calculated 8.0% reinfections in retrospective cases, which is below the baseline seroprevalence. Our study revealed for the first time the shortcomings of using only IgG-Ptx data in conventional serodiagnostic methods to determine pertussis reinfections. Improved results can be obtained with two-dimensional serodiagnostic profiling. The proportion of reinfections thus established suggests a relatively increased period of protection to renewed

  10. The Use of Innovative Two-Component Cluster Analysis and Serodiagnostic Cut-Off Methods to Estimate Prevalence of Pertussis Reinfections

    PubMed Central

    van Twillert, Inonge; Bonačić Marinović, Axel A.; van Gaans-van den Brink, Jacqueline A. M.; Kuipers, Betsy; Berbers, Guy A. M.; van der Maas, Nicoline A. T.; Verheij, Theo J. M.; Versteegh, Florens G. A.; Teunis, Peter F. M.; van Els, Cécile A. C. M.

    2016-01-01

    Bordetella pertussis circulates even in highly vaccinated countries, affecting all age groups. Insight into the scale of concealed reinfections is important, as they may contribute to transmission. We therefore investigated whether current single-point serodiagnostic methods are suitable to estimate the prevalence of pertussis reinfection. Two methods based on IgG-Ptx plasma levels alone were used to evaluate the proportion of renewed seroconversions in the past year in a cohort of retrospective pertussis cases ≥ 24 months after a proven earlier symptomatic infection. A Dutch population database was used as a baseline. Applying a classical 62.5 IU/ml IgG-Ptx cut-off, we calculated a seroprevalence of 15% in retrospective cases, higher than the 10% observed in the population baseline. However, this method could not discriminate between renewed seroconversion and waning of previously infection-enhanced IgG-Ptx levels. Two-component cluster analysis of the IgG-Ptx datasets of both pertussis cases and the general population revealed a continuum of intermediate IgG-Ptx levels, preventing the establishment of a positive population and the comparison of prevalence by this alternative method. Next, we investigated the complementary serodiagnostic value of IgA-Ptx levels. When modelling datasets including both convalescent and retrospective cases, we obtained new cut-offs for both IgG-Ptx and IgA-Ptx that were optimized to evaluate renewed seroconversions in the ex-cases target population. Combining these cut-offs two-dimensionally, we calculated 8.0% reinfections in retrospective cases, which is below the baseline seroprevalence. Our study revealed for the first time the shortcomings of using only IgG-Ptx data in conventional serodiagnostic methods to determine pertussis reinfections. Improved results can be obtained with two-dimensional serodiagnostic profiling. The proportion of reinfections thus established suggests a relatively increased period of protection to renewed
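
    The two-component idea can be sketched with a Gaussian mixture on log titres: fit two components (baseline versus infection-enhanced) and classify new samples by posterior probability rather than by a single fixed cut-off. The titre values below are simulated stand-ins, not study data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Two-component mixture on log IgG-Ptx titres.
rng = np.random.default_rng(8)
log_titre = np.concatenate([
    rng.normal(2.2, 0.6, 850),    # baseline component
    rng.normal(5.0, 0.8, 150),    # infection-enhanced component
]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(log_titre)
high = np.argmax(gmm.means_.ravel())          # infection-enhanced class

# Posterior probability of recent seroconversion for new titres (IU/ml);
# 62.5 IU/ml is the classical single cut-off mentioned above.
new_samples = np.log(np.array([[20.0], [62.5], [400.0]]))
posterior = gmm.predict_proba(new_samples)[:, high]
print(posterior.round(3))
```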

  11. Improved atmospheric soundings and error estimates from analysis of AIRS/AMSU data

    NASA Astrophysics Data System (ADS)

    Susskind, Joel

    2007-09-01

    The AIRS Science Team Version 5.0 retrieval algorithm became operational at the Goddard DAAC in July 2007 generating near real-time products from analysis of AIRS/AMSU sounding data. This algorithm contains many significant theoretical advances over the AIRS Science Team Version 4.0 retrieval algorithm used previously. Three very significant developments of Version 5 are: 1) the development and implementation of an improved Radiative Transfer Algorithm (RTA) which allows for accurate treatment of non-Local Thermodynamic Equilibrium (non-LTE) effects on shortwave sounding channels; 2) the development of methodology to obtain very accurate case-by-case product error estimates which are in turn used for quality control; and 3) development of an accurate AIRS-only cloud clearing and retrieval system. These theoretical improvements taken together enabled a new methodology to be developed which further improves soundings in partially cloudy conditions, without the need for microwave observations in the cloud clearing step as has been done previously. In this methodology, longwave CO2 channel observations in the spectral region 700 cm-1 to 750 cm-1 are used exclusively for cloud clearing purposes, while shortwave CO2 channels in the spectral region 2195 cm-1 to 2395 cm-1 are used for temperature sounding purposes. The new methodology for improved error estimates and their use in quality control is described briefly and results are shown indicative of their accuracy. Results are also shown of forecast impact experiments assimilating AIRS Version 5.0 retrieval products in the Goddard GEOS 5 Data Assimilation System using different quality control thresholds.

  12. Improved Atmospheric Soundings and Error Estimates from Analysis of AIRS/AMSU Data

    NASA Technical Reports Server (NTRS)

    Susskind, Joel

    2007-01-01

    The AIRS Science Team Version 5.0 retrieval algorithm became operational at the Goddard DAAC in July 2007 generating near real-time products from analysis of AIRS/AMSU sounding data. This algorithm contains many significant theoretical advances over the AIRS Science Team Version 4.0 retrieval algorithm used previously. Three very significant developments of Version 5 are: 1) the development and implementation of an improved Radiative Transfer Algorithm (RTA) which allows for accurate treatment of non-Local Thermodynamic Equilibrium (non-LTE) effects on shortwave sounding channels; 2) the development of methodology to obtain very accurate case-by-case product error estimates which are in turn used for quality control; and 3) development of an accurate AIRS-only cloud clearing and retrieval system. These theoretical improvements taken together enabled a new methodology to be developed which further improves soundings in partially cloudy conditions, without the need for microwave observations in the cloud clearing step as has been done previously. In this methodology, longwave CO2 channel observations in the spectral region 700 cm-1 to 750 cm-1 are used exclusively for cloud clearing purposes, while shortwave CO2 channels in the spectral region 2195 cm-1 to 2395 cm-1 are used for temperature sounding purposes. The new methodology for improved error estimates and their use in quality control is described briefly and results are shown indicative of their accuracy. Results are also shown of forecast impact experiments assimilating AIRS Version 5.0 retrieval products in the Goddard GEOS 5 Data Assimilation System using different quality control thresholds.

  13. Kinetic Estimation of GFR Improves Prediction of Dialysis and Recovery after Kidney Transplantation

    PubMed Central

    Pianta, Timothy J.; Endre, Zoltan H.; Pickering, John W.; Buckley, Nicholas A.; Peake, Philip W.

    2015-01-01

    Background The early prediction of delayed graft function (DGF) would facilitate patient management after kidney transplantation. Methods In a single-centre retrospective analysis, we investigated kinetic estimated GFR under non-steady-state conditions, KeGFR, in prediction of DGF. KeGFRsCr was calculated at 4h, 8h and 12h in 56 recipients of deceased donor kidneys from initial serum creatinine (sCr) concentrations, estimated creatinine production rate, volume of distribution, and the difference between consecutive sCr values. The utility of KeGFRsCr for DGF prediction was compared with sCr, plasma cystatin C (pCysC), and KeGFRpCysC similarly derived from pCysC concentrations. Results At 4h, the KeGFRsCr area under the receiver operating characteristic curve (AUC) for DGF prediction was 0.69 (95% CI: 0.56–0.83), while sCr was not useful (AUC 0.56, CI: 0.41–0.72). Integrated discrimination improvement analysis showed that the KeGFRsCr improved a validated clinical prediction model at 4h, 8h, and 12h, increasing the AUC from 0.68 (0.52–0.83) to 0.88 (0.78–0.99) at 12h (p = 0.01). KeGFRpCysC also improved DGF prediction. In contrast, sCr provided no improvement at any time point. Conclusions Calculation of KeGFR from sCr facilitates early prediction of DGF within 4 hours of renal transplantation. PMID:25938452
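
    The abstract does not reproduce the KeGFR formula; a generic mass-balance form of a kinetic clearance estimate from consecutive creatinine values might look like the following sketch (all names and units are illustrative assumptions, not the study's code):

        def kinetic_clearance(scr1, scr2, dt_h, production_mg_h, volume_l):
            """Mass-balance kinetic clearance (L/h): creatinine production
            minus the amount retained in the distribution volume, divided
            by the mean concentration over the interval.
            scr1, scr2 in mg/L, dt_h in hours."""
            mean_scr = 0.5 * (scr1 + scr2)
            accumulation = volume_l * (scr2 - scr1) / dt_h   # mg/h retained
            return (production_mg_h - accumulation) / mean_scr

    Intuitively, a falling creatinine shortly after transplantation implies a clearance above the steady-state estimate, which is why a kinetic formulation can flag recovery or DGF hours before raw sCr moves convincingly.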

  14. Merging raster meteorological data with low resolution satellite images for improved estimation of actual evapotranspiration

    NASA Astrophysics Data System (ADS)

    Cherif, Ines; Alexandridis, Thomas; Chambel Leitao, Pedro; Jauch, Eduardo; Stavridou, Domna; Iordanidis, Charalampos; Silleos, Nikolaos; Misopolinos, Nikolaos; Neves, Ramiro; Safara Araujo, Antonio

    2013-04-01

    ). A correlation analysis was performed at the common spatial resolution of 1km using selected homogeneous pixels (from the land cover point of view). A statistically significant correlation factor of 0.6 was found, and the RMSE was 0.92 mm/day. Using raster meteorological data the ITA-MyWater algorithms were able to capture the variability of weather patterns over the river basin and thus improved the spatial distribution of evapotranspiration estimates at low resolution. The work presented is part of the FP7-EU project "Merging hydrological models and Earth observation data for reliable information on water - MyWater".

  15. Estimating and improving the signal-to-noise ratio of time series by symbolic dynamics.

    PubMed

    Graben, P

    2001-11-01

    We investigate the effect of symbolic encoding applied to time series consisting of some deterministic signal and additive noise, as well as time series given by a deterministic signal with randomly distributed initial conditions as a model of event-related brain potentials. We introduce an estimator of the signal-to-noise ratio (SNR) of the system by means of time averages of running complexity measures such as Shannon and Rényi entropies, and prove its asymptotic equivalence with the linear SNR in the case of Shannon entropies of symbol distributions. An SNR improvement factor is defined, exhibiting a maximum for intermediate values of noise amplitude in analogy to stochastic resonance phenomena. We demonstrate that the maximum of the SNR improvement factor can be shifted toward smaller noise amplitudes by using higher-order Rényi entropies instead of the Shannon entropy. For a further improvement of the SNR, a half-wave encoding of noisy time series is introduced. Finally, we discuss the effect of noisy phases on the linear SNR as well as on the SNR defined by symbolic dynamics. It is shown that longer symbol sequences yield an improvement of the latter. PMID:11735897
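
    A minimal sketch of the kind of symbolic encoding and entropy computation involved, assuming a simple static (median-threshold) binary encoding; the paper's running-entropy SNR estimator itself is more elaborate:

        import numpy as np

        def word_shannon_entropy(x, word_len=3):
            """Binarize a time series at its median and return the Shannon
            entropy (bits) of the empirical distribution of symbol words."""
            symbols = (np.asarray(x) > np.median(x)).astype(int)
            words = np.asarray([symbols[i:i + word_len]
                                for i in range(len(symbols) - word_len + 1)])
            _, counts = np.unique(words, axis=0, return_counts=True)
            p = counts / counts.sum()
            return -np.sum(p * np.log2(p))

    A strong deterministic signal concentrates the word distribution and lowers this entropy; averaging such measures over time is what links them to an SNR estimate.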

  16. Model-free functional MRI analysis using improved fuzzy cluster analysis techniques

    NASA Astrophysics Data System (ADS)

    Lange, Oliver; Meyer-Baese, Anke; Wismueller, Axel; Hurdal, Monica; Sumners, DeWitt; Auer, Dorothee

    2004-04-01

    Conventional model-based or statistical analysis methods for functional MRI (fMRI) are easy to implement, and are effective in analyzing data with simple paradigms. However, they are not applicable in situations in which patterns of neural response are complicated and when the fMRI response is unknown. In this paper the Gath-Geva algorithm is adapted and rigorously studied for analyzing fMRI data. The algorithm supports spatial connectivity, aiding in the identification of activation sites in functional brain imaging. A comparison of this new method with the fuzzy n-means algorithm, Kohonen's self-organizing map, the fuzzy n-means algorithm with unsupervised initialization, the minimal free energy vector quantizer and the "neural gas" network is done in a systematic fMRI study with comparative quantitative evaluations. The most important findings in the paper are: (1) for a large number of codebook vectors, the Gath-Geva algorithm outperforms all other clustering methods in terms of detecting small activation areas, and (2) for a smaller number of codebook vectors, the fuzzy n-means algorithm with unsupervised initialization outperforms all other techniques. The applicability of the new algorithm is demonstrated on experimental data.
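
    For orientation, the fuzzy n-means baseline that the Gath-Geva algorithm generalizes alternates membership and centroid updates; a compact sketch (the fMRI-specific details, such as the spatial connectivity support, are omitted):

        import numpy as np

        def fuzzy_cmeans(X, n_clusters, m=2.0, n_iter=100, seed=0):
            """Plain fuzzy c-means on the rows of X; Gath-Geva extends this
            scheme with cluster-specific covariances and priors."""
            rng = np.random.default_rng(seed)
            U = rng.random((n_clusters, len(X)))
            U /= U.sum(axis=0)                        # fuzzy memberships
            for _ in range(n_iter):
                W = U ** m
                centers = (W @ X) / W.sum(axis=1, keepdims=True)
                d = np.linalg.norm(X[None] - centers[:, None], axis=2) + 1e-12
                U = d ** (-2.0 / (m - 1.0))
                U /= U.sum(axis=0)
            return centers, U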

  17. State Estimation and Forecasting of the Ski-Slope Model Using an Improved Shadowing Filter

    NASA Astrophysics Data System (ADS)

    Mat Daud, Auni Aslah

    In this paper, we present the application of the gradient descent of indeterminism (GDI) shadowing filter to a chaotic system, namely the ski-slope model. The paper focuses on the quality of the estimated states and their usability for forecasting. One main problem is that the existing GDI shadowing filter fails to provide stability in the convergence of the root mean square error and the last-point error of the ski-slope model. Furthermore, there are unexpected cases in which better state estimates give worse forecasts than worse state estimates. We investigate these unexpected cases in particular and show how the presence of the humps contributes to them. However, the results show that the GDI shadowing filter can successfully be applied to the ski-slope model with only a slight modification, namely the introduction of an adaptive step-size to ensure the convergence of indeterminism. We investigate its advantages over a fixed step-size and how it can improve the performance of our shadowing filter.

  18. RNA-Seq alignment to individualized genomes improves transcript abundance estimates in multiparent populations.

    PubMed

    Munger, Steven C; Raghupathy, Narayanan; Choi, Kwangbom; Simons, Allen K; Gatti, Daniel M; Hinerfeld, Douglas A; Svenson, Karen L; Keller, Mark P; Attie, Alan D; Hibbs, Matthew A; Graber, Joel H; Chesler, Elissa J; Churchill, Gary A

    2014-09-01

    Massively parallel RNA sequencing (RNA-seq) has yielded a wealth of new insights into transcriptional regulation. A first step in the analysis of RNA-seq data is the alignment of short sequence reads to a common reference genome or transcriptome. Genetic variants that distinguish individual genomes from the reference sequence can cause reads to be misaligned, resulting in biased estimates of transcript abundance. Fine-tuning of read alignment algorithms does not correct this problem. We have developed Seqnature software to construct individualized diploid genomes and transcriptomes for multiparent populations and have implemented a complete analysis pipeline that incorporates other existing software tools. We demonstrate in simulated and real data sets that alignment to individualized transcriptomes increases read mapping accuracy, improves estimation of transcript abundance, and enables the direct estimation of allele-specific expression. Moreover, when applied to expression QTL mapping we find that our individualized alignment strategy corrects false-positive linkage signals and unmasks hidden associations. We recommend the use of individualized diploid genomes over reference sequence alignment for all applications of high-throughput sequencing technology in genetically diverse populations. PMID:25236449

  19. Combining Electrical Techniques to map a Till Aquitard for Quantifying Lateral Flows and Improved Recharge Estimation

    NASA Astrophysics Data System (ADS)

    Thatcher, K. E.; Mackay, R.

    2007-12-01

    Where low permeability layers are present in the unsaturated zone, groundwater recharge can be significantly modified by lateral flows. To improve estimates of the magnitude and spatial distribution of lateral flows, a well defined model of the unsaturated zone hydraulic properties is required. Electromagnetic (EM) surveys, using Geonics EM31 and EM34, along with Electrical Resistivity Tomography (ERT) have been used in the Tern Catchment, Shropshire, UK to determine the distribution of Quaternary glacial deposits above the Triassic sandstone aquifer. The deposits are generally less than 10m thick and comprise low permeability lodgement till and high permeability outwash. Modelling studies have shown the depth and slope of the till surface to be key parameters controlling the magnitude of lateral flows with recharge focussed at the till edge. The distribution of permeability within the till is of secondary importance. The spatial extent of the till is well constrained by EM data and is shown to be continuous. ERT profiles provide data on the depth to the till surface in detailed 2D sections. Combining the two data sets has enabled the depth estimates from the ERT surveys to be extrapolated across a 2D map area. Recharge estimates based on the depth maps take into account lateral flows across the top of the till and show that these flows can contribute significantly to catchment recharge.

  20. Improving service delivery of water, sanitation, and hygiene in primary schools: a cluster-randomized trial in western Kenya.

    PubMed

    Alexander, Kelly T; Dreibelbis, Robert; Freeman, Matthew C; Ojeny, Betty; Rheingans, Richard

    2013-09-01

    Water, sanitation, and hygiene (WASH) programs in schools have been shown to improve health and reduce absence. In resource-poor settings, barriers such as inadequate budgets, lack of oversight, and competing priorities limit effective and sustained WASH service delivery in schools. We employed a cluster-randomized trial to examine if schools could improve WASH conditions within existing administrative structures. Seventy schools were divided into a control group and three intervention groups. All intervention schools received a budget for purchasing WASH-related items. One group received no further intervention. A second group received additional funding for hiring a WASH attendant and making repairs to WASH infrastructure, and a third group was given guides for student and community monitoring of conditions. Intervention schools made significant improvements in provision of soap and handwashing water, treated drinking water, and clean latrines compared with controls. Teachers reported benefits of monitoring, repairs, and a WASH attendant, but quantitative data of WASH conditions did not determine whether expanded interventions out-performed our budget-only intervention. Providing schools with budgets for WASH operational costs improved access to necessary supplies, but did not ensure consistent service delivery to students. Further work is needed to clarify how schools can provide WASH services daily. PMID:23981878

  1. Improved estimation of anomalous diffusion exponents in single-particle tracking experiments

    NASA Astrophysics Data System (ADS)

    Kepten, Eldad; Bronshtein, Irena; Garini, Yuval

    2013-05-01

    The mean square displacement is a central tool in the analysis of single-particle tracking experiments, shedding light on various biophysical phenomena. Frequently, parameters are extracted by performing time averages on single-particle trajectories followed by ensemble averaging. This procedure, however, suffers from two systematic errors when applied to particles that perform anomalous diffusion. The first is significant at short-time lags and is induced by measurement errors. The second arises from the natural heterogeneity in biophysical systems. We show how to estimate and correct these two errors and improve the estimation of the anomalous parameters for the whole particle distribution. As a consequence, we manage to characterize ensembles of heterogeneous particles even for rather short and noisy measurements where regular time-averaged mean square displacement analysis fails. We apply this method to both simulations and in vivo measurements of telomere diffusion in 3T3 mouse embryonic fibroblast cells. The motion of telomeres is found to be subdiffusive with an average exponent constant in time. Individual telomere exponents are normally distributed around the average exponent. The proposed methodology has the potential to improve experimental accuracy while maintaining lower experimental costs and complexity.
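
    A minimal sketch of the underlying estimate, assuming trajectories as (T, d) position arrays: the anomalous exponent is the log-log slope of the time-averaged MSD, and a constant measurement-noise offset can be subtracted before fitting (the paper's corrections for noise and heterogeneity go further than this):

        import numpy as np

        def time_averaged_msd(track, max_lag):
            track = np.asarray(track, dtype=float)
            return np.array([np.mean(np.sum((track[lag:] - track[:-lag]) ** 2,
                                            axis=1))
                             for lag in range(1, max_lag + 1)])

        def anomalous_exponent(track, max_lag=10, noise_offset=0.0):
            """Fit MSD(t) ~ t**alpha on log-log axes after removing an
            additive measurement-noise offset (of order 2*d*sigma**2)."""
            msd = time_averaged_msd(track, max_lag) - noise_offset
            lags = np.arange(1, max_lag + 1)
            alpha, _ = np.polyfit(np.log(lags), np.log(msd), 1)
            return alpha

    Without the offset correction, measurement noise biases the short-lag slope and hence the exponent, which is exactly the first systematic error the paper addresses.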

  2. Estimating the costs and health benefits of water and sanitation improvements at global level.

    PubMed

    Haller, Laurence; Hutton, Guy; Bartram, Jamie

    2007-12-01

    The aim of this study was to estimate the costs and the health benefits of the following interventions: increasing access to improved water supply and sanitation facilities, increasing access to in house piped water and sewerage connection, and providing household water treatment, in ten WHO sub-regions. The cost-effectiveness of each intervention was assessed in terms of US dollars per disability adjusted life year (DALY) averted. This analysis found that almost all interventions were cost-effective, especially in developing countries with high mortality rates. The estimated cost-effectiveness ratio (CER) varied between US$20 per DALY averted for disinfection at point of use to US$13,000 per DALY averted for improved water and sanitation facilities. While increasing access to piped water supply and sewage connections on plot was the intervention that had the largest health impact across all sub-regions, household water treatment was found to be the most cost-effective intervention. A policy shift to include better household water quality management to complement the continuing expansion of coverage and upgrading of services would appear to be a cost-effective health intervention in many developing countries. PMID:17878561

  3. First look: a cluster-randomized trial of ultrasound to improve pregnancy outcomes in low income country settings

    PubMed Central

    2014-01-01

    Background In high-resource settings, obstetric ultrasound is a standard component of prenatal care used to identify pregnancy complications and to establish an accurate gestational age in order to improve obstetric care. Whether or not ultrasound use will improve care and ultimately pregnancy outcomes in low-resource settings is unknown. Methods/Design This multi-country cluster randomized trial will assess the impact of antenatal ultrasound screening performed by health care staff on a composite outcome consisting of maternal mortality and maternal near-miss, stillbirth and neonatal mortality in low-resource community settings. The trial will utilize an existing research infrastructure, the Global Network for Women’s and Children’s Health Research with sites in Pakistan, Kenya, Zambia, Democratic Republic of Congo and Guatemala. A maternal and newborn health registry in defined geographic areas which documents all pregnancies and their outcomes to 6 weeks post-delivery will provide population-based rates of maternal mortality and morbidity, stillbirth, neonatal mortality and morbidity, and health care utilization for study clusters. A total of 58 study clusters each with a health center and about 500 births per year will be randomized (29 intervention and 29 control). The intervention includes training of health workers (e.g., nurses, midwives, clinical officers) to perform ultrasound examinations during antenatal care, generally at 18–22 and at 32–36 weeks for each subject. Women who are identified as having a complication of pregnancy will be referred to a hospital for appropriate care. Finally, the intervention includes community sensitization activities to inform women and their families of the availability of ultrasound at the antenatal care clinic and training in emergency obstetric and neonatal care at referral facilities. Discussion In summary, our trial will evaluate whether introduction of ultrasound during antenatal care improves pregnancy

  4. Precursor-ion mass re-estimation improves peptide identification on hybrid instruments.

    PubMed

    Luethy, Roland; Kessner, Darren E; Katz, Jonathan E; Maclean, Brendan; Grothe, Robert; Kani, Kian; Faça, Vitor; Pitteri, Sharon; Hanash, Samir; Agus, David B; Mallick, Parag

    2008-09-01

    Mass spectrometry-based proteomics experiments have become an important tool for studying biological systems. Identifying the proteins in complex mixtures by assigning peptide fragmentation spectra to peptide sequences is an important step in the proteomics process. The 1-2 ppm mass-accuracy of hybrid instruments, like the LTQ-FT, has been cited as a key factor in their ability to identify a larger number of peptides with greater confidence than competing instruments. However, in replicate experiments of an 18-protein mixture, we note parent masses deviate 171 ppm, on average, for ion-trap data directed identifications and 8 ppm, on average, for preview Fourier transform (FT) data directed identifications. These deviations are neither caused by poor calibration nor by excessive ion-loading and are most likely due to errors in parent mass estimation. To improve these deviations, we introduce msPrefix, a program to re-estimate a peptide's parent mass from an associated high-accuracy full-scan survey spectrum. In 18-protein mixture experiments, msPrefix parent mass estimates deviate only 1 ppm, on average, from the identified peptides. In a cell lysate experiment searched with a tolerance of 50 ppm, 2295 peptides were confidently identified using native data and 4560 using msPrefixed data. Likewise, in a plasma experiment searched with a tolerance of 50 ppm, 326 peptides were identified using native data and 1216 using msPrefixed data. msPrefix is also able to determine which MS/MS spectra were possibly derived from multiple precursor ions. In complex mixture experiments, we demonstrate that more than 50% of triggered MS/MS may have had multiple precursor ions and note that spectra with multiple candidate ions are less likely to result in an identification using TANDEM. These results demonstrate integration of msPrefix into traditional shotgun proteomics workflows significantly improves identification results. PMID:18707148
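
    msPrefix is a published tool, so the following is only an illustration of the ppm bookkeeping it relies on, not its algorithm; the names are assumptions:

        def ppm_error(observed_mz, reference_mz):
            """Signed parts-per-million deviation between two m/z values."""
            return (observed_mz - reference_mz) / reference_mz * 1e6

        def survey_candidates(survey_peaks, trigger_mz, tol_ppm=50.0):
            """Survey-scan peaks close enough to the instrument's trigger m/z
            to be the true precursor; several hits hint at co-isolation."""
            return [mz for mz in survey_peaks
                    if abs(ppm_error(mz, trigger_mz)) <= tol_ppm]

    Re-estimating the parent mass from the high-accuracy full scan, rather than trusting the trigger value, is what shrinks the reported deviations from 171 ppm (ion-trap directed) to about 1 ppm.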

  5. Long-term accounting for raindrop size distribution variations improves quantitative precipitation estimation by weather radar

    NASA Astrophysics Data System (ADS)

    Hazenberg, Pieter; Leijnse, Hidde; Uijlenhoet, Remko

    2016-04-01

    Weather radars provide information on the characteristics of precipitation at high spatial and temporal resolution. Unfortunately, rainfall measurements by radar are affected by multiple error sources. The current study is focused on the impact of variations of the raindrop size distribution on radar rainfall estimates. Such variations lead to errors in the estimated rainfall intensity (R) and specific attenuation (k) when using fixed relations for the conversion of the observed reflectivity (Z) into R and k. For non-polarimetric radar, this error source has received relatively little attention compared to other error sources. We propose to link the parameters of the Z-R and Z-k relations directly to those of the normalized gamma DSD. The benefit of this procedure is that it reduces the number of unknown parameters. In this work, the DSD parameters are obtained using 1) surface observations from a Parsivel and Thies LPM disdrometer, and 2) a Monte Carlo optimization procedure using surface rain gauge observations. The impact of both approaches for a given precipitation type is assessed for 45 days of summertime precipitation observed in The Netherlands. Accounting for DSD variations using disdrometer observations leads to an improved radar QPE product as compared to applying climatological Z-R and Z-k relations. This especially holds for situations where widespread stratiform precipitation is observed. The best results are obtained when the DSD parameters are optimized. However, the optimized Z-R and Z-k relations show an unrealistic variability that arises from uncorrected error sources. As such, the optimization approach does not result in a realistic DSD shape but instead also accounts for uncorrected error sources resulting in the best radar rainfall adjustment. Therefore, to further improve the quality of precipitation estimates by weather radar, use should be made either of polarimetric radar or of an extended network of disdrometers.
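
    For context, converting radar reflectivity to rain rate through a fixed power law looks like the sketch below; the paper's contribution is to tie the coefficients a and b to the normalized gamma DSD parameters rather than fixing them climatologically (the Marshall-Palmer values used here are a common default, not the paper's):

        import numpy as np

        def rain_rate_from_reflectivity(dbz, a=200.0, b=1.6):
            """Invert Z = a * R**b with Z in mm^6 m^-3 and R in mm/h;
            dBZ = 10 * log10(Z)."""
            z_linear = 10.0 ** (np.asarray(dbz) / 10.0)
            return (z_linear / a) ** (1.0 / b)

        # e.g. 30 dBZ -> roughly 2.7 mm/h under Marshall-Palmer coefficients.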

  6. Improvements to TOVS retrievals over sea ice and applications to estimating Arctic energy fluxes

    NASA Technical Reports Server (NTRS)

    Francis, Jennifer A.

    1994-01-01

    Modeling studies suggest that polar regions play a major role in modulating the Earth's climate and that they may be more sensitive than lower latitudes to climate change. Until recently, however, data from meteorological stations poleward of 70 degs have been sparse, and consequently, our understanding of air-sea-ice interaction processes is relatively poor. Satellite-borne sensors now offer a promising opportunity to observe polar regions and ultimately to improve parameterizations of energy transfer processes in climate models. This study focuses on the application of the TIROS-N operational vertical sounder (TOVS) to sea-ice-covered regions in the nonmelt season. TOVS radiances are processed with the improved initialization inversion ('3I') algorithm, providing estimates of layer-average temperature and moisture, cloud conditions, and surface characteristics at a horizontal resolution of approximately 100 km x 100 km. Although TOVS has flown continuously on polar-orbiting satellites since 1978, its potential has not been realized in high latitudes because the quality of retrievals is often significantly lower over sea ice and snow than over other surfaces. The recent availability of three Arctic data sets has provided an opportunity to validate TOVS retrievals: the first from the Coordinated Eastern Arctic Experiment (CEAREX) in winter 1988/1989, the second from the LeadEx field program in spring 1992, and the third from Russian drifting ice stations. Comparisons with these data reveal deficiencies in TOVS retrievals over sea ice during the cold season; e.g., ice surface temperature is often 5 to 15 K too warm, microwave emissivity is approximately 15% too low at large view angles, clear/cloudy scenes are sometimes misidentified, and low-level inversions are often not captured. In this study, methods to reduce these errors are investigated. Improvements to the ice surface temperature retrieval have reduced rms errors from approximately 7 K to 3 K; correction of

  7. Crop suitability monitoring for improved yield estimations with 100m PROBA-V data

    NASA Astrophysics Data System (ADS)

    Özüm Durgun, Yetkin; Gilliams, Sven; Gobin, Anne; Duveiller, Grégory; Djaby, Bakary; Tychon, Bernard

    2015-04-01

    This study has been realised within the framework of a PhD project aiming to advance agricultural monitoring with improved yield estimates using SPOT VEGETATION remotely sensed data. For the first research question, the aim was to improve dry matter productivity (DMP) for C3 and C4 plants by adding a water stress factor. Additionally, the relation between the actual crop yield and DMP was studied. One of the limitations was the lack of crop-specific maps, which leads to the second research question on 'crop suitability monitoring'. The objective of this work is to create a methodological approach based on the spectral and temporal characteristics of PROBA-V images and ancillary data such as meteorology, soil and topographic data to improve the estimation of annual crop yields. The PROBA-V satellite was launched on 6th May 2013, and was designed to bridge the gap in space-borne vegetation measurements between SPOT-VGT (March 1998 - May 2014) and the upcoming Sentinel-3 satellites scheduled for launch in 2015/2016. PROBA-V has products in four spectral bands: BLUE (centred at 0.463 µm), RED (0.655 µm), NIR (0.845 µm), and SWIR (1.600 µm) with a spatial resolution ranging from 1km to 300m. Due to the construction of the sensor, the central camera can provide a 100m data product with a 5 to 8 day revisit time. Although the 100m data product is still in its test phase, a methodology for crop suitability monitoring was developed. The multi-spectral composites, NDVI (Normalised Difference Vegetation Index) ((NIR - RED)/(NIR + RED)) and NDII (Normalised Difference Infrared Index) ((NIR - SWIR)/(NIR + SWIR)) profiles, are used in addition to secondary data such as digital elevation data, precipitation, temperature, soil types and administrative boundaries to improve the accuracy of crop yield estimates. The methodology is evaluated on several FP7 SIGMA test sites for the 2014 - 2015 period. Reference data in the form of vector GIS with boundaries and cover type of agricultural fields are
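
    The two indices are simple band ratios; a minimal sketch of their computation from PROBA-V reflectances (band names as in the abstract):

        import numpy as np

        def ndvi(nir, red):
            """Normalised Difference Vegetation Index."""
            nir, red = np.asarray(nir, float), np.asarray(red, float)
            return (nir - red) / (nir + red)

        def ndii(nir, swir):
            """Normalised Difference Infrared Index."""
            nir, swir = np.asarray(nir, float), np.asarray(swir, float)
            return (nir - swir) / (nir + swir)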

  8. Strategies for Improving Power in Cluster Randomized Studies of Professional Development

    ERIC Educational Resources Information Center

    Kelcey, Ben; Spybrook, Jessaca; Zhang, Jiaqi; Phelps, Geoffrey; Jones, Nathan

    2015-01-01

    With research indicating substantial differences among teachers in terms of their effectiveness (Nye, Konstantopoulous, & Hedges, 2004), a major focus of recent research in education has been on improving teacher quality through professional development (Desimone, 2009; Institute of Educations Sciences [IES], 2012; Measures of Effective…

  9. Improving Maryland's Offshore Wind Energy Resource Estimate Using Doppler Wind Lidar Technology to Assess Micrometeorology Controls

    NASA Astrophysics Data System (ADS)

    St. Pé, Alexandra; Wesloh, Daniel; Antoszewski, Graham; Daham, Farrah; Goudarzi, Navid; Rabenhorst, Scott; Delgado, Ruben

    2016-06-01

    There is enormous potential to harness the kinetic energy of offshore wind and produce power. However significant uncertainties are introduced in the offshore wind resource assessment process, due in part to limited observational networks and a poor understanding of the marine atmosphere's complexity. Given the cubic relationship between a turbine's power output and wind speed, a relatively small error in the wind speed estimate translates to a significant error in expected power production. The University of Maryland Baltimore County (UMBC) collected in-situ measurements offshore, within Maryland's Wind Energy Area (WEA) from July-August 2013. This research demonstrates the ability of Doppler wind lidar technology to reduce uncertainty in estimating an offshore wind resource, compared to traditional resource assessment techniques, by providing a more accurate representation of the wind profile and associated hub-height wind speed variability. The second objective of this research is to elucidate the impact of offshore micrometeorology controls (stability, wind shear, turbulence) on a turbine's ability to produce power. Compared to lidar measurements, power law extrapolation estimates and operational National Weather Service models underestimated hub-height wind speeds in the WEA. In addition, lidar observations suggest the frequent development of a low-level wind maximum (LLWM), with high turbine-layer wind shear and low turbulence intensity within a turbine's rotor layer (40m-160m). Results elucidate the advantages of using Doppler wind lidar technology to improve offshore wind resource estimates and its ability to monitor under-sampled offshore meteorological controls impact on a potential turbine's ability to produce power.
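
    A sketch of why hub-height speed errors matter so much: extrapolating a reference measurement with a power-law profile and cubing the result (the exponent value is a generic neutral-stability assumption, not a number from the study):

        def hub_height_speed(u_ref, z_ref, z_hub, alpha=0.14):
            """Power-law profile u(z) = u_ref * (z / z_ref) ** alpha; offshore
            stability can push the effective exponent well away from 0.14."""
            return u_ref * (z_hub / z_ref) ** alpha

        def relative_power_error(u_true, u_est):
            """Turbine power scales roughly with the cube of wind speed."""
            return (u_est ** 3 - u_true ** 3) / u_true ** 3

        # A 5% speed underestimate costs about 14% of expected power:
        # relative_power_error(10.0, 9.5) -> about -0.14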

  10. Environmental DNA (eDNA) sampling improves occurrence and detection estimates of invasive burmese pythons.

    PubMed

    Hunter, Margaret E; Oyler-McCance, Sara J; Dorazio, Robert M; Fike, Jennifer A; Smith, Brian J; Hunter, Charles T; Reed, Robert N; Hart, Kristen M

    2015-01-01

    Environmental DNA (eDNA) methods are used to detect DNA that is shed into the aquatic environment by cryptic or low density species. Applied in eDNA studies, occupancy models can be used to estimate occurrence and detection probabilities and thereby account for imperfect detection. However, occupancy terminology has been applied inconsistently in eDNA studies, and many have calculated occurrence probabilities while not considering the effects of imperfect detection. Low detection of invasive giant constrictors using visual surveys and traps has hampered the estimation of occupancy and detection estimates needed for population management in southern Florida, USA. Giant constrictor snakes pose a threat to native species and the ecological restoration of the Florida Everglades. To assist with detection, we developed species-specific eDNA assays using quantitative PCR (qPCR) for the Burmese python (Python molurus bivittatus), Northern African python (P. sebae), boa constrictor (Boa constrictor), and the green (Eunectes murinus) and yellow anaconda (E. notaeus). Burmese pythons, Northern African pythons, and boa constrictors are established and reproducing, while the green and yellow anaconda have the potential to become established. We validated the python and boa constrictor assays using laboratory trials and tested all species in 21 field locations distributed in eight southern Florida regions. Burmese python eDNA was detected in 37 of 63 field sampling events; however, the other species were not detected. Although eDNA was heterogeneously distributed in the environment, occupancy models were able to provide the first estimates of detection probabilities, which were greater than 91%. Burmese python eDNA was detected along the leading northern edge of the known population boundary. The development of informative detection tools and eDNA occupancy models can improve conservation efforts in southern Florida and support more extensive studies of invasive constrictors
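
    One reason the >91% detection probabilities matter: the chance of missing an occupied site falls geometrically with replicate samples. A toy calculation (occupancy models estimate detection jointly with occupancy from the full detection history; this is only the survivor function):

        def cumulative_detection_prob(p_single, n_samples):
            """P(at least one detection | site occupied), assuming independent
            replicates with per-sample detection probability p_single."""
            return 1.0 - (1.0 - p_single) ** n_samples

        # cumulative_detection_prob(0.91, 2) -> 0.9919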

  11. Environmental DNA (eDNA) Sampling Improves Occurrence and Detection Estimates of Invasive Burmese Pythons

    PubMed Central

    Hunter, Margaret E.; Oyler-McCance, Sara J.; Dorazio, Robert M.; Fike, Jennifer A.; Smith, Brian J.; Hunter, Charles T.; Reed, Robert N.; Hart, Kristen M.

    2015-01-01

    Environmental DNA (eDNA) methods are used to detect DNA that is shed into the aquatic environment by cryptic or low density species. Applied in eDNA studies, occupancy models can be used to estimate occurrence and detection probabilities and thereby account for imperfect detection. However, occupancy terminology has been applied inconsistently in eDNA studies, and many have calculated occurrence probabilities while not considering the effects of imperfect detection. Low detection of invasive giant constrictors using visual surveys and traps has hampered the estimation of occupancy and detection estimates needed for population management in southern Florida, USA. Giant constrictor snakes pose a threat to native species and the ecological restoration of the Florida Everglades. To assist with detection, we developed species-specific eDNA assays using quantitative PCR (qPCR) for the Burmese python (Python molurus bivittatus), Northern African python (P. sebae), boa constrictor (Boa constrictor), and the green (Eunectes murinus) and yellow anaconda (E. notaeus). Burmese pythons, Northern African pythons, and boa constrictors are established and reproducing, while the green and yellow anaconda have the potential to become established. We validated the python and boa constrictor assays using laboratory trials and tested all species in 21 field locations distributed in eight southern Florida regions. Burmese python eDNA was detected in 37 of 63 field sampling events; however, the other species were not detected. Although eDNA was heterogeneously distributed in the environment, occupancy models were able to provide the first estimates of detection probabilities, which were greater than 91%. Burmese python eDNA was detected along the leading northern edge of the known population boundary. The development of informative detection tools and eDNA occupancy models can improve conservation efforts in southern Florida and support more extensive studies of invasive constrictors

  12. Environmental DNA (eDNA) sampling improves occurrence and detection estimates of invasive Burmese pythons

    USGS Publications Warehouse

    Hunter, Margaret E.; Oyler-McCance, Sara J.; Dorazio, Robert M.; Fike, Jennifer A.; Smith, Brian J.; Hunter, Charles T.; Reed, Robert N.; Hart, Kristen M.

    2015-01-01

    Environmental DNA (eDNA) methods are used to detect DNA that is shed into the aquatic environment by cryptic or low density species. Applied in eDNA studies, occupancy models can be used to estimate occurrence and detection probabilities and thereby account for imperfect detection. However, occupancy terminology has been applied inconsistently in eDNA studies, and many have calculated occurrence probabilities while not considering the effects of imperfect detection. Low detection of invasive giant constrictors using visual surveys and traps has hampered the estimation of occupancy and detection estimates needed for population management in southern Florida, USA. Giant constrictor snakes pose a threat to native species and the ecological restoration of the Florida Everglades. To assist with detection, we developed species-specific eDNA assays using quantitative PCR (qPCR) for the Burmese python (Python molurus bivittatus), Northern African python (P. sebae), boa constrictor (Boa constrictor), and the green (Eunectes murinus) and yellow anaconda (E. notaeus). Burmese pythons, Northern African pythons, and boa constrictors are established and reproducing, while the green and yellow anaconda have the potential to become established. We validated the python and boa constrictor assays using laboratory trials and tested all species in 21 field locations distributed in eight southern Florida regions. Burmese python eDNA was detected in 37 of 63 field sampling events; however, the other species were not detected. Although eDNA was heterogeneously distributed in the environment, occupancy models were able to provide the first estimates of detection probabilities, which were greater than 91%. Burmese python eDNA was detected along the leading northern edge of the known population boundary. The development of informative detection tools and eDNA occupancy models can improve conservation efforts in southern Florida and support more extensive studies of invasive constrictors

  13. The equilibrium molecular structures of 2-deoxyribose and fructose by the semiexperimental mixed estimation method and coupled-cluster computations.

    PubMed

    Vogt, Natalja; Demaison, Jean; Cocinero, Emilio J; Écija, Patricia; Lesarri, Alberto; Rudolph, Heinz Dieter; Vogt, Jürgen

    2016-06-21

    Fructose and deoxyribose (24 and 19 atoms, respectively) are too large for determining accurate equilibrium structures, either by high-level ab initio methods or by experiments alone. We show in this work that the semiexperimental (SE) mixed estimation (ME) method offers a valuable alternative for equilibrium structure determinations in moderate-sized molecules such as these monosaccharides or other biochemical building blocks. The SE/ME method proceeds by fitting experimental rotational data for a number of isotopologues, which have been corrected with theoretical vibration-rotation interaction parameters (α_i), and predicate observations for the structure. The derived SE constants are later supplemented by carefully chosen structural parameters from medium-level ab initio calculations, including those for hydrogen atoms. The combined data are then used in a weighted least-squares fit to determine an equilibrium structure (r_e). We applied the ME method here to fructose and 2-deoxyribose and checked the accuracy of the calculations for 2-deoxyribose against the high-level ab initio r_e structure fully optimized at the CCSD(T) level. We show that the ME method allows determining a complete and reliable equilibrium structure for relatively large molecules, even when experimental rotational information includes a limited number of isotopologues. With a moderate computational cost the ME method could be applied to larger molecules, thereby improving the structural evidence for subtle orbital interactions such as the anomeric effect. PMID:27212641
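
    Although the structure fit is nonlinear, each iteration reduces to a weighted linear least-squares step in which rotational data and ab initio "predicate" parameters are stacked with their uncertainties; a generic sketch of that step (names are illustrative, not the authors' code):

        import numpy as np

        def weighted_lsq_step(A, y, sigma):
            """Solve min || (y - A x) / sigma ||**2 for the parameter update x;
            rows of A and y mix semiexperimental constants and predicate values."""
            w = 1.0 / np.asarray(sigma, float)
            x, *_ = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)
            return x

    Down-weighting the predicate rows (larger sigma) lets the experimental rotational constants dominate wherever they carry real information, which is the essence of the mixed estimation compromise.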

  14. Optimal clustering of MGs based on droop controller for improving reliability using a hybrid of harmony search and genetic algorithms.

    PubMed

    Abedini, Mohammad; Moradi, Mohammad H; Hosseinian, S M

    2016-03-01

    This paper proposes a novel method to address reliability and technical problems of microgrids (MGs), based on designing a number of self-adequate autonomous sub-MGs by adopting an MG clustering approach. In doing so, a multi-objective optimization problem is developed in which power loss reduction, voltage profile improvement and reliability enhancement are the objective functions. To solve the optimization problem, a hybrid algorithm named HS-GA, based on genetic and harmony search algorithms, is provided, and a load flow method is given to model different types of DGs as droop-controlled units. The performance of the proposed method is evaluated in two case studies. The results provide support for the performance of the proposed method. PMID:26767800

  15. Estimation of Comfort/Discomfort Based on EEG in Massage by Use of Clustering according to Correlation and Incremental Learning type NN

    NASA Astrophysics Data System (ADS)

    Teramae, Tatsuya; Kushida, Daisuke; Takemori, Fumiaki; Kitamura, Akira

    The authors previously proposed an estimation method combining the k-means algorithm and a neural network (NN) for evaluating massage. However, this method suffers from a reduced discrimination ratio when applied to a new user. There are two causes: the NN generalizes poorly, and the classes produced by the k-means algorithm do not have a high within-class correlation coefficient. This research therefore proposes a k-means algorithm guided by the correlation coefficient, together with incremental learning for the NN. The proposed k-means algorithm incorporates an evaluation function based on the correlation coefficient. In incremental learning, the NN is trained on new data with its weights initialized from the existing data. The effectiveness of the proposed methods is verified by estimating comfort from EEG data recorded while a subject receives massage.
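
    The abstract does not give the evaluation function; a plausible within-cluster correlation score of the kind the proposed k-means variant optimizes might look like this sketch:

        import numpy as np

        def mean_within_cluster_correlation(X, labels):
            """Average pairwise Pearson correlation of the rows (e.g. EEG
            feature vectors) assigned to each cluster."""
            scores = []
            for k in np.unique(labels):
                members = X[labels == k]
                if len(members) < 2:
                    continue
                corr = np.corrcoef(members)            # row-wise correlations
                iu = np.triu_indices_from(corr, k=1)   # upper triangle only
                scores.append(corr[iu].mean())
            return float(np.mean(scores))

    Penalizing clusters with low internal correlation addresses the stated weakness of plain k-means here, while retraining the NN incrementally on each new user's data targets the generalization problem.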

  16. Improving the management of multimorbidity in general practice: protocol of a cluster randomised controlled trial (The 3D Study)

    PubMed Central

    Chaplin, Katherine; Bower, Peter; Brookes, Sara; Fitzpatrick, Bridie; Guthrie, Bruce; Shaw, Alison; Mercer, Stewart; Rafi, Imran; Thorn, Joanna

    2016-01-01

    Introduction An increasing number of people are living with multimorbidity. The evidence base for how best to manage these patients is weak. Current clinical guidelines generally focus on single conditions, which may not reflect the needs of patients with multimorbidity. The aim of the 3D study is to develop, implement and evaluate an intervention to improve the management of patients with multimorbidity in general practice. Methods and analysis This is a pragmatic two-arm cluster randomised controlled trial. 32 general practices around Bristol, Greater Manchester and Glasgow will be randomised to receive either the ‘3D intervention’ or usual care. 3D is a complex intervention including components affecting practice organisation, the conduct of patient reviews, integration with secondary care and measures to promote change in practice organisation. Changes include improving continuity of care and replacing reviews of each disease with patient-centred reviews with a focus on patients' quality of life, mental health and polypharmacy. We aim to recruit 1383 patients who have 3 or more chronic conditions. This provides 90% power at 5% significance level to detect an effect size of 0.27 SDs in the primary outcome, which is health-related quality of life at 15 months using the EQ-5D-5L. Secondary outcome measures assess patient centredness, illness burden and treatment burden. The primary analysis will be a multilevel regression model adjusted for baseline, stratification/minimisation, clustering and important co-variables. Nested process evaluation will assess implementation, mechanisms of effectiveness and interaction of the intervention with local context. Economic analysis of cost-consequences and cost-effectiveness will be based on quality-adjusted life years. Ethics and dissemination This study has approval from South-West (Frenchay) National Health Service (NHS) Research Ethics Committee (14/SW/0011). Findings will be disseminated via final report, peer

  17. Can We Improve Estimates of Seismological Q Using a New "Geometrical Spreading" Model?

    NASA Astrophysics Data System (ADS)

    Xie, Jiakang

    2010-10-01

    Precise measurements of seismological Q are difficult because we lack detailed knowledge on how the Earth’s fine velocity structure affects the amplitude data. In a number of recent papers, Morozov (Geophys J Int 175:239-252, 2008; Seism Res Lett 80:5-7, 2009; Pure Appl Geophys, this volume, 2010) proposes a new procedure intended to improve Q determinations. The procedure relies on quantifying the structural effects using a new form of geometrical spreading (GS) model that has an exponentially decaying component with time, e^(-γt); γ is a free parameter and is measured together with Q. Morozov has refit many previously published sets of amplitude attenuation data. In general, the new Q estimates are much higher than previous estimates, and all of the previously estimated frequency-dependence values for Q disappear in the new estimates. In this paper I show that (1) the traditional modeling of seismic amplitudes is physically based, whereas the new model lacks a physical basis; (2) the method of measuring Q using the new model is effectively just a curve-fitting procedure using a first-order Taylor series expansion; (3) previous high-frequency data that were fit by a power-law frequency dependence for Q are expected to be also fit by the first-order expansion in the limited frequency bands involved, because of the long tails of power-law functions; (4) recent laboratory measurements of intrinsic Q of mantle materials at seismic frequencies provide independent evidence that intrinsic Q is often frequency-dependent, which should lead to frequency-dependent total Q; (5) published long-period surface wave data that were used to derive several recent Q models inherently contradict the new GS model; and (6) previous modeling has already included a special case that is mathematically identical to the new GS model, but with physical assumptions and measured Q values that differ from those with the new GS model. Therefore, while individually the previous Q measurements
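
    The identifiability issue at the heart of the debate can be stated in a few lines: at a single frequency, the extra exponential decay and the attenuation term multiply the same variable, so a log-amplitude fit can only recover their sum. A sketch (an illustration of the trade-off, not Morozov's procedure):

        import numpy as np

        def total_decay_rate(t, ln_amp):
            """Fit ln A(t) = c - (gamma + pi*f/Q) * t; at one frequency the
            slope returns only the combined decay, gamma + pi*f/Q."""
            slope, intercept = np.polyfit(t, ln_amp, 1)
            return -slope, intercept

        # Only by regressing the combined decay rate against frequency can
        # gamma (the intercept) be traded off against pi/Q (the slope).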

  18. Assimilating SMOS soil moisture observations into GLEAM to improve terrestrial evaporation estimates over continental Australia

    NASA Astrophysics Data System (ADS)

    Martens, Brecht; Miralles, Diego; Lievens, Hans; Fernández-Prieto, Diego; Verhoest, Niko

    2015-04-01

    Terrestrial evaporation (ET) is an essential component of the climate system that links the water, energy and carbon cycles. Despite the crucial importance of ET for climate, it is still one of the most uncertain components of the (global) hydrological cycle. During the last decades, much effort has been put to develop and improve techniques for measuring the evaporative flux from the land surface in the field. However, these in situ techniques are prone to several errors and, more importantly, only provide relevant information at a very local scale. As a consequence, evaporative models have been designed to derive ET from large-scale satellite data. In this study, GLEAM (Global Land Evaporation - Amsterdam Methodology) is used to simulate evaporation fields over continental Australia. GLEAM consists of a set of simple equations driven by remotely-sensed observations in order to estimate the different components of ET (e.g., transpiration, interception loss, soil evaporation and sublimation). The methodology calculates a multiplicative evaporative stress factor that converts Priestley and Taylor's potential into actual evaporation. Unlike in most other ET-dedicated global models, the stress factor in GLEAM is derived as a function of soil moisture (simulated using a precipitation-driven soil water balance model) and observations of vegetation optical depth (VOD, retrieved from microwave remote sensing). This study investigates the merits of using SMOS soil moisture (SM) and VOD retrievals in GLEAM. The Level 3 SMOS SM retrievals are assimilated into the soil water module using a simple Newtonian Nudging approach. Prior to the assimilation, SM observations are rescaled to the climatology of the model using a standard CDF-matching approach. Several assimilation experiments are conducted to show the efficiency of the assimilation scheme to improve ET estimates over continental Australia. Simulations are validated using both in situ observations of soil moisture and ET
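
    CDF matching is a standard rescaling step; a minimal empirical-quantile sketch of mapping SMOS retrievals onto the model climatology (array names are illustrative):

        import numpy as np

        def cdf_match(obs, obs_clim, model_clim):
            """Replace each observation by the model-climatology value at the
            same empirical cumulative probability."""
            obs = np.asarray(obs, float)
            q = np.searchsorted(np.sort(obs_clim), obs) / float(len(obs_clim))
            return np.quantile(model_clim, np.clip(q, 0.0, 1.0))

    Matching climatologies first means the subsequent Newtonian nudging corrects the model's temporal dynamics rather than a systematic bias between sensor and model.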

  19. Determining Cluster Reddenings: A New Method

    NASA Astrophysics Data System (ADS)

    Miller, Nathan A.; Hong, Linh N.; Friel, Eileen D.; Janes, Kenneth A.

    1995-12-01

    We have developed a technique for determining the reddening to open clusters by using the equivalent width of the Balmer line, Hβ, to determine the intrinsic color of early-type stars in the clusters' fields. Our technique attempts to quantify spectral classification in spectra of moderate resolution using the temperature sensitivity of the Hβ line. We also use the strength of secondary indicators like Mgb (5170 Angstroms) to help distinguish spectral type for stars near A0, where the H-line strength is double-valued with respect to color. Members of the well-studied open cluster M67 were used to develop the calibration. The moderate resolution spectra used in this project were taken with the multi-object spectrographs at the CTIO and KPNO 4-meter telescopes. The calibrating cluster M67 could be observed from both sites, allowing the two data sets to be consistently combined. The two observations of M67 also resulted in a large number of calibrating stars, giving a well-defined relationship between Hβ strength and intrinsic color for stars of known luminosity class. The calibration has been applied to obtain estimates of intrinsic color and thus reddening to individual stars in the fields of a number of open clusters, and the distribution of reddening with distance then constrains the reddening along the line of sight to the clusters. For clusters whose parameters are known, the technique gives excellent agreement with previously published estimates of reddening. The method also provides reddening estimates to a number of open clusters that lack estimates of reddening by traditional methods, such as King 5 and 11, Be 17, 20, 31, 32, and 39, To 2, Pi 2, Cr 261. These clusters include the oldest and most distant open clusters known, and improved estimates of their reddening are crucial for accurate determinations of cluster age and metallicity.

  20. Improvement of the size estimation of 3D tracked droplets using digital in-line holography with joint estimation reconstruction

    NASA Astrophysics Data System (ADS)

    Verrier, N.; Grosjean, N.; Dib, E.; Méès, L.; Fournier, C.; Marié, J.-L.

    2016-04-01

    Digital holography is a valuable tool for three-dimensional information extraction. Among existing configurations, the originally proposed set-up (i.e. Gabor, or in-line holography) is reasonably immune to variations in the experimental environment, making it a method of choice for studies of fluid dynamics. Nevertheless, standard hologram reconstruction techniques, based on numerical light back-propagation, are prone to artifacts such as twin images or aliases that limit both the quality and quantity of information extracted from the acquired holograms. To get round this issue, hologram reconstruction as a parametric inverse problem has been shown to accurately estimate 3D positions and the size of seeding particles directly from the hologram. To push the bounds of accuracy on size estimation still further, we propose to fully exploit the information redundancy of a hologram video sequence using joint estimation reconstruction. Applying this approach in a bench-top experiment, we show that it led to a relative precision of 0.13% (for a 60 μm diameter droplet) for droplet size estimation, and a tracking precision of σ_x × σ_y × σ_z = 0.15 × 0.15 × 1 pixels.

  1. Analysis and improvement of estimated snow water equivalent (SWE) using Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    E Azar, A.; Ghedira, H.; Khanbilvardi, R.

    2005-12-01

    The goal of this study is to improve the retrieval of SWE/snow depth in the Great Lakes area, United States, using passive microwave images along with the Normalized Difference Vegetation Index (NDVI) and Artificial Neural Networks (ANNs). Passive microwave images have been successfully used to estimate snow characteristics such as Snow Water Equivalent (SWE) and snow depth. Despite considerable progress, challenges still exist with respect to accuracy and reliability. In this study, Special Sensor Microwave Imager (SSM/I) channels which are available in Equal-Area Scalable Earth Grid (EASE-Grid) format are used. The study area is covered by a 28 by 35 grid of EASE-Grid pixels, 25km by 25km each. To build a comprehensive data set of brightness temperatures (Tb) of the SSM/I channels, an assortment of pixels was selected based on latitude and land cover. A time series analysis was conducted for three winter seasons to assess the SSM/I capability to estimate snow depth and SWE for various land covers. Ground truth data were obtained from the National Climate Data Center (NCDC) and the National Operational Hydrologic Remote Sensing Center (NOHRSC). The NCDC provided daily snow depth measurements reported from various stations located in the study area. Measurements were recorded and projected to match EASE-Grid formatting. The NOHRSC produces the SNODAS dataset using airborne gamma radiation and gauge measurements combined with a physical model. The data set consisted of different snow characteristics such as SWE and snow depth. Land cover characteristics are introduced by using the Normalized Difference Vegetation Index (NDVI). An Artificial Neural Network (ANN) algorithm has been employed to evaluate the effect of land cover in estimating snow depth and Snow Water Equivalent (SWE). The model is trained using the SSM/I channels (19v, 19h, 37v, 37h, 22v, 85v, 85h) and the mean and standard deviation of NDVI for each pixel. The preliminary time series results showed various degrees of
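
    A minimal sketch of the regression setup described above, with the seven SSM/I channels plus the two NDVI statistics as inputs; the network size, placeholder data and library choice are assumptions, not the study's configuration:

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        # columns: Tb 19v, 19h, 37v, 37h, 22v, 85v, 85h, NDVI mean, NDVI std
        X = np.random.rand(500, 9)    # placeholder EASE-Grid pixel features
        y = np.random.rand(500)       # placeholder snow depth observations

        model = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(16,),
                                           max_iter=2000, random_state=0))
        model.fit(X, y)
        snow_depth_estimate = model.predict(X)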

  2. Improving agricultural drought monitoring in West Africa using root zone soil moisture estimates derived from NDVI

    NASA Astrophysics Data System (ADS)

    McNally, A.; Funk, C. C.; Yatheendradas, S.; Michaelsen, J.; Cappelarere, B.; Peters-Lidard, C. D.; Verdin, J. P.

    2012-12-01

    The Famine Early Warning Systems Network (FEWS NET) relies heavily on remotely sensed rainfall and vegetation data to monitor agricultural drought in Sub-Saharan Africa and other places around the world. Analysts use satellite rainfall to calculate rainy season statistics and force crop water accounting models that show how the magnitude and timing of rainfall might lead to above or below average harvest. The Normalized Difference Vegetation Index (NDVI) is also an important indicator of growing season progress and is given more weight over regions where, for example, lack of rain gauges increases error in satellite rainfall estimates. Currently, however, near-real time NDVI is not integrated into a modeling framework that informs growing season predictions. To meet this need for our drought monitoring system a land surface model (LSM) is a critical component. We are currently enhancing the FEWS NET monitoring activities by configuring a custom instance of NASA's Land Information System (LIS) called the FEWS NET Land Data Assimilation System. Using the LIS Noah LSM, in-situ measurements, and remotely sensed data, we focus on the following questions: What is the relationship between NDVI and in-situ soil moisture measurements over the West Africa Sahel? How can we use this relationship to improve modeled water and energy fluxes over the West Africa Sahel? We investigate soil moisture and NDVI cross-correlation in the time and frequency domain to develop a transfer function model to predict soil moisture from NDVI. This work compares sites in southwest Niger, Benin, Burkina Faso, and Mali to test the generality of the transfer function. For several sites with fallow and millet vegetation in the Wankama catchment in southwest Niger we developed a non-parametric frequency response model, using NDVI inputs and soil moisture outputs, that accurately estimates root zone soil moisture (40-70cm). We extend this analysis by developing a low order parametric transfer function
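
    A first-order parametric stand-in for the NDVI-to-soil-moisture transfer function (the study fits a non-parametric frequency-response model; the gain, time constant and lag here are illustrative parameters):

        import numpy as np

        def ndvi_to_soil_moisture(ndvi, gain, tau, lag):
            """Lagged exponential filter: sm[t] = a*sm[t-1]
            + (1-a)*gain*ndvi[t-lag], with a = exp(-1/tau)."""
            a = np.exp(-1.0 / tau)
            sm = np.zeros(len(ndvi))
            for t in range(lag, len(ndvi)):
                sm[t] = a * sm[t - 1] + (1.0 - a) * gain * ndvi[t - lag]
            return sm

    The lag and smoothing capture the delay with which canopy greenness integrates past root-zone wetness, which is why NDVI can stand in for 40-70 cm soil moisture at all.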

  3. A Monte Carlo approach for improved estimation of groundwater level spatial variability in poorly gauged basins

    NASA Astrophysics Data System (ADS)

    Varouchakis, Emmanouil; Hristopulos, Dionissios

    2013-04-01

    Groundwater level is an important source of information in hydrological modelling. In many aquifers the monitored boreholes are scarce and/or sparsely distributed in space. In both cases, geostatistical methods can help to visualize the free surface of an aquifer, whereas the use of auxiliary information improves the accuracy of level estimates and maximizes the information gain for the quantification of groundwater level spatial variability. In addition, they allow the exploitation of datasets that cannot otherwise be efficiently used in catchment models. In this presentation, we demonstrate an approach for incorporating auxiliary information into interpolation methods using a specific case study. The study area is located on the island of Crete (Greece). The available data consist of 70 hydraulic head measurements for the wet period of the hydrological year 2002-2003, the average pumping rates at the 70 wells, and 10 piezometer readings measured in the preceding hydrological year. We present a groundwater level trend model based on the generalized Thiem equation for multiple wells, and use it as the drift term that incorporates secondary information in Residual Kriging (RK) (Varouchakis and Hristopulos 2013). The residuals are interpolated using Ordinary Kriging and added back to the drift model. Thiem's equation describes the relationship between the steady-state radial inflow into a pumping well and the drawdown. The generalized form of the equation includes the influence of a number of pumping wells. It incorporates the estimated hydraulic head, the initial hydraulic head before abstraction, the number of wells, the pumping rates, the distance of the estimation point from each well, and each well's radius of influence. We assume that the initial hydraulic head follows a linear trend, which we model based on the preceding hydrological year's measurements. The hydraulic conductivity in the study basin varies between 0.0014 and 0.00014 m/s according to geological
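
    A sketch of the drift-plus-residual construction, with a generalized-Thiem drift, h(x) = h0(x) - sum_i Q_i/(2*pi*T) * ln(R/r_i), and SciPy's RBF interpolator standing in for Ordinary Kriging; all values (T, R, coordinates, heads) are assumed for illustration:

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        rng = np.random.default_rng(2)
        wells = rng.uniform(0, 10_000, (70, 2))   # well coordinates (m)
        Q = rng.uniform(5e-4, 5e-3, 70)           # pumping rates (m^3/s)
        T, R = 1e-3, 2_000.0                      # transmissivity, radius of influence

        def drift(points):
            """Linear initial head minus superposed Thiem drawdowns."""
            h0 = 50.0 + 1e-4 * points[:, 0]       # assumed linear initial-head trend
            r = np.linalg.norm(points[:, None, :] - wells[None, :, :], axis=2)
            r = np.clip(r, 0.1, R)                # no drawdown beyond R, avoid r = 0
            return h0 - ((Q / (2 * np.pi * T)) * np.log(R / r)).sum(axis=1)

        heads = drift(wells) + rng.normal(0, 0.5, 70)   # synthetic observations
        residual_model = RBFInterpolator(wells, heads - drift(wells), smoothing=1.0)

        targets = rng.uniform(0, 10_000, (5, 2))  # points where heads are wanted
        estimate = drift(targets) + residual_model(targets)
        print(np.round(estimate, 2))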

  4. Economic support to improve tuberculosis treatment outcomes in South Africa: a pragmatic cluster-randomized controlled trial

    PubMed Central

    2013-01-01

    Background Poverty undermines adherence to tuberculosis treatment. Economic support may both encourage and enable patients to complete treatment. In South Africa, which carries a high burden of tuberculosis, such support may improve the currently poor outcomes of patients on tuberculosis treatment. The aim of this study was to test the feasibility and effectiveness of delivering economic support to patients with pulmonary tuberculosis in a high-burden province of South Africa. Methods This was a pragmatic, unblinded, two-arm cluster-randomized controlled trial, in which 20 public sector clinics acted as clusters. Patients with pulmonary tuberculosis in intervention clinics (n = 2,107) were offered a monthly voucher of ZAR120.00 (approximately US$15) until the completion of their treatment. Vouchers were redeemed at local shops for foodstuffs. Patients in control clinics (n = 1,984) received usual tuberculosis care. Results Intention-to-treat analysis showed a small but non-significant improvement in treatment success rates in intervention clinics (intervention 76.2%; control 70.7%; risk difference 5.6% (95% confidence interval: -1.2%, 12.3%), P = 0.107). Low fidelity to the intervention meant that 36.2% of eligible patients did not receive a voucher at all, 32.3% received a voucher for between one and three months and 31.5% received a voucher for four to eight months of treatment. There was a strong dose–response relationship between frequency of receipt of the voucher and treatment success (P <0.001). Conclusions Our pragmatic trial has shown that, in the real-world setting of public sector clinics in South Africa, economic support to patients with tuberculosis does not significantly improve outcomes on treatment. However, low fidelity in the delivery of our voucher meant that a third of eligible patients did not receive it. Among patients in intervention clinics who received the voucher at least once, treatment success rates were significantly
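
    For orientation, the reported effect size can be reproduced with a naive Wald interval for the risk difference; note how much narrower it is than the trial's reported interval (-1.2%, 12.3%), which properly inflates the variance for within-clinic clustering:

        from math import sqrt

        n1, p1 = 2107, 0.762          # intervention arm: size, success rate
        n0, p0 = 1984, 0.707          # control arm
        rd = p1 - p0
        se = sqrt(p1 * (1 - p1) / n1 + p0 * (1 - p0) / n0)   # ignores clustering
        print(f"risk difference {rd:.3f}, "
              f"naive 95% CI ({rd - 1.96 * se:.3f}, {rd + 1.96 * se:.3f})")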

  5. Improved efficiency in amplification of Escherichia coli o-antigen gene clusters using genome-wide sequence comparison

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Background: In many bacteria including E. coli, genes encoding O-antigens are clustered in the chromosome, with a 39-bp JUMPstart sequence and gnd gene located upstream and downstream of the cluster, respectively. For determining the DNA sequence of the E. coli O-antigen gene cluster, one set of P...

  6. Application of NTR ZTD estimates from GBAS network to improve fast-static GNSS positioning

    NASA Astrophysics Data System (ADS)

    Wielgosz, P.; Paziewski, J.; Stepniak, K.; Krukowska, M.; Kaplon, J.; Sierny, J.; Hadas, T.; Bosy, J.

    2012-04-01

    In precise GNSS positioning, the correlated tropospheric effects are usually reduced by double differencing of the observations and applying mathematical atmospheric models. However, with a growing distance between the receivers, the tropospheric errors decorrelate, causing large residual errors that affect positioning quality. These errors mostly concern the height component of the user position and are related to the high correlation of this component with zenith tropospheric delays (ZTD). This is why the troposphere is nowadays considered an ultimate accuracy-limiting factor in geodetic applications of GNSS. Currently, the most popular solution in state-of-the-art applications is to estimate ZTD together with the station coordinates in a common data adjustment. This approach requires long data spans, e.g., at least 30-60 minutes. However, in fast-static positioning, when only short data spans (a few minutes) are available, this method is not feasible and the troposphere is very difficult to model. Therefore, fast-static positioning requires external tropospheric information in order to improve its accuracy. This can be provided by a network of reference GNSS stations (GBAS), where ZTD can be obtained in the adjustment of GNSS data or directly from ground meteorological data in near real time (NRT) and delivered as an external supporting product. The presented research is carried out in the framework of the "ASG+" project, aimed at the development of NRT supporting modules for the ASG-EUPOS system. In this paper we present an analysis of the application of several ZTD modeling techniques to fast-static GNSS positioning, namely: (1) NRT ZTD estimates obtained from GNSS data of the Polish GBAS system ASG-EUPOS together with IGS/EPN and IERS products, and (2) NRT ZTD determination based on meteorological data collected in real time from the ASG-EUPOS, METAR and SYNOP systems. In order to assess the accuracy of these ZTD modeling techniques, test baselines of several tens
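
    A toy illustration of how an external ZTD product enters the observation model: the zenith delay is mapped to each satellite elevation and removed from the observable. A simple 1/sin(e) mapping stands in here for the Niell- or GMF-type mapping functions used in practice, and the ZTD value is assumed:

        import numpy as np

        ztd = 2.35                                 # external NRT ZTD (m), assumed
        elev = np.radians([15, 30, 45, 60, 85])    # satellite elevation angles
        slant = ztd / np.sin(elev)                 # mapped tropospheric delays (m)
        for e, d in zip(np.degrees(elev), slant):
            print(f"elevation {e:4.1f} deg -> slant delay {d:.3f} m")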

  7. Improving root-zone soil moisture estimations using dynamic root growth and crop phenology

    NASA Astrophysics Data System (ADS)

    Hashemian, Minoo; Ryu, Dongryeol; Crow, Wade T.; Kustas, William P.

    2015-12-01

    Water Energy Balance (WEB) Soil Vegetation Atmosphere Transfer (SVAT) modelling can be used to estimate soil moisture by forcing the model with observed data such as precipitation and solar radiation. Recently, an innovative approach that assimilates remotely sensed thermal infrared (TIR) observations into WEB-SVAT to improve the results has been proposed. However, the efficacy of the model-observation integration relies on the model's realistic representation of soil water processes. Here, we explore methods to improve the soil water processes of a simple WEB-SVAT model by adopting and incorporating an exponential root water uptake model with water stress compensation and by establishing a more appropriate soil-biophysical linkage between root-zone moisture content, above-ground states and biophysical indices. The existing WEB-SVAT model is extended to a new Multi-layer WEB-SVAT with Dynamic Root distribution (MWSDR) that has five soil layers. Impacts on transpiration of plant root depth variations, growth stages and the phenological cycle of the vegetation are considered in the development stages. Hydrometeorological and biogeophysical measurements collected from two experimental sites, one in Dookie, Victoria, Australia and the other in Ponca, Oklahoma, USA, are used to validate the new model. Results demonstrate that MWSDR provides improved soil moisture, transpiration and evaporation predictions which, in turn, can provide an improved physical basis for assimilating remotely sensed data into the model. Results also show the importance of having an adequate representation of vegetation-related transpiration processes for an appropriate simulation of water transfer in the complicated system of soil, plants and atmosphere.
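
    A sketch of the two ingredients named above, an exponential root profile and water-stress compensation, under assumed layer depths, moisture limits and coefficients (the MWSDR parameterization itself is not reproduced here):

        import numpy as np

        depth = np.array([0.05, 0.15, 0.40, 0.70, 1.00])    # layer mid-depths (m)
        theta = np.array([0.08, 0.12, 0.20, 0.24, 0.26])    # soil moisture (m3/m3)
        theta_wilt, theta_crit = 0.05, 0.20                 # assumed limits

        root = np.exp(-3.0 * depth)                         # exponential root profile
        root /= root.sum()
        stress = np.clip((theta - theta_wilt) / (theta_crit - theta_wilt), 0, 1)

        weight = root * stress
        weight /= weight.sum()          # compensation: unstressed layers take up the
                                        # demand that stressed layers cannot supply
        demand = 4.0                    # transpiration demand (mm/day), assumed
        print(np.round(demand * weight, 2))   # per-layer uptake (mm/day)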

  8. A model to estimate the cost-effectiveness of indoor environment improvements in office work

    SciTech Connect

    Seppanen, Olli; Fisk, William J.

    2004-06-01

    A deteriorated indoor climate is commonly related to increases in sick building syndrome symptoms, respiratory illnesses, sick leave, reduced comfort and losses in productivity. The cost to society of a deteriorated indoor climate is high. Some calculations show that this cost is higher than the heating energy costs of the same buildings. Building-level calculations have also shown that many measures taken to improve indoor air quality and climate are cost-effective when the potential monetary savings resulting from an improved indoor climate are included as benefits gained. As an initial step towards systemizing these building-level calculations we have developed a conceptual model to estimate the cost-effectiveness of various measures. The model shows the links between improvements in the indoor environment and the following potential financial benefits: reduced medical care costs, reduced sick leave, better performance of work, lower turnover of employees, and lower costs of building maintenance due to fewer complaints about indoor air quality and climate. The pathways to these potential benefits from changes in building technology and practices go via several human responses to the indoor environment, such as infectious diseases, allergies and asthma, sick building syndrome symptoms, perceived air quality, and the thermal environment. The model also includes the annual cost of investments, operation costs, and the cost savings of an improved indoor climate. The conceptual model illustrates how the various factors are linked to each other. SBS symptoms are probably the most commonly assessed health responses in IEQ studies and have been linked to several characteristics of buildings and IEQ. While the available evidence indicates that SBS symptoms can affect these outcomes and suggests that such a linkage exists, at present we cannot quantify the relationships sufficiently for cost-benefit modeling. New research and analyses of existing data to quantify the financial

  9. Improved Radiation Dosimetry/Risk Estimates to Facilitate Environmental Management of Plutonium-Contaminated Sites

    SciTech Connect

    Scott, Bobby R.; Tokarskaya, Zoya B.; Zhuntova, Galina V.; Osovets, Sergey V.; Syrchikov, Victor A.; Belyaeva, Zinaida D.

    2007-12-14

    This report summarizes 4 years of research achievements in this Office of Science (BER), U.S. Department of Energy (DOE) project. The research described was conducted by scientists and supporting staff at Lovelace Respiratory Research Institute (LRRI)/Lovelace Biomedical and Environmental Research Institute (LBERI) and the Southern Urals Biophysics Institute (SUBI). All project objectives and goals were achieved. A major focus was on obtaining improved cancer risk estimates for inhalation exposure to plutonium (Pu) isotopes in the workplace (DOE radiation workers) and the environment (public exposures to Pu-contaminated soil). A major finding was that low doses and dose rates of gamma rays can significantly suppress cancer induction by alpha radiation from inhaled Pu isotopes. The suppression relates to stimulation of the body's natural defenses, including immunity against cancer cells and selective apoptosis, which removes precancerous and other aberrant cells.

  10. Uniting Space, Ground and Underwater Measurements for Improved Estimates of Rain Rate

    NASA Technical Reports Server (NTRS)

    Amitai, E.; Nystuen, J. A.; Liao, L.; Meneghini, R.; Morin, E.

    2003-01-01

    Global precipitation is monitored from a variety of platforms including space-borne, ground- and ocean-based platforms. Intercomparisons of these observations are crucial to validating the measurements and providing confidence for each measurement technique. Probability distribution functions of rain rates are used to compare satellite and ground-based radar observations. A preferred adjustment technique for improving rain rate distribution estimates is identified using measurements from ground-based radar and radar and rain gauges within the coverage area of the radar. The underwater measurement of rainfall shows similarities to radar measurements, but with intermediate spatial resolution and high temporal resolution. Reconciling these different measurement techniques provides understanding and confidence for all of the methods.
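
    A sketch of the PDF-based comparison described above, with synthetic samples and a simple mean-matching bias factor standing in for the paper's preferred adjustment technique:

        import numpy as np

        rng = np.random.default_rng(3)
        radar = rng.lognormal(0.4, 1.0, 5000)     # radar rain rates (mm/h), synthetic
        gauge = rng.lognormal(0.6, 1.0, 5000)     # gauge rain rates (mm/h), synthetic

        bins = np.logspace(-1, 2, 30)
        pdf = lambda x: np.histogram(x, bins, density=True)[0]

        bias = gauge.mean() / radar.mean()        # one crude adjustment
        before = np.abs(pdf(radar) - pdf(gauge)).sum()
        after = np.abs(pdf(radar * bias) - pdf(gauge)).sum()
        print(f"PDF mismatch before {before:.3f}, after {after:.3f}")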

  11. Improving Curve Number storm runoff estimates using passive microwave satellite observations

    NASA Astrophysics Data System (ADS)

    Beck, H. E.; Schellekens, J.; de Jeu, R. A. M.; van Dijk, A. I. J. M.; Bruijnzeel, L. A.

    2009-04-01

    This study investigated the potential for improvement of Soil Conservation Service (SCS) Curve Number (CN) storm runoff estimates with the implementation of satellite-derived soil moisture. A large data set (1980-2007) of daily measurements of precipitation and streamflow for 135 Australian catchments ranging in size from 53 to 471 km² was used. The observed CN, a measure of the soil's maximum potential retention, was calculated using the SCS-CN model from measured precipitation and stormflow data. The observed CN was compared to a soil wetness index (SWI) based on AMSR-E satellite surface moisture and an antecedent precipitation index (API) based on field observations. Significant correlations (p
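
    The SCS-CN relations at the heart of the study, as a worked example (standard Ia = 0.2S convention; the study inverts these same relations to obtain the observed CN from measured precipitation and stormflow):

        def scs_runoff(p_mm, cn, ia_ratio=0.2):
            """SCS Curve Number storm runoff (mm) from storm precipitation (mm)."""
            s = 25400.0 / cn - 254.0          # maximum potential retention S (mm)
            ia = ia_ratio * s                 # initial abstraction
            if p_mm <= ia:
                return 0.0
            return (p_mm - ia) ** 2 / (p_mm - ia + s)

        for cn in (60, 75, 90):               # wetter soils -> higher CN -> more runoff
            print(cn, round(scs_runoff(50.0, cn), 1))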

  12. Improved Estimates of Temporally Coherent Internal Tides and Energy Fluxes from Satellite Altimetry

    NASA Technical Reports Server (NTRS)

    Ray, Richard D.; Chao, Benjamin F. (Technical Monitor)

    2002-01-01

    Satellite altimetry has opened a surprising new avenue to observing internal tides in the open ocean. The tidal surface signatures are very small, a few cm at most, but in many areas they are robust, owing to averaging over many years. By employing a simplified two-dimensional wave fit to the surface elevations, in combination with climatological hydrography to define the relation between the surface height and the current and pressure at depth, we may obtain rough estimates of internal tide energy fluxes. Initial results near Hawaii with Topex/Poseidon (T/P) data show good agreement with detailed three-dimensional (3D) numerical models, but the altimeter picture is somewhat blurred owing to the widely spaced T/P tracks. The resolution may be enhanced somewhat by using data from the European Space Agency (ESA) Remote Sensing satellite altimeters ERS-1 and ERS-2. The ERS satellite tracks are much more closely spaced (0.72 deg longitude vs. 2.83 deg for T/P), but the tidal estimates are less accurate than those for T/P. All altimeter estimates are also severely affected by noise in regions of high mesoscale variability, and we have had some success in reducing this contamination by employing a prior correction for mesoscale variability based on the ten-day detailed sea surface height maps developed by Le Traon and colleagues. These improvements allow us to more clearly define the internal tide surface field and the corresponding energy fluxes. Results from throughout the global ocean will be presented.
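
    A sketch of the simplified two-dimensional wave fit: least-squares recovery of the amplitude and phase of a plane wave with known wavenumber from scattered surface-height samples (track positions, noise and the assumed mode-1 wavelength are all synthetic):

        import numpy as np

        rng = np.random.default_rng(4)
        x = rng.uniform(0, 300e3, 400)            # sample positions (m)
        y = rng.uniform(0, 300e3, 400)
        kx, ky = 2 * np.pi / 150e3, 0.0           # assumed mode-1 wavenumber
        eta = 0.03 * np.cos(kx * x + ky * y - 0.7) + rng.normal(0, 0.01, 400)

        # A*cos(theta - phi) = A*cos(phi)*cos(theta) + A*sin(phi)*sin(theta)
        theta = kx * x + ky * y
        G = np.column_stack([np.cos(theta), np.sin(theta)])
        a, b = np.linalg.lstsq(G, eta, rcond=None)[0]
        print(f"amplitude {np.hypot(a, b) * 100:.1f} cm, "
              f"phase {np.arctan2(b, a):.2f} rad")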

  13. Quantitative DNA metabarcoding: improved estimates of species proportional biomass using correction factors derived from control material.

    PubMed

    Thomas, Austen C; Deagle, Bruce E; Eveson, J Paige; Harsch, Corie H; Trites, Andrew W

    2016-05-01

    DNA metabarcoding is a powerful new tool allowing characterization of species assemblages using high-throughput amplicon sequencing. The utility of DNA metabarcoding for quantifying relative species abundances is currently limited by both biological and technical biases which influence sequence read counts. We tested the idea of sequencing 50/50 mixtures of target species and a control species in order to generate relative correction factors (RCFs) that account for multiple sources of bias and are applicable to field studies. RCFs will be most effective if they are not affected by input mass ratio or co-occurring species. In a model experiment involving three target fish species and a fixed control, we found RCFs did vary with input ratio but in a consistent fashion, and that 50/50 RCFs applied to DNA sequence counts from various mixtures of the target species still greatly improved relative abundance estimates (e.g. average per species error of 19 ± 8% for uncorrected vs. 3 ± 1% for corrected estimates). To demonstrate the use of correction factors in a field setting, we calculated 50/50 RCFs for 18 harbour seal (Phoca vitulina) prey species (RCFs ranging from 0.68 to 3.68). Applying these corrections to field-collected seal scats affected species percentages from individual samples (Δ 6.7 ± 6.6%) more than population-level species estimates (Δ 1.7 ± 1.2%). Our results indicate that the 50/50 RCF approach is an effective tool for evaluating and correcting biases in DNA metabarcoding studies. The decision to apply correction factors will be influenced by the feasibility of creating tissue mixtures for the target species, and the level of accuracy needed to meet research objectives. PMID:26602877
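
    A worked example of applying RCFs to read counts: divide each species' count by its correction factor, then re-normalize to proportions (species names and factor values are placeholders within the reported 0.68-3.68 range):

        counts = {"herring": 5200, "hake": 1800, "salmon": 900}   # sequence reads
        rcf = {"herring": 1.40, "hake": 0.75, "salmon": 2.10}     # 50/50 RCFs

        corrected = {sp: counts[sp] / rcf[sp] for sp in counts}
        total = sum(corrected.values())
        print({sp: round(v / total, 3) for sp, v in corrected.items()})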

  14. Improvement of force-sensor-based heart rate estimation using multichannel data fusion.

    PubMed

    Bruser, Christoph; Kortelainen, Juha M; Winter, Stefan; Tenhunen, Mirja; Parkka, Juha; Leonhardt, Steffen

    2015-01-01

    The aim of this paper is to present and evaluate algorithms for heartbeat interval estimation from multiple spatially distributed force sensors integrated into a bed. Moreover, the benefit of using multichannel systems as opposed to a single sensor is investigated. While it might seem intuitive that multiple channels are superior to a single channel, the main challenge lies in finding suitable methods to actually leverage this potential. To this end, two algorithms for heart rate estimation from multichannel vibration signals are presented and compared against a single-channel sensing solution. The first method operates by analyzing the cepstrum computed from the average spectra of the individual channels, while the second method applies Bayesian fusion to three interval estimators, such as the autocorrelation, which are applied to each channel. This evaluation is based on 28 night-long sleep lab recordings during which an eight-channel polyvinylidene fluoride-based sensor array was used to acquire cardiac vibration signals. The recruited patients suffered from different sleep disorders of varying severity. From the sensor array data, a virtual single-channel signal was also derived for comparison by averaging the channels. The single-channel results achieved a beat-to-beat interval error of 2.2% with a coverage (i.e., percentage of the recording which could be analyzed) of 68.7%. In comparison, the best multichannel results attained a mean error and coverage of 1.0% and 81.0%, respectively. These results present statistically significant improvements of both metrics over the single-channel results (p < 0.05). PMID:25561445
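
    A sketch of the first method: average the per-channel power spectra and take a cepstrum, whose peak in the plausible beat-interval range gives the heartbeat interval (eight synthetic harmonic channels stand in for the PVDF sensor array):

        import numpy as np

        fs, n = 100.0, 3000                       # 100 Hz, 30 s analysis window
        t = np.arange(n) / fs
        rng = np.random.default_rng(5)
        f0 = 1.2                                  # true rate: 1.2 Hz = 72 bpm

        def channel(phase):                       # harmonic-rich cardiac vibration
            return sum(np.cos(2 * np.pi * f0 * h * t + phase) / h
                       for h in range(1, 6)) + rng.normal(0, 0.5, n)

        spectra = [np.abs(np.fft.rfft(channel(c)))**2 for c in range(8)]
        cepstrum = np.abs(np.fft.irfft(np.log(np.mean(spectra, axis=0) + 1e-12)))

        quefrency = np.arange(cepstrum.size) / fs
        mask = (quefrency > 0.4) & (quefrency < 1.5)   # plausible intervals (s)
        beat = quefrency[mask][np.argmax(cepstrum[mask])]
        print(f"interval {beat:.2f} s -> {60 / beat:.0f} bpm")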

  15. Improved frequency and time of arrival estimation methods in search and rescue system based on MEO satellites

    NASA Astrophysics Data System (ADS)

    Lin, Mo; Li, Rui; Li, Jilin

    2007-11-01

    This paper deals with several key points of signal processing in Medium-altitude Earth Orbit Local User Terminals (MEOLUT) of the Cospas-Sarsat Medium-altitude Earth Orbit Search and Rescue (MEOSAR) system, including frequency of arrival (FOA) and time of arrival (TOA) estimation algorithms. Based on an analytical description of the distress beacon signal, improved TOA and FOA estimation methods are proposed. The improved FOA estimation method integrates bi-FOA measurement, an FFT method, the Rife algorithm and Gaussian windowing to improve the accuracy of FOA estimation. In addition, a TPD algorithm and signal correlation techniques are used to achieve high-performance TOA estimation. The parameter estimation problems are solved by the proposed FOA/TOA methods under quite poor carrier-to-noise density ratio (C/N0). A number of simulations are presented to show the improvements: FOA and TOA estimation errors are lower than 0.1 Hz and 11 μs, respectively, meeting the demanding system requirements of MEOSAR MEOLUTs.
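
    The Rife step is compact enough to sketch: locate the FFT peak, then refine the frequency with a fractional-bin offset estimated from the larger adjacent bin (synthetic complex tone; the bi-FOA measurement, Gaussian window and TOA stages are omitted):

        import numpy as np

        fs, n = 8000.0, 4096
        rng = np.random.default_rng(6)
        f_true = 1234.56
        t = np.arange(n) / fs
        x = (np.exp(2j * np.pi * f_true * t)
             + 0.5 * (rng.normal(0, 1, n) + 1j * rng.normal(0, 1, n)))

        mag = np.abs(np.fft.fft(x))
        k = int(np.argmax(mag))
        if mag[(k + 1) % n] >= mag[k - 1]:        # Rife fractional-bin offset
            delta = mag[(k + 1) % n] / (mag[k] + mag[(k + 1) % n])
        else:
            delta = -mag[k - 1] / (mag[k] + mag[k - 1])
        print(f"true {f_true} Hz, estimated {(k + delta) * fs / n:.2f} Hz")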

  16. Improvements in Virtual Sensors: Using Spatial Information to Estimate Remote Sensing Spectra

    NASA Technical Reports Server (NTRS)

    Oza, Nikunj C.; Srivastava, Ashok N.; Stroeve, Julienne

    2005-01-01

    Various instruments are used to create images of the Earth and other objects in the universe in a diverse set of wavelength bands with the aim of understanding natural phenomena. Sometimes these instruments are built in a phased approach, with additional measurement capabilities added in later phases. In other cases, technology may mature to the point that the instrument offers new measurement capabilities that were not planned in the original design of the instrument. In still other cases, high resolution spectral measurements may be too costly to perform on a large sample and therefore lower resolution spectral instruments are used to take the majority of measurements. Many applied science questions that are relevant to the earth science remote sensing community require analysis of enormous amounts of data that were generated by instruments with disparate measurement capabilities. In past work [1], we addressed this problem using Virtual Sensors: a method that uses models trained on spectrally rich (high spectral resolution) data to "fill in" unmeasured spectral channels in spectrally poor (low spectral resolution) data. We demonstrated this method by using models trained on the high spectral resolution Terra MODIS instrument to estimate what the equivalent of the MODIS 1.6 micron channel would be for the NOAA AVHRR2 instrument. The scientific motivation for the simulation of the 1.6 micron channel is to improve the ability of the AVHRR2 sensor to detect clouds over snow and ice. This work contains preliminary experiments demonstrating that the use of spatial information can improve our ability to estimate these spectra.
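
    A sketch of the Virtual Sensors idea with spatial context: learn a mapping from the channels both instruments share, plus the same channels averaged over each pixel's neighbourhood, to the unmeasured channel (all arrays are synthetic; a random forest stands in for whatever regression model is actually used):

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(7)
        n = 2000
        shared = rng.uniform(0, 1, (n, 4))       # channels both instruments measure
        spatial = rng.uniform(0, 1, (n, 4))      # same channels, 3x3 neighbourhood mean
        target = (shared @ np.array([0.4, -0.2, 0.3, 0.1])
                  + 0.2 * spatial[:, 0] + rng.normal(0, 0.02, n))  # unmeasured channel

        X = np.hstack([shared, spatial])
        model = RandomForestRegressor(n_estimators=100, random_state=0)
        model.fit(X[:1500], target[:1500])
        print("holdout R^2:", round(model.score(X[1500:], target[1500:]), 3))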

  17. On improving low-cost IMU performance for online trajectory estimation

    NASA Astrophysics Data System (ADS)

    Yudanto, Risang; Ompusunggu, Agusmian P.; Bey-Temsamani, Abdellatif

    2015-05-01

    We have developed an automatic mitigation method for compensating the drifts occurring in low-cost Inertial Measurement Units (IMUs) using MEMS (microelectromechanical systems) accelerometers and gyros, and applied the method to online trajectory estimation of a moving robot arm. The method is based on automatic detection of the system's states, which triggers an online (i.e., automatic) recalibration of the sensor parameters. Stationary tests have demonstrated an absolute reduction of drift, mainly due to random walk noise at ambient conditions, of up to ~50% when the recalibrated sensor parameters are used instead of the nominal parameters from the sensor's datasheet. The proposed calibration methodology works online without manual intervention and adaptively compensates drifts under different working conditions. Notably, the proposed method requires neither information from an aiding sensor nor a priori knowledge about the system's model and/or constraints. It is experimentally shown in this paper that the method improves online trajectory estimates of the robot using a low-cost IMU consisting of a MEMS-based accelerometer and gyroscope. Applications of the proposed method cover the automotive, machinery and robotics industries.
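
    A sketch of the state-detection-plus-recalibration loop for a single gyro axis: when a window of samples is judged stationary by its variance, the bias estimate is refreshed from the window mean instead of the datasheet value (threshold and signals are assumed):

        import numpy as np

        def update_bias(window, bias, var_threshold=1e-4):
            """Re-estimate gyro bias whenever the window looks stationary."""
            if np.var(window) < var_threshold:    # stationary-state detector
                return float(np.mean(window))     # online recalibration
            return bias                           # in motion: keep last estimate

        rng = np.random.default_rng(8)
        bias = 0.0                                # nominal datasheet bias
        at_rest = rng.normal(0.02, 0.005, 200)    # rad/s, drifted sensor at rest
        moving = rng.normal(0.5, 0.3, 200)        # rad/s, arm in motion
        for window in (at_rest, moving):
            bias = update_bias(window, bias)
        print(f"estimated bias: {bias:.3f} rad/s")  # ~0.02, learned while at rest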

  18. Evaluation of Optimal Reflectivity-Rain Rate (Z-R) Relationships for Improved Precipitation Estimates

    NASA Astrophysics Data System (ADS)

    Ferreira, A.; Teegavarapu, R. S.; Pathak, C. S.

    2009-12-01

    Use of appropriate reflectivity (Z)-rain rate (R) relationships is crucial for accurate estimation of precipitation amounts using radar. The spatial and temporal variability of storm patterns, combined with the availability of several variants of Z-R relationships, makes this task very difficult. This study evaluates the use of optimization models for optimizing the coefficients and exponents of the traditional Z-R functional relationships for different storm types and seasons. Optimization model formulations using nonlinear programming methods are investigated and developed in this study. The Z-R relationships will be evaluated with optimized coefficients and exponents based on training and test data: the training data will be used to develop the optimal values of the coefficients and exponents, and the test data will be used for assessment. In order to evaluate the optimal relationships developed as part of the study, reflectivity data collected from NCDC and rain gauge data are analyzed for a region in South Florida. An exhaustive evaluation of Z-R relationships in improving precipitation estimates, with and without optimization formulations, will be attempted in this study.
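
    A sketch of the fitting step on synthetic pairs: nonlinear least squares for the coefficient and exponent of Z = aR^b (the study's constrained nonlinear-programming formulations are more elaborate than this):

        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(9)
        R = rng.uniform(0.5, 50, 300)                        # rain rate (mm/h)
        Z = 200.0 * R**1.6 * rng.lognormal(0, 0.1, 300)      # reflectivity (mm^6/m^3)

        (a, b), _ = curve_fit(lambda r, a, b: a * r**b, R, Z, p0=(300.0, 1.4))
        print(f"fitted Z = {a:.0f} * R^{b:.2f}")             # close to Z = 200 R^1.6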

  19. Improved radar data processing algorithms for quantitative rainfall estimation in real time.

    PubMed

    Krämer, S; Verworn, H R

    2009-01-01

    This paper describes a new methodology to process C-band radar data for direct use as rainfall input to hydrologic and hydrodynamic models and in real-time control of urban drainage systems. In contrast to the adjustment of radar data with the help of rain gauges, the new approach accounts for the microphysical properties of the current rainfall. In a first step, radar data are corrected for attenuation. This phenomenon has been identified as the main cause of the general underestimation of radar rainfall. Systematic variation of the attenuation coefficients within predefined bounds allows robust reflectivity profiling. Secondly, event-specific R-Z relations are applied to the corrected radar reflectivity data in order to generate quantitatively reliable radar rainfall estimates. The results of the methodology are validated against a network of 37 rain gauges located in the Emscher and Lippe river basins. Finally, the relevance of the correction methodology for radar rainfall forecasts is demonstrated. The results clearly show that the new methodology significantly improves radar rainfall estimation and rainfall forecasts. The algorithms are applicable in real time. PMID:19587415
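
    A sketch of a gate-by-gate attenuation correction of the Hitschfeld-Bordan type, with specific attenuation taken as a power law in the corrected linear reflectivity (coefficients here are assumed; the paper additionally varies such coefficients within predefined bounds for robustness):

        import numpy as np

        def correct_attenuation(z_dbz, gate_km=0.5, a=3e-5, b=0.8):
            z = np.asarray(z_dbz, dtype=float).copy()
            path_db = 0.0                          # accumulated two-way attenuation
            for i in range(z.size):
                z[i] += path_db                    # restore what was lost en route
                k = a * (10.0 ** (z[i] / 10.0)) ** b   # specific attenuation (dB/km)
                path_db += 2.0 * k * gate_km       # two-way increment for later gates
            return z

        print(np.round(correct_attenuation([45, 44, 40, 35, 30]), 2))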

  20. Calibrated Tully-Fisher Relations For Improved Photometric Estimates Of Disk Rotation Velocities

    NASA Astrophysics Data System (ADS)

    Reyes, Reinabelle; Mandelbaum, R.; Gunn, J. E.; Pizagno, J.

    2011-01-01

    We present calibrated scaling relations (also referred to as Tully-Fisher relations or TFRs) between rotation velocity and photometric quantities (absolute magnitude, stellar mass, and synthetic magnitude, a linear combination of absolute magnitude and color) of disk galaxies at z ~ 0.1. First, we selected a parent disk sample of 170,000 galaxies from SDSS DR7, with redshifts between 0.02 and 0.10 and r-band absolute magnitudes between -18.0 and -22.5. Then, we constructed a child disk sample of 189 galaxies that spans the parameter space in absolute magnitude, color, and disk size covered by the parent sample, and for which we have obtained kinematic data. Long-slit spectra were obtained with the Dual Imaging Spectrograph (DIS) at the Apache Point Observatory 3.5 m for 99 galaxies, and from Pizagno et al. (2007) for 95 galaxies (five have repeat observations). We find the best photometric estimator of disk rotation velocity to be a synthetic magnitude with a color correction that is consistent with the Bell et al. (2003) color-based stellar mass ratio. The improved rotation velocity estimates have a wide range of scientific applications; in particular, in combination with weak lensing measurements, they enable us to constrain the ratio of optical-to-virial velocity in disk galaxies.
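
    A sketch of the calibration step: regress log10(V) on a synthetic magnitude M_syn = M_r + c(g-r), a color-corrected absolute magnitude (the color coefficient, slopes and sample below are invented for illustration, not the paper's fitted values):

        import numpy as np

        rng = np.random.default_rng(10)
        n = 189
        M_r = rng.uniform(-22.5, -18.0, n)          # r-band absolute magnitude
        gr = rng.uniform(0.3, 0.9, n)               # g - r color
        logv = (2.2 - 0.12 * (M_r + 20.5) + 0.15 * (gr - 0.6)
                + rng.normal(0, 0.03, n))           # synthetic rotation velocities

        c = 1.25                                    # assumed color coefficient
        M_syn = M_r + c * gr
        G = np.column_stack([np.ones(n), M_syn])
        zero_point, slope = np.linalg.lstsq(G, logv, rcond=None)[0]
        print(f"log10(V) = {zero_point:.2f} + {slope:.3f} * M_syn")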

  1. Consistent Estimates of Tsunami Energy Show Promise for Improved Early Warning

    NASA Astrophysics Data System (ADS)

    Titov, V.; Song, Y. Tony; Tang, L.; Bernard, E. N.; Bar-Sever, Y.; Wei, Y.

    2016-05-01

    Early tsunami warning critically hinges on rapid determination of the tsunami hazard potential in real time, before waves inundate critical coastlines. Tsunami energy can quickly characterize the destructive potential of the generated waves. Traditional seismic analysis is inadequate to accurately predict a tsunami's energy. Recently, two independent approaches have been proposed to determine the tsunami source energy: one inverted from Deep-ocean Assessment and Reporting of Tsunamis (DART) data during tsunami propagation, and the other derived from land-based coastal global positioning system (GPS) data during tsunami generation. Here, we focus on assessing these two approaches with data from the March 11, 2011 Japanese tsunami. While the GPS approach takes into consideration the dynamic earthquake process, the DART inversion approach provides the actual energy estimate of the propagating tsunami waves; both approaches lead to consistent energy scales for previously studied tsunamis. Encouraged by these promising results, we examined a real-time approach that determines the tsunami source energy by combining the two methods: first, determine the tsunami source from the globally expanding GPS network immediately after an earthquake for near-field early warnings, and then refine the tsunami energy estimate from nearby DART measurements to improve forecast accuracy and support early cancelations. The combination of these two real-time networks may offer an appealing opportunity for early determination of the tsunami threat to save more lives, and for early cancelation of tsunami warnings to avoid unnecessary false alarms.
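
    For scale, a back-of-envelope version of a tsunami source energy estimate: the potential energy of an initial sea-surface displacement field, E = (1/2) rho g ∫ eta² dA (the displacement field below is an invented Gaussian uplift, not a DART or GPS inversion):

        import numpy as np

        rho, g, dx = 1025.0, 9.81, 2000.0           # seawater density, gravity, grid (m)
        x = np.arange(-100e3, 100e3, dx)
        X, Y = np.meshgrid(x, x)
        eta = 1.5 * np.exp(-((X / 40e3)**2 + (Y / 20e3)**2))   # uplift (m), assumed

        energy = 0.5 * rho * g * np.sum(eta**2) * dx * dx
        print(f"potential energy ~ {energy:.2e} J")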

  2. An advanced shape-fitting algorithm applied to quadrupedal mammals: improving volumetric mass estimates

    PubMed Central

    Brassey, Charlotte A.; Gardiner, James D.

    2015-01-01

    Body mass is a fundamental physical property of an individual and has enormous bearing upon ecology and physiology. Generating reliable estimates for body mass is therefore a necessary step in many palaeontological studies. Whilst early reconstructions of mass in extinct species relied upon isolated skeletal elements, volumetric techniques are increasingly applied to fossils when skeletal completeness allows. We apply a new ‘alpha shapes’ (α-shapes) algorithm to volumetric mass estimation in quadrupedal mammals. α-shapes are defined by: (i) the underlying skeletal struct