Science.gov

Sample records for clusters improved estimates

  1. An Improved Cluster Richness Estimator

    SciTech Connect

    Rozo, Eduardo; Rykoff, Eli S.; Koester, Benjamin P.; McKay, Timothy; Hao, Jiangang; Evrard, August; Wechsler, Risa H.; Hansen, Sarah; Sheldon, Erin; Johnston, David; Becker, Matthew R.; Annis, James T.; Bleem, Lindsey; Scranton, Ryan; /Pittsburgh U.

    2009-08-03

    Minimizing the scatter between cluster mass and accessible observables is an important goal for cluster cosmology. In this work, we introduce a new matched filter richness estimator, and test its performance using the maxBCG cluster catalog. Our new estimator significantly reduces the variance in the L_X-richness relation, from σ²(ln L_X) = (0.86 ± 0.02)² to σ²(ln L_X) = (0.69 ± 0.02)². Relative to the maxBCG richness estimate, it also removes the strong redshift dependence of the richness scaling relations, and is significantly more robust to photometric and redshift errors. These improvements are largely due to our more sophisticated treatment of galaxy color data. We also demonstrate that the scatter in the L_X-richness relation depends on the aperture used to estimate cluster richness, and introduce a novel approach for optimizing this aperture which can be easily generalized to other mass tracers.
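
A matched-filter richness of this kind is usually defined implicitly: each galaxy gets a membership probability p_i = λu_i/(λu_i + b_i), where u_i is the normalized cluster filter (profile, luminosity, color) evaluated for that galaxy and b_i is the background density, and the richness must satisfy λ = Σ p_i. Below is a minimal fixed-point sketch of that self-consistency loop, with placeholder filter values rather than the paper's actual filters:

```python
import numpy as np

def matched_filter_richness(u, b, tol=1e-8, max_iter=500):
    """Solve lambda = sum_i p_i with p_i = lambda*u_i / (lambda*u_i + b_i).

    u : per-galaxy cluster filter values (density under the cluster model)
    b : per-galaxy background density values
    """
    u, b = np.asarray(u, float), np.asarray(b, float)
    lam = float(len(u))                  # start from the raw galaxy count
    for _ in range(max_iter):
        p = lam * u / (lam * u + b)      # membership probabilities
        lam_new = p.sum()
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam, p

# toy example: 40 likely members on top of 60 field galaxies
rng = np.random.default_rng(42)
u = np.concatenate([rng.uniform(0.8, 1.2, 40),      # members: high filter value
                    rng.uniform(1e-4, 1e-3, 60)])   # field: low filter value
b = np.full(100, 1.0)                               # uniform background density
lam, p = matched_filter_richness(u, b)
print(f"estimated richness: {lam:.1f}")             # close to the 40 true members
```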

  2. The cluster graphical lasso for improved estimation of Gaussian graphical models

    PubMed Central

    Tan, Kean Ming; Witten, Daniela; Shojaie, Ali

    2015-01-01

    The task of estimating a Gaussian graphical model in the high-dimensional setting is considered. The graphical lasso, which involves maximizing the Gaussian log likelihood subject to a lasso penalty, is a well-studied approach for this task. A surprising connection between the graphical lasso and hierarchical clustering is introduced: the graphical lasso in effect performs a two-step procedure, in which (1) single linkage hierarchical clustering is performed on the variables in order to identify connected components, and then (2) a penalized log likelihood is maximized on the subset of variables within each connected component. Thus, the graphical lasso determines the connected components of the estimated network via single linkage clustering. Single linkage clustering is known to perform poorly in certain finite-sample settings. Therefore, the cluster graphical lasso, which involves clustering the features using an alternative to single linkage clustering, and then performing the graphical lasso on the subset of variables within each cluster, is proposed. Model selection consistency for this technique is established, and its improved performance relative to the graphical lasso is demonstrated in a simulation study, as well as in applications to university webpage and gene expression data sets. PMID:25642008
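
The two-step structure described here is straightforward to prototype: cluster the variables with something other than single linkage, then run the graphical lasso within each cluster. A hedged sketch using scikit-learn follows; the average-linkage criterion on 1 − |correlation| and the penalty value are illustrative choices, not the authors', and `metric="precomputed"` needs scikit-learn ≥ 1.2 (older versions call the argument `affinity`):

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.covariance import GraphicalLasso

def cluster_graphical_lasso(X, n_clusters=3, alpha=0.1):
    """Step 1: cluster features (average linkage on 1 - |corr|).
    Step 2: graphical lasso within each feature cluster."""
    dist = 1.0 - np.abs(np.corrcoef(X, rowvar=False))
    labels = AgglomerativeClustering(
        n_clusters=n_clusters, metric="precomputed", linkage="average"
    ).fit_predict(dist)
    estimates = {}
    for k in np.unique(labels):
        idx = np.flatnonzero(labels == k)
        if len(idx) == 1:                     # singleton cluster: trivial block
            estimates[int(k)] = (idx, np.array([[1.0 / X[:, idx[0]].var()]]))
            continue
        gl = GraphicalLasso(alpha=alpha).fit(X[:, idx])
        estimates[int(k)] = (idx, gl.precision_)  # block of the precision matrix
    return estimates

X = np.random.default_rng(0).normal(size=(200, 12))
blocks = cluster_graphical_lasso(X)
```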

  3. A nonparametric clustering technique which estimates the number of clusters

    NASA Technical Reports Server (NTRS)

    Ramey, D. B.

    1983-01-01

    In applications of cluster analysis, one usually needs to determine the number of clusters, K, and the assignment of observations to each cluster. A clustering technique based on recursive application of a multivariate test of bimodality which automatically estimates both K and the cluster assignments is presented.

  4. Cluster Sampling with Referral to Improve the Efficiency of Estimating Unmet Needs among Pregnant and Postpartum Women after Disasters

    PubMed Central

    Horney, Jennifer; Zotti, Marianne E.; Williams, Amy; Hsia, Jason

    2015-01-01

    Introduction and Background: Women of reproductive age, in particular women who are pregnant or fewer than 6 months postpartum, are uniquely vulnerable to the effects of natural disasters, which may create stressors for caregivers, limit access to prenatal/postpartum care, or interrupt contraception. Traditional approaches (e.g., newborn records, community surveys) to survey women of reproductive age about unmet needs may not be practical after disasters. Finding pregnant or postpartum women is especially challenging because fewer than 5% of women of reproductive age are pregnant or postpartum at any time. Methods: From 2009 to 2011, we conducted three pilots of a sampling strategy that aimed to increase the proportion of pregnant and postpartum women of reproductive age who were included in postdisaster reproductive health assessments in Johnston County, North Carolina, after tornadoes; Cobb/Douglas Counties, Georgia, after flooding; and Bertie County, North Carolina, after hurricane-related flooding. Results: Using this method, the percentage of pregnant and postpartum women interviewed in each pilot increased from 0.06% to 21%, 8% to 19%, and 9% to 17%, respectively. Conclusion and Discussion: Two-stage cluster sampling with referral can be used to increase the proportion of pregnant and postpartum women included in a postdisaster assessment. This strategy may be a promising way to assess unmet needs of pregnant and postpartum women in disaster-affected communities. PMID:22365134

  5. Attitude Estimation in Fractionated Spacecraft Cluster Systems

    NASA Technical Reports Server (NTRS)

    Hadaegh, Fred Y.; Blackmore, James C.

    2011-01-01

    Attitude estimation was examined for fractionated free-flying spacecraft. Instead of a single, monolithic spacecraft, a fractionated free-flying spacecraft uses multiple spacecraft modules. These modules are connected only through wireless communication links and, potentially, wireless power links. The key advantage of this concept is the ability to respond to uncertainty. For example, if a single spacecraft module in the cluster fails, a new one can be launched at a lower cost and risk than would be incurred with on-orbit servicing or replacement of the monolithic spacecraft. In order to create such a system, however, it is essential to know what the navigation capabilities of the fractionated system are as a function of the capabilities of the individual modules, and to have an algorithm that can perform estimation of the attitudes and relative positions of the modules with fractionated sensing capabilities. Looking specifically at fractionated attitude estimation with star trackers and optical relative attitude sensors, a set of mathematical tools has been developed that specify the set of sensors necessary to ensure that the attitude of the entire cluster ("cluster attitude") can be observed. Also developed was a navigation filter that can estimate the cluster attitude if these conditions are satisfied. Each module in the cluster may have either a star tracker, a relative attitude sensor, or both. An extended Kalman filter can be used to estimate the attitude of all modules. A range of estimation performances can be achieved depending on the sensors used and the topology of the sensing network.

  6. Tidal radius estimates for three open clusters

    NASA Astrophysics Data System (ADS)

    Danilov, V. M.; Loktin, A. V.

    2015-10-01

    A new method is developed for estimating tidal radii and masses of open star clusters (OCL) based on the sky-plane coordinates and proper motions and/or radial velocities of cluster member stars. To this end, we perform the correlation and spectral analysis of oscillations of absolute values of stellar velocity components relative to the cluster mass center along three coordinate planes and along each coordinate axis in five OCL models. Mutual correlation functions for fluctuations of absolute values of velocity field components are computed. The spatial Fourier transform of the mutual correlation functions in the case of zero time offset is used to compute wavenumber spectra of oscillations of absolute values of stellar velocity components. The oscillation spectra of these quantities contain series of local maxima at equidistant wavenumber k values. The ratio of the tidal radius of the cluster to the wavenumber difference Δk of adjacent local maxima in the oscillation spectra of absolute values of velocity field components is found to be the same for all five OCL models. This ratio is used to estimate the tidal radii and masses of the Pleiades, Praesepe, and M67 based on the proper motions and sky-plane coordinates of the member stars of these clusters. The radial dependences of the absolute values of the tangential and radial projections of cluster star velocities computed using the proper motions relative to the cluster center are determined, along with the corresponding autocorrelation functions and wavenumber spectra of oscillations of absolute values of velocity field components. The Pleiades virial mass is estimated assuming that the cluster is either isolated or non-isolated. Also derived are the estimates of the Pleiades dynamical mass assuming that it is non-stationary and non-isolated. The inferred Pleiades tidal radii corresponding to these masses are reported.

  7. Optimizing weak lensing mass estimates for cluster profile uncertainty

    SciTech Connect

    Gruen, D.; Bernstein, G. M.; Lam, T. Y.; Seitz, S.

    2011-09-11

    Weak lensing measurements of cluster masses are necessary for calibrating mass-observable relations (MORs) to investigate the growth of structure and the properties of dark energy. However, the measured cluster shear signal varies at fixed mass M_200m due to inherent ellipticity of background galaxies, intervening structures along the line of sight, and variations in the cluster structure due to scatter in concentrations, asphericity and substructure. We use N-body simulated halos to derive and evaluate a weak lensing circular aperture mass measurement M_ap that minimizes the mass estimate variance ⟨(M_ap - M_200m)²⟩ in the presence of all these forms of variability. Depending on halo mass and observational conditions, the resulting mass estimator improves on M_ap filters optimized for circular NFW-profile clusters in the presence of uncorrelated large scale structure (LSS) about as much as the latter improve on an estimator that only minimizes the influence of shape noise. Optimizing for uncorrelated LSS while ignoring the variation of internal cluster structure puts too much weight on the profile near the cores of halos, and under some circumstances can even be worse than not accounting for LSS at all. As a result, we discuss the impact of variability in cluster structure and correlated structures on the design and performance of weak lensing surveys intended to calibrate cluster MORs.

  8. Estimating the Concrete Compressive Strength Using Hard Clustering and Fuzzy Clustering Based Regression Techniques

    PubMed Central

    Nagwani, Naresh Kumar; Deo, Shirish V.

    2014-01-01

    Understanding the compressive strength of concrete is important for activities like construction arrangement, prestressing operations, proportioning new mixtures, and quality assurance. Regression techniques are most widely used for prediction tasks where the relationship between the independent variables and the dependent (prediction) variable is identified. The accuracy of regression techniques for prediction can be improved if clustering is used along with regression, since clustering ensures more accurate curve fitting between the dependent and independent variables. In this work a cluster regression technique is applied for estimating the compressive strength of concrete, and a novel state of the art is proposed for predicting the concrete compressive strength. The objective of this work is to demonstrate that clustering along with regression yields smaller prediction errors for estimating the concrete compressive strength. The proposed technique consists of two major stages: in the first stage, clustering is used to group concrete data with similar characteristics, and then in the second stage regression techniques are applied over these clusters (groups) to predict the compressive strength from individual clusters. It is found from experiments that clustering along with regression techniques gives minimum errors for predicting the compressive strength of concrete; also, the fuzzy C-means clustering algorithm performs better than the K-means algorithm. PMID:25374939

  9. An Improved Fst Estimator

    PubMed Central

    Chen, Guanjie; Yuan, Ao; Shriner, Daniel; Tekola-Ayele, Fasil; Zhou, Jie; Bentley, Amy R.; Zhou, Yanxun; Wang, Chuntao; Newport, Melanie J.; Adeyemo, Adebowale; Rotimi, Charles N.

    2015-01-01

    The fixation index Fst plays a central role in ecological and evolutionary genetic studies. The estimators of Wright (F̂st1), Weir and Cockerham (F̂st2), and Hudson et al. (F̂st3) are widely used to measure genetic differences among different populations, but all have limitations. We propose a minimum variance estimator F̂stm using F̂st1 and F̂st2. We tested F̂stm in simulations and applied it to 120 unrelated East African individuals from Ethiopia and 11 subpopulations in HapMap 3 with 464,642 SNPs. Our simulation study showed that F̂stm has smaller bias than F̂st2 for small sample sizes and smaller bias than F̂st1 for large sample sizes. Also, F̂stm has smaller variance than F̂st2 for small Fst values and smaller variance than F̂st1 for large Fst values. We demonstrated that approximately 30 subpopulations and 30 individuals per subpopulation are required in order to accurately estimate Fst. PMID:26317214
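
The minimum-variance idea can be illustrated generically: given two correlated estimators of the same quantity with variances v1, v2 and covariance c, the variance-minimizing weight on the first is w = (v2 − c)/(v1 + v2 − 2c). The sketch below applies that textbook formula to per-SNP estimates; the paper's actual F̂stm construction may differ in detail:

```python
import numpy as np

def min_variance_combination(f1, f2):
    """Minimum-variance weighted mean of two estimators of the same quantity.

    f1, f2 : per-SNP Fst estimates from two estimators (e.g. Wright-type
    and Weir-Cockerham-type). Returns the combined estimate and the weight.
    """
    f1, f2 = np.asarray(f1, float), np.asarray(f2, float)
    v1, v2 = f1.var(ddof=1), f2.var(ddof=1)
    c = np.cov(f1, f2, ddof=1)[0, 1]
    w = (v2 - c) / (v1 + v2 - 2.0 * c)   # weight on the first estimator
    return (w * f1 + (1.0 - w) * f2).mean(), w

# synthetic demo with a noisier estimator and a correlated, tighter one
rng = np.random.default_rng(0)
truth = 0.10
f1 = truth + rng.normal(0.0, 0.03, 5000)
f2 = truth + 0.5 * (f1 - truth) + rng.normal(0.0, 0.02, 5000)
est, w = min_variance_combination(f1, f2)
print(f"combined Fst = {est:.4f}, weight on estimator 1 = {w:.2f}")
```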

  10. Using second-order generalized estimating equations to model heterogeneous intraclass correlation in cluster randomized trials

    PubMed Central

    Crespi, Catherine M.; Wong, Weng Kee; Mishra, Shiraz I.

    2009-01-01

    In cluster randomized trials, it is commonly assumed that the magnitude of the correlation among subjects within a cluster is constant across clusters. However, the correlation may in fact be heterogeneous and depend on cluster characteristics. Accurate modeling of the correlation has the potential to improve inference. We use second-order generalized estimating equations to model heterogeneous correlation in cluster randomized trials. Using simulation studies we show that accurate modeling of heterogeneous correlation can improve inference when the correlation is high or varies by cluster size. We apply the methods to a cluster randomized trial of an intervention to promote breast cancer screening. PMID:19109804
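
For orientation, first-order GEE with an exchangeable working correlation is the standard starting point and is available in statsmodels; the second-order extension (separate estimating equations for the correlation parameters themselves) is not, so this sketch only illustrates the data layout and a first-order fit on synthetic trial data (all names and values below are hypothetical):

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical cluster-randomized trial: 30 clusters of 20 subjects,
# binary outcome y, arm indicator treat, cluster-level random effect u.
rng = np.random.default_rng(0)
n_clusters, m = 30, 20
cluster = np.repeat(np.arange(n_clusters), m)
treat = np.repeat(rng.integers(0, 2, n_clusters), m)
u = np.repeat(rng.normal(0.0, 0.5, n_clusters), m)
y = (rng.normal(size=n_clusters * m) < -0.5 + 0.4 * treat + u).astype(int)

X = sm.add_constant(treat)
model = sm.GEE(y, X, groups=cluster,
               family=sm.families.Binomial(),
               cov_struct=sm.cov_struct.Exchangeable())
res = model.fit()
print(res.summary())
print("estimated common within-cluster correlation:",
      model.cov_struct.dep_params)
```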

  11. Efficient Pairwise Composite Likelihood Estimation for Spatial-Clustered Data

    PubMed Central

    Bai, Yun; Kang, Jian; Song, Peter X.-K.

    2015-01-01

    Spatial-clustered data refer to high-dimensional correlated measurements collected from units or subjects that are spatially clustered. Such data arise frequently from studies in the social and health sciences. We propose a unified modeling framework, termed GeoCopula, to characterize both large-scale variation and small-scale variation for various data types, including continuous data, binary data, and count data as special cases. To overcome challenges in the estimation and inference for the model parameters, we propose an efficient composite likelihood approach in which the estimation efficiency results from a construction of over-identified joint composite estimating equations. Consequently, the statistical theory for the proposed estimation is developed by extending the classical theory of the generalized method of moments. A clear advantage of the proposed estimation method is its computational feasibility. We conduct several simulation studies to assess the performance of the proposed models and estimation methods for both Gaussian and binary spatial-clustered data. Results show a clear improvement in estimation efficiency over the conventional composite likelihood method. An illustrative data example is included to motivate and demonstrate the proposed method. PMID:24945876

  12. Cross-Clustering: A Partial Clustering Algorithm with Automatic Estimation of the Number of Clusters

    PubMed Central

    Tellaroli, Paola; Bazzi, Marco; Donato, Michele; Brazzale, Alessandra R.; Drăghici, Sorin

    2016-01-01

    Four of the most common limitations of the many available clustering methods are: i) the lack of a proper strategy to deal with outliers; ii) the need for a good a priori estimate of the number of clusters to obtain reasonable results; iii) the lack of a method able to detect when partitioning of a specific data set is not appropriate; and iv) the dependence of the result on the initialization. Here we propose Cross-clustering (CC), a partial clustering algorithm that overcomes these four limitations by combining the principles of two well-established hierarchical clustering algorithms: Ward's minimum variance and Complete-linkage. We validated CC by comparing it with a number of existing clustering methods, including Ward's and Complete-linkage. We show, on both simulated and real datasets, that CC performs better than the other methods in terms of the identification of the correct number of clusters, the identification of outliers, and the determination of real cluster memberships. We used CC to cluster samples in order to identify disease subtypes, and to cluster gene profiles in order to determine groups of genes with the same behavior. Results obtained on a non-biological dataset show that the method is general enough to be successfully used in such diverse applications. The algorithm has been implemented in the statistical language R and is freely available from the CRAN contributed packages repository. PMID:27015427

  13. Estimating potential evapotranspiration with improved radiation estimation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Potential evapotranspiration (PET) is of great importance to estimation of surface energy budget and water balance calculation. The accurate estimation of PET will facilitate efficient irrigation scheduling, drainage design, and other agricultural and meteorological applications. However, accuracy o...

  14. Memory color assisted illuminant estimation through pixel clustering

    NASA Astrophysics Data System (ADS)

    Zhang, Heng; Quan, Shuxue

    2010-01-01

    The under-constrained nature of illuminant estimation means that certain assumptions, such as the gray world theory, are needed to resolve the problem. Including more constraints in this process may help explore the useful information in an image and improve the accuracy of the estimated illuminant, provided that the constraints hold. Based on the observation that most personal images contain one or more of the following categories: neutral objects, human beings, sky, and plants, we propose a method for illuminant estimation through the clustering of pixels of gray and three dominant memory colors: skin tone, sky blue, and foliage green. Analysis shows that samples of the above colors cluster around small areas under different illuminants, and their characteristics can be used to effectively detect pixels falling into each of the categories. The algorithm requires knowledge of the spectral sensitivity response of the camera, and a spectral database consisting of the CIE standard illuminants and reflectance or radiance databases of samples of the above colors.

  15. Cluster Stability Estimation Based on a Minimal Spanning Trees Approach

    NASA Astrophysics Data System (ADS)

    Volkovich, Zeev (Vladimir); Barzily, Zeev; Weber, Gerhard-Wilhelm; Toledano-Kitai, Dvora

    2009-08-01

    Among the areas of data and text mining employed today in science, economy, and technology, clustering theory serves as a preprocessing step in data analysis. However, many open questions still await theoretical and practical treatment; e.g., the problem of determining the true number of clusters has not been satisfactorily solved. In the current paper, this problem is addressed by the cluster stability approach. For several possible numbers of clusters we estimate the stability of partitions obtained from clustering of samples. Partitions are considered consistent if their clusters are stable. Cluster validity is measured as the total number of edges, in the clusters' minimal spanning trees, connecting points from different samples. Actually, we use the Friedman and Rafsky two-sample test statistic. The homogeneity hypothesis, of well-mingled samples within the clusters, leads to an asymptotic normal distribution of the considered statistic. Resting upon this fact, the standard score of the mentioned edge count is set, and the partition quality is represented by the worst cluster, corresponding to the minimal standard score value. It is natural to expect that the true number of clusters can be characterized by the empirical distribution having the shortest left tail. The proposed methodology sequentially creates the described value distribution and estimates its left-asymmetry. Numerical experiments, presented in the paper, demonstrate the ability of the approach to detect the true number of clusters.
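
The core quantity is easy to compute with SciPy: build the minimal spanning tree of the pooled points and count edges whose endpoints come from different samples (the Friedman-Rafsky count). The paper standardizes this count using its asymptotic normal law; the sketch below substitutes a permutation null for simplicity, which is my simplification rather than the authors' procedure:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def cross_edge_count(X, labels):
    """Friedman-Rafsky statistic: MST edges joining points from different samples."""
    labels = np.asarray(labels)
    mst = minimum_spanning_tree(squareform(pdist(X))).tocoo()
    return int(np.sum(labels[mst.row] != labels[mst.col]))

def standardized_count(X, labels, n_perm=500, seed=0):
    """Standard score of the cross-sample edge count under a permutation null."""
    rng = np.random.default_rng(seed)
    obs = cross_edge_count(X, labels)
    null = np.array([cross_edge_count(X, rng.permutation(labels))
                     for _ in range(n_perm)])
    return (obs - null.mean()) / null.std()

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(0, 1, (30, 2))])
labels = np.repeat([0, 1], 30)
print(standardized_count(X, labels))   # near 0 when the samples are well mingled
```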

  16. Learning Markov Random Walks for robust subspace clustering and estimation.

    PubMed

    Liu, Risheng; Lin, Zhouchen; Su, Zhixun

    2014-11-01

    Markov Random Walks (MRW) have proven to be an effective way to understand spectral clustering and embedding. However, lacking a global structural measure, conventional MRW (e.g., the Gaussian kernel MRW) cannot handle data points drawn from a mixture of subspaces. In this paper, we introduce a regularized MRW learning model, using a low-rank penalty to constrain the global subspace structure, for subspace clustering and estimation. In our framework, both the local pairwise similarity and the global subspace structure can be learnt from the transition probabilities of the MRW. We prove that under some suitable conditions, our proposed local/global criteria can exactly capture the multiple subspace structure and learn a low-dimensional embedding for the data in which the true segmentation of subspaces is given. To improve robustness in real situations, we also propose an extension of the MRW learning model that integrates transition matrix learning and error correction in a unified framework. Experimental results on both synthetic data and real applications demonstrate that our proposed MRW learning model and its robust extension outperform state-of-the-art subspace clustering methods. PMID:25005156

  17. Thermochemical property estimation of hydrogenated silicon clusters.

    PubMed

    Adamczyk, Andrew J; Broadbelt, Linda J

    2011-08-18

    The thermochemical properties for selected hydrogenated silicon clusters (Si_xH_y, x = 3-13, y = 0-18) were calculated using quantum chemical calculations and statistical thermodynamics. Standard enthalpy of formation at 298 K and standard entropy and constant pressure heat capacity at various temperatures, i.e., 298-6000 K, were calculated for 162 hydrogenated silicon clusters using G3//B3LYP. The hydrogenated silicon clusters contained ten to twenty fused Si-Si bonds, i.e., bonds participating in more than one three- to six-membered ring. The hydrogenated silicon clusters in this study involved different degrees of hydrogenation, i.e., the ratio of hydrogen to silicon atoms varied widely depending on the size of the cluster and/or degree of multifunctionality. A group additivity database composed of atom-centered groups and ring corrections, as well as bond-centered groups, was created to predict thermochemical properties most accurately. For the training set molecules, the average absolute deviation (AAD) comparing the G3//B3LYP values to the values obtained from the revised group additivity database for standard enthalpy of formation and entropy at 298 K and constant pressure heat capacity at 500, 1000, and 1500 K were 3.2%, 1.9%, 0.40%, 0.43%, and 0.53%, respectively. Sensitivity analysis of the revised group additivity parameter database revealed that the group parameters were able to predict the thermochemical properties of molecules that were not used in the training set within an AAD of 3.8% for standard enthalpy of formation at 298 K. PMID:21728331
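
Group additivity reduces to a linear regression: a molecule's property is modeled as the sum of the contributions of the groups it contains, so fitting the database is a least-squares problem on the group-count matrix. A synthetic illustration (the counts and values below are made up; the paper fits G3//B3LYP enthalpies, entropies, and heat capacities):

```python
import numpy as np

rng = np.random.default_rng(1)
n_molecules, n_groups = 40, 6

# rows: molecules; columns: how many times each group occurs in the molecule
counts = rng.integers(0, 4, size=(n_molecules, n_groups)).astype(float)

true_contrib = rng.normal(20.0, 5.0, n_groups)                   # "true" group values
h_f = counts @ true_contrib + rng.normal(0.0, 0.5, n_molecules)  # training data

# least-squares group contributions, then an in-sample deviation check
contrib, *_ = np.linalg.lstsq(counts, h_f, rcond=None)
aad = np.abs(counts @ contrib - h_f).mean()
print(f"mean absolute deviation: {aad:.2f}")
```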

  18. Proportion estimation using prior cluster purities

    NASA Technical Reports Server (NTRS)

    Terrell, G. R. (Principal Investigator)

    1980-01-01

    The prior distribution of CLASSY component purities is studied, and this information incorporated into maximum likelihood crop proportion estimators. The method is tested on Transition Year spring small grain segments.

  1. Estimating the abundance of clustered animal population by using adaptive cluster sampling and negative binomial distribution

    NASA Astrophysics Data System (ADS)

    Bo, Yizhou; Shifa, Naima

    2013-09-01

    An estimator for finding the abundance of a rare, clustered and mobile population has been introduced. This model is based on adaptive cluster sampling (ACS) to identify the location of the population and negative binomial distribution to estimate the total in each site. To identify the location of the population we consider both sampling with replacement (WR) and sampling without replacement (WOR). Some mathematical properties of the model are also developed.

  2. A clustering routing algorithm based on improved ant colony clustering for wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Xiao, Xiaoli; Li, Yang

    Because node distribution in real wireless sensor networks is not uniform, this paper presents a clustering strategy based on the ant colony clustering algorithm (ACC-C). To reduce the energy consumption of cluster heads near the base station and of the whole network, the algorithm applies ant colony clustering to form non-uniform clusters. An improved route optimality measure is presented to evaluate the performance of the chosen route. Simulation results show that, compared with other algorithms, such as the LEACH algorithm and the improved particle swarm clustering algorithm (PSC-C), the proposed approach is able to keep away from nodes with less residual energy, which can improve the lifetime of the network.

  3. IMPROVING BIOGENIC EMISSION ESTIMATES WITH SATELLITE IMAGERY

    EPA Science Inventory

    This presentation will review how existing and future applications of satellite imagery can improve the accuracy of biogenic emission estimates. Existing applications of satellite imagery to biogenic emission estimates have focused on characterizing land cover. Vegetation dat...

  4. Spatial dependence clusters in the estimation of forest structural parameters

    NASA Astrophysics Data System (ADS)

    Wulder, Michael Albert

    1999-12-01

    In this thesis we provide a summary of the methods by which remote sensing may be applied in forestry, while also acknowledging the various limitations which are faced. The application of spatial statistics to high spatial resolution imagery is explored as a means of increasing the information which may be extracted from digital images. A number of high spatial resolution optical remote sensing satellites that are soon to be launched will increase the availability of imagery for the monitoring of forest structure. This technological advancement is timely as current forest management practices have been altered to reflect the need for sustainable ecosystem level management. The low accuracy level at which forest structural parameters have been estimated in the past is partly due to low image spatial resolution. A large pixel is often composed of a number of surface features, resulting in a spectral value which is due to the reflectance characteristics of all surface features within that pixel. In the case of small pixels, a portion of a surface feature may be represented by a single pixel. When a single pixel represents a portion of a surface object, the potential to isolate distinct surface features exists. Spatial statistics, such as the Getis statistic, provide for an image processing method to isolate distinct surface features. In this thesis, high spatial resolution imagery sensed over a forested landscape is processed with spatial statistics to combine distinct image objects into clusters, representing individual or groups of trees. Tree clusters are a means to deal with the inevitable foliage overlap which occurs within complex mixed and deciduous forest stands. The generation of image objects, that is, clusters, is necessary to deal with the presence of spectrally mixed pixels. The ability to estimate forest inventory and biophysical parameters from image clusters generated from spatially dependent image features is tested in this thesis. The inventory

  5. Rod cluster having improved vane configuration

    SciTech Connect

    Shockling, L.A.; Francis, T.A.

    1989-09-05

    This patent describes a pressurized water reactor vessel, the vessel defining a predetermined axial direction of coolant flow therewithin and having plural spider assemblies supporting, for vertical movement within the vessel, respective clusters of rods in spaced, parallel axial relationship, parallel to the predetermined axial direction of coolant flow, and a rod guide for each spider assembly and respective cluster of rods. Each rod guide has horizontally oriented support plates within it, each plate having an interior opening for accommodating axial movement therethrough of the spider assembly and respective cluster of rods. The opening defines plural radially extending channels and corresponding parallel interior wall surfaces of the support plate.

  6. Identifying sampling locations for field-scale soil moisture estimation using K-means clustering

    NASA Astrophysics Data System (ADS)

    Van Arkel, Zach; Kaleita, Amy L.

    2014-08-01

    Identifying and understanding the impact of field-scale soil moisture patterns is currently limited by the time and resources required to do sufficient monitoring. This study uses K-means clustering to find critical sampling points to estimate field-scale near-surface soil moisture. Points within the field are clustered based upon topographic and soils data and the points representing the center of those clusters are identified as the critical sampling points. Soil moisture observations at 42 sites across the growing seasons of 4 years were collected several times per week. Using soil moisture observations at the critical sampling points and the number of points within each cluster, a weighted average is found and used as the estimated mean field-scale soil moisture. Field-scale soil moisture estimations from this method are compared to the rank stability approach (RSA) to find optimal sampling locations based upon temporal soil moisture data. The clustering approach on soil and topography data resulted in field-scale average moisture estimates that were as good or better than RSA, but without the need for exhaustive presampling of soil moisture. Using an electromagnetic inductance map as a proxy for soils data significantly improved the estimates over those obtained based on topography alone.
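
The selection step reduces to: cluster the candidate sites on their attributes, take the real site nearest each cluster center as a critical sampling point, and weight its measurement by the cluster's share of the field. A compact sketch (the feature scaling and choice of k are my assumptions):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def critical_points(features, k=5, seed=0):
    """features: (n_sites, n_attributes) topographic/soil attributes per site.
    Returns indices of critical sampling points and their cluster weights."""
    F = StandardScaler().fit_transform(features)
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(F)
    # index of the real site closest to each cluster center
    idx = np.array([np.argmin(((F - c) ** 2).sum(axis=1))
                    for c in km.cluster_centers_])
    weights = np.bincount(km.labels_, minlength=k) / len(F)
    return idx, weights

# field-scale mean from moisture theta measured only at the critical points:
# theta_field = weights @ theta[idx]
```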

  7. Improved Ant Colony Clustering Algorithm and Its Performance Study.

    PubMed

    Gao, Wei

    2016-01-01

    Clustering analysis is used in many disciplines and applications; it is an important tool that descriptively identifies homogeneous groups of objects based on attribute values. The ant colony clustering algorithm is a swarm-intelligent method used for clustering problems that is inspired by the behavior of ant colonies that cluster their corpses and sort their larvae. A new abstraction ant colony clustering algorithm using a data combination mechanism is proposed to improve the computational efficiency and accuracy of the ant colony clustering algorithm. The abstraction ant colony clustering algorithm is used to cluster benchmark problems, and its performance is compared with the ant colony clustering algorithm and other methods used in existing literature. Based on similar computational difficulties and complexities, the results show that the abstraction ant colony clustering algorithm produces results that are not only more accurate but also more efficiently determined than the ant colony clustering algorithm and the other methods. Thus, the abstraction ant colony clustering algorithm can be used for efficient multivariate data clustering. PMID:26839533

  8. Improving clustering by imposing network information

    PubMed Central

    Gerber, Susanne; Horenko, Illia

    2015-01-01

    Cluster analysis is one of the most popular data analysis tools in a wide range of applied disciplines. We propose and justify a computationally efficient and straightforward-to-implement way of imposing the available information from networks/graphs (a priori available in many application areas) on a broad family of clustering methods. The introduced approach is illustrated on the problem of a noninvasive unsupervised brain signal classification. This task is faced with several challenging difficulties such as nonstationary noisy signals and a small sample size, combined with a high-dimensional feature space and huge noise-to-signal ratios. Applying this approach results in an exact unsupervised classification of very short signals, opening new possibilities for clustering methods in the area of a noninvasive brain-computer interface. PMID:26601225

  9. Improving performance through concept formation and conceptual clustering

    NASA Technical Reports Server (NTRS)

    Fisher, Douglas H.

    1992-01-01

    Research from June 1989 through October 1992 focused on concept formation, clustering, and supervised learning for purposes of improving the efficiency of problem-solving, planning, and diagnosis. These projects resulted in two dissertations on clustering, explanation-based learning, and means-ends planning, as well as publications in conferences, workshops, and journals and several book chapters; a complete bibliography of NASA Ames-supported publications is included. The following topics are studied: clustering of explanations and problem-solving experiences; clustering and means-ends planning; and diagnosis of space shuttle and space station operating modes.

  10. Galaxy cluster mass estimation from stacked spectroscopic analysis

    NASA Astrophysics Data System (ADS)

    Farahi, Arya; Evrard, August E.; Rozo, Eduardo; Rykoff, Eli S.; Wechsler, Risa H.

    2016-08-01

    We use simulated galaxy surveys to study: (i) how galaxy membership in redMaPPer clusters maps to the underlying halo population, and (ii) the accuracy of a mean dynamical cluster mass, Mσ(λ), derived from stacked pairwise spectroscopy of clusters with richness λ. Using ~130,000 galaxy pairs patterned after the Sloan Digital Sky Survey (SDSS) redMaPPer cluster sample study of Rozo et al., we show that the pairwise velocity probability density function of central-satellite pairs with m_i < 19 in the simulation matches the form seen in Rozo et al. Through joint membership matching, we deconstruct the main Gaussian velocity component into its halo contributions, finding that the top-ranked halo contributes ~60 per cent of the stacked signal. The halo mass scale inferred by applying the virial scaling of Evrard et al. to the velocity normalization matches, to within a few per cent, the log-mean halo mass derived through galaxy membership matching. We apply this approach, along with miscentring and galaxy velocity bias corrections, to estimate the log-mean matched halo mass at z = 0.2 of SDSS redMaPPer clusters. Employing the velocity bias constraints of Guo et al., we find ⟨ln M⟩ = ln(M_30) + α_m ln(λ/30) with M_30 = 1.56 ± 0.35 × 10^14 M⊙ and α_m = 1.31 ± 0.06_stat ± 0.13_sys. Systematic uncertainty in the velocity bias of satellite galaxies overwhelmingly dominates the error budget.
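
As a worked example, the quoted scaling can be evaluated directly; for a richness λ = 60 cluster it implies a stacked log-mean halo mass of roughly 3.9 × 10^14 M⊙ (numbers taken from the abstract, solar-mass units assumed):

```python
import numpy as np

def mean_log_mass(lam, M30=1.56e14, alpha=1.31):
    """<ln M> = ln(M_30) + alpha_m * ln(lambda / 30), masses in solar masses."""
    return np.log(M30) + alpha * np.log(lam / 30.0)

print(f"{np.exp(mean_log_mass(60.0)):.2e} Msun")  # ~ 3.9e14 for lambda = 60
```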

  11. Nanostar Clustering Improves the Sensitivity of Plasmonic Assays.

    PubMed

    Park, Yong Il; Im, Hyungsoon; Weissleder, Ralph; Lee, Hakho

    2015-08-19

    Star-shaped Au nanoparticles (Au nanostars, AuNS) have been developed to improve plasmonic sensitivity, but their application has largely been limited to single-particle probes. We herein describe an AuNS clustering assay, based on nanoscale self-assembly of multiple AuNS, which further increases detection sensitivity. We show that each cluster contains multiple nanogaps to concentrate electric fields, thereby amplifying the signal via plasmon coupling. Numerical simulation indicated that AuNS clusters assume up to 460-fold higher field density than Au nanosphere clusters of similar mass. The results were validated in model assays of protein biomarker detection. The AuNS clustering assay showed higher sensitivity than Au nanospheres. Minimizing the size of the affinity ligand was found important to tightly confine electric fields and improve the sensitivity. The resulting assay is simple and fast and can be readily applied to point-of-care molecular detection schemes. PMID:26102604

  12. Fuzzy C-mean clustering on kinetic parameter estimation with generalized linear least square algorithm in SPECT

    NASA Astrophysics Data System (ADS)

    Choi, Hon-Chit; Wen, Lingfeng; Eberl, Stefan; Feng, Dagan

    2006-03-01

    Dynamic Single Photon Emission Computed Tomography (SPECT) has the potential to quantitatively estimate physiological parameters by fitting compartment models to the tracer kinetics. The generalized linear least square method (GLLS) is an efficient method to estimate unbiased kinetic parameters and parametric images. However, due to the low sensitivity of SPECT, noisy data can cause voxel-wise parameter estimation by GLLS to fail. Fuzzy C-Means (FCM) clustering and a modified FCM, which also utilizes information from the immediate neighboring voxels, are proposed to improve the voxel-wise parameter estimation of GLLS. Monte Carlo simulations were performed to generate dynamic SPECT data with different noise levels, which were processed by general and modified FCM clustering. Parametric images were estimated by Logan and Yokoi graphical analysis and GLLS. The influx rate (K_I) and volume of distribution (V_d) were estimated for the cerebellum, thalamus, and frontal cortex. Our results show that (1) FCM reduces the bias and improves the reliability of parameter estimates for noisy data, (2) GLLS provides estimates of micro parameters (K_I-k_4) as well as macro parameters, such as the volume of distribution (V_d) and binding potential (BP_I & BP_II), and (3) FCM clustering incorporating neighboring voxel information does not improve the parameter estimates, but improves noise in the parametric images. These findings indicate that pre-segmentation with traditional FCM clustering is desirable for generating voxel-wise parametric images with GLLS from dynamic SPECT data.
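
Plain fuzzy C-means itself is short enough to sketch: alternate between membership-weighted cluster centers and the standard inverse-distance membership update. This is the generic algorithm, not the neighborhood-aware modified FCM the authors evaluate:

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, max_iter=300, tol=1e-6, seed=0):
    """Plain fuzzy C-means. Returns cluster centers and membership matrix U."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)               # rows sum to 1
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        inv = np.maximum(d, 1e-12) ** (-2.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=1, keepdims=True)  # u_ik = 1 / sum_j (d_ik/d_jk)^(2/(m-1))
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U

X = np.random.default_rng(3).normal(size=(300, 2)) + np.repeat([[0, 0], [4, 4]], 150, axis=0)
centers, U = fuzzy_c_means(X, c=2)
```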

  13. Clustering of Casablanca stock market based on Hurst exponent estimates

    NASA Astrophysics Data System (ADS)

    Lahmiri, Salim

    2016-08-01

    This paper deals with the problem of modeling the Casablanca Stock Exchange (CSE) topology as a complex network during three different market regimes: a general trend characterized by ups and downs, an increasing trend, and a decreasing trend. In particular, a set of seven different Hurst exponent estimates are used to characterize long-range dependence in each industrial sector's generating process. They are employed in conjunction with a hierarchical clustering approach to examine the co-movements of the Casablanca Stock Exchange industrial sectors. The purpose is to investigate whether cluster structures are similar across variable, increasing, and decreasing regimes. It is observed that the general structure of the CSE topology changed considerably over the 2009 (variable regime), 2010 (increasing regime), and 2011 (decreasing regime) time periods. The most important findings follow. First, in general a high value of the Hurst exponent is associated with a variable regime and a small one with a decreasing regime. In addition, Hurst estimates during an increasing regime are higher than those of a decreasing regime. Second, correlations between the estimated Hurst exponent vectors of industrial sectors increase when the Casablanca Stock Exchange follows an upward regime, whilst they decrease when the overall market follows a downward regime.
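
One classic member of such a set of estimators is the rescaled-range (R/S) Hurst exponent; once every sector has its vector of Hurst estimates, ordinary hierarchical clustering on those vectors gives the co-movement structure. A sketch with a single R/S estimator standing in for the paper's seven:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def hurst_rs(x, min_block=8):
    """Rescaled-range estimate of the Hurst exponent of a 1-D series."""
    x = np.asarray(x, float)
    sizes, rs = [], []
    size = min_block
    while size <= len(x) // 2:
        blocks = x[: len(x) - len(x) % size].reshape(-1, size)
        dev = np.cumsum(blocks - blocks.mean(axis=1, keepdims=True), axis=1)
        R = dev.max(axis=1) - dev.min(axis=1)
        S = blocks.std(axis=1, ddof=1)
        ok = S > 0
        sizes.append(size)
        rs.append(np.mean(R[ok] / S[ok]))
        size *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)  # log E[R/S] ~ H log n
    return slope

# cluster sectors by their Hurst-estimate vectors (rows = sectors)
rng = np.random.default_rng(0)
series = rng.normal(size=(10, 1024))        # stand-in for sector return series
H = np.array([[hurst_rs(s)] for s in series])
labels = fcluster(linkage(H, method="average"), t=3, criterion="maxclust")
```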

  14. Estimating adjusted prevalence ratio in clustered cross-sectional epidemiological data

    PubMed Central

    Santos, Carlos Antônio ST; Fiaccone, Rosemeire L; Oliveira, Nelson F; Cunha, Sérgio; Barreto, Maurício L; do Carmo, Maria Beatriz B; Moncayo, Ana-Lucia; Rodrigues, Laura C; Cooper, Philip J; Amorim, Leila D

    2008-01-01

    Background: Many epidemiologic studies report the odds ratio as a measure of association for cross-sectional studies with common outcomes. In such cases, the prevalence ratios may not be inferred from the estimated odds ratios. This paper overviews the most commonly used procedures to obtain adjusted prevalence ratios and extends the discussion to the analysis of clustered cross-sectional studies. Methods: Prevalence ratios (PR) were estimated using logistic models with random effects. Their 95% confidence intervals were obtained using the delta method and a clustered bootstrap. The performance of these approaches was evaluated through simulation studies. Using data from two studies with health-related outcomes in children, we discuss the interpretation of the measures of association and their implications. Results: The results from the data analysis highlighted major differences between estimated OR and PR. Results from simulation studies indicate an improved performance of the delta method compared to the bootstrap when the number of clusters is small. Conclusion: We recommend the use of logistic models with random effects for the analysis of clustered data. The choice of method to estimate confidence intervals for PR (delta or bootstrap method) should be based on the study design. PMID:19087281
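
The clustered bootstrap evaluated here resamples whole clusters rather than individual subjects. A minimal unadjusted sketch (the paper's point estimates come from logistic models with random effects; this version bootstraps only the crude prevalence ratio):

```python
import numpy as np

def cluster_bootstrap_pr(y, exposed, cluster, n_boot=1000, seed=0):
    """Crude prevalence ratio with a cluster-bootstrap percentile CI.

    y, exposed : 0/1 outcome and exposure arrays; cluster : cluster labels.
    Resamples whole clusters with replacement (assumes every resample
    still contains both exposure groups).
    """
    rng = np.random.default_rng(seed)
    y, exposed, cluster = map(np.asarray, (y, exposed, cluster))
    ids = np.unique(cluster)

    def pr(idx):
        return y[idx][exposed[idx] == 1].mean() / y[idx][exposed[idx] == 0].mean()

    point = pr(np.arange(len(y)))
    boot = []
    for _ in range(n_boot):
        chosen = rng.choice(ids, size=len(ids), replace=True)
        idx = np.concatenate([np.flatnonzero(cluster == c) for c in chosen])
        boot.append(pr(idx))
    lo, hi = np.percentile(boot, [2.5, 97.5])
    return point, (lo, hi)
```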

  15. Estimating interstellar extinction towards elliptical galaxies and star clusters.

    NASA Astrophysics Data System (ADS)

    de Amôres, E. B.; Lépine, J. R. D.

    The ability to estimate interstellar extinction is essential for color corrections and distance calculations of all sorts of astronomical objects, and is fundamental for galactic structure studies. We performed comparisons of the interstellar extinction models of Amores & Lépine (2005), which are available at http://www.astro.iag.usp.br/~amores. These models are based on the hypothesis that gas and dust are homogeneously mixed, and make use of the dust-to-gas ratio. The gas density distribution used in the models is obtained from the large-scale gas surveys: the Berkeley and Parkes HI surveys and the Columbia University CO survey. In the present work, we compared these models with extinction predictions for elliptical galaxies (gE) and star clusters. We used the sample of gE galaxies proposed by Burstein for the comparison between the extinction calculation methods of Burstein & Heiles (1978, 1982) and of Schlegel et al. (1998), extending the comparison to our models. We found rms differences equal to 0.0179 and 0.0189 mag, respectively, in the comparison of the predictions of our "model A" with the two methods mentioned. The comparison takes into account the "zero points" introduced by Burstein. The correlation coefficient obtained in the comparison is around 0.85. These results show that our models can be safely used for the estimation of extinction in our Galaxy for extragalactic work, as an alternative method to the BH and SFD predictions. In the comparison with the globular clusters we found rms differences equal to 0.32 and 0.30 for our models A and S, respectively. For the open clusters we made comparisons using different samples and the rms differences were around 0.25.

  16. Improving Osteoporosis Screening: Results from a Randomized Cluster Trial

    PubMed Central

    Kolk, Deneil; Peterson, Edward L.; McCarthy, Bruce D.; Weiss, Thomas W.; Chen, Ya-Ting; Muma, Bruce K.

    2007-01-01

    Background: Despite recommendations, osteoporosis screening rates among women aged 65 years and older remain low. We present results from a clustered, randomized trial evaluating patient mailed reminders, alone and in combination with physician prompts, to improve osteoporosis screening and treatment. Methods: Primary care clinics (n = 15) were randomized to usual care, mailed reminders alone, or mailed reminders with physician prompts. Study patients were females aged 65-89 years (N = 10,354). Using automated clinical and pharmacy data, information was collected on bone mineral density testing, pharmacy dispensings, and other patient characteristics. Unadjusted/adjusted differences in testing and treatment were assessed using generalized estimating equation approaches. Results: Osteoporosis screening rates were 10.8% in usual care, 24.1% with mailed reminders, and 28.9% with mailed reminders plus physician prompts. Results adjusted for differences at baseline indicated that mailed reminders significantly improved testing rates compared to usual care, and that the addition of prompts further improved testing. This effect increased with patient age. Treatment rates were 5.2% in usual care, 8.4% with mailed reminders, and 9.1% with mailed reminders plus prompts. No significant differences were found in treatment rates between those receiving mailed reminders alone or in combination with physician prompts. However, women receiving usual care were significantly less likely to be treated. Conclusions: The use of mailed reminders, either alone or with physician prompts, can significantly improve osteoporosis screening and treatment rates among insured primary care patients (ClinicalTrials.gov number NCT00139425). PMID:17356966

  17. Unsupervised, Robust Estimation-based Clustering for Multispectral Images

    NASA Technical Reports Server (NTRS)

    Netanyahu, Nathan S.

    1997-01-01

    To prepare for the challenge of handling the archiving and querying of terabyte-sized scientific spatial databases, the NASA Goddard Space Flight Center's Applied Information Sciences Branch (AISB, Code 935) developed a number of characterization algorithms that rely on supervised clustering techniques. The research reported here has been aimed at continuing the evolution of some of these supervised techniques, namely the neural network and decision tree-based classifiers, and at extending the approach to incorporate unsupervised clustering algorithms, such as those based on robust estimation (RE) techniques. The algorithms developed under this task should be suited for use by the Intelligent Information Fusion System (IIFS) metadata extraction modules, and as such these algorithms must be fast, robust, and anytime in nature. Finally, so that the planner/scheduler module of the IIFS can oversee the use and execution of these algorithms, all information required by the planner/scheduler must be provided to the IIFS development team to ensure the timely integration of these algorithms into the overall system.

  18. A sparse-sampling strategy for the estimation of large-scale clustering from redshift surveys

    NASA Astrophysics Data System (ADS)

    Kaiser, N.

    1986-04-01

    It is shown that, in the estimation of large-scale clustering, a sparsely sampled faint-magnitude-limited redshift survey can significantly reduce the uncertainty in the two-point function for a given investment of telescope time. The signal-to-noise ratio for a 1-in-20 bright galaxy sample is roughly twice that provided by a complete survey of the same cost, and this performance is the same as for a larger complete survey of about seven times the cost. A similar performance increase is achieved with multiple redshift collection on a wide-field telescope from a survey with close to full sky coverage. Little performance improvement is seen for smaller multiply-collected surveys, which are ideally sampled at a 1-in-10 bright galaxy rate. The optimum sampling fraction for Abell's rich clusters is found to be close to unity, with little improvement from sparse sampling.

  19. Time-calibrated estimates of oceanographic profiles using empirical orthogonal functions and clustering

    NASA Astrophysics Data System (ADS)

    Hjelmervik, Karina; Hjelmervik, Karl Thomas

    2014-05-01

    Oceanographic climatology is widely used in different applications, such as climate studies, ocean model validation and planning of naval operations. Conventional climatological estimates are based on historic measurements, typically by averaging the measurements and thereby smoothing local phenomena. Such phenomena are often local in time and space, but crucial to some applications. Here, we propose a new method to estimate time-calibrated oceanographic profiles based on combined historic and real-time measurements. The real-time measurements may, for instance, be SAR pictures or autonomous underwater vehicles providing temperature values at a limited set of depths. The method employs empirical orthogonal functions and clustering on a training data set in order to divide the ocean into climatological regions. The real-time measurements are first used to determine what climatological region is most representative. Secondly, an improved estimate is determined using an optimisation approach that minimises the difference between the real-time measurements and the final estimate.
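
In outline: the EOFs come from a PCA of the historic profiles, clustering the EOF coefficients defines climatological regions, the sparse real-time values pick the most representative region, and a least-squares fit of the EOF coefficients to those values refines the estimate. A hedged sketch of that pipeline (the grid, component counts, and selection rule are my assumptions, and `fit_profile` is a hypothetical helper, not the authors' code):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def fit_profile(train, depths_idx, obs, n_eof=3, n_clusters=4):
    """Estimate a full profile from sparse real-time values at depths_idx.

    train : (n_profiles, n_depths) historic profiles on a fixed depth grid.
    obs   : observed values at the grid indices in depths_idx.
    """
    pca = PCA(n_components=n_eof).fit(train)          # EOFs of the training set
    km = KMeans(n_clusters=n_clusters, n_init=10,
                random_state=0).fit(pca.transform(train))
    # pick the climatological region whose mean profile best matches the data
    means = pca.inverse_transform(km.cluster_centers_)
    region = int(np.argmin(((means[:, depths_idx] - obs) ** 2).sum(axis=1)))
    # refine: least-squares EOF coefficients constrained by the observations
    A = pca.components_[:, depths_idx].T              # (n_obs, n_eof)
    c, *_ = np.linalg.lstsq(A, obs - pca.mean_[depths_idx], rcond=None)
    return pca.mean_ + c @ pca.components_, region

# synthetic demo: 500 historic temperature profiles on a 50-level grid
rng = np.random.default_rng(0)
depth = np.linspace(0.0, 400.0, 50)
train = 15.0 - depth / 40.0 + rng.normal(0.0, 0.3, (500, 50))
idx = np.array([0, 10, 25])
estimate, region = fit_profile(train, idx, train[0, idx])
```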

  1. Estimating cougar predation rates from GPS location clusters

    USGS Publications Warehouse

    Anderson, C.R., Jr.; Lindzey, F.G.

    2003-01-01

    We examined cougar (Puma concolor) predation from Global Positioning System (GPS) location clusters (≥2 locations within 200 m on the same or consecutive nights) of 11 cougars during September-May, 1999-2001. Location success of GPS averaged 2.4-5.0 of 6 location attempts/night/cougar. We surveyed potential predation sites during summer-fall 2000 and summer 2001 to identify prey composition (n = 74; 3-388 days post predation) and record predation-site variables (n = 97; 3-270 days post predation). We developed a model to estimate the probability that a cougar killed a large mammal from data collected at GPS location clusters, where the probability of predation increased with the number of nights (defined as locations at 2200, 0200, or 0500 hr) of cougar presence within a 200-m radius (P < 0.001). Mean estimated cougar predation rates for large mammals were 7.3 days/kill for subadult females (1-2.5 yr; n = 3, 90% CI: 6.3 to 9.9), 7.0 days/kill for adult females (n = 2, 90% CI: 5.8 to 10.8), 5.4 days/kill for family groups (females with young; n = 3, 90% CI: 4.5 to 8.4), 9.5 days/kill for a subadult male (1-2.5 yr; n = 1, 90% CI: 6.9 to 16.4), and 7.8 days/kill for adult males (n = 2, 90% CI: 6.8 to 10.7). We may have slightly overestimated cougar predation rates due to our inability to separate scavenging from predation. We detected 45 deer (Odocoileus spp.), 15 elk (Cervus elaphus), 6 pronghorn (Antilocapra americana), 2 livestock, 1 moose (Alces alces), and 6 small mammals at cougar predation sites. Comparisons between cougar sexes suggested that females selected mule deer and males selected elk (P < 0.001). Cougars averaged 3.0 nights on pronghorn carcasses, 3.4 nights on deer carcasses, and 6.0 nights on elk carcasses. Most cougar predation (81.7%) occurred between 1901-0500 hr and peaked from 2201-0200 hr (31.7%). Applying GPS technology to identify predation rates and prey selection will allow managers to efficiently estimate the ability of an area's prey base to
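
The cluster definition used here (≥2 locations within 200 m on the same or consecutive nights) can be turned into a simple grouping pass over the fixes. A naive sketch, not the authors' code:

```python
import numpy as np

def gps_clusters(x, y, night, radius_m=200.0, max_gap=1):
    """Greedy pass over GPS fixes implementing the paper's cluster rule:
    >=2 locations within radius_m on the same or consecutive nights.

    x, y  : projected coordinates in meters; night : integer night index,
    sorted ascending. Returns a cluster id per fix (-1 = unclustered).
    """
    n = len(x)
    cid = np.full(n, -1)
    next_id = 0
    for i in range(n):
        for j in range(i - 1, -1, -1):
            if night[i] - night[j] > max_gap:
                break                                  # earlier fixes are too old
            if np.hypot(x[i] - x[j], y[i] - y[j]) <= radius_m:
                if cid[j] == -1:
                    cid[j] = next_id
                    next_id += 1
                cid[i] = cid[j]
                break
    return cid
```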

  2. A Hierarchical Clustering Methodology for the Estimation of Toxicity

    EPA Science Inventory

    A Quantitative Structure Activity Relationship (QSAR) methodology based on hierarchical clustering was developed to predict toxicological endpoints. This methodology utilizes Ward's method to divide a training set into a series of structurally similar clusters. The structural sim...

  3. Improved dose estimates for nuclear criticality accidents

    SciTech Connect

    Wilkinson, A.D.; Basoglu, B.; Bentley, C.L.; Dunn, M.E.; Plaster, M.J.; Dodds, H.L.; Haught, C.F.; Yamamoto, T.; Hopper, C.M.

    1995-08-01

    Slide rules are improved for estimating doses and dose rates resulting from nuclear criticality accidents. The original slide rules were created for highly enriched uranium solutions and metals using hand calculations along with the decades old Way-Wigner radioactive decay relationship and the inverse square law. This work uses state-of-the-art methods and better data to improve the original slide rules and also to extend the slide rule concept to three additional systems; i.e., highly enriched (93.2 wt%) uranium damp (H/{sup 235}U = 10) powder (U{sub 3}O{sub 8}) and low-enriched (5 wt%) uranium mixtures (UO{sub 2}F{sub 2}) with a H/{sup 235}U ratio of 200 and 500. Although the improved slide rules differ only slightly from the original slide rules, the improved slide rules and also the new slide rules can be used with greater confidence since they are based on more rigorous methods and better nuclear data.
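
    The original slide rules rest on the Way-Wigner decay relationship combined with the inverse square law; a back-of-the-envelope version of that scaling (with an invented reference dose rate, not slide-rule data) is:

      def dose_rate(r_m, t_min, d0=1.0e3, t0=1.0, r0=1.0):
          """Crude criticality-accident dose-rate scaling.

          Way-Wigner t**-1.2 fission-product decay combined with the
          inverse square law; d0 is an assumed dose rate at distance
          r0 (m) and time t0 (min) after the excursion. Illustrative
          numbers only."""
          return d0 * (t_min / t0) ** -1.2 * (r0 / r_m) ** 2

      # Example: 100 m from the excursion, one hour after it.
      print(dose_rate(r_m=100.0, t_min=60.0))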

  4. Process control improvements realized in a vertical reactor cluster tool

    NASA Astrophysics Data System (ADS)

    Werkhoven, Chris J.; Granneman, E. H.; Lindow, E.

    1993-04-01

    Advanced cell structures present in high-density memories and logic devices require high-quality, ultra-thin dielectric and conductor films. By controlling the interface properties of such films, remarkable process control enhancements of manufacturing-proven vertical LPCVD and oxidation processes are realized. To this end, an HF/H2O vapor etch reactor is integrated in a vacuum cluster tool comprising vertical reactors for the various LPCVD and oxidation processes. Data on process control improvement are provided for polysilicon emitters, polysilicon contacts, polysilicon gates, and NO capacitors. Finally, the cost of ownership of cluster tool use is compared with that of stand-alone equipment.

  5. The Effect of Mergers on Galaxy Cluster Mass Estimates

    NASA Astrophysics Data System (ADS)

    Johnson, Ryan E.; Zuhone, John A.; Thorsen, Tessa; Hinds, Andre

    2015-08-01

    At vertices within the filamentary structure that describes the universal matter distribution, clusters of galaxies grow hierarchically through merging with other clusters. As such, the most massive galaxy clusters should have experienced many such mergers in their histories. Though we cannot see them evolve over time, these mergers leave lasting, measurable effects in the cluster galaxies' phase space. By simulating several different galaxy cluster mergers here, we examine how the cluster galaxies' kinematics are altered as a result of these mergers. We also examine the effect of the line-of-sight viewing angle with respect to the merger axis. In projecting the 6-dimensional galaxy phase space onto a 3-dimensional plane, we are able to simulate how these clusters might actually appear to optical redshift surveys. We find that for the optical cluster statistics most often used as a proxy for cluster mass (variants of σv), the uncertainty due to an imprecise or unknown line of sight may alter the derived cluster masses more than the kinematic disturbance of the merger itself. Finally, by examining these and several other clustering statistics, we find that significant events (such as pericentric crossings) are identifiable over a range of merger initial conditions and from many different lines of sight.
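
    For context, a generic sigma_v-based mass proxy of the kind referred to above; this virial-type scaling is standard but is not the paper's pipeline, and all inputs are invented.

      import numpy as np

      G = 4.30091e-6            # gravitational constant, kpc (km/s)^2 / Msun

      def virial_mass(v_los, r_kpc):
          """M ~ 5 * sigma_v^2 * R / G from line-of-sight member
          velocities; mergers or an unlucky viewing angle bias sigma_v
          and hence the derived mass."""
          sigma = np.std(v_los, ddof=1)
          return 5.0 * sigma ** 2 * r_kpc / G

      rng = np.random.default_rng(1)
      members = rng.normal(0.0, 900.0, size=200)   # fake velocities, km/s
      print(f"{virial_mass(members, 1500.0):.2e} Msun")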

  6. Accounting for One-Group Clustering in Effect-Size Estimation

    ERIC Educational Resources Information Center

    Citkowicz, Martyna; Hedges, Larry V.

    2013-01-01

    In some instances, intentionally or not, study designs are such that there is clustering in one group but not in the other. This paper describes methods for computing effect size estimates and their variances when there is clustering in only one group and the analysis has not taken that clustering into account. The authors provide the effect size…
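
    The authors' formulas are truncated here, but the core adjustment is the standard variance inflation for a clustered arm, 1 + (m - 1) * ICC, applied to the clustered group only; a sketch under that assumption:

      def design_effect(m, icc):
          """Variance inflation for equal clusters of size m with
          intraclass correlation icc."""
          return 1.0 + (m - 1) * icc

      # Treatment arm clustered (e.g. classrooms of 25, ICC = 0.10),
      # control arm unclustered: only the treatment-mean variance grows.
      n_t, n_c, m, icc, sigma2 = 500, 500, 25, 0.10, 1.0
      var_diff = sigma2 / n_t * design_effect(m, icc) + sigma2 / n_c
      print(var_diff)   # vs. the naive 2 * sigma2 / 500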

  7. An improvement to the cluster recognition model for peripheral collisions

    SciTech Connect

    Garcia-Solis, E.J.; Mignerey, A.C.

    1996-02-01

    Among the microscopic dynamical simulations used to study the evolution of nuclear collisions at energies around 100 MeV, it has been found that BUU-type calculations adequately describe the general features of nuclear collisions in that energy regime. The BUU method consists of the numerical solution of the modified Vlasov equation for a generated phase-space distribution of nucleons. It generally describes the first stages of a nuclear reaction satisfactorily; however, it is not able to separate the fragments formed during the projectile-target interaction. It is therefore necessary to insert a clusterization procedure to obtain the primary fragments of the reaction. The general description of the clustering model proposed by the authors can be found elsewhere. The current paper deals with improvements that have been made to the clustering procedure.

  8. Comparative analysis of missing value imputation methods to improve clustering and interpretation of microarray experiments

    PubMed Central

    2010-01-01

    Background Microarray technologies produce large amounts of data. In a previous study, we showed the value of the k-Nearest Neighbour approach for restoring missing gene expression values, and its positive impact on gene clustering by hierarchical algorithms. Since then, numerous replacement methods have been proposed to impute missing values (MVs) in microarray data. In this study, we evaluated twelve usable methods and their influence on the quality of gene clustering, using several datasets from both kinetic and non-kinetic experiments on yeast and human. Results We underline the excellent efficiency of the approaches proposed and implemented by Bo and co-workers, especially the one based on expectation maximization (EM_array). These improvements were also observed for the imputation of extreme values, the values most difficult to predict. We show that imputed MVs still have important effects on the stability of the gene clusters. The improvement obtained with hierarchical clustering remains limited and is not sufficient to completely restore the correct gene associations. However, a common tendency can be found between the quality of the imputation method and gene cluster stability. Even though comparing clustering algorithms is a complex task, we observed that the k-means approach is more efficient at conserving gene associations. Conclusions More than 6,000,000 independent simulations assessed the quality of 12 imputation methods on five very different biological datasets. Important improvements have thus been made since our last study. The EM_array approach constitutes one efficient method for restoring missing expression values, with a lower estimation error level. Nonetheless, the presence of MVs, even at a low rate, is a major factor of gene cluster instability. Our study highlights the need for a systematic assessment of imputation methods and thus of dedicated benchmarks. A
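
    The kNN imputation step is available off the shelf; the EM-based imputers favoured by the study (e.g. EM_array) are not in scikit-learn, so kNN stands in here on invented data.

      import numpy as np
      from sklearn.impute import KNNImputer

      # Hypothetical expression matrix (genes x conditions) with 5% MVs.
      rng = np.random.default_rng(0)
      expr = rng.normal(size=(500, 12))
      expr[rng.random(expr.shape) < 0.05] = np.nan

      # k-Nearest Neighbour imputation before clustering the genes.
      imputed = KNNImputer(n_neighbors=10).fit_transform(expr)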

  9. A Multicriteria Decision Making Approach for Estimating the Number of Clusters in a Data Set

    PubMed Central

    Peng, Yi; Zhang, Yong; Kou, Gang; Shi, Yong

    2012-01-01

    Determining the number of clusters in a data set is an essential yet difficult step in cluster analysis. Since this task involves more than one criterion, it can be modeled as a multiple criteria decision making (MCDM) problem. This paper proposes an MCDM-based approach to estimate the number of clusters for a given data set. In this approach, MCDM methods treat different numbers of clusters as alternatives and the outputs of any clustering algorithm on validity measures as criteria. The proposed method is examined in an experimental study using three MCDM methods, the well-known k-means clustering algorithm, ten relative measures, and fifteen public-domain UCI machine learning data sets. The results show that MCDM methods work fairly well in estimating the number of clusters in the data and outperform the ten relative measures considered in the study. PMID:22870181
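
    The approach treats candidate cluster numbers as MCDM alternatives scored by validity measures. A toy version with three relative measures and a crude average-rank aggregation standing in for a formal MCDM method:

      from sklearn.cluster import KMeans
      from sklearn.datasets import make_blobs
      from sklearn.metrics import (silhouette_score, calinski_harabasz_score,
                                   davies_bouldin_score)

      X, _ = make_blobs(n_samples=300, centers=4, random_state=0)

      scores = {}
      for k in range(2, 9):                        # alternatives: k = 2..8
          labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
          scores[k] = (silhouette_score(X, labels),         # higher is better
                       calinski_harabasz_score(X, labels),  # higher is better
                       -davies_bouldin_score(X, labels))    # lower is better

      # Rank the alternatives under each criterion and average the ranks.
      totals = {k: 0 for k in scores}
      for i in range(3):
          for rank, k in enumerate(sorted(scores, key=lambda k: scores[k][i],
                                          reverse=True)):
              totals[k] += rank
      best_k = min(totals, key=totals.get)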

  10. IMPROVED RISK ESTIMATES FOR CARBON TETRACHLORIDE

    SciTech Connect

    Benson, Janet M.; Springer, David L.

    1999-12-31

    Carbon tetrachloride has been used extensively within the DOE nuclear weapons facilities. Rocky Flats was formerly the largest volume consumer of CCl4 in the United States, using 5000 gallons in 1977 alone (Ripple, 1992). At the Hanford site, several hundred thousand gallons of CCl4 were discharged between 1955 and 1973 into underground cribs for storage. Levels of CCl4 in groundwater at highly contaminated sites at the Hanford facility have exceeded the drinking water standard of 5 ppb by several orders of magnitude (Illman, 1993). High levels of CCl4 at these facilities represent a potential health hazard for workers conducting cleanup operations and for surrounding communities. The level of CCl4 cleanup required at these sites and associated costs are driven by current human health risk estimates, which assume that CCl4 is a genotoxic carcinogen. The overall purpose of these studies was to improve the scientific basis for assessing the health risk associated with human exposure to CCl4. Specific research objectives of this project were to: (1) compare the rates of CCl4 metabolism by rats, mice and hamsters in vivo and extrapolate those rates to man based on parallel studies on the metabolism of CCl4 by rat, mouse, hamster and human hepatic microsomes in vitro; (2) using hepatic microsome preparations, determine the role of specific cytochrome P450 isoforms in CCl4-mediated toxicity and the effects of repeated inhalation and ingestion of CCl4 on these isoforms; and (3) evaluate the toxicokinetics of inhaled CCl4 in rats, mice and hamsters. This information has been used to improve the physiologically based pharmacokinetic (PBPK) model for CCl4 originally developed by Paustenbach et al. (1988) and more recently revised by Thrall and Kenny (1996). Another major objective of the project was to provide scientific evidence that CCl4, like chloroform, is a hepatocarcinogen only when exposure results in cell damage, cell killing and regenerative proliferation. In

  11. Research opportunities to improve DSM impact estimates

    SciTech Connect

    Misuriello, H.; Hopkins, M.E.F.

    1992-03-01

    This report was commissioned by the California Institute for Energy Efficiency (CIEE) as part of its research mission to advance the energy efficiency and productivity of all end-use sectors in California. Our specific goal in this effort has been to identify viable research and development (R&D) opportunities that can improve capabilities to determine the energy-use and demand reductions achieved through demand-side management (DSM) programs and measures. We surveyed numerous practitioners in California and elsewhere to identify the major obstacles to effective impact evaluation, drawing on their collective experience. As a separate effort, we have also profiled the status of regulatory practices in leading states with respect to DSM impact evaluation. We have synthesized this information, adding our own perspective and experience to those of our survey-respondent colleagues, to characterize today's state of the art in impact-evaluation practices. This scoping study takes a comprehensive look at the problems and issues involved in DSM impact estimates at the customer-facility or site level. The major portion of our study investigates three broad topic areas of interest to CIEE: data analysis issues, field-monitoring issues, and issues in evaluating DSM measures. Across these three topic areas, we have identified 22 potential R&D opportunities, to which we have assigned priority levels. These R&D opportunities are listed by topic area and priority.

  13. Estimation of Carcinogenicity using Hierarchical Clustering and Nearest Neighbor Methodologies

    EPA Science Inventory

    Previously a hierarchical clustering (HC) approach and a nearest neighbor (NN) approach were developed to model acute aquatic toxicity end points. These approaches were developed to correlate the toxicity for large, noncongeneric data sets. In this study these approaches applie...

  14. A comparison of acromion marker cluster calibration methods for estimating scapular kinematics during upper extremity ergometry.

    PubMed

    Richardson, R Tyler; Nicholson, Kristen F; Rapp, Elizabeth A; Johnston, Therese E; Richards, James G

    2016-05-01

    Accurate measurement of joint kinematics is required to understand the musculoskeletal effects of a therapeutic intervention such as upper extremity (UE) ergometry. Traditional surface-based motion capture is effective for quantifying humerothoracic motion, but scapular kinematics are challenging to obtain. Methods for estimating scapular kinematics include the widely-reported acromion marker cluster (AMC) which utilizes a static calibration between the scapula and the AMC to estimate the orientation of the scapula during motion. Previous literature demonstrates that including additional calibration positions throughout the motion improves AMC accuracy for single plane motions; however this approach has not been assessed for the non-planar shoulder complex motion occurring during UE ergometry. The purpose of this study was to evaluate the accuracy of single, dual, and multiple AMC calibration methods during UE ergometry. The orientations of the UE segments of 13 healthy subjects were recorded with motion capture. Scapular landmarks were palpated at eight evenly-spaced static positions around the 360° cycle. The single AMC method utilized one static calibration position to estimate scapular kinematics for the entire cycle, while the dual and multiple AMC methods used two and four static calibration positions, respectively. Scapulothoracic angles estimated by the three AMC methods were compared with scapulothoracic angles determined by palpation. The multiple AMC method produced the smallest RMS errors and was not significantly different from palpation about any axis. We recommend the multiple AMC method as a practical and accurate way to estimate scapular kinematics during UE ergometry. PMID:26976228

  15. Towards Improved Estimates of Ocean Heat Flux

    NASA Astrophysics Data System (ADS)

    Bentamy, Abderrahim; Hollman, Rainer; Kent, Elisabeth; Haines, Keith

    2014-05-01

    Recommendations and priorities for ocean heat flux research are outlined, for instance, in recent CLIVAR and WCRP reports, e.g., Yu et al. (2013). Among these is the need to improve the accuracy, consistency, and spatial and temporal resolution of air-sea fluxes at global as well as regional scales. To meet the main air-sea flux requirements, this study aims to obtain and analyze all the heat flux components (latent, sensible and radiative) at the ocean surface over the global oceans using multiple satellite sensor observations in combination with in-situ measurements and numerical model analyses. The fluxes will be generated daily and monthly for the 20-year (1992-2011) period, between 80N and 80S and at 0.25deg resolution. Simultaneous estimates of all surface heat flux terms have not yet been calculated at such a large scale and over such a long time period. Such an effort requires a wide range of expertise and data sources that have only recently become available. Needed are methods for integrating many data sources to calculate energy fluxes (short-wave, long-wave, sensible and latent heat) across the air-sea interface. We have access to all the relevant, recently available satellite data to perform such computations. Yu, L., K. Haines, M. Bourassa, M. Cronin, S. Gulev, S. Josey, S. Kato, A. Kumar, T. Lee, D. Roemmich: Towards achieving global closure of ocean heat and freshwater budgets: Recommendations for advancing research in air-sea fluxes through collaborative activities. INTERNATIONAL CLIVAR PROJECT OFFICE, 2013: International CLIVAR Publication Series No 189. http://www.clivar.org/sites/default/files/ICPO189_WHOI_fluxes_workshop.pdf

  16. An improved distance matrix computation algorithm for multicore clusters.

    PubMed

    Al-Neama, Mohammed W; Reda, Naglaa M; Ghaleb, Fayed F M

    2014-01-01

    Distance matrices have diverse uses in different research areas, and their computation is typically an essential task in most bioinformatics applications, especially in multiple sequence alignment. The gigantic explosion of biological sequence databases leads to an urgent need to accelerate these computations. The DistVect algorithm was introduced by Al-Neama et al. (in press) as a recent approach to vectorizing distance matrix computation, and it showed efficient performance in both sequential and parallel computing. However, the multicore cluster systems now available, with their scalability and performance/cost ratio, meet the need for more powerful and efficient performance. This paper proposes DistVect1, a highly efficient parallel vectorized algorithm for computing distance matrices on multicore clusters. It reformulates the DistVect vectorized algorithm in terms of cluster primitives and derives an efficient approach to partitioning and scheduling computations suited to this type of architecture. Implementations employ the potential of both the MPI and OpenMP libraries. Experimental results show that the proposed method achieves around a 3-fold speedup over SSE2 and speedups of more than 9 orders of magnitude compared to the publicly available parallel implementation utilized in ClustalW-MPI. PMID:25013779
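
    The block decomposition behind such parallel distance-matrix codes can be mimicked in miniature: split the rows into blocks, compute each block's distances in vectorised form, and fan the blocks out to workers. This toy Hamming-distance version only stands in for the MPI+OpenMP scheme described.

      import numpy as np
      from multiprocessing import Pool

      # Toy stand-in: sequences as equal-length integer arrays.
      rng = np.random.default_rng(0)
      seqs = rng.integers(0, 4, size=(400, 1000))

      def row_block(bounds):
          """Vectorised Hamming distances of rows [i0, i1) vs. all rows."""
          i0, i1 = bounds
          return (seqs[i0:i1, None, :] != seqs[None, :, :]).mean(axis=2)

      if __name__ == "__main__":
          edges = list(range(0, len(seqs) + 1, 100))
          with Pool(4) as pool:                  # process-level parallelism
              D = np.vstack(pool.map(row_block, zip(edges[:-1], edges[1:])))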

  17. Improved Yield Estimation by Trellis Tension Monitoring

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Most yield estimation practices for commercial vineyards rely on hand-sampling fruit on one or a small number of dates during the growing season. Limitations associated with the static yield estimates may be overcome with Trellis Tension Monitors (TTMs), systems that dynamically measure changes in t...

  18. Improved Estimation by Trellis Tension Monitoring

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Most yield estimation practices for commercial vineyards are based on longstanding but individually variable industry protocols that rely on hand sampling fruit on one or a small number of dates during the growing season. Limitations associated with the static nature of yield estimation may be overc...

  19. Communication: Improved pair approximations in local coupled-cluster methods

    SciTech Connect

    Schwilk, Max; Werner, Hans-Joachim; Usvyat, Denis

    2015-03-28

    In local coupled cluster treatments the electron pairs can be classified according to the magnitude of their energy contributions or distances into strong, close, weak, and distant pairs. Different approximations are introduced for the latter three classes. In this communication, an improved simplified treatment of close and weak pairs is proposed, which is based on long-range cancellations of individually slowly decaying contributions in the amplitude equations. Benchmark calculations for correlation, reaction, and activation energies demonstrate that these approximations work extremely well, while pair approximations based on local second-order Møller-Plesset theory can lead to errors that are 1-2 orders of magnitude larger.

  20. A novel method to estimate the impact parameter on a drift cell by using the information of single ionization clusters

    NASA Astrophysics Data System (ADS)

    Signorelli, G.; D'Onofrio, A.; Venturini, M.

    2016-07-01

    Measuring the time of each ionization cluster in drift chambers has been proposed to improve the single-hit resolution, especially for very low mass tracking systems. Ad hoc formulae have been developed to combine the information from the single clusters. We show that the problem falls into a wide category of problems that can be solved with an algorithm called Maximum Possible Spacing (MPS), which has been demonstrated to find the optimal estimator. We show that the MPS approach is applicable and gives the expected results. Its application in a real tracking device, namely the MEG II cylindrical drift chamber, is discussed.

  1. Infant immunization coverage in Italy: estimates by simultaneous EPI cluster surveys of regions. ICONA Study Group.

    PubMed Central

    Salmaso, S.; Rota, M. C.; Ciofi Degli Atti, M. L.; Tozzi, A. E.; Kreidl, P.

    1999-01-01

    In 1998, a series of regional cluster surveys (the ICONA Study) was conducted simultaneously in 19 out of the 20 regions in Italy to estimate the mandatory immunization coverage of children aged 12-24 months with oral poliovirus (OPV), diphtheria-tetanus (DT) and viral hepatitis B (HBV) vaccines, as well as optional immunization coverage with pertussis, measles and Haemophilus influenzae b (Hib) vaccines. The study children were born in 1996 and selected from birth registries using the Expanded Programme of Immunization (EPI) cluster sampling technique. Interviews with parents were conducted to determine each child's immunization status and the reasons for any missed or delayed vaccinations. The study population comprised 4310 children aged 12-24 months. Coverage for both mandatory and optional vaccinations differed by region. The overall coverage for mandatory vaccines (OPV, DT and HBV) exceeded 94%, but only 79% had been vaccinated in accord with the recommended schedule (i.e. during the first year of life). Immunization coverage for pertussis increased from 40% (1993 survey) to 88%, but measles coverage (56%) remained inadequate for controlling the disease; Hib coverage was 20%. These results confirm that in Italy the coverage of only mandatory immunizations is satisfactory. Pertussis immunization coverage has improved dramatically since the introduction of acellular vaccines. A greater effort to educate parents and physicians is still needed to improve the coverage of optional vaccinations in all regions. PMID:10593033

  2. Estimated number of field stars toward Galactic globular clusters and Local Group Galaxies

    NASA Technical Reports Server (NTRS)

    Ratnatunga, K. U.; Bahcall, J. N.

    1985-01-01

    Field star densities are estimated for 89 fields with |b| greater than 10 degrees based on the Galaxy model of Bahcall and Soneira (1980, 1984; Bahcall et al. 1985). Calculated tables are presented for 76 of the fields toward Galactic globular clusters, and 16 Local Group Galaxies in 13 fields. The estimates can be used as an initial guide for planning both ground-based and Space Telescope observations of globular clusters at intermediate-to-high Galactic latitudes.

  3. Performance of small cluster surveys and the clustered LQAS design to estimate local-level vaccination coverage in Mali

    PubMed Central

    2012-01-01

    Background Estimation of vaccination coverage (VC) at the local level is essential to identify communities that may require additional support. Cluster surveys can be used in resource-poor settings when population figures are inaccurate. To be feasible, cluster samples need to be small without losing robustness of results. The clustered LQAS (CLQAS) approach has been proposed as an alternative, as smaller sample sizes are required. Methods We explored (i) the efficiency of cluster surveys of decreasing sample size through bootstrapping analysis and (ii) the performance of CLQAS under three alternative sampling plans to classify local VC, using data from a survey carried out in Mali after mass vaccination against meningococcal meningitis group A. Results VC estimates provided by a 10 × 15 cluster survey design were reasonably robust. We used them to classify health areas in three categories and guide mop-up activities: i) health areas not requiring supplemental activities; ii) health areas requiring additional vaccination; iii) health areas requiring further evaluation. As sample size decreased (from 10 × 15 to 10 × 3), standard errors of VC and ICC estimates became increasingly unstable. Results of CLQAS simulations were not accurate for most health areas, with an overall risk of misclassification greater than 0.25 in one health area out of three. It was greater than 0.50 in one health area out of two under two of the three sampling plans. Conclusions Small sample cluster surveys (10 × 15) are acceptably robust for classification of VC at the local level. We do not recommend the CLQAS method as currently formulated for evaluating vaccination programmes. PMID:23057445
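
    The bootstrapping analysis of shrinking designs can be sketched as cluster-level resampling with within-cluster subsampling; the survey data below are simulated, not the Mali data.

      import numpy as np

      rng = np.random.default_rng(0)

      # Simulated survey: 10 clusters x 15 children, 1 = vaccinated.
      data = rng.binomial(1, rng.uniform(0.6, 0.95, 10)[:, None], size=(10, 15))

      def bootstrap_se(data, n_children, n_boot=2000):
          """SE of the coverage estimate for a reduced 10 x n design:
          resample clusters with replacement, subsample children."""
          ests = []
          for _ in range(n_boot):
              rows = rng.integers(0, data.shape[0], data.shape[0])
              sub = [rng.choice(data[r], n_children, replace=False) for r in rows]
              ests.append(np.mean(sub))
          return np.std(ests)

      for n in (15, 10, 5, 3):        # the SE grows as the design shrinks
          print(n, round(bootstrap_se(data, n), 3))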

  4. Improving Reliability of Subject-Level Resting-State fMRI Parcellation with Shrinkage Estimators

    PubMed Central

    Mejia, Amanda F.; Nebel, Mary Beth; Shou, Haochang; Crainiceanu, Ciprian M.; Pekar, James J.; Mostofsky, Stewart; Caffo, Brian; Lindquist, Martin A.

    2015-01-01

    A recent interest in resting state functional magnetic resonance imaging (rsfMRI) lies in subdividing the human brain into anatomically and functionally distinct regions of interest. For example, brain parcellation is often a necessary step for defining the network nodes used in connectivity studies. While inference has traditionally been performed on group-level data, there is a growing interest in parcellating single subject data. However, this is difficult due to the inherent low signal-to-noise ratio of rsfMRI data, combined with typically short scan lengths. A large number of brain parcellation approaches employ clustering, which begins with a measure of similarity or distance between voxels. The goal of this work is to improve the reproducibility of single-subject parcellation using shrinkage-based estimators of such measures, allowing the noisy subject-specific estimator to “borrow strength” in a principled manner from a larger population of subjects. We present several empirical Bayes shrinkage estimators and outline methods for shrinkage when multiple scans are not available for each subject. We perform shrinkage on raw inter-voxel correlation estimates and use both raw and shrinkage estimates to produce parcellations by performing clustering on the voxels. While we employ a standard spectral clustering approach, our proposed method is agnostic to the choice of clustering method and can be used as a pre-processing step for any clustering algorithm. Using two datasets – a simulated dataset where the true parcellation is known and is subject-specific and a test-retest dataset consisting of two 7-minute resting-state fMRI scans from 20 subjects – we show that parcellations produced from shrinkage correlation estimates have higher reliability and validity than those produced from raw correlation estimates. Application to test-retest data shows that using shrinkage estimators increases the reproducibility of subject-specific parcellations of the motor

  5. Improving reliability of subject-level resting-state fMRI parcellation with shrinkage estimators.

    PubMed

    Mejia, Amanda F; Nebel, Mary Beth; Shou, Haochang; Crainiceanu, Ciprian M; Pekar, James J; Mostofsky, Stewart; Caffo, Brian; Lindquist, Martin A

    2015-05-15

    A recent interest in resting state functional magnetic resonance imaging (rsfMRI) lies in subdividing the human brain into anatomically and functionally distinct regions of interest. For example, brain parcellation is often a necessary step for defining the network nodes used in connectivity studies. While inference has traditionally been performed on group-level data, there is a growing interest in parcellating single subject data. However, this is difficult due to the inherent low signal-to-noise ratio of rsfMRI data, combined with typically short scan lengths. A large number of brain parcellation approaches employ clustering, which begins with a measure of similarity or distance between voxels. The goal of this work is to improve the reproducibility of single-subject parcellation using shrinkage-based estimators of such measures, allowing the noisy subject-specific estimator to "borrow strength" in a principled manner from a larger population of subjects. We present several empirical Bayes shrinkage estimators and outline methods for shrinkage when multiple scans are not available for each subject. We perform shrinkage on raw inter-voxel correlation estimates and use both raw and shrinkage estimates to produce parcellations by performing clustering on the voxels. While we employ a standard spectral clustering approach, our proposed method is agnostic to the choice of clustering method and can be used as a pre-processing step for any clustering algorithm. Using two datasets - a simulated dataset where the true parcellation is known and is subject-specific and a test-retest dataset consisting of two 7-minute resting-state fMRI scans from 20 subjects - we show that parcellations produced from shrinkage correlation estimates have higher reliability and validity than those produced from raw correlation estimates. Application to test-retest data shows that using shrinkage estimators increases the reproducibility of subject-specific parcellations of the motor cortex by
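
    A minimal sketch of the shrinkage idea: pull each subject's noisy inter-voxel correlation matrix toward the group mean before clustering. The fixed noise fraction below is an assumption made for brevity; with test-retest scans the within-subject variance would be estimated from the repeated measurements instead.

      import numpy as np

      def shrink_correlations(subject_corrs, noise_frac=0.5):
          """Shrink each subject's correlation matrix toward the group
          mean; noise_frac approximates the share of between-subject
          variance attributable to sampling noise (assumed here)."""
          corrs = np.asarray(subject_corrs)      # (n_subjects, v, v)
          group_mean = corrs.mean(axis=0)
          return noise_frac * group_mean + (1.0 - noise_frac) * corrs

      # The shrunken matrices would then feed per-subject spectral clustering.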

  6. IMPROVED RISK ESTIMATES FOR CARBON TETRACHLORIDE

    EPA Science Inventory

    Carbon tetrachloride (CCl4) has been used extensively within the Department of Energy (DOE) nuclear weapons facilities. Costs associated with cleanup of CCl4 at DOE facilities are driven by current cancer risk estimates which assume CCl4 is a genotoxic carcinogen. However, a grow...

  7. Clustering-based urbanisation to improve enterprise information systems agility

    NASA Astrophysics Data System (ADS)

    Imache, Rabah; Izza, Said; Ahmed-Nacer, Mohamed

    2015-11-01

    Enterprises face daily pressure to demonstrate their ability to adapt quickly to the unpredictable changes in their dynamic environment in terms of technology, social and legislative factors, competitiveness and globalisation. Thus, to secure its place in this hard context, an enterprise must always be agile and must ensure its sustainability through continuous improvement of its information system (IS). Therefore, the agility of enterprise information systems (EISs) can today be considered a primary objective of any enterprise. One way of achieving this objective is the urbanisation of the EIS in the context of continuous improvement, to make it a real asset serving enterprise strategy. This paper investigates the benefits of EIS urbanisation based on clustering techniques as a driver for producing and/or improving agility, to help managers and IT departments continuously improve the performance of the enterprise and make appropriate decisions within the scope of the enterprise objectives and strategy. This approach is applied to the urbanisation of a tour operator's EIS.

  8. Cluster Structure in Cosmological Simulations. I. Correlation to Observables, Mass Estimates, and Evolution

    NASA Astrophysics Data System (ADS)

    Jeltema, Tesla E.; Hallman, Eric J.; Burns, Jack O.; Motl, Patrick M.

    2008-07-01

    We use Enzo, a hybrid Eulerian adaptive mesh refinement/N-body code including nongravitational heating and cooling, to explore the morphology of the X-ray gas in clusters of galaxies and its evolution in current-generation cosmological simulations. We employ and compare two observationally motivated structure measures: power ratios and centroid shift. Overall, the structure of our simulated clusters compares remarkably well to low-redshift observations, although some differences remain that may point to incomplete gas physics. We find no dependence on cluster structure in the mass-observable scaling relations, TX-M and YX-M, when using the true cluster masses. However, estimates of the total mass based on the assumption of hydrostatic equilibrium, as assumed in observational studies, are systematically low. We show that the hydrostatic mass bias strongly correlates with cluster structure and, more weakly, with cluster mass. When the hydrostatic masses are used, the mass-observable scaling relations and gas mass fractions depend significantly on cluster morphology, and the true relations are not recovered even if the most relaxed clusters are used. We show that cluster structure, via the power ratios, can be used to effectively correct the hydrostatic mass estimates and mass scaling relations, suggesting that we can calibrate for this systematic effect in cosmological studies. Similar to observational studies, we find that cluster structure, particularly centroid shift, evolves with redshift. This evolution is mild but will lead to additional errors at high redshift. Projection along the line of sight leads to significant uncertainty in the structure of individual clusters: less than 50% of clusters which appear relaxed in projection based on our structure measures are truly relaxed.

  9. Extending Zelterman's approach for robust estimation of population size to zero-truncated clustered data.

    PubMed

    Navaratna, W C W; Del Rio Vilas, Victor J; Böhning, Dankmar

    2008-08-01

    Estimation of population size with a missing zero-class is an important problem encountered in epidemiological assessment studies. Fitting a Poisson model to the observed data by maximum likelihood and estimating the population size from this fit is an approach that has been widely used for this purpose. In practice, however, the Poisson assumption is seldom satisfied. Zelterman (1988) proposed a robust estimator for unclustered data that works well in a wide class of distributions applicable to count data. In the work presented here, we extend this estimator to clustered data. The estimator requires fitting a zero-truncated homogeneous Poisson model by maximum likelihood and then using a Horvitz-Thompson estimator of population size. This was found to work well when the data follow the hypothesized homogeneous Poisson model. However, when the true distribution deviates from the hypothesized model, the population size was found to be underestimated. In the search for a more robust estimator, we focused on three models that use the clusters with exactly one case, those with exactly two cases, and those with exactly three cases, respectively, to estimate the probability of the zero-class, and thereby use data collected on all the clusters in the Horvitz-Thompson estimator of population size. The loss in efficiency associated with the gain in robustness was examined in a simulation study. As a trade-off between gain in robustness and loss in efficiency, the model that uses data collected on clusters with at most three cases to estimate the probability of the zero-class was found to be preferred in general. In applications, we recommend obtaining estimates from all three models and choosing among them with robustness and the loss in efficiency in mind. PMID:18663764
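
    For the unclustered case, Zelterman's estimator needs only the frequencies of counts 1 and 2; the clustered extension in the paper builds on the same Horvitz-Thompson step. A sketch of the unclustered version:

      import numpy as np

      def zelterman_population_size(counts):
          """Zelterman (1988): lambda_hat = 2 * f2 / f1 from the
          frequencies of ones and twos, then the Horvitz-Thompson
          step N_hat = n / (1 - exp(-lambda_hat))."""
          counts = np.asarray(counts)
          f1 = np.sum(counts == 1)
          f2 = np.sum(counts == 2)
          lam = 2.0 * f2 / f1
          return counts.size / (1.0 - np.exp(-lam))

      # Cases per observed cluster (the zero-class clusters are unseen):
      print(zelterman_population_size([1, 1, 1, 1, 2, 2, 3, 1, 2, 1]))  # ~15.8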

  10. The estimation of masses of individual galaxies in clusters of galaxies.

    NASA Technical Reports Server (NTRS)

    Wolf, R. A.; Bahcall, J. N.

    1972-01-01

    Three different methods of estimating masses are discussed. The 'density method' is based on the analysis of the density distribution of galaxies around the object whose mass is to be found. The 'bound-galaxy method' gives estimates of the mass of a double, triple, or quadruple system from analysis of the orbital motion of the components. The 'virial method' utilizes the formulas derived for the second method to obtain estimates of the virial-theorem masses of whole clusters, and thus to obtain upper limits on the mass of an individual galaxy in a cluster. The analytic formulas are developed and compared with computer experiments, and some applications are given.

  11. A Clustering Classification of Spare Parts for Improving Inventory Policies

    NASA Astrophysics Data System (ADS)

    Meri Lumban Raja, Anton; Ai, The Jin; Diar Astanti, Ririn

    2016-02-01

    Inventory policies in a company may consist of storage, control, and replenishment policies. Since the result of a common ABC inventory classification can only affect the replenishment policy, we propose a clustering-based classification technique as a basis for developing inventory policy, especially storage and control policy. A hierarchical clustering procedure is used after the clustering variables are defined. Since hierarchical clustering requires metric variables only, a step converting non-metric variables to metric variables is performed. The clusters resulting from the clustering technique are analyzed in order to define each cluster's characteristics. The inventory policies are then determined for each group according to its characteristics. Real data consisting of 612 items from a local manufacturer's spare-parts warehouse are used to show the applicability of the proposed methodology.

  12. Structural Nested Mean Models to Estimate the Effects of Time-Varying Treatments on Clustered Outcomes.

    PubMed

    He, Jiwei; Stephens-Shields, Alisa; Joffe, Marshall

    2015-11-01

    In assessing the efficacy of a time-varying treatment, structural nested mean models (SNMMs) are useful in dealing with confounding by variables affected by earlier treatments. These models often consider treatment allocation and repeated measures at the individual level. We extend SNMMs to clustered observations with time-varying confounding and treatments. We demonstrate how to formulate models with both cluster- and unit-level treatments and show how to derive semiparametric estimators of the parameters in such models. For unit-level treatments, we consider interference, namely the effect of treatment on outcomes in other units of the same cluster. The properties of the estimators are evaluated through simulations and compared with the conventional GEE regression method for clustered outcomes. To illustrate our method, we use data from the treatment arm of a glaucoma clinical trial to compare the effectiveness of two commonly used ocular hypertension medications. PMID:26115504

  13. Recent improvements in ocean heat content estimation

    NASA Astrophysics Data System (ADS)

    Abraham, J. P.

    2015-12-01

    Increase of ocean heat content is an outcome of a persistent and ongoing energy imbalance in the Earth's climate system. This imbalance, largely caused by human emissions of greenhouse gases, has engendered a multi-decade increase in stored thermal energy within the Earth system, manifest principally as an increase in ocean heat content. Consequently, in order to quantify the rate of global warming, it is necessary to measure the rate of increase of ocean heat content. The historical record of ocean heat content is assembled from a variety of devices with differing spatial and temporal coverage across the globe. One of the most important historical devices is the eXpendable BathyThermograph (XBT), which has been used for decades to measure ocean temperatures to depths of 700 m and deeper. Here, recent progress in improving the XBT record of upper ocean heat content is described, including corrections to systematic biases, filling in of spatial gaps where data do not exist, and the selection of a proper climatology. In addition, comparisons of the revised historical record with CMIP5 climate models are made. Very good agreement is seen between the models and measurements, with the models slightly under-predicting the increase of ocean heat content in the upper water layers over the past 45 years.

  14. Improving the performance of molecular dynamics simulations on parallel clusters.

    PubMed

    Borstnik, Urban; Hodoscek, Milan; Janezic, Dusanka

    2004-01-01

    In this article a procedure is derived to obtain a performance gain for molecular dynamics (MD) simulations on existing parallel clusters. Parallel clusters connect multiple processors using a wide array of interconnection technologies that often run at different speeds, from multiple-processor computers to networking hardware. It is demonstrated how to configure existing MD simulation programs to efficiently handle collective communication on parallel clusters whose processor interconnections have different speeds. PMID:15032512

  15. Improving Collective Estimations Using Resistance to Social Influence

    PubMed Central

    Madirolas, Gabriel; de Polavieja, Gonzalo G.

    2015-01-01

    Groups can make precise collective estimations in cases like the weight of an object or the number of items in a volume. However, in other tasks, for example those requiring memory or mental calculation, subjects often give estimations that deviate widely from factual values. Allowing members of the group to communicate their estimations has the additional perverse effect of shifting individual estimations even closer to the biased collective estimation. Here we show that this negative effect of social interactions can be turned into a method to improve collective estimations. We first obtained a statistical model of how humans change their estimation when receiving the estimates made by other individuals. Using existing experimental data, we confirmed its prediction that individuals use the weighted geometric mean of private and social estimations. We then used this result, and the fact that each individual uses a different value of the social weight, to devise a method that extracts the subgroups resisting social influence. We found that these subgroups can make very large improvements in group estimations. This is in contrast to methods using the confidence that each individual declares, for which we find no improvement in group estimations. Also, our proposed method does not need historical data to weight individuals by performance. These results show the benefits of using the individual characteristics of the members of a group to better extract collective wisdom. PMID:26565619
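
    The confirmed update rule, a weighted geometric mean of private and social estimates, is a one-liner; w is an illustrative social weight, with w = 0 meaning full resistance to social influence.

      import numpy as np

      def updated_estimate(private, social, w):
          """Weighted geometric mean: estimates move toward the social
          value in log space."""
          return np.exp((1.0 - w) * np.log(private) + w * np.log(social))

      # A subject privately guessing 80 who sees a social estimate of 120:
      print(updated_estimate(80.0, 120.0, w=0.3))   # about 90.3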

  16. First Estimates of the Fundamental Parameters of Three Large Magellanic Cloud Clusters

    NASA Astrophysics Data System (ADS)

    Piatti, Andrés E.; Clariá, Juan J.; Parisi, María Celeste; Ahumada, Andrea V.

    2011-05-01

    As part of an ongoing project to investigate the cluster formation and chemical evolution history in the Large Magellanic Cloud (LMC), we have used the CTIO 0.9 m telescope to obtain CCD imaging in the Washington system of NGC 2161, SL 874, and KMHK 1719—three unstudied star clusters located in the outer region of the LMC. We measured T1 magnitudes and C - T1 colors for a total of 9611 stars distributed throughout cluster areas of 13.6 × 13.6 arcmin². Cluster radii were estimated from star counts distributed throughout the entire observed fields. Careful attention was paid to setting apart the cluster and field star distributions so that statistically cleaned color-magnitude diagrams (CMDs) were obtained. Based on the best fits of isochrones computed by the Padova group to the (T1, C - T1) CMDs, the δT1 index, and the standard giant branch procedure, ages and metallicities were derived for the three clusters. The different methods for both age and metallicity determination are in good agreement. The three clusters were found to be of intermediate age (~1 Gyr) and relatively metal-poor ([Fe/H] ~ -0.7 dex). By combining the current results with others available in the literature, a total sample of 45 well-known LMC clusters older than 1 Gyr was compiled. By adopting an age interval varying in terms of age according to a logarithmic law, we built the cluster age histogram, which statistically represents the intermediate-age and old stellar populations in the LMC. Two main cluster formation episodes that peaked at t ~ 2 and ~14 Gyr were detected. The present cluster age distribution was compared with star formation rates that were analytically derived in previous studies.

  17. How to Estimate the Value of Service Reliability Improvements

    SciTech Connect

    Sullivan, Michael J.; Mercurio, Matthew G.; Schellenberg, Josh A.; Eto, Joseph H.

    2010-06-08

    A robust methodology for estimating the value of service reliability improvements is presented. Although econometric models for estimating value of service (interruption costs) have been established and widely accepted, analysts often resort to applying relatively crude interruption cost estimation techniques in assessing the economic impacts of transmission and distribution investments. This paper first shows how the use of these techniques can substantially impact the estimated value of service improvements. A simple yet robust methodology that does not rely heavily on simplifying assumptions is presented. When a smart grid investment is proposed, reliability improvement is one of the most frequently cited benefits. Using the best methodology for estimating the value of this benefit is imperative. By providing directions on how to implement this methodology, this paper sends a practical, usable message to the industry.

  18. Improving the Discipline of Cost Estimation and Analysis

    NASA Technical Reports Server (NTRS)

    Piland, William M.; Pine, David J.; Wilson, Delano M.

    2000-01-01

    The need to improve the quality and accuracy of cost estimates of proposed new aerospace systems has been widely recognized. The industry has done the best job of maintaining related capability with improvements in estimation methods and giving appropriate priority to the hiring and training of qualified analysts. Some parts of Government, and National Aeronautics and Space Administration (NASA) in particular, continue to need major improvements in this area. Recently, NASA recognized that its cost estimation and analysis capabilities had eroded to the point that the ability to provide timely, reliable estimates was impacting the confidence in planning many program activities. As a result, this year the Agency established a lead role for cost estimation and analysis. The Independent Program Assessment Office located at the Langley Research Center was given this responsibility.

  19. High-Resolution Spatial Distribution and Estimation of Access to Improved Sanitation in Kenya

    PubMed Central

    Jia, Peng; Anderson, John D.; Leitner, Michael; Rheingans, Richard

    2016-01-01

    Background Access to sanitation facilities is imperative in reducing the risk of multiple adverse health outcomes. A distinct disparity in sanitation exists among different wealth levels in many low-income countries, which may hinder the progress across each of the Millennium Development Goals. Methods The surveyed households in 397 clusters from 2008–2009 Kenya Demographic and Health Surveys were divided into five wealth quintiles based on their national asset scores. A series of spatial analysis methods including excess risk, local spatial autocorrelation, and spatial interpolation were applied to observe disparities in coverage of improved sanitation among different wealth categories. The total number of the population with improved sanitation was estimated by interpolating, time-adjusting, and multiplying the surveyed coverage rates by high-resolution population grids. A comparison was then made with the annual estimates from United Nations Population Division and World Health Organization/United Nations Children's Fund Joint Monitoring Program for Water Supply and Sanitation. Results The Empirical Bayesian Kriging interpolation produced minimal root mean squared error for all clusters and five quintiles while predicting the raw and spatial coverage rates of improved sanitation. The coverage in southern regions was generally higher than in the north and east, and the coverage in the south decreased from Nairobi in all directions, while Nyanza and North Eastern Province had relatively poor coverage. The general clustering trend of high and low sanitation improvement among surveyed clusters was confirmed after spatial smoothing. Conclusions There exists an apparent disparity in sanitation among different wealth categories across Kenya and spatially smoothed coverage rates resulted in a closer estimation of the available statistics than raw coverage rates. Future intervention activities need to be tailored for both different wealth categories and nationally

  20. Improving visual estimates of cervical spine range of motion.

    PubMed

    Hirsch, Brandon P; Webb, Matthew L; Bohl, Daniel D; Fu, Michael; Buerba, Rafael A; Gruskay, Jordan A; Grauer, Jonathan N

    2014-11-01

    Cervical spine range of motion (ROM) is a common measure of cervical conditions, surgical outcomes, and functional impairment. Although ROM is routinely assessed by visual estimation in clinical practice, visual estimates have been shown to be unreliable and inaccurate. Reliable goniometers can be used for assessments, but the associated costs and logistics generally limit their clinical acceptance. To investigate whether training can improve visual estimates of cervical spine ROM, we asked attending surgeons, residents, and medical students at our institution to visually estimate the cervical spine ROM of healthy subjects before and after a training session. This training session included review of normal cervical spine ROM in 3 planes and demonstration of partial and full motion in 3 planes by multiple subjects. Estimates before, immediately after, and 1 month after this training session were compared to assess reliability and accuracy. Immediately after training, errors decreased by 11.9° (flexion-extension), 3.8° (lateral bending), and 2.9° (axial rotation). These improvements were statistically significant. One month after training, visual estimates remained improved, by 9.5°, 1.6°, and 3.1°, respectively, but were statistically significant only in flexion-extension. Although the accuracy of visual estimates can be improved, clinicians should be aware of the limitations of visual estimates of cervical spine ROM. Our study results support scrutiny of visual assessment of ROM as a criterion for diagnosing permanent impairment or disability. PMID:25379754

  1. Age and Mass Estimates for 41 Star Clusters in M33

    NASA Astrophysics Data System (ADS)

    Ma, Jun; Zhou, Xu; Chen, Jian-Sheng

    2004-04-01

    In this second paper of our series, we estimate the ages of 41 star clusters detected by Melnick & D'Odorico in the nearby spiral galaxy M33, by comparing integrated photometric measurements with the theoretical stellar population synthesis models of Bruzual & Charlot. We also calculate the masses of these star clusters using the theoretical M/L_V ratio. The results show that these star clusters formed continuously in M33 from ~7 × 10^6 to 10^10 years and have masses between ~10^3 and 2 × 10^6 M⊙. The M33 frames were observed as part of the BATC Multicolor Survey of the sky in 13 intermediate-band filters from 3800 to 10 000 Å. The relation between age and mass confirms that the sample star cluster masses systematically decrease from the oldest to the youngest.

  2. Statistical uncertainties and systematic errors in weak lensing mass estimates of galaxy clusters

    NASA Astrophysics Data System (ADS)

    Köhlinger, F.; Hoekstra, H.; Eriksen, M.

    2015-11-01

    Upcoming and ongoing large area weak lensing surveys will also discover large samples of galaxy clusters. Accurate and precise masses of galaxy clusters are of major importance for cosmology, for example, in establishing well-calibrated observational halo mass functions for comparison with cosmological predictions. We investigate the level of statistical uncertainties and sources of systematic errors expected for weak lensing mass estimates. Future surveys that will cover large areas on the sky, such as Euclid or LSST and, to a lesser extent, DES, will provide the largest weak lensing cluster samples with the lowest level of statistical noise regarding ensembles of galaxy clusters. However, the expected low level of statistical uncertainties requires us to scrutinize various sources of systematic errors. In particular, we investigate the bias due to cluster member galaxies which are erroneously treated as background source galaxies due to wrongly assigned photometric redshifts. We find that this effect is significant when referring to stacks of galaxy clusters. Finally, we study the bias due to miscentring, i.e. the displacement between any observationally defined cluster centre and the true minimum of its gravitational potential. The impact of this bias might be significant with respect to the statistical uncertainties. However, complementary future missions such as eROSITA will allow us to define stringent priors on miscentring parameters which will mitigate this bias significantly.

  3. An Improved Fuzzy c-Means Clustering Algorithm Based on Shadowed Sets and PSO

    PubMed Central

    Zhang, Jian; Shen, Ling

    2014-01-01

    To organize a wide variety of data sets automatically and acquire accurate classification, this paper presents a modified fuzzy c-means algorithm (SP-FCM) based on particle swarm optimization (PSO) and shadowed sets to perform feature clustering. SP-FCM introduces the global search property of PSO to deal with the premature convergence of conventional fuzzy clustering, utilizes the vagueness balance property of shadowed sets to handle overlap among clusters, and models uncertainty in class boundaries. The new method uses the Xie-Beni index as a cluster validity measure and automatically finds the optimal cluster number within a specific range, with cluster partitions that provide compact and well-separated clusters. Experiments show that the proposed approach significantly improves the clustering effect. PMID:25477953
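
    For reference, baseline fuzzy c-means plus the Xie-Beni index, without the PSO and shadowed-set refinements that SP-FCM adds:

      import numpy as np

      def fcm(X, c, m=2.0, iters=100, seed=0):
          """Plain fuzzy c-means: alternate centroid/membership updates."""
          rng = np.random.default_rng(seed)
          U = rng.random((len(X), c))
          U /= U.sum(axis=1, keepdims=True)        # fuzzy memberships
          for _ in range(iters):
              W = U ** m
              centers = (W.T @ X) / W.sum(axis=0)[:, None]
              d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-12
              inv = d ** (-2.0 / (m - 1.0))
              U = inv / inv.sum(axis=1, keepdims=True)
          return centers, U

      def xie_beni(X, centers, U, m=2.0):
          """Xie-Beni validity: compactness over minimum centre
          separation; scan candidate c and keep the minimum."""
          d2 = ((X[:, None] - centers[None]) ** 2).sum(axis=2)
          sep = min(((centers[i] - centers[j]) ** 2).sum()
                    for i in range(len(centers))
                    for j in range(len(centers)) if i != j)
          return (U ** m * d2).sum() / (len(X) * sep)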

  4. Estimators for Clustered Education RCTs Using the Neyman Model for Causal Inference

    ERIC Educational Resources Information Center

    Schochet, Peter Z.

    2013-01-01

    This article examines the estimation of two-stage clustered designs for education randomized control trials (RCTs) using the nonparametric Neyman causal inference framework that underlies experiments. The key distinction between the considered causal models is whether potential treatment and control group outcomes are considered to be fixed for…

  5. Improvements in estimating proportions of objects from multispectral data

    NASA Technical Reports Server (NTRS)

    Horwitz, H. M.; Hyde, P. D.; Richardson, W.

    1974-01-01

    Methods for estimating proportions of objects and materials imaged within the instantaneous field of view of a multispectral sensor were developed further. Improvements in the basic proportion estimation algorithm were devised as well as improved alien object detection procedures. Also, a simplified signature set analysis scheme was introduced for determining the adequacy of signature set geometry for satisfactory proportion estimation. Averaging procedures used in conjunction with the mixtures algorithm were examined theoretically and applied to artificially generated multispectral data. A computationally simpler estimator was considered and found unsatisfactory. Experiments conducted to find a suitable procedure for setting the alien object threshold yielded little definitive result. Mixtures procedures were used on a limited amount of ERTS data to estimate wheat proportion in selected areas. Results were unsatisfactory, partly because of the ill-conditioned nature of the pure signature set.

  6. An Example of an Improvable Rao–Blackwell Improvement, Inefficient Maximum Likelihood Estimator, and Unbiased Generalized Bayes Estimator

    PubMed Central

    Galili, Tal; Meilijson, Isaac

    2016-01-01

    The Rao–Blackwell theorem offers a procedure for converting a crude unbiased estimator of a parameter θ into a “better” one, in fact unique and optimal if the improvement is based on a minimal sufficient statistic that is complete. In contrast, behind every minimal sufficient statistic that is not complete, there is an improvable Rao–Blackwell improvement. This is illustrated via a simple example based on the uniform distribution, in which a rather natural Rao–Blackwell improvement is uniformly improvable. Furthermore, in this example the maximum likelihood estimator is inefficient, and an unbiased generalized Bayes estimator performs exceptionally well. Counterexamples of this sort can be useful didactic tools for explaining the true nature of a methodology and possible consequences when some of the assumptions are violated.

  7. Improving three-dimensional mass mapping with weak gravitational lensing using galaxy clustering

    NASA Astrophysics Data System (ADS)

    Simon, Patrick

    2013-12-01

    Context. The weak gravitational lensing distortion of distant galaxy images (defined as sources) probes the projected large-scale matter distribution in the Universe. The availability of redshift information in galaxy surveys also allows us to recover the radial matter distribution to a certain degree. Aims: To improve quality in the mass mapping, we combine the lensing information with the spatial clustering of a population of galaxies (defined as tracers) that trace the matter density with a known galaxy bias. Methods: We construct a minimum-variance estimator for the 3D matter density that incorporates the angular distribution of galaxy tracers, which are coarsely binned in redshift. Merely the second-order bias of the tracers has to be known, which can in principle be self-consistently constrained in the data by lensing techniques. This synergy introduces a new noise component because of the stochasticity in the matter-tracer density relation. We give a description of the stochasticity noise in the Gaussian regime, and we investigate the estimator characteristics analytically. We apply the estimator to a mock survey based on the Millennium Simulation. Results: The estimator linearly mixes the individual lensing mass and tracer number density maps into a combined smoothed mass map. The weighting in the mix depends on the signal-to-noise ratio (S/N) of the individual maps and the correlation, R, between the matter and galaxy density. The weight of the tracers can be reduced by hand. For moderate mixing, the S/N in the mass map improves by a factor ~2-3 for R ≳ 0.4. Importantly, the systematic offset between a true and apparent mass peak distance (defined as z-shift bias) in a lensing-only map is eliminated, even for weak correlations of R ~ 0.4. Conclusions: If the second-order bias of tracer galaxies can be determined, the synergy technique potentially provides an option to improve redshift accuracy and completeness of the lensing 3D mass map. Herein, the aim

  8. Estimate of the Total Mechanical Feedback Energy from Galaxy Cluster-centered Black Holes: Implications for Black Hole Evolution, Cluster Gas Fraction, and Entropy

    NASA Astrophysics Data System (ADS)

    Mathews, William G.; Guo, Fulai

    2011-09-01

    The total feedback energy injected into hot gas in galaxy clusters by central black holes can be estimated by comparing the potential energy of observed cluster gas profiles with the potential energy of non-radiating, feedback-free hot gas atmospheres resulting from gravitational collapse in clusters of the same total mass. Feedback energy from cluster-centered black holes expands the cluster gas, lowering the gas-to-dark-matter mass ratio below the cosmic value. Feedback energy is necessarily delivered by radio-emitting jets to distant gas far beyond the cooling radius, where the cooling time equals the cluster lifetime. For clusters of mass (4-11) × 10^14 M_sun, estimates of the total feedback energy, (1-3) × 10^63 erg, far exceed feedback energies estimated from observations of X-ray cavities and shocks in the cluster gas, energies gained from supernovae, and energies lost from cluster gas by radiation. The time-averaged mean feedback luminosity is comparable to those of powerful quasars, implying that some significant fraction of this energy may arise from the spin of the black hole. The universal entropy profile in feedback-free gaseous atmospheres in Navarro-Frenk-White cluster halos can be recovered by multiplying the observed gas entropy profile of any relaxed cluster by a factor involving the gas fraction profile. While the feedback energy and associated mass outflow in the clusters we consider far exceed that necessary to stop cooling inflow, the time-averaged mass outflow at the cooling radius almost exactly balances the mass that cools within this radius, an essential condition to shut down cluster cooling flows.

  9. Estimating regression coefficients from clustered samples: Sampling errors and optimum sample allocation

    NASA Technical Reports Server (NTRS)

    Kalton, G.

    1983-01-01

    A number of surveys were conducted to study the relationship between the level of aircraft or traffic noise exposure experienced by people living in a particular area and their annoyance with it. These surveys generally employ a clustered sample design, which affects the precision of the survey estimates. Regression analysis of annoyance on noise measures and other variables is often an important component of the survey analysis. Formulae are presented for estimating the standard errors of regression coefficients and ratios of regression coefficients that are applicable with a two- or three-stage clustered sample design. Using a simple cost function, the optimum allocation of the sample across the stages of the design is also determined for the estimation of a regression coefficient.
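
    For readers unfamiliar with why clustering affects precision, the standard design-effect approximation conveys the idea; this is a hedged sketch with invented numbers (the Kish approximation for a cluster mean, not the paper's more involved formulae for regression coefficients):

    ```python
    # Kish design effect for a two-stage clustered sample:
    # deff = 1 + (b - 1) * roh, with b the average cluster size and roh the
    # intraclass correlation; the SE is inflated by sqrt(deff).
    def design_effect(avg_cluster_size: float, roh: float) -> float:
        return 1.0 + (avg_cluster_size - 1.0) * roh

    def clustered_se(srs_se: float, avg_cluster_size: float, roh: float) -> float:
        """Standard error under clustering, relative to simple random sampling."""
        return srs_se * design_effect(avg_cluster_size, roh) ** 0.5

    # e.g. 20 respondents per area segment and roh = 0.05 inflate the SE ~40%
    print(clustered_se(1.0, 20, 0.05))  # ~1.40
    ```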

  10. Estimating regression coefficients from clustered samples: Sampling errors and optimum sample allocation

    NASA Astrophysics Data System (ADS)

    Kalton, G.

    1983-05-01

    A number of surveys were conducted to study the relationship between the level of aircraft or traffic noise exposure experienced by people living in a particular area and their annoyance with it. These surveys generally employ a clustered sample design, which affects the precision of the survey estimates. Regression analysis of annoyance on noise measures and other variables is often an important component of the survey analysis. Formulae are presented for estimating the standard errors of regression coefficients and ratios of regression coefficients that are applicable with a two- or three-stage clustered sample design. Using a simple cost function, the optimum allocation of the sample across the stages of the design is also determined for the estimation of a regression coefficient.

  11. Scanning linear estimation: improvements over region of interest (ROI) methods

    NASA Astrophysics Data System (ADS)

    Kupinski, Meredith K.; Clarkson, Eric W.; Barrett, Harrison H.

    2013-03-01

    In tomographic medical imaging, signal activity is typically estimated by summing voxels from a reconstructed image. We introduce an alternative estimation scheme that operates on the raw projection data and offers a substantial improvement, as measured by the ensemble mean-square error (EMSE), when compared to using voxel values from a maximum-likelihood expectation-maximization (MLEM) reconstruction. The scanning-linear (SL) estimator operates on the raw projection data and is derived as a special case of maximum-likelihood estimation with a series of approximations to make the calculation tractable. The approximated likelihood accounts for background randomness, measurement noise and variability in the parameters to be estimated. When signal size and location are known, the SL estimate of signal activity is unbiased, i.e. the average estimate equals the true value. By contrast, unpredictable bias arising from the null functions of the imaging system affects standard algorithms that operate on reconstructed data. The SL method is demonstrated for two different tasks: (1) simultaneously estimating a signal’s size, location and activity; (2) for a fixed signal size and location, estimating activity. Noisy projection data are realistically simulated using measured calibration data from the multi-module multi-resolution small-animal SPECT imaging system. For both tasks, the same set of images is reconstructed using the MLEM algorithm (80 iterations), and the average and maximum values within the region of interest (ROI) are calculated for comparison. This comparison shows dramatic improvements in EMSE for the SL estimates. To show that the bias in ROI estimates affects not only absolute values but also relative differences, such as those used to monitor the response to therapy, the activity estimation task is repeated for three different signal sizes.

  12. Improving The Discipline of Cost Estimation and Analysis

    NASA Technical Reports Server (NTRS)

    Piland, William M.; Pine, David J.; Wilson, Delano M.

    2000-01-01

    The need to improve the quality and accuracy of cost estimates of proposed new aerospace systems has been widely recognized. Industry has done the best job of maintaining related capability, with improvements in estimation methods and appropriate priority given to the hiring and training of qualified analysts. Some parts of Government, and the National Aeronautics and Space Administration (NASA) in particular, continue to need major improvements in this area. Recently, NASA recognized that its cost estimation and analysis capabilities had eroded to the point that the inability to provide timely, reliable estimates was undermining confidence in the planning of many program activities. As a result, this year the Agency established a lead role for cost estimation and analysis. The Independent Program Assessment Office located at the Langley Research Center was given this responsibility. This paper presents the plans for the newly established role. Described is how the Independent Program Assessment Office, working with all NASA Centers, NASA Headquarters, other Government agencies, and industry, is focused on creating cost estimation and analysis as a professional discipline that will be recognized equally with the technical disciplines needed to design new space and aeronautics activities. Investments in selected new analysis tools, creating advanced training opportunities for analysts, and developing career paths for future analysts engaged in the discipline are all elements of the plan. Plans also include increasing the human resources available to conduct independent cost analysis of Agency programs during their formulation, to improve near-term capability to conduct economic cost-benefit assessments, to support NASA management's decision process, and to provide cost analysis results emphasizing "full-cost" and "full-life cycle" considerations. The Agency cost analysis improvement plan has been approved for implementation starting this calendar year. Adequate financial

  13. Spectral clustering for optical confirmation and redshift estimation of X-ray selected galaxy cluster candidates in the SDSS Stripe 82

    NASA Astrophysics Data System (ADS)

    Mahmoud, E.; Takey, A.; Shoukry, A.

    2016-07-01

    We develop a galaxy cluster finding algorithm based on spectral clustering technique to identify optical counterparts and estimate optical redshifts for X-ray selected cluster candidates. As an application, we run our algorithm on a sample of X-ray cluster candidates selected from the third XMM-Newton serendipitous source catalog (3XMM-DR5) that are located in the Stripe 82 of the Sloan Digital Sky Survey (SDSS). Our method works on galaxies described in the color-magnitude feature space. We begin by examining 45 galaxy clusters with published spectroscopic redshifts in the range of 0.1-0.8 with a median of 0.36. As a result, we are able to identify their optical counterparts and estimate their photometric redshifts, which have a typical accuracy of 0.025 and agree with the published ones. Then, we investigate another 40 X-ray cluster candidates (from the same cluster survey) with no redshift information in the literature and found that 12 candidates are considered as galaxy clusters in the redshift range from 0.29 to 0.76 with a median of 0.57. These systems are newly discovered clusters in X-rays and optical data. Among them 7 clusters have spectroscopic redshifts for at least one member galaxy.
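
    A minimal sketch of spectral clustering applied to galaxies in a color-magnitude feature space, in the spirit of the record above; the synthetic data, column choices and number of clusters are assumptions for illustration, not the authors' pipeline:

    ```python
    import numpy as np
    from sklearn.cluster import SpectralClustering

    rng = np.random.default_rng(42)
    # toy "galaxies": (r-band magnitude, g-r color), one overdensity plus field
    cluster = np.column_stack([rng.normal(19.5, 0.8, 80),
                               rng.normal(1.4, 0.05, 80)])
    field = np.column_stack([rng.uniform(16, 23, 200),
                             rng.uniform(0.2, 2.0, 200)])
    X = np.vstack([cluster, field])

    labels = SpectralClustering(n_clusters=2, affinity="nearest_neighbors",
                                n_neighbors=20, random_state=0).fit_predict(X)
    # the red-sequence overdensity shows up as the tighter group in (mag, color)
    for k in range(2):
        print(k, X[labels == k].mean(axis=0), (labels == k).sum())
    ```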

  14. Comparative assessment of bone pose estimation using Point Cluster Technique and OpenSim.

    PubMed

    Lathrop, Rebecca L; Chaudhari, Ajit M W; Siston, Robert A

    2011-11-01

    Estimating the position of the bones from optical motion capture data is a challenge associated with human movement analysis. Bone pose estimation techniques such as the Point Cluster Technique (PCT) and simulations of movement through software packages such as OpenSim are used to minimize soft tissue artifact and estimate skeletal position; however, using different methods for analysis may produce differing kinematic results which could lead to differences in clinical interpretation such as a misclassification of normal or pathological gait. This study evaluated the differences present in knee joint kinematics as a result of calculating joint angles using various techniques. We calculated knee joint kinematics from experimental gait data using the standard PCT, the least squares approach in OpenSim applied to experimental marker data, and the least squares approach in OpenSim applied to the results of the PCT algorithm. Maximum and resultant RMS differences in knee angles were calculated between all techniques. We observed differences in flexion/extension, varus/valgus, and internal/external rotation angles between all approaches. The largest differences were between the PCT results and all results calculated using OpenSim. The RMS differences averaged nearly 5° for flexion/extension angles with maximum differences exceeding 15°. Average RMS differences were relatively small (< 1.08°) between results calculated within OpenSim, suggesting that the choice of marker weighting is not critical to the results of the least squares inverse kinematics calculations. The largest difference between techniques appeared to be a constant offset between the PCT and all OpenSim results, which may be due to differences in the definition of anatomical reference frames, scaling of musculoskeletal models, and/or placement of virtual markers within OpenSim. Different methods for data analysis can produce largely different kinematic results, which could lead to the misclassification

  15. Improved Versions of Common Estimators of the Recombination Rate.

    PubMed

    Gärtner, Kerstin; Futschik, Andreas

    2016-09-01

    The scaled recombination parameter ρ is one of the key parameters turning up frequently in population genetic models. Accurate estimates of ρ are difficult to obtain, as recombination events do not always leave traces in the data. One of the most widely used approaches is composite likelihood. Here, we show that popular implementations of composite likelihood estimators can often be uniformly improved by optimizing the trade-off between bias and variance. The amount of possible improvement depends on parameters such as the sequence length, the sample size, and the mutation rate, and it can be considerable in some cases. It turns out that approximate Bayesian computation, with composite likelihood as a summary statistic, also leads to improved estimates, but now in terms of the posterior risk. Finally, we demonstrate a practical application on real data from Drosophila. PMID:27409412

  16. Improving terrain height estimates from RADARSAT interferometric measurements

    SciTech Connect

    Thompson, P.A.; Eichel, P.H.; Calloway, T.M.

    1998-03-01

    The authors describe two methods of combining two-pass RADAR-SAT interferometric phase maps with existing DTED (digital terrain elevation data) to produce improved terrain height estimates. The first is a least-squares estimation procedure that fits the unwrapped phase data to a phase map computed from the DTED. The second is a filtering technique that combines the interferometric height map with the DTED map based on spatial frequency content. Both methods preserve the high fidelity of the interferometric data.
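
    The second, frequency-based method lends itself to a compact illustration: blend a DTED map (trusted at low spatial frequencies) with an interferometric height map (trusted at high frequencies) using FFT-domain weights. The crossover wavenumber and filter shape below are assumptions, not the authors' choices:

    ```python
    import numpy as np

    def fuse_heights(dted: np.ndarray, insar: np.ndarray, k_cross: float = 0.05):
        """Blend two co-registered height maps by spatial frequency content."""
        ky = np.fft.fftfreq(dted.shape[0])[:, None]
        kx = np.fft.fftfreq(dted.shape[1])[None, :]
        k = np.hypot(kx, ky)
        w_low = 1.0 / (1.0 + (k / k_cross) ** 4)   # smooth low-pass weight
        fused = w_low * np.fft.fft2(dted) + (1.0 - w_low) * np.fft.fft2(insar)
        return np.fft.ifft2(fused).real

    # toy example: a smooth ramp as DTED, a noisier high-resolution InSAR map
    dted = np.add.outer(np.linspace(0, 100, 128), np.linspace(0, 50, 128))
    insar = dted + np.random.default_rng(1).normal(0, 2, dted.shape)
    print(fuse_heights(dted, insar).shape)  # (128, 128)
    ```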

  17. Distributing Power Grid State Estimation on HPC Clusters A System Architecture Prototype

    SciTech Connect

    Liu, Yan; Jiang, Wei; Jin, Shuangshuang; Rice, Mark J.; Chen, Yousu

    2012-08-20

    The future power grid is expected to further expand with highly distributed energy sources and smart loads. The increased size and complexity lead to an increased burden on existing computational resources in energy control centers. Thus the need to perform real-time assessment of such systems entails efficient means to distribute centralized functions such as state estimation in the power system. In this paper, we present an early prototype of a system architecture that connects distributed state estimators, each running parallel programs to solve the non-linear estimation problem. The prototype consists of a middleware and data processing toolkits that allow data exchange in the distributed state estimation. We build a test case based on the IEEE 118-bus system and partition the state estimation of the whole system model across available HPC clusters. Measurements from the testbed demonstrate the low overhead of our solution.
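
    The core computation such a state estimator distributes can be illustrated with the linear (DC-approximation) weighted-least-squares solve; the actual prototype iterates a non-linear AC formulation, and the toy matrices below are stand-ins, not the IEEE 118-bus model:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n_state, n_meas = 5, 12
    H = rng.normal(size=(n_meas, n_state))        # measurement Jacobian
    x_true = rng.normal(size=n_state)             # e.g. bus voltage angles
    z = H @ x_true + rng.normal(0, 0.01, n_meas)  # noisy measurements
    W = np.eye(n_meas) / 0.01**2                  # inverse noise variances

    # weighted least squares: x = (H^T W H)^{-1} H^T W z
    x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
    print(np.abs(x_hat - x_true).max())           # small estimation error
    ```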

  18. Under What Circumstances Does External Knowledge about the Correlation Structure Improve Power in Cluster Randomized Designs?

    ERIC Educational Resources Information Center

    Rhoads, Christopher

    2014-01-01

    Recent publications have drawn attention to the idea of utilizing prior information about the correlation structure to improve statistical power in cluster randomized experiments. Because power in cluster randomized designs is a function of many different parameters, it has been difficult for applied researchers to discern a simple rule explaining…

  19. Improving warm rain estimation in the PERSIANN-CCS satellite-based retrieval algorithm

    NASA Astrophysics Data System (ADS)

    Karbalaee, N.; Hsu, K. L.; Sorooshian, S.

    2015-12-01

    The Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS) is one of the algorithms being integrated in IMERG (Integrated Multi-Satellite Retrievals for the Global Precipitation Measurement mission, GPM) to estimate precipitation at 0.04° lat-long scale every 30 minutes. PERSIANN-CCS extracts features from infrared cloud image segmentation at three brightness temperature thresholds (220 K, 235 K, and 253 K). Warm raining clouds with brightness temperatures higher than 253 K are not covered by the current algorithm. To improve detection of warm rain, in this study the cloud image segmentation threshold is extended from 253 K to 300 K to cover warmer clouds; several other temperature thresholds between 253 K and 300 K were also examined. The k-means clustering algorithm was used to classify the extracted image features into 400 groups, and rainfall rates for each cluster were calibrated using radar rainfall measurements. Case studies were carried out over CONUS to investigate the ability to improve detection of warm rainfall from segmentation and image classification using warmer temperature thresholds. Satellite imagery and radar rainfall data from both summer and winter seasons of 2012 were used as training data. Overall results show that rain detection from warm clouds is significantly improved; however, false rain detection also increases as the segmentation temperature is raised.
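
    A hedged sketch of the classification step described above - k-means grouping of extracted cloud-patch features into 400 clusters, to which radar-derived rain rates would then be attached; the feature construction is synthetic and purely illustrative:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(7)
    # toy features per cloud patch: (min Tb, mean Tb, texture proxy), Tb in K
    features = np.column_stack([
        rng.uniform(200, 300, 5000),   # coldest brightness temperature
        rng.uniform(210, 300, 5000),   # mean brightness temperature
        rng.gamma(2.0, 1.5, 5000),     # a texture proxy
    ])
    km = KMeans(n_clusters=400, n_init=4, random_state=0).fit(features)
    # a radar-derived mean rain rate would be attached to each of the 400 groups
    print(km.cluster_centers_.shape)   # (400, 3)
    ```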

  20. Improving PERSIANN-CCS Rainfall Estimation using Passive Microwave Rainfall Estimation

    NASA Astrophysics Data System (ADS)

    Karbalaee, N.; Hsu, K. L.; Sorooshian, S.

    2014-12-01

    This presentation discusses recent improvements to PERSIANN-CCS (Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System). PERSIANN-CCS is one of the algorithms being integrated in IMERG (Integrated Multi-Satellite Retrievals for the Global Precipitation Measurement mission, GPM) to estimate precipitation at 0.04° lat-long scale at every 30-minute interval. While PERSIANN-CCS has a relatively fine temporal and spatial resolution for generating rainfall estimates over the globe, it sometimes underestimates or overestimates over some regions, depending on conditions. In this study, improving PERSIANN-CCS precipitation estimation using long-term passive microwave (PMW) rainfall estimates is explored. The adjustment proceeds by matching the probability distribution of PERSIANN-CCS estimates to that of the PMW rainfall estimates. Four years of concurrent samples from 2008 to 2011 were used in the calibration, while one year (2012) of data was used for validation of the PMW-adjusted PERSIANN-CCS estimates. Samples over 5°×5° lat-long boxes were collected and an adjustment look-up table for each month covering 60°S-60°N was generated. The validation of PERSIANN-CCS estimates before and after PMW adjustment over CONUS using radar data was investigated. The results show that the adjustment has a different impact on the PERSIANN-CCS rain estimates depending on location and time of year: adjustments were found to be more significant over high latitudes and winter periods, and less significant over low latitudes and summer periods.
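
    Distribution matching of this kind is commonly implemented as quantile (CDF) matching; a minimal single-sample sketch follows, whereas the study builds monthly look-up tables per 5°×5° box. All distributions below are invented:

    ```python
    import numpy as np

    def quantile_match(ccs, ccs_ref, pmw_ref):
        """Map each CCS rate onto the PMW value at the same empirical quantile."""
        quantiles = np.searchsorted(np.sort(ccs_ref), ccs) / len(ccs_ref)
        return np.quantile(pmw_ref, np.clip(quantiles, 0, 1))

    rng = np.random.default_rng(3)
    ccs_ref = rng.gamma(1.2, 2.0, 10_000)   # calibration-period CCS rates
    pmw_ref = rng.gamma(1.2, 2.6, 10_000)   # concurrent PMW rates (wetter)
    print(quantile_match(np.array([1.0, 5.0, 20.0]), ccs_ref, pmw_ref))
    ```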

  1. Optimal Cluster-based Models for Estimation of Missing Precipitation Records

    NASA Astrophysics Data System (ADS)

    Teegavarapu, R. S.

    2008-05-01

    Deterministic and stochastic weighting methods are the most frequently used methods for estimating missing rainfall values at a gage based on values recorded at all other available recording gages. Distance-based weighting methods suffer from one major conceptual limitation: Euclidean distance is not always a definitive measure of the correlation among spatial point measurements. Another point of contention is the number of control points used in the estimation process. Several spatial weighting methods and optimal cluster-based models are proposed, developed and investigated for estimation of missing precipitation records. These methods use mathematical programming formulations and evolutionary algorithms. Historical daily precipitation data obtained from 15 rain gauging stations in a temperate climatic region are used to test the methods and derive conclusions about their efficacy. Results suggest that the weights and cluster-based models derived from mathematical programming formulations and surrogate parameters for correlations are superior to traditional distance-based weights used in spatial interpolation for estimating missing rainfall data at points of interest.
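
    For contrast with the proposed cluster-based models, the traditional distance-based baseline is easy to state: estimate the missing value as an inverse-distance-weighted mean of the other gages. A minimal sketch with invented coordinates and values:

    ```python
    import numpy as np

    def idw_estimate(target_xy, gage_xy, gage_values, p=2.0):
        """Inverse-distance-weighted estimate at a target location."""
        d = np.linalg.norm(gage_xy - target_xy, axis=1)
        w = 1.0 / np.maximum(d, 1e-9) ** p
        return float(np.sum(w * gage_values) / np.sum(w))

    gage_xy = np.array([[0.0, 1.0], [2.0, 0.5], [1.0, 3.0]])  # km
    rain_mm = np.array([12.0, 8.0, 15.0])                     # daily totals
    print(idw_estimate(np.array([1.0, 1.0]), gage_xy, rain_mm))
    ```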

  2. Comparison of methods for estimating the intraclass correlation coefficient for binary responses in cancer prevention cluster randomized trials.

    PubMed

    Wu, Sheng; Crespi, Catherine M; Wong, Weng Kee

    2012-09-01

    The intraclass correlation coefficient (ICC) is a fundamental parameter of interest in cluster randomized trials as it can greatly affect statistical power. We compare common methods of estimating the ICC in cluster randomized trials with binary outcomes, with a specific focus on their application to community-based cancer prevention trials with a primary outcome of self-reported cancer screening. Using three real data sets from cancer screening intervention trials with different numbers and types of clusters and cluster sizes, we obtained point estimates and 95% confidence intervals for the ICC using five methods: the analysis of variance estimator, the Fleiss-Cuzick estimator, the Pearson estimator, an estimator based on generalized estimating equations, and an estimator from a random intercept logistic regression model. We compared estimates of the ICC for the overall sample and by study condition. Our results show that ICC estimates from different methods can be quite different, although confidence intervals generally overlap. The ICC varied substantially by study condition in two studies, suggesting that the common practice of assuming a common ICC across all clusters in the trial is questionable. A simulation study confirmed the pitfalls of erroneously assuming a common ICC. Investigators should consider using sample size and analysis methods that allow the ICC to vary by study condition. PMID:22627076
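
    The first of the five methods, the one-way ANOVA estimator, can be sketched compactly when applied directly to 0/1 outcomes grouped by cluster; data and names below are illustrative:

    ```python
    import numpy as np

    def anova_icc(clusters):
        """One-way ANOVA ICC estimate; clusters is a list of 1-D 0/1 arrays."""
        k = len(clusters)
        n_i = np.array([len(c) for c in clusters])
        N = n_i.sum()
        grand = np.concatenate(clusters).mean()
        means = np.array([c.mean() for c in clusters])
        msb = np.sum(n_i * (means - grand) ** 2) / (k - 1)
        msw = sum(((c - m) ** 2).sum() for c, m in zip(clusters, means)) / (N - k)
        n0 = (N - (n_i ** 2).sum() / N) / (k - 1)   # adjusted mean cluster size
        return (msb - msw) / (msb + (n0 - 1) * msw)

    rng = np.random.default_rng(11)
    data = [rng.binomial(1, p, size=rng.integers(20, 60))
            for p in rng.beta(8, 12, size=30)]      # varying screening rates
    print(anova_icc(data))
    ```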

  3. A simple recipe for estimating masses of elliptical galaxies and clusters of galaxies

    NASA Astrophysics Data System (ADS)

    Lyskova, N.

    2013-04-01

    We discuss a simple and robust procedure to evaluate the mass/circular velocity of massive elliptical galaxies and clusters of galaxies. It relies only on the surface density and the projected velocity dispersion profiles of tracer particles and therefore can be applied even in the case of poor or noisy observational data. Stars, globular clusters or planetary nebulae can be used as tracers for mass determination of elliptical galaxies. For clusters, the galaxies themselves can be used as tracer particles. The key element of the proposed procedure is the selection of a "sweet" radius R_sweet, where the sensitivity to the unknown anisotropy of the tracers' orbits is minimal. At this radius the surface density of tracers declines approximately as I(R) ∝ R^(-2), thus placing R_sweet not far from the half-light radius of the tracers, R_eff. The procedure was tested on a sample of cosmological simulations of individual galaxies and galaxy clusters and then applied to real observational data. Independently, the total mass profile was derived from the hydrostatic equilibrium equation for the gaseous atmosphere. The mismatch in mass profiles obtained from optical and X-ray data is used to estimate the non-thermal contribution to the gas pressure and/or to constrain the distribution of tracers' orbits.

  4. Distance Estimates for High Redshift Clusters SZ and X-Ray Measurements

    NASA Technical Reports Server (NTRS)

    Joy, Marshall K.

    1999-01-01

    I present interferometric images of the Sunyaev-Zel'dovich effect for the high redshift (z > 0.5) galaxy clusters in the Einstein Medium Sensitivity Survey: MS0451.5-0305 (z = 0.54), MS0015.9+1609 (z = 0.55), MS2053.7-0449 (z = 0.58), MS1137.5+6625 (z = 0.78), and MS1054.5-0321 (z = 0.83). Isothermal β models are applied to the data to determine the magnitude of the Sunyaev-Zel'dovich (S-Z) decrement in each cluster. Complementary ROSAT PSPC and HRI X-ray data are also analyzed, and are combined with the S-Z data to generate an independent estimate of the cluster distance. Since the Sunyaev-Zel'dovich effect is invariant with redshift, sensitive S-Z imaging can provide an independent determination of the size, shape, density, and distance of high redshift galaxy clusters; we will discuss current systematic uncertainties with this approach, as well as future observations which will yield stronger constraints.

  5. Performance Analysis of an Improved MUSIC DoA Estimator

    NASA Astrophysics Data System (ADS)

    Vallet, Pascal; Mestre, Xavier; Loubaton, Philippe

    2015-12-01

    This paper addresses the statistical performance of subspace DoA estimation using a sensor array, in the asymptotic regime where the number of samples and sensors both converge to infinity at the same rate. Improved subspace DoA estimators (termed G-MUSIC) were derived in previous works and were shown to be consistent and asymptotically Gaussian distributed in the case where the number of sources and their DoA remain fixed. In this case, which models widely spaced DoA scenarios, it is proved in the present paper that the traditional MUSIC method also provides consistent DoA estimates having the same asymptotic variances as the G-MUSIC estimates. The case of DoA that are spaced on the order of a beamwidth, which models closely spaced sources, is also considered. It is shown that G-MUSIC estimates are still able to consistently separate the sources, while this is no longer the case for the MUSIC estimates. The asymptotic variances of G-MUSIC estimates are also evaluated.
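
    For background, classical (non-improved) MUSIC for a uniform linear array can be sketched in a few lines; half-wavelength spacing and the scenario parameters below are assumptions, and the G-MUSIC correction itself is not implemented:

    ```python
    import numpy as np

    def steering(theta_deg, m):
        """Steering vector of an m-sensor ULA with half-wavelength spacing."""
        return np.exp(1j * np.pi * np.sin(np.deg2rad(theta_deg)) * np.arange(m))

    m, n, doas = 12, 200, [-10.0, 15.0]            # sensors, snapshots, true DoAs
    rng = np.random.default_rng(5)
    A = np.column_stack([steering(t, m) for t in doas])
    S = (rng.normal(size=(2, n)) + 1j * rng.normal(size=(2, n))) / np.sqrt(2)
    noise = 0.1 * (rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n)))
    X = A @ S + noise

    R = X @ X.conj().T / n                          # sample covariance matrix
    _, v = np.linalg.eigh(R)                        # eigenvalues ascending
    En = v[:, : m - len(doas)]                      # noise-subspace eigenvectors
    grid = np.linspace(-90, 90, 3601)
    p = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t, m)) ** 2
                  for t in grid])
    peaks = np.where((p[1:-1] > p[:-2]) & (p[1:-1] > p[2:]))[0] + 1
    print(np.sort(grid[peaks[np.argsort(p[peaks])[-2:]]]))  # ~[-10, 15]
    ```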

  6. Estimating the incubation period of raccoon rabies: a time-space clustering approach.

    PubMed

    Tinline, Rowland; Rosatte, Rick; MacInnes, Charles

    2002-11-29

    We used a time-space clustering approach to estimate the incubation period of raccoon rabies in the wild, using data from the 1999-2001 invasion of raccoon rabies into eastern Ontario from northern New York State. The time differences and geographical distances between all possible pairs of rabies cases were computed, classified and assembled into a time-space matrix. The rows of the matrix represent time differences between cases in weeks, the columns represent distances between cases in kilometers, and the cells contain the counts of case pairs at specific time and distance intervals. There was a significant cluster of pairs 5 weeks apart, with apparent harmonics at additional 5-week intervals. These results are explained by assuming the incubation period of raccoon rabies had a mode of 5 weeks. The time clusters appeared consistently at distance intervals of 5 km. We discuss the possibility that the spatial intervals were influenced by the 5 km radius of the point infection control depopulation process used in 1999 and the 10-15 km radial areas used in 2000. Within the practical limits of those radii, there was an intensive effort to eliminate raccoons. Our procedure is easy to implement and provides an estimate of the shape of the distribution of incubation periods for raccoon rabies. PMID:12419602
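
    The pairwise time-space matrix at the heart of the method is straightforward to construct; a minimal sketch with simulated case data (field names and bin limits are illustrative):

    ```python
    import numpy as np

    def time_space_matrix(weeks, xy_km, max_wk=20, max_km=30):
        """Counts of case pairs binned by week difference and km distance."""
        n = len(weeks)
        mat = np.zeros((max_wk, max_km), dtype=int)
        for i in range(n):
            for j in range(i + 1, n):
                dt = abs(weeks[i] - weeks[j])
                dd = int(np.hypot(*(xy_km[i] - xy_km[j])))
                if dt < max_wk and dd < max_km:
                    mat[dt, dd] += 1
        return mat   # rows: weeks apart, cols: km apart

    rng = np.random.default_rng(9)
    weeks = rng.integers(0, 52, 120)          # report week of each case
    xy = rng.uniform(0, 25, (120, 2))         # case locations in km
    print(time_space_matrix(weeks, xy).sum()) # total pairs within the bins
    ```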

  7. Comparison of Three Plot Selection Methods for Estimating Change in Temporally Variable, Spatially Clustered Populations.

    SciTech Connect

    Thompson, William L.

    2001-07-01

    Monitoring population numbers is important for assessing trends and meeting various legislative mandates. However, sampling across time introduces a temporal aspect to survey design in addition to the spatial one. For instance, a sample that is initially representative may lose this attribute if there is a shift in numbers and/or spatial distribution in the underlying population that is not reflected in later sampled plots. Plot selection methods that account for this temporal variability will produce the best trend estimates. Consequently, I used simulation to compare bias and relative precision of estimates of population change among stratified and unstratified sampling designs based on permanent, temporary, and partial replacement plots under varying levels of spatial clustering, density, and temporal shifting of populations. Permanent plots produced more precise estimates of change than temporary plots across all factors. Further, permanent plots performed better than partial replacement plots except for high density (5 and 10 individuals per plot) and 25% - 50% shifts in the population. Stratified designs always produced less precise estimates of population change for all three plot selection methods, and often produced biased change estimates and greatly inflated variance estimates under sampling with partial replacement. Hence, stratification that remains fixed across time should be avoided when monitoring populations that are likely to exhibit large changes in numbers and/or spatial distribution during the study period. Key words: bias; change estimation; monitoring; permanent plots; relative precision; sampling with partial replacement; temporary plots.

  8. An improved approximate-Bayesian model-choice method for estimating shared evolutionary history

    PubMed Central

    2014-01-01

    Background To understand biological diversification, it is important to account for large-scale processes that affect the evolutionary history of groups of co-distributed populations of organisms. Such events predict temporally clustered divergences times, a pattern that can be estimated using genetic data from co-distributed species. I introduce a new approximate-Bayesian method for comparative phylogeographical model-choice that estimates the temporal distribution of divergences across taxa from multi-locus DNA sequence data. The model is an extension of that implemented in msBayes. Results By reparameterizing the model, introducing more flexible priors on demographic and divergence-time parameters, and implementing a non-parametric Dirichlet-process prior over divergence models, I improved the robustness, accuracy, and power of the method for estimating shared evolutionary history across taxa. Conclusions The results demonstrate the improved performance of the new method is due to (1) more appropriate priors on divergence-time and demographic parameters that avoid prohibitively small marginal likelihoods for models with more divergence events, and (2) the Dirichlet-process providing a flexible prior on divergence histories that does not strongly disfavor models with intermediate numbers of divergence events. The new method yields more robust estimates of posterior uncertainty, and thus greatly reduces the tendency to incorrectly estimate models of shared evolutionary history with strong support. PMID:24992937

  9. Motion estimation in the frequency domain using fuzzy c-planes clustering.

    PubMed

    Erdem, C E; Karabulut, G Z; Yanmaz, E; Anarim, E

    2001-01-01

    A recent work explicitly models the discontinuous motion estimation problem in the frequency domain, where the motion parameters are estimated using a harmonic retrieval approach. The vertical and horizontal components of the motion are independently estimated from the locations of the peaks of the respective periodogram analyses and are paired to obtain the motion vectors using a previously proposed procedure. In this paper, we present a more efficient method that replaces the motion component pairing task and hence eliminates the problems of that pairing method. The method described in this paper uses the fuzzy c-planes (FCP) clustering approach to fit planes to three-dimensional (3-D) frequency domain data obtained from the peaks of the periodograms. Experimental results are provided to demonstrate the effectiveness of the proposed method. PMID:18255527

  10. Improved estimation of reflectance spectra by utilizing prior knowledge.

    PubMed

    Dierl, Marcel; Eckhard, Timo; Frei, Bernhard; Klammer, Maximilian; Eichstädt, Sascha; Elster, Clemens

    2016-07-01

    Estimating spectral reflectance has attracted extensive research efforts in color science and machine learning, motivated through a wide range of applications. In many practical situations, prior knowledge is available that ought to be used. Here, we have developed a general Bayesian method that allows the incorporation of prior knowledge from previous monochromator and spectrophotometer measurements. The approach yields analytical expressions for fast and efficient estimation of spectral reflectance. In addition to point estimates, probability distributions are also obtained, which completely characterize the uncertainty associated with the reconstructed spectrum. We demonstrate that, through the incorporation of prior knowledge, our approach yields improved reconstruction results compared with methods that resort to training data only. Our method is particularly useful when the spectral reflectance to be recovered resides beyond the scope of the training data. PMID:27409695
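
    In the linear-Gaussian special case, incorporating prior knowledge as described above has a closed-form posterior; the following sketch uses toy stand-ins for the sensor response matrix and prior covariance, not the paper's actual model:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_bands, n_channels = 31, 6                  # spectrum length, sensor channels
    A = rng.uniform(0, 1, (n_channels, n_bands)) # sensor response matrix
    mu0 = np.full(n_bands, 0.5)                  # prior mean from past spectra
    lags = np.abs(np.subtract.outer(range(n_bands), range(n_bands)))
    P0 = 0.05 * np.exp(-lags / 5.0)              # smooth prior covariance
    R = 1e-4 * np.eye(n_channels)                # measurement noise covariance

    r_true = 0.5 + 0.3 * np.sin(np.linspace(0, 3, n_bands))
    y = A @ r_true + rng.multivariate_normal(np.zeros(n_channels), R)

    # posterior mean and covariance of the reflectance spectrum r | y
    K = P0 @ A.T @ np.linalg.inv(A @ P0 @ A.T + R)
    r_post = mu0 + K @ (y - A @ mu0)
    P_post = P0 - K @ A @ P0
    print(np.round(r_post[:5], 3), np.sqrt(np.diag(P_post)).max())
    ```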

  11. An Accurate Link Correlation Estimator for Improving Wireless Protocol Performance

    PubMed Central

    Zhao, Zhiwei; Xu, Xianghua; Dong, Wei; Bu, Jiajun

    2015-01-01

    Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation. PMID:25686314

  12. An accurate link correlation estimator for improving wireless protocol performance.

    PubMed

    Zhao, Zhiwei; Xu, Xianghua; Dong, Wei; Bu, Jiajun

    2015-01-01

    Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation. PMID:25686314

  13. Age estimates of globular clusters in the Milky Way: constraints on cosmology.

    PubMed

    Krauss, Lawrence M; Chaboyer, Brian

    2003-01-01

    Recent observations of stellar globular clusters in the Milky Way Galaxy, combined with revised ranges of parameters in stellar evolution codes and new estimates of the earliest epoch of globular cluster formation, result in a 95% confidence level lower limit on the age of the Universe of 11.2 billion years. This age is inconsistent with the expansion age for a flat Universe for the currently allowed range of the Hubble constant, unless the cosmic equation of state is dominated by a component that violates the strong energy condition. This means that the three fundamental observables in cosmology (the age of the Universe, the distance-redshift relation, and the geometry of the Universe) now independently support the case for a dark energy-dominated Universe. PMID:12511641
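
    The tension can be checked on the back of an envelope: a flat, matter-dominated (Einstein-de Sitter) universe has expansion age t0 = 2/(3 H0), which falls short of the 11.2 Gyr lower limit for plausible values of H0:

    ```python
    # Expansion age of an Einstein-de Sitter universe, t0 = 2 / (3 * H0)
    H0_km_s_Mpc = 70.0            # illustrative Hubble constant
    Mpc_km = 3.0857e19            # kilometres per megaparsec
    Gyr_s = 3.156e16              # seconds per gigayear

    hubble_time_Gyr = Mpc_km / H0_km_s_Mpc / Gyr_s
    print(2.0 / 3.0 * hubble_time_Gyr)   # ~9.3 Gyr < 11.2 Gyr lower limit
    ```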

  14. Application of the Direct Distance Estimation procedure to eclipsing binaries in star clusters

    NASA Astrophysics Data System (ADS)

    Milone, E. F.; Schiller, S. J.

    2013-02-01

    We alert the community to a paradigm method to calibrate a range of standard candles by means of well-calibrated photometry of eclipsing binaries in star clusters. In particular, we re-examine systems studied as part of our Binaries-in-Clusters program, and previously analyzed with earlier versions of the Wilson-Devinney light-curve modeling program. We make use of the 2010 version of this program, which incorporates a procedure to estimate the distance to an eclipsing system directly, as a system parameter, and is thus dependent on the data and analysis model alone. As such, the derived distance is accorded a standard error, independent of any additional assumptions or approximations that such analyses conventionally require.

  15. A Novel Tool Improves Existing Estimates of Recent Tuberculosis Transmission in Settings of Sparse Data Collection.

    PubMed

    Kasaie, Parastu; Mathema, Barun; Kelton, W David; Azman, Andrew S; Pennington, Jeff; Dowdy, David W

    2015-01-01

    In any setting, a proportion of incident active tuberculosis (TB) reflects recent transmission ("recent transmission proportion"), whereas the remainder represents reactivation. Appropriately estimating the recent transmission proportion has important implications for local TB control, but existing approaches have known biases, especially where data are incomplete. We constructed a stochastic individual-based model of a TB epidemic and designed a set of simulations (derivation set) to develop two regression-based tools for estimating the recent transmission proportion from five inputs: underlying TB incidence, sampling coverage, study duration, clustered proportion of observed cases, and proportion of observed clusters in the sample. We tested these tools on a set of unrelated simulations (validation set), and compared their performance against that of the traditional 'n-1' approach. In the validation set, the regression tools reduced the absolute estimation bias (difference between estimated and true recent transmission proportion) in the 'n-1' technique by a median [interquartile range] of 60% [9%, 82%] and 69% [30%, 87%]. The bias in the 'n-1' model was highly sensitive to underlying levels of study coverage and duration, and substantially underestimated the recent transmission proportion in settings of incomplete data coverage. By contrast, the regression models' performance was more consistent across different epidemiological settings and study characteristics. We provide one of these regression models as a user-friendly, web-based tool. Novel tools can improve our ability to estimate the recent TB transmission proportion from data that are observable (or estimable) by public health practitioners with limited available molecular data. PMID:26679499
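
    The traditional 'n-1' benchmark mentioned above has a simple closed form: each observed genotype cluster is assumed to contain one reactivation index case, with the remaining clustered cases attributed to recent transmission. A minimal sketch with invented counts:

    ```python
    def n_minus_one(n_clustered: int, n_clusters: int, n_total: int) -> float:
        """'n-1' estimate of the recent transmission proportion."""
        return (n_clustered - n_clusters) / n_total

    # e.g. 60 of 100 genotyped cases fall into 20 clusters
    print(n_minus_one(60, 20, 100))  # 0.4, i.e. 40% recent transmission
    ```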

  16. A Novel Tool Improves Existing Estimates of Recent Tuberculosis Transmission in Settings of Sparse Data Collection

    PubMed Central

    Kasaie, Parastu; Mathema, Barun; Kelton, W. David; Azman, Andrew S.; Pennington, Jeff; Dowdy, David W.

    2015-01-01

    In any setting, a proportion of incident active tuberculosis (TB) reflects recent transmission (“recent transmission proportion”), whereas the remainder represents reactivation. Appropriately estimating the recent transmission proportion has important implications for local TB control, but existing approaches have known biases, especially where data are incomplete. We constructed a stochastic individual-based model of a TB epidemic and designed a set of simulations (derivation set) to develop two regression-based tools for estimating the recent transmission proportion from five inputs: underlying TB incidence, sampling coverage, study duration, clustered proportion of observed cases, and proportion of observed clusters in the sample. We tested these tools on a set of unrelated simulations (validation set), and compared their performance against that of the traditional ‘n-1’ approach. In the validation set, the regression tools reduced the absolute estimation bias (difference between estimated and true recent transmission proportion) in the ‘n-1’ technique by a median [interquartile range] of 60% [9%, 82%] and 69% [30%, 87%]. The bias in the ‘n-1’ model was highly sensitive to underlying levels of study coverage and duration, and substantially underestimated the recent transmission proportion in settings of incomplete data coverage. By contrast, the regression models’ performance was more consistent across different epidemiological settings and study characteristics. We provide one of these regression models as a user-friendly, web-based tool. Novel tools can improve our ability to estimate the recent TB transmission proportion from data that are observable (or estimable) by public health practitioners with limited available molecular data. PMID:26679499

  17. Applying clustering approach in predictive uncertainty estimation: a case study with the UNEEC method

    NASA Astrophysics Data System (ADS)

    Dogulu, Nilay; Solomatine, Dimitri; Lal Shrestha, Durga

    2014-05-01

    Within the context of flood forecasting, assessment of predictive uncertainty has become a necessity for most modelling studies in operational hydrology. Several uncertainty analysis and/or prediction methods are available in the literature; however, most rely on normality and homoscedasticity assumptions for the model residuals that occur in reproducing the observed data. This study focuses on a statistical method that analyzes model residuals without such assumptions, based on a clustering approach: Uncertainty Estimation based on local Errors and Clustering (UNEEC). The aim of this work is to provide a comprehensive evaluation of the UNEEC method's performance in view of the clustering approach employed within its methodology. This is done by analyzing the normality of model residuals and comparing uncertainty analysis results (for 50% and 90% confidence levels) with those obtained from uniform interval and quantile regression methods. An important part of the comparison is the analysis of data clusters representing different hydrometeorological conditions. The validation measures used are PICP, MPI, ARIL and NUE where necessary. A new validation measure linking the prediction interval to the (hydrological) model quality - the weighted mean prediction interval (WMPI) - is also proposed for comparing the methods more effectively. The case study is the Brue catchment, located in the South West of England. A different parametrization of the method than in its previous application in Shrestha and Solomatine (2008) is used, i.e. past error values are considered in addition to discharge and effective rainfall. The results show that UNEEC's notable characteristic, i.e. applying clustering to predictor data in which catchment behaviour information is encapsulated, contributes to the increased accuracy of the method's results for varying flow conditions. Besides, classifying data so that extreme flow events are individually
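
    Two of the named validation measures, under their common definitions (the paper's exact conventions may differ), are simple to compute; PICP is the fraction of observations falling inside their prediction intervals and MPI is the mean interval width:

    ```python
    import numpy as np

    def picp(obs, lower, upper):
        """Prediction interval coverage probability."""
        return np.mean((obs >= lower) & (obs <= upper))

    def mpi(lower, upper):
        """Mean prediction interval width."""
        return np.mean(upper - lower)

    obs = np.array([2.0, 3.1, 4.7, 5.0])    # observed discharges
    lo = np.array([1.5, 2.5, 4.9, 4.0])     # lower interval bounds
    hi = np.array([2.5, 3.5, 5.5, 6.0])     # upper interval bounds
    print(picp(obs, lo, hi), mpi(lo, hi))   # 0.75, 1.15
    ```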

  18. Can modeling improve estimation of desert tortoise population densities?

    USGS Publications Warehouse

    Nussear, K.E.; Tracy, C.R.

    2007-01-01

    The federally listed desert tortoise (Gopherus agassizii) is currently monitored using distance sampling to estimate population densities. Distance sampling, as with many other techniques for estimating population density, assumes that it is possible to quantify the proportion of animals available to be counted in any census. Because desert tortoises spend much of their life in burrows, and the proportion of tortoises in burrows at any time can be extremely variable, this assumption is difficult to meet. This proportion of animals available to be counted is used as a correction factor (g0) in distance sampling and has been estimated from daily censuses of small populations of tortoises (6-12 individuals). These censuses are costly and produce imprecise estimates of g0 due to small sample sizes. We used data on tortoise activity from a large (N = 150) experimental population to model activity as a function of the biophysical attributes of the environment, but these models did not improve the precision of estimates from the focal populations. Thus, to evaluate how much of the variance in tortoise activity is apparently not predictable, we assessed whether activity on any particular day can predict activity on subsequent days with essentially identical environmental conditions. Tortoise activity was only weakly correlated on consecutive days, indicating that behavior was not repeatable or consistent among days with similar physical environments. © 2007 by the Ecological Society of America.
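
    For context, the availability correction g0 enters a line-transect density estimate as a divisor; a hedged sketch of the standard estimator D = n / (2 L × esw × g0), with all numbers invented:

    ```python
    def transect_density(n_detected: int, transect_km: float,
                         esw_km: float, g0: float) -> float:
        """Animals per square km, corrected for the fraction g0 available."""
        return n_detected / (2.0 * transect_km * esw_km * g0)

    # 30 tortoises on 100 km of transect, 20 m effective strip width,
    # and only half the animals above ground on average:
    print(transect_density(30, 100.0, 0.02, 0.5))  # 15 per km^2
    ```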

  19. Improving Estimated Optical Constants With MSTM and DDSCAT Modeling

    NASA Astrophysics Data System (ADS)

    Pitman, K. M.; Wolff, M. J.

    2015-12-01

    We present numerical experiments to determine quantitatively the effects of mineral particle clustering on Mars spacecraft spectral signatures and to improve upon the values of refractive indices (optical constants n, k) derived from Mars dust laboratory analog spectra such as those from the RELAB and MRO CRISM libraries. Whereas spectral properties for Mars analog minerals and actual Mars soil are dominated by aggregates of particles smaller than the size of martian atmospheric dust, the analytic radiative transfer (RT) solutions used to interpret planetary surfaces assume that individual, well-separated particles dominate the spectral signature. Both in RT models and in the refractive index derivation methods that include analytic RT approximations, spheres are also over-used to represent nonspherical particles. Part of the motivation is that the integrated effect over randomly oriented particles on quantities such as single scattering albedo and phase function is relatively smaller than for single particles. However, we have seen in previous numerical experiments that when varying the shape and size of individual grains within a cluster, the phase function changes in both magnitude and slope; thus the "relatively smaller" effect is more significant than one might think. Here we examine the wavelength dependence of the forward scattering parameter with multisphere T-matrix (MSTM) and discrete dipole approximation (DDSCAT) codes that compute light scattering by layers of particles on planetary surfaces to see how albedo is affected, and we integrate our model results into refractive index calculations to remove uncertainties in approximations and parameters that can lower the accuracy of optical constants. By correcting the single scattering albedo and phase function terms in the refractive index determinations, our data will help to improve the understanding of Mars in identifying, mapping the distributions, and quantifying abundances for these minerals and will address long

  20. A clustering approach for estimating parameters of a profile hidden Markov model.

    PubMed

    Aghdam, Rosa; Pezeshk, Hamid; Malekpour, Seyed Amir; Shemehsavar, Soudabeh; Eslahchi, Changiz

    2013-01-01

    A Profile Hidden Markov Model (PHMM) is a standard form of Hidden Markov Model used for modeling protein and DNA sequence families based on multiple alignment. In this paper, we implement the Baum-Welch algorithm and the Bayesian Markov Chain Monte Carlo (BMCMC) method for estimating the parameters of a small artificial PHMM. In order to improve the accuracy of the parameter estimates, we classify the training data using the weighted values of sequences in the PHMM and then apply an algorithm for estimating the parameters of the PHMM. The results show that the BMCMC method performs better than Maximum Likelihood estimation. PMID:23865165

  1. A new estimate of the Hubble constant using the Virgo cluster distance

    NASA Astrophysics Data System (ADS)

    Visvanathan, N.

    The Hubble constant, which defines the size and age of the universe, remains substantially uncertain. Attention is presently given to an improved distance to the Virgo Cluster obtained by means of the 1.05-micron luminosity-H I width relation of spirals. In order to improve the absolute calibration of the relation, accurate distances to the nearby SMC, LMC, N6822, SEX A and N300 galaxies have also been obtained, on the basis of the near-IR P-L relation of the Cepheids. A value for the global Hubble constant of 67 ± 4 km/s per Mpc is obtained.

  2. IPEG- IMPROVED PRICE ESTIMATION GUIDELINES (IBM PC VERSION)

    NASA Technical Reports Server (NTRS)

    Aster, R. W.

    1994-01-01

    The Improved Price Estimation Guidelines, IPEG, program provides a simple yet accurate estimate of the price of a manufactured product. IPEG facilitates sensitivity studies of price estimates at considerably less expense than would be incurred by using the Standard Assembly-line Manufacturing Industry Simulation, SAMIS, program (COSMIC program NPO-16032). A difference of less than one percent between the IPEG and SAMIS price estimates has been observed with realistic test cases. However, the IPEG simplification of SAMIS allows the analyst with limited time and computing resources to perform a greater number of sensitivity studies than with SAMIS. Although IPEG was developed for the photovoltaics industry, it is readily adaptable to any standard assembly line type of manufacturing industry. IPEG estimates the annual production price per unit. The input data includes cost of equipment, space, labor, materials, supplies, and utilities. Production on an industry wide basis or a process wide basis can be simulated. Once the IPEG input file is prepared, the original price is estimated and sensitivity studies may be performed. The IPEG user selects a sensitivity variable and a set of values. IPEG will compute a price estimate and a variety of other cost parameters for every specified value of the sensitivity variable. IPEG is designed as an interactive system and prompts the user for all required information and offers a variety of output options. The IPEG/PC program is written in TURBO PASCAL for interactive execution on an IBM PC computer under DOS 2.0 or above with at least 64K of memory. The IBM PC color display and color graphics adapter are needed to use the plotting capabilities in IPEG/PC. IPEG/PC was developed in 1984. The original IPEG program is written in SIMSCRIPT II.5 for interactive execution and has been implemented on an IBM 370 series computer with a central memory requirement of approximately 300K of 8 bit bytes. The original IPEG was developed in 1980.

  3. IPEG- IMPROVED PRICE ESTIMATION GUIDELINES (IBM 370 VERSION)

    NASA Technical Reports Server (NTRS)

    Chamberlain, R. G.

    1994-01-01

    The Improved Price Estimation Guidelines, IPEG, program provides a simple yet accurate estimate of the price of a manufactured product. IPEG facilitates sensitivity studies of price estimates at considerably less expense than would be incurred by using the Standard Assembly-line Manufacturing Industry Simulation, SAMIS, program (COSMIC program NPO-16032). A difference of less than one percent between the IPEG and SAMIS price estimates has been observed with realistic test cases. However, the IPEG simplification of SAMIS allows the analyst with limited time and computing resources to perform a greater number of sensitivity studies than with SAMIS. Although IPEG was developed for the photovoltaics industry, it is readily adaptable to any standard assembly line type of manufacturing industry. IPEG estimates the annual production price per unit. The input data includes cost of equipment, space, labor, materials, supplies, and utilities. Production on an industry wide basis or a process wide basis can be simulated. Once the IPEG input file is prepared, the original price is estimated and sensitivity studies may be performed. The IPEG user selects a sensitivity variable and a set of values. IPEG will compute a price estimate and a variety of other cost parameters for every specified value of the sensitivity variable. IPEG is designed as an interactive system and prompts the user for all required information and offers a variety of output options. The IPEG/PC program is written in TURBO PASCAL for interactive execution on an IBM PC computer under DOS 2.0 or above with at least 64K of memory. The IBM PC color display and color graphics adapter are needed to use the plotting capabilities in IPEG/PC. IPEG/PC was developed in 1984. The original IPEG program is written in SIMSCRIPT II.5 for interactive execution and has been implemented on an IBM 370 series computer with a central memory requirement of approximately 300K of 8 bit bytes. The original IPEG was developed in 1980.

  4. A New X-ray/Infrared Age Estimator For Young Stellar Clusters

    NASA Astrophysics Data System (ADS)

    Getman, Konstantin; Feigelson, Eric; Kuhn, Michael; Broos, Patrick; Townsley, Leisa; Naylor, Tim; Povich, Matthew; Luhman, Kevin; Garmire, Gordon

    2013-07-01

    The MYStIX (Massive Young Star-Forming Complex Study in Infrared and X-ray; Feigelson et al. 2013) project seeks to characterize 20 OB-dominated young star-forming regions (SFRs) at distances <4 kpc using photometric catalogs from the Chandra X-ray Observatory, Spitzer Space Telescope, and UKIRT and 2MASS NIR telescopes. A major impediment to understanding star formation in massive SFRs is the absence of a reliable stellar chronometer to unravel their complex star formation histories. We present estimates of stellar ages using a new method, t(JX), that employs NIR and X-ray photometry. Stellar masses are derived directly from absorption-corrected X-ray luminosities using the Lx-Mass relation from the Taurus cloud. J-band magnitudes corrected for absorption and distance are compared to the mass-dependent pre-main-sequence evolutionary models of Siess et al. (2000) to estimate ages. Unlike some other age estimators, t(JX) is sensitive to all stages of evolution, from deeply embedded disky objects to widely dispersed older pre-main-sequence stars. The method has been applied to >5500 of the >30000 MYStIX stars in 20 SFRs. As individual t(JX) values can be highly uncertain, we report median ages of samples within (sub)clusters defined by the companion study of Kuhn et al. (2013). Here a maximum likelihood model of the spatial distribution produces an objective assignment of each star to an isothermal ellipsoid or a distributed population. The MYStIX (sub)clusters show 0.5 < t(JX) < 5 Myr. The key science result of our study is the discovery of previously unknown age gradients across many MYStIX regions and clusters. The t(JX) ages are often correlated with (sub)cluster extinction and with location relative to molecular cores and ionized pillars on the peripheries of HII regions. The NIR color J-H, a surrogate measure of extinction, can serve as an approximate age predictor for young embedded clusters.
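
    A schematic of the two-step t(JX) logic described above, with a deliberately placeholder Lx-Mass relation and a caller-supplied isochrone lookup (the calibrated relations live in the cited papers and are not reproduced here):

      import numpy as np

      def t_jx_age(log_lx, abs_j, iso_ages, iso_mass_to_absj):
          # Step 1: mass from the absorption-corrected X-ray luminosity
          # via a linearized Lx-Mass relation -- coefficients below are
          # placeholders, not the calibrated Taurus relation.
          mass = 10.0 ** ((log_lx - 30.0) / 1.7)
          # Step 2: the age whose isochrone prediction best matches the
          # absorption- and distance-corrected J magnitude at this mass.
          # iso_mass_to_absj: callable (age, mass) -> predicted M_J.
          resid = [abs(iso_mass_to_absj(age, mass) - abs_j) for age in iso_ages]
          return iso_ages[int(np.argmin(resid))]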

  5. Can streaming potential data improve permeability estimates in EGS reservoirs?

    NASA Astrophysics Data System (ADS)

    Vogt, Christian; Klitzsch, Norbert

    2013-04-01

    We study the capability of streaming potential data to improve the estimation of permeability in fractured geothermal systems. To this end, we numerically simulate a tracer experiment carried out at the Enhanced Geothermal System (EGS) at Soultz-sous-Forêts, France, in 2005. The EGS is located in the Lower Rhine Graben; here, an engineered reservoir was established at approximately 5000 m depth. The tracer circulation test provides information on hydraulic connectivity between the injection borehole GPK3 and the two production boreholes GPK2 and GPK4. Vogt et al. (2011) applied stochastic inversion approaches to estimate heterogeneous permeability at Soultz in an equivalent porous medium approach and studied the non-uniqueness of the possible pathways in the reservoir. They identified three different possible groups of pathway configurations between GPK2 and GPK3 and the corresponding hydraulic properties. Using the Ensemble Kalman Filter, Vogt et al. (2012) estimated permeability by sequentially updating an ensemble of heterogeneous Monte Carlo reservoir models. Additionally, this approach quantifies the heterogeneously distributed uncertainty. Here, we study whether considering hypothetical streaming potential (SP) data during the stochastic inversion can improve the determination of the hydraulic reservoir properties. In particular, we study whether the three groups are characterized uniquely by their corresponding SP signals along the boreholes and whether the Ensemble Kalman Filter fit could be improved by joint inversion of SP and tracer data. During the actual tracer test, no SP data were recorded; this study is therefore based on synthetic data. We find that SP data predominantly yield information on the near field of permeability around the wells. Therefore, SP observations along wells will not help to characterize large-scale reservoir flow paths. However, we investigate whether additional passive SP monitoring from deviated wells around the injection

  6. Improving Evapotranspiration Estimates Using Multi-Platform Remote Sensing

    NASA Astrophysics Data System (ADS)

    Knipper, Kyle; Hogue, Terri; Franz, Kristie; Scott, Russell

    2016-04-01

    Understanding the linkages between energy and water cycles through evapotranspiration (ET) is uniquely challenging given its dependence on a range of climatological parameters and surface/atmospheric heterogeneity. A number of methods have been developed to estimate ET from primarily remote-sensing observations, from in-situ measurements, or from a combination of the two. However, the scale of many of these methods may be too coarse to capture the spatial and temporal variability of ET that can occur over regions with acute or chronic land cover change and precipitation-driven fluxes. The current study aims to improve estimates of the spatial and temporal variability of ET using only satellite-based observations, by combining a potential evapotranspiration (PET) methodology with satellite-based downscaled soil moisture estimates in southern Arizona, USA. First, soil moisture estimates from AMSR2 and SMOS are downscaled to 1 km through a triangular relationship between MODIS land surface temperature (MYD11A1), vegetation indices (MOD13Q1/MYD13Q1), and brightness temperature. Downscaled soil moisture values are then used to scale PET to actual ET (AET) at a daily, 1 km resolution. Derived AET estimates are compared to observed flux tower estimates, the North American Land Data Assimilation System (NLDAS) model output (i.e., the Variable Infiltration Capacity (VIC) Macroscale Hydrologic Model, Mosaic Model, and Noah Model simulations), the Operational Simplified Surface Energy Balance Model (SSEBop), and a calibrated empirical ET model created specifically for the region. Preliminary results indicate a strong increase in correlation when incorporating the downscaling technique relative to the original AMSR2 and SMOS soil moisture values, with the added benefit of being able to decipher small-scale heterogeneity in soil moisture (riparian versus desert grassland). AET results show strong correlations with relatively low error and bias when compared to flux tower
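
    The PET-to-AET scaling step lends itself to a one-line sketch; the study's triangle-method downscaling of AMSR2/SMOS soil moisture is the hard part and is not reproduced here. A minimal sketch, assuming the downscaled soil moisture is normalized between hypothetical dry and saturated bounds:

      def actual_et(pet, sm, sm_min, sm_max):
          # Schematic soil-moisture scaling of PET to AET in the spirit
          # of the approach above: AET = PET * relative saturation.
          # sm_min/sm_max are assumed site-specific bounds, not values
          # from the study.
          theta = max(0.0, min(1.0, (sm - sm_min) / (sm_max - sm_min)))
          return pet * theta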

  7. Estimation of Missing Precipitation Records using Classifier, Cluster and Proximity Metric-Based Interpolation Schemes

    NASA Astrophysics Data System (ADS)

    Teegavarapu, R. S.

    2012-12-01

    New optimal proximity-based imputation, k-nn (k-nearest neighbor) classification, and k-means clustering methods are proposed and developed for the estimation of missing precipitation records in this study. Variants of these methods are embedded in optimization formulations to optimize the weighting schemes involving proximity measures. Ten different binary and real-valued distance metrics are used as proximity measures. Two climatic regions in the United States, Kentucky (temperate) and Florida (tropical), with different gauge density and gauge network structure, are used as case studies to evaluate the efficacy of these methods for estimation of missing precipitation data. A comprehensive exercise is undertaken in this study to compare the performances of the newly developed methods and their variants with those of methods already available in the literature. Several deterministic and stochastic spatial interpolation methods and their improvised variants using optimization formulations are used for comparison. Results from these comparisons indicate that the optimal proximity-based imputation, k-means cluster-based, and k-nn classification methods are competitive when combined with mathematical programming formulations and provide better estimates of missing precipitation data than the available deterministic and stochastic interpolation methods.
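
    A minimal sketch of the k-nearest-neighbor estimator family the abstract builds on, using inverse-distance weights; the study's optimization of the weighting scheme via mathematical programming, and its binary distance metrics, are not reproduced here:

      import numpy as np

      def knn_impute(neighbor_obs, neighbor_dists, k=5):
          # Estimate the missing value at a target gauge as an
          # inverse-distance-weighted mean of its k nearest gauges.
          # neighbor_obs:   concurrent observations at candidate gauges
          # neighbor_dists: proximity metric to the target gauge
          order = np.argsort(neighbor_dists)[:k]
          w = 1.0 / np.maximum(neighbor_dists[order], 1e-9)
          return float(np.sum(w * neighbor_obs[order]) / np.sum(w))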

  8. Clustering and training set selection methods for improving the accuracy of quantitative laser induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Anderson, Ryan B.; Bell, James F., III; Wiens, Roger C.; Morris, Richard V.; Clegg, Samuel M.

    2012-04-01

    We investigated five clustering and training set selection methods to improve the accuracy of quantitative chemical analysis of geologic samples by laser induced breakdown spectroscopy (LIBS) using partial least squares (PLS) regression. The LIBS spectra were previously acquired for 195 rock slabs and 31 pressed powder geostandards under 7 Torr CO2 at a stand-off distance of 7 m at 17 mJ per pulse to simulate the operational conditions of the ChemCam LIBS instrument on the Mars Science Laboratory Curiosity rover. The clustering and training set selection methods, which do not require prior knowledge of the chemical composition of the test-set samples, are based on grouping similar spectra and selecting appropriate training spectra for the partial least squares (PLS2) model. These methods were: (1) hierarchical clustering of the full set of training spectra and selection of a subset for use in training; (2) k-means clustering of all spectra and generation of PLS2 models based on the training samples within each cluster; (3) iterative use of PLS2 to predict sample composition and k-means clustering of the predicted compositions to subdivide the groups of spectra; (4) soft independent modeling of class analogy (SIMCA) classification of spectra, and generation of PLS2 models based on the training samples within each class; (5) use of Bayesian information criteria (BIC) to determine an optimal number of clusters and generation of PLS2 models based on the training samples within each cluster. The iterative method and the k-means method using 5 clusters showed the best performance, improving the absolute quadrature root mean squared error (RMSE) by ~ 3 wt.%. The statistical significance of these improvements was ~ 85%. Our results show that although clustering methods can modestly improve results, a large and diverse training set is the most reliable way to improve the accuracy of quantitative LIBS. In particular, additional sulfate standards and specifically fabricated
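
    Method (2) above -- k-means clustering of all spectra, then one PLS2 model per cluster -- can be sketched with scikit-learn as follows, assuming each cluster retains at least as many training spectra as latent variables:

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.cross_decomposition import PLSRegression

      def cluster_pls_fit(train_spectra, train_comps, n_clusters=5, n_lv=10):
          # Cluster the training spectra, then fit a PLS2 model on the
          # members of each cluster (train_comps holds the reference
          # compositions, one row per spectrum).
          km = KMeans(n_clusters=n_clusters, n_init=10).fit(train_spectra)
          models = {c: PLSRegression(n_components=n_lv).fit(
                        train_spectra[km.labels_ == c],
                        train_comps[km.labels_ == c])
                    for c in range(n_clusters)}
          return km, models

      def cluster_pls_predict(km, models, spectra):
          # Route each unknown spectrum to its cluster's PLS2 model.
          labels = km.predict(spectra)
          return np.vstack([models[c].predict(s[None, :])
                            for s, c in zip(spectra, labels)])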

  9. Snowpack Estimates Improve Water Resources Climate-Change Adaptation Strategies

    NASA Astrophysics Data System (ADS)

    Lestak, L.; Molotch, N. P.; Guan, B.; Granger, S. L.; Nemeth, S.; Rizzardo, D.; Gehrke, F.; Franz, K. J.; Karsten, L. R.; Margulis, S. A.; Case, K.; Anderson, M.; Painter, T. H.; Dozier, J.

    2010-12-01

    Observed climate trends over the past 50 years indicate a reduction in snowpack water storage across the Western U.S. As the primary water source for the region, the loss in snowpack water storage presents significant challenges for managing water deliveries to meet agricultural, municipal, and hydropower demands. Improved snowpack information via remote sensing shows promise for improving seasonal water supply forecasts and for informing decadal-scale infrastructure planning. An ongoing project in the California Sierra Nevada and examples from the Rocky Mountains indicate the tractability of estimating snowpack water storage on daily time steps using a distributed snowpack reconstruction model. Fractional snow-covered area (FSCA) derived from Moderate Resolution Imaging Spectroradiometer (MODIS) satellite data were used with modeled snowmelt from the snowpack model to estimate snow water equivalent (SWE) in the Sierra Nevada (64,515 km2). Spatially distributed daily SWE estimates were calculated for 10 years, 2000-2009, with detailed analysis for two anomalous years: 2006, a wet year, and 2009, an over-forecasted year. Sierra-wide mean SWE was 0.8 cm for 01 April 2006 versus 0.4 cm for 01 April 2009, comparing favorably with known outflow. Modeled SWE was compared to in-situ (observed) SWE for 01 April 2006 for the Feather (northern Sierra, lower-elevation) and Merced (central Sierra, higher-elevation) basins, with mean modeled SWE 80% of observed SWE. Integration of spatial SWE estimates into forecasting operations will allow for better visualization and analysis of high-altitude late-season snow missed by in-situ snow sensors and of inter-annual anomalies associated with extreme precipitation events/atmospheric rivers. Collaborations with state and local entities establish protocols on how to meet current and future information needs and improve climate-change adaptation strategies.
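
    The reconstruction idea -- SWE on a given day equals the snowmelt still to occur before the snow disappears, weighted by fractional snow cover -- reduces to a reverse cumulative sum. A minimal sketch with hypothetical daily inputs (not the project's full distributed model):

      import numpy as np

      def reconstruct_swe(fsca, melt):
          # fsca: daily fractional snow-covered area of a pixel (0-1)
          # melt: daily modeled potential snowmelt depth (e.g. cm/day)
          # SWE(t) = sum over t'>=t of fsca(t') * melt(t'),
          # i.e. a reverse cumulative sum of the melt contributions.
          contrib = np.asarray(fsca) * np.asarray(melt)
          return contrib[::-1].cumsum()[::-1]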

  10. Improving the quality of parameter estimates obtained from slug tests

    USGS Publications Warehouse

    Butler, J.J., Jr.; McElwee, C.D.; Liu, W.

    1996-01-01

    The slug test is one of the most commonly used field methods for obtaining in situ estimates of hydraulic conductivity. Despite its prevalence, this method has received criticism from many quarters in the ground-water community. This criticism emphasizes the poor quality of the estimated parameters, a condition that is primarily a product of the somewhat casual approach that is often employed in slug tests. Recently, the Kansas Geological Survey (KGS) has pursued research directed at improving methods for the performance and analysis of slug tests. Based on extensive theoretical and field research, a series of guidelines have been proposed that should enable the quality of parameter estimates to be improved. The most significant of these guidelines are: (1) three or more slug tests should be performed at each well during a given test period; (2) two or more different initial displacements (Ho) should be used at each well during a test period; (3) the method used to initiate a test should enable the slug to be introduced in a near-instantaneous manner and should allow a good estimate of Ho to be obtained; (4) data-acquisition equipment that enables a large quantity of high quality data to be collected should be employed; (5) if an estimate of the storage parameter is needed, an observation well other than the test well should be employed; (6) the method chosen for analysis of the slug-test data should be appropriate for site conditions; (7) use of pre- and post-analysis plots should be an integral component of the analysis procedure; and (8) appropriate well construction parameters should be employed. Data from slug tests performed at a number of KGS field sites demonstrate the importance of these guidelines.

  11. Tuning target selection algorithms to improve galaxy redshift estimates

    NASA Astrophysics Data System (ADS)

    Hoyle, Ben; Paech, Kerstin; Rau, Markus Michael; Seitz, Stella; Weller, Jochen

    2016-06-01

    We showcase machine learning (ML) inspired target selection algorithms to determine which of all potential targets should be selected first for spectroscopic follow-up. Efficient target selection can improve the ML redshift uncertainties as calculated on an independent sample, while requiring fewer targets to be observed. We compare seven different ML targeting algorithms with the Sloan Digital Sky Survey (SDSS) target order, and with a random targeting algorithm. The ML inspired algorithms are constructed iteratively by estimating which of the remaining target galaxies will be most difficult for the ML methods to accurately estimate redshifts using the previously observed data. This is performed by predicting the expected redshift error and redshift offset (or bias) of all of the remaining target galaxies. We find that the predicted values of bias and error are accurate to better than 10-30 per cent of the true values, even with only limited training sample sizes. We construct a hypothetical follow-up survey and find that some of the ML targeting algorithms are able to obtain the same redshift predictive power with 2-3 times less observing time than the SDSS or random target selection algorithms. The reduction in the required follow-up resources could allow for a change to the follow-up strategy, for example by obtaining deeper spectroscopy, which could improve ML redshift estimates for deeper test data.

  12. Improving stochastic estimates with inference methods: calculating matrix diagonals.

    PubMed

    Selig, Marco; Oppermann, Niels; Ensslin, Torsten A

    2012-02-01

    Estimating the diagonal entries of a matrix that is not directly accessible but is only available as a linear operator in the form of a computer routine is a common necessity in many computational applications, especially in image reconstruction and statistical inference. Here, methods of statistical inference are used to improve the accuracy, or reduce the computational cost, of matrix probing methods for estimating matrix diagonals. In particular, the generalized Wiener filter methodology, as developed within information field theory, is shown to significantly improve estimates based on only a few sampling probes in cases in which some form of continuity of the solution can be assumed. The strength, length scale, and precise functional form of the exploited autocorrelation function of the matrix diagonal are determined from the probes themselves. The developed algorithm is successfully applied to mock and real-world problems. These performance tests show that, in situations where a matrix diagonal has to be calculated from only a small number of computationally expensive probes, a speedup by a factor of 2 to 10 is possible with the proposed method. PMID:22463179
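
    The baseline matrix-probing estimator that the paper improves upon can be sketched as follows; the Wiener-filter smoothing of the raw estimate is the paper's contribution and is not reproduced here:

      import numpy as np

      def probe_diagonal(apply_A, n, n_probes=32, seed=0):
          # Stochastic probing estimate of diag(A), where A is available
          # only through the routine apply_A(x) -> A @ x.  With Rademacher
          # probes z, diag(A) is approximated by the mean of z * (A z).
          rng = np.random.default_rng(seed)
          acc = np.zeros(n)
          for _ in range(n_probes):
              z = rng.choice([-1.0, 1.0], size=n)
              acc += z * apply_A(z)
          return acc / n_probes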

  13. A PARAMETERIZED GALAXY CATALOG SIMULATOR FOR TESTING CLUSTER FINDING, MASS ESTIMATION, AND PHOTOMETRIC REDSHIFT ESTIMATION IN OPTICAL AND NEAR-INFRARED SURVEYS

    SciTech Connect

    Song, Jeeseon; Mohr, Joseph J.; Barkhouse, Wayne A.; Rude, Cody; Warren, Michael S.; Dolag, Klaus

    2012-03-01

    We present a galaxy catalog simulator that converts N-body simulations with halo and subhalo catalogs into mock, multiband photometric catalogs. The simulator assigns galaxy properties to each subhalo in a way that reproduces the observed cluster galaxy halo occupation distribution, the radial and mass-dependent variation in fractions of blue galaxies, the luminosity functions in the cluster and the field, and the color-magnitude relation in clusters. Moreover, the evolution of these parameters is tuned to match existing observational constraints. Parameterizing an ensemble of cluster galaxy properties enables us to create mock catalogs with variations in those properties, which in turn allows us to quantify the sensitivity of cluster finding to current observational uncertainties in these properties. Field galaxies are sampled from existing multiband photometric surveys of similar depth. We present an application of the catalog simulator to characterize the selection function and contamination of a galaxy cluster finder that utilizes the cluster red sequence together with galaxy clustering on the sky. We estimate systematic uncertainties in the selection to be at the {<=}15% level with current observational constraints on cluster galaxy populations and their evolution. We find the contamination in this cluster finder to be {approx}35% to redshift z {approx} 0.6. In addition, we use the mock galaxy catalogs to test the optical mass indicator B{sub gc} and a red-sequence redshift estimator. We measure the intrinsic scatter of the B{sub gc}-mass relation to be approximately log normal with {sigma}{sub log10M}{approx}0.25 and we demonstrate photometric redshift accuracies for massive clusters at the {approx}3% level out to z {approx} 0.7.

  14. Estimating accuracy of land-cover composition from two-stage cluster sampling

    USGS Publications Warehouse

    Stehman, S.V.; Wickham, J.D.; Fattorini, L.; Wade, T.D.; Baffetta, F.; Smith, J.H.

    2009-01-01

    Land-cover maps are often used to compute land-cover composition (i.e., the proportion or percent of area covered by each class) for each unit in a spatial partition of the region mapped. We derive design-based estimators of mean deviation (MD), mean absolute deviation (MAD), root mean square error (RMSE), and correlation (CORR) to quantify accuracy of land-cover composition for a general two-stage cluster sampling design, and for the special case of simple random sampling without replacement (SRSWOR) at each stage. The bias of the estimators for the two-stage SRSWOR design is evaluated via a simulation study. The estimators of RMSE and CORR have small bias except when sample size is small and the land-cover class is rare. The estimator of MAD is biased for both rare and common land-cover classes except when sample size is large. A general recommendation is that rare land-cover classes require large sample sizes to ensure that the accuracy estimators have small bias. © 2009 Elsevier Inc.
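
    For the equal-weight special case (self-weighting SRSWOR samples), the four accuracy measures reduce to simple sample statistics over the mapped and reference compositions of the sampled units; the general design-based estimators would carry survey weights instead. A minimal sketch:

      import numpy as np

      def composition_accuracy(map_prop, ref_prop):
          # map_prop, ref_prop: per-unit proportions of one class from
          # the map and from the reference data, respectively.
          map_prop, ref_prop = np.asarray(map_prop), np.asarray(ref_prop)
          d = map_prop - ref_prop
          return {"MD": d.mean(),
                  "MAD": np.abs(d).mean(),
                  "RMSE": np.sqrt((d ** 2).mean()),
                  "CORR": np.corrcoef(map_prop, ref_prop)[0, 1]}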

  15. Cosmological parameter estimation from CMB and X-ray cluster after Planck

    NASA Astrophysics Data System (ADS)

    Hu, Jian-Wei; Cai, Rong-Gen; Guo, Zong-Kuan; Hu, Bin

    2014-05-01

    We investigate constraints on cosmological parameters in three 8-parameter models with the summed neutrino mass as a free parameter, by a joint analysis of CCCP X-ray cluster data, the newly released Planck CMB data, and some external data sets including baryon acoustic oscillation measurements from the 6dFGS, SDSS DR7 and BOSS DR9 surveys, and the Hubble Space Telescope H0 measurement. We find that the combined data strongly favor non-zero neutrino masses at more than the 3σ confidence level in these non-vanilla models. Allowing the CMB lensing amplitude AL to vary, we find AL > 1 at 3σ confidence level. For dark energy with a constant equation of state w, we obtain w < -1 at 3σ confidence level. The estimate of the matter power spectrum amplitude σ8 is discrepant with the Planck value at 2σ confidence level, which reflects some tension between X-ray cluster data and Planck data in these non-vanilla models. The tension can be alleviated by adding a 9% systematic shift in the cluster mass function.

  16. Cosmological parameter estimation from CMB and X-ray cluster after Planck

    SciTech Connect

    Hu, Jian-Wei; Cai, Rong-Gen; Guo, Zong-Kuan; Hu, Bin E-mail: cairg@itp.ac.cn E-mail: hu@lorentz.leidenuniv.nl

    2014-05-01

    We investigate constraints on cosmological parameters in three 8-parameter models with the summed neutrino mass as a free parameter, by a joint analysis of CCCP X-ray cluster data, the newly released Planck CMB data, and some external data sets including baryon acoustic oscillation measurements from the 6dFGS, SDSS DR7 and BOSS DR9 surveys, and the Hubble Space Telescope H{sub 0} measurement. We find that the combined data strongly favor non-zero neutrino masses at more than the 3σ confidence level in these non-vanilla models. Allowing the CMB lensing amplitude A{sub L} to vary, we find A{sub L} > 1 at 3σ confidence level. For dark energy with a constant equation of state w, we obtain w < −1 at 3σ confidence level. The estimate of the matter power spectrum amplitude σ{sub 8} is discrepant with the Planck value at 2σ confidence level, which reflects some tension between X-ray cluster data and Planck data in these non-vanilla models. The tension can be alleviated by adding a 9% systematic shift in the cluster mass function.

  17. Speed Profiles for Improvement of Maritime Emission Estimation

    PubMed Central

    Yau, Pui Shan; Lee, Shun-Cheng; Ho, Kin Fai

    2012-01-01

    Maritime emissions play an important role in anthropogenic emissions, particularly for cities with busy ports such as Hong Kong. Ship emissions are strongly dependent on vessel speed, and thus accurate vessel speed is essential for maritime emission studies. In this study, we determined minute-by-minute high-resolution speed profiles of container ships on four major routes in Hong Kong waters using the Automatic Identification System (AIS). The activity-based ship emissions of NOx, CO, HC, CO2, SO2, and PM10 were estimated using derived vessel speed profiles, and results were compared with those using the speed limits of control zones. Estimation using speed limits resulted in up to twofold overestimation of ship emissions. Compared with emissions estimated using the speed limits of control zones, emissions estimated using vessel speed profiles could provide results with up to 88% higher accuracy. Uncertainty analysis and sensitivity analysis of the model demonstrated the significance of improvement of vessel speed resolution. From spatial analysis, it is revealed that SO2 and PM10 emissions during maneuvering within 1 nautical mile from port were the highest. They contributed 7%–22% of SO2 emissions and 8%–17% of PM10 emissions of the entire voyage in Hong Kong. PMID:23236250
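
    A minimal sketch of an activity-based estimate from an AIS speed profile, using the common cubic propeller-law approximation for main-engine load; the study's exact engine-load and emission-factor models are not reproduced, and all parameter values here are placeholders:

      def voyage_emissions(speeds_kn, dt_min, p_max_kw, v_max_kn, ef_g_per_kwh):
          # Sum pollutant mass over a minute-by-minute speed profile:
          # engine load ~ (v / v_max)^3 (propeller law, capped at 1),
          # emissions = power * load * emission factor * time.
          total_g = 0.0
          for v in speeds_kn:
              load = min((v / v_max_kn) ** 3, 1.0)
              total_g += p_max_kw * load * ef_g_per_kwh * (dt_min / 60.0)
          return total_g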

  18. Adaptive noise estimation and suppression for improving microseismic event detection

    NASA Astrophysics Data System (ADS)

    Mousavi, S. Mostafa; Langston, Charles A.

    2016-09-01

    Microseismic data recorded by surface arrays are often strongly contaminated by unwanted noise. This background noise makes the detection of small-magnitude events difficult. A noise level estimation and noise reduction algorithm is presented for microseismic data analysis, based upon minimally controlled recursive averaging and neighborhood shrinkage estimators. The method may not match more sophisticated and computationally expensive denoising algorithms in preserving detailed features of the seismic signal. However, it is fast and data-driven and can be applied in real-time processing of continuous data for event detection purposes. Results from applying this algorithm to synthetic and real seismic data show that it holds great promise for improving microseismic event detection.
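
    A schematic stand-in for this class of approach -- plain recursive averaging of the noise spectrum with a Wiener-like shrinkage gain; the published algorithm's minimal control of the averaging and its neighborhood shrinkage are omitted:

      import numpy as np

      def denoise_gain(power, alpha=0.95, beta=2.0):
          # power: STFT power matrix (freq x time).  The noise PSD is
          # tracked by recursive averaging; each frame gets a shrinkage
          # gain in [0, 1).  In practice the update would be gated so
          # that events do not inflate the noise estimate.
          noise = power[:, 0].copy()
          gain = np.zeros_like(power)
          for t in range(power.shape[1]):
              noise = alpha * noise + (1.0 - alpha) * power[:, t]
              snr = np.maximum(power[:, t] / (beta * noise) - 1.0, 0.0)
              gain[:, t] = snr / (snr + 1.0)
          # Multiply the complex STFT by this gain and invert to denoise.
          return gain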

  19. An improved sparse LS-SVR for estimating illumination

    NASA Astrophysics Data System (ADS)

    Zhu, Zhenmin; Lv, Zhaokang; Liu, Baifen

    2015-07-01

    Support Vector Regression (SVR) performs well on estimating illumination chromaticity in a scene. The concept of Least Squares Support Vector Regression (LS-SVR) has since been put forward as an effective statistical learning model for prediction. Although LS-SVR solves some estimation problems successfully, it has an obvious defect: because a large number of support vectors are chosen during training, the calculation becomes complex and the sparsity of SVR is lost. In this paper, we draw inspiration from WLS-SVM (Weighted Least Squares Support Vector Machines) and propose a new sparse model. A Density Weighted Pruning algorithm is used to improve the sparsity of LS-SVR; the resulting method is named SLS-SVR (Sparse Least Squares Support Vector Regression). Simulations indicate that selecting only 30 percent of the support vectors is enough for the prediction to reach 75 percent of the performance of the original model.
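
    A simplified sketch of the fit-prune-refit pattern, pruning by |alpha| rather than by the paper's density weighting, and omitting the LS-SVR bias term for brevity:

      import numpy as np

      def rbf_kernel(X, Y, s=1.0):
          d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
          return np.exp(-d2 / (2 * s * s))

      def sparse_lssvr(X, y, gamma=10.0, s=1.0, keep=0.3):
          # Fit LS-SVR on all points, keep the fraction `keep` of
          # support vectors with the largest |alpha|, refit on that
          # subset.  (The paper prunes by density weighting instead;
          # this is a simplified stand-in.)
          def fit(Xs, ys):
              K = rbf_kernel(Xs, Xs, s)
              return np.linalg.solve(K + np.eye(len(ys)) / gamma, ys)
          alpha = fit(X, y)
          idx = np.argsort(-np.abs(alpha))[: int(keep * len(y))]
          return X[idx], fit(X[idx], y[idx])

      def predict(Xsv, alpha, Xnew, s=1.0):
          return rbf_kernel(Xnew, Xsv, s) @ alpha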

  20. Improved estimates of coordinate error for molecular replacement

    SciTech Connect

    Oeffner, Robert D.; Bunkóczi, Gábor; McCoy, Airlie J.; Read, Randy J.

    2013-11-01

    A function for estimating the effective root-mean-square deviation in coordinates between two proteins has been developed that depends on both the sequence identity and the size of the protein and is optimized for use with molecular replacement in Phaser. A top peak translation-function Z-score of over 8 is found to be a reliable metric of when molecular replacement has succeeded. The estimate of the root-mean-square deviation (r.m.s.d.) in coordinates between the model and the target is an essential parameter for calibrating likelihood functions for molecular replacement (MR). Good estimates of the r.m.s.d. lead to good estimates of the variance term in the likelihood functions, which increases signal to noise and hence success rates in the MR search. Phaser has hitherto used an estimate of the r.m.s.d. that only depends on the sequence identity between the model and target and which was not optimized for the MR likelihood functions. Variance-refinement functionality was added to Phaser to enable determination of the effective r.m.s.d. that optimized the log-likelihood gain (LLG) for a correct MR solution. Variance refinement was subsequently performed on a database of over 21 000 MR problems that sampled a range of sequence identities, protein sizes and protein fold classes. Success was monitored using the translation-function Z-score (TFZ), where a TFZ of 8 or over for the top peak was found to be a reliable indicator that MR had succeeded for these cases with one molecule in the asymmetric unit. Good estimates of the r.m.s.d. are correlated with the sequence identity and the protein size. A new estimate of the r.m.s.d. that uses these two parameters in a function optimized to fit the mean of the refined variance is implemented in Phaser and improves MR outcomes. Perturbing the initial estimate of the r.m.s.d. from the mean of the distribution in steps of standard deviations of the distribution further increases MR success rates.

  1. Estimating {Omega} from galaxy redshifts: Linear flow distortions and nonlinear clustering

    SciTech Connect

    Bromley, B.C. |; Warren, M.S.; Zurek, W.H.

    1997-02-01

    We propose a method to determine the cosmic mass density {Omega} from redshift-space distortions induced by large-scale flows in the presence of nonlinear clustering. Nonlinear structures in redshift space, such as fingers of God, can contaminate distortions from linear flows on scales as large as several times the small-scale pairwise velocity dispersion {sigma}{sub {nu}}. Following Peacock & Dodds, we work in the Fourier domain and propose a model to describe the anisotropy in the redshift-space power spectrum; tests with high-resolution numerical data demonstrate that the model is robust for both mass and biased galaxy halos on translinear scales and above. On the basis of this model, we propose an estimator of the linear growth parameter {beta}={Omega}{sup 0.6}/b, where b measures bias, derived from sampling functions that are tuned to eliminate distortions from nonlinear clustering. The measure is tested on the numerical data and found to recover the true value of {beta} to within {approximately}10{percent}. An analysis of {ital IRAS} 1.2 Jy galaxies yields {beta}=0.8{sub {minus}0.3}{sup +0.4} at a scale of 1000 km s{sup {minus}1}, which is close to optimal given the shot noise and finite size of the survey. This measurement is consistent with dynamical estimates of {beta} derived from both real-space and redshift-space information. The importance of the method presented here is that nonlinear clustering effects are removed to enable linear correlation anisotropy measurements on scales approaching the translinear regime. We discuss implications for analyses of forthcoming optical redshift surveys in which the dispersion is more than a factor of 2 greater than in the {ital IRAS} data. {copyright} {ital 1997} {ital The American Astronomical Society}
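
    The dispersion-model family referenced above is commonly written as a Kaiser linear-flow boost damped by a Lorentzian pairwise-velocity factor; a representative form (a sketch of the general type, not necessarily the paper's exact parameterization) is

      P_s(k, \mu) = P_r(k) \, \frac{(1 + \beta \mu^2)^2}{1 + k^2 \mu^2 \sigma_\nu^2 / 2}

    where \mu is the cosine of the angle between the wavevector and the line of sight and P_r is the real-space power spectrum. The numerator carries the linear-flow distortion measured by \beta, while the denominator suppresses the fingers-of-God contamination governed by the pairwise dispersion \sigma_\nu.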

  2. A stochastic movement simulator improves estimates of landscape connectivity.

    PubMed

    Coulon, A; Aben, J; Palmer, S C F; Stevens, V M; Callens, T; Strubbe, D; Lens, L; Matthysen, E; Baguette, M; Travis, J M J

    2015-08-01

    Conservation actions often focus on restoration or creation of natural areas designed to facilitate the movements of organisms among populations. To be efficient, these actions need to be based on reliable estimates or predictions of landscape connectivity. While circuit theory and least-cost paths (LCPs) are increasingly being used to estimate connectivity, these methods also have proven limitations. We compared their performance in predicting genetic connectivity with that of an alternative approach based on a simple, individual-based "stochastic movement simulator" (SMS). SMS predicts dispersal of organisms using the same landscape representation as LCPs and circuit theory-based estimates (i.e., a cost surface), while relaxing key LCP assumptions, namely individual omniscience of the landscape (by incorporating perceptual range) and the optimality of individual movements (by including stochasticity in simulated movements). The performance of the three estimators was assessed by the degree to which they correlated with genetic estimates of connectivity in two species with contrasting movement abilities (Cabanis's Greenbul, an Afrotropical forest bird species, and natterjack toad, an amphibian restricted to European sandy and heathland areas). For both species, the correlation between dispersal model and genetic data was substantially higher when SMS was used. Importantly, the results also demonstrate that the improvement gained by using SMS is robust both to variation in spatial resolution of the landscape and to uncertainty in the perceptual range model parameter. Integration of this individual-based approach with other developing methods in the field of connectivity research, such as graph theory, can yield rapid progress towards more robust connectivity indices and more effective recommendations for land management. PMID:26405745
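
    A minimal sketch of one step of a stochastic movement simulator of this type: the walker samples among cells within its perceptual range, preferring low-cost cells, rather than following the single least-cost path. Directional persistence and other features of the published SMS are omitted; `percept` and `temp` are illustrative parameters:

      import numpy as np

      def sms_step(cost, pos, rng, percept=2, temp=1.0):
          # cost: 2-D cost surface (same representation as LCP/circuit
          # methods); pos: current (row, col).  Candidate cells within
          # the perceptual range are weighted by exp(-cost / temp).
          r, c = pos
          cells, weights = [], []
          for dr in range(-percept, percept + 1):
              for dc in range(-percept, percept + 1):
                  if dr == dc == 0:
                      continue
                  rr, cc = r + dr, c + dc
                  if 0 <= rr < cost.shape[0] and 0 <= cc < cost.shape[1]:
                      cells.append((rr, cc))
                      weights.append(np.exp(-cost[rr, cc] / temp))
          w = np.array(weights) / np.sum(weights)
          return cells[rng.choice(len(cells), p=w)]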

  3. Improved soil moisture balance methodology for recharge estimation

    NASA Astrophysics Data System (ADS)

    Rushton, K. R.; Eilers, V. H. M.; Carter, R. C.

    2006-03-01

    Estimation of recharge in a variety of climatic conditions is possible using a daily soil moisture balance based on a single soil store. Both transpiration from crops and evaporation from bare soil are included in the conceptual and computational models. The actual evapotranspiration is less than the potential value when the soil is under stress; the stress factor is estimated in terms of the readily and total available water, parameters which depend on soil properties and the effective depth of the roots. Runoff is estimated as a function of the daily rainfall intensity and the current soil moisture deficit. A new concept, near surface soil storage, is introduced to account for continuing evapotranspiration on days following heavy rainfall even though a large soil moisture deficit exists. Algorithms for the computational model are provided. The data required for the soil moisture balance calculations are widely available or they can be deduced from published data. This methodology for recharge estimation using a soil moisture balance is applied to two contrasting case studies. The first case study refers to a rainfed crop in semi-arid northeast Nigeria; recharge occurs during the period of main crop growth. For the second case study in England, a location is selected where the long-term average rainfall and potential evapotranspiration are of similar magnitudes. For each case study, detailed information is presented about the selection of soil, crop and other parameters. The plausibility of the model outputs is examined using a variety of independent information and data. Uncertainties and variations in parameter values are explored using sensitivity analyses. These two case studies indicate that the improved single-store soil moisture balance model is a reliable approach for potential recharge estimation in a wide variety of situations.
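
    The core of the single-store balance can be sketched in a few lines; the runoff function and the near-surface store described above are omitted. RAW and TAW are the readily and total available water, and the soil moisture deficit (SMD) convention is zero at field capacity:

      def daily_recharge(rain, pet, raw, taw, smd0=0.0):
          # AET = PET while SMD <= RAW, declines linearly between RAW
          # and TAW, and any moisture surplus (SMD driven below zero)
          # becomes recharge.  A simplified sketch of the single-store
          # balance, not the full published method.
          smd, recharge = smd0, []
          for p, e in zip(rain, pet):
              stress = 1.0 if smd <= raw else max(0.0, (taw - smd) / (taw - raw))
              smd += e * stress - p      # deficit grows with AET, shrinks with rain
              rch = max(0.0, -smd)       # surplus below zero deficit drains
              smd = max(smd, 0.0)
              recharge.append(rch)
          return recharge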

  4. Improved risk estimates for carbon tetrachloride. 1998 annual progress report

    SciTech Connect

    Benson, J.M.; Springer, D.L.; Thrall, K.D.

    1998-06-01

    The overall purpose of these studies is to improve the scientific basis for assessing the cancer risk associated with human exposure to carbon tetrachloride. Specifically, the toxicokinetics of inhaled carbon tetrachloride is being determined in rats, mice, and hamsters. Species differences in the metabolism of carbon tetrachloride by rats, mice, and hamsters are being determined in vivo and in vitro using tissues and microsomes from these rodent species and man. Dose-response relationships will be determined in all studies. The information will be used to improve the current physiologically based pharmacokinetic model for carbon tetrachloride. The authors will also determine whether carbon tetrachloride is a hepatocarcinogen only when exposure results in cell damage, cell killing, and regenerative cell proliferation. In combination, the results of these studies will provide the types of information needed to enable a refined risk estimate for carbon tetrachloride under EPA's new guidelines for cancer risk assessment.

  5. Improving the Accuracy of Estimation of Climate Extremes

    NASA Astrophysics Data System (ADS)

    Zolina, Olga; Detemmerman, Valery; Trenberth, Kevin E.

    2010-12-01

    Workshop on Metrics and Methodologies of Estimation of Extreme Climate Events; Paris, France, 27-29 September 2010; Climate projections point toward more frequent and intense weather and climate extremes such as heat waves, droughts, and floods, in a warmer climate. These projections, together with recent extreme climate events, including flooding in Pakistan and the heat wave and wildfires in Russia, highlight the need for improved risk assessments to help decision makers and the public. But accurate analysis and prediction of risk of extreme climate events require new methodologies and information from diverse disciplines. A recent workshop sponsored by the World Climate Research Programme (WCRP) and hosted at United Nations Educational, Scientific and Cultural Organization (UNESCO) headquarters in France brought together, for the first time, a unique mix of climatologists, statisticians, meteorologists, oceanographers, social scientists, and risk managers (such as those from insurance companies) who sought ways to improve scientists' ability to characterize and predict climate extremes in a changing climate.

  6. Improving transportation data for mobile source emission estimates. Final report

    SciTech Connect

    Chatterjee, A.; Miller, T.L.; Philpot, J.W.; Wholley, T.F.; Guensler, R.

    1997-12-31

    The report provides an overview of federal statutes and policies which form the foundation for air quality planning related to transportation systems development. It also provides a detailed presentation regarding the use of federally mandated air quality models in estimating mobile source emissions resulting from transportation development and operations. The authors suggest ways in which current practice and analysis tools can be improved to increase the accuracy of their results. They also suggest some priorities for additional related research. Finally, the report should assist federal agency practitioners in their efforts to improve analytical methods and tools for determining conformity. The report also serves as a basic educational resource for current and future transportation and air quality modeling.

  7. Reducing measurement scale mismatch to improve surface energy flux estimation

    NASA Astrophysics Data System (ADS)

    Iwema, Joost; Rosolem, Rafael; Rahman, Mostaquimur; Blyth, Eleanor; Wagener, Thorsten

    2016-04-01

    Soil moisture importantly controls land surface processes such as energy and water partitioning. A good understanding of these controls is needed especially when recognizing the challenges in providing accurate hyper-resolution hydrometeorological simulations at sub-kilometre scales. Soil moisture controlling factors can, however, differ at distinct scales. In addition, some parameters in land surface models are still often prescribed based on observations obtained at another scale not necessarily employed by such models (e.g., soil properties obtained from lab samples used in regional simulations). To minimize such effects, parameters can be constrained with local data from Eddy-Covariance (EC) towers (i.e., latent and sensible heat fluxes) and Point Scale (PS) soil moisture observations (e.g., TDR). However, the measurement scales represented by EC and PS still differ substantially. Here we use the fact that Cosmic-Ray Neutron Sensors (CRNS) estimate soil moisture at a horizontal footprint similar to that of EC fluxes to help answer the following question: Does reduced observation scale mismatch yield better soil moisture - surface fluxes representation in land surface models? To answer this question we analysed soil moisture and surface flux measurements from twelve COSMOS-Ameriflux sites in the USA characterized by distinct climate, soils and vegetation types. We calibrated model parameters of the Joint UK Land Environment Simulator (JULES) against PS and CRNS soil moisture data, respectively. We analysed the improvement in soil moisture estimation compared to uncalibrated model simulations and then evaluated the degree of improvement in surface fluxes before and after the calibration experiments. Preliminary results suggest that a more accurate representation of soil moisture dynamics is achieved when calibrating against observed soil moisture, and that further improvement is obtained with CRNS relative to PS. However, our results also suggest that a more accurate

  8. An Improved Clustering Algorithm of Tunnel Monitoring Data for Cloud Computing

    PubMed Central

    Zhong, Luo; Tang, KunHao; Li, Lin; Yang, Guang; Ye, JingJing

    2014-01-01

    With the rapid development of urban construction, the number of urban tunnels is increasing and the data they produce become more and more complex. As a result, traditional clustering algorithms cannot handle the mass data produced by the tunnels. To solve this problem, an improved parallel clustering algorithm based on k-means is proposed. It is a clustering algorithm that processes data using MapReduce within cloud computing. It not only has the advantage of handling mass data but is also more efficient. Moreover, it is able to compute the average dissimilarity degree of each cluster in order to clean the abnormal data. PMID:24982971
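
    A single-process sketch of the MapReduce decomposition of k-means that the abstract describes: the map phase emits per-centroid partial sums, and the reduce phase merges them into new centroids. The paper's additional step of computing average dissimilarity to clean abnormal data is omitted:

      from collections import defaultdict
      import numpy as np

      def kmeans_map(points, centroids):
          # Map phase: for each point, emit (nearest centroid id,
          # (partial sum, count)) aggregated locally per split.
          out = defaultdict(lambda: [0.0, 0])
          for p in points:
              c = int(np.argmin([np.linalg.norm(p - m) for m in centroids]))
              out[c][0] += p
              out[c][1] += 1
          return out

      def kmeans_reduce(partials, centroids):
          # Reduce phase: merge partial sums from all splits and
          # recompute each centroid as the mean of its members.
          new = centroids.copy()
          totals = defaultdict(lambda: [0.0, 0])
          for part in partials:
              for c, (s, n) in part.items():
                  totals[c][0] += s
                  totals[c][1] += n
          for c, (s, n) in totals.items():
              if n:
                  new[c] = s / n
          return new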

  9. Improving Estimates of Cloud Radiative Forcing over Greenland

    NASA Astrophysics Data System (ADS)

    Wang, W.; Zender, C. S.

    2014-12-01

    Multiple driving mechanisms conspire to increase melt extent and the frequency of extreme melt events in the Arctic: changing heat transport, shortwave radiation (SW), and longwave radiation (LW). Cloud Radiative Forcing (CRF) of Greenland's surface is amplified by a dry atmosphere and by albedo feedback, making its contribution to surface melt even more variable in time and space. Unfortunately, accurate cloud observations, and thus CRF estimates, are hindered by Greenland's remoteness, harsh conditions, and the low contrast between surface and cloud reflectance. In this study, cloud observations from satellites and reanalyses are ingested into and evaluated within a column radiative transfer model. An improved CRF dataset is obtained by correcting systematic discrepancies derived from sensitivity experiments. First, we compare the surface radiation budgets from the Column Radiation Model (CRM) driven by different cloud datasets with surface observations from the Greenland Climate Network (GC-Net). In clear skies, CRM estimates of surface radiation driven by water vapor profiles from both AIRS and MODIS during May-Sept 2010-2012 are similar, stable, and reliable. For example, although the AIRS water vapor path exceeds MODIS by 1.4 kg/m2 on a daily average, the overall absolute difference in downwelling SW is < 4 W/m2. CRM estimates are within 20 W/m2 of GC-Net downwelling SW. After calibrating CRM in clear skies, the remaining differences between CRM and observed surface radiation are primarily attributable to differences in cloud observations. We estimate CRF using cloud products from MODIS and from MERRA. The SW radiative forcing of thin clouds is mainly controlled by cloud water path (CWP). As CWP increases from near 0 to 200 g/m2, the net surface SW drops from over 100 W/m2 to 30 W/m2 almost linearly, beyond which it becomes relatively insensitive to CWP. The LW is dominated by cloud height. For clouds at all altitudes, the lower the clouds, the greater the LW forcing. By

  10. Which Elements of Improvement Collaboratives Are Most Effective? A Cluster-Randomized Trial

    PubMed Central

    Gustafson, D. H.; Quanbeck, A. R.; Robinson, J. M.; Ford, J. H.; Pulvermacher, A.; French, M. T.; McConnell, K. J.; Batalden, P. B.; Hoffman, K. A.; McCarty, D.

    2013-01-01

    Aims Improvement collaboratives consisting of various components are used throughout healthcare to improve quality, but no study has identified which components work best. This study tested the effectiveness of different components in addiction treatment services, hypothesizing that a combination of all components would be most effective. Design An unblinded cluster-randomized trial assigned clinics to one of four groups: interest circle calls (group teleconferences), clinic-level coaching, learning sessions (large face-to-face meetings), and a combination of all three. Interest circle calls functioned as a minimal intervention comparison group. Setting Outpatient addiction treatment clinics in the U.S. Participants 201 clinics in 5 states. Measurements Clinic data managers submitted data on three primary outcomes: waiting time (mean days between first contact and first treatment), retention (percent of patients retained from first to fourth treatment session), and annual number of new patients. State and group costs were collected for a cost-effectiveness analysis. Findings Waiting time declined significantly for 3 groups: coaching (an average of −4.6 days/clinic, P=0.001), learning sessions (−3.5 days/clinic, P=0.012), and the combination (−4.7 days/clinic, P=0.001). The coaching and combination groups significantly increased the number of new patients (19.5%, P=0.028; 8.9%, P=0.029; respectively). Interest circle calls showed no significant effects on outcomes. None of the groups significantly improved retention. The estimated cost/clinic was $2,878 for coaching versus $7,930 for the combination. Coaching and the combination of collaborative components were about equally effective in achieving study aims, but coaching was substantially more cost effective. Conclusions When trying to improve the effectiveness of addiction treatment services, clinic-level coaching appears to help improve waiting time and number of new patients while other components of

  11. Laser photogrammetry improves size and demographic estimates for whale sharks.

    PubMed

    Rohner, Christoph A; Richardson, Anthony J; Prebble, Clare E M; Marshall, Andrea D; Bennett, Michael B; Weeks, Scarla J; Cliff, Geremy; Wintner, Sabine P; Pierce, Simon J

    2015-01-01

    Whale sharks Rhincodon typus are globally threatened, but a lack of biological and demographic information hampers an accurate assessment of their vulnerability to further decline or capacity to recover. We used laser photogrammetry at two aggregation sites to obtain more accurate size estimates of free-swimming whale sharks compared to visual estimates, allowing improved estimates of biological parameters. Individual whale sharks ranged from 432-917 cm total length (TL) (mean ± SD = 673 ± 118.8 cm, N = 122) in southern Mozambique and from 420-990 cm TL (mean ± SD = 641 ± 133 cm, N = 46) in Tanzania. By combining measurements of stranded individuals with photogrammetry measurements of free-swimming sharks, we calculated length at 50% maturity for males in Mozambique at 916 cm TL. Repeat measurements of individual whale sharks measured over periods from 347-1,068 days yielded implausible growth rates, suggesting that the growth increment over this period was not large enough to be detected using laser photogrammetry, and that the method is best applied to estimating growth rates over longer (decadal) time periods. The sex ratio of both populations was biased towards males (74% in Mozambique, 89% in Tanzania), the majority of which were immature (98% in Mozambique, 94% in Tanzania). The population structure for these two aggregations was similar to most other documented whale shark aggregations around the world. Information on small (<400 cm) whale sharks, mature individuals, and females in this region is lacking, but necessary to inform conservation initiatives for this globally threatened species. PMID:25870776
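
    The length-at-50%-maturity figure is conventionally obtained from a logistic maturity ogive; the paper does not spell out its estimator, so the following is the textbook approach, shown with hypothetical inputs:

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      def length_at_50_maturity(lengths_cm, mature):
          # Fit maturity (0/1) against total length with an essentially
          # unpenalized logistic regression; L50 is the length at which
          # the fitted probability of being mature is 0.5:
          #   L50 = -b0 / b1.
          X = np.asarray(lengths_cm, dtype=float)[:, None]
          m = LogisticRegression(C=1e6).fit(X, mature)
          return -m.intercept_[0] / m.coef_[0, 0]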

  12. Laser photogrammetry improves size and demographic estimates for whale sharks

    PubMed Central

    Richardson, Anthony J.; Prebble, Clare E.M.; Marshall, Andrea D.; Bennett, Michael B.; Weeks, Scarla J.; Cliff, Geremy; Wintner, Sabine P.; Pierce, Simon J.

    2015-01-01

    Whale sharks Rhincodon typus are globally threatened, but a lack of biological and demographic information hampers an accurate assessment of their vulnerability to further decline or capacity to recover. We used laser photogrammetry at two aggregation sites to obtain more accurate size estimates of free-swimming whale sharks compared to visual estimates, allowing improved estimates of biological parameters. Individual whale sharks ranged from 432–917 cm total length (TL) (mean ± SD = 673 ± 118.8 cm, N = 122) in southern Mozambique and from 420–990 cm TL (mean ± SD = 641 ± 133 cm, N = 46) in Tanzania. By combining measurements of stranded individuals with photogrammetry measurements of free-swimming sharks, we calculated length at 50% maturity for males in Mozambique at 916 cm TL. Repeat measurements of individual whale sharks measured over periods from 347–1,068 days yielded implausible growth rates, suggesting that the growth increment over this period was not large enough to be detected using laser photogrammetry, and that the method is best applied to estimating growth rates over longer (decadal) time periods. The sex ratio of both populations was biased towards males (74% in Mozambique, 89% in Tanzania), the majority of which were immature (98% in Mozambique, 94% in Tanzania). The population structure for these two aggregations was similar to most other documented whale shark aggregations around the world. Information on small (<400 cm) whale sharks, mature individuals, and females in this region is lacking, but necessary to inform conservation initiatives for this globally threatened species. PMID:25870776

  13. Towards Improved Snow Water Equivalent Estimation via GRACE Assimilation

    NASA Technical Reports Server (NTRS)

    Forman, Bart; Reichle, Rofl; Rodell, Matt

    2011-01-01

    Passive microwave (e.g. AMSR-E) and visible spectrum (e.g. MODIS) measurements of snow states have been used in conjunction with land surface models to better characterize snow pack states, most notably snow water equivalent (SWE). However, both types of measurements have limitations. AMSR-E, for example, suffers a loss of information in deep/wet snow packs. Similarly, MODIS suffers a loss of temporal correlation information beyond the initial accumulation and final ablation phases of the snow season. Gravimetric measurements, on the other hand, do not suffer from these limitations. In this study, gravimetric measurements from the Gravity Recovery and Climate Experiment (GRACE) mission are used in a land surface model data assimilation (DA) framework to better characterize SWE in the Mackenzie River basin located in northern Canada. Comparisons are made against independent, ground-based SWE observations, state-of-the-art modeled SWE estimates, and independent, ground-based river discharge observations. Preliminary results suggest improved SWE estimates, including improved timing of the subsequent ablation and runoff of the snow pack. Additionally, use of the DA procedure can add vertical and horizontal resolution to the coarse-scale GRACE measurements as well as effectively downscale the measurements in time. Such findings offer the potential for better understanding of the hydrologic cycle in snow-dominated basins located in remote regions of the globe where ground-based observation collection is difficult, if not impossible. This information could ultimately lead to improved freshwater resource management in communities dependent on snow melt as well as a reduction in the uncertainty of river discharge into the Arctic Ocean.

  14. Improved PPP ambiguity resolution by COES FCB estimation

    NASA Astrophysics Data System (ADS)

    Li, Yihe; Gao, Yang; Shi, Junbo

    2016-05-01

    Precise point positioning (PPP) integer ambiguity resolution is able to significantly improve the positioning accuracy with the correction of fractional cycle biases (FCBs), by shortening the time to first fix (TTFF) of ambiguities. When satellite orbit products are adopted to estimate the satellite FCB corrections, the narrow-lane (NL) FCB corrections will be contaminated by the orbit's line-of-sight (LOS) errors, which subsequently affect ambiguity resolution (AR) performance as well as positioning accuracy. To effectively separate orbit errors from satellite FCBs, we propose a cascaded orbit error separation (COES) method for the PPP implementation. Instead of using only one direction-independent component as in previous studies, the improved satellite NL FCB corrections are modeled in this study by one direction-independent component and three direction-dependent components per satellite. More specifically, the direction-independent component assimilates actual FCBs, whereas the direction-dependent components are used to assimilate the orbit errors. To evaluate the performance of the proposed method, GPS measurements from a regional and a global network are processed with the IGS Real-time Service (RTS), IGS rapid (IGR) products, and predicted orbits with >10 cm 3D root-mean-square (RMS) error. The improvements by the proposed FCB estimation method are validated in terms of ambiguity fractions after applying FCB corrections and in terms of positioning accuracy. The numerical results confirm that the FCBs obtained using the proposed method outperform those from the conventional method. The RMS of ambiguity fractions after applying FCB corrections is reduced by 13.2 %. The position RMSs in the north, east, and up directions are reduced by 30.0, 32.0 and 22.0 % on average.
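
    The core of any FCB estimation -- extracting the common fractional part of float ambiguities across a network -- can be sketched with a circular mean; the COES contribution described above, the three direction-dependent components that absorb orbit error, is omitted:

      import numpy as np

      def estimate_fcb(float_ambiguities):
          # One satellite's FCB as the common fractional part of the
          # network's float ambiguities (in cycles).  A circular mean
          # handles the wrap-around at the cycle boundary.
          frac = np.asarray(float_ambiguities) % 1.0
          ang = 2 * np.pi * frac
          mean_ang = np.arctan2(np.sin(ang).mean(), np.cos(ang).mean())
          return (mean_ang / (2 * np.pi)) % 1.0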

  15. Evaluation of Incremental Improvement in the NWS MPE Precipitation Estimates

    NASA Astrophysics Data System (ADS)

    Qin, L.; Habib, E. H.

    2009-12-01

    This study focuses on the assessment of incremental improvement in the multi-sensor precipitation estimates (MPE) developed by the National Weather Service (NWS) River Forecast Centers (RFC). The MPE product is based upon merging of data from WSR-88D radar, surface rain gauges, and occasionally geostationary satellite data. The MPE algorithm produces 5 intermediate sets of products known as RMOSAIC, BMOSAIC, MMOSAIC, LMOSAIC, and MLMOSAIC. These products have different bias-removal and optimal gauge-merging mechanisms. The final product used in operational applications is selected by the RFC forecasters. All the MPE products are provided at hourly temporal resolution and over a national Hydrologic Rainfall Analysis Project (HRAP) grid with a nominal cell size of 4 square kilometers. To help determine the incremental improvement of the MPE estimates, an evaluation analysis was performed over a two-year period (2005-2006) using 13 independently operated rain gauges located within an area of ~30 km2 in south Louisiana. The close proximity of the gauge sites to each other allows multiple gauges to be located within the same HRAP pixel and thus provides reliable estimates of true surface rainfall to be used as a reference dataset. The evaluation analysis is performed over two temporal scales: hourly and event duration. Besides graphical comparisons using scatter and histogram plots, several statistical measures are also applied, such as multiplicative bias, additive bias, correlation, and error standard deviation. The results indicated mixed performance of the different products over the study site depending on which statistical metric is used. The products based on local bias adjustment have the lowest error standard deviation but the worst multiplicative bias. The opposite is true for products that are based on mean-field bias adjustment. Optimal merging with gauge fields leads to a reduction in the error quantiles of the products. The results of the current study will provide insight into

  16. Identifying victims of workplace bullying by integrating traditional estimation approaches into a latent class cluster model.

    PubMed

    Leon-Perez, Jose M; Notelaers, Guy; Arenas, Alicia; Munduate, Lourdes; Medina, Francisco J

    2014-05-01

    Research findings underline the negative effects of exposure to bullying behaviors and document the detrimental health effects of being a victim of workplace bullying. While no one disputes its negative consequences, debate continues about the magnitude of this phenomenon since very different prevalence rates of workplace bullying have been reported. Methodological aspects may explain these findings. Our contribution to this debate integrates behavioral and self-labeling estimation methods of workplace bullying into a measurement model that constitutes a bullying typology. Results in the present sample (n = 1,619) revealed that six different groups can be distinguished according to the nature and intensity of reported bullying behaviors. These clusters portray different paths for the workplace bullying process, where negative work-related and person-degrading behaviors are strongly intertwined. The analysis of the external validity showed that integrating previous estimation methods into a single measurement latent class model provides a reliable estimation method of workplace bullying, which may overcome previous flaws. PMID:24257593

  17. [Division of winter wheat yield estimation by remote sensing based on MODIS EVI time series data and spectral angle clustering].

    PubMed

    Zhu, Zai-Chun; Chen, Lian-Qun; Zhang, Jin-Shui; Pan, Yao-Zhong; Zhu, Wen-Quan; Hu, Tan-Gao

    2012-07-01

    Crop yield estimation division is the basis of crop yield estimation by remote sensing; it provides an important scientific basis for estimation research and practice. In this paper, MODIS EVI time-series data covering the winter wheat growth period are used as the division data, with Jiangsu province as the study area. A division method combining spectral angle mapping (SAM) and K-means clustering is presented and tested for winter wheat yield estimation by remote sensing. The results show that the spectral angle clustering division method takes full advantage of the crop growth process reflected in the MODIS time series, and captures the regional differences in winter wheat brought about by climate. Compared with the traditional division method, yield estimation based on the spectral angle clustering division has a higher R2 (0.7026 vs. 0.6248) and a lower RMSE (343.34 vs. 381.34 kg·hm(-2)), reflecting the advantages of the new division method for winter wheat yield estimation. The method uses only conveniently obtained, low-resolution time-series remote sensing data, divides winter wheat into similar and well-characterized regions, and yields estimation models with good accuracy and stability, providing an efficient approach to winter wheat yield estimation by remote sensing. PMID:23016349
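
    The division step can be approximated with off-the-shelf tools. A sketch under stated assumptions (unit-normalizing each pixel's EVI series makes Euclidean K-means rank pixels by spectral angle, since ||x - y||^2 = 2 - 2cos(theta) for unit vectors; the `evi` array is a hypothetical stand-in for real MODIS data):

      import numpy as np
      from sklearn.cluster import KMeans

      # hypothetical matrix: one row per pixel, one column per MODIS composite date
      evi = np.random.rand(1000, 23)

      # unit-normalized rows: Euclidean distance becomes monotone in the spectral
      # angle, so K-means then groups pixels by the angle between EVI curves
      unit = evi / np.linalg.norm(evi, axis=1, keepdims=True)
      zones = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(unit)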

  18. The Effect of Clustering on Estimations of the UV Ionizing Background from the Proximity Effect

    NASA Astrophysics Data System (ADS)

    Pascarelle, S. M.; Lanzetta, K. M.; Chen, H. W.

    1999-09-01

    There have been several determinations of the ionizing background using the proximity effect observed in the distribution of Lyman-alpha absorption lines in the spectra of QSOs at high redshift. It is usually assumed that the distribution of lines should be the same at very small impact parameters to the QSO as it is at large impact parameters, and that any decrease in line density at small impact parameters is due to ionizing radiation from the QSO. However, if these Lyman-alpha absorption lines arise in galaxies (Lanzetta et al. 1995, Chen et al. 1998), then the strength of the proximity effect may have been underestimated in previous work, since galaxies are known to cluster around QSOs. Therefore, the UV background has likely been overestimated by the same factor.

  19. Improved Estimates of Air Pollutant Emissions from Biorefinery

    SciTech Connect

    Tan, Eric C. D.

    2015-11-13

    We have attempted to use a detailed kinetic modeling approach for improved estimation of combustion air pollutant emissions from biorefineries. We have developed a preliminary detailed reaction mechanism for biomass combustion. Lignin is the only biomass component included in the current mechanism, and methane is used as the biogas surrogate. The model is capable of predicting the combustion emissions of greenhouse gases (CO2, N2O, CH4) and criteria air pollutants (NO, NO2, CO). The results have yet to be compared with experimental data. The current model is still in its early stages of development. Given the acknowledged complexity of biomass oxidation, as well as of the components in the feed to the combustor, the modeling approach and the chemistry set discussed here may undergo revision, extension, and further validation in the future.

  20. Using SVD on Clusters to Improve Precision of Interdocument Similarity Measure.

    PubMed

    Zhang, Wen; Xiao, Fan; Li, Bin; Zhang, Siguang

    2016-01-01

    Recently, LSI (Latent Semantic Indexing) based on SVD (Singular Value Decomposition) has been proposed to overcome the problems of polysemy and homonymy in traditional lexical matching. However, it is often criticized for low discriminative power in representing documents, even though its representative quality has been validated as good. In this paper, SVD on clusters is proposed to improve the discriminative power of LSI. The contribution of this paper is threefold. Firstly, we survey existing linear algebra methods for LSI, including both SVD-based and non-SVD-based methods. Secondly, we propose SVD on clusters for LSI and explain theoretically that dimension expansion of document vectors and dimension projection using SVD are the two manipulations involved in SVD on clusters. Moreover, we develop updating processes to fold new documents and terms into a matrix decomposed by SVD on clusters. Thirdly, two corpora, one Chinese and one English, are used to evaluate the performance of the proposed methods. Experiments demonstrate that, to some extent, SVD on clusters can improve the precision of the interdocument similarity measure in comparison with other SVD-based LSI methods. PMID:27579031
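
    The two-step structure described above (partition first, then a per-cluster SVD) can be sketched as follows. This illustrates the general idea only, not the authors' code, and the term-document matrix is hypothetical:

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.decomposition import TruncatedSVD

      X = np.random.rand(200, 500)        # hypothetical document-term matrix

      # step 1: partition the documents into clusters
      labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

      # step 2: run a separate truncated SVD (LSI) inside each cluster; similarity
      # is then measured between reduced vectors belonging to the same cluster
      reduced = {}
      for c in np.unique(labels):
          docs = X[labels == c]
          k = min(50, docs.shape[0] - 1)
          reduced[c] = TruncatedSVD(n_components=k, random_state=0).fit_transform(docs)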

  2. THE NEXT GENERATION VIRGO CLUSTER SURVEY. XV. THE PHOTOMETRIC REDSHIFT ESTIMATION FOR BACKGROUND SOURCES

    SciTech Connect

    Raichoor, A.; Mei, S.; Huertas-Company, M.; Licitra, R.; Erben, T.; Hildebrandt, H.; Ilbert, O.; Boissier, S.; Boselli, A.; Ball, N. M.; Côté, P.; Ferrarese, L.; Gwyn, S. D. J.; Kavelaars, J. J.; Chen, Y.-T.; Cuillandre, J.-C.; Duc, P. A.; Guhathakurta, P.; and others

    2014-12-20

    The Next Generation Virgo Cluster Survey (NGVS) is an optical imaging survey covering 104 deg² centered on the Virgo cluster. Currently, the complete survey area has been observed in the u*giz bands and one third in the r band. We present the photometric redshift estimation for the NGVS background sources. After a dedicated data reduction, we perform accurate photometry, with special attention to precise color measurements through point-spread function homogenization. We then estimate the photometric redshifts with the Le Phare and BPZ codes. We add a new prior that extends to i_AB = 12.5 mag. When using the u*griz bands, our photometric redshifts for 15.5 mag ≤ i ≲ 23 mag or z_phot ≲ 1 galaxies have a bias |Δz| < 0.02, less than 5% outliers, and a scatter σ_outl.rej. and individual error on z_phot that increase with magnitude (from 0.02 to 0.05 and from 0.03 to 0.10, respectively). When using the u*giz bands over the same magnitude and redshift range, the lack of the r band increases the uncertainties in the 0.3 ≲ z_phot ≲ 0.8 range (-0.05 < Δz < -0.02, σ_outl.rej. ~ 0.06, 10%-15% outliers, and z_phot err. ~ 0.15). We also present a joint analysis of the photometric redshift accuracy as a function of redshift and magnitude. We assess the quality of our photometric redshifts by comparison to spectroscopic samples and by verifying that the angular auto- and cross-correlation function w(θ) of the entire NGVS photometric redshift sample across redshift bins is in agreement with expectations.

  3. Adaptive whitening of the electromyogram to improve amplitude estimation.

    PubMed

    Clancy, E A; Farry, K A

    2000-06-01

    Previous research showed that whitening the surface electromyogram (EMG) can improve EMG amplitude estimation (where EMG amplitude is defined as the time-varying standard deviation of the EMG). However, conventional whitening via a linear filter seems to fail at low EMG amplitude levels, perhaps due to additive background noise in the measured EMG. This paper describes an adaptive whitening technique that overcomes this problem by cascading a nonadaptive whitening filter, an adaptive Wiener filter, and an adaptive gain correction. These stages can be calibrated from two five-second-duration, constant-angle, constant-force contractions, one at a reference level [e.g., 50% maximum voluntary contraction (MVC)] and one at 0% MVC. In experimental studies, subjects used real-time EMG amplitude estimates to track a uniform-density, band-limited random target. With a 0.25-Hz bandwidth target, either adaptive whitening or multiple-channel processing reduced the tracking error roughly half-way to the error achieved using the dynamometer signal as the feedback. At the 1.00-Hz bandwidth, all of the EMG processors had errors equivalent to that of the dynamometer signal, reflecting that errors in this task were dominated by subjects' inability to track targets at this bandwidth. Increases in the additive noise level, smoothing window length, and tracking bandwidth diminish the advantages of whitening. PMID:10833845
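
    Only the first, non-adaptive stage of the cascade lends itself to a compact sketch. Below is a minimal illustration under stated assumptions (a fixed frequency-domain inverse-magnitude whitening filter designed from a reference contraction, followed by a moving-RMS amplitude estimate; the adaptive Wiener filter and gain-correction stages are omitted, and all names are hypothetical):

      import numpy as np

      def whiten(emg, ref, eps=1e-12):
          """Flatten the EMG spectrum with a fixed filter designed from a
          reference contraction (inverse magnitude spectrum of `ref`)."""
          H = 1.0 / (np.abs(np.fft.rfft(ref)) + eps)
          return np.fft.irfft(np.fft.rfft(emg, n=len(ref)) * H, n=len(ref))

      def amplitude(emg, win=200):
          """EMG amplitude = moving RMS, i.e. the time-varying standard deviation."""
          return np.sqrt(np.convolve(emg**2, np.ones(win) / win, mode='same'))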

  4. Improving estimates of air pollution exposure through ubiquitous sensing technologies.

    PubMed

    de Nazelle, Audrey; Seto, Edmund; Donaire-Gonzalez, David; Mendez, Michelle; Matamala, Jaume; Nieuwenhuijsen, Mark J; Jerrett, Michael

    2013-05-01

    Traditional methods of exposure assessment in epidemiological studies often fail to integrate important information on activity patterns, which may lead to bias, loss of statistical power, or both in health effects estimates. Novel sensing technologies integrated with mobile phones offer potential to reduce exposure measurement error. We sought to demonstrate the usability and relevance of the CalFit smartphone technology to track person-level time, geographic location, and physical activity patterns for improved air pollution exposure assessment. We deployed CalFit-equipped smartphones in a free-living population of 36 subjects in Barcelona, Spain. Information obtained on physical activity and geographic location was linked to space-time air pollution mapping. We found that information from CalFit could substantially alter exposure estimates. For instance, on average travel activities accounted for 6% of people's time and 24% of their daily inhaled NO2. Due to the large number of mobile phone users, this technology potentially provides an unobtrusive means of enhancing epidemiologic exposure data at low cost. PMID:23416743

  5. An HST/WFPC2 survey of bright young clusters in M 31. IV. Age and mass estimates

    NASA Astrophysics Data System (ADS)

    Perina, S.; Cohen, J. G.; Barmby, P.; Beasley, M. A.; Bellazzini, M.; Brodie, J. P.; Federici, L.; Fusi Pecci, F.; Galleti, S.; Hodge, P. W.; Huchra, J. P.; Kissler-Patig, M.; Puzia, T. H.; Strader, J.

    2010-02-01

    Aims: We present the main results of an imaging survey of possible young massive clusters (YMC) in M 31 performed with the Wide Field and Planetary Camera 2 (WFPC2) on the Hubble Space Telescope (HST), with the aim of estimating their ages and masses. We obtained shallow (to B ~ 25) photometry of individual stars in 19 clusters (of the 20 targets of the survey). We present the images and color-magnitude diagrams (CMDs) of all of our targets. Methods: Point spread function fitting photometry of individual stars was obtained for all the WFPC2 images of the target clusters, and the completeness of the final samples was estimated using extensive sets of artificial star experiments. The reddening, age, and metallicity of the clusters were estimated by comparing the observed CMDs and luminosity functions (LFs) with theoretical models. Stellar masses were estimated by comparison with theoretical models in the log(Age) vs. absolute integrated magnitude plane, using ages estimated from our CMDs and integrated J, H, K magnitudes from 2MASS-6X. Results: Nineteen of the twenty surveyed candidates were confirmed to be real star clusters, while one turned out to be a bright star. Three of the clusters were found not to be good YMC candidates from newly available integrated spectroscopy, and were in fact found to be old from their CMDs. Of the remaining sixteen clusters, fourteen have ages between 25 Myr and 280 Myr, and two have ages older than 500 Myr (lower limits). By including ten other YMCs with HST photometry from the literature, we assembled a sample of 25 clusters younger than 1 Gyr, with masses ranging from 0.6×10^4 Msun to 6×10^4 Msun and an average of ~3×10^4 Msun. Our estimates of ages and masses agree well with recent independent studies based on integrated spectra. Conclusions: The clusters considered here are confirmed to have masses significantly higher than Galactic open clusters (OC) in the same age range. Our analysis indicates that YMCs are relatively

  6. Propensity score methods for estimating relative risks in cluster randomized trials with low-incidence binary outcomes and selection bias.

    PubMed

    Leyrat, Clémence; Caille, Agnès; Donner, Allan; Giraudeau, Bruno

    2014-09-10

    Despite randomization, selection bias may occur in cluster randomized trials. Classical multivariable regression usually allows adjusting treatment effect estimates for unbalanced covariates. However, for binary outcomes with low incidence, such a method may fail because of separation problems. This simulation study focused on the performance of propensity score (PS)-based methods to estimate relative risks from cluster randomized trials with binary outcomes of low incidence. The results suggested that among the different approaches used (multivariable regression, direct adjustment on the PS, inverse weighting on the PS, and stratification on the PS), only direct adjustment on the PS fully corrected the bias and, moreover, had the best statistical properties. PMID:24771662

  7. Small sample performance of bias-corrected sandwich estimators for cluster-randomized trials with binary outcomes.

    PubMed

    Li, Peng; Redden, David T

    2015-01-30

    The sandwich estimator in the generalized estimating equations (GEE) approach underestimates the true variance in small samples and consequently results in inflated type I error rates in hypothesis testing. This fact limits the application of GEE in cluster-randomized trials (CRTs) with few clusters. Under various CRT scenarios with correlated binary outcomes, we evaluate the small-sample properties of the GEE Wald tests using bias-corrected sandwich estimators. Our results suggest that the GEE Wald z-test should be avoided in the analysis of CRTs with few clusters even when bias-corrected sandwich estimators are used. With a t-distribution approximation, the Kauermann and Carroll (KC) correction can keep the test size at nominal levels even when the number of clusters is as low as 10, and it is robust to moderate variation of the cluster sizes. However, in cases with large variations in cluster sizes, the Fay and Graubard (FG) correction should be used instead. Furthermore, we derive a formula to calculate the power and the minimum total number of clusters needed using the t-test and KC correction for CRTs with binary outcomes. The power levels predicted by the proposed formula agree well with the empirical powers from the simulations. The proposed methods are illustrated using real CRT data. We conclude that, with appropriate control of type I error rates under small sample sizes, we recommend the GEE approach in CRTs with binary outcomes because of its fewer assumptions and robustness to misspecification of the covariance structure. PMID:25345738
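
    For orientation, the cluster requirement can be roughed out with the textbook design-effect approximation; note this is a generic sketch, not the paper's KC/FG-based formula, and all numbers are illustrative:

      from scipy.stats import norm

      def clusters_per_arm(p1, p2, m, icc, alpha=0.05, power=0.80):
          """Two-arm CRT, binary outcome: individuals per arm from the usual
          two-proportion formula, inflated by the design effect 1 + (m-1)*ICC,
          then converted to clusters of size m."""
          z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
          n_ind = z**2 * (p1*(1-p1) + p2*(1-p2)) / (p1 - p2)**2
          return n_ind * (1 + (m - 1) * icc) / m

      print(clusters_per_arm(p1=0.10, p2=0.20, m=50, icc=0.02))  # clusters per arm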

  8. Improved fuzzy clustering algorithms in segmentation of DC-enhanced breast MRI.

    PubMed

    Kannan, S R; Ramathilagam, S; Devi, Pandiyarajan; Sathya, A

    2012-02-01

    Segmentation of medical images is a difficult and challenging problem due to poor image contrast and artifacts that result in missing or diffuse organ/tissue boundaries. Many researchers have applied various techniques; however, fuzzy c-means (FCM)-based algorithms are more effective than other methods. The objective of this work is to develop robust fuzzy clustering segmentation systems for effective segmentation of DCE breast MRI. This paper obtains robust fuzzy clustering algorithms by incorporating kernel methods, penalty terms, tolerance of neighborhood attraction, an additional entropy term, and fuzzy parameters. The initial centers are obtained using an initialization algorithm to reduce the computational complexity and running time of the proposed algorithms. Experimental work on breast images shows that the proposed algorithms are effective in improving the similarity measurement, handling large amounts of noise, and dealing with data corrupted by noise and other artifacts. The clustering results of the proposed methods are validated using the Silhouette method. PMID:20703716
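
    The baseline algorithm that the proposed variants extend is standard fuzzy c-means. A minimal NumPy sketch of that baseline (without the kernel, penalty, and neighborhood-attraction terms of the paper):

      import numpy as np

      def fcm(X, c=3, m=2.0, iters=100, seed=0):
          """Plain fuzzy c-means: alternate centroid and membership updates."""
          rng = np.random.default_rng(seed)
          U = rng.random((len(X), c))
          U /= U.sum(axis=1, keepdims=True)
          for _ in range(iters):
              W = U ** m
              centers = (W.T @ X) / W.sum(axis=0)[:, None]       # weighted means
              d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
              inv = d ** (-2.0 / (m - 1.0))                      # membership update
              U = inv / inv.sum(axis=1, keepdims=True)
          return U, centers

      # e.g. segment a grayscale image by intensity: fcm(img.reshape(-1, 1), c=3)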

  9. Improved Soundings and Error Estimates using AIRS/AMSU Data

    NASA Technical Reports Server (NTRS)

    Susskind, Joel

    2006-01-01

    AIRS was launched on EOS Aqua on May 4, 2002, together with AMSU-A and HSB, to form a next-generation polar-orbiting infrared and microwave atmospheric sounding system. The primary products of AIRS/AMSU are twice-daily global fields of atmospheric temperature-humidity profiles, ozone profiles, sea/land surface skin temperature, and cloud-related parameters including OLR. The sounding goals of AIRS are to produce 1-km tropospheric layer mean temperatures with an rms error of 1 K, and layer precipitable water with an rms error of 20 percent, in cases with up to 80 percent effective cloud cover. The basic theory used to analyze AIRS/AMSU/HSB data in the presence of clouds, called the at-launch algorithm, and a post-launch algorithm which differed only in minor details from the at-launch algorithm, have been described previously. The post-launch algorithm, referred to as AIRS Version 4.0, has been used by the Goddard DAAC to analyze and distribute AIRS retrieval products. In this paper we show progress made toward the AIRS Version 5.0 algorithm, which will be used by the Goddard DAAC starting late in 2006. A new methodology has been developed to provide accurate case-by-case error estimates for retrieved geophysical parameters and for the channel-by-channel cloud-cleared radiances used to derive the geophysical parameters from the AIRS/AMSU observations. These error estimates are in turn used for quality control of the derived geophysical parameters and clear-column radiances. Improvements made to the retrieval algorithm since Version 4.0 are described, as well as results comparing Version 5.0 retrieval accuracy and spatial coverage with those obtained using Version 4.0.

  10. Improved Image Registration by Sparse Patch-Based Deformation Estimation

    PubMed Central

    Kim, Minjeong; Wu, Guorong; Wang, Qian; Shen, Dinggang

    2014-01-01

    Despite intensive efforts over decades, deformable image registration remains a challenging problem due to the potentially large anatomical differences across individual images, which limit registration performance. Fortunately, this issue can be alleviated if a good initial deformation is provided for the two images under registration, often termed the moving subject and the fixed template, respectively. In this work, we present a novel patch-based initial deformation prediction framework for improving the performance of existing registration algorithms. Our main idea is to estimate the initial deformation between subject and template in a patch-wise fashion by using the sparse representation technique. We argue that two image patches should follow the same deformation towards the template image if their patch-wise appearance patterns are similar. To this end, our framework consists of two stages, i.e., the training stage and the application stage. In the training stage, we register all training images to the pre-selected template, such that the deformation of each training image with respect to the template is known. In the application stage, we apply the following four steps to efficiently calculate the initial deformation field for the new test subject: (1) We pick a small number of key points in the distinctive regions of the test subject; (2) For each key point, we extract a local patch and form a coupled appearance-deformation dictionary from training images, where each dictionary atom consists of the image intensity patch as well as its respective local deformations; (3) A small set of training image patches in the coupled dictionary is selected to represent the image patch of each subject key point by sparse representation. Then, we can predict the initial deformation for each subject key point by propagating the pre-estimated deformations on the selected training patches with the same sparse representation coefficients. (4) We
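
    Step (3), the sparse coding of a subject patch over the coupled dictionary, can be sketched with an off-the-shelf lasso solver. This is an illustration of the idea only; `dict_patches`, `dict_deforms`, and the weight normalization are hypothetical choices, not the authors' implementation:

      import numpy as np
      from sklearn.linear_model import Lasso

      def predict_deformation(subject_patch, dict_patches, dict_deforms, alpha=0.01):
          """Sparse-code one subject patch over training patches, then propagate
          the paired training deformations with the same coefficients."""
          lasso = Lasso(alpha=alpha, positive=True, max_iter=5000)
          lasso.fit(dict_patches.T, subject_patch)   # columns are dictionary atoms
          w = lasso.coef_
          if w.sum() > 0:
              w = w / w.sum()                        # normalize the sparse weights
          return w @ dict_deforms                    # weighted mean deformation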

  11. Dendrimer mediated clustering of bacteria: improved aggregation and evaluation of bacterial response and viability.

    PubMed

    Leire, Emma; Amaral, Sandra P; Louzao, Iria; Winzer, Klaus; Alexander, Cameron; Fernandez-Megia, Eduardo; Fernandez-Trillo, Francisco

    2016-06-24

    Here, we evaluate how cationic gallic acid-triethylene glycol (GATG) dendrimers interact with bacteria and their potential to develop new antimicrobials. We demonstrate that GATG dendrimers functionalised with primary amines in their periphery can induce the formation of clusters in Vibrio harveyi, an opportunistic marine pathogen, in a generation dependent manner. Moreover, these cationic GATG dendrimers demonstrate an improved ability to induce cluster formation when compared to poly(N-[3-(dimethylamino)propyl]methacrylamide) [p(DMAPMAm)], a cationic linear polymer previously shown to cluster bacteria. Viability of the bacteria within the formed clusters and evaluation of quorum sensing controlled phenotypes (i.e. light production in V. harveyi) suggest that GATG dendrimers may be activating microbial responses by maintaining a high concentration of quorum sensing signals inside the clusters while increasing permeability of the microbial outer membranes. Thus, the reported GATG dendrimers constitute a valuable platform for the development of novel antimicrobial materials that can target microbial viability and/or virulence. PMID:27127812

  12. Disseminating quality improvement: study protocol for a large cluster-randomized trial

    PubMed Central

    2011-01-01

    Background Dissemination is a critical facet of implementing quality improvement in organizations. As a field, addiction treatment has produced effective interventions but disseminated them slowly and reached only a fraction of people needing treatment. This study investigates four methods of disseminating quality improvement (QI) to addiction treatment programs in the U.S. It is, to our knowledge, the largest study of organizational change ever conducted in healthcare. The trial seeks to determine the most cost-effective method of disseminating quality improvement in addiction treatment. Methods The study is evaluating the costs and effectiveness of different QI approaches by randomizing 201 addiction-treatment programs to four interventions. Each intervention used a web-based learning kit plus monthly phone calls, coaching, face-to-face meetings, or the combination of all three. Effectiveness is defined as reducing waiting time (days between first contact and treatment), increasing program admissions, and increasing continuation in treatment. Opportunity costs will be estimated for the resources associated with providing the services. Outcomes The study has three primary outcomes: waiting time, annual program admissions, and continuation in treatment. Secondary outcomes include: voluntary employee turnover, treatment completion, and operating margin. We are also seeking to understand the role of mediators, moderators, and other factors related to an organization's success in making changes. Analysis We are fitting a mixed-effect regression model to each program's average monthly waiting time and continuation rates (based on aggregated client records), including terms to isolate state and intervention effects. Admissions to treatment are aggregated to a yearly level to compensate for seasonality. We will order the interventions by cost to compare them pair-wise to the lowest cost intervention (monthly phone calls). All randomized sites with outcome data will be

  13. Using Satellite Rainfall Estimates to Improve Climate Services in Africa

    NASA Astrophysics Data System (ADS)

    Dinku, T.

    2012-12-01

    Climate variability and change pose serious challenges to sustainable development in Africa. The recent famine crisis in the Horn of Africa is yet more evidence of how fluctuations in the climate can destroy lives and livelihoods. Building resilience against the negative impacts of climate and maximizing the benefits from favorable conditions will require mainstreaming climate issues into development policy, planning, and practice at different levels. The availability of decision-relevant climate information at different levels is critical. The number and quality of weather stations in many parts of Africa, however, have been declining. The available stations are unevenly distributed, with most located along the main roads. This imposes severe limitations on the availability of climate information and services to rural communities, where these services are needed most. Where observations are taken, they suffer from gaps and poor quality and are often unavailable beyond the respective national meteorological services. Combining available local observations with satellite products, making data and products available through the Internet, and training the user community to understand and use climate information will help to alleviate these problems. Improving data availability involves organizing and cleaning all available national station observations and combining them with satellite rainfall estimates. The main advantage of the satellite products is their excellent spatial coverage at increasingly improved spatial and temporal resolutions. This approach has been implemented in Ethiopia and Tanzania, and it is in the process of being implemented in West Africa. The main outputs include: 1. Thirty-year time series of combined satellite-gauge rainfall at a 10-day time scale and 10-km spatial resolution; 2. An array of user-specific products for climate analysis and monitoring; 3. An online facility providing user-friendly tools for

  14. Improved Estimate of Phobos Secular Acceleration from MOLA Observations

    NASA Technical Reports Server (NTRS)

    Bills, Bruce; Neumann, Gregory; Smith, David; Zuber, Maria

    2004-01-01

    We report on new observations of the orbital position of Phobos, and use them to obtain a new and improved estimate of the rate of secular acceleration in longitude due to tidal dissipation within Mars. Phobos is the inner-most natural satellite of Mars, and one of the few natural satellites in the solar system with orbital period shorter than the rotation period of its primary. As a result, any departure from a perfect elastic response by Mars in the tides raised on it by Phobos will cause a transfer of angular momentum from the orbit of Phobos to the spin of Mars. Since its discovery in 1877, Phobos has completed over 145,500 orbits, and has one of the best studied orbits in the solar system, with over 6000 earth-based astrometric observations, and over 300 spacecraft observations. As early as 1945, Sharpless noted that there is a secular acceleration in mean longitude, with rate (1.88 ± 0.25) × 10^-3 degrees per square year. In preparation for the 1989 Russian spacecraft mission to Phobos, considerable work was done compiling past observations, and refining the orbital model. All of the published estimates from that era are in good agreement. A typical solution (Jacobson et al., 1989) yields (1.249 ± 0.018) × 10^-3 degrees per square year. The MOLA instrument on MGS is a laser altimeter, and was designed to measure the topography of Mars. However, it has also been used to make observations of the position of Phobos. In 1998, a direct range measurement was made, which indicated that Phobos was slightly ahead of the predicted position. The MOLA detector views the surface of Mars in a narrow field of view, at 1064 nanometer wavelength, and can detect shadows cast by Phobos on the surface of Mars. We have found 15 such serendipitous shadow transit events over the interval from xx to xx, and all of them show Phobos to be ahead of schedule, and getting progressively farther ahead of the predicted position. In contrast, the cross-track positions are quite close

  15. Technical Methods Report: The Estimation of Average Treatment Effects for Clustered RCTs of Education Interventions. NCEE 2009-0061 rev.

    ERIC Educational Resources Information Center

    Schochet, Peter Z.

    2009-01-01

    This paper examines the estimation of two-stage clustered RCT designs in education research using the Neyman causal inference framework that underlies experiments. The key distinction between the considered causal models is whether potential treatment and control group outcomes are considered to be fixed for the study population (the…

  16. Distributed sensing for atmospheric probing: an improved concept of laser firefly clustering

    NASA Astrophysics Data System (ADS)

    Kedar, Debbie; Arnon, Shlomi

    2004-10-01

    In this paper, we present an improved concept of "Laser Firefly Clustering" for atmospheric probing, elaborating upon previously published work. The laser firefly cluster is a mobile, flexible and versatile distributed sensing system whose purpose is to profile the chemical and particulate composition of the atmosphere for pollution monitoring, meteorology, detection of contamination, and other aims. The fireflies are deployed in situ at the altitude of interest, and evoke a backscatter response from aerosols and molecules in the immediate vicinity using a coded laser signal. In the improved system, a laser transmitter and one imaging receiver telescope are placed at a base station, while sophisticated miniature distributed sensors (fireflies) are deployed in the atmosphere. The fireflies are interrogated by the base-station laser, and emit non-coded probing signals in response. The backscatter signal is processed on the firefly and the transduced data are transmitted to the imaging receiver on the ground. These improvements lead to better performance at lower energy cost and expand the scope of application of the innovative concept of laser firefly clustering. A numerical example demonstrates the potential of the novel system.

  17. Improved Rosetta Pedotransfer Estimation of Hydraulic Properties and Their Covariance

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Schaap, M. G.

    2014-12-01

    Quantitative knowledge of soil hydraulic properties is necessary for most studies involving water flow and solute transport in the vadose zone. However, it is expensive, difficult, and time consuming to measure hydraulic properties directly. Pedotransfer functions (PTFs) have been widely used to estimate soil hydraulic parameters. Rosetta is one of many PTFs and is based on artificial neural network analysis coupled with the bootstrap sampling method. The model provides hierarchical PTFs for different levels of input data (H1-H5 models, with higher-order models requiring more input variables). The original Rosetta model consists of separate PTFs for the four "van Genuchten" (VG) water retention parameters and saturated hydraulic conductivity (Ks) because different numbers of samples were available for these characteristics. In this study, we present an improved Rosetta pedotransfer function that uses a single model for all five parameters combined; the parameters are weighted for each sample individually using the covariance matrix obtained from the curve-fit of the VG parameters to the primary data. The optimal number of hidden nodes, the weights for saturated hydraulic conductivity and water retention parameters in the neural network, and the number of bootstrap realizations were selected. Results show that the root mean square error (RMSE) for water retention decreased from 0.076 to 0.072 cm3/cm3 for the H2 model and from 0.044 to 0.039 cm3/cm3 for the H5 model. Mean errors, which indicate matric potential-dependent bias, were also reduced significantly in the new model. The RMSE for Ks increased slightly (H2: 0.717 to 0.722; H5: 0.581 to 0.594); this increase is minimal and a result of using a single model for water retention and Ks. Despite this small increase, the new model is recommended because of its improved estimation of water retention, and because it is now possible to calculate the full covariance matrix of soil water retention

  18. Improved Critical Eigenfunction Restriction Estimates on Riemannian Surfaces with Nonpositive Curvature

    NASA Astrophysics Data System (ADS)

    Xi, Yakun; Zhang, Cheng

    2016-07-01

    We show that one can obtain improved L^4 geodesic restriction estimates for eigenfunctions on compact Riemannian surfaces with nonpositive curvature. We achieve this by adapting Sogge's strategy in (Improved critical eigenfunction estimates on manifolds of nonpositive curvature, Preprint). We first combine the improved L^2 restriction estimate of Blair and Sogge (Concerning Toponogov's theorem and logarithmic improvement of estimates of eigenfunctions, Preprint) and the classical improved L^∞ estimate of Bérard to obtain an improved weak-type L^4 restriction estimate. We then upgrade this weak estimate to a strong one by using the improved Lorentz space estimate of Bak and Seeger (Math Res Lett 18(4):767-781, 2011). This estimate improves the L^4 restriction estimate of Burq et al. (Duke Math J 138:445-486, 2007) and Hu (Forum Math 6:1021-1052, 2009) by a power of (log log λ)^{-1}. Moreover, in the case of compact hyperbolic surfaces, we obtain further improvements in terms of (log λ)^{-1} by applying the ideas from (Chen and Sogge, Commun Math Phys 329(3):435-459, 2014) and (Blair and Sogge, Concerning Toponogov's theorem and logarithmic improvement of estimates of eigenfunctions, Preprint). We are able to compute various constants that appeared in (Chen and Sogge, Commun Math Phys 329(3):435-459, 2014) explicitly, by proving detailed oscillatory integral estimates and lifting calculations to the universal cover H^2.

  19. Dynamical evolution of stellar mass black holes in dense stellar clusters: estimate for merger rate of binary black holes originating from globular clusters

    NASA Astrophysics Data System (ADS)

    Tanikawa, A.

    2013-10-01

    We have performed N-body simulations of globular clusters (GCs) in order to estimate a detection rate of mergers of binary stellar mass black holes (BBHs) by means of gravitational wave (GW) observatories. For our estimate, we have only considered mergers of BBHs which escape from GCs (BBH escapers). BBH escapers merge more quickly than BBHs inside GCs because of their small semimajor axes. N-body simulation cannot deal with a GC with the number of stars N ~ 10^6 due to its high calculation cost. We have simulated dynamical evolution of small-N clusters (10^4 ≲ N ≲ 10^5), and have extrapolated our simulation results to large-N clusters. From our simulation results, we have found the following dependence of BBH properties on N. BBHs escape from a cluster at each two-body relaxation time at a rate proportional to N. Semimajor axes of BBH escapers are inversely proportional to N, if initial mass densities of clusters are fixed. Eccentricities, primary masses and mass ratios of BBH escapers are independent of N. Using this dependence of BBH properties, we have artificially generated a population of BBH escapers from a GC with N ~ 10^6, and have estimated a detection rate of mergers of BBH escapers by next-generation GW observatories. We have assumed that all the GCs are formed 10 or 12 Gyr ago with their initial numbers of stars N_i = 5 × 10^5-2 × 10^6 and their initial stellar mass densities inside their half-mass radii ρ_h,i = 6 × 10^3-10^6 M⊙ pc^-3. Then, the detection rate of BBH escapers is 0.5-20 yr^-1 for a BH retention fraction R_BH = 0.5. A few BBH escapers are components of hierarchical triple systems, although we do not consider secular perturbation on such BBH escapers for our estimate. Our simulations have shown that BHs are still inside some of the GCs at the present day. These BHs may marginally contribute to BBH detection.

  20. An Effective Intrusion Detection Algorithm Based on Improved Semi-supervised Fuzzy Clustering

    NASA Astrophysics Data System (ADS)

    Li, Xueyong; Zhang, Baojian; Sun, Jiaxia; Yan, Shitao

    An algorithm for intrusion detection based on improved evolutionary semi-supervised fuzzy clustering is proposed, suited to situations in which labeled data are harder to obtain than unlabeled data in intrusion detection systems. The algorithm requires only a small amount of labeled data together with a large amount of unlabeled data; the class-label information provided by the labeled data is used to guide the evolution of each fuzzy partition of the unlabeled data, which plays the role of a chromosome. The algorithm can deal with fuzzy labels, does not easily fall into local optima, and is well suited to implementation on parallel architectures. Experiments show that the algorithm improves classification accuracy and has high detection efficiency.

  1. Propensity score matching with clustered data. An application to the estimation of the impact of caesarean section on the Apgar score.

    PubMed

    Arpino, Bruno; Cannas, Massimo

    2016-05-30

    This article focuses on the implementation of propensity score matching for clustered data. Different approaches to reduce bias due to cluster-level confounders are considered and compared using Monte Carlo simulations. We investigated methods that exploit the clustered structure of the data in two ways: in the estimation of the propensity score model (through the inclusion of fixed or random effects) or in the implementation of the matching algorithm. In addition to pure within-cluster matching, we also assessed the performance of a new approach, 'preferential' within-cluster matching. This approach first searches for control units to be matched to treated units within the same cluster. If matching is not possible within the cluster, the algorithm then searches in other clusters. All considered approaches successfully reduced the bias due to the omission of a cluster-level confounder. The preferential within-cluster matching approach, combining the advantages of within-cluster and between-cluster matching, showed relatively good performance in the presence of both large and small clusters, and it was often the best method. An important advantage of this approach is that it reduces the number of unmatched units compared with pure within-cluster matching. We applied these methods to the estimation of the effect of caesarean section on the Apgar score using birth register data. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26833893
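
    The 'preferential' strategy is simple to express: look for the nearest-propensity control in the treated unit's own cluster first, and fall back to the full control pool otherwise. A greedy 1:1 sketch (the caliper and all names are illustrative assumptions, not the authors' implementation):

      import numpy as np

      def preferential_match(ps, treated, cluster, caliper=0.1):
          """Greedy 1:1 propensity-score matching, within-cluster first."""
          controls = {i for i in range(len(ps)) if not treated[i]}
          pairs = []
          for t in np.where(treated)[0]:
              same = [c for c in controls if cluster[c] == cluster[t]]
              for pool in (same, list(controls)):    # within, then between clusters
                  if pool:
                      c = min(pool, key=lambda j: abs(ps[j] - ps[t]))
                      if abs(ps[c] - ps[t]) <= caliper:
                          pairs.append((t, c))
                          controls.discard(c)
                          break
          return pairs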

  2. REGIONAL APPROACH TO ESTIMATING RECREATION BENEFITS OF IMPROVED WATER QUALITY

    EPA Science Inventory

    Recreation demand and value are estimated with the travel-cost method for fishing, camping, boating, and swimming on a site-specific regional basis. The model is regional in that 197 sites are defined for the Pacific Northwest. A gravity model is employed to estimate the number o...

  3. Cluster Observations for Combined X-Ray and Sunyaev-Zel'dovich Estimates of Peculiar Velocities and Distances

    NASA Technical Reports Server (NTRS)

    Lange, A. E.

    1997-01-01

    Measurements of the peculiar velocities of galaxy clusters with respect to the Hubble flow allow the determination of the gravitational field from all matter in the universe, not just the visible component. The Sunyaev-Zel'dovich (SZ) effect (the inverse-Compton scattering of cosmic microwave background photons by the hot gas in clusters of galaxies) allows these velocities to be measured without the use of empirical distance indicators. Additionally, because the magnitude of the SZ effect is independent of redshift, the technique can be used to measure velocities out to the epoch of cluster formation. The SZ technique requires a determination of the temperature of the hot cluster gas from X-ray observations, and measurements of the SZ effect at millimeter wavelengths to separate the contribution of the thermal motions within the gas from that of the cluster peculiar velocity. We have constructed a bolometric receiver, the Sunyaev-Zel'dovich Infrared Experiment, specifically to make measurements of the SZ effect at millimeter wavelengths in order to apply the SZ technique to peculiar velocity measurements. This receiver has already been used to set limits on the peculiar velocities of two galaxy clusters at z ~ 0.2. As a test of the SZ technique, the double cluster pair Abell 222 and 223 was selected for observation. Measurements of the redshifts of the two components suggest that, if the clusters are gravitationally bound, they should exhibit a relative velocity of 1000 km/s, well above the expected precision of 200 km/s (set by astrophysical confusion) that is expected from the SZ method. The temperature can be measured from ASCA data which we obtained for this cluster pair. However, in order to ensure that the temperature estimate from the ASCA data was not dominated by cooling flows within the cluster, we requested ROSAT HRI observations of this cluster pair. Analysis of the X-ray properties of the cluster pair is continuing by combining the ROSAT

  5. Improving Estimation Accuracy of Aggregate Queries on Data Cubes

    SciTech Connect

    Pourabbas, Elaheh; Shoshani, Arie

    2008-08-15

    In this paper, we investigate the problem of estimating a target database from summary databases derived from a base data cube. We show that such estimates can be derived by choosing a primary database and using a proxy database to estimate the results. This technique is common in statistics, but an important issue we address is the accuracy of these estimates. Specifically, given multiple primary and multiple proxy databases that share the same summary measure, the problem is how to select the primary and proxy databases that will generate the most accurate target database estimation possible. We propose an algorithmic approach for determining the steps to select or compute the source databases from multiple summary databases, which makes use of the principles of information entropy. We show that the source databases with the largest number of cells in common provide the most accurate estimates. We prove that this is consistent with maximizing the entropy. We provide some experimental results on the accuracy of the target database estimation in order to verify our results.

  6. Using Smartphone Sensors for Improving Energy Expenditure Estimation.

    PubMed

    Pande, Amit; Zhu, Jindan; Das, Aveek K; Zeng, Yunze; Mohapatra, Prasant; Han, Jay J

    2015-01-01

    Energy expenditure (EE) estimation is an important factor in tracking personal activity and preventing chronic diseases, such as obesity and diabetes. Accurate and real-time EE estimation utilizing small wearable sensors is a difficult task, primarily because most existing schemes work offline or use heuristics. In this paper, we focus on accurate EE estimation for tracking ambulatory activities (walking, standing, climbing upstairs, or downstairs) of a typical smartphone user. We used built-in smartphone sensors (accelerometer and barometer), sampled at low frequency, to accurately estimate EE. Using a barometer sensor, in addition to an accelerometer, greatly increases the accuracy of EE estimation. Using bagged regression trees, a machine learning technique, we developed a generic regression model for EE estimation that yields up to 96% correlation with actual EE. We compare our results against state-of-the-art calorimetry equations and consumer electronics devices (Fitbit and Nike+ FuelBand). The newly developed EE estimation algorithm demonstrated superior accuracy compared with currently available methods. The results were calibrated against COSMED K4b2 calorimeter readings. PMID:27170901
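
    Bagged regression trees are available off the shelf. A minimal sketch with synthetic stand-ins for the accelerometer/barometer features and calorimeter targets (the feature set here is hypothetical, not the authors'):

      import numpy as np
      from sklearn.ensemble import BaggingRegressor
      from sklearn.tree import DecisionTreeRegressor

      X = np.random.rand(500, 6)   # per-window accelerometer + pressure features
      y = np.random.rand(500)      # measured energy expenditure (kcal/min)

      model = BaggingRegressor(DecisionTreeRegressor(), n_estimators=50,
                               random_state=0).fit(X, y)
      print(np.corrcoef(model.predict(X), y)[0, 1])   # in-sample correlation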

  7. Reference Cluster Normalization Improves Detection of Frontotemporal Lobar Degeneration by Means of FDG-PET

    PubMed Central

    Dukart, Juergen; Perneczky, Robert; Förster, Stefan; Barthel, Henryk; Diehl-Schmid, Janine; Draganski, Bogdan; Obrig, Hellmuth; Santarnecchi, Emiliano; Drzezga, Alexander; Fellgiebel, Andreas; Frackowiak, Richard; Kurz, Alexander; Müller, Karsten; Sabri, Osama; Schroeter, Matthias L.; Yakushev, Igor

    2013-01-01

    Positron emission tomography with [18F] fluorodeoxyglucose (FDG-PET) plays a well-established role in assisting early detection of frontotemporal lobar degeneration (FTLD). Here, we examined the impact of intensity normalization to different reference areas on the accuracy of FDG-PET in discriminating between patients with mild FTLD and healthy elderly subjects. FDG-PET was conducted at two centers using different acquisition protocols: 41 FTLD patients and 42 controls were studied at center 1, and 11 FTLD patients and 13 controls were studied at center 2. All PET images were intensity normalized to the cerebellum, primary sensorimotor cortex (SMC), cerebral global mean (CGM), and a reference cluster with the most preserved FDG uptake in the center 1 patient group. Metabolic deficits in the patient group at center 1 appeared 1.5, 3.6, and 4.6 times greater in spatial extent when tracer uptake was normalized to the reference cluster rather than to the cerebellum, SMC, and CGM, respectively. Logistic regression analyses based on normalized values from FTLD-typical regions showed that at center 1, cerebellar, SMC, CGM, and cluster normalizations differentiated patients from controls with accuracies of 86%, 76%, 75%, and 90%, respectively. A similar order of effects was found at center 2. Cluster normalization leads to a significant increase of statistical power in detecting early FTLD-associated metabolic deficits. The established FTLD-specific cluster can be used to improve detection of FTLD on a single-case basis at independent centers - a decisive step towards early diagnosis and prediction of FTLD syndromes enabling specific therapies in the future. PMID:23451025
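
    Intensity normalization itself is a one-line operation once a reference mask is chosen. A sketch (the mask below is an arbitrary stand-in for the cerebellum, SMC, CGM, or data-derived cluster mask):

      import numpy as np

      def normalize_to_reference(pet, ref_mask):
          """Divide every voxel by the mean uptake inside the reference region."""
          return pet / pet[ref_mask].mean()

      pet = np.random.rand(64, 64, 48)    # hypothetical FDG-PET volume
      cluster_mask = pet > 0.9            # stand-in for the preserved-uptake cluster
      pet_norm = normalize_to_reference(pet, cluster_mask)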

  9. Improving estimates of exposures for epidemiologic studies of plutonium workers.

    PubMed

    Ruttenber, A J; Schonbeck, M; McCrea, J; McClure, D; Martyny, J

    2001-01-01

    Epidemiologic studies of nuclear facilities usually focus on relations between cancer and doses from external penetrating radiation, and describe these exposures with little detail on measurement error and missing data. We demonstrate ways to document complex exposures to nuclear workers with data on external and internal exposures to ionizing radiation and toxic chemicals. We describe methods for assessing internal exposures to plutonium and external doses from neutrons; the use of a job exposure matrix for estimating chemical exposures; and methods for imputing missing data for exposures and doses. For plutonium workers at Rocky Flats, errors in estimating neutron doses resulted in underestimating the total external dose for production workers by about 16%. Estimates of systemic deposition do not correlate well with estimates of organ doses. Only a small percentage of workers had exposures to toxic chemicals, making epidemiologic assessments of risk difficult. PMID:11319050

  10. Improved estimation of random vibration loads in launch vehicles

    NASA Technical Reports Server (NTRS)

    Mehta, R.; Erwin, E.; Suryanarayan, S.; Krishna, Murali M. R.

    1993-01-01

    Random vibration induced load is an important component of the total design load environment for payload and launch vehicle components and their support structures. The current approach to random vibration load estimation is based, particularly at the preliminary design stage, on the use of Miles' equation, which assumes a single-degree-of-freedom (DOF) system and white-noise excitation. This paper examines the implications of using multi-DOF system models and response calculations based on numerical integration with the actual excitation spectra for random vibration load estimation. The analytical study presented considers a two-DOF system and brings out the effects of modal mass, damping, and frequency ratios on the random vibration load factor. The results indicate that load estimates based on Miles' equation can differ significantly from the more accurate estimates based on multi-DOF models.
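
    For reference, Miles' equation in its standard handbook form (general background, not reproduced from the paper): for a single-DOF oscillator with natural frequency f_n (Hz), amplification factor Q = 1/(2ζ), and a flat input acceleration PSD W(f_n) (g²/Hz), the RMS response is

      G_{\mathrm{rms}} = \sqrt{\frac{\pi}{2}\, f_n \, Q \, W(f_n)}

    and the design load factor is customarily taken as the 3-sigma value, 3 G_rms.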

  11. Estimating Treatment Effects via Multilevel Matching within Homogenous Groups of Clusters

    ERIC Educational Resources Information Center

    Steiner, Peter M.; Kim, Jee-Seon

    2015-01-01

    Despite the popularity of propensity score (PS) techniques, they are not yet well studied for matching multilevel data where selection into treatment takes place among level-one units within clusters. This paper suggests a PS matching strategy that tries to avoid the disadvantages of within- and across-cluster matching. The idea is to first…

  12. Rigid and non-rigid geometrical transformations of a marker-cluster and their impact on bone-pose estimation.

    PubMed

    Bonci, T; Camomilla, V; Dumas, R; Chèze, L; Cappozzo, A

    2015-11-26

    When stereophotogrammetry and skin markers are used, bone-pose estimation is jeopardised by the soft tissue artefact (STA). At the marker-cluster level, the STA can be represented using a modal series of rigid (RT; translation and rotation) and non-rigid (NRT; homothety and scaling) geometrical transformations. The NRT has been found to be smaller than the RT and has been claimed to have a limited impact on bone-pose estimation. This study investigates this matter by comparatively assessing the propagation of both STA components to the bone-pose estimate, using different numbers of markers. Twelve skin markers distributed over the anterior aspect of a thigh were considered, and STA time functions were generated for each of them, as plausibly occur during walking, using an ad hoc model and represented through the geometrical transformations. Using marker-clusters made of four to 12 markers affected by these STAs, and a Procrustes superimposition approach, the bone pose and the relevant accuracy were estimated. This was also done for a selected four-marker cluster affected by STAs randomly simulated by modifying the original STA NRT component, so that its energy fell in the range 30-90% of the total STA energy. The pose error, which decreased slightly as the number of markers in the marker-cluster increased, was independent of the NRT amplitude, and was always null when the RT component was removed. It was thus demonstrated that only the RT component impacts pose estimation accuracy and should thus be accounted for when designing algorithms aimed at compensating for STA. PMID:26555716

  13. Improved Recharge Estimation from Portable, Low-Cost Weather Stations.

    PubMed

    Holländer, Hartmut M; Wang, Zijian; Assefa, Kibreab A; Woodbury, Allan D

    2016-03-01

    Groundwater recharge estimation is a critical quantity for sustainable groundwater management. The feasibility and robustness of recharge estimation were evaluated using physically based modeling procedures and data from a low-cost weather station with remote sensor techniques in southern Abbotsford, British Columbia, Canada. Recharge was determined using the Richards-equation-based vadose zone hydrological model HYDRUS-1D. The required meteorological data were recorded with a HOBO(TM) weather station for a short observation period (about one year) and with an existing weather station (Abbotsford A) for the long-term study (27 years). Undisturbed soil cores were taken at two locations in the vicinity of the HOBO(TM) weather station. The derived soil hydraulic parameters were used to characterize the soil in the numerical model. Model performance was evaluated using observed soil moisture and soil temperature data obtained from subsurface remote sensors. A rigorous sensitivity analysis was used to test the robustness of the model. Recharge during the short observation period was estimated at 863 and 816 mm at the two locations. The mean annual recharge was estimated at 848 and 859 mm/year based on a 27-year time series. The ratio of annual recharge to precipitation varied from 43% to 69%. From a monthly perspective, the majority (80%) of recharge due to precipitation occurred during the hydrologic winter period. Comparison of the recharge estimates with other studies indicates good agreement. Furthermore, this method is able to produce transient recharge estimates and can provide a reasonable tool for estimating nutrient leaching, which is often controlled by strong precipitation events and rapid infiltration of water and nitrate into the soil. PMID:26011672

  14. Analyzing indirect effects in cluster randomized trials. The effect of estimation method, number of groups and group sizes on accuracy and power

    PubMed Central

    Hox, Joop J.; Moerbeek, Mirjam; Kluytmans, Anouck; van de Schoot, Rens

    2013-01-01

    Cluster randomized trials assess the effect of an intervention that is carried out at the group or cluster level. Ajzen's theory of planned behavior is often used to model the effect of the intervention as an indirect effect mediated in turn by attitude, norms and behavioral intention. Structural equation modeling (SEM) is the technique of choice to estimate indirect effects and their significance. However, this is a large sample technique, and its application in a cluster randomized trial assumes a relatively large number of clusters. In practice, the number of clusters in these studies tends to be relatively small, e.g., much less than fifty. This study uses simulation methods to find the lowest number of clusters needed when multilevel SEM is used to estimate the indirect effect. Maximum likelihood estimation is compared to Bayesian analysis, with the central quality criteria being accuracy of the point estimate and the confidence interval. We also investigate the power of the test for the indirect effect. We conclude that Bayesian estimation works well with much smaller cluster-level sample sizes, such as 20 clusters, than maximum likelihood estimation; although the bias is larger, the coverage is much better. When only 5-10 clusters are available per treatment condition, problems occur even with Bayesian estimation. PMID:24550881

  15. Analyzing indirect effects in cluster randomized trials. The effect of estimation method, number of groups and group sizes on accuracy and power.

    PubMed

    Hox, Joop J; Moerbeek, Mirjam; Kluytmans, Anouck; van de Schoot, Rens

    2014-01-01

    Cluster randomized trials assess the effect of an intervention that is carried out at the group or cluster level. Ajzen's theory of planned behavior is often used to model the effect of the intervention as an indirect effect mediated in turn by attitude, norms and behavioral intention. Structural equation modeling (SEM) is the technique of choice to estimate indirect effects and their significance. However, this is a large sample technique, and its application in a cluster randomized trial assumes a relatively large number of clusters. In practice, the number of clusters in these studies tends to be relatively small, e.g., much less than fifty. This study uses simulation methods to find the lowest number of clusters needed when multilevel SEM is used to estimate the indirect effect. Maximum likelihood estimation is compared to Bayesian analysis, with the central quality criteria being accuracy of the point estimate and the confidence interval. We also investigate the power of the test for the indirect effect. We conclude that Bayesian estimation works well with much smaller cluster-level sample sizes, such as 20 clusters, than maximum likelihood estimation; although the bias is larger, the coverage is much better. When only 5-10 clusters are available per treatment condition, problems occur even with Bayesian estimation. PMID:24550881
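
    A toy Monte Carlo in the spirit of this simulation study, in Python; it estimates the indirect effect from simple cluster-mean regressions rather than the multilevel SEM and Bayesian estimators the paper actually compares, and all parameter values are invented:

      import numpy as np

      rng = np.random.default_rng(1)
      a_true, b_true = 0.4, 0.5          # treatment->mediator, mediator->outcome
      G, n, reps = 20, 15, 500           # clusters, cluster size, replications
      est = []
      for _ in range(reps):
          T = np.repeat(np.arange(G) % 2, n)                 # cluster-level arm
          u = np.repeat(rng.normal(0, 0.3, G), n)            # cluster effects on M
          v = np.repeat(rng.normal(0, 0.3, G), n)            # cluster effects on Y
          M = a_true * T + u + rng.normal(0, 1, G * n)
          Y = b_true * M + v + rng.normal(0, 1, G * n)
          Tm, Mm, Ym = (z.reshape(G, n).mean(1) for z in (T, M, Y))
          a_hat = np.polyfit(Tm, Mm, 1)[0]                   # between-cluster paths
          b_hat = np.polyfit(Mm, Ym, 1)[0]
          est.append(a_hat * b_hat)
      print("mean indirect effect estimate:", round(float(np.mean(est)), 3),
            "| true:", a_true * b_true)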

  16. Estimating the local geometry of magnetic field lines with Cluster: a theoretical discussion of physical and geometrical errors

    NASA Astrophysics Data System (ADS)

    Chanteur, Gerard

    A multi-spacecraft mission with at least four spacecraft, like CLUSTER, MMS, or Cross-Scales, can determine the local geometry of the magnetic field lines when the size of the cluster of spacecraft is small enough compared to the gradient scale lengths of the magnetic field. Shen et al. (2003) and Runov et al. (2003 and 2006) used CLUSTER data to estimate the normal and the curvature of magnetic field lines in the terrestrial current sheet; the two groups used different approaches. Reciprocal vectors of the tetrahedron formed by four spacecraft are a powerful tool for estimating gradients of fields (Chanteur, 1998 and 2000). Considering a thick and planar current sheet model and making use of the statistical properties of the reciprocal vectors allows a theoretical discussion of how physical and geometrical errors affect these estimates. References: Chanteur, G., Spatial Interpolation for Four Spacecraft: Theory, in Analysis Methods for Multi-Spacecraft Data, ISSI SR-001, pp. 349-369, ESA Publications Division, 1998. Chanteur, G., Accuracy of field gradient estimations by Cluster: Explanation of its dependency upon elongation and planarity of the tetrahedron, pp. 265-268, ESA SP-449, 2000. Runov, A., Nakamura, R., Baumjohann, W., Treumann, R. A., Zhang, T. L., Volwerk, M., Vörös, Z., Balogh, A., Glaßmeier, K.-H., Klecker, B., Rème, H., and Kistler, L., Current sheet structure near magnetic X-line observed by Cluster, Geophys. Res. Lett., 30, 33-1, 2003. Runov, A., Sergeev, V. A., Nakamura, R., Baumjohann, W., Apatenkov, S., Asano, Y., Takada, T., Volwerk, M., Vörös, Z., Zhang, T. L., Sauvaud, J.-A., Rème, H., and Balogh, A., Local structure of the magnetotail current sheet: 2001 Cluster observations, Ann. Geophys., 24, 247-262, 2006. Shen, C., Li, X., Dunlop, M., Liu, Z. X., Balogh, A., Baker, D. N., Hapgood, M., and Wang, X., Analyses on the geometrical structure of magnetic field in the current sheet based on cluster measurements, J. Geophys. Res
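
    A minimal NumPy illustration of the reciprocal-vector gradient estimator described by Chanteur (1998); the tetrahedron geometry and the linear test field below are synthetic:

      # Reciprocal vectors of a four-spacecraft tetrahedron and the linear
      # gradient estimator they define; exact for a linear field.
      import numpy as np

      def reciprocal_vectors(r):
          """r: (4, 3) spacecraft positions -> (4, 3) reciprocal vectors k_a."""
          k = np.zeros((4, 3))
          for a in range(4):
              b, c, d = [i for i in range(4) if i != a]
              face = np.cross(r[c] - r[b], r[d] - r[b])   # normal to face (b,c,d)
              k[a] = face / np.dot(r[a] - r[b], face)     # scaled so k_a.(r_a-r_b)=1
          return k

      rng = np.random.default_rng(2)
      r = rng.normal(0.0, 100.0, (4, 3))            # km-scale tetrahedron
      G_true = rng.normal(0.0, 1e-3, (3, 3))        # gradient of a linear field B
      B = r @ G_true.T                              # B sampled at each spacecraft
      k = reciprocal_vectors(r)
      G_est = sum(np.outer(B[a], k[a]) for a in range(4))
      print("gradient recovered exactly:", np.allclose(G_est, G_true))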

  17. Validation of an Improved Pediatric Weight Estimation Strategy

    PubMed Central

    Abdel-Rahman, Susan M.; Ahlers, Nichole; Holmes, Anne; Wright, Krista; Harris, Ann; Weigel, Jaylene; Hill, Talita; Baird, Kim; Michaels, Marla; Kearns, Gregory L.

    2013-01-01

    OBJECTIVES To validate the recently described Mercy method for weight estimation in an independent cohort of children living in the United States. METHODS Anthropometric data including weight, height, humeral length, and mid upper arm circumference were collected from 976 otherwise healthy children (2 months to 14 years old). The data were used to examine the predictive performances of the Mercy method and four other weight estimation strategies (the Advanced Pediatric Life Support [APLS] method, the Broselow tape, and the Luscombe and Owens and the Nelson methods). RESULTS The Mercy method demonstrated accuracy comparable to that observed in the original study (mean error: −0.3 kg; mean percentage error: −0.3%; root mean square error: 2.62 kg; 95% limits of agreement: 0.83–1.19). This method estimated weight within 20% of actual for 95% of children compared with 58.7% for APLS, 78% for Broselow, 54.4% for Luscombe and Owens, and 70.4% for Nelson. Furthermore, the Mercy method was the only weight estimation strategy which enabled prediction of weight in all of the children enrolled. CONCLUSIONS The Mercy method proved to be highly accurate and more robust than existing weight estimation strategies across a wider range of age and body mass index values, thereby making it superior to other existing approaches. PMID:23798905

  18. IMPROVING EMISSIONS ESTIMATES WITH COMPUTATIONAL INTELLIGENCE, DATABASE EXPANSION, AND COMPREHENSIVE VALIDATION

    EPA Science Inventory

    The report discusses an EPA investigation of techniques to improve methods for estimating volatile organic compound (VOC) emissions from area sources. Using the automobile refinishing industry for a detailed area source case study, an emission estimation method is being developed...

  19. Maximum-Likelihood Fits to Histograms for Improved Parameter Estimation

    NASA Astrophysics Data System (ADS)

    Fowler, J. W.

    2014-08-01

    Straightforward methods for adapting the familiar χ² statistic to histograms of discrete events and other Poisson distributed data generally yield biased estimates of the parameters of a model. The bias can be important even when the total number of events is large. For the case of estimating a microcalorimeter's energy resolution at 6 keV from the observed shape of the Mn Kα fluorescence spectrum, a poor choice of χ² can lead to biases of at least 10 % in the estimated resolution when up to thousands of photons are observed. The best remedy is a Poisson maximum-likelihood fit, through a simple modification of the standard Levenberg-Marquardt algorithm for χ² minimization. Where the modification is not possible, another approach allows iterative approximation of the maximum-likelihood fit.
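
    A small Python sketch of such a Poisson maximum-likelihood histogram fit, minimizing the Cash statistic C = 2*sum(mu - n*log(mu)) instead of χ²; the model (a Gaussian line) and all numbers are synthetic, and a general-purpose optimizer stands in for the modified Levenberg-Marquardt step the abstract describes:

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(3)
      x = np.linspace(-5, 5, 101)

      def model(p, x):
          amp, mu, sigma = p
          return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + 1e-9  # keep > 0

      n = rng.poisson(model([20.0, 0.3, 1.2], x))      # observed counts per bin

      def cash(p):                                      # -2 log L up to a constant
          m = model(p, x)
          return 2.0 * np.sum(m - n * np.log(m))

      res = minimize(cash, x0=[15.0, 0.0, 1.0], method="Nelder-Mead")
      print("ML estimates (amp, center, width):", np.round(res.x, 3))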

  20. CFD modelling of most probable bubble nucleation rate from binary mixture with estimation of components' mole fraction in critical cluster

    NASA Astrophysics Data System (ADS)

    Hong, Ban Zhen; Keong, Lau Kok; Shariff, Azmi Mohd

    2016-05-01

    Employing different mathematical models for the bubble nucleation rates of water vapour and of dissolved air molecules is essential, as the physics by which they form bubble nuclei is different. The available methods to calculate the bubble nucleation rate in a binary mixture, such as density functional theory, are difficult to couple with a computational fluid dynamics (CFD) approach. In addition, the effect of dissolved gas concentration was neglected in most studies of the prediction of bubble nucleation rates. In the current work, the most probable bubble nucleation rate for the water vapour and dissolved air mixture in a 2D quasi-stable flow across a cavitating nozzle was estimated via the statistical mean of all possible bubble nucleation rates of the mixture (different mole fractions of water vapour and dissolved air) and the corresponding number of molecules in the critical cluster. Theoretically, the bubble nucleation rate is strongly dependent on the components' mole fractions in a critical cluster; hence, the dissolved gas concentration effect was included in the current work. The possible bubble nucleation rates were predicted based on the calculated number of molecules required to form a critical cluster. The estimation of the components' mole fractions in the critical cluster for the water vapour and dissolved air mixture was obtained by coupling the enhanced classical nucleation theory with the CFD approach. In addition, the distribution of bubble nuclei of the water vapour and dissolved air mixture could be predicted via a population balance model.

  1. Estimating f{sub NL} and g{sub NL} from massive high-redshift galaxy clusters

    SciTech Connect

    Enqvist, Kari; Hotchkiss, Shaun; Taanila, Olli E-mail: shaun.hotchkiss@helsinki.fi

    2011-04-01

    There are observations of at least 14 high-redshift massive galaxy clusters, which have an extremely small probability with a purely Gaussian initial curvature perturbation. Here we revisit the estimation of the contribution of non-Gaussianities to the cluster mass function and point out serious problems that have resulted from the application of the mass function out of the range of its validity. We remedy the situation and show that the values of f{sub NL} previously claimed to completely reconcile (i.e. at ∼ 100% confidence) the existence of the clusters with ΛCDM are unphysically small. However, for WMAP cosmology and at 95% confidence, we arrive at the limit f{sub NL} ≳ 411, which is similar to previous estimates. We also explore the possibility of a large g{sub NL} as the reason for the observed excess of the massive galaxy clusters. This scenario, g{sub NL} > 2 × 10{sup 6}, appears to be in better agreement with CMB and LSS limits on the non-Gaussianity parameters and could also provide an explanation for the overabundance of large voids in the early universe.

  2. An improved border detection in dermoscopy images for density based clustering

    PubMed Central

    2011-01-01

    Background Dermoscopy is one of the major imaging modalities used in the diagnosis of melanoma and other pigmented skin lesions. In current practice, dermatologists determine lesion area by manually drawing lesion borders. Therefore, automated assessment tools for dermoscopy images have become an important research field, mainly because of inter- and intra-observer variations in human interpretation. One of the most important steps in dermoscopy image analysis is automated detection of lesion borders. To our knowledge, in our 2010 study we achieved one of the highest accuracy rates in automated lesion border detection by using a modified density-based clustering algorithm. In that study, we proposed a novel method which removes redundant computations in the well-known spatial density-based clustering algorithm, DBSCAN, and thus speeds up the clustering process considerably. Findings Our previous study was heavily dependent on the pre-processing step, which creates a binary image from the original image. In this study, we embed a new distance measure into the existing algorithm. This provides twofold benefits. First, since the new approach removes the pre-processing step, it works directly on color images instead of binary ones; thus, very important color information is not lost. Second, the accuracy of the delineated lesion borders is improved on 75% of the 100-image dermoscopy dataset. Conclusion The previous and improved methods are tested on the same dermoscopy dataset along with the same set of dermatologist-drawn ground truth images. Results revealed that the improved method works directly on color images without any pre-processing and generates more accurate results than the existing method. PMID:22166058
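
    A rough Python sketch of density-based border detection on a color image: DBSCAN over a joint spatial-plus-color feature space stands in for the paper's modified distance measure. The image is synthetic and the weights `w_space` and `w_color` are illustrative choices, not the published parameters:

      import numpy as np
      from sklearn.cluster import DBSCAN

      rng = np.random.default_rng(4)
      h, w = 60, 60
      img = np.full((h, w, 3), 200.0) + rng.normal(0, 5, (h, w, 3))     # "skin"
      yy, xx = np.mgrid[0:h, 0:w]
      lesion = (yy - 30) ** 2 + (xx - 30) ** 2 < 15 ** 2
      img[lesion] = [90.0, 60.0, 50.0] + rng.normal(0, 5, (lesion.sum(), 3))

      w_space, w_color = 1.0, 0.5            # relative weight of the two terms
      feats = np.column_stack([yy.ravel() * w_space, xx.ravel() * w_space,
                               img.reshape(-1, 3) * w_color])
      labels = DBSCAN(eps=6.0, min_samples=10).fit_predict(feats)
      print("regions found (excluding noise):", len(set(labels)) - (-1 in labels))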

  3. A study of area clustering using factor analysis in small area estimation (An analysis of per capita expenditures of subdistricts level in regency and municipality of Bogor)

    NASA Astrophysics Data System (ADS)

    Wahyudi; Notodiputro, Khairil Anwar; Kurnia, Anang; Anisa, Rahma

    2016-02-01

    Empirical Best Linear Unbiased Prediction (EBLUP) is one of the indirect estimation methods used to estimate parameters of small areas. EBLUP works by using auxiliary variables of the area while adding area random effects. In estimating non-sampled areas, the standard EBLUP can no longer be used because there is no information on the area random effects. To obtain a more proper estimation method for non-sampled areas, the standard EBLUP model has to be modified by adding cluster information. The aim of this research was to study, by means of simulation, whether clustering methods using factor analysis provide better cluster information. The criterion used to evaluate the goodness of fit of the methods in the simulation study was the mean percentage of clustering accuracy. The results of the simulation study showed that the use of factor analysis in clustering increased the average percentage of accuracy, particularly when using Ward's method. The method was then used to estimate per capita expenditures from SUSENAS data based on Small Area Estimation (SAE) techniques, and the quality of the estimates was measured by RMSE. This research has shown that the standard-modified EBLUP model with factor analysis provided better estimates than the standard EBLUP model and the standard-modified EBLUP without factor analysis. Moreover, it was also shown that clustering information is important in estimating non-sampled areas.
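
    A minimal Python sketch of the clustering step only: factor analysis compresses the area-level auxiliary variables, then Ward hierarchical clustering runs on the factor scores. The "subdistrict" data are synthetic stand-ins:

      import numpy as np
      from sklearn.decomposition import FactorAnalysis
      from scipy.cluster.hierarchy import fcluster, linkage

      rng = np.random.default_rng(5)
      # 120 synthetic "subdistricts", 8 auxiliary variables, 3 latent groups
      X = np.vstack([rng.normal(m, 1.0, (40, 8)) for m in (0.0, 1.5, 3.0)])
      scores = FactorAnalysis(n_components=2, random_state=0).fit_transform(X)
      Z = linkage(scores, method="ward")          # Ward clustering on factor scores
      cluster_id = fcluster(Z, t=3, criterion="maxclust")
      print("cluster sizes:", np.bincount(cluster_id)[1:])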

  4. X-SRQ - Improving Scalability and Performance of Multi-Core InfiniBand Clusters

    SciTech Connect

    Shipman, Galen M; Poole, Stephen W

    2008-01-01

    To improve the scalability of InfiniBand on large scale clusters Open MPI introduced a protocol known as B-SRQ [2]. This protocol was shown to provide much better memory utilization of send and receive buffers for a wide variety of benchmarks and real-world applications. Unfortunately B-SRQ increases the number of connections between communicating peers. While addressing one scalability problem of InfiniBand, the protocol introduced another. To alleviate the connection scalability problem of the B-SRQ protocol, a small enhancement to the reliable connection transport was requested which would allow multiple shared receive queues to be attached to a single reliable connection. This modified reliable connection transport is now known as the extended reliable connection transport. X-SRQ is a new transport protocol in Open MPI, based on B-SRQ, which takes advantage of this improvement in connection scalability. This paper introduces the X-SRQ protocol and details the significantly improved scalability of the protocol over B-SRQ and its reduction of the memory footprint of connection state by as much as two orders of magnitude on large scale multi-core systems. In addition to improving scalability, the performance of latency-sensitive collective operations is improved by up to 38% while the variability of results is significantly decreased. A detailed analysis of the improved memory scalability as well as the improved performance is discussed.

  5. An Improved Hybrid Recommender System Using Multi-Based Clustering Method

    NASA Astrophysics Data System (ADS)

    Puntheeranurak, Sutheera; Tsuji, Hidekazu

    Recommender systems have become an important research area as they provide intelligent techniques to search through the enormous volume of information available on the internet. Content-based filtering and collaborative filtering are the most widely adopted recommendation techniques to date. Each has both advantages and disadvantages in providing high quality recommendations; therefore a hybrid recommendation mechanism incorporating components from both methods would yield satisfactory results in many situations. In this paper, we present an elegant and effective framework for combining content-based filtering and collaborative filtering methods. Our approach first clusters on user information and item information for content-based filtering to enhance the existing user and item data. Based on the result of the first step, we calculate the predicted rating data for collaborative filtering. In the last step we cluster the predicted rating data to enhance the scalability of our proposed system. We call our proposal the multi-based clustering method. We show that our proposed system can alleviate the cold-start and sparsity problems and is suitable for various situations in real-life applications. It thus contributes to the improvement of the prediction quality of a hybrid recommender system, as shown in the experimental results.
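
    A very rough Python sketch of the final clustering-on-ratings idea: cluster users on their (mean-imputed) rating rows and predict a missing rating from the user's cluster centroid. The ratings are synthetic, and this omits the content-based first stage of the paper's pipeline:

      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(6)
      R = rng.integers(1, 6, (100, 20)).astype(float)    # users x items ratings
      missing = rng.random(R.shape) < 0.3                # hide 30% of the ratings
      R_obs = np.where(missing, np.nan, R)
      R_imp = np.where(np.isnan(R_obs), np.nanmean(R_obs, axis=0), R_obs)

      km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(R_imp)
      user, item = 3, 7                                  # predict one hidden cell
      pred = km.cluster_centers_[km.labels_[user], item]
      print(f"predicted rating for user {user}, item {item}: {pred:.2f}")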

  6. USING COLORS TO IMPROVE PHOTOMETRIC METALLICITY ESTIMATES FOR GALAXIES

    SciTech Connect

    Sanders, N. E.; Soderberg, A. M.; Levesque, E. M.

    2013-10-01

    There is a well-known correlation between the mass and metallicity of star-forming galaxies. Because mass is correlated with luminosity, this relation is often exploited, when spectroscopy is not available, to estimate galaxy metallicities based on single band photometry. However, we show that galaxy color is typically more effective than luminosity as a predictor of metallicity. This is a consequence of the correlation between color and the galaxy mass-to-light ratio and the recently discovered correlation between star formation rate (SFR) and residuals from the mass-metallicity relation. Using Sloan Digital Sky Survey spectroscopy of ∼180,000 nearby galaxies, we derive 'LZC relations', empirical relations between metallicity (in seven common strong line diagnostics), luminosity, and color (in 10 filter pairs and four methods of photometry). We show that these relations allow photometric metallicity estimates, based on luminosity and a single optical color, that are ∼50% more precise than those made based on luminosity alone; galaxy metallicity can be estimated to within ∼0.05-0.1 dex of the spectroscopically derived value depending on the diagnostic used. Including color information in photometric metallicity estimates also reduces systematic biases for populations skewed toward high or low SFR environments, as we illustrate using the host galaxy of the supernova SN 2010ay. This new tool will lend more statistical power to studies of galaxy populations, such as supernova and gamma-ray burst host environments, in ongoing and future wide-field imaging surveys.
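
    An illustrative Python fit of an "LZC-style" relation, regressing metallicity on luminosity and one color; the mock data and coefficients are invented, not the published calibration:

      import numpy as np

      rng = np.random.default_rng(7)
      n = 5000
      M_B = rng.uniform(-22.0, -16.0, n)          # absolute magnitude (luminosity)
      color = rng.uniform(0.2, 0.9, n)            # one optical color, e.g. g-r
      Z = 8.9 - 0.05 * (M_B + 20) + 0.8 * (color - 0.5) + rng.normal(0, 0.05, n)

      A = np.column_stack([np.ones(n), M_B, color])     # Z ~ L + color regression
      coef, *_ = np.linalg.lstsq(A, Z, rcond=None)
      scatter = (Z - A @ coef).std()
      print("fit coefficients:", np.round(coef, 3), "| scatter (dex):", round(scatter, 3))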

  7. Improved surface volume estimates for surface irrigation balance calculations

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Volume balance calculations used in surface irrigation engineering analysis require estimates of surface storage. Typically, these calculations use the Manning formula and normal depth assumption to calculate upstream flow depth (and thus flow area), and a constant shape factor to describe the rela...

  8. A novel ULA-based geometry for improving AOA estimation

    NASA Astrophysics Data System (ADS)

    Shirvani-Moghaddam, Shahriar; Akbari, Farida

    2011-12-01

    Due to its relatively simple implementation, the Uniform Linear Array (ULA) is a popular geometry for array signal processing. Despite this advantage, it does not have uniform performance in all directions, and Angle of Arrival (AOA) estimation performance degrades considerably at angles close to endfire. In this article, a new configuration is proposed which can solve this problem. The Proposed Array (PA) configuration adds two elements to the ULA, at the top and bottom of the array axis. By extending the signal model of the ULA to the new ULA-based array, AOA estimation performance has been compared in terms of angular accuracy and resolution threshold through two well-known AOA estimation algorithms, MUSIC and MVDR. In both algorithms, the Root Mean Square Error (RMSE) of the detected angles decreases as the input Signal to Noise Ratio (SNR) increases. Simulation results show that the proposed array geometry provides uniformly accurate performance and higher resolution at middle angles as well as border ones. The PA also presents lower RMSE than the ULA in endfire directions. Therefore, the proposed array offers better performance for the border angles with almost the same array size and simplicity in both the MUSIC and MVDR algorithms with respect to the conventional ULA. In addition, the AOA estimation performance of the PA geometry is compared with two well-known 2D-array geometries, L-shape and V-shape, and acceptable results are obtained with equivalent or lower complexity.
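
    A minimal Python MUSIC sketch for the baseline half-wavelength ULA (the paper's extra off-axis elements are not reproduced); source angles, SNR, and snapshot count are arbitrary demo values:

      import numpy as np
      from scipy.signal import find_peaks

      rng = np.random.default_rng(8)
      M, d, N = 8, 0.5, 200                       # elements, spacing (wavelengths), snapshots
      theta = np.radians([-40.0, 10.0])           # two source directions
      A = np.exp(2j * np.pi * d * np.arange(M)[:, None] * np.sin(theta))
      S = rng.normal(size=(2, N)) + 1j * rng.normal(size=(2, N))
      X = A @ S + 0.1 * (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N)))

      Rxx = X @ X.conj().T / N                    # sample covariance
      _, vec = np.linalg.eigh(Rxx)                # ascending eigenvalues
      En = vec[:, :-2]                            # noise subspace (2 sources assumed)
      scan = np.radians(np.linspace(-90, 90, 721))
      a = np.exp(2j * np.pi * d * np.arange(M)[:, None] * np.sin(scan))
      P = 1.0 / np.linalg.norm(En.conj().T @ a, axis=0) ** 2
      pk, _ = find_peaks(P)
      best = pk[np.argsort(P[pk])[-2:]]
      print("estimated AOAs (deg):", np.sort(np.degrees(scan[best])))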

  9. Trellis Tension Monitoring Improves Yield Estimation in Vineyards

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The preponderance of yield estimation practices for commercial vineyards is based on longstanding but individually variable industry protocols that rely on hand sampling fruit on one or a small number of dates during the growing season. Limitations associated with the static nature of yield estimati...

  10. Improved alternatives for estimating in-use material stocks.

    PubMed

    Chen, Wei-Qiang; Graedel, T E

    2015-03-01

    Determinations of in-use material stocks are useful for exploring past patterns and future scenarios of materials use, for estimating end-of-life flows of materials, and thereby for guiding policies on recycling and sustainable management of materials. This is especially true when those determinations are conducted for individual products or product groups such as "automobiles" rather than general (and sometimes nebulous) sectors such as "transportation". We propose four alternatives to the existing top-down and bottom-up methods for estimating in-use material stocks, with the choice depending on the focus of the study and on the available data. We illustrate with aluminum use in automobiles the robustness of and consistencies and differences among these four alternatives and demonstrate that a suitable combination of the four methods permits estimation of the in-use stock of a material contained in all products employing that material, or in-use stocks of different materials contained in a particular product. Therefore, we anticipate the estimation in the future of in-use stocks for many materials in many products or product groups, for many regions, and for longer time periods, by taking advantage of methodologies that fully employ the detailed data sets now becoming available. PMID:25636045
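
    A toy numeric illustration of the cohort/survival arithmetic that underlies product-level in-use stock estimates; all quantities and the exponential retirement curve are invented:

      import numpy as np

      sales = np.array([8.0, 9.0, 10.0, 11.0, 12.0])   # Mt entering use, oldest first
      ages = np.arange(len(sales))[::-1]               # age of each cohort today (yr)
      mean_life = 3.0                                  # assumed product lifetime (yr)
      survival = np.exp(-ages / mean_life)             # assumed exponential retirement
      stock_now = float(np.sum(sales * survival))      # stock = surviving past sales
      print(f"estimated in-use stock: {stock_now:.1f} Mt")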

  11. Improved Uncertainty Quantification in Groundwater Flux Estimation Using GRACE

    NASA Astrophysics Data System (ADS)

    Reager, J. T., II; Rao, P.; Famiglietti, J. S.; Turmon, M.

    2015-12-01

    Groundwater change is difficult to monitor over large scales. One of the most successful approaches is the remote sensing of time-variable gravity using NASA Gravity Recovery and Climate Experiment (GRACE) mission data, and successful case studies have created the opportunity to move towards a global groundwater monitoring framework for the world's largest aquifers. To achieve these estimates, several approximations are applied, including those in GRACE processing corrections, the formulation of the formal GRACE errors, destriping and signal recovery, and the numerical model estimation of snow water, surface water and soil moisture storage states used to isolate a groundwater component. A major weakness in these approaches is inconsistency: different studies have used different sources of primary and ancillary data, and may achieve different results based on alternative choices in these approximations. In this study, we present two cases of groundwater change estimation, in California and the Colorado River basin, selected for their good data availability and varied climates. We achieve a robust numerical estimate of the post-processing uncertainties resulting from land-surface model structural shortcomings and model resolution errors. Groundwater variations should demonstrate less variability than the overlying soil moisture state does, as groundwater has a longer memory of past events due to buffering by infiltration and drainage rate limits. We apply a model ensemble approach in a Bayesian framework constrained by the assumption of decreasing signal variability with depth in the soil column. We also discuss time-variable errors vs. time-constant errors, across-scale errors vs. across-model errors, and error spectral content (across scales and across models). More robust uncertainty quantification for GRACE-based groundwater estimates would take all of these issues into account, allowing for more fair use in management applications and for better integration of GRACE

  12. A new cluster-based oversampling method for improving survival prediction of hepatocellular carcinoma patients.

    PubMed

    Santos, Miriam Seoane; Abreu, Pedro Henriques; García-Laencina, Pedro J; Simão, Adélia; Carvalho, Armando

    2015-12-01

    Liver cancer is the sixth most frequently diagnosed cancer and, particularly, Hepatocellular Carcinoma (HCC) represents more than 90% of primary liver cancers. Clinicians assess each patient's treatment on the basis of evidence-based medicine, which may not always apply to a specific patient, given the biological variability among individuals. Over the years, and for the particular case of Hepatocellular Carcinoma, some research studies have been developing strategies for assisting clinicians in decision making, using computational methods (e.g. machine learning techniques) to extract knowledge from the clinical data. However, these studies have some limitations that have not yet been addressed: some do not focus entirely on Hepatocellular Carcinoma patients, others have strict application boundaries, and none considers the heterogeneity between patients nor the presence of missing data, a common drawback in healthcare contexts. In this work, a real, complex Hepatocellular Carcinoma database composed of heterogeneous clinical features is studied. We propose a new cluster-based oversampling approach, robust to small and imbalanced datasets, which accounts for the heterogeneity of patients with Hepatocellular Carcinoma. The preprocessing in this work is based on data imputation using a distance metric appropriate for both heterogeneous and missing data (HEOM) and on clustering studies to assess the underlying patient groups in the dataset (K-means). The final approach is applied in order to diminish the impact of underlying patient profiles with reduced sizes on survival prediction. It is based on K-means clustering and the SMOTE algorithm to build a representative dataset and use it as a training example for different machine learning procedures (logistic regression and neural networks). The results are evaluated in terms of survival prediction and compared across baseline approaches that do not consider clustering and/or oversampling using the
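
    A Python sketch of the cluster-based oversampling idea: K-means finds patient subgroups, then SMOTE-style interpolation is applied inside each minority cluster so that small profiles are not swamped. The data are synthetic, the helper `smote_within` is our hand-rolled stand-in for SMOTE, and the paper's HEOM-based missing-data handling is omitted:

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.neighbors import NearestNeighbors

      def smote_within(X, n_new, k=3, rng=None):
          """Interpolate between nearest neighbours to create n_new synthetic rows."""
          rng = rng or np.random.default_rng()
          nn = NearestNeighbors(n_neighbors=min(k + 1, len(X))).fit(X)
          _, idx = nn.kneighbors(X)
          base = rng.integers(0, len(X), n_new)
          neigh = idx[base, rng.integers(1, idx.shape[1], n_new)]
          lam = rng.random((n_new, 1))
          return X[base] + lam * (X[neigh] - X[base])

      rng = np.random.default_rng(9)
      X_min = rng.normal(0.0, 1.0, (30, 5))        # minority-class patient features
      labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_min)
      parts = [X_min]
      for c in range(3):
          Xc = X_min[labels == c]
          if len(Xc) >= 2:                         # need neighbours to interpolate
              parts.append(smote_within(Xc, n_new=20, rng=rng))
      print("minority samples before/after:", len(X_min), len(np.vstack(parts)))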

  13. Estimating Time of Infection Using Prior Serological and Individual Information Can Greatly Improve Incidence Estimation of Human and Wildlife Infections

    PubMed Central

    Borremans, Benny; Hens, Niel; Beutels, Philippe; Leirs, Herwig; Reijniers, Jonas

    2016-01-01

    Diseases of humans and wildlife are typically tracked and studied through incidence, the number of new infections per time unit. Estimating incidence is not without difficulties, as asymptomatic infections, low sampling intervals and low sample sizes can introduce large estimation errors. After infection, biomarkers such as antibodies or pathogens often change predictably over time, and this temporal pattern can contain information about the time since infection that could improve incidence estimation. Antibody level and avidity have been used to estimate time since infection and to recreate incidence, but the errors on these estimates using currently existing methods are generally large. Using a semi-parametric model in a Bayesian framework, we introduce a method that allows the use of multiple sources of information (such as antibody level, pathogen presence in different organs, individual age, season) for estimating individual time since infection. When sufficient background data are available, this method can greatly improve incidence estimation, which we show using arenavirus infection in multimammate mice as a test case. The method performs well, especially compared to the situation in which seroconversion events between sampling sessions are the main data source. The possibility to implement several sources of information allows the use of data that are in many cases already available, which means that existing incidence data can be improved without the need for additional sampling efforts or laboratory assays. PMID:27177244

  14. Estimating Time of Infection Using Prior Serological and Individual Information Can Greatly Improve Incidence Estimation of Human and Wildlife Infections.

    PubMed

    Borremans, Benny; Hens, Niel; Beutels, Philippe; Leirs, Herwig; Reijniers, Jonas

    2016-05-01

    Diseases of humans and wildlife are typically tracked and studied through incidence, the number of new infections per time unit. Estimating incidence is not without difficulties, as asymptomatic infections, low sampling intervals and low sample sizes can introduce large estimation errors. After infection, biomarkers such as antibodies or pathogens often change predictably over time, and this temporal pattern can contain information about the time since infection that could improve incidence estimation. Antibody level and avidity have been used to estimate time since infection and to recreate incidence, but the errors on these estimates using currently existing methods are generally large. Using a semi-parametric model in a Bayesian framework, we introduce a method that allows the use of multiple sources of information (such as antibody level, pathogen presence in different organs, individual age, season) for estimating individual time since infection. When sufficient background data are available, this method can greatly improve incidence estimation, which we show using arenavirus infection in multimammate mice as a test case. The method performs well, especially compared to the situation in which seroconversion events between sampling sessions are the main data source. The possibility to implement several sources of information allows the use of data that are in many cases already available, which means that existing incidence data can be improved without the need for additional sampling efforts or laboratory assays. PMID:27177244
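
    A grid-approximation Python sketch of the core idea: combine an assumed antibody-decay model with a measured titre to obtain a posterior over time since infection. The kinetics function, noise level, and flat prior are invented placeholders, far simpler than the paper's semi-parametric multi-source model:

      import numpy as np

      def expected_log_titre(t):                 # assumed biomarker kinetics
          return 4.0 * np.exp(-t / 60.0)         # peak ~4 log units, ~60-day decay

      t_grid = np.linspace(0.0, 365.0, 366)      # candidate days since infection
      prior = np.full(t_grid.size, 1.0 / t_grid.size)   # flat prior over one year
      observed, sigma = 2.5, 0.4                 # measured log titre, assay noise
      like = np.exp(-0.5 * ((observed - expected_log_titre(t_grid)) / sigma) ** 2)
      post = prior * like
      post /= post.sum()                         # normalized posterior on the grid
      print("posterior mean time since infection (days):",
            round(float(np.sum(t_grid * post)), 1))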

  15. Improved source term estimation using blind outlier detection

    NASA Astrophysics Data System (ADS)

    Martinez-Camara, Marta; Bejar Haro, Benjamin; Vetterli, Martin; Stohl, Andreas

    2014-05-01

    Emissions of substances into the atmosphere are produced in situations such as volcano eruptions, nuclear accidents or pollutant releases. It is necessary to know the source term - how the magnitude of these emissions changes with time - in order to predict the consequences of the emissions, such as high radioactivity levels in a populated area or high concentration of volcanic ash in an aircraft flight corridor. However, in general, we know neither how much material was released in total, nor the relative variation of emission strength with time. Hence, estimating the source term is a crucial task. Estimating the source term generally involves solving an ill-posed linear inverse problem using datasets of sensor measurements. Several so-called inversion methods have been developed for this task. Unfortunately, objective quantitative evaluation of the performance of inversion methods is difficult due to the fact that the ground truth is unknown for practically all the available measurement datasets. In this work we use the European Tracer Experiment (ETEX) - a rare example of an experiment where the ground truth is available - to develop and to test new source estimation algorithms. Knowledge of the ground truth grants us access to the additive error term. We show that the distribution of this error is heavy-tailed, which means that some measurements are outliers. We also show that precisely these outliers severely degrade the performance of traditional inversion methods. Therefore, we develop blind outlier detection algorithms specifically suited to the source estimation problem. Then, we propose new inversion methods that combine traditional regularization techniques with blind outlier detection. Such hybrid methods reduce the error of reconstruction of the source term up to 45% with respect to previously proposed methods.
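
    An illustrative Python sketch of one way to combine regularized inversion with outlier handling, in the spirit of the hybrid methods described: iteratively reweighted least squares with Huber weights down-weights gross outliers. The source-receptor matrix, emission profile, and all tuning constants are synthetic, and this is not the authors' specific algorithm:

      import numpy as np

      rng = np.random.default_rng(10)
      T, nobs = 24, 60                                   # emission steps, sensors
      A = rng.random((nobs, T)) * (rng.random((nobs, T)) < 0.3)  # toy SRS matrix
      x_true = 10.0 * np.maximum(np.sin(np.linspace(0, np.pi, T)), 0)
      y = A @ x_true + rng.normal(0, 0.1, nobs)
      y[rng.choice(nobs, 5, replace=False)] += 15.0      # heavy-tailed outliers

      lam, delta = 0.1, 0.5
      x = np.zeros(T)
      for _ in range(20):                                # IRLS with Huber weights
          r = y - A @ x
          w = np.minimum(1.0, delta / np.maximum(np.abs(r), 1e-12))
          AtW = A.T * w                                  # A.T @ diag(w)
          x = np.linalg.solve(AtW @ A + lam * np.eye(T), AtW @ y)
          x = np.maximum(x, 0.0)                         # emissions are nonnegative
      print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))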

  16. Uncertainty Estimation Improves Energy Measurement and Verification Procedures

    SciTech Connect

    Walter, Travis; Price, Phillip N.; Sohn, Michael D.

    2014-05-14

    Implementing energy conservation measures in buildings can reduce energy costs and environmental impacts, but such measures cost money to implement so intelligent investment strategies require the ability to quantify the energy savings by comparing actual energy used to how much energy would have been used in absence of the conservation measures (known as the baseline energy use). Methods exist for predicting baseline energy use, but a limitation of most statistical methods reported in the literature is inadequate quantification of the uncertainty in baseline energy use predictions. However, estimation of uncertainty is essential for weighing the risks of investing in retrofits. Most commercial buildings have, or soon will have, electricity meters capable of providing data at short time intervals. These data provide new opportunities to quantify uncertainty in baseline predictions, and to do so after shorter measurement durations than are traditionally used. In this paper, we show that uncertainty estimation provides greater measurement and verification (M&V) information and helps to overcome some of the difficulties with deciding how much data is needed to develop baseline models and to confirm energy savings. We also show that cross-validation is an effective method for computing uncertainty. In so doing, we extend a simple regression-based method of predicting energy use using short-interval meter data. We demonstrate the methods by predicting energy use in 17 real commercial buildings. We discuss the benefits of uncertainty estimates which can provide actionable decision making information for investing in energy conservation measures.
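
    A compact Python sketch of the cross-validation idea: out-of-fold residuals of a simple change-point baseline model give an empirical prediction-error band. The "building" is a synthetic temperature-driven load, and the 18-degree change point is an arbitrary demo value:

      import numpy as np

      rng = np.random.default_rng(11)
      temp = rng.uniform(0.0, 30.0, 720)                  # hourly outdoor temperature
      load = 50 + 2.0 * np.maximum(temp - 18.0, 0) + rng.normal(0, 3, temp.size)

      def design(x):                                      # change-point baseline model
          return np.column_stack([np.ones(x.size), np.maximum(x - 18.0, 0)])

      idx = rng.permutation(temp.size)
      resid = []
      for fold in np.array_split(idx, 5):                 # 5-fold cross-validation
          train = np.setdiff1d(idx, fold)
          coef, *_ = np.linalg.lstsq(design(temp[train]), load[train], rcond=None)
          resid.extend(load[fold] - design(temp[fold]) @ coef)
      lo, hi = np.percentile(resid, [2.5, 97.5])
      print(f"95% out-of-sample error band: [{lo:.1f}, {hi:.1f}] (load units)")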

  17. Improved plausibility bounds about the 2005 HIV and AIDS estimates

    PubMed Central

    Morgan, M; Walker, N; Gouws, E; Stanecki, K A; Stover, J

    2006-01-01

    Background Since 1998 the Joint United Nations Programme on HIV/AIDS and the World Health Organization have provided estimates of the magnitude of the HIV epidemic for individual countries. Starting with the 2003 estimates, plausibility bounds about the estimates were also reported. The bounds are intended to serve as a guide to what reasonable or plausible ranges are for the uncertainty in HIV incidence, prevalence, and mortality. Methods Plausibility bounds were developed for three situations: for countries with generalised epidemics, for countries with low level or concentrated (LLC) epidemics, and for regions. The techniques used build on those developed for the previous reporting round; however, the current bounds are based on the available surveillance and survey data from each individual country rather than on data from a few prototypical countries. Results The uncertainty around the HIV estimates depends on the quality of the surveillance system in the country. Countries with population based HIV seroprevalence surveys have the tightest plausibility bounds (average relative range about the adult HIV prevalence (ARR) of −18% to +19%). Generalised epidemic countries without a survey have the next tightest ranges (average ARR of −46% to +59%). Those LLC countries which have conducted multiple surveys over time for HIV among the populations most at risk have bounds similar to those in generalised epidemic countries (ARR −40% to +67%). As the number and quality of the studies in LLC countries go down, the plausibility bounds widen (ARR of −38% to +102% for countries with medium quality data and ARR of −53% to +183% for countries with poor quality data). The plausibility bounds for regions directly reflect the bounds for the countries in those regions. Conclusions Although scientific, the plausibility bounds do not represent and should not be interpreted as formal statistical confidence intervals. However in order to make the bounds as

  18. RCWIM - an improved global water isotope pattern prediction model using fuzzy climatic clustering regionalization

    NASA Astrophysics Data System (ADS)

    Terzer, Stefan; Araguás-Araguás, Luis; Wassenaar, Leonard I.; Aggarwal, Pradeep K.

    2013-04-01

    Prediction of geospatial H and O isotopic patterns in precipitation has become increasingly important to diverse disciplines beyond hydrology, such as climatology, ecology, food authenticity, and criminal forensics, because these two isotopes of rainwater often control the terrestrial isotopic spatial patterns that facilitate the linkage of products (food, wildlife, water) to origin or movement (food, criminalistics). Currently, spatial water isotopic pattern prediction relies on combined regression and interpolation techniques to create gridded datasets by using data obtained from the Global Network of Isotopes In Precipitation (GNIP). However, current models suffer from two shortcomings: (a) models may have limited covariates and/or parameterization fitted to a global domain, which results in poor predictive outcomes at regional scales, or (b) the spatial domain is intentionally restricted to regional settings, and thereby of little use in providing information at global geospatial scales. Here we present a new global climatically regionalized isotope prediction model which overcomes these limitations through the use of fuzzy clustering of climatic data subsets, allowing us to better identify and customize appropriate covariates and their multiple regression coefficients instead of aiming for a one-size-fits-all global fit (RCWIM - Regionalized Climate Cluster Water Isotope Model). The new model significantly reduces the point-based regression residuals and results in much lower overall isotopic prediction uncertainty, since residuals are interpolated onto the regression surface. The new precipitation δ2H and δ18O isoscape model is available on a global scale at 10 arc-minutes spatial and at monthly, seasonal and annual temporal resolution, and will provide improved predicted stable isotope values used for a growing number of applications. The model further provides a flexible framework for future improvements using regional climatic clustering.

  19. The Impact of Galaxy Cluster Mergers on Cosmological Parameter Estimation from Surveys of the Sunyaev-Zel'dovich Effect

    NASA Astrophysics Data System (ADS)

    Wik, Daniel R.; Sarazin, Craig L.; Ricker, Paul M.; Randall, Scott W.

    2008-06-01

    Sensitive surveys of the cosmic microwave background will detect thousands of galaxy clusters via the Sunyaev-Zel'dovich (SZ) effect. Two SZ observables, the central or maximum and integrated Comptonization parameters ymax and Y, relate in a simple way to the total cluster mass, which allows the construction of mass functions (MFs) that can be used to estimate cosmological parameters such as ΩM, σ8, and the dark energy parameter w. However, clusters form from the mergers of smaller structures, events that can disrupt the equilibrium of intracluster gas on which SZ-M relations rely. From a set of N-body/hydrodynamical simulations of binary cluster mergers, we calculate the evolution of Y and ymax over the course of merger events and find that both parameters are transiently "boosted," primarily during the first core passage. We then use a semianalytic technique developed by Randall et al. to estimate the effect of merger boosts on the distribution functions YF and yF of Y and ymax, respectively, via cluster merger histories determined from extended Press-Schechter (PS) merger trees. We find that boosts do not induce an overall systematic effect on YFs, and the values of ΩM, σ8, and w were returned to within 2% of values expected from the nonboosted YFs. The boosted yFs are significantly biased, however, causing ΩM to be underestimated by 15%-45%, σ8 to be overestimated by 10%-25%, and w to be pushed to more negative values by 25%-45%. We confirm that the integrated SZ effect, Y, is far more robust to mergers than ymax, as previously reported by Motl et al. and similarly found for the X-ray equivalent YX, and we conclude that Y is the superior choice for constraining cosmological parameters.

  20. Robust estimation of the arterial input function for Logan plots using an intersectional searching algorithm and clustering in positron emission tomography for neuroreceptor imaging.

    PubMed

    Naganawa, Mika; Kimura, Yuichi; Yano, Junichi; Mishina, Masahiro; Yanagisawa, Masao; Ishii, Kenji; Oda, Keiichi; Ishiwata, Kiichi

    2008-03-01

    The Logan plot is a powerful algorithm used to generate binding-potential images from dynamic positron emission tomography (PET) images in neuroreceptor studies. However, it requires arterial blood sampling and metabolite correction to provide an input function, and clinically it is preferable that this need for arterial blood sampling be obviated. Estimation of the input function with metabolite correction using an intersectional searching algorithm (ISA) has been proposed. The ISA seeks the input function from the intersection between the planes spanned by measured radioactivity curves in tissue and their cumulative integrals in data space. However, the ISA is sensitive to noise included in measured curves, and it often fails to estimate the input function. In this paper, we propose a robust estimation of the cumulative integral of the plasma time-activity curve (pTAC) using ISA (robust EPISA) to overcome noise issues. The EPISA reduces noise in the measured PET data using averaging and clustering that gathers radioactivity curves with similar kinetic parameters. We confirmed that a little noise made the estimation of the input function extremely difficult in the simulation. The robust EPISA was validated by application to eight real dynamic [(11)C]TMSX PET data sets used to visualize adenosine A(2A) receptors and four real dynamic [(11)C]PIB PET data sets used to visualize amyloid-beta plaque. Peripherally, the latter showed faster metabolism than the former. The clustering operation improved the signal-to-noise ratio for the PET data sufficiently to estimate the input function, and the calculated neuroreceptor images had a quality equivalent to that using measured pTACs after metabolite correction. Our proposed method noninvasively yields an alternative input function for Logan plots, allowing the Logan plot to be more useful in neuroreceptor studies. PMID:18187345
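
    A toy Python version of Logan graphical analysis for a one-tissue-compartment model: after equilibration, the plot of int(C_t)/C_t against int(C_p)/C_t becomes linear with slope equal to the total distribution volume. All curves and rate constants are synthetic, and no input-function estimation (the paper's actual contribution) is attempted:

      import numpy as np

      t = np.linspace(0.1, 90.0, 200)                   # minutes
      Cp = 100 * np.exp(-t / 8) + 5 * np.exp(-t / 80)   # metabolite-corrected input
      K1, k2 = 0.3, 0.1                                 # one-tissue-compartment truth
      dt = t[1] - t[0]
      Ct = K1 * dt * np.convolve(np.exp(-k2 * t), Cp)[: t.size]  # tissue curve

      x = np.cumsum(Cp) * dt / Ct                       # int(Cp)/Ct
      y = np.cumsum(Ct) * dt / Ct                       # int(Ct)/Ct
      late = t > 40                                     # linear portion of the plot
      slope = np.polyfit(x[late], y[late], 1)[0]
      print(f"Logan V_T estimate: {slope:.2f} (true K1/k2 = {K1 / k2:.2f})")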

  1. Modified distance in average linkage based on M-estimator and MADn criteria in hierarchical cluster analysis

    NASA Astrophysics Data System (ADS)

    Muda, Nora; Othman, Abdul Rahman

    2015-10-01

    The process of grouping a set of objects into classes of similar objects is called clustering. It divides a large group of observations into smaller groups so that the observations within each group are relatively similar and the observations in different groups are relatively dissimilar. In this study, an agglomerative method in hierarchical cluster analysis was chosen and clusters were constructed using an average linkage technique. The average linkage technique requires a distance between clusters, calculated as the average distance over all pairs of points with one point from each group. This average distance is not robust when there is an outlier. Therefore, the average distance in average linkage needs to be modified to overcome the outlier problem: an outlier detection criterion based on MADn is applied and the average distance is recalculated without the outliers. Next, the distance in average linkage is calculated based on a modified one-step M-estimator (MOM). The resulting clusters are presented in a dendrogram. To evaluate the goodness of the modified distance in average linkage clustering, a bootstrap analysis is conducted on the dendrogram and the bootstrap value (BP) is assessed for each branch that forms a group, to ensure the reliability of the branches constructed. This study found that the average linkage technique with the modified distance is significantly superior to the usual average linkage technique when there is an outlier. The two techniques perform similarly when there is no outlier.
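
    A naive Python sketch of an outlier-resistant average linkage: inter-cluster distances are screened with the usual MADn rule (cutoff 2.24) before averaging, inside a simple O(n^3) agglomerative loop. This approximates the spirit of the modified linkage rather than reproducing the paper's MOM-based formula:

      import numpy as np

      def madn_trimmed_mean(d):
          """Average after dropping values beyond 2.24 MADn from the median."""
          med = np.median(d)
          madn = 1.4826 * np.median(np.abs(d - med))
          if madn == 0.0:
              return float(med)
          return float(d[np.abs(d - med) <= 2.24 * madn].mean())

      def robust_average_linkage(X, n_clusters):
          clusters = [[i] for i in range(len(X))]
          D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)   # pairwise distances
          while len(clusters) > n_clusters:
              best, pair = np.inf, None
              for a in range(len(clusters)):
                  for b in range(a + 1, len(clusters)):
                      link = madn_trimmed_mean(D[np.ix_(clusters[a], clusters[b])].ravel())
                      if link < best:
                          best, pair = link, (a, b)
              a, b = pair
              clusters[a] += clusters.pop(b)
          return clusters

      rng = np.random.default_rng(12)
      X = np.vstack([rng.normal(0, 0.5, (15, 2)), rng.normal(4, 0.5, (15, 2))])
      X[0] = [20.0, 20.0]                                # one gross outlier
      print("cluster sizes:", [len(c) for c in robust_average_linkage(X, 3)])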

  2. Global Water Resources Under Future Changes: Toward an Improved Estimation

    NASA Astrophysics Data System (ADS)

    Islam, M.; Agata, Y.; Hanasaki, N.; Kanae, S.; Oki, T.

    2005-05-01

    Global water resources availability in the 21st century is going to be an important concern. Despite its international recognition, however, there are until now very limited global estimates of water resources that consider the geographical linkage between water supply and demand defined by runoff and its passage through the river network. The available studies are further limited for reasons such as differing approaches to defining water scarcity, reliance on annual average figures that ignore inter-annual or seasonal variability, and the absence of virtual water trading. In this study, global water resources under future climate change, associated with several socio-economic factors, were estimated over both temporal and spatial scales. Global runoff data were derived from several land surface models under the GSWP2 (Global Soil Wetness Project) project and further processed through the TRIP (Total Runoff Integrating Pathways) river routing model to produce a 0.5x0.5 degree grid-based figure. Water abstraction was estimated at the same spatial resolution for three sectors: domestic, industrial and agricultural. GCM outputs from CCSR and MRI were used to predict the runoff changes. Socio-economic factors like population and GDP growth affected mostly the demand part. Instead of simply looking at annual figures, monthly figures for both supply and demand were considered. In an average year, such seasonal variability can affect crop yield significantly. In other cases, inter-annual variability of runoff can cause absolute drought conditions. To account for the vulnerabilities of a region to future changes, both inter-annual and seasonal effects were thus considered. At present, the study assumes future agricultural water use to be unchanged under climatic changes. In this connection, work with the EPIC model is underway to estimate future agricultural water demand under climatic changes on a monthly basis. From

  3. Improved fire radiative energy estimation in high latitude ecosystems

    NASA Astrophysics Data System (ADS)

    Melchiorre, A.; Boschetti, L.

    2014-12-01

    Scientists, land managers, and policy makers are facing new challenges as fire regimes are evolving as a result of climate change (Westerling et al. 2006). In high latitudes fires are increasing in number and size as temperatures increase and precipitation decreases (Kasischke and Turetsky 2006). Peatlands, like the large complexes in the Alaskan tundra, are burning more frequently and severely as a result of these changes, releasing large amounts of greenhouse gases. Remotely sensed data are routinely used to monitor the location of active fires and the extent of burned areas, but they are not sensitive to the depth of the organic soil layer combusted, resulting in underestimation of peatland greenhouse gas emissions when employing the conventional 'bottom up' approach (Seiler and Crutzen 1980). An alternative approach would be the direct estimation of the biomass burned from the energy released by the fire (Fire Radiative Energy, FRE) (Wooster et al. 2003). Previous works (Boschetti and Roy 2009; Kumar et al. 2011) showed that the sampling interval of polar orbiting satellite systems severely limits the accuracy of the FRE in tropical ecosystems (up to four overpasses a day with MODIS), but because of the convergence of the orbits, more observations are available at higher latitudes. In this work, we used a combination of MODIS thermal data and Landsat optical data for the estimation of biomass burned in peatland ecosystems. First, the global MODIS active fire detection algorithm (Giglio et al. 2003) was modified, adapting the temperature thresholds to maximize the number of detections in boreal regions. Then, following the approach proposed by Boschetti and Roy (2009), the FRP point estimations were interpolated in time and space to cover the full temporal and spatial extent of the burned area, mapped with Landsat 5 TM data. The methodology was tested on a large burned area in Alaska, and the results compared to published field measurements (Turetsky et al. 2011).
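
    A back-of-envelope Python illustration of the FRE workflow: integrate sparse FRP observations over the fire's lifetime and convert to biomass consumed. The overpass times and FRP values are invented; the 0.368 kg/MJ conversion is the commonly cited Wooster et al. coefficient, used here only for illustration:

      import numpy as np

      t_obs = np.array([0.0, 6.0, 12.0, 18.0, 30.0, 42.0])     # overpass times (h)
      frp = np.array([150.0, 420.0, 380.0, 210.0, 90.0, 10.0]) # whole-fire FRP (MW)
      fre_mj = np.trapz(frp, t_obs * 3600.0)                   # MW*s = MJ
      biomass_kg = 0.368 * fre_mj                              # assumed kg per MJ
      print(f"FRE = {fre_mj:.3g} MJ -> ~{biomass_kg / 1e6:.2f} kt biomass consumed")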

  4. Improved Speech Coding Based on Open-Loop Parameter Estimation

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Chen, Ya-Chin; Longman, Richard W.

    2000-01-01

    A nonlinear optimization algorithm for linear predictive speech coding was developed earlier that not only optimizes the linear model coefficients for the open loop predictor, but does the optimization including the effects of quantization of the transmitted residual. It also simultaneously optimizes the quantization levels used for each speech segment. In this paper, we present an improved method for initialization of this nonlinear algorithm, and demonstrate substantial improvements in performance. In addition, the new procedure produces monotonically improving speech quality with increasing numbers of bits used in the transmitted error residual. Examples of speech encoding and decoding are given for 8 speech segments, and signal-to-noise levels as high as 47 dB are produced. As in typical linear predictive coding, the optimization is done on the open loop speech analysis model. Here we demonstrate that minimizing the error of the closed loop speech reconstruction, instead of the simpler open loop optimization, is likely to produce negligible improvement in speech quality. The examples suggest that the algorithm here is close to giving the best performance obtainable from a linear model, for the chosen order with the chosen number of bits for the codebook.
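
    A minimal Python sketch of the standard open-loop LPC analysis this work builds on: autocorrelation normal equations solved as a Toeplitz system, then the residual that a codec would quantize. The "speech" frame is a synthetic vowel-like waveform, and no quantization optimization (the paper's actual subject) is performed:

      import numpy as np
      from scipy.linalg import solve_toeplitz

      rng = np.random.default_rng(13)
      fs, f0, p = 8000, 120, 10                      # sample rate, pitch, LPC order
      t = np.arange(0, 0.032, 1 / fs)                # one 32 ms analysis frame
      s = sum(np.sin(2 * np.pi * f0 * k * t) / k for k in range(1, 6))
      s += rng.normal(0, 0.01, t.size)               # vowel-like synthetic segment

      r = np.correlate(s, s, "full")[t.size - 1 : t.size + p]   # lags 0..p
      a = solve_toeplitz(r[:p], r[1:p + 1])          # autocorrelation method
      pred = np.array([a @ s[i - p:i][::-1] for i in range(p, t.size)])
      resid = s[p:] - pred                           # residual a codec would quantize
      print("prediction gain (dB):", 10 * np.log10(np.var(s[p:]) / np.var(resid)))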

  5. Improving Mantel-Haenszel DIF Estimation through Bayesian Updating

    ERIC Educational Resources Information Center

    Zwick, Rebecca; Ye, Lei; Isham, Steven

    2012-01-01

    This study demonstrates how the stability of Mantel-Haenszel (MH) DIF (differential item functioning) methods can be improved by integrating information across multiple test administrations using Bayesian updating (BU). The authors conducted a simulation that showed that this approach, which is based on earlier work by Zwick, Thayer, and Lewis,…
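
    A minimal precision-weighted normal update in Python, illustrating the Bayesian-updating idea on the MH log-odds scale; the numbers are invented and this is a simplification of the Zwick-Thayer-Lewis procedure the study builds on:

      prior_mean, prior_var = 0.0, 0.30     # prior on the MH log-odds (DIF) scale
      new_est, new_var = -0.45, 0.10        # current-administration MH estimate

      w = prior_var / (prior_var + new_var) # weight given to the new data
      post_mean = (1 - w) * prior_mean + w * new_est
      post_var = prior_var * new_var / (prior_var + new_var)
      print(f"updated DIF estimate: {post_mean:.3f} (variance {post_var:.3f})")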

  6. Estimation of feasible solution space using Cluster Newton Method: application to pharmacokinetic analysis of irinotecan with physiologically-based pharmacokinetic models

    PubMed Central

    2013-01-01

    Background To facilitate new drug development, physiologically-based pharmacokinetic (PBPK) modeling methods receive growing attention as a tool to fully understand and predict complex pharmacokinetic phenomena. As the number of parameters to reproduce physiological functions tend to be large in PBPK models, efficient parameter estimation methods are essential. We have successfully applied a recently developed algorithm to estimate a feasible solution space, called Cluster Newton Method (CNM), to reveal the cause of irinotecan pharmacokinetic alterations in two cancer patient groups. Results After improvements in the original CNM algorithm to maintain parameter diversities, a feasible solution space was successfully estimated for 55 or 56 parameters in the irinotecan PBPK model, within ten iterations, 3000 virtual samples, and in 15 minutes (Intel Xeon E5-1620 3.60GHz × 1 or Intel Core i7-870 2.93GHz × 1). Control parameters or parameter correlations were clarified after the parameter estimation processes. Possible causes in the irinotecan pharmacokinetic alterations were suggested, but they were not conclusive. Conclusions Application of CNM achieved a feasible solution space by solving inverse problems of a system containing ordinary differential equations (ODEs). This method may give us reliable insights into other complicated phenomena, which have a large number of parameters to estimate, under limited information. It is also helpful to design prospective studies for further investigation of phenomena of interest. PMID:24555857

  7. Theoretical estimation of solvation parameters and interfacial tension of clusters of potassium halides in aqueous solutions

    NASA Astrophysics Data System (ADS)

    Polak, W.; Sangwal, K.

    1996-03-01

    Using the model of the formation of ionic clusters, an analytical equation valid for the equilibrium concentration of solute in the solution is derived. Employing Boltzmann statistics in conjunction with the experimental values of the equilibrium concentration of KF, KCl, KBr and KI electrolytes in aqueous solution at 25°C, the above analytical equation is used to compute the best values of the dielectric permittivity of the solvation shell for the K+ ion and the four anions separately. These values of the dielectric permittivity of the solvation shells are then used to compute the adsorption energy of water molecules on the {100} surface of regular clusters and their surface tension in the solution as functions of the type of salt, its concentration and cluster size. It is found that both the average adsorption energy and the interfacial tension of regular clusters composed of i ions can be approximated by a linear function of i^(-1/2) for different concentrations of all the investigated potassium halides, and that, depending on the concentration of the solutions, the surface tension of regular clusters in solutions can increase or decrease with their size.

  8. An Improved Bandstrength Index for the CH G Band of Globular Cluster Giants

    NASA Astrophysics Data System (ADS)

    Martell, Sarah L.; Smith, Graeme H.; Briley, Michael M.

    2008-08-01

    Spectral indices are useful tools for quantifying the strengths of features in moderate-resolution spectra and relating them to intrinsic stellar parameters. This paper focuses on the 4300 Å CH G-band, a classic example of a feature interpreted through use of spectral indices. G-band index definitions, as applied to globular clusters of different metallicity, abound in the literature, and transformations between the various systems, or comparisons between different authors' work, are difficult and not always useful. We present a method for formulating an optimized G-band index, using a large grid of synthetic spectra. To make our new index a reliable measure of carbon abundance, we minimize its dependence on [N/Fe] and simultaneously maximize its sensitivity to [C/Fe]. We present a definition for the new index S2(CH), along with estimates of the errors inherent in using it for [C/Fe] determination, and conclude that it is valid for use with spectra of bright globular cluster red giants over a large range in [Fe/H], [C/Fe], and [N/Fe].

  9. Estimating Daytime Ecosystem Respiration to Improve Estimates of Gross Primary Production of a Temperate Forest

    PubMed Central

    Sun, Jinwei; Wu, Jiabing; Guan, Dexin; Yao, Fuqi; Yuan, Fenghui; Wang, Anzhi; Jin, Changjie

    2014-01-01

    Leaf respiration is an important component of carbon exchange in terrestrial ecosystems, and estimates of leaf respiration directly affect the accuracy of ecosystem carbon budgets. Leaf respiration is inhibited by light; therefore, gross primary production (GPP) will be overestimated if the reduction in leaf respiration by light is ignored. However, few studies have quantified GPP overestimation with respect to the degree of light inhibition in forest ecosystems. To determine the effect of light inhibition of leaf respiration on GPP estimation, we assessed the variation in leaf respiration of seedlings of the dominant tree species in an old mixed temperate forest with different photosynthetically active radiation levels using the Laisk method. Canopy respiration was estimated by combining the effect of light inhibition on leaf respiration of these species with within-canopy radiation. Leaf respiration decreased exponentially with an increase in light intensity. Canopy respiration and GPP were overestimated by approximately 20.4% and 4.6%, respectively, when leaf respiration reduction in light was ignored compared with the values obtained when light inhibition of leaf respiration was considered. This study indicates that accurate estimates of daytime ecosystem respiration are needed for the accurate evaluation of carbon budgets in temperate forests. In addition, this study provides a valuable approach to accurately estimate GPP by considering leaf respiration reduction in light in other ecosystems. PMID:25419844
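
    To make the correction concrete, here is a minimal sketch under an assumed exponential-decay form for light inhibition; the rate constant, residual fraction, and dark respiration value are illustrative choices, not the paper's fitted parameters.

    ```python
    import numpy as np

    def leaf_respiration(par, r_dark=1.0, k=0.01, frac_min=0.4):
        """Assumed form: respiration decays exponentially with PAR toward a
        residual fraction of the dark respiration rate r_dark."""
        return r_dark * (frac_min + (1.0 - frac_min) * np.exp(-k * par))

    par = np.linspace(0.0, 1500.0, 151)          # daytime PAR levels
    r_inhibited = leaf_respiration(par)          # light-inhibited rate
    r_naive = np.full_like(par, 1.0)             # dark rate applied all day
    # Since daytime GPP = NEE + R, using r_naive instead of r_inhibited
    # inflates both respiration and GPP by the same absolute amount.
    gpp_inflation = r_naive - r_inhibited
    ```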

  10. RSQRT: AN HEURISTIC FOR ESTIMATING THE NUMBER OF CLUSTERS TO REPORT.

    PubMed

    Carlis, John; Bruso, Kelsey

    2012-03-01

    Clustering can be a valuable tool for analyzing large datasets, such as in e-commerce applications. Anyone who clusters must choose how many item clusters, K, to report. Unfortunately, one must guess at K or some related parameter. Elsewhere we introduced a strongly supported heuristic, RSQRT, which predicts K as a function of the attribute or item count, depending on attribute scales. We conducted a second analysis seeking confirmation of the heuristic, analyzing data sets from the UCI machine learning benchmark repository. For the 25 studies where sufficient detail was available, we again found strong support. Also, in a side-by-side comparison of 28 studies, the RSQRT-predicted K and the Bayesian information criterion (BIC)-predicted K are the same. RSQRT has a lower cost of O(log log n) versus O(n^2) for BIC, and is more widely applicable. Using RSQRT prospectively could be much better than merely guessing. PMID:22773923
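
    The abstract does not reproduce the RSQRT rule itself, so the sketch below shows only the comparison baseline it names: choosing K by the Bayesian information criterion for Gaussian mixtures, here via scikit-learn on synthetic three-cluster data.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(c, 0.3, size=(100, 2)) for c in (0.0, 3.0, 6.0)])

    # BIC-predicted K: fit a mixture for each candidate K, keep the minimizer.
    bic = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
           for k in range(1, 9)}
    k_bic = min(bic, key=bic.get)   # expected to recover K = 3 here
    ```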

  11. Oxidative dehydrogenation of cyclohexene on size selected subnanometer cobalt clusters: improved catalytic performance via evolution of cluster-assembled nanostructures.

    PubMed

    Lee, Sungsik; Di Vece, Marcel; Lee, Byeongdu; Seifert, Sönke; Winans, Randall E; Vajda, Stefan

    2012-07-14

    The catalytic activity of oxide-supported metal nanoclusters strongly depends on their size and support. In this study, the origin of morphology transformation and chemical-state changes during the oxidative dehydrogenation of cyclohexene was investigated in terms of metal-support interactions. Model catalyst systems were prepared by deposition of size-selected subnanometer Co(27±4) clusters on various metal oxide supports (Al(2)O(3), ZnO, TiO(2), and MgO). The oxidation state and reactivity of the supported cobalt clusters were investigated by temperature programmed reaction (TPRx) and in situ grazing incidence X-ray absorption (GIXAS) during oxidative dehydrogenation of cyclohexene, while sintering resistance was monitored with grazing incidence small-angle X-ray scattering (GISAXS). The activity and selectivity of the cobalt clusters show a strong dependence on the support. GIXAS reveals that the metal-support interaction plays a key role in the reaction. The most pronounced support effect is observed for MgO, where, during the course of the reaction, a nanoassembly that dynamically evolves in its activity, composition, and size is formed from the subnanometer cobalt clusters. PMID:22419008

  12. [An improved motion estimation of medical image series via wavelet transform].

    PubMed

    Zhang, Ying; Rao, Nini; Wang, Gang

    2006-10-01

    The compression of medical image series is very important in telemedicine, and motion estimation plays a key role in video sequence compression. In this paper, an improved square-diamond search (SDS) algorithm is proposed for the motion estimation of medical image series. The improved SDS algorithm reduces the number of searched points, and it is applied in the wavelet transform domain to estimate the motion of medical image series. A simulation experiment on digital subtraction angiography (DSA) was carried out. The experimental results show that the algorithm's accuracy in the motion estimation of medical image series is higher than that of other algorithms. PMID:17121333
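
    The paper's improved SDS is not reproduced in the abstract; as context, here is a minimal plus-pattern block matcher with a shrinking search step, in the spirit of (square-)diamond search, for two consecutive grayscale frames.

    ```python
    import numpy as np

    def sad(a, b):
        """Sum of absolute differences between two equal-sized blocks."""
        return np.abs(a.astype(np.int64) - b.astype(np.int64)).sum()

    def plus_search(ref, cur, y, x, bs=8, max_step=8):
        """Match the bs x bs block of `cur` at (y, x) against `ref` by
        repeatedly probing up/down/left/right, halving the step when no
        probe improves the cost. Returns the estimated (dy, dx) motion."""
        block = cur[y:y + bs, x:x + bs]
        dy = dx = 0
        best = sad(block, ref[y:y + bs, x:x + bs])
        step = max_step
        while step >= 1:
            moved = True
            while moved:
                moved = False
                for sy, sx in ((step, 0), (-step, 0), (0, step), (0, -step)):
                    ny, nx = y + dy + sy, x + dx + sx
                    if 0 <= ny <= ref.shape[0] - bs and 0 <= nx <= ref.shape[1] - bs:
                        cost = sad(block, ref[ny:ny + bs, nx:nx + bs])
                        if cost < best:
                            best, dy, dx, moved = cost, dy + sy, dx + sx, True
            step //= 2
        return dy, dx
    ```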

  13. Covariance specification and estimation to improve top-down Green House Gas emission estimates

    NASA Astrophysics Data System (ADS)

    Ghosh, S.; Lopez-Coto, I.; Prasad, K.; Whetstone, J. R.

    2015-12-01

    The National Institute of Standards and Technology (NIST) operates the North-East Corridor (NEC) project and the Indianapolis Flux Experiment (INFLUX) in order to develop measurement methods to quantify sources of greenhouse gas (GHG) emissions, as well as their uncertainties, in urban domains using a top-down inversion method. Top-down inversion updates prior knowledge using observations in a Bayesian way. One primary consideration in a Bayesian inversion framework is the covariance structure of (1) the emission prior residuals and (2) the observation residuals (i.e., the difference between observations and model-predicted observations). These covariance matrices are respectively referred to as the prior covariance matrix and the model-data mismatch covariance matrix. It is known that the choice of these covariances can have a large effect on estimates. The main objective of this work is to determine the impact of different covariance models on inversion estimates and their associated uncertainties in urban domains. We use a pseudo-data Bayesian inversion framework using footprints (i.e., sensitivities of tower measurements of GHGs to surface emissions) and emission priors (based on the Hestia project to quantify fossil-fuel emissions) to estimate posterior emissions under different covariance schemes. The posterior emission estimates and uncertainties are compared to the hypothetical truth. We find that, if we correctly specify the spatial variability and spatio-temporal variability in the prior and model-data mismatch covariances respectively, then we can compute more accurate posterior estimates. We discuss a few covariance models that introduce space-time interacting mismatches, along with estimation of the involved parameters. We then compare several candidate prior spatial covariance models from the Matern covariance class and estimate their parameters with specified mismatches. We find that the best-fitted prior covariances are not always best at recovering the truth. To achieve
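
    A minimal sketch of the estimation step described above, with a Matern 3/2 model as one candidate from the prior covariance class mentioned; all matrices here are toy placeholders, with H standing for the footprints, B for the prior covariance, and R for the model-data mismatch covariance.

    ```python
    import numpy as np

    def matern32(dists, sigma2, ell):
        """Matern(nu=3/2) covariance over a matrix of pairwise distances."""
        r = np.sqrt(3.0) * dists / ell
        return sigma2 * (1.0 + r) * np.exp(-r)

    def invert(x_prior, H, B, R, y):
        """Posterior mean and covariance of emissions x for y = H x + error."""
        S = H @ B @ H.T + R                     # innovation covariance
        K = B @ H.T @ np.linalg.inv(S)          # gain matrix
        x_post = x_prior + K @ (y - H @ x_prior)
        P_post = B - K @ H @ B
        return x_post, P_post
    ```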

  14. Estimating Missing Features to Improve Multimedia Information Retrieval

    SciTech Connect

    Bagherjeiran, A; Love, N S; Kamath, C

    2006-09-28

    Retrieval in a multimedia database usually involves combining information from different modalities of data, such as text and images. However, all modalities of the data may not be available to form the query. The retrieval results from such a partial query are often less than satisfactory. In this paper, we present an approach to complete a partial query by estimating the missing features in the query. Our experiments with a database of images and their associated captions show that, with an initial text-only query, our completion method has similar performance to a full query with both image and text features. In addition, when we use relevance feedback, our approach outperforms the results obtained using a full query.
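
    The abstract leaves the estimator unspecified; one plausible realization (an assumption, not necessarily the authors' method) completes a text-only query by regressing the missing image features on text features over the fully observed database.

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsRegressor

    rng = np.random.default_rng(0)
    text_db = rng.normal(size=(500, 64))     # text features of database items
    image_db = rng.normal(size=(500, 128))   # image features of the same items

    # Learn a text -> image feature mapping from the database, then fill in
    # the missing image half of a text-only query before retrieval.
    knn = KNeighborsRegressor(n_neighbors=10).fit(text_db, image_db)
    query_text = rng.normal(size=(1, 64))
    query_image_est = knn.predict(query_text)   # completed query
    ```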

  15. Does Integrating Family Planning into HIV Services Improve Gender Equitable Attitudes? Results from a Cluster Randomized Trial in Nyanza, Kenya.

    PubMed

    Newmann, Sara J; Rocca, Corinne H; Zakaras, Jennifer M; Onono, Maricianah; Bukusi, Elizabeth A; Grossman, Daniel; Cohen, Craig R

    2016-09-01

    This study investigated whether integrating family planning (FP) services into HIV care was associated with gender equitable attitudes among HIV-positive adults in western Kenya. Surveys were conducted with 480 women and 480 men obtaining HIV services from 18 clinics 1 year after the sites were randomized to integrated FP/HIV services (N = 12) or standard referral for FP (N = 6). We used multivariable regression, with generalized estimating equations to account for clustering, to assess whether gender attitudes (range 0-12) were associated with integrated care and with contraceptive use. Men at intervention sites had stronger gender equitable attitudes than those at control sites (adjusted mean difference in scores = 0.89, 95 % CI 0.03-1.74). Among women, attitudes did not differ by study arm. Gender equitable attitudes were not associated with contraceptive use among men (AOR = 1.06, 95 % CI 0.93-1.21) or women (AOR = 1.03, 95 % CI 0.94-1.13). Further work is needed to understand how integrating FP into HIV care affects gender relations, and how improved gender equity among men might be leveraged to improve contraceptive use and other reproductive health outcomes. PMID:26837632
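
    A sketch of the analysis family named above: a regression fitted with generalized estimating equations so that standard errors respect clinic-level clustering. The column names and synthetic data are hypothetical stand-ins for the survey variables.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 960
    df = pd.DataFrame({"clinic": rng.integers(0, 18, n),   # 18 sites
                       "arm": rng.integers(0, 2, n)})      # 1 = integrated care
    df["score"] = 6.0 + 0.9 * df["arm"] + rng.normal(0.0, 2.0, n)

    # Gender-attitude score (0-12) on trial arm, with an exchangeable
    # within-clinic correlation to account for the cluster randomization.
    model = smf.gee("score ~ arm", groups="clinic", data=df,
                    cov_struct=sm.cov_struct.Exchangeable(),
                    family=sm.families.Gaussian())
    print(model.fit().summary())
    ```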

  16. Strategies for Improved CALIPSO Aerosol Optical Depth Estimates

    NASA Technical Reports Server (NTRS)

    Vaughan, Mark A.; Kuehn, Ralph E.; Tackett, Jason L.; Rogers, Raymond R.; Liu, Zhaoyan; Omar, A.; Getzewich, Brian J.; Powell, Kathleen A.; Hu, Yongxiang; Young, Stuart A.; Avery, Melody A.; Winker, David M.; Trepte, Charles R.

    2010-01-01

    In the spring of 2010, the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) project will be releasing version 3 of its level 2 data products. In this paper we describe several changes to the algorithms and code that yield substantial improvements in CALIPSO's retrieval of aerosol optical depths (AOD). Among these are a retooled cloud-clearing procedure and a new approach to determining the base altitudes of aerosol layers in the planetary boundary layer (PBL). The results derived from these modifications are illustrated using case studies prepared using a late beta version of the level 2 version 3 processing code.

  17. Application of the 2013 Wilson-Devinney Program’s Direct Distance Estimation procedure and enhanced spot modeling capability to eclipsing binaries in star clusters

    NASA Astrophysics Data System (ADS)

    Milone, Eugene F.; Schiller, Stephen J.

    2014-06-01

    A paradigm method to calibrate a range of standard candles by means of well-calibrated photometry of eclipsing binaries in star clusters is the Direct Distance Estimation (DDE) procedure, contained in the 2010 and 2013 versions of the Wilson-Devinney light-curve modeling program. In particular, we are re-examining systems previously studied in our Binaries-in-Clusters program and analyzed with earlier versions of the Wilson-Devinney program. Earlier we reported on our use of the 2010 version of this program, which incorporates the DDE procedure to estimate the distance to an eclipsing system directly, as a system parameter, and is thus dependent on the data and analysis model alone. As such, the derived distance is accorded a standard error, independent of any additional assumptions or approximations that such analyses conventionally require. Additionally we have now made use of the 2013 version, which introduces temporal evolution of spots, an important improvement for systems containing variable active regions, as is the case for the systems we are studying currently, namely HD 27130 in the Hyades and DS And in NGC 752. Our work provides some constraints on the effects of spot treatment on distance determination of active systems.

  18. Effectiveness of Improvement Plans in Primary Care Practice Accreditation: A Clustered Randomized Trial

    PubMed Central

    Nouwens, Elvira; van Lieshout, Jan; Bouma, Margriet; Braspenning, Jozé; Wensing, Michel

    2014-01-01

    Background Accreditation of healthcare organizations is a widely used method to assess and improve quality of healthcare. Our aim was to determine the effectiveness of improvement plans in practice accreditation of primary care practices, focusing on cardiovascular risk management (CVRM). Method A two-arm cluster randomized controlled trial with a block design was conducted with measurements at baseline and follow-up. Primary care practices allocated to the intervention group (n = 22) were instructed to focus improvement plans during the intervention period on CVRM, while practices in the control group (n = 23) could focus on any domain except on CVRM and diabetes mellitus. Primary outcomes were systolic blood pressure <140 mmHg, LDL cholesterol <2.5 mmol/l and prescription of antiplatelet drugs. Secondary outcomes were 17 indicators of CVRM and physician's perceived goal attainment for the chosen improvement project. Results No effect was found on the primary outcomes. Blood pressure targets were reached in 39.8% of patients in the intervention and 38.7% of patients in the control group; cholesterol target levels were reached in 44.5% and 49.0% respectively; antiplatelet drugs were prescribed in 82.7% in both groups. Six secondary outcomes improved: smoking status, exercise control, diet control, registration of alcohol intake, measurement of waist circumference, and fasting glucose. Participants' perceived goal attainment was high in both arms: mean scores of 7.9 and 8.2 on the 10-point scale. Conclusions The focus of improvement plans on CVRM in the practice accreditation program led to some improvements of CVRM, but not on the primary outcomes. ClinicalTrials.gov NCT00791362 PMID:25463149

  19. An Adaptive Displacement Estimation Algorithm for Improved Reconstruction of Thermal Strain

    PubMed Central

    Ding, Xuan; Dutta, Debaditya; Mahmoud, Ahmed M.; Tillman, Bryan; Leers, Steven A.; Kim, Kang

    2014-01-01

    Thermal strain imaging (TSI) can be used to differentiate between lipid and water-based tissues in atherosclerotic arteries. However, detecting small lipid pools in vivo requires accurate and robust displacement estimation over a wide range of displacement magnitudes. Phase-shift estimators such as Loupas’ estimator and time-shift estimators like normalized cross-correlation (NXcorr) are commonly used to track tissue displacements. However, Loupas’ estimator is limited by phase-wrapping and NXcorr performs poorly when the signal-to-noise ratio (SNR) is low. In this paper, we present an adaptive displacement estimation algorithm that combines both Loupas’ estimator and NXcorr. We evaluated this algorithm using computer simulations and an ex-vivo human tissue sample. Using 1-D simulation studies, we showed that when the displacement magnitude induced by thermal strain was >λ/8 and the electronic system SNR was >25.5 dB, the NXcorr displacement estimate was less biased than the estimate found using Loupas’ estimator. On the other hand, when the displacement magnitude was ≤λ/4 and the electronic system SNR was ≤25.5 dB, Loupas’ estimator had less variance than NXcorr. We used these findings to design an adaptive displacement estimation algorithm. Computer simulations of TSI using Field II showed that the adaptive displacement estimator was less biased than either Loupas’ estimator or NXcorr. Strain reconstructed from the adaptive displacement estimates improved the strain SNR by 43.7–350% and the spatial accuracy by 1.2–23.0% (p < 0.001). An ex-vivo human tissue study provided results that were comparable to computer simulations. The results of this study showed that a novel displacement estimation algorithm, which combines two different displacement estimators, yielded improved displacement estimation and results in improved strain reconstruction. PMID:25585398
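
    The thresholds reported above suggest a simple selection rule; the sketch below implements that rule only (an inference from the abstract's findings, not necessarily the paper's exact decision logic), applied elementwise to the two candidate displacement maps.

    ```python
    import numpy as np

    def adaptive_displacement(d_loupas, d_nxcorr, snr_db, wavelength,
                              snr_thresh_db=25.5):
        """Prefer NXcorr where displacements are large and SNR is high
        (less biased there); fall back to the Loupas phase-shift estimate
        where displacements are small and SNR is low (lower variance)."""
        use_nxcorr = (np.abs(d_nxcorr) > wavelength / 8.0) & \
                     (snr_db > snr_thresh_db)
        return np.where(use_nxcorr, d_nxcorr, d_loupas)
    ```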

  20. Sample Size Estimation in Cluster Randomized Educational Trials: An Empirical Bayes Approach

    ERIC Educational Resources Information Center

    Rotondi, Michael A.; Donner, Allan

    2009-01-01

    The educational field has now accumulated an extensive literature reporting on values of the intraclass correlation coefficient, a parameter essential to determining the required size of a planned cluster randomized trial. We propose here a simple simulation-based approach including all relevant information that can facilitate this task. An…
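
    For context, the textbook link between the intraclass correlation and the required size of a cluster randomized trial is the design-effect inflation sketched below; the proposed empirical Bayes approach refines the ICC input rather than this formula.

    ```python
    def crt_sample_size(n_individual, cluster_size, icc):
        """Inflate the sample size of an individually randomized design by
        the design effect 1 + (m - 1) * ICC for clusters of size m."""
        design_effect = 1.0 + (cluster_size - 1.0) * icc
        return n_individual * design_effect

    # e.g. 400 subjects per arm, clusters of 25, ICC = 0.02 -> 592 subjects
    total = crt_sample_size(400, cluster_size=25, icc=0.02)
    ```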

  1. Improving a regional model using reduced complexity and parameter estimation

    USGS Publications Warehouse

    Kelson, Victor A.; Hunt, Randall J.; Haitjema, Henk M.

    2002-01-01

    The availability of powerful desktop computers and graphical user interfaces for ground water flow models makes possible the construction of ever more complex models. A proposed copper-zinc sulfide mine in northern Wisconsin offers a unique case in which the same hydrologic system has been modeled using a variety of techniques covering a wide range of sophistication and complexity. Early in the permitting process, simple numerical models were used to evaluate the necessary amount of water to be pumped from the mine, reductions in streamflow, and the drawdowns in the regional aquifer. More complex models have subsequently been used in an attempt to refine the predictions. Even after so much modeling effort, questions regarding the accuracy and reliability of the predictions remain. We have performed a new analysis of the proposed mine using the two-dimensional analytic element code GFLOW coupled with the nonlinear parameter estimation code UCODE. The new model is parsimonious, containing fewer than 10 parameters, and covers a region several times larger in areal extent than any of the previous models. The model demonstrates the suitability of analytic element codes for use with parameter estimation codes. The simplified model results are similar to the more complex models; predicted mine inflows and UCODE-derived 95% confidence intervals are consistent with the previous predictions. More important, the large areal extent of the model allowed us to examine hydrological features not included in the previous models, resulting in new insights about the effects that far-field boundary conditions can have on near-field model calibration and parameterization. In this case, the addition of surface water runoff into a lake in the headwaters of a stream while holding recharge constant moved a regional ground watershed divide and resulted in some of the added water being captured by the adjoining basin. Finally, a simple analytical solution was used to clarify the GFLOW model

  2. An Improved Source-Scanning Algorithm for Locating Earthquake Clusters or Aftershock Sequences

    NASA Astrophysics Data System (ADS)

    Liao, Y.; Kao, H.; Hsu, S.

    2010-12-01

    The Source-Scanning Algorithm (SSA) was originally introduced in 2004 to locate non-volcanic tremors. Its application was later expanded to the identification of earthquake rupture planes and the near-real-time detection and monitoring of landslides and mud/debris flows. In this study, we further improve SSA for the purpose of locating earthquake clusters or aftershock sequences when only a limited number of waveform observations are available. The main improvements include the application of a ground motion analyzer to separate P and S waves, the automatic determination of resolution based on the grid size and time step of the scanning process, and a modified brightness function that utilizes constraints from multiple phases. Specifically, the improved SSA (named ISSA) addresses two major issues related to locating earthquake clusters/aftershocks. The first is the massive amount of time and labour required to locate a large number of seismic events manually; the second is to efficiently and correctly identify the same phase across the entire recording array when multiple events occur closely in time and space. To test the robustness of ISSA, we generate synthetic waveforms consisting of three separate events such that the individual P and S phases arrive at different stations in different orders, making correct phase picking nearly impossible. Using these very complicated waveforms as the input, ISSA scans the entire model space for possible combinations of time and location for the existence of seismic sources. The scanning results successfully associate the various phases from each event at all stations and correctly recover the input. To further demonstrate the advantage of ISSA, we apply it to the waveform data collected by a temporary OBS array for the aftershock sequence of an offshore earthquake southwest of Taiwan. The overall signal-to-noise ratio is inadequate for locating small events; and the precise arrival times of P and S phases are difficult to
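
    The core of any SSA variant is a brightness function that stacks waveform amplitude at predicted arrival times over all stations; below is a generic single-phase sketch, not the paper's multi-phase ISSA version, with `travel_time` as an assumed user-supplied function.

    ```python
    import numpy as np

    def brightness(env, dt, station_xyz, grid, origin_times, travel_time):
        """env: (n_sta, n_samp) normalized waveform envelopes sampled at dt.
        For each candidate source location and origin time, average the
        envelope amplitude at each station's predicted arrival sample."""
        n_sta, n_samp = env.shape
        B = np.zeros((len(grid), len(origin_times)))
        for gi, src in enumerate(grid):
            tt = np.array([travel_time(src, sta) for sta in station_xyz])
            for ti, t0 in enumerate(origin_times):
                idx = np.round((t0 + tt) / dt).astype(int)
                ok = (idx >= 0) & (idx < n_samp)
                if ok.any():
                    B[gi, ti] = env[np.flatnonzero(ok), idx[ok]].mean()
        return B    # bright (high) cells mark likely sources
    ```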

  3. An improved method for nonlinear parameter estimation: a case study of the Rössler model

    NASA Astrophysics Data System (ADS)

    He, Wen-Ping; Wang, Liu; Jiang, Yun-Di; Wan, Shi-Quan

    2016-08-01

    Parameter estimation is an important research topic in nonlinear dynamics. Based on the evolutionary algorithm (EA), Wang et al. (2014) presented a new scheme for nonlinear parameter estimation, and numerical tests indicate that its estimation precision is satisfactory. However, the convergence rate of the EA is relatively slow when multiple unknown parameters in a multidimensional dynamical system are estimated simultaneously. To solve this problem, an improved method for parameter estimation of nonlinear dynamical equations is provided in the present paper. The main idea of the improved scheme is to use all of the known time series for all of the components of a dynamical system to estimate the parameters of a single component one by one, instead of estimating all of the parameters in all of the components simultaneously. Thus, we can estimate all of the parameters stage by stage. The performance of the improved method was tested using a classic chaotic system, the Rössler model. The numerical tests show that the amended parameter estimation scheme greatly improves the searching efficiency and that there is a significant increase in the convergence rate of the EA, particularly for multiparameter estimation in multidimensional dynamical equations. Moreover, the results indicate that the accuracy of parameter estimation and the CPU time consumed by the presented method have no obvious dependence on the sample size.
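
    The component-wise idea is easy to see on the Rössler system itself, where each unknown enters a single component equation; the sketch below replaces the evolutionary search with plain least squares on finite-difference derivatives purely for brevity.

    ```python
    import numpy as np

    # Rossler model: dx/dt = -y - z, dy/dt = x + a*y, dz/dt = b + z*(x - c).
    # With all three observed series available, a can be estimated from the
    # y-equation alone, then (b, c) from the z-equation, one stage at a time.
    def estimate_rossler_params(t, x, y, z):
        dydt = np.gradient(y, t)
        a = np.dot(y, dydt - x) / np.dot(y, y)           # dy/dt - x = a*y
        dzdt = np.gradient(z, t)
        design = np.column_stack([np.ones_like(z), -z])  # dz/dt - x*z = b - c*z
        (b, c), *_ = np.linalg.lstsq(design, dzdt - x * z, rcond=None)
        return float(a), float(b), float(c)
    ```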

  5. Ionospheric perturbation degree estimates for improving GNSS applications

    NASA Astrophysics Data System (ADS)

    Jakowski, Norbert; Mainul Hoque, M.; Wilken, Volker; Berdermann, Jens; Hlubek, Nikolai

    The ionosphere can adversely affect the accuracy, continuity, availability, and integrity of modern Global Navigation Satellite Systems (GNSS) in different ways. Hence, reliable information on key parameters describing the perturbation degree of the ionosphere is helpful for estimating the potential degradation of the performance of these systems. To guarantee the required safety level in aviation, Ground Based Augmentation Systems (GBAS) and Satellite Based Augmentation Systems (SBAS) have been established for detecting and mitigating ionospheric threats, in particular those due to ionospheric gradients. The paper reviews various attempts and capabilities to characterize the perturbation degree of the ionosphere currently being used in precise positioning and safety-of-life applications. Continuity and availability of signals are mainly impacted by amplitude and phase scintillations, characterized by indices such as S4 or phase noise. To characterize medium- and large-scale ionospheric perturbations that may seriously affect the accuracy and integrity of GNSS, the use of an internationally standardized Disturbance Ionosphere Index (DIX) is recommended. The definition of such a DIX must take practical needs into account; it should be an objective measure of ionospheric conditions and be easy and reproducible to compute. A preliminary DIX approach is presented and discussed. Such a robust and easily adaptable index should have great potential for use in operational ionospheric weather services and GNSS augmentation systems.

  6. Improving discharge estimates from routine river flow monitoring in Sweden

    NASA Astrophysics Data System (ADS)

    Capell, Rene; Arheimer, Berit

    2016-04-01

    The Swedish Meteorological and Hydrological Institute (SMHI) maintains a permanent river gauging network for national hydrological monitoring, which includes 263 gauging stations in Sweden. At all these stations, water levels are measured continuously, and discharges are computed through rating curves. The network represents a wide range of environmental settings, gauging measurement types, and gauging frequencies. Gauging frequencies are typically low compared with river gauges in more research-oriented settings, so uncertainties in discharges, particularly for extremes, can be large. On the other hand, many gauging stations have been in use for a very long time, with the oldest measurements dating back to 1900, and at least some exhibit very stable conditions. Here, we examine the variation in gauging stability across SMHI's network in order to identify more error-prone conditions. We investigate how the current, largely subjective, way of updating rating curves influences discharge estimates, and discuss ways forward towards a more objective evaluation of both discharge uncertainty and rating curve updating procedures.
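
    For readers outside hydrology: the rating curve in question is typically a fitted power law Q = a(h - h0)^b relating stage h to discharge Q, re-fitted as new gaugings arrive. A minimal sketch with synthetic gaugings (the values are illustrative only):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def rating(h, a, b, h0):
        """Power-law rating curve; the clip guards against h <= h0."""
        return a * np.clip(h - h0, 1e-6, None) ** b

    h_obs = np.array([0.8, 1.1, 1.5, 2.0, 2.6])     # gauged stage (m)
    q_obs = np.array([2.1, 5.0, 11.8, 24.5, 46.0])  # gauged discharge (m3/s)
    (a, b, h0), _ = curve_fit(rating, h_obs, q_obs, p0=(10.0, 2.0, 0.3))
    q_continuous = rating(np.array([1.3, 1.9]), a, b, h0)  # from stage record
    ```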

  7. Estimating the Power Characteristics of Clusters of Large Offshore Wind Farms

    NASA Astrophysics Data System (ADS)

    Drew, D.; Barlow, J. F.; Coceal, O.; Coker, P.; Brayshaw, D.; Lenaghan, D.

    2014-12-01

    The next phase of offshore wind projects in the UK focuses on the development of very large wind farms clustered within several allocated zones. However, this change in the distribution of wind capacity brings uncertainty for the operational planning of the power system. Firstly, there are concerns that concentrating large amounts of capacity in one area could reduce some of the benefits seen by spatially dispersing the turbines, such as the smoothing of the power generation variability. Secondly, wind farms of the scale planned are likely to influence the boundary layer sufficiently to impact the performance of adjacent farms, therefore the power generation characteristics of the clusters are largely unknown. The aim of this study is to use the Weather Research and Forecasting (WRF) model to investigate the power output of a cluster of offshore wind farms for a range of extreme events, taking into account the wake effects of the individual turbines and the neighbouring farms. Each wind farm in the cluster is represented as an elevated momentum sink and a source of turbulent kinetic energy using the WRF Wind Farm Parameterization. The research focuses on the Dogger Bank zone (located in the North Sea approximately 125 km off the East coast of the UK), which could have 7.2 GW of installed capacity across six separate wind farms. For this site, a 33 year reanalysis data set (MERRA, from NASA-GMAO) has been used to identify a series of extreme event case studies. These are characterised by either periods of persistent low (or high) wind speeds, or by rapid changes in power output. The latter could be caused by small changes in the wind speed inducing large changes in power output, very high winds prompting turbine shut down, or a change in the wind direction which shifts the wake effects of the neighbouring farms in the cluster and therefore changes the wind resource available.

  8. An improved Pearson's correlation proximity-based hierarchical clustering for mining biological association between genes.

    PubMed

    Booma, P M; Prabhakaran, S; Dhanalakshmi, R

    2014-01-01

    Microarray gene expression datasets have attracted great attention among molecular biologists, statisticians, and computer scientists. Data mining that extracts hidden and useful information from datasets can fail to identify the most significant biological associations between genes. A heuristic search for standard biological processes measures only gene expression level, threshold, and response time; it identifies and mines the best biological solution, but does not efficiently address the association process. To monitor higher rates of expression between genes, a hierarchical clustering model is proposed in which the biological associations between genes are measured simultaneously using an improved Pearson's correlation proximity measure (PCPHC). Additionally, the Seed Augment algorithm adopts average-linkage methods on rows and columns in order to expand a seed PCPHC model into a maximal global PCPHC (GL-PCPHC) model and to identify associations between the clusters. Moreover, GL-PCPHC applies a pattern-growing method to mine the PCPHC patterns. Compared with existing gene expression analyses, the PCPHC model achieves better performance. Experimental evaluations were conducted for the GL-PCPHC model with standard benchmark gene expression datasets extracted from the UCI repository and the GenBank database in terms of execution time, pattern size, significance level, biological association efficiency, and pattern quality. PMID:25136661
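
    As a baseline for comparison, the conventional construction the PCPHC proximity improves on is average-linkage hierarchical clustering under a Pearson-correlation distance, which takes a few lines with SciPy (synthetic expression matrix shown):

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.spatial.distance import pdist

    rng = np.random.default_rng(0)
    expr = rng.normal(size=(60, 20))        # 60 genes x 20 conditions

    d = pdist(expr, metric="correlation")   # 1 - Pearson r between gene rows
    Z = linkage(d, method="average")        # average linkage, as in the paper
    labels = fcluster(Z, t=5, criterion="maxclust")  # cut into 5 gene clusters
    ```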

  9. Improving modeled snow albedo estimates during the spring melt season

    NASA Astrophysics Data System (ADS)

    Malik, M. Jahanzeb; Velde, Rogier; Vekerdy, Zoltan; Su, Zhongbo

    2014-06-01

    Snow albedo influences the energy and water budgets of snow-covered land and is thus an important variable for calculating energy and water fluxes. Here, we quantify the performance of three existing snow albedo parameterizations under alpine, tundra, and prairie snow conditions when implemented in the Noah land surface model (LSM): Noah's default and those from the Biosphere-Atmosphere Transfer Scheme (BATS) and the Canadian Land Surface Scheme (CLASS) LSMs. The Noah LSM is forced with, and its output evaluated against, in situ measurements from seven sites in the U.S. and France. Comparison of the snow albedo simulations with the in situ measurements reveals that all three parameterizations overestimate snow albedo during springtime. An alternative snow albedo parameterization is introduced that adopts the shape of the variogram for optically thick snowpacks and decreases the albedo further for optically thin conditions by mixing the snow with the land surface (background) albedo as a function of snow depth. In comparison with the in situ measurements, the new parameterization improves the albedo simulation of alpine and tundra snowpacks and positively impacts the simulation of snow depth, snowmelt rate, and upward shortwave radiation. Improved model performance with the variogram-shaped parameterization cannot, however, be unambiguously detected for prairie snowpacks, which may be attributed to uncertainties associated with the simulation of snow density. An assessment of model performance for the Upper Colorado River Basin highlights that, with the variogram-shaped parameterization, Noah simulates more evapotranspiration and larger runoff peaks in spring, whereas summer runoff is lower.
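
    A sketch of the thin-snow treatment described above: blend a deep-snow albedo with the snow-free background albedo as snow depth shrinks. The exponential weight and the depth scale are assumed illustration choices, standing in for the paper's variogram-shaped form.

    ```python
    import numpy as np

    def effective_albedo(alpha_snow, alpha_background, depth_m, d_star=0.1):
        """Weight tends to 1 for optically thick snow, so the deep-snow
        albedo dominates; shallow packs are pulled toward the background."""
        w = 1.0 - np.exp(-np.asarray(depth_m) / d_star)
        return alpha_background + w * (alpha_snow - alpha_background)

    effective_albedo(0.85, 0.20, depth_m=[0.02, 0.10, 0.50])
    ```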

  10. Community Mobilization in Mumbai Slums to Improve Perinatal Care and Outcomes: A Cluster Randomized Controlled Trial

    PubMed Central

    More, Neena Shah; Bapat, Ujwala; Das, Sushmita; Alcock, Glyn; Patil, Sarita; Porel, Maya; Vaidya, Leena; Fernandez, Armida; Joshi, Wasundhara; Osrin, David

    2012-01-01

    Introduction Improving maternal and newborn health in low-income settings requires both health service and community action. Previous community initiatives have been predominantly rural, but India is urbanizing. While working to improve health service quality, we tested an intervention in which urban slum-dweller women's groups worked to improve local perinatal health. Methods and Findings A cluster randomized controlled trial in 24 intervention and 24 control settlements covered a population of 283,000. In each intervention cluster, a facilitator supported women's groups through an action learning cycle in which they discussed perinatal experiences, improved their knowledge, and took local action. We monitored births, stillbirths, and neonatal deaths, and interviewed mothers at 6 weeks postpartum. The primary outcomes described perinatal care, maternal morbidity, and extended perinatal mortality. The analysis included 18,197 births over 3 years from 2006 to 2009. We found no differences between trial arms in uptake of antenatal care, reported work, rest, and diet in later pregnancy, institutional delivery, early and exclusive breastfeeding, or care-seeking. The stillbirth rate was non-significantly lower in the intervention arm (odds ratio 0.86, 95% CI 0.60–1.22), and the neonatal mortality rate higher (1.48, 1.06–2.08). The extended perinatal mortality rate did not differ between arms (1.19, 0.90–1.57). We have no evidence that these differences could be explained by the intervention. Conclusions Facilitating urban community groups was feasible, and there was evidence of behaviour change, but we did not see population-level effects on health care or mortality. In cities with multiple sources of health care, but inequitable access to services, community mobilization should be integrated with attempts to deliver services for the poorest and most vulnerable, and with initiatives to improve quality of care in both public and private sectors. Trial registration