An Improved Cluster Richness Estimator
Rozo, Eduardo; Rykoff, Eli S.; Koester, Benjamin P.; McKay, Timothy; Hao, Jiangang; Evrard, August; Wechsler, Risa H.; Hansen, Sarah; Sheldon, Erin; Johnston, David; Becker, Matthew R.; Annis, James T.; Bleem, Lindsey; Scranton, Ryan; /Pittsburgh U.
2009-08-03
Minimizing the scatter between cluster mass and accessible observables is an important goal for cluster cosmology. In this work, we introduce a new matched filter richness estimator, and test its performance using the maxBCG cluster catalog. Our new estimator significantly reduces the variance in the L_X-richness relation, from σ²_ln L_X = (0.86 ± 0.02)² to σ²_ln L_X = (0.69 ± 0.02)². Relative to the maxBCG richness estimate, it also removes the strong redshift dependence of the richness scaling relations, and is significantly more robust to photometric and redshift errors. These improvements are largely due to our more sophisticated treatment of galaxy color data. We also demonstrate that the scatter in the L_X-richness relation depends on the aperture used to estimate cluster richness, and introduce a novel approach for optimizing said aperture which can be easily generalized to other mass tracers.
Improving lensing cluster mass estimate with flexion
NASA Astrophysics Data System (ADS)
Cardone, V. F.; Vicinanza, M.; Er, X.; Maoli, R.; Scaramella, R.
2016-11-01
Gravitational lensing has long been considered a valuable tool to determine the total mass of galaxy clusters. The shear profile, as inferred from the statistics of the ellipticity of background galaxies, allows us to probe the cluster intermediate and outer regions, thus determining the virial mass estimate. However, the mass sheet degeneracy and the need for a large number of background galaxies motivate the search for alternative tracers which can break the degeneracy among model parameters and hence improve the accuracy of the mass estimate. Lensing flexion, i.e. the third derivative of the lensing potential, has been suggested as a good answer to the above quest since it probes the details of the mass profile. We investigate here whether this is indeed the case, considering the joint use of weak lensing, magnification and flexion. We use a Fisher matrix analysis to forecast the relative improvement in the mass accuracy for different assumptions on the shear and flexion signal-to-noise (S/N) ratio, also varying the cluster mass, redshift, and ellipticity. It turns out that the error on the cluster mass may be reduced up to a factor of ˜2 for reasonable values of the flexion S/N ratio. As a general result, we find that the improvement in mass accuracy is larger for more flattened haloes, but extracting general trends is difficult because of the many parameters at play. We nevertheless find that flexion is as efficient as magnification in increasing the accuracy of both mass and concentration determination.
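The core Fisher-matrix step in the forecast above (independent probes add at the Fisher level; marginalized errors come from the inverse) can be sketched in a few lines of Python. The 2x2 matrices for a (mass, concentration) fit below are invented for illustration, not taken from the paper:

```python
import math

def inv2x2(F):
    # Invert a symmetric 2x2 matrix [[a, b], [b, d]].
    a, b, d = F[0][0], F[0][1], F[1][1]
    det = a * d - b * b
    return [[d / det, -b / det], [-b / det, a / det]]

def marginalized_error(F, i=0):
    # 1-sigma marginalized error on parameter i is sqrt of (F^-1)_ii.
    return math.sqrt(inv2x2(F)[i][i])

# Made-up Fisher matrices for (mass, concentration); NOT values from the paper.
F_shear = [[4.0, 1.5], [1.5, 2.0]]      # shear alone
F_flexion = [[3.0, -1.0], [-1.0, 1.5]]  # flexion alone

# Independent probes combine by adding their Fisher matrices.
F_joint = [[F_shear[i][j] + F_flexion[i][j] for j in range(2)] for i in range(2)]

err_shear = marginalized_error(F_shear)
err_joint = marginalized_error(F_joint)
print(err_shear / err_joint)  # factor by which the mass error shrinks
```

For these illustrative numbers the marginalized mass error shrinks by roughly half; the paper's ~2x figure corresponds to favorable flexion S/N.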
Validation tests of an improved kernel density estimation method for identifying disease clusters
NASA Astrophysics Data System (ADS)
Cai, Qiang; Rushton, Gerard; Bhaduri, Budhendra
2012-07-01
The spatial filter method, which belongs to the class of kernel density estimation methods, has been used to make morbidity and mortality maps in several recent studies. We propose improvements in the method to include spatially adaptive filters to achieve constant standard error of the relative risk estimates; a staircase weight method for weighting observations to reduce estimation bias; and a parameter selection tool to enhance disease cluster detection performance, measured by sensitivity, specificity, and false discovery rate. We test the performance of the method using Monte Carlo simulations of hypothetical disease clusters over a test area of four counties in Iowa. The simulations include different types of spatial disease patterns and high-resolution population distribution data. Results confirm that the new features of the spatial filter method do substantially improve its performance in realistic situations comparable to those where the method is likely to be used.
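The adaptive-filter idea (grow the filter until it covers a fixed population base, so the rate's standard error is roughly constant across the map) can be sketched as follows; the geometry, populations, and case counts are all made up:

```python
import math

def adaptive_rate(points, cases, pops, centre, min_pop):
    """Expand a circular filter around `centre` until the population inside
    reaches `min_pop`, then return cases/population. A sketch of the
    spatially adaptive filter idea; all names and data are illustrative."""
    order = sorted(range(len(points)),
                   key=lambda i: math.dist(centre, points[i]))
    pop = cas = 0.0
    for i in order:
        pop += pops[i]
        cas += cases[i]
        if pop >= min_pop:  # stop once the population base is large enough
            break
    return cas / pop

# Toy data: a dense corner and a sparse corner of a study area.
pts = [(0, 0), (0, 1), (1, 0), (5, 5), (6, 5)]
pops = [500, 400, 300, 50, 40]
cases = [5, 4, 3, 2, 1]

r_dense = adaptive_rate(pts, cases, pops, (0, 0), min_pop=600)
r_sparse = adaptive_rate(pts, cases, pops, (5, 5), min_pop=600)
print(r_dense, r_sparse)  # the sparse filter had to reach much farther
```

Because every filter ends up with a comparable denominator, rates in thinly populated areas are estimated from a wider neighborhood rather than from unstable small counts.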
Validation tests of an improved kernel density estimation method for identifying disease clusters
Cai, Qiang; Rushton, Gerald; Bhaduri, Budhendra L
2011-01-01
The spatial filter method, which belongs to the class of kernel density estimation methods, has been used to make morbidity and mortality maps in several recent studies. We propose improvements in the method that include a spatial basis of support designed to give a constant standard error for the standardized mortality/morbidity rate; a stair-case weight method for weighting observations to reduce estimation bias; and a method for selecting parameters to control three measures of performance of the method: sensitivity, specificity and false discovery rate. We test the performance of the method using Monte Carlo simulations of hypothetical disease clusters over a test area of four counties in Iowa. The simulations include different types of spatial disease patterns and high resolution population distribution data. Results confirm that the new features of the spatial filter method do substantially improve its performance in realistic situations comparable to those where the method is likely to be used.
A nonparametric clustering technique which estimates the number of clusters
NASA Technical Reports Server (NTRS)
Ramey, D. B.
1983-01-01
In applications of cluster analysis, one usually needs to determine the number of clusters, K, and the assignment of observations to each cluster. A clustering technique based on recursive application of a multivariate test of bimodality which automatically estimates both K and the cluster assignments is presented.
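A univariate toy version of the recursion is easy to write down: test a sample for bimodality, split at the most conspicuous gap if the test fires, and recurse, so the number of leaves is the estimate of K. Sarle's bimodality coefficient here is a crude stand-in for the multivariate test used in the paper:

```python
import statistics

def bimodality_coefficient(x):
    # Sarle's bimodality coefficient: (skewness^2 + 1) / kurtosis.
    n = len(x)
    m = statistics.fmean(x)
    s2 = sum((v - m) ** 2 for v in x) / n
    skew = sum((v - m) ** 3 for v in x) / (n * s2 ** 1.5)
    kurt = sum((v - m) ** 4 for v in x) / (n * s2 ** 2)
    return (skew ** 2 + 1) / kurt

def recursive_split(x, threshold=5 / 9):
    """Recursively split sorted 1-D data at the widest gap while the sample
    looks bimodal; the leaves are the clusters, so K is estimated for free.
    5/9 is the coefficient's value for a uniform distribution."""
    x = sorted(x)
    if len(x) < 4 or bimodality_coefficient(x) <= threshold:
        return [x]
    gaps = [x[i + 1] - x[i] for i in range(len(x) - 1)]
    cut = gaps.index(max(gaps)) + 1
    return recursive_split(x[:cut], threshold) + recursive_split(x[cut:], threshold)

data = [1.0, 4.9, 0.9, 5.0, 1.0, 5.1, 1.1, 5.0, 1.0, 5.0]
clusters = recursive_split(data)
print(len(clusters))  # estimated K
```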
Horney, Jennifer; Zotti, Marianne E.; Williams, Amy; Hsia, Jason
2015-01-01
Introduction and Background Women of reproductive age, in particular women who are pregnant or fewer than 6 months postpartum, are uniquely vulnerable to the effects of natural disasters, which may create stressors for caregivers, limit access to prenatal/postpartum care, or interrupt contraception. Traditional approaches (e.g., newborn records, community surveys) to survey women of reproductive age about unmet needs may not be practical after disasters. Finding pregnant or postpartum women is especially challenging because fewer than 5% of women of reproductive age are pregnant or postpartum at any time. Methods From 2009 to 2011, we conducted three pilots of a sampling strategy that aimed to increase the proportion of pregnant and postpartum women of reproductive age who were included in postdisaster reproductive health assessments in Johnston County, North Carolina, after tornadoes, Cobb/Douglas Counties, Georgia, after flooding, and Bertie County, North Carolina, after hurricane-related flooding. Results Using this method, the percentage of pregnant and postpartum women interviewed in each pilot increased from 0.06% to 21%, 8% to 19%, and 9% to 17%, respectively. Conclusion and Discussion Two-stage cluster sampling with referral can be used to increase the proportion of pregnant and postpartum women included in a postdisaster assessment. This strategy may be a promising way to assess unmet needs of pregnant and postpartum women in disaster-affected communities. PMID:22365134
Cooper, Glinda S.; Bynum, Milele L.K.; Somers, Emily C.
2009-01-01
Previous studies have estimated a prevalence of a broad grouping of autoimmune diseases of 3.2%, based on literature review of studies published between 1965 and 1995, and 5.3%, based on national hospitalization registry data in Denmark. We examine more recent studies pertaining to the prevalence of 29 autoimmune diseases, and use these data to correct for the underascertainment of some diseases in the hospitalization registry data. This analysis results in an estimated prevalence of 7.6–9.4%, depending on the size of the correction factor used. The rates for most diseases for which data are available from many geographic regions span overlapping ranges. We also review studies of the co-occurrence of diseases within individuals and within families, focusing on specific pairs of diseases to better distinguish patterns that may result in insights pertaining to shared etiological pathways. Overall, data support a tendency for autoimmune diseases to co-occur at greater than expected rates within proband patients and their families, but this does not appear to be a uniform phenomenon across all diseases. Multiple sclerosis and rheumatoid arthritis constitute one disease pair that appears to have a decreased chance of coexistence. PMID:19819109
Attitude Estimation in Fractionated Spacecraft Cluster Systems
NASA Technical Reports Server (NTRS)
Hadaegh, Fred Y.; Blackmore, James C.
2011-01-01
Attitude estimation was examined for fractionated free-flying spacecraft. Instead of a single, monolithic spacecraft, a fractionated free-flying spacecraft uses multiple spacecraft modules. These modules are connected only through wireless communication links and, potentially, wireless power links. The key advantage of this concept is the ability to respond to uncertainty. For example, if a single spacecraft module in the cluster fails, a new one can be launched at a lower cost and risk than would be incurred with on-orbit servicing or replacement of the monolithic spacecraft. In order to create such a system, however, it is essential to know what the navigation capabilities of the fractionated system are as a function of the capabilities of the individual modules, and to have an algorithm that can estimate the attitudes and relative positions of the modules with fractionated sensing capabilities. Looking specifically at fractionated attitude estimation with star trackers and optical relative attitude sensors, a set of mathematical tools has been developed that specifies the set of sensors necessary to ensure that the attitude of the entire cluster (the "cluster attitude") can be observed. Also developed was a navigation filter that can estimate the cluster attitude if these conditions are satisfied. Each module in the cluster may have either a star tracker, a relative attitude sensor, or both. An extended Kalman filter can be used to estimate the attitude of all modules. A range of estimation performances can be achieved depending on the sensors used and the topology of the sensing network.
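One way to picture the sensor-topology condition is as a graph property. The sketch below encodes a heuristic reading of the abstract (each connected component of the relative-sensing graph needs an absolute attitude reference), not the paper's formal observability criterion:

```python
def cluster_attitude_observable(has_star_tracker, rel_links):
    """One heuristic reading of the observability condition sketched in the
    abstract: the cluster attitude is recoverable if every connected component
    of the relative-attitude sensing graph contains at least one module with a
    star tracker. This is an illustration, not the paper's formal criterion."""
    n = len(has_star_tracker)
    adj = {i: set() for i in range(n)}
    for a, b in rel_links:
        adj[a].add(b)
        adj[b].add(a)
    seen = set()
    for start in range(n):
        if start in seen:
            continue
        comp, stack = set(), [start]  # depth-first walk of one component
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u])
        seen |= comp
        if not any(has_star_tracker[i] for i in comp):
            return False
    return True

# Module 0 carries the only star tracker; relative sensors chain 0-1-2.
print(cluster_attitude_observable([True, False, False], [(0, 1), (1, 2)]))
# Break the chain and module 2's attitude can no longer be anchored.
print(cluster_attitude_observable([True, False, False], [(0, 1)]))
```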
Optimizing weak lensing mass estimates for cluster profile uncertainty
Gruen, D.; Bernstein, G. M.; Lam, T. Y.; Seitz, S.
2011-09-11
Weak lensing measurements of cluster masses are necessary for calibrating mass-observable relations (MORs) to investigate the growth of structure and the properties of dark energy. However, the measured cluster shear signal varies at fixed mass M_200m due to inherent ellipticity of background galaxies, intervening structures along the line of sight, and variations in the cluster structure due to scatter in concentrations, asphericity and substructure. We use N-body simulated halos to derive and evaluate a weak lensing circular aperture mass measurement M_ap that minimizes the mass estimate variance <(M_ap - M_200m)^2> in the presence of all these forms of variability. Depending on halo mass and observational conditions, the resulting mass estimator improves on M_ap filters optimized for circular NFW-profile clusters in the presence of uncorrelated large scale structure (LSS) about as much as the latter improve on an estimator that only minimizes the influence of shape noise. Optimizing for uncorrelated LSS while ignoring the variation of internal cluster structure puts too much weight on the profile near the cores of halos, and under some circumstances can even be worse than not accounting for LSS at all. As a result, we discuss the impact of variability in cluster structure and correlated structures on the design and performance of weak lensing surveys intended to calibrate cluster MORs.
Nagwani, Naresh Kumar; Deo, Shirish V.
2014-01-01
Understanding of the compressive strength of concrete is important for activities like construction arrangement, prestressing operations, proportioning new mixtures and for quality assurance. Regression techniques are most widely used for prediction tasks where the relationship between the independent variables and the dependent (prediction) variable is identified. The accuracy of regression techniques for prediction can be improved if clustering is used along with regression, since clustering ensures a more accurate curve fit between the dependent and independent variables. In this work the cluster regression technique is applied for estimating the compressive strength of concrete, and a novel state-of-the-art approach is proposed for predicting the concrete compressive strength. The objective of this work is to demonstrate that clustering along with regression ensures lower prediction errors for estimating the concrete compressive strength. The proposed technique consists of two major stages: in the first stage, clustering is used to group concrete data with similar characteristics, and then in the second stage regression techniques are applied over these clusters (groups) to predict the compressive strength from individual clusters. It is found from experiments that clustering along with regression techniques gives minimum errors for predicting the compressive strength of concrete; also, the fuzzy clustering algorithm C-means performs better than the K-means algorithm. PMID:25374939
Cross-Clustering: A Partial Clustering Algorithm with Automatic Estimation of the Number of Clusters
Tellaroli, Paola; Bazzi, Marco; Donato, Michele; Brazzale, Alessandra R.; Drăghici, Sorin
2016-01-01
Four of the most common limitations of the many available clustering methods are: i) the lack of a proper strategy to deal with outliers; ii) the need for a good a priori estimate of the number of clusters to obtain reasonable results; iii) the lack of a method able to detect when partitioning of a specific data set is not appropriate; and iv) the dependence of the result on the initialization. Here we propose Cross-clustering (CC), a partial clustering algorithm that overcomes these four limitations by combining the principles of two well established hierarchical clustering algorithms: Ward’s minimum variance and Complete-linkage. We validated CC by comparing it with a number of existing clustering methods, including Ward’s and Complete-linkage. We show on both simulated and real datasets, that CC performs better than the other methods in terms of: the identification of the correct number of clusters, the identification of outliers, and the determination of real cluster memberships. We used CC to cluster samples in order to identify disease subtypes, and on gene profiles, in order to determine groups of genes with the same behavior. Results obtained on a non-biological dataset show that the method is general enough to be successfully used in such diverse applications. The algorithm has been implemented in the statistical language R and is freely available from the CRAN contributed packages repository. PMID:27015427
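One of the two ingredients, complete-linkage agglomeration, is compact enough to sketch, with a distance cutoff so that isolated points stay unassigned, echoing the partial-clustering idea. This is an illustration, not the CC algorithm itself:

```python
import math

def complete_linkage(points, cutoff):
    """Agglomerative clustering with complete linkage: repeatedly merge the
    two clusters whose farthest members are closest, stopping at `cutoff`.
    Points left in singletons act like the unassigned outliers of a partial
    clustering. A sketch of one ingredient of CC, not the CC algorithm."""
    clusters = [[p] for p in points]

    def dmax(a, b):  # complete-linkage distance between two clusters
        return max(math.dist(p, q) for p in a for q in b)

    while len(clusters) > 1:
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: dmax(clusters[ij[0]], clusters[ij[1]]))
        if dmax(clusters[i], clusters[j]) > cutoff:
            break
        clusters[i] += clusters.pop(j)
    return clusters

pts = [(0, 0), (0.2, 0.1), (0.1, 0.3), (5, 5), (5.2, 5.1), (20, 20)]
result = complete_linkage(pts, cutoff=1.0)
print(sorted(len(c) for c in result))  # the far-away point stays a singleton
```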
Improving clustering with metabolic pathway data
2014-01-01
Background It is a common practice in bioinformatics to validate each group returned by a clustering algorithm through manual analysis, according to a priori biological knowledge. This procedure helps finding functionally related patterns to propose hypotheses for their behavior and the biological processes involved. Therefore, this knowledge is used only as a second step, after the data have been clustered according to their expression patterns. Thus, it could be very useful to improve the clustering of biological data by incorporating prior knowledge into the cluster formation itself, in order to enhance the biological value of the clusters. Results A novel training algorithm for clustering is presented, which evaluates the biological internal connections of the data points while the clusters are being formed. Within this training algorithm, the calculation of distances among data points and neuron centroids includes a new term based on information from well-known metabolic pathways. The standard self-organizing map (SOM) training versus the biologically-inspired SOM (bSOM) training were tested with two real data sets of transcripts and metabolites from Solanum lycopersicum and Arabidopsis thaliana species. Classical data mining validation measures were used to evaluate the clustering solutions obtained by both algorithms. Moreover, a new measure that takes into account the biological connectivity of the clusters was applied. The results of bSOM show important improvements in the convergence and performance of the proposed clustering method in comparison to standard SOM training, in particular from the application point of view. Conclusions Analyses of the clusters obtained with bSOM indicate that including biological information during training can certainly increase the biological value of the clusters found with the proposed method. It is worth highlighting that this has effectively improved the results, which can simplify their further analysis.
Open-cluster density profiles derived using a kernel estimator
NASA Astrophysics Data System (ADS)
Seleznev, Anton F.
2016-03-01
Surface and spatial radial density profiles in open clusters are derived using a kernel estimator method. Formulae are obtained for the contribution of every star into the spatial density profile. The evaluation of spatial density profiles is tested against open-cluster models from N-body experiments with N = 500. Surface density profiles are derived for seven open clusters (NGC 1502, 1960, 2287, 2516, 2682, 6819 and 6939) using Two-Micron All-Sky Survey data and for different limiting magnitudes. The selection of an optimal kernel half-width is discussed. It is shown that open-cluster radius estimates hardly depend on the kernel half-width. Hints of stellar mass segregation and structural features indicating cluster non-stationarity in the regular force field are found. A comparison with other investigations shows that the data on open-cluster sizes are often underestimated. The existence of an extended corona around the open cluster NGC 6939 was confirmed. A combined function composed of the King density profile for the cluster core and the uniform sphere for the cluster corona is shown to be a better approximation of the surface radial density profile. The King function alone does not reproduce surface density profiles of sample clusters properly. The number of stars, the cluster masses and the tidal radii in the Galactic gravitational field for the sample clusters are estimated. It is shown that NGC 6819 and 6939 are extended beyond their tidal surfaces.
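The per-star kernel idea can be illustrated for a surface density profile: each star contributes a finite-width kernel at its clustercentric radius, normalized by ring area. The Epanechnikov kernel and toy radii below are illustrative choices, not the paper's exact formulae:

```python
import math

def surface_density(radii, r_eval, h):
    """Kernel estimate of a projected radial density profile: each star at
    clustercentric radius r_i contributes an Epanechnikov kernel of half-width
    h, normalized by the ring circumference. A rough stand-in for the
    per-star contribution formulae derived in the paper."""
    def kernel(u):
        return 0.75 * (1 - u * u) if abs(u) < 1 else 0.0
    out = []
    for r in r_eval:
        dens = sum(kernel((r - ri) / h) / h for ri in radii)
        out.append(dens / (2 * math.pi * max(r, h / 2)))  # guard r -> 0
    return out

# Toy cluster: most stars at small radii, a few in an extended halo.
radii = [0.1, 0.15, 0.2, 0.25, 0.3, 0.8, 1.2, 2.0]
profile = surface_density(radii, r_eval=[0.2, 1.0, 2.0], h=0.3)
print(profile)  # density falls with radius
```

The abstract's point about the kernel half-width corresponds to the choice of `h`: too small and the profile is noisy, too large and real structure (e.g. a corona) is smeared out.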
Memory color assisted illuminant estimation through pixel clustering
NASA Astrophysics Data System (ADS)
Zhang, Heng; Quan, Shuxue
2010-01-01
The under-constrained nature of illuminant estimation means that, in order to resolve the problem, certain assumptions are needed, such as the gray world theory. Including more constraints in this process may help explore the useful information in an image and improve the accuracy of the estimated illuminant, provided that the constraints hold. Based on the observation that most personal images contain one or more of the following categories: neutral objects, human beings, sky, and plants, we propose a method for illuminant estimation through the clustering of pixels of gray and three dominant memory colors: skin tone, sky blue, and foliage green. Analysis shows that samples of the above colors cluster around small areas under different illuminants and that their characteristics can be used to effectively detect pixels falling into each of the categories. The algorithm requires knowledge of the spectral sensitivity response of the camera, and a spectral database consisting of the CIE standard illuminants and reflectance or radiance databases of samples of the above colors.
Estimating the number of clusters via system evolution for cluster analysis of gene expression data.
Wang, Kaijun; Zheng, Jie; Zhang, Junying; Dong, Jiyang
2009-09-01
The estimation of the number of clusters (NC) is one of the crucial problems in the cluster analysis of gene expression data. Most available approaches give their answers without intuitive information about the separable degrees between clusters. However, this information is useful for understanding cluster structures. To provide this information, we propose the system evolution (SE) method to estimate the NC, based on the partitioning around medoids (PAM) clustering algorithm. SE analyzes the cluster structure of a dataset from the viewpoint of a pseudo-thermodynamic system. The system goes to its stable equilibrium state, at which the optimal NC is found, via its partitioning process and merging process. The experimental results on simulated and real gene expression data demonstrate that SE works well both on data with well-separated clusters and on data with slightly overlapping clusters. PMID:19527960
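A minimal stand-in for this kind of NC estimation, using mean silhouette width as the separability score instead of the SE criterion, looks like this (1-D data, standard library only):

```python
import statistics

def silhouette(data, labels):
    """Mean silhouette width of a 1-D partition (stdlib sketch). Used here as
    a simple separability score standing in for the system-evolution
    criterion, which likewise asks how well clusters separate."""
    total = 0.0
    for i, x in enumerate(data):
        same = [abs(x - y) for j, y in enumerate(data)
                if labels[j] == labels[i] and j != i]
        a = statistics.fmean(same) if same else 0.0
        b = min(statistics.fmean([abs(x - y) for j, y in enumerate(data)
                                  if labels[j] == lab])
                for lab in set(labels) if lab != labels[i])
        total += (b - a) / max(a, b)
    return total / len(data)

def split_by_gaps(data, k):
    # Partition sorted 1-D data at its k-1 widest gaps.
    xs = sorted(data)
    widest = sorted(range(len(xs) - 1), key=lambda i: xs[i + 1] - xs[i])[-(k - 1):]
    cuts = sorted(i + 1 for i in widest) + [len(xs)]
    labels, prev = [], 0
    for c, cut in enumerate(cuts):
        labels += [c] * (cut - prev)
        prev = cut
    return xs, labels

data = [1.0, 1.2, 0.8, 5.0, 5.3, 4.8, 9.0, 9.2, 8.9]
best_k = max(range(2, 5), key=lambda k: silhouette(*split_by_gaps(data, k)))
print(best_k)
```

The chosen K is the candidate whose partition separates best, which mirrors the SE idea of stopping at the configuration where clusters are most cleanly apart.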
Improved metabolite profile smoothing for flux estimation.
Dromms, Robert A; Styczynski, Mark P
2015-09-01
As genome-scale metabolic models become more sophisticated and dynamic, one significant challenge in using these models is to effectively integrate increasingly prevalent systems-scale metabolite profiling data into them. One common data processing step when integrating metabolite data is to smooth experimental time course measurements: the smoothed profiles can be used to estimate metabolite accumulation (derivatives), and thus the flux distribution of the metabolic model. However, this smoothing step is susceptible to the (often significant) noise in experimental measurements, limiting the accuracy of downstream model predictions. Here, we present several improvements to current approaches for smoothing metabolite time course data using defined functions. First, we use a biologically-inspired mathematical model function taken from transcriptional profiling and clustering literature that captures the dynamics of many biologically relevant transient processes. We demonstrate that it is competitive with, and often superior to, previously described fitting schemas, and may serve as an effective single option for data smoothing in metabolic flux applications. We also implement a resampling-based approach to buffer out sensitivity to specific data sets and allow for more accurate fitting of noisy data. We found that this method, as well as the addition of parameter space constraints, yielded improved estimates of concentrations and derivatives (fluxes) in previously described fitting functions. These methods have the potential to improve the accuracy of existing and future dynamic metabolic models by allowing for the more effective integration of metabolite profiling data.
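The smooth-then-differentiate pipeline can be sketched with a plain quadratic least-squares fit standing in for the paper's impulse-type fitting function; the time course below is invented:

```python
def polyfit2(ts, ys):
    """Least-squares quadratic fit y ~ a + b*t + c*t^2 via the normal
    equations. A deliberately simple stand-in for the impulse-type fitting
    function in the paper; the point is smooth, then differentiate."""
    S = [sum(t ** k for t in ts) for k in range(5)]
    A = [[S[0], S[1], S[2]], [S[1], S[2], S[3]], [S[2], S[3], S[4]]]
    rhs = [sum(y * t ** k for t, y in zip(ts, ys)) for k in range(3)]
    for col in range(3):  # Gaussian elimination with partial pivoting
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            A[r] = [x - f * y for x, y in zip(A[r], A[col])]
            rhs[r] -= f * rhs[col]
    beta = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):  # back substitution
        beta[r] = (rhs[r] - sum(A[r][c] * beta[c] for c in range(r + 1, 3))) / A[r][r]
    return beta

# Noisy measurements of a metabolite that rises and then turns over.
ts = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
ys = [1.05, 1.9, 2.4, 2.95, 3.1, 2.9, 2.55]
a, b, c = polyfit2(ts, ys)

def flux(t):
    return b + 2 * c * t  # derivative of the smoothed profile

print(flux(0.5), flux(3.0))  # accumulation early, consumption late
```

Differentiating the fitted curve, rather than the raw points, is what keeps measurement noise from being amplified into the flux estimates.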
NASA Astrophysics Data System (ADS)
Bo, Yizhou; Shifa, Naima
2013-09-01
An estimator for finding the abundance of a rare, clustered and mobile population has been introduced. This model is based on adaptive cluster sampling (ACS) to identify the location of the population and negative binomial distribution to estimate the total in each site. To identify the location of the population we consider both sampling with replacement (WR) and sampling without replacement (WOR). Some mathematical properties of the model are also developed.
Accuracy in parameter estimation in cluster randomized designs.
Pornprasertmanit, Sunthud; Schneider, W Joel
2014-09-01
When planning to conduct a study, not only is it important to select a sample size that will ensure adequate statistical power, often it is important to select a sample size that results in accurate effect size estimates. In cluster-randomized designs (CRD), such planning presents special challenges. In CRD studies, instead of assigning individual objects to treatment conditions, objects are grouped in clusters, and these clusters are then assigned to different treatment conditions. Sample size in CRD studies is a function of 2 components: the number of clusters and the cluster size. Planning to conduct a CRD study is difficult because 2 distinct sample size combinations might be associated with similar costs but can result in dramatically different levels of statistical power and accuracy in effect size estimation. Thus, we present a method that assists researchers in finding the least expensive sample size combination that still results in adequate accuracy in effect size estimation. Alternatively, if researchers have a fixed budget, they can select the sample size combination that results in the most precise estimate of effect size. A free computer program that automates these procedures is available. PMID:25046449
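The cluster-size trade-off follows from the usual design-effect formula: with J clusters of size m per arm and intraclass correlation rho, the effect-size standard error scales as sqrt(2(1 + (m - 1)rho)/(Jm)). Two hypothetical designs with identical total N illustrate the point:

```python
import math

def se_effect(n_clusters_per_arm, cluster_size, icc, sigma=1.0):
    """Approximate standard error of a standardized treatment effect in a
    cluster-randomized design: sqrt(2 * sigma^2 * (1 + (m - 1) * rho) / (J * m)).
    Textbook design-effect approximation assuming equal cluster sizes."""
    J, m, rho = n_clusters_per_arm, cluster_size, icc
    return math.sqrt(2 * sigma ** 2 * (1 + (m - 1) * rho) / (J * m))

# Two designs with the same total N = 400 per arm but different structure:
few_big = se_effect(n_clusters_per_arm=8, cluster_size=50, icc=0.05)
many_small = se_effect(n_clusters_per_arm=40, cluster_size=10, icc=0.05)
print(round(few_big, 3), round(many_small, 3))  # 0.131 vs 0.085
```

Same budget in participants, markedly different precision: exactly the situation the abstract describes, where two sample size combinations of similar cost differ sharply in accuracy.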
A hierarchical clustering methodology for the estimation of toxicity.
Martin, Todd M; Harten, Paul; Venkatapathy, Raghuraman; Das, Shashikala; Young, Douglas M
2008-01-01
A quantitative structure-activity relationship (QSAR) methodology based on hierarchical clustering was developed to predict toxicological endpoints. This methodology utilizes Ward's method to divide a training set into a series of structurally similar clusters. The structural similarity is defined in terms of 2-D physicochemical descriptors (such as connectivity and E-state indices). A genetic algorithm-based technique is used to generate statistically valid QSAR models for each cluster (using the pool of descriptors described above). The toxicity for a given query compound is estimated using the weighted average of the predictions from the closest cluster from each step in the hierarchical clustering assuming that the compound is within the domain of applicability of the cluster. The hierarchical clustering methodology was tested using a Tetrahymena pyriformis acute toxicity data set containing 644 chemicals in the training set and with two prediction sets containing 339 and 110 chemicals. The results from the hierarchical clustering methodology were compared to the results from several different QSAR methodologies.
IMPROVING BIOGENIC EMISSION ESTIMATES WITH SATELLITE IMAGERY
This presentation will review how existing and future applications of satellite imagery can improve the accuracy of biogenic emission estimates. Existing applications of satellite imagery to biogenic emission estimates have focused on characterizing land cover. Vegetation dat...
Improved Ant Colony Clustering Algorithm and Its Performance Study.
Gao, Wei
2016-01-01
Clustering analysis is used in many disciplines and applications; it is an important tool that descriptively identifies homogeneous groups of objects based on attribute values. The ant colony clustering algorithm is a swarm-intelligent method used for clustering problems that is inspired by the behavior of ant colonies that cluster their corpses and sort their larvae. A new abstraction ant colony clustering algorithm using a data combination mechanism is proposed to improve the computational efficiency and accuracy of the ant colony clustering algorithm. The abstraction ant colony clustering algorithm is used to cluster benchmark problems, and its performance is compared with the ant colony clustering algorithm and other methods used in existing literature. Based on similar computational difficulties and complexities, the results show that the abstraction ant colony clustering algorithm produces results that are not only more accurate but also more efficiently determined than the ant colony clustering algorithm and the other methods. Thus, the abstraction ant colony clustering algorithm can be used for efficient multivariate data clustering. PMID:26839533
Improving clustering by imposing network information
Gerber, Susanne; Horenko, Illia
2015-01-01
Cluster analysis is one of the most popular data analysis tools in a wide range of applied disciplines. We propose and justify a computationally efficient and straightforward-to-implement way of imposing the available information from networks/graphs (a priori available in many application areas) on a broad family of clustering methods. The introduced approach is illustrated on the problem of a noninvasive unsupervised brain signal classification. This task is faced with several challenging difficulties such as nonstationary noisy signals and a small sample size, combined with a high-dimensional feature space and huge noise-to-signal ratios. Applying this approach results in an exact unsupervised classification of very short signals, opening new possibilities for clustering methods in the area of a noninvasive brain-computer interface. PMID:26601225
Biases on cosmological parameter estimators from galaxy cluster number counts
Penna-Lima, M.; Wuensche, C.A.; Makler, M.
2014-05-01
Sunyaev-Zel'dovich (SZ) surveys are promising probes of cosmology — in particular for Dark Energy (DE) — given their ability to find distant clusters and provide estimates for their mass. However, current SZ catalogs contain tens to hundreds of objects and maximum likelihood estimators may present biases for such sample sizes. In this work we study estimators from cluster abundance for some cosmological parameters, in particular the DE equation of state parameter w_0, the amplitude of density fluctuations σ_8, and the Dark Matter density parameter Ω_c. We begin by deriving an unbinned likelihood for cluster number counts, showing that it is equivalent to the one commonly used in the literature. We use the Monte Carlo approach to determine the presence of bias using this likelihood and study its behavior with both the area and depth of the survey, and the number of cosmological parameters fitted. Our fiducial models are based on the South Pole Telescope (SPT) SZ survey. Assuming perfect knowledge of mass and redshift, some estimators have non-negligible biases. For example, the bias of σ_8 corresponds to about 40% of its statistical error bar when fitted together with Ω_c and w_0. Including a SZ mass-observable relation decreases the relevance of the bias, for the typical sizes of current SZ surveys. Considering a joint likelihood for cluster abundance and the so-called "distance priors", we obtain that the biases are negligible compared to the statistical errors. However, we show that the biases from SZ estimators do not go away with increasing sample sizes and they may become the dominant source of error for an all sky survey at the SPT sensitivity. Finally, we compute the confidence regions for the cosmological parameters using Fisher matrix and profile likelihood approaches, showing that they are compatible with the Monte Carlo ones. The results of this work validate the use of the current maximum likelihood methods for
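The kind of small-sample estimator bias studied here can be reproduced with a toy Monte Carlo: draw Poisson counts from a one-parameter abundance model and average the maximum-likelihood estimates. The model below is illustrative, far simpler than the paper's cluster-abundance likelihood:

```python
import math, random

random.seed(7)

def poisson(lam):
    # Knuth's multiplicative method; adequate for the modest mean used here.
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

# Toy counts model: expected number of detected clusters N(theta) = A*exp(-theta).
# A and theta are illustrative stand-ins, not the paper's parametrization.
A, true_theta = 50.0, 0.5
lam = A * math.exp(-true_theta)  # about 30 expected detections per mock survey

# Monte Carlo over mock surveys: the MLE theta_hat = ln(A / n) is biased high
# for finite counts, by Jensen's inequality.
estimates = [math.log(A / max(poisson(lam), 1)) for _ in range(20000)]
bias = sum(estimates) / len(estimates) - true_theta
print(round(bias, 3))  # small but systematically positive
```

Even this toy shows the qualitative effect: an estimator that is nonlinear in the observed count picks up a bias of order 1/(2N) that does not shrink with more Monte Carlo repetitions, only with deeper surveys.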
Improved correction of VIPERS angular selection effects in clustering measurements
NASA Astrophysics Data System (ADS)
Pezzotta, A.; Granett, B. R.; Bel, J.; Guzzo, L.; de la Torre, S.
2016-10-01
Clustering estimates in galaxy redshift surveys need to account for, and correct, the way targets are selected from the general population, so as to avoid biasing the measured values of cosmological parameters. The VIMOS Public Extragalactic Redshift Survey (VIPERS) is no exception, involving slit collisions and masking effects. Pushed by the increasing precision of the measurements, e.g. of the growth rate f, we have been re-assessing these effects in detail. We present here an improved correction for the two-point correlation function, capable of recovering the amplitude of the monopole ξ(r) above 1 h-1 Mpc to better than 2.
Clustering-based redshift estimation: application to VIPERS/CFHTLS
NASA Astrophysics Data System (ADS)
Scottez, V.; Mellier, Y.; Granett, B. R.; Moutard, T.; Kilbinger, M.; Scodeggio, M.; Garilli, B.; Bolzonella, M.; de la Torre, S.; Guzzo, L.; Abbas, U.; Adami, C.; Arnouts, S.; Bottini, D.; Branchini, E.; Cappi, A.; Cucciati, O.; Davidzon, I.; Fritz, A.; Franzetti, P.; Iovino, A.; Krywult, J.; Le Brun, V.; Le Fèvre, O.; Maccagni, D.; Małek, K.; Marulli, F.; Polletta, M.; Pollo, A.; Tasca, L. A. M.; Tojeiro, R.; Vergani, D.; Zanichelli, A.; Bel, J.; Coupon, J.; De Lucia, G.; Ilbert, O.; McCracken, H. J.; Moscardini, L.
2016-10-01
We explore the accuracy of the clustering-based redshift estimation proposed by Ménard et al. when applied to VIMOS Public Extragalactic Redshift Survey (VIPERS) and Canada-France-Hawaii Telescope Legacy Survey (CFHTLS) real data. This method enables us to reconstruct redshift distributions from measurements of the angular clustering of objects, using a set of secure spectroscopic redshifts. We use state-of-the-art spectroscopic measurements with iAB < 22.5 from VIPERS as the reference population to infer the redshift distribution of galaxies from the CFHTLS T0007 release. VIPERS provides a nearly representative sample to a flux limit of iAB < 22.5 at redshifts z > 0.5, which allows us to test the accuracy of the clustering-based redshift distributions. We show that this method enables us to reproduce the true mean colour-redshift relation when both populations have the same magnitude limit. We also show that this technique allows the inference of redshift distributions for a population fainter than the reference, and we give an estimate of the colour-redshift mapping in this case. This last point is of great interest for future large redshift surveys, which require a complete faint spectroscopic sample.
Galaxy cluster mass estimation from stacked spectroscopic analysis
NASA Astrophysics Data System (ADS)
Farahi, Arya; Evrard, August E.; Rozo, Eduardo; Rykoff, Eli S.; Wechsler, Risa H.
2016-08-01
We use simulated galaxy surveys to study: (i) how galaxy membership in redMaPPer clusters maps to the underlying halo population, and (ii) the accuracy of a mean dynamical cluster mass, Mσ(λ), derived from stacked pairwise spectroscopy of clusters with richness λ. Using ˜130 000 galaxy pairs patterned after the Sloan Digital Sky Survey (SDSS) redMaPPer cluster sample study of Rozo et al., we show that the pairwise velocity probability density function of central-satellite pairs with mi < 19 in the simulation matches the form seen in Rozo et al. Through joint membership matching, we deconstruct the main Gaussian velocity component into its halo contributions, finding that the top-ranked halo contributes ˜60 per cent of the stacked signal. The halo mass scale inferred by applying the virial scaling of Evrard et al. to the velocity normalization matches, to within a few per cent, the log-mean halo mass derived through galaxy membership matching. We apply this approach, along with miscentring and galaxy velocity bias corrections, to estimate the log-mean matched halo mass at z = 0.2 of SDSS redMaPPer clusters. Employing the velocity bias constraints of Guo et al., we find
Clustering of Casablanca stock market based on Hurst exponent estimates
NASA Astrophysics Data System (ADS)
Lahmiri, Salim
2016-08-01
This paper deals with the problem of modeling the topology of the Casablanca Stock Exchange (CSE) as a complex network during three different market regimes: a general trend characterized by ups and downs, an increasing trend, and a decreasing trend. In particular, a set of seven different Hurst exponent estimates is used to characterize long-range dependence in the generating process of each industrial sector. They are employed in conjunction with a hierarchical clustering approach to examine the co-movements of the Casablanca Stock Exchange industrial sectors. The purpose is to investigate whether cluster structures are similar across variable, increasing, and decreasing regimes. It is observed that the general structure of the CSE topology changed considerably over the 2009 (variable regime), 2010 (increasing regime), and 2011 (decreasing regime) time periods. The most important findings follow. First, in general a high value of the Hurst exponent is associated with a variable regime and a small one with a decreasing regime. In addition, Hurst estimates during an increasing regime are higher than those of a decreasing regime. Second, correlations between the estimated Hurst exponent vectors of industrial sectors increase when the Casablanca stock exchange follows an upward regime, whilst they decrease when the overall market follows a downward regime.
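As a hedged illustration of the kind of long-range-dependence measurement described above, here is one common Hurst estimator, the aggregated-variance method (the paper uses a set of seven estimators, not necessarily including this exact one; the white-noise input is synthetic):

```python
import numpy as np

def hurst_aggvar(x, scales=(2, 4, 8, 16, 32)):
    """Aggregated-variance Hurst estimate: Var[X^(m)] ~ m^(2H - 2)."""
    n = len(x)
    logm, logv = [], []
    for m in scales:
        k = n // m
        agg = x[:k * m].reshape(k, m).mean(axis=1)   # block means at scale m
        logm.append(np.log(m))
        logv.append(np.log(agg.var()))
    slope = np.polyfit(logm, logv, 1)[0]             # slope = 2H - 2
    return 1.0 + slope / 2.0

rng = np.random.default_rng(1)
returns = rng.standard_normal(4096)  # white noise: expect H near 0.5
H = hurst_aggvar(returns)
```

Sector-by-sector vectors of such estimates can then be fed to a hierarchical clustering routine (e.g. `scipy.cluster.hierarchy.linkage`) to compare cluster structures across regimes.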
Improving performance through concept formation and conceptual clustering
NASA Technical Reports Server (NTRS)
Fisher, Douglas H.
1992-01-01
Research from June 1989 through October 1992 focused on concept formation, clustering, and supervised learning for the purpose of improving the efficiency of problem solving, planning, and diagnosis. These projects resulted in two dissertations on clustering, explanation-based learning, and means-ends planning, and in publications in conferences and workshops, several book chapters, and journals; a complete bibliography of NASA Ames supported publications is included. The following topics are studied: clustering of explanations and problem-solving experiences; clustering and means-ends planning; and diagnosis of space shuttle and space station operating modes.
NASA Astrophysics Data System (ADS)
Choi, Hon-Chit; Wen, Lingfeng; Eberl, Stefan; Feng, Dagan
2006-03-01
Dynamic Single Photon Emission Computed Tomography (SPECT) has the potential to quantitatively estimate physiological parameters by fitting compartment models to the tracer kinetics. The generalized linear least squares method (GLLS) is an efficient method to estimate unbiased kinetic parameters and parametric images. However, due to the low sensitivity of SPECT, noisy data can cause voxel-wise parameter estimation by GLLS to fail. Fuzzy C-Means (FCM) clustering and a modified FCM, which also utilizes information from the immediate neighboring voxels, are proposed to improve the voxel-wise parameter estimation of GLLS. Monte Carlo simulations were performed to generate dynamic SPECT data with different noise levels, which were processed by general and modified FCM clustering. Parametric images were estimated by Logan and Yokoi graphical analysis and by GLLS. The influx rate (K_I) and volume of distribution (V_d) were estimated for the cerebellum, thalamus, and frontal cortex. Our results show that (1) FCM reduces the bias and improves the reliability of parameter estimates for noisy data, (2) GLLS provides estimates of micro parameters (K_I-k_4) as well as macro parameters, such as volume of distribution (V_d) and binding potential (BP_I and BP_II), and (3) FCM clustering incorporating neighboring voxel information does not improve the parameter estimates, but reduces noise in the parametric images. These findings indicate that pre-segmentation with traditional FCM clustering is desirable for generating voxel-wise parametric images with GLLS from dynamic SPECT data.
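A minimal sketch of plain fuzzy c-means, the core of the pre-segmentation step described above (the spatially modified variant adds a neighborhood term that is omitted here; all data are synthetic):

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Plain FCM: alternate membership and center updates (no spatial term)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                 # memberships sum to 1 per sample
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]  # fuzzily weighted centroids
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1.0))                   # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.5, (50, 2)), rng.normal(10.0, 0.5, (50, 2))])
centers, U = fuzzy_c_means(X)
labels = U.argmax(axis=1)  # hard labels from the fuzzy memberships
```

In the dynamic-imaging setting each "point" would be a voxel's time-activity curve rather than a 2-D coordinate, but the update equations are identical.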
Motion estimation using point cluster method and Kalman filter.
Senesh, M; Wolf, A
2009-05-01
The most frequently used method in three-dimensional human gait analysis involves placing markers on the skin of the analyzed segment. This introduces a significant artifact, which strongly influences the estimates of bone position and orientation and of joint kinematics. In this study, we tested and evaluated the effect of adding a Kalman filter procedure to the previously reported point cluster technique (PCT) in the estimation of rigid body motion. We demonstrated the procedures by motion analysis of a compound planar pendulum from indirect opto-electronic measurements of markers attached to an elastic appendage that is restrained to slide along the rigid body's long axis. The elastic frequency is close to the pendulum frequency, as in the biomechanical problem, where the soft tissue frequency content is similar to the actual movement of the bones. Comparison of the real pendulum angle to that obtained by several estimation procedures (PCT, Kalman filter followed by PCT, and low-pass filter followed by PCT) enables evaluation of the accuracy of the procedures. When comparing the maximal amplitude, no effect was noted from adding the Kalman filter; however, a closer look at the signal revealed that the angle estimated by the PCT method alone was very noisy with fluctuations, while the angle estimated by the Kalman filter followed by the PCT was a smooth signal. It was also noted that the instantaneous frequencies obtained from the angle estimated by the PCT method are more dispersed than those obtained from the Kalman filter followed by the PCT method. Adding a Kalman filter to the PCT method in the estimation procedure of rigid body motion results in a smoother signal that better represents the real motion, with less signal distortion than when using a digital low-pass filter. Furthermore, it can be concluded that adding a Kalman filter to the PCT procedure substantially reduces the dispersion of the maximal and minimal
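The abstract above compares PCT alone against a Kalman filter followed by PCT. As a self-contained sketch of the smoothing step only, here is a generic constant-velocity Kalman filter applied to one noisy marker coordinate (the motion model, tuning constants, and data are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def kalman_cv(z, dt=0.01, q=0.5, r=0.04):
    """Constant-velocity Kalman filter over a scalar measurement stream."""
    F = np.array([[1.0, dt], [0.0, 1.0]])                  # state: [position, velocity]
    Q = q * np.array([[dt**3/3, dt**2/2], [dt**2/2, dt]])  # white-acceleration noise
    H = np.array([[1.0, 0.0]])                             # we observe position only
    x, P = np.array([z[0], 0.0]), np.eye(2)
    out = []
    for zk in z:
        x, P = F @ x, F @ P @ F.T + Q       # predict
        S = H @ P @ H.T + r                 # innovation covariance
        K = (P @ H.T) / S                   # Kalman gain, shape (2, 1)
        x = x + (K * (zk - H @ x)).ravel()  # update state
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)

t = np.arange(400) * 0.01
truth = 1.0 + 2.0 * t                        # smooth rigid-body coordinate
rng = np.random.default_rng(5)
z = truth + rng.normal(0.0, 0.2, t.size)     # soft-tissue-like measurement jitter
filt = kalman_cv(z)
rmse_raw = np.sqrt(np.mean((z - truth) ** 2))
rmse_filt = np.sqrt(np.mean((filt - truth) ** 2))
```

In the paper's pipeline this smoothing would be applied to every marker channel before the PCT reconstruction of the rigid-body pose.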
Nanostar Clustering Improves the Sensitivity of Plasmonic Assays.
Park, Yong Il; Im, Hyungsoon; Weissleder, Ralph; Lee, Hakho
2015-08-19
Star-shaped Au nanoparticles (Au nanostars, AuNS) have been developed to improve plasmonic sensitivity, but their application has largely been limited to single-particle probes. We herein describe an AuNS clustering assay, based on nanoscale self-assembly of multiple AuNS, which further increases detection sensitivity. We show that each cluster contains multiple nanogaps that concentrate electric fields, thereby amplifying the signal via plasmon coupling. Numerical simulation indicated that AuNS clusters assume up to 460-fold higher field density than Au nanosphere clusters of similar mass. The results were validated in model assays of protein biomarker detection, in which the AuNS clustering assay showed higher sensitivity than Au nanospheres. Minimizing the size of the affinity ligand was found to be important to tightly confine electric fields and improve the sensitivity. The resulting assay is simple and fast and can be readily applied to point-of-care molecular detection schemes. PMID:26102604
Unsupervised, Robust Estimation-based Clustering for Multispectral Images
NASA Technical Reports Server (NTRS)
Netanyahu, Nathan S.
1997-01-01
To prepare for the challenge of handling the archiving and querying of terabyte-sized scientific spatial databases, the NASA Goddard Space Flight Center's Applied Information Sciences Branch (AISB, Code 935) developed a number of characterization algorithms that rely on supervised clustering techniques. The research reported upon here has been aimed at continuing the evolution of some of these supervised techniques, namely the neural network and decision tree-based classifiers, and at extending the approach to incorporate unsupervised clustering algorithms, such as those based on robust estimation (RE) techniques. The algorithms developed under this task should be suited for use by the Intelligent Information Fusion System (IIFS) metadata extraction modules, and as such these algorithms must be fast, robust, and anytime in nature. Finally, so that the planner/scheduler module of the IIFS can oversee the use and execution of these algorithms, all information required by the planner/scheduler must be provided to the IIFS development team to ensure the timely integration of these algorithms into the overall system.
Improved Estimation Model of Lunar Surface Temperature
NASA Astrophysics Data System (ADS)
Zheng, Y.
2015-12-01
Lunar surface temperature (LST) is of great scientific interest, both for uncovering thermal properties and for designing lunar robotic or manned landing missions. In this paper, we propose an improved LST estimation model based on a one-dimensional partial differential equation (PDE). Shadowing and surface tilt effects were incorporated into the model. Using the Chang'E (CE-1) DEM data from the Laser Altimeter (LA), the topographic effect can be estimated with an improved effective solar irradiance (ESI) model. In Fig. 1, the highest LST of the global Moon has been estimated at a spatial resolution of 1 degree/pixel, applying the solar albedo data derived from Clementine UV-750nm in solving the PDE. The topographic effect is significant in the LST map: the maria, highlands, and craters can be identified clearly. The maximum daytime LST occurs in regions with low albedo, e.g. Mare Procellarum, Mare Serenitatis, and Mare Imbrium. The results are consistent with the Diviner measurements of the LRO mission. Fig. 2 shows the temperature variations at the center of the disk over one year, assuming the Moon to be a standard sphere. The seasonal variation of LST at the equator is about 10 K. The highest LST occurs in early May. Fig. 1: Estimated maximum surface temperatures of the global Moon at a spatial resolution of 1 degree/pixel.
Estimating cougar predation rates from GPS location clusters
Anderson, C.R.; Lindzey, F.G.
2003-01-01
We examined cougar (Puma concolor) predation from Global Positioning System (GPS) location clusters (≥2 locations within 200 m on the same or consecutive nights) of 11 cougars during September-May, 1999-2001. Location success of GPS averaged 2.4-5.0 of 6 location attempts/night/cougar. We surveyed potential predation sites during summer-fall 2000 and summer 2001 to identify prey composition (n = 74; 3-388 days post predation) and record predation-site variables (n = 97; 3-270 days post predation). We developed a model to estimate probability that a cougar killed a large mammal from data collected at GPS location clusters, where the probability of predation increased with number of nights (defined as locations at 2200, 0200, or 0500 hr) of cougar presence within a 200-m radius (P < 0.001). Mean estimated cougar predation rates for large mammals were 7.3 days/kill for subadult females (1-2.5 yr; n = 3, 90% CI: 6.3 to 9.9), 7.0 days/kill for adult females (n = 2, 90% CI: 5.8 to 10.8), 5.4 days/kill for family groups (females with young; n = 3, 90% CI: 4.5 to 8.4), 9.5 days/kill for a subadult male (1-2.5 yr; n = 1, 90% CI: 6.9 to 16.4), and 7.8 days/kill for adult males (n = 2, 90% CI: 6.8 to 10.7). We may have slightly overestimated cougar predation rates due to our inability to separate scavenging from predation. We detected 45 deer (Odocoileus spp.), 15 elk (Cervus elaphus), 6 pronghorn (Antilocapra americana), 2 livestock, 1 moose (Alces alces), and 6 small mammals at cougar predation sites. Comparisons between cougar sexes suggested that females selected mule deer and males selected elk (P < 0.001). Cougars averaged 3.0 nights on pronghorn carcasses, 3.4 nights on deer carcasses, and 6.0 nights on elk carcasses. Most cougar predation (81.7%) occurred between 1901-0500 hr and peaked from 2201-0200 hr (31.7%). Applying GPS technology to identify predation rates and prey selection will allow managers to efficiently estimate the ability of an area's prey base to
A Hierarchical Clustering Methodology for the Estimation of Toxicity
A Quantitative Structure Activity Relationship (QSAR) methodology based on hierarchical clustering was developed to predict toxicological endpoints. This methodology utilizes Ward's method to divide a training set into a series of structurally similar clusters. The structural sim...
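The general scheme described above, Ward clustering of a molecular descriptor matrix followed by predicting a query's endpoint from structurally similar neighbors, can be sketched as follows (the two "structural families" and their toxicity values are synthetic stand-ins, not QSAR data):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# toy descriptor matrix: two structural families with distinct toxicity levels
X = np.vstack([rng.normal(0.0, 0.3, (20, 5)), rng.normal(3.0, 0.3, (20, 5))])
tox = np.concatenate([rng.normal(1.0, 0.1, 20), rng.normal(4.0, 0.1, 20)])

Z = linkage(X, method='ward')                 # Ward's minimum-variance criterion
labels = fcluster(Z, t=2, criterion='maxclust')

def predict(x_query):
    """Predict a query's endpoint as the mean of its nearest cluster's values."""
    centroids = np.array([X[labels == c].mean(axis=0) for c in (1, 2)])
    c = np.argmin(np.linalg.norm(centroids - x_query, axis=1)) + 1
    return tox[labels == c].mean()
```

A production QSAR workflow would additionally validate each cluster's local model and flag queries that fall outside every cluster's applicability domain.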
The Effect of Mergers on Galaxy Cluster Mass Estimates
NASA Astrophysics Data System (ADS)
Johnson, Ryan E.; Zuhone, John A.; Thorsen, Tessa; Hinds, Andre
2015-08-01
At vertices within the filamentary structure that describes the universal matter distribution, clusters of galaxies grow hierarchically through merging with other clusters. As such, the most massive galaxy clusters should have experienced many such mergers in their histories. Though we cannot see them evolve over time, these mergers leave lasting, measurable effects in the cluster galaxies' phase space. By simulating several different galaxy cluster mergers here, we examine how the cluster galaxies' kinematics are altered as a result of these mergers. Further, we also examine the effect of our line-of-sight viewing angle with respect to the merger axis. In projecting the 6-dimensional galaxy phase space onto a 3-dimensional plane, we are able to simulate how these clusters might actually appear to optical redshift surveys. We find that for those optical cluster statistics which are most often used as a proxy for the cluster mass (variants of σv), the uncertainty due to an imprecise or unknown line of sight may alter the derived cluster masses more so than the kinematic disturbance of the merger itself. Finally, by examining these and several other clustering statistics, we find that significant events (such as pericentric crossings) are identifiable over a range of merger initial conditions and from many different lines of sight.
Improved Gravitation Field Algorithm and Its Application in Hierarchical Clustering
Zheng, Ming; Sun, Ying; Liu, Gui-xia; Zhou, You; Zhou, Chun-guang
2012-01-01
Background: Gravitation field algorithm (GFA) is a new optimization algorithm based on an imitation of natural phenomena. GFA can do well both in searching for a global minimum and for multiple minima in computational biology. But GFA needs to be improved to increase efficiency, and modified for application to some discrete data problems in systems biology. Method: An improved GFA, called IGFA, is proposed in this paper. Two parts were improved in IGFA. The first is the rule of random division, which is a reasonable strategy that shortens running time. The other is the rotation factor, which improves the accuracy of IGFA. To apply IGFA to hierarchical clustering, the initialization and the movement operator were also modified. Results: Two kinds of experiments were used to test IGFA, which was then applied to hierarchical clustering. The global-minimum experiment compared IGFA, GFA, GA (genetic algorithm), and SA (simulated annealing); the multi-minima experiment compared IGFA and GFA. The results of the two experiments proved the efficiency of IGFA: it is better than GFA both in accuracy and running time. For hierarchical clustering, IGFA is used to optimize the smallest distance of gene pairs, and the results were compared with GA, SA, single-linkage clustering, and UPGMA. The efficiency of IGFA is demonstrated. PMID:23173043
An improvement to the cluster recognition model for peripheral collisions
Garcia-Solis, E.J.; Mignerey, A.C.
1996-02-01
Among the microscopic dynamical simulations used for the study of the evolution of nuclear collisions at energies around 100 MeV, it has been found that BUU-type calculations describe adequately the general features of nuclear collisions in that energy regime. The BUU method consists of the numerical solution of the modified Vlasov equation for a generated phase-space distribution of nucleons. It generally describes the first stages of a nuclear reaction satisfactorily; however, it is not able to separate the fragments formed during the projectile-target interaction. It is therefore necessary to insert a clusterization procedure to obtain the primary fragments of the reaction. The general description of the clustering model proposed by the authors can be found elsewhere. The current paper deals with improvements that have been made to the clustering procedure.
Estimation of Carcinogenicity using Hierarchical Clustering and Nearest Neighbor Methodologies
Previously a hierarchical clustering (HC) approach and a nearest neighbor (NN) approach were developed to model acute aquatic toxicity end points. These approaches were developed to correlate the toxicity for large, noncongeneric data sets. In this study these approaches applie...
IMPROVED RISK ESTIMATES FOR CARBON TETRACHLORIDE
Benson, Janet M.; Springer, David L.
1999-12-31
Carbon tetrachloride has been used extensively within the DOE nuclear weapons facilities. Rocky Flats was formerly the largest volume consumer of CCl4 in the United States, using 5000 gallons in 1977 alone (Ripple, 1992). At the Hanford site, several hundred thousand gallons of CCl4 were discharged between 1955 and 1973 into underground cribs for storage. Levels of CCl4 in groundwater at highly contaminated sites at the Hanford facility have exceeded the drinking water standard of 5 ppb by several orders of magnitude (Illman, 1993). High levels of CCl4 at these facilities represent a potential health hazard for workers conducting cleanup operations and for surrounding communities. The level of CCl4 cleanup required at these sites, and the associated costs, are driven by current human health risk estimates, which assume that CCl4 is a genotoxic carcinogen. The overall purpose of these studies was to improve the scientific basis for assessing the health risk associated with human exposure to CCl4. Specific research objectives of this project were to: (1) compare the rates of CCl4 metabolism by rats, mice and hamsters in vivo and extrapolate those rates to man based on parallel studies on the metabolism of CCl4 by rat, mouse, hamster and human hepatic microsomes in vitro; (2) using hepatic microsome preparations, determine the role of specific cytochrome P450 isoforms in CCl4-mediated toxicity and the effects of repeated inhalation and ingestion of CCl4 on these isoforms; and (3) evaluate the toxicokinetics of inhaled CCl4 in rats, mice and hamsters. This information has been used to improve the physiologically based pharmacokinetic (PBPK) model for CCl4 originally developed by Paustenbach et al. (1988) and more recently revised by Thrall and Kenny (1996). Another major objective of the project was to provide scientific evidence that CCl4, like chloroform, is a hepatocarcinogen only when exposure results in cell damage, cell killing and regenerative proliferation. In
Xiao, Yongling; Abrahamowicz, Michal
2010-03-30
We propose two bootstrap-based methods to correct the standard errors (SEs) from Cox's model for within-cluster correlation of right-censored event times. The cluster bootstrap method resamples, with replacement, only the clusters, whereas the two-step bootstrap method resamples (i) the clusters, and (ii) individuals within each selected cluster, with replacement. In simulations, we evaluate both methods and compare them with the existing robust variance estimator and the shared gamma frailty model, which are available in statistical software packages. We simulate clustered event time data with latent cluster-level random effects, which are ignored in the conventional Cox model. For cluster-level covariates, both proposed bootstrap methods yield accurate SEs, correct type I error rates, and acceptable coverage rates, regardless of the true random effects distribution, and avoid the serious variance under-estimation of conventional Cox-based standard errors. However, the two-step bootstrap method over-estimates the variance for individual-level covariates. We also apply the proposed bootstrap methods to obtain confidence bands around flexible estimates of time-dependent effects in a real-life analysis of clustered event times.
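The cluster-bootstrap resampling scheme itself is generic: resample whole clusters with replacement, recompute the statistic, and take the spread of the replicates as the SE. A sketch with a simple mean statistic standing in for Cox regression (the data-generating numbers are illustrative; the contrast with the naive i.i.d. SE shows why ignoring cluster-level random effects understates the variance):

```python
import numpy as np

def cluster_bootstrap_se(groups, stat, B=2000, seed=0):
    """SE of `stat` over datasets formed by resampling whole clusters."""
    rng = np.random.default_rng(seed)
    ids = np.arange(len(groups))
    reps = []
    for _ in range(B):
        pick = rng.choice(ids, size=len(ids), replace=True)
        reps.append(stat(np.concatenate([groups[i] for i in pick])))
    return np.std(reps, ddof=1)

rng = np.random.default_rng(42)
# 10 clusters of 20 observations with a strong cluster-level random effect
groups = [rng.normal(rng.normal(0.0, 1.0), 0.1, 20) for _ in range(10)]
pooled = np.concatenate(groups)
naive_se = pooled.std(ddof=1) / np.sqrt(pooled.size)  # ignores clustering
boot_se = cluster_bootstrap_se(groups, np.mean)       # honest cluster-level SE
```

For the paper's setting, `stat` would refit the Cox model on each resampled dataset and return the coefficient of interest.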
Richardson, R Tyler; Nicholson, Kristen F; Rapp, Elizabeth A; Johnston, Therese E; Richards, James G
2016-05-01
Accurate measurement of joint kinematics is required to understand the musculoskeletal effects of a therapeutic intervention such as upper extremity (UE) ergometry. Traditional surface-based motion capture is effective for quantifying humerothoracic motion, but scapular kinematics are challenging to obtain. Methods for estimating scapular kinematics include the widely-reported acromion marker cluster (AMC) which utilizes a static calibration between the scapula and the AMC to estimate the orientation of the scapula during motion. Previous literature demonstrates that including additional calibration positions throughout the motion improves AMC accuracy for single plane motions; however this approach has not been assessed for the non-planar shoulder complex motion occurring during UE ergometry. The purpose of this study was to evaluate the accuracy of single, dual, and multiple AMC calibration methods during UE ergometry. The orientations of the UE segments of 13 healthy subjects were recorded with motion capture. Scapular landmarks were palpated at eight evenly-spaced static positions around the 360° cycle. The single AMC method utilized one static calibration position to estimate scapular kinematics for the entire cycle, while the dual and multiple AMC methods used two and four static calibration positions, respectively. Scapulothoracic angles estimated by the three AMC methods were compared with scapulothoracic angles determined by palpation. The multiple AMC method produced the smallest RMS errors and was not significantly different from palpation about any axis. We recommend the multiple AMC method as a practical and accurate way to estimate scapular kinematics during UE ergometry.
Research opportunities to improve DSM impact estimates
Misuriello, H.; Hopkins, M.E.F.
1992-03-01
This report was commissioned by the California Institute for Energy Efficiency (CIEE) as part of its research mission to advance the energy efficiency and productivity of all end-use sectors in California. Our specific goal in this effort has been to identify viable research and development (R&D) opportunities that can improve capabilities to determine the energy-use and demand reductions achieved through demand-side management (DSM) programs and measures. We surveyed numerous practitioners in California and elsewhere to identify the major obstacles to effective impact evaluation, drawing on their collective experience. As a separate effort, we have also profiled the status of regulatory practices in leading states with respect to DSM impact evaluation. We have synthesized this information, adding our own perspective and experience to those of our survey-respondent colleagues, to characterize today's state of the art in impact-evaluation practices. This scoping study takes a comprehensive look at the problems and issues involved in DSM impact estimates at the customer-facility or site level. The major portion of our study investigates three broad topic areas of interest to CIEE: Data analysis issues, field-monitoring issues, issues in evaluating DSM measures. Across these three topic areas, we have identified 22 potential R&D opportunities, to which we have assigned priority levels. These R&D opportunities are listed by topic area and priority.
Improved diagnostic model for estimating wind energy
Endlich, R.M.; Lee, J.D.
1983-03-01
Because wind data are available only at scattered locations, a quantitative method is needed to estimate the wind resource at specific sites where wind energy generation may be economically feasible. This report describes a computer model that makes such estimates. The model uses standard weather reports and terrain heights in deriving wind estimates; the method of computation has been changed from what has been used previously. The performance of the current model is compared with that of the earlier version at three sites; estimates of wind energy at four new sites are also presented.
Communication: Improved pair approximations in local coupled-cluster methods
Schwilk, Max; Werner, Hans-Joachim; Usvyat, Denis
2015-03-28
In local coupled cluster treatments the electron pairs can be classified according to the magnitude of their energy contributions or distances into strong, close, weak, and distant pairs. Different approximations are introduced for the latter three classes. In this communication, an improved simplified treatment of close and weak pairs is proposed, which is based on long-range cancellations of individually slowly decaying contributions in the amplitude equations. Benchmark calculations for correlation, reaction, and activation energies demonstrate that these approximations work extremely well, while pair approximations based on local second-order Møller-Plesset theory can lead to errors that are 1-2 orders of magnitude larger.
Hierarchical clustering method for improved prostate cancer imaging in diffuse optical tomography
NASA Astrophysics Data System (ADS)
Kavuri, Venkaiah C.; Liu, Hanli
2013-03-01
We investigate the feasibility of trans-rectal near-infrared (NIR) based diffuse optical tomography (DOT) for early detection of prostate cancer using a transrectal ultrasound (TRUS) compatible imaging probe. For this purpose, we designed a TRUS-compatible, NIR-based imaging system (780 nm), in which the photodiodes were placed on the trans-rectal probe. DC signals were recorded and used for estimating the absorption coefficient. We validated the system using laboratory phantoms. For further improvement, we also developed a hierarchical clustering method (HCM) to improve the accuracy of image reconstruction with limited prior information. We demonstrated the method using computer simulations and laboratory phantom experiments.
Bayesian Estimation of Conditional Independence Graphs Improves Functional Connectivity Estimates
Hinne, Max; Janssen, Ronald J.; Heskes, Tom; van Gerven, Marcel A.J.
2015-01-01
Functional connectivity concerns the correlated activity between neuronal populations in spatially segregated regions of the brain, which may be studied using functional magnetic resonance imaging (fMRI). This coupled activity is conveniently expressed using covariance, but this measure fails to distinguish between direct and indirect effects. A popular alternative that addresses this issue is partial correlation, which regresses out the signal of potentially confounding variables, resulting in a measure that reveals only direct connections. Importantly, provided the data are normally distributed, if two variables are conditionally independent given all other variables, their respective partial correlation is zero. In this paper, we propose a probabilistic generative model that allows us to estimate functional connectivity in terms of both partial correlations and a graph representing conditional independencies. Simulation results show that this methodology is able to outperform the graphical LASSO, which is the de facto standard for estimating partial correlations. Furthermore, we apply the model to estimate functional connectivity for twenty subjects using resting-state fMRI data. Results show that our model provides a richer representation of functional connectivity as compared to considering partial correlations alone. Finally, we demonstrate how our approach can be extended in several ways, for instance to achieve data fusion by informing the conditional independence graph with data from probabilistic tractography. As our Bayesian formulation of functional connectivity provides access to the posterior distribution instead of only to point estimates, we are able to quantify the uncertainty associated with our results. This reveals that while we are able to infer a clear backbone of connectivity in our empirical results, the data are not accurately described by simply looking at the mode of the distribution over connectivity. The implication of this is that
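For normally distributed data, partial correlations come directly from the precision (inverse covariance) matrix: ρ_ij = -Θ_ij / √(Θ_ii Θ_jj), which is zero exactly when i and j are conditionally independent given the rest. A sketch using plain sample-based inversion (not the graphical LASSO or the Bayesian model of the paper; the chain x → y → z data are synthetic):

```python
import numpy as np

def partial_correlations(X):
    """Partial correlations from the inverse sample covariance (precision)."""
    theta = np.linalg.inv(np.cov(X, rowvar=False))
    d = np.sqrt(np.diag(theta))
    P = -theta / np.outer(d, d)          # rho_ij = -theta_ij / sqrt(theta_ii theta_jj)
    np.fill_diagonal(P, 1.0)
    return P

rng = np.random.default_rng(0)
n = 20000
x = rng.standard_normal(n)
y = x + 0.5 * rng.standard_normal(n)     # chain: x -> y -> z
z = y + 0.5 * rng.standard_normal(n)
P = partial_correlations(np.column_stack([x, y, z]))
marginal_xz = np.corrcoef(x, z)[0, 1]    # large, though x and z are only indirectly linked
```

The point the abstract makes is visible here: the marginal correlation between x and z is strong, while their partial correlation given y is consistent with zero, recovering the direct-connection structure.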
NASA Astrophysics Data System (ADS)
Signorelli, G.; D'Onofrio, A.; Venturini, M.
2016-07-01
Measuring the time of each ionization cluster in drift chambers has been proposed as a way to improve the single-hit resolution, especially for very low-mass tracking systems. Ad hoc formulae have been developed to combine the information from the single clusters. We show that the problem falls into a wide category of problems that can be solved with an algorithm called Maximum Possible Spacing (MPS), which has been demonstrated to find the optimal estimator. We show that the MPS approach is applicable and gives the expected results. Its application to a real tracking device, namely the MEG II cylindrical drift chamber, is discussed.
Formation of Education Clusters as a Way to Improve Education
ERIC Educational Resources Information Center
Aitbayeva, Gul'zamira D.; Zhubanova, Mariyash K.; Kulgildinova, Tulebike A.; Tusupbekova, Gulsum M.; Uaisova, Gulnar I.
2016-01-01
The purpose of this research is to analyze the basic prerequisites, formation, and development factors of educational clusters in the world's leading nations, in order to study the possibility of introducing cluster policy and creating educational clusters in the Republic of Kazakhstan. The authors of this study concluded that an educational cluster could be…
Uncertain Data Clustering-Based Distance Estimation in Wireless Sensor Networks
Luo, Qinghua; Peng, Yu; Peng, Xiyuan; Saddik, Abdulmotaleb El
2014-01-01
For communication distance estimation in Wireless Sensor Networks (WSNs), the RSSI (Received Signal Strength Indicator) value is usually assumed to have a linear relationship with the logarithm of the communication distance. However, this is not always true in practice, because there are always uncertainties in RSSI readings due to obstacles, wireless interference, etc. In this paper, we propose a novel RSSI-based communication distance estimation method based on the idea of interval data clustering. We first use interval data, combined with statistical information of RSSI values, to describe the distribution characteristics of RSSI. We then use interval data hard clustering and soft clustering to overcome different levels of RSSI uncertainty, respectively. We have used real RSSI measurements to evaluate our communication distance estimation method in three representative wireless environments. Extensive experimental results show that our method achieves promising estimation accuracy with high efficiency when compared to other state-of-the-art approaches. PMID:24721772
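As a concrete reference point for the linear RSSI-log-distance assumption the abstract questions, here is a minimal sketch of the standard log-distance path-loss model; the function name, the reference RSSI at 1 m, and the path-loss exponent are illustrative assumptions, not calibrations from the paper.

```python
def rssi_to_distance(rssi_dbm, rssi_at_1m=-40.0, path_loss_exp=2.0):
    """Invert the log-distance path-loss model:
    RSSI(d) = RSSI(1 m) - 10 * n * log10(d)."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10.0 * path_loss_exp))

# At the reference RSSI the estimated distance is 1 m.
print(rssi_to_distance(-40.0))  # 1.0
# With n = 2, a 20 dB drop corresponds to a tenfold distance increase.
print(rssi_to_distance(-60.0))  # 10.0
```

The clustering approach in the paper exists precisely because real RSSI readings scatter around this idealized curve.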
Neuronal spike train entropy estimation by history clustering.
Watters, Nicholas; Reeke, George N
2014-09-01
Neurons send signals to each other by means of sequences of action potentials (spikes). Ignoring variations in spike amplitude and shape that are probably not meaningful to a receiving cell, the information content, or entropy, of the signal depends only on the timing of action potentials; because there is no external clock, only the interspike intervals, and not the absolute spike times, are significant. Estimating spike train entropy is a difficult task, particularly with small data sets, and many methods of entropy estimation have been proposed. Here we present two related model-based methods for estimating the entropy of neural signals and compare them to existing methods. One of the methods is fast and reasonably accurate, and it converges well with short spike time records; the other is impractically time-consuming but apparently very accurate, relying on generating artificial data that are a statistical match to the experimental data. Using the slow, accurate method to generate a best-estimate entropy value, we find that the faster estimator converges to this value more closely, and with smaller data sets, than many existing entropy estimators.
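A deliberately naive baseline for the task described above is a plug-in histogram entropy of the interspike intervals; the bin width and the toy spike trains below are illustrative, and the paper's model-based estimators are considerably more sophisticated.

```python
import math
from collections import Counter

def isi_entropy_bits(spike_times, bin_width):
    """Plug-in (histogram) entropy estimate, in bits, of the
    interspike-interval distribution of a sorted spike train."""
    isis = [t2 - t1 for t1, t2 in zip(spike_times, spike_times[1:])]
    counts = Counter(int(isi // bin_width) for isi in isis)
    n = len(isis)
    return sum(-(c / n) * math.log2(c / n) for c in counts.values())

# A perfectly regular train puts every ISI in one bin: zero entropy.
print(isi_entropy_bits([0.0, 1.0, 2.0, 3.0, 4.0], bin_width=0.5))  # 0.0
# Two equally likely ISI values give exactly 1 bit.
print(isi_entropy_bits([0, 1, 3, 4, 6], bin_width=1))  # 1.0
```

Plug-in estimators like this are badly biased for small samples, which is exactly the regime the paper's methods target.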
Size Estimation and Time Evolution of Large Size Rare Gas Clusters by Rayleigh Scattering Techniques
NASA Astrophysics Data System (ADS)
Liu, Bing-Chen; Zhu, Pin-Pin; Li, Zhao-Hui; Ni, Guo-Quan; Xu, Zhi-Zhan
2002-05-01
Large rare gas clusters Arn, Krn and Xen were produced at room temperature in the process of supersonic adiabatic expansion. The cluster size is examined by a Rayleigh scattering experiment. Power variations of the average cluster size ⟨N⟩ with the gas backing pressure P0 give the size scaling ⟨N⟩ ∝ P0^2.0, with the largest cluster sizes estimated in the present work to be about 1.5×10^4, 2.6×10^4 and 4.0×10^4 atoms (corresponding cluster-sphere diameters of about 9, 13 and 17 nm) for Ar, Kr and Xe, respectively. A time-resolving Rayleigh scattering experiment was conducted to investigate the time evolution of cluster formation and decay processes. A surprising two-plateau structure in the time evolution of cluster formation and decay of Kr and Xe clusters was revealed, as compared with a "normal" single structure for the case of Ar gas. In the second plateau, the intensity of the scattered light is enhanced greatly, by as much as 62 times, over that in the first plateau, indicating a significant increase in cluster size. This finding supports the importance of nuclei in the gas condensation process and may be helpful for further insight into the phenomenon of clustering.
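The reported ⟨N⟩ ∝ P0^2.0 scaling is the slope of a log-log fit; the sketch below shows how such an exponent is recovered by least squares, using made-up pressure/size pairs rather than the paper's data.

```python
import math

def scaling_exponent(pressures, sizes):
    """Least-squares slope of log<N> vs log P0, i.e. the exponent k
    in the power law <N> = c * P0^k."""
    xs = [math.log(p) for p in pressures]
    ys = [math.log(n) for n in sizes]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic data obeying <N> = 10 * P0^2 recovers k = 2.
print(scaling_exponent([1, 2, 4, 8], [10, 40, 160, 640]))  # ≈ 2.0
```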
Clustering-based urbanisation to improve enterprise information systems agility
NASA Astrophysics Data System (ADS)
Imache, Rabah; Izza, Said; Ahmed-Nacer, Mohamed
2015-11-01
Enterprises face daily pressure to demonstrate their ability to adapt quickly to unpredictable technological, social, legislative, competitive and globalisation-related changes in their dynamic environment. Thus, to secure its place in this hard context, an enterprise must always be agile and must ensure its sustainability through continuous improvement of its information system (IS). The agility of enterprise information systems (EISs) can therefore be considered today as a primary objective of any enterprise. One way of achieving this objective is urbanisation of the EIS in a context of continuous improvement, making it a real asset serving enterprise strategy. This paper investigates the benefits of EIS urbanisation based on clustering techniques as a driver for producing and/or improving agility, to help managers and IT departments continuously improve the performance of the enterprise and make appropriate decisions within the scope of the enterprise's objectives and strategy. The approach is applied to the urbanisation of a tour operator's EIS.
Improving reliability of subject-level resting-state fMRI parcellation with shrinkage estimators.
Mejia, Amanda F; Nebel, Mary Beth; Shou, Haochang; Crainiceanu, Ciprian M; Pekar, James J; Mostofsky, Stewart; Caffo, Brian; Lindquist, Martin A
2015-05-15
A recent interest in resting state functional magnetic resonance imaging (rsfMRI) lies in subdividing the human brain into anatomically and functionally distinct regions of interest. For example, brain parcellation is often a necessary step for defining the network nodes used in connectivity studies. While inference has traditionally been performed on group-level data, there is a growing interest in parcellating single subject data. However, this is difficult due to the inherent low signal-to-noise ratio of rsfMRI data, combined with typically short scan lengths. A large number of brain parcellation approaches employ clustering, which begins with a measure of similarity or distance between voxels. The goal of this work is to improve the reproducibility of single-subject parcellation using shrinkage-based estimators of such measures, allowing the noisy subject-specific estimator to "borrow strength" in a principled manner from a larger population of subjects. We present several empirical Bayes shrinkage estimators and outline methods for shrinkage when multiple scans are not available for each subject. We perform shrinkage on raw inter-voxel correlation estimates and use both raw and shrinkage estimates to produce parcellations by performing clustering on the voxels. While we employ a standard spectral clustering approach, our proposed method is agnostic to the choice of clustering method and can be used as a pre-processing step for any clustering algorithm. Using two datasets - a simulated dataset where the true parcellation is known and is subject-specific and a test-retest dataset consisting of two 7-minute resting-state fMRI scans from 20 subjects - we show that parcellations produced from shrinkage correlation estimates have higher reliability and validity than those produced from raw correlation estimates. Application to test-retest data shows that using shrinkage estimators increases the reproducibility of subject-specific parcellations of the motor cortex by
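A minimal sketch of the general idea behind the estimators described above is linear shrinkage of a subject's correlation estimates toward a group mean. The shrinkage weight `lam` is supplied directly here, whereas the paper derives it empirically from within- vs. between-subject variance; the function name and example values are illustrative.

```python
def shrink_correlations(subject_r, group_mean_r, lam):
    """Linear shrinkage of a subject's noisy inter-voxel correlation
    estimates toward the group mean:
        r* = lam * group + (1 - lam) * subject.
    In an empirical-Bayes treatment lam is estimated from the data;
    here it is passed in for illustration."""
    return [lam * g + (1.0 - lam) * s for s, g in zip(subject_r, group_mean_r)]

# With lam = 0.5 each entry is pulled halfway toward the group mean.
print(shrink_correlations([0.8, -0.2], [0.4, 0.1], lam=0.5))
```

The shrunken correlations would then feed into the similarity matrix used by any clustering algorithm, exactly as the raw correlations would.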
Using Cluster Gas Fractions to Estimate Total BH Mechanical Feedback Energy
NASA Astrophysics Data System (ADS)
Mathews, Bill
2011-08-01
The total mechanical feedback energy received by clusters of mass 4-11 × 10^{14} M_{sun} exceeds 10^{63} erg, with a mean feedback luminosity of 10^{46} erg/s. This can be estimated by comparing gas density profiles in idealized adiabatic clusters evolved to zero redshift with entropy and gas fraction profiles observed in clusters of the same mass. The feedback energy, stored as potential energy in the cluster gas, can be estimated by comparing the potential energy within the same gas mass in adiabatic and observed atmospheres, which have expanded considerably. The total feedback energy far exceeds the energy gained from supernovae or lost by radiation. Less than 1% of the feedback energy is deposited within the cooling radius, but the time-averaged mass inflow from cooling is nicely offset by outflow due to feedback expansion.
Improving lidar turbulence estimates for wind energy
NASA Astrophysics Data System (ADS)
Newman, J. F.; Clifton, A.; Churchfield, M. J.; Klein, P.
2016-09-01
Remote sensing devices (e.g., lidars) are quickly becoming a cost-effective and reliable alternative to meteorological towers for wind energy applications. Although lidars can measure mean wind speeds accurately, these devices measure different values of turbulence intensity (TI) than an instrument on a tower. In response to these issues, a lidar TI error reduction model was recently developed for commercially available lidars. The TI error model first applies physics-based corrections to the lidar measurements, then uses machine-learning techniques to further reduce errors in lidar TI estimates. The model was tested at two sites in the Southern Plains where vertically profiling lidars were collocated with meteorological towers. Results indicate that the model works well under stable conditions but cannot fully mitigate the effects of variance contamination under unstable conditions. To understand how variance contamination affects lidar TI estimates, a new set of equations was derived in previous work to characterize the actual variance measured by a lidar. Terms in these equations were quantified using a lidar simulator and modeled wind field, and the new equations were then implemented into the TI error model.
A Clustering Classification of Spare Parts for Improving Inventory Policies
NASA Astrophysics Data System (ADS)
Meri Lumban Raja, Anton; Ai, The Jin; Diar Astanti, Ririn
2016-02-01
Inventory policies in a company may consist of storage, control, and replenishment policies. Since the result of a common ABC inventory classification can only affect the replenishment policy, we propose a clustering-based classification technique as a basis for developing inventory policy, especially storage and control policy. A hierarchical clustering procedure is used after the clustering variables are defined. Since hierarchical clustering requires metric variables only, a step converting non-metric variables to metric variables is performed first. The clusters resulting from the clustering technique are analyzed in order to define each cluster's characteristics. Then, inventory policies are determined for each group according to its characteristics. Real data consisting of 612 items from a local manufacturer's spare-part warehouse are used to show the applicability of the proposed methodology.
Effect of Random Clustering on Surface Damage Density Estimates
Matthews, M J; Feit, M D
2007-10-29
Identification and spatial registration of laser-induced damage relative to incident fluence profiles is often required to characterize the damage properties of laser optics near the damage threshold. Of particular interest in inertial confinement laser systems are large-aperture beam damage tests (>1 cm^2) where the number of initiated damage sites for φ > 14 J/cm^2 can approach 10^5-10^6, requiring automatic microscopy counting to locate and register individual damage sites. However, as was shown for the case of bacteria counting in biology decades ago, random overlapping or 'clumping' prevents accurate counting of Poisson-distributed objects at high densities, and must be accounted for if the underlying statistics are to be understood. In this work we analyze the effect of random clumping on damage initiation density estimates at fluences above the damage threshold. The parameter ψ = aρ = ρ/ρ_0, where a = 1/ρ_0 is the mean damage site area and ρ is the mean number density, is used to characterize the onset of clumping, and approximations based on a simple model are used to derive an expression for clumped damage density vs. fluence and damage site size. The influence of the uncorrected ρ vs. φ curve on damage initiation probability predictions is also discussed.
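The clumping effect can be illustrated with a small Monte Carlo: scatter sites uniformly at random and merge any pair closer than one site diameter, as an automated counter that sees overlapping sites as a single blob effectively would. This is a toy model for intuition, not the authors' analytic approximation; all parameter values are illustrative.

```python
import random

def _find(parent, i):
    # Union-find with path halving.
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def counted_sites(n_true, site_diam, seed=0):
    """Scatter n_true damage sites uniformly in a unit box, merge any
    pair closer than one site diameter, and return the number of
    distinct blobs an automated counter would report."""
    rng = random.Random(seed)
    pts = [(rng.uniform(0, 1), rng.uniform(0, 1)) for _ in range(n_true)]
    parent = list(range(n_true))
    for i in range(n_true):
        for j in range(i + 1, n_true):
            dx = pts[i][0] - pts[j][0]
            dy = pts[i][1] - pts[j][1]
            if dx * dx + dy * dy < site_diam ** 2:
                parent[_find(parent, i)] = _find(parent, j)
    return len({_find(parent, i) for i in range(n_true)})

# The counted fraction of sites drops as the true density (and psi) grows.
print(counted_sites(50, 0.05), counted_sites(500, 0.05))
```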
Recent improvements in ocean heat content estimation
NASA Astrophysics Data System (ADS)
Abraham, J. P.
2015-12-01
Increase of ocean heat content is an outcome of a persistent, ongoing energy imbalance in the Earth's climate system. This imbalance, largely caused by human emissions of greenhouse gases, has produced a multi-decade increase in stored thermal energy within the Earth system, manifest principally as an increase in ocean heat content. Consequently, in order to quantify the rate of global warming, it is necessary to measure the rate of increase of ocean heat content. The historical record of ocean heat content is assembled from a variety of devices with uneven spatial and temporal coverage across the globe. One of the most important historical devices is the eXpendable BathyThermograph (XBT), which has been used for decades to measure ocean temperatures to depths of 700 m and deeper. Here, recent progress in improving the XBT record of upper ocean heat content is described, including corrections to systematic biases, filling in spatial gaps where data do not exist, and the selection of a proper climatology. In addition, comparisons of the revised historical record and CMIP5 climate models are made. There is very good agreement between the models and measurements, with the models slightly under-predicting the increase of ocean heat content in the upper water layers over the past 45 years.
Olanzapine improves social dysfunction in cluster B personality disorder.
Zullino, Daniele F; Quinche, Philippe; Häfliger, Thomas; Stigler, Michael
2002-07-01
Treatment with antipsychotics is a common approach to personality disorder. Conventional antipsychotics may be efficacious particularly against psychoticism, but less so against other symptoms in these patients. They are, furthermore, associated with adverse drug reactions poorly tolerated by patients with personality disorder. Atypical antipsychotics have a more convenient pharmacological profile, with a lower risk of extrapyramidal symptoms, and a broader therapeutic profile, showing some efficacy against impulsivity, aggressivity and affective symptoms. The medical records of ten patients with a DSM-IV diagnosis of a cluster B personality disorder who had received olanzapine treatment were reviewed. A mirror-image design anchored to the start date of olanzapine treatment and extending 8 weeks in either direction was used. The assessment consisted of a qualitative chart review and a retrospective completion of the CGI-C and an adapted French version of the SDAS, using the observer-rated items. The olanzapine dose range was 2.5-20 mg during the 8 weeks of observation. The mean SDAS score (items 1-15) was 28.8+/-8.4 for the 8 weeks preceding olanzapine prescription and improved to 13.6+/-5.8 after 8 weeks of treatment.
Improving Collective Estimations Using Resistance to Social Influence
Madirolas, Gabriel; de Polavieja, Gonzalo G.
2015-01-01
Groups can make precise collective estimations in cases like the weight of an object or the number of items in a volume. However, in other tasks, for example those requiring memory or mental calculation, subjects often give estimations with large deviations from factual values. Allowing members of the group to communicate their estimations has the additional perverse effect of shifting individual estimations even closer to the biased collective estimation. Here we show that this negative effect of social interactions can be turned into a method to improve collective estimations. We first obtained a statistical model of how humans change their estimation when receiving the estimates made by other individuals. Using existing experimental data, we confirmed its prediction that individuals use the weighted geometric mean of private and social estimations. We then used this result, and the fact that each individual uses a different value of the social weight, to devise a method that extracts the subgroups resisting social influence. We found that these subgroups of individuals resisting social influence can make very large improvements in group estimations. This is in contrast to methods using the confidence that each individual declares, for which we find no improvement in group estimations. Moreover, our proposed method does not need historical data to weight individuals by performance. These results show the benefits of using the individual characteristics of the members of a group to better extract collective wisdom. PMID:26565619
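The weighted geometric mean update that the authors confirm can be written down directly; the function name, parameterization, and example numbers below are illustrative, not taken from the paper.

```python
def updated_estimate(private, social, w):
    """Weighted geometric mean of a private estimate and a social
    estimate: private^(1-w) * social^w, where w is the individual's
    social weight (w = 0 means fully resistant to social influence)."""
    return private ** (1.0 - w) * social ** w

# Equal weighting lands at the geometric mean of 100 and 400.
print(updated_estimate(100.0, 400.0, 0.5))  # 200.0
# A fully resistant individual keeps the private estimate.
print(updated_estimate(100.0, 400.0, 0.0))  # 100.0
```

Individuals with small fitted `w` are exactly the "resisting" subgroup the method extracts.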
Improving Performance for Gifted Students in a Cluster Grouping Model
ERIC Educational Resources Information Center
Brulles, Dina; Saunders, Rachel; Cohn, Sanford J.
2010-01-01
Although experts in gifted education widely promote cluster grouping of gifted students, little empirical evidence is available to attest to its effectiveness. This study is an example of comparative action research in the form of a quantitative case study that focused on the mandated cluster grouping practices for gifted students in an urban…
NASA Astrophysics Data System (ADS)
Gallego, Francisco Javier; Stibig, Hans Jürgen
2013-06-01
Several projects dealing with land cover area estimation in large regions consider samples of sites to be analysed with high or very high resolution satellite images. This paper analyses the impact of stratification on the efficiency of sampling schemes of large-support units, or clusters, with a size between 5 km × 5 km and 30 km × 30 km. Cluster sampling schemes are compared with samples of unclustered points, both with and without stratification. The correlograms of land cover classes provide a useful tool to assess the sampling value of clusters in terms of variance; this sampling value is expressed as the “equivalent number of points” of a cluster. We show that the “equivalent number of points” is generally higher for stratified cluster sampling than for non-stratified cluster sampling, although its values remain moderate. When land cover data are acquired by photo-interpretation of tiles extracted from larger images, such as Landsat TM, a sampling plan based on a larger number of smaller clusters might be more efficient.
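One standard way to express a cluster's "equivalent number of points" is through the design-effect formula below, stated for a constant average intra-cluster correlation. This is the textbook version and may differ in detail from the paper's correlogram-based definition; names and values are illustrative.

```python
def equivalent_points(m, mean_intra_corr):
    """'Equivalent number of points' of an m-point cluster under the
    classic design-effect formula: m / (1 + (m - 1) * rho_bar), where
    rho_bar is the mean intra-cluster correlation (e.g. averaged from
    a correlogram of the land-cover class)."""
    return m / (1.0 + (m - 1) * mean_intra_corr)

# Uncorrelated pixels: the cluster is worth its full point count.
print(equivalent_points(100, 0.0))  # 100.0
# Spatial correlation sharply reduces the cluster's sampling value.
print(equivalent_points(100, 0.1))  # ≈ 9.2
```

This is why the paper finds that many small clusters can beat a few large ones: sampling value per photo-interpreted pixel falls quickly as m grows.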
Estimators for Clustered Education RCTs Using the Neyman Model for Causal Inference
ERIC Educational Resources Information Center
Schochet, Peter Z.
2013-01-01
This article examines the estimation of two-stage clustered designs for education randomized control trials (RCTs) using the nonparametric Neyman causal inference framework that underlies experiments. The key distinction between the considered causal models is whether potential treatment and control group outcomes are considered to be fixed for…
How to Estimate the Value of Service Reliability Improvements
Sullivan, Michael J.; Mercurio, Matthew G.; Schellenberg, Josh A.; Eto, Joseph H.
2010-06-08
A robust methodology for estimating the value of service reliability improvements is presented. Although econometric models for estimating value of service (interruption costs) have been established and widely accepted, analysts often resort to applying relatively crude interruption cost estimation techniques in assessing the economic impacts of transmission and distribution investments. This paper first shows how the use of these techniques can substantially impact the estimated value of service improvements. A simple yet robust methodology that does not rely heavily on simplifying assumptions is presented. When a smart grid investment is proposed, reliability improvement is one of the most frequently cited benefits. Using the best methodology for estimating the value of this benefit is imperative. By providing directions on how to implement this methodology, this paper sends a practical, usable message to the industry.
Improving the Discipline of Cost Estimation and Analysis
NASA Technical Reports Server (NTRS)
Piland, William M.; Pine, David J.; Wilson, Delano M.
2000-01-01
The need to improve the quality and accuracy of cost estimates of proposed new aerospace systems has been widely recognized. Industry has done the best job of maintaining this capability, with improvements in estimation methods and appropriate priority given to the hiring and training of qualified analysts. Some parts of Government, and the National Aeronautics and Space Administration (NASA) in particular, continue to need major improvements in this area. Recently, NASA recognized that its cost estimation and analysis capabilities had eroded to the point that the ability to provide timely, reliable estimates was impacting confidence in planning many program activities. As a result, this year the Agency established a lead role for cost estimation and analysis. The Independent Program Assessment Office, located at the Langley Research Center, was given this responsibility.
Improving visual estimates of cervical spine range of motion.
Hirsch, Brandon P; Webb, Matthew L; Bohl, Daniel D; Fu, Michael; Buerba, Rafael A; Gruskay, Jordan A; Grauer, Jonathan N
2014-11-01
Cervical spine range of motion (ROM) is a common measure of cervical conditions, surgical outcomes, and functional impairment. Although ROM is routinely assessed by visual estimation in clinical practice, visual estimates have been shown to be unreliable and inaccurate. Reliable goniometers can be used for assessments, but the associated costs and logistics generally limit their clinical acceptance. To investigate whether training can improve visual estimates of cervical spine ROM, we asked attending surgeons, residents, and medical students at our institution to visually estimate the cervical spine ROM of healthy subjects before and after a training session. This training session included review of normal cervical spine ROM in 3 planes and demonstration of partial and full motion in 3 planes by multiple subjects. Estimates before, immediately after, and 1 month after this training session were compared to assess reliability and accuracy. Immediately after training, errors decreased by 11.9° (flexion-extension), 3.8° (lateral bending), and 2.9° (axial rotation). These improvements were statistically significant. One month after training, visual estimates remained improved, by 9.5°, 1.6°, and 3.1°, respectively, but were statistically significant only in flexion-extension. Although the accuracy of visual estimates can be improved, clinicians should be aware of the limitations of visual estimates of cervical spine ROM. Our study results support scrutiny of visual assessment of ROM as a criterion for diagnosing permanent impairment or disability. PMID:25379754
High-Resolution Spatial Distribution and Estimation of Access to Improved Sanitation in Kenya
Jia, Peng; Anderson, John D.; Leitner, Michael; Rheingans, Richard
2016-01-01
Background: Access to sanitation facilities is imperative in reducing the risk of multiple adverse health outcomes. A distinct disparity in sanitation exists among different wealth levels in many low-income countries, which may hinder the progress across each of the Millennium Development Goals. Methods: The surveyed households in 397 clusters from 2008–2009 Kenya Demographic and Health Surveys were divided into five wealth quintiles based on their national asset scores. A series of spatial analysis methods including excess risk, local spatial autocorrelation, and spatial interpolation were applied to observe disparities in coverage of improved sanitation among different wealth categories. The total number of the population with improved sanitation was estimated by interpolating, time-adjusting, and multiplying the surveyed coverage rates by high-resolution population grids. A comparison was then made with the annual estimates from the United Nations Population Division and the World Health Organization/United Nations Children's Fund Joint Monitoring Program for Water Supply and Sanitation. Results: The Empirical Bayesian Kriging interpolation produced minimal root mean squared error for all clusters and five quintiles while predicting the raw and spatial coverage rates of improved sanitation. The coverage in southern regions was generally higher than in the north and east, and the coverage in the south decreased from Nairobi in all directions, while Nyanza and North Eastern Province had relatively poor coverage. The general clustering trend of high and low sanitation improvement among surveyed clusters was confirmed after spatial smoothing. Conclusions: There exists an apparent disparity in sanitation among different wealth categories across Kenya, and spatially smoothed coverage rates resulted in a closer estimation of the available statistics than raw coverage rates. Future intervention activities need to be tailored for both different wealth categories and nationally
NASA Astrophysics Data System (ADS)
Wu, Jee-Cheng; Wu, Heng-Yang; Tsuei, Gwo-Chyang
2013-10-01
A classical spectral un-mixing of a hyperspectral image involves identifying the unique signatures of the endmembers (i.e. pure materials) and estimating the proportions of endmembers for each pixel by inversion. The key to successful spectral un-mixing is indicating the number of endmembers and their corresponding spectral signatures. Currently, eigenvalue-based estimation of the number of endmembers in a hyperspectral image is widely used. However, eigenvalue-based methods have difficulty separating signal sources such as anomalies. In this paper, a two-stage process is proposed to estimate the number of endmembers. At the preprocessing stage, the spectral dimensions are reduced using principal component analysis and the spatial dimensions are reduced using convex hull computation based on the reduced spectral bands. At the hierarchical agglomerative clustering stage, a pixel vector is found by applying orthogonal subspace projection (OSP), and pixel vectors are clustered hierarchically using the spectral angle mapper (SAM). If the number of pixel vectors in a cluster is greater than a predefined number, the found pixel vector is set as an endmember; otherwise, anomalous vectors are found. The proposed method was tested with both synthetic and real images for estimating the number of endmembers. The results demonstrate that the proposed method estimates a more reasonable and precise number of endmembers than the eigenvalue-based methods.
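The spectral angle mapper (SAM) used at the clustering stage measures the angle between two pixel spectra, so it is insensitive to overall brightness. A minimal sketch (an illustration of the standard SAM measure, not the paper's implementation):

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral Angle Mapper (SAM): angle in radians between two spectra.
    Scaling either spectrum by a positive constant leaves the angle unchanged."""
    cos_t = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos_t, -1.0, 1.0))

# A spectrum and a brighter copy of itself have angle 0.
a = np.array([1.0, 2.0, 3.0])
angle_same = spectral_angle(a, 2.0 * a)   # 0.0
```

Pixels whose mutual angles fall below a threshold would be merged into one cluster during the agglomerative stage.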
NASA Technical Reports Server (NTRS)
Kalton, G.
1983-01-01
A number of surveys were conducted to study the relationship between the level of aircraft or traffic noise exposure experienced by people living in a particular area and their annoyance with it. These surveys generally employ a clustered sample design which affects the precision of the survey estimates. Regression analysis of annoyance on noise measures and other variables is often an important component of the survey analysis. Formulae are presented for estimating the standard errors of regression coefficients and ratio of regression coefficients that are applicable with a two- or three-stage clustered sample design. Using a simple cost function, they also determine the optimum allocation of the sample across the stages of the sample design for the estimation of a regression coefficient.
NASA Astrophysics Data System (ADS)
Mathews, William G.; Guo, Fulai
2011-09-01
The total feedback energy injected into hot gas in galaxy clusters by central black holes can be estimated by comparing the potential energy of observed cluster gas profiles with the potential energy of non-radiating, feedback-free hot gas atmospheres resulting from gravitational collapse in clusters of the same total mass. Feedback energy from cluster-centered black holes expands the cluster gas, lowering the gas-to-dark-matter mass ratio below the cosmic value. Feedback energy is necessarily delivered by radio-emitting jets to distant gas far beyond the cooling radius, where the cooling time equals the cluster lifetime. For clusters of mass (4-11) × 10^14 M_sun, estimates of the total feedback energy, (1-3) × 10^63 erg, far exceed feedback energies estimated from observations of X-ray cavities and shocks in the cluster gas, energies gained from supernovae, and energies lost from cluster gas by radiation. The time-averaged mean feedback luminosity is comparable to those of powerful quasars, implying that some significant fraction of this energy may arise from the spin of the black hole. The universal entropy profile in feedback-free gaseous atmospheres in Navarro-Frenk-White cluster halos can be recovered by multiplying the observed gas entropy profile of any relaxed cluster by a factor involving the gas fraction profile. While the feedback energy and associated mass outflow in the clusters we consider far exceed that necessary to stop cooling inflow, the time-averaged mass outflow at the cooling radius almost exactly balances the mass that cools within this radius, an essential condition to shut down cluster cooling flows.
Improved Noise-Power Estimators Based On Order Statistics
NASA Technical Reports Server (NTRS)
Zimmerman, George A.
1995-01-01
A technique based on order statistics enables the design of improved noise-power estimators. In the original intended application, the noise-power estimators are part of the microwave-signal-processing system of the Search for Extraterrestrial Intelligence project. The technique involves limiting the dynamic range of the value to be estimated, making it possible to achieve the performance of an order-statistical estimator with simple algorithms and equipment and with only one pass over the input data. The technique is also applicable to other signal-detection systems and to image-detection systems required to exhibit constant false-alarm rates.
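The appeal of order statistics here is robustness: a mean noise-power estimate is dragged upward by strong signals, while a quantile is not. A hedged sketch using the sample median (an illustration of the general idea, not the specific SETI estimator):

```python
import numpy as np

rng = np.random.default_rng(0)

def median_noise_power(samples):
    """Order-statistic noise-power estimate.  For exponentially distributed
    power samples (chi-square with 2 degrees of freedom), the population
    median equals mean * ln 2, so mean = median / ln 2.  The median is
    nearly unaffected by a small fraction of strong signal outliers."""
    return np.median(samples) / np.log(2.0)

noise = rng.exponential(scale=1.0, size=100_000)
contaminated = np.concatenate([noise, np.full(100, 1e6)])  # a few strong signals
est = median_noise_power(contaminated)  # stays near the true noise power of 1.0
```

A mean-based estimate on the same contaminated data would be wrong by three orders of magnitude.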
NASA Astrophysics Data System (ADS)
Mahmoud, E.; Takey, A.; Shoukry, A.
2016-07-01
We develop a galaxy cluster finding algorithm based on a spectral clustering technique to identify optical counterparts and estimate optical redshifts for X-ray selected cluster candidates. As an application, we run our algorithm on a sample of X-ray cluster candidates selected from the third XMM-Newton serendipitous source catalog (3XMM-DR5) that are located in Stripe 82 of the Sloan Digital Sky Survey (SDSS). Our method works on galaxies described in the color-magnitude feature space. We begin by examining 45 galaxy clusters with published spectroscopic redshifts in the range of 0.1-0.8 with a median of 0.36. As a result, we are able to identify their optical counterparts and estimate their photometric redshifts, which have a typical accuracy of 0.025 and agree with the published ones. Then, we investigate another 40 X-ray cluster candidates (from the same cluster survey) with no redshift information in the literature and find that 12 candidates can be considered galaxy clusters in the redshift range from 0.29 to 0.76 with a median of 0.57. These systems are newly discovered clusters in both X-ray and optical data. Among them, 7 clusters have spectroscopic redshifts for at least one member galaxy.
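The core of any spectral clustering step is an eigen-decomposition of a graph Laplacian built from pairwise affinities. A minimal, generic two-group sketch in a mock color-magnitude plane (the data, bandwidth, and bipartition are illustrative; the paper's algorithm is more elaborate):

```python
import numpy as np

def spectral_bipartition(X, sigma=1.0):
    """Minimal spectral clustering sketch: split points into two groups using
    the sign of the Fiedler vector of the normalized graph Laplacian."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))                 # RBF affinity matrix
    D = W.sum(1)
    L = np.eye(len(X)) - W / np.sqrt(np.outer(D, D))   # normalized Laplacian
    vals, vecs = np.linalg.eigh(L)
    fiedler = vecs[:, 1]        # eigenvector of the second-smallest eigenvalue
    return (fiedler > 0).astype(int)

# Two well-separated blobs standing in for cluster members vs. field galaxies.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(3, 0.1, (20, 2))])
labels = spectral_bipartition(X)
```

Real catalogs need a k-way extension (several eigenvectors plus k-means) and a redshift assignment step, which the paper provides.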
Improvements in estimating proportions of objects from multispectral data
NASA Technical Reports Server (NTRS)
Horwitz, H. M.; Hyde, P. D.; Richardson, W.
1974-01-01
Methods for estimating proportions of objects and materials imaged within the instantaneous field of view of a multispectral sensor were developed further. Improvements in the basic proportion estimation algorithm were devised, as well as improved alien object detection procedures. Also, a simplified signature set analysis scheme was introduced for determining the adequacy of signature set geometry for satisfactory proportion estimation. Averaging procedures used in conjunction with the mixtures algorithm were examined theoretically and applied to artificially generated multispectral data. A computationally simpler estimator was considered and found unsatisfactory. Experiments conducted to find a suitable procedure for setting the alien object threshold yielded few definitive results. Mixtures procedures were used on a limited amount of ERTS data to estimate wheat proportion in selected areas. Results were unsatisfactory, partly because of the ill-conditioned nature of the pure signature set.
Unsupervised feature relevance analysis applied to improve ECG heartbeat clustering.
Rodríguez-Sotelo, J L; Peluffo-Ordoñez, D; Cuesta-Frau, D; Castellanos-Domínguez, G
2012-10-01
The computer-assisted analysis of biomedical records has become an essential tool in clinical settings. However, current devices provide a growing amount of data that often exceeds the processing capacity of normal computers. As this amount of information rises, new demands for more efficient data extracting methods appear. This paper addresses the task of data mining in physiological records using a feature selection scheme. An unsupervised method based on relevance analysis is described. This scheme uses a least-squares optimization of the input feature matrix in a single iteration. The output of the algorithm is a feature weighting vector. The performance of the method was assessed using a heartbeat clustering test on real ECG records. The quantitative cluster validity measures yielded a correctly classified heartbeat rate of 98.69% (specificity), 85.88% (sensitivity) and 95.04% (general clustering performance), which is even higher than the performance achieved by other similar ECG clustering studies. The number of features was reduced on average from 100 to 18, and the temporal cost was 43% lower than in previous ECG clustering schemes. PMID:22672933
Kasaie, Parastu; Mathema, Barun; Kelton, W David; Azman, Andrew S; Pennington, Jeff; Dowdy, David W
2015-01-01
In any setting, a proportion of incident active tuberculosis (TB) reflects recent transmission ("recent transmission proportion"), whereas the remainder represents reactivation. Appropriately estimating the recent transmission proportion has important implications for local TB control, but existing approaches have known biases, especially where data are incomplete. We constructed a stochastic individual-based model of a TB epidemic and designed a set of simulations (derivation set) to develop two regression-based tools for estimating the recent transmission proportion from five inputs: underlying TB incidence, sampling coverage, study duration, clustered proportion of observed cases, and proportion of observed clusters in the sample. We tested these tools on a set of unrelated simulations (validation set), and compared their performance against that of the traditional 'n-1' approach. In the validation set, the regression tools reduced the absolute estimation bias (difference between estimated and true recent transmission proportion) in the 'n-1' technique by a median [interquartile range] of 60% [9%, 82%] and 69% [30%, 87%]. The bias in the 'n-1' model was highly sensitive to underlying levels of study coverage and duration, and substantially underestimated the recent transmission proportion in settings of incomplete data coverage. By contrast, the regression models' performance was more consistent across different epidemiological settings and study characteristics. We provide one of these regression models as a user-friendly, web-based tool. Novel tools can improve our ability to estimate the recent TB transmission proportion from data that are observable (or estimable) by public health practitioners with limited available molecular data.
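The traditional 'n-1' baseline that the paper's regression tools correct can be stated in a few lines: each genotype cluster of size n contributes n - 1 cases attributed to recent transmission. A sketch (numbers are illustrative):

```python
def n_minus_1_proportion(cluster_sizes, n_total):
    """Classic 'n-1' estimate of the recent transmission proportion:
    each cluster of size n contributes n - 1 'recently transmitted' cases,
    the presumed index case of each cluster being a reactivation."""
    clustered_excess = sum(n - 1 for n in cluster_sizes if n > 1)
    return clustered_excess / n_total

# Three genotype clusters of sizes 4, 3, and 2 among 20 cases: (3 + 2 + 1) / 20
p = n_minus_1_proportion([4, 3, 2], 20)   # 0.3
```

As the abstract notes, this estimator is biased downward when sampling coverage or study duration is limited, because clusters are fragmented or missed; the regression tools adjust for exactly those inputs.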
Distributing Power Grid State Estimation on HPC Clusters A System Architecture Prototype
Liu, Yan; Jiang, Wei; Jin, Shuangshuang; Rice, Mark J.; Chen, Yousu
2012-08-20
The future power grid is expected to further expand with highly distributed energy sources and smart loads. The increased size and complexity lead to an increased burden on existing computational resources in energy control centers. Thus the need to perform real-time assessment on such systems entails efficient means to distribute centralized functions such as state estimation in the power system. In this paper, we present an early prototype of a system architecture that connects distributed state estimators, each running parallel programs to solve the non-linear estimation procedure. The prototype consists of a middleware and data processing toolkits that allow data exchange in the distributed state estimation. We build a test case based on the IEEE 118-bus system and partition the state estimation of the whole system model across available HPC clusters. Measurements from the testbed demonstrate the low overhead of our solution.
Space-time stick-breaking processes for small area disease cluster estimation.
Hossain, Md Monir; Lawson, Andrew B; Cai, Bo; Choi, Jungsoon; Liu, Jihong; Kirby, Russell S
2013-03-01
We propose a space-time stick-breaking process for disease cluster estimation. The dependencies for spatial and temporal effects are introduced by using space-time covariate-dependent kernel stick-breaking processes. We compared this model with the space-time standard random effect model by checking each model's ability to detect clusters of various shapes and sizes. This comparison was made for simulated data where the true risks were known. For the simulated data, we observed that the space-time stick-breaking process performs better in detecting medium- and high-risk clusters. For the real data, county-specific low birth weight incidences for the state of South Carolina for the years 1997-2007, we illustrate how the proposed model can be used to find groupings of counties with higher incidence rates.
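The stick-breaking construction underlying such models is compact enough to sketch in its simplest form: truncated, with no spatial or temporal covariate dependence (the paper's kernels add exactly that dependence; the truncation level here is ours):

```python
import numpy as np

def stick_breaking_weights(n, alpha, rng):
    """Draw truncated stick-breaking (Dirichlet-process) weights:
    v_k ~ Beta(1, alpha), and w_k = v_k * prod_{j<k} (1 - v_j),
    i.e. each weight takes a Beta fraction of the remaining stick."""
    v = rng.beta(1.0, alpha, size=n)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - v)[:-1]])
    return v * remaining

rng = np.random.default_rng(0)
w = stick_breaking_weights(50, alpha=2.0, rng=rng)   # non-negative, sums to < 1
```

In the space-time version, the Beta draws are tilted by kernels of location and time, so nearby areas and periods share similar mixing weights.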
Improving The Discipline of Cost Estimation and Analysis
NASA Technical Reports Server (NTRS)
Piland, William M.; Pine, David J.; Wilson, Delano M.
2000-01-01
The need to improve the quality and accuracy of cost estimates of proposed new aerospace systems has been widely recognized. The industry has done the best job of maintaining related capability with improvements in estimation methods and giving appropriate priority to the hiring and training of qualified analysts. Some parts of Government, and National Aeronautics and Space Administration (NASA) in particular, continue to need major improvements in this area. Recently, NASA recognized that its cost estimation and analysis capabilities had eroded to the point that the ability to provide timely, reliable estimates was impacting the confidence in planning many program activities. As a result, this year the Agency established a lead role for cost estimation and analysis. The Independent Program Assessment Office located at the Langley Research Center was given this responsibility. This paper presents the plans for the newly established role. Described is how the Independent Program Assessment Office, working with all NASA Centers, NASA Headquarters, other Government agencies, and industry, is focused on creating cost estimation and analysis as a professional discipline that will be recognized equally with the technical disciplines needed to design new space and aeronautics activities. Investments in selected, new analysis tools, creating advanced training opportunities for analysts, and developing career paths for future analysts engaged in the discipline are all elements of the plan. Plans also include increasing the human resources available to conduct independent cost analysis of Agency programs during their formulation, to improve near-term capability to conduct economic cost-benefit assessments, to support NASA management's decision process, and to provide cost analysis results emphasizing "full-cost" and "full-life cycle" considerations. The Agency cost analysis improvement plan has been approved for implementation starting this calendar year. Adequate financial
Estimating galaxy cluster magnetic fields by the classical and hadronic minimum energy criterion
NASA Astrophysics Data System (ADS)
Pfrommer, C.; Enßlin, T. A.
2004-07-01
We wish to estimate magnetic field strengths of radio emitting galaxy clusters by minimizing the non-thermal energy density contained in cosmic ray electrons (CRe), protons (CRp), and magnetic fields. The classical minimum energy estimate can be constructed independently of the origin of the radio synchrotron emitting CRe, thus yielding an absolute minimum of the non-thermal energy density. Provided the observed synchrotron emission is generated by a CRe population originating from hadronic interactions of CRp with the ambient thermal gas of the intra-cluster medium, the parameter space of the classical scenario can be tightened by means of the hadronic minimum energy criterion. For both approaches, we derive the theoretically expected tolerance regions for the inferred minimum energy densities. Application to the radio halo of the Coma cluster and the radio mini-halo of the Perseus cluster yields equipartition between cosmic rays and magnetic fields within the expected tolerance regions. In the hadronic scenario, the inferred central magnetic field strength ranges from 2.4 μG (Coma) to 8.8 μG (Perseus), while the optimal CRp energy density is constrained to (2 ± 1) per cent of the thermal energy density (Perseus). We discuss the possibility of a hadronic origin of the Coma radio halo while current observations favour such a scenario for the Perseus radio mini-halo. Combining future expected detections of radio synchrotron, hard X-ray inverse Compton, and hadronically induced γ-ray emission should allow an estimate of volume averaged cluster magnetic fields and provide information about their dynamical state.
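The shape of the classical minimum-energy argument can be illustrated numerically. In this hedged sketch, the CRe energy density required to reproduce a fixed synchrotron flux is taken to scale as B^(-3/2), the prefactor C (which bundles the observed radio luminosity) is arbitrary, and CRp and the tolerance regions of the paper are omitted:

```python
import numpy as np

def minimum_energy_B(C=1.0):
    """Locate the field strength minimizing the total non-thermal energy
    density u(B) = B^2/(8 pi) + C * B^(-3/2): magnetic term plus the CRe
    energy needed to produce a fixed synchrotron flux.  Analytically the
    minimum sits at B = (6 pi C)^(2/7), close to equipartition."""
    B = np.logspace(-1, 2, 10_000)          # field grid (arbitrary units)
    u = B**2 / (8 * np.pi) + C * B**-1.5    # total non-thermal energy density
    return B[np.argmin(u)]

B_min = minimum_energy_B()   # ≈ (6 pi)^(2/7) ≈ 2.31 for C = 1
```

The hadronic criterion of the paper tightens this by linking the CRe to the CRp population, which changes the exponent and prefactor but not the structure of the minimization.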
Improving terrain height estimates from RADARSAT interferometric measurements
Thompson, P.A.; Eichel, P.H.; Calloway, T.M.
1998-03-01
The authors describe two methods of combining two-pass RADARSAT interferometric phase maps with existing DTED (digital terrain elevation data) to produce improved terrain height estimates. The first is a least-squares estimation procedure that fits the unwrapped phase data to a phase map computed from the DTED. The second is a filtering technique that combines the interferometric height map with the DTED map based on spatial frequency content. Both methods preserve the high fidelity of the interferometric data.
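The second (filtering) method can be sketched as a simple 2-D Fourier split: keep the low spatial frequencies of the coarse DTED map and the high frequencies of the interferometric map. The arrays, cutoff, and hard low-pass mask below are illustrative, not the authors' processing chain:

```python
import numpy as np

def blend_heights(dted, insar, cutoff_frac=0.1):
    """Combine a coarse reference DEM with a detailed interferometric height
    map by spatial frequency: DTED below the cutoff, InSAR above it."""
    f_dted, f_insar = np.fft.fft2(dted), np.fft.fft2(insar)
    ny, nx = dted.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    lowpass = np.sqrt(fy**2 + fx**2) < cutoff_frac   # hard frequency mask
    blended = np.where(lowpass, f_dted, f_insar)
    return np.fft.ifft2(blended).real

dted = np.zeros((32, 32))                                 # smooth reference terrain
insar = np.random.default_rng(0).normal(0, 1, (32, 32))   # detailed, noisy map
out = blend_heights(dted, insar)
```

A production filter would taper the transition rather than cut it hard, to avoid ringing at the cutoff.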
Improved Versions of Common Estimators of the Recombination Rate.
Gärtner, Kerstin; Futschik, Andreas
2016-09-01
The scaled recombination parameter ρ is one of the key parameters, turning up frequently in population genetic models. Accurate estimates of ρ are difficult to obtain, as recombination events do not always leave traces in the data. One of the most widely used approaches is composite likelihood. Here, we show that popular implementations of composite likelihood estimators can often be uniformly improved by optimizing the trade-off between bias and variance. The amount of possible improvement depends on parameters such as the sequence length, the sample size, and the mutation rate, and it can be considerable in some cases. It turns out that approximate Bayesian computation, with composite likelihood as a summary statistic, also leads to improved estimates, but now in terms of the posterior risk. Finally, we demonstrate a practical application on real data from Drosophila. PMID:27409412
Distributed Noise Generation for Density Estimation Based Clustering without Trusted Third Party
NASA Astrophysics Data System (ADS)
Su, Chunhua; Bao, Feng; Zhou, Jianying; Takagi, Tsuyoshi; Sakurai, Kouichi
The rapid growth of the Internet provides people with tremendous opportunities for data collection, knowledge discovery and cooperative computation. However, it also brings the problem of sensitive information leakage. Both individuals and enterprises may suffer from the massive data collection and the information retrieval by distrusted parties. In this paper, we propose a privacy-preserving protocol for distributed kernel density estimation-based clustering. Our scheme applies the random data perturbation (RDP) technique and verifiable secret sharing to solve the security problem of the distributed kernel density estimation in [4], which assumed a mediating party to help in the computation.
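The non-private building block being protected here is kernel density estimation-based clustering: estimate the density, then split the data at its valleys. A minimal one-dimensional sketch (data, bandwidth, and the two-mode setup are illustrative; the paper's contribution is doing this across parties without leaking the data):

```python
import numpy as np

def kde(x, data, h):
    """Gaussian kernel density estimate at points x from 1-D data."""
    z = (x[:, None] - data[None, :]) / h
    return np.exp(-0.5 * z**2).sum(1) / (len(data) * h * np.sqrt(2 * np.pi))

# Two modes produce a valley in the density; a KDE-based clusterer splits there.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2, 0.3, 200), rng.normal(2, 0.3, 200)])
grid = np.linspace(-4, 4, 401)
dens = kde(grid, data, h=0.3)
split = grid[np.argmin(dens[100:301]) + 100]   # density minimum between the modes
labels = (data > split).astype(int)
```

In the distributed protocol, each party's contribution to the density sum is perturbed and shared so the joint estimate can be computed without any single trusted aggregator.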
Improved Estimation and Interpretation of Correlations in Neural Circuits
Yatsenko, Dimitri; Josić, Krešimir; Ecker, Alexander S.; Froudarakis, Emmanouil; Cotton, R. James; Tolias, Andreas S.
2015-01-01
Ambitious projects aim to record the activity of ever larger and denser neuronal populations in vivo. Correlations in neural activity measured in such recordings can reveal important aspects of neural circuit organization. However, estimating and interpreting large correlation matrices is statistically challenging. Estimation can be improved by regularization, i.e. by imposing a structure on the estimate. The amount of improvement depends on how closely the assumed structure represents dependencies in the data. Therefore, the selection of the most efficient correlation matrix estimator for a given neural circuit must be determined empirically. Importantly, the identity and structure of the most efficient estimator informs about the types of dominant dependencies governing the system. We sought statistically efficient estimators of neural correlation matrices in recordings from large, dense groups of cortical neurons. Using fast 3D random-access laser scanning microscopy of calcium signals, we recorded the activity of nearly every neuron in volumes 200 μm wide and 100 μm deep (150–350 cells) in mouse visual cortex. We hypothesized that in these densely sampled recordings, the correlation matrix should be best modeled as the combination of a sparse graph of pairwise partial correlations representing local interactions and a low-rank component representing common fluctuations and external inputs. Indeed, in cross-validation tests, the covariance matrix estimator with this structure consistently outperformed other regularized estimators. The sparse component of the estimate defined a graph of interactions. These interactions reflected the physical distances and orientation tuning properties of cells: The density of positive ‘excitatory’ interactions decreased rapidly with geometric distances and with differences in orientation preference whereas negative ‘inhibitory’ interactions were less selective. Because of its superior performance, this
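As a stand-in for the paper's sparse-plus-low-rank estimator, a much simpler regularized estimator (shrinkage of the sample covariance toward its diagonal) illustrates why imposing structure helps when neurons outnumber samples; the shrinkage weight and toy data below are ours:

```python
import numpy as np

def shrink_covariance(X, alpha):
    """Regularized covariance: blend the sample covariance with its diagonal.
    With fewer samples than variables the sample covariance is singular,
    but the shrunken estimate is positive definite for any alpha > 0."""
    S = np.cov(X, rowvar=False)
    target = np.diag(np.diag(S))
    return (1 - alpha) * S + alpha * target

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 100))          # 30 samples, 100 'neurons'
S_reg = shrink_covariance(X, alpha=0.5)
```

The paper's point is that the *choice* of structure matters: on densely sampled cortical data, a sparse partial-correlation graph plus a low-rank common-input term outperformed simpler targets like this one in cross-validation.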
ERIC Educational Resources Information Center
Rhoads, Christopher
2014-01-01
Recent publications have drawn attention to the idea of utilizing prior information about the correlation structure to improve statistical power in cluster randomized experiments. Because power in cluster randomized designs is a function of many different parameters, it has been difficult for applied researchers to discern a simple rule explaining…
A simple recipe for estimating masses of elliptical galaxies and clusters of galaxies
NASA Astrophysics Data System (ADS)
Lyskova, N.
2013-04-01
We discuss a simple and robust procedure to evaluate the mass/circular velocity of massive elliptical galaxies and clusters of galaxies. It relies only on the surface density and the projected velocity dispersion profiles of tracer particles and therefore can be applied even in case of poor or noisy observational data. Stars, globular clusters or planetary nebulae can be used as tracers for mass determination of elliptical galaxies. For clusters the galaxies themselves can be used as tracer particles. The key element of the proposed procedure is the selection of a 'sweet' radius R_sweet, where the sensitivity to the unknown anisotropy of the tracers' orbits is minimal. At this radius the surface density of tracers declines approximately as I(R) ∝ R^-2, thus placing R_sweet not far from the half-light radius of the tracers, R_eff. The procedure was tested on a sample of cosmological simulations of individual galaxies and galaxy clusters and then applied to real observational data. Independently the total mass profile was derived from the hydrostatic equilibrium equation for the gaseous atmosphere. Mismatch in mass profiles obtained from optical and X-ray data is used to estimate the non-thermal contribution to the gas pressure and/or to constrain the distribution of tracers' orbits.
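Locating the sweet radius is a one-liner once the surface density profile is tabulated: find where the logarithmic slope equals -2. A sketch on an analytic profile (the profile and grid are ours, chosen so the answer is known exactly):

```python
import numpy as np

def sweet_radius(R, I):
    """Locate the 'sweet' radius where the projected tracer surface density
    falls as I(R) ∝ R^-2, i.e. where d ln I / d ln R = -2."""
    slope = np.gradient(np.log(I), np.log(R))
    return R[np.argmin(np.abs(slope + 2.0))]

# For I(R) = (1 + R^2)^-2, the log-slope is -4R^2/(1 + R^2), which
# equals -2 exactly at R = 1.
R = np.logspace(-1, 1, 2001)
I = (1.0 + R**2) ** -2
R_sweet = sweet_radius(R, I)   # ≈ 1.0
```

With real data one would fit a smooth curve to the binned surface density first, since numerical slopes of noisy profiles are unstable.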
Distance Estimates for High Redshift Clusters SZ and X-Ray Measurements
NASA Technical Reports Server (NTRS)
Joy, Marshall K.
1999-01-01
I present interferometric images of the Sunyaev-Zel'dovich effect for the high redshift (z > 0.5) galaxy clusters in the Einstein Medium Sensitivity Survey: MS0451.5-0305 (z = 0.54), MS0015.9+1609 (z = 0.55), MS2053.7-0449 (z = 0.58), MS1137.5+6625 (z = 0.78), and MS1054.5-0321 (z = 0.83). Isothermal β models are applied to the data to determine the magnitude of the Sunyaev-Zel'dovich (S-Z) decrement in each cluster. Complementary ROSAT PSPC and HRI x-ray data are also analyzed, and are combined with the S-Z data to generate an independent estimate of the cluster distance. Since the Sunyaev-Zel'dovich effect is invariant with redshift, sensitive S-Z imaging can provide an independent determination of the size, shape, density, and distance of high redshift galaxy clusters; we will discuss current systematic uncertainties with this approach, as well as future observations which will yield stronger constraints.
Improving Quantum State Estimation with Mutually Unbiased Bases
NASA Astrophysics Data System (ADS)
Adamson, R. B. A.; Steinberg, A. M.
2010-07-01
When used in quantum state estimation, projections onto mutually unbiased bases have the ability to maximize information extraction per measurement and to minimize redundancy. We present the first experimental demonstration of quantum state tomography of two-qubit polarization states to take advantage of mutually unbiased bases. We demonstrate improved state estimation as compared to standard measurement strategies and discuss how this can be understood from the structure of the measurements we use. We experimentally compared our method to the standard state estimation method for three different states and observe that the infidelity was up to 1.84±0.06 times lower by using our technique than it was by using standard state estimation methods.
Thompson, William L.
2001-07-01
Monitoring population numbers is important for assessing trends and meeting various legislative mandates. However, sampling across time introduces a temporal aspect to survey design in addition to the spatial one. For instance, a sample that is initially representative may lose this attribute if there is a shift in numbers and/or spatial distribution in the underlying population that is not reflected in later sampled plots. Plot selection methods that account for this temporal variability will produce the best trend estimates. Consequently, I used simulation to compare bias and relative precision of estimates of population change among stratified and unstratified sampling designs based on permanent, temporary, and partial replacement plots under varying levels of spatial clustering, density, and temporal shifting of populations. Permanent plots produced more precise estimates of change than temporary plots across all factors. Further, permanent plots performed better than partial replacement plots except for high density (5 and 10 individuals per plot) and 25%-50% shifts in the population. Stratified designs always produced less precise estimates of population change for all three plot selection methods, and often produced biased change estimates and greatly inflated variance estimates under sampling with partial replacement. Hence, stratification that remains fixed across time should be avoided when monitoring populations that are likely to exhibit large changes in numbers and/or spatial distribution during the study period. Key words: bias; change estimation; monitoring; permanent plots; relative precision; sampling with partial replacement; temporary plots.
Yuwono, Mitchell; Su, Steven W; Moulton, Bruce D; Nguyen, Hung T
2013-01-01
When undertaking gait analysis, one of the most important factors to consider is heel-strike (HS). Signals from a waist-worn Inertial Measurement Unit (IMU) provide sufficient accelerometric and gyroscopic information for estimating gait parameters and identifying HS events. In this paper we propose a novel adaptive, unsupervised, and parameter-free identification method for detection of HS events during gait episodes. Our proposed method allows the device to learn and adapt to the profile of the user without the need of supervision. The algorithm is completely parameter-free and requires no prior fine tuning. Autocorrelation features (ACF) of both antero-posterior acceleration (aAP) and medio-lateral acceleration (aML) are used to determine cadence episodes. The Discrete Wavelet Transform (DWT) features of signal peaks during cadence are extracted and clustered using Swarm Rapid Centroid Estimation (Swarm RCE). Left HS (LHS), Right HS (RHS), and movement artifacts are clustered based on intra-cluster correlation. Initial pilot testing of the system on 8 subjects shows promising results, with up to 84.3% ± 9.2% and 86.7% ± 6.9% average accuracy and 86.8% ± 9.2% and 88.9% ± 7.1% average precision for the segmentation of LHS and RHS, respectively. PMID:24109847
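The autocorrelation feature that flags cadence episodes exploits the periodicity of walking: the ACF of the acceleration trace peaks at the stride period. A hedged sketch on a synthetic trace (the sampling rate, search window, and sinusoidal signal are ours):

```python
import numpy as np

def cadence_period(accel, fs):
    """Estimate the dominant gait period from the autocorrelation of an
    acceleration trace: the first local ACF maximum after lag 0."""
    x = accel - accel.mean()
    acf = np.correlate(x, x, mode='full')[len(x) - 1:]   # lags 0 .. n-1
    acf /= acf[0]
    # search a physiologically plausible window, 0.3 s to 2 s
    lo, hi = int(0.3 * fs), int(2 * fs)
    lag = np.argmax(acf[lo:hi]) + lo
    return lag / fs

fs = 100.0                                # Hz
t = np.arange(0, 10, 1 / fs)
accel = np.sin(2 * np.pi * 1.25 * t)      # synthetic 0.8 s period
period = cadence_period(accel, fs)        # ≈ 0.8 s
```

On real IMU data the ACF peak is broader and the paper clusters DWT features of the peaks to separate left and right heel-strikes, which this sketch does not attempt.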
Motion estimation in the frequency domain using fuzzy c-planes clustering.
Erdem, C E; Karabulut, G Z; Yanmaz, E; Anarim, E
2001-01-01
A recent work explicitly models the discontinuous motion estimation problem in the frequency domain, where the motion parameters are estimated using a harmonic retrieval approach. The vertical and horizontal components of the motion are independently estimated from the locations of the peaks of the respective periodogram analyses, and they are paired to obtain the motion vectors using a previously proposed procedure. In this paper, we present a more efficient method that replaces the motion component pairing task and hence eliminates the problems of the pairing method. The method described in this paper uses the fuzzy c-planes (FCP) clustering approach to fit planes to three-dimensional (3-D) frequency domain data obtained from the peaks of the periodograms. Experimental results are provided to demonstrate the effectiveness of the proposed method.
Leon-Perez, Jose M; Notelaers, Guy; Arenas, Alicia; Munduate, Lourdes; Medina, Francisco J
2014-05-01
Research findings underline the negative effects of exposure to bullying behaviors and document the detrimental health effects of being a victim of workplace bullying. While no one disputes its negative consequences, debate continues about the magnitude of this phenomenon since very different prevalence rates of workplace bullying have been reported. Methodological aspects may explain these findings. Our contribution to this debate integrates behavioral and self-labeling estimation methods of workplace bullying into a measurement model that constitutes a bullying typology. Results in the present sample (n = 1,619) revealed that six different groups can be distinguished according to the nature and intensity of reported bullying behaviors. These clusters portray different paths for the workplace bullying process, where negative work-related and person-degrading behaviors are strongly intertwined. The analysis of the external validity showed that integrating previous estimation methods into a single measurement latent class model provides a reliable estimation method of workplace bullying, which may overcome previous flaws.
Age estimates of globular clusters in the Milky Way: constraints on cosmology.
Krauss, Lawrence M; Chaboyer, Brian
2003-01-01
Recent observations of stellar globular clusters in the Milky Way Galaxy, combined with revised ranges of parameters in stellar evolution codes and new estimates of the earliest epoch of globular cluster formation, result in a 95% confidence level lower limit on the age of the Universe of 11.2 billion years. This age is inconsistent with the expansion age for a flat Universe for the currently allowed range of the Hubble constant, unless the cosmic equation of state is dominated by a component that violates the strong energy condition. This means that the three fundamental observables in cosmology (the age of the Universe, the distance-redshift relation, and the geometry of the Universe) now independently support the case for a dark energy-dominated Universe.
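The inconsistency can be checked with one-line arithmetic: a flat, matter-only universe has expansion age t = (2/3)/H0, which falls short of the quoted 11.2 Gyr lower limit for any plausible Hubble constant. The H0 value below is an assumed, illustrative input.

```python
# Expansion age of a flat, matter-dominated universe: t = (2/3) / H0.
MPC_KM = 3.0857e19          # kilometres per megaparsec
GYR_S = 3.1557e16           # seconds per gigayear

def hubble_time_gyr(H0_km_s_Mpc):
    """1/H0 expressed in Gyr."""
    return MPC_KM / H0_km_s_Mpc / GYR_S

H0 = 70.0                        # assumed, illustrative value
t_H = hubble_time_gyr(H0)        # ≈ 13.97 Gyr
t_matter = (2.0 / 3.0) * t_H     # ≈ 9.3 Gyr for a flat, matter-only universe

# A 95% lower limit of 11.2 Gyr on the Universe's age exceeds t_matter,
# so a flat matter-only universe is ruled out for plausible H0.
print(round(t_H, 2), round(t_matter, 2))
```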
Pu, Xiangke; Gao, Ge; Fan, Yubo; Wang, Mian
2016-01-01
Randomized response is a research method for obtaining accurate answers to sensitive questions in structured sample surveys. Simple random sampling is widely used in surveys of sensitive questions but is hard to apply to large target populations. On the other hand, more sophisticated sampling regimes and the corresponding formulas are seldom employed in sensitive-question surveys. In this work, we developed a series of formulas for parameter estimation in cluster sampling and stratified cluster sampling under two kinds of randomized response models, using classic sampling theories and total probability formulas. The performance of the sampling methods and formulas in a survey of premarital sex and cheating on exams at Soochow University is also reported. The reliability of the survey methods and formulas for sensitive-question surveys was found to be high.
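As a sketch of the kind of formula involved, here is the classical Warner randomized response estimator, extended naively to cluster sampling by size-weighting per-cluster estimates. The paper's own cluster and stratified-cluster formulas are more elaborate, and the numbers below are illustrative.

```python
import numpy as np

def warner_pi_hat(yes, n, p):
    """Warner randomized-response estimate of a sensitive proportion.

    With probability p the respondent answers the sensitive question,
    otherwise its complement, so P(yes) = p*pi + (1-p)*(1-pi) and
    pi_hat = (lambda_hat + p - 1) / (2p - 1)."""
    lam = yes / n
    return (lam + p - 1.0) / (2.0 * p - 1.0)

def cluster_pi_hat(cluster_yes, cluster_n, p):
    """Naive cluster-sampling combination: size-weighted mean of
    per-cluster Warner estimates (a sketch, not the paper's formulas)."""
    pis = [warner_pi_hat(y, n, p) for y, n in zip(cluster_yes, cluster_n)]
    return float(np.average(pis, weights=np.asarray(cluster_n, float)))

# illustration: true pi = 0.2 with p = 0.7 gives an expected yes-rate of
# 0.4*0.2 + 0.3 = 0.38, i.e. 380 "yes" answers out of 1000
print(warner_pi_hat(yes=380, n=1000, p=0.7))  # → 0.2 (up to rounding)
```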
Improving RESTORE for robust diffusion tensor estimation: a simulation study
NASA Astrophysics Data System (ADS)
Chang, Lin-Ching
2010-03-01
Diffusion tensor magnetic resonance imaging (DT-MRI) is increasingly used in clinical research and applications for its ability to depict white matter tracts and for its sensitivity to microstructural and architectural features of brain tissue. However, artifacts are common in clinical DT-MRI acquisitions. Signal perturbations produced by such artifacts can be severe and neglecting to account for their contribution can result in erroneous diffusion tensor values. The Robust Estimation of Tensors by Outlier Rejection (RESTORE) has been demonstrated to be an effective method for improving tensor estimation on a voxel-by-voxel basis in the presence of artifactual data points in diffusion weighted images. Despite the very good performance of the RESTORE algorithm, there are some limitations and opportunities for improvement. Instabilities in tensor estimation using RESTORE have been observed in clinical human brain data. Those instabilities can come from the intrinsic high frequency spin inflow effects in non-DWIs or from excluding too many data points from the fitting. This paper proposes several practical constraints to the original RESTORE method. Results from Monte Carlo simulation indicate that the improved RESTORE method reduces the instabilities in tensor estimation observed from the original RESTORE method.
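The outlier-rejection idea can be sketched in ordinary linear least squares: iteratively refit after excluding points whose residuals exceed a robust threshold, with a floor on how many points may be dropped. The `min_keep` guard stands in for the practical constraints the paper proposes; this is an illustration, not the published RESTORE algorithm.

```python
import numpy as np

def robust_fit(A, y, thresh=3.0, max_iter=10, min_keep=None):
    """Iteratively reweighted robust linear least squares: refit after
    excluding points whose residuals exceed `thresh` robust (MAD-based)
    standard deviations, never keeping fewer than `min_keep` points,
    since excluding too many points destabilises the fit."""
    A, y = np.asarray(A, float), np.asarray(y, float)
    if min_keep is None:
        min_keep = A.shape[1] + 2
    keep = np.ones(len(y), bool)
    for _ in range(max_iter):
        x, *_ = np.linalg.lstsq(A[keep], y[keep], rcond=None)
        r = y - A @ x
        s = 1.4826 * np.median(np.abs(r[keep]))   # robust sigma from MAD
        new_keep = np.abs(r) <= thresh * max(s, 1e-12)
        if new_keep.sum() < min_keep or np.array_equal(new_keep, keep):
            break
        keep = new_keep
    return x, keep

# line y = 2x + 1 with one gross outlier at index 7
x = np.linspace(0, 1, 20)
A = np.column_stack([x, np.ones_like(x)])
y = 2 * x + 1
y[7] += 5.0
coef, keep = robust_fit(A, y)
print(coef, keep[7])  # ≈ [2. 1.], outlier excluded (False)
```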
Improving warm rain estimation in the PERSIANN-CCS satellite-based retrieval algorithm
NASA Astrophysics Data System (ADS)
Karbalaee, N.; Hsu, K. L.; Sorooshian, S.
2015-12-01
The Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS) is one of the algorithms being integrated in IMERG (Integrated Multi-satellitE Retrievals for GPM, the Global Precipitation Measurement mission) to estimate precipitation at 0.04° lat-long resolution every 30 minutes. PERSIANN-CCS extracts features from infrared cloud image segmentation at three brightness temperature thresholds (220 K, 235 K, and 253 K). Warm raining clouds with brightness temperatures higher than 253 K are not covered by the current algorithm. To improve detection of warm rain, in this study the cloud image segmentation threshold is extended from 253 K to 300 K to cover warmer clouds; several intermediate temperature thresholds between 253 K and 300 K were also examined. A K-means cluster algorithm was used to classify the extracted image features into 400 groups, and the rainfall rates for each cluster were retrained using radar rainfall measurements. Case studies were carried out over CONUS to investigate the ability to improve detection of warm rainfall from segmentation and image classification using warmer temperature thresholds. Satellite imagery and radar rainfall data from both the summer and winter seasons of 2012 were used as training data. Overall, the results show that rain detection from warm clouds is significantly improved; however, false rain detection also increases as the segmentation temperature is raised.
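The cluster-then-retrain step can be sketched as follows, using plain Lloyd's k-means, two toy feature clusters instead of 400, and synthetic "radar" rain rates.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means (Lloyd's algorithm). The paper uses 400 clusters on
    cloud-patch features; k is small here for illustration."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        lab = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(lab == j):
                C[j] = X[lab == j].mean(0)
    return C, lab

# toy features: (brightness temperature, texture); colder clouds rain more
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([220, 5], 2, (100, 2)),   # cold clouds
               rng.normal([280, 1], 2, (100, 2))])  # warm clouds
radar_rain = np.r_[rng.normal(4, 0.5, 100), rng.normal(0.5, 0.1, 100)]

C, lab = kmeans(X, k=2)
# retrain each cluster's rain rate against radar measurements
rates = np.array([radar_rain[lab == j].mean() for j in range(2)])
print(sorted(rates.round(1)))  # cold cluster ≈ 4 mm/h, warm ≈ 0.5 mm/h
```

Extending the segmentation threshold simply admits warmer (higher-temperature) patches into the feature set before this clustering step.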
Does more accurate exposure prediction necessarily improve health effect estimates?
Szpiro, Adam A; Paciorek, Christopher J; Sheppard, Lianne
2011-09-01
A unique challenge in air pollution cohort studies and similar applications in environmental epidemiology is that exposure is not measured directly at subjects' locations. Instead, pollution data from monitoring stations at some distance from the study subjects are used to predict exposures, and these predicted exposures are used to estimate the health effect parameter of interest. It is usually assumed that minimizing the error in predicting the true exposure will improve health effect estimation. We show in a simulation study that this is not always the case. We interpret our results in light of recently developed statistical theory for measurement error, and we discuss implications for the design and analysis of epidemiologic research.
NASA Astrophysics Data System (ADS)
Dogulu, Nilay; Solomatine, Dimitri; Lal Shrestha, Durga
2014-05-01
Within the context of flood forecasting, assessment of predictive uncertainty has become a necessity for most modelling studies in operational hydrology. Several uncertainty analysis and/or prediction methods are available in the literature; however, most of them rely on normality and homoscedasticity assumptions for the model residuals in reproducing the observed data. This study focuses on a statistical method that analyzes model residuals without such assumptions, based on a clustering approach: Uncertainty Estimation based on local Errors and Clustering (UNEEC). The aim of this work is to provide a comprehensive evaluation of the UNEEC method's performance with respect to the clustering approach employed in its methodology. This is done by analyzing the normality of model residuals and comparing uncertainty analysis results (for the 50% and 90% confidence levels) with those obtained from the uniform interval and quantile regression methods. An important part of the comparison is the analysis of data clusters representing different hydrometeorological conditions. The validation measures used are PICP, MPI, ARIL and NUE where necessary. A new validation measure linking the prediction interval to the (hydrological) model quality, the weighted mean prediction interval (WMPI), is also proposed for comparing the methods more effectively. The case study is the Brue catchment, located in the South West of England. A different parametrization of the method than in its previous application in Shrestha and Solomatine (2008) is used: past error values are considered in addition to discharge and effective rainfall. The results show that UNEEC's notable characteristic, applying clustering to predictor data in which catchment behaviour information is encapsulated, contributes to the increased accuracy of the method's results for varying flow conditions. Besides, classifying data so that extreme flow events are individually
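The core UNEEC idea, residual quantiles estimated separately per cluster of model inputs, can be sketched with a hard two-cluster split; the actual method uses fuzzy clustering, and the data here are synthetic and heteroscedastic by construction.

```python
import numpy as np

def uneec_intervals(residuals, labels, level=0.90):
    """UNEEC-flavoured sketch: given a hard clustering of the model-input
    space (`labels`), estimate per-cluster residual quantiles so the
    prediction interval adapts to hydrometeorological condition, with no
    normality or homoscedasticity assumption. UNEEC proper uses fuzzy
    clustering; hard labels keep the sketch short."""
    lo, hi = (1 - level) / 2, 1 - (1 - level) / 2
    return {int(j): (float(np.quantile(residuals[labels == j], lo)),
                     float(np.quantile(residuals[labels == j], hi)))
            for j in np.unique(labels)}

rng = np.random.default_rng(0)
flow = rng.uniform(0, 10, 2000)
labels = (flow > 5).astype(int)        # low-flow vs high-flow "clusters"
residuals = rng.normal(0, np.where(labels == 1, 2.0, 0.5))  # heteroscedastic

bands = uneec_intervals(residuals, labels)
print(bands)  # the high-flow cluster gets the wider 90% band
```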
A clustering approach for estimating parameters of a profile hidden Markov model.
Aghdam, Rosa; Pezeshk, Hamid; Malekpour, Seyed Amir; Shemehsavar, Soudabeh; Eslahchi, Changiz
2013-01-01
A Profile Hidden Markov Model (PHMM) is a standard form of Hidden Markov Model used for modeling protein and DNA sequence families based on multiple alignment. In this paper, we implement the Baum-Welch algorithm and the Bayesian Markov chain Monte Carlo (BMCMC) method for estimating the parameters of a small artificial PHMM. In order to improve the prediction accuracy of the parameter estimates, we classify the training data using the weighted values of sequences in the PHMM and then apply an algorithm for estimating the parameters of the PHMM. The results show that the BMCMC method performs better than Maximum Likelihood estimation. PMID:23865165
Performance Analysis of an Improved MUSIC DoA Estimator
NASA Astrophysics Data System (ADS)
Vallet, Pascal; Mestre, Xavier; Loubaton, Philippe
2015-12-01
This paper addresses the statistical performance of subspace DoA estimation using a sensor array, in the asymptotic regime where the number of samples and sensors both converge to infinity at the same rate. Improved subspace DoA estimators (termed G-MUSIC) were derived in previous works and were shown to be consistent and asymptotically Gaussian distributed in the case where the number of sources and their DoAs remain fixed. In this case, which models widely spaced DoA scenarios, it is proved in the present paper that the traditional MUSIC method also provides consistent DoA estimates having the same asymptotic variances as the G-MUSIC estimates. The case of DoAs spaced on the order of a beamwidth, which models closely spaced sources, is also considered. It is shown that G-MUSIC estimates are still able to consistently separate the sources, while this is no longer the case for the MUSIC estimates. The asymptotic variances of G-MUSIC estimates are also evaluated.
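For reference, the classical MUSIC pseudospectrum that the paper analyzes can be sketched for a uniform linear array with half-wavelength spacing; this is a toy single-source scenario, and G-MUSIC replaces the sample-covariance subspace with a corrected estimate.

```python
import numpy as np

def music_spectrum(R, n_sources, grid, m):
    """Classical MUSIC pseudospectrum for an m-sensor uniform linear
    array with half-wavelength spacing: project steering vectors onto
    the noise subspace of the sample covariance R."""
    eigval, V = np.linalg.eigh(R)
    En = V[:, :m - n_sources]          # noise subspace (smallest eigenvalues)
    k = np.arange(m)
    P = []
    for th in grid:
        a = np.exp(1j * np.pi * k * np.sin(th))
        P.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(P)

m, n, snap = 10, 1, 500
rng = np.random.default_rng(0)
theta = np.deg2rad(20.0)
a = np.exp(1j * np.pi * np.arange(m) * np.sin(theta))
s = (rng.normal(size=snap) + 1j * rng.normal(size=snap)) / np.sqrt(2)
X = np.outer(a, s) + 0.1 * (rng.normal(size=(m, snap)) +
                            1j * rng.normal(size=(m, snap))) / np.sqrt(2)
R = X @ X.conj().T / snap
grid = np.deg2rad(np.linspace(-90, 90, 721))
est = np.rad2deg(grid[np.argmax(music_spectrum(R, n, grid, m))])
print(est)  # ≈ 20°
```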
A new estimate of the Hubble constant using the Virgo cluster distance
NASA Astrophysics Data System (ADS)
Visvanathan, N.
The Hubble constant, which defines the size and age of the universe, remains substantially uncertain. Attention is presently given to an improved distance to the Virgo Cluster obtained by means of the 1.05-micron luminosity-H I width relation of spirals. In order to improve the absolute calibration of the relation, accurate distances to the nearby SMC, LMC, N6822, SEX A and N300 galaxies have also been obtained, on the basis of the near-IR P-L relation of the Cepheids. A value for the global Hubble constant of 67 ± 4 km/s per Mpc is obtained.
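The final step is just Hubble's law, H0 = v/d; both numbers below are assumed, illustrative inputs, not the paper's measured values.

```python
# Hubble's law sketch: H0 = v / d, with the Virgo distance supplied by the
# 1.05-micron luminosity-H I width relation. Both inputs are assumed.
v_virgo = 1150.0   # cosmic recession velocity of Virgo, km/s (assumed)
d_virgo = 17.2     # Virgo Cluster distance, Mpc (assumed)
H0 = v_virgo / d_virgo
print(round(H0, 1))  # ≈ 66.9 km/s per Mpc, close to the quoted 67 ± 4
```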
An improved approximate-Bayesian model-choice method for estimating shared evolutionary history
2014-01-01
Background To understand biological diversification, it is important to account for large-scale processes that affect the evolutionary history of groups of co-distributed populations of organisms. Such events predict temporally clustered divergence times, a pattern that can be estimated using genetic data from co-distributed species. I introduce a new approximate-Bayesian method for comparative phylogeographical model-choice that estimates the temporal distribution of divergences across taxa from multi-locus DNA sequence data. The model is an extension of that implemented in msBayes. Results By reparameterizing the model, introducing more flexible priors on demographic and divergence-time parameters, and implementing a non-parametric Dirichlet-process prior over divergence models, I improved the robustness, accuracy, and power of the method for estimating shared evolutionary history across taxa. Conclusions The results demonstrate that the improved performance of the new method is due to (1) more appropriate priors on divergence-time and demographic parameters that avoid prohibitively small marginal likelihoods for models with more divergence events, and (2) the Dirichlet-process providing a flexible prior on divergence histories that does not strongly disfavor models with intermediate numbers of divergence events. The new method yields more robust estimates of posterior uncertainty, and thus greatly reduces the tendency to incorrectly estimate models of shared evolutionary history with strong support. PMID:24992937
Improved estimation of reflectance spectra by utilizing prior knowledge.
Dierl, Marcel; Eckhard, Timo; Frei, Bernhard; Klammer, Maximilian; Eichstädt, Sascha; Elster, Clemens
2016-07-01
Estimating spectral reflectance has attracted extensive research efforts in color science and machine learning, motivated through a wide range of applications. In many practical situations, prior knowledge is available that ought to be used. Here, we have developed a general Bayesian method that allows the incorporation of prior knowledge from previous monochromator and spectrophotometer measurements. The approach yields analytical expressions for fast and efficient estimation of spectral reflectance. In addition to point estimates, probability distributions are also obtained, which completely characterize the uncertainty associated with the reconstructed spectrum. We demonstrate that, through the incorporation of prior knowledge, our approach yields improved reconstruction results compared with methods that resort to training data only. Our method is particularly useful when the spectral reflectance to be recovered resides beyond the scope of the training data. PMID:27409695
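For the linear-Gaussian case, the posterior is available in closed form. Below is a sketch assuming camera responses y = A·r + noise and a smooth-spectrum Gaussian prior built from hypothetical past measurements; the responsivity matrix and prior parameters are invented for illustration.

```python
import numpy as np

def posterior_reflectance(A, y, mu0, S0, noise_var):
    """Gaussian-prior Bayesian estimate of a reflectance spectrum r from
    responses y = A r + noise: returns the posterior mean and covariance
    (the covariance fully characterises the reconstruction uncertainty).
    A sketch of the linear-Gaussian case, not the paper's exact model."""
    S_y = A @ S0 @ A.T + noise_var * np.eye(len(y))
    K = S0 @ A.T @ np.linalg.solve(S_y, np.eye(len(y)))
    mu = mu0 + K @ (y - A @ mu0)
    S = S0 - K @ A @ S0
    return mu, S

rng = np.random.default_rng(0)
n_bands, n_chan = 31, 6
A = rng.uniform(0, 1, (n_chan, n_bands))        # toy sensor responsivities
mu0 = np.full(n_bands, 0.5)                     # prior mean from past data
S0 = 0.05 * np.exp(-np.abs(np.subtract.outer(   # smooth-spectrum prior
    np.arange(n_bands), np.arange(n_bands))) / 5.0)
r_true = 0.5 + 0.2 * np.sin(np.linspace(0, 3, n_bands))
y = A @ r_true + rng.normal(0, 0.01, n_chan)
mu, S = posterior_reflectance(A, y, mu0, S0, 1e-4)
print(np.abs(mu - r_true).mean())  # smaller than the prior-only error
```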
An Accurate Link Correlation Estimator for Improving Wireless Protocol Performance
Zhao, Zhiwei; Xu, Xianghua; Dong, Wei; Bu, Jiajun
2015-01-01
Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation. PMID:25686314
Improving the estimation of the tuberculosis burden in India.
Cowling, Krycia; Dandona, Rakhi; Dandona, Lalit
2014-11-01
Although India is considered to be the country with the greatest tuberculosis burden, estimates of the disease's incidence, prevalence and mortality in India rely on sparse data with substantial uncertainty. The relevant available data are less reliable than those from countries that have recently improved systems for case reporting or recently invested in national surveys of tuberculosis prevalence. We explored ways to improve the estimation of the tuberculosis burden in India. We focused on case notification data - among the most reliable data available - and ways to investigate the associated level of underreporting, as well as the need for a national tuberculosis prevalence survey. We discuss several recent developments - i.e. changes in national policies relating to tuberculosis, World Health Organization guidelines for the investigation of the disease, and a rapid diagnostic test - that should improve data collection for the estimation of the tuberculosis burden in India and elsewhere. We recommend the implementation of an inventory study in India to assess the underreporting of tuberculosis cases, as well as a national survey of tuberculosis prevalence. A national assessment of drug resistance in Indian strains of Mycobacterium tuberculosis should also be considered. The results of such studies will be vital for the accurate monitoring of tuberculosis control efforts in India and globally.
Improvement of Source Number Estimation Method for Single Channel Signal
Du, Bolun; He, Yunze
2016-01-01
Source number estimation methods for single-channel signals are investigated, and improvements for each method are suggested in this work. First, the single-channel data are converted to multi-channel form by a delay process. Then, algorithms used in array signal processing, such as Gerschgorin's disk estimation (GDE) and minimum description length (MDL), are introduced to estimate the source number of the received signal. Previous results have shown that MDL, based on information theoretic criteria (ITC), achieves superior performance to GDE at low SNR, but it cannot handle signals containing colored noise. Conversely, the GDE method can eliminate the influence of colored noise, but its performance at low SNR is not satisfactory. To resolve these complementary shortcomings, this work makes substantial improvements to both methods: a diagonal loading technique is employed to ameliorate the MDL method, and a jackknife technique is used to optimize the data covariance matrix and thereby improve the performance of the GDE method. Simulation results illustrate that the performance of both original methods is greatly improved. PMID:27736959
Estimating accuracy of land-cover composition from two-stage cluster sampling
Stehman, S.V.; Wickham, J.D.; Fattorini, L.; Wade, T.D.; Baffetta, F.; Smith, J.H.
2009-01-01
Land-cover maps are often used to compute land-cover composition (i.e., the proportion or percent of area covered by each class), for each unit in a spatial partition of the region mapped. We derive design-based estimators of mean deviation (MD), mean absolute deviation (MAD), root mean square error (RMSE), and correlation (CORR) to quantify accuracy of land-cover composition for a general two-stage cluster sampling design, and for the special case of simple random sampling without replacement (SRSWOR) at each stage. The bias of the estimators for the two-stage SRSWOR design is evaluated via a simulation study. The estimators of RMSE and CORR have small bias except when sample size is small and the land-cover class is rare. The estimator of MAD is biased for both rare and common land-cover classes except when sample size is large. A general recommendation is that rare land-cover classes require large sample sizes to ensure that the accuracy estimators have small bias. © 2009 Elsevier Inc.
NASA Astrophysics Data System (ADS)
Pedersen, A.; Lybekk, B.; André, M.; Eriksson, A.; Masson, A.; Mozer, F. S.; Lindqvist, P.-A.; DéCréAu, P. M. E.; Dandouras, I.; Sauvaud, J.-A.; Fazakerley, A.; Taylor, M.; Paschmann, G.; Svenes, K. R.; Torkar, K.; Whipple, E.
2008-07-01
Spacecraft potential measurements by the EFW electric field experiment on the Cluster satellites can be used to obtain plasma density estimates in regions barely accessible to other types of plasma experiments. Direct calibrations of the plasma density as a function of the measured potential difference between the spacecraft and the probes can be carried out in the solar wind, the magnetosheath, and the plasmasphere by the use of CIS ion density and WHISPER electron density measurements. The spacecraft photoelectron characteristic (photoelectrons escaping to the plasma in current balance with collected ambient electrons) can be calculated from knowledge of the electron current to the spacecraft, based on plasma density and electron temperature data from the above-mentioned experiments, and can be extended to more positive spacecraft potentials by the CIS ion and PEACE electron experiments in the plasma sheet. This characteristic enables determination of the electron density as a function of spacecraft potential over the polar caps and in the lobes of the magnetosphere, regions where other experiments on Cluster have intrinsic limitations. Data from 2001 to 2006 reveal that the photoelectron characteristics of the Cluster spacecraft, as well as of the electric field probes, vary with the solar cycle and solar activity. The consequences for plasma density measurements are addressed. Typical examples are presented to demonstrate the use of this technique in a polar cap/lobe plasma.
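The calibration step is commonly modelled as an exponential density-potential relation, n = A·exp(-V/B). Below is a sketch of fitting A and B against reference densities; the model form and all numbers are illustrative, not actual Cluster calibration values.

```python
import numpy as np

# Fit n = A * exp(-V / B) to reference densities (e.g. from CIS/WHISPER)
# by a log-linear least-squares fit; A_true/B_true are invented values.
rng = np.random.default_rng(0)
A_true, B_true = 200.0, 8.0            # cm^-3 and volts (assumed)
V = np.linspace(5, 40, 30)             # measured spacecraft potentials, V
n_ref = A_true * np.exp(-V / B_true) * rng.lognormal(0, 0.05, V.size)

slope, intercept = np.polyfit(V, np.log(n_ref), 1)
A_fit, B_fit = np.exp(intercept), -1.0 / slope
print(round(A_fit), round(B_fit, 1))   # recovers ≈ 200 and ≈ 8.0

def density_from_potential(V):
    """Density estimate where only the spacecraft potential is measured,
    e.g. over the polar caps and in the magnetospheric lobes."""
    return A_fit * np.exp(-V / B_fit)
```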
Cosmological parameter estimation from CMB and X-ray cluster after Planck
Hu, Jian-Wei; Cai, Rong-Gen; Guo, Zong-Kuan; Hu, Bin
2014-05-01
We investigate constraints on cosmological parameters in three 8-parameter models with the summed neutrino mass as a free parameter, by a joint analysis of CCCP X-ray cluster data, the newly released Planck CMB data, and some external data sets including baryon acoustic oscillation measurements from the 6dFGS, SDSS DR7 and BOSS DR9 surveys, and the Hubble Space Telescope H0 measurement. We find that the combined data strongly favor non-zero neutrino masses at more than 3σ confidence level in these non-vanilla models. Allowing the CMB lensing amplitude A_L to vary, we find A_L > 1 at 3σ confidence level. For dark energy with a constant equation of state w, we obtain w < -1 at 3σ confidence level. The estimate of the matter power spectrum amplitude σ8 is discrepant with the Planck value at 2σ confidence level, which reflects some tension between the X-ray cluster data and the Planck data in these non-vanilla models. The tension can be alleviated by adding a 9% systematic shift in the cluster mass function.
Improving estimates of tree mortality probability using potential growth rate
Das, Adrian J.; Stephenson, Nathan L.
2015-01-01
Tree growth rate is frequently used to estimate mortality probability. Yet, growth metrics can vary in form, and the justification for using one over another is rarely clear. We tested whether a growth index (GI) that scales the realized diameter growth rate against the potential diameter growth rate (PDGR) would give better estimates of mortality probability than other measures. We also tested whether PDGR, being a function of tree size, might better correlate with the baseline mortality probability than direct measurements of size such as diameter or basal area. Using a long-term dataset from the Sierra Nevada, California, U.S.A., as well as existing species-specific estimates of PDGR, we developed growth–mortality models for four common species. For three of the four species, models that included GI, PDGR, or a combination of GI and PDGR were substantially better than models without them. For the fourth species, the models including GI and PDGR performed roughly as well as a model that included only the diameter growth rate. Our results suggest that using PDGR can improve our ability to estimate tree survival probability. However, in the absence of PDGR estimates, the diameter growth rate was the best empirical predictor of mortality, in contrast to assumptions often made in the literature.
Can modeling improve estimation of desert tortoise population densities?
Nussear, K.E.; Tracy, C.R.
2007-01-01
The federally listed desert tortoise (Gopherus agassizii) is currently monitored using distance sampling to estimate population densities. Distance sampling, as with many other techniques for estimating population density, assumes that it is possible to quantify the proportion of animals available to be counted in any census. Because desert tortoises spend much of their life in burrows, and the proportion of tortoises in burrows at any time can be extremely variable, this assumption is difficult to meet. This proportion of animals available to be counted is used as a correction factor (g0) in distance sampling and has been estimated from daily censuses of small populations of tortoises (6-12 individuals). These censuses are costly and produce imprecise estimates of g0 due to small sample sizes. We used data on tortoise activity from a large (N = 150) experimental population to model activity as a function of the biophysical attributes of the environment, but these models did not improve the precision of estimates from the focal populations. Thus, to evaluate how much of the variance in tortoise activity is apparently not predictable, we assessed whether activity on any particular day can predict activity on subsequent days with essentially identical environmental conditions. Tortoise activity was only weakly correlated on consecutive days, indicating that behavior was not repeatable or consistent among days with similar physical environments. © 2007 by the Ecological Society of America.
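The role of g0 can be shown with the standard line-transect density formula D = n / (2·w·L·g0): an imprecise availability correction propagates directly into the density estimate. All numbers below are illustrative, not field data.

```python
# Line-transect distance sampling with an availability correction g0:
# animals in burrows are unavailable to be counted, so the density
# estimate divides by the proportion available.
n_detected = 48        # tortoises detected (illustrative)
L_km = 120.0           # total transect length, km (illustrative)
esw_km = 0.02          # effective strip half-width, km (illustrative)
g0 = 0.6               # proportion of animals available (illustrative)

# D = n / (2 * esw * L * g0), in animals per km^2
D = n_detected / (2.0 * esw_km * L_km * g0)
print(round(D, 1))  # → 16.7 per km^2; error in g0 scales D directly
```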
Improving energy expenditure estimation by using a triaxial accelerometer.
Chen, K Y; Sun, M
1997-12-01
In our study of 125 subjects (53 men and 72 women) for two 24-h periods, we validated energy expenditure (EE), estimated by a triaxial accelerometer (Tritrac-R3D), by using a whole-room indirect calorimeter under close-to-normal living conditions. The estimated EE was correlated with the measured total EE for the 2 days (r = 0.925 and r = 0.855; P < 0.001) and in minute-by-minute EE (P < 0.01). Resting EE formulated by the Tritrac was found to be similar to the measured values [standard errors of estimation (SEE) = 0.112 W/kg; P = 0.822]. The Tritrac significantly underestimated total EE, EE for physical activities, EE of sedentary and light-intensity activities, and EE for exercise such as stepping (all P < 0.001). We developed a linear and a nonlinear model to predict EE by using the acceleration components from the Tritrac. Predicted EE was significantly improved with both models in estimating total EE, total EE for physical activities, EE in low-intensity activities, minute-by-minute averaged relative difference, and minute-by-minute SEE (all P < 0.05). Furthermore, with our generalized models and by using subjects' physical characteristics and body acceleration, EE can be estimated with higher accuracy (averaged SEE = 0.418 W/kg) than with the Tritrac model.
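The linear-model idea, predicting EE from the three acceleration components plus subject characteristics, can be sketched with synthetic data; the coefficients below are invented for illustration and are not the published model.

```python
import numpy as np

# Fit minute-by-minute EE (W/kg) on triaxial activity counts plus body
# mass, then report the standard error of estimate (SEE). All data and
# coefficients are synthetic.
rng = np.random.default_rng(0)
n = 500
acc = rng.uniform(0, 1, (n, 3))                 # triaxial activity counts
mass = rng.uniform(50, 100, n)                  # body mass, kg
ee = (1.1 + acc @ np.array([0.8, 0.6, 1.2]) + 0.004 * mass
      + rng.normal(0, 0.05, n))                 # synthetic "calorimeter" EE

X = np.column_stack([np.ones(n), acc, mass])
beta, *_ = np.linalg.lstsq(X, ee, rcond=None)
pred = X @ beta
see = np.sqrt(np.mean((pred - ee) ** 2))        # standard error of estimate
print(beta.round(3), round(see, 3))             # recovers the coefficients
```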
IPEG- IMPROVED PRICE ESTIMATION GUIDELINES (IBM 370 VERSION)
NASA Technical Reports Server (NTRS)
Chamberlain, R. G.
1994-01-01
The Improved Price Estimation Guidelines, IPEG, program provides a simple yet accurate estimate of the price of a manufactured product. IPEG facilitates sensitivity studies of price estimates at considerably less expense than would be incurred by using the Standard Assembly-line Manufacturing Industry Simulation, SAMIS, program (COSMIC program NPO-16032). A difference of less than one percent between the IPEG and SAMIS price estimates has been observed with realistic test cases. However, the IPEG simplification of SAMIS allows the analyst with limited time and computing resources to perform a greater number of sensitivity studies than with SAMIS. Although IPEG was developed for the photovoltaics industry, it is readily adaptable to any standard assembly line type of manufacturing industry. IPEG estimates the annual production price per unit. The input data includes cost of equipment, space, labor, materials, supplies, and utilities. Production on an industry wide basis or a process wide basis can be simulated. Once the IPEG input file is prepared, the original price is estimated and sensitivity studies may be performed. The IPEG user selects a sensitivity variable and a set of values. IPEG will compute a price estimate and a variety of other cost parameters for every specified value of the sensitivity variable. IPEG is designed as an interactive system and prompts the user for all required information and offers a variety of output options. The IPEG/PC program is written in TURBO PASCAL for interactive execution on an IBM PC computer under DOS 2.0 or above with at least 64K of memory. The IBM PC color display and color graphics adapter are needed to use the plotting capabilities in IPEG/PC. IPEG/PC was developed in 1984. The original IPEG program is written in SIMSCRIPT II.5 for interactive execution and has been implemented on an IBM 370 series computer with a central memory requirement of approximately 300K of 8 bit bytes. The original IPEG was developed in 1980.
IPEG- IMPROVED PRICE ESTIMATION GUIDELINES (IBM PC VERSION)
NASA Technical Reports Server (NTRS)
Aster, R. W.
1994-01-01
The Improved Price Estimation Guidelines, IPEG, program provides a simple yet accurate estimate of the price of a manufactured product. IPEG facilitates sensitivity studies of price estimates at considerably less expense than would be incurred by using the Standard Assembly-line Manufacturing Industry Simulation, SAMIS, program (COSMIC program NPO-16032). A difference of less than one percent between the IPEG and SAMIS price estimates has been observed with realistic test cases. However, the IPEG simplification of SAMIS allows the analyst with limited time and computing resources to perform a greater number of sensitivity studies than with SAMIS. Although IPEG was developed for the photovoltaics industry, it is readily adaptable to any standard assembly line type of manufacturing industry. IPEG estimates the annual production price per unit. The input data includes cost of equipment, space, labor, materials, supplies, and utilities. Production on an industry wide basis or a process wide basis can be simulated. Once the IPEG input file is prepared, the original price is estimated and sensitivity studies may be performed. The IPEG user selects a sensitivity variable and a set of values. IPEG will compute a price estimate and a variety of other cost parameters for every specified value of the sensitivity variable. IPEG is designed as an interactive system and prompts the user for all required information and offers a variety of output options. The IPEG/PC program is written in TURBO PASCAL for interactive execution on an IBM PC computer under DOS 2.0 or above with at least 64K of memory. The IBM PC color display and color graphics adapter are needed to use the plotting capabilities in IPEG/PC. IPEG/PC was developed in 1984. The original IPEG program is written in SIMSCRIPT II.5 for interactive execution and has been implemented on an IBM 370 series computer with a central memory requirement of approximately 300K of 8 bit bytes. The original IPEG was developed in 1980.
Kasaie, Parastu; Mathema, Barun; Kelton, W. David; Azman, Andrew S.; Pennington, Jeff; Dowdy, David W.
2015-01-01
In any setting, a proportion of incident active tuberculosis (TB) reflects recent transmission (“recent transmission proportion”), whereas the remainder represents reactivation. Appropriately estimating the recent transmission proportion has important implications for local TB control, but existing approaches have known biases, especially where data are incomplete. We constructed a stochastic individual-based model of a TB epidemic and designed a set of simulations (derivation set) to develop two regression-based tools for estimating the recent transmission proportion from five inputs: underlying TB incidence, sampling coverage, study duration, clustered proportion of observed cases, and proportion of observed clusters in the sample. We tested these tools on a set of unrelated simulations (validation set), and compared their performance against that of the traditional ‘n-1’ approach. In the validation set, the regression tools reduced the absolute estimation bias (difference between estimated and true recent transmission proportion) in the ‘n-1’ technique by a median [interquartile range] of 60% [9%, 82%] and 69% [30%, 87%]. The bias in the ‘n-1’ model was highly sensitive to underlying levels of study coverage and duration, and substantially underestimated the recent transmission proportion in settings of incomplete data coverage. By contrast, the regression models’ performance was more consistent across different epidemiological settings and study characteristics. We provide one of these regression models as a user-friendly, web-based tool. Novel tools can improve our ability to estimate the recent TB transmission proportion from data that are observable (or estimable) by public health practitioners with limited available molecular data. PMID:26679499
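For reference, the traditional 'n-1' approach that the regression tools are benchmarked against counts each cluster of k genotyped cases as contributing k-1 recently transmitted cases. A minimal sketch (the paper's regression tools themselves are not reproduced here):

```python
def n_minus_one(total_cases, clustered_cases, n_clusters):
    """Traditional 'n-1' estimate of the recent transmission proportion:
    each cluster of size k contributes k-1 recently transmitted cases,
    so the estimate is (clustered cases - number of clusters) / total cases."""
    if total_cases == 0:
        return 0.0
    return (clustered_cases - n_clusters) / total_cases

# e.g. 100 genotyped cases, 40 of them falling into 15 clusters:
print(n_minus_one(100, 40, 15))  # 0.25
```

As the abstract notes, this estimate degrades with incomplete sampling coverage or short study duration, which is what the regression-based tools correct for.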
Electron density estimation in cold magnetospheric plasmas with the Cluster Active Archive
NASA Astrophysics Data System (ADS)
Masson, A.; Pedersen, A.; Taylor, M. G.; Escoubet, C. P.; Laakso, H. E.
2009-12-01
Electron density is a key physical quantity for characterizing any plasma medium. Its measurement is thus essential to understanding the various physical processes occurring in the environment of a magnetized planet. However, no magnetosphere in the solar system is a homogeneous medium with constant electron density and temperature. For instance, the Earth's magnetosphere is composed of a variety of regions with densities and temperatures spanning at least six orders of magnitude. For this reason, several types of scientific instruments are usually carried onboard a magnetospheric spacecraft to estimate, by different means, the in situ electron density of the various plasma regions crossed. In the case of the European Space Agency Cluster mission, five different instruments on each of its four identical spacecraft can be used to estimate it: two particle instruments, a DC electric field instrument, a relaxation sounder, and a high-time-resolution passive wave receiver. Each of these instruments has its pros and cons depending on the plasma conditions. The focus of this study is the accurate estimation of the electron density in cold plasma regions of the magnetosphere, including the magnetotail lobes (Ne ≤ 0.01 e-/cc, Te ~ 100 eV) and the plasmasphere (Ne > 10 e-/cc, Te < 10 eV). In these regions, particle instruments can be blind to low-energy ions outflowing from the ionosphere, or may measure only a portion of the energy range of the particles because of photoelectrons. This often results in an underestimation of the bulk density. Measurements from a relaxation sounder enable accurate estimation of the bulk electron density above a fraction of 1 e-/cc, but require careful calibration of the resonances and/or the cutoffs detected. On Cluster, active soundings enable precise density estimates to be derived between 0.2 and 80 e-/cc every minute or two. Spacecraft-to-probe difference potential measurements from a double probe electric field experiment can be
Improving Estimated Optical Constants With MSTM and DDSCAT Modeling
NASA Astrophysics Data System (ADS)
Pitman, K. M.; Wolff, M. J.
2015-12-01
We present numerical experiments to determine quantitatively the effects of mineral particle clustering on Mars spacecraft spectral signatures and to improve upon the values of refractive indices (optical constants n, k) derived from Mars dust laboratory analog spectra such as those from RELAB and MRO CRISM libraries. Whereas spectral properties for Mars analog minerals and actual Mars soil are dominated by aggregates of particles smaller than the size of martian atmospheric dust, the analytic radiative transfer (RT) solutions used to interpret planetary surfaces assume that individual, well-separated particles dominate the spectral signature. Both in RT models and in the refractive index derivation methods that include analytic RT approximations, spheres are also over-used to represent nonspherical particles. Part of the motivation is that the integrated effect over randomly oriented particles on quantities such as single scattering albedo and phase function is relatively less than for single particles. However, we have seen in previous numerical experiments that when varying the shape and size of individual grains within a cluster, the phase function changes in both magnitude and slope, thus the "relatively less" effect is more significant than one might think. Here we examine the wavelength dependence of the forward scattering parameter with multisphere T-matrix (MSTM) and discrete dipole approximation (DDSCAT) codes that compute light scattering by layers of particles on planetary surfaces to see how albedo is affected and integrate our model results into refractive index calculations to remove uncertainties in approximations and parameters that can lower the accuracy of optical constants. By correcting the single scattering albedo and phase function terms in the refractive index determinations, our data will help to improve the understanding of Mars in identifying, mapping the distributions, and quantifying abundances for these minerals and will address long
Improving Evapotranspiration Estimates Using Multi-Platform Remote Sensing
NASA Astrophysics Data System (ADS)
Knipper, Kyle; Hogue, Terri; Franz, Kristie; Scott, Russell
2016-04-01
Understanding the linkages between energy and water cycles through evapotranspiration (ET) is uniquely challenging given its dependence on a range of climatological parameters and surface/atmospheric heterogeneity. A number of methods have been developed to estimate ET either from primarily remote-sensing observations, in-situ measurements, or a combination of the two. However, the scale of many of these methods may be too large to provide needed information about the spatial and temporal variability of ET that can occur over regions with acute or chronic land cover change and precipitation driven fluxes. The current study aims to improve the representation of the spatial and temporal variability of ET using only satellite-based observations, by incorporating a potential evapotranspiration (PET) methodology with satellite-based down-scaled soil moisture estimates in southern Arizona, USA. Initially, soil moisture estimates from AMSR2 and SMOS are downscaled to 1km through a triangular relationship between MODIS land surface temperature (MYD11A1), vegetation indices (MOD13Q1/MYD13Q1), and brightness temperature. Downscaled soil moisture values are then used to scale PET to actual ET (AET) at a daily, 1km resolution. Derived AET estimates are compared to observed flux tower estimates, the North American Land Data Assimilation System (NLDAS) model output (i.e. Variable Infiltration Capacity (VIC) Macroscale Hydrologic Model, Mosaic Model, and Noah Model simulations), the Operational Simplified Surface Energy Balance Model (SSEBop), and a calibrated empirical ET model created specifically for the region. Preliminary results indicate a strong increase in correlation when incorporating the downscaling technique to original AMSR2 and SMOS soil moisture values, with the added benefit of being able to decipher small scale heterogeneity in soil moisture (riparian versus desert grassland). AET results show strong correlations with relatively low error and bias when compared to flux tower
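The step of scaling PET to actual ET with downscaled soil moisture can be illustrated with a simple linear soil-moisture stress factor. This is a common formulation and an assumption here (the wilting and critical moisture thresholds are placeholders), not necessarily the study's exact scaling:

```python
def aet_from_pet(pet_mm_day, theta, theta_wilt=0.05, theta_crit=0.30):
    """Scale potential ET to actual ET with a linear soil-moisture stress
    factor: 0 at the wilting point, 1 at the critical moisture content.
    A common approximation; the study's exact scaling is not reproduced."""
    stress = (theta - theta_wilt) / (theta_crit - theta_wilt)
    stress = max(0.0, min(1.0, stress))  # clamp to [0, 1]
    return pet_mm_day * stress

# Daily PET of 6 mm with 1-km downscaled soil moisture theta = 0.175:
print(aet_from_pet(6.0, 0.175))  # 3.0 mm/day
```
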
Song, Jeeseon; Mohr, Joseph J.; Barkhouse, Wayne A.; Rude, Cody; Warren, Michael S.; Dolag, Klaus
2012-03-01
We present a galaxy catalog simulator that converts N-body simulations with halo and subhalo catalogs into mock, multiband photometric catalogs. The simulator assigns galaxy properties to each subhalo in a way that reproduces the observed cluster galaxy halo occupation distribution, the radial and mass-dependent variation in fractions of blue galaxies, the luminosity functions in the cluster and the field, and the color-magnitude relation in clusters. Moreover, the evolution of these parameters is tuned to match existing observational constraints. Parameterizing an ensemble of cluster galaxy properties enables us to create mock catalogs with variations in those properties, which in turn allows us to quantify the sensitivity of cluster finding to current observational uncertainties in these properties. Field galaxies are sampled from existing multiband photometric surveys of similar depth. We present an application of the catalog simulator to characterize the selection function and contamination of a galaxy cluster finder that utilizes the cluster red sequence together with galaxy clustering on the sky. We estimate systematic uncertainties in the selection to be at the ≤15% level with current observational constraints on cluster galaxy populations and their evolution. We find the contamination in this cluster finder to be ~35% to redshift z ~ 0.6. In addition, we use the mock galaxy catalogs to test the optical mass indicator B_gc and a red-sequence redshift estimator. We measure the intrinsic scatter of the B_gc-mass relation to be approximately log normal with σ_log10 M ~ 0.25 and we demonstrate photometric redshift accuracies for massive clusters at the ~3% level out to z ~ 0.7.
Remote chlorophyll-a estimates for inland waters based on a cluster-based classification.
Shi, Kun; Li, Yunmei; Li, Lin; Lu, Heng; Song, Kaishan; Liu, Zhonghua; Xu, Yifan; Li, Zuchuan
2013-02-01
Accurate estimates of chlorophyll-a concentration (Chl-a) from remotely sensed data for inland waters are challenging due to their optical complexity. In this study, a framework of Chl-a estimation is established for optically complex inland waters based on a combination of water optical classification and two semi-empirical algorithms. Three spectrally distinct water types (Type I to Type III) are first identified using a clustering method performed on remote sensing reflectance (R(rs)) from datasets containing 231 samples from Lake Taihu, Lake Chaohu, Lake Dianchi, and Three Gorges Reservoir. The classification criteria for each optical water type are subsequently defined for MERIS images based on the spectral characteristics of the three water types. The criteria cluster every R(rs) spectrum into one of the three water types by comparing the values from band 7 (central band: 665 nm), band 8 (central band: 681.25 nm), and band 9 (central band: 708.75 nm) of MERIS images. Based on the water classification, the type-specific three-band algorithms (TBA) and type-specific advanced three-band algorithm (ATBA) are developed for each water type using the same datasets. Pre-classification decreases errors for the two algorithms, with the mean absolute percent error (MAPE) of TBA decreasing from 36.5% to 23% for the calibration datasets, and from 40% to 28% for ATBA. The accuracy of the two algorithms for validation data indicates that optical classification eliminates the need to adjust the optimal locations of the three bands or to re-parameterize to estimate Chl-a for other waters. The classification criteria and the type-specific ATBA are additionally validated by two MERIS images. The framework of first classifying optical water types based on reflectance characteristics and subsequently developing type-specific algorithms for different water types is a valid scheme for reducing errors in Chl-a estimation for optically complex inland waters.
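The three-band algorithm (TBA) family referred to above combines reciprocal reflectances in the red and near-infrared; a sketch with placeholder band positions and calibration coefficients (the type-specific coefficients fitted in the paper are not reproduced):

```python
def three_band_index(rrs_665, rrs_708, rrs_753):
    """Classic three-band index for turbid waters:
    X = (1/Rrs(665) - 1/Rrs(708)) * Rrs(753).
    Band centers are illustrative MERIS-like choices."""
    return (1.0 / rrs_665 - 1.0 / rrs_708) * rrs_753

def chl_a(rrs_665, rrs_708, rrs_753, a=113.0, b=16.0):
    """Chl-a from a linear calibration of the index; a and b are
    hypothetical placeholders, fitted per water type in practice."""
    return a * three_band_index(rrs_665, rrs_708, rrs_753) + b
```

The point of the pre-classification step in the paper is that a, b (and even the optimal band positions) differ by water type, so applying a single calibration across all types inflates the error.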
The ALHAMBRA survey: Estimation of the clustering signal encoded in the cosmic variance
NASA Astrophysics Data System (ADS)
López-Sanjuan, C.; Cenarro, A. J.; Hernández-Monteagudo, C.; Arnalte-Mur, P.; Varela, J.; Viironen, K.; Fernández-Soto, A.; Martínez, V. J.; Alfaro, E.; Ascaso, B.; del Olmo, A.; Díaz-García, L. A.; Hurtado-Gil, Ll.; Moles, M.; Molino, A.; Perea, J.; Pović, M.; Aguerri, J. A. L.; Aparicio-Villegas, T.; Benítez, N.; Broadhurst, T.; Cabrera-Caño, J.; Castander, F. J.; Cepa, J.; Cerviño, M.; Cristóbal-Hornillos, D.; González Delgado, R. M.; Husillos, C.; Infante, L.; Márquez, I.; Masegosa, J.; Prada, F.; Quintana, J. M.
2015-10-01
Aims: The relative cosmic variance (σv) is a fundamental source of uncertainty in pencil-beam surveys and, as a particular case of count-in-cell statistics, can be used to estimate the bias between galaxies and their underlying dark-matter distribution. Our goal is to test the significance of the clustering information encoded in the σv measured in the ALHAMBRA survey. Methods: We measure the cosmic variance of several galaxy populations selected with B-band luminosity at 0.35 ≤ z < 1.05 as the intrinsic dispersion in the number density distribution derived from the 48 ALHAMBRA subfields. We compare the observational σv with the cosmic variance of the dark matter expected from the theory, σv,dm. This provides an estimation of the galaxy bias b. Results: The galaxy bias from the cosmic variance is in excellent agreement with the bias estimated by two-point correlation function analysis in ALHAMBRA. This holds for different redshift bins, for red and blue subsamples, and for several B-band luminosity selections. We find that b increases with the B-band luminosity and the redshift, as expected from previous work. Moreover, red galaxies have a larger bias than blue galaxies, with a relative bias of brel = 1.4 ± 0.2. Conclusions: Our results demonstrate that the cosmic variance measured in ALHAMBRA is due to the clustering of galaxies and can be used to characterise the σv affecting pencil-beam surveys. In addition, it can also be used to estimate the galaxy bias b from a method independent of correlation functions. Based on observations collected at the German-Spanish Astronomical Center, Calar Alto, jointly operated by the Max-Planck-Institut für Astronomie (MPIA) at Heidelberg and the Instituto de Astrofísica de Andalucía (CSIC).
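The bias estimate described in the Methods reduces to a ratio of dispersions; a minimal sketch (the numbers in the usage are illustrative, not the paper's measurements):

```python
def galaxy_bias(sigma_v_obs, sigma_v_dm):
    """Galaxy bias from count-in-cell cosmic variance:
    b = sigma_v(galaxies) / sigma_v(dark matter)."""
    return sigma_v_obs / sigma_v_dm

def relative_bias(b_red, b_blue):
    """Relative bias between two populations, e.g. red vs. blue galaxies."""
    return b_red / b_blue
```

With the observed σv for a galaxy sample and the theoretical σv,dm for the same volume, `galaxy_bias` gives b; comparing red and blue subsamples with `relative_bias` yields the brel quoted in the abstract.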
Improving the performance of a petroleum reservoir model on workstation clusters using MPI
Dantas, M.A.R.; Zaluska, E.J.
1996-11-01
The relatively low-performance of a petroleum reservoir model when it executes on a single workstation is one of the key motivating factors for exploiting high-performance computing on workstation clusters. Workstation clusters, connected through a Local Area Network, are at a stage where their effectiveness as a suitable configuration for high-performance parallel processing has already been established. This paper discusses the improvement in performance of an engineering application on a workstation cluster using the MPI (Message Passing Interface) software environment. The importance of this approach for many engineering and scientific applications is illustrated by the case study, which also provides a recommended porting methodology for similar applications.
Improving the quality of parameter estimates obtained from slug tests
Butler, J.J.; McElwee, C.D.; Liu, W.
1996-01-01
The slug test is one of the most commonly used field methods for obtaining in situ estimates of hydraulic conductivity. Despite its prevalence, this method has received criticism from many quarters in the ground-water community. This criticism emphasizes the poor quality of the estimated parameters, a condition that is primarily a product of the somewhat casual approach that is often employed in slug tests. Recently, the Kansas Geological Survey (KGS) has pursued research directed at improving methods for the performance and analysis of slug tests. Based on extensive theoretical and field research, a series of guidelines have been proposed that should enable the quality of parameter estimates to be improved. The most significant of these guidelines are: (1) three or more slug tests should be performed at each well during a given test period; (2) two or more different initial displacements (Ho) should be used at each well during a test period; (3) the method used to initiate a test should enable the slug to be introduced in a near-instantaneous manner and should allow a good estimate of Ho to be obtained; (4) data-acquisition equipment that enables a large quantity of high quality data to be collected should be employed; (5) if an estimate of the storage parameter is needed, an observation well other than the test well should be employed; (6) the method chosen for analysis of the slug-test data should be appropriate for site conditions; (7) use of pre- and post-analysis plots should be an integral component of the analysis procedure, and (8) appropriate well construction parameters should be employed. Data from slug tests performed at a number of KGS field sites demonstrate the importance of these guidelines.
Estimating Ω from galaxy redshifts: Linear flow distortions and nonlinear clustering
Bromley, B.C.; Warren, M.S.; Zurek, W.H.
1997-02-01
We propose a method to determine the cosmic mass density Ω from redshift-space distortions induced by large-scale flows in the presence of nonlinear clustering. Nonlinear structures in redshift space, such as fingers of God, can contaminate distortions from linear flows on scales as large as several times the small-scale pairwise velocity dispersion σ_ν. Following Peacock & Dodds, we work in the Fourier domain and propose a model to describe the anisotropy in the redshift-space power spectrum; tests with high-resolution numerical data demonstrate that the model is robust for both mass and biased galaxy halos on translinear scales and above. On the basis of this model, we propose an estimator of the linear growth parameter β = Ω^0.6/b, where b measures bias, derived from sampling functions that are tuned to eliminate distortions from nonlinear clustering. The measure is tested on the numerical data and found to recover the true value of β to within ~10%. An analysis of IRAS 1.2 Jy galaxies yields β = 0.8 +0.4/−0.3 at a scale of 1000 km s^−1, which is close to optimal given the shot noise and finite size of the survey. This measurement is consistent with dynamical estimates of β derived from both real-space and redshift-space information. The importance of the method presented here is that nonlinear clustering effects are removed to enable linear correlation anisotropy measurements on scales approaching the translinear regime. We discuss implications for analyses of forthcoming optical redshift surveys in which the dispersion is more than a factor of 2 greater than in the IRAS data. © 1997 The American Astronomical Society
Tuning target selection algorithms to improve galaxy redshift estimates
NASA Astrophysics Data System (ADS)
Hoyle, Ben; Paech, Kerstin; Rau, Markus Michael; Seitz, Stella; Weller, Jochen
2016-06-01
We showcase machine learning (ML) inspired target selection algorithms to determine which of all potential targets should be selected first for spectroscopic follow-up. Efficient target selection can improve the ML redshift uncertainties as calculated on an independent sample, while requiring fewer targets to be observed. We compare seven different ML targeting algorithms with the Sloan Digital Sky Survey (SDSS) target order, and with a random targeting algorithm. The ML inspired algorithms are constructed iteratively by estimating which of the remaining target galaxies will be most difficult for the ML methods to accurately estimate redshifts using the previously observed data. This is performed by predicting the expected redshift error and redshift offset (or bias) of all of the remaining target galaxies. We find that the predicted values of bias and error are accurate to better than 10-30 per cent of the true values, even with only limited training sample sizes. We construct a hypothetical follow-up survey and find that some of the ML targeting algorithms are able to obtain the same redshift predictive power with 2-3 times less observing time, as compared to that of the SDSS, or random, target selection algorithms. The reduction in the required follow-up resources could allow for a change to the follow-up strategy, for example by obtaining deeper spectroscopy, which could improve ML redshift estimates for deeper test data.
Improving stochastic estimates with inference methods: calculating matrix diagonals.
Selig, Marco; Oppermann, Niels; Ensslin, Torsten A
2012-02-01
Estimating the diagonal entries of a matrix that is not directly accessible but is available only as a linear operator in the form of a computer routine is a common necessity in many computational applications, especially in image reconstruction and statistical inference. Here, methods of statistical inference are used to improve the accuracy or the computational costs of matrix probing methods to estimate matrix diagonals. In particular, the generalized Wiener filter methodology, as developed within information field theory, is shown to significantly improve estimates based on only a few sampling probes, in cases in which some form of continuity of the solution can be assumed. The strength, length scale, and precise functional form of the exploited autocorrelation function of the matrix diagonal are determined from the probes themselves. The developed algorithm is successfully applied to mock and real world problems. These performance tests show that, in situations where a matrix diagonal has to be calculated from only a small number of computationally expensive probes, a speedup by a factor of 2 to 10 is possible with the proposed method.
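The standard matrix-probing baseline that such inference methods improve upon estimates the diagonal element-wise from random probe vectors. A sketch of the classical probing estimator (not the Wiener-filter refinement described in the abstract):

```python
import numpy as np

def probe_diagonal(apply_A, n, n_probes=64, rng=None):
    """Estimate diag(A) for a matrix available only as a linear operator
    apply_A(v) -> A @ v, using random Rademacher (+/-1) probe vectors:
    diag(A) ~ sum_k v_k * (A v_k) / sum_k v_k * v_k   (elementwise).
    The estimate is unbiased; its noise shrinks as probes accumulate."""
    rng = np.random.default_rng(rng)
    num = np.zeros(n)
    den = np.zeros(n)
    for _ in range(n_probes):
        v = rng.choice([-1.0, 1.0], size=n)
        num += v * apply_A(v)
        den += v * v
    return num / den
```

Each probe costs one application of the operator, which is why reducing the number of probes needed (as the paper's inference approach does) matters when `apply_A` is expensive.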
Speed Profiles for Improvement of Maritime Emission Estimation
Yau, Pui Shan; Lee, Shun-Cheng; Ho, Kin Fai
2012-01-01
Maritime emissions play an important role in anthropogenic emissions, particularly for cities with busy ports such as Hong Kong. Ship emissions are strongly dependent on vessel speed, and thus accurate vessel speed is essential for maritime emission studies. In this study, we determined minute-by-minute high-resolution speed profiles of container ships on four major routes in Hong Kong waters using Automatic Identification System (AIS). The activity-based ship emissions of NOx, CO, HC, CO2, SO2, and PM10 were estimated using derived vessel speed profiles, and results were compared with those using the speed limits of control zones. Estimation using speed limits resulted in up to twofold overestimation of ship emissions. Compared with emissions estimated using the speed limits of control zones, emissions estimated using vessel speed profiles could provide results with up to 88% higher accuracy. Uncertainty analysis and sensitivity analysis of the model demonstrated the significance of improvement of vessel speed resolution. From spatial analysis, it is revealed that SO2 and PM10 emissions during maneuvering within 1 nautical mile from port were the highest. They contributed 7%–22% of SO2 emissions and 8%–17% of PM10 emissions of the entire voyage in Hong Kong. PMID:23236250
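The activity-based calculation sketched below shows why speed resolution matters: engine load scales roughly with the cube of speed (the propeller law), so a coarse speed value propagates nonlinearly into the emission estimate. The function and its parameters are an illustrative sketch, not the study's exact model:

```python
def leg_emission_kg(speed_kn, design_speed_kn, mcr_kw, ef_g_per_kwh, minutes):
    """Activity-based emission for one AIS time step.
    Load factor from the propeller law (v / v_design)^3, capped at 1,
    then E = MCR * LF * EF * t, with EF in g/kWh and t in hours; result in kg."""
    lf = min(1.0, (speed_kn / design_speed_kn) ** 3)
    hours = minutes / 60.0
    return mcr_kw * lf * ef_g_per_kwh * hours / 1000.0

# One hour at half the design speed engages only 1/8 of installed power:
print(leg_emission_kg(12.0, 24.0, 20_000.0, 10.0, 60.0))  # 25.0 kg
```

Summing such terms over minute-by-minute AIS speed records gives the per-voyage totals; substituting a fixed zone speed limit for the actual speed is what produced the up-to-twofold overestimation reported above.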
Adaptive noise estimation and suppression for improving microseismic event detection
NASA Astrophysics Data System (ADS)
Mousavi, S. Mostafa; Langston, Charles A.
2016-09-01
Microseismic data recorded by surface arrays are often strongly contaminated by unwanted noise. This background noise makes the detection of small magnitude events difficult. A noise level estimation and noise reduction algorithm is presented for microseismic data analysis based upon minimally controlled recursive averaging and neighborhood shrinkage estimators. The method may not match more sophisticated and computationally expensive denoising algorithms in preserving detailed features of the seismic signal. However, it is fast and data-driven and can be applied in real-time processing of continuous data for event detection purposes. Results from application of this algorithm to synthetic and real seismic data show that it holds great promise for improving microseismic event detection.
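A minimal stand-in for the two ingredients named above, a recursively averaged noise-level estimate followed by threshold shrinkage, might look like this (heavily simplified; the paper's minimally controlled recursive averaging and neighborhood shrinkage are more elaborate):

```python
import numpy as np

def recursive_noise_level(x, alpha=0.95):
    """Recursively averaged noise-level estimate of a 1-D trace:
    sigma[t] = alpha * sigma[t-1] + (1 - alpha) * |x[t]|.
    A crude stand-in for minimally controlled recursive averaging."""
    x = np.asarray(x, dtype=float)
    sigma = np.empty_like(x)
    s = abs(x[0])
    for t, xt in enumerate(x):
        s = alpha * s + (1.0 - alpha) * abs(xt)
        sigma[t] = s
    return sigma

def soft_shrink(x, sigma, k=3.0):
    """Soft-threshold samples against k * local noise level: amplitudes
    below the threshold are zeroed, the rest are shrunk toward zero."""
    x = np.asarray(x, dtype=float)
    thr = k * np.asarray(sigma, dtype=float)
    return np.sign(x) * np.maximum(np.abs(x) - thr, 0.0)
```

Because both steps are single-pass and data-driven, they can run on a continuous stream, which is the property the abstract highlights for real-time event detection.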
Improved estimates of coordinate error for molecular replacement
Oeffner, Robert D.; Bunkóczi, Gábor; McCoy, Airlie J.; Read, Randy J.
2013-11-01
A function for estimating the effective root-mean-square deviation in coordinates between two proteins has been developed that depends on both the sequence identity and the size of the protein and is optimized for use with molecular replacement in Phaser. A top peak translation-function Z-score of over 8 is found to be a reliable metric of when molecular replacement has succeeded. The estimate of the root-mean-square deviation (r.m.s.d.) in coordinates between the model and the target is an essential parameter for calibrating likelihood functions for molecular replacement (MR). Good estimates of the r.m.s.d. lead to good estimates of the variance term in the likelihood functions, which increases signal to noise and hence success rates in the MR search. Phaser has hitherto used an estimate of the r.m.s.d. that only depends on the sequence identity between the model and target and which was not optimized for the MR likelihood functions. Variance-refinement functionality was added to Phaser to enable determination of the effective r.m.s.d. that optimized the log-likelihood gain (LLG) for a correct MR solution. Variance refinement was subsequently performed on a database of over 21 000 MR problems that sampled a range of sequence identities, protein sizes and protein fold classes. Success was monitored using the translation-function Z-score (TFZ), where a TFZ of 8 or over for the top peak was found to be a reliable indicator that MR had succeeded for these cases with one molecule in the asymmetric unit. Good estimates of the r.m.s.d. are correlated with the sequence identity and the protein size. A new estimate of the r.m.s.d. that uses these two parameters in a function optimized to fit the mean of the refined variance is implemented in Phaser and improves MR outcomes. Perturbing the initial estimate of the r.m.s.d. from the mean of the distribution in steps of standard deviations of the distribution further increases MR success rates.
Improved risk estimates for carbon tetrachloride. 1998 annual progress report
Benson, J.M.; Springer, D.L.; Thrall, K.D.
1998-06-01
The overall purpose of these studies is to improve the scientific basis for assessing the cancer risk associated with human exposure to carbon tetrachloride. Specifically, the toxicokinetics of inhaled carbon tetrachloride is being determined in rats, mice and hamsters. Species differences in the metabolism of carbon tetrachloride by rats, mice and hamsters are being determined in vivo and in vitro using tissues and microsomes from these rodent species and man. Dose-response relationships will be determined in all studies. The information will be used to improve the current physiologically based pharmacokinetic model for carbon tetrachloride. The authors will also determine whether carbon tetrachloride is a hepatocarcinogen only when exposure results in cell damage, cell killing, and regenerative cell proliferation. In combination, the results of these studies will provide the types of information needed to enable a refined risk estimate for carbon tetrachloride under EPA's new guidelines for cancer risk assessment.
Improving the Accuracy of Estimation of Climate Extremes
NASA Astrophysics Data System (ADS)
Zolina, Olga; Detemmerman, Valery; Trenberth, Kevin E.
2010-12-01
Workshop on Metrics and Methodologies of Estimation of Extreme Climate Events; Paris, France, 27-29 September 2010; Climate projections point toward more frequent and intense weather and climate extremes such as heat waves, droughts, and floods, in a warmer climate. These projections, together with recent extreme climate events, including flooding in Pakistan and the heat wave and wildfires in Russia, highlight the need for improved risk assessments to help decision makers and the public. But accurate analysis and prediction of risk of extreme climate events require new methodologies and information from diverse disciplines. A recent workshop sponsored by the World Climate Research Programme (WCRP) and hosted at United Nations Educational, Scientific and Cultural Organization (UNESCO) headquarters in France brought together, for the first time, a unique mix of climatologists, statisticians, meteorologists, oceanographers, social scientists, and risk managers (such as those from insurance companies) who sought ways to improve scientists' ability to characterize and predict climate extremes in a changing climate.
Reducing measurement scale mismatch to improve surface energy flux estimation
NASA Astrophysics Data System (ADS)
Iwema, Joost; Rosolem, Rafael; Rahman, Mostaquimur; Blyth, Eleanor; Wagener, Thorsten
2016-04-01
Soil moisture exerts an important control on land surface processes such as energy and water partitioning. A good understanding of these controls is needed, especially given the challenges in providing accurate hyper-resolution hydrometeorological simulations at sub-kilometre scales. Soil moisture controlling factors can, however, differ at distinct scales. In addition, some parameters in land surface models are still often prescribed based on observations obtained at another scale than the one employed by such models (e.g., soil properties obtained from lab samples used in regional simulations). To minimize such effects, parameters can be constrained with local data from Eddy-Covariance (EC) towers (i.e., latent and sensible heat fluxes) and Point Scale (PS) soil moisture observations (e.g., TDR). However, the measurement scales represented by EC and PS still differ substantially. Here we use the fact that Cosmic-Ray Neutron Sensors (CRNS) estimate soil moisture over a horizontal footprint similar to that of EC fluxes to help answer the following question: Does reduced observation scale mismatch yield a better soil moisture-surface flux representation in land surface models? To answer this question we analysed soil moisture and surface flux measurements from twelve COSMOS-Ameriflux sites in the USA characterized by distinct climate, soils and vegetation types. We calibrated model parameters of the Joint UK Land Environment Simulator (JULES) against PS and CRNS soil moisture data, respectively. We analysed the improvement in soil moisture estimation compared to uncalibrated model simulations and then evaluated the degree of improvement in surface fluxes before and after the calibration experiments. Preliminary results suggest that a more accurate representation of soil moisture dynamics is achieved when calibrating against observed soil moisture and further improvement obtained with CRNS relative to PS. However, our results also suggest that a more accurate
Leon-Perez, Jose M; Notelaers, Guy; Arenas, Alicia; Munduate, Lourdes; Medina, Francisco J
2014-05-01
Research findings underline the negative effects of exposure to bullying behaviors and document the detrimental health effects of being a victim of workplace bullying. While no one disputes its negative consequences, debate continues about the magnitude of this phenomenon since very different prevalence rates of workplace bullying have been reported. Methodological aspects may explain these findings. Our contribution to this debate integrates behavioral and self-labeling estimation methods of workplace bullying into a measurement model that constitutes a bullying typology. Results in the present sample (n = 1,619) revealed that six different groups can be distinguished according to the nature and intensity of reported bullying behaviors. These clusters portray different paths for the workplace bullying process, where negative work-related and person-degrading behaviors are strongly intertwined. The analysis of the external validity showed that integrating previous estimation methods into a single measurement latent class model provides a reliable estimation method of workplace bullying, which may overcome previous flaws. PMID:24257593
Cardiac motion estimation by using high-dimensional features and K-means clustering method
NASA Astrophysics Data System (ADS)
Oubel, Estanislao; Hero, Alfred O.; Frangi, Alejandro F.
2006-03-01
Tagged Magnetic Resonance Imaging (MRI) is currently the reference modality for myocardial motion and strain analysis. Mutual Information (MI) based non-rigid registration has proven to be an accurate method to retrieve cardiac motion while overcoming many drawbacks of previous approaches. In a previous work [1], we used Wavelet-based Attribute Vectors (WAVs) instead of pixel intensity to measure similarity between frames. Since the curse of dimensionality forbids the use of histograms to estimate the MI of high-dimensional features, k-Nearest Neighbor Graphs (kNNG) were applied to calculate α-MI. Results showed that cardiac motion estimation was feasible with that approach. In this paper, the K-means clustering method is applied to compute MI from the same set of WAVs. The proposed method was applied to four tagged MRI sequences, and the resulting displacements were compared with manual measurements made by two observers. Results show that more accurate motion estimation is obtained than with the use of pixel intensity.
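The core idea, computing MI from cluster labels rather than from intensity histograms, can be sketched with a toy 1-D k-means quantizer (the paper clusters high-dimensional wavelet attribute vectors; the helper names here are illustrative):

```python
import math
import random

def kmeans_labels(values, k, iters=20, seed=0):
    """Minimal 1-D Lloyd's algorithm; returns a cluster label per value.
    Stands in for clustering high-dimensional WAVs."""
    rng = random.Random(seed)
    centers = rng.sample(list(values), k)
    labels = [0] * len(values)
    for _ in range(iters):
        labels = [min(range(k), key=lambda j: (v - centers[j]) ** 2) for v in values]
        for j in range(k):
            members = [v for v, l in zip(values, labels) if l == j]
            if members:
                centers[j] = sum(members) / len(members)
    return labels

def mutual_information(labels_a, labels_b):
    """Plug-in MI estimate (nats) from the joint histogram of cluster
    labels of two frames' features."""
    n = len(labels_a)
    joint, pa, pb = {}, {}, {}
    for a, b in zip(labels_a, labels_b):
        joint[(a, b)] = joint.get((a, b), 0) + 1
        pa[a] = pa.get(a, 0) + 1
        pb[b] = pb.get(b, 0) + 1
    mi = 0.0
    for (a, b), c in joint.items():
        p_ab = c / n
        mi += p_ab * math.log(p_ab * n * n / (pa[a] * pb[b]))
    return mi
```

Quantizing into a manageable number of clusters keeps the joint histogram well populated, which is what makes MI tractable for high-dimensional features.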
Improving Estimates of Cloud Radiative Forcing over Greenland
NASA Astrophysics Data System (ADS)
Wang, W.; Zender, C. S.
2014-12-01
Multiple driving mechanisms conspire to increase melt extent and the frequency of extreme melt events in the Arctic: changing heat transport, shortwave radiation (SW), and longwave radiation (LW). Cloud Radiative Forcing (CRF) of Greenland's surface is amplified by a dry atmosphere and by albedo feedback, making its contribution to surface melt even more variable in time and space. Unfortunately, accurate cloud observations, and thus CRF estimates, are hindered by Greenland's remoteness, harsh conditions, and the low contrast between surface and cloud reflectance. In this study, cloud observations from satellites and reanalyses are ingested into and evaluated within a column radiative transfer model. An improved CRF dataset is obtained by correcting systematic discrepancies derived from sensitivity experiments. First, we compare the surface radiation budgets from the Column Radiation Model (CRM) driven by different cloud datasets with surface observations from the Greenland Climate Network (GC-Net). In clear skies, CRM-estimated surface radiation driven by water vapor profiles from both AIRS and MODIS during May-Sept 2010-2012 is similar, stable, and reliable. For example, although the AIRS water vapor path exceeds MODIS by 1.4 kg/m2 on a daily average, the overall absolute difference in downwelling SW is < 4 W/m2. CRM estimates are within 20 W/m2 of GC-Net downwelling SW. After calibrating CRM in clear skies, the remaining differences between CRM and observed surface radiation are primarily attributable to differences in cloud observations. We estimate CRF using cloud products from MODIS and from MERRA. The SW radiative forcing of thin clouds is mainly controlled by cloud water path (CWP). As CWP increases from near 0 to 200 g/m2, the net surface SW drops almost linearly from over 100 W/m2 to 30 W/m2, beyond which it becomes relatively insensitive to CWP. The LW is dominated by cloud height: for clouds at all altitudes, the lower the clouds, the greater the LW forcing.
Raichoor, A.; Mei, S.; Huertas-Company, M.; Licitra, R.; Erben, T.; Hildebrandt, H.; Ilbert, O.; Boissier, S.; Boselli, A.; Ball, N. M.; Côté, P.; Ferrarese, L.; Gwyn, S. D. J.; Kavelaars, J. J.; Chen, Y.-T.; Cuillandre, J.-C.; Duc, P. A.; Guhathakurta, P.; and others
2014-12-20
The Next Generation Virgo Cluster Survey (NGVS) is an optical imaging survey covering 104 deg² centered on the Virgo cluster. Currently, the complete survey area has been observed in the u*giz bands and one third in the r band. We present the photometric redshift estimation for the NGVS background sources. After a dedicated data reduction, we perform accurate photometry, with special attention to precise color measurements through point-spread function homogenization. We then estimate the photometric redshifts with the Le Phare and BPZ codes. We add a new prior that extends to i_AB = 12.5 mag. When using the u*griz bands, our photometric redshifts for 15.5 mag ≤ i ≲ 23 mag or z_phot ≲ 1 galaxies have a bias |Δz| < 0.02, less than 5% outliers, and a scatter σ_outl.rej. and an individual error on z_phot that increase with magnitude (from 0.02 to 0.05 and from 0.03 to 0.10, respectively). When using the u*giz bands over the same magnitude and redshift range, the lack of the r band increases the uncertainties in the 0.3 ≲ z_phot ≲ 0.8 range (-0.05 < Δz < -0.02, σ_outl.rej ∼ 0.06, 10%-15% outliers, and z_phot.err. ∼ 0.15). We also present a joint analysis of the photometric redshift accuracy as a function of redshift and magnitude. We assess the quality of our photometric redshifts by comparison to spectroscopic samples and by verifying that the angular auto- and cross-correlation function w(θ) of the entire NGVS photometric redshift sample across redshift bins is in agreement with the expectations.
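The quoted accuracy metrics (bias, outlier-rejected scatter, and outlier fraction of Δz = (z_phot − z_spec)/(1 + z_spec)) can be computed as below; the |Δz| > 0.15 outlier cut is a common convention and an assumption here, not necessarily the NGVS team's choice:

```python
def photoz_stats(z_phot, z_spec, outlier_cut=0.15):
    """Bias, outlier-rejected scatter, and outlier fraction for
    dz = (z_phot - z_spec) / (1 + z_spec), evaluated against a
    spectroscopic comparison sample."""
    dz = [(p - s) / (1.0 + s) for p, s in zip(z_phot, z_spec)]
    kept = [d for d in dz if abs(d) <= outlier_cut]
    bias = sum(kept) / len(kept)
    var = sum((d - bias) ** 2 for d in kept) / len(kept)
    return bias, var ** 0.5, 1.0 - len(kept) / len(dz)
```

In practice these statistics are tabulated in bins of magnitude and redshift, as in the joint analysis the abstract describes.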
Laser photogrammetry improves size and demographic estimates for whale sharks
Rohner, Christoph A.; Richardson, Anthony J.; Prebble, Clare E. M.; Marshall, Andrea D.; Bennett, Michael B.; Weeks, Scarla J.; Cliff, Geremy; Wintner, Sabine P.; Pierce, Simon J.
2015-01-01
Whale sharks Rhincodon typus are globally threatened, but a lack of biological and demographic information hampers an accurate assessment of their vulnerability to further decline or capacity to recover. We used laser photogrammetry at two aggregation sites to obtain more accurate size estimates of free-swimming whale sharks compared to visual estimates, allowing improved estimates of biological parameters. Individual whale sharks ranged from 432–917 cm total length (TL) (mean ± SD = 673 ± 118.8 cm, N = 122) in southern Mozambique and from 420–990 cm TL (mean ± SD = 641 ± 133 cm, N = 46) in Tanzania. By combining measurements of stranded individuals with photogrammetry measurements of free-swimming sharks, we calculated length at 50% maturity for males in Mozambique at 916 cm TL. Repeat measurements of individual whale sharks measured over periods from 347–1,068 days yielded implausible growth rates, suggesting that the growth increment over this period was not large enough to be detected using laser photogrammetry, and that the method is best applied to estimating growth rates over longer (decadal) time periods. The sex ratio of both populations was biased towards males (74% in Mozambique, 89% in Tanzania), the majority of which were immature (98% in Mozambique, 94% in Tanzania). The population structure for these two aggregations was similar to most other documented whale shark aggregations around the world. Information on small (<400 cm) whale sharks, mature individuals, and females in this region is lacking, but necessary to inform conservation initiatives for this globally threatened species. PMID:25870776
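The scaling step behind parallel-laser photogrammetry reduces to simple arithmetic: two lasers a known distance apart project dots onto the animal, and the dot separation in pixels sets the image scale. The 50 cm spacing below is illustrative, not the rigs' actual spacing:

```python
def total_length_cm(length_px, laser_dots_px, laser_spacing_cm=50.0):
    """Convert a pixel measurement of total length (TL) to centimetres
    using the pixel separation of two parallel laser dots of known
    physical spacing. Spacing value is a placeholder, not from the paper."""
    cm_per_px = laser_spacing_cm / laser_dots_px
    return length_px * cm_per_px
```

Because growth increments over ~1-3 years are smaller than the method's measurement error, repeat measurements over such short baselines yield implausible growth rates, as the abstract notes.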
Improved PPP ambiguity resolution by COES FCB estimation
NASA Astrophysics Data System (ADS)
Li, Yihe; Gao, Yang; Shi, Junbo
2016-05-01
Precise point positioning (PPP) integer ambiguity resolution can significantly improve positioning accuracy with the correction of fractional cycle biases (FCBs) by shortening the time to first fix (TTFF) of ambiguities. When satellite orbit products are adopted to estimate the satellite FCB corrections, the narrow-lane (NL) FCB corrections will be contaminated by the orbit's line-of-sight (LOS) errors, which subsequently affect ambiguity resolution (AR) performance as well as positioning accuracy. To effectively separate orbit errors from satellite FCBs, we propose a cascaded orbit error separation (COES) method for the PPP implementation. Instead of using only one direction-independent component as in previous studies, the improved satellite NL FCB corrections are modeled in this study by one direction-independent component and three direction-dependent components per satellite. More specifically, the direction-independent component assimilates actual FCBs, whereas the direction-dependent components assimilate the orbit errors. To evaluate the performance of the proposed method, GPS measurements from a regional and a global network are processed with the IGS Real-Time Service (RTS), IGS Rapid (IGR) products and predicted orbits with >10 cm 3D root mean square (RMS) error. The improvements by the proposed FCB estimation method are validated in terms of ambiguity fractions after applying FCB corrections and positioning accuracy. The numerical results confirm that the FCBs obtained using the proposed method outperform those from the conventional method. The RMS of ambiguity fractions after applying FCB corrections is reduced by 13.2%. The position RMSs in the north, east and up directions are reduced by 30.0, 32.0 and 22.0% on average.
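The decomposition into one direction-independent FCB component plus three direction-dependent components can be illustrated, for a single satellite observed by several stations at one epoch, as a least-squares separation. This is a simplified sketch of the modeling idea, not the paper's network estimator:

```python
import numpy as np

def coes_separate(residuals, los_unit_vectors):
    """Sketch of the COES idea for one satellite: model each station's
    NL residual as fcb + e . d, where e is the station's line-of-sight
    unit vector and d absorbs the orbit (LOS) error. The actual method
    works per-epoch within a full network adjustment."""
    e = np.asarray(los_unit_vectors)           # (n_stations, 3)
    A = np.hstack([np.ones((len(e), 1)), e])   # [1 | ex ey ez]
    x, *_ = np.linalg.lstsq(A, np.asarray(residuals), rcond=None)
    return x[0], x[1:]                         # fcb, orbit-error vector
```

With stations spanning diverse viewing geometries, the directional term soaks up the orbit error and leaves a cleaner FCB estimate.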
Towards Improved Snow Water Equivalent Estimation via GRACE Assimilation
NASA Technical Reports Server (NTRS)
Forman, Bart; Reichle, Rolf; Rodell, Matt
2011-01-01
Passive microwave (e.g. AMSR-E) and visible spectrum (e.g. MODIS) measurements of snow states have been used in conjunction with land surface models to better characterize snow pack states, most notably snow water equivalent (SWE). However, both types of measurements have limitations. AMSR-E, for example, suffers a loss of information in deep/wet snow packs. Similarly, MODIS suffers a loss of temporal correlation information beyond the initial accumulation and final ablation phases of the snow season. Gravimetric measurements, on the other hand, do not suffer from these limitations. In this study, gravimetric measurements from the Gravity Recovery and Climate Experiment (GRACE) mission are used in a land surface model data assimilation (DA) framework to better characterize SWE in the Mackenzie River basin located in northern Canada. Comparisons are made against independent, ground-based SWE observations, state-of-the-art modeled SWE estimates, and independent, ground-based river discharge observations. Preliminary results suggest improved SWE estimates, including improved timing of the subsequent ablation and runoff of the snow pack. Additionally, use of the DA procedure can add vertical and horizontal resolution to the coarse-scale GRACE measurements as well as effectively downscale the measurements in time. Such findings offer the potential for a better understanding of the hydrologic cycle in snow-dominated basins located in remote regions of the globe where ground-based observation collection is difficult, if not impossible. This information could ultimately lead to improved freshwater resource management in communities dependent on snow melt, as well as a reduction in the uncertainty of river discharge into the Arctic Ocean.
Ironing out the wrinkles in the rare biosphere through improved OTU clustering.
Huse, Susan M; Welch, David Mark; Morrison, Hilary G; Sogin, Mitchell L
2010-07-01
Deep sequencing of PCR amplicon libraries facilitates the detection of low-abundance populations in environmental DNA surveys of complex microbial communities. At the same time, deep sequencing can lead to overestimates of microbial diversity through the generation of low-frequency, error-prone reads. Even with sequencing error rates below 0.005 per nucleotide position, the common method of generating operational taxonomic units (OTUs) by multiple sequence alignment and complete-linkage clustering significantly increases the number of predicted OTUs and inflates richness estimates. We show that a 2% single-linkage preclustering methodology followed by an average-linkage clustering based on pairwise alignments more accurately predicts expected OTUs in both single and pooled template preparations of known taxonomic composition. This new clustering method can reduce the OTU richness in environmental samples by as much as 30-60% but does not reduce the fraction of OTUs in long-tailed rank abundance curves that defines the rare biosphere.
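The preclustering step can be sketched as a greedy, abundance-sorted merge: reads within 2% of a more abundant seed are absorbed into it before the final OTU clustering. This simplified stand-in (hypothetical helper names, Hamming distance on aligned, equal-length reads) only illustrates how error-prone low-frequency reads are folded into their likely parents:

```python
def hamming_frac(a, b):
    """Fraction of mismatched positions between two aligned sequences."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def precluster(seqs_with_counts, radius=0.02):
    """Greedy abundance-sorted preclustering: each sequence joins the
    first existing seed within `radius` (2%), otherwise it becomes a
    new seed. A toy stand-in for the published single-linkage
    preclustering, which precedes average-linkage OTU formation."""
    seeds = []  # list of [seed_sequence, absorbed_read_count]
    for seq, count in sorted(seqs_with_counts, key=lambda sc: -sc[1]):
        for entry in seeds:
            if hamming_frac(seq, entry[0]) <= radius:
                entry[1] += count
                break
        else:
            seeds.append([seq, count])
    return seeds
```

Error reads (here, one mismatch in 100 positions from an abundant parent) no longer inflate the OTU count, which is the mechanism behind the reported 30-60% reduction in spurious richness.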
Using SVD on Clusters to Improve Precision of Interdocument Similarity Measure.
Zhang, Wen; Xiao, Fan; Li, Bin; Zhang, Siguang
2016-01-01
Recently, LSI (Latent Semantic Indexing) based on SVD (Singular Value Decomposition) has been proposed to overcome the problems of polysemy and homonymy in traditional lexical matching. However, it is often criticized for its low discriminative power in representing documents, although it has been validated as having good representative quality. In this paper, SVD on clusters is proposed to improve the discriminative power of LSI. The contribution of this paper is threefold. Firstly, we survey existing linear algebra methods for LSI, including both SVD-based and non-SVD-based methods. Secondly, we propose SVD on clusters for LSI and theoretically explain that dimension expansion of document vectors and dimension projection using SVD are the two manipulations involved in SVD on clusters. Moreover, we develop updating processes to fold new documents and terms into a matrix decomposed by SVD on clusters. Thirdly, two corpora, a Chinese corpus and an English corpus, are used to evaluate the performance of the proposed methods. Experiments demonstrate that, to some extent, SVD on clusters can improve the precision of interdocument similarity measures in comparison with other SVD-based LSI methods. PMID:27579031
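The SVD step that both plain LSI and "SVD on clusters" share can be sketched as follows; "SVD on clusters" would apply the decomposition per document cluster, whereas this sketch shows only the corpus-level baseline:

```python
import numpy as np

def lsi_doc_vectors(term_doc, k):
    """Rank-k LSI: SVD of the term-document matrix; documents are
    represented in the k-dimensional latent space by the columns of
    S_k V_k^T (returned as rows, one per document). 'SVD on clusters'
    repeats this per cluster to sharpen discrimination."""
    U, s, Vt = np.linalg.svd(term_doc, full_matrices=False)
    return (np.diag(s[:k]) @ Vt[:k]).T          # shape (n_docs, k)

def cosine(u, v):
    """Interdocument similarity in the latent space."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
```

Documents sharing a term profile collapse onto the same latent direction, which is how LSI recovers similarity that lexical matching misses.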
Improved Estimates of Air Pollutant Emissions from Biorefinery
Tan, Eric C. D.
2015-11-13
We have attempted to use a detailed kinetic modeling approach for improved estimation of combustion air pollutant emissions from a biorefinery. We have developed a preliminary detailed reaction mechanism for biomass combustion. Lignin is the only biomass component included in the current mechanism, and methane is used as the biogas surrogate. The model is capable of predicting the combustion emissions of greenhouse gases (CO2, N2O, CH4) and criteria air pollutants (NO, NO2, CO). The results are yet to be compared with experimental data. The current model is still in its early stages of development. Given the acknowledged complexity of biomass oxidation, as well as of the components in the feed to the combustor, the modeling approach and the chemistry set discussed here may well undergo revision, extension, and further validation in the future.
Improving parameter priors for data-scarce estimation problems
NASA Astrophysics Data System (ADS)
Almeida, Susana; Bulygina, Nataliya; McIntyre, Neil; Wagener, Thorsten; Buytaert, Wouter
2013-09-01
Runoff prediction in ungauged catchments is a recurrent problem in hydrology. Conceptual models are usually calibrated by defining a feasible parameter range and then conditioning parameter sets on observed system responses, e.g., streamflow. In ungauged catchments, several studies condition models on regionalized response signatures, such as runoff ratio or base flow index, using a Bayesian procedure. In this technical note, the Model Parameter Estimation Experiment (MOPEX) data set is used to explore the impact on model performance of assumptions made about the prior distribution. In particular, the common assumption of uniform prior on parameters is shown to be unsuitable. This is because the uniform prior on parameters maps onto skewed response signature priors that can counteract the valuable information gained from the regionalization. To address this issue, we test a methodological development based on an initial transformation of the uniform prior on parameters into a prior that maps to a uniform response signature distribution. We demonstrate that this method contributes to improved estimation of the response signatures.
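The transformation from a uniform prior on parameters to one that maps onto a uniform response-signature distribution can be illustrated with a crude histogram reweighting; this is an assumption-laden stand-in for the method tested in the note, using weights rather than an explicit prior transformation:

```python
def signature_uniform_weights(thetas, signature, bins=10):
    """Reweight a uniform sample of parameter sets so the implied
    response-signature prior is approximately uniform: each sample gets
    weight 1 / (count in its signature histogram bin). Illustrative
    only; the paper transforms the prior itself rather than weighting
    samples."""
    sigs = [signature(t) for t in thetas]
    lo, hi = min(sigs), max(sigs)
    width = (hi - lo) / bins or 1.0
    idx = [min(int((s - lo) / width), bins - 1) for s in sigs]
    counts = [0] * bins
    for i in idx:
        counts[i] += 1
    return [1.0 / counts[i] for i in idx]
```

Parameter sets whose signatures pile up in crowded bins are down-weighted, so the regionalized signature information is no longer counteracted by the skew that a uniform parameter prior induces.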
Improving Accuracy of Influenza-Associated Hospitalization Rate Estimates
Reed, Carrie; Kirley, Pam Daily; Aragon, Deborah; Meek, James; Farley, Monica M.; Ryan, Patricia; Collins, Jim; Lynfield, Ruth; Baumbach, Joan; Zansky, Shelley; Bennett, Nancy M.; Fowler, Brian; Thomas, Ann; Lindegren, Mary L.; Atkinson, Annette; Finelli, Lyn; Chaves, Sandra S.
2015-01-01
Diagnostic test sensitivity affects rate estimates for laboratory-confirmed influenza-associated hospitalizations. We used data from FluSurv-NET, a national population-based surveillance system for laboratory-confirmed influenza hospitalizations, to capture diagnostic test type by patient age and influenza season. We calculated observed rates by age group and adjusted rates by test sensitivity. Test sensitivity was lowest in adults >65 years of age. For all ages, reverse transcription PCR was the most sensitive test, and its use increased from <10% during 2003–2008 to ≈70% during 2009–2013. Observed hospitalization rates per 100,000 persons varied by season: 7.3–50.5 for children <18 years of age, 3.0–30.3 for adults 18–64 years, and 13.6–181.8 for adults >65 years. After 2009, hospitalization rates adjusted by test sensitivity were ≈15% higher for children <18 years, ≈20% higher for adults 18–64 years, and ≈55% higher for adults >65 years of age. Test sensitivity adjustments improve the accuracy of hospitalization rate estimates. PMID:26292017
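The core adjustment, dividing observed rates by the effective sensitivity of the test mix, is simple arithmetic; this sketch ignores the paper's stratification by age group and season, and the test names and numbers are illustrative:

```python
def adjusted_rate(observed_rate, test_mix):
    """Correct an observed laboratory-confirmed hospitalization rate for
    imperfect test sensitivity. Detected cases = true cases x
    sensitivity, summed over the mix of tests used, so the true rate is
    observed / effective sensitivity. `test_mix` maps test type ->
    (fraction of patients tested this way, test sensitivity)."""
    effective_sensitivity = sum(frac * sens for frac, sens in test_mix.values())
    return observed_rate / effective_sensitivity
```

A population tested half with a perfectly sensitive assay and half with a 50%-sensitive one has an effective sensitivity of 0.75, so observed rates understate true rates by a third.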
Improving estimates of air pollution exposure through ubiquitous sensing technologies.
de Nazelle, Audrey; Seto, Edmund; Donaire-Gonzalez, David; Mendez, Michelle; Matamala, Jaume; Nieuwenhuijsen, Mark J; Jerrett, Michael
2013-05-01
Traditional methods of exposure assessment in epidemiological studies often fail to integrate important information on activity patterns, which may lead to bias, loss of statistical power, or both in health effects estimates. Novel sensing technologies integrated with mobile phones offer potential to reduce exposure measurement error. We sought to demonstrate the usability and relevance of the CalFit smartphone technology to track person-level time, geographic location, and physical activity patterns for improved air pollution exposure assessment. We deployed CalFit-equipped smartphones in a free-living population of 36 subjects in Barcelona, Spain. Information obtained on physical activity and geographic location was linked to space-time air pollution mapping. We found that information from CalFit could substantially alter exposure estimates. For instance, on average travel activities accounted for 6% of people's time and 24% of their daily inhaled NO2. Due to the large number of mobile phone users, this technology potentially provides an unobtrusive means of enhancing epidemiologic exposure data at low cost.
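Why a small share of time can contribute a disproportionate share of inhaled dose follows from ventilation-weighted exposure: dose accumulates as concentration times ventilation rate times duration. The numbers in the example are invented for illustration, not from the Barcelona data:

```python
def inhaled_dose(activities):
    """Inhaled pollutant dose summed over activities, each given as
    (concentration, ventilation rate, duration). CalFit supplies the
    activity/location time series; pollution maps supply concentrations."""
    return sum(conc * vent * hours for conc, vent, hours in activities)
```

A short travel period with elevated concentrations and physical activity (higher ventilation) can dominate the daily dose, mirroring the 6%-of-time, 24%-of-NO2 finding.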
Adaptive whitening of the electromyogram to improve amplitude estimation.
Clancy, E A; Farry, K A
2000-06-01
Previous research showed that whitening the surface electromyogram (EMG) can improve EMG amplitude estimation (where EMG amplitude is defined as the time-varying standard deviation of the EMG). However, conventional whitening via a linear filter seems to fail at low EMG amplitude levels, perhaps due to additive background noise in the measured EMG. This paper describes an adaptive whitening technique that overcomes this problem by cascading a nonadaptive whitening filter, an adaptive Wiener filter, and an adaptive gain correction. These stages can be calibrated from two five-second-duration, constant-angle, constant-force contractions, one at a reference level [e.g., 50% maximum voluntary contraction (MVC)] and one at 0% MVC. In experimental studies, subjects used real-time EMG amplitude estimates to track a uniform-density, band-limited random target. With a 0.25-Hz bandwidth target, either adaptive whitening or multiple-channel processing reduced the tracking error roughly half-way to the error achieved using the dynamometer signal as the feedback. At the 1.00-Hz bandwidth, all of the EMG processors had errors equivalent to that of the dynamometer signal, reflecting that errors in this task were dominated by subjects' inability to track targets at this bandwidth. Increases in the additive noise level, smoothing window length, and tracking bandwidth diminish the advantages of whitening. PMID:10833845
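A heavily simplified sketch of whiten-then-estimate amplitude processing, using a first-difference filter in place of the paper's calibrated three-stage cascade (nonadaptive whitener, adaptive Wiener filter, gain correction), which this sketch omits:

```python
def emg_amplitude(emg, window=5):
    """EMG amplitude (time-varying standard deviation) estimated as a
    moving RMS of a crudely whitened signal. The first difference is a
    stand-in whitener only; a calibrated whitening filter is fitted to
    the measured EMG spectrum in practice."""
    whitened = [emg[i] - emg[i - 1] for i in range(1, len(emg))]
    out = []
    for i in range(len(whitened) - window + 1):
        seg = whitened[i:i + window]
        out.append((sum(x * x for x in seg) / window) ** 0.5)
    return out
```

Whitening decorrelates successive samples so that the RMS window averages more independent values, reducing the variance of the amplitude estimate for a given smoothing length.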
Improvement in volume estimation from confocal sections after image deconvolution.
Difato, F; Mazzone, F; Scaglione, S; Fato, M; Beltrame, F; Kubínová, L; Janácek, J; Ramoino, P; Vicidomini, G; Diaspro, A
2004-06-01
The confocal microscope can image a specimen in its natural environment, forming a 3D image of the whole structure by scanning it and collecting light through a small aperture (pinhole), allowing in vivo and in vitro observations. So far, the confocal fluorescence microscope (CFM) has been considered a true volume imager because of the role of the pinhole, which rejects information coming from out-of-focus planes. Unfortunately, the intrinsic imaging properties of the optical scheme presently employed yield a corrupted image that can hamper quantitative analysis of successive image planes. By post-image-collection restoration, it is possible to obtain an estimate, with respect to a given optimization criterion, of the true object, utilizing the impulse response of the system, or Point Spread Function (PSF). The PSF can be measured or predicted so as to have a mathematical and physical model of the image-formation process. Modelling the recording noise as an additive Gaussian process, we used the regularized Iterative Constrained Tikhonov-Miller (ICTM) restoration algorithm to solve the inverse problem. This algorithm finds the best estimate by iteratively searching among the possible positive solutions; in the Fourier domain, such an approach is relatively fast and elegant. In order to compare the effective improvement in quantitative image analysis, we measured the volume of reference objects before and after image restoration, using the isotropic Fakir method.
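A toy 1-D ICTM-style restoration can be written as gradient descent on a Tikhonov functional, with the positivity constraint enforced by clipping each iterate and the convolutions computed in the Fourier domain. This is an illustrative simplification: the published algorithm uses conjugate-gradient iterations and a different regularization term:

```python
import numpy as np

def ictm_deconvolve(y, psf, lam=0.01, step=0.2, iters=200):
    """Minimize ||h*x - y||^2 + lam*||x||^2 subject to x >= 0 by
    projected gradient descent, with convolutions via FFT. A sketch of
    the ICTM idea (iterative, constrained, Tikhonov-regularized), not
    the exact published algorithm."""
    H = np.fft.fft(psf, len(y))
    Y = np.fft.fft(y)
    x = np.clip(y.copy(), 0, None)              # positive initial guess
    for _ in range(iters):
        X = np.fft.fft(x)
        grad = np.real(np.fft.ifft(np.conj(H) * (H * X - Y))) + lam * x
        x = np.clip(x - step * grad, 0, None)   # positivity projection
    return x
```

With a delta-function PSF the fixed point is y/(1 + lam), which makes the regularization bias explicit: restoration trades a small, controlled shrinkage for stability against noise amplification.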
NASA Astrophysics Data System (ADS)
Lybekk, B.; Pedersen, A.; Haaland, S.; Svenes, K.; Fazakerley, A. N.; Masson, A.; Taylor, M. G. G. T.; Trotignon, J.-G.
2012-01-01
A sunlit conductive spacecraft, immersed in tenuous plasma, will attain a positive potential relative to the ambient plasma. This potential is primarily governed by solar irradiation, which causes escape of photoelectrons from the surface of the spacecraft, and the electrons in the ambient plasma providing the return current. In this paper we combine potential measurements from the Cluster satellites with measurements of extreme ultraviolet radiation from the TIMED satellite to establish a relation between solar radiation and spacecraft charging from solar maximum to solar minimum. We then use this relation to derive an improved method for determination of the current balance of the spacecraft. By calibration with other instruments we thereafter derive the plasma density. The results show that this method can provide information about plasma densities in the polar cap and magnetotail lobe regions where other measurements have limitations.
2010-01-01
Background Cryptosporidium parvum is one of the most important biological contaminants in drinking water that produces life threatening infection in people with compromised immune systems. Dairy calves are thought to be the primary source of C. parvum contamination in watersheds. Understanding the spatial and temporal variation in the risk of C. parvum infection in dairy cattle is essential for designing cost-effective watershed management strategies to protect drinking water sources. Crude and Bayesian seasonal risk estimates for Cryptosporidium in dairy calves were used to investigate the spatio-temporal dynamics of C. parvum infection on dairy farms in the New York City watershed. Results Both global (Global Moran's I) and specific (SaTScan) cluster analysis methods revealed a significant (p < 0.05) elliptical spatial cluster in the winter with a relative risk of 5.8, but not in other seasons. There was a two-fold increase in the risk of C. parvum infection in all herds in the summer (p = 0.002), compared to the rest of the year. Bayesian estimates did not show significant spatial autocorrelation in any season. Conclusions Although we were not able to identify seasonal clusters using Bayesian approach, crude estimates highlighted both temporal and spatial clusters of C. parvum infection in dairy herds in a major watershed. We recommend that further studies focus on the factors that may lead to the presence of C. parvum clusters within the watershed, so that monitoring and prevention practices such as stream monitoring, riparian buffers, fencing and manure management can be prioritized and improved, to protect drinking water supplies and public health. PMID:20565805
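Global Moran's I, the first of the cluster statistics used above, has a compact closed form over a spatial weight matrix:

```python
def morans_i(values, weights):
    """Global Moran's I: spatial autocorrelation of `values` under a
    symmetric spatial weight matrix (list of lists, w[i][j] >= 0, zero
    diagonal). Values near +1 indicate clustering of similar values,
    near -1 a checkerboard pattern, near -1/(n-1) spatial randomness."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    num = sum(weights[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))
    w_sum = sum(sum(row) for row in weights)
    denom = sum(d * d for d in dev)
    return (n / w_sum) * (num / denom)
```

SaTScan-style scan statistics complement this global index by locating the specific cluster (here, the winter ellipse with relative risk 5.8) rather than only detecting that clustering exists.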
Leire, Emma; Amaral, Sandra P; Louzao, Iria; Winzer, Klaus; Alexander, Cameron; Fernandez-Megia, Eduardo; Fernandez-Trillo, Francisco
2016-06-24
Here, we evaluate how cationic gallic acid-triethylene glycol (GATG) dendrimers interact with bacteria and their potential to develop new antimicrobials. We demonstrate that GATG dendrimers functionalised with primary amines in their periphery can induce the formation of clusters in Vibrio harveyi, an opportunistic marine pathogen, in a generation dependent manner. Moreover, these cationic GATG dendrimers demonstrate an improved ability to induce cluster formation when compared to poly(N-[3-(dimethylamino)propyl]methacrylamide) [p(DMAPMAm)], a cationic linear polymer previously shown to cluster bacteria. Viability of the bacteria within the formed clusters and evaluation of quorum sensing controlled phenotypes (i.e. light production in V. harveyi) suggest that GATG dendrimers may be activating microbial responses by maintaining a high concentration of quorum sensing signals inside the clusters while increasing permeability of the microbial outer membranes. Thus, the reported GATG dendrimers constitute a valuable platform for the development of novel antimicrobial materials that can target microbial viability and/or virulence. PMID:27127812
Disseminating quality improvement: study protocol for a large cluster-randomized trial
2011-01-01
Background Dissemination is a critical facet of implementing quality improvement in organizations. As a field, addiction treatment has produced effective interventions but disseminated them slowly and reached only a fraction of people needing treatment. This study investigates four methods of disseminating quality improvement (QI) to addiction treatment programs in the U.S. It is, to our knowledge, the largest study of organizational change ever conducted in healthcare. The trial seeks to determine the most cost-effective method of disseminating quality improvement in addiction treatment. Methods The study is evaluating the costs and effectiveness of different QI approaches by randomizing 201 addiction-treatment programs to four interventions. Each intervention used a web-based learning kit plus monthly phone calls, coaching, face-to-face meetings, or the combination of all three. Effectiveness is defined as reducing waiting time (days between first contact and treatment), increasing program admissions, and increasing continuation in treatment. Opportunity costs will be estimated for the resources associated with providing the services. Outcomes The study has three primary outcomes: waiting time, annual program admissions, and continuation in treatment. Secondary outcomes include: voluntary employee turnover, treatment completion, and operating margin. We are also seeking to understand the role of mediators, moderators, and other factors related to an organization's success in making changes. Analysis We are fitting a mixed-effect regression model to each program's average monthly waiting time and continuation rates (based on aggregated client records), including terms to isolate state and intervention effects. Admissions to treatment are aggregated to a yearly level to compensate for seasonality. We will order the interventions by cost to compare them pair-wise to the lowest cost intervention (monthly phone calls). All randomized sites with outcome data will be
Improved Soundings and Error Estimates using AIRS/AMSU Data
NASA Technical Reports Server (NTRS)
Susskind, Joel
2006-01-01
AIRS was launched on EOS Aqua on May 4, 2002, together with AMSU A and HSB, to form a next generation polar orbiting infrared and microwave atmospheric sounding system. The primary products of AIRS/AMSU are twice daily global fields of atmospheric temperature-humidity profiles, ozone profiles, sea/land surface skin temperature, and cloud related parameters including OLR. The sounding goals of AIRS are to produce 1 km tropospheric layer mean temperatures with an rms error of 1 K, and layer precipitable water with an rms error of 20 percent, in cases with up to 80 percent effective cloud cover. The basic theory used to analyze AIRS/AMSU/HSB data in the presence of clouds, called the at-launch algorithm, and a post-launch algorithm which differed only in minor details from the at-launch algorithm, have been described previously. The post-launch algorithm, referred to as AIRS Version 4.0, has been used by the Goddard DAAC to analyze and distribute AIRS retrieval products. In this paper we show progress made toward the AIRS Version 5.0 algorithm, which will be used by the Goddard DAAC starting late in 2006. A new methodology has been developed to provide accurate case-by-case error estimates for retrieved geophysical parameters and for the channel-by-channel cloud cleared radiances used to derive the geophysical parameters from the AIRS/AMSU observations. These error estimates are in turn used for quality control of the derived geophysical parameters and clear column radiances. Improvements made to the retrieval algorithm since Version 4.0 are described, as well as results comparing Version 5.0 retrieval accuracy and spatial coverage with those obtained using Version 4.0.
ERIC Educational Resources Information Center
Schochet, Peter Z.
2009-01-01
This paper examines the estimation of two-stage clustered RCT designs in education research using the Neyman causal inference framework that underlies experiments. The key distinction between the considered causal models is whether potential treatment and control group outcomes are considered to be fixed for the study population (the…
Kostoulas, P; Nielsen, S S; Browne, W J; Leontides, L
2013-06-01
Disease cases are often clustered within herds or, more generally, within groups that share common characteristics. Sample size formulae must adjust for the within-cluster correlation of the primary sampling units. Traditionally, the intra-cluster correlation coefficient (ICC), which is an average measure of the data heterogeneity, has been used to modify formulae for individual sample size estimation. However, subgroups of animals sharing common characteristics may exhibit substantially less or more heterogeneity. Hence, sample size estimates based on the ICC may not achieve the desired precision and power when applied to these groups. We propose the use of the variance partition coefficient (VPC), which measures the clustering of infection/disease for individuals with a common risk profile. Sample size estimates are obtained separately for those groups that exhibit markedly different heterogeneity, thus optimizing resource allocation. A VPC-based predictive simulation method for sample size estimation to substantiate freedom from disease is presented. To illustrate the benefits of the proposed approach we give two examples: the analysis of data from a risk-factor study on Mycobacterium avium subsp. paratuberculosis infection in Danish dairy cattle, and a study on critical control points for Salmonella cross-contamination of pork in Greek slaughterhouses.
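The practical consequence of group-specific clustering can be seen with the standard design-effect correction, n_adjusted = n_SRS x (1 + (m - 1)rho), where rho is the ICC or, per risk group, the VPC. A minimal sketch with illustrative numbers (not from the study):

```python
def design_effect(cluster_size, rho):
    """Inflation factor for cluster sampling: 1 + (m - 1) * rho,
    where rho is the ICC or, computed per risk group, the VPC."""
    return 1 + (cluster_size - 1) * rho

# Same simple-random-sampling size, two risk groups with different VPCs
# (herds of 20 animals; numbers are illustrative):
n_srs = 300
print(round(n_srs * design_effect(20, 0.02)))  # low-clustering group: 414
print(round(n_srs * design_effect(20, 0.20)))  # high-clustering group: 1440
```

A single average ICC falling between these extremes would under-sample the heterogeneous group while wasting samples on the homogeneous one, which is the misallocation the VPC-based approach avoids.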
ERIC Educational Resources Information Center
Hunt, Charles R.
A study developed a model to assist school administrators to estimate costs associated with the delivery of a metals cluster program at Norfolk State College, Virginia. It sought to construct the model so that costs could be explained as a function of enrollment levels. Data were collected through a literature review, computer searches of the…
Improved image registration by sparse patch-based deformation estimation.
Kim, Minjeong; Wu, Guorong; Wang, Qian; Lee, Seong-Whan; Shen, Dinggang
2015-01-15
Despite intensive efforts for decades, deformable image registration is still a challenging problem due to the potentially large anatomical differences across individual images, which limit the registration performance. Fortunately, this issue could be alleviated if a good initial deformation can be provided for the two images under registration, often termed the moving subject and the fixed template, respectively. In this work, we present a novel patch-based initial deformation prediction framework for improving the performance of existing registration algorithms. Our main idea is to estimate the initial deformation between subject and template in a patch-wise fashion by using the sparse representation technique. We argue that two image patches should follow the same deformation toward the template image if their patch-wise appearance patterns are similar. To this end, our framework consists of two stages, i.e., the training stage and the application stage. In the training stage, we register all training images to the pre-selected template, such that the deformation of each training image with respect to the template is known. In the application stage, we apply the following four steps to efficiently calculate the initial deformation field for the new test subject: (1) We pick a small number of key points in the distinctive regions of the test subject; (2) for each key point, we extract a local patch and form a coupled appearance-deformation dictionary from training images, where each dictionary atom consists of an image intensity patch as well as its local deformation; (3) a small set of training image patches in the coupled dictionary is selected to represent the image patch of each subject key point by sparse representation. Then, we can predict the initial deformation for each subject key point by propagating the pre-estimated deformations on the selected training patches with the same sparse representation coefficients; and (4) we
Improving lidar-derived turbulence estimates for wind energy
Newman, Jennifer F.; Clifton, Andrew
2016-07-08
Remote sensing devices such as lidars are currently being investigated as alternatives to cup anemometers on meteorological towers. Although lidars can measure mean wind speeds at heights spanning an entire turbine rotor disk and can be easily moved from one location to another, they measure different values of turbulence than an instrument on a tower. Current methods for improving lidar turbulence estimates include the use of analytical turbulence models and expensive scanning lidars. While these methods provide accurate results in a research setting, they cannot be easily applied to smaller, commercially available lidars in locations where high-resolution sonic anemometer data are not available. Thus, there is clearly a need for a turbulence error reduction model that is simpler and more easily applicable to lidars that are used in the wind energy industry. In this work, a new turbulence error reduction algorithm for lidars is described. The algorithm, L-TERRA, can be applied using only data from a stand-alone commercially available lidar and requires minimal training with meteorological tower data. The basis of L-TERRA is a series of corrections that are applied to the lidar data to mitigate errors from instrument noise, volume averaging, and variance contamination. These corrections are applied in conjunction with a trained machine-learning model to improve turbulence estimates from a vertically profiling WINDCUBE v2 lidar. L-TERRA was tested on data from three sites – two in flat terrain and one in semicomplex terrain. L-TERRA significantly reduced errors in lidar turbulence at all three sites, even when the machine-learning portion of the model was trained on one site and applied to a different site. Errors in turbulence were then related to errors in power through the use of a power prediction model for a simulated 1.5 MW turbine. L-TERRA also reduced errors in power significantly at all three sites, although moderate power errors remained for
Using Satellite Rainfall Estimates to Improve Climate Services in Africa
NASA Astrophysics Data System (ADS)
Dinku, T.
2012-12-01
Climate variability and change pose serious challenges to sustainable development in Africa. The recent famine crisis in the Horn of Africa is yet more evidence of how fluctuations in the climate can destroy lives and livelihoods. Building resilience against the negative impacts of climate and maximizing the benefits from favorable conditions will require mainstreaming climate issues into development policy, planning and practice at different levels. The availability of decision-relevant climate information at different levels is critical. The number and quality of weather stations in many parts of Africa, however, have been declining. The available stations are unevenly distributed, with most located along the main roads. This imposes severe limitations on the availability of climate information and services to rural communities, where these services are needed most. Where observations are taken, they suffer from gaps and poor quality and are often unavailable beyond the respective national meteorological services. Combining available local observations with satellite products, making data and products available through the Internet, and training the user community to understand and use climate information will help to alleviate these problems. Improving data availability involves organizing and cleaning all available national station observations and combining them with satellite rainfall estimates. The main advantage of the satellite products is their excellent spatial coverage at increasingly improved spatial and temporal resolutions. This approach has been implemented in Ethiopia and Tanzania, and it is in the process of being implemented in West Africa. The main outputs include: 1. A thirty-year time series of combined satellite-gauge rainfall at a 10-daily time scale and 10-km spatial resolution; 2. An array of user-specific products for climate analysis and monitoring; 3. An online facility providing user-friendly tools for
Improving estimates of air pollution exposure through ubiquitous sensing technologies
de Nazelle, Audrey; Seto, Edmund; Donaire-Gonzalez, David; Mendez, Michelle; Matamala, Jaume; Nieuwenhuijsen, Mark J; Jerrett, Michael
2013-01-01
Traditional methods of exposure assessment in epidemiological studies often fail to integrate important information on activity patterns, which may lead to bias, loss of statistical power, or both in health effects estimates. Novel sensing technologies integrated with mobile phones offer the potential to reduce exposure measurement error. We sought to demonstrate the usability and relevance of the CalFit smartphone technology to track person-level time, geographic location, and physical activity patterns for improved air pollution exposure assessment. We deployed CalFit-equipped smartphones in a free-living population of 36 subjects in Barcelona, Spain. Information obtained on physical activity and geographic location was linked to space-time air pollution mapping. For instance, we found that, on average, travel activities accounted for 6% of people’s time and 24% of their daily inhaled NO2. Due to the large number of mobile phone users, this technology potentially provides an unobtrusive means of collecting epidemiologic exposure data at low cost. PMID:23416743
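The kind of dose calculation behind the "6% of time, 24% of inhaled NO2" finding reduces to a time-weighted sum over micro-environments, with ventilation scaled up during physical activity. A minimal sketch with hypothetical numbers (not the study's data):

```python
def inhaled_dose(segments):
    """Inhaled dose as a time-weighted sum over micro-environments:
    concentration (ug/m3) x ventilation rate (m3/h) x duration (h)."""
    return sum(c * v * t for c, v, t in segments)

# A hypothetical day: ventilation is higher while commuting actively
day = [(20.0, 0.5, 14.0),   # home, resting ventilation
       (80.0, 1.5, 1.5),    # commute (walking/cycling), elevated ventilation
       (30.0, 0.5, 8.5)]    # office
total = inhaled_dose(day)
commute_share = 100 * inhaled_dose(day[1:2]) / total
print(round(commute_share))  # short commute dominates the dose: 40 (%)
```

The point the sketch makes is the same as the abstract's: a brief, high-concentration, high-ventilation activity can contribute a disproportionate share of the daily inhaled dose, which static home-address exposure models miss.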
Improved Estimate of Phobos Secular Acceleration from MOLA Observations
NASA Technical Reports Server (NTRS)
Bills, Bruce; Neumann, Gregory; Smith, David; Zuber, Maria
2004-01-01
We report on new observations of the orbital position of Phobos, and use them to obtain a new and improved estimate of the rate of secular acceleration in longitude due to tidal dissipation within Mars. Phobos is the innermost natural satellite of Mars, and one of the few natural satellites in the solar system with an orbital period shorter than the rotation period of its primary. As a result, any departure from a perfectly elastic response by Mars in the tides raised on it by Phobos will cause a transfer of angular momentum from the orbit of Phobos to the spin of Mars. Since its discovery in 1877, Phobos has completed over 145,500 orbits, and has one of the best studied orbits in the solar system, with over 6000 earth-based astrometric observations, and over 300 spacecraft observations. As early as 1945, Sharpless noted that there is a secular acceleration in mean longitude, with rate (1.88 ± 0.25) × 10^-3 degrees per square year. In preparation for the 1989 Russian spacecraft mission to Phobos, considerable work was done compiling past observations, and refining the orbital model. All of the published estimates from that era are in good agreement. A typical solution (Jacobson et al., 1989) yields (1.249 ± 0.018) × 10^-3 degrees per square year. The MOLA instrument on MGS is a laser altimeter, and was designed to measure the topography of Mars. However, it has also been used to make observations of the position of Phobos. In 1998, a direct range measurement was made, which indicated that Phobos was slightly ahead of the predicted position. The MOLA detector views the surface of Mars in a narrow field of view, at 1064 nanometer wavelength, and can detect shadows cast by Phobos on the surface of Mars. We have found 15 such serendipitous shadow transit events over the interval from xx to xx, and all of them show Phobos to be ahead of schedule, and getting progressively farther ahead of the predicted position. In contrast, the cross-track positions are quite close
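A constant secular acceleration s in mean longitude accumulates quadratically, Δλ = ½ s t², which is why a century of observations makes even a 10^-3 deg/yr² rate easily detectable. A sketch using the Jacobson et al. (1989) rate quoted above:

```python
def longitude_offset(rate_deg_per_yr2, years):
    """Along-track offset (degrees) accumulated by a constant secular
    acceleration in mean longitude: delta_lambda = 0.5 * rate * t**2."""
    return 0.5 * rate_deg_per_yr2 * years ** 2

# Offset accumulated over a century at the Jacobson et al. (1989) rate
print(round(longitude_offset(1.249e-3, 100.0), 3))  # degrees over 100 years
```

At roughly 6 degrees per century, the offset corresponds to Phobos running minutes "ahead of schedule" in its short orbit, consistent with the shadow-transit timing residuals described above.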
An improved global dynamic routing strategy for scale-free network with tunable clustering
NASA Astrophysics Data System (ADS)
Sun, Lina; Huang, Ning; Zhang, Yue; Bai, Yannan
2016-08-01
An efficient routing strategy can deliver packets quickly and thereby improve network capacity. Node congestion and transmission path length are inevitable real-time factors for a good routing strategy. Existing dynamic global routing strategies consider only the congestion of neighbor nodes and the shortest path, ignoring the congestion of other key nodes along the path. With the development of detection methods and techniques, global traffic information is readily available and important for the routing choice. Reasonable use of this information can effectively improve network routing. We therefore propose an improved global dynamic routing strategy, which considers the congestion of all nodes on the shortest path and incorporates the waiting time of the most congested node into the path cost. We investigate the effectiveness of the proposed routing for scale-free networks with different clustering coefficients, comparing it with the shortest-path routing strategy and with a traffic-awareness routing strategy that considers only the waiting time of neighbor nodes. Simulation results show that network capacity is greatly enhanced compared with shortest-path routing, and that congestion builds up relatively slowly compared with the traffic-awareness routing strategy. Increasing the clustering coefficient not only reduces network throughput but also lengthens the average transmission path for scale-free networks with tunable clustering. The proposed strategy helps to ease network congestion and to inform routing strategy design.
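The idea of folding node congestion into the route choice can be sketched as a node-weighted Dijkstra search, where entering a node costs one hop plus a term proportional to its queue length. This toy version is an illustrative simplification, not the paper's exact cost model:

```python
import heapq

def congestion_aware_path(adj, queue_len, src, dst, h=0.5):
    """Dijkstra over node costs: each hop costs 1 plus h times the number
    of packets queued at the node being entered (h trades congestion
    avoidance against path length)."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v in adj[u]:
            nd = d + 1.0 + h * queue_len[v]
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    path, node = [dst], dst
    while node != src:          # walk predecessors back to the source
        node = prev[node]
        path.append(node)
    return path[::-1]

# Two routes A->D: a short one through congested B, a longer idle one
adj = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "E": ["D"], "D": []}
q = {"A": 0, "B": 10, "C": 0, "E": 0, "D": 0}
print(congestion_aware_path(adj, q, "A", "D"))  # ['A', 'C', 'E', 'D']
```

With h = 0 this degenerates to shortest-path routing (A-B-D); with congestion weighted in, the longer but idle detour wins, which is the trade-off the proposed strategy exploits.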
Arpino, Bruno; Cannas, Massimo
2016-05-30
This article focuses on the implementation of propensity score matching for clustered data. Different approaches to reduce bias due to cluster-level confounders are considered and compared using Monte Carlo simulations. We investigated methods that exploit the clustered structure of the data in two ways: in the estimation of the propensity score model (through the inclusion of fixed or random effects) or in the implementation of the matching algorithm. In addition to a pure within-cluster matching, we also assessed the performance of a new approach, 'preferential' within-cluster matching. This approach first searches for control units to be matched to treated units within the same cluster. If matching is not possible within-cluster, then the algorithm searches in other clusters. All considered approaches successfully reduced the bias due to the omission of a cluster-level confounder. The preferential within-cluster matching approach, combining the advantages of within-cluster and between-cluster matching, showed relatively good performance with both large and small clusters, and was often the best method. An important advantage of this approach is that it reduces the number of unmatched units compared with a pure within-cluster matching. We applied these methods to the estimation of the effect of caesarean section on the Apgar score using birth register data. Copyright © 2016 John Wiley & Sons, Ltd.
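A minimal sketch of 'preferential' within-cluster matching as greedy 1:1 nearest-neighbour matching on the propensity score with a caliper; the caliper, greedy order, and data are illustrative assumptions, not the authors' exact algorithm:

```python
def preferential_within_cluster_match(treated, controls, caliper=0.1):
    """Match each treated unit (cluster_id, ps) to the nearest-PS control,
    searching its own cluster first and other clusters only as a fallback.
    Returns a list of (treated, control) pairs; each control used once."""
    available = list(controls)
    pairs = []
    for t_cluster, t_ps in treated:
        same = [c for c in available if c[0] == t_cluster]
        pool = same if same else available        # fall back across clusters
        if not pool:
            continue                              # no controls left unmatched
        best = min(pool, key=lambda c: abs(c[1] - t_ps))
        if abs(best[1] - t_ps) <= caliper:
            pairs.append(((t_cluster, t_ps), best))
            available.remove(best)
    return pairs

treated = [("c1", 0.40), ("c2", 0.55)]
controls = [("c1", 0.42), ("c3", 0.54), ("c1", 0.70)]
print(preferential_within_cluster_match(treated, controls))
```

The first treated unit finds a control in its own cluster; the second has no same-cluster control and is rescued by the between-cluster fallback, which is exactly how the approach reduces the unmatched count relative to pure within-cluster matching.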
Paireau, Juliette; Girond, Florian; Collard, Jean-Marc; Maïnassara, Halima B.; Jusot, Jean-François
2012-01-01
Background Meningococcal meningitis is a major health problem in the “African Meningitis Belt” where recurrent epidemics occur during the hot, dry season. In Niger, a central country belonging to the Meningitis Belt, reported meningitis cases varied between 1,000 and 13,000 from 2003 to 2009, with a case-fatality rate of 5–15%. Methodology/Principal Findings In order to gain insight in the epidemiology of meningococcal meningitis in Niger and to improve control strategies, the emergence of the epidemics and their diffusion patterns at a fine spatial scale have been investigated. A statistical analysis of the spatio-temporal distribution of confirmed meningococcal meningitis cases was performed between 2002 and 2009, based on health centre catchment areas (HCCAs) as spatial units. Anselin's local Moran's I test for spatial autocorrelation and Kulldorff's spatial scan statistic were used to identify spatial and spatio-temporal clusters of cases. Spatial clusters were detected every year and most frequently occurred within nine southern districts. Clusters most often encompassed few HCCAs within a district, without expanding to the entire district. Besides, strong intra-district heterogeneity and inter-annual variability in the spatio-temporal epidemic patterns were observed. To further investigate the benefit of using a finer spatial scale for surveillance and disease control, we compared timeliness of epidemic detection at the HCCA level versus district level and showed that a decision based on threshold estimated at the HCCA level may lead to earlier detection of outbreaks. Conclusions/Significance Our findings provide an evidence-based approach to improve control of meningitis in sub-Saharan Africa. First, they can assist public health authorities in Niger to better adjust allocation of resources (antibiotics, rapid diagnostic tests and medical staff). Then, this spatio-temporal analysis showed that surveillance at a finer spatial scale (HCCA) would be more
Yanai, Koji; Murakami, Takeshi; Bibb, Mervyn
2006-06-20
Streptomyces kanamyceticus 12-6 is a derivative of the wild-type strain developed for industrial kanamycin (Km) production. Southern analysis and DNA sequencing revealed amplification of a large genomic segment including the entire Km biosynthetic gene cluster in the chromosome of strain 12-6. At 145 kb, the amplifiable unit of DNA (AUD) is the largest AUD reported in Streptomyces. Striking repetitive DNA sequences belonging to the clustered regularly interspaced short palindromic repeats family were found in the AUD and may play a role in its amplification. Strain 12-6 contains a mixture of different chromosomes with varying numbers of AUDs, sometimes exceeding 36 copies and producing an amplified region >5.7 Mb. The level of Km production depended on the copy number of the Km biosynthetic gene cluster, suggesting that DNA amplification occurred during strain improvement as a consequence of selection for increased Km resistance. Amplification of DNA segments including entire antibiotic biosynthetic gene clusters might be a common mechanism leading to increased antibiotic production in industrial strains.
Improved measurements of RNA structure conservation with generalized centroid estimators.
Okada, Yohei; Saito, Yutaka; Sato, Kengo; Sakakibara, Yasubumi
2011-01-01
Identification of non-protein-coding RNAs (ncRNAs) in genomes is a crucial task for not only molecular cell biology but also bioinformatics. Secondary structures of ncRNAs are employed as a key feature of ncRNA analysis since biological functions of ncRNAs are deeply related to their secondary structures. Although the minimum free energy (MFE) structure of an RNA sequence is regarded as the most stable structure, MFE alone is not an appropriate measure for identifying ncRNAs since the free energy is heavily biased by the nucleotide composition. Therefore, instead of MFE itself, several alternative measures for identifying ncRNAs have been proposed, such as the structure conservation index (SCI) and the base pair distance (BPD), both of which employ MFE structures. However, these measures are not suitable for identifying ncRNAs in some cases, including genome-wide searches, and incur a high false discovery rate. In this study, we propose improved measures based on SCI and BPD, applying generalized centroid estimators to incorporate robustness against low-quality multiple alignments. Our experiments show that our proposed methods achieve higher accuracy than the original SCI and BPD for not only human-curated structural alignments but also low-quality alignments produced by CLUSTAL W. Furthermore, the centroid-based SCI on CLUSTAL W alignments is more accurate than or comparable with the original SCI on structural alignments generated with RAF, a high-quality structural aligner, which on average requires twice the computational time. We conclude that our methods are more suitable than the original SCI and BPD for genome-wide alignments, which are of low quality from the point of view of secondary structures.
NASA Astrophysics Data System (ADS)
Xi, Yakun; Zhang, Cheng
2016-07-01
We show that one can obtain improved L^4 geodesic restriction estimates for eigenfunctions on compact Riemannian surfaces with nonpositive curvature. We achieve this by adapting Sogge's strategy in (Improved critical eigenfunction estimates on manifolds of nonpositive curvature, Preprint). We first combine the improved L^2 restriction estimate of Blair and Sogge (Concerning Toponogov's Theorem and logarithmic improvement of estimates of eigenfunctions, Preprint) and the classical improved L^∞ estimate of Bérard to obtain an improved weak-type L^4 restriction estimate. We then upgrade this weak estimate to a strong one by using the improved Lorentz space estimate of Bak and Seeger (Math Res Lett 18(4):767-781, 2011). This estimate improves the L^4 restriction estimate of Burq et al. (Duke Math J 138:445-486, 2007) and Hu (Forum Math 6:1021-1052, 2009) by a power of (log log λ)^{-1}. Moreover, in the case of compact hyperbolic surfaces, we obtain further improvements in terms of (log λ)^{-1} by applying the ideas from (Chen and Sogge, Commun Math Phys 329(3):435-459, 2014) and (Blair and Sogge, Concerning Toponogov's Theorem and logarithmic improvement of estimates of eigenfunctions, Preprint). We are able to compute various constants that appeared in (Chen and Sogge, Commun Math Phys 329(3):435-459, 2014) explicitly, by proving detailed oscillatory integral estimates and lifting calculations to the universal cover H^2.
Yoder, P S; St-Pierre, N R; Weiss, W P
2014-09-01
Accurate estimates of mean nutrient composition of feeds, nutrient variance (i.e., standard deviation), and covariance (i.e., correlation) are needed to develop a more quantitative approach of formulating diets to reduce risk and optimize safety factors. Commercial feed-testing laboratories have large databases of composition values for many feeds, but because of potentially misidentified feeds or poorly defined feed names, these databases are possibly contaminated by incorrect results and could generate inaccurate statistics. The objectives of this research were to (1) design a procedure (also known as a mathematical filter) that generates accurate estimates of the first 2 moments [i.e., the mean and (co)variance] of the nutrient distributions for the largest subpopulation within a feed in the presence of outliers and multiple subpopulations, and (2) use the procedure to generate feed composition tables with accurate means, variances, and correlations. Feed composition data (>1,300,000 samples) were collected from 2 major US commercial laboratories. A combination of a univariate step and 2 multivariate steps (principal components analysis and cluster analysis) were used to filter the data. On average, 13.5% of the total samples of a particular feed population were removed, of which the multivariate steps removed the majority (66% of removed samples). For some feeds, inaccurate identification (e.g., corn gluten feed samples included in the corn gluten meal population) was a primary reason for outliers, whereas for other feeds, subpopulations of a broader population were identified (e.g., immature alfalfa silage within a broad population of alfalfa silage). Application of the procedure did not usually affect the mean concentration of nutrients but greatly reduced the standard deviation and often changed the correlation estimates among nutrients. More accurate estimates of the variation of feeds and how they tend to vary will improve the economic evaluation of feeds
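The spirit of the filtering procedure above (a univariate screen followed by multivariate steps that catch mislabelled samples the univariate pass misses) can be sketched with a z-score cut and a Mahalanobis-distance cut. The Mahalanobis step stands in for the paper's PCA-plus-cluster analysis, and all thresholds and data are illustrative:

```python
import numpy as np

def filter_feed_samples(X, z_max=3.0, md_max=3.0):
    """Two-pass outlier screen on a samples-by-nutrients matrix:
    (1) univariate z-score cut per nutrient, then (2) a multivariate
    Mahalanobis-distance cut on the survivors. Returns a boolean mask."""
    X = np.asarray(X, dtype=float)
    z = np.abs((X - X.mean(axis=0)) / X.std(axis=0))
    keep = (z < z_max).all(axis=1)
    Xk = X[keep]
    d = Xk - Xk.mean(axis=0)
    inv = np.linalg.inv(np.cov(Xk, rowvar=False))
    md = np.sqrt(np.einsum("ij,jk,ik->i", d, inv, d))
    keep[np.where(keep)[0][md >= md_max]] = False
    return keep

# 20 plausible samples of one feed (two nutrient columns, e.g. % CP
# and % NDF) plus one grossly mislabelled sample
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([10.0, 40.0], [0.5, 2.0], size=(20, 2)),
               [[60.0, 40.0]]])
mask = filter_feed_samples(X)
print(mask.sum(), bool(mask[-1]))  # outlier rejected, most real samples kept
```

As in the paper, removing such contamination barely moves the mean but shrinks the standard deviation substantially, since the variance of a feed population is dominated by a few misidentified samples.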
Estimating Treatment Effects via Multilevel Matching within Homogenous Groups of Clusters
ERIC Educational Resources Information Center
Steiner, Peter M.; Kim, Jee-Seon
2015-01-01
Despite the popularity of propensity score (PS) techniques they are not yet well studied for matching multilevel data where selection into treatment takes place among level-one units within clusters. This paper suggests a PS matching strategy that tries to avoid the disadvantages of within- and across-cluster matching. The idea is to first…
Dukart, Juergen; Perneczky, Robert; Förster, Stefan; Barthel, Henryk; Diehl-Schmid, Janine; Draganski, Bogdan; Obrig, Hellmuth; Santarnecchi, Emiliano; Drzezga, Alexander; Fellgiebel, Andreas; Frackowiak, Richard; Kurz, Alexander; Müller, Karsten; Sabri, Osama; Schroeter, Matthias L.; Yakushev, Igor
2013-01-01
Positron emission tomography with [18F] fluorodeoxyglucose (FDG-PET) plays a well-established role in assisting early detection of frontotemporal lobar degeneration (FTLD). Here, we examined the impact of intensity normalization to different reference areas on accuracy of FDG-PET to discriminate between patients with mild FTLD and healthy elderly subjects. FDG-PET was conducted at two centers using different acquisition protocols: 41 FTLD patients and 42 controls were studied at center 1, 11 FTLD patients and 13 controls were studied at center 2. All PET images were intensity normalized to the cerebellum, primary sensorimotor cortex (SMC), cerebral global mean (CGM), and a reference cluster with most preserved FDG uptake in the aforementioned patients group of center 1. Metabolic deficits in the patient group at center 1 appeared 1.5, 3.6, and 4.6 times greater in spatial extent, when tracer uptake was normalized to the reference cluster rather than to the cerebellum, SMC, and CGM, respectively. Logistic regression analyses based on normalized values from FTLD-typical regions showed that at center 1, cerebellar, SMC, CGM, and cluster normalizations differentiated patients from controls with accuracies of 86%, 76%, 75% and 90%, respectively. A similar order of effects was found at center 2. Cluster normalization leads to a significant increase of statistical power in detecting early FTLD-associated metabolic deficits. The established FTLD-specific cluster can be used to improve detection of FTLD on a single case basis at independent centers – a decisive step towards early diagnosis and prediction of FTLD syndromes enabling specific therapies in the future. PMID:23451025
Using Smartphone Sensors for Improving Energy Expenditure Estimation.
Pande, Amit; Zhu, Jindan; Das, Aveek K; Zeng, Yunze; Mohapatra, Prasant; Han, Jay J
2015-01-01
Energy expenditure (EE) estimation is an important factor in tracking personal activity and preventing chronic diseases, such as obesity and diabetes. Accurate and real-time EE estimation utilizing small wearable sensors is a difficult task, primarily because most existing schemes work offline or use heuristics. In this paper, we focus on accurate EE estimation for tracking ambulatory activities (walking, standing, climbing upstairs, or downstairs) of a typical smartphone user. We used built-in smartphone sensors (accelerometer and barometer sensor), sampled at low frequency, to accurately estimate EE. Using a barometer sensor, in addition to an accelerometer sensor, greatly increases the accuracy of EE estimation. Using bagged regression trees, a machine learning technique, we developed a generic regression model for EE estimation that yields up to 96% correlation with actual EE. We compare our results against the state-of-the-art calorimetry equations and consumer electronics devices (Fitbit and Nike+ FuelBand). The newly developed EE estimation algorithm demonstrated superior accuracy compared with currently available methods. The results were calibrated against COSMED K4b2 calorimeter readings. PMID:27170901
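The bagging idea behind this kind of regression model can be sketched in a few lines. This toy version uses a linear base learner on a single illustrative feature rather than the paper's regression trees over accelerometer and barometer features; all names and data here are illustrative, not the authors' implementation.

```python
import random

def fit_linear(xs, ys):
    # ordinary least squares for y = a*x + b (closed form)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    var = sum((x - mx) ** 2 for x in xs)
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / var if var else 0.0
    return a, my - a * mx

def bagged_predict(xs, ys, x_new, n_models=25, seed=0):
    # bagging: fit one model per bootstrap resample, average their predictions
    rng = random.Random(seed)
    n, preds = len(xs), []
    for _ in range(n_models):
        idx = [rng.randrange(n) for _ in range(n)]
        a, b = fit_linear([xs[i] for i in idx], [ys[i] for i in idx])
        preds.append(a * x_new + b)
    return sum(preds) / len(preds)
```

Averaging over bootstrap resamples reduces the variance of the base learner, which is why bagged trees tend to generalize better than a single tree on noisy sensor data.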
Improving Estimation Accuracy of Aggregate Queries on Data Cubes
Pourabbas, Elaheh; Shoshani, Arie
2008-08-15
In this paper, we investigate the problem of estimation of a target database from summary databases derived from a base data cube. We show that such estimates can be derived by choosing a primary database which uses a proxy database to estimate the results. This technique is common in statistics, but an important issue we are addressing is the accuracy of these estimates. Specifically, given multiple primary and multiple proxy databases, that share the same summary measure, the problem is how to select the primary and proxy databases that will generate the most accurate target database estimation possible. We propose an algorithmic approach for determining the steps to select or compute the source databases from multiple summary databases, which makes use of the principles of information entropy. We show that the source databases with the largest number of cells in common provide the more accurate estimates. We prove that this is consistent with maximizing the entropy. We provide some experimental results on the accuracy of the target database estimation in order to verify our results.
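The entropy principle described above can be sketched concretely: compute the Shannon entropy of each candidate summary database's cell-count distribution and prefer the one with maximum entropy (finer partitions, more cells in common, carry more information). This is an illustrative stand-in for the paper's algorithm, not its actual implementation.

```python
import math

def entropy_bits(counts):
    # Shannon entropy of a cell-count distribution, in bits
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c)

def pick_source(candidates):
    # candidates: (name, cell_counts) pairs; prefer the summary database
    # whose cell distribution has maximum entropy
    return max(candidates, key=lambda name_counts: entropy_bits(name_counts[1]))
```

A two-cell summary and a four-cell refinement of the same totals illustrate the preference: the finer partition has higher entropy and is selected.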
NASA Astrophysics Data System (ADS)
Chanteur, Gerard
A multi-spacecraft mission with at least four spacecraft, like CLUSTER, MMS, or Cross-Scales, can determine the local geometry of the magnetic field lines when the size of the cluster of spacecraft is small enough compared to the gradient scale lengths of the magnetic field. Shen et al. (2003) and Runov et al. (2003 and 2005) used CLUSTER data to estimate the normal and the curvature of magnetic field lines in the terrestrial current sheet; the two groups used different approaches. Reciprocal vectors of the tetrahedron formed by four spacecraft are a powerful tool for estimating gradients of fields (Chanteur, 1998 and 2000). Considering a thick and planar current sheet model and making use of the statistical properties of the reciprocal vectors allows us to discuss theoretically how physical and geometrical errors affect these estimations. References Chanteur, G., Spatial Interpolation for Four Spacecraft: Theory, in Analysis Methods for Multi-Spacecraft Data, ISSI SR-001, pp. 349-369, ESA Publications Division, 1998. Chanteur, G., Accuracy of field gradient estimations by Cluster: Explanation of its dependency upon elongation and planarity of the tetrahedron, pp. 265-268, ESA SP-449, 2000. Runov, A., Nakamura, R., Baumjohann, W., Treumann, R. A., Zhang, T. L., Volwerk, M., Vörös, Z., Balogh, A., Glaßmeier, K.-H., Klecker, B., Rème, H., and Kistler, L., Current sheet structure near magnetic X-line observed by Cluster, Geophys. Res. Lett., 30, 33-1, 2003. Runov, A., Sergeev, V. A., Nakamura, R., Baumjohann, W., Apatenkov, S., Asano, Y., Takada, T., Volwerk, M., Vörös, Z., Zhang, T. L., Sauvaud, J.-A., Rème, H., and Balogh, A., Local structure of the magnetotail current sheet: 2001 Cluster observations, Ann. Geophys., 24, 247-262, 2006. Shen, C., Li, X., Dunlop, M., Liu, Z. X., Balogh, A., Baker, D. N., Hapgood, M., and Wang, X., Analyses on the geometrical structure of magnetic field in the current sheet based on cluster measurements, J. Geophys. Res
Hox, Joop J; Moerbeek, Mirjam; Kluytmans, Anouck; van de Schoot, Rens
2014-01-01
Cluster randomized trials assess the effect of an intervention that is carried out at the group or cluster level. Ajzen's theory of planned behavior is often used to model the effect of the intervention as an indirect effect mediated in turn by attitude, norms and behavioral intention. Structural equation modeling (SEM) is the technique of choice to estimate indirect effects and their significance. However, this is a large sample technique, and its application in a cluster randomized trial assumes a relatively large number of clusters. In practice, the number of clusters in these studies tends to be relatively small, e.g., much less than fifty. This study uses simulation methods to find the lowest number of clusters needed when multilevel SEM is used to estimate the indirect effect. Maximum likelihood estimation is compared to Bayesian analysis, with the central quality criteria being accuracy of the point estimate and the confidence interval. We also investigate the power of the test for the indirect effect. We conclude that Bayes estimation works well with much smaller cluster-level sample sizes (such as 20 clusters) than maximum likelihood estimation; although the bias is larger, the coverage is much better. When only 5-10 clusters are available per treatment condition, problems occur even with Bayesian estimation. PMID:24550881
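The indirect effect the abstract refers to is the product of two path coefficients. A minimal single-level Monte Carlo sketch (not the study's multilevel SEM) generates treatment → mediator → outcome data and forms the product-of-coefficients estimate; here the outcome depends on the mediator only, so simple OLS slopes suffice.

```python
import random

def slope(xs, ys):
    # OLS slope of y on x
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

def simulate_indirect(n, a=0.5, b=0.8, noise=0.1, seed=1):
    # treatment x -> mediator m (path a), mediator m -> outcome y (path b);
    # the indirect effect is a*b, estimated by a_hat * b_hat
    rng = random.Random(seed)
    x = [rng.random() for _ in range(n)]
    m = [a * xi + noise * rng.gauss(0, 1) for xi in x]
    y = [b * mi + noise * rng.gauss(0, 1) for mi in m]
    return slope(x, m) * slope(m, y)
```

With a = 0.5 and b = 0.8 the true indirect effect is 0.4, and the estimate converges to it as n grows; a full mediation analysis would additionally control for the treatment in the outcome regression.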
NASA Astrophysics Data System (ADS)
Hong, Ban Zhen; Keong, Lau Kok; Shariff, Azmi Mohd
2016-05-01
The employment of different mathematical models to address specifically the bubble nucleation rates of water vapour and dissolved air molecules is essential, as the physics by which they form bubble nuclei is different. Available methods for calculating the bubble nucleation rate in a binary mixture, such as density functional theory, are difficult to couple with a computational fluid dynamics (CFD) approach. In addition, the effect of dissolved gas concentration was neglected in most studies of bubble nucleation rate prediction. The most probable bubble nucleation rate for the water vapour and dissolved air mixture in a 2D quasi-stable flow across a cavitating nozzle in the current work was estimated via the statistical mean of all possible bubble nucleation rates of the mixture (different mole fractions of water vapour and dissolved air) and the corresponding number of molecules in the critical cluster. Theoretically, the bubble nucleation rate is greatly dependent on the components' mole fractions in a critical cluster. Hence, the dissolved gas concentration effect was included in the current work. Besides, the possible bubble nucleation rates were predicted based on the calculated number of molecules required to form a critical cluster. The estimation of the components' mole fractions in the critical cluster for the water vapour and dissolved air mixture was obtained by coupling the enhanced classical nucleation theory and the CFD approach. In addition, the distribution of bubble nuclei of the water vapour and dissolved air mixture could be predicted via the utilisation of a population balance model.
Improving estimates of exposures for epidemiologic studies of plutonium workers.
Ruttenber, A J; Schonbeck, M; McCrea, J; McClure, D; Martyny, J
2001-01-01
Epidemiologic studies of nuclear facilities usually focus on relations between cancer and doses from external penetrating radiation, and describe these exposures with little detail on measurement error and missing data. We demonstrate ways to document complex exposures to nuclear workers with data on external and internal exposures to ionizing radiation and toxic chemicals. We describe methods for assessing internal exposures to plutonium and external doses from neutrons; the use of a job exposure matrix for estimating chemical exposures; and methods for imputing missing data for exposures and doses. For plutonium workers at Rocky Flats, errors in estimating neutron doses resulted in underestimating the total external dose for production workers by about 16%. Estimates of systemic deposition do not correlate well with estimates of organ doses. Only a small percentage of workers had exposures to toxic chemicals, making epidemiologic assessments of risk difficult. PMID:11319050
Improved estimation of random vibration loads in launch vehicles
NASA Technical Reports Server (NTRS)
Mehta, R.; Erwin, E.; Suryanarayan, S.; Krishna, Murali M. R.
1993-01-01
Random vibration induced load is an important component of the total design load environment for payload and launch vehicle components and their support structures. The current approach to random vibration load estimation is based, particularly at the preliminary design stage, on the use of Miles' equation which assumes a single degree-of-freedom (DOF) system and white noise excitation. This paper examines the implications of the use of multi-DOF system models and response calculation based on numerical integration using the actual excitation spectra for random vibration load estimation. The analytical study presented considers a two-DOF system and brings out the effects of modal mass, damping and frequency ratios on the random vibration load factor. The results indicate that load estimates based on the Miles' equation can be significantly different from the more accurate estimates based on multi-DOF models.
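Miles' equation, the single-DOF, white-noise baseline the paper compares against, has a simple closed form that can be stated directly. Input values below are illustrative, not from the paper.

```python
import math

def miles_grms(fn_hz, q, psd_g2_per_hz):
    # Miles' equation for a single-DOF system under white-noise base excitation:
    # G_rms = sqrt((pi / 2) * fn * Q * PSD(fn))
    # fn_hz: natural frequency [Hz], q: amplification factor Q,
    # psd_g2_per_hz: input acceleration PSD at fn [g^2/Hz]
    return math.sqrt((math.pi / 2.0) * fn_hz * q * psd_g2_per_hz)
```

For example, fn = 100 Hz, Q = 10, and a 0.04 g²/Hz input give an RMS response of about 7.93 g; the paper's point is that multi-DOF models integrated against the actual excitation spectrum can differ significantly from this estimate.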
NASA Astrophysics Data System (ADS)
Priyatikanto, R.; Arifyanto, M. I.
2015-01-01
Stellar membership determination of an open cluster is an important process to carry out before further analysis. Basically, there are two classes of membership determination methods: parametric and non-parametric. In this study, an alternative non-parametric method based on Binned Kernel Density Estimation that accounts for measurement errors (called BKDE-e) is proposed. This method is applied to proper motion data to determine cluster membership kinematically and to estimate the average proper motion of the cluster. Monte Carlo simulations show that the average proper motion determination using this proposed method is statistically more accurate than the ordinary Kernel Density Estimator (KDE). By including measurement errors in the calculation, the mode location from the resulting density estimate is less sensitive to non-physical or stochastic fluctuation than ordinary KDE, which excludes measurement errors. For a typical mean measurement error of 7 mas/yr, BKDE-e suppresses the potential for miscalculation by a factor of two compared to KDE. With a median accuracy of about 93%, the BKDE-e method has accuracy comparable to the parametric method (modified Sanders algorithm). An application to real data from The Fourth USNO CCD Astrograph Catalog (UCAC4), specifically to NGC 2682, is also performed. The mode of the member-star distribution on the Vector Point Diagram is located at μ_α cos δ = -9.94 ± 0.85 mas/yr and μ_δ = -4.92 ± 0.88 mas/yr. Although BKDE-e's performance does not surpass the parametric approach, it offers a new way of doing membership analysis, expandable to astrometric and photometric data or even to binary cluster searches.
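The core idea of an error-aware KDE can be sketched in one dimension: each data point contributes a Gaussian kernel whose width combines the global bandwidth with that point's own measurement error. This is an illustrative sketch of the principle, not the paper's binned implementation.

```python
import math

def kde_with_errors(data, errors, grid, h):
    # Gaussian KDE in which each point's kernel width is the quadrature sum
    # of the bandwidth h and that point's measurement error e_i
    dens = []
    for x in grid:
        s = 0.0
        for xi, ei in zip(data, errors):
            w = math.sqrt(h * h + ei * ei)
            s += math.exp(-0.5 * ((x - xi) / w) ** 2) / (w * math.sqrt(2 * math.pi))
        dens.append(s / len(data))
    return dens
```

Because noisier points get broader kernels, they pull the mode around less, which is exactly the robustness to stochastic fluctuation the abstract describes.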
NASA Astrophysics Data System (ADS)
Wahyudi; Notodiputro, Khairil Anwar; Kurnia, Anang; Anisa, Rahma
2016-02-01
Empirical Best Linear Unbiased Prediction (EBLUP) is an indirect estimation method used to estimate parameters of small areas. EBLUP works by using area-level auxiliary variables while adding area random effects. For non-sampled areas, the standard EBLUP can no longer be used because there is no information on the area random effects. To obtain more appropriate estimates for non-sampled areas, the standard EBLUP model has to be modified by adding cluster information. The aim of this research was to study, by means of simulation, whether clustering methods based on factor analysis provide better cluster information. The criterion used to evaluate the goodness of fit of the methods in the simulation study was the mean percentage of clustering accuracy. The results of the simulation study showed that the use of factor analysis in clustering increased the average percentage of accuracy, particularly when using Ward's method. The method was then used to estimate per capita expenditures from SUSENAS data based on Small Area Estimation (SAE) techniques, with the quality of the estimates measured by RMSE. This research has shown that the modified EBLUP model with factor analysis provided better estimates than the standard EBLUP model and the modified EBLUP model without factor analysis. Moreover, it was also shown that clustering information is important in estimating non-sampled areas.
Improved Recharge Estimation from Portable, Low-Cost Weather Stations.
Holländer, Hartmut M; Wang, Zijian; Assefa, Kibreab A; Woodbury, Allan D
2016-03-01
Groundwater recharge estimation is a critical quantity for sustainable groundwater management. The feasibility and robustness of recharge estimation was evaluated using physical-based modeling procedures, and data from a low-cost weather station with remote sensor techniques in Southern Abbotsford, British Columbia, Canada. Recharge was determined using the Richards-based vadose zone hydrological model, HYDRUS-1D. The required meteorological data were recorded with a HOBO(TM) weather station for a short observation period (about 1 year) and an existing weather station (Abbotsford A) for long-term study purpose (27 years). Undisturbed soil cores were taken at two locations in the vicinity of the HOBO(TM) weather station. The derived soil hydraulic parameters were used to characterize the soil in the numerical model. Model performance was evaluated using observed soil moisture and soil temperature data obtained from subsurface remote sensors. A rigorous sensitivity analysis was used to test the robustness of the model. Recharge during the short observation period was estimated at 863 and 816 mm. The mean annual recharge was estimated at 848 and 859 mm/year based on a time series of 27 years. The relative ratio of annual recharge-precipitation varied from 43% to 69%. From a monthly recharge perspective, the majority (80%) of recharge due to precipitation occurred during the hydrologic winter period. The comparison of the recharge estimates with other studies indicates a good agreement. Furthermore, this method is able to predict transient recharge estimates, and can provide a reasonable tool for estimates on nutrient leaching that is often controlled by strong precipitation events and rapid infiltration of water and nitrate into the soil. PMID:26011672
Improved Estimation of Human Lipoprotein Kinetics with Mixed Effects Models
Berglund, Martin; Adiels, Martin; Taskinen, Marja-Riitta; Borén, Jan; Wennberg, Bernt
2015-01-01
Context Mathematical models may help the analysis of biological systems by providing estimates of otherwise unmeasurable quantities such as concentrations and fluxes. The variability in such systems makes it difficult to translate individual characteristics to group behavior. Mixed effects models offer a tool to simultaneously assess individual and population behavior from experimental data. Lipoproteins and plasma lipids are key mediators for cardiovascular disease in metabolic disorders such as diabetes mellitus type 2. By the use of mathematical models and tracer experiments, fluxes and production rates of lipoproteins may be estimated. Results We developed a mixed effects model to study lipoprotein kinetics in a data set of 15 healthy individuals and 15 patients with type 2 diabetes. We compare the traditional and the mixed effects approach in terms of group estimates at various sample and data set sizes. Conclusion We conclude that the mixed effects approach provided better estimates using the full data set as well as with both sparse and truncated data sets. Sample size estimates showed that to compare lipoprotein secretion the mixed effects approach needed almost half the sample size of the traditional method. PMID:26422201
Improved False Discovery Rate Estimation Procedure for Shotgun Proteomics.
Keich, Uri; Kertesz-Farkas, Attila; Noble, William Stafford
2015-08-01
Interpreting the potentially vast number of hypotheses generated by a shotgun proteomics experiment requires a valid and accurate procedure for assigning statistical confidence estimates to identified tandem mass spectra. Despite the crucial role such procedures play in most high-throughput proteomics experiments, the scientific literature has not reached a consensus about the best confidence estimation methodology. In this work, we evaluate, using theoretical and empirical analysis, four previously proposed protocols for estimating the false discovery rate (FDR) associated with a set of identified tandem mass spectra: two variants of the target-decoy competition protocol (TDC) of Elias and Gygi and two variants of the separate target-decoy search protocol of Käll et al. Our analysis reveals significant biases in the two separate target-decoy search protocols. Moreover, the one TDC protocol that provides an unbiased FDR estimate among the target PSMs does so at the cost of forfeiting a random subset of high-scoring spectrum identifications. We therefore propose the mix-max procedure to provide unbiased, accurate FDR estimates in the presence of well-calibrated scores. The method avoids biases associated with the two separate target-decoy search protocols and also avoids the propensity for target-decoy competition to discard a random subset of high-scoring target identifications.
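The target-decoy competition (TDC) estimate described above can be sketched compactly: each spectrum keeps the better of its target and decoy match, and decoy wins above a score threshold stand in for the number of false target identifications above it. This is a minimal sketch of the TDC principle, not the mix-max procedure the paper proposes.

```python
def tdc_fdr(pairs, threshold):
    # pairs: (target_score, decoy_score) per spectrum.
    # Each spectrum keeps the winner of its target-vs-decoy competition;
    # FDR at `threshold` is estimated as decoy wins / target wins above it.
    target_wins = sum(1 for t, d in pairs if t >= d and t >= threshold)
    decoy_wins = sum(1 for t, d in pairs if d > t and d >= threshold)
    return decoy_wins / target_wins if target_wins else 0.0
```

For instance, with three confident target wins and one high-scoring decoy win above the threshold, the estimated FDR among accepted targets is 1/3; the paper's point is that this estimator, while unbiased, discards some high-scoring target identifications that lose their competition.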
Validation of an Improved Pediatric Weight Estimation Strategy
Abdel-Rahman, Susan M.; Ahlers, Nichole; Holmes, Anne; Wright, Krista; Harris, Ann; Weigel, Jaylene; Hill, Talita; Baird, Kim; Michaels, Marla; Kearns, Gregory L.
2013-01-01
OBJECTIVES To validate the recently described Mercy method for weight estimation in an independent cohort of children living in the United States. METHODS Anthropometric data including weight, height, humeral length, and mid upper arm circumference were collected from 976 otherwise healthy children (2 months to 14 years old). The data were used to examine the predictive performances of the Mercy method and four other weight estimation strategies (the Advanced Pediatric Life Support [APLS] method, the Broselow tape, and the Luscombe and Owens and the Nelson methods). RESULTS The Mercy method demonstrated accuracy comparable to that observed in the original study (mean error: −0.3 kg; mean percentage error: −0.3%; root mean square error: 2.62 kg; 95% limits of agreement: 0.83–1.19). This method estimated weight within 20% of actual for 95% of children compared with 58.7% for APLS, 78% for Broselow, 54.4% for Luscombe and Owens, and 70.4% for Nelson. Furthermore, the Mercy method was the only weight estimation strategy which enabled prediction of weight in all of the children enrolled. CONCLUSIONS The Mercy method proved to be highly accurate and more robust than existing weight estimation strategies across a wider range of age and body mass index values, thereby making it superior to other existing approaches. PMID:23798905
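The accuracy summaries reported above (mean error, mean percentage error, RMSE, and the fraction of estimates within 20% of actual weight) are straightforward to compute; the sketch below uses illustrative numbers, not the study's data.

```python
import math

def weight_metrics(actual, predicted):
    # standard accuracy summaries for weight-estimation methods
    errs = [p - a for a, p in zip(actual, predicted)]
    pct = [100.0 * (p - a) / a for a, p in zip(actual, predicted)]
    n = len(errs)
    return {
        "mean_error": sum(errs) / n,                              # kg
        "mean_pct_error": sum(pct) / n,                           # %
        "rmse": math.sqrt(sum(e * e for e in errs) / n),          # kg
        "within_20pct": 100.0 * sum(1 for q in pct if abs(q) <= 20.0) / n,
    }
```

The within-20% figure is the headline comparison in the abstract (95% for the Mercy method versus 54-78% for the alternatives).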
An improved scheduling algorithm for 3D cluster rendering with platform LSF
NASA Astrophysics Data System (ADS)
Xu, Wenli; Zhu, Yi; Zhang, Liping
2013-10-01
High-quality photorealistic rendering of 3D models needs powerful computing systems, and on this demand highly efficient management of cluster resources has developed quickly. This paper addresses how to improve the efficiency of 3D rendering tasks in a cluster. It focuses on a dynamic feedback load balance (DFLB) algorithm, the working principle of the Load Sharing Facility (LSF) platform, and optimization of the external scheduler plug-in. The algorithm is applied in the match and allocation phases of a scheduling cycle. Candidate hosts are prepared in sequence in the match phase, and the scheduler makes allocation decisions for each job in the allocation phase. With the dynamic mechanism, a new weight is assigned to each candidate host for rearrangement, and the most suitable one is dispatched for rendering. A new plug-in module implementing this algorithm has been designed and integrated into the internal scheduler. Simulation experiments demonstrate that the improved plug-in module is superior to the default one for rendering tasks: it helps avoid load imbalance among servers, increases system throughput, and improves system utilization.
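The weight-and-rearrange step can be sketched as a scoring function over load feedback from each candidate host. The field names and weights below are illustrative assumptions, not the paper's DFLB formula.

```python
def rank_hosts(hosts, w_cpu=0.6, w_mem=0.4):
    # dynamic-feedback-style ranking, sketched: score each candidate host
    # from its current load metrics (0.0 = idle, 1.0 = saturated) and
    # order hosts so the least-loaded one is dispatched first
    def score(h):
        return w_cpu * (1.0 - h["cpu_load"]) + w_mem * (1.0 - h["mem_load"])
    return sorted(hosts, key=score, reverse=True)
```

In a scheduler plug-in, this ranking would be recomputed each scheduling cycle from fresh load feedback, which is what keeps heavily loaded render servers from accumulating jobs.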
The report discusses an EPA investigation of techniques to improve methods for estimating volatile organic compound (VOC) emissions from area sources. Using the automobile refinishing industry for a detailed area source case study, an emission estimation method is being developed...
Santos, Miriam Seoane; Abreu, Pedro Henriques; García-Laencina, Pedro J; Simão, Adélia; Carvalho, Armando
2015-12-01
Liver cancer is the sixth most frequently diagnosed cancer and, particularly, Hepatocellular Carcinoma (HCC) represents more than 90% of primary liver cancers. Clinicians assess each patient's treatment on the basis of evidence-based medicine, which may not always apply to a specific patient, given the biological variability among individuals. Over the years, and for the particular case of Hepatocellular Carcinoma, some research studies have been developing strategies for assisting clinicians in decision making, using computational methods (e.g. machine learning techniques) to extract knowledge from the clinical data. However, these studies have some limitations that have not yet been addressed: some do not focus entirely on Hepatocellular Carcinoma patients, others have strict application boundaries, and none considers the heterogeneity between patients nor the presence of missing data, a common drawback in healthcare contexts. In this work, a real complex Hepatocellular Carcinoma database composed of heterogeneous clinical features is studied. We propose a new cluster-based oversampling approach robust to small and imbalanced datasets, which accounts for the heterogeneity of patients with Hepatocellular Carcinoma. The preprocessing procedures of this work are based on data imputation considering appropriate distance metrics for both heterogeneous and missing data (HEOM) and clustering studies to assess the underlying patient groups in the studied dataset (K-means). The final approach is applied in order to diminish the impact of underlying patient profiles with reduced sizes on survival prediction. It is based on K-means clustering and the SMOTE algorithm to build a representative dataset and use it as training example for different machine learning procedures (logistic regression and neural networks). The results are evaluated in terms of survival prediction and compared across baseline approaches that do not consider clustering and/or oversampling using the
Efforts To Improve Estimates of State and Local Unemployment
ERIC Educational Resources Information Center
Ziegler, Martin
1977-01-01
Describes how local area unemployment statistics are developed by state employment security agencies to provide the Bureau of Labor Statistics with data on the insured unemployed by county of residence. Includes a discussion of the handbook method, a consistent and uniform method of estimating total unemployment for states and areas. (Editor/TA)
A novel ULA-based geometry for improving AOA estimation
NASA Astrophysics Data System (ADS)
Shirvani-Moghaddam, Shahriar; Akbari, Farida
2011-12-01
Due to its relatively simple implementation, the Uniform Linear Array (ULA) is a popular geometry for array signal processing. Despite this advantage, it does not have uniform performance in all directions, and Angle of Arrival (AOA) estimation performance degrades considerably at angles close to endfire. In this article, a new configuration is proposed which can solve this problem. The Proposed Array (PA) configuration adds two elements to the ULA, above and below the array axis. By extending the signal model of the ULA to the new proposed ULA-based array, AOA estimation performance has been compared in terms of angular accuracy and resolution threshold through two well-known AOA estimation algorithms, MUSIC and MVDR. In both algorithms, the Root Mean Square Error (RMSE) of the detected angles decreases as the input Signal to Noise Ratio (SNR) increases. Simulation results show that the proposed array geometry provides uniformly accurate performance and higher resolution at middle angles as well as border ones. The PA also presents lower RMSE than the ULA in endfire directions. Therefore, the proposed array offers better performance for the border angles with almost the same array size and simplicity in both MUSIC and MVDR algorithms with respect to the conventional ULA. In addition, AOA estimation performance of the PA geometry is compared with two well-known 2D-array geometries, L-shape and V-shape, and acceptable results are obtained with equivalent or lower complexity.
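The ULA signal model underlying MUSIC and MVDR can be illustrated with a steering vector and the simpler conventional (Bartlett) spectrum, P(θ) = aᴴRa, where R is the sample covariance of the array snapshots. This is a hedged stand-in for MUSIC/MVDR (which additionally require an eigen- or matrix-inverse step); all parameters are illustrative.

```python
import cmath
import math

def steering(theta_deg, n_elems, d=0.5):
    # ULA steering vector; element spacing d in wavelengths
    th = math.radians(theta_deg)
    return [cmath.exp(-2j * math.pi * d * k * math.sin(th)) for k in range(n_elems)]

def bartlett_spectrum(snapshots, angles_deg):
    # conventional beamformer spectrum P(theta) = a^H R a,
    # with R the sample covariance of the snapshots
    n = len(snapshots[0])
    R = [[sum(s[i] * s[j].conjugate() for s in snapshots) / len(snapshots)
          for j in range(n)] for i in range(n)]
    spec = []
    for ang in angles_deg:
        a = steering(ang, n)
        p = sum(a[i].conjugate() * R[i][j] * a[j]
                for i in range(n) for j in range(n))
        spec.append(abs(p))
    return spec
```

Scanning the spectrum over a grid of angles and taking the peak recovers the source direction; MUSIC sharpens exactly this picture by projecting onto the noise subspace of R.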
Using Colors to Improve Photometric Metallicity Estimates for Galaxies
NASA Astrophysics Data System (ADS)
Sanders, N. E.; Levesque, E. M.; Soderberg, A. M.
2013-10-01
There is a well known correlation between the mass and metallicity of star-forming galaxies. Because mass is correlated with luminosity, this relation is often exploited, when spectroscopy is not available, to estimate galaxy metallicities based on single band photometry. However, we show that galaxy color is typically more effective than luminosity as a predictor of metallicity. This is a consequence of the correlation between color and the galaxy mass-to-light ratio and the recently discovered correlation between star formation rate (SFR) and residuals from the mass-metallicity relation. Using Sloan Digital Sky Survey spectroscopy of ~180, 000 nearby galaxies, we derive "LZC relations," empirical relations between metallicity (in seven common strong line diagnostics), luminosity, and color (in 10 filter pairs and four methods of photometry). We show that these relations allow photometric metallicity estimates, based on luminosity and a single optical color, that are ~50% more precise than those made based on luminosity alone; galaxy metallicity can be estimated to within ~0.05-0.1 dex of the spectroscopically derived value depending on the diagnostic used. Including color information in photometric metallicity estimates also reduces systematic biases for populations skewed toward high or low SFR environments, as we illustrate using the host galaxy of the supernova SN 2010ay. This new tool will lend more statistical power to studies of galaxy populations, such as supernova and gamma-ray burst host environments, in ongoing and future wide-field imaging surveys.
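An "LZC relation" of the kind derived above is, at its simplest, a least-squares plane Z = b0 + b1·L + b2·C fit to luminosity, color, and spectroscopic metallicity. The sketch below solves the normal equations directly with synthetic, illustrative data, not the SDSS sample.

```python
def solve3(A, b):
    # Gaussian elimination with partial pivoting for a 3x3 linear system
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for c in range(i, 4):
                M[r][c] -= f * M[i][c]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        x[i] = (M[i][3] - sum(M[i][j] * x[j] for j in range(i + 1, 3))) / M[i][i]
    return x

def fit_lzc(lum, color, metal):
    # least-squares plane Z = b0 + b1*L + b2*C via the normal equations
    n = len(lum)
    X = [[1.0, l, c] for l, c in zip(lum, color)]
    A = [[sum(X[k][i] * X[k][j] for k in range(n)) for j in range(3)]
         for i in range(3)]
    b = [sum(X[k][i] * metal[k] for k in range(n)) for i in range(3)]
    return solve3(A, b)
```

The color coefficient b2 is what carries the extra ~50% precision the abstract reports over luminosity-only estimates.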
Improved Uncertainty Quantification in Groundwater Flux Estimation Using GRACE
NASA Astrophysics Data System (ADS)
Reager, J. T., II; Rao, P.; Famiglietti, J. S.; Turmon, M.
2015-12-01
Groundwater change is difficult to monitor over large scales. One of the most successful approaches is remote sensing of time-variable gravity using NASA Gravity Recovery and Climate Experiment (GRACE) mission data, and successful case studies have created the opportunity to move towards a global groundwater monitoring framework for the world's largest aquifers. To achieve these estimates, several approximations are applied, including those in GRACE processing corrections, the formulation of the formal GRACE errors, destriping and signal recovery, and the numerical model estimation of snow water, surface water and soil moisture storage states used to isolate a groundwater component. A major weakness in these approaches is inconsistency: different studies have used different sources of primary and ancillary data, and may achieve different results based on alternative choices in these approximations. In this study, we present two cases of groundwater change estimation, in California and the Colorado River basin, selected for their good data availability and varied climates. We achieve a robust numerical estimate of post-processing uncertainties resulting from land-surface model structural shortcomings and model resolution errors. Groundwater variations should show less variability than the overlying soil moisture state, as groundwater has a longer memory of past events due to buffering by infiltration and drainage rate limits. We apply a model ensemble approach in a Bayesian framework constrained by the assumption of decreasing signal variability with depth in the soil column. We also discuss time-variable versus time-constant errors, across-scale versus across-model errors, and error spectral content (across scales and across models). More robust uncertainty quantification for GRACE-based groundwater estimates would take all of these issues into account, allowing for fairer use in management applications and for better integration of GRACE
Estimating the number of clusters in multivariate data by self-organizing maps.
Costa, J A; Netto, M L
1999-06-01
Determining the structure of data without prior knowledge of the number of clusters or any information about their composition is a problem of interest in many fields, such as image analysis, astrophysics, and biology. Partitioning a set of n patterns in a p-dimensional feature space must be done such that patterns in a given cluster are more similar to each other than to the rest. As there are approximately K^n/K! possible ways of partitioning the patterns among K clusters, finding the best solution is very hard when n is large. The search space grows further when the number of partitions is not known a priori. Although the self-organizing feature map (SOM) can be used to visualize clusters, automating knowledge discovery with the SOM is a difficult task. This paper proposes region-based image processing methods to post-process the U-matrix obtained after the unsupervised learning performed by the SOM. Mathematical morphology is applied to identify regions of neurons that are similar. The number of regions and their labels are found automatically, and they correspond to the number of clusters in a multivariate data set. New data can be classified by labeling them according to the best-matching neuron. Simulations using data sets drawn from finite mixtures of p-variate normal densities are presented, along with the advantages and drawbacks of the method.
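The post-processing idea is to treat the U-matrix as an image: low inter-neuron distances form basins, high distances form ridges, and morphological operations count the coherent basins. A much-simplified stand-in for the paper's mathematical-morphology pipeline is a threshold followed by connected-component labeling; the toy 10x10 U-matrix below is invented:

```python
import numpy as np
from scipy import ndimage

# Toy 10x10 "U-matrix": two low-distance basins separated by a high-distance ridge
U = np.ones((10, 10))
U[1:4, 1:4] = 0.1      # basin 1
U[6:9, 6:9] = 0.2      # basin 2

# Threshold out the ridge, then label the connected low-distance regions;
# the number of labeled regions plays the role of the estimated cluster count
mask = U < 0.5
labels, n_clusters = ndimage.label(mask)
print(n_clusters)
```

On real U-matrices the paper's watershed-style morphology avoids the need to hand-pick the threshold; the labeling step that yields the region count is the same in spirit.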
Improved Battery State Estimation Using Novel Sensing Techniques
NASA Astrophysics Data System (ADS)
Abdul Samad, Nassim
Lithium-ion batteries have been considered a great complement or substitute for gasoline engines due to their high energy and power density capabilities, among other advantages. However, these types of energy storage devices are still not widespread, mainly because of their relatively high cost and safety issues, especially at elevated temperatures. This thesis extends existing methods of estimating critical battery states using model-based techniques augmented by real-time measurements from novel temperature and force sensors. Typically, temperature sensors are located near the edge of the battery, away from the hottest core cell regions, which leads to slower response times and increased errors in the prediction of core temperatures. New sensor technology allows for flexible sensor placement at the cell surface between cells in a pack. This raises questions about the optimal locations of these sensors for best observability and temperature estimation. Using a validated model, which is developed and verified using experiments in laboratory fixtures that replicate vehicle pack conditions, it is shown that optimal sensor placement can lead to better and faster temperature estimation. Another equally important state is the state of health, or the capacity fading of the cell. This thesis introduces a novel method of using force measurements for capacity fade estimation. Monitoring capacity is important for defining the range of electric vehicles (EVs) and plug-in hybrid electric vehicles (PHEVs). Current capacity estimation techniques require a full discharge to monitor capacity. The proposed method can complement or replace current methods because it only requires a shallow discharge, which is especially useful in EVs and PHEVs. Using the accurate state estimation accomplished earlier, a method for downsizing a battery pack is shown to effectively reduce the number of cells in a pack without compromising safety. The influence on the battery performance (e
RSQRT: AN HEURISTIC FOR ESTIMATING THE NUMBER OF CLUSTERS TO REPORT
Carlis, John; Bruso, Kelsey
2012-01-01
Clustering can be a valuable tool for analyzing large datasets, such as in e-commerce applications. Anyone who clusters must choose how many item clusters, K, to report. Unfortunately, one must guess at K or some related parameter. Elsewhere we introduced a strongly-supported heuristic, RSQRT, which predicts K as a function of the attribute or item count, depending on attribute scales. We conducted a second analysis where we sought confirmation of the heuristic, analyzing data sets from the UCI machine learning benchmark repository. For the 25 studies where sufficient detail was available, we again found strong support. Also, in a side-by-side comparison of 28 studies, the RSQRT-predicted K and the Bayesian information criterion (BIC)-predicted K are the same. RSQRT has a lower cost of O(log log n) versus O(n^2) for BIC, and is more widely applicable. Using RSQRT prospectively could be much better than merely guessing. PMID:22773923
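The abstract names the heuristic but does not give its closed form. Under a purely hypothetical reading suggested by the name RSQRT (a repeated square root of the relevant count n, ignoring the attribute-scale dependence the abstract mentions), a prospective estimate would look like:

```python
import math

def rsqrt_k(n):
    """Hypothetical RSQRT-style estimate: K ~ sqrt(sqrt(n)).
    The published formula also depends on attribute scales, which this
    illustrative sketch ignores; consult the paper for the real rule."""
    return max(2, round(math.sqrt(math.sqrt(n))))

print(rsqrt_k(10_000))
```

Whatever its exact form, the appeal claimed in the abstract is that such a closed-form rule costs O(log log n), versus O(n^2) for a BIC sweep over candidate K.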
NASA Astrophysics Data System (ADS)
Terzer, Stefan; Araguás-Araguás, Luis; Wassenaar, Leonard I.; Aggarwal, Pradeep K.
2013-04-01
Prediction of geospatial H and O isotopic patterns in precipitation has become increasingly important to diverse disciplines beyond hydrology, such as climatology, ecology, food authenticity, and criminal forensics, because these two isotopes of rainwater often control the terrestrial isotopic spatial patterns that facilitate the linkage of products (food, wildlife, water) to origin or movement (food, criminalistics). Currently, spatial water isotopic pattern prediction relies on combined regression and interpolation techniques to create gridded datasets using data obtained from the Global Network of Isotopes In Precipitation (GNIP). However, current models suffer from two shortcomings: (a) models may have limited covariates and/or parameterization fitted to a global domain, which results in poor predictive outcomes at regional scales, or (b) the spatial domain is intentionally restricted to regional settings, and thereby of little use in providing information at global geospatial scales. Here we present a new globally applicable, climatically regionalized isotope prediction model (RCWIM, the Regionalized Climate Cluster Water Isotope Model) which overcomes these limitations through fuzzy clustering of climatic data subsets, allowing us to better identify and customize appropriate covariates and their multiple regression coefficients instead of aiming for a one-size-fits-all global fit. The new model significantly reduces the point-based regression residuals and results in much lower overall isotopic prediction uncertainty, since residuals are interpolated onto the regression surface. The new precipitation δ2H and δ18O isoscape model is available on a global scale at 10 arc-minute spatial resolution and monthly, seasonal, and annual temporal resolution, and will provide improved predicted stable isotope values for a growing number of applications. The model further provides a flexible framework for future improvements using regional climatic clustering.
Improved source term estimation using blind outlier detection
NASA Astrophysics Data System (ADS)
Martinez-Camara, Marta; Bejar Haro, Benjamin; Vetterli, Martin; Stohl, Andreas
2014-05-01
Emissions of substances into the atmosphere are produced in situations such as volcano eruptions, nuclear accidents or pollutant releases. It is necessary to know the source term - how the magnitude of these emissions changes with time - in order to predict the consequences of the emissions, such as high radioactivity levels in a populated area or high concentration of volcanic ash in an aircraft flight corridor. However, in general, we know neither how much material was released in total, nor the relative variation of emission strength with time. Hence, estimating the source term is a crucial task. Estimating the source term generally involves solving an ill-posed linear inverse problem using datasets of sensor measurements. Several so-called inversion methods have been developed for this task. Unfortunately, objective quantitative evaluation of the performance of inversion methods is difficult due to the fact that the ground truth is unknown for practically all the available measurement datasets. In this work we use the European Tracer Experiment (ETEX) - a rare example of an experiment where the ground truth is available - to develop and to test new source estimation algorithms. Knowledge of the ground truth grants us access to the additive error term. We show that the distribution of this error is heavy-tailed, which means that some measurements are outliers. We also show that precisely these outliers severely degrade the performance of traditional inversion methods. Therefore, we develop blind outlier detection algorithms specifically suited to the source estimation problem. Then, we propose new inversion methods that combine traditional regularization techniques with blind outlier detection. Such hybrid methods reduce the error of reconstruction of the source term up to 45% with respect to previously proposed methods.
Hens, Niel; Beutels, Philippe; Leirs, Herwig; Reijniers, Jonas
2016-01-01
Diseases of humans and wildlife are typically tracked and studied through incidence, the number of new infections per time unit. Estimating incidence is not without difficulties, as asymptomatic infections, low sampling intervals and low sample sizes can introduce large estimation errors. After infection, biomarkers such as antibodies or pathogens often change predictably over time, and this temporal pattern can contain information about the time since infection that could improve incidence estimation. Antibody level and avidity have been used to estimate time since infection and to recreate incidence, but the errors on these estimates using currently existing methods are generally large. Using a semi-parametric model in a Bayesian framework, we introduce a method that allows the use of multiple sources of information (such as antibody level, pathogen presence in different organs, individual age, season) for estimating individual time since infection. When sufficient background data are available, this method can greatly improve incidence estimation, which we show using arenavirus infection in multimammate mice as a test case. The method performs well, especially compared to the situation in which seroconversion events between sampling sessions are the main data source. The possibility to implement several sources of information allows the use of data that are in many cases already available, which means that existing incidence data can be improved without the need for additional sampling efforts or laboratory assays. PMID:27177244
Uncertainty Estimation Improves Energy Measurement and Verification Procedures
Walter, Travis; Price, Phillip N.; Sohn, Michael D.
2014-05-14
Implementing energy conservation measures in buildings can reduce energy costs and environmental impacts, but such measures cost money to implement so intelligent investment strategies require the ability to quantify the energy savings by comparing actual energy used to how much energy would have been used in absence of the conservation measures (known as the baseline energy use). Methods exist for predicting baseline energy use, but a limitation of most statistical methods reported in the literature is inadequate quantification of the uncertainty in baseline energy use predictions. However, estimation of uncertainty is essential for weighing the risks of investing in retrofits. Most commercial buildings have, or soon will have, electricity meters capable of providing data at short time intervals. These data provide new opportunities to quantify uncertainty in baseline predictions, and to do so after shorter measurement durations than are traditionally used. In this paper, we show that uncertainty estimation provides greater measurement and verification (M&V) information and helps to overcome some of the difficulties with deciding how much data is needed to develop baseline models and to confirm energy savings. We also show that cross-validation is an effective method for computing uncertainty. In so doing, we extend a simple regression-based method of predicting energy use using short-interval meter data. We demonstrate the methods by predicting energy use in 17 real commercial buildings. We discuss the benefits of uncertainty estimates which can provide actionable decision making information for investing in energy conservation measures.
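The cross-validation approach can be sketched directly: fit a short-interval baseline regression on training folds and use the held-out residuals to quantify prediction uncertainty. The change-point temperature model, fold count, and all numbers below are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
temp = rng.uniform(0.0, 30.0, n)                   # outdoor temperature (C)
hour = rng.integers(0, 24, n)
# Toy building load: base + cooling above 18 C + daily cycle + noise (sigma = 3)
load = 50 + 2.0 * np.maximum(temp - 18, 0) + 5 * np.sin(2 * np.pi * hour / 24) \
       + rng.normal(0.0, 3.0, n)

X = np.column_stack([np.ones(n), np.maximum(temp - 18, 0),
                     np.sin(2 * np.pi * hour / 24), np.cos(2 * np.pi * hour / 24)])

# 5-fold cross-validation: out-of-sample residuals quantify baseline uncertainty
folds = np.array_split(rng.permutation(n), 5)
resid = []
for k in range(5):
    test = folds[k]
    train = np.concatenate([folds[j] for j in range(5) if j != k])
    beta, *_ = np.linalg.lstsq(X[train], load[train], rcond=None)
    resid.extend(load[test] - X[test] @ beta)
sigma = np.std(resid)
print(sigma)
```

Because the residuals come from held-out data, sigma estimates true out-of-sample prediction error rather than in-sample goodness of fit, which is what matters when weighing retrofit-investment risk.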
Recent Improvements in Estimating Convective and Stratiform Rainfall in Amazonia
NASA Technical Reports Server (NTRS)
Negri, Andrew J.
1999-01-01
In this paper we present results from the application of a satellite infrared (IR) technique for estimating rainfall over northern South America. Our main objectives are to examine the diurnal variability of rainfall and to investigate the relative contributions from the convective and stratiform components. We apply the technique of Anagnostou et al. (1999). In simple functional form, the estimated rain area A(sub rain) may be expressed as: A(sub rain) = f(A(sub mode), T(sub mode)), where T(sub mode) is the mode temperature of a cloud defined by 253 K, and A(sub mode) is the area encompassed by T(sub mode). The technique was trained by a regression between coincident microwave estimates from the Goddard Profiling (GPROF) algorithm (Kummerow et al., 1996) applied to SSM/I data and GOES IR (11 micron) observations. The apportionment of the rainfall into convective and stratiform components is based on the microwave technique described by Anagnostou and Kummerow (1997). The convective area from this technique was regressed against an IR structure parameter (the Convective Index) defined by Anagnostou et al. (1999). Finally, rain rates are assigned to A(sub mode) proportional to (253 - temperature), with different rates for the convective and stratiform components.
Does quantifying antecedent flow conditions improve stream phosphorus export estimation?
NASA Astrophysics Data System (ADS)
Warner, Stuart; Kiely, Gerard; Morgan, Gerard; O'Halloran, John
2009-11-01
A reliable and economical method for the estimation of nutrient export (e.g. phosphorus) in stream flow from catchments is necessary to quantify the impact of land use or land use change upon aquatic systems. The transport of phosphorus (P) from soil to water is known to impact negatively on water quality. A key observation from studies is that most P export occurs during high stream flow. However, it is not yet clear how flood-antecedent conditions affect the P export during flood events. In this study, the P loss from soil to water, as represented by soluble reactive phosphorus (SRP) in stream waters, was monitored over 1 year in three catchments in Ireland varying in land use, scale, and location. This study examined the role of antecedent stream flow conditions on SRP export and identifies a catchment-specific relationship between SRP flood event load (EL) and a flow ratio (FR). The FR is defined as the ratio of the flood event volume (EV) to the pre-event volume (PEV). The latter is the cumulative flow volume for a number of days preceding the event. This PEV period was found to be longer (average 81 days) in the grassland catchments, which were known to be saturated with soil P, than in the forested catchments (average 21 days) with minimal soil P. The FR ratio is a measure of the antecedent hydrological state (wet or dry) of the catchment. For each catchment, a specific relationship between SRP EL and FR was identified. The annual SRP export was estimated using this ratio and compared with the concentration/discharge (C/Q) method. The new flow ratio method was used with data from 12 flood events during the year to estimate an annual export of SRP. For the two grassland catchments in the study, using the FR method, we estimated an SRP export of 1.77 and 0.41 kg ha^-1 yr^-1. Using the C/Q method, for the same sites, our estimate of SRP export was 1.70 and 0.50 kg ha^-1 yr^-1, respectively. The C/Q method used SRP concentrations
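Once event boundaries are chosen, the flow ratio itself is a one-line computation: FR = EV / PEV. A sketch follows; the daily flow record is invented, and the 21-day PEV window simply follows the forested-catchment average quoted above:

```python
import numpy as np

def flow_ratio(daily_flow, event_start, event_end, pev_days):
    """FR = flood event volume (EV) over pre-event volume (PEV), where PEV
    is the cumulative flow over the pev_days preceding the event."""
    ev = daily_flow[event_start:event_end].sum()
    pev = daily_flow[max(0, event_start - pev_days):event_start].sum()
    return ev / pev

# Toy daily flow record (m^3/day): a steady dry spell, then a 3-day flood
q = np.array([1.0] * 30 + [8.0, 12.0, 6.0])
fr = flow_ratio(q, event_start=30, event_end=33, pev_days=21)
print(fr)
```

A high FR corresponds to a flood arriving on a dry (low-PEV) catchment; the study's per-catchment EL-FR relationships then convert this antecedent-state measure into an event load estimate.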
An Improved Bandstrength Index for the CH G Band of Globular Cluster Giants
NASA Astrophysics Data System (ADS)
Martell, Sarah L.; Smith, Graeme H.; Briley, Michael M.
2008-08-01
Spectral indices are useful tools for quantifying the strengths of features in moderate-resolution spectra and relating them to intrinsic stellar parameters. This paper focuses on the 4300 Å CH G-band, a classic example of a feature interpreted through use of spectral indices. G-band index definitions, as applied to globular clusters of different metallicity, abound in the literature, and transformations between the various systems, or comparisons between different authors' work, are difficult and not always useful. We present a method for formulating an optimized G-band index, using a large grid of synthetic spectra. To make our new index a reliable measure of carbon abundance, we minimize its dependence on [N/Fe] and simultaneously maximize its sensitivity to [C/Fe]. We present a definition for the new index S2(CH), along with estimates of the errors inherent in using it for [C/Fe] determination, and conclude that it is valid for use with spectra of bright globular cluster red giants over a large range in [Fe/H], [C/Fe], and [N/Fe].
Sample Size Estimation in Cluster Randomized Educational Trials: An Empirical Bayes Approach
ERIC Educational Resources Information Center
Rotondi, Michael A.; Donner, Allan
2009-01-01
The educational field has now accumulated an extensive literature reporting on values of the intraclass correlation coefficient, a parameter essential to determining the required size of a planned cluster randomized trial. We propose here a simple simulation-based approach including all relevant information that can facilitate this task. An…
Improved Speech Coding Based on Open-Loop Parameter Estimation
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Chen, Ya-Chin; Longman, Richard W.
2000-01-01
A nonlinear optimization algorithm for linear predictive speech coding was developed earlier that not only optimizes the linear model coefficients for the open-loop predictor, but performs the optimization including the effects of quantization of the transmitted residual. It also simultaneously optimizes the quantization levels used for each speech segment. In this paper, we present an improved method for initialization of this nonlinear algorithm, and demonstrate substantial improvements in performance. In addition, the new procedure produces monotonically improving speech quality with increasing numbers of bits used in the transmitted error residual. Examples of speech encoding and decoding are given for 8 speech segments, and signal-to-noise levels as high as 47 dB are produced. As in typical linear predictive coding, the optimization is done on the open-loop speech analysis model. Here we demonstrate that minimizing the error of the closed-loop speech reconstruction, instead of the simpler open-loop optimization, is likely to produce negligible improvement in speech quality. The examples suggest that the algorithm here is close to giving the best performance obtainable from a linear model, for the chosen order with the chosen number of bits for the codebook.
Improving Mantel-Haenszel DIF Estimation through Bayesian Updating
ERIC Educational Resources Information Center
Zwick, Rebecca; Ye, Lei; Isham, Steven
2012-01-01
This study demonstrates how the stability of Mantel-Haenszel (MH) DIF (differential item functioning) methods can be improved by integrating information across multiple test administrations using Bayesian updating (BU). The authors conducted a simulation that showed that this approach, which is based on earlier work by Zwick, Thayer, and Lewis,…
Sun, Jinwei; Wu, Jiabing; Guan, Dexin; Yao, Fuqi; Yuan, Fenghui; Wang, Anzhi; Jin, Changjie
2014-01-01
Leaf respiration is an important component of carbon exchange in terrestrial ecosystems, and estimates of leaf respiration directly affect the accuracy of ecosystem carbon budgets. Leaf respiration is inhibited by light; therefore, gross primary production (GPP) will be overestimated if the reduction in leaf respiration by light is ignored. However, few studies have quantified GPP overestimation with respect to the degree of light inhibition in forest ecosystems. To determine the effect of light inhibition of leaf respiration on GPP estimation, we assessed the variation in leaf respiration of seedlings of the dominant tree species in an old mixed temperate forest with different photosynthetically active radiation levels using the Laisk method. Canopy respiration was estimated by combining the effect of light inhibition on leaf respiration of these species with within-canopy radiation. Leaf respiration decreased exponentially with an increase in light intensity. Canopy respiration and GPP were overestimated by approximately 20.4% and 4.6%, respectively, when leaf respiration reduction in light was ignored compared with the values obtained when light inhibition of leaf respiration was considered. This study indicates that accurate estimates of daytime ecosystem respiration are needed for the accurate evaluation of carbon budgets in temperate forests. In addition, this study provides a valuable approach to accurately estimate GPP by considering leaf respiration reduction in light in other ecosystems.
Improved total variation based CT reconstruction algorithm with noise estimation
NASA Astrophysics Data System (ADS)
Jin, Xin; Li, Liang; Shen, Le; Chen, Zhiqiang
2012-10-01
A popular way to solve Computed Tomography (CT) inverse problems is to consider a constrained minimization problem following Compressed Sensing (CS) theory. CS theory proves the possibility of sparse signal recovery from undersampled measurements, which provides a powerful tool for CT problems that have incomplete measurements or contain heavy noise. Among current CS reconstruction methods, one widely accepted framework is to perform a total variation (TV) minimization process and a data fidelity constraint process in an alternating way, in two separate iteration loops. However, because the two processes are done independently, a certain misbalance may occur, which leads to either over-smoothed or noisy reconstructions. Moreover, such misbalance is usually difficult to adjust, as it varies with the scanned objects and protocols. In our work we balance the minimization and constraint processes by estimating the variance of the image noise. First, considering that the noise of the projection data follows a Poisson distribution, the Anscombe transform (AT) and its inverse are used to calculate the unbiased variance of the projections. Second, an estimate of the image noise is obtained through a noise transform model from the projections to the image. Finally, a modified CS reconstruction method is proposed which guarantees the desired variance on the reconstructed image, and thus prevents the blocky or over-noised artifacts caused by misbalanced constrained minimization. Results show advantages in both image quality and convergence speed.
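The first step, variance stabilization via the Anscombe transform, is easy to verify numerically: after A(x) = 2*sqrt(x + 3/8), Poisson data of any moderate mean has variance close to one, which is what makes the unbiased projection-variance estimate possible. The means below are arbitrary test values:

```python
import numpy as np

def anscombe(x):
    """Anscombe transform: approximately variance-stabilizes Poisson data,
    mapping Poisson(lam) to roughly unit-variance data for lam not too small."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

rng = np.random.default_rng(3)
variances = [np.var(anscombe(rng.poisson(lam, 200_000))) for lam in (10, 50, 200)]
print([round(v, 3) for v in variances])   # each close to 1.0
```

Because the stabilized variance is known in advance, any excess variance measured after the transform can be attributed to the signal model rather than to photon statistics.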
Improving the regulation of carcinogens by expediting cancer potency estimation.
Hoover, S M; Zeise, L; Pease, W S; Lee, L E; Hennig, M P; Weiss, L B; Cranor, C
1995-04-01
The statutory language of the Safe Drinking Water and Toxic Enforcement Act of 1986 (Proposition 65; California Health and Safety Code 25249.5 et seq.) encourages rapid adoption of "no significant risk levels" (NSRLs), intakes associated with estimated cancer risks of no more than 1 in 100,000. Derivation of an NSRL for a carcinogen listed under Proposition 65 requires the development of a cancer potency value. This paper discusses the methodology for the derivation of cancer potencies using an expedited procedure, and provides potency estimates for a number of agents listed as carcinogens under Proposition 65. To derive expedited potency values, default risk assessment methods are applied to data sets selected from an extensive tabulation of animal cancer bioassays according to criteria used by regulatory agencies. A subset of these expedited values is compared to values previously developed by regulatory agencies using conventional quantitative risk assessment and found to be in good agreement. Specific regulatory activities which could be facilitated by adopting similar expedited procedures are identified. PMID:7597261
Covariance specification and estimation to improve top-down Greenhouse Gas emission estimates
NASA Astrophysics Data System (ADS)
Ghosh, S.; Lopez-Coto, I.; Prasad, K.; Whetstone, J. R.
2015-12-01
The National Institute of Standards and Technology (NIST) operates the North-East Corridor (NEC) project and the Indianapolis Flux Experiment (INFLUX) in order to develop measurement methods to quantify sources of greenhouse gas (GHG) emissions, as well as their uncertainties, in urban domains using a top-down inversion method. Top-down inversion updates prior knowledge using observations in a Bayesian way. One primary consideration in a Bayesian inversion framework is the covariance structure of (1) the emission prior residuals and (2) the observation residuals (i.e. the difference between observations and model-predicted observations). These covariance matrices are referred to as the prior covariance matrix and the model-data mismatch covariance matrix, respectively. It is known that the choice of these covariances can have a large effect on estimates. The main objective of this work is to determine the impact of different covariance models on inversion estimates and their associated uncertainties in urban domains. We use a pseudo-data Bayesian inversion framework using footprints (i.e. sensitivities of tower measurements of GHGs to surface emissions) and emission priors (based on the Hestia project to quantify fossil-fuel emissions) to estimate posterior emissions under different covariance schemes. The posterior emission estimates and uncertainties are compared to the hypothetical truth. We find that, if we correctly specify spatial variability and spatio-temporal variability in the prior and model-data mismatch covariances respectively, then we can compute more accurate posterior estimates. We discuss a few covariance models that introduce space-time interacting mismatches, along with estimation of the parameters involved. We then compare several candidate prior spatial covariance models from the Matern covariance class and estimate their parameters with specified mismatches. We find that the best-fitted prior covariances are not always best at recovering the truth. To achieve
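A candidate prior spatial covariance from the Matern class can be constructed directly from pairwise distances. In this sketch the grid, variance sigma2, length scale ell, and smoothness nu are placeholder values, not the study's fitted parameters:

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.special import kv, gamma

def matern_cov(coords, sigma2=1.0, ell=10.0, nu=1.5):
    """Matern covariance matrix over a set of spatial coordinates."""
    d = cdist(coords, coords)
    d[d == 0] = 1e-12                         # avoid 0 * inf at zero distance
    s = np.sqrt(2.0 * nu) * d / ell
    C = sigma2 * (2.0 ** (1.0 - nu) / gamma(nu)) * s ** nu * kv(nu, s)
    np.fill_diagonal(C, sigma2)               # exact variance on the diagonal
    return C

# Placeholder 5x5 grid of emission cells (e.g. km coordinates)
grid = np.array([[i, j] for i in range(5) for j in range(5)], dtype=float)
C = matern_cov(grid)
print(C.shape, np.allclose(C, C.T))
```

In an inversion, C would serve as the prior covariance on emission residuals; varying ell and nu is how the candidate models in the abstract differ, while sigma2 sets the overall prior uncertainty.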
Estimating effects of improved drinking water and sanitation on cholera.
Leidner, Andrew J; Adusumilli, Naveen C
2013-12-01
Demand for adequate provision of drinking-water and sanitation facilities to promote public health and economic growth is increasing in the rapidly urbanizing countries of the developing world. With a panel of data on Asia and Africa from 1990 to 2008, associations are estimated between the occurrence of cholera outbreaks, the case rates in given outbreaks, the mortality rates associated with cholera and two disease control mechanisms, drinking-water and sanitation services. A statistically significant and negative effect is found between drinking-water services and both cholera case rates as well as cholera-related mortality rates. A relatively weak statistical relationship is found between the occurrence of cholera outbreaks and sanitation services.
Estimating Missing Features to Improve Multimedia Information Retrieval
Bagherjeiran, A; Love, N S; Kamath, C
2006-09-28
Retrieval in a multimedia database usually involves combining information from different modalities of data, such as text and images. However, all modalities of the data may not be available to form the query. The retrieval results from such a partial query are often less than satisfactory. In this paper, we present an approach to complete a partial query by estimating the missing features in the query. Our experiments with a database of images and their associated captions show that, with an initial text-only query, our completion method has similar performance to a full query with both image and text features. In addition, when we use relevance feedback, our approach outperforms the results obtained using a full query.
[An improved motion estimation of medical image series via wavelet transform].
Zhang, Ying; Rao, Nini; Wang, Gang
2006-10-01
The compression of medical image series is very important in telemedicine, and motion estimation plays a key role in video sequence compression. In this paper, an improved square-diamond search (SDS) algorithm is proposed for the motion estimation of medical image series. The improved SDS algorithm reduces the number of searched points and is applied in the wavelet-transform domain to estimate the motion of medical image series. A simulation experiment on digital subtraction angiography (DSA) was performed. The experimental results show that the algorithm's accuracy is higher than that of other algorithms for motion estimation of medical image series. PMID:17121333
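The flavour of such a search can be illustrated with a generic diamond-style block matcher over sum-of-absolute-differences (SAD) costs. This is a simplified sketch, not the paper's exact SDS algorithm, and the frames are synthetic.

```python
import numpy as np

def sad(ref, cur, x, y, dx, dy, B):
    """Sum of absolute differences between a block and a shifted candidate."""
    return np.abs(ref[y + dy:y + dy + B, x + dx:x + dx + B]
                  - cur[y:y + B, x:x + B]).sum()

def diamond_search(ref, cur, x, y, B=8, max_step=8):
    """Diamond-style search: try axis moves, halving the step down to 1,
    so far fewer points are evaluated than in an exhaustive search."""
    best, step = (0, 0), max_step
    while step >= 1:
        cands = [best] + [(best[0] + dx, best[1] + dy)
                          for dx, dy in ((step, 0), (-step, 0), (0, step), (0, -step))]
        cands = [(dx, dy) for dx, dy in cands
                 if 0 <= x + dx <= ref.shape[1] - B and 0 <= y + dy <= ref.shape[0] - B]
        best = min(cands, key=lambda c: sad(ref, cur, x, y, c[0], c[1], B))
        step //= 2
    return best                                  # motion vector (dx, dy)

# synthetic frame pair: a smooth blob shifted 2 pixels to the right
yy, xx = np.mgrid[0:32, 0:32]
cur = np.exp(-((yy - 16) ** 2 + (xx - 16) ** 2) / 40.0)
ref = np.roll(cur, 2, axis=1)
mv = diamond_search(ref, cur, 10, 10)            # -> (2, 0)
```

In the paper the same kind of search is run on wavelet coefficients rather than raw pixels, which is where the additional savings come from.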
Telescoping strategies for improved parameter estimation of environmental simulation models
NASA Astrophysics Data System (ADS)
Matott, L. Shawn; Hymiak, Beth; Reslink, Camden; Baxter, Christine; Aziz, Shirmin
2013-10-01
The parameters of environmental simulation models are often inferred by minimizing differences between simulated output and observed data. Heuristic global search algorithms are a popular choice for performing minimization but many algorithms yield lackluster results when computational budgets are restricted, as is often required in practice. One way for improving performance is to limit the search domain by reducing upper and lower parameter bounds. While such range reduction is typically done prior to optimization, this study examined strategies for contracting parameter bounds during optimization. Numerical experiments evaluated a set of novel “telescoping” strategies that work in conjunction with a given optimizer to scale parameter bounds in accordance with the remaining computational budget. Various telescoping functions were considered, including a linear scaling of the bounds, and four nonlinear scaling functions that more aggressively reduce parameter bounds either early or late in the optimization. Several heuristic optimizers were integrated with the selected telescoping strategies and applied to numerous optimization test functions as well as calibration problems involving four environmental simulation models. The test suite ranged from simple 2-parameter surfaces to complex 100-parameter landscapes, facilitating robust comparisons of the selected optimizers across a variety of restrictive computational budgets. All telescoping strategies generally improved the performance of the selected optimizers, relative to baseline experiments that used no bounds reduction. Performance improvements varied but were as high as 38% for a real-coded genetic algorithm (RGA), 21% for shuffled complex evolution (SCE), 16% for simulated annealing (SA), 8% for particle swarm optimization (PSO), and 7% for dynamically dimensioned search (DDS). Inter-algorithm comparisons suggest that the SCE and DDS algorithms delivered the best overall performance. SCE appears well
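A telescoping schedule reduces to a one-line rule: keep a fraction of the original parameter range, centred on the incumbent best solution, where the fraction shrinks with the remaining budget. A hedged sketch follows; the function name and the exact scaling family are ours, and the study's nonlinear functions may differ.

```python
import numpy as np

def telescope_bounds(lo, hi, best, frac_remaining, gamma=1.0):
    """Contract [lo, hi] around the incumbent best as the budget is spent.
    gamma=1 gives a linear schedule; gamma>1 contracts aggressively late,
    gamma<1 aggressively early."""
    w = frac_remaining ** gamma            # fraction of original width kept
    half = 0.5 * w * (hi - lo)
    return (float(np.clip(best - half, lo, hi)),
            float(np.clip(best + half, lo, hi)))

# halfway through the budget, centred on best = 2.0 within [0, 10]
lo2, hi2 = telescope_bounds(0.0, 10.0, 2.0, frac_remaining=0.5)   # -> (0.0, 4.5)
```

The clipping keeps the contracted interval inside the original bounds, so the optimizer never samples outside the user-specified range.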
Estimating ages of open star clusters using stellar luminosity and colour
NASA Astrophysics Data System (ADS)
Williams, Chris
2004-12-01
This paper was designed for the 'armchair' astronomer who is interested in 'amateur research' using the vast number of images placed on the Internet by various sources. Open star clusters are groups of stars that are physically related, bound by mutual gravitational attraction, populate a limited region of space, and are all at roughly the same distance from us. We believe they originate from large cosmic gas and dust clouds within the Milky Way, and that the formation process takes only a short time, so all members of a cluster are of similar age. Also, as all the stars in a cluster formed from the same cloud, they are all of similar (initial) chemical composition. This 'family' of stars may share a similar birth age, but their evolutionary ages differ owing to the variation in their masses. High-mass stars evolve much more quickly than low-mass stars: they consume their fuel faster, have higher luminosities, and die in a very short time (astronomically speaking) compared with a star of a fraction of a solar mass.
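The mass-lifetime claim in the last sentence follows from simple scalings: main-sequence fuel is roughly proportional to mass M while luminosity goes roughly as M^3.5, so lifetime t ~ M/L ~ M^-2.5. A back-of-envelope sketch (the coefficient and exponent are the usual illustrative values, not fitted to any cluster):

```python
def ms_lifetime_gyr(mass_msun):
    """Rough main-sequence lifetime in Gyr: t ~ 10 * (M/Msun)**-2.5,
    from fuel ~ M and luminosity ~ M**3.5 (illustrative scaling only)."""
    return 10.0 * mass_msun ** -2.5

# a 2 Msun star lives only ~1.8 Gyr versus ~10 Gyr for the Sun,
# which is why a cluster's main-sequence turnoff mass dates the cluster
```

It is this steep dependence that lets the colour-magnitude turnoff of a cluster serve as its clock.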
Multi-sensor merging techniques for improving burned area estimates
NASA Astrophysics Data System (ADS)
Bradley, A.; Tansey, K.; Chuvieco, E.
2012-04-01
The ESA Climate Change Initiative (CCI) aims to create a set of Essential Climate Variables (ECVs) to assist climate modellers. One of these is the fire ECV, a product in line with the typical requirements of climate, vegetation, and ecological modellers investigated by the fire ECV project and documented in the fire product specification document. The product is derived from burned area estimates from three sensors: SPOT VEGETATION (SPOT-VGT), the Along-Track Scanning Radiometer (ATSR) series, and the MEdium Resolution Imaging Spectrometer at Full Resolution (MERIS FRS). This abstract is concerned with the final stage in the production of the fire product: merging the burned area estimates from the three sensors into two products. The two products are created at monthly time steps: the pixel product (1 km) and the aggregated grid product (0.5° and 0.25°). The pixel product contains information on the sensors detecting the burn, date of burn detection, confidence of the burn, and land cover statistics. The grid product contains aggregated information on burned area totals and proportion, major land cover burned, heterogeneity of burning in the grid cell, confidence, and cloud cover levels. The method used to create these products needs to allow for time-series gaps due to multiple sensor combinations and different orbital and swath characteristics, and comprises a combination of statistical, selective, stratification, and fusion methods common to the satellite remote sensing community. The method has three stages. First, a combined merge of the sensors at the same 1 km resolution: the earliest date of detection is recorded, and the sensor that performs best over a particular vegetation type is taken as the most reliable confidence level. The second stage involves fusion of the 300 m MERIS FRS data, allowing confidence levels and burn dates to be reported at a finer resolution. To allow for MERIS FRS pixels that cross adjacent 1 km pixels from the first step, the fusion is carried out at 100 m
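The per-pixel rule described for the first stage (keep the earliest detection date; take the confidence from the sensor deemed most reliable for that land cover) can be sketched as follows. The maps and reliability weights below are toy values, not the CCI product's calibrated ones, and the function name is ours.

```python
import numpy as np

def merge_burn_dates(dates, confidences, reliability):
    """Merge per-sensor burned-area maps on a common 1 km grid:
    keep the earliest detection date; take the confidence from the most
    reliable sensor among those that detected the burn.
    dates: (n_sensors, H, W) day of detection, 0 = no burn detected."""
    burned = dates > 0
    big = np.iinfo(np.int32).max
    earliest = np.where(burned, dates, big).min(axis=0)
    earliest = np.where(earliest == big, 0, earliest)
    rel = reliability[:, None, None] * burned          # zero where no detection
    best_sensor = rel.argmax(axis=0)
    conf = np.take_along_axis(confidences, best_sensor[None], axis=0)[0]
    conf = np.where(burned.any(axis=0), conf, 0.0)
    return earliest, conf

dates = np.array([[[10, 0], [0, 0]],
                  [[12, 5], [0, 0]]])                  # two sensors, 2x2 tile
confs = np.array([[[0.9, 0.0], [0.0, 0.0]],
                  [[0.7, 0.6], [0.0, 0.0]]])
reliability = np.array([0.5, 0.8])                     # e.g. per vegetation type
day, conf = merge_burn_dates(dates, confs, reliability)
```

Where both sensors see the burn, the earlier date (day 10) survives but the confidence comes from the more reliable second sensor, which mirrors the selection logic in the text.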
Improving a regional model using reduced complexity and parameter estimation
Kelson, Victor A.; Hunt, Randall J.; Haitjema, Henk M.
2002-01-01
The availability of powerful desktop computers and graphical user interfaces for ground water flow models makes possible the construction of ever more complex models. A proposed copper-zinc sulfide mine in northern Wisconsin offers a unique case in which the same hydrologic system has been modeled using a variety of techniques covering a wide range of sophistication and complexity. Early in the permitting process, simple numerical models were used to evaluate the necessary amount of water to be pumped from the mine, reductions in streamflow, and the drawdowns in the regional aquifer. More complex models have subsequently been used in an attempt to refine the predictions. Even after so much modeling effort, questions regarding the accuracy and reliability of the predictions remain. We have performed a new analysis of the proposed mine using the two-dimensional analytic element code GFLOW coupled with the nonlinear parameter estimation code UCODE. The new model is parsimonious, containing fewer than 10 parameters, and covers a region several times larger in areal extent than any of the previous models. The model demonstrates the suitability of analytic element codes for use with parameter estimation codes. The simplified model results are similar to the more complex models; predicted mine inflows and UCODE-derived 95% confidence intervals are consistent with the previous predictions. More important, the large areal extent of the model allowed us to examine hydrological features not included in the previous models, resulting in new insights about the effects that far-field boundary conditions can have on near-field model calibration and parameterization. In this case, the addition of surface water runoff into a lake in the headwaters of a stream while holding recharge constant moved a regional ground watershed divide and resulted in some of the added water being captured by the adjoining basin. Finally, a simple analytical solution was used to clarify the GFLOW model
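The GFLOW/UCODE workflow pairs a forward model with nonlinear regression and linearised confidence intervals. A generic sketch of that pattern with SciPy follows; the head model below is a toy stand-in (not GFLOW), and `least_squares` stands in for UCODE's Gauss-Newton regression.

```python
import numpy as np
from scipy.optimize import least_squares

def heads_model(params, x):
    """Toy forward model: head = h0 - s * x**2 / 1000 (illustrative only)."""
    h0, s = params
    return h0 - s * x ** 2 / 1000.0

x_obs = np.linspace(5, 95, 12)                    # observation locations
true = np.array([50.0, 0.8])
rng = np.random.default_rng(3)
h_obs = heads_model(true, x_obs) + rng.normal(0, 0.05, x_obs.size)

# UCODE-style regression: minimise residuals, then linearised 95% intervals
res = least_squares(lambda p: heads_model(p, x_obs) - h_obs, x0=[40.0, 0.5])
J = res.jac
s2 = (res.fun ** 2).sum() / (x_obs.size - len(true))
ci95 = 1.96 * np.sqrt(np.diag(s2 * np.linalg.inv(J.T @ J)))
```

With fewer than 10 parameters, as in the paper's parsimonious model, this kind of regression is cheap and the confidence intervals are directly interpretable.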
An adaptive displacement estimation algorithm for improved reconstruction of thermal strain.
Ding, Xuan; Dutta, Debaditya; Mahmoud, Ahmed M; Tillman, Bryan; Leers, Steven A; Kim, Kang
2015-01-01
Thermal strain imaging (TSI) can be used to differentiate between lipid and water-based tissues in atherosclerotic arteries. However, detecting small lipid pools in vivo requires accurate and robust displacement estimation over a wide range of displacement magnitudes. Phase-shift estimators such as Loupas' estimator and time-shift estimators such as normalized cross-correlation (NXcorr) are commonly used to track tissue displacements. However, Loupas' estimator is limited by phase-wrapping and NXcorr performs poorly when the SNR is low. In this paper, we present an adaptive displacement estimation algorithm that combines both Loupas' estimator and NXcorr. We evaluated this algorithm using computer simulations and an ex vivo human tissue sample. Using 1-D simulation studies, we showed that when the displacement magnitude induced by thermal strain was >λ/8 and the electronic system SNR was >25.5 dB, the NXcorr displacement estimate was less biased than the estimate found using Loupas' estimator. On the other hand, when the displacement magnitude was ≤λ/4 and the electronic system SNR was ≤25.5 dB, Loupas' estimator had less variance than NXcorr. We used these findings to design an adaptive displacement estimation algorithm. Computer simulations of TSI showed that the adaptive displacement estimator was less biased than either Loupas' estimator or NXcorr. Strain reconstructed from the adaptive displacement estimates improved the strain SNR by 43.7 to 350% and the spatial accuracy by 1.2 to 23.0% (P < 0.001). An ex vivo human tissue study provided results that were comparable to computer simulations. The results of this study showed that a novel displacement estimation algorithm, which combines two different displacement estimators, yielded improved displacement estimation and resulted in improved strain reconstruction.
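The decision rule reported above (thresholds around lambda/8 and 25.5 dB) amounts to a small selector between the two estimators. A sketch, where a coarse pre-estimate supplies the displacement magnitude; the function and argument names are ours, not the paper's.

```python
def adaptive_estimate(d_loupas, d_nxcorr, disp_over_lambda, snr_db,
                      mag_thresh=0.125, snr_thresh=25.5):
    """Pick between the two estimators following the regimes reported above:
    large displacement and high SNR favour NXcorr (no phase wrapping);
    otherwise Loupas' estimator is preferred for its lower variance."""
    if disp_over_lambda > mag_thresh and snr_db > snr_thresh:
        return d_nxcorr
    return d_loupas
```

In practice the magnitude argument would come from a coarse cross-correlation pass, so the selector can be applied per pixel before strain reconstruction.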
Ionospheric perturbation degree estimates for improving GNSS applications
NASA Astrophysics Data System (ADS)
Jakowski, Norbert; Mainul Hoque, M.; Wilken, Volker; Berdermann, Jens; Hlubek, Nikolai
The ionosphere can adversely affect the accuracy, continuity, availability, and integrity of modern Global Navigation Satellite Systems (GNSS) in different ways. Hence, reliable information on key parameters describing the perturbation degree of the ionosphere is helpful for estimating the potential degradation of the performance of these systems. Thus, to guarantee the required safety level in aviation, Ground Based Augmentation Systems (GBAS) and Satellite Based Augmentation Systems (SBAS) have been established for detecting and mitigating ionospheric threats, in particular those due to ionospheric gradients. The paper reviews various attempts and capabilities to characterize the perturbation degree of the ionosphere currently being used in precise positioning and safety-of-life applications. Continuity and availability of signals are mainly impacted by amplitude and phase scintillations, characterized by indices such as S4 or phase noise. To characterize medium- and large-scale ionospheric perturbations that may seriously affect the accuracy and integrity of GNSS, the use of an internationally standardized Disturbance Ionosphere Index (DIX) is recommended. The definition of such a DIX must take into account practical needs and should be an objective measure of ionospheric conditions that is easy to compute and reproducible. A preliminary DIX approach is presented and discussed. Such a robust and easily adaptable index should have great potential for use in operational ionospheric weather services and GNSS augmentation systems.
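Of the indices mentioned, S4 is the simplest to state: the normalised standard deviation of received signal intensity. A minimal sketch (operational monitors first detrend the intensity series, which is omitted here):

```python
import numpy as np

def s4_index(intensity):
    """Amplitude scintillation index S4: sqrt((<I^2> - <I>^2) / <I>^2)
    for a (detrended) signal-intensity series I."""
    i = np.asarray(intensity, float)
    return np.sqrt((np.mean(i ** 2) - np.mean(i) ** 2) / np.mean(i) ** 2)
```

A steady signal gives S4 = 0, while strong scintillation drives S4 towards (and beyond) 1, which is why thresholds on S4 are natural continuity/availability flags.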
An improved method for nonlinear parameter estimation: a case study of the Rössler model
NASA Astrophysics Data System (ADS)
He, Wen-Ping; Wang, Liu; Jiang, Yun-Di; Wan, Shi-Quan
2016-08-01
Parameter estimation is an important research topic in nonlinear dynamics. Based on the evolutionary algorithm (EA), Wang et al. (2014) presented a new scheme for nonlinear parameter estimation, and numerical tests indicate that the estimation precision is satisfactory. However, the convergence rate of the EA is relatively slow when multiple unknown parameters in a multidimensional dynamical system are estimated simultaneously. To solve this problem, an improved method for parameter estimation of nonlinear dynamical equations is provided in the present paper. The main idea of the improved scheme is to use all of the known time series for all of the components of the dynamical equations to estimate the parameters of a single component one by one, instead of estimating all of the parameters in all of the components simultaneously. Thus, we can estimate all of the parameters stage by stage. The performance of the improved method was tested using a classic chaotic system, the Rössler model. The numerical tests show that the amended parameter estimation scheme can greatly improve the searching efficiency and that there is a significant increase in the convergence rate of the EA, particularly for multiparameter estimation in multidimensional dynamical equations. Moreover, the results indicate that the accuracy of parameter estimation and the CPU time consumed by the presented method have no obvious dependence on the sample size.
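The component-by-component idea can be illustrated without an EA at all: when every component's series is known, each Rössler equation becomes a problem in its own parameters only. The sketch below uses least squares on numerically differentiated series as a simplified stand-in for the paper's EA-based stages (Euler integration; tolerances are loose).

```python
import numpy as np

# Rössler system: dx = -y - z, dy = x + a*y, dz = b + z*(x - c)
a_true, b_true, c_true = 0.2, 0.2, 5.7
dt, n = 0.01, 20000
x = np.empty(n); y = np.empty(n); z = np.empty(n)
x[0] = y[0] = z[0] = 1.0
for k in range(n - 1):                      # simple Euler integration
    x[k + 1] = x[k] + dt * (-y[k] - z[k])
    y[k + 1] = y[k] + dt * (x[k] + a_true * y[k])
    z[k + 1] = z[k] + dt * (b_true + z[k] * (x[k] - c_true))

# stage 1: estimate a from the y-equation alone, using all known series
dy = np.gradient(y, dt)
a_est = np.linalg.lstsq(y[:, None], dy - x, rcond=None)[0][0]

# stage 2: estimate b and c from the z-equation, given the other series
dz = np.gradient(z, dt)
A = np.column_stack([np.ones(n), -z])       # dz - z*x = b - c*z
b_est, c_est = np.linalg.lstsq(A, dz - z * x, rcond=None)[0]
```

Each stage involves only the unknowns of one component, which is exactly the dimensionality reduction that speeds up the EA search in the paper.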
Improving the text classification using clustering and a novel HMM to reduce the dimensionality.
Seara Vieira, A; Borrajo, L; Iglesias, E L
2016-11-01
In text classification problems, the representation of a document has a strong impact on the performance of learning systems. The high dimensionality of the classical structured representations can lead to burdensome computations due to the great size of real-world data. Consequently, there is a need for reducing the quantity of handled information to improve the classification process. In this paper, we propose a method to reduce the dimensionality of a classical text representation based on a clustering technique to group documents, and a previously developed Hidden Markov Model to represent them. We have applied tests with the k-NN and SVM classifiers on the OHSUMED and TREC benchmark text corpora using the proposed dimensionality reduction technique. The experimental results obtained are very satisfactory compared to commonly used techniques like InfoGain and the statistical tests performed demonstrate the suitability of the proposed technique for the preprocessing step in a text classification task. PMID:27686709
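The pipeline (cluster the documents, then re-represent each one compactly) can be sketched with plain k-means and centroid distances standing in for the paper's HMM-based representation; the tf-idf vectors are simulated with random data.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means (numpy only), used here to group similar documents."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.linalg.norm(X[:, None] - centroids[None], axis=2).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids

def reduce_dims(X, centroids):
    """Represent each document by its distance to every cluster centroid,
    shrinking a vocabulary-sized vector to k dimensions."""
    return np.linalg.norm(X[:, None] - centroids[None], axis=2)

rng = np.random.default_rng(7)
docs = rng.random((100, 500))             # stand-in for tf-idf vectors
cents = kmeans(docs, k=8)
docs_reduced = reduce_dims(docs, cents)   # shape (100, 8)
```

The reduced vectors would then feed the k-NN or SVM classifier, as in the experiments above; the paper's HMM representation replaces the centroid-distance step.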
Clustering methods for removing outliers from vision-based range estimates
NASA Technical Reports Server (NTRS)
Hussien, B.; Suorsa, R.
1992-01-01
The present approach to the automation of helicopter low-altitude flight uses one or more passive imaging sensors to extract environmental obstacle information; this is then processed via computer-vision techniques to yield a time-varying map of range to obstacles in the sensor's field of view along the vehicle's flight path. Attention is given to two related techniques that can eliminate outliers from a sparse range map by clustering the range-map information into different spatial classes, relying on a segmented and labeled image to aid spatial classification within the image plane.
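A simple flavour of such outlier rejection: compare each range estimate with the median of its spatial neighbours and discard gross deviations. This median/MAD filter is a stand-in for the clustering-based spatial classification described above, not the paper's method.

```python
import numpy as np

def remove_range_outliers(ranges, window=5, thresh=3.0):
    """Flag sparse range-map outliers: reject estimates that deviate from
    the local median by more than `thresh` robust deviations (MAD)."""
    r = np.asarray(ranges, float)
    pad = window // 2
    padded = np.pad(r, pad, mode='edge')
    keep = np.ones(r.size, bool)
    for i in range(r.size):
        nbr = padded[i:i + window]
        med = np.median(nbr)
        mad = np.median(np.abs(nbr - med)) + 1e-9
        keep[i] = abs(r[i] - med) / mad <= thresh
    return keep

ranges = [10.1, 10.3, 55.0, 10.2, 10.4, 10.3]   # 55.0 is a spurious estimate
mask = remove_range_outliers(ranges)
```

Points flagged `False` would simply be dropped from the sparse range map before it is handed to the guidance system.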
Rejani, R; Rao, K V; Osman, M; Srinivasa Rao, Ch; Reddy, K Sammi; Chary, G R; Pushpanjali; Samuel, Josily
2016-03-01
The ungauged wet semi-arid watershed cluster, Seethagondi, lies in the Adilabad district of Telangana in India and is prone to severe erosion and water scarcity. The runoff and soil loss data at watershed, catchment, and field level are necessary for planning soil and water conservation interventions. In this study, an attempt was made to develop a spatial soil loss estimation model for the Seethagondi cluster using RUSLE coupled with ArcGIS, which was used to estimate the soil loss spatially and temporally. The daily Aphrodite rainfall data for the period from 1951 to 2007 were used; the annual rainfall varied from 508 to 1351 mm, with a mean annual rainfall of 950 mm and a mean erosivity of 6789 MJ mm ha(-1) h(-1) year(-1). Considerable variation in land use and land cover, especially in crop land and fallow land, was observed between normal and drought years, and corresponding variation in the erosivity, C factor, and soil loss was also noted. The mean value of the C factor derived from NDVI for crop land was 0.42 and 0.22 in normal and drought years, respectively. The topography is undulating, the major portion of the cluster has a slope of less than 10°, and 85.3% of the cluster has soil loss below 20 t ha(-1) year(-1). The soil loss from crop land varied from 2.9 to 3.6 t ha(-1) year(-1) in low rainfall years to 31.8 to 34.7 t ha(-1) year(-1) in high rainfall years, with a mean annual soil loss of 12.2 t ha(-1) year(-1). The soil loss from crop land was highest in the month of August, with an annual soil loss of 13.1 and 2.9 t ha(-1) year(-1) in normal and drought years, respectively. Based on the soil loss in a normal year, the interventions recommended for 85.3% of the watershed area include agronomic measures such as contour cultivation, graded bunds, strip cropping, mixed cropping, crop rotations, mulching, summer plowing, vegetative bunds, agri-horticultural systems, and management practices such as broad bed furrows, raised sunken beds, and harvesting available water
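RUSLE itself is a product of five factors, A = R * K * LS * C * P. The sketch below plugs in the abstract's mean erosivity and C factors with assumed K and LS values for illustration; it does not reproduce the paper's calibrated, spatially distributed factors.

```python
def rusle_soil_loss(R, K, LS, C, P=1.0):
    """Annual soil loss (t/ha/yr) from the RUSLE factors: rainfall
    erosivity R, soil erodibility K, slope length-steepness LS,
    cover-management C, and support practice P."""
    return R * K * LS * C * P

# mean erosivity from the abstract; K and LS are assumed, not the paper's
loss_normal = rusle_soil_loss(R=6789, K=0.0045, LS=0.8, C=0.42)   # normal year
loss_drought = rusle_soil_loss(R=6789, K=0.0045, LS=0.8, C=0.22)  # drought year
```

Because C roughly halves in drought years (0.42 to 0.22), the modelled soil loss roughly halves too, which matches the direction of the normal-versus-drought contrast reported above.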
NASA Astrophysics Data System (ADS)
Milone, Eugene F.; Schiller, Stephen Joseph
2015-08-01
Eclipsing binaries (EBs) with well-calibrated photometry and precisely measured double-lined radial velocities are candidate standard candles when analyzed with a version of the Wilson-Devinney (WD) light curve modeling program that includes the direct distance estimation (DDE) algorithm. In the DDE procedure, distance is determined as a system parameter, thus avoiding the assumption of stellar sphericity and yielding a well-determined standard error for distance. The method therefore provides a powerful way to calibrate the distances of other objects in any aggregate that contains suitable EBs. DDE has been successfully applied to nearby systems and to a small number of EBs in open clusters. Previously we reported on one of the systems in our Binaries-in-Clusters program, HD 27130 = V818 Tau, which had been analyzed with earlier versions of the WD program (see 1987 AJ 93, 1471; 1988 AJ 95, 1466; and 1995 AJ 109, 359 for examples). Results from those early solutions were entered as starting parameters in the current work with the WD 2013 version. Here we report several series of ongoing modeling experiments on DS And, a 1.01-d period, early-type EB in the intermediate-age cluster NGC 752. In one series, ranges of interstellar extinction and hotter-star temperature were assumed, and in another series both component temperatures were adjusted. Consistent parameter sets, including distance, confirm DDE's advantages, essentially limited only by knowledge of interstellar extinction, which is small for DS And. Uncertainties in the bandpass calibration constants (flux in standard units from a zero-magnitude star) are much less important, because the derived distance scales (inversely) only with the calibration's square root. This work was enabled by the unstinting help of Bob Wilson. We acknowledge earlier support for the Binaries-in-Clusters program from NSERC of Canada, and the Research Grants Committee and Department of Physics & Astronomy of the University of Calgary.
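The square-root sensitivity to the calibration constant mentioned above is simple error propagation: if the derived distance scales as d ∝ F0^(-1/2) in the zero-magnitude flux F0, then σ_d/d = (1/2) σ_F0/F0. As a one-liner:

```python
def distance_rel_error(calib_rel_error):
    """DDE distance scales as (calibration flux)**(-1/2), so a relative
    calibration error propagates at half strength into the distance."""
    return 0.5 * calib_rel_error

# a 4% error in the zero-magnitude flux calibration -> only 2% in distance
```

This is why the authors rank bandpass-calibration uncertainty well below interstellar extinction as an error source.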
Novel angle estimation for bistatic MIMO radar using an improved MUSIC
NASA Astrophysics Data System (ADS)
Li, Jianfeng; Zhang, Xiaofei; Chen, Han
2014-09-01
In this article, we study the problem of angle estimation for bistatic multiple-input multiple-output (MIMO) radar and propose an improved multiple signal classification (MUSIC) algorithm for joint direction of departure (DOD) and direction of arrival (DOA) estimation. The proposed algorithm obtains initial angle estimates from the signal subspace and uses local one-dimensional peak searches to achieve the joint estimation of DOD and DOA. The angle estimation performance of the proposed algorithm is better than that of the estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithm, and is almost the same as that of two-dimensional MUSIC. Furthermore, the proposed algorithm is suitable for irregular array geometries, automatically obtains paired DOD and DOA estimates, and avoids two-dimensional peak searching. The simulation results verify the effectiveness and improvement of the algorithm.
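For orientation, a standard one-dimensional MUSIC pseudo-spectrum is sketched below on a plain uniform linear array. This is textbook MUSIC on a toy receive array, not the paper's bistatic DOD/DOA algorithm, but it shows the noise-subspace projection that both share.

```python
import numpy as np

def music_spectrum(R, n_sources, angles_deg, n_elems, d=0.5):
    """1-D MUSIC pseudo-spectrum for a uniform linear array with element
    spacing d (in wavelengths); R is the array sample covariance."""
    _, vecs = np.linalg.eigh(R)
    En = vecs[:, :n_elems - n_sources]            # noise subspace
    p = np.empty(len(angles_deg))
    for i, ang in enumerate(angles_deg):
        a = np.exp(-2j * np.pi * d * np.arange(n_elems) * np.sin(np.radians(ang)))
        p[i] = 1.0 / np.linalg.norm(En.conj().T @ a) ** 2
    return p

# single source at 20 degrees, 8-element array, 200 snapshots
rng = np.random.default_rng(5)
M, N, theta = 8, 200, 20.0
a = np.exp(-2j * np.pi * 0.5 * np.arange(M) * np.sin(np.radians(theta)))
s = rng.normal(size=N) + 1j * rng.normal(size=N)
X = np.outer(a, s) + 0.1 * (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N)))
R = X @ X.conj().T / N
grid = np.arange(-90.0, 90.5, 0.5)
est = grid[music_spectrum(R, 1, grid, M).argmax()]
```

The paper's contribution replaces the exhaustive grid scan with local one-dimensional searches seeded from the signal subspace, which is what removes the two-dimensional search cost.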
Wu, Hao-Yi; Rozo, Eduardo; Wechsler, Risa H.; /KIPAC, Menlo Park /SLAC /CCAPP, Columbus /KICP, Chicago /KIPAC, Menlo Park /SLAC
2010-06-02
The precision of cosmological parameters derived from galaxy cluster surveys is limited by uncertainty in relating observable signals to cluster mass. We demonstrate that a small mass-calibration follow-up program can significantly reduce this uncertainty and improve parameter constraints, particularly when the follow-up targets are judiciously chosen. To this end, we apply a simulated annealing algorithm to maximize the dark energy information at fixed observational cost, and find that optimal follow-up strategies can reduce the observational cost required to achieve a specified precision by up to an order of magnitude. Considering clusters selected from optical imaging in the Dark Energy Survey, we find that approximately 200 low-redshift X-ray clusters or massive Sunyaev-Zel'dovich clusters can improve the dark energy figure of merit by 50%, provided that the follow-up mass measurements involve no systematic error. In practice, the actual improvement depends on (1) the uncertainty in the systematic error in follow-up mass measurements, which needs to be controlled at the 5% level to avoid severe degradation of the results; and (2) the scatter in the optical richness-mass distribution, which needs to be made as tight as possible to improve the efficacy of follow-up observations.
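The optimisation step can be caricatured as a knapsack-style selection solved by simulated annealing: maximise total "information" from followed-up clusters at fixed observational cost. The additive score below is a toy stand-in for the dark energy figure of merit, and all numbers are invented.

```python
import numpy as np

def anneal_followup(info, cost, budget, steps=5000, seed=0):
    """Simulated-annealing sketch: toggle clusters in/out of the follow-up
    sample to maximise total information subject to a cost budget."""
    rng = np.random.default_rng(seed)
    pick = np.zeros(len(info), bool)

    def score(sel):
        return info[sel].sum() if cost[sel].sum() <= budget else -np.inf

    cur = best = score(pick)
    best_pick = pick.copy()
    for t in range(steps):
        T = max(1.0 * (1 - t / steps), 1e-3)       # linear cooling
        i = rng.integers(len(info))
        pick[i] = ~pick[i]                          # propose a flip
        new = score(pick)
        if new >= cur or rng.random() < np.exp((new - cur) / T):
            cur = new
            if cur > best:
                best, best_pick = cur, pick.copy()
        else:
            pick[i] = ~pick[i]                      # revert the move
    return best_pick, best

info = np.array([5.0, 4.0, 2.0])    # toy information gain per cluster
cost = np.array([4.0, 2.0, 5.0])    # toy follow-up cost per cluster
pick, best = anneal_followup(info, cost, budget=6.0)   # best subset {0, 1}
```

In the paper the score is the forecast dark energy figure of merit rather than a simple sum, but the accept/reject annealing loop is the same pattern.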
Measuring slope to improve energy expenditure estimates during field-based activities
Duncan, Glen E.; Lester, Jonathan; Migotsky, Sean; Higgins, Lisa; Borriello, Gaetano
2013-01-01
This technical note describes methods to improve activity energy expenditure estimates from a multi-sensor board (MSB) by measuring slope. Ten adults walked over a 2.5-mile course wearing an MSB and a mobile calorimeter. Energy expenditure was estimated using accelerometry alone (base) and using four methods of measuring slope. The barometer and GPS methods improved accuracy by 11% over the base (Ps < 0.05), to 86% overall. Measuring slope using the MSB improves energy expenditure estimates during field-based activities. PMID:23537030
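The barometer method rests on converting pressure to altitude and differencing along the path. A sketch using the standard-atmosphere formula (the constants are the usual ISA values; the MSB's actual calibration may differ):

```python
def pressure_to_altitude_m(p_hpa, p0_hpa=1013.25):
    """Barometric altitude from the international standard atmosphere."""
    return 44330.0 * (1.0 - (p_hpa / p0_hpa) ** 0.1903)

def slope_percent(p1_hpa, p2_hpa, horiz_dist_m):
    """Grade between two points from barometer readings and GPS distance."""
    rise = pressure_to_altitude_m(p2_hpa) - pressure_to_altitude_m(p1_hpa)
    return 100.0 * rise / horiz_dist_m
```

The resulting grade can then scale the accelerometry-based energy cost for uphill versus downhill walking, which is the correction the note evaluates.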
Liu, Xiaoqiu; Lewis, James J.; Zhang, Hui; Lu, Wei; Zhang, Shun; Zheng, Guilan; Bai, Liqiong; Li, Jun; Li, Xue; Chen, Hongguang; Liu, Mingming; Chen, Rong; Chi, Junying; Lu, Jian; Huan, Shitong; Cheng, Shiming; Wang, Lixia; Jiang, Shiwen; Chin, Daniel P.; Fielding, Katherine L.
2015-01-01
Background Mobile text messaging and medication monitors (medication monitor boxes) have the potential to improve adherence to tuberculosis (TB) treatment and reduce the need for directly observed treatment (DOT), but to our knowledge they have not been properly evaluated in TB patients. We assessed the effectiveness of text messaging and medication monitors to improve medication adherence in TB patients. Methods and Findings In a pragmatic cluster-randomised trial, 36 districts/counties (each with at least 300 active pulmonary TB patients registered in 2009) within the provinces of Heilongjiang, Jiangsu, Hunan, and Chongqing, China, were randomised using stratification and restriction to one of four case-management approaches in which patients received reminders via text messages, a medication monitor, combined, or neither (control). Patients in the intervention arms received reminders to take their drugs and reminders for monthly follow-up visits, and the managing doctor was recommended to switch patients with adherence problems to more intensive management or DOT. In all arms, patients took medications out of a medication monitor box, which recorded when the box was opened, but the box gave reminders only in the medication monitor and combined arms. Patients were followed up for 6 mo. The primary endpoint was the percentage of patient-months on TB treatment where at least 20% of doses were missed as measured by pill count and failure to open the medication monitor box. Secondary endpoints included additional adherence and standard treatment outcome measures. Interventions were not masked to study staff and patients. From 1 June 2011 to 7 March 2012, 4,292 new pulmonary TB patients were enrolled across the 36 clusters. A total of 119 patients (by arm: 33 control, 33 text messaging, 23 medication monitor, 30 combined) withdrew from the study in the first month because they were reassessed as not having TB by their managing doctor (61 patients) or were switched to
Zarchi, Kian; Haugaard, Vibeke B; Dufour, Deirdre N; Jemec, Gregor B E
2015-03-01
Telemedicine is widely considered as an efficient approach to manage the growing problem of chronic wounds. However, to date, there is no convincing evidence to support the clinical efficacy of telemedicine in wound management. In this prospective cluster controlled study, we tested the hypothesis that advice on wound management provided by a team of wound-care specialists through telemedicine would significantly improve the likelihood of wound healing compared with the best available conventional practice. A total of 90 chronic wound patients in home care met all study criteria and were included: 50 in the telemedicine group and 40 in the conventional group. Patients with pressure ulcers, surgical wounds, and cancer wounds were excluded. During the 1-year follow-up, complete wound healing was achieved in 35 patients (70%) in the telemedicine group compared with 18 patients (45%) in the conventional group. After adjusting for important covariates, offering advice on wound management through telemedicine was associated with significantly increased healing compared with the best available conventional practice (telemedicine vs. conventional practice: adjusted hazard ratio 2.19; 95% confidence interval: 1.15-4.17; P=0.017). This study strongly supports the use of telemedicine to connect home-care nurses to a team of wound experts in order to improve the management of chronic wounds.
Cao, Weihua; Tsiatis, Anastasios A; Davidian, Marie
2009-09-01
Considerable recent interest has focused on doubly robust estimators for a population mean response in the presence of incomplete data, which involve models for both the propensity score and the regression of outcome on covariates. The usual doubly robust estimator may yield severely biased inferences if neither of these models is correctly specified and can exhibit nonnegligible bias if the estimated propensity score is close to zero for some observations. We propose alternative doubly robust estimators that achieve comparable or improved performance relative to existing methods, even with some estimated propensity scores close to zero. PMID:20161511
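The augmented inverse-probability-weighted (AIPW) form of the doubly robust estimator described above can be sketched as follows. The simulated data, model choices, and coefficients are illustrative assumptions, not from the paper (which proposes alternative estimators that improve on this standard form):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulate a covariate, a missingness mechanism, and an outcome.
x = rng.normal(size=n)
propensity = 1.0 / (1.0 + np.exp(-(0.5 + x)))   # P(Y observed | x)
observed = rng.random(n) < propensity
y = 2.0 + 3.0 * x + rng.normal(size=n)          # true population mean E[Y] = 2

# Working models. Here the propensity model is taken as correct, and the
# outcome regression is fit by least squares on the observed cases only;
# in practice either model may be misspecified.
pi_hat = propensity
X_obs = np.column_stack([np.ones(observed.sum()), x[observed]])
beta = np.linalg.lstsq(X_obs, y[observed], rcond=None)[0]
m_hat = beta[0] + beta[1] * x                   # outcome predictions for all

# AIPW doubly robust estimator of the mean response:
#   mu_hat = mean( R*Y/pi_hat - (R - pi_hat)/pi_hat * m_hat(X) )
r = observed.astype(float)
mu_dr = np.mean(r * y / pi_hat - (r - pi_hat) / pi_hat * m_hat)
print(round(mu_dr, 2))
```

The estimator is consistent if either working model is correct; the instability the authors address arises when `pi_hat` is near zero for some observations, inflating the weights.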
Estimating Accuracy of Land-Cover Composition From Two-Stage Clustering Sampling
Land-cover maps are often used to compute land-cover composition (i.e., the proportion or percent of area covered by each class), for each unit in a spatial partition of the region mapped. We derive design-based estimators of mean deviation (MD), mean absolute deviation (MAD), ...
Borgermans, Liesbeth; Goderis, Geert; Broeke, Carine Van Den; Mathieu, Chantal; Aertgeerts, Bert; Verbeke, Geert; Carbonez, An; Ivanova, Anna; Grol, Richard; Heyrman, Jan
2008-01-01
Background Most quality improvement programs in diabetes care incorporate aspects of clinician education, performance feedback, patient education, care management, and diabetes care teams to support primary care physicians. Few studies have applied all of these dimensions to address clinical inertia. Aim To evaluate interventions to improve adherence to evidence-based guidelines for diabetes and reduce clinical inertia in primary care physicians. Design Two-arm cluster randomized controlled trial. Participants Primary care physicians in Belgium. Interventions Primary care physicians will be randomly allocated to 'Usual' (UQIP) or 'Advanced' (AQIP) Quality Improvement Programs. Physicians in the UQIP will receive interventions addressing the main physician, patient, and office system factors that contribute to clinical inertia. Physicians in the AQIP will receive additional interventions that focus on sustainable behavior changes in patients and providers. Outcomes Primary endpoints are the proportions of patients within targets for three clinical outcomes: 1) glycosylated hemoglobin < 7%; 2) systolic blood pressure ≤ 130 mmHg; and 3) low density lipoprotein/cholesterol < 100 mg/dl. Secondary endpoints are individual improvements in 12 validated parameters: glycosylated hemoglobin, low and high density lipoprotein/cholesterol, total cholesterol, systolic blood pressure, diastolic blood pressure, weight, physical exercise, healthy diet, smoking status, and statin and anti-platelet therapy. Primary and secondary analysis Statistical analyses will be performed using an intent-to-treat approach with a multilevel model. Linear and generalized linear mixed models will be used to account for the clustered nature of the data, i.e., patients clustered within primary care physicians, and repeated assessments clustered within patients. To compare patient characteristics at baseline and between the intervention arms, the generalized estimating equations (GEE) approach
High, F. W.; Stalder, B.; Song, J.; Ade, P. A. R.; Aird, K. A.; Allam, S. S.; Buckley-Geer, E. J.; Armstrong, R.; Barkhouse, W. A.; Benson, B. A.; Bertin, E.; Bhattacharya, S.; Bleem, L. E.; Carlstrom, J. E.; Chang, C. L.; Crawford, T. M.; Crites, A. T.; Brodwin, M.; Challis, P.; De Haan, T.
2010-11-10
We present redshifts and optical richness properties of 21 galaxy clusters uniformly selected by their Sunyaev-Zel'dovich (SZ) signature. These clusters, plus an additional, unconfirmed candidate, were detected in a 178 deg^2 area surveyed by the South Pole Telescope (SPT) in 2008. Using griz imaging from the Blanco Cosmology Survey and from pointed Magellan telescope observations, as well as spectroscopy using Magellan facilities, we confirm the existence of clustered red-sequence galaxies, report red-sequence photometric redshifts, present spectroscopic redshifts for a subsample, and derive R_200 radii and M_200 masses from optical richness. The clusters span redshifts from 0.15 to greater than 1, with a median redshift of 0.74; three clusters are estimated to be at z > 1. Redshifts inferred from mean red-sequence colors exhibit 2% rms scatter in σ_z/(1 + z) with respect to the spectroscopic subsample for z < 1. We show that the M_200 cluster masses derived from optical richness correlate with masses derived from SPT data and agree with previously derived scaling relations to within the uncertainties. Optical and infrared imaging is an efficient means of cluster identification and redshift estimation in large SZ surveys, and exploiting the same data for richness measurements, as we have done, will be useful for constraining cluster masses and radii for large samples in cosmological analysis.
Jonsen, Ian
2016-01-01
State-space models provide a powerful way to scale up inference of movement behaviours from individuals to populations when the inference is made across multiple individuals. Here, I show how a joint estimation approach that assumes individuals share identical movement parameters can lead to improved inference of behavioural states associated with different movement processes. I use simulated movement paths with known behavioural states to compare estimation error between nonhierarchical and joint estimation formulations of an otherwise identical state-space model. Behavioural state estimation error was strongly affected by the degree of similarity between movement patterns characterising the behavioural states, with less error when movements were strongly dissimilar between states. The joint estimation model improved behavioural state estimation relative to the nonhierarchical model for simulated data with heavy-tailed Argos location errors. When applied to Argos telemetry datasets from 10 Weddell seals, the nonhierarchical model estimated highly uncertain behavioural state switching probabilities for most individuals whereas the joint estimation model yielded substantially less uncertainty. The joint estimation model better resolved the behavioural state sequences across all seals. Hierarchical or joint estimation models should be the preferred choice for estimating behavioural states from animal movement data, especially when location data are error-prone. PMID:26853261
Evaluation of an intervention to improve blood culture practices: a cluster randomised trial.
Pavese, P; Maillet, M; Vitrat-Hincky, V; Recule, C; Vittoz, J-P; Guyomard, A; Seigneurin, A; François, P
2014-12-01
This study aimed to evaluate an intervention to improve blood culture practices. A cluster randomised trial in two parallel groups was performed at the Grenoble University Hospital, France. In October 2009, the results of a practices audit and the guidelines for the optimal use of blood cultures were disseminated to clinical departments. We compared two types of information dissemination: simple presentation or presentation associated with an infectious diseases (ID) specialist intervention. The principal endpoint was blood culture performance measured by the rate of patients having one positive blood culture and the rate of positive blood cultures. The cases of 130 patients in the "ID" group and 119 patients in the "simple presentation" group were audited during the second audit in April 2010. The rate of patients with one positive blood culture increased in both groups (13.62 % vs 9.89 % for the ID group, p = 0.002, 15.90 % vs 13.47 % for the simple presentation group, p = 0.009). The rate of positive blood cultures improved in both groups (6.68 % vs 5.96 % for the ID group, p = 0.003, 6.52 % vs 6.21 % for the simple presentation group, p = 0.017). The blood culture indication was significantly less often specified in the request form in the simple presentation group, while it remained stable in the ID group (p = 0.04). The rate of positive blood cultures and the rate of patients having one positive blood culture improved in both groups. The ID specialist intervention did not have more of an impact on practices than a simple presentation of audit feedback and guidelines.
NASA Astrophysics Data System (ADS)
Troiani, Francesco; Piacentini, Daniela; Della Seta, Marta
2016-04-01
analysis conducted on 52 clusters of high and very high Gi* values indicates that mass movement of slope material is the dominant process producing over-steepened long-profiles along connected streams, whereas litho-structure accounts for the main anomalies along disconnected streams. Tectonic structures generally give rise to the largest clusters. Our results demonstrate that SL-HCA maps have the same potential as lithologically-filtered SL maps for detecting knickzones due to hillslope processes and/or tectonic structures. The reduced-complexity model derived from the SL-HCA approach greatly improves the readability of the morphometric outcomes, and thus the interpretation at a regional scale of the geological-geomorphological meaning of over-steepened segments of long-profiles. SL-HCA maps are useful for investigating and better interpreting knickzones within regions poorly covered by geological data and where field surveys are difficult to perform.
An Investigation of Methods for Improving Estimation of Test Score Distributions.
ERIC Educational Resources Information Center
Hanson, Bradley A.
Three methods of estimating test score distributions that may improve on using the observed frequencies (OBFs) as estimates of a population test score distribution are considered: the kernel method (KM); the polynomial method (PM); and the four-parameter beta binomial method (FPBBM). The assumption each method makes about the smoothness of the…
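The kernel method (KM) mentioned above smooths the observed frequencies rather than using them directly. A minimal sketch with SciPy's Gaussian KDE on hypothetical integer test scores (all data made up; the paper's comparison also covers polynomial and beta-binomial smoothing, not shown):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
# Hypothetical sample of integer test scores out of 40.
scores = np.clip(rng.normal(25, 6, size=200).round(), 0, 40)

# Observed relative frequencies: the raw estimate of the score distribution.
observed = np.bincount(scores.astype(int), minlength=41) / scores.size

# Kernel-smoothed estimate evaluated at each possible score, renormalised
# so it sums to one over the score range.
kde = gaussian_kde(scores)
smoothed = kde(np.arange(41))
smoothed /= smoothed.sum()
print(round(smoothed.sum(), 6))
```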
Using Local Matching to Improve Estimates of Program Impact: Evidence from Project STAR
ERIC Educational Resources Information Center
Jones, Nathan; Steiner, Peter; Cook, Tom
2011-01-01
In this study the authors test whether matching using intact local groups improves causal estimates over those produced using propensity score matching at the student level. Like the recent analysis of Wilde and Hollister (2007), they draw on data from Project STAR to estimate the effect of small class sizes on student achievement. They propose a…
"Battleship Numberline": A Digital Game for Improving Estimation Accuracy on Fraction Number Lines
ERIC Educational Resources Information Center
Lomas, Derek; Ching, Dixie; Stampfer, Eliane; Sandoval, Melanie; Koedinger, Ken
2011-01-01
Given the strong relationship between number line estimation accuracy and math achievement, might a computer-based number line game help improve math achievement? In one study by Rittle-Johnson, Siegler and Alibali (2001), a simple digital game called "Catch the Monster" provided practice in estimating the location of decimals on a number line.…
Systems analysis and improvement to optimize pMTCT (SAIA): a cluster randomized trial
2014-01-01
Background Despite significant increases in global health investment and the availability of low-cost, efficacious interventions to prevent mother-to-child HIV transmission (pMTCT) in low- and middle-income countries with high HIV burden, the translation of scientific advances into effective delivery strategies has been slow, uneven and incomplete. As a result, pediatric HIV infection remains largely uncontrolled. A five-step, facility-level systems analysis and improvement intervention (SAIA) was designed to maximize effectiveness of pMTCT service provision by improving understanding of inefficiencies (step one: cascade analysis), guiding identification and prioritization of low-cost workflow modifications (step two: value stream mapping), and iteratively testing and redesigning these modifications (steps three through five). This protocol describes the SAIA intervention and methods to evaluate the intervention’s impact on reducing drop-offs along the pMTCT cascade. Methods This study employs a two-arm, longitudinal cluster randomized trial design. The unit of randomization is the health facility. A total of 90 facilities were identified in Côte d’Ivoire, Kenya and Mozambique (30 per country). A subset was randomly selected and assigned to intervention and comparison arms, stratified by country and service volume, resulting in 18 intervention and 18 comparison facilities across all three countries, with six intervention and six comparison facilities per country. The SAIA intervention will be implemented for six months in the 18 intervention facilities. Primary trial outcomes are designed to assess improvements in the pMTCT service cascade, and include the percentage of pregnant women being tested for HIV at the first antenatal care visit, the percentage of HIV-infected pregnant women receiving adequate prophylaxis or combination antiretroviral therapy in pregnancy, and the percentage of newborns exposed to HIV in pregnancy receiving an HIV diagnosis eight
Dangi, Mohan B; Urynowicz, Michael A; Gerow, Kenneth G; Thapa, Resham B
2008-12-01
Relatively few studies have been performed to characterize municipal solid waste (MSW) at the household level. This is due in part to the difficulties involved in collecting the data and selecting an appropriate statistical sample size. The previous studies identified in this paper used statistical tools appropriate for analysing data collected at a material recovery facility or landfill site. This study demonstrates a statistically sound and efficient approach for characterizing MSW at the household level. Moreover, a household approach also allows consideration of socio-economic conditions, level of waste generation, geography, and demography. The study used two-stage cluster sampling within strata in Kathmandu Metropolitan City (KMC) to measure MSW for 2 weeks. In KMC, the average household solid waste generation was 161.2 g capita^-1 day^-1, with an average generation rate between 137.7 and 184.6 g capita^-1 day^-1 for a 95% confidence interval and a 14.5% relative margin of error. The results show a positive relation between income and waste production rate. Organic waste made up the largest portion of MSW, and hazardous waste the smallest. Sample size considerations suggest that 273 households are required in KMC to attain a 10% relative margin of error with a 95% confidence interval.
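The reported figures are internally consistent, and the sample-size reasoning can be checked in a few lines. The current household count of 130 below is an assumption for illustration, and the simple 1/sqrt(n) scaling ignores the design effect of the two-stage cluster design:

```python
# Numbers from the abstract: mean 161.2 g/capita/day, 95% CI (137.7, 184.6).
mean = 161.2
lo, hi = 137.7, 184.6

margin = (hi - lo) / 2.0
rel_margin = margin / mean          # matches the reported 14.5%

# For a fixed design, the margin shrinks roughly with sqrt(n), so the
# sample size needed for a target relative margin scales as (current/target)^2.
# n_current = 130 is a hypothetical value, not stated in the abstract.
n_current = 130
target = 0.10
n_required = n_current * (rel_margin / target) ** 2
print(round(rel_margin, 3), round(n_required))
```

The scaled sample size comes out near the 273 households the abstract reports as necessary for a 10% relative margin of error.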
Using geocoded survey data to improve the accuracy of multilevel small area synthetic estimates.
Taylor, Joanna; Moon, Graham; Twigg, Liz
2016-03-01
This paper examines the secondary data requirements for multilevel small area synthetic estimation (ML-SASE). This research method uses secondary survey data sets as source data for statistical models. The parameters of these models are used to generate data for small areas. The paper assesses the impact of knowing the geographical location of survey respondents on the accuracy of estimates, moving beyond debating the generic merits of geocoded social survey datasets to examine quantitatively the hypothesis that knowing the approximate location of respondents can improve the accuracy of the resultant estimates. Four sets of synthetic estimates are generated to predict expected levels of limiting long term illnesses using different levels of knowledge about respondent location. The estimates were compared to comprehensive census data on limiting long term illness (LLTI). Estimates based on fully geocoded data were more accurate than estimates based on data that did not include geocodes. PMID:26857175
ERIC Educational Resources Information Center
Maskiewicz, April Cordero; Griscom, Heather Peckham; Welch, Nicole Turrill
2012-01-01
In this study, we used targeted active-learning activities to help students improve their ways of reasoning about carbon flow in ecosystems. The results of a validated ecology conceptual inventory (diagnostic question clusters [DQCs]) provided us with information about students' understanding of and reasoning about transformation of inorganic and…
Improved performance due to selective passivation of nitrogen clusters in GaInNAs solar cells
NASA Astrophysics Data System (ADS)
Fukuda, Miwa; Whiteside, Vincent R.; Al Khalfioui, Mohamed; Leroux, Mathieu; Hossain, Khalid; Sellers, Ian R.
2015-03-01
While GaInNAs has the potential to serve as a fourth junction in multi-junction solar cells, it has proved difficult to incorporate due to the low solubility of nitrogen in these materials. Specifically, mid-gap states attributed to nitrogen clusters have proved prohibitive for practical implementation of these systems. Here, we present the selective passivation of nitrogen impurities using a UV-activated hydrogenation process, which enables the removal of defects while retaining substitutional nitrogen. Temperature-dependent photoluminescence measurements of the intrinsic region of a GaInNAs p-i-n solar cell show a classic "s-shape" associated with localization prior to hydrogenation, while after hydrogenation no sign of the "s-shape" is evident. This passivation of nitrogen centers is reflected in the improved performance of solar cell structures relative to reference, unpassivated devices, presenting a potential route to practical implementation of GaInNAs solar cells. The authors acknowledge support through the Oklahoma Center for the Advancement of Science and Technology under Oklahoma Applied Research Support Grant No. AR12.2-040.
2010-01-01
Background Improving nutrition knowledge among children may help them to make healthier food choices. The aim of this study was to assess the effectiveness and acceptability of a novel educational intervention to increase nutrition knowledge among primary school children. Methods We developed a card game 'Top Grub' and a 'healthy eating' curriculum for use in primary schools. Thirty-eight state primary schools comprising 2519 children in years 5 and 6 (aged 9-11 years) were recruited in a pragmatic cluster randomised controlled trial. The main outcome measures were change in nutrition knowledge scores, attitudes to healthy eating and acceptability of the intervention by children and teachers. Results Twelve intervention and 13 control schools (comprising 1133 children) completed the trial. The main reason for non-completion was time pressure of the school curriculum. Mean total nutrition knowledge score increased by 1.1 in intervention (baseline to follow-up: 28.3 to 29.2) and 0.3 in control schools (27.3 to 27.6). Total nutrition knowledge score at follow-up, adjusted for baseline score, deprivation, and school size, was higher in intervention than in control schools (mean difference = 1.1; 95% CI: 0.05 to 2.16; p = 0.042). At follow-up, more children in the intervention schools said they 'are currently eating a healthy diet' (39.6%) or 'would try to eat a healthy diet' (35.7%) than in control schools (34.4% and 31.7% respectively; chi-square test p < 0.001). Most children (75.5%) enjoyed playing the game and teachers considered it a useful resource. Conclusions The 'Top Grub' card game facilitated the enjoyable delivery of nutrition education in a sample of UK primary school age children. Further studies should determine whether improvements in nutrition knowledge are sustained and lead to changes in dietary behaviour. PMID:20219104
NASA Astrophysics Data System (ADS)
Yeck, William L.; Block, Lisa V.; Wood, Christopher K.; King, Vanessa M.
2015-01-01
The Paradox Valley Unit (PVU), a salinity control project in southwest Colorado, disposes of brine in a single deep injection well. Since the initiation of injection at the PVU in 1991, earthquakes have been repeatedly induced. PVU closely monitors all seismicity in the Paradox Valley region with a dense surface seismic network. A key factor for understanding the seismic hazard from PVU injection is the maximum magnitude earthquake that can be induced. The estimate of maximum magnitude of induced earthquakes is difficult to constrain as, unlike naturally occurring earthquakes, the maximum magnitude of induced earthquakes changes over time and is affected by injection parameters. We investigate temporal variations in maximum magnitudes of induced earthquakes at the PVU using two methods. First, we consider the relationship between the total cumulative injected volume and the history of observed largest earthquakes at the PVU. Second, we explore the relationship between maximum magnitude and the geometry of individual seismicity clusters. Under the assumptions that: (i) elevated pore pressures must be distributed over an entire fault surface to initiate rupture and (ii) the location of induced events delineates volumes of sufficiently high pore-pressure to induce rupture, we calculate the largest allowable vertical penny-shaped faults, and investigate the potential earthquake magnitudes represented by their rupture. Results from both the injection volume and geometrical methods suggest that the PVU has the potential to induce events up to roughly MW 5 in the region directly surrounding the well; however, the largest observed earthquake to date has been about a magnitude unit smaller than this predicted maximum. In the seismicity cluster surrounding the injection well, the maximum potential earthquake size estimated by these methods and the observed maximum magnitudes have remained steady since the mid-2000s. These observations suggest that either these methods
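The penny-shaped fault argument can be made concrete with the standard Eshelby circular-crack relation between fault radius, stress drop, and seismic moment. The 3 MPa stress drop and the fault radius below are typical illustrative values, not taken from the paper:

```python
import math

def mw_from_radius(radius_m, stress_drop_pa=3e6):
    """Moment magnitude for full rupture of a circular (penny-shaped) fault.

    Uses the Eshelby circular-crack relation M0 = (16/7) * dsigma * r^3
    and Mw = (2/3) * (log10(M0) - 9.1), with M0 in N*m. The 3 MPa stress
    drop is a commonly assumed value, not from the paper.
    """
    m0 = (16.0 / 7.0) * stress_drop_pa * radius_m ** 3
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# A fault radius of ~1.8 km yields roughly Mw 5, the paper's estimated
# maximum for the region surrounding the injection well.
print(round(mw_from_radius(1800.0), 2))
```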
Improved method for estimating tree crown diameter using high-resolution airborne data
NASA Astrophysics Data System (ADS)
Brovkina, Olga; Latypov, Iscander Sh.; Cienciala, Emil; Fabianek, Tomas
2016-04-01
Automatic mapping of tree crown size (radius, diameter, or width) from remote sensing can provide a major benefit for practical and scientific purposes, but requires the development of accurate methods. This study presents an improved method for estimating average tree crown diameter at the forest plot level from high-resolution airborne data. The improved method combines a window binarization procedure with a granulometric algorithm, and avoids the complicated crown delineation procedure currently used to estimate crown size. The systematic error in average crown diameter estimates is corrected with the improved method. The method is tested on coniferous, beech, and mixed-species forest plots using airborne images of various spatial resolutions. The absolute (quantitative) accuracy of the improved crown diameter estimates is comparable to or higher than that of current methods for both monospecies and mixed-species plots. The method's ability to produce good estimates of average crown diameter for monocultures and mixed species, to use remote sensing data of various spatial resolutions, and to operate in automatic mode suggests its applicability to a wide range of forest systems.
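A minimal sketch of the binarization-plus-granulometry idea on a synthetic image: morphological openings with growing disks are applied to a binarised canopy mask, and the size at which crown area collapses estimates the crown radius. The image and crown size are made up; the paper's windowed binarization of real airborne imagery is considerably more involved:

```python
import numpy as np
from scipy import ndimage

def disk(r):
    """Binary disk structuring element of radius r."""
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    return x * x + y * y <= r * r

# Synthetic binarised canopy image: one tree crown of radius 5 pixels.
img = np.zeros((40, 40), dtype=bool)
yy, xx = np.mgrid[:40, :40]
img |= (xx - 20) ** 2 + (yy - 20) ** 2 <= 5 ** 2

# Granulometry: canopy area remaining after opening with growing disks.
radii = range(1, 10)
areas = [ndimage.binary_opening(img, structure=disk(r)).sum() for r in radii]

# The opening removes the crown once the disk exceeds the crown radius,
# so the largest radius with surviving area estimates the crown radius.
crown_radius = max(r for r, a in zip(radii, areas) if a > 0)
print(crown_radius)
```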
Cluster-based differential features to improve detection accuracy of focal cortical dysplasia
NASA Astrophysics Data System (ADS)
Yang, Chin-Ann; Kaveh, Mostafa; Erickson, Bradley
2012-03-01
In this paper, a computer aided diagnosis (CAD) system for automatic detection of focal cortical dysplasia (FCD) on T1-weighted MRI is proposed. We introduce a new set of differential cluster-wise features comparing local differences of the candidate lesional area with its surroundings and other GM/WM boundaries. The local differences are measured in a distributional sense using χ2 distances. Finally, a Support Vector Machine (SVM) classifier is used to classify the clusters. Experimental results show an 88% lesion detection rate with only 1.67 false positive clusters per subject. Also, the results show that using additional differential features clearly outperforms the result using only absolute features.
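The cluster-wise differential features rest on χ2 distances between local feature distributions. A minimal sketch of the distance itself follows; the histograms are made up, and the full CAD pipeline with SVM classification is not reproduced:

```python
import numpy as np

def chi2_distance(p, q, eps=1e-10):
    """Chi-squared distance between two histograms (as distributions)."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p /= p.sum()
    q /= q.sum()
    return 0.5 * np.sum((p - q) ** 2 / (p + q + eps))

# Hypothetical intensity histograms of a candidate lesional area and
# its surroundings; a large distance flags a distributional difference.
candidate = np.array([5, 30, 40, 20, 5])
surround = np.array([20, 25, 25, 20, 10])

d = chi2_distance(candidate, surround)
print(d > 0)
```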
Improving the S-Shape Solar Radiation Estimation Method for Supporting Crop Models
Fodor, Nándor
2012-01-01
In line with the critical comments formulated in relation to the S-shape global solar radiation estimation method, the original formula was improved via a 5-step procedure. The improved method was compared to four reference methods on a large North American database. According to the investigated error indicators, the final 7-parameter S-shape method has the same or better estimation efficiency than the original formula. The improved formula is able to provide radiation estimates with a particularly low error pattern index (PIdoy), which is especially important for the usability of the estimated radiation values in crop models. Using site-specific calibration, the radiation estimates of the improved S-shape method caused an average relative error of 2.72 ± 1.02% (α = 0.05) in the calculated biomass. Using only readily available site-specific metadata, the radiation estimates caused less than 5% relative error in the crop model calculations when used for locations in the middle, plain territories of the USA. PMID:22645451
Improved initialisation of model-based clustering using Gaussian hierarchical partitions
Scrucca, Luca; Raftery, Adrian E.
2015-01-01
Initialisation of the EM algorithm in model-based clustering is often crucial. Different starting points in the parameter space can lead to different local maxima of the likelihood function, and thus to different clustering partitions. Among the several approaches available in the literature, model-based agglomerative hierarchical clustering is used to provide initial partitions in the popular mclust R package. This choice is computationally convenient and often yields good clustering partitions. However, in certain circumstances, poor initial partitions may cause the EM algorithm to converge to a local maximum of the likelihood function. We propose several simple and fast refinements based on data transformations and illustrate them through data examples. PMID:26949421
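The initialisation strategy can be sketched with scikit-learn, using Ward agglomeration as a stand-in for mclust's model-based hierarchical clustering (the 2-D data are simulated for illustration):

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Two well-separated hypothetical clusters in 2-D.
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(6, 1, (100, 2))])

# Step 1: a hierarchical partition supplies the initial assignments
# (Ward linkage here stands in for mclust's model-based agglomeration).
labels0 = AgglomerativeClustering(n_clusters=2).fit_predict(X)
means0 = np.vstack([X[labels0 == k].mean(axis=0) for k in range(2)])

# Step 2: EM for the Gaussian mixture starts from those partition means
# instead of a random initialisation.
gmm = GaussianMixture(n_components=2, means_init=means0, random_state=0).fit(X)
print(gmm.converged_)
```

The refinements the paper proposes (data transformations before the agglomerative step) would slot in before Step 1.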
NASA Astrophysics Data System (ADS)
Farsadnia, Farhad; Ghahreman, Bijan
2016-04-01
Hydrologic homogeneous group identification is considered both fundamental and applied research in hydrology. Clustering methods are among the conventional methods for assessing hydrologically homogeneous regions. Recently, the self-organizing feature map (SOM) method has been applied in some studies. However, the main problem with this method is interpreting its output map, so the SOM is often used as input to other clustering algorithms. The aim of this study is to apply a two-level self-organizing feature map and Ward hierarchical clustering method to determine the hydrologically homogeneous regions in the North and Razavi Khorasan provinces. First, principal component analysis was used to reduce the dimension of the SOM input matrix; the SOM was then used to form a two-dimensional feature map. To determine homogeneous regions for flood frequency analysis, the SOM output nodes were used as input to the Ward method. Generally, the regions identified by clustering algorithms are not statistically homogeneous, so they have to be adjusted to improve their homogeneity. After the regions were adjusted using L-moment homogeneity tests, five hydrologically homogeneous regions were identified. Finally, the adjusted regions were created by a two-level SOM, and the best regional distribution function and associated parameters were selected by the L-moment approach. The results showed that the combination of self-organizing maps and Ward hierarchical clustering with principal components as input is more effective at identifying hydrologically homogeneous regions than the hierarchical method with principal components or standardized inputs.
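A sketch of the two-level idea: a first-level vector quantisation produces prototypes, and Ward hierarchical clustering then groups the prototypes into regions. K-means prototypes stand in here for the SOM nodes (no SOM library is assumed), and the catchment attributes are simulated:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
# Hypothetical catchment attributes (e.g. area, rainfall, slope): 3 regimes.
X = np.vstack([rng.normal(c, 0.5, (60, 3)) for c in (0.0, 3.0, 6.0)])

# Level 1: vector quantisation into prototypes (K-means stands in for
# the SOM nodes used in the paper).
km = KMeans(n_clusters=12, n_init=10, random_state=0).fit(X)

# Level 2: Ward hierarchical clustering of the prototypes, cut at 3 regions.
proto_labels = fcluster(linkage(km.cluster_centers_, method="ward"),
                        t=3, criterion="maxclust")

# Map each site to a region through its prototype.
regions = proto_labels[km.labels_]
print(len(np.unique(regions)))
```

In the paper the resulting regions would then be tested and adjusted for homogeneity with L-moment statistics, a step omitted here.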
Xia, Peng; Shimozato, Yuki; Ito, Yasunori; Tahara, Tatsuki; Kakue, Takashi; Awatsuji, Yasuhiro; Nishio, Kenzo; Ura, Shogo; Kubota, Toshihiro; Matoba, Osamu
2011-12-01
We propose a color digital holography method that uses a spectral estimation technique to improve the color reproduction of objects. In conventional color digital holography, there is insufficient spectral information in the holograms, and the color of the reconstructed images depends only on the reflectances at the three discrete wavelengths used in recording the holograms. The color-composite image of the three reconstructed images is therefore not accurate in its color reproduction. In our proposed method, we apply a spectral estimation technique that has been reported in multispectral imaging: the continuous spectrum of the object is estimated, and the color reproduction is thereby improved. The effectiveness of the proposed method was confirmed by a numerical simulation and an experiment, in which the average color differences decreased from 35.81 to 7.88 and from 43.60 to 25.28, respectively. PMID:22193005
NASA Astrophysics Data System (ADS)
Chen, Y.; Ho, C.; Chang, L.
2011-12-01
In recent decades, climate change driven by global warming has increased the frequency of extreme hydrological events. Water supply shortages caused by extreme events create great challenges for water resource management. To evaluate future climate variations, general circulation models (GCMs) are the most widely used tools; they project possible weather conditions under the CO2 emission scenarios defined by the IPCC. Because GCMs model the entire earth, their grid sizes are much larger than the basin scale. To bridge this gap, statistical downscaling techniques transform regional-scale weather factors into basin-scale precipitation. Statistical downscaling techniques can be divided into three categories: transfer functions, weather generators, and weather typing. The first two describe the relationships between weather factors and precipitation using, respectively, deterministic algorithms (such as linear or nonlinear regression and ANNs) and stochastic approaches (such as Markov chain theory and statistical distributions). Weather typing clusters the weather factors, which are high-dimensional continuous variables, into a limited number of discrete states (weather types). In this study, the proposed downscaling model integrates weather typing, using the K-means clustering algorithm, with a weather generator, using kernel density estimation. The study area is the Shihmen basin in northern Taiwan. The research process contains two steps: a calibration step and a synthesis step. Three sub-steps were used in the calibration step. First, weather factors such as pressure, humidity, and wind speed obtained from NCEP, together with precipitation observed at rainfall stations, were collected for downscaling. Second, K-means clustering grouped the weather factors into four weather types. Third, the Markov chain transition matrices and the
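The weather-typing and weather-generator components described above can be sketched as follows. The weather factors and precipitation series are simulated stand-ins for the NCEP and station data, and the Markov-chain transition component is omitted:

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
# Hypothetical daily weather factors (pressure, humidity, wind), standardised.
factors = rng.normal(size=(500, 3))
# Hypothetical precipitation with some dependence on the first factor.
precip = np.maximum(0.0, 5.0 + 4.0 * factors[:, 0] + rng.normal(size=500))

# Step 1 (weather typing): K-means groups the continuous weather factors
# into a small number of discrete weather types.
types = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(factors)

# Step 2 (weather generator): one kernel density estimate of precipitation
# per weather type; sampling from it synthesises basin-scale rainfall.
generators = {t: gaussian_kde(precip[types == t]) for t in range(4)}
sample = generators[0].resample(10)
print(sample.shape)
```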
Improving power in small-sample longitudinal studies when using generalized estimating equations.
Westgate, Philip M; Burchett, Woodrow W
2016-09-20
Generalized estimating equations (GEE) are often used for the marginal analysis of longitudinal data. Although much work has been performed to improve the validity of GEE for the analysis of data arising from small-sample studies, little attention has been given to power in such settings. Therefore, we propose a valid GEE approach to improve power in small-sample longitudinal study settings in which the temporal spacing of outcomes is the same for each subject. Specifically, we use a modified empirical sandwich covariance matrix estimator within correlation structure selection criteria and test statistics. Use of this estimator can improve the accuracy of selection criteria and increase the degrees of freedom to be used for inference. The resulting impacts on power are demonstrated via a simulation study and application example. Copyright © 2016 John Wiley & Sons, Ltd. PMID:27090375
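The empirical sandwich covariance at the heart of GEE inference can be sketched for an independence working correlation, where point estimation reduces to least squares. The modified, bias-corrected estimator the paper uses is a refinement of this standard form; the longitudinal data below are simulated:

```python
import numpy as np

rng = np.random.default_rng(5)
n_subjects, n_times = 20, 4          # a small-sample longitudinal layout

# Simulate correlated repeated measures via a random subject intercept.
subj = np.repeat(np.arange(n_subjects), n_times)
x = rng.normal(size=n_subjects * n_times)
y = (1.0 + 2.0 * x
     + np.repeat(rng.normal(0, 1, n_subjects), n_times)
     + rng.normal(size=n_subjects * n_times))

X = np.column_stack([np.ones_like(x), x])

# With an independence working correlation, the GEE point estimate
# reduces to ordinary least squares.
beta = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta

# Empirical sandwich covariance: bread^-1 @ meat @ bread^-1, where the
# "meat" sums score contributions within each subject (cluster).
bread_inv = np.linalg.inv(X.T @ X)
meat = np.zeros((2, 2))
for s in range(n_subjects):
    Xi, ri = X[subj == s], resid[subj == s]
    g = Xi.T @ ri
    meat += np.outer(g, g)
cov = bread_inv @ meat @ bread_inv
se = np.sqrt(np.diag(cov))
print(se.shape)
```

With few clusters this estimator is downward-biased, which motivates the modified versions and adjusted degrees of freedom the paper studies.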
Comerford, Julia M.; Moustakas, Leonidas A.; Natarajan, Priyamvada
2010-05-20
Scaling relations of observed galaxy cluster properties are useful tools for constraining cosmological parameters as well as cluster formation histories. One of the key cosmological parameters, σ8, is constrained using observed clusters of galaxies, although current estimates of σ8 from the scaling relations of dynamically relaxed galaxy clusters are limited by the large scatter in the observed cluster mass-temperature (M-T) relation. With a sample of eight strong lensing clusters at 0.3 < z < 0.8, we find that the observed cluster concentration-mass relation can be used to reduce the M-T scatter by a factor of 6. Typically only relaxed clusters are used to estimate σ8, but combining the cluster concentration-mass relation with the M-T relation enables the inclusion of unrelaxed clusters as well. Thus, the resultant gains in the accuracy of σ8 measurements from clusters are twofold: the errors on σ8 are reduced and the cluster sample size is increased. Therefore, the statistics on σ8 determination from clusters are greatly improved by the inclusion of unrelaxed clusters. Exploring cluster scaling relations further, we find that the correlation between brightest cluster galaxy (BCG) luminosity and cluster mass offers insight into the assembly histories of clusters. We find preliminary evidence for a steeper BCG luminosity-cluster mass relation for strong lensing clusters than the general cluster population, hinting that strong lensing clusters may have had more active merging histories.
Tarone, Aaron M; Foran, David R
2011-01-01
Forensic entomologists use size and developmental stage to estimate blow fly age, and from those, a postmortem interval. Since such estimates are generally accurate but often lack precision, particularly in the older developmental stages, alternative aging methods would be advantageous. Presented here is a means of incorporating developmentally regulated gene expression levels into traditional stage and size data, with a goal of more precisely estimating developmental age of immature Lucilia sericata. Generalized additive models of development showed improved statistical support compared to models that did not include gene expression data, resulting in an increase in estimate precision, especially for postfeeding third instars and pupae. The models were then used to make blind estimates of development for 86 immature L. sericata raised on rat carcasses. Overall, inclusion of gene expression data resulted in increased precision in aging blow flies.
Improving quality of sample entropy estimation for continuous distribution probability functions
NASA Astrophysics Data System (ADS)
Miśkiewicz, Janusz
2016-05-01
Entropy is one of the key parameters characterizing the state of a system in statistical physics. Although entropy is defined for systems described by both discrete and continuous probability distribution functions (PDFs), in numerous applications the sample entropy is estimated from a histogram, which in effect means that the continuous PDF is represented by a set of probabilities. Such a procedure may lead to ambiguities and even misinterpretation of the results. In this paper, two possible general algorithms based on continuous PDF estimation are discussed in application to the Shannon and Tsallis entropies. It is shown that the proposed algorithms may improve entropy estimation, particularly in the case of small data sets.
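A minimal sketch of the idea: estimate the differential (Shannon) entropy from a continuous kernel-density estimate of the PDF rather than from a histogram. The Gaussian kernel, Silverman's bandwidth rule, and the resubstitution estimator used here are common defaults assumed for illustration, not necessarily the algorithms of the paper.

```python
# Differential entropy from a Gaussian kernel density estimate:
# H ≈ -mean(log f_hat(x_i)), the "resubstitution" estimate.
import numpy as np

def kde_entropy(x):
    n = len(x)
    h = 1.06 * x.std() * n ** (-1 / 5)   # Silverman's rule-of-thumb bandwidth
    diffs = (x[:, None] - x[None, :]) / h
    f_hat = np.exp(-0.5 * diffs ** 2).mean(axis=1) / (h * np.sqrt(2 * np.pi))
    return -np.log(f_hat).mean()

rng = np.random.default_rng(0)
x = rng.normal(size=2000)
H = kde_entropy(x)
# for N(0,1) the true differential entropy is 0.5*ln(2*pi*e) ≈ 1.4189
```

A histogram-based estimate of the same quantity depends strongly on the bin width; the continuous estimate avoids that discretization choice (at the cost of a bandwidth choice).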
NASA Astrophysics Data System (ADS)
Manukyan, N.; Eppstein, M. J.; Rizzo, D. M.
2011-12-01
A Kohonen self-organizing map (SOM) is a type of unsupervised artificial neural network that produces a self-organized projection of high-dimensional data onto a low-dimensional feature map, wherein vector similarity is implicitly translated into topological closeness, enabling clusters to be identified. In recently published work [1], 209 microbial variables from 22 monitoring wells around the leaking Schuyler Falls Landfill in Clinton, NY [2] were analyzed using a multi-stage non-parametric process to explore how microbial communities may act as indicators for the gradient of contamination in groundwater. The final stage of that analysis used a weighted SOM to identify microbial signatures in this high-dimensional data set that correspond to clean, fringe, and contaminated soils. The resulting clusters were visualized with the standard unified distance matrix (U-matrix). However, while the results of this analysis were very promising, the visualized boundaries between clusters in the SOM were indistinct and required manual and somewhat arbitrary identification. In this contribution, we introduce (i) a new cluster reinforcement (CR) phase, run subsequent to traditional SOM training, for automatic sharpening of cluster boundaries, and (ii) a new boundary matrix (B-matrix) approach for visualization of the resulting cluster boundaries. The CR phase differs from standard SOM training in several ways, most notably by using a feature-based neighborhood function rather than a topologically based one. In contrast to the U-matrix, the B-matrix can be directly superimposed on heat maps of the individual features (as output by the SOM) using grid lines whose thickness corresponds to inter-cluster distances. By thresholding the displayed lines, one obtains hierarchical control of the visual level of cluster resolution. We first illustrate the advantages of these methods on a small synthetic test case, and then apply them to the Schuyler Falls landfill
Estimation of root zone storage capacity at the catchment scale using improved Mass Curve Technique
NASA Astrophysics Data System (ADS)
Zhao, Jie; Xu, Zongxue; Singh, Vijay P.
2016-09-01
The root zone storage capacity (Sr) greatly influences runoff generation, soil water movement, and vegetation growth, and is hence an important variable for ecological and hydrological modelling. However, due to the great heterogeneity in soil texture and structure, there is at present no effective approach to monitor or estimate Sr at the catchment scale. To fill this gap, in this study the Mass Curve Technique (MCT) was improved by incorporating a snowmelt module for the estimation of Sr at the catchment scale in different climatic regions. The "range of perturbation" method was also used to generate different scenarios for determining the sensitivity of the improved MCT-derived Sr to its influencing factors, after evaluating the plausibility of Sr derived from the improved MCT. The results show that: (i) Sr estimates of different catchments varied greatly, from ∼10 mm to ∼200 mm, with changes in climatic conditions and underlying surface characteristics. (ii) The improved MCT is a simple but powerful tool for Sr estimation in different climatic regions of China, and incorporating more catchments into Sr comparisons can further improve our knowledge of the variability of Sr. (iii) Variation of Sr values is an integrated consequence of variations in rainfall, snowmelt water, and evapotranspiration; Sr values are most sensitive to variations in ecosystem evapotranspiration. Moreover, Sr values with a longer return period are more stable than those with a shorter return period when affected by fluctuations in the influencing factors.
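The core of a mass-curve-style Sr estimate is a running water deficit: Sr is the largest cumulative excess of evaporative demand over water supply that the root zone must bridge. The sketch below shows only that kernel; the paper's snowmelt module and return-period analysis are omitted, and the daily series is invented for illustration.

```python
def root_zone_storage(precip, et_demand):
    """Maximum cumulative moisture deficit: each step the deficit grows by
    (ET demand - precipitation), floored at zero; Sr is the largest deficit
    the root zone must buffer over the record."""
    deficit, sr = 0.0, 0.0
    for p, e in zip(precip, et_demand):
        deficit = max(0.0, deficit + e - p)
        sr = max(sr, deficit)
    return sr

# illustrative daily series (mm): a wet day, a dry spell, then a wet day
sr = root_zone_storage([5, 0, 0, 0, 10], [2, 2, 2, 2, 2])
# the dry spell accumulates a 6 mm deficit, so Sr = 6 mm
```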
Use of spot measurements to improve the estimation of low streamflow statistics.
NASA Astrophysics Data System (ADS)
Kroll, C. N.; Stagnitta, T. J.; Vogel, R. M.
2015-12-01
Despite substantial efforts to improve the modeling and prediction of low streamflows at ungauged river sites, most models of low streamflow statistics produce estimators with large errors. Often this is because the hydrogeologic characteristics of a watershed, which can strongly impact low streamflows, are difficult to characterize. One solution is to take a nominal number of streamflow measurements at an ungauged site, either to estimate improved hydrogeologic indices or to correlate with concurrent streamflow measurements at a nearby gauged river site. Past results have indicated that baseflow correlation performs better than regional regression when 4 or more streamflow measurements are available, even when the regional regression models are augmented by improved hydrogeologic indices. Here we revisit this issue within the 19,800 square mile Apalachicola-Chattahoochee-Flint watershed, a USGS WaterSMART region spanning Georgia, southeastern Alabama, and northwestern Florida. This study area is of particular interest because numerous watershed modeling analyses have previously been performed using gauged river sites within this basin. Initial results indicate that baseflow correlation can produce improved estimators when spot measurements are available, but selection of an appropriate donor site is problematic, especially in regions with a small number of gauged river sites. Estimation of hydrogeologic indices does improve regional regression models, but these models are generally outperformed by baseflow correlation.
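Baseflow correlation, in its simplest form, regresses the log of the spot measurements at the ungauged site on concurrent log flows at a gauged donor site, then transfers the donor's low-flow statistic through the fitted line. The sketch below uses ordinary least squares and invented flow values; operational implementations use MOVE-type line fitting and carry uncertainty estimates, which are omitted here.

```python
# Minimal baseflow-correlation sketch: transfer a donor site's low-flow
# statistic to an ungauged site via a log-log regression on spot measurements.
import numpy as np

def baseflow_correlation(spot_ungauged, concurrent_gauged, donor_lowflow):
    x = np.log(concurrent_gauged)          # donor flows on spot-measurement days
    y = np.log(spot_ungauged)              # spot measurements at ungauged site
    b, a = np.polyfit(x, y, 1)             # ln(Q_u) = a + b * ln(Q_g)
    return np.exp(a + b * np.log(donor_lowflow))

# illustrative data: the ungauged site carries half the donor's flow
q_est = baseflow_correlation(
    spot_ungauged=[0.5, 1.0, 2.0, 4.0],
    concurrent_gauged=[1.0, 2.0, 4.0, 8.0],
    donor_lowflow=2.0,
)
```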
NASA Astrophysics Data System (ADS)
Welch, D.; Henden, A.; Bell, T.; Suen, C.; Fare, I.; Sills, A.
2015-12-01
(Abstract only) The variable stars of globular clusters have played, and continue to play, a significant role in our understanding of certain classes of variable stars. Since all stars associated with a cluster have the same age, metallicity, and distance, and usually very similar (if not identical) reddenings, such variables can provide uniquely powerful constraints on where certain types of pulsation behavior are excited. Advanced amateur astronomers are increasingly well positioned to provide long-term CCD monitoring of globular cluster variable stars but are hampered by a long history of poor or inaccessible finder charts and coordinates. Many variable-rich clusters have published photographic finder charts taken in relatively poor seeing with blue-sensitive photographic plates. While useful signal-to-noise ratios are relatively straightforward to achieve for RR Lyrae stars, Type 2 Cepheids, and red giant variables, correct identification remains a difficult issue, particularly when images are taken at V or longer wavelengths. We describe the project and report its progress using the OC61, TMO61, and SRO telescopes of AAVSOnet after the first year of image acquisition, and demonstrate several of the data products being developed for globular cluster variables.
Stimuli-responsive clustered nanoparticles for improved tumor penetration and therapeutic efficacy
Li, Hong-Jun; Du, Jin-Zhi; Du, Xiao-Jiao; Xu, Cong-Fei; Sun, Chun-Yang; Wang, Hong-Xia; Cao, Zhi-Ting; Yang, Xian-Zhu; Zhu, Yan-Hua; Nie, Shuming; Wang, Jun
2016-01-01
A principal goal of cancer nanomedicine is to deliver therapeutics effectively to cancer cells within solid tumors. However, there are a series of biological barriers that impede nanomedicine from reaching target cells. Here, we report a stimuli-responsive clustered nanoparticle to systematically overcome these multiple barriers by sequentially responding to the endogenous attributes of the tumor microenvironment. The smart polymeric clustered nanoparticle (iCluster) has an initial size of ∼100 nm, which is favorable for long blood circulation and high propensity of extravasation through tumor vascular fenestrations. Once iCluster accumulates at tumor sites, the intrinsic tumor extracellular acidity would trigger the discharge of platinum prodrug-conjugated poly(amidoamine) dendrimers (diameter ∼5 nm). Such a structural alteration greatly facilitates tumor penetration and cell internalization of the therapeutics. The internalized dendrimer prodrugs are further reduced intracellularly to release cisplatin to kill cancer cells. The superior in vivo antitumor activities of iCluster are validated in varying intractable tumor models including poorly permeable pancreatic cancer, drug-resistant cancer, and metastatic cancer, demonstrating its versatility and broad applicability. PMID:27035960
The Role of Satellite Imagery to Improve Pastureland Estimates in South America
NASA Astrophysics Data System (ADS)
Graesser, J.
2015-12-01
Agriculture has changed substantially across the globe over the past half century. While much work has been done to improve spatial-temporal estimates of agricultural change, we still know more about the extent of row-crop agriculture than about livestock-grazed land. The gap between cropland and pastureland estimates exists largely because it is challenging to distinguish natural from grazed grasslands from a remote sensing perspective. However, the impasse in pastureland estimation is set to break, with an increasing number of spaceborne sensors and freely available satellite data. The Landsat satellite archive in particular provides researchers with immense amounts of data to improve pastureland information. Here we focus on South America, where pastureland expansion has been scrutinized for the past few decades. We explore the challenges of estimating pastureland using temporal Landsat imagery and focus on key agricultural countries, regions, and ecosystems. We focus on the suggested shift of pastureland from the Argentine Pampas to northern Argentina, and on the mixing of small-scale and large-scale ranching in eastern Paraguay and how it could impact the Chaco forest to the west. Further, the Beni Savannahs of northern Bolivia and the Colombian Llanos, both grassland and savannah regions historically used for livestock grazing, have been hinted at as future areas for cropland expansion. There are certainly environmental concerns with pastureland expansion into forests; but what are the environmental implications when well-managed pasture systems are converted to intensive soybean or palm oil plantations? Tropical grazed grasslands are important habitats for biodiversity, and pasturelands can mitigate soil erosion when well managed. Thus, we must improve estimates of grazed land before we can make informed policy and conservation decisions. This talk presents insights into pastureland estimates in South America and discusses the feasibility of improving current
Improving propensity score estimators' robustness to model misspecification using super learner.
Pirracchio, Romain; Petersen, Maya L; van der Laan, Mark
2015-01-15
The consistency of propensity score (PS) estimators relies on correct specification of the PS model. The PS is frequently estimated using main-effects logistic regression. However, the underlying model assumptions may not hold. Machine learning methods provide an alternative nonparametric approach to PS estimation. In this simulation study, we evaluated the benefit of using Super Learner (SL) for PS estimation. We created 1,000 simulated data sets (n = 500) under 4 different scenarios characterized by various degrees of deviance from the usual main-term logistic regression model for the true PS. We estimated the average treatment effect using PS matching and inverse probability of treatment weighting. The estimators' performance was evaluated in terms of PS prediction accuracy, covariate balance achieved, bias, standard error, coverage, and mean squared error. All methods exhibited adequate overall balancing properties, but in the case of model misspecification, SL performed better for highly unbalanced variables. The SL-based estimators were associated with the smallest bias in cases of severe model misspecification. Our results suggest that use of SL to estimate the PS can improve covariate balance and reduce bias in a meaningful manner in cases of serious model misspecification for treatment assignment.
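For context, the baseline that Super Learner replaces is a main-term logistic propensity score model. The sketch below fits such a model by plain gradient ascent on synthetic confounded data and computes the inverse-probability-of-treatment-weighted (IPTW) estimate of the average treatment effect; the data-generating process, sample size, and learning-rate settings are all illustrative assumptions, and Super Learner itself is not implemented.

```python
# Main-term logistic PS + IPTW on synthetic data with one confounder.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)                      # confounder
t = rng.uniform(size=n) < 1 / (1 + np.exp(-x))   # treatment depends on x
y = 2.0 * t + x + rng.normal(0, 0.1, n)     # true treatment effect = 2

# fit P(T=1|x) = sigmoid(w*x + b) by gradient ascent on the log-likelihood
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(w * x + b)))
    w += 0.5 * np.mean((t - p) * x)
    b += 0.5 * np.mean(t - p)

ps = 1 / (1 + np.exp(-(w * x + b)))
# IPTW estimate of the average treatment effect
ate = np.mean(t * y / ps) - np.mean((1 - t) * y / (1 - ps))
```

When the true PS is not logistic in main terms, this is exactly the misspecification scenario where the abstract reports Super Learner reduces bias.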
NASA Astrophysics Data System (ADS)
Kim, Kun-Woo; Lee, Sang-Wha
2015-09-01
Porous hematite clusters were prepared as anode materials for improved Li-ion batteries. First, poly-L-lysine (PLL)-linked Fe3O4 was facilely prepared via cross-linking between the positive amine groups of PLL and carboxylate-bound Fe3O4. Subsequent calcination transformed the PLL-linked Fe3O4 into porous hematite clusters (Fe2O3@PLL) consisting of spherical α-Fe2O3 particles. Compared with standard Fe2O3, Fe2O3@PLL exhibited improved electrochemical performance as an anode material. The discharge capacity of Fe2O3@PLL was retained at 814.7 mAh g-1 after 30 cycles, equivalent to 80.4% of the second discharge capacity, whereas standard Fe2O3 exhibited a retention capacity of 352.3 mAh g-1. The improved electrochemical performance of Fe2O3@PLL was mainly attributed to the porous hematite clusters' mesoporosity (20-40 nm), which facilitates ion transport, suggesting a useful guideline for the design of porous architectures with higher retention capacity.
Improved proper motion determinations for 15 open clusters based on the UCAC4 catalog
NASA Astrophysics Data System (ADS)
Kurtenkov, Alexander; Dimitrova, Nadezhda; Atanasov, Alexander; Aleksiev, Teodor D.
2016-07-01
The proper motions of 15 nearby (d < 1 kpc) open clusters (OCs) were recalculated using data from the UCAC4 catalog. Only evolved or main-sequence stars inside a certain radius from the center of each cluster were used. The results differ significantly from those presented by Dias et al. (2014), which could be explained by our different approach of taking field-star contamination into account. The present work aims to emphasize the importance of applying photometric criteria in the calculation of OC proper motions.
Subspace Leakage Analysis and Improved DOA Estimation With Small Sample Size
NASA Astrophysics Data System (ADS)
Shaghaghi, Mahdi; Vorobyov, Sergiy A.
2015-06-01
Classical methods of DOA estimation, such as the MUSIC algorithm, are based on estimating the signal and noise subspaces from the sample covariance matrix. For a small number of samples, such methods are exposed to performance breakdown, as the sample covariance matrix can deviate greatly from the true covariance matrix. In this paper, the problem of DOA estimation performance breakdown is investigated. We consider the structure of the sample covariance matrix and the dynamics of the root-MUSIC algorithm. The performance breakdown in the threshold region is associated with subspace leakage, where some portion of the true signal subspace resides in the estimated noise subspace. In this paper, the subspace leakage is theoretically derived. We also propose a two-step method which improves performance by modifying the sample covariance matrix such that the amount of subspace leakage is reduced. Furthermore, we introduce a phenomenon, termed root-swap, which occurs in the root-MUSIC algorithm in the low-sample-size region and degrades the performance of DOA estimation. A new method is then proposed to alleviate this problem. Numerical examples and simulation results are given for uncorrelated and correlated sources to illustrate the improvement achieved by the proposed methods. Moreover, the proposed algorithms are combined with the pseudo-noise resampling method to further improve performance.
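A minimal spectral MUSIC sketch for a uniform linear array shows the two objects the abstract reasons about, the sample covariance matrix and the estimated noise subspace. The array geometry, SNR, and snapshot count are illustrative assumptions; the paper's subspace-leakage correction and root-swap remedy are not implemented here.

```python
# Spectral MUSIC on a half-wavelength-spaced uniform linear array.
import numpy as np

def music_spectrum(X, n_sources, grid):
    """X: snapshots (n_sensors x n_snapshots); grid: candidate angles (rad)."""
    m = X.shape[0]
    R = X @ X.conj().T / X.shape[1]            # sample covariance matrix
    eigval, eigvec = np.linalg.eigh(R)         # eigenvalues in ascending order
    En = eigvec[:, : m - n_sources]            # estimated noise subspace
    k = np.arange(m)[:, None]
    A = np.exp(1j * np.pi * k * np.sin(grid)[None, :])  # steering vectors
    return 1.0 / np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)

rng = np.random.default_rng(0)
m, n_snap = 8, 200
theta = np.deg2rad(20.0)                       # true source direction
a = np.exp(1j * np.pi * np.arange(m) * np.sin(theta))
s = (rng.normal(size=n_snap) + 1j * rng.normal(size=n_snap)) / np.sqrt(2)
noise = 0.05 * (rng.normal(size=(m, n_snap)) + 1j * rng.normal(size=(m, n_snap)))
X = np.outer(a, s) + noise
grid = np.deg2rad(np.linspace(-90, 90, 361))
est = np.rad2deg(grid[np.argmax(music_spectrum(X, 1, grid))])
```

The small-sample breakdown the paper studies appears when n_snap shrinks toward m: the eigenvectors of R then mix signal and noise directions (subspace leakage) and the peak degrades.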
An Overdetermined System for Improved Autocorrelation Based Spectral Moment Estimator Performance
NASA Technical Reports Server (NTRS)
Keel, Byron M.
1996-01-01
The performance of an autocorrelation-based spectral variance estimator derived from a closed system is shown to improve through the application of additional autocorrelation lags in an overdetermined system. This improvement is greater in the narrowband spectrum region, where the information is spread over more lags of the autocorrelation function. The number of lags needed in the overdetermined system is a function of the spectral width, the number of terms in the series expansion, the number of samples used in estimating the autocorrelation function, and the signal-to-noise ratio. The overdetermined system provides robustness to the chosen variance estimator by expanding the region of spectral widths and signal-to-noise ratios over which the estimator can perform, as compared to the closed system.
Using Log-Linear Smoothing to Improve Small-Sample DIF Estimation
ERIC Educational Resources Information Center
Puhan, Gautam; Moses, Timothy P.; Yu, Lei; Dorans, Neil J.
2009-01-01
This study examined the extent to which log-linear smoothing could improve the accuracy of differential item functioning (DIF) estimates in small samples of examinees. Examinee responses from a certification test were analyzed using White examinees in the reference group and African American examinees in the focal group. Using a simulation…
NASA Technical Reports Server (NTRS)
Theis, S. W.; Blanchard, B. J.; Blanchard, A. J.
1984-01-01
Multisensor aircraft data were used to establish the potential of using the active microwave sensor response to compensate for roughness in the passive microwave sensor's response to soil moisture. Only bare fields were used. It is found that the L-band radiometer's capability to estimate soil moisture improves significantly when surface roughness is accounted for with the scatterometers.
NASA Technical Reports Server (NTRS)
Theis, S. W.; Blanchard, A. J.; Blanchard, B. J.
1986-01-01
Multisensor aircraft data were used to establish the potential of using the active microwave sensor response to compensate for roughness in the passive microwave sensor's response to soil moisture. Only bare fields were used. It is found that the L-band radiometer's capability to estimate soil moisture improves significantly when surface roughness is accounted for with the scatterometers.
Technology Transfer Automated Retrieval System (TEKTRAN)
An Ensemble Kalman Filter-based data assimilation framework that links a crop growth model with active and passive (AP) microwave models was developed to improve estimates of soil moisture (SM) and vegetation biomass over a growing season of soybean. Complementarities in AP observations were incorpo...
NASA Astrophysics Data System (ADS)
Wu, Wei-Huang; Tian, Yuan; Luo, Jie; Shao, Cheng-Gang; Xu, Jia-Hao; Wang, Dian-Hong
2016-09-01
In the measurement of the gravitational constant G with the angular acceleration method, the accurate estimation of the amplitude of the useful angular acceleration generated by the source masses depends on the effective subtraction of the spurious gravitational signal caused by room-fixed background masses. The gravitational background signal has a time-varying frequency and mainly consists of the prominent fundamental-frequency and second-harmonic components. We propose an improved correlation method to estimate the amplitudes of the prominent components of the gravitational background signal with high precision. The improved correlation method converts a sinusoidal signal with time-varying frequency into a standard sinusoidal signal by means of a stretch processing of time. Based on a Gaussian white noise model, the theoretical result shows that the uncertainty of the estimated amplitude is proportional to σ/√(NT), where σ and N are the standard deviation of the noise and the number of periods T of the useful signal, respectively.
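The essence of the correlation method, reparametrizing time so that a varying-frequency signal behaves like a standard sinusoid, can be sketched by correlating the signal against sin/cos of its known instantaneous phase. The chirp parameters, noise level, and amplitude below are illustrative, not the experiment's values.

```python
# Amplitude estimation by correlation against the known (time-varying) phase.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 20000)
phase = 2 * np.pi * (1.0 * t + 0.05 * t ** 2)   # slowly rising frequency
s = 3.0 * np.sin(phase) + rng.normal(0, 0.5, t.size)

# In the phase variable the signal is a pure sinusoid, so the quadrature
# correlations recover its amplitude regardless of the frequency drift.
c = 2 * np.mean(s * np.sin(phase))
d = 2 * np.mean(s * np.cos(phase))
amp = np.hypot(c, d)
# amp should be close to the true amplitude of 3.0
```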
Li, Yingsong; Hamamura, Masanori
2014-01-01
To make use of the sparsity property of broadband multipath wireless communication channels, we mathematically propose an l p -norm-constrained proportionate normalized least-mean-square (LP-PNLMS) sparse channel estimation algorithm. A general l p -norm is weighted by the gain matrix and is incorporated into the cost function of the proportionate normalized least-mean-square (PNLMS) algorithm. This integration is equivalent to adding a zero attractor to the iterations, by which the convergence speed and steady-state performance of the inactive taps are significantly improved. Our simulation results demonstrate that the proposed algorithm can effectively improve the estimation performance of the PNLMS-based algorithm for sparse channel estimation applications.
NASA Astrophysics Data System (ADS)
Kato, Takeyoshi; Suzuoki, Yasuo
The fluctuation of the total power output of clustered PV systems is smaller than that of a single PV system because of the time differences in power output fluctuation among PV systems at different locations. This effect, the so-called smoothing effect, must be taken into account properly when the impact of clustered PV systems on the electric power system is assessed. If the average power output of clustered PV systems can be estimated from the power output of a single PV system, this is very useful for the impact assessment. In this study, we propose a simple method to estimate the total power output fluctuation of clustered PV systems. In the proposed method, the smoothing effect is assumed to arise from two factors, i.e., the time difference of overhead clouds passing among PV systems and the random change in the size and/or shape of clouds. The first is formulated as a low-pass filter, assuming that output fluctuation propagates in the same direction as the wind at constant speed. The second is taken into account by using Fourier transform surrogate data. The parameters in the proposed method were selected so that the estimated fluctuation is similar to the ensemble-average fluctuation of data observed at five points used as a training data set. Then, using the selected parameters, the fluctuation property was estimated for another data set. The results show that the proposed method is useful for estimating the total power output fluctuation of clustered PV systems.
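The Fourier transform surrogate mentioned above can be generated by randomizing the spectral phases while keeping the amplitudes, which preserves the amplitude spectrum (and hence the autocorrelation) of the original series while scrambling its detailed shape. A minimal sketch, with an invented correlated series standing in for a PV output record:

```python
# Phase-randomized Fourier surrogate of a real-valued time series.
import numpy as np

def fourier_surrogate(x, rng):
    X = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, X.size)
    phases[0] = 0.0                      # keep the mean (DC bin)
    if x.size % 2 == 0:
        phases[-1] = 0.0                 # Nyquist bin must stay real
    return np.fft.irfft(X * np.exp(1j * phases), n=x.size)

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=512))      # correlated "power output" series
y = fourier_surrogate(x, rng)
```

Because only phases change, y has the same fluctuation power at every frequency as x, which is exactly the property needed to mimic the random cloud-shape component.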
The Use of Radar to Improve Rainfall Estimation over the Tennessee and San Joaquin River Valleys
NASA Technical Reports Server (NTRS)
Petersen, Walter A.; Gatlin, Patrick N.; Felix, Mariana; Carey, Lawrence D.
2010-01-01
This slide presentation provides an overview of the collaborative radar rainfall project between the Tennessee Valley Authority (TVA), the Von Braun Center for Science & Innovation (VCSI), NASA MSFC, and UAHuntsville. Two systems were used in this project: the Advanced Radar for Meteorological & Operational Research (ARMOR) Rainfall Estimation Processing System (AREPS), a demonstration project of real-time radar rainfall using a research radar, and the NEXRAD Rainfall Estimation Processing System (NREPS). The objectives, methodology, some results and validation, operational experience, and lessons learned are reviewed. Another project using radar to improve rainfall estimates is in California, specifically the San Joaquin River Valley. It is part of an overall effort to develop an integrated tool to assist water management within the San Joaquin River Valley, which involves integrating several components: (1) radar precipitation estimates, (2) a distributed hydrologic model, and (3) snowfall measurements and surface temperature/moisture measurements. NREPS was selected to provide the precipitation component.
Improving Ocean Angular Momentum Estimates Using a Model Constrained by Data
NASA Technical Reports Server (NTRS)
Ponte, Rui M.; Stammer, Detlef; Wunsch, Carl
2001-01-01
Ocean angular momentum (OAM) calculations using forward model runs without any data constraints have recently revealed the effects of OAM variability on the Earth's rotation. Here we use an ocean model and its adjoint to estimate OAM values by constraining the model to available oceanic data. The optimization procedure yields substantial changes in OAM, related to adjustments in both motion and mass fields, as well as in the wind stress torques acting on the ocean. Constrained and unconstrained OAM values are discussed in the context of closing the planet's angular momentum budget. The estimation procedure yields noticeable improvements in the agreement with the observed Earth rotation parameters, particularly at the seasonal timescale. The comparison with Earth rotation measurements provides an independent consistency check on the estimated ocean state and underlines the importance of ocean state estimation for quantitative studies of the variable large-scale oceanic mass and circulation fields, including studies of OAM.
Improving Estimates of m sin i by Expanding RV Data Sets
NASA Astrophysics Data System (ADS)
Brown, Robert A.
2016-07-01
We develop new techniques for estimating the fractional uncertainty (F) in the projected planetary mass (m sin i) resulting from Keplerian fits to radial-velocity (RV) data sets of known Jupiter-class exoplanets. The techniques include (1) estimating the distribution of m sin i using projection, (2) detecting and mitigating chimeras, a source of systematic error, and (3) estimating the reduction in the uncertainty in m sin i if hypothetical observations were made in the future. We demonstrate the techniques on a representative set of RV exoplanets, known as the Sample of 27, which are candidates for detection and characterization by a future astrometric direct imaging mission. We estimate the improvements (reductions) in F due to additional, hypothetical RV measurements obtained in the future. We encounter and address a source of systematic error, "chimeras," which can appear when multiple types of Keplerian solutions are compatible with a single data set.
NASA Astrophysics Data System (ADS)
Cuthbert, M. O.
2010-09-01
An analytical solution to a linearized Boussinesq equation is extended to develop an expression for groundwater drainage using estimates of aquifer parameters. This is then used to develop an improved water table fluctuation (WTF) technique for estimating groundwater recharge. The resulting method extends the standard WTF technique, making it applicable in areas with smoothly varying water tables, provided the aquifer properties of the area are relatively well known, and it is not reliant on precipitation data. The method is validated against numerical simulations and a case study from a catchment where recharge is "known" a priori by other means. The approach may also be inverted to provide initial estimates of aquifer parameters in areas where recharge can be reliably estimated by other methods.
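For reference, the standard WTF estimate that this method extends converts water-table rises into recharge through the specific yield. A minimal sketch of that baseline (the paper's improvement adds a drainage term from the linearized Boussinesq solution, which is not shown; the head values and specific yield are illustrative):

```python
def wtf_recharge(heads, sy):
    """Standard water table fluctuation estimate: recharge = specific yield
    times the sum of water-table rises. Declines (drainage) are ignored in
    this minimal version; the improved method estimates them analytically."""
    rises = [max(0.0, b - a) for a, b in zip(heads, heads[1:])]
    return sy * sum(rises)

# water-table elevations in metres, specific yield 0.02
R = wtf_recharge([10.0, 10.3, 10.1, 10.6], 0.02)
# rises total 0.8 m, so R = 0.016 m (16 mm) of recharge
```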
Multiple data sources improve DNA-based mark-recapture population estimates of grizzly bears.
Boulanger, John; Kendall, Katherine C; Stetz, Jeffrey B; Roon, David A; Waits, Lisette P; Paetkau, David
2008-04-01
A fundamental challenge to estimating population size with mark-recapture methods is heterogeneous capture probabilities and subsequent bias of population estimates. Confronting this problem usually requires substantial sampling effort that can be difficult to achieve for some species, such as carnivores. We developed a methodology that uses two data sources to deal with heterogeneity and applied this to DNA mark-recapture data from grizzly bears (Ursus arctos). We improved population estimates by incorporating additional DNA "captures" of grizzly bears obtained by collecting hair from unbaited bear rub trees concurrently with baited, grid-based, hair snag sampling. We consider a Lincoln-Petersen estimator with hair snag captures as the initial session and rub tree captures as the recapture session and develop an estimator in program MARK that treats hair snag and rub tree samples as successive sessions. Using empirical data from a large-scale project in the greater Glacier National Park, Montana, USA, area and simulation modeling we evaluate these methods and compare the results to hair-snag-only estimates. Empirical results indicate that, compared with hair-snag-only data, the joint hair-snag-rub-tree methods produce similar but more precise estimates if capture and recapture rates are reasonably high for both methods. Simulation results suggest that estimators are potentially affected by correlation of capture probabilities between sample types in the presence of heterogeneity. Overall, closed population Huggins-Pledger estimators showed the highest precision and were most robust to sparse data, heterogeneity, and capture probability correlation among sampling types. Results also indicate that these estimators can be used when a segment of the population has zero capture probability for one of the methods. We propose that this general methodology may be useful for other species in which mark-recapture data are available from multiple sources.
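The two-session Lincoln-Petersen step described above can be sketched as follows (Chapman's bias-corrected form; the counts are hypothetical, not the Glacier National Park data):

```python
def lincoln_petersen(n1, n2, m):
    """Chapman's bias-corrected Lincoln-Petersen population estimate.
    n1: individuals identified in the initial session (hair snags),
    n2: individuals identified in the recapture session (rub trees),
    m:  individuals detected by both methods."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Hypothetical counts, for illustration only:
n_hat = lincoln_petersen(60, 45, 20)
```

The Huggins-Pledger estimators favored in the paper additionally model heterogeneous capture probabilities, which this simple closed-form estimator ignores.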
Enhancing e-waste estimates: Improving data quality by multivariate Input–Output Analysis
Wang, Feng; Huisman, Jaco; Stevels, Ab; Baldé, Cornelis Peter
2013-11-15
Highlights: • A multivariate Input–Output Analysis method for e-waste estimates is proposed. • Applying multivariate analysis to consolidate data can enhance e-waste estimates. • We examine the influence of model selection and data quality on e-waste estimates. • Datasets of all e-waste related variables in a Dutch case study are provided. • Accurate modeling of time-variant lifespan distributions is critical for estimates. - Abstract: Waste electrical and electronic equipment (or e-waste) is one of the fastest growing waste streams, encompassing a wide and increasing spectrum of products. Accurate estimation of e-waste generation is difficult, mainly due to a lack of high-quality data on market and socio-economic dynamics. This paper addresses how to enhance e-waste estimates by providing techniques to increase data quality. An advanced, flexible and multivariate Input–Output Analysis (IOA) method is proposed. It links all three pillars in IOA (product sales, stock and lifespan profiles) to construct mathematical relationships between various data points. By applying this method, the data consolidation steps can generate more accurate time-series datasets from the available data pool. This can consequently increase the reliability of e-waste estimates compared to the approach without data processing. A case study in the Netherlands is used to apply the advanced IOA model. As a result, for the first time, complete datasets of all three variables for estimating all types of e-waste have been obtained. The results also demonstrate significant disparity between various estimation models, arising from the use of data under different conditions. This shows the importance of applying a multivariate approach and multiple sources to improve data quality for modelling, specifically using appropriate time-varying lifespan parameters. Following the case study, a roadmap with a procedural guideline is provided to enhance e-waste estimates.
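The lifespan pillar of the IOA linkage can be illustrated with a toy convolution of historical sales against a discrete lifespan distribution (all numbers hypothetical; the paper's model additionally links stock and uses time-varying lifespans):

```python
def ewaste_generated(sales, lifespan_pmf):
    """Units reaching end-of-life each year, modeled as the convolution
    of past sales with a discrete lifespan probability mass function:
    W(t) = sum over age a of sales(t - a) * P(lifespan = a)."""
    horizon = len(sales)
    waste = [0.0] * horizon
    for t in range(horizon):
        for age, p in enumerate(lifespan_pmf):
            if t - age >= 0:
                waste[t] += sales[t - age] * p
    return waste

# Hypothetical sales (units/year) and a 0-2 year lifespan distribution:
w = ewaste_generated([100, 120, 140, 160], [0.2, 0.5, 0.3])
```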
Shuaib, Muhammad; Becker, Stan; Rahman, Md. Mokhlesur; Peters, David H.
2011-01-01
Due to an urgent need for information on the coverage of health services for women and children after the fall of the Taliban regime in Afghanistan, a multiple indicator cluster survey (MICS) was conducted in 2003 using the outdated 1979 census as the sampling frame. When 2004 pre-census data became available, population-sampling weights were generated based on the survey-sampling scheme. Using these weights, population estimates for seven maternal and child healthcare-coverage indicators were generated and compared with the unweighted MICS 2003 estimates. The use of sample weights provided unbiased estimates of population parameters. Comparison of weighted and unweighted estimates showed some wide differences for individual provincial estimates and confidence intervals. However, the mean, median and absolute mean of the differences between weighted and unweighted estimates and their confidence intervals were close to zero for all indicators at the national level. Ranking of the five highest and the five lowest provinces on weighted and unweighted estimates also yielded similar results. The general consistency of results suggests that outdated sampling frames can be appropriate for use in similar situations to obtain initial estimates from household surveys to guide policy and programming directions. However, the power to detect change from these estimates is lower than originally planned, requiring a greater tolerance for error when the data are used as a baseline for evaluation. The generalizability of using outdated sampling frames in similar settings is qualified by the specific characteristics of the MICS 2003: a low replacement rate of clusters and zero probability of inclusion of clusters created after the 1979 census. PMID:21957678
Improved Estimation of Orbits and Physical Properties of Objects in GEO
NASA Astrophysics Data System (ADS)
Bradley, B.; Axelrad, P.
2013-09-01
Orbital debris is a major concern for satellite operators, both commercial and military. Debris in the geosynchronous (GEO) belt is of particular concern because this unique region is such a valuable, limited resource, and, from the ground, we cannot reliably track and characterize GEO objects smaller than 1 meter in diameter. Space-based space surveillance (SBSS) is required to observe GEO objects without weather restriction and with improved viewing geometry. SBSS satellites have thus far been placed in Sun-synchronous orbits. This paper investigates the benefits to GEO orbit determination (including the estimation of mass, area, and shape) that arise from placing observing satellites in a geosynchronous transfer orbit (GTO) and a sub-GEO orbit. Recently, several papers have reported on simulation studies to estimate orbits and physical properties; however, these studies use simulated objects and ground-based measurements, often with dense and long data arcs. While this type of simulation provides valuable insight into what is possible as far as state estimation goes, it is not a very realistic observing scenario and thus may not yield meaningful accuracies. Our research improves upon simulations published to date by utilizing publicly available ephemerides for the WAAS satellites (Anik F1R and Galaxy 15), accurate at the meter level. By simulating and deliberately degrading right ascension and declination observations, consistent with these ephemerides, a realistic assessment of the achievable orbit determination accuracy using GTO and sub-GEO SBSS platforms is performed. Our results show that orbit accuracy is significantly improved as compared to a Sun-synchronous platform. Physical property estimation is also performed using simulated astrometric and photometric data taken from GTO and sub-GEO sensors. Simulations of SBSS-only as well as combined SBSS and ground-based observation tracks are used to study the improvement in area, mass, and shape estimation.
Combining SIP and NMR Measurements to Develop Improved Estimates of Permeability in Sandstone Cores
NASA Astrophysics Data System (ADS)
Keating, K.; Binley, A. M.
2013-12-01
Permeability is traditionally measured in-situ by inducing groundwater flow using pumping, slug, or packer tests; however, these methods require the existence of wells, can be labor intensive, and can be constrained by measurement support volumes. Indirect estimates of permeability based on geophysical techniques benefit from relatively short measurement times, do not require fluid extraction, and are non-invasive when made from the surface (or minimally invasive when made in a borehole). However, estimates of permeability based on a single geophysical method often require calibration for rock type, and cannot be used to uniquely determine all of the physical properties required to accurately determine permeability. In this laboratory study we present the first critical step towards developing a method for estimating permeability based on the synergistic coupling of two complementary geophysical methods: spectral induced polarization (SIP) and nuclear magnetic resonance (NMR). To develop an improved model for estimating permeability, laboratory SIP and NMR measurements were collected on a series of sandstone cores covering a wide range of permeabilities. Current models for estimating permeability from each individual geophysical measurement were compared to independently obtained estimates of permeability. The comparison confirmed previous research showing that estimates from SIP or NMR alone only yield the permeability within order-of-magnitude accuracy and must be calibrated for rock type. Next, the geophysical parameters determined from SIP and NMR were compared to independent measurements of the physical properties of the sandstone cores, including gravimetric porosity and pore-size distributions (obtained from mercury injection porosimetry); this comparison was used to evaluate which geophysical parameter more consistently and accurately predicted each physical property. Finally, we present an improved method for estimating permeability in sandstone cores based on the combined SIP and NMR parameters.
Pan, Yude; Birdsey, Richard; Hom, John; McCullough, Kevin; Clark, Kenneth
2006-02-01
We compared estimates of net primary production (NPP) from the MODIS satellite with estimates from a forest ecosystem process model (PnET-CN) and forest inventory and analysis (FIA) data for forest types of the mid-Atlantic region of the United States. The regional means were similar for the three methods and for the dominant oak-hickory forests in the region. However, MODIS underestimated NPP for less-dominant northern hardwood forests and overestimated NPP for coniferous forests. Causes of inaccurate estimates of NPP by MODIS were (1) an aggregated classification and parameterization of diverse deciduous forests in different climatic environments into a single class that averages different radiation conversion efficiencies; and (2) lack of soil water constraints on NPP for forests or areas that occur on thin or sandy, coarse-grained soil. We developed the "available soil water index" for adjusting the MODIS NPP estimates, which significantly improved NPP estimates for coniferous forests. The MODIS NPP estimates have many advantages such as globally continuous monitoring and remarkable accuracy for large scales. However, at regional or local scales, our study indicates that it is necessary to adjust estimates to specific vegetation types and soil water conditions.
Kappa statistic for clustered matched-pair data.
Yang, Zhao; Zhou, Ming
2014-07-10
The kappa statistic is widely used to assess the agreement between two procedures for independent matched-pair data. For matched-pair data collected in clusters, on the basis of the delta method and sampling techniques, we propose a nonparametric variance estimator for the kappa statistic that requires no assumptions about the within-cluster correlation structure or distribution. The results of an extensive Monte Carlo simulation study demonstrate that the proposed kappa statistic provides consistent estimation, and the proposed variance estimator behaves reasonably well for at least a moderately large number of clusters (e.g., K ≥ 50). Compared with the variance estimator ignoring dependence within a cluster, the proposed variance estimator performs better in maintaining the nominal coverage probability when the intra-cluster correlation is fair (ρ ≥ 0.3), with more pronounced improvement as ρ increases. To illustrate the practical application of the proposed estimator, we analyze two real data examples of clustered matched-pair data.
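The kappa point estimate underlying the proposed variance estimator can be sketched as follows (toy counts; the clustered variance estimator itself is not reproduced here):

```python
def cohen_kappa(table):
    """Cohen's kappa from a square agreement table of counts:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    k = len(table)
    n = sum(sum(row) for row in table)
    po = sum(table[i][i] for i in range(k)) / n           # observed
    row_m = [sum(row) / n for row in table]               # row margins
    col_m = [sum(table[i][j] for i in range(k)) / n
             for j in range(k)]                           # column margins
    pe = sum(r * c for r, c in zip(row_m, col_m))         # chance
    return (po - pe) / (1 - pe)

# Hypothetical 2x2 table of paired ratings by two procedures:
kappa = cohen_kappa([[40, 10], [5, 45]])
```

For this table the observed agreement is 0.85 against a chance agreement of 0.50, giving kappa = 0.7.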
Improved particle size estimation in digital holography via sign matched filtering.
Lu, Jiang; Shaw, Raymond A; Yang, Weidong
2012-06-01
A matched filter method is provided for obtaining improved particle size estimates from digital in-line holograms. This improvement is relative to conventional reconstruction and pixel counting methods for particle size estimation, which is greatly limited by the CCD camera pixel size. The proposed method is based on iterative application of a sign matched filter in the Fourier domain, with sign meaning the matched filter takes values of ±1 depending on the sign of the angular spectrum of the particle aperture function. Using simulated data the method is demonstrated to work for particle diameters several times the pixel size. Holograms of piezoelectrically generated water droplets taken in the laboratory show greatly improved particle size measurements. The method is robust to additive noise and can be applied to real holograms over a wide range of matched-filter particle sizes.
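A one-dimensional sketch of the sign matched filter idea (the paper works with 2-D holograms and the angular spectrum of the particle aperture; this toy signal and aperture are hypothetical):

```python
import numpy as np

def sign_matched_filter(signal, aperture):
    """Apply a sign matched filter in the Fourier domain: the filter
    takes values of +/-1 following the sign of the (real part of the)
    spectrum of the assumed aperture function."""
    filt = np.sign(np.fft.fft(aperture).real)
    filt[filt == 0] = 1.0  # keep every frequency component
    return np.fft.ifft(np.fft.fft(signal) * filt).real

# Hypothetical 1-D record: a top-hat "particle" of width 8 in 64 samples.
x = np.zeros(64)
x[28:36] = 1.0
y = sign_matched_filter(x, x)
# Because the filter has unit magnitude everywhere, the filtered record
# conserves the energy of the input (Parseval).
```

In the iterative scheme described above, the filter would be re-applied for a range of candidate particle sizes and the best match selected.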
Zheng, Xiujuan; Tian, Guangjian; Huang, Sung-Cheng; Feng, Dagan
2011-01-01
Tracer kinetic modeling with dynamic Positron Emission Tomography (PET) requires a plasma time-activity curve (PTAC) as an input function. Several image-derived input function (IDIF) methods that rely on drawing the region-of-interest (ROI) in large vascular structures have been proposed to overcome the problems caused by the invasive approach to obtaining the PTAC, especially for small animal studies. However, the manual placement of ROIs for estimating IDIF is subjective and labor-intensive, making it an undesirable and unreliable process. In this paper, we propose a novel hybrid clustering method (HCM) that objectively delineates ROIs in dynamic PET images for the estimation of IDIFs, and demonstrate its application to the mouse PET studies acquired with [18F]Fluoro-2-deoxy-2-D-glucose (FDG). We begin our HCM using K-means clustering for background removal. We then model the time-activity curves using polynomial regression mixture models in curve clustering for heart structure detection. The hierarchical clustering is finally applied for ROI refinements. The HCM achieved accurate ROI delineation in both computer simulations and experimental mouse studies. In the mouse studies the predicted IDIF had a high correlation with the gold standard, the PTAC derived from the invasive blood samples. The results indicate that the proposed HCM has a great potential in ROI delineation for automatic estimation of IDIF in dynamic FDG-PET studies. PMID:20952342
Westgate, Philip M
2014-06-15
Generalized estimating equations are commonly used to analyze correlated data. Choosing an appropriate working correlation structure for the data is important, as the efficiency of generalized estimating equations depends on how closely this structure approximates the true structure. Therefore, most studies have proposed multiple criteria to select the working correlation structure, although some of these criteria have neither been compared nor extensively studied. To ease the correlation selection process, we propose a criterion that utilizes the trace of the empirical covariance matrix. Furthermore, use of the unstructured working correlation can potentially improve estimation precision and therefore should be considered when data arise from a balanced longitudinal study. However, most previous studies have not allowed the unstructured working correlation to be selected as it estimates more nuisance correlation parameters than other structures such as AR-1 or exchangeable. Therefore, we propose appropriate penalties for the selection criteria that can be imposed upon the unstructured working correlation. Via simulation in multiple scenarios and in application to a longitudinal study, we show that the trace of the empirical covariance matrix works very well relative to existing criteria. We further show that allowing criteria to select the unstructured working correlation when utilizing the penalties can substantially improve parameter estimation.
Makeyev, Oleksandr; Besio, Walter G
2016-01-01
Noninvasive concentric ring electrodes are a promising alternative to conventional disc electrodes. Currently, the superiority of tripolar concentric ring electrodes over disc electrodes, in particular in accuracy of Laplacian estimation, has been demonstrated in a range of applications. In our recent work, we have shown that accuracy of Laplacian estimation can be improved with multipolar concentric ring electrodes using a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings using the (4n + 1)-point method for n ≥ 2. This paper takes the next step toward further improving the Laplacian estimate by proposing novel variable inter-ring distances concentric ring electrodes. Derived using a modified (4n + 1)-point method, linearly increasing and decreasing inter-ring distances tripolar (n = 2) and quadripolar (n = 3) electrode configurations are compared to their constant inter-ring distances counterparts. Finite element method modeling and analytic results are consistent and suggest that increasing inter-ring distances electrode configurations may decrease the truncation error, resulting in more accurate Laplacian estimates compared to respective constant inter-ring distances configurations. For the currently used tripolar electrode configuration, the truncation error may be decreased more than two-fold, while for the quadripolar configuration a more than six-fold decrease is expected. PMID:27294933
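For the constant inter-ring distance tripolar case (n = 2, rings at r and 2r), one common form of the (4n + 1)-point Laplacian estimate can be sketched as follows; the weights follow from a Taylor expansion of the ring-averaged potentials, which cancels the fourth-order term:

```python
def tripolar_laplacian(v0, vm, vo, r):
    """Tripolar concentric ring Laplacian estimate: v0 is the central
    disc potential, vm and vo the mean potentials on rings of radius
    r and 2r. Ring averages expand as v0 + (rho**2 / 4) * L + O(rho**4),
    so 16*(vm - v0) - (vo - v0) = 3 * r**2 * L with the rho**4 terms
    cancelled."""
    return (16.0 * (vm - v0) - (vo - v0)) / (3.0 * r ** 2)

# Sanity check against v(x, y) = x**2 + y**2, whose Laplacian is 4:
# ring averages around the origin are exactly r**2 and (2*r)**2.
r = 0.01
est = tripolar_laplacian(0.0, r ** 2, (2 * r) ** 2, r)
```

The variable inter-ring distance configurations proposed in the paper modify these weights to reduce the remaining truncation error.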
Li, Ying; Wang, Hong; Li, Xiao Bing
2015-01-01
Vegetation is an important part of the ecosystem, and estimation of fractional vegetation cover is of great significance for monitoring vegetation growth in a region. With Landsat TM images and HJ-1B images as data sources, an improved selective endmember linear spectral mixture model (SELSMM) was put forward in this research to estimate the fractional vegetation cover in the Huangfuchuan watershed in China. We compared the result with the vegetation coverage estimated with a linear spectral mixture model (LSMM) and conducted accuracy tests on the two results with field survey data to study the effectiveness of the different models in estimating vegetation coverage. Results indicated that: (1) the RMSE of the SELSMM estimate based on TM images is the lowest, at 0.044. The RMSEs of the estimates from LSMM based on TM images, SELSMM based on HJ-1B images and LSMM based on HJ-1B images are respectively 0.052, 0.077 and 0.082, all higher than that of SELSMM based on TM images; (2) the R2 values of SELSMM based on TM images, LSMM based on TM images, SELSMM based on HJ-1B images and LSMM based on HJ-1B images are respectively 0.668, 0.531, 0.342 and 0.336. Among these models, SELSMM based on TM images has the highest estimation accuracy and also the highest correlation with measured vegetation coverage. Of the two methods tested, SELSMM is superior to LSMM in estimating vegetation coverage, and it is also better at unmixing mixed pixels of TM images than pixels of HJ-1B images. Thus, the SELSMM based on TM images is comparatively accurate and reliable for regional fractional vegetation cover estimation. PMID:25905772
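A minimal sketch of the LSMM unmixing step (generic sum-to-one least squares; the endmember spectra are hypothetical, and the selective-endmember refinement of SELSMM is not shown):

```python
import numpy as np

def unmix(pixel, endmembers):
    """Linear spectral mixture model: solve pixel ~= E @ f for the
    endmember fractions f, folding the sum-to-one constraint in as an
    extra equation in the least-squares system."""
    bands, n = endmembers.shape
    A = np.vstack([endmembers, np.ones((1, n))])  # append sum-to-one row
    b = np.append(pixel, 1.0)
    f, *_ = np.linalg.lstsq(A, b, rcond=None)
    return f

# Two hypothetical endmembers (vegetation, soil) over three bands:
E = np.array([[0.05, 0.30],
              [0.45, 0.35],
              [0.30, 0.50]])
pixel = 0.6 * E[:, 0] + 0.4 * E[:, 1]  # a pixel with 60 % vegetation
f = unmix(pixel, E)
```

The first component of f is the fractional vegetation cover of the pixel; SELSMM differs by selecting a per-pixel subset of candidate endmembers before solving.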
Using flow cytometry to estimate pollen DNA content: improved methodology and applications
Kron, Paul; Husband, Brian C.
2012-01-01
Background and Aims Flow cytometry has been used to measure nuclear DNA content in pollen, mostly to understand pollen development and detect unreduced gametes. Published data have not always met the high-quality standards required for some applications, in part due to difficulties inherent in the extraction of nuclei. Here we describe a simple and relatively novel method for extracting pollen nuclei, involving the bursting of pollen through a nylon mesh, compare it with other methods and demonstrate its broad applicability and utility. Methods The method was tested across 80 species, 64 genera and 33 families, and the data were evaluated using established criteria for estimating genome size and analysing cell cycle. Filter bursting was directly compared with chopping in five species, yields were compared with published values for sonicated samples, and the method was applied by comparing genome size estimates for leaf and pollen nuclei in six species. Key Results Data quality met generally applied standards for estimating genome size in 81 % of species and the higher best practice standards for cell cycle analysis in 51 %. In 41 % of species we met the most stringent criterion of screening 10 000 pollen grains per sample. In direct comparison with two chopping techniques, our method produced better quality histograms with consistently higher nuclei yields, and yields were higher than previously published results for sonication. In three binucleate and three trinucleate species we found that pollen-based genome size estimates differed from leaf tissue estimates by 1·5 % or less when 1C pollen nuclei were used, while estimates from 2C generative nuclei differed from leaf estimates by up to 2·5 %. Conclusions The high success rate, ease of use and wide applicability of the filter bursting method show that this method can facilitate the use of pollen for estimating genome size and dramatically improve unreduced pollen production estimation with flow cytometry. PMID
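The internal-standard ratio at the heart of flow-cytometric genome size estimation can be sketched as follows (hypothetical peak positions and standard; the paper's contribution is the nuclei-extraction method, not this formula):

```python
def genome_size(sample_peak, standard_peak, standard_pg):
    """Estimate DNA content from relative fluorescence: the sample's
    peak position relative to a co-processed internal standard of
    known genome size (in picograms)."""
    return standard_pg * sample_peak / standard_peak

# Hypothetical fluorescence peak positions and a 2.50 pg standard:
size_pg = genome_size(180.0, 200.0, 2.50)
```

For pollen, the 1C nuclei peak would be used, since the paper reports that 2C generative nuclei give estimates that deviate more from leaf-based values.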
Improvements to lawn and garden equipment emissions estimates for Baltimore, Maryland.
Reid, Stephen B; Pollard, Erin K; Sullivan, Dana Coe; Shaw, Stephanie L
2010-12-01
Lawn and garden equipment is a significant source of emissions of volatile organic compounds (VOCs) and other pollutants in suburban and urban areas. Emission estimates for this source category are typically prepared using default equipment populations and activity data contained in emissions models such as the U.S. Environmental Protection Agency's (EPA) NONROAD model or the California Air Resources Board's (CARB) OFFROAD model. Although such default data may represent national or state averages, these data are unlikely to reflect regional or local differences in equipment usage patterns because of variations in climate, lot sizes, and other variables. To assess potential errors in lawn and garden equipment emission estimates produced by the NONROAD model and to demonstrate methods that can be used by local planning agencies to improve those emission estimates, this study used bottom-up data collection techniques in the Baltimore metropolitan area to develop local equipment population, activity, and temporal data for lawn and garden equipment in the area. Results of this study show that emission estimates of VOCs, particulate matter (PM), carbon monoxide (CO), carbon dioxide (CO2), and nitrogen oxides (NO(x)) for the Baltimore area that are based on local data collected through surveys of residential and commercial lawn and garden equipment users are 24-56% lower than estimates produced using NONROAD default data, largely because of a difference in equipment populations for high-usage commercial applications. Survey-derived emission estimates of PM and VOCs are 24 and 26% lower than NONROAD default estimates, respectively, whereas survey-derived emission estimates for CO, CO2, and NO(x) are more than 40% lower than NONROAD default estimates. In addition, study results show that the temporal allocation factors applied to residential lawn and garden equipment in the NONROAD model underestimated weekend activity levels by 30% compared with survey-derived temporal allocation factors.
Improving PAGER's real-time earthquake casualty and loss estimation toolkit: a challenge
Jaiswal, K.S.; Wald, D.J.
2012-01-01
We describe the on-going developments of PAGER’s loss estimation models, and discuss value-added web content that can be generated related to exposure, damage and loss outputs for a variety of PAGER users. These developments include identifying vulnerable building types in any given area, estimating earthquake-induced damage and loss statistics by building type, and developing visualization aids that help locate areas of concern for improving post-earthquake response efforts. While detailed exposure and damage information is highly useful and desirable, significant improvements are still necessary in order to improve underlying building stock and vulnerability data at a global scale. Existing efforts with the GEM’s GED4GEM and GVC consortia will help achieve some of these objectives. This will benefit PAGER especially in regions where PAGER’s empirical model is less-well constrained; there, the semi-empirical and analytical models will provide robust estimates of damage and losses. Finally, we outline some of the challenges associated with rapid casualty and loss estimation that we experienced while responding to recent large earthquakes worldwide.
NASA Astrophysics Data System (ADS)
Yucel, I.; Akcelik, M.; Kuligowski, R. J.
2014-12-01
In support of the National Oceanic and Atmospheric Administration (NOAA) National Weather Service's (NWS) flash flood warning and heavy precipitation forecast efforts, the NOAA National Environmental Satellite Data and Information Service (NESDIS) Center for Satellite Applications and Research (STAR) has been providing satellite-based precipitation estimates operationally since 1978. Two of the satellite-based rainfall algorithms are the Hydro-Estimator (HE) and the Self-Calibrating Multivariate Precipitation Retrieval (SCaMPR). However, unlike the HE algorithm, the SCaMPR does not currently make any adjustments for the effects of complex topography on rainfall. This study investigates the potential for improving the SCaMPR algorithm by incorporating orographic and humidity corrections, calibrating the SCaMPR against rain gauge transects in northwestern Mexico to identify correctable biases related to elevation, slope, wind direction and humidity. The elevation-dependent bias structure of the SCaMPR algorithm suggests that the algorithm underestimates precipitation where the atmospheric motion is upward and overestimates it where the motion is downward along mountainous terrain. A regionally dependent, empirical elevation-based bias correction technique may help improve the quality of satellite-derived precipitation products. Beyond orography, the effect of atmospheric indices on the precipitation estimates is also analyzed. The findings suggest that continued improvement of the developed orographic correction scheme is warranted in order to advance quantitative precipitation estimation in complex terrain regions for use in weather forecasting and hydrologic applications.
Cosmology with galaxy clusters
NASA Astrophysics Data System (ADS)
Sartoris, Barbara
2015-08-01
Clusters of galaxies are powerful probes to constrain parameters that describe the cosmological models and to distinguish among different models. Since, the evolution of the cluster mass function and large-scale clustering contain the informations about the linear growth rate of perturbations and the expansion history of the Universe, clusters have played an important role in establishing the current cosmological paradigm. It is crucial to know how to determine the cluster mass from observational quantities when using clusters as cosmological tools. For this, numerical simulations are helpful to define and study robust cluster mass proxies that have minimal and well understood scatter across the mass and redshift ranges of interest. Additionally, the bias in cluster mass determination can be constrained via observations of the strong and weak lensing effect, X-ray emission, the Sunyaev- Zel’dovic effect, and the dynamics of galaxies.A major advantage of X-ray surveys is that the observable-mass relation is tight. Moreover, clusters can be easily identified in X-ray as continuous, extended sources. As of today, interesting cosmological constraints have been obtained from relatively small cluster samples (~102), X-ray selected by the ROSAT satellite over a wide redshift range (0
Semi-Supervised Data Summarization: Using Spectral Libraries to Improve Hyperspectral Clustering
NASA Technical Reports Server (NTRS)
Wagstaff, K. L.; Shu, H. P.; Mazzoni, D.; Castano, R.
2005-01-01
Hyperspectral imagers produce very large images, with each pixel recorded at hundreds or thousands of different wavelengths. The ability to automatically generate summaries of these data sets enables several important applications, such as quickly browsing through a large image repository or determining the best use of a limited bandwidth link (e.g., determining which images are most critical for full transmission). Clustering algorithms can be used to generate these summaries, but traditional clustering methods make decisions based only on the information contained in the data set. In contrast, we present a new method that additionally leverages existing spectral libraries to identify materials that are likely to be present in the image target area. We find that this approach simultaneously reduces runtime and produces summaries that are more relevant to science goals.
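One simple way to leverage a spectral library during clustering, in the spirit described above, is to seed the cluster centroids with library spectra before ordinary k-means refinement. This Python sketch is illustrative only; the seeding scheme and the two-band toy data are assumptions, not the paper's actual summarization algorithm.

```python
def seeded_kmeans(points, seed_centroids, iters=10):
    # Semi-supervised seeding: library spectra provide the initial centroids,
    # then standard k-means refinement proceeds on the image pixels.
    centroids = [list(c) for c in seed_centroids]
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            j = min(range(len(centroids)),
                    key=lambda j: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[j])))
            clusters[j].append(p)
        for j, members in enumerate(clusters):
            if members:  # keep the library seed if no pixel matched it
                centroids[j] = [sum(dim) / len(members) for dim in zip(*members)]
    return centroids

# Two-band toy "pixels" around two materials, seeded near each material.
pixels = [(0.0, 0.0), (0.5, 0.5), (10.0, 10.0), (10.5, 10.5)]
print(seeded_kmeans(pixels, [(1.0, 1.0), (9.0, 9.0)]))
# -> [[0.25, 0.25], [10.25, 10.25]]
```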
Improving the estimation of flavonoid intake for study of health outcomes
Dwyer, Johanna T.; Jacques, Paul F.; McCullough, Marjorie L.
2015-01-01
Imprecision in estimating intakes of non-nutrient bioactive compounds such as flavonoids is a challenge in epidemiologic studies of health outcomes. The sources of this imprecision, using flavonoids as an example, include the variability of bioactive compounds in foods due to differences in growing conditions and processing, the challenges in laboratory quantification of flavonoids in foods, the incompleteness of flavonoid food composition tables, and the lack of adequate dietary assessment instruments. Steps to improve databases of bioactive compounds and to increase the accuracy and precision of the estimation of bioactive compound intakes in studies of health benefits and outcomes are suggested. PMID:26084477
Does Ocean Color Data Assimilation Improve Estimates of Global Ocean Inorganic Carbon?
NASA Technical Reports Server (NTRS)
Gregg, Watson
2012-01-01
Ocean color data assimilation has been shown to dramatically improve chlorophyll abundances and distributions globally and regionally in the oceans. Chlorophyll is a proxy for phytoplankton biomass (which is explicitly defined in a model), and is related to the inorganic carbon cycle through the interactions of organic carbon (particulate and dissolved) and through primary production, where inorganic carbon is directly taken out of the system. Does ocean color data assimilation, whose effects on estimates of chlorophyll are demonstrable, trickle through the simulated ocean carbon system to produce improved estimates of inorganic carbon? Our emphasis here is dissolved inorganic carbon, pCO2, and the air-sea flux. We use a sequential data assimilation method that assimilates chlorophyll directly and indirectly changes nutrient concentrations in a multi-variate approach. The results are decidedly mixed. Dissolved inorganic carbon estimates from the assimilation model are not meaningfully different from free-run, or unassimilated, results, and comparisons with in situ data are similar. pCO2 estimates are generally worse after data assimilation, with global estimates diverging 6.4% from in situ data, while free-run estimates are only 4.7% higher. Basin correlations are, however, slightly improved: r increases from 0.78 to 0.79, with slope closer to unity at 0.94 compared to 0.86. In contrast, the air-sea flux of CO2 is noticeably improved after data assimilation. Global differences decline from -0.635 mol/m2/y (stronger model sink from the atmosphere) to -0.202 mol/m2/y. Basin correlations are slightly improved from r=0.77 to r=0.78, with slope closer to unity (from 0.93 to 0.99). The Equatorial Atlantic appears as a slight sink in the free-run, but is correctly represented as a moderate source in the assimilation model. However, the assimilation model shows the Antarctic to be a source, rather than a modest sink, and the North Indian basin is represented incorrectly as a sink
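The sequential, multivariate flavor of the update described above can be caricatured in a few lines: nudge the modelled chlorophyll toward the observation, then adjust a nutrient field in proportion to the chlorophyll increment. The gain and coupling constants below are invented, and this is a crude stand-in for the actual assimilation scheme, not a description of it.

```python
def assimilate_chlorophyll(model_chl, obs_chl, model_no3,
                           gain=0.6, nut_coupling=0.5):
    # One sequential update step: blend modelled chlorophyll toward the
    # satellite observation, then change nitrate in proportion to the
    # chlorophyll increment (a toy multivariate step).
    increment = gain * (obs_chl - model_chl)
    chl = model_chl + increment
    no3 = max(0.0, model_no3 - nut_coupling * increment)
    return chl, no3

chl, no3 = assimilate_chlorophyll(model_chl=0.5, obs_chl=1.0, model_no3=2.0)
print(round(chl, 2), round(no3, 2))   # -> 0.8 1.85
```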
Estimating the Effect of School Water, Sanitation, and Hygiene Improvements on Pupil Health Outcomes
Garn, Joshua V.; Brumback, Babette A.; Drews-Botsch, Carolyn D.; Lash, Timothy L.; Kramer, Michael R.
2016-01-01
Background: We conducted a cluster-randomized water, sanitation, and hygiene trial in 185 schools in Nyanza province, Kenya. The trial, however, had imperfect school-level adherence at many schools. The primary goal of this study was to estimate the causal effects of school-level adherence to interventions on pupil diarrhea and soil-transmitted helminth infection. Methods: Schools were divided into water availability groups, which were then randomized separately into either water, sanitation, and hygiene intervention arms or a control arm. School-level adherence to the intervention was defined by the number of intervention components—water, latrines, soap—that had been adequately implemented. The outcomes of interest were pupil diarrhea and soil-transmitted helminth infection. We used a weighted generalized structural nested model to calculate prevalence ratios. Results: In the water-scarce group, there was evidence of a reduced prevalence of diarrhea among pupils attending schools that adhered to two or three intervention components (prevalence ratio = 0.28, 95% confidence interval: 0.10, 0.75), compared with what the prevalence would have been had the same schools instead adhered to zero or one component. In the water-available group, there was no evidence of reduced diarrhea with better adherence. For the soil-transmitted helminth infection and intensity outcomes, we often observed point estimates in the preventive direction with increasing intervention adherence, but primarily among girls, and the confidence intervals were often very wide. Conclusions: Our instrumental variable point estimates sometimes suggested protective effects with increased water, sanitation, and hygiene intervention adherence, although many of the estimates were imprecise. PMID:27276028
Improved tilt-depth method for fast estimation of top and bottom depths of magnetic bodies
NASA Astrophysics Data System (ADS)
Wang, Yan-Guo; Zhang, Jin; Ge, Kun-Peng; Chen, Xiao; Nie, Feng-Jun
2016-06-01
The tilt-depth method can be used for fast estimation of the top depths of magnetic bodies. However, it cannot estimate bottom depths, and each of its inversion points yields only a single solution. To address these weaknesses, this paper presents an improved tilt-depth method based on the magnetic anomaly expression of a vertical contact with finite depth extent, which can simultaneously estimate the top and bottom depths of magnetic bodies. In addition, multiple characteristic points are selected on the tilt angle map for joint computation, to improve the reliability of the inversion solutions. Two- and three-dimensional model tests show that the improved tilt-depth method is effective in inverting the top and bottom depths of buried bodies, and has higher inversion precision for top depths than the conventional method. The improved method is then used to process aeromagnetic data over the Changling Fault Depression in the Songliao Basin; its inverted top depths match the actual top depths of volcanic rocks in two nearby drilled wells more closely than those from the conventional tilt-depth method.
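The conventional relation underlying the method can be illustrated for a reduced-to-pole vertical contact, where the tilt angle follows theta(x) = arctan(x / z_t), so the -45 and +45 degree crossings are separated by twice the top depth. The Python sketch below recovers the top depth from a synthetic, noise-free tilt profile; the improved method's bottom-depth estimation and joint use of multiple characteristic points are not reproduced here.

```python
import math

def tilt_angle_profile(x, depth):
    # Tilt angle over a vertical contact at x = 0 (reduced-to-pole):
    # theta(x) = arctan(x / z_t), so theta = +/-45 deg at x = +/-z_t.
    return [math.degrees(math.atan(xi / depth)) for xi in x]

def estimate_top_depth(x, theta_deg):
    # Conventional tilt-depth estimate: half the horizontal distance between
    # the -45 and +45 degree crossings of the tilt angle.
    def crossing(target):
        for i in range(len(theta_deg) - 1):
            a, b = theta_deg[i], theta_deg[i + 1]
            if (a - target) * (b - target) <= 0 and a != b:
                t = (target - a) / (b - a)
                return x[i] + t * (x[i + 1] - x[i])
        return None
    return 0.5 * (crossing(45.0) - crossing(-45.0))

x = [i * 10.0 for i in range(-100, 101)]        # profile positions, metres
theta = tilt_angle_profile(x, depth=250.0)      # synthetic tilt profile
print(round(estimate_top_depth(x, theta), 1))   # -> 250.0
```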
Development of a mixed pixel filter for improved dimension estimation using AMCW laser scanner
NASA Astrophysics Data System (ADS)
Wang, Qian; Sohn, Hoon; Cheng, Jack C. P.
2016-09-01
Accurate dimension estimation is desired in many fields, but the traditional dimension estimation methods are time-consuming and labor-intensive. In the recent decades, 3D laser scanners have become popular for dimension estimation due to their high measurement speed and accuracy. Nonetheless, scan data obtained by amplitude-modulated continuous-wave (AMCW) laser scanners suffer from erroneous data called mixed pixels, which can influence the accuracy of dimension estimation. This study develops a mixed pixel filter for improved dimension estimation using AMCW laser scanners. The distance measurement of mixed pixels is firstly formulated based on the working principle of laser scanners. Then, a mixed pixel filter that can minimize the classification errors between valid points and mixed pixels is developed. Validation experiments were conducted to verify the formulation of the distance measurement of mixed pixels and to examine the performance of the proposed mixed pixel filter. Experimental results show that, for a specimen with dimensions of 840 mm × 300 mm, the overall errors of the dimensions estimated after applying the proposed filter are 1.9 mm and 1.0 mm for two different scanning resolutions, respectively. These errors are much smaller than the errors (4.8 mm and 3.5 mm) obtained by the scanner's built-in filter.
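Mixed pixels arise at depth discontinuities, where a laser footprint straddles a foreground edge and the background so the returned range falls between the two surfaces. The following Python sketch flags such points on a single scan line with a simple geometric heuristic; it is an illustration of the phenomenon, not the paper's classification-error-minimizing filter, and the tolerance value is an assumption.

```python
def flag_mixed_pixels(ranges, tol=0.05):
    # Heuristic single-scan-line filter: a point is flagged as a likely mixed
    # pixel when its range lies strictly between those of its two neighbours
    # across a large depth jump, i.e. it floats in the empty gap between a
    # foreground object and the background surface.
    mixed = [False] * len(ranges)
    for i in range(1, len(ranges) - 1):
        lo, hi = sorted((ranges[i - 1], ranges[i + 1]))
        if hi - lo > tol and lo + tol < ranges[i] < hi - tol:
            mixed[i] = True
    return mixed

# Foreground plane at ~2 m, background wall at ~5 m, one mixed point at 3.4 m.
scan = [2.00, 2.01, 2.00, 3.40, 5.00, 5.01, 5.00]
print(flag_mixed_pixels(scan))   # flags only the 3.4 m point
```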
Improved dichotomous search frequency offset estimator for burst-mode continuous phase modulation
NASA Astrophysics Data System (ADS)
Zhai, Wen-Chao; Li, Zan; Si, Jiang-Bo; Bai, Jun
2015-11-01
A data-aided technique for carrier frequency offset estimation with continuous phase modulation (CPM) in burst-mode transmission is presented. The proposed technique first exploits a special pilot sequence, or training sequence, to form a sinusoidal waveform. Then, an improved dichotomous search frequency offset estimator is introduced to determine the frequency offset using the sinusoid. Theoretical analysis and simulation results indicate that our estimator is noteworthy in the following aspects. First, the estimator can operate independently of timing recovery. Second, it has a relatively low outlier threshold, i.e., the minimum signal-to-noise ratio (SNR) required to guarantee estimation accuracy. Finally, and most importantly, our estimator has reduced complexity compared to existing dichotomous search methods: it eliminates the need for the fast Fourier transform (FFT) and modulation removal, and exhibits a faster convergence rate without accuracy degradation. Project supported by the National Natural Science Foundation of China (Grant No. 61301179), the Doctorial Programs Foundation of the Ministry of Education, China (Grant No. 20110203110011), and the Programme of Introducing Talents of Discipline to Universities, China (Grant No. B08038).
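A baseline dichotomous search frequency estimator works in two stages: a coarse spectral peak search, then a binary refinement that repeatedly moves toward whichever of two neighbouring candidate frequencies has the larger spectral amplitude, halving the step each iteration. The Python sketch below implements this textbook scheme on a noiseless pilot tone; it is not the paper's improved, reduced-complexity variant, and the sizes are illustrative.

```python
import cmath, math

def spectrum_at(x, f):
    # |DTFT| of x evaluated at normalised frequency f (cycles/sample).
    return abs(sum(xn * cmath.exp(-2j * math.pi * f * n)
                   for n, xn in enumerate(x)))

def dichotomous_freq_estimate(x, n_coarse=64, n_iter=20):
    # Coarse stage: strongest of n_coarse uniformly spaced candidates.
    k = max(range(n_coarse), key=lambda k: spectrum_at(x, k / n_coarse))
    f, step = k / n_coarse, 1.0 / (2 * n_coarse)
    # Fine stage: dichotomous search around the coarse peak.
    for _ in range(n_iter):
        lo, hi = f - step, f + step
        f = lo if spectrum_at(x, lo) > spectrum_at(x, hi) else hi
        step /= 2
    return f

true_f = 0.1234   # normalised carrier frequency offset
pilot = [cmath.exp(2j * math.pi * true_f * n) for n in range(128)]
print(round(dichotomous_freq_estimate(pilot), 4))   # -> 0.1234
```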
Improved rapid magnitude estimation for a community-based, low-cost MEMS accelerometer network
Chung, Angela I.; Cochran, Elizabeth S.; Kaiser, Anna E.; Christensen, Carl M.; Yildirim, Battalgazi; Lawrence, Jesse F.
2015-01-01
Immediately following the Mw 7.2 Darfield, New Zealand, earthquake, over 180 Quake‐Catcher Network (QCN) low‐cost micro‐electro‐mechanical systems accelerometers were deployed in the Canterbury region. Using data recorded by this dense network from 2010 to 2013, we significantly improved the QCN rapid magnitude estimation relationship. The previous scaling relationship (Lawrence et al., 2014) did not accurately estimate the magnitudes of nearby (<35 km) events. The new scaling relationship estimates earthquake magnitudes within 1 magnitude unit of the GNS Science GeoNet earthquake catalog magnitudes for 99% of the events tested, within 0.5 magnitude units for 90% of the events, and within 0.25 magnitude units for 57% of the events. These magnitudes are reliably estimated within 3 s of the initial trigger recorded on at least seven stations. In this report, we present the methods used to calculate a new scaling relationship and demonstrate the accuracy of the revised magnitude estimates using a program that is able to retrospectively estimate event magnitudes using archived data.
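A network magnitude of this kind is typically obtained by mapping each station's peak ground acceleration and distance through a scaling relation, then aggregating once enough stations have triggered. The Python sketch below uses a generic form M = A + B*log10(PGA) + C*log10(R) with invented coefficients; the published QCN relationship and its coefficients differ.

```python
import math, statistics

# Hypothetical coefficients for a QCN-style scaling relation
# M = A + B*log10(PGA) + C*log10(R); the published coefficients differ.
A, B, C = 2.0, 1.0, 1.5

def station_magnitude(pga_g, dist_km):
    return A + B * math.log10(pga_g) + C * math.log10(dist_km)

def network_magnitude(observations, min_stations=7):
    # Rapid estimate: median of per-station magnitudes, issued only once at
    # least min_stations stations have triggered (the report uses seven).
    if len(observations) < min_stations:
        return None
    return statistics.median(station_magnitude(p, r) for p, r in observations)

# Synthetic M 5.0 event observed at eight stations: (PGA g, distance km).
obs = [(10 ** ((5.0 - A - C * math.log10(r)) / B), r)
       for r in (5.0, 10.0, 20.0, 35.0, 50.0, 80.0, 120.0, 200.0)]
print(round(network_magnitude(obs), 6))   # -> 5.0
```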
Improvement of PPP-inferred tropospheric estimates by integer ambiguity resolution
NASA Astrophysics Data System (ADS)
Shi, J.; Gao, Y.
2012-11-01
Integer ambiguity resolution in Precise Point Positioning (PPP) can improve positioning accuracy and reduce convergence time. The decoupled clock model proposed by Collins (2008) has been used to facilitate integer ambiguity resolution in PPP, and research has been conducted to assess the model's potential to improve positioning accuracy and reduce positioning convergence time. In particular, the biggest benefits have been identified for the positioning solutions within short observation periods such as one hour. However, there is little work reported about the model's potential to improve the estimation of the tropospheric parameter within short observation periods. This paper investigates the effect of PPP ambiguity resolution on the accuracy of the tropospheric estimates within one hour. The tropospheric estimates with float and fixed ambiguities within one hour are compared to two external references. The first reference is the International GNSS Service (IGS) final troposphere product based on the PPP technique. The second reference is the Constellation Observing System for Meteorology Ionosphere and Climate (COSMIC) radio occultation (RO) event based on the atmospheric profiles along the signal travel path. A comparison among ten co-located ground-based GPS and space-based RO troposphere zenith path delays shows that the mean bias of the troposphere estimates with float ambiguities can be significantly reduced from 30.1 to 17.0 mm when compared to the IGS troposphere product and from 36.3 to 19.7 mm when compared to the COSMIC RO. The root mean square (RMS) accuracy improvement of the tropospheric parameters by the ambiguity resolution is 33.3% when compared to the IGS products and 44.3% when compared to the COSMIC RO. All these improvements are achieved within one hour, which indicates the promising prospect of adopting PPP integer ambiguity resolution for time-critical applications such as typhoon prediction.
Wang, Chaolong; Zhan, Xiaowei; Liang, Liming; Abecasis, Gonçalo R; Lin, Xihong
2015-06-01
Accurate estimation of individual ancestry is important in genetic association studies, especially when a large number of samples are collected from multiple sources. However, existing approaches developed for genome-wide SNP data do not work well with modest amounts of genetic data, such as in targeted sequencing or exome chip genotyping experiments. We propose a statistical framework to estimate individual ancestry in a principal component ancestry map generated by a reference set of individuals. This framework extends and improves upon our previous method for estimating ancestry using low-coverage sequence reads (LASER 1.0) to analyze either genotyping or sequencing data. In particular, we introduce a projection Procrustes analysis approach that uses high-dimensional principal components to estimate ancestry in a low-dimensional reference space. Using extensive simulations and empirical data examples, we show that our new method (LASER 2.0), combined with genotype imputation on the reference individuals, can substantially outperform LASER 1.0 in estimating fine-scale genetic ancestry. Specifically, LASER 2.0 can accurately estimate fine-scale ancestry within Europe using either exome chip genotypes or targeted sequencing data with off-target coverage as low as 0.05×. Under the framework of LASER 2.0, we can estimate individual ancestry in a shared reference space for samples assayed at different loci or by different techniques. Therefore, our ancestry estimation method will accelerate discovery in disease association studies not only by helping model ancestry within individual studies but also by facilitating combined analysis of genetic data from multiple sources. PMID:26027497
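The core of the projection Procrustes step is aligning sample coordinates from a study-specific PCA onto a fixed reference ancestry map by an optimal rotation, scale and shift. The sketch below shows the 2-D similarity-Procrustes case, conveniently solved in closed form with points encoded as complex numbers; LASER 2.0 works in higher-dimensional PC spaces with considerably more machinery, so treat this as a minimal illustration.

```python
def procrustes_fit_2d(src, dst):
    # Closed-form 2-D similarity Procrustes fit (rotation + scale + shift)
    # with points encoded as complex numbers; returns a src -> dst mapping.
    n = len(src)
    mu_s, mu_d = sum(src) / n, sum(dst) / n
    a = [z - mu_s for z in src]
    b = [z - mu_d for z in dst]
    rot_scale = (sum(ac.conjugate() * bc for ac, bc in zip(a, b))
                 / sum(abs(ac) ** 2 for ac in a))
    return lambda z: rot_scale * (z - mu_s) + mu_d

# Reference-panel PC coordinates, and the same individuals as they appear in
# a study-specific PCA (rotated 90 degrees, doubled in scale, shifted).
ref = [1 + 1j, 3 + 1j, 2 + 4j, 0 + 2j]
study = [2j * z + (5 - 2j) for z in ref]
to_ref = procrustes_fit_2d(study, ref)

new_sample = 2j * (2 + 2j) + (5 - 2j)   # unseen individual, study space
print(abs(to_ref(new_sample) - (2 + 2j)) < 1e-9)   # -> True: lands at (2, 2)
```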
Probe Region Expression Estimation for RNA-Seq Data for Improved Microarray Comparability.
Uziela, Karolis; Honkela, Antti
2015-01-01
Rapidly growing public gene expression databases contain a wealth of data for building an unprecedentedly detailed picture of human biology and disease. This data comes from many diverse measurement platforms that make integrating it all difficult. Although RNA-sequencing (RNA-seq) is attracting the most attention, at present, the rate of new microarray studies submitted to public databases far exceeds the rate of new RNA-seq studies. There is clearly a need for methods that make it easier to combine data from different technologies. In this paper, we propose a new method for processing RNA-seq data that yields gene expression estimates that are much more similar to corresponding estimates from microarray data, hence greatly improving cross-platform comparability. The method we call PREBS is based on estimating the expression from RNA-seq reads overlapping the microarray probe regions, and processing these estimates with standard microarray summarisation algorithms. Using paired microarray and RNA-seq samples from TCGA LAML data set we show that PREBS expression estimates derived from RNA-seq are more similar to microarray-based expression estimates than those from other RNA-seq processing methods. In an experiment to retrieve paired microarray samples from a database using an RNA-seq query sample, gene signatures defined based on PREBS expression estimates were found to be much more accurate than those from other methods. PREBS also allows new ways of using RNA-seq data, such as expression estimation for microarray probe sets. An implementation of the proposed method is available in the Bioconductor package "prebs."
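The first step of the approach, counting RNA-seq reads that overlap microarray probe regions, reduces to interval-overlap counting. A minimal Python sketch, with half-open intervals on one strand assumed for simplicity (the real pipeline handles genome coordinates and then applies microarray summarisation algorithms, which are not shown):

```python
def probe_region_counts(read_intervals, probe_regions):
    # Count RNA-seq reads overlapping each microarray probe region; the
    # counts would then feed a standard microarray summarisation step.
    # Intervals are half-open (start, end) position pairs.
    return [sum(1 for r0, r1 in read_intervals if r0 < p1 and p0 < r1)
            for p0, p1 in probe_regions]

reads = [(0, 50), (40, 90), (200, 250)]
probes = [(30, 60), (100, 150)]
print(probe_region_counts(reads, probes))   # -> [2, 0]
```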
NASA Technical Reports Server (NTRS)
Allord, G. J. (Principal Investigator); Scarpace, F. L.
1981-01-01
Estimates of low flow and flood frequency in several southwestern Wisconsin basins were improved by determining land cover from LANDSAT imagery. With the use of estimates of land cover in multiple-regression techniques, the standard error of estimate (SE) for the least annual 7-day low flow for 2- and 10-year recurrence intervals of ungaged sites were lowered by 9% each. The SE of flood frequency in the 'Driftless Area' of Wisconsin for 10-, 50-, and 100-year recurrence intervals were lowered by 14%. Four of nine basin characteristics determined from satellite imagery were significant variables in the multiple-regression techniques, whereas only 1 of the 12 characteristics determined from topographic maps was significant. The percentages of land cover categories in each basin were determined by merging basin boundaries, digitized from quadrangles, with a classified LANDSAT scene. Both the basin boundary X-Y polygon coordinates and the satellite coordinates were converted to latitude-longitude for merging compatibility.
Tsiatis, Anastasios A; Davidian, Marie; Cao, Weihua
2011-06-01
A routine challenge is that of making inference on parameters in a statistical model of interest from longitudinal data subject to dropout, which are a special case of the more general setting of monotonely coarsened data. Considerable recent attention has focused on doubly robust (DR) estimators, which in this context involve positing models for both the missingness (more generally, coarsening) mechanism and aspects of the distribution of the full data, that have the appealing property of yielding consistent inferences if only one of these models is correctly specified. DR estimators have been criticized for potentially disastrous performance when both of these models are even only mildly misspecified. We propose a DR estimator applicable in general monotone coarsening problems that achieves comparable or improved performance relative to existing DR methods, which we demonstrate via simulation studies and by application to data from an AIDS clinical trial. PMID:20731640
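The doubly robust property can be seen in a toy augmented inverse-probability-weighted (AIPW) estimator of a mean under data missing at random: it combines an inverse-probability-weighted term with an outcome-regression augmentation and stays consistent if either model is right. This Python sketch is a standard textbook AIPW estimator with invented data, not the authors' improved estimator for monotone coarsening.

```python
def doubly_robust_mean(data, prop_model, outcome_model):
    # AIPW estimator of E[Y]: inverse-probability-weighted term plus an
    # outcome-regression augmentation.  Consistent if either the missingness
    # model or the outcome model is correctly specified.
    total = 0.0
    for x, y, observed in data:
        pi = prop_model(x)        # modelled P(observed | x)
        m = outcome_model(x)      # modelled E[Y | x]
        r = 1.0 if observed else 0.0
        total += r * (y if observed else 0.0) / pi - ((r - pi) / pi) * m
    return total / len(data)

# Truth: y = 2x.  The outcome model is correct while the propensity model is
# a crude constant -- yet the estimate equals the full-data mean exactly.
data = [(1.0, 2.0, True), (2.0, 4.0, False), (3.0, 6.0, True), (4.0, 8.0, False)]
est = doubly_robust_mean(data, lambda x: 0.5, lambda x: 2.0 * x)
print(est)   # -> 5.0
```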
Improving Land Cover Product-Based Estimates of the Extent of Fragmented Cover Types
NASA Technical Reports Server (NTRS)
Hlavka, Christine A.; Dungan, Jennifer
2002-01-01
The effect of changing land use/land cover on regional and global climate and ecosystems depends on accurate estimates of the extent of critical land cover types such as Arctic wetlands and fire scars in boreal forests. To address this information requirement, land cover products at coarse spatial resolution, such as Advanced Very High Resolution Radiometer (AVHRR)-based maps and the MODIS Land Cover Product, are being produced. The accuracy of the estimated extent of highly fragmented cover types such as fire scars and ponds is in doubt because much of the area (the numerous scars and ponds smaller than the pixel size) is missed. A promising method for improving areal estimates involves modeling the observed distribution of fragment sizes as a type of truncated distribution, then estimating the sum of the unobserved sizes in the lower, truncated tail and adding it to the sum of observed fragment sizes. The method has been tested with both simulated and actual cover products.
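One concrete version of such a tail correction: if fragment sizes follow a Pareto law down to a physical minimum x_min but only fragments larger than the resolution limit c are observed, then the observed sizes are Pareto(alpha, c); alpha can be fit by maximum likelihood, and since the expected observable fraction of total area is (c/x_min)**(1-alpha), the observed sum is scaled up by (c/x_min)**(alpha-1). The Pareto form and x_min are assumptions of this Python sketch, not the paper's fitted truncated distribution.

```python
import math

def corrected_total_area(observed, c, x_min):
    # Observed sizes (all >= c) modelled as Pareto(alpha, c); alpha by MLE.
    # Under a Pareto law extending down to x_min, the observable fraction of
    # total area is (c/x_min)**(1-alpha), so the observed sum is divided by
    # it, i.e. multiplied by (c/x_min)**(alpha-1).
    n = len(observed)
    alpha = n / sum(math.log(s / c) for s in observed)
    return sum(observed) * (c / x_min) ** (alpha - 1.0), alpha

# Deterministic Pareto(alpha=2, c=1) sample drawn from mid-grid quantiles.
sizes = [(1.0 - (i + 0.5) / 100.0) ** -0.5 for i in range(100)]
total, alpha = corrected_total_area(sizes, c=1.0, x_min=0.1)
print(round(alpha, 1), total > sum(sizes))   # alpha recovered near 2
```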
Approaches for Improved Doppler Estimation in Lidar Remote Sensing of Atmospheric Dynamics
NASA Astrophysics Data System (ADS)
Bhaskaran, Sreevatsan; Calhoun, Ronald
2016-06-01
Laser radar (Lidar) has been used extensively for remote sensing of wind patterns, turbulence in the atmospheric boundary layer, and other important atmospheric transport phenomena. As in most narrowband radar applications, the radial velocity of remote objects is encoded in the Doppler shift of the backscattered signal relative to the transmitted signal. In contrast to many applications, however, the backscattered signal in atmospheric Lidar sensing arises from a multitude of moving particles in the spatial cell under examination rather than from a few prominent "target" scattering features. This complicates the process of extracting a single Doppler value, and corresponding radial velocity figure, to associate with the cell. This paper summarizes the prevalent methods for Doppler estimation in atmospheric Lidar applications and proposes a computationally efficient scheme for improving Doppler estimation by exploiting the local structure of spectral density estimates near spectral peaks.
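One standard way to exploit the local structure of a spectral density near its peak is three-bin interpolation around the strongest DFT bin; the Python sketch below uses Jacobsen's complex-spectrum interpolator on a noiseless tone. This illustrates the general idea only and is not the scheme proposed in the paper.

```python
import cmath, math

def dft(x):
    N = len(x)
    return [sum(xn * cmath.exp(-2j * math.pi * k * n / N)
                for n, xn in enumerate(x)) for k in range(N)]

def refined_peak_frequency(X):
    # Jacobsen's three-bin interpolator: refine the raw peak-bin frequency
    # using the complex spectrum at the peak and its two neighbours, i.e.
    # the local structure of the spectral density near the peak.
    mags = [abs(v) for v in X]
    k = max(range(1, len(X) - 1), key=lambda i: mags[i])
    delta = ((X[k - 1] - X[k + 1]) / (2 * X[k] - X[k - 1] - X[k + 1])).real
    return (k + delta) / len(X)

true_f = 0.207    # Doppler shift, cycles per sample
signal = [cmath.exp(2j * math.pi * true_f * n) for n in range(64)]
print(round(refined_peak_frequency(dft(signal)), 3))   # -> 0.207
```

The raw peak bin alone would give 13/64 = 0.203; the interpolation recovers the off-bin frequency.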
An improved approach for rainfall estimation over Indian summer monsoon region using Kalpana-1 data
NASA Astrophysics Data System (ADS)
Mahesh, C.; Prakash, Satya; Sathiyamoorthy, V.; Gairola, R. M.
2014-08-01
In this paper, an improved Kalpana-1 infrared (IR) based rainfall estimation algorithm, specific to the Indian summer monsoon region, is presented. This algorithm comprises two parts: (i) development of a Kalpana-1 IR based rainfall estimation algorithm that corrects the underestimation of orographic warm rain generally suffered by IR based rainfall estimation methods, and (ii) a cooling index that accounts for the growth and decay of clouds, thereby improving the precipitation estimate. In the first part, a power-law based regression relationship between cloud top temperature from the Kalpana-1 IR channel and rainfall from the Tropical Rainfall Measuring Mission (TRMM) precipitation radar, specific to the Indian region, is developed. This algorithm tries to overcome the inherent orographic issues of IR based rainfall estimation techniques. Over the windward sides of the Western Ghats, Himalayas and Arakan Yoma mountain chains, separate regression coefficients are generated to account for orographically produced warm rainfall, which global rainfall retrieval methods generally fail to detect over these regions. Rain estimated over the orographic region is suitably blended with the rain retrieved over the entire domain, comprising the Indian monsoon region and parts of the Indian Ocean, using another regression relationship. While blending, a smoothening function is applied to avoid rainfall artefacts, and an elliptical weighting function is introduced for the purpose. In the second part, a cooling index to distinguish rain/no-rain conditions is developed using Kalpana-1 IR data. The cooling index identifies cloud growing/decaying regions using two consecutive half-hourly IR images of Kalpana-1 by assigning appropriate weights to growing and non-growing clouds. Intercomparison of estimated rainfall from the present algorithm with TRMM-3B42/3B43 precipitation products and India Meteorological Department (IMD) gridded rain gauge data are found to be
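A power-law regression between cloud top temperature and rain rate can be fit by ordinary least squares in log-log space; separate regional coefficients, as used over the orographic zones above, amount to running the same fit on regional subsets. The calibration pairs below are invented for illustration and are not the paper's coefficients.

```python
import math

def fit_power_law(cloud_top_K, rain_mmph):
    # Least-squares fit of ln(R) = ln(a) + b*ln(T), i.e. R = a * T**b.
    xs = [math.log(t) for t in cloud_top_K]
    ys = [math.log(r) for r in rain_mmph]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b

# Invented calibration pairs: colder cloud tops -> heavier rain, so b < 0.
temps = [200.0, 210.0, 220.0, 230.0, 240.0]
rain = [12.0, 7.5, 4.8, 3.2, 2.2]
a, b = fit_power_law(temps, rain)
print(b < 0, a * 205.0 ** b > a * 235.0 ** b)   # -> True True
```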
NASA Astrophysics Data System (ADS)
Padgett, J. S.; Engelhart, S. E.; Hemphill-Haley, E.; Kelsey, H. M.; Witter, R. C.
2015-12-01
Geological estimates of subsidence from past earthquakes help to constrain Cascadia subduction zone (CSZ) earthquake rupture models. To improve subsidence estimates for past earthquakes along the southern CSZ, we apply transfer function analysis to microfossils from 3 intertidal marshes in northern Humboldt Bay, California, ~60 km north of the Mendocino Triple Junction. The transfer function method uses elevation-dependent intertidal foraminiferal and diatom assemblages to reconstruct relative sea-level (RSL) change indicated by shifts in microfossil assemblages. We interpret stratigraphic evidence associated with sudden shifts in microfossils to reflect sudden RSL rise due to subsidence during past CSZ earthquakes. Laterally extensive (>5 km) and sharp mud-over-peat contacts beneath marshes at Jacoby Creek, Mad River Slough, and McDaniel Slough demonstrate widespread earthquake subsidence in northern Humboldt Bay. C-14 ages of plant macrofossils taken from above and below three contacts that correlate across all three sites provide estimates of the times of subsidence at ~250 yr BP, ~1300 yr BP and ~1700 yr BP. Two further contacts observed at only two sites provide evidence for subsidence during possible CSZ earthquakes at ~900 yr BP and ~1100 yr BP. Our study contributes 20 AMS radiocarbon ages of identifiable plant macrofossils that improve estimates of the timing of past earthquakes along the southern CSZ. We anticipate that our results will provide more accurate and precise reconstructions of RSL change induced by southern CSZ earthquakes. Prior to our work, studies in northern Humboldt Bay provided subsidence estimates with vertical uncertainties >±0.5 m, too imprecise to adequately constrain earthquake rupture models. Our method, applied recently in coastal Oregon, has shown that subsidence during past CSZ earthquakes can be reconstructed with a precision of ±0.3 m and substantially improves constraints on rupture models used for seismic hazard
The importance of crown dimensions to improve tropical tree biomass estimates.
Goodman, Rosa C; Phillips, Oliver L; Baker, Timothy R
2014-06-01
Tropical forests play a vital role in the global carbon cycle, but the amount of carbon they contain and its spatial distribution remain uncertain. Recent studies suggest that once tree height is accounted for in biomass calculations, in addition to diameter and wood density, carbon stock estimates are reduced in many areas. However, it is possible that larger crown sizes might offset the reduction in biomass estimates in some forests where tree heights are lower because even comparatively short trees develop large, well-lit crowns in or above the forest canopy. While current allometric models and theory focus on diameter, wood density, and height, the influence of crown size and structure has not been well studied. To test the extent to which accounting for crown parameters can improve biomass estimates, we harvested and weighed 51 trees (11-169 cm diameter) in southwestern Amazonia where no direct biomass measurements have been made. The trees in our study had nearly half of total aboveground biomass in the branches (44% +/- 2% [mean +/- SE]), demonstrating the importance of accounting for tree crowns. Consistent with our predictions, key pantropical equations that include height, but do not account for crown dimensions, underestimated the sum total biomass of all 51 trees by 11% to 14%, primarily due to substantial underestimates of many of the largest trees. In our models, including crown radius greatly improves performance and reduces error, especially for the largest trees. In addition, over the full data set, crown radius explained more variation in aboveground biomass (10.5%) than height (6.0%). Crown form is also important: Trees with a monopodial architectural type are estimated to have 21-44% less mass than trees with other growth patterns. Our analysis suggests that accounting for crown allometry would substantially improve the accuracy of tropical estimates of tree biomass and its distribution in primary and degraded forests.
NASA Astrophysics Data System (ADS)
Wang, Chao; Ji, Ming; Zhang, Ying; Jiang, Wentao; Lu, Xiaoyan; Wang, Jiaoying; Yang, Heng
2016-01-01
Electronic image stabilization based on an improved optical-flow motion vector estimation technique can effectively correct abnormal image motion such as jitter and rotation. First, ORB features are extracted from the image and a set of regions is built on these features. Second, the optical-flow vector is computed in the feature regions; to reduce the computational complexity, a multi-resolution pyramid strategy is used to calculate the motion vector of the frame. Finally, a qualitative and quantitative analysis of the algorithm's performance is carried out. The results show that the proposed algorithm has better stability than image stabilization based on the traditional optical-flow motion vector estimation method.
NASA Astrophysics Data System (ADS)
Brena, A.; Kendall, A. D.; Hyndman, D. W.
2013-12-01
Large-scale agroecosystems are major providers of agricultural commodities and an important component of the world's food supply. In agroecosystems that depend mainly on groundwater, it is well known that long-term sustainability can be at risk because of water management strategies and climatic trends. The water balance of groundwater-dependent agroecosystems such as the High Plains aquifer (HPA) is often dominated by pumping and irrigation, which enhance hydrological processes such as evapotranspiration, return flow and recharge in cropland areas. This work provides and validates new quantitative groundwater estimation methods for the HPA that combine satellite-based estimates of terrestrial water storage (GRACE), hydrological data assimilation products (NLDAS-2) and in situ measurements of groundwater levels and irrigation rates. The combined data can be used to elucidate the controls of irrigation on the water balance components of agroecosystems, such as crop evapotranspiration, soil moisture deficit and recharge. Our work covers a decade of continuous observations and model estimates from 2003 to 2013, which includes a significant drought that began in 2011. This study aims to: (1) test the sensitivity of groundwater storage to soil moisture and irrigation, (2) improve estimates of irrigation and soil moisture deficits, and (3) infer mean values of groundwater recharge across the HPA. The results show (1) significant improvements in GRACE-derived aquifer storage changes using methods that incorporate irrigation and soil moisture deficit data, (2) an acceptable correlation between the observed and estimated aquifer storage time series for the analyzed period, and (3) empirically estimated annual rates of groundwater recharge that are consistent with previous geochemical and modeling studies. We suggest testing these correction methods in other large-scale agroecosystems with intensive groundwater pumping and irrigation rates.
Experimental verification of an interpolation algorithm for improved estimates of animal position
NASA Astrophysics Data System (ADS)
Schell, Chad; Jaffe, Jules S.
2004-07-01
This article presents experimental verification of an interpolation algorithm that was previously proposed in Jaffe [J. Acoust. Soc. Am. 105, 3168-3175 (1999)]. The goal of the algorithm is to improve estimates of both target position and target strength by minimizing a least-squares residual between noise-corrupted target measurement data and the output of a model of the sonar's amplitude response to a target at a set of known locations. Although this positional estimator was shown to be a maximum likelihood estimator, in principle, experimental verification was desired because of interest in understanding its true performance. Here, the accuracy of the algorithm is investigated by analyzing the correspondence between a target's true position and the algorithm's estimate. True target position was measured by precise translation of a small test target (bead) or from the analysis of images of fish from a coregistered optical imaging system. Results with the stationary spherical test bead in a high signal-to-noise environment indicate that a large increase in resolution is possible, while results with commercial aquarium fish indicate a smaller increase is obtainable. However, in both experiments the algorithm provides improved estimates of target position over those obtained by simply accepting the angular positions of the sonar beam with maximum output as target position. In addition, increased accuracy in target strength estimation is possible by considering the effects of the sonar beam patterns relative to the interpolated position. A benefit of the algorithm is that it can be applied ``ex post facto'' to existing data sets from commercial multibeam sonar systems when only the beam intensities have been stored after suitable calibration.
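The core of the algorithm described above, fitting a modeled beam response to noisy multibeam amplitudes by least squares over a fine grid of candidate positions, can be sketched as follows. This is a minimal illustration under assumed conditions, not the published implementation; the Gaussian beam model and all parameter values are hypothetical.

```python
import numpy as np

def estimate_position(measured, beam_response, candidates):
    """Return the candidate position whose modeled beam amplitudes
    best match the measured amplitudes in the least-squares sense."""
    best_pos, best_res = None, np.inf
    for pos in candidates:
        model = beam_response(pos)
        # best-fitting amplitude scale for this position (related to target strength)
        a = (model @ measured) / (model @ model)
        res = np.sum((measured - a * model) ** 2)
        if res < best_res:
            best_pos, best_res = pos, res
    return best_pos
```

Accepting the beam with maximum output corresponds to restricting `candidates` to the beam centers; evaluating a grid finer than the beam spacing is what yields the interpolation gain.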
NASA Astrophysics Data System (ADS)
Shafian, S.; Maas, S. J.; Rajan, N.
2014-12-01
Water resources and agricultural applications require knowledge of crop water use (CWU) over a range of spatial and temporal scales. Due to the sparse spatial distribution of meteorological stations, the resolution of CWU estimates based on these data is fairly coarse and not particularly suitable or reliable for water resources planning, irrigation scheduling and decision making. Various methods have been developed for quantifying CWU of agricultural crops. In this study, an improved version of the spectral crop coefficient approach which includes the effects of stomatal closure is applied. Raw digital count (DC) data in the red, near-infrared, and thermal infrared (TIR) spectral bands of the Landsat-7 and Landsat-8 imaging sensors are used to construct the TIR-ground cover (GC) pixel data distribution and estimate the effects of stomatal closure. CWU is then estimated by combining results of the spectral crop coefficient approach and the stomatal closure effect. To test this approach, evapotranspiration was measured in 5 agricultural fields in the semi-arid Texas High Plains during the 2013 and 2014 growing seasons and compared to corresponding estimated values of CWU determined using this approach. The results showed that the estimated CWU from this approach was strongly correlated (R2 = 0.79) with observed evapotranspiration. In addition, the results showed that considering the stomatal closure effect in the proposed approach can improve the accuracy of the spectral crop coefficient method. These results suggest that the proposed approach is suitable for operational estimation of evapotranspiration and irrigation scheduling where irrigation is used to replace the daily CWU of a crop.
Improved Estimates of Capital Formation in the National Health Expenditure Accounts
Sensenig, Arthur L.; Donahoe, Gerald F.
2006-01-01
The National Health Expenditure Accounts (NHEA) were revised with the release of the 2004 estimates. The largest revision was the incorporation of a more comprehensive measure of investment in medical sector capital. The revision raised total health expenditures' share of gross domestic product (GDP) from 15.4 to 15.8 percent in 2003. The improved measure encompasses investment in moveable equipment and software, as well as expenditures for the construction of structures used by the medical sector. PMID:17290665
NASA Astrophysics Data System (ADS)
Cui, Weiguang; Power, Chris; Biffi, Veronica; Borgani, Stefano; Murante, Giuseppe; Fabjan, Dunja; Knebe, Alexander; Lewis, Geraint F.; Poole, Greg B.
2016-03-01
Galaxy clusters are an established and powerful test-bed for theories of both galaxy evolution and cosmology. Accurate interpretation of cluster observations often requires robust identification of the location of the centre. Using a statistical sample of clusters drawn from a suite of cosmological simulations in which we have explored a range of galaxy formation models, we investigate how the location of this centre is affected by the choice of observable - stars, hot gas, or the full mass distribution as can be probed by the gravitational potential. We explore several measures of cluster centre: the minimum of the gravitational potential, which we would expect to define the centre if the cluster is in dynamical equilibrium; the peak of the density; the centre of the brightest cluster galaxy (BCG); and the peak and centroid of X-ray luminosity. We find that the BCG centre correlates more strongly with the minimum of the gravitational potential than the X-ray defined centres do, while active galactic nuclei feedback acts to significantly enhance the offset between the peak X-ray luminosity and the minimum of the gravitational potential. These results highlight the importance of centre identification when interpreting cluster observations, in particular when comparing theoretical predictions and observational data.
Estimating Typhoon Rainfall over Sea from SSM/I Satellite Data Using an Improved Genetic Programming
NASA Astrophysics Data System (ADS)
Yeh, K.; Wei, H.; Chen, L.; Liu, G.
2010-12-01
This paper proposes an improved multi-run genetic programming (GP) and applies it to predict rainfall using meteorological satellite data. GP is a well-known evolutionary programming and data mining method, used to automatically discover complex relationships in nonlinear systems. The main advantage of GP is that it optimizes appropriate types of functions and their associated coefficients simultaneously. This study improves the ability to escape local optima during the optimization procedure: the GP is run several times in succession, with the terminal nodes of the next run replaced by the best solution of the current run. The improved GP thereby obtains a highly nonlinear mathematical equation for estimating rainfall. In the case study, the improved GP is combined with SSM/I satellite data to establish a suitable method for estimating rainfall at the sea surface during typhoon periods. The estimated rainfalls are then verified against data from four rainfall stations located at Peng-Jia-Yu, Don-Gji-Dao, Lan-Yu, and Green Island, four small islands around Taiwan. The results show that the improved GP can generate a sophisticated and accurate nonlinear mathematical equation through a two-run learning procedure which outperforms traditional multiple linear regression, empirical equations, and back-propagation networks.
Wan Dali, Wan Putri Elena; Lua, Pei Lin
2013-01-01
The aim of the study was to evaluate the effectiveness of implementing multimodal nutrition education intervention (NEI) to improve dietary intake among university students. The design of study used was cluster randomised controlled design at four public universities in East Coast of Malaysia. A total of 417 university students participated in the study. They were randomly selected and assigned into two arms, that is, intervention group (IG) or control group (CG) according to their cluster. The IG received 10-week multimodal intervention using three modes (conventional lecture, brochures, and text messages) while CG did not receive any intervention. Dietary intake was assessed before and after intervention and outcomes reported as nutrient intakes as well as average daily servings of food intake. Analysis of covariance (ANCOVA) and adjusted effect size were used to determine difference in dietary changes between groups and time. Results showed that, compared to CG, participants in IG significantly improved their dietary intake by increasing their energy intake, carbohydrate, calcium, vitamin C and thiamine, fruits and 100% fruit juice, fish, egg, milk, and dairy products while at the same time significantly decreased their processed food intake. In conclusion, multimodal NEI focusing on healthy eating promotion is an effective approach to improve dietary intakes among university students. PMID:24069535
NASA Astrophysics Data System (ADS)
Žugec, P.; Bosnar, D.; Colonna, N.; Gunsing, F.
2016-08-01
The relation between the neutron background in neutron capture measurements and the neutron sensitivity of the experimental setup is examined. It is pointed out that a proper estimate of the neutron background may only be obtained by means of dedicated simulations taking into account the full framework of the neutron-induced reactions and their complete temporal evolution. No other presently available method seems to provide reliable results, in particular under the capture resonances. An improved neutron background estimation technique is proposed; its main improvement concerns the treatment of the neutron sensitivity, taking into account the temporal evolution of the neutron-induced reactions. The technique is complemented by an advanced data analysis procedure based on relativistic kinematics of neutron scattering. The analysis procedure allows for the calculation of the neutron background in capture measurements without requiring the time-consuming simulations to be adapted to each particular sample. A suggestion is made on how to improve the neutron background estimates if neutron background simulations are not available.
NASA Astrophysics Data System (ADS)
Wang, Rong; Chen, Jing M.; Pavlic, Goran; Arain, Altaf
2016-09-01
Winter leaf area index (LAI) of evergreen coniferous forests exerts strong control on the interception of snow, snowmelt and energy balance. Simulation of winter LAI and associated winter processes in land surface models is challenging. Retrieving winter LAI from remote sensing data is difficult due to cloud contamination, poor illumination, lower solar elevation and higher radiation reflection by the snow background. Underestimated winter LAI in evergreen coniferous forests is one of the major issues limiting the application of current remote sensing LAI products, and it has not been fully addressed in past studies. In this study, we used needle lifespan to correct winter LAI in a remote sensing product developed by the University of Toronto. For validation purposes, the corrected winter LAI was then used to calculate land surface albedo at five FLUXNET coniferous forests in Canada. The RMSE and bias values for estimated albedo were 0.05 and 0.011, respectively, for all sites. The albedo map over coniferous forests across Canada produced with the corrected winter LAI showed much better agreement with the GLASS (Global LAnd Surface Satellites) albedo product than the one produced with the uncorrected winter LAI. The results revealed that the corrected winter LAI yielded much greater accuracy in simulating land surface albedo, making the new LAI product an improvement over the original one. Our study will help to increase the usability of remote sensing LAI products in land surface energy budget modeling.
NASA Astrophysics Data System (ADS)
Singh, Gurmeet; Nandi, Apurba; Gadre, Shridhar R.
2016-03-01
A pragmatic method based on the molecular tailoring approach (MTA) for estimating the complete basis set (CBS) limit at Møller-Plesset second order perturbation (MP2) theory accurately for large molecular clusters with limited computational resources is developed. It is applied to water clusters, (H2O)n (n = 7, 8, 10, 16, 17, and 25) optimized employing aug-cc-pVDZ (aVDZ) basis-set. Binding energies (BEs) of these clusters are estimated at the MP2/aug-cc-pVNZ (aVNZ) [N = T, Q, and 5 (whenever possible)] levels of theory employing grafted MTA (GMTA) methodology and are found to lie within 0.2 kcal/mol of the corresponding full calculation MP2 BE, wherever available. The results are extrapolated to CBS limit using a three point formula. The GMTA-MP2 calculations are feasible on off-the-shelf hardware and show around 50%-65% saving of computational time. The methodology has a potential for application to molecular clusters containing ˜100 atoms.
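The abstract cites a three-point CBS extrapolation without giving its form; a common choice is the exponential model E(n) = E_CBS + A exp(-Bn) over cardinal numbers n = 3, 4, 5, which the sketch below assumes. This assumed form is for illustration and is not necessarily the exact formula used in the paper.

```python
def cbs_extrapolate(e3, e4, e5):
    """Three-point CBS extrapolation assuming E(n) = E_cbs + A*exp(-B*n),
    with energies at cardinal numbers n = 3 (aVTZ), 4 (aVQZ), 5 (aV5Z).
    Solving the three equations for E_cbs eliminates A and B."""
    r = (e5 - e4) / (e4 - e3)          # equals exp(-B) for this model
    return e5 + (e5 - e4) * r / (1.0 - r)
```

Because successive differences shrink by the constant factor exp(-B), the remaining geometric tail can be summed in closed form, which is what the last line does.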
Robineau, Olivier; Frange, Pierre; Barin, Francis; Cazein, Françoise; Girard, Pierre-Marie; Chaix, Marie-Laure; Kreplak, Georges; Boelle, Pierre-Yves; Morand-Joubert, Laurence
2015-01-01
Objectives To relate socio-demographic and virological information to phylogenetic clustering in HIV-infected patients in a limited geographical area and to evaluate the role of recently infected individuals in the spread of HIV. Methods HIV-1 pol sequences from newly diagnosed and treatment-naive patients receiving follow-up between 2008 and 2011 by physicians belonging to a health network in Paris were used to build a phylogenetic tree using neighbour-joining analysis. Time since infection was estimated by immunoassay to define recently infected patients (very early infected presenters, VEP). Data on socio-demographic, clinical and biological features in clustered and non-clustered patients were compared. The structure of chains of infection was also analysed. Results 547 patients were included; 49 chains of infection containing 108 (20%) patients were identified by phylogenetic analysis. Eighty individuals formed pairs and 28 individuals belonged to larger clusters. The median time between two successive HIV diagnoses in the same chain of infection was 248 days [CI = 176–320]. 34.7% of individuals were considered as VEP, and 27% of them were included in chains of infection. Multivariable analysis showed that belonging to a cluster was more frequent in VEP and in those under 30 years old (OR: 3.65, 95% CI 1.49–8.95, p = 0.005 and OR: 2.42, 95% CI 1.05–5.85, p = 0.04, respectively). The prevalence of drug resistance was not associated with belonging to a pair or a cluster. Within chains, VEP were not grouped together more than chance predicted (p = 0.97). Conclusions Most newly diagnosed patients did not belong to a chain of infection, confirming the importance of undiagnosed or untreated HIV-infected individuals in transmission. Furthermore, clusters involving both recently infected individuals and longstanding infected individuals support a substantial role in transmission of the latter before diagnosis. PMID:26267615
NASA Astrophysics Data System (ADS)
Mianabadi, Ameneh; Alizadeh, Amin; Sanaeinejad, Hossein; Ghahraman, Bijan; Davary, Kamran; Coenders-Gerrits, Miriam
2015-04-01
To obtain an accurate estimate of actual evapotranspiration, daily MODIS images would ideally be used. Under cloudy conditions, however, appropriate images are difficult to obtain, and interpreting every image is time-consuming. In this paper we therefore select the most suitable images to improve the estimation of actual evapotranspiration, introducing a framework for choosing the dates that yield the best estimates. Locating the dry (hot pixel) and wet (cold pixel) endpoints of the evapotranspiration spectrum is also important; we addressed this by employing a statistical procedure for automated selection of cold and hot pixels, and visually reviewed their locations against a land cover image to ensure that the most appropriate pixels had been selected. To integrate evapotranspiration over time, linear and spline interpolation techniques were applied. In addition, based on the precipitation rates during the 5 days before the image date and the mean seasonal amount of evapotranspiration, we derived a logarithmic equation to produce the best estimate of evapotranspiration over the given period. Results showed that the logarithmic equation produced more accurate estimates of evapotranspiration than linear interpolation.
Improved Pulse Wave Velocity Estimation Using an Arterial Tube-Load Model
Gao, Mingwu; Zhang, Guanqun; Olivier, N. Bari; Mukkamala, Ramakrishna
2015-01-01
Pulse wave velocity (PWV) is the most important index of arterial stiffness. It is conventionally estimated by non-invasively measuring central and peripheral blood pressure (BP) and/or velocity (BV) waveforms and then detecting the foot-to-foot time delay between the waveforms, wherein wave reflection is presumed absent. We developed techniques for improved estimation of PWV from the same waveforms. The techniques effectively estimate PWV from the entire waveforms, rather than just their feet, by mathematically eliminating the reflected wave via an arterial tube-load model. In this way, the techniques may be more robust to artifact while revealing the true PWV in the absence of wave reflection. We applied the techniques to estimate aortic PWV from simultaneously and sequentially measured central and peripheral BP waveforms and simultaneously measured central BV and peripheral BP waveforms from 17 anesthetized animals during diverse interventions that perturbed BP widely. Since BP is the major acute determinant of aortic PWV, especially under anesthesia wherein vasomotor tone changes are minimal, we evaluated the techniques in terms of the ability of their PWV estimates to track the acute BP changes in each subject. Overall, the PWV estimates of the techniques tracked the BP changes better than those of the conventional technique (e.g., diastolic BP root-mean-square errors of 3.4 vs. 5.2 mmHg for the simultaneous BP waveforms and 7.0 vs. 12.2 mmHg for the BV and BP waveforms (p < 0.02)). With further testing, the arterial tube-load model-based PWV estimation techniques may afford more accurate arterial stiffness monitoring in hypertensive and other patients. PMID:24263016
Improved method for retinotopy constrained source estimation of visual evoked responses
Hagler, Donald J.; Dale, Anders M.
2011-01-01
Retinotopy constrained source estimation (RCSE) is a method for non-invasively measuring the time courses of activation in early visual areas using magnetoencephalography (MEG) or electroencephalography (EEG). Unlike conventional equivalent current dipole or distributed source models, the use of multiple, retinotopically-mapped stimulus locations to simultaneously constrain the solutions allows for the estimation of independent waveforms for visual areas V1, V2, and V3, despite their close proximity to each other. We describe modifications that improve the reliability and efficiency of this method. First, we find that increasing the number and size of visual stimuli results in source estimates that are less susceptible to noise. Second, to create a more accurate forward solution, we have explicitly modeled the cortical point spread of individual visual stimuli. Dipoles are represented as extended patches on the cortical surface, which take into account the estimated receptive field size at each location in V1, V2, and V3 as well as the contributions from contralateral, ipsilateral, dorsal, and ventral portions of the visual areas. Third, we implemented a map fitting procedure to deform a template to match individual subject retinotopic maps derived from functional magnetic resonance imaging (fMRI). This improves the efficiency of the overall method by allowing automated dipole selection, and it makes the results less sensitive to physiological noise in fMRI retinotopy data. Finally, the iteratively reweighted least squares (IRLS) method was used to reduce the contribution from stimulus locations with high residual error for robust estimation of visual evoked responses. PMID:22102418
Improving radar estimates of rainfall using an input subset of artificial neural networks
NASA Astrophysics Data System (ADS)
Yang, Tsun-Hua; Feng, Lei; Chang, Lung-Yao
2016-04-01
An input subset including average radar reflectivity (Zave) and its standard deviation (SD) is proposed to improve radar estimates of rainfall based on a radial basis function (RBF) neural network. The RBF derives a relationship from a historical input subset, called a training dataset, consisting of radar measurements such as reflectivity (Z) aloft and associated rainfall observation (R) on the ground. The unknown rainfall rate can then be predicted over the derived relationship with known radar measurements. The selection of the input subset has a significant impact on the prediction performance. This study simplified the selection of input subsets and studied its improvement in rainfall estimation. The proposed subset includes: (1) the Zave of the observed Z within a given distance from the ground observation to represent the intensity of a storm system and (2) the SD of the observed Z to describe the spatial variability. Using three historical rainfall events in 1999 near Darwin, Australia, the performance evaluation is conducted using three approaches: an empirical Z-R relation, RBF with Z, and RBF with Zave and SD. The results showed that the RBF with both Zave and SD achieved better rainfall estimations than the RBF using only Z. Two performance measures were used: (1) the Pearson correlation coefficient improved from 0.15 to 0.58 and (2) the average root-mean-square error decreased from 14.14 mm to 11.43 mm. The proposed model and findings can be used for further applications involving the use of neural networks for radar estimates of rainfall.
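The estimator described above, a Gaussian RBF network mapping (Zave, SD) pairs to rain rates, can be sketched minimally as below. The network structure (centers taken from the training data, a single shared kernel width, weights fit by linear least squares) is an illustrative assumption; the paper's exact training procedure is not specified in the abstract.

```python
import numpy as np

def rbf_fit(X, y, centers, width):
    """Fit the linear output weights of a Gaussian RBF network by least
    squares. X: (n, 2) rows of [Zave, SD]; y: (n,) observed rain rates."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    Phi = np.exp(-d2 / (2.0 * width ** 2))   # design matrix of basis outputs
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def rbf_predict(X, centers, width, w):
    """Predict rain rates for new [Zave, SD] rows with fitted weights w."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2)) @ w
```

Using Zave and SD as the two inputs, rather than the full reflectivity field, is exactly the input-subset simplification the abstract describes.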
NASA Astrophysics Data System (ADS)
Chen, Huilin; Montzka, Steve; Andrews, Arlyn; Sweeney, Colm; Jacobson, Andy; Miller, Ben; Masarie, Ken; Jung, Martin; Gerbig, Christoph; Campbell, Elliott; Abu-Naser, Mohammad; Berry, Joe; Baker, Ian; Tans, Pieter
2013-04-01
Understanding the responses of gross primary production (GPP) to climate change is essential for improving our prediction of climate change. To this end, it is important to accurately partition net ecosystem exchange of carbon into GPP and respiration. Recent studies suggest that carbonyl sulfide is a useful tracer to provide a constraint on GPP, based on the fact that both COS and CO2 are simultaneously taken up by plants and the quantitative correlation between GPP and COS plant uptake. We will present an assessment of North American GPP estimates from the Simple Biosphere (SiB) model, the Carnegie-Ames-Stanford Approach (CASA) model, and the MPI-BGC model through atmospheric transport simulations of COS in a receptor oriented framework. The newly upgraded Hybrid Single Particle Lagrangian Integrated Trajectory Model (HYSPLIT) will be employed to compute the influence functions, i.e. footprints, to link the surface fluxes to the concentration changes at the receptor observations. The HYSPLIT is driven by the 3-hourly archived NAM 12km meteorological data from NOAA NCEP. The background concentrations are calculated using empirical curtains along the west coast of North America that have been created by interpolating in time and space the observations at the NOAA/ESRL marine boundary layer stations and from aircraft vertical profiles. The plant uptake of COS is derived from GPP estimates of biospheric models. The soil uptake and anthropogenic emissions are from Kettle et al. 2002. In addition, we have developed a new soil flux map of COS based on observations of molecular hydrogen (H2), which shares a common soil uptake term but lacks a vegetative sink. We will also improve the GPP estimates by assimilating atmospheric observations of COS in the receptor oriented framework, and then present the assessment of the improved GPP estimates against variations of climate variables such as temperature and precipitation.
NASA Astrophysics Data System (ADS)
Ebrahimian, Ali; Wilson, Bruce N.; Gulliver, John S.
2016-05-01
Impervious surfaces are useful indicators of the urbanization impacts on water resources. Effective impervious area (EIA), the portion of total impervious area (TIA) that is hydraulically connected to the drainage system, is a better catchment parameter for determining actual urban runoff. Development of reliable methods for quantifying EIA rather than TIA is currently one of the knowledge gaps in the rainfall-runoff modeling context. The objective of this study is to improve the rainfall-runoff data analysis method for estimating the EIA fraction in urban catchments by eliminating the subjective part of the existing method and by reducing the uncertainty of EIA estimates. First, the theoretical framework is generalized using a general linear least squares model and a general criterion for categorizing runoff events. Issues with the existing method that reduce the precision of the EIA fraction estimates are then identified and discussed. Two improved methods, based on ordinary least squares (OLS) and weighted least squares (WLS) estimates, are proposed to address these issues. The proposed weighted least squares method is then applied to eleven urban catchments in Europe, Canada, and Australia. The results are compared to map-measured directly connected impervious area (DCIA) and are shown to be consistent with DCIA values. In addition, both of the improved methods are applied to nine urban catchments in Minnesota, USA. Both methods were successful in removing the subjective component inherent in the current method's analysis of rainfall-runoff data. The WLS method is more robust than the OLS method and generates results that are different from, and more precise than, those of the OLS method in the presence of heteroscedastic residuals in our rainfall-runoff data.
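One common way to read off the EIA fraction from rainfall-runoff data is as the slope of a least squares fit of event runoff depth against rainfall depth, with weights downweighting uncertain events. The minimal WLS sketch below illustrates only that slope estimate; the paper's event-categorization criterion and specific weighting scheme are not reproduced here.

```python
import numpy as np

def eia_fraction_wls(rain, runoff, weights):
    """Weighted least squares fit of event runoff depth (mm) against event
    rainfall depth (mm). The fitted slope estimates the EIA fraction; the
    intercept absorbs initial abstraction losses."""
    X = np.column_stack([np.ones_like(rain), rain])   # [intercept, slope]
    W = np.diag(weights)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ runoff)
    return beta[1]                                    # slope = EIA fraction
```

With unit weights this reduces to the OLS variant; heteroscedastic residuals are where the weighting pays off, consistent with the comparison reported in the abstract.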
An estimation method of MR signal parameters for improved image reconstruction in unilateral scanner
NASA Astrophysics Data System (ADS)
Bergman, Elad; Yeredor, Arie; Nevo, Uri
2013-12-01
Unilateral NMR devices are used in various applications, including non-destructive testing and well logging, but are not used routinely for imaging. This is mainly due to the inhomogeneous magnetic field (B0) in these scanners. This inhomogeneity results in low sensitivity and further forces the use of the slow single-point imaging scan scheme. Improving the measurement sensitivity is therefore an important factor, as it can improve image quality and reduce imaging times. Short imaging times can facilitate the use of this affordable and portable technology for various imaging applications. This work presents a statistical signal-processing method designed to fit the unique characteristics of imaging with a unilateral device. The method improves the imaging capabilities by improving the extraction of image information from the noisy data. This is done by using redundancy in the acquired MR signal and the noise characteristics; both types of data were incorporated into a weighted least squares estimation approach. The method's performance was evaluated with a series of imaging acquisitions applied to phantoms. Images were extracted from each measurement with the proposed method and compared to the conventional image reconstruction. All measurements showed a significant improvement in image quality based on the MSE criterion with respect to gold-standard reference images. An integration of this method with further improvements may lead to a prominent reduction in imaging times, aiding the use of such scanners in imaging applications.
Aulenbach, Brent T.
2013-01-01
A regression-model based approach is a commonly used, efficient method for estimating streamwater constituent load when there is a relationship between streamwater constituent concentration and continuous variables such as streamwater discharge, season and time. A subsetting experiment using a 30-year dataset of daily suspended sediment observations from the Mississippi River at Thebes, Illinois, was performed to determine optimal sampling frequency, model calibration period length, and regression model methodology, as well as to determine the effect of serial correlation of model residuals on load estimate precision. Two regression-based methods were used to estimate streamwater loads: the Adjusted Maximum Likelihood Estimator (AMLE) and the composite method, a hybrid load estimation approach. While both methods accurately and precisely estimated loads at the model's calibration period time scale, precision was progressively worse at shorter reporting periods, from annual to monthly. Serial correlation in model residuals resulted in observed AMLE precision being significantly worse than the model-calculated standard errors of prediction. The composite method effectively improved upon AMLE loads for shorter reporting periods, but required a sampling interval of 15 days or shorter when the serial correlations in the observed load residuals were greater than 0.15. AMLE precision was better at shorter sampling intervals and when using the shortest model calibration periods, such that the regression models better fit the temporal changes in the concentration–discharge relationship. The models with the largest errors typically had poor high-flow sampling coverage, resulting in unrepresentative models. Increasing sampling frequency and/or targeted high-flow sampling are more efficient approaches to ensuring sufficient sampling and avoiding poorly performing models than increasing the calibration period length.
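The regression-model idea itself, though not AMLE or the composite method specifically, can be sketched as a log-log rating curve with Duan's smearing correction for retransformation bias, calibrated on a sparse subsample of a synthetic daily record. Everything below is invented data, not the Thebes record:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic daily record: concentration follows a power law in discharge.
Q = rng.lognormal(mean=3.0, sigma=0.8, size=365)     # daily discharge
C = 5.0 * Q ** 0.6 * rng.lognormal(0.0, 0.3, 365)    # daily concentration

# Calibrate ln(C) = a + b ln(Q) on a sparse "sampled" subset (every 15 days).
idx = np.arange(0, 365, 15)
A = np.column_stack([np.ones(idx.size), np.log(Q[idx])])
coef, *_ = np.linalg.lstsq(A, np.log(C[idx]), rcond=None)
resid = np.log(C[idx]) - A @ coef
smear = np.mean(np.exp(resid))     # Duan smearing retransformation correction

# Estimate the unsampled days from the rating curve and sum the annual load.
C_hat = np.exp(coef[0] + coef[1] * np.log(Q)) * smear
load_est = float(np.sum(C_hat * Q))
load_true = float(np.sum(C * Q))
print(f"estimated/true annual load: {load_est / load_true:.2f}")
```

The 15-day sampling interval here echoes the threshold the study reports for the composite method; with sparser sampling the calibration set can miss high flows, which is exactly the failure mode the abstract flags.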
Habecker, Patrick; Dombrowski, Kirk; Khan, Bilal
2015-01-01
Researchers interested in studying populations that are difficult to reach through traditional survey methods can now draw on a range of methods to access these populations. Yet many of these methods are more expensive and difficult to implement than studies using conventional sampling frames and trusted sampling methods. The network scale-up method (NSUM) provides a middle ground for researchers who wish to estimate the size of a hidden population but lack the resources to conduct a more specialized hidden population study. Through this method it is possible to generate population estimates for a wide variety of groups that are perhaps unwilling to self-identify as such (for example, users of illegal drugs or other stigmatized populations) via traditional survey tools such as telephone or mail surveys, by asking a representative sample to estimate the number of people they know who are members of such a “hidden” subpopulation. The original estimator is formulated to minimize the weight a single scaling variable can exert upon the estimates. We argue that this introduces hidden and difficult-to-predict biases, and instead propose a series of methodological advances on the traditional scale-up estimation procedure, including a new estimator. Additionally, we formalize the incorporation of sample weights into the network scale-up estimation process, and propose a recursive process of back-estimation “trimming” to identify and remove poorly performing predictors from the estimation process. To demonstrate these suggestions we use data from a network scale-up mail survey conducted in Nebraska during 2014. We find that using the new estimator and recursive trimming process provides more accurate estimates, especially when used in conjunction with sampling weights. PMID:26630261
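The classical scale-up estimator that the proposed methods build on is a one-liner: scale the total number of reported ties into the hidden group by respondents' total network sizes relative to the whole population. The survey numbers below are invented:

```python
# Basic (Killworth-style) network scale-up estimate. Each respondent reports
# how many people they know in the hidden group (m_i) and how many people
# they know overall (c_i); the hidden-group size is N * sum(m) / sum(c).
def nsum_estimate(hidden_ties, network_sizes, population_size):
    return population_size * sum(hidden_ties) / sum(network_sizes)

m = [2, 0, 1, 3, 0, 1]              # ties to the hidden group, per respondent
c = [300, 250, 400, 500, 150, 350]  # estimated personal network sizes
print(nsum_estimate(m, c, population_size=1_900_000))
```

The paper's contributions (a new estimator, sample weights, recursive trimming of poor predictors) all modify how the network sizes c_i are estimated and combined, not this basic ratio structure.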
NASA Astrophysics Data System (ADS)
Adams, R.; Costelloe, J. F.; Western, A. W.; George, B.
2013-10-01
An improved understanding of water balances of rivers is fundamental in water resource management. Effective use of a water balance approach requires thorough identification of sources of uncertainty around all terms in the analysis and can benefit from additional, independent information that can be used to interpret the accuracy of the residual term of a water balance. We use a Monte Carlo approach to estimate a longitudinal river channel water balance and to identify its sources of uncertainty for a regulated river in south-eastern Australia, assuming that the residual term of this water balance represents fluxes between groundwater and the river. Additional information from short term monitoring of ungauged tributaries and groundwater heads is used to further test our confidence in the estimates of error and variance for the major components of this water balance. We identify the following conclusions from the water balance analysis. First, improved identification of the major sources of error in consecutive reaches of a catchment can be used to support monitoring infrastructure design to best reduce the largest sources of error in a water balance. Second, estimation of ungauged inflow using rainfall-runoff modelling is sensitive to the representativeness of available gauged data in characterising the flow regime of sub-catchments along a perennial to intermittent continuum. Lastly, comparison of temporal variability of stream-groundwater head difference data and a residual water balance term provides an independent means of assessing the assumption that the residual term represents net stream-groundwater fluxes.
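The Monte Carlo treatment of a reach water balance can be illustrated with a toy example: each gauged or modelled term is drawn from an error distribution, and the residual, interpreted as net stream-groundwater flux, inherits the combined uncertainty. All magnitudes and error levels below are invented, not values from the study catchment:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000   # Monte Carlo draws

# Hypothetical reach water balance (ML/day), each term with its own error.
inflow    = rng.normal(1200, 60, n)   # upstream gauge, ~5% error
outflow   = rng.normal(1100, 55, n)   # downstream gauge
tributary = rng.normal(40,   20, n)   # modelled ungauged inflow (large error)
diversion = rng.normal(80,    4, n)   # metered extraction
evap      = rng.normal(15,    5, n)

# Residual interpreted as net stream-groundwater flux (positive = losing reach).
residual = inflow + tributary - outflow - diversion - evap

lo, hi = np.percentile(residual, [2.5, 97.5])
print(f"residual: {residual.mean():.0f} ML/d (95% interval {lo:.0f} to {hi:.0f})")
```

Comparing the per-term error contributions (here the two gauges dominate) is what supports the abstract's first conclusion about targeting monitoring infrastructure at the largest error sources.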
Gaye-Siessegger, Julia; Mamun, Shamsuddin M; Brinker, Alexander; Focken, Ulfert
2013-04-01
For diet reconstruction studies using stable isotopes, accurate estimates of trophic shift (Δδtrophic) are necessary to get reliable results. Several factors have been identified which affect the trophic shift. The goal of the present experiment was to test whether measurements of the activities of enzymes could improve the accuracy of estimation of trophic shift in fish. Forty-eight Nile tilapia (Oreochromis niloticus) were fed under controlled conditions with two diets differing in their protein content (21 and 41%), each at four different levels (4, 8, 12 and 16 g kg^-0.8 d^-1). At the end of the feeding experiment, proximate composition, whole-body δ13C and δ15N, as well as the activities of enzymes involved in anabolism and catabolism, were measured. Step-wise regression specified contributing variables for Δδ15N (malic enzyme, aspartate aminotransferase and protein content) and Δδ13C of lipid-free material (aspartate aminotransferase and protein content). The variation explained by the significant main effects was about 70% for both Δδ15N and Δδ13C of lipid-free material. The results of the present study indicate that enzyme activities are suitable indicators to improve estimates of trophic shift.
Allen, Y.C.; Couvillion, B.R.; Barras, J.A.
2012-01-01
Remote sensing imagery can be an invaluable resource to quantify land change in coastal wetlands. Obtaining an accurate measure of land change can, however, be complicated by differences in fluvial and tidal inundation experienced when the imagery is captured. This study classified Landsat imagery from two wetland areas in coastal Louisiana from 1983 to 2010 into categories of land and water. Tide height, river level, and date were used as independent variables in a multiple regression model to predict land area in the Wax Lake Delta (WLD) and compare those estimates with an adjacent marsh area lacking direct fluvial inputs. Coefficients of determination from regressions using both measures of water level along with date as predictor variables of land extent in the WLD were higher than those obtained using the current methodology, which only uses date to predict land change. Land change trend estimates were also improved when the data were divided by time period. Water-level-corrected land gain in the WLD from 1983 to 2010 was 1 km² year⁻¹, while rates in the adjacent marsh remained roughly constant. This approach of isolating environmental variability due to changing water levels improves estimates of actual land change in a dynamic system, so that other processes that may control delta development, such as hurricanes, floods, and sediment delivery, may be further investigated. © 2011 Coastal and Estuarine Research Federation (outside the USA).
Integrating SAS and GIS software to improve habitat-use estimates from radiotelemetry data
Kenow, K.P.; Wright, R.G.; Samuel, M.D.; Rasmussen, P.W.
2001-01-01
Radiotelemetry has been used commonly to remotely determine habitat use by a variety of wildlife species. However, habitat misclassification can occur because the true location of a radiomarked animal can only be estimated. Analytical methods that provide improved estimates of habitat use from radiotelemetry location data using a subsampling approach have been proposed previously. We developed software, based on these methods, to conduct improved habitat-use analyses. A Statistical Analysis System (SAS)-executable file generates a random subsample of points from the error distribution of an estimated animal location and formats the output into ARC/INFO-compatible coordinate and attribute files. An associated ARC/INFO Arc Macro Language (AML) program creates a coverage of the random points, determines the habitat type at each random point from an existing habitat coverage, sums the number of subsample points by habitat type for each location, and outputs the results in ASCII format. The proportion and precision of habitat types used are calculated from the subsample of points generated for each radiotelemetry location. We illustrate the method and software by analysis of radiotelemetry data for a female wild turkey (Meleagris gallopavo).
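The subsampling approach the software implements can be sketched without SAS or ARC/INFO: draw random points from the error distribution of one estimated location, look up a habitat type at each, and tally the proportions. The bivariate-normal error model and the toy two-habitat map below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

def habitat_at(x, y):
    """Toy habitat map standing in for a GIS coverage: forest west of x=0."""
    return "forest" if x < 0.0 else "marsh"

# One estimated telemetry fix with a bivariate-normal location error.
loc, sd = np.array([10.0, 5.0]), 40.0
pts = rng.normal(loc, sd, size=(500, 2))    # subsample of the error distribution

labels, counts = np.unique([habitat_at(*p) for p in pts], return_counts=True)
use = dict(zip(labels, counts / len(pts)))  # habitat-use proportions for this fix
print(use)
```

Because the fix sits near a habitat boundary relative to its error, the subsample spreads use across both types instead of assigning the whole observation to the single habitat at the point estimate, which is the misclassification the method corrects.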
Improving the Carbon Dioxide Emission Estimates from the Combustion of Fossil Fuels in California
de la Rue du Can, Stephane; Wenzel, Tom; Price, Lynn
2008-08-13
Central to any study of climate change is the development of an emission inventory that identifies and quantifies the State's primary anthropogenic sources and sinks of greenhouse gas (GHG) emissions. CO2 emissions from fossil fuel combustion accounted for 80 percent of California GHG emissions (CARB, 2007a). Even though these CO2 emissions are well characterized in the existing state inventory, there still exist significant sources of uncertainty regarding their accuracy. This report evaluates the CO2 emissions accounting based on the California Energy Balance database (CALEB) developed by Lawrence Berkeley National Laboratory (LBNL), in terms of what improvements are needed and where uncertainties lie. The estimated uncertainty for total CO2 emissions ranges between -21 and +37 million metric tons (Mt), or -6 percent and +11 percent of total CO2 emissions. The report also identifies where improvements are needed for the upcoming updates of CALEB. However, it is worth noting that the California Air Resources Board (CARB) GHG inventory did not use CALEB data for all combustion estimates. Therefore the range in uncertainty estimated in this report does not apply to CARB's GHG inventory. As much as possible, additional data sources used by CARB in the development of its GHG inventory are summarized in this report for consideration in future updates to CALEB.
NASA Technical Reports Server (NTRS)
Ramapriyan, H. K. (Rama); Peng, Ge; Moroni, David; Shie, Chung-Lin
2016-01-01
Quality of products is always of concern to users regardless of the type of product. The focus of this paper is on the quality of Earth science data products. There are four different aspects of quality: scientific, product, stewardship and service. All these aspects taken together constitute Information Quality. With increasing requirements for ensuring and improving information quality, there has been considerable work related to information quality during the last several years. Given this rich background of prior work, the Information Quality Cluster (IQC), established within the Federation of Earth Science Information Partners (ESIP), has been active with membership from multiple organizations. Its objectives and activities, aimed at ensuring and improving information quality for Earth science data and products, are discussed briefly.
NASA Astrophysics Data System (ADS)
Susaki, J.
2016-06-01
In this paper, we analyze probability density functions (PDFs) of scatterings derived from fully polarimetric synthetic aperture radar (SAR) images for improving the accuracies of estimated urban density. We have reported a method for estimating urban density that uses an index Tv+c obtained by normalizing the sum of volume and helix scatterings Pv+c. Validation results showed that estimated urban densities have a high correlation with building-to-land ratios (Kajimoto and Susaki, 2013b; Susaki et al., 2014). While the method is found to be effective for estimating urban density, it is not clear why Tv+c is more effective than indices derived from other scatterings, such as surface or double-bounce scatterings, observed in urban areas. In this research, we focus on PDFs of scatterings derived from fully polarimetric SAR images in terms of scattering normalization. First, we introduce a theoretical PDF that assumes that image pixels have scatterers showing random backscattering. We then generate PDFs of scatterings derived from observations of concrete blocks with different orientation angles, and from a satellite-based fully polarimetric SAR image. The analysis of the PDFs and the derived statistics reveals that the curves of the PDFs of Pv+c are the most similar to the normal distribution among all the scatterings derived from fully polarimetric SAR images. It was found that Tv+c works most effectively because of its similarity to the normal distribution.
Rose, Kevin C.; Winslow, Luke A.; Read, Jordan S.; Read, Emily K.; Solomon, Christopher T.; Adrian, Rita; Hanson, Paul C.
2014-01-01
Diel changes in dissolved oxygen are often used to estimate gross primary production (GPP) and ecosystem respiration (ER) in aquatic ecosystems. Despite the widespread use of this approach to understand ecosystem metabolism, we are only beginning to understand the degree and underlying causes of uncertainty for metabolism model parameter estimates. Here, we present a novel approach to improve the precision and accuracy of ecosystem metabolism estimates by identifying physical metrics that indicate when metabolism estimates are highly uncertain. Using datasets from seventeen instrumented GLEON (Global Lake Ecological Observatory Network) lakes, we discovered that many physical characteristics correlated with uncertainty, including PAR (photosynthetically active radiation, 400-700 nm), daily variance in Schmidt stability, and wind speed. Low PAR was a consistent predictor of high variance in GPP model parameters, but also corresponded with low ER model parameter variance. We identified a threshold (30% of clear sky PAR) below which GPP parameter variance increased rapidly and was significantly greater in nearly all lakes compared with variance on days with PAR levels above this threshold. The relationship between daily variance in Schmidt stability and GPP model parameter variance depended on trophic status, whereas daily variance in Schmidt stability was consistently positively related to ER model parameter variance. Wind speeds in the range of ~0.8-3 m s–1 were consistent predictors of high variance for both GPP and ER model parameters, with greater uncertainty in eutrophic lakes. Our findings can be used to reduce ecosystem metabolism model parameter uncertainty and identify potential sources of that uncertainty.
Miller, David A.; Nichols, J.D.; McClintock, B.T.; Grant, E.H.C.; Bailey, L.L.; Weir, L.A.
2011-01-01
Efforts to draw inferences about species occurrence frequently account for false negatives, the common situation when individuals of a species are not detected even when a site is occupied. However, recent studies suggest the need to also deal with false positives, which occur when species are misidentified so that a species is recorded as detected when a site is unoccupied. Bias in estimators of occupancy, colonization, and extinction can be severe when false positives occur. Accordingly, we propose models that simultaneously account for both types of error. Our approach can be used to improve estimates of occupancy for study designs where a subset of detections is of a type or method for which false positives can be assumed to not occur. We illustrate properties of the estimators with simulations and data for three species of frogs. We show that models that account for possible misidentification have greater support (lower AIC for two species) and can yield substantially different occupancy estimates than those that do not. When the potential for misidentification exists, researchers should consider analytical techniques that can account for this source of error, such as those presented here. © 2011 by the Ecological Society of America.
Improving waterfowl production estimates: results of a test in the prairie pothole region
Arnold, P.M.; Cowardin, L.M.
1985-01-01
The U.S. Fish and Wildlife Service, in an effort to improve and standardize methods for estimating waterfowl production, tested a new technique in the four-county Arrowwood Wetland Management District (WMD) for three years (1982-1984). On 14 randomly selected 10.36 km2 plots, upland and wetland habitat was mapped, classified, and digitized. Waterfowl breeding pairs were counted twice each year and the proportion of wetland basins containing water was determined. Pair numbers and habitat conditions were entered into a computer model developed by Northern Prairie Wildlife Research Center. That model estimates production on small federally owned wildlife tracts, federal wetland easements, and private land. Results indicate that production estimates were most accurate for mallards (Anas platyrhynchos), the species for which the computer model and data base were originally designed. Predictions for the pintail (Anas acuta), gadwall (A. strepera), blue-winged teal (A. discors), and northern shoveler (A. clypeata) were believed to be less accurate. Modeling breeding period dynamics of a waterfowl species and making credible production estimates for a geographic area are possible if the data used in the model are adequate. The process of modeling the breeding period of a species aids in locating areas of insufficient biological knowledge. This process will help direct future research efforts and permit more efficient gathering of field data.
Improving regional-model estimates of urban-runoff quality using local data
Hoos, A.B.
1996-01-01
Urban water-quality managers need load estimates of storm-runoff pollutants to design effective remedial programs. Estimates are commonly made using published models calibrated to large regions of the country. This paper presents statistical methods, termed model-adjustment procedures (MAPs), which use a combination of local data and published regional models to improve estimates of urban-runoff quality. Each MAP is a form of regression analysis that uses a local data base as a calibration data set to adjust the regional model, in effect increasing the size of the local data base without additional, expensive data collection. The adjusted regional model can then be used to estimate storm-runoff quality at unmonitored sites and storms in the locality. The four MAPs presented in this study are (1) single-factor regression against the regional model prediction, Pu; (2) least-squares regression against Pu; (3) least-squares regression against Pu and additional local variables; and (4) weighted combination of Pu and a local-regression prediction. Identification of the statistically most valid method among these four depends upon characteristics of the local data base. A MAP-selection scheme based on statistical analysis of the calibration data set is presented and tested.
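MAP (1), single-factor regression against the regional prediction Pu, is the simplest of the four procedures: fit a single scaling factor to the local calibration set and apply it to future regional predictions. The calibration data below are synthetic, and the 1.4 local bias is an invented assumption:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical local calibration set: regional-model predictions Pu versus
# observed local storm loads (both in kg); local loads run ~40% higher.
P_u = rng.lognormal(2.0, 0.5, 25)
obs = 1.4 * P_u * rng.lognormal(0.0, 0.2, 25)

# Single-factor MAP: fit obs = beta * Pu (regression through the origin).
beta = float(np.sum(P_u * obs) / np.sum(P_u ** 2))
adjusted = beta * P_u               # locally adjusted predictions
print(f"adjustment factor beta = {beta:.2f}")
```

MAPs (2)-(4) elaborate on this by adding an intercept, extra local explanatory variables, or a weighted blend with a purely local regression, trading more parameters against the small size of the local data base.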
Improved leaf area index based biomass estimations for Zostera marina L.
Solana-Arellano, Elena; Echavarria-Heras, Hector; Gallegos Martinez, Margarita
2003-12-01
The application of special scanning technologies in plant population studies now makes it possible to offer reliable indirect estimations of Leaf Area Index (LAI). This has stimulated the adaptation of related biomass assessment methods and has provided a way to simplify tedious laboratory procedures whilst avoiding destructive sampling. In particular, above-ground biomass for Zostera marina L. has been expressed as a linear function of Leaf Area Index. Nevertheless, we demonstrate that this approach produces biased estimations. It is also shown that expressing leaf dry weight by means of an allometric function of length and width can eliminate bias. Furthermore, the dominant term of the associated power series expansion becomes the aforementioned linear representation in terms of Leaf Area Index. The consistency of the estimation methods derived from the allometric model was tested using data from a Z. marina meadow. Consequently, the improved method is expected to become a valuable tool for the reduction of the uncertainty associated with the estimation of above-ground biomass through the use of Leaf Area Index.
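The allometric idea, fitting leaf dry weight as a power function of length times width rather than assuming strict proportionality, can be sketched as a log-log regression. The leaf dimensions and parameters below are fabricated, not Z. marina measurements; the b = 1 special case is the plain linear LAI-proportional model the abstract shows to be biased:

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic leaves: dry weight follows an allometric law in length*width
# (length*width being proportional to one-sided leaf area).
L = rng.uniform(10, 60, 80)                    # leaf length (cm)
w = rng.uniform(0.3, 0.8, 80)                  # leaf width (cm)
dw = 0.004 * (L * w) ** 1.2 * rng.lognormal(0.0, 0.1, 80)  # dry weight (g)

# Fit ln(dw) = ln(a) + b ln(L*w) by ordinary least squares.
A = np.column_stack([np.ones(80), np.log(L * w)])
(ln_a, b), *_ = np.linalg.lstsq(A, np.log(dw), rcond=None)
print(f"a = {np.exp(ln_a):.4f}, b = {b:.2f}")
```

When the fitted exponent b differs from 1, forcing a straight line through dry weight versus leaf area systematically mis-weights large and small leaves, which is the bias the allometric model removes.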
Improving Estimation Accuracy of Quasars’ Photometric Redshifts by Integration of KNN and SVM
NASA Astrophysics Data System (ADS)
Han, Bo; Ding, Hongpeng; Zhang, Yanxia; Zhao, Yongheng
2015-08-01
The massive photometric data collected from multiple large-scale sky surveys offer significant opportunities for measuring distances of many celestial objects via photometric redshifts zphot over a wide coverage of the sky. However, catastrophic failure, a long-unsolved problem, exists in current photometric redshift estimation approaches (such as k-nearest-neighbor). In this paper, we propose a novel two-stage approach integrating the k-nearest-neighbor (KNN) and support vector machine (SVM) methods. In the first stage, we apply the KNN algorithm to photometric data and estimate the corresponding zphot. By analysis, we observe two dense regions with catastrophic failure, one in the range zphot ∈ [0.1, 1.1], the other in the range zphot ∈ [1.5, 2.5]. In the second stage, we map the photometric multiband input pattern of points falling into these two ranges from the original attribute space into a high-dimensional feature space using a Gaussian kernel function in SVM. In the high-dimensional feature space, many bad estimates resulting from catastrophic failure of the simple Euclidean distance computation in KNN can be identified by the SVM classification hyperplane and then corrected. Experimental results based on SDSS data for quasars showed that the two-stage fusion approach can significantly mitigate catastrophic failure and improve the estimation accuracy of photometric redshifts.
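Stage one of the approach, a plain KNN photometric-redshift estimate plus flagging of objects that land in the reported catastrophic-failure ranges, can be sketched as follows. The colour-redshift relation is synthetic, and the second-stage SVM correction is omitted:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic catalogue: 4 colours with a smooth, made-up colour-redshift trend.
n = 2000
z = rng.uniform(0.0, 3.0, n)
colours = np.column_stack([np.sin(z + k) for k in range(4)])
colours += rng.normal(0, 0.05, (n, 4))

def knn_photo_z(query, train_X, train_z, k=10):
    """Plain KNN photometric redshift: mean z of the k nearest colour matches."""
    d = np.linalg.norm(train_X - query, axis=1)
    return train_z[np.argsort(d)[:k]].mean()

test_X, test_z = colours[:200], z[:200]
train_X, train_z = colours[200:], z[200:]
z_hat = np.array([knn_photo_z(q, train_X, train_z) for q in test_X])

# Points in the reported failure ranges would go to the second-stage
# Gaussian-kernel SVM for identification and correction (not shown here).
flagged = ((z_hat > 0.1) & (z_hat < 1.1)) | ((z_hat > 1.5) & (z_hat < 2.5))
print(f"median |dz|: {np.median(np.abs(z_hat - test_z)):.3f}, flagged: {int(flagged.sum())}")
```

Catastrophic failures arise when Euclidean distance in colour space pairs an object with neighbours at a very different redshift; the SVM stage exists to pick those cases out of the flagged ranges.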
NASA Astrophysics Data System (ADS)
Hamaker, Henry Chris
1995-12-01
Statistical process control (SPC) techniques often use six times the standard deviation sigma to estimate the range of errors within a process. Two assumptions are inherent in this choice of metric for the range: (1) the normal distribution adequately describes the errors, and (2) the fraction of errors falling within plus or minus 3 sigma, about 99.73%, is sufficiently large that we may consider the fraction occurring outside this range to be negligible. In state-of-the-art photomasks, however, the assumption of normality frequently breaks down, and consequently plus or minus 3 sigma is not a good estimate of the range of errors. In this study, we show that improved estimates for the effective maximum error Em, which is defined as the value for which 99.73% of all errors fall within plus or minus Em of the mean mu, may be obtained by quantifying the deviation from normality of the error distributions using the skewness and kurtosis of the error sampling. Data are presented indicating that in laser reticle-writing tools, Em less than or equal to 3 sigma. We also extend this technique for estimating the range of errors to specifications that are usually described by mu plus 3 sigma. The implications for SPC are examined.
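Rather than going through skewness and kurtosis, the effective maximum error Em can be illustrated directly as the empirical 99.73% quantile of absolute deviations from the mean. The skewed error sample below is simulated; note that in this particular example the skew pushes Em above 3 sigma, the opposite direction to what the abstract reports for reticle-writing tools:

```python
import numpy as np

rng = np.random.default_rng(8)

# Skewed placement-error sample: for non-normal errors, 3*sigma misstates
# the range that actually contains 99.73% of the errors.
e = rng.gamma(shape=2.0, scale=1.0, size=100_000) - 2.0   # mean ~ 0, skewed

sigma = float(e.std())
# Effective maximum error: |e - mean| below which 99.73% of errors fall.
Em = float(np.quantile(np.abs(e - e.mean()), 0.9973))
print(f"3*sigma = {3 * sigma:.2f}, Em = {Em:.2f}")
```

Whether Em sits above or below 3 sigma depends on the shape of the distribution, which is why the study characterizes it through higher moments instead of assuming normality.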
van Twillert, Inonge; Bonačić Marinović, Axel A; van Gaans-van den Brink, Jacqueline A M; Kuipers, Betsy; Berbers, Guy A M; van der Maas, Nicoline A T; Verheij, Theo J M; Versteegh, Florens G A; Teunis, Peter F M; van Els, Cécile A C M
2016-01-01
Bordetella pertussis circulates even in highly vaccinated countries affecting all age groups. Insight into the scale of concealed reinfections is important as they may contribute to transmission. We therefore investigated whether current single-point serodiagnostic methods are suitable to estimate the prevalence of pertussis reinfection. Two methods based on IgG-Ptx plasma levels alone were used to evaluate the proportion of renewed seroconversions in the past year in a cohort of retrospective pertussis cases ≥ 24 months after a proven earlier symptomatic infection. A Dutch population database was used as a baseline. Applying a classical 62.5 IU/ml IgG-Ptx cut-off, we calculated a seroprevalence of 15% in retrospective cases, higher than the 10% observed in the population baseline. However, this method could not discriminate between renewed seroconversion and waning of previously infection-enhanced IgG-Ptx levels. Two-component cluster analysis of the IgG-Ptx datasets of both pertussis cases and the general population revealed a continuum of intermediate IgG-Ptx levels, preventing the establishment of a positive population and the comparison of prevalence by this alternative method. Next, we investigated the complementary serodiagnostic value of IgA-Ptx levels. When modelling datasets including both convalescent and retrospective cases we obtained new cut-offs for both IgG-Ptx and IgA-Ptx that were optimized to evaluate renewed seroconversions in the ex-cases target population. Combining these cut-offs two-dimensionally, we calculated 8.0% reinfections in retrospective cases, being below the baseline seroprevalence. Our study for the first time revealed the shortcomings of using only IgG-Ptx data in conventional serodiagnostic methods to determine pertussis reinfections. Improved results can be obtained with two-dimensional serodiagnostic profiling. The proportion of reinfections thus established suggests a relatively increased period of protection to renewed infection.
Jones, A T; Ovenden, J R; Wang, Y-G
2016-10-01
The linkage disequilibrium method is currently the most widely used single-sample estimator of genetic effective population size. The commonly used software packages come with two options, referred to as the parametric and jackknife methods, for computing the associated confidence intervals. However, little is known about the coverage performance of these methods, and the published data suggest there may be some room for improvement. Here, we propose two new methods for generating confidence intervals and compare them with the two in current use through a simulation study. The new confidence interval methods tend to be conservative but outperform the existing methods for generating confidence intervals under certain circumstances, such as those that may be encountered when making estimates using large numbers of single-nucleotide polymorphisms.
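A generic delete-one jackknife interval, the same flavour as one of the two options the abstract refers to, can be sketched as follows. It is applied here to a simple mean on simulated data, not to an LD-based effective-population-size estimate:

```python
import numpy as np

rng = np.random.default_rng(9)

def jackknife_ci(x, stat, z=1.96):
    """Delete-one jackknife confidence interval for a generic estimator."""
    n = len(x)
    reps = np.array([stat(np.delete(x, i)) for i in range(n)])
    se = np.sqrt((n - 1) / n * np.sum((reps - reps.mean()) ** 2))
    est = stat(x)
    return est - z * se, est + z * se

x = rng.normal(10.0, 2.0, size=60)
lo, hi = jackknife_ci(x, np.mean)
print(f"mean = {x.mean():.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

The coverage question the paper studies is whether intervals constructed this way actually contain the true parameter 95% of the time across repeated simulated samples; that check is what motivates their proposed alternatives.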
Khoubrouy, Soudeh A; Panahi, Issa M S
2012-01-01
Adaptive Feedback Cancellation (AFC) methods are used to find an FIR filter to cancel the negative effect of acoustic feedback between the loudspeaker and microphone of a hearing aid. Finding an AFC filter of appropriate order/length directly affects the performance and complexity of the system. In this paper, we use a noise injection method to find the AFC filter that estimates the feedback path model. We show that the optimum length, which guarantees a good compromise between the quality and the complexity of the system, may be smaller than the length of the actual feedback path model. However, in order to improve the performance of the system in terms of the misalignment criterion, we propose using multiple short-time noise injections and an averaging method to find the best filter estimate of appropriate length. PMID:23367108
Multiple ping sonar accuracy improvement using robust motion estimation and ping fusion.
Yu, Lian; Neretti, Nicola; Intrator, Nathan
2006-04-01
Noise degrades the accuracy of sonar systems. We demonstrate a practical method for increasing the effective signal-to-noise ratio (SNR) by fusing time-delay information from a burst of multiple sonar pings. This approach is useful when there is no relative motion between the sonar and the target during the burst; otherwise, the relative motion degrades the fusion and has to be addressed before fusion can be used. In this paper, we present a robust motion estimation algorithm that uses information from multiple receivers to estimate the relative motion between pings in the burst. We then compensate for this motion and show that fusing information from the burst of motion-compensated pings improves both resilience to noise and sonar accuracy, consequently increasing the operating range of the sonar system.
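The fusion idea, though not the paper's motion-estimation algorithm, can be illustrated for the static-target case: estimate each ping's echo delay by matched-filter cross-correlation, then average across the burst. The pulse shape, sample rate, delay, and noise level below are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 100_000.0                                    # sample rate in Hz (assumed)
# windowed 40 kHz tone burst as the transmitted pulse
pulse = np.sin(2 * np.pi * 40_000 * np.arange(64) / fs) * np.hanning(64)
true_delay = 700                                  # echo delay in samples

def ping_delay(noise_sigma):
    """One noisy echo; estimate its delay via cross-correlation peak."""
    echo = np.zeros(2048)
    echo[true_delay:true_delay + pulse.size] += pulse
    echo += noise_sigma * rng.standard_normal(echo.size)
    corr = np.correlate(echo, pulse, mode="valid")   # matched filter
    return int(np.argmax(corr))

# fuse a burst of 16 pings: with no relative motion, averaging the per-ping
# delay estimates raises the effective SNR of the combined estimate
fused = float(np.mean([ping_delay(0.5) for _ in range(16)]))
```

With relative motion, the per-ping delays would first have to be aligned (the motion compensation step of the paper) before this averaging is valid.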
Mackie, C D
2014-01-01
Impacts of underground longwall mining on groundwater systems are commonly assessed using numerical groundwater flow models that are capable of forecasting changes to strata pore pressures and rates of groundwater seepage over the mine life. Groundwater ingress to a mining operation is typically estimated using zone budgets to isolate relevant parts of a model that represent specific mining areas, and to aggregate flows at nominated times within specific model stress periods. These rates can be easily misinterpreted if simplistic averaging of daily flow budgets is adopted. Such misinterpretation has significant implications for design of underground dewatering systems for a new mine site or it may lead to model calibration errors where measured mine water seepage rates are used as a primary calibration constraint. Improved estimates of groundwater ingress can be made by generating a cumulative flow history from zone budget data, then differentiating the cumulative flow history using a low order polynomial convolved through the data set.
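The cumulative-flow approach described above can be sketched numerically. The synthetic ingress series and noise level are hypothetical; the point is that differentiating a low-order polynomial fitted to the cumulative flow history recovers a much smoother rate than reading the daily zone-budget output directly:

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.arange(365.0)                              # days
true_rate = 50.0 + 0.1 * t                        # hypothetical ingress, m3/day
daily = true_rate + rng.normal(0, 5.0, t.size)    # noisy daily zone-budget flows

# naive approach: read rates straight off the daily budgets (noisy)
rmse_daily = np.sqrt(np.mean((daily - true_rate) ** 2))

# improved: build the cumulative flow history, fit a low-order polynomial,
# then differentiate the polynomial to recover a smooth ingress rate
cumulative = np.cumsum(daily)
coeffs = np.polyfit(t, cumulative, 3)
rate = np.polyval(np.polyder(coeffs), t)
rmse_poly = np.sqrt(np.mean((rate - true_rate) ** 2))
```

Integration suppresses the high-frequency budget noise, and the polynomial derivative acts as a smooth estimate of the underlying seepage rate.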
Studying Dark Energy with Galaxy Cluster Surveys
NASA Astrophysics Data System (ADS)
Mohr, J.; Majumdar, S.
2003-05-01
Galaxy cluster surveys provide a powerful means of studying the amount and nature of the dark energy. Cluster surveys are complementary to studies using supernova distance estimates, because the cosmological parameter degeneracies are quite different. The redshift distribution of detected clusters in a deep, large solid angle survey is very sensitive to the dark energy equation of state, but robust constraints require mass-observable relations that connect cluster halo mass to observables such as the X-ray luminosity, Sunyaev-Zel'dovich effect distortion, galaxy light or weak lensing shear. Observed regularity in the cluster population and the application of multiple, independent mass estimators provide evidence that these scaling relations exist in the local and intermediate redshift universe. Large cluster surveys contain enough information to study the dark energy and solve for these scaling relations and their evolution with redshift. This self-calibrating nature of galaxy cluster surveys provides a level of robustness that is extremely attractive. Cosmological constraints from a survey can be improved by including more than just the redshift distribution. Limited followup of as few as 1% of the surveyed clusters to make detailed mass measurements improves the cosmological constraints. Including constraints on the mass function at each redshift provides additional power in solving for the evolution of the mass-observable relation. An analysis of the clustering of the surveyed clusters provides additional cosmological discriminating power. There are several planned or proposed cluster surveys that will take place over the next decade. Observational challenges include estimating cluster redshifts and understanding the survey completeness. These challenges vary with wavelength regime, suggesting that multiwavelength surveys provide the most promising avenue for precise galaxy cluster studies of the dark energy. This work is supported in part by the NASA Long
Spoorenberg, Veroniek; Hulscher, Marlies E. J. L.; Geskus, Ronald B.; de Reijke, Theo M.; Opmeer, Brent C.; Prins, Jan M.; Geerlings, Suzanne E.
2015-01-01
Background Up to 50% of hospital antibiotic use is inappropriate and therefore improvement strategies are urgently needed. We compared the effectiveness of two strategies to improve the quality of antibiotic use in patients with a complicated urinary tract infection (UTI). Methods In a multicentre, cluster-randomized trial 19 Dutch hospitals (departments Internal Medicine and Urology) were allocated to either a multi-faceted strategy including feedback, educational sessions, reminders and additional/optional improvement actions, or a competitive feedback strategy, i.e. providing professionals with non-anonymous comparative feedback on the department’s appropriateness of antibiotic use. Retrospective baseline- and post-intervention measurements were performed in 2009 and 2012 in 50 patients per department, resulting in 1,964 and 2,027 patients respectively. Principal outcome measures were nine validated guideline-based quality indicators (QIs) that define appropriate antibiotic use in patients with a complicated UTI, and a QI sumscore that summarizes for each patient the appropriateness of antibiotic use. Results Performance scores on several individual QIs showed improvement from baseline to post-intervention measurements, but no significant differences were found between both strategies. The mean patient’s QI sum score improved significantly in both strategy groups (multi-faceted: 61.7% to 65.0%, P = 0.04 and competitive feedback: 62.8% to 66.7%, P = 0.01). Compliance with the strategies was suboptimal, but better compliance was associated with more improvement. Conclusion The effectiveness of both strategies was comparable and better compliance with the strategies was associated with more improvement. To increase effectiveness, improvement activities should be rigorously applied, preferably by a locally initiated multidisciplinary team. Trial Registration Nederlands Trial Register 1742 PMID:26637169
Integrating Soft Data into Hydrologic Modeling to Improve Post-fire Parameter Estimates
NASA Astrophysics Data System (ADS)
Jung, H. Y.; Hogue, T. S.
2008-12-01
A significant problem with post-fire streamflow prediction is the limited availability of data for parameter estimation and for independent validation of model performance. The goal of the current study is to evaluate a range of optimization techniques which allow integration of alternative soft data into a hydrologic modeling and prediction framework and improve post-fire simulations. This project utilizes the Sacramento Soil Moisture Accounting Model (SAC-SMA), the National Weather Service operational conceptual rainfall-runoff model, and incorporates both discharge and geochemical data to estimate model parameters. The analysis is undertaken in a watershed which has undergone an extensive land cover change (fire) and for which both pre- and post-fire geochemical and streamflow data are available. We utilize the Shuffled Complex Evolution Metropolis (SCEM) and the Generalized Likelihood Uncertainty Estimation (GLUE) algorithms coupled to the SAC-SMA and integrate estimates of geochemically-derived flow components. Success is determined not only by the accurate prediction of total discharge, but also by the prediction of flow from contributing sources (i.e. overland, lateral and baseflow components). The coupled SCEM-SAC-SMA, using only discharge as a criterion, shows reasonable simulation of total runoff and various flow components under pre-fire conditions. Post-fire model simulations show less accurate simulation of total discharge and unrealistic representation of watershed behavior. Pre-fire model runs using the coupled GLUE-SAC-SMA show reasonable performance integrating total discharge as the threshold criterion, whereas the post-fire model run returned empty parameter sets (no sets met the threshold criteria). Predictions using the GLUE-SAC-SMA and derived flow components showed significant improvement, narrowing the uncertainty bounds in total discharge as well as all observed flow components.
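The GLUE step can be illustrated with a toy recession model standing in for SAC-SMA; the model form, prior range, and behavioural threshold below are assumptions. Parameter sets are sampled, those whose likelihood (here Nash-Sutcliffe efficiency) exceeds a threshold are kept as "behavioural", and uncertainty bounds are read off the behavioural ensemble:

```python
import numpy as np

rng = np.random.default_rng(42)
t = np.linspace(0, 10, 50)

def model(k):
    # toy linear-reservoir recession curve: q(t) = q0 * exp(-k t)
    return 10.0 * np.exp(-k * t)

obs = model(0.3) + rng.normal(0, 0.2, t.size)     # synthetic "observed" flow

# GLUE: Monte Carlo sampling of the parameter prior
k_samples = rng.uniform(0.05, 1.0, 5000)
nse = np.array([1 - np.sum((model(k) - obs) ** 2)
                    / np.sum((obs - obs.mean()) ** 2)
                for k in k_samples])
behavioural = k_samples[nse > 0.9]                # keep sets above threshold
lo, hi = np.percentile(behavioural, [5, 95])      # parameter uncertainty bounds
```

Empty behavioural sets, as reported for the post-fire runs, correspond here to no sample clearing the threshold, which signals that the model structure or data cannot reproduce the observed behavior.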
Estimating the impacts of federal efforts to improve energy efficiency: The case of buildings
LaMontagne, J; Jones, R; Nicholls, A; Shankle, S
1994-09-01
The US Department of Energy's Office of Energy Efficiency and Renewable Energy (EE) has for more than a decade focused its efforts on research to develop new technologies for improving the efficiency of energy use and increasing the role of renewable energy; success has usually been measured in terms of energy saved or displaced. Estimates of future energy savings remain an important factor in program planning and prioritization. A variety of internal and external factors are now radically changing the planning process, and in turn the composition and thrust of the EE program. The Energy Policy Act of 1992, the Framework Convention on Climate Change (and the Administration's Climate Change Action Plan), and concerns for the future of the economy (especially employment and international competitiveness) are increasing emphasis on technology deployment and near-term results. The Reinventing Government Initiative, the Government Performance and Results Act, and the Executive Order on Environmental Justice are all forcing Federal programs to demonstrate that they are producing desired results in a cost-effective manner. The application of Total Quality Management principles has increased the scope and importance of producing quantified measures of benefit. EE has established a process for estimating the benefits of DOE's energy efficiency and renewable energy programs called "Quality Metrics" (QM). The "metrics" are: energy, employment, equity, environment, risk, economics. This paper describes the approach taken by EE's Office of Building Technologies to prepare estimates of program benefits in terms of these metrics, presents the estimates, discusses their implications, and explores possible improvements to the QM process as it is currently configured.
Estimating the impacts of federal efforts to improve energy efficiency: The case of building
Nicolls, A.K.; Shankle, S.A.; LaMontagne, J.; Jones, R.E.
1994-11-01
The US Department of Energy's Office of Energy Efficiency and Renewable Energy [EE] has for more than a decade focused its efforts on research to develop new technologies for improving the efficiency of energy use and increasing the role of renewable energy; success has usually been measured in terms of energy saved or displaced. Estimates of future energy savings remain an important factor in program planning and prioritization. A variety of internal and external factors are now radically changing the planning process, and in turn the composition and thrust of the EE program. The Energy Policy Act of 1992, the Framework Convention on Climate Change (and the Administration's Climate Change Action Plan), and concerns for the future of the economy (especially employment and international competitiveness) are increasing emphasis on technology deployment and near-term results. The Reinventing Government Initiative, the Government Performance and Results Act, and the Executive Order on Environmental Justice are all forcing Federal programs to demonstrate that they are producing desired results in a cost-effective manner. The application of Total Quality Management principles has increased the scope and importance of producing quantified measures of benefit. EE has established a process for estimating the benefits of DOE's energy efficiency and renewable energy programs called "Quality Metrics" (QM). The "metrics" are: Energy; Environment; Employment; Risk; Equity; Economics. This paper describes the approach taken by EE's Office of Building Technologies to prepare estimates of program benefits in terms of these metrics, presents the estimates, discusses their implications, and explores possible improvements to the QM process as it is currently configured.
NASA Technical Reports Server (NTRS)
Zhang, Qingyuan; Cheng, Yen-Ben; Lyapustin, Alexei I.; Wang, Yujie; Zhang, Xiaoyang; Suyker, Andrew; Verma, Shashi; Shuai, Yanmin; Middleton, Elizabeth M.
2015-01-01
Satellite remote sensing estimates of Gross Primary Production (GPP) have routinely been made using spectral Vegetation Indices (VIs) over the past two decades. The Normalized Difference Vegetation Index (NDVI), the Enhanced Vegetation Index (EVI), the green band Wide Dynamic Range Vegetation Index (WDRVIgreen), and the green band Chlorophyll Index (CIgreen) have been employed to estimate GPP under the assumption that GPP is proportional to the product of VI and photosynthetically active radiation (PAR) (where VI is one of four VIs: NDVI, EVI, WDRVIgreen, or CIgreen). However, the empirical regressions between VI*PAR and GPP measured locally at flux towers do not pass through the origin (i.e., the zero X-Y value for regressions). Therefore they are somewhat difficult to interpret and apply. This study investigates (1) what are the scaling factors and offsets (i.e., regression slopes and intercepts) between the fraction of PAR absorbed by chlorophyll of a canopy (fAPARchl) and the VIs, and (2) whether the scaled VIs developed in (1) can eliminate the deficiency and improve the accuracy of GPP estimates. Three AmeriFlux maize and soybean fields were selected for this study, two of which are irrigated and one is rainfed. The four VIs and fAPARchl of the fields were computed with the MODerate resolution Imaging Spectroradiometer (MODIS) satellite images. The GPP estimation performance for the scaled VIs was compared to results obtained with the original VIs and evaluated with standard statistics: the coefficient of determination (R2), the root mean square error (RMSE), and the coefficient of variation (CV). Overall, the scaled EVI obtained the best performance. The performance of the scaled NDVI, EVI and WDRVIgreen was improved across sites, crop types and soil/background wetness conditions. The scaled CIgreen did not improve results, compared to the original CIgreen. The scaled green band indices (WDRVIgreen, CIgreen) did not exhibit superior performance to either the
NASA Astrophysics Data System (ADS)
Stenz, Ronald D.
As Deep Convective Systems (DCSs) are responsible for most severe weather events, increased understanding of these systems along with more accurate satellite precipitation estimates will improve NWS (National Weather Service) warnings and monitoring of hazardous weather conditions. A DCS can be classified into convective core (CC) regions (heavy rain), stratiform (SR) regions (moderate-light rain), and anvil (AC) regions (no rain). These regions share similar infrared (IR) brightness temperatures (BT), which can create large errors for many existing rain detection algorithms. This study assesses the performance of the National Mosaic and Multi-sensor Quantitative Precipitation Estimation System (NMQ) Q2, and a simplified version of the GOES-R Rainfall Rate algorithm (also known as the Self-Calibrating Multivariate Precipitation Retrieval, or SCaMPR), over the state of Oklahoma (OK) using OK MESONET observations as ground truth. While the average annual Q2 precipitation estimates were about 35% higher than MESONET observations, there were very strong correlations between these two data sets for multiple temporal and spatial scales. Additionally, the Q2 estimated precipitation distributions over the CC, SR, and AC regions of DCSs strongly resembled the MESONET observed ones, indicating that Q2 can accurately capture the precipitation characteristics of DCSs although it has a wet bias. SCaMPR retrievals were typically three to four times higher than the collocated MESONET observations, with relatively weak correlations during a year of comparisons in 2012. Overestimates from SCaMPR retrievals that produced a high false alarm rate were primarily caused by precipitation retrievals from the anvil regions of DCSs when collocated MESONET stations recorded no precipitation. A modified SCaMPR retrieval algorithm, employing both cloud optical depth and IR temperature, has the potential to make significant improvements to reduce the SCaMPR false alarm rate of retrieved
Coburn, T.C.; Freeman, P.A.; Attanasi, E.D.
2012-01-01
The primary objectives of this research were to (1) investigate empirical methods for establishing regional trends in unconventional gas resources as exhibited by historical production data and (2) determine whether or not incorporating additional knowledge of a regional trend in a suite of previously established local nonparametric resource prediction algorithms influences assessment results. Three different trend detection methods were applied to publicly available production data (well EUR aggregated to 80-acre cells) from the Devonian Antrim Shale gas play in the Michigan Basin. This effort led to the identification of a southeast-northwest trend in cell EUR values across the play that, in a very general sense, conforms to the primary fracture and structural orientations of the province. However, including this trend in the resource prediction algorithms did not lead to improved results. Further analysis indicated the existence of clustering among cell EUR values that likely dampens the contribution of the regional trend. The reason for the clustering, a somewhat unexpected result, is not completely understood, although the geological literature provides some possible explanations. With appropriate data, a better understanding of this clustering phenomenon may lead to important information about the factors and their interactions that control Antrim Shale gas production, which may, in turn, help establish a more general protocol for better estimating resources in this and other shale gas plays. ?? 2011 International Association for Mathematical Geology (outside the USA).
Improved estimates of upper-ocean warming and multi-decadal sea-level rise.
Domingues, Catia M; Church, John A; White, Neil J; Gleckler, Peter J; Wijffels, Susan E; Barker, Paul M; Dunn, Jeff R
2008-06-19
Changes in the climate system's energy budget are predominantly revealed in ocean temperatures and the associated thermal expansion contribution to sea-level rise. Climate models, however, do not reproduce the large decadal variability in globally averaged ocean heat content inferred from the sparse observational database, even when volcanic and other variable climate forcings are included. The sum of the observed contributions has also not adequately explained the overall multi-decadal rise. Here we report improved estimates of near-global ocean heat content and thermal expansion for the upper 300 m and 700 m of the ocean for 1950-2003, using statistical techniques that allow for sparse data coverage and applying recent corrections to reduce systematic biases in the most common ocean temperature observations. Our ocean warming and thermal expansion trends for 1961-2003 are about 50 per cent larger than earlier estimates but about 40 per cent smaller for 1993-2003, which is consistent with the recognition that previously estimated rates for the 1990s had a positive bias as a result of instrumental errors. On average, the decadal variability of the climate models with volcanic forcing now agrees approximately with the observations, but the modelled multi-decadal trends are smaller than observed. We add our observational estimate of upper-ocean thermal expansion to other contributions to sea-level rise and find that the sum of contributions from 1961 to 2003 is about 1.5 +/- 0.4 mm yr(-1), in good agreement with our updated estimate of near-global mean sea-level rise (using techniques established in earlier studies) of 1.6 +/- 0.2 mm yr(-1).
An Improved Performance Frequency Estimation Algorithm for Passive Wireless SAW Resonant Sensors
Liu, Boquan; Zhang, Chenrui; Ji, Xiaojun; Chen, Jing; Han, Tao
2014-01-01
Passive wireless surface acoustic wave (SAW) resonant sensors are suitable for applications in harsh environments. The traditional SAW resonant sensor system, however, requires a Fourier transform (FT), whose limited resolution decreases the accuracy of the measurement. To improve the accuracy and resolution of the measurement, a singular value decomposition (SVD)-based frequency estimation algorithm is applied to the wireless SAW resonant sensor response, which is a combination of single-tone undamped and damped sinusoid signals with the same frequency. Compared with the FT algorithm, the accuracy and resolution of the method used in the self-developed wireless SAW resonant sensor system are validated. PMID:25429410
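A minimal SVD-based (subspace) frequency estimator in the spirit of this abstract can be sketched as follows. This is an ESPRIT-style illustration, not the authors' exact algorithm, applied to a noiseless damped sinusoid; note that the estimate resolves the frequency far below the FFT bin width of fs/N ≈ 19.5 Hz, which is the "resolution restriction" the abstract refers to:

```python
import numpy as np

fs, f0, N = 10_000.0, 433.5, 512             # sample rate and true frequency (Hz)
n = np.arange(N)
x = np.exp(-0.002 * n) * np.cos(2 * np.pi * f0 / fs * n)   # damped sinusoid

# Hankel data matrix: its row space is spanned by the signal's Vandermonde vectors
L = 128
H = np.array([x[i:i + L] for i in range(N - L)])
U, s, Vt = np.linalg.svd(H, full_matrices=False)
Us = Vt[:2].T                                 # 2-dim signal subspace (pole pair)

# shift invariance: Us[1:] ~= Us[:-1] @ Phi, and the eigenvalues of Phi
# are the signal poles z = exp(-d + j*2*pi*f/fs)
Phi = np.linalg.lstsq(Us[:-1], Us[1:], rcond=None)[0]
poles = np.linalg.eigvals(Phi)
f_est = abs(np.angle(poles[0])) / (2 * np.pi) * fs
```

The damping only shrinks the pole magnitude, so the frequency survives in the pole angle; this is what makes subspace methods attractive for the combined undamped/damped response of a SAW resonator.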
NASA Astrophysics Data System (ADS)
Rafieeinasab, Arezoo; Norouzi, Amir; Seo, Dong-Jun; Nelson, Brian
2015-12-01
For monitoring and prediction of water-related hazards in urban areas such as flash flooding, high-resolution hydrologic and hydraulic modeling is necessary. Because of large sensitivity and scale dependence of rainfall-runoff models to errors in quantitative precipitation estimates (QPE), it is very important that the accuracy of QPE be improved in high-resolution hydrologic modeling to the greatest extent possible. With the availability of multiple radar-based precipitation products in many areas, one may now consider fusing them to produce more accurate high-resolution QPE for a wide spectrum of applications. In this work, we formulate and comparatively evaluate four relatively simple procedures for such fusion based on Fisher estimation and its conditional bias-penalized variant: Direct Estimation (DE), Bias Correction (BC), Reduced-Dimension Bias Correction (RBC) and Simple Estimation (SE). They are applied to fuse the Multisensor Precipitation Estimator (MPE) and radar-only Next Generation QPE (Q2) products at the 15-min 1-km resolution (Experiment 1), and the MPE and Collaborative Adaptive Sensing of the Atmosphere (CASA) QPE products at the 15-min 500-m resolution (Experiment 2). The resulting fused estimates are evaluated using the 15-min rain gauge observations from the City of Grand Prairie in the Dallas-Fort Worth Metroplex (DFW) in north Texas. The main criterion used for evaluation is that the fused QPE improves over the ingredient QPEs at their native spatial resolutions, and that, at the higher resolution, the fused QPE improves not only over the ingredient higher-resolution QPE but also over the ingredient lower-resolution QPE trivially disaggregated using the ingredient high-resolution QPE. All four procedures assume that the ingredient QPEs are unbiased, which is not likely to hold true in reality even if real-time bias correction is in operation. To test robustness under more realistic conditions, the fusion procedures were evaluated with and
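The simplest reading of this fusion idea, combining two unbiased estimates by Fisher (inverse-error-variance) weighting, can be sketched as follows. The rain-rate truth and error variances are hypothetical, and this is not the full DE/BC/RBC/SE machinery of the study:

```python
import numpy as np

rng = np.random.default_rng(7)
truth = 4.0                                   # hypothetical true rate, mm/15 min
var_a, var_b = 1.0, 0.25                      # assumed error variances of two QPEs
qpe_a = truth + rng.normal(0.0, np.sqrt(var_a), 10_000)
qpe_b = truth + rng.normal(0.0, np.sqrt(var_b), 10_000)

# Fisher weights: each unbiased estimate weighted by its inverse error variance
w_a = (1 / var_a) / (1 / var_a + 1 / var_b)
fused = w_a * qpe_a + (1 - w_a) * qpe_b
# theoretical fused variance: 1 / (1/var_a + 1/var_b) = 0.2, below both inputs
```

The fused error variance is smaller than that of either ingredient, which is exactly the evaluation criterion the abstract sets (the fused QPE must improve over both ingredient QPEs); the unbiasedness assumption it flags is what the weights rely on.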
Vogt, Natalja; Demaison, Jean; Cocinero, Emilio J; Écija, Patricia; Lesarri, Alberto; Rudolph, Heinz Dieter; Vogt, Jürgen
2016-06-21
Fructose and deoxyribose (24 and 19 atoms, respectively) are too large for determining accurate equilibrium structures, either by high-level ab initio methods or by experiments alone. We show in this work that the semiexperimental (SE) mixed estimation (ME) method offers a valuable alternative for equilibrium structure determinations in moderate-sized molecules such as these monosaccharides or other biochemical building blocks. The SE/ME method proceeds by fitting experimental rotational data for a number of isotopologues, which have been corrected with theoretical vibration-rotation interaction parameters (α_i), and predicate observations for the structure. The derived SE constants are later supplemented by carefully chosen structural parameters from medium-level ab initio calculations, including those for hydrogen atoms. The combined data are then used in a weighted least-squares fit to determine an equilibrium structure (r_e). We applied the ME method here to fructose and 2-deoxyribose and checked the accuracy of the calculations for 2-deoxyribose against the high-level ab initio r_e structure fully optimized at the CCSD(T) level. We show that the ME method allows determining a complete and reliable equilibrium structure for relatively large molecules, even when experimental rotational information includes a limited number of isotopologues. With a moderate computational cost the ME method could be applied to larger molecules, thereby improving the structural evidence for subtle orbital interactions such as the anomeric effect.
Wang, Jun; Zhou, Bihua; Zhou, Shudao
2016-01-01
This paper proposes an improved cuckoo search (ICS) algorithm to estimate the parameters of chaotic systems. In order to improve the optimization capability of the basic cuckoo search (CS) algorithm, orthogonal design and a simulated annealing operation are incorporated into the CS algorithm to enhance its exploitation search ability. The proposed algorithm is then used to estimate the parameters of the Lorenz chaotic system and the Chen chaotic system under noiseless and noisy conditions, respectively. The numerical results demonstrate that the algorithm can estimate parameters with high accuracy and reliability. Finally, the results are compared with the CS algorithm, genetic algorithm, and particle swarm optimization algorithm, and the compared results demonstrate that the method is efficient and superior. PMID:26880874
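A minimal basic cuckoo search (without the orthogonal design and simulated annealing refinements of the ICS) can be sketched on a toy chaotic-map parameter estimation problem. The logistic map stands in for the Lorenz/Chen systems, and the bounds, step size, and abandonment fraction are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def series(r, x0=0.3, n=8):
    """Short logistic-map trajectory x_{i+1} = r * x_i * (1 - x_i)."""
    x = np.empty(n)
    x[0] = x0
    for i in range(n - 1):
        x[i + 1] = r * x[i] * (1 - x[i])
    return x

obs = series(3.6)                              # "observed" data, true r = 3.6
cost = lambda r: float(np.sum((series(r) - obs) ** 2))

n_nests, pa = 15, 0.25                         # nest count, abandonment fraction
nests = rng.uniform(3.0, 4.0, n_nests)
fit = np.array([cost(r) for r in nests])
for _ in range(300):
    best = nests[np.argmin(fit)]
    # heavy-tailed (Levy-like) steps biased toward the current best nest
    step = 0.05 * rng.standard_cauchy(n_nests)
    new = np.clip(best + step * (nests - best), 3.0, 4.0)
    new_fit = np.array([cost(r) for r in new])
    improved = new_fit < fit                   # greedy replacement
    nests[improved], fit[improved] = new[improved], new_fit[improved]
    # abandon the worst pa fraction and rebuild those nests at random
    worst = np.argsort(fit)[-int(pa * n_nests):]
    nests[worst] = rng.uniform(3.0, 4.0, worst.size)
    fit[worst] = np.array([cost(r) for r in nests[worst]])
best_r = float(nests[np.argmin(fit)])
```

The random abandonment keeps exploring the parameter range while the greedy replacement refines good nests; the paper's orthogonal design and annealing steps are aimed at improving exactly this exploitation phase.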
Estimating the value of improved wastewater treatment: the case of River Ganga, India.
Birol, Ekin; Das, Sukanya
2010-11-01
In this paper we employ a stated preference environmental valuation technique, namely the choice experiment method, to estimate local public's willingness to pay (WTP) for improvements in the capacity and technology of a sewage treatment plant (STP) in Chandernagore municipality, located on the banks of the River Ganga in India. A pilot choice experiment study is administered to 150 randomly selected Chandernagore residents and the data are analysed using the conditional logit model with interactions. The results reveal that residents of this municipality are willing to pay significant amounts in terms of higher monthly municipality taxes to ensure the full capacity of the STP is used for primary treatment and the technology is upgraded to enable secondary treatment. Overall, the results reported in this paper support increased investments to improve the capacity and technology of STPs to reduce water pollution, and hence environmental and health risks that are currently threatening the sustainability of the economic, cultural and religious values this sacred river generates.
NASA Astrophysics Data System (ADS)
Cherif, Ines; Alexandridis, Thomas; Chambel Leitao, Pedro; Jauch, Eduardo; Stavridou, Domna; Iordanidis, Charalampos; Silleos, Nikolaos; Misopolinos, Nikolaos; Neves, Ramiro; Safara Araujo, Antonio
2013-04-01
). A correlation analysis was performed at the common spatial resolution of 1 km using selected homogeneous pixels (from the land cover point of view). A statistically significant correlation factor of 0.6 was found, and the RMSE was 0.92 mm/day. Using raster meteorological data, the ITA-MyWater algorithms were able to capture the variability of weather patterns over the river basin and thus improved the spatial distribution of evapotranspiration estimates at low resolution. The work presented is part of the FP7-EU project "Merging hydrological models and Earth observation data for reliable information on water - MyWater".
Alexander, Kelly T; Dreibelbis, Robert; Freeman, Matthew C; Ojeny, Betty; Rheingans, Richard
2013-09-01
Water, sanitation, and hygiene (WASH) programs in schools have been shown to improve health and reduce absence. In resource-poor settings, barriers such as inadequate budgets, lack of oversight, and competing priorities limit effective and sustained WASH service delivery in schools. We employed a cluster-randomized trial to examine if schools could improve WASH conditions within existing administrative structures. Seventy schools were divided into a control group and three intervention groups. All intervention schools received a budget for purchasing WASH-related items. One group received no further intervention. A second group received additional funding for hiring a WASH attendant and making repairs to WASH infrastructure, and a third group was given guides for student and community monitoring of conditions. Intervention schools made significant improvements in provision of soap and handwashing water, treated drinking water, and clean latrines compared with controls. Teachers reported benefits of monitoring, repairs, and a WASH attendant, but quantitative data of WASH conditions did not determine whether expanded interventions out-performed our budget-only intervention. Providing schools with budgets for WASH operational costs improved access to necessary supplies, but did not ensure consistent service delivery to students. Further work is needed to clarify how schools can provide WASH services daily.
Improved Atmospheric Soundings and Error Estimates from Analysis of AIRS/AMSU Data
NASA Technical Reports Server (NTRS)
Susskind, Joel
2007-01-01
The AIRS Science Team Version 5.0 retrieval algorithm became operational at the Goddard DAAC in July 2007, generating near real-time products from analysis of AIRS/AMSU sounding data. This algorithm contains many significant theoretical advances over the AIRS Science Team Version 4.0 retrieval algorithm used previously. Three very significant developments of Version 5 are: 1) the development and implementation of an improved Radiative Transfer Algorithm (RTA) which allows for accurate treatment of non-Local Thermodynamic Equilibrium (non-LTE) effects on shortwave sounding channels; 2) the development of methodology to obtain very accurate case-by-case product error estimates which are in turn used for quality control; and 3) development of an accurate AIRS-only cloud clearing and retrieval system. These theoretical improvements taken together enabled a new methodology to be developed which further improves soundings in partially cloudy conditions, without the need for microwave observations in the cloud clearing step as has been done previously. In this methodology, longwave CO2 channel observations in the spectral region 700 cm-1 to 750 cm-1 are used exclusively for cloud clearing purposes, while shortwave CO2 channels in the spectral region 2195 cm-1 to 2395 cm-1 are used for temperature sounding purposes. The new methodology for improved error estimates and their use in quality control is described briefly and results are shown indicative of their accuracy. Results are also shown of forecast impact experiments assimilating AIRS Version 5.0 retrieval products in the Goddard GEOS 5 Data Assimilation System using different quality control thresholds.
2014-01-01
Background In high-resource settings, obstetric ultrasound is a standard component of prenatal care used to identify pregnancy complications and to establish an accurate gestational age in order to improve obstetric care. Whether or not ultrasound use will improve care and ultimately pregnancy outcomes in low-resource settings is unknown. Methods/Design This multi-country cluster randomized trial will assess the impact of antenatal ultrasound screening performed by health care staff on a composite outcome consisting of maternal mortality and maternal near-miss, stillbirth and neonatal mortality in low-resource community settings. The trial will utilize an existing research infrastructure, the Global Network for Women’s and Children’s Health Research with sites in Pakistan, Kenya, Zambia, Democratic Republic of Congo and Guatemala. A maternal and newborn health registry in defined geographic areas which documents all pregnancies and their outcomes to 6 weeks post-delivery will provide population-based rates of maternal mortality and morbidity, stillbirth, neonatal mortality and morbidity, and health care utilization for study clusters. A total of 58 study clusters each with a health center and about 500 births per year will be randomized (29 intervention and 29 control). The intervention includes training of health workers (e.g., nurses, midwives, clinical officers) to perform ultrasound examinations during antenatal care, generally at 18–22 and at 32–36 weeks for each subject. Women who are identified as having a complication of pregnancy will be referred to a hospital for appropriate care. Finally, the intervention includes community sensitization activities to inform women and their families of the availability of ultrasound at the antenatal care clinic and training in emergency obstetric and neonatal care at referral facilities. Discussion In summary, our trial will evaluate whether introduction of ultrasound during antenatal care improves pregnancy
Strategies for Improving Power in Cluster Randomized Studies of Professional Development
ERIC Educational Resources Information Center
Kelcey, Ben; Spybrook, Jessaca; Zhang, Jiaqi; Phelps, Geoffrey; Jones, Nathan
2015-01-01
With research indicating substantial differences among teachers in terms of their effectiveness (Nye, Konstantopoulous, & Hedges, 2004), a major focus of recent research in education has been on improving teacher quality through professional development (Desimone, 2009; Institute of Educations Sciences [IES], 2012; Measures of Effective…
ERIC Educational Resources Information Center
Xu, Zeyu; Nichols, Austin
2010-01-01
The gold standard in making causal inference on program effects is a randomized trial. Most randomization designs in education randomize classrooms or schools rather than individual students. Such "clustered randomization" designs have one principal drawback: They tend to have limited statistical power or precision. This study aims to provide…
NASA Astrophysics Data System (ADS)
Anayah, F. M.; Kaluarachchi, J. J.
2014-06-01
Reliable estimation of evapotranspiration (ET) is important for the purpose of water resources planning and management. Complementary methods, including complementary relationship areal evapotranspiration (CRAE), advection aridity (AA) and Granger and Gray (GG), have been used to estimate ET because these methods are simple and practical in estimating regional ET using meteorological data only. However, prior studies have found limitations in these methods especially in contrasting climates. This study aims to develop a calibration-free universal method using the complementary relationships to compute regional ET in contrasting climatic and physical conditions with meteorological data only. The proposed methodology consists of a systematic sensitivity analysis using the existing complementary methods. This work used 34 global FLUXNET sites where eddy covariance (EC) fluxes of ET are available for validation. A total of 33 alternative model variations from the original complementary methods were proposed. Further analysis using statistical methods and simplified climatic class definitions produced one distinctly improved GG-model-based alternative. The proposed model produced a single-step ET formulation with results equal to or better than the recent studies using data-intensive, classical methods. Average root mean square error (RMSE), mean absolute bias (BIAS) and R2 (coefficient of determination) across 34 global sites were 20.57 mm month-1, 10.55 mm month-1 and 0.64, respectively. The proposed model showed a step forward toward predicting ET in large river basins with limited data and requiring no calibration.
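The CRAE, AA and GG methods discussed above all build on Bouchet's complementary relationship between actual, potential and wet-environment ET. A minimal sketch of that core idea (illustrative only, not the paper's calibrated GG-based variant; the input values are hypothetical, in mm/month):

```python
def actual_et(potential_et, wet_environment_et):
    """Bouchet's complementary relationship: ETa = 2*ETw - ETp.

    As the surface dries, potential ET rises while actual ET falls by a
    complementary amount. The result is clipped to the physical range
    [0, ETw], since actual ET cannot exceed wet-environment ET.
    """
    eta = 2.0 * wet_environment_et - potential_et
    return max(0.0, min(eta, wet_environment_et))


# Dry conditions: high atmospheric demand, low actual ET.
print(actual_et(potential_et=150.0, wet_environment_et=100.0))  # 50.0
# Wet conditions: actual ET approaches the wet-environment limit.
print(actual_et(potential_et=50.0, wet_environment_et=100.0))   # 100.0
```

The published methods differ mainly in how ETp and ETw are computed from meteorological data; the complementary closure itself is this simple.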
Gotardo, Paulo Fabiano Urnau; Bellon, Olga Regina Pereira; Boyer, Kim L; Silva, Luciano
2004-12-01
This paper presents a novel range image segmentation method employing an improved robust estimator to iteratively detect and extract distinct planar and quadric surfaces. Our robust estimator extends M-estimator Sample Consensus/Random Sample Consensus (MSAC/RANSAC) to use local surface orientation information, enhancing the accuracy of inlier/outlier classification when processing noisy range data describing multiple structures. An efficient approximation to the true geometric distance between a point and a quadric surface also contributes to effectively reject weak surface hypotheses and avoid the extraction of false surface components. Additionally, a genetic algorithm was specifically designed to accelerate the optimization process of surface extraction, while avoiding premature convergence. We present thorough experimental results with quantitative evaluation against ground truth. The segmentation algorithm was applied to three real range image databases and competes favorably against eleven other segmenters using the most popular evaluation framework in the literature. Our approach lends itself naturally to parallel implementation and application in real-time tasks. The method fits well into several of today's applications in man-made environments, such as target detection and autonomous navigation, for which obstacle detection, but not description or reconstruction, is required. It can also be extended to process point clouds resulting from range image registration.
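The MSAC/RANSAC family that this segmenter extends can be illustrated with a minimal plane-fitting consensus loop. This is a sketch of generic RANSAC for a single plane, not the authors' orientation-augmented estimator or their quadric-distance approximation; the sample data are synthetic:

```python
import random


def plane_from_points(p1, p2, p3):
    """Unit normal n and offset d of the plane n.x = d through 3 points."""
    ux, uy, uz = (p2[i] - p1[i] for i in range(3))
    vx, vy, vz = (p3[i] - p1[i] for i in range(3))
    n = (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)
    norm = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5
    if norm == 0.0:          # degenerate (collinear) sample
        return None
    n = tuple(c / norm for c in n)
    return n, sum(n[i] * p1[i] for i in range(3))


def ransac_plane(points, n_iter=200, threshold=0.05, seed=0):
    """Return the plane with the largest consensus set, and its inliers."""
    rng = random.Random(seed)
    best, best_inliers = None, []
    for _ in range(n_iter):
        model = plane_from_points(*rng.sample(points, 3))
        if model is None:
            continue
        n, d = model
        inliers = [p for p in points
                   if abs(sum(n[i] * p[i] for i in range(3)) - d) < threshold]
        if len(inliers) > len(best_inliers):
            best, best_inliers = model, inliers
    return best, best_inliers


# 25 points on the plane z = 0 plus two gross outliers.
pts = [(x / 10, y / 10, 0.0) for x in range(5) for y in range(5)]
pts += [(0.5, 0.5, 2.0), (0.2, 0.8, -1.5)]
model, inliers = ransac_plane(pts)
print(len(inliers))  # 25: both outliers rejected
```

The paper's contribution is precisely what this sketch lacks: using local surface normals in the inlier test so that coincidentally close points with inconsistent orientation are rejected.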
State Estimation and Forecasting of the Ski-Slope Model Using an Improved Shadowing Filter
NASA Astrophysics Data System (ADS)
Mat Daud, Auni Aslah
In this paper, we present the application of the gradient descent of indeterminism (GDI) shadowing filter to a chaotic system, the ski-slope model. The paper focuses on the quality of the estimated states and their usability for forecasting. One main problem is that the existing GDI shadowing filter fails to provide stability to the convergence of the root mean square error and the last point error of the ski-slope model. Furthermore, there are unexpected cases in which better state estimates give worse forecasts than worse state estimates. We investigate these unexpected cases in particular and show how the presence of the humps contributes to them. However, the results show that the GDI shadowing filter can successfully be applied to the ski-slope model with only a slight modification, namely by introducing an adaptive step-size to ensure the convergence of indeterminism. We investigate its advantages over a fixed step-size and how it can improve the performance of our shadowing filter.
Estimation of contrast agent bolus arrival delays for improved reproducibility of liver DCE MRI
NASA Astrophysics Data System (ADS)
Chouhan, Manil D.; Bainbridge, Alan; Atkinson, David; Punwani, Shonit; Mookerjee, Rajeshwar P.; Lythgoe, Mark F.; Taylor, Stuart A.
2016-10-01
Delays between contrast agent (CA) arrival at the site of vascular input function (VIF) sampling and the tissue of interest affect dynamic contrast enhanced (DCE) MRI pharmacokinetic modelling. We investigate effects of altering VIF CA bolus arrival delays on liver DCE MRI perfusion parameters, propose an alternative approach to estimating delays and evaluate reproducibility. Thirteen healthy volunteers (28.7 ± 1.9 years, seven males) underwent liver DCE MRI using dual-input single compartment modelling, with reproducibility (n = 9) measured at 7 days. Effects of VIF CA bolus arrival delays were assessed for arterial and portal venous input functions. Delays were pre-estimated using linear regression, with restricted free modelling around the pre-estimated delay. Perfusion parameters and 7 days reproducibility were compared using this method, freely modelled delays and no delays using one-way ANOVA. Reproducibility was assessed using Bland-Altman analysis of agreement. Maximum percent change relative to parameters obtained using zero delays, were -31% for portal venous (PV) perfusion, +43% for total liver blood flow (TLBF), +3247% for hepatic arterial (HA) fraction, +150% for mean transit time and -10% for distribution volume. Differences were demonstrated between the 3 methods for PV perfusion (p = 0.0085) and HA fraction (p < 0.0001), but not other parameters. Improved mean differences and Bland-Altman 95% Limits-of-Agreement for reproducibility of PV perfusion (9.3 ml/min/100 g, ±506.1 ml/min/100 g) and TLBF (43.8 ml/min/100 g, ±586.7 ml/min/100 g) were demonstrated using pre-estimated delays with constrained free modelling. CA bolus arrival delays cause profound differences in liver DCE MRI quantification. Pre-estimation of delays with constrained free modelling improved 7 days reproducibility of perfusion parameters in volunteers.
Improved estimation of anomalous diffusion exponents in single-particle tracking experiments
NASA Astrophysics Data System (ADS)
Kepten, Eldad; Bronshtein, Irena; Garini, Yuval
2013-05-01
The mean square displacement is a central tool in the analysis of single-particle tracking experiments, shedding light on various biophysical phenomena. Frequently, parameters are extracted by performing time averages on single-particle trajectories followed by ensemble averaging. This procedure, however, suffers from two systematic errors when applied to particles that perform anomalous diffusion. The first is significant at short-time lags and is induced by measurement errors. The second arises from the natural heterogeneity in biophysical systems. We show how to estimate and correct these two errors and improve the estimation of the anomalous parameters for the whole particle distribution. As a consequence, we manage to characterize ensembles of heterogeneous particles even for rather short and noisy measurements where regular time-averaged mean square displacement analysis fails. We apply this method to both simulations and in vivo measurements of telomere diffusion in 3T3 mouse embryonic fibroblast cells. The motion of telomeres is found to be subdiffusive with an average exponent constant in time. Individual telomere exponents are normally distributed around the average exponent. The proposed methodology has the potential to improve experimental accuracy while maintaining lower experimental costs and complexity.
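The time-averaged MSD and the log-log exponent fit that this analysis starts from can be sketched as plain least squares on a 1-D trajectory. This shows only the standard estimator whose biases the paper corrects; the error corrections themselves are not included:

```python
import math


def time_averaged_msd(traj, max_lag):
    """Time-averaged mean square displacement of a 1-D trajectory
    for lags 1..max_lag."""
    msd = []
    for lag in range(1, max_lag + 1):
        disp = [(traj[i + lag] - traj[i]) ** 2
                for i in range(len(traj) - lag)]
        msd.append(sum(disp) / len(disp))
    return msd


def anomalous_exponent(msd):
    """Slope of log(MSD) vs log(lag): ~1 for normal diffusion,
    <1 for subdiffusion (as found for telomeres), >1 superdiffusion."""
    xs = [math.log(lag) for lag in range(1, len(msd) + 1)]
    ys = [math.log(m) for m in msd]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))


# Sanity check on ballistic motion, where MSD(lag) = lag**2 exactly:
traj = [float(i) for i in range(50)]
print(anomalous_exponent(time_averaged_msd(traj, 8)))  # 2.0
```

The paper's point is that on short, noisy experimental trajectories this naive fit is biased at short lags by localization error and across the ensemble by particle heterogeneity, both of which must be estimated and removed.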
Using dark current data to estimate AVIRIS noise covariance and improve spectral analyses
NASA Technical Reports Server (NTRS)
Boardman, Joseph W.
1995-01-01
Starting in 1994, all AVIRIS data distributions include a new product useful for quantification and modeling of the noise in the reported radiance data. The 'postcal' file contains approximately 100 lines of dark current data collected at the end of each data acquisition run. In essence, this is a regular spectral-image cube, with 614 samples, 100 lines and 224 channels, collected with a closed shutter. Since there is no incident radiance signal, the recorded DN measure only the DC signal level and the noise in the system. Similar dark current measurements, made at the end of each line, are used, with a 100-line moving average, to remove the DC signal offset. Therefore, the pixel-by-pixel fluctuations about the mean of this dark current image provide an excellent model for the additive noise that is present in AVIRIS reported radiance data. The 61,400 dark current spectra can be used to calculate the noise levels in each channel and the noise covariance matrix. Both of these noise parameters should be used to improve spectral processing techniques. Some processing techniques, such as spectral curve fitting, will benefit from a robust estimate of the channel-dependent noise levels. Other techniques, such as automated unmixing and classification, will be improved by the stable and scene-independent noise covariance estimate. Future imaging spectrometry systems should have a similar ability to record dark current data, permitting this noise characterization and modeling.
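The per-channel noise levels and the noise covariance matrix described above reduce to sample statistics over the closed-shutter spectra. A sketch with toy dimensions (a real AVIRIS 'postcal' cube is 614 samples x 100 lines x 224 channels; here each spectrum is just a short list of DN values):

```python
def channel_noise_stats(dark_spectra):
    """Per-channel mean, noise standard deviation, and sample covariance
    matrix from a list of closed-shutter dark-current spectra.

    dark_spectra: list of spectra, each a list of DN values per channel.
    Fluctuations about the channel means are the additive-noise model.
    """
    n = len(dark_spectra)
    k = len(dark_spectra[0])
    means = [sum(s[c] for s in dark_spectra) / n for c in range(k)]
    cov = [[sum((s[a] - means[a]) * (s[b] - means[b])
                for s in dark_spectra) / (n - 1)
            for b in range(k)]
           for a in range(k)]
    stds = [cov[c][c] ** 0.5 for c in range(k)]
    return means, stds, cov


# Three 2-channel dark spectra (toy values).
means, stds, cov = channel_noise_stats([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
print(means, stds)  # [3.0, 4.0] [2.0, 2.0]
```

The diagonal of `cov` gives the channel-dependent noise variances used in curve fitting; the full matrix is the scene-independent covariance estimate used in unmixing and classification.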
NASA Astrophysics Data System (ADS)
Hohle, M. M.; Eisenbeiss, T.; Mugrauer, M.; Freistetter, F.; Moualla, M.; Neuhäuser, R.; Raetz, St.; Schmidt, T. O. B.; Tetzlaff, N.; Vaňko, M.
2009-05-01
In this work we present detailed photometric results for the trapezium-like nearby galactic OB clusters NGC 1502 and NGC 2169, carried out at the University Observatory Jena. We determined absolute BVRI magnitudes of the mostly resolved components using Landolt standard stars. This multi-colour photometry enables us to estimate the spectral type and absorption, as well as the masses of the components, using models of stellar evolution; these quantities were not available for most of the cluster members in the literature so far. Furthermore, we investigated the optical spectrum of the components ADS 2984A and SZ Cam of the sextuple system in NGC 1502. Our spectra clearly confirm the multiplicity of these components, which is the first investigation of this kind at the University Observatory Jena. Based on observations obtained with telescopes of the University Observatory Jena, which is operated by the Astrophysical Institute of the Friedrich-Schiller-University.
NASA Astrophysics Data System (ADS)
Hazenberg, Pieter; Leijnse, Hidde; Uijlenhoet, Remko
2016-04-01
Weather radars provide information on the characteristics of precipitation at high spatial and temporal resolution. Unfortunately, rainfall measurements by radar are affected by multiple error sources. The current study is focused on the impact of variations of the raindrop size distribution on radar rainfall estimates. Such variations lead to errors in the estimated rainfall intensity (R) and specific attenuation (k) when using fixed relations for the conversion of the observed reflectivity (Z) into R and k. For non-polarimetric radar, this error source has received relatively little attention compared to other error sources. We propose to link the parameters of the Z-R and Z-k relations directly to those of the normalized gamma DSD. The benefit of this procedure is that it reduces the number of unknown parameters. In this work, the DSD parameters are obtained using 1) surface observations from a Parsivel and a Thies LPM disdrometer, and 2) a Monte Carlo optimization procedure using surface rain gauge observations. The impact of both approaches for a given precipitation type is assessed for 45 days of summertime precipitation observed in The Netherlands. Accounting for DSD variations using disdrometer observations leads to an improved radar QPE product as compared to applying climatological Z-R and Z-k relations. This especially holds for situations where widespread stratiform precipitation is observed. The best results are obtained when the DSD parameters are optimized. However, the optimized Z-R and Z-k relations show an unrealistic variability that arises from uncorrected error sources. As such, the optimization approach does not result in a realistic DSD shape but instead also accounts for uncorrected error sources, resulting in the best radar rainfall adjustment. Therefore, to further improve the quality of precipitation estimates by weather radar, use should be made either of polarimetric radar or of an extended disdrometer network.
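The fixed Z-R conversion whose limitations motivate this study is a one-line power-law inversion. A sketch using the classic Marshall-Palmer coefficients as common defaults (the study's DSD-linked and optimized parameter values differ):

```python
def rain_rate_from_reflectivity(dbz, a=200.0, b=1.6):
    """Invert the power law Z = a * R**b for rain rate R (mm/h).

    dbz  : measured reflectivity in dBZ (Z_linear = 10**(dBZ/10), mm^6/m^3)
    a, b : Z-R coefficients; 200/1.6 are the Marshall-Palmer defaults.
           In the study these depend on the (normalized gamma) DSD.
    """
    z_linear = 10.0 ** (dbz / 10.0)
    return (z_linear / a) ** (1.0 / b)


# Z_linear = 200 (i.e. ~23.01 dBZ) corresponds to R = 1 mm/h by definition.
print(round(rain_rate_from_reflectivity(23.0103), 3))  # 1.0
```

Because a and b in reality vary with drop size distribution, any fixed pair produces biased R and k; tying them to the DSD parameters, as the abstract proposes, removes that structural error.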
Crop suitability monitoring for improved yield estimations with 100m PROBA-V data
NASA Astrophysics Data System (ADS)
Özüm Durgun, Yetkin; Gilliams, Sven; Gobin, Anne; Duveiller, Grégory; Djaby, Bakary; Tychon, Bernard
2015-04-01
This study has been realised within the framework of a PhD aimed at advancing agricultural monitoring with improved yield estimations using SPOT VEGETATION remotely sensed data. For the first research question, the aim was to improve dry matter productivity (DMP) for C3 and C4 plants by adding a water stress factor. Additionally, the relation between the actual crop yield and DMP was studied. One of the limitations was the lack of crop-specific maps, which leads to the second research question on 'crop suitability monitoring'. The objective of this work is to create a methodological approach based on the spectral and temporal characteristics of PROBA-V images and ancillary data such as meteorology, soil and topographic data to improve the estimation of annual crop yields. The PROBA-V satellite was launched on 6 May 2013, and was designed to bridge the gap in space-borne vegetation measurements between SPOT-VGT (March 1998 - May 2014) and the upcoming Sentinel-3 satellites scheduled for launch in 2015/2016. PROBA-V has products in four spectral bands: BLUE (centred at 0.463 µm), RED (0.655 µm), NIR (0.845 µm), and SWIR (1.600 µm), with a spatial resolution ranging from 1 km to 300 m. Due to the construction of the sensor, the central camera can provide a 100 m data product with a 5 to 8 day revisit time. Although the 100 m data product is still in the test phase, a methodology for crop suitability monitoring was developed. The multi-spectral composites, NDVI (Normalised Difference Vegetation Index, (NIR - RED)/(NIR + RED)) and NDII (Normalised Difference Infrared Index, (NIR - SWIR)/(NIR + SWIR)) profiles are used in addition to secondary data such as digital elevation data, precipitation, temperature, soil types and administrative boundaries to improve the accuracy of crop yield estimations. The methodology is evaluated on several FP7 SIGMA test sites for the 2014 - 2015 period. Reference data in the form of vector GIS with boundaries and cover type of agricultural fields are
NASA Astrophysics Data System (ADS)
Minjarez-Sosa, Carlos Manuel
Thunderstorms that occur in areas of complex terrain are a major severe weather hazard in the intermountain western U.S. Short-term quantitative precipitation estimation (QPE) in complex terrain is a pressing need to better forecast flash flooding. Currently available techniques for QPE, which utilize a combination of rain gauge and weather radar information, may underestimate precipitation in areas where gauges do not exist or there is radar beam blockage. These are typically very mountainous and remote areas that are quite vulnerable to flash flooding because of the steep topography. Lightning has been one of the novel ways suggested by the scientific community as an alternative for estimating precipitation over regions that experience convective precipitation, especially those continental areas with complex topography where precipitation sensor measurements are scarce. This dissertation investigates the relationship between cloud-to-ground lightning and precipitation associated with convection, with the purpose of estimating precipitation, mainly over areas of complex terrain which have precipitation sensor coverage problems (e.g. Southern Arizona). The results of this research are presented in two papers. The first, entitled Toward Development of Improved QPE in Complex Terrain Using Cloud-to-Ground Lightning Data: A Case Study for the 2005 Monsoon in Southern Arizona, was published in the Journal of Hydrometeorology in December 2012. This initial study explores the relationship between cloud-to-ground lightning occurrences and multi-sensor gridded precipitation over southern Arizona. QPE is performed using a least squares approach for several time resolutions (seasonal, i.e. June, July and August; 24-hour; and hourly) and for an 8 km grid size. The paper also presents problems that arise when the time resolution is increased, such as the spatial misplacing of discrete lightning events with gridded precipitation and the need to define a "diurnal day" that is
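The least squares approach mentioned above can be sketched as an ordinary linear regression of gridded precipitation on cloud-to-ground flash counts per grid cell. This is illustrative only; the published regression setup and coefficients differ, and the data here are hypothetical:

```python
def fit_rain_per_flash(flash_counts, precip):
    """Ordinary least-squares fit precip ~ slope * flashes + intercept.

    flash_counts : CG flash counts per grid cell and period (hypothetical)
    precip       : co-located multi-sensor precipitation totals (mm)
    Returns (slope, intercept); slope is the mm of rain per flash.
    """
    n = len(flash_counts)
    mx = sum(flash_counts) / n
    my = sum(precip) / n
    sxx = sum((x - mx) ** 2 for x in flash_counts)
    sxy = sum((x - mx) * (y - my)
              for x, y in zip(flash_counts, precip))
    slope = sxy / sxx
    return slope, my - slope * mx


# Toy data lying exactly on precip = 2 * flashes + 1.
slope, intercept = fit_rain_per_flash([0.0, 1.0, 2.0, 3.0],
                                      [1.0, 3.0, 5.0, 7.0])
print(slope, intercept)  # 2.0 1.0
```

As the abstract notes, such a fit degrades as time resolution increases, because discrete flashes and gridded rainfall become spatially and temporally misaligned.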
Hunter, Margaret E.; Oyler-McCance, Sara J.; Dorazio, Robert M.; Fike, Jennifer A.; Smith, Brian J.; Hunter, Charles T.; Reed, Robert N.; Hart, Kristen M.
2015-01-01
Environmental DNA (eDNA) methods are used to detect DNA that is shed into the aquatic environment by cryptic or low density species. Applied in eDNA studies, occupancy models can be used to estimate occurrence and detection probabilities and thereby account for imperfect detection. However, occupancy terminology has been applied inconsistently in eDNA studies, and many have calculated occurrence probabilities while not considering the effects of imperfect detection. Low detection of invasive giant constrictors using visual surveys and traps has hampered the estimation of occupancy and detection estimates needed for population management in southern Florida, USA. Giant constrictor snakes pose a threat to native species and the ecological restoration of the Florida Everglades. To assist with detection, we developed species-specific eDNA assays using quantitative PCR (qPCR) for the Burmese python (Python molurus bivittatus), Northern African python (P. sebae), boa constrictor (Boa constrictor), and the green (Eunectes murinus) and yellow anaconda (E. notaeus). Burmese pythons, Northern African pythons, and boa constrictors are established and reproducing, while the green and yellow anaconda have the potential to become established. We validated the python and boa constrictor assays using laboratory trials and tested all species in 21 field locations distributed in eight southern Florida regions. Burmese python eDNA was detected in 37 of 63 field sampling events; however, the other species were not detected. Although eDNA was heterogeneously distributed in the environment, occupancy models were able to provide the first estimates of detection probabilities, which were greater than 91%. Burmese python eDNA was detected along the leading northern edge of the known population boundary. The development of informative detection tools and eDNA occupancy models can improve conservation efforts in southern Florida and support more extensive studies of invasive constrictors
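A toy illustration of why replicate eDNA samples raise the chance of at least one detection, assuming independent replicates with a common per-sample detection probability p. This is a back-of-envelope identity, not the fitted occupancy model of the study (whose estimated detection probabilities exceeded 91%):

```python
def cumulative_detection_prob(p_detect, n_samples):
    """P(at least one detection in n independent replicate samples),
    given per-sample detection probability p_detect at an occupied site.

    Occupancy models separate this detection process from occurrence,
    which is why ignoring imperfect detection biases occurrence estimates.
    """
    return 1.0 - (1.0 - p_detect) ** n_samples


# With p = 0.5 per water sample, replication pays off quickly:
print(cumulative_detection_prob(0.5, 1))  # 0.5
print(cumulative_detection_prob(0.5, 2))  # 0.75
print(cumulative_detection_prob(0.5, 3))  # 0.875
```

Studies that report raw occurrence rates without this correction conflate "absent" with "present but undetected", which is the terminological inconsistency the abstract criticizes.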
NASA Astrophysics Data System (ADS)
St. Pé, Alexandra; Wesloh, Daniel; Antoszewski, Graham; Daham, Farrah; Goudarzi, Navid; Rabenhorst, Scott; Delgado, Ruben
2016-06-01
There is enormous potential to harness the kinetic energy of offshore wind and produce power. However, significant uncertainties are introduced in the offshore wind resource assessment process, due in part to limited observational networks and a poor understanding of the marine atmosphere's complexity. Given the cubic relationship between a turbine's power output and wind speed, a relatively small error in the wind speed estimate translates to a significant error in expected power production. The University of Maryland Baltimore County (UMBC) collected in-situ measurements offshore, within Maryland's Wind Energy Area (WEA), from July-August 2013. This research demonstrates the ability of Doppler wind lidar technology to reduce uncertainty in estimating an offshore wind resource, compared to traditional resource assessment techniques, by providing a more accurate representation of the wind profile and associated hub-height wind speed variability. The second objective of this research is to elucidate the impact of offshore micrometeorological controls (stability, wind shear, turbulence) on a turbine's ability to produce power. Compared to lidar measurements, power law extrapolation estimates and operational National Weather Service models underestimated hub-height wind speeds in the WEA. In addition, lidar observations suggest the frequent development of a low-level wind maximum (LLWM), with high turbine-layer wind shear and low turbulence intensity within a turbine's rotor layer (40-160 m). Results elucidate the advantages of using Doppler wind lidar technology to improve offshore wind resource estimates, and its ability to monitor how under-sampled offshore meteorological controls affect a potential turbine's power production.
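The power law extrapolation that the lidar observations are compared against, and the cubic sensitivity of power to wind-speed error noted above, can be sketched in a few lines. The shear exponent value is a hypothetical illustration, not the study's measured value:

```python
def hub_height_speed(u_ref, z_ref, z_hub, alpha=0.11):
    """Power-law wind profile: u(z) = u_ref * (z / z_ref)**alpha.

    alpha is the shear exponent; 0.11 is often quoted for offshore
    conditions but is an assumption here, and the study shows the real
    profile (e.g. under a low-level wind maximum) can deviate from it.
    """
    return u_ref * (z_hub / z_ref) ** alpha


def power_error_pct(u_true, u_est):
    """Percent error in power P ~ u**3 caused by a wind-speed error."""
    return 100.0 * (u_est ** 3 - u_true ** 3) / u_true ** 3


# A 10% wind-speed error inflates the power estimate by ~33%:
print(round(power_error_pct(10.0, 11.0), 1))  # 33.1
```

This cubic amplification is why a hub-height measurement (lidar) is worth so much more than an extrapolated surface value.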
Abedini, Mohammad; Moradi, Mohammad H; Hosseinian, S M
2016-03-01
This paper proposes a novel method to address reliability and technical problems of microgrids (MGs) based on designing a number of self-adequate autonomous sub-MGs by adopting an MG-clustering approach. In doing so, a multi-objective optimization problem is developed in which power-loss reduction, voltage profile improvement and reliability enhancement are considered as the objective functions. To solve the optimization problem, a hybrid algorithm named HS-GA, based on genetic and harmony search algorithms, is provided, and a load flow method is given to model different types of DGs as droop controllers. The performance of the proposed method is evaluated in two case studies. The results provide support for the performance of the proposed method. PMID:26767800
NASA Astrophysics Data System (ADS)
Wagstaff, Kiri L.
2012-03-01
On obtaining a new data set, the researcher is immediately faced with the challenge of obtaining a high-level understanding from the observations. What does a typical item look like? What are the dominant trends? How many distinct groups are included in the data set, and how is each one characterized? Which observable values are common, and which rarely occur? Which items stand out as anomalies or outliers from the rest of the data? This challenge is exacerbated by the steady growth in data set size [11] as new instruments push into new frontiers of parameter space, via improvements in temporal, spatial, and spectral resolution, or by the desire to "fuse" observations from different modalities and instruments into a larger-picture understanding of the same underlying phenomenon. Data clustering algorithms provide a variety of solutions for this task. They can generate summaries, locate outliers, compress data, identify dense or sparse regions of feature space, and build data models. It is useful to note up front that "clusters" in this context refer to groups of items within some descriptive feature space, not (necessarily) to "galaxy clusters" which are dense regions in physical space. The goal of this chapter is to survey a variety of data clustering methods, with an eye toward their applicability to astronomical data analysis. In addition to improving the individual researcher’s understanding of a given data set, clustering has led directly to scientific advances, such as the discovery of new subclasses of stars [14] and gamma-ray bursts (GRBs) [38]. All clustering algorithms seek to identify groups within a data set that reflect some observed, quantifiable structure. Clustering is traditionally an unsupervised approach to data analysis, in the sense that it operates without any direct guidance about which items should be assigned to which clusters. There has been a recent trend in the clustering literature toward supporting semisupervised or constrained
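As a concrete instance of the clustering algorithms surveyed above, the following is a minimal sketch of Lloyd's k-means on synthetic two-group data in a 2-D feature space (all values illustrative, not from the chapter):

```python
import numpy as np

rng = np.random.default_rng(0)
# Two synthetic groups in a 2-D descriptive feature space
data = np.vstack([rng.normal(0.0, 0.3, (50, 2)),
                  rng.normal(2.0, 0.3, (50, 2))])

def kmeans(X, k, iters=20):
    # Lloyd's algorithm: alternate nearest-centre assignment and centre update
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

labels, centers = kmeans(data, 2)
```

This is entirely unsupervised: the two recovered groups come from the structure of the feature space alone, with no labels supplied.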
Chaplin, Katherine; Bower, Peter; Brookes, Sara; Fitzpatrick, Bridie; Guthrie, Bruce; Shaw, Alison; Mercer, Stewart; Rafi, Imran; Thorn, Joanna
2016-01-01
Introduction An increasing number of people are living with multimorbidity. The evidence base for how best to manage these patients is weak. Current clinical guidelines generally focus on single conditions, which may not reflect the needs of patients with multimorbidity. The aim of the 3D study is to develop, implement and evaluate an intervention to improve the management of patients with multimorbidity in general practice. Methods and analysis This is a pragmatic two-arm cluster randomised controlled trial. 32 general practices around Bristol, Greater Manchester and Glasgow will be randomised to receive either the ‘3D intervention’ or usual care. 3D is a complex intervention including components affecting practice organisation, the conduct of patient reviews, integration with secondary care and measures to promote change in practice organisation. Changes include improving continuity of care and replacing reviews of each disease with patient-centred reviews with a focus on patients' quality of life, mental health and polypharmacy. We aim to recruit 1383 patients who have 3 or more chronic conditions. This provides 90% power at 5% significance level to detect an effect size of 0.27 SDs in the primary outcome, which is health-related quality of life at 15 months using the EQ-5D-5L. Secondary outcome measures assess patient centredness, illness burden and treatment burden. The primary analysis will be a multilevel regression model adjusted for baseline, stratification/minimisation, clustering and important co-variables. Nested process evaluation will assess implementation, mechanisms of effectiveness and interaction of the intervention with local context. Economic analysis of cost-consequences and cost-effectiveness will be based on quality-adjusted life years. Ethics and dissemination This study has approval from South-West (Frenchay) National Health Service (NHS) Research Ethics Committee (14/SW/0011). Findings will be disseminated via final report, peer
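The quoted sample size can be connected to the stated effect size with the standard two-arm power formula; the sketch below (illustrative, not the trial's actual calculation, which would use an assumed ICC and cluster size) also backs out the design effect implied by cluster randomisation:

```python
from math import ceil
from statistics import NormalDist

z = NormalDist().inv_cdf
alpha, power, delta = 0.05, 0.90, 0.27   # two-sided 5%, 90% power, 0.27 SD effect

# Standard per-arm sample size for an individually randomised trial
n_arm = 2 * (z(1 - alpha / 2) + z(power)) ** 2 / delta ** 2
n_total = 2 * ceil(n_arm)

# Implied inflation (design effect) if 1383 participants are needed
# once clinic-level clustering is accounted for
deff = 1383 / n_total
```

The implied design effect of roughly 2.4 is consistent with the usual inflation from randomising 32 practices rather than individuals.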
De Groote, F; De Laet, T; Jonkers, I; De Schutter, J
2008-12-01
We developed a Kalman smoothing algorithm to improve estimates of joint kinematics from measured marker trajectories during motion analysis. Kalman smoothing estimates are based on complete marker trajectories. This is an improvement over other techniques, such as the global optimisation method (GOM), Kalman filtering, and local marker estimation (LME), where the estimate at each time instant is only based on part of the marker trajectories. We applied GOM, Kalman filtering, LME, and Kalman smoothing to marker trajectories from both simulated and experimental gait motion, to estimate the joint kinematics of a ten-segment biomechanical model with 21 degrees of freedom. Three simulated marker trajectories were studied: without errors, with instrumental errors, and with soft tissue artefacts (STA). Two modelling errors were studied: increased thigh length and hip centre dislocation. We calculated estimation errors from the known joint kinematics in the simulation study. Compared with other techniques, Kalman smoothing reduced the estimation errors for the joint positions by more than 50% for the simulated marker trajectories without errors and with instrumental errors. Compared with GOM, Kalman smoothing reduced the estimation errors for the joint moments by more than 35%. Compared with Kalman filtering and LME, Kalman smoothing reduced the estimation errors for the joint accelerations by at least 50%. Our simulation results show that the use of Kalman smoothing substantially improves the estimates of joint kinematics and kinetics compared with previously proposed techniques (GOM, Kalman filtering, and LME) for both simulated gait motion, with and without modelling errors, and experimentally measured gait motion.
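The key point, that smoothing conditions on the complete trajectory while filtering uses only past data, can be illustrated with a one-dimensional random-walk toy model and a Rauch-Tung-Striebel (RTS) smoother; the state model and noise levels below are illustrative, not the paper's biomechanical model:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
truth = np.cumsum(rng.normal(0, 0.05, n))   # slowly drifting "joint angle"
z = truth + rng.normal(0, 0.5, n)           # noisy marker-derived measurement
Q, R = 0.05**2, 0.5**2                      # process and measurement variances

# Forward Kalman filter (state transition F = 1)
x_f = np.zeros(n); P_f = np.zeros(n)        # posterior mean / variance
x_p = np.zeros(n); P_p = np.zeros(n)        # prior (predicted) mean / variance
x, P = 0.0, 1.0
for k in range(n):
    x_p[k], P_p[k] = x, P + Q               # predict
    K = P_p[k] / (P_p[k] + R)               # Kalman gain
    x = x_p[k] + K * (z[k] - x_p[k])        # update with current measurement
    P = (1 - K) * P_p[k]
    x_f[k], P_f[k] = x, P

# Backward RTS pass: each estimate now uses the complete trajectory
x_s = x_f.copy()
for k in range(n - 2, -1, -1):
    C = P_f[k] / P_p[k + 1]
    x_s[k] = x_f[k] + C * (x_s[k + 1] - x_p[k + 1])

rmse_f = np.sqrt(np.mean((x_f - truth) ** 2))
rmse_s = np.sqrt(np.mean((x_s - truth) ** 2))
```

The smoothed estimate is consistently closer to the truth than the filtered one, which is the mechanism behind the error reductions reported above.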
NASA Astrophysics Data System (ADS)
Verrier, N.; Grosjean, N.; Dib, E.; Méès, L.; Fournier, C.; Marié, J.-L.
2016-04-01
Digital holography is a valuable tool for three-dimensional information extraction. Among existing configurations, the originally proposed set-up (i.e. Gabor, or in-line holography) is reasonably immune to variations in the experimental environment, making it a method of choice for studies of fluid dynamics. Nevertheless, standard hologram reconstruction techniques, based on numerical light back-propagation, are prone to artifacts such as twin images or aliases that limit both the quality and quantity of information extracted from the acquired holograms. To get around this issue, formulating hologram reconstruction as a parametric inverse problem has been shown to accurately estimate the 3D positions and size of seeding particles directly from the hologram. To push the bounds of accuracy on size estimation still further, we propose to fully exploit the information redundancy of a hologram video sequence using joint estimation reconstruction. Applying this approach in a bench-top experiment, we show that it leads to a relative precision of 0.13% (for a 60 μm diameter droplet) for droplet size estimation, and a tracking precision of σx × σy × σz = 0.15 × 0.15 × 1 pixels.
2013-01-01
Background Poverty undermines adherence to tuberculosis treatment. Economic support may both encourage and enable patients to complete treatment. In South Africa, which carries a high burden of tuberculosis, such support may improve the currently poor outcomes of patients on tuberculosis treatment. The aim of this study was to test the feasibility and effectiveness of delivering economic support to patients with pulmonary tuberculosis in a high-burden province of South Africa. Methods This was a pragmatic, unblinded, two-arm cluster-randomized controlled trial, where 20 public sector clinics acted as clusters. Patients with pulmonary tuberculosis in intervention clinics (n = 2,107) were offered a monthly voucher of ZAR120.00 (approximately US$15) until the completion of their treatment. Vouchers were redeemed at local shops for foodstuffs. Patients in control clinics (n = 1,984) received usual tuberculosis care. Results Intention to treat analysis showed a small but non-significant improvement in treatment success rates in intervention clinics (intervention 76.2%; control 70.7%; risk difference 5.6% (95% confidence interval: -1.2%, 12.3%), P = 0.107). Low fidelity to the intervention meant that 36.2% of eligible patients did not receive a voucher at all, 32.3% received a voucher for between one and three months and 31.5% received a voucher for four to eight months of treatment. There was a strong dose–response relationship between frequency of receipt of the voucher and treatment success (P <0.001). Conclusions Our pragmatic trial has shown that, in the real world setting of public sector clinics in South Africa, economic support to patients with tuberculosis does not significantly improve outcomes on treatment. However, the low fidelity to the delivery of our voucher meant that a third of eligible patients did not receive it. Among patients in intervention clinics who received the voucher at least once, treatment success rates were significantly
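The headline result can be reproduced approximately from the reported proportions; the crude Wald interval below is narrower than the published one because the trial's analysis accounts for clinic-level clustering:

```python
import math

# Crude (unadjusted) risk difference with a Wald 95% CI, using the trial's
# headline proportions; the published interval is wider because the analysis
# accounts for clustering by clinic.
n1, p1 = 2107, 0.762     # intervention arm: n, treatment-success proportion
n0, p0 = 1984, 0.707     # control arm
rd = p1 - p0
se = math.sqrt(p1 * (1 - p1) / n1 + p0 * (1 - p0) / n0)
ci = (rd - 1.96 * se, rd + 1.96 * se)
```

The crude difference matches the reported 5.6 percentage points; the widening of the interval from about ±2.7 to ±6.8 points is the price of the cluster design.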
NASA Astrophysics Data System (ADS)
McNally, A.; Funk, C. C.; Yatheendradas, S.; Michaelsen, J.; Cappelarere, B.; Peters-Lidard, C. D.; Verdin, J. P.
2012-12-01
The Famine Early Warning Systems Network (FEWS NET) relies heavily on remotely sensed rainfall and vegetation data to monitor agricultural drought in Sub-Saharan Africa and other places around the world. Analysts use satellite rainfall to calculate rainy season statistics and force crop water accounting models that show how the magnitude and timing of rainfall might lead to above or below average harvest. The Normalized Difference Vegetation Index (NDVI) is also an important indicator of growing season progress and is given more weight over regions where, for example, lack of rain gauges increases error in satellite rainfall estimates. Currently, however, near-real time NDVI is not integrated into a modeling framework that informs growing season predictions. To meet this need for our drought monitoring system a land surface model (LSM) is a critical component. We are currently enhancing the FEWS NET monitoring activities by configuring a custom instance of NASA's Land Information System (LIS) called the FEWS NET Land Data Assimilation System. Using the LIS Noah LSM, in-situ measurements, and remotely sensed data, we focus on the following questions: What is the relationship between NDVI and in-situ soil moisture measurements over the West Africa Sahel? How can we use this relationship to improve modeled water and energy fluxes over the West Africa Sahel? We investigate soil moisture and NDVI cross-correlation in the time and frequency domain to develop a transfer function model to predict soil moisture from NDVI. This work compares sites in southwest Niger, Benin, Burkina Faso, and Mali to test the generality of the transfer function. For several sites with fallow and millet vegetation in the Wankama catchment in southwest Niger we developed a non-parametric frequency response model, using NDVI inputs and soil moisture outputs, that accurately estimates root zone soil moisture (40-70cm). We extend this analysis by developing a low order parametric transfer function
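The lag identification underlying such a transfer function can be sketched with a lagged cross-correlation on synthetic series in which greenness trails root-zone moisture by a known delay (the signals and the 20-day lag are illustrative, not Sahel data):

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(360)
season = np.clip(np.sin(2 * np.pi * t / 360), 0, None)     # idealised rainy season
soil = season + rng.normal(0, 0.05, t.size)                # synthetic root-zone moisture
ndvi = np.roll(season, 20) + rng.normal(0, 0.05, t.size)   # greenness lags moisture by 20 d

def lagged_corr(lead, lagged, L):
    # Correlation of the leading series with the lagged series shifted back by L
    return np.corrcoef(lead[: lead.size - L], lagged[L:])[0, 1]

corrs = [lagged_corr(soil, ndvi, L) for L in range(41)]
best_lag = int(np.argmax(corrs))   # recovered delay, in days
```

Recovering the delay at which NDVI best aligns with soil moisture is the time-domain counterpart of the frequency-domain transfer-function fitting described in the abstract.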
NASA Astrophysics Data System (ADS)
Quintanilla-Domínguez, Joel; Ojeda-Magaña, Benjamín; Marcano-Cedeño, Alexis; Cortina-Januchs, María G.; Vega-Corona, Antonio; Andina, Diego
2011-12-01
A new method for detecting microcalcifications in regions of interest (ROIs) extracted from digitized mammograms is proposed. The top-hat transform is a technique based on mathematical morphology operations and, in this paper, is used to perform contrast enhancement of the microcalcifications. To improve microcalcification detection, a novel image sub-segmentation approach based on the possibilistic fuzzy c-means algorithm is used. From the original ROIs, window-based features, such as the mean and standard deviation, were extracted; these features were used as an input vector in a classifier. The classifier is based on an artificial neural network to identify patterns belonging to microcalcifications and healthy tissue. Our results show that the proposed method is a good alternative for automatically detecting microcalcifications, because this stage is an important part of early breast cancer detection.
Liu, Hong; Wang, Jie; Xu, Xiangyang; Song, Enmin; Wang, Qian; Jin, Renchao; Hung, Chih-Cheng; Fei, Baowei
2014-11-01
A robust and accurate center-frequency (CF) estimation (RACE) algorithm for improving the performance of the local sine-wave modeling (SinMod) method, which is a good motion estimation method for tagged cardiac magnetic resonance (MR) images, is proposed in this study. The RACE algorithm can automatically, effectively and efficiently produce a very appropriate CF estimate for the SinMod method, even when the specified tagging parameters are unknown, on account of the following two key techniques: (1) the well-known mean-shift algorithm, which can provide accurate and rapid CF estimation; and (2) an original two-direction-combination strategy, which can further enhance the accuracy and robustness of CF estimation. Several other available CF estimation algorithms are brought in for comparison. Several validation approaches that can work on real data without ground truths are specially designed. Experimental results on in vivo human cardiac data demonstrate the significance of accurate CF estimation for SinMod, and validate the effectiveness of RACE in improving the motion estimation performance of SinMod.
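The first key technique, mean-shift CF estimation, can be sketched in one dimension: starting from an initial guess, the estimate repeatedly moves to the kernel-weighted mean of nearby samples until it settles on the dominant mode (the data and bandwidth below are illustrative, not the RACE implementation):

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic frequency samples: spectral energy concentrated near CF = 0.30
# (cycles/pixel) over a diffuse background; all values illustrative
samples = np.concatenate([rng.normal(0.30, 0.01, 300),
                          rng.uniform(0.0, 0.5, 100)])

def mean_shift_mode(x, start, bandwidth=0.02, iters=100):
    # Gaussian-kernel mean shift: repeatedly move to the kernel-weighted mean
    m = start
    for _ in range(iters):
        w = np.exp(-0.5 * ((x - m) / bandwidth) ** 2)
        m_new = np.sum(w * x) / np.sum(w)
        if abs(m_new - m) < 1e-8:
            break
        m = m_new
    return m

cf = mean_shift_mode(samples, start=float(samples.mean()))
```

The estimate converges to the dense mode rather than the global mean, which is what makes mean shift attractive when the tagging parameters are unknown.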
Cha, Seungman; Kang, Douk; Tuffuor, Benedict; Lee, Gyuhong; Cho, Jungmyung; Chung, Jihye; Kim, Myongjin; Lee, Hoonsang; Lee, Jaeeun; Oh, Chunghyeon
2015-01-01
Although a number of studies have been conducted to explore the effect of water quality improvement, the majority of them have focused mainly on point-of-use water treatment, and the studies investigating the effect of improved water supply have been based on observational or inadequately randomized trials. We report the results of a matched cluster randomized trial investigating the effect of improved water supply on diarrheal prevalence of children under five living in rural areas of the Volta Region in Ghana. We compared the diarrheal prevalence of 305 children in 10 communities of intervention with 302 children in 10 matched communities with no intervention (October 2012 to February 2014). A modified Poisson regression was used to estimate the prevalence ratio. An intention-to-treat analysis was undertaken. The crude prevalence ratio of diarrhea in the intervention compared with the control communities was 0.85 (95% CI 0.74–0.97) for Krachi West, 0.96 (0.87–1.05) for Krachi East, and 0.91 (0.83–0.98) for both districts. Sanitation was adjusted for in the model to remove the bias due to residual imbalance since it was not balanced even after randomization. The adjusted prevalence ratio was 0.82 (95% CI 0.71–0.96) for Krachi West, 0.95 (0.86–1.04) for Krachi East, and 0.89 (0.82–0.97) for both districts. This study provides a basis for a better approach to water quality interventions. PMID:26404337
Improved Estimation of Earth Rotation Parameters Using the Adaptive Ridge Regression
NASA Astrophysics Data System (ADS)
Huang, Chengli; Jin, Wenjing
1998-05-01
The multicollinearity among regression variables is a common phenomenon in the reduction of astronomical data. The phenomenon of multicollinearity and its diagnostic factors are introduced first. As a remedy, a new method, called adaptive ridge regression (ARR), which is an improved method of choosing the departure constant θ in ridge regression, is suggested and applied to a case in which the Earth orientation parameters (EOP) are determined by lunar laser ranging (LLR). Diagnosis via the variance inflation factors (VIFs) shows that there exists serious multicollinearity among the regression variables. It is shown that the ARR method is effective in reducing the multicollinearity and makes the regression coefficients more stable than those obtained with ordinary least squares (LS) estimation, especially when there is serious multicollinearity.
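A minimal sketch of the VIF diagnosis and the ridge remedy on a deliberately collinear toy regression (the fixed departure constant θ below is an illustrative placeholder; ARR's contribution is choosing θ adaptively):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(0, 0.05, n)         # nearly collinear with x1
X = np.column_stack([x1, x2])
y = x1 + 2 * x2 + rng.normal(0, 0.5, n)

def vif(X):
    # Variance inflation factor: 1 / (1 - R^2) of each regressor on the others
    out = []
    for j in range(X.shape[1]):
        A = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])
        coef, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
        r2 = 1 - (X[:, j] - A @ coef).var() / X[:, j].var()
        out.append(1 / (1 - r2))
    return out

def ridge(X, y, theta):
    # theta = 0 recovers ordinary least squares
    return np.linalg.solve(X.T @ X + theta * np.eye(X.shape[1]), X.T @ y)

vifs = vif(X)
beta_ls = ridge(X, y, 0.0)
beta_ridge = ridge(X, y, 1.0)
```

Large VIFs flag the instability, and the ridge penalty shrinks the coefficient vector, trading a little bias for much lower variance.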
NASA Astrophysics Data System (ADS)
Flora, Jeffrey B.; Alam, Mahbubul; Iftekharuddin, Khan M.
2014-09-01
The goal of this intelligent transportation systems work is to improve the understanding of the impact of carbon emissions caused by vehicular traffic on highway systems. In order to achieve this goal, this work implements a pipeline for vehicle segmentation, feature extraction, and classification using the existing Virginia Department of Transportation (VDOT) infrastructure of networked traffic cameras. The VDOT traffic video is analyzed for vehicle detection and segmentation using an adaptive Gaussian mixture model algorithm. Morphological properties and histogram of oriented gradients (HOG) features are derived from the detected and segmented vehicles. Finally, vehicle classification is performed using a multiclass support vector machine classifier. The resulting classification scheme offers an average classification rate of 86% under good quality segmentation. The segmented vehicle and classification data can be used to obtain estimates of carbon emissions.
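The per-pixel background modelling step can be sketched with a single-Gaussian-per-pixel simplification of the adaptive Gaussian mixture model, run on a synthetic 1-D pixel row with a bright moving object (all values illustrative, not the VDOT pipeline):

```python
import numpy as np

rng = np.random.default_rng(5)
# Synthetic 1-D pixel row: static background near intensity 100 with
# a bright object moving one pixel per frame
frames = rng.normal(100, 2, (50, 64))
for k in range(50):
    frames[k, (10 + k) % 64] = 200

mu = frames[0].copy()                 # per-pixel background mean
var = np.full(64, 4.0)                # per-pixel background variance
alpha = 0.05                          # learning rate
fg_counts = []
for f in frames:
    fg = np.abs(f - mu) > 2.5 * np.sqrt(var)              # foreground: beyond 2.5 sigma
    mu = np.where(fg, mu, (1 - alpha) * mu + alpha * f)   # adapt background pixels only
    var = np.where(fg, var, (1 - alpha) * var + alpha * (f - mu) ** 2)
    fg_counts.append(int(fg.sum()))
```

A full adaptive GMM keeps several Gaussians per pixel to cope with multimodal backgrounds; the single-Gaussian version shows the detect-then-adapt loop that yields the vehicle masks.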
Improved least squares MR image reconstruction using estimates of k-space data consistency.
Johnson, Kevin M; Block, Walter F; Reeder, Scott B; Samsonov, Alexey
2012-06-01
This study describes a new approach to reconstruct data that has been corrupted by unfavorable magnetization evolution. In this new framework, images are reconstructed in a weighted least squares fashion using all available data and a measure of consistency determined from the data itself. The reconstruction scheme optimally balances uncertainties from noise error with those from data inconsistency, is compatible with methods that model signal corruption, and may be advantageous for more accurate and precise reconstruction with many least squares-based image estimation techniques including parallel imaging and constrained reconstruction/compressed sensing applications. Performance of the several variants of the algorithm tailored for fast spin echo and self-gated respiratory gating applications was evaluated in simulations, phantom experiments, and in vivo scans. The data consistency weighting technique substantially improved image quality and reduced noise as compared to traditional reconstruction approaches.
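The core idea, down-weighting inconsistent data by its estimated variance in a weighted least squares solve, can be sketched on a toy linear model (the encoding matrix, noise levels and known corrupted subset below are illustrative assumptions, not the MR forward model):

```python
import numpy as np

rng = np.random.default_rng(6)
m, n = 120, 40
A = rng.normal(size=(m, n))              # toy linear encoding (stand-in for the MR model)
x_true = rng.normal(size=n)
b = A @ x_true + rng.normal(0, 0.1, m)
b[:20] += rng.normal(0, 2.0, 20)         # inconsistent subset (corrupted magnetization)

# Inverse-variance weights from an (assumed known) consistency estimate;
# in the paper this measure is derived from the data itself
w = np.ones(m)
w[:20] = 0.1**2 / (0.1**2 + 2.0**2)
x_wls = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * b))
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]

err_wls = np.linalg.norm(x_wls - x_true)
err_ls = np.linalg.norm(x_ls - x_true)
```

Because the corrupted rows contribute little to the weighted normal equations, the weighted solve tracks the true image far better than the unweighted one.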
Uniting Space, Ground and Underwater Measurements for Improved Estimates of Rain Rate
NASA Technical Reports Server (NTRS)
Amitai, E.; Nystuen, J. A.; Liao, L.; Meneghini, R.; Morin, E.
2003-01-01
Global precipitation is monitored from a variety of platforms including space-borne, ground- and ocean-based platforms. Intercomparisons of these observations are crucial to validating the measurements and providing confidence for each measurement technique. Probability distribution functions of rain rates are used to compare satellite and ground-based radar observations. A preferred adjustment technique for improving rain rate distribution estimates is identified using measurements from ground-based radar and rain gauges within the coverage area of the radar. The underwater measurement of rainfall shows similarities to radar measurements, but with intermediate spatial resolution and high temporal resolution. Reconciling these different measurement techniques provides understanding and confidence for all of the methods.
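One common adjustment of this kind is probability matching, which maps the radar rain-rate distribution onto the gauge distribution quantile by quantile; the abstract does not say which technique was preferred, so the sketch below is a generic illustration on synthetic gamma-distributed rain rates:

```python
import numpy as np

rng = np.random.default_rng(7)
gauge = rng.gamma(2.0, 2.0, 5000)          # reference rain rates (mm/h), illustrative
radar = 0.7 * rng.gamma(2.0, 2.0, 5000)    # biased radar-style estimates

# Probability matching: map radar quantiles onto gauge quantiles
qs = np.linspace(0.01, 0.99, 99)
radar_q = np.quantile(radar, qs)
gauge_q = np.quantile(gauge, qs)
adjusted = np.interp(radar, radar_q, gauge_q)
```

The adjusted rates inherit the gauge distribution, removing most of the multiplicative bias while preserving the radar's spatial ranking of rain rates.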
Scott, Bobby R.; Tokarskaya, Zoya B.; Zhuntova, Galina V.; Osovets, Sergey V.; Syrchikov, Victor A.; Belyaeva, Zinaida D.
2007-12-14
This report summarizes 4 years of research achievements in this Office of Science (BER), U.S. Department of Energy (DOE) project. The research described was conducted by scientists and supporting staff at Lovelace Respiratory Research Institute (LRRI)/Lovelace Biomedical and Environmental Research Institute (LBERI) and the Southern Urals Biophysics Institute (SUBI). All project objectives and goals were achieved. A major focus was on obtaining improved cancer risk estimates for exposure via inhalation to plutonium (Pu) isotopes in the workplace (DOE radiation workers) and environment (public exposures to Pu-contaminated soil). A major finding was that low doses and dose rates of gamma rays can significantly suppress cancer induction by alpha radiation from inhaled Pu isotopes. The suppression relates to stimulation of the body's natural defenses, including immunity against cancer cells and selective apoptosis which removes precancerous and other aberrant cells.
Improved estimation of PM2.5 using Lagrangian satellite-measured aerosol optical depth
NASA Astrophysics Data System (ADS)
Olivas Saunders, Rolando
Suspended particulate matter (aerosols) with aerodynamic diameters less than 2.5 μm (PM2.5) has negative effects on human health, plays an important role in climate change and also causes the corrosion of structures by acid deposition. Accurate estimates of PM2.5 concentrations are thus relevant in air quality, epidemiology, cloud microphysics and climate forcing studies. Aerosol optical depth (AOD) retrieved by the Moderate Resolution Imaging Spectroradiometer (MODIS) satellite instrument has been used as an empirical predictor to estimate ground-level concentrations of PM2.5. These estimates usually have large uncertainties and errors. The main objective of this work is to assess the value of using upwind (Lagrangian) MODIS-AOD as predictors in empirical models of PM2.5. The upwind locations of the Lagrangian AOD were estimated using modeled backward air trajectories. Since the specification of an arrival elevation is somewhat arbitrary, trajectories were calculated to arrive at four different elevations at ten measurement sites within the continental United States. A systematic examination revealed trajectory model calculations to be sensitive to starting elevation. With a 500 m difference in starting elevation, the 48-hr mean horizontal separation of trajectory endpoints was 326 km. When the difference in starting elevation was doubled and tripled to 1000 m and 1500 m, the mean horizontal separation of trajectory endpoints approximately doubled and tripled to 627 km and 886 km, respectively. A seasonal dependence of this sensitivity was also found: the smallest mean horizontal separation of trajectory endpoints was exhibited during the summer and the largest separations during the winter. A daily average AOD product was generated and coupled to the trajectory model in order to determine AOD values upwind of the measurement sites during the period 2003-2007. Empirical models that included in situ AOD and upwind AOD as predictors of PM2.5 were generated by
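The value of adding upwind predictors can be illustrated with a toy regression: adding a synthetic upwind-AOD term to a local-AOD model raises the explained variance whenever the upwind term carries independent signal (all coefficients and distributions below are illustrative, not fitted values from this work):

```python
import numpy as np

rng = np.random.default_rng(8)
n = 300
aod_local = rng.gamma(2, 0.2, n)          # in situ AOD (illustrative)
aod_upwind = rng.gamma(2, 0.2, n)         # Lagrangian upwind AOD (illustrative)
pm25 = 5 + 30 * aod_local + 15 * aod_upwind + rng.normal(0, 3, n)

def ols_r2(X, y):
    # Ordinary least squares with intercept; return the coefficient of determination
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return 1 - (y - A @ coef).var() / y.var()

r2_local = ols_r2(aod_local[:, None], pm25)
r2_both = ols_r2(np.column_stack([aod_local, aod_upwind]), pm25)
```

Whether the gain is worthwhile in practice depends on the trajectory errors quantified above, since those errors blur the upwind predictor.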
Improved Least Squares MR Image Reconstruction Using Estimates of k-Space Data Consistency
Johnson, Kevin M.; Block, Walter F.; Reeder, Scott. B.; Samsonov, Alexey
2011-01-01
This work describes a new approach to reconstruct data that has been corrupted by unfavorable magnetization evolution. In this new framework, images are reconstructed in a weighted least squares fashion using all available data and a measure of consistency determined from the data itself. The reconstruction scheme optimally balances uncertainties from noise error with those from data inconsistency, is compatible with methods that model signal corruption, and may be advantageous for more accurate and precise reconstruction with many least-squares based image estimation techniques including parallel imaging and constrained reconstruction/compressed sensing applications. Performance of the several variants of the algorithm tailored for fast spin echo (FSE) and self gated respiratory gating applications was evaluated in simulations, phantom experiments, and in-vivo scans. The data consistency weighting technique substantially improved image quality and reduced noise as compared to traditional reconstruction approaches. PMID:22135155
Improved Estimates of Temporally Coherent Internal Tides and Energy Fluxes from Satellite Altimetry
NASA Technical Reports Server (NTRS)
Ray, Richard D.; Chao, Benjamin F. (Technical Monitor)
2002-01-01
Satellite altimetry has opened a surprising new avenue to observing internal tides in the open ocean. The tidal surface signatures are very small, a few cm at most, but in many areas they are robust, owing to averaging over many years. By employing a simplified two-dimensional wave fitting to the surface elevations in combination with climatological hydrography to define the relation between the surface height and the current and pressure at depth, we may obtain rough estimates of internal tide energy fluxes. Initial results near Hawaii with Topex/Poseidon (T/P) data show good agreement with detailed three-dimensional (3D) numerical models, but the altimeter picture is somewhat blurred owing to the widely spaced T/P tracks. The resolution may be enhanced somewhat by using data from the ERS-1 (European Space Agency Remote Sensing) and ERS-2 satellite altimeters. The ERS satellite tracks are much more closely spaced (0.72 deg longitude vs. 2.83 deg for T/P), but the tidal estimates are less accurate than those for T/P. All altimeter estimates are also severely affected by noise in regions of high mesoscale variability, and we have obtained some success in reducing this contamination by employing a prior correction for mesoscale variability based on ten-day detailed sea surface height maps developed by Le Traon and colleagues. These improvements allow us to more clearly define the internal tide surface field and the corresponding energy fluxes. Results from throughout the global ocean will be presented.
Improvement of force-sensor-based heart rate estimation using multichannel data fusion.
Bruser, Christoph; Kortelainen, Juha M; Winter, Stefan; Tenhunen, Mirja; Parkka, Juha; Leonhardt, Steffen
2015-01-01
The aim of this paper is to present and evaluate algorithms for heartbeat interval estimation from multiple spatially distributed force sensors integrated into a bed. Moreover, the benefit of using multichannel systems as opposed to a single sensor is investigated. While it might seem intuitive that multiple channels are superior to a single channel, the main challenge lies in finding suitable methods to actually leverage this potential. To this end, two algorithms for heart rate estimation from multichannel vibration signals are presented and compared against a single-channel sensing solution. The first method operates by analyzing the cepstrum computed from the average spectra of the individual channels, while the second method applies Bayesian fusion to three interval estimators, such as the autocorrelation, which are applied to each channel. This evaluation is based on 28 night-long sleep lab recordings during which an eight-channel polyvinylidene fluoride-based sensor array was used to acquire cardiac vibration signals. The recruited patients suffered from different sleep disorders of varying severity. From the sensor array data, a virtual single-channel signal was also derived for comparison by averaging the channels. The single-channel results achieved a beat-to-beat interval error of 2.2% with a coverage (i.e., percentage of the recording which could be analyzed) of 68.7%. In comparison, the best multichannel results attained a mean error and coverage of 1.0% and 81.0%, respectively. These results present statistically significant improvements of both metrics over the single-channel results (p < 0.05).
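The first method, a cepstrum computed from the spectrum, can be sketched on a synthetic beat-periodic vibration trace: the harmonic comb that a periodic pulse train produces in the spectrum maps to a cepstral peak at the beat-to-beat quefrency (sampling rate, pulse shape and search range below are illustrative, not the sensor-array processing of the paper):

```python
import numpy as np

rng = np.random.default_rng(9)
fs = 100.0                          # sample rate (Hz), illustrative
t = np.arange(0, 30, 1 / fs)
period = 0.8                        # beat-to-beat interval (s), i.e. 75 bpm
# Synthetic cardiac-vibration-like trace: one narrow bump per beat, plus noise
sig = np.exp(-((t % period) / 0.05) ** 2) + rng.normal(0, 0.1, t.size)

# Real cepstrum: inverse FFT of the log magnitude spectrum; the harmonic
# comb of a periodic signal becomes a peak at the beat quefrency
spec = np.abs(np.fft.rfft(sig * np.hanning(t.size)))
cep = np.abs(np.fft.irfft(np.log(spec + 1e-12)))
lo_q, hi_q = int(0.4 * fs), int(1.5 * fs)   # physiological search range 0.4-1.5 s
q = lo_q + int(np.argmax(cep[lo_q:hi_q]))
interval = q / fs                            # estimated beat-to-beat interval (s)
```

In the multichannel setting of the paper, the spectra of the individual channels are averaged before the cepstrum is taken, which is what lets weak channels reinforce each other.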
A model to estimate the cost-effectiveness of indoor environment improvements in office work
Seppanen, Olli; Fisk, William J.
2004-06-01
Deteriorated indoor climate is commonly related to increases in sick building syndrome symptoms, respiratory illnesses, sick leave, reduced comfort and losses in productivity. The cost of deteriorated indoor climate to society is high. Some calculations show that this cost is higher than the heating energy costs of the same buildings. Building-level calculations have also shown that many measures taken to improve indoor air quality and climate are cost-effective when the potential monetary savings resulting from an improved indoor climate are included as benefits gained. As an initial step towards systemizing these building-level calculations we have developed a conceptual model to estimate the cost-effectiveness of various measures. The model shows the links between improvements in the indoor environment and the following potential financial benefits: reduced medical care cost, reduced sick leave, better performance of work, lower turnover of employees, and lower cost of building maintenance due to fewer complaints about indoor air quality and climate. The pathways to these potential benefits from changes in building technology and practices go via several human responses to the indoor environment such as infectious diseases, allergies and asthma, sick building syndrome symptoms, perceived air quality, and thermal environment. The model also includes the annual cost of investments, operation costs, and cost savings of improved indoor climate. The conceptual model illustrates how various factors are linked to each other. SBS symptoms are probably the most commonly assessed health responses in IEQ studies and have been linked to several characteristics of buildings and IEQ. While the available evidence indicates that SBS symptoms can affect these outcomes and suggests that such a linkage exists, at present we cannot quantify the relationships sufficiently for cost-benefit modeling. New research and analyses of existing data to quantify the financial
Improving root-zone soil moisture estimations using dynamic root growth and crop phenology
NASA Astrophysics Data System (ADS)
Hashemian, Minoo; Ryu, Dongryeol; Crow, Wade T.; Kustas, William P.
2015-12-01
Water Energy Balance (WEB) Soil Vegetation Atmosphere Transfer (SVAT) modelling can be used to estimate soil moisture by forcing the model with observed data such as precipitation and solar radiation. Recently, an innovative approach that assimilates remotely sensed thermal infrared (TIR) observations into WEB-SVAT to improve the results has been proposed. However, the efficacy of the model-observation integration relies on the model's realistic representation of soil water processes. Here, we explore methods to improve the soil water processes of a simple WEB-SVAT model by adopting and incorporating an exponential root water uptake model with water stress compensation and establishing a more appropriate soil-biophysical linkage between root-zone moisture content, above-ground states and biophysical indices. The existing WEB-SVAT model is extended to a new Multi-layer WEB-SVAT with Dynamic Root distribution (MWSDR) that has five soil layers. Impacts of plant root depth variations, growth stages and phenological cycle of the vegetation on transpiration are considered in developing stages. Hydrometeorological and biogeophysical measurements collected from two experimental sites, one in Dookie, Victoria, Australia and the other in Ponca, Oklahoma, USA, are used to validate the new model. Results demonstrate that MWSDR provides improved soil moisture, transpiration and evaporation predictions which, in turn, can provide an improved physical basis for assimilating remotely sensed data into the model. Results also show the importance of having an adequate representation of vegetation-related transpiration process for an appropriate simulation of water transfer in a complicated system of soil, plants and atmosphere.
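A common way to distribute root water uptake over discrete soil layers is an exponential root-density profile; the sketch below (layer depths, e-folding depth and transpiration rate all illustrative, not the MWSDR values) shows the layer weighting:

```python
import numpy as np

# Exponential root-density profile discretised over five soil layers
layer_bounds = np.array([0.0, 0.05, 0.15, 0.40, 0.75, 1.20])   # layer interfaces (m)
root_depth_scale = 0.25                                        # e-folding depth (m)

def root_fractions(bounds, d):
    # Cumulative root fraction above depth z is 1 - exp(-z / d); layer
    # fractions are differences, renormalised over the truncated profile
    cdf = 1 - np.exp(-bounds / d)
    frac = np.diff(cdf)
    return frac / frac.sum()

frac = root_fractions(layer_bounds, root_depth_scale)

# Weight each layer's share of transpiration by its root fraction
transpiration = 4.0                   # mm/day, illustrative
uptake = transpiration * frac
```

Making the depth scale grow with crop phenology, as the abstract describes, would simply make `root_depth_scale` a function of growth stage.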
On improving low-cost IMU performance for online trajectory estimation
NASA Astrophysics Data System (ADS)
Yudanto, Risang; Ompusunggu, Agusmian P.; Bey-Temsamani, Abdellatif
2015-05-01
We have developed an automatic mitigation method for compensating drifts occurring in low-cost Inertial Measurement Units (IMU), using MEMS (microelectromechanical systems) accelerometers and gyros, and applied the method to online trajectory estimation of a moving robot arm. The method is based on an automatic detection of the system's states, which triggers an online (i.e. automatic) recalibration of the sensor parameters. Stationary tests have shown an absolute reduction of drift, mainly due to random walk noise at ambient conditions, of up to ~50% when the recalibrated sensor parameters are used instead of the nominal parameters obtained from the sensor's datasheet. The proposed calibration methodology works online without needing manual intervention and adaptively compensates drifts under different working conditions. Notably, the proposed method requires neither any information from an aiding sensor nor a priori knowledge about the system's model and/or constraints. It is experimentally shown in this paper that the method improves online trajectory estimation of the robot using a low-cost IMU consisting of a MEMS-based accelerometer and gyroscope. Applications of the proposed method cover the automotive, machinery and robotics industries.
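The core of such a state-triggered recalibration can be sketched in a few lines: detect a stationary window, re-estimate the bias there, and subtract it. This is a toy illustration under invented assumptions (variance threshold, window length, bias value); the authors' detector and parameters are not specified in the abstract.

```python
import numpy as np

def recalibrate_bias(window, threshold=0.05):
    """If the sensor appears stationary (low variance over the window),
    re-estimate its bias as the window mean; otherwise signal that the
    current bias should be kept."""
    if np.std(window) < threshold:   # stationary-state detection
        return float(np.mean(window))
    return None                      # moving: keep the previous bias

rng = np.random.default_rng(0)
at_rest = 0.2 + 0.01 * rng.standard_normal(200)  # gyro at rest, true bias 0.2
bias = recalibrate_bias(at_rest)
corrected = at_rest - bias                       # drift source removed
```

Because the bias is re-learned every time a stationary state is detected, the compensation adapts to temperature and other slowly varying conditions without manual intervention.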
How to use fMRI functional localizers to improve EEG/MEG source estimation
Cottereau, Benoit R.; Ales, Justin M.; Norcia, Anthony M.
2015-01-01
EEG and MEG have excellent temporal resolution, but the estimation of the neural sources that generate the signals recorded by the sensors is a difficult, ill-posed problem. The high spatial resolution of functional MRI makes it an ideal tool to improve the localization of the EEG/MEG sources using data fusion. However, the combination of the two techniques remains challenging, as the neural generators of the EEG/MEG and BOLD signals might in some cases be very different. Here we describe a data fusion approach that was developed by our team over the last decade in which fMRI is used to provide source constraints that are based on functional areas defined individually for each subject. This mini-review describes the different steps that are necessary to perform source estimation using this approach. It also provides a list of pitfalls that should be avoided when doing fMRI-informed EEG/MEG source imaging. Finally, it describes the advantages of using a ROI-based approach for group-level analysis and for the study of sensory systems. PMID:25088693
Improved radar data processing algorithms for quantitative rainfall estimation in real time.
Krämer, S; Verworn, H R
2009-01-01
This paper describes a new methodology to process C-band radar data for direct use as rainfall input to hydrologic and hydrodynamic models and in real time control of urban drainage systems. In contrast to the adjustment of radar data with the help of rain gauges, the new approach accounts for the microphysical properties of current rainfall. In a first step, radar data are corrected for attenuation. This phenomenon has been identified as the main cause of the general underestimation of radar rainfall. Systematic variation of the attenuation coefficients within predefined bounds allows robust reflectivity profiling. Secondly, event-specific R-Z relations are applied to the corrected radar reflectivity data in order to generate quantitatively reliable radar rainfall estimates. The results of the methodology are validated by a network of 37 rain gauges located in the Emscher and Lippe river basins. Finally, the relevance of the correction methodology for radar rainfall forecasts is demonstrated. The results clearly show that the new methodology significantly improves radar rainfall estimation and rainfall forecasts. The algorithms are applicable in real time.
NASA Astrophysics Data System (ADS)
Ferreira, A.; Teegavarapu, R. S.; Pathak, C. S.
2009-12-01
Use of appropriate reflectivity (Z)-rain rate (R) relationships is crucial for accurate estimation of precipitation amounts using radar. The spatial and temporal variability of storm patterns, combined with the availability of several variants of Z-R relationships, makes this task very difficult. This study evaluates the use of optimization models for optimizing the coefficients and exponents of the traditional Z-R functional relationships for different storm types and seasons. Optimization model formulations using nonlinear programming methods are investigated and developed in this study. The Z-R relationships will be evaluated for optimized coefficients and exponents based on training and test data. The training data will be used to develop the optimal values of the coefficients and exponents, and the test data will be used for assessment. In order to evaluate the optimal relationships developed as a part of the study, reflectivity data collected from NCDC and rain gauge data are analyzed for a region in South Florida. Exhaustive evaluation of Z-R relationships in improving precipitation estimates with and without optimization formulations will be attempted in this study.
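The traditional relationship takes the power-law form Z = aR^b. A common baseline for fitting the coefficient a and exponent b is ordinary least squares in log space; the sketch below uses synthetic gauge/radar pairs (the numbers are illustrative, not the study's data, and the study itself investigates nonlinear programming formulations rather than this log-linear shortcut).

```python
import numpy as np

# Fit Z = a * R**b from matched reflectivity / rain-rate pairs.
a_true, b_true = 300.0, 1.4                           # illustrative values
R = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 50.0])  # rain rate (mm/h)
Z = a_true * R ** b_true                              # reflectivity (mm^6 m^-3)

# log Z = log a + b log R  ->  a single linear regression
b_fit, log_a_fit = np.polyfit(np.log(R), np.log(Z), 1)
a_fit = np.exp(log_a_fit)

# invert the fitted relation to estimate rain rate from reflectivity
R_est = (Z / a_fit) ** (1.0 / b_fit)
```

Fitting separate (a, b) pairs per storm type or season, as the study proposes, is the same mechanics applied to stratified subsets of the training data.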
Consistent Estimates of Tsunami Energy Show Promise for Improved Early Warning
NASA Astrophysics Data System (ADS)
Titov, V.; Song, Y. Tony; Tang, L.; Bernard, E. N.; Bar-Sever, Y.; Wei, Y.
2016-05-01
Early tsunami warning critically hinges on rapid determination of the tsunami hazard potential in real-time, before waves inundate critical coastlines. Tsunami energy can quickly characterize the destructive potential of generated waves. Traditional seismic analysis is inadequate to accurately predict a tsunami's energy. Recently, two independent approaches have been proposed to determine tsunami source energy: one inverted from the Deep-ocean Assessment and Reporting of Tsunamis (DART) data during the tsunami propagation, and the other derived from the land-based coastal global positioning system (GPS) during tsunami generation. Here, we focus on assessing these two approaches with data from the March 11, 2011 Japanese tsunami. While the GPS approach takes into consideration the dynamic earthquake process, the DART inversion approach provides the actual tsunami energy estimation of the propagating tsunami waves; both approaches lead to consistent energy scales for previously studied tsunamis. Encouraged by these promising results, we examined a real-time approach to determine tsunami source energy by combining these two methods: first, determine the tsunami source from the globally expanding GPS network immediately after an earthquake for near-field early warnings; and then refine the tsunami energy estimate from nearby DART measurements for improving forecast accuracy and early cancelations. The combination of these two real-time networks may offer an appealing opportunity for early determination of the tsunami threat, saving more lives, and for early cancelation of tsunami warnings, avoiding unnecessary false alarms.
Improved Subspace Estimation for Low-Rank Model-Based Accelerated Cardiac Imaging
Hitchens, T. Kevin; Wu, Yijen L.; Ho, Chien; Liang, Zhi-Pei
2014-01-01
Sparse sampling methods have emerged as effective tools to accelerate cardiac magnetic resonance imaging (MRI). Low-rank model-based cardiac imaging uses a pre-determined temporal subspace for image reconstruction from highly under-sampled (k, t)-space data and has been demonstrated effective for high-speed cardiac MRI. The accuracy of the temporal subspace is a key factor in these methods, yet little work has been published on data acquisition strategies to improve subspace estimation. This paper investigates the use of non-Cartesian k-space trajectories to replace the Cartesian trajectories which are omnipresent but are highly sensitive to readout direction. We also propose “self-navigated” pulse sequences which collect both navigator data (for determining the temporal subspace) and imaging data after every RF pulse, allowing for even greater acceleration. We investigate subspace estimation strategies through analysis of phantom images and demonstrate in vivo cardiac imaging in rats and mice without the use of ECG or respiratory gating. The proposed methods achieved 3-D imaging of wall motion, first-pass myocardial perfusion, and late gadolinium enhancement in rats at 74 frames per second (fps), as well as 2-D imaging of wall motion in mice at 97 fps. PMID:24801352
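The subspace-estimation step at the heart of such low-rank methods is typically a truncated singular value decomposition of the navigator (Casorati) matrix: its dominant right singular vectors span the temporal subspace used in reconstruction. The sketch below shows that step on synthetic rank-3 dynamics; the matrix sizes and signal shapes are invented, not the paper's acquisition.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 120)
# three temporal "modes" (e.g. cardiac/respiratory dynamics): rank-3 signal
modes = np.stack([np.sin(2*np.pi*5*t), np.cos(2*np.pi*5*t), np.ones_like(t)])
navigators = rng.standard_normal((64, 3)) @ modes           # navigator matrix
navigators += 0.01 * rng.standard_normal(navigators.shape)  # measurement noise

U, s, Vt = np.linalg.svd(navigators, full_matrices=False)
L = 3
temporal_subspace = Vt[:L]   # rows span the temporal subspace for reconstruction
```

The quality of this subspace, and hence of the final images, depends on how well the navigator data sample the true dynamics, which is why the paper studies trajectory and self-navigation strategies for acquiring them.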
Brassey, Charlotte A.; Gardiner, James D.
2015-01-01
Body mass is a fundamental physical property of an individual and has enormous bearing upon ecology and physiology. Generating reliable estimates for body mass is therefore a necessary step in many palaeontological studies. Whilst early reconstructions of mass in extinct species relied upon isolated skeletal elements, volumetric techniques are increasingly applied to fossils when skeletal completeness allows. We apply a new ‘alpha shapes’ (α-shapes) algorithm to volumetric mass estimation in quadrupedal mammals. α-shapes are defined by: (i) the underlying skeletal structure to which they are fitted; and (ii) the value α, determining the refinement of fit. For a given skeleton, a range of α-shapes may be fitted around the individual, spanning from very coarse to very fine. We fit α-shapes to three-dimensional models of extant mammals and calculate volumes, which are regressed against mass to generate predictive equations. Our optimal model is characterized by a high correlation coefficient and low mean square error (r2=0.975, m.s.e.=0.025). When applied to the woolly mammoth (Mammuthus primigenius) and giant ground sloth (Megatherium americanum), we reconstruct masses of 3635 and 3706 kg, respectively. We consider α-shapes an improvement upon previous techniques as resulting volumes are less sensitive to uncertainties in skeletal reconstructions, and do not require manual separation of body segments from skeletons. PMID:26361559
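The predictive-equation step amounts to a power-law regression of mass on shape volume, fitted as a line in log-log space. The calibration numbers below are invented stand-ins for the study's extant-mammal data, used purely to show the mechanics.

```python
import numpy as np

vol = np.array([0.05, 0.2, 0.8, 2.5, 6.0])             # shape volume (m^3)
mass = np.array([55.0, 210.0, 840.0, 2600.0, 6300.0])  # body mass (kg)

# power law mass = c * vol**k, fitted as a line in log-log space
k, log_c = np.polyfit(np.log(vol), np.log(mass), 1)

def predict_mass(volume_m3):
    """Apply the fitted predictive equation to a fossil's alpha-shape volume."""
    return float(np.exp(log_c + k * np.log(volume_m3)))

estimate = predict_mass(3.5)   # hypothetical fossil alpha-shape volume of 3.5 m^3
```

In the study itself, the volume entering `predict_mass` comes from the α-shape fitted to the fossil skeleton at the chosen refinement α.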
NASA Astrophysics Data System (ADS)
Chasmer, L.; Xi, Z.; Hopkinson, C.
2015-12-01
The development of high-resolution Terrestrial Laser Scanning (TLS) systems can expose sub-canopy details with flexible scanning angles. This precision makes TLS an ideal source to integrate with Airborne Laser Scanning (ALS), with regard to the potential for removing ALS's penetration bias. The popular treatment of the integration is simply spatial co-registration or the use of TLS-derived inventory statistics, without further exploiting the rich geometrical information from TLS. This poster proposes a profile assimilation approach for ALS and TLS integration, in order to improve the plot-level estimation of tree height and biomass. The overlapping ALS and TLS data were first co-registered into compound point clouds and the canopy structure was reconstructed from the compound. The ALS canopy profile was then calibrated against the reconstructed canopy profile using a Kalman filter. The calibration was applied to the remaining ALS canopy profiles, from which new estimates of tree height and biomass can be derived. Our study site is located in Vivian, Toronto. The area was flown with an Optech Titan operating at 532, 1064 and 1550 nm wavelengths one week before the TLS data collection in July, 2015. The tree scans by TLS were halved, one half to calibrate and the other to validate our proposed integration approach. Additional validation was also conducted via in-situ inventory.
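The calibration step can be pictured as a scalar Kalman filter blending a prior profile value with each new observation. The abstract does not specify the state or noise models, so everything below (the 1-D state, the process noise `q` and observation noise `r`) is an illustrative assumption, not the poster's actual filter.

```python
import numpy as np

def kalman_1d(observations, q=1e-3, r=1e-2):
    """Minimal scalar Kalman filter: at each step, predict (inflate the
    state variance by q) and update with the new observation of variance r."""
    x, p = observations[0], 1.0
    estimates = []
    for z in observations:
        p = p + q                  # predict
        k = p / (p + r)            # Kalman gain
        x = x + k * (z - x)        # update with the observation
        p = (1.0 - k) * p
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(4)
noisy_profile = 5.0 + 0.3 * rng.standard_normal(50)  # e.g. canopy density bins
smoothed = kalman_1d(noisy_profile)
```

In the assimilation setting, the "observations" would be the TLS-derived canopy profile values and the filtered state the calibrated ALS profile.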
Improvements in Virtual Sensors: Using Spatial Information to Estimate Remote Sensing Spectra
NASA Technical Reports Server (NTRS)
Oza, Nikunj C.; Srivastava, Ashok N.; Stroeve, Julienne
2005-01-01
Various instruments are used to create images of the Earth and other objects in the universe in a diverse set of wavelength bands with the aim of understanding natural phenomena. Sometimes these instruments are built in a phased approach, with additional measurement capabilities added in later phases. In other cases, technology may mature to the point that the instrument offers new measurement capabilities that were not planned in the original design of the instrument. In still other cases, high resolution spectral measurements may be too costly to perform on a large sample and therefore lower resolution spectral instruments are used to take the majority of measurements. Many applied science questions that are relevant to the earth science remote sensing community require analysis of enormous amounts of data that were generated by instruments with disparate measurement capabilities. In past work [1], we addressed this problem using Virtual Sensors: a method that uses models trained on spectrally rich (high spectral resolution) data to "fill in" unmeasured spectral channels in spectrally poor (low spectral resolution) data. We demonstrated this method by using models trained on the high spectral resolution Terra MODIS instrument to estimate what the equivalent of the MODIS 1.6 micron channel would be for the NOAA AVHRR2 instrument. The scientific motivation for the simulation of the 1.6 micron channel is to improve the ability of the AVHRR2 sensor to detect clouds over snow and ice. This work contains preliminary experiments demonstrating that the use of spatial information can improve our ability to estimate these spectra.
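In miniature, a Virtual Sensor is a regression from the channels both instruments share to the channel only the spectrally rich instrument measures. The linear model and synthetic "radiances" below are invented stand-ins for the MODIS/AVHRR2 channels; the original work used more expressive learned models.

```python
import numpy as np

rng = np.random.default_rng(2)
shared = rng.uniform(0.0, 1.0, size=(500, 4))   # channels both sensors measure
w_true = np.array([0.3, -0.1, 0.5, 0.2])        # invented spectral mixing
rich_1p6um = shared @ w_true + 0.05             # rich sensor's extra channel

# train: least-squares fit (with intercept) on the spectrally rich data
A = np.column_stack([shared, np.ones(len(shared))])
coef, *_ = np.linalg.lstsq(A, rich_1p6um, rcond=None)

# apply: estimate the unmeasured channel for a spectrally poor observation
poor_obs = np.array([0.4, 0.6, 0.2, 0.9])
estimate = float(np.append(poor_obs, 1.0) @ coef)
```

The paper's extension is to feed spatial neighborhoods, not just the single pixel's channels, into such a model.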
NASA Astrophysics Data System (ADS)
Lin, Mo; Li, Rui; Li, Jilin
2007-11-01
This paper deals with several key points, including parameter estimation algorithms such as frequency of arrival (FOA) and time of arrival (TOA) estimation, and signal processing techniques in Medium-altitude Earth Orbit Local User Terminals (MEOLUT) based on the Cospas-Sarsat Medium-altitude Earth Orbit Search and Rescue (MEOSAR) system. Based on an analytical description of the distress beacon, improved TOA and FOA estimation methods are proposed. An improved FOA estimation method which integrates bi-FOA measurement, an FFT method, the Rife algorithm and a Gaussian window is proposed to improve the accuracy of FOA estimation. In addition, a TPD algorithm and signal correlation techniques are used to achieve high-performance TOA estimation. Parameter estimation problems are solved by the proposed FOA/TOA methods under quite poor carrier-to-noise ratio (C/N0) conditions. A number of simulations are done to show the improvements. FOA and TOA estimation errors are lower than 0.1 Hz and 11 μs respectively, meeting the very demanding system requirements of the MEOSAR MEOLUT.
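The Rife step refines the coarse FFT-peak frequency using the magnitude of the larger neighboring bin. Below is a minimal sketch of that interpolation alone; the paper additionally combines it with bi-FOA measurement, a Gaussian window and the TPD algorithm, all omitted here.

```python
import numpy as np

def rife_foa(x, fs):
    """Frequency estimate from the FFT peak bin k, refined by Rife
    interpolation using the larger of the two adjacent bin magnitudes."""
    X = np.abs(np.fft.rfft(x))
    k = int(np.argmax(X))
    if X[k + 1] >= X[k - 1]:
        delta = X[k + 1] / (X[k] + X[k + 1])
    else:
        delta = -X[k - 1] / (X[k] + X[k - 1])
    return (k + delta) * fs / len(x)

fs, n = 8000.0, 1024
t = np.arange(n) / fs
tone = np.cos(2.0 * np.pi * 1236.0 * t)   # true frequency 1236 Hz
f_hat = rife_foa(tone, fs)                # refined well below the 7.8 Hz bin width
```

Without the interpolation, the estimate would be quantized to the FFT bin spacing fs/n; the refinement is what makes sub-Hz FOA accuracy plausible at reasonable FFT lengths.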
NASA Astrophysics Data System (ADS)
Curotto, E.
2015-12-01
Structural optimizations, classical NVT ensemble, and variational Monte Carlo simulations of ion Stockmayer clusters parameterized to approximate the Li+(CH3NO2)n (n = 1-20) systems are performed. The Metropolis algorithm enhanced by the parallel tempering strategy is used to measure internal energies and heat capacities, and a parallel version of the genetic algorithm is employed to obtain the most important minima. The first solvation sheath is octahedral and this feature remains the dominant theme in the structure of clusters with n ≥ 6. The first "magic number" is identified using the adiabatic solvent dissociation energy, and it marks the completion of the second solvation layer for the lithium ion-nitromethane clusters. It corresponds to the n = 18 system, a solvated ion with the first sheath having octahedral symmetry, weakly bound to an eight-membered and a four-membered ring crowning a vertex of the octahedron. Variational Monte Carlo estimates of the adiabatic solvent dissociation energy reveal that quantum effects further enhance the stability of the n = 18 system relative to its neighbors.
Chen, Shi-Yi; Deng, Feilong; Huang, Ying; Jia, Xianbo; Liu, Yi-Ping; Lai, Song-Jia
2016-04-01
Clustering of 16S rRNA amplicon sequences into operational taxonomic units (OTUs) is the most common bioinformatics pipeline for investigating microbial communities with high-throughput sequencing technologies. However, the reliability of the existing OTU clustering algorithms still leaves room for improvement. Here we propose an improved method (bioOTU) that first assigns taxonomy to unique tags at the genus level, separating the error-free sequences of species known in the reference database from artifacts, and then clusters them into OTUs by different strategies. The remaining tags, which fail to be clustered in the previous step, are further subjected to independent OTU clustering by an optimized heuristic clustering algorithm. Performance tests on both mock and real communities revealed that bioOTU is powerful for recovering the underlying profiles of both microbial composition and abundance, and it also produces a comparable or smaller number of OTUs in comparison with the prevailing tools Mothur and UPARSE. bioOTU is implemented in the C and Python languages with source code freely available on the GitHub repository.
Qu, Long; Nettleton, Dan; Dekkers, Jack C M
2012-12-01
Given a large number of t-statistics, we consider the problem of approximating the distribution of noncentrality parameters (NCPs) by a continuous density. This problem is closely related to the control of false discovery rates (FDR) in massive hypothesis testing applications, e.g., microarray gene expression analysis. Our methodology is similar to, but improves upon, the existing approach by Ruppert, Nettleton, and Hwang (2007, Biometrics, 63, 483-495). We provide parametric, nonparametric, and semiparametric estimators for the distribution of NCPs, as well as estimates of the FDR and local FDR. In the parametric situation, we assume that the NCPs follow a distribution that leads to an analytically available marginal distribution for the test statistics. In the nonparametric situation, we use convex combinations of basis density functions to estimate the density of the NCPs. A sequential quadratic programming procedure is developed to maximize the penalized likelihood. The smoothing parameter is selected with the approximate network information criterion. A semiparametric estimator is also developed to combine both parametric and nonparametric fits. Simulations show that, under a variety of situations, our density estimates are closer to the underlying truth and our FDR estimates are improved compared with alternative methods. Data-based simulations and the analyses of two microarray datasets are used to evaluate the performance in realistic situations.
Improvement of Epicentral Direction Estimation by P-wave Polarization Analysis
NASA Astrophysics Data System (ADS)
Oshima, Mitsutaka
2016-04-01
Polarization analysis has been used to analyze the polarization characteristics of waves and has been developed in various fields, for example, electromagnetics, optics, and seismology. In seismology, polarization analysis is used to discriminate seismic phases or to enhance a specific phase (e.g., Flinn, 1965 [1]), by taking advantage of the difference in polarization characteristics of seismic phases. In earthquake early warning, polarization analysis is used to estimate the epicentral direction from a single station, based on the polarization direction of the P-wave portion of seismic records (e.g., Smart and Sproules (1981) [2], Noda et al. (2012) [3]). Therefore, improvement of the Estimation of Epicentral Direction by Polarization Analysis (EEDPA) directly enhances the accuracy and promptness of earthquake early warning. In this study, the author tried to improve EEDPA using seismic records of events that occurred around Japan from 2003 to 2013. The author selected events that satisfy the following conditions: MJMA larger than 6.5 (JMA: Japan Meteorological Agency), and seismic records available at no fewer than 3 stations within 300 km in epicentral distance. Seismic records obtained at stations with no information on seismometer orientation were excluded, so that precise and quantitative evaluation of the accuracy of EEDPA becomes possible. In the analysis, polarization was calculated by the method of Vidale (1986) [4], which extended the method proposed by Montalbetti and Kanasewich (1970) [5] to use the analytic signal. As a result of the analysis, the author found that the accuracy of EEDPA improves by about 15% if velocity records, not displacement records, are used, contrary to the author's expectation. Use of velocity records enables reduction of CPU time in the integration of seismic records and improvement in the promptness of EEDPA, although this analysis is still rough and further scrutiny is essential. At this moment, the author used seismic records obtained by simply integrating acceleration
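The classic covariance-based version of this estimate (Montalbetti and Kanasewich style) takes the eigenvector of the three-component covariance matrix with the largest eigenvalue as the P-wave polarization; its horizontal projection points toward the epicenter, with an inherent 180-degree ambiguity. The synthetic P pulse below is invented, and the analytic-signal refinement of Vidale that the study actually uses is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 200)
p = np.sin(2*np.pi*8*t) * np.exp(-3*t)        # synthetic P-wave pulse shape
az, inc = np.deg2rad(40.0), np.deg2rad(30.0)  # true azimuth 40 deg, incidence 30 deg
E = p * np.sin(inc) * np.sin(az)              # east component
N = p * np.sin(inc) * np.cos(az)              # north component
Z = p * np.cos(inc)                           # vertical component
record = np.vstack([E, N, Z]) + 0.01 * rng.standard_normal((3, 200))

eigval, eigvec = np.linalg.eigh(np.cov(record))
pol = eigvec[:, np.argmax(eigval)]            # dominant polarization direction
azimuth = np.degrees(np.arctan2(pol[0], pol[1])) % 180.0  # E over N, mod 180
```

In practice the 180-degree ambiguity is resolved using the vertical first motion, which the sketch does not attempt.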
Keller, Brad M.; Nathan, Diane L.; Wang Yan; Zheng Yuanjie; Gee, James C.; Conant, Emily F.; Kontos, Despina
2012-08-15
Purpose: The amount of fibroglandular tissue content in the breast as estimated mammographically, commonly referred to as breast percent density (PD%), is one of the most significant risk factors for developing breast cancer. Approaches to quantify breast density commonly focus on either semiautomated methods or visual assessment, both of which are highly subjective. Furthermore, most studies published to date investigating computer-aided assessment of breast PD% have been performed using digitized screen-film mammograms, while digital mammography is increasingly replacing screen-film mammography in breast cancer screening protocols. Digital mammography imaging generates two types of images for analysis, raw (i.e., 'FOR PROCESSING') and vendor postprocessed (i.e., 'FOR PRESENTATION'), of which postprocessed images are commonly used in clinical practice. Development of an algorithm which effectively estimates breast PD% in both raw and postprocessed digital mammography images would be beneficial in terms of direct clinical application and retrospective analysis. Methods: This work proposes a new algorithm for fully automated quantification of breast PD% based on adaptive multiclass fuzzy c-means (FCM) clustering and support vector machine (SVM) classification, optimized for the imaging characteristics of both raw and processed digital mammography images as well as for individual patient and image characteristics. Our algorithm first delineates the breast region within the mammogram via an automated thresholding scheme to identify background air followed by a straight line Hough transform to extract the pectoral muscle region. The algorithm then applies adaptive FCM clustering based on an optimal number of clusters derived from image properties of the specific mammogram to subdivide the breast into regions of similar gray-level intensity. Finally, a SVM classifier is trained to identify which clusters within the breast tissue are likely fibroglandular, which are then
Shon, Hyun Kyong; Yoon, Sohee; Moon, Jeong Hee; Lee, Tae Geol
2016-06-09
The popularity of argon gas cluster ion beams (Ar-GCIB) as primary ion beams in time-of-flight secondary ion mass spectrometry (TOF-SIMS) has increased because the molecular ions of large organic- and biomolecules can be detected with less damage to the sample surfaces. However, Ar-GCIB is limited by poor mass resolution as well as poor mass accuracy. The inferior quality of the mass resolution in a TOF-SIMS spectrum obtained by using Ar-GCIB compared to the one obtained by a bismuth liquid metal cluster ion beam and others makes it difficult to identify unknown peaks because of the mass interference from the neighboring peaks. However, in this study, the authors demonstrate improved mass resolution in TOF-SIMS using Ar-GCIB through the delayed extraction of secondary ions, a method typically used in TOF mass spectrometry to increase mass resolution. As for poor mass accuracy, although mass calibration using internal peaks with low mass such as hydrogen and carbon is a common approach in TOF-SIMS, it is unsuited to the present study because of the disappearance of the low-mass peaks in the delayed extraction mode. To resolve this issue, external mass calibration, another regularly used method in TOF-MS, was adapted to enhance mass accuracy in the spectrum and image generated by TOF-SIMS using Ar-GCIB in the delayed extraction mode. By producing spectral analyses of a peptide mixture and bovine serum albumin protein digested with trypsin, along with image analyses of rat brain samples, the authors demonstrate for the first time the enhancement of mass resolution and mass accuracy for the purpose of analyzing large biomolecules in TOF-SIMS using Ar-GCIB through the use of delayed extraction and external mass calibration.
Bhattacharya, Anindya; Chowdhury, Nirmalya; De, Rajat K
2015-01-01
Performance of clustering algorithms is largely dependent on the selected similarity measure. Efficiency in handling outliers is a major contributor to the success of a similarity measure: the better a similarity measure captures the similarity between genes in the presence of outliers, the better the clustering algorithm will perform in forming biologically relevant groups of genes. In the present article, we discuss the problem of handling outliers with different existing similarity measures and introduce the concept of the Relative Sample Outlier (RSO). We formulate a new similarity measure, called Weighted Sample Similarity (WSS), incorporated into the Euclidean distance and the Pearson correlation coefficient, and then use them in various clustering and biclustering algorithms to group different gene expression profiles. Our results suggest that WSS improves the performance, in terms of finding biologically relevant groups of genes, of all the considered clustering algorithms.
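The general shape of a sample-weighted distance is easy to state: each sample (condition) contributes to the gene-gene distance in proportion to a weight, so outlier samples can be damped. The weights below are hand-picked for illustration only; they are not the RSO-based WSS weights defined in the paper.

```python
import numpy as np

def weighted_euclidean(x, y, w):
    """Euclidean distance with per-sample weights: outlier samples can be
    down-weighted so they dominate the distance less."""
    return float(np.sqrt(np.sum(w * (x - y) ** 2)))

x = np.array([1.0, 2.0, 1.5, 9.0])        # last sample is an outlier in x
y = np.array([1.1, 1.9, 1.6, 1.4])        # otherwise very similar profile
uniform = np.full(4, 0.25)                # plain (scaled) Euclidean distance
damped = np.array([0.3, 0.3, 0.3, 0.1])   # outlier sample down-weighted

d_plain = weighted_euclidean(x, y, uniform)
d_damped = weighted_euclidean(x, y, damped)  # outlier contributes less
```

With uniform weights the single outlier sample dominates the distance and would push two co-expressed genes into different clusters; down-weighting it restores their similarity.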
Longo, Giovanni; Ioannidu, Caterina Alexandra; Scotto d’Abusco, Anna; Superti, Fabiana; Misiano, Carlo; Zanoni, Robertino; Politi, Laura; Mazzola, Luca; Iosi, Francesca; Mura, Francesco; Scandurra, Roberto
2016-01-01
Introduction: Recently, we introduced a new deposition method, based on Ion Plating Plasma Assisted technology, to coat titanium implants with a thin but hard nanostructured layer composed of titanium carbide and titanium oxides, clustered around graphitic carbon. The nanostructured layer has a double effect: it protects the bulk titanium against the harsh conditions of biological tissues and at the same time has a stimulating action on osteoblasts. Results: The aim of this work is to describe the biological effects of this layer on osteoblasts cultured in vitro. We demonstrate that the nanostructured layer causes an overexpression of many early genes correlated to proteins involved in bone turnover and an increase in the number of surface receptors for α3β1 integrin, talin, and paxillin. Analyses at the single-cell level, by scanning electron microscopy, atomic force microscopy, and single-cell force spectroscopy, show how the proliferation, adhesion and spreading of cells cultured on coated titanium samples are higher than on uncoated titanium ones. Finally, the chemistry of the layer induces better formation of blood clots and a higher number of adhered platelets, compared to the uncoated cases, and these are useful features for improving the speed of implant osseointegration. Conclusion: In summary, the nanostructured TiC film, due to its physical and chemical properties, can be used to protect the implants and to improve their acceptance by the bone. PMID:27031101
BRIGHTEST X-RAY CLUSTERS OF GALAXIES IN THE CFHTLS WIDE FIELDS: CATALOG AND OPTICAL MASS ESTIMATOR
Mirkazemi, M.; Finoguenov, A.; Lerchster, M.; Erfanianfar, G.; Seitz, S.; Pereira, M. J.; Egami, E.; Tanaka, M.; Brimioulle, F.; Kettula, K.; McCracken, H. J.; Mellier, Y.; Kneib, J. P.; Rykoff, E.; Erben, T.; Taylor, J. E.
2015-01-20
The Canada-France-Hawaii Telescope Legacy Survey (CFHTLS) presents a unique data set for weak-lensing studies, having high-quality imaging and deep multiband photometry. We have initiated an XMM-CFHTLS project to provide X-ray observations of the brightest X-ray-selected clusters within the wide CFHTLS area. Performance of these observations and the high quality of CFHTLS data allow us to revisit the identification of X-ray sources, introducing automated reproducible algorithms, based on the multicolor red sequence finder. We have also introduced a new optical mass proxy. We provide the calibration of the red sequence observed in the Canada-France-Hawaii filters and compare the results with the traditional single-color red sequence and photo-z. We test the identification algorithm on the subset of highly significant XMM clusters and identify 100% of the sample. We find that the integrated z-band luminosity of the red sequence galaxies correlates well with the X-ray luminosity, with a surprisingly small scatter of 0.20 dex. We further use the multicolor red sequence to reduce spurious detections in the full XMM and ROSAT All-Sky Survey (RASS) data sets, resulting in catalogs of 196 and 32 clusters, respectively. We made spectroscopic follow-up observations of some of these systems with HECTOSPEC and in combination with BOSS DR9 data. We also describe the modifications needed to the source detection algorithm in order to maintain high purity of extended sources in the shallow X-ray data. We also present the scaling relation between X-ray luminosity and velocity dispersion.
The climate impact of ship NOx emissions: an improved estimate accounting for plume chemistry
NASA Astrophysics Data System (ADS)
Holmes, C. D.; Prather, M. J.; Vinken, G. C. M.
2014-07-01
Nitrogen oxide (NOx) emissions from maritime shipping produce ozone (O3) and hydroxyl radicals (OH), which in turn destroy methane (CH4). The balance between this warming (due to O3) and cooling (due to CH4) determines the net effect of ship NOx on climate. Previous estimates of the chemical impact and radiative forcing (RF) of ship NOx have generally assumed that plumes of ship exhaust are instantly diluted into model grid cells spanning hundreds of kilometers, even though this is known to produce biased results. Here we improve the parametric representation of exhaust-gas chemistry developed in the GEOS-Chem chemical transport model (CTM) to provide the first estimate of RF from shipping that accounts for sub-grid-scale ship plume chemistry. The CTM now calculates O3 production and CH4 loss both within and outside the exhaust plumes and also accounts for the effect of wind speed. With the improved modeling of plumes, ship NOx perturbations are smaller than suggested by the ensemble of past global modeling studies, but if we assume instant dilution of ship NOx on the grid scale, the CTM reproduces previous model results. Our best estimates of the RF components from increasing ship NOx emissions by 1 Tg(N) yr-1 are smaller than that given in the past literature: + 3.4 ± 0.85 mW m-2 (1σ confidence interval) from the short-lived ozone increase, -5.7 ± 1.3 mW m-2 from the CH4 decrease, and -1.7 ± 0.7 mW m-2 from the long-lived O3 decrease that accompanies the CH4 change. The resulting net RF is -4.0 ± 2.0 mW m-2 for emissions of 1 Tg(N) yr-1. Due to non-linearity in O3 production as a function of background NOx, RF from large changes in ship NOx emissions, such as the increase since preindustrial times, is about 20% larger than this RF value for small marginal emission changes. Using sensitivity tests in one CTM, we quantify sources of uncertainty in the RF components and causes of the ±30% spread in past model results; the main source of uncertainty is the
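The net forcing quoted in the abstract is the sum of the three component central values; the tiny check below reproduces that arithmetic (central values only, without the stated uncertainties).

```python
# Net radiative forcing per 1 Tg(N)/yr of ship NOx emissions: the sum of
# the three component central values quoted in the abstract (mW m^-2).
rf_short_lived_o3 = +3.4   # short-lived ozone increase (warming)
rf_ch4 = -5.7              # methane decrease (cooling)
rf_long_lived_o3 = -1.7    # long-lived ozone decrease tied to the CH4 change
rf_net = rf_short_lived_o3 + rf_ch4 + rf_long_lived_o3   # -4.0 mW m^-2
```

The cooling terms outweigh the warming term, which is why the study finds a net negative forcing for marginal increases in ship NOx.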
Yao, Yu; Cheng, Kai; Zhou, Zhi-Jie; Zhang, Bang-Cheng; Dong, Chao; Zheng, Sen
2015-11-01
Tracked vehicles are widely used in exploring unknown environments and in military applications. Current methods for adapting to soil conditions require soil parameters to be given in advance, and traction performance cannot always be maintained on soft soil. To solve this problem, it is essential to estimate track-soil parameters in real time. Therefore, a detailed mathematical model is proposed for the first time. Furthermore, a novel algorithm composed of a Kalman filter (KF) and an improved strong tracking filter (ISTF), named KF-ISTF, is developed for online track-soil estimation. In this method, the KF estimates slip parameters and the ISTF estimates motion states; the key soil parameters can then be estimated using a suitable soil model. The experimental results show that, equipped with the estimation algorithm, the proposed model can estimate the track-soil parameters and keep traction performance matched to the soil conditions.
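The KF half of the scheme can be sketched with a minimal scalar Kalman filter tracking a slip parameter. The state model, noise levels, and measurement setup below are hypothetical placeholders, not the paper's track-soil model.

```python
# Minimal scalar Kalman filter sketch, illustrating the KF component of the
# KF-ISTF scheme described above. All model and noise values are hypothetical.
import numpy as np

def kf_step(x, P, z, F=1.0, H=1.0, Q=1e-4, R=1e-2):
    """One predict/update cycle of a scalar Kalman filter."""
    # Predict step: propagate state and variance
    x_pred = F * x
    P_pred = F * P * F + Q
    # Update step: fold in measurement z
    K = P_pred * H / (H * P_pred * H + R)   # Kalman gain
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

rng = np.random.default_rng(0)
true_slip = 0.15                            # hypothetical constant slip value
x, P = 0.0, 1.0                             # uninformative initial estimate
for _ in range(200):
    z = true_slip + rng.normal(0.0, 0.1)    # noisy slip measurement
    x, P = kf_step(x, P, z)
print(f"estimated slip ~ {x:.3f}")
```

In the paper's full method the ISTF additionally adapts to abrupt state changes; this sketch shows only the stationary-filtering baseline.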
Williams, Kristine; Herman, Ruth; Bontempo, Daniel
2014-01-01
Purpose of the study Assisted living (AL) residents are at risk for cognitive and functional declines that eventually reduce their ability to care for themselves, thereby triggering nursing home placement. In developing a method to slow this decline, the efficacy of Reasoning Exercises in Assisted Living (REAL), a cognitive training intervention that teaches everyday reasoning and problem-solving skills to AL residents, was tested. Design and methods At thirteen randomized Midwestern facilities, AL residents whose Mini Mental State Examination scores ranged from 19 to 29 were trained in REAL, received a vitamin-education attention-control program, or received no treatment at all. For 3 weeks, treated groups received personal training in their respective programs. Results Scores on the Every Day Problems Test for Cognitively Challenged Elders (EPCCE) and on the Direct Assessment of Functional Status (DAFS) showed significant increases only for the REAL group. For EPCCE, change from baseline immediately postintervention was +3.10 (P<0.01), and there was significant retention at the 3-month follow-up (d=2.71; P<0.01). For DAFS, change from baseline immediately postintervention was +3.52 (P<0.001), although retention was not as strong. Neither the attention-control nor the no-treatment control group had significant gains immediately postintervention or at follow-up assessments. Post hoc across-group comparison of change from baseline also highlights the benefits of REAL training. For EPCCE, the magnitude of gain was significantly larger in the REAL group versus the no-treatment control group immediately postintervention (d=3.82; P<0.01) and at the 3-month follow-up (d=3.80; P<0.01). For DAFS, gain magnitude immediately postintervention for REAL was significantly greater compared with the attention-control group (d=4.73; P<0.01). Implications REAL improves skills in everyday problem solving, which may allow AL residents to maintain self-care and extend AL residency. This benefit
NASA Astrophysics Data System (ADS)
Gautam, M. R.; Zhu, J.; Ye, M.; Meyer, P. D.; Hassan, A. E.
2008-12-01
ANN-PTFs have in recent years become a popular means of mapping easily available soil data into hard-to-measure soil hydraulic parameters. These parameters and their distributions are indispensable inputs to subsurface flow and transport models, which provide the basis for environmental planning, management, and decision making. While improved ANN prediction together with preservation of the probability distributions of hydraulic parameters in ANN training is important, ANN-PTFs have typically been developed using a conventional ANN training approach with the mean square error as the error function, which may not preserve the probability distribution of the parameters. Moreover, conventional ANN training can itself introduce correlation among predicted parameters and fail to preserve the actual correlation among the measured parameters. The present study describes approaches to address these shortcomings of conventional ANN-PTF training algorithms by using new types of error functions, and presents a group of improved ANN-PTF models developed on the basis of the new approaches for different levels of data availability. In the study, the bootstrap method is used as part of ANN-PTF development to generate independent training and validation sets and to calculate uncertainty estimates of the ANN predictions. The results demonstrate the merit of the new ANN training approaches and the physical significance of various types of less costly soil data in the prediction of soil hydraulic parameters.
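The idea behind the modified error functions can be sketched as a loss that, beyond the usual mean-square error, also penalizes mismatch in the moments and inter-parameter correlations of the predicted hydraulic parameters. The weights and moment choices below are hypothetical illustrations, not the study's actual error functions.

```python
# Illustrative distribution-aware error function for ANN-PTF training:
# MSE plus penalties on per-parameter mean/std mismatch and on distortion
# of the inter-parameter correlation matrix. Weights are hypothetical.
import numpy as np

def distribution_aware_error(y_pred, y_true, w_mse=1.0, w_moment=0.5, w_corr=0.5):
    """y_pred, y_true: (n_samples, n_params) arrays of hydraulic parameters."""
    mse = np.mean((y_pred - y_true) ** 2)
    # Penalize differences in per-parameter mean and standard deviation
    moment = (np.mean((y_pred.mean(0) - y_true.mean(0)) ** 2)
              + np.mean((y_pred.std(0) - y_true.std(0)) ** 2))
    # Penalize distortion of the correlation structure among parameters
    corr = np.mean((np.corrcoef(y_pred.T) - np.corrcoef(y_true.T)) ** 2)
    return w_mse * mse + w_moment * moment + w_corr * corr

rng = np.random.default_rng(1)
y = rng.normal(size=(100, 3))                 # 3 hypothetical parameters
print(distribution_aware_error(y, y))         # identical predictions cost 0
```

Minimizing such a composite loss pushes the network toward predictions whose distribution and correlation structure match the measured parameters, which plain MSE training does not guarantee.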
Zhang, X L; Su, G F; Yuan, H Y; Chen, J G; Huang, Q Y
2014-09-15
Atmospheric dispersion models play an important role in nuclear power plant accident management. A reliable estimate of the radioactive material distribution at short range (about 50 km) is urgently needed for population sheltering and evacuation planning. However, the meteorological data and the source term, which greatly influence the accuracy of atmospheric dispersion models, are usually poorly known in the early phase of an emergency. In this study, a modified ensemble Kalman filter data assimilation method in conjunction with a Lagrangian puff model is proposed to simultaneously improve the model prediction and reconstruct the source term for short-range atmospheric dispersion using off-site environmental monitoring data. Four main uncertain parameters are considered: source release rate, plume rise height, wind speed and wind direction. Twin experiments show that the method effectively improves the predicted concentration distribution, and the temporal profiles of source release rate and plume rise height are also successfully reconstructed. Moreover, the time lag in the response of the ensemble Kalman filter is shortened. The method proposed here can be a useful tool not only in nuclear power plant accident emergency management but also in other similar situations where hazardous material is released into the atmosphere.
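A toy ensemble Kalman filter analysis step illustrates the update behind such source-term reconstruction. The scalar state (a release rate) and the linear observation operator (a "dilution" factor mapping release rate to a monitored concentration) are illustrative assumptions, not the paper's puff model.

```python
# Toy EnKF analysis step with perturbed observations, sketching the kind of
# update used for dispersion/source-term estimation. All numbers are
# hypothetical; the real method assimilates monitoring data via a puff model.
import numpy as np

def enkf_update(ensemble, obs, H, obs_err_std, rng):
    """ensemble: (n_members,) prior states; H: linear observation factor."""
    n = ensemble.size
    y_ens = H * ensemble                            # predicted observations
    cov_xy = np.cov(ensemble, y_ens)[0, 1]          # state-obs covariance
    var_y = np.var(y_ens, ddof=1) + obs_err_std ** 2
    K = cov_xy / var_y                              # Kalman gain
    perturbed_obs = obs + rng.normal(0.0, obs_err_std, n)
    return ensemble + K * (perturbed_obs - y_ens)

rng = np.random.default_rng(2)
true_release = 5.0       # hypothetical source release rate
H = 0.2                  # hypothetical dilution factor (state -> concentration)
prior = rng.normal(2.0, 2.0, 500)                   # biased, uncertain prior
obs = H * true_release + rng.normal(0.0, 0.05)      # one noisy monitor reading
posterior = enkf_update(prior, obs, H, 0.05, rng)
print(f"posterior mean ~ {posterior.mean():.2f}")   # pulled toward 5.0
```

One informative observation pulls the biased prior ensemble toward the true release rate and sharply reduces its spread; the paper's modification additionally shortens the filter's response lag for time-varying source terms.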
Estimating the value of improved wastewater treatment: the case of River Ganga, India.
Birol, Ekin; Das, Sukanya
2010-11-01
In this paper we employ a stated preference environmental valuation technique, namely the choice experiment method, to estimat