A new statistical distance scale for planetary nebulae
NASA Astrophysics Data System (ADS)
Ali, Alaa; Ismail, H. A.; Alsolami, Z.
2015-05-01
In the first part of this article we discuss the consistency among different individual distance methods for Galactic planetary nebulae, while in the second part we develop a new statistical distance scale based on a calibrating sample of well-determined distances. A set of 315 planetary nebulae with individual distances is extracted from the literature. Inspection of the data set indicates that the accuracy of the distances varies among the individual methods and also among the different sources in which the same method was applied. We therefore derive a reliable weighted mean distance for each object by accounting for the distance error and the weight of each individual method. The results reveal that the individual methods are consistent with one another, except for the gravity method, which produces larger distances than the others. From the initial data set we construct a standard calibrating sample of 82 objects. This sample is restricted to objects with distances determined from at least two different individual methods, apart from a few objects with trusted distances from the trigonometric, spectroscopic, and cluster-membership methods. In addition to its well-determined distances, this sample offers several advantages over those used in earlier distance scales. It is used to recalibrate the mass-radius and radio surface brightness temperature-radius relationships. An average error of ~30% is estimated for the new distance scale. The new scale is compared with the most widely used statistical scales in the literature, and the results show that it agrees with the majority of them to within ~±20%. Furthermore, the new scale yields a weighted mean distance to the Galactic center of 7.6±1.35 kpc, in good agreement with the recent measurement of Malkin (2013).
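The weighted-mean step described in this abstract can be sketched in a few lines. The sketch below assumes each individual estimate is weighted by a per-method reliability weight divided by its squared distance error; the exact weighting scheme of the paper and the example numbers are not taken from the source.

```python
import numpy as np

def weighted_mean_distance(distances, errors, method_weights):
    """Combine several individual distance estimates for one nebula.

    distances, errors : per-measurement values (e.g. kpc)
    method_weights    : relative reliability assigned to each individual method
    Each estimate is weighted by method_weight / error**2 (an illustrative
    choice; the published scheme may differ).
    """
    d = np.asarray(distances, dtype=float)
    e = np.asarray(errors, dtype=float)
    w = np.asarray(method_weights, dtype=float) / e**2
    mean = np.sum(w * d) / np.sum(w)
    err = 1.0 / np.sqrt(np.sum(w))      # formal error of the weighted mean
    return mean, err

# Hypothetical object with trigonometric, spectroscopic and extinction estimates
print(weighted_mean_distance([1.30, 1.10, 1.45], [0.10, 0.20, 0.35], [1.0, 0.8, 0.5]))
```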
Field efficiency and bias of snag inventory methods
Robert S. Kenning; Mark J. Ducey; John C. Brissette; Jeffery H. Gove
2005-01-01
Snags and cavity trees are important components of forests, but can be difficult to inventory precisely and are not always included in inventories because of limited resources. We tested the application of N-tree distance sampling as a time-saving snag sampling method and compared N-tree distance sampling to fixed-area sampling and modified horizontal line sampling in...
An improved initialization center k-means clustering algorithm based on distance and density
NASA Astrophysics Data System (ADS)
Duan, Yanling; Liu, Qun; Xia, Shuyin
2018-04-01
To address the problem that the random initial cluster centers of the k-means algorithm make the clustering results sensitive to outliers and unstable across repeated runs, a center initialization method based on large distance and high density is proposed. The reciprocal of the weighted average distance is used to represent sample density, and the samples with both larger distance and higher density are selected as the initial cluster centers to improve the clustering results. A clustering evaluation method based on distance and density is then designed to verify the feasibility and practicality of the algorithm; experimental results on UCI data sets show that the algorithm is stable and practical.
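A minimal sketch of this kind of initialization is given below, assuming density is taken as the reciprocal of a point's mean distance to all other points and that subsequent centers maximize the product of density and distance to the already chosen centers; the authors' exact weighting may differ.

```python
import numpy as np

def density_distance_init(X, k):
    """Choose k initial cluster centers that are both dense and far apart."""
    n = len(X)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)   # pairwise distances
    density = 1.0 / (dist.sum(axis=1) / (n - 1))                    # inverse mean distance

    centers = [int(np.argmax(density))]              # densest point starts the set
    for _ in range(k - 1):
        d_to_centers = dist[:, centers].min(axis=1)  # distance to nearest chosen center
        score = d_to_centers * density               # favor points that are far *and* dense
        score[centers] = -np.inf                     # never re-pick a chosen center
        centers.append(int(np.argmax(score)))
    return X[centers]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.3, size=(50, 2)) for c in ((0, 0), (3, 0), (0, 3))])
print(density_distance_init(X, 3))    # one center per true cluster, outliers avoided
```

Because the first center is the densest point rather than a random draw, repeated runs started this way use the same initial centers, which is the stability property the abstract refers to.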
Power and sample-size estimation for microbiome studies using pairwise distances and PERMANOVA.
Kelly, Brendan J; Gross, Robert; Bittinger, Kyle; Sherrill-Mix, Scott; Lewis, James D; Collman, Ronald G; Bushman, Frederic D; Li, Hongzhe
2015-08-01
The variation in community composition between microbiome samples, termed beta diversity, can be measured by pairwise distance based on either presence-absence or quantitative species abundance data. PERMANOVA, a permutation-based extension of multivariate analysis of variance to a matrix of pairwise distances, partitions within-group and between-group distances to permit assessment of the effect of an exposure or intervention (grouping factor) upon the sampled microbiome. Within-group distance and exposure/intervention effect size must be accurately modeled to estimate statistical power for a microbiome study that will be analyzed with pairwise distances and PERMANOVA. We present a framework for PERMANOVA power estimation tailored to marker-gene microbiome studies that will be analyzed by pairwise distances, which includes: (i) a novel method for distance matrix simulation that permits modeling of within-group pairwise distances according to pre-specified population parameters; (ii) a method to incorporate effects of different sizes within the simulated distance matrix; (iii) a simulation-based method for estimating PERMANOVA power from simulated distance matrices; and (iv) an R statistical software package that implements the above. Matrices of pairwise distances can be efficiently simulated to satisfy the triangle inequality and incorporate group-level effects, which are quantified by the adjusted coefficient of determination, omega-squared (ω2). From simulated distance matrices, available PERMANOVA power or necessary sample size can be estimated for a planned microbiome study.
Power and sample-size estimation for microbiome studies using pairwise distances and PERMANOVA
Kelly, Brendan J.; Gross, Robert; Bittinger, Kyle; Sherrill-Mix, Scott; Lewis, James D.; Collman, Ronald G.; Bushman, Frederic D.; Li, Hongzhe
2015-01-01
Motivation: The variation in community composition between microbiome samples, termed beta diversity, can be measured by pairwise distance based on either presence–absence or quantitative species abundance data. PERMANOVA, a permutation-based extension of multivariate analysis of variance to a matrix of pairwise distances, partitions within-group and between-group distances to permit assessment of the effect of an exposure or intervention (grouping factor) upon the sampled microbiome. Within-group distance and exposure/intervention effect size must be accurately modeled to estimate statistical power for a microbiome study that will be analyzed with pairwise distances and PERMANOVA. Results: We present a framework for PERMANOVA power estimation tailored to marker-gene microbiome studies that will be analyzed by pairwise distances, which includes: (i) a novel method for distance matrix simulation that permits modeling of within-group pairwise distances according to pre-specified population parameters; (ii) a method to incorporate effects of different sizes within the simulated distance matrix; (iii) a simulation-based method for estimating PERMANOVA power from simulated distance matrices; and (iv) an R statistical software package that implements the above. Matrices of pairwise distances can be efficiently simulated to satisfy the triangle inequality and incorporate group-level effects, which are quantified by the adjusted coefficient of determination, omega-squared (ω2). From simulated distance matrices, available PERMANOVA power or necessary sample size can be estimated for a planned microbiome study. Availability and implementation: http://github.com/brendankelly/micropower. Contact: brendank@mail.med.upenn.edu or hongzhe@upenn.edu
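The power-estimation loop can be illustrated with a simplified Monte Carlo sketch. Unlike the micropower package described above, which simulates distance matrices directly from pre-specified within-group parameters, the sketch below generates Euclidean data with a group shift, computes the PERMANOVA pseudo-F from the pairwise distance matrix, and estimates power by permutation; all settings are illustrative.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def pseudo_f(dmat, groups):
    """PERMANOVA pseudo-F statistic computed from a square distance matrix."""
    n, a = len(groups), len(set(groups))
    d2 = dmat ** 2
    ss_total = d2[np.triu_indices(n, 1)].sum() / n
    ss_within = 0.0
    for g in set(groups):
        idx = np.where(groups == g)[0]
        ss_within += d2[np.ix_(idx, idx)][np.triu_indices(len(idx), 1)].sum() / len(idx)
    ss_between = ss_total - ss_within
    return (ss_between / (a - 1)) / (ss_within / (n - a))

def permanova_power(n_per_group=15, effect=0.8, n_sim=100, n_perm=199, alpha=0.05, seed=1):
    """Fraction of simulated studies in which the permutation test rejects H0."""
    rng = np.random.default_rng(seed)
    groups = np.repeat([0, 1], n_per_group)
    hits = 0
    for _ in range(n_sim):
        x = rng.normal(size=(2 * n_per_group, 10))
        x[groups == 1, 0] += effect                     # group-level effect on one axis
        dmat = squareform(pdist(x))                     # pairwise Euclidean distances
        f_obs = pseudo_f(dmat, groups)
        f_perm = [pseudo_f(dmat, rng.permutation(groups)) for _ in range(n_perm)]
        p = (1 + sum(f >= f_obs for f in f_perm)) / (n_perm + 1)
        hits += p <= alpha
    return hits / n_sim

print(permanova_power())
```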
Potential, velocity, and density fields from sparse and noisy redshift-distance samples - Method
NASA Technical Reports Server (NTRS)
Dekel, Avishai; Bertschinger, Edmund; Faber, Sandra M.
1990-01-01
A method for recovering the three-dimensional potential, velocity, and density fields from large-scale redshift-distance samples is described. Galaxies are taken as tracers of the velocity field, not of the mass. The density field and the initial conditions are calculated using an iterative procedure that applies the no-vorticity assumption at an initial time and uses the Zel'dovich approximation to relate initial and final positions of particles on a grid. The method is tested using a cosmological N-body simulation 'observed' at the positions of real galaxies in a redshift-distance sample, taking into account their distance measurement errors. Malmquist bias and other systematic and statistical errors are extensively explored using both analytical techniques and Monte Carlo simulations.
Magnetic force microscopy with frequency-modulated capacitive tip-sample distance control
NASA Astrophysics Data System (ADS)
Zhao, X.; Schwenk, J.; Mandru, A. O.; Penedo, M.; Baćani, M.; Marioni, M. A.; Hug, H. J.
2018-01-01
In a step towards routinely achieving 10 nm spatial resolution with magnetic force microscopy, we have developed a robust method for active tip-sample distance control based on frequency modulation of the cantilever oscillation. It allows us to keep a well-defined tip-sample distance of the order of 10 nm within better than +/- 0.4 nm precision throughout the measurement even in the presence of energy dissipative processes, and is adequate for single-passage non-contact operation in vacuum. The cantilever is excited mechanically in a phase-locked loop to oscillate at constant amplitude on its first flexural resonance mode. This frequency is modulated by an electrostatic force gradient generated by tip-sample bias oscillating from a few hundred Hz up to a few kHz. The sum of the side bands’ amplitudes is a proxy for the tip-sample distance and can be used for tip-sample distance control. This method can also be extended to other scanning probe microscopy techniques.
Jeffrey H. Gove; Mark J. Ducey; Harry T. Valentine; Michael S. Williams
2013-01-01
Many new methods for sampling down coarse woody debris have been proposed in the last dozen or so years. One of the most promising in terms of field application, perpendicular distance sampling (PDS), has several variants that have been progressively introduced in the literature. In this study, we provide an overview of the different PDS variants and comprehensive...
A distance limited method for sampling downed coarse woody debris
Jeffrey H. Gove; Mark J. Ducey; Harry T. Valentine; Michael S. Williams
2012-01-01
A new sampling method for down coarse woody debris is proposed based on limiting the perpendicular distance from individual pieces to a randomly chosen sample point. Two approaches are presented that allow different protocols to be used to determine field measurements; estimators for each protocol are also developed. Both protocols are compared via simulation against...
NASA Astrophysics Data System (ADS)
Wang, Shu; Chen, Xiaodian; de Grijs, Richard; Deng, Licai
2018-01-01
Classical Cepheids are well-known and widely used distance indicators. As distance and extinction are usually degenerate, it is important to develop suitable methods to robustly anchor the distance scale. Here, we introduce a near-infrared optimal distance method to determine both the extinction values of and distances to a large sample of 288 Galactic classical Cepheids. The overall uncertainty in the derived distances is less than 4.9%. We compare our newly determined distances with previously published distances to the same Cepheids based on Hubble Space Telescope parallax measurements, the IR surface-brightness method, Wesenheit functions, and main-sequence fitting. The systematic deviations of the distances determined here with respect to those of previous publications are less than 1%–2%. We then constructed Galactic mid-IR period–luminosity (PL) relations for classical Cepheids in the four Wide-Field Infrared Survey Explorer (WISE) bands (W1, W2, W3, and W4) and the four Spitzer Space Telescope bands ([3.6], [4.5], [5.8], and [8.0]). Based on our sample of hundreds of Cepheids, the WISE PL relations have been determined for the first time; their dispersion is approximately 0.10 mag. Using the currently most complete sample, our Spitzer PL relations represent a significant improvement in accuracy, especially in the [3.6] band, which has the smallest dispersion (0.066 mag). In addition, the average mid-IR extinction curve for Cepheids has been obtained: A_W1/A_Ks ≈ 0.560, A_W2/A_Ks ≈ 0.479, A_W3/A_Ks ≈ 0.507, A_W4/A_Ks ≈ 0.406, A_[3.6]/A_Ks ≈ 0.481, A_[4.5]/A_Ks ≈ 0.469, A_[5.8]/A_Ks ≈ 0.427, and A_[8.0]/A_Ks ≈ 0.427.
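The PL-relation fitting step amounts to a straight-line fit of extinction-corrected absolute magnitude against log period. The sketch below uses made-up numbers purely to show the arithmetic; the band name, data values, and fitted coefficients are not from the paper.

```python
import numpy as np

# Hypothetical Cepheids: period (log days), apparent W1 magnitude, distance (pc), extinction A_W1 (mag)
log_p   = np.array([0.6, 0.8, 1.0, 1.2, 1.5])
m_w1    = np.array([7.9, 7.1, 6.2, 5.4, 4.3])
dist_pc = np.array([900., 1100., 1250., 1500., 2100.])
a_w1    = np.array([0.05, 0.07, 0.04, 0.06, 0.08])

# Absolute magnitude from the distance modulus, corrected for extinction
abs_mag = m_w1 - 5 * np.log10(dist_pc) + 5 - a_w1

# Linear PL relation M = slope * (log P - 1) + zero point, fitted by least squares
slope, zero_point = np.polyfit(log_p - 1.0, abs_mag, 1)
dispersion = np.std(abs_mag - (slope * (log_p - 1.0) + zero_point))
print(f"M_W1 = {slope:.2f} (log P - 1) + {zero_point:.2f}, dispersion = {dispersion:.3f} mag")
```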
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santamaria, L.; Siller, H. R.; Garcia-Ortiz, C. E., E-mail: cegarcia@cicese.mx
In this work, we present an alternative optical method to determine the probe-sample separation distance in a scanning near-field optical microscope. The experimental method is based on a Lloyd's mirror interferometer and offers a measurement precision of ∼100 nm using digital image processing and numerical analysis. The technique can also be strategically combined with the characterization of piezoelectric actuators and stability evaluation of the optical system. It also opens the possibility for the development of an automatic approximation control system valid for probe-sample distances from 5 to 500 μm.
Modeling abundance effects in distance sampling
Royle, J. Andrew; Dawson, D.K.; Bates, S.
2004-01-01
Distance-sampling methods are commonly used in studies of animal populations to estimate population density. A common objective of such studies is to evaluate the relationship between abundance or density and covariates that describe animal habitat or other environmental influences. However, little attention has been focused on methods of modeling abundance covariate effects in conventional distance-sampling models. In this paper we propose a distance-sampling model that accommodates covariate effects on abundance. The model is based on specification of the distance-sampling likelihood at the level of the sample unit in terms of local abundance (for each sampling unit). This model is augmented with a Poisson regression model for local abundance that is parameterized in terms of available covariates. Maximum-likelihood estimation of detection and density parameters is based on the integrated likelihood, wherein local abundance is removed from the likelihood by integration. We provide an example using avian point-transect data of Ovenbirds (Seiurus aurocapillus) collected using a distance-sampling protocol and two measures of habitat structure (understory cover and basal area of overstory trees). The model yields a sensible description of the relationship between habitat and Ovenbird density (a positive effect of understory cover and a negative effect of basal area) that can be used to evaluate the effects of habitat management on Ovenbird populations.
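A compressed version of that integrated likelihood can be written down directly. The sketch below uses a half-normal detection function and a line-transect-style distance density for simplicity; the paper analyses point-transect data, whose distance density carries an extra radial term, so this only shows the general shape of the estimator. All data values are invented.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, poisson

W = 100.0   # truncation distance in metres (illustrative)

def negloglik(params, counts, covariate, distances):
    """Poisson regression on local abundance x half-normal detection, abundance integrated out."""
    b0, b1, log_sigma = params
    sigma = np.exp(log_sigma)
    lam = np.exp(b0 + b1 * covariate)                         # expected local abundance
    # average detection probability over [0, W] for a half-normal detection function
    p = np.sqrt(2 * np.pi) * sigma * (norm.cdf(W / sigma) - 0.5) / W
    ll = poisson.logpmf(counts, lam * p).sum()                # marginal counts per sample unit
    # detected distances have density g(x) / integral(g) on [0, W]
    ll += np.sum(-distances**2 / (2 * sigma**2)) - len(distances) * np.log(p * W)
    return -ll

counts = np.array([3, 1, 4, 0, 2, 5, 1, 2])                   # detections per sample unit
cover  = np.array([0.8, 0.2, 0.9, 0.1, 0.5, 1.0, 0.3, 0.6])   # habitat covariate (e.g. understory cover)
dists  = np.array([12., 40., 5., 22., 60., 18., 35., 8., 50., 27., 14., 3., 44., 20., 31., 9., 25., 55.])

fit = minimize(negloglik, x0=[0.0, 0.0, np.log(30.0)], args=(counts, cover, dists))
print(fit.x)   # [intercept, covariate effect on log abundance, log sigma]
```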
A Unimodal Model for Double Observer Distance Sampling Surveys.
Becker, Earl F; Christ, Aaron M
2015-01-01
Distance sampling is a widely used method to estimate animal population size. Most distance sampling models utilize a monotonically decreasing detection function such as a half-normal. Recent advances in distance sampling modeling allow for the incorporation of covariates into the distance model, and the elimination of the assumption of perfect detection at some fixed distance (usually the transect line) with the use of double-observer models. The assumption of full observer independence in the double-observer model is problematic, but can be addressed by using the point independence assumption which assumes there is one distance, the apex of the detection function, where the 2 observers are assumed independent. Aerially collected distance sampling data can have a unimodal shape and have been successfully modeled with a gamma detection function. Covariates in gamma detection models cause the apex of detection to shift depending upon covariate levels, making this model incompatible with the point independence assumption when using double-observer data. This paper reports a unimodal detection model based on a two-piece normal distribution that allows covariates, has only one apex, and is consistent with the point independence assumption when double-observer data are utilized. An aerial line-transect survey of black bears in Alaska illustrates how this method can be applied.
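A two-piece normal detection function of the kind described can be written compactly; the parameterisation below (a single apex with separate left and right scale parameters) is one plausible form and not necessarily the paper's exact model.

```python
import numpy as np

def two_piece_normal(x, apex, sigma_left, sigma_right):
    """Unimodal detection probability with a single apex.

    Detection rises toward the apex with the left-hand scale and falls beyond
    it with the right-hand scale; covariates could act on the scales without
    moving the apex, which keeps the point-independence assumption usable.
    """
    x = np.asarray(x, dtype=float)
    sigma = np.where(x < apex, sigma_left, sigma_right)
    return np.exp(-(x - apex) ** 2 / (2 * sigma ** 2))

# Detection peaks away from the aircraft track line (illustrative numbers)
print(two_piece_normal([0, 75, 150, 300, 600], apex=150.0, sigma_left=80.0, sigma_right=200.0))
```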
Selective Distance-Based K+ Quantification on Paper-Based Microfluidics.
Gerold, Chase T; Bakker, Eric; Henry, Charles S
2018-04-03
In this study, paper-based microfluidic devices (μPADs) capable of K+ quantification in aqueous samples, as well as in human serum, using both colorimetric and distance-based methods are described. A lipophilic phase containing potassium ionophore I (valinomycin) was utilized to achieve highly selective quantification of K+ in the presence of Na+, Li+, and Mg2+ ions. Successful addition of a suspended lipophilic phase to a wax printed paper-based device is described and offers a solution to current approaches that rely on organic solvents, which damage wax barriers. The approach provides an avenue for future alkali/alkaline quantification utilizing μPADs. Colorimetric spot tests allowed for K+ quantification from 0.1-5.0 mM using only 3.00 μL of sample solution. Selective distance-based quantification required small sample volumes (6.00 μL) and gave responses sensitive enough to distinguish between 1.0 and 2.5 mM of sample K+. μPADs using distance-based methods were also capable of differentiating between 4.3 and 6.9 mM K+ in human serum samples. Distance-based methods required no digital analysis, electronic hardware, or pumps; any steps required for quantification could be carried out using the naked eye.
Van Berkel, Gary J [Clinton, TN; Kertesz, Vilmos [Knoxville, TN
2012-02-21
A system and method utilizes distance-measuring equipment including a laser sensor for controlling the collection instrument-to-surface distance during a sample collection process for use, for example, with mass spectrometric detection. The laser sensor is arranged in a fixed positional relationship with the collection instrument, and a signal is generated by way of the laser sensor which corresponds to the actual distance between the laser sensor and the surface. The actual distance between the laser sensor and the surface is compared to a target distance between the laser sensor and the surface when the collection instrument is arranged at a desired distance from the surface for sample collecting purposes, and adjustments are made, if necessary, so that the actual distance approaches the target distance.
Distance-limited perpendicular distance sampling for coarse woody debris: theory and field results
Mark J. Ducey; Micheal S. Williams; Jeffrey H. Gove; Steven Roberge; Robert S. Kenning
2013-01-01
Coarse woody debris (CWD) has been identified as an important component in many forest ecosystem processes. Perpendicular distance sampling (PDS) is one of the several efficient new methods that have been proposed for CWD inventory. One drawback of PDS is that the maximum search distance can be very large, especially if CWD diameters are large or the volume factor...
A novel heterogeneous training sample selection method on space-time adaptive processing
NASA Astrophysics Data System (ADS)
Wang, Qiang; Zhang, Yongshun; Guo, Yiduo
2018-04-01
The ground-target detection performance of space-time adaptive processing (STAP) degrades when the clutter power becomes non-homogeneous because the training samples are contaminated by target-like signals. To solve this problem, a novel non-homogeneous training sample selection method based on sample similarity is proposed, which converts the training sample selection into a convex optimization problem. First, the deficiencies of sample selection using the generalized inner product (GIP) are analyzed. Second, the similarities of different training samples are obtained by calculating the mean-Hausdorff distance so as to reject contaminated training samples. Third, the cell under test (CUT) and the remaining training samples are projected into the orthogonal subspace of the target in the CUT, and the mean-Hausdorff distances between the projected CUT and training samples are calculated. Fourth, the distances are sorted by value and the training samples with the larger values are preferentially selected to achieve dimension reduction. Finally, simulation results with Mountain-Top data verify the effectiveness of the proposed method.
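The mean-Hausdorff similarity index at the core of the second and third steps can be sketched as follows; how the complex space-time snapshots are mapped to point sets, and the orthogonal-subspace projection itself, are not reproduced here, so this only illustrates the distance.

```python
import numpy as np

def mean_hausdorff(A, B):
    """Mean (average) Hausdorff distance between two point sets A and B.

    For every point in A take the distance to its nearest point in B, average,
    do the same from B to A, and combine the two directed averages.
    """
    A, B = np.atleast_2d(A), np.atleast_2d(B)
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)   # all pairwise distances
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

# Toy example: a sample similar to the cell under test scores a small distance,
# while a sample contaminated by a target-like signal scores a large one.
cut = np.array([[0.0, 1.0], [1.0, 2.0], [2.0, 3.0]])
clean = cut + 0.05
contaminated = cut + np.array([3.0, -2.0])
print(mean_hausdorff(cut, clean), mean_hausdorff(cut, contaminated))
```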
Joint learning of labels and distance metric.
Liu, Bo; Wang, Meng; Hong, Richang; Zha, Zhengjun; Hua, Xian-Sheng
2010-06-01
Machine learning algorithms frequently suffer from the insufficiency of training data and the usage of inappropriate distance metric. In this paper, we propose a joint learning of labels and distance metric (JLLDM) approach, which is able to simultaneously address the two difficulties. In comparison with the existing semi-supervised learning and distance metric learning methods that focus only on label prediction or distance metric construction, the JLLDM algorithm optimizes the labels of unlabeled samples and a Mahalanobis distance metric in a unified scheme. The advantage of JLLDM is multifold: 1) the problem of training data insufficiency can be tackled; 2) a good distance metric can be constructed with only very few training samples; and 3) no radius parameter is needed since the algorithm automatically determines the scale of the metric. Extensive experiments are conducted to compare the JLLDM approach with different semi-supervised learning and distance metric learning methods, and empirical results demonstrate its effectiveness.
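The Mahalanobis form of distance that JLLDM optimises is the quadratic form below; here the metric matrix is simply the inverse sample covariance, an illustrative stand-in for the matrix the algorithm would learn jointly with the labels.

```python
import numpy as np

def mahalanobis_sq(x, y, M):
    """Squared Mahalanobis distance d_M(x, y) = (x - y)^T M (x - y).

    M must be symmetric positive semi-definite; metric learning methods such
    as JLLDM optimise M instead of fixing it in advance.
    """
    diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(diff @ M @ diff)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3)) * np.array([1.0, 5.0, 0.2])   # features on very different scales
M = np.linalg.inv(np.cov(X, rowvar=False))                  # whitening-style metric (illustrative)
print(mahalanobis_sq(X[0], X[1], M), mahalanobis_sq(X[0], X[1], np.eye(3)))   # vs. plain Euclidean
```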
Jaccard distance based weighted sparse representation for coarse-to-fine plant species recognition.
Zhang, Shanwen; Wu, Xiaowei; You, Zhuhong
2017-01-01
Leaf based plant species recognition plays an important role in ecological protection; however, its application to large and modern leaf databases has been a long-standing obstacle due to the computational cost and feasibility. Recognizing such limitations, we propose a Jaccard distance based sparse representation (JDSR) method which adopts a two-stage, coarse to fine strategy for plant species recognition. In the first stage, we use the Jaccard distance between the test sample and each training sample to coarsely determine the candidate classes of the test sample. The second stage includes a Jaccard distance based weighted sparse representation based classification (WSRC), which aims to approximately represent the test sample in the training space, and classify it by the approximation residuals. Since the training model of our JDSR method involves much fewer but more informative representatives, this method is expected to overcome the limitation of high computational and memory costs in traditional sparse representation based classification. Comparative experimental results on a public leaf image database demonstrate that the proposed method outperforms other existing feature extraction and SRC based plant recognition methods in terms of both accuracy and computational speed.
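The coarse stage can be sketched as a Jaccard-distance screen over binary leaf features; the fine stage (the weighted sparse representation and its residual-based classification) is omitted, and the feature vectors and class names below are hypothetical.

```python
import numpy as np

def jaccard_distance(a, b):
    """Jaccard distance between two binary feature vectors."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 0.0
    return 1.0 - np.logical_and(a, b).sum() / union

def candidate_classes(test, train_X, train_y, n_candidates=3):
    """Coarse stage: keep the classes of the training samples closest to the test sample."""
    d = np.array([jaccard_distance(test, x) for x in train_X])
    kept = []
    for i in np.argsort(d):                 # walk through neighbours in order of distance
        if train_y[i] not in kept:
            kept.append(train_y[i])
        if len(kept) == n_candidates:
            break
    return kept

train_X = np.array([[1, 1, 0, 0], [1, 0, 1, 0], [0, 0, 1, 1], [0, 1, 1, 1]])
train_y = np.array(["oak", "maple", "birch", "birch"])
print(candidate_classes(np.array([1, 1, 1, 0]), train_X, train_y, n_candidates=2))
```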
Properties of star clusters - I. Automatic distance and extinction estimates
NASA Astrophysics Data System (ADS)
Buckner, Anne S. M.; Froebrich, Dirk
2013-12-01
Determining star cluster distances is essential to analyse their properties and distribution in the Galaxy. In particular, it is desirable to have a reliable, purely photometric distance estimation method for large samples of newly discovered cluster candidates e.g. from the Two Micron All Sky Survey, the UK Infrared Deep Sky Survey Galactic Plane Survey and VVV. Here, we establish an automatic method to estimate distances and reddening from near-infrared photometry alone, without the use of isochrone fitting. We employ a decontamination procedure of JHK photometry to determine the density of stars foreground to clusters and a galactic model to estimate distances. We then calibrate the method using clusters with known properties. This allows us to establish distance estimates with better than 40 per cent accuracy. We apply our method to determine the extinction and distance values to 378 known open clusters and 397 cluster candidates from the list of Froebrich, Scholz & Raftery. We find that the sample is biased towards clusters of a distance of approximately 3 kpc, with typical distances between 2 and 6 kpc. Using the cluster distances and extinction values, we investigate how the average extinction per kiloparsec distance changes as a function of the Galactic longitude. We find a systematic dependence that can be approximated by AH(l) [mag kpc-1] = 0.10 + 0.001 × |l - 180°|/° for regions more than 60° from the Galactic Centre.
DISTANCES TO DARK CLOUDS: COMPARING EXTINCTION DISTANCES TO MASER PARALLAX DISTANCES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foster, Jonathan B.; Jackson, James M.; Stead, Joseph J.
We test two different methods of using near-infrared extinction to estimate distances to dark clouds in the first quadrant of the Galaxy using large near-infrared (Two Micron All Sky Survey and UKIRT Infrared Deep Sky Survey) surveys. Very long baseline interferometry parallax measurements of masers around massive young stars provide the most direct and bias-free measurement of the distance to these dark clouds. We compare the extinction distance estimates to these maser parallax distances. We also compare these distances to kinematic distances, including recent re-calibrations of the Galactic rotation curve. The extinction distance methods agree with the maser parallax distances (within the errors) between 66% and 100% of the time (depending on method and input survey) and between 85% and 100% of the time outside of the crowded Galactic center. Although the sample size is small, extinction distance methods reproduce maser parallax distances better than kinematic distances; furthermore, extinction distance methods do not suffer from the kinematic distance ambiguity. This validation gives us confidence that these extinction methods may be extended to additional dark clouds where maser parallaxes are not available.
Handwriting individualization using distance and rarity
NASA Astrophysics Data System (ADS)
Tang, Yi; Srihari, Sargur; Srinivasan, Harish
2012-01-01
Forensic individualization is the task of associating observed evidence with a specific source. The likelihood ratio (LR) is a quantitative measure that expresses the degree of uncertainty in individualization, where the numerator represents the likelihood that the evidence corresponds to the known and the denominator the likelihood that it does not correspond to the known. Since the number of parameters needed to compute the LR is exponential with the number of feature measurements, a commonly used simplification is the use of likelihoods based on distance (or similarity) given the two alternative hypotheses. This paper proposes an intermediate method which decomposes the LR as the product of two factors, one based on distance and the other on rarity. It was evaluated using a data set of handwriting samples, by determining whether two writing samples were written by the same/different writer(s). The accuracy of the distance and rarity method, as measured by error rates, is significantly better than the distance method.
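One simple reading of that decomposition is sketched below: the distance factor is the ratio of fitted same-source and different-source densities evaluated at the observed distance, and the rarity factor is the reciprocal of the population probability of the observed handwriting characteristics. The normal densities, parameter values, and rarity probability are illustrative assumptions, not the paper's fitted models.

```python
from scipy.stats import norm

def likelihood_ratio(d, same_params, diff_params, rarity_prob):
    """LR ~= (distance factor) * (rarity factor).

    same_params, diff_params : (mean, std) of the distance under the same-writer
                               and different-writer hypotheses (illustrative normals)
    rarity_prob              : probability of the evidence's feature combination
                               in the relevant writer population
    """
    distance_factor = norm.pdf(d, *same_params) / norm.pdf(d, *diff_params)
    rarity_factor = 1.0 / rarity_prob
    return distance_factor * rarity_factor

# A small observed distance between the two writing samples, and features seen in ~2% of writers
print(likelihood_ratio(d=0.15, same_params=(0.2, 0.1), diff_params=(0.6, 0.15), rarity_prob=0.02))
```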
ERIC Educational Resources Information Center
Karal, Hasan; Çebi, Ayça; Turgut, Yigit Emrah
2010-01-01
The aim of this study was to define the role of the assistant in a classroom environment where students are taught using video conference-based synchronous distance education. Qualitative research approach was adopted and, among purposeful sampling methods, criterion sampling method was preferred in the scope of the study. The study was carried…
Mark J. Ducey; Jeffrey H. Gove; Harry T. Valentine
2008-01-01
Perpendicular distance sampling (PDS) is a fast probability-proportional-to-size method for inventory of downed wood. However, previous development of PDS had limited the method to estimating only one variable (such as volume per hectare, or surface area per hectare) at a time. Here, we develop a general design-unbiased estimator for PDS. We then show how that...
Yeung, Edward S.; Gong, Xiaoyi
2004-09-07
The present invention provides a method of analyzing multiple samples simultaneously by absorption detection. The method comprises: (i) providing a planar array of multiple containers, each of which contains a sample comprising at least one absorbing species, (ii) irradiating the planar array of multiple containers with a light source and (iii) detecting absorption of light with a detection means that is in line with the light source at a distance of at least about 10 times a cross-sectional distance of a container in the planar array of multiple containers. The absorption of light by a sample indicates the presence of an absorbing species in it. The method can further comprise: (iv) measuring the amount of absorption of light detected in (iii), indicating the amount of the absorbing species in the sample. Also provided by the present invention is a system for use in the above method. The system comprises: (i) a light source comprising or consisting essentially of at least one wavelength of light, the absorption of which is to be detected, (ii) a planar array of multiple containers, and (iii) a detection means that is in line with the light source and is positioned in line with and parallel to the planar array of multiple containers at a distance of at least about 10 times a cross-sectional distance of a container.
ERIC Educational Resources Information Center
de Oliveira Neto, Jose Dutra; dos Santos, Elaine Maria
2010-01-01
The objective of this study was to identify the methodological approaches employed in a sample of Brazilian distance education scientific literature and compare with similar publications in the United States. Brazilian sample articles (N = 983) published in several journals and meetings were compared with a sample of articles published in…
VizieR Online Data Catalog: Star clusters distances and extinctions (Buckner+, 2013)
NASA Astrophysics Data System (ADS)
Buckner, A. S. M.; Froebrich, D.
2014-10-01
Determining star cluster distances is essential to analyse their properties and distribution in the Galaxy. In particular, it is desirable to have a reliable, purely photometric distance estimation method for large samples of newly discovered cluster candidates e.g. from the Two Micron All Sky Survey, the UK Infrared Deep Sky Survey Galactic Plane Survey and VVV. Here, we establish an automatic method to estimate distances and reddening from near-infrared photometry alone, without the use of isochrone fitting. We employ a decontamination procedure of JHK photometry to determine the density of stars foreground to clusters and a galactic model to estimate distances. We then calibrate the method using clusters with known properties. This allows us to establish distance estimates with better than 40 percent accuracy. We apply our method to determine the extinction and distance values to 378 known open clusters and 397 cluster candidates from the list of Froebrich, Scholz & Raftery (2007MNRAS.374..399F, Cat. J/MNRAS/374/399). We find that the sample is biased towards clusters of a distance of approximately 3kpc, with typical distances between 2 and 6kpc. Using the cluster distances and extinction values, we investigate how the average extinction per kiloparsec distance changes as a function of the Galactic longitude. We find a systematic dependence that can be approximated by AH(l)[mag/kpc]=0.10+0.001x|l-180°|/° for regions more than 60° from the Galactic Centre. (1 data file).
Assessment of imaging quality in magnified phase CT of human bone tissue at the nanoscale
NASA Astrophysics Data System (ADS)
Yu, Boliang; Langer, Max; Pacureanu, Alexandra; Gauthier, Remy; Follet, Helene; Mitton, David; Olivier, Cecile; Cloetens, Peter; Peyrin, Francoise
2017-10-01
Bone properties at all length scales have a major impact on the fracture risk in diseases such as osteoporosis. However, quantitative 3D data on bone tissue at the cellular scale are still rare. Here we propose to use magnified X-ray phase nano-CT to quantify the ultrastructure of human bone, on the new setup developed on beamline ID16A at the ESRF, Grenoble. Obtaining 3D images requires the application of phase retrieval prior to tomographic reconstruction. Phase retrieval is an ill-posed problem for which various approaches have been developed. Since image quality has a strong impact on the further quantification of bone tissue, our aim here is to evaluate different phase retrieval methods for imaging bone samples at the cellular scale. Samples from femurs of female donors were scanned using magnified phase nano-CT at voxel sizes of 120 and 30 nm with an energy of 33 keV. Four CT scans at varying sample-to-detector distances were acquired for each sample. We evaluated three phase retrieval methods adapted to these conditions: Paganin's method at a single distance, Paganin's method extended to multiple distances, and the contrast transfer function (CTF) approach for pure phase objects. These methods were used as initialization to an iterative refinement step. Our results, based on visual and quantitative assessment, show that the use of several distances (as opposed to a single one) clearly improves image quality and that the two multi-distance phase retrieval methods give similar results. First results on the segmentation of osteocyte lacunae and canaliculi from such images are presented.
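Of the three approaches, the single-distance Paganin filter is compact enough to sketch; the version below follows the standard homogeneous-object (single-material) formula, and every numerical setting (pixel size, propagation distance, delta/beta, attenuation) is a placeholder rather than a value from the experiment. The multi-distance and CTF variants are not shown.

```python
import numpy as np

def paganin_single_distance(intensity, pixel_size, dist, wavelength, delta_beta, mu):
    """Single-distance Paganin phase retrieval for a flat-field-corrected projection.

    intensity  : measured projection divided by the flat field (I/I0)
    pixel_size : effective pixel/voxel size in metres
    dist       : sample-to-detector propagation distance in metres
    delta_beta : ratio delta/beta of the complex refractive index
    mu         : linear attenuation coefficient (1/m) used to convert to thickness
    """
    ny, nx = intensity.shape
    fx = np.fft.fftfreq(nx, d=pixel_size)
    fy = np.fft.fftfreq(ny, d=pixel_size)
    f2 = fx[None, :] ** 2 + fy[:, None] ** 2
    # low-pass filter derived from the transport-of-intensity equation
    filt = 1.0 / (1.0 + np.pi * wavelength * dist * delta_beta * f2)
    filtered = np.real(np.fft.ifft2(np.fft.fft2(intensity) * filt))
    return -np.log(np.clip(filtered, 1e-6, None)) / mu     # projected thickness map

rng = np.random.default_rng(0)
proj = np.clip(rng.normal(0.9, 0.02, size=(256, 256)), 0.5, 1.1)   # fake projection for the demo
thickness = paganin_single_distance(proj, pixel_size=30e-9, dist=0.05,
                                    wavelength=1.2398e-9 / 33.0,   # ~33 keV
                                    delta_beta=300.0, mu=5e4)
print(thickness.shape, float(thickness.mean()))
```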
Van Berkel, Gary J.; Kertesz, Vilmos
2011-08-09
A system and method utilizes an image analysis approach for controlling the collection instrument-to-surface distance in a sampling system for use, for example, with mass spectrometric detection. Such an approach involves the capturing of an image of the collection instrument or the shadow thereof cast across the surface and the utilization of line average brightness (LAB) techniques to determine the actual distance between the collection instrument and the surface. The actual distance is subsequently compared to a target distance for re-optimization, as necessary, of the collection instrument-to-surface during an automated surface sampling operation.
Modeling abundance using hierarchical distance sampling
Royle, Andy; Kery, Marc
2016-01-01
In this chapter, we provide an introduction to classical distance sampling ideas for point and line transect data, and for continuous and binned distance data. We introduce the conditional and the full likelihood, and we discuss Bayesian analysis of these models in BUGS using the idea of data augmentation, which we discussed in Chapter 7. We then extend the basic ideas to the problem of hierarchical distance sampling (HDS), where we have multiple point or transect sample units in space (or possibly in time). The benefit of HDS in practice is that it allows us to directly model spatial variation in population size among these sample units. This is a preeminent concern of most field studies that use distance sampling methods, but it is not a problem that has received much attention in the literature. We show how to analyze HDS models in both the unmarked package and in the BUGS language for point and line transects, and for continuous and binned distance data. We provide a case study of HDS applied to a survey of the island scrub-jay on Santa Cruz Island, California.
A guide to the use of distance sampling to estimate abundance of Karner blue butterflies
Grundel, Ralph
2015-01-01
This guide is intended to describe the use of distance sampling as a method for evaluating the abundance of Karner blue butterflies at a location. Other methods for evaluating abundance exist, including mark-release-recapture and index counts derived from Pollard-Yates surveys, for example. Although this guide is not intended to be a detailed comparison of the pros and cons of each type of method, there are important preliminary considerations to think about before selecting any method for evaluating the abundance of Karner blue butterflies.
Perpendicular distance sampling: an alternative method for sampling downed coarse woody debris
Michael S. Williams; Jeffrey H. Gove
2003-01-01
Coarse woody debris (CWD) plays an important role in many forest ecosystem processes. In recent years, a number of new methods have been proposed to sample CWD. These methods select individual logs into the sample using some form of unequal probability sampling. One concern with most of these methods is the difficulty in estimating the volume of each log. A new method...
Zhang, Hong-guang; Lu, Jian-gang
2016-02-01
To overcome the problems of significant differences among samples and the nonlinearity between the property and the spectra of samples in spectral quantitative analysis, a local regression algorithm is proposed in this paper. In this algorithm, the net analyte signal (NAS) method was first used to obtain the net analyte signal of the calibration samples and the unknown samples; the Euclidean distance between the net analyte signal of an unknown sample and that of each calibration sample was then calculated and used as the similarity index. According to this similarity index, a local calibration set was selected individually for each unknown sample. Finally, a local PLS regression model was built on each local calibration set for each unknown sample. The proposed method was applied to a set of near-infrared spectra of meat samples. The results demonstrate that the prediction precision and model complexity of the proposed method are superior to the global PLS regression method and a conventional local regression algorithm based on spectral Euclidean distance.
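A stripped-down version of the local selection plus local PLS step is shown below; it skips the net-analyte-signal preprocessing and computes the Euclidean distances on the raw spectra, so it only reproduces the overall flow, and the synthetic data are purely illustrative.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def local_pls_predict(X_cal, y_cal, x_unknown, n_local=30, n_components=5):
    """Predict one unknown sample from a PLS model built on its nearest calibration spectra."""
    d = np.linalg.norm(X_cal - x_unknown, axis=1)     # spectral Euclidean distance (similarity index)
    idx = np.argsort(d)[:n_local]                     # local calibration set for this sample
    pls = PLSRegression(n_components=n_components)
    pls.fit(X_cal[idx], y_cal[idx])
    return float(pls.predict(x_unknown.reshape(1, -1)).ravel()[0])

rng = np.random.default_rng(0)
X_cal = rng.normal(size=(200, 100))                              # synthetic "spectra"
y_cal = X_cal[:, :10].sum(axis=1) + rng.normal(scale=0.1, size=200)
x_new = rng.normal(size=100)
print(local_pls_predict(X_cal, y_cal, x_new))
```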
Estimating abundance of mountain lions from unstructured spatial sampling
Russell, Robin E.; Royle, J. Andrew; Desimone, Richard; Schwartz, Michael K.; Edwards, Victoria L.; Pilgrim, Kristy P.; Mckelvey, Kevin S.
2012-01-01
Mountain lions (Puma concolor) are often difficult to monitor because of their low capture probabilities, extensive movements, and large territories. Methods for estimating the abundance of this species are needed to assess population status, determine harvest levels, evaluate the impacts of management actions on populations, and derive conservation and management strategies. Traditional mark–recapture methods do not explicitly account for differences in individual capture probabilities due to the spatial distribution of individuals in relation to survey effort (or trap locations). However, recent advances in the analysis of capture–recapture data have produced methods estimating abundance and density of animals from spatially explicit capture–recapture data that account for heterogeneity in capture probabilities due to the spatial organization of individuals and traps. We adapt recently developed spatial capture–recapture models to estimate density and abundance of mountain lions in western Montana. Volunteers and state agency personnel collected mountain lion DNA samples in portions of the Blackfoot drainage (7,908 km2) in west-central Montana using 2 methods: snow back-tracking mountain lion tracks to collect hair samples and biopsy darting treed mountain lions to obtain tissue samples. Overall, we recorded 72 individual capture events, including captures both with and without tissue sample collection and hair samples, resulting in the identification of 50 individual mountain lions (30 females, 19 males, and 1 unknown sex individual). We estimated lion densities from 8 models containing effects of distance, sex, and survey effort on detection probability. Our population density estimates ranged from a minimum of 3.7 mountain lions/100 km2 (95% CI 2.3–5.7) under the distance-only model (including only an effect of distance on detection probability) to 6.7 (95% CI 3.1–11.0) under the full model (including effects of distance, sex, survey effort, and distance × sex on detection probability). These numbers translate to a total estimate of 293 mountain lions (95% CI 182–451) to 529 (95% CI 245–870) within the Blackfoot drainage. Results from the distance model are similar to previous estimates of 3.6 mountain lions/100 km2 for the study area; however, results from all other models indicated greater numbers of mountain lions. Our results indicate that unstructured spatial sampling combined with spatial capture–recapture analysis can be an effective method for estimating large carnivore densities.
Automatic HTS force measurement instrument
Sanders, Scott T.; Niemann, Ralph C.
1999-01-01
A device for measuring the levitation force of a high temperature superconductor sample with respect to a reference magnet includes a receptacle for holding several high temperature superconductor samples each cooled to superconducting temperature. A rotatable carousel successively locates a selected one of the high temperature superconductor samples in registry with the reference magnet. Mechanism varies the distance between one of the high temperature superconductor samples and the reference magnet, and a sensor measures levitation force of the sample as a function of the distance between the reference magnet and the sample. A method is also disclosed.
NASA Astrophysics Data System (ADS)
Pan, Hao; Qu, Xinghua; Shi, Chunzhao; Zhang, Fumin; Li, Yating
2018-06-01
The non-uniform interval resampling method has been widely used in frequency modulated continuous wave (FMCW) laser ranging. In large-bandwidth and long-distance measurements, the range peak is deteriorated by the fiber dispersion mismatch. In this study, we analyze the frequency-sampling error caused by the mismatch and measure it using the spectroscopy of a molecular frequency reference line. By using adjacent-point replacement and a spline interpolation technique, the sampling errors can be eliminated. The results demonstrate that the proposed method is suitable for resolution enhancement and high-precision measurement. Moreover, using the proposed method, we achieved an absolute distance precision of better than 45 μm over an 8 m range.
Montaser, Akbar [Potomac, MD; Westphal, Craig S [Landenberg, PA; Kahen, Kaveh [Montgomery Village, MD; Rutkowski, William F [Arlington, VA
2008-01-08
An apparatus and method for providing direct liquid sample introduction using a nebulizer are provided. The apparatus and method include a short torch having an inner tube and an outer tube, and an elongated adapter having a cavity for receiving the nebulizer and positioning a nozzle tip of the nebulizer a predetermined distance from a tip of the outer tube of the short torch. The predetermined distance is preferably about 2-5 mm.
[Fast discrimination of edible vegetable oil based on Raman spectroscopy].
Zhou, Xiu-Jun; Dai, Lian-Kui; Li, Sheng
2012-07-01
A novel method to fast discriminate edible vegetable oils by Raman spectroscopy is presented. The training set is composed of different edible vegetable oils with known classes. Based on their original Raman spectra, baseline correction and normalization were applied to obtain standard spectra. Two characteristic peaks describing the unsaturated degree of vegetable oil were selected as feature vectors; then the centers of all classes were calculated. For an edible vegetable oil with unknown class, the same pretreatment and feature extraction methods were used. The Euclidian distances between the feature vector of the unknown sample and the center of each class were calculated, and the class of the unknown sample was finally determined by the minimum distance. For 43 edible vegetable oil samples from seven different classes, experimental results show that the clustering effect of each class was more obvious and the class distance was much larger with the new feature extraction method compared with PCA. The above classification model can be applied to discriminate unknown edible vegetable oils rapidly and accurately.
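The classification rule reduces to a nearest-centroid decision in the two-peak feature space. The sketch below assumes the two features are the baseline-corrected, normalized intensities of the two unsaturation-sensitive peaks; the oil classes and numbers are invented for illustration.

```python
import numpy as np

def train_centroids(features, labels):
    """Mean feature vector (class center) for each known oil class."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def classify(feature, centroids):
    """Assign the unknown oil to the class with the nearest center (minimum Euclidean distance)."""
    return min(centroids, key=lambda c: np.linalg.norm(feature - centroids[c]))

feats = np.array([[0.82, 0.35], [0.80, 0.33],     # class "olive" training samples
                  [0.55, 0.60], [0.53, 0.62],     # class "soybean"
                  [0.30, 0.88], [0.28, 0.90]])    # class "linseed"
labels = np.array(["olive", "soybean", "linseed"]).repeat(2)
centroids = train_centroids(feats, labels)
print(classify(np.array([0.57, 0.58]), centroids))    # -> "soybean"
```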
Multivariate Welch t-test on distances
2016-01-01
Motivation: Permutational non-Euclidean analysis of variance, PERMANOVA, is routinely used in exploratory analysis of multivariate datasets to draw conclusions about the significance of patterns visualized through dimension reduction. This method recognizes that pairwise distance matrix between observations is sufficient to compute within and between group sums of squares necessary to form the (pseudo) F statistic. Moreover, not only Euclidean, but arbitrary distances can be used. This method, however, suffers from loss of power and type I error inflation in the presence of heteroscedasticity and sample size imbalances. Results: We develop a solution in the form of a distance-based Welch t-test, TW2, for two sample potentially unbalanced and heteroscedastic data. We demonstrate empirically the desirable type I error and power characteristics of the new test. We compare the performance of PERMANOVA and TW2 in reanalysis of two existing microbiome datasets, where the methodology has originated. Availability and Implementation: The source code for methods and analysis of this article is available at https://github.com/alekseyenko/Tw2. Further guidance on application of these methods can be obtained from the author. Contact: alekseye@musc.edu
Multivariate Welch t-test on distances.
Alekseyenko, Alexander V
2016-12-01
Permutational non-Euclidean analysis of variance, PERMANOVA, is routinely used in exploratory analysis of multivariate datasets to draw conclusions about the significance of patterns visualized through dimension reduction. This method recognizes that pairwise distance matrix between observations is sufficient to compute within and between group sums of squares necessary to form the (pseudo) F statistic. Moreover, not only Euclidean, but arbitrary distances can be used. This method, however, suffers from loss of power and type I error inflation in the presence of heteroscedasticity and sample size imbalances. We develop a solution in the form of a distance-based Welch t-test, TW2, for two sample potentially unbalanced and heteroscedastic data. We demonstrate empirically the desirable type I error and power characteristics of the new test. We compare the performance of PERMANOVA and TW2 in reanalysis of two existing microbiome datasets, where the methodology has originated. The source code for methods and analysis of this article is available at https://github.com/alekseyenko/Tw2. Further guidance on application of these methods can be obtained from the author: alekseye@musc.edu.
Zeng, Y H; Chen, X H; Jiao, N Z
2007-12-01
To assess how completely the diversity of anoxygenic phototrophic bacteria (APB) was sampled in natural environments. All nucleotide sequences of the APB marker gene pufM from cultures and environmental clones were retrieved from the GenBank database. A set of cutoff values (sequence distances 0.06, 0.15 and 0.48 for species, genus, and (sub)phylum levels, respectively) was established using a distance-based grouping program. Analysis of the environmental clones revealed that current efforts on APB isolation and sampling in natural environments are largely inadequate. Analysis of the average distance between each identified genus and an uncultured environmental pufM sequence indicated that the majority of cultured APB genera lack environmental representatives. The distance-based grouping method is fast and efficient for bulk functional gene sequences analysis. The results clearly show that we are at a relatively early stage in sampling the global richness of APB species. Periodical assessment will undoubtedly facilitate in-depth analysis of potential biogeographical distribution pattern of APB. This is the first attempt to assess the present understanding of APB diversity in natural environments. The method used is also useful for assessing the diversity of other functional genes.
a Fast Method for Measuring the Similarity Between 3d Model and 3d Point Cloud
NASA Astrophysics Data System (ADS)
Zhang, Zongliang; Li, Jonathan; Li, Xin; Lin, Yangbin; Zhang, Shanxin; Wang, Cheng
2016-06-01
This paper proposes a fast method for measuring the partial Similarity between 3D Model and 3D point Cloud (SimMC). It is crucial to measure SimMC for many point cloud-related applications such as 3D object retrieval and inverse procedural modelling. In our proposed method, the surface area of model and the Distance from Model to point Cloud (DistMC) are exploited as measurements to calculate SimMC. Here, DistMC is defined as the weighted distance of the distances between points sampled from model and point cloud. Similarly, Distance from point Cloud to Model (DistCM) is defined as the average distance of the distances between points in point cloud and model. In order to reduce huge computational burdens brought by calculation of DistCM in some traditional methods, we define SimMC as the ratio of weighted surface area of model to DistMC. Compared to those traditional SimMC measuring methods that are only able to measure global similarity, our method is capable of measuring partial similarity by employing distance-weighted strategy. Moreover, our method is able to be faster than other partial similarity assessment methods. We demonstrate the superiority of our method both on synthetic data and laser scanning data.
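A minimal version of the SimMC score is given below, using a k-d tree for the model-to-cloud nearest-neighbour distances and uniform weights in place of the paper's distance-weighted strategy, so it captures the ratio idea rather than the exact partial-similarity measure.

```python
import numpy as np
from scipy.spatial import cKDTree

def sim_mc(model_points, model_area, cloud_points, weights=None):
    """SimMC ~= (weighted surface area of the model) / DistMC.

    DistMC is taken here as the (weighted) mean nearest-neighbour distance from
    points sampled on the model surface to the point cloud.
    """
    nn_dist, _ = cKDTree(cloud_points).query(model_points)   # model-to-cloud distances
    if weights is None:
        weights = np.ones(len(model_points))
    dist_mc = np.average(nn_dist, weights=weights)
    return model_area / max(dist_mc, 1e-12)

# Toy example: a planar 'model' of unit area versus a noisy scan of the same plane
rng = np.random.default_rng(0)
model_pts = np.column_stack([rng.uniform(0, 1, size=(500, 2)), np.zeros(500)])
cloud_pts = model_pts + rng.normal(scale=0.01, size=model_pts.shape)
print(sim_mc(model_pts, model_area=1.0, cloud_points=cloud_pts))
```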
Automatic HTS force measurement instrument
Sanders, S.T.; Niemann, R.C.
1999-03-30
A device is disclosed for measuring the levitation force of a high temperature superconductor sample with respect to a reference magnet. The device includes a receptacle for holding several high temperature superconductor samples, each cooled to superconducting temperature. A rotatable carousel successively locates a selected one of the high temperature superconductor samples in registry with the reference magnet. A mechanism varies the distance between one of the high temperature superconductor samples and the reference magnet, and a sensor measures levitation force of the sample as a function of the distance between the reference magnet and the sample. A method is also disclosed. 3 figs.
Super-Eddington accreting massive black holes as long-lived cosmological standards.
Wang, Jian-Min; Du, Pu; Valls-Gabaud, David; Hu, Chen; Netzer, Hagai
2013-02-22
Super-Eddington accreting massive black holes (SEAMBHs) reach saturated luminosities above a certain accretion rate due to photon trapping and advection in slim accretion disks. We show that these SEAMBHs could provide a new tool for estimating cosmological distances if they are properly identified by hard x-ray observations, in particular by the slope of their 2-10 keV continuum. To verify this idea we obtained black hole mass estimates and x-ray data for a sample of 60 narrow line Seyfert 1 galaxies that we consider to be the most promising SEAMBH candidates. We demonstrate that the distances derived by the new method for the objects in the sample get closer to the standard luminosity distances as the hard x-ray continuum gets steeper. The results allow us to analyze the requirements for using the method in future samples of active black holes and to demonstrate that the expected uncertainty, given large enough samples, can make them into a useful, new cosmological ruler.
Multiresolution Distance Volumes for Progressive Surface Compression
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laney, D E; Bertram, M; Duchaineau, M A
2002-04-18
We present a surface compression method that stores surfaces as wavelet-compressed signed-distance volumes. Our approach enables the representation of surfaces with complex topology and arbitrary numbers of components within a single multiresolution data structure. This data structure elegantly handles topological modification at high compression rates. Our method does not require the costly and sometimes infeasible base mesh construction step required by subdivision surface approaches. We present several improvements over previous attempts at compressing signed-distance functions, including an O(n) distance transform, a zero set initialization method for triangle meshes, and a specialized thresholding algorithm. We demonstrate the potential of sampled distance volumes for surface compression and progressive reconstruction for complex high-genus surfaces.
A Segment-Based Trajectory Similarity Measure in the Urban Transportation Systems.
Mao, Yingchi; Zhong, Haishi; Xiao, Xianjian; Li, Xiaofang
2017-03-06
With the rapid spread of handheld smart devices with built-in GPS, trajectory data from GPS sensors have grown explosively. Trajectory data have spatio-temporal characteristics and rich information. Trajectory data processing techniques can mine the patterns of human activities and the moving patterns of vehicles in intelligent transportation systems. A trajectory similarity measure is one of the most important issues in trajectory data mining (clustering, classification, frequent pattern mining, etc.). Unfortunately, the main similarity measure algorithms for trajectory data have been found to be inaccurate, highly sensitive to sampling methods, and of low robustness to noisy data. To solve these problems, three distances and their corresponding computation methods are proposed in this paper. The point-segment distance decreases the sensitivity to the point sampling method. The prediction distance optimizes the temporal distance using the features of trajectory data. The segment-segment distance introduces the trajectory shape factor into the similarity measurement to improve the accuracy. The three kinds of distance are integrated with the traditional dynamic time warping (DTW) algorithm to propose a new segment-based dynamic time warping algorithm (SDTW). The experimental results show that the SDTW algorithm achieves about 57%, 86%, and 31% better accuracy than the longest common subsequence (LCSS), edit distance on real sequence (EDR), and DTW algorithms, respectively, and that its sensitivity to noisy data is lower than that of those algorithms.
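The point-segment distance, the first of the three proposed distances, is worth spelling out because it is what removes the dependence on how densely each trajectory was sampled; the coordinates below are arbitrary.

```python
import numpy as np

def point_segment_distance(p, a, b):
    """Distance from point p to the line segment with endpoints a and b."""
    p, a, b = (np.asarray(v, dtype=float) for v in (p, a, b))
    ab = b - a
    denom = float(ab @ ab)
    t = 0.0 if denom == 0.0 else float(np.clip((p - a) @ ab / denom, 0.0, 1.0))
    return float(np.linalg.norm(p - (a + t * ab)))           # distance to the clamped projection

# A GPS fix lying between two sparsely sampled points of the other trajectory:
# the segment distance is 0.2, whereas the nearest sampled point is ~1.02 away.
print(point_segment_distance([1.0, 0.2], [0.0, 0.0], [2.0, 0.0]))
```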
Pilliod, David S.; Goldberg, Caren S.; Arkle, Robert S.; Waits, Lisette P.
2013-01-01
Environmental DNA (eDNA) methods for detecting aquatic species are advancing rapidly, but with little evaluation of field protocols or precision of resulting estimates. We compared sampling results from traditional field methods with eDNA methods for two amphibians in 13 streams in central Idaho, USA. We also evaluated three water collection protocols and the influence of sampling location, time of day, and distance from animals on eDNA concentration in the water. We found no difference in detection or amount of eDNA among water collection protocols. eDNA methods had slightly higher detection rates than traditional field methods, particularly when species occurred at low densities. eDNA concentration was positively related to field-measured density, biomass, and proportion of transects occupied. Precision of eDNA-based abundance estimates increased with the amount of eDNA in the water and the number of replicate subsamples collected. eDNA concentration did not vary significantly with sample location in the stream, time of day, or distance downstream from animals. Our results further advance the implementation of eDNA methods for monitoring aquatic vertebrates in stream habitats.
STANDARDIZED ASSESSMENT METHOD (SAM) FOR RIVERINE MACROINVERTEBRATES
During the summer of 2001, twelve sites were sampled for macroinvertebrates, six each on the Great Miami and Kentucky Rivers. Sites were chosen in each river from those sampled in the 1999 methods comparison study to reflect a disturbance gradient. At each site, a total distanc...
Application of up-sampling and resolution scaling to Fresnel reconstruction of digital holograms.
Williams, Logan A; Nehmetallah, Georges; Aylo, Rola; Banerjee, Partha P
2015-02-20
Fresnel transform implementation methods using numerical preprocessing techniques are investigated in this paper. First, it is shown that up-sampling dramatically reduces the minimum reconstruction distance requirements and allows maximal signal recovery by eliminating aliasing artifacts which typically occur at distances much less than the Rayleigh range of the object. Second, zero-padding is employed to arbitrarily scale numerical resolution for the purpose of resolution matching multiple holograms, where each hologram is recorded using dissimilar geometric or illumination parameters. Such preprocessing yields numerical resolution scaling at any distance. Both techniques are extensively illustrated using experimental results.
Local Intrinsic Dimension Estimation by Generalized Linear Modeling.
Hino, Hideitsu; Fujiki, Jun; Akaho, Shotaro; Murata, Noboru
2017-07-01
We propose a method for intrinsic dimension estimation. By fitting the power of distance from an inspection point and the number of samples included inside a ball with a radius equal to the distance, to a regression model, we estimate the goodness of fit. Then, by using the maximum likelihood method, we estimate the local intrinsic dimension around the inspection point. The proposed method is shown to be comparable to conventional methods in global intrinsic dimension estimation experiments. Furthermore, we experimentally show that the proposed method outperforms a conventional local dimension estimation method.
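As a rough illustration of the scaling idea behind this estimator, the sketch below fits the slope of log neighbour count versus log radius at one inspection point; a plain least-squares fit stands in for the generalized-linear-model and maximum-likelihood machinery of the paper, so the numbers are indicative only.

```python
import numpy as np

def local_intrinsic_dimension(data, inspection_idx, k_max=50):
    """Rough local dimension estimate at one inspection point.

    Uses the scaling N(r) ~ r^d: the number of samples inside a ball of
    radius r grows like r^d, so the slope of log k versus log r_k over the
    k_max nearest neighbours estimates d (least-squares stand-in for the
    GLM / maximum-likelihood fit described in the abstract).
    """
    x0 = data[inspection_idx]
    dists = np.linalg.norm(data - x0, axis=1)
    dists = np.sort(dists[dists > 0])[:k_max]
    ks = np.arange(1, len(dists) + 1)
    slope, _ = np.polyfit(np.log(dists), np.log(ks), 1)
    return slope

# Example: points on a 2-D plane embedded in 5-D space; estimate should be near 2
rng = np.random.default_rng(0)
plane = rng.normal(size=(2000, 2)) @ rng.normal(size=(2, 5))
print(local_intrinsic_dimension(plane, inspection_idx=0))
```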
Pettengill, James B; Pightling, Arthur W; Baugher, Joseph D; Rand, Hugh; Strain, Errol
2016-01-01
The adoption of whole-genome sequencing within the public health realm for molecular characterization of bacterial pathogens has been followed by an increased emphasis on real-time detection of emerging outbreaks (e.g., food-borne Salmonellosis). In turn, large databases of whole-genome sequence data are being populated. These databases currently contain tens of thousands of samples and are expected to grow to hundreds of thousands within a few years. For these databases to be of optimal use one must be able to quickly interrogate them to accurately determine the genetic distances among a set of samples. Being able to do so is challenging due to both biological (evolutionary diverse samples) and computational (petabytes of sequence data) issues. We evaluated seven measures of genetic distance, which were estimated from either k-mer profiles (Jaccard, Euclidean, Manhattan, Mash Jaccard, and Mash distances) or nucleotide sites (NUCmer and an extended multi-locus sequence typing (MLST) scheme). When analyzing empirical data (whole-genome sequence data from 18,997 Salmonella isolates) there are features (e.g., genomic, assembly, and contamination) that cause distances inferred from k-mer profiles, which treat absent data as informative, to fail to accurately capture the distance between samples when compared to distances inferred from differences in nucleotide sites. Thus, site-based distances, like NUCmer and extended MLST, are superior in performance, but accessing the computing resources necessary to perform them may be challenging when analyzing large databases.
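A minimal sketch of one of the k-mer profile distances evaluated above (the Jaccard distance on presence/absence profiles) is given below; the toy "genomes" and the choice of k are illustrative assumptions, not the study's pipeline.

```python
def kmer_set(seq, k=8):
    """Set of k-mers present in a sequence (presence/absence profile)."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def jaccard_distance(seq_a, seq_b, k=8):
    """Jaccard distance between two genomes' k-mer profiles.

    Treats absence as informative, which is exactly the behaviour the study
    found problematic for incomplete or contaminated assemblies.
    """
    a, b = kmer_set(seq_a, k), kmer_set(seq_b, k)
    return 1.0 - len(a & b) / len(a | b)

# Toy example with two short synthetic sequences
g1 = "ACGTACGTTTGACCAGGTTAACGGT" * 4
g2 = g1[:50] + "TTTTT" + g1[55:]
print(jaccard_distance(g1, g2, k=8))
```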
Pettengill, James B.; Pightling, Arthur W.; Baugher, Joseph D.; ...
2016-11-10
The adoption of whole-genome sequencing within the public health realm for molecular characterization of bacterial pathogens has been followed by an increased emphasis on real-time detection of emerging outbreaks (e.g., food-borne Salmonellosis). In turn, large databases of whole-genome sequence data are being populated. These databases currently contain tens of thousands of samples and are expected to grow to hundreds of thousands within a few years. For these databases to be of optimal use one must be able to quickly interrogate them to accurately determine the genetic distances among a set of samples. Being able to do so is challenging due to both biological (evolutionary diverse samples) and computational (petabytes of sequence data) issues. We evaluated seven measures of genetic distance, which were estimated from either k-mer profiles (Jaccard, Euclidean, Manhattan, Mash Jaccard, and Mash distances) or nucleotide sites (NUCmer and an extended multi-locus sequence typing (MLST) scheme). Finally, when analyzing empirical data (whole-genome sequence data from 18,997 Salmonella isolates) there are features (e.g., genomic, assembly, and contamination) that cause distances inferred from k-mer profiles, which treat absent data as informative, to fail to accurately capture the distance between samples when compared to distances inferred from differences in nucleotide sites. Thus, site-based distances, like NUCmer and extended MLST, are superior in performance, but accessing the computing resources necessary to perform them may be challenging when analyzing large databases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pettengill, James B.; Pightling, Arthur W.; Baugher, Joseph D.
The adoption of whole-genome sequencing within the public health realm for molecular characterization of bacterial pathogens has been followed by an increased emphasis on real-time detection of emerging outbreaks (e.g., food-borne Salmonellosis). In turn, large databases of whole-genome sequence data are being populated. These databases currently contain tens of thousands of samples and are expected to grow to hundreds of thousands within a few years. For these databases to be of optimal use one must be able to quickly interrogate them to accurately determine the genetic distances among a set of samples. Being able to do so is challenging due to both biological (evolutionary diverse samples) and computational (petabytes of sequence data) issues. We evaluated seven measures of genetic distance, which were estimated from either k-mer profiles (Jaccard, Euclidean, Manhattan, Mash Jaccard, and Mash distances) or nucleotide sites (NUCmer and an extended multi-locus sequence typing (MLST) scheme). Finally, when analyzing empirical data (whole-genome sequence data from 18,997 Salmonella isolates) there are features (e.g., genomic, assembly, and contamination) that cause distances inferred from k-mer profiles, which treat absent data as informative, to fail to accurately capture the distance between samples when compared to distances inferred from differences in nucleotide sites. Thus, site-based distances, like NUCmer and extended MLST, are superior in performance, but accessing the computing resources necessary to perform them may be challenging when analyzing large databases.
Minimum distance classification in remote sensing
NASA Technical Reports Server (NTRS)
Wacker, A. G.; Landgrebe, D. A.
1972-01-01
The utilization of minimum distance classification methods in remote sensing problems, such as crop species identification, is considered. Literature concerning both minimum distance classification problems and distance measures is reviewed. Experimental results are presented for several examples. The objective of these examples is to: (a) compare the sample classification accuracy of a minimum distance classifier with the vector classification accuracy of a maximum likelihood classifier, and (b) compare the accuracy of a parametric minimum distance classifier with that of a nonparametric one. Results show that the minimum distance classifier performance is 5% to 10% better than that of the maximum likelihood classifier. The nonparametric classifier is only slightly better than the parametric version.
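The sketch below shows the basic parametric minimum distance classifier discussed above: each sample is assigned to the class whose mean feature vector is nearest. The toy data stand in for multispectral pixels and are purely illustrative.

```python
import numpy as np

def fit_class_means(X, y):
    """Class mean vectors for a (parametric) minimum distance classifier."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict_min_distance(X, means):
    """Assign each sample to the class whose mean is nearest (Euclidean)."""
    classes = list(means)
    centers = np.stack([means[c] for c in classes])
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return np.array(classes)[dists.argmin(axis=1)]

# Toy two-class example standing in for two crop species in a 4-band image
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 4)), rng.normal(3, 1, (100, 4))])
y = np.array([0] * 100 + [1] * 100)
means = fit_class_means(X, y)
print((predict_min_distance(X, means) == y).mean())
```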
Establishing an efficient way to utilize the drought resistance germplasm population in wheat.
Wang, Jiancheng; Guan, Yajing; Wang, Yang; Zhu, Liwei; Wang, Qitian; Hu, Qijuan; Hu, Jin
2013-01-01
Drought resistance breeding provides a promising way to improve the yield and quality of wheat in arid and semiarid regions. Constructing a core collection is an efficient way to evaluate and utilize drought-resistant germplasm resources in wheat. In the present research, 1,683 wheat varieties were divided into five germplasm groups (high resistant, HR; resistant, R; moderate resistant, MR; susceptible, S; and high susceptible, HS). The least distance stepwise sampling (LDSS) method was adopted to select core accessions. Six commonly used genetic distances (Euclidean distance, Euclid; standardized Euclidean distance, Seuclid; Mahalanobis distance, Mahal; Manhattan distance, Manhat; cosine distance, Cosine; and correlation distance, Correlation) were used to assess genetic distances among accessions. The unweighted pair-group average (UPGMA) method was used to perform hierarchical cluster analysis. The coincidence rate of range (CR) and the variable rate of the coefficient of variation (VR) were adopted to evaluate the representativeness of the core collection. A method for selecting the ideal construction strategy was suggested in the present research, and a wheat core collection for drought resistance breeding programs was constructed with the selected strategy. The principal component analysis showed that the genetic diversity was well preserved in that core collection.
Precise Distances for Main-belt Asteroids in Only Two Nights
NASA Astrophysics Data System (ADS)
Heinze, Aren N.; Metchev, Stanimir
2015-10-01
We present a method for calculating precise distances to asteroids using only two nights of data from a single location—far too little for an orbit—by exploiting the angular reflex motion of the asteroids due to Earth’s axial rotation. We refer to this as the rotational reflex velocity method. While the concept is simple and well-known, it has not been previously exploited for surveys of main belt asteroids (MBAs). We offer a mathematical development, estimates of the errors of the approximation, and a demonstration using a sample of 197 asteroids observed for two nights with a small, 0.9-m telescope. This demonstration used digital tracking to enhance detection sensitivity for faint asteroids, but our distance determination works with any detection method. Forty-eight asteroids in our sample had known orbits prior to our observations, and for these we demonstrate a mean fractional error of only 1.6% between the distances we calculate and those given in ephemerides from the Minor Planet Center. In contrast to our two-night results, distance determination by fitting approximate orbits requires observations spanning 7-10 nights. Once an asteroid’s distance is known, its absolute magnitude and size (given a statistically estimated albedo) may immediately be calculated. Our method will therefore greatly enhance the efficiency with which 4m and larger telescopes can probe the size distribution of small (e.g., 100 m) MBAs. This distribution remains poorly known, yet encodes information about the collisional evolution of the asteroid belt—and hence the history of the Solar System.
Absolute method of measuring magnetic susceptibility
Thorpe, A.; Senftle, F.E.
1959-01-01
An absolute method of standardization and measurement of the magnetic susceptibility of small samples is presented which can be applied to most techniques based on the Faraday method. The fact that the susceptibility is a function of the area under the curve of sample displacement versus distance of the magnet from the sample offers a simple method of measuring the susceptibility without recourse to a standard sample. Typical results on a few substances are compared with reported values, and an error of less than 2% can be achieved. © 1959 The American Institute of Physics.
Estimating population size with correlated sampling unit estimates
David C. Bowden; Gary C. White; Alan B. Franklin; Joseph L. Ganey
2003-01-01
Finite population sampling theory is useful in estimating total population size (abundance) from abundance estimates of each sampled unit (quadrat). We develop estimators that allow correlated quadrat abundance estimates, even for quadrats in different sampling strata. Correlated quadrat abundance estimates based on mark–recapture or distance sampling methods occur...
A revised moving cluster distance to the Pleiades open cluster
NASA Astrophysics Data System (ADS)
Galli, P. A. B.; Moraux, E.; Bouy, H.; Bouvier, J.; Olivares, J.; Teixeira, R.
2017-02-01
Context. The distance to the Pleiades open cluster has been extensively debated in the literature over several decades. Although different methods point to a discrepancy in the trigonometric parallaxes produced by the Hipparcos mission, the number of individual stars with known distances is still small compared to the number of cluster members to help solve this problem. Aims: We provide a new distance estimate for the Pleiades based on the moving cluster method, which will be useful to further discuss the so-called Pleiades distance controversy and compare it with the very precise parallaxes from the Gaia space mission. Methods: We apply a refurbished implementation of the convergent point search method to an updated census of Pleiades stars to calculate the convergent point position of the cluster from stellar proper motions. Then, we derive individual parallaxes for 64 cluster members using radial velocities compiled from the literature, and approximate parallaxes for another 1146 stars based on the spatial velocity of the cluster. This represents the largest sample of Pleiades stars with individual distances to date. Results: The parallaxes derived in this work are in good agreement with previous results obtained in different studies (excluding Hipparcos) for individual stars in the cluster. We report a mean parallax of 7.44 ± 0.08 mas and a corresponding distance that is consistent with the weighted mean of 135.0 ± 0.6 pc obtained from the non-Hipparcos results in the literature. Conclusions: Our result for the distance to the Pleiades open cluster is not consistent with the Hipparcos catalog, but favors the recent and more precise distance determination of 136.2 ± 1.2 pc obtained from Very Long Baseline Interferometry observations. It is also in good agreement with the mean distance of 133 ± 5 pc obtained from the first trigonometric parallaxes delivered by the Gaia satellite for the brightest cluster members in common with our sample. Full Table B.2 is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/598/A48
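The individual parallaxes above follow from the standard convergent point relation; a hedged sketch is given below, assuming the usual form parallax = 4.74 μ / (v_r tan λ), where μ is the proper motion, v_r the radial velocity and λ the angular distance to the convergent point. The numbers in the example are illustrative, not actual Pleiades members.

```python
import numpy as np

def moving_cluster_parallax(mu_mas_per_yr, v_r_km_s, lambda_deg):
    """Individual parallax from the convergent point (moving cluster) method.

    Assumed standard relation: v_t = 4.74 * mu / parallax and
    v_t = v_r * tan(lambda), so
    parallax [mas] = 4.74 * mu [mas/yr] / (v_r [km/s] * tan(lambda)).
    """
    return 4.74 * mu_mas_per_yr / (v_r_km_s * np.tan(np.radians(lambda_deg)))

# Illustrative values only
plx = moving_cluster_parallax(mu_mas_per_yr=50.0, v_r_km_s=6.0, lambda_deg=80.0)
print(plx, "mas ->", 1000.0 / plx, "pc")
```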
Irigoyen, Alejo J; Rojo, Irene; Calò, Antonio; Trobbiani, Gastón; Sánchez-Carnero, Noela; García-Charton, José A
2018-01-01
Underwater visual census (UVC) is the most common approach for estimating diversity, abundance and size of reef fishes in shallow and clear waters. Abundance estimation through UVC is particularly problematic in species occurring at low densities and/or highly aggregated because of their high variability at both spatial and temporal scales. The statistical power of experiments involving UVC techniques may be increased by augmenting the number of replicates or the area surveyed. In this work we present and test the efficiency of an UVC method based on diver towed GPS, the Tracked Roaming Transect (TRT), designed to maximize transect length (and thus the surveyed area) with respect to diving time invested in monitoring, as compared to Conventional Strip Transects (CST). Additionally, we analyze the effect of increasing transect width and length on the precision of density estimates by comparing TRT vs. CST methods using different fixed widths of 6 and 20 m (FW3 and FW10, respectively) and the Distance Sampling (DS) method, in which perpendicular distance of each fish or group of fishes to the transect line is estimated by divers up to 20 m from the transect line. The TRT was 74% more time and cost efficient than the CST (all transect widths considered together) and, for a given time, the use of TRT and/or increasing the transect width increased the precision of density estimates. In addition, since with the DS method distances of fishes to the transect line have to be estimated, and not measured directly as in terrestrial environments, errors in estimations of perpendicular distances can seriously affect DS density estimations. To assess the occurrence of distance estimation errors and their dependence on the observer's experience, a field experiment using wooden fish models was performed. We tested the precision and accuracy of density estimators based on fixed widths and the DS method. The accuracy of the estimates was measured comparing the actual total abundance with those estimated by divers using FW3, FW10, and DS estimators. Density estimates differed by 13% (range 0.1-31%) from the actual values (average = 13.09%; median = 14.16%). Based on our results we encourage the use of the Tracked Roaming Transect with Distance Sampling (TRT+DS) method for improving density estimates of species occurring at low densities and/or highly aggregated, as well as for exploratory rapid-assessment surveys in which divers could gather spatial ecological and ecosystem information on large areas during UVC.
Bukhari, Mahwish; Awan, M. Ali; Qazi, Ishtiaq A.; Baig, M. Anwar
2012-01-01
This paper illustrates systematic development of a convenient analytical method for the determination of chromium and cadmium in tannery wastewater using laser-induced breakdown spectroscopy (LIBS). A new approach was developed by which liquid was converted into solid phase sample surface using absorption paper for subsequent LIBS analysis. The optimized values of LIBS parameters were 146.7 mJ for chromium and 89.5 mJ for cadmium (laser pulse energy), 4.5 μs (delay time), 70 mm (lens to sample surface distance), and 7 mm (light collection system to sample surface distance). Optimized values of LIBS parameters demonstrated strong spectrum lines for each metal keeping the background noise at minimum level. The new method of preparing metal standards on absorption papers exhibited calibration curves with good linearity with correlation coefficients, R2 in the range of 0.992 to 0.998. The developed method was tested on real tannery wastewater samples for determination of chromium and cadmium. PMID:22567570
An active learning representative subset selection method using net analyte signal.
He, Zhonghai; Ma, Zhenhe; Luan, Jingmin; Cai, Xi
2018-05-05
To guarantee accurate predictions, representative samples are needed when building a calibration model for spectroscopic measurements. However, in general, it is not known whether a sample is representative prior to measuring its concentration, which is both time-consuming and expensive. In this paper, a method to determine whether a sample should be selected into a calibration set is presented. The selection is based on the difference of Euclidean norm of net analyte signal (NAS) vector between the candidate and existing samples. First, the concentrations and spectra of a group of samples are used to compute the projection matrix, NAS vector, and scalar values. Next, the NAS vectors of candidate samples are computed by multiplying projection matrix with spectra of samples. Scalar value of NAS is obtained by norm computation. The distance between the candidate set and the selected set is computed, and samples with the largest distance are added to selected set sequentially. Last, the concentration of the analyte is measured such that the sample can be used as a calibration sample. Using a validation test, it is shown that the presented method is more efficient than random selection. As a result, the amount of time and money spent on reference measurements is greatly reduced. Copyright © 2018 Elsevier B.V. All rights reserved.
An active learning representative subset selection method using net analyte signal
NASA Astrophysics Data System (ADS)
He, Zhonghai; Ma, Zhenhe; Luan, Jingmin; Cai, Xi
2018-05-01
To guarantee accurate predictions, representative samples are needed when building a calibration model for spectroscopic measurements. However, in general, it is not known whether a sample is representative prior to measuring its concentration, which is both time-consuming and expensive. In this paper, a method to determine whether a sample should be selected into a calibration set is presented. The selection is based on the difference of Euclidean norm of net analyte signal (NAS) vector between the candidate and existing samples. First, the concentrations and spectra of a group of samples are used to compute the projection matrix, NAS vector, and scalar values. Next, the NAS vectors of candidate samples are computed by multiplying projection matrix with spectra of samples. Scalar value of NAS is obtained by norm computation. The distance between the candidate set and the selected set is computed, and samples with the largest distance are added to selected set sequentially. Last, the concentration of the analyte is measured such that the sample can be used as a calibration sample. Using a validation test, it is shown that the presented method is more efficient than random selection. As a result, the amount of time and money spent on reference measurements is greatly reduced.
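A sketch of the selection step described above is given below. The abstract does not spell out the exact NAS construction, so a common Lorber-style rank-one deflation is assumed here; all variable names and the toy data are illustrative only.

```python
import numpy as np

def nas_norms(S_cal, y_cal, S_cand):
    """Scalar net analyte signal (NAS) for candidate spectra (assumed construction).

    The analyte's spectral direction is estimated from the calibration data,
    removed to form interferent-only spectra, and candidate spectra are then
    projected onto the orthogonal complement of that interferent space.
    """
    b = S_cal.T @ y_cal / (y_cal @ y_cal)          # analyte spectral direction
    S_interf = S_cal - np.outer(y_cal, b)          # interferent-only spectra
    U, s, _ = np.linalg.svd(S_interf.T, full_matrices=False)
    U = U[:, s > 1e-10 * s.max()]
    P_perp = np.eye(S_cal.shape[1]) - U @ U.T      # projector onto NAS space
    return np.linalg.norm(S_cand @ P_perp, axis=1)

def select_next(cand_nas, selected_nas):
    """Pick the candidate whose NAS scalar is farthest from the selected set."""
    gaps = np.abs(cand_nas[:, None] - np.asarray(selected_nas)[None, :]).min(axis=1)
    return int(gaps.argmax())

# Toy spectra: 20 calibration samples, 50 candidates, 100 wavelengths
rng = np.random.default_rng(2)
S_cal = rng.normal(size=(20, 100)); y_cal = rng.uniform(0, 1, 20)
S_cand = rng.normal(size=(50, 100))
print(select_next(nas_norms(S_cal, y_cal, S_cand), nas_norms(S_cal, y_cal, S_cal)))
```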
Forensic Comparison of Soil Samples Using Nondestructive Elemental Analysis.
Uitdehaag, Stefan; Wiarda, Wim; Donders, Timme; Kuiper, Irene
2017-07-01
Soil can play an important role in forensic cases in linking suspects or objects to a crime scene by comparing samples from the crime scene with samples derived from items. This study uses an adapted ED-XRF analysis (sieving instead of grinding, to prevent destruction of microfossils) to produce elemental composition data for 20 elements. Different data processing techniques and statistical distances were evaluated using data from 50 samples and the log-likelihood-ratio cost (Cllr). The best-performing combination, the Canberra distance applied to square-root-transformed relative data, is used to construct a discriminative model. Examples of the spatial resolution of the method at crime scenes are shown for three locations, and sampling strategy is discussed. Twelve test cases were analyzed, and the results showed that the method is applicable. The study shows how the combination of an analysis technique, a database, and a discriminative model can be used to compare multiple soil samples quickly. © 2016 American Academy of Forensic Sciences.
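The distance computation that this abstract identifies as best performing is simple to reproduce; the sketch below applies the Canberra distance to square-root-transformed relative elemental profiles. The 20-element count vectors are hypothetical.

```python
import numpy as np
from scipy.spatial.distance import canberra

def soil_distance(counts_a, counts_b):
    """Canberra distance between two ED-XRF elemental profiles.

    Follows the preprocessing reported in the abstract: convert each profile
    to relative abundances, then take square roots before computing the
    Canberra distance.
    """
    a = np.sqrt(np.asarray(counts_a, float) / np.sum(counts_a))
    b = np.sqrt(np.asarray(counts_b, float) / np.sum(counts_b))
    return canberra(a, b)

# Toy 20-element profiles (hypothetical counts)
rng = np.random.default_rng(3)
scene = rng.integers(50, 5000, size=20)
item = scene + rng.integers(-40, 40, size=20)
print(soil_distance(scene, item))
```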
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hudson, W.G.
Scapteriscus vicinus is the most important pest of turf and pasture grasses in Florida. This study develops a method of correlating sample results with true population density and provides the first quantitative information on spatial distribution and movement patterns of mole crickets. Three basic techniques for sampling mole crickets were compared: soil flushes, soil corer, and pitfall trapping. No statistical difference was found between the soil corer and soil flushing. Soil flushing was shown to be more sensitive to changes in population density than pitfall trapping. No technique was effective for sampling adults. Regression analysis provided a means of adjusting for the effects of soil moisture and showed soil temperature to be unimportant in predicting efficiency of flush sampling. Cesium-137 was used to label females for subsequent location underground. Comparison of mean distance to nearest neighbor with the distance predicted by a random distribution model showed that the observed distance in the spring was significantly greater than hypothesized (Student's t-test, p < 0.05). Fall adult nearest neighbor distance was not different than predicted by the random distribution hypothesis.
Using multiple travel paths to estimate daily travel distance in arboreal, group-living primates.
Steel, Ruth Irene
2015-01-01
Primate field studies often estimate daily travel distance (DTD) in order to estimate energy expenditure and/or test foraging hypotheses. In group-living species, the center of mass (CM) method is traditionally used to measure DTD; a point is marked at the group's perceived center of mass at a set time interval or upon each move, and the distance between consecutive points is measured and summed. However, for groups using multiple travel paths, the CM method potentially creates a central path that is shorter than the individual paths and/or traverses unused areas. These problems may compromise tests of foraging hypotheses, since distance and energy expenditure could be underestimated. To better understand the magnitude of these potential biases, I designed and tested the multiple travel paths (MTP) method, in which DTD was calculated by recording all travel paths taken by the group's members, weighting each path's distance based on its proportional use by the group, and summing the weighted distances. To compare the MTP and CM methods, DTD was calculated using both methods in three groups of Udzungwa red colobus monkeys (Procolobus gordonorum; group size 30-43) for a random sample of 30 days between May 2009 and March 2010. Compared to the CM method, the MTP method provided significantly longer estimates of DTD that were more representative of the actual distance traveled and the areas used by a group. The MTP method is more time-intensive and requires multiple observers compared to the CM method. However, it provides greater accuracy for testing ecological and foraging models.
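The MTP calculation described above reduces to a weighted sum of path lengths; a minimal sketch is shown below with hypothetical coordinates and group fractions.

```python
import numpy as np

def path_length(points):
    """Total length of a polyline given as an (n, 2) array of coordinates."""
    pts = np.asarray(points, float)
    return float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))

def mtp_daily_travel_distance(paths, group_fractions):
    """Multiple-travel-paths (MTP) estimate of daily travel distance.

    Each path's length is weighted by the fraction of the group that used it,
    and the weighted lengths are summed.
    """
    return sum(path_length(p) * w for p, w in zip(paths, group_fractions))

# Hypothetical day: two subgroups split onto different routes (metres)
path_1 = [(0, 0), (100, 20), (250, 60)]
path_2 = [(0, 0), (80, -40), (260, 30)]
print(mtp_daily_travel_distance([path_1, path_2], [0.6, 0.4]))
```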
A Hybrid Semi-Supervised Anomaly Detection Model for High-Dimensional Data.
Song, Hongchao; Jiang, Zhuqing; Men, Aidong; Yang, Bo
2017-01-01
Anomaly detection, which aims to identify observations that deviate from a nominal sample, is a challenging task for high-dimensional data. Traditional distance-based anomaly detection methods compute the neighborhood distance between each observation and suffer from the curse of dimensionality in high-dimensional space; for example, the distances between any pair of samples are similar and each sample may perform like an outlier. In this paper, we propose a hybrid semi-supervised anomaly detection model for high-dimensional data that consists of two parts: a deep autoencoder (DAE) and an ensemble k-nearest neighbor graphs (K-NNG) based anomaly detector. Benefiting from the ability of nonlinear mapping, the DAE is first trained to learn the intrinsic features of a high-dimensional dataset to represent the high-dimensional data in a more compact subspace. Several nonparametric KNN-based anomaly detectors are then built from different subsets that are randomly sampled from the whole dataset. The final prediction is made by all the anomaly detectors. The performance of the proposed method is evaluated on several real-life datasets, and the results confirm that the proposed hybrid model improves the detection accuracy and reduces the computational complexity.
A Hybrid Semi-Supervised Anomaly Detection Model for High-Dimensional Data
Jiang, Zhuqing; Men, Aidong; Yang, Bo
2017-01-01
Anomaly detection, which aims to identify observations that deviate from a nominal sample, is a challenging task for high-dimensional data. Traditional distance-based anomaly detection methods compute the neighborhood distance between each observation and suffer from the curse of dimensionality in high-dimensional space; for example, the distances between any pair of samples are similar and each sample may perform like an outlier. In this paper, we propose a hybrid semi-supervised anomaly detection model for high-dimensional data that consists of two parts: a deep autoencoder (DAE) and an ensemble k-nearest neighbor graphs (K-NNG) based anomaly detector. Benefiting from the ability of nonlinear mapping, the DAE is first trained to learn the intrinsic features of a high-dimensional dataset to represent the high-dimensional data in a more compact subspace. Several nonparametric KNN-based anomaly detectors are then built from different subsets that are randomly sampled from the whole dataset. The final prediction is made by all the anomaly detectors. The performance of the proposed method is evaluated on several real-life datasets, and the results confirm that the proposed hybrid model improves the detection accuracy and reduces the computational complexity. PMID:29270197
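A simplified sketch of the hybrid idea follows. PCA stands in for the deep autoencoder (an assumption made to avoid a deep-learning dependency); the compressed data then feeds an ensemble of k-NN distance detectors built on random subsets, and their scores are averaged.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

def knn_ensemble_anomaly_scores(X_train, X_test, n_detectors=5, subset_frac=0.5,
                                k=5, n_components=10, seed=0):
    """Hybrid anomaly scoring sketch: compression, then an ensemble of k-NN detectors."""
    rng = np.random.default_rng(seed)
    pca = PCA(n_components=n_components).fit(X_train)   # stand-in for the DAE
    Z_train, Z_test = pca.transform(X_train), pca.transform(X_test)
    scores = np.zeros(len(Z_test))
    for _ in range(n_detectors):
        idx = rng.choice(len(Z_train), int(subset_frac * len(Z_train)), replace=False)
        nn = NearestNeighbors(n_neighbors=k).fit(Z_train[idx])
        dists, _ = nn.kneighbors(Z_test)
        scores += dists.mean(axis=1)                     # larger = more anomalous
    return scores / n_detectors

# Toy data: nominal training set, test set with 5 shifted outliers at the end
rng = np.random.default_rng(9)
X_train = rng.normal(size=(500, 50))
X_test = np.vstack([rng.normal(size=(20, 50)), rng.normal(4.0, 1.0, (5, 50))])
print(knn_ensemble_anomaly_scores(X_train, X_test)[-5:])
```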
NASA Astrophysics Data System (ADS)
Wang, Xiao; Zhang, Hongfeng; Shen, Zongbao; Li, Jianwen; Qian, Qing; Liu, Huixia
2016-11-01
A novel laser shock synchronous welding and forming method is introduced, which utilizes laser-induced shock waves to accelerate the flyer plate towards the base plate to achieve the joining of dissimilar metals and forming into a specific mold shape. The samples were obtained with different laser energies and standoff distances. The surface morphology and roughness of the samples were greatly affected by the laser energy and standoff distance. Fittability was investigated to examine the forming accuracy. The results showed that the samples replicate the mold features well. Straight and wavy interfaces with un-bonded regions in the center were observed through metallographic analysis. Moreover, Energy Dispersive Spectroscopy analysis was conducted on the welding interface, and the results indicated that short-distance elemental diffusion occurred at the welding interface. The nanoindentation hardness of the welding regions was measured to evaluate the welding interface. In addition, the Smoothed Particle Hydrodynamics method was employed to simulate the welding and forming process. It was shown that different standoff distances significantly affected the size of the welding regions and the interface waveform characteristics. The numerical analysis results indicated that an opposite shear stress direction and an effective plastic strain above a certain threshold are essential to successfully obtain a welded and formed workpiece.
Kinematics of our Galaxy from the PMA and TGAS catalogues
NASA Astrophysics Data System (ADS)
Velichko, Anna B.; Akhmetov, Volodymyr S.; Fedorov, Peter N.
2018-04-01
We derive and compare kinematic parameters of the Galaxy using the PMA and Gaia TGAS data. Two methods are used in the calculations: evaluation of the Ogorodnikov-Milne model (OMM) parameters by the least-squares method (LSM) and a decomposition on a set of vector spherical harmonics (VSH). We trace the dependence on distance of the derived parameters, including the Oort constants A and B and the rotational velocity of the Galaxy V_rot at the solar distance, for the common sample of stars of mixed spectral composition from the PMA and TGAS catalogues. The distances were obtained from the TGAS parallaxes or from reduced proper motions for fainter stars. The A, B and V_rot parameters derived from the proper motions of both catalogues show identical behaviour, but the values are systematically shifted by about 0.5 mas/yr. The Oort B parameter derived from the PMA sample of red giants shows a gradual decrease with increasing distance, while the Oort A has a minimum at about 2 kpc and then gradually increases. As for the models chosen for the calculations, first, we confirm the conclusions of other authors about the existence of extra-model harmonics in the stellar velocity field. Secondly, not all parameters of the OMM are statistically significant, and the set of significant parameters depends on the stellar sample used.
Confronting the Gaia and NLTE spectroscopic parallaxes for the FGK stars
NASA Astrophysics Data System (ADS)
Sitnova, Tatyana; Mashonkina, Lyudmila; Pakhomov, Yury
2018-04-01
The understanding of the chemical evolution of the Galaxy relies on the stellar chemical composition. Accurate atmospheric parameters are a prerequisite for the determination of accurate chemical abundances. For late-type stars with known distance, the surface gravity (log g) can be calculated from the well-known relation between stellar mass, T_eff, and absolute bolometric magnitude. This method depends weakly on model atmospheres and provides reliable log g. However, accurate distances are available for only a limited number of stars. Another way to determine log g for cool stars is based on ionisation equilibrium, i.e. consistent abundances from lines of neutral and ionised species. In this study we determine atmospheric parameters moving step-by-step from well-studied nearby dwarfs to ultra-metal-poor (UMP) giants. In each sample, we select stars with the most reliable T_eff based on photometry and the distance-based log g, and compare with the spectroscopic gravity calculated taking into account deviations from local thermodynamic equilibrium (LTE). After that, we apply the spectroscopic method of log g determination to other stars of the sample with unknown distances.
Comparative analysis of 2D and 3D distance measurements to study spatial genome organization.
Finn, Elizabeth H; Pegoraro, Gianluca; Shachar, Sigal; Misteli, Tom
2017-07-01
The spatial organization of genomes is non-random, cell-type specific, and has been linked to cellular function. The investigation of spatial organization has traditionally relied extensively on fluorescence microscopy. The validity of the imaging methods used to probe spatial genome organization often depends on the accuracy and precision of distance measurements. Imaging-based measurements may use either 2-dimensional (2D) datasets or 3D datasets, which include the z-axis information in image stacks. Here we compare the suitability of 2D vs 3D distance measurements in the analysis of various features of spatial genome organization. We find in general good agreement between 2D and 3D analysis, with higher convergence of measurements as the interrogated distance increases, especially in flat cells. Overall, 3D distance measurements are more accurate than 2D distances, but are also more susceptible to noise. In particular, z-stacks are prone to error due to imaging properties such as limited resolution along the z-axis and optical aberrations, and we also find significant deviations from unimodal distance distributions caused by low sampling frequency in z. These deviations are ameliorated by significantly higher sampling frequency in the z-direction. We conclude that 2D distances are preferred for comparative analyses between cells, but 3D distances are preferred when comparing to theoretical models in large samples of cells. In general and for practical purposes, 2D distance measurements are preferable for many applications of analysis of spatial genome organization. Published by Elsevier Inc.
The distances of the Galactic Novae
NASA Astrophysics Data System (ADS)
Ozdonmez, Aykut; Guver, Tolga; Cabrera-Lavers, Antonio; Ak, Tansel
2016-07-01
Using the location of the RC stars on the CMDs obtained from the UKIDSS, VISTA and 2MASS photometry, we have derived reddening-distance relations towards each Galactic nova for which at least one independent reddening measurement exists. We were able to determine the distances of 72 Galactic novae and set lower limits on the distances of 45 systems. The reddening curves of the systems are presented. These curves can also be used to estimate the reddening or the distance of any source whose location is close to the position of a nova in our sample. The distance measurement method in our study can be easily applied to any source, especially to those concentrated along the Galactic plane.
Spatial generalised linear mixed models based on distances.
Melo, Oscar O; Mateu, Jorge; Melo, Carlos E
2016-10-01
Risk models derived from environmental data have been widely shown to be effective in delineating geographical areas of risk because they are intuitively easy to understand. We present a new method based on distances, which allows the modelling of continuous and non-continuous random variables through distance-based spatial generalised linear mixed models. The parameters are estimated using Markov chain Monte Carlo maximum likelihood, which is a feasible and useful technique. The proposed method depends on a detrending step built from continuous or categorical explanatory variables, or a mixture of them, by using an appropriate Euclidean distance. The method is illustrated through the analysis of the variation in the prevalence of Loa loa among a sample of village residents in Cameroon, where the explanatory variables included elevation, together with the maximum normalised-difference vegetation index and the standard deviation of the normalised-difference vegetation index calculated from repeated satellite scans over time. © The Author(s) 2013.
Ostrand, William D.; Drew, G.S.; Suryan, R.M.; McDonald, L.L.
1998-01-01
We compared strip transect and radio-tracking methods of determining foraging range of Black-legged Kittiwakes (Rissa tridactyla). The mean distance birds were observed from their colony determined by radio-tracking was significantly greater than the mean value calculated from strip transects. We determined that this difference was due to two sources of bias: (1) as distance from the colony increased, the area of available habitat also increased resulting in decreasing bird densities (bird spreading). Consequently, the probability of detecting birds during transect surveys also would decrease as distance from the colony increased, and (2) the maximum distance birds were observed from the colony during radio-tracking exceeded the extent of the strip transect survey. We compared the observed number of birds seen on the strip transect survey to the predictions of a model of the decreasing probability of detection due to bird spreading. Strip transect data were significantly different from modeled data; however, the field data were consistently equal to or below the model predictions, indicating a general conformity to the concept of declining detection at increasing distance. We conclude that radio-tracking data gave a more representative indication of foraging distances than did strip transect sampling. Previous studies of seabirds that have used strip transect sampling without accounting for bird spreading or the effects of study-area limitations probably underestimated foraging range.
NASA Astrophysics Data System (ADS)
Patej, A.; Eisenstein, D. J.
2018-07-01
We develop a formalism for measuring the cosmological distance scale from baryon acoustic oscillations (BAO) using the cross-correlation of a sparse redshift survey with a denser photometric sample. This reduces the shot noise that would otherwise affect the autocorrelation of the sparse spectroscopic map. As a proof of principle, we make the first on-sky application of this method to a sparse sample defined as the z > 0.6 tail of the Sloan Digital Sky Survey's (SDSS) BOSS/CMASS sample of galaxies and a dense photometric sample from SDSS DR9. We find a 2.8σ preference for the BAO peak in the cross-correlation at an effective z = 0.64, from which we measure the angular diameter distance DM(z = 0.64) = (2418 ± 73 Mpc)(rs/rs, fid). Accordingly, we expect that using this method to combine sparse spectroscopy with the deep, high-quality imaging that is just now becoming available will enable higher precision BAO measurements than possible with the spectroscopy alone.
NASA Astrophysics Data System (ADS)
Patej, Anna; Eisenstein, Daniel J.
2018-04-01
We develop a formalism for measuring the cosmological distance scale from baryon acoustic oscillations (BAO) using the cross-correlation of a sparse redshift survey with a denser photometric sample. This reduces the shot noise that would otherwise affect the auto-correlation of the sparse spectroscopic map. As a proof of principle, we make the first on-sky application of this method to a sparse sample defined as the z > 0.6 tail of the Sloan Digital Sky Survey's (SDSS) BOSS/CMASS sample of galaxies and a dense photometric sample from SDSS DR9. We find a 2.8σ preference for the BAO peak in the cross-correlation at an effective z = 0.64, from which we measure the angular diameter distance DM(z = 0.64) = (2418 ± 73 Mpc)(rs/rs, fid). Accordingly, we expect that using this method to combine sparse spectroscopy with the deep, high quality imaging that is just now becoming available will enable higher precision BAO measurements than possible with the spectroscopy alone.
NASA Astrophysics Data System (ADS)
Ihsani, Alvin; Farncombe, Troy
2016-02-01
The modelling of the projection operator in tomographic imaging is of critical importance, especially when working with algebraic methods of image reconstruction. This paper proposes a distance-driven projection method which is targeted to single-pinhole single-photon emission computed tomography (SPECT) imaging, since it accounts for the finite size of the pinhole and the possible tilting of the detector surface, in addition to other collimator-specific factors such as geometric sensitivity. The accuracy and execution time of the proposed method are evaluated by comparing to a ray-driven approach where the pinhole is sub-sampled with various sampling schemes. A point-source phantom whose projections were generated using OpenGATE was first used to compare the resolution of reconstructed images with each method using the full width at half maximum (FWHM). Furthermore, a high-activity Mini Deluxe Phantom (Data Spectrum Corp., Durham, NC, USA) SPECT resolution phantom was scanned using a Gamma Medica X-SPECT system, and the signal-to-noise ratio (SNR) and structural similarity of reconstructed images were compared at various projection counts. Based on the reconstructed point-source phantom, the proposed distance-driven approach results in a lower FWHM than the ray-driven approach even when using a smaller detector resolution. Furthermore, based on the Mini Deluxe Phantom, it is shown that the distance-driven approach has consistently higher SNR and structural similarity compared to the ray-driven approach as the counts in the measured projections deteriorate.
Using mark-recapture distance sampling methods on line transect surveys
Burt, Louise M.; Borchers, David L.; Jenkins, Kurt J.; Marques, Tigao A
2014-01-01
Synthesis and applications. Mark–recapture DS is a widely used method for estimating animal density and abundance when detection of animals at distance zero is not certain. Two observer configurations and three statistical models are described, and it is important to choose the most appropriate model for the observer configuration and target species in question. By way of making the methods more accessible to practicing ecologists, we describe the key ideas underlying MRDS methods, the sometimes subtle differences between them, and we illustrate these by applying different kinds of MRDS method to surveys of two different target species using different survey configurations.
ERIC Educational Resources Information Center
Mokoena, Sello
2017-01-01
This small-scale study focused on the experiences of student teachers towards teaching practice in an open and distance learning (ODL) institution in South Africa. The sample consisted of 65 fourth year students enrolled for Bachelor of Education, specializing in secondary school teaching. The mixed-method research design consisting of…
Wang, Bing; Fang, Aiqin; Heim, John; Bogdanov, Bogdan; Pugh, Scott; Libardoni, Mark; Zhang, Xiang
2010-01-01
A novel peak alignment algorithm using a distance and spectrum correlation optimization (DISCO) method has been developed for two-dimensional gas chromatography time-of-flight mass spectrometry (GC×GC/TOF-MS) based metabolomics. This algorithm uses the output of the instrument control software, ChromaTOF, as its input data. It detects and merges multiple peak entries of the same metabolite into one peak entry in each input peak list. After a z-score transformation of metabolite retention times, DISCO selects landmark peaks from all samples based on both two-dimensional retention times and mass spectrum similarity of fragment ions measured by Pearson’s correlation coefficient. A local linear fitting method is employed in the original two-dimensional retention time space to correct retention time shifts. A progressive retention time map searching method is used to align metabolite peaks in all samples together based on optimization of the Euclidean distance and mass spectrum similarity. The effectiveness of the DISCO algorithm is demonstrated using data sets acquired under different experiment conditions and a spiked-in experiment. PMID:20476746
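A rough sketch of the landmark-matching step described above follows: retention times are z-scored per dimension, and a candidate pair is kept only if its Euclidean distance in the (rt1, rt2) plane is small and the correlation of the fragment-ion spectra is high. The thresholds and the toy peak lists are assumptions, not the DISCO defaults.

```python
import numpy as np

def zscore_cols(rt):
    """Z-score each retention-time dimension independently."""
    rt = np.asarray(rt, float)
    return (rt - rt.mean(axis=0)) / rt.std(axis=0)

def match_landmarks(rt_a, spec_a, rt_b, spec_b, max_rt_dist=1.0, min_corr=0.9):
    """Pair peaks by combined retention-time distance and spectral correlation."""
    za, zb = zscore_cols(rt_a), zscore_cols(rt_b)
    pairs = []
    for i, (p, s) in enumerate(zip(za, spec_a)):
        d = np.linalg.norm(zb - p, axis=1)
        j = int(d.argmin())
        r = np.corrcoef(s, spec_b[j])[0, 1]
        if d[j] < max_rt_dist and r > min_corr:
            pairs.append((i, j))
    return pairs

# Toy peak lists: sample B is sample A with a small retention-time shift
rng = np.random.default_rng(4)
rt_a = rng.uniform([5, 0.5], [40, 4.0], size=(30, 2))
spec_a = rng.random((30, 200))
rt_b = rt_a + rng.normal(0, 0.05, rt_a.shape)
spec_b = spec_a + rng.normal(0, 0.01, spec_a.shape)
print(len(match_landmarks(rt_a, spec_a, rt_b, spec_b)))
```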
The cluster-cluster correlation function. [of galaxies
NASA Technical Reports Server (NTRS)
Postman, M.; Geller, M. J.; Huchra, J. P.
1986-01-01
The clustering properties of the Abell and Zwicky cluster catalogs are studied using the two-point angular and spatial correlation functions. The catalogs are divided into eight subsamples to determine the dependence of the correlation function on distance, richness, and the method of cluster identification. It is found that the Corona Borealis supercluster contributes significant power to the spatial correlation function for the Abell cluster sample with distance class four or less. The distance-limited catalog of 152 Abell clusters, which is not greatly affected by a single system, has a spatial correlation function consistent with the power law ξ(r) = 300 r^(-1.8). In both the distance class four or less and distance-limited samples, the signal in the spatial correlation function is a power law detectable out to 60/h Mpc. The amplitude of ξ(r) for clusters of richness class two is about three times that for richness class one clusters. The two-point spatial correlation function is sensitive to the use of estimated redshifts.
NASA Astrophysics Data System (ADS)
Kang, Shouqiang; Ma, Danyang; Wang, Yujing; Lan, Chaofeng; Chen, Qingguo; Mikulovich, V. I.
2017-03-01
To effectively assess different fault locations and different degrees of performance degradation of a rolling bearing with a unified assessment index, a novel state assessment method based on the relative compensation distance of multiple-domain features and locally linear embedding is proposed. First, for a single-sample signal, time-domain and frequency-domain indexes can be calculated for the original vibration signal and each sensitive intrinsic mode function obtained by improved ensemble empirical mode decomposition, and the singular values of the sensitive intrinsic mode function matrix can be extracted by singular value decomposition to construct a high-dimensional hybrid-domain feature vector. Second, a feature matrix can be constructed by arranging each feature vector of multiple samples, the dimensions of each row vector of the feature matrix can be reduced by the locally linear embedding algorithm, and the compensation distance of each fault state of the rolling bearing can be calculated using the support vector machine. Finally, the relative distance between different fault locations and different degrees of performance degradation and the normal-state optimal classification surface can be compensated, and on the basis of the proposed relative compensation distance, the assessment model can be constructed and an assessment curve drawn. Experimental results show that the proposed method can effectively assess different fault locations and different degrees of performance degradation of the rolling bearing under certain conditions.
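The sketch below illustrates the final assessment step in a strongly simplified form: locally linear embedding compresses the multi-domain features, a linear SVM separates the normal state from faulty states, and the signed distance to the separating surface is used as the index. Parameter values, the PCA-free feature construction, and the toy data are assumptions, not the paper's full pipeline.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.svm import SVC

def assessment_curve(features, labels, n_components=3, n_neighbors=12):
    """Degradation-assessment index sketch (parameter values are assumptions).

    Embeds high-dimensional feature vectors with LLE, fits a linear SVM
    between the normal state (label 0) and faulty states (label 1), and
    returns each sample's signed distance to the separating surface.
    """
    emb = LocallyLinearEmbedding(n_neighbors=n_neighbors,
                                 n_components=n_components).fit_transform(features)
    svm = SVC(kernel="linear").fit(emb, labels)
    return svm.decision_function(emb)

# Toy data: 60 "normal" and 60 progressively degraded samples in 40-D
rng = np.random.default_rng(5)
normal = rng.normal(0.0, 1.0, (60, 40))
degraded = rng.normal(0.0, 1.0, (60, 40)) + np.linspace(0.5, 4.0, 60)[:, None]
X = np.vstack([normal, degraded])
y = np.array([0] * 60 + [1] * 60)
print(assessment_curve(X, y)[:5])
```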
An open-population hierarchical distance sampling model
Sollmann, Rachel; Beth Gardner,; Richard B Chandler,; Royle, J. Andrew; T Scott Sillett,
2015-01-01
Modeling population dynamics while accounting for imperfect detection is essential to monitoring programs. Distance sampling allows estimating population size while accounting for imperfect detection, but existing methods do not allow for direct estimation of demographic parameters. We develop a model that uses temporal correlation in abundance arising from underlying population dynamics to estimate demographic parameters from repeated distance sampling surveys. Using a simulation study motivated by designing a monitoring program for island scrub-jays (Aphelocoma insularis), we investigated the power of this model to detect population trends. We generated temporally autocorrelated abundance and distance sampling data over six surveys, using population rates of change of 0.95 and 0.90. We fit the data generating Markovian model and a mis-specified model with a log-linear time effect on abundance, and derived post hoc trend estimates from a model estimating abundance for each survey separately. We performed these analyses for varying number of survey points. Power to detect population changes was consistently greater under the Markov model than under the alternatives, particularly for reduced numbers of survey points. The model can readily be extended to more complex demographic processes than considered in our simulations. This novel framework can be widely adopted for wildlife population monitoring.
An open-population hierarchical distance sampling model.
Sollmann, Rahel; Gardner, Beth; Chandler, Richard B; Royle, J Andrew; Sillett, T Scott
2015-02-01
Modeling population dynamics while accounting for imperfect detection is essential to monitoring programs. Distance sampling allows estimating population size while accounting for imperfect detection, but existing methods do not allow for estimation of demographic parameters. We develop a model that uses temporal correlation in abundance arising from underlying population dynamics to estimate demographic parameters from repeated distance sampling surveys. Using a simulation study motivated by designing a monitoring program for Island Scrub-Jays (Aphelocoma insularis), we investigated the power of this model to detect population trends. We generated temporally autocorrelated abundance and distance sampling data over six surveys, using population rates of change of 0.95 and 0.90. We fit the data generating Markovian model and a mis-specified model with a log-linear time effect on abundance, and derived post hoc trend estimates from a model estimating abundance for each survey separately. We performed these analyses for varying numbers of survey points. Power to detect population changes was consistently greater under the Markov model than under the alternatives, particularly for reduced numbers of survey points. The model can readily be extended to more complex demographic processes than considered in our simulations. This novel framework can be widely adopted for wildlife population monitoring.
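For readers unfamiliar with the distance sampling ingredient that the two abstracts above build on, the sketch below shows the conventional single-survey line-transect estimator with a half-normal detection function; the hierarchical, open-population machinery of the paper is not reproduced, and the survey numbers are hypothetical.

```python
import numpy as np

def halfnormal_density_estimate(perp_distances_m, total_line_length_m):
    """Line-transect density with an untruncated half-normal detection function.

    The maximum-likelihood scale is sigma^2 = mean(x^2); the effective strip
    half-width is sigma * sqrt(pi / 2), and density is n / (2 * L * ESW).
    """
    x = np.asarray(perp_distances_m, float)
    sigma = np.sqrt(np.mean(x ** 2))
    esw = sigma * np.sqrt(np.pi / 2.0)
    return len(x) / (2.0 * total_line_length_m * esw)   # animals per m^2

# Hypothetical survey: 120 detections along 10 km of transect
rng = np.random.default_rng(8)
dists = np.abs(rng.normal(0.0, 25.0, 120))              # metres from the line
print(halfnormal_density_estimate(dists, total_line_length_m=10_000))
```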
Study on Measuring the Viscosity of Lubricating Oil by Viscometer Based on Hele - Shaw Principle
NASA Astrophysics Data System (ADS)
Li, Longfei
2017-12-01
To explore how to accurately measure the viscosity of oil samples with a viscometer based on the Hele-Shaw principle, three different measurement methods were designed in the laboratory, and the statistical characteristics of the measured values were compared to identify the best method. The results show that when the oil sample to be measured is placed in the magnetic field formed by the magnet and drawn from the same distance from the magnet, the viscosity of the sample can be measured accurately.
Lead (II) removal from natural soils by enhanced electrokinetic remediation.
Altin, Ahmet; Degirmenci, Mustafa
2005-01-20
Electrokinetic remediation is a very effective method to remove metal from fine-grained soils having low adsorption and buffering capacity. However, remediation of soil having high alkali and adsorption capacity via the electrokinetic method is a very difficult process. Therefore, enhancement techniques are required for use in these soil types. In this study, the effect of the presence of minerals having high alkali and cation exchange capacity in natural soil polluted with lead (II) was investigated by means of the efficiency of electrokinetic remediation method. Natural soil samples containing clinoptilolite, gypsum and calcite minerals were used in experimental studies. Moreover, a sample containing kaolinite minerals was studied to compare with the results obtained from other samples. Best results for soils bearing alkali and high sorption capacity minerals were obtained upon addition of 3 mol AcH and application of 20 V constant potential after a remediation period of 220 h. In these test conditions, lead (II) removal efficiencies for these samples varied between 60% and 70% up to 0.55 normalized distance. Under the same conditions, removal efficiencies in kaolinite sample varied between 50% and 95% up to 0.9 normalized distance.
Matsen IV, Frederick A.; Evans, Steven N.
2013-01-01
Principal components analysis (PCA) and hierarchical clustering are two of the most heavily used techniques for analyzing the differences between nucleic acid sequence samples taken from a given environment. They have led to many insights regarding the structure of microbial communities. We have developed two new complementary methods that leverage how this microbial community data sits on a phylogenetic tree. Edge principal components analysis enables the detection of important differences between samples that contain closely related taxa. Each principal component axis is a collection of signed weights on the edges of the phylogenetic tree, and these weights are easily visualized by a suitable thickening and coloring of the edges. Squash clustering outputs a (rooted) clustering tree in which each internal node corresponds to an appropriate “average” of the original samples at the leaves below the node. Moreover, the length of an edge is a suitably defined distance between the averaged samples associated with the two incident nodes, rather than the less interpretable average of distances produced by UPGMA, the most widely used hierarchical clustering method in this context. We present these methods and illustrate their use with data from the human microbiome. PMID:23505415
Point process statistics in atom probe tomography.
Philippe, T; Duguay, S; Grancher, G; Blavette, D
2013-09-01
We present a review of spatial point processes as statistical models that we have designed for the analysis and treatment of atom probe tomography (APT) data. As a major advantage, these methods do not require sampling. The mean distance to the nearest neighbour is an attractive approach to exhibit a non-random atomic distribution. A χ² test based on distance distributions to the nearest neighbour has been developed to detect deviation from randomness. Best-fit methods based on the first nearest neighbour distance (1NN method) and the pair correlation function are presented and compared to assess the chemical composition of tiny clusters. Delaunay tessellation for cluster selection is also illustrated. These statistical tools have been applied to APT experiments on microelectronics materials. Copyright © 2012 Elsevier B.V. All rights reserved.
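The basic nearest-neighbour comparison mentioned above is easy to sketch: the observed mean first-nearest-neighbour distance is compared with the expectation for a homogeneous Poisson process in 3-D. Edge effects and the paper's χ² construction are ignored here, and the toy point cloud is hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import gamma

def mean_nn_distance(points):
    """Observed mean first-nearest-neighbour distance of a 3-D point cloud."""
    tree = cKDTree(points)
    d, _ = tree.query(points, k=2)      # k=1 is the point itself
    return d[:, 1].mean()

def expected_nn_distance_random(density):
    """Expected 1-NN distance for a homogeneous Poisson process in 3-D."""
    return (3.0 / (4.0 * np.pi * density)) ** (1.0 / 3.0) * gamma(4.0 / 3.0)

# Toy APT-like volume: uniformly distributed solute atoms in a 50x50x50 nm box
rng = np.random.default_rng(6)
box = 50.0
pts = rng.uniform(0, box, size=(5000, 3))
print(mean_nn_distance(pts), expected_nn_distance_random(5000 / box**3))
```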
Differentially Private Histogram Publication For Dynamic Datasets: An Adaptive Sampling Approach
Li, Haoran; Jiang, Xiaoqian; Xiong, Li; Liu, Jinfei
2016-01-01
Differential privacy has recently become a de facto standard for private statistical data release. Many algorithms have been proposed to generate differentially private histograms or synthetic data. However, most of them focus on “one-time” release of a static dataset and do not adequately address the increasing need of releasing series of dynamic datasets in real time. A straightforward application of existing histogram methods on each snapshot of such dynamic datasets will incur high accumulated error due to the composibility of differential privacy and correlations or overlapping users between the snapshots. In this paper, we address the problem of releasing series of dynamic datasets in real time with differential privacy, using a novel adaptive distance-based sampling approach. Our first method, DSFT, uses a fixed distance threshold and releases a differentially private histogram only when the current snapshot is sufficiently different from the previous one, i.e., with a distance greater than a predefined threshold. Our second method, DSAT, further improves DSFT and uses a dynamic threshold adaptively adjusted by a feedback control mechanism to capture the data dynamics. Extensive experiments on real and synthetic datasets demonstrate that our approach achieves better utility than baseline methods and existing state-of-the-art methods. PMID:26973795
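A stripped-down sketch of the fixed-threshold idea (DSFT) follows: a Laplace-noised histogram is published only when the current snapshot has moved far enough from the last released one, otherwise the previous release is repeated. Budget accounting for the distance test itself and the adaptive-threshold variant (DSAT) are omitted, and all parameter values are assumptions.

```python
import numpy as np

def dsft_release(snapshots, n_bins, epsilon, threshold):
    """Release a differentially private histogram only when the data has changed enough."""
    releases = []
    last_noisy, last_true = None, None
    for data in snapshots:
        hist, _ = np.histogram(data, bins=n_bins, range=(0.0, 1.0))
        if last_true is None or np.abs(hist - last_true).sum() > threshold:
            # Laplace mechanism for a histogram (sensitivity 1 per user)
            last_noisy = hist + np.random.laplace(scale=1.0 / epsilon, size=n_bins)
            last_true = hist
        releases.append(last_noisy)
    return releases

# Three snapshots of a slowly drifting stream of values in [0, 1]
rng = np.random.default_rng(7)
snaps = [rng.beta(2, 5 - t, size=2000) for t in range(3)]
out = dsft_release(snaps, n_bins=20, epsilon=0.5, threshold=100)
print([np.round(h[:3], 1) for h in out])
```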
A binary linear programming formulation of the graph edit distance.
Justice, Derek; Hero, Alfred
2006-08-01
A binary linear programming formulation of the graph edit distance for unweighted, undirected graphs with vertex attributes is derived and applied to a graph recognition problem. A general formulation for editing graphs is used to derive a graph edit distance that is proven to be a metric, provided the cost function for individual edit operations is a metric. Then, a binary linear program is developed for computing this graph edit distance, and polynomial time methods for determining upper and lower bounds on the solution of the binary program are derived by applying solution methods for standard linear programming and the assignment problem. A recognition problem of comparing a sample input graph to a database of known prototype graphs in the context of a chemical information system is presented as an application of the new method. The costs associated with various edit operations are chosen by using a minimum normalized variance criterion applied to pairwise distances between nearest neighbors in the database of prototypes. The new metric is shown to perform quite well in comparison to existing metrics when applied to a database of chemical graphs.
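As a loose illustration of how assignment-problem machinery can approximate an edit distance between labeled graphs (this is a generic bipartite-assignment heuristic with crude degree-based edge costs, not the paper's binary linear program or its specific bounds), consider the following Python sketch:

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def assignment_ged_estimate(A1, labels1, A2, labels2, node_cost=1.0, edge_cost=1.0):
        """Rough assignment-based estimate of graph edit distance for undirected,
        vertex-labeled graphs. A1, A2 are 0/1 symmetric adjacency matrices."""
        n1, n2 = len(labels1), len(labels2)
        BIG = 1e9                                   # forbids meaningless pairings
        C = np.full((n1 + n2, n1 + n2), BIG)
        deg1, deg2 = A1.sum(axis=1), A2.sum(axis=1)
        for i in range(n1):                         # substitutions
            for j in range(n2):
                C[i, j] = node_cost * (labels1[i] != labels2[j]) \
                          + edge_cost * abs(deg1[i] - deg2[j])
        for i in range(n1):                         # deletion of vertex i and incident edges
            C[i, n2 + i] = node_cost + edge_cost * deg1[i]
        for j in range(n2):                         # insertion of vertex j and incident edges
            C[n1 + j, j] = node_cost + edge_cost * deg2[j]
        C[n1:, n2:] = 0.0                           # dummy-to-dummy pairings are free
        rows, cols = linear_sum_assignment(C)
        return C[rows, cols].sum()

    # toy example: a labeled triangle versus a labeled path
    A1 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
    A2 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
    print(assignment_ged_estimate(A1, ["C", "C", "O"], A2, ["C", "C", "O"]))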
Liu, Zhenqiu; Hsiao, William; Cantarel, Brandi L; Drábek, Elliott Franco; Fraser-Liggett, Claire
2011-12-01
Direct sequencing of microbes in human ecosystems (the human microbiome) has complemented single genome cultivation and sequencing to understand and explore the impact of commensal microbes on human health. As sequencing technologies improve and costs decline, the sophistication of data has outgrown available computational methods. While several existing machine learning methods have been adapted for analyzing microbiome data recently, there is not yet an efficient and dedicated algorithm available for multiclass classification of human microbiota. By combining instance-based and model-based learning, we propose a novel sparse distance-based learning method for simultaneous class prediction and feature (variable or taxa, which is used interchangeably) selection from multiple treatment populations on the basis of 16S rRNA sequence count data. Our proposed method simultaneously minimizes the intraclass distance and maximizes the interclass distance with many fewer estimated parameters than other methods. It is very efficient for problems with small sample sizes and unbalanced classes, which are common in metagenomic studies. We implemented this method in a MATLAB toolbox called MetaDistance. We also propose several approaches for data normalization and variance stabilization transformation in MetaDistance. We validate this method on several real and simulated 16S rRNA datasets to show that it outperforms existing methods for classifying metagenomic data. This article is the first to address simultaneous multifeature selection and class prediction with metagenomic count data. The MATLAB toolbox is freely available online at http://metadistance.igs.umaryland.edu/. zliu@umm.edu Supplementary data are available at Bioinformatics online.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McQuinn, Kristen B. W.; Skillman, Evan D.; Dolphin, Andrew E.
Accurate distances are fundamental for interpreting various measured properties of galaxies. Surprisingly, many of the best-studied spiral galaxies in the Local Volume have distance uncertainties that are much larger than can be achieved with modern observation techniques. Using Hubble Space Telescope optical imaging, we use the tip of the red giant branch method to measure the distances to six galaxies that are included in the Spitzer Infrared Nearby Galaxies Survey program and its offspring surveys. The sample includes M63, M74, NGC 1291, NGC 4559, NGC 4625, and NGC 5398. We compare our results with distances reported to these galaxies based on a variety of methods. Depending on the technique, there can be a wide range in published distances, particularly from the Tully–Fisher relation. In addition, differences between the planetary nebula luminosity function and surface brightness fluctuation techniques can vary between galaxies, suggesting inaccuracies that cannot be explained by systematics in the calibrations. Our distances improve upon previous results, as we use a well-calibrated, stable distance indicator, precision photometry in an optimally selected field of view, and a Bayesian maximum likelihood technique that reduces measurement uncertainties.
Mirzaee, Seyyed Abbas; Nikaeen, Mahnaz; Hajizadeh, Yaghob; Nabavi, BiBi Fatemeh; Hassanzadeh, Akbar
2015-01-01
Background: Wastewater contains a variety of pathogens, and bio-aerosols generated during the wastewater treatment process could be a potential health risk for exposed individuals. This study was carried out to detect Legionella spp. in the bio-aerosols generated from different processes of a wastewater treatment plant (WWTP) in Isfahan, Iran, and at downwind distances. Materials and Methods: A total of 54 air samples were collected and analyzed for the presence of Legionella spp. by a nested polymerase chain reaction (PCR) assay. A liquid impingement biosampler was used to capture bio-aerosols. The weather conditions were also recorded. Results: Legionella were detected in 6% of the samples, including air samples above the aeration tank (1/9), the belt filter press (1/9), and 250 m downwind (1/9). Conclusion: The results of this study revealed the presence of Legionella spp. in air samples of a WWTP and at a downwind distance, which represents a potential health risk to the exposed individuals. PMID:25802817
Ensemble-Biased Metadynamics: A Molecular Simulation Method to Sample Experimental Distributions
Marinelli, Fabrizio; Faraldo-Gómez, José D.
2015-01-01
We introduce an enhanced-sampling method for molecular dynamics (MD) simulations referred to as ensemble-biased metadynamics (EBMetaD). The method biases a conventional MD simulation to sample a molecular ensemble that is consistent with one or more probability distributions known a priori, e.g., experimental intramolecular distance distributions obtained by double electron-electron resonance or other spectroscopic techniques. To this end, EBMetaD adds an adaptive biasing potential throughout the simulation that discourages sampling of configurations inconsistent with the target probability distributions. The bias introduced is the minimum necessary to fulfill the target distributions, i.e., EBMetaD satisfies the maximum-entropy principle. Unlike other methods, EBMetaD does not require multiple simulation replicas or the introduction of Lagrange multipliers, and is therefore computationally efficient and straightforward in practice. We demonstrate the performance and accuracy of the method for a model system as well as for spin-labeled T4 lysozyme in explicit water, and show how EBMetaD reproduces three double electron-electron resonance distance distributions concurrently within a few tens of nanoseconds of simulation time. EBMetaD is integrated in the open-source PLUMED plug-in (www.plumed-code.org), and can be therefore readily used with multiple MD engines. PMID:26083917
Unraveling the Tangles of Language Evolution
NASA Astrophysics Data System (ADS)
Petroni, F.; Serva, M.; Volchenkov, D.
2012-07-01
The relationships between languages, molded by extremely complex social, cultural and political factors, are assessed by an automated method in which the distance between languages is estimated by the average normalized Levenshtein distance between words from the list of 200 meanings maximally resistant to change. A sequential process of language classification, described by random walks on the matrix of lexical distances, allows complex relationships between languages to be represented geometrically, in terms of distances and angles. We have tested the method on a sample of 50 Indo-European and 50 Austronesian languages. The geometric representation of language taxonomy allows accurate inferences to be made about the most significant events of human history by tracing changes in language families through time. The Anatolian and Kurgan hypotheses of the Indo-European origin and the "express train" model of the Polynesian origin are thoroughly discussed.
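The lexical distance at the heart of this approach is easy to reproduce; below is a minimal Python sketch of the normalized Levenshtein distance averaged over aligned meaning slots, with made-up example word pairs (the study uses a 200-meaning list):

    def levenshtein(a, b):
        """Classic dynamic-programming edit distance between two words."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                 # deletion
                               cur[j - 1] + 1,              # insertion
                               prev[j - 1] + (ca != cb)))   # substitution
            prev = cur
        return prev[-1]

    def language_distance(wordlist1, wordlist2):
        """Average normalized Levenshtein distance over aligned meaning slots."""
        d = [levenshtein(w1, w2) / max(len(w1), len(w2))
             for w1, w2 in zip(wordlist1, wordlist2)]
        return sum(d) / len(d)

    # toy example with three meanings only
    print(language_distance(["hand", "water", "night"],
                            ["hant", "wasser", "nacht"]))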
ASSESSMENT OF LARGE RIVER BENTHIC MACROINVERTEBRATE ASSEMBLAGES
During the summer of 2001, twelve sites were sampled for macroinvertebrates, six each on the Great Miami and Kentucky Rivers. Sites were chosen in each river from those sampled in the 1999 methods comparison study to reflect a disturbance gradient. At each site, a total distanc...
Zettl, Thomas; Mathew, Rebecca S.; Seifert, Sönke; ...
2016-05-31
Accurate determination of molecular distances is fundamental to understanding the structure, dynamics, and conformational ensembles of biological macromolecules. Here we present a method to determine the full distance distribution between small (~7 Å) gold labels attached to macromolecules with very high precision (≤1 Å) and on an absolute distance scale. Our method uses anomalous small-angle X-ray scattering close to a gold absorption edge to separate the gold-gold interference pattern from other scattering contributions. Results for 10-30 bp DNA constructs achieve excellent signal-to-noise and are in good agreement with previous results obtained by single-energy SAXS measurements, without requiring the preparation and measurement of single labeled and unlabeled samples. Finally, the use of small gold labels in combination with ASAXS readout provides an attractive approach to determining molecular distance distributions that will be applicable to a broad range of macromolecular systems.
Schmidt, Joshua H; Wilson, Tammy L; Thompson, William L; Reynolds, Joel H
2017-07-01
Obtaining useful estimates of wildlife abundance or density requires thoughtful attention to potential sources of bias and precision, and it is widely understood that addressing incomplete detection is critical to appropriate inference. When the underlying assumptions of sampling approaches are violated, both increased bias and reduced precision of the population estimator may result. Bear ( Ursus spp.) populations can be difficult to sample and are often monitored using mark-recapture distance sampling (MRDS) methods, although obtaining adequate sample sizes can be cost prohibitive. With the goal of improving inference, we examined the underlying methodological assumptions and estimator efficiency of three datasets collected under an MRDS protocol designed specifically for bears. We analyzed these data using MRDS, conventional distance sampling (CDS), and open-distance sampling approaches to evaluate the apparent bias-precision tradeoff relative to the assumptions inherent under each approach. We also evaluated the incorporation of informative priors on detection parameters within a Bayesian context. We found that the CDS estimator had low apparent bias and was more efficient than the more complex MRDS estimator. When combined with informative priors on the detection process, precision was increased by >50% compared to the MRDS approach with little apparent bias. In addition, open-distance sampling models revealed a serious violation of the assumption that all bears were available to be sampled. Inference is directly related to the underlying assumptions of the survey design and the analytical tools employed. We show that for aerial surveys of bears, avoidance of unnecessary model complexity, use of prior information, and the application of open population models can be used to greatly improve estimator performance and simplify field protocols. Although we focused on distance sampling-based aerial surveys for bears, the general concepts we addressed apply to a variety of wildlife survey contexts.
Identifying species of moths (Lepidoptera) from Baihua Mountain, Beijing, China, using DNA barcodes
Liu, Xiao F; Yang, Cong H; Han, Hui L; Ward, Robert D; Zhang, Ai-bing
2014-01-01
DNA barcoding has become a promising means for the identification of organisms of all life-history stages. Currently, distance-based and tree-based methods are most widely used to define species boundaries and uncover cryptic species. However, there is no universal threshold of genetic distance values that can be used to distinguish taxonomic groups. Alternatively, DNA barcoding can deploy a “character-based” method, whereby species are identified through discrete nucleotide substitutions. Our research focuses on the delimitation of moth species using DNA-barcoding methods. We analyzed 393 Lepidopteran specimens belonging to 80 morphologically recognized species with a standard cytochrome c oxidase subunit I (COI) sequencing approach, and deployed tree-based, distance-based, and diagnostic character-based methods to identify the taxa. The tree-based method divided the 393 specimens into 79 taxa (species), and the distance-based method divided them into 84 taxa (species). Although the diagnostic character-based method identified only 39 of the 80 species, the accuracy rate improved substantially with a reduction in sample size. For example, in the Arctiidae subset, all 12 species had diagnostic characteristics. Compared with the traditional morphological method, molecular taxonomy performed well. All three methods enable the rapid delimitation of species, although they have different characteristics and different strengths. The tree-based and distance-based methods can be used for accurate species identification and biodiversity studies in large data sets, while the character-based method performs well in small data sets and can also be used as the foundation of species-specific biochips. PMID:25360280
Newell, Felicity L.; Sheehan, James; Wood, Petra Bohall; Rodewald, Amanda D.; Buehler, David A.; Keyser, Patrick D.; Larkin, Jeffrey L.; Beachy, Tiffany A.; Bakermans, Marja H.; Boves, Than J.; Evans, Andrea; George, Gregory A.; McDermott, Molly E.; Perkins, Kelly A.; White, Matthew; Wigley, T. Bently
2013-01-01
Point counts are commonly used to assess changes in bird abundance, including analytical approaches such as distance sampling that estimate density. Point-count methods have come under increasing scrutiny because effects of detection probability and field error are difficult to quantify. For seven forest songbirds, we compared fixed-radii counts (50 m and 100 m) and density estimates obtained from distance sampling to known numbers of birds determined by territory mapping. We applied point-count analytic approaches to a typical forest management question and compared results to those obtained by territory mapping. We used a before–after control impact (BACI) analysis with a data set collected across seven study areas in the central Appalachians from 2006 to 2010. Using a 50-m fixed radius, variance in error was at least 1.5 times that of the other methods, whereas a 100-m fixed radius underestimated actual density by >3 territories per 10 ha for the most abundant species. Distance sampling improved accuracy and precision compared to fixed-radius counts, although estimates were affected by birds counted outside 10-ha units. In the BACI analysis, territory mapping detected an overall treatment effect for five of the seven species, and effects were generally consistent each year. In contrast, all point-count methods failed to detect two treatment effects due to variance and error in annual estimates. Overall, our results highlight the need for adequate sample sizes to reduce variance, and skilled observers to reduce the level of error in point-count data. Ultimately, the advantages and disadvantages of different survey methods should be considered in the context of overall study design and objectives, allowing for trade-offs among effort, accuracy, and power to detect treatment effects.
Field evaluation of distance-estimation error during wetland-dependent bird surveys
Nadeau, Christopher P.; Conway, Courtney J.
2012-01-01
Context: The most common methods to estimate detection probability during avian point-count surveys involve recording a distance between the survey point and individual birds detected during the survey period. Accurately measuring or estimating distance is an important assumption of these methods; however, this assumption is rarely tested in the context of aural avian point-count surveys. Aims: We expand on recent bird-simulation studies to document the error associated with estimating distance to calling birds in a wetland ecosystem. Methods: We used two approaches to estimate the error associated with five surveyors' distance estimates between the survey point and calling birds, and to determine the factors that affect a surveyor's ability to estimate distance. Key results: We observed biased and imprecise distance estimates when estimating distance to simulated birds in a point-count scenario (x̄_error = -9 m, s.d._error = 47 m) and when estimating distances to real birds during field trials (x̄_error = 39 m, s.d._error = 79 m). The amount of bias and precision in distance estimates differed among surveyors; surveyors with more training and experience were less biased and more precise when estimating distance to both real and simulated birds. Three environmental factors were important in explaining the error associated with distance estimates, including the measured distance from the bird to the surveyor, the volume of the call and the species of bird. Surveyors tended to make large overestimations to birds close to the survey point, which is an especially serious error in distance sampling. Conclusions: Our results suggest that distance-estimation error is prevalent, but surveyor training may be the easiest way to reduce distance-estimation error. Implications: The present study has demonstrated how relatively simple field trials can be used to estimate the error associated with distance estimates used to estimate detection probability during avian point-count surveys. Evaluating distance-estimation errors will allow investigators to better evaluate the accuracy of avian density and trend estimates. Moreover, investigators who evaluate distance-estimation errors could employ recently developed models to incorporate distance-estimation error into analyses. We encourage further development of such models, including their incorporation into distance-analysis software.
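The bias and precision figures quoted above are simple summaries of the signed estimation errors; a toy sketch (Python/numpy, invented numbers, not the study's data) of how such paired measured/estimated distances would be summarized:

    import numpy as np

    true_d = np.array([20, 45, 60, 110, 150, 210], dtype=float)   # measured distances (m)
    est_d  = np.array([35, 40, 80, 100, 190, 180], dtype=float)   # surveyor estimates (m)

    errors = est_d - true_d
    print(f"bias (mean error): {errors.mean():+.1f} m")
    print(f"precision (s.d. of error): {errors.std(ddof=1):.1f} m")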
NASA Astrophysics Data System (ADS)
Sjöberg, Daniel; Larsson, Christer
2015-06-01
We present a method aimed at reducing uncertainties and instabilities when characterizing materials in waveguide setups. The method is based on measuring the S parameters for three different orientations of a rectangular sample block in a rectangular waveguide. The corresponding geometries are modeled in a commercial full-wave simulation program, taking any material parameters as input. The material parameters of the sample are found by minimizing the squared distance between measured and calculated S parameters. The information added by the different sample orientations is quantified using the Cramér-Rao lower bound. The flexibility of the method allows the determination of material parameters of an arbitrarily shaped sample that fits in the waveguide.
Yu, Yang; Li, Yingxia; Li, Ben; Shen, Zhenyao; Stenstrom, Michael K
2017-03-01
Lead (Pb) concentration in urban dust is often higher than background concentrations and can result in a wide range of health risks to local communities. To understand Pb distribution in urban dust and how multi-industrial activity affects Pb concentration, 21 sampling sites within the heavy industry city of Jilin, China, were analyzed for Pb concentration. Pb concentrations of all 21 urban dust samples from the Jilin City Center were higher than the background concentration for soil in Jilin Province. The analyses show that distance to industry is an important parameter determining health risks associated with Pb in urban dust. The Pb concentration showed an exponential decrease, with increasing distance from industry. Both maximum likelihood estimation and Bayesian analysis were used to estimate the exponential relationship between Pb concentration and distance to multi-industry areas. We found that Bayesian analysis was a better method with less uncertainty for estimating Pb dust concentrations based on their distance to multi-industry, and this approach is recommended for further study. Copyright © 2016. Published by Elsevier Inc.
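As one simple, non-Bayesian way to picture the distance-decay model described above, the sketch below (Python with scipy, using synthetic numbers rather than the study's data) fits Pb(d) = a·exp(-b·d) by nonlinear least squares:

    import numpy as np
    from scipy.optimize import curve_fit

    def decay(d, a, b):
        """Exponential decrease of concentration with distance d from industry."""
        return a * np.exp(-b * d)

    # synthetic example: distance to industry (km) and Pb concentration (mg/kg)
    dist = np.array([0.2, 0.5, 1.0, 2.0, 3.5, 5.0, 8.0])
    pb   = np.array([410, 350, 260, 170, 110, 80, 55], dtype=float)

    (a_hat, b_hat), cov = curve_fit(decay, dist, pb, p0=(400.0, 0.5))
    print(f"Pb(d) ~ {a_hat:.0f} * exp(-{b_hat:.2f} d)")
    print("standard errors:", np.sqrt(np.diag(cov)))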
Molecular phylogenetic trees - On the validity of the Goodman-Moore augmentation algorithm
NASA Technical Reports Server (NTRS)
Holmquist, R.
1979-01-01
A response is made to the reply of Nei and Tateno (1979) to the letter of Holmquist (1978) supporting the validity of the augmentation algorithm of Moore (1977) in reconstructions of nucleotide substitutions by means of the maximum parsimony principle. It is argued that the overestimation of the augmented numbers of nucleotide substitutions (augmented distances) found by Tateno and Nei (1978) is due to an unrepresentative data sample and that it is only necessary that evolution be stochastically uniform in different regions of the phylogenetic network for the augmentation method to be useful. The importance of the average value of the true distance over all links is explained, and the relative variances of the true and augmented distances are calculated to be almost identical. The effects of topological changes in the phylogenetic tree on the augmented distance and the question of the correctness of ancestral sequences inferred by the method of parsimony are also clarified.
NASA Technical Reports Server (NTRS)
Bonamente, Massimiliano; Joy, Marshall K.; Carlstrom, John E.; LaRoque, Samuel J.
2004-01-01
X-ray and Sunyaev-Zeldovich Effect data can be combined to determine the distance to galaxy clusters. High-resolution X-ray data are now available from the Chandra Observatory, which provides both spatial and spectral information, and interferometric radio measurements of the Sunyaev-Zeldovich Effect are available from the BIMA and OVRO arrays. We introduce a Monte Carlo Markov chain procedure for the joint analysis of X-ray and Sunyaev-Zeldovich Effect data. The advantages of this method are the high computational efficiency and the ability to measure the full probability distribution of all parameters of interest, such as the spatial and spectral properties of the cluster gas and the cluster distance. We apply this technique to the Chandra X-ray data and the OVRO radio data for the galaxy cluster Abell 611. Comparisons with traditional likelihood-ratio methods reveal the robustness of the method. This method will be used in a follow-up paper to determine the distance of a large sample of galaxy clusters for which high-resolution Chandra X-ray and BIMA/OVRO radio data are available.
Sved, J A; Yu, H; Dominiak, B; Gilchrist, A S
2003-02-01
Long-range dispersal of a species may involve either a single long-distance movement from a core population or spreading via unobserved intermediate populations. Where the new populations originate as small propagules, genetic drift may be extreme and gene frequency or assignment methods may not prove useful in determining the relation between the core population and outbreak samples. We describe computationally simple resampling methods for use in this situation to distinguish between the different modes of dispersal. First, estimates of heterozygosity can be used to test for direct sampling from the core population and to estimate the effective size of intermediate populations. Second, a test of sharing of alleles, particularly rare alleles, can show whether outbreaks are related to each other rather than arriving as independent samples from the core population. The shared-allele statistic also serves as a genetic distance measure that is appropriate for small samples. These methods were applied to data on a fruit fly pest species, Bactrocera tryoni, which is quarantined from some horticultural areas in Australia. We concluded that the outbreaks in the quarantine zone came from a heterogeneous set of genetically differentiated populations, possibly ones that overwinter in the vicinity of the quarantine zone.
Self-similar slip distributions on irregular shaped faults
NASA Astrophysics Data System (ADS)
Herrero, A.; Murphy, S.
2018-06-01
We propose a strategy to place a self-similar slip distribution on a complex fault surface that is represented by an unstructured mesh. This is achieved with a strategy based on the composite source model, in which a hierarchical set of asperities is placed on the fault, each with its own slip function that depends on the distance from the asperity centre. Central to this technique is the efficient, accurate computation of distance between two points on the fault surface. This is known as the geodetic distance problem. We propose a method to compute the distance across complex non-planar surfaces based on a corollary of Huygens' principle. The difference between this method and other sample-based algorithms that precede it is the use of a curved front at a local level to calculate the distance. This technique produces a highly accurate computation of the distance, as the curvature of the front is linked to the distance from the source. Our local scheme is based on a sequence of two trilaterations, producing a robust algorithm which is highly precise. We test the strategy on a planar surface in order to assess its ability to preserve the self-similarity properties of a slip distribution. We also present a synthetic self-similar slip distribution on a real slab topography for a M8.5 event. This method for computing distance may be extended to the estimation of first arrival times in both complex 3D surfaces and 3D volumes.
Absolute Distances to Nearby Type Ia Supernovae via Light Curve Fitting Methods
NASA Astrophysics Data System (ADS)
Vinkó, J.; Ordasi, A.; Szalai, T.; Sárneczky, K.; Bányai, E.; Bíró, I. B.; Borkovits, T.; Hegedüs, T.; Hodosán, G.; Kelemen, J.; Klagyivik, P.; Kriskovics, L.; Kun, E.; Marion, G. H.; Marschalkó, G.; Molnár, L.; Nagy, A. P.; Pál, A.; Silverman, J. M.; Szakáts, R.; Szegedi-Elek, E.; Székely, P.; Szing, A.; Vida, K.; Wheeler, J. C.
2018-06-01
We present a comparative study of absolute distances to a sample of very nearby, bright Type Ia supernovae (SNe) derived from high cadence, high signal-to-noise, multi-band photometric data. Our sample consists of four SNe: 2012cg, 2012ht, 2013dy and 2014J. We present new homogeneous, high-cadence photometric data in Johnson–Cousins BVRI and Sloan g′r′i′z′ bands taken from two sites (Piszkesteto and Baja, Hungary), and the light curves are analyzed with publicly available light curve fitters (MLCS2k2, SNooPy2 and SALT2.4). When comparing the best-fit parameters provided by the different codes, it is found that the distance moduli of moderately reddened SNe Ia agree within ≲0.2 mag, and the agreement is even better (≲0.1 mag) for the highest signal-to-noise BVRI data. For the highly reddened SN 2014J the dispersion of the inferred distance moduli is slightly higher. These SN-based distances are in good agreement with the Cepheid distances to their host galaxies. We conclude that the current state-of-the-art light curve fitters for Type Ia SNe can provide consistent absolute distance moduli having less than ∼0.1–0.2 mag uncertainty for nearby SNe. Still, there is room for future improvements to reach the desired ∼0.05 mag accuracy in the absolute distance modulus.
Koehl, Anthony J; Long, Jeffrey C
2018-02-01
We present a model that partitions Nei's minimum genetic distance between admixed populations into components of admixture and genetic drift. We applied this model to 17 admixed populations in the Americas to examine how admixture and drift have contributed to the patterns of genetic diversity. We analyzed 618 short tandem repeat loci in 949 individuals from 49 population samples. Thirty-two samples serve as proxies for continental ancestors. Seventeen samples represent admixed populations: (4) African-American and (13) Latin American. We partition genetic distance, and then calculate fixation indices and principal coordinates to interpret our results. A computer simulation confirms that our method correctly estimates drift and admixture components of genetic distance when the assumptions of the model are met. The partition of genetic distance shows that both admixture and genetic drift contribute to patterns of genetic diversity. The admixture component of genetic distance provides evidence for two distinct axes of continental ancestry. However, the genetic distances show that ancestry contributes to only one axis of genetic differentiation. The genetic distances among the 13 Latin American populations in this analysis show contributions from both differences in ancestry and differences in genetic drift. By contrast, the genetic distances among the four African American populations in this analysis owe mostly to genetic drift because these groups have similar fractions of European and African ancestry. The genetic structure of admixed populations in the Americas reflects more than admixture. We show that the history of serial founder effects constrains the impact of admixture on allele frequencies to a single dimension. Genetic drift in the admixed populations imposed a new level of genetic structure onto that created by admixture. © 2017 Wiley Periodicals, Inc.
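For reference, Nei's minimum genetic distance between populations X and Y, the quantity being partitioned in this study, is commonly written as below (LaTeX notation; overbars denote averages over loci, and x_i and y_i are allele frequencies at a locus). The partition into admixture and drift components is specific to the paper and is not reproduced here.

    D_m = \frac{J_X + J_Y}{2} - J_{XY},
    \qquad
    J_X = \overline{\sum_i x_i^2}, \quad
    J_Y = \overline{\sum_i y_i^2}, \quad
    J_{XY} = \overline{\sum_i x_i y_i}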
Method of Menu Selection by Gaze Movement Using AC EOG Signals
NASA Astrophysics Data System (ADS)
Kanoh, Shin'ichiro; Futami, Ryoko; Yoshinobu, Tatsuo; Hoshimiya, Nozomu
A method to detect the direction and the distance of voluntary eye gaze movement from EOG (electrooculogram) signals was proposed and tested. In this method, AC-amplified vertical and horizontal transient EOG signals were classified into 8 classes of direction and 2 classes of distance of voluntary eye gaze movements. The horizontal and vertical EOGs at each sampling time during an eye gaze movement were treated as a two-dimensional vector, and the center of gravity of the sample vectors whose norms were more than 80% of the maximum norm was used as a feature vector to be classified. Classification with the k-nearest neighbor algorithm showed that the averaged correct detection rates for each subject were 98.9%, 98.7%, and 94.4%, respectively. This method avoids strict EOG-based eye tracking, which requires DC amplification of very small signals. It would be useful for developing robust human interfacing systems based on menu selection for severely paralyzed patients.
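A rough sketch (Python/numpy, with invented signals and training data; the paper's preprocessing, thresholds and class structure may differ) of the feature extraction and k-nearest-neighbour classification described above:

    import numpy as np

    def gaze_feature(h_eog, v_eog):
        """Center of gravity of (horizontal, vertical) EOG samples whose norm
        is at least 80% of the maximum norm during the eye movement."""
        vecs = np.column_stack([h_eog, v_eog])
        norms = np.linalg.norm(vecs, axis=1)
        strong = vecs[norms >= 0.8 * norms.max()]
        return strong.mean(axis=0)

    def knn_classify(feature, train_features, train_labels, k=3):
        """Plain k-nearest-neighbour vote in the 2-D feature space."""
        d = np.linalg.norm(train_features - feature, axis=1)
        nearest = np.argsort(d)[:k]
        values, counts = np.unique(train_labels[nearest], return_counts=True)
        return values[np.argmax(counts)]

    # toy training set: features for two direction/distance classes
    train_x = np.array([[1.0, 0.0], [1.1, 0.1], [-0.7, 0.7], [-0.8, 0.6]])
    train_y = np.array(["right-far", "right-far", "upleft-near", "upleft-near"])
    f = gaze_feature(h_eog=np.array([0.1, 0.9, 1.0]), v_eog=np.array([0.0, 0.1, 0.0]))
    print(knn_classify(f, train_x, train_y))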
Adaptive phase k-means algorithm for waveform classification
NASA Astrophysics Data System (ADS)
Song, Chengyun; Liu, Zhining; Wang, Yaojun; Xu, Feng; Li, Xingming; Hu, Guangmin
2018-01-01
Waveform classification is a powerful technique for seismic facies analysis that describes the heterogeneity and compartments within a reservoir. Horizon interpretation is a critical step in waveform classification. However, the horizon often produces inconsistent waveform phase, and thus results in an unsatisfactory classification. To alleviate this problem, an adaptive phase waveform classification method called the adaptive phase k-means is introduced in this paper. Our method improves the traditional k-means algorithm by using an adaptive phase distance as the waveform similarity measure. The proposed distance is a measure with variable phases as it moves from sample to sample along the traces. Model traces are also updated with the best phase interference in the iterative process. Therefore, our method is robust to phase variations caused by the interpretation horizon. We tested the effectiveness of our algorithm by applying it to synthetic and real data. The satisfactory results reveal that the proposed method tolerates certain waveform phase variation and is a good tool for seismic facies analysis.
Friese, Anika; Klees, Sylvia; Tenhagen, Bernd A.; Fetsch, Alexandra; Rösler, Uwe; Hartung, Jörg
2012-01-01
During 1 year, samples were taken on 4 days, one sample in each season, from pigs, the floor, and the air inside pig barns and from the ambient air and soil at different distances outside six commercial livestock-associated methicillin-resistant Staphylococcus aureus (LA-MRSA)-positive pig barns in the north and east of Germany. LA-MRSA was isolated from animals, floor, and air samples in the barn, showing a range of airborne LA-MRSA between 6 and 3,619 CFU/m3 (median, 151 CFU/m3). Downwind of the barns, LA-MRSA was detected in low concentrations (11 to 14 CFU/m3) at distances of 50 and 150 m; all upwind air samples were negative. In contrast, LA-MRSA was found on soil surfaces at distances of 50, 150, and 300 m downwind from all barns, but no statistical differences could be observed between the proportions of positive soil surface samples at the three different distances. Upwind of the barns, positive soil surface samples were found only sporadically. Significantly more positive LA-MRSA samples were found in summer than in the other seasons both in air and soil samples upwind and downwind of the pig barns. spa typing was used to confirm the identity of LA-MRSA types found inside and outside the barns. The results show that there is regular airborne LA-MRSA transmission and deposition, which are strongly influenced by wind direction and season, of up to at least 300 m around positive pig barns. The described boot sampling method seems suitable to characterize the contamination of the vicinity of LA-MRSA-positive pig barns by the airborne route. PMID:22685139
Hagler, James R; Thompson, Alison L; Stefanek, Melissa A; Machtley, Scott A
2018-03-01
A study was conducted that compared the effectiveness of a sweepnet versus a vacuum suction device for collecting arthropods in cotton. The study differs from previous research in that body-mounted action cameras (B-MACs) were used to record the activity of the person conducting the arthropod collections. The videos produced by the B-MACs were then analyzed with behavioral event recording software to quantify various aspects of the sampling process. The sampler's speed and the number of sampling sweeps or vacuum suctions taken over a fixed distance (12.2 m) of cotton were two of the more significant sampling characteristics quantified for each method. The arthropod counts obtained, combined with the analyses of the videos, enabled us to estimate arthropod sampling efficiency for each technique based on fixed distance, time, and sample unit measurements. Data revealed that the vacuuming was the most precise method for collecting arthropods in the relatively small cotton research plots. However, data also indicates that the sweepnet method would be more efficient for collecting most of the cotton-dwelling arthropod taxa, especially if the sampler could continuously sweep for at least 1 min or ≥80 m (e.g., in larger research plots). The B-MACs are inexpensive and non-cumbersome, the video images generated are outstanding, and they can be archived to provide permanent documentation of a research project. The methods described here could be useful for other types of field-based research to enhance data collection efficiency.
2018-01-01
Hyperspectral image classification with a limited number of training samples without loss of accuracy is desirable, as collecting such data is often expensive and time-consuming. However, classifiers trained with limited samples usually end up with a large generalization error. To overcome this problem, we propose a fuzziness-based active learning framework (FALF), in which we implement the idea of selecting optimal training samples to enhance generalization performance for two different kinds of classifiers, discriminative and generative (e.g. SVM and KNN). The optimal samples are selected by first estimating the boundary of each class and then calculating the fuzziness-based distance between each sample and the estimated class boundaries. Those samples that are at smaller distances from the boundaries and have higher fuzziness are chosen as target candidates for the training set. Through detailed experimentation on three publicly available datasets, we showed that when trained with the proposed sample selection framework, both classifiers achieved higher classification accuracy and lower processing time with a small amount of training data, as opposed to the case where the training samples were selected randomly. Our experiments demonstrate the effectiveness of our proposed method, which compares favorably with the state-of-the-art methods. PMID:29304512
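As one way to picture the sample-selection idea (not the authors' exact criterion; the fuzziness measure and the class-boundary estimation in the paper are more elaborate), the sketch below scores unlabeled samples by the entropy of their predicted class probabilities and by the margin between the two most probable classes, then selects the most ambiguous ones for labeling:

    import numpy as np

    def fuzziness(proba):
        """Entropy of predicted class-membership probabilities, used here as a
        simple stand-in for the fuzziness measure described in the abstract."""
        p = np.clip(proba, 1e-12, 1.0)
        return -(p * np.log(p)).sum(axis=1)

    def select_training_candidates(proba, n_select):
        """Rank unlabeled samples: high fuzziness and small margin between the
        two most probable classes (i.e. close to the estimated class boundary)."""
        sorted_p = np.sort(proba, axis=1)
        margin = sorted_p[:, -1] - sorted_p[:, -2]   # small margin -> near boundary
        score = fuzziness(proba) - margin            # higher score = more informative
        return np.argsort(score)[::-1][:n_select]

    # toy posterior probabilities for 5 unlabeled pixels over 3 classes
    proba = np.array([[0.98, 0.01, 0.01],
                      [0.40, 0.35, 0.25],
                      [0.34, 0.33, 0.33],
                      [0.70, 0.20, 0.10],
                      [0.55, 0.44, 0.01]])
    print(select_training_candidates(proba, n_select=2))   # indices of the chosen samples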
Bhat, Nagesh; Asawa, Kailash; Tak, Mridula; Shinde, Kushal; Singh, Anukriti; Gandhi, Neha; Gupta, Vivek Vardhan
2015-01-01
Background In recent years, environmental contamination has emerged as a by-product of industrial and other human activities. In India, with expanding industrialization, numerous hazardous substances are used or are released during production as dusts, fumes, vapours and gases. These substances ultimately mix into the environment and cause health hazards. Objective To determine the concentration of fluoride in soils and vegetables grown in the vicinity of the Zinc Smelter, Debari, Udaipur, Rajasthan. Materials and Methods Samples of vegetables and soil were collected from areas situated at 0, 1, 2, 5, and 10 km distance from the zinc smelter, Debari. Three samples of vegetables (i.e. cabbage, onion and tomato) and 3 samples of soil (one sample from the upper layer, i.e. 0 to 20 cm, and one from the deep layer, i.e. 20 – 40 cm) were collected at each distance. The soil and vegetable samples were sealed in clean polythene bags and transported to the laboratory for analysis. One sample each of water and fertilizer was also collected at each distance. Results The mean fluoride concentration in the vegetables grown varied between 0.36 ± 0.69 and 0.71 ± 0.90 ppm. The fluoride concentration in the fertilizer and water samples from the various distances was found to be in the range of 1.4 – 1.5 ppm and 1.8 – 1.9 ppm, respectively. Conclusion The fluoride content of soil and vegetables was found to be higher in places near the zinc smelter. PMID:26557620
Note: A simple image processing based fiducial auto-alignment method for sample registration.
Robertson, Wesley D; Porto, Lucas R; Ip, Candice J X; Nantel, Megan K T; Tellkamp, Friedjof; Lu, Yinfei; Miller, R J Dwayne
2015-08-01
A simple method for the location and auto-alignment of sample fiducials for sample registration using widely available MATLAB/LabVIEW software is demonstrated. The method is robust, easily implemented, and applicable to a wide variety of experiment types for improved reproducibility and increased setup speed. The software uses image processing to locate and measure the diameter and center point of circular fiducials for distance self-calibration and iterative alignment and can be used with most imaging systems. The method is demonstrated to be fast and reliable in locating and aligning sample fiducials, provided here by a nanofabricated array, with accuracy within the optical resolution of the imaging system. The software was further demonstrated to register, load, and sample the dynamically wetted array.
Color Filtering Localization for Three-Dimensional Underwater Acoustic Sensor Networks
Liu, Zhihua; Gao, Han; Wang, Wuling; Chang, Shuai; Chen, Jiaxing
2015-01-01
Accurate localization of mobile nodes has been an important and fundamental problem in underwater acoustic sensor networks (UASNs). The detection information returned from a mobile node is meaningful only if its location is known. In this paper, we propose two localization algorithms based on color filtering technology called PCFL and ACFL. PCFL and ACFL aim at collaboratively accomplishing accurate localization of underwater mobile nodes with minimum energy expenditure. They both adopt the overlapping signal region of task anchors which can communicate with the mobile node directly as the current sampling area. PCFL employs the projected distances between each of the task projections and the mobile node, while ACFL adopts the direct distance between each of the task anchors and the mobile node. The proportion factor of distance is also proposed to weight the RGB values. By comparing the nearness degrees of the RGB sequences between the samples and the mobile node, samples can be filtered out. The normalized nearness degrees are considered as the weighted standards to calculate the coordinates of the mobile nodes. The simulation results show that the proposed methods have excellent localization performance and can localize the mobile node in a timely way. The average localization error of PCFL is decreased by about 30.4% compared to the AFLA method. PMID:25774706
CORS BAADE-WESSELINK DISTANCE TO THE LMC NGC 1866 BLUE POPULOUS CLUSTER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Molinaro, R.; Ripepi, V.; Marconi, M.
2012-03-20
We used optical and near-infrared photometry and radial velocity data for a sample of 11 Cepheids belonging to the young LMC blue populous cluster NGC 1866 to estimate their radii and distances on the basis of the CORS Baade-Wesselink method. This technique, based on an accurate calibration of surface brightness as a function of (U - B), (V - K) colors, allows us to estimate, simultaneously, the linear radius and the angular diameter of Cepheid variables, and consequently to derive their distance. A rigorous error estimate on radii and distances was derived by using Monte Carlo simulations. Our analysis gives a distance modulus for NGC 1866 of 18.51 ± 0.03 mag, which is in agreement with several independent results.
Shipham, Ashlee; Schmidt, Daniel J; Hughes, Jane M
2013-01-01
Recent work has highlighted the need to account for hierarchical patterns of genetic structure when estimating evolutionary and ecological parameters of interest. This caution is particularly relevant to studies of riverine organisms, where hierarchical structure appears to be commonplace. Here, we indirectly estimate dispersal distance in a hierarchically structured freshwater fish, Mogurnda adspersa. Microsatellite and mitochondrial DNA (mtDNA) data were obtained for 443 individuals across 27 sites separated by an average of 1.3 km within creeks of southeastern Queensland, Australia. Significant genetic structure was found among sites (mtDNA Φ_ST = 0.508; microsatellite F_ST = 0.225, F′_ST = 0.340). Various clustering methods produced congruent patterns of hierarchical structure reflecting stream architecture. Partial Mantel tests identified contiguous sets of sample sites where isolation by distance (IBD) explained F_ST variation without significant contribution of hierarchical structure. Analysis of mean natal dispersal distance (σ) within sets of IBD-linked sample sites suggested most dispersal occurs over less than 1 km, and the average effective density (D_e) was estimated at 11.5 individuals km⁻¹, indicating that sedentary behavior and small effective population size are responsible for the remarkable patterns of genetic structure observed. Our results demonstrate that Rousset's regression-based method is applicable to estimating the scale of dispersal in riverine organisms and that identifying contiguous populations that satisfy the assumptions of this model is achievable with genetic clustering methods and partial correlations.
An important issue surrounding assessment of riverine fish assemblages is the minimum amount of sampling distance needed to adequately determine biotic condition. Determining adequate sampling distance is important because sampling distance affects estimates of fish assemblage c...
USDA-ARS?s Scientific Manuscript database
Population genetic studies on a global scale may be hampered by the ability to acquire quality samples from distant countries. Preservation methods must be adequate to prevent the samples from decay during shipping, so an adequate quantity of quality DNA can be extracted for analysis, and materials...
PROPOSED STANDARDIZED ASSESSMENT METHODS (SAMS) FOR ELECTROFISHING LARGE RIVERS
The effects of electrofishing design and sampling distance were studied at 49 sites across four boatable rivers ranging in drainage area from 13,947 to 23,041 km2 in the Ohio River basin. Two general types of sites were sampled: Run-of-the-River (Free-flowing sites or with smal...
Methods for measuring populations of small, diurnal forest birds.
D.A. Manuwal; A.B. Carey
1991-01-01
Before a bird population is measured, the objectives of the study should be clearly defined. Important factors to be considered in designing a study are study site selection, plot size or transect length, distance between sampling points, duration of counts, and frequency and timing of sampling. Qualified field personnel are especially important. Assumptions applying...
Yamada, Kentaro; Henares, Terence G; Suzuki, Koji; Citterio, Daniel
2015-11-11
"Distance-based" detection motifs on microfluidic paper-based analytical devices (μPADs) allow quantitative analysis without using signal readout instruments in a similar manner to classical analogue thermometers. To realize a cost-effective and calibration-free distance-based assay of lactoferrin in human tear fluid on a μPAD not relying on antibodies or enzymes, we investigated the fluidic mobilities of the target protein and Tb(3+) cations used as the fluorescent detection reagent on surface-modified cellulosic filter papers. Chromatographic elution experiments in a tear-like sample matrix containing electrolytes and proteins revealed a collapse of attractive electrostatic interactions between lactoferrin or Tb(3+) and the cellulosic substrate, which was overcome by the modification of the paper surface with the sulfated polysaccharide ι-carrageenan. The resulting μPAD based on the fluorescence emission distance successfully analyzed 0-4 mg mL(-1) of lactoferrin in complex human tear matrix with a lower limit of detection of 0.1 mg mL(-1) by simple visual inspection. Assay results of 18 human tear samples including ocular disease patients and healthy volunteers showed good correlation to the reference ELISA method with a slope of 0.997 and a regression coefficient of 0.948. The distance-based quantitative signal and the good batch-to-batch fabrication reproducibility relying on printing methods enable quantitative analysis by simply reading out "concentration scale marks" printed on the μPAD without performing any calibration and using any signal readout instrument.
On the Accretion Rates of SW Sextantis Nova-like Variables
NASA Astrophysics Data System (ADS)
Ballouz, Ronald-Louis; Sion, Edward M.
2009-06-01
We present accretion rates for selected samples of nova-like variables having IUE archival spectra and distances uniformly determined using an infrared method by Knigge. A comparison with accretion rates derived independently with a multiparametric optimization modeling approach by Puebla et al. is carried out. The accretion rates of SW Sextantis nova-like systems are compared with the accretion rates of non-SW Sextantis systems in the Puebla et al. sample and in our sample, which was selected in the orbital period range of three to four and a half hours, with all systems having distances using the method of Knigge. Based upon the two independent modeling approaches, we find no significant difference between the accretion rates of SW Sextantis systems and non-SW Sextantis nova-like systems insofar as optically thick disk models are appropriate. We find little evidence to suggest that the SW Sex stars have higher accretion rates than other nova-like cataclysmic variables (CVs) above the period gap within the same range of orbital periods.
An Overview and Empirical Comparison of Distance Metric Learning Methods.
Moutafis, Panagiotis; Leng, Mengjun; Kakadiaris, Ioannis A
2016-02-16
In this paper, we first offer an overview of advances in the field of distance metric learning. Then, we empirically compare selected methods using a common experimental protocol. The number of distance metric learning algorithms proposed keeps growing due to their effectiveness and wide application. However, existing surveys are either outdated or they focus only on a few methods. As a result, there is an increasing need to summarize the obtained knowledge in a concise, yet informative manner. Moreover, existing surveys do not conduct comprehensive experimental comparisons. On the other hand, individual distance metric learning papers compare the performance of the proposed approach with only a few related methods and under different settings. This highlights the need for an experimental evaluation using a common and challenging protocol. To this end, we conduct face verification experiments, as this task poses significant challenges due to varying conditions during data acquisition. In addition, face verification is a natural application for distance metric learning because the encountered challenge is to define a distance function that: 1) accurately expresses the notion of similarity for verification; 2) is robust to noisy data; 3) generalizes well to unseen subjects; and 4) scales well with the dimensionality and number of training samples. In particular, we utilize well-tested features to assess the performance of selected methods following the experimental protocol of the state-of-the-art database labeled faces in the wild. A summary of the results is presented along with a discussion of the insights obtained and lessons learned by employing the corresponding algorithms.
NASA Astrophysics Data System (ADS)
Shi, Guang; Wang, Wen; Zhang, Fumin
2018-03-01
The measurement precision of frequency-modulated continuous-wave (FMCW) laser distance measurement should be proportional to the scanning range of the tunable laser. However, the commercial external cavity diode laser (ECDL) is not an ideal tunable laser source in practical applications. Due to the unavoidable mode hopping and scanning nonlinearity of the ECDL, the measurement precision of FMCW laser distance measurements can be substantially affected. Therefore, an FMCW laser ranging system with two auxiliary interferometers is proposed in this paper. Moreover, to eliminate the effects of ECDL, the frequency-sampling method and mode hopping influence suppression method are employed. Compared with a fringe counting interferometer, this FMCW laser ranging system has a measuring error of ± 20 μm at the distance of 5.8 m.
Anderson, Alexander S.; Marques, Tiago A.; Shoo, Luke P.; Williams, Stephen E.
2015-01-01
Indices of relative abundance do not control for variation in detectability, which can bias density estimates such that ecological processes are difficult to infer. Distance sampling methods can be used to correct for detectability, but in rainforest, where dense vegetation and diverse assemblages complicate sampling, information is lacking about factors affecting their application. Rare species present an additional challenge, as data may be too sparse to fit detection functions. We present analyses of distance sampling data collected for a diverse tropical rainforest bird assemblage across broad elevational and latitudinal gradients in North Queensland, Australia. Using audio and visual detections, we assessed the influence of various factors on Effective Strip Width (ESW), an intuitively useful parameter, since it can be used to calculate an estimate of density from count data. Body size and species exerted the most important influence on ESW, with larger species detectable over greater distances than smaller species. Secondarily, wet weather and high shrub density decreased ESW for most species. ESW for several species also differed between summer and winter, possibly due to seasonal differences in calling behavior. Distance sampling proved logistically intensive in these environments, but large differences in ESW between species confirmed the need to correct for detection probability to obtain accurate density estimates. Our results suggest an evidence-based approach to controlling for factors influencing detectability, and avenues for further work including modeling detectability as a function of species characteristics such as body size and call characteristics. Such models may be useful in developing a calibration for non-distance sampling data and for estimating detectability of rare species. PMID:26110433
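To make the link between ESW and a density estimate explicit, here is a minimal line-transect sketch (Python; a half-normal detection function with an assumed scale parameter and invented counts, not a fit to the study's data):

    import math

    def effective_strip_width(sigma, w):
        """ESW = integral from 0 to the truncation distance w of the
        half-normal detection function g(x) = exp(-x^2 / (2 sigma^2))."""
        return sigma * math.sqrt(math.pi / 2.0) * math.erf(w / (sigma * math.sqrt(2.0)))

    sigma = 22.0         # assumed half-normal scale (m); larger for big, loud species
    w = 100.0            # truncation distance (m)
    n_detections = 57    # detections along the transects (invented)
    L = 4000.0           # total transect length (m)

    esw = effective_strip_width(sigma, w)
    density = n_detections / (2.0 * L * esw)            # individuals per square metre
    print(f"ESW = {esw:.1f} m, density = {density * 1e4:.2f} individuals per hectare")

In practice sigma would be estimated by fitting the detection function to the observed distances, and covariates such as body size, weather or vegetation density could enter that fit, as the abstract above suggests.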
The sensitivity of relative toxicity rankings by the USF/NASA test method to some test variables
NASA Technical Reports Server (NTRS)
Hilado, C. J.; Labossiere, L. A.; Leon, H. A.; Kourtides, D. A.; Parker, J. A.; Hsu, M.-T. S.
1976-01-01
Pyrolysis temperature and the distance between the source and sensor of effluents are two important variables in tests for relative toxicity. Modifications of the USF/NASA toxicity screening test method to increase the upper temperature limit of pyrolysis, reduce the distance between the sample and the test animals, and increase the chamber volume available for animal occupancy, did not significantly alter rankings of relative toxicity of four representative materials. The changes rendered some differences no longer significant, but did not reverse any rankings. The materials studied were cotton, wool, aromatic polyamide, and polybenzimidazole.
Detection of periodicity based on independence tests - III. Phase distance correlation periodogram
NASA Astrophysics Data System (ADS)
Zucker, Shay
2018-02-01
I present the Phase Distance Correlation (PDC) periodogram - a new periodicity metric, based on the Distance Correlation concept of Gábor Székely. For each trial period, PDC calculates the distance correlation between the data samples and their phases. PDC requires adapting Székely's distance correlation to circular variables (phases). The resulting periodicity metric is best suited to sparse data sets, and it performs better than other methods for sawtooth-like periodicities. These include Cepheid and RR-Lyrae light curves, as well as radial velocity curves of eccentric spectroscopic binaries. The performance of the PDC periodogram in other contexts is almost as good as that of the Generalized Lomb-Scargle periodogram. The concept of phase distance correlation can also be adapted to astrometric data, and with further algorithmic refinement it has the potential to suit large, evenly spaced data sets as well.
Forester, James D; Im, Hae Kyung; Rathouz, Paul J
2009-12-01
Patterns of resource selection by animal populations emerge as a result of the behavior of many individuals. Statistical models that describe these population-level patterns of habitat use can miss important interactions between individual animals and characteristics of their local environment; however, identifying these interactions is difficult. One approach to this problem is to incorporate models of individual movement into resource selection models. To do this, we propose a model for step selection functions (SSF) that is composed of a resource-independent movement kernel and a resource selection function (RSF). We show that standard case-control logistic regression may be used to fit the SSF; however, the sampling scheme used to generate control points (i.e., the definition of availability) must be accommodated. We used three sampling schemes to analyze simulated movement data and found that ignoring sampling and the resource-independent movement kernel yielded biased estimates of selection. The level of bias depended on the method used to generate control locations, the strength of selection, and the spatial scale of the resource map. Using empirical or parametric methods to sample control locations produced biased estimates under stronger selection; however, we show that the addition of a distance function to the analysis substantially reduced that bias. Assuming a uniform availability within a fixed buffer yielded strongly biased selection estimates that could be corrected by including the distance function but remained inefficient relative to the empirical and parametric sampling methods. As a case study, we used location data collected from elk in Yellowstone National Park, USA, to show that selection and bias may be temporally variable. Because under constant selection the amount of bias depends on the scale at which a resource is distributed in the landscape, we suggest that distance always be included as a covariate in SSF analyses. This approach to modeling resource selection is easily implemented using common statistical tools and promises to provide deeper insight into the movement ecology of animals.
Muramatsu, Takashi; García-García, Hector M; Lee, Il Soo; Bruining, Nico; Onuma, Yoshinobu; Serruys, Patrick W
2012-01-01
The impact of the sampling rate (SR) of optical frequency domain imaging (OFDI) on quantitative assessment of in-stent structures (ISS) such as plaque prolapse and thrombus remains unexplored. OFDI after stenting was performed in ST-segment elevation myocardial infarction (STEMI) patients using a TERUMO OFDI system (Terumo Europe, Leuven, Belgium) with 160 frames/s and pullback speed of 20 mm/s. A total of 126 stented segments were analyzed. ISS were classified as either attached or non-attached to stent area boundaries. The volume, mean area and largest area of ISS were assessed according to 4 frequencies of SR, corresponding to distances between the analyzed frames of 0.125, 0.25, 0.50 and 1.0 mm. ISS volume was calculated by integrating cross-sectional ISS areas multiplied by each sampling distance using the disk summation method. The volume and mean area of ISS became significantly larger, while the largest area became significantly smaller, as sampling distance became larger (1.11 mm² for 0.125 mm vs. 1.00 mm² for 1.0 mm, P for trend=0.036). In addition, the variance of the differences increased with the sampling distance. Quantification of ISS is significantly influenced by the applied frequency of SR. This should be taken into account when designing future OFDI studies in which quantitative assessment of ISS is critical for the evaluation of STEMI patients.
Learning Human Actions by Combining Global Dynamics and Local Appearance.
Luo, Guan; Yang, Shuang; Tian, Guodong; Yuan, Chunfeng; Hu, Weiming; Maybank, Stephen J
2014-12-01
In this paper, we address the problem of human action recognition by combining global temporal dynamics and local visual spatio-temporal appearance features. For this purpose, in the global temporal dimension, we propose to model the motion dynamics with robust linear dynamical systems (LDSs) and use the model parameters as motion descriptors. Since LDSs live in a non-Euclidean space and the descriptors are in non-vector form, we propose a shift-invariant distance based on subspace angles to measure the similarity between LDSs. In the local visual dimension, we construct curved spatio-temporal cuboids along the trajectories of densely sampled feature points and describe them using histograms of oriented gradients (HOG). The distance between motion sequences is computed with the Chi-Squared histogram distance in the bag-of-words framework. Finally, we perform classification using the maximum margin distance learning method by combining the global dynamic distances and the local visual distances. We evaluate our approach for action recognition on five short-clip data sets, namely Weizmann, KTH, UCF sports, Hollywood2 and UCF50, as well as three long continuous data sets, namely VIRAT, ADL and CRIM13. We show competitive results compared with current state-of-the-art methods.
Multiple-wavelength spectroscopic quantitation of light-absorbing species in scattering media
Nathel, Howard; Cartland, Harry E.; Colston, Jr., Billy W.; Everett, Matthew J.; Roe, Jeffery N.
2000-01-01
An oxygen concentration measurement system for blood hemoglobin comprises a multiple-wavelength low-coherence optical light source that is coupled by single mode fibers through a splitter and combiner and focused on both a target tissue sample and a reference mirror. Reflections from both the reference mirror and from the depths of the target tissue sample are carried back and mixed to produce interference fringes in the splitter and combiner. The reference mirror is set such that the distance traversed in the reference path is the same as the distance traversed into and back from the target tissue sample at some depth in the sample that will provide light attenuation information that is dependent on the oxygen in blood hemoglobin in the target tissue sample. Two wavelengths of light are used to obtain concentrations. The method can be used to measure total hemoglobin concentration [Hb_deoxy + Hb_oxy] or total blood volume in tissue and, in conjunction with oxygen saturation measurements from pulse oximetry, can be used to absolutely quantify oxyhemoglobin [HbO2] in tissue. The apparatus and method provide a general means for absolute quantitation of an absorber dispersed in a highly scattering medium.
Khan, Md Nabiul Islam; Hijbeek, Renske; Berger, Uta; Koedam, Nico; Grueters, Uwe; Islam, S M Zahirul; Hasan, Md Asadul; Dahdouh-Guebas, Farid
2016-01-01
In the Point-Centred Quarter Method (PCQM), the mean distance of the first nearest plants in each quadrant of a number of random sample points is converted to plant density. It is a quick method for plant density estimation. In recent publications the estimator equations of simple PCQM (PCQM1) and higher order ones (PCQM2 and PCQM3, which use the distance of the second and third nearest plants, respectively) show discrepancy. This study attempts to review PCQM estimators in order to find the most accurate equation form. We tested the accuracy of different PCQM equations using Monte Carlo simulations in simulated (having 'random', 'aggregated' and 'regular' spatial patterns) plant populations and empirical ones. PCQM requires at least 50 sample points to ensure a desired level of accuracy. PCQM with a corrected estimator is more accurate than with a previously published estimator. The published PCQM versions (PCQM1, PCQM2 and PCQM3) show significant differences in accuracy of density estimation, i.e. the higher order PCQM provides higher accuracy. However, the corrected PCQM versions show no significant differences among them as tested in various spatial patterns except in plant assemblages with a strong repulsion (plant competition). If N is the number of sample points and R is distance, the corrected estimator of PCQM1 is 4(4N - 1)/(π ∑ R²) but not 12N/(π ∑ R²), of PCQM2 is 4(8N - 1)/(π ∑ R²) but not 28N/(π ∑ R²) and of PCQM3 is 4(12N - 1)/(π ∑ R²) but not 44N/(π ∑ R²) as published. If the spatial pattern of a plant association is random, PCQM1 with a corrected equation estimator and over 50 sample points would be sufficient to provide accurate density estimation. PCQM using just the nearest tree in each quadrant is therefore sufficient, which facilitates sampling of trees, particularly in areas with just a few hundred trees per hectare. PCQM3 provides the best density estimations for all types of plant assemblages including the repulsion process. Since in practice the spatial pattern of a plant association remains unknown before starting a vegetation survey, for field applications the use of PCQM3 along with the corrected estimator is recommended. However, for sparse plant populations, where the use of PCQM3 may pose practical limitations, PCQM2 or PCQM1 would be applied. During application of PCQM in the field, care should be taken to summarize the distance data based on 'the inverse summation of squared distances' but not 'the summation of inverse squared distances' as erroneously published.
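Taking the corrected estimators quoted in this abstract at face value, a minimal sketch of converting PCQM field data into a density estimate might look as follows; the distances are hypothetical, and note that the squared distances are summed, as the abstract recommends.

```python
import numpy as np

def pcqm_density(distances_m, order):
    """Corrected PCQM density estimator (plants per unit area).

    distances_m: flat array of nearest-plant distances, one per quadrant,
                 pooled over all sample points (4 quadrants per point).
    order:       1, 2 or 3, i.e. which nearest plant was measured.
    """
    r = np.asarray(distances_m, dtype=float)
    n_points = r.size // 4                     # N sample points, 4 quadrants each
    coeff = {1: 4.0, 2: 8.0, 3: 12.0}[order]   # gives 4(4N-1), 4(8N-1), 4(12N-1)
    # Corrected estimator from the abstract: 4(coeff*N - 1) / (pi * sum(R^2)).
    return 4.0 * (coeff * n_points - 1.0) / (np.pi * np.sum(r**2))

# Hypothetical PCQM1 survey: 5 sample points x 4 quadrants = 20 distances (m).
rng = np.random.default_rng(1)
dists = rng.uniform(0.5, 4.0, size=20)
print(f"PCQM1 density ~ {pcqm_density(dists, order=1):.3f} plants per m^2")
```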
Al-Atiyat, R M; Aljumaah, R S
2014-08-27
This study aimed to estimate evolutionary distances and to reconstruct phylogeny trees between different Awassi sheep populations. Thirty-two sheep individuals from three different geographical areas of Jordan and the Kingdom of Saudi Arabia (KSA) were randomly sampled. DNA was extracted from the tissue samples and sequenced using the T7 promoter universal primer. Different phylogenetic trees were reconstructed from 0.64-kb DNA sequences using the MEGA software with the best-fitting general time reversible distance model. Three methods of distance estimation were then used. The maximum composite likelihood test was considered for reconstructing maximum likelihood, neighbor-joining and UPGMA trees. The maximum likelihood tree indicated three major clusters separated by cytosine (C) and thymine (T). The greatest distance was shown between the South sheep and North sheep. On the other hand, the KSA sheep as an outgroup showed shorter evolutionary distance to the North sheep population than to the others. The neighbor-joining and UPGMA trees showed quite reliable clusters of evolutionary differentiation of Jordan sheep populations from the Saudi population. The overall results support geographical information and ecological types of the sheep populations studied. Summing up, the resulting phylogeny trees may contribute to the limited information about the genetic relatedness and phylogeny of Awassi sheep in nearby Arab countries.
Chelgren, Nathan D.; Samora, Barbara; Adams, Michael J.; McCreary, Brome
2011-01-01
High variability in abundance, cryptic coloration, and small body size of newly metamorphosed anurans have limited demographic studies of this life-history stage. We used line-transect distance sampling and Bayesian methods to estimate the abundance and spatial distribution of newly metamorphosed Western Toads (Anaxyrus boreas) in terrestrial habitat surrounding a montane lake in central Washington, USA. We completed 154 line-transect surveys from the commencement of metamorphosis (15 September 2009) to the date of first snow accumulation in fall (1 October 2009), and located 543 newly metamorphosed toads. After accounting for variable detection probability associated with the extent of barren habitats, estimates of total surface abundance ranged from a posterior median of 3,880 (95% credible intervals from 2,235 to 12,600) in the first week of sampling to 12,150 (5,543 to 51,670) during the second week of sampling. Numbers of newly metamorphosed toads dropped quickly with increasing distance from the lakeshore in a pattern that differed over the three weeks of the study and contradicted our original hypotheses. Though we hypothesized that the spatial distribution of toads would initially be concentrated near the lake shore and then spread outward from the lake over time, we observed the opposite. Ninety-five percent of individuals occurred within 20, 16, and 15 m of shore during weeks one, two, and three respectively, probably reflecting continued emergence of newly metamorphosed toads from the lake and mortality or burrow use of dispersed individuals. Numbers of toads were highest near the inlet stream of the lake. Distance sampling may provide a useful method for estimating the surface abundance of newly metamorphosed toads and relating their space use to landscape variables despite uncertain and variable probability of detection. We discuss means of improving the precision of estimates of total abundance.
Variability of 137Cs inventory at a reference site in west-central Iran.
Bazshoushtari, Nasim; Ayoubi, Shamsollah; Abdi, Mohammad Reza; Mohammadi, Mohammad
2016-12-01
The 137Cs technique has been widely used to evaluate rates and patterns of soil erosion and deposition. This technique requires an accurate estimate of the 137Cs inventory at the reference site. This study was conducted to evaluate the variability of the 137Cs inventory with respect to the sampling program, including sample size, distance and sampling method, at a reference site located in the vicinity of Fereydan district in Isfahan province, west-central Iran. Two 3 × 8 grids were established, comprising a large grid (35 m length and 8 m width) and a small grid (24 m length and 6 m width). At each grid intersection two soil samples were collected from the 0-15 cm and 15-30 cm depths, for a total of 96 soil samples from 48 sampling points. The coefficient of variation for the 137Cs inventory in the soil samples was relatively low (CV = 15%), and the sampling distance and methods used did not significantly affect the 137Cs inventories across the studied reference site. To obtain a satisfactory estimate of the mean 137Cs activity at reference sites, particularly those located in semiarid regions, it is recommended to collect at least four samples in a grid pattern 3 m apart. Copyright © 2016 Elsevier Ltd. All rights reserved.
Estimation of density of mongooses with capture-recapture and distance sampling
Corn, J.L.; Conroy, M.J.
1998-01-01
We captured mongooses (Herpestes javanicus) in live traps arranged in trapping webs in Antigua, West Indies, and used capture-recapture and distance sampling to estimate density. Distance estimation and program DISTANCE were used to provide estimates of density from the trapping-web data. Mean density based on trapping webs was 9.5 mongooses/ha (range, 5.9-10.2/ha); estimates had coefficients of variation ranging from 29.82-31.58% (mean = 30.46%). Mark-recapture models were used to estimate abundance, which was converted to density using estimates of effective trap area. Tests of model assumptions provided by CAPTURE indicated pronounced heterogeneity in capture probabilities and some indication of behavioral response and variation over time. Mean estimated density was 1.80 mongooses/ha (range, 1.37-2.15/ha) with estimated coefficients of variation of 4.68-11.92% (mean = 7.46%). Estimates of density based on mark-recapture data depended heavily on assumptions about animal home ranges; variances of densities also may be underestimated, leading to unrealistically narrow confidence intervals. Estimates based on trap webs require fewer assumptions, and estimated variances may be a more realistic representation of sampling variation. Because trap webs are established easily and provide adequate data for estimation in a few sample occasions, the method should be efficient and reliable for estimating densities of mongooses.
Generalized sample entropy analysis for traffic signals based on similarity measure
NASA Astrophysics Data System (ADS)
Shang, Du; Xu, Mengjia; Shang, Pengjian
2017-05-01
Sample entropy is a prevailing method used to quantify the complexity of a time series. In this paper, a modified method of generalized sample entropy combined with surrogate data analysis is proposed as a new measure to assess the complexity of a complex dynamical system such as traffic signals. The method, based on a similarity distance, matches signal patterns in a different way and reveals distinct behaviors of complexity. Simulations are conducted on synthetic data and traffic signals to provide a comparative study that shows the power of the new method. Compared with previous sample entropy and surrogate data analysis, the new method has two main advantages. The first is that it overcomes the limitation on the relationship between the dimension parameter and the length of the series. The second is that the modified sample entropy functions can be used to quantitatively distinguish time series from different complex systems by the similarity measure.
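For reference, a plain (unmodified) sample entropy can be computed as sketched below; this uses the conventional Chebyshev-distance template matching and does not reproduce the paper's generalized similarity measure.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Conventional sample entropy SampEn(m, r) with Chebyshev distance.

    Counts template matches of length m and m+1 within tolerance
    r = r_factor * std(x) and returns -log of their ratio.
    """
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)

    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        count = 0
        for i in range(len(templates)):
            # Chebyshev (max-norm) distance to every later template.
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(d <= r)
        return count

    b = count_matches(m)      # matches of length m
    a = count_matches(m + 1)  # matches of length m + 1
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

# Example: white noise should yield a higher SampEn than a smooth sine wave.
rng = np.random.default_rng(0)
t = np.linspace(0, 20 * np.pi, 1000)
print("sine :", sample_entropy(np.sin(t)))
print("noise:", sample_entropy(rng.standard_normal(1000)))
```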
Approximate geodesic distances reveal biologically relevant structures in microarray data.
Nilsson, Jens; Fioretos, Thoas; Höglund, Mattias; Fontes, Magnus
2004-04-12
Genome-wide gene expression measurements, as currently determined by the microarray technology, can be represented mathematically as points in a high-dimensional gene expression space. Genes interact with each other in regulatory networks, restricting the cellular gene expression profiles to a certain manifold, or surface, in gene expression space. To obtain knowledge about this manifold, various dimensionality reduction methods and distance metrics are used. For data points distributed on curved manifolds, a sensible distance measure would be the geodesic distance along the manifold. In this work, we examine whether an approximate geodesic distance measure captures biological similarities better than the traditionally used Euclidean distance. We computed approximate geodesic distances, determined by the Isomap algorithm, for one set of lymphoma and one set of lung cancer microarray samples. Compared with the ordinary Euclidean distance metric, this distance measure produced more instructive, biologically relevant, visualizations when applying multidimensional scaling. This suggests the Isomap algorithm as a promising tool for the interpretation of microarray data. Furthermore, the results demonstrate the benefit and importance of taking nonlinearities in gene expression data into account.
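A rough sketch of the Isomap-style geodesic distance computation described here: build a k-nearest-neighbour graph, take graph shortest paths as approximate geodesic distances, then apply multidimensional scaling. The swiss-roll points below stand in for real expression profiles, and the neighbourhood size is an arbitrary choice.

```python
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path
from sklearn.manifold import MDS

# Synthetic points on a curved manifold, standing in for expression profiles
# (rows = samples, columns = measured features).
X, _ = make_swiss_roll(n_samples=400, random_state=0)

# 1. k-nearest-neighbour graph with Euclidean edge weights.
knn = kneighbors_graph(X, n_neighbors=10, mode="distance")

# 2. Approximate geodesic distances = shortest paths through the graph
#    (the core of the Isomap algorithm).
geodesic = shortest_path(knn, method="D", directed=False)

# 3. Multidimensional scaling on the geodesic distance matrix gives a 2-D
#    layout for visual inspection of sample groupings.
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(geodesic)
print(coords.shape)  # (400, 2)
```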
NASA Astrophysics Data System (ADS)
Vargas, E.; Cifuentes, A.; Alvarado, S.; Cabrera, H.; Delgado, O.; Calderón, A.; Marín, E.
2018-02-01
Photothermal beam deflection is a well-established technique for measuring thermal diffusivity. In this technique, a pump laser beam generates temperature variations on the surface of the sample to be studied. These variations transfer heat to the surrounding medium, which may be air or any other fluid. The medium in turn experiences a change in the refractive index, which will be proportional to the temperature field on the sample surface when the distance to this surface is small. A probe laser beam will suffer a deflection due to the periodic changes in the refractive index, which is usually monitored by means of a quadrant photodetector or a similar device aided by lock-in amplification. A linear relationship that arises in this technique is that given by the phase lag of the thermal wave as a function of the distance to a punctual heat source when unidimensional heat diffusion can be guaranteed. This relationship is useful in the calculation of the sample's thermal diffusivity, which can be obtained straightforwardly by the so-called slope method, if the pump beam modulation frequency is well known. The measurement procedure requires the experimenter to displace the probe beam at a given distance from the heat source, measure the phase lag at that offset, and repeat this for as many points as desired. This process can be quite lengthy, depending on the number of points. In this paper, we propose a detection scheme which overcomes this limitation and simplifies the experimental setup: a digital camera substitutes for all of the detection hardware, using motion detection techniques and software digital lock-in post-processing. In this work, the method is demonstrated using thin metallic filaments as samples.
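The slope method mentioned above rests on the linear phase-distance relation of a one-dimensional thermal wave, phase(x) = x·sqrt(π·f/α) + const, so that α = π·f / slope². A minimal sketch with invented phase-lag data (the frequency, offsets and diffusivity are hypothetical, not values from the paper):

```python
import numpy as np

# Hypothetical phase-lag measurements (radians) at several offsets x (metres)
# from the line heat source, at a known modulation frequency f (Hz).
f = 4.0
x = np.array([0.2e-3, 0.4e-3, 0.6e-3, 0.8e-3, 1.0e-3])     # offsets in m
alpha_true = 1.1e-4                                         # m^2/s (copper-like)
phase = x * np.sqrt(np.pi * f / alpha_true) + 0.05          # constant instrumental offset

# Slope method: phase(x) = x * sqrt(pi * f / alpha) + const,
# so alpha = pi * f / slope^2.
slope, intercept = np.polyfit(x, phase, 1)
alpha = np.pi * f / slope**2
print(f"recovered thermal diffusivity: {alpha:.3e} m^2/s")
```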
NASA Technical Reports Server (NTRS)
Roth, Don J. (Inventor)
1996-01-01
An apparatus and method for determination of sample thickness and surface depression utilizing ultrasonic pulses is discussed. The sample is held in a predetermined position by a support member having a reference surface. Ultrasonic pulses travel through a medium of known velocity propagation and reflect off the reference surface and a sample surface. Time of flight data of surface echoes are converted to distances between sample surfaces to obtain computer-generated thickness profiles and surface mappings.
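A minimal sketch of the time-of-flight-to-distance conversion implied here, assuming a known sound velocity in the coupling medium and that the sample rests on the reference surface (that geometric reading, and all numbers, are assumptions for illustration):

```python
# Convert pulse-echo times of flight to distances, then take the difference
# between the reference-surface echo and the sample-surface echo as the
# local thickness. All numbers are hypothetical.
v_medium = 1480.0        # assumed sound velocity in the coupling fluid (m/s)

t_sample = 24.0e-6       # round-trip time to the sample's top surface (s)
t_reference = 27.0e-6    # round-trip time to the support's reference surface (s)

d_sample = v_medium * t_sample / 2.0        # one-way distance to sample surface
d_reference = v_medium * t_reference / 2.0  # one-way distance to reference surface
thickness = d_reference - d_sample          # sample rests on the reference surface

print(f"thickness at this point: {thickness * 1e3:.2f} mm")
```

Repeating this conversion at each scan position yields the thickness profiles and surface mappings described in the abstract.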
Analysing designed experiments in distance sampling
Stephen T. Buckland; Robin E. Russell; Brett G. Dickson; Victoria A. Saab; Donal N. Gorman; William M. Block
2009-01-01
Distance sampling is a survey technique for estimating the abundance or density of wild animal populations. Detection probabilities of animals inherently differ by species, age class, habitats, or sex. By incorporating the change in an observer's ability to detect a particular class of animals as a function of distance, distance sampling leads to density estimates...
Higo, Junichi; Dasgupta, Bhaskar; Mashimo, Tadaaki; Kasahara, Kota; Fukunishi, Yoshifumi; Nakamura, Haruki
2015-07-30
A novel enhanced conformational sampling method, virtual-system-coupled adaptive umbrella sampling (V-AUS), was proposed to compute the 300-K free-energy landscape for flexible molecular docking, where a virtual degree of freedom was introduced to control the sampling. This degree of freedom interacts with the biomolecular system. V-AUS was applied to complex formation of two disordered amyloid-β (Aβ30-35) peptides in a periodic box filled with an explicit solvent. An interpeptide distance was defined as the reaction coordinate, along which sampling was enhanced. A uniform conformational distribution was obtained covering a wide interpeptide distance ranging from the bound to unbound states. The 300-K free-energy landscape was characterized by thermodynamically stable basins of antiparallel and parallel β-sheet complexes and some other complex forms. Helices were frequently observed when the two peptides contacted loosely or fluctuated freely without interpeptide contacts. We observed that V-AUS converged to a uniform distribution more effectively than conventional AUS sampling did. © 2015 Wiley Periodicals, Inc.
Geostatistical modeling of riparian forest microclimate and its implications for sampling
Eskelson, B.N.I.; Anderson, P.D.; Hagar, J.C.; Temesgen, H.
2011-01-01
Predictive models of microclimate under various site conditions in forested headwater stream - riparian areas are poorly developed, and sampling designs for characterizing underlying riparian microclimate gradients are sparse. We used riparian microclimate data collected at eight headwater streams in the Oregon Coast Range to compare ordinary kriging (OK), universal kriging (UK), and kriging with external drift (KED) for point prediction of mean maximum air temperature (Tair). Several topographic and forest structure characteristics were considered as site-specific parameters. Height above stream and distance to stream were the most important covariates in the KED models, which outperformed OK and UK in terms of root mean square error. Sample patterns were optimized based on the kriging variance and the weighted means of shortest distance criterion using the simulated annealing algorithm. The optimized sample patterns outperformed systematic sample patterns in terms of mean kriging variance mainly for small sample sizes. These findings suggest methods for increasing efficiency of microclimate monitoring in riparian areas.
NASA Astrophysics Data System (ADS)
Lauer, Tod
1995-07-01
We request deep, near-IR (F814W) WFPC2 images of five nearby Brightest Cluster Galaxies (BCG) to calibrate the BCG Hubble diagram by the Surface Brightness Fluctuation (SBF) method. Lauer & Postman (1992) show that the BCG Hubble diagram measured out to 15,000 km s^-1 is highly linear. Calibration of the Hubble diagram zeropoint by SBF will thus yield an accurate far-field measure of H_0 based on the entire volume within 15,000 km s^-1, thus circumventing any strong biases caused by local peculiar velocity fields. This method of reaching the far field is contrasted with those using distance ratios between Virgo and Coma, or any other limited sample of clusters. HST is required as the ground-based SBF method is limited to <3,000 km s^-1. The high spatial resolution of HST allows precise measurement of the SBF signal at large distances, and allows easy recognition of globular clusters, background galaxies, and dust clouds in the BCG images that must be removed prior to SBF detection. The proposing team developed the SBF method, the first BCG Hubble diagram based on a full-sky, volume-limited BCG sample, played major roles in the calibration of WFPC and WFPC2, and are conducting observations of local galaxies that will validate the SBF zeropoint (through GTO programs). This work uses the SBF method to tie both the Cepheid and Local Group giant-branch distances generated by HST to the large scale Hubble flow, which is most accurately traced by BCGs.
Sved, J A; Yu, H; Dominiak, B; Gilchrist, A S
2003-01-01
Long-range dispersal of a species may involve either a single long-distance movement from a core population or spreading via unobserved intermediate populations. Where the new populations originate as small propagules, genetic drift may be extreme and gene frequency or assignment methods may not prove useful in determining the relation between the core population and outbreak samples. We describe computationally simple resampling methods for use in this situation to distinguish between the different modes of dispersal. First, estimates of heterozygosity can be used to test for direct sampling from the core population and to estimate the effective size of intermediate populations. Second, a test of sharing of alleles, particularly rare alleles, can show whether outbreaks are related to each other rather than arriving as independent samples from the core population. The shared-allele statistic also serves as a genetic distance measure that is appropriate for small samples. These methods were applied to data on a fruit fly pest species, Bactrocera tryoni, which is quarantined from some horticultural areas in Australia. We concluded that the outbreaks in the quarantine zone came from a heterogeneous set of genetically differentiated populations, possibly ones that overwinter in the vicinity of the quarantine zone. PMID:12618417
Far Field Modeling Methods For Characterizing Surface Detonations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garrett, A.
2015-10-08
Savannah River National Laboratory (SRNL) analyzed particle samples collected during experiments that were designed to replicate tests of nuclear weapons components that involve detonation of high explosives (HE). SRNL collected the particle samples in the HE debris cloud using innovative rocket-propelled samplers. SRNL used scanning electron microscopy to determine the elemental constituents of the particles and their size distributions. Depleted uranium composed about 7% of the particle contents. SRNL used the particle size distributions and elemental composition to perform transport calculations that indicate in many terrains and atmospheric conditions the uranium-bearing particles will be transported long distances downwind. This research established that HE tests specific to nuclear proliferation should be detectable at long downwind distances by sampling airborne particles created by the test detonations.
VizieR Online Data Catalog: Outliers and similarity in APOGEE (Reis+, 2018)
NASA Astrophysics Data System (ADS)
Reis, I.; Poznanski, D.; Baron, D.; Zasowski, G.; Shahaf, S.
2017-11-01
t-SNE is a dimensionality reduction algorithm that is particularly well suited for the visualization of high-dimensional datasets. We use t-SNE to visualize our distance matrix. A priori, these distances could define a space with almost as many dimensions as objects, i.e., tens of thousands of dimensions. Obviously, since many stars are quite similar, and their spectra are defined by a few physical parameters, the minimal spanning space might be smaller. By using t-SNE we can examine the structure of our sample projected into 2D. We use our distance matrix as input to the t-SNE algorithm and in return get a 2D map of the objects in our dataset. For each star in a sample of 183232 APOGEE stars, we list the APOGEE IDs of the 99 stars with the most similar spectra (according to the method described in the paper), ordered by similarity. (3 data files).
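A small sketch of feeding a precomputed distance matrix to t-SNE to get a 2-D map, as described above; the random "spectra" and Euclidean distances below are stand-ins for the APOGEE spectra and the paper's distance measure.

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.manifold import TSNE

# Stand-in data: 300 synthetic "spectra" with 100 pixels each.
rng = np.random.default_rng(0)
spectra = rng.standard_normal((300, 100))

# Pairwise distance matrix between all objects (here simply Euclidean).
dist = cdist(spectra, spectra)

# t-SNE accepts a precomputed distance matrix; the output is a 2-D map in
# which similar objects end up close together.
tsne = TSNE(n_components=2, metric="precomputed", init="random", random_state=0)
coords = tsne.fit_transform(dist)
print(coords.shape)  # (300, 2)
```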
Zhong, Hua; Redo-Sanchez, Albert; Zhang, X-C
2006-10-02
We present terahertz (THz) reflective spectroscopic focal-plane imaging of four explosive and bio-chemical materials (2,4-DNT, Theophylline, RDX and Glutamic Acid) at a standoff imaging distance of 0.4 m. The two-dimensional (2-D) nature of this technique enables a fast acquisition time and is very close to a camera-like operation, compared to the most commonly used point emission-detection and raster scanning configuration. The samples are identified by their absorption peaks extracted from the negative derivative of the reflection coefficient with respect to frequency (-dr/dν) of each pixel. Classification of the samples is achieved using minimum-distance-classifier and neural-network methods with an accuracy above 80% and a false alarm rate below 8%. This result supports the future application of THz time-domain spectroscopy (TDS) in standoff distance sensing, imaging, and identification.
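A minimum-distance classifier of the kind mentioned here is straightforward: each class is represented by a reference spectrum, and an unknown pixel is assigned to the closest one. The sketch below uses invented reference spectra and Euclidean distance; the real feature vectors would be the -dr/dν absorption signatures.

```python
import numpy as np

def minimum_distance_classify(X, class_means):
    """Assign each spectrum (row of X) to the class whose mean spectrum
    is closest in Euclidean distance."""
    names = list(class_means)
    means = np.stack([class_means[n] for n in names])            # (n_classes, n_bands)
    d = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2)  # (n_pixels, n_classes)
    return [names[i] for i in d.argmin(axis=1)]

# Hypothetical reference spectra sampled at 8 frequency bins.
rng = np.random.default_rng(0)
class_means = {name: rng.normal(size=8)
               for name in ["RDX", "2,4-DNT", "Theophylline", "Glutamic acid"]}

# Simulate noisy pixels drawn from the "RDX" template and classify them.
pixels = class_means["RDX"] + 0.2 * rng.normal(size=(5, 8))
print(minimum_distance_classify(pixels, class_means))
```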
Berger, Jason; Upton, Colin; Springer, Elyah
2018-04-23
Visualization of nitrite residues is essential in gunshot distance determination. Current protocols for the detection of nitrites include, among other tests, the Modified Griess Test (MGT). This method is limited because nitrite residues are unstable in the environment and are confined to partially burned gunpowder. Previous research demonstrated the ability of alkaline hydrolysis to convert nitrates to nitrites, allowing visualization of unburned gunpowder particles using the MGT. This is referred to as Total Nitrite Pattern Visualization (TNV). TNV techniques were modified and a study conducted to streamline the procedure outlined in the literature and maximize the efficacy of the TNV in casework, while reducing the required time from 1 h to 5 min and enhancing effectiveness on blood-soiled samples. The TNV method was found to provide a significant improvement, without sacrificing efficiency, in the ability to detect the nitrite residues that allow determination of the muzzle-to-target distance. © 2018 American Academy of Forensic Sciences.
Where is the game? Wild meat products authentication in South Africa: a case study.
D'Amato, Maria Eugenia; Alechine, Evguenia; Cloete, Kevin Wesley; Davison, Sean; Corach, Daniel
2013-03-01
Wild animals' meat is extensively consumed in South Africa, being obtained either from ranching, farming or hunting. To test the authenticity of the commercial labels of meat products in the local market, we obtained DNA sequence information from 146 samples (14 beef and 132 game labels) for barcoding cytochrome c oxidase subunit I and partial cytochrome b and mitochondrial fragments. The reliability of species assignments was evaluated using BLAST searches in GenBank, maximum likelihood phylogenetic analysis and the character-based method implemented in BLOG. The Kimura-2-parameter intra- and interspecific variation was evaluated for all matched species. The combined application of similarity, phylogenetic and character-based methods proved successful in species identification. Game meat samples showed 76.5% substitution; no beef samples were substituted. The substitutions showed a variety of domestic species (cattle, horse, pig, lamb), common game species in the market (kudu, gemsbok, ostrich, impala, springbok), uncommon species in the market (giraffe, waterbuck, bushbuck, duiker, mountain zebra) and extra-continental species (kangaroo). The mountain zebra Equus zebra is an International Union for Conservation of Nature (IUCN) red-listed species. We also detected Damaliscus pygargus, which is composed of two subspecies with one listed by IUCN as 'near threatened'; however, these mitochondrial fragments were insufficient to distinguish between the subspecies. The genetic distance between African ungulate species often overlaps with within-species distance in cases of recent speciation events, and strong phylogeographic structure determines within-species distances that are similar to the commonly accepted distances between species. The reliability of commercial labeling of game meat in South Africa is very poor. The extensive substitution of wild game has important implications for conservation and commerce, and for the consumers making decisions on the basis of health, religious beliefs or personal choices. Distance would be a poor indicator for identification of African ungulate species. The efficiency of the character-based method is reliant upon the availability of large reference data. The current higher availability of cytochrome b data would make this the marker of choice for African ungulates. The encountered problems of incomplete or erroneous information in databases are discussed.
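For context, the Kimura two-parameter (K2P) distance used above can be computed directly from transition and transversion proportions, d = -½ ln[(1 - 2P - Q)√(1 - 2Q)]. A minimal sketch with toy aligned fragments (not real COI or cytochrome b data):

```python
import math

PURINES = {"A", "G"}
PYRIMIDINES = {"C", "T"}

def k2p_distance(seq1, seq2):
    """Kimura two-parameter distance between two aligned DNA sequences.
    P = proportion of transitions, Q = proportion of transversions."""
    pairs = [(a, b) for a, b in zip(seq1, seq2) if a in "ACGT" and b in "ACGT"]
    n = len(pairs)
    transitions = sum(1 for a, b in pairs
                      if a != b and ({a, b} <= PURINES or {a, b} <= PYRIMIDINES))
    transversions = sum(1 for a, b in pairs if a != b) - transitions
    P, Q = transitions / n, transversions / n
    return -0.5 * math.log((1.0 - 2.0 * P - Q) * math.sqrt(1.0 - 2.0 * Q))

# Toy aligned fragments (hypothetical sequences for illustration only).
s1 = "ATGGCACTTTTAGGTCTATGA"
s2 = "ATGGCGCTTCTAGGCCTATGA"
print(f"K2P distance = {k2p_distance(s1, s2):.4f}")
```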
Detection of alpha radiation in a beta radiation field
Mohagheghi, Amir H.; Reese, Robert P.
2001-01-01
An apparatus and method for detecting alpha particles in the presence of high activities of beta particles utilizing an alpha spectrometer. The apparatus of the present invention utilizes a magnetic field applied around the sample in an alpha spectrometer to deflect the beta particles from the sample prior to reaching the detector, thus permitting detection of low concentrations of alpha particles. In the method of the invention, the strength of magnetic field required to adequately deflect the beta particles and permit alpha particle detection is given by an algorithm that controls the field strength as a function of sample beta energy and the distance of the sample to the detector.
NASA Astrophysics Data System (ADS)
Wang, Kesheng; Cheng, Jia; Yao, Shiji; Lu, Yijia; Ji, Linhong; Xu, Dengfeng
2016-12-01
Electrostatic force measurement at the micro/nano scale is of great significance in science and engineering. In this paper, a reasonable way of applying voltage is put forward: an electrostatic chuck from a real integrated circuit manufacturing process is taken as the sample, voltage is applied to the probe and to the sample electrode, respectively, and the resulting probe oscillation phase differences measured by amplitude modulation atomic force microscopy are compared. Based on the phase difference obtained from the experiment, the quantitative dependence of the absolute magnitude of the electrostatic force on the tip-sample distance and applied voltage is established by means of theoretical analysis and numerical simulation. The results show that the way the electrostatic force varies with distance and voltage at the micro/nano scale is similar to that at the macroscopic scale: the electrostatic force gradually decays with increasing distance and is basically proportional to the square of the applied voltage. Meanwhile, the applicable conditions of the above laws are discussed. In addition, a comparison of the results in this paper with the results of the energy dissipation method shows the two are consistent in general. The error decreases with increasing distance, and the effect of voltage on the error is small.
Novel method for on-road emission factor measurements using a plume capture trailer.
Morawska, L; Ristovski, Z D; Johnson, G R; Jayaratne, E R; Mengersen, K
2007-01-15
The method outlined provides for emission factor measurements to be made for unmodified vehicles driving under real world conditions at minimal cost. The method consists of a plume capture trailer towed behind a test vehicle. The trailer collects a sample of the naturally diluted plume in a 200 L conductive bag and this is delivered immediately to a mobile laboratory for subsequent analysis of particulate and gaseous emissions. The method offers low test turnaround times with the potential to complete much larger numbers of emission factor measurements than have been possible using dynamometer testing. Samples can be collected at distances up to 3 m from the exhaust pipe allowing investigation of early dilution processes. Particle size distribution measurements, as well as particle number and mass emission factor measurements, based on naturally diluted plumes are presented. A dilution profile relating the plume dilution ratio to distance from the vehicle tail pipe for a diesel passenger vehicle is also presented. Such profiles are an essential input for new mechanistic roadway air quality models.
Mouradi, Rand; Desai, Nisarg; Erdemir, Ahmet; Agarwal, Ashok
2012-01-01
Recent studies have shown that exposing human semen samples to cell phone radiation leads to a significant decline in sperm parameters. In daily living, a cell phone is usually kept in proximity to the groin, such as in a trouser pocket, separated from the testes by multiple layers of tissue. The aim of this study was to calculate the distance between cell phone and semen sample to set up an in vitro experiment that can mimic real life conditions (cell phone in trouser pocket separated by multiple tissue layers). For this reason, a computational model of scrotal tissues was designed by considering these separating layers, the results of which were used in a series of simulations using the Finite Difference Time Domain (FDTD) method. To provide an equivalent effect of multiple tissue layers, these results showed that the distance between a cell phone and semen sample should be 0.8 cm to 1.8 cm greater than the anticipated distance between a cell phone and the testes.
A method for cone fitting based on certain sampling strategy in CMM metrology
NASA Astrophysics Data System (ADS)
Zhang, Li; Guo, Chaopeng
2018-04-01
A method of cone fitting in engineering is explored and implemented to overcome shortcomings of the current fitting method, in which the calculation of the initial geometric parameters is imprecise and causes poor accuracy in surface fitting. A geometric distance function for the cone is constructed first, then a certain sampling strategy is defined to calculate the initial geometric parameters, and afterwards a nonlinear least-squares method is used to fit the surface. An experiment is designed to verify the accuracy of the method. The experimental data show that the proposed method obtains the initial geometric parameters simply and efficiently, fits the surface precisely, and provides a new, accurate approach to cone fitting in coordinate measurement.
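A hedged sketch of the general approach described here: define a geometric distance function for a cone and minimize it with nonlinear least squares. The specific distance function, synthetic points and initial guess below are illustrative stand-ins; the paper's sampling-strategy initialization is not reproduced.

```python
import numpy as np
from scipy.optimize import least_squares

def cone_residuals(params, pts):
    """Approximate signed geometric distance from each point to a cone surface.

    params: apex (3 values), axis direction angles (2), half-angle (1).
    """
    ax, ay, az, theta_dir, phi_dir, half_angle = params
    apex = np.array([ax, ay, az])
    axis = np.array([np.sin(theta_dir) * np.cos(phi_dir),
                     np.sin(theta_dir) * np.sin(phi_dir),
                     np.cos(theta_dir)])
    v = pts - apex
    h = v @ axis                                        # height along the axis
    r = np.linalg.norm(v - np.outer(h, axis), axis=1)   # radial distance from axis
    # Distance from a point to the (infinite) cone surface in the (h, r) plane.
    return r * np.cos(half_angle) - h * np.sin(half_angle)

# Synthetic CMM-like sample: points on a cone (apex at origin, z axis, 20 deg half-angle).
rng = np.random.default_rng(0)
half_angle_true = np.deg2rad(20.0)
h = rng.uniform(5.0, 30.0, 200)
ang = rng.uniform(0.0, 2.0 * np.pi, 200)
r = h * np.tan(half_angle_true)
pts = np.column_stack([r * np.cos(ang), r * np.sin(ang), h])
pts += 0.01 * rng.standard_normal(pts.shape)            # measurement noise

# Initial guess (in the paper this would come from the sampling strategy).
x0 = [0.0, 0.0, -1.0, 0.1, 0.0, np.deg2rad(30.0)]
fit = least_squares(cone_residuals, x0, args=(pts,))
print("fitted half-angle (deg):", np.degrees(fit.x[5]))
```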
A Spatial Framework for Understanding Population Structure and Admixture.
Bradburd, Gideon S; Ralph, Peter L; Coop, Graham M
2016-01-01
Geographic patterns of genetic variation within modern populations, produced by complex histories of migration, can be difficult to infer and visually summarize. A general consequence of geographically limited dispersal is that samples from nearby locations tend to be more closely related than samples from distant locations, and so genetic covariance often recapitulates geographic proximity. We use genome-wide polymorphism data to build "geogenetic maps," which, when applied to stationary populations, produces a map of the geographic positions of the populations, but with distances distorted to reflect historical rates of gene flow. In the underlying model, allele frequency covariance is a decreasing function of geogenetic distance, and nonlocal gene flow such as admixture can be identified as anomalously strong covariance over long distances. This admixture is explicitly co-estimated and depicted as arrows, from the source of admixture to the recipient, on the geogenetic map. We demonstrate the utility of this method on a circum-Tibetan sampling of the greenish warbler (Phylloscopus trochiloides), in which we find evidence for gene flow between the adjacent, terminal populations of the ring species. We also analyze a global sampling of human populations, for which we largely recover the geography of the sampling, with support for significant histories of admixture in many samples. This new tool for understanding and visualizing patterns of population structure is implemented in a Bayesian framework in the program SpaceMix.
Klose, Daniel; Klare, Johann P.; Grohmann, Dina; Kay, Christopher W. M.; Werner, Finn; Steinhoff, Heinz-Jürgen
2012-01-01
Site specific incorporation of molecular probes such as fluorescent- and nitroxide spin-labels into biomolecules, and subsequent analysis by Förster resonance energy transfer (FRET) and double electron-electron resonance (DEER) can elucidate the distance and distance-changes between the probes. However, the probes have an intrinsic conformational flexibility due to the linker by which they are conjugated to the biomolecule. This property minimizes the influence of the label side chain on the structure of the target molecule, but complicates the direct correlation of the experimental inter-label distances with the macromolecular structure or changes thereof. Simulation methods that account for the conformational flexibility and orientation of the probe(s) can be helpful in overcoming this problem. We performed distance measurements using FRET and DEER and explored different simulation techniques to predict inter-label distances using the Rpo4/7 stalk module of the M. jannaschii RNA polymerase. This is a suitable model system because it is rigid and a high-resolution X-ray structure is available. The conformations of the fluorescent labels and nitroxide spin labels on Rpo4/7 were modeled using in vacuo molecular dynamics simulations (MD) and a stochastic Monte Carlo sampling approach. For the nitroxide probes we also performed MD simulations with explicit water and carried out a rotamer library analysis. Our results show that the Monte Carlo simulations are in better agreement with experiments than the MD simulations and the rotamer library approach results in plausible distance predictions. Because the latter is the least computationally demanding of the methods we have explored, and is readily available to many researchers, it prevails as the method of choice for the interpretation of DEER distance distributions. PMID:22761805
Accounting for imperfect detection of groups and individuals when estimating abundance.
Clement, Matthew J; Converse, Sarah J; Royle, J Andrew
2017-09-01
If animals are independently detected during surveys, many methods exist for estimating animal abundance despite detection probabilities <1. Common estimators include double-observer models, distance sampling models and combined double-observer and distance sampling models (known as mark-recapture-distance-sampling models; MRDS). When animals reside in groups, however, the assumption of independent detection is violated. In this case, the standard approach is to account for imperfect detection of groups, while assuming that individuals within groups are detected perfectly. However, this assumption is often unsupported. We introduce an abundance estimator for grouped animals when detection of groups is imperfect and group size may be under-counted, but not over-counted. The estimator combines an MRDS model with an N-mixture model to account for imperfect detection of individuals. The new MRDS-Nmix model requires the same data as an MRDS model (independent detection histories, an estimate of distance to transect, and an estimate of group size), plus a second estimate of group size provided by the second observer. We extend the model to situations in which detection of individuals within groups declines with distance. We simulated 12 data sets and used Bayesian methods to compare the performance of the new MRDS-Nmix model to an MRDS model. Abundance estimates generated by the MRDS-Nmix model exhibited minimal bias and nominal coverage levels. In contrast, MRDS abundance estimates were biased low and exhibited poor coverage. Many species of conservation interest reside in groups and could benefit from an estimator that better accounts for imperfect detection. Furthermore, the ability to relax the assumption of perfect detection of individuals within detected groups may allow surveyors to re-allocate resources toward detection of new groups instead of extensive surveys of known groups. We believe the proposed estimator is feasible because the only additional field data required are a second estimate of group size.
Accounting for imperfect detection of groups and individuals when estimating abundance
Clement, Matthew J.; Converse, Sarah J.; Royle, J. Andrew
2017-01-01
If animals are independently detected during surveys, many methods exist for estimating animal abundance despite detection probabilities <1. Common estimators include double-observer models, distance sampling models and combined double-observer and distance sampling models (known as mark-recapture-distance-sampling models; MRDS). When animals reside in groups, however, the assumption of independent detection is violated. In this case, the standard approach is to account for imperfect detection of groups, while assuming that individuals within groups are detected perfectly. However, this assumption is often unsupported. We introduce an abundance estimator for grouped animals when detection of groups is imperfect and group size may be under-counted, but not over-counted. The estimator combines an MRDS model with an N-mixture model to account for imperfect detection of individuals. The new MRDS-Nmix model requires the same data as an MRDS model (independent detection histories, an estimate of distance to transect, and an estimate of group size), plus a second estimate of group size provided by the second observer. We extend the model to situations in which detection of individuals within groups declines with distance. We simulated 12 data sets and used Bayesian methods to compare the performance of the new MRDS-Nmix model to an MRDS model. Abundance estimates generated by the MRDS-Nmix model exhibited minimal bias and nominal coverage levels. In contrast, MRDS abundance estimates were biased low and exhibited poor coverage. Many species of conservation interest reside in groups and could benefit from an estimator that better accounts for imperfect detection. Furthermore, the ability to relax the assumption of perfect detection of individuals within detected groups may allow surveyors to re-allocate resources toward detection of new groups instead of extensive surveys of known groups. We believe the proposed estimator is feasible because the only additional field data required are a second estimate of group size.
Distance correlation methods for discovering associations in large astrophysical databases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martínez-Gómez, Elizabeth; Richards, Mercedes T.; Richards, Donald St. P., E-mail: elizabeth.martinez@itam.mx, E-mail: mrichards@astro.psu.edu, E-mail: richards@stat.psu.edu
2014-01-20
High-dimensional, large-sample astrophysical databases of galaxy clusters, such as the Chandra Deep Field South COMBO-17 database, provide measurements on many variables for thousands of galaxies and a range of redshifts. Current understanding of galaxy formation and evolution rests sensitively on relationships between different astrophysical variables; hence an ability to detect and verify associations or correlations between variables is important in astrophysical research. In this paper, we apply a recently defined statistical measure called the distance correlation coefficient, which can be used to identify new associations and correlations between astrophysical variables. The distance correlation coefficient applies to variables of any dimension, can be used to determine smaller sets of variables that provide equivalent astrophysical information, is zero only when variables are independent, and is capable of detecting nonlinear associations that are undetectable by the classical Pearson correlation coefficient. Hence, the distance correlation coefficient provides more information than the Pearson coefficient. We analyze numerous pairs of variables in the COMBO-17 database with the distance correlation method and with the maximal information coefficient. We show that the Pearson coefficient can be estimated with higher accuracy from the corresponding distance correlation coefficient than from the maximal information coefficient. For given values of the Pearson coefficient, the distance correlation method has a greater ability than the maximal information coefficient to resolve astrophysical data into highly concentrated horseshoe- or V-shapes, which enhances classification and pattern identification. These results are observed over a range of redshifts beyond the local universe and for galaxies from elliptical to spiral.
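For readers unfamiliar with the statistic, the sample distance correlation can be computed from double-centred pairwise distance matrices, as sketched below; the quadratic relation between x and y is a toy example of a nonlinear association that the Pearson coefficient misses.

```python
import numpy as np

def distance_correlation(x, y):
    """Sample distance correlation of two 1-D samples (Székely et al. style)."""
    x = np.asarray(x, dtype=float)[:, None]
    y = np.asarray(y, dtype=float)[:, None]
    a = np.abs(x - x.T)                       # pairwise distance matrices
    b = np.abs(y - y.T)
    # Double centering: subtract row and column means, add back the grand mean.
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    dcov2 = (A * B).mean()                    # squared distance covariance
    dvar_x, dvar_y = (A * A).mean(), (B * B).mean()
    return np.sqrt(dcov2 / np.sqrt(dvar_x * dvar_y))

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 1000)
y = x**2 + 0.05 * rng.standard_normal(1000)   # nonlinear relation, Pearson ~ 0

print("Pearson :", np.corrcoef(x, y)[0, 1])
print("distance:", distance_correlation(x, y))
```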
Effect of distance-related heterogeneity on population size estimates from point counts
Efford, Murray G.; Dawson, Deanna K.
2009-01-01
Point counts are used widely to index bird populations. Variation in the proportion of birds counted is a known source of error, and for robust inference it has been advocated that counts be converted to estimates of absolute population size. We used simulation to assess nine methods for the conduct and analysis of point counts when the data included distance-related heterogeneity of individual detection probability. Distance from the observer is a ubiquitous source of heterogeneity, because nearby birds are more easily detected than distant ones. Several recent methods (dependent double-observer, time of first detection, time of detection, independent multiple-observer, and repeated counts) do not account for distance-related heterogeneity, at least in their simpler forms. We assessed bias in estimates of population size by simulating counts with fixed radius w over four time intervals (occasions). Detection probability per occasion was modeled as a half-normal function of distance with scale parameter sigma and intercept g(0) = 1.0. Bias varied with sigma/w; values of sigma inferred from published studies were often 50% for a 100-m fixed-radius count. More critically, the bias of adjusted counts sometimes varied more than that of unadjusted counts, and inference from adjusted counts would be less robust. The problem was not solved by using mixture models or including distance as a covariate. Conventional distance sampling performed well in simulations, but its assumptions are difficult to meet in the field. We conclude that no existing method allows effective estimation of population size from point counts.
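The flavor of these simulations is easy to reproduce. The sketch below places birds uniformly in a disc of radius w, applies a half-normal per-occasion detection function over four occasions, and compares the expected count with the true population size; all parameter values are illustrative rather than the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(2)
N, w, sigma, g0, occasions = 200, 100.0, 60.0, 1.0, 4

# Birds uniform in a disc of radius w: the density of r is proportional to r.
r = w * np.sqrt(rng.random(N))
p_occ = g0 * np.exp(-r**2 / (2 * sigma**2))   # half-normal per-occasion detection
p_any = 1 - (1 - p_occ)**occasions            # detected on at least one occasion

detected = rng.random(N) < p_any
print("true N:", N,
      " expected count:", p_any.sum().round(1),
      " simulated count:", detected.sum())
```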
A Discriminant Distance Based Composite Vector Selection Method for Odor Classification
Choi, Sang-Il; Jeong, Gu-Min
2014-01-01
We present a composite vector selection method for an effective electronic nose system that performs well even in noisy environments. Each composite vector generated from an electronic nose data sample is evaluated by computing the discriminant distance. By quantitatively measuring the amount of discriminative information in each composite vector, composite vectors containing informative variables can be distinguished, and the final composite features for odor classification are extracted using the selected composite vectors. Using only the informative composite vectors, rather than all generated composite vectors, also helps to extract better composite features. Experimental results with different volatile organic compound data show that the proposed system has good classification performance even in a noisy environment compared to other methods. PMID:24747735
Distance Metric Learning via Iterated Support Vector Machines.
Zuo, Wangmeng; Wang, Faqiang; Zhang, David; Lin, Liang; Huang, Yuchi; Meng, Deyu; Zhang, Lei
2017-07-11
Distance metric learning aims to learn from the given training data a valid distance metric, with which the similarity between data samples can be more effectively evaluated for classification. Metric learning is often formulated as a convex or nonconvex optimization problem, while most existing methods are based on customized optimizers and become inefficient for large scale problems. In this paper, we formulate metric learning as a kernel classification problem with the positive semi-definite constraint, and solve it by iterated training of support vector machines (SVMs). The new formulation is easy to implement and efficient in training with the off-the-shelf SVM solvers. Two novel metric learning models, namely Positive-semidefinite Constrained Metric Learning (PCML) and Nonnegative-coefficient Constrained Metric Learning (NCML), are developed. Both PCML and NCML can guarantee the global optimality of their solutions. Experiments are conducted on general classification, face verification and person re-identification to evaluate our methods. Compared with the state-of-the-art approaches, our methods can achieve comparable classification accuracy and are efficient in training.
Young, Sean G; Carrel, Margaret; Kitchen, Andrew; Malanson, George P; Tamerius, James; Ali, Mohamad; Kayali, Ghazi
2017-04-01
First introduced to Egypt in 2006, H5N1 highly pathogenic avian influenza has resulted in the death of millions of birds and caused over 350 infections and at least 117 deaths in humans. After a decade of viral circulation, outbreaks continue to occur and diffusion mechanisms between poultry farms remain unclear. Using landscape genetics techniques, we identify the distance models most strongly correlated with the genetic relatedness of the viruses, suggesting the most likely methods of viral diffusion within Egyptian poultry. Using 73 viral genetic sequences obtained from infected birds throughout northern Egypt between 2009 and 2015, we calculated the genetic dissimilarity between H5N1 viruses for all eight gene segments. Spatial correlation was evaluated using Mantel tests and correlograms and multiple regression of distance matrices within causal modeling and relative support frameworks. These tests examine spatial patterns of genetic relatedness, and compare different models of distance. Four models were evaluated: Euclidean distance, road network distance, road network distance via intervening markets, and a least-cost path model designed to approximate wild waterbird travel using niche modeling and circuit theory. Samples from backyard farms were most strongly correlated with least cost path distances. Samples from commercial farms were most strongly correlated with road network distances. Results were largely consistent across gene segments. Results suggest wild birds play an important role in viral diffusion between backyard farms, while commercial farms experience human-mediated diffusion. These results can inform avian influenza surveillance and intervention strategies in Egypt. Copyright © 2017 Elsevier B.V. All rights reserved.
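The core statistical step, correlating a genetic-distance matrix with a geographic-distance model, can be illustrated with a simple permutation Mantel test. The sketch below is a generic Mantel test on toy matrices, not the authors' causal-modeling and multiple-regression framework.

```python
import numpy as np

def mantel(d1, d2, n_perm=999, seed=0):
    """Permutation Mantel test: correlation between two distance matrices (sketch)."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(d1, k=1)
    r_obs = np.corrcoef(d1[iu], d2[iu])[0, 1]
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(d1.shape[0])
        r = np.corrcoef(d1[p][:, p][iu], d2[iu])[0, 1]   # permute rows and columns together
        count += r >= r_obs
    return r_obs, (count + 1) / (n_perm + 1)             # one-sided p-value

# Toy symmetric "geographic" and "genetic" distance matrices for 20 farms.
pts = np.random.default_rng(1).random((20, 2))
geo = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
gen = geo + 0.1 * np.abs(np.random.default_rng(2).normal(size=geo.shape))
gen = (gen + gen.T) / 2
np.fill_diagonal(gen, 0)
print(mantel(geo, gen))
```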
Macrostructure from Microstructure: Generating Whole Systems from Ego Networks
Smith, Jeffrey A.
2014-01-01
This paper presents a new simulation method to make global network inference from sampled data. The proposed simulation method takes sampled ego network data and uses Exponential Random Graph Models (ERGM) to reconstruct the features of the true, unknown network. After describing the method, the paper presents two validity checks of the approach: the first uses the 20 largest Add Health networks while the second uses the Sociology Coauthorship network in the 1990's. For each test, I take random ego network samples from the known networks and use my method to make global network inference. I find that my method successfully reproduces the properties of the networks, such as distance and main component size. The results also suggest that simpler, baseline models provide considerably worse estimates for most network properties. I end the paper by discussing the bounds/limitations of ego network sampling. I also discuss possible extensions to the proposed approach. PMID:25339783
Cosmography by GRBs: Gamma Ray Bursts as possible distance indicators
NASA Astrophysics Data System (ADS)
Capozziello, S.; Izzo, L.
2009-10-01
A new method to constrain the cosmological equation of state is proposed by using combined samples of gamma-ray bursts (GRBs) and supernovae (SNeIa). The Chevallier-Polarski-Linder parameterization is adopted for the equation of state in order to realistically capture the deceleration/acceleration transition of dark energy models. As a result, we find that GRBs, calibrated by SNeIa, could be, at least, good distance indicators capable of discriminating cosmological models with respect to ΛCDM at high redshift.
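The Chevallier-Polarski-Linder parameterization writes the dark-energy equation of state as w(z) = w0 + wa z/(1+z). The sketch below evaluates the corresponding distance modulus in a flat universe; the cosmological parameter values are illustrative only and not those fitted in the paper.

```python
import numpy as np
from scipy.integrate import quad

H0, Om, w0, wa = 70.0, 0.3, -1.0, 0.0      # illustrative flat-cosmology parameters
c = 299792.458                              # speed of light, km/s

def E(z):
    # CPL dark-energy density evolution: rho_DE(z) / rho_DE(0)
    de = (1 + z)**(3 * (1 + w0 + wa)) * np.exp(-3 * wa * z / (1 + z))
    return np.sqrt(Om * (1 + z)**3 + (1 - Om) * de)

def mu(z):
    dc = quad(lambda zz: c / (H0 * E(zz)), 0, z)[0]   # comoving distance, Mpc
    dl = (1 + z) * dc                                 # luminosity distance, Mpc
    return 5 * np.log10(dl) + 25                      # distance modulus

print(mu(1.0))
```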
Improved look-up table method of computer-generated holograms.
Wei, Hui; Gong, Guanghong; Li, Ni
2016-11-10
Heavy computation load and vast memory requirements are major bottlenecks of computer-generated holograms (CGHs), which are promising and challenging in three-dimensional displays. To solve these problems, an improved look-up table (LUT) method suitable for arbitrarily sampled object points is proposed and implemented on a graphics processing unit (GPU); its reconstructed object quality is consistent with that of the coherent ray-trace (CRT) method. The concept of a distance factor is defined, and the distance factors are pre-computed off-line and stored in a look-up table. The results show that while reconstruction quality close to that of the CRT method is obtained, the on-line computation time is dramatically reduced compared with the LUT method on the GPU, and the memory usage is considerably lower than that of the novel-LUT method. Optical experiments are carried out to validate the effectiveness of the proposed method.
NASA Technical Reports Server (NTRS)
Bonamente, Massimillano; Joy, Marshall K.; Carlstrom, John E.; Reese, Erik D.; LaRoque, Samuel J.
2004-01-01
X-ray and Sunyaev-Zel'dovich effect data can be combined to determine the distance to galaxy clusters. High-resolution X-ray data are now available from Chandra, which provides both spatial and spectral information, and Sunyaev-Zel'dovich effect data were obtained from the BIMA and Owens Valley Radio Observatory (OVRO) arrays. We introduce a Markov Chain Monte Carlo procedure for the joint analysis of X-ray and Sunyaev-Zel'dovich effect data. The advantages of this method are the high computational efficiency and the ability to measure simultaneously the probability distribution of all parameters of interest, such as the spatial and spectral properties of the cluster gas, as well as derived quantities such as the distance to the cluster. We demonstrate this technique by applying it to the Chandra X-ray data and the OVRO radio data for the galaxy cluster A611. Comparisons with traditional likelihood ratio methods reveal the robustness of the method. This method will be used in a follow-up paper to determine the distances to a large sample of galaxy clusters.
Besseling, T H; Jose, J; Van Blaaderen, A
2015-02-01
Accurate distance measurement in 3D confocal microscopy is important for quantitative analysis, volume visualization and image restoration. However, axial distances can be distorted both by the point spread function (PSF) and by a refractive-index mismatch between the sample and immersion liquid, which are difficult to separate. Additionally, accurate calibration of the axial distances in confocal microscopy remains cumbersome, although several high-end methods exist. In this paper we present two methods to calibrate axial distances in 3D confocal microscopy that are both accurate and easily implemented. With these methods, we measured axial scaling factors as a function of refractive-index mismatch for high-aperture confocal microscopy imaging. We found that our scaling factors are almost completely linearly dependent on refractive index and that they were in good agreement with theoretical predictions that take the full vectorial properties of light into account. There was, however, a strong deviation from the theoretical predictions based on (high-angle) geometrical optics, which predict much lower scaling factors. As an illustration, we measured the PSF of a correctly calibrated point-scanning confocal microscope and showed that a nearly index-matched, micron-sized spherical object is still significantly elongated due to this PSF, which signifies that care has to be taken when determining axial calibration or axial scaling using such particles. © 2014 The Authors Journal of Microscopy published by John Wiley & Sons Ltd on behalf of Royal Microscopical Society.
Heterogeneity Measurement Based on Distance Measure for Polarimetric SAR Data
NASA Astrophysics Data System (ADS)
Xing, Xiaoli; Chen, Qihao; Liu, Xiuguo
2018-04-01
To effectively test scene heterogeneity in polarimetric synthetic aperture radar (PolSAR) data, this paper introduces a distance measure that exploits the similarity between a sample and its pixels. Moreover, to account for the influence of the distribution and of texture modeling, the K distance measure is derived from the Wishart distance measure. Specifically, the average of the pixels in a local window replaces the class-center coherency or covariance matrix. The Wishart and K distance measures are calculated between the average matrix and the pixels. Then, the ratio of the standard deviation to the mean is computed for the Wishart and K distance measures, and these two features are used to reflect the complexity of the scene. The proposed heterogeneity measure is obtained by integrating the two features using the Pauli basis. Experiments on single-look and multilook PolSAR data demonstrate the effectiveness of the proposed method for detecting scene heterogeneity.
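As a point of reference, a Wishart-type distance between a pixel covariance (or coherency) matrix and a local mean matrix is often written as d = ln|Sigma| + tr(Sigma^-1 C). The sketch below implements that common form on toy matrices; the exact definition used in the paper may differ in detail.

```python
import numpy as np

def wishart_distance(C, Sigma):
    """Wishart-type distance of pixel matrix C from a class/mean matrix Sigma (sketch)."""
    inv = np.linalg.inv(Sigma)
    _, logdet = np.linalg.slogdet(Sigma)
    return logdet + np.real(np.trace(inv @ C))

# Toy 3x3 Hermitian positive-definite matrices standing in for PolSAR covariance data.
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
C = A @ A.conj().T + 3 * np.eye(3)
Sigma = C + 0.1 * np.eye(3)          # local-window average, slightly different from C
print(wishart_distance(C, Sigma))
```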
Performance of digital RGB reflectance color extraction for plaque lesion
NASA Astrophysics Data System (ADS)
Hashim, Hadzli; Taib, Mohd Nasir; Jailani, Rozita; Sulaiman, Saadiah; Baba, Roshidah
2005-01-01
Several clinical psoriasis lesion groups have been studied for digital RGB color feature extraction. Previous work used sample sizes that included all the outliers lying beyond the standard deviation factors from the peak histograms. This paper describes the statistical performance of the RGB model with and without removing these outliers. Plaque lesions are examined together with other types of psoriasis. The statistical tests are compared across three sample sizes: the original 90 samples, a first reduction obtained by removing outliers beyond two standard deviations (2SD), and a second reduction obtained by removing outliers beyond one standard deviation (1SD). Quantification of the images through the normal/direct and the differential variants of the conventional reflectance method is considered. Performance is assessed by examining error plots with 95% confidence intervals and the results of the inferential t-tests applied. The statistical test outcomes show that the B component of the conventional differential method can be used to distinctively classify plaque from the other psoriasis groups, consistent with the error-plot findings, with an improvement in p-value greater than 0.5.
Fukunishi, Yoshifumi; Mikami, Yoshiaki; Nakamura, Haruki
2005-09-01
We developed a new method to evaluate the distances and similarities between receptor pockets or chemical compounds based on a multi-receptor versus multi-ligand docking affinity matrix. The receptors were classified by a cluster analysis based on calculations of the distance between receptor pockets. A set of receptors with low sequence homology that bind a similar compound could be classified into one cluster. Based on this line of reasoning, we proposed a new in silico screening method. According to this method, compounds in a database were docked to multiple targets. The new docking score was a slightly modified version of the multiple active site correction (MASC) score. Receptors that were at a set distance from the target receptor were not included in the analysis, and the modified MASC scores were calculated for the selected receptors. The choice of receptors is important to achieve a good screening result, and our clustering of receptors is useful for this purpose. This method was applied to the analysis of a set of 132 receptors and 132 compounds, and the results demonstrated that it achieves a high hit ratio compared to uniform sampling, using the newly developed receptor-ligand docking program Sievgene, which shows good docking performance, reconstructing 50.8% of complexes to within 2 Å RMSD.
A new template matching method based on contour information
NASA Astrophysics Data System (ADS)
Cai, Huiying; Zhu, Feng; Wu, Qingxiao; Li, Sicong
2014-11-01
Template matching is a significant approach in machine vision due to its effectiveness and robustness. However, most template matching methods are so time consuming that they cannot be used in many real-time applications. The closed contour matching method is a popular kind of template matching. This paper presents a new closed contour template matching method which is suitable for two-dimensional objects. A coarse-to-fine searching strategy is used to improve the matching efficiency, and a partial computation elimination scheme is proposed to further speed up the searching process. The method consists of offline model construction and online matching. In the process of model construction, triples and a distance image are obtained from the template image. A certain number of triples, each composed of three points, are created from the contour information extracted from the template image. The rule for selecting the three points is that they divide the template contour into three equal parts. The distance image is obtained by distance transform. Each point on the distance image represents the nearest distance between the current point and the points on the template contour. During matching, triples of the searching image are created with the same rule as the triples of the model. Through the similarity between triangles, which is invariant to rotation, translation and scaling, the triples corresponding to the triples of the model are found. Then we can obtain the initial RST (rotation, translation and scaling) parameters mapping the searching contour to the template contour. In order to speed up the searching process, the points on the searching contour are sampled to reduce the number of triples. To verify the RST parameters, the searching contour is projected onto the distance image, and the mean distance can be computed rapidly by simple operations of addition and multiplication. In the fine searching process, the initial RST parameters are refined to obtain the final accurate pose of the object. Experimental results show that the proposed method is reasonable and efficient, and can be used in many real-time applications.
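The verification step, scoring a candidate contour against the precomputed distance image, is straightforward with a Euclidean distance transform. The sketch below is a generic illustration of that idea (template, pose, and data are made up), not the authors' full coarse-to-fine matcher.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Template: a binary image whose contour pixels are marked True (a square outline here).
template = np.zeros((64, 64), dtype=bool)
template[16:48, 16] = template[16:48, 47] = True
template[16, 16:48] = template[47, 16:48] = True

# Distance image: each pixel holds the distance to the nearest template-contour pixel.
dist_img = distance_transform_edt(~template)

def mean_contour_distance(points):
    """Average distance-image value at projected contour points (lower = better match)."""
    rows, cols = np.round(points).astype(int).T
    return dist_img[rows, cols].mean()

shifted = np.argwhere(template) + np.array([2, 1])     # a candidate contour, slightly off
print(mean_contour_distance(np.argwhere(template)))     # ~0 for a perfect match
print(mean_contour_distance(shifted))                   # larger for a misaligned pose
```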
Squared Euclidean distance: a statistical test to evaluate plant community change
Raymond D. Ratliff; Sylvia R. Mori
1993-01-01
The concepts and a procedure for evaluating plant community change using the squared Euclidean distance (SED) resemblance function are described. Analyses are based on the concept that Euclidean distances constitute a sample from a population of distances between sampling units (SUs) for a specific number of times and SUs. With different times, the distances will be...
Optimal design of a plot cluster for monitoring
Charles T. Scott
1993-01-01
Traveling costs incurred during extensive forest surveys make cluster sampling cost-effective. Clusters are specified by the type of plots, plot size, number of plots, and the distance between plots within the cluster. A method to determine the optimal cluster design when different plot types are used for different forest resource attributes is described. The method...
Anomaly detection in reconstructed quantum states using a machine-learning technique
NASA Astrophysics Data System (ADS)
Hara, Satoshi; Ono, Takafumi; Okamoto, Ryo; Washio, Takashi; Takeuchi, Shigeki
2014-02-01
The accurate detection of small deviations in given density matrices is important for quantum information processing. Here we propose a method based on the concept of data mining. We demonstrate that the proposed method can more accurately detect small erroneous deviations in reconstructed density matrices, which contain intrinsic fluctuations due to the limited number of samples, than a naive method of checking the trace distance from the average of the given density matrices. This method has the potential to be a key tool in broad areas of physics where the detection of small deviations of quantum states reconstructed using a limited number of samples is essential.
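The baseline quantity mentioned here, the trace distance between two density matrices, is half the sum of the absolute eigenvalues of their difference. The following sketch computes it for two toy single-qubit states; it illustrates the naive reference method, not the proposed data-mining detector.

```python
import numpy as np

def trace_distance(rho, sigma):
    """Trace distance 0.5 * ||rho - sigma||_1 between two density matrices (sketch)."""
    eigvals = np.linalg.eigvalsh(rho - sigma)   # difference is Hermitian
    return 0.5 * np.abs(eigvals).sum()

# Toy single-qubit states
rho = np.array([[0.6, 0.1], [0.1, 0.4]], dtype=complex)
sigma = np.array([[0.5, 0.0], [0.0, 0.5]], dtype=complex)
print(trace_distance(rho, sigma))
```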
NASA Astrophysics Data System (ADS)
He, Jiayi; Shang, Pengjian; Xiong, Hui
2018-06-01
Stocks, as a concrete manifestation of financial time series with plenty of potential information, are often used in the study of financial time series. In this paper, we use stock data to recognize patterns through a dissimilarity matrix based on modified cross-sample entropy, and three-dimensional perceptual maps of the results are then provided through a multidimensional scaling method. Two modified multidimensional scaling methods are proposed in this paper, namely multidimensional scaling based on Kronecker-delta cross-sample entropy (MDS-KCSE) and multidimensional scaling based on permutation cross-sample entropy (MDS-PCSE). These two methods use Kronecker-delta-based cross-sample entropy and permutation-based cross-sample entropy to replace the distance or dissimilarity measurement in classical multidimensional scaling (MDS). Multidimensional scaling based on Chebyshev distance (MDSC) is employed to provide a reference for comparison. Our analysis reveals clear clustering both in synthetic data and in 18 indices from diverse stock markets. It implies that time series generated by the same model are more likely to share similar irregularity than others, and that differences between stock indices, caused by country or region and by different financial policies, are reflected in the irregularity of the data. In the synthetic data experiments, not only can time series generated by different models be distinguished, but series generated under different parameters of the same model can also be detected. In the financial data experiment, the stock indices are clearly divided into five groups. Through analysis, we find that they correspond to five regions: Europe, North America, South America, Asia-Pacific (with the exception of mainland China), and mainland China together with Russia. The results also demonstrate that MDS-KCSE and MDS-PCSE provide more effective divisions in experiments than MDSC.
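The embedding step these methods modify is classical MDS on a dissimilarity matrix. The sketch below shows the standard classical MDS recipe on a toy matrix; the paper's contribution is to supply that matrix from entropy-based dissimilarities rather than distances, which this sketch does not reproduce.

```python
import numpy as np

def classical_mds(D, k=3):
    """Embed a dissimilarity matrix D into k dimensions via classical MDS (sketch)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D**2) @ J                          # double-centred squared dissimilarities
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]                      # largest eigenvalues first
    return V[:, idx] * np.sqrt(np.clip(w[idx], 0, None))

# Toy dissimilarity matrix for 5 objects, built from random 2-D points.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
print(classical_mds(D, k=2))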
Cekic-Nagas, Isil; Egilmez, Ferhan; Ergun, Gulfem
2010-01-01
Objectives: The aim of this study was to compare the microhardness of five different resin composites at different irradiation distances (2 mm and 9 mm) by using three light curing units (quartz tungsten halogen, light emitting diodes and plasma arc). Methods: A total of 210 disc-shaped samples (2 mm height and 6 mm diameter) were prepared from different resin composites (Simile, Aelite Aesthetic Enamel, Clearfil AP-X, Grandio caps and Filtek Z250). Photoactivation was performed by using quartz tungsten halogen, light emitting diode and plasma arc curing units at two irradiation distances (2 mm and 9 mm). Then the samples (n=7/per group) were stored dry in dark at 37°C for 24 h. The Vickers hardness test was performed on the resin composite layer with a microhardness tester (Shimadzu HMV). Data were statistically analyzed using nonparametric Kruskal Wallis and Mann-Whitney U tests. Results: Statistical analysis revealed that the resin composite groups, the type of the light curing units and the irradiation distances have significant effects on the microhardness values (P<.05). Conclusions: Light curing unit and irradiation distance are important factors to be considered for obtaining adequate microhardness of different resin composite groups. PMID:20922164
Sun, Jun; Zhou, Xin; Wu, Xiaohong; Zhang, Xiaodong; Li, Qinglin
2016-02-26
Fast identification of moisture content in tobacco plant leaves plays a key role in the tobacco cultivation industry and benefits the management of tobacco plants on the farm. In order to identify the moisture content of tobacco plant leaves in a fast and nondestructive way, a method involving Mahalanobis distance coupled with Monte Carlo cross validation (MD-MCCV) was proposed in this study to eliminate outlier samples. The hyperspectral data of 200 tobacco plant leaf samples at 20 moisture gradients were obtained using a FieldSpc® 3 spectrometer. Savitzky-Golay smoothing (SG), roughness penalty smoothing (RPS), kernel smoothing (KS) and median smoothing (MS) were used to preprocess the raw spectra. In addition, Mahalanobis distance (MD), Monte Carlo cross validation (MCCV) and Mahalanobis distance coupled with Monte Carlo cross validation (MD-MCCV) were applied to screen outlier samples from the raw spectrum and the four smoothed spectra. The successive projections algorithm (SPA) was used to extract the most influential wavelengths. Multiple linear regression (MLR) was applied to build prediction models based on the preprocessed spectral features at the characteristic wavelengths. The results showed that the four best prediction models were MD-MCCV-SG (Rp² = 0.8401 and RMSEP = 0.1355), MD-MCCV-RPS (Rp² = 0.8030 and RMSEP = 0.1274), MD-MCCV-KS (Rp² = 0.8117 and RMSEP = 0.1433), and MD-MCCV-MS (Rp² = 0.9132 and RMSEP = 0.1162). The MD-MCCV algorithm performed best among the MD algorithm, the MCCV algorithm and the method without sample pretreatment in eliminating outlier samples from the 20 moisture gradients of tobacco plant leaves, and MD-MCCV can be used to eliminate outlier samples in spectral preprocessing. Copyright © 2016 Elsevier Inc. All rights reserved.
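The Mahalanobis-distance part of such outlier screening is simple to illustrate. The sketch below computes Mahalanobis distances of samples from their mean in a small feature space (for example, PCA scores of spectra) and shows that an artificial outlier stands out; it is only the MD building block, not the full MD-MCCV procedure.

```python
import numpy as np

def mahalanobis_distances(X):
    """Mahalanobis distance of each row of X from the sample mean (sketch)."""
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    inv = np.linalg.pinv(cov)
    diff = X - mu
    return np.sqrt(np.einsum('ij,jk,ik->i', diff, inv, diff))

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 5))   # 50 spectra-like samples, 5 features (e.g. PCA scores)
X[0] += 10                     # an artificial outlier sample
d = mahalanobis_distances(X)
print(np.argsort(d)[::-1][:3], round(d.max(), 2))   # the outlier has the largest distance
```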
Mixed Pattern Matching-Based Traffic Abnormal Behavior Recognition
Cui, Zhiming; Zhao, Pengpeng
2014-01-01
A motion trajectory is an intuitive representation in the time-space domain of the micromotion behavior of a moving target. Trajectory analysis is an important approach to recognizing abnormal behaviors of moving targets. To address the complexity of vehicle trajectories, this paper first proposes a trajectory pattern learning method based on dynamic time warping (DTW) and spectral clustering. It introduces the DTW distance to measure the distances between vehicle trajectories and determines the number of clusters automatically by a spectral clustering algorithm based on the distance matrix. Then, it clusters the sample data points into different clusters. After the spatial patterns and direction patterns are learned from the clusters, a recognition method for detecting vehicle abnormal behaviors based on mixed pattern matching is proposed. The experimental results show that the proposed technical scheme can recognize the main types of traffic abnormal behaviors effectively and has good robustness. A real-world application verified its feasibility and validity. PMID:24605045
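The DTW distance used to compare trajectories has a compact dynamic-programming form. The sketch below is a plain, unoptimized 1-D version on toy sequences; the paper applies the same idea to 2-D vehicle trajectories before spectral clustering.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences (simple O(len(a)*len(b)) sketch)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

t1 = np.array([0, 1, 2, 3, 2, 1, 0], float)
t2 = np.array([0, 0, 1, 2, 3, 2, 1, 0], float)   # same shape, shifted in time
print(dtw_distance(t1, t2))                       # small despite the misalignment
```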
NASA Astrophysics Data System (ADS)
Sandage, Allan
1999-12-01
Relative, reduced to absolute, magnitude distributions are obtained for Sb, Sbc, and Sc galaxies in the flux-limited Revised Shapley-Ames Catalog (RSA2) for each van den Bergh luminosity class (L), within each Hubble type (T). The method to isolate bias-free subsets of the total sample is via Spaenhauer diagrams, as in previous papers of this series. The distance-limited type and class-specific luminosity functions are normalized to numbers of galaxies per unit volume (10⁵ Mpc³), rather than being left as relative functions, as in Paper V. The functions are calculated using kinematic absolute magnitudes, based on an arbitrary trial value of H0 = 50. Gaussian fits to the individual normalized functions are listed for each T and L subclass. As in Paper V, the data can be freed from the T and L dependencies by applying a correction of 0.23T + 0.5L to the individual absolute magnitudes. Here, T = 3 for Sb, 4 for Sbc, and 5 for Sc galaxies, and the L values range from 1 to 6 as the luminosity class changes from I to III-IV. The total luminosity function, obtained by combining the volume-normalized Sb, Sbc, and Sc individual luminosity functions, each corrected for the T and L dependencies, has an rms dispersion of 0.67 mag, similar to much of the Tully-Fisher parameter space. Absolute calibration of the trial kinematic absolute magnitudes is made using 27 galaxies with known T and L that also have Cepheid distances. This permits the systematic correction to the H0 = 50 kinematic absolute magnitudes of 0.22 ± 0.12 mag, giving H0 = 55 ± 3 (internal) km s⁻¹ Mpc⁻¹. The Cepheid distances are based on the Madore/Freedman Cepheid period-luminosity (PL) zero point that requires (m-M)0 = 18.50 for the LMC. Using the modern LMC modulus of (m-M)0 = 18.58 requires a 4% decrease in H0, giving a final value of H0 = 53 ± 7 (external) by this method. These values of H0, based here on the method of luminosity functions, are in good agreement with (1) H0 = 55 ± 5 by Theureau and coworkers from their bias-corrected Tully-Fisher method of "normalized distances" for field galaxies; (2) H0 = 56 ± 4 from the method through the Virgo Cluster, as corrected to the global kinematic frame (Tammann and coworkers); and (3) H0 = 58 ± 5 from Cepheid-calibrated Type Ia supernovae (Saha and coworkers). Our value here disagrees with the final NASA "Key Project" group value of H0 = 70 ± 7. Analysis of the total flux-limited sample of Sb, Sbc, and Sc galaxies in the RSA2 by the present method, but uncorrected for selection bias, would give an incorrect value of H0 = 71 using the same Cepheid calibration. The effect of the bias is pernicious at the 30% level; either it must be corrected by the methods in the papers of this series, or the data must be restricted to the distance-limited subset of any sample, as is done here.
Heterogeneous Multi-Metric Learning for Multi-Sensor Fusion
2011-07-01
distance”. One of the most widely used methods is the k-nearest neighbor (KNN) method [4], which labels an input data sample to be the class with majority...despite its simplicity, it can be an effective candidate and can be easily extended to handle multiple sensors. Distance-based methods such as KNN rely...Neighbor (LMNN) method [21], which will be briefly reviewed in the sequel. The LMNN method tries to learn an optimal metric specifically for the KNN classifier. The
An updated Type II supernova Hubble diagram
NASA Astrophysics Data System (ADS)
Gall, E. E. E.; Kotak, R.; Leibundgut, B.; Taubenberger, S.; Hillebrandt, W.; Kromer, M.; Burgett, W. S.; Chambers, K.; Flewelling, H.; Huber, M. E.; Kaiser, N.; Kudritzki, R. P.; Magnier, E. A.; Metcalfe, N.; Smith, K.; Tonry, J. L.; Wainscoat, R. J.; Waters, C.
2018-03-01
We present photometry and spectroscopy of nine Type II-P/L supernovae (SNe) with redshifts in the 0.045 ≲ z ≲ 0.335 range, with a view to re-examining their utility as distance indicators. Specifically, we apply the expanding photosphere method (EPM) and the standardized candle method (SCM) to each target, and find that both methods yield distances that are in reasonable agreement with each other. The current record-holder for the highest-redshift spectroscopically confirmed supernova (SN) II-P is PS1-13bni (z = 0.335 +0.009/-0.012), and illustrates the promise of Type II SNe as cosmological tools. We updated existing EPM and SCM Hubble diagrams by adding our sample to those previously published. Within the context of Type II SN distance measuring techniques, we investigated two related questions. First, we explored the possibility of utilising spectral lines other than the traditionally used Fe IIλ5169 to infer the photospheric velocity of SN ejecta. Using local well-observed objects, we derive an epoch-dependent relation between the strong Balmer line and Fe IIλ5169 velocities that is applicable 30 to 40 days post-explosion. Motivated in part by the continuum of key observables such as rise time and decline rates exhibited from II-P to II-L SNe, we assessed the possibility of using Hubble-flow Type II-L SNe as distance indicators. These yield similar distances as the Type II-P SNe. Although these initial results are encouraging, a significantly larger sample of SNe II-L would be required to draw definitive conclusions. Tables A.1, A.3, A.5, A.7, A.9, A.11, A.13, A.15 and A.17 are also available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/611/A25
Automated position control of a surface array relative to a liquid microjunction surface sampler
Van Berkel, Gary J.; Kertesz, Vilmos; Ford, Michael James
2007-11-13
A system and method utilizes an image analysis approach for controlling the probe-to-surface distance of a liquid junction-based surface sampling system for use with mass spectrometric detection. Such an approach enables a hands-free formation of the liquid microjunction used to sample solution composition from the surface and for re-optimization, as necessary, of the microjunction thickness during a surface scan to achieve a fully automated surface sampling system.
Writing for Distance Education. Samples Booklet.
ERIC Educational Resources Information Center
International Extension Coll., Cambridge (England).
Approaches to the format, design, and layout of printed instructional materials for distance education are illustrated in 36 samples designed to accompany the manual, "Writing for Distance Education." Each sample is presented on a single page with a note pointing out its key features. Features illustrated include use of typescript layout, a comic…
An Active Tutorial on Distance Sampling
ERIC Educational Resources Information Center
Richardson, Alice
2007-01-01
The technique of distance sampling is widely used to monitor biological populations. This paper documents an in-class activity to introduce students to the concepts and the mechanics of distance sampling in a simple situation that is relevant to their own experiences. Preparation details are described. Variations and extensions to the activity are…
Study of probe-sample distance for biomedical spectra measurement.
Wang, Bowen; Fan, Shuzhen; Li, Lei; Wang, Cong
2011-11-02
Fiber-based optical spectroscopy has been widely used for biomedical applications. However, the effect of probe-sample distance on the collection efficiency has not been well investigated. In this paper, we presented a theoretical model to maximize the illumination and collection efficiency in designing fiber optic probes for biomedical spectra measurement. This model was in general applicable to probes with single or multiple fibers at an arbitrary incident angle. In order to demonstrate the theory, a fluorescence spectrometer was used to measure the fluorescence of human finger skin at various probe-sample distances. The fluorescence spectrum and the total fluorescence intensity were recorded. The theoretical results show that for single fiber probes, contact measurement always provides the best results. While for multi-fiber probes, there is an optimal probe distance. When a 400- μm excitation fiber is used to deliver the light to the skin and another six 400- μm fibers surrounding the excitation fiber are used to collect the fluorescence signal, the experimental results show that human finger skin has very strong fluorescence between 475 nm and 700 nm under 450 nm excitation. The fluorescence intensity is heavily dependent on the probe-sample distance and there is an optimal probe distance. We investigated a number of probe-sample configurations and found that contact measurement could be the primary choice for single-fiber probes, but was very inefficient for multi-fiber probes. There was an optimal probe-sample distance for multi-fiber probes. By carefully choosing the probe-sample distance, the collection efficiency could be enhanced by 5-10 times. Our experiments demonstrated that the experimental results of the probe-sample distance dependence of collection efficiency in multi-fiber probes were in general agreement with our theory.
STELLAR X-RAY SOURCES IN THE CHANDRA COSMOS SURVEY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wright, N. J.; Drake, J. J.; Civano, F., E-mail: nwright@cfa.harvard.ed
2010-12-10
We present an analysis of the X-ray properties of a sample of solar- and late-type field stars identified in the Chandra Cosmic Evolution Survey (COSMOS), a deep (160 ks) and wide (≈0.9 deg²) extragalactic survey. The sample of 60 sources was identified using both morphological and photometric star/galaxy separation methods. We determine X-ray count rates, extract spectra and light curves, and perform spectral fits to determine fluxes and plasma temperatures. Complementary optical and near-IR photometry is also presented and combined with spectroscopy for 48 of the sources to determine spectral types and distances for the sample. We find distances ranging from 30 pc to ≈12 kpc, including a number of the most distant and highly active stellar X-ray sources ever detected. This stellar sample extends the known coverage of the L_X-distance plane to greater distances and higher luminosities, but we do not detect as many intrinsically faint X-ray sources compared to previous surveys. Overall the sample is typically more luminous than the active Sun, representing the high-luminosity end of the disk and halo X-ray luminosity functions. The halo population appears to include both low-activity spectrally hard sources that may be emitting through thermal bremsstrahlung, as well as a number of highly active sources in close binaries.
Kinematic Distances: A Monte Carlo Method
NASA Astrophysics Data System (ADS)
Wenger, Trey V.; Balser, Dana S.; Anderson, L. D.; Bania, T. M.
2018-03-01
Distances to high-mass star-forming regions (HMSFRs) in the Milky Way are a crucial constraint on the structure of the Galaxy. Only kinematic distances are available for a majority of the HMSFRs in the Milky Way. Here, we compare the kinematic and parallax distances of 75 Galactic HMSFRs to assess the accuracy of kinematic distances. We derive the kinematic distances using three different methods: the traditional method using the Brand & Blitz rotation curve (Method A), the traditional method using the Reid et al. rotation curve and updated solar motion parameters (Method B), and a Monte Carlo technique (Method C). Methods B and C produce kinematic distances closest to the parallax distances, with median differences of 13% (0.43 {kpc}) and 17% (0.42 {kpc}), respectively. Except in the vicinity of the tangent point, the kinematic distance uncertainties derived by Method C are smaller than those of Methods A and B. In a large region of the Galaxy, the Method C kinematic distances constrain both the distances and the Galactocentric positions of HMSFRs more accurately than parallax distances. Beyond the tangent point along ℓ = 30°, for example, the Method C kinematic distance uncertainties reach a minimum of 10% of the parallax distance uncertainty at a distance of 14 {kpc}. We develop a prescription for deriving and applying the Method C kinematic distances and distance uncertainties. The code to generate the Method C kinematic distances is publicly available and may be utilized through an online tool.
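To show the general idea of a Monte Carlo kinematic distance, the sketch below assumes a flat rotation curve, resamples the LSR velocity, and propagates each draw to the near and far kinematic distances; the Galactic parameters and measurement values are illustrative, not the rotation curves or the specific Method C prescription used in the paper.

```python
import numpy as np

R0, theta0 = 8.34, 240.0          # kpc, km/s; illustrative Galactic parameters
rng = np.random.default_rng(4)

def kinematic_distances(glong_deg, vlsr, vlsr_err, n=10000):
    """Monte Carlo near/far kinematic distances for a flat rotation curve (sketch)."""
    l = np.radians(glong_deg)
    v = rng.normal(vlsr, vlsr_err, n)
    # Flat curve: v = theta0 * sin(l) * (R0/R - 1)  =>  Galactocentric radius R
    R = R0 * theta0 * np.sin(l) / (theta0 * np.sin(l) + v)
    disc = R**2 - (R0 * np.sin(l))**2
    ok = disc > 0
    root = np.sqrt(disc[ok])
    near = R0 * np.cos(l) - root      # near/far kinematic distance ambiguity
    far = R0 * np.cos(l) + root
    return near, far

near, far = kinematic_distances(30.0, 50.0, 5.0)
print("near: %.2f +/- %.2f kpc" % (np.median(near), near.std()))
print("far : %.2f +/- %.2f kpc" % (np.median(far), far.std()))
```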
On the construction of a time base and the elimination of averaging errors in proxy records
NASA Astrophysics Data System (ADS)
Beelaerts, V.; De Ridder, F.; Bauwens, M.; Schmitz, N.; Pintelon, R.
2009-04-01
Proxies are sources of climate information which are stored in natural archives (e.g. ice-cores, sediment layers on ocean floors and animals with calcareous marine skeletons). Measuring these proxies produces very short records and mostly involves sampling solid substrates, which is subject to the following two problems: Problem 1: Natural archives are equidistantly sampled at a distance grid along their accretion axis. Starting from these distance series, a time series needs to be constructed, as comparison of different data records is only meaningful on a time grid. The time series will be non-equidistant, as the accretion rate is non-constant. Problem 2: A typical example of sampling solid substrates is drilling. Because of the dimensions of the drill, the holes drilled will not be infinitesimally small. Consequently, samples are not taken at a point in distance, but rather over a volume in distance. This holds for most sampling methods in solid substrates. As a consequence, when the continuous proxy signal is sampled, it will be averaged over the volume of the sample, resulting in an underestimation of the amplitude. Whether this averaging effect is significant depends on the volume of the sample and the variations of interest of the proxy signal. Starting from the measured signal, the continuous signal needs to be reconstructed in order to eliminate these averaging errors. The aim is to provide an efficient identification algorithm to identify the non-linearities in the distance-time relationship, called time base distortions, and to correct for the averaging effects. Because this is a parametric method, an assumption about the proxy signal needs to be made: the proxy record on a time base is assumed to be harmonic; this is an obvious assumption because natural archives often exhibit a seasonal cycle. In a first approach the averaging effects are assumed to be in one direction only, i.e. the direction of the axis on which the measurements were performed. The measured averaged proxy signal is modeled by the following signal model: $\bar{y}(n,\theta) = \frac{\Delta}{\delta}\int_{n-\delta/(2\Delta)}^{n+\delta/(2\Delta)} y(m,\theta)\,dm$, where $m$ is the position, $x(m) = \Delta m$; $\theta$ are the unknown parameters and $y(m,\theta)$ is the proxy signal we want to identify (the proxy signal as found in the natural archive), which we model as $y(m,\theta) = A_0 + \sum_{k=1}^{H}\left[A_k \sin(k\omega t(m)) + A_{k+H}\cos(k\omega t(m))\right]$, with $t(m) = m T_S + g(m) T_S$. Here $T_S = 1/f_S$ is the sampling period, $f_S$ the sampling frequency, and $g(m)$ the unknown time base distortion (TBD). In this work a splines approximation of the TBD is chosen: $g(m) = \sum_{l=1}^{b} b_l \phi_l(m)$, where $b$ is a vector of unknown time base distortion parameters and $\phi$ is a set of splines. The estimates of the unknown parameters were obtained with a nonlinear least squares algorithm. The vessel density measured in the mangrove tree R. mucronata was used to illustrate the method. The vessel density is a proxy for the rainfall in tropical regions. The proxy data on the newly constructed time base showed a yearly periodicity, as expected, and the correction for the averaging effect increased the amplitude by 11.18%.
Agreement between methods of measurement of mean aortic wall thickness by MRI.
Rosero, Eric B; Peshock, Ronald M; Khera, Amit; Clagett, G Patrick; Lo, Hao; Timaran, Carlos
2009-03-01
To assess the agreement between three methods of calculation of mean aortic wall thickness (MAWT) using magnetic resonance imaging (MRI). High-resolution MRI of the infrarenal abdominal aorta was performed on 70 subjects with a history of coronary artery disease who were part of a multi-ethnic population-based sample. MAWT was calculated as the mean distance between the adventitial and luminal aortic boundaries using three different methods: average distance at four standard positions (AWT-4P), average distance at 100 automated positions (AWT-100P), and using a mathematical computation derived from the total vessel and luminal areas (AWT-VA). Bland-Altman plots and Passing-Bablok regression analyses were used to assess agreement between methods. Bland-Altman analyses demonstrated a positive bias of 3.02+/-7.31% between the AWT-VA and the AWT-4P methods, and of 1.76+/-6.82% between the AWT-100P and the AWT-4P methods. Passing-Bablok regression analyses demonstrated constant bias between the AWT-4P method and the other two methods. Proportional bias was, however, not evident among the three methods. MRI methods of measurement of MAWT using a limited number of positions of the aortic wall systematically underestimate the MAWT value compared with the method that calculates MAWT from the vessel areas. Copyright (c) 2009 Wiley-Liss, Inc.
Drummond, A; Rodrigo, A G
2000-12-01
Reconstruction of evolutionary relationships from noncontemporaneous molecular samples provides a new challenge for phylogenetic reconstruction methods. With recent biotechnological advances there has been an increase in molecular sequencing throughput, and the potential to obtain serial samples of sequences from populations, including rapidly evolving pathogens, is fast being realized. A new method called the serial-sample unweighted pair grouping method with arithmetic means (sUPGMA) is presented that reconstructs a genealogy or phylogeny of sequences sampled serially in time using a matrix of pairwise distances. The resulting tree depicts the terminal lineages of each sample ending at a different level consistent with the sample's temporal order. Since sUPGMA is a variant of UPGMA, it will perform best when sequences have evolved at a constant rate (i.e., according to a molecular clock). On simulated data, this new method performs better than standard cluster analysis under a variety of longitudinal sampling strategies. Serial-sample UPGMA is particularly useful for analysis of longitudinal samples of viruses and bacteria, as well as ancient DNA samples, with the minimal requirement that samples of sequences be ordered in time.
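For context, classical (contemporaneous) UPGMA is exactly average-linkage clustering on a pairwise distance matrix, which the sketch below runs with scipy on a toy matrix. sUPGMA modifies this by letting tips terminate at their sampling times, which the sketch does not implement; the distance values and labels are made up.

```python
import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import average

# Toy pairwise genetic distance matrix for four serially sampled sequences.
labels = ["s1_t0", "s2_t0", "s3_t1", "s4_t1"]
D = np.array([[0.00, 0.02, 0.05, 0.06],
              [0.02, 0.00, 0.05, 0.06],
              [0.05, 0.05, 0.00, 0.03],
              [0.06, 0.06, 0.03, 0.00]])

Z = average(squareform(D))   # classical UPGMA = average linkage on the condensed distances
print(Z)                     # sUPGMA would additionally offset tips by their sampling times
```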
NASA Astrophysics Data System (ADS)
Zav'yalov, A. S.
2018-04-01
A variant of the method of partial waveguide filling is considered in which a sample is put into a waveguide through holes in wide waveguide walls at the distance equal to a quarter of the wavelength in the waveguide from a short-circuiter, and the total input impedance of the sample in the waveguide is directly measured. The equivalent circuit of the sample is found both without and with account of the hole. It is demonstrated that consideration of the edge effect makes it possible to obtain more exact values of the dielectric permittivity.
Distance determinations to shield galaxies from Hubble space telescope imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
McQuinn, Kristen B. W.; Skillman, Evan D.; Cannon, John M.
The Survey of H I in Extremely Low-mass Dwarf (SHIELD) galaxies is an ongoing multi-wavelength program to characterize the gas, star formation, and evolution in gas-rich, very low-mass galaxies. The galaxies were selected from the first ∼10% of the H I Arecibo Legacy Fast ALFA (ALFALFA) survey based on their inferred low H I mass and low baryonic mass, and all systems have recent star formation. Thus, the SHIELD sample probes the faint end of the galaxy luminosity function for star-forming galaxies. Here, we measure the distances to the 12 SHIELD galaxies to be between 5 and 12 Mpc by applying the tip of the red giant branch method to the resolved stellar populations imaged by the Hubble Space Telescope. Based on these distances, the H I masses in the sample range from 4 × 10⁶ to 6 × 10⁷ M☉, with a median H I mass of 1 × 10⁷ M☉. The tip of the red giant branch distances are up to 73% farther than flow-model estimates in the ALFALFA catalog. Because of the relatively large uncertainties of flow-model distances, we are biased toward selecting galaxies from the ALFALFA catalog where the flow model underestimates the true distances. The measured distances allow for an assessment of the native environments around the sample members. Five of the galaxies are part of the NGC 672 and NGC 784 groups, which together constitute a single structure. One galaxy is part of a larger linear ensemble of nine systems that stretches 1.6 Mpc from end to end. Three galaxies reside in regions with 1-9 neighbors, and four galaxies are truly isolated with no known system identified within a radius of 1 Mpc.
Similarities and differences in cultural values between Iranian and Malaysian nursing students
Abdollahimohammad, Abdolghani; Jaafar, Rogayah; Rahim, Ahmad F. Abul
2014-01-01
Background: Cultural values are invisible and relatively constant in societies. The purpose of the present study is to find diversities in cultural values of Iranian and Malaysian nursing students. Materials and Methods: Convenience sampling method was used for this comparative-descriptive study to gather the data from full-time undergraduate degree nursing students in Iran and Malaysia. The data were collected using Values Survey Module 2008 and were analyzed by independent t-test. Results: The means of power distance, individualism, and uncertainty avoidance values were significantly different between the two study populations. Conclusions: The academics should acknowledge diversities in cultural values, especially in power distance index, to minimize misconceptions in teaching-learning environments. PMID:25400685
Quantum Algorithm for K-Nearest Neighbors Classification Based on the Metric of Hamming Distance
NASA Astrophysics Data System (ADS)
Ruan, Yue; Xue, Xiling; Liu, Heng; Tan, Jianing; Li, Xi
2017-11-01
K-nearest neighbors (KNN) algorithm is a common algorithm used for classification, and also a sub-routine in various complicated machine learning tasks. In this paper, we presented a quantum algorithm (QKNN) for implementing this algorithm based on the metric of Hamming distance. We put forward a quantum circuit for computing the Hamming distance between a testing sample and each feature vector in the training set. Taking advantage of this method, we realized a good analog of the classical KNN algorithm by setting a distance threshold value t to select the k-nearest neighbors. As a result, QKNN achieves O(n³) performance, which depends only on the dimension of the feature vectors, and high classification accuracy, outperforming Lloyd's algorithm (Lloyd et al. 2013) and Wiebe's algorithm (Wiebe et al. 2014).
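The classical procedure that QKNN emulates is simply KNN with Hamming distance on binary feature vectors. The sketch below shows that classical analog on a toy training set; it is not an implementation of the quantum circuit.

```python
import numpy as np
from collections import Counter

def knn_hamming(train_X, train_y, x, k=3):
    """Classify binary vector x by majority vote of its k Hamming-nearest neighbours (sketch)."""
    d = np.count_nonzero(train_X != x, axis=1)   # Hamming distances to all training vectors
    nearest = np.argsort(d)[:k]
    return Counter(train_y[nearest]).most_common(1)[0][0]

train_X = np.array([[0, 0, 1, 1], [0, 1, 1, 1], [1, 0, 0, 0], [1, 1, 0, 0]])
train_y = np.array(["A", "A", "B", "B"])
print(knn_hamming(train_X, train_y, np.array([0, 0, 1, 0])))   # -> "A"
```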
Problems with sampling desert tortoises: A simulation analysis based on field data
Freilich, J.E.; Camp, R.J.; Duda, J.J.; Karl, A.E.
2005-01-01
The desert tortoise (Gopherus agassizii) was listed as a U.S. threatened species in 1990 based largely on population declines inferred from mark-recapture surveys of 2.59-km² (1-mi²) plots. Since then, several census methods have been proposed and tested, but all methods still pose logistical or statistical difficulties. We conducted computer simulations using actual tortoise location data from 2 1-mi² plot surveys in southern California, USA, to identify strengths and weaknesses of current sampling strategies. We considered tortoise population estimates based on these plots as "truth" and then tested various sampling methods based on sampling smaller plots or transect lines passing through the mile squares. Data were analyzed using Schnabel's mark-recapture estimate and program CAPTURE. Experimental subsampling with replacement of the 1-mi² data using 1-km² and 0.25-km² plot boundaries produced data sets of smaller plot sizes, which we compared to estimates from the 1-mi² plots. We also tested distance sampling by saturating a 1-mi² site with computer simulated transect lines, once again evaluating bias in density estimates. Subsampling estimates from 1-km² plots did not differ significantly from the estimates derived at 1-mi². The 0.25-km² subsamples significantly overestimated population sizes, chiefly because too few recaptures were made. Distance sampling simulations were biased 80% of the time and had high coefficient of variation to density ratios. Furthermore, a prospective power analysis suggested limited ability to detect population declines as high as 50%. We concluded that poor performance and bias of both sampling procedures was driven by insufficient sample size, suggesting that all efforts must be directed to increasing numbers found in order to produce reliable results. Our results suggest that present methods may not be capable of accurately estimating desert tortoise populations.
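The Schnabel mark-recapture estimate used in these simulations has a simple closed form; one common version (with a +1 correction in the denominator) is sketched below with made-up catch data, purely to illustrate the estimator rather than the study's analysis.

```python
import numpy as np

def schnabel(catches, recaptures):
    """Schnabel mark-recapture estimate of population size (sketch).

    catches[t]    : number of animals caught on occasion t
    recaptures[t] : how many of those were already marked
    The marked pool before occasion t is the cumulative number of new animals marked earlier.
    """
    catches = np.asarray(catches, float)
    recaptures = np.asarray(recaptures, float)
    new_marks = catches - recaptures
    marked_before = np.concatenate(([0.0], np.cumsum(new_marks)[:-1]))
    return (catches * marked_before).sum() / (recaptures.sum() + 1)

print(schnabel([30, 28, 32, 25], [0, 6, 11, 14]))   # illustrative data only
```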
Enhanced Conformational Sampling Using Replica Exchange with Collective-Variable Tempering.
Gil-Ley, Alejandro; Bussi, Giovanni
2015-03-10
The computational study of conformational transitions in RNA and proteins with atomistic molecular dynamics often requires suitable enhanced sampling techniques. We here introduce a novel method where concurrent metadynamics are integrated in a Hamiltonian replica-exchange scheme. The ladder of replicas is built with different strengths of the bias potential exploiting the tunability of well-tempered metadynamics. Using this method, free-energy barriers of individual collective variables are significantly reduced compared with simple force-field scaling. The introduced methodology is flexible and allows adaptive bias potentials to be self-consistently constructed for a large number of simple collective variables, such as distances and dihedral angles. The method is tested on alanine dipeptide and applied to the difficult problem of conformational sampling in a tetranucleotide.
An Optical Sensor for Measuring the Position and Slanting Direction of Flat Surfaces
Chen, Yu-Ta; Huang, Yen-Sheng; Liu, Chien-Sheng
2016-01-01
Automated optical inspection is a very important technique. For this reason, this study proposes an optical non-contact slanting surface measuring system. The essential features of the measurement system are obtained through simulations using the optical design software Zemax. The actual propagation of laser beams within the measurement system is traced by using a homogeneous transformation matrix (HTM), the skew-ray tracing method, and a first-order Taylor series expansion. Additionally, a complete mathematical model that describes the variations in light spots on photoelectric sensors and the corresponding changes in the sample orientation and distance was established. Finally, a laboratory prototype system was constructed on an optical bench to verify experimentally the proposed system. This measurement system can simultaneously detect the slanting angles (x, z) in the x and z directions of the sample and the distance (y) between the biconvex lens and the flat sample surface. PMID:27409619
Yokoyama, Hidekatsu
2012-01-01
Direct irradiation of a sample using a quartz oscillator operating at 250 MHz was performed for EPR measurements. Because a quartz oscillator is a frequency fixed oscillator, the operating frequency of an EPR resonator (loop-gap type) was tuned to that of the quartz oscillator by using a single-turn coil with a varactor diode attached (frequency shift coil). Because the frequency shift coil was mobile, the distance between the EPR resonator and the coil could be changed. Coarse control of the resonant frequency was achieved by changing this distance mechanically, while fine frequency control was implemented by changing the capacitance of the varactor electrically. In this condition, EPR measurements of a phantom (comprised of agar with a nitroxide radical and physiological saline solution) were made. To compare the presented method with a conventional method, the EPR measurements were also done by using a synthesizer at the same EPR frequency. In the conventional method, the noise level increased at high irradiation power. Because such an increase in the noise was not observed in the presented method, high sensitivity was obtained at high irradiation power. Copyright © 2011 Elsevier Inc. All rights reserved.
Fast Ordered Sampling of DNA Sequence Variants.
Greenberg, Anthony J
2018-05-04
Explosive growth in the amount of genomic data is matched by increasing power of consumer-grade computers. Even applications that require powerful servers can be quickly tested on desktop or laptop machines if we can generate representative samples from large data sets. I describe a fast and memory-efficient implementation of an on-line sampling method developed for tape drives 30 years ago. Focusing on genotype files, I test the performance of this technique on modern solid-state and spinning hard drives, and show that it performs well compared to a simple sampling scheme. I illustrate its utility by developing a method to quickly estimate genome-wide patterns of linkage disequilibrium (LD) decay with distance. I provide open-source software that samples loci from several variant format files, a separate program that performs LD decay estimates, and a C++ library that lets developers incorporate these methods into their own projects. Copyright © 2018 Greenberg.
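The general idea behind such one-pass ordered sampling can be shown with the classic selection-sampling scheme: visit records in order and include each with probability (still needed)/(still remaining). The sketch below illustrates that idea in a few lines; it is not the author's implementation or the tape-drive-era algorithm itself, only the simplest sequential variant.

```python
import random

def ordered_sample(n_records, k, seed=0):
    """One-pass selection sampling: k indices drawn uniformly without replacement, in order (sketch)."""
    rng = random.Random(seed)
    chosen, needed = [], k
    for i in range(n_records):
        remaining = n_records - i
        if rng.random() < needed / remaining:   # include record i with prob needed/remaining
            chosen.append(i)
            needed -= 1
            if needed == 0:
                break
    return chosen

print(ordered_sample(1_000_000, 10))   # 10 ordered indices from a million records, one pass
```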
Müller, M; Heumann, K G
2000-09-01
An isotope dilution inductively coupled plasma quadrupole mass spectrometric (ID-ICP-QMS) method was developed for the simultaneous determination of the platinum group elements Pt, Pd, Ru, and Ir in environmental samples. Spike solutions, enriched with the isotopes ¹⁹⁴Pt, ¹⁰⁸Pd, ⁹⁹Ru, and ¹⁹¹Ir, were used for the isotope dilution step. Interfering elements were eliminated by chromatographic separation using an anion-exchange resin. Samples were dissolved with aqua regia in a high pressure asher. Additional dissolution of possible silicate portions by hydrofluoric acid was usually not necessary. Detection limits of 0.15 ng g⁻¹, 0.075 ng g⁻¹, and 0.015 ng g⁻¹ were achieved for Pt, Pd, Ru, and Ir, respectively, using sample weights of only 0.2 g. The reliability of the ID-ICP-QMS method was demonstrated by analyzing a Canadian geological reference material and by participating in an interlaboratory study for the determination of platinum and palladium in a homogenized road dust sample. Surface soil, sampled at different distances from a highway, showed concentrations in the range of 0.1-87 ng g⁻¹. An exponential decrease of the platinum and palladium concentration with increasing distance and a small anthropogenic contribution to the natural background concentration of ruthenium and iridium was found in these samples.
Measurement and classification methods using the ASAE S572-1 reference nozzles
USDA-ARS?s Scientific Manuscript database
An increasing number of spray nozzle and agrochemical manufacturers are incorporating droplet size measurements into both research and development with each laboratory invariably having their own sampling setup and procedures, particularly with regard to both measurement distance from the nozzle and...
Farnsworth, G.L.; Nichols, J.D.; Sauer, J.R.; Fancy, S.G.; Pollock, K.H.; Shriner, S.A.; Simons, T.R.; Ralph, C. John; Rich, Terrell D.
2005-01-01
Point counts are a standard sampling procedure for many bird species, but lingering concerns still exist about the quality of information produced from the method. It is well known that variation in observer ability and environmental conditions can influence the detection probability of birds in point counts, but many biologists have been reluctant to abandon point counts in favor of more intensive approaches to counting. However, over the past few years a variety of statistical and methodological developments have begun to provide practical ways of overcoming some of the problems with point counts. We describe some of these approaches, and show how they can be integrated into standard point count protocols to greatly enhance the quality of the information. Several tools now exist for estimation of detection probability of birds during counts, including distance sampling, double observer methods, time-depletion (removal) methods, and hybrid methods that combine these approaches. Many counts are conducted in habitats that make auditory detection of birds much more likely than visual detection. As a framework for understanding detection probability during such counts, we propose separating two components of the probability a bird is detected during a count into (1) the probability a bird vocalizes during the count and (2) the probability this vocalization is detected by an observer. In addition, we propose that some measure of the area sampled during a count is necessary for valid inferences about bird populations. This can be done by employing fixed-radius counts or more sophisticated distance-sampling models. We recommend any studies employing point counts be designed to estimate detection probability and to include a measure of the area sampled.
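The proposed decomposition of detection probability for auditory counts, and the density correction it feeds into, can be written compactly as below; the symbols are illustrative notation rather than the authors' own.

```latex
% p_v      = Pr(bird vocalizes during the count)
% p_{d|v}  = Pr(observer detects a given vocalization)
% n        = number of birds counted, A = area effectively sampled
\[
  p_{\mathrm{detect}} = p_v \, p_{d\mid v},
  \qquad
  \hat{D} = \frac{n}{p_{\mathrm{detect}}\, A}
\]
```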
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hardcastle, Nicholas; Bayliss, Adam; Wong, Jeannie Hsiu Ding
2012-08-15
Purpose: A recent field safety notice from TomoTherapy detailed the underdosing of small, off-axis targets when receiving high doses per fraction. This is due to angular undersampling in the dose calculation gantry angles. This study evaluates a correction method to reduce the underdosing, to be implemented in the current version (v4.1) of the TomoTherapy treatment planning software. Methods: The correction method, termed 'Super Sampling', involved tripling the number of gantry angles from which the dose is calculated during optimization and dose calculation. Radiochromic film was used to measure the dose to small targets at various off-axis distances receiving a minimum of 21 Gy in one fraction. Measurements were also performed for single small targets at the center of the Lucy phantom, using radiochromic film and the dose magnifying glass (DMG). Results: Without super sampling, the peak dose deficit increased from 0% to 18% for a 10 mm target and 0% to 30% for a 5 mm target as off-axis target distances increased from 0 to 16.5 cm. When super sampling was turned on, the dose deficit trend was removed and all peak doses were within 5% of the planned dose. For measurements in the Lucy phantom at 9.7 cm off-axis, the positional and dose magnitude accuracy using super sampling was verified using radiochromic film and the DMG. Conclusions: A correction method implemented in the TomoTherapy treatment planning system which triples the angular sampling of the gantry angles used during optimization and dose calculation removes the underdosing for targets as small as 5 mm diameter, up to 16.5 cm off-axis, receiving up to 21 Gy.
NASA Astrophysics Data System (ADS)
Pisani, Marco; Astrua, Milena; Zucco, Massimo
2018-02-01
We present a method to measure the temperature along the path of an optical interferometer based on the propagation of acoustic waves. It exploits the high sensitivity of the speed of sound to air temperature. In particular, it takes advantage of a technique where the generation of acoustic waves is synchronous with the amplitude modulation of a laser source. A photodetector converts the laser light into an electronic signal used as a reference, while the incoming acoustic waves are focused on a microphone and generate the measuring signal. Under this condition, the phase difference between the two signals substantially depends on the temperature of the air volume interposed between the sources and the receivers. A comparison with traditional temperature sensors highlighted the limit of the latter in the case of fast temperature variations and the advantage of a measurement integrated along the optical path instead of a sampling measurement. The capability of the acoustic method to compensate for the interferometric distance measurements due to air temperature variations has been demonstrated to the level of 0.1 °C corresponding to 10-7 on the refractive index of air. We applied the method indoor for distances up to 27 m, outdoor at 78 m and finally tested the acoustic thermometer over a distance of 182 m.
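The principle behind the acoustic thermometer is that the speed of sound in air depends almost entirely on temperature, so a measured propagation time over a known path can be inverted into a path-averaged temperature. The sketch below is a minimal illustration assuming dry air and the textbook approximation c(T) ≈ 331.3·sqrt(1 + T/273.15) m/s; function and parameter names are ours.

```python
import math

def air_temperature_from_travel_time(path_length_m, travel_time_s):
    """Invert t = L / c(T) with c(T) ~= 331.3*sqrt(1 + T/273.15) m/s (dry air).
    Returns the path-averaged temperature in degrees Celsius."""
    c = path_length_m / travel_time_s              # measured speed of sound
    return 273.15 * ((c / 331.3) ** 2 - 1.0)

# Example: a 27 m path traversed in about 78.7 ms corresponds to roughly 20 degC
print(round(air_temperature_from_travel_time(27.0, 27.0 / 343.2), 1))
```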
Peculiarities of the detection and identification of substance at long distance
NASA Astrophysics Data System (ADS)
Trofimov, Vyacheslav A.; Varentsova, Svetlana A.; Trofimov, Vladislav V.; Tikhomirov, Vasily V.
2014-05-01
The detection and identification of dangerous substances at long distance (several meters, for example) using a THz pulse reflected from the object is an important problem. In this report we demonstrate the possibility of measuring a THz signal reflected from an object placed in front of a flat metallic mirror. The distance between the flat mirror and the parabolic mirror is 3.5 meters; at present, therefore, our measurements contain features of both the transmission and reflection modes. The reflecting mirror is used because of the weak average power of the femtosecond laser employed. Measurements were performed at room temperature and at a humidity of about 60%. The aim of the investigation was the detection of a substance under realistic conditions; chocolate and cookies were used as samples for identification. We also discuss modified correlation criteria for the detection and identification of various substances using a pulsed THz signal in transmission and reflection modes at short distances of about 30-40 cm. These criteria are integral criteria in time and are based on the SDA method. The proposed algorithms show both a high probability of substance identification and practical feasibility. We compare the P-spectrum and SDA methods and show that the P-spectrum method is a special case of the SDA method.
A straightforward method to compute average stochastic oscillations from data samples.
Júlvez, Jorge
2015-10-19
Many biological systems exhibit sustained stochastic oscillations in their steady state. Assessing these oscillations is usually a challenging task due to the potential variability of the amplitude and frequency of the oscillations over time. As a result of this variability, when several stochastic replications are averaged, the oscillations are flattened and can be overlooked. This can easily lead to the erroneous conclusion that the system reaches a constant steady state. This paper proposes a straightforward method to detect and assess stochastic oscillations. The method is based on the use of polar coordinates for systems with two species, and cylindrical coordinates for systems with more than two species. By slightly modifying these coordinate systems, it is possible to compute the total angular distance run by the system and the average Euclidean distance to a reference point. This allows us to compute confidence intervals, both for the average angular speed and for the distance to a reference point, from a set of replications. The use of polar (or cylindrical) coordinates provides a new perspective of the system dynamics. The mean trajectory that can be obtained by averaging the usual Cartesian coordinates of the samples informs about the trajectory of the center of mass of the replications. In contrast to such a mean Cartesian trajectory, the mean polar trajectory can be used to compute the average circular motion of those replications, and therefore can yield evidence about sustained steady state oscillations. Both the coordinate transformation and the computation of confidence intervals can be carried out efficiently. This results in an efficient method to evaluate stochastic oscillations.
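A minimal sketch of the polar-coordinate idea for a two-species system is given below; it assumes each replication is available as a pair of coordinate arrays and a user-chosen reference point, and it reports the unwrapped angle swept around that point together with the mean distance to it. Names and the toy data are ours, not the paper's.

```python
import numpy as np

def polar_summary(x, y, ref=(0.0, 0.0)):
    """For one replication of a two-species trajectory, return the unwrapped
    angle swept around `ref` and the mean Euclidean distance to `ref`."""
    dx, dy = np.asarray(x) - ref[0], np.asarray(y) - ref[1]
    theta = np.unwrap(np.arctan2(dy, dx))     # continuous angle along the run
    radius = np.hypot(dx, dy)
    return abs(theta[-1] - theta[0]), radius.mean()

# Example: average angular speed and radius over noisy circular replications
rng = np.random.default_rng(0)
t = np.linspace(0, 20, 2001)
speeds, radii = [], []
for _ in range(50):
    noise = rng.normal(0, 0.05, (2, t.size))
    ang, rad = polar_summary(np.cos(t) + noise[0], np.sin(t) + noise[1])
    speeds.append(ang / t[-1])                # radians per unit time
    radii.append(rad)
print(np.mean(speeds), np.mean(radii))        # ~1.0 rad/time, radius ~1.0
```

Averaging in Cartesian coordinates would flatten these noisy circles toward the origin, whereas the polar summaries retain the oscillation, which is the point the abstract makes.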
Virtual-reality techniques resolve the visual cues used by fruit flies to evaluate object distances.
Schuster, Stefan; Strauss, Roland; Götz, Karl G
2002-09-17
Insects can estimate distance or time-to-contact of surrounding objects from locomotion-induced changes in their retinal position and/or size. Freely walking fruit flies (Drosophila melanogaster) use the received mixture of different distance cues to select the nearest objects for subsequent visits. Conventional methods of behavioral analysis fail to elucidate the underlying data extraction. Here we demonstrate first comprehensive solutions of this problem by substituting virtual for real objects; a tracker-controlled 360 degrees panorama converts a fruit fly's changing coordinates into object illusions that require the perception of specific cues to appear at preselected distances up to infinity. An application reveals the following: (1) en-route sampling of retinal-image changes accounts for distance discrimination within a surprising range of at least 8-80 body lengths (20-200 mm). Stereopsis and peering are not involved. (2) Distance from image translation in the expected direction (motion parallax) outweighs distance from image expansion, which accounts for impact-avoiding flight reactions to looming objects. (3) The ability to discriminate distances is robust to artificially delayed updating of image translation. Fruit flies appear to interrelate self-motion and its visual feedback within a surprisingly long time window of about 2 s. The comparative distance inspection practiced in the small fruit fly deserves utilization in self-moving robots.
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Grijs, Richard; Wicker, James E.; Bono, Giuseppe
2014-05-01
The distance to the Large Magellanic Cloud (LMC) represents a key local rung of the extragalactic distance ladder yet the galaxy's distance modulus has long been an issue of contention, in particular in view of claims that most newly determined distance moduli cluster tightly—and with a small spread—around the 'canonical' distance modulus, (m – M)₀ = 18.50 mag. We compiled 233 separate LMC distance determinations published between 1990 and 2013. Our analysis of the individual distance moduli, as well as of their two-year means and standard deviations resulting from this largest data set of LMC distance moduli available to date, focuses specifically on Cepheid and RR Lyrae variable-star tracer populations, as well as on distance estimates based on features in the observational Hertzsprung-Russell diagram. We conclude that strong publication bias is unlikely to have been the main driver of the majority of published LMC distance moduli. However, for a given distance tracer, the body of publications leading to the tightly clustered distances is based on highly non-independent tracer samples and analysis methods, hence leading to significant correlations among the LMC distances reported in subsequent articles. Based on a careful, weighted combination, in a statistical sense, of the main stellar population tracers, we recommend that a slightly adjusted canonical distance modulus of (m – M)₀ = 18.49 ± 0.09 mag be used for all practical purposes that require a general distance scale without the need for accuracies of better than a few percent.
Tanner, Timo; Antikainen, Osmo; Ehlers, Henrik; Yliruusi, Jouko
2017-06-30
With modern tableting machines large amounts of tablets are produced with high output. Consequently, methods to examine powder compression in a high-velocity setting are in demand. In the present study, a novel gravitation-based method was developed to examine powder compression. A steel bar is dropped on a punch to compress microcrystalline cellulose and starch samples inside the die. The distance of the bar is being read by a high-accuracy laser displacement sensor which provides a reliable distance-time plot for the bar movement. In-die height and density of the compact can be seen directly from this data, which can be examined further to obtain information on velocity, acceleration and energy distribution during compression. The energy consumed in compact formation could also be seen. Despite the high vertical compression speed, the method was proven to be cost-efficient, accurate and reproducible. Copyright © 2017 Elsevier B.V. All rights reserved.
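The post-processing described (distance-time trace to velocity, acceleration, and energy) amounts to numerical differentiation of the sensor data. The sketch below is a minimal, illustrative version of that step; the bar mass, sampling rate, and synthetic free-fall trace are assumptions, not values from the study.

```python
import numpy as np

def bar_kinematics(time_s, distance_m, bar_mass_kg):
    """Differentiate a distance-time trace of the falling bar to obtain
    velocity, acceleration and kinetic energy (illustrative post-processing)."""
    v = np.gradient(distance_m, time_s)          # m/s
    a = np.gradient(v, time_s)                   # m/s^2
    e_kin = 0.5 * bar_mass_kg * v ** 2           # J
    return v, a, e_kin

# Example with synthetic free-fall data sampled at 10 kHz
t = np.arange(0.0, 0.1, 1e-4)
d = 0.5 * 9.81 * t ** 2
v, a, e = bar_kinematics(t, d, bar_mass_kg=1.0)
print(v[-1], a[-1], e[-1])   # ~0.98 m/s, ~9.81 m/s^2, ~0.48 J
```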
Shear wave speed estimation by adaptive random sample consensus method.
Lin, Haoming; Wang, Tianfu; Chen, Siping
2014-01-01
This paper describes a new method for shear wave velocity estimation that is capable of excluding outliers automatically without a preset threshold. The proposed method is an adaptive random sample consensus (ARANDSAC), and the metric used here is finding a certain percentage of inliers according to the closest-distance criterion. To evaluate the method, simulation and phantom experiment results were compared using linear regression with all points (LRWAP) and the radon sum transform (RS) method. The assessment reveals that the relative biases of the mean estimates are 20.00%, 4.67% and 5.33% for LRWAP, ARANDSAC and RS, respectively, in simulation, and 23.53%, 4.08% and 1.08% in the phantom experiment. The results suggest that the proposed ARANDSAC algorithm is accurate for shear wave speed estimation.
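For orientation, a basic RANSAC line fit applied to shear wave arrival times is sketched below: the slope of arrival time versus lateral position gives the inverse wave speed, and random two-point hypotheses reject outliers. This is the standard fixed-threshold RANSAC, not the authors' adaptive variant; the threshold, iteration count, and toy data are assumptions.

```python
import numpy as np

def ransac_wave_speed(position_mm, arrival_ms, n_iter=500, thresh_ms=0.05, seed=0):
    """Fit arrival time vs. lateral position with a basic RANSAC line fit;
    the inverse slope is the shear wave speed (mm/ms, i.e. m/s)."""
    rng = np.random.default_rng(seed)
    x, t = np.asarray(position_mm, float), np.asarray(arrival_ms, float)
    best_inliers = np.zeros(x.size, bool)
    for _ in range(n_iter):
        i, j = rng.choice(x.size, 2, replace=False)
        if x[i] == x[j]:
            continue
        slope = (t[j] - t[i]) / (x[j] - x[i])
        intercept = t[i] - slope * x[i]
        inliers = np.abs(t - (slope * x + intercept)) < thresh_ms
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    slope, _ = np.polyfit(x[best_inliers], t[best_inliers], 1)
    return 1.0 / slope

# Example: true speed 2 m/s with one gross outlier in the arrival times
x = np.linspace(0, 10, 11)
t = x / 2.0
t[4] += 3.0                                  # outlier
print(round(ransac_wave_speed(x, t), 2))     # ~2.0 m/s
```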
Distances to Nearby Galaxies via Long Period Variables
NASA Astrophysics Data System (ADS)
Jurcevic, John S.
A new method of measuring extra-Galactic distances has been developed based on the relationship between the luminosity of red supergiant variable (RSV) stars at optical wavelengths and the period of their luminosity variation. This period-luminosity (PL) relationship has been calibrated with RSVs from the Galactic Perseus OB1 association, the Large Magellanic Cloud, and M33 in the broadband optical R and I-bands, in a narrow part of the I-band at 8250 Å, and in the infrared K-band. By using these RSV PL relations, the distances to a sample of nearby galaxies (M101, NGC 2403, and NGC 2366) were determined. These galaxies were chosen because they had existing Cepheid based distances which allowed for a comparison between the two methods and provided a means of verifying the effectiveness of the RSV PL relation. The galaxies were also chosen to span a range of metallicity to allow an investigation of any effects due to metallicity differences. Photometry in the R-band was obtained over a period of three years for the galaxies with a coverage of 20, 17, and 13 epochs for M101, NGC 2403, and NGC 2366, respectively. By looking for red variable stars with periods in the range 100-1200 days the total number of RSVs discovered in the three galaxies was 123. Assuming a distance modulus for the Large Magellanic Cloud of 18.5 +/- 0.1 mag, single epoch I-band photometry of the RSVs was used to construct random phase PL relations resulting in distance moduli for M101, NGC 2403, and NGC 2366 of 29.40 +/- 0.16, 27.67 +/- 0.16, and 27.86 +/- 0.20 mag, respectively. Similarly, PL relations were also found using phase averaged R-band magnitudes which produced distance moduli of 29.09 +/- 0.16, 27.56 +/- 0.16, and 27.76 +/- 0.21 mag, respectively. These distances have been corrected for extinction by assuming values of E(B - V) = 0.10, 0.04, and 0.04 mag. The distances derived agree with those found via Cepheids which indicates that RSVs provide a very useful new method for measuring distances.
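The distance moduli quoted above translate into physical distances through the standard textbook relation (not specific to this thesis); for example, the assumed modulus of 29.40 mag for M101 corresponds to roughly 7.6 Mpc.

```latex
\[
  \mu = m - M = 5 \log_{10}\!\left(\frac{d}{10\,\mathrm{pc}}\right)
  \quad\Longleftrightarrow\quad
  d = 10^{\,\mu/5 + 1}\ \mathrm{pc}
\]
```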
Bonetti, Marco; Pagano, Marcello
2005-03-15
The topic of this paper is the distribution of the distance between two points distributed independently in space. We illustrate the use of this interpoint distance distribution to describe the characteristics of a set of points within some fixed region. The properties of its sample version, and thus the inference about this function, are discussed both in the discrete and in the continuous setting. We illustrate its use in the detection of spatial clustering by application to a well-known leukaemia data set, and report on the results of a simulation experiment designed to study the power characteristics of the methods within that study region and in an artificial homogenous setting. Copyright (c) 2004 John Wiley & Sons, Ltd.
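The sample version of the interpoint distance distribution is simply the empirical CDF of all pairwise distances in the point set. A minimal sketch, with illustrative data and names of our choosing, is:

```python
import numpy as np
from scipy.spatial.distance import pdist

def interpoint_distance_ecdf(points):
    """Empirical CDF of the distances between all pairs of points.
    Returns the sorted distances and the corresponding ECDF values."""
    d = np.sort(pdist(np.asarray(points, float)))
    return d, np.arange(1, d.size + 1) / d.size

# Example: 200 uniform points in the unit square
rng = np.random.default_rng(42)
dist, ecdf = interpoint_distance_ecdf(rng.uniform(size=(200, 2)))
print(dist[ecdf.searchsorted(0.5)])   # median interpoint distance, roughly 0.5
```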
Hsiao, Chiaowen; Liu, Mengya; Stanton, Rick; McGee, Monnie; Qian, Yu
2015-01-01
Abstract Flow cytometry (FCM) is a fluorescence‐based single‐cell experimental technology that is routinely applied in biomedical research for identifying cellular biomarkers of normal physiological responses and abnormal disease states. While many computational methods have been developed that focus on identifying cell populations in individual FCM samples, very few have addressed how the identified cell populations can be matched across samples for comparative analysis. This article presents FlowMap‐FR, a novel method for cell population mapping across FCM samples. FlowMap‐FR is based on the Friedman–Rafsky nonparametric test statistic (FR statistic), which quantifies the equivalence of multivariate distributions. As applied to FCM data by FlowMap‐FR, the FR statistic objectively quantifies the similarity between cell populations based on the shapes, sizes, and positions of fluorescence data distributions in the multidimensional feature space. To test and evaluate the performance of FlowMap‐FR, we simulated the kinds of biological and technical sample variations that are commonly observed in FCM data. The results show that FlowMap‐FR is able to effectively identify equivalent cell populations between samples under scenarios of proportion differences and modest position shifts. As a statistical test, FlowMap‐FR can be used to determine whether the expression of a cellular marker is statistically different between two cell populations, suggesting candidates for new cellular phenotypes by providing an objective statistical measure. In addition, FlowMap‐FR can indicate situations in which inappropriate splitting or merging of cell populations has occurred during gating procedures. We compared the FR statistic with the symmetric version of Kullback–Leibler divergence measure used in a previous population matching method with both simulated and real data. The FR statistic outperforms the symmetric version of KL‐distance in distinguishing equivalent from nonequivalent cell populations. FlowMap‐FR was also employed as a distance metric to match cell populations delineated by manual gating across 30 FCM samples from a benchmark FlowCAP data set. An F‐measure of 0.88 was obtained, indicating high precision and recall of the FR‐based population matching results. FlowMap‐FR has been implemented as a standalone R/Bioconductor package so that it can be easily incorporated into current FCM data analytical workflows. © 2015 International Society for Advancement of Cytometry PMID:26274018
Hsiao, Chiaowen; Liu, Mengya; Stanton, Rick; McGee, Monnie; Qian, Yu; Scheuermann, Richard H
2016-01-01
Flow cytometry (FCM) is a fluorescence-based single-cell experimental technology that is routinely applied in biomedical research for identifying cellular biomarkers of normal physiological responses and abnormal disease states. While many computational methods have been developed that focus on identifying cell populations in individual FCM samples, very few have addressed how the identified cell populations can be matched across samples for comparative analysis. This article presents FlowMap-FR, a novel method for cell population mapping across FCM samples. FlowMap-FR is based on the Friedman-Rafsky nonparametric test statistic (FR statistic), which quantifies the equivalence of multivariate distributions. As applied to FCM data by FlowMap-FR, the FR statistic objectively quantifies the similarity between cell populations based on the shapes, sizes, and positions of fluorescence data distributions in the multidimensional feature space. To test and evaluate the performance of FlowMap-FR, we simulated the kinds of biological and technical sample variations that are commonly observed in FCM data. The results show that FlowMap-FR is able to effectively identify equivalent cell populations between samples under scenarios of proportion differences and modest position shifts. As a statistical test, FlowMap-FR can be used to determine whether the expression of a cellular marker is statistically different between two cell populations, suggesting candidates for new cellular phenotypes by providing an objective statistical measure. In addition, FlowMap-FR can indicate situations in which inappropriate splitting or merging of cell populations has occurred during gating procedures. We compared the FR statistic with the symmetric version of Kullback-Leibler divergence measure used in a previous population matching method with both simulated and real data. The FR statistic outperforms the symmetric version of KL-distance in distinguishing equivalent from nonequivalent cell populations. FlowMap-FR was also employed as a distance metric to match cell populations delineated by manual gating across 30 FCM samples from a benchmark FlowCAP data set. An F-measure of 0.88 was obtained, indicating high precision and recall of the FR-based population matching results. FlowMap-FR has been implemented as a standalone R/Bioconductor package so that it can be easily incorporated into current FCM data analytical workflows. © The Authors. Published by Wiley Periodicals, Inc. on behalf of ISAC.
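The Friedman-Rafsky statistic that underlies FlowMap-FR can be illustrated with a short sketch: pool the two cell populations, build a Euclidean minimum spanning tree over the pooled events, and count edges that join points from different samples; under equivalent distributions many such cross-sample edges are expected, while a shifted population yields few. This is a generic illustration of the FR run count, not the FlowMap-FR package itself, and the data are simulated.

```python
import numpy as np
from scipy.spatial import distance_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

def fr_between_sample_edges(sample_a, sample_b):
    """Raw Friedman-Rafsky statistic: number of MST edges joining points
    from different samples (small values suggest different distributions)."""
    pooled = np.vstack([sample_a, sample_b])
    labels = np.r_[np.zeros(len(sample_a)), np.ones(len(sample_b))]
    mst = minimum_spanning_tree(distance_matrix(pooled, pooled)).tocoo()
    return int(np.sum(labels[mst.row] != labels[mst.col]))

rng = np.random.default_rng(0)
same = fr_between_sample_edges(rng.normal(0, 1, (100, 3)), rng.normal(0, 1, (100, 3)))
diff = fr_between_sample_edges(rng.normal(0, 1, (100, 3)), rng.normal(3, 1, (100, 3)))
print(same, diff)   # many cross-sample edges when equivalent, few when shifted
```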
Cunningham, Daniel J; Shearer, David A; Carter, Neil; Drawer, Scott; Pollard, Ben; Bennett, Mark; Eager, Robin; Cook, Christian J; Farrell, John; Russell, Mark; Kilduff, Liam P
2018-01-01
The assessment of competitive movement demands in team sports has traditionally relied upon global positioning system (GPS) analyses presented as fixed-time epochs (e.g., 5-40 min). More recently, presenting game data as a rolling average has become prevalent due to concerns over a loss of sampling resolution associated with the windowing of data over fixed periods. Accordingly, this study compared rolling average (ROLL) and fixed-time (FIXED) epochs for quantifying the peak movement demands of international rugby union match-play as a function of playing position. Elite players from three different squads (n = 119) were monitored using 10 Hz GPS during 36 matches played in the 2014-2017 seasons. Players categorised broadly as forwards and backs, and then by positional sub-group (FR: front row, SR: second row, BR: back row, HB: half back, MF: midfield, B3: back three) were monitored during match-play for peak values of high-speed running (>5 m·s-1; HSR) and relative distance covered (m·min-1) over 60-300 s using two types of sample-epoch (ROLL, FIXED). Irrespective of the method used, as the epoch length increased, values for the intensity of running actions decreased (e.g., For the backs using the ROLL method, distance covered decreased from 177.4 ± 20.6 m·min-1 in the 60 s epoch to 107.5 ± 13.3 m·min-1 for the 300 s epoch). For the team as a whole, and irrespective of position, estimates of fixed effects indicated significant between-method differences across all time-points for both relative distance covered and HSR. Movement demands were underestimated consistently by FIXED versus ROLL with differences being most pronounced using 60 s epochs (95% CI HSR: -6.05 to -4.70 m·min-1, 95% CI distance: -18.45 to -16.43 m·min-1). For all HSR time epochs except one, all backs groups increased more (p < 0.01) from FIXED to ROLL than the forward groups. Linear mixed modelling of ROLL data highlighted that for HSR (except 60 s epoch), SR was the only group not significantly different to FR. For relative distance covered all other position groups were greater than the FR (p < 0.05). The FIXED method underestimated both relative distance (~11%) and HSR values (up to ~20%) compared to the ROLL method. These differences were exaggerated for the HSR variable in the backs position who covered the greatest HSR distance; highlighting important consideration for those implementing the FIXED method of analysis. The data provides coaches with a worst-case scenario reference on the running demands required for periods of 60-300 s in length. This information offers novel insight into game demands and can be used to inform the design of training games to increase specificity of preparation for the most demanding phases of matches.
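The difference between FIXED and ROLL epochs comes down to whether peak demands are taken over non-overlapping windows or over every possible window. The sketch below illustrates this with synthetic 10 Hz distance-per-sample data and a burst of running that straddles a fixed 60 s boundary; the sampling rate is taken from the abstract, everything else (names, baseline, burst) is an assumption of ours.

```python
import numpy as np

def peak_demand(dist_per_sample_m, hz=10, epoch_s=60):
    """Peak distance rate (m/min) from fixed non-overlapping vs rolling windows."""
    w = hz * epoch_s
    x = np.asarray(dist_per_sample_m, float)
    n_fixed = x.size // w
    fixed = x[: n_fixed * w].reshape(n_fixed, w).sum(axis=1)
    rolling = np.convolve(x, np.ones(w), mode="valid")    # every possible window
    to_per_min = 60.0 / epoch_s
    return fixed.max() * to_per_min, rolling.max() * to_per_min

rng = np.random.default_rng(1)
trace = rng.normal(0.12, 0.02, 6000)      # ~72 m/min baseline at 10 Hz, 10 min bout
trace[550:750] += 0.25                    # intense burst straddling the 60 s boundary
fixed_peak, rolling_peak = peak_demand(trace)
print(round(fixed_peak), round(rolling_peak))   # the fixed window misses part of the burst
```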
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong, C.; Department of Physics, SAPIENZA University of Rome, Piazzale A. Moro 5, 00185, Rome; Corsetti, S.
2015-06-23
We report a phenomenological approach for the quantification of the diameter of magnetic nanoparticles (MNPs) incorporated in non-ionic surfactant vesicles (niosomes) using magnetic force microscopy (MFM). After a simple specimen preparation, i.e., by putting a drop of solution containing MNP-loaded niosomes on flat substrates, topography and MFM phase images are collected. To attempt the quantification of the diameter of entrapped MNPs, the method is calibrated on bare MNPs deposited on the same substrates by analyzing the MFM signal as a function of the MNP diameter (at fixed tip-sample distance) and of the tip-sample distance (for selected MNPs). After calibration, the effective diameter of the MNPs entrapped in some niosomes is quantitatively deduced from MFM images.
Declustering of clustered preferential sampling for histogram and semivariogram inference
Olea, R.A.
2007-01-01
Measurements of attributes obtained more as a consequence of business ventures than sampling design frequently result in samplings that are preferential both in location and value, typically in the form of clusters along the pay. Preferential sampling requires preprocessing for the purpose of properly inferring characteristics of the parent population, such as the cumulative distribution and the semivariogram. Consideration of the distance to the nearest neighbor allows preparation of resampled sets that produce comparable results to those from previously proposed methods. Clustered sampling of size 140, taken from an exhaustive sampling, is employed to illustrate this approach. © International Association for Mathematical Geology 2007.
Automated working distance adjustment for a handheld OCT-Laryngoscope
NASA Astrophysics Data System (ADS)
Donner, Sabine; Bleeker, Sebastian; Ripken, Tammo; Krueger, Alexander
2014-03-01
Optical coherence tomography (OCT) is an imaging technique which enables diagnosis of vocal cord tissue structure by non-contact optical biopsies rather than invasive tissue biopsies. For diagnosis on awake patients OCT was adapted to a rigid indirect laryngoscope. The working distance must match the probe-sample distance, which varies from patient to patient. Therefore the endoscopic OCT sample arm has a variable working distance of 40 mm to 80 mm. The current axial position is identified by automated working distance adjustments based on image processing. The OCT reference plane and the focal plane of the sample arm are moved according to position errors. Repeated position adjustment during the whole diagnostic procedure keeps the tissue sample at the optimal axial position. The auto focus identifies and adjusts the working distance within the range of 50 mm within a maximum time of 2.7 s. Continuous image stabilisation reduces axial sample movement within the sampling depth for handheld OCT scanning. Rapid autofocus reduces the duration of the diagnostic procedure and axial position stabilisation eases the use of the OCT laryngoscope. Therefore this work is an important step towards the integration of OCT into indirect laryngoscopes.
NASA Astrophysics Data System (ADS)
Kitazaki, Tomoya; Mori, Keita; Yamamoto, Naoyuki; Wang, Congtao; Kawashima, Natsumi; Ishimaru, Ichiro
2017-07-01
We propose an extremely compact, bean-sized, snapshot mid-infrared spectrometer that could eventually be built into smartphones, together with a simple method for preparing thin-film samples using an ultrasonic standing wave. Mid-infrared spectroscopy can identify material components and quantitatively estimate component concentrations from absorption spectra, but conventional spectral instruments have been large and too expensive to incorporate into daily life, and the preparation of thin-film samples has been a troublesome task. Because water absorbs mid-infrared light very strongly, the thickness of moisture-containing samples should be less than 100 μm. Thus, mid-infrared spectroscopy has been utilized only by analytical experts in their laboratories. Because an ultrasonic standing wave is a compressional wave, periodic refractive-index distributions can be generated inside a sample, and a high refractive-index plane acts as a reflection boundary. With an ultrasonic transducer operating at several MHz, the distance between the sample surface and the first generated node becomes several tens of μm, so the double pass over this distance corresponds to the effective sample thickness. By combining these two proposed methods, urinary albumin and glucose concentrations in liquid samples could be measured inside a toilet, and for solid samples, attaching the apparatus to an earlobe and enhancing the light reflected from near the skin surface could open a path toward a non-invasive blood glucose sensor. Using a small ultrasonic transducer with a diameter of 10 mm and an applied voltage of 8 V, we detected internal reflection light from colored water as a liquid sample and from an acrylic board as a solid sample.
Carbon Nano-particle Synthesized by Pulsed Arc Discharge Method as a Light Emitting Device
NASA Astrophysics Data System (ADS)
Ahmadi, Ramin; Ahmadi, Mohamad Taghi; Ismail, Razali
2018-07-01
Owing to the specific properties such as high mobility, ballistic carrier transport and light emission, carbon nano-particles (CNPs) have been employed in nanotechnology applications. In the presented work, the CNPs are synthesized by using the pulsed arc discharge method between two copper electrodes. The rectifying behaviour of produced CNPs is explored by assuming an Ohmic contact between the CNPs and the electrodes. The synthesized sample is characterized by electrical investigation and modelling. The current-voltage (I-V) relationship is investigated and bright visible light emission from the produced CNPs was measured. The electroluminescence (EL) intensity was explored by changing the distance between two electrodes. An incremental behaviour on EL by a resistance gradient and distance reduction is identified.
Carbon Nano-particle Synthesized by Pulsed Arc Discharge Method as a Light Emitting Device
NASA Astrophysics Data System (ADS)
Ahmadi, Ramin; Ahmadi, Mohamad Taghi; Ismail, Razali
2018-04-01
Owing to the specific properties such as high mobility, ballistic carrier transport and light emission, carbon nano-particles (CNPs) have been employed in nanotechnology applications. In the presented work, the CNPs are synthesized by using the pulsed arc discharge method between two copper electrodes. The rectifying behaviour of produced CNPs is explored by assuming an Ohmic contact between the CNPs and the electrodes. The synthesized sample is characterized by electrical investigation and modelling. The current-voltage (I-V) relationship is investigated and bright visible light emission from the produced CNPs was measured. The electroluminescence (EL) intensity was explored by changing the distance between two electrodes. An incremental behaviour on EL by a resistance gradient and distance reduction is identified.
Cognitive assessment in mathematics with the least squares distance method.
Ma, Lin; Çetin, Emre; Green, Kathy E
2012-01-01
This study investigated the validation of comprehensive cognitive attributes of an eighth-grade mathematics test using the least squares distance method and compared performance on attributes by gender and region. A sample of 5,000 students was randomly selected from the data of the 2005 Turkish national mathematics assessment of eighth-grade students. Twenty-five math items were assessed for presence or absence of 20 cognitive attributes (content, cognitive processes, and skill). Four attributes were found to be misspecified or nonpredictive. However, results demonstrated the validity of cognitive attributes in terms of the revised set of 17 attributes. Girls performed similarly to boys on the attributes. Students from the two eastern regions significantly underperformed on most attributes.
Enhanced Conformational Sampling Using Replica Exchange with Collective-Variable Tempering
2015-01-01
The computational study of conformational transitions in RNA and proteins with atomistic molecular dynamics often requires suitable enhanced sampling techniques. We here introduce a novel method where concurrent metadynamics are integrated in a Hamiltonian replica-exchange scheme. The ladder of replicas is built with different strengths of the bias potential exploiting the tunability of well-tempered metadynamics. Using this method, free-energy barriers of individual collective variables are significantly reduced compared with simple force-field scaling. The introduced methodology is flexible and allows adaptive bias potentials to be self-consistently constructed for a large number of simple collective variables, such as distances and dihedral angles. The method is tested on alanine dipeptide and applied to the difficult problem of conformational sampling in a tetranucleotide. PMID:25838811
Incorporation of physical constraints in optimal surface search for renal cortex segmentation
NASA Astrophysics Data System (ADS)
Li, Xiuli; Chen, Xinjian; Yao, Jianhua; Zhang, Xing; Tian, Jie
2012-02-01
In this paper, we propose a novel approach for multiple surfaces segmentation based on the incorporation of physical constraints in optimal surface searching. We apply our new approach to solve the renal cortex segmentation problem, an important but not sufficiently researched issue. In this study, in order to better restrain the intensity proximity of the renal cortex and renal column, we extend the optimal surface search approach to allow for varying sampling distance and physical separation constraints, instead of the traditional fixed sampling distance and numerical separation constraints. The sampling distance of each vertex-column is computed according to the sparsity of the local triangular mesh. Then the physical constraint learned from a priori renal cortex thickness is applied to the inter-surface arcs as the separation constraints. Appropriate varying sampling distance and separation constraints were learnt from 6 clinical CT images. After training, the proposed approach was tested on a test set of 10 images. The manual segmentation of renal cortex was used as the reference standard. Quantitative analysis of the segmented renal cortex indicates that overall segmentation accuracy was increased after introducing the varying sampling distance and physical separation constraints (the average true positive volume fraction (TPVF) and false positive volume fraction (FPVF) were 83.96% and 2.80%, respectively, by using varying sampling distance and physical separation constraints compared to 74.10% and 0.18%, respectively, by using fixed sampling distance and numerical separation constraints). The experimental results demonstrated the effectiveness of the proposed approach.
Scanning mass spectrometry with integrated constant distance positioning
NASA Astrophysics Data System (ADS)
Li, Nan; Eckhard, Kathrin; Aßmann, Jens; Hagen, Volker; Otto, Horst; Chen, Xingxing; Schuhmann, Wolfgang; Muhler, Martin
2006-08-01
Scanning mass spectrometry is of growing importance for the characterization of catalytically active surfaces. The instrument presented here is capable of measuring catalytic activity spatially resolved by means of two concentric capillaries. The outer one is used for cofeeding reactants such as ethene and hydrogen to the sample surface, whereas the inner one is pumping off the product mixture as inlet to a quadrupole mass spectrometer. Three-dimensional measurements under stagnant-point flow conditions become possible based on a home-built capillary positioning unit. Step-motor driven positioning stages exhibiting a minimum step width of 2.5 μm/half-step are used for the x, y positioning, and the step motor in the z direction has a resolution of 1 μm/half-step. The system is additionally equipped with a feedback loop for following the topography of the sample throughout scanning. Hence, the obtained catalytic data are unimpaired by signal changes caused by the morphology of the investigated structure. For distance control the argon ion current is used originating from externally fed argon diffusing into the confined space between the accurately positioned capillaries and the sample surface. A well-defined microchannel flow field with 400 μm wide channels and 200 μm wide mounds was chosen to evaluate the developed method. The catalytic activity of a Pt catalyst deposited on glassy carbon was successfully visualized at constant probe-to-sample distance. Simultaneously, the topography of the sample was recorded, derived from the z positioning of the capillaries.
Chao, Zhi; Liao, Jing; Liang, Zhenbiao; Huang, Suhua; Zhang, Liang; Li, Junde
2014-01-01
Objective: To test the feasibility of DNA barcoding for accurate identification of Jinqian Baihua She and its adulterants. Materials and Methods: Standard cytochrome C oxidase subunit I (COI) gene fragments were sequenced for DNA barcoding of 39 samples from 9 snake species, including Bungarus multicinctus, the officially recognized origin animal by Chinese Pharmacopoeia, and other 8 adulterate species. The aligned sequences, 658 base pairs in length, were analyzed for divergence using the Kimura-2-parameter (K2P) distance model with MEGA5.0. Results: The mean intraspecific K2P distance was 0.0103 and the average interspecific genetic distance was 0.2178 in B. multicinctus, far greater than the minimal interspecific genetic distance of 0.027 recommended for species identification. A neighbor-joining (NJ) tree was constructed, in which each species formed a monophyletic clade with bootstrap supports of 100%. All the data were submitted to Barcode of Life Data system version 3.0 (BOLD, http://www.barcodinglife.org) under the project title “DNA barcoding Bungarus multicinctus and its adulterants”. Ten samples of commercially available crude drugs of JBS were identified using the identification engine provided by BOLD. All the samples were clearly identified at the species level, among which five were found to be the adulterants and identified as Dinodon rufozonatum. Conclusion: DNA barcoding using the standard COI gene fragments provides an effective and accurate means for JBS identification and authentication. PMID:25422545
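The K2P distance used above follows the standard Kimura 2-parameter formula. A minimal sketch for two aligned sequences is given below (the formula is textbook; the toy sequences and function name are ours, and this is not the MEGA implementation).

```python
import math

PURINES, PYRIMIDINES = {"A", "G"}, {"C", "T"}

def k2p_distance(seq1, seq2):
    """Kimura 2-parameter distance between two aligned sequences:
    d = -0.5*ln((1-2P-Q)*sqrt(1-2Q)), with P and Q the transition and
    transversion proportions over compared sites."""
    pairs = [(a, b) for a, b in zip(seq1.upper(), seq2.upper())
             if a in "ACGT" and b in "ACGT"]
    n = len(pairs)
    transitions = sum(1 for a, b in pairs if a != b and
                      ({a, b} <= PURINES or {a, b} <= PYRIMIDINES))
    transversions = sum(1 for a, b in pairs if a != b) - transitions
    P, Q = transitions / n, transversions / n
    return -0.5 * math.log((1 - 2 * P - Q) * math.sqrt(1 - 2 * Q))

# Toy example: one transition in 20 sites gives d ~ 0.053
print(k2p_distance("ACGTACGTACGTACGTACGT", "ACGTACGTACGTACGTACGC"))
```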
Multiplexed Paper Analytical Device for Quantification of Metals using Distance-Based Detection
Cate, David M.; Noblitt, Scott D.; Volckens, John; Henry, Charles S.
2015-01-01
Exposure to metal-containing aerosols has been linked with adverse health outcomes for almost every organ in the human body. Commercially available techniques for quantifying particulate metals are time-intensive, laborious, and expensive; often sample analysis exceeds $100. We report a simple technique, based upon a distance-based detection motif, for quantifying metal concentrations of Ni, Cu, and Fe in airborne particulate matter using microfluidic paper-based analytical devices. Paper substrates are used to create sensors that are self-contained, self-timing, and require only a drop of sample for operation. Unlike other colorimetric approaches in paper microfluidics that rely on optical instrumentation for analysis, with distance-based detection, analyte is quantified visually based on the distance of a colorimetric reaction, similar to reading temperature on a thermometer. To demonstrate the effectiveness of this approach, Ni, Cu, and Fe were measured individually in single-channel devices; detection limits as low as 0.1, 0.1, and 0.05 µg were reported for Ni, Cu, and Fe. Multiplexed analysis of all three metals was achieved with detection limits of 1, 5, and 1 µg for Ni, Cu, and Fe. We also extended the dynamic range for multi-analyte detection by printing concentration gradients of colorimetric reagents using an off the shelf inkjet printer. Analyte selectivity was demonstrated for common interferences. To demonstrate utility of the method, Ni, Cu, and Fe were measured from samples of certified welding fume; levels measured with paper sensors matched known values determined gravimetrically. PMID:26009988
2D and 3D X-ray phase retrieval of multi-material objects using a single defocus distance.
Beltran, M A; Paganin, D M; Uesugi, K; Kitchen, M J
2010-03-29
A method of tomographic phase retrieval is developed for multi-material objects whose components each have a distinct complex refractive index. The phase-retrieval algorithm, based on the Transport-of-Intensity equation, utilizes propagation-based X-ray phase contrast images acquired at a single defocus distance for each tomographic projection. The method requires a priori knowledge of the complex refractive index for each material present in the sample, together with the total projected thickness of the object at each orientation. The requirement of only a single defocus distance per projection simplifies the experimental setup and imposes no additional dose compared to conventional tomography. The algorithm was implemented using phase contrast data acquired at the SPring-8 Synchrotron facility in Japan. The three-dimensional (3D) complex refractive index distribution of a multi-material test object was quantitatively reconstructed using a single X-ray phase-contrast image per projection. The technique is robust in the presence of noise, compared to conventional absorption based tomography.
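For context, the widely used single-material simplification of single-distance TIE phase retrieval (a Paganin-type low-pass filter) can be sketched as below. The paper's multi-material extension additionally uses the known refractive indices of every material and the total projected thickness, which this sketch does not implement; parameter names and units are our assumptions.

```python
import numpy as np

def single_material_tie(intensity, flat, wavelength, prop_dist, delta, beta, pixel):
    """Single-distance, single-material TIE (Paganin-type) phase retrieval:
    returns a projected-thickness map from one propagation-based image."""
    mu = 4.0 * np.pi * beta / wavelength                  # attenuation coefficient
    fy = np.fft.fftfreq(intensity.shape[0], d=pixel)
    fx = np.fft.fftfreq(intensity.shape[1], d=pixel)
    k2 = (2 * np.pi) ** 2 * (fx[None, :] ** 2 + fy[:, None] ** 2)
    lowpass = 1.0 + prop_dist * delta * k2 / mu           # TIE filter denominator
    contact = np.real(np.fft.ifft2(np.fft.fft2(intensity / flat) / lowpass))
    return -np.log(np.clip(contact, 1e-9, None)) / mu     # thickness (same units as pixel)
```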
Walking for Transportation: What do U.S. Adults Think is a Reasonable Distance and Time?
Watson, Kathleen B; Carlson, Susan A; Humbert-Rico, Tiffany; Carroll, Dianna D.; Fulton, Janet E
2015-01-01
Background Less than one-third of U.S. adults walk for transportation. Public health strategies to increase transportation walking would benefit from knowing what adults think is a reasonable distance to walk. Our purpose was to determine (1) what adults think is a reasonable distance and amount of time to walk and (2) whether there were differences in minutes spent transportation walking by what adults think is reasonable. Methods Analyses used a cross-sectional nationwide adult sample (n=3,653) participating in the 2010 Summer ConsumerStyles mail survey. Results Most adults (>90%) think transportation walking is reasonable. However, less than half (43%) think walking a mile or more or for 20 minutes or more is reasonable. What adults think is reasonable is similar across most demographic subgroups, except for older adults (≥ 65 years) who think shorter distances and times are reasonable. Trend analysis that adjust for demographic characteristics indicates adults who think longer distances and times are reasonable walk more. Conclusions Walking for short distances is acceptable to most U.S. adults. Public health programs designed to encourage longer distance trips may wish to improve supports for transportation walking to make walking longer distances seem easier and more acceptable to most U.S. adults. PMID:25158016
An improved algorithm for evaluating trellis phase codes
NASA Technical Reports Server (NTRS)
Mulligan, M. G.; Wilson, S. G.
1982-01-01
A method is described for evaluating the minimum distance parameters of trellis phase codes, including CPFSK, partial response FM, and more importantly, coded CPM (continuous phase modulation) schemes. The algorithm provides dramatically faster execution times and lesser memory requirements than previous algorithms. Results of sample calculations and timing comparisons are included.
An improved algorithm for evaluating trellis phase codes
NASA Technical Reports Server (NTRS)
Mulligan, M. G.; Wilson, S. G.
1984-01-01
A method is described for evaluating the minimum distance parameters of trellis phase codes, including CPFSK, partial response FM, and more importantly, coded CPM (continuous phase modulation) schemes. The algorithm provides dramatically faster execution times and lesser memory requirements than previous algorithms. Results of sample calculations and timing comparisons are included.
Accurate time delay technology in simulated test for high precision laser range finder
NASA Astrophysics Data System (ADS)
Chen, Zhibin; Xiao, Wenjian; Wang, Weiming; Xue, Mingxi
2015-10-01
As technology develops, the ranging accuracy of pulsed laser range finders (LRFs) keeps increasing, and so does the demand for their maintenance and testing. Simulated testing of pulsed range finders rests on the principle that a time delay can stand in for a spatial distance, so the precision of distance simulation hinges on the adjustable time delay. By analyzing and comparing the advantages and disadvantages of fiber and circuit delays, a method is proposed to improve the accuracy of the circuit delay without increasing the counting frequency of the circuit. A high-precision controllable delay circuit was designed by combining an internal delay circuit with an external delay circuit that compensates the delay error in real time, thereby increasing the circuit delay accuracy. The accuracy of the proposed circuit delay method was measured with a high-sampling-rate oscilloscope. The results show that the accuracy of the distance simulated by the circuit delay is improved from ±0.75 m to ±0.15 m, a substantial improvement for simulated testing of high-precision pulsed range finders.
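The "time stands in for distance" principle rests on the round-trip relation below; a ±0.15 m simulation accuracy therefore corresponds to a delay accuracy of about one nanosecond (a direct consequence of the relation, not a figure quoted in the abstract beyond the ±0.15 m itself).

```latex
\[
  d = \frac{c\,\tau}{2}
  \quad\Rightarrow\quad
  \Delta\tau = \frac{2\,\Delta d}{c}
  = \frac{2 \times 0.15\ \mathrm{m}}{3\times 10^{8}\ \mathrm{m\,s^{-1}}}
  \approx 1\ \mathrm{ns}
\]
```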
Investigation of primordial black hole bursts using interplanetary network gamma-ray bursts
Ukwatta, Tilan Niranjan; Hurley, Kevin; MacGibbon, Jane H.; ...
2016-07-25
The detection of a gamma-ray burst (GRB) in the solar neighborhood would have very important implications for GRB phenomenology. The leading theories for cosmological GRBs would not be able to explain such events. The final bursts of evaporating primordial black holes (PBHs), however, would be a natural explanation for local GRBs. We present a novel technique that can constrain the distance to GRBs using detections from widely separated, non-imaging spacecraft. This method can determine the actual distance to the burst if it is local. We applied this method to constrain distances to a sample of 36 short-duration GRBs detected by the Interplanetary Network (IPN) that show observational properties that are expected from PBH evaporations. These bursts have minimum possible distances in the 10¹³–10¹⁸ cm (7–10⁵ au) range, which are consistent with the expected PBH energetics and with a possible origin in the solar neighborhood, although none of the bursts can be unambiguously demonstrated to be local. Furthermore, assuming that these bursts are real PBH events, we estimate lower limits on the PBH burst evaporation rate in the solar neighborhood.
Seed dispersal at alpine treeline: long distance dispersal maintains alpine treelines
NASA Astrophysics Data System (ADS)
Johnson, J. S.; Gaddis, K. D.; Cairns, D. M.; Krutovsky, K.
2016-12-01
Alpine treelines are expected to advance to higher elevations in conjunction with global warming. Nevertheless, the importance of reproductive method and seed dispersal distances at the alpine treeline ecotone remains unresolved. We address two research questions at mountain hemlock treelines on the Kenai Peninsula, Alaska: (1) What is the primary mode of reproduction, and (2) are recruits derived from local treeline populations or are they arriving from more distant seed sources? We addressed our research questions by exhaustively sampling mountain hemlock individuals along a single mountain slope and then genotyped DNA single nucleotide polymorphisms using a genotyping by sequencing approach (ddRAD Seq). First we assessed mode of reproduction by determining the proportion of sampled individuals with identical multilocus genotypes that are the product of clonal reproduction. Second, we used a categorical allocation based parentage analysis to identify parent-offspring pairs, so that the proportion of treeline reproduction events could be quantified spatially and dispersal distance measured. We identified sexual reproduction as the primary mode of reproduction at our study site. Seedling establishment was characterized by extensive cryptic seed dispersal and gene flow into the ecotone. The average dispersal distance was 73 meters with long distance dispersal identified as dispersal occurring at distances greater than 450 meters. We show that production of seeds within the alpine treeline ecotone is not a necessary requirement for treelines to advance to higher elevations in response to climate change. The extensive cryptic seed dispersal and gene flow into the alpine treeline ecotone is likely sufficient to propel the ecotone higher under more favorable climate.
von Cramon-Taubadel, Noreen; Schroeder, Lauren
2016-10-01
Estimation of the variance-covariance (V/CV) structure of fragmentary bioarchaeological populations requires the use of proxy extant V/CV parameters. However, it is currently unclear whether extant human populations exhibit equivalent V/CV structures. Random skewers (RS) and hierarchical analyses of common principal components (CPC) were applied to a modern human cranial dataset. Cranial V/CV similarity was assessed globally for samples of individual populations (jackknifed method) and for pairwise population sample contrasts. The results were examined in light of potential explanatory factors for covariance difference, such as geographic region, among-group distance, and sample size. RS analyses showed that population samples exhibited highly correlated multivariate responses to selection, and that differences in RS results were primarily a consequence of differences in sample size. The CPC method yielded mixed results, depending upon the statistical criterion used to evaluate the hierarchy. The hypothesis-testing (step-up) approach was deemed problematic due to sensitivity to low statistical power and elevated Type I errors. In contrast, the model-fitting (lowest AIC) approach suggested that V/CV matrices were proportional and/or shared a large number of CPCs. Pairwise population sample CPC results were correlated with cranial distance, suggesting that population history explains some of the variability in V/CV structure among groups. The results indicate that patterns of covariance in human craniometric samples are broadly similar but not identical. These findings have important implications for choosing extant covariance matrices to use as proxy V/CV parameters in evolutionary analyses of past populations. © 2016 Wiley Periodicals, Inc.
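The random skewers procedure referred to above can be illustrated briefly: apply the same random unit-length selection vectors to two covariance matrices and report the mean correlation between the resulting response vectors. The sketch below is the standard method in outline, not the authors' code, and the example matrices are made up.

```python
import numpy as np

def random_skewers(cov_a, cov_b, n_skewers=1000, seed=0):
    """Mean correlation between response vectors of two covariance matrices
    when 'skewered' by the same random unit-length selection vectors."""
    rng = np.random.default_rng(seed)
    p = cov_a.shape[0]
    corrs = []
    for _ in range(n_skewers):
        beta = rng.normal(size=p)
        beta /= np.linalg.norm(beta)            # random selection gradient
        ra, rb = cov_a @ beta, cov_b @ beta     # predicted responses
        corrs.append(ra @ rb / (np.linalg.norm(ra) * np.linalg.norm(rb)))
    return float(np.mean(corrs))

# Example: a matrix compared with a scaled copy gives a correlation of ~1
A = np.array([[2.0, 0.5, 0.1], [0.5, 1.5, 0.3], [0.1, 0.3, 1.0]])
print(random_skewers(A, 3 * A))        # ~1.0 (proportional matrices)
print(random_skewers(A, np.eye(3)))    # lower, since the structure differs
```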
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kanjilal, Oindrila, E-mail: oindrila@civil.iisc.ernet.in; Manohar, C.S., E-mail: manohar@civil.iisc.ernet.in
The study considers the problem of simulation based time variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single degree of freedom (dof) nonlinear oscillators, and a multi-degree of freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations. - Highlights: • The distance minimizing control forces minimize a bound on the sampling variance. • Establishing Girsanov controls via solution of a two-point boundary value problem. • Girsanov controls via Volterra's series representation for the transfer functions.
NASA Astrophysics Data System (ADS)
Krueger, Susan; Khodadadi, Sheila; Clark, Nicholas; McAuley, Arnold; Cristiglio, Viviana; Theyencheri, Narayanan; Curtis, Joseph; Shalaev, Evgenyi
2015-03-01
For effective preservation, proteins are often stored as frozen solutions or in glassy states using a freeze-drying process. However, aggregation is often observed after freeze-thaw or reconstitution of freeze-dried powder and the stability of the protein is no longer assured. In this study, small-angle neutron and X-ray scattering (SANS and SAXS) have been used to investigate changes in protein-protein interaction distances of a model protein/cryoprotectant system of lysozyme/sorbitol/water, under representative pharmaceutical processing conditions. The results demonstrate the utility of SAXS and SANS methods to monitor protein crowding at different stages of freezing and drying. The SANS measurements of solution samples showed at least one protein interaction peak corresponding to an interaction distance of ~ 90 Å. In the frozen state, two protein interaction peaks were observed by SANS with corresponding interaction distances at 40 Å as well as 90 Å. On the other hand, both SAXS and SANS data for freeze-dried samples showed three peaks, suggesting interaction distances ranging from ~ 15 Å to 170 Å. Possible interpretations of these interaction peaks will be discussed, as well as the role of sorbitol as a cryoprotectant during the freezing and drying process.
ELECTROFISHING DISTANCE NEEDED TO ESTIMATE FISH SPECIES RICHNESS IN RAFTABLE WESTERN USA RIVERS
A critical issue in river monitoring is the minimum amount of sampling distance required to adequately represent the fish assemblage of a reach. Determining adequate sampling distance is important because it affects estimates of fish assemblage integrity and diversity at local a...
KECSA-Movable Type Implicit Solvation Model (KMTISM)
2015-01-01
Computation of the solvation free energy for chemical and biological processes has long been of significant interest. The key challenges to effective solvation modeling center on the choice of potential function and configurational sampling. Herein, an energy sampling approach termed the “Movable Type” (MT) method, and a statistical energy function for solvation modeling, “Knowledge-based and Empirical Combined Scoring Algorithm” (KECSA) are developed and utilized to create an implicit solvation model: KECSA-Movable Type Implicit Solvation Model (KMTISM) suitable for the study of chemical and biological systems. KMTISM is an implicit solvation model, but the MT method performs energy sampling at the atom pairwise level. For a specific molecular system, the MT method collects energies from prebuilt databases for the requisite atom pairs at all relevant distance ranges, which by its very construction encodes all possible molecular configurations simultaneously. Unlike traditional statistical energy functions, KECSA converts structural statistical information into categorized atom pairwise interaction energies as a function of the radial distance instead of a mean force energy function. Within the implicit solvent model approximation, aqueous solvation free energies are then obtained from the NVT ensemble partition function generated by the MT method. Validation is performed against several subsets selected from the Minnesota Solvation Database v2012. Results are compared with several solvation free energy calculation methods, including a one-to-one comparison against two commonly used classical implicit solvation models: MM-GBSA and MM-PBSA. Comparison against a quantum mechanics based polarizable continuum model is also discussed (Cramer and Truhlar’s Solvation Model 12). PMID:25691832
NASA Astrophysics Data System (ADS)
Stomp, Romain-Pierre
This thesis is devoted to the studies of self-assembled InAs quantum dots (QD) by low-temperature Atomic Force Microscopy (AFM) in frequency modulation mode. Several spectroscopic methods are developed to investigate single electron charging from a two-dimensional electron gas (2DEG) to an individual InAs QD. Furthermore, a new technique to measure the absolute tip-sample capacitance is demonstrated. The main observables are the electrostatic force between the metal-coated AFM tip and sample as well as the sample-induced energy dissipation, and therefore no tunneling current has to be collected at the AFM tip. Measurements were performed by recording simultaneously the shift in the resonant frequency and the Q-factor degradation of the oscillating cantilever, either as a function of tip-sample voltage or distance. The signature of single electron charging was detected as an abrupt change in the frequency shift as well as corresponding peaks in the dissipation. The main experimental features in the force agree well with the semi-classical theory of Coulomb blockade by considering the free energy of the system. The observed dissipation peaks can be understood as a back-action effect on the oscillating cantilever beam due to the fluctuation in time of electrons tunneling back and forth between the 2DEG and the QD. It was also possible to extract the absolute value of the tip-sample capacitance, as a consequence of the spectroscopic analysis of the electrostatic force as a function of tip-sample distance for different values of the applied voltage. At the same time, the contact potential difference and the residual non-capacitive force could also be determined as a function of tip-sample distance.
Reconstruction based finger-knuckle-print verification with score level adaptive binary fusion.
Gao, Guangwei; Zhang, Lei; Yang, Jian; Zhang, Lin; Zhang, David
2013-12-01
Recently, a new biometric identifier, namely the finger knuckle print (FKP), has been proposed for personal authentication with very interesting results. One of the advantages of FKP verification lies in its user friendliness in data collection. However, the user flexibility in positioning fingers also leads to a certain degree of pose variation in the collected query FKP images. The widely used Gabor filtering based competitive coding scheme is sensitive to such variations, resulting in many false rejections. We propose to alleviate this problem by reconstructing the query sample with a dictionary learned from the template samples in the gallery set. The reconstructed FKP image can greatly reduce the enlarged matching distance caused by finger pose variations; however, both the intra-class and inter-class distances will be reduced. We then propose a score level adaptive binary fusion rule to adaptively fuse the matching distances before and after reconstruction, aiming to reduce the false rejections without substantially increasing the false acceptances. Experimental results on the benchmark PolyU FKP database show that the proposed method significantly improves the FKP verification accuracy.
NASA Astrophysics Data System (ADS)
Trofimov, Vyacheslav A.; Varentsova, Svetlana A.; Trofimov, Vladislav V.
2015-05-01
We show the possibility of detecting and identifying a substance at long distance (several metres, for example) using a THz pulse reflected from the object under real conditions: at room temperature and a humidity of about 70%. The main feature of this report is a demonstration of detection and identification based on computer processing of the noisy THz pulse. The amplitude of the useful signal is less than the amplitude of the noise. Nevertheless, it is possible to detect the "fingerprint" frequencies of a substance if these frequencies are known and the SDA method is used together with new assessments for estimating the probability that the detected frequencies are present. Essential restrictions of the commonly used THz TDS method for detection and identification under real conditions (at a long distance of about 3.5 m and at a relative humidity above 50%) are demonstrated using a physical experiment with a chocolate bar and a thick paper bag. We also show that the THz TDS method detects spectral features of dangerous substances even in THz signals measured under laboratory conditions (at a distance of 30-40 cm from the receiver and at a relative humidity below 2%); n-Si and p-Si semiconductors were used as neutral substances. However, the integral correlation and likeness criteria, based on the SDA method, allow us to detect the absence of dangerous substances in the samples. The current results show the feasibility of using the discussed method of THz pulsed spectroscopy for counter-terrorism problems.
Lin, Tao; Sun, Huijun; Chen, Zhong; You, Rongyi; Zhong, Jianhui
2007-12-01
Diffusion weighting in MRI is commonly achieved with the pulsed-gradient spin-echo (PGSE) method. When combined with spin-warping image formation, this method often results in ghosts due to the sample's macroscopic motion. It has been shown experimentally (Kennedy and Zhong, MRM 2004;52:1-6) that these motion artifacts can be effectively eliminated by the distant dipolar field (DDF) method, which relies on the refocusing of spatially modulated transverse magnetization by the DDF within the sample itself. In this report, diffusion-weighted images (DWIs) using both DDF and PGSE methods in the presence of macroscopic sample motion were simulated. Numerical simulation results quantify the dependence of signals in DWI on several key motion parameters and demonstrate that the DDF DWIs are much less sensitive to macroscopic sample motion than the traditional PGSE DWIs. The results also show that the dipolar correlation distance (d(c)) can alter contrast in DDF DWIs. The simulated results are in good agreement with the experimental results reported previously.
Cao-Hoang, Lan; Chaine, Aline; Grégoire, Lydie; Waché, Yves
2010-10-01
A sodium caseinate film containing nisin (1000 IU/cm(2)) was produced and used to control Listeria innocua in an artificially contaminated cheese. Mini red Babybel cheese was chosen as a model semi-soft cheese. L. innocua was both surface- and in-depth inoculated to investigate the effectiveness of the antimicrobial film as a function of the distance from the surface in contact with the film. The presence of the active film resulted in a 1.1 log CFU/g reduction in L. innocua counts in surface-inoculated cheese samples after one week of storage at 4 degrees C as compared to control samples. With regard to in-depth inoculated cheese samples, antimicrobial efficiency was found to be dependent on the distance from the surface in contact with the active films to the cheese matrix. The inactivation rates obtained were 1.1, 0.9 and 0.25 log CFU/g for distances from the contact surface of 1 mm, 2 mm and 3 mm, respectively. Our study demonstrates the potential application of sodium caseinate films containing nisin as a promising method to overcome problems associated with post-process contamination, thereby extending the shelf life and possibly enhancing the microbial safety of cheeses. 2010 Elsevier Ltd. All rights reserved.
Gender discrimination and prediction on the basis of facial metric information.
Fellous, J M
1997-07-01
Horizontal and vertical facial measurements are statistically independent. Discriminant analysis shows that five such normalized distances explain over 95% of the gender differences in "training" samples and predict the gender of 90% of novel test faces exhibiting various facial expressions. The robustness of the method and its results are assessed. It is argued that these distances (termed fiducial) are compatible with those found experimentally by psychophysical and neurophysiological studies. In consequence, partial explanations for the effects observed in these experiments can be found in the intrinsic statistical nature of the facial stimuli used.
Zarco-Perello, Salvador; Simões, Nuno
2017-01-01
Information about the distribution and abundance of the habitat-forming sessile organisms in marine ecosystems is of great importance for conservation and natural resource managers. Spatial interpolation methodologies can be useful to generate this information from in situ sampling points, especially in circumstances where remote sensing methodologies cannot be applied due to small-scale spatial variability of the natural communities and low light penetration in the water column. Interpolation methods are widely used in environmental sciences; however, published studies using these methodologies in coral reef science are scarce. We compared the accuracy of the two most commonly used interpolation methods in all disciplines, inverse distance weighting (IDW) and ordinary kriging (OK), to predict the distribution and abundance of hard corals, octocorals, macroalgae, sponges and zoantharians and identify hotspots of these habitat-forming organisms using data sampled at three different spatial scales (5, 10 and 20 m) in Madagascar reef, Gulf of Mexico. The deeper sandy environments of the leeward and windward regions of Madagascar reef were dominated by macroalgae and seconded by octocorals. However, the shallow rocky environments of the reef crest had the highest richness of habitat-forming groups of organisms; here, we registered high abundances of octocorals and macroalgae, with sponges, Millepora alcicornis and zoantharians dominating in some patches, creating high levels of habitat heterogeneity. IDW and OK generated similar maps of distribution for all the taxa; however, cross-validation tests showed that IDW outperformed OK in the prediction of their abundances. When the sampling distance was at 20 m, both interpolation techniques performed poorly, but as the sampling was done at shorter distances, prediction accuracies increased, especially for IDW. OK had higher mean prediction errors and failed to correctly interpolate the highest abundance values measured in situ, except for macroalgae, whereas IDW had lower mean prediction errors and high correlations between predicted and measured values in all cases when sampling was every 5 m. The accurate spatial interpolations created using IDW allowed us to see the spatial variability of each taxon at a biological and spatial resolution that remote sensing would not have been able to produce. Our study sets the basis for further research projects and conservation management in Madagascar reef and encourages similar studies in the region and other parts of the world where remote sensing technologies are not suitable for use.
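Since inverse distance weighting is central to the comparison above, a minimal sketch of the estimator may help; the coordinates, abundance values and power parameter below are illustrative assumptions, not data from the study.

```python
# Minimal sketch of inverse distance weighting (IDW): predictions at query
# points are distance-weighted means of the known sample values.
import numpy as np

def idw_interpolate(xy_known, values, xy_query, power=2.0, eps=1e-12):
    """Predict values at query points as distance-weighted means of known samples."""
    xy_known = np.asarray(xy_known, dtype=float)
    values = np.asarray(values, dtype=float)
    preds = []
    for q in np.atleast_2d(xy_query):
        d = np.linalg.norm(xy_known - q, axis=1)
        if np.any(d < eps):                      # query coincides with a sample point
            preds.append(values[np.argmin(d)])
            continue
        w = 1.0 / d**power                       # closer samples get larger weights
        preds.append(np.sum(w * values) / np.sum(w))
    return np.array(preds)

# Hypothetical 5 m grid of abundance samples and one query location.
samples = [(0, 0), (5, 0), (0, 5), (5, 5)]
abundance = [2.0, 4.0, 1.0, 3.0]
print(idw_interpolate(samples, abundance, [(2.5, 2.5)]))
```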
Automatic measurement for dimensional changes of woven fabrics based on texture
NASA Astrophysics Data System (ADS)
Liu, Jihong; Jiang, Hongxia; Liu, X.; Chai, Zhilei
2014-01-01
Dimensional change, or shrinkage, is an important functional attribute of woven fabrics that affects their basic function and price in the market. This paper presents a machine vision system that evaluates the shrinkage of woven fabrics by analyzing the change of fabric construction. The proposed measurement method has three features. (i) There is no staining of shrinkage markers on the fabric specimen, in contrast to the existing measurement method. (ii) The system can be used on fabric with a reduced area. (iii) The system can be installed and used as a laboratory or industrial application system. The image-processing method is divided into four steps: acquiring an image from the sample of the woven fabric; obtaining a gray image and then segmenting the warp and weft from the fabric based on the fast Fourier transform and inverse fast Fourier transform; calculating the distance of the warp or weft sets by the gray projection method; and characterizing the shrinkage of the woven fabric by the average distance, the coefficient of variation of the distance, and so on. Experimental results on virtual and physical woven fabrics indicated that the method could obtain detailed shrinkage information for woven fabric. The method was programmed in Matlab, and a graphical user interface was built in Delphi. The program has potential for practical use in the textile industry.
Nanoporous Anodic Alumina 3D FDTD Modelling for a Broad Range of Inter-pore Distances
NASA Astrophysics Data System (ADS)
Bertó-Roselló, Francesc; Xifré-Pérez, Elisabet; Ferré-Borrull, Josep; Pallarès, Josep; Marsal, Lluis F.
2016-08-01
The capability of the finite difference time domain (FDTD) method for the numerical modelling of the optical properties of nanoporous anodic alumina (NAA) in a broad range of inter-pore distances is evaluated. FDTD permits taking into account in the same numerical framework all the structural features of NAA, such as the texturization of the interfaces or the incorporation of electrolyte anions in the aluminium oxide host. The evaluation is carried out by comparing reflectance measurements from two samples with two very different inter-pore distances with the simulation results. Results show that considering the texturization is crucial to obtain good agreement with the measurements. On the other hand, including the anionic layer in the model leads to a second-order contribution to the reflectance spectrum.
spsann - optimization of sample patterns using spatial simulated annealing
NASA Astrophysics Data System (ADS)
Samuel-Rosa, Alessandro; Heuvelink, Gerard; Vasques, Gustavo; Anjos, Lúcia
2015-04-01
There are many algorithms and computer programs to optimize sample patterns, some private and others publicly available. A few have only been presented in scientific articles and text books. This dispersion and somewhat poor availability holds back their wider adoption and further development. We introduce spsann, a new R-package for the optimization of sample patterns using spatial simulated annealing. R is the most popular environment for data processing and analysis. Spatial simulated annealing is a well known method with widespread use to solve optimization problems in the soil and geo-sciences. This is mainly due to its robustness against local optima and ease of implementation. spsann offers many optimizing criteria for sampling for variogram estimation (number of points or point-pairs per lag distance class - PPL), trend estimation (association/correlation and marginal distribution of the covariates - ACDC), and spatial interpolation (mean squared shortest distance - MSSD). spsann also includes the mean or maximum universal kriging variance (MUKV) as an optimizing criterion, which is used when the model of spatial variation is known. PPL, ACDC and MSSD were combined (PAN) for sampling when we are ignorant about the model of spatial variation. spsann solves this multi-objective optimization problem by scaling the objective function values using their maximum absolute value or the mean value computed over 1000 random samples. Scaled values are aggregated using the weighted sum method. A graphical display allows the user to follow how the sample pattern is being perturbed during the optimization, as well as the evolution of its energy state. It is possible to start by perturbing many points and exponentially reduce the number of perturbed points. The maximum perturbation distance reduces linearly with the number of iterations. The acceptance probability also reduces exponentially with the number of iterations. R is memory hungry and spatial simulated annealing is a computationally intensive method. As such, many strategies were used to reduce the computation time and memory usage: a) bottlenecks were implemented in C++, b) a finite set of candidate locations is used for perturbing the sample points, and c) data matrices are computed only once and then updated at each iteration instead of being recomputed. spsann is available at GitHub under the GPL Version 2.0 licence and will be further developed to: a) allow the use of a cost surface, b) implement other sensitive parts of the source code in C++, c) implement other optimizing criteria, and d) allow points to be added to or deleted from an existing point pattern.
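spsann itself is an R package; the following standalone sketch only illustrates the core spatial simulated annealing loop for the MSSD criterion (perturb one sample point, accept worse configurations with a probability that decays as the schedule cools). The grid size, candidate set and cooling schedule are arbitrary assumptions.

```python
# Hedged sketch of spatial simulated annealing minimizing the mean squared
# shortest distance (MSSD) from prediction grid nodes to the nearest sample.
import numpy as np

rng = np.random.default_rng(0)
grid = np.array([(x, y) for x in range(20) for y in range(20)], dtype=float)  # prediction nodes
candidates = grid.copy()                        # finite set of candidate sample locations

def mssd(sample):
    d = np.linalg.norm(grid[:, None, :] - sample[None, :, :], axis=2)
    return np.mean(d.min(axis=1) ** 2)          # mean squared shortest distance

sample = candidates[rng.choice(len(candidates), size=10, replace=False)]
energy, temp = mssd(sample), 1.0
for it in range(2000):
    trial = sample.copy()
    trial[rng.integers(len(trial))] = candidates[rng.integers(len(candidates))]  # perturb one point
    e_new = mssd(trial)
    if e_new < energy or rng.random() < np.exp((energy - e_new) / temp):
        sample, energy = trial, e_new           # accept improvements and, sometimes, worse states
    temp *= 0.998                               # exponential cooling
print(round(energy, 3))
```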
NASA Astrophysics Data System (ADS)
Wüst, Sabine; Wendt, Verena; Linz, Ricarda; Bittner, Michael
2017-09-01
Cubic splines with equidistant spline sampling points are a common method in atmospheric science, used for the approximation of background conditions by means of filtering superimposed fluctuations from a data series. What is defined as background or superimposed fluctuation depends on the specific research question. The latter also determines whether the spline or the residuals - the subtraction of the spline from the original time series - are further analysed. Based on test data sets, we show that the quality of approximation of the background state does not increase continuously with an increasing number of spline sampling points and/or decreasing distance between two spline sampling points. Splines can generate considerable artificial oscillations in the background and the residuals. We introduce a repeating spline approach which is able to significantly reduce this phenomenon. We apply it not only to the test data but also to TIMED-SABER temperature data and choose the distance between two spline sampling points in a way that is sensitive to a large spectrum of gravity waves.
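As a minimal illustration of the baseline operation the study starts from, the sketch below fits a least-squares cubic spline on equidistant knots to a synthetic series and inspects the residuals; the series, knot spacing and noise level are invented for the example, and the repeating-spline refinement is not reproduced.

```python
# Illustrative sketch: equidistant-knot cubic spline as a background estimate,
# with the residuals taken as the superimposed fluctuations.
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

t = np.linspace(0, 100, 1000)                        # e.g. a time or altitude axis
series = (0.02 * t + np.sin(2 * np.pi * t / 8)
          + 0.1 * np.random.default_rng(7).normal(size=t.size))

knot_spacing = 20.0                                   # distance between spline sampling points
knots = np.arange(t[0] + knot_spacing, t[-1], knot_spacing)   # interior knots only
background = LSQUnivariateSpline(t, series, knots, k=3)(t)
residuals = series - background                       # superimposed fluctuations
print(f"residual std: {residuals.std():.3f}")
```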
Liu, Dan; Li, Xingrui; Zhou, Junkai; Liu, Shibo; Tian, Tian; Song, Yanling; Zhu, Zhi; Zhou, Leiji; Ji, Tianhai; Yang, Chaoyong
2017-10-15
Enzyme-linked immunosorbent assay (ELISA) is a popular laboratory technique for detection of disease-specific protein biomarkers with high specificity and sensitivity. However, ELISA requires labor-intensive and time-consuming procedures with skilled operators and spectroscopic instrumentation. Simplification of the procedures and miniaturization of the devices are crucial for ELISA-based point-of-care (POC) testing in resource-limited settings. Here, we present a fully integrated, instrument-free, low-cost and portable POC platform which integrates the process of ELISA and the distance readout into a single microfluidic chip. Based on manipulation using a permanent magnet, the process is initiated by moving magnetic beads with capture antibody through different aqueous phases containing ELISA reagents to form a bead/antibody/antigen/antibody sandwich structure, and finally converts the molecular recognition signal into a highly sensitive distance readout for visual quantitative bioanalysis. Without additional equipment and complicated operations, our integrated ELISA-Chip with distance readout allows ultrasensitive quantitation of disease biomarkers within 2 h. The ELISA-Chip method also showed high specificity, good precision and great accuracy. Furthermore, the ELISA-Chip system is highly applicable as a sandwich-based platform for the detection of a variety of protein biomarkers. With the advantages of visual analysis, easy operation, high sensitivity, and low cost, the integrated sample-in-answer-out ELISA-Chip with distance readout shows great potential for quantitative POCT in resource-limited settings. Copyright © 2017. Published by Elsevier B.V.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jonsson, Jacob C.; Branden, Henrik
2006-10-19
This paper demonstrates a method to determine the bidirectional transfer distribution function (BTDF) using an integrating sphere. Information about the sample's angle-dependent scattering is obtained by making transmittance measurements with the sample at different distances from the integrating sphere. Knowledge about the illuminated area of the sample and the geometry of the sphere port, combined with the measured data, yields a system of equations that includes the angle-dependent transmittance. The resulting system of equations is an ill-posed problem which rarely gives a physical solution. A solvable system is obtained by using Tikhonov regularization on the ill-posed problem. The solution to this system can then be used to obtain the BTDF. Four bulk-scattering samples were characterised using both two goniophotometers and the described method to verify the validity of the new method. The agreement is very good for the more diffuse samples. The solution for the low-scattering samples contains unphysical oscillations, but still gives the correct shape of the solution. The origin of the oscillations and why they are more prominent in low-scattering samples are discussed.
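The stabilisation step named in the abstract is Tikhonov regularization; a small sketch on a synthetic ill-conditioned system (not the BTDF equations themselves) shows the idea.

```python
# Illustrative sketch of Tikhonov regularization for an ill-posed linear system
# A x = b; the design matrix and data below are synthetic assumptions.
import numpy as np

def tikhonov_solve(A, b, lam):
    """Solve min ||A x - b||^2 + lam * ||x||^2 via the regularised normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

rng = np.random.default_rng(1)
A = np.vander(np.linspace(0, 1, 40), 8, increasing=True)   # poorly conditioned design matrix
x_true = rng.normal(size=8)
b = A @ x_true + 0.01 * rng.normal(size=40)                # noisy measurements
x_reg = tikhonov_solve(A, b, lam=1e-3)
print(np.round(x_reg, 2))
```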
Distance management of inflammatory bowel disease: Systematic review and meta-analysis
Huang, Vivian W; Reich, Krista M; Fedorak, Richard N
2014-01-01
AIM: To review the effectiveness of distance management methods in the management of adult inflammatory bowel disease (IBD) patients. METHODS: A systematic review and meta-analysis of randomized controlled trials comparing distance management and standard clinic follow-up in the management of adult IBD patients. Distance management intervention was defined as any remote management method in which there is a patient self-management component whereby the patient interacts remotely via a self-guided management program, electronic interface, or self-directs open access to clinic follow up. The search strategy included electronic databases (Medline, PubMed, CINAHL, The Cochrane Central Register of Controlled Trials, EMBASE, KTPlus, Web of Science, and SCOPUS), conference proceedings, and internet search for web publications. The primary outcome was the mean difference in quality of life, and the secondary outcomes included mean difference in relapse rate, clinic visit rate, and hospital admission rate. Study selection, data extraction, and risk of bias assessment were completed by two independent reviewers. RESULTS: The search strategy identified a total of 4061 articles, but only 6 randomized controlled trials met the inclusion and exclusion criteria for the systematic review and meta-analysis. Three trials involved telemanagement, and three trials involved directed patient self-management and open access clinics. The total sample size was 1463 patients. There was a trend towards improved quality of life in distance management patients, with an end IBDQ quality of life score 7.28 (95%CI: -3.25 to 17.81) points higher than with standard clinic follow-up. There was a significant decrease in the clinic visit rate among distance management patients (mean difference -1.08, 95%CI: -1.60 to -0.55), but no significant change in relapse rate or hospital admission rate. CONCLUSION: Distance management of IBD significantly decreases clinic visit utilization, but does not significantly affect relapse rates or hospital admission rates. PMID:24574756
Coarse Point Cloud Registration by Egi Matching of Voxel Clusters
NASA Astrophysics Data System (ADS)
Wang, Jinhu; Lindenbergh, Roderik; Shen, Yueqian; Menenti, Massimo
2016-06-01
Laser scanning samples the surface geometry of objects efficiently and records versatile information as point clouds. However, often more scans are required to fully cover a scene. Therefore, a registration step is required that transforms the different scans into a common coordinate system. The registration of point clouds is usually conducted in two steps, i.e. coarse registration followed by fine registration. In this study an automatic marker-free coarse registration method for pair-wise scans is presented. First the two input point clouds are re-sampled as voxels and dimensionality features of the voxels are determined by principal component analysis (PCA). Then voxel cells with the same dimensionality are clustered. Next, the Extended Gaussian Image (EGI) descriptors of those voxel clusters are constructed using significant eigenvectors of each voxel in the cluster. Correspondences between clusters in source and target data are obtained according to the similarity between their EGI descriptors. The random sampling consensus (RANSAC) algorithm is employed to remove outlying correspondences until a coarse alignment is obtained. If necessary, a fine registration is performed in a final step. This new method is illustrated on scan data sampling two indoor scenarios. The results of the tests are evaluated by computing the point to point distance between the two input point clouds. The presented two tests resulted in mean distances of 7.6 mm and 9.5 mm respectively, which are adequate for fine registration.
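The evaluation criterion quoted above, the point-to-point distance between the two registered clouds, can be sketched with a brute-force nearest-neighbour search; the toy clouds and residual misalignment below are assumptions, not the study's scans.

```python
# Minimal sketch of evaluating a registration by the mean point-to-point
# distance from each source point to its nearest target point.
import numpy as np

def mean_p2p_distance(source, target):
    """Mean distance from each source point to its nearest target point."""
    d = np.linalg.norm(source[:, None, :] - target[None, :, :], axis=2)
    return d.min(axis=1).mean()

rng = np.random.default_rng(2)
target = rng.uniform(0, 1, size=(500, 3))
theta = 0.03                                     # small residual rotation (rad)
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])
source = target @ R.T + 0.005                    # plus a small residual translation
print(f"mean point-to-point distance: {mean_p2p_distance(source, target):.4f} m")
```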
NASA Astrophysics Data System (ADS)
Kuz'micheva, Galina M.; Timaeva, Olesya I.; Kaurova, Irina A.; Svetogorov, Roman D.; Mühlbauer, Martin J.
2018-03-01
Potassium dihydrogen phosphate KH2PO4 (KDP) single crystals activated with Ti4+ ions in the form of TiO2-x × nH2O nanoparticles in the η-phase, synthesized by the sulfate method using TiOSO4 × xH2SO4 × yH2O, were grown for the first time by the temperature lowering method, and samples were cut from the pyramidal (P) and prismatic (Pr) growth sectors. The first neutron powder diffraction investigation of the P and Pr samples cut from the KDP:Ti4+ crystal revealed vacancies in the K and H sites for both samples, their number being larger in the Pr structure than in the P one. Taking into account the deficiency of the K and H sites, the full occupation of the O site, the presence of Ti4+ ions in the structure, and the electroneutrality condition, a partial substitution of the (PO4)3- anion by the (SO4)2- one, larger for the Pr sample, was observed. The real compositions of the P and Pr samples, correlated with the cation-anion internuclear distances, were refined. The dielectric permittivity of the Pr sample was significantly lower than that of the P one; it decreases with decreasing K-O, P-O, and O...H distances and increasing deficiency of the K and H sites.
Method for Controlling a Producing Zone of a Well in a Geological Formation
NASA Technical Reports Server (NTRS)
Arndt, G. Dickey (Inventor); Carl, James R. (Inventor); Byerly, Kent A. (Inventor); Amini, B. Jon (Inventor)
2005-01-01
System and methods for transmitting and receiving electromagnetic pulses through a geological formation. A preferably programmable transmitter having an all-digital portion in a preferred embodiment may be operated at frequencies below 1 MHz without loss of target resolution by transmitting and oversampling received long PN codes. A gated and stored portion of the received signal may be correlated with the PN code to determine distances of interfaces within the geological formation, such as the distance of a water interface from a wellbore. The received signal is preferably oversampled at rates of five to fifty times the carrier frequency. In one method of the invention, an oil well with multiple production zones may be kept in production by detecting an approaching water front in one of the production zones and shutting down that particular production zone, thereby permitting the remaining production zones to continue operating.
NASA Astrophysics Data System (ADS)
Trofimov, Vyacheslav A.; Varentsova, Svetlana A.; Trofimov, Vladislav V.; Tikhomirov, Vasily V.
2015-08-01
Principal limitations of the standard THz-TDS method for detection and identification are demonstrated under real conditions (at a long distance of about 3.5 m and at a relative humidity above 50%) using neutral substances: a thick paper bag, paper napkins and chocolate. We also show that the THz-TDS method detects spectral features of dangerous substances even if the THz signals were measured under laboratory conditions (at a distance of 30-40 cm from the receiver and at a relative humidity below 2%); silicon-based semiconductors were used as the samples. However, the integral correlation criteria, based on the SDA method, allow us to detect the absence of dangerous substances in the neutral substances. The discussed algorithm shows a high probability of substance identification and is practical to implement, especially for security applications and non-destructive testing.
Yan, Xuedong; Gao, Dan; Zhang, Fan; Zeng, Chen; Xiang, Wang; Zhang, Man
2013-01-01
This study investigated the spatial distribution of copper (Cu), zinc (Zn), cadmium (Cd), lead (Pb), chromium (Cr), cobalt (Co), nickel (Ni) and arsenic (As) in roadside topsoil in the Qinghai-Tibet Plateau and evaluated the potential environmental risks of these roadside heavy metals due to traffic emissions. A total of 120 topsoil samples were collected along five road segments in the Qinghai-Tibet Plateau. The nonlinear regression method was used to formulize the relationship between the metal concentrations in roadside soils and roadside distance. The Hakanson potential ecological risk index method was applied to assess the degrees of heavy metal contamination. The regression results showed that both the heavy metal concentrations and their ecological risk indices decreased exponentially with increasing roadside distance. The large R-squared values of the regression models indicate that the exponential regression method can suitably describe the relationship between heavy metal accumulation and roadside distance. For the entire study region, there was a moderate level of potential ecological risk within a 10 m roadside distance. However, Cd was the only prominent heavy metal which posed a potential hazard to the local soil ecosystem. Overall, the rank of risk contribution to the local environment among the eight heavy metals was Cd > As > Ni > Pb > Cu > Co > Zn > Cr. Considering that Cd is a more hazardous heavy metal than the other elements for public health, the local government should pay special attention to this traffic-related environmental issue. PMID:23439515
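A hedged sketch of the kind of nonlinear regression described, an exponential decay of concentration with roadside distance fitted by least squares, is given below; the concentration values and distances are invented, not the study's measurements.

```python
# Illustrative sketch: fit C(d) = a*exp(-b*d) + c to hypothetical roadside data
# and report the R-squared of the fit.
import numpy as np
from scipy.optimize import curve_fit

def decay(d, a, b, c):
    return a * np.exp(-b * d) + c

distance = np.array([1, 2, 5, 10, 20, 40], dtype=float)        # m from the road edge
cd_conc = np.array([0.95, 0.80, 0.55, 0.38, 0.27, 0.24])        # hypothetical Cd, mg/kg
params, _ = curve_fit(decay, distance, cd_conc, p0=(1.0, 0.1, 0.2))
pred = decay(distance, *params)
r_squared = 1 - np.sum((cd_conc - pred) ** 2) / np.sum((cd_conc - cd_conc.mean()) ** 2)
print(np.round(params, 3), round(r_squared, 3))
```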
Blair, Christopher; Bryson, Robert W
2017-11-01
Biodiversity reduction and loss continues to progress at an alarming rate, and thus, there is widespread interest in utilizing rapid and efficient methods for quantifying and delimiting taxonomic diversity. Single-locus species delimitation methods have become popular, in part due to the adoption of the DNA barcoding paradigm. These techniques can be broadly classified into tree-based and distance-based methods depending on whether species are delimited based on a constructed genealogy. Although the relative performance of these methods has been tested repeatedly with simulations, additional studies are needed to assess congruence with empirical data. We compiled a large data set of mitochondrial ND4 sequences from horned lizards (Phrynosoma) to elucidate congruence using four tree-based (single-threshold GMYC, multiple-threshold GMYC, bPTP, mPTP) and one distance-based (ABGD) species delimitation models. We were particularly interested in cases with highly uneven sampling and/or large differences in intraspecific diversity. Results showed a high degree of discordance among methods, with multiple-threshold GMYC and bPTP suggesting an unrealistically high number of species (29 and 26 species within the P. douglasii complex alone). The single-threshold GMYC model was the most conservative, likely a result of difficulty in locating the inflection point in the genealogies. mPTP and ABGD appeared to be the most stable across sampling regimes and suggested the presence of additional cryptic species that warrant further investigation. These results suggest that the mPTP model may be preferable in empirical data sets with highly uneven sampling or large differences in effective population sizes of species. © 2017 John Wiley & Sons Ltd.
Makowski, David; Bancal, Rémi; Bensadoun, Arnaud; Monod, Hervé; Messéan, Antoine
2017-09-01
According to E.U. regulations, the maximum allowable rate of adventitious transgene presence in non-genetically modified (GM) crops is 0.9%. We compared four sampling methods for the detection of transgenic material in agricultural non-GM maize fields: random sampling, stratified sampling, random sampling + ratio reweighting, random sampling + regression reweighting. Random sampling involves simply sampling maize grains from different locations selected at random from the field concerned. The stratified and reweighting sampling methods make use of an auxiliary variable corresponding to the output of a gene-flow model (a zero-inflated Poisson model) simulating cross-pollination as a function of wind speed, wind direction, and distance to the closest GM maize field. With the stratified sampling method, an auxiliary variable is used to define several strata with contrasting transgene presence rates, and grains are then sampled at random from each stratum. With the two methods involving reweighting, grains are first sampled at random from various locations within the field, and the observations are then reweighted according to the auxiliary variable. Data collected from three maize fields were used to compare the four sampling methods, and the results were used to determine the extent to which transgene presence rate estimation was improved by the use of stratified and reweighting sampling methods. We found that transgene rate estimates were more accurate and that substantially smaller samples could be used with sampling strategies based on an auxiliary variable derived from a gene-flow model. © 2017 Society for Risk Analysis.
NASA Astrophysics Data System (ADS)
Shinogle-Decker, Heather; Martinez-Rivera, Noraida; O'Brien, John; Powell, Richard D.; Joshi, Vishwas N.; Connell, Samuel; Rosa-Molinar, Eduardo
2018-02-01
A new correlative Förster Resonance Energy Transfer (FRET) microscopy method using FluoroNanogold™, a fluorescent immunoprobe with a covalently attached Nanogold® particle (1.4 nm Au), overcomes resolution limitations in determining distances within synaptic nanoscale architecture. FRET by acceptor photobleaching has long been used as a method to increase fluorescence resolution. The transfer of energy from a donor to an acceptor generally occurs between 10 and 100 Å, which is the relative distance between the donor molecule and the acceptor molecule. For the correlative FRET microscopy method using FluoroNanogold™, we immuno-labeled GFP-tagged-HeLa-expressing Connexin 35 (Cx35) with anti-GFP and with anti-Cx35/36 antibodies, and then photo-bleached the Cx before processing the sample for electron microscopic imaging. Preliminary studies reveal that the use of Alexa Fluor® 594 FluoroNanogold™ slightly increases the FRET distance to 70 Å, in contrast to the 62.5 Å using Alexa Fluor 594®. Preliminary studies also show that using a FluoroNanogold™ probe inhibits photobleaching. After one photobleaching session, Alexa Fluor 594® fluorescence dropped to 19% of its original fluorescence; in contrast, after one photobleaching session, Alexa Fluor 594® FluoroNanogold™ fluorescence dropped to 53% of its original intensity. This result confirms that Alexa Fluor 594® FluoroNanogold™ is a much better donor probe than is Alexa Fluor 594®. The new method (a) creates a double confirmation method in determining structure and orientation of synaptic architecture, (b) allows development of a two-dimensional in vitro model to be used for precise testing of multiple parameters, and (c) increases throughput. Future work will include development of FluoroNanogold™ probes with different sizes of gold for additional correlative microscopy studies.
De Groot, G. A.; During, H. J.; Ansell, S. W.; Schneider, H.; Bremer, P.; Wubs, E. R. J.; Maas, J. W.; Korpelainen, H.; Erkens, R. H. J.
2012-01-01
Background and Aims Populations established by long-distance colonization are expected to show low levels of genetic variation per population, but strong genetic differentiation among populations. Whether isolated populations indeed show this genetic signature of isolation depends on the amount and diversity of diaspores arriving by long-distance dispersal, and time since colonization. For ferns, however, reliable estimates of long-distance dispersal rates remain largely unknown, and previous studies on fern population genetics often sampled older or non-isolated populations. Young populations in recent, disjunct habitats form a useful study system to improve our understanding of the genetic impact of long-distance dispersal. Methods Microsatellite markers were used to analyse the amount and distribution of genetic diversity in young populations of four widespread calcicole ferns (Asplenium scolopendrium, diploid; Asplenium trichomanes subsp. quadrivalens, tetraploid; Polystichum setiferum, diploid; and Polystichum aculeatum, tetraploid), which are rare in The Netherlands but established multiple populations in a forest (the Kuinderbos) on recently reclaimed Dutch polder land following long-distance dispersal. Reference samples from populations throughout Europe were used to assess how much of the existing variation was already present in the Kuinderbos. Key Results A large part of the Dutch and European genetic diversity in all four species was already found in the Kuinderbos. This diversity was strongly partitioned among populations. Most populations showed low genetic variation and high inbreeding coefficients, and were assigned to single, unique gene pools in cluster analyses. Evidence for interpopulational gene flow was low, except for the most abundant species. Conclusions The results show that all four species, diploids as well as polyploids, were capable of frequent long-distance colonization via single-spore establishment. This indicates that even isolated habitats receive dense and diverse spore rains, including genotypes capable of self-fertilization. Limited gene flow may conserve the genetic signature of multiple long-distance colonization events for several decades. PMID:22323427
Pagès, Loïc
2014-01-01
Background and Aims Root branching, and in particular acropetal branching, is a common and important developmental process for increasing the number of growing tips and defining the distribution of their meristem size. This study presents a new method for characterizing the results of this process in natura from scanned images of young, branched parts of excavated roots. The method involves the direct measurement or calculation of seven different traits. Methods Young plants of 45 species of dicots were sampled from fields and gardens with uniform soils. Roots were separated, scanned and then measured using ImageJ software to determine seven traits related to root diameter and interbranch distance. Results The traits exhibited large interspecific variations, and covariations reflecting trade-offs. For example, at the interspecies level, the spacing of lateral roots (interbranch distance along the parent root) was strongly correlated to the diameter of the finest roots found in the species, and showed a continuum between two opposite strategies: making dense and fine lateral roots, or thick and well-spaced laterals. Conclusions A simple method is presented for classification of branching patterns in roots that allows relatively quick sampling and measurements to be undertaken. The feasibility of the method is demonstrated for dicotyledonous species and it has the potential to be developed more broadly for other species and a wider range of environmental conditions. PMID:25062886
Raknes, Guttorm; Hunskaar, Steinar
2014-01-01
We describe a method that uses crowdsourced postcode coordinates and Google Maps to estimate the average distance and travel time for inhabitants of a municipality to a casualty clinic in Norway. The new method was compared with methods based on population centroids, median distance and town hall location, and we used it to examine how distance affects the utilisation of out-of-hours primary care services. At short distances our method showed good correlation with mean travel time and distance. The utilisation of out-of-hours services correlated with postcode-based distances, consistent with previous research. The results show that our method is a reliable and useful tool for estimating average travel distances and travel times.
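As a rough illustration of the distance component only, the sketch below averages great-circle (haversine) distances from postcode coordinates to a clinic, weighted by population; the coordinates and counts are invented, and the original method used Google Maps routes and travel times rather than straight-line distances.

```python
# Illustrative sketch: population-weighted mean great-circle distance from
# postcode coordinates to a single clinic location.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

clinic = (60.39, 5.32)                                   # hypothetical clinic location
postcodes = [((60.41, 5.30), 1200), ((60.35, 5.40), 800), ((60.50, 5.25), 300)]
total = sum(pop for _, pop in postcodes)
mean_km = sum(pop * haversine_km(*xy, *clinic) for xy, pop in postcodes) / total
print(f"population-weighted mean distance: {mean_km:.1f} km")
```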
Correction of bias in belt transect studies of immotile objects
Anderson, D.R.; Pospahala, R.S.
1970-01-01
Unless a correction is made, population estimates derived from a sample of belt transects will be biased if a fraction of the individuals on the sample transects are not counted. An approach, useful for correcting this bias when sampling immotile populations using transects of a fixed width, is presented. The method assumes that a searcher's ability to find objects near the center of the transect is nearly perfect. The method utilizes a mathematical equation, estimated from the data, to represent the searcher's inability to find all objects at increasing distances from the center of the transect. An example of the analysis of data, formation of the equation, and application is presented using waterfowl nesting data collected in Colorado.
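One plausible form of the correction, not necessarily the equation used in the original study, is a half-normal detection function assumed perfect at the transect centreline; the sketch below fits it to hypothetical detection fractions and inflates the raw count by the average detection probability across the strip.

```python
# Hedged sketch of a detection-function bias correction for belt transects,
# assuming a half-normal g(x) with g(0) = 1. All numbers are invented.
import numpy as np
from scipy.optimize import curve_fit

def half_normal(x, sigma):
    return np.exp(-x**2 / (2 * sigma**2))       # detection probability at distance x

band_mid = np.array([1.0, 3.0, 5.0, 7.0, 9.0])   # band midpoints, m from centreline
detected_frac = np.array([0.98, 0.90, 0.70, 0.45, 0.25])
(sigma,), _ = curve_fit(half_normal, band_mid, detected_frac, p0=(5.0,))

half_width = 10.0                                 # strip half-width in metres
x = np.linspace(0, half_width, 200)
mean_detect = half_normal(x, sigma).mean()        # average detection probability over the strip
raw_count = 37                                    # objects actually counted on the transects
print(f"bias-corrected estimate: {raw_count / mean_detect:.1f} objects")
```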
Lead burdens and behavioral impairments of the lined shore crab Pachygrapsus crassipes
Hui, Clifford A.
2002-01-01
A class-based link prediction using Distance Dependent Chinese Restaurant Process
NASA Astrophysics Data System (ADS)
Andalib, Azam; Babamir, Seyed Morteza
2016-08-01
One of the important tasks in relational data analysis is link prediction, which has been successfully applied in many domains such as bioinformatics, information retrieval, etc. Link prediction is defined as predicting the existence or absence of edges between nodes of a network. In this paper, we propose a novel method for link prediction based on the Distance Dependent Chinese Restaurant Process (DDCRP) model, which enables us to utilize information about the topological structure of the network such as the shortest path and connectivity of the nodes. We also propose a new Gibbs sampling algorithm for computing the posterior distribution of the hidden variables based on the training data. Experimental results on three real-world datasets show the superiority of the proposed method over other probabilistic models for the link prediction problem.
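The generative side of the DDCRP prior is easy to sketch: each node links to another with probability proportional to a decaying function of their distance, or to itself with weight alpha, and connected components of the link graph form clusters. This is only the prior draw, not the Gibbs sampler for the posterior proposed in the paper; the distance matrix, decay function and alpha below are illustrative assumptions.

```python
# Hedged sketch of sampling a partition from a distance-dependent CRP prior.
import numpy as np

def sample_ddcrp(dist, alpha=1.0, decay=lambda d: np.exp(-d), rng=None):
    """Draw links: node i joins node j with prob proportional to decay(d_ij), itself with weight alpha."""
    rng = rng or np.random.default_rng()
    n = dist.shape[0]
    links = np.empty(n, dtype=int)
    for i in range(n):
        w = decay(dist[i]).astype(float)
        w[i] = alpha                              # self-link weight
        links[i] = rng.choice(n, p=w / w.sum())
    return links

def clusters_from_links(links):
    """Clusters are connected components of the link graph (simple union-find)."""
    parent = list(range(len(links)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in enumerate(links):
        parent[find(i)] = find(int(j))
    return [find(i) for i in range(len(links))]

rng = np.random.default_rng(3)
pts = rng.uniform(0, 10, size=(8, 2))
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
links = sample_ddcrp(dist, alpha=0.5, rng=rng)
print(links, clusters_from_links(links))
```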
A method to improve the range resolution in stepped frequency continuous wave radar
NASA Astrophysics Data System (ADS)
Kaczmarek, Paweł
2018-04-01
In this paper, one of the high range resolution methods - Aperture Sampling (AS) - is analysed. Unlike MUSIC-based techniques, it proved to be very efficient in terms of achieving an unambiguous synthetic range profile for an ultra-wideband stepped frequency continuous wave radar. Assuming that the minimal distance required to separate two targets in range corresponds to the -3 dB width of the received echo, AS provided a 30.8% improvement in range resolution in the analysed scenario when compared to the results of applying the IFFT. The output data are far superior, in terms of both improved range resolution and reduced side lobe level, to the Inverse Fourier Transform typically used in this area. Furthermore, it does not require prior knowledge or an estimate of the number of targets to be detected in a given scan.
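The IFFT baseline that Aperture Sampling is compared against can be sketched on synthetic stepped-frequency data: the inverse FFT of the measured complex frequency response yields the synthetic range profile. Bandwidth, step size and target ranges below are assumptions, not the paper's parameters.

```python
# Illustrative sketch of an IFFT range profile for a stepped frequency
# continuous wave (SFCW) radar with two synthetic point targets.
import numpy as np

c = 3e8
n_steps, df = 128, 4e6                          # 128 steps of 4 MHz -> 512 MHz bandwidth
f = 1e9 + np.arange(n_steps) * df               # swept carrier frequencies
targets = [(12.0, 1.0), (12.6, 0.8)]            # (range in m, amplitude)

# Frequency response: each target adds a phase term for its round-trip delay.
resp = sum(a * np.exp(-1j * 4 * np.pi * f * r / c) for r, a in targets)
profile = np.abs(np.fft.ifft(resp))
r_axis = np.arange(n_steps) * c / (2 * n_steps * df)    # range bins

print(f"range resolution c/(2B) = {c / (2 * n_steps * df):.2f} m")
print(f"strongest bin at {r_axis[np.argmax(profile)]:.2f} m")
```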
Kinematic measurement from panned cinematography.
Gervais, P; Bedingfield, E W; Wronko, C; Kollias, I; Marchiori, G; Kuntz, J; Way, N; Kuiper, D
1989-06-01
Traditional 2-D cinematography has used a stationary camera with its optical axis perpendicular to the plane of motion. This method has constrained the size of the object plane or has introduced potential errors from a small subject image size with large object field widths. The purpose of this study was to assess a panning technique that could overcome the inherent limitations of small object field widths, small object image sizes and limited movement samples. The proposed technique used a series of reference targets in the object field that provided the necessary scales and origin translations. A 102 m object field was panned. Comparisons between criterion distances and film-measured distances for field widths of 46 m and 22 m resulted in absolute mean differences that were comparable to those of the traditional method.
Quantitative endoscopy: initial accuracy measurements.
Truitt, T O; Adelman, R A; Kelly, D H; Willging, J P
2000-02-01
The geometric optics of an endoscope can be used to determine the absolute size of an object in an endoscopic field without knowing the actual distance from the object. This study explores the accuracy of a technique that estimates absolute object size from endoscopic images. Quantitative endoscopy involves calibrating a rigid endoscope to produce size estimates from 2 images taken with a known traveled distance between the images. The heights of 12 samples, ranging in size from 0.78 to 11.80 mm, were estimated with this calibrated endoscope. Backup distances of 5 mm and 10 mm were used for comparison. The mean percent error for all estimated measurements when compared with the actual object sizes was 1.12%. The mean errors for 5-mm and 10-mm backup distances were 0.76% and 1.65%, respectively. The mean errors for objects <2 mm and > or =2 mm were 0.94% and 1.18%, respectively. Quantitative endoscopy estimates endoscopic image size to within 5% of the actual object size. This method remains promising for quantitatively evaluating object size from endoscopic images. It does not require knowledge of the absolute distance of the endoscope from the object, rather, only the distance traveled by the endoscope between images.
Sidek, Khairul; Khali, Ibrahim
2012-01-01
In this paper, a person identification mechanism implemented with a Cardioid based graph using the electrocardiogram (ECG) is presented. The Cardioid based graph has given reasonably good classification accuracy in terms of differentiating between individuals. However, the current feature extraction method using Euclidean distance could be further improved by using the Mahalanobis distance measurement, producing extracted coefficients which take into account the correlations of the data set. Identification is then done by applying these extracted features to a Radial Basis Function Network. A total of 30 ECG data from the MIT-BIH Normal Sinus Rhythm database (NSRDB) and the MIT-BIH Arrhythmia database (MITDB) were used for development and evaluation purposes. Our experimentation results suggest that the proposed feature extraction method has significantly increased the classification performance of subjects in both databases, with accuracy from 97.50% to 99.80% in NSRDB and 96.50% to 99.40% in MITDB. High sensitivity, specificity and positive predictive values of 99.17%, 99.91% and 99.23% for NSRDB and 99.30%, 99.90% and 99.40% for MITDB also validate the proposed method. This result also indicates that the right feature extraction technique plays a vital role in determining the consistency of the classification accuracy for the Cardioid based person identification mechanism.
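A small sketch of the distance substitution described above, Euclidean versus Mahalanobis distance to a subject's template, is shown below on invented feature vectors; the actual Cardioid-graph features and RBF classifier are not reproduced.

```python
# Illustrative sketch: Mahalanobis distance accounts for correlations in the
# training features, unlike the plain Euclidean distance.
import numpy as np

def mahalanobis(x, mean, cov_inv):
    diff = x - mean
    return float(np.sqrt(diff @ cov_inv @ diff))

rng = np.random.default_rng(4)
train = rng.normal(size=(50, 4))                 # hypothetical per-beat feature vectors
mean = train.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(train, rowvar=False))
query = rng.normal(size=4)
print(f"Euclidean: {np.linalg.norm(query - mean):.3f}, "
      f"Mahalanobis: {mahalanobis(query, mean, cov_inv):.3f}")
```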
NASA Astrophysics Data System (ADS)
Giovanis, D. G.; Shields, M. D.
2018-07-01
This paper addresses uncertainty quantification (UQ) for problems where scalar (or low-dimensional vector) response quantities are insufficient and, instead, full-field (very high-dimensional) responses are of interest. To do so, an adaptive stochastic simulation-based methodology is introduced that refines the probability space based on Grassmann manifold variations. The proposed method has a multi-element character discretizing the probability space into simplex elements using a Delaunay triangulation. For every simplex, the high-dimensional solutions corresponding to its vertices (sample points) are projected onto the Grassmann manifold. The pairwise distances between these points are calculated using appropriately defined metrics and the elements with large total distance are sub-sampled and refined. As a result, regions of the probability space that produce significant changes in the full-field solution are accurately resolved. An added benefit is that an approximation of the solution within each element can be obtained by interpolation on the Grassmann manifold. The method is applied to study the probability of shear band formation in a bulk metallic glass using the shear transformation zone theory.
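One ingredient of the method, a distance between two points on the Grassmann manifold computed from principal angles, can be sketched as follows; the snapshot matrices are random stand-ins for full-field solutions, and the Delaunay triangulation and refinement steps are not shown.

```python
# Hedged sketch of a geodesic Grassmann distance between two subspaces,
# obtained from the principal angles via an SVD of the basis overlap.
import numpy as np

def grassmann_distance(A, B):
    """Geodesic distance between span(A) and span(B) from their principal angles."""
    Qa, _ = np.linalg.qr(A)                      # orthonormal bases of the two subspaces
    Qb, _ = np.linalg.qr(B)
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    theta = np.arccos(np.clip(s, -1.0, 1.0))     # principal angles
    return float(np.linalg.norm(theta))

rng = np.random.default_rng(5)
A = rng.normal(size=(1000, 5))                   # two hypothetical full-field snapshots,
B = rng.normal(size=(1000, 5))                   # each reduced to a rank-5 subspace
print(f"Grassmann distance: {grassmann_distance(A, B):.3f}")
```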
Hart, Michael L.; Drakopoulos, Michael; Reinhard, Christina; Connolley, Thomas
2013-01-01
A complete calibration method to characterize a static planar two-dimensional detector for use in X-ray diffraction at an arbitrary wavelength is described. This method is based upon geometry describing the point of intersection between a cone’s axis and its elliptical conic section. This point of intersection is neither the ellipse centre nor one of the ellipse focal points, but some other point which lies in between. The presented solution is closed form, algebraic and non-iterative in its application, and gives values for the X-ray beam energy, the sample-to-detector distance, the location of the beam centre on the detector surface and the detector tilt relative to the incident beam. Previous techniques have tended to require prior knowledge of either the X-ray beam energy or the sample-to-detector distance, whilst other techniques have been iterative. The new calibration procedure is performed by collecting diffraction data, in the form of diffraction rings from a powder standard, at known displacements of the detector along the beam path. PMID:24068840
Missing value imputation for gene expression data by tailored nearest neighbors.
Faisal, Shahla; Tutz, Gerhard
2017-04-25
High dimensional data like gene expression and RNA-sequences often contain missing values. The subsequent analysis and results based on these incomplete data can suffer strongly from the presence of these missing values. Several approaches to imputation of missing values in gene expression data have been developed, but the task is difficult due to the high dimensionality (number of genes) of the data. Here an imputation procedure is proposed that uses weighted nearest neighbors. Instead of using nearest neighbors defined by a distance that includes all genes, the distance is computed for genes that are apt to contribute to the accuracy of imputed values. The method aims at avoiding the curse of dimensionality, which typically occurs if local methods such as nearest neighbors are applied in high dimensional settings. The proposed weighted nearest neighbors algorithm is compared to existing missing value imputation techniques like mean imputation, KNNimpute and the recently proposed imputation by random forests. We use RNA-sequence and microarray data from studies on human cancer to compare the performance of the methods. The results from simulations as well as real studies show that the weighted distance procedure can successfully handle missing values for high dimensional data structures where the number of predictors is larger than the number of samples. The method typically outperforms the considered competitors.
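A simplified sketch of nearest-neighbour imputation with distance weights is given below; it selects neighbouring genes using only jointly observed entries and fills the gap with an inverse-distance-weighted average, whereas the paper's method additionally restricts and weights the genes entering the distance, which is omitted here. The toy matrix is an assumption.

```python
# Illustrative sketch of weighted k-nearest-neighbour imputation for a
# genes-by-samples matrix containing NaNs.
import numpy as np

def wnn_impute(X, k=3):
    X = np.asarray(X, dtype=float).copy()
    for g, s in zip(*np.where(np.isnan(X))):            # gene g, sample s with a missing entry
        dists, cands = [], []
        for other in range(X.shape[0]):
            if other == g or np.isnan(X[other, s]):
                continue
            both = ~np.isnan(X[g]) & ~np.isnan(X[other])  # columns observed in both genes
            if both.sum() == 0:
                continue
            dists.append(np.sqrt(np.mean((X[g, both] - X[other, both]) ** 2)))
            cands.append(other)
        order = np.argsort(dists)[:k]
        w = 1.0 / (np.array(dists)[order] + 1e-8)         # closer genes weigh more
        X[g, s] = np.sum(w * X[np.array(cands)[order], s]) / np.sum(w)
    return X

genes = np.array([[1.0, 2.0, 3.0, 4.0],
                  [1.1, 2.1, np.nan, 4.1],
                  [0.9, 1.8, 2.9, 4.2],
                  [5.0, 1.0, 0.5, 2.0]])
print(np.round(wnn_impute(genes, k=2), 2))
```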
An interpolation method for stream habitat assessments
Sheehan, Kenneth R.; Welsh, Stuart A.
2015-01-01
Interpolation of stream habitat can be very useful for habitat assessment. Using a small number of habitat samples to predict the habitat of larger areas can reduce time and labor costs as long as it provides accurate estimates of habitat. The spatial correlation of stream habitat variables such as substrate and depth improves the accuracy of interpolated data. Several geographical information system interpolation methods (natural neighbor, inverse distance weighted, ordinary kriging, spline, and universal kriging) were used to predict substrate and depth within a 210.7 m² section of a second-order stream based on 2.5% and 5.0% sampling of the total area. Depth and substrate were recorded for the entire study site and compared with the interpolated values to determine the accuracy of the predictions. In all instances, the 5% interpolations were more accurate for both depth and substrate than the 2.5% interpolations, which achieved accuracies up to 95% and 92%, respectively. Interpolations of depth based on 2.5% sampling attained accuracies of 49–92%, whereas those based on 5% sampling attained accuracies of 57–95%. Natural neighbor interpolation was more accurate than that using the inverse distance weighted, ordinary kriging, spline, and universal kriging approaches. Our findings demonstrate the effective use of minimal amounts of small-scale data for the interpolation of habitat over large areas of a stream channel. Use of this method will provide time and cost savings in the assessment of large sections of rivers as well as functional maps to aid the habitat-based management of aquatic species.
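For concreteness, inverse distance weighting, one of the interpolators compared above, can be written in a few lines (NumPy). This is a generic textbook sketch, not the GIS implementation used in the study, and the power parameter shown is an assumed default.

    import numpy as np

    def idw(xy_known, z_known, xy_query, power=2.0):
        """Inverse-distance-weighted prediction of a habitat variable (e.g. depth)
        at query points from a small set of sampled points."""
        d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
        d = np.maximum(d, 1e-9)          # avoid division by zero at sampled points
        w = 1.0 / d ** power
        return (w @ z_known) / w.sum(axis=1)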
Cunningham, Daniel J.; Shearer, David A.; Carter, Neil; Drawer, Scott; Pollard, Ben; Bennett, Mark; Eager, Robin; Cook, Christian J.; Farrell, John; Russell, Mark
2018-01-01
The assessment of competitive movement demands in team sports has traditionally relied upon global positioning system (GPS) analyses presented as fixed-time epochs (e.g., 5–40 min). More recently, presenting game data as a rolling average has become prevalent due to concerns over a loss of sampling resolution associated with the windowing of data over fixed periods. Accordingly, this study compared rolling average (ROLL) and fixed-time (FIXED) epochs for quantifying the peak movement demands of international rugby union match-play as a function of playing position. Elite players from three different squads (n = 119) were monitored using 10 Hz GPS during 36 matches played in the 2014–2017 seasons. Players categorised broadly as forwards and backs, and then by positional sub-group (FR: front row, SR: second row, BR: back row, HB: half back, MF: midfield, B3: back three), were monitored during match-play for peak values of high-speed running (>5 m·s⁻¹; HSR) and relative distance covered (m·min⁻¹) over 60–300 s using two types of sample-epoch (ROLL, FIXED). Irrespective of the method used, as the epoch length increased, values for the intensity of running actions decreased (e.g., for the backs using the ROLL method, distance covered decreased from 177.4 ± 20.6 m·min⁻¹ in the 60 s epoch to 107.5 ± 13.3 m·min⁻¹ for the 300 s epoch). For the team as a whole, and irrespective of position, estimates of fixed effects indicated significant between-method differences across all time-points for both relative distance covered and HSR. Movement demands were underestimated consistently by FIXED versus ROLL, with differences being most pronounced using 60 s epochs (95% CI HSR: -6.05 to -4.70 m·min⁻¹; 95% CI distance: -18.45 to -16.43 m·min⁻¹). For all HSR time epochs except one, all backs groups increased more (p < 0.01) from FIXED to ROLL than the forward groups. Linear mixed modelling of ROLL data highlighted that for HSR (except the 60 s epoch), SR was the only group not significantly different from FR. For relative distance covered, all other position groups were greater than FR (p < 0.05). The FIXED method underestimated both relative distance (~11%) and HSR values (up to ~20%) compared with the ROLL method. These differences were exaggerated for the HSR variable in the backs, who covered the greatest HSR distance, highlighting an important consideration for those implementing the FIXED method of analysis. These data provide coaches with a worst-case-scenario reference for the running demands of 60–300 s periods of play. This information offers novel insight into game demands and can be used to inform the design of training games to increase the specificity of preparation for the most demanding phases of matches. PMID:29621279
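The difference between the two epoch types comes down to how the window is moved over the per-second distance trace. The sketch below (NumPy) illustrates this for a 1 Hz trace; the study itself used 10 Hz GPS, and the function name and unit conversion are illustrative assumptions.

    import numpy as np

    def peak_demand(dist_per_s, epoch_s, method="rolling"):
        """Peak relative distance (m/min) over an epoch from per-second distances.
        'fixed' splits the match into consecutive windows; 'rolling' slides the
        window one sample at a time, so its maximum can only be >= the fixed value."""
        x = np.asarray(dist_per_s, dtype=float)
        if method == "fixed":
            n = len(x) // epoch_s
            sums = x[:n * epoch_s].reshape(n, epoch_s).sum(axis=1)
        else:                                  # rolling window via cumulative sum
            c = np.concatenate(([0.0], np.cumsum(x)))
            sums = c[epoch_s:] - c[:-epoch_s]
        return sums.max() * 60.0 / epoch_s     # convert m per epoch to m/min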
NASA Astrophysics Data System (ADS)
Padmanabhan, Nikhil; Xu, Xiaoying; Eisenstein, Daniel J.; Scalzo, Richard; Cuesta, Antonio J.; Mehta, Kushal T.; Kazin, Eyal
2012-12-01
We present the first application of density field reconstruction to a galaxy survey, undoing the smoothing of the baryon acoustic oscillation (BAO) feature due to non-linear gravitational evolution and thereby improving the precision of the distance measurements possible. We apply the reconstruction technique to the clustering of galaxies from the Sloan Digital Sky Survey (SDSS) Data Release 7 (DR7) luminous red galaxy (LRG) sample, sharpening the BAO feature and achieving a 1.9 per cent measurement of the distance to z = 0.35. We update the reconstruction algorithm of Eisenstein et al. to account for the effects of survey geometry as well as redshift-space distortions and validate it on 160 LasDamas simulations. We demonstrate that reconstruction sharpens the BAO feature in the angle averaged galaxy correlation function, reducing the non-linear smoothing scale Σnl from 8.1 to 4.4 Mpc h⁻¹. Reconstruction also significantly reduces the effects of redshift-space distortions at the BAO scale, isotropizing the correlation function. This sharpened BAO feature yields an unbiased distance estimate (<0.2 per cent) and reduces the scatter from 3.3 to 2.1 per cent. We demonstrate the robustness of these results to the various reconstruction parameters, including the smoothing scale, the galaxy bias and the linear growth rate. Applying this reconstruction algorithm to the SDSS LRG DR7 sample improves the significance of the BAO feature in these data from 3.3σ for the unreconstructed correlation function to 4.2σ after reconstruction. We estimate a relative distance scale DV/rs to z = 0.35 of 8.88 ± 0.17, where rs is the sound horizon and DV ≡ (DA²H⁻¹)^(1/3) is a combination of the angular diameter distance DA and Hubble parameter H. Assuming a sound horizon of 154.25 Mpc, this translates into a distance measurement DV(z = 0.35) = 1.356 ± 0.025 Gpc. We find that reconstruction reduces the distance error in the DR7 sample from 3.5 to 1.9 per cent, equivalent to a survey with three times the volume of SDSS.
Karimzadeh, R; Hejazi, M J; Helali, H; Iranipour, S; Mohammadi, S A
2011-10-01
Eurygaster integriceps Puton (Hemiptera: Scutelleridae) is the most serious insect pest of wheat (Triticum aestivum L.) and barley (Hordeum vulgare L.) in Iran. In this study, spatio-temporal distribution of this pest was determined in wheat by using spatial analysis by distance indices (SADIE) and geostatistics. Global positioning and geographic information systems were used for spatial sampling and mapping the distribution of this insect. The study was conducted for three growing seasons in Gharamalek, an agricultural region to the west of Tabriz, Iran. Weekly sampling began when E. integriceps adults migrated to wheat fields from overwintering sites and ended when the new generation adults appeared at the end of season. The adults were sampled using 1- by 1-m quadrat and distance-walk methods. A sweep net was used for sampling the nymphs, and five 180° sweeps were considered as the sampling unit. The results of spatial analyses by using geostatistics and SADIE indicated that E. integriceps adults were clumped after migration to fields and had significant spatial dependency. The second- and third-instar nymphs showed aggregated spatial structure in the middle of growing season. At the end of the season, population distribution changed toward random or regular patterns; and fourth and fifth instars had weaker spatial structure compared with younger nymphs. In Iran, management measures for E. integriceps in wheat fields are mainly applied against overwintering adults, as well as second and third instars. Because of the aggregated distribution of these life stages, site-specific spraying of chemicals is feasible in managing E. integriceps.
[A New Distance Metric between Different Stellar Spectra: the Residual Distribution Distance].
Liu, Jie; Pan, Jing-chang; Luo, A-li; Wei, Peng; Liu, Meng
2015-12-01
The distance metric is an important issue in spectroscopic survey data processing: it defines how the distance between two different spectra is calculated, and the classification, clustering, parameter measurement and outlier mining of spectral data are all built on it. The choice of distance metric therefore affects the performance of all of these tasks. With the development of large-scale stellar spectroscopic sky surveys, defining a more efficient distance metric for stellar spectra has become a very important issue in spectral data processing. Addressing this problem, and taking full account of the characteristics and data features of stellar spectra, a new distance measure for stellar spectra named the Residual Distribution Distance is proposed. To compute the distance, the two spectra are first normalized to the same scale, the residual corresponding to each common wavelength is then calculated, and the standard deviation of this residual spectrum is used as the distance measure. The measure can be used for stellar classification, clustering, the measurement of stellar atmospheric physical parameters, and related tasks. This paper takes stellar subclass classification as an example to test the distance measure. The results show that the distance defined by the proposed method describes the gap between different types of spectra in the classification more effectively than other methods, and that it can be applied well in other related applications. The effect of the signal-to-noise ratio (SNR) on the performance of the proposed method is also studied. The results show that the distance is affected by the SNR: the smaller the SNR, the greater its impact on the distance, while for SNR larger than 10 the signal-to-noise ratio has little effect on classification performance.
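Read literally, the metric can be sketched in a few lines (NumPy). The scaling step shown here (median normalisation) is an assumption; the abstract only specifies that the two spectra are brought to the same scale before the residual is taken.

    import numpy as np

    def residual_distribution_distance(flux_a, flux_b):
        """Distance between two spectra sampled on the same wavelength grid:
        scale both to a common level, take the per-pixel residual, and use the
        standard deviation of that residual as the distance."""
        a = np.asarray(flux_a, dtype=float)
        b = np.asarray(flux_b, dtype=float)
        a = a / np.median(a)      # bring both spectra to the same scale (assumed scaling)
        b = b / np.median(b)
        residual = a - b          # residual at each common wavelength
        return np.std(residual)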
Spatial variability in airborne pollen concentrations.
Raynor, G S; Ogden, E C; Hayes, J V
1975-03-01
Tests were conducted to determine the relationship between airborne pollen concentrations and distance. Simultaneous samples were taken in 171 tests with sets of eight rotoslide samplers spaced from 1 to 486 m apart in straight lines. Use of all possible pairs gave 28 separation distances. Tests were conducted over a 2-year period in urban and rural locations distant from major pollen sources during both tree and ragweed pollen seasons. Samples were taken at a height of 1.5 m during 5- to 20-minute periods. Tests were grouped by pollen type, location, year, and direction of the wind relative to the line. Data were analyzed to evaluate variability without regard to sampler spacing and variability as a function of separation distance. The mean, standard deviation, coefficient of variation, ratio of maximum to the mean, and ratio of minimum to the mean were calculated for each test, each group of tests, and all cases. The average coefficient of variation is 0.21, the maximum over the mean, 1.39 and the minimum over the mean, 0.69. No relationship was found with experimental conditions. Samples taken at the minimum separation distance had a mean difference of 18 per cent. Differences between pairs of samples increased with distance in 10 of 13 groups. These results suggest that airborne pollens are not always well mixed in the lower atmosphere and that a sample becomes less representative with increasing distance from the sampling location.
[DNA barcoding and its utility in commonly-used medicinal snakes].
Huang, Yong; Zhang, Yue-yun; Zhao, Cheng-jian; Xu, Yong-li; Gu, Ying-le; Huang, Wen-qi; Lin, Kui; Li, Li
2015-03-01
Accurate identification of traditional Chinese medicine is crucial for its research, production and application. DNA barcoding based on the mitochondrial gene coding for cytochrome c oxidase subunit I (COI) is increasingly used for identification of traditional Chinese medicine. Sequencing with universal barcoding primers, we assessed the feasibility of the DNA barcoding method for identifying commonly used medicinal snakes (a total of 109 samples belonging to 19 species, 15 genera and 6 families). Phylogenetic trees were constructed using the neighbor-joining method. The results indicated that the mean G + C content (46.5%) was lower than that of A + T (53.5%). As calculated with the Kimura 2-parameter model, the mean intraspecific genetic distance of Trimeresurus albolabris, Ptyas dhumnades and Lycodon rufozonatus was greater than 2%. The phylogenetic relationships further suggested that the identification of one sample of T. albolabris was erroneous, and that some samples of P. dhumnades had also been misidentified, namely P. korros originally identified as P. dhumnades. The factors underlying the large intraspecific genetic distances of L. rufozonatus need further study. Therefore, DNA barcoding is feasible for the identification of medicinal snakes and greatly complements the morphological classification method. Further study of its use in the identification of traditional Chinese medicine is warranted.
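The 2% intraspecific threshold above is based on Kimura 2-parameter (K2P) distances, which follow directly from the proportions of transition and transversion differences between two aligned sequences. A minimal sketch (Python), ignoring gaps and ambiguity codes:

    import math

    PURINES, PYRIMIDINES = {"A", "G"}, {"C", "T"}

    def k2p_distance(seq1, seq2):
        """Kimura 2-parameter distance between two aligned COI sequences:
        d = -0.5*ln(1 - 2P - Q) - 0.25*ln(1 - 2Q), where P and Q are the
        proportions of transition and transversion differences over compared sites."""
        pairs = [(a, b) for a, b in zip(seq1.upper(), seq2.upper())
                 if a in "ACGT" and b in "ACGT"]
        n = len(pairs)
        transitions = sum(1 for a, b in pairs if a != b and
                          ({a, b} <= PURINES or {a, b} <= PYRIMIDINES))
        transversions = sum(1 for a, b in pairs if a != b) - transitions
        P, Q = transitions / n, transversions / n
        return -0.5 * math.log(1 - 2 * P - Q) - 0.25 * math.log(1 - 2 * Q)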
NASA Astrophysics Data System (ADS)
Breitfelder, J.; Mérand, A.; Kervella, P.; Gallenne, A.; Szabados, L.; Anderson, R. I.; Le Bouquin, J.-B.
2016-03-01
Context. The distance to pulsating stars is classically estimated using the parallax-of-pulsation (PoP) method, which combines spectroscopic radial velocity (RV) measurements and angular diameter (AD) estimates to derive the distance of the star. A particularly important application of this method is the determination of Cepheid distances in view of the calibration of their distance scale. However, the conversion of radial to pulsational velocities in the PoP method relies on a poorly calibrated parameter, the projection factor (p-factor). Aims: We aim to measure empirically the value of the p-factors of a homogeneous sample of nine bright Galactic Cepheids for which trigonometric parallaxes were measured with the Hubble Space Telescope (HST) Fine Guidance Sensor. Methods: We use the SPIPS algorithm, a robust implementation of the PoP method that combines photometry, interferometry, and radial velocity measurements in a global modeling of the pulsation of the star. We obtained new interferometric angular diameter measurements using the PIONIER instrument at the Very Large Telescope Interferometer (VLTI), completed by data from the literature. Using the known distance as an input, we derive the value of the p-factor of the nine stars of our sample and study its dependence with the pulsation period. Results: We find the following p-factors: p = 1.20 ± 0.12 for RT Aur, p = 1.48 ± 0.18 for T Vul, p = 1.14 ± 0.10 for FF Aql, p = 1.31 ± 0.19 for Y Sgr, p = 1.39 ± 0.09 for X Sgr, p = 1.35 ± 0.13 for W Sgr, p = 1.36 ± 0.08 for β Dor, p = 1.41 ± 0.10 for ζ Gem, and p = 1.23 ± 0.12 for ℓ Car. Conclusions: The values of the p-factors that we obtain are consistently close to p = 1.324 ± 0.024. We observe some dispersion around this average value, but the observed distribution is statistically consistent with a constant value of the p-factor as a function of the pulsation period (χ2 = 0.669). The error budget of our determination of the p-factor values is presently dominated by the uncertainty on the parallax, a limitation that will soon be waived by Gaia. Based on observations carried out with ESO facilities at Paranal Observatory under program 093.D-0316, 094.D-0773 and 094.D-0584.
Surface sampling techniques for 3D object inspection
NASA Astrophysics Data System (ADS)
Shih, Chihhsiong S.; Gerhardt, Lester A.
1995-03-01
While the uniform sampling method is quite popular for pointwise measurement of manufactured parts, this paper proposes three novel sampling strategies which emphasize 3D non-uniform inspection capability. They are: (a) the adaptive sampling, (b) the local adjustment sampling, and (c) the finite element centroid sampling techniques. The adaptive sampling strategy is based on a recursive surface subdivision process. Two different approaches are described for this adaptive sampling strategy. One uses triangle patches while the other uses rectangle patches. Several real-world objects were tested using these two algorithms. Preliminary results show that sample points are distributed more closely around edges, corners, and vertices as desired for many classes of objects. Adaptive sampling using triangle patches is shown to generally perform better than both uniform and adaptive sampling using rectangle patches. The local adjustment sampling strategy uses a set of predefined starting points and then finds the local optimum position of each nodal point. This method approximates the object by moving the points toward object edges and corners. In a hybrid approach, uniform point sets and non-uniform point sets, first preprocessed by the adaptive sampling algorithm on a real-world object, were then tested using the local adjustment sampling method. The results show that the initial point sets, when preprocessed by adaptive sampling using triangle patches, are moved the least distance by the subsequently applied local adjustment method, again showing the superiority of this method. The finite element sampling technique samples the centroids of the surface triangle meshes produced from the finite element method. The performance of this algorithm was compared to that of the adaptive sampling using triangular patches. The adaptive sampling with triangular patches was once again shown to be better on different classes of objects.
Sindall, Paul; Lenton, John P.; Whytock, Katie; Tolfrey, Keith; Oyster, Michelle L.; Cooper, Rory A.; Goosey-Tolfrey, Victoria L.
2013-01-01
Purpose To compare the criterion validity and accuracy of a 1 Hz non-differential global positioning system (GPS) and data logger device (DL) for the measurement of wheelchair tennis court movement variables. Methods Initial validation of the DL device was performed. GPS and DL were fitted to the wheelchair and used to record distance (m) and speed (m/second) during (a) tennis field (b) linear track, and (c) match-play test scenarios. Fifteen participants were monitored at the Wheelchair British Tennis Open. Results Data logging validation showed underestimations for distance in right (DLR) and left (DLL) logging devices at speeds >2.5 m/second. In tennis-field tests, GPS underestimated distance in five drills. DLL was lower than both (a) criterion and (b) DLR in drills moving forward. Reversing drill direction showed that DLR was lower than (a) criterion and (b) DLL. GPS values for distance and average speed for match play were significantly lower than equivalent values obtained by DL (distance: 2816 (844) vs. 3952 (1109) m, P = 0.0001; average speed: 0.7 (0.2) vs. 1.0 (0.2) m/second, P = 0.0001). Higher peak speeds were observed in DL (3.4 (0.4) vs. 3.1 (0.5) m/second, P = 0.004) during tennis match play. Conclusions Sampling frequencies of 1 Hz are too low to accurately measure distance and speed during wheelchair tennis. GPS units with a higher sampling rate should be advocated in further studies. Modifications to existing DL devices may be required to increase measurement precision. Further research into the validity of movement devices during match play will further inform the demands and movement patterns associated with wheelchair tennis. PMID:23820154
Estimating the carbon in coarse woody debris with perpendicular distance sampling. Chapter 6
Harry T. Valentine; Jeffrey H. Gove; Mark J. Ducey; Timothy G. Gregoire; Michael S. Williams
2008-01-01
Perpendicular distance sampling (PDS) is a design for sampling the population of pieces of coarse woody debris (logs) in a forested tract. In application, logs are selected at sample points with probability proportional to volume. Consequently, aggregate log volume per unit land area can be estimated from tallies of logs at sample points. In this chapter we provide...
Microfabricated capillary array electrophoresis device and method
Simpson, Peter C.; Mathies, Richard A.; Woolley, Adam T.
2000-01-01
A capillary array electrophoresis (CAE) micro-plate with an array of separation channels connected to an array of sample reservoirs on the plate. The sample reservoirs are organized into one or more sample injectors. One or more waste reservoirs are provided to collect wastes from reservoirs in each of the sample injectors. Additionally, a cathode reservoir is also multiplexed with one or more separation channels. To complete the electrical path, an anode reservoir which is common to some or all separation channels is also provided on the micro-plate. Moreover, the channel layout keeps the distance from the anode to each of the cathodes approximately constant.
Microfabricated capillary array electrophoresis device and method
Simpson, Peter C.; Mathies, Richard A.; Woolley, Adam T.
2004-06-15
A capillary array electrophoresis (CAE) micro-plate with an array of separation channels connected to an array of sample reservoirs on the plate. The sample reservoirs are organized into one or more sample injectors. One or more waste reservoirs are provided to collect wastes from reservoirs in each of the sample injectors. Additionally, a cathode reservoir is also multiplexed with one or more separation channels. To complete the electrical path, an anode reservoir which is common to some or all separation channels is also provided on the micro-plate. Moreover, the channel layout keeps the distance from the anode to each of the cathodes approximately constant.
Talker Localization Based on Interference between Transmitted and Reflected Audible Sound
NASA Astrophysics Data System (ADS)
Nakayama, Masato; Nakasako, Noboru; Shinohara, Toshihiro; Uebo, Tetsuji
In many engineering fields, the distance to a target is very important. Common distance measurement methods use the time delay between transmitted and reflected waves, but short distances are difficult to estimate in this way. On the other hand, methods using phase interference to measure short distances are well known in the field of microwave radar. We have therefore previously proposed a distance estimation method based on interference between transmitted and reflected audible sound, which can measure the distance between a microphone and a target with one microphone and one loudspeaker. In this paper, we propose a talker localization method based on this phase-interference distance estimation. We extend the distance estimation method to two microphones (a microphone array) in order to estimate the talker position, which the proposed method obtains by measuring the distance and direction between the target and the microphone array. In addition, the talker's own speech acts as noise in the proposed method. Therefore, we also propose combining the proposed method with the CSP (cross-power spectrum phase analysis) method, which is one of the DOA (direction of arrival) estimation methods. We evaluated the performance of talker localization in real environments, and the experimental results show the effectiveness of the proposed method.
Prospects and Problems for Identification of Poisonous Plants in China using DNA Barcodes.
Xie, Lei; Wang, Ying Wei; Guan, Shan Yue; Xie, Li Jing; Long, Xin; Sun, Cheng Ye
2014-10-01
Poisonous plants are a deadly threat to public health in China. The traditional clinical diagnosis of the toxic plants is inefficient, fallible, and dependent upon experts. In this study, we tested the performance of DNA barcodes for identification of the most threatening poisonous plants in China. Seventy-four accessions of 27 toxic plant species in 22 genera and 17 families were sampled and three DNA barcodes (matK, rbcL, and ITS) were amplified, sequenced and tested. Three methods, Blast, pairwise global alignment (PWG) distance, and Tree-Building were tested for discrimination power. The primer universality of all the three markers was high. Except in the case of ITS for Hemerocallis minor, the three barcodes were successfully generated from all the selected species. Among the three methods applied, Blast showed the lowest discrimination rate, whereas PWG Distance and Tree-Building methods were equally effective. The ITS barcode showed highest discrimination rates using the PWG Distance and Tree-Building methods. When the barcodes were combined, discrimination rates were increased for the Blast method. DNA barcoding technique provides us a fast tool for clinical identification of poisonous plants in China. We suggest matK, rbcL, ITS used in combination as DNA barcodes for authentication of poisonous plants. Copyright © 2014 The Editorial Board of Biomedical and Environmental Sciences. Published by China CDC. All rights reserved.
Expansion patterns and parallaxes for planetary nebulae
NASA Astrophysics Data System (ADS)
Schönberner, D.; Balick, B.; Jacob, R.
2018-02-01
Aims: We aim to determine individual distances to a small number of rather round, quite regularly shaped planetary nebulae by combining their angular expansion in the plane of the sky with a spectroscopically measured expansion along the line of sight. Methods: We combined up to three epochs of Hubble Space Telescope imaging data and determined the angular proper motions of rim and shell edges and of other features. These results are combined with measured expansion speeds to determine individual distances by assuming that line of sight and sky-plane expansions are equal. We employed 1D radiation-hydrodynamics simulations of nebular evolution to correct for the difference between the spectroscopically measured expansion velocities of rim and shell and of their respective shock fronts. Results: Rim and shell are two independently expanding entities, driven by different physical mechanisms, although their model-based expansion timescales are quite similar. We derive good individual distances for 15 objects, and the main results are as follows: (i) distances derived from rim and shell agree well; (ii) comparison with the statistical distances in the literature gives reasonable agreement; (iii) our distances disagree with those derived by spectroscopic methods; (iv) central-star "plateau" luminosities range from about 2000 L⊙ to well below 10 000 L⊙, with a mean value at about 5000 L⊙, in excellent agreement with other samples of known distance (Galactic bulge, Magellanic Clouds, and K648 in the globular cluster M 15); (v) the central-star mass range is rather restricted: from about 0.53 to about 0.56 M⊙, with a mean value of 0.55 M⊙. Conclusions: The expansion measurements of nebular rim and shell edges confirm the predictions of radiation-hydrodynamics simulations and offer a reliable method for the evaluation of distances to suited objects. Results of this paper are based on observations made with the NASA/ESA Hubble Space Telescope in Cycle 16 (GO11122) and older data obtained from the Data Archive at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.
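The underlying expansion-parallax arithmetic is simple: assuming the line-of-sight and sky-plane expansions are equal, the spectroscopic velocity and the angular proper motion of the same feature give the distance directly. The numbers below are illustrative only; the study's actual analysis further corrects the measured gas velocities to shock-front velocities using the radiation-hydrodynamics models.

    # Expansion-parallax distance: a tangential speed V_t (km/s) and an angular
    # proper motion mu (mas/yr) of the same nebular feature give
    #     D [pc] = V_t / (4.74e-3 * mu),
    # since 4.74 km/s corresponds to 1 arcsec/yr at a distance of 1 pc.
    V_t = 30.0    # km/s, expansion velocity (assumed equal to the sky-plane speed)
    mu = 3.0      # mas/yr, measured angular expansion rate of the rim (illustrative)
    D_pc = V_t / (4.74e-3 * mu)
    print(f"D = {D_pc:.0f} pc")   # ~2110 pc for these illustrative numbers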
Li, C T; Shi, C H; Wu, J G; Xu, H M; Zhang, H Z; Ren, Y L
2004-04-01
The selection of an appropriate sampling strategy and a clustering method is important in the construction of core collections based on predicted genotypic values in order to retain the greatest degree of genetic diversity of the initial collection. In this study, methods of developing rice core collections were evaluated based on the predicted genotypic values for 992 rice varieties with 13 quantitative traits. The genotypic values of the traits were predicted by the adjusted unbiased prediction (AUP) method. Based on the predicted genotypic values, Mahalanobis distances were calculated and employed to measure the genetic similarities among the rice varieties. Six hierarchical clustering methods, including the single linkage, median linkage, centroid, unweighted pair-group average, weighted pair-group average and flexible-beta methods, were combined with random, preferred and deviation sampling to develop 18 core collections of rice germplasm. The results show that the deviation sampling strategy in combination with the unweighted pair-group average method of hierarchical clustering retains the greatest degree of genetic diversity of the initial collection. The core collections sampled using predicted genotypic values had more genetic diversity than those based on phenotypic values.
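Given the matrix of predicted genotypic values, the Mahalanobis distances used above follow directly from the trait covariance matrix. A minimal NumPy sketch (the AUP prediction step and the clustering itself are not shown, and the function name is illustrative):

    import numpy as np

    def mahalanobis_matrix(G):
        """Pairwise Mahalanobis distances between accessions from a matrix G of
        predicted genotypic values (accessions x traits).  The trait covariance
        matrix accounts for scale differences and correlations among the traits."""
        S_inv = np.linalg.pinv(np.cov(G, rowvar=False))
        diff = G[:, None, :] - G[None, :, :]
        return np.sqrt(np.einsum("ijk,kl,ijl->ij", diff, S_inv, diff))

The resulting matrix can then be fed to any of the hierarchical clustering methods listed above, combined with the chosen sampling strategy.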
Parasites as biological tags of fish stocks: a meta-analysis of their discriminatory power.
Poulin, Robert; Kamiya, Tsukushi
2015-01-01
The use of parasites as biological tags to discriminate among marine fish stocks has become a widely accepted method in fisheries management. Here, we first link this approach to its unstated ecological foundation, the decay in the similarity of the species composition of assemblages as a function of increasing distance between them, a phenomenon almost universal in nature. We explain how distance decay of similarity can influence the use of parasites as biological tags. Then, we perform a meta-analysis of 61 uses of parasites as tags of marine fish populations in multivariate discriminant analyses, obtained from 29 articles. Our main finding is that across all studies, the observed overall probability of correct classification of fish based on parasite data was about 71%. This corresponds to a two-fold improvement over the rate of correct classification expected by chance alone, and the average effect size (Zr = 0·463) computed from the original values was also indicative of a medium-to-large effect. However, none of the moderator variables included in the meta-analysis had a significant effect on the proportion of correct classification; these moderators included the total number of fish sampled, the number of parasite species used in the discriminant analysis, the number of localities from which fish were sampled, the minimum and maximum distance between any pair of sampling localities, etc. Therefore, there are no clear-cut situations in which the use of parasites as tags is more useful than others. Finally, we provide recommendations for the future usage of parasites as tags for stock discrimination, to ensure that future applications of the method achieve statistical rigour and a high discriminatory power.
Core Hunter 3: flexible core subset selection.
De Beukelaer, Herman; Davenport, Guy F; Fack, Veerle
2018-05-31
Core collections provide genebank curators and plant breeders a way to reduce size of their collections and populations, while minimizing impact on genetic diversity and allele frequency. Many methods have been proposed to generate core collections, often using distance metrics to quantify the similarity of two accessions, based on genetic marker data or phenotypic traits. Core Hunter is a multi-purpose core subset selection tool that uses local search algorithms to generate subsets relying on one or more metrics, including several distance metrics and allelic richness. In version 3 of Core Hunter (CH3) we have incorporated two new, improved methods for summarizing distances to quantify diversity or representativeness of the core collection. A comparison of CH3 and Core Hunter 2 (CH2) showed that these new metrics can be effectively optimized with less complex algorithms, as compared to those used in CH2. CH3 is more effective at maximizing the improved diversity metric than CH2, still ensures a high average and minimum distance, and is faster for large datasets. Using CH3, a simple stochastic hill-climber is able to find highly diverse core collections, and the more advanced parallel tempering algorithm further increases the quality of the core and further reduces variability across independent samples. We also evaluate the ability of CH3 to simultaneously maximize diversity, and either representativeness or allelic richness, and compare the results with those of the GDOpt and SimEli methods. CH3 can sample equally representative cores as GDOpt, which was specifically designed for this purpose, and is able to construct cores that are simultaneously more diverse, and either are more representative or have higher allelic richness, than those obtained by SimEli. In version 3, Core Hunter has been updated to include two new core subset selection metrics that construct cores for representativeness or diversity, with improved performance. It combines and outperforms the strengths of other methods, as it (simultaneously) optimizes a variety of metrics. In addition, CH3 is an improvement over CH2, with the option to use genetic marker data or phenotypic traits, or both, and improved speed. Core Hunter 3 is freely available on http://www.corehunter.org .
Nondestructive Method For Measuring The Scattering Coefficient Of Bulk Material
NASA Astrophysics Data System (ADS)
Groenhuis, R. A. J.; ten Bosch, J. J.
1981-05-01
During demineralization and remineralization of dental enamel its structure changes, resulting in a change of the absorption and scattering coefficients of the enamel. By measuring these coefficients during demineralization and remineralization these processes can be monitored in a non-destructive way. For this purpose an experimental arrangement was made: a fibre illuminates a spot on the sample with monochromatic light with a wavelength between 400 nm and 700 nm; a photomultiplier measures the luminance of the light back-scattered by the sample as a function of the distance from the measuring spot to the spot of illumination. In a Monte Carlo model this luminance is simulated for the same geometry, given the scattering and absorption coefficients of a sample. The scattering and absorption coefficients of the sample are then determined by selecting the theoretical curve that fits the experimental one. Scattering coefficients below 10 mm⁻¹ and absorption coefficients obtained with this method on calibration samples correspond well with those obtained with another method. Scattering coefficients above 10 mm⁻¹ (paper samples) were measured too low. This is perhaps caused by the anisotropic structure of paper sheets. The method is very suitable for measuring the scattering and absorption coefficients of bulk materials.
Soil sampling kit and a method of sampling therewith
Thompson, Cyril V.
1991-01-01
A soil sampling device and a sample containment device for containing a soil sample are disclosed. In addition, a method for taking a soil sample using the soil sampling device and soil sample containment device to minimize the loss of any volatile organic compounds contained in the soil sample prior to analysis is disclosed. The soil sampling device comprises two close fitting, longitudinal tubular members of suitable length, the inner tube having the outward end closed. With the inner closed tube withdrawn a selected distance, the outer tube can be inserted into the ground or other similar soft material to withdraw a sample of material for examination. The inner closed end tube controls the volume of the sample taken and also serves to eject the sample. The soil sample containment device has a sealing member which is adapted to attach to an analytical apparatus which analyzes the volatile organic compounds contained in the sample. The soil sampling device in combination with the soil sample containment device allows an operator to obtain a soil sample containing volatile organic compounds while minimizing the loss of the volatile organic compounds prior to analysis of the soil sample for the volatile organic compounds.
Soil sampling kit and a method of sampling therewith
Thompson, C.V.
1991-02-05
A soil sampling device and a sample containment device for containing a soil sample are disclosed. In addition, a method for taking a soil sample using the soil sampling device and soil sample containment device to minimize the loss of any volatile organic compounds contained in the soil sample prior to analysis is disclosed. The soil sampling device comprises two close fitting, longitudinal tubular members of suitable length, the inner tube having the outward end closed. With the inner closed tube withdrawn a selected distance, the outer tube can be inserted into the ground or other similar soft material to withdraw a sample of material for examination. The inner closed end tube controls the volume of the sample taken and also serves to eject the sample. The soil sample containment device has a sealing member which is adapted to attach to an analytical apparatus which analyzes the volatile organic compounds contained in the sample. The soil sampling device in combination with the soil sample containment device allows an operator to obtain a soil sample containing volatile organic compounds while minimizing the loss of the volatile organic compounds prior to analysis of the soil sample for the volatile organic compounds. 11 figures.
Dual-stage trapped-flux magnet cryostat for measurements at high magnetic fields
Islam, Zahirul; Das, Ritesh K.; Weinstein, Roy
2015-04-14
A method and a dual-stage trapped-flux magnet cryostat apparatus are provided for implementing enhanced measurements at high magnetic fields. The dual-stage trapped-flux magnet cryostat system includes a trapped-flux magnet (TFM). A sample, for example a single crystal, is adjustably positioned proximate to the surface of the TFM using a translation stage, such that the distance between the sample and the surface is selectively adjusted. A cryostat is provided with a first separate thermal stage for cooling the TFM and a second separate thermal stage for cooling the sample.
NASA Astrophysics Data System (ADS)
Garden, Christopher J.; Craw, Dave; Waters, Jonathan M.; Smith, Abigail
2011-12-01
Tracking and quantifying biological dispersal presents a major challenge in marine systems. Most existing methods for measuring dispersal are limited by poor resolution and/or high cost. Here we use geological data to quantify the frequency of long-distance dispersal in detached bull-kelp (Phaeophyceae: Durvillaea) in southern New Zealand. Geological resolution in this region is enhanced by the presence of a number of distinct and readily-identifiable geological terranes. We sampled 13,815 beach-cast bull-kelp plants across 130 km of coastline. Rocks were found attached to 2639 of the rafted plants, and were assigned to specific geological terranes (source regions) to quantify dispersal frequencies and distances. Although the majority of kelp-associated rock specimens were found to be locally-derived, a substantial number (4%) showed clear geological evidence of long-distance dispersal, several having travelled over 200 km from their original source regions. The proportion of local versus foreign clasts varied considerably between regions. While short-range dispersal clearly predominates, long-distance travel of detached bull-kelp plants is shown to be a common and ongoing process that has potential to connect isolated coastal populations. Geological analyses represent a cost-effective and powerful method for assigning large numbers of drifted macroalgae to their original source regions.
Cluster analysis of European Y-chromosomal STR haplotypes using the discrete Laplace method.
Andersen, Mikkel Meyer; Eriksen, Poul Svante; Morling, Niels
2014-07-01
The European Y-chromosomal short tandem repeat (STR) haplotype distribution has previously been analysed in various ways. Here, we introduce a new way of analysing population substructure using a new method based on clustering within the discrete Laplace exponential family that models the probability distribution of the Y-STR haplotypes. Creating a consistent statistical model of the haplotypes enables us to perform a wide range of analyses. Previously, haplotype frequency estimation using the discrete Laplace method has been validated. In this paper we investigate how the discrete Laplace method can be used for cluster analysis to further validate the discrete Laplace method. A very important practical fact is that the calculations can be performed on a normal computer. We identified two sub-clusters of the Eastern and Western European Y-STR haplotypes similar to results of previous studies. We also compared pairwise distances (between geographically separated samples) with those obtained using the AMOVA method and found good agreement. Further analyses that are impossible with AMOVA were made using the discrete Laplace method: analysis of the homogeneity in two different ways and calculating marginal STR distributions. We found that the Y-STR haplotypes from e.g. Finland were relatively homogeneous as opposed to the relatively heterogeneous Y-STR haplotypes from e.g. Lublin, Eastern Poland and Berlin, Germany. We demonstrated that the observed distributions of alleles at each locus were similar to the expected ones. We also compared pairwise distances between geographically separated samples from Africa with those obtained using the AMOVA method and found good agreement. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Reconstruction methods for phase-contrast tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raven, C.
Phase contrast imaging with coherent x-rays can be divided into outline imaging and holography, depending on the wavelength λ, the object size d and the object-to-detector distance r. When r << d²/λ, phase contrast occurs only in regions where the refractive index changes rapidly, i.e. at interfaces and edges in the sample. With increasing object-to-detector distance we enter the regime of holographic imaging. The image contrast outside the shadow region of the object is due to interference of the direct, undiffracted beam with a beam diffracted by the object, or, in terms of holography, the interference of a reference wave with the object wave. Both outline imaging and holography offer the possibility of obtaining three-dimensional information about the sample in conjunction with a tomographic technique, but the data treatment and the kind of information one can obtain from the reconstruction differ.
Odontological approach to sexual dimorphism in southeastern France.
Lladeres, Emilie; Saliba-Serre, Bérengère; Sastre, Julien; Foti, Bruno; Tardivo, Delphine; Adalian, Pascal
2013-01-01
The aim of this study was to establish a prediction formula to allow for the determination of sex among the southeastern French population using dental measurements. The sample consisted of 105 individuals (57 males and 48 females, aged between 18 and 25 years). Dental measurements were calculated using Euclidean distances, in three-dimensional space, from point coordinates obtained by a Microscribe. A multiple logistic regression analysis was performed to establish the prediction formula. Among 12 selected dental distances, a stepwise logistic regression analysis highlighted the two most significant discriminant predictors of sex: one located at the mandible and the other at the maxilla. A cutpoint was proposed for the prediction of sex. The prediction formula was then tested on a validation sample (20 males and 34 females, aged between 18 and 62 years and with a history of orthodontics or restorative care) to evaluate the accuracy of the method. © 2012 American Academy of Forensic Sciences.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conn, A. R.; Parker, Q. A.; Zucker, D. B.
In 'A Bayesian Approach to Locating the Red Giant Branch Tip Magnitude (Part I)', a new technique was introduced for obtaining distances using the tip of the red giant branch (TRGB) standard candle. Here we describe a useful complement to the technique with the potential to further reduce the uncertainty in our distance measurements by incorporating a matched-filter weighting scheme into the model likelihood calculations. In this scheme, stars are weighted according to their probability of being true object members. We then re-test our modified algorithm using random-realization artificial data to verify the validity of the generated posterior probability distributions (PPDs) and proceed to apply the algorithm to the satellite system of M31, culminating in a three-dimensional view of the system. Further to the distributions thus obtained, we apply a satellite-specific prior on the satellite distances to weight the resulting distance posterior distributions, based on the halo density profile. Thus in a single publication, using a single method, a comprehensive coverage of the distances to the companion galaxies of M31 is presented, encompassing the dwarf spheroidals Andromedas I-III, V, IX-XXVII, and XXX along with NGC 147, NGC 185, M33, and M31 itself. Of these, the distances to Andromedas XXIV-XXVII and Andromeda XXX have never before been derived using the TRGB. Object distances are determined from high-resolution tip magnitude posterior distributions generated using the Markov Chain Monte Carlo technique and associated sampling of these distributions to take into account uncertainties in foreground extinction and the absolute magnitude of the TRGB as well as photometric errors. The distance PPDs obtained for each object both with and without the aforementioned prior are made available to the reader in tabular form. The large object coverage takes advantage of the unprecedented size and photometric depth of the Pan-Andromeda Archaeological Survey. Finally, a preliminary investigation into the satellite density distribution within the halo is made using the obtained distance distributions. For simplicity, this investigation assumes a single power law for the density as a function of radius, with the slope of this power law examined for several subsets of the entire satellite sample.
A Probabilistic Approach to Fitting Period–luminosity Relations and Validating Gaia Parallaxes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sesar, Branimir; Fouesneau, Morgan; Bailer-Jones, Coryn A. L.
Pulsating stars, such as Cepheids, Miras, and RR Lyrae stars, are important distance indicators and calibrators of the “cosmic distance ladder,” and yet their period–luminosity–metallicity (PLZ) relations are still constrained using simple statistical methods that cannot take full advantage of available data. To enable optimal usage of data provided by the Gaia mission, we present a probabilistic approach that simultaneously constrains parameters of PLZ relations and uncertainties in Gaia parallax measurements. We demonstrate this approach by constraining PLZ relations of type ab RR Lyrae stars in near-infrared W 1 and W 2 bands, using Tycho-Gaia Astrometric Solution (TGAS) parallax measurements for a sample of ≈100 type ab RR Lyrae stars located within 2.5 kpc of the Sun. The fitted PLZ relations are consistent with previous studies, and in combination with other data, deliver distances precise to 6% (once various sources of uncertainty are taken into account). To a precision of 0.05 mas (1 σ ), we do not find a statistically significant offset in TGAS parallaxes for this sample of distant RR Lyrae stars (median parallax of 0.8 mas and distance of 1.4 kpc). With only minor modifications, our probabilistic approach can be used to constrain PLZ relations of other pulsating stars, and we intend to apply it to Cepheid and Mira stars in the near future.
NASA Astrophysics Data System (ADS)
Nguyen, K. L.; Merchiers, O.; Chapuis, P.-O.
2017-11-01
We compute the near-field radiative heat transfer between a hot AFM tip and a cold substrate. This contribution to the tip-sample heat transfer in Scanning Thermal Microscopy is often overlooked, despite its leading role when the tip is out of contact. For dielectrics, we provide power levels exchanged as a function of the tip-sample distance in vacuum and spatial maps of the heat flux deposited into the sample which indicate the near-contact spatial resolution. The results are compared to analytical expressions of the Proximity Flux Approximation. The numerical results are obtained by means of the Boundary Element Method (BEM) implemented in the SCUFF-EM software, and require first a thorough convergence analysis of the progressive implementation of this method to the thermal emission by a sphere, the radiative transfer between two spheres, and the radiative exchange between a sphere and a finite substrate.
Joint Inference of Population Assignment and Demographic History
Choi, Sang Chul; Hey, Jody
2011-01-01
A new approach to assigning individuals to populations using genetic data is described. Most existing methods work by maximizing Hardy–Weinberg and linkage equilibrium within populations, neither of which will apply for many demographic histories. By including a demographic model, within a likelihood framework based on coalescent theory, we can jointly study demographic history and population assignment. Genealogies and population assignments are sampled from a posterior distribution using a general isolation-with-migration model for multiple populations. A measure of partition distance between assignments facilitates not only the summary of a posterior sample of assignments, but also the estimation of the posterior density for the demographic history. It is shown that joint estimates of assignment and demographic history are possible, including estimation of population phylogeny for samples from three populations. The new method is compared to results of a widely used assignment method, using simulated and published empirical data sets. PMID:21775468
Disparities in Supports for Student Wellness Promotion Efforts among Secondary Schools in Minnesota
ERIC Educational Resources Information Center
Larson, Nicole; O'Connell, Michael; Davey, Cynthia S.; Caspi, Caitlin; Kubik, Martha Y.; Nanney, Marilyn S.
2017-01-01
Background: We examined whether there are differences in the presence of supports for student wellness promotion (1) between schools in city, suburban and rural locations and, (2) among rural schools, according to distance from a metropolitan center. Methods: The analysis was conducted in a sample of 309 secondary schools using 2012 Minnesota…
ERIC Educational Resources Information Center
Al-Azawei, Ahmed; Lundqvist, Karsten
2015-01-01
Online learning constitutes the most popular distance-learning method, with flexibility, accessibility, visibility, manageability and availability as its core features. However, current research indicates that its efficacy is not consistent across all learners. This study aimed to modify and extend the factors of the Technology Acceptance Model…
Anthropometry of a Fit Test Sample used in Evaluating the Current and Improved MCU-2/P Masks
1989-03-01
METHOD: Forty-two head and face measurements were taken on each subject after he/she had completed MSA's series of fit tests. Of these, 15 were measured... BREADTH: Using spreading calipers, the horizontal distance between the frontotemporale landmarks.
Tsafrir, D; Tsafrir, I; Ein-Dor, L; Zuk, O; Notterman, D A; Domany, E
2005-05-15
We introduce a novel unsupervised approach for the organization and visualization of multidimensional data. At the heart of the method is a presentation of the full pairwise distance matrix of the data points, viewed in pseudocolor. The ordering of points is iteratively permuted in search of a linear ordering, which can be used to study embedded shapes. Several examples indicate how the shapes of certain structures in the data (elongated, circular and compact) manifest themselves visually in our permuted distance matrix. It is important to identify the elongated objects since they are often associated with a set of hidden variables underlying continuous variation in the data. The problem of determining an optimal linear ordering is shown to be NP-complete, and therefore an iterative search algorithm with O(n³) step-complexity is suggested. By using sorting points into neighborhoods (SPIN) to analyze colon cancer expression data, we were able to address the serious problem of sample heterogeneity, which hinders identification of metastasis-related genes in our data. Our methodology brings to light the continuous variation of heterogeneity--starting with homogeneous tumor samples and gradually increasing the amount of another tissue. Ordering the samples according to their degree of contamination by unrelated tissue allows the separation of genes associated with irrelevant contamination from those related to cancer progression. A software package will be available to academic users upon request.
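A toy stand-in for the permuted-distance-matrix idea is sketched below (NumPy/SciPy/matplotlib). It orders samples by a greedy nearest-neighbour chain rather than the paper's iterative SPIN permutation search, but it shows how structure becomes visible once the pairwise distance matrix is viewed in a data-driven order; the toy expression matrix is a placeholder.

    import numpy as np
    from scipy.spatial.distance import pdist, squareform
    import matplotlib.pyplot as plt

    def greedy_order(D):
        """Simple stand-in for the iterative permutation search: build a linear
        ordering by repeatedly appending the unplaced sample closest to the last one."""
        order, remaining = [0], set(range(1, D.shape[0]))
        while remaining:
            nxt = min(remaining, key=lambda j: D[order[-1], j])
            order.append(nxt)
            remaining.remove(nxt)
        return order

    X = np.random.rand(50, 200)                 # placeholder samples x genes matrix
    D = squareform(pdist(X))                    # full pairwise distance matrix
    o = greedy_order(D)
    plt.imshow(D[np.ix_(o, o)], cmap="viridis") # permuted distance matrix in pseudocolor
    plt.show()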
Inferring Recent Demography from Isolation by Distance of Long Shared Sequence Blocks
Ringbauer, Harald; Coop, Graham
2017-01-01
Recently it has become feasible to detect long blocks of nearly identical sequence shared between pairs of genomes. These identity-by-descent (IBD) blocks are direct traces of recent coalescence events and, as such, contain ample signal to infer recent demography. Here, we examine sharing of such blocks in two-dimensional populations with local migration. Using a diffusion approximation to trace genetic ancestry, we derive analytical formulas for patterns of isolation by distance of IBD blocks, which can also incorporate recent population density changes. We introduce an inference scheme that uses a composite-likelihood approach to fit these formulas. We then extensively evaluate our theory and inference method on a range of scenarios using simulated data. We first validate the diffusion approximation by showing that the theoretical results closely match the simulated block-sharing patterns. We then demonstrate that our inference scheme can accurately and robustly infer dispersal rate and effective density, as well as bounds on recent dynamics of population density. To demonstrate an application, we use our estimation scheme to explore the fit of a diffusion model to Eastern European samples in the Population Reference Sample data set. We show that ancestry diffusing with a rate of σ ≈ 50–100 km/gen during the last centuries, combined with accelerating population growth, can explain the observed exponential decay of block sharing with increasing pairwise sample distance. PMID:28108588
The influence of phonological context on the sound errors of a speaker with Wernicke's aphasia.
Goldmann, R E; Schwartz, M F; Wilshire, C E
2001-09-01
A corpus of phonological errors produced in narrative speech by a Wernicke's aphasic speaker (R.W.B.) was tested for context effects using two new methods for establishing chance baselines. A reliable anticipatory effect was found using the second method, which estimated chance from the distance between phoneme repeats in the speech sample containing the errors. Relative to this baseline, error-source distances were shorter than expected for anticipations, but not perseverations. R.W.B.'s anticipation/perseveration ratio measured intermediate between a nonaphasic error corpus and that of a more severe aphasic speaker (both reported in Schwartz et al., 1994), supporting the view that the anticipatory bias correlates to severity. Finally, R.W.B's anticipations favored word-initial segments, although errors and sources did not consistently share word or syllable position. Copyright 2001 Academic Press.
Generalising Ward's Method for Use with Manhattan Distances.
Strauss, Trudie; von Maltitz, Michael Johan
2017-01-01
The claim that Ward's linkage algorithm in hierarchical clustering is limited to use with Euclidean distances is investigated. In this paper, Ward's clustering algorithm is generalised for use with the l1-norm, or Manhattan, distance. We argue that the generalisation of Ward's linkage method to incorporate Manhattan distances is theoretically sound and provide an example of where this method outperforms the method using Euclidean distances. As an application, we perform statistical analyses on languages using methods normally applied to biology and genetic classification. We aim to quantify differences in character traits between languages and use a statistical language signature based on relative bi-gram (sequence of two letters) frequencies to calculate a distance matrix between 32 Indo-European languages. We then use Ward's method of hierarchical clustering to classify the languages, using the Euclidean distance and the Manhattan distance. Results obtained from the different distance metrics are compared to show that Ward's defining characteristic of minimising intra-cluster variation and maximising inter-cluster variation is not violated when using the Manhattan metric.
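A minimal sketch of the pipeline described above is given below (SciPy). Note that SciPy's linkage routine applies the Ward update formula to whatever condensed distance matrix it is given, even though its documentation assumes Euclidean input; that application to Manhattan distances is exactly the generalisation at issue here, and the paper provides its own theoretical justification rather than relying on this library behaviour. The corpus dictionary is a placeholder.

    import numpy as np
    from itertools import product
    from scipy.spatial.distance import pdist
    from scipy.cluster.hierarchy import linkage, dendrogram

    def bigram_signature(text):
        """Relative bi-gram frequencies of a text, the language signature
        described above (letters a-z only; everything else is dropped)."""
        letters = [c for c in text.lower() if "a" <= c <= "z"]
        keys = ["".join(p) for p in product("abcdefghijklmnopqrstuvwxyz", repeat=2)]
        counts = dict.fromkeys(keys, 0)
        for a, b in zip(letters, letters[1:]):
            counts[a + b] += 1
        total = max(sum(counts.values()), 1)
        return np.array([counts[k] / total for k in keys])

    corpus = {"english": "...", "german": "...", "french": "..."}   # placeholder texts
    S = np.vstack([bigram_signature(t) for t in corpus.values()])
    D = pdist(S, metric="cityblock")       # Manhattan distances between signatures
    Z = linkage(D, method="ward")          # Ward's update applied to these distances
    dendrogram(Z, labels=list(corpus))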
Preparation of stable silica surfaces for surface forces measurement
NASA Astrophysics Data System (ADS)
Ren, Huai-Yin; Mizukami, Masashi; Kurihara, Kazue
2017-09-01
A surface forces apparatus (SFA) measures the forces between two surfaces as a function of the surface separation distance. It is regarded as an essential tool for studying the interactions between two surfaces. However, sample surfaces used for conventional SFA measurements have mostly been limited to thin (ca. 2-3 μm) mica sheets, coated with silver layers (ca. 50 nm) on their backs, owing to the requirement that the distance be determined by transmission-mode optical interferometry using FECO (fringes of equal chromatic order). The FECO method has the advantage of determining the absolute distance, so it is important to increase the availability of samples other than mica, which is chemically nonreactive and also requires significant effort to cleave. Recently, silica sheets have occasionally been used in place of mica, which increases the possibility of surface modification. However, in this case, the silver-coated side of the sheet is glued to a cylindrical quartz disc using epoxy resin, which is not stable in organic solvents and can easily swell or dissolve. The preparation of substrates that are more stable under severe conditions, such as in organic solvents, is necessary for extending the application of the measurement. In this study, we report an easy method for preparing stable silica layers of ca. 2 μm thickness, deposited by sputtering on gold layers (41 nm) on silica discs and then annealed to enhance their stability. The obtained silica layers were stable and showed no swelling in organic solvents such as ethanol and toluene.
NASA Astrophysics Data System (ADS)
Li, Xiang
2016-10-01
Blood glucose monitoring is of great importance for controlling the progression of diabetes and preventing its complications. At present, clinical blood glucose measurement is invasive and could be replaced by noninvasive spectroscopic analytical techniques. Among the various parameters of the optical fiber probe used in spectral measurement, the measurement distance is the key one. The Monte Carlo technique is a flexible method for simulating light propagation in tissue. The simulation is based on the random walks that photons make as they travel through tissue, which are chosen by statistically sampling the probability distributions for step size and angular deflection per scattering event. The traditional method for determining the optimal distance between the transmitting fiber and the detector is to use Monte Carlo simulation to find the point where most photons exit. But there is a problem: the epidermal layer contains no artery, vein or capillary vessel, so when photons propagate and interact with tissue in the epidermal layer, they carry no information about blood glucose. A new criterion is therefore proposed to determine the optimal distance, named the effective path length in this paper. The path length of each photon travelling in the dermis is recorded during the Monte Carlo simulation; this is the effective path length defined above. The sum of the effective path lengths of all photons at each exit point is calculated, and the detector should be placed at the point with the largest total effective path length. The optimal measuring distance between the transmitting fiber and the detector is thereby determined.
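A toy sketch of the effective-path-length criterion described above: random-walk photons through a two-layer (epidermis/dermis) slab, accumulate only the path travelled inside the dermis, and bin that quantity by the photon's exit distance from the source fibre. All thicknesses, scattering parameters and the isotropic-scattering assumption are illustrative simplifications, not the paper's tissue model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (not physiological) parameters, in millimetres.
EPIDERMIS = 0.1        # epidermis thickness: no glucose information here
DERMIS_BOTTOM = 2.0    # below this depth the photon is treated as lost
MEAN_FREE_PATH = 0.09  # mean step between scattering events
ALBEDO = 0.95          # survival probability per scattering event

def launch_photon():
    """Random-walk one photon; return (exit_x, dermis path length) or None."""
    x, z = 0.0, 0.0
    theta = np.pi / 2              # start heading straight down
    dermis_path = 0.0
    while True:
        step = rng.exponential(MEAN_FREE_PATH)
        x_new = x + step * np.cos(theta)
        z_new = z + step * np.sin(theta)
        # Accumulate only the portion of the step spent inside the dermis.
        lo, hi = sorted((z, z_new))
        overlap = max(0.0, min(hi, DERMIS_BOTTOM) - max(lo, EPIDERMIS))
        if abs(z_new - z) > 0:
            dermis_path += step * overlap / abs(z_new - z)
        x, z = x_new, z_new
        if z <= 0.0:                                   # escaped through the surface
            return x, dermis_path
        if z >= DERMIS_BOTTOM or rng.random() > ALBEDO:
            return None                                # lost or absorbed
        theta = rng.uniform(0.0, 2.0 * np.pi)          # isotropic re-scattering

# Sum the effective (dermis) path length in bins of exit distance.
bins = np.linspace(0.2, 3.0, 15)
effective = np.zeros(len(bins) - 1)
for _ in range(50_000):
    out = launch_photon()
    if out is not None:
        exit_x, dermis_path = out
        idx = np.searchsorted(bins, abs(exit_x)) - 1
        if 0 <= idx < len(effective):
            effective[idx] += dermis_path

best = bins[np.argmax(effective)]
print(f"place detector near source-detector separation ≈ {best:.2f} mm")
```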
View-invariant gait recognition method by three-dimensional convolutional neural network
NASA Astrophysics Data System (ADS)
Xing, Weiwei; Li, Ying; Zhang, Shunli
2018-01-01
Gait, as an important biometric feature, can identify a human at a long distance. View change is one of the most challenging factors for gait recognition. To address cross-view issues in gait recognition, we propose a view-invariant gait recognition method based on a three-dimensional (3-D) convolutional neural network. First, the 3-D convolutional neural network (3DCNN) is introduced to learn view-invariant features, capturing spatial and temporal information simultaneously on normalized silhouette sequences. Second, a network training method based on cross-domain transfer learning is proposed to address the limited number of gait training samples. We choose C3D as the basic model, which is pretrained on Sports-1M, and then fine-tune the C3D model to adapt it to gait recognition. In the recognition stage, we use the fine-tuned model to extract gait features and use the Euclidean distance to measure the similarity of gait sequences. Extensive experiments are carried out on the CASIA-B dataset and the experimental results demonstrate that our method outperforms many other methods.
Salo, Hanna; Berisha, Anna-Kaisa; Mäkinen, Joni
2016-03-01
This is the first study to apply Sphagnum papillosum moss bags and vertical snow samples seasonally for monitoring atmospheric pollution. Moss bags, exposed in January, were collected together with snow samples in early March 2012 near the Harjavalta Industrial Park in southwest Finland. Magnetic, chemical, scanning electron microscopy-energy dispersive X-ray spectroscopy (SEM-EDX), K-means clustering, and Tomlinson pollution load index (PLI) data showed parallel spatial trends of pollution dispersal for both materials. The results strengthen previous findings that concentrate and slag handling activities were important (dust) emission sources, while the impact from the Cu-Ni smelter's pipe remained secondary at closer distances. Statistically significant correlations existed between the variables of snow and moss bags. In summary, both methods work well for sampling and are efficient pollutant accumulators. Moss bags can also be used in winter conditions, and they provide a more homogeneous and better-controlled sampling method than snow samples. Copyright © 2015. Published by Elsevier B.V.
The RMS survey: galactic distribution of massive star formation
NASA Astrophysics Data System (ADS)
Urquhart, J. S.; Figura, C. C.; Moore, T. J. T.; Hoare, M. G.; Lumsden, S. L.; Mottram, J. C.; Thompson, M. A.; Oudmaijer, R. D.
2014-01-01
We have used the well-selected sample of ˜1750 embedded, young, massive stars identified by the Red MSX Source (RMS) survey to investigate the Galactic distribution of recent massive star formation. We present molecular line observations for ˜800 sources without existing radial velocities. We describe the various methods used to assign distances extracted from the literature and solve the distance ambiguities towards approximately 200 sources located within the solar circle using archival H I data. These distances are used to calculate bolometric luminosities and estimate the survey completeness (˜2 × 104 L⊙). In total, we calculate the distance and luminosity of ˜1650 sources, one third of which are above the survey's completeness threshold. Examination of the sample's longitude, latitude, radial velocities and mid-infrared images has identified ˜120 small groups of sources, many of which are associated with well-known star formation complexes, such as G305, G333, W31, W43, W49 and W51. We compare the positional distribution of the sample with the expected locations of the spiral arms, assuming a model of the Galaxy consisting of four gaseous arms. The distribution of young massive stars in the Milky Way is spatially correlated with the spiral arms, with strong peaks in the source position and luminosity distributions at the arms' Galactocentric radii. The overall source and luminosity surface densities are both well correlated with the surface density of the molecular gas, which suggests that the massive star formation rate per unit molecular mass is approximately constant across the Galaxy. A comparison of the distribution of molecular gas and the young massive stars to that in other nearby spiral galaxies shows similar radial dependences. We estimate the total luminosity of the embedded massive star population to be ˜0.76 × 108 L⊙, 30 per cent of which is associated with the 10 most active star-forming complexes. We measure the scaleheight as a function of the Galactocentric distance and find that it increases only modestly from ˜20-30 pc between 4 and 8 kpc, but much more rapidly at larger distances.
Degen, Bernd; Blanc-Jolivet, Céline; Stierand, Katrin; Gillet, Elizabeth
2017-03-01
During the past decade, the use of DNA for forensic applications has been extensively implemented for plant and animal species, as well as in humans. Tracing back the geographical origin of an individual usually requires genetic assignment analysis. These approaches are based on reference samples that are grouped into populations or other aggregates and intend to identify the most likely group of origin. Often this grouping does not have a biological but rather a historical or political justification, such as "country of origin". In this paper, we present a new nearest neighbour approach to individual assignment or classification within a given but potentially imperfect grouping of reference samples. This method, which is based on the genetic distance between individuals, functions better in many cases than commonly used methods. We demonstrate the operation of our assignment method using two data sets. One set is simulated for a large number of trees distributed in a 120 km by 120 km landscape with individual genotypes at 150 SNPs, and the other set comprises experimental data of 1221 individuals of the African tropical tree species Entandrophragma cylindricum (Sapelli) genotyped at 61 SNPs. Judging by the level of correct self-assignment, our approach outperformed the commonly used frequency and Bayesian approaches by 15% for the simulated data set and by 5-7% for the Sapelli data set. Our new approach is less sensitive to overlapping sources of genetic differentiation, such as genetic differences among closely-related species, phylogeographic lineages and isolation by distance, and thus operates better even for suboptimal grouping of individuals. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
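The core of such a nearest-neighbour assignment can be sketched as: compute a genetic distance from the query genotype to every reference individual and report the group of the closest one. In this sketch a simple mean allele-count difference over SNPs coded 0/1/2 stands in for the authors' actual distance measure, and the reference panel and group labels are invented:

```python
import numpy as np

def assign_nearest_neighbour(query, ref_genotypes, ref_groups):
    """Assign `query` (coded 0/1/2 per SNP) to the group of the reference
    individual with the smallest mean absolute allele-count difference."""
    diff = np.abs(ref_genotypes - query)
    dist = np.nanmean(diff, axis=1)        # ignore missing data coded as np.nan
    best = int(np.argmin(dist))
    return ref_groups[best], float(dist[best])

# Toy reference panel: 6 individuals, 5 SNPs, two hypothetical populations.
refs = np.array([[0, 1, 2, 0, 1],
                 [0, 2, 2, 0, 1],
                 [1, 1, 2, 0, 0],
                 [2, 0, 0, 2, 2],
                 [2, 0, 1, 2, 2],
                 [1, 0, 0, 2, 2]], dtype=float)
groups = np.array(["north", "north", "north", "south", "south", "south"])

query = np.array([0, 1, 2, 0, 0], dtype=float)
print(assign_nearest_neighbour(query, refs, groups))   # ('north', ...)
```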
Abdollahimohammad, Abdolghani; Ja’afar, Rogayah
2015-01-01
Purpose: The goal of the current study was to identify associations between the learning style of nursing students and their cultural values and demographic characteristics. Methods: A non-probability purposive sampling method was used to gather data from two populations. All 156 participants were female, Muslim, and full-time degree students. Data were collected from April to June 2010 using two reliable and validated questionnaires: the Learning Style Scales and the Values Survey Module 2008 (VSM 08). A simple linear regression was run for each predictor before conducting multiple linear regression analysis. The forward selection method was used for variable selection. P-values ≤0.05 and ≤0.1 were considered to indicate significance and marginal significance, respectively. Moreover, multi-group confirmatory factor analysis was performed to determine the invariance of the Farsi and English versions of the VSM 08. Results: The perceptive learning style was found to have a significant negative relationship with the power distance and monumentalism indices of the VSM 08. Moreover, a significant negative association was observed between the solitary learning style and the power distance index. However, no significant association was found between the analytic, competitive, and imaginative learning styles and cultural values (P>0.05). Likewise, no significant associations were observed between learning style, including the perceptive, solitary, analytic, competitive, and imaginative learning styles, and year of study or age (P>0.05). Conclusion: Students who reported low values on the power distance and monumentalism indices are more likely to prefer perceptive and solitary learning styles. Within each group of students in our study sample from the same school the year of study and age did not show any significant associations with learning style. PMID:26268831
Explosive detection technology
NASA Astrophysics Data System (ADS)
Doremus, Steven; Crownover, Robin
2017-05-01
The continuing proliferation of improvised explosive devices is an omnipresent threat to civilians and to members of the military and law enforcement around the world. The ability to accurately and quickly detect explosive materials from a distance would be an extremely valuable tool for mitigating the risk posed by these devices. A variety of techniques exist that are capable of accurately identifying explosive compounds, but an effective standoff technique has yet to be realized. Most of the methods being investigated to fill this capability gap are laser based. Raman spectroscopy is one such technique that has been demonstrated to be effective at a distance. Spatially Offset Raman Spectroscopy (SORS) is a technique capable of identifying chemical compounds inside containers, which could be used to detect hidden explosive devices. Coherent Anti-Stokes Raman Spectroscopy (CARS) utilizes a coherent pair of lasers to excite a sample, greatly increasing the response of the sample while decreasing the strength of the lasers being used, which significantly mitigates the eye-safety issue that typically hinders laser-based detection methods. Time-gating techniques are also being developed to improve data collection from Raman techniques, which are often hindered by fluorescence of the test sample in addition to atmospheric, substrate, and contaminant responses. Ultraviolet-based techniques have also shown significant promise, with greatly improved signal strength from resonant excitation of many explosive compounds. Raman spectroscopy, which identifies compounds based on their molecular response, can be coupled with Laser-Induced Breakdown Spectroscopy (LIBS), capable of characterizing the sample's atomic composition using a single laser.
Assessment of gene order computing methods for Alzheimer's disease
2013-01-01
Background: Computational genomics of Alzheimer disease (AD), the most common form of senile dementia, is a nascent field in AD research. The field includes AD gene clustering by computing gene order, which generates higher quality gene clustering patterns than most other clustering methods. However, there are few available gene order computing methods, such as the Genetic Algorithm (GA) and Ant Colony Optimization (ACO). Further, their performance in gene order computation using AD microarray data is not known. We thus set forth to evaluate the performance of current gene order computing methods with different distance formulas, and to identify additional features associated with gene order computation. Methods: Using different distance formulas (Pearson distance, Euclidean distance, and squared Euclidean distance) and other conditions, gene orders were calculated by the ACO and GA (standard GA and improved GA) methods, respectively. The qualities of the gene orders were compared, and new features from the calculated gene orders were identified. Results: Compared to the GA methods tested in this study, ACO fits the AD microarray data best when calculating gene order. In addition, the following features were revealed: different distance formulas generated gene orders of different quality, and the commonly used Pearson distance was not the best distance formula when used with either the GA or ACO method for AD microarray data. Conclusion: Compared with the Pearson distance and the Euclidean distance, the squared Euclidean distance generated the best quality gene order computed by the GA and ACO methods. PMID:23369541
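The three distance formulas being compared are standard; hedged sketches of their usual definitions follow (the "Pearson distance" here is taken as one minus the Pearson correlation, which is one common convention and may differ from the exact formula used in the paper):

```python
import numpy as np

def euclidean(x, y):
    return float(np.sqrt(np.sum((x - y) ** 2)))

def squared_euclidean(x, y):
    return float(np.sum((x - y) ** 2))

def pearson_distance(x, y):
    """1 - Pearson correlation; one common 'Pearson distance' convention."""
    return float(1.0 - np.corrcoef(x, y)[0, 1])

# Two toy expression profiles.
x = np.array([1.2, 0.4, 2.5, 3.1])
y = np.array([0.9, 0.7, 2.2, 3.4])
print(euclidean(x, y), squared_euclidean(x, y), pearson_distance(x, y))
```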
Quantifying Tip-Sample Interactions in Vacuum Using Cantilever-Based Sensors: An Analysis
NASA Astrophysics Data System (ADS)
Dagdeviren, Omur E.; Zhou, Chao; Altman, Eric I.; Schwarz, Udo D.
2018-04-01
Atomic force microscopy is an analytical characterization method that is able to image a sample's surface topography at high resolution while simultaneously probing a variety of different sample properties. Such properties include tip-sample interactions, the local measurement of which has gained much popularity in recent years. To this end, either the oscillation frequency or the oscillation amplitude and phase of the vibrating force-sensing cantilever are recorded as a function of tip-sample distance and subsequently converted into quantitative values for the force or interaction potential. Here, we theoretically and experimentally show that the force law obtained from such data acquired under vacuum conditions using the most commonly applied methods may deviate more than previously assumed from the actual interaction when the oscillation amplitude of the probe is of the order of the decay length of the force near the surface, which may result in a non-negligible error if correct absolute values are of importance. Caused by approximations made in the development of the mathematical reconstruction procedures, the related inaccuracies can be effectively suppressed by using oscillation amplitudes sufficiently larger than the decay length. To facilitate efficient data acquisition, we propose a technique that includes modulating the drive amplitude at a constant height from the surface while monitoring the oscillation amplitude and phase. Ultimately, such an amplitude-sweep-based force spectroscopy enables shorter data acquisition times and increased accuracy for quantitative chemical characterization compared to standard approaches that vary the tip-sample distance. An additional advantage is that since no feedback loop is active while executing the amplitude sweep, the force can be consistently recovered deep into the repulsive regime.
Zhang, Feng; Liu, Yang; Zhang, Hengdong; Ban, Yonghong; Wang, Jianfeng; Liu, Jian; Zhong, Lixing; Chen, Xianwen; Zhu, Baoli
2016-01-01
Lead pollution incidents have occurred frequently in mainland China, causing many cases of lead poisoning. This paper took a battery recycling factory as its subject, measured the lead levels in environmental samples and the blood lead levels of all the children living around the factory, and analyzed the relationship between them. We collected blood samples from the surrounding residential area, as well as soil, water and vegetable samples. The atomic absorption method was applied to measure the lead content in these samples. Basic information on the production process, operation type, habits and personal protective equipment was collected through an occupational hygiene investigation. Blood lead levels in 43.12% of the subjects exceeded 100 μg/L. The 50th and 95th percentiles of blood lead levels in children were 89 μg/L and 232 μg/L, respectively, and the geometric mean was 94 μg/L. Children were stratified into groups by age, gender, parents' occupation, and distance and direction from the recycling plant. The differences in blood lead levels between groups were significant (p < 0.05). Four risk factors for elevated blood lead levels were found by logistic regression analysis: younger age, male sex, shorter distance from the recycling plant, and having at least one parent working in the recycling plant. The rate of excess lead concentration was 6.25% in water, 6.06% in soil and 44.44% in leaf vegetables, all exceeding Chinese environmental standards. The shorter the distance to the factory, the higher the blood lead levels (BLLs) and the lead levels in vegetable and environmental samples. The lead level in the environmental samples was higher downwind of the recycling plant. PMID:27240393
[Discrimination of donkey meat by NIR and chemometrics].
Niu, Xiao-Ying; Shao, Li-Min; Dong, Fang; Zhao, Zhi-Lei; Zhu, Yan
2014-10-01
Donkey meat samples (n = 167) from different parts of the donkey body (neck, costalia, rump, and tendon), together with beef (n = 47), pork (n = 51) and mutton (n = 32) samples, were used to establish near-infrared reflectance spectroscopy (NIR) classification models in the spectral range 4,000-12,500 cm-1. The accuracies of classification models constructed by Mahalanobis distance analysis, soft independent modeling of class analogy (SIMCA) and least squares-support vector machine (LS-SVM), each combined with pretreatments of Savitzky-Golay smoothing (5, 15 and 25 points), derivatives (first and second), multiplicative scatter correction and standard normal variate, were compared. The optimal models for intact samples were obtained by Mahalanobis distance analysis with the first 11 principal components (PCs) from the original spectra as inputs and by LS-SVM with the first 6 PCs as inputs, and correctly classified 100% of the calibration set and 98.96% of the prediction set. For minced samples of 7 mm diameter, the optimal result was attained by LS-SVM with the first 5 PCs from the original spectra as inputs, which achieved an accuracy of 100% for calibration and 97.53% for prediction. For a minced diameter of 5 mm, the SIMCA model with the first 8 PCs from the original spectra as inputs correctly classified 100% of the calibration and prediction sets. For a minced diameter of 3 mm, Mahalanobis distance analysis and SIMCA models both achieved 100% accuracy for calibration and prediction, with the first 7 and 9 PCs from the original spectra as inputs, respectively. In all these models, donkey meat samples were correctly classified with 100% accuracy in both calibration and prediction. The results show that it is feasible to use NIR with chemometric methods to discriminate donkey meat from other meats.
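The Mahalanobis-distance step of such models can be sketched as: project the calibration spectra onto the first few principal components and assign a test spectrum to the class whose PC-score cloud is nearest in Mahalanobis distance. The PCA handling below is a generic sketch, not the paper's chemometric pipeline, and the "spectra" are random placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)

def pca_scores(X, n_pc):
    """Center X and return scores on the first n_pc principal components."""
    mu = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mu, full_matrices=False)
    return (X - mu) @ vt[:n_pc].T, mu, vt[:n_pc]

def mahalanobis(score, mean, cov_inv):
    d = score - mean
    return float(np.sqrt(d @ cov_inv @ d))

# Placeholder "spectra": 40 donkey and 40 beef calibration samples, 200 variables.
donkey = rng.normal(0.0, 1.0, (40, 200)) + 0.3
beef = rng.normal(0.0, 1.0, (40, 200)) - 0.3
X = np.vstack([donkey, beef])
labels = np.array(["donkey"] * 40 + ["beef"] * 40)

scores, mu, loadings = pca_scores(X, n_pc=6)
models = {}
for cls in ("donkey", "beef"):
    s = scores[labels == cls]
    models[cls] = (s.mean(axis=0), np.linalg.inv(np.cov(s, rowvar=False)))

test = rng.normal(0.0, 1.0, 200) + 0.3          # unknown sample
test_score = (test - mu) @ loadings.T
pred = min(models, key=lambda c: mahalanobis(test_score, *models[c]))
print("classified as:", pred)
```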
Cho, Seung-Hyun; Tong, Haiyan; McGee, John K.; Baldauf, Richard W.; Krantz, Q. Todd; Gilmour, M. Ian
2009-01-01
Background Epidemiologic studies have reported an association between proximity to highway traffic and increased cardiopulmonary illnesses. Objectives We investigated the effect of size-fractionated particulate matter (PM), obtained at different distances from a highway, on acute cardiopulmonary toxicity in mice. Methods We collected PM for 2 weeks in July–August 2006 using a three-stage (ultrafine, < 0.1 μm; fine, 0.1–2.5 μm; coarse, 2.5–10 μm) high-volume impactor at distances of 20 m [near road (NR)] and 275 m [far road (FR)] from an interstate highway in Raleigh, North Carolina. Samples were extracted in methanol, dried, diluted in saline, and then analyzed for chemical constituents. Female CD-1 mice received either 25 or 100 μg of each size fraction via oropharyngeal aspiration. At 4 and 18 hr postexposure, mice were assessed for pulmonary responsiveness to inhaled methacholine, biomarkers of lung injury and inflammation; ex vivo cardiac pathophysiology was assessed at 18 hr only. Results Overall chemical composition between NR and FR PM was similar, although NR samples comprised larger amounts of PM, endotoxin, and certain metals than did the FR samples. Each PM size fraction showed differences in ratios of major chemical classes. Both NR and FR coarse PM produced significant pulmonary inflammation irrespective of distance, whereas both NR and FR ultrafine PM induced cardiac ischemia–reperfusion injury. Conclusions On a comparative mass basis, the coarse and ultrafine PM affected the lung and heart, respectively. We observed no significant differences in the overall toxicity end points and chemical makeup between the NR and FR PM. The results suggest that PM of different size-specific chemistry might be associated with different toxicologic mechanisms in cardiac and pulmonary tissues. PMID:20049117
Barca, E; Castrignanò, A; Buttafuoco, G; De Benedetto, D; Passarella, G
2015-07-01
Soil survey is generally time-consuming, labor-intensive, and costly. Optimization of the sampling scheme allows one to reduce the number of sampling points without decreasing, or even while increasing, the accuracy with which the investigated attribute is estimated. Maps of bulk soil electrical conductivity (ECa) recorded with electromagnetic induction (EMI) sensors can be used effectively to direct soil sampling design for assessing the spatial variability of soil moisture. A protocol, using a field-scale bulk ECa survey, has been applied in an agricultural field in the Apulia region (southeastern Italy). Spatial simulated annealing was used to optimize the spatial soil sampling scheme, taking into account sampling constraints, field boundaries, and preliminary observations. Three optimization criteria were used: the first criterion (minimization of the mean of the shortest distances, MMSD) optimizes the spreading of the point observations over the entire field by minimizing the expectation of the distance between an arbitrarily chosen point and its nearest observation; the second criterion (minimization of the weighted mean of the shortest distances, MWMSD) is a weighted version of the MMSD, which uses the digital gradient of the gridded ECa data as the weighting function; and the third criterion (mean of the average ordinary kriging variance, MAOKV) minimizes the mean kriging estimation variance of the target variable. The last criterion uses the variogram model of soil water content estimated in a previous trial. The procedures, and combinations of them, were tested and compared in a real case. Simulated annealing was implemented in the software MSANOS, which is able to define or redesign any sampling scheme by increasing or decreasing the original sampling locations. The output consists of the computed sampling scheme, the convergence time, and the cooling law, which can be an invaluable support to the process of sampling design. The proposed approach found the optimal solution in a reasonable computation time. The use of the bulk ECa gradient as an exhaustive variable, known at every node of the interpolation grid, allowed optimization of the sampling scheme, distinguishing among areas with different priority levels.
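The MMSD criterion is easy to state operationally: for a candidate scheme, average over a dense set of evaluation points the distance to the nearest proposed sampling location, and let simulated annealing perturb the scheme to lower that average. A minimal sketch follows; the field geometry, perturbation size and cooling law are placeholders rather than those implemented in MSANOS:

```python
import numpy as np

rng = np.random.default_rng(2)

def mmsd(samples, eval_points):
    """Mean of the shortest distances from evaluation points to the scheme."""
    d = np.linalg.norm(eval_points[:, None, :] - samples[None, :, :], axis=2)
    return d.min(axis=1).mean()

# Square field discretized on a grid of evaluation points (placeholder geometry).
gx, gy = np.meshgrid(np.linspace(0, 100, 40), np.linspace(0, 100, 40))
grid = np.column_stack([gx.ravel(), gy.ravel()])

scheme = rng.uniform(0, 100, (20, 2))        # 20 candidate sampling locations
temperature, best = 5.0, mmsd(scheme, grid)
for _ in range(2000):                        # simple annealing loop
    trial = scheme.copy()
    i = rng.integers(len(trial))
    trial[i] = np.clip(trial[i] + rng.normal(0, 5, 2), 0, 100)
    val = mmsd(trial, grid)
    if val < best or rng.random() < np.exp((best - val) / temperature):
        scheme, best = trial, val
    temperature *= 0.999                     # placeholder cooling law
print(f"final MMSD ≈ {best:.2f} m")
```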
Measurement of the Length of an Optical Trap
NASA Technical Reports Server (NTRS)
Wrbanek, Susan Y.
2010-01-01
NASA Glenn has been involved in developing optical trapping and optical micromanipulation techniques in order to develop a tool that can be used to probe, characterize, and assemble nano and microscale materials to create microscale sensors for harsh flight environments. In order to be able to assemble a sensor or probe candidate sensor material, it is useful to know how far an optical trap can reach; that is, the distance beyond/below the stable trapping point through which an object will be drawn into the optical trap. Typically, to measure the distance over which an optical trap would influence matter in a horizontal (perpendicular to beam propagation) direction, it was common to hold an object in one optical trap, place a second optical trap a known distance away, turn off the first optical trap, and note if the object was moved into the second trap when it was turned on. The disadvantage of this technique is that it only gives information of trap influence distance in horizontal (x y) directions. No information about the distance of the influence of the trap is gained in the direction of propagation of the beam (the z direction). A method was developed to use a time-of-flight technique to determine the length along the propagation direction of an optical trap beam over which an object may be drawn into the optical trap. Test objects (polystyrene microspheres) were held in an optical trap in a water-filled sample chamber and raised to a pre-determined position near the top of the sample chamber. Next, the test objects were released by blocking the optical trap beam. The test objects were allowed to fall through the water for predetermined periods of time, at the end of which the trapping beam was unblocked. It was noted whether or not the test object returned to the optical trap or continued to fall. This determination of the length of an optical trap's influence by this manner assumes that the test object falls through the water in the sample chamber at terminal velocity for the duration of its fall, so that the distance of trap influence can be computed simply by: d = VTt, where d is the trap length (or distance of trap reach), VT is the terminal velocity of the test object, and t is the time interval over which the object is allowed to fall.
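The closing relation d = VT·t needs only the terminal velocity of the microsphere, which for a small sphere in water can be estimated from Stokes drag. A back-of-envelope sketch with assumed bead parameters (the actual bead size and fall times used in the experiment are not stated here):

```python
# Stokes terminal velocity and optical-trap reach, illustrative numbers only.
G = 9.81                 # m s^-2
RADIUS = 2.5e-6          # m, a 5-um-diameter polystyrene bead (assumed)
RHO_BEAD = 1050.0        # kg m^-3, polystyrene
RHO_WATER = 1000.0       # kg m^-3
VISCOSITY = 1.0e-3       # Pa s, water near room temperature

# Stokes terminal velocity: v = 2 r^2 (rho_bead - rho_water) g / (9 mu)
v_t = 2.0 * RADIUS**2 * (RHO_BEAD - RHO_WATER) * G / (9.0 * VISCOSITY)

for fall_time in (1.0, 2.0, 5.0):            # seconds the beam stays blocked
    d = v_t * fall_time                      # trap reach if the bead is recaptured
    print(f"t = {fall_time:.0f} s  ->  d = {d * 1e6:.2f} um")
```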
Pearson, Amber L
2016-09-20
Most water access studies involve self-reported measures, such as time spent, or simple spatial measures, such as Euclidean distance from home to source. GPS-based measures of access are often considered to reflect actual access and have shown little correlation with self-reported measures. One main obstacle to widespread use of GPS-based measurement of access to water has been technological limitations (e.g., battery life); as a result, GPS-based measures have been limited in duration and in sample size. The aim of this pilot study was to develop and test a novel GPS unit (≤4-week battery life, waterproof) to measure access to water. The GPS-based method was pilot-tested to estimate the number of trips per day, time spent and distance traveled to source for all water collected over a 3-day period in five households in south-western Uganda. This method was then compared to self-reported measures and commonly used spatial measures of access for the same households. Time spent collecting water was significantly overestimated using the self-reported measure compared to the GPS-based measure (p < 0.05). In contrast, both the GIS Euclidean distances to the nearest source and to the actual primary source significantly underestimated the distances traveled, compared to the GPS-based measurement of actual travel paths to the water source (p < 0.05). Households did not consistently collect water from the source nearest their home. Comparisons between the GPS-based measure and self-reported meters traveled were not made, as respondents did not feel that they could accurately estimate distance. However, there was complete agreement between the self-reported primary source and the GPS-based one. Reliance on cross-sectional self-reported or simple GIS measures leads to misclassification in water access measurement. This new method reduces such errors and may aid in understanding dynamic measures of access to water for health studies.
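The contrast between the simple GIS measure and the GPS-based one can be made concrete: with a sequence of GPS fixes, the travelled distance is the sum of successive great-circle segments, while the straight-line measure is the single segment from home to source. A sketch using the haversine formula; the coordinates are invented and real traces would also need filtering of positional noise:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    r = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Invented GPS fixes for one water-collection trip: home -> ... -> source.
track = [(-0.6150, 30.6500), (-0.6157, 30.6509), (-0.6165, 30.6521), (-0.6172, 30.6534)]

path = sum(haversine_m(*track[i], *track[i + 1]) for i in range(len(track) - 1))
straight = haversine_m(*track[0], *track[-1])
print(f"travelled ≈ {path:.0f} m, straight-line ≈ {straight:.0f} m")
```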
Problems in determining the surface density of the Galactic disk
NASA Technical Reports Server (NTRS)
Statler, Thomas S.
1989-01-01
A new method is presented for determining the local surface density of the Galactic disk from distance and velocity measurements of stars toward the Galactic poles. The procedure is fully three-dimensional, approximating the Galactic potential by a potential of Staeckel form and using the analytic third integral to treat the tilt and the change of shape of the velocity ellipsoid consistently. Applying the procedure to artificial data superficially resembling the K dwarf sample of Kuijken and Gilmore (1988, 1989), it is shown that the current best estimates of local disk surface density are uncertain by at least 30 percent. Of this, about 25 percent is due to the size of the velocity sample, about 15 percent comes from uncertainties in the rotation curve and the solar galactocentric distance, and about 10 percent from ignorance of the shape of the velocity distribution above z = 1 kpc, the errors adding in quadrature. Increasing the sample size by a factor of 3 will reduce the error to 20 percent. To achieve 10 percent accuracy, observations will be needed along other lines of sight to constrain the shape of the velocity ellipsoid.
Multiparallel Three-Dimensional Optical Microscopy
NASA Technical Reports Server (NTRS)
Nguyen, Lam K.; Price, Jeffrey H.; Kellner, Albert L.; Bravo-Zanoquera, Miguel
2010-01-01
Multiparallel three-dimensional optical microscopy is a method of forming an approximate three-dimensional image of a microscope sample as a collection of images from different depths through the sample. The imaging apparatus includes a single microscope plus an assembly of beam splitters and mirrors that divide the output of the microscope into multiple channels. An imaging array of photodetectors in each channel is located at a different distance along the optical path from the microscope, corresponding to a focal plane at a different depth within the sample. The optical path leading to each photodetector array also includes lenses to compensate for the variation of magnification with distance so that the images ultimately formed on all the photodetector arrays are of the same magnification. The use of optical components common to multiple channels in a simple geometry makes it possible to obtain high light-transmission efficiency with an optically and mechanically simple assembly. In addition, because images can be read out simultaneously from all the photodetector arrays, the apparatus can support three-dimensional imaging at a high scanning rate.
Tetali, Shailaja; Edwards, Phil; Murthy, G V S; Roberts, I
2015-10-28
Although some 300 million Indian children travel to school every day, little is known about how they get there. This information is important for transport planners and public health authorities. This paper presents the development of a self-administered questionnaire and examines its reliability and validity in estimating distance and mode of travel to school in a low-resource urban setting. We developed a questionnaire on children's travel to school. We assessed test-retest reliability by repeating the questionnaire one week later (n = 61). The questionnaire was improved and re-tested (n = 68). We examined the convergent validity of distance estimates by comparing estimates based on the nearest landmark to children's homes with a 'gold standard' based on one-to-one interviews with children using detailed maps (n = 50). Most questions showed fair to almost perfect agreement. Questions on usual mode of travel (κ 0.73-0.84) and road injury (κ 0.61-0.72) were found to be more reliable than those on parental permissions (κ 0.18-0.30), perception of safety (κ 0.00-0.54), and physical activity (κ -0.01-0.07). The distance estimated by the nearest-landmark method was not significantly different from that of the in-depth method for walking, 52 m [95% CI -32 m to 135 m], 10% of the mean difference, or for walking and cycling combined, 65 m [95% CI -30 m to 159 m], 11% of the mean difference. For children who used motorized transport (excluding private school bus), the nearest-landmark method under-estimated distance by an average of 325 metres [95% CI -664 m to 1314 m], 15% of the mean difference. A self-administered questionnaire was found to provide reliable information on the usual mode of travel to school, and on road injury, in a small sample of children in Hyderabad, India. The 'nearest landmark' method can be applied in similar low-resource settings for a reasonably accurate estimate of the distance from a child's home to school.
Multivariate model of female black bear habitat use for a Geographic Information System
Clark, Joseph D.; Dunn, James E.; Smith, Kimberly G.
1993-01-01
Simple univariate statistical techniques may not adequately assess the multidimensional nature of habitats used by wildlife. Thus, we developed a multivariate method to model habitat-use potential using a set of female black bear (Ursus americanus) radio locations and habitat data consisting of forest cover type, elevation, slope, aspect, distance to roads, distance to streams, and forest cover type diversity score in the Ozark Mountains of Arkansas. The model is based on the Mahalanobis distance statistic coupled with Geographic Information System (GIS) technology. That statistic is a measure of dissimilarity and represents a standardized squared distance between a set of sample variates and an ideal based on the mean of variates associated with animal observations. Calculations were made with the GIS to produce a map containing Mahalanobis distance values within each cell on a 60- × 60-m grid. The model identified areas of high habitat use potential that could not otherwise be identified by independent perusal of any single map layer. This technique avoids many pitfalls that commonly affect typical multivariate analyses of habitat use and is a useful tool for habitat manipulation or mitigation to favor terrestrial vertebrates that use habitats on a landscape scale.
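The mapping step of such a model amounts to computing, for every grid cell, the Mahalanobis distance of its habitat vector from the mean vector of the animal locations; small distances flag cells resembling used habitat. A hedged numpy sketch with made-up layers standing in for the GIS data described above:

```python
import numpy as np

rng = np.random.default_rng(3)

# Made-up habitat layers on a small grid: elevation (m), slope (deg),
# distance to road (m) -- stand-ins for the GIS layers used in the study.
n = 50
layers = np.stack([rng.normal(400, 80, (n, n)),
                   rng.normal(15, 6, (n, n)),
                   rng.normal(800, 300, (n, n))], axis=-1)

# Habitat vectors at (made-up) radio locations define the "ideal" mean and covariance.
locs = rng.integers(0, n, (120, 2))
used = layers[locs[:, 0], locs[:, 1]]
mean = used.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(used, rowvar=False))

diff = layers - mean                               # shape (n, n, 3)
md2 = np.einsum("ijk,kl,ijl->ij", diff, cov_inv, diff)   # squared Mahalanobis distance
print("cell most similar to used habitat:", np.unravel_index(md2.argmin(), md2.shape))
```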
Genetic variability in Brazilian wheat cultivars assessed by microsatellite markers
2009-01-01
Wheat (Triticum aestivum) is one of the most important food staples in the south of Brazil. Understanding genetic variability among the assortment of Brazilian wheat is important for breeding. The aim of this work was to molecularly characterize the thirty-six wheat cultivars recommended for various regions of Brazil, and to assess their mutual genetic distances, through the use of microsatellite markers. Twenty-three polymorphic microsatellite markers (PMM) delineated all 36 of the samples, revealing a total of 74 simple sequence repeat (SSR) alleles, i.e., an average of 3.2 alleles per locus. The polymorphic information content (PIC value), calculated to assess the informativeness of each marker, ranged from 0.20 to 0.79, with a mean of 0.49. Genetic distances among the 36 cultivars ranged from 0.10 (between cultivars Ocepar 18 and BRS 207) to 0.88 (between cultivars CD 101 and Fudancep 46), the mean distance being 0.48. Twelve groups were obtained using the unweighted pair-group method with arithmetic means (UPGMA), and thirteen through the Tocher method. Both methods produced similar clusters, with one to thirteen cultivars per group. The results indicate that these tools may be used to protect intellectual property and for breeding and selection programs. PMID:21637519
Student Retention in Distance Education: Are We Failing Our Students?
ERIC Educational Resources Information Center
Simpson, Ormond
2013-01-01
This paper brings together some data on student retention in distance education in the form of graduation rates at a sample of distance institutions. The paper suggests that there is a "distance education deficit" with many distance institutions having less than one-quarter of the graduation rates of conventional institutions. It looks…
Bøcher, Peder Klith; McCloy, Keith R
2006-02-01
In this investigation, the characteristics of the average local variance (ALV) function are investigated through the acquisition of images at different spatial resolutions of constructed scenes consisting of regular patterns of black and white squares. It is shown that the ALV plot consistently peaks at a spatial resolution at which the pixels have a size corresponding to half the distance between scene objects, and that, under very specific conditions, it also peaks at a spatial resolution at which the pixel size corresponds to the whole distance between scene objects. It is argued that the peak at the object distance, when present, is an expression of the Nyquist sample rate. The presence of this peak is, hence, shown to be a function of the matching between the phase of the scene pattern and the phase of the sample grid, i.e., the image. When these phases match, a clear and distinct peak is produced on the ALV plot. The fact that the peak at half the distance consistently occurs in the ALV plot is linked to the circumstance that the sampling interval (distance between pixels) and the extent of the sampling unit (size of pixels) are equal. Hence, at twice the Nyquist sampling rate, each fundamental period of the pattern is covered by four pixels; therefore, at least one pixel is always completely embedded within one pattern element, regardless of the sample-scene phase. If the objects in the scene are scattered with a spacing larger than their extent, the peak will be related to the object size by a factor larger than 1/2. This is suggested to be the explanation for the results presented by others that the ALV plot is related to scene-object size by a factor of 1/2-3/4.
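The ALV plot itself is straightforward to compute: degrade the image to successively coarser pixel sizes by block averaging and, at each resolution, take the mean of the local variance in a small moving window. A simple sketch in that spirit, using a toy checkerboard as a stand-in for the constructed scenes described above:

```python
import numpy as np

def block_average(img, f):
    """Degrade `img` by averaging non-overlapping f x f blocks."""
    h, w = (img.shape[0] // f) * f, (img.shape[1] // f) * f
    return img[:h, :w].reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def average_local_variance(img, win=3):
    """Mean of the variance inside every win x win moving window."""
    h, w = img.shape
    vals = [img[i:i + win, j:j + win].var()
            for i in range(h - win + 1) for j in range(w - win + 1)]
    return float(np.mean(vals))

# Toy scene: black/white squares of side 8 pixels, so the spacing between
# like objects is 16 pixels and half that spacing is 8 pixels.
y, x = np.indices((128, 128))
scene = (((x // 8) + (y // 8)) % 2).astype(float)

for f in (1, 2, 4, 8, 16):
    coarse = block_average(scene, f)
    print(f"pixel size {f:2d}  ALV = {average_local_variance(coarse):.4f}")
```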
Approximating the Generalized Voronoi Diagram of Closely Spaced Objects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edwards, John; Daniel, Eric; Pascucci, Valerio
2015-06-22
We present an algorithm to compute an approximation of the generalized Voronoi diagram (GVD) on arbitrary collections of 2D or 3D geometric objects. In particular, we focus on datasets with closely spaced objects; GVD approximation is expensive and sometimes intractable on these datasets using previous algorithms. With our approach, the GVD can be computed using commodity hardware even on datasets with many, extremely tightly packed objects. Our approach is to subdivide the space with an octree that is represented with an adjacency structure. We then use a novel adaptive distance transform to compute the distance function on octree vertices. The computed distance field is sampled more densely in areas of close object spacing, enabling robust and parallelizable GVD surface generation. We demonstrate our method on a variety of data and show example applications of the GVD in 2D and 3D.
Wang, Kang; Xia, Xing-Hua
2006-03-31
The end of the separation channel in a microchip was electrochemically mapped using the feedback imaging mode of scanning electrochemical microscopy (SECM). This method provides a convenient way to align the microchannel and the electrode in microchip capillary electrophoresis. The influence of the electrode-to-channel position on separation parameters in this capillary electrophoresis-electrochemical detection (CE-ED) setup was then investigated. For the trapezoid-shaped microchannel, detection in the central area resulted in the best apparent separation efficiency and peak shape. For electrode-to-channel distances ranging from 65 to 15 μm, the limiting peak currents of dopamine increased as the detection distance decreased, owing to the limited diffusion and convection of the sample band. The results showed that the radial position and axial distance of the detection electrode relative to the microchannel are important for the improvement of separation parameters in CE amperometric detection.
Pulsed single-blow regenerator testing
NASA Technical Reports Server (NTRS)
Oldson, J. C.; Knowles, T. R.; Rauch, J.
1992-01-01
A pulsed single-blow method has been developed for testing of Stirling regenerator materials performance. The method uses a tubular flow arrangement with a steady gas flow passing through a regenerator matrix sample that packs the flow channel for a short distance. A wire grid heater spanning the gas flow channel is used to heat a plug of gas by approximately 2 K for approximately 350 ms. Foil thermocouples monitor the gas temperature entering and leaving the sample. Data analysis based on a 1D incompressible-flow thermal model allows the extraction of Stanton number. A figure of merit involving heat transfer and pressure drop is used to present results for steel screens and steel felt. The observations show a lower figure of merit for the materials tested than is expected based on correlations obtained by other methods.
Gabriele, Michelle L.; Ishikawa, Hiroshi; Wollstein, Gadi; Bilonick, Richard A.; Townsend, Kelly A.; Kagemann, Larry; Wojtkowski, Maciej; Srinivasan, Vivek J.; Fujimoto, James G.; Duker, Jay S.; Schuman, Joel S.
2009-01-01
PURPOSE To investigate the effect on optical coherence tomography (OCT) retinal nerve fiber layer (RNFL) thickness measurements of varying the standard 3.4-mm-diameter circle location. METHODS The optic nerve head (ONH) region of 17 eyes of 17 healthy subjects was imaged with high-speed, ultrahigh-resolution OCT (hsUHR-OCT; 501 × 180 axial scans covering a 6 × 6-mm area; scan time, 3.84 seconds) for a comprehensive sampling. This method allows for systematic simulation of the variable circle placement effect. RNFL thickness was measured on this three-dimensional dataset by using a custom-designed software program. RNFL thickness was resampled along a 3.4-mm-diameter circle centered on the ONH, then along 3.4-mm circles shifted horizontally (x-shift), vertically (y-shift) and diagonally up to ±500 µm (at 100-µm intervals). Linear mixed-effects models were used to determine RNFL thickness as a function of the scan circle shift. A model for the distance between the two thickest measurements along the RNFL thickness circular profile (peak distance) was also calculated. RESULTS RNFL thickness tended to decrease with both positive and negative x- and y-shifts. The range of shifts that caused a decrease greater than the variability inherent to the commercial device was greater in both nasal and temporal quadrants than in the superior and inferior ones. The model for peak distance demonstrated that as the scan moves nasally, the RNFL peak distance increases, and as the circle moves temporally, the distance decreases. Vertical shifts had a minimal effect on peak distance. CONCLUSIONS The location of the OCT scan circle affects RNFL thickness measurements. Accurate registration of OCT scans is essential for measurement reproducibility and longitudinal examination (ClinicalTrials.gov number, NCT00286637). PMID:18515577
Improved Compressive Sensing of Natural Scenes Using Localized Random Sampling
Barranca, Victor J.; Kovačič, Gregor; Zhou, Douglas; Cai, David
2016-01-01
Compressive sensing (CS) theory demonstrates that by using uniformly-random sampling, rather than uniformly-spaced sampling, higher quality image reconstructions are often achievable. Considering that the structure of sampling protocols has such a profound impact on the quality of image reconstructions, we formulate a new sampling scheme motivated by physiological receptive field structure, localized random sampling, which yields significantly improved CS image reconstructions. For each set of localized image measurements, our sampling method first randomly selects an image pixel and then measures its nearby pixels with probability depending on their distance from the initially selected pixel. We compare the uniformly-random and localized random sampling methods over a large space of sampling parameters, and show that, for the optimal parameter choices, higher quality image reconstructions can be consistently obtained by using localized random sampling. In addition, we argue that the localized random CS optimal parameter choice is stable with respect to diverse natural images, and scales with the number of samples used for reconstruction. We expect that the localized random sampling protocol helps to explain the evolutionarily advantageous nature of receptive field structure in visual systems and suggests several future research areas in CS theory and its application to brain imaging. PMID:27555464
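A sketch of the localized random sampling idea: pick a seed pixel uniformly at random, then keep nearby pixels with a probability that decays with distance from the seed; repeating this gives one localized measurement per seed. The Gaussian fall-off and its width below are placeholder choices, not the parameters used in the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

def localized_sample_mask(shape, sigma):
    """One localized measurement: a random seed pixel plus nearby pixels kept
    with probability exp(-d^2 / (2 sigma^2)), d = distance from the seed."""
    h, w = shape
    sy, sx = rng.integers(h), rng.integers(w)
    yy, xx = np.indices(shape)
    d2 = (yy - sy) ** 2 + (xx - sx) ** 2
    keep_prob = np.exp(-d2 / (2.0 * sigma ** 2))
    return rng.random(shape) < keep_prob

# Collect m localized measurements of a toy image: each is a masked sum.
image = rng.random((64, 64))
masks, measurements = [], []
for _ in range(300):
    mask = localized_sample_mask(image.shape, sigma=3.0)
    masks.append(mask)
    measurements.append(float(image[mask].sum()))

print(f"{len(measurements)} measurements, "
      f"mean pixels per mask = {np.mean([m.sum() for m in masks]):.1f}")
```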
ERIC Educational Resources Information Center
Syed, Mahbubur Rahman, Ed.
2009-01-01
The emerging field of advanced distance education delivers academic courses across time and distance, allowing educators and students to participate in a convenient learning method. "Methods and Applications for Advancing Distance Education Technologies: International Issues and Solutions" demonstrates communication technologies, intelligent…
Cyrill Brosset
1976-01-01
The acid properties of particles have been investigated by means of measuring the content of mainly strong acid in leaching solutions of particle samples and in drain water from trees. The measurements are based on Gran's plot and on a study of its curvature.
Carbon Nanotube Oscillator Surface Profiling Device and Method of Use
2011-11-15
distance p. The constants A and B are the Hamaker constants, which depend on the materials of the two interacting bodies. The total vdW inter... wall CNT 45 with 2L1 = 2L2 = 150 Å and a sample of specific material characterized with atomic density a, Hamaker constants A and B, and a friction
NASA Astrophysics Data System (ADS)
Akmalov, Artem E.; Chistyakov, Alexander A.; Kotkovskii, Gennadii E.; Sychev, Alexei V.
2017-10-01
The ways of increasing the distance of non-contact sampling up to 40 cm for a field asymmetric ion mobility (FAIM) spectrometer are formulated and implemented through the use of laser desorption and an active shaper of the vortex flow. Numerical modeling of the air sampling flows was performed, and a sampling device for a laser-based FAIM spectrometer, based on a high-speed rotating impeller located coaxially with the ion source, was designed. The dependence of the trinitrotoluene vapor signal on the rotational speed was obtained and the sampling flow rate was optimized. The effective sampling distance for trinitrotoluene vapor detection by a FAIM spectrometer with a rotating impeller is increased up to 28 cm. The distance is raised up to 40 cm using laser irradiation of traces of explosives. It is shown that efficient desorption of low-volatility explosives is achieved at a laser intensity of 10^7 W/cm^2, wavelength λ = 266 nm, pulse energy of about 1 mJ and pulse repetition rate of not less than 10 Hz under ambient conditions. The ways of optimizing the internal gas flows of a FAIM spectrometer for work at increased sampling distances are discussed.
Narayan, Lakshmi; Dodd, Richard S.; O’Hara, Kevin L.
2015-01-01
Premise of the study: Identifying clonal lineages in asexually reproducing plants using microsatellite markers is complicated by the possibility of nonidentical genotypes from the same clonal lineage due to somatic mutations, null alleles, and scoring errors. We developed and tested a clonal identification protocol that is robust to these issues for the asexually reproducing hexaploid tree species coast redwood (Sequoia sempervirens). Methods: Microsatellite data from four previously published and two newly developed primers were scored using a modified protocol, and clones were identified using Bruvo genetic distances. The effectiveness of this clonal identification protocol was assessed using simulations and by genotyping a test set of paired samples of different tissue types from the same trees. Results: Data from simulations showed that our protocol allowed us to accurately identify clonal lineages. Multiple test samples from the same trees were identified correctly, although certain tissue type pairs had larger genetic distances on average. Discussion: The methods described in this paper will allow for the accurate identification of coast redwood clones, facilitating future studies of the reproductive ecology of this species. The techniques used in this paper can be applied to studies of other clonal organisms as well. PMID:25798341
Appleton, P L; Quyn, A J; Swift, S; Näthke, I
2009-05-01
Visualizing overall tissue architecture in three dimensions is fundamental for validating and integrating biochemical, cell biological and visual data from less complex systems such as cultured cells. Here, we describe a method to generate high-resolution three-dimensional image data of intact mouse gut tissue. The regions of highest interest lie between 50 and 200 μm deep within this tissue. The quality and usefulness of three-dimensional image data from such depths are limited owing to problems associated with scattered light, photobleaching and spherical aberration. Furthermore, the highest-quality oil-immersion lenses are designed to work at a maximum distance of about 10-15 μm into the sample, further compounding the ability to image at high resolution deep within tissue. We show that manipulating the refractive index of the mounting media and decreasing sample opacity greatly improves image quality, such that the limiting factor for a standard, inverted multi-photon microscope is the working distance of the objective rather than the detectable fluorescence. This method negates the need for mechanical sectioning of tissue and enables the routine generation of high-quality, quantitative image data that can significantly advance our understanding of tissue architecture and physiology.
Genomic Characterization Helps Dissecting an Outbreak of Listeriosis in Northern Italy
Comandatore, Francesco; Corbella, Marta; Andreoli, Giuseppina; Scaltriti, Erika; Aguzzi, Massimo; Gaiarsa, Stefano; Mariani, Bianca; Morganti, Marina; Bandi, Claudio; Fabbi, Massimo; Marone, Piero; Pongolini, Stefano; Sassera, Davide
2017-01-01
Introduction: Listeria monocytogenes (Lm) is a bacterium widely distributed in nature and able to contaminate food processing environments, including those of dairy products. Lm is a primary public health issue, due to the very low infectious dose and the ability to produce severe outcomes, in particular in the elderly, newborns, pregnant women and immunocompromised patients. Methods: In the period between April and July 2015, an increased number of cases of listeriosis was observed in the area of Pavia, Northern Italy. An epidemiological investigation identified a small organic cheesemaking farm as the possible origin of the outbreak. In this work we present the results of the retrospective epidemiological study that we performed using molecular biology and genomic epidemiology methods. The strains sampled from patients and those from the target farm's cheese were analyzed using PFGE and whole genome sequencing (WGS) based methods. The WGS-based analyses included: a) in-silico MLST typing; b) SNP calling and genetic distance evaluation; c) determination of the resistance and virulence gene profiles; d) SNP-based phylogenetic reconstruction. Results: Three of the patient strains and all the cheese strains were found to belong to the same phylogenetic cluster, in Sequence Type 29. A further accurate SNP analysis revealed that two of the three patient strains and all the cheese strains were highly similar (0.8 SNPs of average distance) and exhibited a higher distance from the third patient isolate (9.4 SNPs of average distance). Discussion: Despite the overall agreement between the results of the PFGE and WGS epidemiological studies, the latter approach agrees with the epidemiological data in indicating that one of the patient strains could have originated from a different source. This result highlights that WGS methods can allow to better… PMID:28856063
NASA Astrophysics Data System (ADS)
Rivenson, Yair; Wu, Chris; Wang, Hongda; Zhang, Yibo; Ozcan, Aydogan
2017-03-01
Microscopic imaging of biological samples such as pathology slides is one of the standard diagnostic methods for screening various diseases, including cancer. These biological samples are usually imaged using traditional optical microscopy tools; however, the high cost, bulkiness and limited imaging throughput of traditional microscopes partially restrict their deployment in resource-limited settings. In order to mitigate this, we previously demonstrated a cost-effective and compact lens-less on-chip microscopy platform with a wide field-of-view of >20-30 mm^2. The lens-less microscopy platform has shown its effectiveness for imaging of highly connected biological samples, such as pathology slides of various tissue samples and smears, among others. This computational holographic microscope requires a set of super-resolved holograms acquired at multiple sample-to-sensor distances, which are used as input to an iterative phase recovery algorithm and holographic reconstruction process, yielding high-resolution images of the samples in phase and amplitude channels. Here we demonstrate that in order to reconstruct clinically relevant images with high resolution and image contrast, we require less than 50% of the previously reported nominal number of holograms acquired at different sample-to-sensor distances. This is achieved by incorporating a loose sparsity constraint as part of the iterative holographic object reconstruction. We demonstrate the success of this sparsity-based computational lens-less microscopy platform by imaging pathology slides of breast cancer tissue and Papanicolaou (Pap) smears.
Comparative Issues and Methods in Organizational Diagnosis. Report II. The Decision Tree Approach.
organizational diagnosis. The advantages and disadvantages of the decision-tree approach generally, and in this study specifically, are examined. A pre-test, using a civilian sample of 174 work groups with Survey of Organizations data, was conducted to assess various decision-tree classification criteria, in terms of their similarity to the distance function used by Bowers and Hausser (1977). The results suggested the use of a large developmental sample, which should result in more distinctly defined boundary lines between classification profiles. Also, the decision matrix
Model-Based Design of Long-Distance Tracer Transport Experiments in Plants.
Bühler, Jonas; von Lieres, Eric; Huber, Gregor J
2018-01-01
Studies of long-distance transport of tracer isotopes in plants offer a high potential for functional phenotyping, but so far measurement time is a bottleneck because continuous time series of at least 1 h are required to obtain reliable estimates of transport properties. Hence, usual throughput values are between 0.5 and 1 samples h^-1. Here, we propose to increase sample throughput by introducing temporal gaps in the data acquisition of each plant sample and measuring multiple plants one after another in a rotating scheme. In contrast to common time series analysis methods, mechanistic tracer transport models allow the analysis of interrupted time series. The uncertainties of the model parameter estimates are used as a measure of how much information was lost compared to complete time series. A case study was set up to systematically investigate different experimental schedules for different throughput scenarios ranging from 1 to 12 samples h^-1. Selected designs with only a small number of data points were found to be sufficient for adequate parameter estimation, implying that the presented approach enables a substantial increase of sample throughput. The presented general framework for automated generation and evaluation of experimental schedules allows the determination of a maximal sample throughput and the respective optimal measurement schedule depending on the required statistical reliability of data acquired by future experiments.
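The core idea above is that a mechanistic model can be fitted to an interrupted tracer time series, and the parameter covariance from the fit quantifies how much information the gaps cost. The sketch below is not the authors' transport model; it fits an illustrative saturating arrival curve with scipy and compares parameter uncertainties for a complete versus a gapped series (the model form, parameter values, noise level, and gap pattern are all assumptions).

    import numpy as np
    from scipy.optimize import curve_fit

    def arrival(t, a, tau, t0):
        """Illustrative tracer arrival curve: zero before t0, saturating afterwards."""
        return a * (1.0 - np.exp(-np.clip(t - t0, 0.0, None) / tau))

    rng = np.random.default_rng(0)
    t_full = np.linspace(0, 60, 61)                       # one measurement per minute
    y_full = arrival(t_full, 1.0, 12.0, 5.0) + rng.normal(0, 0.02, t_full.size)

    # Interrupted schedule: keep only every 4th point, as if this plant were
    # measured in rotation with three other plants.
    keep = np.arange(t_full.size) % 4 == 0
    t_gap, y_gap = t_full[keep], y_full[keep]

    for label, t, y in [("complete", t_full, y_full), ("gapped", t_gap, y_gap)]:
        popt, pcov = curve_fit(arrival, t, y, p0=[0.8, 10.0, 3.0])
        err = np.sqrt(np.diag(pcov))                      # 1-sigma parameter uncertainties
        print(label, "estimates:", np.round(popt, 3), "std errors:", np.round(err, 3))

Comparing the standard errors from the two fits gives a direct, if simplified, measure of the information lost by interrupting the time series.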
Kongskov, Rasmus Dalgas; Jørgensen, Jakob Sauer; Poulsen, Henning Friis; Hansen, Per Christian
2016-04-01
Classical reconstruction methods for phase-contrast tomography consist of two stages: phase retrieval and tomographic reconstruction. A novel algebraic method combining the two was suggested by Kostenko et al. [Opt. Express 21, 12185 (2013), doi:10.1364/OE.21.012185], and preliminary results demonstrated improved reconstruction compared with a given two-stage method. Using simulated free-space propagation experiments with a single sample-detector distance, we thoroughly compare the novel method with the two-stage method to address limitations of the preliminary results. We demonstrate that the novel method is substantially more robust toward noise; our simulations point to a possible reduction in counting times by an order of magnitude.
Research of mine water source identification based on LIF technology
NASA Astrophysics Data System (ADS)
Zhou, Mengran; Yan, Pengcheng
2016-09-01
Because traditional chemical methods for identifying mine water sources are time-consuming, we propose a rapid source identification system for mine water inrush based on laser-induced fluorescence (LIF). The basic principle of LIF technology is analyzed, the hardware composition of the LIF system is described, and the related modules are selected. Fluorescence spectra of coal mine water samples were obtained through experiments with the LIF system. Traditional source identification relies mainly on the ion concentrations that characterize the water, but these concentrations are difficult to infer from fluorescence spectra. This paper therefore proposes a simple and practical method for rapid identification of water by its fluorescence spectrum: the spectral distance between unknown water samples and standard samples is measured, and the category of the unknown sample is then obtained by cluster analysis. Source identification of unknown samples verified the reliability of the LIF system and addresses the current lack of real-time, online monitoring of water inrush in coal mines, which is of great significance for safe coal mine production.
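The identification step described above amounts to measuring the distance between an unknown fluorescence spectrum and a set of standard spectra and assigning the nearest class; a clustering step can play the same role. A minimal nearest-standard sketch in Python is shown below; the spectra and source names are synthetic placeholders, not LIF measurements from this work.

    import numpy as np

    wavelengths = np.linspace(400, 600, 101)              # nm, placeholder grid

    def gaussian_band(center, width):
        return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

    # Standard spectra for known water sources (synthetic shapes for illustration).
    standards = {
        "sandstone_water": gaussian_band(450, 15),
        "limestone_water": gaussian_band(500, 20),
        "goaf_water":      gaussian_band(540, 25),
    }

    def classify(spectrum):
        """Assign the unknown spectrum to the standard with the smallest Euclidean distance."""
        distances = {name: np.linalg.norm(spectrum - ref) for name, ref in standards.items()}
        return min(distances, key=distances.get), distances

    unknown = gaussian_band(495, 21) + np.random.default_rng(1).normal(0, 0.02, wavelengths.size)
    label, dists = classify(unknown)
    print("assigned source:", label)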
Where and When Should Sensors Move? Sampling Using the Expected Value of Information
de Bruin, Sytze; Ballari, Daniela; Bregt, Arnold K.
2012-01-01
In case of an environmental accident, initially available data are often insufficient for properly managing the situation. In this paper, new sensor observations are iteratively added to an initial sample by maximising the global expected value of information of the points for decision making. This is equivalent to minimizing the aggregated expected misclassification costs over the study area. The method considers measurement error and different costs for class omissions and false class commissions. Constraints imposed by a mobile sensor web are accounted for using cost distances to decide which sensor should move to the next sample location. The method is demonstrated using synthetic examples of static and dynamic phenomena. This allowed computation of the true misclassification costs and comparison with other sampling approaches. The probability of local contamination levels being above a given critical threshold were computed by indicator kriging. In the case of multiple sensors being relocated simultaneously, a genetic algorithm was used to find sets of suitable new measurement locations. Otherwise, all grid nodes were searched exhaustively, which is computationally demanding. In terms of true misclassification costs, the method outperformed random sampling and sampling based on minimisation of the kriging variance. PMID:23443379
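A simplified version of the decision criterion above: given a map of probabilities that contamination exceeds the critical threshold (for example from indicator kriging) and asymmetric costs for class omissions and commissions, the expected misclassification cost of the optimal decision at each node can be aggregated over the study area, and candidate sensor locations are ranked by how much they are expected to reduce that aggregate. The sketch below only computes the aggregated expected cost for a given probability map; the grid, probabilities, and cost values are placeholders, and the value-of-information update itself is omitted.

    import numpy as np

    # Placeholder probability map: P(contamination > threshold) at each grid node,
    # e.g. obtained from indicator kriging of the current sample.
    rng = np.random.default_rng(2)
    prob = rng.uniform(0.0, 1.0, size=(50, 50))

    COST_OMISSION = 10.0    # declaring "safe" where contamination is actually present
    COST_COMMISSION = 2.0   # declaring "contaminated" where it is actually absent

    def expected_cost(p):
        """Expected cost of the best decision at a node with exceedance probability p."""
        declare_safe = p * COST_OMISSION                      # risk of a false negative
        declare_contaminated = (1.0 - p) * COST_COMMISSION    # risk of a false positive
        return np.minimum(declare_safe, declare_contaminated)

    aggregate = expected_cost(prob).sum()
    print("aggregated expected misclassification cost:", round(float(aggregate), 1))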
Rainfall Observed Over Bangladesh 2000-2008: A Comparison of Spatial Interpolation Methods
NASA Astrophysics Data System (ADS)
Pervez, M.; Henebry, G. M.
2010-12-01
In preparation for a hydrometeorological study of freshwater resources in the greater Ganges-Brahmaputra region, we compared the results of four methods of spatial interpolation applied to point measurements of daily rainfall over Bangladesh during a seven-year period (2000-2008). Two univariate methods (inverse distance weighting, and regularized and tension splines) and two multivariate geostatistical methods (ordinary kriging and kriging with external drift) were used to interpolate daily observations from a network of 221 rain gauges across Bangladesh spanning an area of 143,000 sq km. Elevation and topographic index were used as the covariates in the geostatistical methods. The validity of the interpolated maps was analyzed through cross-validation. The quality of the methods was assessed through the Pearson and Spearman correlations and root mean square error measurements of accuracy in cross-validation. Preliminary results indicated that the univariate methods performed better than the geostatistical methods at daily scales, likely due to the relatively densely sampled point measurements and a weak correlation between the rainfall and covariates at daily scales in this region. Inverse distance weighting produced better results than the splines. For days with extreme or high rainfall (spatially and quantitatively), the correlation between observed and interpolated estimates appeared to be high (r2 ~ 0.6, RMSE ~ 10 mm), although for low rainfall days the correlations were poor (r2 ~ 0.1, RMSE ~ 3 mm). The performance of these methods was influenced by the density of the sample point measurements, the quantity and spatial extent of the observed rainfall, and an appropriate search radius defining the neighboring points. Results indicated that interpolated rainfall estimates at daily scales may introduce uncertainties in the subsequent hydrometeorological analysis. Interpolations at 5-day, 10-day, 15-day, and monthly time scales are currently under investigation.
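As a reference for the comparison above, inverse distance weighting predicts the value at an unsampled location as a weighted average of nearby gauge observations, with weights proportional to an inverse power of distance. A compact sketch follows; the gauge coordinates, rainfall values, power, and search radius are placeholders, not the Bangladesh network.

    import numpy as np

    # Placeholder daily rain-gauge data: (x_km, y_km, rainfall_mm)
    gauges = np.array([
        [10.0, 12.0,  5.0],
        [14.0, 30.0, 22.0],
        [40.0, 18.0,  0.0],
        [55.0, 44.0, 13.0],
    ])

    def idw(x, y, power=2.0, radius=50.0):
        """Inverse-distance-weighted estimate at (x, y) from gauges within `radius` km."""
        d = np.hypot(gauges[:, 0] - x, gauges[:, 1] - y)
        if np.any(d < 1e-9):                   # target coincides with a gauge
            return float(gauges[d < 1e-9, 2][0])
        near = d <= radius                     # neighbors inside the search radius
        w = 1.0 / d[near] ** power
        return float(np.sum(w * gauges[near, 2]) / np.sum(w))

    print("estimate at (25, 25):", round(idw(25.0, 25.0), 2), "mm")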
Li, ZhiLiang; Wu, ShiRong; Chen, ZeCong; Ye, Nancy; Yang, ShengXi; Liao, ChunYang; Zhang, MengJun; Yang, Li; Mei, Hu; Yang, Yan; Zhao, Na; Zhou, Yuan; Zhou, Ping; Xiong, Qing; Xu, Hong; Liu, ShuShen; Ling, ZiHua; Chen, Gang; Li, GenRong
2007-10-01
Using only the primary structures of peptides, a new set of descriptors called the molecular electronegativity edge-distance vector (VMED) was proposed and applied to describing and characterizing the molecular structures of oligopeptides and polypeptides, based on the electronegativity of each atom or the electronic charge index (ECI) of atomic clusters and the bonding distance between atom pairs. Here, the molecular structures of antigenic polypeptides were well expressed in order to propose an automated technique for the computerized identification of helper T lymphocyte (Th) epitopes. Furthermore, a modified MED vector was proposed from the primary structures of polypeptides, based on the ECI and the relative bonding distance of the fundamental skeleton groups. The side chains of each amino acid were treated as a pseudo-atom. The developed VMED was easy to calculate and performed well. A quantitative model was established for 28 immunogenic or antigenic polypeptides (AGPP), with 14 (1-14) A(d) and 14 other restricted activities assigned as "1" (+) and "0" (-), respectively. The latter comprised 6 A(b) (15-20), 3 A(k) (21-23), 2 E(k) (24-26), and 2 H-2(k) (27 and 28) restricted sequences. Good results were obtained, with 90% correct classification (only 2 errors in 20 training samples) and 100% correct prediction (no errors in 8 testing samples); a contrasting model gave 100% correct classification (no errors in 20 training samples) and 88% correct prediction (1 error in 8 testing samples). Both stochastic samplings and cross validations were performed to demonstrate good performance. The described method may also be suitable for estimating and predicting human major histocompatibility complex (MHC) class I and II epitopes. It will be useful in immune identification and recognition of proteins and genes and in the design and development of subunit vaccines. Several quantitative structure-activity relationship (QSAR) models were developed for various oligopeptides and polypeptides, including 58 dipeptides and 31 pentapeptides with angiotensin converting enzyme (ACE) inhibition, by the multiple linear regression (MLR) method. In order to demonstrate the ability to characterize the molecular structure of polypeptides, a molecular modeling investigation on QSAR was performed for functional prediction of polypeptide sequences with antigenic activity and heptapeptide sequences with tachykinin activity through quantitative sequence-activity models (QSAMs) using the molecular electronegativity edge-distance vector (VMED). The results showed that VMED exhibited both excellent structural selectivity and good activity prediction. Moreover, VMED behaved quite well for both QSAR and QSAM of poly- and oligopeptides, exhibiting estimation ability and prediction power equal to or better than those reported in previous references. Finally, a preliminary conclusion was drawn: both the classical and the modified MED vectors are very useful structural descriptors. Some suggestions were proposed for further studies on QSAR/QSAM of proteins in various fields.
NASA Astrophysics Data System (ADS)
Talamonti, James J.; Kay, Richard B.; Krebs, Danny J.
1996-05-01
A numerical model was developed to emulate the capabilities of systems performing noncontact absolute distance measurements. The model incorporates known methods to minimize signal processing and digital sampling errors and evaluates the accuracy limitations imposed by spectral peak isolation using Hanning, Blackman, and Gaussian windows in the fast Fourier transform technique. We applied this model to the specific case of measuring the relative lengths of a compound Michelson interferometer. By processing computer-simulated data through our model, we project the ultimate precision for ideal data and for data containing AM-FM noise. The precision is shown to be limited by nonlinearities in the laser scan. Keywords: absolute distance, interferometer.
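The spectral-peak isolation step discussed above can be illustrated with a windowed FFT: multiplying the sampled interference (beat) signal by a Hanning or Blackman window before the transform suppresses spectral leakage, and the dominant bin locates the beat frequency from which a distance follows. This is a generic illustration, not the authors' numerical model; the signal, sampling rate, and beat frequency below are placeholders.

    import numpy as np

    fs = 100_000.0                                  # sampling rate, Hz (placeholder)
    t = np.arange(8192) / fs
    f_beat = 12_345.0                               # true beat frequency, Hz (placeholder)
    rng = np.random.default_rng(3)
    signal = np.cos(2 * np.pi * f_beat * t) + 0.05 * rng.normal(size=t.size)

    window = np.hanning(t.size)                     # Hanning window to reduce leakage
    spectrum = np.abs(np.fft.rfft(signal * window))
    freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)

    peak = int(np.argmax(spectrum))                 # coarse peak; interpolation would refine it
    print("estimated beat frequency:", round(freqs[peak], 1), "Hz")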
Sexual dimorphism in the human face assessed by euclidean distance matrix analysis.
Ferrario, V F; Sforza, C; Pizzini, G; Vogel, G; Miani, A
1993-01-01
The form of any object can be viewed as a combination of size and shape. A recently proposed method (euclidean distance matrix analysis) can differentiate between size and shape differences. It has been applied to analyse the sexual dimorphism in facial form in a sample of 108 healthy young adults (57 men, 51 women). The face was wider and longer in men than in women. A global shape difference was demonstrated, the male face being more rectangular and the female face more square. Gender variations involved especially the lower third of the face and, in particular, the position of the pogonion relative to the other structures. PMID:8300436
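Euclidean distance matrix analysis compares forms through their full sets of inter-landmark distances. A simplified version of that comparison is sketched below, where the form difference matrix is taken as the element-wise ratio of the mean inter-landmark distances of the two groups; the landmark coordinates are random placeholders, and the confidence procedures of the full method are omitted.

    import numpy as np
    from scipy.spatial.distance import pdist

    def mean_form_matrix(group):
        """Average of each inter-landmark distance over the specimens in `group`.

        `group` has shape (n_specimens, n_landmarks, 2)."""
        return np.mean([pdist(spec) for spec in group], axis=0)

    rng = np.random.default_rng(4)
    base = rng.uniform(0, 100, size=(10, 2))         # 10 facial landmarks (placeholder)
    males = base + rng.normal(0, 2, size=(57, 10, 2)) + np.array([0.0, 3.0])   # placeholder samples
    females = base + rng.normal(0, 2, size=(51, 10, 2))

    fdm = mean_form_matrix(males) / mean_form_matrix(females)   # form difference matrix
    print("ratios > 1 indicate distances larger in males; range:",
          round(float(fdm.min()), 3), "to", round(float(fdm.max()), 3))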
NASA Astrophysics Data System (ADS)
Chen, J.; Chen, W.; Dou, A.; Li, W.; Sun, Y.
2018-04-01
A new information extraction method for damaged buildings, rooted in an optimal feature space, is put forward on the basis of the traditional object-oriented method. In this new method, the ESP (estimate of scale parameter) tool is used to optimize the segmentation of the image. Then the distance matrix and minimum separation distance of all kinds of surface features are calculated through sample selection to find the optimal feature space, which is finally applied to extract the image of damaged buildings after an earthquake. The overall extraction accuracy reaches 83.1% and the kappa coefficient 0.813. The new information extraction method greatly improves extraction accuracy and efficiency compared with the traditional object-oriented method, and has good potential for wider use in the information extraction of damaged buildings. In addition, the new method can be used for information extraction from images of damaged buildings at different resolutions after an earthquake, and then to seek the optimal observation scale of damaged buildings through accuracy evaluation. The optimal observation scale of damaged buildings is estimated to be between 1 m and 1.2 m, which provides a reference for future information extraction of damaged buildings.
Phylogenetic Analysis of Genome Rearrangements among Five Mammalian Orders
Luo, Haiwei; Arndt, William; Zhang, Yiwei; Shi, Guanqun; Alekseyev, Max; Tang, Jijun; Hughes, Austin L.; Friedman, Robert
2015-01-01
Evolutionary relationships among placental mammalian orders have been controversial. Whole genome sequencing and new computational methods offer opportunities to resolve the relationships among 10 genomes belonging to the mammalian orders Primates, Rodentia, Carnivora, Perissodactyla and Artiodactyla. By application of the double cut and join distance metric, where gene order is the phylogenetic character, we computed genomic distances among the sampled mammalian genomes. With a marsupial outgroup, the gene order tree supported a topology in which Rodentia fell outside the cluster of Primates, Carnivora, Perissodactyla, and Artiodactyla. Results of breakpoint reuse rate and synteny block length analyses were consistent with the prediction of the random breakage model, which provided a diagnostic test to support the use of gene order as an appropriate phylogenetic character in this study. We discuss the influence of rate differences among lineages and other factors that may contribute to different resolutions of mammalian ordinal relationships by different methods of phylogenetic reconstruction. PMID:22929217
Medium Frequency Pseudo Noise Geological Radar
NASA Technical Reports Server (NTRS)
Arndt, G. Dickey (Inventor); Carl, James R. (Inventor); Byerly, Kent A. (Inventor); Amini, B. Jon (Inventor)
2003-01-01
System and methods are disclosed for transmitting and receiving electromagnetic pulses through a geological formation. A preferably programmable transmitter having an all-digital portion in a preferred embodiment may be operated at frequencies below 1 MHz without loss of target resolution by transmitting long PN codes and oversampling the received signal. A gated and stored portion of the received signal may be correlated with the PN code to determine distances of interfaces within the geological formation, such as the distance of a water interface from a wellbore. The received signal is oversampled, preferably at rates of five to fifty times the carrier frequency. In one method of the invention, an oil well with multiple production zones may be kept in production by detecting an approaching water front in one of the production zones and shutting down that particular production zone, thereby permitting the remaining production zones to continue operating.
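The distance-finding step described above rests on correlating the oversampled received signal with the transmitted PN code: the lag at which the correlation peaks gives the two-way travel time, and hence the distance to the interface. The sketch below shows that principle with a random binary code and an assumed propagation speed; the code length, oversampling factor, delay, and velocity are placeholders, not the patented system's parameters.

    import numpy as np

    rng = np.random.default_rng(5)
    chip_rate = 1.0e6                     # chips per second (placeholder)
    oversample = 10                       # samples per chip, i.e. 10x oversampling
    fs = chip_rate * oversample
    v = 1.5e8                             # assumed propagation speed in the formation, m/s

    code = rng.choice([-1.0, 1.0], size=511)                 # pseudo-noise code (placeholder)
    tx = np.repeat(code, oversample)                          # transmitted, oversampled waveform

    true_delay = 2.37e-6                                      # two-way travel time, s (placeholder)
    delay_samples = int(round(true_delay * fs))
    rx = np.zeros(tx.size + delay_samples)
    rx[delay_samples:delay_samples + tx.size] = 0.3 * tx      # attenuated echo
    rx += rng.normal(0, 0.1, rx.size)                         # receiver noise

    corr = np.correlate(rx, tx, mode="valid")                 # sliding correlation with the code
    lag = int(np.argmax(corr))                                # lag of the correlation peak
    distance = v * (lag / fs) / 2.0                           # one-way distance to the interface
    print("estimated interface distance:", round(distance, 1), "m")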
Calibration of GRB Luminosity Relations with Cosmography
NASA Astrophysics Data System (ADS)
Gao, He; Liang, Nan; Zhu, Zong-Hong
For the use of gamma-ray bursts (GRBs) to probe cosmology in a cosmology-independent way, a new method has been proposed to obtain luminosity distances of GRBs by interpolating directly from the Hubble diagram of SNe Ia, and then calibrating GRB relations at high redshift. In this paper, following the basic assumption in the interpolation method that objects at the same redshift should have the same luminosity distance, we propose another approach to calibrate GRB luminosity relations with cosmographic fitting directly from SN Ia data. In cosmography, a well-known fitting formula relates luminosity distance to redshift through cosmographic parameters that can be fitted from observational data. Using the cosmographic fitting results from the Union set of SNe Ia, we calibrate five GRB relations using the GRB sample at z ≤ 1.4 and deduce distance moduli of GRBs at 1.4 < z ≤ 6.6 by extrapolating the calibrated relations to high redshift. Finally, we constrain the dark energy parameterization models of the Chevallier-Polarski-Linder (CPL) model, the Jassal-Bagla-Padmanabhan (JBP) model and the Alam model with the GRB data at high redshift, as well as with the cosmic microwave background radiation (CMB) and baryonic acoustic oscillation (BAO) observations, and we find the ΛCDM model is consistent with the current data in the 1-σ confidence region.
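For reference, one commonly used third-order cosmographic form of the luminosity distance-redshift relation (for a spatially flat case, in terms of the Hubble constant H0, deceleration parameter q0, and jerk j0) is the Taylor expansion below; the paper's exact fitting formula and the order retained may differ.

    d_L(z) \simeq \frac{c\,z}{H_0}\left[\, 1 + \tfrac{1}{2}\left(1 - q_0\right) z
        - \tfrac{1}{6}\left(1 - q_0 - 3q_0^{2} + j_0\right) z^{2} + \mathcal{O}(z^{3}) \right]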
Wells, James E.; Bono, James L.; Woodbury, Bryan L.; Kalchayanand, Norasak; Norman, Keri N.; Suslow, Trevor V.; López-Velasco, Gabriela; Millner, Patricia D.
2014-01-01
The impact of proximity to a beef cattle feedlot on Escherichia coli O157:H7 contamination of leafy greens was examined. In each of 2 years, leafy greens were planted in nine plots located 60, 120, and 180 m from a cattle feedlot (3 plots at each distance). Leafy greens (270) and feedlot manure samples (100) were collected six different times from June to September in each year. Both E. coli O157:H7 and total E. coli bacteria were recovered from leafy greens at all plot distances. E. coli O157:H7 was recovered from 3.5% of leafy green samples per plot at 60 m, which was higher (P < 0.05) than the 1.8% of positive samples per plot at 180 m, indicating a decrease in contamination as distance from the feedlot was increased. Although E. coli O157:H7 was not recovered from air samples at any distance, total E. coli was recovered from air samples at the feedlot edge and all plot distances, indicating that airborne transport of the pathogen can occur. Results suggest that risk for airborne transport of E. coli O157:H7 from cattle production is increased when cattle pen surfaces are very dry and when this situation is combined with cattle management or cattle behaviors that generate airborne dust. Current leafy green field distance guidelines of 120 m (400 feet) may not be adequate to limit the transmission of E. coli O157:H7 to produce crops planted near concentrated animal feeding operations. Additional research is needed to determine safe set-back distances between cattle feedlots and crop production that will reduce fresh produce contamination. PMID:25452286
Molecular analysis confirms the long-distance transport of Juniperus ashei pollen
Mohanty, Rashmi Prava; Buchheim, Mark Alan; Anderson, James; Levetin, Estelle
2017-01-01
Although considered rare, airborne pollen can be deposited far from its place of origin under a confluence of favorable conditions. Temporally anomalous records of Cupressacean pollen collected from January air samples in London, Ontario, Canada have been cited as a new case of long-distance transport. Data on pollination season implicated Juniperus ashei (mountain cedar), with populations in central Texas and south central Oklahoma, as the nearest source of the Cupressacean pollen in the Canadian air samples. This finding is of special significance given the allergenicity of mountain cedar pollen. While microscopy is used extensively to identify particles in the air spora, pollen from all members of the Cupressaceae, including Juniperus, are morphologically indistinguishable. Consequently, we implemented a molecular approach to characterize Juniperus pollen using PCR in order to test the long-distance transport hypothesis. Our PCR results using species-specific primers confirmed that the anomalous Cupressacean pollen collected in Canada was from J. ashei. Forward trajectory analysis from source areas in Texas and the Arbuckle Mountains in Oklahoma and backward trajectory analysis from the destination area near London, Ontario were completed using models implemented in HYSPLIT4 (Hybrid Single-Particle Lagrangian Integrated Trajectory). Results from these trajectory analyses strongly supported the conclusion that the J. ashei pollen detected in Canada had its origins in Texas or Oklahoma. The results from the molecular findings are significant as they provide a new method to confirm the long-distance transport of pollen that bears allergenic importance. PMID:28273170
The Flow-field From Galaxy Groups In 2MASS
NASA Astrophysics Data System (ADS)
Crook, Aidan; Huchra, J.; Macri, L.; Masters, K.; Jarrett, T.
2011-01-01
We present the first model of a flow-field in the nearby Universe (cz < 12,000 km/s) constructed from groups of galaxies identified in an all-sky flux-limited survey. The Two Micron All-Sky Redshift Survey (2MRS), upon which the model is based, represents the most complete survey of its class and, with near-IR fluxes, provides the optimal method for tracing baryonic matter in the nearby Universe. Peculiar velocities are reconstructed self-consistently with a density-field based upon groups identified in the 2MRS Ks<11.75 catalog. The model predicts infall toward Virgo, Perseus-Pisces, Hydra-Centaurus, Norma, Coma, Shapley and Hercules, and most notably predicts backside-infall into the Norma Cluster. We discuss the application of the model as a predictor of galaxy distances using only angular position and redshift measurements. By calibrating the model using measured distances to galaxies inside 3000 km/s, we show that, for a randomly-sampled 2MRS galaxy, improvement in the estimated distance over the application of Hubble's law is expected to be 30%, and considerably better in the proximity of clusters. We test the model using distance estimates from the SFI++ sample, and find evidence for improvement over the application of Hubble's law to galaxies inside 4000 km/s, although the performance varies depending on the location of the target. This work has been supported by NSF grant AST 0406906 and the Massachusetts Institute of Technology Bruno Rossi and Whiteman Fellowships.
Single-Image Distance Measurement by a Smart Mobile Device.
Chen, Shangwen; Fang, Xianyong; Shen, Jianbing; Wang, Linbo; Shao, Ling
2017-12-01
Existing distance measurement methods either require multiple images and special photographing poses or only measure height with a special view configuration. We propose a novel image-based method that can measure various types of distance from a single image captured by a smart mobile device. The embedded accelerometer is used to determine the view orientation of the device. Consequently, pixels can be back-projected to the ground, thanks to an efficient calibration method using two known distances. Then the distance in pixels is transformed to a real distance in centimeters with a linear model parameterized by the magnification ratio. Various types of distance specified in the image can be computed accordingly. Experimental results demonstrate the effectiveness of the proposed method.
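The final conversion step described above, from pixel distance to physical distance via a linear model, can be illustrated by fitting the two coefficients from the two known calibration distances and then applying them to new measurements. The pixel and centimeter values below are placeholders, and the ground back-projection step is omitted.

    import numpy as np

    # Two calibration measurements: (distance in pixels after back-projection, known distance in cm)
    calib_px = np.array([180.0, 420.0])      # placeholder pixel distances
    calib_cm = np.array([50.0, 120.0])       # corresponding known real distances

    # Fit distance_cm = a * distance_px + b from the two known pairs.
    a, b = np.polyfit(calib_px, calib_cm, deg=1)

    def to_cm(distance_px):
        """Convert a measured pixel distance to centimeters with the calibrated linear model."""
        return a * distance_px + b

    print("estimated length of a 300-px segment:", round(to_cm(300.0), 1), "cm")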
Smouse, P E; Dyer, R J; Westfall, R D; Sork, V L
2001-02-01
Gene flow is a key factor in the spatial genetic structure in spatially distributed species. Evolutionary biologists interested in microevolutionary processes and conservation biologists interested in the impact of landscape change require a method that measures the real-time process of gene movement. We present a novel two-generation (parent-offspring) approach to the study of genetic structure (TwoGener) that allows us to quantify heterogeneity among the male gamete pools sampled by maternal trees scattered across the landscape and to estimate mean pollination distance and effective neighborhood size. First, we describe the model's elements: genetic distance matrices to estimate intergametic distances, molecular analysis of variance to determine whether pollen profiles differ among mothers, and optimal sampling considerations. Second, we evaluate the model's effectiveness by simulating spatially distributed populations. Spatial heterogeneity in male gametes can be estimated by phiFT, a male gametic analogue of Wright's F(ST) and an inverse function of mean pollination distance. We illustrate TwoGener in cases where the male gamete can be categorically or ambiguously determined. This approach does not require the high level of genetic resolution needed by parentage analysis, but the ambiguous case is vulnerable to bias in the absence of adequate genetic resolution. Finally, we apply TwoGener to an empirical study of Quercus alba in Missouri Ozark forests. We find that phiFT = 0.06, translating into about eight effective pollen donors per female and an effective pollination neighborhood as a circle of radius about 17 m. Effective pollen movement in Q. alba is more restricted than previously realized, even though pollen is capable of moving large distances. This case study illustrates that, with a modest investment in field survey and laboratory analysis, the TwoGener approach permits inferences about landscape-level gene movements.
Sample selection via angular distance in the space of the arguments of an artificial neural network
NASA Astrophysics Data System (ADS)
Fernández Jaramillo, J. M.; Mayerle, R.
2018-05-01
In the construction of an artificial neural network (ANN), a proper split of the available samples plays a major role in the training process. This selection of subsets for training, testing and validation affects the generalization ability of the neural network. The number of samples also has an impact on the time required for the design of the ANN and for training. This paper introduces an efficient and simple method for reducing the set of samples used for training a neural network. The method reduces the time required to calculate the network coefficients, while keeping the diversity and avoiding overtraining of the ANN due to the presence of similar samples. The proposed method is based on the calculation of the angle between two vectors, each one representing one input of the neural network. When the angle formed between samples is smaller than a defined threshold, only one input is accepted for the training. The accepted inputs are scattered throughout the sample space. Tidal records are used to demonstrate the proposed method. The results of a cross-validation show that with few inputs the quality of the outputs is poor and depends on the selection of the first sample, but as the number of inputs increases the accuracy improves and the differences among scenarios with different starting samples are substantially reduced. A comparison with the K-means clustering algorithm shows that, for this application, the proposed method produces a more accurate network with a smaller number of samples.
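The selection rule described above can be written directly: a candidate input vector is accepted for training only if the angle it forms with every already-accepted vector exceeds a threshold, so near-duplicate inputs are discarded. A small sketch follows; the threshold and the candidate vectors are placeholders, not the tidal data used in the paper.

    import numpy as np

    def angle(u, v):
        """Angle in degrees between two input vectors."""
        c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

    def select_by_angle(samples, threshold_deg=5.0):
        """Keep a sample only if its angle to all previously accepted samples
        exceeds the threshold, discarding near-duplicates."""
        accepted = []
        for s in samples:
            if all(angle(s, a) > threshold_deg for a in accepted):
                accepted.append(s)
        return np.array(accepted)

    rng = np.random.default_rng(6)
    candidates = rng.normal(size=(500, 24))    # placeholder ANN input vectors (e.g. tidal windows)
    training_set = select_by_angle(candidates, threshold_deg=10.0)
    print("kept", len(training_set), "of", len(candidates), "candidate samples")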
[Optimization of cluster analysis based on drug resistance profiles of MRSA isolates].
Tani, Hiroya; Kishi, Takahiko; Gotoh, Minehiro; Yamagishi, Yuka; Mikamo, Hiroshige
2015-12-01
We examined 402 methicillin-resistant Staphylococcus aureus (MRSA) strains isolated from clinical specimens in our hospital between November 19, 2010 and December 27, 2011 to evaluate the similarity between cluster analysis of drug susceptibility tests and pulsed-field gel electrophoresis (PFGE). The 402 strains tested were classified into 27 PFGE patterns (151 subtypes). Cluster analyses of drug susceptibility tests with a cut-off distance yielding a similar classification capability showed favorable results. When the MIC method was used, in which minimum inhibitory concentration (MIC) values are used directly, the level of agreement with PFGE was 74.2% when 15 drugs were tested; the Unweighted Pair Group Method with Arithmetic mean (UPGMA) was effective when the cut-off distance was 16. Using the SIR method, in which susceptible (S), intermediate (I), and resistant (R) were coded as 0, 2, and 3, respectively, according to the Clinical and Laboratory Standards Institute (CLSI) criteria, the level of agreement with PFGE was 75.9% when the number of drugs tested was 17, the method used for clustering was the UPGMA, and the cut-off distance was 3.6. In addition, to assess the reproducibility of the results, 10 strains were randomly sampled from the overall test and subjected to cluster analysis; this was repeated 100 times under the same conditions. The results indicated good reproducibility, with the level of agreement with PFGE showing a mean of 82.0%, standard deviation of 12.1%, and mode of 90.0% for the MIC method and a mean of 80.0%, standard deviation of 13.4%, and mode of 90.0% for the SIR method. In summary, cluster analysis of drug susceptibility tests is useful for the epidemiological analysis of MRSA.
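The susceptibility-based clustering above can be reproduced in outline with standard hierarchical clustering: encode each isolate as a vector (MIC values used directly, or S/I/R coded as 0/2/3), build a UPGMA (average-linkage) tree, and cut it at the chosen distance. The sketch below uses scipy with made-up profile vectors and an arbitrary cut-off; it is not calibrated to the study's 15- or 17-drug panels.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import pdist

    # Placeholder profiles: rows = MRSA isolates, columns = drugs,
    # values = log2(MIC) or SIR codes (0 = S, 2 = I, 3 = R).
    profiles = np.array([
        [0, 0, 2, 3, 0],
        [0, 0, 2, 3, 0],
        [3, 2, 0, 0, 3],
        [3, 3, 0, 0, 3],
        [0, 2, 2, 3, 0],
    ])

    distances = pdist(profiles, metric="euclidean")         # pairwise isolate distances
    tree = linkage(distances, method="average")             # UPGMA / average linkage
    clusters = fcluster(tree, t=3.6, criterion="distance")  # cut-off distance (placeholder value)
    print("cluster assignments:", clusters)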
A new method to quantify liner deformation within a prosthetic socket for below knee amputees.
Lenz, Amy L; Johnson, Katie A; Bush, Tamara Reid
2018-06-06
Many amputees who wear a leg prosthesis develop significant skin wounds on their residual limb. The exact cause of these wounds is unclear as little work has studied the interface between the prosthetic device and user. Our research objective was to develop a quantitative method for assessing displacement patterns of the gel liner during walking for patients with transtibial amputation. Using a reflective marker system and a custom clear socket, evaluations were conducted with a clear transparent test socket mounted over a plaster limb model and a deformable limb model. Distances between markers placed on the limb were measured with a digital caliper and then compared with data from the motion capture system. Additionally, the rigid plaster set-up was moved in the capture volume to simulate walking and evaluate if inter-marker distances changed in comparison to static data. Dynamic displacement trials were then collected to measure changes in inter-marker distance due to vertical elongation of the gel liner. Static and dynamic inter-marker distances within day and across days confirmed the ability to accurately capture displacements using this new approach. These results encourage this novel method to be applied to a sample of amputee patients during walking to assess displacements and the distribution of the liner deformation within the socket. The ability to capture changes in deformation of the gel liner will provide new data that will enable clinicians and researchers to improve design and fit of the prosthesis so the incidence of pressure ulcers can be reduced. Copyright © 2018 Elsevier Ltd. All rights reserved.
Wolgin, Michael; Grundmann, Markus J; Tchorz, Jörg P; Frank, Wilhelm; Kielbassa, Andrej M
2017-09-01
The present study investigated the accuracy of root canal preparation with regard to the integrity of the apical constriction (AC) using two different working length determination approaches: (1) the electronic method of working length determination (EWLD), and (2) the radiologic "gold standard" method (GS). Simulation models were constructed by arranging extracted human teeth by means of silicon bolstered gingiva masks, along with a conductive medium (alginate). Electronic working length determination (group 1; EWLD) and radiologic plus initial electronic working length determination for posterior comparability (group 2; GS) preceded manual root canal preparation of teeth in both groups. Master cones were inserted according to working lengths obtained from the group specific method. Subsequently, root apices (n=36) were longitudinally sectioned using a diamond-coated bur. The distance between the achieved apical endpoint of the endodontic preparation and the apical constriction (AC) was measured using digital photography. Then, distances between radiologically identified apical endpoints and AC (GS-AC) were compared with the corresponding distances EWLD-AC. Moreover, the postoperative status of the AC was examined with regard to both preparation approaches. Differences between distances GS-AC and EWLD-AC were not statistically significant (p >0.401) (Mann-Whitney-U). Among EWLD samples, 83% of the master cones exhibiting tugback at final insertion terminated close to the apical constriction (±0.5 mm), and no impairment of the minor diameter's integrity was observed. The sole use of EWLD allowed for a high accuracy of measurements and granted precise preparation of the apical regions. Copyright © 2017 Elsevier Ltd. All rights reserved.
Determination of Spatial Chromium Contamination of the Environment around Industrial Zones.
Homa, Dereje; Haile, Ermias; Washe, Alemayehu P
2016-01-01
This study was conducted to determine the spatial levels of chromium contamination of water, agricultural soil, and vegetables in the leather tanning industrial areas using spectrophotometric methods. The results showed elevated accumulation of total Cr, ranging from 10.85 ± 0.885 mg/L to 39.696 ± 0.326 mg/L in water, from 16.225 ± 0.12 mg/Kg to 1581.667 ± 0.122 mg/Kg in agricultural soil, and from 1.0758 ± 0.05348 mg/Kg to 11.75 ± 0.206 mg/Kg in vegetable samples. The highest levels of chromium(VI) found in the speciation study were 2.23 ± 0.032 mg/Kg and 0.322 ± 0.07 mg/L in soil and water samples, respectively, which decreased with distance from the tannery. Among the vegetables, the highest load of Cr(VI) was detected in onion root (0.048 ± 0.065 mg/Kg) and the lowest (0.004 ± 0.007 mg/Kg) in the fruit of green pepper. The detected levels of Cr in all of the tested samples were above the WHO permissible limits. The variations of the levels of Cr(III) and Cr(VI) contamination of the environment with distance from the tannery were statistically significant (p = 0.05). Similarly, a significant difference in the levels of Cr among the tested vegetables was recorded.
Comparison of daily and weekly precipitation sampling efficiencies using automatic collectors
Schroder, L.J.; Linthurst, R.A.; Ellson, J.E.; Vozzo, S.F.
1985-01-01
Precipitation samples were collected for approximately 90 daily and 50 weekly sampling periods at Finley Farm, near Raleigh, North Carolina from August 1981 through October 1982. Ten wet-deposition samplers (AEROCHEM METRICS MODEL 301) were used; 4 samplers were operated for daily sampling, and 6 samplers were operated for weekly-sampling periods. This design was used to determine if: (1) collection efficiencies of precipitation are affected by small distances between the Universal (Belfort) precipitation gage and collector; (2) measurable evaporation loss occurs and (3) pH and specific conductance of precipitation vary significantly within small distances. Average collection efficiencies were 97% for weekly sampling periods compared with the rain gage. Collection efficiencies were examined by seasons and precipitation volume. Neither factor significantly affected collection efficiency. No evaporation loss was found by comparing daily sampling to weekly sampling at the collection site, which was classified as a subtropical climate. Correlation coefficients for pH and specific conductance of daily samples and weekly samples ranged from 0.83 to 0.99.
Improved backward ray tracing with stochastic sampling
NASA Astrophysics Data System (ADS)
Ryu, Seung Taek; Yoon, Kyung-Hyun
1999-03-01
This paper presents a new technique that enhances diffuse interreflection with the concepts of backward ray tracing. In this research, we modeled the diffuse rays under the following conditions. First, because reflection from diffuse surfaces occurs in all directions, it is impossible to trace all of the reflected rays. We therefore confined the diffuse rays by sampling a spherical angle of the reflected rays around the normal vector. Second, the distance traveled by energy reflected from a diffuse surface differs according to the object's properties and is comparatively short. Considering that rays created on diffuse surfaces affect a relatively small area, it is very inefficient to trace all of the sampled diffuse rays. Therefore, we set a fixed critical distance, and all rays beyond this distance are ignored. As a result, the improved backward ray tracing can model illumination effects such as color bleeding, so it can replace the radiosity algorithm in a limited environment.
Aliakbarpour, Hamaseh; Rawi, Che Salmah Md
2011-08-01
Populations of several thrips species were estimated using yellow sticky traps in an orchard planted with mango, Mangifera indica L. during the dry and wet seasons beginning in late 2008-2009 on Penang Island, Malaysia. To determine the efficacy of using sticky traps to monitor thrips populations, we compared weekly population estimates on yellow sticky traps with thrips population sizes that were determined (using a CO(2) method) directly from mango panicles. Dispersal distance and direction of thrips movement out of the orchard also were studied using yellow sticky traps placed at three distances from the edge of the orchard in four cardinal directions facing into the orchard. The number of thrips associated with the mango panicles was found to be correlated with the number of thrips collected using the sticky trap method. The number of thrips captured by the traps decreased with increasing distance from the mango orchard in all directions. Density of thrips leaving the orchard was related to the surrounding vegetation. Our results demonstrate that sticky traps have the potential to satisfactorily estimate thrips populations in mango orchards and thus they can be effectively employed as a useful tactic for sampling thrips.
NASA Technical Reports Server (NTRS)
Colver, Gerald M.; Greene, Nathanael; Shoemaker, David; Xu, Hua
2003-01-01
The Electric Particulate Suspension (EPS) is a combustion ignition system being developed at Iowa State University for evaluating quenching effects of powders in microgravity (quenching distance, ignition energy, flammability limits). Because of the high cloud uniformity possible and its simplicity, the EPS method has potential for "benchmark" design of quenching flames that would provide NASA and the scientific community with a new fire standard. Microgravity is expected to increase suspension uniformity even further and extend combustion testing to higher concentrations (rich fuel limit) than is possible at normal gravity. Two new combustion parameters are being investigated with this new method: (1) the particle velocity distribution and (2) the particle-oxidant slip velocity. Both walls and (inert) particles can be tested as quenching media. The EPS method supports combustion modeling by providing accurate measurement of flame-quenching distance as a parameter in laminar flame theory, as it closely relates to characteristic flame thickness and flame structure. Because of its design simplicity, EPS is suitable for testing on the International Space Station (ISS). Laser scans showing stratification effects at 1-g have been studied for different materials: aluminum, glass, and copper. PTV/PIV and a leak hole sampling rig give the particle velocity distribution, with particle slip velocity evaluated using LDA. Sample quenching and ignition energy curves are given for aluminum powder. Testing is planned for the KC-135 and NASA's two-second drop tower. Only 1-g ground-based data have been reported to date.
Rough Way for Academics: Distance Education
ERIC Educational Resources Information Center
Gursul, Fatih
2010-01-01
This study aims to compare the academics' perceptions about face to face and distance education, beside finding out the contributions of distance education to them, difficulties they experience in synchronous and asynchronous distance education environments and suggestions for possible solutions of the existing problems. The sample consists of 52…
Influence of scanning parameters on the estimation accuracy of control points of B-spline surfaces
NASA Astrophysics Data System (ADS)
Aichinger, Julia; Schwieger, Volker
2018-04-01
This contribution deals with the influence of scanning parameters such as scanning distance, incidence angle, surface quality and sampling width on the average estimated standard deviations of the positions of control points of B-spline surfaces, which are used to model surfaces from terrestrial laser scanning data. The influence of the scanning parameters is analyzed by Monte Carlo based variance analysis. The samples were generated for non-correlated and correlated data, using Latin hypercube and replicated Latin hypercube sampling algorithms, respectively. Finally, the investigations show that the most influential scanning parameter is the distance from the laser scanner to the object. The angle of incidence shows a significant effect for distances of 50 m and longer, while the surface quality contributes only negligible effects. The sampling width has no influence. Optimal scanning parameters correspond to the smallest possible object distance, an angle of incidence close to 0°, and the highest surface quality. The consideration of correlations improves the estimation accuracy and underlines the importance of complete stochastic models for TLS measurements.
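Latin hypercube sampling, the basis of the variance analysis above, stratifies each input parameter into equally probable intervals, draws exactly one value per interval, and pairs the columns by random permutation. A minimal implementation for uncorrelated inputs follows; the parameter ranges are placeholders, and replicated LHS for correlated data is not shown.

    import numpy as np

    def latin_hypercube(n_samples, bounds, rng=None):
        """Draw an n_samples x n_dims Latin hypercube sample.

        `bounds` is a list of (low, high) tuples, one per scanning parameter."""
        rng = rng or np.random.default_rng()
        n_dims = len(bounds)
        sample = np.empty((n_samples, n_dims))
        for j, (lo, hi) in enumerate(bounds):
            # One uniform draw inside each of n_samples equal-probability strata...
            u = (np.arange(n_samples) + rng.uniform(size=n_samples)) / n_samples
            # ...in a random order, so the columns are paired independently.
            sample[:, j] = lo + (hi - lo) * rng.permutation(u)
        return sample

    # Placeholder ranges: scanning distance [m], incidence angle [deg], sampling width [mm]
    bounds = [(5.0, 100.0), (0.0, 60.0), (1.0, 10.0)]
    print(latin_hypercube(5, bounds, np.random.default_rng(7)))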
Extensive dispersal of Roanoke logperch (Percina rex) inferred from genetic marker data
Roberts, James H.; Angermeier, Paul; Hallerman, Eric M.
2016-01-01
The dispersal ecology of most stream fishes is poorly characterised, complicating conservation efforts for these species. We used microsatellite DNA marker data to characterise dispersal patterns and effective population size (Ne) for a population of Roanoke logperch Percina rex, an endangered darter (Percidae). Juveniles and candidate parents were sampled for 2 years at sites throughout the Roanoke River watershed. Dispersal was inferred via genetic assignment tests (ATs), pedigree reconstruction (PR) and estimation of lifetime dispersal distance under a genetic isolation-by-distance model. Estimates of Ne varied from 105 to 1218 individuals, depending on the estimation method. Based on PR, polygamy was frequent in parents of both sexes, with individuals spawning with an average of 2.4 mates. The sample contained 61 half-sibling pairs, but only one parent–offspring pair and no full-sib pairs, which limited our ability to discriminate natal dispersal of juveniles from breeding dispersal of their parents between spawning events. Nonetheless, all methods indicated extensive dispersal. The AT indicated unrestricted dispersal among sites ≤15 km apart, while siblings inferred by the PR were captured an average of 14 km and up to 55 km apart. Model-based estimates of median lifetime dispersal distance (6–24 km, depending on assumptions) bracketed AT and PR estimates, indicating that widely dispersed individuals do, on average, contribute to gene flow. Extensive dispersal of P. rex suggests that darters and other small benthic stream fishes may be unexpectedly mobile. Monitoring and management activities for such populations should encompass entire watersheds to fully capture population dynamics.
Feng, Shangguo; Jiang, Yan; Wang, Shang; Jiang, Mengying; Chen, Zhe; Ying, Qicai; Wang, Huizhong
2015-01-01
The over-collection and habitat destruction of natural Dendrobium populations for their commercial medicinal value has led to these plants being under severe threat of extinction. In addition, many Dendrobium plants are similarly shaped and easily confused in the absence of flowering stages. In the present study, we examined the application of the ITS2 region in barcoding and phylogenetic analyses of Dendrobium species (Orchidaceae). For barcoding, ITS2 regions of 43 samples in Dendrobium were amplified. In combination with sequences from GenBank, the sequences were aligned using Clustal W and genetic distances were computed using MEGA V5.1. The success rate of PCR amplification and sequencing was 100%. There was a significant divergence between the inter- and intra-specific genetic distances of ITS2 regions, while the presence of a barcoding gap was obvious. Based on the BLAST1, nearest distance and TaxonGAP methods, our results showed that the ITS2 regions could successfully identify the species of most Dendrobium samples examined. Second, we used ITS2 as a DNA marker to infer phylogenetic relationships of 64 Dendrobium species. The results showed that cluster analysis using the ITS2 region mainly supported the relationship between the species of Dendrobium established by traditional morphological methods and many previous molecular analyses. To sum up, the ITS2 region can not only be used as an efficient barcode to identify Dendrobium species, but also has the potential to contribute to the phylogenetic analysis of the genus Dendrobium. PMID:26378526
Pairing call-response surveys and distance sampling for a mammalian carnivore
Hansen, Sara J. K.; Frair, Jacqueline L.; Underwood, Harold B.; Gibbs, James P.
2015-01-01
Density estimates accounting for differential animal detectability are difficult to acquire for wide-ranging and elusive species such as mammalian carnivores. Pairing distance sampling with call-response surveys may provide an efficient means of tracking changes in populations of coyotes (Canis latrans), a species of particular interest in the eastern United States. Blind field trials in rural New York State indicated 119-m linear error for triangulated coyote calls, and a 1.8-km distance threshold for call detectability, which was sufficient to estimate a detection function with precision using distance sampling. We conducted statewide road-based surveys with sampling locations spaced ≥6 km apart from June to August 2010. Each detected call (be it a single or group) counted as a single object, representing 1 territorial pair, because of uncertainty in the number of vocalizing animals. From 524 survey points and 75 detections, we estimated the probability of detecting a calling coyote to be 0.17 ± 0.02 SE, yielding a detection-corrected index of 0.75 pairs/10 km2 (95% CI: 0.52–1.1, 18.5% CV) for a minimum of 8,133 pairs across rural New York State. Importantly, we consider this an index rather than true estimate of abundance given the unknown probability of coyote availability for detection during our surveys. Even so, pairing distance sampling with call-response surveys provided a novel, efficient, and noninvasive means of monitoring populations of wide-ranging and elusive, albeit reliably vocal, mammalian carnivores. Our approach offers an effective new means of tracking species like coyotes, one that is readily extendable to other species and geographic extents, provided key assumptions of distance sampling are met.
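The arithmetic behind the abstract's index can be sketched as a point-transect calculation: with k survey points, n detections within a truncation distance w, and an estimated detection probability p within that radius, the detection-corrected index is n / (k * pi * w^2 * p). Plugging in the values quoted in the abstract reproduces a figure of the same order as the reported 0.75 pairs/10 km^2; the exact published value likely reflects details of the fitted detection function not shown here.

    import math

    k = 524          # survey points
    n = 75           # coyote call detections (each treated as one territorial pair)
    w = 1.8          # truncation distance, km
    p_hat = 0.17     # estimated detection probability within w

    surveyed_area = k * math.pi * w ** 2            # km^2 actually covered by the points
    density = n / (surveyed_area * p_hat)           # pairs per km^2, corrected for detectability
    print("index:", round(density * 10, 2), "pairs per 10 km^2")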
Inventory implications of using sampling variances in estimation of growth model coefficients
Albert R. Stage; William R. Wykoff
2000-01-01
Variables based on stand densities or stocking have sampling errors that depend on the relation of tree size to plot size and on the spatial structure of the population. Ignoring the sampling errors of such variables, which include most measures of competition used in both distance-dependent and distance-independent growth models, can bias the predictions obtained from...
Spigt, Mark; Seme, Assefa; Amogne, Ayanaw; Skrøvseth, Stein; Desta, Selamawit; Radloff, Scott; GeertJan, Dinant
2017-01-01
Background There is limited evidence of the linkage between contraceptive use, the range of methods available and level of contraceptive stocks at health facilities and distance to facility in developing countries. The present analysis aims at examining the influence of contraceptive method availability and distance to the nearby facilities on modern contraceptive utilization among married women in rural areas in Ethiopia using geo-referenced data. Methods We used data from the first round of surveys of the Performance Monitoring & Accountability 2020 project in Ethiopia (PMA2020/Ethiopia-2014). The survey was conducted in a sample of 200 enumeration areas (EAs) where for each EA, 35 households and up to 3 public or private health service delivery points (SDPs) were selected. The main outcome variable was individual use of a contraceptive method for married women in rural Ethiopia. Correlates of interest include distance to nearby health facilities, range of contraceptives available in facilities, household wealth index, and the woman's educational status, age, and parity and whether she recently visited a health facility. This analysis primarily focuses on stock provision at public SDPs. Results Overall, complete information was collected from 1763 married rural women ages 15–49 years and 198 SDPs in rural areas (97.1% public). Most rural women (93.9%) live within 5 kilometers of their nearest health post, while much lower proportions live within the same distance of the nearest health center (52.2%) or hospital (0.8%). The main sources of modern contraceptive methods for married rural women were health posts (48.8%) and health centers (39.0%). The mean number of types of contraceptive methods offered by hospitals, health centers and health posts was 6.2, 5.4 and 3.7, respectively. Modern contraceptive use (mCPR) among rural married women was 27.3% (95% CI: 25.3, 29.5). The percentage of rural married women who use modern contraceptives decreased as distance from the nearest SDP increased: 41.2%, 27.5%, 22.0%, and 22.6% for women living less than 2 kilometers, 2 to 3.9 kilometers, 4 to 5.9 kilometers, and 6 or more kilometers away, respectively (p-value < 0.01). Additionally, women who live close to facilities that offer a wider range of contraceptive methods were significantly more likely to use modern contraceptives. The mCPR ranged from 42.3% among women who live within 2 kilometers of facilities offering 3 or more methods to 22.5% among women living more than 6 kilometers away from the nearest facility with the same number (3 or more methods) available, after adjusting for observed covariates. Conclusions Although the majority of the Ethiopian population lives within a relatively close distance to lower level facilities (health posts), the number and range of methods available (method choice) and proximity are independently associated with contraceptive utilization. By demonstrating the extent to which objective measures of distance (of relatively small magnitude) explain variation in contraceptive use among rural women, the study fills an important planning gap for family planning programs operating in resource limited settings. PMID:29131860
NASA Astrophysics Data System (ADS)
Stefansson, E. S.
2008-12-01
Creosote is a common wood preservative used to treat marine structures, such as docks and bulkheads. Treated dock pilings continually leach polycyclic aromatic hydrocarbons (PAHs) and other creosote compounds into the surrounding water and sediment. Over time, these compounds can accumulate in marine sediments, reaching much greater concentrations than those in seawater. The purpose of this study was to assess the extent of creosote contamination in sediments, at a series of distances from treated pilings. Three pilings were randomly selected from a railroad trestle in Fidalgo Bay, WA and sediment samples were collected at four distances from each: 0 meters, 0.5 meters, 1 meter, and 2 meters. Samples were used to conduct two bioassays: an amphipod bioassay (Rhepoxynius abronius) and a sand dollar embryo bioassay. Grain size and PAH content (using a fluorometric method) were also measured. Five samples in the amphipod bioassay showed significantly lower effective survival than the reference sediment. These consisted of samples closest to the piling at 0 and 0.5 meters. One 0 m sample in the sand dollar embryo bioassay also showed a significantly lower percentage of normal embryos than the reference sediment. Overall, results strongly suggest that creosote-contaminated sediments, particularly those closest to treated pilings, can negatively affect both amphipods and echinoderm embryos. Although chemical data were somewhat ambiguous, 0 m samples had the highest levels of PAHs, which corresponded to the lowest average survival in both bioassays. Relatively high levels of PAHs were found as far as 2 meters away from pilings. Therefore, we cannot say how far chemical contamination can spread from creosote-treated pilings, and at what distance this contamination can still affect marine organisms. These results, as well as future research, are essential to the success of proposed piling removal projects. In addition to creosote-treated pilings, contaminated sediments must be removed and disposed of properly, in order to make future piling removals as effective and beneficial to ecosystem health as possible.
2010-01-01
Background Breeding programs are usually reluctant to evaluate and use germplasm accessions other than the elite materials belonging to their advanced populations. The concept of core collections has been proposed to facilitate the access of potential users to samples of small sizes, representative of the genetic variability contained within the gene pool of a specific crop. The eventual large size of a core collection perpetuates the problem it was originally proposed to solve. The present study suggests that, in addition to the classic core collection concept, thematic core collections should be also developed for a specific crop, composed of a limited number of accessions, with a manageable size. Results The thematic core collection obtained meets the minimum requirements for a core sample - maintenance of at least 80% of the allelic richness of the thematic collection, with, approximately, 15% of its size. The method was compared with other methodologies based on the M strategy, and also with a core collection generated by random sampling. Higher proportions of retained alleles (in a core collection of equal size) or similar proportions of retained alleles (in a core collection of smaller size) were detected in the two methods based on the M strategy compared to the proposed methodology. Core sub-collections constructed by different methods were compared regarding the increase or maintenance of phenotypic diversity. No change on phenotypic diversity was detected by measuring the trait "Weight of 100 Seeds", for the tested sampling methods. Effects on linkage disequilibrium between unlinked microsatellite loci, due to sampling, are discussed. Conclusions Building of a thematic core collection was here defined by prior selection of accessions which are diverse for the trait of interest, and then by pairwise genetic distances, estimated by DNA polymorphism analysis at molecular marker loci. The resulting thematic core collection potentially reflects the maximum allele richness with the smallest sample size from a larger thematic collection. As an example, we used the development of a thematic core collection for drought tolerance in rice. It is expected that such thematic collections increase the use of germplasm by breeding programs and facilitate the study of the traits under consideration. The definition of a core collection to study drought resistance is a valuable contribution towards the understanding of the genetic control and the physiological mechanisms involved in water use efficiency in plants. PMID:20576152
Establishing Normative Reference Values for Standing Broad Jump among Hungarian Youth
ERIC Educational Resources Information Center
Saint-Maurice, Pedro F.; Laurson, Kelly R.; Kaj, Mónika; Csányi, Tamás
2015-01-01
Purpose: The purpose of this study was to examine age and sex trends in anaerobic power assessed by a standing broad jump and to determine norm-referenced values for youth in Hungary. Method: A sample of 2,427 Hungarian youth (1,360 boys and 1,067 girls) completed the standing broad jump twice, and the highest distance score was recorded. Quantile…
ERIC Educational Resources Information Center
Pendleton, Sara M.; Stanton, Bonita; Cottrell, Lesley A.; Marshall, Sharon; Pack, Robert; Burns, James; Gibson, Catherine; Wu, Ying; Li, Xiaoming; Cole, Matthew
2007-01-01
Purpose: To assess and compare youth satisfaction with two delivery approaches to a HIV/STD risk reduction intervention targeting adolescents: an on-site, face-to-face (FTF) approach versus a long distance interactive televised (DIT) approach. Methods: A convenience sample of 571 rural adolescents ages 12-16 years who participated in an HIV/STD…
NASA Astrophysics Data System (ADS)
Chen, Xiaodian; Wang, Shu; Deng, Licai; de Grijs, Richard
2018-06-01
Distances and extinction values are usually degenerate. To refine the distance to the general Galactic Center region, a carefully determined extinction law (taking into account the prevailing systematic errors) is urgently needed. We collected data for 55 classical Cepheids projected toward the Galactic Center region to derive the near- to mid-infrared extinction law using three different approaches. The relative extinction values obtained are A_J/A_Ks = 3.005, A_H/A_Ks = 1.717, A_[3.6]/A_Ks = 0.478, A_[4.5]/A_Ks = 0.341, A_[5.8]/A_Ks = 0.234, A_[8.0]/A_Ks = 0.321, A_W1/A_Ks = 0.506, and A_W2/A_Ks = 0.340. We also calculated the corresponding systematic errors. Compared with previous work, we report an extremely low and steep mid-infrared extinction law. Using a seven-passband “optimal distance” method, we improve the mean distance precision to our sample of 55 Cepheids to 4%. Based on four confirmed Galactic Center Cepheids, a solar Galactocentric distance of R_0 = 8.10 ± 0.19 ± 0.22 kpc is determined, featuring an uncertainty that is close to the limiting distance accuracy (2.8%) for Galactic Center Cepheids.
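As a rough illustration of how such band-ratio extinction values enter a multi-band distance fit, the Python sketch below solves for the distance modulus and the Ks-band extinction of a single Cepheid by linear least squares, i.e. fitting m - M = mu + (A_band/A_Ks) * A_Ks across passbands. The magnitudes are invented placeholders and the function name is ours; this is only the generic idea, not the authors' "optimal distance" code.

```python
import numpy as np

# A_band / A_Ks ratios reported in the abstract (J, H, [3.6], [4.5], [5.8], [8.0], W1, W2)
ratios = np.array([3.005, 1.717, 0.478, 0.341, 0.234, 0.321, 0.506, 0.340])

def optimal_distance(m_app, M_abs, r=ratios):
    """Fit m - M = mu + r * A_Ks by linear least squares.

    m_app, M_abs: apparent and absolute magnitudes in the same band order as r
    (hypothetical inputs; M_abs would normally come from period-luminosity relations).
    Returns the distance in parsec and the best-fit Ks-band extinction."""
    y = np.asarray(m_app) - np.asarray(M_abs)        # reddened distance moduli
    X = np.column_stack([np.ones_like(r), r])        # unknowns: [mu, A_Ks]
    (mu, a_ks), *_ = np.linalg.lstsq(X, y, rcond=None)
    d_pc = 10 ** (mu / 5.0 + 1.0)                    # mu = 5 log10(d / 10 pc)
    return d_pc, a_ks

# toy example with made-up magnitudes for one Cepheid
m = [12.8, 11.9, 11.1, 11.05, 11.0, 11.02, 11.1, 11.04]
M = [-5.0, -5.2, -5.4, -5.42, -5.45, -5.44, -5.4, -5.43]
print(optimal_distance(m, M))
```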
Generating virtual training samples for sparse representation of face images and face recognition
NASA Astrophysics Data System (ADS)
Du, Yong; Wang, Yu
2016-03-01
There are many challenges in face recognition. In real-world scenes, images of the same face vary with changing illuminations, different expressions and poses, multiform ornaments, or even altered mental status. Limited available training samples cannot convey these possible changes in the training phase sufficiently, and this has become one of the restrictions to improve the face recognition accuracy. In this article, we view the multiplication of two images of the face as a virtual face image to expand the training set and devise a representation-based method to perform face recognition. The generated virtual samples really reflect some possible appearance and pose variations of the face. By multiplying a training sample with another sample from the same subject, we can strengthen the facial contour feature and greatly suppress the noise. Thus, more human essential information is retained. Also, uncertainty of the training data is simultaneously reduced with the increase of the training samples, which is beneficial for the training phase. The devised representation-based classifier uses both the original and new generated samples to perform the classification. In the classification phase, we first determine K nearest training samples for the current test sample by calculating the Euclidean distances between the test sample and training samples. Then, a linear combination of these selected training samples is used to represent the test sample, and the representation result is used to classify the test sample. The experimental results show that the proposed method outperforms some state-of-the-art face recognition methods.
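A minimal Python sketch of the two ingredients described above, using hypothetical flattened image vectors (function names are ours, not the authors'): virtual samples formed by pixel-wise products of same-subject images, and a representation-based classifier that selects K nearest neighbors by Euclidean distance and scores classes by their reconstruction residual.

```python
import numpy as np

def augment_with_products(X, y):
    """Create virtual samples by pixel-wise multiplying pairs of images from the
    same subject (a sketch of the idea in the abstract; images are assumed to be
    flattened vectors scaled to [0, 1])."""
    Xv, yv = [], []
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        for i in range(len(idx)):
            for j in range(i + 1, len(idx)):
                Xv.append(X[idx[i]] * X[idx[j]])
                yv.append(c)
    if Xv:
        return np.vstack([X, Xv]), np.concatenate([y, yv])
    return X, y

def representation_classify(X, y, x_test, K=10):
    """Pick the K nearest training samples (Euclidean), represent the test sample
    as their linear combination, and assign the class whose selected samples give
    the smallest reconstruction residual."""
    d = np.linalg.norm(X - x_test, axis=1)
    nn = np.argsort(d)[:K]
    A, labels = X[nn].T, y[nn]                       # columns are the selected samples
    coef, *_ = np.linalg.lstsq(A, x_test, rcond=None)
    best, best_res = None, np.inf
    for c in np.unique(labels):
        mask = labels == c
        res = np.linalg.norm(x_test - A[:, mask] @ coef[mask])
        if res < best_res:
            best, best_res = c, res
    return best
```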
Breeding population density and habitat use of Swainson's warblers in a Georgia floodplain forest
Wright, E.A.
2002-01-01
I examined density and habitat use of a Swainson's Warbler (Limnothlypis swainsonii) breeding population in Georgia. This songbird species is inadequately monitored, and may be declining due to anthropogenic alteration of floodplain forest breeding habitats. I used distance sampling methods to estimate density, finding 9.4 singing males/ha (CV = 0.298). Individuals were encountered too infrequently to produce a low-variance estimate, and distance sampling thus may be impracticable for monitoring this relatively rare species. I developed a set of multivariate habitat models using binary logistic regression techniques, based on measurement of 22 variables in 56 plots occupied by Swainson's Warblers and 110 unoccupied plots. Occupied areas were characterized by high stem density of cane (Arundinaria gigantea) and other shrub layer vegetation, and presence of abundant and accessible leaf litter. I recommend two habitat models, which correctly classified 87-89% of plots in cross-validation runs, for potential use in habitat assessment at other locations.
NASA Astrophysics Data System (ADS)
Heidaryan, Narges; Eshghi, Hosein
2017-09-01
Large-scale silicon oxide nanowires (SiOx NWs) with a diameter of about 250 nm were synthesized on silicon wafers by thermal evaporation of silicon monoxide (SiO) powder. In order to investigate the role of distance on the physical properties of SiOx NWs, Si substrates were positioned 5 cm and 10 cm from the boat position set at 1150 °C (hereafter samples S1 and S2, respectively). The local temperatures of the samples were 1100 °C and 1050 °C, respectively. The SEM images and EDS spectra showed interweaved networks of SiOx NWs with x = 0.62 and 0.65 in these layers. The XRD patterns showed that S1 has a polycrystalline (cristobalite) structure, while S2 is amorphous. The PL spectra showed an intense blue peak at 468 nm in S1 and a violet peak at 427 nm in S2, which could be related to differences in the crystallite structures and oxygen vacancies in these samples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schaefer, Bastian; Goedecker, Stefan, E-mail: stefan.goedecker@unibas.ch
2016-07-21
An analysis of the network defined by the potential energy minima of multi-atomic systems and their connectivity via reaction pathways that go through transition states allows us to understand important characteristics like thermodynamic, dynamic, and structural properties. Unfortunately computing the transition states and reaction pathways in addition to the significant energetically low-lying local minima is a computationally demanding task. We here introduce a computationally efficient method that is based on a combination of the minima hopping global optimization method and the insight that uphill barriers tend to increase with increasing structural distances of the educt and product states. This method allows us to replace the exact connectivity information and transition state energies with alternative and approximate concepts. Without adding any significant additional cost to the minima hopping global optimization approach, this method allows us to generate an approximate network of the minima, their connectivity, and a rough measure for the energy needed for their interconversion. This can be used to obtain a first qualitative idea on important physical and chemical properties by means of a disconnectivity graph analysis. Besides the physical insight obtained by such an analysis, the gained knowledge can be used to make a decision if it is worthwhile or not to invest computational resources for an exact computation of the transition states and the reaction pathways. Furthermore it is demonstrated that the here presented method can be used for finding physically reasonable interconversion pathways that are promising input pathways for methods like transition path sampling or discrete path sampling.
Evaluation of subset matching methods and forms of covariate balance.
de Los Angeles Resa, María; Zubizarreta, José R
2016-11-30
This paper conducts a Monte Carlo simulation study to evaluate the performance of multivariate matching methods that select a subset of treatment and control observations. The matching methods studied are the widely used nearest neighbor matching with propensity score calipers and the more recently proposed methods, optimal matching of an optimally chosen subset and optimal cardinality matching. The main findings are: (i) covariate balance, as measured by differences in means, variance ratios, Kolmogorov-Smirnov distances, and cross-match test statistics, is better with cardinality matching because by construction it satisfies balance requirements; (ii) for given levels of covariate balance, the matched samples are larger with cardinality matching than with the other methods; (iii) in terms of covariate distances, optimal subset matching performs best; (iv) treatment effect estimates from cardinality matching have lower root-mean-square errors, provided strong requirements for balance, specifically, fine balance, or strength-k balance, plus close mean balance. In standard practice, a matched sample is considered to be balanced if the absolute differences in means of the covariates across treatment groups are smaller than 0.1 standard deviations. However, the simulation results suggest that stronger forms of balance should be pursued in order to remove systematic biases due to observed covariates when a difference in means treatment effect estimator is used. In particular, if the true outcome model is additive, then marginal distributions should be balanced, and if the true outcome model is additive with interactions, then low-dimensional joints should be balanced. Copyright © 2016 John Wiley & Sons, Ltd.
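For readers who want to reproduce the balance diagnostic mentioned above, here is a small Python sketch of absolute standardized mean differences (the conventional 0.1-SD rule); the data are random placeholders standing in for matched treatment and control covariate matrices.

```python
import numpy as np

def standardized_mean_differences(X_treat, X_ctrl):
    """Absolute standardized differences in means for each covariate, using the
    pooled standard deviation; values below 0.1 are the conventional (and, per
    the paper, sometimes insufficient) balance threshold."""
    X_treat, X_ctrl = np.asarray(X_treat, float), np.asarray(X_ctrl, float)
    mean_diff = X_treat.mean(axis=0) - X_ctrl.mean(axis=0)
    pooled_sd = np.sqrt((X_treat.var(axis=0, ddof=1) + X_ctrl.var(axis=0, ddof=1)) / 2.0)
    return np.abs(mean_diff) / pooled_sd

# toy check on random data (hypothetical matched samples)
rng = np.random.default_rng(0)
print(standardized_mean_differences(rng.normal(0.05, 1, (200, 3)),
                                    rng.normal(0.00, 1, (200, 3))))
```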
Alves, E O S; Cerqueira-Silva, C B M; Souza, A M; Santos, C A F; Lima Neto, F P; Corrêa, R X
2012-03-14
We investigated seven distance measures in a set of observations of physicochemical variables of mango (Mangifera indica) submitted to multivariate analyses (distance, projection and grouping). To estimate the distance measurements, five mango progeny (total of 25 genotypes) were analyzed, using six fruit physicochemical descriptors (fruit weight, equatorial diameter, longitudinal diameter, total soluble solids in °Brix, total titratable acidity, and pH). The distance measurements were compared by the Spearman correlation test, projection in two-dimensional space and grouping efficiency. The Spearman correlation coefficients between the seven distance measurements were high and significant (rs ≥ 0.91; P < 0.001), except for those involving Mahalanobis' generalized distance (0.41 ≤ rs ≤ 0.63). Regardless of the origin of the distance matrix, the unweighted pair group method with arithmetic mean (UPGMA) proved to be the most adequate grouping method. The various distance measurements and grouping methods gave different values for distortion (-116.5 ≤ D ≤ 74.5), cophenetic correlation (0.26 ≤ rc ≤ 0.76) and stress (-1.9 ≤ S ≤ 58.9). The choice of distance measurement and analysis method therefore influences the outcome of the multivariate analyses.
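A short Python sketch of this kind of comparison, using SciPy and hypothetical standardized descriptor data: two distance measures are compared by Spearman correlation, and a UPGMA dendrogram is scored by its cophenetic correlation. Variable names and data are illustrative only, not the study's dataset.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, cophenet
from scipy.stats import spearmanr

# hypothetical matrix: 25 genotypes x 6 fruit descriptors (standardized)
rng = np.random.default_rng(1)
X = rng.normal(size=(25, 6))

d_euclid = pdist(X, metric="euclidean")
d_mahal = pdist(X, metric="mahalanobis", VI=np.linalg.inv(np.cov(X, rowvar=False)))

# agreement between the two distance measures
rho, _ = spearmanr(d_euclid, d_mahal)

# UPGMA grouping and cophenetic correlation for one distance matrix
Z = linkage(d_euclid, method="average")      # "average" linkage is UPGMA
r_coph, _ = cophenet(Z, d_euclid)
print(rho, r_coph)
```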
Central stars of planetary nebulae in the Galactic bulge
NASA Astrophysics Data System (ADS)
Hultzsch, P. J. N.; Puls, J.; Méndez, R. H.; Pauldrach, A. W. A.; Kudritzki, R.-P.; Hoffmann, T. L.; McCarthy, J. K.
2007-06-01
Context: Optical high-resolution spectra of five central stars of planetary nebulae (CSPN) in the Galactic bulge have been obtained with Keck/HIRES in order to derive their parameters. Since the distance of the objects is quite well known, such a method has the advantage that stellar luminosities and masses can in principle be determined without relying on theoretical relations between both quantities. Aims: By alternatively combining the results of our spectroscopic investigation with evolutionary tracks, we obtain so-called spectroscopic distances, which can be compared with the known (average) distance of the bulge-CSPN. This offers the possibility of testing the validity of model atmospheres and present-day post-AGB evolution. Methods: We analyze optical H/He profiles of five Galactic bulge CSPN (plus one comparison object) by means of profile fitting based on state-of-the-art non-LTE modeling tools, to constrain their basic atmospheric parameters (Teff, log g, helium abundance and wind strength). Masses and other radius-dependent stellar quantities are obtained both from the known distances and from evolutionary tracks, and the results from both approaches are compared. Results: The major result of the present investigation is that the derived spectroscopic distances depend crucially on the applied reddening law. Assuming either standard reddening or values based on radio-Hβ extinctions, we find a mean distance of 9.0±1.6 kpc and 12.2±2.1 kpc, respectively. An “average extinction law” leads to a distance of 10.7±1.2 kpc, which is still considerably larger than the Galactic center distance of 8 kpc. In all cases, however, we find a remarkable internal agreement of the individual spectroscopic distances of our sample objects, within ±10% to ±15% for the different reddening laws. Conclusions: Due to the uncertain reddening correction, the analysis presented here cannot yet be regarded as a consistency check for our method, and a rigorous test of CSPN evolution theory becomes possible only once this problem has been solved. The data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation. Appendix A is only available in electronic form at http://www.aanda.org
NASA Technical Reports Server (NTRS)
Cremers, D. A.; Wiens, R. C.; Arp, Z. A.; Harris, R. D.; Maurice, S.
2003-01-01
One of the most fundamental pieces of information about any planetary body is the elemental composition of its surface materials. The Viking Martian landers employed XRF (x-ray fluorescence) and the MER rovers are carrying APXS (alpha-proton x-ray spectrometer) instruments upgraded from that used on the Pathfinder rover to supply elemental composition information for soils and rocks to which direct contact is possible. These in-situ analyses require that the lander or rover be in contact with the sample. In addition to in-situ instrumentation, the present generation of rovers carries instruments that operate at stand-off distances. The Mini-TES is an example of a stand-off instrument on the MER rovers. Other examples for future missions include infrared point spectrometers and microscopic imagers that can operate at a distance. The main advantage of such types of analyses is obvious: the sensing element does not need to be in contact or even adjacent to the target sample. This opens up new sensing capabilities. For example, targets that cannot be reached by a rover due to impassable terrain or targets positioned on a cliff face can now be accessed using stand-off analysis. In addition, the duty cycle of stand-off analysis can be much greater than that provided by in-situ measurements because the stand-off analysis probe can be aimed rapidly at different features of interest, eliminating the need for the rover to actually move to the target. Over the past five years we have been developing a stand-off method of elemental analysis based on atomic emission spectroscopy called laser-induced breakdown spectroscopy (LIBS). A laser-produced spark vaporizes and excites the target material, the elements of which emit at characteristic wavelengths. Using this method, material can be analyzed from within a radius of several tens of meters from the instrument platform. A relatively large area can therefore be sampled from a simple lander without requiring a rover or sampling arms. The placement of such an instrument on a rover would allow the sampling of locations distant from the landing site. Here we give a description of the LIBS method and its advantages. We discuss recent work on determining its characteristics for Mars exploration, including accuracy, detection limits, and suitability for determining the presence of water ice and hydrated minerals. We also give a description of prototype instruments we have tested in field settings.
Vafeiadi, Marina; Agramunt, Silvia; Papadopoulou, Eleni; Besselink, Harrie; Mathianaki, Kleopatra; Karakosta, Polyxeni; Spanaki, Ariana; Koutis, Antonis; Chatzi, Leda; Vrijheid, Martine
2012-01-01
Background: Anogenital distance in animals is used as a measure of fetal androgen action. Prenatal exposure to dioxins and dioxin-like compounds in rodents causes reproductive changes in male offspring and decreases anogenital distance. Objective: We assessed whether in utero exposure to dioxins and dioxin-like compounds adversely influences anogenital distance in newborns and young children (median age, 16 months; range, 1–31 months). Methods: We measured anogenital distance among participants of the “Rhea” mother–child cohort study in Crete and the Hospital del Mar (HMAR) cohort in Barcelona. Anogenital distance (AGD; anus to upper penis), anoscrotal distance (ASD; anus to scrotum), and penis width (PW) were measured in 119 newborn and 239 young boys; anoclitoral (ACD; anus to clitoris) and anofourchetal distance (AFD; anus to fourchette) were measured in 118 newborn and 223 young girls. We estimated plasma dioxin-like activity in maternal blood samples collected at delivery with the Dioxin-Responsive Chemically Activated LUciferase eXpression (DR CALUX®) bioassay. Results: Anogenital distances were sexually dimorphic, being longer in males than females. Plasma dioxin-like activity was negatively associated with AGD in male newborns. The estimated change in AGD per 10 pg CALUX®–toxic equivalent/g lipid increase was –0.44 mm (95% CI: –0.80, –0.08) after adjusting for confounders. Negative but smaller and nonsignificant associations were observed for AGD in young boys. No associations were found in girls. Conclusions: Male infants may be susceptible to endocrine-disrupting effects of dioxins. Our findings are consistent with the experimental animal evidence used by the Food and Agriculture Organization/World Health Organization to set recommendations for human dioxin intake. PMID:23171674
Exploring neighborhoods in the metagenome universe.
Aßhauer, Kathrin P; Klingenberg, Heiner; Lingner, Thomas; Meinicke, Peter
2014-07-14
The variety of metagenomes in current databases provides a rapidly growing source of information for comparative studies. However, the quantity and quality of supplementary metadata is still lagging behind. It is therefore important to be able to identify related metagenomes by means of the available sequence data alone. We have studied efficient sequence-based methods for large-scale identification of similar metagenomes within a database retrieval context. In a broad comparison of different profiling methods we found that vector-based distance measures are well suited for the detection of metagenomic neighbors. Our evaluation on more than 1700 publicly available metagenomes indicates that for a query metagenome from a particular habitat on average nine out of ten nearest neighbors represent the same habitat category independent of the utilized profiling method or distance measure. While for well-defined labels a neighborhood accuracy of 100% can be achieved, in general the neighbor detection is severely affected by a natural overlap of manually annotated categories. In addition, we present results of a novel visualization method that is able to reflect the similarity of metagenomes in a 2D scatter plot. The visualization method shows a similarly high accuracy in the reduced space as compared with the high-dimensional profile space. Our study suggests that for inspection of metagenome neighborhoods the profiling methods and distance measures can be chosen to provide a convenient interpretation of results in terms of the underlying features. Furthermore, supplementary metadata of metagenome samples in the future needs to comply with readily available ontologies for fine-grained and standardized annotation. To make profile-based k-nearest-neighbor search and the 2D-visualization of the metagenome universe available to the research community, we included the proposed methods in our CoMet-Universe server for comparative metagenome analysis.
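The retrieval experiment described above can be approximated in a few lines of NumPy: build per-sample profiles, compute pairwise distances, and check how often the k nearest neighbors share the query's habitat label. This is a schematic leave-one-out version with hypothetical inputs, not the CoMet pipeline itself.

```python
import numpy as np

def neighbor_habitat_accuracy(profiles, habitats, k=10, metric="cosine"):
    """Leave-one-out check of how often the k nearest metagenomes share the query's
    habitat label; profiles is an (n_samples, n_features) abundance matrix."""
    P = np.asarray(profiles, float)
    P = P / P.sum(axis=1, keepdims=True)                 # relative abundances
    if metric == "cosine":
        Pn = P / np.linalg.norm(P, axis=1, keepdims=True)
        D = 1.0 - Pn @ Pn.T
    else:                                                # Euclidean fallback
        D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)                          # exclude self-matches
    hits = []
    for i in range(len(P)):
        nn = np.argsort(D[i])[:k]
        hits.append(np.mean([habitats[j] == habitats[i] for j in nn]))
    return float(np.mean(hits))

# toy example with made-up profiles and labels
profiles = [[5, 1, 0, 2], [4, 2, 1, 2], [0, 3, 6, 1], [1, 2, 7, 1]]
habitats = ["gut", "gut", "soil", "soil"]
print(neighbor_habitat_accuracy(profiles, habitats, k=1))
```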
Cao, Yongze; Nakayama, Shota; Kumar, Pawan; Zhao, Yue; Kinoshita, Yukinori; Yoshimura, Satoru; Saito, Hitoshi
2018-05-03
For magnetic domain imaging with a very high spatial resolution by magnetic force microscopy (MFM), the tip-sample distance should be as small as possible. However, magnetic imaging near the sample surface is very difficult with conventional MFM because the interaction forces between tip and sample include van der Waals and electrostatic forces along with the magnetic force. In this study, we propose an alternating magnetic force microscopy (A-MFM) method that extracts only the magnetic force near the sample surface, without any topographic or electrical crosstalk. In the present method, the magnetization of a FeCo-GdOx superparamagnetic tip is modulated by an external AC magnetic field in order to measure the magnetic domain structure without any perturbation from the other forces near the sample surface. Moreover, it is demonstrated that the proposed method can also measure the strength and identify the polarities of the second derivative of the perpendicular stray field from a thin-film permanent magnet in the DC-demagnetized and remanent states. © 2018 IOP Publishing Ltd.
Le Moyec, Laurence; Robert, Céline; Triba, Mohamed N; Billat, Véronique L; Mata, Xavier; Schibler, Laurent; Barrey, Eric
2014-01-01
During long-distance endurance races, horses undergo high physiological and metabolic stresses. The adaptation processes involve the modulation of the energetic pathways in order to meet the energy demand. The aims were to evaluate the effects of long endurance exercise on plasma metabolomic profiles and to investigate their relationships with individual horse performance. The metabolomic profiles of the horses were analyzed using a non-dedicated methodology, NMR spectroscopy combined with multivariate statistical analysis. The advantage of this method is that it can investigate several metabolic pathways at the same time in a single sample. Plasma samples were obtained before exercise (BE) and post exercise (PE) from 69 horses competing in three endurance races at national level (130-160 km). Biochemical assays were also performed on the samples taken at PE. The proton NMR spectra were compared using the supervised orthogonal projection on latent structure method according to several factors. Among these factors, the race location was not significant, whereas the effect of the race exercise (sample BE vs. PE of the same horse) was highly discriminating. This result was confirmed by the projection of unpaired samples (only the BE or the PE sample from different horses). The metabolomic profiles proved that protein, energetic and lipid metabolisms as well as glycoprotein content are highly affected by the long endurance exercise. The BE samples from finisher horses could be discriminated according to the racing speed based on their metabolomic lipid content. The PE samples could be discriminated according to the horse ranking position at the end of the race, with lactate as the only correlated metabolite. In conclusion, the metabolomic profiles of plasmas taken before and after the race provided a better understanding of the high energy demand and protein catabolism pathway that could expose the horses to metabolic disorders.
Acoustically levitated droplets: a contactless sampling method for fluorescence studies.
Leiterer, Jork; Grabolle, Markus; Rurack, Knut; Resch-Genger, Ute; Ziegler, Jan; Nann, Thomas; Panne, Ulrich
2008-01-01
Acoustic levitation is used as a new tool to study concentration-dependent processes in fluorescence spectroscopy. With this technique, small amounts of liquid and solid samples can be measured without the need for sample supports or containers, which often limits signal acquisition and can even alter sample properties due to interactions with the support material. We demonstrate that, because of the small sample volume, fluorescence measurements at high concentrations of an organic dye are possible without the limitation of inner-filter effects, which hamper such experiments in conventional, cuvette-based measurements. Furthermore, we show that acoustic levitation of liquid samples provides an experimentally simple way to study distance-dependent fluorescence modulations in semiconductor nanocrystals. The evaporation of the solvent during levitation leads to a continuous increase of solute concentration and can easily be monitored by laser-induced fluorescence.
Shiferaw, Solomon; Spigt, Mark; Seme, Assefa; Amogne, Ayanaw; Skrøvseth, Stein; Desta, Selamawit; Radloff, Scott; Tsui, Amy; GeertJan, Dinant
2017-01-01
There is limited evidence of the linkage between contraceptive use, the range of methods available, the level of contraceptive stocks at health facilities, and distance to facility in developing countries. The present analysis aims to examine the influence of contraceptive method availability and distance to nearby facilities on modern contraceptive utilization among married women in rural areas in Ethiopia using geo-referenced data. We used data from the first round of surveys of the Performance Monitoring & Accountability 2020 project in Ethiopia (PMA2020/Ethiopia-2014). The survey was conducted in a sample of 200 enumeration areas (EAs) where, for each EA, 35 households and up to 3 public or private health service delivery points (SDPs) were selected. The main outcome variable was individual use of a contraceptive method for married women in rural Ethiopia. Correlates of interest include distance to nearby health facilities, range of contraceptives available in facilities, household wealth index, and the woman's educational status, age, parity, and whether she recently visited a health facility. This analysis primarily focuses on stock provision at public SDPs. Overall, complete information was collected from 1763 married rural women ages 15-49 years and 198 SDPs in rural areas (97.1% public). Most rural women (93.9%) live within 5 kilometers of their nearest health post, while much lower proportions live within the same distance of the nearest health center (52.2%) and hospital (0.8%), respectively. The main sources of modern contraceptive methods for married rural women were health posts (48.8%) and health centers (39.0%). The mean number of types of contraceptive methods offered by hospitals, health centers and health posts was 6.2, 5.4, and 3.7, respectively. Modern contraceptive use (mCPR) among rural married women was 27.3% (95% CI: 25.3, 29.5). The percentage of rural married women who use modern contraceptives decreased as distance from the nearest SDP increased: 41.2%, 27.5%, 22.0%, and 22.6% among women living less than 2 kilometers, 2 to 3.9 kilometers, 4 to 5.9 kilometers, and 6 or more kilometers away, respectively (p-value < 0.01). Additionally, women who live close to facilities that offer a wider range of contraceptive methods were significantly more likely to use modern contraceptives. The mCPR ranged from 42.3% among women who live within 2 kilometers of facilities offering 3 or more methods to 22.5% among women living more than 6 kilometers away from the nearest facility with the same number (3 or more methods) available, after adjusting for observed covariates. Although the majority of the Ethiopian population lives within a relatively close distance to lower-level facilities (health posts), the number and range of methods available (method choice) and proximity are independently associated with contraceptive utilization. By demonstrating the extent to which objective measures of distance (of relatively small magnitude) explain variation in contraceptive use among rural women, the study fills an important planning gap for family planning programs operating in resource-limited settings.
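A simplified sketch of the kind of individual-level model implied above, written with statsmodels on synthetic placeholder data; the actual analysis used distance categories and additional survey-design adjustments, so this only illustrates the structure of such a regression (and the direction of the associations one would expect: negative for distance, positive for the number of methods offered).

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# hypothetical analysis dataset: one row per married rural woman
rng = np.random.default_rng(2)
n = 1763
df = pd.DataFrame({
    "uses_modern": rng.integers(0, 2, n),       # 1 = uses a modern method
    "dist_km": rng.gamma(2.0, 1.5, n),          # distance to nearest SDP
    "n_methods": rng.integers(0, 8, n),         # methods offered at that SDP
    "wealth": rng.integers(1, 6, n),
    "education": rng.integers(0, 3, n),
    "age": rng.integers(15, 50, n),
    "parity": rng.integers(0, 9, n),
})

X = sm.add_constant(df[["dist_km", "n_methods", "wealth", "education", "age", "parity"]])
model = sm.Logit(df["uses_modern"], X).fit(disp=0)
print(model.params)     # log-odds coefficients for each correlate
```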
Constraints on the Energy Density Content of the Universe Using Only Clusters of Galaxies
NASA Technical Reports Server (NTRS)
Molnar, Sandor M.; Haiman, Zoltan; Birkinshaw, Mark
2003-01-01
We demonstrate that it is possible to constrain the energy content of the Universe with high accuracy using observations of clusters of galaxies only. The degeneracies in the cosmological parameters are lifted by combining constraints from different observables of galaxy clusters. We show that constraints on cosmological parameters from galaxy cluster number counts as a function of redshift and accurate angular diameter distance measurements to clusters are complementary to each other, and their combination can constrain the energy density content of the Universe well. The number counts can be obtained from X-ray and/or SZ (Sunyaev-Zeldovich effect) surveys, while the angular diameter distances can be determined from deep observations of the intra-cluster gas using its thermal bremsstrahlung X-ray emission and the SZ effect (X-SZ method). In this letter we combine constraints from simulated cluster number counts expected from a 12 deg² SZ cluster survey with constraints from simulated angular diameter distance measurements based on the X-SZ method, assuming an expected accuracy of 7% in the angular diameter distance determination of 70 clusters with redshifts less than 1.5. We find that Ω_M can be determined to within about 25%, Ω_Λ to within 20%, and w to within 16%. Any cluster survey can be used to select clusters for high-accuracy distance measurements, but we assumed accurate angular diameter distance measurements for only 70 clusters, since long observations are necessary to achieve high accuracy in distance measurements. Thus the question naturally arises: how should clusters of galaxies be selected for accurate angular diameter distance determinations? In this letter, as an example, we demonstrate that it is possible to optimize this selection by changing the number of clusters observed and the upper cutoff of their redshift range. We show that, contrary to general expectations, constraints on cosmological parameters from combining cluster number counts and angular diameter distance measurements will not improve substantially when selecting clusters with redshifts higher than one. This important conclusion allows us to restrict our cluster sample to clusters at redshifts below one, a range where the observation time needed for accurate distance measurements is more manageable. Subject headings: cosmological parameters - cosmology: theory - galaxies: clusters: general - X-rays: galaxies: clusters
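The cluster-based constraints above hinge on the cosmology dependence of the angular diameter distance; a generic Python sketch of that quantity for a flat FRW model is given below. This is the textbook relation, not the authors' code, and the parameter values are illustrative.

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458   # speed of light, km/s

def angular_diameter_distance(z, H0=70.0, om=0.3, ol=0.7):
    """Angular diameter distance in Mpc for a (nearly) flat FRW model;
    curvature is ignored for simplicity (om + ol assumed ~ 1)."""
    E = lambda zp: np.sqrt(om * (1 + zp) ** 3 + ol)
    dc, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z)      # comoving distance in units of c/H0
    return (C_KM_S / H0) * dc / (1.0 + z)

print(angular_diameter_distance(0.5))   # roughly 1260 Mpc for these parameters
```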
Etchemendy, Pablo E; Spiousas, Ignacio; Calcagno, Esteban R; Abregú, Ezequiel; Eguia, Manuel C; Vergara, Ramiro O
2018-06-01
In this study we evaluated whether a method of direct location is an appropriate response method for measuring auditory distance perception of far-field sound sources. We designed an experimental set-up that allows participants to indicate the distance at which they perceive the sound source by moving a visual marker. We termed this method Cross-Modal Direct Location (CMDL) since the response procedure involves the visual modality while the stimulus is presented through the auditory modality. Three experiments were conducted with sound sources located from 1 to 6 m. The first one compared the perceived distances obtained using either the CMDL device or verbal report (VR), the response method most frequently used for reporting auditory distance in the far field, and found differences in response compression and bias. In Experiment 2, participants reported visual distance estimates to the visual marker, which were found to be highly accurate. Then, we asked the same group of participants to report VR estimates of auditory distance and found that the spatial visual information obtained from the previous task did not influence their reports. Finally, Experiment 3 compared the same response methods as Experiment 1, but with the methods interleaved, showing a weak but complex mutual influence. However, the estimates obtained with each method remained statistically different. Our results show that the auditory distance psychophysical functions obtained with the CMDL method are less susceptible to the previously reported underestimation for distances over 2 m.
Groneberg, David A.
2016-01-01
We integrated recent improvements within the floating catchment area (FCA) method family into an integrated 'iFCA' method. Within this method we focused on the distance decay function and its parameters. So far only distance decay functions with constant parameters have been applied. Therefore, we developed a variable distance decay function to be used within the FCA method. We were able to replace the impedance coefficient β by readily available distribution parameters (i.e., the median and standard deviation (SD)) within a logistic-based distance decay function. Hence, the function is shaped individually for every single population location by the median and SD of all population-to-provider distances within a global catchment size. Theoretical application of the variable distance decay function showed conceptually sound results. Furthermore, the existence of effective variable catchment sizes defined by the asymptotic approach to zero of the distance decay function was revealed, satisfying the need for variable catchment sizes. The application of the iFCA method within an urban case study in Berlin (Germany) confirmed the theoretical fit of the suggested method. In summary, we introduced, for the first time, a variable distance decay function within an integrated FCA method. This function accounts for individual travel behaviors determined by the distribution of providers. Additionally, the function inherits effective variable catchment sizes and therefore obviates the need for determining variable catchment sizes separately. PMID:27391649
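One plausible reading of the variable logistic decay and its use inside a two-step FCA is sketched below in Python; the exact functional form and catchment handling in the paper may differ, and all numbers are toy values.

```python
import numpy as np

def logistic_decay(d, median, sd):
    """Variable logistic distance decay: weight near 1 at zero distance, 0.5 at the
    local median distance, falling off on a scale set by the local SD (one plausible
    parameterization of the idea described above, not the paper's exact formula)."""
    scale = max(sd, 1e-9)
    return 1.0 / (1.0 + np.exp((np.asarray(d, float) - median) / scale))

def ifca_accessibility(dist, population, capacity, global_catchment=60.0):
    """Two-step FCA with a per-location variable decay.
    dist: (n_pop, n_prov) travel distances; population, capacity: 1-D arrays."""
    dist = np.asarray(dist, float)
    W = np.zeros_like(dist)
    for i in range(dist.shape[0]):                   # decay shaped per population site
        d_i = dist[i][dist[i] <= global_catchment]
        if d_i.size:
            W[i] = logistic_decay(dist[i], np.median(d_i), d_i.std())
    W[dist > global_catchment] = 0.0
    # step 1: provider-to-population ratios; step 2: sum ratios back to population sites
    demand = (W * population[:, None]).sum(axis=0)
    R = np.divide(capacity, demand, out=np.zeros_like(capacity, dtype=float), where=demand > 0)
    return W @ R

# tiny example: 3 population sites, 2 providers
print(ifca_accessibility(np.array([[5., 20.], [10., 8.], [45., 30.]]),
                         population=np.array([1000., 800., 1200.]),
                         capacity=np.array([5., 3.])))
```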
System and method for chromatography and electrophoresis using circular optical scanning
Balch, Joseph W.; Brewer, Laurence R.; Davidson, James C.; Kimbrough, Joseph R.
2001-01-01
A system and method is disclosed for chromatography and electrophoresis using circular optical scanning. One or more rectangular microchannel plates or radial microchannel plates has a set of analysis channels for insertion of molecular samples. One or more scanning devices repeatedly pass over the analysis channels in one direction at a predetermined rotational velocity and with a predetermined rotational radius. The rotational radius may be dynamically varied so as to monitor the molecular sample at various positions along an analysis channel. Sample loading robots may also be used to input molecular samples into the analysis channels. Radial microchannel plates are built from a substrate whose analysis channels are disposed at a non-parallel angle with respect to each other. A first step in the method accesses either a rectangular or radial microchannel plate, having a set of analysis channels, and a second step passes a scanning device repeatedly in one direction over the analysis channels. As a third step, the scanning device is passed over the analysis channels at dynamically varying distances from a centerpoint of the scanning device. As a fourth step, molecular samples are loaded into the analysis channels with a robot.
Georgieva, Elka R.; Roy, Aritro S.; Grigoryants, Vladimir M.; Borbat, Petr P.; Earle, Keith A.; Scholes, Charles P.; Freed, Jack H.
2012-01-01
Pulsed dipolar ESR spectroscopy (DEER and DQC) requires frozen samples. An important issue in the biological application of this technique is how the freezing rate and concentration of cryoprotectant could possibly affect the conformation of the biomacromolecule and/or the spin-label. We studied in detail the effect of these experimental variables on the distance distributions obtained by DEER from a series of doubly spin-labeled T4 lysozyme mutants. We found that the rate of sample freezing affects mainly the ensemble of spin-label rotamers, but the distance maxima remain essentially unchanged. This suggests that proteins frozen in a regular manner in liquid nitrogen faithfully maintain the distance-dependent structural properties in solution. We compared the results from rapidly freeze-quenched (≤100 μs) samples to those from commonly shock-frozen (slow freeze, 1 s or longer) samples. For all the mutants studied we obtained inter-spin distance distributions, which were broader for rapidly frozen samples than for slowly frozen ones. We infer that rapid freezing trapped a larger ensemble of spin-label rotamers, whereas on the time scale of slower freezing the protein and spin label reach a population with fewer low-energy conformers. We used glycerol as a cryoprotectant in concentrations of 10% and 30% by weight. With 10% glycerol and slow freezing, we observed an increased slope of background signals, which in DEER is related to increased local spin concentration, in this case due to insufficient solvent vitrification, and therefore protein aggregation. This effect was considerably suppressed in slowly frozen samples containing 30% glycerol and rapidly frozen samples containing 10% glycerol. The assignment of bimodal distributions to tether rotamers as opposed to protein conformations is aided by comparing results using MTSL and 4-Bromo MTSL spin-labels. The latter usually produce narrower distance distributions. PMID:22341208
NASA Astrophysics Data System (ADS)
Clausen, M. P.; Colin-York, H.; Schneider, F.; Eggeling, C.; Fritzsche, M.
2017-02-01
Nanoscale spacing between the plasma membrane and the underlying cortical actin cytoskeleton profoundly modulates cellular morphology, mechanics, and function. Measuring this distance has been a key challenge in cell biology. Current methods for dissecting the nanoscale spacing either limit themselves to complex survey design using fixed samples or rely on diffraction-limited fluorescence imaging whose spatial resolution is insufficient to quantify distances on the nanoscale. Using dual-color super-resolution STED (stimulated-emission-depletion) microscopy, we here overcome this challenge and accurately measure the density distribution of the cortical actin cytoskeleton and the distance between the actin cortex and the membrane in live Jurkat T-cells. We found an asymmetric cortical actin density distribution with a mean width of 230 (+105/-125) nm. The spatial distances measured between the maximum density peaks of the cortex and the membrane were bi-modally distributed with mean values of 50 ± 15 nm and 120 ± 40 nm, respectively. Taken together with the finite width of the cortex, our results suggest that in some regions the cortical actin is closer than 10 nm to the membrane and a maximum of 20 nm in others.
GeneOnEarth: fitting genetic PC plots on the globe.
Torres-Sánchez, Sergio; Medina-Medina, Nuria; Gignoux, Chris; Abad-Grau, María M; González-Burchard, Esteban
2013-01-01
Principal component (PC) plots have become widely used to summarize genetic variation of individuals in a sample. The similarity between genetic distance in PC plots and geographical distance has been shown to be quite impressive. However, in most situations, individual ancestral origins are not precisely known or they are heterogeneously distributed; hence, they are hardly linked to a geographical area. We have developed GeneOnEarth, a user-friendly web-based tool to help geneticists understand whether a linear isolation-by-distance model may apply to a genetic data set, that is, whether genetic distances among a set of individuals resemble geographical distances among their origins. Its main goal is to allow users to first apply a by-view Procrustes method to visually learn whether this model holds. To do that, the user can choose the exact geographical area from an online 2D or 3D world map using Google Maps or Google Earth, respectively, and rotate, flip, and resize the images. GeneOnEarth can also compute the optimal rotation angle using Procrustes analysis and assess statistical evidence of similarity when a different rotation angle has been chosen by the user. An online version of GeneOnEarth is available for testing and use at http://bios.ugr.es/GeneOnEarth.
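The alignment described above can be checked numerically with an ordinary Procrustes fit; the sketch below uses SciPy's procrustes on synthetic coordinates. The similarity statistic sqrt(1 - disparity) is one common summary in this literature, not necessarily the exact statistic implemented in GeneOnEarth, and all data here are made up.

```python
import numpy as np
from scipy.spatial import procrustes

# hypothetical inputs: geographic coordinates (longitude, latitude) and the first two
# genetic PCs for the same individuals, in the same row order
rng = np.random.default_rng(3)
geo = rng.uniform([-10, 35], [30, 60], size=(150, 2))
theta = np.deg2rad(25)                                  # pretend the PCs are a rotated, noisy map
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
pcs = (geo - geo.mean(0)) @ R.T + rng.normal(0, 1.5, size=geo.shape)

std_geo, std_pcs, disparity = procrustes(geo, pcs)      # optimal scaling/rotation/translation
similarity = np.sqrt(1.0 - disparity)
print(round(similarity, 3))
```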
Material characterization using ultrasound tomography
NASA Astrophysics Data System (ADS)
Falardeau, Timothe; Belanger, Pierre
2018-04-01
Characterization of material properties can be performed using a wide array of methods, e.g., X-ray diffraction or tensile testing. Each method yields a limited set of material properties. This paper is concerned with using ultrasound tomography to map the speed of sound inside a material sample. The velocity inside the sample is directly related to its elastic properties. Recent developments in ultrasound diffraction tomography have enabled velocity mapping of high-velocity-contrast objects using a combination of bent-ray time-of-flight tomography and diffraction tomography. In this study, ultrasound diffraction tomography was investigated using simulations in human bone phantoms. A finite element model was developed to assess the influence of the frequency, the number of transduction positions, and the distance from the sample, as well as to adapt the imaging algorithm. The average velocity in both regions of the bone phantoms was within 5% of the true value.
Williams, C.J.; Heglund, P.J.
2009-01-01
Habitat association models are commonly developed for individual animal species using generalized linear modeling methods such as logistic regression. We considered the issue of grouping species based on their habitat use so that management decisions can be based on sets of species rather than individual species. This research was motivated by a study of western landbirds in northern Idaho forests. The method we examined was to separately fit models to each species and to use a generalized Mahalanobis distance between coefficient vectors to create a distance matrix among species. Clustering methods were used to group species from the distance matrix, and multidimensional scaling methods were used to visualize the relations among species groups. Methods were also discussed for evaluating the sensitivity of the conclusions because of outliers or influential data points. We illustrate these methods with data from the landbird study conducted in northern Idaho. Simulation results are presented to compare the success of this method to alternative methods using Euclidean distance between coefficient vectors and to methods that do not use habitat association models. These simulations demonstrate that our Mahalanobis-distance-based method was nearly always better than Euclidean-distance-based methods or methods not based on habitat association models. The methods used to develop candidate species groups are easily explained to other scientists and resource managers since they mainly rely on classical multivariate statistical methods. © 2008 Springer Science+Business Media, LLC.
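A compact Python sketch of the distance-and-clustering step described above, with hypothetical per-species coefficient vectors and covariance matrices; the exact form of the generalized Mahalanobis distance used in the study may differ from the simple summed-covariance version shown here.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def species_distance_matrix(coefs, covs):
    """Generalized Mahalanobis distances between species' habitat-model coefficient
    vectors: the coefficient difference weighted by the summed estimated covariances
    (one plausible choice of weighting matrix)."""
    n = len(coefs)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            diff = coefs[i] - coefs[j]
            Vinv = np.linalg.inv(covs[i] + covs[j])
            D[i, j] = D[j, i] = np.sqrt(diff @ Vinv @ diff)
    return D

# toy example: 5 species, 4 habitat coefficients each (hypothetical fitted models)
rng = np.random.default_rng(4)
coefs = rng.normal(size=(5, 4))
covs = np.array([np.eye(4) * 0.1 for _ in range(5)])
D = species_distance_matrix(coefs, covs)
Z = linkage(squareform(D, checks=False), method="average")
print(fcluster(Z, t=2, criterion="maxclust"))     # candidate species groups
```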
Pestana Passos, S; Dias Vanderlei, A; Ozcan, M; Felipe Valandro, L F
2011-03-01
This study evaluated, by scanning electron microscopy (SEM) and energy-dispersive X-ray spectroscopy (EDS), the effect of different strategies for silica coating (sandblasters, time, and distance) of a glass-infiltrated ceramic (In-Ceram Alumina). Forty-one ceramic blocks were produced. For comparison of the three air-abrasion devices, 15 ceramic samples were divided into three groups (N.=5): Bioart, Microetcher, and Ronvig (air-abrasion parameters: 20 s at a distance of 10 mm). For evaluation of the time and distance factors, ceramic samples (N.=5) were allocated to groups considering three application times (5 s, 13 s, and 20 s) and two distances (10 mm and 20 mm), using the Ronvig device. In a control sample, no surface treatment was performed. After that, micro-morphologic analyses of the ceramic surfaces were made using SEM. EDS analyses were carried out to detect the percentage of silica on representative ceramic surfaces. ANOVA and Tukey tests were used to analyze the results. One-way ANOVA showed that silica deposition differed among the devices (P=0.0054). The Ronvig device promoted the highest silica coating compared to the other devices (Tukey test). Two-way ANOVA showed that the distance and time factors did not significantly affect silica deposition. The Ronvig device provided the most effective silica deposition on the glass-infiltrated alumina ceramic surface, and the air-abrasion times and distances studied did not affect the silica coating.
Distance learning in academic health education.
Mattheos, N; Schittek, M; Attström, R; Lyon, H C
2001-05-01
Distance learning is an apparent alternative to traditional methods in education of health care professionals. Non-interactive distance learning, interactive courses and virtual learning environments exist as three different generations in distance learning, each with unique methodologies, strengths and potential. Different methodologies have been recommended for distance learning, varying from a didactic approach to a problem-based learning procedure. Accreditation, teamwork and personal contact between the tutors and the students during a course provided by distance learning are recommended as motivating factors in order to enhance the effectiveness of the learning. Numerous assessment methods for distance learning courses have been proposed. However, few studies report adequate tests for the effectiveness of the distance-learning environment. Available information indicates that distance learning may significantly decrease the cost of academic health education at all levels. Furthermore, such courses can provide education to students and professionals not accessible by traditional methods. Distance learning applications still lack the support of a solid theoretical framework and are only evaluated to a limited extent. Cases reported so far tend to present enthusiastic results, while more carefully-controlled studies suggest a cautious attitude towards distance learning. There is a vital need for research evidence to identify the factors of importance and variables involved in distance learning. The effectiveness of distance learning courses, especially in relation to traditional teaching methods, must therefore be further investigated.
Temporally flickering nanoparticles for compound cellular imaging and super resolution
NASA Astrophysics Data System (ADS)
Ilovitsh, Tali; Danan, Yossef; Meir, Rinat; Meiri, Amihai; Zalevsky, Zeev
2016-03-01
This work presents the use of flickering nanoparticles for imaging biological samples. The method has high noise immunity, and it enables the detection of overlapping types of gold nanoparticles (GNPs) at significantly sub-diffraction distances, making it attractive for super-resolving localization microscopy techniques. The method utilizes a lock-in technique in which the sample is imaged using time-modulated laser beams whose number matches the number of GNP types labeling a given sample, resulting in the excitation of temporal flickering of the scattered light at known temporal frequencies. The final image, in which the GNPs are spatially separated, is obtained by post-processing in which the spectral components corresponding to the different modulation frequencies are extracted. This allows the simultaneous super-resolved imaging of multiple types of GNPs that label targets of interest within biological samples. Additionally, applying the K-factor image decomposition post-processing algorithm can further improve the performance of the proposed approach.
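The frequency-separation idea can be illustrated with a per-pixel lock-in demodulation of a synthetic image stack, as in the Python sketch below; this is our schematic of the principle, not the authors' reconstruction algorithm, and the K-factor decomposition step is not included.

```python
import numpy as np

def demodulate_stack(frames, fps, mod_freqs):
    """Separate GNP types from a time-modulated image stack by per-pixel lock-in
    demodulation at each known modulation frequency.
    frames: (n_frames, H, W) array; returns one amplitude image per frequency."""
    frames = np.asarray(frames, float)
    n = frames.shape[0]
    t = np.arange(n) / fps
    frames = frames - frames.mean(axis=0)                  # remove the static background
    images = []
    for f in mod_freqs:
        ref_c = np.cos(2 * np.pi * f * t)
        ref_s = np.sin(2 * np.pi * f * t)
        I = np.tensordot(ref_c, frames, axes=(0, 0)) * 2 / n    # in-phase component
        Q = np.tensordot(ref_s, frames, axes=(0, 0)) * 2 / n    # quadrature component
        images.append(np.hypot(I, Q))                      # phase-insensitive amplitude
    return images

# synthetic check: two pixels flickering at 5 Hz and 12 Hz
fps, n = 100, 400
t = np.arange(n) / fps
stack = np.zeros((n, 2, 2))
stack[:, 0, 0] = 1 + 0.5 * np.sin(2 * np.pi * 5 * t)
stack[:, 1, 1] = 1 + 0.5 * np.sin(2 * np.pi * 12 * t)
a5, a12 = demodulate_stack(stack, fps, [5.0, 12.0])
print(np.round(a5, 2), np.round(a12, 2))
```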
Li, Zeyu; Li, Lei; Qin, Yu; Li, Guangbin; Wang, Du; Zhou, Xun
2016-09-05
We demonstrate the enhancement of resolution and image quality in terahertz (THz) lens-free in-line digital holography by sub-pixel sampling with double-distance reconstruction. Multiple sub-pixel shifted low-resolution (LR) holograms recorded by a pyroelectric array detector (100 μm × 100 μm pixel pitch, 124 × 124 pixels) are aligned precisely to synthesize a high-resolution (HR) hologram. By this method, the lateral resolution is no longer limited by the pixel pitch, and a lateral resolution of 150 μm is obtained, which corresponds to 1.26λ with respect to the illuminating wavelength of 118.8 μm (2.52 THz). Compared with other published works to date, this is the highest resolution in THz digital holography when considering the illuminating wavelength. In addition, to suppress the twin-image and zero-order artifacts, the complex amplitude distributions of both the object and the illuminating background wave fields are reconstructed simultaneously. This is achieved by iterative phase retrieval between the double HR holograms and background images at two recording planes, which does not require any constraints on the object plane or a priori knowledge of the sample.
State-space modeling of population sizes and trends in Nihoa Finch and Millerbird
Gorresen, P. Marcos; Brinck, Kevin W.; Camp, Richard J.; Farmer, Chris; Plentovich, Sheldon M.; Banko, Paul C.
2016-01-01
Both of the 2 passerines endemic to Nihoa Island, Hawai‘i, USA—the Nihoa Millerbird (Acrocephalus familiaris kingi) and Nihoa Finch (Telespiza ultima)—are listed as endangered by federal and state agencies. Their abundances have been estimated by irregularly implemented fixed-width strip-transect sampling from 1967 to 2012, from which area-based extrapolation of the raw counts produced highly variable abundance estimates for both species. To evaluate an alternative survey method and improve abundance estimates, we conducted variable-distance point-transect sampling between 2010 and 2014. We compared our results to those obtained from strip-transect samples. In addition, we applied state-space models to derive improved estimates of population size and trends from the legacy time series of strip-transect counts. Both species were fairly evenly distributed across Nihoa and occurred in all or nearly all available habitat. Population trends for Nihoa Millerbird were inconclusive because of high within-year variance. Trends for Nihoa Finch were positive, particularly since the early 1990s. Distance-based analysis of point-transect counts produced mean estimates of abundance similar to those from strip-transects but was generally more precise. However, both survey methods produced biologically unrealistic variability between years. State-space modeling of the long-term time series of abundances obtained from strip-transect counts effectively reduced uncertainty in both within- and between-year estimates of population size, and allowed short-term changes in abundance trajectories to be smoothed into a long-term trend.
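As a toy illustration of how a state-space model shrinks noisy between-year counts, the sketch below implements a local-level (random-walk) Kalman filter and smoother on made-up strip-transect totals; the study's actual models are richer (they include observation models tied to detection), so this is only a stand-in for the idea.

```python
import numpy as np

def local_level_smoother(counts, obs_var, proc_var):
    """Kalman filter plus Rauch-Tung-Striebel smoother for a local-level
    state-space model of log abundance (hypothetical variance values)."""
    y = np.log(np.asarray(counts, float))
    n = len(y)
    a = np.zeros(n)          # filtered state means
    p = np.zeros(n)          # filtered state variances
    a_pred, p_pred = y[0], obs_var + proc_var
    for t in range(n):
        k = p_pred / (p_pred + obs_var)                  # Kalman gain
        a[t] = a_pred + k * (y[t] - a_pred)
        p[t] = (1 - k) * p_pred
        a_pred, p_pred = a[t], p[t] + proc_var
    s = a.copy()
    for t in range(n - 2, -1, -1):                       # backward smoothing pass
        g = p[t] / (p[t] + proc_var)
        s[t] = a[t] + g * (s[t + 1] - a[t])
    return np.exp(s)                                     # smoothed abundance trajectory

# noisy hypothetical strip-transect totals
print(np.round(local_level_smoother([600, 900, 500, 1100, 800, 1300], 0.15, 0.02)))
```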
Atmospheric CO2 Concentrations from Aircraft for 1972-1981, CSIRO Monitoring Program
Beardsmore, David J. [Commonwealth Scientific and Industrial Research Organization (CSIRO), Victoria, Australia; Pearman, Graeme I. [Commonwealth Scientific and Industrial Research Organization (CSIRO), Victoria, Australia
2012-01-01
From 1972 through 1981, air samples were collected in glass flasks from aircraft at a variety of latitudes and altitudes over Australia, New Zealand, and Antarctica. The samples were analyzed for CO2 concentrations with nondispersive infrared gas analysis. The resulting data contain the sampling dates, type of aircraft, flight number, flask identification number, sampling time, geographic sector, distance in kilometers from the listed distance measuring equipment (DME) station, station number of the radio navigation distance measuring equipment, altitude of the aircraft above mean sea level, sample analysis date, flask pressure, tertiary standards used for the analysis, analyzer used, and CO2 concentration. These data represent the first published record of CO2 concentrations in the Southern Hemisphere expressed in the WMO 1981 CO2 Calibration Scale and provide a precise record of atmospheric CO2 concentrations in the troposphere and lower stratosphere over Australia and New Zealand.
ERIC Educational Resources Information Center
Wonacott, Michael E.
Both face-to-face and distance learning methods are currently being used in adult education and career and technical education. In theory, the advantages of face-to-face and distance learning methods complement each other. In practice, however, both face-to-face and information and communications technology (ICT)-based distance programs often rely…
Going the Distance: Are There Common Factors in High Performance Distance Learning? Research Report.
ERIC Educational Resources Information Center
Hawksley, Rosemary; Owen, Jane
Common factors among high-performing distance learning (DL) programs were examined through case studies at 9 further education colleges and 2 nonsector organizations in the United Kingdom and a backup survey of a sample of 50 distance learners at 5 of the colleges. The study methodology incorporated numerous principles of process benchmarking. The…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nguyen, Son N.; Liyu, Andrey V.; Chu, Rosalie K.
A new approach for constant-distance mode mass spectrometry imaging of biological samples using nanospray desorption electrospray ionization (nano-DESI MSI) was developed by integrating a shear-force probe with the nano-DESI probe. The technical concept and basic instrumental setup as well as general operation of the system are described. Mechanical dampening of resonant oscillations due to the presence of shear forces between the probe and the sample surface enables constant-distance imaging mode via a computer-controlled closed feedback loop. The capability of simultaneous chemical and topographic imaging of complex biological samples is demonstrated using living Bacillus subtilis ATCC 49760 colonies on agar plates. The constant-distance mode nano-DESI MSI enabled imaging of many metabolites, including non-ribosomal peptides (surfactin, plipastatin and iturin) and iron-bound heme, on the surface of living bacterial colonies ranging in diameter from 10 mm to 13 mm with height variations of up to 0.8 mm above the agar plate. Co-registration of ion images to topographic images provided higher-contrast images. Constant-distance mode nano-DESI MSI is ideally suited for imaging biological samples of complex topography in their native state.
NASA Astrophysics Data System (ADS)
Yamagiwa, Masatomo; Ogawa, Takayuki; Minamikawa, Takeo; Abdelsalam, Dahi Ghareab; Okabe, Kyosuke; Tsurumachi, Noriaki; Mizutani, Yasuhiro; Iwata, Testuo; Yamamoto, Hirotsugu; Yasui, Takeshi
2018-06-01
Terahertz digital holography (THz-DH) has the potential to be used for non-destructive inspection of visibly opaque soft materials due to its good immunity to optical scattering and absorption. Although previous research on full-field off-axis THz-DH has usually been performed using Fresnel diffraction reconstruction, its minimum reconstruction distance occasionally prevents a sample from being placed near a THz imager to increase the signal-to-noise ratio in the hologram. In this article, we apply the angular spectrum method (ASM) for wavefront reconstruction in full-field off-axis THz-DH because ASM is more accurate at short reconstruction distances. We demonstrate real-time phase imaging of a visibly opaque plastic sample with a phase resolution power of λ/49 at a frame rate of 3.5 Hz in addition to real-time amplitude imaging. We also perform digital focusing of the amplitude image for the same object with a depth selectivity of 447 μm. Furthermore, 3D imaging of visibly opaque silicon objects was achieved with a depth precision of 1.7 μm. The demonstrated results indicate the high potential of the proposed method for in-line or in-process non-destructive inspection of soft materials.
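The angular spectrum propagation at the heart of such reconstructions can be written in a few lines. The sketch below uses numpy FFTs with parameters loosely matching those quoted (118.8 μm wavelength, 100 μm pitch); the function, the crude test object, and the 5 mm distance are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pitch, distance):
    """Propagate a sampled complex field by `distance` using the angular
    spectrum method (accurate at short reconstruction distances)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * distance) * (arg > 0)   # evanescent components dropped
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Toy example at THz-like parameters (wavelength 118.8 um, 100 um pixel pitch)
wl, pitch = 118.8e-6, 100e-6
hologram = np.ones((124, 124), dtype=complex)
hologram[50:74, 50:74] = 0.2                     # a crude absorbing object
image = angular_spectrum_propagate(hologram, wl, pitch, distance=5e-3)
print(image.shape, np.abs(image).max())
```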
Distance-Based Configurational Entropy of Proteins from Molecular Dynamics Simulations
Fogolari, Federico; Corazza, Alessandra; Fortuna, Sara; Soler, Miguel Angel; VanSchouwen, Bryan; Brancolini, Giorgia; Corni, Stefano; Melacini, Giuseppe; Esposito, Gennaro
2015-01-01
Estimation of configurational entropy from molecular dynamics trajectories is a difficult task which is often performed using quasi-harmonic or histogram analysis. An entirely different approach, proposed recently, estimates the local density distribution around each conformational sample by measuring the distance from its nearest neighbors. In this work we show that this theoretically well-grounded method can be easily applied to estimate the entropy from conformational sampling. We consider a set of systems that are representative of important biomolecular processes. In particular: reference entropies for amino acids in unfolded proteins are obtained from a database of residues not participating in secondary structure elements; the conformational entropy of folding of β2-microglobulin is computed from molecular dynamics simulations using reference entropies for the unfolded state; backbone conformational entropy is computed from molecular dynamics simulations of four different states of the EPAC protein and compared with order parameters (often used as a measure of entropy); the conformational and rototranslational entropy of binding is computed from simulations of 20 tripeptides bound to the peptide binding protein OppA and of β2-microglobulin bound to a citrate-coated gold surface. This work shows the potential of the method in the most representative biological processes involving proteins, and provides a valuable alternative, particularly in the cases shown here, where other approaches are problematic. PMID:26177039
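A compact way to see the nearest-neighbour idea is the Kozachenko-Leonenko estimator, which turns the distance from each sample to its k-th nearest neighbour into a differential-entropy estimate. The sketch below is a generic estimator of that kind, not the authors' specific pipeline; the toy Gaussian check has analytic entropy ln(2πe) ≈ 2.84 nats.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gammaln

def knn_entropy(samples, k=1):
    """Kozachenko-Leonenko k-nearest-neighbour entropy estimate (in nats)
    for conformational samples of shape (n_samples, n_dims)."""
    x = np.asarray(samples, dtype=float)
    n, d = x.shape
    r = cKDTree(x).query(x, k=k + 1)[0][:, k]               # distance to k-th neighbour
    log_vd = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)   # log volume of the unit d-ball
    return digamma(n) - digamma(k) + log_vd + d * np.mean(np.log(r + 1e-300))

rng = np.random.default_rng(1)
print(knn_entropy(rng.normal(size=(5000, 2))))   # ~ ln(2*pi*e) = 2.84 for a 2-D standard normal
```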
Sebastiani, Paola; Zhao, Zhenming; Abad-Grau, Maria M; Riva, Alberto; Hartley, Stephen W; Sedgewick, Amanda E; Doria, Alessandro; Montano, Monty; Melista, Efthymia; Terry, Dellara; Perls, Thomas T; Steinberg, Martin H; Baldwin, Clinton T
2008-01-01
Background One of the challenges of the analysis of pooling-based genome wide association studies is to identify authentic associations among potentially thousands of false positive associations. Results We present a hierarchical and modular approach to the analysis of genome wide genotype data that incorporates quality control, linkage disequilibrium, physical distance and gene ontology to identify authentic associations among those found by statistical association tests. The method is developed for the allelic association analysis of pooled DNA samples, but it can be easily generalized to the analysis of individually genotyped samples. We evaluate the approach using data sets from diverse genome wide association studies including fetal hemoglobin levels in sickle cell anemia and a sample of centenarians and show that the approach is highly reproducible and allows for discovery at different levels of synthesis. Conclusion Results from the integration of Bayesian tests and other machine learning techniques with linkage disequilibrium data suggest that we do not need to use too stringent thresholds to reduce the number of false positive associations. This method yields increased power even with relatively small samples. In fact, our evaluation shows that the method can reach almost 70% sensitivity with samples of only 100 subjects. PMID:18194558
Non-invasive genetic censusing and monitoring of primate populations.
Arandjelovic, Mimi; Vigilant, Linda
2018-03-01
Knowing the density or abundance of primate populations is essential for their conservation management and contextualizing socio-demographic and behavioral observations. When direct counts of animals are not possible, genetic analysis of non-invasive samples collected from wildlife populations allows estimates of population size with higher accuracy and precision than is possible using indirect signs. Furthermore, in contrast to traditional indirect survey methods, prolonged or periodic genetic sampling across months or years enables inference of group membership, movement, dynamics, and some kin relationships. Data may also be used to estimate sex ratios, sex differences in dispersal distances, and detect gene flow among locations. Recent advances in capture-recapture models have further improved the precision of population estimates derived from non-invasive samples. Simulations using these methods have shown that the confidence interval of point estimates includes the true population size when assumptions of the models are met, and therefore this range of population size minima and maxima should be emphasized in population monitoring studies. Innovations such as the use of sniffer dogs or anti-poaching patrols for sample collection are important to ensure adequate sampling, and the expected development of efficient and cost-effective genotyping by sequencing methods for DNAs derived from non-invasive samples will automate and speed analyses. © 2018 Wiley Periodicals, Inc.
Torres, M Eugenia Mosca; Puig, Silvia; Novillo, Agustina; Ovejero, Ramiro
2015-04-01
We conducted focal observations of vicuña, a year-round territorial mammal, to compare vigilance behaviour between territorial and bachelor males outside the reproductive season. We hypothesized that the time spent vigilant would depend on male social status, considering the potential effects of several variables: sampling year, group size, and distances to the nearest neighbour and to a vega (mountain wetland). We fit GLM models to assess how these variables, and their interactions, affected time allocation of territorial and bachelor males. We found no significant differences between territorial and bachelor males in the time devoted to vigilance behaviour. Vigilance of territorial males was influenced by the sampling year and the distance to the vega. In turn, vigilance in bachelor males was influenced mainly by the sampling year, the group size and the distance to the vega. Our results suggest that sampling year and distance to the vega are more important than social factors in conditioning the behaviour of male vicuñas during the non-reproductive season. Future studies of behaviour in water-dependent ungulates should consider the influence of water and forage availabilities, and the interactions between group size and other variables. Copyright © 2015 Elsevier B.V. All rights reserved.
Precipitation in a lead calcium tin anode
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perez-Gonzalez, Francisco A., E-mail: fco.aurelio@inbox.com; Centro de Innovacion, Investigacion y Desarrollo en Ingenieria y Tecnologia, Universidad Autonoma de Nuevo Leon; Camurri, Carlos G., E-mail: ccamurri@udec.cl
Samples from a hot-rolled sheet of a tin- and calcium-bearing lead alloy were solution heat treated at 300 °C and cooled down to room temperature at different rates; these samples were left at room temperature to study natural precipitation of CaSn₃ particles. The samples were aged for 45 days before analysing their microstructure, which was carried out in a scanning electron microscope using secondary and backscattered electron detectors. Selected X-ray spectra analyses were conducted to verify the nature of the precipitates. Images were taken at different magnifications in both modes of observation to locate the precipitates, record their position within the images and calculate the distance between them. Differential scanning calorimeter analyses were conducted on selected samples. It was found that the mechanical properties of the material correlate with the minimum average distance between precipitates, which is related to the average cooling rate from solution heat treatment. Highlights: • The distance between precipitates in a lead alloy is recorded. • The relationship between the distance and the cooling rate is established. • It is found that the strengthening of the alloy depends on the distance between precipitates.
Dynamics Sampling in Transition Pathway Space.
Zhou, Hongyu; Tao, Peng
2018-01-09
The minimum energy pathway contains important information describing the transition between two states on a potential energy surface (PES). Chain-of-states methods were developed to efficiently calculate minimum energy pathways connecting two stable states. In the chain-of-states framework, a series of structures are generated and optimized to represent the minimum energy pathway connecting two states. However, multiple pathways may exist between two states and should be identified to obtain a full view of the transitions. Therefore, we developed an enhanced sampling method, named the direct pathway dynamics sampling (DPDS) method, to facilitate exploration of a PES for multiple pathways connecting two stable states as well as additional minima and their associated transition pathways. In the DPDS method, molecular dynamics simulations are carried out on the target PES within a chain-of-states framework to directly sample the transition pathway space. DPDS simulations are regulated by two parameters controlling the distance among states along the pathway and the smoothness of the pathway. One advantage of the chain-of-states framework is that no specific reaction coordinates are necessary to generate the reaction pathway, because such information is implicitly represented by the structures along the pathway. The chain-of-states setup in DPDS greatly enhances sampling of the high-energy space between the two end states, such as transition states. By removing the constraint on the end states of the pathway, DPDS will also sample pathways connecting minima on a PES in addition to the end points of the starting pathway. This feature makes DPDS an ideal method to directly explore transition pathway space. Three examples demonstrate the efficiency of the DPDS method in sampling the high-energy regions important for reactions on the PES.
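The chain-of-states idea can be pictured with a plain elastic-band relaxation on a toy two-dimensional double-well surface: images between the two minima are pulled downhill by the potential while springs keep them evenly spaced. This is only a schematic of the framework DPDS builds on (the potential, spring constant, and step sizes are invented for the example), not the dynamics-based DPDS sampling itself.

```python
import numpy as np

def potential_grad(xy):
    """Gradient of a simple 2-D double-well PES, V = (x^2 - 1)^2 + y^2,
    with minima at (-1, 0) and (1, 0)."""
    x, y = xy[..., 0], xy[..., 1]
    return np.stack([4 * x * (x**2 - 1), 2 * y], axis=-1)

def relax_chain(n_images=12, k_spring=5.0, step=0.01, n_iter=2000):
    """Relax a chain of states between the two minima: gradient descent on the PES
    plus harmonic springs that keep neighbouring images evenly spaced."""
    t = np.linspace(0, 1, n_images)[:, None]
    chain = (1 - t) * np.array([-1.0, 0.0]) + t * np.array([1.0, 0.0])
    chain[:, 1] += 0.5 * np.sin(np.pi * t[:, 0])   # bend the initial guess off the true path
    for _ in range(n_iter):
        force = -potential_grad(chain)
        spring = np.zeros_like(chain)
        spring[1:-1] = k_spring * (chain[2:] + chain[:-2] - 2 * chain[1:-1])
        chain[1:-1] += step * (force[1:-1] + spring[1:-1])   # end images stay fixed
    return chain

print(np.round(relax_chain(), 2))   # images relax back onto the y = 0 minimum-energy path
```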
Genomic Characterization Helps Dissecting an Outbreak of Listeriosis in Northern Italy.
Comandatore, Francesco; Corbella, Marta; Andreoli, Giuseppina; Scaltriti, Erika; Aguzzi, Massimo; Gaiarsa, Stefano; Mariani, Bianca; Morganti, Marina; Bandi, Claudio; Fabbi, Massimo; Marone, Piero; Pongolini, Stefano; Sassera, Davide
2017-07-06
Listeria monocytogenes (Lm) is a bacterium widely distributed in nature and able to contaminate food processing environments, including those of dairy products. Lm is a primary public health issue, due to the very low infectious dose and the ability to produce severe outcomes, in particular in the elderly, newborns, pregnant women and immunocompromised patients. In the period between April and July 2015, an increased number of cases of listeriosis was observed in the area of Pavia, Northern Italy. An epidemiological investigation identified a small organic cheesemaking farm as the possible origin of the outbreak. In this work we present the results of the retrospective epidemiological study that we performed using molecular biology and genomic epidemiology methods. The strains sampled from patients and those from the target farm's cheese were analyzed using PFGE and whole genome sequencing (WGS) based methods. The WGS-based analyses included: a) in-silico MLST typing; b) SNP calling and genetic distance evaluation; c) determination of the resistance and virulence gene profiles; d) SNP-based phylogenetic reconstruction. Three of the patient strains and all the cheese strains were found to belong to the same phylogenetic cluster, in Sequence Type 29. A further accurate SNP analysis revealed that two of the three patient strains and all the cheese strains were highly similar (0.8 SNPs of average distance) and exhibited a higher distance from the third patient isolate (9.4 SNPs of average distance). Despite the overall agreement between the results of the PFGE and WGS epidemiological studies, the latter approach agrees with the epidemiological data in indicating that one of the patient strains could have originated from a different source. This result highlights the finer resolution that WGS methods can provide.
NASA Astrophysics Data System (ADS)
Nordin, Noraimi Azlin Mohd; Omar, Mohd; Sharif, S. Sarifah Radiah
2017-04-01
Companies are looking to improve productivity within their warehouse operations and distribution centres. In a typical warehouse operation, order picking accounts for more than half of the operating costs, and order picking is a benchmark for measuring the performance and productivity improvement of any warehouse management. Solving the order picking problem is crucial for reducing the response time and the waiting time of a customer in receiving his demands. To reduce the response time, proper routing for picking orders is vital. Moreover, on a production line, it is vital to ensure that supplies always arrive on time. Hence, a sample routing network is applied to EP Manufacturing Berhad (EPMB) as a case study. Dijkstra's algorithm and a dynamic programming method are applied to find the shortest distance for an order picker in order picking. The results show that the dynamic programming method is a simple yet competent approach for finding the shortest picking distance in a warehouse within a short time period.
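For readers unfamiliar with the routing step, a minimal Dijkstra implementation over a toy picking network is sketched below; the warehouse graph, node names, and distances are invented for illustration and are not EPMB's layout.

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from `source` in a weighted graph given as
    {node: [(neighbour, distance), ...]} -- aisle segments as edges."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Toy picking network: depot plus three storage locations (distances in metres)
warehouse = {
    "depot": [("A", 12), ("B", 7)],
    "A": [("depot", 12), ("C", 5)],
    "B": [("depot", 7), ("C", 9)],
    "C": [("A", 5), ("B", 9)],
}
print(dijkstra(warehouse, "depot"))   # {'depot': 0.0, 'A': 12.0, 'B': 7.0, 'C': 16.0}
```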
Berry, Elaine D; Wells, James E; Bono, James L; Woodbury, Bryan L; Kalchayanand, Norasak; Norman, Keri N; Suslow, Trevor V; López-Velasco, Gabriela; Millner, Patricia D
2015-02-01
The impact of proximity to a beef cattle feedlot on Escherichia coli O157:H7 contamination of leafy greens was examined. In each of 2 years, leafy greens were planted in nine plots located 60, 120, and 180 m from a cattle feedlot (3 plots at each distance). Leafy greens (270) and feedlot manure samples (100) were collected six different times from June to September in each year. Both E. coli O157:H7 and total E. coli bacteria were recovered from leafy greens at all plot distances. E. coli O157:H7 was recovered from 3.5% of leafy green samples per plot at 60 m, which was higher (P < 0.05) than the 1.8% of positive samples per plot at 180 m, indicating a decrease in contamination as distance from the feedlot was increased. Although E. coli O157:H7 was not recovered from air samples at any distance, total E. coli was recovered from air samples at the feedlot edge and all plot distances, indicating that airborne transport of the pathogen can occur. Results suggest that risk for airborne transport of E. coli O157:H7 from cattle production is increased when cattle pen surfaces are very dry and when this situation is combined with cattle management or cattle behaviors that generate airborne dust. Current leafy green field distance guidelines of 120 m (400 feet) may not be adequate to limit the transmission of E. coli O157:H7 to produce crops planted near concentrated animal feeding operations. Additional research is needed to determine safe set-back distances between cattle feedlots and crop production that will reduce fresh produce contamination. Copyright © 2015, American Society for Microbiology. All Rights Reserved.
Kuhn, Fabian; Natsch, Andreas
2009-04-06
It is currently not fully established whether human individuals have a genetically determined, individual-specific body odour. Volatile carboxylic acids are a key class of known human body odorants. They are released from glutamine conjugates secreted in axillary skin by a specific Nα-acyl-glutamine-aminoacylase present in skin bacteria. Here, we report a quantitative investigation of these odorant acids in 12 pairs of monozygotic twins. Axilla secretions were sampled twice and treated with the Nα-acyl-glutamine-aminoacylase. The released acids were analysed as their methyl esters with comprehensive two-dimensional gas chromatography and time-of-flight mass spectrometry detection. The pattern of the analytes was compared with distance analysis. The distance was lowest between samples of the right and the left axilla taken on the same day from the same individual. It was clearly greater if the same subject was sampled on different days, but this intra-individual distance between samples was only slightly lower than the distance between samples taken from two monozygotic twins. A much greater distance was observed when comparing unrelated individuals. By applying cluster and principal component analyses, a clear clustering of samples taken from one pair of monozygotic twins was also confirmed. In conclusion, the specific pattern of precursors for volatile carboxylic acids is subject to a day-to-day variation, but there is a strong genetic contribution. Therefore, humans have a genetically determined body odour type that is at least partly composed of these odorant acids.
NASA Astrophysics Data System (ADS)
Gallenne, A.; Kervella, P.; Mérand, A.; Pietrzyński, G.; Gieren, W.; Nardetto, N.; Trahin, B.
2017-11-01
Context. The Baade-Wesselink (BW) method, which combines linear and angular diameter variations, is the most common method to determine the distances to pulsating stars. However, the projection factor (p-factor), used to convert radial velocities into pulsation velocities, is still poorly calibrated. This parameter is critical to the use of this technique, and often leads to 5-10% uncertainties on the derived distances. Aims: We focus on empirically measuring the p-factor of a homogeneous sample of 29 LMC and 10 SMC Cepheids for which accurate average distances were estimated from eclipsing binary systems. Methods: We used the SPIPS algorithm, which is an implementation of the BW technique. Unlike other conventional methods, SPIPS combines all observables, i.e. radial velocities, multi-band photometry and interferometry, into a consistent physical modelling to estimate the parameters of the stars. The large number of observables and their redundancy ensure its robustness and improve the statistical precision. Results: We successfully estimated the p-factor of several Magellanic Cloud Cepheids. Combined with our previous Galactic results, we find the following P-p relation: p = -0.08 ± 0.04 (log P - 1.18) + 1.24 ± 0.02. We find no evidence of a metallicity-dependent p-factor. We also derive a new calibration of the period-radius relation, log R = 0.684 ± 0.007 (log P - 0.517) + 1.489 ± 0.002, with an intrinsic dispersion of 0.020. We detect an infrared excess for all stars at 3.6 μm and 4.5 μm, which might be the signature of circumstellar dust. We measure a mean offset of Δm3.6 = 0.057 ± 0.006 mag and Δm4.5 = 0.065 ± 0.008 mag. Conclusions: We provide a new P-p relation based on a multi-wavelength fit that can be used for the distance scale calibration from the BW method. The dispersion is due to the LMC and SMC width we took into account because individual Cepheid distances are unknown. The new P-R relation has a small intrinsic dispersion: 4.5% in radius. This precision will allow us to accurately apply the BW method to nearby galaxies. Finally, the infrared excesses we detect again raise the issue of using mid-IR wavelengths to derive the period-luminosity relation and to calibrate the Hubble constant. These IR excesses might be the signature of circumstellar dust, and are never taken into account when applying the BW method at those wavelengths. Our measured offsets may give an average bias of 2.8% on the distances derived through mid-IR P-L relations.
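The two fitted relations quoted above are straightforward to evaluate. The snippet below applies their central values to a few illustrative periods, ignoring the quoted uncertainties and intrinsic dispersion.

```python
import numpy as np

def p_factor(period_days):
    """Period-p-factor relation quoted above: p = -0.08 (log P - 1.18) + 1.24."""
    return -0.08 * (np.log10(period_days) - 1.18) + 1.24

def mean_radius(period_days):
    """Period-radius relation quoted above (R in solar radii):
    log R = 0.684 (log P - 0.517) + 1.489."""
    return 10 ** (0.684 * (np.log10(period_days) - 0.517) + 1.489)

for P in (3.0, 10.0, 30.0):   # illustrative Cepheid periods in days
    print(f"P = {P:5.1f} d  ->  p = {p_factor(P):.3f},  R = {mean_radius(P):6.1f} R_sun")
```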
NASA Technical Reports Server (NTRS)
Freedman, Wendy L.; Hughes, Shaun M.; Madore, Barry F.; Mould, Jeremy R.; Lee, Myung Gyoon; Stetson, Peter; Kennicutt, Robert C.; Turner, Anne; Ferrarese, Laura; Ford, Holland
1994-01-01
We report on the discovery of 30 new Cepheids in the nearby galaxy M81 based on observations using the Hubble Space Telescope (HST). The periods of these Cepheids lie in the range of 10-55 days, based on 18 independent epochs using the HST wide-band F555W filter. The HST F555W and F785LP data have been transformed to the Cousins standard V and I magnitude system using a ground-based calibration. Apparent period-luminosity relations at V and I were constructed, from which apparent distance moduli were measured with respect to assumed values of μ₀ = 18.50 mag and E(B - V) = 0.10 mag for the Large Magellanic Cloud. The difference in the apparent V and I moduli yields a measure of the difference in the total mean extinction between the M81 and the LMC Cepheid samples. A low total mean extinction to the M81 sample of E(B - V) = 0.03 ± 0.05 mag is obtained. The true distance modulus to M81 is determined to be 27.80 ± 0.20 mag, corresponding to a distance of 3.63 ± 0.34 Mpc. These data illustrate that with an optimal (power-law) sampling strategy, the HST provides a powerful tool for the discovery of extragalactic Cepheids and their application to the distance scale. M81 is the first calibrating galaxy in the target sample of the HST Key Project on the Extragalactic Distance Scale, the ultimate aim of which is to provide a value of the Hubble constant to 10% accuracy.
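The conversion from the quoted distance modulus to a physical distance is a one-line formula, d = 10^((μ₀ + 5)/5) pc; the snippet below reproduces the 3.63 Mpc figure and brackets it with the ±0.20 mag uncertainty.

```python
def modulus_to_distance_mpc(mu):
    """Convert a true distance modulus to a distance in Mpc: d = 10^((mu + 5)/5) pc."""
    return 10 ** ((mu + 5.0) / 5.0) / 1.0e6

# Values quoted above for M81 (mu0 = 27.80 +/- 0.20 mag)
for mu in (27.60, 27.80, 28.00):
    print(f"mu0 = {mu:.2f} mag  ->  d = {modulus_to_distance_mpc(mu):.2f} Mpc")
# 27.80 mag gives 3.63 Mpc; the +/-0.20 mag range spans roughly 3.3 to 4.0 Mpc
```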
2013-01-01
Background An important issue concerning the worldwide fight against stigma is the evaluation of psychiatrists’ beliefs and attitudes toward schizophrenia and mental illness in general. However, there is as yet no consensus on this matter in the literature, and results vary according to the stigma dimension assessed and to the cultural background of the sample. The aim of this investigation was to search for profiles of stigmatizing beliefs related to schizophrenia in a national sample of psychiatrists in Brazil. Methods A sample of 1414 psychiatrists were recruited from among those attending the 2009 Brazilian Congress of Psychiatry. A questionnaire was applied in face-to-face interviews. The questionnaire addressed four stigma dimensions, all in reference to individuals with schizophrenia: stereotypes, restrictions, perceived prejudice and social distance. Stigma item scores were included in latent profile analyses; the resulting profiles were entered into multinomial logistic regression models with sociodemographics, in order to identify significant correlates. Results Three profiles were identified. The “no stigma” subjects (n = 337) characterized individuals with schizophrenia in a positive light, disagreed with restrictions, and displayed a low level of social distance. The “unobtrusive stigma” subjects (n = 471) were significantly younger and displayed the lowest level of social distance, although most of them agreed with involuntary admission and demonstrated a high level of perceived prejudice. The “great stigma” subjects (n = 606) negatively stereotyped individuals with schizophrenia, agreed with restrictions and scored the highest on the perceived prejudice and social distance dimensions. In comparison with the first two profiles, this last profile comprised a significantly larger number of individuals who were in frequent contact with a family member suffering from a psychiatric disorder, as well as comprising more individuals who had no such family member. Conclusions Our study not only provides additional data related to an under-researched area but also reveals that psychiatrists are a heterogeneous group regarding stigma toward schizophrenia. The presence of different stigma profiles should be evaluated in further studies; this could enable anti-stigma initiatives to be specifically designed to effectively target the stigmatizing group. PMID:23517184
The effect of Cs-137 short-range spatial variability on soil after the Chernobyl disaster
NASA Astrophysics Data System (ADS)
Martynenko, Vladimir; Vakulovsky, Sergey; Linnik, Vitaly
2014-05-01
After the Chernobyl accident of 1986, large areas of Russia were contaminated by 137Cs. Post-depositional redistribution of 137Cs fallout across the land surface results from mechanical, physical, chemical, and biological processes operating in the soil system and from the grain-size selectivity associated with soil erosion and sediment transport processes. Data on 137Cs variability over short distances, obtained in the early period after the accident, are therefore of utmost importance. Measurements of the 137Cs deposit over the contaminated territory of Russia were mainly conducted by airborne gamma survey and were verified by soil sampling on 10 × 10 m test plots, with control soil sampling using the "envelope" method of fivefold sampling (one sample at the centre and four along the edges of the plot under study). Presented here are evaluation data of 137Cs contamination obtained in the Bryansk, Yaroslavl and Rostov regions in 1991. Test plots were selected at a distance of 50-100 m from a road, on matted areas with undisturbed soil structure. Sampling routes were laid out perpendicular to the directions crossing the main traces of radioactive contamination. Measurements were carried out on Canberra and Ortec gamma spectrometers. Each of the 5 samples of the "envelope" was measured separately; soil mixing was not applied. The 137Cs value for the Bryansk Region varied from 2.6 kBq/m2 to 2294 kBq/m2, while in the Yaroslavl and Rostov regions it varied from 0.44 kBq/m2 to 5.1 kBq/m2 and from 0.56 kBq/m2 to 22.2 kBq/m2, respectively. Statistical analysis of the 137Cs deposit at different plots provides strong evidence of nonuniform distribution in various landscapes and at different distances from the Chernobyl NPP. Such nonuniformity of 137Cs soil contamination within the 10 m extent of a plot is most likely related to nonuniformity of the initial aerosol contamination at the moment of deposition.
Yadav, Bechu K V; Nandy, S
2015-05-01
Mapping forest biomass is fundamental for estimating CO₂ emissions and for planning and monitoring forests and ecosystem productivity. The present study attempted to map aboveground woody biomass (AGWB) by integrating forest inventory, remote sensing and geostatistical techniques, viz. direct radiometric relationships (DRR), k-nearest neighbours (k-NN) and cokriging (CoK), and to evaluate their accuracy. A part of the Timli Forest Range of Kalsi Soil and Water Conservation Division, Uttarakhand, India was selected for the present study. Stratified random sampling was used to collect biophysical data from 36 sample plots of 0.1 ha (31.62 m × 31.62 m) size. Species-specific volumetric equations were used for calculating volume, which was multiplied by specific gravity to get biomass. Three forest-type density classes, viz. 10-40, 40-70 and >70% of Shorea robusta forest, and four non-forest classes were delineated using on-screen visual interpretation of IRS P6 LISS-III data of December 2012. The volume in different strata of forest-type density ranged from 189.84 to 484.36 m³ ha⁻¹. The total growing stock of the forest was found to be 2,024,652.88 m³. The AGWB ranged from 143 to 421 Mg ha⁻¹. Spectral bands and vegetation indices were used as independent variables and biomass as the dependent variable for DRR, k-NN and CoK. After validation and comparison, the k-NN method with Mahalanobis distance (root mean square error (RMSE) = 42.25 Mg ha⁻¹) was found to be the best method, followed by fuzzy distance and Euclidean distance with RMSE of 44.23 and 45.13 Mg ha⁻¹, respectively. DRR was found to be the least accurate method, with an RMSE of 67.17 Mg ha⁻¹. The study highlighted the potential of integrating forest inventory, remote sensing and geostatistical techniques for forest biomass mapping.
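A bare-bones version of k-NN imputation with Mahalanobis distance looks like the sketch below: each target pixel receives the mean biomass of the k field plots nearest to it in spectral space. The band values, plot biomasses and k = 3 are synthetic placeholders, not the study's data or tuning.

```python
import numpy as np

def knn_mahalanobis_impute(X_plots, y_plots, X_targets, k=3):
    """Impute biomass for target pixels as the mean of the k field plots that are
    nearest in spectral space under the Mahalanobis distance."""
    X_plots = np.asarray(X_plots, float)
    y_plots = np.asarray(y_plots, float)
    cov_inv = np.linalg.pinv(np.cov(X_plots, rowvar=False))
    preds = []
    for x in np.asarray(X_targets, float):
        diff = X_plots - x
        d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)   # squared Mahalanobis distance
        idx = np.argsort(d2)[:k]
        preds.append(y_plots[idx].mean())
    return np.array(preds)

rng = np.random.default_rng(7)
spectra = rng.random((36, 4))                                     # e.g. 4 bands/indices per plot
biomass = 140 + 280 * spectra[:, 0] + 10 * rng.normal(size=36)    # toy AGWB values, Mg/ha
pixels = rng.random((5, 4))                                       # spectra of unvisited pixels
print(np.round(knn_mahalanobis_impute(spectra, biomass, pixels, k=3), 1))
```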
Hongwarittorrn, Irin; Chaichanawongsaroj, Nuntaree; Laiwattanapaisal, Wanida
2017-12-01
A distance-based paper analytical device (dPAD) for loop-mediated isothermal amplification (LAMP) detection based on distance measurement was proposed. This approach relied on visual detection of the length of colour developed on the dPAD for semi-quantitative determination of the initial amount of genomic DNA. In this communication, E. coli DNA was chosen as the template DNA for the LAMP reaction. In accordance with the principle, the dPAD was immobilized with polyethylenimine (PEI), a strong cationic polymer, in the hydrophilic channel of the paper device. Hydroxynaphthol blue (HNB), a colourimetric indicator for monitoring the change of magnesium ion concentration in the LAMP reaction, was used to react with the immobilized PEI. The positive charges of PEI react with the negative charges of free HNB in the LAMP reaction, producing a blue colour deposit on the paper device. Consequently, a visible colour distance appeared within 5 min, and the length of this distance correlated with the amount of DNA in the sample. The distance-based PAD for visual detection of the LAMP reaction could quantify initial concentrations of genomic DNA as low as 4.14 × 10³ copies µL⁻¹. This distance-based, visual, semi-quantitative platform is a suitable choice for LAMP detection, particularly in resource-limited settings, because of its low cost, simple fabrication and operation, disposability and portability. Copyright © 2017 Elsevier B.V. All rights reserved.
Outcomes of photorefractive keratectomy in patients with atypical topography.
Movahedan, Hossein; Namvar, Ehsan; Farvardin, Mohsen
2017-11-01
Photorefractive keratectomy (PRK) carries a risk of serious complications such as corneal ectasia, which can reduce corrected distance visual acuity. The rate of complications of PRK is higher in patients with atypical topography. We aimed to determine the outcomes of photorefractive keratectomy in patients with atypical topography. This cross-sectional study was done in 2015 in Shiraz, Iran. We included 85 eyes in this study. The samples were selected using a simple random sampling method. All patients were evaluated for uncorrected distance visual acuity, corrected distance visual acuity, manifest refraction, corneal topography, central corneal thickness using Pentacam, slit-lamp microscopy, and detailed fundus evaluation. The postoperative examination was done 1-7 years after surgery. Data were analyzed using IBM SPSS version 21.0. To analyze the data, descriptive statistics (frequency, percentage, mean, and standard deviation), chi-square tests, and independent-samples t-tests were used. We studied 85 eyes. Among the patients, 23 (27.1%) were male and 62 (72.9%) were female. Mean age of the participants was 28.25 ± 5.55 years. Mean postoperative refraction was -0.37 ± 0.55 diopters. Keratoconus or corneal ectasia was not reported in any patient in this study. There was no statistically significant difference in the SI index before and after the operation (p = 0.736). Mean preoperative refraction was -3.84 ± 1.46 diopters in males and -4.20 ± 1.96 diopters in females; the difference was not statistically significant (p = 0.435). PRK is a safe and efficient photorefractive surgery and is associated with a low complication rate in patients with atypical topography.
Method for evaluating wind turbine wake effects on wind farm performance
NASA Technical Reports Server (NTRS)
Neustadter, H. E.; Spera, D. A.
1985-01-01
A method of testing the performance of a cluster of wind turbine units and data analysis equations are presented, which together form a simple and direct procedure for determining the reduction in energy output caused by the wake of an upwind turbine. This method appears to solve the problems presented by data scatter and wind variability. Test data from the three-unit Mod-2 wind turbine cluster at Goldendale, Washington, are analyzed to illustrate the application of the proposed method. In this sample case the reduction in energy was found to be about 10 percent when the Mod-2 units were separated by a distance equal to seven diameters and winds were below rated.
Ultrasonic characterization of single drops of liquids
Sinha, Dipen N.
1998-01-01
Ultrasonic characterization of single drops of liquids. The present invention includes the use of two closely spaced transducers, or one transducer and a closely spaced reflector plate, to form an interferometer suitable for ultrasonic characterization of droplet-size and smaller samples without the need for a container. The droplet is held between the interferometer elements, whose distance apart may be adjusted, by surface tension. The surfaces of the interferometer elements may be readily cleansed by a stream of solvent followed by purified air when it is desired to change samples. A single drop of liquid is sufficient for high-quality measurement. Examples of samples which may be investigated using the apparatus and method of the present invention include biological specimens (tear drops; blood and other body fluid samples; samples from tumors, tissues, and organs; secretions from tissues and organs; snake and bee venom, etc.) for diagnostic evaluation, samples in forensic investigations, and detection of drugs in small quantities.
Abdollahimohammad, Abdolghani; Ja'afar, Rogayah
2015-01-01
The goal of the current study was to identify associations between the learning style of nursing students and their cultural values and demographic characteristics. A non-probability purposive sampling method was used to gather data from two populations. All 156 participants were female, Muslim, and full-time degree students. Data were collected from April to June 2010 using two reliable and validated questionnaires: the Learning Style Scales and the Values Survey Module 2008 (VSM 08). A simple linear regression was run for each predictor before conducting multiple linear regression analysis. The forward selection method was used for variable selection. P-values ≤0.05 and ≤0.1 were considered to indicate significance and marginal significance, respectively. Moreover, multi-group confirmatory factor analysis was performed to determine the invariance of the Farsi and English versions of the VSM 08. The perceptive learning style was found to have a significant negative relationship with the power distance and monumentalism indices of the VSM 08. Moreover, a significant negative association was observed between the solitary learning style and the power distance index. However, no significant association was found between the analytic, competitive, and imaginative learning styles and cultural values (P>0.05). Likewise, no significant associations were observed between learning style, including the perceptive, solitary, analytic, competitive, and imaginative learning styles, and year of study or age (P>0.05). Students who reported low values on the power distance and monumentalism indices are more likely to prefer perceptive and solitary learning styles. Within each group of students in our study sample from the same school the year of study and age did not show any significant associations with learning style.
ERIC Educational Resources Information Center
Randolph, Justus
2005-01-01
A high quality review of the distance learning literature from 1992-1999 concluded that most of the research on distance learning had serious methodological flaws. This paper presents the results of a small-scale replication of that review. From three leading distance education journals, a sample of 66 articles was categorized by study type and…
Automated Stitching of Microtubule Centerlines across Serial Electron Tomograms
Weber, Britta; Tranfield, Erin M.; Höög, Johanna L.; Baum, Daniel; Antony, Claude; Hyman, Tony; Verbavatz, Jean-Marc; Prohaska, Steffen
2014-01-01
Tracing microtubule centerlines in serial section electron tomography requires microtubules to be stitched across sections; that is, lines from different sections need to be aligned, endpoints need to be matched at section boundaries to establish a correspondence between neighboring sections, and corresponding lines need to be connected across multiple sections. We present computational methods for these tasks: 1) An initial alignment is computed using a distance compatibility graph. 2) A fine alignment is then computed with a probabilistic variant of the iterative closest points algorithm, which we extended to handle the orientation of lines by introducing a periodic random variable to the probabilistic formulation. 3) Endpoint correspondence is established by formulating a matching problem in terms of a Markov random field and computing the best matching with belief propagation. Belief propagation is not generally guaranteed to converge to a minimum. We show how convergence can be achieved, nonetheless, with minimal manual input. In addition to stitching microtubule centerlines, the correspondence is also applied to transform and merge the electron tomograms. We applied the proposed methods to samples from the mitotic spindle in C. elegans, the meiotic spindle in X. laevis, and sub-pellicular microtubule arrays in T. brucei. The methods were able to stitch microtubules across section boundaries in good agreement with experts' opinions for the spindle samples. Results, however, were not satisfactory for the microtubule arrays. For certain experiments, such as an analysis of the spindle, the proposed methods can replace manual expert tracing and thus enable the analysis of microtubules over long distances with reasonable manual effort. PMID:25438148
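As a much-simplified stand-in for the endpoint-correspondence step (the abstract describes a Markov random field solved with belief propagation), the sketch below pairs endpoints across a section boundary by minimising total Euclidean distance with the Hungarian algorithm and rejects pairs beyond a cutoff; the point sets, noise level, and 100 nm cutoff are invented for illustration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_endpoints(pts_a, pts_b, max_dist=100.0):
    """Pair microtubule endpoints on the two sides of a section boundary by
    minimising the total Euclidean distance; drop pairs farther than max_dist (nm)."""
    a, b = np.asarray(pts_a, float), np.asarray(pts_b, float)
    cost = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    return [(int(i), int(j)) for i, j in zip(rows, cols) if cost[i, j] <= max_dist]

rng = np.random.default_rng(3)
ends_a = rng.uniform(0, 2000, size=(6, 2))              # endpoints in section i (nm)
ends_b = ends_a + rng.normal(scale=20.0, size=(6, 2))   # same lines, slightly shifted, in section i+1
print(match_endpoints(ends_a, ends_b))                  # e.g. [(0, 0), (1, 1), ...]
```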
Paz, Andrea; Crawford, Andrew J
2012-11-01
Molecular markers offer a universal source of data for quantifying biodiversity. DNA barcoding uses a standardized genetic marker and a curated reference database to identify known species and to reveal cryptic diversity within well-sampled clades. Rapid biological inventories, e.g. rapid assessment programs (RAPs), unlike most barcoding campaigns, are focused on particular geographic localities rather than on clades. Because of the potentially sparse phylogenetic sampling, the addition of DNA barcoding to RAPs may present a greater challenge for the identification of named species or for revealing cryptic diversity. In this article we evaluate the use of DNA barcoding for quantifying lineage diversity within a single sampling site as compared to clade-based sampling, and present examples from amphibians. We compared algorithms for identifying DNA barcode clusters (e.g. species, cryptic species or Evolutionary Significant Units) using previously published DNA barcode data obtained from geography-based sampling at a site in Central Panama and from clade-based sampling in Madagascar. We found that clustering algorithms based on genetic distance performed similarly on sympatric as well as clade-based barcode data, while a promising coalescent-based method performed poorly on sympatric data. The various clustering algorithms were also compared in terms of speed and software implementation. Although each method has its shortcomings in certain contexts, we recommend the use of the ABGD method, which not only performs fairly well under either sampling method, but does so in a few seconds and with a user-friendly Web interface.
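A simple distance-threshold clustering of a pairwise genetic-distance matrix illustrates the family of methods being compared; the 3% cutoff, the toy matrix, and single linkage are placeholder choices rather than the ABGD procedure itself.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def barcode_clusters(dist_matrix, threshold=0.03):
    """Group DNA barcodes with single-linkage clustering cut at a fixed pairwise
    genetic-distance threshold (e.g. 3% uncorrected p-distance)."""
    condensed = squareform(np.asarray(dist_matrix), checks=False)
    tree = linkage(condensed, method="single")
    return fcluster(tree, t=threshold, criterion="distance")

# Toy 5x5 genetic-distance matrix: two tight clusters of barcodes
D = np.array([
    [0.00, 0.01, 0.01, 0.12, 0.13],
    [0.01, 0.00, 0.02, 0.12, 0.13],
    [0.01, 0.02, 0.00, 0.11, 0.12],
    [0.12, 0.12, 0.11, 0.00, 0.01],
    [0.13, 0.13, 0.12, 0.01, 0.00],
])
print(barcode_clusters(D, threshold=0.03))   # e.g. [1 1 1 2 2]
```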
Dental hygiene students' perceptions of distance learning: do they change over time?
Sledge, Rhonda; Vuk, Jasna; Long, Susan
2014-02-01
The University of Arkansas for Medical Sciences dental hygiene program established a distant site where the didactic curriculum was broadcast via interactive video from the main campus to the distant site, supplemented with on-line learning via Blackboard. This study compared the perceptions of students towards distance learning as they progressed through the 21 month curriculum. Specifically, the study sought to answer the following questions: Is there a difference in the initial perceptions of students on the main campus and at the distant site toward distance learning? Do students' perceptions change over time with exposure to synchronous distance learning over the course of the curriculum? All 39 subjects were women between the ages of 20 and 35 years. Of the 39 subjects, 37 were Caucasian and 2 were African-American. A 15-question Likert scale survey was administered at 4 different periods during the 21 month program to compare changes in perceptions toward distance learning as students progressed through the program. An independent sample t-test and ANOVA were utilized for statistical analysis. At the beginning of the program, independent samples t-test revealed that students at the main campus (n=34) perceived statistically significantly higher effectiveness of distance learning than students at the distant site (n=5). Repeated measures of ANOVA revealed that perceptions of students at the main campus on effectiveness and advantages of distance learning statistically significantly decreased whereas perceptions of students at distant site statistically significantly increased over time. Distance learning in the dental hygiene program was discussed, and replication of the study with larger samples of students was recommended.
Cosmic distance duality and cosmic transparency
NASA Astrophysics Data System (ADS)
Nair, Remya; Jhingan, Sanjay; Jain, Deepak
2012-12-01
We compare distance measurements obtained from two distance indicators: supernova observations (standard candles) and baryon acoustic oscillation (BAO) data (standard rulers). The Union2 sample of supernovae, together with BAO data from SDSS, 6dFGS and the latest BOSS and WiggleZ surveys, is used to search for deviations from the distance duality relation. We find that the supernovae are brighter than expected from BAO measurements. The luminosity distances tend to be smaller than expected from angular diameter distance estimates, as also found in earlier works on distance duality, but the trend is not statistically significant. This further constrains cosmic transparency.
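The Etherington relation being tested can be checked numerically as η(z) = d_L / [(1+z)² d_A], with η = 1 in a transparent universe. The snippet below evaluates η for a few illustrative redshifts and distances constructed to satisfy the relation exactly; the numbers are not the Union2 or BAO values.

```python
import numpy as np

def duality_parameter(z, d_lum_mpc, d_ang_mpc):
    """Distance-duality parameter eta(z) = d_L / [(1+z)^2 d_A]; eta = 1 if the
    Etherington relation holds (transparent universe, photon number conserved)."""
    return np.asarray(d_lum_mpc) / ((1.0 + np.asarray(z)) ** 2 * np.asarray(d_ang_mpc))

z = np.array([0.1, 0.35, 0.57])
d_l = np.array([475.0, 1890.0, 3330.0])     # illustrative luminosity distances (Mpc)
d_a = d_l / (1.0 + z) ** 2                  # angular diameter distances built to satisfy duality
print(duality_parameter(z, d_l, d_a))       # [1. 1. 1.]
```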
Optimization of propagation-based x-ray phase-contrast tomography for breast cancer imaging
NASA Astrophysics Data System (ADS)
Baran, P.; Pacile, S.; Nesterets, Y. I.; Mayo, S. C.; Dullin, C.; Dreossi, D.; Arfelli, F.; Thompson, D.; Lockie, D.; McCormack, M.; Taba, S. T.; Brun, F.; Pinamonti, M.; Nickson, C.; Hall, C.; Dimmock, M.; Zanconati, F.; Cholewa, M.; Quiney, H.; Brennan, P. C.; Tromba, G.; Gureyev, T. E.
2017-03-01
The aim of this study was to optimise the experimental protocol and data analysis for in-vivo breast cancer x-ray imaging. Results are presented of the experiment at the SYRMEP beamline of Elettra Synchrotron using the propagation-based phase-contrast mammographic tomography method, which incorporates not only absorption, but also x-ray phase information. In this study the images of breast tissue samples, of a size corresponding to a full human breast, with radiologically acceptable x-ray doses were obtained, and the degree of improvement of the image quality (from the diagnostic point of view) achievable using propagation-based phase-contrast image acquisition protocols with proper incorporation of x-ray phase retrieval into the reconstruction pipeline was investigated. Parameters such as the x-ray energy, sample-to-detector distance and data processing methods were tested, evaluated and optimized with respect to the estimated diagnostic value using a mastectomy sample with a malignant lesion. The results of quantitative evaluation of images were obtained by means of radiological assessment carried out by 13 experienced specialists. A comparative analysis was performed between the x-ray and the histological images of the specimen. The results of the analysis indicate that, within the investigated range of parameters, both the objective image quality characteristics and the subjective radiological scores of propagation-based phase-contrast images of breast tissues monotonically increase with the strength of phase contrast which in turn is directly proportional to the product of the radiation wavelength and the sample-to-detector distance. The outcomes of this study serve to define the practical imaging conditions and the CT reconstruction procedures appropriate for low-dose phase-contrast mammographic imaging of live patients at specially designed synchrotron beamlines.
Functional connectivity analysis in EEG source space: The choice of method
Knyazeva, Maria G.
2017-01-01
Functional connectivity (FC) is among the most informative features derived from EEG. However, the most straightforward sensor-space analysis of FC is unreliable owing to volume conductance effects. An alternative—source-space analysis of FC—is optimal for high- and mid-density EEG (hdEEG, mdEEG); however, it is questionable for widely used low-density EEG (ldEEG) because of inadequate surface sampling. Here, using simulations, we investigate the performance of the two source FC methods, the inverse-based source FC (ISFC) and the cortical partial coherence (CPC). To examine the effects of localization errors of the inverse method on the FC estimation, we simulated an oscillatory source with varying locations and SNRs. To compare the FC estimations by the two methods, we simulated two synchronized sources with varying between-source distance and SNR. The simulations were implemented for hdEEG, mdEEG, and ldEEG. We showed that the performance of both methods deteriorates for deep sources owing to their inaccurate localization and smoothing. The accuracy of both methods improves with the increasing between-source distance. The best ISFC performance was achieved using hd/mdEEG, while the best CPC performance was observed with ldEEG. In conclusion, with hdEEG, ISFC outperforms CPC and therefore should be the preferred method. In the studies based on ldEEG, the CPC is a method of choice. PMID:28727750
NASA Technical Reports Server (NTRS)
Federspiel, Martin; Sandage, Allan; Tammann, G. A.
1994-01-01
The observational selection bias properties of the large Mathewson-Ford-Buchhorn (MFB) sample of galaxies are demonstrated by showing that the apparent Hubble constant incorrectly increases outward when determined using Tully-Fisher (TF) photometric distances that are uncorrected for bias. It is further shown that the value of H₀ so determined is also multivalued at a given redshift when it is calculated by the TF method using galaxies with different line widths. The method of removing this unphysical contradiction is developed following the model of the bias set out in Paper II. The model developed further here shows that the appropriate TF magnitude of a galaxy that is drawn from a flux-limited catalog not only is a function of line width but, even in the most idealistic cases, requires a triple-entry correction depending on line width, apparent magnitude, and catalog limit. Using the distance-limited subset of the data, it is shown that the mean intrinsic dispersion of a bias-free TF relation is high. The dispersion depends on line width, decreasing from σ(M) = 0.7 mag for galaxies with rotational velocities less than 100 km s⁻¹ to σ(M) = 0.4 mag for galaxies with rotational velocities greater than 250 km s⁻¹. These dispersions are so large that the random errors of the bias-free TF distances are too gross to detect any peculiar motions of individual galaxies, but taken together the data show again the offset of 500 km s⁻¹ found both by Dressler & Faber and by MFB for galaxies in the direction of the putative Great Attractor, described now in a different way. The maximum amplitude of the bulk streaming motion at the Local Group is approximately 500 km s⁻¹, but the perturbation dies out, approaching the Machian frame defined by the CMB at a distance of approximately 80 Mpc (v ≈ 4000 km s⁻¹). This decay to zero perturbation at v ≈ 4000 km s⁻¹ argues against existing models with a single attractor at approximately 4500 km s⁻¹ (the Great Attractor model) pulling the local region. Rather, the cause of the perturbation appears to be the well-known clumpy mass distribution within 4000 km s⁻¹ in the busy directions of Hydra, Centaurus, Antlia and Dorado, as postulated earlier (Tammann & Sandage 1985).
Neupane, Ghanashyam; McLing, Travis
2017-04-01
These brine samples are collected from the Soda Geyser (a thermal feature, temperature ~30 C) in Soda Springs, Idaho. These samples also represent the overthrust brines typical of oil and gas plays in western Wyoming. Samples were collected from the source and along the flow channel at different distances from the source. By collecting and analyzing these samples we are able to increase the density and quality of data from the western Wyoming oil and gas plays. Furthermore, the sampling approach also helped determine the systematic variation in REE concentration with the sampling distance from the source. Several geochemical processes are at work along the flow channels, such as degassing, precipitation, sorption, etc.
Probabilistic determination of probe locations from distance data
Xu, Xiao-Ping; Slaughter, Brian D.; Volkmann, Niels
2013-01-01
Distance constraints, in principle, can be employed to determine information about the location of probes within a three-dimensional volume. Traditional methods for locating probes from distance constraints involve optimization of scoring functions that measure how well the probe location fits the distance data, exploring only a small subset of the scoring function landscape in the process. These methods are not guaranteed to find the global optimum and provide no means to relate the identified optimum to all other optima in scoring space. Here, we introduce a method for the location of probes from distance information that is based on probability calculus. This method allows exploration of the entire scoring space by directly combining probability functions representing the distance data and information about attachment sites. The approach is guaranteed to identify the global optimum and enables the derivation of confidence intervals for the probe location as well as statistical quantification of ambiguities. We apply the method to determine the location of a fluorescence probe using distances derived by FRET and show that the resulting location matches that independently derived by electron microscopy. PMID:23770585
Learning Semantic Tags from Big Data for Clinical Text Representation.
Li, Yanpeng; Liu, Hongfang
2015-01-01
In clinical text mining, one of the biggest challenges is to represent medical terminologies and n-gram terms in sparse medical reports using either supervised or unsupervised methods. Addressing this issue, we propose a novel method for word and n-gram representation at the semantic level. We first represent each word by its distance to a set of reference features calculated by a reference distance estimator (RDE) learned from labeled and unlabeled data, and then generate new features using simple techniques of discretization, random sampling, and merging. The new features are a set of binary rules that can be interpreted as semantic tags derived from words and n-grams. We show that the new features significantly outperform classical bag-of-words and n-grams in the task of heart disease risk factor extraction in the i2b2 2014 challenge. It is promising to see that semantic tags can be used to replace the original text entirely with even better prediction performance, as well as to derive new rules beyond the lexical level.
Anomalous expansion of the copper-apical-oxygen distance in superconducting cuprate bilayers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Hua; Yacoby, Yizhak; Butko, Vladimir Y.
2010-08-27
We have introduced an improved x-ray phase-retrieval method with unprecedented speed of convergence and precision, and used it to determine with sub-Angstrom resolution the complete atomic structure of epitaxial La2-xSrxCuO4 ultrathin films. We focus on superconducting heterostructures built from constituent materials that are not superconducting in bulk samples. Single-phase metallic or superconducting films are also studied for comparison. The results show that this phase-retrieval diffraction method enables accurate measurement of structural modifications in near-surface layers, which may be critically important for elucidation of surface-sensitive experiments. Specifically we find that, while the copper-apical-oxygen distance remains approximately constant in single-phase films, it shows a dramatic increase from the metallic-insulating interface of the bilayer towards the surface by as much as 0.45 Å. The apical-oxygen displacement is known to have a profound effect on the superconducting transition temperature.
Influence of Waveform Characteristics on LiDAR Ranging Accuracy and Precision
Yang, Bingwei; Xie, Xinhao; Li, Duan
2018-01-01
Time-of-flight (TOF) light detection and ranging (LiDAR) calculates distance from the flight time between start and stop signals. In our lab-built LiDAR, two ranging systems measure this flight time: a time-to-digital converter (TDC) that counts the time between trigger signals, and an analog-to-digital converter (ADC) that processes the sampled start/stop pulse waveforms for time estimation. We study the influence of waveform characteristics on the ranging accuracy and precision of the two systems. Comparing waveform-based ranging (WR) with analog discrete-return ranging (AR), a peak detection method (WR-PK) shows the best ranging performance because of its short execution time, high ranging accuracy, and stable precision. Based on the maximal information coefficient (MIC), a statistical measure of dependence, WR-PK precision has a strong linear relationship with the standard deviation of the received pulse width. Thus, keeping the received pulse width as stable as possible when measuring a constant distance can improve ranging precision. PMID:29642639
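To make the WR-PK idea concrete, the sketch below (an illustrative reconstruction, not the authors' code) picks the start and stop pulse peaks in a digitized waveform and converts the peak-to-peak delay into a one-way range; the 5 GS/s sampling rate, pulse shapes, and thresholds are assumed for the toy example.

```python
# Illustrative waveform-based peak-detection (WR-PK) ranging sketch.
import numpy as np
from scipy.signal import find_peaks

def wr_pk_distance(waveform, fs, c=3.0e8):
    """Estimate range from the start and stop pulse peaks in a sampled waveform."""
    # keep only well-separated peaks above 30% of the maximum (assumed thresholds)
    peaks, _ = find_peaks(waveform, height=0.3 * waveform.max(), distance=int(10e-9 * fs))
    if len(peaks) < 2:
        raise ValueError("need both a start and a stop pulse")
    tof = (peaks[1] - peaks[0]) / fs        # time of flight between the two peaks
    return 0.5 * c * tof                    # two-way travel time -> one-way distance

def gaussian_pulse(t, t0, width):
    return np.exp(-0.5 * ((t - t0) / width) ** 2)

# synthetic example: two pulses 66.7 ns apart, i.e. a target at ~10 m
fs = 5e9                                    # 5 GS/s ADC (assumed)
t = np.arange(0.0, 200e-9, 1.0 / fs)
wf = gaussian_pulse(t, 20e-9, 2e-9) + 0.8 * gaussian_pulse(t, 20e-9 + 2 * 10.0 / 3e8, 2e-9)
print(f"estimated range: {wr_pk_distance(wf, fs):.2f} m")
```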
Molecular taxonomy of phytopathogenic fungi: a case study in Peronospora.
Göker, Markus; García-Blázquez, Gema; Voglmayr, Hermann; Tellería, M Teresa; Martín, María P
2009-07-29
Inappropriate taxon definitions may have severe consequences in many areas. For instance, biologically sensible species delimitation of plant pathogens is crucial for measures such as plant protection or biological control and for comparative studies involving model organisms. However, delimiting species is challenging in the case of organisms for which often only molecular data are available, such as prokaryotes, fungi, and many unicellular eukaryotes. Even in the case of organisms with well-established morphological characteristics, molecular taxonomy is often necessary to emend current taxonomic concepts and to analyze DNA sequences directly sampled from the environment. Typically, for this purpose clustering approaches to delineate molecular operational taxonomic units have been applied using arbitrary choices regarding the distance threshold values, and the clustering algorithms. Here, we report on a clustering optimization method to establish a molecular taxonomy of Peronospora based on ITS nrDNA sequences. Peronospora is the largest genus within the downy mildews, which are obligate parasites of higher plants, and includes various economically important pathogens. The method determines the distance function and clustering setting that result in an optimal agreement with selected reference data. Optimization was based on both taxonomy-based and host-based reference information, yielding the same outcome. Resampling and permutation methods indicate that the method is robust regarding taxon sampling and errors in the reference data. Tests with newly obtained ITS sequences demonstrate the use of the re-classified dataset in molecular identification of downy mildews. A corrected taxonomy is provided for all Peronospora ITS sequences contained in public databases. Clustering optimization appears to be broadly applicable in automated, sequence-based taxonomy. The method connects traditional and modern taxonomic disciplines by specifically addressing the issue of how to optimally account for both traditional species concepts and genetic divergence.
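A minimal sketch of the clustering-optimization idea, assuming a precomputed pairwise distance matrix and a reference partition: it scans linkage methods and distance thresholds and keeps the setting with the best adjusted Rand index. This is a generic reconstruction, not the authors' software, and the toy distances and threshold grid are invented for illustration.

```python
# Scan (linkage method, distance threshold) pairs and keep the setting whose
# clusters agree best with a reference partition.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.metrics import adjusted_rand_score

def optimize_clustering(dist_matrix, reference_labels, thresholds,
                        methods=("single", "average", "complete")):
    condensed = squareform(dist_matrix, checks=False)
    best = (-1.0, None, None)                       # (score, method, threshold)
    for method in methods:
        tree = linkage(condensed, method=method)
        for t in thresholds:
            labels = fcluster(tree, t=t, criterion="distance")
            score = adjusted_rand_score(reference_labels, labels)
            if score > best[0]:
                best = (score, method, t)
    return best

# toy example: 6 "sequences" with pairwise distances and a 2-species reference
D = np.array([[0.00, 0.01, 0.02, 0.20, 0.21, 0.19],
              [0.01, 0.00, 0.02, 0.22, 0.20, 0.21],
              [0.02, 0.02, 0.00, 0.21, 0.20, 0.20],
              [0.20, 0.22, 0.21, 0.00, 0.01, 0.02],
              [0.21, 0.20, 0.20, 0.01, 0.00, 0.02],
              [0.19, 0.21, 0.20, 0.02, 0.02, 0.00]])
reference = [0, 0, 0, 1, 1, 1]
print(optimize_clustering(D, reference, thresholds=np.linspace(0.005, 0.3, 60)))
```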
Real-time Interpolation for True 3-Dimensional Ultrasound Image Volumes
Ji, Songbai; Roberts, David W.; Hartov, Alex; Paulsen, Keith D.
2013-01-01
We compared trilinear interpolation to voxel nearest neighbor and distance-weighted algorithms for fast and accurate processing of true 3-dimensional ultrasound (3DUS) image volumes. In this study, the computational efficiency and interpolation accuracy of the 3 methods were compared on the basis of a simulated 3DUS image volume, 34 clinical 3DUS image volumes from 5 patients, and 2 experimental phantom image volumes. We show that trilinear interpolation improves interpolation accuracy over both the voxel nearest neighbor and distance-weighted algorithms yet achieves real-time computational performance that is comparable to the voxel nearest neighbor algorithm (1–2 orders of magnitude faster than the distance-weighted algorithm) as well as the fastest pixel-based algorithms for processing tracked 2-dimensional ultrasound images (0.035 seconds per 2-dimensional cross-sectional image [76,800 pixels interpolated, or 0.46 ms/1000 pixels] and 1.05 seconds per full volume with a 1-mm3 voxel size [4.6 million voxels interpolated, or 0.23 ms/1000 voxels]). On the basis of these results, trilinear interpolation is recommended as a fast and accurate interpolation method for rectilinear sampling of 3DUS image acquisitions, which is required to facilitate subsequent processing and display during operating room procedures such as image-guided neurosurgery. PMID:21266563
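For reference, a minimal trilinear interpolation routine of the kind compared in the study (an illustrative sketch, not the authors' implementation); the toy 3x3x3 volume is invented.

```python
# Trilinear interpolation of a 3-D volume at fractional voxel coordinates.
import numpy as np

def trilinear(volume, x, y, z):
    """Interpolate volume[z, y, x] at fractional coordinates (x, y, z)."""
    x0, y0, z0 = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
    x1, y1, z1 = x0 + 1, y0 + 1, z0 + 1
    dx, dy, dz = x - x0, y - y0, z - z0
    # values of the 8 surrounding voxels
    c000 = volume[z0, y0, x0]; c100 = volume[z0, y0, x1]
    c010 = volume[z0, y1, x0]; c110 = volume[z0, y1, x1]
    c001 = volume[z1, y0, x0]; c101 = volume[z1, y0, x1]
    c011 = volume[z1, y1, x0]; c111 = volume[z1, y1, x1]
    # interpolate along x, then y, then z
    c00 = c000 * (1 - dx) + c100 * dx
    c01 = c001 * (1 - dx) + c101 * dx
    c10 = c010 * (1 - dx) + c110 * dx
    c11 = c011 * (1 - dx) + c111 * dx
    c0 = c00 * (1 - dy) + c10 * dy
    c1 = c01 * (1 - dy) + c11 * dy
    return c0 * (1 - dz) + c1 * dz

vol = np.arange(27, dtype=float).reshape(3, 3, 3)   # toy 3x3x3 volume
print(trilinear(vol, 0.5, 0.5, 0.5))                # expected 6.5 for this ramp
```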
Peculiar velocity measurement in a clumpy universe
NASA Astrophysics Data System (ADS)
Habibi, Farhang; Baghram, Shant; Tavasoli, Saeed
Aims: In this work, we address the issue of peculiar velocity measurement in a perturbed Friedmann universe using the deviations of measured luminosity distances of standard candles from the background FRW universe. We want to show and quantify the statement that at intermediate redshifts (0.5 < z < 2), deviations from the background FRW model are not uniquely governed by peculiar velocities; luminosity distances are also modified by gravitational lensing. We also want to indicate the importance of relativistic calculations for peculiar velocity measurement at all redshifts. Methods: For this task, we discuss the relativistic corrections to luminosity distance and redshift measurement and show the contribution of each of the corrections: the lensing term, the peculiar velocity of the source, and the Sachs-Wolfe effect. We then use the Union 2 SNe Ia sample to investigate the relativistic effects we consider. Results: We show that using the conventional peculiar velocity method, which ignores the lensing effect, results in an overestimate of the measured peculiar velocities at intermediate redshifts, and we quantify this effect. We show that at low redshifts the lensing effect is negligible compared with the effect of peculiar velocity. From the observational point of view, we show that the uncertainties on the luminosities of the present SNe Ia data prevent us from precisely measuring the peculiar velocities even at low redshifts (z < 0.2).
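For orientation, a commonly used low-redshift relation between the observed redshift, the cosmological redshift inferred from the luminosity distance, and the line-of-sight peculiar velocity; this is a standard approximation that ignores the lensing and Sachs-Wolfe terms the paper is concerned with, not the paper's full relativistic treatment.

```latex
% Standard low-z approximation: the observed redshift splits into a
% cosmological part and a Doppler part, so with z_cos inferred from the
% measured luminosity distance D_L(z_cos),
\[
  1 + z_{\mathrm{obs}} = (1 + z_{\mathrm{cos}})\left(1 + \frac{v_{\mathrm{pec}}}{c}\right),
  \qquad
  v_{\mathrm{pec}} \simeq \frac{c\,(z_{\mathrm{obs}} - z_{\mathrm{cos}})}{1 + z_{\mathrm{cos}}}.
\]
```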
How many fish? Comparison of two underwater visual sampling methods for monitoring fish communities
Sini, Maria; Vatikiotis, Konstantinos; Katsoupis, Christos
2018-01-01
Background Underwater visual surveys (UVSs) for monitoring fish communities are preferred over fishing surveys in certain habitats, such as rocky or coral reefs and seagrass beds and are the standard monitoring tool in many cases, especially in protected areas. However, despite their wide application there are potential biases, mainly due to imperfect detectability and the behavioral responses of fish to the observers. Methods The performance of two methods of UVSs were compared to test whether they give similar results in terms of fish population density, occupancy, species richness, and community composition. Distance sampling (line transects) and plot sampling (strip transects) were conducted at 31 rocky reef sites in the Aegean Sea (Greece) using SCUBA diving. Results Line transects generated significantly higher values of occupancy, species richness, and total fish density compared to strip transects. For most species, density estimates differed significantly between the two sampling methods. For secretive species and species avoiding the observers, the line transect method yielded higher estimates, as it accounted for imperfect detectability and utilized a larger survey area compared to the strip transect method. On the other hand, large-scale spatial patterns of species composition were similar for both methods. Discussion Overall, both methods presented a number of advantages and limitations, which should be considered in survey design. Line transects appear to be more suitable for surveying secretive species, while strip transects should be preferred at high fish densities and for species of high mobility. PMID:29942703
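A hedged sketch contrasting the two estimators in their simplest textbook form (not the study's analysis): strip-transect density assumes perfect detection within a fixed half-width, while a half-normal line-transect estimator widens the effective strip using the observed perpendicular distances. The transect length, half-width, and simulated sightings below are assumptions.

```python
# Toy strip-transect vs. half-normal line-transect density estimators.
import numpy as np

def strip_density(n, w, L):
    """Fish per unit area: n detections in a strip of half-width w and length L,
    assuming perfect detection inside the strip."""
    return n / (2.0 * w * L)

def halfnormal_line_density(perp_distances, L):
    """Line-transect estimate with half-normal detection g(x) = exp(-x^2 / 2s^2).

    The maximum-likelihood scale is s^2 = mean(x^2); the effective strip
    half-width is mu = s * sqrt(pi / 2)."""
    x = np.asarray(perp_distances, dtype=float)
    s = np.sqrt(np.mean(x ** 2))
    mu = s * np.sqrt(np.pi / 2.0)
    return len(x) / (2.0 * mu * L)

# toy data: 40 sightings along a 100 m transect
rng = np.random.default_rng(1)
dists = np.abs(rng.normal(0.0, 2.0, size=40))       # perpendicular distances (m)
print("strip estimate:", strip_density(len(dists), w=5.0, L=100.0))
print("line  estimate:", halfnormal_line_density(dists, L=100.0))
```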
Kinematic structures of the solar neighbourhood revealed by Gaia DR1/TGAS and RAVE
NASA Astrophysics Data System (ADS)
Kushniruk, I.; Schirmer, T.; Bensby, T.
2017-12-01
Context. The velocity distribution of stars in the solar neighbourhood is inhomogeneous and rich with stellar streams and kinematic structures. These may retain important clues regarding the formation and dynamical history of the Milky Way. However, the nature and origin of many of the streams and structures are unclear, hindering our understanding of how the Milky Way formed and evolved. Aims: We aim to study the velocity distribution of stars in the solar neighbourhood and investigate the properties of individual kinematic structures in order to improve our understanding of their origins. Methods: Using the astrometric data provided by Gaia DR1/TGAS and radial velocities from RAVE DR5 we perform a wavelet analysis with the à trous algorithm on 55 831 stars that have U and V velocity uncertainties less than 4 km s-1. An auto-convolution histogram method is used to filter the output data, and we then run Monte Carlo simulations to verify that the detected structures are real and are not caused by noise due to velocity uncertainties. Additionally we analysed our stellar sample by splitting all stars into a nearby sample (<300 pc) and a distant sample (>300 pc), and two chemically defined samples that to a first degree represent the thin and the thick disks. Results: We detect 19 kinematic structures in the solar neighbourhood in the range of scales 3-16 km s-1 at the 3σ confidence level. Among them we identified well-known groups (such as Hercules, Sirius, Coma Berenices, Pleiades, and Wolf 630), confirmed recently detected groups (such as Antoja12 and Bobylev16), and detected a new structure at (U,V) ≈ (37,8) km s-1. Another three new groups are tentatively detected, but require further confirmation. Some of the detected groups show a clear dependence on distance in the sense that they are only present in the nearby sample (<300 pc), and others appear to be correlated with chemistry as they are only present in one of the chemically defined thin and thick disk samples. Conclusions: With the much enlarged stellar sample and the much increased precision in distances and proper motions provided by Gaia DR1/TGAS, we have shown that the velocity distribution of stars in the solar neighbourhood contains more structures than previously known. A new feature is discovered and three recently detected groups are confirmed at a high confidence level. Dividing the sample based on distance and/or metallicity shows that there is a variety of structures forming large-scale and small-scale groups; some of them show clear metallicity trends, while others contain a mixture of stars from both disks. Based on these findings we discuss possible origins of each group.
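A generic sketch of the à trous (stationary) wavelet decomposition applied to a U-V velocity histogram, using the standard B3-spline kernel; this is a textbook version with invented toy data, not the authors' pipeline or their auto-convolution filtering step.

```python
# "A trous" wavelet decomposition of a 2-D velocity histogram.
import numpy as np
from scipy.ndimage import convolve1d

def a_trous(image, n_scales):
    """Return wavelet planes w_1..w_n and the final smooth plane c_n."""
    b3 = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    c = image.astype(float)
    planes = []
    for j in range(n_scales):
        step = 2 ** j                               # insert "holes" between kernel taps
        kernel = np.zeros(4 * step + 1)
        kernel[::step] = b3
        smooth = convolve1d(c, kernel, axis=0, mode="reflect")
        smooth = convolve1d(smooth, kernel, axis=1, mode="reflect")
        planes.append(c - smooth)                   # structure at scale ~2^j bins
        c = smooth
    return planes, c

# toy example: U, V velocities (km/s) of a smooth background plus a small clump
rng = np.random.default_rng(0)
U = np.concatenate([rng.normal(0, 35, 5000), rng.normal(37, 4, 300)])
V = np.concatenate([rng.normal(-15, 25, 5000), rng.normal(8, 4, 300)])
H, _, _ = np.histogram2d(U, V, bins=120, range=[[-120, 120], [-120, 120]])
planes, residual = a_trous(H, n_scales=4)
print([p.shape for p in planes], residual.shape)
```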
Comparative evaluation of ultrasound scanner accuracy in distance measurement
NASA Astrophysics Data System (ADS)
Branca, F. P.; Sciuto, S. A.; Scorza, A.
2012-10-01
The aim of the present study is to develop and compare two different automatic methods for accuracy evaluation in ultrasound phantom measurements on B-mode images. Both give as a result the relative error e between distances measured by 14 brand-new ultrasound medical scanners and nominal distances among nylon wires embedded in a reference test object. The first method is based on a least squares estimation, while the second applies the mean value of the same distance evaluated at different locations in the ultrasound image (same-distance method). Results for both methods are presented and explained.
Clast comminution during pyroclastic density current transport: Mt St Helens
NASA Astrophysics Data System (ADS)
Dawson, B.; Brand, B. D.; Dufek, J.
2011-12-01
Volcanic clasts within pyroclastic density currents (PDCs) tend to be more rounded than those in fall deposits. This rounding reflects degrees of comminution during transport, which produces an increase in fine-grained ash with distance from source (Manga, M., Patel, A., Dufek, J. 2011. Bull Volcanol 73: 321-333). The amount of ash produced by comminution can potentially affect runout distance, deposit sorting, and the volume of ash lofted into the upper atmosphere, and can increase internal pore pressure (e.g., Wohletz, K., Sheridan, M. F., Brown, W.K. 1989. J Geophy Res, 94, 15703-15721). For example, increased pore pressure has been shown to produce longer runout distances than non-comminuted PDC flows (e.g., Dufek, J., and M. Manga, 2008. J. Geophy Res, 113). We build on the work of Manga et al. (2011) by completing a pumice abrasion study for two well-exposed flow units from the May 18th, 1980 eruption of Mt St Helens (MSH). To quantify differences in comminution from source, sampling and the image analysis technique developed in Manga et al., 2010 were carried out at proximal, medial, and distal distances from the source. Within the units observed, data were taken from the base, middle, and pumice lobes within the outcrops. Our study is unique in that in addition to quantifying the degree of pumice rounding with distance from source, we also determine the possible range of ash sizes produced during comminution by analyzing bubble wall thickness of the pumice through petrographic and SEM analysis. The proportion of this ash size is then measured relative to the grain size of larger ash with distance from source. This allows us to correlate ash production with degree of rounding with distance from source, and determine the fraction of the fine ash produced by comminution versus vent-fragmentation mechanisms. In addition we test the error in 2D analysis by completing a 3D image analysis of selected pumice samples using a Camsizer. We find that the roundness of PDC pumice at MSH increases with distance from source, as does the quantity of fine-grained ash. In addition, we have made the first steps towards determining the proportion of fine ash produced by comminution with distance from source. These results are being tested by numerical methods to understand the effect of an increase in fine ash on overall flow dynamics of the PDCs in which they were produced.
Learner Self-Regulation in Distance Education: A Cross-Cultural Study
ERIC Educational Resources Information Center
Al-Harthi, Aisha S.
2010-01-01
This study investigated cultural variations between two samples of Arab and American distance learners (N = 190). The overarching purpose was to chart the underlying relationships between learner self-regulation and cultural orientation within distance education environments using structural equation modeling. The study found significant…
Albanese, B.; Angermeier, P.L.; Gowan, C.
2003-01-01
Mark-recapture studies generate biased, or distance-weighted, movement data because short distances are sampled more frequently than long distances. Using models and field data, we determined how study design affects distance weighting and the movement distributions of stream fishes. We first modeled distance weighting as a function of recapture section length in an unbranching stream. The addition of an unsampled tributary to one of these models substantially increased distance weighting by decreasing the percentage of upstream distances that were sampled. Similarly, the presence of multiple tributaries in the field study resulted in severe bias. However, increasing recapture section length strongly affected distance weighting in both the model and the field study, producing a zone where the number of fish moving could be estimated with little bias. Subsampled data from the field study indicated that longer median (three of three species) and maximum distances (two of three species) can be detected by increasing the length of the recapture section. The effect was extreme for bluehead chub Nocomis leptocephalus, a highly mobile species, which exhibited a longer median distance (133 m versus 60 m), a longer maximum distance (1,144 m versus 708 m), and a distance distribution that differed in shape when the full (4,123-m recapture section) and subsampled (1,978-m recapture section) data sets were compared. Correction factors that adjust the observed number of movements to undersampled distances upwards and those to oversampled distances downwards could not mitigate the distance weighting imposed by the shorter recapture section. Future studies should identify the spatial scale over which movements can be accurately measured before data are collected. Increasing recapture section length a priori is far superior to using post hoc correction factors to reduce the influence of distance weighting on observed distributions. Implementing these strategies will be especially important in stream networks where fish can follow multiple pathways out of the recapture section.
Development of Automated Moment Tensor Software at the Prototype International Data Center
2000-09-01
Berkeley Digital Seismic Network stations in the 100 to 500 km distance range. With sufficient azimuthal coverage this method is found to perform...the solution reported by NIED (http://argent.geo.bosai.go.jp/freesia/event/hypo/joho.html). The normal mechanism obtained by the three-component...Digital Seismic Network stations. These stations provide more than 100 degrees of azimuthal coverage, which is an adequate sampling of the focal
VNIR hyperspectral background characterization methods in adverse weather conditions
NASA Astrophysics Data System (ADS)
Romano, João M.; Rosario, Dalton; Roth, Luz
2009-05-01
Hyperspectral technology is currently being used by the military to detect regions of interest where potential targets may be located. Weather variability, however, may affect the ability of an algorithm to discriminate possible targets from background clutter. Nonetheless, different background characterization approaches may improve the ability of an algorithm to discriminate potential targets over a variety of weather conditions. In a previous paper, we introduced a new autonomous, target-size-invariant background characterization process, the Autonomous Background Characterization (ABC), also known as the Parallel Random Sampling (PRS) method, which features a random sampling stage, a parallel process to mitigate the inclusion by chance of target samples into clutter background classes during random sampling, and a fusion of results at the end. In this paper, we demonstrate how different background characterization approaches improve the performance of algorithms over a variety of challenging weather conditions. Using the Mahalanobis distance as the standard algorithm for this study, we compare the performance of different characterization methods, namely global information, 2-stage global information, and our proposed method, ABC, using data that were collected under a variety of adverse weather conditions. For this study, we used ARDEC's Hyperspectral VNIR Adverse Weather data collection, comprising heavy, light, and transitional fog, light and heavy rain, and low-light conditions.
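A minimal sketch of the Mahalanobis-distance scoring used as the baseline detector (generic, not the authors' code): background mean and covariance are estimated from clutter samples, and every pixel spectrum is scored by its squared Mahalanobis distance. The toy cube, band count, and implanted anomaly below are assumptions.

```python
# Mahalanobis-distance scoring of pixel spectra against a background class.
import numpy as np

def mahalanobis_scores(pixels, background):
    """pixels: (N, bands) spectra to score; background: (M, bands) clutter samples."""
    mu = background.mean(axis=0)
    cov = np.cov(background, rowvar=False)
    cov_inv = np.linalg.pinv(cov)                 # pseudo-inverse for numerical stability
    diff = pixels - mu
    # squared Mahalanobis distance of every pixel to the background class
    return np.einsum("ij,jk,ik->i", diff, cov_inv, diff)

# toy hyperspectral cube: 50x50 pixels, 20 bands, with one brighter "target" pixel
rng = np.random.default_rng(3)
cube = rng.normal(0.2, 0.02, size=(50, 50, 20))
cube[25, 25] += 0.1                               # implant an anomaly
flat = cube.reshape(-1, 20)
scores = mahalanobis_scores(flat, flat)           # global background characterization
print("most anomalous pixel:", np.unravel_index(scores.argmax(), (50, 50)))
```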
Estimate Soil Erodibility Factors Distribution for Maioli Block
NASA Astrophysics Data System (ADS)
Lee, Wen-Ying
2014-05-01
The natural conditions in Taiwan are harsh: because of steep slopes, rushing rivers, and fragile geology, soil erosion has become a serious problem. It not only degrades sloping landscapes but also creates sediment disasters such as reservoir sedimentation and river obstruction. Therefore, predicting and controlling the amount of soil erosion has become an important research topic. The soil erodibility factor (K) is a quantitative index of the ability of soil to resist erosion by detachment and transport. In Taiwan, erodibility factors were calculated for 280 soil samples by Wann and Huang (1989) using the Wischmeier and Smith nomograph. For this study, 221 samples were collected in the Maioli block in Miaoli; the coordinates of every sample point and the land-use situation were recorded, and the physical properties were analyzed for each sample. Three estimation methods, namely Kriging, Inverse Distance Weighted (IDW), and Spline, were applied to estimate the soil erodibility factor distribution for the Maioli block using 181 points as training data, with the remaining 40 points reserved for validation. SPSS regression analysis was then used to compare the accuracy of the training and validation data for the three methods so that the best method could be determined. In the future, this method can be used to predict soil erodibility factors in other areas.
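As an illustration of one of the three interpolators, a minimal inverse-distance-weighted (IDW) sketch for estimating K at unsampled locations; the coordinates, power parameter, and synthetic K values are assumptions, and the study's actual GIS workflow is not reproduced.

```python
# Inverse-distance-weighted interpolation of soil erodibility K.
import numpy as np

def idw(xy_known, k_known, xy_query, power=2.0, eps=1e-12):
    """Interpolate K at query points as a distance-weighted mean of the samples."""
    xy_known = np.asarray(xy_known, float)
    k_known = np.asarray(k_known, float)
    out = np.empty(len(xy_query))
    for i, q in enumerate(np.asarray(xy_query, float)):
        d = np.hypot(*(xy_known - q).T) + eps      # distances to all sample points
        w = 1.0 / d ** power
        out[i] = np.sum(w * k_known) / np.sum(w)
    return out

# toy example: 181 training points, estimate K at two validation locations
rng = np.random.default_rng(7)
pts = rng.uniform(0, 1000, size=(181, 2))          # easting/northing (m), assumed
k = 0.02 + 0.01 * np.sin(pts[:, 0] / 200.0)        # synthetic K values
print(idw(pts, k, [[250.0, 400.0], [800.0, 120.0]]))
```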
NASA Astrophysics Data System (ADS)
Arjmand, Yaser; Eshghi, Hosein
2016-03-01
In this paper, ZnO nanostructures have been synthesized by a thermal evaporation process using metallic zinc powder in the presence of oxygen on p-Si (100) at different distances from the boat. The structural and optical characterizations have been carried out. The morphological study shows nanostructures of various shapes. XRD data indicate that all samples have a polycrystalline wurtzite hexagonal structure, such that the sample closest to the boat has a preferred orientation along (101) while those farther away grow along the (002) direction. From the structural and optical data analysis, we find that the induced strains are the main parameter controlling the UV/green peak ratios in the PL spectra of the studied samples.
The study of frequency-scan photothermal reflectance technique for thermal diffusivity measurement
Hua, Zilong; Ban, Heng; Hurley, David H.
2015-05-05
A frequency scan photothermal reflectance technique to measure thermal diffusivity of bulk samples is studied in this manuscript. Similar to general photothermal reflectance methods, an intensity-modulated heating laser and a constant intensity probe laser are used to determine the surface temperature response under sinusoidal heating. The approach involves fixing the distance between the heating and probe laser spots, recording the phase lag of reflected probe laser intensity with respect to the heating laser frequency modulation, and extracting thermal diffusivity using the phase lag versus (frequency)^(1/2) relation. The experimental validation is performed on three samples (SiO2, CaF2, and Ge), which have a wide range of thermal diffusivities. The measured thermal diffusivity values agree closely with literature values. Lastly, compared to the commonly used spatial scan method, the experimental setup and operation of the frequency scan method are simplified, and the uncertainty level is equal to or smaller than that of the spatial scan method.
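A hedged sketch of the phase-lag analysis, assuming the simplest one-dimensional thermal-wave picture in which the phase lag at a fixed pump-probe offset d grows as d*sqrt(pi*f/alpha); a linear fit of phase against the square root of frequency then yields the diffusivity. The paper's exact model and geometry may differ, and the offset, frequency range, and noise level below are invented.

```python
# Extract thermal diffusivity from the phase lag vs. sqrt(frequency) slope,
# assuming phi = d * sqrt(pi * f / alpha)  =>  alpha = pi * d^2 / slope^2.
import numpy as np

def diffusivity_from_phase(freqs_hz, phase_rad, offset_m):
    slope, _ = np.polyfit(np.sqrt(freqs_hz), phase_rad, 1)
    return np.pi * offset_m ** 2 / slope ** 2      # thermal diffusivity (m^2/s)

# synthetic frequency scan: alpha ~ 8.5e-7 m^2/s, pump-probe offset d = 20 um
alpha_true, d = 8.5e-7, 20e-6
f = np.linspace(1e3, 50e3, 25)
phi = d * np.sqrt(np.pi * f / alpha_true) + np.random.default_rng(0).normal(0, 0.01, f.size)
print(f"recovered alpha: {diffusivity_from_phase(f, phi, d):.3e} m^2/s")
```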
NASA Astrophysics Data System (ADS)
Budzan, Sebastian
2018-04-01
In this paper, an automatic method of grain detection and classification is presented. As input, it uses a single digital image of copper ore from the milling process, acquired with a high-quality digital camera. Grinding is an extremely energy- and cost-consuming process, so granularity evaluation should be performed with high efficiency and low time consumption. The method proposed in this paper is based on three-stage image processing. First, all grains are detected using Seeded Region Growing (SRG) segmentation with the proposed adaptive thresholding based on the calculation of the Relative Standard Deviation (RSD). In the next step, the detection results are refined using information about the shape of the detected grains obtained from a distance map. Finally, each grain in the sample is classified into one of the predefined granularity classes. The quality of the proposed method was evaluated using samples of nominal granularity and compared with other methods.
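A simplified sketch of RSD-based seeded region growing (one plausible reading of the method, not the paper's implementation): the image-level relative standard deviation sets an adaptive tolerance, and neighbors within that tolerance of the running region mean are added. The toy image and tolerance factor are assumptions.

```python
# Seeded region growing with an adaptive tolerance derived from the image RSD.
import numpy as np
from collections import deque

def region_grow(gray, seed, k=1.0):
    """Grow a region from `seed`; a neighbor is accepted if it lies within
    k * RSD(image) * mean(region) of the running region mean."""
    gray = gray.astype(float)
    rsd = gray.std() / gray.mean()                 # image-level relative standard deviation
    h, w = gray.shape
    mask = np.zeros((h, w), bool)
    mask[seed] = True
    total, count = gray[seed], 1
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        region_mean = total / count
        tol = k * rsd * region_mean                # adaptive homogeneity tolerance
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if abs(gray[ny, nx] - region_mean) <= tol:
                    mask[ny, nx] = True
                    total += gray[ny, nx]
                    count += 1
                    queue.append((ny, nx))
    return mask

# toy image: one bright "grain" on a darker background
img = np.full((40, 40), 60.0)
img[10:25, 12:28] = 200.0
grain = region_grow(img, seed=(15, 20))
print("grain area (pixels):", int(grain.sum()))
```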
Optimum nonparametric estimation of population density based on ordered distances
Patil, S.A.; Kovner, J.L.; Burnham, Kenneth P.
1982-01-01
The asymptotic mean and error mean square are determined for the nonparametric estimator of plant density by distance sampling proposed by Patil, Burnham and Kovner (1979, Biometrics 35, 597-604). On the basis of these formulae, a bias-reduced version of this estimator is given, and its specific form is determined which gives minimum mean square error under varying assumptions about the true probability density function of the sampled data. Extension is given to line-transect sampling.
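For orientation only, a classical point-to-k-th-nearest-plant density estimator of the Moore/Pollard type, shown here to make the distance-sampling setup concrete; it is not the nonparametric estimator analyzed in the paper, and the simulated plant density and sample sizes are invented.

```python
# Classical ordered-distance density estimator: under complete spatial randomness,
# lambda_hat = (n*k - 1) / (pi * sum(r_k^2)) is unbiased, where r_k is the distance
# from each of n random sample points to its k-th nearest plant.
import numpy as np

def pollard_density(distances_to_kth, k):
    r = np.asarray(distances_to_kth, float)
    n = r.size
    return (n * k - 1) / (np.pi * np.sum(r ** 2))

# simulate a Poisson forest with 0.05 plants per m^2, n = 50 sample points, k = 3
rng = np.random.default_rng(4)
true_lambda, side = 0.05, 200.0
plants = rng.uniform(0, side, size=(rng.poisson(true_lambda * side * side), 2))
points = rng.uniform(20, side - 20, size=(50, 2))   # keep sample points away from edges
d = np.sort(np.linalg.norm(plants[None, :, :] - points[:, None, :], axis=2), axis=1)
print("estimated density:", pollard_density(d[:, 2], k=3))   # 3rd-nearest distances
```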
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, X; Gao, H; Sharp, G
Purpose: Accurate image segmentation is a crucial step during image guided radiation therapy. This work proposes a multi-atlas machine learning (MAML) algorithm for automated segmentation of head-and-neck CT images. Methods: As the first step, the algorithm utilizes normalized mutual information as the similarity metric, affine registration combined with multiresolution B-Spline registration, and then fuses the results together using the label fusion strategy via Plastimatch. As the second step, the following feature selection strategy is proposed to extract five feature components from reference or atlas images: intensity (I), distance map (D), box (B), center of gravity (C) and stable point (S). The box feature B is novel: it describes the relative position from each point to the minimum inscribed rectangle of the ROI. The center-of-gravity feature C is the 3D Euclidean distance from a sample point to the ROI center of gravity, and S is the distance of the sample point to the landmarks. Then, we adopt random forest (RF) in Scikit-learn, a Python module integrating a wide range of state-of-the-art machine learning algorithms, as the classifier. Different feature and atlas strategies are used for different ROIs for improved performance, such as a multi-atlas strategy with the reference box for the brainstem, and a single-atlas strategy with the reference landmark for the optic chiasm. Results: The algorithm was validated on a set of 33 CT images with manual contours using a leave-one-out cross-validation strategy. Dice similarity coefficients between manual contours and automated contours were calculated: the proposed MAML method improved the Dice coefficient from 0.79 to 0.83 for the brainstem and from 0.11 to 0.52 for the optic chiasm with respect to the multi-atlas segmentation method (MA). Conclusion: A MAML method has been proposed for automated segmentation of head-and-neck CT images with improved performance. It provides a comparable result for the brainstem and an improved result for the optic chiasm compared with MA. Xuhua Ren and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000), and the Shanghai Pujiang Talent Program (#14PJ1404500).
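A hedged 2-D toy of the per-voxel feature idea (intensity I, distance map D, and center-of-gravity distance C) feeding a scikit-learn random forest; the atlas registration, label fusion via Plastimatch, and the box and stable-point features are omitted, and all images and masks are synthetic.

```python
# Per-pixel I/D/C features plus a random forest on a synthetic 2-D "atlas" and "subject".
import numpy as np
from scipy.ndimage import distance_transform_edt, center_of_mass
from sklearn.ensemble import RandomForestClassifier

def voxel_features(image, atlas_mask):
    """Stack intensity, distance to the atlas ROI, and distance to the ROI centroid."""
    dist_map = distance_transform_edt(~atlas_mask)        # D: distance outside the ROI
    cy, cx = center_of_mass(atlas_mask)                   # C: ROI center of gravity
    yy, xx = np.indices(image.shape)
    cog = np.hypot(yy - cy, xx - cx)
    return np.stack([image, dist_map, cog], axis=-1).reshape(-1, 3)

# toy data: a bright disc that shifted slightly between the atlas and the subject
yy, xx = np.indices((64, 64))
atlas_roi = (yy - 30) ** 2 + (xx - 30) ** 2 < 100
subject_roi = (yy - 33) ** 2 + (xx - 34) ** 2 < 100
atlas_img = 50.0 + 100.0 * atlas_roi
subject_img = 50.0 + 100.0 * subject_roi

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(voxel_features(atlas_img, atlas_roi), atlas_roi.ravel())
pred = clf.predict(voxel_features(subject_img, atlas_roi)).reshape(64, 64)
dice = 2 * (pred & subject_roi).sum() / (pred.sum() + subject_roi.sum())
print(f"Dice vs. manual contour: {dice:.2f}")
```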
Distance-based microfluidic quantitative detection methods for point-of-care testing.
Tian, Tian; Li, Jiuxing; Song, Yanling; Zhou, Leiji; Zhu, Zhi; Yang, Chaoyong James
2016-04-07
Equipment-free devices with quantitative readout are of great significance to point-of-care testing (POCT), which provides real-time readout to users and is especially important in low-resource settings. Among various equipment-free approaches, distance-based visual quantitative detection methods rely on reading the visual signal length for corresponding target concentrations, thus eliminating the need for sophisticated instruments. The distance-based methods are low-cost, user-friendly and can be integrated into portable analytical devices. Moreover, such methods enable quantitative detection of various targets by the naked eye. In this review, we first introduce the concept and history of distance-based visual quantitative detection methods. Then, we summarize the main methods for translation of molecular signals to distance-based readout and discuss different microfluidic platforms (glass, PDMS, paper and thread) in terms of applications in biomedical diagnostics, food safety monitoring, and environmental analysis. Finally, the potential and future perspectives are discussed.
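A minimal sketch of how a distance-based readout is typically quantified (generic, not tied to any specific device in the review): a linear calibration of signal length against standard concentrations is inverted to read out an unknown sample; the calibration values below are invented.

```python
# Linear calibration of distance readout vs. concentration, and its inversion.
import numpy as np

def fit_calibration(concentrations, lengths_mm):
    slope, intercept = np.polyfit(concentrations, lengths_mm, 1)
    return slope, intercept

def concentration_from_length(length_mm, slope, intercept):
    return (length_mm - intercept) / slope

conc = np.array([0.0, 10.0, 20.0, 40.0, 80.0])        # standards (e.g. micromolar, assumed)
length = np.array([1.2, 6.8, 12.5, 24.9, 49.7])        # measured bar lengths (mm, assumed)
m, b = fit_calibration(conc, length)
print("unknown sample:", concentration_from_length(18.0, m, b), "(same units as standards)")
```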
The Use of DNA Barcoding in Identification and Conservation of Rosewood (Dalbergia spp.)
Hartvig, Ida; Czako, Mihaly; Kjær, Erik Dahl; Nielsen, Lene Rostgaard; Theilade, Ida
2015-01-01
The genus Dalbergia contains many valuable timber species threatened by illegal logging and deforestation, but knowledge on distributions and threats is often limited and accurate species identification difficult. The aim of this study was to apply DNA barcoding methods to support conservation efforts of Dalbergia species in Indochina. We used the recommended rbcL, matK and ITS barcoding markers on 95 samples covering 31 species of Dalbergia, and tested their discrimination ability with both traditional distance-based as well as different model-based machine learning methods. We specifically tested whether the markers could be used to solve taxonomic confusion concerning the timber species Dalbergia oliveri, and to identify the CITES-listed Dalbergia cochinchinensis. We also applied the barcoding markers to 14 samples of unknown identity. In general, we found that the barcoding markers discriminated among Dalbergia species with high accuracy. We found that ITS yielded the single highest discrimination rate (100%), but due to difficulties in obtaining high-quality sequences from degraded material, the better overall choice for Dalbergia seems to be the standard rbcL+matK barcode, as this yielded discrimination rates close to 90% and amplified well. The distance-based method TaxonDNA showed the highest identification rates overall, although a more complete specimen sampling is needed to conclude on the best analytic method. We found strong support for a monophyletic Dalbergia oliveri and encourage that this name is used consistently in Indochina. The CITES-listed Dalbergia cochinchinensis was successfully identified, and a species-specific assay can be developed from the data generated in this study for the identification of illegally traded timber. We suggest that the use of DNA barcoding is integrated into the work flow during floristic studies and at national herbaria in the region, as this could significantly increase the number of identified specimens and improve knowledge about species distributions. PMID:26375850
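A minimal sketch of distance-based identification in the spirit of tools such as TaxonDNA (not their code): uncorrected p-distances between an aligned query and reference barcodes are computed, and the query is assigned to the species of the closest reference within a threshold; the sequences and threshold below are invented.

```python
# Nearest-neighbor barcode identification using uncorrected p-distances.
def p_distance(a, b):
    """Proportion of differing sites between two aligned sequences, ignoring gaps."""
    pairs = [(x, y) for x, y in zip(a, b) if x != "-" and y != "-"]
    return sum(x != y for x, y in pairs) / len(pairs)

def identify(query, references, max_dist=0.05):
    """references: list of (species_name, aligned_sequence) tuples."""
    best = min(references, key=lambda ref: p_distance(query, ref[1]))
    d = p_distance(query, best[1])
    return (best[0], d) if d <= max_dist else ("unidentified", d)

# toy reference barcodes (invented sequences, not real Dalbergia data)
refs = [("Dalbergia cochinchinensis", "ATGGCTTACCGTTAGGACCT"),
        ("Dalbergia oliveri",         "ATGGCATACCGTAAGGTCCT")]
print(identify("ATGGCTTACCGTTAGGACCA", refs))
```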