A binary linear programming formulation of the graph edit distance.
Justice, Derek; Hero, Alfred
2006-08-01
A binary linear programming formulation of the graph edit distance for unweighted, undirected graphs with vertex attributes is derived and applied to a graph recognition problem. A general formulation for editing graphs is used to derive a graph edit distance that is proven to be a metric, provided the cost function for individual edit operations is a metric. Then, a binary linear program is developed for computing this graph edit distance, and polynomial time methods for determining upper and lower bounds on the solution of the binary program are derived by applying solution methods for standard linear programming and the assignment problem. A recognition problem of comparing a sample input graph to a database of known prototype graphs in the context of a chemical information system is presented as an application of the new method. The costs associated with various edit operations are chosen by using a minimum normalized variance criterion applied to pairwise distances between nearest neighbors in the database of prototypes. The new metric is shown to perform quite well in comparison to existing metrics when applied to a database of chemical graphs.
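The assignment-problem bound mentioned in the abstract can be illustrated with the classic bipartite relaxation of graph edit distance: vertex substitutions, deletions, and insertions arranged in an (n+m) x (n+m) cost matrix and solved as a linear assignment. The sketch below is that generic relaxation, not the authors' binary linear program; the attribute lists and unit costs are invented.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

BIG = 1e9  # effectively forbids impossible pairings

def ged_assignment_bound(labels_g, labels_h, sub_cost, indel_cost):
    """Bound the graph edit distance between two vertex-attributed graphs
    by solving an (n+m) x (n+m) assignment problem: substitutions in the
    top-left block, deletions/insertions on the off-diagonal blocks."""
    n, m = len(labels_g), len(labels_h)
    C = np.full((n + m, n + m), BIG)
    for i in range(n):
        for j in range(m):
            C[i, j] = sub_cost(labels_g[i], labels_h[j])
        C[i, m + i] = indel_cost        # delete vertex i of G
    for j in range(m):
        C[n + j, j] = indel_cost        # insert vertex j of H
    C[n:, m:] = 0.0                     # dummy-to-dummy pairs cost nothing
    rows, cols = linear_sum_assignment(C)
    return C[rows, cols].sum()

# Two small chemical graphs given only by their vertex attributes
d = ged_assignment_bound(['C', 'C', 'O'], ['C', 'O'],
                         sub_cost=lambda a, b: 0.0 if a == b else 1.0,
                         indel_cost=1.0)
```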
Kinematic Distances: A Monte Carlo Method
NASA Astrophysics Data System (ADS)
Wenger, Trey V.; Balser, Dana S.; Anderson, L. D.; Bania, T. M.
2018-03-01
Distances to high-mass star-forming regions (HMSFRs) in the Milky Way are a crucial constraint on the structure of the Galaxy. Only kinematic distances are available for a majority of the HMSFRs in the Milky Way. Here, we compare the kinematic and parallax distances of 75 Galactic HMSFRs to assess the accuracy of kinematic distances. We derive the kinematic distances using three different methods: the traditional method using the Brand & Blitz rotation curve (Method A), the traditional method using the Reid et al. rotation curve and updated solar motion parameters (Method B), and a Monte Carlo technique (Method C). Methods B and C produce kinematic distances closest to the parallax distances, with median differences of 13% (0.43 kpc) and 17% (0.42 kpc), respectively. Except in the vicinity of the tangent point, the kinematic distance uncertainties derived by Method C are smaller than those of Methods A and B. In a large region of the Galaxy, the Method C kinematic distances constrain both the distances and the Galactocentric positions of HMSFRs more accurately than parallax distances. Beyond the tangent point along ℓ = 30°, for example, the Method C kinematic distance uncertainties reach a minimum of 10% of the parallax distance uncertainty at a distance of 14 kpc. We develop a prescription for deriving and applying the Method C kinematic distances and distance uncertainties. The code to generate the Method C kinematic distances is publicly available and may be utilized through an online tool.
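Method C resamples the observables and inverts a rotation curve draw by draw. A minimal sketch of that idea, assuming a flat rotation curve and round-number Galactic constants rather than the Reid et al. curve and solar motion used in the paper:

```python
import numpy as np

R0, THETA0 = 8.34, 240.0   # assumed Galactic constants: kpc, km/s

def kinematic_distance_mc(glong_deg, vlsr, vlsr_err, far=False, n=100_000):
    """Monte Carlo kinematic distance for a flat rotation curve: draw LSR
    velocities, invert each draw for Galactocentric radius R, convert R to
    a heliocentric distance, and summarize the distance distribution."""
    rng = np.random.default_rng(0)
    l = np.radians(glong_deg)
    v = rng.normal(vlsr, vlsr_err, n)
    R = R0 * np.sin(l) * THETA0 / (v + THETA0 * np.sin(l))
    disc = R**2 - (R0 * np.sin(l))**2          # from the law of cosines
    good = (R > 0) & (disc >= 0)
    root = np.sqrt(disc[good])
    d = R0 * np.cos(l) + (root if far else -root)   # far/near solution
    d = d[d > 0]
    return np.percentile(d, [16, 50, 84])           # kpc

lo, med, hi = kinematic_distance_mc(30.0, vlsr=100.0, vlsr_err=10.0)
```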
Comparative evaluation of ultrasound scanner accuracy in distance measurement
NASA Astrophysics Data System (ADS)
Branca, F. P.; Sciuto, S. A.; Scorza, A.
2012-10-01
The aim of the present study is to develop and compare two different automatic methods for accuracy evaluation in ultrasound phantom measurements on B-mode images: both yield the relative error e between distances measured by 14 brand-new ultrasound medical scanners and the nominal distances among nylon wires embedded in a reference test object. The first method is based on a least squares estimation, while the second applies the mean value of the same distance evaluated at different locations in the ultrasound image (same distance method). Results for both are presented and explained.
Generalising Ward's Method for Use with Manhattan Distances.
Strauss, Trudie; von Maltitz, Michael Johan
2017-01-01
The claim that Ward's linkage algorithm in hierarchical clustering is limited to use with Euclidean distances is investigated. In this paper, Ward's clustering algorithm is generalised for use with the l1 norm, or Manhattan, distance. We argue that the generalisation of Ward's linkage method to incorporate Manhattan distances is theoretically sound and provide an example where this method outperforms the method using Euclidean distances. As an application, we perform statistical analyses on languages using methods normally applied to biology and genetic classification. We aim to quantify differences in character traits between languages and use a statistical language signature based on relative bi-gram (sequence of two letters) frequencies to calculate a distance matrix between 32 Indo-European languages. We then use Ward's method of hierarchical clustering to classify the languages, using both the Euclidean and the Manhattan distance. Results obtained with the different distance metrics are compared to show that the characteristic of Ward's algorithm, minimising intra-cluster variation and maximising inter-cluster variation, is not violated when using the Manhattan metric.
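A compact SciPy version of the pipeline's shape (bi-gram frequency rows, an l1 distance matrix, Ward linkage); the random matrix stands in for the 32-language signature data. Note that SciPy's Ward update formula formally assumes Euclidean input, so feeding it Manhattan distances is precisely the generalisation the paper argues for:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
X = rng.random((32, 100))                 # placeholder bi-gram counts
X /= X.sum(axis=1, keepdims=True)         # relative bi-gram frequencies

D = pdist(X, metric='cityblock')          # condensed Manhattan distances
# SciPy's Ward update assumes Euclidean input; passing l1 distances here
# follows the paper's generalisation, not SciPy's documented contract.
Z = linkage(D, method='ward')
groups = fcluster(Z, t=5, criterion='maxclust')
```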
Remote logo detection using angle-distance histograms
NASA Astrophysics Data System (ADS)
Youn, Sungwook; Ok, Jiheon; Baek, Sangwook; Woo, Seongyoun; Lee, Chulhee
2016-05-01
Among the many computer vision applications, automatic logo recognition has drawn great interest from industry as well as academic institutions. In this paper, we propose an angle-distance map, which we used to develop a robust logo detection algorithm. The proposed angle-distance histogram is invariant to scale and rotation. The proposed method first uses shape information and color characteristics to find candidate regions and then applies the angle-distance histogram. Experiments show that the proposed method detected logos of various sizes and orientations.
The crowding factor method applied to parafoveal vision
Ghahghaei, Saeideh; Walker, Laura
2016-01-01
Crowding increases with eccentricity and is most readily observed in the periphery. During natural, active vision, however, central vision plays an important role. Measures of critical distance to estimate crowding are difficult in central vision, as these distances are small. Any overlap of flankers with the target may create an overlay masking confound. The crowding factor method avoids this issue by simultaneously modulating target size and flanker distance and using a ratio to compare crowded to uncrowded conditions. This method was developed and applied in the periphery (Petrov & Meleshkevich, 2011b). In this work, we apply the method to characterize crowding in parafoveal vision (<3.5 visual degrees) with spatial uncertainty. We find that eccentricity and hemifield have less impact on crowding than in the periphery, yet radial/tangential asymmetries are clearly preserved. There are considerable idiosyncratic differences observed between participants. The crowding factor method provides a powerful tool for examining crowding in central and peripheral vision, which will be useful in future studies that seek to understand visual processing under natural, active viewing conditions. PMID:27690170
On the minimum orbital intersection distance computation: a new effective method
NASA Astrophysics Data System (ADS)
Hedo, José M.; Ruíz, Manuel; Peláez, Jesús
2018-06-01
The computation of the Minimum Orbital Intersection Distance (MOID) is an old, but increasingly relevant problem. Fast and precise methods for MOID computation are needed to select potentially hazardous asteroids from a large catalogue. The same applies to debris with respect to spacecraft. An iterative method that strictly meets these two requirements is presented.
Apparatus for in-situ calibration of instruments that measure fluid depth
Campbell, Melvin D.
1994-01-01
The present invention provides a method and apparatus for in-situ calibration of distance measuring equipment. The method comprises obtaining a first distance measurement in a first location, then obtaining at least one other distance measurement in at least one other location of a precisely known distance from the first location, and calculating a calibration constant. The method is applied specifically to calculating a calibration constant for obtaining fluid level and embodied in an apparatus using a pressure transducer and a spacer of precisely known length. The calibration constant is used to calculate the depth of a fluid from subsequent single pressure measurements at any submerged position.
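The patent's calibration arithmetic reduces to a two-point difference; a toy version with hypothetical kPa readings and a 0.5 m spacer:

```python
def calibration_constant(p1, p2, spacer_length):
    """In-situ constant from two readings taken a precisely known distance
    apart: pressure per unit depth of the fluid (e.g. kPa per metre)."""
    return (p2 - p1) / spacer_length

def fluid_depth(p_gauge, c):
    """Depth of the transducer from a single later gauge-pressure reading."""
    return p_gauge / c

c = calibration_constant(12.0, 16.9, 0.5)   # readings 0.5 m apart
print(fluid_depth(24.5, c))                 # 2.5 m for a water-like fluid
```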
Authenticating concealed private data while maintaining concealment
Thomas, Edward V. [Albuquerque, NM]; Draelos, Timothy J. [Albuquerque, NM]
2007-06-26
A method of and system for authenticating concealed and statistically varying multi-dimensional data comprising: acquiring an initial measurement of an item, wherein the initial measurement is subject to measurement error; applying a transformation to the initial measurement to generate reference template data; acquiring a subsequent measurement of an item, wherein the subsequent measurement is subject to measurement error; applying the transformation to the subsequent measurement; and calculating a Euclidean distance metric between the transformed measurements; wherein the calculated Euclidean distance metric is identical to a Euclidean distance metric between the measurements prior to transformation.
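Any orthogonal transformation has the distance-preserving property the claim relies on. The patent does not specify the concealing transformation, so the sketch below uses a random orthogonal matrix purely to show that matching can run on concealed templates:

```python
import numpy as np

rng = np.random.default_rng(42)
dim = 16
# A random orthogonal matrix is one concrete distance-preserving transform;
# the patent leaves the concealing transformation itself unspecified.
Q, _ = np.linalg.qr(rng.normal(size=(dim, dim)))

reference = rng.normal(size=dim)                  # enrollment measurement
probe = reference + 0.01 * rng.normal(size=dim)   # noisy re-measurement

before = np.linalg.norm(reference - probe)
after = np.linalg.norm(Q @ reference - Q @ probe)
assert np.isclose(before, after)    # matching works on concealed templates
```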
Determining the Depth of Infinite Horizontal Cylindrical Sources from Spontaneous Polarization Data
NASA Astrophysics Data System (ADS)
Cooper, G. R. J.; Stettler, E. H.
2017-03-01
Previously published semi-automatic interpretation methods that use ratios of analytic signal amplitudes of orders that differ by one to determine the distance to potential field sources are shown also to apply to self-potential (S.P.) data when the source is a horizontal cylinder. Local minima of the distance (when it becomes closest to zero) give the source depth. The method was applied to an S.P. anomaly from the Bourkes Luck potholes district in Mpumalanga Province, South Africa, and gave results that were confirmed by drilling.
ERIC Educational Resources Information Center
Said, Asnah; Syarif, Edy
2016-01-01
This research aimed to evaluate the design of an online tutorial program applying problem-based learning for the Research Methods course currently implemented in the Open Distance Learning (ODL) system. Students must take the Research Methods course to prepare themselves for academic writing projects. Problem-based learning basically emphasizes the process of…
Probabilistic determination of probe locations from distance data
Xu, Xiao-Ping; Slaughter, Brian D.; Volkmann, Niels
2013-01-01
Distance constraints, in principle, can be employed to determine information about the location of probes within a three-dimensional volume. Traditional methods for locating probes from distance constraints involve optimization of scoring functions that measure how well the probe location fits the distance data, exploring only a small subset of the scoring function landscape in the process. These methods are not guaranteed to find the global optimum and provide no means to relate the identified optimum to all other optima in scoring space. Here, we introduce a method for the location of probes from distance information that is based on probability calculus. This method allows exploration of the entire scoring space by directly combining probability functions representing the distance data and information about attachment sites. The approach is guaranteed to identify the global optimum and enables the derivation of confidence intervals for the probe location as well as statistical quantification of ambiguities. We apply the method to determine the location of a fluorescence probe using distances derived by FRET and show that the resulting location matches that independently derived by electron microscopy. PMID:23770585
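A grid-based reading of the probabilistic approach: multiply a probability function per distance constraint over the whole volume, so every optimum stays visible at once. Anchor sites, distances, and uncertainties below are hypothetical stand-ins for FRET data, and the Gaussian likelihood is an assumption:

```python
import numpy as np

def probe_posterior(anchors, dists, sigmas, lim=50.0, n=64):
    """Posterior over probe position on a 3-D grid: multiply a Gaussian
    likelihood per measured anchor-probe distance, then normalise.
    The whole scoring landscape is kept, so all optima remain visible."""
    ax = np.linspace(-lim, lim, n)
    X, Y, Z = np.meshgrid(ax, ax, ax, indexing='ij')
    logp = np.zeros_like(X)
    for (ax_, ay, az), d, s in zip(anchors, dists, sigmas):
        r = np.sqrt((X - ax_)**2 + (Y - ay)**2 + (Z - az)**2)
        logp += -0.5 * ((r - d) / s)**2
    p = np.exp(logp - logp.max())
    return p / p.sum()

# Hypothetical FRET-derived distances (angstroms) from three dye sites
post = probe_posterior([(0, 0, 0), (30, 0, 0), (0, 30, 0)],
                       dists=[25.0, 20.0, 28.0], sigmas=[3.0, 3.0, 3.0])
```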
Herbei, Radu; Kubatko, Laura
2013-03-26
Markov chains are widely used for modeling in many areas of molecular biology and genetics. As the complexity of such models advances, it becomes increasingly important to assess the rate at which a Markov chain converges to its stationary distribution in order to carry out accurate inference. A common measure of convergence to the stationary distribution is the total variation distance, but this measure can be difficult to compute when the state space of the chain is large. We propose a Monte Carlo method to estimate the total variation distance that can be applied in this situation, and we demonstrate how the method can be efficiently implemented by taking advantage of GPU computing techniques. We apply the method to two Markov chains on the space of phylogenetic trees, and discuss the implications of our findings for the development of algorithms for phylogenetic inference.
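For a chain small enough to enumerate, the Monte Carlo idea amounts to simulating many independent chains for t steps and comparing the empirical state histogram with the stationary distribution. This plain-NumPy toy (a histogram estimator, not the paper's GPU implementation) shows the shape of the computation:

```python
import numpy as np

def tv_distance_mc(P, pi, start, t, n_chains=100_000, seed=0):
    """Monte Carlo estimate of the total variation distance between the
    chain's distribution after t steps (from a fixed start state) and the
    stationary distribution pi: simulate, histogram, halve the l1 gap."""
    rng = np.random.default_rng(seed)
    k = P.shape[0]
    states = np.full(n_chains, start)
    for _ in range(t):
        u = rng.random(n_chains)
        cum = np.cumsum(P[states], axis=1)        # one vectorised step
        states = (u[:, None] < cum).argmax(axis=1)
    freq = np.bincount(states, minlength=k) / n_chains
    return 0.5 * np.abs(freq - pi).sum()

P = np.array([[0.9, 0.1], [0.2, 0.8]])
pi = np.array([2 / 3, 1 / 3])                     # stationary for this P
print(tv_distance_mc(P, pi, start=0, t=10))
```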
Water quality assessment with hierarchical cluster analysis based on Mahalanobis distance.
Du, Xiangjun; Shao, Fengjing; Wu, Shunyao; Zhang, Hanlin; Xu, Si
2017-07-01
Water quality assessment is crucial for assessment of marine eutrophication, prediction of harmful algal blooms, and environment protection. Previous studies have developed many numeric modeling methods and data-driven approaches for water quality assessment. Cluster analysis, an approach widely used for grouping data, has also been employed. However, there are complex correlations between water quality variables, which play important roles in water quality assessment but have always been overlooked. In this paper, we analyze correlations between water quality variables and propose an alternative method for water quality assessment with hierarchical cluster analysis based on Mahalanobis distance. Further, we cluster water quality data collected from coastal waters of the Bohai Sea and North Yellow Sea of China, and apply the clustering results to evaluate their water quality. To evaluate the validity, we also cluster the water quality data with cluster analysis based on Euclidean distance, which is widely adopted by previous studies. The results show that our method is more suitable for water quality assessment with many correlated water quality variables. To our knowledge, it is the first attempt to apply Mahalanobis distance to coastal water quality assessment.
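In SciPy terms, the substitution the paper argues for is a change of metric in the distance computation: supply the inverse covariance so correlated variables are decorrelated before clustering. Variables, sample sizes, and the linkage choice below are placeholders:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical matrix: rows = sampling stations, cols = correlated variables
rng = np.random.default_rng(7)
X = rng.normal(size=(40, 4))
X[:, 1] += 0.8 * X[:, 0]                     # induce a correlation

VI = np.linalg.inv(np.cov(X, rowvar=False))  # inverse covariance matrix
D = pdist(X, metric='mahalanobis', VI=VI)    # decorrelated distances
Z = linkage(D, method='average')             # linkage choice is arbitrary
stations = fcluster(Z, t=3, criterion='maxclust')
```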
A new statistical distance scale for planetary nebulae
NASA Astrophysics Data System (ADS)
Ali, Alaa; Ismail, H. A.; Alsolami, Z.
2015-05-01
In the first part of the present article we discuss the consistency among different individual distance methods of Galactic planetary nebulae, while in the second part we develop a new statistical distance scale based on a calibrating sample of well determined distances. A set of 315 planetary nebulae with individual distances is extracted from the literature. Inspecting the data set indicates that the accuracy of distances varies among different individual methods and also among different sources where the same individual method was applied. Therefore, we derive a reliable weighted mean distance for each object by considering the influence of the distance error and the weight of each individual method. The results reveal that the discussed individual methods are consistent with each other, except for the gravity method, which produces larger distances than the other individual methods. From the initial data set, we construct a standard calibrating sample consisting of 82 objects. This sample is restricted to objects with distances determined from at least two different individual methods, except for a few objects with trusted distances determined from the trigonometric, spectroscopic, and cluster membership methods. In addition to its well determined distances, this sample offers several advantages over those used in prior distance scales. The sample is used to recalibrate the mass-radius and radio surface brightness temperature-radius relationships. An average error of ˜30% is estimated for the new distance scale. The new distance scale is compared with the most widely used statistical scales in the literature, and the results show that it agrees with the majority of them to within ˜±20%. Furthermore, the new scale yields a weighted mean distance to the Galactic center of 7.6±1.35 kpc, which is in good agreement with the very recent measurement of Malkin (2013).
[A New Distance Metric between Different Stellar Spectra: the Residual Distribution Distance].
Liu, Jie; Pan, Jing-chang; Luo, A-li; Wei, Peng; Liu, Meng
2015-12-01
Distance metric is an important issue in spectroscopic survey data processing: it defines how the distance between two different spectra is calculated, and classification, clustering, parameter measurement, and outlier mining of spectral data are all built on it. The distance measurement method therefore affects the performance of these tasks. With the development of large-scale stellar spectral sky surveys, how to define a more efficient distance metric on stellar spectra has become a very important issue in spectral data processing. Motivated by this problem, and taking full account of the characteristics and data features of stellar spectra, a new distance measurement method for stellar spectra, named the Residual Distribution Distance, is proposed. Unlike traditional distance metrics for stellar spectra, this method first normalizes the two spectra to the same scale, then calculates the residual at each wavelength, and uses the standard deviation of the residual spectrum as the distance measure. The method can be used for stellar classification, clustering, measurement of stellar atmospheric physical parameters, and so on. This paper takes stellar subcategory classification as an example to test the distance measure. The results show that the distance defined by the proposed method describes the gap between different types of spectra in classification more effectively than other methods, and can be applied well in other related applications. This paper also studies the effect of the signal-to-noise ratio (SNR) on the performance of the proposed method. The results show that the distance is affected by the SNR: the lower the SNR, the greater its impact on the distance, while for SNR above 10 the effect on classification performance is small.
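The measure is short to state in code: bring both spectra to a common scale, subtract, and take the standard deviation of the residual. The median-based scaling is an assumption; the paper only requires a common scale:

```python
import numpy as np

def residual_distribution_distance(s1, s2):
    """Residual Distribution Distance between two spectra sampled on the
    same wavelength grid: scale to a common level, then take the standard
    deviation of the residual spectrum."""
    a = s1 / np.median(s1)     # median scaling is an assumption here
    b = s2 / np.median(s2)
    return float(np.std(a - b))

rng = np.random.default_rng(0)
flux_a = rng.normal(1.0, 0.05, 3000)              # mock spectra
flux_b = flux_a + rng.normal(0.0, 0.02, 3000)
print(residual_distribution_distance(flux_a, flux_b))
```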
Potential, velocity, and density fields from sparse and noisy redshift-distance samples - Method
NASA Technical Reports Server (NTRS)
Dekel, Avishai; Bertschinger, Edmund; Faber, Sandra M.
1990-01-01
A method for recovering the three-dimensional potential, velocity, and density fields from large-scale redshift-distance samples is described. Galaxies are taken as tracers of the velocity field, not of the mass. The density field and the initial conditions are calculated using an iterative procedure that applies the no-vorticity assumption at an initial time and uses the Zel'dovich approximation to relate initial and final positions of particles on a grid. The method is tested using a cosmological N-body simulation 'observed' at the positions of real galaxies in a redshift-distance sample, taking into account their distance measurement errors. Malmquist bias and other systematic and statistical errors are extensively explored using both analytical techniques and Monte Carlo simulations.
Groneberg, David A.
2016-01-01
We integrated recent improvements within the floating catchment area (FCA) method family into an integrated 'iFCA' method. Within this method we focused on the distance decay function and its parameters. So far, only distance decay functions with constant parameters have been applied. Therefore, we developed a variable distance decay function to be used within the FCA method. We were able to replace the impedance coefficient β with readily available distribution parameters, i.e. the median and standard deviation (SD), within a logistic distance decay function. Hence, the function is shaped individually for every single population location by the median and SD of all population-to-provider distances within a global catchment size. Theoretical application of the variable distance decay function showed conceptually sound results. Furthermore, the existence of effective variable catchment sizes, defined by the asymptotic approach to zero of the distance decay function, was revealed, satisfying the need for variable catchment sizes. The application of the iFCA method within an urban case study in Berlin (Germany) confirmed the theoretical fit of the suggested method. In summary, we introduce for the first time a variable distance decay function within an integrated FCA method. This function accounts for individual travel behaviors determined by the distribution of providers. Additionally, the function inherently provides effective variable catchment sizes and therefore obviates the need for determining variable catchment sizes separately. PMID:27391649
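One concrete reading of the variable decay function: a logistic curve whose midpoint and slope are set, for each population location, by the median and SD of its population-to-provider distances. The exact parameterisation below is an assumption for illustration:

```python
import numpy as np

def logistic_decay(d, median, sd):
    """Distance-decay weight shaped by the median and SD of all
    population-to-provider distances for one population location;
    this exact parameterisation is an illustrative assumption."""
    return 1.0 / (1.0 + np.exp((d - median) / sd))

d = np.linspace(0.0, 30.0, 7)                     # travel distances, km
print(logistic_decay(d, median=10.0, sd=4.0))     # weights fade with distance
```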
Properties of star clusters - I. Automatic distance and extinction estimates
NASA Astrophysics Data System (ADS)
Buckner, Anne S. M.; Froebrich, Dirk
2013-12-01
Determining star cluster distances is essential to analyse their properties and distribution in the Galaxy. In particular, it is desirable to have a reliable, purely photometric distance estimation method for large samples of newly discovered cluster candidates e.g. from the Two Micron All Sky Survey, the UK Infrared Deep Sky Survey Galactic Plane Survey and VVV. Here, we establish an automatic method to estimate distances and reddening from near-infrared photometry alone, without the use of isochrone fitting. We employ a decontamination procedure of JHK photometry to determine the density of stars foreground to clusters and a galactic model to estimate distances. We then calibrate the method using clusters with known properties. This allows us to establish distance estimates with better than 40 per cent accuracy. We apply our method to determine the extinction and distance values to 378 known open clusters and 397 cluster candidates from the list of Froebrich, Scholz & Raftery. We find that the sample is biased towards clusters of a distance of approximately 3 kpc, with typical distances between 2 and 6 kpc. Using the cluster distances and extinction values, we investigate how the average extinction per kiloparsec distance changes as a function of the Galactic longitude. We find a systematic dependence that can be approximated by AH(l) [mag kpc⁻¹] = 0.10 + 0.001 × |l - 180°|/° for regions more than 60° from the Galactic Centre.
NASA Astrophysics Data System (ADS)
Lawrence, Kurt C.; Park, Bosoon; Windham, William R.; Mao, Chengye; Poole, Gavin H.
2003-03-01
A method to calibrate a pushbroom hyperspectral imaging system for "near-field" applications in agricultural and food safety has been demonstrated. The method consists of a modified geometric control point correction applied to a focal plane array (FPA) to remove smile and keystone distortion from the system. Once an FPA correction was applied, single wavelength and distance calibrations were used to describe all points on the FPA. Finally, a percent reflectance calibration, applied on a pixel-by-pixel basis, was used for accurate measurements with the hyperspectral imaging system. The method was demonstrated with a stationary prism-grating-prism, pushbroom hyperspectral imaging system. For the system described, wavelength and distance calibrations were used to reduce the wavelength errors to <0.5 nm and distance errors to <0.01 mm (across the entrance slit width). The pixel-by-pixel percent reflectance calibration, which was performed at all wavelengths with dark current and 99% reflectance calibration-panel measurements, was verified with measurements on a certified gradient Spectralon panel with values ranging from about 14% reflectance to 99% reflectance, with errors generally less than 5% at the mid-wavelength measurements. Results from the calibration method indicate the hyperspectral imaging system has a usable range between 420 nm and 840 nm. Outside this range, errors increase significantly.
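The percent-reflectance step is the usual dark/white normalisation applied pixel by pixel; array sizes and signal levels below are invented:

```python
import numpy as np

def reflectance_calibrate(raw, dark, white, panel_reflectance=0.99):
    """Pixel-by-pixel percent-reflectance calibration: subtract dark
    current, then scale by the response to the 99% reference panel."""
    return panel_reflectance * (raw - dark) / (white - dark)

rng = np.random.default_rng(3)                 # hypothetical frames:
dark = rng.normal(100.0, 1.0, (640, 256))      # (spatial pixels, bands)
white = dark + 4000.0
raw = dark + 1500.0
R = reflectance_calibrate(raw, dark, white)    # ~0.37 everywhere
```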
[Fast discrimination of edible vegetable oil based on Raman spectroscopy].
Zhou, Xiu-Jun; Dai, Lian-Kui; Li, Sheng
2012-07-01
A novel method to rapidly discriminate edible vegetable oils by Raman spectroscopy is presented. The training set is composed of different edible vegetable oils with known classes. Based on their original Raman spectra, baseline correction and normalization were applied to obtain standard spectra. Two characteristic peaks describing the degree of unsaturation of vegetable oil were selected as feature vectors; then the centers of all classes were calculated. For an edible vegetable oil of unknown class, the same pretreatment and feature extraction methods were used. The Euclidean distances between the feature vector of the unknown sample and the center of each class were calculated, and the class of the unknown sample was finally determined by the minimum distance. For 43 edible vegetable oil samples from seven different classes, experimental results show that the clustering effect of each class was more obvious and the between-class distance much larger with the new feature extraction method than with PCA. The above classification model can be applied to discriminate unknown edible vegetable oils rapidly and accurately.
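The classifier described is a minimum-distance-to-class-centre rule on a two-peak feature vector; a skeletal version with made-up feature values:

```python
import numpy as np

def fit_centroids(features, labels):
    """Mean feature vector (the two characteristic peaks) per oil class."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def classify(x, centroids):
    """Assign an unknown oil to the class with the nearest centre."""
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

# Hypothetical 2-D features: intensities of the two unsaturation peaks
X = np.array([[1.00, 0.20], [1.10, 0.25], [0.30, 0.90], [0.35, 1.00]])
y = np.array(['olive', 'olive', 'soybean', 'soybean'])
print(classify(np.array([0.32, 0.95]), fit_centroids(X, y)))  # 'soybean'
```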
A line transect model for aerial surveys
Quang, Pham Xuan; Lanctot, Richard B.
1991-01-01
We employ a line transect method to estimate the density of the common and Pacific loon in the Yukon Flats National Wildlife Refuge from aerial survey data. Line transect methods have the advantage of automatically taking into account "visibility bias" due to differences in the detectability of animals at different distances from the transect line. However, line transect methods applied to aerial surveys must overcome two difficulties: sighting distances are recorded inaccurately because of high travel speeds, so that in fact only a few reliable distance class counts are available. We propose a unimodal detection function that provides an estimate of the effective area lost due to the blind strip, under the assumption that a line of perfect detection exists parallel to the transect line. The unimodal detection function can also be applied when a blind strip is absent, and in certain instances when the maximum probability of detection is less than 100%. A simple bootstrap procedure to estimate standard error is illustrated. Finally, we present results from a small set of Monte Carlo experiments.
Distance correlation methods for discovering associations in large astrophysical databases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martínez-Gómez, Elizabeth; Richards, Mercedes T.; Richards, Donald St. P., E-mail: elizabeth.martinez@itam.mx, E-mail: mrichards@astro.psu.edu, E-mail: richards@stat.psu.edu
2014-01-20
High-dimensional, large-sample astrophysical databases of galaxy clusters, such as the Chandra Deep Field South COMBO-17 database, provide measurements on many variables for thousands of galaxies and a range of redshifts. Current understanding of galaxy formation and evolution rests sensitively on relationships between different astrophysical variables; hence an ability to detect and verify associations or correlations between variables is important in astrophysical research. In this paper, we apply a recently defined statistical measure called the distance correlation coefficient, which can be used to identify new associations and correlations between astrophysical variables. The distance correlation coefficient applies to variables of any dimension, can be used to determine smaller sets of variables that provide equivalent astrophysical information, is zero only when variables are independent, and is capable of detecting nonlinear associations that are undetectable by the classical Pearson correlation coefficient. Hence, the distance correlation coefficient provides more information than the Pearson coefficient. We analyze numerous pairs of variables in the COMBO-17 database with the distance correlation method and with the maximal information coefficient. We show that the Pearson coefficient can be estimated with higher accuracy from the corresponding distance correlation coefficient than from the maximal information coefficient. For given values of the Pearson coefficient, the distance correlation method has a greater ability than the maximal information coefficient to resolve astrophysical data into highly concentrated horseshoe- or V-shapes, which enhances classification and pattern identification. These results are observed over a range of redshifts beyond the local universe and for galaxies from elliptical to spiral.
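The coefficient itself is compact to compute from double-centred pairwise distance matrices (the biased V-statistic form of Székely and Rizzo), at a quadratic-in-n memory cost. The parabola example exhibits a strong nonlinear association that Pearson's r misses:

```python
import numpy as np

def distance_correlation(x, y):
    """Distance correlation (biased V-statistic form): double-centre the
    pairwise distance matrices of each sample, then correlate them.
    It is zero only under independence, unlike Pearson's r."""
    x = x.reshape(len(x), -1)
    y = y.reshape(len(y), -1)
    def centred(z):
        d = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)
        return d - d.mean(0) - d.mean(1)[:, None] + d.mean()
    A, B = centred(x), centred(y)
    dcov2 = (A * B).mean()
    return np.sqrt(dcov2 / np.sqrt((A * A).mean() * (B * B).mean()))

t = np.random.default_rng(0).uniform(-1, 1, 500)
print(distance_correlation(t, t**2))   # strong association; Pearson r ~ 0
```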
A regularized approach for geodesic-based semisupervised multimanifold learning.
Fan, Mingyu; Zhang, Xiaoqin; Lin, Zhouchen; Zhang, Zhongfei; Bao, Hujun
2014-05-01
Geodesic distance, as an essential measurement of data dissimilarity, has been successfully used in manifold learning. However, most geodesic distance-based manifold learning algorithms have two limitations when applied to classification: 1) class information is rarely used in computing the geodesic distances between data points on manifolds, and 2) little attention has been paid to building an explicit dimension reduction mapping for extracting the discriminative information hidden in the geodesic distances. In this paper, we regard geodesic distance as a kind of kernel, which maps data from a linearly inseparable space to a linearly separable distance space. On this basis, a new semisupervised manifold learning algorithm, namely the regularized geodesic feature learning algorithm, is proposed. The method consists of three techniques: a semisupervised graph construction method, replacement of the original data points with feature vectors built from geodesic distances, and a new semisupervised dimension reduction method for the feature vectors. Experiments on the MNIST and USPS handwritten digit data sets, the MIT CBCL face versus nonface data set, and an intelligent traffic data set show the effectiveness of the proposed algorithm.
Boareto, Marcelo; Cesar, Jonatas; Leite, Vitor B P; Caticha, Nestor
2015-01-01
We introduce Supervised Variational Relevance Learning (Suvrel), a variational method to determine metric tensors to define distance-based similarity in pattern classification, inspired by relevance learning. The variational method is applied to a cost function that penalizes large intraclass distances and favors small interclass distances. We find analytically the metric tensor that minimizes the cost function. Preprocessing the patterns by applying linear transformations using the metric tensor yields a dataset which can be more efficiently classified. We test our methods using publicly available datasets, for some standard classifiers. Among these datasets, two were tested by the MAQC-II project and, even without the use of further preprocessing, our results improve on their performance.
A virtual computer lab for distance biomedical technology education.
Locatis, Craig; Vega, Anibal; Bhagwat, Medha; Liu, Wei-Li; Conde, Jose
2008-03-13
The National Library of Medicine's National Center for Biotechnology Information offers mini-courses which entail applying concepts in biochemistry and genetics to search genomics databases and other information sources. They are highly interactive and involve the use of 3D molecular visualization software that can be computationally taxing. Methods were devised to offer the courses at a distance while providing as much of the functionality of a computer lab, the venue where they are normally taught, as possible. The methods, which can be employed with varied videoconferencing technology and desktop sharing software, were used to deliver mini-courses at a distance in pilot applications where students could see demonstrations by the instructor and the instructor could observe and interact with students working at their remote desktops. Student ratings of the learning experience and comments to open-ended questions were similar to those when the courses are offered face to face. The real-time interaction and the instructor's ability to access student desktops from a distance in order to provide individual assistance and feedback were considered invaluable. The technologies and methods mimic much of the functionality of computer labs and may be usefully applied in any context where content changes frequently, training needs to be offered on complex computer applications at a distance in real time, and it is necessary for the instructor to monitor students as they work.
Test method for telescopes using a point source at a finite distance
NASA Technical Reports Server (NTRS)
Griner, D. B.; Zissa, D. E.; Korsch, D.
1985-01-01
A test method for telescopes that makes use of a focused ring formed by an annular aperture when using a point source at a finite distance is evaluated theoretically and experimentally. The results show that the concept can be applied to near-normal, as well as grazing incidence. It is particularly suited for X-ray telescopes because of their intrinsically narrow annular apertures, and because of the largely reduced diffraction effects.
VizieR Online Data Catalog: Star clusters distances and extinctions (Buckner+, 2013)
NASA Astrophysics Data System (ADS)
Buckner, A. S. M.; Froebrich, D.
2014-10-01
Determining star cluster distances is essential to analyse their properties and distribution in the Galaxy. In particular, it is desirable to have a reliable, purely photometric distance estimation method for large samples of newly discovered cluster candidates e.g. from the Two Micron All Sky Survey, the UK Infrared Deep Sky Survey Galactic Plane Survey and VVV. Here, we establish an automatic method to estimate distances and reddening from near-infrared photometry alone, without the use of isochrone fitting. We employ a decontamination procedure of JHK photometry to determine the density of stars foreground to clusters and a galactic model to estimate distances. We then calibrate the method using clusters with known properties. This allows us to establish distance estimates with better than 40 per cent accuracy. We apply our method to determine the extinction and distance values to 378 known open clusters and 397 cluster candidates from the list of Froebrich, Scholz & Raftery (2007MNRAS.374..399F, Cat. J/MNRAS/374/399). We find that the sample is biased towards clusters of a distance of approximately 3 kpc, with typical distances between 2 and 6 kpc. Using the cluster distances and extinction values, we investigate how the average extinction per kiloparsec distance changes as a function of the Galactic longitude. We find a systematic dependence that can be approximated by AH(l) [mag/kpc] = 0.10 + 0.001 × |l - 180°|/° for regions more than 60° from the Galactic Centre. (1 data file).
Spatial analysis of groundwater levels using Fuzzy Logic and geostatistical tools
NASA Astrophysics Data System (ADS)
Theodoridou, P. G.; Varouchakis, E. A.; Karatzas, G. P.
2017-12-01
The spatial variability evaluation of the water table of an aquifer provides useful information in water resources management plans. Geostatistical methods are often employed to map the free surface of an aquifer. In geostatistical analysis using Kriging techniques the selection of the optimal variogram is very important for the optimal method performance. This work compares three different criteria to assess the theoretical variogram that fits to the experimental one: the Least Squares Sum method, the Akaike Information Criterion and the Cressie's Indicator. Moreover, variable distance metrics such as the Euclidean, Minkowski, Manhattan, Canberra and Bray-Curtis are applied to calculate the distance between the observation and the prediction points, that affects both the variogram calculation and the Kriging estimator. A Fuzzy Logic System is then applied to define the appropriate neighbors for each estimation point used in the Kriging algorithm. The two criteria used during the Fuzzy Logic process are the distance between observation and estimation points and the groundwater level value at each observation point. The proposed techniques are applied to a data set of 250 hydraulic head measurements distributed over an alluvial aquifer. The analysis showed that the Power-law variogram model and Manhattan distance metric within ordinary kriging provide the best results when the comprehensive geostatistical analysis process is applied. On the other hand, the Fuzzy Logic approach leads to a Gaussian variogram model and significantly improves the estimation performance. The two different variogram models can be explained in terms of a fractional Brownian motion approach and of aquifer behavior at local scale. Finally, maps of hydraulic head spatial variability and of predictions uncertainty are constructed for the area with the two different approaches comparing their advantages and drawbacks.
Modern Geometric Methods of Distance Determination
NASA Astrophysics Data System (ADS)
Thévenin, Frédéric; Falanga, Maurizio; Kuo, Cheng Yu; Pietrzyński, Grzegorz; Yamaguchi, Masaki
2017-11-01
Building a 3D picture of the Universe at any distance is one of the major challenges in astronomy, from the nearby Solar System to distant quasars and galaxies. This goal has forced astronomers to develop techniques to estimate or to measure the distance of point sources on the sky. While most distance estimates used since the beginning of the 20th century are based on our understanding of the physics of objects of the Universe: stars, galaxies, QSOs, the direct measures of distances are based on the geometric methods as developed in ancient Greece: the parallax, which was applied to stars for the first time in the mid-19th century. In this review, different techniques of geometrical astrometry applied to various stellar and cosmological (megamaser) objects are presented. They consist of parallax measurements from ground-based equipment or from space missions, but also of the study of binary stars or, as we shall see, of binary systems in distant extragalactic sources using radio telescopes. The Gaia mission will be presented in the context of stellar physics and galactic structure, because this key space mission in astronomy will bring a breakthrough in our understanding of stars, galaxies and the Universe in their nature and evolution with time. Measuring the distance to a star is the starting point for an unbiased description of its physics and the estimate of its fundamental parameters, like its age. Applying these studies to candles such as the Cepheids will impact our large-distance studies and the calibration of other candles. The text is constructed as follows: after introducing the parallax concept and measurement, we briefly present the Gaia satellite, which will provide the base catalogue of stellar astronomy in the near future. Cepheids are discussed next to demonstrate the state of the art in distance measurements in the Universe with these variable stars, with the objective of 1% error in distances that could be applied to our closest galaxy, the LMC, and better constrain the distances of large sub-structures around the Milky Way. Then exciting objects like X-ray binaries are presented in two parts corresponding to "low" or "high" mass stars with compact objects observed with X-ray satellites. We demonstrate the capability of these objects to have their distances measured with high accuracy, which not only helps in the study of these objects but could also help to measure the distance of the structures to which they belong. For cosmological objects and large distances of megaparsecs, we present what has been developed for more than 20 years in the geometric distance measurements of megamasers, the ultimate goal being the estimation of the H0 parameter.
NASA Astrophysics Data System (ADS)
Gui, Luying; He, Jian; Qiu, Yudong; Yang, Xiaoping
2017-01-01
This paper presents a variational level set approach to segment lesions with compact shapes in medical images. In this study, we address the problem of segmenting hepatocellular carcinomas, which usually have various shapes, variable intensities, and weak boundaries. An efficient constraint, called the isoperimetric constraint, which describes the compactness of shapes, is applied in this method. In addition, in order to ensure precise segmentation and stable movement of the level set, a distance regularization is also implemented in the proposed variational framework. Our method is applied to segment various hepatocellular carcinoma regions in Computed Tomography images with promising results. Comparison results also show that the proposed method is more accurate than two other approaches.
An Exact Algorithm to Compute the Double-Cut-and-Join Distance for Genomes with Duplicate Genes.
Shao, Mingfu; Lin, Yu; Moret, Bernard M E
2015-05-01
Computing the edit distance between two genomes is a basic problem in the study of genome evolution. The double-cut-and-join (DCJ) model has formed the basis for most algorithmic research on rearrangements over the last few years. The edit distance under the DCJ model can be computed in linear time for genomes without duplicate genes, while the problem becomes NP-hard in the presence of duplicate genes. In this article, we propose an integer linear programming (ILP) formulation to compute the DCJ distance between two genomes with duplicate genes. We also provide an efficient preprocessing approach to simplify the ILP formulation while preserving optimality. Comparison on simulated genomes demonstrates that our method outperforms MSOAR in computing the edit distance, especially when the genomes contain long duplicated segments. We also apply our method to assign orthologous gene pairs among human, mouse, and rat genomes, where once again our method outperforms MSOAR.
A Distance Measure for Genome Phylogenetic Analysis
NASA Astrophysics Data System (ADS)
Cao, Minh Duc; Allison, Lloyd; Dix, Trevor
Phylogenetic analyses of species based on single genes or parts of the genomes are often inconsistent because of factors such as variable rates of evolution and horizontal gene transfer. The availability of more and more sequenced genomes allows phylogeny construction from complete genomes, which is less sensitive to such inconsistency. For such long sequences, construction methods like maximum parsimony and maximum likelihood are often not feasible due to their intensive computational requirements. Another class of tree construction methods, namely distance-based methods, requires a measure of the distance between any two genomes. Some measures, such as the evolutionary edit distance of gene order and gene content, are computationally expensive or do not perform well when the gene content of the organisms is similar. This study presents an information-theoretic measure of genetic distance between genomes based on the biological compression algorithm expert model. We demonstrate that our distance measure can be applied to reconstruct the consensus phylogenetic tree of a number of Plasmodium parasites from their genomes, the statistical bias of which would mislead conventional analysis methods. Our approach is also used to successfully construct a plausible evolutionary tree for the γ-Proteobacteria group, whose genomes are known to contain many horizontally transferred genes.
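The paper's measure is built on the expert-model compressor. The widely used normalized compression distance conveys the same information-theoretic idea with any off-the-shelf compressor, so the sketch below substitutes zlib purely for illustration:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: how much better x+y compresses
    together than apart, normalised to roughly [0, 1]."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

seq_a = b"ACGTACGTACGTGGGTTT" * 50
seq_b = b"ACGTACGTACGAGGGTTT" * 50                # near relative of seq_a
seq_c = b"TTTTGGGGCCCCAAAA" * 60                  # unrelated composition
print(ncd(seq_a, seq_b), ncd(seq_a, seq_c))       # related pair scores lower
```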
High Performance Automatic Character Skinning Based on Projection Distance
NASA Astrophysics Data System (ADS)
Li, Jun; Lin, Feng; Liu, Xiuling; Wang, Hongrui
2018-03-01
Skeleton-driven deformation methods have been commonly used for character deformation. The process of painting skin weights for character deformation is a long-winded task requiring manual tweaking. We present a novel method to calculate skinning weights automatically from a 3D human geometric model and the corresponding skeleton. The method first groups each mesh vertex of the 3D human model with a skeleton bone by the minimum distance from the vertex to each bone. Second, it calculates each vertex's weights for the adjacent bones from the distance of the vertex's projection point to the bone joints. Our method's output can be applied not only to any kind of skeleton-driven deformation, but also to motion-capture-driven (mocap-driven) deformation. Experimental results show that our method not only has strong generality and robustness, but also high performance.
Chelliah, Kanthasamy; Raman, Ganesh G.; Muehleisen, Ralph T.
2016-07-07
This paper evaluates the performance of various regularization parameter choice methods applied to different approaches of nearfield acoustic holography when a very nearfield measurement is not possible. For a fixed grid resolution, the larger the hologram distance, the larger the error in the naive nearfield acoustic holography reconstructions. These errors can be smoothed out by using an appropriate order of regularization. In conclusion, this study shows that by using a fixed/manual choice of regularization parameter, instead of automated parameter choice methods, reasonably accurate reconstructions can be obtained even when the hologram distance is 16 times larger than the grid resolution.
NASA Astrophysics Data System (ADS)
Skórnik-Pokarowska, Urszula; Orłowski, Arkadiusz
2004-12-01
We calculate the ultrametric distance between pairs of stocks that belong to the same portfolio. The ultrametric distance allows us to distinguish groups of related shares. In this way, we can construct a portfolio taxonomy that can be used for constructing an efficient portfolio. We also construct a portfolio taxonomy based not only on stock prices but also on economic indices such as the liquidity ratio, debt ratio and sales profitability ratio. We show that a good investment strategy can be obtained by applying the so-called Constant Rebalanced Portfolio strategy to the portfolio chosen by the taxonomy method.
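The standard route from prices to an ultrametric taxonomy runs: correlation matrix, the metric distance d = sqrt(2(1 - rho)), then single-linkage clustering, whose cophenetic distances form the subdominant ultrametric. Whether this Mantegna-style construction matches the authors' exact procedure is an assumption; the returns below are synthetic:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

rng = np.random.default_rng(5)
returns = rng.normal(size=(250, 8))               # rows = days, cols = stocks
returns[:, 1] += 0.7 * returns[:, 0]              # two related shares

rho = np.corrcoef(returns, rowvar=False)
D = np.sqrt(np.clip(2.0 * (1.0 - rho), 0.0, None))  # correlation -> metric
np.fill_diagonal(D, 0.0)
# Single linkage yields the subdominant ultrametric behind the taxonomy
Z = linkage(squareform(D, checks=False), method='single')
```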
Explosion yield estimation from pressure wave template matching
Arrowsmith, Stephen; Bowman, Daniel
2017-01-01
A method for estimating the yield of explosions from shock-wave and acoustic-wave measurements is presented. The method exploits full waveforms by comparing pressure measurements against an empirical stack of prior observations using scaling laws. The approach can be applied to measurements across a wide range of source-to-receiver distances. The method is applied to data from two explosion experiments in different regions, leading to mean relative errors in yield estimates of 0.13 using prior data from the same region, and 0.2 when applied to a new region. PMID:28618805
Error Estimation for the Linearized Auto-Localization Algorithm
Guevara, Jorge; Jiménez, Antonio R.; Prieto, Jose Carlos; Seco, Fernando
2012-01-01
The Linearized Auto-Localization (LAL) algorithm estimates the position of beacon nodes in Local Positioning Systems (LPSs), using only the distance measurements to a mobile node whose position is also unknown. The LAL algorithm calculates the inter-beacon distances, used for the estimation of the beacons’ positions, from the linearized trilateration equations. In this paper we propose a method to estimate the propagation of the errors of the inter-beacon distances obtained with the LAL algorithm, based on a first order Taylor approximation of the equations. Since the method depends on such approximation, a confidence parameter τ is defined to measure the reliability of the estimated error. Field evaluations showed that by applying this information to an improved weighted-based auto-localization algorithm (WLAL), the standard deviation of the inter-beacon distances can be improved by more than 30% on average with respect to the original LAL method. PMID:22736965
NASA Astrophysics Data System (ADS)
Hara, T.
2012-12-01
Hara (2007, EPS, 59, 227-231) developed a method to determine earthquake magnitudes using durations of high-frequency energy radiation and displacement amplitudes of tele-seismic events, and showed that it was applicable to huge events such as the 2004 Sumatra earthquake (Mw 9.0 according to the Global CMT catalog; in the following, moment magnitudes are from their estimates). Since Hara (2007) developed this method, we have been applying it to large shallow events and have confirmed its effectiveness. The results for several events are available at the web site of our institute (http://iisee.kenken.go.jp/quakes.htm). Also, Hara (2011, EPS, 63, 525-528) applied this method to the 2011 Off the Pacific Coast of Tohoku Earthquake (Mw 9.1) and showed that it worked well. In these applications, we used only waveform data recorded in the tele-seismic distance range (30 - 85 degrees). In order to obtain a magnitude estimate faster, it is necessary to analyze regional distance range data. In this study, we applied the method of Hara (2007) to waveform data recorded in the regional distance range (8 - 30 degrees) to investigate its applicability. We slightly modified the method by changing the durations of the time series used for analysis, considering the arrivals of high-amplitude Rayleigh waves. We selected six recent huge earthquakes (moment magnitudes equal to or greater than 8.5): the December 26, 2004 Sumatra (Mw 9.0), the March 28, 2005 Northern Sumatra (Mw 8.6), the September 12, 2007 Southern Sumatra (Mw 8.5), the February 27, 2010 Chile (Mw 8.8), the March 11, 2011 off the Pacific Coast of Tohoku (Mw 9.1), and the April 11, 2012 off West Coast of Northern Sumatra (Mw 8.6) events. We retrieved BHZ channel waveform data from the IRIS DMC. For the 2004 Sumatra and 2010 Chile earthquakes, only a few waveform data are available. The estimated magnitudes are 9.16, 8.66, 8.53, 8.83, 9.15, and 8.70, respectively. Also, the estimated high-frequency energy radiation durations are consistent with the centroid time shifts of the Global CMT catalog. These preliminary results suggest that the method of Hara (2007) is applicable to waveform data recorded in the regional distance range. We plan to apply this method to smaller events to investigate a possible systematic deviation from analyses of tele-seismic records.
Using mark-recapture distance sampling methods on line transect surveys
Burt, Louise M.; Borchers, David L.; Jenkins, Kurt J.; Marques, Tigao A
2014-01-01
Synthesis and applications. Mark-recapture distance sampling (MRDS) is a widely used method for estimating animal density and abundance when detection of animals at distance zero is not certain. Two observer configurations and three statistical models are described, and it is important to choose the most appropriate model for the observer configuration and target species in question. By way of making the methods more accessible to practicing ecologists, we describe the key ideas underlying MRDS methods and the sometimes subtle differences between them, and we illustrate these by applying different kinds of MRDS method to surveys of two different target species using different survey configurations.
NASA Astrophysics Data System (ADS)
Ji, Yuanbo; van der Geest, Rob J.; Nazarian, Saman; Lelieveldt, Boudewijn P. F.; Tao, Qian
2018-03-01
Anatomical objects in medical images very often have dual contours or surfaces that are highly correlated. Manually segmenting both of them by following local image details is tedious and subjective. In this study, we proposed a two-layer region-based level set method with a soft distance constraint, which not only regularizes the level set evolution at two levels, but also imposes prior information on wall thickness in an effective manner. By updating the level set function and distance constraint functions alternatingly, the method simultaneously optimizes both contours while regularizing their distance. The method was applied to segment the inner and outer wall of both the left atrium (LA) and left ventricle (LV) from MR images, using a rough initialization from inside the blood pool. Compared to manual annotation from experienced observers, the proposed method achieved an average perpendicular distance (APD) of less than 1 mm for the LA segmentation, and less than 1.5 mm for the LV segmentation, at both inner and outer contours. The method can be used as a practical tool for fast and accurate dual wall annotations given proper initialization.
Full velocity difference car-following model considering desired inter-vehicle distance
NASA Astrophysics Data System (ADS)
Xin, Tong; Yi, Liu; Rongjun, Cheng; Hongxia, Ge
Based on the full velocity difference car-following model, an improved car-following model is put forward by considering the driver's desired inter-vehicle distance. The stability conditions are obtained by applying the control method. The results of the theoretical analysis are used to demonstrate the advantages of our model. Numerical simulations show that traffic congestion is alleviated when the desired inter-vehicle distance is taken into account in the full velocity difference car-following model.
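For reference, the baseline full velocity difference model that the paper extends updates each follower by a_n = kappa*(V(dx_n) - v_n) + lambda*dv_n. The sketch simulates that baseline with a commonly used tanh optimal-velocity function; the paper's desired inter-vehicle distance term is not included:

```python
import numpy as np

def V_opt(dx):
    # optimal velocity function; tanh form and parameters are assumptions
    return 6.75 + 7.91 * np.tanh(0.13 * (dx - 5.0) - 1.57)

def fvd_step(x, v, dt=0.1, kappa=0.41, lam=0.5):
    """One Euler step of the full velocity difference (FVD) model.
    Cars are ordered front (index 0) to back; the leader keeps its speed."""
    headway = x[:-1] - x[1:]                      # gap to the car ahead
    accel = kappa * (V_opt(headway) - v[1:]) + lam * (v[:-1] - v[1:])
    v = v.copy()
    v[1:] += accel * dt
    return x + v * dt, v

x = np.arange(10, 0, -1) * 15.0                   # ten cars, 15 m apart
v = np.full(10, 4.0)                              # m/s
for _ in range(2000):
    x, v = fvd_step(x, v)                         # platoon relaxes to V_opt
```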
Rismanchian, Farhood; Lee, Young Hoon
2017-07-01
This article proposes an approach to help designers analyze complex care processes and identify the optimal layout of an emergency department (ED) considering several objectives simultaneously. These objectives include minimizing the distances traveled by patients, maximizing design preferences, and minimizing relocation costs. Rising demand for healthcare services leads to increasing demand for new hospital buildings as well as for renovating existing ones. Operations management techniques have been successfully applied in both manufacturing and service industries to design more efficient layouts. However, the high complexity of healthcare processes makes it challenging to apply these techniques in healthcare environments. Process mining techniques were applied to address the problem of complexity and to enhance healthcare process analysis. Process-related information, such as information about the clinical pathways, was extracted from the information system of an ED. A goal programming approach was then employed to find a single layout that would simultaneously satisfy several objectives. The layout identified using the proposed method improved the distances traveled by noncritical and critical patients by 42.2% and 47.6%, respectively, and minimized the relocation costs. This study has shown that an efficient placement of the clinical units yields remarkable improvements in the distances traveled by patients.
Reshadat, S; Saedi, S; Zangeneh, A; Ghasemi, S R; Gilan, N R; Karbasi, A; Bavandpoor, E
2015-09-08
Geographic information systems (GIS) analysis has not been widely used in underdeveloped countries to ensure that vulnerable populations have accessibility to primary health-care services. This study applied GIS methods to analyse the spatial accessibility to urban primary-care centres of the population in Kermanshah city, Islamic Republic of Iran, by age and sex groups. In a descriptive-analytical study over 3 time periods, network analysis, mean centre and standard distance methods were applied using ArcGIS 9.3. The analysis was based on a standard radius of 750 m distance from health centres, walking speed of 1 m/s and desired access time to health centres of 12.5 mins. The proportion of the population with inadequate geographical access to health centres rose from 47.3% in 1997 to 58.4% in 2012. The mean centre and standard distance mapping showed that the spatial distribution of health centres in Kermanshah needed to be adjusted to changes in population distribution.
KINETIC TOMOGRAPHY. I. A METHOD FOR MAPPING THE MILKY WAY’S INTERSTELLAR MEDIUM IN FOUR DIMENSIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tchernyshyov, Kirill; Peek, J. E. G.
2017-01-01
We have developed a method for deriving the distribution of the Milky Way's interstellar medium as a function of longitude, latitude, distance, and line-of-sight velocity. This method takes as input maps of reddening as a function of longitude, latitude, and distance, and maps of line emission as a function of longitude, latitude, and line-of-sight velocity. We have applied this method to data sets covering much of the Galactic plane. The output of this method correctly reproduces the line-of-sight velocities of high-mass star-forming regions with known distances from Reid et al. and qualitatively agrees with results from the Milky Way kinematics literature. These maps will be useful for measuring flows of gas around the Milky Way's spiral arms and into and out of giant molecular clouds.
On Stellar Winds as a Source of Mass: Applying Bondi-Hoyle-Lyttleton Accretion
NASA Astrophysics Data System (ADS)
Detweiler, L. G.; Yates, K.; Siem, E.
2017-12-01
The interaction between planets orbiting stars and the stellar wind that stars emit is investigated. The main goal of this research is to devise a method for calculating the amount of mass accumulated by an arbitrary planet from the stellar wind of its parent star via accretion processes. To achieve this goal, the Bondi-Hoyle-Lyttleton (BHL) mass accretion rate equation and model is employed. Using the BHL equation requires knowing several stellar wind parameters, including the velocity, density, and speed of sound of the wind. To create a method applicable to arbitrary planets orbiting arbitrary stars, Eugene Parker's isothermal stellar wind model is used to calculate these parameters. In an isothermal wind the speed of sound is simple to compute; however, the velocity and density equations are transcendental, so their solutions must be approximated numerically. By combining Parker's isothermal stellar wind model with the BHL accretion equation, a method for computing planetary accretion rates inside a star's stellar wind is realized. This method is then applied to a variety of scenarios. First, it is used to calculate the amount of mass that our solar system's planets will accrete from the solar wind over the Sun's lifetime. Then, some theoretical situations are considered. We consider the amount of mass various brown dwarfs would accrete from the solar wind over the Sun's lifetime if they were orbiting the Sun at Jupiter's distance. Very high-mass brown dwarfs accrete a significant amount of mass; the brown dwarf 15 Sagittae B actually accretes enough mass to surpass the mass limit for hydrogen fusion. Since 15 Sagittae B orbits a star very similar to our Sun, this motivated calculations for 15 Sagittae B orbiting our Sun at its true distance from its star, 15 Sagittae. At this distance it does not accrete enough mass to surpass the hydrogen fusion limit. Finally, we apply the method to brown dwarfs orbiting a 15 solar mass star at Jupiter's distance, and find that a significantly smaller amount of mass is accreted compared to the same brown dwarfs orbiting our Sun at the same distance.
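A minimal numerical sketch of the pipeline the abstract describes, under the stated isothermal-wind assumption: solve Parker's transcendental wind equation for the velocity, get the density from mass conservation, and feed both into the BHL rate. The wind mass-loss rate, sound speed, and orbital numbers below are illustrative values, not the paper's.

```python
import numpy as np
from scipy.optimize import brentq

G = 6.674e-11          # m^3 kg^-1 s^-2
M_sun = 1.989e30       # kg

def parker_wind(r, M_star, c_s, mdot_wind):
    """Solve Parker's transcendental isothermal wind equation for v(r), rho(r)."""
    r_c = G * M_star / (2.0 * c_s**2)            # critical (sonic) radius
    rhs = 4.0 * np.log(r / r_c) + 4.0 * r_c / r - 3.0
    f = lambda v: (v / c_s)**2 - np.log((v / c_s)**2) - rhs
    if r > r_c:                                   # supersonic branch
        v = brentq(f, c_s * (1 + 1e-9), 100.0 * c_s)
    else:                                         # subsonic branch
        v = brentq(f, 1e-9 * c_s, c_s * (1 - 1e-9))
    rho = mdot_wind / (4.0 * np.pi * r**2 * v)    # mass conservation
    return v, rho

def bhl_rate(M_planet, rho, v_rel, c_s):
    """Bondi-Hoyle-Lyttleton accretion rate."""
    return 4.0 * np.pi * G**2 * M_planet**2 * rho / (v_rel**2 + c_s**2)**1.5

# Illustrative numbers only: a Jupiter-mass body at 5.2 au in a solar-like wind
c_s = 1.3e5                                       # isothermal sound speed, m/s
mdot_wind = 2e-14 * M_sun / 3.15e7                # ~solar mass-loss rate, kg/s
v, rho = parker_wind(5.2 * 1.496e11, M_sun, c_s, mdot_wind)
v_rel = np.hypot(v, 13.1e3)                       # rough wind + orbital speed
print(bhl_rate(1.9e27, rho, v_rel, c_s), "kg/s")
```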
A Novel Method for Accurate Operon Predictions in All SequencedProkaryotes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Price, Morgan N.; Huang, Katherine H.; Alm, Eric J.
2004-12-01
We combine comparative genomic measures and the distance separating adjacent genes to predict operons in 124 completely sequenced prokaryotic genomes. Our method automatically tailors itself to each genome using sequence information alone, and thus can be applied to any prokaryote. For Escherichia coli K12 and Bacillus subtilis, our method is 85 and 83% accurate, respectively, which is similar to the accuracy of methods that use the same features but are trained on experimentally characterized transcripts. In Halobacterium NRC-1 and in Helicobacter pylori, our method correctly infers that genes in operons are separated by shorter distances than they are in E. coli, and its predictions using distance alone are more accurate than distance-only predictions trained on a database of E. coli transcripts. We use microarray data from six phylogenetically diverse prokaryotes to show that combining intergenic distance with comparative genomic measures further improves accuracy and that our method is broadly effective. Finally, we survey operon structure across 124 genomes, and find several surprises: H. pylori has many operons, contrary to previous reports; Bacillus anthracis has an unusual number of pseudogenes within conserved operons; and Synechocystis PCC6803 has many operons even though it has unusually wide spacings between conserved adjacent genes.
Point process statistics in atom probe tomography.
Philippe, T; Duguay, S; Grancher, G; Blavette, D
2013-09-01
We present a review of spatial point processes as statistical models that we have designed for the analysis and treatment of atom probe tomography (APT) data. As a major advantage, these methods do not require sampling. The mean distance to the nearest neighbour is an attractive approach for exhibiting a non-random atomic distribution. A χ² test based on distance distributions to the nearest neighbour has been developed to detect deviation from randomness. Best-fit methods based on the first nearest neighbour distance (1 NN method) and the pair correlation function are presented and compared to assess the chemical composition of tiny clusters. Delaunay tessellation for cluster selection has also been illustrated. These statistical tools have been applied to APT experiments on microelectronics materials. Copyright © 2012 Elsevier B.V. All rights reserved.
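A sketch of the nearest-neighbour idea described above: for a random (Poisson) point process of intensity ρ in 3-D, the mean first-NN distance is Γ(4/3)·(4πρ/3)^(−1/3) ≈ 0.554 ρ^(−1/3), so a measured mean well below this hints at clustering. This simple comparison is an illustration, not the authors' exact χ² construction.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import gamma

def nn_distances(points):
    """First nearest-neighbour distance for every atom position."""
    tree = cKDTree(points)
    d, _ = tree.query(points, k=2)   # k=2: the first hit is the point itself
    return d[:, 1]

def randomness_check(points, volume):
    rho = len(points) / volume
    expected = gamma(4.0 / 3.0) * (4.0 * np.pi * rho / 3.0) ** (-1.0 / 3.0)
    observed = nn_distances(points).mean()
    # observed << expected suggests clustering; ~equal suggests randomness
    return observed, expected

# Illustrative use: 10000 uniformly random "atoms" in a 100x100x100 nm box
pts = np.random.rand(10000, 3) * 100.0
print(randomness_check(pts, 100.0**3))
```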
NASA Astrophysics Data System (ADS)
Wang, Kesheng; Cheng, Jia; Yao, Shiji; Lu, Yijia; Ji, Linhong; Xu, Dengfeng
2016-12-01
Electrostatic force measurement at the micro/nano scale is of great significance in science and engineering. In this paper, a practical way of applying the voltage is put forward, taking an electrostatic chuck from a real integrated circuit manufacturing process as the sample: voltage is applied to the probe and to the sample electrode, respectively, and the resulting phase differences of the probe oscillation are compared by amplitude-modulation atomic force microscopy. Based on the phase difference obtained from the experiment, the quantitative dependence of the absolute magnitude of the electrostatic force on the tip-sample distance and applied voltage is established by means of theoretical analysis and numerical simulation. The results show that the variation of the electrostatic force with distance and voltage at the micro/nano scale is similar to that at the macroscopic scale: the electrostatic force gradually decays with increasing distance and is essentially proportional to the square of the applied voltage. The applicable conditions of these laws are also discussed. In addition, a comparison of the present results with those of the energy dissipation method shows the two are consistent in general; the discrepancy decreases with increasing distance, and the effect of voltage on the discrepancy is small.
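For orientation, the two macroscopic trends reported above (quadratic growth with voltage, decay with distance) are exactly what the generic capacitive force expression predicts; the parallel-plate form on the right is a first-order analogy, not the authors' tip-sample model:

```latex
F \;=\; \frac{1}{2}\,\frac{\partial C}{\partial z}\,V^{2}
\;\;\xrightarrow{\;C = \varepsilon_0 A / z\;}\;\;
|F| \;=\; \frac{\varepsilon_0 A V^{2}}{2 z^{2}}
```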
Metallicity-Corrected Tip of the Red Giant Branch Distances to M66 and M96
NASA Astrophysics Data System (ADS)
Mager, Violet; Madore, Barry F.; Freedman, Wendy L.
2018-06-01
We present distances to M66 and M96 obtained through measurements of the tip of the red giant branch (TRGB) in HST ACS/WFC images, and give details of our method. The TRGB can be difficult to determine in color-magnitude diagrams where it is not a hard, well-defined edge, and we discuss how our edge-detection algorithm addresses this. Furthermore, metallicity affects the magnitude of the TRGB as a function of color, creating a slope in the edge that has been dealt with in the past by applying a red color cut-off. We instead apply a metallicity correction to the data that removes this effect, increasing the number of usable stars and providing a more accurate distance measurement.
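TRGB edge detection is commonly implemented as a discrete first-derivative (Sobel-type) filter run over the binned I-band luminosity function, with the tip taken at the peak response. A minimal sketch of that standard ingredient follows; the kernel, bin width, and band choice are illustrative, and the paper's metallicity correction is not reproduced here.

```python
import numpy as np

def trgb_edge(i_mags, bin_width=0.05):
    """Locate the TRGB as the peak of an edge-filtered luminosity function."""
    bins = np.arange(i_mags.min(), i_mags.max() + bin_width, bin_width)
    lf, edges = np.histogram(i_mags, bins=bins)
    # Discrete first-derivative kernel: with np.convolve this yields
    # response[i] ~ lf[i+1] - lf[i-1], which peaks where star counts
    # jump sharply toward fainter magnitudes, i.e. at the tip.
    response = np.convolve(lf, [1, 0, -1], mode="same")
    peak = np.argmax(response)
    return 0.5 * (edges[peak] + edges[peak + 1])
```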
The expanding photosphere method applied to SN 1992am at cz = 14 600 km/s
NASA Technical Reports Server (NTRS)
Schmidt, Brian P.; Kirshner, Robert P.; Eastman, Ronald G.; Hamuy, Mario; Phillips, Mark M.; Suntzeff, Nicholas B.; Maza, Jose; Filippenko, Alexei V.; Ho, Luis C.; Matheson, Thomas
1994-01-01
We present photometry and spectroscopy of Supernova (SN) 1992am for five months following its discovery by the Calan/Cerro Tololo Inter-American Observatory (CTIO) SN search. These data show SN 1992am to be a type II-P supernova, displaying hydrogen in its spectrum and the typical shoulder in its light curve. The photometric data and the distance from our own analysis are used to construct the supernova's bolometric light curve, from which we estimate SN 1992am ejected approximately 0.30 solar mass of Ni-56, an amount four times larger than that of other well studied SNe II. SN 1992am's host galaxy lies at a redshift of cz = 14 600 km/s, making it one of the most distant SNe II discovered, and an important application of the Expanding Photosphere Method. Since z = 0.05 is large enough for redshift-dependent effects to matter, we develop the technique to derive luminosity distances with the Expanding Photosphere Method at any redshift, and apply this method to SN 1992am. The derived distance, D = 180 (+30/-25) Mpc, is independent of all other rungs in the extragalactic distance ladder. The redshift of SN 1992am's host galaxy is sufficiently large that uncertainties due to perturbations in the smooth Hubble flow should be smaller than 10%. The Hubble ratio derived from the distance and redshift of this single object is H_0 = 81 (+17/-15) km/s/Mpc. In the future, with more of these distant objects, we hope to establish an independent and statistically robust estimate of H_0 based solely on type II supernovae.
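The quoted Hubble ratio follows directly from the quoted distance and redshift:

```latex
H_0 \;=\; \frac{cz}{D} \;=\; \frac{14\,600\ \mathrm{km\,s^{-1}}}{180\ \mathrm{Mpc}}
\;\approx\; 81\ \mathrm{km\,s^{-1}\,Mpc^{-1}}
```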
A low-cost method for estimating energy expenditure during soccer refereeing.
Ardigò, Luca Paolo; Padulo, Johnny; Zuliani, Andrea; Capelli, Carlo
2015-01-01
This study aimed to apply a validated bioenergetics model of sprint running to recordings obtained from commercial basic high-sensitivity global positioning system receivers to estimate energy expenditure and physical activity variables during soccer refereeing. We studied five Italian fifth division referees during 20 official matches while they carried the receivers. By applying the model to the recorded speed and acceleration data, we calculated energy consumption during activity, mass-normalised total energy consumption, total distance, metabolically equivalent distance and their ratio over the entire match and the two halves. The main whole-match results were: energy consumption = 4729 ± 608 kJ, mass-normalised total energy consumption = 74 ± 8 kJ·kg⁻¹, total distance = 13,112 ± 1225 m, metabolically equivalent distance = 13,788 ± 1151 m and metabolically equivalent/total distance = 1.05 ± 0.05. By using a very low-cost device, it is possible to estimate the energy expenditure of soccer refereeing. The provided equation predicting mass-normalised total energy consumption versus total distance can supply information about the energy demand of soccer refereeing.
The sine method as a more accurate height predictor for hardwoods
Don C. Bragg
2007-01-01
Most hypsometers apply a mathematical technique that utilizes the tangent of angles and a horizontal distance to deliver the exact height of a tree under idealized circumstances. Unfortunately, these conditions are rarely met for hardwoods in the field. A "new" predictor based on sine and slope distance and discussed here does not require the same assumptions for...
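A sketch contrasting the two estimators: the tangent method needs a horizontal distance and assumes the treetop is vertically above the base, while the sine method uses slope distances measured directly to the top and base (e.g. with a laser rangefinder). Sign conventions and numbers below are illustrative and follow the abstract's description only loosely.

```python
import math

def tangent_height(horiz_dist, angle_top_deg, angle_base_deg):
    # Assumes the treetop lies vertically above the measured base point.
    return horiz_dist * (math.tan(math.radians(angle_top_deg))
                         + math.tan(math.radians(abs(angle_base_deg))))

def sine_height(slope_dist_top, angle_top_deg, slope_dist_base, angle_base_deg):
    # Uses laser slope distances to the actual top and base; no verticality
    # assumption, so leaning trees or offset crowns bias it far less.
    return (slope_dist_top * math.sin(math.radians(angle_top_deg))
            + slope_dist_base * math.sin(math.radians(abs(angle_base_deg))))

# Illustrative numbers: observer 20 m away, top at +45 degrees, base at -5
print(tangent_height(20.0, 45.0, -5.0))          # ~21.75 m
print(sine_height(28.3, 45.0, 20.1, -5.0))       # ~21.76 m
```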
Optimal Superpositioning of Flexible Molecule Ensembles
Gapsys, Vytautas; de Groot, Bert L.
2013-01-01
Analysis of the internal dynamics of a biological molecule requires the successful removal of overall translation and rotation. Particularly for flexible or intrinsically disordered peptides, this is a challenging task due to the absence of a well-defined reference structure that could be used for superpositioning. In this work, we started the analysis with a widely known formulation of an objective for the problem of superimposing a set of multiple molecules as variance minimization over an ensemble. A negative effect of this superpositioning method is the introduction of ambiguous rotations, where different rotation matrices may be applied to structurally similar molecules. We developed two algorithms to resolve the suboptimal rotations. The first approach minimizes the variance together with the distance of a structure to a preceding molecule in the ensemble. The second algorithm seeks minimal variance together with minimal distance to the nearest neighbors of each structure. The newly developed methods were applied to molecular-dynamics trajectories and normal-mode ensembles of the Aβ peptide, RS peptide, and lysozyme. These new (to our knowledge) superpositioning methods combine the benefits of variance and distance between nearest-neighbor(s) minimization, providing a solution for the analysis of intrinsic motions of flexible molecules and resolving ambiguous rotations. PMID:23332072
Auch, Alexander F; Klenk, Hans-Peter; Göker, Markus
2010-01-28
DNA-DNA hybridization (DDH) is a widely applied wet-lab technique to obtain an estimate of the overall similarity between the genomes of two organisms. Basing the species concept for prokaryotes ultimately on DDH was chosen by microbiologists as a pragmatic approach for deciding about the recognition of novel species, and also allowed a relatively high degree of standardization compared to other areas of taxonomy. However, DDH is tedious and error-prone and, first and foremost, cannot be used to incrementally build a comparative database. Recent studies have shown that in-silico methods for the comparison of genome sequences can be used to replace DDH. Considering the ongoing rapid technological progress of sequencing methods, genome-based prokaryote taxonomy is coming into reach. However, calculating distances between genomes depends on multiple choices of software and program settings. We here provide an overview of the modifications that can be applied to distance methods based on high-scoring segment pairs (HSPs) or maximally unique matches (MUMs) and that need to be documented. General recommendations on determining HSPs using BLAST or other algorithms are also provided. As a reference implementation, we introduce the GGDC web server (http://ggdc.gbdp.org).
Multislice CT perfusion imaging of the lung in detection of pulmonary embolism
NASA Astrophysics Data System (ADS)
Hong, Helen; Lee, Jeongjin
2006-03-01
We propose a new subtraction technique for accurately imaging lung perfusion and efficiently detecting pulmonary embolism in chest MDCT angiography. Our method is composed of five stages. First, an optimal segmentation technique extracts the same volume of the lungs, major airway and vascular structures from pre- and post-contrast images with different lung densities. Second, an initial registration based on the apex, hilar point and center of inertia (COI) of each unilateral lung corrects the gross translational mismatch. Third, the initial alignment is refined by iterative surface registration; for fast and robust convergence of the distance measure to the optimal value, a 3D distance map is generated by narrow-band distance propagation. Fourth, a 3D nonlinear filter is applied to the lung parenchyma to compensate for residual spiral artifacts and artifacts caused by heart motion. Fifth, enhanced vessels are visualized by subtracting the registered pre-contrast images from the post-contrast images. To facilitate visualization of parenchyma enhancement, color-coded mapping and image fusion are used. Our method has been successfully applied to pre- and post-contrast images of ten patients in chest MDCT angiography. Experimental results show that the performance of our method is very promising compared with conventional methods in terms of visual inspection, accuracy and processing time.
Determining attenuation properties of interfering fast and slow ultrasonic waves in cancellous bone.
Nelson, Amber M; Hoffman, Joseph J; Anderson, Christian C; Holland, Mark R; Nagatani, Yoshiki; Mizuno, Katsunori; Matsukawa, Mami; Miller, James G
2011-10-01
Previous studies have shown that interference between fast waves and slow waves can lead to observed negative dispersion in cancellous bone. In this study, the effects of overlapping fast and slow waves on measurements of the apparent attenuation as a function of propagation distance are investigated along with methods of analysis used to determine the attenuation properties. Two methods are applied to simulated data that were generated based on experimentally acquired signals taken from a bovine specimen. The first method uses a time-domain approach that was dictated by constraints imposed by the partial overlap of fast and slow waves. The second method uses a frequency-domain log-spectral subtraction technique on the separated fast and slow waves. Applying the time-domain analysis to the broadband data yields apparent attenuation behavior that is larger in the early stages of propagation and decreases as the wave travels deeper. In contrast, performing frequency-domain analysis on the separated fast waves and slow waves results in attenuation coefficients that are independent of propagation distance. Results suggest that features arising from the analysis of overlapping two-mode data may represent an alternate explanation for the previously reported apparent dependence on propagation distance of the attenuation coefficient of cancellous bone. © 2011 Acoustical Society of America
Caprihan, A; Pearlson, G D; Calhoun, V D
2008-08-15
Principal component analysis (PCA) is often used to reduce the dimension of data before applying more sophisticated data analysis methods such as non-linear classification algorithms or independent component analysis. This practice is based on selecting components corresponding to the largest eigenvalues. If the ultimate goal is separation of the data into two groups, however, this set of components need not have the most discriminatory power. We measured the distance between two such populations using the Mahalanobis distance and chose the eigenvectors to maximize it, a modified PCA method which we call discriminant PCA (DPCA). DPCA was applied to diffusion tensor-based fractional anisotropy images to distinguish age-matched schizophrenia subjects from healthy controls. The performance of the proposed method was evaluated by the leave-one-out method. We show that for this fractional anisotropy data set, the classification error with 60 components was close to the minimum error, and that the Mahalanobis distance was twice as large with DPCA as with PCA. Finally, by masking the discriminant function with the white matter tracts of the Johns Hopkins University atlas, we identified the left superior longitudinal fasciculus as the tract giving the least classification error. In addition, with six optimally chosen tracts the classification error was zero.
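A minimal sketch of the idea: rank PCA directions not by eigenvalue but by how far apart they push the two groups, using a per-component Mahalanobis-style separation. This is a simplified illustration, not the authors' exact DPCA formulation.

```python
import numpy as np

def dpca_components(X, y, n_keep=10):
    """Pick PCA directions by two-group separation rather than variance."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)   # ordinary PCA basis
    scores = Xc @ Vt.T                          # projections on all components
    g0, g1 = scores[y == 0], scores[y == 1]
    # Per-component Mahalanobis-style separation:
    # squared mean difference over pooled variance
    pooled = 0.5 * (g0.var(axis=0) + g1.var(axis=0))
    sep = (g0.mean(axis=0) - g1.mean(axis=0)) ** 2 / pooled
    order = np.argsort(sep)[::-1]               # most discriminative first
    return Vt[order[:n_keep]], sep[order[:n_keep]]

# Illustrative use on random two-class data with signal on one feature
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50)); y = np.repeat([0, 1], 50)
X[y == 1, 3] += 1.0
comps, sep = dpca_components(X, y)
```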
NASA Astrophysics Data System (ADS)
Wang, Shu; Chen, Xiaodian; de Grijs, Richard; Deng, Licai
2018-01-01
Classical Cepheids are well-known and widely used distance indicators. As distance and extinction are usually degenerate, it is important to develop suitable methods to robustly anchor the distance scale. Here, we introduce a near-infrared optimal distance method to determine both the extinction values of and distances to a large sample of 288 Galactic classical Cepheids. The overall uncertainty in the derived distances is less than 4.9%. We compare our newly determined distances with previously published distances to the same Cepheids based on Hubble Space Telescope parallax measurements, the IR surface brightness method, Wesenheit functions, and the main-sequence fitting method. The systematic deviation of the distances determined here with respect to those of previous publications is less than 1%-2%. Hence, we constructed Galactic mid-IR period-luminosity (PL) relations for classical Cepheids in the four Wide-Field Infrared Survey Explorer (WISE) bands (W1, W2, W3, and W4) and the four Spitzer Space Telescope bands ([3.6], [4.5], [5.8], and [8.0]). Based on our sample of hundreds of Cepheids, the WISE PL relations have been determined for the first time; their dispersion is approximately 0.10 mag. Using the currently most complete sample, our Spitzer PL relations represent a significant improvement in accuracy, especially in the [3.6] band, which has the smallest dispersion (0.066 mag). In addition, the average mid-IR extinction curve for Cepheids has been obtained: A_W1/A_Ks ≈ 0.560, A_W2/A_Ks ≈ 0.479, A_W3/A_Ks ≈ 0.507, A_W4/A_Ks ≈ 0.406, A_[3.6]/A_Ks ≈ 0.481, A_[4.5]/A_Ks ≈ 0.469, A_[5.8]/A_Ks ≈ 0.427, and A_[8.0]/A_Ks ≈ 0.427 mag.
Research on cardiovascular disease prediction based on distance metric learning
NASA Astrophysics Data System (ADS)
Ni, Zhuang; Liu, Kui; Kang, Guixia
2018-04-01
Distance metric learning algorithms have been widely applied to medical diagnosis and have exhibited their strengths in classification problems. The k-nearest neighbour (kNN) classifier is an efficient method which treats each feature equally. The large margin nearest neighbour classifier (LMNN) improves the accuracy of kNN by learning a global distance metric, but does not consider the locality of the data distribution. In this paper, we propose a new distance metric algorithm adopting a cosine metric and LMNN, named COS-SUBLMNN, which pays more attention to the local features of the data to overcome this shortcoming of LMNN and improve classification accuracy. The proposed methodology is verified on CVD patient vectors derived from real-world medical data. The experimental results show that our method provides higher accuracy than kNN and LMNN, which demonstrates the effectiveness of the CVD risk-prediction model based on COS-SUBLMNN.
NASA Astrophysics Data System (ADS)
Onoyama, Takashi; Maekawa, Takuya; Kubota, Sen; Tsuruta, Setuso; Komoda, Norihisa
To build a cooperative logistics network covering multiple enterprises, a planning method that can build a long-distance transportation network is required, and many strict constraints are imposed on this type of problem. To solve these strictly constrained problems, a selfish constraint satisfaction genetic algorithm (GA) is proposed, in which each gene of an individual satisfies only its own constraint, disregarding the constraints of the other genes in the same individual. Moreover, a constraint pre-checking method is applied to improve the GA convergence speed. The experimental results show the proposed method can obtain an accurate solution in a practical response time.
Lateral migration of a microdroplet under optical forces in a uniform flow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cho, Hyunjun; Chang, Cheong Bong; Jung, Jin Ho
2014-12-15
The behavior of a microdroplet in a uniform flow and subjected to a vertical optical force applied by a loosely focused Gaussian laser beam was studied numerically. The lattice Boltzmann method was applied to obtain the two-phase flow field, and the dynamic ray tracing method was adopted to calculate the optical force. The optical forces acting on the spherical droplets agreed well with the analytical values, and the numerically predicted droplet migration distances agreed well with the experimentally obtained values. Simulations of the various flow and optical parameters showed that the droplet migration distance nondimensionalized by the droplet radius is proportional to the S number (z_d/r_p = 0.377 S), which is the ratio of the optical force to the viscous drag. The effect of the surface tension was also examined; the results indicated that the surface tension influences the droplet migration distance to a lesser degree than the flow and optical parameters. The results of the present work hold for refractive indices of the mean fluid and the droplet of 1.33 and 1.59, respectively.
The Hetu'u Global Network: Measuring the Distance to the Sun Using the June 5th/6th Transit of Venus
ERIC Educational Resources Information Center
Faherty, Jacqueline K.; Rodriguez, David R.; Miller, Scott T.
2012-01-01
In the spirit of historic astronomical endeavors, we invited school groups across the globe to collaborate in a solar distance measurement using the rare June 5/6th transit of Venus. In total, we recruited 19 school groups spread over 6 continents and 10 countries to participate in our Hetu'u Global Network. Applying the methods of French…
Yousef, Malik; Khalifa, Waleed; AbedAllah, Loai
2016-12-22
The performance of many learning and data mining algorithms depends critically on suitable metrics to assess efficiency over the input space. Learning a suitable metric from examples may, therefore, be the key to successful application of these algorithms. We have demonstrated that k-nearest neighbor (kNN) classification can be significantly improved by learning a distance metric from labeled examples. A clustering ensemble is used to define the distance between points according to how they co-cluster. This distance is then used within the framework of the kNN algorithm to define a classifier named the ensemble clustering kNN classifier (EC-kNN). In many instances in our experiments we achieved the highest accuracy while SVM failed to perform as well. In this study, we compare the performance of a two-class classifier using EC-kNN with different one-class and two-class classifiers. The comparison was applied to seven different plant microRNA species considering eight feature selection methods. The averaged results show that EC-kNN outperforms all other methods employed here as well as previously published results for the same data. In conclusion, this study shows that the chosen classifier performs well when the distance metric is carefully chosen.
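A sketch of the co-cluster distance driving EC-kNN: run many clusterings, define the distance between two points as the fraction of runs in which they fall in different clusters, then classify with kNN under that distance. The ensemble size and the use of k-means as the base clusterer are illustrative choices, not the paper's exact setup.

```python
import numpy as np
from collections import Counter
from sklearn.cluster import KMeans

def cocluster_distance(X, n_runs=50, k_clusters=8, seed=0):
    """D[i, j] = fraction of ensemble runs where i and j are NOT co-clustered."""
    n = len(X)
    same = np.zeros((n, n))
    rng = np.random.default_rng(seed)
    for _ in range(n_runs):
        labels = KMeans(n_clusters=k_clusters, n_init=5,
                        random_state=int(rng.integers(1 << 31))).fit_predict(X)
        same += (labels[:, None] == labels[None, :])
    return 1.0 - same / n_runs

def ec_knn_predict(D, train_idx, y_train, test_idx, k=5):
    """kNN vote using the precomputed co-cluster distance matrix D."""
    preds = []
    for i in test_idx:
        nearest = np.argsort(D[i, train_idx])[:k]
        preds.append(Counter(y_train[nearest]).most_common(1)[0][0])
    return np.array(preds)
```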
Distance measurement using frequency scanning interferometry with mode-hopped laser
NASA Astrophysics Data System (ADS)
Medhat, M.; Sobee, M.; Hussein, H. M.; Terra, O.
2016-06-01
In this paper, frequency scanning interferometry is implemented to measure distances up to 5 m absolutely. The setup consists of a Michelson interferometer, an external cavity tunable diode laser, and an ultra-low expansion (ULE) Fabry-Pérot (FP) cavity to measure the frequency scanning range. The distance is measured by acquiring the interference fringes from the Michelson and the FP interferometers simultaneously while scanning the laser frequency. An online fringe processing technique is developed to calculate the distance from the fringe ratio while removing the parts resulting from the laser mode-hops, without significantly affecting the measurement accuracy. This fringe processing method enables accurate distance measurements up to 5 m with a measurement repeatability of ±3.9×10⁻⁶ L. An accurate translation stage is used to find the free spectral range of the FP cavity and therefore allow accurate measurement. Finally, the setup is applied to the short-distance calibration of a laser distance meter (LDM).
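The underlying fringe-ratio relation is standard for frequency scanning interferometry: counting N_M Michelson fringes while the laser sweeps over N_FP free spectral ranges of the reference cavity gives the distance directly (vacuum form shown; the air refractive index divides the result):

```latex
N_M = \frac{2L\,\Delta\nu}{c}, \qquad \Delta\nu = N_{FP}\cdot \mathrm{FSR}
\quad\Longrightarrow\quad
L = \frac{N_M}{N_{FP}} \cdot \frac{c}{2\,\mathrm{FSR}}
```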
NASA Astrophysics Data System (ADS)
Yin, Yanshu; Feng, Wenjie
2017-12-01
In this paper, a location-based multiple point statistics method is developed to model a non-stationary reservoir. The proposed method characterizes the relationship between the sedimentary pattern and the deposit location using the relative central position distance function, which alleviates the requirement that the training image and the simulated grids have the same dimension. The weights in every direction of the distance function can be changed to characterize the reservoir heterogeneity in various directions. The local integral replacements of data events, structured random path, distance tolerance and multi-grid strategy are applied to reproduce the sedimentary patterns and obtain a more realistic result. This method is compared with the traditional Snesim method using a synthesized 3-D training image of Poyang Lake and a reservoir model of Shengli Oilfield in China. The results indicate that the new method can reproduce the non-stationary characteristics better than the traditional method and is more suitable for simulation of delta-front deposits. These results show that the new method is a powerful tool for modelling a reservoir with non-stationary characteristics.
NASA Technical Reports Server (NTRS)
Bonamente, Massimiliano; Joy, Marshall K.; Carlstrom, John E.; LaRoque, Samuel J.
2004-01-01
X-ray and Sunyaev-Zeldovich Effect data can be combined to determine the distance to galaxy clusters. High-resolution X-ray data are now available from the Chandra Observatory, which provides both spatial and spectral information, and interferometric radio measurements of the Sunyaev-Zeldovich Effect are available from the BIMA and OVRO arrays. We introduce a Monte Carlo Markov chain procedure for the joint analysis of X-ray and Sunyaev-Zeldovich Effect data. The advantages of this method are its high computational efficiency and its ability to measure the full probability distribution of all parameters of interest, such as the spatial and spectral properties of the cluster gas and the cluster distance. We apply this technique to the Chandra X-ray data and the OVRO radio data for the galaxy cluster Abell 611. Comparisons with traditional likelihood-ratio methods reveal the robustness of the method. This method will be used in a follow-up paper to determine the distance of a large sample of galaxy clusters for which high-resolution Chandra X-ray and BIMA/OVRO radio data are available.
Decrease in Ground-Run Distance of Small Airplanes by Applying Electrically-Driven Wheels
NASA Astrophysics Data System (ADS)
Kobayashi, Hiroshi; Nishizawa, Akira
A new takeoff method for small airplanes was proposed. The ground-roll performance of an airplane driven by electrically powered wheels was studied experimentally and computationally. The experiments verified that the ground-run distance was halved by combining the powered wheels with the propeller, without an increase in energy consumption during the ground roll. The computational analysis showed that the ground-run distance of the wheel-driven aircraft was independent of motor power once the motor capability exceeded the friction between the tires and the ground. Furthermore, the distance was minimized when the angle of attack was set so that the wing generated negative lift.
Interplay between strong correlation and adsorption distances: Co on Cu(001)
NASA Astrophysics Data System (ADS)
Bahlke, Marc Philipp; Karolak, Michael; Herrmann, Carmen
2018-01-01
Adsorbed transition metal atoms can have partially filled d or f shells due to strong on-site Coulomb interaction. Capturing all effects originating from electron correlation in such strongly correlated systems is a challenge for electronic structure methods. It requires a sufficiently accurate description of the atomistic structure (in particular bond distances and angles), which is usually obtained from first-principles Kohn-Sham density functional theory (DFT), which due to the approximate nature of the exchange-correlation functional may provide an unreliable description of strongly correlated systems. To elucidate the consequences of this popular procedure, we apply a combination of DFT with the Anderson impurity model (AIM), as well as DFT+U, to calculate the potential energy surface along the Co/Cu(001) adsorption coordinate, and compare the results with those obtained from DFT. The adsorption minimum is shifted towards larger distances by applying DFT+AIM, or the much cheaper DFT+U method, compared to the corresponding spin-polarized DFT results, by a magnitude comparable to variations between different approximate exchange-correlation functionals (0.08 to 0.12 Å). This shift originates from an increasing correlation energy at larger adsorption distances, which can be traced back to the Co 3d_xy and 3d_z² orbitals becoming more correlated as the adsorption distance is increased. We can show that such considerations are important, as they may strongly affect electronic properties such as the Kondo temperature.
Self-similar slip distributions on irregular shaped faults
NASA Astrophysics Data System (ADS)
Herrero, A.; Murphy, S.
2018-06-01
We propose a strategy to place a self-similar slip distribution on a complex fault surface that is represented by an unstructured mesh. This is achieved with a strategy based on the composite source model, in which a hierarchical set of asperities is placed on the fault, each with its own slip function that depends on the distance from the asperity centre. Central to this technique is the efficient, accurate computation of the distance between two points on the fault surface, known as the geodetic distance problem. We propose a method to compute the distance across complex non-planar surfaces based on a corollary of Huygens' principle. The difference between this method and the sample-based algorithms that precede it is the use of a curved front at the local level to calculate the distance. This produces a highly accurate computation of the distance, as the curvature of the front is linked to the distance from the source. Our local scheme is based on a sequence of two trilaterations, producing a robust and highly precise algorithm. We test the strategy on a planar surface in order to assess its ability to preserve the self-similarity properties of a slip distribution. We also present a synthetic self-similar slip distribution on a real slab topography for a M8.5 event. This method for computing distance may be extended to the estimation of first arrival times in complex 3D surfaces or 3D volumes.
Cepheids Geometrical Distances Using Space Interferometry
NASA Astrophysics Data System (ADS)
Marengo, M.; Karovska, M.; Sasselov, D. D.; Sanchez, M.
2004-05-01
A space-based interferometer with sub-milliarcsecond resolution in the UV-optical will provide a new avenue for the calibration of primary distance indicators with unprecedented accuracy, by allowing very accurate and stable measurements of Cepheid pulsation amplitudes at wavelengths not accessible from the ground. Sasselov & Karovska (1994) have shown that interferometers allow very accurate measurements of Cepheid distances by using a "geometric" variant of the Baade-Wesselink method. This method has been successfully applied to derive distances and radii of nearby Cepheids using ground-based near-IR and optical interferometers, within a 15% accuracy level. Our study shows that the main source of error in these measurements is the perturbing effect of the Earth's atmosphere, which is the limiting factor in interferometer stability. A space interferometer will not suffer from these intrinsic limitations, and can potentially improve astronomical distance measurements by an order of magnitude in precision. We discuss here the technical requirements that a space-based facility will need to carry out this project, allowing distance measurements within a few percent accuracy level. We will finally discuss how sub-milliarcsecond resolution will allow the direct distance determination for hundreds of galactic sources, and provide a substantial improvement in the zero-point of the Cepheid distance scale.
A Unimodal Model for Double Observer Distance Sampling Surveys.
Becker, Earl F; Christ, Aaron M
2015-01-01
Distance sampling is a widely used method to estimate animal population size. Most distance sampling models utilize a monotonically decreasing detection function such as a half-normal. Recent advances in distance sampling modeling allow for the incorporation of covariates into the distance model, and the elimination of the assumption of perfect detection at some fixed distance (usually the transect line) with the use of double-observer models. The assumption of full observer independence in the double-observer model is problematic, but can be addressed by using the point independence assumption, which assumes there is one distance, the apex of the detection function, where the 2 observers are assumed independent. Aerially collected distance sampling data can have a unimodal shape and have been successfully modeled with a gamma detection function. Covariates in gamma detection models cause the apex of detection to shift depending upon covariate levels, making this model incompatible with the point independence assumption when using double-observer data. This paper reports a unimodal detection model based on a two-piece normal distribution that allows covariates, has only one apex, and is consistent with the point independence assumption when double-observer data are utilized. An aerial line-transect survey of black bears in Alaska illustrates how this method can be applied.
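A sketch of a two-piece (split) normal detection function of the kind described: one apex, different spreads on either side, so covariates can scale the widths without moving the apex. Parameter names and values are illustrative.

```python
import numpy as np

def two_piece_normal(x, apex, sigma_left, sigma_right):
    """Unimodal detection probability with a single apex at `apex`.

    Rises like a half-normal for x < apex and falls like a (generally
    different) half-normal for x > apex; maximum detection = 1 at the apex.
    """
    x = np.asarray(x, dtype=float)
    sigma = np.where(x < apex, sigma_left, sigma_right)
    return np.exp(-0.5 * ((x - apex) / sigma) ** 2)

# Illustrative: detection peaks 80 m from the transect line and falls off
# faster toward the line (under the aircraft) than away from it.
d = np.linspace(0.0, 600.0, 7)
print(two_piece_normal(d, apex=80.0, sigma_left=40.0, sigma_right=150.0))
```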
Mode-resolved frequency comb interferometry for high-accuracy long distance measurement
van den Berg, Steven. A.; van Eldik, Sjoerd; Bhattacharya, Nandini
2015-01-01
Optical frequency combs have developed into powerful tools for distance metrology. In this paper we demonstrate absolute long distance measurement using a single femtosecond frequency comb laser as a multi-wavelength source. By applying a high-resolution spectrometer based on a virtually imaged phased array, the frequency comb modes are resolved spectrally to the level of an individual mode. With the frequency comb stabilized against an atomic clock, thousands of accurately known wavelengths are available for interferometry. From the spectrally resolved output of a Michelson interferometer a distance is derived. The presented measurement method combines spectral interferometry, white light interferometry and multi-wavelength interferometry in a single scheme. Comparison with a fringe-counting laser interferometer shows an agreement within <10⁻⁸ for a distance of 50 m. PMID:26419282
Diaz-Balteiro, L; Belavenutti, P; Ezquerro, M; González-Pachón, J; Ribeiro Nobre, S; Romero, C
2018-05-15
There is an important body of literature using multi-criteria distance function methods for the aggregation of a battery of sustainability indicators in order to obtain a composite index. This index is considered to be a proxy of the sustainability goodness of a natural system. Although this approach has been profusely used in the literature, it is not exempt from difficulties and potential pitfalls. Thus, in this paper, a significant number of critical issues have been identified showing different procedures capable of avoiding, or at least of mitigating, the inherent potential pitfalls associated with each one. The recommendations made in the paper could increase the theoretical soundness of the multi-criteria distance function methods when this type of approach is applied in the sustainability field, thus increasing the accuracy and realism of the sustainability measurements obtained. Copyright © 2018 Elsevier Ltd. All rights reserved.
Walking Distance Estimation Using Walking Canes with Inertial Sensors
Suh, Young Soo
2018-01-01
A walking distance estimation algorithm for cane users is proposed using an inertial sensor unit attached to various positions on the cane. A standard inertial navigation algorithm using an indirect Kalman filter was applied to update the velocity and position of the cane during movement. For quadripod canes, a standard zero-velocity measurement-updating method is proposed. For standard canes, a velocity-updating method based on an inverted pendulum model is proposed. The proposed algorithms were verified by three walking experiments with two different types of canes and different positions of the sensor module. PMID:29342971
Park, Sam-Sik; Kim, Bo-Kyung; Moon, Ok-Kon; Choi, Wan-Suk
2015-01-01
[Purpose] The study investigated the effects of joint position on the distraction distance during Grade III glenohumeral joint distraction in healthy individuals. [Subjects and Methods] Twenty adults in their forties without shoulder disease were randomly divided into a neutral position group (NPG; n = 7), a resting position group (RPG; n = 7), and an end range position group (ERPG; n = 6). After Kaltenborn Grade III distraction for 40 s, the distance between the glenoid fossa and the humeral head was measured by ultrasound. [Results] The average distances between the humeral head and glenoid fossa before distraction were 2.86 ± 0.81, 3.21 ± 0.47, and 3.55 ± 0.59 mm for the NP, RP, and ERP groups; the distances after applying distraction were 3.12 ± 0.51, 3.86 ± 0.55, and 4.35 ± 0.32 mm. Between-group comparison after applying distraction revealed no significant difference between the RP and ERP groups, while there were statistically significant differences between the NP and RP groups and between the NP and ERP groups. [Conclusion] The joint space was largest in the ERP individuals when performing manual distraction. PMID:26644692
An automated method for tracking clouds in planetary atmospheres
NASA Astrophysics Data System (ADS)
Luz, D.; Berry, D. L.; Roos-Serote, M.
2008-05-01
We present an automated method for cloud tracking which can be applied to planetary images. The method is based on a digital correlator which compares two or more consecutive images and identifies patterns by maximizing correlations between image blocks. This approach bypasses the problem of feature detection. Four variations of the algorithm are tested on real cloud images of Jupiter's white ovals from the Galileo mission, previously analyzed in Vasavada et al. [Vasavada, A.R., Ingersoll, A.P., Banfield, D., Bell, M., Gierasch, P.J., Belton, M.J.S., Orton, G.S., Klaasen, K.P., Dejong, E., Breneman, H.H., Jones, T.J., Kaufman, J.M., Magee, K.P., Senske, D.A. 1998. Galileo imaging of Jupiter's atmosphere: the great red spot, equatorial region, and white ovals. Icarus, 135, 265, doi:10.1006/icar.1998.5984]. Direct correlation, using the sum of squared differences between image radiances as a distance estimator (baseline case), yields displacement vectors very similar to this previous analysis. Combining this distance estimator with the method of order ranks results in a technique which is more robust in the presence of outliers and noise and of better quality. Finally, we introduce a distance metric which, combined with order ranks, provides results of similar quality to the baseline case and is faster. The new approach can be applied to data from a number of space-based imaging instruments with a non-negligible gain in computing time.
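A minimal sketch of the baseline correlator described above: for each block in the first image, search a displacement window in the second image and keep the shift minimizing the sum of squared radiance differences. Block and search sizes are illustrative.

```python
import numpy as np

def ssd_displacement(im1, im2, top, left, block=16, search=8):
    """Best (dy, dx) shift for one block of im1 within a +/-search window of im2."""
    ref = im1[top:top + block, left:left + block].astype(float)
    best, best_ssd = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > im2.shape[0] or x + block > im2.shape[1]:
                continue
            cand = im2[y:y + block, x:x + block].astype(float)
            ssd = np.sum((ref - cand) ** 2)   # distance estimator
            if ssd < best_ssd:
                best, best_ssd = (dy, dx), ssd
    return best
```

The order-rank variant discussed in the abstract would replace the raw radiances in each block by their ranks before computing the same statistic, which damps the influence of outliers and noise.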
Protein structure estimation from NMR data by matrix completion.
Li, Zhicheng; Li, Yang; Lei, Qiang; Zhao, Qing
2017-09-01
Knowledge of protein structures is very important to understand their corresponding physical and chemical properties. Nuclear Magnetic Resonance (NMR) spectroscopy is one of the main methods to measure protein structure. In this paper, we propose a two-stage approach to calculate the structure of a protein from a highly incomplete distance matrix, where most data are obtained from NMR. We first randomly "guess" a small part of unobservable distances by utilizing the triangle inequality, which is crucial for the second stage. Then we use matrix completion to calculate the protein structure from the obtained incomplete distance matrix. We apply the accelerated proximal gradient algorithm to solve the corresponding optimization problem. Furthermore, the recovery error of our method is analyzed, and its efficiency is demonstrated by several practical examples.
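A compact sketch of the second stage under simplifying assumptions: complete the squared-distance matrix by iterative low-rank SVD projection (a simple stand-in for the paper's accelerated proximal gradient solver), then recover 3-D coordinates by classical multidimensional scaling. Rank 5 is used because a squared Euclidean distance matrix of 3-D points has rank at most 5.

```python
import numpy as np

def complete_matrix(D2, mask, rank=5, n_iter=500):
    """Fill missing squared distances by iterative low-rank SVD projection."""
    X = np.where(mask, D2, D2[mask].mean())       # crude initialization
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]  # project to low rank
        X[mask] = D2[mask]                        # keep observed entries
    return X

def coords_from_distances(D2):
    """Classical MDS: squared distance matrix -> 3-D coordinates."""
    n = len(D2)
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ D2 @ J                         # Gram matrix
    w, V = np.linalg.eigh(B)
    top = np.argsort(w)[::-1][:3]
    return V[:, top] * np.sqrt(np.clip(w[top], 0, None))
```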
Adhikari, Badri; Trieu, Tuan; Cheng, Jianlin
2016-11-07
Reconstructing three-dimensional structures of chromosomes is useful for visualizing their shapes in a cell and interpreting their function. In this work, we reconstruct chromosomal structures from Hi-C data by translating contact counts in Hi-C data into Euclidean distances between chromosomal regions and then satisfying these distances using a structure reconstruction method rigorously tested in the field of protein structure determination. We first evaluate the robustness of the overall reconstruction algorithm on noisy simulated data at various levels of noise by comparing with some of the state-of-the-art reconstruction methods. Then, using simulated data, we validate that Spearman's rank correlation coefficient between pairwise distances in the reconstructed chromosomal structures and the experimental chromosomal contact counts can be used to find optimum conversion rules for transforming interaction frequencies to wish distances. This strategy is then applied to real Hi-C data at chromosome level for optimal transformation of interaction frequencies to wish distances and for ranking and selecting structures. The chromosomal structures reconstructed from a real-world human Hi-C dataset by our method were validated by the known two-compartment feature of the human chromosome organization. We also show that our method is robust with respect to the change of the granularity of Hi-C data, and consistently produces similar structures at different chromosomal resolutions. Chromosome3D is a robust method of reconstructing chromosome three-dimensional models using distance restraints obtained from Hi-C interaction frequency data. It is available as a web application and as an open source tool at http://sysbio.rnet.missouri.edu/chromosome3d/ .
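A sketch of the conversion-and-scoring loop described above: map interaction frequencies to wish distances with a power law d ∝ 1/f^α, and score a candidate α (via a reconstructed model) by the Spearman correlation between model distances and contact counts. The power-law form and the α grid are illustrative assumptions.

```python
import numpy as np
from scipy.stats import spearmanr

def wish_distances(contacts, alpha):
    """Power-law conversion: frequent contacts -> short wish distances."""
    c = np.maximum(contacts, 1e-9)                # guard zero counts
    return 1.0 / c ** alpha

def score_structure(contacts, model_distances):
    """Structures should put high-count pairs close: strong anti-correlation."""
    rho, _ = spearmanr(model_distances.ravel(), contacts.ravel())
    return rho                                    # more negative = better

# Illustrative grid search over conversion exponents
counts = np.random.poisson(5, size=(50, 50)) + 1.0
for alpha in (0.5, 1.0, 1.5):
    d = wish_distances(counts, alpha)
    # ...feed `d` to a distance-geometry reconstruction, then rank the
    # resulting models with score_structure(counts, model_distances)
```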
Language distance and tree reconstruction
NASA Astrophysics Data System (ADS)
Petroni, Filippo; Serva, Maurizio
2008-08-01
Languages evolve over time according to a process in which reproduction, mutation and extinction are all possible. This is very similar to haploid evolution for asexual organisms and for the mitochondrial DNA of complex ones. Exploiting this similarity, it is possible, in principle, to verify hypotheses concerning the relationship among languages and to reconstruct their family tree. The key point is the definition of the distances among pairs of languages in analogy with the genetic distances among pairs of organisms. Distances can be evaluated by comparing grammar and/or vocabulary, but while it is difficult, if not impossible, to quantify grammar distance, it is possible to measure a distance from vocabulary differences. The method used by glottochronology computes distances from the percentage of shared 'cognates', which are words with a common historical origin. The weak point of this method is that subjective judgment plays a significant role. Here we define the distance of two languages by considering a renormalized edit distance among words with the same meaning and averaging over the two hundred words contained in a Swadesh list. In our approach the vocabulary of a language is the analogue of DNA for organisms. The advantage is that we avoid subjectivity and, furthermore, reproducibility of results is guaranteed. We apply our method to the Indo-European and the Austronesian groups, considering, in both cases, fifty different languages. The two trees obtained are, in many respects, similar to those found by glottochronologists, with some important differences as regards the positions of a few languages. In order to support these different results we separately analyze the structure of the distances of these languages with respect to all the others.
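A minimal sketch of the distance defined above: the edit distance between two words with the same meaning, renormalized by the longer word's length, averaged over an aligned Swadesh-style meaning list. The three-word lists at the end are only a toy illustration.

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def language_distance(words1, words2):
    """Average renormalized edit distance over aligned meaning lists."""
    d = [levenshtein(w1, w2) / max(len(w1), len(w2))
         for w1, w2 in zip(words1, words2)]
    return sum(d) / len(d)

# Real comparisons average over the ~200 meanings of a Swadesh list
print(language_distance(["hand", "water", "stone"],
                        ["hand", "wasser", "stein"]))
```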
Measures of lexical distance between languages
NASA Astrophysics Data System (ADS)
Petroni, Filippo; Serva, Maurizio
2010-06-01
The idea of measuring distance between languages seems to have its roots in the work of the French explorer Dumont D’Urville (1832) [13]. He collected comparative word lists for various languages during his voyages aboard the Astrolabe from 1826 to 1829 and, in his work concerning the geographical division of the Pacific, he proposed a method for measuring the degree of relation among languages. The method used by modern glottochronology, developed by Morris Swadesh in the 1950s, measures distances from the percentage of shared cognates, which are words with a common historical origin. Recently, we proposed a new automated method which uses the normalized Levenshtein distances among words with the same meaning and averages on the words contained in a list. Recently another group of scholars, Bakker et al. (2009) [8] and Holman et al. (2008) [9], proposed a refined version of our definition including a second normalization. In this paper we compare the information content of our definition with the refined version in order to decide which of the two can be applied with greater success to resolve relationships among languages.
Yu, Yifei; Luo, Linqing; Li, Bo; Guo, Linfeng; Yan, Jize; Soga, Kenichi
2015-10-01
The measured distance error caused by double peaks in the BOTDR (Brillouin optical time domain reflectometer) system is a kind of Brillouin scattering spectrum (BSS) deformation, discussed and simulated for the first time in this paper, to the best of the authors' knowledge. Double peaks, as a kind of Brillouin spectrum deformation, are important for enhancing spatial resolution and measurement accuracy and for crack detection. Due to variations of the peak powers of the BSS along the fiber, the measured starting point of a step-shape frequency transition region is shifted, which results in distance errors. A zero-padded short-time Fourier transform (STFT) can restore the transition-induced double peaks in the asymmetric and deformed BSS, thus offering more accurate and quicker measurements than the conventional Lorentz-fitting method. The recovery method based on double-peak detection and the corresponding BSS deformation can be applied to calculate the real starting point, which improves the distance accuracy of the STFT-based BOTDR system.
Evaluation on Cost Overrun Risks of Long-distance Water Diversion Project Based on SPA-IAHP Method
NASA Astrophysics Data System (ADS)
Yuanyue, Yang; Huimin, Li
2018-02-01
Large investment, long routes, frequent change orders, etc. are the main causes of cost overruns in long-distance water diversion projects. Building on existing research, this paper constructs a full-process cost overrun risk evaluation index system for water diversion projects, applies the SPA-IAHP method to set up a cost overrun risk evaluation model, and calculates and ranks the weights of all risk evaluation indexes. Finally, the cost overrun risks are comprehensively evaluated by calculating the linkage measure, and a comprehensive risk level is obtained. The SPA-IAHP method can evaluate risks accurately with high reliability. As verified by a case calculation, it can provide valid cost overrun decision-making information to construction companies.
Why conventional detection methods fail in identifying the existence of contamination events.
Liu, Shuming; Li, Ruonan; Smith, Kate; Che, Han
2016-04-15
Early warning systems are widely used to safeguard water security, but their effectiveness has raised many questions. To understand why conventional detection methods fail to identify contamination events, this study evaluates the performance of three contamination detection methods using data from a real contamination accident and two artificial datasets constructed using a widely applied contamination data construction approach. Results show that the Pearson correlation Euclidean distance (PE) based detection method performs better for real contamination incidents, while the Euclidean distance method (MED) and linear prediction filter (LPF) method are more suitable for detecting sudden spike-like variations. This analysis revealed why the conventional MED and LPF methods fail to identify the existence of contamination events. The analysis also revealed that the widely used contamination data construction approach is misleading. Copyright © 2016 Elsevier Ltd. All rights reserved.
Alignment-free genome tree inference by learning group-specific distance metrics.
Patil, Kaustubh R; McHardy, Alice C
2013-01-01
Understanding the evolutionary relationships between organisms is vital for their in-depth study. Gene-based methods are often used to infer such relationships, but they are not without drawbacks. One can now attempt to use genome-scale information, because of the ever-increasing number of genomes available. This opportunity also presents a challenge in terms of computational efficiency. Two fundamentally different methods are often employed for sequence comparisons, namely alignment-based and alignment-free methods. Alignment-free methods rely on the genome signature concept and provide a computationally efficient approach that is also applicable to nonhomologous sequences. The genome signature contains evolutionary signal, as it is more similar for closely related organisms than for distantly related ones. We used genome-scale sequence information to infer taxonomic distances between organisms without additional information such as gene annotations. We propose a method to improve genome tree inference by learning specific distance metrics over the genome signature for groups of organisms with similar phylogenetic, genomic, or ecological properties. Specifically, our method learns a Mahalanobis metric for a set of genomes, with a reference taxonomy to guide the learning process. By applying this method to more than a thousand prokaryotic genomes, we showed that better distance metrics could indeed be learned for most of the 18 groups of organisms tested here. Once a group-specific metric is available, it can be used to estimate the taxonomic distances for other sequenced organisms from the group. This study also presents a large-scale comparison between 10 methods: 9 alignment-free and 1 alignment-based.
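A sketch of the genome-signature representation such methods start from: a normalized k-mer frequency vector per genome, over which a (possibly learned) Mahalanobis distance is computed. The choice k = 4 (tetranucleotides) follows the common genome-signature convention; the identity matrix below is a placeholder for a learned metric.

```python
import itertools
import numpy as np

def kmer_signature(seq, k=4):
    """Normalized k-mer frequency vector (the 'genome signature')."""
    kmers = ["".join(p) for p in itertools.product("ACGT", repeat=k)]
    index = {km: i for i, km in enumerate(kmers)}
    v = np.zeros(len(kmers))
    for i in range(len(seq) - k + 1):
        j = index.get(seq[i:i + k])
        if j is not None:                 # skip windows containing N etc.
            v[j] += 1
    return v / max(v.sum(), 1.0)

def mahalanobis(u, v, M):
    """Distance under a learned positive semi-definite matrix M."""
    d = u - v
    return float(np.sqrt(d @ M @ d))

# Illustrative: identity M reduces to Euclidean distance on signatures
s1, s2 = kmer_signature("ACGTACGTGGCC" * 50), kmer_signature("ACGTTTGTGGCC" * 50)
print(mahalanobis(s1, s2, np.eye(len(s1))))
```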
QCD phenomenology of static sources and gluonic excitations at short distances
NASA Astrophysics Data System (ADS)
Bali, Gunnar S.; Pineda, Antonio
2004-05-01
New lattice data for the Π_u and Σ_u^- potentials at short distances are presented. We compare perturbation theory to the lower static hybrid potentials and find good agreement at short distances, once the renormalon ambiguities are accounted for. We use the nonperturbatively determined continuum-limit static hybrid and ground state potentials at short distances to determine the gluelump energies. The result is consistent with an estimate obtained from the gluelump data at finite lattice spacings. For the lightest gluelump, we obtain Λ^B_RS(ν_f = 2.5 r_0^{-1}) = [2.25 ± 0.10(latt.) ± 0.21(th.) ± 0.08(Λ_MSbar)] r_0^{-1} in the quenched approximation, with r_0^{-1} ≈ 400 MeV. We show that, to quote sensible numbers for the absolute values of the gluelump energies, it is necessary to handle the singularities of the singlet and octet potentials in the Borel plane. We propose to subtract the renormalons of the short-distance matching coefficients, the potentials in this case. For the singlet potential the leading renormalon is already known and related to that of the pole mass; for the octet potential a new renormalon appears, which we approximately evaluate. We also apply our methods to heavy-light mesons in the static limit, and from the lattice simulations available in the literature we obtain the quenched result Λbar_RS(ν_f = 2.5 r_0^{-1}) = [1.17 ± 0.08(latt.) ± 0.13(th.) ± 0.09(Λ_MSbar)] r_0^{-1}. We calculate m_b,MSbar(m_b,MSbar) and apply our methods to gluinonia, whose dynamics are governed by the singlet potential between adjoint sources. We can exclude nonstandard linear short-distance contributions to the static potentials with good accuracy.
Cano-García, Angel E.; Lazaro, José Luis; Infante, Arturo; Fernández, Pedro; Pompa-Chacón, Yamilet; Espinoza, Felipe
2012-01-01
In this study, a camera-to-infrared-diode (IRED) distance estimation problem was analyzed. The main objective was to define an alternative for measuring depth using only the information extracted from the pixel grey levels of the IRED image to estimate the distance between the camera and the IRED. In this paper, the standard deviation of the pixel grey level in the region of interest containing the IRED image is proposed as an empirical parameter for defining a model that estimates camera-to-emitter distance. This model includes the camera exposure time, the IRED radiant intensity, and the distance between the camera and the IRED. An expression for the standard deviation model related to these magnitudes was also derived and calibrated using images taken under different conditions. From this analysis, we determined the optimum parameters to ensure the best accuracy provided by this alternative. Once the model calibration had been carried out, a differential method to estimate the distance between the camera and the IRED was defined and applied, considering that the camera was aligned with the IRED. The results indicate that this method represents a useful alternative for determining depth information. PMID:22778608
Approximate geodesic distances reveal biologically relevant structures in microarray data.
Nilsson, Jens; Fioretos, Thoas; Höglund, Mattias; Fontes, Magnus
2004-04-12
Genome-wide gene expression measurements, as currently determined by the microarray technology, can be represented mathematically as points in a high-dimensional gene expression space. Genes interact with each other in regulatory networks, restricting the cellular gene expression profiles to a certain manifold, or surface, in gene expression space. To obtain knowledge about this manifold, various dimensionality reduction methods and distance metrics are used. For data points distributed on curved manifolds, a sensible distance measure would be the geodesic distance along the manifold. In this work, we examine whether an approximate geodesic distance measure captures biological similarities better than the traditionally used Euclidean distance. We computed approximate geodesic distances, determined by the Isomap algorithm, for one set of lymphoma and one set of lung cancer microarray samples. Compared with the ordinary Euclidean distance metric, this distance measure produced more instructive, biologically relevant, visualizations when applying multidimensional scaling. This suggests the Isomap algorithm as a promising tool for the interpretation of microarray data. Furthermore, the results demonstrate the benefit and importance of taking nonlinearities in gene expression data into account.
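As a rough illustration (a generic sketch on synthetic data, not the authors' pipeline), the Isomap geodesic embedding and a Euclidean MDS baseline can be compared with scikit-learn:

```python
import numpy as np
from sklearn.manifold import Isomap, MDS

# Toy stand-in for microarray data: 60 samples x 500 genes lying near a
# curved one-dimensional manifold embedded in gene expression space.
rng = np.random.default_rng(0)
t = rng.uniform(0, 3 * np.pi, 60)
X = np.column_stack([np.cos(t), np.sin(t), t]) @ rng.normal(size=(3, 500))

# Geodesic embedding: neighborhood graph + shortest paths + classical MDS.
geo = Isomap(n_neighbors=6, n_components=2).fit_transform(X)

# Euclidean baseline for comparison (metric MDS on ordinary distances).
euc = MDS(n_components=2, dissimilarity="euclidean", random_state=0).fit_transform(X)
```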
A Surface Energy Transfer Nanoruler for Measuring Binding Site Distances on Live Cell Surfaces
Chen, Yan; O’Donoghue, Meghan B.; Huang, Yu-Fen; Kang, Huaizhi; Phillips, Joseph A.; Chen, Xiaolan; Estevez, M.-Carmen; Tan, Weihong
2010-01-01
Measuring distances at molecular length scales in living systems is a significant challenge. Methods like FRET have limitations due to short detection distances and strict orientation requirements. Recently, surface energy transfer (SET) has been used in bulk solutions; however, it could not previously be applied to living systems. Here, we have developed an SET nanoruler, using aptamer-gold-nanoparticle conjugates of different diameters, to monitor the distance between binding sites of a receptor on living cells. The nanoruler can measure separation distances well beyond the detection limit of FRET. Thus, for the first time, we have developed an effective SET nanoruler for live cells with a long working distance, easy construction, fast detection and low background. This is also the first time that the distance between the aptamer and antibody binding sites in the membrane protein PTK7 has been measured accurately. The SET nanoruler represents the next leap forward in monitoring structural components within living cell membranes. PMID:21038856
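For context, SET quenching efficiency is commonly modeled with a d^4 distance dependence, E = 1/(1 + (d/d0)^4), in contrast to FRET's d^6; a minimal sketch of inverting a measured efficiency into a separation distance, assuming that model and an illustrative d0 (both assumptions, not values from the paper), is:

```python
def set_distance(efficiency, d0_nm):
    """Invert the SET efficiency model E = 1 / (1 + (d/d0)^4) for d."""
    return d0_nm * ((1.0 - efficiency) / efficiency) ** 0.25

# Illustrative numbers only: 30% quenching with an assumed d0 of 10 nm.
print(set_distance(0.30, 10.0))  # ~12.4 nm
```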
Efficient dual approach to distance metric learning.
Shen, Chunhua; Kim, Junae; Liu, Fayao; Wang, Lei; van den Hengel, Anton
2014-02-01
Distance metric learning is of fundamental interest in machine learning because the employed distance metric can significantly affect the performance of many learning methods. Quadratic Mahalanobis metric learning is a popular approach to the problem, but typically requires solving a semidefinite programming (SDP) problem, which is computationally expensive. The worst-case complexity of solving an SDP problem involving a matrix variable of size D×D with O(D) linear constraints is about O(D^6.5) using interior-point methods, where D is the dimension of the input data. Thus, interior-point methods can only practically solve problems with fewer than a few thousand variables. Because the number of variables is D(D+1)/2, this implies a limit of around a few hundred dimensions on the size of problem that can practically be solved. The complexity of the popular quadratic Mahalanobis metric learning approach thus limits the size of problem to which metric learning can be applied. Here, we propose a significantly more efficient and scalable approach to the metric learning problem based on the Lagrange dual formulation of the problem. The proposed formulation is much simpler to implement, and therefore allows much larger Mahalanobis metric learning problems to be solved. The time complexity of the proposed method is roughly O(D^3), which is significantly lower than that of the SDP approach. Experiments on a variety of data sets demonstrate that the proposed method achieves an accuracy comparable with the state of the art, but is applicable to significantly larger problems. We also show that the proposed method can be applied to approximately solve more general Frobenius norm regularized SDP problems.
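The expensive primitive that such dual approaches reduce to is a Euclidean projection onto the positive semidefinite cone, a single O(D^3) eigendecomposition; a minimal sketch of that projection (not the authors' full algorithm) is:

```python
import numpy as np

def project_psd(A):
    """Project a symmetric matrix onto the PSD cone (clip negative eigenvalues)."""
    S = (A + A.T) / 2.0
    w, V = np.linalg.eigh(S)            # O(D^3), vs ~O(D^6.5) for interior-point SDP
    return (V * np.clip(w, 0.0, None)) @ V.T

rng = np.random.default_rng(1)
M = project_psd(rng.normal(size=(50, 50)))
assert np.linalg.eigvalsh(M).min() >= -1e-8  # M is a valid Mahalanobis matrix
```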
Combining coordination of motion actuators with driver steering interaction.
Tagesson, Kristoffer; Laine, Leo; Jacobson, Bengt
2015-01-01
A new method is suggested for the coordination of vehicle motion actuators, where driver feedback and capabilities become natural elements in the prioritization. The method uses a weighted least-squares control allocation formulation, where driver characteristics can be added as virtual force constraints. The approach is particularly suitable for heavy commercial vehicles, which in general are over-actuated. The method is applied, in a specific use case, by simulating a truck applying automatic braking on a split-friction surface. Here, the driver steering angle required to maintain the intended direction is limited by a constant threshold. This constant is automatically accounted for when balancing actuator usage in the method. Simulation results show that the actual required driver steering angle can be expected to match the set constant well. Furthermore, the stopping distance is strongly affected by this assumed capability of the driver to handle the lateral disturbance, as expected. In general, the capability of the driver to handle disturbances should be estimated in real time, considering the driver's mental state. Using the method, it is then possible to estimate, e.g., the stopping distance implied by this. The setup has the potential of even shortening the stopping distance when the driver is estimated to be active, compared to currently available systems. The approach is feasible for real-time applications and requires only measurable vehicle quantities for parameterization. Examples of other suitable applications within the scope of the method are electronic stability control, lateral stability control at launch, and optimal cornering arbitration.
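A bare-bones weighted least-squares allocation step of this general kind, with an illustrative effectiveness matrix and weights (assumed values, not the paper's truck model), might look like:

```python
import numpy as np
from scipy.optimize import lsq_linear

# v: desired virtual forces/moments; B maps actuator commands u to forces.
B = np.array([[1.0, 1.0, 0.0, 0.0],      # longitudinal force per actuator
              [0.0, 0.0, 1.0, 1.0],      # lateral force
              [0.4, -0.4, 0.6, -0.6]])   # yaw moment
v = np.array([-8000.0, 0.0, 500.0])
Wv = np.diag([1.0, 1.0, 5.0])            # prioritize yaw balance (driver load)

# Solve min || Wv (B u - v) ||^2 subject to actuator limits (braking only).
res = lsq_linear(Wv @ B, Wv @ v, bounds=(-5000.0, 0.0))
print(res.x)                             # allocated per-actuator commands
```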
Feature selection for the classification of traced neurons.
López-Cabrera, José D; Lorenzo-Ginori, Juan V
2018-06-01
The great availability of computational tools to calculate the properties of traced neurons has led to many descriptors that allow the automated classification of neurons from these reconstructions. This situation makes it necessary to eliminate irrelevant features and to select the most appropriate ones, in order to improve the quality of the resulting classification. The dataset used contains a total of 318 traced neurons, classified by human experts into 192 GABAergic interneurons and 126 pyramidal cells. The features were extracted with the L-measure software, one of the most widely used computational tools in neuroinformatics for quantifying traced neurons. We review current feature selection techniques of the filter, wrapper, embedded and ensemble types. The stability of the feature selection methods was measured. For the ensemble methods, several aggregation methods based on different metrics were applied to combine the subsets obtained during the feature selection process. The subsets obtained by applying feature selection methods were evaluated using supervised classifiers, with Random Forest, C4.5, SVM, Naïve Bayes, Knn, Decision Table and the Logistic classifier used as classification algorithms. Feature selection methods of the filter, embedded, wrapper and ensemble types were compared, and the returned subsets were tested in classification tasks with different classification algorithms. The L-measure features EucDistanceSD, PathDistanceSD, Branch_pathlengthAve, Branch_pathlengthSD and EucDistanceAve were present in more than 60% of the selected subsets, which provides evidence of their importance in the classification of these neurons.
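A toy sketch of combining a filter selector with an embedded selector by intersection (stand-in data and a simple aggregation rule; the paper compares several aggregation metrics):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif

# Toy stand-in for L-measure morphometrics: 318 neurons x 40 features.
X, y = make_classification(n_samples=318, n_features=40, n_informative=8,
                           random_state=0)

# Filter method: univariate ANOVA F-score ranking.
filter_idx = set(SelectKBest(f_classif, k=10).fit(X, y).get_support(indices=True))

# Embedded method: random forest impurity-based importances.
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
embedded_idx = set(np.argsort(rf.feature_importances_)[-10:])

# Simple ensemble aggregation: keep features chosen by both selectors.
print(sorted(filter_idx & embedded_idx))
```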
Vu, Lien T; Chen, Chao-Chang A; Yu, Chia-Wei
2018-02-05
This study aims to develop a new optical design method for soft multifocal contact lenses (CLs) to obtain uniform optical power in a large center-distance zone using optimized Non-Uniform Rational B-Splines (NURBS). For the anterior surface profiles of the CLs, the NURBS design curves are optimized to match given optical power distributions. Then, the NURBS in the center-distance zones are fitted to the corresponding spherical/aspheric curves, for both the data points and their centers of curvature, to achieve uniform power. Four cases of soft CLs were manufactured by casting in shell molds produced by injection molding and then measured to verify the design specifications. The resulting power profiles of these CLs accord with the given clinical requirements of uniform power in a larger center-distance zone. The developed optical design method has been verified for multifocal CL design and can be further applied to the production of soft multifocal CLs.
Bilateral step length estimation using a single inertial measurement unit attached to the pelvis
2012-01-01
Background The estimation of spatio-temporal gait parameters is of primary importance in both physical activity monitoring and clinical contexts. A method for estimating step length bilaterally during level walking, using a single inertial measurement unit (IMU) attached to the pelvis, is proposed. In contrast to previous studies, based either on a simplified representation of the human gait mechanics or on a general linear regressive model, the proposed method estimates the step length directly from the integration of the acceleration along the direction of progression. Methods The IMU was placed at pelvis level, fixed to the subject's belt on the right side. The method was validated using measurements from a stereo-photogrammetric (SP) system as a gold standard on nine subjects walking ten laps along a closed-loop track of about 25 m, varying their speed. For each loop, only the IMU data recorded in a 4 m long portion of the track included in the calibrated volume of the SP system were used for the analysis. The method takes advantage of the cyclic nature of gait and requires an accurate determination of the foot contact instants. A combination of a Kalman filter and an optimally filtered direct and reverse integration applied to the IMU signals formed a single novel method (Kalman and Optimally filtered Step length Estimation - KOSE method). A correction of the IMU displacement due to the pelvic rotation occurring in gait was implemented to estimate the step length and the traversed distance. Results The step length was estimated for all subjects with less than 3% error. The traversed distance was assessed with less than 2% error. Conclusions The proposed method provided estimates of step length and traversed distance more accurate than any other method applied to measurements from a single IMU found in the literature. In healthy subjects, it is reasonable to expect that errors in traversed distance estimation during daily activity monitoring would be of the same order of magnitude as those presented. PMID:22316235
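A stripped-down illustration of the integration idea (acceleration integrated over one gait cycle with a drift correction enforcing cyclic velocity; this is a simplification of the optimally filtered direct-and-reverse integration, with an assumed walking speed v0):

```python
import numpy as np

def step_displacement(acc, dt, v0=1.2):
    """Displacement along the direction of progression over one gait cycle.

    Integrates acceleration starting from an assumed initial speed v0 and
    removes the linear drift needed to make velocity cyclic (v(T) = v(0))."""
    v = v0 + np.concatenate([[0.0], np.cumsum((acc[1:] + acc[:-1]) / 2) * dt])
    drift = np.linspace(0.0, v[-1] - v[0], acc.size)   # cancels sensor bias
    return np.trapz(v - drift, dx=dt)

# Synthetic cyclic acceleration for one 1-second step sampled at 100 Hz.
t = np.linspace(0.0, 1.0, 100)
acc = 0.8 * np.sin(2 * np.pi * t) + 0.02    # small bias mimics sensor drift
print(step_displacement(acc, t[1] - t[0]))  # ~ v0 * 1 s plus oscillatory term
```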
A Monocular Vision Sensor-Based Obstacle Detection Algorithm for Autonomous Robots.
Lee, Tae-Jae; Yi, Dong-Hoon; Cho, Dong-Il Dan
2016-03-01
This paper presents a monocular vision sensor-based obstacle detection algorithm for autonomous robots. Each individual image pixel in the bottom region of interest is labeled as belonging either to an obstacle or to the floor. While conventional methods depend on point tracking for geometric cues for obstacle detection, the proposed algorithm uses the inverse perspective mapping (IPM) method. This method is much more advantageous when the camera is not high off the floor, which makes point tracking near the floor difficult. Markov random field-based obstacle segmentation is then performed using the IPM results and a floor appearance model. Next, the shortest distance between the robot and the obstacle is calculated. The algorithm is tested by applying it to 70 datasets, 20 of which include non-obstacle images where considerable changes in floor appearance occur. The obstacle segmentation accuracies and the distance estimation error are quantitatively analyzed. For the obstacle datasets, the segmentation precision and the average distance estimation error of the proposed method are 81.4% and 1.6 cm, respectively, whereas those of a conventional method are 57.5% and 9.9 cm. For the non-obstacle datasets, the proposed method gives a 0.0% false positive rate, while the conventional method gives 17.6%.
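Inverse perspective mapping itself is a planar homography from the image to a bird's-eye view; a minimal OpenCV sketch (the calibration points and output geometry below are hypothetical, not the paper's setup):

```python
import cv2
import numpy as np

# Stand-in for a camera frame; in practice this is the robot's bottom ROI.
img = np.zeros((480, 640, 3), dtype=np.uint8)

# Hypothetical calibration: four floor points in the image and their
# bird's-eye (top-view) pixel coordinates after inverse perspective mapping.
src = np.float32([[220, 470], [420, 470], [500, 300], [140, 300]])
dst = np.float32([[150, 390], [250, 390], [250, 90], [150, 90]])

H = cv2.getPerspectiveTransform(src, dst)
top_view = cv2.warpPerspective(img, H, (400, 400))

# Under the planar-floor assumption, floor pixels map consistently between
# IPM frames, while off-plane obstacle pixels do not; floor distances become
# metric (up to scale) in the top view.
```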
Parallel algorithms for the molecular conformation problem
NASA Astrophysics Data System (ADS)
Rajan, Kumar
Given a set of objects and some of the pairwise distances between them, the problem of identifying the positions of the objects in Euclidean space is referred to as the molecular conformation problem. This problem is known to be computationally difficult. One of the most important applications of this problem is the determination of the structure of molecules. In molecular structure determination, usually only the lower and upper bounds on some of the interatomic distances are available. The process of obtaining a tighter set of bounds between all pairs of atoms, using the available interatomic distance bounds, is referred to as bound-smoothing. One method for bound-smoothing is to use the limits imposed by the triangle inequality. The distance bounds so obtained can often be tightened further by applying the tetrangle inequality---the limits imposed on the six pairwise distances among a set of four atoms (instead of three for the triangle inequalities). The tetrangle inequality is expressed by the Cayley-Menger determinants. The sequential tetrangle-inequality bound-smoothing algorithm considers one quadruple of atoms at a time and tightens the bounds on each of its six distances. The sequential algorithm is computationally expensive, and its application is limited to molecules with up to a few hundred atoms. Here, we conduct an experimental study of tetrangle-inequality bound-smoothing and reduce the sequential time by identifying the most computationally expensive portions of the process. We also present a simple criterion to determine which quadruples of atoms are likely to be tightened the most by tetrangle-inequality bound-smoothing. This test could be used to extend the applicability of the process to large molecules. We map the problem of parallelizing tetrangle-inequality bound-smoothing to that of generating disjoint packing designs of a certain kind. We map this, in turn, to a regular-graph coloring problem, and present a simple, parallel algorithm for tetrangle-inequality bound-smoothing. We implement the parallel algorithm on the Intel Paragon X/PS and apply it to real-life molecules. Our results show that with this parallel algorithm, the tetrangle inequality can be applied to large molecules in a reasonable amount of time. We extend the regular graph to represent more general packing designs and present a coloring algorithm for this graph. This can be used to generate constant-weight binary codes in parallel. Once a tighter set of distance bounds is obtained, the molecular conformation problem is usually formulated as a nonlinear optimization problem, and a global optimization algorithm is then used to solve it. Here we present a parallel, deterministic algorithm for the optimization problem based on interval analysis. We implement our algorithm, using dynamic load balancing, on a network of Sun Ultra-Sparc workstations. Our experience with this algorithm shows that its application is limited to small instances of the molecular conformation problem, where the number of measured pairwise distances is close to the maximum value. However, since the interval method eliminates a substantial portion of the initial search space very quickly, it can be used to prune the search space before any of the more efficient, nondeterministic methods are applied.
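For contrast with the tetrangle case, triangle-inequality bound-smoothing itself is a Floyd-Warshall-style pass over the bound matrices, tightening upper bounds via u_ij <= u_ik + u_kj and lower bounds via l_ij >= max(l_ik - u_kj, l_kj - u_ik); a compact sketch:

```python
import numpy as np

def triangle_smooth(L, U):
    """Tighten lower/upper interatomic distance bounds via the triangle inequality."""
    n = L.shape[0]
    L, U = L.copy(), U.copy()
    for k in range(n):
        for i in range(n):
            for j in range(n):
                U[i, j] = min(U[i, j], U[i, k] + U[k, j])
                L[i, j] = max(L[i, j], L[i, k] - U[k, j], L[k, j] - U[i, k])
    return L, U

U = np.array([[0.0, 1.5, 9.0], [1.5, 0.0, 1.5], [9.0, 1.5, 0.0]])
L = np.array([[0.0, 1.0, 1.0], [1.0, 0.0, 1.0], [1.0, 1.0, 0.0]])
Ls, Us = triangle_smooth(L, U)
print(Us[0, 2])  # 3.0: the loose 9.0 upper bound tightens via the middle atom
```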
NASA Astrophysics Data System (ADS)
Kokhanenko, Grigorii P.; Tarashchansky, Boris A.; Budnev, Nikolai M.; Mirgazov, Rashid R.
2006-02-01
Operation of the ASP-15 device is analyzed in this paper. The device is deployed in the southern part of Lake Baikal and is capable of year-round measurements of hydro-optical characteristics at depths down to 1300 m. The method for determining the absorption coefficient is based on measuring the rate of decrease of the irradiance from an isotropic source with the distance between the source and the receiver. Possible reasons for the appearance of anomalous dependences of the irradiance on distance are revealed on the basis of numerical simulation, and the errors of the applied method are estimated. The experimental data obtained by means of the ASP-15 device in recent years are presented.
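For an isotropic point source in an absorbing medium, irradiance falls off roughly as E(r) = W exp(-a r) / (4 pi r^2), so the absorption coefficient a can be read off as minus the slope of ln(4 pi r^2 E); a toy fit under that assumption (illustrative numbers, not ASP-15 data):

```python
import numpy as np

W, a_true = 10.0, 0.12          # source power and absorption coefficient (1/m)
r = np.linspace(2.0, 20.0, 30)  # source-receiver separations (m)
E = W * np.exp(-a_true * r) / (4 * np.pi * r**2)
E *= 1 + 0.01 * np.random.default_rng(0).normal(size=r.size)  # measurement noise

# Linearize: ln(4 pi r^2 E) = ln W - a r, then fit the slope.
slope, _ = np.polyfit(r, np.log(4 * np.pi * r**2 * E), 1)
print(-slope)  # ~0.12, recovering the absorption coefficient
```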
Predicting the helix packing of globular proteins by self-correcting distance geometry.
Mumenthaler, C; Braun, W
1995-05-01
A new self-correcting distance geometry method for predicting the three-dimensional structure of small globular proteins was assessed with a test set of 8 helical proteins. With knowledge of the amino acid sequence and the helical segments, our completely automated method calculated the correct backbone topology of six of the proteins. The accuracy of the predicted structures ranged from 2.3 Å to 3.1 Å for the helical segments compared to the experimentally determined structures. For two proteins, the predicted constraints were not restrictive enough to yield a conclusive prediction. The method can be applied to all small globular proteins, provided the secondary structure is known from NMR analysis or can be predicted with high reliability.
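At the heart of any distance geometry method is the embedding step: converting a distance matrix into coordinates, classically via eigendecomposition of the double-centered Gram matrix. A minimal sketch of that step (not the self-correcting procedure itself):

```python
import numpy as np

def embed_from_distances(D, dim=3):
    """Classical MDS: recover coordinates (up to rotation) from distances."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    G = -0.5 * J @ (D**2) @ J              # double-centered Gram matrix
    w, V = np.linalg.eigh(G)
    idx = np.argsort(w)[::-1][:dim]        # top eigenpairs span the embedding
    return V[:, idx] * np.sqrt(np.clip(w[idx], 0, None))

# Round-trip check on random 3D points.
X = np.random.default_rng(2).normal(size=(10, 3))
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
Y = embed_from_distances(D)
D2 = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
print(np.allclose(D, D2, atol=1e-6))  # True: distances are reproduced
```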
GeneOnEarth: fitting genetic PC plots on the globe.
Torres-Sánchez, Sergio; Medina-Medina, Nuria; Gignoux, Chris; Abad-Grau, María M; González-Burchard, Esteban
2013-01-01
Principal component (PC) plots have become widely used to summarize the genetic variation of individuals in a sample. The similarity between genetic distances in PC plots and geographical distances has proven quite impressive. However, in most situations, individual ancestral origins are not precisely known or are heterogeneously distributed; hence, they are hard to link to a geographical area. We have developed GeneOnEarth, a user-friendly web-based tool to help geneticists understand whether a linear isolation-by-distance model may apply to a genetic data set, that is, whether genetic distances among a set of individuals resemble the geographical distances among their origins. Its main goal is to allow users to first apply a by-view Procrustes method to visually learn whether this model holds. To do so, the user can choose the exact geographical area from an online 2D or 3D world map using Google Maps or Google Earth, respectively, and rotate, flip, and resize the images. GeneOnEarth can also compute the optimal rotation angle using Procrustes analysis and assess the statistical evidence of similarity when a different rotation angle has been chosen by the user. An online version of GeneOnEarth is available for testing and use at http://bios.ugr.es/GeneOnEarth.
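The Procrustes comparison at the core of the tool can be sketched with SciPy (synthetic coordinates; the disparity plays the role of the similarity statistic):

```python
import numpy as np
from scipy.spatial import procrustes

# Hypothetical data: (longitude, latitude) of sample origins and 2D genetic
# PCs that are a rotated, rescaled, noisy copy of the geography.
rng = np.random.default_rng(3)
geo = rng.uniform([-10, 35], [30, 60], size=(100, 2))
theta = np.deg2rad(25)                     # unknown rotation between the spaces
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
pcs = (geo - geo.mean(0)) @ R.T * 0.01 + rng.normal(scale=0.002, size=geo.shape)

# Disparity near 0 supports a linear isolation-by-distance model.
_, _, disparity = procrustes(geo, pcs)
print(disparity)
```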
Atmospheric-radiation boundary conditions for high-frequency waves in time-distance helioseismology
NASA Astrophysics Data System (ADS)
Fournier, D.; Leguèbe, M.; Hanson, C. S.; Gizon, L.; Barucq, H.; Chabassier, J.; Duruflé, M.
2017-12-01
The temporal covariance between seismic waves measured at two locations on the solar surface is the fundamental observable in time-distance helioseismology. Above the acoustic cut-off frequency (about 5.3 mHz), waves are not trapped in the solar interior and the covariance function can be used to probe the upper atmosphere. We wish to implement appropriate radiative boundary conditions for computing the propagation of high-frequency waves in the solar atmosphere. We consider recently developed and published radiative boundary conditions for atmospheres in which the sound speed is constant and density decreases exponentially with radius. We compute the cross-covariance function using a finite element method in spherical geometry and in the frequency domain. The ratio between first- and second-skip amplitudes in the time-distance diagram is used as a diagnostic to compare boundary conditions and to compare with observations. We find that a boundary condition applied 500 km above the photosphere and derived under the approximation of small angles of incidence accurately reproduces the "infinite atmosphere" solution for high-frequency waves. When the radiative boundary condition is applied 2 Mm above the photosphere, we find that the choice of atmospheric model affects the time-distance diagram. In particular, the time-distance diagram exhibits double-ridge structure when using a Vernazza Avrett Loeser atmospheric model.
NASA Astrophysics Data System (ADS)
Aburas, Maher Milad; Ho, Yuek Ming; Ramli, Mohammad Firuz; Ash'aari, Zulfa Hanan
2017-07-01
The creation of an accurate simulation of future urban growth is considered one of the most important challenges in urban studies that involve spatial modeling. The purpose of this study is to improve the simulation capability of an integrated CA-Markov Chain (CA-MC) model using CA-MC based on the Analytical Hierarchy Process (AHP) and CA-MC based on Frequency Ratio (FR), both applied in Seremban, Malaysia, as well as to compare the performance and accuracy of the traditional and hybrid models. Various physical, socio-economic, utility, and environmental criteria were used as predictors, including elevation, slope, soil texture, population density, distance to commercial areas, distance to educational areas, distance to residential areas, distance to industrial areas, distance to roads, distance to the highway, distance to the railway, distance to power lines, distance to streams, and land cover. For calibration, three models were applied to simulate urban growth trends in 2010; the actual data of 2010 were used for model validation utilizing the Relative Operating Characteristic (ROC) and Kappa coefficient methods. Consequently, future urban growth maps for 2020 and 2030 were created. The validation findings confirm that integrating the CA-MC model with the FR model and employing the significant driving forces of urban growth in the simulation process improved the simulation capability of the CA-MC model. This study provides a novel approach for improving the CA-MC model based on FR, which will give powerful support to planners and decision-makers in the development of future sustainable urban planning.
Linear regression in astronomy. II
NASA Technical Reports Server (NTRS)
Feigelson, Eric D.; Babu, Gutti J.
1992-01-01
A wide variety of least-squares linear regression procedures used in observational astronomy, particularly investigations of the cosmic distance scale, are presented and discussed. The classes of linear models considered are (1) unweighted regression lines, with bootstrap and jackknife resampling; (2) regression solutions when measurement error, in one or both variables, dominates the scatter; (3) methods to apply a calibration line to new data; (4) truncated regression models, which apply to flux-limited data sets; and (5) censored regression models, which apply when nondetections are present. For the calibration problem we develop two new procedures: a formula for the intercept offset between two parallel data sets, which propagates slope errors from one regression to the other; and a generalization of the Working-Hotelling confidence bands to nonstandard least-squares lines. They can provide improved error analysis for Faber-Jackson, Tully-Fisher, and similar cosmic distance scale relations.
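To illustrate item (1), a generic bootstrap estimate of the slope error of an unweighted regression line might look like the sketch below (synthetic data, not the paper's procedures):

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(0, 10, 80)
y = 2.0 * x + 1.0 + rng.normal(scale=2.0, size=x.size)

slopes = []
for _ in range(2000):                      # bootstrap resampling of (x, y) pairs
    i = rng.integers(0, x.size, x.size)
    slopes.append(np.polyfit(x[i], y[i], 1)[0])
print(np.mean(slopes), np.std(slopes))     # slope and its bootstrap error
```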
NASA Astrophysics Data System (ADS)
Galve, J. P.; Gutiérrez, F.; Remondo, J.; Bonachea, J.; Lucha, P.; Cendrero, A.
2009-10-01
Multiple sinkhole susceptibility models have been generated in three study areas of the Ebro Valley evaporite karst (NE Spain) applying different methods (nearest neighbour distance, sinkhole density, heuristic scoring system and probabilistic analysis) for each sinkhole type separately (cover collapse sinkholes, cover and bedrock collapse sinkholes and cover and bedrock sagging sinkholes). The quantitative and independent evaluation of the predictive capability of the models reveals that: (1) The most reliable susceptibility models are those derived from the nearest neighbour distance and sinkhole density. These models can be generated in a simple and rapid way from detailed geomorphological maps. (2) The reliability of the nearest neighbour distance and density models is conditioned by the degree of clustering of the sinkholes. Consequently, the karst areas in which sinkholes show a higher clustering are a priori more favourable for predicting new occurrences. (3) The predictive capability of the best models obtained in this research is significantly higher (12.5-82.5%) than that of the heuristic sinkhole susceptibility model incorporated into the General Urban Plan for the municipality of Zaragoza. Although the probabilistic approach provides lower quality results than the methods based on sinkhole proximity and density, it helps to identify the most significant factors and select the most effective mitigation strategies and may be applied to model susceptibility in different future scenarios.
From Add-On to Mainstream: Applying Distance Learning Models for ALL Students
ERIC Educational Resources Information Center
Zai, Robert, III.; Wesley, Threasa L.
2013-01-01
The use of distance learning technology has allowed Northern Kentucky University's W. Frank Steely Library to remove traditional boundaries between distance and on-campus students. An emerging model that applies these distance learning methodologies to all students has proven effective for enhancing reference and instructional services. This…
Defining functional distance using manifold embeddings of gene ontology annotations
Lerman, Gilad; Shakhnovich, Boris E.
2007-01-01
Although rigorous measures of similarity for sequence and structure are now well established, the problem of defining functional relationships has been particularly daunting. Here, we present several manifold embedding techniques to compute distances between Gene Ontology (GO) functional annotations and consequently estimate functional distances between protein domains. To evaluate accuracy, we correlate the functional distance to the well established measures of sequence, structural, and phylogenetic similarities. Finally, we show that manual classification of structures into folds and superfamilies is mirrored by proximity in the newly defined function space. We show how functional distances place structure–function relationships in biological context resulting in insight into divergent and convergent evolution. The methods and results in this paper can be readily generalized and applied to a wide array of biologically relevant investigations, such as accuracy of annotation transference, the relationship between sequence, structure, and function, or coherence of expression modules. PMID:17595300
NASA Astrophysics Data System (ADS)
Dörr, Dominik; Joppich, Tobias; Schirmaier, Fabian; Mosthaf, Tobias; Kärger, Luise; Henning, Frank
2016-10-01
Thermoforming of continuously fiber reinforced thermoplastics (CFRTP) is ideally suited to thin-walled and complex-shaped products. By means of forming simulation, an initial assessment of the producibility of a specific geometry, an optimization of the forming process, and the prediction of fiber reorientation due to forming are possible. Nevertheless, the applied methods need to be validated. Therefore, a method is presented that enables the calculation of error measures for the mismatch between simulation results and experimental tests, based on measurements with a conventional coordinate measuring device. As a quantitative measure describing the curvature is provided, the presented method is also suitable for numerical or experimental sensitivity studies on wrinkling behavior. The applied methods for forming simulation, implemented in Abaqus explicit, are presented and applied to a generic geometry. The same geometry is tested experimentally, and simulation and test results are compared by the proposed validation method.
NASA Astrophysics Data System (ADS)
Kong, Jing
This thesis includes four pieces of work. In Chapter 1, we present a method for examining mortality as it is seen to run in families, together with lifestyle factors that are also seen to run in families, in a subpopulation of the Beaver Dam Eye Study that had died by 2011. We find significant distance correlations between death ages, lifestyle factors, and family relationships. Considering only sib pairs compared to unrelated persons, the distance correlation between siblings and mortality is, not surprisingly, stronger than that between more distantly related family members and mortality. Chapter 2 introduces a feature screening procedure using distance correlation and covariance. We demonstrate a property of distance covariance, which is incorporated into a novel feature screening procedure based on distance correlation as a stopping criterion. The approach is further applied to two real examples, namely the well-known small round blue cell tumors data and the Cancer Genome Atlas ovarian cancer data. Chapter 3 turns to right-censored human longevity data and the estimation of life expectancy. We propose a general framework of backward multiple imputation for estimating the conditional life expectancy function and the variance of the estimator in the right-censoring setting, and prove the properties of the estimator. In addition, we apply the method to the Beaver Dam Eye Study data to study human longevity, where expected human lifetimes are modeled with smoothing spline ANOVA based on covariates including baseline age, gender, lifestyle factors and disease variables. Chapter 4 compares two imputation methods for right-censored data, namely the well-known Buckley-James estimator and the backward imputation method proposed in Chapter 3, and shows that the backward imputation method is less biased and more robust under heterogeneity.
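For reference, the sample distance correlation used throughout can be computed by double-centering the pairwise distance matrices; a compact sketch:

```python
import numpy as np

def distance_correlation(x, y):
    """Sample distance correlation of two 1-D samples (Szekely-style V-statistic)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    a = np.abs(x[:, None] - x[None, :])
    b = np.abs(y[:, None] - y[None, :])
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()   # double centering
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    dcov2 = (A * B).mean()
    dvar2 = np.sqrt((A * A).mean() * (B * B).mean())
    return np.sqrt(dcov2 / dvar2) if dvar2 > 0 else 0.0

rng = np.random.default_rng(5)
x = rng.normal(size=200)
print(distance_correlation(x, x**2))                  # nonlinear link detected (~0.5)
print(distance_correlation(x, rng.normal(size=200)))  # near 0 for independence
```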
A feasibility study for long-path multiple detection using a neural network
NASA Technical Reports Server (NTRS)
Feuerbacher, G. A.; Moebes, T. A.
1994-01-01
Least-squares inverse filters have found widespread use in the deconvolution of seismograms and the removal of multiples. The use of least-squares prediction filters with prediction distances greater than unity leads to the method of predictive deconvolution, which can be used for the removal of long-path multiples. The predictive technique allows one to control the length of the desired output wavelet by controlling the prediction distance, and hence to specify the desired degree of resolution. Events which are periodic within given repetition ranges can be attenuated selectively. The method is thus effective in the suppression of rather complex reverberation patterns. A back-propagation (BP) neural network is constructed to detect the first arrivals of the multiples and therefore aid in the more accurate determination of the prediction distance of the multiples. The neural detector is applied to synthetic reflection coefficients and synthetic seismic traces. The processing results show that the neural detector is accurate and should lead to an automated, fast method for determining prediction distances across vast amounts of data such as seismic field records. The neural network system used in this study was the NASA Software Technology Branch's NETS system.
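For reference, gap (predictive) deconvolution itself solves Toeplitz normal equations built from the trace autocorrelation, with the prediction distance setting the gap; a compact sketch (not the NETS detector) is:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def predictive_decon(trace, nfilt=20, gap=8, eps=0.01):
    """Least-squares prediction-error filtering; `gap` is the prediction distance."""
    r = np.correlate(trace, trace, mode="full")[trace.size - 1:]
    col = r[:nfilt].copy()
    col[0] *= 1.0 + eps                          # prewhitening for stability
    f = solve_toeplitz(col, r[gap:gap + nfilt])  # prediction filter coefficients
    pred = np.convolve(trace, f)[:trace.size]
    out = trace.astype(float).copy()
    out[gap:] -= pred[:trace.size - gap]         # subtract the predictable part
    return out

# Decaying reverberation train with 40-sample period; a prediction distance
# just inside the period lets the filter predict and remove the multiples.
t = np.arange(200)
trace = (t % 40 == 0) * 0.6 ** (t // 40)
clean = predictive_decon(trace, nfilt=10, gap=35)
```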
Double-sideband frequency scanning interferometry for long-distance dynamic absolute measurement
NASA Astrophysics Data System (ADS)
Mo, Di; Wang, Ran; Li, Guang-zuo; Wang, Ning; Zhang, Ke-shu; Wu, Yi-rong
2017-11-01
Absolute distance measurements can be achieved by frequency scanning interferometry, which uses a tunable laser. The main drawback of this method is that it is extremely sensitive to the movement of targets. In addition, since the method is limited by the linearity of the frequency scan, it is commonly used for close-range measurements within tens of meters. To solve these problems, a double-sideband frequency scanning interferometry system is presented in this paper. It generates two opposite frequency scanning signals through a fixed-frequency laser and a Mach-Zehnder modulator, and distinguishes the two interference fringe patterns corresponding to the two signals by IQ demodulation (i.e., quadrature detection) of the echo. According to the principle of double-sideband modulation, the two signals have the same characteristics. Therefore, the error caused by target movement can be effectively eliminated, similar to dual-laser frequency scanned interferometry. In addition, this method avoids the trade-off between laser frequency stability and sweep performance. The system can be applied to measure distances on the order of kilometers, which profits from the good linearity of the frequency scanning. In the experiment, a precision of about 3 μm was achieved for a kilometer-level distance.
NASA Astrophysics Data System (ADS)
Nordin, Noraimi Azlin Mohd; Omar, Mohd; Sharif, S. Sarifah Radiah
2017-04-01
Companies are looking to improve productivity within their warehouse operations and distribution centres. In a typical warehouse operation, order picking contributes more than half of the operating costs. Order picking is a benchmark for measuring the performance and productivity improvement of any warehouse management. Solving the order picking problem is crucial for reducing the response time and the waiting time of a customer in receiving his demands. To reduce the response time, proper routing for picking orders is vital. Moreover, in a production line, it is vital to ensure that supplies always arrive on time. Hence, a sample routing network is applied to EP Manufacturing Berhad (EPMB) as a case study. Dijkstra's algorithm and a dynamic programming method are applied to find the shortest distance for an order picker in order picking. The results show that the dynamic programming method is a simple yet competent approach for finding the shortest picking distance in a warehouse within a short time period.
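For reference, Dijkstra's algorithm on a small picking-aisle network (toy graph, not the EPMB layout) can be sketched as:

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from source in a weighted graph (adjacency dict)."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Toy picking network: nodes are shelf junctions, weights are meters.
aisles = {"depot": {"A": 4, "B": 2}, "A": {"C": 5}, "B": {"A": 1, "C": 8},
          "C": {}}
print(dijkstra(aisles, "depot"))  # {'depot': 0.0, 'B': 2, 'A': 3, 'C': 8}
```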
Padé Approximant and Minimax Rational Approximation in Standard Cosmology
NASA Astrophysics Data System (ADS)
Zaninetti, Lorenzo
2016-02-01
The luminosity distance in the standard cosmology, as given by ΛCDM, and consequently the distance modulus for supernovae, can be defined by the Padé approximant. A comparison with a known analytical solution shows that the Padé approximant for the luminosity distance has an error of 4% at redshift z = 10. A similar procedure for the Taylor expansion of the luminosity distance gives an error of 4% at redshift z = 0.7; this means that for the luminosity distance, the Padé approximation is superior to the Taylor series. The availability of an analytical expression for the distance modulus allows applying the Levenberg-Marquardt method to derive the fundamental parameters from the available compilations for supernovae. A new luminosity function for galaxies derived from the truncated gamma probability density function models the observed luminosity function for galaxies when the observed range in absolute magnitude is modeled by the Padé approximant. A comparison of ΛCDM with other cosmologies is done adopting a statistical point of view.
Heterogeneity Measurement Based on Distance Measure for Polarimetric SAR Data
NASA Astrophysics Data System (ADS)
Xing, Xiaoli; Chen, Qihao; Liu, Xiuguo
2018-04-01
To effectively assess scene heterogeneity in polarimetric synthetic aperture radar (PolSAR) data, this paper introduces distance measures that exploit the similarity between a sample and its pixels. Moreover, given the influence of the distribution and of modeled texture, the K distance measure is deduced from the Wishart distance measure. Specifically, the average of the pixels in a local window replaces the class-center coherency or covariance matrix, and the Wishart and K distance measures are calculated between this average matrix and the pixels. Then, the ratio of the standard deviation to the mean is established for the Wishart and K distance measures, and the two features are defined and applied to reflect the complexity of the scene. The proposed heterogeneity measure is obtained by integrating the two features using the Pauli basis. Experiments conducted on single-look and multilook PolSAR data demonstrate the effectiveness of the proposed method for detecting scene heterogeneity.
Estimation of distances to stars with stellar parameters from LAMOST
Carlin, Jeffrey L.; Liu, Chao; Newberg, Heidi Jo; ...
2015-06-05
Here, we present a method to estimate distances to stars with spectroscopically derived stellar parameters. The technique is a Bayesian approach with likelihood estimated via comparison of measured parameters to a grid of stellar isochrones, and returns a posterior probability density function for each star's absolute magnitude. We tailor this technique specifically to data from the Large Sky Area Multi-object Fiber Spectroscopic Telescope (LAMOST) survey. Because LAMOST obtains roughly 3000 stellar spectra simultaneously within each ~5-degree diameter "plate" that is observed, we can use the stellar parameters of the observed stars to account for the stellar luminosity function and target selection effects. This removes biasing assumptions about the underlying populations, both due to predictions of the luminosity function from stellar evolution modeling, and from Galactic models of stellar populations along each line of sight. Using calibration data of stars with known distances and stellar parameters, we show that our method recovers distances for most stars within ~20%, but with some systematic overestimation of distances to halo giants. We apply our code to the LAMOST database, and show that the current precision of LAMOST stellar parameters permits measurements of distances with ~40% error bars. This precision should improve as the LAMOST data pipelines continue to be refined.
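A schematic of the Bayesian distance step (toy isochrone grid and Gaussian likelihood; the LAMOST pipeline's priors and grids differ):

```python
import numpy as np

# Hypothetical isochrone grid: (Teff, logg, [Fe/H]) -> absolute magnitude M,
# with a toy linear relation standing in for real stellar models.
grid_params = np.random.default_rng(6).normal(size=(5000, 3))
grid_absmag = 4.5 + 2.0 * grid_params[:, 1]

def distance_posterior(obs, sigma, m_app):
    """Posterior over distance (pc) from spectroscopic parameters."""
    chi2 = (((grid_params - obs) / sigma) ** 2).sum(axis=1)
    w = np.exp(-0.5 * chi2)                   # Gaussian likelihood on the grid
    mu = m_app - grid_absmag                  # distance moduli, mu = m - M
    d = 10 ** (mu / 5.0 + 1.0)
    return d, w / w.sum()

d, p = distance_posterior(obs=np.zeros(3), sigma=np.array([0.3, 0.2, 0.2]),
                          m_app=12.0)
print(np.sum(d * p))                          # posterior mean distance
```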
On the weight of indels in genomic distances
2011-01-01
Background Classical approaches to computing the genomic distance are usually limited to genomes with the same content and without duplicated markers. However, differences in gene content are frequently observed and can reflect important evolutionary aspects. A few polynomial-time algorithms that include genome rearrangements, insertions and deletions (or substitutions) have already been proposed. These methods often allow a block of contiguous markers to be inserted, deleted or substituted at once, but result in distance functions that do not respect the triangle inequality and hence do not constitute metrics. Results In the present study we discuss the violation of the triangle inequality in some of the available methods and give a framework to establish an efficient correction for two recently proposed models, one that includes insertions, deletions and double cut and join (DCJ) operations, and one that includes substitutions and DCJ operations. Conclusions We show that the proposed framework establishes the triangle inequality in both distances by adding a surcharge on indel operations and on substitutions that depends only on the number of markers affected by these operations. This correction can be applied a posteriori, without interfering with the already available formulas to compute these distances. We claim that this correction leads to distances that are biologically more plausible. PMID:22151784
NASA Technical Reports Server (NTRS)
Bonamente, Massimillano; Joy, Marshall K.; Carlstrom, John E.; Reese, Erik D.; LaRoque, Samuel J.
2004-01-01
X-ray and Sunyaev-Zel'dovich effect data can be combined to determine the distance to galaxy clusters. High-resolution X-ray data are now available from Chandra, which provides both spatial and spectral information, and Sunyaev-Zel'dovich effect data were obtained from the BIMA and Owens Valley Radio Observatory (OVRO) arrays. We introduce a Markov chain Monte Carlo procedure for the joint analysis of X-ray and Sunyaev-Zel'dovich effect data. The advantages of this method are its high computational efficiency and the ability to measure simultaneously the probability distributions of all parameters of interest, such as the spatial and spectral properties of the cluster gas, as well as derivative quantities such as the distance to the cluster. We demonstrate this technique by applying it to the Chandra X-ray data and the OVRO radio data for the galaxy cluster A611. Comparisons with traditional likelihood ratio methods reveal the robustness of the method. This method will be used in a follow-up paper to determine the distances to a large sample of galaxy clusters.
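A minimal random-walk Metropolis sampler of the kind underlying such an analysis (toy two-parameter posterior, not the cluster gas model) is sketched below:

```python
import numpy as np

def metropolis(logpost, x0, step, n=20000, seed=0):
    """Random-walk Metropolis sampler returning the parameter chain."""
    rng = np.random.default_rng(seed)
    chain, x, lp = [np.asarray(x0, float)], np.asarray(x0, float), logpost(x0)
    for _ in range(n):
        xp = x + step * rng.normal(size=x.size)
        lpp = logpost(xp)
        if np.log(rng.uniform()) < lpp - lp:   # accept/reject step
            x, lp = xp, lpp
        chain.append(x.copy())
    return np.array(chain)

# Toy Gaussian log-posterior in two parameters standing in for the joint
# X-ray + SZ fit; the real posterior covers the full cluster gas model.
logpost = lambda p: -0.5 * ((p[0] - 1.0) ** 2 / 0.04 + (p[1] - 0.5) ** 2 / 0.01)
chain = metropolis(logpost, np.array([0.0, 0.0]), step=0.1)
print(chain[5000:].mean(axis=0))   # ~ [1.0, 0.5] after burn-in
```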
Novel Data Reduction Based on Statistical Similarity
Lee, Dongeun; Sim, Alex; Choi, Jaesik; ...
2016-07-18
Applications such as scientific simulations and power grid monitoring are generating so much data so quickly that compression is essential to reduce storage requirements or transmission capacity. To achieve better compression, one is often willing to discard some repeated information. These lossy compression methods are primarily designed to minimize the Euclidean distance between the original data and the compressed data, but this measure of distance severely limits either reconstruction quality or compression performance. In this paper, we propose a new class of compression methods by redefining the distance measure with a statistical concept known as exchangeability. This approach reduces the storage requirement while capturing the essential features of the data. We report our design and implementation of such a compression method, named IDEALEM. To demonstrate its effectiveness, we apply it to a set of power grid monitoring data and show that it can reduce the volume of data much more than the best known compression methods while maintaining the quality of the compressed data. In these tests, IDEALEM also captures extraordinary events in the data, while its compression ratios can far exceed 100.
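One way to operationalize "statistical similarity" is a two-sample test between an incoming block and previously stored blocks, storing only a reference index when the test cannot distinguish them. The sketch below uses a Kolmogorov-Smirnov test as the similarity criterion (an illustrative choice, not necessarily IDEALEM's exact test):

```python
import numpy as np
from scipy.stats import ks_2samp

def compress(stream, block=64, alpha=0.05):
    """Store a block only if it is statistically distinguishable from all
    previously stored blocks; otherwise keep just a reference index."""
    stored, encoded = [], []
    for i in range(0, len(stream) - block + 1, block):
        b = stream[i:i + block]
        for j, s in enumerate(stored):
            if ks_2samp(b, s).pvalue > alpha:   # indistinguishable: reuse j
                encoded.append(("ref", j))
                break
        else:
            encoded.append(("raw", len(stored)))
            stored.append(b)
    return stored, encoded

x = np.random.default_rng(7).normal(size=4096)  # stationary noise compresses well
stored, encoded = compress(x)
print(len(stored), "blocks stored for", len(encoded), "blocks seen")
```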
Yin, H-L; Cao, W-F; Fu, Y; Tang, Y-L; Liu, Y; Chen, T-Y; Chen, Z-B
2014-09-15
Measurement-device-independent quantum key distribution (MDI-QKD) with the decoy-state method is believed to be securely applicable against various hacking attacks in practical quantum key distribution systems. Recently, coherent-state superpositions (CSS) have emerged as an alternative to single-photon qubits for quantum information processing and metrology. In this Letter, CSS are exploited as the source in MDI-QKD. We present an analytical method that gives two tight formulas to estimate the lower bound of the yield and the upper bound of the bit error rate. We exploit standard statistical analysis and the Chernoff bound to perform the parameter estimation. The Chernoff bound can provide good bounds in long-distance MDI-QKD. Our results show that with CSS, both the secure transmission distance and the secure key rate are significantly improved compared with those of weak coherent states in the finite-data case.
Analysis and machine mapping of the distribution of band recoveries
Cowardin, L.M.
1977-01-01
A method of calculating distance and bearing from banding site to recovery location based on the solution of a spherical triangle is presented. X and Y distances on an ordinate grid were applied to computer plotting of recoveries on a map. The advantages and disadvantages of tables of recoveries by State or degree block, axial lines, and distance of recovery from banding site for presentation and comparison of the spatial distribution of band recoveries are discussed. A special web-shaped partition formed by concentric circles about the point of banding and great circles at 30-degree intervals through the point of banding has certain advantages over other methods. Comparison of distributions by means of a χ² contingency test is illustrated. The statistic V = χ²/N can be used as a measure of difference between two distributions of band recoveries, and its possible use is illustrated as a measure of the degree of migrational homing.
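The spherical-triangle computation itself reduces to the great-circle distance and initial bearing; a minimal sketch (illustrative coordinates, not banding data) is:

```python
from math import radians, degrees, sin, cos, asin, atan2, sqrt

def distance_bearing(lat1, lon1, lat2, lon2, R=6371.0):
    """Great-circle distance (km) and initial bearing (deg) between two points."""
    p1, p2 = radians(lat1), radians(lat2)
    dlon = radians(lon2 - lon1)
    # Haversine distance.
    h = sin((p2 - p1) / 2) ** 2 + cos(p1) * cos(p2) * sin(dlon / 2) ** 2
    d = 2 * R * asin(sqrt(h))
    # Initial bearing measured clockwise from north.
    b = atan2(sin(dlon) * cos(p2),
              cos(p1) * sin(p2) - sin(p1) * cos(p2) * cos(dlon))
    return d, (degrees(b) + 360) % 360

# Banding site in North Dakota to a recovery in Louisiana (illustrative).
print(distance_bearing(47.0, -99.0, 30.0, -92.0))  # ~(1984 km, ~160 deg)
```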
An improved tree height measurement technique tested on mature southern pines
Don C. Bragg
2008-01-01
Virtually all techniques for tree height determination follow one of two principles: similar triangles or the tangent method. Most people apply the latter approach, which uses the tangents of the angles to the top and bottom and a true horizontal distance to the subject tree. However, few adjust this method for ground slope, tree lean, crown shape, and crown...
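The basic tangent method, before any of those corrections, is just trigonometry on the horizontal distance and the two angles; a minimal sketch:

```python
from math import radians, tan

def tree_height_tangent(horiz_dist_m, angle_top_deg, angle_base_deg):
    """Tangent method: height from a true horizontal distance and the
    angles to the tree top and base (base angle negative when below eye)."""
    return horiz_dist_m * (tan(radians(angle_top_deg)) - tan(radians(angle_base_deg)))

# 20 m from the trunk, top at +50 deg and base at -4 deg from eye level.
print(tree_height_tangent(20.0, 50.0, -4.0))  # ~25.2 m
```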
Feature selection gait-based gender classification under different circumstances
NASA Astrophysics Data System (ADS)
Sabir, Azhin; Al-Jawad, Naseer; Jassim, Sabah
2014-05-01
This paper proposes gender classification based on human gait features and investigates two variations, clothing (wearing coats) and carrying a bag, in addition to the normal gait sequence. The feature vectors in the proposed system are constructed after applying the wavelet transform. Three different feature sets are proposed in this method. The first, spatio-temporal distance, deals with the distances between different parts of the human body (such as feet, knees, hands, height, and shoulders) during one gait cycle. The second and third feature sets are constructed from the approximation and non-approximation coefficients of the human body, respectively. To extract these two feature sets, we divided the human body into upper and lower parts based on the golden ratio proportion. In this paper, we adopt a statistical method for constructing the feature vector from the above sets. The dimension of the constructed feature vector is reduced based on the Fisher score as a feature selection method to optimize its discriminating significance. Finally, k-Nearest Neighbor is applied as the classification method. Experimental results demonstrate that our approach provides a more realistic scenario and relatively better performance compared with existing approaches.
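The Fisher score used in the feature selection step ranks each feature by its between-class variance over its within-class variance; a sketch on synthetic data:

```python
import numpy as np

def fisher_score(X, y):
    """Fisher score per feature: between-class over within-class variance."""
    mu = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        num += Xc.shape[0] * (Xc.mean(axis=0) - mu) ** 2
        den += Xc.shape[0] * Xc.var(axis=0)
    return num / den

rng = np.random.default_rng(8)
y = rng.integers(0, 2, 200)           # two classes standing in for gender
X = rng.normal(size=(200, 10))
X[:, 3] += 2.0 * y                    # make feature 3 informative
print(np.argsort(fisher_score(X, y))[::-1][:3])  # feature 3 should rank first
```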
Fukunishi, Yoshifumi; Mikami, Yoshiaki; Nakamura, Haruki
2005-09-01
We developed a new method to evaluate the distances and similarities between receptor pockets or chemical compounds based on a multi-receptor versus multi-ligand docking affinity matrix. The receptors were classified by a cluster analysis based on calculations of the distance between receptor pockets. A set of receptors with low homology that bind a similar compound could be classified into one cluster. Based on this reasoning, we proposed a new in silico screening method. According to this method, compounds in a database were docked to multiple targets. The new docking score was a slightly modified version of the multiple active site correction (MASC) score. Receptors that were at a set distance from the target receptor were not included in the analysis, and the modified MASC scores were calculated for the selected receptors. The choice of receptors is important for achieving a good screening result, and our clustering of receptors is useful for this purpose. This method was applied to the analysis of a set of 132 receptors and 132 compounds, and the results demonstrated that it achieves a high hit ratio compared to uniform sampling, using the newly developed receptor-ligand docking program Sievgene, which shows good docking performance, yielding 50.8% of the reconstructed complexes at less than 2 Å RMSD.
The structure of clusters of galaxies
NASA Astrophysics Data System (ADS)
Fox, David Charles
When infalling gas is accreted onto a cluster of galaxies, its kinetic energy is converted to thermal energy in a shock, heating the ions. Using a self-similar spherical model, we calculate the collisional heating of the electrons by the ions, and predict the electron and ion temperature profiles. While there are significant differences between the two, they occur at radii larger than currently observable, and too large to explain observed X-ray temperature declines in clusters. Numerical simulations by Navarro, Frenk, & White (1996) predict a universal dark matter density profile. We calculate the expected number of multiply-imaged background galaxies in the Hubble Deep Field due to foreground groups and clusters with this profile. Such groups are up to 1000 times less efficient at lensing than the standard singular isothermal spheres. However, with either profile, the expected number of galaxies lensed by groups in the Hubble Deep Field is at most one, consistent with the lack of clearly identified group lenses. X-ray and Sunyaev-Zel'dovich (SZ) effect observations can be combined to determine the distance to clusters of galaxies, provided the clusters are spherical. When applied to an aspherical cluster, this method gives an incorrect distance. We demonstrate a method for inferring the three-dimensional shape of a cluster and its correct distance from X-ray, SZ effect, and weak gravitational lensing observations, under the assumption of hydrostatic equilibrium. We apply this method to simple, analytic models of clusters, and to a numerically simulated cluster. Using artificial observations based on current X-ray and SZ effect instruments, we recover the true distance without detectable bias and with uncertainties of 4 percent.
Low-Cost Ultrasonic Distance Sensor Arrays with Networked Error Correction
Dai, Hongjun; Zhao, Shulin; Jia, Zhiping; Chen, Tianzhou
2013-01-01
Distance has been one of the basic factors in manufacturing and control fields, and ultrasonic distance sensors have been widely used as a low-cost measuring tool. However, the propagation of ultrasonic waves is greatly affected by environmental factors such as temperature, humidity and atmospheric pressure. In order to solve the problem of inaccurate measurement, which is significant in industry, this paper presents a novel ultrasonic distance sensor model using networked error correction (NEC) trained on experimental data. This is more accurate than other existing approaches because it uses information from indirect association with neighboring sensors, which has not been considered before. The NEC technique, focusing on optimization of the relationship of the topological structure of sensor arrays, is implemented to compensate for erroneous measurements caused by the environment. We apply the maximum likelihood method to determine the optimal fusion data set and use a neighbor discovery algorithm to identify neighbor nodes rapidly. Furthermore, we adopt the NEC optimization algorithm, which takes full advantage of the correlation coefficients of neighboring sensors. The experimental results demonstrate that the ranging errors of the NEC system are within 2.20%; furthermore, the mean absolute percentage error is reduced to 0.01% after three iterations of this method, which means that the proposed method performs extremely well. The optimized distance measurement method we propose, with the capability of NEC, would bring a significant advantage to intelligent industrial automation. PMID:24013491
NASA Astrophysics Data System (ADS)
Wang, Yan-Jun; Liu, Qun
1999-03-01
Analysis of stock-recruitment (SR) data is most often done by fitting various SR relationship curves to the data. Fish population dynamics data often contain stochastic variation and measurement error, which usually bias a regression analysis. This paper presents a robust regression method, least median of squared orthogonal distances (LMD), which is insensitive to abnormal values in both the dependent and independent variables of a regression. Outliers whose variance differs significantly from the rest of the data can be identified in a residual analysis. The least squares (LS) method is then applied to the SR data with the identified outliers down-weighted. The application of LMD and the LMD-based Reweighted Least Squares (RLS) method to simulated and real fisheries SR data is explored.
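Since the abstract only names the estimator, a minimal Python sketch of the least-median-of-squared-orthogonal-distances idea may help; the random-pair search and the function name lmd_line_fit are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def lmd_line_fit(x, y, n_trials=2000, seed=None):
    """Fit y = a + b*x by minimizing the MEDIAN of squared orthogonal
    distances (robust to outliers in both variables).

    Candidate lines pass through random point pairs; the pair whose
    line gives the smallest median squared orthogonal distance wins.
    """
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, float), np.asarray(y, float)
    best = (np.inf, 0.0, 0.0)
    for _ in range(n_trials):
        i, j = rng.choice(len(x), size=2, replace=False)
        if x[j] == x[i]:
            continue  # skip vertical candidate lines
        b = (y[j] - y[i]) / (x[j] - x[i])
        a = y[i] - b * x[i]
        d2 = (y - a - b * x) ** 2 / (1.0 + b ** 2)  # squared orthogonal distances
        crit = np.median(d2)
        if crit < best[0]:
            best = (crit, a, b)
    return best[1], best[2]  # intercept, slope
```

Points with large residuals under this fit could then be down-weighted in an ordinary least squares pass, as the abstract describes for the RLS step.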
Orientation Control Method and System for Object in Motion
NASA Technical Reports Server (NTRS)
Whorton, Mark Stephen (Inventor); Redmon, Jr., John W. (Inventor); Cox, Mark D. (Inventor)
2012-01-01
An object in motion has a force applied thereto at a point of application. By moving the point of application such that the distance between the object's center-of-mass and the point of application is changed, the object's orientation can be changed/adjusted.
A revised moving cluster distance to the Pleiades open cluster
NASA Astrophysics Data System (ADS)
Galli, P. A. B.; Moraux, E.; Bouy, H.; Bouvier, J.; Olivares, J.; Teixeira, R.
2017-02-01
Context. The distance to the Pleiades open cluster has been extensively debated in the literature for several decades. Although different methods point to a discrepancy in the trigonometric parallaxes produced by the Hipparcos mission, the number of individual stars with known distances is still small compared to the number of cluster members available to help resolve this problem. Aims: We provide a new distance estimate for the Pleiades based on the moving cluster method, which will be useful for further discussion of the so-called Pleiades distance controversy and for comparison with the very precise parallaxes from the Gaia space mission. Methods: We apply a refurbished implementation of the convergent point search method to an updated census of Pleiades stars to calculate the convergent point position of the cluster from stellar proper motions. We then derive individual parallaxes for 64 cluster members using radial velocities compiled from the literature, and approximate parallaxes for another 1146 stars based on the spatial velocity of the cluster. This represents the largest sample of Pleiades stars with individual distances to date. Results: The parallaxes derived in this work are in good agreement with previous results obtained in different studies (excluding Hipparcos) for individual stars in the cluster. We report a mean parallax of 7.44 ± 0.08 mas and a corresponding distance of 134.4 ± 1.4 pc, consistent with the weighted mean of 135.0 ± 0.6 pc obtained from the non-Hipparcos results in the literature. Conclusions: Our result for the distance to the Pleiades open cluster is not consistent with the Hipparcos catalog, but favors the recent and more precise distance determination of 136.2 ± 1.2 pc obtained from Very Long Baseline Interferometry observations. It is also in good agreement with the mean distance of 133 ± 5 pc obtained from the first trigonometric parallaxes delivered by the Gaia satellite for the brightest cluster members in common with our sample. Full Table B.2 is only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/598/A48
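For readers unfamiliar with the moving cluster method, the core relations are v_t = 4.74 μ d and v_t = v_r tan λ, where λ is the angular separation between a star and the convergent point. A small sketch of the resulting parallax formula (illustrative, not the authors' code):

```python
import numpy as np

def moving_cluster_parallax(mu_mas_per_yr, v_r_km_s, lambda_deg):
    """Parallax (mas) of a cluster member from the convergent-point
    relations v_t = 4.74 * mu * d and v_t = v_r * tan(lambda),
    which combine to pi = 4.74 * mu / (v_r * tan(lambda))."""
    lam = np.radians(lambda_deg)
    return 4.74 * mu_mas_per_yr / (v_r_km_s * np.tan(lam))
```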
Long distance measurement-device-independent quantum key distribution with entangled photon sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Feihu; Qi, Bing; Liao, Zhongfa
2013-08-05
We present a feasible method that can make quantum key distribution (QKD) both ultra-long-distance and immune to all attacks on the detection system. This method is called measurement-device-independent QKD (MDI-QKD) with entangled photon sources in the middle. By proposing a model and simulating a QKD experiment, we find that MDI-QKD with one entangled photon source can tolerate 77 dB of loss (367 km of standard fiber) in the asymptotic limit and 60 dB of loss (286 km of standard fiber) in the finite-key case with state-of-the-art detectors. Our general model can also be applied to other non-QKD experiments involving entanglement and Bell state measurements.
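The quoted loss budgets map to fiber lengths through the attenuation of standard telecom fiber; the 0.21 dB/km figure below is implied by the abstract's own numbers (77 dB corresponds to 367 km) and is the typical value at 1550 nm.

```python
def fiber_length_km(loss_db, atten_db_per_km=0.21):
    """Convert a tolerable channel loss (dB) into a standard-fiber
    length, assuming ~0.21 dB/km attenuation (typical at 1550 nm)."""
    return loss_db / atten_db_per_km

print(fiber_length_km(77.0))  # ~366.7 km, asymptotic limit
print(fiber_length_km(60.0))  # ~285.7 km, finite-key case
```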
Computational experience with a parallel algorithm for tetrangle inequality bound smoothing.
Rajan, K; Deo, N
1999-09-01
Determining molecular structure from interatomic distances is an important and challenging problem. Given a molecule with n atoms, lower and upper bounds on interatomic distances can usually be obtained by NMR for only a small subset of the n(n-1)/2 atom pairs. Given the bounds so obtained on the distances between some of the atom pairs, it is often useful to compute tighter bounds on all n(n-1)/2 pairwise distances. This process is referred to as bound smoothing. The initial lower and upper bounds for the pairwise distances not measured are usually assumed to be 0 and infinity. One method for bound smoothing is to use the limits imposed by the triangle inequality. The distance bounds so obtained can often be tightened further by applying the tetrangle inequality--the limits imposed on the six pairwise distances among a set of four atoms (instead of three for the triangle inequalities). The tetrangle inequality is expressed by the Cayley-Menger determinants. For every quadruple of atoms, each pass of the tetrangle inequality bound smoothing procedure finds upper and lower limits on each of the six distances in the quadruple. Applying the tetrangle inequalities to each of the C(n,4) quadruples requires O(n^4) time. Here, we propose a parallel algorithm for bound smoothing employing the tetrangle inequality. Each pass of our algorithm requires O(n^3 log n) time on a CREW PRAM (Concurrent Read Exclusive Write Parallel Random Access Machine) with O(n/log n) processors. An implementation of this parallel algorithm on the Intel Paragon XP/S and its performance are also discussed.
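The tetrangle stage relies on Cayley-Menger determinants, which are too long to sketch here, but the triangle-inequality stage it refines is compact. A minimal sketch of one sequential triangle bound-smoothing pass (the standard textbook form, not the authors' parallel algorithm):

```python
import numpy as np

def triangle_smooth(L, U):
    """One O(n^3) pass of triangle-inequality bound smoothing.

    U[i, j] / L[i, j] hold the current upper / lower distance bounds.
    Upper bounds shrink via U_ij <= U_ik + U_kj; lower bounds grow
    via L_ij >= max(L_ik - U_kj, L_kj - U_ik)."""
    n = len(U)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if U[i, k] + U[k, j] < U[i, j]:
                    U[i, j] = U[i, k] + U[k, j]
                if L[i, k] - U[k, j] > L[i, j]:
                    L[i, j] = L[i, k] - U[k, j]
                if L[k, j] - U[i, k] > L[i, j]:
                    L[i, j] = L[k, j] - U[i, k]
    return L, U
```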
Gao, Liqiang; Sun, Chao; Zhang, Chen; Zheng, Nenggan; Chen, Weidong; Zheng, Xiaoxiang
2013-01-01
Traditional automatic navigation methods for bio-robots are constrained to configured environments and thus cannot be applied to tasks in unknown environments. By ignoring the bio-robot's own innate living abilities and treating bio-robots in the same way as mechanical robots, those methods neglect the intelligent behavior of animals. This paper proposes a novel ratbot automatic navigation method for unknown environments using only reward stimulation and distance measurement. By exploiting the rat's habit of thigmotaxis and its reward-seeking behavior, this method is able to incorporate the rat's intrinsic intelligence for obstacle avoidance and path searching into navigation. Experimental results show that the method works robustly and can successfully navigate the ratbot to a target in an unknown environment. This work lays a solid foundation for the application of ratbots and has significant implications for the automatic navigation of other bio-robots as well.
NASA Astrophysics Data System (ADS)
Kasyanenko, Valeriy Mitrofanovich
Measuring the three-dimensional structure of molecules, dynamics of structural changes, and energy transport on a molecular scale is important for many areas of natural science. Supplementing the widely used methods of x-ray diffraction, NMR, and optical spectroscopies, a two-dimensional infrared spectroscopy (2DIR) method was introduced about a decade ago. The 2DIR method measures pair-wise interactions between vibrational modes in molecules, thus acquiring molecular structural constraints such as distances between vibrating groups and the angles between their transition dipoles. The 2DIR method has been applied to a variety of molecular systems but in studying larger molecules such as proteins and peptides the method is facing challenges associated with the congestion of their vibrational spectra and delocalized character of their vibrational modes. To help extract structural information from such spectra and make efficient use of vibrational modes separated by large distances, a novel relaxation-assisted 2DIR method (RA 2DIR) has recently been proposed, which exploits the transport of excess vibrational energy from the initially excited mode. With the goal of further development of RA 2DIR, we applied it to a variety of molecular systems, including model compounds, transition-metal complexes, and isomers. The experiments revealed several novel effects which both enhance the power of RA 2DIR and bring a deeper understanding to the fundamental process of energy transport on a molecular level. We demonstrated that RA 2DIR can enhance greatly (27-fold) the cross-peak amplitude among spatially remote modes, which leads to an increase of the range of distances accessible for structural measurements by several fold. We demonstrated that the energy transport time correlates with the intermode distance. This correlation offers a new way for identifying connectivity patterns in molecules. We developed two models of energy transport in molecules. In one, a spatial overlap of vibrational modes determines the rate of energy transfer. Another model uses generalizations of Marcus theory of electron transfer applied to anharmonic vibrational transitions. These theoretical models reproduce well the main features of RA 2DIR measurements in a set of isomers where the energy transport is found to be affected by the three-dimensional structure as well as in transition-metal complexes, where the energy transport has to go through relatively weak coordination bonds and can be different from that occurring via covalent bonds.
Integrating concepts and skills: Slope and kinematics graphs
NASA Astrophysics Data System (ADS)
Tonelli, Edward P., Jr.
The concept of force is a foundational idea in physics. To predict the results of applying forces to objects, a student must be able to interpret data representing changes in distance, time, speed, and acceleration. Comprehension of kinematics concepts requires students to interpret motion graphs, where rates of change are represented as slopes of line segments. Studies have shown that a majority of students who show proficiency with mathematical concepts fail to accurately interpret motion graphs. The primary aim of this study was to examine how students apply their knowledge of slope when interpreting kinematics graphs. To answer the research questions, a mixed methods research design, which included a survey and interviews, was adopted. Ninety-eight (N=98) high school students completed surveys that were quantitatively analyzed, along with qualitative information collected from interviews of students (N=15) and teachers (N=2). The study showed that students who recalled methods for calculating slopes and speeds calculated slopes accurately but calculated speeds inaccurately. When comparing slopes and speeds, most students resorted to calculating instead of visual inspection. Most students recalled and applied memorized rules. Students who calculated slopes and speeds inaccurately failed to recall methods of calculating them, but when comparing speeds, these students connected the concepts of distance and time to the line segments and the rates of change they represented. This study's findings will likely help mathematics and science educators to better assist their students in applying their knowledge of slope to kinematics concepts.
Use of units of measurement error in anthropometric comparisons.
Lucas, Teghan; Henneberg, Maciej
2017-09-01
Anthropometrists attempt to minimise measurement errors; however, errors cannot be eliminated entirely. Currently, measurement errors are simply reported, but they should be incorporated into analyses of anthropometric data. This study proposes a method that incorporates measurement errors into reported values, replacing metric units with 'units of technical error of measurement (TEM)', and applies it to forensics, industrial anthropometry and biological variation. The USA armed forces anthropometric survey (ANSUR) contains 132 anthropometric dimensions of 3982 individuals. Concepts of duplication and Euclidean distance calculations were applied to the forensic-style identification of individuals in this survey. The National Size and Shape Survey of Australia contains 65 anthropometric measurements of 1265 women. This sample was used to show how a woman's body measurements expressed in TEM could be 'matched' to standard clothing sizes. Euclidean distances show that two sets of repeated anthropometric measurements of the same person cannot be matched (distance > 0) when expressed in millimetres, but can be (distance = 0) in units of TEM. Only 81 women fit into a standard clothing size when matched using centimetres; with units of TEM, 1944 women fit. The proposed method can be applied to all fields that use anthropometry. Units of TEM are considered a more reliable unit of measurement for comparisons.
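A minimal sketch of the matching idea, assuming that "expressed in units of TEM" means dividing each dimension by its TEM and rounding to whole units (the paper's exact operationalization may differ; all names are illustrative):

```python
import numpy as np

def to_tem_units(values, tems):
    """Express measurements in whole units of technical error of
    measurement (TEM), one TEM value per dimension."""
    return np.round(np.asarray(values, float) / np.asarray(tems, float))

def match_distance(a, b, tems):
    """Euclidean distance between two measurement sets in TEM units;
    0 indicates a match once measurement error is absorbed."""
    return np.linalg.norm(to_tem_units(a, tems) - to_tem_units(b, tems))
```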
NASA Astrophysics Data System (ADS)
Clergeau, Jean-François; Ferraton, Matthieu; Guérard, Bruno; Khaplanov, Anton; Piscitelli, Francesco; Platz, Martin; Rigal, Jean-Marie; Van Esch, Patrick; Daullé, Thibault
2017-01-01
1D or 2D neutron position sensitive detectors with individual wire or strip readout using discriminators have the advantage of being able to treat several neutron impacts partially overlapping in time, hence reducing global dead time. A single neutron impact usually gives rise to several discriminator signals. In this paper, we introduce an information-theoretical definition of image resolution. Two point-like spots of neutron impacts with a given distance between them act as a source of information (each neutron hit belongs to one spot or the other), and the detector plus signal treatment is regarded as an imperfect communication channel that transmits this information. The maximal mutual information obtained from this channel as a function of the distance between the spots allows one to define a calibration-independent measure of position resolution. We then apply this measure to quantify the position resolution of different algorithms treating these individual discriminator signals, which can be implemented in firmware. The method is then applied to different detectors in use at the ILL. Center-of-gravity methods usually improve the position resolution over best-wire algorithms, which are the standard way of treating these signals.
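A toy version of this resolution measure can be computed with a Gaussian detector response standing in for the full signal-treatment chain (an assumption; the paper works with discriminator signals): the mutual information between spot identity and detected position rises from 0 toward 1 bit as the two spots separate.

```python
import numpy as np

def two_spot_mutual_info(separation, sigma, nbins=4001):
    """Mutual information (bits) between the identity of two
    equiprobable point sources and the detected 1D position,
    for a Gaussian detector response of width sigma."""
    x = np.linspace(-8 * sigma, separation + 8 * sigma, nbins)
    dx = x[1] - x[0]

    def gauss(mu):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

    def entropy(p):  # differential entropy, numerically integrated
        q = p[p > 0]
        return -np.sum(q * np.log2(q)) * dx

    p0, p1 = gauss(0.0), gauss(separation)
    # I(spot; x) = h(x) - h(x | spot)
    return entropy(0.5 * (p0 + p1)) - 0.5 * (entropy(p0) + entropy(p1))
```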
Confronting the Gaia and NLTE spectroscopic parallaxes for the FGK stars
NASA Astrophysics Data System (ADS)
Sitnova, Tatyana; Mashonkina, Lyudmila; Pakhomov, Yury
2018-04-01
Understanding the chemical evolution of the Galaxy relies on stellar chemical compositions. Accurate atmospheric parameters are a prerequisite for determining accurate chemical abundances. For late-type stars with known distance, the surface gravity (log g) can be calculated from the well-known relation between stellar mass, T_eff, and absolute bolometric magnitude. This method depends only weakly on model atmospheres and provides reliable log g; however, accurate distances are available for only a limited number of stars. Another way to determine log g for cool stars is based on ionisation equilibrium, i.e. consistent abundances from lines of neutral and ionised species. In this study we determine atmospheric parameters moving step by step from well-studied nearby dwarfs to ultra-metal-poor (UMP) giants. In each sample, we select stars with the most reliable T_eff based on photometry and the distance-based log g, and compare with the spectroscopic gravity calculated taking into account deviations from local thermodynamic equilibrium (LTE). We then apply the spectroscopic method of log g determination to the other stars of the sample, with unknown distances.
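The "well-known relation" is the standard one; a sketch with assumed solar reference constants (extinction neglected for brevity):

```python
import numpy as np

LOGG_SUN, TEFF_SUN, MBOL_SUN = 4.44, 5777.0, 4.74  # assumed solar values

def logg_from_distance(mass_msun, teff_k, v_mag, bc, d_pc):
    """Distance-based surface gravity:
    log g = log g_sun + log(M/M_sun) + 4 log(Teff/Teff_sun)
            + 0.4 (Mbol - Mbol_sun),
    with Mbol = V + BC + 5 - 5 log10(d/pc)."""
    mbol = v_mag + bc + 5.0 - 5.0 * np.log10(d_pc)
    return (LOGG_SUN + np.log10(mass_msun)
            + 4.0 * np.log10(teff_k / TEFF_SUN)
            + 0.4 * (mbol - MBOL_SUN))
```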
A single scan skeletonization algorithm: application to medical imaging of trabecular bone
NASA Astrophysics Data System (ADS)
Arlicot, Aurore; Amouriq, Yves; Evenou, Pierre; Normand, Nicolas; Guédon, Jean-Pierre
2010-03-01
Shape description is an important step in image analysis. The skeleton is used as a simple, compact representation of a shape. A skeleton represents the line centered in the shape and must be homotopic and one point wide. Current skeletonization algorithms compute the skeleton over several image scans, using either thinning algorithms or distance transforms. The principle of thinning is to delete points progressively while preserving the topology of the shape. Alternatively, the maxima of the local distance transform identify the skeleton and are an equivalent way to calculate the medial axis. However, the skeleton obtained with this method is disconnected, so all the points of the medial axis must then be connected to produce the skeleton. In this study we introduce a translated distance transform and adapt an existing distance-driven homotopic algorithm to perform skeletonization in a single scan, thus allowing the processing of unbounded images. The method is applied, in our study, to micro-scanner images of trabecular bone. We wish to characterize the bone micro-architecture in order to quantify bone integrity.
Detection of white spot lesions by segmenting laser speckle images using computer vision methods.
Gavinho, Luciano G; Araujo, Sidnei A; Bussadori, Sandra K; Silva, João V P; Deana, Alessandro M
2018-05-05
This paper aims to develop a method for laser speckle image segmentation of tooth surfaces for the diagnosis of early-stage caries. The method, applied directly to a raw image obtained by digital photography, is based on the difference between the speckle pattern of a carious area of the tooth surface and that of a sound area. Each image is divided into blocks, which are identified in a working matrix by the χ2 distances between the block histograms of the analyzed image and reference histograms previously obtained by K-means from healthy (h_Sound) and lesioned (h_Decay) areas, separately. If the χ2 distance between a block histogram and h_Sound is greater than the distance to h_Decay, the block is marked as decayed. The experiments showed that the method provides effective segmentation for initial lesions. We used 64 images to test the algorithm and achieved 100% accuracy in segmentation. Differences between the speckle pattern of a sound tooth surface region and a carious region, even at an early stage, can be evidenced by the χ2 distance between histograms. The method enhances the contrast between sound and lesioned tissues and thus proves effective for segmenting the laser speckle image. The results were obtained at low computational cost. The method has the potential for early diagnosis in a clinical environment, through the development of low-cost portable equipment.
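The block-labeling rule reduces to a pair of χ² histogram distances; a minimal sketch (function names illustrative):

```python
import numpy as np

def chi2_distance(h1, h2, eps=1e-10):
    """Chi-squared distance between two (normalized) histograms."""
    h1, h2 = np.asarray(h1, float), np.asarray(h2, float)
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def block_is_decayed(block_hist, h_sound, h_decay):
    """Mark a block as decayed when its histogram is farther from the
    sound reference than from the lesion reference."""
    return chi2_distance(block_hist, h_sound) > chi2_distance(block_hist, h_decay)
```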
Distance dependence in photo-induced intramolecular electron transfer
NASA Astrophysics Data System (ADS)
Larsson, Sven; Volosov, Andrey
1986-09-01
The distance dependence of the rate of photo-induced electron transfer reactions is studied. A quantum mechanical method, CNDO/S, is applied to a series of molecules recently investigated experimentally by Hush et al. The calculations show a large interaction through the saturated bridge which connects the two chromophores. The electronic matrix element HAB decreases by a factor of 10 over about 4 Å. There is also a decrease in the rate due to the lower exothermicity of the longer molecule. The results are in fair agreement with the experimental results.
1991-03-21
discussion of spectral factorability and motivations for broadband analysis, the report is subdivided into four main sections. In Section 1.0, we...estimates. The motivation for developing our multi-channel deconvolution method was to gain information about seismic sources, most notably, nuclear...with complex constraints for estimating the rupture history. Such methods (applied mostly to data sets that also include strong-motion data), were
A Monocular Vision Sensor-Based Obstacle Detection Algorithm for Autonomous Robots
Lee, Tae-Jae; Yi, Dong-Hoon; Cho, Dong-Il “Dan”
2016-01-01
This paper presents a monocular vision sensor-based obstacle detection algorithm for autonomous robots. Each individual image pixel at the bottom region of interest is labeled as belonging either to an obstacle or the floor. While conventional methods depend on point tracking for geometric cues for obstacle detection, the proposed algorithm uses the inverse perspective mapping (IPM) method. This method is much more advantageous when the camera is not high off the floor, which makes point tracking near the floor difficult. Markov random field-based obstacle segmentation is then performed using the IPM results and a floor appearance model. Next, the shortest distance between the robot and the obstacle is calculated. The algorithm is tested by applying it to 70 datasets, 20 of which include nonobstacle images where considerable changes in floor appearance occur. The obstacle segmentation accuracies and the distance estimation error are quantitatively analyzed. For obstacle datasets, the segmentation precision and the average distance estimation error of the proposed method are 81.4% and 1.6 cm, respectively, whereas those for a conventional method are 57.5% and 9.9 cm, respectively. For nonobstacle datasets, the proposed method gives 0.0% false positive rates, while the conventional method gives 17.6%. PMID:26938540
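The distance step can be illustrated with the simplest flat-floor special case of inverse perspective mapping, a camera at height h with a horizontal optical axis (the paper's IPM handles the general geometry; all names are illustrative):

```python
def ground_distance_m(v_px, v0_px, f_px, cam_height_m):
    """Flat-floor range: a pixel (v_px - v0_px) rows below the
    principal point maps to a ground point at Z = f * h / (v - v0),
    assuming a horizontal optical axis and a planar floor."""
    dv = v_px - v0_px
    if dv <= 0:
        raise ValueError("pixel must lie below the horizon")
    return f_px * cam_height_m / dv
```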
An Adaptive Niching Genetic Algorithm using a niche size equalization mechanism
NASA Astrophysics Data System (ADS)
Nagata, Yuichi
Niching GAs have been widely investigated as a means of applying genetic algorithms (GAs) to multimodal function optimization problems. In this paper, we propose a new niching GA that attempts to form niches, each consisting of an equal number of individuals. The proposed GA can also be applied to combinatorial optimization problems by defining a distance metric in the search space. We apply the proposed GA to the job-shop scheduling problem (JSP) and demonstrate that the proposed niching method enhances the ability to maintain niches and improves the performance of GAs.
Geostatistics and spatial analysis in biological anthropology.
Relethford, John H
2008-05-01
A variety of methods have been used to make evolutionary inferences based on the spatial distribution of biological data, including reconstructing population history and detection of the geographic pattern of natural selection. This article provides an examination of geostatistical analysis, a method used widely in geology but which has not often been applied in biological anthropology. Geostatistical analysis begins with the examination of a variogram, a plot showing the relationship between a biological distance measure and the geographic distance between data points and which provides information on the extent and pattern of spatial correlation. The results of variogram analysis are used for interpolating values of unknown data points in order to construct a contour map, a process known as kriging. The methods of geostatistical analysis and discussion of potential problems are applied to a large data set of anthropometric measures for 197 populations in Ireland. The geostatistical analysis reveals two major sources of spatial variation. One pattern, seen for overall body and craniofacial size, shows an east-west cline most likely reflecting the combined effects of past population dispersal and settlement. The second pattern is seen for craniofacial height and shows an isolation by distance pattern reflecting rapid spatial changes in the midlands region of Ireland, perhaps attributable to the genetic impact of the Vikings. The correspondence of these results with other analyses of these data and the additional insights generated from variogram analysis and kriging illustrate the potential utility of geostatistical analysis in biological anthropology. (c) 2008 Wiley-Liss, Inc.
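The variogram at the heart of the method is easy to estimate empirically; below is the classical (Matheron) estimator as a sketch, with anthropometric values standing in for z (names illustrative):

```python
import numpy as np

def empirical_variogram(coords, values, bin_edges):
    """Classical semivariogram estimate:
    gamma(h) = 1/(2 N(h)) * sum of (z_i - z_j)^2 over pairs in lag bin h."""
    coords = np.asarray(coords, float)
    values = np.asarray(values, float)
    i, j = np.triu_indices(len(values), k=1)      # all unordered pairs
    lags = np.linalg.norm(coords[i] - coords[j], axis=1)
    sqdiff = (values[i] - values[j]) ** 2
    gamma = np.full(len(bin_edges) - 1, np.nan)
    for b in range(len(bin_edges) - 1):
        in_bin = (lags >= bin_edges[b]) & (lags < bin_edges[b + 1])
        if in_bin.any():
            gamma[b] = 0.5 * sqdiff[in_bin].mean()
    return gamma
```

The fitted variogram model then supplies the weights used for kriging interpolation of unknown locations.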
Discrimination of different sub-basins on Tajo River based on water influence factor
NASA Astrophysics Data System (ADS)
Bermudez, R.; Gascó, J. M.; Tarquis, A. M.; Saa-Requejo, A.
2009-04-01
Numerical taxonomy has been applied to classify the waters of the Tajo basin (Spain) down to the Portuguese border. A total of 52 stations measuring 15 water variables were used in this study. Groups were obtained by applying the Euclidean distance between stations (distance classification) and the Euclidean distance between each station and the centroid estimated from them (centroid classification), varying the number of parameters and working with or without variable typification. To compare the classifications, a log-log relation between the number of groups created and the distances was established to select the best one. The centroid classification proved more appropriate, following the natural constraints in a more logical way than the minimum distance between stations. Variable typification does not improve the classification except when the centroid method is applied. Taking the ions and their sum as variables improved the classification. Stations are grouped based on electric conductivity (CE), total anions (TA), total cations (TC) and ion ratios (Na/Ca and Mg/Ca). For a given classification, comparing the different groups shows a certain variation in ion concentrations and ion ratios; however, the variation in each ion among groups differs depending on the case. For the last group, regardless of the classification, the increase in all ions is general. Comparing the dendrograms, and the groups they originate, the Tajo river basin can be subdivided into five sub-basins differentiated by the main influence on their water: 1. With a higher ombrogenic influence (rain fed). 2. With ombrogenic and pedogenic influence (rain and groundwater fed). 3. With pedogenic influence. 4. With lithogenic influence (geological bedrock). 5. With a higher ombrogenic and lithogenic influence added.
NASA Astrophysics Data System (ADS)
Gallenne, A.; Kervella, P.; Mérand, A.; Pietrzyński, G.; Gieren, W.; Nardetto, N.; Trahin, B.
2017-11-01
Context. The Baade-Wesselink (BW) method, which combines linear and angular diameter variations, is the most common method to determine the distances to pulsating stars. However, the projection factor (p-factor), used to convert radial velocities into pulsation velocities, is still poorly calibrated. This parameter is critical to the use of the technique and often leads to 5-10% uncertainties on the derived distances. Aims: We focus on empirically measuring the p-factor of a homogeneous sample of 29 LMC and 10 SMC Cepheids for which accurate average distances were estimated from eclipsing binary systems. Methods: We used the SPIPS algorithm, an implementation of the BW technique. Unlike other conventional methods, SPIPS combines all observables, i.e. radial velocities, multi-band photometry and interferometry, into a consistent physical model to estimate the parameters of the stars. The large number of observables and their redundancy ensure its robustness and improve the statistical precision. Results: We successfully estimated the p-factor of several Magellanic Cloud Cepheids. Combined with our previous Galactic results, we find the following P-p relation: p = -0.08 ± 0.04 (log P-1.18) + 1.24 ± 0.02. We find no evidence of a metallicity-dependent p-factor. We also derive a new calibration of the period-radius relation, log R = 0.684 ± 0.007 (log P-0.517) + 1.489 ± 0.002, with an intrinsic dispersion of 0.020. We detect an infrared excess for all stars at 3.6 μm and 4.5 μm, which might be the signature of circumstellar dust. We measure a mean offset of Δm3.6 = 0.057 ± 0.006 mag and Δm4.5 = 0.065 ± 0.008 mag. Conclusions: We provide a new P-p relation based on a multi-wavelength fit that can be used for distance scale calibration with the BW method. The dispersion is due to the width of the LMC and SMC, which we took into account because individual Cepheid distances are unknown. The new P-R relation has a small intrinsic dispersion: 4.5% in radius. This precision will allow us to apply the BW method accurately to nearby galaxies. Finally, the infrared excesses we detect again raise the issue of using mid-IR wavelengths to derive period-luminosity relations and to calibrate the Hubble constant. These IR excesses might be the signature of circumstellar dust and are never taken into account when applying the BW method at those wavelengths. Our measured offsets may give an average bias of 2.8% on distances derived through mid-IR P-L relations.
VCSEL-based sensors for distance and velocity
NASA Astrophysics Data System (ADS)
Moench, Holger; Carpaij, Mark; Gerlach, Philipp; Gronenborn, Stephan; Gudde, Ralph; Hellmig, Jochen; Kolb, Johanna; van der Lee, Alexander
2016-03-01
VCSEL-based sensors can measure distance and velocity in three-dimensional space and are already produced in high quantities for professional and consumer applications. Several physical principles are used. VCSELs are applied as infrared illumination for surveillance cameras: high-power arrays combined with imaging optics provide uniform illumination of scenes up to a distance of several hundred meters. Time-of-flight methods use a pulsed VCSEL as the light source, either with strong single pulses at low duty cycle or with pulse trains. Because of the sensitivity to background light and the strong decrease of the signal with distance, several watts of laser power are needed at distances up to 100 m. VCSEL arrays enable power scaling and can provide very short pulses at higher power density. Applications range from extended functions in a smartphone, over industrial sensors, up to automotive LIDAR for driver assistance and autonomous driving. Self-mixing interference works with coherent laser photons scattered back into the cavity and is therefore insensitive to environmental light. The method is used to measure target velocity and distance with very high accuracy at distances up to one meter. Single-mode VCSELs with an integrated photodiode and grating-stabilized polarization enable very compact and cost-effective products. Besides the well-known application as a computer input device, new applications with even higher accuracy, or for speed-over-ground measurement in automobiles at up to 250 km/h, are being investigated. All measurement methods exploit the known VCSEL properties such as robustness, stability over temperature, and the potential for packages with integrated optics and electronics. This makes VCSEL sensors ideally suited for new mass applications in the consumer and automotive markets.
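For the pulsed time-of-flight principle, the range follows from half the round-trip time; a one-line sketch:

```python
C = 2.998e8  # speed of light, m/s (vacuum value; air differs by ~0.03%)

def tof_range_m(round_trip_s):
    """Pulsed time-of-flight: light travels out and back, so
    range = c * t / 2; a 1 ns round trip corresponds to ~15 cm."""
    return C * round_trip_s / 2.0
```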
Development of an Empirical Local Magnitude Formula for Northern Oklahoma
NASA Astrophysics Data System (ADS)
Spriggs, N.; Karimi, S.; Moores, A. O.
2015-12-01
In this paper we focus on determining a local magnitude formula for northern Oklahoma that is unbiased with distance, by empirically constraining the attenuation properties within the region of interest based on the amplitudes of observed seismograms. For regional networks detecting events over several hundred kilometres, distance correction terms play an important role in determining the magnitude of an event. Standard distance correction terms such as those of Hutton and Boore (1987) may show significant bias with distance if applied in a region with different attenuation properties, resulting in incorrect magnitudes. We present data from a regional network of broadband seismometers installed in bedrock in northern Oklahoma. Events with magnitudes between 2.0 and 4.5, distributed evenly across the network, are considered. We find that existing models show a bias with respect to hypocentral distance. Observed amplitude measurements demonstrate a significant Moho-bounce effect that mandates the use of a trilinear attenuation model to avoid bias in the distance correction terms. We present two different approaches to local magnitude calibration. The first maintains the classic definition of local magnitude as proposed by Richter. The second calibrates local magnitude so that it agrees with moment magnitude where a regional moment tensor can be computed. To this end, regional moment tensor solutions and moment magnitudes are computed for events with magnitude larger than 3.5 to allow calibration of local magnitude to moment magnitude. For both methods the new formula yields magnitudes systematically lower than previous values computed with Eaton's (1992) model. We compare the resulting magnitudes and discuss the benefits and drawbacks of each method. Our results highlight the importance of correctly calibrating the distance correction terms for accurate local magnitude assessment in regional networks.
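For reference, the Hutton & Boore (1987) correction mentioned above has the standard form sketched below (taken from the literature, not this paper's recalibrated terms); applying it in a region with different attenuation is exactly what introduces the distance bias the authors describe.

```python
import numpy as np

def ml_hutton_boore(amp_mm, r_km):
    """Local magnitude with the Hutton & Boore (1987) -log A0 term:
    ML = log10(A) + 1.11 log10(r/100) + 0.00189 (r - 100) + 3.0,
    A = Wood-Anderson amplitude (mm), r = hypocentral distance (km)."""
    return (np.log10(amp_mm) + 1.11 * np.log10(r_km / 100.0)
            + 0.00189 * (r_km - 100.0) + 3.0)
```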
The modified "Rockfall Hazard Rating System": a new tool for roads risk assessment
NASA Astrophysics Data System (ADS)
Budetta, P.
2003-04-01
This paper presents a modified method for the analysis of rockfall hazard along roads and motorways. The method is derived from the one developed by Pierson et al. at the Oregon State Highway Division. The Rockfall Hazard Rating System (RHRS) provides a rational way to make informed decisions on where and how to spend construction funds. An exponential scoring graph is used to represent the increase in hazard reflected in the nine categories forming the classification (slope height, ditch effectiveness, average vehicle risk, percent of decision sight distance, roadway width, geological character, quantity of rockfall per event, climate, and rockfall history). The resulting total score contains the essential elements for evaluating the consequences ("cost of failure"). In the modified method, the ratings for the categories "ditch effectiveness", "decision sight distance", "roadway width", "geologic character" and "climate and water circulation" have been made easier to assign and more objective. The main modifications concern the introduction of Romana's Slope Mass Rating, improving the estimation of the geologic characteristics, of the volume of potentially unstable blocks, and of underground water circulation. Other modifications concern the scoring for the categories "decision sight distance" and "road geometry", for which the Italian National Research Council (CNR) standards have been used. The method must be applied in both traffic directions, because the percentage reduction in the decision sight distance greatly affects the results. An application of the method to a 2-km-long section of the Sorrentine road (no. 145) in Southern Italy is presented. High traffic intensity affects the entire section of the road, and rockfalls periodically cause casualties as well as a large amount of damage and traffic interruptions. The method was applied to seven cross-sections of slopes adjacent to the Sorrentine road, and the total final scores range between 275 and 450. For these slopes, the analysis shows that the risk is unacceptable and must be reduced by urgent remedial works. Further applications in other geological environments are welcomed.
Self consistency grouping: a stringent clustering method
2012-01-01
Background: Numerous types of clustering, such as single linkage and K-means, have been widely studied and applied to a variety of scientific problems. However, the existing methods are not readily applicable to problems that demand high stringency. Methods: Our method, self consistency grouping (SCG), yields clusters whose members are closer in rank to each other than to any member outside the cluster. We do not define a distance metric; we use the best known distance metric and presume that it measures the correct distance. SCG does not impose any restriction on the size or the number of the clusters that it finds. The boundaries of clusters are determined by the inconsistencies in the ranks. In addition to the direct implementation that finds the complete structure of the (sub)clusters, we implemented two faster versions. The fastest version is guaranteed to find only the clusters that are not subclusters of any other clusters, and the other version yields the same output as the direct implementation but does so more efficiently. Results: Our tests, in which errors were deliberately introduced into the distance measurements, demonstrated that SCG yields very few false positives. Clustering of protein domain representatives by structural similarity showed that SCG could recover homologous groups with high precision. Conclusions: SCG has potential for finding biological relationships under stringent conditions. PMID:23320864
De Geuser, F; Lefebvre, W
2011-03-01
In this study, we propose a fast automatic method providing the matrix concentration in an atom probe tomography (APT) data set containing two phases or more. The principle of this method relies on the calculation of the relative amount of isolated solute atoms (i.e., not surrounded by a similar solute atom) as a function of a distance d in the APT reconstruction. Simulated data sets have been generated to test the robustness of this new tool and demonstrate that rapid and reproducible results can be obtained without the need of any user input parameter. The method has then been successfully applied to a ternary Al-Zn-Mg alloy containing a fine dispersion of hardening precipitates. The relevance of this method for direct estimation of matrix concentration is discussed and compared with the existing methodologies. Copyright © 2010 Wiley-Liss, Inc.
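The core statistic is simple to state in code: the fraction of solute atoms with no other solute atom within a distance d. A brute sketch using a k-d tree (illustrative, not the authors' implementation):

```python
import numpy as np
from scipy.spatial import cKDTree

def isolated_fraction(solute_xyz, d):
    """Fraction of solute atoms having no other solute atom within
    distance d; swept over d, this separates the dilute matrix
    (mostly isolated atoms) from clustered/precipitated solute."""
    tree = cKDTree(solute_xyz)
    neighbors = tree.query_ball_point(solute_xyz, r=d)
    # each atom finds itself, so "isolated" means exactly one hit
    return float(np.mean([len(nb) == 1 for nb in neighbors]))
```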
Conflict management based on belief function entropy in sensor fusion.
Yuan, Kaijuan; Xiao, Fuyuan; Fei, Liguo; Kang, Bingyi; Deng, Yong
2016-01-01
Wireless sensor networks play an important role in intelligent navigation. They incorporate a group of sensors to overcome the limitations of a single detection system. Dempster-Shafer evidence theory can combine the sensor data of a wireless sensor network by data fusion, which contributes to improving the accuracy and reliability of the detection system. However, because the sensor data come from different sources, there may be conflict among them under uncertain environments. Thus, this paper proposes a new method combining Deng entropy and evidence distance to address this issue. First, Deng entropy is adopted to measure the uncertain information. Then, evidence distance is applied to measure the conflict degree. The new method can cope with conflict effectively and improve the accuracy and reliability of the detection system. An example is given to show the efficiency of the new method, and the result is compared with those of existing methods.
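Deng entropy itself is compact enough to sketch; the code below uses the published formula E(m) = -Σ m(A) log2[m(A)/(2^|A|-1)] over focal elements A (the example BPA is made up for illustration):

```python
import math

def deng_entropy(bpa):
    """Deng entropy of a basic probability assignment; keys are
    frozensets (focal elements), values are masses summing to 1."""
    e = 0.0
    for focal, mass in bpa.items():
        if mass > 0:
            e -= mass * math.log2(mass / (2 ** len(focal) - 1))
    return e

# illustrative BPA over the frame {a, b}
m = {frozenset({"a"}): 0.6, frozenset({"a", "b"}): 0.4}
print(deng_entropy(m))
```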
Performance prediction of electrohydrodynamic thrusters by the perturbation method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shibata, H., E-mail: shibata@daedalus.k.u-tokyo.ac.jp; Watanabe, Y.; Suzuki, K.
2016-05-15
In this paper, we present a novel method for analyzing electrohydrodynamic (EHD) thrusters. The method is based on a perturbation technique applied to a set of drift-diffusion equations, similar to the one introduced in our previous study on estimating breakdown voltage. The thrust-to-current ratio is generalized to represent the performance of EHD thrusters. We have compared the thrust-to-current ratio obtained theoretically with that obtained from the proposed method under atmospheric air conditions, and we have obtained good quantitative agreement. Also, we have conducted a numerical simulation in more complex thruster geometries, such as the dual-stage thruster developed by Masuyama and Barrett [Proc. R. Soc. A 469, 20120623 (2013)]. We quantitatively clarify the fact that if the magnitude of a third electrode voltage is low, the effective gap distance shortens, whereas if the magnitude of the third electrode voltage is sufficiently high, the effective gap distance lengthens.
Genetic structure of populations and differentiation in forest trees
Raymond P. Guries; F. Thomas Ledig
1981-01-01
Electrophoretic techniques permit population biologists to analyze genetic structure of natural populations by using large numbers of allozyme loci. Several methods of analysis have been applied to allozyme data, including chi-square contingency tests, F-statistics, and genetic distance. This paper compares such statistics for pitch pine (Pinus rigida...
Collaborative distance learning: Developing an online learning community
NASA Astrophysics Data System (ADS)
Stoytcheva, Maria
2017-12-01
The method of collaborative distance learning has been applied for years in a number of distance learning courses, but relatively rarely in foreign language learning. The context of this research is a hybrid distance learning course of French for specific purposes, delivered through the platform UNIV-RcT (Strasbourg University), which combines collaborative activities for the realization of a common problem-solving task online. The study focuses on two aspects: the online interactions carried out in small, tutored groups and the process of community building online. By analyzing the learners' perceptions of community and collaborative learning, we have tried to understand the process of building and maintaining an online learning community, and to see to what extent collaborative distance learning contributes to meeting the competence expectations at the end of the course. The analysis of the results allows us to distinguish the advantages and limitations of this type of e-learning and thus to evaluate its pertinence.
NASA Astrophysics Data System (ADS)
Pisani, Marco; Astrua, Milena; Zucco, Massimo
2018-02-01
We present a method to measure the temperature along the path of an optical interferometer based on the propagation of acoustic waves. It exploits the high sensitivity of the speed of sound to air temperature. In particular, it takes advantage of a technique in which the generation of acoustic waves is synchronous with the amplitude modulation of a laser source. A photodetector converts the laser light into an electronic signal used as a reference, while the incoming acoustic waves are focused on a microphone and generate the measurement signal. Under this condition, the phase difference between the two signals depends essentially on the temperature of the air volume interposed between the sources and the receivers. A comparison with traditional temperature sensors highlighted the limit of the latter in the case of fast temperature variations, and the advantage of a measurement integrated along the optical path over a sampled measurement. The capability of the acoustic method to compensate interferometric distance measurements for air temperature variations has been demonstrated at the level of 0.1 °C, corresponding to 10^-7 in the refractive index of air. We applied the method indoors for distances up to 27 m, outdoors at 78 m, and finally tested the acoustic thermometer over a distance of 182 m.
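The underlying physics is the first-order speed-of-sound law; inverting it gives the path-averaged temperature. The sketch below shows the principle only (the instrument works on the phase of modulated waves, and the 331.3 + 0.606 T approximation assumes dry air):

```python
def path_mean_temperature_c(path_m, time_of_flight_s):
    """Invert c = 331.3 + 0.606 * T (m/s, T in deg C, dry air) to get
    the temperature averaged over the acoustic path."""
    c = path_m / time_of_flight_s
    return (c - 331.3) / 0.606

# e.g. 27 m traversed in 78.07 ms -> c ~ 345.8 m/s -> T ~ 24 deg C
```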
NASA Astrophysics Data System (ADS)
Worthy, Johnny L.; Holzinger, Marcus J.; Scheeres, Daniel J.
2018-06-01
The observation-to-observation measurement association problem for dynamical systems can be addressed by determining whether the uncertain admissible regions produced from each observation have one or more points of intersection in state space. An observation association method is developed which uses an optimization-based approach to identify local Mahalanobis distance minima in state space between two uncertain admissible regions. A binary hypothesis test with a selected false alarm rate is used to assess the probability that an intersection exists at the point(s) of minimum distance. The systemic uncertainties, such as measurement uncertainties, timing errors, and other parameter errors, define a distribution about a state estimate located at each local Mahalanobis distance minimum. If local minima do not exist, then the observations are not associated. The proposed method uses an optimization approach defined on a reduced-dimension state space to reduce the computational load of the algorithm. The efficacy and efficiency of the proposed method are demonstrated on observation data collected from the Georgia Tech Space Object Research Telescope.
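A stripped-down version of the distance-minimization step, with Gaussian surrogates standing in for the uncertain admissible regions (a simplification; the paper's regions are not Gaussian and are optimized on a reduced-dimension space):

```python
import numpy as np
from scipy.optimize import minimize

def min_mahalanobis_point(mu1, P1, mu2, P2, x0):
    """Find the state x minimizing the summed squared Mahalanobis
    distance to two uncertain regions; the minimum value can feed a
    chi-square-style hypothesis test for association."""
    P1i, P2i = np.linalg.inv(P1), np.linalg.inv(P2)

    def cost(x):
        d1, d2 = x - mu1, x - mu2
        return d1 @ P1i @ d1 + d2 @ P2i @ d2

    res = minimize(cost, x0)
    return res.x, res.fun
```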
Probing interferometric parallax with interplanetary spacecraft
NASA Astrophysics Data System (ADS)
Rodeghiero, G.; Gini, F.; Marchili, N.; Jain, P.; Ralston, J. P.; Dallacasa, D.; Naletto, G.; Possenti, A.; Barbieri, C.; Franceschini, A.; Zampieri, L.
2017-07-01
We describe an experimental scenario for testing a novel method to measure the distance and proper motion of astronomical sources. The method is based on multi-epoch observations of amplitude or intensity correlations between separate receiving systems. This technique is called interferometric parallax, and it efficiently exploits phase information that has traditionally been overlooked. The test case we discuss combines amplitude correlations of signals from deep-space interplanetary spacecraft with those from distant galactic and extragalactic radio sources, with the goal of estimating the interplanetary spacecraft distance. Interferometric parallax relies on the detection of wavefront curvature effects in signals collected by pairs of separate receiving systems. The method shows promising potential over current techniques when the target is unresolved from the background reference sources. Developments in this field might lead to the construction of an independent, geometrical cosmic distance ladder using a dedicated project and future-generation instruments. We present a conceptual overview supported by numerical estimates of its performance applied to a spacecraft orbiting in the Solar System. Simulations support the feasibility of measurements with a simple and time-saving observational scheme using current facilities.
NASA Astrophysics Data System (ADS)
Toma, G.; Apel, W. D.; Arteaga, J. C.; Bekk, K.; Bertaina, M.; Blümer, J.; Bozdog, H.; Brancus, I. M.; Buchholz, P.; Cantoni, E.; Chiavassa, A.; Cossavella, F.; Daumiller, K.; de Souza, V.; di Pierro, F.; Doll, P.; Engel, R.; Engler, J.; Finger, M.; Fuhrmann, D.; Ghia, P. L.; Gils, H. J.; Glasstetter, R.; Grupen, C.; Haungs, A.; Heck, D.; Hörandel, J. R.; Huege, T.; Isar, P. G.; Kampert, K.-H.; Kang, D.; Kickelbick, D.; Klages, H. O.; Link, K.; Łuczak, P.; Ludwig, M.; Mathes, H. J.; Mayer, H. J.; Melissas, M.; Milke, J.; Mitrica, B.; Morello, C.; Navarra, G.; Nehls, S.; Oehlschläger, J.; Ostapchenko, S.; Over, S.; Palmieri, N.; Petcu, M.; Pierog, T.; Rebel, H.; Roth, M.; Schieler, H.; Schröder, F.; Sima, O.; Trinchero, G. C.; Ulrich, H.; Weindl, A.; Wochele, J.; Wommer, M.; Zabierowski, J.
2010-11-01
Previous EAS investigations have shown that, for a fixed primary energy, the charged particle density becomes independent of the primary mass at certain (fixed) distances from the shower core. This feature can be used as an estimator for the primary energy. We present results on the reconstruction of the primary energy spectrum of cosmic rays from the experimentally recorded S(500) observable (the density of charged particles at 500 m distance from the shower core) using the KASCADE-Grande detector array. The KASCADE-Grande experiment is hosted by the Karlsruhe Institute of Technology-Campus North, Karlsruhe, Germany, and operated by an international collaboration. The constant intensity cut (CIC) method is applied to evaluate, and correct for, the attenuation of the S(500) observable with zenith angle. A calibration of S(500) values against primary energy was worked out by simulations and applied to the data to obtain the primary energy spectrum (in the energy range log10[E0/GeV] ∈ [7.5, 9]). The systematic uncertainties induced by different sources are considered. In addition, a correction based on a response matrix is applied to account for the effects of shower-to-shower fluctuations on the spectral index of the reconstructed energy spectrum.
Modeling abundance using hierarchical distance sampling
Royle, Andy; Kery, Marc
2016-01-01
In this chapter, we provide an introduction to classical distance sampling ideas for point and line transect data, and for continuous and binned distance data. We introduce the conditional and the full likelihood, and we discuss Bayesian analysis of these models in BUGS using the idea of data augmentation, which we discussed in Chapter 7. We then extend the basic ideas to the problem of hierarchical distance sampling (HDS), where we have multiple point or transect sample units in space (or possibly in time). The benefit of HDS in practice is that it allows us to directly model spatial variation in population size among these sample units. This is a preeminent concern of most field studies that use distance sampling methods, but it is not a problem that has received much attention in the literature. We show how to analyze HDS models in both the unmarked package and in the BUGS language for point and line transects, and for continuous and binned distance data. We provide a case study of HDS applied to a survey of the island scrub-jay on Santa Cruz Island, California.
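A minimal non-hierarchical building block, half-normal line-transect distance sampling with truncation at w, may help fix ideas before the HDS extension (a sketch under standard assumptions; not the unmarked or BUGS code the chapter uses):

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

def halfnormal_abundance(perp_dists, w):
    """MLE of the half-normal scale sigma for the detection function
    g(x) = exp(-x^2 / (2 sigma^2)) truncated at w, plus a
    Horvitz-Thompson-style abundance estimate for the covered strip."""
    d = np.asarray(perp_dists, float)
    n = len(d)

    def nll(log_s):  # negative log-likelihood of detected distances
        s = np.exp(log_s)
        mu = s * np.sqrt(2 * np.pi) * (norm.cdf(w / s) - 0.5)  # int_0^w g
        return 0.5 * np.sum((d / s) ** 2) + n * np.log(mu)

    opt = minimize_scalar(nll, bounds=(np.log(w) - 6, np.log(w) + 2),
                          method="bounded")
    s = np.exp(opt.x)
    p_bar = s * np.sqrt(2 * np.pi) * (norm.cdf(w / s) - 0.5) / w
    return s, n / p_bar  # (sigma_hat, N_hat in the covered area)
```

HDS then models variation in the site-level N across sample units, which is what links detection to spatial abundance modeling.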
Deep Multimodal Distance Metric Learning Using Click Constraints for Image Ranking.
Yu, Jun; Yang, Xiaokang; Gao, Fei; Tao, Dacheng
2017-12-01
How do we retrieve images accurately? And how do we rank a group of images precisely and efficiently for specific queries? These problems are critical for researchers and engineers building a novel image search engine. First, it is important to obtain an appropriate description that effectively represents the images. In this paper, multimodal features are considered for describing images. An image's unique properties are reflected by visual features, which are correlated with each other. However, semantic gaps always exist between an image's visual features and its semantics. Therefore, we utilize click features to reduce the semantic gap. The second key issue is learning an appropriate distance metric to combine these multimodal features. This paper develops a novel deep multimodal distance metric learning (Deep-MDML) method. A structured ranking model is adopted to utilize both visual and click features in distance metric learning (DML). Specifically, images and their related ranking results are first collected to form the training set, and multimodal features, including click and visual features, are collected for these images. Next, a group of autoencoders is applied to obtain an initial distance metric in the different visual spaces, and an MDML method is used to assign optimal weights to the different modalities. We then use alternating optimization to train the ranking model, which is used to rank new queries with click features. Compared with existing image ranking methods, the proposed method adopts a new ranking model that uses multimodal features, including click features and visual features, in DML. We conducted experiments analyzing the proposed Deep-MDML on two benchmark data sets, and the results validate the effectiveness of the method.
Starosta, K; Dewald, A; Dunomes, A; Adrich, P; Amthor, A M; Baumann, T; Bazin, D; Bowen, M; Brown, B A; Chester, A; Gade, A; Galaviz, D; Glasmacher, T; Ginter, T; Hausmann, M; Horoi, M; Jolie, J; Melon, B; Miller, D; Moeller, V; Norris, R P; Pissulla, T; Portillo, M; Rother, W; Shimbara, Y; Stolz, A; Vaman, C; Voss, P; Weisshaar, D; Zelevinsky, V
2007-07-27
Transition rate measurements are reported for the 2(1)+ and 2(2)+ states in N=Z 64Ge. The experimental results are in excellent agreement with large-scale shell-model calculations applying the recently developed GXPF1A interactions. The measurement was done using the recoil distance method (RDM) and a unique combination of state-of-the-art instruments at the National Superconducting Cyclotron Laboratory (NSCL). States of interest were populated via an intermediate-energy single-neutron knockout reaction. RDM studies of knockout and fragmentation reaction products hold the promise of reaching far from stability and providing lifetime information for excited states in a wide range of nuclei.
Complex amplitude reconstruction by iterative amplitude-phase retrieval algorithm with reference
NASA Astrophysics Data System (ADS)
Shen, Cheng; Guo, Cheng; Tan, Jiubin; Liu, Shutian; Liu, Zhengjun
2018-06-01
Multi-image iterative phase retrieval methods have been successfully applied in many research fields thanks to their simple but efficient implementation. However, there is a mismatch between the accuracy of measuring the first, long imaging distance and that of the subsequent intervals. In this paper, an amplitude-phase retrieval algorithm with reference is put forward, requiring no additional measurements or a priori knowledge. It eliminates the need to measure the first imaging distance. With a designed update formula, it significantly raises the convergence speed and the reconstruction fidelity, especially in phase retrieval. Its superiority over the original amplitude-phase retrieval (APR) method is validated by numerical analysis and experiments. Furthermore, it provides a conceptual design for a compact holographic image sensor, which can achieve numerical refocusing easily.
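The ancestor of such multi-plane schemes is the two-plane Gerchberg-Saxton loop, sketched below for orientation (the paper's APR-with-reference algorithm and its update formula differ):

```python
import numpy as np

def gerchberg_saxton(amp_obj, amp_fourier, n_iter=200, seed=0):
    """Recover a complex field whose amplitude is amp_obj in one plane
    and amp_fourier in its Fourier-conjugate plane, by alternately
    enforcing the two measured amplitudes."""
    rng = np.random.default_rng(seed)
    field = amp_obj * np.exp(1j * 2 * np.pi * rng.random(amp_obj.shape))
    for _ in range(n_iter):
        F = np.fft.fft2(field)
        F = amp_fourier * np.exp(1j * np.angle(F))      # Fourier-plane constraint
        field = np.fft.ifft2(F)
        field = amp_obj * np.exp(1j * np.angle(field))  # object-plane constraint
    return field
```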
Normalized distance aggregation of discriminative features for person reidentification
NASA Astrophysics Data System (ADS)
Hou, Li; Han, Kang; Wan, Wanggen; Hwang, Jenq-Neng; Yao, Haiyan
2018-03-01
We propose an effective person reidentification method based on normalized distance aggregation of discriminative features. Our framework is built on the integration of three high-performance discriminative feature extraction models, local maximal occurrence (LOMO), feature fusion net (FFN), and a concatenation of LOMO and FFN called LOMO-FFN, through two fast, discriminative metric learning models: cross-view quadratic discriminant analysis (XQDA) and large-scale similarity learning (LSSL). More specifically, we first represent all the cross-view person images using LOMO, FFN, and LOMO-FFN, respectively, and then apply each extracted feature representation to train XQDA and LSSL, respectively, to obtain optimized individual cross-view distance metrics. Finally, cross-view person matching is computed as the sum of the optimized individual cross-view distances after min-max normalization. Experimental results show the effectiveness of the proposed algorithm on three challenging datasets (VIPeR, PRID450s, and CUHK01).
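The aggregation step amounts to summing independently min-max-normalized distance scores; a sketch (variable names illustrative):

```python
import numpy as np

def minmax(d):
    """Min-max normalize a vector of distances to [0, 1]."""
    d = np.asarray(d, float)
    return (d - d.min()) / (d.max() - d.min() + 1e-12)

def aggregate_distances(distance_lists):
    """Sum min-max-normalized gallery distances produced by the
    individual feature/metric combinations (e.g. LOMO+XQDA,
    FFN+XQDA, LOMO-FFN+LSSL); smaller = better match."""
    return sum(minmax(d) for d in distance_lists)

# gallery ranking for one probe:
# ranking = np.argsort(aggregate_distances([d1, d2, d3]))
```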
Real-time stop sign detection and distance estimation using a single camera
NASA Astrophysics Data System (ADS)
Wang, Wenpeng; Su, Yuxuan; Cheng, Ming
2018-04-01
The rapid development of driver assistance systems has made driving much easier than before. To increase safety on board, a method is proposed to detect STOP signs and estimate their distance using a single camera. For STOP sign detection, an LBP-cascade classifier is applied to identify the sign in the image, and distance estimation is based on the principle of pinhole imaging. A road test was conducted using a detection system built with a CMOS camera and software developed in Python with the OpenCV library. Results show that the proposed system reaches a detection accuracy of at most 97.6% at 10 m and at least 95.0% at 20 m, with a maximum error of 5% in distance estimation. The results indicate that the system is effective and has the potential to be used in both autonomous driving and advanced driver assistance systems.
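The pinhole estimate reduces to similar triangles; a sketch with illustrative numbers (the sign height and focal length below are assumptions, not the paper's calibration):

```python
def stop_sign_distance_m(f_px, sign_height_m, bbox_height_px):
    """Pinhole model: an object of physical height H imaged h pixels
    tall with focal length f (in pixels) lies at Z = f * H / h."""
    return f_px * sign_height_m / bbox_height_px

# e.g. a 0.75 m sign imaged 52 px tall with f = 700 px -> ~10.1 m
print(stop_sign_distance_m(700.0, 0.75, 52.0))
```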
The application of multilayer elastic beam in MEMS safe and arming system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Guozhong, E-mail: liguozhong-bit@bit.edu.cn; Shi, Gengchen; Sui, Li
In this paper, a new approach for a multilayer elastic beam to provide a driving force and driving distance for a MEMS safe and arming system is presented. In particular, this is applied where a monolayer elastic beam cannot provide adequate driving force and driving distance at the same time in a limited space. Compared with thicker elastic beams, the bilayer elastic beam can provide twice the driving force of a monolayer beam to guarantee that MEMS safe and arming systems work reliably without decreasing the driving distance. In this paper, the theoretical analysis, numerical simulation and experimental verification of the multilayer elastic beam are presented. The numerical simulation and experimental results show that the bilayer elastic beam provides 1.8–2 times the driving force of a monolayer beam, demonstrating a method that improves driving force without reducing the driving distance.
Distance-based over-segmentation for single-frame RGB-D images
NASA Astrophysics Data System (ADS)
Fang, Zhuoqun; Wu, Chengdong; Chen, Dongyue; Jia, Tong; Yu, Xiaosheng; Zhang, Shihong; Qi, Erzhao
2017-11-01
Over-segmentation, known as super-pixels, is a widely used preprocessing step in segmentation algorithms. An over-segmentation algorithm partitions an image into regions of perceptually similar pixels, but performs poorly when based only on color images of indoor environments. Fortunately, RGB-D images can improve performance on images of indoor scenes. In order to segment RGB-D images into super-pixels effectively, we propose a novel algorithm, DBOS (Distance-Based Over-Segmentation), which realizes full coverage of the image by super-pixels. DBOS fills the holes in depth images to fully utilize the depth information, and applies a SLIC-like framework for fast running. Additionally, depth features such as plane projection distance are extracted to compute the distance which is the core of SLIC-like frameworks. Experiments on RGB-D images of the NYU Depth V2 dataset demonstrate that DBOS outperforms state-of-the-art methods in quality while maintaining speeds comparable to them.
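DBOS's exact distance formula is not given in the abstract; the sketch below follows the standard SLIC form (color plus compactness-weighted spatial terms) with an added depth term, and the weights are assumptions for illustration:

```python
import numpy as np

def slic_like_distance(pix, center, S, m=10.0, w_depth=1.0):
    """Combined distance for a SLIC-like RGB-D over-segmentation.

    pix/center: dicts with 'lab' (3-vector), 'xy' (2-vector), 'depth' (scalar).
    S: grid interval; m: compactness weight; w_depth: assumed depth weight.
    """
    d_color = np.linalg.norm(pix["lab"] - center["lab"])
    d_space = np.linalg.norm(pix["xy"] - center["xy"])
    d_depth = abs(pix["depth"] - center["depth"])
    return np.sqrt(d_color**2 + (m * d_space / S) ** 2 + (w_depth * d_depth) ** 2)

p = {"lab": np.array([50.0, 10.0, 4.0]), "xy": np.array([12.0, 40.0]), "depth": 1.35}
c = {"lab": np.array([48.0, 12.0, 2.0]), "xy": np.array([15.0, 38.0]), "depth": 1.30}
print(slic_like_distance(p, c, S=20))
```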
Developmental changes in hippocampal shape among preadolescent children.
Lin, Muqing; Fwu, Peter T; Buss, Claudia; Davis, Elysia P; Head, Kevin; Muftuler, L Tugan; Sandman, Curt A; Su, Min-Ying
2013-11-01
It is known that the largest developmental changes in the hippocampus take place during the prenatal period and during the first two years of postnatal life. Few studies have been conducted to address the normal developmental trajectory of the hippocampus during childhood. In this study, shape analysis was applied to study the normally developing hippocampus in a group of 103 typically developing 6- to 10-year-old preadolescent children. The individual brain was normalized to a template, and then the hippocampus was manually segmented and further divided into the head, body, and tail sub-regions. Three different methods were applied for hippocampal shape analysis: radial distance mapping, surface-based template registration using the robust point matching (RPM) algorithm, and volume-based template registration using the Demons algorithm. All three methods show that the older children have bilaterally expanded head segments compared to the younger children. The results analyzed based on radial distance to the centerline were consistent with those analyzed using template-based registration methods. In analyses stratified by sex, it was found that the age-associated anatomical changes were similar in boys and girls, but the age-association was strongest in girls. Total hippocampal volume and sub-regional volumes analyzed using manual segmentation did not show a significant age-association. Our results suggest that shape analysis is sensitive enough to detect sub-regional differences that are not revealed in volumetric analysis. The three methods presented in this study may be applied in future studies to investigate the normal developmental trajectory of the hippocampus in children. They may be further applied to detect early deviations from the normal developmental trajectory in young children for evaluating susceptibility to psychopathological disorders involving the hippocampus. Copyright © 2013 ISDN. Published by Elsevier Ltd. All rights reserved.
Evidence for dwarf stars at D of about 100 kiloparsecs near the Sextans dwarf spheroidal galaxy
NASA Technical Reports Server (NTRS)
Gould, Andrew; Guhathakurta, Puragra; Richstone, Douglas; Flynn, Chris
1992-01-01
A method is presented for detecting individual, metal-poor dwarf stars at distances less than about 150 kpc - a method specifically designed to filter out stars from among the much more numerous faint background field galaxies on the basis of broad-band colors. This technique is applied to two fields at high Galactic latitude, for which there are deep CCD data in four bands ranging from 3600 to 9000 A. The field in Sextans probably contains more than about five dwarf stars with BJ not greater than 25.5. These are consistent with being at a common distance of about 100 kpc and lie about 1.7 deg from the newly discovered dwarf galaxy in Sextans, whose distance is about 85 +/- 10 kpc. The stars lie near the major axis of the galaxy and are near or beyond the tidal radius. The second field, toward the south Galactic pole, may contain up to about five extra-Galactic stars, but these show no evidence for being at a common distance. Possible applications of this type of technique are discussed, and it is shown that even very low surface brightness star clusters or dwarf galaxies may be detected at distances less than about 1 Mpc.
Cao, Ying J; Caffo, Brian S; Fuchs, Edward J; Lee, Linda A; Du, Yong; Li, Liye; Bakshi, Rahul P; Macura, Katarzyna; Khan, Wasif A; Wahl, Richard L; Grohskopf, Lisa A; Hendrix, Craig W
2012-01-01
AIMS We sought to describe quantitatively the distribution of rectally administered gels and seminal fluid surrogates using novel concentration–distance parameters that could be repeated over time. These methods are needed to develop rationally rectal microbicides to target and prevent HIV infection. METHODS Eight subjects were dosed rectally with radiolabelled and gadolinium-labelled gels to simulate microbicide gel and seminal fluid. Rectal doses were given with and without simulated receptive anal intercourse. Twenty-four hour distribution was assessed with indirect single photon emission computed tomography (SPECT)/computed tomography (CT) and magnetic resonance imaging (MRI), and direct assessment via sigmoidoscopic brushes. Concentration–distance curves were generated using an algorithm for fitting SPECT data in three dimensions. Three novel concentration–distance parameters were defined to describe quantitatively the distribution of radiolabels: maximal distance (Dmax), distance at maximal concentration (DCmax) and mean residence distance (Dave). RESULTS The SPECT/CT distribution of microbicide and semen surrogates was similar. Between 1 h and 24 h post dose, the surrogates migrated retrograde in all three parameters (relative to coccygeal level; geometric mean [95% confidence interval]): maximal distance (Dmax), 10 cm (8.6–12) to 18 cm (13–26), distance at maximal concentration (DCmax), 3.8 cm (2.7–5.3) to 4.2 cm (2.8–6.3) and mean residence distance (Dave), 4.3 cm (3.5–5.1) to 7.6 cm (5.3–11). Sigmoidoscopy and MRI correlated only roughly with SPECT/CT. CONCLUSIONS Rectal microbicide surrogates migrated retrograde during the 24 h following dosing. Spatial kinetic parameters estimated using three dimensional curve fitting of distribution data should prove useful for evaluating rectal formulations of drugs for HIV prevention and other indications. PMID:22404308
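The three parameters lend themselves to direct computation from a fitted concentration-distance curve. A minimal sketch, using simple operational definitions that are natural readings of the abstract rather than the authors' exact formulas:

```python
import numpy as np

def distance_parameters(distance_cm, concentration):
    """Compute the three concentration-distance parameters named above.

    Assumed definitions: Dmax is the farthest distance with detectable
    signal (here, above 1% of peak), DCmax the distance of peak
    concentration, and Dave a concentration-weighted mean distance
    (by analogy with mean residence time).
    """
    d = np.asarray(distance_cm, float)
    c = np.asarray(concentration, float)
    d_max = d[c > 0.01 * c.max()].max()
    d_cmax = d[np.argmax(c)]
    d_ave = (c * d).sum() / c.sum()
    return d_max, d_cmax, d_ave

d = np.linspace(0, 20, 41)                # cm from a reference level
c = np.exp(-((d - 4.0) ** 2) / 8.0)       # toy concentration profile
print(distance_parameters(d, c))
```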
Distance majorization and its applications.
Chi, Eric C; Zhou, Hua; Lange, Kenneth
2014-08-01
The problem of minimizing a continuously differentiable convex function over an intersection of closed convex sets is ubiquitous in applied mathematics. It is particularly interesting when it is easy to project onto each separate set, but nontrivial to project onto their intersection. Algorithms based on Newton's method such as the interior point method are viable for small to medium-scale problems. However, modern applications in statistics, engineering, and machine learning are posing problems with potentially tens of thousands of parameters or more. We revisit this convex programming problem and propose an algorithm that scales well with dimensionality. Our proposal is an instance of a sequential unconstrained minimization technique and revolves around three ideas: the majorization-minimization principle, the classical penalty method for constrained optimization, and quasi-Newton acceleration of fixed-point algorithms. The performance of our distance majorization algorithms is illustrated in several applications.
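A minimal sketch of the majorization-minimization core behind distance majorization, under stated assumptions: each squared distance to a constraint set is majorized by the squared distance to the current projection, which makes every update a closed-form weighted average. This illustrates only the penalty/MM idea, not the paper's quasi-Newton acceleration, and the two constraint sets are illustrative:

```python
import numpy as np

def project_ball(x, center, radius):
    v = x - center
    n = np.linalg.norm(v)
    return x if n <= radius else center + radius * v / n

def project_halfspace(x, a, b):
    """Project onto {x : a.x <= b}."""
    viol = a @ x - b
    return x if viol <= 0 else x - viol * a / (a @ a)

# MM iteration for min ||x - y||^2/2 + (rho/2) * sum_i dist(x, C_i)^2:
# each step replaces dist(x, C_i)^2 by ||x - P_i(x_k)||^2, so the
# minimizer is a weighted average of y and the current projections.
y = np.array([3.0, 4.0])
projections = [lambda x: project_ball(x, np.zeros(2), 2.0),
               lambda x: project_halfspace(x, np.array([1.0, -1.0]), 0.0)]
x, rho, m = y.copy(), 10.0, len(projections)
for _ in range(200):
    x = (y + rho * sum(P(x) for P in projections)) / (1.0 + rho * m)
print(x)  # approaches a point near the intersection as rho grows
```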
Efficient visualization of urban spaces
NASA Astrophysics Data System (ADS)
Stamps, A. E.
2012-10-01
This chapter presents a new method for calculating efficiency and applies that method to the issues of selecting simulation media and evaluating the contextual fit of new buildings in urban spaces. The new method is called "meta-analysis". A meta-analytic review of 967 environments indicated that static color simulations are the most efficient media for visualizing urban spaces. For contextual fit, four original experiments are reported on how strongly five factors influence visual appeal of a street: architectural style, trees, height of a new building relative to the heights of existing buildings, setting back a third story, and distance. A meta-analysis of these four experiments and previous findings, covering 461 environments, indicated that architectural style, trees, and height had effects strong enough to warrant implementation, but the effects of setting back third stories and distance were too small to warrant implementation.
Retrieval evaluation and distance learning from perceived similarity between endomicroscopy videos.
André, Barbara; Vercauteren, Tom; Buchner, Anna M; Wallace, Michael B; Ayache, Nicholas
2011-01-01
Evaluating content-based retrieval (CBR) is challenging because it requires an adequate ground-truth. When the available ground-truth is limited to textual metadata such as pathological classes, retrieval results can only be evaluated indirectly, for example in terms of classification performance. In this study, we first present a tool to generate perceived-similarity ground-truth that enables direct evaluation of endomicroscopic video retrieval. This tool uses a four-point Likert scale and collects subjective pairwise similarities perceived by multiple expert observers. We then evaluate against the generated ground-truth a previously developed dense bag-of-visual-words method for endomicroscopic video retrieval. Confirming the results of previous indirect evaluation based on classification, our direct evaluation shows that this method significantly outperforms several other state-of-the-art CBR methods. In a second step, we propose to improve the CBR method by learning an adjusted similarity metric from the perceived-similarity ground-truth. By minimizing a margin-based cost function that differentiates similar and dissimilar video pairs, we learn a weight vector applied to the visual word signatures of videos. Using cross-validation, we demonstrate that the learned similarity distance is significantly better correlated with the perceived similarity than the original visual-word-based distance.
Object-based change detection method using refined Markov random field
NASA Astrophysics Data System (ADS)
Peng, Daifeng; Zhang, Yongjun
2017-01-01
In order to fully consider the local spatial constraints between neighboring objects in object-based change detection (OBCD), an OBCD approach is presented by introducing a refined Markov random field (MRF). First, the images from the two periods are stacked and segmented to produce image objects. Second, object spectral and textural histogram features are extracted, and the G-statistic is implemented to measure the distance among different histogram distributions. Meanwhile, object heterogeneity is calculated by combining spectral and textural histogram distances using adaptive weights. Third, an expectation-maximization algorithm is applied to determine the change category of each object, and the initial change map is then generated. Finally, a refined change map is produced by employing the proposed refined object-based MRF method. Three experiments were conducted and compared with some state-of-the-art unsupervised OBCD methods to evaluate the effectiveness of the proposed method. Experimental results demonstrate that the proposed method obtains the highest accuracy among the methods used in this paper, which confirms its validity and effectiveness in OBCD.
NASA Astrophysics Data System (ADS)
Zhou, Wei-Xing; Sornette, Didier
2007-07-01
We have recently introduced the “thermal optimal path” (TOP) method to investigate the real-time lead-lag structure between two time series. The TOP method consists in searching for a robust noise-averaged optimal path of the distance matrix along which the two time series have the greatest similarity. Here, we generalize the TOP method by introducing a more general definition of distance which takes into account possible regime shifts between positive and negative correlations. This generalization to track possible changes of correlation signs is able to identify possible transitions from one convention (or consensus) to another. Numerical simulations on synthetic time series verify that the new TOP method performs as expected even in the presence of substantial noise. We then apply it to investigate changes of convention in the dependence structure between the historical volatilities of the USA inflation rate and economic growth rate. Several measures show that the new TOP method significantly outperforms standard cross-correlation methods.
Huang, Chao-Chi; Chiu, Yang-Hung; Wen, Chih-Yu
2014-01-01
In a vehicular sensor network (VSN), the key design issue is how to organize vehicles effectively, such that the local network topology can be stabilized quickly. In this work, each vehicle with on-board sensors can be considered a local controller associated with a group of communication members. In order to balance the load among the nodes and govern the local topology change, a group formation scheme using localized criteria is implemented. The proposed distributed topology control method focuses on reducing the rate of group member change and avoiding unnecessary information exchange. Two major phases are sequentially applied to choose the group members of each vehicle using hybrid angle/distance information. The operation of Phase I is based on the concept of the cone-based method, which can select the desired vehicles quickly. Afterwards, the proposed time-slot method is further applied to stabilize the network topology. Given the network structure from Phase I, a routing scheme is presented in Phase II. The network behaviors are explored through simulation and analysis in a variety of scenarios. The results show that the proposed mechanism is a scalable and effective control framework for VSNs. PMID:25350506
Using self-organizing maps to classify humpback whale song units and quantify their similarity.
Allen, Jenny A; Murray, Anita; Noad, Michael J; Dunlop, Rebecca A; Garland, Ellen C
2017-10-01
Classification of vocal signals can be undertaken using a wide variety of qualitative and quantitative techniques. Using east Australian humpback whale song from 2002 to 2014, a subset of vocal signals was acoustically measured and then classified using a Self-Organizing Map (SOM). The SOM created (1) an acoustic dictionary of units representing the song's repertoire, and (2) Cartesian distance measurements among all unit types (SOM nodes). Utilizing the SOM dictionary as a guide, additional song recordings from east Australia were rapidly (manually) transcribed. To assess the similarity in song sequences, the Cartesian distance output from the SOM was applied in Levenshtein distance similarity analyses as a weighting factor to better incorporate unit similarity in the calculation (previously a qualitative process). SOMs provide a more robust and repeatable means of categorizing acoustic signals along with a clear quantitative measurement of sound type similarity based on acoustic features. This method can be utilized for a wide variety of acoustic databases especially those containing very large datasets and can be applied across the vocalization research community to help address concerns surrounding inconsistency in manual classification.
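The weighting idea is straightforward to sketch: a standard Levenshtein dynamic program whose substitution cost is read from a table of normalized Cartesian distances between SOM unit types. The unit names and cost values below are hypothetical:

```python
import numpy as np

def weighted_levenshtein(seq1, seq2, sub_cost):
    """Levenshtein distance whose substitution cost comes from a
    unit-similarity table (here: normalized SOM node distances)."""
    n, m = len(seq1), len(seq2)
    D = np.zeros((n + 1, m + 1))
    D[:, 0] = np.arange(n + 1)
    D[0, :] = np.arange(m + 1)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = min(D[i - 1, j] + 1,                       # deletion
                          D[i, j - 1] + 1,                       # insertion
                          D[i - 1, j - 1] + sub_cost(seq1[i - 1], seq2[j - 1]))
    return D[n, m]

# Hypothetical Cartesian distances between SOM unit types, scaled to [0, 1]
# so that substituting acoustically similar units is cheap.
node_dist = {("A", "A"): 0.0, ("A", "B"): 0.2, ("A", "C"): 0.9,
             ("B", "B"): 0.0, ("B", "C"): 0.8, ("C", "C"): 0.0}
cost = lambda u, v: node_dist.get((u, v), node_dist.get((v, u), 1.0))
print(weighted_levenshtein("AABC", "ABBC", cost))  # 0.2: one cheap substitution
```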
Lyman, Katie J; Keister, Kassiann; Gange, Kara; Mellinger, Christopher D; Hanson, Thomas A
2017-04-01
Limited quantitative, physiological evidence exists regarding the effectiveness of Kinesio® Taping methods, particularly with respect to their potential ability to affect the underlying physiological joint space and structures. To better understand the impact of these techniques, the underlying physiological processes must be investigated in addition to the examination of more subjective measures related to pain in unhealthy tissues. The purpose of this study was to determine whether the Kinesio® Taping Space Correction Method created a significant difference in patellofemoral joint space, as quantified by diagnostic ultrasound. Pre-test/post-test prospective cohort study. Thirty-two participants with bilaterally healthy knees and no past history of surgery took part in the study. For each participant, diagnostic ultrasound was utilized to collect three measurements: the patellofemoral joint space, the distance from the skin to the superficial patella, and the distance from the skin to the patellar tendon. The Kinesio® Taping Space Correction Method was then applied. After a ten-minute waiting period in a non-weight-bearing position, all three measurements were repeated. Each participant served as his or her own control. Paired t tests showed a statistically significant difference (mean difference = 1.1 mm, t(31) = 2.823, p = 0.008, g = .465) between baseline and taped conditions in the space between the posterior surface of the patella and the medial femoral condyle. Neither the distance from the skin to the superficial patella nor the distance from the skin to the patellar tendon increased to a statistically significant degree. The application of the Kinesio® Taping Space Correction Method increases the patellofemoral joint space in healthy adults by increasing the distance between the patella and the medial femoral condyle, though it does not increase the distance from the skin to the superficial patella nor to the patellar tendon. Level of evidence: 3.
An Improved Evidential-IOWA Sensor Data Fusion Approach in Fault Diagnosis
Zhou, Deyun; Zhuang, Miaoyan; Fang, Xueyi; Xie, Chunhe
2017-01-01
As an important tool of information fusion, Dempster–Shafer evidence theory is widely applied in handling uncertain information in fault diagnosis. However, an incorrect result may be obtained if the combined evidence is highly conflicting, which may lead to failure in locating the fault. To deal with this problem, an improved evidential-Induced Ordered Weighted Averaging (IOWA) sensor data fusion approach is proposed in the frame of Dempster–Shafer evidence theory. In the new method, the IOWA operator is used to determine the weight of each sensor data source; in determining the parameter of the IOWA, both the distance of evidence and the belief entropy are taken into consideration. First, based on the global distance of evidence and the global belief entropy, the α value of the IOWA is obtained. Simultaneously, a weight vector is given based on the maximum entropy model. Then, according to the IOWA operator, the evidence is modified before applying Dempster’s combination rule. The proposed method has a better performance in conflict management and fault diagnosis due to the fact that the information volume of each piece of evidence is taken into consideration. A numerical example and a case study in fault diagnosis are presented to show the rationality and efficiency of the proposed method. PMID:28927017
A comparison of different interpolation methods for wind data in Central Asia
NASA Astrophysics Data System (ADS)
Reinhardt, Katja; Samimi, Cyrus
2017-04-01
For the assessment of global climate change and its consequences, the results of computer-based climate models are of central importance. The quality of these results and the validity of the derived forecasts are strongly determined by the quality of the underlying climate data. However, in many parts of the world high-resolution data are not available. This is particularly true for many regions in Central Asia, where the density of climatological stations must often be described as sparse. Given this insufficient data base, the use of statistical methods to improve the resolution of existing climate data is of crucial importance. Only this can provide a substantial data base for a well-founded analysis of past climate changes as well as for a reliable forecast of future climate developments for the particular region. The study presented here shows a comparison of different interpolation methods for the wind components u and v for a region in Central Asia with pronounced topography. The aim of the study is to find out whether there is an optimal interpolation method which can equally be applied to all pressure levels or whether different interpolation methods have to be applied for each pressure level. The European reanalysis data ERA-Interim for the years 1989-2015 are used as input data for the pressure levels of 850 hPa, 500 hPa and 200 hPa. In order to improve the input data, two different interpolation procedures were applied: on the one hand, pure interpolation methods such as inverse distance weighting and ordinary kriging; on the other hand, machine learning algorithms, generalized additive models and regression kriging, considering additional influencing factors, e.g. geopotential and topography. As a result it can be concluded that regression kriging provides the best results for all pressure levels, followed by support vector machines, neural networks and ordinary kriging. Inverse distance weighting showed the worst results.
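Of the pure interpolation methods compared, inverse distance weighting is the simplest to state: each station's value is weighted by an inverse power of its distance to the query point. A minimal sketch with illustrative station coordinates and wind values:

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0):
    """Inverse distance weighting: weights fall off as 1/d**power."""
    d = np.linalg.norm(xy_known - xy_query, axis=1)
    if np.any(d == 0):                      # query coincides with a station
        return float(values[np.argmin(d)])
    w = 1.0 / d**power
    return float(np.sum(w * values) / np.sum(w))

# Toy u-wind components (m/s) at four station locations (km coordinates).
stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
u = np.array([3.2, 4.1, 2.8, 3.9])
print(idw(stations, u, np.array([4.0, 3.0])))
```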
A study of polaritonic transparency in couplers made from excitonic materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, Mahi R.; Racknor, Chris
2015-03-14
We have studied light-matter interaction in quantum dot and exciton-polaritonic coupler hybrid systems. The coupler is made by embedding two slabs of an excitonic material (CdS) into a host excitonic material (ZnO). An ensemble of non-interacting quantum dots is doped into the coupler. The bound exciton-polariton states are calculated in the coupler using the transfer matrix method in the presence of the coupling between the external light (photons) and excitons. These bound exciton-polaritons interact with the excitons present in the quantum dots, and the coupler acts as a reservoir. The Schrödinger equation method has been used to calculate the absorption coefficient in the quantum dots. It is found that when the distance between the two slabs (CdS) is greater than the decay length of the evanescent waves, the absorption spectrum has two peaks and one minimum. The minimum corresponds to a transparent state in the system. However, when the distance between the slabs is smaller than the decay length of the evanescent waves, the absorption spectrum has three peaks and two transparent states. In other words, one transparent state can be switched to two transparent states when the distance between the two layers is modified. This could be achieved by applying stress and strain fields. It is also found that the transparent states can be switched on and off by applying an external control laser field.
Method for joining metal by solid-state bonding
Burkhart, L. Elkin; Fultz, Chester R.; Maulden, Kerry A.
1979-01-01
The present development is directed to a method for joining metal at relatively low temperatures by solid-state bonding. Planar surfaces of the metal workpieces are placed in a parallel abutting relationship with one another. A load is applied to at least one of the workpieces for forcing the workpieces together while one of the workpieces is relatively slowly oscillated in a rotary motion over a distance of about 1°. After a preselected number of oscillations, the rotary motion is terminated and the bond between the abutting surfaces is effected. An additional load may be applied to facilitate the bond after terminating the rotary motion.
Airborne ultrasound applied to anthropometry--physical and technical principles.
Lindström, K; Mauritzson, L; Benoni, G; Willner, S
1983-01-01
Airborne ultrasound has been utilized for remote measurement of distance, direction, size, form, volume and velocity. General anthropometrical measurements are performed with a newly constructed real-time linear array scanner. To make full use of the method, we expect a rapid development of high-frequency ultrasound transducers for use in air.
Reading Students' Minds: Design Assessment in Distance Education
ERIC Educational Resources Information Center
Jones, Derek
2014-01-01
This paper considers the design of assessment for students of design according to behaviourist versus experiential pedagogical approaches, relating these to output-oriented as opposed to process-oriented assessment methods. It is part case study and part recognition of the importance of process in design education and how this might be applied in…
Measuring Disorientation Based on the Needleman-Wunsch Algorithm
ERIC Educational Resources Information Center
Güyer, Tolga; Atasoy, Bilal; Somyürek, Sibel
2015-01-01
This study offers a new method to measure navigation disorientation in web based systems which is powerful learning medium for distance and open education. The Needleman-Wunsch algorithm is used to measure disorientation in a more precise manner. The process combines theoretical and applied knowledge from two previously distinct research areas,…
Methods for measuring populations of small, diurnal forest birds.
D.A. Manuwal; A.B. Carey
1991-01-01
Before a bird population is measured, the objectives of the study should be clearly defined. Important factors to be considered in designing a study are study site selection, plot size or transect length, distance between sampling points, duration of counts, and frequency and timing of sampling. Qualified field personnel are especially important. Assumptions applying...
On the reliability and limitations of the SPAC method with a directional wavefield
NASA Astrophysics Data System (ADS)
Luo, Song; Luo, Yinhe; Zhu, Lupei; Xu, Yixian
2016-03-01
The spatial autocorrelation (SPAC) method is one of the most efficient ways to extract phase velocities of surface waves from ambient seismic noise. Most studies apply the method based on the assumption that the wavefield of ambient noise is diffuse. However, the actual distribution of sources is neither diffuse nor stationary. In this study, we examined the reliability and limitations of the SPAC method with a directional wavefield. We calculated the SPAC coefficients and phase velocities from a directional wavefield for a four-layer model and characterized the limitations of the SPAC. We then applied the SPAC method to real data in Karamay, China. Our results show that, 1) the SPAC method can accurately measure surface wave phase velocities from a square array with a directional wavefield down to a wavelength of twice the shortest interstation distance; and 2) phase velocities obtained from real data by the SPAC method are stable and reliable, which demonstrates that this method can be applied to measure phase velocities in a square array with a directional wavefield.
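The SPAC relation for a vertical-component, azimuthally averaged coefficient is commonly written rho(f, r) = J0(2*pi*f*r / c(f)), so the phase velocity c(f) can be recovered by matching observed coefficients to the Bessel function. A minimal single-frequency sketch (a real analysis fits many frequencies and interstation distances jointly, which also resolves the ambiguity of J0's oscillations):

```python
import numpy as np
from scipy.special import j0

def phase_velocity(f, r, rho_obs, c_grid=np.linspace(100.0, 3000.0, 5801)):
    """Grid-search the phase velocity c that best reproduces one observed
    SPAC coefficient rho_obs at frequency f (Hz) and distance r (m)."""
    misfit = np.abs(j0(2 * np.pi * f * r / c_grid) - rho_obs)
    return c_grid[np.argmin(misfit)]

# Synthetic check: with c = 500 m/s, f = 5 Hz, r = 20 m ...
rho = j0(2 * np.pi * 5.0 * 20.0 / 500.0)
print(phase_velocity(5.0, 20.0, rho))  # recovers ~500 m/s
```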
Buxton, Eric C; De Muth, James E
2013-01-01
Constraints in geography and time require cost efficiencies in professional development for pharmacists. Distance learning, with its growing availability and lower intrinsic costs, will likely become more prevalent. The objective of this nonexperimental, postintervention study was to examine the perceptions of pharmacists attending a continuing education program. One group participated in the live presentation, whereas the second group joined via a simultaneous webcast. After the presentation, both groups were surveyed with identical questions concerning their perceptions of their learning environment, course content, and utility to their work. Comparisons across group responses to the summated scales were conducted through the use of Kruskal-Wallis tests. Analysis of the data showed that both the distance and local groups were demographically similar and that both groups were satisfied with the presentation method, audio and visual quality, and both felt that they would be able to apply what they learned in their practice. However, the local group was significantly more satisfied with the learning experience. Distance learning does provide a viable and more flexible method for pharmacy professional development, but does not yet replace the traditional learning environment in all facets of learner preference. Copyright © 2013 Elsevier Inc. All rights reserved.
Feature selection from a facial image for distinction of sasang constitution.
Koo, Imhoi; Kim, Jong Yeol; Kim, Myoung Geun; Kim, Keun Ho
2009-09-01
Recently, oriental medicine has received attention for providing personalized medicine through consideration of the unique nature and constitution of individual patients. With the eventual goal of globalization, the current trend in oriental medicine research is standardization by adopting western scientific methods, which could represent a scientific revolution. The purpose of this study is to establish methods for finding statistically significant features in a facial image with respect to distinguishing constitution and to show the meaning of those features. From facial photo images, facial elements are analyzed in terms of distances, angles and distance ratios, for which there are 1225, 61 250 and 749 700 features, respectively. Due to the very large number of facial features, it is quite difficult to determine truly meaningful features. We suggest a process for the efficient analysis of facial features including the removal of outliers, control for missing data to guarantee data confidence and calculation of statistical significance by applying ANOVA. We show the statistical properties of the selected features according to different constitutions using the nine distance, 10 angle and 10 distance-ratio features that are finally established. Additionally, the Sasang constitutional meaning of the selected features is shown here.
Investigation of primordial black hole bursts using interplanetary network gamma-ray bursts
Ukwatta, Tilan Niranjan; Hurley, Kevin; MacGibbon, Jane H.; ...
2016-07-25
The detection of a gamma-ray burst (GRB) in the solar neighborhood would have very important implications for GRB phenomenology. The leading theories for cosmological GRBs would not be able to explain such events. The final bursts of evaporating primordial black holes (PBHs), however, would be a natural explanation for local GRBs. We present a novel technique that can constrain the distance to GRBs using detections from widely separated, non-imaging spacecraft. This method can determine the actual distance to the burst if it is local. We applied this method to constrain distances to a sample of 36 short-duration GRBs detected by the Interplanetary Network (IPN) that show observational properties that are expected from PBH evaporations. These bursts have minimum possible distances in the 10^13–10^18 cm (7–10^5 au) range, which are consistent with the expected PBH energetics and with a possible origin in the solar neighborhood, although none of the bursts can be unambiguously demonstrated to be local. Furthermore, assuming that these bursts are real PBH events, we estimate lower limits on the PBH burst evaporation rate in the solar neighborhood.
Optimal regionalization of extreme value distributions for flood estimation
NASA Astrophysics Data System (ADS)
Asadi, Peiman; Engelke, Sebastian; Davison, Anthony C.
2018-01-01
Regionalization methods have long been used to estimate high return levels of river discharges at ungauged locations on a river network. In these methods, discharge measurements from a homogeneous group of similar, gauged, stations are used to estimate high quantiles at a target location that has no observations. The similarity of this group to the ungauged location is measured in terms of a hydrological distance measuring differences in physical and meteorological catchment attributes. We develop a statistical method for estimation of high return levels based on regionalizing the parameters of a generalized extreme value distribution. The group of stations is chosen by optimizing over the attribute weights of the hydrological distance, ensuring similarity and in-group homogeneity. Our method is applied to discharge data from the Rhine basin in Switzerland, and its performance at ungauged locations is compared to that of other regionalization methods. For gauged locations we show how our approach improves the estimation uncertainty for long return periods by combining local measurements with those from the chosen group.
Passage of a star by a massive black hole
DOE Office of Scientific and Technical Information (OSTI.GOV)
Noltenius, R.A.; Katz, J.I.
1982-12-01
We have calculated the effects on a 1 M_sun star of passage by a 10^4 M_sun point mass (black hole) in an initially parabolic orbit with a variety of pericenter distances. Because this problem is three-dimensional, we use the gridless smoothed particle hydrodynamic method of Lucy, Gingold, and Monaghan. The tidal forces are found to induce rotation and radial and nonradial pulsations of the star. The loss of orbital energy to these internal motions leads to capture of the star by the black hole. For small pericenter distances, the outer layers of the star are disrupted, while at still smaller distances, the entire star is destroyed. These results may be applied to some X-ray sources, active galactic nuclei, and quasars.
NASA Astrophysics Data System (ADS)
Durato, M. V.; Albano, A. M.; Rapp, P. E.; Nawang, S. A.
2015-06-01
The validity of ERPs as indices of stable neurophysiological traits is partially dependent on their stability over time. Previous studies on ERP stability, however, have reported diverse stability estimates despite using the same component scoring methods. The present study explores a novel approach to investigating the longitudinal stability of average ERPs: treating the ERP waveform as a time series and then applying Euclidean distance and Kolmogorov-Smirnov analyses to evaluate the similarity or dissimilarity between the ERP time series of different sessions or run pairs. Nonlinear dynamical analyses show that in the absence of a change in medical condition, the average ERPs of healthy human adults are highly longitudinally stable, as evaluated by both the Euclidean distance and the Kolmogorov-Smirnov test.
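Treating two session-average waveforms as time series, both similarity measures can be computed in a few lines. A sketch with synthetic waveforms; applying the KS test to the amplitude distributions, as done here, is an assumption about the study's exact procedure:

```python
import numpy as np
from scipy.stats import ks_2samp

def erp_similarity(wave1, wave2):
    """Euclidean distance between two average-ERP waveforms and a
    two-sample KS test on their amplitude distributions."""
    euclid = float(np.linalg.norm(wave1 - wave2))
    ks_stat, p_value = ks_2samp(wave1, wave2)
    return euclid, ks_stat, p_value

t = np.linspace(0, 0.8, 400)                       # 800 ms epoch
session1 = np.sin(2 * np.pi * 3 * t) * np.exp(-3 * t)
session2 = session1 + np.random.default_rng(1).normal(0, 0.05, t.size)
print(erp_similarity(session1, session2))
```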
An application of Galactic parallax: the distance to the tidal stream GD-1
NASA Astrophysics Data System (ADS)
Eyre, Andy
2010-04-01
We assess the practicality of computing the distance to stellar streams in our Galaxy, using the method of Galactic parallax suggested by Eyre & Binney. We find that the uncertainty in Galactic parallax is dependent upon the specific geometry of the problem in question. In the case of the tidal stream GD-1, the problem geometry indicates that available proper-motion data, with individual accuracy ~4 mas yr^-1, should allow estimation of its distance with about 50 per cent uncertainty. Proper motions accurate to ~1 mas yr^-1, which are expected from the forthcoming Pan-STARRS PS-1 survey, will allow estimation of its distance to about 10 per cent uncertainty. Proper motions from the future Large Synoptic Survey Telescope (LSST) and Gaia projects will be more accurate still, and will allow the parallax for a stream 30 kpc distant to be measured with ~14 per cent uncertainty. We demonstrate the feasibility of the method and show that our uncertainty estimates are accurate by computing Galactic parallax using simulated data for the GD-1 stream. We also apply the method to actual data for the GD-1 stream, published by Koposov, Rix & Hogg. With the exception of one datum, the distances estimated using Galactic parallax match photometric estimates with less than 1 kpc discrepancy. The scatter in the distances recovered using Galactic parallax is very low, suggesting that the proper-motion uncertainty reported by Koposov et al. is in fact overestimated. We conclude that the GD-1 stream is (8 ± 1) kpc distant, on a retrograde orbit inclined 37° to the plane, and that the visible portion of the stream is likely to be near pericentre.
Text Line Detection from Rectangle Traffic Panels of Natural Scene
NASA Astrophysics Data System (ADS)
Wang, Shiyuan; Huang, Linlin; Hu, Jian
2018-01-01
Traffic sign detection and recognition is very important for Intelligent Transportation. Among traffic signs, the traffic panel contains rich information. However, due to low resolution and blur in rectangular traffic panels, it is difficult to extract the characters and symbols. In this paper, we propose a coarse-to-fine method to detect Chinese characters on traffic panels in natural scenes. Given a traffic panel, color quantization is first applied to extract candidate regions of Chinese characters. Second, a multi-stage learning-based filter is applied to discard non-character regions. Third, we aggregate the characters into text lines by a distance metric learning method. Experimental results on real traffic images from Baidu Street View demonstrate the effectiveness of the proposed method.
Multi-resolution analysis for ear recognition using wavelet features
NASA Astrophysics Data System (ADS)
Shoaib, M.; Basit, A.; Faye, I.
2016-11-01
Security is very important, and in order to avoid any physical contact, identification of humans while they are moving is necessary. Ear biometrics is one of the methods by which a person can be identified using surveillance cameras. Various techniques have been proposed to improve ear-based recognition systems. In this work, a feature extraction method for human ear recognition based on wavelet transforms is proposed. The proposed features are the approximation coefficients and specific details of level two after applying various types of wavelet transforms. Different wavelet transforms are applied to find the most suitable wavelet. Minimum Euclidean distance is used as the matching criterion. Results achieved by the proposed method are promising and can be used in a real-time ear recognition system.
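A hedged sketch of this feature-plus-matching pipeline using PyWavelets; which level-two detail band enters the feature vector is an assumption (the horizontal band is used here), and the images are synthetic:

```python
import numpy as np
import pywt

def wavelet_features(image, wavelet="db2"):
    """Level-2 wavelet decomposition; keep the level-2 approximation and,
    as one reading of the abstract, one level-2 detail band."""
    cA2, (cH2, cV2, cD2), _level1_details = pywt.wavedec2(image, wavelet, level=2)
    return np.concatenate([cA2.ravel(), cH2.ravel()])

def match(probe, gallery):
    """Nearest gallery entry under Euclidean distance."""
    return int(np.argmin([np.linalg.norm(probe - g) for g in gallery]))

rng = np.random.default_rng(0)
gallery_imgs = [rng.random((64, 48)) for _ in range(3)]
gallery = [wavelet_features(im) for im in gallery_imgs]
probe = wavelet_features(gallery_imgs[1] + rng.normal(0, 0.01, (64, 48)))
print(match(probe, gallery))  # 1: the noisy copy matches its source
```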
Absolute method of measuring magnetic susceptibility
Thorpe, A.; Senftle, F.E.
1959-01-01
An absolute method of standardization and measurement of the magnetic susceptibility of small samples is presented which can be applied to most techniques based on the Faraday method. The fact that the susceptibility is a function of the area under the curve of sample displacement versus distance of the magnet from the sample offers a simple method of measuring the susceptibility without recourse to a standard sample. Typical results on a few substances are compared with reported values, and an error of less than 2% can be achieved. © 1959 The American Institute of Physics.
Marcinkiewicz, Andrzej; Cybart, Adam; Chromińska-Szosland, Dorota
2002-01-01
The rapid development of science, technology, the economy and society has gone along with wide recognition of the concepts of lifelong education and the learning society. Scientific centres worldwide conduct research on how access to information and multimedia technology could bring about positive changes in our lives, including improvements in education and the learning environment. Human development in conformity with social progress and sustainable development faces a new educational concept of the learning society and open education in the information age, supported by multimedia and data processing technology. Constraints in resource availability for broadening access to education have led to a search for alternative, more time- and cost-effective systems of education. One of them is distance learning, applied with success in many countries. The benefits of distance learning are well proven and can be extended to occupational medicine. Major advantages include: the integration of studies with work experience; flexibility, allowing studies to be matched to work requirements and perceived work and leisure timing; and continuity of career progression. Likewise, in Poland this form of education is becoming more and more popular. Distance education systems have been seen as an investment in human resource development. The vast variety of courses and educational stages makes this modern method of knowledge delivery easily accessible. The experience of the School of Public Health in Łódź in distance learning has shown remarkable benefits of the method, with comparable quality of intramural and distance learning in respect of the knowledge and experience gained by students.
Gu, Dongxiao; Liang, Changyong; Zhao, Huimin
2017-03-01
We present the implementation and application of a case-based reasoning (CBR) system for breast cancer related diagnoses. By retrieving similar cases in a breast cancer decision support system, oncologists can obtain powerful information or knowledge, complementing their own experiential knowledge, in their medical decision making. We observed two problems in applying standard CBR to this context: the abundance of different types of attributes and the difficulty in eliciting appropriate attribute weights from human experts. We therefore used a distance measure named weighted heterogeneous value distance metric, which can better deal with both continuous and discrete attributes simultaneously than the standard Euclidean distance, and a genetic algorithm for learning the attribute weights involved in this distance measure automatically. We evaluated our CBR system in two case studies, related to benign/malignant tumor prediction and secondary cancer prediction, respectively. Weighted heterogeneous value distance metric with genetic algorithm for weight learning outperformed several alternative attribute matching methods and several classification methods by at least 3.4%, reaching 0.938, 0.883, 0.933, and 0.984 in the first case study, and 0.927, 0.842, 0.939, and 0.989 in the second case study, in terms of accuracy, sensitivity×specificity, F measure, and area under the receiver operating characteristic curve, respectively. The evaluation result indicates the potential of CBR in the breast cancer diagnosis domain. Copyright © 2017 Elsevier B.V. All rights reserved.
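The distance measure can be sketched as follows; the attribute names, weights, and VDM table are hypothetical stand-ins for the GA-learned quantities in the paper, and the range normalization for numeric attributes is one common HVDM convention:

```python
import numpy as np

def weighted_hvdm(x, y, weights, numeric, ranges, vdm_tables):
    """Weighted heterogeneous value distance between two cases.

    Numeric attributes use a range-normalized absolute difference;
    discrete ones use a precomputed value difference metric (VDM) table.
    `weights` would come from the genetic algorithm in the paper; here
    they are simply given.
    """
    total = 0.0
    for a, (xa, ya) in enumerate(zip(x, y)):
        if numeric[a]:
            d = abs(xa - ya) / ranges[a]
        else:
            d = vdm_tables[a].get((xa, ya), 1.0)
        total += weights[a] * d * d
    return np.sqrt(total)

# Two toy cases: (tumor_size_mm, margin_type); weights assumed GA-learned.
vdm = [{}, {("smooth", "smooth"): 0.0, ("smooth", "spiculated"): 0.9,
            ("spiculated", "smooth"): 0.9, ("spiculated", "spiculated"): 0.0}]
print(weighted_hvdm((22.0, "smooth"), (30.0, "spiculated"),
                    weights=[0.7, 1.3], numeric=[True, False],
                    ranges=[50.0, None], vdm_tables=vdm))
```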
NASA Astrophysics Data System (ADS)
Wang, Lei; Strehlow, Jan; Rühaak, Jan; Weiler, Florian; Diez, Yago; Gubern-Merida, Albert; Diekmann, Susanne; Laue, Hendrik; Hahn, Horst K.
2015-03-01
In breast cancer screening for high-risk women, follow-up magnetic resonance images (MRI) are acquired with a time interval ranging from several months up to a few years. Prior MRI studies may provide additional clinical value when examining the current one and thus have the potential to increase sensitivity and specificity of screening. To build a spatial correlation between suspicious findings in both current and prior studies, a reliable alignment method between follow-up studies is desirable. However, long time interval, different scanners and imaging protocols, and varying breast compression can result in a large deformation, which challenges the registration process. In this work, we present a fast and robust spatial alignment framework, which combines automated breast segmentation and current-prior registration techniques in a multi-level fashion. First, fully automatic breast segmentation is applied to extract the breast masks that are used to obtain an initial affine transform. Then, a non-rigid registration algorithm using normalized gradient fields as similarity measure together with curvature regularization is applied. A total of 29 subjects and 58 breast MR images were collected for performance assessment. To evaluate the global registration accuracy, the volume overlap and boundary surface distance metrics are calculated, resulting in an average Dice Similarity Coefficient (DSC) of 0.96 and root mean square distance (RMSD) of 1.64 mm. In addition, to measure local registration accuracy, for each subject a radiologist annotated 10 pairs of markers in the current and prior studies representing corresponding anatomical locations. The average distance error of marker pairs dropped from 67.37 mm to 10.86 mm after applying registration.
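The global overlap metric reported above, the Dice Similarity Coefficient, is easy to reproduce for binary segmentation masks; a minimal sketch on toy 2D masks:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks:
    2*|A & B| / (|A| + |B|)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

a = np.zeros((10, 10), int); a[2:8, 2:8] = 1
b = np.zeros((10, 10), int); b[3:9, 3:9] = 1
print(dice(a, b))  # 25 overlapping pixels of 36 + 36 -> ~0.69
```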
An iterative method for the localization of a neutron source in a large box (container)
NASA Astrophysics Data System (ADS)
Dubinski, S.; Presler, O.; Alfassi, Z. B.
2007-12-01
The localization of an unknown neutron source in a bulky box was studied. This can be used for the inspection of cargo, to prevent the smuggling of neutron and α emitters. It is important to localize the source from the outside for safety reasons. Source localization is necessary in order to determine its activity. A previous study showed that, by using six detectors, three on each parallel face of the box (460×420×200 mm^3), the location of the source can be found with an average distance of 4.73 cm between the real source position and the calculated one and a maximal distance of about 9 cm. Accuracy was improved in this work by applying an iteration method based on four fixed detectors and the successive iteration of positioning of an external calibrating source. The initial positioning of the calibrating source is the plane of detectors 1 and 2. This method finds the unknown source location with an average distance of 0.78 cm between the real source position and the calculated one and a maximum distance of 3.66 cm for the same box. For larger boxes, localization without iterations requires an increase in the number of detectors, while localization with iterations requires only an increase in the number of iteration steps. In addition to source localization, two methods for determining the activity of the unknown source were also studied.
Sensor Drift Compensation Algorithm based on PDF Distance Minimization
NASA Astrophysics Data System (ADS)
Kim, Namyong; Byun, Hyung-Gi; Persaud, Krishna C.; Huh, Jeung-Soo
2009-05-01
In this paper, a new unsupervised classification algorithm is introduced for the compensation of sensor drift effects in an odor sensing system using a conducting polymer sensor array. The proposed method continues updating adaptive Radial Basis Function Network (RBFN) weights in the testing phase based on minimizing the Euclidean distance between two probability density functions (PDFs): one of a set of training-phase output data and another of a set of testing-phase output data. The outputs in the testing phase using the fixed weights of the RBFN are significantly dispersed and shifted from each target value, due mostly to the sensor drift effect. In the experimental results, the output data produced by the proposed method are observed to be concentrated significantly closer to their own target values. This indicates that the proposed method can be effectively applied to an improved odor sensing system equipped with the capability of sensor drift compensation.
Estimating unbiased phenological trends by adapting site-occupancy models.
Roth, Tobias; Strebel, Nicolas; Amrhein, Valentin
2014-08-01
As a response to climate warming, many animals and plants have been found to shift phenologies, such as appearance in spring or timing of reproduction. However, traditional measures for shifts in phenology that are based on observational data likely are biased due to a large influence of population size, observational effort, starting date of a survey, or other causes that may affect the probability of detecting a species. Understanding phenological responses of species to climate change, however, requires a robust measure that could be compared among studies and study years. Here, we developed a new method for estimating arrival and departure dates based on site-occupancy models. Using simulated data, we show that our method provided virtually unbiased estimates of phenological events even if detection probability or the number of sites occupied by the species is changing over time. To illustrate the flexibility of our method, we analyzed spring arrival of two long-distance migrant songbirds and the length of the flight period of two butterfly species, using data from a long-term biodiversity monitoring program in Switzerland. In contrast to many birds that migrate short distances, the two long-distance migrant songbirds tended to postpone average spring arrival by about 0.5 days per year between 1995 and 2012. Furthermore, the flight period of the short-distance-flying butterfly species apparently became even shorter over the study period, while the flight period of the longer-distance-flying butterfly species remained relatively stable. Our method could be applied to temporally and spatially extensive data from a wide range of monitoring programs and citizen science projects, to help unravel how species and communities respond to global warming.
A class-based link prediction using Distance Dependent Chinese Restaurant Process
NASA Astrophysics Data System (ADS)
Andalib, Azam; Babamir, Seyed Morteza
2016-08-01
One of the important tasks in relational data analysis is link prediction which has been successfully applied on many applications such as bioinformatics, information retrieval, etc. The link prediction is defined as predicting the existence or absence of edges between nodes of a network. In this paper, we propose a novel method for link prediction based on Distance Dependent Chinese Restaurant Process (DDCRP) model which enables us to utilize the information of the topological structure of the network such as shortest path and connectivity of the nodes. We also propose a new Gibbs sampling algorithm for computing the posterior distribution of the hidden variables based on the training data. Experimental results on three real-world datasets show the superiority of the proposed method over other probabilistic models for link prediction problem.
A method to improve the range resolution in stepped frequency continuous wave radar
NASA Astrophysics Data System (ADS)
Kaczmarek, Paweł
2018-04-01
In the paper, one of the high range resolution methods, Aperture Sampling (AS), is analysed. Unlike MUSIC-based techniques, it proved to be very efficient in terms of achieving an unambiguous synthetic range profile for an ultra-wideband stepped frequency continuous wave radar. Assuming that the minimal distance required to separate two targets in depth (distance) corresponds to the -3 dB width of the received echo, AS provided a 30.8% improvement in range resolution in the analysed scenario when compared to the results of applying the IFFT. The output data are far superior, in terms of both improved range resolution and reduced side lobe level, to those of the Inverse Fourier Transform typically used in this area. Furthermore, the method does not require prior knowledge or an estimate of the number of targets to be detected in a given scan.
NASA Astrophysics Data System (ADS)
Zhang, Ren-jie; Xu, Shuai; Shi, Jia-dong; Ma, Wen-chao; Ye, Liu
2015-11-01
In this paper, we study the quantum phase transition (QPT) in the anisotropic spin XXZ model by exploiting the quantum renormalization group (QRG) method. The novelty is that we adopt a new approach, called trace distance discord, to indicate the quantum correlation of the system. A QPT is observed in the current system after several iterations of the renormalization. Consequently, this opens the possibility of investigating QPTs in the geometric discord setting. While the anisotropy suppresses the correlation by favoring the alignment of spins, the DM interaction restores the spoiled correlation via the creation of quantum fluctuations. We also apply the quantum renormalization group method to probe the thermodynamic limit of the model and the emergence of nonanalytic behavior of the correlation.
Optimizing a desirable fare structure for a bus-subway corridor
Liu, Bing-Zheng; Ge, Ying-En; Cao, Kai; Jiang, Xi; Meng, Lingyun; Liu, Ding; Gao, Yunfeng
2017-01-01
This paper aims to optimize a desirable fare structure for the public transit service along a bus-subway corridor with the consideration of those factors related to equity in trip, including travel distance and comfort level. The travel distance factor is represented by the distance-based fare strategy, which is an existing differential strategy. The comfort level one is considered in the area-based fare strategy which is a new differential strategy defined in this paper. Both factors are referred to by the combined fare strategy which is composed of distance-based and area-based fare strategies. The flat fare strategy is applied to determine a reference level of social welfare and obtain the general passenger flow along transit lines, which is used to divide areas or zones along the corridor. This problem is formulated as a bi-level program, of which the upper level maximizes the social welfare and the lower level capturing traveler choice behavior is a variable-demand stochastic user equilibrium assignment model. A genetic algorithm is applied to solve the bi-level program while the method of successive averages is adopted to solve the lower-level model. A series of numerical experiments are carried out to illustrate the performance of the models and solution methods. Numerical results indicate that all three differential fare strategies play a better role in enhancing the social welfare than the flat fare strategy and that the fare structure under the combined fare strategy generates the highest social welfare and the largest resulting passenger demand, which implies that the more equity factors a differential fare strategy involves the more desirable fare structure the strategy has. PMID:28981508
New approach for logo recognition
NASA Astrophysics Data System (ADS)
Chen, Jingying; Leung, Maylor K. H.; Gao, Yongsheng
2000-03-01
The problem of logo recognition is of great interest in the document domain, especially for document databases. By recognizing the logo we obtain semantic information about the document, which may be useful in deciding whether or not to analyze the textual components. In order to develop a logo recognition method that is efficient to compute and produces intuitively reasonable results, we investigate the Line Segment Hausdorff Distance for logo recognition. Researchers apply the Hausdorff Distance to measure the dissimilarity of two point sets; it has been extended to match two sets of line segments. The new approach has the advantage of incorporating structural and spatial information in computing the dissimilarity. The added information can conceptually provide more and better distinctive capability for recognition. The proposed technique has been applied to line segments of logos with encouraging results that support the concept experimentally. This may suggest a new way for logo recognition.
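For background, here is a minimal sketch of the classical point-set Hausdorff distance that the line-segment variant builds on; the segment version adds structural and orientation terms not shown here. Python with NumPy is assumed, and the function name is illustrative:

```python
import numpy as np

def hausdorff(A, B):
    # A (n x 2) and B (m x 2) are point sets, e.g. sampled from logo contours.
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # all pairwise distances
    h_ab = d.min(axis=1).max()  # directed Hausdorff distance A -> B
    h_ba = d.min(axis=0).max()  # directed Hausdorff distance B -> A
    return max(h_ab, h_ba)      # undirected (symmetric) Hausdorff distance
```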
Realizing privacy preserving genome-wide association studies.
Simmons, Sean; Berger, Bonnie
2016-05-01
As genomics moves into the clinic, there has been much interest in using this medical data for research. At the same time, the use of such data raises many privacy concerns. These circumstances have led to the development of various methods to perform genome-wide association studies (GWAS) on patient records while ensuring privacy. In particular, there has been growing interest in applying differentially private techniques to this challenge. Unfortunately, until now all methods for finding high-scoring SNPs in a differentially private manner have had major drawbacks in terms of either accuracy or computational efficiency. Here we overcome these limitations with a substantially modified version of the neighbor distance method for performing differentially private GWAS, and thus are able to produce a more viable mechanism. Specifically, we use input perturbation and an adaptive boundary method to overcome accuracy issues. We also design and implement a convex analysis based algorithm to calculate the neighbor distance for each SNP in constant time, overcoming the major computational bottleneck in the neighbor distance method. It is our hope that methods such as ours will pave the way for more widespread use of patient data in biomedical research. A Python implementation is available at http://groups.csail.mit.edu/cb/DiffPriv/. Supplementary data are available at Bioinformatics online.
Secure satellite communication using multi-photon tolerant quantum communication protocol
NASA Astrophysics Data System (ADS)
Darunkar, Bhagyashri; Punekar, Nikhil; Verma, Pramode K.
2015-09-01
This paper proposes and analyzes the potential of a multi-photon tolerant quantum communication protocol to secure satellite communication. Quantum cryptography is the only known unconditionally secure method for securing satellite communication. A number of recent experiments have shown the feasibility of satellite-aided global quantum key distribution (QKD) using different methods, such as entangled photon pairs, decoy state methods, and entanglement swapping. The use of single photons in these methods restricts the distance and speed over which quantum cryptography can be applied. Contemporary quantum cryptography protocols like BB84 and its variants suffer from the limitation of reaching only Low Earth Orbit (LEO) distances at data rates of a few kilobits per second. This makes it impossible to develop a general satellite-based secure global communication network using the existing protocols. The method proposed in this paper allows secure communication at the altitudes of Medium Earth Orbit (MEO) and Geosynchronous Earth Orbit (GEO) satellites. The benefits of the proposed method are twofold: first, it enables the realization of a secure global communication network based on satellites, and second, it provides unconditional security for satellite networks at GEO altitudes. The multi-photon approach discussed in this paper ameliorates the distance and speed issues associated with quantum cryptography through the use of contemporary laser communication (lasercom) devices. This approach can be seen as a step towards global quantum communication.
Warner, Graham C.; Helmer, Karl G.
2018-01-01
As the sharing of data is mandated by funding agencies and journals, reuse of data has become more prevalent. It becomes imperative, therefore, to develop methods to characterize the similarity of data. While users can group data based on the acquisition parameters stored in the file headers, this gives no indication of whether a file can be combined with other data without increasing the variance in the data set. Methods have been implemented that characterize the signal-to-noise ratio or identify signal drop-outs in the raw image files, but potential users of data often have access only to calculated metric maps, and these are more difficult to characterize and compare. Here we describe a histogram-distance-based method applied to diffusion metric maps of fractional anisotropy and mean diffusivity that were generated using data extracted from a repository of clinically acquired MRI data. We describe the generation of the data set, the pitfalls specific to diffusion MRI data, and the results of the histogram distance analysis. We find that, in general, data from GE scanners are less similar than are data from Siemens scanners. We also find that the distribution of distance metric values is not Gaussian at any selection of the acquisition parameters considered here (field strength, number of gradient directions, b-value, and vendor). PMID:29568257
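As one concrete illustration of a histogram distance between two metric maps, the sketch below bins the foreground FA values of each map and compares the normalized histograms with the chi-square distance; the paper's specific choice of histogram distance and binning may differ, and the names here are illustrative:

```python
import numpy as np

def fa_histogram_distance(map1, map2, bins=128):
    # FA values lie in [0, 1]; mask out NaNs and background zeros first.
    v1 = map1[np.isfinite(map1) & (map1 > 0)]
    v2 = map2[np.isfinite(map2) & (map2 > 0)]
    h1, _ = np.histogram(v1, bins=bins, range=(0.0, 1.0), density=True)
    h2, _ = np.histogram(v2, bins=bins, range=(0.0, 1.0), density=True)
    # Chi-square histogram distance; small epsilon avoids division by zero.
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + 1e-12))
```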
Elastic models: a comparative study applied to retinal images.
Karali, E; Lambropoulou, S; Koutsouris, D
2011-01-01
In this work, various parametric elastic model methods are compared, namely the classical snake, the gradient vector field snake (GVF snake), and the topology-adaptive snake (t-snake), as well as the method of the self-affine mapping system as an alternative to elastic models. We also give a brief overview of the methods used. The self-affine mapping system is implemented using an adaptive scheme and minimum distance as the optimization criterion, which is more suitable for detecting weak edges. All methods are applied to glaucomatous retinal images with the purpose of segmenting the optic disk. The methods are compared in terms of segmentation accuracy and speed, as derived from cross-correlation coefficients between real and algorithm-extracted contours and from segmentation time, respectively. The self-affine mapping system achieves adequate segmentation time and segmentation accuracy, together with significant independence from initialization.
Method and apparatus for sensor fusion
NASA Technical Reports Server (NTRS)
Krishen, Kumar (Inventor); Shaw, Scott (Inventor); Defigueiredo, Rui J. P. (Inventor)
1991-01-01
A method and apparatus for the fusion of data from optical and radar sensors by an error minimization procedure is presented. The method was applied to the problem of reconstructing the shape of an unknown surface at a distance. The method involves deriving an incomplete surface model from an optical sensor. The unknown characteristics of the surface are represented by some parameter. The correct value of the parameter is computed by iteratively generating theoretical predictions of the radar cross sections (RCS) of the surface, comparing the predicted and observed RCS values, and improving the surface model from the results of the comparison. The theoretical RCS may be computed from the surface model in several ways; one RCS prediction technique is the method of moments. The method of moments can be applied to an unknown surface only if some shape information is available from an independent source. The optical image provides this independent information.
Multiple-frequency continuous wave ultrasonic system for accurate distance measurement
NASA Astrophysics Data System (ADS)
Huang, C. F.; Young, M. S.; Li, Y. C.
1999-02-01
A highly accurate multiple-frequency continuous wave ultrasonic range-measuring system for use in air is described. The proposed system uses a method heretofore applied to radio frequency distance measurement but not to air-based ultrasonic systems. The method presented here is based upon the comparative phase shifts generated by three continuous ultrasonic waves of different but closely spaced frequencies. In the test embodiment to confirm concept feasibility, two low-cost 40 kHz ultrasonic transducers are set face to face and used to transmit and receive ultrasound. Individual frequencies are transmitted serially, each generating its own phase shift. For any given frequency, the transmitter/receiver distance modulates the phase shift between the transmitted and received signals. Comparison of the phase shifts allows a highly accurate evaluation of target distance. A single-chip microcomputer-based multiple-frequency continuous wave generator and phase detector was designed to record and compute the phase shift information and the resulting distance, which is then sent to either an LCD or a PC. The PC is necessary only for calibration of the system, which can be run independently after calibration. Experiments were conducted to test the performance of the whole system. Experimentally, ranging accuracy was found to be within ±0.05 mm, with a range of over 1.5 m. The main advantages of this ultrasonic range measurement system are high resolution, low cost, narrow bandwidth requirements, and ease of implementation.
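A minimal sketch of the comparative phase-shift idea follows, assuming ideal wrapped phase measurements and a target within the unambiguous range set by the frequency spacing; the two-stage coarse/fine combination and all names are illustrative rather than the authors' exact procedure:

```python
import numpy as np

C_AIR = 343.0  # assumed speed of sound in air (m/s) at room temperature

def cw_phase_distance(phases, freqs):
    # phases: measured phase shifts (rad), wrapped to [0, 2*pi), one per tone
    # freqs:  closely spaced CW frequencies (Hz), e.g. tones near 40 kHz
    # Coarse, unambiguous estimate from the phase difference of two tones:
    # the beat wavelength c/(f2 - f1) is long compared to the target range.
    dphi = (phases[1] - phases[0]) % (2 * np.pi)
    d_coarse = dphi / (2 * np.pi) * C_AIR / (freqs[1] - freqs[0])
    # Fine estimate: use the coarse value to resolve the integer number of
    # wavelengths at the highest-frequency tone, then add its fractional phase.
    lam = C_AIR / freqs[-1]
    frac = phases[-1] / (2 * np.pi) * lam
    n = np.round((d_coarse - frac) / lam)
    return n * lam + frac
```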
Analysis of signals under compositional noise with applications to SONAR data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tucker, J. Derek; Wu, Wei; Srivastava, Anuj
2013-07-09
In this paper, we consider the problem of denoising and classification of SONAR signals observed under compositional noise, i.e., they have been warped randomly along the x-axis. The traditional techniques do not account for such noise and, consequently, cannot provide a robust classification of signals. We apply a recent framework that: 1) uses a distance-based objective function for data alignment and noise reduction; and 2) leads to warping-invariant distances between signals for robust clustering and classification. We use this framework to introduce two distances that can be used for signal classification: a) a y-distance, which is the distance between the aligned signals; and b) an x-distance that measures the amount of warping needed to align the signals. We focus on the task of clustering and classifying objects, using acoustic spectrum (acoustic color), which is complicated by the uncertainties in aspect angles at data collections. Small changes in the aspect angles corrupt signals in a way that amounts to compositional noise. As a result, we demonstrate the use of the developed metrics in classification of acoustic color data and highlight improvements in signal classification over current methods.
Dependence of streamer density on electric field strength on positive electrode
NASA Astrophysics Data System (ADS)
Koki, Nakamura; Takahumi, Okuyama; Wang, Douyan; Takao, N.; Hidenori, Akiyama; Kumamoto University Collaboration
2015-09-01
Pulsed streamer discharge plasma, a type of non-thermal plasma, is a known method for generating reactive radicals and ozone and for treating exhaust gas. Our previous research indicated that the distance between electrodes is a very important parameter for applications using pulsed streamer discharge. However, how the distance between electrodes affects the pulsed discharge has not been clarified. In this research, the propagation process of pulsed streamer discharge in a wire-plate electrode was observed using an ICCD camera for four electrode configurations with different gap distances: 45 mm, 40 mm, 35 mm, and 30 mm. The results show that, when the distance between electrodes was shortened, the applied voltage with a pulse duration of 100 ns decreased from 80 to 60.3 kV, while the discharge current increased from 149 to 190 A and the streamer head velocity became faster. On the other hand, the streamer head density at the onset of streamer head propagation did not change. This is considered to be due to the electric field strength of the streamer head at that time, which was about 14 kV/mm for each electrode distance.
New method for distance-based close following safety indicator.
Sharizli, A A; Rahizar, R; Karim, M R; Saifizul, A A
2015-01-01
The increase in the number of fatalities caused by road accidents involving heavy vehicles every year has raised the level of concern and awareness on road safety in developing countries like Malaysia. Changes in the vehicle dynamic characteristics such as gross vehicle weight, travel speed, and vehicle classification will affect a heavy vehicle's braking performance and its ability to stop safely in emergency situations. As such, the aim of this study is to establish a more realistic new distance-based safety indicator called the minimum safe distance gap (MSDG), which incorporates vehicle classification (VC), speed, and gross vehicle weight (GVW). Commercial multibody dynamics simulation software was used to generate braking distance data for various heavy vehicle classes under various loads and speeds. By applying nonlinear regression analysis to the simulation results, a mathematical expression of MSDG has been established. The results show that MSDG is dynamically changed according to GVW, VC, and speed. It is envisaged that this new distance-based safety indicator would provide a more realistic depiction of the real traffic situation for safety analysis.
Mei, Jiangyuan; Liu, Meizhu; Wang, Yuan-Fang; Gao, Huijun
2016-06-01
Multivariate time series (MTS) datasets broadly exist in numerous fields, including health care, multimedia, finance, and biometrics. How to classify MTS accurately has become a hot research topic, since it is an important element in many computer vision and pattern recognition applications. In this paper, we propose a Mahalanobis distance-based dynamic time warping (DTW) measure for MTS classification. The Mahalanobis distance builds an accurate relationship between each variable and its corresponding category. It is utilized to calculate the local distance between vectors in MTS. We then use DTW to align those MTS which are out of synchronization or of different lengths. After that, how to learn an accurate Mahalanobis distance function becomes another key problem. This paper establishes a LogDet divergence-based metric learning model with triplet constraints, which can learn the Mahalanobis matrix with high precision and robustness. Furthermore, the proposed method is applied to nine MTS datasets selected from the University of California, Irvine machine learning repository and Robert T. Olszewski's homepage, and the results demonstrate the improved performance of the proposed approach.
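The core measure is easy to sketch: plug a Mahalanobis local distance, parameterized by a learned positive semidefinite matrix M, into the standard DTW recursion. This minimal version assumes M has already been learned (the LogDet metric-learning step is not shown):

```python
import numpy as np

def mahalanobis_dtw(X, Y, M):
    # X (n x d) and Y (m x d) are two multivariate time series;
    # M (d x d) is a learned positive semidefinite Mahalanobis matrix.
    n, m = len(X), len(Y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diff = X[i - 1] - Y[j - 1]
            cost = np.sqrt(max(diff @ M @ diff, 0.0))  # local Mahalanobis distance
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]  # alignment cost; M = identity recovers Euclidean DTW
```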
Wang, Yi; Wan, Jianwu; Guo, Jun; Cheung, Yiu-Ming; Yuen, Pong C
2018-07-01
Similarity search is essential to many important applications and often involves searching at scale on high-dimensional data based on their similarity to a query. In biometric applications, recent vulnerability studies have shown that adversarial machine learning can compromise biometric recognition systems by exploiting the biometric similarity information. Existing methods for biometric privacy protection are in general based on pairwise matching of secured biometric templates and have inherent limitations in search efficiency and scalability. In this paper, we propose an inference-based framework for privacy-preserving similarity search in Hamming space. Our approach builds on an obfuscated distance measure that can conceal Hamming distance in a dynamic interval. Such a mechanism enables us to systematically design statistically reliable methods for retrieving most likely candidates without knowing the exact distance values. We further propose to apply Montgomery multiplication for generating search indexes that can withstand adversarial similarity analysis, and show that information leakage in randomized Montgomery domains can be made negligibly small. Our experiments on public biometric datasets demonstrate that the inference-based approach can achieve a search accuracy close to the best performance possible with secure computation methods, but the associated cost is reduced by orders of magnitude compared to cryptographic primitives.
Sidek, Khairul; Khali, Ibrahim
2012-01-01
In this paper, a person identification mechanism implemented with a Cardioid based graph using the electrocardiogram (ECG) is presented. The Cardioid based graph has given reasonably good classification accuracy in terms of differentiating between individuals. However, the current feature extraction method using the Euclidean distance can be further improved by using the Mahalanobis distance measurement, which produces extracted coefficients that take into account the correlations of the data set. Identification is then performed by applying these extracted features to a Radial Basis Function Network. A total of 30 ECG records from the MIT-BIH Normal Sinus Rhythm database (NSRDB) and the MIT-BIH Arrhythmia database (MITDB) were used for development and evaluation purposes. Our experimental results suggest that the proposed feature extraction method significantly increased the classification performance for subjects in both databases, with accuracy rising from 97.50% to 99.80% in NSRDB and from 96.50% to 99.40% in MITDB. High sensitivity, specificity, and positive predictive values of 99.17%, 99.91%, and 99.23% for NSRDB and 99.30%, 99.90%, and 99.40% for MITDB also validate the proposed method. This result also indicates that the right feature extraction technique plays a vital role in determining the persistency of the classification accuracy for the Cardioid based person identification mechanism.
Yan, Xuedong; Gao, Dan; Zhang, Fan; Zeng, Chen; Xiang, Wang; Zhang, Man
2013-01-01
This study investigated the spatial distribution of copper (Cu), zinc (Zn), cadmium (Cd), lead (Pb), chromium (Cr), cobalt (Co), nickel (Ni), and arsenic (As) in roadside topsoil in the Qinghai-Tibet Plateau and evaluated the potential environmental risks of these roadside heavy metals due to traffic emissions. A total of 120 topsoil samples were collected along five road segments in the Qinghai-Tibet Plateau. A nonlinear regression method was used to formulate the relationship between the metal concentrations in roadside soils and roadside distance. The Hakanson potential ecological risk index method was applied to assess the degree of heavy metal contamination. The regression results showed that both the heavy metal concentrations and their ecological risk indices decreased exponentially with increasing roadside distance. The large R-squared values of the regression models indicate that the exponential regression method suitably describes the relationship between heavy metal accumulation and roadside distance. For the entire study region, there was a moderate level of potential ecological risk within a 10 m roadside distance. However, Cd was the only prominent heavy metal that posed a potential hazard to the local soil ecosystem. Overall, the rank of risk contribution to the local environment among the eight heavy metals was Cd > As > Ni > Pb > Cu > Co > Zn > Cr. Considering that Cd is a more hazardous heavy metal than the other elements for public health, the local government should pay special attention to this traffic-related environmental issue. PMID:23439515
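An exponential decay in concentration with roadside distance can be fitted directly with nonlinear least squares. A minimal sketch follows, assuming SciPy; the three-parameter form C(d) = a·exp(-b·d) + c is one plausible choice consistent with the reported exponential decrease, not necessarily the authors' exact model:

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(d, a, b, c):
    # Concentration as a function of roadside distance d (m); c is the
    # background level far from the road, a + c the near-road level.
    return a * np.exp(-b * d) + c

def fit_roadside_decay(distance_m, conc):
    popt, _ = curve_fit(decay, distance_m, conc, p0=(1.0, 0.1, 0.1), maxfev=10000)
    return popt  # fitted (a, b, c)
```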
Strength loss in southern pine poles damaged by woodpeckers
R.W. Rumsey; G.E. Woodson
1973-01-01
Woodpecker damage caused extensive reductions in strength of 50-foot, class-2 utility poles, the amount depending on the cross-sectional area of wood removed and its distance from the apex. Two methods for estimating when damaged poles should be replaced proved to be conservative when applied to results of field tests. Such conservative predictions of failing loads...
Missing value imputation for gene expression data by tailored nearest neighbors.
Faisal, Shahla; Tutz, Gerhard
2017-04-25
High dimensional data like gene expression and RNA-sequences often contain missing values. The subsequent analysis and results based on these incomplete data can suffer strongly from the presence of these missing values. Several approaches to imputation of missing values in gene expression data have been developed, but the task is difficult due to the high dimensionality (number of genes) of the data. Here an imputation procedure is proposed that uses weighted nearest neighbors. Instead of using nearest neighbors defined by a distance that includes all genes, the distance is computed over genes that are apt to contribute to the accuracy of imputed values. The method aims at avoiding the curse of dimensionality, which typically occurs if local methods such as nearest neighbors are applied in high dimensional settings. The proposed weighted nearest neighbors algorithm is compared to existing missing value imputation techniques like mean imputation, KNNimpute, and the recently proposed imputation by random forests. We use RNA-sequence and microarray data from studies on human cancer to compare the performance of the methods. The results from simulations as well as real studies show that the weighted distance procedure can successfully handle missing values for high dimensional data structures where the number of predictors is larger than the number of samples. The method typically outperforms the considered competitors.
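For orientation, here is a generic distance-weighted k-nearest-neighbour imputer; the paper's tailored variant additionally restricts the distance to genes that are informative for the target gene, which is not shown. All names are illustrative:

```python
import numpy as np

def wnn_impute(X, k=10, eps=1e-8):
    # X: expression matrix (genes x samples) with missing entries as NaN.
    Xi = X.copy()
    for g in np.where(np.isnan(X).any(axis=1))[0]:
        # Root-mean-square difference over columns observed in both genes.
        d = np.sqrt(np.nanmean((X - X[g]) ** 2, axis=1))
        d[g] = np.inf
        d = np.where(np.isnan(d), np.inf, d)   # genes sharing no observed columns
        nn = np.argsort(d)[:k]                 # k nearest neighbour genes
        w = 1.0 / (d[nn] + eps)                # closer neighbours weigh more
        for c in np.where(np.isnan(X[g]))[0]:
            vals, ok = X[nn, c], ~np.isnan(X[nn, c])
            if ok.any():
                Xi[g, c] = np.average(vals[ok], weights=w[ok])
    return Xi
```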
Bin Ratio-Based Histogram Distances and Their Application to Image Classification.
Hu, Weiming; Xie, Nianhua; Hu, Ruiguang; Ling, Haibin; Chen, Qiang; Yan, Shuicheng; Maybank, Stephen
2014-12-01
Large variations in image background may cause partial matching and normalization problems for histogram-based representations, i.e., the histograms of the same category may have bins which are significantly different, and normalization may produce large changes in the differences between corresponding bins. In this paper, we deal with this problem by using the ratios between bin values of histograms, rather than the bin value differences which are used in the traditional histogram distances. We propose a bin ratio-based histogram distance (BRD), which is an intra-cross-bin distance, in contrast with previous bin-to-bin distances and cross-bin distances. The BRD is robust to partial matching and histogram normalization, and captures correlations between bins with only a linear computational complexity. We combine the BRD with the ℓ1 histogram distance and the χ² histogram distance to generate the ℓ1 BRD and the χ² BRD, respectively. These combinations exploit and benefit from the robustness of the BRD under partial matching and the robustness of the ℓ1 and χ² distances to small noise. We propose a method for assessing the robustness of histogram distances to partial matching. The BRDs and logistic regression-based histogram fusion are applied to image classification. The experimental results on synthetic data sets show the robustness of the BRDs to partial matching, and the experiments on seven benchmark data sets demonstrate promising results of the BRDs for image classification.
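To make the ratio idea concrete, the sketch below compares the full matrices of pairwise bin ratios of two normalized histograms under an ℓ1 penalty. Note this is an illustrative quadratic-time reading of the intra-cross-bin concept; the paper's BRD is defined so that it can be evaluated in linear time, and its exact ℓ1/χ² combinations differ:

```python
import numpy as np

def ratio_matrix_distance(h1, h2, eps=1e-12):
    # Normalize the histograms so ratios are comparable across images.
    h1 = h1 / (h1.sum() + eps)
    h2 = h2 / (h2.sum() + eps)
    # r_ij = h_i / h_j captures how bin i relates to every other bin j,
    # which is insensitive to a common rescaling of the histogram.
    R1 = h1[:, None] / (h1[None, :] + eps)
    R2 = h2[:, None] / (h2[None, :] + eps)
    return np.abs(R1 - R2).mean()  # l1-type penalty on ratio disagreements
```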
Mapping the Space of Genomic Signatures
Kari, Lila; Hill, Kathleen A.; Sayem, Abu S.; Karamichalis, Rallis; Bryans, Nathaniel; Davis, Katelyn; Dattani, Nikesh S.
2015-01-01
We propose a computational method to measure and visualize interrelationships among any number of DNA sequences allowing, for example, the examination of hundreds or thousands of complete mitochondrial genomes. An "image distance" is computed for each pair of graphical representations of DNA sequences, and the distances are visualized as a Molecular Distance Map: Each point on the map represents a DNA sequence, and the spatial proximity between any two points reflects the degree of structural similarity between the corresponding sequences. The graphical representation of DNA sequences utilized, Chaos Game Representation (CGR), is genome- and species-specific and can thus act as a genomic signature. Consequently, Molecular Distance Maps could inform species identification, taxonomic classifications and, to a certain extent, evolutionary history. The image distance employed, Structural Dissimilarity Index (DSSIM), implicitly compares the occurrences of oligomers of length up to k (herein k = 9) in DNA sequences. We computed DSSIM distances for more than 5 million pairs of complete mitochondrial genomes, and used Multi-Dimensional Scaling (MDS) to obtain Molecular Distance Maps that visually display the sequence relatedness in various subsets, at different taxonomic levels. This general-purpose method does not require DNA sequence alignment and can thus be used to compare similar or vastly different DNA sequences, genomic or computer-generated, of the same or different lengths. We illustrate potential uses of this approach by applying it to several taxonomic subsets: phylum Vertebrata, (super)kingdom Protista, classes Amphibia-Insecta-Mammalia, class Amphibia, and order Primates. This analysis of an extensive dataset confirms that the oligomer composition of full mtDNA sequences can be a source of taxonomic information. This method also correctly finds the mtDNA sequences most closely related to that of the anatomically modern human (the Neanderthal, the Denisovan, and the chimp), and that the sequence most different from it in this dataset belongs to a cucumber. PMID:26000734
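The graphical representation at the heart of the method, Chaos Game Representation, is compact enough to sketch. The version below uses one common corner assignment for the four bases and accumulates point counts into a 2^k x 2^k image whose pixels correspond to k-mers (here k = 9, as in the paper); computing DSSIM between two such images and feeding the resulting distance matrix to MDS are separate steps not shown:

```python
import numpy as np

def cgr_image(seq, k=9):
    # Map a DNA sequence to a 2^k x 2^k occurrence image via the chaos game.
    corners = {'A': (0.0, 0.0), 'C': (0.0, 1.0), 'G': (1.0, 1.0), 'T': (1.0, 0.0)}
    n = 2 ** k
    img = np.zeros((n, n))
    x = y = 0.5                      # start at the centre of the unit square
    for base in seq.upper():
        if base not in corners:      # skip ambiguous bases such as N
            continue
        cx, cy = corners[base]
        x, y = (x + cx) / 2.0, (y + cy) / 2.0   # move halfway to the corner
        img[min(int(y * n), n - 1), min(int(x * n), n - 1)] += 1
    return img  # pixel (i, j) counts occurrences of one particular k-mer
```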
NASA Astrophysics Data System (ADS)
Dzuba, Sergei A.
2016-08-01
The pulsed double electron-electron resonance technique (DEER, or PELDOR) is applied to study conformations and aggregation of peptides, proteins, nucleic acids, and other macromolecules. For a pair of spin labels, experimental data allow for the determination of their distance distribution function, P(r). P(r) is derived as a solution of a first-kind Fredholm integral equation, which is an ill-posed problem. Here, we suggest regularization by increasing the distance discretization length to its upper limit, at which numerical integration still provides agreement with experiment. This upper limit is found to be well above the lower limit at which the solution instability appears because of the ill-posed nature of the problem. For solving the integral equation, Monte Carlo trials of P(r) functions are employed; this method has the obvious advantage of enforcing the non-negativity constraint on P(r). For the case of overlapping broad and narrow distributions, the regularization by increasing the distance discretization length may be employed selectively, with this length being different for different distance ranges. The approach is checked for model distance distributions and for experimental data taken from the literature for doubly spin-labeled DNA and peptide antibiotics.
Adaptive phase k-means algorithm for waveform classification
NASA Astrophysics Data System (ADS)
Song, Chengyun; Liu, Zhining; Wang, Yaojun; Xu, Feng; Li, Xingming; Hu, Guangmin
2018-01-01
Waveform classification is a powerful technique for seismic facies analysis that describes the heterogeneity and compartments within a reservoir. Horizon interpretation is a critical step in waveform classification. However, the horizon often produces inconsistent waveform phase, and thus results in an unsatisfactory classification. To alleviate this problem, an adaptive phase waveform classification method called adaptive phase k-means is introduced in this paper. Our method improves the traditional k-means algorithm by using an adaptive phase distance as the waveform similarity measure. The proposed distance is a measure with variable phase as it moves from sample to sample along the traces. Model traces are also updated with the best phase interference in the iterative process. Therefore, our method is robust to phase variations caused by the interpretation horizon. We tested the effectiveness of our algorithm by applying it to synthetic and real data. The satisfactory results reveal that the proposed method tolerates a certain waveform phase variation and is a good tool for seismic facies analysis.
Unbiased estimates of galaxy scaling relations from photometric redshift surveys
NASA Astrophysics Data System (ADS)
Rossi, Graziano; Sheth, Ravi K.
2008-06-01
Many physical properties of galaxies correlate with one another, and these correlations are often used to constrain galaxy formation models. Such correlations include the colour-magnitude relation, the luminosity-size relation, the fundamental plane, etc. However, the transformation from observable (e.g. angular size, apparent brightness) to physical quantity (physical size, luminosity) is often distance dependent. Noise in the distance estimate will lead to biased estimates of these correlations, thus compromising the ability of photometric redshift surveys to constrain galaxy formation models. We describe two methods which can remove this bias. One is a generalization of the Vmax method, and the other is a maximum-likelihood approach. We illustrate their effectiveness by studying the size-luminosity relation in a mock catalogue, although both methods can be applied to other scaling relations as well. We show that if one simply uses photometric redshifts one obtains a biased relation; our methods correct for this bias and recover the true relation.
Zhang, Hong-guang; Lu, Jian-gang
2016-02-01
To overcome the problems of significant differences among samples and of nonlinearity between the property and spectra of samples in spectral quantitative analysis, a local regression algorithm is proposed in this paper. In this algorithm, the net analyte signal (NAS) method is first used to obtain the net analyte signal of the calibration samples and the unknown samples; the Euclidean distance between the net analyte signal of each unknown sample and those of the calibration samples is then calculated and used as a similarity index. According to this similarity index, a local calibration set is selected individually for each unknown sample. Finally, a local PLS regression model is built on each local calibration set for each unknown sample. The proposed method was applied to a set of near-infrared spectra of meat samples. The results demonstrate that the prediction precision and model complexity of the proposed method are superior to those of the global PLS regression method and of a conventional local regression algorithm based on spectral Euclidean distance.
Positioning for Effectiveness: Applying Marketing Concepts to Distance Education.
ERIC Educational Resources Information Center
Levenburg, Nancy
1997-01-01
Demonstrates how colleges can use distance education to attract and retain a "critical mass" of learners for distance programs. Explores alternative ways to view distance education market opportunities and determine which avenues to pursue. Suggests how to be more effective in all aspects of distance education programs. (13 citations) (YKH)
Optimisation of active suspension control inputs for improved performance of active safety systems
NASA Astrophysics Data System (ADS)
Čorić, Mirko; Deur, Joško; Xu, Li; Tseng, H. Eric; Hrovat, Davor
2018-01-01
A collocation-type control variable optimisation method is used to investigate the extent to which the fully active suspension (FAS) can be applied to improve the vehicle electronic stability control (ESC) performance and reduce the braking distance. First, the optimisation approach is applied to the scenario of vehicle stabilisation during the sine-with-dwell manoeuvre. The results are used to provide insights into different FAS control mechanisms for vehicle performance improvements related to responsiveness and yaw rate error reduction indices. The FAS control performance is compared to performances of the standard ESC system, optimal active brake system and combined FAS and ESC configuration. Second, the optimisation approach is employed to the task of FAS-based braking distance reduction for straight-line vehicle motion. Here, the scenarios of uniform and longitudinally or laterally non-uniform tyre-road friction coefficient are considered. The influences of limited anti-lock braking system (ABS) actuator bandwidth and limit-cycle ABS behaviour are also analysed. The optimisation results indicate that the FAS can provide competitive stabilisation performance and improved agility when compared to the ESC system, and that it can reduce the braking distance by up to 5% for distinctively non-uniform friction conditions.
NASA Astrophysics Data System (ADS)
Rahayu, U.; Darmayanti, T.; Widodo, A.; Redjeki, S.
2017-02-01
Self-regulated learning (SRL) is the skill by which students manage, regulate, and monitor their learning process so that they can reach their study goals. Students in distance education should possess this skill. The aim of this research is to describe the development of a learning guide for distance students, namely the "CEDAS strategy", designed for science students. The guide consists of seven principles, including selecting and applying learning strategies appropriately, managing time effectively, planning learning realistically and accurately, achieving the study goal, and performing self-evaluation continuously. The research method was descriptive and qualitative. The research involved students of Universitas Terbuka's Biology education program who participated in the Animal Embryology course. The data were collected using a questionnaire and interviews, and were analyzed descriptively. The findings showed that during the tryout, most of the students stated that the learning guide was easy to understand, concise, interesting, and encouraged them to continue reading and learning. In the implementation stage, most students commented that the guide is easy to understand, of adequate length, and helpful, so it can be used as a reference for independent study and applied on a daily basis.
Egorov, A D; Stepantsov, V I; Nosovskiĭ, A M; Shipov, A A
2009-01-01
Cluster analysis was applied to evaluate the locomotion training (running, and running intermingled with walking) of 13 cosmonauts on long-term ISS missions by the parameters of duration (min), distance (m), and intensity (km/h). Based on the results of the analyses, the cosmonauts fell into three stable groups of 2, 5, and 6 persons. Distance and speed showed a statistically significant rise (p < 0.03) from group 1 to group 3. Duration of physical locomotion training was not statistically different between the groups (p = 0.125). Therefore, cluster analysis is an adequate method for evaluating the fitness of cosmonauts on long-term missions.
Detecting Biosphere anomalies hotspots
NASA Astrophysics Data System (ADS)
Guanche-Garcia, Yanira; Mahecha, Miguel; Flach, Milan; Denzler, Joachim
2017-04-01
The amount of satellite remote sensing measurements now available allows data-driven methods to be applied to investigate environmental processes. The detection of anomalies or abnormal events is crucial for monitoring the Earth system and for analyzing their impacts on ecosystems and society. By means of a combination of statistical methods, this study proposes an intuitive and efficient methodology to detect those areas that present hotspots of anomalies, i.e. higher levels of abnormal or extreme events or more severe phases during our historical records. Biosphere variables from a preliminary version of the Earth System Data Cube developed within the CAB-LAB project (http://earthsystemdatacube.net/) have been used in this study. This database comprises several atmosphere and biosphere variables spanning 11 years (2001-2011) with 8-day temporal resolution and 0.25° global spatial resolution. In this study, we have used 10 variables that measure the biosphere. The methodology applied to detect abnormal events follows the intuitive idea that anomalies are time steps that are not well represented by a previously estimated statistical model [1]. We combine the use of Autoregressive Moving Average (ARMA) models with a distance metric like the Mahalanobis distance to detect abnormal events in multiple biosphere variables. In a first step, we pre-treat the variables by removing the seasonality and normalizing them locally (μ=0, σ=1). Additionally, we have regionalized the area of study into subregions of similar climate conditions, using the Köppen climate classification. For each climate region and variable we have selected the best ARMA parameters by means of a Bayesian criterion. We have then obtained the residuals by comparing the fitted models with the original data. To detect the extreme residuals from the 10 variables, we have computed the Mahalanobis distance to the data's mean (Hotelling's T²), which takes into account the covariance matrix of the joint distribution. The proposed methodology has been applied to different areas around the globe. The results show that the method is able to detect historic events and also provides a useful tool for defining sensitive regions. This method and the results have been developed within the framework of the BACI project (http://baci-h2020.eu/), which aims to integrate Earth Observation data to monitor the Earth system and assess the impacts of terrestrial changes. [1] V. Chandola, A. Banerjee and V. Kumar. Anomaly detection: a survey. ACM Computing Surveys (CSUR), vol. 41, no. 3, 2009. [2] P. Mahalanobis. On the generalised distance in statistics. Proceedings of the National Institute of Sciences, vol. 2, pp. 49-55, 1936.
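A condensed sketch of the residual-scoring step follows, assuming statsmodels for the ARMA fits and a fixed model order for brevity (the study selects orders per region and variable via a Bayesian criterion); all names are illustrative:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def hotelling_t2_scores(data, order=(2, 0, 1)):
    # data: (T x V) array of deseasonalized, locally standardized variables.
    # Fit one ARMA model per variable and collect the residual series.
    resid = np.column_stack(
        [ARIMA(data[:, v], order=order).fit().resid for v in range(data.shape[1])]
    )
    # Mahalanobis distance of each joint residual vector to the mean
    # (Hotelling's T^2); large values flag candidate anomalies.
    centered = resid - resid.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(resid, rowvar=False))
    return np.einsum('ij,jk,ik->i', centered, cov_inv, centered)
```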
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conn, A. R.; Parker, Q. A.; Zucker, D. B.
In 'A Bayesian Approach to Locating the Red Giant Branch Tip Magnitude (Part I)', a new technique was introduced for obtaining distances using the tip of the red giant branch (TRGB) standard candle. Here we describe a useful complement to the technique with the potential to further reduce the uncertainty in our distance measurements by incorporating a matched-filter weighting scheme into the model likelihood calculations. In this scheme, stars are weighted according to their probability of being true object members. We then re-test our modified algorithm using random-realization artificial data to verify the validity of the generated posterior probability distributions (PPDs) and proceed to apply the algorithm to the satellite system of M31, culminating in a three-dimensional view of the system. Further to the distributions thus obtained, we apply a satellite-specific prior on the satellite distances to weight the resulting distance posterior distributions, based on the halo density profile. Thus in a single publication, using a single method, a comprehensive coverage of the distances to the companion galaxies of M31 is presented, encompassing the dwarf spheroidals Andromedas I-III, V, IX-XXVII, and XXX along with NGC 147, NGC 185, M33, and M31 itself. Of these, the distances to Andromedas XXIV-XXVII and Andromeda XXX have never before been derived using the TRGB. Object distances are determined from high-resolution tip magnitude posterior distributions generated using the Markov Chain Monte Carlo technique and associated sampling of these distributions to take into account uncertainties in foreground extinction and the absolute magnitude of the TRGB as well as photometric errors. The distance PPDs obtained for each object both with and without the aforementioned prior are made available to the reader in tabular form. The large object coverage takes advantage of the unprecedented size and photometric depth of the Pan-Andromeda Archaeological Survey. Finally, a preliminary investigation into the satellite density distribution within the halo is made using the obtained distance distributions. For simplicity, this investigation assumes a single power law for the density as a function of radius, with the slope of this power law examined for several subsets of the entire satellite sample.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loebman, Sarah R.; Ivezic, Zeljko; Quinn, Thomas R.
2012-10-10
We search for evidence of dark matter in the Milky Way by utilizing the stellar number density distribution and kinematics measured by the Sloan Digital Sky Survey (SDSS) to heliocentric distances exceeding ~10 kpc. We employ the cylindrically symmetric form of the Jeans equations and focus on the morphology of the resulting acceleration maps, rather than the normalization of the total mass as done in previous, mostly local, studies. The Jeans equations are first applied to a mock catalog based on a cosmologically derived N-body+SPH simulation, and the known acceleration (gradient of the gravitational potential) is successfully recovered. The same simulation is also used to quantify the impact of dark matter on the total acceleration. We use Galfast, a code designed to quantitatively reproduce SDSS measurements and selection effects, to generate a synthetic stellar catalog. We apply the Jeans equations to this catalog and produce two-dimensional maps of stellar acceleration. These maps reveal that in a Newtonian framework, the implied gravitational potential cannot be explained by visible matter alone. The acceleration experienced by stars at galactocentric distances of ~20 kpc is three times larger than what can be explained by purely visible matter. The application of an analytic method for estimating the dark matter halo axis ratio to SDSS data implies an oblate halo with q_DM = 0.47 ± 0.14 within the same distance range. These techniques can be used to map the dark matter halo to much larger distances from the Galactic center using upcoming deep optical surveys, such as LSST.
GPS/DR Error Estimation for Autonomous Vehicle Localization.
Lee, Byung-Hyun; Song, Jong-Hwa; Im, Jun-Hyuck; Im, Sung-Hyuck; Heo, Moon-Beom; Jee, Gyu-In
2015-08-21
Autonomous vehicles require highly reliable navigation capabilities. For example, a lane-following method cannot be applied in an intersection without lanes, and since typical lane detection is performed using a straight-line model, errors can occur when the lateral distance is estimated in curved sections due to a model mismatch. Therefore, this paper proposes a localization method that uses GPS/DR error estimation based on a lane detection method with curved lane models, stop line detection, and curve matching in order to improve the performance during waypoint following procedures. The advantage of using the proposed method is that position information can be provided for autonomous driving through intersections, in sections with sharp curves, and in curved sections following a straight section. The proposed method was applied in autonomous vehicles at an experimental site to evaluate its performance, and the results indicate that the positioning achieved accuracy at the sub-meter level.
Prospects and Problems for Identification of Poisonous Plants in China using DNA Barcodes.
Xie, Lei; Wang, Ying Wei; Guan, Shan Yue; Xie, Li Jing; Long, Xin; Sun, Cheng Ye
2014-10-01
Poisonous plants are a deadly threat to public health in China. The traditional clinical diagnosis of toxic plants is inefficient, fallible, and dependent upon experts. In this study, we tested the performance of DNA barcodes for identification of the most threatening poisonous plants in China. Seventy-four accessions of 27 toxic plant species in 22 genera and 17 families were sampled, and three DNA barcodes (matK, rbcL, and ITS) were amplified, sequenced, and tested. Three methods, Blast, pairwise global alignment (PWG) distance, and Tree-Building, were tested for discrimination power. The primer universality of all three markers was high. Except in the case of ITS for Hemerocallis minor, the three barcodes were successfully generated from all the selected species. Among the three methods applied, Blast showed the lowest discrimination rate, whereas the PWG Distance and Tree-Building methods were equally effective. The ITS barcode showed the highest discrimination rates under the PWG Distance and Tree-Building methods. When the barcodes were combined, discrimination rates increased for the Blast method. The DNA barcoding technique provides a fast tool for clinical identification of poisonous plants in China. We suggest matK, rbcL, and ITS used in combination as DNA barcodes for authentication of poisonous plants.
Applying Statistical Models and Parametric Distance Measures for Music Similarity Search
NASA Astrophysics Data System (ADS)
Lukashevich, Hanna; Dittmar, Christian; Bastuck, Christoph
Automatic derivation of similarity relations between music pieces is an inherent field of music information retrieval research. Due to the nearly unrestricted amount of musical data, real-world similarity search algorithms have to be highly efficient and scalable. A possible solution is to represent each music excerpt with a statistical model (e.g., a Gaussian mixture model) and thus to reduce the computational costs by applying parametric distance measures between the models. In this paper we discuss combinations of different parametric modelling techniques and distance measures and weigh the benefits of each one against the others.
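One widely used parametric distance in this setting is the symmetrized Kullback-Leibler divergence between single full-covariance Gaussians fitted to the feature frames of each excerpt; the closed form below is standard, though the paper considers several other model/distance combinations:

```python
import numpy as np

def kl_gauss(m1, C1, m2, C2):
    # KL(N(m1, C1) || N(m2, C2)) in closed form for full-covariance Gaussians.
    C2_inv = np.linalg.inv(C2)
    d = len(m1)
    _, logdet1 = np.linalg.slogdet(C1)
    _, logdet2 = np.linalg.slogdet(C2)
    return 0.5 * (np.trace(C2_inv @ C1)
                  + (m2 - m1) @ C2_inv @ (m2 - m1)
                  - d + logdet2 - logdet1)

def symmetric_kl(m1, C1, m2, C2):
    # Symmetrize so the result can serve as a (non-metric) distance measure.
    return kl_gauss(m1, C1, m2, C2) + kl_gauss(m2, C2, m1, C1)
```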
Weisberg, Andrew H
2013-10-01
A method for forming a composite structure according to one embodiment includes forming a first ply; and forming a second ply above the first ply. Forming each ply comprises: applying a bonding material to a tape, the tape comprising a fiber and a matrix, wherein the bonding material has a curing time of less than about 1 second; and adding the tape to a substrate for forming adjacent tape winds having about a constant distance therebetween. Additional systems, methods and articles of manufacture are also presented.
Group-theoretic models of the inversion process in bacterial genomes.
Egri-Nagy, Attila; Gebhardt, Volker; Tanaka, Mark M; Francis, Andrew R
2014-07-01
The variation in genome arrangements among bacterial taxa is largely due to the process of inversion. Recent studies indicate that not all inversions are equally probable, suggesting, for instance, that shorter inversions are more frequent than longer ones, and that those that move the terminus of replication are less probable than those that do not. Current methods for establishing the inversion distance between two bacterial genomes are unable to incorporate such information. In this paper we suggest a group-theoretic framework that in principle can take these constraints into account. In particular, we show that by lifting the problem from circular permutations to the affine symmetric group, the inversion distance can be found in polynomial time for a model in which inversions are restricted to acting on two regions. This requires the proof of new results in group theory, and suggests a vein of new combinatorial problems concerning permutation groups on which group theorists will need to collaborate with biologists. We apply the new method to inferring distances and phylogenies for published Yersinia pestis data.
Penalized nonparametric scalar-on-function regression via principal coordinates
Reiss, Philip T.; Miller, David L.; Wu, Pei-Shien; Hua, Wen-Yu
2016-01-01
A number of classical approaches to nonparametric regression have recently been extended to the case of functional predictors. This paper introduces a new method of this type, which extends intermediate-rank penalized smoothing to scalar-on-function regression. In the proposed method, which we call principal coordinate ridge regression, one regresses the response on leading principal coordinates defined by a relevant distance among the functional predictors, while applying a ridge penalty. Our publicly available implementation, based on generalized additive modeling software, allows for fast optimal tuning parameter selection and for extensions to multiple functional predictors, exponential family-valued responses, and mixed-effects models. In an application to signature verification data, principal coordinate ridge regression, with dynamic time warping distance used to define the principal coordinates, is shown to outperform a functional generalized linear model. PMID:29217963
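The two stages of principal coordinate ridge regression are easy to sketch: classical multidimensional scaling (principal coordinates) of the chosen distance matrix, followed by a ridge fit on the leading coordinates. A minimal NumPy version under those assumptions, with illustrative names and without the tuning-parameter selection or extensions described above:

```python
import numpy as np

def pco_ridge(D, y, n_coords=10, lam=1.0):
    # D: (n x n) matrix of distances among the functional predictors
    #    (e.g. dynamic time warping distances); y: scalar responses.
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J                  # double-centred Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:n_coords]         # leading principal coordinates
    Z = V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
    beta = np.linalg.solve(Z.T @ Z + lam * np.eye(n_coords), Z.T @ y)  # ridge fit
    return Z, beta                               # coordinates and coefficients
```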
Rankin, Kristin M; Kroelinger, Charlan D; Rosenberg, Deborah; Barfield, Wanda D
2012-12-01
The purpose of this article is to summarize the methodology, partnerships, and products developed as a result of a distance-based workforce development initiative to improve analytic capacity among maternal and child health (MCH) epidemiologists in state health agencies. This effort was initiated by the Centers for Disease Control's MCH Epidemiology Program and faculty at the University of Illinois at Chicago to encourage and support the use of surveillance data by MCH epidemiologists and program staff in state agencies. Beginning in 2005, distance-based training in advanced analytic skills was provided to MCH epidemiologists. To support participants, this model of workforce development included lectures about the practical application of innovative epidemiologic methods, development of multidisciplinary teams within and across agencies, and systematic, tailored technical assistance. The goal of this initiative evolved to emphasize the direct application of advanced methods to the development of state data products using complex sample surveys, resulting in the articles published in this supplement to MCHJ. Innovative methods were applied by participating MCH epidemiologists, including regional analyses across geographies and datasets, multilevel analyses of state policies, and new indicator development. Support was provided for developing cross-state and regional partnerships and for developing and publishing the results of analytic projects. This collaboration was successful in building analytic capacity, facilitating partnerships, and promoting surveillance data use to address state MCH priorities, and may have broader application beyond MCH epidemiology. In an era of decreasing resources, such partnership efforts between state and federal agencies and academia are essential for promoting effective data use.
Ritchie, J Brendan; Carlson, Thomas A
2016-01-01
A fundamental challenge for cognitive neuroscience is characterizing how the primitives of psychological theory are neurally implemented. Attempts to meet this challenge are a manifestation of what Fechner called "inner" psychophysics: the theory of the precise mapping between mental quantities and the brain. In his own time, inner psychophysics remained an unrealized ambition for Fechner. We suggest that, today, multivariate pattern analysis (MVPA), or neural "decoding," methods provide a promising starting point for developing an inner psychophysics. A cornerstone of these methods is the simple linear classifier applied to neural activity in high-dimensional activation spaces. We describe an approach to inner psychophysics based on the shared architecture of linear classifiers and of observers under decision boundary models such as signal detection theory. Under this approach, distance from a decision boundary through activation space, as estimated by linear classifiers, can be used to predict reaction time in accordance with signal detection theory and distance-to-bound models of reaction time. Our "neural distance-to-bound" approach is potentially quite general and simple to implement. Furthermore, our recent work on visual object recognition suggests it is empirically viable. We believe the approach constitutes an important step along the path to an inner psychophysics that links mind, brain, and behavior.
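In practice, the distance-to-bound quantity is straightforward to estimate. A minimal sketch, assuming scikit-learn and illustrative variable names: fit a linear classifier to trial-wise activation patterns, convert its decision values into geometric distances from the boundary, and correlate those distances with reaction times (the model predicts a negative correlation: patterns farther from the bound yield faster responses):

```python
import numpy as np
from sklearn.svm import LinearSVC

def neural_distance_to_bound(X, y):
    # X: (n_trials x n_features) activation patterns; y: binary condition labels.
    clf = LinearSVC().fit(X, y)
    # decision_function returns w.x + b; dividing by ||w|| gives the
    # geometric distance of each trial's pattern from the decision boundary.
    return np.abs(clf.decision_function(X)) / np.linalg.norm(clf.coef_)

# Usage: r = np.corrcoef(neural_distance_to_bound(X, y), reaction_times)[0, 1]
```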
Ponsoda, Vicente; Martínez, Kenia; Pineda-Pardo, José A; Abad, Francisco J; Olea, Julio; Román, Francisco J; Barbey, Aron K; Colom, Roberto
2017-02-01
Neuroimaging research involves analyses of huge amounts of biological data that might or might not be related to cognition. This relationship is usually approached using univariate methods, and, therefore, correction methods are mandatory for reducing false positives. Nevertheless, the probability of false negatives is also increased. Multivariate frameworks have been proposed to help alleviate this balance. Here we apply multivariate distance matrix regression to the simultaneous analysis of biological and cognitive data, namely, structural connections among 82 brain regions and several latent factors estimating cognitive performance. We tested whether cognitive differences predict distances among individuals regarding their connectivity pattern. Beginning with 3,321 connections among regions, the 36 edges best predicted by the individuals' cognitive scores were selected. Cognitive scores were related to connectivity distances in both the full (3,321) and reduced (36) connectivity patterns. The selected edges connect regions distributed across the entire brain, and the network defined by these edges supports high-order cognitive processes such as (a) (fluid) executive control, (b) (crystallized) recognition, learning, and language processing, and (c) visuospatial processing. This multivariate study suggests that a widespread but limited set of regions in the human brain supports high-level cognitive ability differences. Hum Brain Mapp 38:803-816, 2017. © 2016 Wiley Periodicals, Inc.
Time reversal for localization of sources of infrasound signals in a windy stratified atmosphere.
Lonzaga, Joel B
2016-06-01
Time reversal is used for localizing sources of recorded infrasound signals propagating in a windy, stratified atmosphere. Due to the convective effect of the background flow, the back-azimuths of the recorded signals can be substantially different from the source back-azimuth, posing a significant difficulty in source localization. The back-propagated signals are characterized by negative group velocities from which the source back-azimuth and source-to-receiver (STR) distance can be estimated using the apparent back-azimuths and trace velocities of the signals. The method is applied to several distinct infrasound arrivals recorded by two arrays in the Netherlands. The infrasound signals were generated by the Buncefield oil depot explosion in the U.K. in December 2005. Analyses show that the method can be used to substantially enhance estimates of the source back-azimuth and the STR distance. In one of the arrays, for instance, the deviations between the measured back-azimuths of the signals and the known source back-azimuth are quite large (-1° to -7°), whereas the deviations between the predicted and known source back-azimuths are small, with an absolute mean value of <1°. Furthermore, the predicted STR distance is off by only <5% of the known STR distance.
The Elimination of Transfer Distances Is an Important Part of Hospital Design.
Karvonen, Sauli; Nordback, Isto; Elo, Jussi; Havulinna, Jouni; Laine, Heikki-Jussi
2017-04-01
The objective of the present study was to describe how a specific patient flow analysis with from-to charts can be used in hospital design and layout planning. As part of a large renewal project at a university hospital, a detailed patient flow analysis was applied to planning the musculoskeletal surgery unit (orthopedics and traumatology, hand surgery, and plastic surgery). First, the main activities of the unit were determined. Next, the routes of all patients treated over the course of 1 year were studied, and their physical movements in the current hospital were calculated. An ideal layout of the new hospital was then generated to minimize transfer distances by placing the main activities close to each other, according to the patient flow analysis. The actual architectural design was based on the ideal layout plan. Finally, we compared the current transfer distances to the distances patients will move in the new hospital. The methods enabled us to estimate an approximate 50% reduction in transfer distances for inpatients (from 3,100 km/year to 1,600 km/year) and a 30% reduction for outpatients (from 2,100 km/year to 1,400 km/year). Patient transfers are non-value-added activities. This study demonstrates that a detailed patient flow analysis with from-to charts can substantially shorten transfer distances, thereby minimizing extraneous patient and personnel movements. This reduction supports productivity improvement, cross-professional teamwork, and patient safety by placing all patient flow activities close to each other. Thus, this method is a valuable additional tool in hospital design.
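A from-to chart calculation of the kind described above can be sketched directly: total yearly transfer distance is the element-wise product of a flow matrix and a distance matrix. The activity names, flows, and distances below are hypothetical, chosen only to show the comparison between layouts.

```python
# Toy from-to chart: compare total transfer distance under two layouts.
import numpy as np

activities = ["ward", "imaging", "surgery", "outpatient"]
# flows[i, j]: patient transfers per year from activity i to activity j
flows = np.array([[0, 1200, 800, 0],
                  [1100, 0, 300, 900],
                  [700, 250, 0, 0],
                  [0, 850, 0, 0]])
# dist[i, j]: walking distance in meters between activities in a layout
dist_current = np.array([[0, 250, 400, 600],
                         [250, 0, 300, 350],
                         [400, 300, 0, 500],
                         [600, 350, 500, 0]])
dist_planned = dist_current * 0.5     # layout placing activities closer together

km_per_year = lambda d: (flows * d).sum() / 1e3
print(km_per_year(dist_current), km_per_year(dist_planned))
```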
NASA Astrophysics Data System (ADS)
Penna, Pedro A. A.; Mascarenhas, Nelson D. A.
2018-02-01
The development of new methods to denoise images still attracts researchers, who seek to combat noise with minimal loss of resolution and detail, such as edges and fine structures. Many algorithms aim to remove additive white Gaussian noise (AWGN). However, it is not the only type of noise that interferes with the analysis and interpretation of images. Therefore, it is extremely important to extend the capacity of filters to different noise models present in the literature, for example the multiplicative noise called speckle that is present in synthetic aperture radar (SAR) images. State-of-the-art algorithms in the remote sensing area work with similarity between patches. This paper develops two approaches using the non-local means (NLM) filter, originally developed for AWGN, and extends its capacity to speckle in intensity SAR images. The first approach is grounded on the use of stochastic distances based on the G0 distribution without transforming the data to the logarithmic domain, as in the homomorphic transformation. It takes into account the speckle and backscatter to estimate the parameters necessary to compute the stochastic distances on NLM. The second method uses a priori NLM denoising with a homomorphic transformation and applies the inverse Gamma distribution to estimate the parameters that were used in NLM with stochastic distances. The latter method also presents a new alternative to compute the parameters for the G0 distribution. Finally, this work compares and analyzes the synthetic and real results of the proposed methods with some recent filters from the literature.
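A generic non-local means sketch shows where the stochastic distances slot in: every patch pair gets a weight that decays with a patch distance, and the paper's contribution is to replace the Euclidean patch distance used below with stochastic distances derived from the G0 (or, after a homomorphic transform, inverse Gamma) speckle model. Everything here is a toy illustration, not the paper's implementation.

```python
# Generic NLM for one pixel; swap the patch distance for a stochastic one.
import numpy as np

def nlm_pixel(img, i, j, half_patch=2, half_search=5, h=0.5):
    """Denoise pixel (i, j) by weighted averaging over similar patches."""
    p = img[i - half_patch:i + half_patch + 1, j - half_patch:j + half_patch + 1]
    num = den = 0.0
    for a in range(i - half_search, i + half_search + 1):
        for b in range(j - half_search, j + half_search + 1):
            q = img[a - half_patch:a + half_patch + 1,
                    b - half_patch:b + half_patch + 1]
            d = np.mean((p - q) ** 2)   # <- replace with a stochastic distance
            w = np.exp(-d / h ** 2)
            num += w * img[a, b]
            den += w
    return num / den

rng = np.random.default_rng(8)
clean = np.ones((40, 40))
clean[:, 20:] = 4.0
speckled = clean * rng.gamma(4.0, 1 / 4.0, clean.shape)   # multiplicative noise
print(speckled[20, 10], nlm_pixel(speckled, 20, 10))
```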
Funane, Tsukasa; Atsumori, Hirokazu; Katura, Takusige; Obata, Akiko N; Sato, Hiroki; Tanikawa, Yukari; Okada, Eiji; Kiguchi, Masashi
2014-01-15
To quantify the effect of absorption changes in the deep tissue (cerebral) and shallow tissue (scalp, skin) layers on functional near-infrared spectroscopy (fNIRS) signals, a method using multi-distance (MD) optodes and independent component analysis (ICA), referred to as the MD-ICA method, is proposed. In previous studies, when the signal from the shallow tissue layer (shallow signal) needed to be eliminated, it was often assumed that the shallow signal had no correlation with the signal from the deep tissue layer (deep signal). In this study, no relationship between the waveforms of deep and shallow signals is assumed; instead, it is assumed that both signals are linear combinations of multiple signal sources, which allows the inclusion of a "shared component" (such as systemic signals) that is contained in both layers. The method also assumes that the partial optical path length of the shallow layer does not change, whereas that of the deep layer increases linearly with the source-detector (S-D) distance. Deep- and shallow-layer contribution ratios of each independent component (IC) are calculated using the dependence of the weight of each IC on the S-D distance. Reconstruction of deep- and shallow-layer signals is performed as the sum of ICs weighted by the deep and shallow contribution ratios. Experimental validation of the principle of this technique was conducted using a dynamic phantom with two absorbing layers. Results showed that our method is effective for evaluating deep-layer contributions even if there are high correlations between deep and shallow signals. Next, we applied the method to fNIRS signals obtained on a human head with 5-, 15-, and 30-mm S-D distances during a verbal fluency task, a verbal working memory task (prefrontal area), a finger tapping task (motor area), and a tetrametric visual checkerboard task (occipital area) and then estimated the deep-layer contribution ratio. To evaluate the signal separation performance of our method, we used the correlation coefficients of a laser-Doppler flowmetry (LDF) signal and a nearest 5-mm S-D distance channel signal with the shallow signal. We demonstrated that the shallow signals have a higher temporal correlation with the LDF signals and with the 5-mm S-D distance channel than the deep signals. These results show that the MD-ICA method can discriminate between deep and shallow signals. Copyright © 2013 Elsevier Inc. All rights reserved.
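A sketch of the weighting step is given below, under the abstract's assumptions that the shallow partial path length is constant while the deep one grows linearly with S-D distance, so each IC's channel weight can be modeled as a + b*d. The ICA mixing weights are invented for illustration; a real analysis would obtain them from an ICA fit to the multi-distance signals.

```python
# Estimate deep/shallow contribution ratios from IC weights vs. S-D distance.
import numpy as np

d = np.array([5.0, 15.0, 30.0])          # source-detector distances (mm)
A = np.array([[0.20, 0.90],              # hypothetical ICA mixing weights:
              [0.55, 0.92],              # rows = channels, cols = ICs
              [1.10, 0.88]])

X = np.column_stack([np.ones_like(d), d])
for k in range(A.shape[1]):
    (a, b), *_ = np.linalg.lstsq(X, A[:, k], rcond=None)
    deep = b * d                         # distance-dependent part -> deep layer
    shallow = np.full_like(d, a)         # distance-independent part -> shallow layer
    print(f"IC{k}: deep contribution ratio {deep / (deep + shallow)}")
```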
Determining accurate distances to nearby galaxies
NASA Astrophysics Data System (ADS)
Bonanos, Alceste Zoe
2005-11-01
Determining accurate distances to nearby or distant galaxies is a very simple conceptually, yet complicated in practice, task. Presently, distances to nearby galaxies are only known to an accuracy of 10-15%. The current anchor galaxy of the extragalactic distance scale is the Large Magellanic Cloud, which has large (10-15%) systematic uncertainties associated with it, because of its morphology, its non-uniform reddening and the unknown metallicity dependence of the Cepheid period-luminosity relation. This work aims to determine accurate distances to some nearby galaxies, and subsequently help reduce the error in the extragalactic distance scale and the Hubble constant H 0 . In particular, this work presents the first distance determination of the DIRECT Project to M33 with detached eclipsing binaries. DIRECT aims to obtain a new anchor galaxy for the extragalactic distance scale by measuring direct, accurate (to 5%) distances to two Local Group galaxies, M31 and M33, with detached eclipsing binaries. It involves a massive variability survey of these galaxies and subsequent photometric and spectroscopic follow-up of the detached binaries discovered. In this work, I also present a catalog of variable stars discovered in one of the DIRECT fields, M31Y, which includes 41 eclipsing binaries. Additionally, we derive the distance to the Draco Dwarf Spheroidal galaxy, with ~100 RR Lyrae found in our first CCD variability study of this galaxy. A "hybrid" method of discovering Cepheids with ground-based telescopes is described next. It involves applying the image subtraction technique on the images obtained from ground-based telescopes and then following them up with the Hubble Space Telescope to derive Cepheid period-luminosity distances. By re-analyzing ESO Very Large Telescope data on M83 (NGC 5236), we demonstrate that this method is much more powerful for detecting variability, especially in crowded fields. I finally present photometry for the Wolf-Rayet binary WR 20a, which confirmed that the system consists of two extremely massive stars and refined the values of the masses. It is the most massive binary known with an accurate mass determination.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Falconer, David A.; Tiwari, Sanjiv K.; Moore, Ronald L.
Projection errors limit the use of vector magnetograms of active regions (ARs) far from the disk center. In this Letter, for ARs observed up to 60° from the disk center, we demonstrate a method for measuring and reducing the projection error in the magnitude of any whole-AR parameter that is derived from a vector magnetogram that has been deprojected to the disk center. The method assumes that the center-to-limb curve of the average of the parameter's absolute values, measured from the disk passage of a large number of ARs and normalized to each AR's absolute value of the parameter at central meridian, gives the average fractional projection error at each radial distance from the disk center. To demonstrate the method, we use a large set of large-flux ARs and apply the method to a whole-AR parameter that is among the simplest to measure: whole-AR magnetic flux. We measure 30,845 SDO/Helioseismic and Magnetic Imager vector magnetograms covering the disk passage of 272 large-flux ARs, each having whole-AR flux >10^22 Mx. We obtain the center-to-limb radial-distance run of the average projection error in measured whole-AR flux from a Chebyshev fit to the radial-distance plot of the 30,845 normalized measured values. The average projection error in the measured whole-AR flux of an AR at a given radial distance is removed by multiplying the measured flux by the correction factor given by the fit. The correction is important for both the study of the evolution of ARs and for improving the accuracy of forecasts of an AR's major flare/coronal mass ejection productivity.
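The correction step lends itself to a compact sketch: normalized whole-AR fluxes from many disk passages are fit against radial distance with a Chebyshev polynomial, and any measured flux is then divided by the fitted curve. The measurements and the falloff curve below are synthetic, not the paper's data.

```python
# Fit a center-to-limb curve and apply it as a projection-error correction.
import numpy as np
from numpy.polynomial import chebyshev

rng = np.random.default_rng(10)
r = rng.uniform(0, 0.9, 3000)                  # radial distance / solar radius
true_falloff = 1.0 - 0.35 * r ** 2             # toy projection-error curve
flux_norm = true_falloff * rng.normal(1.0, 0.05, r.size)   # flux / CM value

coef = chebyshev.chebfit(r, flux_norm, deg=4)  # center-to-limb curve fit

def corrected_flux(measured_flux, radial_distance):
    return measured_flux / chebyshev.chebval(radial_distance, coef)

print(corrected_flux(8.0e21, 0.8))             # Mx, after projection correction
```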
Mallik, Saurav; Bhadra, Tapas; Mukherji, Ayan
2018-04-01
Association rule mining is an important technique for identifying interesting relationships between gene pairs in a biological data set. Earlier methods basically work for a single biological data set, and, in most cases, a single minimum support cutoff is applied globally, i.e., across all genesets/itemsets. To overcome this limitation, in this paper we propose a dynamic threshold-based FP-growth rule mining algorithm that integrates gene expression, methylation, and protein-protein interaction profiles based on weighted shortest distances to find novel associations among different pairs of genes in multi-view data sets. For this purpose, we introduce three new thresholds, namely, Distance-based Variable/Dynamic Supports (DVS), Distance-based Variable Confidences (DVC), and Distance-based Variable Lifts (DVL) for each rule, by integrating the co-expression, co-methylation, and protein-protein interactions existing in the multi-omics data set. We develop the proposed algorithm utilizing these three novel multiple threshold measures. In the proposed algorithm, the values of DVS, DVC, and DVL are computed for each rule separately, and subsequently it is verified whether the support, confidence, and lift of each evolved rule are greater than or equal to the corresponding individual DVS, DVC, and DVL values, respectively. If all three conditions for a rule are found to be true, the rule is treated as a resultant rule. One of the major advantages of the proposed method compared with other related state-of-the-art methods is that it considers both the quantitative and interactive significance among all pairwise genes belonging to each rule. Moreover, the proposed method generates fewer rules, takes less running time, and provides greater biological significance for the resultant top-ranking rules compared to previous methods.
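The per-rule threshold check described above reduces to a simple filter: a rule survives only if its support, confidence, and lift each clear that rule's own DVS, DVC, and DVL values. The rule statistics and thresholds below are hypothetical inputs; the paper derives the thresholds from co-expression, co-methylation, and PPI shortest distances.

```python
# Hedged sketch of the dynamic-threshold rule filter with invented values.
from dataclasses import dataclass

@dataclass
class Rule:
    genes: tuple
    support: float
    confidence: float
    lift: float
    dvs: float   # Distance-based Variable Support threshold for this rule
    dvc: float   # Distance-based Variable Confidence threshold
    dvl: float   # Distance-based Variable Lift threshold

def keep(rule: Rule) -> bool:
    # A rule survives only if it clears all three of its own thresholds.
    return (rule.support >= rule.dvs and
            rule.confidence >= rule.dvc and
            rule.lift >= rule.dvl)

rules = [Rule(("TP53", "MDM2"), 0.42, 0.81, 1.6, 0.30, 0.75, 1.2),
         Rule(("BRCA1", "EGFR"), 0.18, 0.55, 1.1, 0.25, 0.60, 1.3)]
print([r.genes for r in rules if keep(r)])   # -> [('TP53', 'MDM2')]
```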
Earthquake Declustering via a Nearest-Neighbor Approach in Space-Time-Magnitude Domain
NASA Astrophysics Data System (ADS)
Zaliapin, I. V.; Ben-Zion, Y.
2016-12-01
We propose a new method for earthquake declustering based on nearest-neighbor analysis of earthquakes in the space-time-magnitude domain. The nearest-neighbor approach was recently applied to a variety of seismological problems that validate the general utility of the technique and reveal the existence of several different robust types of earthquake clusters. Notably, it was demonstrated that clustering associated with the largest earthquakes is statistically different from that of small-to-medium events. In particular, the characteristic bimodality of the nearest-neighbor distances that helps separate clustered and background events is often violated after the largest earthquakes in their vicinity, which is dominated by triggered events. This prevents using a simple threshold between the two modes of the nearest-neighbor distance distribution for declustering. The current study resolves this problem, hence extending the nearest-neighbor approach to the problem of earthquake declustering. The proposed technique is applied to seismicity of different areas in California (San Jacinto, Coso, Salton Sea, Parkfield, Ventura, Mojave, etc.), as well as to the global seismicity, to demonstrate its stability and efficiency in treating various clustering types. The results are compared with those of alternative declustering methods.
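The space-time-magnitude proximity commonly used in this line of work (Zaliapin and Ben-Zion) can be sketched as eta_ij = t_ij * r_ij^df * 10^(-b*m_i) over all earlier events i, with each event keeping its minimum. The catalog below is synthetic, and b and df are typical choices rather than fitted values.

```python
# Nearest-neighbor proximity in space-time-magnitude on a synthetic catalog.
import numpy as np

def nn_proximity(t, x, y, m, b=1.0, df=1.6):
    """Return each event's nearest-neighbor proximity to any earlier event."""
    n = len(t)
    eta = np.full(n, np.inf)
    for j in range(n):
        for i in range(n):
            dt = t[j] - t[i]
            if dt <= 0:
                continue
            r = np.hypot(x[j] - x[i], y[j] - y[i])
            eta[j] = min(eta[j], dt * max(r, 1e-6) ** df * 10 ** (-b * m[i]))
    return eta

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 365, 300))          # days
x, y = rng.uniform(0, 100, (2, 300))           # km
m = rng.exponential(1.0, 300) + 2.0            # magnitudes
# A bimodal histogram of log10(eta) separates clustered from background events.
print(np.percentile(np.log10(nn_proximity(t, x, y, m)[1:]), [5, 50, 95]))
```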
Color-coded depth information in volume-rendered magnetic resonance angiography
NASA Astrophysics Data System (ADS)
Smedby, Orjan; Edsborg, Karin; Henriksson, John
2004-05-01
Magnetic Resonance Angiography (MRA) and Computed Tomography Angiography (CTA) data are usually presented using Maximum Intensity Projection (MIP) or Volume Rendering Technique (VRT), but these often fail to demonstrate a stenosis if the projection angle is not suitably chosen. In order to make vascular stenoses visible in projection images independent of the choice of viewing angle, a method is proposed to supplement these images with colors representing the local caliber of the vessel. After preprocessing the volume image with a median filter, segmentation is performed by thresholding, and a Euclidean distance transform is applied. The distance to the background from each voxel in the vessel is mapped to a color. These colors can either be rendered directly using MIP or be presented together with opacity information based on the original image using VRT. The method was tested in a synthetic dataset containing a cylindrical vessel with stenoses in varying angles. The results suggest that the visibility of stenoses is enhanced by the color information. In clinical feasibility experiments, the technique was applied to clinical MRA data. The results are encouraging and indicate that the technique can be used with clinical images.
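The processing chain described above (median filter, threshold segmentation, Euclidean distance transform, color mapping) can be sketched on a synthetic volume; the crude block "vessel" with a narrowed segment below stands in for real MRA data.

```python
# Caliber color-coding sketch: distance transform of a segmented "vessel".
import numpy as np
from scipy import ndimage
from matplotlib import cm

vol = np.zeros((64, 64, 64))
vol[28:36, 28:36, :] = 100.0            # straight "vessel" along z
vol[:, :, 20:26] = 0.0                  # cut out a segment...
vol[30:34, 30:34, 20:26] = 100.0        # ...and re-insert it narrower (stenosis)

smoothed = ndimage.median_filter(vol, size=3)      # preprocessing
vessel = smoothed > 50.0                           # segmentation by thresholding
radius = ndimage.distance_transform_edt(vessel)    # distance to background
rgba = cm.jet(radius / radius.max())               # color encodes local caliber
# A MIP of the radius map shows the stenosis as a color change regardless of
# the projection angle.
print(radius[:, :, 10].max(), radius[:, :, 22].max())   # wide vs. stenotic
```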
[A research in speech endpoint detection based on boxes-coupling generalization dimension].
Wang, Zimei; Yang, Cuirong; Wu, Wei; Fan, Yingle
2008-06-01
In this paper, a new method for calculating the generalized dimension, based on a boxes-coupling principle, is proposed to overcome edge effects and to improve on speech endpoint detection based on the original calculation of the generalized dimension. The new method has been applied to speech endpoint detection. Firstly, the length of the overlapping border was determined, and by calculating the generalized dimension while covering the speech signal with overlapped boxes, three-dimensional feature vectors comprising the box dimension, the information dimension, and the correlation dimension were obtained. Secondly, in light of the relation between feature distance and similarity degree, feature extraction was conducted using a common distance. Lastly, a bi-threshold method was used to classify the speech signals. The experimental results indicated that, by comparison with the original generalized dimension (OGD) and the spectral entropy (SE) algorithm, the proposed method is more robust and effective for detecting speech signals that contain different kinds of noise at different signal-to-noise ratios (SNR), especially at low SNR.
Path correction of free flight projectiles by cross firing of subsidiary projectiles
NASA Astrophysics Data System (ADS)
Stroem, L.
1982-10-01
Terminal guidance of gun-fired shells is described. The path is corrected by shooting out throw-bodies from the shell casing. The drawbacks of the method, e.g., casing deformation, were eliminated. Using deflagrating substances instead of explosives, higher impulses were obtained, and at lower pressure levels. At acceleration distances of only 10 to 15 mm, throw-body speeds of 400 to 500 m/sec were noted, allowing this method to be applied to rotation-stabilized shells.
Zhao, Ni; Chen, Jun; Carroll, Ian M.; Ringel-Kulka, Tamar; Epstein, Michael P.; Zhou, Hua; Zhou, Jin J.; Ringel, Yehuda; Li, Hongzhe; Wu, Michael C.
2015-01-01
High-throughput sequencing technology has enabled population-based studies of the role of the human microbiome in disease etiology and exposure response. Distance-based analysis is a popular strategy for evaluating the overall association between microbiome diversity and outcome, wherein the phylogenetic distance between individuals’ microbiome profiles is computed and tested for association via permutation. Despite their practical popularity, distance-based approaches suffer from important challenges, especially in selecting the best distance and extending the methods to alternative outcomes, such as survival outcomes. We propose the microbiome regression-based kernel association test (MiRKAT), which directly regresses the outcome on the microbiome profiles via the semi-parametric kernel machine regression framework. MiRKAT allows for easy covariate adjustment and extension to alternative outcomes while non-parametrically modeling the microbiome through a kernel that incorporates phylogenetic distance. It uses a variance-component score statistic to test for the association with analytical p value calculation. The model also allows simultaneous examination of multiple distances, alleviating the problem of choosing the best distance. Our simulations demonstrated that MiRKAT provides correctly controlled type I error and adequate power in detecting overall association. “Optimal” MiRKAT, which considers multiple candidate distances, is robust in that it suffers from little power loss in comparison to when the best distance is used and can achieve tremendous power gain in comparison to when a poor distance is chosen. Finally, we applied MiRKAT to real microbiome datasets to show that microbial communities are associated with smoking and with fecal protease levels after confounders are controlled for. PMID:25957468
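The distance-to-kernel conversion at the heart of this approach can be sketched directly: a distance matrix D is turned into a centered similarity kernel via Gower's transformation, K = -0.5 * J D^2 J, and a variance-component score statistic is formed. A permutation p-value is shown here for simplicity; MiRKAT itself uses an analytical p-value, and the distance matrix below is synthetic rather than phylogenetic.

```python
# Distance matrix -> kernel -> score statistic, with a permutation p-value.
import numpy as np

def distance_to_kernel(D):
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    K = -0.5 * J @ (D ** 2) @ J
    # Project to positive semi-definite in case D is non-Euclidean.
    w, V = np.linalg.eigh(K)
    return (V * np.clip(w, 0, None)) @ V.T

rng = np.random.default_rng(2)
D = rng.uniform(0, 1, (60, 60))
D = (D + D.T) / 2
np.fill_diagonal(D, 0)
y = rng.normal(size=60)                      # continuous outcome, no covariates

K = distance_to_kernel(D)
r = y - y.mean()
Q = r @ K @ r                                # variance-component score statistic
perm = [(p := rng.permutation(r)) @ K @ p for _ in range(999)]
print((np.sum(np.array(perm) >= Q) + 1) / 1000)
```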
NASA Technical Reports Server (NTRS)
Madore, Barry F.; Freedman, Wendy L.
1995-01-01
Based on both empirical data for the nearby galaxies and on computer simulations, we show that measuring the position of the tip of the first-ascent red-giant branch provides a means of obtaining the distances to nearby galaxies with a precision and accuracy comparable to using Cepheids and/or RR Lyrae variables. We present an analysis of synthetic I vs (V-I) color magnitude diagrams of Population II systems to investigate the use of the observed discontinuity in the I-band luminosity function as a primary distance indicator. In the simulations we quantify the effects of (1) signal to noise, (2) crowding, (3) population size, and (4) non-giant-branch-star contamination on the method adopted for detecting the discontinuity, measuring its luminosity, and estimating its uncertainty. We discuss sources of systematic error in the context of observable parameters, such as the signal-to-noise ratio and/or surface brightness. The simulations are then scaled to observed color-magnitude diagrams. It is concluded that from the ground the tip of the red-giant-branch method can be successfully used to determine distances accurate to +/- 10% for galaxies out to 3 Mpc (mu approximately 27.5 mag), and from space a factor of four further in distance (mu approximately 30.6 mag) can be reached using HST. This method can be applied wherever a metal-poor population (-2.0 < Z < -0.7) of red-giant stars is detected (whose age is in the range 7-17 Gyr), whether that population resides in the halo of a spiral galaxy, the extended outer disk of a dwarf irregular, or in the outer periphery of an elliptical galaxy.
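Detecting the tip amounts to finding an edge in the I-band luminosity function; a minimal sketch with a discrete first-difference filter is shown below. The magnitudes are synthetic, with the tip placed at I = 24.0 for illustration; the real method additionally handles signal-to-noise, crowding, and contamination.

```python
# Find the TRGB discontinuity in a synthetic I-band luminosity function.
import numpy as np

rng = np.random.default_rng(3)
rgb = rng.uniform(24.0, 26.0, 4000)        # RGB stars: LF switches on at the tip
agb = rng.uniform(23.0, 24.0, 300)         # brighter AGB "contaminants"
mags = np.concatenate([rgb, agb])

lf, edges = np.histogram(mags, bins=np.arange(22.5, 26.0, 0.05))
rise = lf[2:] - lf[:-2]                    # discrete edge (first-difference) filter
tip = edges[np.argmax(rise) + 2]           # offset re-centers the two-bin filter
print(round(tip, 2))                       # lands near 24.0
```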
Valizade Hasanloei, Mohammad Amin; Sheikhpour, Razieh; Sarram, Mehdi Agha; Sheikhpour, Elnaz; Sharifi, Hamdollah
2018-02-01
Quantitative structure-activity relationship (QSAR) is an effective computational technique for drug design that relates the chemical structures of compounds to their biological activities. Feature selection is an important step in QSAR-based drug design to select the most relevant descriptors. One of the most popular feature selection methods for classification problems is the Fisher score, whose aim is to minimize the within-class distance and maximize the between-class distance. In this study, the properties of the Fisher criterion were extended for QSAR models to define new distance metrics based on the continuous activity values of compounds with known activities. Then, a semi-supervised feature selection method was proposed based on the combination of Fisher and Laplacian criteria, which exploits both compounds with known and unknown activities to select the relevant descriptors. To demonstrate the efficiency of the proposed semi-supervised feature selection method in selecting the relevant descriptors, we applied the method and other feature selection methods to three QSAR data sets, namely serine/threonine-protein kinase PLK3 inhibitors, ROCK inhibitors, and phenol compounds. The results demonstrated that the QSAR models built on the descriptors selected by the proposed semi-supervised method have better performance than other models. This indicates the efficiency of the proposed method in selecting the relevant descriptors using the compounds with known and unknown activities. The results of this study showed that the compounds with known and unknown activities can be helpful to improve the performance of the combined Fisher and Laplacian based feature selection methods.
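The classical Fisher score that this method generalizes is easy to sketch: for labeled classes it favors descriptors with large between-class and small within-class scatter. The descriptors and labels below are synthetic; the paper's extension to continuous activities and its Laplacian term are not reproduced here.

```python
# Classical Fisher score per descriptor; higher means more discriminative.
import numpy as np

def fisher_score(X, y):
    """X: compounds x descriptors, y: class labels."""
    classes = np.unique(y)
    mu = X.mean(axis=0)
    scores = []
    for j in range(X.shape[1]):
        between = sum((y == c).sum() * (X[y == c, j].mean() - mu[j]) ** 2
                      for c in classes)
        within = sum((y == c).sum() * X[y == c, j].var() for c in classes)
        scores.append(between / (within + 1e-12))
    return np.array(scores)

rng = np.random.default_rng(11)
y = np.repeat([0, 1], 30)
X = rng.normal(size=(60, 5))
X[:, 2] += y * 2.0                        # descriptor 2 is informative
print(np.argmax(fisher_score(X, y)))      # -> 2
```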
NASA Astrophysics Data System (ADS)
Behringer, Reinhold
1995-12-01
A system for visual road recognition at far look-ahead distances, implemented in the autonomous road vehicle VaMP (a passenger car), is described. Visual cues of a road in a video image are the bright lane markings and the edges formed at the road borders. At distances of more than 100 m, the most relevant road cue is the homogeneous road area, limited by the two border edges. These cues can be detected by the image processing module KRONOS by applying edge detection techniques and areal 2D segmentation based on resolution triangles (analogous to a resolution pyramid). An estimation process updates a state vector, which describes the spatial road shape and the vehicle orientation relative to the road. This state vector is estimated every 40 ms by exploiting knowledge about the vehicle movement (a spatio-temporal model of vehicle dynamics) and the road design rules (clothoidal segments). Kalman filter techniques are applied to obtain an optimal estimate of the state vector by evaluating the measurements of the road border positions in the image sequence taken by a set of CCD cameras. The road consists of segments with piecewise constant curvature parameters. The borders between these segments can be detected by applying methods which have been developed for the detection of discontinuities in time-discrete measurements. The road recognition system has been tested in autonomous rides with VaMP on public Autobahnen in real traffic at speeds up to 130 km/h.
RETROSPECTIVE DETECTION OF INTERLEAVED SLICE ACQUISITION PARAMETERS FROM FMRI DATA
Parker, David; Rotival, Georges; Laine, Andrew; Razlighi, Qolamreza R.
2015-01-01
To minimize slice excitation leakage to adjacent slices, interleaved slice acquisition is nowadays performed regularly in fMRI scanners. In interleaved slice acquisition, the number of slices skipped between two consecutive slice acquisitions is often referred to as the ‘interleave parameter’; the loss of this parameter can be catastrophic for the analysis of fMRI data. In this article we present a method to retrospectively detect the interleave parameter and the axis in which it is applied. Our method relies on the smoothness of the temporal-distance correlation function, which becomes disrupted along the axis on which interleaved slice acquisition is applied. We examined this method on simulated and real data in the presence of fMRI artifacts such as physiological noise, motion, etc. We also examined the reliability of this method in detecting different types of interleave parameters and demonstrated an accuracy of about 94% in more than 1000 real fMRI scans. PMID:26161244
NASA Astrophysics Data System (ADS)
Dong, Yayun; Yang, Xijun; Jin, Nan; Li, Wenwen; Yao, Chen; Tang, Houjun
2017-05-01
A shifting medium is a kind of metamaterial that can optically shift a space or an object a certain distance away from its original position. Based on the shifting medium, we propose a concise pair of shifting slabs covering the transmitting or receiving coil in a two-coil wireless power transfer system to decrease the equivalent distance between the coils. The electromagnetic parameters of the shifting slabs are calculated by transformation optics. Numerical simulations validate that the shifting slabs can approximately shift the electromagnetic fields generated by the covered coil; thus, the magnetic coupling and the efficiency of the system are enhanced while leaving the physical transmission distance unchanged. We also verify the advantages of the shifting slabs over the magnetic superlens. Finally, we provide two methods to fabricate shifting slabs based on split-ring resonators.
Improvement of two-way continuous-variable quantum key distribution with virtual photon subtraction
NASA Astrophysics Data System (ADS)
Zhao, Yijia; Zhang, Yichen; Li, Zhengyu; Yu, Song; Guo, Hong
2017-08-01
We propose a method to improve the performance of the two-way continuous-variable quantum key distribution protocol by virtual photon subtraction. The virtual photon subtraction implemented via non-Gaussian post-selection not only enhances the entanglement of the two-mode squeezed vacuum state but also has advantages in simplifying physical operation and promoting efficiency. In the two-way protocol, virtual photon subtraction can be applied to the two sources independently. Numerical simulations show that the optimal performance of the renovated two-way protocol is obtained with photon subtraction used only by Alice. The transmission distance and tolerable excess noise are improved by using virtual photon subtraction with appropriate parameters. Moreover, the tolerable excess noise maintains a high value as the distance increases, so that the robustness of the two-way continuous-variable quantum key distribution system is significantly improved, especially at long transmission distance.
An eclipsing-binary distance to the Large Magellanic Cloud accurate to two per cent.
Pietrzyński, G; Graczyk, D; Gieren, W; Thompson, I B; Pilecki, B; Udalski, A; Soszyński, I; Kozłowski, S; Konorski, P; Suchomska, K; Bono, G; Moroni, P G Prada; Villanova, S; Nardetto, N; Bresolin, F; Kudritzki, R P; Storm, J; Gallenne, A; Smolec, R; Minniti, D; Kubiak, M; Szymański, M K; Poleski, R; Wyrzykowski, L; Ulaczyk, K; Pietrukowicz, P; Górski, M; Karczmarek, P
2013-03-07
In the era of precision cosmology, it is essential to determine the Hubble constant to an accuracy of three per cent or better. At present, its uncertainty is dominated by the uncertainty in the distance to the Large Magellanic Cloud (LMC), which, being our second-closest galaxy, serves as the best anchor point for the cosmic distance scale. Observations of eclipsing binaries offer a unique opportunity to measure stellar parameters and distances precisely and accurately. The eclipsing-binary method was previously applied to the LMC, but the accuracy of the distance results was lessened by the need to model the bright, early-type systems used in those studies. Here we report determinations of the distances to eight long-period, late-type eclipsing systems in the LMC, composed of cool, giant stars. For these systems, we can accurately measure both the linear and the angular sizes of their components and avoid the most important problems related to the hot, early-type systems. The LMC distance that we derive from these systems (49.97 ± 0.19 (statistical) ± 1.11 (systematic) kiloparsecs) is accurate to 2.2 per cent and provides a firm base for a 3-per-cent determination of the Hubble constant, with prospects for improvement to 2 per cent in the future.
The Hetu'u Global Network: Measuring the Distance to the Sun with the Transit of Venus
NASA Astrophysics Data System (ADS)
Rodriguez, David; Faherty, J.
2013-01-01
In the spirit of historic astronomical endeavors, we invited school groups across the globe to collaborate in a solar distance measurement using the 2012 transit of Venus. In total, our group (stationed at Easter Island, Chile) recruited 19 school groups spread over 6 continents and 10 countries to participate in our Hetu'u Global Network. Applying the methods of French astronomer Joseph-Nicolas Delisle, we used individual second and third Venus-Sun contact times to calculate the distance to the Sun. Ten of the sites in our network had amiable weather; 8 of which measured second contact and 5 of which measured third contact, leading to consistent solar distance measurements of 152+/-30 million km and 163+/-30 million km, respectively. The distance to the Sun at the time of the transit was 152.25 million km; therefore, our measurements are also consistent with the known value within 1 sigma. The goal of our international school group network was to inspire the next generation of scientists using the excitement and accessibility of such a rare astronomical event. In the process, we connected hundreds of participating students representing a diverse, multi-cultural group with differing political, economic, and racial backgrounds.
Near field wireless power transfer using curved relay resonators for extended transfer distance
NASA Astrophysics Data System (ADS)
Zhu, D.; Clare, L.; Stark, B. H.; Beeby, S. P.
2015-12-01
This paper investigates the performance of a near field wireless power transfer system that uses curved relay resonators to extend the transfer distance. Near field wireless power transfer operates based on the near-field electromagnetic coupling of coils. Such a system can transfer energy over a relatively short distance, of the same order as the dimensions of the coupled coils. The energy transfer distance can be increased using flat relay resonators. Recent developments in printed electronics and e-textiles have seen increasing demand for embedding electronics into fabrics. Near field wireless power transfer is one of the most promising methods to power electronics on fabrics. The concept can be applied to body-worn textiles by, for example, integrating a transmitter coil into upholstery and a flexible receiver coil into garments. Flexible textile coils take on the shape of the supporting materials such as garments, and therefore curved resonator and receiver coils are investigated in this work. Experimental results showed that using a curved relay resonator can effectively extend the wireless power transfer distance. However, as the curvature of the coil increases, the performance of the wireless power transfer, especially the maximum received power, deteriorates.
ERIC Educational Resources Information Center
Vrasidas, Charalambos, Ed.; Glass, Gene V., Ed.
This book describes the current state of developments in distance education and distributed learning. The volume brings together some of the leading contemporary contributors in the areas of educational technology and distance education. Topics covered include research and evaluation in distance education, online communities, faculty productivity,…
The application of vector concepts on two skew lines
NASA Astrophysics Data System (ADS)
Alghadari, F.; Turmudi; Herman, T.
2018-01-01
The purpose of this study is to show how vector concepts apply to two skew lines in three-dimensional (3D) coordinates and how this application can be used. Several mathematical concepts are functionally related to one another, but the relationship between vectors and 3D geometry has not been exploited in classroom learning. In fact, studies show that female students have more difficulty learning 3D geometry than male students, a difference attributed to personal spatial intelligence. Making the relevance of vector concepts explicit enables the learning achievement and mathematical ability of male and female students to be balanced. Distances on a cube, cuboid, or pyramid can be drawn in rectangular coordinates of points in space, and two coordinate points on a line define a vector. Two skew lines have a shortest distance and an angle between them. To calculate the shortest distance, each line is first represented as a vector using the position-vector concept; next, a normal vector of the two direction vectors is obtained by the cross product; then a connecting vector is formed from a pair of points, one on each line, and the shortest distance is the scalar orthogonal projection of this connecting vector onto the normal vector. To calculate the angle, the dot product of the two direction vectors is taken, and the inverse cosine yields the angle. Applications include mathematics learning and the orthographic projection method.
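The construction described above admits a short worked example; the points and direction vectors below are hypothetical.

```python
# Shortest distance and angle between two skew lines via cross and dot products.
import numpy as np

p1, u = np.array([0., 0., 0.]), np.array([1., 0., 0.])   # line 1: p1 + s*u
p2, v = np.array([0., 1., 0.]), np.array([0., 0., 1.])   # line 2: p2 + t*v

n = np.cross(u, v)                              # normal to both direction vectors
distance = abs((p2 - p1) @ n) / np.linalg.norm(n)   # projection of connecting vector
cos_angle = abs(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
angle = np.degrees(np.arccos(cos_angle))

print(distance, angle)    # -> 1.0 and 90.0 for these lines
```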
Waveform inversion of acoustic waves for explosion yield estimation
Kim, K.; Rodgers, A. J.
2016-07-08
We present a new waveform inversion technique to estimate the energy of near-surface explosions using atmospheric acoustic waves. Conventional methods often employ air blast models based on a homogeneous atmosphere, where the acoustic wave propagation effects (e.g., refraction and diffraction) are not taken into account, and therefore, their accuracy decreases with increasing source-receiver distance. In this study, three-dimensional acoustic simulations are performed with a finite difference method in realistic atmospheres and topography, and the modeled acoustic Green's functions are incorporated into the waveform inversion for the acoustic source time functions. The strength of the acoustic source is related to explosion yield based on a standard air blast model. The technique was applied to local explosions (<10 km) and provided reasonable yield estimates (<~30% error) in the presence of realistic topography and atmospheric structure. In conclusion, the presented method can be extended to explosions recorded at far distance provided proper meteorological specifications.
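The inversion step can be sketched in least-squares form: with a modeled Green's function g, the recorded trace is d ≈ g convolved with the source time function s, so s is recovered from the convolution matrix. The Green's function below is a toy pulse, not a finite-difference simulation.

```python
# Recover a source time function by least squares on a convolution matrix.
import numpy as np
from scipy.linalg import toeplitz

nt, ns = 400, 100
t = np.arange(nt) * 0.01
g = np.exp(-((t - 0.5) / 0.05) ** 2) * np.sin(40 * t)         # toy Green's function
s_true = np.exp(-((np.arange(ns) * 0.01 - 0.2) / 0.03) ** 2)  # "explosion" STF

G = toeplitz(g, np.zeros(ns))            # convolution expressed as a matrix
d = G @ s_true + 0.01 * np.random.default_rng(4).normal(size=nt)
s_est, *_ = np.linalg.lstsq(G, d, rcond=None)
print(np.max(np.abs(s_est - s_true)))    # residual should be small
```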
Senning, Eric N.; Aman, Teresa K.
2016-01-01
Biological membranes are complex assemblies of lipids and proteins that serve as platforms for cell signaling. We have developed a novel method for measuring the structure and dynamics of the membrane based on fluorescence resonance energy transfer (FRET). The method marries four technologies: (1) unroofing cells to isolate and access the cytoplasmic leaflet of the plasma membrane; (2) patch-clamp fluorometry (PCF) to measure currents and fluorescence simultaneously from a membrane patch; (3) a synthetic lipid with a metal-chelating head group to decorate the membrane with metal-binding sites; and (4) transition metal ion FRET (tmFRET) to measure short distances between a fluorescent probe and a transition metal ion on the membrane. We applied this method to measure the density and affinity of native and introduced metal-binding sites in the membrane. These experiments pave the way for measuring structural rearrangements of membrane proteins relative to the membrane. PMID:26755772
NASA Astrophysics Data System (ADS)
Xia, Huipeng; Zhan, Lu; Xie, Bing
2017-02-01
A novel method for preparing ultrafine PbS powders, involving sulfurization combined with inert gas condensation, is developed in this paper; it is applicable to recycling Pb from the lead paste of spent lead-acid batteries. Initially, the effects of the evaporation and condensation temperatures, the inert gas pressure, the condensation distance, and the substrate on the morphology of the as-obtained PbS ultrafine particles are intensively investigated using sulfur powders and lead particles as reagents. Highly dispersed and homogeneous PbS nanoparticles can be prepared under the optimized conditions, which are 1223 K heating temperature, 573 K condensation temperature, 100 Pa inert gas pressure, and 60 cm condensation distance. Furthermore, this method is successfully applied to recycle Pb from the lead paste of a spent lead-acid battery to prepare PbS ultrafine powders. This work not only provides the theoretical foundation for PbS preparation, but also a novel and efficient method for recycling spent lead-acid batteries into high added-value products.
EEG character identification using stimulus sequences designed to maximize minimal Hamming distance.
Fukami, Tadanori; Shimada, Takamasa; Forney, Elliott; Anderson, Charles W
2012-01-01
In this study, we have improved upon the P300 speller Brain-Computer Interface paradigm by introducing a new character encoding method. Our approach to detecting the intended character is not based on a classification of target and nontarget responses, but on identifying the character which maximizes the difference between P300 amplitudes for target and nontarget stimuli. Each bit in the code corresponds to a flashing character ('1') or a non-flashing one ('0'). Here, the codes were constructed to maximize the minimum Hamming distance between the characters. Electroencephalography was used to identify the characters using a waveform calculated by adding and subtracting the responses to the target and non-target stimuli according to the codes. This stimulus presentation method was applied to a 3×3 character matrix, and the results were compared with those of a conventional P300 speller of the same size. Our method reduced the time until the correct character was obtained by 24%.
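The code-design step can be sketched with a greedy search: pick binary flash codes for the 9 characters of a 3x3 matrix so that the minimum pairwise Hamming distance is as large as possible. The code length, the balance constraint, and the greedy strategy are illustrative choices, not the paper's exact construction.

```python
# Greedily build 9 flash codes with a large minimum Hamming distance.
import numpy as np
from itertools import product

length, n_chars = 12, 9
# Keep roughly balanced codes so every character flashes several times
# ('1' = flash, '0' = no flash).
candidates = [np.array(bits) for bits in product([0, 1], repeat=length)
              if 4 <= sum(bits) <= 8]

codes = [candidates[0]]
while len(codes) < n_chars:
    best = max(candidates,
               key=lambda c: min(int((c != e).sum()) for e in codes))
    codes.append(best)

pairwise = [int((a != b).sum()) for i, a in enumerate(codes) for b in codes[i + 1:]]
print(min(pairwise))   # achieved minimum Hamming distance
```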
Luo, Wei; Qi, Yi
2009-12-01
This paper presents an enhancement of the two-step floating catchment area (2SFCA) method for measuring spatial accessibility, addressing the problem of uniform access within the catchment by applying weights to different travel time zones to account for distance decay. The enhancement is proved to be another special case of the gravity model. When applying this enhanced 2SFCA (E2SFCA) to measure spatial access to primary care physicians in a study area in northern Illinois, we find that it reveals a spatial accessibility pattern that is more consistent with intuition and delineates more spatially explicit health professional shortage areas. It is easy to implement in GIS and straightforward to interpret.
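A minimal E2SFCA sketch follows: step 1 computes each physician site's distance-decayed provider-to-population ratio, and step 2 sums the reachable ratios at each population location. The travel times, populations, physician counts, and zone weights below are hypothetical inputs chosen for illustration.

```python
# Two-step floating catchment area with travel-time-zone weights.
import numpy as np

T = np.array([[5.0, 15.0, 40.0],    # travel time (min): rows = population tracts,
              [25.0, 8.0, 18.0],    # columns = physician sites
              [35.0, 22.0, 9.0]])
pop = np.array([10_000, 6_000, 8_000], dtype=float)
docs = np.array([12.0, 5.0, 9.0])

# Distance-decay weights per travel-time zone (0-10, 10-20, 20-30 min).
W = np.select([T <= 10, T <= 20, T <= 30], [1.0, 0.68, 0.22], default=0.0)

R = docs / (W.T @ pop)    # step 1: weighted physician-to-population ratio per site
A = W @ R                 # step 2: sum reachable ratios at each population tract
print(A)                  # accessibility scores (physicians per capita)
```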
Managing distance and covariate information with point-based clustering.
Whigham, Peter A; de Graaf, Brandon; Srivastava, Rashmi; Glue, Paul
2016-09-01
Geographic perspectives of disease and the human condition often involve point-based observations and questions of clustering or dispersion within a spatial context. These problems involve a finite set of point observations and are constrained by a larger, but finite, set of locations where the observations could occur. Developing a rigorous method for pattern analysis in this context requires handling spatial covariates, a method for constrained finite spatial clustering, and addressing bias in geographic distance measures. An approach, based on Ripley's K and applied to the problem of clustering with deliberate self-harm (DSH), is presented. Point-based Monte-Carlo simulation of Ripley's K, accounting for socio-economic deprivation and sources of distance measurement bias, was developed to estimate clustering of DSH at a range of spatial scales. A rotated Minkowski L1 distance metric allowed variation in physical distance and clustering to be assessed. Self-harm data were derived from an audit of 2 years' emergency hospital presentations (n = 136) in a New Zealand town (population ~50,000). The study area was defined by residential (housing) land parcels representing a finite set of possible point addresses. Area-based deprivation was spatially correlated. Accounting for deprivation and distance bias showed evidence for clustering of DSH for spatial scales up to 500 m with a one-sided 95% CI, suggesting that social contagion may be present for this urban cohort. Many problems involve finite locations in geographic space that require estimates of distance-based clustering at many scales. A Monte-Carlo approach to Ripley's K, incorporating covariates and models for distance bias, is crucial when assessing health-related clustering. The case study showed that social network structure defined at the neighbourhood level may account for aspects of neighbourhood clustering of DSH. Accounting for covariate measures that exhibit spatial clustering, such as deprivation, is crucial when assessing point-based clustering.
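The constrained Monte-Carlo idea can be sketched as follows: observed cases live on a finite set of candidate addresses, and the null distribution is simulated by drawing case locations from those same addresses, optionally weighted by a deprivation covariate. All coordinates and weights below are synthetic, and the simple Euclidean K ignores the paper's rotated Minkowski metric and edge corrections.

```python
# Constrained Monte-Carlo envelope for Ripley's K on a finite address set.
import numpy as np

rng = np.random.default_rng(5)
addresses = rng.uniform(0, 5000, (2000, 2))        # all residential parcels (m)
weights = rng.uniform(0.5, 2.0, 2000)              # e.g., deprivation weights
cases = addresses[rng.choice(2000, 136, replace=False)]

def k_hat(pts, r, area=5000.0 ** 2):
    n = len(pts)
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    counts = ((d < r) & (d > 0)).sum()
    return area * counts / (n * (n - 1))

r = 500.0
obs = k_hat(cases, r)
null = [k_hat(addresses[rng.choice(2000, 136, replace=False,
                                   p=weights / weights.sum())], r)
        for _ in range(199)]
print(obs, np.percentile(null, 95))    # obs above the envelope -> clustering
```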
NASA Astrophysics Data System (ADS)
Beaton, Rachael L.; Freedman, Wendy L.; Madore, Barry F.; Bono, Giuseppe; Carlson, Erika K.; Clementini, Gisella; Durbin, Meredith J.; Garofalo, Alessia; Hatt, Dylan; Jang, In Sung; Kollmeier, Juna A.; Lee, Myung Gyoon; Monson, Andrew J.; Rich, Jeffrey A.; Scowcroft, Victoria; Seibert, Mark; Sturch, Laura; Yang, Soung-Chul
2016-12-01
We present an overview of the Carnegie-Chicago Hubble Program, an ongoing program to obtain a 3% measurement of the Hubble constant (H 0) using alternative methods to the traditional Cepheid distance scale. We aim to establish a completely independent route to H 0 using RR Lyrae variables, the tip of the red giant branch (TRGB), and Type Ia supernovae (SNe Ia). This alternative distance ladder can be applied to galaxies of any Hubble type, of any inclination, and, using old stars in low-density environments, is robust to the degenerate effects of metallicity and interstellar extinction. Given the relatively small number of SNe Ia host galaxies with independently measured distances, these properties provide a great systematic advantage in the measurement of H 0 via the distance ladder. Initially, the accuracy of our value of H 0 will be set by the five Galactic RR Lyrae calibrators with Hubble Space Telescope Fine-Guidance Sensor parallaxes. With Gaia, both the RR Lyrae zero-point and TRGB method will be independently calibrated, the former with at least an order of magnitude more calibrators and the latter directly through parallax measurement of tip red giants. As the first end-to-end “distance ladder” completely independent of both Cepheid variables and the Large Magellanic Cloud, this path to H 0 will allow for the high-precision comparison at each rung of the traditional distance ladder that is necessary to understand tensions between this and other routes to H 0. Based on observations made with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with programs #13472 and #13691.
Gusto, Gaelle; Schbath, Sophie
2005-01-01
We propose an original statistical method to estimate how the occurrences of a given process along a genome, genes or motifs for instance, may be influenced by the occurrences of a second process. More precisely, the aim is to detect avoided and/or favored distances between two motifs, for instance, suggesting possible interactions at a molecular level. For this, we consider occurrences along the genome as point processes and we use the so-called Hawkes' model. In such a model, the intensity at position t depends linearly on the distances to past occurrences of both processes via two unknown profile functions to be estimated. We perform a nonparametric estimation of both profiles by using B-spline decompositions and a constrained maximum likelihood method. Finally, we use the AIC criterion for model selection. Simulations show the excellent behavior of our estimation procedure. We then apply it to study (i) the dependence between gene occurrences along the E. coli genome and the occurrences of a motif known to be part of the major promoter for this bacterium, and (ii) the dependence between the yeast S. cerevisiae genes and the occurrences of putative polyadenylation signals. The results are coherent with known biological properties or previous predictions, meaning this method can be of great interest for functional motif detection, or to improve knowledge of some biological mechanisms.
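The Hawkes-type intensity in this model is easy to sketch: the intensity of process A at position t is a baseline plus linear contributions from the distances to past occurrences of A and of a second process B. The step-function profiles below are hypothetical stand-ins for the paper's B-spline estimates.

```python
# Evaluate a two-process Hawkes-type intensity with toy profile functions.
import numpy as np

mu = 0.02                                            # baseline intensity
h_self = lambda d: np.where(d < 200, -0.01, 0.0)     # avoided short self-distances
h_cross = lambda d: np.where((d > 50) & (d < 150), 0.05, 0.0)  # favored B-distances

def intensity(t, occ_a, occ_b):
    da = t - occ_a[occ_a < t]            # distances to past A occurrences
    db = t - occ_b[occ_b < t]            # distances to past B occurrences
    return max(mu + h_self(da).sum() + h_cross(db).sum(), 0.0)

occ_a = np.array([100.0, 1500.0, 2600.0])    # e.g., gene starts (bp)
occ_b = np.array([2500.0, 5200.0])           # e.g., motif occurrences (bp)
print(intensity(2620.0, occ_a, occ_b))       # -> 0.06 for these toy inputs
```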
NASA Astrophysics Data System (ADS)
Talamonti, James J.; Kay, Richard B.; Krebs, Danny J.
1996-05-01
A numerical model was developed to emulate the capabilities of systems performing noncontact absolute distance measurements. The model incorporates known methods to minimize signal processing and digital sampling errors and evaluates the accuracy limitations imposed by spectral peak isolation using Hanning, Blackman, and Gaussian windows in the fast Fourier transform technique. We applied this model to the specific case of measuring the relative lengths of a compound Michelson interferometer. By processing computer-simulated data through our model, we project the ultimate precision for ideal data and for data containing AM-FM noise. The precision is shown to be limited by nonlinearities in the laser scan.
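The windowed-FFT peak isolation being compared above can be sketched on a synthetic beat tone: each window trades main-lobe width against side-lobe leakage, and zero-padding refines the peak location. The signal parameters below are arbitrary.

```python
# Locate a spectral peak under Hanning, Blackman, and Gaussian windows.
import numpy as np
from scipy.signal import get_window

fs, n = 1.0e6, 4096
t = np.arange(n) / fs
f_beat = 12_345.6                           # beat frequency encoding distance
x = np.sin(2 * np.pi * f_beat * t) + 0.01 * np.random.default_rng(6).normal(size=n)

for name in ["hann", "blackman", ("gaussian", 512)]:
    w = get_window(name, n)
    spec = np.abs(np.fft.rfft(x * w, n=8 * n))   # zero-padding refines the peak
    freqs = np.fft.rfftfreq(8 * n, 1 / fs)
    print(name, freqs[np.argmax(spec)])
```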
Sexual dimorphism in the human face assessed by euclidean distance matrix analysis.
Ferrario, V F; Sforza, C; Pizzini, G; Vogel, G; Miani, A
1993-01-01
The form of any object can be viewed as a combination of size and shape. A recently proposed method (euclidean distance matrix analysis) can differentiate between size and shape differences. It has been applied to analyse the sexual dimorphism in facial form in a sample of 108 healthy young adults (57 men, 51 women). The face was wider and longer in men than in women. A global shape difference was demonstrated, the male face being more rectangular and the female face more square. Gender variations involved especially the lower third of the face and, in particular, the position of the pogonion relative to the other structures. PMID:8300436
NASA Astrophysics Data System (ADS)
Yoo, S. H.
2017-12-01
Monitoring seismologists have successfully used seismic coda for event discrimination and yield estimation for over a decade. In practice seismologists typically analyze long-duration, S-coda signals with high signal-to-noise ratios (SNR) at regional and teleseismic distances, since the single back-scattering model reasonably predicts decay of the late coda. However, seismic monitoring requirements are shifting towards smaller, locally recorded events that exhibit low SNR and short signal lengths. To be successful at characterizing events recorded at local distances, we must utilize the direct-phase arrivals, as well as the earlier part of the coda, which is dominated by multiple forward scattering. To remedy this problem, we have developed a new hybrid method known as full-waveform envelope template matching to improve predicted envelope fits over the entire waveform and account for direct-wave and early coda complexity. We accomplish this by including a multiple forward-scattering approximation in the envelope modeling of the early coda. The new hybrid envelope templates are designed to fit local and regional full waveforms and produce low-variance amplitude estimates, which will improve yield estimation and discrimination between earthquakes and explosions. To demonstrate the new technique, we applied our full-waveform envelope template-matching method to the six known North Korean (DPRK) underground nuclear tests and four aftershock events following the September 2017 test. We successfully discriminated the event types and estimated the yield for all six nuclear tests. We also applied the same technique to the 2015 Tianjin explosions in China, and another suspected low-yield explosion at the DPRK test site on May 12, 2010. Our results show that the new full-waveform envelope template-matching method significantly improves upon longstanding single-scattering coda prediction techniques. More importantly, the new method allows monitoring seismologists to extend coda-based techniques to lower magnitude thresholds and low-yield local explosions.
Deducing the Kinetics of Protein Synthesis In Vivo from the Transition Rates Measured In Vitro
Rudorf, Sophia; Thommen, Michael; Rodnina, Marina V.; Lipowsky, Reinhard
2014-01-01
The molecular machinery of life relies on complex multistep processes that involve numerous individual transitions, such as molecular association and dissociation steps, chemical reactions, and mechanical movements. The corresponding transition rates can be typically measured in vitro but not in vivo. Here, we develop a general method to deduce the in-vivo rates from their in-vitro values. The method has two basic components. First, we introduce the kinetic distance, a new concept by which we can quantitatively compare the kinetics of a multistep process in different environments. The kinetic distance depends logarithmically on the transition rates and can be interpreted in terms of the underlying free energy barriers. Second, we minimize the kinetic distance between the in-vitro and the in-vivo process, imposing the constraint that the deduced rates reproduce a known global property such as the overall in-vivo speed. In order to demonstrate the predictive power of our method, we apply it to protein synthesis by ribosomes, a key process of gene expression. We describe the latter process by a codon-specific Markov model with three reaction pathways, corresponding to the initial binding of cognate, near-cognate, and non-cognate tRNA, for which we determine all individual transition rates in vitro. We then predict the in-vivo rates by the constrained minimization procedure and validate these rates by three independent sets of in-vivo data, obtained for codon-dependent translation speeds, codon-specific translation dynamics, and missense error frequencies. In all cases, we find good agreement between theory and experiment without adjusting any fit parameter. The deduced in-vivo rates lead to smaller error frequencies than the known in-vitro rates, primarily by an improved initial selection of tRNA. The method introduced here is relatively simple from a computational point of view and can be applied to any biomolecular process, for which we have detailed information about the in-vitro kinetics. PMID:25358034
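The constrained minimization described above can be sketched numerically. Following the abstract, the kinetic distance is taken here as the Euclidean norm of log rate ratios, which is one form consistent with the stated logarithmic dependence; the rates, the series-steps speed model, and the target speed are toy stand-ins for the codon-specific Markov model.

```python
# Deduce "in-vivo" rates: minimize a log-rate distance under a speed constraint.
import numpy as np
from scipy.optimize import minimize

k_vitro = np.array([100.0, 25.0, 5.0, 500.0])      # in-vitro rates (1/s)
target_speed = 12.0                                 # known overall in-vivo speed

def overall_speed(k):
    return 1.0 / np.sum(1.0 / k)       # toy model: sequential steps in series

def kinetic_distance(logk):
    return np.sum((logk - np.log(k_vitro)) ** 2)

cons = {"type": "eq", "fun": lambda lk: overall_speed(np.exp(lk)) - target_speed}
res = minimize(kinetic_distance, np.log(k_vitro), constraints=cons)
print(np.exp(res.x))                   # deduced rates closest to in-vitro values
```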
NASA Astrophysics Data System (ADS)
Périllat, Raphaël; Girard, Sylvain; Korsakissok, Irène; Mallet, Vivien
2015-04-01
In a previous study, the sensitivity of a long-distance model was analyzed on the Fukushima Daiichi disaster case with the Morris screening method. It showed that a few variables, such as the horizontal diffusion coefficient or cloud thickness, have a weak influence on most of the chosen outputs. The purpose of the present study is to apply a similar methodology to the IRSN's operational short-distance atmospheric dispersion model, called pX. Atmospheric dispersion models are very useful in case of accidental releases of pollutants, to minimize the population exposure during the accident and to obtain an accurate assessment of the short- and long-term environmental and sanitary impact. Long-range models are mostly used for consequence assessment, while short-range models are more adapted to the early phases of the crisis and are used to make prognoses. The Morris screening method was used to estimate the sensitivity of a set of outputs and to rank the inputs by their influence. The input ranking is highly dependent on the considered output, but a few variables seem to have a weak influence on most of them. This first step revealed that interactions and non-linearity are much more pronounced with the short-range model than with the long-range one. Afterward, the Sobol method was used to obtain more quantitative results on the same set of outputs. Using this method was possible for the short-range model because it is far less computationally demanding than the long-range model. The study also confronts two parameterizations, Doury's and Pasquill's models, to contrast their behavior. Doury's model seems to excessively inflate the influence of some inputs compared to Pasquill's model, such as the altitude of emission and the air stability, which do not have the same role in the two models. The outputs of the long-range model were dominated by only a few inputs. On the contrary, in this study the influence is shared more evenly among the inputs.
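A minimal Morris screening sketch is given below: one-at-a-time elementary effects, averaged over random starting points, rank the influence of inputs on a model output. The "dispersion model" here is a toy function standing in for pX, and the sampling scheme is simplified relative to full Morris trajectories.

```python
# Morris-style elementary effects for a toy three-input model.
import numpy as np

def model(x):   # toy stand-in: output depends strongly on x0, weakly on x2
    return np.sin(3 * x[0]) + 0.5 * x[1] ** 2 + 0.01 * x[2]

rng = np.random.default_rng(12)
k, r, delta = 3, 50, 0.1
effects = np.zeros((r, k))
for trial in range(r):
    x = rng.uniform(0, 1, k)
    for i in range(k):
        dx = np.zeros(k)
        dx[i] = delta
        effects[trial, i] = (model(x + dx) - model(x)) / delta

mu_star = np.abs(effects).mean(axis=0)   # Morris mu*: mean |elementary effect|
sigma = effects.std(axis=0)              # non-linearity / interaction indicator
print(mu_star, sigma)
```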
Puszka, Agathe; Hervé, Lionel; Planat-Chrétien, Anne; Koenig, Anne; Derouard, Jacques; Dinten, Jean-Marc
2013-01-01
We show how to apply the Mellin-Laplace transform to process time-resolved reflectance measurements for diffuse optical tomography. We illustrate this method on simulated signals incorporating the main sources of experimental noise and suggest how to fine-tune the method in order to detect the deepest absorbing inclusions and optimize their localization in depth, depending on the dynamic range of the measurement. Finally, we apply this method to measurements acquired with a setup including a femtosecond laser, photomultipliers, and a time-correlated single photon counting board. Simulations and experiments are illustrated for a probe with an interfiber distance of 1.5 cm and show the potential of time-resolved techniques for imaging absorption contrast in depth with this geometry. PMID:23577292
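One assumed form of the (p, n) Mellin-Laplace moments of a time-resolved signal can be sketched as below; the normalization convention and the value of p are assumptions, since conventions differ between papers:

```python
import math
import numpy as np

def mellin_laplace_moments(x, t, p=3e9, n_max=6):
    """Moments of a time-resolved signal under an exponential window,
    one assumed form of the (p, n) Mellin-Laplace transform."""
    dt = t[1] - t[0]
    return np.array([p ** (n + 1) / math.factorial(n)
                     * np.sum(x * t ** n * np.exp(-p * t)) * dt
                     for n in range(n_max + 1)])

t = np.linspace(0, 5e-9, 2000)        # 5 ns acquisition window
tpsf = t * np.exp(-t / 5e-10)         # toy photon time-of-flight curve
print(mellin_laplace_moments(tpsf, t))
```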
NASA Astrophysics Data System (ADS)
Ortega, R.; Gutierrez, E.; Carciumaru, D. D.; Huesca-Perez, E.
2017-12-01
We present a method to compute the conditional and unconditional probability density functions (PDFs) of the finite fault distance distribution (FFDD). Two cases are described: lines and areas. The case of lines has a simple analytical solution, while in the case of areas the geometrical probability of a fault based on the strike, dip, and fault segment vertices is obtained using the projection of spheres onto a piecewise rectangular surface. The cumulative distribution is computed by measuring the projection of a sphere of radius r in an effective area using an algorithm that estimates the area of a circle within a rectangle. In addition, we introduce the finite fault distance metric: the distance at which the maximum stress release occurs within the fault plane, generating the peak ground motion. The appropriate ground motion prediction equations (GMPEs) for PSHA can then be applied. The conditional probability of distance given magnitude is also presented using different scaling laws. A simple model with a constant distribution of the centroid at the geometrical mean is discussed; in this model, hazard is reduced at the edges because the effective size is reduced. Nowadays there is a trend toward using extended-source distances in PSHA; however, it is not possible to separate the fault geometry from the GMPE. With this new approach, it is possible to add fault rupture models, separating geometrical and propagation effects.
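For the line case, the distance distribution can also be checked by simple Monte Carlo sampling; the fault trace and site below are made-up coordinates, and this sketch is a numerical cross-check rather than the paper's closed-form solution:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = np.array([0.0, 0.0]), np.array([40.0, 0.0])  # fault trace endpoints (km)
site = np.array([15.0, 25.0])                       # site of interest (km)

u = rng.random(100_000)[:, None]                    # uniform positions on trace
r = np.linalg.norm(a + u * (b - a) - site, axis=1)  # site-to-rupture distances
pdf, edges = np.histogram(r, bins=50, density=True) # empirical FFDD
print(r.min(), r.mean(), r.max())
```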
Enhancing multi-view autostereoscopic displays by viewing distance control (VDC)
NASA Astrophysics Data System (ADS)
Jurk, Silvio; Duckstein, Bernd; Renault, Sylvain; Kuhlmey, Mathias; de la Barré, René; Ebner, Thomas
2014-03-01
Conventional multi-view displays spatially interlace various views of a 3D scene and form appropriate viewing channels. However, they only support sufficient stereo quality within a limited range around the nominal viewing distance (NVD). If this distance is maintained, two slightly divergent views are projected to the person's eyes, both covering the entire screen. With increasing deviation from the NVD, the stereo image quality decreases. As a major drawback in usability, this distance has so far been assigned by the manufacturer. We propose a software-based solution that corrects false view assignments depending on the distance of the viewer. Our novel approach enables continuous view adaptation based on the calculation of intermediate views and a column-by-column rendering method. The algorithm controls each individual subpixel and generates a new interleaving pattern from selected views. In addition, we use color-coded test content to verify its efficacy. This novel technology helps shift the physically determined NVD to a user-defined distance, thereby supporting stereopsis. The resulting viewing positions can fall in front of or behind the NVD of the original setup. Our algorithm can be applied to all multi-view autostereoscopic displays, independent of the ascent or the periodicity of the optical element. In general, the viewing distance can be corrected by a factor of more than 2.5. By creating a continuous viewing area, the visualized 3D content is suitable even for persons with largely divergent interocular distances, adults and children alike, without any deficiency in spatial perception.
DISTANCES TO DARK CLOUDS: COMPARING EXTINCTION DISTANCES TO MASER PARALLAX DISTANCES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foster, Jonathan B.; Jackson, James M.; Stead, Joseph J.
We test two different methods of using near-infrared extinction to estimate distances to dark clouds in the first quadrant of the Galaxy using large near-infrared (Two Micron All Sky Survey and UKIRT Infrared Deep Sky Survey) surveys. Very long baseline interferometry parallax measurements of masers around massive young stars provide the most direct and bias-free measurement of the distance to these dark clouds. We compare the extinction distance estimates to these maser parallax distances. We also compare these distances to kinematic distances, including recent re-calibrations of the Galactic rotation curve. The extinction distance methods agree with the maser parallax distances (within the errors) between 66% and 100% of the time (depending on method and input survey) and between 85% and 100% of the time outside of the crowded Galactic center. Although the sample size is small, extinction distance methods reproduce maser parallax distances better than kinematic distances; furthermore, extinction distance methods do not suffer from the kinematic distance ambiguity. This validation gives us confidence that these extinction methods may be extended to additional dark clouds where maser parallaxes are not available.
Reconstruction of phylogenetic trees of prokaryotes using maximal common intervals.
Heydari, Mahdi; Marashi, Sayed-Amir; Tusserkani, Ruzbeh; Sadeghi, Mehdi
2014-10-01
One of the fundamental problems in bioinformatics is phylogenetic tree reconstruction, which can be used for classifying living organisms into different taxonomic clades. The classical approach to this problem is based on a marker such as 16S ribosomal RNA. Since evolutionary events like genomic rearrangements are not included in reconstructions of phylogenetic trees based on single genes, much effort has been made in recent years to find other characteristics for phylogenetic reconstruction. With the increasing availability of completely sequenced genomes, gene order can be considered as a new solution for this problem. In the present work, we applied maximal common intervals (MCIs) in two or more genomes to infer their distance and to reconstruct their evolutionary relationship. Additionally, measures based on uncommon segments (UCSs), i.e., those genomic segments which are not detected as part of any of the MCIs, are also used for phylogenetic tree reconstruction. We applied these two types of measures to reconstruct the phylogenetic tree of 63 prokaryotes with known COG (clusters of orthologous groups) families. The similarity between the MCI-based (resp. UCS-based) reconstructed phylogenetic trees and the phylogenetic tree obtained from the NCBI taxonomy browser is as high as 93.1% (resp. 94.9%). We show that in the case of this diverse dataset of prokaryotes, tree reconstruction based on MCIs and UCSs outperforms most of the currently available methods based on gene orders, including breakpoint distance and DCJ. We additionally tested our new measures on a dataset of 13 closely related bacteria from the genus Prochlorococcus. In this case, distances like rearrangement distance, breakpoint distance and DCJ proved to be useful, while our new measures are still appropriate for phylogenetic reconstruction.
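For context, the breakpoint distance that the paper uses as a baseline is easy to state in code; the gene orders below are toy permutations, and a real MCI computation is considerably more involved:

```python
def breakpoint_distance(g1, g2):
    """Count adjacencies of g1 that are absent (in either orientation)
    from g2: a classical gene-order distance used as a baseline."""
    adj = set()
    for a, b in zip(g2, g2[1:]):
        adj.add((a, b))
        adj.add((b, a))
    return sum((a, b) not in adj for a, b in zip(g1, g1[1:]))

print(breakpoint_distance([1, 2, 3, 4, 5], [1, 3, 2, 4, 5]))  # -> 2
```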
Interactive visual exploration and analysis of origin-destination data
NASA Astrophysics Data System (ADS)
Ding, Linfang; Meng, Liqiu; Yang, Jian; Krisp, Jukka M.
2018-05-01
In this paper, we propose a visual analytics approach for the exploration of spatiotemporal interaction patterns in massive origin-destination data. Firstly, we visually query the movement database for data at certain time windows. Secondly, we conduct interactive clustering to allow the users to select input variables/features (e.g., origins, destinations, distance, and duration) and to adjust clustering parameters (e.g., distance threshold). The agglomerative hierarchical clustering method is applied for the multivariate clustering of the origin-destination data. Thirdly, we design a parallel coordinates plot for visualizing the precomputed clusters and for further exploration of interesting clusters. Finally, we propose a gradient line rendering technique to show the spatial and directional distribution of origin-destination clusters on a map view. We implement the visual analytics approach in a web-based interactive environment and apply it to real-world floating car data from Shanghai. The experimental results show the origin/destination hotspots and their spatial interaction patterns, and demonstrate the effectiveness of our proposed approach.
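A minimal sketch of the clustering step, using SciPy's agglomerative hierarchical clustering on standardized origin-destination features, is given below; the feature layout and the distance threshold are illustrative assumptions:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# columns: origin_x, origin_y, dest_x, dest_y, distance_km, duration_min
od = np.random.default_rng(1).random((500, 6))
od_std = (od - od.mean(axis=0)) / od.std(axis=0)    # one scale per feature

Z = linkage(od_std, method="average")               # agglomerative clustering
labels = fcluster(Z, t=2.5, criterion="distance")   # cut at a distance threshold
print(len(set(labels)), "clusters")
```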
NASA Astrophysics Data System (ADS)
Chawla, Viveak Kumar; Chanda, Arindam Kumar; Angra, Surjit
2018-03-01
A flexible manufacturing system (FMS) consists of several programmable production work centers, material handling systems (MHSs), assembly stations, and automatic storage and retrieval systems. In an FMS, automatic guided vehicles (AGVs) play a vital role in material handling operations and enhance the performance of the FMS in its overall operations. To achieve a low makespan and a high throughput yield in FMS operations, it is highly imperative to integrate the production work center schedules with the AGV schedules. The production schedule for work centers is generated by applying the Giffler and Thompson algorithm under four kinds of hybrid priority dispatching rules. The clonal selection algorithm (CSA) is then applied for simultaneous scheduling to reduce backtracking as well as the travel distance of AGVs within the FMS facility. The proposed procedure is computationally tested on a benchmark FMS configuration from the literature, and the findings clearly indicate that the CSA yields the best results in comparison with the other methods applied in the literature.
Eng, K.; Tasker, Gary D.; Milly, P.C.D.
2005-01-01
Region-of-influence (RoI) approaches for estimating streamflow characteristics at ungaged sites were applied and evaluated in a case study of the 50-year peak discharge in the Gulf-Atlantic Rolling Plains of the southeastern United States. Linear regression against basin characteristics was performed for each ungaged site considered, based on data from a region of influence containing the n closest gages in predictor-variable (PRoI) or geographic (GRoI) space. Augmentation of this count-based cutoff by a distance-based cutoff was also considered. Prediction errors were evaluated for an independent (split-sampled) dataset. For the dataset and metrics considered here: (1) for either PRoI or GRoI, optimal results were found when the simpler count-based cutoff, rather than the distance-augmented cutoff, was used; (2) GRoI produced lower error than PRoI when applied indiscriminately over the entire study region; and (3) PRoI performance improved considerably when the RoI was restricted to predefined geographic subregions.
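The PRoI idea, regressing on the n closest gages in predictor-variable space, can be sketched as follows; the count cutoff and the plain least-squares fit are simplifying assumptions, and the data are synthetic:

```python
import numpy as np

def proi_estimate(x_new, X_gaged, y_gaged, n=20):
    """Fit a linear regression on the n nearest gages in predictor space
    and predict the flow statistic at the ungaged site."""
    d = np.linalg.norm(X_gaged - x_new, axis=1)      # predictor-space distance
    idx = np.argsort(d)[:n]                          # count-based cutoff
    A = np.column_stack([np.ones(n), X_gaged[idx]])
    coef, *_ = np.linalg.lstsq(A, y_gaged[idx], rcond=None)
    return coef @ np.concatenate([[1.0], x_new])

rng = np.random.default_rng(0)
X, y = rng.random((200, 3)), rng.random(200)         # toy basin characteristics
print(proi_estimate(np.array([0.4, 0.6, 0.2]), X, y))
```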
Computer analysis of ATR-FTIR spectra of paint samples for forensic purposes
NASA Astrophysics Data System (ADS)
Szafarska, Małgorzata; Woźniakiewicz, Michał; Pilch, Mariusz; Zięba-Palus, Janina; Kościelniak, Paweł
2009-04-01
A method of subtraction and normalization of IR spectra (MSN-IR) was developed and successfully applied to extract mathematically the pure paint spectrum from the spectrum of a paint coat on different bases, both acquired by the Attenuated Total Reflectance Fourier Transform Infrared (ATR-FTIR) technique. The method consists of several stages encompassing several normalization and subtraction processes. The similarity of the spectrum obtained to the reference spectrum was estimated by means of the normalized Manhattan distance. The utility and performance of the proposed method were tested by examination of five different paints sprayed on plastic (polyester) foil and on fabric materials (cotton). It was found that the numerical algorithm applied is able, in contrast to other mathematical approaches conventionally used for the same aim, to reconstruct a pure paint IR spectrum effectively without a loss of the chemical information provided. The approach allows the physical separation of a paint from a base to be avoided, hence the time and workload of analysis to be considerably reduced. The results obtained prove that the method can be considered a useful tool for forensic purposes.
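The similarity measure is simple to state; a sketch under the assumption that "normalized" means area-normalized spectra and a per-channel average is given below (the exact normalization used in the paper may differ):

```python
import numpy as np

def normalized_manhattan(a, b):
    """Manhattan distance between two area-normalized spectra,
    averaged over spectral channels (assumed definition)."""
    a = a / np.abs(a).sum()
    b = b / np.abs(b).sum()
    return np.abs(a - b).sum() / len(a)

wn = np.linspace(600, 4000, 1700)        # wavenumber grid (1/cm)
s1 = np.exp(-((wn - 1730) / 25) ** 2)    # toy carbonyl band
s2 = 0.9 * np.exp(-((wn - 1735) / 25) ** 2)
print(normalized_manhattan(s1, s2))
```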
NASA Astrophysics Data System (ADS)
Starosta, K.; Dewald, A.
2007-04-01
Transition rate measurements are reported for the 2^+_1 and 2^+_2 states in the N=Z nucleus ^64Ge. The measurement was done utilizing the Recoil Distance Method (RDM) and a unique combination of state-of-the-art instruments at the National Superconducting Cyclotron Laboratory (NSCL). The states of interest were populated via an intermediate-energy single-neutron knock-out reaction. RDM studies of knock-out and fragmentation reaction products hold the promise of reaching far from stability and providing lifetime information for intermediate-spin excited states in a wide range of exotic nuclei. Large-scale shell model calculations applying the recently developed GXPF1A interaction are in excellent agreement with the above results. The theoretical analysis suggests that ^64Ge is a collective γ-soft anharmonic vibrator.
Traffic Sign Detection System for Locating Road Intersections and Roundabouts: The Chilean Case
Villalón-Sepúlveda, Gabriel; Torres-Torriti, Miguel; Flores-Calero, Marco
2017-01-01
This paper presents a traffic sign detection method for signs close to road intersections and roundabouts, such as stop and yield (give way) signs. The proposed method relies on statistical templates built using color information for both segmentation and classification. The segmentation method uses the RGB-normalized (ErEgEb) color space for ROIs (Regions of Interest) generation based on a chromaticity filter, where templates at 10 scales are applied to the entire image. Templates consider the mean and standard deviation of normalized color of the traffic signs to build thresholding intervals where the expected color should lie for a given sign. The classification stage employs the information of the statistical templates over YCbCr and ErEgEb color spaces, for which the background has been previously removed by using a probability function that models the probability that the pixel corresponds to a sign given its chromaticity values. This work includes an analysis of the detection rate as a function of the distance between the vehicle and the sign. Such information is useful to validate the robustness of the approach and is often not included in the existing literature. The detection rates, as a function of distance, are compared to those of the well-known Viola–Jones method. The results show that for distances less than 48 m, the proposed method achieves a detection rate of 87.5% and 95.4% for yield and stop signs, respectively. For distances less than 30 m, the detection rate is 100% for both signs. The Viola–Jones approach has detection rates below 20% for distances between 30 and 48 m, and barely improves in the 20–30 m range with detection rates of up to 60%. Thus, the proposed method provides a robust alternative for intersection detection that relies on statistical color-based templates instead of shape information. The experiments employed videos of traffic signs taken in several streets of Santiago, Chile, using a research platform implemented at the Robotics and Automation Laboratory of PUC to develop driver assistance systems. PMID:28587071
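The segmentation stage reduces to an interval test in normalized RGB; a sketch with assumed template statistics follows (the mean and standard deviation would be learned from sign samples):

```python
import numpy as np

def chromaticity_mask(img, mean, std, k=2.0):
    """Flag pixels whose (Er, Eg, Eb) chromaticity lies inside
    mean +/- k*std thresholding intervals from a sign template."""
    rgb = img.astype(float) + 1e-9
    erg = rgb / rgb.sum(axis=2, keepdims=True)       # normalized RGB
    lo, hi = mean - k * std, mean + k * std
    return np.all((erg >= lo) & (erg <= hi), axis=2)

img = np.random.default_rng(0).integers(0, 256, (480, 640, 3))
red_sign = chromaticity_mask(img, np.array([0.55, 0.25, 0.20]),
                             np.array([0.05, 0.04, 0.04]))
print(red_sign.sum(), "candidate pixels")
```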
ERIC Educational Resources Information Center
Larcinese, Valentino
2008-01-01
This article proposes and applies a simple method to measure the distance from a situation of uniform participation. First, a discrepancy index based on the use of generalized Lorenz curves is presented. This index can be expressed in terms of means and Gini indices of relevant characteristics in the populations of participants and that of a…
Hypothesis Support Mechanism for Mid-Level Visual Pattern Recognition
NASA Technical Reports Server (NTRS)
Amador, Jose J (Inventor)
2007-01-01
A method of mid-level pattern recognition provides for a pose invariant Hough Transform by parametrizing pairs of points in a pattern with respect to at least two reference points, thereby providing a parameter table that is scale- or rotation-invariant. A corresponding inverse transform may be applied to test hypothesized matches in an image and a distance transform utilized to quantify the level of match.
Perturbation analyses of intermolecular interactions
NASA Astrophysics Data System (ADS)
Koyama, Yohei M.; Kobayashi, Tetsuya J.; Ueda, Hiroki R.
2011-08-01
Conformational fluctuations of a protein molecule are important to its function, and it is known that environmental molecules, such as water molecules, ions, and ligand molecules, significantly affect the function by changing the conformational fluctuations. However, it is difficult to systematically understand the role of environmental molecules because the intermolecular interactions related to the conformational fluctuations are complicated. To identify important intermolecular interactions with regard to the conformational fluctuations, we develop herein (i) distance-independent and (ii) distance-dependent perturbation analyses of the intermolecular interactions. We show that these perturbation analyses can be realized by performing (i) a principal component analysis using conditional expectations of truncated and shifted intermolecular potential energy terms and (ii) a functional principal component analysis using products of intermolecular forces and conditional cumulative densities. We refer to these analyses as intermolecular perturbation analysis (IPA) and distance-dependent intermolecular perturbation analysis (DIPA), respectively. For comparison of the IPA and the DIPA, we apply them to the alanine dipeptide isomerization in explicit water. Although the first IPA principal components discriminate two states (the α state and the PPII (polyproline II) + β states) for larger cutoff lengths, the separation between the PPII state and the β state is unclear in the second IPA principal components. On the other hand, for large cutoff values, the DIPA eigenvalues converge faster than those for the IPA, and the top two DIPA principal components clearly identify the three states. By using the DIPA biplot, the contributions of the dipeptide-water interactions to each state are analyzed systematically. Since the DIPA improves the state identification and the convergence rate while retaining distance information, we conclude that the DIPA is a more practical method compared with the IPA. To test the feasibility of the DIPA for larger molecules, we apply the DIPA to the ten-residue chignolin folding in explicit water. The top three principal components identify the four states (native state, two misfolded states, and unfolded state) and their corresponding eigenfunctions identify important chignolin-water interactions for each state. Thus, the DIPA provides a practical method to identify conformational states and their corresponding important intermolecular interactions with distance information.
NASA Astrophysics Data System (ADS)
Abedi Gheshlaghi, Hassan; Feizizadeh, Bakhtiar
2017-09-01
Landslides in mountainous areas cause major damage to residential areas, roads, and farmland. Hence, one of the basic measures to reduce the possible damage is to identify landslide-prone areas through landslide mapping with different models and methods. The purpose of this study is to evaluate the efficacy of combining two models, the analytic network process (ANP) and fuzzy logic, for landslide risk mapping in the Azarshahr Chay basin in northwest Iran. After field investigations and a review of the research literature, factors affecting the occurrence of landslides, including slope, slope aspect, altitude, lithology, land use, vegetation density, rainfall, distance to faults, distance to roads, and distance to rivers, along with a map of the distribution of past landslides, were prepared in a GIS environment. Fuzzy logic was then used to weight the sub-criteria, and the ANP was applied to weight the criteria. Next, they were integrated using GIS spatial analysis methods and the landslide risk map was produced. Evaluating the results with receiver operating characteristic curves shows that the hybrid model, with an area under the curve of 0.815, has good accuracy. According to the prepared map, a total of 23.22% of the area, amounting to 105.38 km2, is in the high or very high risk class. The results of this research are of great importance for regional planning tasks, and the landslide prediction map can be used for spatial planning and for the mitigation of future hazards in the study area.
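Validation by ROC curves reduces to scoring each mapped cell; a minimal sketch with synthetic labels and susceptibility scores (not the study's data) is:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
landslide = rng.integers(0, 2, 10_000)            # 1 = mapped landslide cell
score = 0.3 * landslide + rng.random(10_000)      # fuzzy-ANP susceptibility
print(f"AUC = {roc_auc_score(landslide, score):.3f}")  # the paper reports 0.815
```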
Raknes, Guttorm; Hunskaar, Steinar
2014-01-01
We describe a method that uses crowdsourced postcode coordinates and Google Maps to estimate the average distance and travel time for inhabitants of a municipality to a casualty clinic in Norway. The new method was compared with methods based on population centroids, median distance, and town hall location, and we used it to examine how distance affects the utilisation of out-of-hours primary care services. At short distances, our method showed good correlation with mean travel time and distance. The utilisation of out-of-hours services correlated with postcode-based distances, consistent with previous research. The results show that our method is a reliable and useful tool for estimating average travel distances and travel times.
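The aggregation itself is a population-weighted mean over postcodes; a toy sketch follows (routing distances would come from the Google Maps API, which is not called here, and all numbers are made up):

```python
import numpy as np

inhabitants = np.array([1200, 430, 2900, 760])    # residents per postcode
road_km = np.array([2.1, 7.4, 3.3, 12.8])         # postcode -> clinic distance
mean_km = np.average(road_km, weights=inhabitants)
print(f"average travel distance: {mean_km:.1f} km")
```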
Shutin, Dmitriy; Zlobinskaya, Olga
2010-02-01
The goal of this contribution is to apply model-based information-theoretic measures to the quantification of relative differences between immunofluorescent signals. Several models for approximating the empirical fluorescence intensity distributions are considered, namely Gaussian, Gamma, Beta, and kernel densities. As distance measures, the Hellinger distance and the Kullback-Leibler divergence are considered. For the Gaussian, Gamma, and Beta models, closed-form expressions for evaluating the distance as a function of the model parameters are obtained. The advantages of the proposed quantification framework as compared to simple mean-based approaches are analyzed with numerical simulations. Two biological experiments are also considered. The first is the functional analysis of the p8 subunit of the TFIIH complex, which is responsible for a rare hereditary multi-system disorder, trichothiodystrophy group A (TTD-A). In the second experiment the proposed methods are applied to assess the UV-induced DNA lesion repair rate. A good agreement between our in vivo results and those obtained with an alternative in vitro measurement is established. We believe that the computational simplicity and the effectiveness of the proposed quantification procedure will make it very attractive for different analysis tasks in functional proteomics, as well as in high-content screening.
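For the Gaussian model the closed-form expressions are standard; a sketch of both measures for univariate Gaussians follows (the paper's multi-model framework is broader than this):

```python
import math

def kl_gauss(m1, s1, m2, s2):
    """Kullback-Leibler divergence KL(N1 || N2), univariate Gaussians."""
    return math.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 0.5

def hellinger_gauss(m1, s1, m2, s2):
    """Hellinger distance between univariate Gaussians."""
    bc = math.sqrt(2 * s1 * s2 / (s1**2 + s2**2)) \
         * math.exp(-(m1 - m2)**2 / (4 * (s1**2 + s2**2)))
    return math.sqrt(1 - bc)

print(kl_gauss(0.0, 1.0, 0.5, 1.2), hellinger_gauss(0.0, 1.0, 0.5, 1.2))
```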
Near-Surface Flow Fields Deduced Using Correlation Tracking and Time-Distance Analysis
NASA Technical Reports Server (NTRS)
DeRosa, Marc; Duvall, T. L., Jr.; Toomre, Juri
1999-01-01
Near-photospheric flow fields on the Sun are deduced using two independent methods applied to the same time series of velocity images observed by SOI-MDI on SOHO. Differences in travel times between f modes entering and leaving each pixel, measured using time-distance helioseismology, are used to determine sites of supergranular outflows. Alternatively, correlation tracking analysis of mesogranular scales of motion applied to the same time series is used to deduce the near-surface flow field. These two approaches provide the means to assess the patterns and evolution of horizontal flows on supergranular scales even near disk center, which is not feasible with direct line-of-sight Doppler measurements. We find that the locations of the supergranular outflows seen in flow fields generated from correlation tracking coincide well with the locations of the outflows determined from the time-distance analysis, with a mean correlation coefficient after smoothing of r_s = 0.840. Near-surface velocity field measurements can be used to study the evolution of the supergranular network, as merging and splitting events are observed to occur in these images. The data consist of one 2048-minute time series of high-resolution (0.6" pixels) line-of-sight velocity images taken by MDI on 1997 January 16-18 at a cadence of one minute.
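The correlation-tracking half of the comparison can be sketched as a local patch search; the window size and search range here are arbitrary choices, not those of the study:

```python
import numpy as np

def patch_shift(img1, img2, cy, cx, w=16, smax=4):
    """Integer displacement maximizing zero-mean cross-correlation of a
    patch between two frames (local correlation tracking sketch)."""
    p1 = img1[cy - w:cy + w, cx - w:cx + w]
    best, shift = -np.inf, (0, 0)
    for dy in range(-smax, smax + 1):
        for dx in range(-smax, smax + 1):
            p2 = img2[cy - w + dy:cy + w + dy, cx - w + dx:cx + w + dx]
            c = np.sum((p1 - p1.mean()) * (p2 - p2.mean()))
            if c > best:
                best, shift = c, (dy, dx)
    return shift

rng = np.random.default_rng(0)
frame1 = rng.standard_normal((128, 128))
frame2 = np.roll(frame1, (1, 2), axis=(0, 1))     # known shift (1, 2)
print(patch_shift(frame1, frame2, 64, 64))
```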
Tetali, Shailaja; Edwards, Phil; Murthy, G V S; Roberts, I
2015-10-28
Although some 300 million Indian children travel to school every day, little is known about how they get there. This information is important for transport planners and public health authorities. This paper presents the development of a self-administered questionnaire and examines its reliability and validity in estimating distance and mode of travel to school in a low-resource urban setting. We developed a questionnaire on children's travel to school. We assessed test-retest reliability by repeating the questionnaire one week later (n = 61). The questionnaire was improved and re-tested (n = 68). We examined the convergent validity of distance estimates by comparing estimates based on the nearest landmark to children's homes with a 'gold standard' based on one-to-one interviews with children using detailed maps (n = 50). Most questions showed fair to almost perfect agreement. Questions on usual mode of travel (κ 0.73 to 0.84) and road injury (κ 0.61 to 0.72) were found to be more reliable than those on parental permissions (κ 0.18 to 0.30), perception of safety (κ 0.00 to 0.54), and physical activity (κ -0.01 to 0.07). The distance estimated by the nearest-landmark method was not significantly different from that of the in-depth method for walking, 52 m [95% CI -32 m to 135 m], 10% of the mean difference, or for walking and cycling combined, 65 m [95% CI -30 m to 159 m], 11% of the mean difference. For children who used motorized transport (excluding private school bus), the nearest-landmark method underestimated distance by an average of 325 metres [95% CI -664 m to 1314 m], 15% of the mean difference. A self-administered questionnaire was found to provide reliable information on the usual mode of travel to school, and on road injury, in a small sample of children in Hyderabad, India. The 'nearest landmark' method can be applied in similar low-resource settings for a reasonably accurate estimate of the distance from a child's home to school.
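The reliability statistic behind the κ values is Cohen's kappa; a self-contained sketch with made-up answer lists is:

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Test-retest agreement for one categorical question."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n        # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in c1) / n ** 2        # chance agreement
    return (po - pe) / (1 - pe)

week1 = ["walk", "walk", "bus", "cycle", "walk", "bus"]
week2 = ["walk", "walk", "bus", "walk", "walk", "bus"]
print(round(cohens_kappa(week1, week2), 2))             # -> 0.7
```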
Nonrigid iterative closest points for registration of 3D biomedical surfaces
NASA Astrophysics Data System (ADS)
Liang, Luming; Wei, Mingqiang; Szymczak, Andrzej; Petrella, Anthony; Xie, Haoran; Qin, Jing; Wang, Jun; Wang, Fu Lee
2018-01-01
Advanced 3D optical and laser scanners bring new challenges to computer graphics. We present a novel nonrigid surface registration algorithm based on the Iterative Closest Point (ICP) method with multiple correspondences. Our method, called Nonrigid Iterative Closest Points (NICP), can be applied to surfaces of arbitrary topology. It does not impose any restrictions on the deformation, e.g. rigidity or articulation. Finally, it does not require parametrization of the input meshes. Our method is based on an objective function that combines distance and regularization terms. Unlike standard ICP, the distance term is determined from multiple two-way correspondences rather than single one-way correspondences between surfaces. A Laplacian-based regularization term is proposed to take full advantage of the multiple two-way correspondences. This term regularizes the surface movement by enforcing vertices to move coherently with their 1-ring neighbors. The proposed method achieves good performance when no global pose differences or significant amounts of bending exist in the models, for example, for families of similar shapes such as human femur and vertebrae models.
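The distinguishing ingredient, multiple two-way correspondences, can be sketched with k-d trees; the Laplacian regularization and the actual deformation solve are omitted here:

```python
import numpy as np
from scipy.spatial import cKDTree

def two_way_pairs(src, dst):
    """Union of src->dst and dst->src nearest-neighbour matches,
    i.e. multiple two-way correspondences per vertex."""
    i = cKDTree(dst).query(src)[1]                    # each src to nearest dst
    j = cKDTree(src).query(dst)[1]                    # each dst to nearest src
    pairs = [(a, int(b)) for a, b in enumerate(i)]
    pairs += [(int(b), a) for a, b in enumerate(j)]
    return np.array(sorted(set(pairs)))               # (src_idx, dst_idx) rows

src = np.random.default_rng(0).random((100, 3))
dst = src + 0.01                                      # slightly displaced copy
print(two_way_pairs(src, dst).shape)
```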
NASA Astrophysics Data System (ADS)
Chen, J.; Chen, W.; Dou, A.; Li, W.; Sun, Y.
2018-04-01
A new method for extracting information on damaged buildings, rooted in an optimal feature space, is put forward on the basis of the traditional object-oriented method. In this new method, the ESP (estimate of scale parameter) tool is used to optimize the segmentation of the image. The distance matrix and minimum separation distance of all kinds of surface features are then calculated through sample selection to find the optimal feature space, which is finally applied to extract images of buildings damaged by the earthquake. The overall extraction accuracy reaches 83.1% and the kappa coefficient 0.813. Compared with the traditional object-oriented method, the new method greatly improves extraction accuracy and efficiency, and has good potential for wider use in the extraction of damaged-building information. In addition, the new method can be applied to images of damaged buildings at different resolutions, and then used to seek the optimal observation scale of damaged buildings through accuracy evaluation. The results suggest that the optimal observation scale of damaged buildings is between 1 m and 1.2 m, which provides a reference for future information extraction of damaged buildings.
NASA Astrophysics Data System (ADS)
Skøien, J. O.; Gottschalk, L.; Leblois, E.
2009-04-01
Whereas geostatistical and objective methods have mostly been developed for observations with point support or a regular support, runoff-related data can be assumed to have an irregular support in space, and sometimes also a temporal support. The correlations between observations, and between observations and the prediction location, are found through an integration of a point variogram or point correlation function, a method known as regularisation. Although this is a relatively simple method for observations with an equal and regular support, it can be computationally demanding if the observations have irregular support. With the improved speed of computers, solving such integrations has become easier, but there can still be numerical problems that are not easily solved even with high-resolution computations. This can be a particular problem in the hydrological sciences, where catchments are overlapping, the correlations are high, and small numerical errors can give ill-posed covariance matrices. The problem increases with an increasing number of spatial and/or temporal dimensions. Gottschalk [1993a; 1993b] suggested replacing the integration by a Taylor expansion, hence reducing the computation time considerably, and also expecting fewer numerical problems with the covariance matrices. In practice, the integrated correlation/semivariance between observations is replaced by correlations/semivariances based on the so-called Ghosh distance. Although Gottschalk and collaborators have used the Ghosh distance in other papers as well [Sauquet et al., 2000a; Sauquet et al., 2000b], the properties of the simplification have not been examined in detail. Hence, we analyse here the replacement of the integration by the use of Ghosh distances, both in the sense of the ability to reproduce regularised semivariogram and correlation values, and in the influence on the final interpolated maps. Comparisons are performed both for real observations with a support (hydrological data) and for more hypothetical observations with regular supports, where analytical expressions for the regularised semivariances/correlations can in some cases be derived. The results indicate that the simplification is useful for spatial interpolation when the support of the observations has to be taken into account. The difference in semivariogram or correlation value between the simplified method and the full integration is limited at short distances, increasing for larger distances. However, this is to some degree taken into account while fitting a model for the point process, so that the results after interpolation are less affected by the simplification. The method is of particular use if computation time is of importance, e.g. in the case of real-time mapping procedures.
Gottschalk, L. (1993a) Correlation and covariance of runoff, Stochastic Hydrology and Hydraulics, 7, 85-101.
Gottschalk, L. (1993b) Interpolation of runoff applying objective methods, Stochastic Hydrology and Hydraulics, 7, 269-281.
Sauquet, E., L. Gottschalk, and E. Leblois (2000a) Mapping average annual runoff: a hierarchical approach applying a stochastic interpolation scheme, Hydrological Sciences Journal, 45, 799-815.
Sauquet, E., I. Krasovskaia, and E. Leblois (2000b) Mapping mean monthly runoff pattern using EOF analysis, Hydrology and Earth System Sciences, 4, 79-93.
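The integration that the Ghosh-distance shortcut approximates can itself be sketched by Monte Carlo: average a point variogram over point pairs drawn from the two supports. The exponential variogram and the toy supports below are arbitrary assumptions:

```python
import numpy as np

def regularized_gamma(point_gamma, supp_a, supp_b, n=5000, seed=0):
    """Monte Carlo regularization of a point variogram over two
    irregular supports (e.g. catchments given as point clouds)."""
    rng = np.random.default_rng(seed)
    pa = supp_a[rng.integers(len(supp_a), size=n)]
    pb = supp_b[rng.integers(len(supp_b), size=n)]
    return point_gamma(np.linalg.norm(pa - pb, axis=1)).mean()

gamma = lambda h: 1.0 - np.exp(-h / 30.0)             # exponential model (km)
a = np.random.default_rng(1).random((300, 2)) * 50.0  # catchment A samples
b = a + np.array([40.0, 0.0])                         # catchment B, shifted
print(regularized_gamma(gamma, a, b))
```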
Making Distance Learning E.R.O.T.I.C.: Applying Interpretation Principles to Distance Learning
ERIC Educational Resources Information Center
Ross, Anne; Siepen, Greg; O'Connor, Sue
2003-01-01
Distance learners are self-directed learners traditionally taught via study books, collections of readings, and exercises to test understanding of learning packages. Despite advances in e-Learning environments and computer-based teaching interfaces, distance learners still lack opportunities to participate in exercises and debates available to…
Riley, William; Parsons, Helen; McCoy, Kim; Burns, Debra; Anderson, Donna; Lee, Suhna; Sainfort, François
2009-10-01
To test the feasibility and assess the preliminary impact of a unique statewide quality improvement (QI) training program designed for public health departments. One hundred and ninety-five public health employees/managers from 38 local health departments throughout Minnesota were selected to participate in a newly developed QI training program and 65 of those engaged in and completed eight expert-supported QI projects over a period of 10 months from June 2007 through March 2008. As part of the Minnesota Quality Improvement Initiative, a structured distance education QI training program was designed and deployed in a first large-scale pilot. To evaluate the preliminary impact of the program, a mixed-method evaluation design was used based on four dimensions: learner reaction, knowledge, intention to apply, and preliminary outcomes. Subjective ratings of three dimensions of training quality were collected from participants after each of the scheduled learning sessions. Pre- and post-QI project surveys were administered to collect participant reactions, knowledge, future intention to apply learning, and perceived outcomes. Monthly and final QI project reports were collected to further inform success and preliminary outcomes of the projects. The participants reported (1) high levels of satisfaction with the training sessions, (2) increased perception of the relevance of the QI techniques, (3) increased perceived knowledge of all specific QI methods and techniques, (4) increased confidence in applying QI techniques on future projects, (5) increased intention to apply techniques on future QI projects, and (6) high perceived success of, and satisfaction with, the projects. Finally, preliminary outcomes data show moderate to large improvements in quality and/or efficiency for six out of eight projects. QI methods and techniques can be successfully implemented in local public health agencies on a statewide basis using the collaborative model through distance training and expert facilitation. This unique training can improve both core and support processes and lead to favorable staff reactions, increased knowledge, and improved health outcomes. The program can be further improved and deployed and holds great promise to facilitate the successful dissemination of proven QI methods throughout local public health departments.
NASA Astrophysics Data System (ADS)
Andrianov, A. S.; Smirnova, T. V.; Shishov, V. I.; Gwinn, C.; Popov, M. V.
2017-06-01
Observations on the RadioAstron ground-space interferometer with the participation of the Green Bank and Arecibo ground telescopes at 1668 MHz have enabled studies of the characteristics of the interstellar plasma in the direction of the pulsar PSR B0525+21. The maximum projected baseline for the ground-space interferometer was 233 600 km. The scintillations in these observations were strong, and the spectrum of inhomogeneities in the interstellar plasma was a power law with index n = 3.74, corresponding to a Kolmogorov spectrum. A new method for estimating the size of the scattering disk was applied to estimate the scattering angle (scattering disk radius) in the direction toward PSR B0525+21, θ_scat = 0.028 ± 0.002 milliarcseconds. The scattering in this direction occurs in a plasma layer located at a distance of 0.1 Z from the pulsar, where Z is the distance from the pulsar to the observer. For the adopted distance Z = 1.6 kpc, the screen is located at a distance of 1.44 kpc from the observer.
Lin, Fen-Fang; Wang, Ke; Yang, Ning; Yan, Shi-Guang; Zheng, Xin-Yu
2012-02-01
In this paper, to precisely obtain the spatial distribution characteristics of regional soil quality, the main factors that affect soil quality, such as soil type, land use pattern, lithology type, topography, roads, and industry type, were considered; mutual information theory was adopted to select the main environmental factors, and the decision tree algorithm See 5.0 was applied to predict the grade of regional soil quality. The main factors affecting regional soil quality were soil type, land use, lithology type, distance to town, distance to water area, altitude, distance to road, and distance to industrial land. The prediction accuracy of the decision tree model with the variables selected by mutual information was clearly higher than that of the model with all variables, and, for the former model, the prediction accuracy of both the decision tree and the decision rules was higher than 80%. Based on the continuous and categorical data, the method of mutual information theory integrated with a decision tree could not only reduce the number of input parameters for the decision tree algorithm, but also predict and assess regional soil quality effectively.
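The two-stage pipeline, mutual-information feature selection followed by a decision tree, can be sketched with scikit-learn as a stand-in for See 5.0; the synthetic data and the number of retained features are assumptions:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.random((300, 8))                   # soil type, land use, distances, ...
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)  # toy soil-quality grade

mi = mutual_info_classif(X, y, random_state=0)
keep = np.argsort(mi)[-4:]                 # retain the most informative inputs
clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X[:, keep], y)
print(clf.score(X[:, keep], y))
```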
An information-based network approach for protein classification
Wan, Xiaogeng; Zhao, Xin; Yau, Stephen S. T.
2017-01-01
Protein classification is one of the critical problems in bioinformatics. Early studies used geometric distances and phylogenetic trees to classify proteins; these methods use binary trees to represent the protein classification. In this paper, we propose a new protein classification method in which theories of information and networks are used to capture the multivariate relationships of proteins. In this study, the protein universe is modeled as an undirected network, where proteins are classified according to their connections. Our method is unsupervised, multivariate, and alignment-free. It can be applied to the classification of both protein sequences and structures. Nine examples are used to demonstrate the efficiency of our new method. PMID:28350835
The depth estimation of 3D face from single 2D picture based on manifold learning constraints
NASA Astrophysics Data System (ADS)
Li, Xia; Yang, Yang; Xiong, Hailiang; Liu, Yunxia
2018-04-01
The estimation of depth is vitally important in 3D face reconstruction. In this paper, we propose a t-SNE method based on manifold learning constraints and introduce the K-means method to divide the original database into several subsets; using the selected optimal subset to reconstruct the 3D face depth information can greatly reduce the computational complexity. Firstly, we carry out the t-SNE operation to reduce the key feature points in each 3D face model from 1×249 to 1×2. Secondly, the K-means method is applied to divide the training 3D database into several subsets. Thirdly, the Euclidean distance between the 83 feature points of the image to be estimated and the feature-point information of each cluster center before dimension reduction is calculated, and the category of the image to be estimated is judged according to the minimum Euclidean distance. Finally, the method of Kong D is applied only within the optimal subset to estimate the depth values of the 83 feature points of the 2D face image, achieving the final depth estimation results; thus the computational complexity is greatly reduced. Compared with the traditional traversal search estimation method, the proposed method reduces the error rate by 0.49, and the number of searches decreases with the change of category. To validate our approach, we use a public database to mimic the task of estimating the depth of face images from 2D images. The average number of searches decreased by 83.19%.
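The subset-selection stage can be sketched with scikit-learn; the feature sizes and cluster count loosely mirror the abstract, but the data are synthetic and matching in the embedded space is a simplifying shortcut (the paper matches on pre-reduction feature points):

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans

feats = np.random.default_rng(0).random((400, 249))   # per-face feature rows
emb = TSNE(n_components=2, random_state=0).fit_transform(feats)  # 249 -> 2
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(emb)

probe = emb[0]                                        # query face, embedded
best = np.argmin(np.linalg.norm(km.cluster_centers_ - probe, axis=1))
subset = np.where(km.labels_ == best)[0]              # depth search space
print(len(subset), "of", len(feats), "models searched")
```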
NASA Technical Reports Server (NTRS)
Kim, Min-Jeong; Jin, Jianjun; McCarty, Will; El Akkraoui, Amal; Todling, Ricardo; Gelaro, Ron
2018-01-01
Many numerical weather prediction (NWP) centers assimilate radiances affected by clouds and precipitation from microwave sensors, with the expectation that these data can provide critical constraints on meteorological parameters in dynamically sensitive regions and make significant impacts on forecast accuracy for precipitation. The Global Modeling and Assimilation Office (GMAO) at NASA Goddard Space Flight Center assimilates all-sky radiance data from various microwave sensors, such as the GPM Microwave Imager (GMI), in the Goddard Earth Observing System (GEOS) atmospheric data assimilation system (ADAS), which includes the GEOS atmospheric model, the Gridpoint Statistical Interpolation (GSI) atmospheric analysis system, and the Goddard Aerosol Assimilation System (GAAS). So far, most NWP centers apply the same large data-thinning distances used for clear-sky radiances, chosen to avoid correlated observation errors, to all-sky microwave radiance data. For example, NASA GMAO applies 145 km thinning distances to most satellite radiance data, including the microwave radiance data for which the all-sky approach is implemented. Even with this coarse data usage in the all-sky assimilation approach, noticeable positive impacts of all-sky microwave data on hurricane track forecasts were identified in the GEOS-5 system. The motivation for this study is the dynamic thinning distance method developed in our all-sky framework, which allows the use of denser data in cloudy and precipitating regions owing to the relatively small spatial correlations of observation errors there. To investigate the benefits of all-sky microwave radiances for hurricane forecasts, several hurricane cases selected from 2016-2017 are examined. The dynamic thinning distance method is utilized in our all-sky approach to understand the sources and mechanisms of the benefits of all-sky microwave radiance data from sensors such as the Advanced Microwave Sounding Unit (AMSU-A), the Microwave Humidity Sounder (MHS), and GMI on GEOS-5 analyses and forecasts of various hurricanes.
Ionic diffusion and space charge polarization in structural characterization of biological tissues.
Jastrzebska, M; Kocot, A
2004-06-01
In this study, a new approach to the analysis of the low-frequency (1-10^7 Hz) dielectric spectra of biological tissue is described. The experimental results are interpreted in terms of ionic diffusion and space charge polarization according to Sawada's theory. A new presentation of the dielectric spectra, i.e. ([Formula: see text]) [Formula: see text], has been used. This method results in peaks which are narrower and better resolved than both the measured loss peaks and an alternative loss quantity [Formula: see text]. The presented method and Sawada's expression have been applied to the analysis of changes in the spatial molecular structure of the collagen fibril network in pericardium tissue exposed to glutaraldehyde (GA), with respect to the native tissue. The diffusion coefficient of the ions was estimated on the basis of a dielectric dispersion measurement for an aqueous NaCl solution with a well-calibrated distance between the electrodes. Fitting a theoretical function to the experimental data allowed us to determine three diffusive relaxation regions with three structural distance parameters d_s, describing the spatial arrangement of collagen fibrils in pericardium tissue. It has been found that a significant decrease in the structural distance d_s from 87 nm to 45 nm may correspond to a reduction in the interfibrillar distance within GA cross-linked tissue.
Minimization of municipal solid waste transportation route in West Jakarta using Tabu Search method
NASA Astrophysics Data System (ADS)
Chaerul, M.; Mulananda, A. M.
2018-04-01
Indonesia still adopts a collect-haul-dispose concept for municipal solid waste handling, which leads to queues of waste trucks at the final disposal site (TPA). This study aims to minimize the total distance of the waste transportation system by applying a transshipment model, in which the transshipment points are compaction facilities (SPA). Small-capacity trucks collect the waste from temporary collection points (TPS) and bring it to a compaction facility located near the waste generators. After compaction, the waste is transported in large-capacity trucks to the final disposal site, located far from the city. Problems of this kind can be formulated as a Vehicle Routing Problem (VRP). In this study, the shortest routes from the truck pool to the TPS, from the TPS to the SPA, and from the SPA to the TPA were determined using a meta-heuristic method, namely a two-phase Tabu Search. The TPS studied are of the container type, 43 units in total throughout West Jakarta, served by 38 Armroll trucks with a capacity of 10 m3 each. The result assigns each truck from the pool to selected TPS, SPA and TPA with a total minimum distance of 2,675.3 km. Minimizing this distance also minimizes the total transportation cost to be borne by the government.
Comparison of Genetic Algorithm and Hill Climbing for Shortest Path Optimization Mapping
NASA Astrophysics Data System (ADS)
Fronita, Mona; Gernowo, Rahmat; Gunawan, Vincencius
2018-02-01
The Traveling Salesman Problem (TSP) is an optimization problem of finding the shortest route that visits several destinations in one trip, without passing through the same city twice, and returns to the departure city; the process is applied to delivery systems. This comparison is done using two methods, namely genetic algorithm optimization and hill climbing. Hill climbing works by directly selecting a new path to exchange with its neighbour whenever the resulting tour distance is smaller than that of the previous tour. Genetic algorithms depend on input parameters: the population size, the crossover probability, the mutation probability and the number of generations. The process of determining the shortest path is supported by software that uses the Google Maps API. Tests were carried out 20 times with 8, 16, 24 and 32 cities to see which method is optimal in terms of distance and computation time. Experiments with 3, 4, 5 and 6 cities produced the same optimal distance for the genetic algorithm and hill climbing; the distances begin to differ with 7 cities. Overall, the tests show that hill climbing is more effective for small numbers of cities, while instances with more than 30 cities are better optimized using genetic algorithms.
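A 2-opt hill-climbing loop of the kind compared here fits in a few lines; the distance matrix below is a toy instance:

```python
import random

def tour_length(tour, d):
    # closed tour: d[tour[-1]][tour[0]] is included via the i = 0 term
    return sum(d[tour[i - 1]][tour[i]] for i in range(len(tour)))

def hill_climb(d, iters=10_000, seed=0):
    """Accept a 2-opt neighbour (segment reversal) only if it is shorter."""
    rng = random.Random(seed)
    tour = list(range(len(d)))
    best = tour_length(tour, d)
    for _ in range(iters):
        i, j = sorted(rng.sample(range(len(d)), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        c = tour_length(cand, d)
        if c < best:
            tour, best = cand, c
    return tour, best

d = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 3], [10, 4, 3, 0]]
print(hill_climb(d))
```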
NASA Astrophysics Data System (ADS)
Bakar, Sumarni Abu; Ibrahim, Milbah
2017-08-01
The shortest path problem is a popular problem in graph theory. It is about finding a path of minimum length between a specified pair of vertices. In a network, the weight of each edge is usually represented as a crisp real number, and the weights are then used in the calculation of the shortest path by deterministic algorithms. However, due to failures, uncertainty is often encountered in practice, whereby the edge weights of the network are uncertain and imprecise. In this paper, a modified algorithm that combines a heuristic shortest path method with a fuzzy approach is proposed for solving networks with imprecise arc lengths. Interval numbers and triangular fuzzy numbers are considered for representing the arc lengths of the network. The modified algorithm is then applied to a specific example of the Travelling Salesman Problem (TSP). The total shortest distance obtained from this algorithm is compared with the total distance obtained from the traditional nearest-neighbour heuristic algorithm. The results show that the modified algorithm provides not only a sequence of visited cities similar to that of the traditional approach, but also a good measurement of the total shortest distance, which is smaller than the total shortest distance calculated with the traditional approach. Hence, this research contributes to the enrichment of methods used in solving the TSP.
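A nearest-neighbour heuristic over triangular fuzzy arc lengths can be sketched by defuzzifying each length to its centroid; the ranking rule is an assumption, since fuzzy-number comparison can be done in several ways:

```python
def centroid(tfn):
    """Defuzzify a triangular fuzzy number (a, b, c) to a crisp value."""
    a, b, c = tfn
    return (a + b + c) / 3.0

def fuzzy_nearest_neighbour(dist, start=0):
    """Nearest-neighbour TSP tour; dist[i][j] is a triangular fuzzy length."""
    n = len(dist)
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        nxt = min(unvisited, key=lambda j: centroid(dist[tour[-1]][j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour + [start]

d = [[None, (1, 2, 3), (4, 5, 7)],
     [(1, 2, 3), None, (2, 3, 4)],
     [(4, 5, 7), (2, 3, 4), None]]
print(fuzzy_nearest_neighbour(d))   # -> [0, 1, 2, 0]
```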
Spatial accessibility to vaccination sites in a campaign against rabies in São Paulo city, Brazil.
Polo, Gina; Acosta, Carlos Mera; Dias, Ricardo Augusto
2013-08-01
It is estimated that the city of São Paulo has over 2.5 million dogs and 560 thousand cats. These populations are irregularly distributed throughout the territory, making it difficult to appropriately allocate health services focused on these species. To reasonably allocate vaccination sites, it is necessary to identify social groups and their access to the referred service. Rabies in dogs and cats has been an important zoonotic health issue in São Paulo, and the key component of rabies control is vaccination. The present study aims to introduce an approach to quantify the potential spatial accessibility to the vaccination sites of the 2009 campaign against rabies in the city of São Paulo and to resolve the overestimation associated with the classic methodology that applies buffer zones around vaccination sites based on Euclidean (straight-line) distance. To achieve this, a Gaussian-based two-step floating catchment area method with a travel-friction coefficient was adapted in a geographic information system environment, using distances along a street network based on Dijkstra's algorithm (shortest path method). The choice of the distance calculation method affected the results in terms of the population covered. In general, areas with low accessibility for both dogs and cats were observed, especially in densely populated areas. The eastern zone of the city had higher accessibility values compared with the peripheral and central zones. The Gaussian-based two-step floating catchment method with a travel-friction coefficient was used to assess the overestimation of the straight-line distance method, which is the most widely used method for coverage analysis. We conclude that this approach has the potential to improve the efficiency of resource use when planning rabies control programs in large urban environments such as São Paulo. The findings emphasize the need for surveillance and intervention in isolated areas.
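The two-step floating catchment area computation with a Gaussian decay is compact; the sketch below uses random distances in place of Dijkstra-based network distances, and the decay bandwidth is an assumption:

```python
import numpy as np

def gaussian_2sfca(d, supply, demand, d0=5.0):
    """d[i, j]: travel distance from demand area i to vaccination site j;
    returns an accessibility score per demand area."""
    w = np.exp(-(d / d0) ** 2)                        # Gaussian travel friction
    rj = supply / (w * demand[:, None]).sum(axis=0)   # step 1: site ratios
    return (w * rj[None, :]).sum(axis=1)              # step 2: sum over sites

rng = np.random.default_rng(0)
d = rng.random((100, 8)) * 20.0                       # km, 100 areas x 8 sites
acc = gaussian_2sfca(d, np.full(8, 500.0),
                     rng.integers(100, 5000, 100).astype(float))
print(acc.min(), acc.max())
```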
Talker Localization Based on Interference between Transmitted and Reflected Audible Sound
NASA Astrophysics Data System (ADS)
Nakayama, Masato; Nakasako, Noboru; Shinohara, Toshihiro; Uebo, Tetsuji
In many engineering fields, the distance to a target is of central importance. Conventional distance measurement uses the time delay between transmitted and reflected waves, but short distances are difficult to estimate this way. In the field of microwave radar, by contrast, methods based on phase interference are known to measure short distances. We have therefore proposed a distance estimation method based on interference between transmitted and reflected audible sound, which can measure the distance between a microphone and a target with one microphone and one loudspeaker. In this paper, we propose a talker localization method based on this phase-interference distance estimation. We extend the method to two microphones (a microphone array) in order to estimate the talker's position from the distance and direction between the target and the microphone array. In addition, the talker's own speech acts as noise in the proposed method; we therefore also propose combining it with the CSP (Cross-power Spectrum Phase analysis) method, one of the standard DOA (Direction Of Arrival) estimation methods. We evaluated the performance of talker localization in real environments, and the experimental results show the effectiveness of the proposed method.
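A minimal sketch of the CSP front end, assuming a phase-only (PHAT-weighted) cross-power spectrum between the two microphones; the sampling rate, 10-cm array spacing, and test signal are illustrative.

```python
import numpy as np

# CSP (cross-power spectrum phase) time-delay estimation between two
# microphone signals; the delay maps to a direction of arrival.

def csp_delay(x1, x2, fs):
    n = len(x1) + len(x2)
    X1, X2 = np.fft.rfft(x1, n), np.fft.rfft(x2, n)
    cross = X1 * np.conj(X2)
    cross /= np.abs(cross) + 1e-12            # keep phase only (PHAT)
    cc = np.fft.irfft(cross, n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs   # delay in seconds

fs, spacing, c = 16000, 0.1, 343.0            # Hz, m, m/s (illustrative)
t = np.arange(0, 0.1, 1 / fs)
ref = np.sin(2 * np.pi * 440 * t) * np.hanning(t.size)
lagged = np.concatenate((np.zeros(3), ref))[: ref.size]   # 3-sample lag
tau = csp_delay(lagged, ref, fs)
doa = np.degrees(np.arcsin(np.clip(c * tau / spacing, -1, 1)))
print(round(tau * fs), round(doa, 1))         # 3 samples, ~40 degrees
```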
Evaluation of jamming efficiency for the protection of a single ground object
NASA Astrophysics Data System (ADS)
Matuszewski, Jan
2018-04-01
Electronic countermeasures (ECM) include methods to completely prevent or restrict the opponent's effective use of the electromagnetic spectrum. The most widespread means of disrupting the operation of electronic devices is active and passive radio-electronic jamming. The paper presents jamming efficiency calculations for protecting ground objects against radars mounted on airborne platforms. The basic mathematical formulas for calculating the efficiency of active radar jamming are presented. Numerical calculations for ground object protection are made for two different electronic warfare scenarios: a jammer placed very close to, and a jammer placed at a specified distance from, the protected object. The results of these calculations are presented in figures showing the minimal distance of effective jamming. The realization of effective radar jamming in electronic warfare systems depends mainly on precise knowledge of the radar's and the jammer's technical parameters, the distance between them, the assumed value of the degradation coefficient, the conditions of electromagnetic energy propagation, and the applied jamming method. The conclusions from these calculations facilitate decisions on how jamming should be conducted to achieve high efficiency during electronic warfare training.
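A hedged sketch of the standard stand-off jamming power budget that such calculations build on; the parameter values and the 3 dB degradation coefficient below are illustrative, not the paper's scenario.

```python
import numpy as np

# Target echo S falls as R^-4 while jammer power J falls only as Rj^-2;
# the "burn-through" range is where S/J again reaches the required ratio.

def echo_power(pt, gt, wl, sigma, r):
    """Radar-equation echo power from a target of RCS sigma at range r."""
    return pt * gt**2 * wl**2 * sigma / ((4 * np.pi)**3 * r**4)

def jam_power(pj, gj, gr, wl, rj):
    """One-way jamming power received by the radar from distance rj."""
    return pj * gj * gr * wl**2 / ((4 * np.pi)**2 * rj**2)

wl = 0.03                                # 10 GHz wavelength [m]
pt, gt, sigma = 100e3, 1e3, 5.0          # radar power [W], gain, RCS [m^2]
pj, gj, gr = 100.0, 10.0, 3.16           # jammer ERP terms, sidelobe gain
rj = 20e3                                # jammer-to-radar distance [m]

r = np.linspace(1e3, 60e3, 600)          # candidate target ranges
js = jam_power(pj, gj, gr, wl, rj) / echo_power(pt, gt, wl, sigma, r)
required_js = 10 ** (3 / 10)             # assumed 3 dB degradation coeff.
idx = np.searchsorted(js, required_js)   # js grows monotonically with r
print(r[min(idx, r.size - 1)])           # burn-through range, ~1e4 m here
```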
Low Power Near Field Communication Methods for RFID Applications of SIM Cards.
Chen, Yicheng; Zheng, Zhaoxia; Gong, Mingyang; Yu, Fengqi
2017-04-14
Power consumption and communication distance have become crucial challenges for SIM card RFID (radio frequency identification) applications. The combination of long-distance 2.45 GHz radio frequency (RF) technology and low-power 2 kHz near-distance communication is a workable scheme. In this paper, an ultra-low-frequency 2 kHz near field communication (NFC) method suitable for SIM cards is proposed and verified in silicon. The low-frequency transmission model based on electromagnetic induction is discussed. Different transmission modes are introduced and compared, showing that the baseband transmit mode has the better performance. A low-pass filter circuit and programmable gain amplifiers are applied for noise reduction and signal amplitude amplification. Digital-to-analog converters and comparators are used to detect card approach and departure. A novel differential Manchester decoder is proposed to deal with internal clock drift in range-controlled communication applications. The chip has been fully implemented in 0.18 µm complementary metal-oxide-semiconductor (CMOS) technology, with a 330 µA operating current and a 45 µA idle current. The low-frequency chip can be integrated into a radio frequency SIM card for near field RFID applications.
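As a rough sketch of the line code involved (assuming the common convention that a transition at the bit-cell boundary encodes a 0, and one sample per half-bit; the chip's clock-drift tracking is omitted):

```python
# Toy differential Manchester codec, one sample per half-bit. The mid-cell
# transition is always present (it carries the clock); a transition at the
# cell boundary encodes 0. Real decoders must also track clock drift, which
# is what the chip's decoder addresses; that part is omitted here.

def dm_encode(bits, level=1):
    wave = []
    for b in bits:
        if b == 0:
            level = -level          # boundary transition encodes a 0
        wave += [level, -level]     # mandatory mid-cell transition
        level = -level              # line level at the end of the cell
    return wave

def dm_decode(wave, level=1):
    bits = []
    for i in range(0, len(wave), 2):
        first, second = wave[i], wave[i + 1]
        bits.append(0 if first != level else 1)  # boundary transition -> 0
        level = second
    return bits

msg = [1, 0, 1, 1, 0, 0, 1]
assert dm_decode(dm_encode(msg)) == msg
print(dm_decode(dm_encode(msg)))
```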
`Inter-Arrival Time' Inspired Algorithm and its Application in Clustering and Molecular Phylogeny
NASA Astrophysics Data System (ADS)
Kolekar, Pandurang S.; Kale, Mohan M.; Kulkarni-Kale, Urmila
2010-10-01
Bioinformatics, being a multidisciplinary field, involves the application of methods from allied areas of science for data mining using computational approaches. Clustering and molecular phylogeny is one of the key areas in bioinformatics, helping in the study of the classification and evolution of organisms. Molecular phylogeny algorithms can be divided into distance-based and character-based methods. Most of these methods depend on a pre-alignment of sequences and become computationally intensive as the data size grows, and hence demand alternative, efficient approaches. The 'inter-arrival time distribution' (IATD) is a popular concept in the theory of stochastic system modeling, but its potential in molecular data analysis has not been fully explored. The present study reports an application of IATDs in bioinformatics for clustering and molecular phylogeny. The proposed method computes the IATDs of nucleotides in genomic sequences. A distance function based on statistical parameters of the IATDs is proposed, and the resulting distance matrix is used for clustering and molecular phylogeny. The method is applied to a dataset of 3' non-coding region (NCR) sequences of Dengue virus type 3 (DENV-3), subtype III, reported in 2008. The resulting phylogram reveals the geographical distribution of DENV-3 isolates. Sri Lankan DENV-3 isolates were further observed to cluster in two sub-clades corresponding to pre- and post-Dengue-hemorrhagic-fever emergence groups. These results are consistent with those reported earlier, which were obtained using pre-aligned sequence data as input. These findings encourage applications of the IATD-based method in molecular phylogenetic analysis in particular and data mining in general.
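A minimal sketch of the idea, assuming mean and standard deviation as the statistical parameters of each gap distribution (the authors' exact parameter set may differ):

```python
import numpy as np
from itertools import combinations

# For each nucleotide, record the gaps between its successive occurrences
# (the inter-arrival times), summarize each gap distribution by simple
# statistics, and compare sequences by a Euclidean distance on those
# statistics. The resulting matrix could feed any distance-based
# tree-building method such as neighbour joining.

def iatd_features(seq):
    feats = []
    for base in "ACGT":
        pos = np.array([i for i, b in enumerate(seq) if b == base])
        gaps = np.diff(pos) if len(pos) > 1 else np.array([len(seq)])
        feats += [gaps.mean(), gaps.std()]
    return np.array(feats)

def iatd_distance(s1, s2):
    return np.linalg.norm(iatd_features(s1) - iatd_features(s2))

seqs = {"A1": "ACGTACGTTGCAACGT",
        "B1": "AAAACCCCGGGGTTTT",
        "B2": "AAACCCGGGTTTACGT"}
for (n1, s1), (n2, s2) in combinations(seqs.items(), 2):
    print(n1, n2, round(iatd_distance(s1, s2), 3))
```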
A new method to quantify liner deformation within a prosthetic socket for below knee amputees.
Lenz, Amy L; Johnson, Katie A; Bush, Tamara Reid
2018-06-06
Many amputees who wear a leg prosthesis develop significant skin wounds on their residual limb. The exact cause of these wounds is unclear, as little work has studied the interface between the prosthetic device and the user. Our research objective was to develop a quantitative method for assessing displacement patterns of the gel liner during walking for patients with transtibial amputation. Using a reflective marker system, evaluations were conducted with a custom transparent test socket mounted over a plaster limb model and a deformable limb model. Distances between markers placed on the limb were measured with a digital caliper and then compared with data from the motion capture system. Additionally, the rigid plaster set-up was moved in the capture volume to simulate walking and to evaluate whether inter-marker distances changed in comparison to static data. Dynamic displacement trials were then collected to measure changes in inter-marker distance due to vertical elongation of the gel liner. Static and dynamic inter-marker distances within day and across days confirmed the ability to accurately capture displacements using this new approach. These results encourage applying this novel method to a sample of amputee patients during walking to assess displacements and the distribution of liner deformation within the socket. The ability to capture changes in deformation of the gel liner will provide new data that will enable clinicians and researchers to improve the design and fit of the prosthesis so that the incidence of pressure ulcers can be reduced. Copyright © 2018 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
McLean, Scott; Gasperini, Lavinia; Rudgard, Stephen
2002-01-01
The distance learning experiences of the United Nations Food and Agriculture Organization led to the following suggestions for applying distance learning strategies to the challenges of food security and rural development: use distance learning for the right reasons, be sensitive to context, use existing infrastructure, engage stakeholders, and…
NASA Astrophysics Data System (ADS)
Mori, Toshifumi; Kato, Shigeki
2007-03-01
We present a method to evaluate the analytical gradient of the reference interaction site model Møller-Plesset second-order free energy with respect to solute nuclear coordinates. It is applied to calculate the geometries and energies in the equilibria of the Grignard reagent (CH3MgCl) in dimethyl ether solvent. The Mg-Mg and Mg-Cl distances as well as the binding energies of solvent molecules are strongly affected by the dynamical electron correlation. The solvent effect on the Schlenk equilibrium is examined.
Han, Bing; Cohen, Deborah A.
2015-01-01
Introduction Accurate conceptualizations of neighborhood environments are important in the design of policies and programs aiming to improve access to healthy food. Neighborhood environments are often defined by administrative units or buffers around points of interest. An individual may eat and shop for food within or outside these areas, which may not reflect accessibility of food establishments. This article examines the relevance of different definitions of food environments. Methods We collected data on trips to food establishments using a 1-week food and travel diary and global positioning system devices. Spatial-temporal clustering methods were applied to identify homes and food establishments visited by study participants. Results We identified 513 visits to food establishments (sit-down restaurants, fast-food/convenience stores, malls or stores, groceries/supermarkets) by 135 participants in 5 US cities. The average distance between the food establishments and homes was 2.6 miles (standard deviation, 3.7 miles). Only 34% of the visited food establishments were within participants’ neighborhood census tract. Buffers of 1 or 2 miles around the home covered 55% to 65% of visited food establishments. There was a significant difference in the mean distances to food establishment types (P = .008). On average, participants traveled the longest distances to restaurants and the shortest distances to groceries/supermarkets. Conclusion Many definitions of the neighborhood food environment are misaligned with individual travel patterns, which may help explain the mixed findings in studies of neighborhood food environments. Neighborhood environments defined by actual travel activity may provide more insight on how the food environment influences dietary and food shopping choices. PMID:26247426
Financial time series analysis based on information categorization method
NASA Astrophysics Data System (ADS)
Tian, Qiang; Shang, Pengjian; Feng, Guochen
2014-12-01
The paper applies the information categorization method to the analysis of financial time series. The method examines the similarity of different sequences by calculating the distances between them, and we use it to quantify the similarity of different stock markets. We report results on the similarity of the US and Chinese stock markets in the periods 1991-1998 (before the Asian currency crisis), 1999-2006 (after the Asian currency crisis and before the global financial crisis), and 2007-2013 (during and after the global financial crisis). The results show how the similarity between the markets differs across time periods, and that the similarity of the two stock markets became larger after these two crises. We also obtain similarity results for 10 stock indices in three regions; the method can distinguish the markets of different regions in the resulting phylogenetic trees. The results show that satisfactory information can be extracted from financial markets by this method, which can be applied not only to physiologic time series but also to financial time series.
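One plausible reading of the categorization step, offered purely as an assumption for illustration rather than the authors' recipe: map returns to symbols, count fixed-length words, and compare word-frequency distributions.

```python
import numpy as np

# Symbolize log-returns as up/down/flat, count length-3 words, and measure
# market similarity as an L1 distance between word-frequency distributions.
# The symbol alphabet, word length, and threshold are illustrative choices.

def symbolize(prices, flat=1e-3):
    r = np.diff(np.log(prices))
    return "".join("u" if x > flat else "d" if x < -flat else "f" for x in r)

def word_freq(symbols, k=3):
    words = [symbols[i:i + k] for i in range(len(symbols) - k + 1)]
    keys = sorted(set(words))
    counts = np.array([words.count(w) for w in keys], float)
    return dict(zip(keys, counts / counts.sum()))

def freq_distance(f1, f2):
    keys = set(f1) | set(f2)
    return sum(abs(f1.get(w, 0.0) - f2.get(w, 0.0)) for w in keys)

rng = np.random.default_rng(0)
us = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 500)))   # synthetic index
cn = 100 * np.exp(np.cumsum(rng.normal(0, 0.02, 500)))   # synthetic index
print(freq_distance(word_freq(symbolize(us)), word_freq(symbolize(cn))))
```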
ERIC Educational Resources Information Center
Arce Espinoza, Lourdes; Monge Nájera, Julián
2015-01-01
The presentation of the intellectual work of others as their own by students is believed to be common worldwide. Punishments and detection software have failed to solve the problem and have important limitations themselves. To go to the root of the problem, we applied an online questionnaire to 344 university students and their 13 teachers. Our…
Yu, Fei; Ji, Zhanglong
2014-01-01
In response to the growing interest in genome-wide association study (GWAS) data privacy, the Integrating Data for Analysis, Anonymization and SHaring (iDASH) center organized the iDASH Healthcare Privacy Protection Challenge, with the aim of investigating the effectiveness of applying privacy-preserving methodologies to human genetic data. This paper is based on a submission to the iDASH Healthcare Privacy Protection Challenge. We apply privacy-preserving methods that are adapted from Uhler et al. 2013 and Yu et al. 2014 to the challenge's data and analyze the data utility after the data are perturbed by the privacy-preserving methods. Major contributions of this paper include new interpretation of the χ2 statistic in a GWAS setting and new results about the Hamming distance score, a key component for one of the privacy-preserving methods.
Advances in spatial epidemiology and geographic information systems.
Kirby, Russell S; Delmelle, Eric; Eberth, Jan M
2017-01-01
The field of spatial epidemiology has evolved rapidly in the past 2 decades. This study serves as a brief introduction to spatial epidemiology and the use of geographic information systems in applied epidemiologic research. We highlight technical developments and opportunities to apply spatial analytic methods in epidemiologic research, focusing on methodologies involving geocoding, distance estimation, residential mobility, record linkage and data integration, spatial and spatio-temporal clustering, small area estimation, and Bayesian applications to disease mapping. The articles included in this issue incorporate many of these methods into their study designs and analytical frameworks. It is our hope that these studies will spur further development and utilization of spatial analysis and geographic information systems in epidemiologic research. Copyright © 2016 Elsevier Inc. All rights reserved.
Yang, Jesse Chieh-Szu; Lin, Kang-Ping; Wei, Hung-Wen; Chen, Wen-Chuan; Chiang, Chao-Ching; Chang, Ming-Chau; Tsai, Cheng-Lun; Lin, Kun-Jhih
2018-06-01
The far cortical locking (FCL) system, a novel bridge-plating technique, aims to deliver controlled and symmetric interfragmentary motion for a potentially uniform callus distribution. However, clinical data on the practical use of this system are limited. The current study investigated the biomechanical effect of a locking plate/far cortical locking construct on a simulated comminuted diaphyseal fracture in synthetic bones at different distances between the plate and the bone. Biomechanical in vitro experiments were performed using composite sawbones as bone models. A 10-mm osteotomy gap was created and bridged with FCL constructs to determine construct stiffness, strength, and interfragmentary movement under axial compression for three configurations: locking plates applied flush to the bone, at 2 mm, or at 4 mm from the bone. The plate applied flush to the bone exhibited higher stiffness than those at 2 mm and 4 mm of plate elevation. A homogeneous interfragmentary motion at the near and far cortices was observed for the plate at 2 mm, whereas a relatively large movement was observed at the far cortex for the plate applied at 4 mm. A plate-to-bone distance of 2 mm had the advantages of reducing axial stiffness and providing nearly parallel interfragmentary motion. The plate flush to the bone prohibits the dynamic function of the far cortical locking mechanism, and the 4-mm offset was too unstable for fracture healing. Copyright © 2018 IPEM. Published by Elsevier Ltd. All rights reserved.
A unified tensor level set for image segmentation.
Wang, Bin; Gao, Xinbo; Tao, Dacheng; Li, Xuelong
2010-06-01
This paper presents a new region-based unified tensor level set model for image segmentation. The model introduces a third-order tensor to comprehensively depict the features of pixels, e.g., the gray value and local geometrical features such as orientation and gradient; then, by defining a weighted distance, we generalize the representative region-based level set method from scalar to tensor. The proposed model has four main advantages compared with the traditional representative method. First, by involving a Gaussian filter bank, the model is robust against noise, particularly salt-and-pepper noise. Second, by considering local geometrical features such as orientation and gradient, the model pays more attention to boundaries and makes the evolving curve stop more easily at boundary locations. Third, owing to the unified tensor representation of the pixels, the model segments images more accurately and naturally. Fourth, based on the weighted distance definition, the model can cope with data varying from scalar to vector, and on to higher-order tensors. We apply the proposed method to synthetic, medical, and natural images, and the results suggest that the proposed method is superior to the available representative region-based level set methods.
Česaitienė, Gabrielė; Česaitis, Kęstutis; Junevičius, Jonas; Venskutonis, Tadas
2017-07-04
BACKGROUND The aim of this study was to compare the reliability of panoramic radiography (PR) and cone beam computed tomography (CBCT) in the evaluation of the distance of the roots of lateral teeth to the inferior alveolar nerve canal (IANC). MATERIAL AND METHODS 100 PR and 100 CBCT images that met the selection criteria were selected from the database. In PR images, the distances were measured using an electronic caliper with 0.01-mm accuracy and a white-light X-ray film viewer. Actual values of the measurements were calculated taking into consideration the magnification used in PR images (130%). Measurements on CBCT images were performed using i-CAT Vision software. Statistical data analysis was performed using R software, applying Welch's t-test and the Wilcoxon test. RESULTS There was no statistically significant difference in the mean distance from the root of the second premolar and the mesial and distal roots of the first molar to the IANC between PR and CBCT images. The difference in the mean distance from the mesial and distal roots of the second and the third molars to the IANC measured in PR and CBCT images was statistically significant. CONCLUSIONS PR may be uninformative or misleading when measuring the distance from the mesial and distal roots of the second and the third molars to the IANC.
Wang, Xinmeng; Zhang, Na; Li, Miaomiao
2018-01-01
Background In order to improve the subjective wellbeing of government servants engaged in environmental protection who work under high power distance in China, it is important to understand the impact mechanism of feedback. This study aims to analyze how the feedback environment influences subjective wellbeing through basic psychological needs satisfaction, and to analyze the moderating role of power distance. Method The study was designed as a cross-sectional study of 492 government servants engaged in environmental protection in Shandong, China. Government servants who agreed to participate answered self-report questionnaires concerning demographic conditions, supervisor feedback environment, basic psychological needs satisfaction, and power distance, as well as subjective wellbeing. Results Employees experiencing higher levels of supervisor feedback environment were more likely to experience subjective wellbeing. Full mediating effects were found for basic psychological needs satisfaction: a supportive supervisor feedback environment first led to increased basic psychological needs satisfaction, which in turn resulted in increased subjective wellbeing. Additional analysis showed that the mediating effect of basic psychological needs satisfaction was stronger for employees who work under high power distance than under low power distance. Conclusion The results indicate that the supervisor feedback environment plays a vital role in improving the subjective wellbeing of government servants engaged in environmental protection through basic psychological needs satisfaction, especially under high power distance. PMID:29662901
[Travel time and distances to Norwegian out-of-hours casualty clinics].
Raknes, Guttorm; Morken, Tone; Hunskår, Steinar
2014-11-01
Geographical factors have an impact on the utilisation of out-of-hours services. In this study we investigated the travel distance to out-of-hours casualty clinics in Norwegian municipalities in 2011, and the number of municipalities covered by the proposed recommendations for secondary on-call arrangements due to long distances. We estimated the average maximum travel times and distances in Norwegian municipalities using a postcode-based method. Separate analyses were performed for municipalities with a single, permanently located casualty clinic. Altogether 417 out of 430 municipalities were included, and we present the median values of the maximum travel times and distances for the included municipalities. The median maximum average travel distance for the municipalities was 19 km, and the median maximum average travel time was 22 minutes. In 40 of the municipalities (10%) the maximum average travel time exceeded 60 minutes, and in 97 municipalities (23%) it exceeded 40 minutes. These groups comprised 2% and 5% of the country's total population, respectively. For municipalities with a single, permanently located casualty clinic (N = 316), the median average travel time was 16 minutes and the median average distance was 13 km. In many municipalities the inhabitants have a long average journey to out-of-hours emergency health services, but taken as a whole, the inhabitants of these municipalities account for a very small proportion of the Norwegian population. The results indicate that the proposed recommendations for secondary on-call duty based on long distances apply to only a small number of inhabitants. The recommendations should therefore be adjusted and reformulated to become more relevant.
Topological Distances Between Brain Networks
Lee, Hyekyoung; Solo, Victor; Davidson, Richard J.; Pollak, Seth D.
2018-01-01
Many existing brain network distances are based on matrix norms. The element-wise differences may fail to capture underlying topological differences. Further, matrix norms are sensitive to outliers: a few extreme edge weights may severely affect the distance. It is therefore necessary to develop network distances that recognize topology. In this paper, we introduce the Gromov-Hausdorff (GH) and Kolmogorov-Smirnov (KS) distances. GH-distance is often used in persistent-homology-based brain network models. The superior performance of KS-distance is contrasted against matrix norms and GH-distance in random network simulations with known ground truths. The KS-distance is then applied in characterizing a multimodal MRI and DTI study of maltreated children.
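A simplified stand-in for the KS-distance: the two-sample Kolmogorov-Smirnov statistic computed on the networks' edge-weight distributions (the paper defines it on monotone topological features such as persistence barcodes; this sketch only conveys the mechanics).

```python
import numpy as np

# KS-type network distance: maximum gap between the empirical CDFs of the
# two networks' (upper-triangular) edge weights.

def ks_network_distance(A, B):
    wa = A[np.triu_indices_from(A, k=1)]
    wb = B[np.triu_indices_from(B, k=1)]
    grid = np.sort(np.concatenate([wa, wb]))
    cdf_a = np.searchsorted(np.sort(wa), grid, side="right") / wa.size
    cdf_b = np.searchsorted(np.sort(wb), grid, side="right") / wb.size
    return np.max(np.abs(cdf_a - cdf_b))

rng = np.random.default_rng(1)
n = 20
A = rng.uniform(0, 1.0, (n, n)); A = (A + A.T) / 2   # symmetric weights
B = rng.uniform(0, 1.3, (n, n)); B = (B + B.T) / 2
print(ks_network_distance(A, B))
```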
Bang, Yoonsik; Kim, Jiyoung; Yu, Kiyun
2016-01-01
Wearable and smartphone technology innovations have propelled the growth of Pedestrian Navigation Services (PNS). PNS need a map-matching process to project a user’s locations onto maps. Many map-matching techniques have been developed for vehicle navigation services. These techniques are inappropriate for PNS because pedestrians move, stop, and turn in different ways compared to vehicles. In addition, the base map data for pedestrians are more complicated than for vehicles. This article proposes a new map-matching method for locating Global Positioning System (GPS) trajectories of pedestrians onto road network datasets. The theory underlying this approach is based on the Fréchet distance, one of the measures of geometric similarity between two curves. The Fréchet distance approach can provide reasonable matching results because two linear trajectories are parameterized with the time variable. Then we improved the method to be adaptive to the positional error of the GPS signal. We used an adaptation coefficient to adjust the search range for every input signal, based on the assumption of auto-correlation between consecutive GPS points. To reduce errors in matching, the reliability index was evaluated in real time for each match. To test the proposed map-matching method, we applied it to GPS trajectories of pedestrians and the road network data. We then assessed the performance by comparing the results with reference datasets. Our proposed method performed better with test data when compared to a conventional map-matching technique for vehicles. PMID:27782091
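The underlying measure in its discrete form, computed by dynamic programming over two point sequences (the adaptive search range and reliability index of the paper are beyond this sketch):

```python
import numpy as np

# Discrete Fréchet distance between two polylines P and Q: the minimal
# "leash length" over all monotone traversals, built up by dynamic
# programming on the pairwise point distances.

def discrete_frechet(P, Q):
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    n, m = len(P), len(Q)
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)
    ca = np.full((n, m), np.inf)
    ca[0, 0] = d[0, 0]
    for i in range(1, n):
        ca[i, 0] = max(ca[i - 1, 0], d[i, 0])
    for j in range(1, m):
        ca[0, j] = max(ca[0, j - 1], d[0, j])
    for i in range(1, n):
        for j in range(1, m):
            ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]),
                           d[i, j])
    return ca[-1, -1]

gps = [(0, 0), (1, 0.2), (2, -0.1), (3, 0.05)]   # noisy GPS trajectory
road = [(0, 0), (1, 0), (2, 0), (3, 0)]          # candidate road segment
print(discrete_frechet(gps, road))               # small value -> good match
```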
A chemogenomic analysis of the human proteome: application to enzyme families.
Bernasconi, Paul; Chen, Min; Galasinski, Scott; Popa-Burke, Ioana; Bobasheva, Anna; Coudurier, Louis; Birkos, Steve; Hallam, Rhonda; Janzen, William P
2007-10-01
Sequence-based phylogenies (SBP) are well-established tools for describing relationships between proteins. They have been used extensively to predict the behavior and sensitivity toward inhibitors of enzymes within a family. The utility of this approach diminishes when comparing proteins with little sequence homology. Even within an enzyme family, SBPs must be complemented by an orthogonal method that is independent of sequence to better predict enzymatic behavior. A chemogenomic approach is demonstrated here that uses the inhibition profile of a 130,000 diverse molecule library to uncover relationships within a set of enzymes. The profile is used to construct a semimetric additive distance matrix. This matrix, in turn, defines a sequence-independent phylogeny (SIP). The method was applied to 97 enzymes (kinases, proteases, and phosphatases). SIP does not use structural information from the molecules used for establishing the profile, thus providing a more heuristic method than the current approaches, which require knowledge of the specific inhibitor's structure. Within enzyme families, SIP shows a good overall correlation with SBP. More interestingly, SIP uncovers distances within families that are not recognizable by sequence-based methods. In addition, SIP allows the determination of distance between enzymes with no sequence homology, thus uncovering novel relationships not predicted by SBP. This chemogenomic approach, used in conjunction with SBP, should prove to be a powerful tool for choosing target combinations for drug discovery programs as well as for guiding the selection of profiling and liability targets.
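A minimal sketch of building a sequence-independent tree from inhibition profiles; correlation distance and average-linkage clustering are illustrative choices, not the paper's exact additive-distance construction.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

# Each enzyme is a vector of % inhibition over the screened library; a
# profile distance matrix then drives tree building, with no sequence input.

rng = np.random.default_rng(7)
n_enzymes, n_compounds = 6, 1000
profiles = rng.uniform(0, 100, (n_enzymes, n_compounds))
profiles[3] = profiles[0] + rng.normal(0, 5, n_compounds)  # related pair

corr = np.corrcoef(profiles)
dist = 1.0 - corr                                  # correlation distance
condensed = dist[np.triu_indices(n_enzymes, k=1)]  # condensed form
tree = linkage(condensed, method="average")
print(tree)   # enzymes 0 and 3 merge first, as their profiles agree
```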
Erlyana, Erlyana; Damrongplasit, Kannika Kampanya; Melnick, Glenn
2011-05-01
This study investigates the importance of medical fees and distance to health care providers in individuals' decisions to seek care in developing countries. The estimation method used a mixed logit model applied to data from the third wave of the Indonesian Family Life Survey (2000). The key variables of interest include the medical fee and the distance to different types of health care provider, together with individual characteristics. Urban dwellers' choices of health care provider are sensitive to the monetary cost of medical care, as measured by the medical fee, but not to distance. Those who reside in rural areas are sensitive to the non-medical component of the cost of care, as measured by travel distance, but not to the medical fee. In light of these findings, policy makers should consider different sets of policy instruments when attempting to expand health service usage in urban and rural areas of Indonesia. To increase access in urban areas, we recommend expansion of health insurance coverage in order to lower out-of-pocket medical expenditures. For rural areas, expansion of medical infrastructure to reduce commuting distances and costs will be needed to increase utilization. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Visual homing with a pan-tilt based stereo camera
NASA Astrophysics Data System (ADS)
Nirmal, Paramesh; Lyons, Damian M.
2013-01-01
Visual homing is a navigation method that compares a stored image of the goal location with the current image (current view) to determine how to navigate to the goal location. It is theorized that insects, such as ants and bees, employ visual homing methods to return to their nest. Visual homing has been applied to autonomous robot platforms using two main approaches, holistic and feature-based, both of which aim at determining the distance and direction to the goal location. Navigational algorithms using Scale Invariant Feature Transforms (SIFT) have gained great popularity in recent years due to the robustness of the feature operator. Churchill and Vardy have developed a visual homing method using scale change information from SIFT (Homing in Scale Space, HiSS), which uses SIFT feature scale change information to determine the distance between the robot and the goal location. Since the scale component is discrete with a small range of values, the result is a rough measurement with limited accuracy. We have developed a method that uses stereo data, resulting in better homing performance. Our approach utilizes a pan-tilt based stereo camera, which is used to build composite wide-field images. We use the wide-field images combined with stereo data obtained from the stereo camera to extend the keypoint vector with a new parameter, depth (z). Using this information, our algorithm determines the distance and orientation from the robot to the goal location. We compare our method with HiSS in a set of indoor trials using a Pioneer 3-AT robot equipped with a BumbleBee2 stereo camera, and we evaluate the performance of both methods using a set of performance measures described in this paper.
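A hedged sketch of the depth-augmented homing idea: stereo disparity gives each matched keypoint a depth, and a goal vector follows from landmark positions in the two views. The simple averaging, the no-rotation assumption, and all numbers are illustrative, not the authors' full algorithm.

```python
import numpy as np

# Depth from disparity (z = f * B / d), then a homing vector estimated from
# matched landmarks seen in the current and goal views.

def depth_from_disparity(disparity_px, focal_px=800.0, baseline_m=0.12):
    return focal_px * baseline_m / np.asarray(disparity_px, float)

def homing_vector(bearings_now, depths_now, bearings_goal, depths_goal):
    """Each landmark gives (goal - current) camera displacement; average."""
    p_now = depths_now[:, None] * np.column_stack(
        (np.cos(bearings_now), np.sin(bearings_now)))
    p_goal = depths_goal[:, None] * np.column_stack(
        (np.cos(bearings_goal), np.sin(bearings_goal)))
    t = (p_now - p_goal).mean(axis=0)   # displacement toward the goal
    return np.linalg.norm(t), np.degrees(np.arctan2(t[1], t[0]))

b_now = np.radians([10.0, -5.0, 30.0])        # bearings, current view
z_now = depth_from_disparity([40, 55, 25])    # depths, current view
b_goal = np.radians([12.0, -8.0, 36.0])       # bearings, goal view
z_goal = depth_from_disparity([48, 70, 30])   # depths, goal view
print(homing_vector(b_now, z_now, b_goal, z_goal))  # (distance, direction)
```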
TH-EF-207A-05: Feasibility of Applying SMEIR Method On Small Animal 4D Cone Beam CT Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhong, Y; Zhang, Y; Shao, Y
Purpose: Small animal cone beam CT imaging has been widely used in preclinical research. Due to the higher respiratory and heart rates of small animals, motion blurring is inevitable and needs to be corrected in the reconstruction. The simultaneous motion estimation and image reconstruction (SMEIR) method, which uses projection images of all phases, has proved effective in motion model estimation and can reconstruct motion-compensated images. We demonstrate the application of SMEIR to small animal 4D cone beam CT imaging by computer simulations on a digital rat model. Methods: The small animal CBCT imaging system was simulated with a source-to-detector distance of 300 mm and a source-to-object distance of 200 mm. A sequence of rat phantoms was generated with 0.4 mm³ voxel size. The respiratory cycle was taken as 1.0 second, and the motions were simulated with a diaphragm motion of 2.4 mm and an anterior-posterior expansion of 1.6 mm. The projection images were calculated using a ray-tracing method, and 4D-CBCT images were reconstructed using the SMEIR and FDK methods. The SMEIR method iterates over two alternating steps: 1) motion-compensated iterative image reconstruction using projections from all respiration phases, and 2) motion model estimation directly from projections through a 2D-3D deformable registration of the image obtained in the first step to the projection images of the other phases. Results: The images reconstructed using the SMEIR method reproduced the features of the original phantom. Projections from the same phase were also reconstructed using the FDK method. Compared with the FDK results, the SMEIR images substantially improve image quality with minimal artifacts. Conclusion: We demonstrate that it is viable to apply the SMEIR method to reconstruct small animal 4D-CBCT images.
Sánchez, Daniel; Johnson, Nick; Li, Chao; Novak, Pavel; Rheinlaender, Johannes; Zhang, Yanjun; Anand, Uma; Anand, Praveen; Gorelik, Julia; Frolenkov, Gregory I.; Benham, Christopher; Lab, Max; Ostanin, Victor P.; Schäffer, Tilman E.; Klenerman, David; Korchev, Yuri E.
2008-01-01
Mechanosensitivity in living biological tissue is a study area of increasing importance, but investigative tools are often inadequate. We have developed a noncontact nanoscale method to apply quantified positive and negative force at defined positions to the soft responsive surface of living cells. The method uses applied hydrostatic pressure (0.1–150 kPa) through a pipette, while the pipette-sample separation is kept constant above the cell surface using ion conductance based distance feedback. This prevents any surface contact, or contamination of the pipette, allowing repeated measurements. We show that we can probe the local mechanical properties of living cells using increasing pressure, and hence measure the nanomechanical properties of the cell membrane and the underlying cytoskeleton in a variety of cells (erythrocytes, epithelium, cardiomyocytes and neurons). Because the cell surface can first be imaged without pressure, it is possible to relate the mechanical properties to the local cell topography. This method is well suited to probe the nanomechanical properties and mechanosensitivity of living cells. PMID:18515369
Assessment of gene order computing methods for Alzheimer's disease
2013-01-01
Background Computational genomics of Alzheimer disease (AD), the most common form of senile dementia, is a nascent field in AD research. The field includes AD gene clustering by computing gene order, which generates higher-quality gene clustering patterns than most other clustering methods. However, there are few available gene order computing methods, such as the Genetic Algorithm (GA) and Ant Colony Optimization (ACO), and their performance in gene order computation using AD microarray data is not known. We thus set forth to evaluate the performance of current gene order computing methods with different distance formulas, and to identify additional features associated with gene order computation. Methods Using different distance formulas (Pearson distance, Euclidean distance, and squared Euclidean distance) and other conditions, gene orders were calculated by the ACO and GA (including standard GA and improved GA) methods, respectively. The qualities of the gene orders were compared, and new features from the calculated gene orders were identified. Results Compared to the GA methods tested in this study, ACO fits the AD microarray data best when calculating gene order. In addition, the following features were revealed: different distance formulas generated gene orders of different quality, and the commonly used Pearson distance was not the best distance formula when used with either the GA or ACO methods on AD microarray data. Conclusion Compared with Pearson distance and Euclidean distance, the squared Euclidean distance generated the best-quality gene order computed by the GA and ACO methods. PMID:23369541
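For reference, the three distance formulas compared in the study, applied to a pair of expression vectors:

```python
import numpy as np

# Pearson, Euclidean, and squared Euclidean distances between two
# gene-expression vectors (synthetic values for illustration).

def pearson_distance(x, y):
    return 1.0 - np.corrcoef(x, y)[0, 1]

def euclidean_distance(x, y):
    return np.linalg.norm(x - y)

def squared_euclidean_distance(x, y):
    return np.sum((x - y) ** 2)

g1 = np.array([2.1, 0.5, 3.3, 1.8, 0.9])
g2 = np.array([1.9, 0.7, 2.8, 2.0, 1.1])
for f in (pearson_distance, euclidean_distance, squared_euclidean_distance):
    print(f.__name__, round(f(g1, g2), 4))
```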
Birch, Joanne L.; Walsh, Neville G.; Cantrill, David J.; Holmes, Gareth D.; Murphy, Daniel J.
2017-01-01
In Australia, Poaceae tribe Poeae are represented by 19 genera and 99 species, including economically and environmentally important native and introduced pasture grasses [e.g. Poa (Tussock-grasses) and Lolium (Ryegrasses)]. We used this tribe, which are well characterised in regards to morphological diversity and evolutionary relationships, to test the efficacy of DNA barcoding methods. A reference library was generated that included 93.9% of species in Australia (408 individuals, x¯ = 3.7 individuals per species). Molecular data were generated for official plant barcoding markers (rbcL, matK) and the nuclear ribosomal internal transcribed spacer (ITS) region. We investigated accuracy of specimen identifications using distance- (nearest neighbour, best-close match, and threshold identification) and tree-based (maximum likelihood, Bayesian inference) methods and applied species discovery methods (automatic barcode gap discovery, Poisson tree processes) based on molecular data to assess congruence with recognised species. Across all methods, success rate for specimen identification of genera was high (87.5–99.5%) and of species was low (25.6–44.6%). Distance- and tree-based methods were equally ineffective in providing accurate identifications for specimens to species rank (26.1–44.6% and 25.6–31.3%, respectively). The ITS marker achieved the highest success rate for specimen identification at both generic and species ranks across the majority of methods. For distance-based analyses the best-close match method provided the greatest accuracy for identification of individuals with a high percentage of “correct” (97.6%) and a low percentage of “incorrect” (0.3%) generic identifications, based on the ITS marker. For tribe Poeae, and likely for other grass lineages, sequence data in the standard DNA barcode markers are not variable enough for accurate identification of specimens to species rank. For recently diverged grass species similar challenges are encountered in the application of genetic and morphological data to species delimitations, with taxonomic signal limited by extensive infra-specific variation and shared polymorphisms among species in both data types. PMID:29084279
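A minimal sketch of the best-close match criterion used for specimen identification; the p-distance, the 1% threshold, and the "species|individual" naming convention are illustrative choices.

```python
# Best-close match: assign the query the species of its nearest reference,
# but only if that neighbour lies within a distance threshold; equally
# close references from different species yield "ambiguous".

def p_distance(s1, s2):
    """Proportion of differing sites between two aligned sequences."""
    return sum(a != b for a, b in zip(s1, s2)) / len(s1)

def best_close_match(query, references, threshold=0.01):
    dists = {name: p_distance(query, seq) for name, seq in references.items()}
    best = min(dists.values())
    if best > threshold:
        return "no identification"
    species = {name.split("|")[0] for name, d in dists.items() if d == best}
    return species.pop() if len(species) == 1 else "ambiguous"

refs = {"Poa_A|1":    "ACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTAC",
        "Poa_A|2":    "ACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTAT",
        "Lolium_B|1": "ACGTACGTTCGTACGTACGAACGTACGTACCTACGTACGTACGTACGTAC"}
query = "ACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTAC"
print(best_close_match(query, refs))   # Poa_A
```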
Automated Stitching of Microtubule Centerlines across Serial Electron Tomograms
Weber, Britta; Tranfield, Erin M.; Höög, Johanna L.; Baum, Daniel; Antony, Claude; Hyman, Tony; Verbavatz, Jean-Marc; Prohaska, Steffen
2014-01-01
Tracing microtubule centerlines in serial section electron tomography requires microtubules to be stitched across sections, that is lines from different sections need to be aligned, endpoints need to be matched at section boundaries to establish a correspondence between neighboring sections, and corresponding lines need to be connected across multiple sections. We present computational methods for these tasks: 1) An initial alignment is computed using a distance compatibility graph. 2) A fine alignment is then computed with a probabilistic variant of the iterative closest points algorithm, which we extended to handle the orientation of lines by introducing a periodic random variable to the probabilistic formulation. 3) Endpoint correspondence is established by formulating a matching problem in terms of a Markov random field and computing the best matching with belief propagation. Belief propagation is not generally guaranteed to converge to a minimum. We show how convergence can be achieved, nonetheless, with minimal manual input. In addition to stitching microtubule centerlines, the correspondence is also applied to transform and merge the electron tomograms. We applied the proposed methods to samples from the mitotic spindle in C. elegans, the meiotic spindle in X. laevis, and sub-pellicular microtubule arrays in T. brucei. The methods were able to stitch microtubules across section boundaries in good agreement with experts' opinions for the spindle samples. Results, however, were not satisfactory for the microtubule arrays. For certain experiments, such as an analysis of the spindle, the proposed methods can replace manual expert tracing and thus enable the analysis of microtubules over long distances with reasonable manual effort. PMID:25438148
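A simplified stand-in for the endpoint-correspondence step: plain cost-minimal assignment via the Hungarian algorithm. The paper instead formulates a Markov random field solved with belief propagation, which additionally handles unmatched endpoints; this sketch conveys only the core idea of matching endpoints across a section boundary.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Match line endpoints on the top face of section k to endpoints on the
# bottom face of section k+1, minimizing total distance and rejecting
# matches beyond a distance gate.

def match_endpoints(ends_a, ends_b, max_dist=0.5):
    cost = np.linalg.norm(ends_a[:, None, :] - ends_b[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= max_dist]

ends_a = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.5]])    # section k
ends_b = np.array([[0.1, 0.05], [2.1, 0.45], [5.0, 5.0]])  # section k+1
print(match_endpoints(ends_a, ends_b))   # [(0, 0), (2, 1)]
```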
NASA Astrophysics Data System (ADS)
Sanhouse-García, Antonio J.; Rangel-Peraza, Jesús Gabriel; Bustos-Terrones, Yaneth; García-Ferrer, Alfonso; Mesas-Carrascosa, Francisco J.
2016-02-01
Land cover classification is often based on different characteristics between classes, with great homogeneity within each of them. This cover is obtained through field work or by means of processing satellite images. Field work involves high costs; therefore, digital image processing techniques have become an important alternative for performing this task. However, in some developing countries, and particularly in the Casacoima municipality in Venezuela, there is a lack of geographic information systems due to the lack of updated information and the high costs of software license acquisition. This research proposes a low-cost methodology to develop thematic mapping of local land use and types of coverage in areas with scarce resources. Thematic mapping was developed from CBERS-2 images and spatial information available on the network using open source tools. Supervised per-pixel and per-region classification methods were applied using different classification algorithms and comparing them among themselves. Per-pixel classification was based on the Maxver (maximum likelihood) and Euclidean distance (minimum distance) algorithms, while per-region classification was based on the Bhattacharyya algorithm. Satisfactory results were obtained from the per-region classification, with an overall reliability of 83.93% and a kappa index of 0.81. The Maxver algorithm showed a reliability of 73.36% and a kappa index of 0.69, while Euclidean distance obtained 67.17% and 0.61 for reliability and kappa index, respectively. The proposed methodology proved very useful in cartographic processing and updating, which in turn supports the development of management plans and land management. Hence, open source tools are an economically viable alternative not only for forestry organizations but for the general public, allowing them to develop projects in economically depressed and/or environmentally threatened areas.
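A minimal sketch of the per-pixel minimum-distance (Euclidean) classifier used in the comparison; the band values and training samples below are synthetic.

```python
import numpy as np

# Minimum-distance classifier: represent each class by the mean spectrum
# of its training pixels, then assign every image pixel to the nearest mean.

def train_class_means(samples):
    """samples: {class_name: (n_pixels, n_bands) array of training pixels}"""
    return {c: px.mean(axis=0) for c, px in samples.items()}

def classify(image, means):
    """image: (rows, cols, n_bands); returns class names and a label map."""
    names = list(means)
    centers = np.stack([means[c] for c in names])            # (k, n_bands)
    d = np.linalg.norm(image[..., None, :] - centers, axis=-1)
    return names, np.argmin(d, axis=-1)

rng = np.random.default_rng(3)
train = {"forest": rng.normal([30, 80, 40], 5, (50, 3)),
         "water":  rng.normal([10, 20, 60], 5, (50, 3))}
scene = rng.normal([30, 80, 40], 5, (4, 4, 3))               # mostly forest
names, labels = classify(scene, train_class_means(train))
print(names, labels)
```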
Chen, Gang; Li, Jingyi; Ying, Qi; Sherman, Seth; Perkins, Neil; Rajeshwari, Sundaram; Mendola, Pauline
2014-01-01
In this study, the Community Multiscale Air Quality (CMAQ) model was applied to predict ambient gaseous and particulate concentrations from 2001 to 2010 in 15 hospital referral regions (HRRs) using a 36-km horizontal resolution domain. An inverse-distance-weighting-based method was applied to produce exposure estimates from observation-fused regional pollutant concentration fields, using the differences between observations and predictions at grid cells where air quality monitors were located. Although the raw CMAQ model is capable of producing satisfactory results for O3 and PM2.5 based on EPA guidelines, using the observation data-fusing technique to correct CMAQ predictions leads to significant improvement in model performance for all gaseous and particulate pollutants. Regional average concentrations were calculated using five different methods: 1) inverse distance weighting of observation data alone, 2) raw CMAQ results, 3) observation-fused CMAQ results, 4) population-averaged raw CMAQ results, and 5) population-averaged fused CMAQ results. While the O3 (as well as NOx) monitoring networks in the HRRs are dense enough to provide consistent regional average exposure estimates based on monitoring data alone, PM2.5 observation sites (as well as monitors for CO, SO2, PM10, and PM2.5 components) are usually sparse, and the average concentrations estimated from inverse-distance-interpolated observations, raw CMAQ, and fused CMAQ results can differ significantly. Population-weighted averages should be used to account for spatial variation in pollutant concentration and population density. Using raw CMAQ results or observations alone might lead to significant biases in health outcome analyses. PMID:24747248
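A minimal sketch of the observation-fusing step, assuming inverse distance weighting with power 2 over all monitors (the study's exact weighting choices may differ): interpolate observation-minus-model differences to grid cells, then add the interpolated bias field to the raw model field.

```python
import numpy as np

# Inverse-distance-weighted interpolation of monitor biases, used to
# correct raw model predictions at arbitrary grid cells.

def idw(xy_obs, values, xy_grid, p=2.0, eps=1e-6):
    d = np.linalg.norm(xy_grid[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** p
    return (w * values).sum(axis=1) / w.sum(axis=1)

monitors = np.array([[10.0, 10.0], [40.0, 15.0], [25.0, 40.0]])  # grid km
bias = np.array([2.5, -1.0, 0.8])     # observed minus modeled PM2.5
cells = np.array([[12.0, 12.0], [30.0, 30.0]])
model = np.array([8.0, 11.0])         # raw CMAQ at those cells
fused = model + idw(monitors, bias, cells)
print(fused)
```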
Splint sterilization--a potential registration hazard in computer-assisted surgery.
Figl, Michael; Weber, Christoph; Assadian, Ojan; Toma, Cyril D; Traxler, Hannes; Seemann, Rudolf; Guevara-Rojas, Godoberto; Pöschl, Wolfgang P; Ewers, Rolf; Schicho, Kurt
2012-04-01
Registration of preoperative targeting information for the intraoperative situation is a crucial step in computer-assisted surgical interventions. Point-to-point registration using acrylic splints is among the most frequently used procedures. There are, however, no generally accepted recommendations for sterilization of the splint. An appropriate method for the thermolabile splint would be hydrogen peroxide-based plasma sterilization. This study evaluated the potential deformation of the splint undergoing such sterilization. Deformation was quantified using image-processing methods applied to computed tomographic (CT) volumes before and after sterilization. An acrylic navigation splint was used as the study object. Eight metallic markers placed in the splint were used for registration. Six steel spheres in the mouthpiece were used as targets. Two CT volumes of the splint were acquired before and after 5 sterilization cycles using a hydrogen peroxide sterilizer. Point-to-point registration was applied, and fiducial and target registration errors were computed. Surfaces were extracted from CT scans and Hausdorff distances were derived. Effectiveness of sterilization was determined using Geobacillus stearothermophilus. Fiducial-based registration of CT scans before and after sterilization resulted in a mean fiducial registration error of 0.74 mm; the target registration error in the mouthpiece was 0.15 mm. The Hausdorff distance, describing the maximal deformation of the splint, was 2.51 mm. Ninety percent of point-surface distances were shorter than 0.61 mm, and 95% were shorter than 0.73 mm. No bacterial growth was found after the sterilization process. Hydrogen peroxide-based low-temperature plasma sterilization does not deform the splint, which is the base for correct computer-navigated surgery. Copyright © 2012 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
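For reference, the symmetric Hausdorff distance between two surface point clouds, as used to quantify the maximal splint deformation (a brute-force sketch; large CT surfaces would need spatial indexing):

```python
import numpy as np

# Symmetric Hausdorff distance: the larger of the two directed distances,
# each being the worst-case nearest-neighbour gap from one cloud to the other.

def hausdorff(A, B):
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

pre = np.random.default_rng(0).uniform(0, 50, (500, 3))    # mm, surface pts
post = pre + np.random.default_rng(1).normal(0, 0.2, pre.shape)
print(hausdorff(pre, post))   # small value -> negligible deformation
```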
Modification of the laser triangulation method for measuring the thickness of optical layers
NASA Astrophysics Data System (ADS)
Khramov, V. N.; Adamov, A. A.
2018-04-01
The problem of determining the thickness of thin films by the method of laser triangulation is considered. An expression is derived relating the film thickness to the distance between the focused beams on the photodetector. The method is applicable for measuring thicknesses in the range [0.1, 1] mm. Two individual light marks could be resolved for a minimum film thickness of 0.23 mm; with computer processing of the photographs, a resolution of 0.10 mm was achieved. The obtained results can be used in ophthalmology for express diagnostics during surgical operations on the corneal layer.
Calculation of two dimensional vortex/surface interference using panel methods
NASA Technical Reports Server (NTRS)
Maskew, B.
1980-01-01
The application of panel methods to the calculation of vortex/surface interference characteristics in two dimensional flow was studied over a range of situations starting with the simple case of a vortex above a plane and proceeding to the case of vortex separation from a prescribed point on a thick section. Low order and high order panel methods were examined, but the main factor influencing the accuracy of the solution was the distance between control stations in relation to the height of the vortex above the surface. Improvements over the basic solutions were demonstrated using a technique based on subpanels and an applied doublet distribution.
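The simplest case has a closed-form reference solution by the method of images, which a panel solution can be checked against; the sketch below evaluates the induced surface velocity for two vortex heights, illustrating why accuracy degrades as the vortex approaches the surface relative to the control-station spacing.

```python
import numpy as np

# Point vortex of strength Gamma at (0, h) above the plane y = 0, enforced
# by an image vortex of strength -Gamma at (0, -h). On the wall, the normal
# components cancel and the tangential components add, giving
# u(x) = Gamma * h / (pi * (x^2 + h^2)).

def surface_velocity(x, h, gamma=1.0):
    """Tangential velocity on y = 0 due to the vortex plus its image."""
    return gamma * h / (np.pi * (x**2 + h**2))

x = np.linspace(-5, 5, 11)
for h in (1.0, 0.25):   # a low vortex demands finer control-station spacing
    print(h, np.round(surface_velocity(x, h), 3))
```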
Caneva, Marco; Botticelli, Daniele; Carneiro Martins, Evandro Neto; Caneva, Martina; Lang, Niklaus P; Xavier, Samuel P
2017-12-01
To compare the sequential healing at the interface gap between autologous bone grafts and recipient sites using two fixation techniques. Twenty-four adult male New Zealand white rabbits were used. Two bone grafts were collected from the calvaria and secured to the lateral aspect of the angle of the mandible in each animal. Cortical perforations were performed at the recipient sites; however, no modifications were applied to the grafts to adapt them to the recipient sites. Two fixation techniques, using position or lag screws, were applied by preparing osteotomy holes smaller or larger than the screw diameter, respectively. The animals were sacrificed after 3, 7, 20, and 40 days. After 3 days, the distance between the graft and the recipient site was similar for the two fixations. Owing to the anatomical shapes of the recipient sites and grafts, the distance between the two parts was smaller in the central region (<0.1 mm) than in the external regions of the graft (0.5-0.6 mm). The first evidence of small amounts of new (woven) bone was seen after 7 days, forming from the parent bone; the percentage increased after 20 and 40 days. After 40 days, the grafts were well incorporated within the recipient sites in both groups, without any statistically significant difference. The present study did not show the superiority of one method over the other. Fixation to a recipient site with cortical perforations may be sufficient for incorporating an autologous bone graft even if its adaptation is not perfect, irrespective of the fixation method. Distances of approximately half a millimeter were bridged with newly formed bone. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Central stars of planetary nebulae in the Galactic bulge
NASA Astrophysics Data System (ADS)
Hultzsch, P. J. N.; Puls, J.; Méndez, R. H.; Pauldrach, A. W. A.; Kudritzki, R.-P.; Hoffmann, T. L.; McCarthy, J. K.
2007-06-01
Context: Optical high-resolution spectra of five central stars of planetary nebulae (CSPN) in the Galactic bulge have been obtained with Keck/HIRES in order to derive their parameters. Since the distance of the objects is quite well known, such a method has the advantage that stellar luminosities and masses can in principle be determined without relying on theoretical relations between both quantities. Aims: By alternatively combining the results of our spectroscopic investigation with evolutionary tracks, we obtain so-called spectroscopic distances, which can be compared with the known (average) distance of the bulge-CSPN. This offers the possibility of testing the validity of model atmospheres and present-day post-AGB evolution. Methods: We analyze optical H/He profiles of five Galactic bulge CSPN (plus one comparison object) by means of profile fitting based on state-of-the-art non-LTE modeling tools, to constrain their basic atmospheric parameters (Teff, log g, helium abundance and wind strength). Masses and other stellar radius dependent quantities are obtained from both the known distances and from evolutionary tracks, and the results from both approaches are compared. Results: The major result of the present investigation is that the derived spectroscopic distances depend crucially on the applied reddening law. Assuming either standard reddening or values based on radio-Hβ extinctions, we find a mean distance of 9.0±1.6 kpc and 12.2±2.1 kpc, respectively. An “average extinction law” leads to a distance of 10.7±1.2 kpc, which is still considerably larger than the Galactic center distance of 8 kpc. In all cases, however, we find a remarkable internal agreement of the individual spectroscopic distances of our sample objects, within ±10% to ±15% for the different reddening laws. Conclusions: Due to the uncertain reddening correction, the analysis presented here cannot yet be regarded as a consistency check for our method, and a rigorous test of the CSPN evolution theory becomes only possible if this problem has been solved. The data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation. Appendix A is only available in electronic form at http://www.aanda.org
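A generic textbook sketch of the spectroscopic-distance chain (radius from mass and gravity, luminosity from radius and Teff, distance from the dereddened magnitude); the bolometric correction and all inputs are illustrative, not the paper's calibration.

```python
import numpy as np

# R^2 = G M / g, L = 4 pi R^2 sigma Teff^4, then the distance modulus with
# an assumed extinction A_V and bolometric correction BC.

G = 6.674e-8            # cgs
SIGMA = 5.670e-5        # cgs Stefan-Boltzmann constant
MSUN, LSUN = 1.989e33, 3.828e33
M_BOL_SUN = 4.74

def spectroscopic_distance(teff, logg, mass_msun, m_v, a_v, bc):
    r = np.sqrt(G * mass_msun * MSUN / 10**logg)     # radius [cm]
    lum = 4 * np.pi * r**2 * SIGMA * teff**4         # luminosity [erg/s]
    m_bol_abs = M_BOL_SUN - 2.5 * np.log10(lum / LSUN)
    m_v_abs = m_bol_abs - bc                         # absolute V magnitude
    return 10 ** ((m_v - a_v - m_v_abs + 5) / 5)     # distance [pc]

# Illustrative hot CSPN: Teff = 50 kK, log g = 4.5, 0.6 Msun, V = 15.5
print(spectroscopic_distance(5.0e4, 4.5, 0.6, m_v=15.5, a_v=1.5, bc=-4.3))
```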
NASA Astrophysics Data System (ADS)
Morel, Eneas N.; Russo, Nélida A.; Torga, Jorge R.; Duchowicz, Ricardo
2016-01-01
We used an interferometric technique based on typical optical coherence tomography (OCT) schemes for measuring distances of industrial interest. The system employed as a light source a tunable erbium-doped fiber laser of ~20-pm bandwidth with a tuning range between 1520 and 1570 nm; its coherence length is long enough to enable long-depth-range imaging. A set of fiber Bragg gratings was used as a self-calibration method, which has the advantage of being a passive system that requires no additional electronic devices. The proposed configuration and the coherence length of the laser enlarge the range of maximum distances that can be measured with the common OCT configuration while maintaining good axial resolution. A measuring range slightly greater than 17 cm was determined. The system performance was evaluated by studying the repeatability and axial resolution of the results when the same optical path difference was measured. Additionally, the thickness of a semitransparent medium was measured.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fleury, Pierre; Uzan, Jean-Philippe; Larena, Julien, E-mail: fleury@iap.fr, E-mail: j.larena@ru.ac.za, E-mail: uzan@iap.fr
On the scale of the light beams subtended by small sources, e.g. supernovae, matter cannot be accurately described as a fluid, which questions the applicability of standard cosmic lensing to those cases. In this article, we propose a new formalism to deal with small-scale lensing as a diffusion process: the Sachs and Jacobi equations governing the propagation of narrow light beams are treated as Langevin equations. We derive the associated Fokker-Planck-Kolmogorov equations, and use them to deduce general analytical results on the mean and dispersion of the angular distance. This formalism is applied to random Einstein-Straus Swiss-cheese models, allowing us to: (1) show an explicit example of the involved calculations; (2) check the validity of the method against both ray-tracing simulations and direct numerical integration of the Langevin equation. As a byproduct, we obtain a post-Kantowski-Dyer-Roeder approximation, accounting for the effect of tidal distortions on the angular distance, in excellent agreement with numerical results. Besides, the dispersion of the angular distance is correctly reproduced in some regimes.
The theory of stochastic cosmological lensing
NASA Astrophysics Data System (ADS)
Fleury, Pierre; Larena, Julien; Uzan, Jean-Philippe
2015-11-01
On the scale of the light beams subtended by small sources, e.g. supernovae, matter cannot be accurately described as a fluid, which questions the applicability of standard cosmic lensing to those cases. In this article, we propose a new formalism to deal with small-scale lensing as a diffusion process: the Sachs and Jacobi equations governing the propagation of narrow light beams are treated as Langevin equations. We derive the associated Fokker-Planck-Kolmogorov equations, and use them to deduce general analytical results on the mean and dispersion of the angular distance. This formalism is applied to random Einstein-Straus Swiss-cheese models, allowing us to: (1) show an explicit example of the involved calculations; (2) check the validity of the method against both ray-tracing simulations and direct numerical integration of the Langevin equation. As a byproduct, we obtain a post-Kantowski-Dyer-Roeder approximation, accounting for the effect of tidal distortions on the angular distance, in excellent agreement with numerical results. Besides, the dispersion of the angular distance is correctly reproduced in some regimes.
NASA Astrophysics Data System (ADS)
Song, Yongchen; Hao, Min; Zhao, Yuechao; Zhang, Liang
2014-12-01
In this study, the dual-chamber pressure decay method and magnetic resonance imaging (MRI) were used to dynamically visualize the gas diffusion process in liquid-saturated porous media, and the concentration-distance relationships for gas diffusing into liquid-saturated porous media at different times were obtained by quantitative analysis of the MR images. A non-iterative finite volume method was successfully applied to calculate the local gas diffusion coefficient in liquid-saturated porous media. The results agreed very well with those of the conventional pressure decay method, demonstrating that the method is feasible for determining the local diffusion coefficient of gas in liquid-saturated porous media at different times during the diffusion process.
NASA Astrophysics Data System (ADS)
Padmanabhan, Nikhil; Xu, Xiaoying; Eisenstein, Daniel J.; Scalzo, Richard; Cuesta, Antonio J.; Mehta, Kushal T.; Kazin, Eyal
2012-12-01
We present the first application of density field reconstruction to a galaxy survey, undoing the smoothing of the baryon acoustic oscillation (BAO) feature due to non-linear gravitational evolution and thereby improving the precision of the distance measurements possible. We apply the reconstruction technique to the clustering of galaxies from the Sloan Digital Sky Survey (SDSS) Data Release 7 (DR7) luminous red galaxy (LRG) sample, sharpening the BAO feature and achieving a 1.9 per cent measurement of the distance to z = 0.35. We update the reconstruction algorithm of Eisenstein et al. to account for the effects of survey geometry as well as redshift-space distortions and validate it on 160 LasDamas simulations. We demonstrate that reconstruction sharpens the BAO feature in the angle-averaged galaxy correlation function, reducing the non-linear smoothing scale Σ_nl from 8.1 to 4.4 Mpc h⁻¹. Reconstruction also significantly reduces the effects of redshift-space distortions at the BAO scale, isotropizing the correlation function. This sharpened BAO feature yields an unbiased distance estimate (<0.2 per cent) and reduces the scatter from 3.3 to 2.1 per cent. We demonstrate the robustness of these results to the various reconstruction parameters, including the smoothing scale, the galaxy bias and the linear growth rate. Applying this reconstruction algorithm to the SDSS LRG DR7 sample improves the significance of the BAO feature in these data from 3.3σ for the unreconstructed correlation function to 4.2σ after reconstruction. We estimate a relative distance scale D_V/r_s to z = 0.35 of 8.88 ± 0.17, where r_s is the sound horizon and D_V ≡ (D_A²H⁻¹)^{1/3} is a combination of the angular diameter distance D_A and Hubble parameter H. Assuming a sound horizon of 154.25 Mpc, this translates into a distance measurement D_V(z = 0.35) = 1.356 ± 0.025 Gpc. We find that reconstruction reduces the distance error in the DR7 sample from 3.5 to 1.9 per cent, equivalent to a survey with three times the volume of SDSS.
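The printed definition of D_V above is compressed; the fuller convention common in the BAO literature (e.g., Eisenstein et al. 2005) includes a cz(1+z)² factor. A minimal sketch under that assumption, which also reproduces the abstract's unit conversion from D_V/r_s to D_V:

```python
# Minimal sketch assuming the standard dilation-distance convention
# D_V = [(1+z)^2 * D_A^2 * c*z / H]^(1/3); all numbers are illustrative.
C_KM_S = 299792.458  # speed of light [km/s]

def dilation_distance_mpc(d_a_mpc, h_km_s_per_mpc, z):
    """D_V in Mpc from the angular diameter distance D_A [Mpc] and H(z) [km/s/Mpc]."""
    return ((1 + z) ** 2 * d_a_mpc ** 2 * C_KM_S * z / h_km_s_per_mpc) ** (1 / 3)

# Reproducing the abstract's conversion: D_V / r_s = 8.88 with r_s = 154.25 Mpc
print(8.88 * 154.25 / 1000)  # ~1.37 Gpc, within the quoted 1.356 +/- 0.025 Gpc
```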
Volkmann, Niels
2004-01-01
Reduced representation templates are used in a real-space pattern matching framework to facilitate automatic particle picking from electron micrographs. The procedure consists of five parts. First, reduced templates are constructed either from models or directly from the data. Second, a real-space pattern matching algorithm is applied using the reduced representations as templates. Third, peaks are selected from the resulting score map using peak-shape characteristics. Fourth, the surviving peaks are tested for distance constraints. Fifth, a correlation-based outlier screening is applied. Test applications to a data set of keyhole limpet hemocyanin particles indicate that the method is robust and reliable.
Handwritten document age classification based on handwriting styles
NASA Astrophysics Data System (ADS)
Ramaiah, Chetan; Kumar, Gaurav; Govindaraju, Venu
2012-01-01
Handwriting styles are constantly changing over time. We approach the novel problem of estimating the approximate age of historical handwritten documents from their handwriting styles. This system will have many applications in handwritten document processing engines, where specialized processing techniques can be applied based on the estimated age of the document. We propose to learn a distribution over styles across centuries using topic models and to apply a classifier over the learned weights in order to estimate the approximate age of the documents. We present a comparison of different distance metrics, such as the Euclidean distance and the Hellinger distance, within this application.
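For topic-weight vectors of the kind a topic model produces, the two metrics compared above can be computed directly. A minimal sketch (the vectors and their length are made-up examples):

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete distributions (topic weights)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()          # normalize to valid distributions
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def euclidean(p, q):
    return np.linalg.norm(np.asarray(p, float) - np.asarray(q, float))

doc_a = [0.70, 0.20, 0.10]   # hypothetical topic weights for one document
doc_b = [0.10, 0.30, 0.60]
print(hellinger(doc_a, doc_b), euclidean(doc_a, doc_b))
```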
Extraction of breast lesions from ultrasound imagery: Bhattacharyya gradient flow approach
NASA Astrophysics Data System (ADS)
Torkaman, Mahsa; Sandhu, Romeil; Tannenbaum, Allen
2018-03-01
Breast cancer is one of the most commonly diagnosed neoplasms among American women and the second leading cause of death among women worldwide. In order to reduce the mortality rate and the cost of treatment, early diagnosis and treatment are essential. Accurate and reliable diagnosis is required to ensure the most effective treatment, and a second opinion is often advisable. In this paper, we address the problem of breast lesion detection from ultrasound imagery by means of active contours, whose evolution is driven by maximizing the Bhattacharyya distance between probability density functions (PDFs). The proposed method was applied to ultrasound breast imagery, and the lesion boundary was obtained by maximizing the distance-based energy functional such that the maximum (optimal contour) is attained at the boundary of the potential lesion. We compared the results of the proposed method quantitatively, using the Dice coefficient (similarity index), to the well-known GrowCut segmentation method and demonstrated that the Bhattacharyya approach outperforms GrowCut in most cases.
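The driving term is the Bhattacharyya distance between the intensity PDFs inside and outside the evolving contour. A minimal sketch over discrete histograms (the bin count and synthetic data are illustrative):

```python
import numpy as np

def bhattacharyya_distance(p, q, eps=1e-12):
    """Bhattacharyya distance between two discrete PDFs (e.g., intensity
    histograms inside and outside an evolving contour)."""
    p = np.asarray(p, float); q = np.asarray(q, float)
    p /= p.sum(); q /= q.sum()
    bc = np.sum(np.sqrt(p * q))          # Bhattacharyya coefficient in [0, 1]
    return -np.log(bc + eps)             # grows as the two PDFs separate

inside = np.histogram(np.random.normal(0.3, 0.05, 1000), bins=32, range=(0, 1))[0]
outside = np.histogram(np.random.normal(0.7, 0.05, 1000), bins=32, range=(0, 1))[0]
print(bhattacharyya_distance(inside, outside))
```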
An effective fuzzy kernel clustering analysis approach for gene expression data.
Sun, Lin; Xu, Jiucheng; Yin, Jiaojiao
2015-01-01
Fuzzy clustering is an important tool for analyzing microarray data. A major problem in applying fuzzy clustering methods to microarray gene expression data is the choice of parameters, namely the cluster number and centers. This paper proposes a new approach to fuzzy kernel clustering analysis (FKCA) that identifies the desired cluster number and obtains steadier results for gene expression data. First, to optimize characteristic differences and estimate the optimal cluster number, a Gaussian kernel function is introduced to improve the spectrum analysis method (SAM). By combining subtractive clustering with the max-min distance mean, a maximum distance method (MDM) is proposed to determine cluster centers. The corresponding steps of the improved SAM (ISAM) and MDM are then given, and their superiority and stability are illustrated through experimental comparisons on gene expression data. Finally, by introducing ISAM and MDM into FKCA, an effective improved FKCA algorithm is proposed. Experimental results from public gene expression data and the UCI database show that the proposed algorithms are feasible for cluster analysis, and their clustering accuracy is higher than that of other related clustering algorithms.
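MDM combines subtractive clustering with the max-min distance rule; the core max-min step on its own can be sketched as follows (a simplified stand-in for the paper's full MDM, with toy data):

```python
import numpy as np

def max_min_centers(X, k):
    """Pick k cluster centers by the max-min distance rule: each new center
    is the point farthest from its nearest already-chosen center."""
    centers = [X[0]]                       # seed with an arbitrary first point
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])    # farthest-from-nearest-center point
    return np.array(centers)

X = np.random.rand(200, 2)                 # toy stand-in for expression profiles
print(max_min_centers(X, 3))
```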
NASA Astrophysics Data System (ADS)
Nekuchaev, A. O.; Shuteev, S. A.
2014-04-01
A new method of data transmission in DWDM systems over existing long-distance fiber-optic communication lines is proposed. The existing method uses, e.g., 32 wavelengths in the NRZ code with an average power of 16 conventional units (16 ones and 16 zeros on average) and a transmission rate of 32 bits/cycle. In the new method, one of 124 wavelengths, each with a duration of one cycle and a capacity of 4 bits, is transmitted at every instant of 1/16 of a cycle (at any time instant, no more than 16 distinct wavelengths are required), giving an average power of 15 conventional units and a rate of 64 bits/cycle. Cross modulation and double Rayleigh scattering are significantly decreased owing to the uniform distribution of power over time at different wavelengths. The time redundancy (forward error correction (FEC)) is about 7% and allows one to achieve a coding gain of about 6 dB by detecting and removing erasures and errors simultaneously.
A Minimal Path Searching Approach for Active Shape Model (ASM)-based Segmentation of the Lung.
Guo, Shengwen; Fei, Baowei
2009-03-27
We are developing a minimal path searching method for active shape model (ASM)-based segmentation for detection of lung boundaries on digital radiographs. With the conventional ASM method, the position and shape parameters of the model points are iteratively refined and the target points are updated by the least-Mahalanobis-distance criterion. We propose an improved searching strategy that extends the searching points over a fan-shaped region instead of along the normal direction. A minimal path (MP) deformable model is applied to drive the searching procedure. A statistical shape prior model is incorporated into the segmentation. To preserve the smoothness of the shape, a smoothness constraint is imposed on the deformable model. To quantitatively assess the ASM-MP segmentation, we compare the automatic segmentation with manual segmentation for 72 digitized lung radiographs. The distance error between the ASM-MP and manual segmentation is 1.75 ± 0.33 pixels, while the error is 1.99 ± 0.45 pixels for the ASM. Our results demonstrate that our ASM-MP method can accurately segment the lung on digital radiographs.
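The least-Mahalanobis-distance criterion scores each candidate gray-level profile against statistics learned from training profiles. A minimal sketch with made-up profile data (the squared form is used, which is equivalent for ranking candidates):

```python
import numpy as np

def mahalanobis_sq(g, g_mean, cov):
    """Squared Mahalanobis distance of a candidate gray-level profile g
    from the training mean profile, as used to pick the best target point."""
    diff = g - g_mean
    return float(diff @ np.linalg.inv(cov) @ diff)

rng = np.random.default_rng(0)
train = rng.normal(size=(50, 7))           # 50 training profiles of length 7
mean, cov = train.mean(axis=0), np.cov(train, rowvar=False)
candidates = rng.normal(size=(5, 7))       # candidate profiles along the search region
best = min(range(5), key=lambda i: mahalanobis_sq(candidates[i], mean, cov))
print("selected candidate:", best)
```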
A minimal path searching approach for active shape model (ASM)-based segmentation of the lung
NASA Astrophysics Data System (ADS)
Guo, Shengwen; Fei, Baowei
2009-02-01
We are developing a minimal path searching method for active shape model (ASM)-based segmentation for detection of lung boundaries on digital radiographs. With the conventional ASM method, the position and shape parameters of the model points are iteratively refined and the target points are updated by the least-Mahalanobis-distance criterion. We propose an improved searching strategy that extends the searching points over a fan-shaped region instead of along the normal direction. A minimal path (MP) deformable model is applied to drive the searching procedure. A statistical shape prior model is incorporated into the segmentation. To preserve the smoothness of the shape, a smoothness constraint is imposed on the deformable model. To quantitatively assess the ASM-MP segmentation, we compare the automatic segmentation with manual segmentation for 72 digitized lung radiographs. The distance error between the ASM-MP and manual segmentation is 1.75 ± 0.33 pixels, while the error is 1.99 ± 0.45 pixels for the ASM. Our results demonstrate that our ASM-MP method can accurately segment the lung on digital radiographs.
A Minimal Path Searching Approach for Active Shape Model (ASM)-based Segmentation of the Lung
Guo, Shengwen; Fei, Baowei
2013-01-01
We are developing a minimal path searching method for active shape model (ASM)-based segmentation for detection of lung boundaries on digital radiographs. With the conventional ASM method, the position and shape parameters of the model points are iteratively refined and the target points are updated by the least-Mahalanobis-distance criterion. We propose an improved searching strategy that extends the searching points over a fan-shaped region instead of along the normal direction. A minimal path (MP) deformable model is applied to drive the searching procedure. A statistical shape prior model is incorporated into the segmentation. To preserve the smoothness of the shape, a smoothness constraint is imposed on the deformable model. To quantitatively assess the ASM-MP segmentation, we compare the automatic segmentation with manual segmentation for 72 digitized lung radiographs. The distance error between the ASM-MP and manual segmentation is 1.75 ± 0.33 pixels, while the error is 1.99 ± 0.45 pixels for the ASM. Our results demonstrate that our ASM-MP method can accurately segment the lung on digital radiographs. PMID:24386531
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eken, Tuna; Mayeda, Kevin; Hofstetter, Abraham; Gok, Rengin; Orgulu, Gonca; Turkelli, Niyazi
A recently developed coda magnitude methodology was applied to selected broadband stations in Turkey for the purpose of testing the coda method in a large, laterally complex region. As found in other, albeit smaller, regions, coda envelope amplitude measurements are significantly less variable than distance-corrected direct wave measurements (i.e., Lg and surface waves), by roughly a factor of 3 to 4. Despite strong lateral crustal heterogeneity in Turkey, the region could be adequately modeled assuming a simple 1-D, radially symmetric path correction. After calibrating the stations ISP, ISKB and MALT for local and regional distances, single-station moment-magnitude estimates (M_W) derived from the coda spectra were in excellent agreement with those determined from multistation waveform modeling inversions, exhibiting a data standard deviation of 0.17. Though the calibration was validated using large events, its results will extend M_W estimates to significantly smaller events which could not otherwise be waveform modeled. The successful application of the method is remarkable considering the significant lateral complexity in Turkey and the simple assumptions used in the coda method.
Kim, Heekang; Kwon, Soon; Kim, Sungho
2016-07-08
This paper proposes a vehicle light detection method using a hyperspectral camera instead of a Charge-Coupled Device (CCD) or Complementary Metal-Oxide-Semiconductor (CMOS) camera for adaptive car headlamp control. To apply Intelligent Headlight Control (IHC), the vehicle headlights need to be detected. Headlights comprise a variety of lighting sources, such as Light Emitting Diodes (LEDs), High-Intensity Discharge (HID) lamps, and halogen lamps; rear lamps likewise use LED and halogen sources. This paper builds on recent research in IHC. Some problems exist in the detection of headlights, such as the erroneous detection of street lights, sign lights, and the ego-car's reflection plate in CCD or CMOS images. To address these problems, this study uses hyperspectral images, because they have hundreds of bands and provide more information than a CCD or CMOS camera. Recent methods to detect headlights used the Spectral Angle Mapper (SAM), Spectral Correlation Mapper (SCM), and Euclidean Distance Mapper (EDM). The experimental results highlight the feasibility of the proposed method for three types of lights (LED, HID, and halogen).
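Of the three mappers named, SAM is the simplest to illustrate: it scores a pixel spectrum by its angle to a reference spectrum, independently of overall brightness. A minimal sketch with made-up five-band spectra:

```python
import numpy as np

def spectral_angle(s, r):
    """Spectral Angle Mapper: angle (radians) between a pixel spectrum s and
    a reference spectrum r; small angles mean similar materials/sources."""
    s, r = np.asarray(s, float), np.asarray(r, float)
    cos = np.dot(s, r) / (np.linalg.norm(s) * np.linalg.norm(r))
    return np.arccos(np.clip(cos, -1.0, 1.0))

led_ref = np.array([0.10, 0.80, 0.90, 0.30, 0.10])   # hypothetical 5-band spectra
pixel   = np.array([0.12, 0.75, 0.88, 0.35, 0.09])
print(spectral_angle(pixel, led_ref))                # near 0 -> likely LED source
```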
Registration of 3D spectral OCT volumes using 3D SIFT feature point matching
NASA Astrophysics Data System (ADS)
Niemeijer, Meindert; Garvin, Mona K.; Lee, Kyungmoo; van Ginneken, Bram; Abràmoff, Michael D.; Sonka, Milan
2009-02-01
The recent introduction of next-generation spectral OCT scanners has enabled routine acquisition of high-resolution, 3D cross-sectional volumetric images of the retina. 3D OCT is used in the detection and management of serious eye diseases such as glaucoma and age-related macular degeneration. For follow-up studies, image registration is a vital tool to enable more precise, quantitative comparison of disease states. This work presents a registration method based on a recently introduced extension of the 2D Scale-Invariant Feature Transform (SIFT) framework to 3D. The SIFT feature extractor locates minima and maxima in the difference-of-Gaussian scale space to find salient feature points. It then uses histograms of the local gradient directions around each found extremum in 3D to characterize them in a 4096-element feature vector. Matching points are found by comparing the distance between feature vectors. We apply this method to the rigid registration of optic nerve head- (ONH) and macula-centered 3D OCT scans of the same patient that have only limited overlap. Three OCT data set pairs with known deformation were used for quantitative assessment of the method's robustness and accuracy when deformations of rotation and scaling were considered. Three-dimensional registration accuracy of 2.0 ± 3.3 voxels was observed. The accuracy was assessed as the average voxel distance error in N = 1572 matched locations. The registration method was applied to 12 3D OCT scans (200 x 200 x 1024 voxels) of 6 normal eyes imaged in vivo to demonstrate the clinical utility and robustness of the method in a real-world environment.
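Descriptor matching of the kind described, i.e., nearest-neighbor search in the space of feature vectors, can be sketched as follows. The ratio test is an assumed robustness heuristic (Lowe-style), not necessarily the paper's exact criterion, and the arrays are toy stand-ins:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Match feature vectors by Euclidean distance, keeping a match only if
    the best distance is clearly smaller than the second best."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j, k = np.argsort(dists)[:2]
        if dists[j] < ratio * dists[k]:
            matches.append((i, j))
    return matches

a = np.random.rand(10, 4096)   # toy stand-ins for 4096-element descriptors
b = np.random.rand(12, 4096)
print(len(match_descriptors(a, b)))
```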
Mobile laser scanning applied to the earth sciences
Brooks, Benjamin A.; Glennie, Craig; Hudnut, Kenneth W.; Ericksen, Todd; Hauser, Darren
2013-01-01
Lidar (light detection and ranging), a method by which the precise time of flight of emitted pulses of laser energy is measured and converted to distance for reflective targets, has helped scientists make topographic maps of Earth's surface at scales as fine as centimeters. These maps have allowed the discovery and analysis of myriad otherwise unstudied features, such as fault scarps, river channels, and even ancient ruins [Glennie et al., 2013b].
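The time-of-flight-to-distance conversion at the heart of lidar is a one-line computation; a minimal sketch (the 66.7 ns value is just an example echo):

```python
C = 299_792_458.0  # speed of light in vacuum [m/s]

def lidar_range(time_of_flight_s):
    """Range to a reflective target from the round-trip pulse time of flight."""
    return C * time_of_flight_s / 2.0

print(lidar_range(66.7e-9))  # a ~66.7 ns echo corresponds to ~10 m
```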
Embedded Creativity: Teaching Design Thinking via Distance Education
ERIC Educational Resources Information Center
Lloyd, Peter
2013-01-01
This paper shows how the design thinking skills of students learning at a distance can be consciously developed, and deliberately applied outside of the creative industries in what are termed 'embedded' contexts. The distance learning model of education pioneered by The Open University is briefly described before the technological…
Applying Leadership Theories to Distance Education Leadership
ERIC Educational Resources Information Center
Nworie, John
2012-01-01
The instructional delivery mode in distance education has been transitioning from the context of a physical classroom environment to a virtual learning environment or maintaining a hybrid of the two. However, most distance education programs in dual mode institutions are situated in traditional face-to-face instructional settings. Distance…
ERIC Educational Resources Information Center
Maffett, Sheryl Price
2007-01-01
Distance learning has been around since the old "course in a box" correspondence classes, but with the advent of sophisticated online course management systems, learning at a distance is contributing to a major paradigm shift in higher education. That shift includes applying corporate concepts to education--students, for example, are "consumers,"…
Measuring Long-Distance Romantic Relationships: A Validity Study
ERIC Educational Resources Information Center
Pistole, M. Carole; Roberts, Amber
2011-01-01
This study investigated aspects of construct validity for the scores of a new long-distance romantic relationship measure. A single-factor structure of the long-distance romantic relationship index emerged, with convergent and discriminant evidence of external validity, high internal consistency reliability, and applied utility of the scores.…
Effects of Distance Learning on Learning Effectiveness
ERIC Educational Resources Information Center
Liu, Hong-Cheng; Yen, Jih-Rong
2014-01-01
The development of computers in the past two decades has resulted in changes to education in enterprises and schools. Advances in computer hardware and platforms have allowed colleges to broadly apply distance courses to instruction, and both the Ministry of Education and colleges have paid attention to the development of Distance Learning. To…
Virtual Bioinformatics Distance Learning Suite
ERIC Educational Resources Information Center
Tolvanen, Martti; Vihinen, Mauno
2004-01-01
Distance learning as a computer-aided concept allows students to take courses from anywhere at any time. In bioinformatics, computers are needed to collect, store, process, and analyze massive amounts of biological and biomedical data. We have applied the concept of distance learning in virtual bioinformatics to provide university course material…
An updated Type II supernova Hubble diagram
NASA Astrophysics Data System (ADS)
Gall, E. E. E.; Kotak, R.; Leibundgut, B.; Taubenberger, S.; Hillebrandt, W.; Kromer, M.; Burgett, W. S.; Chambers, K.; Flewelling, H.; Huber, M. E.; Kaiser, N.; Kudritzki, R. P.; Magnier, E. A.; Metcalfe, N.; Smith, K.; Tonry, J. L.; Wainscoat, R. J.; Waters, C.
2018-03-01
We present photometry and spectroscopy of nine Type II-P/L supernovae (SNe) with redshifts in the 0.045 ≲ z ≲ 0.335 range, with a view to re-examining their utility as distance indicators. Specifically, we apply the expanding photosphere method (EPM) and the standardized candle method (SCM) to each target, and find that both methods yield distances that are in reasonable agreement with each other. The current record-holder for the highest-redshift spectroscopically confirmed supernova (SN) II-P is PS1-13bni (z = 0.335 +0.009/−0.012), and illustrates the promise of Type II SNe as cosmological tools. We updated existing EPM and SCM Hubble diagrams by adding our sample to those previously published. Within the context of Type II SN distance measuring techniques, we investigated two related questions. First, we explored the possibility of utilising spectral lines other than the traditionally used Fe II λ5169 to infer the photospheric velocity of SN ejecta. Using local well-observed objects, we derive an epoch-dependent relation between the strong Balmer line and Fe II λ5169 velocities that is applicable 30 to 40 days post-explosion. Motivated in part by the continuum of key observables such as rise time and decline rates exhibited from II-P to II-L SNe, we assessed the possibility of using Hubble-flow Type II-L SNe as distance indicators. These yield similar distances as the Type II-P SNe. Although these initial results are encouraging, a significantly larger sample of SNe II-L would be required to draw definitive conclusions. Tables A.1, A.3, A.5, A.7, A.9, A.11, A.13, A.15 and A.17 are also available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/611/A25
NASA Astrophysics Data System (ADS)
Win, Khin Yadanar; Choomchuay, Somsak; Hamamoto, Kazuhiko
2017-06-01
The automated segmentation of cell nuclei is an essential stage in the quantitative image analysis of cell nuclei extracted from smear cytology images of pleural fluid. Cell nuclei can indicate cancer, as their characteristics are associated with cell proliferation and malignancy in terms of size, shape, and stain color. Nevertheless, automatic nuclei segmentation has remained challenging due to artifacts caused by slide preparation and nuclei heterogeneity, such as poor contrast, inconsistent staining, cell variation, and overlapping cells. In this paper, we propose a watershed-based method that is capable of segmenting the nuclei of a variety of cells from cytology pleural fluid smear images. First, the original image is preprocessed by converting it to grayscale and enhancing it by adjusting and equalizing the intensity using histogram equalization. Next, the cell nuclei are segmented into a binary image using Otsu thresholding. Undesirable artifacts are eliminated using morphological operations. Finally, a distance-transform-based watershed method is applied to isolate touching and overlapping cell nuclei. The proposed method was tested with 25 Papanicolaou (Pap)-stained pleural fluid images, and its accuracy is 92%. The method is relatively simple, and the results are very promising.
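The pipeline maps directly onto standard image-processing primitives. A minimal sketch using scikit-image/SciPy equivalents (the file name, structuring sizes, and the 0.6 marker threshold are illustrative assumptions, not the paper's values; depending on stain polarity the Otsu mask may need inversion):

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import color, exposure, filters, morphology, segmentation, io

img = io.imread("pap_smear.png")                   # hypothetical input image
gray = exposure.equalize_hist(color.rgb2gray(img)) # grayscale + equalization
binary = gray > filters.threshold_otsu(gray)       # Otsu binarization
binary = morphology.remove_small_objects(morphology.binary_opening(binary), 64)

dist = ndi.distance_transform_edt(binary)          # distance transform
markers, _ = ndi.label(dist > 0.6 * dist.max())    # seeds inside each nucleus
labels = segmentation.watershed(-dist, markers, mask=binary)  # split touching nuclei
print(labels.max(), "nuclei found")
```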
Comparison of analytical methods for calculation of wind loads
NASA Technical Reports Server (NTRS)
Minderman, Donald J.; Schultz, Larry L.
1989-01-01
The following analysis is a comparison of analytical methods for the calculation of wind load pressures. The analytical methods specified in ASCE Paper No. 3269, ANSI A58.1-1982, the Standard Building Code, and the Uniform Building Code were analyzed using various hurricane speeds to determine the differences in the calculated results. The winds used for the analysis ranged from 100 mph to 125 mph and were applied inland from the shoreline of a large open body of water (i.e., an enormous lake or the ocean) at a distance of 1500 feet or ten times the height of the building or structure considered. For a building or structure less than or equal to 250 feet in height acted upon by a wind greater than or equal to 115 mph, it was determined that the method specified in ANSI A58.1-1982 calculates a larger wind load pressure than the other methods. For a building or structure between 250 feet and 500 feet tall acted upon by a wind ranging from 100 mph to 110 mph, there is no clear choice of which method to use; for these cases, the factors that must be considered are the steady-state or peak wind velocity, the geographic location, the distance from a large open body of water, and the expected design life and its risk factor.
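The methods compared all build on the same basic velocity-pressure relation before applying their differing exposure, gust, and shape coefficients. A minimal sketch of that common starting point (q = 0.00256 V², with q in psf and V in mph, the standard sea-level constant; each method's additional factors are omitted here):

```python
def velocity_pressure_psf(v_mph):
    """Basic stagnation (velocity) pressure in psf for wind speed in mph."""
    return 0.00256 * v_mph ** 2

for v in (100, 110, 115, 125):
    print(v, "mph ->", round(velocity_pressure_psf(v), 1), "psf")
```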
Construction of ontology augmented networks for protein complex prediction.
Zhang, Yijia; Lin, Hongfei; Yang, Zhihao; Wang, Jian
2013-01-01
Protein complexes are of great importance in understanding the principles of cellular organization and function. The increase in available protein-protein interaction data, gene ontology, and other resources makes it possible to develop computational methods for protein complex prediction. Most existing methods focus mainly on the topological structure of protein-protein interaction networks and largely ignore gene ontology annotation information. In this article, we constructed ontology-augmented networks from protein-protein interaction data and gene ontology, which integrate the topological structure of protein-protein interaction networks and the similarity of gene ontology annotations into unified distance measures. After constructing ontology-augmented networks, a novel method (clustering based on ontology-augmented networks) was proposed to predict protein complexes, capable of taking into account both the topological structure of the protein-protein interaction network and the similarity of gene ontology annotations. Our method was applied to two different yeast protein-protein interaction datasets and predicted many well-known complexes. The experimental results showed that (i) ontology-augmented networks and the unified distance measure can effectively combine structural closeness and gene ontology annotation similarity; (ii) our method is valuable in predicting protein complexes and has higher F1 and accuracy compared to other competing methods.
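The abstract does not spell out the unified measure; one plausible shape for such a combination, a convex mix of a scaled topological distance and a GO-similarity-derived distance, is sketched below purely for illustration (both the form and the alpha weight are assumptions, not the paper's definition):

```python
def unified_distance(d_topo, sim_go, alpha=0.5):
    """Combine a topological distance scaled to [0, 1] (e.g., normalized
    shortest-path length) with a GO semantic similarity in [0, 1]."""
    return alpha * d_topo + (1 - alpha) * (1.0 - sim_go)

print(unified_distance(d_topo=0.2, sim_go=0.9))  # close in both senses -> small
print(unified_distance(d_topo=0.9, sim_go=0.1))  # far in both senses  -> large
```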
A Bayesian Approach to Locating the Red Giant Branch Tip Magnitude. I.
NASA Astrophysics Data System (ADS)
Conn, A. R.; Lewis, G. F.; Ibata, R. A.; Parker, Q. A.; Zucker, D. B.; McConnachie, A. W.; Martin, N. F.; Irwin, M. J.; Tanvir, N.; Fardal, M. A.; Ferguson, A. M. N.
2011-10-01
We present a new approach for identifying the tip of the red giant branch (TRGB) which, as we show, works robustly even on sparsely populated targets. Moreover, the approach is highly adaptable to the available data for the stellar population under study, with prior information readily incorporable into the algorithm. The uncertainty in the derived distances is also made tangible and easily calculable from posterior probability distributions. We provide an outline of the development of the algorithm and present the results of tests designed to characterize its capabilities and limitations. We then apply the new algorithm to three M31 satellites: Andromeda I, Andromeda II, and the fainter Andromeda XXIII, using data from the Pan-Andromeda Archaeological Survey (PAndAS), and derive their distances as 731 (+5/−4) +18/−17 kpc, 634 (+2/−2) +15/−14 kpc, and 733 (+13/−11) +23/−22 kpc, respectively, where the errors appearing in parentheses are the components intrinsic to the method, while the larger values give the errors after accounting for additional sources of error. These results agree well with the best distance determinations in the literature and provide the smallest uncertainties to date. This paper is an introduction to the workings and capabilities of our new approach in its basic form, while a follow-up paper shall make full use of the method's ability to incorporate priors and use the resulting algorithm to systematically obtain distances to all of M31's satellites identifiable in the PAndAS survey area.
Geometric Aspects and Testing of the Galactic Center Distance Determination from Spiral Arm Segments
NASA Astrophysics Data System (ADS)
Nikiforov, I. I.; Veselova, A. V.
2018-02-01
We consider the problem of determining the geometric parameters of a Galactic spiral arm from one of its segments, including the distance to the spiral pole, i.e., the distance to the Galactic center (R_0). The question of the number of points belonging to one turn of a logarithmic spiral and defining this spiral as a geometric figure has been investigated numerically and analytically, assuming the direction to the spiral pole (to the Galactic center) to be known. Based on the results obtained, in an effort to test the new approach, we have constructed a simplified method of solving the problem that consists in finding the median of the values of each parameter over all possible triplets of objects in the spiral arm segment satisfying a condition on the angular distance between objects. Applying the method to the data on the spatial distribution of masers in the Perseus and Scutum arms (the catalogue by Reid et al. (2014)) leads to an estimate of R_0 = 8.8 ± 0.5 kpc. The parameters of five spiral arm segments have been determined from masers of the same catalogue. We have confirmed the difference between the spiral arms in pitch angle. The pitch angles of the arms traced by the masers are shown to generally correlate with R_0, in the sense that an increase in R_0 leads to a growth in the absolute values of the pitch angles.
Transport of ions using RF Carpets in Helium Gas
NASA Astrophysics Data System (ADS)
Lambert, Keenan; Kelly, James; Brodeur, Maxime
2017-09-01
Radio-frequency (RF) carpets are critical components of the large-volume gas cells used to thermalize radioactive ion beams produced at in-flight facilities. RF carpets are formed by a series of concentric conductive rings on which an alternating potential (in the radio-frequency range) is applied with opposite polarity on adjacent rings. This results in a strong repelling force that keeps the ions a certain distance from the carpet. The transport of ions along an RF carpet is accomplished using either a potential gradient applied to the individual strips or a traveling wave (the so-called 'ion surfing' method). A test setup has been constructed at the University of Notre Dame to perform studies on the repelling of ions using RF carpets. This test setup has recently been improved by the addition of circuitry elements allowing the transport of ions using the ion surfing method. The developed circuitry, together with transport results for various ion beam currents, electric forces applied to the ions, and traveling wave amplitudes and speeds, will be presented.
Cutoff Probe for Tokamak SOL Measurement
NASA Astrophysics Data System (ADS)
Na, Byung-Keun; You, Kwang-Ho; Kim, Dae-Woong; You, Shin-Jae; Kim, Jung-Hyung; Chang, Hong-Young
2013-09-01
Since the cutoff probe was developed, there have been many improvements in methodology and analysis for low-temperature plasmas. However, in order to apply the cutoff probe to the Tokamak scrape-off layer (SOL), three important issues should be solved: speed, thermal protection, and short-distance (a few mm) wave propagation in magnetized plasmas. In this presentation, improvements of the cutoff probe for the Tokamak are shown. The above problems can be solved using the following methods: (a) the cutoff probe can be operated with short impulses of a few nanoseconds to improve speed; (b) ceramic covers are used for thermal protection; (c) in magnetized plasmas, the cutoff peak can be analyzed using circuit modeling and CST Microwave Studio. To verify the proposed methods, the cutoff probe was applied to a helicon plasma, and the results were compared to laser Thomson scattering results. Based on the results in the helicon plasma, the cutoff probe will be applied to the far-SOL region in the KSTAR 2013 campaign, and to the SOL region in a later campaign.
Improving CNN Performance Accuracies With Min-Max Objective.
Shi, Weiwei; Gong, Yihong; Tao, Xiaoyu; Wang, Jinjun; Zheng, Nanning
2017-06-09
We propose a novel method for improving the performance accuracy of a convolutional neural network (CNN) without increasing the network complexity. We accomplish this by applying the proposed Min-Max objective to a layer below the output layer of a CNN model during training. The Min-Max objective explicitly ensures that the feature maps learned by a CNN model have the minimum within-manifold distance for each object manifold and the maximum between-manifold distances among different object manifolds. The Min-Max objective is general and can be applied to different CNNs with insignificant increases in computation cost. Moreover, an incremental minibatch training procedure is also proposed in conjunction with the Min-Max objective to enable the handling of large-scale training data. Comprehensive experimental evaluations on several benchmark data sets, with both image classification and face verification tasks, reveal that employing the proposed Min-Max objective in the training process can remarkably improve the performance accuracy of a CNN model compared with the same model trained without this objective.
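A toy scalar version of such an objective, the sum of within-manifold pairwise distances minus the sum of between-manifold ones, can be written in a few lines. This is a schematic stand-in, not the paper's exact formulation (which operates on feature maps during training with its own weighting and graph construction):

```python
import numpy as np

def min_max_objective(features, labels):
    """Within-manifold distances (to minimize) minus between-manifold
    distances (to maximize); smaller is better for training."""
    within, between = 0.0, 0.0
    n = len(features)
    for i in range(n):
        for j in range(i + 1, n):
            d = np.sum((features[i] - features[j]) ** 2)
            if labels[i] == labels[j]:
                within += d      # same object manifold: keep compact
            else:
                between += d     # different manifolds: push apart
    return within - between

feats = np.random.rand(8, 16)                # hypothetical flattened feature maps
labs = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(min_max_objective(feats, labs))
```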
Method for the depth corrected detection of ionizing events from a co-planar grids sensor
De Geronimo, Gianluigi [Syosset, NY]; Bolotnikov, Aleksey E. [South Setauket, NY]; Carini, Gabriella [Port Jefferson, NY]
2009-05-12
A method for the detection of ionizing events utilizing a co-planar grids sensor comprising a semiconductor substrate, a cathode electrode, a collecting grid, and a non-collecting grid. The semiconductor substrate is sensitive to ionizing radiation. A voltage less than 0 volts is applied to the cathode electrode; a voltage greater than the voltage applied to the cathode is applied to the non-collecting grid; and a voltage greater than the voltage applied to the non-collecting grid is applied to the collecting grid. The signals of the collecting grid and the non-collecting grid are summed and subtracted, creating a sum and a difference, respectively, and the difference is divided by the sum, creating a ratio. A gain coefficient for each depth (the distance between the ionizing event and the collecting grid) is determined, whereby the difference between the collecting electrode and the non-collecting electrode multiplied by the corresponding gain coefficient is the depth-corrected energy of an ionizing event. Therefore, the energy of each ionizing event is the difference between the collecting grid and the non-collecting grid multiplied by the corresponding gain coefficient. The depth of the ionizing event can also be determined from the ratio.
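The arithmetic of the claim is simple enough to sketch directly; the linear calibration function below is a made-up stand-in for the per-depth gain coefficients the patent derives:

```python
def depth_corrected_energy(v_collect, v_noncollect, gain_for_ratio):
    """Depth-corrected event energy from co-planar grid signals: the ratio
    (difference/sum) encodes interaction depth, and a per-depth gain
    coefficient corrects the difference signal."""
    diff = v_collect - v_noncollect
    total = v_collect + v_noncollect
    ratio = diff / total                 # depth indicator
    gain = gain_for_ratio(ratio)         # calibration lookup (assumed supplied)
    return gain * diff

# Hypothetical linear calibration standing in for a measured lookup table:
energy = depth_corrected_energy(1.00, 0.20, lambda r: 1.0 + 0.1 * (1.0 - r))
print(energy)
```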
NASA Astrophysics Data System (ADS)
Lee, Ming-Wei; Chen, Yi-Chun
2014-02-01
In pinhole SPECT applied to small-animal studies, it is essential to have an accurate imaging system matrix, called the H matrix, for high-spatial-resolution image reconstructions. Generally, an H matrix can be obtained by various methods, such as measurements, simulations, or combinations of both. In this study, a distance-weighted Gaussian interpolation method combined with geometric parameter estimation (DW-GIMGPE) is proposed. It utilizes a simplified grid-scan experiment on selected voxels and parameterizes the measured point response functions (PRFs) into 2D Gaussians. The PRFs of missing voxels are interpolated from the relations between the Gaussian coefficients and the geometric parameters of the imaging system, with distance-weighting factors related to the projected centroids of the voxels on the detector plane. A full H matrix is constructed by combining the measured and interpolated PRFs of all voxels. The PRFs estimated by DW-GIMGPE showed profiles similar to the measured PRFs. OSEM reconstructions of a hot-rod phantom and normal rat myocardium demonstrated the effectiveness of the proposed method. The detectability of a SKE/BKE task on a synthetic spherical test object verified that the constructed H matrix provides detectability comparable to that of an H matrix acquired by a full 3D grid-scan experiment. The reduction in acquisition time relative to a full 1.0-mm grid H matrix was about 15.2 and 62.2 times for the simplified grid pattern on 2.0-mm and 4.0-mm grids, respectively. A finer-grid H matrix down to 0.5-mm spacing interpolated by the proposed method would shorten the acquisition time by an additional factor of 8.
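The interpolation step can be illustrated with a generic distance-weighted scheme over the fitted Gaussian coefficients. The exact DW-GIMGPE weighting is tied to projected centroids on the detector plane; the inverse-distance form below is a simplified stand-in with toy data:

```python
import numpy as np

def interpolate_gaussian_coeffs(known_pts, known_coeffs, query_pt, power=2):
    """Inverse-distance-weighted interpolation of 2D-Gaussian PRF coefficients
    (e.g., amplitude, width) for a voxel that was not measured."""
    d = np.linalg.norm(known_pts - query_pt, axis=1)
    if np.any(d < 1e-9):                        # query coincides with a sample
        return known_coeffs[np.argmin(d)]
    w = 1.0 / d ** power
    return (w[:, None] * known_coeffs).sum(axis=0) / w.sum()

pts = np.array([[0., 0.], [2., 0.], [0., 2.], [2., 2.]])             # measured voxels
coeffs = np.array([[1.0, 0.5], [1.2, 0.6], [0.9, 0.5], [1.1, 0.7]])  # e.g., (amp, sigma)
print(interpolate_gaussian_coeffs(pts, coeffs, np.array([1., 1.])))
```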
ERIC Educational Resources Information Center
Syed, Mahbubur Rahman, Ed.
2009-01-01
The emerging field of advanced distance education delivers academic courses across time and distance, allowing educators and students to participate in a convenient learning method. "Methods and Applications for Advancing Distance Education Technologies: International Issues and Solutions" demonstrates communication technologies, intelligent…
Li, Xue-chen; Jia, Peng-ying; Liu, Zhi-hui; Li, Li-chun; Dong, Li-fang
2008-12-01
In the present paper, stable glow discharges were obtained in air at low pressure with a dielectric barrier surface discharge device. Light emission from the discharge was detected by photomultiplier tubes, and the results show that the light signal exhibited one discharge pulse per half cycle of the applied voltage; the light pulses were asymmetric between the positive and negative half cycles. The images of the glow surface discharge were processed with Photoshop software, and the results indicate that the emission intensity remained almost constant at different places with the same distance from the powered electrode, while it decreased with increasing distance from the powered electrode. In a dielectric barrier discharge, the net electric field is determined by the applied voltage and the wall charges accumulated on the dielectric layer during the discharge; consequently, it is important to obtain information about the net electric field distribution. For this purpose, an optical emission spectroscopy method was used. The distribution of the net electric field can be deduced from the intensity ratio of the 391.4 nm spectral line, emitted from the first negative system of N2+ (B²Σu+ → X²Σg+), to the 337.1 nm line, emitted from the second positive system of N2 (C³Πu → B³Πg). The results show that the electric field near the powered electrode is higher than at the edge of the discharge. These experimental results are very important for numerical studies and industrial applications of the surface discharge.
NASA Astrophysics Data System (ADS)
Kitada, N.; Inoue, N.; Tonagi, M.
2016-12-01
The purpose of Probabilistic Fault Displacement Hazard Analysis (PFDHA) is to estimate fault displacement values and the extent of their impact. There are two types of fault displacement related to an earthquake fault: principal fault displacement and distributed fault displacement. Distributed fault displacement should be evaluated for important facilities, such as nuclear installations. PFDHA estimates principal and distributed fault displacement using distance-displacement functions, which are constructed from field measurement data. We constructed a slip-distance relation for principal fault displacement based on Japanese strike-slip and reverse-slip earthquakes, in order to apply it to Japan, which lies in a subduction setting. However, observed displacement data are sparse, especially for reverse faults. Takao et al. (2013) estimated the relation using all fault types together (reverse and strike-slip). Since Takao et al. (2013), several inland earthquakes have occurred in Japan, so here we estimate distance-displacement functions separately for the strike-slip and reverse fault types, in particular adding the new fault displacement data sets. Several criteria for normalizing slip function data have been proposed by different researchers. We normalized the principal fault displacement data by several methods and compared the resulting slip-distance functions. When normalized by total fault length, the Japanese reverse fault data did not show a particular trend in the slip-distance relation. For segmented data, the slip-distance relationship indicated a trend similar to that of strike-slip faults. We will also discuss the relation between principal fault displacement distributions and source fault characteristics. According to the slip distribution function of Petersen et al. (2011), for strike-slip faults the normalized displacement decreases toward the ends of the fault. However, the Japanese strike-slip fault data do not decrease as strongly at the fault ends. This result indicates that, in Japan, the tapering of fault displacement toward the fault ends is less pronounced. This research was part of the 2014-2015 research project 'Development of evaluating method for fault displacement' by the Secretariat of the Nuclear Regulation Authority (NRA), Japan.
Optical measurements of absorption changes in two-layered diffusive media
NASA Astrophysics Data System (ADS)
Fabbri, Francesco; Sassaroli, Angelo; Henry, Michael E.; Fantini, Sergio
2004-04-01
We have used Monte Carlo simulations for a two-layered diffusive medium to investigate the effect of a superficial layer on the measurement of absorption variations from optical diffuse reflectance data processed by using: (a) a multidistance, frequency-domain method based on diffusion theory for a semi-infinite homogeneous medium; (b) a differential-pathlength-factor method based on a modified Lambert-Beer law for a homogeneous medium and (c) a two-distance, partial-pathlength method based on a modified Lambert-Beer law for a two-layered medium. Methods (a) and (b) lead to a single value for the absorption variation, whereas method (c) yields absorption variations for each layer. In the simulations, the optical coefficients of the medium were representative of those of biological tissue in the near-infrared. The thickness of the first layer was in the range 0.3-1.4 cm, and the source-detector distances were in the range 1-5 cm, which is typical of near-infrared diffuse reflectance measurements in tissue. The simulations have shown that (1) method (a) is mostly sensitive to absorption changes in the underlying layer, provided that the thickness of the superficial layer is ~0.6 cm or less; (2) method (b) is significantly affected by absorption changes in the superficial layer and (3) method (c) yields the absorption changes for both layers with a relatively good accuracy of ~4% for the superficial layer and ~10% for the underlying layer (provided that the absorption changes are less than 20-30% of the baseline value). We have applied all three methods of data analysis to near-infrared data collected on the forehead of a human subject during electroconvulsive therapy. Our results suggest that the multidistance method (a) and the two-distance partial-pathlength method (c) may better decouple the contributions to the optical signals that originate in deeper tissue (brain) from those that originate in more superficial tissue layers.
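Method (b), the differential-pathlength-factor approach, reduces to a one-line inversion of the modified Lambert-Beer law. A minimal sketch (the DPF, separation, and ΔOD values are illustrative, and real instruments apply this per wavelength):

```python
def delta_mua_dpf(delta_od, source_detector_cm, dpf):
    """Absorption change from a change in optical density via the modified
    Lambert-Beer law: delta_OD = delta_mua * DPF * r, hence
    delta_mua = delta_OD / (DPF * r). Result in cm^-1."""
    return delta_od / (dpf * source_detector_cm)

# Hypothetical numbers: a 0.02 change in OD at 3 cm separation with DPF = 6
print(delta_mua_dpf(0.02, 3.0, 6.0))   # ~0.0011 cm^-1
```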
Tomlinson, Jo; Shaw, Tim; Munro, Ana; Johnson, Ros; Madden, D Lynne; Phillips, Rosemary; McGregor, Deborah
2013-11-01
Telecommunication technologies, including audio- and videoconferencing facilities, afford geographically dispersed health professionals the opportunity to connect and collaborate with others. Recognised for enabling tele-consultations and tele-collaborations between teams of health care professionals and their patients, these technologies are also well suited to the delivery of distance learning programs, known as tele-learning. To determine whether tele-learning delivery methods achieve equivalent learning outcomes when compared with traditional face-to-face education delivery methods. A systematic literature review was commissioned by the NSW Ministry of Health to identify results relevant to programs applying tele-learning delivery methods in the provision of education to health professionals. The review found few studies that rigorously compared tele-learning with traditional formats. There was some evidence, however, to support the premise that tele-learning models achieve comparable learning outcomes and that participants are generally satisfied with and accepting of this delivery method. The review illustrated that tele-learning technologies not only enable distance learning opportunities, but achieve learning outcomes comparable to traditional face-to-face models. More rigorous evidence is required to strengthen these findings and should be the focus of future tele-learning research.
Parameter Estimation of a Spiking Silicon Neuron
Russell, Alexander; Mazurek, Kevin; Mihalaş, Stefan; Niebur, Ernst; Etienne-Cummings, Ralph
2012-01-01
Spiking neuron models are used in a multitude of tasks ranging from understanding neural behavior at its most basic level to neuroprosthetics. Parameter estimation of a single neuron model, such that the model’s output matches that of a biological neuron is an extremely important task. Hand tuning of parameters to obtain such behaviors is a difficult and time consuming process. This is further complicated when the neuron is instantiated in silicon (an attractive medium in which to implement these models) as fabrication imperfections make the task of parameter configuration more complex. In this paper we show two methods to automate the configuration of a silicon (hardware) neuron’s parameters. First, we show how a Maximum Likelihood method can be applied to a leaky integrate and fire silicon neuron with spike induced currents to fit the neuron’s output to desired spike times. We then show how a distance based method which approximates the negative log likelihood of the lognormal distribution can also be used to tune the neuron’s parameters. We conclude that the distance based method is better suited for parameter configuration of silicon neurons due to its superior optimization speed. PMID:23852978
Lorias Espinoza, Daniel; Ordorica Flores, Ricardo; Minor Martínez, Arturo; Gutiérrez Gnecchi, José Antonio
2014-06-01
Various methods for evaluating laparoscopic skill have been reported, but without detailed information on the configuration used they are difficult to reproduce. Here we present a method based on the trigonometric relationships between the instruments used in a laparoscopic training platform in order to provide a tool to aid in the reproducible assessment of surgical laparoscopic technique. The positions of the instruments were represented using triangles. Basic trigonometry was used to objectively establish the distances among the working ports RL, the placement of the optical port h', and the placement of the surgical target OT. The optimal configuration of a training platform depends on the selected working angles, the intracorporeal/extracorporeal lengths of the instrument, and the depth of the surgical target. We demonstrate that some distances, angles, and positions of the instruments are inappropriate for satisfactory laparoscopy. By applying basic trigonometric principles we can determine the ideal placement of the working ports and the optics in a simple, precise, and objective way. In addition, because the method is based on parameters known to be important in both the performance and quantitative quality of laparoscopy, the results are generalizable to different training platforms and types of laparoscopic surgery.
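As one concrete instance of the trigonometry involved: under the simplifying assumption of working ports placed symmetrically about the axis to the target, the port separation follows from the chosen working angle and target depth (roughly the abstract's RL and OT). The function and numbers below are an idealized illustration, not the paper's full construction:

```python
import math

def port_separation(target_depth_cm, working_angle_deg):
    """Separation between two working ports that yields a chosen working
    angle at a surgical target at the given depth, assuming the ports sit
    symmetrically about the target axis."""
    half = math.radians(working_angle_deg / 2.0)
    return 2.0 * target_depth_cm * math.tan(half)

# A 60-degree working angle at a 12 cm deep target:
print(round(port_separation(12.0, 60.0), 1), "cm between working ports")
```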
NASA Astrophysics Data System (ADS)
Qin, Xulei; Cong, Zhibin; Fei, Baowei
2013-11-01
An automatic segmentation framework is proposed to segment the right ventricle (RV) in echocardiographic images. The method can automatically segment both epicardial and endocardial boundaries from a continuous echocardiography series by combining sparse matrix transform, a training model, and a localized region-based level set. First, the sparse matrix transform extracts main motion regions of the myocardium as eigen-images by analyzing the statistical information of the images. Second, an RV training model is registered to the eigen-images in order to locate the position of the RV. Third, the training model is adjusted and then serves as an optimized initialization for the segmentation of each image. Finally, based on the initializations, a localized, region-based level set algorithm is applied to segment both epicardial and endocardial boundaries in each echocardiograph. Three evaluation methods were used to validate the performance of the segmentation framework. The Dice coefficient measures the overall agreement between the manual and automatic segmentation. The absolute distance and the Hausdorff distance between the boundaries from manual and automatic segmentation were used to measure the accuracy of the segmentation. Ultrasound images of human subjects were used for validation. For the epicardial and endocardial boundaries, the Dice coefficients were 90.8 ± 1.7% and 87.3 ± 1.9%, the absolute distances were 2.0 ± 0.42 mm and 1.79 ± 0.45 mm, and the Hausdorff distances were 6.86 ± 1.71 mm and 7.02 ± 1.17 mm, respectively. The automatic segmentation method based on a sparse matrix transform and level set can provide a useful tool for quantitative cardiac imaging.
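The three reported metrics are standard and straightforward to compute from binary masks and point sets. A minimal sketch with toy masks (SciPy's directed_hausdorff does the heavy lifting; in practice the point sets would be the extracted boundaries):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    """Dice coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(pts_a, pts_b):
    """Symmetric Hausdorff distance between two point sets (N x 2 arrays)."""
    return max(directed_hausdorff(pts_a, pts_b)[0],
               directed_hausdorff(pts_b, pts_a)[0])

auto = np.zeros((64, 64), bool); auto[16:48, 16:48] = True    # toy masks
manual = np.zeros((64, 64), bool); manual[18:50, 14:46] = True
print(dice(auto, manual))
print(hausdorff(np.argwhere(auto), np.argwhere(manual)))
```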
Reducing the lag of accommodation by auditory biofeedback: A pilot study.
Wagner, Sandra; Ohlendorf, Arne; Schaeffel, Frank; Wahl, Siegfried
2016-12-01
The purpose of this study was to investigate whether a reduction of the accommodative lag is possible by training the accuracy of accommodation using auditory biofeedback. Accommodation responses were measured in thirty-one young adults with myopia for dioptric target distances of 2.0, 2.5, and 3.0 D using an eccentric infrared photorefractor. For the biofeedback training, subjects were randomly assigned to an experimental (n=15) or a control group (n=16). Subjects of the experimental group were provided with two tones while fixating a target: one tone was related to their accommodative response and the second to the target distance. Their task was to match these tones. The control group did not receive any auditory biofeedback. Two different training methods were applied: a continuous training of 200 s, and ten consecutive sessions of 20 s each. The training effects on the lag of accommodation (change Δ) were highly variable. Regarding the entire study group, the observed change in the accommodative lag was greater at closer distances, while no difference between the two training methods was revealed. Nevertheless, seven experimental subjects reduced their lag by ≥0.3 D (3.0 D target distance: Δ_long = −0.29 ± 0.20 D, Δ_short = −0.24 ± 0.21 D). This reduction was also seen in two control subjects. Remeasurement revealed that the average training effect cannot be preserved over a period of 5-7 days. The current investigation has shown that the accuracy of accommodation can be trained in some subjects using auditory biofeedback for target distances of 2.5 D or closer. Copyright © 2016 Elsevier Ltd. All rights reserved.
Liu, Yu; Hong, Yang; Lin, Chun-Yuan; Hung, Che-Lun
2015-01-01
The Smith-Waterman (SW) algorithm has been widely utilized for searching biological sequence databases in bioinformatics. Recently, several works have adopted graphics cards with Graphics Processing Units (GPUs) and the associated CUDA model to enhance the performance of SW computations. However, these works mainly focused on protein database search using the inter-task parallelization technique, using the GPU only to perform the SW computations one by one. Hence, in this paper, we propose an efficient SW alignment method, called CUDA-SWfr, for protein database search using the intra-task parallelization technique based on a CPU-GPU collaborative system. Before the SW computations are performed on the GPU, a procedure is applied on the CPU using the frequency distance filtration scheme (FDFS) to eliminate unnecessary alignments. The experimental results indicate that CUDA-SWfr runs 9.6 times and 96 times faster than the CPU-based SW method without and with FDFS, respectively.
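The idea behind a frequency-distance filter is that sequences whose residue compositions differ too much cannot align well, so the expensive SW computation can be skipped. A minimal sketch of one composition-based variant (the paper's exact FDFS definition and cutoff may differ; the sequences and threshold here are made up):

```python
from collections import Counter

def frequency_distance(seq_a, seq_b, alphabet="ACDEFGHIKLMNPQRSTVWY"):
    """Composition-based distance: sum of absolute differences of residue
    counts, a cheap lower-bound-style screen before full alignment."""
    ca, cb = Counter(seq_a), Counter(seq_b)
    return sum(abs(ca[x] - cb[x]) for x in alphabet)

query, subject = "MKTAYIAKQR", "MKTAYIAKQW"
if frequency_distance(query, subject) <= 4:     # assumed cutoff
    print("run Smith-Waterman on GPU")
else:
    print("filtered out before alignment")
```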
Frequent statistics of link-layer bit stream data based on AC-IM algorithm
NASA Astrophysics Data System (ADS)
Cao, Chenghong; Lei, Yingke; Xu, Yiming
2017-08-01
At present, there are many studies on data processing using classical pattern matching and its improved algorithms, but few on frequent statistics of link-layer bit stream data. This paper adopts a frequent statistics method for link-layer bit stream data based on the AC-IM algorithm, since classical multi-pattern matching algorithms such as the AC algorithm have high computational complexity and low efficiency and cannot be applied to binary bit stream data. The method's maximum jump distance over the pattern tree is the length of the shortest pattern string plus 3, without missing any matches. A theoretical analysis of the algorithm's construction is given first; the experimental results then show that the algorithm can adapt to the binary bit stream data environment and extract frequent sequences more accurately, with an obvious effect. Meanwhile, compared with the classical AC algorithm and other improved algorithms, the AC-IM algorithm has a greater maximum jump distance and is less time-consuming.
Automated measurement of stent strut coverage in intravascular optical coherence tomography
NASA Astrophysics Data System (ADS)
Ahn, Chi Young; Kim, Byeong-Keuk; Hong, Myeong-Ki; Jang, Yangsoo; Heo, Jung; Joo, Chulmin; Seo, Jin Keun
2015-02-01
Optical coherence tomography (OCT) is a non-invasive, cross-sectional imaging modality that has become a prominent imaging method in percutaneous intracoronary intervention. We present an automated detection algorithm for stent strut coordinates and coverage in OCT images. The algorithm for stent strut detection is composed of a coordinate transformation from the polar to the Cartesian domain and application of second derivative operators in the radial and circumferential directions. Local region-based active contouring was employed to detect lumen boundaries. We applied the method to OCT pullback images acquired from human patients in vivo to quantitatively measure stent strut coverage. The validation studies against manual expert assessments demonstrated high Pearson's coefficients (R = 0.99) for the stent strut coordinates, with no significant bias. An averaged Hausdorff distance of < 120 μm was obtained for vessel border detection. Quantitative comparison of stent-strut-to-vessel-wall distances found a bias of < 12.3 μm and a 95% confidence interval of < 110 μm.
An analysis-by-synthesis approach to the estimation of vocal cord polyp features.
Koizumi, T; Taniguchi, S; Itakura, F
1993-09-01
This paper deals with a new noninvasive method of estimating vocal cord polyp features through hoarse-voice analysis. A noteworthy feature of this method is that it enables us not only to discriminate hoarse voices caused by pathological vocal cords with a single golf-ball-like polyp from normal voices, but also to estimate polyp features such as the mass and dimension of polyp through the use of a novel model of pathological vocal cords which has been devised to simulate the subtle movement of the vocal cords. A synthetic hoarse voice produced with a hoarse-voice synthesizer is compared with a natural hoarse voice caused by the vocal cord polyp in terms of a distance measure and the polyp features are estimated by minimizing the distance measure. Some estimates of polyp dimension that have been obtained by applying this procedure to hoarse voices are found to compare favorably with actual polyp dimensions, demonstrating that the procedure is effective for estimating the features of golf-ball-like vocal cord polyps.
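The analysis-by-synthesis loop described here reduces to a generic optimization skeleton. The sketch below assumes user-supplied `synthesize` and `distance` callables standing in for the hoarse-voice synthesizer and the paper's (unspecified) distance measure; the parameter vector would hold the polyp mass and dimensions.

```python
import numpy as np
from scipy.optimize import minimize

def estimate_polyp_features(natural_voice, synthesize, distance, x0):
    # analysis-by-synthesis: search polyp parameters (e.g. mass, dimension)
    # that minimize the distance between synthetic and natural hoarse voices
    objective = lambda params: distance(synthesize(params), natural_voice)
    return minimize(objective, np.asarray(x0), method="Nelder-Mead").x
```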
Wang, Dansheng; Wang, Qinghua; Wang, Hao; Zhu, Hongping
2016-01-01
In the electromechanical impedance (EMI) method, the PZT patch performs the functions of both sensor and exciter. Due to the high frequency actuation and non-model based characteristics, the EMI method can be utilized to detect incipient structural damage. In recent years EMI techniques have been widely applied to monitor the health status of concrete and steel materials, however, studies on application to timber are limited. This paper will explore the feasibility of using the EMI technique for damage detection in timber specimens. In addition, the conventional damage index, namely root mean square deviation (RMSD) is employed to evaluate the level of damage. On that basis, a new damage index, Mahalanobis distance based on RMSD, is proposed to evaluate the damage severity of timber specimens. Experimental studies are implemented to detect notch and hole damage in the timber specimens. Experimental results verify the availability and robustness of the proposed damage index and its superiority over the RMSD indexes. PMID:27782088
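A minimal sketch of the two indices follows, assuming the Mahalanobis index is computed on per-sub-band RMSD feature vectors against a healthy-baseline distribution; the abstract does not spell out the exact construction, so this is one plausible reading.

```python
import numpy as np

def rmsd_index(z_healthy, z_damaged):
    # conventional RMSD damage index over impedance signatures
    num = np.sum((z_damaged.real - z_healthy.real) ** 2)
    den = np.sum(z_healthy.real ** 2)
    return float(np.sqrt(num / den))

def mahalanobis_rmsd(baseline_features, test_feature):
    # baseline_features: (n_measurements, n_subbands) RMSD vectors taken
    # in the healthy state; returns the Mahalanobis distance of a new
    # RMSD vector from that baseline distribution
    mu = baseline_features.mean(axis=0)
    cov = np.cov(baseline_features, rowvar=False)
    d = test_feature - mu
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))
```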
Entropic criterion for model selection
NASA Astrophysics Data System (ADS)
Tseng, Chih-Yuan
2006-10-01
Model or variable selection is usually achieved by ranking models in increasing order of preference. One such method applies the Kullback-Leibler distance, or relative entropy, as a selection criterion. Yet this raises two questions: why use this criterion, and are there any other criteria? Besides, conventional approaches require a reference prior, which is usually difficult to obtain. Following the logic of inductive inference proposed by Caticha [Relative entropy and inductive inference, in: G. Erickson, Y. Zhai (Eds.), Bayesian Inference and Maximum Entropy Methods in Science and Engineering, AIP Conference Proceedings, vol. 707, 2004 (available from arXiv.org/abs/physics/0311093)], we show relative entropy to be a unique criterion, which requires no prior information and can be applied to different fields. We examine this criterion by considering a physical problem, simple fluids, and the results are promising.
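As a concrete illustration of the criterion, a discrete Kullback-Leibler distance can rank candidate model distributions against an empirical one; a minimal sketch, where the smoothing constant `eps` is an assumption for numerical safety:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    # D_KL(p || q) for discrete distributions p (data) and q (model)
    p = np.asarray(p, float); q = np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# rank models by increasing KL distance from the empirical distribution:
# ranking = sorted(models, key=lambda m: kl_divergence(empirical, m))
```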
2011-01-01
Background The Prospective Space-Time scan statistic (PST) is widely used for the evaluation of space-time clusters of point event data. Usually a window of cylindrical shape is employed, with a circular or elliptical base in the space domain. Recently, the concept of the Minimum Spanning Tree (MST) was applied to specify the set of potential clusters, through the Density-Equalizing Euclidean MST (DEEMST) method, for the detection of arbitrarily shaped clusters. The original map is cartogram transformed, such that the control points are spread uniformly. That method is quite effective, but the cartogram construction is computationally expensive and complicated. Results A fast method for the detection and inference of point data set space-time disease clusters is presented, the Voronoi Based Scan (VBScan). A Voronoi diagram is built for points representing population individuals (cases and controls). The number of Voronoi cell boundaries intercepted by the line segment joining two case points defines the Voronoi distance between those points. That distance is used to approximate the density of the heterogeneous population and build the Voronoi distance MST linking the cases. The successive removal of edges from the Voronoi distance MST generates sub-trees which are the potential space-time clusters. Finally, those clusters are evaluated through the scan statistic. Monte Carlo replications of the original data are used to evaluate the significance of the clusters. An application to dengue fever in a small Brazilian city is presented. Conclusions The ability to promptly detect space-time clusters of disease outbreaks, when the number of individuals is large, was shown to be feasible, due to the reduced computational load of VBScan. Instead of changing the map, VBScan modifies the metric used to define the distance between cases, without requiring the cartogram construction. Numerical simulations showed that VBScan has higher power of detection, sensitivity and positive predictive value than the Elliptic PST. Furthermore, as VBScan also incorporates topological information from the point neighborhood structure, in addition to the usual geometric information, it is more robust than purely geometric methods such as the elliptic scan. Those advantages were illustrated in a real setting for dengue fever space-time clusters. PMID:21513556
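The Voronoi distance admits a simple approximation without computing boundary intersections explicitly: sample the segment densely and count nearest-neighbour changes. This is a hedged stand-in for VBScan's exact boundary count, adequate when the sampling is fine relative to cell size.

```python
import numpy as np
from scipy.spatial import cKDTree

def voronoi_distance(p, q, population_points, n_samples=200):
    # approximate the number of Voronoi cell boundaries crossed by the
    # segment pq: count changes of nearest control point along the segment
    tree = cKDTree(population_points)
    t = np.linspace(0.0, 1.0, n_samples)[:, None]
    samples = (1 - t) * np.asarray(p, float) + t * np.asarray(q, float)
    _, nearest = tree.query(samples)
    return int(np.count_nonzero(np.diff(nearest)))
```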
Jourdren, Laurent; Delaveau, Thierry; Marquenet, Emelie; Jacq, Claude; Garcia, Mathilde
2010-07-01
Recent improvements in microscopy technology allow detection of single molecules of RNA, but tools for large-scale automatic analyses of particle distributions are lacking. An increasing number of imaging studies emphasize the importance of mRNA localization in the definition of cell territory or the biogenesis of cell compartments. CORSEN is a new tool dedicated to three-dimensional (3D) distance measurements from imaging experiments especially developed to access the minimal distance between RNA molecules and cellular compartment markers. CORSEN includes a 3D segmentation algorithm allowing the extraction and the characterization of the cellular objects to be processed--surface determination, aggregate decomposition--for minimal distance calculations. CORSEN's main contribution lies in exploratory statistical analysis, cell population characterization, and high-throughput assays that are made possible by the implementation of a batch process analysis. We highlighted CORSEN's utility for the study of relative positions of mRNA molecules and mitochondria: CORSEN clearly discriminates mRNA localized to the vicinity of mitochondria from those that are translated on free cytoplasmic polysomes. Moreover, it quantifies the cell-to-cell variations of mRNA localization and emphasizes the necessity for statistical approaches. This method can be extended to assess the evolution of the distance between specific mRNAs and other cellular structures in different cellular contexts. CORSEN was designed for the biologist community with the concern to provide an easy-to-use and highly flexible tool that can be applied for diverse distance quantification issues.
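The core distance query CORSEN performs, minimal 3D distances from RNA spots to compartment-marker objects, can be approximated in a few lines with a k-d tree over segmented voxel coordinates. This is a sketch of the query only, not CORSEN's surface-determination or aggregate-decomposition steps.

```python
import numpy as np
from scipy.spatial import cKDTree

def min_distances(rna_coords, marker_coords):
    # rna_coords, marker_coords: (n, 3) arrays of segmented 3D coordinates
    # (e.g. in micrometres); returns the minimal distance from each RNA
    # spot to the marker structure
    distances, _ = cKDTree(marker_coords).query(rna_coords)
    return distances
```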
Planning Training Workload in Football Using Small-Sided Games' Density.
Sangnier, Sebastien; Cotte, Thierry; Brachet, Olivier; Coquart, Jeremy; Tourny, Claire
2018-05-08
Sangnier, S, Cotte, T, Brachet, O, Coquart, J, and Tourny, C. Planning training workload in football using small-sided games density. J Strength Cond Res XX(X): 000-000, 2018-To develop physical qualities, the small-sided games' (SSGs) density may be essential in soccer. Small-sided games are games in which the pitch size, number of players, and rules differ from those of traditional soccer matches. The purpose was to assess the relation between training workload and SSGs' density. Data for 33 densities (41 practice games and 3 full games) were analyzed through global positioning system (GPS) data collected from 25 professional soccer players (80.7 ± 7.0 kg; 1.83 ± 0.05 m; 26.4 ± 4.9 years). Based on total distance, metabolic power distance, sprint distance, and acceleration distance, the GPS data were divided into 4 categories: endurance, power, speed, and strength. Statistical analysis compared the relation between GPS values and SSGs' densities, and 3 methods were applied to assess the models (R-squared, root-mean-square error, and Akaike information criterion). The results suggest that all the GPS data match the player's essential athletic skills. They were all correlated with the game's density. Acceleration distance, deceleration distance, metabolic power, and total distance followed a logarithmic regression model, whereas sprint distance and number of sprints followed a linear regression model. The research reveals options to monitor the training workload. Coaches could anticipate the load resulting from the SSGs and adjust the field size to the number of players. Taking into account the field size during SSGs enables coaches to target the most favorable density for developing the expected physical qualities. Calibrating intensity during SSGs would allow coaches to assess each athletic skill under the same intensity conditions as in competition.
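The model comparison described, linear versus logarithmic fits scored by R-squared, RMSE, and AIC, can be reproduced with ordinary least squares; a sketch under a Gaussian-likelihood AIC assumption (constant terms dropped):

```python
import numpy as np

def fit_and_score(x, y, transform):
    # fit y = b0 + b1 * transform(x) and report the three model criteria
    X = np.column_stack([np.ones_like(x), transform(x)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n, k = len(y), X.shape[1]
    rss = float(resid @ resid)
    r2 = 1.0 - rss / float(((y - y.mean()) ** 2).sum())
    rmse = float(np.sqrt(rss / n))
    aic = n * np.log(rss / n) + 2 * k
    return r2, rmse, aic

# linear model:      fit_and_score(density, gps_metric, lambda d: d)
# logarithmic model: fit_and_score(density, gps_metric, np.log)
```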
Wang, Bo; Lucy, Katie A.; Schuman, Joel S.; Sigal, Ian A.; Bilonick, Richard A.; Kagemann, Larry; Kostanyan, Tigran; Lu, Chen; Liu, Jonathan; Grulkowski, Ireneusz; Fujimoto, James G.; Ishikawa, Hiroshi; Wollstein, Gadi
2016-01-01
Purpose To investigate how the lamina cribrosa (LC) microstructure changes with distance from the central retinal vessel trunk (CRVT), and to determine how this change differs in glaucoma. Methods One hundred nineteen eyes (40 healthy, 29 glaucoma suspect, and 50 glaucoma) of 105 subjects were imaged using swept-source optical coherence tomography (OCT). The CRVT was manually delineated at the level of the anterior LC surface. A line was fit to the distribution of LC microstructural parameters and distance from CRVT to measure the gradient (change in LC microstructure per distance from the CRVT) and intercept (LC microstructure near the CRVT). A linear mixed-effects model was used to determine the effect of diagnosis on the gradient and intercept of the LC microstructure with distance from the CRVT. A Kolmogorov-Smirnov test was applied to determine the difference in distribution between the diagnostic categories. Results The percent of visible LC in all scans was 26 ± 7%. Beam thickness and pore diameter decreased with distance from the CRVT. Glaucoma eyes had a larger decrease in beam thickness (−1.132 ± 0.503 μm, P = 0.028) and pore diameter (−0.913 ± 0.259 μm, P = 0.001) compared with healthy controls per 100 μm from the CRVT. Glaucoma eyes showed increased variability in both beam thickness and pore diameter relative to the distance from the CRVT compared with healthy eyes (P < 0.05). Conclusions These findings demonstrate the importance of considering the anatomical location of the CRVT in the assessment of the LC, as there is a relationship between the distance from the CRVT and the LC microstructure, which differs between healthy and glaucoma eyes. PMID:27286366
Koehl, Anthony J; Long, Jeffrey C
2018-02-01
We present a model that partitions Nei's minimum genetic distance between admixed populations into components of admixture and genetic drift. We applied this model to 17 admixed populations in the Americas to examine how admixture and drift have contributed to the patterns of genetic diversity. We analyzed 618 short tandem repeat loci in 949 individuals from 49 population samples. Thirty-two samples serve as proxies for continental ancestors. Seventeen samples represent admixed populations: 4 African American and 13 Latin American. We partition genetic distance, and then calculate fixation indices and principal coordinates to interpret our results. A computer simulation confirms that our method correctly estimates the drift and admixture components of genetic distance when the assumptions of the model are met. The partition of genetic distance shows that both admixture and genetic drift contribute to patterns of genetic diversity. The admixture component of genetic distance provides evidence for two distinct axes of continental ancestry. However, the genetic distances show that ancestry contributes to only one axis of genetic differentiation. The genetic distances among the 13 Latin American populations in this analysis show contributions from both differences in ancestry and differences in genetic drift. By contrast, the genetic distances among the 4 African American populations in this analysis are due mostly to genetic drift, because these groups have similar fractions of European and African ancestry. The genetic structure of admixed populations in the Americas reflects more than admixture. We show that the history of serial founder effects constrains the impact of admixture on allele frequencies to a single dimension. Genetic drift in the admixed populations imposed a new level of genetic structure onto that created by admixture. © 2017 Wiley Periodicals, Inc.
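For reference, Nei's minimum genetic distance between two populations, the quantity the paper partitions, is computed from allele frequencies as D_m = (J_X + J_Y)/2 - J_XY, with the J terms averaged over loci. A sketch follows; the drift/admixture partition itself is the paper's contribution and is not shown.

```python
import numpy as np

def nei_minimum_distance(x, y):
    # x, y: (n_loci, n_alleles) allele-frequency arrays for populations X, Y
    jx = np.mean(np.sum(x ** 2, axis=1))    # expected homozygosity in X
    jy = np.mean(np.sum(y ** 2, axis=1))    # expected homozygosity in Y
    jxy = np.mean(np.sum(x * y, axis=1))    # cross-population identity
    return 0.5 * (jx + jy) - jxy
```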
Magnified reconstruction of digitally recorded holograms by Fresnel-Bluestein transform.
Restrepo, John F; Garcia-Sucerquia, Jorge
2010-11-20
A method for numerical reconstruction of digitally recorded holograms with variable magnification is presented. The proposed strategy allows for smaller, equal, or larger magnification than that achieved with Fresnel transform by introducing the Bluestein substitution into the Fresnel kernel. The magnification is obtained independent of distance, wavelength, and number of pixels, which enables the method to be applied in color digital holography and metrological applications. The approach is supported by experimental and simulation results in digital holography of objects of comparable dimensions with the recording device and in the reconstruction of holograms from digital in-line holographic microscopy.
Optical ranging and communication method based on all-phase FFT
NASA Astrophysics Data System (ADS)
Li, Zening; Chen, Gang
2014-10-01
This paper describes an optical ranging and communication method based on the all-phase fast Fourier transform (FFT). This kind of system is mainly designed for vehicle safety applications. In particular, the phase shift of the reflected orthogonal frequency division multiplexing (OFDM) symbol is measured to determine the signal time of flight, and the distance is then calculated from the time of flight. Several key factors affecting the phase measurement accuracy are studied. The all-phase FFT, which can reduce the effects of frequency offset, phase noise, and inter-carrier interference (ICI), is applied to measure the OFDM symbol phase shift.
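The distance computation reduces to estimating a subcarrier phase shift. The sketch below uses a standard FFT for the phase estimate; the paper's all-phase FFT refines exactly this step to suppress frequency offset and ICI effects, so this is a simplified stand-in.

```python
import numpy as np

def ranging_distance(tx, rx, fs, f0, c=3e8):
    # estimate round-trip time of flight from the phase shift of the f0
    # subcarrier between the transmitted and reflected OFDM symbols
    n = len(tx)
    k = int(round(f0 * n / fs))                 # FFT bin of the subcarrier
    dphi = np.angle(np.fft.fft(rx)[k] / np.fft.fft(tx)[k])
    tof = -dphi / (2 * np.pi * f0)              # unambiguous for tof < 1/f0
    return 0.5 * c * tof                        # round trip -> one-way distance
```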
Unsteady aerodynamic characterization of a military aircraft in vertical gusts
NASA Technical Reports Server (NTRS)
Lebozec, A.; Cocquerez, J. L.
1985-01-01
The effects of 2.5-m/sec vertical gusts on the flight characteristics of a 1:8.6 scale model of a Mirage 2000 aircraft in free flight at 35 m/sec over a distance of 30 m are investigated. The wind-tunnel setup and instrumentation are described; the impulse-response and local-coefficient-identification analysis methods applied are discussed in detail; and the modification and calibration of the gust-detection probes are reviewed. The results are presented in graphs, and good general agreement is obtained between model calculations using the two analysis methods and the experimental measurements.
UV gated Raman spectroscopy for standoff detection of explosives
NASA Astrophysics Data System (ADS)
Gaft, M.; Nagli, L.
2008-07-01
Real-time detection and identification of explosives at a standoff distance is a major issue in efforts to develop defenses against so-called improvised explosive devices (IEDs). It is recognized that the only method potentially capable of standoff detection of minimal amounts of explosives is laser-based spectroscopy. The LDS technique belongs to trace detection, namely its micro-particle variety; it is based on the commonly held observation that surface contamination is very difficult to avoid and can be exploited for standoff detection. We have applied gated Raman spectroscopy to the detection of the main explosive materials, both factory-made and homemade. We developed and tested a Raman system for the field remote detection and identification of minimal amounts of explosives on relevant surfaces at distances of up to 30 m.
Real-time interactive 3D manipulation of particles viewed in two orthogonal observation planes.
Perch-Nielsen, Ivan; Rodrigo, Peter; Glückstad, Jesper
2005-04-18
The generalized phase contrast (GPC) method has been applied to transform a single TEM00 beam into a manifold of counterpropagating-beam traps capable of real-time interactive manipulation of multiple microparticles in three dimensions (3D). This paper reports on the use of low numerical aperture (NA), non-immersion, objective lenses in an implementation of the GPC-based 3D trapping system. Contrary to high-NA based optical tweezers, the GPC trapping system demonstrated here operates with long working distance (>10 mm), and offers a wider manipulation region and a larger field of view for imaging through each of the two opposing objective lenses. As a consequence of the large working distance, simultaneous monitoring of the trapped particles in a second orthogonal observation plane is demonstrated.
NASA Astrophysics Data System (ADS)
Walwyn-Salas, G.; Czap, L.; Gomola, I.; Tamayo-García, J. A.
2016-07-01
The cylindrical NE2575 and spherical PTW32002 chamber types were tested in this paper to determine their performance at different source-chamber distances, field sizes, and two radiation qualities. To ensure accurate measurements, a correction factor must be applied to NE2575 measurements at different distances because of differences found between the reference point defined by the manufacturer and the effective point of measurement. This correction factor for the NE2575 secondary standard from the Center for Radiation Protection and Hygiene of Cuba was assessed with a 0.3% uncertainty using the results of three methods. Laboratories that use NE2575 chambers should take into consideration the performance characteristics tested in this paper to obtain accurate measurements.
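One common way to locate an effective point of measurement, plausibly among the three methods used (the abstract does not name them), is to fit an inverse-square law with a distance offset to readings taken at several nominal distances; a sketch with a hypothetical parameterization:

```python
import numpy as np
from scipy.optimize import curve_fit

def inverse_square(d_nominal, k, delta):
    # delta: offset between the manufacturer's reference point and the
    # effective point of measurement (hypothetical parameter names)
    return k / (d_nominal + delta) ** 2

# d: nominal source-chamber distances (m); m_read: chamber readings
# popt, _ = curve_fit(inverse_square, d, m_read,
#                     p0=(m_read[0] * d[0] ** 2, 0.0))
# delta = popt[1]   # the distance-dependent correction follows from delta
```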
Assessing the evolutionary rate of positional orthologous genes in prokaryotes using synteny data
Lemoine, Frédéric; Lespinet, Olivier; Labedan, Bernard
2007-01-01
Background Comparison of completely sequenced microbial genomes has revealed how fluid these genomes are. Detecting synteny blocks requires reliable methods to determine the orthologs among the whole set of homologs detected by exhaustive comparisons between each pair of completely sequenced genomes. This is a complex and difficult problem in the field of comparative genomics, but solving it will help to better understand the way prokaryotic genomes are evolving. Results We have developed a suite of programs that automate three essential steps to study conservation of gene order, and validated them with a set of 107 bacteria and archaea that cover the majority of the prokaryotic taxonomic space. We identified the whole set of shared homologs between two or more species and computed the evolutionary distance separating each pair of homologs. We applied two strategies to extract from the set of homologs a collection of valid orthologs shared by at least two genomes. The first computes the Reciprocal Smallest Distance (RSD) using the PAM distances separating pairs of homologs. The second method groups homologs into families and reconstructs each family's evolutionary tree, distinguishing bona fide orthologs as well as paralogs created after the last speciation event. Although the phylogenetic tree method often succeeds where RSD fails, the reverse could occasionally be true. Accordingly, we used the data obtained with either method, or their intersection, to identify the orthologs that are adjacent in each pair of genomes, the Positional Orthologous Genes (POGs), and to further study their properties. Once all these synteny blocks had been detected, we showed that POGs are subject to more evolutionary constraints than orthologs outside synteny groups, whatever the taxonomic distance separating the compared organisms. Conclusion The suite of programs described in this paper allows a reliable detection of orthologs and is useful for evaluating gene order conservation in prokaryotes whatever their taxonomic distance. Thus, our approach will ease the rapid identification of POGs in the next few years, as we expect to be inundated with thousands of completely sequenced microbial genomes. PMID:18047665
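The RSD strategy has a compact core: reciprocal-best pairs under the evolutionary distance. A sketch over a precomputed PAM-style distance matrix:

```python
import numpy as np

def reciprocal_smallest_distance(D):
    # D[i, j]: evolutionary (e.g. PAM) distance between gene i of genome A
    # and gene j of genome B; returns reciprocal-best ortholog pairs
    best_b = D.argmin(axis=1)   # for each A-gene, its closest B-gene
    best_a = D.argmin(axis=0)   # for each B-gene, its closest A-gene
    return [(i, j) for i, j in enumerate(best_b) if best_a[j] == i]
```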
Lee, Sangyoon; Hu, Xinda; Hua, Hong
2016-05-01
Many error sources have been explored with regard to the depth perception problem in augmented reality environments using optical see-through head-mounted displays (OST-HMDs). Nonetheless, two error sources are commonly neglected: the ray-shift phenomenon and the change in interpupillary distance (IPD). The first source of error arises from the difference in refraction between the virtual and see-through optical paths caused by an optical combiner, which is required of OST-HMDs. The second arises from the change in the viewer's IPD due to eye convergence. In this paper, we analyze the effects of these two error sources on near-field depth perception and propose methods to compensate for them. Furthermore, we investigate their effectiveness through an experiment comparing the conditions with and without our error compensation methods applied. In our experiment, participants estimated the egocentric depth of a virtual and a physical object located at seven different near-field distances (40∼200 cm) using a perceptual matching task. Although the experimental results showed different patterns depending on the target distance, they demonstrated that the near-field depth perception error can be effectively reduced to a very small level (at most 1 percent error) by compensating for the two mentioned error sources.
High-Dimensional Intrinsic Interpolation Using Gaussian Process Regression and Diffusion Maps
Thimmisetty, Charanraj A.; Ghanem, Roger G.; White, Joshua A.; ...
2017-10-10
This article considers the challenging task of estimating geologic properties of interest using a suite of proxy measurements. The current work recasts this task as a manifold learning problem. In this process, this article introduces a novel regression procedure for intrinsic variables constrained onto a manifold embedded in an ambient space. The procedure is meant to sharpen high-dimensional interpolation by inferring non-linear correlations from the data being interpolated. The proposed approach augments manifold learning procedures with a Gaussian process regression. It first identifies, using diffusion maps, a low-dimensional manifold embedded in an ambient high-dimensional space associated with the data. It relies on the diffusion distance associated with this construction to define a distance function with which the data model is equipped. This distance metric function is then used to compute the correlation structure of a Gaussian process that describes the statistical dependence of quantities of interest in the high-dimensional ambient space. The proposed method is applicable to arbitrarily high-dimensional data sets. Here, it is applied to subsurface characterization using a suite of well log measurements. The predictions obtained in original, principal component, and diffusion space are compared using both qualitative and quantitative metrics. Considerable improvement in the prediction of the geological structural properties is observed with the proposed method.
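A stripped-down version of the pipeline, diffusion-map coordinates followed by GP regression in the embedded space, might look as follows. The bandwidth `eps`, the kernel choice, and taking real parts of the eigenvectors are sketch simplifications; production codes typically symmetrize the kernel before the eigendecomposition.

```python
import numpy as np

def diffusion_coords(X, eps, n_components=2, t=1):
    # diffusion-map embedding of (n, d) samples X with kernel bandwidth eps
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    P = np.exp(-sq / eps)
    P /= P.sum(axis=1, keepdims=True)           # row-stochastic Markov matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    vals, vecs = vals.real[order], vecs.real[:, order]
    # drop the trivial constant eigenvector; scale by eigenvalues^t
    return vecs[:, 1:n_components + 1] * vals[1:n_components + 1] ** t

def gp_predict(Z, y, Z_new, length=1.0, noise=1e-6):
    # GP regression with an RBF kernel on the diffusion coordinates, so
    # Euclidean distance in Z approximates the diffusion distance
    def k(a, b):
        return np.exp(-0.5 * ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
                      / length ** 2)
    K = k(Z, Z) + noise * np.eye(len(Z))
    return k(Z_new, Z) @ np.linalg.solve(K, y)
```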
A method to calculate synthetic waveforms in stratified VTI media
NASA Astrophysics Data System (ADS)
Wang, W.; Wen, L.
2012-12-01
Transverse isotropy with a vertical axis of symmetry (VTI) may be an important material property in the Earth's interior. In this presentation, we develop a method to calculate synthetic seismograms for wave propagation in stratified VTI media. Our method is based on the generalized reflection and transmission method (GRTM) (Luco & Apsel 1983), which we extend to VTI media. GRTM has the advantage of remaining stable in high-frequency calculations compared to the Haskell matrix method (Haskell 1964), which explicitly excludes the exponential growth terms in the propagation matrix and is limited to low-frequency computation. In the implementation, we also improve GRTM in two aspects. 1) We apply the Shanks transformation (Bender & Orszag 1999) to improve the rate of convergence. This improvement is especially important when the depths of the source and receiver are close. 2) We adopt a self-adaptive Simpson integration method (Chen & Zhang 2001) in the discrete wavenumber integration so that the integration can still be carried out efficiently at large epicentral distances. Because the calculation is independent for each frequency, the program can also be effectively implemented in parallel computing. Our method provides a powerful tool to synthesize broadband seismograms of VTI media over a large range of epicentral distances. We will present examples of using the method to study possible transverse isotropy in the upper mantle and the lowermost mantle.
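The Shanks transformation mentioned in point 1 is itself one line; applied to a sequence of partial sums it accelerates convergence of slowly converging series (denominators are assumed nonzero in this sketch):

```python
def shanks(a):
    # Shanks transformation of a partial-sum sequence a_0, a_1, ...
    # S(a_n) = (a_{n+1} a_{n-1} - a_n^2) / (a_{n+1} + a_{n-1} - 2 a_n)
    return [(a[n + 1] * a[n - 1] - a[n] ** 2) /
            (a[n + 1] + a[n - 1] - 2 * a[n])
            for n in range(1, len(a) - 1)]
```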
The Application of Werner and Kaplan's Concept of "Distancing" to Children Who Are Deaf-Blind
ERIC Educational Resources Information Center
Bruce, Susan M.
2005-01-01
Through the process of distancing, children develop an understanding of the differences between themselves and others, themselves and objects, and objects and representations. Adults can support progressive distancing in children who are congenitally deaf-blind by applying strategies, such as the hand-under-hand exploration of objects, the…
Quality and Growth Implications of Incremental Costing Models for Distance Education Units
ERIC Educational Resources Information Center
Crawford, C. B.; Gould, Lawrence V.; King, Dennis; Parker, Carl
2010-01-01
The purpose of this article is to explore quality and growth implications emergent from various incremental costing models applied to distance education units. Prior research relative to costing models and three competing costing models useful in the current distance education environment are discussed. Specifically, the simple costing model, unit…
NASA Astrophysics Data System (ADS)
Ogwari, P.; DeShon, H. R.; Hornbach, M.
2017-12-01
Post-2008 earthquake rate increases in the Central United States have been associated with large-scale subsurface disposal of waste fluids from oil and gas operations. Various earthquake sequences in the Fort Worth and Permian basins began in the absence of seismic stations at local distances to record and accurately locate hypocenters. Typically, the initial earthquakes have been located using regional seismic network stations (>100 km epicentral distance) and global 1D velocity models, which usually results in large location uncertainty, especially in depth, does not resolve magnitude <2.5 events, and does not constrain the geometry of the activated fault(s). Here, we present a method to better resolve earthquake occurrence and location using matched filters and regional relative location when local data become available. We use the local-distance data for high-resolution earthquake location, identifying earthquake templates and accurate source-station raypath velocities for the Pg and Lg phases at regional stations. A matched-filter analysis is then applied to seismograms recorded at US network stations and at adopted TA stations that recorded the earthquakes before and during the local network deployment period. Positive detections are declared based on manual review of the associated P and S arrivals on local stations. We apply hierarchical clustering to distinguish earthquakes that are spatially clustered from those that are spatially separated. Finally, we conduct relative earthquake and earthquake-cluster location using regional-station differential times. Initial analysis applied to the 2008-2009 DFW airport sequence in north Texas results in time-continuous imaging of epicenters extending into 2014. Seventeen earthquakes in the USGS earthquake catalog scattered across a 10 km² area near DFW airport are relocated onto a single fault using these approaches. These techniques will also be applied toward imaging recent earthquakes in the Permian Basin near Pecos, TX.
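The matched-filter core is normalized cross-correlation of a template waveform against continuous data; a sketch, where the detection threshold shown in the usage comment is an illustrative assumption (in practice it is set from the correlation noise statistics):

```python
import numpy as np

def matched_filter(template, stream):
    # normalized cross-correlation of a waveform template against a
    # continuous stream; values lie in [-1, 1]
    t = (template - template.mean()) / template.std()
    n = len(t)
    cc = np.empty(len(stream) - n + 1)
    for i in range(len(cc)):
        w = stream[i:i + n]
        cc[i] = t @ (w - w.mean()) / (n * w.std())
    return cc

# cc = matched_filter(tmpl, data)
# picks = np.where(cc > 8 * np.median(np.abs(cc)))[0]   # candidate detections
```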
Single-Image Distance Measurement by a Smart Mobile Device.
Chen, Shangwen; Fang, Xianyong; Shen, Jianbing; Wang, Linbo; Shao, Ling
2017-12-01
Existing distance measurement methods either require multiple images and special photographing poses or only measure height with a special view configuration. We propose a novel image-based method that can measure various types of distance from a single image captured by a smart mobile device. The embedded accelerometer is used to determine the view orientation of the device. Consequently, pixels can be back-projected to the ground, thanks to an efficient calibration method using two known distances. The distance in pixels is then transformed to a real distance in centimeters with a linear model parameterized by the magnification ratio. Various types of distance specified in the image can be computed accordingly. Experimental results demonstrate the effectiveness of the proposed method.
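The final pixel-to-centimeter step is a two-point linear calibration; a sketch with hypothetical values (the function and variable names are illustrative, not the paper's):

```python
def pixel_to_cm(p1_px, d1_cm, p2_px, d2_cm):
    # two known ground distances calibrate the linear pixel -> centimeter
    # model parameterized by the magnification ratio m
    m = (d2_cm - d1_cm) / (p2_px - p1_px)
    b = d1_cm - m * p1_px
    return lambda pixels: m * pixels + b

# to_cm = pixel_to_cm(120, 50.0, 480, 200.0)
# to_cm(300)  ->  125.0 cm
```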
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Derek; Sabondjian, Eric; Lawrence, Kailin
Purpose: To apply surface collimation for superficial flap HDR skin brachytherapy utilizing common clinical resources and to demonstrate the potential for OAR dose reduction within a clinically relevant setting. Methods: Two phantom setups were used. 3 mm lead collimation was applied to a solid slab phantom to determine appropriate geometries relating to collimation and dwell activation. The same collimation was applied to the temple of an anthropomorphic head phantom to demonstrate lens dose reduction. Each setup was simulated and planned to deliver 400 cGy to a 3 cm circular target to 3 mm depth. The control and collimated irradiations were sequentially measured using calibrated radiochromic films. Results: Collimation for the slab phantom attenuated the dose beyond the collimator opening, decreasing the fall-off distances by half and reducing the area of healthy skin irradiated. Target coverage can be negatively impacted by a tight collimation margin, with the required margin approximated by the primary beam geometric penumbra. Surface collimation applied to the head phantom similarly attenuated the surrounding normal tissue dose while reducing the lens dose from 84 to 68 cGy. To ensure consistent setup between simulation and treatment, additional QA was performed including collimator markup, accounting for collimator placement uncertainties, standoff distance verification, and in vivo dosimetry. Conclusions: Surface collimation was shown to reduce normal tissue dose without compromising target coverage. Lens dose reduction was demonstrated on an anthropomorphic phantom within a clinical setting. Additional QA is proposed to ensure treatment fidelity.
High-order time-marching reinitialization for regional level-set functions
NASA Astrophysics Data System (ADS)
Pan, Shucheng; Lyu, Xiuxiu; Hu, Xiangyu Y.; Adams, Nikolaus A.
2018-02-01
In this work, the time-marching reinitialization method is extended to compute the unsigned distance function in multi-region systems involving an arbitrary number of regions. High order and interface preservation are achieved by applying a simple mapping that transforms the regional level-set function to the level-set function and a high-order two-step reinitialization method which combines the closest-point-finding procedure and the HJ-WENO scheme. The convergence failure of the closest-point-finding procedure in three dimensions is addressed by employing a proposed multiple-junction treatment and a directional optimization algorithm. Simple test cases show that our method exhibits 4th-order accuracy for reinitializing the regional level-set functions and strictly satisfies the interface-preserving property. The reinitialization results for more complex cases with randomly generated diagrams show the capability of our method for an arbitrary number of regions N, with a computational effort independent of N. The proposed method has been applied to dynamic interfaces with different types of flows, and the results demonstrate high accuracy and robustness.
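As a low-order stand-in for the reinitialization, the unsigned distance to the nearest interface of a multi-region label field can be obtained with a Euclidean distance transform. This sketch is first-order accurate, unlike the paper's high-order HJ-WENO scheme, but illustrates the quantity being computed.

```python
import numpy as np
from scipy import ndimage

def unsigned_distance(labels, dx=1.0):
    # labels: integer region-ID field on a uniform grid of spacing dx;
    # returns the unsigned distance to the nearest region interface
    interface = np.zeros(labels.shape, dtype=bool)
    for ax in range(labels.ndim):
        diff = np.diff(labels, axis=ax) != 0
        lo = [slice(None)] * labels.ndim
        hi = [slice(None)] * labels.ndim
        lo[ax], hi[ax] = slice(0, -1), slice(1, None)
        interface[tuple(lo)] |= diff    # mark cells on both sides
        interface[tuple(hi)] |= diff
    return ndimage.distance_transform_edt(~interface, sampling=dx)
```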
ERIC Educational Resources Information Center
Rogerson-Revell, Pamela; Nie, Ming; Armellini, Alejandro
2012-01-01
We researched the incorporation of three learning technologies (voice boards, i.e. voice-based discussion boards, e-book readers, and Second Life virtual world), into the Master's Programme in Applied Linguistics and Teaching English to Speakers of Other Languages offered by distance learning at the University of Leicester. This small-scale study…
Atlas-based identification of targets for functional radiosurgery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stancanello, Joseph; Romanelli, Pantaleo; Modugno, Nicola
2006-06-15
Functional disorders of the brain, such as Parkinson's disease, dystonia, epilepsy, and neuropathic pain, may exhibit poor response to medical therapy. In such cases, surgical intervention may become necessary. Modern surgical approaches to such disorders include radio-frequency lesioning and deep brain stimulation (DBS). The subthalamic nucleus (STN) is one of the most useful stereotactic targets available: STN DBS is known to induce substantial improvement in patients with end-stage Parkinson's disease. Other targets include the Globus Pallidus pars interna (GPi) for dystonia and Parkinson's disease, and the centromedian nucleus of the thalamus (CMN) for neuropathic pain. Radiosurgery is an attractive noninvasive alternative to treat some functional brain disorders. The main technical limitation to radiosurgery is that the target can be selected only on the basis of magnetic resonance anatomy without electrophysiological confirmation. The aim of this work is to provide a method for the correct atlas-based identification of the target to be used in functional neurosurgery treatment planning. The coordinates of STN, CMN, and GPi were identified in the Talairach and Tournoux atlas and transformed to the corresponding regions of the Montreal Neurological Institute (MNI) electronic atlas. Binary masks describing the target nuclei were created. The MNI electronic atlas was deformed onto the patient magnetic resonance imaging-T1 scan by applying an affine transformation followed by a local nonrigid registration. The first transformation was based on normalized cross correlation and the second on optimization of a two-part objective function consisting of similarity criteria and weighted regularization. The obtained deformation field was then applied to the target masks. The minimum distance between the surface of an implanted electrode and the surface of the deformed mask was calculated. The validation of the method consisted of comparing the electrode-mask distance to the clinical outcome of the treatments in ten cases of bilateral DBS implants. Electrode placement may have an effect within a radius of stimulation equal to 2 mm; therefore, the registration process is considered successful if the error is less than 2 mm. The registrations of the MNI atlas onto the patient space succeeded in all cases. The comparison of the distance to the clinical outcome revealed good agreement: where the distance was high (at least in one implant), the clinical outcome was poor; where there was a close correlation between the structures, clinical outcome revealed an improvement of the pathological condition. In conclusion, the proposed method seems to provide a useful tool for the identification of the target nuclei for functional radiosurgery. Also, the method is applicable to other types of functional treatment.
NASA Astrophysics Data System (ADS)
Kacprzak, T.; Herbel, J.; Amara, A.; Réfrégier, A.
2018-02-01
Approximate Bayesian Computation (ABC) is a method to obtain a posterior distribution without a likelihood function, using simulations and a set of distance metrics. For that reason, it has recently been gaining popularity as an analysis tool in cosmology and astrophysics. Its drawback, however, is a slow convergence rate. We propose a novel method, which we call qABC, to accelerate ABC with Quantile Regression. In this method, we create a model of quantiles of the distance measure as a function of input parameters. This model is trained on a small number of simulations and estimates which regions of the prior space are likely to be accepted into the posterior. Other regions are then immediately rejected. This procedure is repeated as more simulations become available. We apply it to the practical problem of estimating the redshift distribution of cosmological samples, using forward modelling developed in previous work. The qABC method converges to nearly the same posterior as basic ABC. It uses, however, only 20% of the number of simulations compared to basic ABC, achieving a fivefold gain in execution time for our problem. For other problems the acceleration rate may vary; it depends on how close the prior is to the final posterior. We discuss possible improvements and extensions to this method.
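The quantile model at the heart of qABC can be prototyped with any quantile regressor; a sketch using gradient boosting, where the quantile level and the choice of estimator are assumptions not prescribed by the paper:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def quantile_cut(theta, dist, q=0.75):
    # model the q-quantile of the ABC distance as a function of the input
    # parameters; prior regions with large predicted quantiles can then
    # be rejected without running further simulations there
    model = GradientBoostingRegressor(loss="quantile", alpha=q)
    model.fit(theta, dist)
    return model

# keep = quantile_cut(theta_train, dist_train).predict(theta_prior) < eps
```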
A New Cloud and Aerosol Layer Detection Method Based on Micropulse Lidar Measurements
NASA Astrophysics Data System (ADS)
Wang, Q.; Zhao, C.; Wang, Y.; Li, Z.; Wang, Z.; Liu, D.
2014-12-01
A new algorithm is developed to detect aerosols and clouds based on micropulse lidar (MPL) measurements. In this method, a semi-discretization processing (SDP) technique is first used to inhibit the impact of increasing noise with distance, then a value distribution equalization (VDE) method is introduced to reduce the magnitude of signal variations with distance. Combined with empirical threshold values, clouds and aerosols are detected and separated. This method can detect clouds and aerosols with high accuracy, although classification of aerosols and clouds is sensitive to the thresholds selected. Compared with the existing Atmospheric Radiation Measurement (ARM) program lidar-based cloud product, the new method detects more high clouds. The algorithm was applied to a year of observations at both the U.S. Southern Great Plains (SGP) and China Taihu site. At SGP, the cloud frequency shows a clear seasonal variation with maximum values in winter and spring, and shows bi-modal vertical distributions with maximum frequency at around 3-6 km and 8-12 km. The annual averaged cloud frequency is about 50%. By contrast, the cloud frequency at Taihu shows no clear seasonal variation and the maximum frequency is at around 1 km. The annual averaged cloud frequency is about 15% higher than that at SGP.
Bernsdorf, Kamille Almer; Lau, Cathrine Juel; Andreasen, Anne Helms; Toft, Ulla; Lykke, Maja; Glümer, Charlotte
2017-11-01
Literature suggests that people living in areas with a wealth of unhealthy fast food options may show higher levels of fast food intake. Multilevel logistic regression analyses were applied to examine the association between GIS-located fast food outlets (FFOs) and self-reported fast food intake among adults (16+ years) in the Capital Region of Denmark (N = 48,305). Accessibility of FFOs was measured both as proximity (distance to nearest FFO) and density (number of FFOs within a 1 km network buffer around home). Odds of fast food intake ≥ 1/week increased significantly with increasing FFO density and decreased significantly with increasing distance to the nearest FFO for distances ≤ 4 km. For long distances (> 4 km), odds increased with increasing distance, although this applied only to car owners. Results suggest that Danish health promotion strategies need to consider the contribution of the built environment to unhealthy eating. Copyright © 2017 Elsevier Ltd. All rights reserved.
Trong Bui, Duong; Nguyen, Nhan Duc; Jeong, Gu-Min
2018-06-25
Human activity recognition and pedestrian dead reckoning are interesting fields because of their important applications in daily-life healthcare. Currently, these fields face many challenges, one of which is the lack of a robust, high-performance algorithm. This paper proposes a new method to implement a robust step detection and adaptive distance estimation algorithm based on the classification of five daily wrist activities during walking at various speeds using a smart band. The key idea is that the non-parametric adaptive distance estimator is performed after two activity classifiers and a robust step detector. In this study, two classifiers perform two phases of recognizing five wrist activities during walking. Then, a robust step detection algorithm, which integrates an adaptive threshold with a peak and valley correction algorithm, is applied to the classified activities to detect the walking steps. In addition, the misclassified activities are fed back to the previous layer. Finally, three adaptive distance estimators, which are based on a non-parametric model of the average walking speed, calculate the length of each stride. The experimental results show that the average classification accuracy is about 99%, and the accuracy of the step detection is 98.7%. The error of the estimated distance is 2.2-4.2%, depending on the type of wrist activity.
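The step-detection stage, an adaptive threshold plus peak picking, can be sketched with standard signal tools; the threshold factor and refractory period below are illustrative assumptions, not the paper's tuned values:

```python
import numpy as np
from scipy.signal import find_peaks

def detect_steps(acc_mag, fs):
    # acc_mag: accelerometer-magnitude signal from the wrist band (1D array)
    # fs: sampling rate in Hz; returns the indices of detected steps
    sig = acc_mag - np.mean(acc_mag)
    height = 0.5 * np.std(sig)          # adaptive threshold (assumption)
    min_gap = int(0.3 * fs)             # refractory period between steps
    peaks, _ = find_peaks(sig, height=height, distance=min_gap)
    return peaks
```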
Location Distribution Optimization of Photographing Sites for Indoor Panorama Modeling
NASA Astrophysics Data System (ADS)
Zhang, S.; Wu, J.; Zhang, Y.; Zhang, X.; Xin, Z.; Liu, J.
2017-09-01
Generally, panoramic image modeling is costly and time-consuming because photos must be captured continuously along the routes, especially in complicated indoor environments. This difficulty hinders wider business application of panoramic image modeling. It is indispensable to make a feasible arrangement of panorama-site locations, because the locations influence the clarity, coverage, and number of panoramic images obtainable with a given device. This paper aims to propose a standard procedure to generate the specific locations and total number of panorama sites in indoor panorama modeling. Firstly, we establish the functional relationship between one panorama site and its objectives, and then apply this relationship to the panorama-site network. We propose the distance-clarity functions (FC and Fe), describing the mathematical relationship between panorama clarity and objective distance or obstacle distance. The distance-buffer function (FB) is modified from the traditional buffer method to generate the coverage of a panorama site. Secondly, we traverse every point in the feasible area as a possible panorama site and calculate its clarity and coverage synthetically. Finally, we select as few points as possible, satisfying the clarity requirement first and then the coverage requirement. In the experiments, detailed parameters of the camera lens are given. Still, more experimental parameters need to be tried out, given that the relationship between clarity and distance is device-dependent. In short, through the functions FC, Fe, and FB, locations of panorama sites can be generated automatically and accurately.
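The final selection step, fewest sites meeting clarity first and then coverage, is essentially a greedy set cover over the FB coverage sets. A sketch, assuming clarity-infeasible candidates have already been filtered out using FC and Fe:

```python
def select_sites(coverage, targets):
    # coverage: {site: set of targets covered, from the buffer function FB},
    # restricted beforehand to sites meeting the clarity requirement
    chosen, uncovered = [], set(targets)
    while uncovered:
        best = max(coverage, key=lambda s: len(coverage[s] & uncovered))
        gain = coverage[best] & uncovered
        if not gain:
            break                      # remaining targets cannot be covered
        chosen.append(best)
        uncovered -= gain
    return chosen
```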
NASA Astrophysics Data System (ADS)
Hao, J.; Zhang, J. H.; Yao, Z. X.
2017-12-01
We developed a method to restore clipped seismic waveforms near the epicenter using the projection onto convex sets (POCS) method (Zhang et al., 2016). This method was applied to rescue the locally clipped waveforms of the 2013 Mw 6.6 Lushan earthquake. We restored 88 out of 93 clipped waveforms from 38 broadband seismic stations of the China Earthquake Networks (CEN). The epicentral distance of the nearest station whose records we can faithfully restore is only about 32 km. To investigate whether the source parameters of the earthquake can be determined accurately from the restored data, the restored waveforms are used to obtain the mechanism of the Lushan earthquake. We apply the generalized reflection-transmission coefficient matrix method to calculate the synthetic seismic records and the simulated annealing method in the inversion (Yao and Harkrider, 1983; Hao et al., 2012). We select 5 stations of the CEN with epicentral distances of about 200 km whose records are not clipped, and use their three-component velocity records. The result shows that the strike, dip, and rake angles of the Lushan earthquake are 200°, 51°, and 87°, respectively, hereinafter the "standard result". Then the clipped and restored seismic waveforms are applied respectively. The strike, dip, and rake angles from the clipped seismic waveforms are 184°, 53°, and 72°, respectively; the largest angle misfit is 16°. In contrast, the strike, dip, and rake angles from the restored seismic waveforms are 198°, 51°, and 87°, respectively, very close to the "standard result". We also study the rupture history of the Lushan earthquake constrained by the restored local broadband and teleseismic waves based on a finite fault method (Hao et al., 2013). The result is consistent with that constrained by the strong motion and teleseismic waves (Hao et al., 2013), especially the location of the patch with larger slip. In real-time seismology, determining the source parameters as soon as possible is important, and this method will help us determine the mechanisms of earthquakes using locally clipped waveforms. Strong-motion stations in China do not have good coverage at present, so this method will also help us investigate the rupture history of large earthquakes in China using the locally clipped data of broadband stations.
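A generic POCS declipping loop conveys the idea, alternating a sparsity projection with a clipping-consistency projection. The sparsity fraction and iteration count are assumptions, and the convex sets used by Zhang et al. (2016) may differ from this sketch.

```python
import numpy as np

def pocs_declip(clipped, clip_level, keep_frac=0.1, n_iter=200):
    # alternate projections: (1) keep only the largest Fourier coefficients
    # (sparsity set), (2) restore measured samples and force clipped samples
    # beyond the clip level (consistency set)
    x = clipped.astype(float).copy()
    reliable = np.abs(clipped) < clip_level     # unclipped samples
    n_keep = max(1, int(keep_frac * (len(x) // 2 + 1)))
    for _ in range(n_iter):
        X = np.fft.rfft(x)
        X[np.argsort(np.abs(X))[:-n_keep]] = 0.0    # sparsity projection
        x = np.fft.irfft(X, n=len(x))
        x[reliable] = clipped[reliable]             # data consistency
        low = ~reliable & (np.abs(x) < clip_level)
        x[low] = np.sign(clipped[low]) * clip_level # clipping consistency
    return x
```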
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fiedler, D; Kuo, H; Bodner, W
2016-06-15
Purpose: To introduce a non-standard method of patient setup, using BellyBoard immobilization, to better utilize the localization and tracking potential of an RF-beacon system with EBRT for prostate cancer. Methods: An RF-beacon phantom was imaged using a wide bore CT scanner, both in a standard level position and with a known rotation (4° pitch and 7.5° yaw). A commercial treatment planning system (TPS) was used to determine positional coordinates of each beacon, and the centroid of the three beacons for both setups. For each setup at the Linac, kV AP and Rt Lateral images were obtained. A full characterization of the RF-beacon system in clinical mode was completed for various beacons' array-to-centroid distances, which includes vertical, lateral, and longitudinal offset data, as well as pitch and yaw offset measurements for the tilted phantom. For the single patient who was set up using the proposed BellyBoard method, a supine simulation was first obtained. When abdominal protrusion was found to exceed the limits of the RF-beacon system through distance-based analysis in the TPS, the patient was re-simulated prone with the BellyBoard. Array-to-centroid distance is measured again in the TPS, and if found to be within the localization or tracking region it is applied. Results: Characterization of limitations for the RF-beacon system in clinical mode showed acceptable consistency of offset determination for phantom setup accuracy. The nonstandard patient setup method reduced the beacons' centroid-to-array distance by 8.32 cm, from 25.13 cm to 16.81 cm; completely out of tracking range (greater than 20 cm) to within setup tracking range (less than 20 cm). Conclusion: Using the RF-beacon system in combination with this novel patient setup can allow patients who would otherwise not be candidates for beacon-enhanced EBRT to benefit from the reduced PTV margins of this treatment method.
Real-time inextensible surgical thread simulation.
Xu, Lang; Liu, Qian
2018-03-27
This paper discusses a real-time simulation method for inextensible surgical thread based on the Cosserat rod theory using position-based dynamics (PBD). The method realizes stable twining and knotting of surgical thread while including inextensibility, bending, twisting and coupling effects. The Cosserat rod theory is used to model the nonlinear elastic behavior of surgical thread. The surgical thread model is solved with PBD to achieve a real-time, extremely stable simulation. Due to the one-dimensional linear structure of surgical thread, a direct solution of the distance constraint based on the tridiagonal matrix algorithm is used to enhance stretching resistance in every constraint projection iteration. In addition, continuous collision detection and collision response guarantee a large time step and high performance. Furthermore, friction is integrated into the constraint projection process to stabilize the twining of multiple threads and complex contact situations. Comparisons with existing methods show that the surgical thread maintains constant length under large deformation after applying the direct distance constraint in our method. The twining and knotting of multiple threads yield stable solutions for the contact and friction forces. A surgical suture scene is also modeled to demonstrate the practicality and simplicity of our method. Our method achieves stable and fast simulation of inextensible surgical thread. Benefiting from the unified particle framework, rigid bodies, elastic rods, and soft bodies can be simulated simultaneously. The method is appropriate for applications in virtual surgery that require multiple dynamic bodies.
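For contrast with the direct tridiagonal solve used here, the standard iterative PBD projection of a single distance constraint (after Müller et al.) is a few lines; the paper replaces repeated sweeps of this projection with a direct solution along the thread:

```python
import numpy as np

def project_distance(p1, p2, w1, w2, rest_len, stiffness=1.0):
    # position-based dynamics projection of one distance constraint;
    # w1, w2 are inverse masses of the two particles
    d = p2 - p1
    dist = np.linalg.norm(d)
    if dist < 1e-12 or w1 + w2 == 0.0:
        return p1, p2
    corr = stiffness * (dist - rest_len) / (dist * (w1 + w2)) * d
    return p1 + w1 * corr, p2 - w2 * corr
```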
Application of meta-analysis methods for identifying proteomic expression level differences.
Amess, Bob; Kluge, Wolfgang; Schwarz, Emanuel; Haenisch, Frieder; Alsaif, Murtada; Yolken, Robert H; Leweke, F Markus; Guest, Paul C; Bahn, Sabine
2013-07-01
We present new statistical approaches for identifying proteins whose expression levels are significantly changed when applying meta-analysis to two or more independent experiments. We showed that the Euclidean distance measure has a reduced risk of false positives compared to the rank product method. Our Ψ-ranking method has advantages over the traditional fold-change approach by incorporating both the fold-change direction and the p-value. In addition, the second novel method, Π-ranking, considers the ratio of the fold-change and thus integrates all three parameters. We further improved the latter by introducing our third technique, Σ-ranking, which combines all three parameters in a balanced nonparametric approach. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Gillet, Natacha; Berstis, Laura; Wu, Xiaojing; Gajdos, Fruzsina; Heck, Alexander; de la Lande, Aurélien; Blumberger, Jochen; Elstner, Marcus
2016-10-11
In this article, four methods to calculate charge transfer integrals in the context of bridge-mediated electron transfer are tested. These methods are based on density functional theory (DFT). We consider two perturbative Green's function effective Hamiltonian methods (first, at the DFT level of theory, using localized molecular orbitals; second, applying a tight-binding DFT approach, using fragment orbitals) and two constrained DFT implementations with either plane-wave or local basis sets. To assess the performance of the methods for through-bond (TB)-dominated or through-space (TS)-dominated transfer, different sets of molecules are considered. For through-bond electron transfer (ET), several molecules that were originally synthesized by Paddon-Row and co-workers for the deduction of electronic coupling values from photoemission and electron transmission spectroscopies, are analyzed. The tested methodologies prove to be successful in reproducing experimental data, the exponential distance decay constant and the superbridge effects arising from interference among ET pathways. For through-space ET, dedicated π-stacked systems with heterocyclopentadiene molecules were created and analyzed on the basis of electronic coupling dependence on donor-acceptor distance, structure of the bridge, and ET barrier height. The inexpensive fragment-orbital density functional tight binding (FODFTB) method gives similar results to constrained density functional theory (CDFT) and both reproduce the expected exponential decay of the coupling with donor-acceptor distances and the number of bridging units. These four approaches appear to give reliable results for both TB and TS ET and present a good alternative to expensive ab initio methodologies for large systems involving long-range charge transfers.
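One quantity checked above, the exponential distance-decay constant, is commonly extracted by fitting ln|H_DA| against donor-acceptor distance, assuming |H_DA| = A exp(-beta d / 2). A hedged sketch with invented numbers:

```python
import numpy as np

# Illustrative only: estimate the exponential distance-decay constant beta
# from coupling magnitudes computed at several donor-acceptor distances,
# assuming |H_DA| = A * exp(-beta * d / 2). All values below are made up.
d = np.array([4.0, 6.0, 8.0, 10.0])      # donor-acceptor distance (Angstrom)
H = np.array([120.0, 40.0, 13.0, 4.5])   # coupling magnitude (meV), hypothetical

slope, intercept = np.polyfit(d, np.log(H), 1)  # linear fit of ln|H| vs d
beta = -2.0 * slope                             # since ln H = ln A - beta*d/2
print(f"decay constant beta = {beta:.2f} per Angstrom")
```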
Multispectral plasmon coupling microscopy and its application in bio-imaging
NASA Astrophysics Data System (ADS)
Wang, Hongyun
A broad range of cellular activities, including receptor-mediated endocytosis, signaling and receptor clustering, involve multi-body interactions between different cellular functionalities. Many of these interactions are dynamic in nature, making optical tools the method of choice for their investigation. Conventional optical microscopy has a resolution of about 300 nm, limited by the diffraction of light, which is insufficient to explore processes that occur on nanometer or tens-of-nanometer length scales. The aim of this thesis is to develop and validate plasmon coupling microscopy (PCM), which utilizes the distance-dependent spectral properties of coupled noble metal nanoparticles (NPs) to resolve distance changes between NP labels on deeply sub-diffraction length scales. This colorimetric approach is augmented with a polarization-sensitive analysis of the light scattered by individual dimers to monitor distance and orientation changes simultaneously. The distance-dependent polarization anisotropy in discrete dimers is investigated experimentally and theoretically. The analysis reveals that the polarization anisotropy is robust even against relatively large refractive index changes. The polarization-sensitive PCM is then applied to characterize the lateral spatial organization of mammalian plasma membranes by analyzing the translational and rotational motion, as well as the extension, of discrete NP dimers during their diffusion on lysed HeLa cell membranes. The membrane is found to be compartmentalized, with typical domain sizes on the order of 70 nm. The functionality of plasmon-coupling-based imaging is expanded further by developing a multispectral imaging modality for quantitative analysis of the plasmon coupling between many noble metal immunolabels in a large field of view simultaneously. This approach provides information about the spatial organization of silver nanoparticle labels and thus of targeted EGF receptor densities on the surface of epidermoid carcinoma cells (A431). Finally, multispectral plasmon coupling microscopy is applied to investigate the uptake and subsequent intracellular spatial distribution of silver nanoparticles in murine macrophage cells (J774A.1). The studies reveal that NP uptake is mediated by scavenger receptors and that intracellular NP association and distribution are heterogeneous among cells in a cellular ensemble. This heterogeneity is shown to be correlated with the maturation status of the macrophages.
Geodesic least squares regression on information manifolds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Verdoolaege, Geert, E-mail: geert.verdoolaege@ugent.be
We present a novel regression method targeted at situations with significant uncertainty on both the dependent and independent variables or with non-Gaussian distribution models. Unlike the classic regression model, the conditional distribution of the response variable suggested by the data need not be the same as the modeled distribution. Instead, they are matched by minimizing the Rao geodesic distance between them. This yields a more flexible regression method that is less constrained by the assumptions imposed through the regression model. As an example, we demonstrate the improved resistance of our method against some flawed model assumptions and we apply this to scaling laws in magnetic confinement fusion.
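For intuition, the Rao geodesic distance has a closed form on the univariate Gaussian manifold, which is hyperbolic; a sketch is below. This shows only the distance computation: the regression itself would minimize such distances between the modeled and observed response distributions, and the formula assumes the univariate normal family.

```python
import numpy as np

def rao_distance_normal(mu1, s1, mu2, s2):
    """Fisher-Rao geodesic distance between N(mu1, s1^2) and N(mu2, s2^2),
    using the hyperbolic (Poincare half-plane) form of the Gaussian manifold."""
    num = (mu1 - mu2) ** 2 / 2.0 + (s1 - s2) ** 2
    return np.sqrt(2.0) * np.arccosh(1.0 + num / (2.0 * s1 * s2))

# Identical distributions give 0; the distance grows with both the mean gap
# and the standard-deviation gap.
print(rao_distance_normal(0.0, 1.0, 1.0, 1.5))
```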
Spatial weighting approach in numerical method for disaggregation of MDGs indicators
NASA Astrophysics Data System (ADS)
Permai, S. D.; Mukhaiyar, U.; Satyaning PP, N. L. P.; Soleh, M.; Aini, Q.
2018-03-01
Disaggregation is used to separate and classify data based on certain characteristics or by administrative level. Disaggregated data are very important because some indicators are not measured for all characteristics. Detailed disaggregation of development indicators is important to ensure that everyone benefits from development and to support better development-related policymaking. This paper explores different methods to disaggregate the national employment-to-population ratio indicator to the province and city level. A numerical approach is applied to overcome the unavailability of disaggregated data by constructing several spatial weight matrices based on neighbourhood, Euclidean distance and correlation. These methods can potentially be used and further developed to disaggregate development indicators to lower spatial levels, even by several demographic characteristics.
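As an illustration of one of the weighting schemes mentioned, the sketch below builds a row-standardized inverse-Euclidean-distance weight matrix from region centroids. The coordinates are placeholders; neighbourhood- or correlation-based matrices would be built analogously.

```python
import numpy as np

# Illustrative sketch: a row-standardized spatial weight matrix from
# Euclidean distances between region centroids (coordinates are made up).
coords = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 2.0], [0.5, 1.5]])

dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
W = np.where(dist > 0, 1.0 / dist, 0.0)   # inverse-distance weights, zero diagonal
W = W / W.sum(axis=1, keepdims=True)      # row-standardize so each row sums to 1
```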
Two nonlinear control schemes contrasted on a hydrodynamiclike model
NASA Technical Reports Server (NTRS)
Keefe, Laurence R.
1993-01-01
The principles of two flow control strategies, those of Huebler (Luescher and Huebler, 1989) and of Ott et al. (1990) are discussed, and the two schemes are compared for their ability to control shear flow, using fully developed and transitional solutions of the Ginzburg-Landau equation as models for such flows. It was found that the effectiveness of both methods in obtaining control of fully developed flows depended strongly on the 'distance' in state space between the uncontrolled flow and goal dynamics. There were conceptual difficulties in applying the Ott et al. method to transitional convectively unstable flows. On the other hand, the Huebler method worked well, within certain limitations, although at a large cost in energy terms.
NASA Astrophysics Data System (ADS)
Sudibyo, Hermida, L.; Suwardi
2017-11-01
Tapioca wastewater is very difficult to treat; hence, many tapioca factories cannot treat it well. One method able to overcome this problem is electrocoagulation. This process performs well when conducted as a batch recycle process with an aluminum bipolar electrode. However, the operating conditions have a significant effect on tapioca wastewater treatment in a batch recycle process. In this research, the Taguchi method was successfully applied to determine the optimum conditions and the interactions between parameters in the electrocoagulation process. The results show that current density, conductivity, electrode distance, and pH have a significant effect on the turbidity removal of cassava starch wastewater.
Transurethral light delivery for prostate photoacoustic imaging
NASA Astrophysics Data System (ADS)
Lediju Bell, Muyinatu A.; Guo, Xiaoyu; Song, Danny Y.; Boctor, Emad M.
2015-03-01
Photoacoustic imaging has broad clinical potential to enhance prostate cancer detection and treatment, yet it is challenged by the lack of minimally invasive, deeply penetrating light delivery methods that provide sufficient visualization of targets (e.g., tumors, contrast agents, brachytherapy seeds). We constructed a side-firing fiber prototype for transurethral photoacoustic imaging of prostates with a dual-array (linear and curvilinear) transrectal ultrasound probe. A method to calculate the surface area and, thereby, estimate the laser fluence at this fiber tip was derived, validated, applied to various design parameters, and used as an input to three-dimensional Monte Carlo simulations. Brachytherapy seeds implanted in phantom, ex vivo, and in vivo canine prostates at radial distances of 5 to 30 mm from the urethra were imaged with the fiber prototype transmitting 1064 nm wavelength light with 2 to 8 mJ pulse energy. Prebeamformed images were displayed in real time at a rate of 3 to 5 frames per second to guide fiber placement and beamformed offline. A conventional delay-and-sum beamformer provided decreasing seed contrast (23 to 9 dB) with increasing urethra-to-target distance, while the short-lag spatial coherence beamformer provided improved and relatively constant seed contrast (28 to 32 dB) regardless of distance, thus improving multitarget visualization in single and combined curvilinear images acquired with the fiber rotating and the probe fixed. The proposed light delivery and beamforming methods promise to improve key prostate cancer detection and treatment strategies.
Tufto, Jarle; Lande, Russell; Ringsby, Thor-Harald; Engen, Steinar; Saether, Bernt-Erik; Walla, Thomas R; DeVries, Philip J
2012-07-01
1. We develop a Bayesian method for analysing mark-recapture data in continuous habitat using a model in which individuals movement paths are Brownian motions, life spans are exponentially distributed and capture events occur at given instants in time if individuals are within a certain attractive distance of the traps. 2. The joint posterior distribution of the dispersal rate, longevity, trap attraction distances and a number of latent variables representing the unobserved movement paths and time of death of all individuals is computed using Gibbs sampling. 3. An estimate of absolute local population density is obtained simply by dividing the Poisson counts of individuals captured at given points in time by the estimated total attraction area of all traps. Our approach for estimating population density in continuous habitat avoids the need to define an arbitrary effective trapping area that characterized previous mark-recapture methods in continuous habitat. 4. We applied our method to estimate spatial demography parameters in nine species of neotropical butterflies. Path analysis of interspecific variation in demographic parameters and mean wing length revealed a simple network of strong causation. Larger wing length increases dispersal rate, which in turn increases trap attraction distance. However, higher dispersal rate also decreases longevity, thus explaining the surprising observation of a negative correlation between wing length and longevity. © 2012 The Authors. Journal of Animal Ecology © 2012 British Ecological Society.
ERIC Educational Resources Information Center
Wakita, Takafumi; Ueshima, Natsumi; Noguchi, Hiroyuki
2012-01-01
This study examined whether the number of options in the Likert scale influences the psychological distance between categories. The most important assumption when using the Likert scale is that the psychological distance between options is equal. The authors proposed a new algorithm for calculating the scale values of options by applying item…
ERIC Educational Resources Information Center
Koszalka, Tiffany A.; Ganesan, Radha
2004-01-01
Course developers can be distracted from applying sound instructional design principles by the amount of flexibility offered through online course development resources (Kidney & Puckett, "Quarterly Review of Distance Education," 4 (2003), 203-212). Distance education course management systems (CMS) provide multiple features that can be easily…
Application of velocity filtering to optical-flow passive ranging
NASA Technical Reports Server (NTRS)
Barniv, Yair
1992-01-01
The performance of the velocity filtering method as applied to optical-flow passive ranging under real-world conditions is evaluated. The theory of the 3-D Fourier transform as applied to constant-speed moving points is reviewed, and the space-domain shift-and-add algorithm is derived from the general 3-D matched filtering formulation. The constant-speed algorithm is then modified to fit the actual speed encountered in the optical flow application, and the passband of that filter is found in terms of depth (sensor/object distance) so as to cover any given range of depths. Two algorithmic solutions for the problems associated with pixel interpolation and object expansion are developed, and experimental results are presented.
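The shift-and-add step can be pictured as a matched filter for a point moving at a constant image velocity: each frame is shifted back along the hypothesized velocity and the stack is summed, so a target matching that velocity adds coherently while clutter averages out. A minimal sketch under that constant-velocity assumption, with integer-pixel shifts only; the paper's algorithm additionally handles the depth-dependent speeds, pixel interpolation, and object expansion discussed above.

```python
import numpy as np

def shift_and_add(frames, vx, vy):
    """Average a stack of frames after undoing a hypothesized constant
    image velocity (vx, vy) in pixels/frame; integer shifts only."""
    acc = np.zeros_like(frames[0], dtype=float)
    for k, frame in enumerate(frames):
        # Shift frame k back by k*(vx, vy) so a matching mover stays put.
        acc += np.roll(frame, shift=(-round(k * vy), -round(k * vx)), axis=(0, 1))
    return acc / len(frames)
```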
Slope monitoring by using 2-D resistivity method at Sungai Batu, Pulau Pinang, Malaysia
NASA Astrophysics Data System (ADS)
Azman, Muhamad Iqbal Mubarak Faharul; Yusof, Azim Hilmy Mohd; Ismail, Nur Azwin; Ismail, Noer El Hidayah
2017-07-01
A slope is a dynamic geo-environmental system related to the movement of soil and rock masses. In Pulau Pinang, slope-related phenomena such as landslides and rock falls have become a serious issue, especially during the rainy season, forcing the government to invest more in public safety. The 2-D resistivity method is one of the geophysical methods that can be applied to this issue and to the preparation of countermeasures. Monitoring is a common acquisition technique for such problems: repeated surveys identify and track changes in the suspected area so that countermeasures can be taken in an informed way. From August until November 2016, a 200 m survey line of 2-D resistivity data was acquired monthly at a slope in Sungai Batu, Pulau Pinang, for monitoring purposes. Three resistivity ranges were detected within the subsurface. Resistivity values of 250 - 400 Ωm indicated the low-resistivity range, interpreted as a weak zone located at a distance of 90 - 120 m and a depth of 10 m. Intermediate resistivity values of 400 - 1500 Ωm were interpreted as a weathered granite zone found along almost the entire survey line. High resistivity values of > 5000 Ωm were interpreted as granitic bedrock located at depths of > 20 m. Aside from the weathered granite zone and the weak zone, a fracture was found to develop over time at a distance of 130 - 140 m. These features are potential causes of slope failure. In conclusion, monitoring slopes using the 2-D resistivity method is successful and indeed helpful in mitigating landslide and rock fall issues as a pre-countermeasure action.
Gene flow analysis method, the D-statistic, is robust in a wide parameter space.
Zheng, Yichen; Janke, Axel
2018-01-08
We evaluated the sensitivity of the D-statistic, a parsimony-like method widely used to detect gene flow between closely related species. This method has been applied to a variety of taxa with a wide range of divergence times. However, its parameter space and thus its applicability to a wide taxonomic range has not been systematically studied. Divergence time, population size, time of gene flow, distance of outgroup and number of loci were examined in a sensitivity analysis. The sensitivity study shows that the primary determinant of the D-statistic is the relative population size, i.e. the population size scaled by the number of generations since divergence. This is consistent with the fact that the main confounding factor in gene flow detection is incomplete lineage sorting by diluting the signal. The sensitivity of the D-statistic is also affected by the direction of gene flow, size and number of loci. In addition, we examined the ability of the f-statistics, [Formula: see text] and [Formula: see text], to estimate the fraction of a genome affected by gene flow; while these statistics are difficult to implement to practical questions in biology due to lack of knowledge of when the gene flow happened, they can be used to compare datasets with identical or similar demographic background. The D-statistic, as a method to detect gene flow, is robust against a wide range of genetic distances (divergence times) but it is sensitive to population size. The D-statistic should only be applied with critical reservation to taxa where population sizes are large relative to branch lengths in generations.
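For reference, the D-statistic itself reduces to a simple site-pattern count. Assuming the usual four-taxon setup (((P1, P2), P3), O), a minimal computation from ABBA/BABA counts is shown below; the block-jackknife significance test used in practice is omitted.

```python
def d_statistic(n_abba, n_baba):
    """Patterson's D from ABBA and BABA site-pattern counts for
    a four-taxon tree (((P1, P2), P3), O)."""
    return (n_abba - n_baba) / (n_abba + n_baba)

# Under incomplete lineage sorting alone, ABBA and BABA counts are expected
# to be equal (D ~ 0); an excess of ABBA suggests gene flow between P2 and P3.
print(d_statistic(n_abba=1200, n_baba=900))  # 0.142...
```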
NASA Astrophysics Data System (ADS)
Arimbi, Mentari Dian; Bustamam, Alhadi; Lestari, Dian
2017-03-01
Data clustering can be executed through partition or hierarchical methods for many types of data, including DNA sequences. Both clustering methods can be combined by processing a partition algorithm in the first level and a hierarchical one in the second level, called hybrid clustering. In the partition phase, popular methods such as PAM, K-means, or Fuzzy c-means can be applied. In this study we selected partitioning around medoids (PAM) for our partition stage. Following the partition algorithm, in the hierarchical stage we applied the divisive analysis algorithm (DIANA) in order to obtain more specific cluster and sub-cluster structures. The number of main clusters is determined using the Davies-Bouldin Index (DBI); we choose the number of clusters that minimizes the DBI value. In this work, we cluster 1252 HPV DNA sequences from GenBank. Characteristic extraction is performed first, followed by normalization and genetic distance calculation using the Euclidean distance. Our implementation of the hybrid PAM and DIANA uses the R open source programming tool. We obtained 3 main clusters with an average DBI value of 0.979 using PAM in the first stage. After executing DIANA in the second stage, we obtained 4 sub-clusters for Cluster-1, 9 sub-clusters for Cluster-2 and 2 sub-clusters for Cluster-3, with DBI values of 0.972, 0.771, and 0.768 for each main cluster, respectively. Since the second stage produces lower DBI values than the first stage, we conclude that this hybrid approach can improve the accuracy of our clustering results.
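A minimal sketch of the cluster-number selection step follows: compute the Davies-Bouldin Index over a range of cluster counts and keep the minimizer. KMeans stands in for PAM here because scikit-learn has no PAM implementation, and the feature matrix is a random placeholder; the study itself used PAM and DIANA in R.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))  # placeholder feature matrix

scores = {}
for k in range(2, 10):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = davies_bouldin_score(X, labels)

best_k = min(scores, key=scores.get)  # lowest DBI wins
```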
Alves, E O S; Cerqueira-Silva, C B M; Souza, A M; Santos, C A F; Lima Neto, F P; Corrêa, R X
2012-03-14
We investigated seven distance measures in a set of observations of physicochemical variables of mango (Mangifera indica) submitted to multivariate analyses (distance, projection and grouping). To estimate the distance measurements, five mango progeny (total of 25 genotypes) were analyzed, using six fruit physicochemical descriptors (fruit weight, equatorial diameter, longitudinal diameter, total soluble solids in °Brix, total titratable acidity, and pH). The distance measurements were compared by the Spearman correlation test, projection in two-dimensional space and grouping efficiency. The Spearman correlation coefficients between the seven distance measurements were, except for the Mahalanobis' generalized distance (0.41 ≤ rs ≤ 0.63), high and significant (rs ≥ 0.91; P < 0.001). Regardless of the origin of the distance matrix, the unweighted pair group method with arithmetic mean grouping method proved to be the most adequate. The various distance measurements and grouping methods gave different values for distortion (-116.5 ≤ D ≤ 74.5), cophenetic correlation (0.26 ≤ rc ≤ 0.76) and stress (-1.9 ≤ S ≤ 58.9). Choice of distance measurement and analysis methods influence the.
Etchemendy, Pablo E; Spiousas, Ignacio; Calcagno, Esteban R; Abregú, Ezequiel; Eguia, Manuel C; Vergara, Ramiro O
2018-06-01
In this study we evaluated whether a method of direct location is an appropriate response method for measuring auditory distance perception of far-field sound sources. We designed an experimental set-up that allows participants to indicate the distance at which they perceive the sound source by moving a visual marker. We termed this method Cross-Modal Direct Location (CMDL) since the response procedure involves the visual modality while the stimulus is presented through the auditory modality. Three experiments were conducted with sound sources located from 1 to 6 m. The first one compared the perceived distances obtained using either the CMDL device or verbal report (VR), which is the response method more frequently used for reporting auditory distance in the far field, and found differences on response compression and bias. In Experiment 2, participants reported visual distance estimates to the visual marker that were found highly accurate. Then, we asked the same group of participants to report VR estimates of auditory distance and found that the spatial visual information, obtained from the previous task, did not influence their reports. Finally, Experiment 3 compared the same responses that Experiment 1 but interleaving the methods, showing a weak, but complex, mutual influence. However, the estimates obtained with each method remained statistically different. Our results show that the auditory distance psychophysical functions obtained with the CMDL method are less susceptible to previously reported underestimation for distances over 2 m.
Least-squares Minimization Approaches to Interpret Total Magnetic Anomalies Due to Spheres
NASA Astrophysics Data System (ADS)
Abdelrahman, E. M.; El-Araby, T. M.; Soliman, K. S.; Essa, K. S.; Abo-Ezz, E. R.
2007-05-01
We have developed three different least-squares approaches to determine successively: the depth, magnetic angle, and amplitude coefficient of a buried sphere from a total magnetic anomaly. By defining the anomaly value at the origin and the nearest zero-anomaly distance from the origin on the profile, the problem of depth determination is transformed into the problem of finding a solution of a nonlinear equation of the form f(z)=0. Knowing the depth and applying the least-squares method, the magnetic angle and amplitude coefficient are determined using two simple linear equations. In this way, the depth, magnetic angle, and amplitude coefficient are determined individually from all observed total magnetic data. The method is applied to synthetic examples with and without random errors and tested on a field example from Senegal, West Africa. In all cases, the depth solutions are in good agreement with the actual ones.
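The depth step thus reduces to scalar root finding on f(z) = 0, which any bracketing solver handles. A sketch with a made-up f is below; the actual f is built from the anomaly value at the origin and the nearest zero-anomaly distance and is not reproduced here.

```python
from scipy.optimize import brentq

# Placeholder f(z): in the paper this nonlinear function is constructed from
# the anomaly at the origin and the nearest zero-anomaly distance; here we use
# an invented function with a single root purely to show the solve step.
def f(z):
    return z**3 - 2.0 * z - 5.0

depth = brentq(f, a=1.0, b=3.0)  # bracketing interval assumed to contain the root
print(depth)                     # ~2.094 for this placeholder f
```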
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loef, P.A.; Smed, T.; Andersson, G.
The minimum singular value of the power flow Jacobian matrix has been used as a static voltage stability index, indicating the distance between the studied operating point and the steady-state voltage stability limit. In this paper a fast method to calculate the minimum singular value and the corresponding (left and right) singular vectors is presented. The main advantages of the developed algorithm are the small amount of computation time needed and that it only requires information available from an ordinary power flow program. Furthermore, the proposed method fully utilizes the sparsity of the power flow Jacobian matrix, and hence the memory requirements for the computation are low. These advantages are preserved when the method is applied to various submatrices of the Jacobian matrix, which can be useful in constructing special voltage stability indices. The developed algorithm was applied to small test systems as well as to a large (real size) system with over 1000 nodes, with satisfactory results.
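The index itself is easy to state: it is the smallest singular value of the Jacobian, with the associated singular vectors pointing at the critical buses. The sketch below uses a dense SVD on a random placeholder matrix purely for illustration; the paper's contribution is a fast sparse algorithm that avoids exactly this full decomposition.

```python
import numpy as np

# Sketch: the smallest singular value of the power-flow Jacobian as a static
# voltage stability index. A dense random matrix stands in for the Jacobian.
J = np.random.default_rng(1).normal(size=(50, 50))  # placeholder Jacobian

U, s, Vt = np.linalg.svd(J)                 # singular values sorted descending
sigma_min = s[-1]                           # distance indicator to the limit
u_min, v_min = U[:, -1], Vt[-1, :]          # corresponding singular vectors
```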
NASA Astrophysics Data System (ADS)
Chen, Wei; Pourghasemi, Hamid Reza; Panahi, Mahdi; Kornejady, Aiding; Wang, Jiale; Xie, Xiaoshen; Cao, Shubo
2017-11-01
The spatial prediction of landslide susceptibility is an important prerequisite for the analysis of landslide hazards and risks in any area. This research uses three data mining techniques, namely an adaptive neuro-fuzzy inference system combined with frequency ratio (ANFIS-FR), a generalized additive model (GAM), and a support vector machine (SVM), for landslide susceptibility mapping in Hanyuan County, China. In the first step, in accordance with a review of the previous literature, twelve conditioning factors, including slope aspect, altitude, slope angle, topographic wetness index (TWI), plan curvature, profile curvature, distance to rivers, distance to faults, distance to roads, land use, normalized difference vegetation index (NDVI), and lithology, were selected. In the second step, a collinearity test and a correlation analysis between the conditioning factors and landslides were applied. In the third step, the three methods, ANFIS-FR, GAM, and SVM, were used for landslide susceptibility modeling. Subsequently, their accuracy was validated using a receiver operating characteristic curve. The results showed that all three models have good prediction capabilities, with the SVM model achieving the highest prediction rate of 0.875, followed by the ANFIS-FR and GAM models with prediction rates of 0.851 and 0.846, respectively. Thus, the landslide susceptibility maps produced for the study area can be applied for the management of hazards and risks in landslide-prone Hanyuan County.
NASA Astrophysics Data System (ADS)
Docobo, J. A.; Tamazian, V. S.; Campo, P. P.
2018-05-01
In the vast majority of cases when the available astrometric measurements of a visual binary cover a very short orbital arc, it is practically impossible to calculate a good quality orbit. This is especially important for systems with pre-main-sequence components, where standard mass-spectrum calibrations cannot be applied nor can a dynamical parallax be calculated. We have shown that the analytical method of Docobo allows us to put certain constraints on the most likely orbital solutions, using an available realistic estimate of the global mass of the system. As an example, we studied the interesting PMS binary FW Tau AB, located in the Taurus-Auriga region, and investigated a range of its possible orbital solutions combined with an assumed distance between 120 and 160 pc. To maintain the total mass of FW Tau AB in a realistic range between 0.2 and 0.6 M_{⊙}, minimal orbital periods should begin at 105, 150, 335, and 2300 yr for distances of 120, 130, 140, and 150 pc, respectively (no plausible orbits were found assuming a distance of 160 pc). An original criterion to establish the upper limit of the orbital period is applied. When the position angle in some astrometric measurements is flipped by 180°, orbits with periods close to 45 yr are also plausible. Three example orbits with periods of 44.6, 180, and 310 yr are presented.
A BAYESIAN APPROACH TO LOCATING THE RED GIANT BRANCH TIP MAGNITUDE. I
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conn, A. R.; Parker, Q. A.; Zucker, D. B.
2011-10-20
We present a new approach for identifying the tip of the red giant branch (TRGB) which, as we show, works robustly even on sparsely populated targets. Moreover, the approach is highly adaptable to the available data for the stellar population under study, with prior information readily incorporable into the algorithm. The uncertainty in the derived distances is also made tangible and easily calculable from posterior probability distributions. We provide an outline of the development of the algorithm and present the results of tests designed to characterize its capabilities and limitations. We then apply the new algorithm to three M31 satellites: Andromeda I, Andromeda II, and the fainter Andromeda XXIII, using data from the Pan-Andromeda Archaeological Survey (PAndAS), and derive their distances as 731 (+5/-4) +18/-17 kpc, 634 (+2/-2) +15/-14 kpc, and 733 (+13/-11) +23/-22 kpc, respectively, where the errors appearing in parentheses are the components intrinsic to the method, while the larger values give the errors after accounting for additional sources of error. These results agree well with the best distance determinations in the literature and provide the smallest uncertainties to date. This paper is an introduction to the workings and capabilities of our new approach in its basic form, while a follow-up paper shall make full use of the method's ability to incorporate priors and use the resulting algorithm to systematically obtain distances to all of M31's satellites identifiable in the PAndAS survey area.
Visualizing phylogenetic tree landscapes.
Wilgenbusch, James C; Huang, Wen; Gallivan, Kyle A
2017-02-02
Genomic-scale sequence alignments are increasingly used to infer phylogenies in order to better understand the processes and patterns of evolution. Different partitions within these new alignments (e.g., genes, codon positions, and structural features) often favor hundreds if not thousands of competing phylogenies. Summarizing and comparing phylogenies obtained from multi-source data sets using current consensus tree methods discards valuable information and can disguise potential methodological problems. Discovery of efficient and accurate dimensionality reduction methods that display at once, in 2 or 3 dimensions, the relationship among these competing phylogenies will help practitioners diagnose the limits of current evolutionary models and potential problems with phylogenetic reconstruction methods when analyzing large multi-source data sets. We introduce several dimensionality reduction methods to visualize in 2 and 3 dimensions the relationship among competing phylogenies obtained from gene partitions found in three mid- to large-size mitochondrial genome alignments. We test the performance of these dimensionality reduction methods by applying several goodness-of-fit measures. The intrinsic dimensionality of each data set is also estimated to determine whether projections in 2 and 3 dimensions can be expected to reveal meaningful relationships among trees from different data partitions. Several new approaches to aid in the comparison of different phylogenetic landscapes are presented. Curvilinear Components Analysis (CCA) and a stochastic gradient descent (SGD) optimization method give the best representation of the original tree-to-tree distance matrix for each of the three mitochondrial genome alignments and greatly outperformed the method currently used to visualize tree landscapes. The CCA + SGD method converged at least as fast as previously applied methods for visualizing tree landscapes. We demonstrate for all three mtDNA alignments that 3D projections significantly increase the fit between the tree-to-tree distances and can facilitate the interpretation of the relationship among phylogenetic trees. We demonstrate that the choice of dimensionality reduction method can significantly influence the spatial relationship among a large set of competing phylogenetic trees. We highlight the importance of selecting a dimensionality reduction method to visualize large multi-locus phylogenetic landscapes and demonstrate that 3D projections of mitochondrial tree landscapes better capture the relationship among the trees being compared.
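As a simple stand-in for the projections discussed above, the sketch below embeds a precomputed tree-to-tree distance matrix in 3 dimensions with metric MDS. CCA + SGD, the best-performing method in this study, is not available in scikit-learn, and the distance matrix here is a random placeholder rather than real Robinson-Foulds distances.

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
A = rng.random((40, 40))
D = (A + A.T) / 2.0
np.fill_diagonal(D, 0.0)  # a valid dissimilarity matrix: symmetric, zero diagonal

# 3D embedding of the tree landscape; goodness of fit can be judged via stress.
coords = MDS(n_components=3, dissimilarity="precomputed",
             random_state=0).fit_transform(D)
```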
Williams, C.J.; Heglund, P.J.
2009-01-01
Habitat association models are commonly developed for individual animal species using generalized linear modeling methods such as logistic regression. We considered the issue of grouping species based on their habitat use so that management decisions can be based on sets of species rather than individual species. This research was motivated by a study of western landbirds in northern Idaho forests. The method we examined was to separately fit models to each species and to use a generalized Mahalanobis distance between coefficient vectors to create a distance matrix among species. Clustering methods were used to group species from the distance matrix, and multidimensional scaling methods were used to visualize the relations among species groups. Methods were also discussed for evaluating the sensitivity of the conclusions to outliers or influential data points. We illustrate these methods with data from the landbird study conducted in northern Idaho. Simulation results are presented to compare the success of this method to alternative methods using Euclidean distance between coefficient vectors and to methods that do not use habitat association models. These simulations demonstrate that our Mahalanobis-distance-based method was nearly always better than Euclidean-distance-based methods or methods not based on habitat association models. The methods used to develop candidate species groups are easily explained to other scientists and resource managers since they mainly rely on classical multivariate statistical methods. © 2008 Springer Science+Business Media, LLC.
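The core distance computation is standard: a Mahalanobis-type distance between two species' fitted coefficient vectors under a coefficient covariance matrix. A sketch with invented numbers follows; the paper's generalized version and its choice of covariance may differ.

```python
import numpy as np
from scipy.spatial.distance import mahalanobis

# Fitted habitat-model coefficients for two species (placeholder values)
beta_a = np.array([0.8, -1.2, 0.3])
beta_b = np.array([0.5, -0.7, 0.6])
cov = np.diag([0.04, 0.09, 0.02])   # pooled coefficient covariance (assumed)

# scipy's mahalanobis takes the *inverse* covariance matrix.
d = mahalanobis(beta_a, beta_b, np.linalg.inv(cov))
```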
Roberson, A.M.; Andersen, D.E.; Kennedy, P.L.
2005-01-01
Broadcast surveys using conspecific calls are currently the most effective method for detecting northern goshawks (Accipiter gentilis) during the breeding season. These surveys typically use alarm calls during the nestling phase and juvenile food-begging calls during the fledgling-dependency phase. Because goshawks are most vocal during the courtship phase, we hypothesized that this phase would be an effective time to detect goshawks. Our objective was to improve current survey methodology by evaluating the probability of detecting goshawks at active nests in northern Minnesota in 3 breeding phases and at 4 broadcast distances, and to determine the effective area surveyed per broadcast station. Unlike previous studies, we broadcast calls at only 1 distance per trial. This approach better quantifies (1) the relationship between distance and probability of detection, and (2) the effective area surveyed (EAS) per broadcast station. We conducted 99 broadcast trials at 14 active breeding areas. When pooled over all distances, detection rates were highest during the courtship (70%) and fledgling-dependency phases (68%). Detection rates were lowest during the nestling phase (28%), when there appeared to be higher variation in the likelihood of detecting individuals. EAS per broadcast station was 39.8 ha during courtship and 24.8 ha during fledgling-dependency. Consequently, in northern Minnesota, broadcast stations may be spaced 712 m and 562 m apart when conducting systematic surveys during courtship and fledgling-dependency, respectively; the arithmetic behind this spacing is sketched below. We could not calculate EAS for the nestling phase because the probability of detection was not a simple function of distance from the nest. Calculation of EAS could be applied to other areas where the probability of detection is a known function of distance.
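The spacing figures follow from treating each station's EAS as a circle: the effective radius is r = sqrt(EAS/pi), and adjacent stations sit two radii apart so the coverage circles abut. A quick check of the numbers reported above:

```python
import math

# Effective radius and station spacing implied by the EAS values above.
for phase, eas_ha in [("courtship", 39.8), ("fledgling-dependency", 24.8)]:
    r = math.sqrt(eas_ha * 1e4 / math.pi)   # ha -> m^2, then r = sqrt(A/pi)
    print(phase, f"radius = {r:.0f} m, spacing = {2 * r:.0f} m")
# courtship: ~356 m radius -> ~712 m spacing
# fledgling-dependency: ~281 m radius -> ~562 m spacing
```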
On Computing Breakpoint Distances for Genomes with Duplicate Genes.
Shao, Mingfu; Moret, Bernard M E
2017-06-01
A fundamental problem in comparative genomics is to compute the distance between two genomes in terms of its higher level organization (given by genes or syntenic blocks). For two genomes without duplicate genes, we can easily define (and almost always efficiently compute) a variety of distance measures, but the problem is NP-hard under most models when genomes contain duplicate genes. To tackle duplicate genes, three formulations (exemplar, maximum matching, and any matching) have been proposed, all of which aim to build a matching between homologous genes so as to minimize some distance measure. Of the many distance measures, the breakpoint distance (the number of nonconserved adjacencies) was the first one to be studied and remains of significant interest because of its simplicity and model-free property. The three breakpoint distance problems corresponding to the three formulations have been widely studied. Although we provided last year a solution for the exemplar problem that runs very fast on full genomes, computing optimal solutions for the other two problems has remained challenging. In this article, we describe very fast, exact algorithms for these two problems. Our algorithms rely on a compact integer-linear program that we further simplify by developing an algorithm to remove variables, based on new results on the structure of adjacencies and matchings. Through extensive experiments using both simulations and biological data sets, we show that our algorithms run very fast (in seconds) on mammalian genomes and scale well beyond. We also apply these algorithms (as well as the classic orthology tool MSOAR) to create orthology assignment, then compare their quality in terms of both accuracy and coverage. We find that our algorithm for the "any matching" formulation significantly outperforms other methods in terms of accuracy while achieving nearly maximum coverage.
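For duplication-free genomes the breakpoint distance is a few lines of code: count the adjacencies of one genome that are not conserved, in either orientation, in the other. A sketch for unsigned linear genomes follows; handling duplicate genes via the matching formulations is the hard part addressed by the article's ILP-based algorithms.

```python
def breakpoint_distance(g1, g2):
    """Number of adjacencies in g1 not conserved in g2 (unsigned, linear,
    no duplicate genes)."""
    adj2 = set()
    for x, y in zip(g2, g2[1:]):
        adj2.add((x, y))
        adj2.add((y, x))  # orientation-free adjacency
    return sum((x, y) not in adj2 for x, y in zip(g1, g1[1:]))

# (1,2) and (3,4) are broken; (2,3) and (4,5) are conserved.
print(breakpoint_distance([1, 2, 3, 4, 5], [1, 3, 2, 4, 5]))  # 2
```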
2013-01-01
Background The physical environment may play a crucial role in promoting older adults’ walking for transportation. However, previous studies on relationships between the physical environment and older adults’ physical activity behaviors have reported inconsistent findings. A possible explanation for these inconsistencies is the focus upon studying environmental factors separately rather than simultaneously. The current study aimed to investigate the cumulative influence of perceived favorable environmental factors on older adults’ walking for transportation. Additionally, the moderating effect of perceived distance to destinations on this relationship was studied. Methods The sample was comprised of 50,685 non-institutionalized older adults residing in Flanders (Belgium). Cross-sectional data on demographics, environmental perceptions and frequency of walking for transportation were collected by self-administered questionnaires in the period 2004-2010. Perceived distance to destinations was categorized into short, medium, and large distance to destinations. An environmental index (=a sum of favorable environmental factors, ranging from 0 to 7) was constructed to investigate the cumulative influence of favorable environmental factors. Multilevel logistic regression analyses were applied to predict probabilities of daily walking for transportation. Results For short distance to destinations, probability of daily walking for transportation was significantly higher when seven compared to three, four or five favorable environmental factors were present. For medium distance to destinations, probabilities significantly increased for an increase from zero to four favorable environmental factors. For large distance to destinations, no relationship between the environmental index and walking for transportation was observed. Conclusions Our findings suggest that the presence of multiple favorable environmental factors can motivate older adults to walk medium distances to facilities. Future research should focus upon the relationship between older adults’ physical activity and multiple environmental factors simultaneously instead of separately. PMID:23945285
Cao, Ying J; Caffo, Brian S; Fuchs, Edward J; Lee, Linda A; Du, Yong; Li, Liye; Bakshi, Rahul P; Macura, Katarzyna; Khan, Wasif A; Wahl, Richard L; Grohskopf, Lisa A; Hendrix, Craig W
2012-12-01
We sought to describe quantitatively the distribution of rectally administered gels and seminal fluid surrogates using novel concentration-distance parameters that could be repeated over time. These methods are needed to develop rationally rectal microbicides to target and prevent HIV infection. Eight subjects were dosed rectally with radiolabelled and gadolinium-labelled gels to simulate microbicide gel and seminal fluid. Rectal doses were given with and without simulated receptive anal intercourse. Twenty-four hour distribution was assessed with indirect single photon emission computed tomography (SPECT)/computed tomography (CT) and magnetic resonance imaging (MRI), and direct assessment via sigmoidoscopic brushes. Concentration-distance curves were generated using an algorithm for fitting SPECT data in three dimensions. Three novel concentration-distance parameters were defined to describe quantitatively the distribution of radiolabels: maximal distance (D(max) ), distance at maximal concentration (D(Cmax) ) and mean residence distance (D(ave) ). The SPECT/CT distribution of microbicide and semen surrogates was similar. Between 1 h and 24 h post dose, the surrogates migrated retrograde in all three parameters (relative to coccygeal level; geometric mean [95% confidence interval]): maximal distance (D(max) ), 10 cm (8.6-12) to 18 cm (13-26), distance at maximal concentration (D(Cmax) ), 3.8 cm (2.7-5.3) to 4.2 cm (2.8-6.3) and mean residence distance (D(ave) ), 4.3 cm (3.5-5.1) to 7.6 cm (5.3-11). Sigmoidoscopy and MRI correlated only roughly with SPECT/CT. Rectal microbicide surrogates migrated retrograde during the 24 h following dosing. Spatial kinetic parameters estimated using three dimensional curve fitting of distribution data should prove useful for evaluating rectal formulations of drugs for HIV prevention and other indications. © 2012 The Authors. British Journal of Clinical Pharmacology © 2012 The British Pharmacological Society.
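Given a measured concentration-distance profile, the three parameters have natural discrete analogues: the farthest distance with signal (Dmax), the distance of peak concentration (DCmax), and a concentration-weighted mean distance (Dave). The sketch below interprets the definitions that way; the authors' exact estimators, fitted to three-dimensional SPECT data, may differ.

```python
import numpy as np

def distribution_parameters(d, c, threshold=0.0):
    """Dmax, DCmax and Dave from a concentration-distance profile.
    d: distances (cm); c: concentrations measured at those distances."""
    present = c > threshold
    d_max = d[present].max()           # farthest distance with signal
    d_cmax = d[np.argmax(c)]           # distance at maximal concentration
    d_ave = np.sum(d * c) / np.sum(c)  # concentration-weighted mean distance
    return d_max, d_cmax, d_ave

# Toy profile peaking at 4 cm with a long retrograde tail.
d = np.arange(0.0, 20.0, 1.0)
c = np.exp(-((d - 4.0) / 3.0) ** 2)
print(distribution_parameters(d, c, threshold=1e-6))
```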
2013-01-01
Background In many countries, financial assistance is awarded to physicians who settle in an area that is designated as a shortage area to prevent unequal accessibility to primary health care. Today, however, policy makers use fairly simple methods to define health care accessibility, with physician-to-population ratios (PPRs) within predefined administrative boundaries being overwhelmingly favoured. Our purpose is to verify whether these simple methods are accurate enough for adequately designating medical shortage areas and explore how these perform relative to more advanced GIS-based methods. Methods Using a geographical information system (GIS), we conduct a nation-wide study of accessibility to primary care physicians in Belgium using four different methods: PPR, distance to closest physician, cumulative opportunity, and floating catchment area (FCA) methods. Results The official method used by policy makers in Belgium (calculating PPR per physician zone) offers only a crude representation of health care accessibility, especially because large contiguous areas (physician zones) are considered. We found substantial differences in the number and spatial distribution of medical shortage areas when applying different methods. Conclusions The assessment of spatial health care accessibility and concomitant policy initiatives are affected by and dependent on the methodology used. The major disadvantage of PPR methods is its aggregated approach, masking subtle local variations. Some simple GIS methods overcome this issue, but have limitations in terms of conceptualisation of physician interaction and distance decay. Conceptually, the enhanced 2-step floating catchment area (E2SFCA) method, an advanced FCA method, was found to be most appropriate for supporting areal health care policies, since this method is able to calculate accessibility at a small scale (e.g. census tracts), takes interaction between physicians into account, and considers distance decay. While at present in health care research methodological differences and modifiable areal unit problems have remained largely overlooked, this manuscript shows that these aspects have a significant influence on the insights obtained. Hence, it is important for policy makers to ascertain to what extent their policy evaluations hold under different scales of analysis and when different methods are used. PMID:23964751
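For orientation, the plain (non-enhanced) two-step floating catchment area method can be sketched in a few lines: step 1 gives each physician a supply-to-demand ratio over the population within the catchment, and step 2 sums those ratios over the physicians reachable from each location. The E2SFCA discussed above additionally applies distance-decay weights within travel-time zones, which this sketch omits.

```python
import numpy as np

def two_step_fca(dist, supply, demand, d0):
    """Simple 2SFCA accessibility score per demand location.
    dist[i, j]: travel distance from demand location i to physician j;
    d0: catchment threshold."""
    within = dist <= d0
    # Step 1: each physician's ratio of supply to the demand reaching them.
    ratio = supply / np.maximum((within * demand[:, None]).sum(axis=0), 1e-9)
    # Step 2: sum the ratios of all physicians within reach of each location.
    return (within * ratio[None, :]).sum(axis=1)

# Toy example: 3 demand tracts, 2 physicians, catchment of 3 distance units.
dist = np.array([[1.0, 5.0], [2.0, 2.0], [6.0, 1.0]])
acc = two_step_fca(dist, supply=np.array([1.0, 1.0]),
                   demand=np.array([100.0, 200.0, 150.0]), d0=3.0)
```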
NASA Astrophysics Data System (ADS)
Yuan, Yangsheng; Chen, Yahong; Liang, Chunhao; Cai, Yangjian; Baykal, Yahya
2013-03-01
With the help of a tensor method, we derive an explicit expression for the on-axis scintillation index of a circular partially coherent dark hollow (DH) beam in weakly turbulent atmosphere. The derived formula can be applied to study the scintillation properties of a partially coherent Gaussian beam and a partially coherent flat-topped (FT) beam. The effect of spatial coherence on the scintillation properties of DH beam, FT beam and Gaussian beam is studied numerically and comparatively. Our results show that the advantage of a DH beam over a FT beam and a Gaussian beam for reducing turbulence-induced scintillation increases particularly at long propagation distances with the decrease of spatial coherence or the increase of the atmospheric turbulence, which will be useful for long-distance free-space optical communications.
NASA Astrophysics Data System (ADS)
Chen, Jingyun; Palmer, Samantha J.; Khan, Ali R.; Mckeown, Martin J.; Beg, Mirza Faial
2009-02-01
We apply a recently developed automated brain segmentation method, FS+LDDMM, to brain MRI scans from Parkinson's Disease (PD) subjects, and normal age-matched controls and compare the results to manual segmentation done by trained neuroscientists. The data set consisted of 14 PD subjects and 12 age-matched control subjects without neurologic disease and comparison was done on six subcortical brain structures (left and right caudate, putamen and thalamus). Comparison between automatic and manual segmentation was based on Dice Similarity Coefficient (Overlap Percentage), L1 Error, Symmetrized Hausdorff Distance and Symmetrized Mean Surface Distance. Results suggest that FS+LDDMM is well-suited for subcortical structure segmentation and further shape analysis in Parkinson's Disease. The asymmetry of the Dice Similarity Coefficient over shape change is also discussed based on the observation and measurement of FS+LDDMM segmentation results.
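Of the four comparison metrics, the Dice Similarity Coefficient is the simplest: twice the overlap divided by the total size of the two masks. A minimal sketch with toy masks:

```python
import numpy as np

def dice(a, b):
    """Dice Similarity Coefficient between two binary segmentation masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Two overlapping toy masks; identical masks give 1.0, disjoint masks 0.0.
m1 = np.zeros((10, 10), dtype=bool); m1[2:7, 2:7] = True
m2 = np.zeros((10, 10), dtype=bool); m2[4:9, 4:9] = True
print(dice(m1, m2))  # 0.36
```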
Determination of molecular configuration by debye length modulation.
Vacic, Aleksandar; Criscione, Jason M; Rajan, Nitin K; Stern, Eric; Fahmy, Tarek M; Reed, Mark A
2011-09-07
Silicon nanowire field effect transistors (FETs) have emerged as ultrasensitive, label-free biodetectors that operate by sensing bound surface charge. However, the ionic strength of the environment (i.e., the Debye length of the solution) dictates the effective magnitude of the surface charge. Here, we show that control of the Debye length determines the spatial extent of sensed bound surface charge on the sensor. We apply this technique to different methods of antibody immobilization, demonstrating different effective distances of induced charge from the sensor surface.
Velocity precision measurements using laser Doppler anemometry
NASA Astrophysics Data System (ADS)
Dopheide, D.; Taux, G.; Narjes, L.
1985-07-01
A Laser Doppler Anemometer (LDA) was calibrated to determine its applicability to high pressure measurements (up to 10 bars) for industrial purposes. The measurement procedure with LDA and the experimental computerized layouts are presented. The calibration procedure is based on absolute accuracy of Doppler frequency and calibration of interference strip intervals. A four-quadrant detector allows comparison of the interference strip distance measurements and computer profiles. Further development of LDA is recommended to increase accuracy (0.1% inaccuracy) and to apply the method industrially.
Distance learning in academic health education.
Mattheos, N; Schittek, M; Attström, R; Lyon, H C
2001-05-01
Distance learning is an apparent alternative to traditional methods in education of health care professionals. Non-interactive distance learning, interactive courses and virtual learning environments exist as three different generations in distance learning, each with unique methodologies, strengths and potential. Different methodologies have been recommended for distance learning, varying from a didactic approach to a problem-based learning procedure. Accreditation, teamwork and personal contact between the tutors and the students during a course provided by distance learning are recommended as motivating factors in order to enhance the effectiveness of the learning. Numerous assessment methods for distance learning courses have been proposed. However, few studies report adequate tests for the effectiveness of the distance-learning environment. Available information indicates that distance learning may significantly decrease the cost of academic health education at all levels. Furthermore, such courses can provide education to students and professionals not accessible by traditional methods. Distance learning applications still lack the support of a solid theoretical framework and are only evaluated to a limited extent. Cases reported so far tend to present enthusiastic results, while more carefully-controlled studies suggest a cautious attitude towards distance learning. There is a vital need for research evidence to identify the factors of importance and variables involved in distance learning. The effectiveness of distance learning courses, especially in relation to traditional teaching methods, must therefore be further investigated.
Sharp, L; Black, R J; Harkness, E F; McKinney, P A
1996-01-01
OBJECTIVES: The primary aims were to investigate the incidence of leukaemia and non-Hodgkin's lymphoma in children resident near seven nuclear sites in Scotland and to determine whether there was any evidence of a gradient in risk with distance of residence from a nuclear site. A secondary aim was to assess the power of statistical tests for increased risk of disease near a point source when applied in the context of census data for Scotland. METHODS: The study data set comprised 1287 cases of leukaemia and non-Hodgkin's lymphoma diagnosed in children aged under 15 years in the period 1968-93, validated for accuracy and completeness. A study zone around each nuclear site was constructed from enumeration districts within 25 km. Expected numbers were calculated, adjusting for sex, age, and indices of deprivation and urban-rural residence. Six statistical tests were evaluated. Stone's maximum likelihood ratio (unconditional application) was applied as the main test for general increased incidence across a study zone. The linear risk score based on enumeration districts (conditional application) was used as a secondary test for declining risk with distance from each site. RESULTS: More cases were observed (O) than expected (E) in the study zones around Rosyth naval base (O/E 1.02), Chapelcross electricity generating station (O/E 1.08), and Dounreay reprocessing plant (O/E 1.99). The maximum likelihood ratio test reached significance only for Dounreay (P = 0.030). The linear risk score test did not indicate a trend in risk with distance from any of the seven sites, including Dounreay. CONCLUSIONS: There was no evidence of a generally increased risk of childhood leukaemia and non-Hodgkin's lymphoma around nuclear sites in Scotland, nor any evidence of a trend of decreasing risk with distance from any of the sites. There was a significant excess risk in the zone around Dounreay, which was only partially accounted for by the sociodemographic characteristics of the area. The statistical power of tests for localised increased risk of disease around a point source should be assessed in each new setting in which they are applied. PMID:8994402
Using Distance Physical Education in Elite Class Soccer Referee Training: A Case Study
ERIC Educational Resources Information Center
Kizilet, Ali
2011-01-01
The objective of this study is to present a model in the framework of Distance Education (DE), which suggests that a Distance Physical Education Program (DPEP) could be applied to those who are at various ages, in various geographical locations, and are working in various professions as part-time or full-time professionals. The use of DE in…
He, Kaifei; Xu, Tianhe; Förste, Christoph; Petrovic, Svetozar; Barthelmes, Franz; Jiang, Nan; Flechtner, Frank
2016-01-01
When applying the Global Navigation Satellite System (GNSS) for precise kinematic positioning in airborne and shipborne gravimetry, multiple GNSS receiving equipment is often fixed mounted on the kinematic platform carrying the gravimetry instrumentation. Thus, the distances among these GNSS antennas are known and invariant. This information can be used to improve the accuracy and reliability of the state estimates. For this purpose, the known distances between the antennas are applied as a priori constraints within the state parameters adjustment. These constraints are introduced in such a way that their accuracy is taken into account. To test this approach, GNSS data of a Baltic Sea shipborne gravimetric campaign have been used. The results of our study show that an application of distance constraints improves the accuracy of the GNSS kinematic positioning, for example, by about 4 mm for the radial component. PMID:27043580
Assessment of rockfall risk along roads
NASA Astrophysics Data System (ADS)
Budetta, P.
2004-03-01
This paper presents a method for the analysis of rockfall risk along roads and motorways. The method is derived from the Rockfall Hazard Rating System (RHRS) developed by Pierson et al. (1990) at the Oregon State Highway Division. The RHRS provides a rational way to make informed decisions on where and how to spend construction funds. Exponential scoring functions are used to represent the increases, respectively, in hazard and in vulnerability that are reflected in the nine categories forming the classification. The resulting total score contains the essential elements for evaluating the degree of exposure to risk along roads. In the modified method, the ratings for the categories "ditch effectiveness", "geologic characteristic", "volume of rockfall/block size", "climate and water circulation" and "rockfall history" have been made easier and more objective. The main modifications concern the introduction of the Slope Mass Rating by Romana (1985, 1988, 1991), improving the estimate of the geologic characteristics, of the volume of the potentially unstable blocks, and of the underground water circulation. Other modifications concern the scoring for the categories "decision sight distance" and "road geometry". For these categories, the Italian National Research Council's standards (Consiglio Nazionale delle Ricerche - CNR) have been used (CNR, 1980). The method must be applied in both traffic directions because the percentage of reduction in the decision sight distance greatly affects the results. An application of the modified method to a 2 km long section of the Sorrentine road (no. 145) in Southern Italy was developed. High traffic intensity affects the entire section of the road, and rockfalls periodically cause casualties as well as a large amount of damage and traffic interruptions. The method was applied to seven cross sections of slopes adjacent to the Sorrentine road. For these slopes, the analysis shows that the risk is unacceptable and should be reduced through urgent remedial works.
Estimating the spatial position of marine mammals based on digital camera recordings
Hoekendijk, Jeroen P A; de Vries, Jurre; van der Bolt, Krissy; Greinert, Jens; Brasseur, Sophie; Camphuysen, Kees C J; Aarts, Geert
2015-01-01
Estimating the spatial position of organisms is essential to quantify interactions between the organism and the characteristics of its surroundings, for example, predator–prey interactions, habitat selection, and social associations. Because marine mammals spend most of their time under water and may appear at the surface only briefly, determining their exact geographic location can be challenging. Here, we developed a photogrammetric method to accurately estimate the spatial position of marine mammals or birds at the sea surface. Digital recordings containing landscape features with known geographic coordinates can be used to estimate the distance and bearing of each sighting relative to the observation point. The method can correct for frame rotation, estimates pixel size based on the reference points, and can be applied to scenarios with and without a visible horizon. A set of R functions was written to process the images and obtain accurate geographic coordinates for each sighting. The method is applied to estimate the spatiotemporal fine-scale distribution of harbour porpoises in a tidal inlet. Video recordings of harbour porpoises were made from land, using a standard digital single-lens reflex (DSLR) camera, positioned at a height of 9.59 m above mean sea level. Porpoises were detected up to a distance of ∼3136 m (mean 596 m), with a mean location error of 12 m. The method presented here allows for multiple detections of different individuals within a single video frame and for tracking movements of individuals based on repeated sightings. In comparison with traditional methods, this method only requires a digital camera to provide accurate location estimates. It especially has great potential in regions with ample data on local (a)biotic conditions, to help resolve functional mechanisms underlying habitat selection and other behaviors in marine mammals in coastal areas. PMID:25691982
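The core geometry of such land-based photogrammetry can be sketched simply: for a camera at height h above the sea surface, an object imaged some angle below the horizon lies at a distance of roughly h divided by the tangent of that dip angle. The minimal flat-sea sketch below ignores Earth curvature, refraction, and frame rotation, which the published method handles via its reference points; the pixel scale is an assumed value.

```python
import math

def distance_to_sighting(h_camera, pixels_below_horizon, rad_per_pixel):
    """Flat-sea approximation: distance from observer to an object at
    the sea surface, given camera height and the angular dip of the
    sighting below the horizon."""
    dip = pixels_below_horizon * rad_per_pixel  # radians below horizontal
    return h_camera / math.tan(dip)

# camera 9.59 m above mean sea level, sighting 30 px below the horizon,
# assumed angular pixel size of 1e-4 rad/px
print(distance_to_sighting(9.59, 30, 1e-4))  # roughly 3200 m
```

Note how strongly the estimate depends on the dip angle at long range: near the horizon a one-pixel error moves the estimated position by tens of metres, which is consistent with location errors growing with distance.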
An integrated model for detecting significant chromatin interactions from high-resolution Hi-C data
Carty, Mark; Zamparo, Lee; Sahin, Merve; González, Alvaro; Pelossof, Raphael; Elemento, Olivier; Leslie, Christina S.
2017-01-01
Here we present HiC-DC, a principled method to estimate the statistical significance (P values) of chromatin interactions from Hi-C experiments. HiC-DC uses hurdle negative binomial regression to account for systematic sources of variation in Hi-C read counts (for example, distance-dependent random polymer ligation and GC content and mappability bias) and to model zero inflation and overdispersion. Applied to high-resolution Hi-C data in a lymphoblastoid cell line, HiC-DC detects significant interactions at the sub-topologically associating domain level, identifying potential structural and regulatory interactions supported by CTCF binding sites, DNase accessibility, and/or active histone marks. CTCF-associated interactions are most strongly enriched in the middle genomic distance range (∼700 kb–1.5 Mb), while interactions involving actively marked DNase accessible elements are enriched both at short (<500 kb) and longer (>1.5 Mb) genomic distances. There is a striking enrichment of longer-range interactions connecting replication-dependent histone genes on chromosome 6, potentially representing the chromatin architecture at the histone locus body. PMID:28513628
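To make the hurdle negative binomial idea concrete, the sketch below evaluates the hurdle log-probability for observed counts: a Bernoulli hurdle governs zeros, and positive counts follow a zero-truncated negative binomial whose mean is linked to covariates. The coefficients and covariate values (log distance, GC, mappability) are invented for illustration; this shows the model family, not the HiC-DC code.

```python
import numpy as np
from scipy.stats import nbinom

def hurdle_nb_logpmf(y, pi0, mu, alpha):
    """Hurdle NB log-probability: P(y=0) = pi0; positive counts follow
    a zero-truncated NB with mean mu and dispersion alpha."""
    n = 1.0 / alpha            # convert to scipy's (n, p) parameterization
    p = n / (n + mu)
    y = np.asarray(y)
    # zero-truncated NB: renormalize by the probability of y > 0
    log_trunc = nbinom.logpmf(y, n, p) - np.log1p(-nbinom.pmf(0, n, p))
    return np.where(y == 0, np.log(pi0), np.log(1 - pi0) + log_trunc)

# hypothetical regression link for the NB mean of one bin pair
beta = np.array([8.0, -0.5, 0.5, 0.8])       # illustrative coefficients
x = np.array([1.0, np.log(5e5), 0.45, 0.9])  # 1, log-distance, GC, mappability
mu = np.exp(beta @ x)
print(hurdle_nb_logpmf([0, 3, 10], pi0=0.3, mu=mu, alpha=0.2))
```

In the real method both the hurdle probability and the NB mean are regressed on such covariates, and a P value for an observed count follows from the fitted upper-tail probability.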
Acoustic detection, tracking, and characterization of three tornadoes.
Frazier, William Garth; Talmadge, Carrick; Park, Joseph; Waxler, Roger; Assink, Jelle
2014-04-01
Acoustic data recorded at 1000 samples per second by two sensor arrays located at ranges of 1-113 km from three tornadoes that occurred on 24 May 2011 in Oklahoma are analyzed. Accurate bearings to the tornadoes have been obtained using beamforming methods applied to the data at infrasonic frequencies. Beamforming was not viable at audio frequencies, but the data demonstrate the ability to detect significant changes in the shape of the estimated power spectral density in the band from 10 Hz to approximately 100 Hz at distances of practical value from the sensors. This suggests that arrays of more closely spaced sensors might provide better bearing accuracy at practically useful distances from a tornado. Additionally, a mathematical model, based on established relationships of aeroacoustic turbulence, is demonstrated to agree well with the estimated power spectra produced by the tornadoes at different times and distances from the sensors. The results of this analysis indicate that, qualitatively, an inverse relationship appears to exist between the frequency of an observed peak of the power spectral density and the reported tornado intensity.
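A frequency-domain delay-and-sum beamformer of the kind commonly used for infrasonic bearing estimation can be sketched as below; the array geometry, sampling rate, sound speed, and plane-wave sign convention are assumptions of this illustration rather than details taken from the study.

```python
import numpy as np

def bearing_delay_and_sum(signals, sensor_xy, fs, c=343.0, n_az=360):
    """Scan candidate azimuths, phase-align the array channels for a
    plane wave arriving from each azimuth, and return the azimuth that
    maximizes the summed beam power.

    signals:   (n_sensors, n_samples) array of time series
    sensor_xy: (n_sensors, 2) sensor coordinates in metres
    """
    spectra = np.fft.rfft(signals, axis=1)
    freqs = np.fft.rfftfreq(signals.shape[1], d=1.0 / fs)
    az = np.deg2rad(np.arange(n_az))
    power = np.empty(n_az)
    for k in range(n_az):
        u = np.array([np.cos(az[k]), np.sin(az[k])])  # toward the source
        tau = sensor_xy @ u / c   # arrival-time advance at each sensor
        phase = np.exp(-2j * np.pi * np.outer(tau, freqs))
        power[k] = np.sum(np.abs((spectra * phase).sum(axis=0)) ** 2)
    return np.degrees(az[np.argmax(power)])
```

The bearing resolution of such a beamformer scales with the array aperture measured in wavelengths, which is why closely spaced sensors would need higher (audio) frequencies to sharpen the bearing estimate.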
A Comparison of Weights Matrices on Computation of Dengue Spatial Autocorrelation
NASA Astrophysics Data System (ADS)
Suryowati, K.; Bekti, R. D.; Faradila, A.
2018-04-01
Spatial autocorrelation is a spatial analysis method for identifying patterns of relationship or correlation between locations. It is very important for obtaining information on the characteristic dispersal patterns of a region and the linkages between locations. In this study, it is applied to the incidence of Dengue Hemorrhagic Fever (DHF) in 17 sub-districts in Sleman, Daerah Istimewa Yogyakarta Province. The link among locations is expressed by a spatial weight matrix, which describes the neighbourhood structure and reflects the spatial influence. According to the type of spatial data, weighting matrices can be divided into two types: point-based (distance) and area-based (contiguity). The choice of weighting function is one determinant of the results of the spatial analysis. This study uses first-order queen contiguity weights, second-order queen contiguity weights, and inverse distance weights. First-order queen contiguity and inverse distance weights show significant spatial autocorrelation in DHF incidence, whereas second-order queen contiguity does not. The first- and second-order queen contiguity matrices yield neighbour lists of 68 and 86 links, respectively.
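For illustration, Moran's I for a given weight matrix can be computed directly from its definition; the sketch below builds an inverse-distance matrix and evaluates the statistic. The coordinates and incidence values are random stand-ins, since the contiguity matrices would come from the actual sub-district polygons.

```python
import numpy as np

def morans_i(x, W):
    """Moran's I for values x under an n-by-n spatial weight matrix W:
    I = (n / S0) * (z' W z) / (z' z), with z the centered values."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()
    return (len(x) / W.sum()) * (z @ W @ z) / (z @ z)

def inverse_distance_weights(coords, power=1.0):
    """Inverse-distance weight matrix with a zero diagonal."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    with np.errstate(divide="ignore"):
        W = 1.0 / d ** power
    np.fill_diagonal(W, 0.0)
    return W

coords = np.random.rand(17, 2)  # stand-in sub-district centroids
x = np.random.rand(17)          # stand-in DHF incidence values
print(morans_i(x, inverse_distance_weights(coords)))
```

A first-order queen contiguity matrix would instead set W[i, j] = 1 whenever polygons i and j share a border or a corner, and the second-order matrix links neighbours of neighbours, which is why its neighbour list is larger.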
Theoretical study of actinide monocarbides (ThC, UC, PuC, and AmC)
NASA Astrophysics Data System (ADS)
Pogány, Peter; Kovács, Attila; Visscher, Lucas; Konings, Rudy J. M.
2016-12-01
A study of four representative actinide monocarbides, ThC, UC, PuC, and AmC, has been performed with relativistic quantum chemical calculations. The two applied methods were multireference complete active space second-order perturbation theory (CASPT2) including the Douglas-Kroll-Hess Hamiltonian with all-electron basis sets and density functional theory with the B3LYP exchange-correlation functional in conjunction with relativistic pseudopotentials. Besides the ground electronic states, the excited states up to 17 000 cm-1 have been determined. The molecular properties explored included the ground-state geometries, bonding properties, and the electronic absorption spectra. According to the occupation of the bonding orbitals, the calculated electronic states were classified into three groups, each leading to a characteristic bond distance range for the equilibrium geometry. The ground states of ThC, UC, and PuC have two doubly occupied π orbitals resulting in short bond distances between 1.8 and 2.0 Å, whereas the ground state of AmC has significant occupation of the antibonding orbitals, causing a bond distance of 2.15 Å.
Sicoli, Giuseppe; Mathis, Gérald; Aci-Sèche, Samia; Saint-Pierre, Christine; Boulard, Yves; Gasparutto, Didier; Gambarelli, Serge
2009-06-01
Double electron-electron resonance (DEER) was applied to determine nanometre spin-spin distances in DNA duplexes that contain selected structural alterations. The present approach to evaluating the structural features of DNA damage thus relates to the interspin distance changes, as well as to the flexibility of the overall structure deduced from the distance distribution. A set of site-directed nitroxide-labelled double-stranded DNA fragments containing defined lesions, namely an 8-oxoguanine, an abasic site or abasic site analogues, a nick, a gap and a bulge structure, were prepared and then analysed by the DEER spectroscopic technique. New insights into the application of the 4-pulse DEER sequence are also provided, in particular with respect to the positions of the spin probes and the rigidity of the selected systems. The lesion-induced conformational changes observed, which were supported by molecular dynamics studies, confirm the results obtained by other, more conventional, spectroscopic techniques. Thus, the experimental approaches described herein provide an efficient method for probing lesion-induced structural changes of nucleic acids.
Oliveri, Paolo; López, M Isabel; Casolino, M Chiara; Ruisánchez, Itziar; Callao, M Pilar; Medini, Luca; Lanteri, Silvia
2014-12-03
A new class-modeling method, referred to as partial least squares density modeling (PLS-DM), is presented. The method is based on partial least squares (PLS), using a distance-based sample density measurement as the response variable. Potential function probability density is subsequently calculated on the PLS scores and used, jointly with residual Q statistics, to develop efficient class models. The influence of adjustable model parameters on the resulting performance has been critically studied by means of cross-validation and application of the Pareto optimality criterion. The method has been applied to verify the authenticity of olives in brine from the cultivar Taggiasca, based on near-infrared (NIR) spectra recorded on homogenized solid samples. Two independent test sets were used for model validation. The final optimal model was characterized by high efficiency and a well-balanced compromise between sensitivity and specificity, compared with the models obtained by application of well-established class-modeling methods such as soft independent modeling of class analogy (SIMCA) and unequal dispersed classes (UNEQ).
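A bare-bones sketch of the PLS-DM idea follows: a distance-based density measure (here a crude k-nearest-neighbour proxy; k and the component count are arbitrary choices of this sketch) serves as the PLS response, and the latent scores are returned for subsequent density modeling. The actual method additionally evaluates a potential-function probability density and Q residuals on the scores to set the class boundary.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.neighbors import NearestNeighbors

def pls_dm_scores(X_class, k=5, n_components=2):
    """Fit PLS with a distance-based sample density as the response and
    return the latent scores of the target-class training samples."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_class)
    dist, _ = nn.kneighbors(X_class)
    density = 1.0 / dist[:, 1:].mean(axis=1)  # kNN density proxy
    pls = PLSRegression(n_components=n_components).fit(X_class, density)
    return pls.transform(X_class), pls

# toy usage on random "spectra" of the target class
scores, model = pls_dm_scores(np.random.rand(40, 100))
print(scores.shape)  # (40, 2)
```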
Sequence comparison alignment-free approach based on suffix tree and L-words frequency.
Soares, Inês; Goios, Ana; Amorim, António
2012-01-01
The vast majority of methods available for sequence comparison rely on a first sequence alignment step, which requires a number of assumptions on evolutionary history and is sometimes very difficult or impossible to perform due to the abundance of gaps (insertions/deletions). In such cases, an alternative alignment-free method would prove valuable. Our method starts with the computation of a generalized suffix tree of all sequences, which is completed in linear time. Using this tree, the frequency of all possible words with a preset length L (L-words) in each sequence is rapidly calculated. Based on the L-word frequency profile of each sequence, a pairwise standard Euclidean distance is then computed, producing a symmetric genetic distance matrix, which can be used to generate a neighbor-joining dendrogram or a multidimensional scaling graph. We present an improvement to word-counting alignment-free approaches for sequence comparison, by determining a single optimal word length and combining suffix tree structures with the word-counting tasks. Our approach is thus a fast and simple application that proved to be efficient and powerful when applied to mitochondrial genomes. The algorithm was implemented in the Python language and is freely available on the web.
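A plain-Python sketch of the counting and distance steps follows. For clarity it uses a sliding window over each sequence rather than the generalized suffix tree the authors employ for linear-time counting, and the DNA alphabet and word length are illustrative choices.

```python
import numpy as np
from itertools import product

def lword_profile(seq, L, alphabet="ACGT"):
    """Relative frequency of every possible L-word in seq.
    Note the profile has len(alphabet)**L entries, so L must stay small."""
    words = ["".join(w) for w in product(alphabet, repeat=L)]
    index = {w: i for i, w in enumerate(words)}
    counts = np.zeros(len(words))
    for i in range(len(seq) - L + 1):
        w = seq[i:i + L]
        if w in index:          # skip windows with ambiguous symbols
            counts[index[w]] += 1
    total = counts.sum()
    return counts / total if total else counts

def distance_matrix(seqs, L):
    """Pairwise Euclidean distances between L-word frequency profiles."""
    profiles = np.array([lword_profile(s, L) for s in seqs])
    diff = profiles[:, None, :] - profiles[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

print(distance_matrix(["ACGTACGT", "ACGTTTTT", "GGGGACGT"], L=3))
```

The resulting symmetric matrix can be handed directly to a neighbor-joining or multidimensional scaling routine, exactly as the abstract describes.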
A Multi-Modal Face Recognition Method Using Complete Local Derivative Patterns and Depth Maps
Yin, Shouyi; Dai, Xu; Ouyang, Peng; Liu, Leibo; Wei, Shaojun
2014-01-01
In this paper, we propose a multi-modal 2D + 3D face recognition method for a smart city application based on a Wireless Sensor Network (WSN) and various kinds of sensors. Depth maps are exploited for the 3D face representation. For feature extraction, we propose a new feature called the Complete Local Derivative Pattern (CLDP). It adopts the idea of layering and has four layers. In the whole system, we apply CLDP separately to Gabor features extracted from a 2D image and to the depth map. We thus obtain two features: CLDP-Gabor and CLDP-Depth. The two features, weighted by their corresponding coefficients, are combined at the decision level to compute the total classification distance. Finally, the probe face is assigned the identity with the smallest classification distance. Extensive experiments are conducted on three different databases. The results demonstrate the robustness and superiority of the new approach. The experimental results also show that the proposed multi-modal 2D + 3D method is superior to other multi-modal methods and that CLDP performs better than other Local Binary Pattern (LBP) based features. PMID:25333290
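The decision-level fusion step reduces to a weighted sum of the per-gallery distances from the two modalities; the sketch below shows that step with hypothetical weight values.

```python
import numpy as np

def fused_identity(d_cldp_gabor, d_cldp_depth, w_gabor=0.6, w_depth=0.4):
    """Combine the per-gallery distance vectors of the two modalities
    with fixed weights and return the index of the closest identity."""
    total = (w_gabor * np.asarray(d_cldp_gabor)
             + w_depth * np.asarray(d_cldp_depth))
    return int(np.argmin(total)), total

# toy usage: distances of one probe to a gallery of four identities
idx, total = fused_identity([0.9, 0.4, 0.7, 0.8], [0.8, 0.5, 0.2, 0.9])
print(idx, total)
```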
Nonlinear dimensionality reduction methods for synthetic biology biobricks' visualization.
Yang, Jiaoyun; Wang, Haipeng; Ding, Huitong; An, Ning; Alterovitz, Gil
2017-01-19
Visualizing data by dimensionality reduction is an important strategy in bioinformatics, which can help to discover hidden data properties and detect data quality issues, e.g. data noise, inappropriately labeled data, etc. As crowdsourcing-based synthetic biology databases face similar data quality issues, we propose to visualize biobricks to tackle them. However, existing dimensionality reduction methods cannot be directly applied to biobrick datasets. Here, we use normalized edit distance to enhance dimensionality reduction methods, including Isomap and Laplacian Eigenmaps. Using biobricks extracted from the synthetic biology database Registry of Standard Biological Parts, six combinations of various types of biobricks are tested. The visualization graphs show well-discriminated biobricks as well as inappropriately labeled ones. The clustering algorithm K-means is adopted to quantify the reduction results. The average clustering accuracies for Isomap and Laplacian Eigenmaps are 0.857 and 0.844, respectively. Moreover, Laplacian Eigenmaps is 5 times faster than Isomap, and its visualization graph is more concentrated, making biobricks easier to discriminate. By combining normalized edit distance with Isomap and Laplacian Eigenmaps, synthetic biology biobricks are successfully visualized in two-dimensional space. Various types of biobricks can be discriminated and inappropriately labeled biobricks can be identified, which helps to assess the quality of crowdsourcing-based synthetic biology databases and to guide biobrick selection.
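The enhancement can be sketched as: compute a normalized edit distance matrix over the biobrick sequences, convert it to an affinity, and feed it to a precomputed-affinity embedding (scikit-learn's SpectralEmbedding implements Laplacian Eigenmaps). The Gaussian-kernel conversion and its gamma are assumptions of this sketch, not details from the paper.

```python
import numpy as np
from sklearn.manifold import SpectralEmbedding

def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(prev[j] + 1,              # deletion
                         cur[j - 1] + 1,           # insertion
                         prev[j - 1] + (a[i - 1] != b[j - 1]))  # substitution
        prev = cur
    return prev[n]

def normalized_edit_distance(a, b):
    return edit_distance(a, b) / max(len(a), len(b), 1)

def embed(seqs, gamma=1.0):
    n = len(seqs)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = normalized_edit_distance(seqs[i], seqs[j])
    A = np.exp(-gamma * D ** 2)  # Laplacian Eigenmaps needs an affinity
    return SpectralEmbedding(n_components=2,
                             affinity="precomputed").fit_transform(A)

print(embed(["ACGTACGT", "ACGTACGA", "TTTTGGGG", "TTTTGGGA"]))
```

K-means on the resulting 2-D coordinates then gives the clustering-accuracy style evaluation described in the abstract.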
NASA Astrophysics Data System (ADS)
Chaudhari, Rajan; Heim, Andrew J.; Li, Zhijun
2015-05-01
As evidenced by the three rounds of the G-protein coupled receptor (GPCR) Dock competitions, improving homology modeling methods for helical transmembrane proteins, including the GPCRs, based on templates of low sequence identity remains an eminent challenge. Current approaches addressing this challenge adopt the philosophy of "modeling first, refinement next". In the present work, we developed an alternative modeling approach through the novel application of available multiple templates. First, conserved inter-residue interactions are derived from each additional template through conservation analysis of each template-target pairwise alignment. Then, these interactions are converted into distance restraints and incorporated into the homology modeling process. This approach was applied to modeling of the human β2 adrenergic receptor using bovine rhodopsin and the human protease-activated receptor 1 as templates, and improved model quality was demonstrated compared to the homology models generated by standard single-template and multiple-template methods. This "refined restraints first, modeling next" method provides a fast approach complementary to current modeling practice. It allows rational identification and implementation of additional conserved distance restraints extracted from multiple templates and/or experimental data, and has the potential to be applicable to the modeling of all helical transmembrane proteins.
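The extraction step can be illustrated with a small sketch: given, for each template, a list of conserved contacts and the template-to-target residue mapping from the pairwise alignment, conserved pairs are converted into (i, j, distance, tolerance) restraints on the target. The data structures here are hypothetical, and feeding the restraints into a modeling engine is package-specific and omitted.

```python
def conserved_restraints(template_contacts, template_maps, tol=1.0):
    """Convert conserved inter-residue contacts from multiple templates
    into distance restraints on the target sequence.

    template_contacts: per template, a list of (i_t, j_t, d) tuples
    template_maps:     per template, dict {template residue -> target residue}
    """
    restraints = []
    for contacts, aln in zip(template_contacts, template_maps):
        for i_t, j_t, d in contacts:
            # keep only contacts whose residues align to the target
            if i_t in aln and j_t in aln:
                restraints.append((aln[i_t], aln[j_t], d, tol))
    return restraints

# toy example: one extra template contributing two conserved contacts
contacts = [[(10, 55, 8.2), (14, 60, 6.9)]]
maps = [{10: 12, 55: 58, 14: 16, 60: 63}]
print(conserved_restraints(contacts, maps))
# [(12, 58, 8.2, 1.0), (16, 63, 6.9, 1.0)]
```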